Irreducible factors of $X^p-1$ in $(\mathbb{Z}/q \mathbb{Z})[X]$ Is it possible to determine how many irreducible factors $X^p-1$ has in the polynomial ring $(\mathbb{Z}/q \mathbb{Z})[X]$, and maybe even the degrees of the irreducible factors? (Here $p,q$ are primes with $\gcd(p,q)=1$.)
It has one factor of degree $1$, namely $x-1$. All the remaining factors have the same degree, namely the order of $q$ in the multiplicative group $(\mathbb{Z}/p \mathbb{Z})^*$. To see this: that order is the length of every orbit of the action of the Frobenius $a\mapsto a^q$ on the set of roots of $(x^p-1)/(x-1)$ in the algebraic closure.
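If you want to see this concretely, here is a small check (assuming SymPy is available; the primes chosen are just examples):

```python
# Compare the factor degrees of X^p - 1 over GF(q) with the order of q mod p.
from sympy import symbols, factor_list, degree
from sympy.ntheory import n_order

X = symbols('X')
for p, q in [(7, 2), (11, 3), (13, 5)]:
    _, factors = factor_list(X**p - 1, modulus=q)
    print(p, q, sorted(degree(f, X) for f, _ in factors), n_order(q, p))
# e.g. p=7, q=2 gives degrees [1, 3, 3], and the order of 2 mod 7 is 3
```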
What does "increases in proportion to" mean? I came across a multiple choice problem where a function $f(x) = \frac{x^2 - 1}{x+1} - x$ is given. One has to choose the statement that is correct about the function. The different statements about the function included: * *(1) the function increases in proportion to $x^2$. *(2) the function increases in proportion to $x$ *(3) the function is constant. Obviously the function is constant, but my questions is what the "increases in proportion to" means. I haven't come across it before, and was wondering if it is standard for something. I would like so see an example of a function that can be said to satisfy this statement.
To answer your question about examples where $f(x)$ would be proportional to $x$ or to $x^2$, we only need to modify the original function slightly. $f(x) = \frac {x^2-1}{x+1}$ is interesting between $x=-2$ and $x=2$, but as $|x|$ becomes really large, the "$-1$" term and the "$+1$" term are insignificant and the expression is approximately $\frac {x^2}{x} = x$; this is proportional to $x$. If $f(x) = \frac{x^3-1}{x+1} - x$, then as $|x|$ becomes really large the function behaves like $\frac{x^3}{x} - x \approx x^2$; this is proportional to $x^2$ for large $|x|$.
Normal Field Extension $X^4 -4$ has a root in $\Bbb Q(2^{1/2})$ but does not split in $\Bbb Q(2^{1/2})$ implying that $\Bbb Q(2^{1/2})$ is not a normal extension of $\Bbb Q$ according to most definitions. But $\Bbb Q(2^{1/2})$ is considered a normal extension of $\Bbb Q$ by everybody. What am I missing here?
You are missing the fact that $x^4-4$ is not irreducible over $\mathbb{Q}$: $x^4-4 = (x^2-2)(x^2+2)$. The definition you have in mind says that if $K/F$ is algebraic, then $K$ is normal if and only if every irreducible $f(x)\in F[x]$ that has at least one root in $K$ actually splits in $K$. Your test polynomial is not irreducible.
Exercise on compact $G_\delta$ sets I'm having trouble proving an exercise in Folland's book on real analysis. Problem: Consider a locally compact Hausdorff space $X$. If $K\subset X$ is a compact $G_\delta$ set, then show there exists an $f\in C_c(X, [0,1])$ with $K=f^{-1}(\{1\})$. We can write $K=\bigcap_1^\infty U_i$, where the $U_i$ are open. My thought was to use Urysohn's lemma to find functions $f_i$ which are $1$ on $K$ and $0$ outside of $U_i$, but I don't see how to use them to get the desired function. If we take the limit, I think we just get the characteristic function of $K$. I apologize if this is something simple. It has been a while since I've done point-set topology.
As you have said, we can use Urysohn's lemma for compact sets to construct a sequence of functions $f_i$ such that $f_i$ equals $1$ on $K$ and $0$ outside $U_i$. Furthermore, $X$ is locally compact, so there is an open neighbourhood $U$ of $K$ whose closure is compact, and we can assume without loss of generality that $U_i\subseteq U$. Then we can put $f=\sum_i2^{-i} f_i$. Clearly, $f^{-1}[\{1\}]=K$. Moreover, $f$ is the uniform limit of continuous functions (the partial sums converge uniformly, since $0\le f_i\le 1$ and $\sum_i 2^{-i}<\infty$), so it is continuous, and its support is contained in the compact set $\overline U$, so $f$ is the function you seek.
proof for $ (\vec{A} \times \vec{B}) \times \vec{C} = (\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}$ This formula just pops up in the textbook I'm reading without any explanation: $ (\vec{A} \times \vec{B}) \times \vec{C} = (\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}$ I did some "vector arithmetic" using the determinant method but I'm not getting an answer that agrees with the formula above. I'm wondering if anyone has seen this formula before and knows a proof of it? The final result that I get is $$\bigl(b_{1}(a_{3}c_{3}+a_{2}c_{2})-a_{1}(b_{2}c_{2}+b_{3}c_{3})\bigr)\vec i$$ $$\bigl(b_{2}(a_{3}c_{3}+a_{1}c_{1})-a_{2}(b_{3}c_{3}+b_{1}c_{1})\bigr)\vec j$$ $$\bigl(b_{3}(a_{2}c_{2}+a_{1}c_{1})-a_{3}(b_{2}c_{2}+b_{1}c_{1})\bigr)\vec k$$ But I failed to see any correlation with the $(\vec{A}\cdot\vec{C})$ and $(\vec{B}\cdot\vec{C})$ parts...
The vector $\vec A\times \vec B$ is perpendicular to the plane containing $\vec A$ and $\vec B$. Now, $(\vec A\times \vec B)\times \vec C$ is perpendicular to the plane containing $\vec C$ and $\vec A\times \vec B$, so $(\vec A\times \vec B)\times \vec C$ lies in the plane containing $\vec A$ and $\vec B$, and hence is a linear combination of $\vec A$ and $\vec B$: $(\vec A\times \vec B)\times \vec C=\alpha \vec A + \beta \vec B$. Taking the dot product with $\vec C$ on both sides gives $0$ on the L.H.S. (as $(\vec A\times \vec B)\times \vec C$ is perpendicular to $\vec C$), hence $0=\alpha (\vec A\cdot \vec C)+\beta(\vec B\cdot\vec C)\implies \frac{\beta}{\vec A\cdot \vec C}=\frac{-\alpha}{\vec B\cdot \vec C}=\lambda \implies \alpha=-\lambda(\vec B\cdot \vec C)$ and $\beta=\lambda(\vec A\cdot \vec C)$, so $(\vec A\times \vec B)\times \vec C=\lambda\bigl((\vec A\cdot \vec C)\vec B-(\vec B\cdot \vec C)\vec A\bigr)$. Here $\lambda$ is independent of the magnitudes of the vectors: if the magnitude of any vector is multiplied by a scalar, that scalar appears on both sides of the equation and cancels. Thus, putting the unit vectors $\vec i,\vec j,\vec i$ for $\vec A,\vec B,\vec C$ in the equation gives $\vec j=\lambda\vec j\implies \lambda=1$, and hence $(\vec A\times \vec B)\times \vec C=(\vec A\cdot \vec C)\vec B-(\vec B\cdot \vec C)\vec A$.
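For anyone who wants a quick numerical sanity check of the identity (NumPy assumed; the random vectors are arbitrary):

```python
# Check (A x B) x C = (A . C) B - (B . C) A on random vectors.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))  # three random 3-vectors

lhs = np.cross(np.cross(A, B), C)
rhs = np.dot(A, C) * B - np.dot(B, C) * A
print(np.allclose(lhs, rhs))  # True
```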
Idea behind factoring an operator? Suppose we have an operator $\partial_t^2-\partial_x^2$. What does it mean to factorise this operator and write it as $(\partial_t-\partial_x)(\partial_t+\partial_x)$? When does this actually make sense, and why?
In the abstract sense, the decomposition $x^2-y^2=(x+y)(x-y)$ is true in any ring where $x$ and $y$ commute (in fact, if and only if they commute). For sufficiently nice (smooth) functions, differentiation is commutative, that is, the result depends on the degrees of differentiation and not the order in which we apply them, so the differential operators on a set of smooth ($C^\infty$) functions (or abstract power series or other such objects) form a commutative ring under composition, and the operation makes perfect sense in that case in a quite general way. However, we only need $\partial_x\partial_t=\partial_t\partial_x$, and for that we only need $C^2$. Of course, differential operators on $C^2$ do not form a ring (since higher order derivatives may not make sense), but the factorization is still correct for the same reasons. You can look at such differential operators as polynomials of degree 2 in variables $\partial_\textrm{something}$. For some less smooth functions it might not make sense.
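As a concrete check that the factorization does what it claims (SymPy assumed, with a generic symbolic function $u(t,x)$, for which SymPy treats mixed partials as equal):

```python
# (d_t - d_x)(d_t + d_x) u should equal u_tt - u_xx.
from sympy import Function, symbols, simplify

t, x = symbols('t x')
u = Function('u')(t, x)

inner = u.diff(t) + u.diff(x)            # (d_t + d_x) u
outer = inner.diff(t) - inner.diff(x)    # apply (d_t - d_x) to the result
print(simplify(outer - (u.diff(t, 2) - u.diff(x, 2))))  # 0
```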
proving :$\frac{ab}{a^2+3b^2}+\frac{cb}{b^2+3c^2}+\frac{ac}{c^2+3a^2}\le\frac{3}{4}$. Let $a,b,c>0$; how can one prove that $$\frac{ab}{a^2+3b^2}+\frac{cb}{b^2+3c^2}+\frac{ac}{c^2+3a^2}\le\frac{3}{4}?$$ I found that $$\ \frac{ab}{a^{2}+3b^{2}}=\frac{1}{\frac{a^{2}+3b^{2}}{ab}}=\frac{1}{\frac{a}{b}+\frac{3b}{a}}, $$ and by AM-GM $$\ \frac{ab}{a^{2}+3b^{2}} \leq \frac{1}{2 \sqrt{3}}=\frac{\sqrt{3}}{6}, $$ $$\ \sum_{cyc} \frac{ab}{a^{2}+3b^{2}} \leq \frac{\sqrt{3}}{2}. $$ But this obviously does not work, since $\frac{\sqrt{3}}{2}>\frac{3}{4}$.
I have a Cauchy-Schwarz proof of it; hope you enjoy it. :D First, multiplying each side by $2$, your inequality can be rewritten as $$ \sum_{cyc}{\frac{2ab}{a^2+3b^2}}\leq \frac{3}{2},$$ or $$ \sum_{cyc}{\frac{(a-b)^2+2b^2}{a^2+3b^2}}\geq \frac{3}{2}.$$ Now, using the Cauchy-Schwarz inequality, we have $$ \sum_{cyc}{\frac{(a-b)^2+2b^2}{a^2+3b^2}}\geq \frac{\left(\sum_{cyc}{\sqrt{(a-b)^2+2b^2}}\right)^2}{4(a^2+b^2+c^2)}.$$ Therefore, it suffices to prove $$\left(\sum_{cyc}{\sqrt{(a-b)^2+2b^2}}\right)^2\geq 6(a^2+b^2+c^2). $$ After expanding, this is equivalent to $$ \sum_{cyc}{\sqrt{[(a-b)^2+2b^2][(b-c)^2+2c^2]}}\geq a^2+b^2+c^2+ab+bc+ca. $$ Now, using Cauchy-Schwarz again, we notice that $$ \sqrt{[(a-b)^2+2b^2][(b-c)^2+2c^2]}\geq (b-a)(b-c)+2bc=b^2+ac+bc-ab.$$ Summing these up, the result follows. Hence we are done! Equality occurs when $a=b=c$.
Quick probability question If there is an $80\%$ chance of rain in the next hour, what is the percentage chance of rain in the next half hour? Thanks.
You could assume that the occurrence of rain is a point process with constant rate $\lambda$. Then the process is Poisson and the probability of rain in an interval of time $[0,t]$ is governed by the exponential distribution: if $T$ is the time until rain, then $P[T\le t] = 1-e^{-\lambda t}$. Assuming $\lambda$ is the rate per hour, $P(\text{rain in 1 hour})=1-e^{-\lambda}$ and $P(\text{rain in 1/2 hour})=1-e^{-\lambda/2}$. Other models for the point process would give different answers. In general, the longer the time interval, the greater the chance of rain.
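Plugging in the $80\%$ figure under this constant-rate model (the model itself is the assumption here), the half-hour probability comes out to $1-\sqrt{0.2}\approx 55.3\%$:

```python
# Solve 1 - exp(-lam) = 0.8 for lam, then evaluate the half-hour probability.
import math

lam = -math.log(1 - 0.8)           # rate per hour
p_half = 1 - math.exp(-lam / 2)    # equals 1 - sqrt(0.2)
print(p_half)                      # about 0.5528
```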
Splitting field and subextension Definition: Let $K/F$ be a field extension and let $p(x)\in F[x]$; we say that $K$ is a splitting field of $p$ over $F$ if $p$ splits in $K$ and $K$ is generated by $p$'s roots, i.e. if $a_{0},\dots,a_{n}\in K$ are the roots of $p$ then $K=F(a_{0},\dots,a_{n})$. What I am trying to understand is this: in my lecture notes it is written that if $K/E$, $E/F$ are field extensions then $K$ is a splitting field of $p$ over $F$ iff $K$ is a splitting field of $p$ over $E$. If I assume $K$ is a splitting field of $p$ over $F$ then $$\begin{align*}K=F(a_{0},\dots,a_{n})\subset E(a_{0},\dots,a_{n})\subset K &\implies F(a_{0},\dots,a_{n})=E(a_{0},\dots,a_{n})\\ &\implies K=E(a_{0},\dots,a_{n}). \end{align*}$$ Can someone please help with the other direction? Help is appreciated!
This is false. Let $F=\mathbb{Q}$, let $E=\mathbb{Q}(\sqrt{2})$, let $K=\mathbb{Q}(\sqrt{2},\sqrt{3})$, and let $p=x^2-3\in F[x]$. Then the splitting field for $p$ over $E$ is $K$, but the splitting field for $p$ over $F$ is $\mathbb{Q}(\sqrt{3})\subsetneq K$. Let's say that all fields under discussion live in an algebraically closed field $L$. Letting $M$ be the unique splitting field for $p$ over $F$ inside $L$, then the splitting field for $p$ over $E$ inside $L$ is equal to $M$ if and only if $ME=M$, which is the case if and only if $E\subseteq M$. In other words, you'll get the same splitting field for $p$ over $F$ and over $E$ if and only if $E$ is already isomorphic to a subfield of the splitting field for $p$ over $F$. When $E$ does not have that property, there is "extra stuff" in $E$ (for example, $\sqrt{2}\in E$ in the example) that will need to also be contained in the splitting field for $p$ over $E$.
Big List of Fun Math Books To be on this list the book must satisfy the following conditions: (1) it doesn't require an enormous amount of background material to understand; (2) it must be a fun book, either in recreational math (or something close to it) or in philosophy of math. Here are my two contributions to the list: What is Mathematics? by Courant and Robbins, and Proofs that Really Count by Benjamin and Quinn.
Both of the following books are really interesting and very accessible. Stewart, Ian - Math Hysteria: this book covers the math behind many famous games and puzzles and requires very little math to grasp. Very light-hearted and fun. Bellos, Alex - Alex's Adventures in Numberland: this book is about maths in society and everyday life, from monkeys doing arithmetic to odds in Las Vegas.
Ratio of sides in a triangle vs ratio of angles? Given a triangle with the ratio of sides being $X: Y : Z$, is it true that the ratio of angles is also $X: Y: Z$? Could I see a proof of this? Thanks
No, it is not true. Consider a 45-45-90 right triangle: (image from Wikipedia) The sides are in the ratio $1:1:\sqrt{2}$, while the angles are in the ratio $1:1:2$.
In a field $F=\{0,1,x\}$, $x + x = 1$ and $x\cdot x = 1$ Looking for some pointers on how to approach this problem: Let $F$ be a field consisting of exactly three elements $0$, $1$, $x$. Prove that $x + x = 1$ and that $x \cdot x = 1$.
Write down the addition and multiplication tables. Many of the entries are known immediately from the properties of $0$ and $1$, and there's only one way to fill in the rest so that addition, and multiplication by nonzero elements, are both invertible.
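For concreteness, identifying $F$ with $\mathbb{Z}/3\mathbb{Z}$ and $x$ with the class of $2$ (one convenient realization, not part of the problem statement), the completed tables come out as $$\begin{array}{c|ccc} + & 0 & 1 & x \\ \hline 0 & 0 & 1 & x \\ 1 & 1 & x & 0 \\ x & x & 0 & 1 \end{array} \qquad \begin{array}{c|ccc} \cdot & 0 & 1 & x \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & x \\ x & 0 & x & 1 \end{array}$$ from which $x+x=1$ and $x\cdot x=1$ can be read off directly.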
Given that $x=\dfrac 1y$, show that $∫\frac {dx}{x \sqrt{(x^2-1)}} = -∫\frac {dy}{\sqrt{1-y^2}}$ Given that $x=\dfrac 1y$, show that $\displaystyle \int \frac 1{x\sqrt{x^2-1}}\,dx = -\int \frac 1{\sqrt{1-y^2}}\,dy$. I have no idea how to prove it. Here is a link to WolframAlpha showing how to integrate the left side.
Substitute $x=1/y$ and $dx/dy=-1/y^2$. For $0<y<1$ (i.e. $x>1$) we have $\frac{1}{x\sqrt{x^2-1}}=\frac{y^2}{\sqrt{1-y^2}}$, so the integral becomes $\int \frac{y^2}{\sqrt{1-y^2}}\left(-\frac{1}{y^2}\right)dy=-\int\frac{dy}{\sqrt{1-y^2}}$.
Finding the critical points of $\sin(x)/x$ and $\cosh(x^2)$ Could someone help me solve this: What are all critical points of $f(x)=\sin(x)/x$ and $f(x)=\cosh(x^2)$? Mathematica solutions are also accepted.
Recall the definition of a critical point: $x=c$ is a critical point of the function $f(x)$ if $f(c)$ exists and if one of the following is true: (1) $f'(c) = 0$; (2) $f'(c)$ does not exist. The general strategy for finding critical points is to compute the first derivative of $f(x)$ with respect to $x$ and set it equal to zero. $$f(x) = \frac{\sin x}{x}$$ Using the quotient rule, we have: $$f'(x) = \frac{x\cdot \cos x - \sin x \cdot 1}{x^2} = \frac{x \cos x}{x^2} - \frac{\sin x}{x^2}$$ Cancelling an $x$ in the first term, we now have: $$f'(x) = \frac{\cos x}{x} - \frac{\sin x}{x^2}$$ Now set that equal to zero and solve for your critical points. Do the same for $f(x) = \cosh(x^2)$. Don't forget the chain rule! For $f(x) = \cosh (x^2)$, recall that $\frac{d}{dx} \cosh (x) = \sinh (x)$. So, $$f'(x) = \sinh(x^2) \cdot \frac{d}{dx} (x^2) = 2x \sinh(x^2),$$ and setting $$0 = 2x \sinh(x^2)$$ shows that $x = 0$ is your only critical point along the reals.
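For $\sin(x)/x$, setting the derivative to zero gives $x\cos x=\sin x$, i.e. $\tan x = x$, which has no elementary closed-form solutions; here is a numerical sketch (SymPy assumed, with hand-picked starting guesses near the roots):

```python
# Critical points of sin(x)/x away from 0 solve tan(x) = x; find a few numerically.
from sympy import symbols, tan, nsolve

x = symbols('x')
for guess in (4.5, 7.7, 10.9):
    print(nsolve(tan(x) - x, x, guess))
# roughly 4.4934, 7.7253, 10.9041
```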
Finding a function that fits the "lowest points" of another one I came up with this problem, which I cannot solve myself. Consider the function: $\displaystyle f(x) = x^{\ln(|\pi \cos x ^ 2| + |\pi \tan x ^ 2|)}$, which has singularities at $\sqrt{\pi}\sqrt{n + \dfrac{1}{2}}$, with $n \in \mathbb{Z}$. Looking at its graph, we can see it is globally increasing. I was wondering if there exists a function $g(x)$, such that $f(x) - g(x) \ge 0, \forall x \in \mathbb{R^{+}}$ and that best fits the "lowest points" of $f(x)$. Sorry for the inaccurate terminology but I really don't know how to express this concept mathematically. Here is, for example, $g(x) = x ^ {1.14}$ (plotted in red). Actually this $g(x)$ is not correct because for small values of $x$ it is greater than $f(x)$. Is it possible to find such a $g(x)$, given that the nearer $g(x)$ is to $f(x)$'s "lowest points" the better it is? Again, sorry for my terminology, I hope you could point me in the right direction. Thanks.
As $a^{\ln b}=\exp(\ln a\cdot\ln b)=b^{\ln a}$ the function $f$ can be written in the following way: $$f(x)=\bigl(\pi|\cos(x^2)|+\pi|\tan(x^2)|\bigr)^{\ln x}\ .$$ Now the auxiliary function $$\phi:\quad{\mathbb R}\to[0,\infty],\qquad t\mapsto \pi(|\cos(t)|+|\tan(t)|)$$ is periodic with period $\pi$ and assumes its minimum $\pi$ at the points $t_n=n\pi$. The function $$\psi(x):=\phi(x^2)=\pi|\cos(x^2)|+\pi|\tan(x^2)|$$ assumes the same values as $\phi$; in particular it is $\geq\pi$ for all $x\geq0$ and $=\pi$ at the points $x_n:=\sqrt{n\pi}$ $\ (n\geq0)$. Therefore $$f(x)=\bigl(\psi(x)\bigr)^{\ln x}\geq \pi^{\ln x}=x^{\ln\pi}\qquad(x\geq1)$$ and $=x^{\ln\pi}$ at the $x_n>1$. For $0<x<1$ the inequality is the other way around because $y\mapsto q^y$ is decreasing when $0<q<1$.
A limit question related to the nth derivative of a function This evening I thought of the following question that isn't related to homework, but it's a question that seems very challenging to me, and I take some interest in it. Let's consider the following function: $$ f(x)= \left(\frac{\sin x}{x}\right)^\frac{x}{\sin x}$$ I wonder which is the first derivative order $n$ (1st, 2nd, 3rd, ...) such that $\lim\limits_{x\to0} f^{(n)}(x)$ is different from $0$ or $+\infty$, $-\infty$, where $f^{(n)}(x)$ is the nth derivative of $f(x)$ (if such a case is possible). I tried to use W|A, but it simply fails to work out such limits. Maybe I need the W|A Pro version.
The Taylor expansion is $$f(x) = 1 - \frac{x^2}{6} + O(x^4),$$ so \begin{eqnarray*} f(0) &=& 1 \\ f'(0) &=& 0 \\ f''(0) &=& -\frac{1}{3}. \end{eqnarray*} $\def\e{\epsilon}$ Addendum: We use big O notation. Let $$\e = \frac{x}{\sin x} - 1 = \frac{x^2}{6} + O(x^4).$$ Then \begin{eqnarray*} \frac{1}{f(x)} &=& (1+\e)^{1+\e} \\ &=& (1+\e)(1+\e)^\e \\ &=& (1+\e)(1+O(\e\log(1+\e))) \\ &=& (1+\e)(1+O(\e^2)) \\ &=& 1+\e + O(\e^2), \end{eqnarray*} so $f(x) = 1-\e + O(\e^2) = 1-\frac{x^2}{6} + O(x^4)$.
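A quick symbolic confirmation of the expansion (SymPy assumed; the exp/log rewrite is just to keep the series computation straightforward):

```python
# Series of (sin(x)/x)**(x/sin(x)) around 0, written as exp of a log.
from sympy import symbols, sin, log, exp, series

x = symbols('x')
f = exp((x / sin(x)) * log(sin(x) / x))
print(series(f, x, 0, 4))  # 1 - x**2/6 + O(x**4)
```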
Show inequality generalization $\sum (x_i-1)(x_i-3)/(x_i^2+3)\ge 0$ Let $f(x)=\dfrac{(x-1)(x-3)}{x^2+3}$. It seems to be that: If $x_1,x_2,\ldots,x_n$ are positive real numbers with $\prod_{i=1}^n x_i=1$ then $\sum_{i=1}^n f(x_i)\ge 0$. For $n>2$ a simple algebraic approach gets messy. This would lead to a generalization of this inequality, but even the calculus solution offered there for $n=3$ went into cases. I thought about Jensen's inequality, but $f$ is not convex on $x>0$. Can someone prove or disprove the above claim?
Unfortunately, this is not true. Simple counterexample: My original counterexample had some ugly numbers in it, but fortunately, there is a counterexample with nicer numbers. However, the explanation below might still prove informative. Note that for $x>0$, $$ f(x)=\frac{(x-1)(x-3)}{x^2+3}=1-\frac{4x}{x^2+3}\lt1\tag{1} $$ Next, we compute $$ f(2)=-\frac17\tag{2} $$ Let $x_0=\frac{1}{256}$ and $x_k=2$ for $1\le k\le8$. The product of the $x_k$ is $\frac{1}{256}\cdot2^8=1$, yet by $(1)$ and $(2)$, the sum of the $f(x_k)$ is less than $1-\frac87\lt0$. Original counterexample: Let $x_0=e^{-3.85}$ and $x_k=e^{.55}$ for $1\le k\le 7$. We get $f(x_0)=0.971631300121646$ and $f(x_k)=-0.154700260422285$ for $1\le k\le 7$. Then, $$ \prod_{k=0}^7x_k=1 $$ yet $$ \sum_{k=0}^7f(x_k)=-0.111270522834348 $$ Explanation: Let me explain how I came up with this example. $\prod\limits_{k=0}^nx_k=1$ is equivalent to $\sum\limits_{k=0}^n\log(x_k)=0$. Therefore I considered $u_k=\log(x_k)$. Now we want $$ \sum_{k=0}^nu_k=0 $$ to imply that $$ \sum_{k=0}^n\frac{(e^{u_k}-1)(e^{u_k}-3)}{e^{2u_k}+3}\ge0 $$ I first looked at the graph of $\large\frac{(e^{u}-1)(e^{u}-3)}{e^{2u}+3}$. If the graph were convex, the result would be true. Unfortunately, the graph was not convex, but I did note that the function dipped below $0$ with a minimum of less than $-\frac17$ near $u=.55$, and that it was less than $1$ everywhere. Thus, if I took $u=.55$ for $7$ points and $u=-3.85$ for the other, the sum of the $u_k$ would be $0$, yet the sum of the $f(e^{u_k})$ would be less than $0$.
A problem dealing with even perfect numbers. Question: Show that all even perfect numbers end in 6 or 8. This is what I have. All even perfect numbers are of the form $n=2^{p-1}(2^p -1)$ where $p$ is prime and so is $(2^p -1)$. What I did was set $2^{p-1}(2^p -1)\equiv x\pmod {10}$ and proceeded to show that $x=6$ or $8$ were the only solutions. Now, $2^{p-1}(2^p -1)\equiv x\pmod {10} \implies 2^{p-2}(2^p -1)\equiv \frac{x}{2}\pmod {5}$; furthermore, there are only two solutions such that $0 \le \frac{x}{2} < 5$. So I plugged in the first two primes, the only primes that need checking directly: if $p=2$ then $\frac{x}{2}=3$, and when $p=3$ then $\frac{x}{2}=4$. These yield $x=6$ and $x=8$ respectively. Furthermore, all solutions are $x=6+10r$ or $x=8+10s$. I would appreciate any comments and/or alternate approaches to arrive at a good proof.
$p$ is prime, so either $p=2$ (which gives the perfect number $2\cdot3=6$, ending in $6$) or $p$ is odd, i.e. $p\equiv 1$ or $3 \pmod 4$. In the odd case, the ending digit of $2^p$ is (respectively) $2$ or $8$ (the ending digits of powers of $2$ cycle as $2,4,8,6,2,4,8,6,\dots$). So the ending digit of $2^{p-1}$ is (respectively) $6$ or $4$, and the ending digit of $2^p-1$ is (respectively) $1$ or $7$. Hence the ending digit of $2^{p-1}(2^p-1)$ is (respectively) $6\times1$ or $4\times7$ modulo $10$, i.e., $6$ or $8$.
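A quick empirical check over the first few Mersenne prime exponents (plain Python; the listed exponents are the known small ones):

```python
# Last digit of the even perfect number 2^(p-1) (2^p - 1) for small Mersenne primes.
for p in (2, 3, 5, 7, 13, 17, 19, 31):
    n = 2**(p - 1) * (2**p - 1)
    print(p, n % 10)  # always 6 or 8
```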
Orthogonal projection to closed, convex subset in a Hilbert space I don't understand one step in the proof of the following lemma (Projektionssatz): Let $X$ a Hilbert space with scalar product $(\cdot)_X$ and let $A\subset X$ be convex and closed. Then there is a unique map $P:X\rightarrow A$ that satisfies: $\|x-P(x)\| = \inf_{y\in A} \|x- y\|$. This is equivalent to the following statement: (1) For all $a\in A$ and fixed $x\in X$, $\mbox{Re}\bigl( x-P(x), a-P(x) \bigr)_X \le 0$. I don't understand the following step in the proof that (1) implies the properties of $P$: Let $a\in A$. Then $\|x-P(x)\|^2 + 2\mbox{Re}\bigl( x-P(x), P(x)-a \bigr)_X + \|P(x)-a\|^2 \ge \|x-P(x)\|^2$. I don't understand the "$\ge$". How do we get rid of the term $\|P(x)-a\|$ on the left hand side? Thank you very much!
Both of the extra terms on the left are non-negative: $\|P(x)-a\|^2\ge0$ trivially, and by (1) the cross term satisfies $2\operatorname{Re}\bigl(x-P(x), P(x)-a\bigr)_X=-2\operatorname{Re}\bigl(x-P(x), a-P(x)\bigr)_X\ge0$. Hence just drop them: the left-hand side (which equals $\|x-a\|^2$) is at least $\|x-P(x)\|^2$.
Weak*-convergence of regular measures Let $K$ be a compact Hausdorff space. Denote by $ca_r(K)$ the set of all countably additive, signed Borel measures which are regular and of bounded variation. Let $(\mu_n)_{n\in\mathbb{N}}\subset ca_r(K)$ be a bounded sequence satisfying $\mu_n\geq 0$ for all $n\in\mathbb{N}$. Can we conclude that $(\mu_n)$ (or a subsequence) converges in the weak*-topology to some $\mu\in ca_r(K)$ with $\mu\geq 0$?
We cannot. Let $K = \beta \mathbb{N}$ be the Stone-Cech compactification of $\mathbb{N}$, and let $\mu_n$ be a point mass at $n \in \mathbb{N} \subset K$. Suppose to the contrary $(\mu_n)$ has a weak-* convergent subsequence $\mu_{n_k}$. Define $f : \mathbb{N} \to \mathbb{R}$ by $f(n_k) = (-1)^k$, $f(n) = 0$ otherwise. Then $f$ has a continuous extension $\tilde{f} : K \to \mathbb{R}$. By weak-* convergence, the sequence $\left(\int \tilde{f} d\mu_{n_k}\right)$ should converge. But in fact $\int \tilde{f} d\mu_{n_k} = \tilde{f}(n_k) = (-1)^k$, which does not converge. If $C(K)$ is separable, then the weak-* topology on the closed unit ball $B$ of $C(K)^* = ca_r(K)$ is metrizable. In particular it is sequentially compact, and so in that case every bounded sequence of regular measures has a weak-* convergent subsequence. As Andy Teich points out, it is sufficient for $K$ to be a compact metrizable space. Also, since there is a natural embedding of $K$ into $B$, if $B$ is metrizable then so is $K$. One might ask whether it is possible for $B$ to be sequentially compact without being metrizable. I don't know the answer but I suspect it is not possible, i.e. that metrizability of $B$ (and hence $K$) is necessary for sequential compactness. We do know (by Alaoglu's theorem) that closed balls in $C(K)^*$ are weak-* compact, so what we can conclude in general is that $\{\mu_n\}$ has at least one weak-* limit point. However, as the above example shows, this limit point need not be a subsequential limit.
How to calculate $\int_{-a}^{a} \sqrt{a^2-x^2}\ln(\sqrt{a^2-x^2})\mathrm{dx}$ Well, this is a homework problem. I need to calculate the differential entropy of a random variable $X\sim f(x)=\sqrt{a^2-x^2},\quad -a<x<a$ and $0$ otherwise. Just how to calculate $$ \int_{-a}^a \sqrt{a^2-x^2}\ln(\sqrt{a^2-x^2})\,\mathrm{d}x? $$ I can get the result with Mathematica, but failed to calculate it by hand. Please give me some ideas.
[Some ideas] You can rewrite it as follows: $$\int_{-a}^a \sqrt{a^2-x^2} f(x) dx$$ where $f(x)$ is the logarithm. Note that the integral, sans $f(x)$, is simply the area of a semicircle of radius $a$. In other words, we can write $$\int_{-a}^a \int_0^{\sqrt{a^2-x^2}} f(x) dy dx=\int_{-a}^a \int_0^{\sqrt{a^2-x^2}} \ln{\sqrt{a^2-x^2}} dy dx$$ Edit: Found a mistake. Thinking it through. :-)
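For what it's worth, one can check the integral numerically against a candidate closed form. The formula below is my own derivation via the substitution $x=a\sin t$, not something from the thread, so treat it as an assumption to be verified: $I(a)=\frac{\pi a^2}{4}\left(1+2\ln\frac a2\right)$ (SciPy assumed).

```python
# Numeric check of the integral against the candidate closed form above.
import math
from scipy.integrate import quad

a = 2.0
f = lambda x: math.sqrt(a*a - x*x) * math.log(math.sqrt(a*a - x*x))
num, _ = quad(f, -a, a)
print(num, math.pi * a * a / 4 * (1 + 2 * math.log(a / 2)))  # both ~ pi for a=2
```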
How to approach integrals as a function? I'm trying to solve the following question involving integrals, and can't quite get what am I supposed to do: $$f(x) = \int_{2x}^{x^2}\root 3\of{\cos z}~dz$$ $$f'(x) =\ ?$$ How should I approach such integral functions? Am I just over-complicating a simple thing?
Use the Leibniz rule for differentiation of integrals, which states that if \begin{align} f(x) = \int_{a(x)}^{b(x)} g(y) \ dy, \end{align} then \begin{align} f^{\prime}(x) = g(b(x))\, b^{\prime}(x) - g(a(x))\, a^{\prime}(x). \end{align} For your problem $a(x)=2x$ and $b(x)=x^2$, so $a^{\prime}(x) = 2$ and $b^{\prime}(x) = 2x$ and, therefore, \begin{align} f^{\prime}(x) = \sqrt[3]{\cos (x^2)}\, (2 x) - \sqrt[3]{\cos (2x)}\, (2). \end{align}
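SymPy applies the same rule automatically, which makes for an easy check (SymPy assumed):

```python
# Differentiate the unevaluated integral with respect to its limits.
from sympy import symbols, cos, cbrt, Integral

x, z = symbols('x z')
f = Integral(cbrt(cos(z)), (z, 2*x, x**2))
print(f.diff(x))  # something equivalent to 2*x*cos(x**2)**(1/3) - 2*cos(2*x)**(1/3)
```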
How many elements in a ring can be invertible? If $R$ is a finite ring (with identity) but not a field, let $U(R)$ be its group of units. Is $\frac{|U(R)|}{|R|}$ bounded away from $1$ over all such rings? It's been a while since I cracked an algebra book (well, other than trying to solve this recently), so if someone can answer this, I'd prefer not to stray too far from first principles within reason.
$\mathbb{F}_p \times\mathbb{F}_q$ has $(p-1)(q-1)$ invertible elements out of $pq$, and $\frac{(p-1)(q-1)}{pq}\to1$ as $p,q\to\infty$, so no: the ratio is not bounded away from $1$. Since $\mathbb{F}_2^n$ has $1$ invertible element out of $2^n$, the proportion is also not bounded away from $0$.
Order of a product of subgroups. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$. Let $H$, $K$ be subgroups of $G$. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$. I need this theorem to prove something.
We know that $$HK=\bigcup_{h\in H} hK$$ and each $hK$ has the same cardinality $|hK|=|K|$. (See ProofWiki.) We also know that for any $h,h'\in G$ either $hK\cap h'K=\emptyset$ or $hK=h'K$. So the only problem is to find out how many of the cosets $hK$, $h\in H$, are distinct. Since $$hK=h'K \Leftrightarrow h^{-1}h'\in K$$ (see ProofWiki) we see that for each $k\in K$, the elements $h'=hk$ represent the same set. (We have $k=h^{-1}h'$.) We also see that if $k=h^{-1}h'$ then $k$ must belong to $H$. Since the number of elements that represent the same coset is $|H\cap K|$, we have $|H|/|H\cap K|$ distinct cosets and $\frac{|H||K|}{|H\cap K|}$ elements in the union.
Uniqueness of morphism in definition of category theory product (etc) I'm trying to understand the categorical definition of a product, which describes them in terms of existence of a unique morphism that makes such-and-such a diagram commute. I don't really feel I've totally understood the motivation for this definition: in particular, why must that morphism be unique? What's the consequence of omitting the requirement for uniqueness in, say, Set?
This is a question which you will be able to answer yourself after some experience ... anyway: The cartesian product $X \times Y$ of two sets $X,Y$ has the property: Every element of $X \times Y$ has a representation $(x,y)$ with unique elements $x \in X$ and $y \in Y$. This is the important and characteristic property of ordered pairs. In other words, if $*$ denotes the one-point set: For every two morphisms $x : * \to X$ and $y : * \to Y$ there is a unique morphism $(x,y) : * \to X \times Y$ such that $p_X \circ (x,y) = x$ and $p_Y \circ (x,y) = y$. But since we can do everything pointwise, the same holds for arbitrary sets instead of $*$: For every two morphisms $x : T \to X$ and $y : T \to Y$ (which you may think of families of elements in $X$ resp. $Y$, also called $T$-valued points in the setting of functorial algebraic geometry), there is a unique morphism $(x,y) : T \to X \times Y$ such that $p_X \circ (x,y) = x$ and $p_Y \circ (x,y) = y$. Once you have understood this in detail, this motivates the general definition of a product diagram in a category. After all, these appear everywhere in mathematics (products of groups, vector spaces, $\sigma$-algebras, topological spaces, etc.). Of course, the uniqueness statement is essential. Otherwise, the product won't be unique and in the case of sets you will get many more objects instead of the usual cartesian product. In general, an object $T$ satisfies the universal property of $X \times Y$ without the uniqueness requirement iff there is a retraction $T \to X \times Y$. So for example in the category of sets every set larger than the cartesian product will qualify.
Can mathematical definitions of the form "P if Q" be interpreted as "P if and only if Q"? Possible Duplicate: Alternative ways to say “if and only if”? So when I come across mathematical definitions like "A function is continuous if...."A space is compact if....","Two continuous functions are homotopic if.....", etc when is it okay to assume that the definition includes the converse as well?
Absolutely. The definition will state that we say [something] is $P$ if $Q$. Thus, every time that $Q$ holds, $P$ also holds. The definition would be useless if the other direction didn't hold, though. We want our terms to be consistent, so it is tacitly assumed that we will also say $P$ only if $Q$. Many texts prefer to avoid leaving this as tacit, and simply state it as "if and only if" in their definitions.
Differential Inequality Help I have the inequality $f''(x)x + f'(x) \leq 0$. Also, $f''(x)<0$ and $f'(x)>0$ and $x \in \mathbb R^+$. And I need to figure out when it is true. I know it is a fairly general question, but I couldn't find any information in several textbooks I have skimmed. Also, I am not sure if integrating would require a sign reversal or not, so I can't go ahead and try to manipulate it myself. Any help or mention of a helpful source would be much appreciated. edit: forgot to mention $f(x)\geq 0$ for every $x \in \mathbb R^+$
$$0\geq f''(x)x+f'(x)=(f'(x)x)',$$ so the function $$f'(x)x$$ decreases on the positive reals; being decreasing, its supremum is its limit $M$ as $x\to0^+$. If $M>0$ (or $M=\infty$), then $f'(x)\geq \frac{M}{2x}$ near $0$, so $f(x)\to-\infty$ as $x\to0^+$, contradicting $f\geq0$. Hence $M=0$, and therefore $$f'(x)x\leq 0,\quad x \in \mathbb R^+,$$ which contradicts $f'(x)>0$. So the inequality is never true under the stated assumptions.
Convergence of $\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$ What theorem should I use to show that $$\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$$ is convergent no matter what value $x$ takes?
Note that $(-1)^n = (-1)^{n-2}$. Hence, $$\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!} = (x-2) \sum_{n=0}^\infty\frac{(-4)^{n-2}}{(n-2)!} = (x-2) \left(\sum_{n=0}^\infty\frac{(-4)^{n}}{n!} \right)$$ where we have interpreted $\dfrac1{(-1)!} = 0 = \dfrac1{(-2)!}$. This is a reasonable interpretation since $\dfrac1{\Gamma(0)} = 0 = \dfrac1{\Gamma(-1)}$. Now recall that $$\sum_{n=0}^\infty\frac{y^{n}}{n!} = \exp(y).$$ Can you now conclude that the series converges no matter what value $x$ takes?
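Yes: here is a quick numeric check that the series collapses to $(x-2)e^{-4}$ (the test value of $x$ is arbitrary):

```python
# Partial sum of (x-2) * sum_{n>=0} (-4)^n / n!  versus  (x-2) * exp(-4).
import math

x = 3.7
partial = (x - 2) * sum((-4.0)**n / math.factorial(n) for n in range(40))
print(partial, (x - 2) * math.exp(-4))  # both ~ 0.0311
```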
Combinations - at least and at most There are 20 balls - 6 red, 6 green, 8 purple We draw five balls and at least one is red, then replace them. We then draw five balls and at most one is green. In how many ways can this be done if the balls are considered distinct? My guess: $${4+3-1 \choose 3-1} \cdot {? \choose ?}$$ I don't know how to do the second part...at most one is green? Thanks for your help.
Event A: the number of ways to choose 5 balls is $\binom{20}{5}$, and the number of ways to choose 5 balls with no red balls is $\binom{14}{5}$. Hence the number of ways to choose 5 balls including at least one red ball is $\binom{20}{5} - \binom{14}{5}$. Event B: the number of ways to choose 5 balls with no green balls is $\binom{14}{5}$, and the number of ways to choose 5 balls with exactly one green ball is $\binom{14}{4} \times 6$ (we multiply by 6 because we have 6 choices for the green ball). Since these two cases are mutually exclusive, the number of ways to choose 5 balls including at most one green ball is $\binom{14}{5} + 6\binom{14}{4}$. Events A and B are independent. Therefore, the total number of ways of doing A and B is $\left(\binom{20}{5} - \binom{14}{5}\right) \times \left(\binom{14}{5} + 6\binom{14}{4}\right)$.
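A brute-force cross-check over labelled balls confirms both counts (plain Python; the red/green/purple labelling of the indices is my own convention):

```python
from math import comb
from itertools import combinations

A = comb(20, 5) - comb(14, 5)        # at least one red
B = comb(14, 5) + 6 * comb(14, 4)    # at most one green
print(A, B, A * B)

# brute force: balls 0-5 red, 6-11 green, 12-19 purple
balls = range(20)
A_bf = sum(1 for c in combinations(balls, 5) if any(b < 6 for b in c))
B_bf = sum(1 for c in combinations(balls, 5) if sum(6 <= b < 12 for b in c) <= 1)
print(A_bf == A, B_bf == B)  # True True
```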
Fermat's theorem on sums of two squares composite number Suppose that there are natural numbers $a$ and $b$, and let $c = a^2 + b^2$. Suppose $c$ is even. Will this $c$ have only one possible pair $\{a, b\}$? Edit: what happens if $c$ is an odd number?
Not necessarily. For example, note that $50=1^2+7^2=5^2+5^2$, and $130=3^2+11^2=7^2+9^2$. For an even number with more than two representations, try $650$. We can produce odd numbers with several representations as a sum of two squares by taking a product of several primes of the form $4k+1$. To get even numbers with multiple representations, take an odd number that has multiple representations, and multiply by a power of $2$. To help you produce your own examples, the following identity, often called the Brahmagupta Identity, is quite useful: $$(a^2+b^2)(x^2+y^2)=(ax\pm by)^2 +(ay\mp bx)^2.$$
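A small brute-force enumerator makes these multiple representations easy to find (plain Python):

```python
# List all representations c = a^2 + b^2 with 1 <= a <= b.
def two_square_reps(c):
    reps = []
    a = 1
    while 2 * a * a <= c:
        b2 = c - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            reps.append((a, b))
        a += 1
    return reps

print(two_square_reps(50))   # [(1, 7), (5, 5)]
print(two_square_reps(650))  # [(5, 25), (11, 23), (17, 19)]
```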
Please explain this notation equation I am confused by this equation, as I rarely use math in my job but need this for a program that I am working on. What exactly does the full expression mean? Note that $m^*_{ij}$ refers to a matrix whose values have already been obtained. Define the transition matrix $M = \{m_{ij}\}$ as follows: for $i\not=j$ set $m_{ij}$ to $m^*_{ij}/|U|$, and let $m_{ii} = 1-\sum_{j\not=i} m_{ij}$.
To obtain the transition matrix $M$ from the matrix $M^*=(m^*_{ij})$, the rule gives us two steps. First, for all off-diagonal terms $m^*_{ij}$ where $i\neq j$ we simply divide the existing entry by $\lvert U\rvert$ (in this case $\lvert U\rvert =24$), and we temporarily replace the diagonal entries $m^*_{ii}$ by $0.$ Second, to get the $i^{\rm th}$ diagonal entry $m_{ii}$ of $M$ we sum up all entries in the $i^{\rm th}$ row of this intermediate matrix and subtract the resulting sum from $1,$ giving $1-\sum_{j\neq i}m_{ij}.$
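Since you mention working on a program, a literal translation of the two steps into code may help (NumPy assumed; the $3\times3$ matrix below is made-up example data, and $\lvert U\rvert=24$ is taken from your setting):

```python
import numpy as np

M_star = np.array([[0.0, 6.0, 6.0],
                   [3.0, 0.0, 9.0],
                   [6.0, 6.0, 0.0]])   # hypothetical example values for m*_ij
U_size = 24                            # |U|

M = M_star / U_size                    # step 1: divide entries by |U|
np.fill_diagonal(M, 0.0)               # ignore whatever sat on the diagonal
np.fill_diagonal(M, 1.0 - M.sum(axis=1))  # step 2: m_ii = 1 - sum_{j != i} m_ij
print(M, M.sum(axis=1))                # each row now sums to 1
```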
When is $(6a + b)(a + 6b)$ a power of two? Find all positive integers $a$ and $b$ for which the product $(6a + b)(a + 6b)$ is a power of $2$. I haven't been able to get this one yet; I found it online, it is not homework! Any help is appreciated, thanks!
Assume $(6a + b)(a + 6b) = 2^c$, where $c$ is an integer $\ge 0$. Since each factor divides a power of $2$, we may write $6a + b = 2^r$ and $a + 6b = 2^s$ with integers $r, s \ge 0$ and $r + s = c$. Now $(6a + b) + (a + 6b) = 2^r + 2^s$, i.e. $7(a + b) = 2^r + 2^s$, and $(6a + b) - (a + 6b) = 2^r - 2^s$, i.e. $5(a - b) = 2^r - 2^s$. Solving this pair for $a$ and $b$ gives $a = \frac{6\cdot 2^r - 2^s}{35}$ and $b = \frac{6\cdot 2^s - 2^r}{35}$. A careful observation reveals that there are no positive integers $a$ and $b$ at all: $7(a+b) = 2^r + 2^s$ would require $7 \mid 2^r + 2^s$, but every power of $2$ is $\equiv 1, 2$ or $4 \pmod 7$, and no two of these values sum to $0 \pmod 7$. Hence $(6a + b)(a + 6b)$ is never a power of $2$.
limit question on Lebesgue functions Let $f\in L^1(\mathbb{R})$. Compute $\lim_{|h|\rightarrow\infty}\int_{-\infty}^\infty |f(x+h)+f(x)|dx$ If $f\in C_c(\mathbb{R})$ I got the limit to be $\int_{-\infty}^\infty |f(x)|dx$. I am not sure if this is right.
(1) Let $f$ be a continuous function with compact support, say contained in $[-R,R]$. For $h\geq 2R$, the supports of $\tau_hf$ and $f$ are disjoint (they are respectively $[-R-h,R-h]$ and $[-R,R]$), hence \begin{align*} \int_{\Bbb R}|f(x+h)+f(x)|dx&=\int_{[-R,R]}|f(x+h)+f(x)|+\int_{[-R-h,R-h]}|f(x+h)+f(x)|\\ &=\int_{[-R,R]}|f(x)|+\int_{[-R-h,R-h]}|f(x+h)|\\ &=2\int_{\Bbb R}|f(x)|dx. \end{align*} (2) If $f\in L^1$, let $\{f_n\}$ be a sequence of continuous functions with compact support which converges to $f$ in $L^1$, for example $\lVert f-f_n\rVert_{L^1}\leq n^{-1}$. Let $L(f,h):=\int_{\Bbb R}|f(x+h)+f(x)|dx$. We have \begin{align} \left|L(f,h)-L(f_n,h)\right|&\leq \int_{\Bbb R}|f(x+h)-f_n(x+h)+f(x)-f_n(x)|dx\\ &\leq \int_{\Bbb R}(|f(x+h)-f_n(x+h)|+|f(x)-f_n(x)|)dx\\ &\leq 2n^{-1}, \end{align} and we deduce that $$|L(f,h)-2\lVert f\rVert_{L^1}|\leq 4n^{-1}+|L(f_n,h)-2\lVert f_n\rVert_{L^1}|.$$ We have for each integer $n$, $$\limsup_{h\to +\infty}|L(f,h)-2\lVert f\rVert_{L^1}|\leq 4n^{-1},$$ which gives the wanted result: the limit is $2\lVert f\rVert_{L^1}$.
Trying to prove $\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+(\sin(\frac{\pi}{t}) -\frac{\pi}{t}\cos(\frac{\pi}{t}))^2}dt$ I posted this incorrectly several hours ago and now I'm back! So this time it's correct. I'm trying to show that for $n\geq 1$: $$\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}dt$$ I checked this numerically for several values of $n$ up through $n=500$ and the bounds are extremely tight. I've been banging my head against this integral for a while now and I really can see no way to simplify it as is or to shave off a tiny amount to make it more palatable. Hopefully someone can help me. Thanks.
Potato's answer is what's going on geometrically. If you want it analytically:$$\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2} \geq \sqrt{\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}$$ $$ = \bigg|\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\bigg|$$ The above expression is the absolute value of the derivative of $t\sin(\pi/t)$. So your integral is greater than $$\int_{1 \over n + 1}^{1 \over n + {1 \over 2}}|(t\sin({\pi \over t}))'|\,dt + \int_{1 \over n + {1 \over 2}}^{1 \over n}|(t\sin({\pi \over t}))'|\,dt$$ This is at least what you get when you put the absolute values on the outside, or $$\bigg|\int_{1 \over n + 1}^{1 \over n + {1 \over 2}}(t\sin({\pi \over t}))'\,dt\bigg| + \bigg|\int_{1 \over n + {1 \over 2}}^{1 \over n}(t\sin({\pi \over t}))'\,dt\bigg|$$ Then the fundamental theorem of calculus says this is equal to the following, for $f(t) = t \sin(\pi/t)$: $$\bigg|f({1 \over n + {1 \over 2}}) - f({1 \over n + 1})\bigg| + \bigg|f({1 \over n}) - f({1 \over n + {1 \over 2}})\bigg|$$ $$= \bigg|{1 \over n + {1 \over 2}} - 0\bigg| + \bigg|0 -{1 \over n + {1 \over 2}}\bigg|$$ $$ = {2 \over n + {1 \over 2}}$$ (Here $f(1/(n+1)) = f(1/n) = 0$ and $|f(1/(n+\frac12))| = 1/(n+\frac12)$.)
Find Elementary Matrices $E_1$ and $E_2$ such that $E_2E_1A = I$ I am studying Linear Algebra part-time and would like to know if anyone has advice on solving the following type of question: Considering the matrix: $$A = \begin{bmatrix}1 & 0 \\-5 & 2\end{bmatrix}$$ find elementary matrices $E_1$ and $E_2$ such that $E_2E_1A = I$. Firstly, can this be re-written as $$E_2E_1 = IA^{-1},$$ which is the same as $$E_2E_1 = A^{-1}?$$ So I tried to find $E_1$ and $E_2$ such that $E_2E_1 = A^{-1}$. My solution: $$A^{-1} = \begin{bmatrix}1 & 0 \\{\frac {5}{2}} & {\frac {1}{2}}\end{bmatrix}$$ $$E_2 = \begin{bmatrix}1 & 0 \\0 & {\frac {5}{2}}\end{bmatrix}$$ $$E_1 = \begin{bmatrix}1 & 0 \\1 & {\frac {1}{5}}\end{bmatrix}$$ This is the incorrect answer. Any help as to what I did wrong, as well as suggestions on how to approach these questions, would be appreciated. Thanks
Just look at what needs to be done. First, eliminate the $-5$ using $E_1 = \begin{bmatrix} 1 & 0 \\ 5 & 1 \end{bmatrix}$. This gives $$ E_1 A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}.$$ Can you figure out what $E_2$ must be?
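For reference, one consistent choice, checked numerically (NumPy assumed; $E_2$ scales row 2 by $\tfrac12$):

```python
import numpy as np

A  = np.array([[1, 0], [-5, 2]], dtype=float)
E1 = np.array([[1, 0], [5, 1]], dtype=float)    # add 5*(row 1) to row 2
E2 = np.array([[1, 0], [0, 0.5]], dtype=float)  # scale row 2 by 1/2
print(E2 @ E1 @ A)  # the 2x2 identity
```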
When does a morphism preserve the degree of curves? Suppose $X \subset \mathbb{P}_k^n$ is a smooth, projective curve of degree $d$ over an algebraically closed field $k$. In this case, the degree of $X$ is defined as the leading coefficient of $P_X$, where $P_X$ is the Hilbert polynomial of $X$. I guess Hartshorne uses the following fact without proof: under certain conditions, the projection morphism $\phi$ from $O \notin X$ to some lower-dimensional space $\mathbb{P}_k^m$ gives a curve $\phi(X)$, and $\deg X= \deg \phi(X)$. I don't know which condition is needed to make this possible (the condition for $X \cong \phi(X)$ is given in the book), and more importantly, why they have the same degree. One thing giving me trouble is the definition of degree by Hilbert polynomial. Is it possible to define it more geometrically?
Chris Dionne already explained the condition for $\phi$ to be an isomorphism from $X$ to $\phi(X)$. Suppose $\phi$ is a projection to $Q=\mathbb P^{n-1}$. Let $H'$ be a hyperplane in $Q$; then the schematic pre-image $\phi^{-1}H'=H\setminus \{ O\}$ where $H$ is a hyperplane of $P:=\mathbb P^n$ passing through $O$. Now $$O_P(H)|_X =(O_P(H)|_{P\setminus \{ O \}})|_X \simeq (\phi^*O_Q(H'))|_X \simeq (\phi|_X)^*(O_Q(H')|_{\phi(X)}),$$ hence $$ \deg X=\deg O_P(H)|_X= \deg O_Q(H')|_{\phi(X)}=\deg \phi(X).$$
how to solve a system of linear equations with the XOR operation? How can I solve this set of equations to get the values of $x,y,z,w$? $$\begin{aligned} 1=x \oplus y \oplus z \end{aligned}$$ $$\begin{aligned}1=x \oplus y \oplus w \end{aligned}$$ $$\begin{aligned}0=x \oplus w \oplus z \end{aligned}$$ $$\begin{aligned}1=w \oplus y \oplus z \end{aligned}$$ This is not a real example; the variables don't have to make sense, I just want to know the method.
The other answers are fine, but you can use even more elementary (if somewhat ad hoc) methods, just as you might with an ordinary system of linear equations over $\Bbb R$. You have this system: $$\left\{\begin{align*} 1&=x\oplus y\oplus z\\ 1&=x\oplus y\oplus w\\ 0&=x\oplus w\oplus z\\ 1&=w\oplus y\oplus z \end{align*}\right.$$ Add the first two equations: $$\begin{align*} (x\oplus y\oplus z)\oplus(x\oplus y\oplus w)&=(z\oplus w)\oplus\Big((x\oplus y)\oplus(x\oplus y)\Big)\\ &=z\oplus w\oplus 0\\ &=z\oplus w\;, \end{align*}$$ and $1\oplus 1=0$, so you get $z\oplus w=0$. Substitute this into the last two equations to get $0=x\oplus 0=x$ and $1=y\oplus 0=y$. Now you know that $x=0$ and $y=1$, so $x\oplus y=1$. Substituting this into the first two equations, we find that $1=1\oplus z$ and $1=1\oplus w$. Add $1$ to both sides to get $0=z$ and $0=w$. The solution is therefore $x=0,y=1,z=0,w=0$.
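Since the whole solution space has only $2^4=16$ points, a brute-force check is trivial and confirms the answer (plain Python):

```python
from itertools import product

for x, y, z, w in product((0, 1), repeat=4):
    if (x ^ y ^ z, x ^ y ^ w, x ^ w ^ z, w ^ y ^ z) == (1, 1, 0, 1):
        print(x, y, z, w)  # 0 1 0 0
```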
Proving that for every real $x$ there exists $y$ with $x+y^2\in\mathbb{Q}$ I'm having trouble with proving this question on my assignment: For all real numbers $x$, there exists a real number $y$ so that $x + y^2$ is rational. I'm not sure exactly how to prove or disprove this. I proved earlier, by cases, that for all real numbers $x$ there exists some $y$ such that $x+y$ is rational, and I'm assuming this is similar, but I'm having trouble starting.
Hint $\rm\ x\! +\! y^2 = q\in\mathbb Q\iff y^2 = q\!-\!x\:$ so choose a rational $\rm\:q\ge x\:$ then let $\rm\:y = \sqrt{q-x}.$ Remark $\ $ Note how writing it out equationally makes it clearer what we need to do to solve it. Just as for "word problems", the key is learning how to translate the problem into the correct algebraic (equational) representation - the rest is easy.
Exponentiation of Center of Lie Algebra Let $G$ be a Lie Group, and $g$ its Lie Algebra. Show that the subgroup generated by exponentiating the center of $g$ is the connected component of $Z(G)$, the center of $G$. Source: Fulton-Harris, Exercise 9.1 The difficulty lies in showing that exponentiating the center of $g$ lands in the center of $G$. Since the image of $\exp(Z(g))$ is the union of one parameter subgroups that are disjoint, we know it is connected. Also I can show this for the case when $G = Aut(V)$ and $g = End(V)$, since we have $G \subset g$. EDIT: $G$ not connected
$G$ is connected and so is generated by elements of the form $\exp Y$ for $Y \in g$. Therefore it is sufficient to show that, for $X \in Z(g)$, $\exp X$ and $\exp Y$ commute. Now define $\gamma: \mathbb R \to G$ by $\gamma(t) = \exp(X)\exp(tY)\exp(-X)$. Then $$ \gamma'(0) = Ad_{\exp(X)} Y = e^{ad_X} Y = (1 + ad_X + \frac{1}{2}ad_X \circ ad_X + \cdots)(Y) = Y, $$ since $ad_X = 0$ for $X \in Z(g)$. But it is also easily seen that $\gamma$ is a homomorphism, i.e. $\gamma(t+s) = \gamma(t)\gamma(s)$. This characterizes the exponential map, so that $\gamma(t) = \exp(tY)$. Taking $t=1$ shows that $\exp X$ and $\exp Y$ commute.
Kernel of $T$ is closed iff $T$ is continuous I know that for a Banach space $X$ and a linear functional $T:X\rightarrow\mathbb{R}$ in its dual $X'$ the following holds: \begin{align}T \text{ is continuous } \iff \text{Ker }T \text{ is closed}\end{align} which probably holds for general operators $T:X\rightarrow Y$ with finite-dimensional Banach space $Y$. I think the argument doesn't work for infinite-dimensional Banach spaces $Y$. Is the statement still correct? I.e. continuity of course still implies the closedness of the kernel for general Banach spaces $X,Y$ but is the converse still true?
The result is false if $Y$ is infinite dimensional. Consider $X=\ell^2$ and $Y=\ell^1$; they are not isomorphic as Banach spaces (the dual of $\ell^1$ is not separable). However, they both have a Hamel basis of size continuum, therefore they are isomorphic as vector spaces. The kernel of the vector space isomorphism is closed (since it is the zero subspace), but the isomorphism cannot be continuous.
$\ell_0$ Minimization (Minimizing the support of a vector) I have been looking into the problem $\min:\|x\|_0$ subject to:$Ax=b$. $\|x\|_0$ is not a linear function and can't be solved as a linear (or integer) program in its current form. Most of my time has been spent looking for a representation different from the one above (formed as a linear/integer program). I know there are approximation methods (Basis Pursuit, Matching Pursuit, the $\ell_1$ problem), but I haven't found an exact formulation in any of my searching and sparse representation literature. I have developed a formulation for the problem, but I would love to compare with anything else that is available. Does anyone know of such a formulation? Thanks in advance, Clark P.S. The support of a vector $s=supp(x)$ is a vector $x$ whose zero elements have been removed. The size of the support $|s|=\|x\|_0$ is the number of elements in the vector $s$. P.P.S. I'm aware that the $\|x\|_0$ problem is NP-hard, and as such, probably will not yield an exact formulation as an LP (unless P=NP). I was more referring to an exact formulation or an LP relaxation.
Consider the following two problems $$ \min:\|x\|_0 \text{ subject to } Ax = b \tag{P0} $$ $$ \min:\|x\|_1 \text{ subject to } Ax = b \tag{P1} $$ The theory of compressed sensing asserts that the optimal solution to the linear program $(P1)$ is an optimal solution to $(P0)$, i.e., the sparsest vector, given the following conditions on $A$. Let $B = (b_{j,k})$ be an $n \times n$ orthogonal matrix (but not necessarily orthonormal). Let the coherence of $B$ be denoted by $$\mu = \max_{j,k} | b_{j,k} |.$$ Let $A$ be the $m \times n$ matrix formed by taking any $m$ random rows of $B$. If $m \in O(\mu^2 |s| \log n)$ then $(P1)$ is equivalent to $(P0)$. More in the papers referenced in the Wikipedia article on compressed sensing.
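A minimal basis-pursuit sketch, under the assumption that SciPy is available: the standard trick writes $x=u-v$ with $u,v\ge0$, so $\|x\|_1=\sum(u+v)$ and $(P1)$ becomes an ordinary LP. The dimensions and the sparse test signal below are arbitrary choices for illustration, and recovery is probabilistic, not guaranteed for every draw.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 20, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]   # a 3-sparse signal
b = A @ x_true

c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                 # A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.allclose(x_hat, x_true, atol=1e-6))  # usually True for such draws
```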
Nth derivative of $\tan^m x$ $m$ is a positive integer, $n$ is a non-negative integer. $$f_n(x)=\frac {d^n}{dx^n} (\tan ^m(x))$$ $P_n(x)=f_n(\arctan(x))$ I would like to find the polynomials that are defined as above. $P_0(x)=x^m$ $P_1(x)=mx^{m+1}+mx^{m-1}$ $P_2(x)=m(m+1)x^{m+2}+2m^2x^{m}+m(m-1)x^{m-2}$ $P_3(x)=(m^3+3m^2+2m)x^{m+3}+(3m^3+3m^2+2m)x^{m+1}+(3m^3-3m^2+2m)x^{m-1}+(m^3-3m^2+2m)x^{m-3}$ I wonder how to find a general formula for $P_n(x)$? I also wish to know whether any orthogonality relation can be found for these polynomials. Thanks for answers EDIT: I proved Robert Israel's generating function. I would like to share it. $$ g(x,z) = \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) = \tan^m(x+z) $$ $$ \frac {d}{dz} (\tan^m(x+z))=m \tan^{m-1}(x+z)+m \tan^{m+1}(x+z)=m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m-1}(x)+m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m+1}(x)= \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} (m\tan^{m-1}(x)+m\tan^{m+1}(x))=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \left(\dfrac{d}{dx}\tan^{m}(x)\right)=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^{n+1}}{dx^{n+1}} (\tan^{m}(x))$$ $$ \frac {d}{dz} \left( \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) \right)= \sum_{n=1}^\infty \dfrac{z^{n-1}}{(n-1)!} \dfrac{d^n}{dx^n} \tan^m(x) =\sum_{k=0}^\infty \dfrac{z^{k}}{k!} \dfrac{d^{k+1}}{dx^{k+1}} \tan^m(x)$$ I also understood that the same identity can be written for any analytic function (where the series converges), as shown below. (Thanks a lot to Robert Israel.) $$ \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} h^m(x) = h^m(x+z) $$ I also wrote $P_n(x)$ in the closed form shown below by using Robert Israel's answer. $$P_n(x)=\frac{n!}{2 \pi i}\int_0^{2 \pi i} e^{nz}\left(\dfrac{x+\tan(e^{-z})}{1-x \tan(e^{-z})}\right)^m dz$$ I do not know the next step: how to find whether any orthogonality relation exists between the polynomials. Maybe a second-order differential equation can be found by using the relations above. Thanks for advice.
The formula used to obtain the exponential generating function in Robert's answer is most easily seen with a little operator calculus. Let $\rm\:D = \frac{d}{dx}.\,$ Then the operator $\rm\,{\it e}^{\ zD} = \sum\, (zD)^k/k!\:$ acts as a linear shift operator $\rm\:x\to x+z\,\:$ on polynomials $\rm\:f(x)\:$ since $$\rm {\it e}^{\ zD} x^n =\, \sum\, \dfrac{(zD)^k}{k!} x^n =\, \sum\, \dfrac{z^k}{k!} \dfrac{n!}{(n-k)!}\ x^{n-k} =\, \sum\, {n\choose k} z^k x^{n-k} =\, (x+z)^n$$ so by linearity $\rm {\it e}^{\ zD} f(x) = f(x\!+\!z)\:$ for all polynomials $\rm\:f(x),\:$ and also for all formal power series $\rm\,f(x)\,$ such that $\rm\:f(x\!+\!z)\,$ converges, i.e. where $\rm\:ord_x(x\!+\!z)\ge 1,\:$ e.g. for $\rm\: z = tan^{-1} x = x -x^3/3 +\, \ldots$
Distance in vector space Suppose $k≧3$, $x,y \in \mathbb{R}^k$, $|x-y|=d>0$, and $r>0$. Then prove (i) If $2r > d$, there are infinitely many $z\in \mathbb{R}^k$ such that $|z-x| = |z-y| = r$. (ii) If $2r=d$, there is exactly one such $z$. (iii) If $2r < d$, there is no such $z$. I have proved the existence of such $z$ for (i) and (ii). The problem is I don't know how to show that there are infinitely many such $z$ and exactly one such $z$, respectively. Also, I can't derive a contradiction to show that there is no such $z$ for (iii). Please give me some suggestions.
If there's a $z$ satisfying $|z-x|=r=|z-y|$, then by the triangle inequality, $d=|x-y|\le|z-x|+|z-y|=2r$. So if $d>2r$ there would've been no such $z$!
How many triangles with integral side lengths are possible, provided their perimeter is $36$ units? How many triangles with integral side lengths are possible, provided their perimeter is $36$ units? My approach: Let the side lengths be $a, b, c$; now, $$a + b + c = 36$$ Now, $1 \leq a, b, c \leq 18$. Applying multinomial theorem, I'm getting $187$ which is wrong. Please help.
The number of triangles with perimeter $n$ and integer side lengths is given by Alcuin's sequence $T(n)$. The generating function for $T(n)$ is $\dfrac{x^3}{(1-x^2)(1-x^3)(1-x^4)}$. Alcuin's sequence can be expressed as $$T(n)=\begin{cases}\left[\frac{n^2}{48}\right]&n\text{ even}\\\left[\frac{(n+3)^2}{48}\right]&n\text{ odd}\end{cases}$$ where $[x]$ is the nearest integer function, and thus $T(36)=27$. See this article by Krier and Manvel for more details. See also Andrews, Jordan/Walch/Wisner, these two by Hirschhorn, and Bindner/Erickson.
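The formula is easy to cross-check by brute force (plain Python):

```python
# Count integer triangles a <= b <= c with perimeter 36 and a + b > c.
count = sum(1 for a in range(1, 37) for b in range(a, 37)
            for c in range(b, 37)
            if a + b + c == 36 and a + b > c)
print(count)  # 27
```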
Question related to regular pentagon My question is: ABCDE is a regular pentagon. If AB = 10 then find AC. Any solution to this question would be greatly appreciated. Thank you. Hey all, thanks for the solutions using trigonometry... can we also get a solution without using trigonometry?
Here's a solution without trig: [Edit: here is a diagram which should make this more intuitive hopefully.] First, $\angle ABC=108^{\circ}$; I won't prove this here, but you can do it rather trivially by dividing the pentagon into 3 triangles, e.g. $\triangle{ABC}$, $\triangle{ACD}$, and $\triangle{ADE}$, and summing and rearranging their angles to find the interior angle. Draw segment $\overline{AC}$. Then $\triangle ABC$ is isosceles, and has 180 degrees, therefore $\angle ACB=\angle BAC=(180^{\circ}-108^{\circ})/2=36^{\circ}$. Draw segment $\overline{BE}$. By symmetry, $\triangle ABC \cong \triangle ABE$, and so therefore $\angle ABE=36^{\circ}$. Let $F$ be the point where segment $\overline{AC}$ intersects segment $\overline{BE}$. Then $\angle AFB = 180^{\circ}-\angle BAC - \angle ABE = 180^{\circ}-36^{\circ}-36^{\circ} = 108^{\circ}$. This means that $\triangle ABC \sim \triangle ABF$, as both triangles have angles of $36^{\circ}$, $36^{\circ}$, and $108^{\circ}$. Next, we can say $\angle CBE=\angle ABC-\angle ABE = 108^{\circ} - 36^{\circ} = 72^{\circ}$, and from there, $\angle BFC = \angle AFC - \angle AFB = 180^{\circ} - 108^{\circ} = 72^{\circ}$. Therefore $\triangle BCF$ is isosceles, as it has angles of $36^{\circ}$, $72^{\circ}$, and $72^{\circ}$. This means that $\overline{CF}=\overline{BC}=\overline{AB}=10$ (adding in the information given to us in the problem). The ratios of like sides of similar triangles are equal. Therefore, $$\frac{\overline{AC}}{\overline{AB}}=\frac{\overline{AB}}{\overline{AF}}$$ We know that $\overline{AC}=\overline{AF}+\overline{CF}=\overline{AF}+10$. Let's define $x:=\overline{AF}$. Substituting everything we know into the previous equation, $$\frac{x+10}{10}=\frac{10}{x}$$ Cross-multiply and solve for $x$ by completing the square. (Or, if you prefer, you can use the quadratic formula instead.) $$x(x+10)=100$$ $$x^2+10x=100$$ $$x^2+10x+25=125$$ $$(x+5)^2=125$$ $$x+5=\pm 5\sqrt 5$$ $$x=-5\pm 5\sqrt 5$$ Choose the positive square root, as $x:=\overline{AF}$ can't be negative. $$\overline{AF}=-5 + 5\sqrt 5$$ Finally, recall that earlier we proved $\overline{AC}=\overline{AF}+10$. Plug in to get the final answer: $$\boxed{ \overline{AC} = 5 + 5\sqrt 5 = 5(1 + \sqrt 5)}$$ Hope this helps! :)
How to compute the min-cost joint assignment to a variable set when checking the cost of a single joint assignment is high? I want to compute the min-cost joint assignment to a set of variables. I have 50 variables, and each can take on 5 different values. So, there are $5^{50}$ (a huge number) possible joint assignments. Finding a good one can be hard! Now, the problem is that computing the cost of any assignment takes about 15-20 minutes. Finding an approximation to the min-cost assignment is also okay; it doesn't have to be the global solution. But with this large computational load, what is a logical approach to finding a low-cost joint assignment?
In general, if the costs of different assignments are completely arbitrary, there may be no better solution than a brute force search through the assignment space. Any improvements on that will have to come from exploiting some statistical structure in the costs that gives us a better-than-random chance of picking low-cost assignments to try. Assuming that the costs of similar assignments are at least somewhat correlated, I'd give some variant of shotgun hill climbing a try. Alternatively, depending on the shape of the cost landscape, simulated annealing or genetic algorithms might work better, but I'd try hill climbing first just to get a baseline to compare against. Of course, this all depends on what the cost landscape looks like. Without more detail on how the costs are calculated, you're unlikely to get very specific answers. If you don't even know what the cost landscape looks like yourself, well, then you'll just have to experiment with different search heuristics and see what works best.
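Here is an illustrative shotgun-hill-climbing skeleton (Python); `cost` stands in for your 15-20 minute black-box evaluator, and the neighbor-sampling and restart schedule are arbitrary placeholders to tune, not recommendations:

```python
import random

N_VARS, N_VALUES = 50, 5

def random_neighbor(assign):
    # change exactly one variable to a different value
    i = random.randrange(N_VARS)
    v = random.choice([v for v in range(N_VALUES) if v != assign[i]])
    return assign[:i] + (v,) + assign[i + 1:]

def shotgun_hill_climb(cost, budget, steps_per_restart=40):
    # budget counts cost() calls, since each one is very expensive
    best, best_cost = None, float('inf')
    while budget > 0:
        cur = tuple(random.randrange(N_VALUES) for _ in range(N_VARS))
        cur_cost = cost(cur); budget -= 1
        for _ in range(min(steps_per_restart, budget)):
            nb = random_neighbor(cur)
            c = cost(nb); budget -= 1
            if c < cur_cost:            # keep only strict improvements
                cur, cur_cost = nb, c
        if cur_cost < best_cost:
            best, best_cost = cur, cur_cost
    return best, best_cost
```

Note that with evaluations this costly, sampling single random neighbors (rather than scanning all $50\times4=200$ neighbors per step) is the only practical option.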
Ergodicity of the First Return Map I was looking for some results on Infinite Ergodic Theory and I found this proposition. Do you guys know how to prove the last item (iii)? I managed to prove (i) and (ii) but I can't do (iii). Let $(X,\Sigma,\mu,T)$ be a $\sigma$-finite space with $T$ preserving the measure $\mu$, and $Y\in\Sigma$ a sweep-out set s.t. $0<\mu(Y)<\infty$. Setting $$\varphi(x)= \operatorname{min}\{n\geq1; \ T^n(x)\in Y\}$$ (the first return time) and also $$T_Y(x) = T^{\varphi(x)}(x),$$ if $T$ is conservative then (i) $\mu|_{Y\cap\Sigma}$ is invariant under the action of $T_Y$ on $(Y,Y\cap\Sigma,\mu|_{Y\cap\Sigma})$; (ii) $T_Y$ is conservative; (iii) if $T$ is ergodic, then $T_Y$ is ergodic on $(Y,Y\cap\Sigma,\mu|_{Y\cap\Sigma})$. Any ideas? Thank you guys in advance!!!
Let $B \subset Y$ be measurable. To prove the invariance of $\mu|_Y$ it is sufficient to prove that $$\mu(T_Y^{-1}B)=\mu(B).$$ First, $$\mu(T_Y^{-1}B)=\sum_{n=1}^{\infty}\mu(Y\cap\{\varphi_Y =n \}\cap T^{-n}B).$$ Now, $$\{ \varphi_Y\leq n\}\cap T^{-n-1}B=T^{-1}( \{\varphi_Y\leq n-1\}\cap T^{-n}B)\cup T^{-1}(Y \cap\{\varphi_Y =n \}\cap T^{-n}B ).$$ This gives, by invariance of the measure, $$\mu(Y\cap\{\varphi_Y =n \}\cap T^{-n}B)=\mu(B_n)-\mu(B_{n-1}),$$ where $B_n=\{ \varphi_Y\leq n\}\cap T^{-n-1}B$. We have $\mu(B_n)\to\mu(B)$ as $n\to \infty$, thus $$\mu(T_Y^{-1} B)=\lim\mu(B_n)=\mu(B).$$ Let us now assume the ergodicity of the original system. Let $B\subset Y$ be a measurable $T_Y$-invariant subset. For any $x \in B$, the first iterate $T^n x~~(n\geq 1)$ that belongs to $Y$ also belongs to $B$, which means that $\varphi_B=\varphi_Y$ on $B$. But if $\mu(B)\neq 0$, Kac's lemma gives that $$\int_{B}\varphi_B \,d\mu=1=\int_{Y}\varphi_{Y} \,d\mu,$$ which implies that $\mu(Y \setminus B) = 0$, proving ergodicity.
Find a function that satisfies the following five conditions. My task is to find a function $h:[-1,1] \to \mathbb{R}$ so that (i) $h(-1) = h(1) = 0$ (ii) $h$ is continuously differentiable on $[-1,1]$ (iii) $h$ is twice differentiable on $(-1,0) \cup (0,1)$ (iv) $|h^{\prime\prime}(x)| < 1$ for all $x \in (-1,0)\cup(0,1)$ (v) $|h(x)| > \frac{1}{2}$ for some $x \in [-1,1]$ The source I have says to use the function $h(x) = \frac{3}{4}\left(1-x^{4/3}\right)$ which fails to satisfy condition (iv) so it is incorrect. I'm starting to doubt the validity of the problem statement because of this. So my question is does such a function exist? If not, why? Thanks!
Let $h$ satisfy (i)-(iv) and let $x_0$ be a point where the maximum of $|h|$ is attained on $[-1,1]$. Since $h(\pm1)=0$, either $h\equiv 0$ (and there is nothing to prove) or $x_0$ is an interior point, so $h'(x_0)=0$. WLOG we can assume that $x_0\ge0$. Then $$ |h(x_0)|=|h(x_0)-h(1)|=\left|\int_{x_0}^1h'(y)\,dy\right|= $$ $$ \left|\int_{x_0}^1\int_{x_0}^yh''(z)\,dz\;dy\right|\le \sup_{(0,1)}|h''|\int_{x_0}^1\int_{x_0}^y\,dz\,dy=\frac{(1-x_0)^2}2\sup_{(0,1)}|h''|\le \frac12. $$ So (i)-(iv) force $\max|h|\le\frac12$, which contradicts (v): no such function exists.
Prove that any primitive root $r$ of $p^n$ is also a primitive root of $p$ For an odd prime $p$, prove that any primitive root $r$ of $p^n$ is also a primitive root of $p$ So I have assumed $r$ has order $k$ modulo $p$, so $k|p-1$. Then if I am able to show that $p-1|k$ then I am done. But I haven't been able to show that. Can anybody help me with this method? Any other type of proof is also welcome.
Note that an integer $r$ with $\gcd(r,p)=1$ is a primitive root modulo $p^k$ when the smallest $b$ such that $r^b\equiv1\bmod p^k$ is $b=p^{k-1}(p-1)$. Suppose that $r$ is not a primitive root modulo $p$, so there is some $b<p-1$ such that $r^b\equiv 1\bmod p$. In other words, there is some integer $t$ such that $r^b=1+pt$. Then of course we have that $p^{n-1}b<p^{n-1}(p-1)$, and $$r^{p^{n-1}b}\equiv 1\bmod p^n$$ because of the binomial theorem: $r^{pb}=(1+pt)^p=1+p^2t'$ for some integer $t'$, and iterating this $n-1$ times shows $r^{p^{n-1}b}=1+p^nt''$. Hence $r$ is not a primitive root modulo $p^n$ either, and taking the contrapositive gives the claim.
Counting zero-digits between 1 and 1 million I just remembered a problem I read years ago but never found an answer: Find how many 0-digits exist in natural numbers between 1 and 1 million. I am a programmer, so a quick brute-force would easily give me the answer, but I am more interested in a pen-and-paper solution.
Just to show there is more than one way to do it: How many zero digits are there in all six-digit numbers? The first digit is never zero, but if we pool all of the non-first digits together, no value occurs more often than the others, so exactly one-tenth of them will be zeroes. There are $9\cdot 5\cdot 10^5$ such digits in all, and a tenth of them is $9\cdot 5 \cdot 10^4$. Repeating that reasoning for each possible length, the number of zero digits we find between $1$ and $999999$ inclusive is $$\sum_{n=2}^6 9(n-1)10^{n-2} = 9\cdot 54321 = 488889 $$ To that we may (depending on how we interpret "between" in the problem statement) need to add the 6 zeroes from 1,000,000 itself, giving a total of 488,895.
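Since you mentioned brute force: in Python the whole count (including 1,000,000) is a one-liner, handy for checking the pen-and-paper total:

```python
total = sum(str(n).count('0') for n in range(1, 10**6 + 1))
print(total)  # 488895; drop the last term to get 488889 for 1..999999
```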
Application of Banach Separation theorem Let $(\mathcal{H},\langle\cdot,\cdot\rangle)$ be a Hilbert Space, $U\subset \mathcal{H},U\not=\mathcal{H}$ be a closed subspace and $x\in\mathcal{H}\setminus U$. Prove that there exists $\phi\in\mathcal{H}^*$, such that\begin{align}\text{Re } \phi(x)<\inf_{u\in U}\text{Re }\phi(u) \end{align} Hint: Observe that $\inf_{u\in U}\text{Re }\phi(u)\leq0$. This seems like an application of the Banach Separation theorem. But the way I know it is not directly applicable. I know that for two disjoint convex sets $A$ and $B$ of which one is open there exists a functional separating them. Is there anything special in this problem about $\mathcal{H}$ being Hilbert and not some general Banach space?
There is a more general result whose proof you can find in Rudin's Functional Analysis: Let $A$ and $B$ be disjoint convex subsets of a topological vector space $X$. If $A$ is compact and $B$ is closed then there exists $\varphi\in X^*$ such that $$ \sup\limits_{x\in A}\mathrm{Re}(\varphi(x))<\inf\limits_{x\in B}\mathrm{Re}(\varphi(x)) $$ Your result follows if we take $X=\mathcal{H}$, $A=\{x\}$ and $B=U$. Of course this is a sledgehammer for such a simple problem, because in the case of a Hilbert space we can explicitly say that the functional $$ \varphi(z)=\langle z, \mathrm{Pr}_U(x)-x\rangle $$ will fit, where $\mathrm{Pr}_U(x)$ is the orthogonal projection of the vector $x$ onto the closed subspace $U$.
Looking for a 'second course' in logic and set theory (forcing, large cardinals...) I'm a recent graduate and will likely be out of the maths business for now - but there are a few things that I'd still really like to learn about - forcing and large cardinals being two of them. My background is what one would probably call a 'first graduate course' in logic and set theory (some intro to ZFC, ordinals, cardinals, and computability theory). Can you recommend any books or online lecture notes which are accessible to someone with my previous knowledge? Thanks a lot!
I would recommend the following as excellent graduate level introductions to set theory, including forcing and large cardinals. * *Thomas Jech, Set Theory. *Aki Kanamori, The higher infinite. See the review I wrote of it for Studia Logica. I typically recommend to my graduate students, who often focus on both forcing and large cardinals, that they should read both Jech and Kunen (mentioned in Francis Adams's answer) and play these two books off against one another. For numerous topics, Jech will have a high-level explanation that is informative when trying to understand the underlying idea, and Kunen will have a greater level of notational detail that helps one understand the particulars. Meanwhile, Kanamori's book is a great exploration of the large cardinal hierarchy. I would also recommend posting (and answering) questions on forcing and large cardinals here and also on mathoverflow. Probably most forcing questions belong on mathoverflow.
Factorize $f$ as product of irreducible factors in $\mathbb Z_5$ Let $f = 3x^3+2x^2+2x+3$, factorize $f$ as a product of irreducible factors in $\mathbb Z_5$. First, I've used the polynomial remainder theorem to make the first factorization: $$\begin{aligned} f = 3x^3+2x^2+2x+3 = (3x^2-x+3)(x+1)\end{aligned}$$ Obviously then as a second step I've taken care of that quadratic polynomial, so: $x_1,x_2=\frac{-b\pm\sqrt{\Delta}}{2a}=\frac{1\pm\sqrt{1-4(9)}}{6}=\frac{1\pm\sqrt{-35}}{6}$ my question is, as I've done calculations in $\mathbb Z_5$, was I allowed to do that: as $-35 \equiv_5 100 \Rightarrow \sqrt{\Delta}=\sqrt{-35} = \sqrt{100}$ then $x_1= \frac{11}{6} = 1 \text { (mod 5)}$, $x_2= -\frac{3}{2} = 1 \text { (mod 5)}$, therefore my resulting product would be $f = (x+1)(x+1)(x+1)$. I think I have done something illegal, that is why multiplying back $(x+1)(x+1)$ I get $x^2+2x+1 \neq 3x^2-x+3$. Any ideas on how I can get to the right result?
If $f(X) = aX^2 + bX + c$ is a quadratic polynomial with roots $x_1$ and $x_2$ then $f(X) = a(X-x_1)(X-x_2)$ (the factor $a$ is necessary to get the right leading coefficient). You found that $3x^2-x+3$ has a double root at $x_1 = x_2 = 1$, so $3x^2-x+3 = 3(x-1)^2$. Your mistakes were * *You forgot to multiply by the leading coefficient $3$. *You concluded that a root at $1$ corresponds to the linear factor $(x+1)$, but this would mean a root at $-1$. The right linear factor is $(x-1)$.
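If you want to double-check a factorization like this by machine, here is a small Python sketch that multiplies $3(x-1)^2(x+1)$ back out with coefficients reduced mod $5$ (coefficient lists are in increasing degree):

```python
def polymul(p, q, m):
    """Multiply polynomials given as coefficient lists (lowest degree first) mod m."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % m
    return r

prod = polymul(polymul([-1, 1], [-1, 1], 5), [1, 1], 5)  # (x-1)^2 (x+1)
prod = [(3 * c) % 5 for c in prod]                       # times leading coefficient 3
print(prod)  # [3, 2, 2, 3] -> 3 + 2x + 2x^2 + 3x^3, as required
```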
Functional equations of (S-shape?) curves I am looking for the way to "quite easily" express particular curves using functional equations. What's important (supposing the chart's size is 1x1 - actually it doesn't matter in the final result): * *obviously the shape - as shown in the picture; *there should be exactly three solutions to f(x) = x: x=0, x close to or equal 0.5 and x=1; *(0.5,0.5) should be the point of intersection with y = x; *it would be really nice, if both of the arcs are scalable - as shown in the left example (the lower arc is more significant than the upper one). I've done some research, but nothing seemed to match my needs; I tried trigonometric and sigmoid functions too, they turned out to be quite close to what I want. I'd be grateful for any hints or even solutions. P.S. The question was originally asked at stackoverflow and I was suggested to look for help here. Some answers involved splines, cumulative distribution functions or the logistic equation. Is that the way to go?
Have you tried polynomial interpolation? It seems that for 'well-behaved' graphs like the ones you are looking for (curves for image processing?), it could work just fine. At the bottom of this page you can find an applet demonstrating it. There is a potential problem with the interpolated degree 4 curve possibly becoming negative though. I'll let you know once I have worked out a better way to do this. EDIT: The problem associated with those unwanted 'spikes' or oscillations is called Runge's phenomenon. I would guess that image manipulation programs like Photoshop actually use spline interpolation to find the curves.
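If splines are the way you go, here is a hedged sketch with SciPy's monotone PCHIP interpolator; the interior control values 0.10 and 0.90 are made-up knobs for scaling the two arcs, while (0,0), (0.5,0.5) and (1,1) pin the required fixed points. Being monotone, PCHIP avoids the Runge-type overshoot (and the negativity problem) entirely.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = [0.0, 0.25, 0.5, 0.75, 1.0]
y = [0.0, 0.10, 0.5, 0.90, 1.0]   # tweak 0.10 / 0.90 to scale each arc

curve = PchipInterpolator(x, y)   # monotone cubic through the control points
print(float(curve(0.5)))          # 0.5 -- the required intersection with y = x
```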
Throwing balls into $b$ buckets: when does some bucket overflow size $s$? Suppose you throw balls one-by-one into $b$ buckets, uniformly at random. At what time does the size of some (any) bucket exceed size $s$? That is, consider the following random process. At each of times $t=1, 2, 3, \dots$, * *Pick up a ball (from some infinite supply of balls that you have). *Assign it to one of $b$ buckets, uniformly at random, and independent of choices made for previous balls. For this random process, let $T = T(s,b)$ be the time such that * *At time $T-1$ (after the $T-1$th ball was assigned), for each bucket, the number of balls assigned to it was $\le s$. *At time $T$ (after the $T$th ball was assigned), there is some bucket for which the number of balls assigned to it is $s + 1$. What can we say about $T$? If we can get the distribution of $T(s,b)$ that would be great, else even knowing its expected value and variance, or even just expected value, would be good. Beyond the obvious fact that $T \le bs+1$ (and therefore $E[T]$ exists), I don't see anything very helpful. The motivation comes from a real-life computer application involving hashing (the numbers of interest are something like $b = 10000$ and $s = 64$).
I just wrote some code to find the rough answer (for my particular numbers) by simulation. $ gcc -lm balls-bins.c -o balls-bins && ./balls-bins 10000 64 ... Mean: 384815.56 Standard deviation: 16893.75 (after 25000 trials) This (384xxx) is within 2% of the number ~377xxx, specifically $$ T \approx b \left( s + \log b - \sqrt{(s + \log b)^2 - s^2} \right) $$ (taking the smaller root) that comes from the asymptotic results (see comments on the question), and I must say I am pleasantly surprised. I plan to edit this answer later to summarise the result from the paper, unless someone gets to it first. (Feel free!)
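For reference, here is a Python re-sketch of the kind of simulation described above (the original was in C); it should reproduce the same ballpark mean and standard deviation, though pure Python is slow for many trials at $b=10000$, $s=64$:

```python
import random

def stopping_time(b, s):
    """Throw balls uniformly into b buckets; return the first time a bucket holds s+1."""
    counts = [0] * b
    t = 0
    while True:
        t += 1
        i = random.randrange(b)
        counts[i] += 1
        if counts[i] == s + 1:
            return t

samples = [stopping_time(10000, 64) for _ in range(50)]
mean = sum(samples) / len(samples)
sd = (sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)) ** 0.5
print(mean, sd)   # roughly 3.8e5 and 1.7e4
```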
If $f:D\to \mathbb{R}$ is continuous and there exists $(x_n)\in D$ such that $x_n\to a\notin D$ and $f(x_n)\to \ell$ then $\lim_{x\to a}f(x)=\ell$? Assertion: If $f:X\setminus\left\{a\right\}\to \mathbb{R}$ is continuous and there exists a sequence $(x_n):\mathbb{N}\to X\setminus\left\{a\right\}$ such that $x_n\to a$ and $f(x_n)\to \ell$ prove that $\lim_{x\to a}f(x)=\ell$ I have three questions: 1) Is the assertion correct? If not, please provide counter-examples. In that case can the assertion become correct if we require that $f$ is monotonic, differentiable etc.? 2) Is my proof correct? If not, please pinpoint the problem and give a hint to the right direction. Personally, what makes me doubt it are the choices of $N$ and $\delta$, since they depend on one another 3) If the proof is correct, then is there a way to shorten it? My Proof: Let $\epsilon>0$. Since $f(x_n)\to \ell$ \begin{equation} \exists N_1\in \mathbb{N}:n\ge N_1\Rightarrow \left|f(x_n)-\ell\right|<\frac{\epsilon}{2}\end{equation} Thus, $\left|f(x_{N_1})-\ell\right|<\frac{\epsilon}{2}$ and by the continuity of $f$ at $x_{N_1}$, \begin{equation} \exists \delta_1>0:\left|x-x_{N_1}\right|<\delta_1\Rightarrow \left|f(x)-f(x_{N_1})\right|<\frac{\epsilon}{2} \end{equation} Since $x_n\to a$, \begin{equation} \exists N_2\in \mathbb{N}:n\ge N_2\Rightarrow \left|x_n-a\right|<\delta_1\end{equation} Thus, $\left|x_{N_2}-a\right|<\delta_1$ and by letting $N=\max\left\{N_1,N_2\right\}$, \begin{gather} 0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N+x_N-a\right|<\delta_1\Rightarrow \left|x-x_N\right|-\left|x_N-a\right|<\delta_1\\ 0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N\right|<\delta_1+\left|x_N-a\right| \end{gather} By the continuity of $f$ at $x_N$, \begin{equation} \exists \delta_3>0:0<\left|x-x_N\right|<\delta_3\Rightarrow \left|f(x)-f(x_N)\right|<\frac{\epsilon}{2} \end{equation} Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that, \begin{gather} 0<\left|x-a\right|<\delta\Rightarrow \left|x-x_N\right|<\delta\Rightarrow \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\Rightarrow \left|f(x)-\ell\right|-\left|f(x_N)-\ell\right|<\frac{\epsilon}{2}\\ 0<\left|x-a\right|<\delta\Rightarrow\left|f(x)-\ell\right|<\left|f(x_N)-\ell\right|+\frac{\epsilon}{2}<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon \end{gather} We conclude that $\lim_{x\to a}f(x)=\ell$ Thank you in advance EDIT: The proof is false. One of the mistakes is in this part: "Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that, \begin{gather} 0<\left|x-a\right|<\delta{\color{Red} \Rightarrow} \left|x-x_N\right|<\delta{\color{Red} \Rightarrow} \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\end{gather}"
You need to have $f(x_n) \to l$ for all sequences $x_n \to a$, not just one sequence. For example, let $a=(0,0)$ with $f(x,y) = \frac{x y}{x^2+y^2}$. This is continuous on $\mathbb{R}^2 \setminus \{a\}$, and the sequence $x_n=(\frac{1}{n},0) \to a$ with $f(x_n) \to 0$ (excuse the abuse of notation), but along $y_n=(\frac{1}{n},\frac{1}{n}) \to a$ we have $f(y_n) = \frac{1}{2}$ for every $n$, so $\lim_{x\to a} f(x)$ does not exist.
Preorders, chains, cartesian products, and lexicographical order Definitions: A preorder on a set $X$ is a binary relation $\leq$ on $X$ which is reflexive and transitive. A preordered set $(X, \leq)$ is a set equipped with a preorder.... Where confusion cannot result, we refer to the preordered set $X$ or sometimes just the preorder $X$. If $x \leq y$ and $y \leq x$ then we shall write $x \cong y$ and say that $x$ and $y$ are isomorphic elements. Write $x < y$ if $x \le y$ and $y \not\le x$ for each $x, y \in X$. This gives a strict partial order on $X$. (See also, this question.) Given two preordered sets $A$ and $B$, the lexicographical order on the Cartesian product $A \times B$ is defined as $(a,b) \le_{lex} (a',b')$ if and only if $a < a'$ or ($a \cong a'$ and $b \le b'$). The result is a preorder. A subset $C$ of a preorder $X$ is called a chain if for every $x,y \in C$ we have $x \leq y$ or $y \leq x$.... We shall say that a preorder $X$ is a chain ... if the underlying set $X$ is such. Exercise: Let $C$ and $C'$ be chains. Show that the set of pairs $(c, c')$, where $c \in C$ and $c' \in C'$, is also a chain when ordered lexicographically
Suppose $(c_1, c_1')$ and $(c_2, c_2')$ are pairs with $c_1, c_2 \in C$ and $c_1', c_2' \in C'$. Then $c_1 \le c_2$ or $c_2 \le c_1$ and $c_1' \le c_2'$ or $c_2' \le c_1'$. If $c_1 \le c_2$ but $c_2 \not\le c_1$ then $c_1 < c_2$ and $(c_1, c_1') \le_{lex} (c_2, c_2')$. Similarly, if $c_1 \not\le c_2$ and $c_2 \le c_1$ then $(c_2, c_2') \le_{lex} (c_1, c_1')$. If $c_1 \le c_2$ and $c_2 \le c_1$ then $c_1 \cong c_2$. Now if $c_1' \le c_2'$ then $(c_1, c_1') \le_{lex} (c_2, c_2')$. On the other hand if $c_2' \le c_1'$ then $(c_2, c_2') \le_{lex} (c_1, c_1')$.
Finding two numbers given their sum and their product Which two numbers when added together yield $16$, and when multiplied together yield $55$. I know the $x$ and $y$ are $5$ and $11$ but I wanted to see if I could algebraically solve it, and found I couldn't. In $x+y=16$, I know $x=16/y$ but when I plug it back in I get something like $16/y + y = 16$, then I multiply the left side by $16$ to get $2y=256$ and then ultimately $y=128$. Am I doing something wrong?
Given $$ x + y = 16 \tag{1} $$ and $$ x \cdot y = 55. \tag{2} $$ We can use the identity: $$ ( x-y)^2 = ( x+y)^2 - 4 \cdot x \cdot y \tag{3} $$ to get $$ x - y = \sqrt{256 - 4 \cdot 55} = \sqrt{36} = \pm \, 6. \tag{4} $$ Finding half sum and half difference of equations (1) and (4) gives us $$ (x,y) = (11,5) , \, (5,11) \tag{5} $$ (The slip in your attempt: from $x+y=16$ you get $x = 16 - y$, not $x = 16/y$; it is the product equation $xy=55$ that gives $x = 55/y$.)
Is there any way to find a angle of a complex number without a calculator? Transforming the complex number $z=-\sqrt{3}+3i$ into polar form will bring me to the problem to solve this two equations to find the angle $\phi$: $\cos{\phi}=\frac{\Re z}{|z|}$ and $\sin{\phi}=\frac{\Im z}{|z|}$. For $z$ the solutions are $\cos{\phi}=-0,5$ and $\sin{\phi}=-0,5*\sqrt{3}$. Using Wolfram Alpha or my calculator I can get $\phi=\frac{2\pi}{3}$ as solution. But using a calculator is forbidden in my examination. Do you know any (cool) ways to get the angle without any other help?
The direct calculation is $$\arg(-\sqrt 3+ 3i)=\arctan\frac{3}{-\sqrt{3}}=\arctan (-\sqrt 3)=\arctan \frac{\sqrt 3/2}{-1/2}$$ As the other two answers remark, you must learn by heart the values of at least the sine and cosine at the main angle values between zero and $\,\pi/2\,$ and then, understanding the trigonometric circle, deduce the functions' values anywhere on that circle. The solution you said you got is incorrectly deduced, as you wrote $$\,\cos\phi=-0,5\,,\,\sin\phi=-0,5\cdot \sqrt 3\,$$ which would give you both values of $\,\sin\,,\,\cos\,$ negative, thus putting you in the third quadrant of the trigonometric circle, $\,\{(x,y)\;:\;x,y<0\}\,$, which is wrong as the value indeed is $\,2\pi/3\,$ (sine is positive!), but who knows how you got to it. In the argument's calculation above please do note the minus sign is at $\,1/2\,$ in the denominator, since that's what corresponds to the $\,\cos\,$ in the polar form, but the sine is positive thus putting us on the second quadrant $\,\{(x,y)\;:\;x<0<y\}\,$. So knowing that $$\sin x = \sin(\pi - x)\,,\,\cos x=-\cos(\pi -x)\,,\,0\leq x\leq \pi/2$$ and knowing the basic values for the basic angles, gives you now $$-\frac{1}{2}=-\cos\frac{\pi}{3}\stackrel{\text{go to 2nd quad.}}=\cos\left(\pi-\frac{\pi}{3}\right)=\cos\frac{2\pi}{3}$$ $$\frac{\sqrt 3}{2}=\sin\frac{\pi}{3}\stackrel{\text{go to 2nd quad.}}=\sin\left(\pi-\frac{\pi}{3}\right)=\sin\frac{2\pi}{3}$$
Where are good resources to study combinatorics? I am an undergraduate with basic knowledge of combinatorics, but I want to obtain sound knowledge of this topic. Where can I find good resources/questions to practice on this topic? I need more than basic things like the direct question 'choosing r balls among n' etc.; I need questions that make you think and challenge you a bit.
Lots of good suggestions here. Another freely available source is Combinatorics Through Guided Discovery. It starts out very elementary, but also contains some interesting problems. And the book is laid out as almost entirely problem-based, so it is useful for self study.
Solving Recurrence $T_n = T_{n-1}*T_{n-2} + T_{n-3}$ I have a series of numbers called the Foo numbers, where $F_0 = 1, F_1=1, F_2 = 1 $ then the general equation looks like the following: $$ F_n = F_{n-1}(F_{n-2}) + F_{n-3} $$ So far I have got the equation to look like this: $$T_n = T_{n-1}*T_{n-2} + T_{n-3}$$ I just don't know how to solve the recurrence. I tried unfolding but I don't know if I got the right answer: $$ T_n = T_{n-i}*T_{n-(i+1)} + T_{n-(i+2)} $$ Please help; I need to describe the algorithm, which I have done, but analysing the running time is frustrating me.
Numerical data looks very good for $$F_n \approx e^{\alpha \tau^n}$$ where $\tau = (1+\sqrt{5})/2 \approx 1.618$ and $\alpha \approx 0.175$. Notice that this makes sense: When $n$ is large, the $F_{n-1} F_{n-2}$ term is much larger than $F_{n-3}$, so $$\log F_n \approx \log F_{n-1} + \log F_{n-2}.$$ Recursions of the form $a_n = a_{n-1} + a_{n-2}$ always have closed forms $\alpha \tau^n + \beta (1-\tau)^n$, where $\tau$ and $1-\tau=-1/\tau$ are the two roots of $x^2=x+1$. Here's a plot of $\log \log F_n$ against $n$; note that it looks very linear. A best fit line (deleting the first five values to clean up end effects) gives that the slope of this line is $0.481267$; the value of $\log \tau$ is $0.481212$. Your sequence is in Sloane but it doesn't say much of interest; if you have anything to say, you should add it to Sloane.
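The doubly exponential growth is easy to check yourself; the Python sketch below computes the sequence exactly with big integers and compares consecutive differences of $\log\log F_n$ to $\log\tau$ (early terms are skipped because $\log\log$ is undefined while $F_n$ is small):

```python
import math

F = [1, 1, 1]
for n in range(3, 31):
    F.append(F[-1] * F[-2] + F[-3])

ll = [math.log(math.log(f)) for f in F[6:]]    # F_6 = 23 is the first safe term
slopes = [b - a for a, b in zip(ll, ll[1:])]
print(slopes[-1])                               # ~ 0.4812
print(math.log((1 + math.sqrt(5)) / 2))         # log(tau) = 0.48121...
```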
Existence of valuation rings in a finite extension of the field of fractions of a weakly Artinian domain without Axiom of Choice Can we prove the following theorem without Axiom of Choice? This is a generalization of this problem. Theorem Let $A$ be a weakly Artinian domain. Let $K$ be the field of fractions of $A$. Let $L$ be a finite extension field of $K$. Let $B$ be a subring of $L$ containing $A$. Let $P$ be a prime ideal of $B$. Then there exists a valuation ring of $L$ dominating $B_P$. As for why I think this question is interesting, please see (particularly Pete Clark's answer): Why worry about the axiom of choice?
Lemma 1 Let $A$ be an integrally closed weakly Artinian domain. Let $S$ be a multiplicative subset of $A$. Let $A_S$ be the localization with respect to $S$. Then $A_S$ is an integrally closed weakly Artinian domain. Proof: Let $K$ be the field of fractions of $A$. Suppose that $x \in K$ is integral over $A_S$. $x^n + a_{n-1}x^{n-1} + ... + a_1x + a_0 = 0$, where $a_i \in A_S$. Hence there exists $s \in S$ such that $sx$ is integral over $A$. Since $A$ is integrally closed, $sx \in A$. Hence $x \in A_S$. Hence $A_S$ is integrally closed. Let $f$ be a non-zero element of $A_S$. $f = a/s$, where $a \in A, s \in S$. Then $fA_S = aA_S$. By this, $aA$ is a product of prime ideals of $A$. Let $P$ be a non-zero prime ideal of $A$. Since $P$ is maximal, $A_S/P^nA_S$ is isomorphic to $A/P^n$ or $0$. Hence $leng_{A_S} A_S/aA_S$ is finite. QED Lemma 2 Let $A$ be an integrally closed weakly Artinian domain. Let $P$ be a non-zero prime ideal of $A$. Then $A_P$ is a discrete valuation ring. Proof: By Lemma 1 and this, every non-zero ideal of $A_P$ has a unique factorization as a product of prime ideals. Hence $PA_P \neq P^2A_P$. Let $x \in PA_P - P^2A_P$. Since $PA_P$ is the only non-zero prime ideal of $A_P$, $xA_P = PA_P$. Since every non-zero ideal of $A_P$ can be written $P^nA_P$, $A_P$ is a principal ideal domain. Hence $A_P$ is a discrete valuation ring. QED Proof of the title theorem We can assume that $P \neq 0$. Let $C$ be the integral closure of $B$ in $L$. By this, $C$ is a weakly Artinian $A$-algebra in the sense of Definition 2 of my answer to this. By Lemma 2 of my answer to this, $C$ is a weakly Artinian ring. Let $S = B - P$. Let $C_P$ and $B_P$ be the localizations of $C$ and $B$ with respect to $S$ respectively. By this, $leng_A C/PC$ is finite. Hence by Lemma 7 of my answer to this, $C/PC$ is finitely generated as an $A$-module. Hence $C/PC$ is finitely generated as a $B$-module. Hence $C_P/PC_P$ is finitely generated as a $B_P$-module. Since $PC_P \neq C_P$, by the lemma of the answer by QiL to this, there exists a maximal ideal of $C_P/PC_P$ whose preimage is $PB_P$. Hence there exists a maximal ideal $Q$ of $C_P$ such that $PB_P = Q \cap B_P$. Let $Q' = Q \cap C$. Then $Q'$ is a prime ideal of $C$ lying over $P$. By Lemma 2, $C_{Q'}$ is a discrete valuation ring and it dominates $B_P$. QED
Do equal distance distributions imply equivalence? Let $A$ and $B$ be two binary $(n,M,d)$ codes. We define $a_i = \#\{(w_1,w_2) \in A^2:\:d(w_1,w_2) = i\}$, and the same for $b_i$. If $a_i = b_i$ for all $i$, can one deduce that $A$ and $B$ are equivalent, i.e. equal up to permutation of positions and permutation of letters in a fixed position?
See Example 3.5.4 and the "other examples ... given in Section IV-4.9." EDIT: Here are two inequivalent binary codes with the same distance enumerator. Code A consists of the codewords $a=00000000000$, $b=11110000000$, $c=11111111100$, $d=11000110011$; code R consists of the codewords $r=0000000000$, $s=1111000000$, $t=0111111111$, $u=1000111100$. Writing $xy$ for the Hamming distance between words $x$ and $y$, we have $ab=rs=4$, $bc=ru=5$, $cd=tu=6$, $ac=rt=9$, and $ad=bd=st=su=7$, so the distance enumerators are identical. But code A has the 7-7-4 triangle $adb$, while code R has the 7-7-6 triangle $tsu$, so there is no distance-preserving bijection between the two codes. I have also posted this as an answer to the MO question. I'm sure there are shorter examples. MORE EDIT: a much shorter example: Code A consists of the codewords $a=000$, $b=100$, $c=011$, $d=111$; code R consists of the codewords $r=0000$, $s=1000$, $t=0100$, $u=0011$. Both codes have two pairs of words at each of the distances 1, 2, and 3; R has a 1-1-2 triangle, A doesn't.
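The short example is easy to verify mechanically; this Python snippet checks that both codes have the same multiset of pairwise distances:

```python
from itertools import combinations

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

A = ["000", "100", "011", "111"]
R = ["0000", "1000", "0100", "0011"]
for code in (A, R):
    print(sorted(dist(u, v) for u, v in combinations(code, 2)))
# both print [1, 1, 2, 2, 3, 3], yet R has the 1-1-2 triangle (r, s, t) and A has none
```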
A strangely connected subset of $\Bbb R^2$ Let $S\subset{\Bbb R}^2$ (or any metric space, but we'll stick with $\Bbb R^2$) and let $x\in S$. Suppose that all sufficiently small circles centered at $x$ intersect $S$ at exactly $n$ points; if this is the case then say that the valence of $x$ is $n$. For example, if $S=[0,1]\times\{0\}$, every point of $S$ has valence 2, except $\langle0,0\rangle$ and $\langle1,0\rangle$, which have valence 1. This is a typical pattern, where there is an uncountable number of 2-valent points and a finite, possibly empty set of points with other valences. In another typical pattern, for example ${\Bbb Z}^2$, every point is 0-valent; in another, for example a disc, none of the points has a well-defined valence. Is there a nonempty subset of $\Bbb R^2$ in which every point is 3-valent? I think yes, one could be constructed using a typical transfinite induction argument, although I have not worked out the details. But what I really want is an example of such a set that can be exhibited concretely. What is it about $\Bbb R^2$ that everywhere 2-valent sets are well-behaved, but everywhere 3-valent sets are crazy? Is there some space we could use instead of $\Bbb R^2$ in which the opposite would be true?
I claim there is a set $S \subseteq {\mathbb R}^2$ that contains exactly three points in every circle. Well-order all circles by the first ordinal of cardinality $\mathfrak c$ as $C_\alpha, \alpha < \mathfrak c$. By transfinite induction I'll construct sets $S_\alpha$ with $S_\alpha \subseteq S_\beta$ for $\alpha < \beta$, and take $S = \bigcup_{\alpha < {\mathfrak c}} S_\alpha$. These will have the following properties: * *$S_\alpha$ contains exactly three points on every circle $C_\beta$ for $\beta \le \alpha$. *$S_\alpha$ does not contain more than three points on any circle. *$\text{card}(S_\alpha) \le 3\, \text{card}(\alpha)$ We begin with $S_1$ consisting of any three points on $C_1$. Now given $S_{<\alpha} = \bigcup_{\beta < \alpha} S_\beta$, consider the circle $C_\alpha$. Let $k$ be the cardinality of $C_\alpha \cap S_{<\alpha}$. By property (2), $k \le 3$. If $k = 3$, take $S_\alpha = S_{<\alpha}$. Otherwise we need to add in $3-k$ points. Note that there are fewer than ${\mathfrak c}$ circles determined by triples of points in $S_{<\alpha}$, all of which are different from $C_\alpha$, and so there are fewer than $\mathfrak c$ points of $C_\alpha$ that are on such circles. Since $C_\alpha$ has $\mathfrak c$ points, we can add in a point $a$ of $C_\alpha$ that is not on any of those circles. If $k \le 1$, we need a second point $b$ not to be on the circles determined by triples in $S_{<\alpha} \cup \{a\}$, and if $k=0$ a third point $c$ not on the circles determined by triples in $S_{<\alpha} \cup \{a,b\}$. Again this can be done, and it is easy to see that properties (1,2,3) are satisfied. Finally, any circle $C_\alpha$ contains exactly three points of $S_\alpha$, and no more than three points of $S$ (if it contained more than three points of $S$, it would have more than three in some $S_\beta$, contradicting property (2)).
Solve for $x$; $\tan x+\sec x=2\cos x;-\infty\lt x\lt\infty$ Solve for $x$; $\tan x+\sec x=2\cos x;-\infty\lt x\lt\infty$ $$\tan x+\sec x=2\cos x$$ $$\left(\dfrac{\sin x}{\cos x}\right)+\left(\dfrac{1}{\cos x}\right)=2\cos x$$ $$\left(\dfrac{\sin x+1}{\cos x}\right)=2\cos x$$ $$\sin x+1=2\cos^2x$$ $$2\cos^2x-\sin x-1=0$$ Edit: $$2\cos^2x=\sin x+1$$ $$2(1-\sin^2x)=\sin x+1$$ $$2\sin^2x+\sin x-1=0$$ $\sin x=a$ $$2a^2+a-1=0$$ $$(a+1)(2a-1)=0$$ $$a=-1,\dfrac{1}{2}$$ $$\arcsin(-1)=-90^\circ=-\dfrac{\pi}{2}$$ $$\arcsin\left(\dfrac{1}{2}\right)=30^\circ=\dfrac{\pi}{6}$$ $$180^\circ-30^\circ=150^\circ=\dfrac{5 \pi}{6}$$ $$x=\dfrac{\pi}{6},-\dfrac{\pi}{2},\dfrac{5 \pi}{6}$$ I actually do not know if those are the only answers considering my range is infinite: $-\infty\lt x\lt\infty$
\begin{eqnarray} \left(\dfrac{\sin x}{\cos x}\right)+\left(\dfrac{1}{\cos x}\right)&=&2\cos x\\ \sin x + 1 &=& 2 \cos^2 x \\ \sin x + 1 &=& 2(1-\sin^2 x)\\ \end{eqnarray} Then $\sin x = -1$ or $\sin x = 1/2$. Note that multiplying by $\cos x$ is only valid when $\cos x \neq 0$, and $\sin x = -1$ forces $\cos x = 0$, where $\tan x$ and $\sec x$ are undefined; so those solutions are extraneous. The solutions are $\pi/6 + 2 k\pi$ and $5\pi/6 + 2k \pi$, $k \in \mathbb{Z}$.
Is there a proof of Gödel's Incompleteness Theorem without self-referential statements? For the proof of Gödel's Incompleteness Theorem, most versions of proof use basically self-referential statements. My question is, what if one argues that Gödel's Incompleteness Theorem only matters when a formula makes self-reference possible? Is there any proof of Incompleteness Theorem that does not rely on self-referential statements?
Roughly speaking, the real theorem is that the ability to express the theory of integer arithmetic implies the ability to express formal logic. Gödel's incompleteness theorem is really just a corollary of this: once you've proven the technical result, it's a simple matter to use it to construct variations of the Liar's paradox and see what they imply. Strictly speaking, you still cannot create self-referential statements: the (internal) self-referential statement can only be interpreted as such by invoking the correspondence between external logic (the language in which you are expressing your theory) and internal logic (the language which your theory expresses).
limit of an integral with a Lorentzian function We want to calculate the $\lim_{\epsilon \to 0} \int_{-\infty}^{\infty} \frac{f(x)}{x^2 + \epsilon^2} dx $ for a function $f(x)$ such that $f(0)=0$. We are physicists, so the function $f(x)$ is smooth enough! After several tries, we have not been able to calculate it except numerically. It looks like the normal Lorentzian which tends to the Dirac delta function, but an $\epsilon$ is missing. We wonder if this integral can be written in a simple form as a function of $f(0)$ or its derivatives $f^{(n)}(0)$ at 0. Thank you very much.
I'll assume that $f$ has compact support (though it's enough to suppose that $f$ decreases very fast). As $f(0)=0$ we have $f(x)=xg(x)$ for some smooth $g$. Let $g=h+k$, where $h$ is even and $k$ is odd. As $k(0)=0$, again $k(x)=xm(x)$ for some smooth $m$. We have $$\int_{-\infty}^{\infty} \frac{f(x)}{x^2 + \epsilon^2} dx =\int_{-\infty}^{\infty} \frac{xg(x)}{x^2 + \epsilon^2} dx =\int_{-\infty}^{\infty} \frac{x(h(x)+xm(x))}{x^2 + \epsilon^2} dx = \int_{-\infty}^{\infty} \frac{x^2m(x)}{x^2 + \epsilon^2} dx $$ (the integral involving $h$ is $0$ for parity reasons) and $$\int_{-\infty}^{\infty} \frac{x^2m(x)}{x^2 + \epsilon^2} dx=\int_{-\infty}^{\infty} m(x)dx-\int_{-\infty}^{\infty} \frac{m(x)}{(x/\epsilon)^2 + 1} dx. $$ The last integral converges to $0$ (by dominated convergence: the integrand is bounded by $|m|$ and tends to $0$ pointwise), so the limit is $\int_{-\infty}^{\infty} m(x)dx$ where (I recall) $$m(x)=\frac{f(x)+f(-x)}{2x^2}.$$
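A quick numerical illustration (a sketch with a made-up test function): take $f(x)=(x+x^2)e^{-x^2}$, so $f(0)=0$ and $m(x)=e^{-x^2}$, whose integral is $\sqrt\pi\approx 1.7725$; shrinking $\epsilon$ should drive the integral toward that value.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (x + x**2) * np.exp(-x**2)   # smooth, f(0) = 0
m = lambda x: np.exp(-x**2)                # equals (f(x) + f(-x)) / (2 x^2)

for eps in (0.2, 0.05, 0.01):
    val, _ = quad(lambda x: f(x) / (x**2 + eps**2), -10, 10,
                  points=[0], limit=400)
    print(eps, val)

print("predicted limit:", quad(m, -10, 10)[0])   # ~ sqrt(pi) = 1.7724...
```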
Evaluating $\int(2x^2+1)e^{x^2}dx$ $$\int(2x^2+1)e^{x^2}dx$$ The answer of course: $$\int(2x^2+1)e^{x^2}\,dx=xe^{x^2}+C$$ But what kind of techniques should we use with a problem like this?
You can expand the integrand, and get $$2x^2e^{x^2}+e^{x^2}=$$ $$x\cdot 2x e^{x^2}+1\cdot e^{x^2}=$$ Note that $x'=1$ and that $(e^{x^2})'=2xe^{x^2}$ so you get $$=x\cdot (e^{x^2})'+(x)'\cdot e^{x^2}=(xe^{x^2})'$$ Thus your integral is $xe^{x^2}+C$. Of course, the above is integration by parts in disguise, but it is good to develop some observational skills with problems of this kind.
An example of bounded linear operator Define $\ell^p = \{ x (= \{ x_n \}_{-\infty}^\infty) \;\; | \;\; \| x \|_{\ell^p} < \infty \} $ with $\| x \|_{\ell^p} = ( \sum_{n=-\infty}^\infty |x_n |^p )^{1/p} $ if $ 1 \leqslant p <\infty $, and $ \| x \|_{\ell^p} = \sup _{n} | x_n | $ if $ p = \infty $. Let $k = \{ k_n \}_{-\infty}^\infty \in \ell^1 $. Now define the operator $T$, for $x \in \ell^p$, $$ (Tx)_n = \sum_{j=-\infty}^\infty k_{n-j} x_j \;\;(n \in \mathbb Z).$$ Then prove that $T\colon\ell^p \to\ell^p$ is a bounded, linear operator with $$ \| Tx \|_{\ell^p} \leqslant \| k \|_{\ell^1} \| x \|_{\ell^p}. $$ Would you give me a proof for this problem?
In the first comment I suggested the following strategy: write $T=\sum_j T_j$, where $T_j$ is a linear operator defined by $T_jx=\{k_jx_{n-j}\}$. You should check that this is indeed correct, i.e., summing $T_j$ over $j$ indeed gives $T$. Next, show that $\|T_j\|=|k_j|$ using the definition of the operator norm; this is quick because $T_j$ is $k_j$ times the shift $\{x_n\}\mapsto\{x_{n-j}\}$, which is an isometry of $\ell^p$. Finally, use the triangle inequality $\|Tx\|_{\ell^p}\le \sum_j \|T_jx\|_{\ell^p}$.
Exam question: Normals While solving exam questions, I came across this problem: Let $M = \{(x, y, z) \in \mathbb{R^3}; (x - 5)^5 + y^6 + z^{2010} = 1\}$. Show that for every unit vector $v \in \mathbb{R^3}$, there exists a unique point $p \in M$ such that $N(p) = v$, where $N(p)$ is the outer surface normal to $M$ at $p$. I don't know where to start. Ideas?
It is straightforward, but tedious, to analytically show that the assertion is false. A picture illustrates the problem more clearly. First, note that if $z=0$, then the normal will also have a zero $z$ component, so we can take $z=0$ and look for issues there. Plotting the contour of $(x-5)^5+y^6 = 1$ gives the picture below: Pick a normal $v$ to the surface around $(2,2.5)$. Then note that in the vicinity of $(6,1)$, it is possible to find a normal that matches $v$.
Proving:$\tan(20^{\circ})\cdot \tan(30^{\circ}) \cdot \tan(40^{\circ})=\tan(10^{\circ})$ how to prove that: $\tan20^{\circ}\cdot\tan30^{\circ}\cdot\tan40^{\circ}=\tan10^{\circ}$? I know how to prove $ \frac{\tan 20^{\circ}\cdot\tan 30^{\circ}}{\tan 10^{\circ}}=\tan 50^{\circ}, $ in this way: $ \tan{20^\circ} = \sqrt{3}\cdot\tan{50^\circ}\cdot\tan{10^\circ}$ $\Longleftrightarrow \sin{20^\circ}\cdot\cos{50^\circ}\cdot\cos{10^\circ} = \sqrt{3}\cdot\sin{50^\circ}\cdot\sin{10^\circ}\cdot\cos{20^\circ}$ $\Longleftrightarrow \frac{1}{2}\sin{20^\circ}(\cos{60^\circ}+\cos{40^\circ}) = \frac{\sqrt{3}}{2}(\cos{40^\circ}-\cos{60^\circ})\cdot\cos{20^\circ}$ $\Longleftrightarrow \frac{1}{4}\sin{20^\circ}+\frac{1}{2}\sin{20^\circ}\cdot\cos{40^\circ} = \frac{\sqrt{3}}{2}\cos{40^\circ}\cdot\cos{20^\circ}-\frac{\sqrt{3}}{4}\cdot\cos{20^\circ}$ $\Longleftrightarrow \frac{1}{4}\sin{20^\circ}-\frac{1}{4}\sin{20^\circ}+\frac{1}{4}\sin{60^\circ} = \frac{\sqrt{3}}{4}\cos{60^\circ}+\frac{\sqrt{3}}{4}\cos{20^\circ}-\frac{\sqrt{3}}{4}\cos{20^\circ}$ $\Longleftrightarrow \frac{\sqrt{3}}{8} = \frac{\sqrt{3}}{8}$ Could this help to prove the first one, and how? Do I just need to know that $ \frac{1}{\tan\theta}=\tan(90^{\circ}-\theta) $ ?
Another approach: Let's start by arranging the expression: $$\tan(20°) \tan(30°) \tan(40°) = \tan(30°) \tan(40°) \tan(20°)$$$$=\tan(30°) \tan(30°+10°) \tan(30° - 10°)$$ Now, we will expand $\tan(30° + 10°) $ and $\tan(30° - 10°)$ with the tangent addition and subtraction formulas, giving us: $$\tan(30°) \left( \frac{\tan(30°) + \tan(10°)}{1 - \tan(30°) \tan(10°)}\right) \left( \frac{\tan(30°) - \tan(10°)}{1 + \tan(30°) \tan(10°)}\right) $$ $$= \tan(30°) \left( \frac{\tan^2(30°) - \tan^2(10°)}{1 - \tan^2(30°) \tan^2(10°)}\right) $$ Substituting the value of $\color{blue}{\tan(30°)}$, $$ = \tan(30°) \left(\frac{1 - 3\tan^2(10°)}{3 - \tan^2(10°)}\right) $$ Multiplying and dividing by $\color{blue}{\tan(10°)}$, $$=\tan(30°) \tan(10°) \left(\frac{1 - 3\tan^2(10°)}{3 \tan(10°) - \tan^3(10°)}\right) $$ It can be easily shown that $\color{blue}{\tan 3A =\large \frac{3 \tan A - \tan^3A}{1-3\tan^2A}}$. Thus, our problem reduces to: $$=\tan(30°) \tan(10°) \frac{1}{\tan(3\times 10°)}= \tan(10°)$$ QED!
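A one-line numerical check of the identity (not a proof, just reassurance):

```python
import math
d = math.radians
print(math.tan(d(20)) * math.tan(d(30)) * math.tan(d(40)))  # 0.17632698...
print(math.tan(d(10)))                                      # 0.17632698...
```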
Is there a "natural" topology on powersets? Let's say a topology T on a set X is natural if the definition of T refers to properties of (or relationships on) the members of X, and artificial otherwise. For example, the order topology on the set of real numbers is natural, while the discrete topology is artificial. Suppose X is the powerset of some set Y. We know some things about X, such as that some members of X are subsets of other members of X. This defines an order on the members of X in terms of the subset relationship. But the order is not linear so it does not define an order topology on X. I haven't found a topology on powersets that I would call natural. Is there one?
Natural is far from well-defined. For example, I don’t see why the discrete topology on $\Bbb R$ is any less natural than the order topology; one just happens to make use of less of the structure of $\Bbb R$ than the other. That said, the Alexandrov topology on $\wp(X)$ seems pretty natural: $$\tau=\left\{\mathscr{U}\subseteq\wp(X):\forall A,B\subseteq X\big(B\supseteq A\in\mathscr{U}\to B\in\mathscr{U}\big)\right\}\;.$$ In other words, the open sets are the upper sets in the partial order $\langle\wp(X),\subseteq\rangle$.
Proof of $(A - B) - C = A - (B \cup C)$ I have several of these types of problems, and it would be great if I can get some help on one so I have a guide on how I can solve these. I tried asking another problem, but it turned out to be a special case problem, so hopefully this one works out normally. The question is: Prove $(A - B) - C = A - (B \cup C)$ I know I must prove each side is a subset of the other to complete this proof. Here's my shot: We start with the left side. * *if $x \in C$, then $x \notin A$ and $x \in B$. *So $x \in (B \cup C)$ *So $A - (B \cup C)$ Is this the right idea? Should I then reverse the proof to prove it the other way around, or is that unnecessary? Should it be more formal? Thanks!
Working with the characteristic function of a set makes these problems easy: $$1_{(A - B) - C}= 1_{A-B} - 1_{A-B}1_C=(1_A- 1_A1_B)-1_A1_C+ 1_A1_B1_C \,$$ $$1_{A-(B \cup C)}= 1_{A}- 1_{A}1_{B \cup C}=1_A- 1_A ( 1_B+ 1_C -1_B1_C)\,$$ It is easy now to see that the right-hand sides are equal.
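For identities like this it is also easy to spot-check with random sets in Python before writing the proof (of course this only tests, it doesn't prove):

```python
import random

U = set(range(20))
for _ in range(1000):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    assert (A - B) - C == A - (B | C)
print("identity held on all random trials")
```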
Trigonometric equation inversion I am trying to invert the following equation to have it with $\theta$ as the subject: $y = \cos \theta \sin \theta - \cos \theta -\sin \theta$ I tried both standard trig as well as trying to reformulate it as a differential equation (albeit I might have chosen an awkward substitution). Nothing seems to stick and I keep on ending up with pretty nasty expressions. Any ideas?
Rewrite the equation as $(1-\cos\theta)(1-\sin\theta)=1+y.$ Now make the Weierstrass substitution $t=\tan(\theta/2)$. It is standard that $\cos\theta=\frac{1-t^2}{1+t^2}$ and $\sin\theta=\frac{2t}{1+t^2}$. So our equation becomes $$\frac{2t^2}{1+t^2}\cdot \frac{(1-t)^2}{1+t^2}=1+y.$$ Take the square root (keeping track of the $\pm$ sign), and clear denominators. We get $$\sqrt{2}\,t(1-t)=\pm\sqrt{1+y}\,(1+t^2).$$ This is a quadratic in $t$.
Coordinate-free method to determine local maxima/minima? If there is a function $f : M \to \mathbb R$ then the critical point is given as a point where $$d f = 0$$ $df$ being 1-form (btw am I right here?). Is there a coordinate independent formulation of a criteria to determine if this point is a local maximum or minimum (or a saddle point)?
Let $p$ be a critical point for a smooth function $f:M\to \mathbb{R}.$ Let $(x_1,\ldots,x_n)$ be an arbitrary smooth coordinate chart around $p$ on $M.$ From multivariate calculus we know that a sufficient condition for $p$ to be a local maximum (resp. minimum) of $f$ is the negative (resp. positive) definiteness of the Hessian $H(f,p)$ of $f$ at $p$, which is the bilinear map on $T_pM$ defined locally by $$H(f,p)=\left.\frac{\partial^2f}{\partial x_i\partial x_j}dx^i\otimes dx^j\right|_p,$$ here the Einstein convention on summation is working. However, as Thomas commented, the Hessian of a function at a critical point has a coordinate-free expression. In fact, $H(f,p): T_pM\times T_pM\to\mathbb{R}$ is characterized by $$H(f,p)(X(p),Y(p))=\left.\mathcal{L}_X(\mathcal{L}_Y f)\right|_p$$ for any smooth vector fields $X$ and $Y$ on $M$ around $p.$ Note that without a Riemannian metric on $M$ you cannot invariantly define the Hessian of a function at a non-critical point.
Prove $ (r\sin A \cos A)^2+(r\sin A \sin A)^2+(r\cos A)^2=r^2$ How would I verify the following trig identity? $$(r\sin A \cos A)^2+(r\sin A \sin A)^2+(r\cos A)^2=r^2$$ My work thus far is $$(r^2\cos^2A\sin^2A)+(r^2\sin^2A\sin^2A)+(r^2\cos^2A)$$ But how would I continue? My math skills fail me.
Just use the distributive property and $\sin^2(x)+\cos^2(x)=1$: $$ \begin{align} &(r\sin(A)\cos(A))^2+(r\sin(A)\sin(A))^2+(r\cos(A))^2\\ &=r^2\sin^2(A)(\cos^2(A)+\sin^2(A))+r^2\cos^2(A)\\ &=r^2\sin^2(A)+r^2\cos^2(A)\\ &=r^2(\sin^2(A)+\cos^2(A))\\ &=r^2\tag{1} \end{align} $$ This can be generalized to $$ \begin{align} &(r\sin(A)\cos(B))^2+(r\sin(A)\sin(B))^2+(r\cos(A))^2\\ &=r^2\sin^2(A)(\cos^2(B)+\sin^2(B))+r^2\cos^2(A)\\ &=r^2\sin^2(A)+r^2\cos^2(A)\\ &=r^2(\sin^2(A)+\cos^2(A))\\ &=r^2\tag{2} \end{align} $$ $(2)$ verifies that spherical coordinates have the specified distance from the origin.
Use the Division Algorithm to show the square of any integer is in the form $3k$ or $3k+1$ Use Division Algorithm to show the square of any int is in the form 3k or 3k+1 What confuses me about this is that I think I am able to show that the square of any integer is in the form $x\cdot k$ where $x$ is any integer. For example: $$x = 3q + 0 \\ x = 3q + 1 \\ x = 3q + 2$$ I show $3k$ first $$(3q)^2 = 3(3q^2)$$ where $k=3q^2$. Is this a valid use of the division algorithm? If it is, then can I also say that the square is in the form $10k$? For example $(3q)^2 = 10\cdot(\frac{9}{10}q^2)$ where $k=\frac{9}{10}q^2$. Why isn't this valid? Am I using the division algorithm incorrectly to show that any integer squared is of the form $3k$ or $3k+1$? If so, how do I use it? Keep in mind I am teaching myself Number Theory and the only help I can get is from you guys on Stack Exchange.
Hint $\ $ Below I give an analogous proof for divisor $5$ (vs. $3),$ exploiting reflection symmetry. Lemma $\ $ Every integer $\rm\:n\:$ has form $\rm\: n = 5\,k \pm r,\:$ for $\rm\:r\in\{0,1,2\},\ k\in\Bbb Z.$ Proof $\ $ By the Division algorithm $$\rm\begin{eqnarray} n &=&\:\rm 5\,q + \color{#c00}r\ \ \ for\ \ some\ \ q,r\in\Bbb Z,\:\ r\in [0,4] \\ &=&\:\rm 5\,(q\!+\!1)-(\color{#0a0}{5\!-\!r}) \end{eqnarray}$$ Since $\rm\:\color{#0a0}{5\!-\!r}+\color{#c00}r = 5,\,$ one summand is $\le 2,\,$ so lies in $\{0,1,2\},\,$ yielding the result. Theorem $\ $ The square of an integer $\rm\,n\,$ has form $\rm\, n^2 = \,5\,k + r\,$ for $\rm\:r\in \{0,1,4\}.$ Proof $\ $ By Lemma $\rm\ n^2 = (5k\pm r)^2 = 5\,(5k^2\!\pm 2kr)+r^2\,$ for $\rm\:r\in \{0,1,2\},\,$ so $\rm\: r^2\in\{0,1,4\}.$ Remark $\ $ Your divisor of $\,3\,$ is analogous, with $\rm\:r\in \{0,1\}\,$ so $\rm\:r^2\in \{0,1\}.\,$ The same method generalizes for any divisor $\rm\:m,\,$ yielding that $\rm\:n^2 = m\,k + r^2,\,$ for $\rm\:r\in\{0,1,\ldots,\lfloor m/2\rfloor\}.$ The reason we need only square half the remainders is because we have exploited reflection symmetry (negation) to note that remainders $\rm > n$ can be transformed to negatives of remainders $\rm < n,\,$ e.g. $\rm\: 13 = 5\cdot 2 +\color{#0A0} 3 = 5\cdot 3 \color{#C00}{- 2},\,$ i.e. remainder $\rm\:\color{#0A0}3\leadsto\,\color{#C00}{-2},\,$ i.e. $\rm\:3 \equiv -2\pmod 5.\:$ This amounts to using a system of balanced (or signed) remainders $\rm\, 0,\pm1,\pm2,\ldots,\pm n\ $ vs. $\rm\ 0,1,2,\ldots,2n[-1].\:$ Often this optimization halves work for problems independent of the sign of the remainder. All of this is much clearer when expressed in terms of congruences (modular arithmetic), e.g. the key inference above $\rm\:n\equiv r\:\Rightarrow\:n^2\equiv r^2\pmod m\:$ is a special case of the ubiquitous Congruence Product Rule $\rm\ \ A\equiv a,\ B\equiv b\ \Rightarrow\ AB\equiv ab\ \ (mod\ m)$ Proof $\rm\:\ \ m\: |\: A\!-\!a,\ B\!-\!b\:\ \Rightarrow\:\ m\ |\ (A\!-\!a)\ B + a\ (B\!-\!b)\ =\ AB - ab $ For an introduction to congruences see any decent textbook on elementary number theory.
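Since you are self-studying, a two-line empirical check of the theorem (and of the divisor-$5$ analogue above) may be reassuring:

```python
print({(n * n) % 3 for n in range(1000)})   # {0, 1}
print({(n * n) % 5 for n in range(1000)})   # {0, 1, 4}
```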
How do I know if a simple pole exists (and how do I find it) for $f(z)$ without expanding the Laurent series first? In general, how do I recognize that a simple pole exists and find it, given some $\Large f(z)$? I want to do this without finding the Laurent series first. And specifically, for the following function: $\Large f(z) = \frac{z^2}{z^4+16}$
Here is how you find the roots of $z^4+16=0$: $$ z^4 = -16 \Rightarrow z^4 = 16\, e^{i\pi} \Rightarrow z^4 = 16\, e^{i\pi+i2k\pi} \Rightarrow z = 2\, e^{\frac{i\pi + i2k\pi}{4} },\quad k =0,1,2,3. $$ These four roots are distinct, so each is a simple zero of the denominator, and the numerator $z^2$ does not vanish at any of them; hence $f$ has a simple pole at each root. In general, $g(z)/h(z)$ has a simple pole exactly where $h$ has a simple zero and $g$ doesn't vanish, so no Laurent expansion is needed to recognize it.
what should be the range of $u$ satisfying following equation. Let us consider $u\in C^2(\Omega)\cap C^0(\Omega)$ satisfying the following equation: $\Delta u=u^3-u , x\in\Omega$ and $u=0 $ on $\partial \Omega$, where $\Omega \subset \mathbb R^n$ is bounded. I need hints to find out what possible values $u$ can take. Thank you for your hints in advance.
We must of course assume $u$ is continuous on the closure of $\Omega$. Since this is bounded and $u = 0$ on $\partial \Omega$, if $u > 0$ somewhere it must achieve a maximum in $\Omega$. Now $u$ is subharmonic on any part of the domain where $u > 1$, so the maximum principle says ...
composition of convex function with harmonic function. Consider $u:\Omega \to \mathbb R$ harmonic and $f$ a convex function. How do I prove that $f\circ u$ is subharmonic? It seems straightforward: $\Delta (f\circ u (x)) \ge f(\Delta u(x))=0 $. Is this all there is to this problem? Is there a better way? Also $|u|^p$ is subharmonic for $p\ge1$; this seems also very obvious because the map $t\to |t|^p$ is a convex mapping. Any comments, improvements or answers are welcome.
Basically you do what Davide Giraudo suggested. We use the definition as given on the EOM: An upper semicontinuous function $v:\Omega \to\mathbb{R}\cup \{-\infty\}$ is called subharmonic if for every $x_0 \in \Omega$ and every $r > 0$ such that $\overline{B_r(x_0)} \subset \Omega$ we have $$ v(x_0) \leq \frac{1}{|\partial B_r(x_0)|} \int_{\partial B_r(x_0)} v(y) \mathrm{d}y $$ Now, using that $f$ is convex, Jensen's inequality takes the following form: $$ f\left( \frac{1}{|\partial B_r(x_0)|} \int_{\partial B_r(x_0)} v(y)\mathrm{d}y \right) \leq \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} f\circ v(y) \mathrm{d}y $$ If $u$ is harmonic, we have that $$ \frac{1}{|\partial B_r(x_0)|} \int_{\partial B_r(x_0)} u(y)\mathrm{d}y = u(x_0) $$ so combining the two facts you have that $$ f\circ u(x_0) \leq \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} f\circ u(y) \mathrm{d}y $$ which is precisely the definition for $f\circ u$ to be subharmonic.
Can continuity of inverse be omitted from the definition of topological group? According to Wikipedia, a topological group $G$ is a group and a topological space such that $$ (x,y) \mapsto xy$$ and $$ x \mapsto x^{-1}$$ are continuous. The second requirement follows from the first one, no? (by taking $y=0, x=-x$ in the first requirement) So we can drop it in the definition, right?
There is even a term for a group endowed with a topology such that multiplication is continuous (but inversion need not be): a paratopological group. A standard example showing the two notions really differ is the Sorgenfrey line: $\Bbb R$ with the lower-limit topology is a paratopological group under addition, but $x \mapsto -x$ is not continuous there, so it is not a topological group.
Universal Cover of projective plane glued to Möbius band This is the second part of exercise 1.3.21 in page 81 of Hatcher's book Algebraic topology, and the first part is answered here. Consider the usual cell structure on $\mathbb R P^2$, with one 1-cell and one 2-cell attached via a map of degree 2. Consider the space $X$ obtained by gluing a Möbius band along the 1-cell via a homeomorphism with its boundary circle. Compute $\pi_1(X)$, describe the universal cover of $X$ and the action of $\pi_1(X)$ on the universal cover. Using van Kampen's theorem, $\pi_1(X)=\langle x,y \mid x^2=1, x=y^2\rangle=\langle y\mid y^4=1\rangle=\mathbb Z_4$. I have tried gluing spheres and Möbius strips in various configurations, but have so far not been successful. Any suggestions?
Let $M$ be the Möbius band and let $D$ be the $2$-cell of $RP^2$. Then $X$ is the result of gluing $M$ to $D$ along a map $\partial D\rightarrow \partial M$ of degree $2$. Hence $\pi_1(X)$ has a single generator $\gamma$, that comes from the inclusion $M\rightarrow X$, and the attached disc $D$ kills $\gamma^4$, hence $\pi_1(X)\cong {\mathbb Z}/4$. Alternatively, take the homotopy $M\times I\rightarrow M$ that shrinks the Möbius band to its central circle $S\subset M$; this gives a homotopy from $X = M\cup D$ to the space ${\mathbb S}^1\cup_f D$, where $f\colon \partial D\rightarrow {\mathbb S}^1$ is a map of degree $4$. The universal cover of the space ${\mathbb S}^1\cup_f D$ is described in Hatcher's Example 1.47, and it is homeomorphic to the quotient of $D\times\{1,2,3,4\}$ by the relation $(x,i)\sim (y,j)$ iff $x=y\in \partial D$. Now let $D$ denote the closed unit disc in ${\mathbb R}^2$. The universal cover of the space $X$ is homeomorphic to the quotient of $D\times \{a,b,c,d\}\cup S^1\times [-1,1]$ by the relations * *$(x,a)\sim (x,b)\sim (x,1)$ for all $x\in S^1 = \partial D$ *$(x,c)\sim(x,d)\sim (x,-1)$ for all $x\in S^1=\partial D$ and $\pi_1(X)\cong {\mathbb Z}/4$ acts as follows: * *$(re^{2\pi i\theta},a)\to (re^{2\pi i(\theta + 1/4)}, c)\to (re^{2\pi i(\theta+1/2)},b)\to (re^{2\pi i (\theta + 3/4)},d)\rightarrow (re^{2\pi i\theta},a)$ for the points in the discs $D\times \{a,b,c,d\}$ *$(e^{2\pi i \theta},t)\to (e^{2\pi i (\theta+1/4)},-t)\to (e^{2\pi i (\theta + 1/2)},t)\to (e^{2\pi i(\theta + 3/4)},-t)\rightarrow (e^{2\pi i \theta},t)$ for the points in $S^1\times [-1,1]$. and the map $\tilde{X}\rightarrow X$ sends the four discs to the disc and the cylinder to the Möbius band.
Showing that the closure of the closure of a set is its closure I have the feeling I'm missing something obvious, but here it goes... I'm trying to prove that for a subset $A$ of a topological space $X$, $\overline{\overline{A}}=\overline{A}$. The inclusion $\overline{\overline{A}} \subseteq \overline{A}$ I can do, but I'm not seeing the other direction. Say we let $x \in \overline{A}$. Then every open set $O$ containing $x$ contains a point of $A$. Now if $x \in \overline{\overline{A}}$, then every open set containing $x$ contains a point of $\overline{A}$ distinct from $x$. My problem is: couldn't $\{x,a\}$ potentially be an open set containing $x$ and containing a point of $A$, but containing no other point in $\overline{A}$? (Also, does anyone know a trick to make \bar{\bar{A}} look right? The second bar is skewed to the left and doesn't look very good.)
The condition you want to check is \[ x \in \bar A \quad \Leftrightarrow \quad \text{for each open set $U$ containing $x$, $U \cap A \neq \emptyset$} \] This definition implies, among other things, that $A \subset \bar A$. Indeed, with the notation above we always have $x \in U \cap A$. Is it clear why this implies the remaining inclusion in your problem? If you instead require that $U \cap (A - \{x\}) \neq \emptyset$ then you have defined the set $A'$ of limit points of $A$. We have $\bar A = A \cup A'$. Simple examples such as $A = \{0\}$ inside of $\mathbb R$, for which $\bar A = A$ but $A' = \emptyset$, can be helpful in keeping this straight.
No pandiagonal latin squares with order divisible by 3? I would like to prove the claim that pandiagonal latin squares, which are of the form $$\begin{array}{ccccc} 0 & a & 2a & \cdots & (n-1)a \\ b & b+a & b+2a & \cdots & b+(n-1)a \\ \vdots & \vdots & \vdots & & \vdots \\ (n-1)b & (n-1)b+a & (n-1)b+2a & \cdots & (n-1)(b+a) \end{array}$$ for some $a,b\in \{0,1,\ldots,n-1\}$, cannot exist when the order is divisible by 3. I think we should be able to show this if we can show that the pandiagonal latin square of order $n$ can only exist iff it is possible to break the cells of a $n \times n$ grid into $n$ disjoint superdiagonals. Then we would show that an $n\times n$ array cannot have a superdiagonal if $n$ is a multiple of $2$ or $3$. I am, however, not able to coherently figure out either part of this proof. Could someone perhaps show me the steps of both parts? A superdiagonal on an $n \times n$ grid is a collection of $n$ cells within the grid that contains exactly one representative from each row and column as well as exactly one representative from each broken left diagonal and broken right diagonal. EDIT: Jyrki, could you please explain how the claim follows from #1?
[Edit: replacing the old set of hints with a detailed answer] Combining things from the question as well as its title and another recent question by the same user tells me that the question is the following. When $n$ is divisible by three, show that no combination of parameters, $a,b\in R=\mathbb{Z}/n\mathbb{Z}$ in a general recipe $L(i,j)=ai+bj, i,j \in R$ for constructing latin squares, gives rise to a latin square with the extra property (=pandiagonality) that the entries on each broken diagonal parallel to either main or back diagonal are all distinct. The first thing we observe is that $a$ cannot be divisible by three or, more precisely, it cannot belong to the ideal $3R$. For if it were, then all the entries on the first row would also be in the ideal $3R$. Consequently some numbers, for example $1$, would not appear at all on the first row, and the construction would not yield a latin square at all. Similarly, the parameter $b$ cannot be divisible by three, as then the first column would only contain numbers divisible by three. So both $a$ and $b$ are congruent to either $1$ or $2$ modulo $3$. We split this into two cases. If $a\equiv b \pmod 3$ (i.e. both congruent to one or both congruent to two), then $a-b$ is divisible by three. When we move one position to the right and one up in the shown diagram, we always add $a-b$ (modulo $n$) to the entry. This means that the entries along an ascending broken diagonal are all congruent to each other modulo three. Again some entries will be missing as the entries are limited to a coset of the ideal $3R$. The remaining case is that one of $a,b$ is congruent to $1$ modulo $3$ and the other is congruent to $2$ modulo $3$. In that case their sum $a+b$ will be divisible by three. When we move one position to the right and one down on the table, we add $b+a$ (modulo $n$) to the entry. This means that the entries along a descending broken diagonal are all congruent to each other modulo three, and we run into a similar contradiction as earlier. To summarize: The solution relies on the special property of number three that for all integers $a,b$ the product $ab(a-b)(a+b)$ will be divisible by three, so in one of the four critical directions (horizontal, vertical, descending/ascending diagonal) the entries will always be congruent to each other modulo $3$.
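The claim is also easy to confirm by brute force for small orders; this Python sketch builds $L(i,j)=ai+bj \bmod n$ and tests rows, columns, and both families of broken diagonals:

```python
def is_pandiagonal_latin(n, a, b):
    L = [[(a * i + b * j) % n for j in range(n)] for i in range(n)]
    lines = [set(row) for row in L]                                     # rows
    lines += [{L[i][j] for i in range(n)} for j in range(n)]            # columns
    lines += [{L[i][(i + k) % n] for i in range(n)} for k in range(n)]  # broken diagonals
    lines += [{L[i][(k - i) % n] for i in range(n)} for k in range(n)]  # broken back diagonals
    return all(len(s) == n for s in lines)

for n in (3, 5, 6, 7, 9):
    print(n, any(is_pandiagonal_latin(n, a, b)
                 for a in range(n) for b in range(n)))
# True only for n = 5 and n = 7, i.e. never when 2 or 3 divides n
```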
For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root? For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root? Approach: Case I: Only $1$ positive root; this implies $0$ lies between the roots, so $$f(0)<0$$ and $$D > 0$$ Case II: Both roots positive. It implies $0$ lies behind both the roots. So, $$f(0)>0$$ $$D≥0$$ Also, abscissa of vertex $> 0 $ I did the calculation and found the intersection but it's not correct. Please help. Thanks.
You only care about the larger of the two roots - the sign of the smaller root is irrelevant. So apply the quadratic formula to get the larger root only, which is $\frac{-2(k-1)+\sqrt{4(k-1)^2-4(k+5)}}{2} = -k+1+\sqrt{k^2-3k-4}$. You need the part inside the square root to be $\geq 0$, so $k$ must be $\geq 4$ or $\leq -1$. Now, if $k\geq 4$, then to have $-k+1+\sqrt{k^2-3k-4}>0$, you require $k^2-3k-4> (k-1)^2$, which is a contradiction. Alternately, if $k\leq -1$, then $-k+1+\sqrt{k^2-3k-4}$ must be positive, as required. So you get the required result whenever $k\leq -1$.
Evaluating the product $\prod\limits_{k=1}^{n}\cos\left(\frac{k\pi}{n}\right)$ Recently, I ran across a product that seems interesting. Does anyone know how to get to the closed form: $$\prod_{k=1}^{n}\cos\left(\frac{k\pi}{n}\right)=-\frac{\sin(\frac{n\pi}{2})}{2^{n-1}}$$ I tried using the identity $\cos(x)=\frac{\sin(2x)}{2\sin(x)}$ in order to make it "telescope" in some fashion, but to no avail. But, then again, I may very well have overlooked something. This gives the correct solution if $n$ is odd, but of course evaluates to $0$ if $n$ is even. So, I tried taking that into account, but must have approached it wrong. How can this be shown? Thanks everyone.
The roots of the polynomial $X^{2n}-1$ are $\omega_j:=\exp\left(\mathrm i\frac{2j\pi}{2n}\right)$, $0\leq j\leq 2n-1$. We can write \begin{align} X^{2n}-1&=(X^2-1)\prod_{j=1}^{n-1}\left(X-\exp\left(\mathrm i\frac{2j\pi}{2n}\right)\right)\left(X-\exp\left(-\mathrm i\frac{2j\pi}{2n}\right)\right)\\ &=(X^2-1)\prod_{j=1}^{n-1}\left(X^2-2\cos\left(\frac{j\pi}n\right)X+1\right). \end{align} Evaluating this at $X=i$, we get $$(-1)^n-1=(-2)(-2\mathrm i)^{n-1}\prod_{j=1}^{n-1}\cos\left(\frac{j\pi}n\right),$$ hence \begin{align} \prod_{j=1}^n\cos\left(\frac{j\pi}n\right)&=-\prod_{j=1}^{n-1}\cos\left(\frac{j\pi}n\right)\\ &=\frac{(-1)^n-1}{2(-2\mathrm i)^{n-1}}\\ &=\frac{(-1)^n-1}2\cdot \frac{\mathrm i^{n-1}}{2^{n-1}}. \end{align} The RHS is $0$ if $n$ is even, and $-\dfrac{(-1)^m}{2^{2m}}=-\dfrac{\sin(n\pi/2)}{2^{n-1}}$ if $n$ is odd with $n=2m+1$.
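A quick numerical confirmation of the closed form for small $n$ (Python 3.8+ for `math.prod`):

```python
import math

for n in range(1, 12):
    prod = math.prod(math.cos(k * math.pi / n) for k in range(1, n + 1))
    closed = -math.sin(n * math.pi / 2) / 2 ** (n - 1)
    print(n, round(prod, 12), round(closed, 12))
# agreement for every n; both vanish (up to rounding) when n is even
```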
Moment generating function for the uniform distribution Attempting to calculate the moment generating function for the uniform distribution, I run into a non-convergent integral. Building off the definition of the moment generating function $ M(t) = E[ e^{tx}] = \left\{ \begin{array}{l l} \sum\limits_x e^{tx} p(x) &\text{if $X$ is discrete with mass function $p( x)$}\\ \int\limits_{-\infty}^\infty e^{tx} f( x) dx &\text{if $X$ is continuous with density $f( x)$} \end{array}\right. $ and the definition of the uniform distribution $ f( x) = \left\{ \begin{array}{l l} \frac{ 1}{ b - a} & a < x < b\\ 0 & \text{otherwise} \end{array} \right. $ I end up with a non-converging integral $\begin{array}{l l} M( t) &= \int\limits_{-\infty}^\infty e^{tx} f(x) dx\\ &= \int\limits_{-\infty}^\infty e^{tx} \frac{ 1}{ b - a} dx\\ &= \left. e^{tx} \frac{ 1}{ t(b - a)} \right|_{-\infty}^{\infty}\\ &= \infty \end{array}$ I should find $M(t) = \frac{ e^{tb} - e^{ta}}{ t(b - a)}$; what am I missing here?
The limits of integration are not correct: the density is $\frac{1}{b-a}$ only on $(a,b)$ and $0$ elsewhere, so you should integrate from $a$ to $b$, not from $-\infty$ to $+\infty$.
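Carrying the corrected computation through, for $t\neq 0$, $$M(t)=\int_a^b \frac{e^{tx}}{b-a}\,dx=\left.\frac{e^{tx}}{t(b-a)}\right|_{x=a}^{x=b}=\frac{e^{tb}-e^{ta}}{t(b-a)},$$ while $M(0)=\int_a^b\frac{1}{b-a}\,dx=1$, which agrees with the limit of the displayed expression as $t\to 0$.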
Calculating maximum of function I want to determine the value of a constant $a > 0$ which gives the highest possible value of $f(x) = ax(1-x-a)$. I have tried differentiating the function to find a relation between $x$ and $a$ when $f'(x) = 0$, and found $x = \frac{1-a}{2}$. I then insert it into the original function: $f(a) = \frac{3a - 6a^2 - 5a^3}{8}$ Differentiating this gives $f'(a) = \frac{-15a^2 + 12a - 3}{8}$ I thought differentiating the function and setting it equal to $0$ would lead to finding a maximum, but I can't find it. I can't get beyond $-15a^2 + 12a = 3$. Where am I going wrong?
We can resort to some algebra along with the calculus you are using to see what happens with this function: $$f(x)=ax(1−x−a)=-ax^2+(a-a^2)x$$ Note that this is a parabola. Since the coefficient of the $x^2$ term is negative, it opens downward, so the maximum value is at the vertex. As you have already found, the vertex has x-coordinate $x=\frac{1−a}{2}$. Additionally, Théophile showed that the vertex's y-coordinate is $f(\frac{1−a}{2})=a(\frac{1−a}{2})^2$, which is unbounded as $a$ increases. So there is no value of $a>0$ that maximizes the vertex height: it can be made as large as you like by taking $a$ large. (This also shows where your computation went astray: substituting $x=\frac{1-a}{2}$ carefully gives $f(\frac{1-a}{2}) = \frac{a(1-a)^2}{4}$, not $\frac{3a-6a^2-5a^3}{8}$.)
Bounded ratio of functions If $f(x)$ is a continuous function on $\mathbb R$, and $f$ doesn't vanish on $\mathbb R$, does this imply that the function $\frac{f\,'(x)}{f(x)}$ is bounded on $\mathbb R$? The function $1/f(x)$ will be bounded because $f$ doesn't vanish, and I guess that the derivative will reduce the growth of the function $f$, so that the ratio will be bounded. Is this explanation correct?
No. In fact for any continuous function $g(x)$ on $\mathbb R$ and any $y_0 \ne 0$, the differential equation $\dfrac{dy}{dx} = g(x) y$ with initial condition $y(0) = y_0$ has a unique solution $y = f(x) = y_0 \exp\left(\int_0^x g(t)\ dt\right)$, and this is a nonvanishing differentiable function with $f'(x)/f(x) = g(x)$. So $f'/f$ can do anything a continuous function can. For a concrete example, take $g(x)=x$: then $f(x)=e^{x^2/2}$ never vanishes, yet $f'(x)/f(x)=x$ is unbounded.
simple integration question I've tried but I cannot tell whether the following is true or not. Let $f:[0,1]\rightarrow \mathbb{R}$ be a nondecreasing and continuous function. Is it true that I can find a Lebesgue integrable function $h$ such that $$ f(x)=f(0)+\int_{0}^{x}h(t)\,dt $$ with $f'=h$ almost everywhere? Any hint on how to proceed is really appreciated!
The Cantor function (call it $f$) is a counterexample. It is monotone, continuous, non-constant and has a zero derivative a.e. If the above integral formula held, then you would have $f'(x) = h(x)$ at every Lebesgue point of $h$, which is a.e. $x \in [0,1]$. Since $f'(x) = 0$ a.e., we have that $h$ is essentially zero. Since $f$ is not constant, we have a contradiction. See Rudin's "Real & Complex Analysis", Theorem 7.11 and Section 7.16, for details.
Question about direct sum of Noetherian modules is Noetherian Here is a corollary from Atiyah-Macdonald: Question 1: The corollary states that finite direct sums of Noetherian modules are Noetherian. But they prove that countably infinite sums are Noetherian, right? (so they prove something stronger) Question 2: I have come up with the following proof of the statement in the corollary, can you tell me if it's correct? Thank you: Assume $M_i$ are Noetherian and let $(\bigoplus_i L_i)_n$ be an increasing sequence of submodules in $\bigoplus_i M_i$. Then in particular, $L_{in}$ is an increasing sequence in $M_i$ and hence stabilises; that is, for $n$ greater than some $N_i$, $L_{i,n} = L_{i,n+1} = \dots$ Now set $N = \max_i N_i$. Then $(\bigoplus_i L_i)_n$ stabilises for $n> N$ and is equal to $\bigoplus_i L_i$, where $L_i = L_{iN_i}$. This proves that finite direct sums of Noetherian modules are Noetherian, so it's a bit weaker. But if it's correct it proves the corollary.
As pointed out by others, the submodules of $M=M_1\oplus M_2$ are not necessarily direct sums of submodules of $M_1$ and $M_2$. Nevertheless, you always have the exact sequence $$0\rightarrow M_1\rightarrow M\rightarrow M_2\rightarrow 0$$ Then $M$ is Noetherian if (and only if) $M_1$ and $M_2$ are Noetherian. One direction is trivial (if $M$ is Noetherian then $M_1$ and $M_2$ are Noetherian). I prove the other direction here: Assume $M_1$ and $M_2$ are Noetherian. Given nested submodules $N_1\subset N_2\subset \cdots$ in $M$, we can see their images stabilize in $M_1$ and $M_2$. More precisely, the chain $$(N_1\cap M_1)\subset (N_2\cap M_1)\subset\cdots$$ terminates in $M_1$, say at length $j_1$, and so does $$(N_1+M_1)/M_1\subset (N_2+M_1)/M_1\subset\cdots$$ in $M_2$, say at length $j_2$. Set $j=\max\{j_1,j_2\}$ to get $$(N_{j}\cap M_1)=(N_{j+1}\cap M_1)=\cdots$$ in $M_1$ and $$(N_{j}+M_1)/M_1=(N_{j+1}+M_1)/M_1=\cdots$$ in $M_2$. But the $N_j$ are nested modules. Hence the above equalities can occur if and only if $N_j=N_{j+1}=\cdots$. To check this claim, pick $n\in N_{j+1}$. Since $(N_{j+1}+M_1)/M_1=(N_{j}+M_1)/M_1$, we have $n+M_1 = n'+M_1$ for some $n'\in N_{j}$, so $m:=n'-n\in M_1$. But $m\in N_{j+1}$ as well. Hence $m\in N_{j+1}\cap M_1$, that is, $m\in N_{j}\cap M_1$ by the first equality above. So $m\in N_j$, that is, $n\in N_j$, giving us $N_{j+1}\subset N_j$. So the chain $N_1\subset N_2\subset \cdots$ terminates in $M$.
Check if two 3D vectors are linearly dependent I would like to determine with code (C++ for example) if two 3D vectors are linearly dependent. I know that if I could determine that the expression $v_1 = k \cdot v_2$ is true then they are linearly dependent; they are linearly independent otherwise. I've tried to construct a system of equations to determine that, but since there could be zeros anywhere it gets very tricky and could end with divisions by zero and the like. I've also thought about using matrices/determinants, but since the matrix would look like: $$ \begin{pmatrix} x_1 & y_1 & z_1\\ x_2 & y_2 & z_2\\ \end{pmatrix} $$ I don't see an easy way to check for the linear dependency... any ideas? Thanks!
Here is the portion of the code you need. The three quantities tested are the components of the cross product $v_1\times v_2$ (equivalently, the three $2\times 2$ minors of your $2\times 3$ matrix), and the cross product is the zero vector exactly when the vectors are linearly dependent:

// all three 2x2 minors (= cross-product components) must vanish
if((x1*y2 - x2*y1) != 0 || (x1*z2 - x2*z1) != 0 || (y1*z2 - y2*z1) != 0)
{
    // at least one minor is nonzero: the vectors are independent
}
else
{
    // the cross product is the zero vector: linearly dependent
}
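For floating-point coordinates, testing exact equality with 0 is fragile. A self-contained version with a small tolerance (a sketch; the function name and the choice of epsilon are mine) could look like:

#include <cmath>
#include <iostream>

// True when the cross product of (x1,y1,z1) and (x2,y2,z2) is numerically zero.
bool linearlyDependent(double x1, double y1, double z1,
                       double x2, double y2, double z2,
                       double eps = 1e-9) {
    return std::fabs(x1 * y2 - x2 * y1) < eps &&
           std::fabs(x1 * z2 - x2 * z1) < eps &&
           std::fabs(y1 * z2 - y2 * z1) < eps;
}

int main() {
    std::cout << linearlyDependent(1, 2, 3, 2, 4, 6) << "\n"; // 1: second = 2 * first
    std::cout << linearlyDependent(1, 0, 0, 0, 1, 0) << "\n"; // 0: independent
}

(An absolute epsilon is only appropriate when the coordinates have moderate size; for widely varying magnitudes a relative tolerance is safer.)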
Computing a sum of binomial coefficients: $\sum_{i=0}^m \binom{N-i}{m-i}$ Does anyone know a better expression than the current one for this sum? $$ \sum_{i=0}^m \binom{N-i}{m-i}, \quad 0 \le m \le N. $$ It would help me compute a lot of things and make equations a lot cleaner in the context where it appears (as some asymptotic expression of the coefficients of a polynomial defined over cyclic graphs). Perhaps the context doesn't help much though. For instance, if $N = m$, the sum is $N+1$, and for $N = m+1$, this sum is $N + (N-1) + \dots + 1 = N(N+1)/2$. But otherwise I don't know how to compute it.
Here's my version of the proof (and where it arose in the context). You have a cyclic graph of size $n$, which is essentially $n$ vertices and a loop passing through them, so that you can think of them as the roots of unity on a circle (I don't want to draw because it would take a lot of time, but it really helps). You want to count the number of subsets of size $k$ ($0 \le k \le n$) of this graph which contain 3 consecutive vertices (for some random reason that actually makes sense to me, but the rest is irrelevant). You can fix 3 consecutive vertices, assume that the two vertices adjacent to those 3 are not in the subset, and then place the $k-3$ remaining anywhere else. Or you can fix $4$ consecutive vertices, assume again that the two vertices adjacent to those $4$ are not in the subset, and then place the $k-4$ remaining anywhere else. Or you can fix $5$, and so on. If you assume $n$ is prime, every rotation of the positions you've just counted gives you a distinct subset of the graph, so that the number of such subsets would be $$ n\sum_{i=0}^{k-3} \binom{n-5-i}{k-3-i}. $$ On the other hand, if you look at it in another way: suppose there are $\ell$ consecutive vertices in your subset. Look at the $3$ vertices among the $\ell$ consecutive ones that are adjacent to a vertex not in your subset, and place the remaining ones everywhere else. Counted this way, the number of such subsets is $$ n\binom{n-4}{k-3}. $$ If $n$ is not prime, my computations are not accurate (they don't count well what I want to count in my context, but that's another question I'm trying to answer for myself), but I make precisely the same mistakes in both computations! So the two are actually equal. Removing the $n$'s on both lines, replacing $n-5$ and $k-3$ by $N$ and $m$, we get the identity.
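In other words, the sum in the question collapses to $\binom{N+1}{m}$ (take $N=n-5$, $m=k-3$ above). A quick computational check of that closed form (my own sketch; the helper binom is mine):

#include <iostream>

// Binomial coefficient via the multiplicative formula (small inputs only).
long long binom(int n, int k) {
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (int i = 0; i < k; ++i) r = r * (n - i) / (i + 1);
    return r;
}

int main() {
    for (int N = 0; N <= 12; ++N)
        for (int m = 0; m <= N; ++m) {
            long long lhs = 0;
            for (int i = 0; i <= m; ++i) lhs += binom(N - i, m - i);
            if (lhs != binom(N + 1, m))
                std::cout << "mismatch at N=" << N << ", m=" << m << "\n";
        }
    std::cout << "all cases agree with binom(N+1, m)\n";
}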
checking whether certain numbers have an integral square root I was wondering if it is possible to find out all $n \geq 1$, such that $3n^2+2n$ has an integral square root, that is, there exists $a \in \mathbb{N}$ such that $3n^2+2n = a^2$ Also, similarly for $(n+1)(3n+1)$. Thanks for any help!
$3n^2+2n=a^2$, $9n^2+6n=3a^2$, $9n^2+6n+1=3a^2+1$, $(3n+1)^2=3a^2+1$, $u^2=3a^2+1$ (where $u=3n+1$), $u^2-3a^2=1$, and that's an instance of Pell's equation, and you'll find tons of information on solving those in intro Number Theory textbooks, and on the web, probably even here on m.se. Try similar manipulations for your other question.
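The small solutions are easy to find by brute force (my own sketch; the search bound is arbitrary), and they line up with the Pell solutions having $u=3n+1$: the program prints $n = 2, 32, 450, \dots$

#include <cmath>
#include <iostream>

int main() {
    // Search for n with 3n^2 + 2n a perfect square.
    for (long long n = 1; n <= 2000000; ++n) {
        long long v = 3 * n * n + 2 * n;
        long long a = std::llround(std::sqrt((double)v));
        if (a * a == v)
            std::cout << "n = " << n << ", a = " << a << "\n";
    }
}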
Functional and Linear Functional May I know what the distinction is between functional analysis and linear functional analysis? I did a search online and came to the conclusion that linear functional analysis is not functional analysis, and I am getting confused by this. When I look at the book Linear Functional Analysis by Joan Cerda, published by the AMS, there is not much difference in its content compared to other titles with just the heading Functional Analysis. I hope someone can clarify this for me. Thank you.
It is mostly a matter of taste. Traditionally functional analysis has dealt mostly with linear operators, whereas authors would typically title their books nonlinear functional analysis if they consider the theory of differentiable (or continuous or more general) mappings between Banach (or more general function) spaces. But it seems more recently the subject of "nonlinear functional analysis" has been gaining traction and many more people now adhere to the philosophy "classifying mathematical problems as linear and nonlinear is like classifying the universe as bananas and non-bananas." So instead of labeling all the non-bananas, some people now agree that it makes more sense to instead label the bananas...
What is the formula to find the count of (sets of) maps between sets that equal the identity map? Given two finite sets $A$ and $B$, what is the formula to find the number of pairs of mappings $f, g$ such that $g \circ f = 1_A$?
A function $f\colon A\to B$ has a left inverse, $g\colon B\to A$ such that $g\circ f = 1_A$, if and only if $f$ is one-to-one. Given a one-to-one function $f\colon A\to B$, the values of a left inverse $g\colon B\to A$ of $f$ on $f(A)$ are uniquely determined; the values on $B-f(A)$ are free. I'm going to assume first that the sets are finite. Therefore: the number of ordered pairs $(f,g)\in \{a\colon A\to B\}\times\{b\colon B\to A\}$ such that $g\circ f = 1_A$ is: * *$0$ if $|A|\gt |B|$ (no injections from $A$ to $B$); *if $|A|\leq |B|$, say $|A|=n\leq m=|B|$, then there are $\frac{m!}{(m-n)!}$ ways of mapping $A$ to $B$ in a one-to-one manner; and there are $n^{m-n}$ ways of mapping $B-f(A)$ to $A$ (choose which of the $n$ things in $A$ each of the $m-n$ things in $B-f(A)$ is mapped to). So the number of pairs is $$\frac{n^{m-n}m!}{(m-n)!}.$$ Though not necessary, I had already written much of what is below, so here goes: For infinite sets the situation is a bit more complicated, but the analysis is the same. The number of sets of size $|A|$ contained in $B$ (necessary for embeddings) is either $|B|$ if $|A|\lt |B|$, or is $2^{|A|}$ when $|A|=|B|$; then you need to count the number of bijections between two sets of cardinality $|A|$. This is $2^{|A|}$ (I believe). Thus, there are $|B|2^{|A|}$ one-to-one functions from $A$ to $B$ when $|A|\leq|B|$. Then you should have $2^{|B|}$ ways to complete "most" of those functions to full left inverses $B\to A$. So I'm going to hazard a guess that the number of pairs would be $0$ if $|A|\gt |B|$, and $|B|2^{|A|}2^{|B|} = 2^{|B|}$ if $|A|\leq |B|$ and $|A|$ is infinite. I'll leave it for someone who knows more cardinal arithmetic than I (Asaf? Andres?) to correct that if I'm wrong. Finally, if $A$ is finite and $B$ is infinite, there are $|B|$ possible one-to-one functions $A\to B$, and then $2^{|B|}$ ways of mapping the complement of the image back to $A$, so that would also give $2^{|B|}$ pairs in this case.
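For the finite case, a tiny brute force (my own sketch; countPairs and the digit encodings are mine) confirms the count $n^{m-n}\,m!/(m-n)!$ on small examples:

#include <iostream>

// Count pairs (f, g) with f: A -> B, g: B -> A and g(f(x)) = x for all x.
// Functions are encoded as base-m (resp. base-n) digit strings; n, m <= 8.
long long ipow(long long b, int e) {
    long long r = 1;
    while (e-- > 0) r *= b;
    return r;
}

long long countPairs(int n, int m) {          // n = |A|, m = |B|
    long long total = 0;
    for (long long fc = 0; fc < ipow(m, n); ++fc)
        for (long long gc = 0; gc < ipow(n, m); ++gc) {
            int f[8], g[8];
            long long t = fc;
            for (int x = 0; x < n; ++x) { f[x] = t % m; t /= m; }
            t = gc;
            for (int y = 0; y < m; ++y) { g[y] = t % n; t /= n; }
            bool ok = true;
            for (int x = 0; x < n; ++x)
                if (g[f[x]] != x) { ok = false; break; }
            if (ok) ++total;
        }
    return total;
}

int main() {
    std::cout << countPairs(2, 3) << "\n";  // formula: 2^1 * 3!/1! = 12
    std::cout << countPairs(2, 4) << "\n";  // formula: 2^2 * 4!/2! = 48
    std::cout << countPairs(3, 3) << "\n";  // formula: 3^0 * 3!/0! = 6
}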
Are there "one way" integrals? If we suppose that we can start with any function we like, can we work "backwards" and differentiate the function to create an integral that is hard to solve? To define the question better, let's say we start with a function of our choosing, $f(x)$. We can then differentiate the function with respect to $x$ do get $g(x)$: $$g(x) = f'(x)$$ This, in turn, implies, under appropriate conditions, that the integral of $g(x)$ is $f(x)$: $$\int_a^b { g(x) dx } = [f(x)]_a^b$$ I'm wondering what conditions are appropriate to allow one to easily get a $g(x)$ and $f(x)$ that assure that $f(x)$ can't be easily found from $g(x)$. SUMMARY OF THE QUESTION Can we get a function, $g(x)$, that is hard to integrate, yet we know the solution to? It's important that no one else should be able to find the solution, $f(x)$, given only $g(x)$. Please help! POSSIBLE EXAMPLE This question/integral seems like it has some potential. DEFINITION OF HARDNESS The solution to the definite integral can be returned with the most $n$ significant digits correct. Then it is hard to do this if the time it takes is an exponential number of this time. In other words, if we get the first $n$ digits correct, it would take roughly $O(e^n)$ seconds to do it.
Here is an example (by Harold Davenport) of a function that is hard to integrate: $$\int{2t^6+4t^5+7t^4-3t^3-t^2-8t-8\over (2t^2-1)^2\sqrt{t^4+4t^3+2t^2+1}}\,dt\ .$$ The primitive is given by (check it!) $$\eqalign{&-2\log(t^2+4t+2)-\log(\sqrt2+2t)\cr&+ \log\left(\sqrt2 t+2\sqrt2-2\sqrt{t^4+4t^3+2t^2+1}\,\right)\cr &-5\log(2t^2-1)-5\log\left(2\sqrt2 t+4\sqrt2 -t^2-4t-6\right) \cr&+ 5\log\left(\bigl(4\sqrt2+19\bigr)\sqrt{t^4+4t^3+2t^2+1}\right. \cr &\qquad\qquad\left. - 16\sqrt2 t^2 -8\sqrt2 t +6\sqrt2 -29t^2-38t+5\right)\cr}$$ $$\eqalign{ &+ 2\log\left(\bigl(10\sqrt2+17\bigr)\sqrt{t^4+4t^3+2t^2+1}\right.\cr &\qquad\qquad\left.+4\sqrt2 t^2+16\sqrt2 t -2\sqrt2-11t^2-44t-39\right) \cr &+ {1\over2}\log\left(\bigl(731\sqrt2 t+71492\sqrt2-70030t-141522\bigr) \sqrt{t^4+4t^3+2t^2+1} \right.\cr &\qquad\qquad-40597\sqrt2t^3-174520\sqrt2t^2-122871\sqrt2 t+50648\sqrt2\cr&\qquad\qquad\left.+90874t^3+ 403722t^2+ 272622t-61070 \vphantom{\sqrt{t^4}}\right)\cr & + {(2t+1)\sqrt{t^4+4t^3+2t^2+1}\over 4t^2-2}\ .\cr}$$
What is the difference between normal and perpendicular? What is the difference when a line is said to be normal to another and a line is said to be perpendicular to other?
"Normal" suggests 90 degrees to a surface, which means the term is naturally used in 3D. "Perpendicular" in that case is more of a 2D notion, referring to two entities of the same kind (line-line) with a 90 degree angle between them. "Normal" can relate two different kinds of entities (line-surface), which makes the term natural in the 3D case (I definitely remember hearing the phrase "line normal to a surface" rather than "line perpendicular to a surface"); a line and a surface make a 3D configuration, while two lines span a single plane, hence 2D. I just figured this out from one of the answers here, which made the most sense to me.
Need to find the recurrence equation for coloring a 1 by n chessboard So the question asks me to find the number of ways H[n] to color a 1 by n chessboard with 3 colors - red, blue and white - such that the number of red squares is even and the number of blue squares is at least one. I am doing it in this way:
1. If the first square is white then the remaining n-1 squares can be colored in H[n-1] ways.
2. If the first square is red then another red will be needed among the n-1 remaining squares, and the rest n-2 can be colored in H[n-2] ways (i.e. (n-1)*H[n-2]).
3. And now is the problem with blue. If I put blue in the first square and say that the rest n-1 squares can be colored in H[n-1] ways, that will be wrong, as I already have a blue and may not need any more (while H[n-1] requires at least one blue).
I thought of adding H'[n-1] to H[n] = H[n-1] + (n-1)*H[n-2], which gives H[n] = H[n-1] + (n-1)*H[n-2] + H'[n-1], where H'[n] is the number of ways to fill n squares with no blue squares (so H'[n] = (n-1)*H'[n-2] + H'[n-1]). So now I'm kind of confused about how to solve such an equation: H[n] = H[n-1] + (n-1)*H[n-2] + H'[n-1]. (I am specifically asked not to use exponential generating functions to solve the problem.)
Letting $H[n]$ be the number of ways to color the chessboard with an even number of red squares and at least one blue square, and letting $G[n]$ be the very closely related number of ways to color the chessboard with an odd number of red squares and at least one blue square, we notice that $H[n]+G[n]$ is the number of ways to color with at least one blue square, regardless of the condition on red squares. We then get $H[n] + G[n] = 3^n - 2^n$. We can similarly define $H'[n]$ and $G'[n]$ as the answers to the related counting problems where the condition on blue squares is removed. Let us solve the problem with no condition on the blues first. The first square can be either white or blue, followed by a board with an even number of red squares, or it can be red, followed by a board with an odd number of red squares. We further notice that $H'[n]+G'[n]=3^n$. We then have $H'[n] = 2H'[n-1] + G'[n-1] = 2H'[n-1]+3^{n-1}-H'[n-1] = H'[n-1]+3^{n-1}$. Solving this along with the initial condition $H'[1]=2$, it follows that $H'[n]=\frac{1}{2}(3^n+1)$. Back to the original problem: if the first square is white, the remaining $n-1$ squares must have an even number of red squares and at least one blue; if the first square is red, the remaining $n-1$ squares must have an odd number of red squares and at least one blue; and finally, if the first square is blue, the remaining squares must have an even number of red squares with no further condition on blue squares. We then have $H[n] = H[n-1] + G[n-1] + H'[n-1]$, and simplifying, $H[n] = H[n-1] + 3^{n-1}-2^{n-1}-H[n-1]+\frac{1}{2}(3^{n-1}+1) = \frac{1}{2}(3^n-2^n+1)$; conveniently, all of the lower-order terms cancel and we arrive directly at a closed form solution.
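A brute-force cross-check of the closed form (my own sketch; encoding colorings as base-3 digit strings is my choice):

#include <iostream>

// Brute-force count of 3-colorings (0 = red, 1 = blue, 2 = white) of a
// 1-by-n board with an even number of reds and at least one blue,
// compared against the closed form (3^n - 2^n + 1) / 2.
int main() {
    for (int n = 1; n <= 12; ++n) {
        long long p3 = 1, p2 = 1;
        for (int i = 0; i < n; ++i) { p3 *= 3; p2 *= 2; }
        long long count = 0;
        for (long long c = 0; c < p3; ++c) {
            long long t = c;
            int reds = 0, blues = 0;
            for (int i = 0; i < n; ++i) {
                int d = t % 3; t /= 3;
                if (d == 0) ++reds;
                else if (d == 1) ++blues;
            }
            if (reds % 2 == 0 && blues >= 1) ++count;
        }
        std::cout << "n=" << n << "  brute force=" << count
                  << "  closed form=" << (p3 - p2 + 1) / 2 << "\n";
    }
}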
Proof of the divisibility rule of 17. Rule: Subtract 5 times the last digit from the rest of the number, if the result is divisible by 17 then the number is also divisible by 17. How does this rule work? Please give the proof.
Let $$n=\sum_{k=0}^N 10^k a_k$$ be the number we want to test for divisibility, where the $a_k$ are the digits in the decimal expansion of $n$. We form the second number $m$ by the process you describe, $$m = \sum_{k=1}^N 10^{k-1} a_k - 5 a_0 = \frac{n-a_0}{10}- 5 a_0 $$ Now suppose $17\mid m$. Then there exists a natural number $b$ such that $17b = m$. We then have $$ 17 b = \frac{n-a_0}{10}- 5 a_0 $$ $$ 170 b = n-a_0- 50 a_0 = n - 51 a_0 \implies n= 170b + 51a_0 = 17(10b + 3a_0) $$ and so $n$ is divisible by 17. (Since $10m = n - 51a_0$ with $51 = 3\cdot 17$ and $\gcd(10,17)=1$, the converse holds as well: if $17\mid n$ then $17\mid m$, so the rule may be applied repeatedly.)
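Because divisibility is preserved in both directions, the rule can be iterated; a quick sketch (the function name and the cutoff 170 are my own choices) that checks it against ordinary remainder arithmetic:

#include <iostream>

// Apply the rule repeatedly: strip the last digit and subtract 5 times it
// from the rest, until the number is small enough to test directly.
bool divisibleBy17(long long n) {
    if (n < 0) n = -n;
    while (n > 170) {
        long long last = n % 10;
        n = n / 10 - 5 * last;
        if (n < 0) n = -n;
    }
    return n % 17 == 0;
}

int main() {
    // Sanity check against the built-in remainder operator.
    for (long long n = 0; n <= 100000; ++n)
        if (divisibleBy17(n) != (n % 17 == 0)) {
            std::cout << "rule fails at n = " << n << "\n";
            return 1;
        }
    std::cout << "rule agrees with n % 17 for all tested n\n";
}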