title | upvoted_answer
---|---|
Is there a simple example of a transitive vector field on the three sphere? | You can define a nowhere-zero vector field on any odd-dimensional sphere. For instance, one is given by embedding the $(2n-1)$-sphere as the unit sphere in $\Bbb R^{2n}$, and at the point $x = (x_1, x_2, x_3,\ldots,x_{2n})\in S^{2n-1}$ define the tangent vector
$$
v_x = (x_2, -x_1, x_4, -x_3, \ldots, x_{2n}, -x_{2n-1})
$$
This is readily seen to be orthogonal to $x$, and therefore tangent to the unit sphere at $x$. At the same time, it is never zero, because the origin of $\Bbb R^{2n}$ is not part of our sphere.
On an even-dimensional sphere this can never be done, for the same reasons that it cannot be done on $S^2$ in particular. For instance, my proof here (stolen from Theorem 2.28 in Hatcher, where my above example for odd-dimensional spheres can also be found) covers all of those cases simultaneously. |
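As a quick numerical sanity check of the construction above (for $S^3$, i.e. $n=2$), the sketch below verifies that $v_x$ is orthogonal to $x$ and never zero at a random point of the sphere.

```python
import math
import random

def tangent_field(x):
    """The nowhere-zero tangent field (x2, -x1, x4, -x3) on S^3 in R^4."""
    x1, x2, x3, x4 = x
    return (x2, -x1, x4, -x3)

random.seed(0)
# Draw a random point and project it onto the unit sphere S^3.
p = [random.gauss(0, 1) for _ in range(4)]
norm = math.sqrt(sum(c * c for c in p))
x = tuple(c / norm for c in p)

v = tangent_field(x)
dot = sum(a * b for a, b in zip(x, v))    # orthogonality: <x, v_x> = 0
speed = math.sqrt(sum(c * c for c in v))  # |v_x| = |x| = 1, so never zero
```

Since $v_x$ only permutes and negates the coordinates of $x$, its length equals $|x|=1$, which is why it cannot vanish on the sphere.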
A family of polynomials whose roots all have moduli 1 | This follows from surprisingly simple properties of the sine function, when we make the substitution $x=e^{it}, t\in[0,2\pi)$.
We see that
$$
x^{-(k+1)}f_k(x)=k\left(x^{k+1}-\frac1{x^{k+1}}\right)+(k+1)\left(x^k-\frac1{x^k}\right).
$$
So with $x=e^{it}$ we see that $f_k(e^{it})=0$ if and only if
$$
g_k(t)=k\sin((k+1)t)+(k+1)\sin(kt)=0.
$$
It is easy to check that $f_k(x)$ has a simple zero at $x=1$ and a triple zero at $x=-1$ corresponding to a simple zero of $g_k(t)$ at $t=0$ and a triple zero at $t=\pi$.
Let's look at the sign changes in the interval $[0,\pi]$ (the other half is symmetric because $g_k(t)$ is an odd function). Fix an integer $j$ in the range $0<j\le k$. At $t_j:=j\pi/(k+1)$ we have
$$
\begin{aligned}
g_k(t_j)&=k\sin((k+1)j\pi/(k+1))+(k+1)\sin(kj\pi/(k+1))\\
&=(k+1)\sin(kj\pi/(k+1)).
\end{aligned}
$$
For this range of $j$ we have $\lfloor\frac{kj}{k+1}\rfloor=j-1$. It follows that $kt_j\in ((j-1)\pi,j\pi)$, implying that $g_k(t_j)$ has the same sign as $(-1)^{j+1}$ for all choices
$j=1,2,\ldots,k$. Therefore $g_k(t)$ has a zero in all the intervals $(t_j,t_{j+1})$,
$j=1,2,\ldots,k-1$, for a total of $k-1$ zeros in the interval $(0,\pi)$. Together with the $(k-1)$ mirror image zeros in the interval $t\in(\pi,2\pi)$ and the four known zeros (simple at $t=0$ and triple at $t=\pi$) we have accounted for
$$
N=1+3+2(k-1)=2k+2=\deg f_k(x)
$$
zeros of $f_k(x)$ on the unit circle. The claim follows.
In the plot below $k=7$. The more complicated (blue) wave is $g_7(t)$. The simpler (orange) wave is $8\sin 7t$. Its sign determines the sign of $g_7(t)$ at the points $t_1,\ldots,t_7$. These are the points in $(0,\pi)$ where the two waves intersect, and they are also tick-marked on the horizontal axis. |
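The sign pattern that drives the argument can be confirmed numerically; this sketch just evaluates $g_k$ at the points $t_j=j\pi/(k+1)$ for $k=7$.

```python
import math

def g(k, t):
    """g_k(t) = k sin((k+1)t) + (k+1) sin(kt)."""
    return k * math.sin((k + 1) * t) + (k + 1) * math.sin(k * t)

k = 7
# At t_j = j*pi/(k+1) the first term vanishes, so the sign of g_k(t_j)
# is the sign of (k+1) sin(k j pi/(k+1)), which should be (-1)^(j+1).
signs = []
for j in range(1, k + 1):
    t_j = j * math.pi / (k + 1)
    signs.append(1 if g(k, t_j) > 0 else -1)

expected = [(-1) ** (j + 1) for j in range(1, k + 1)]
```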
Converting Between Non-Decimal Bases? | You can use the same process you use for converting to/from base 10, but you will have to do arithmetic in another base. $17_{10}=32_5$, so $0.2E9B03_{17}=\frac{2\cdot 32^5+24\cdot 32^4+14\cdot 32^3+21\cdot 32^2+3}{32^6}$, where every numeral is written in base $5$ (for instance the digit $E=14_{10}$ becomes $24_5$). Now multiply out the numerator and do long division in base $5$ to get the part right of the fraction point. For whole numbers you do the same thing without the division. |
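As an illustration, here is a small sketch of the procedure. It uses exact rational arithmetic in place of literal base-5 long division (an implementation convenience, not part of the answer's method) to produce the base-5 digits of the fractional part.

```python
from fractions import Fraction

DIGITS = "0123456789ABCDEFG"  # digit symbols for bases up to 17

def frac_digits(num_str, src_base, dst_base, ndigits):
    """Convert the fractional part 0.<num_str> from src_base to dst_base.

    Exact rational arithmetic stands in for long division: repeatedly
    multiply by dst_base and peel off integer parts as digits.
    """
    value = Fraction(0)
    for i, d in enumerate(num_str, start=1):
        value += Fraction(DIGITS.index(d), src_base ** i)
    out = []
    for _ in range(ndigits):
        value *= dst_base
        digit = int(value)   # integer part is the next digit
        out.append(digit)
        value -= digit
    return out

digits5 = frac_digits("2E9B03", 17, 5, 8)  # first 8 base-5 digits
```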
Perspective of log-sum-exp as exponential cone | I will write up the solution for the sake of completeness and because the question seems to have appeared elsewhere: http://ask.cvxr.com/t/convert-the-perspective-function-of-log-sum-exp-to-cvx/5232
The inequality $$t\geq x_0 f(x/x_0)$$ is
$$t\geq x_0\cdot\log(\exp(x_1/x_0)+\cdots+\exp(x_n/x_0)).$$
Since $x_0>0$, dividing by it and then following the same procedure as for log-sum-exp in https://docs.mosek.com/modeling-cookbook/expo.html#log-sum-exp we arrive at the inequality
$$1\geq \sum \exp\left(\frac{x_i-t}{x_0}\right)$$
which we can rewrite as
$$x_0\geq \sum x_0\exp\left(\frac{x_i-t}{x_0}\right)$$
and eventually
$$x_0\geq \sum z_i,\quad z_i\geq x_0\exp\left(\frac{x_i-t}{x_0}\right)$$
with exponential cones on the right-hand side. |
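The equivalence between the original constraint and the reformulated one is easy to spot-check numerically; the sketch below (plain Python, random test points, helper name mine) assumes nothing beyond the formulas above.

```python
import math
import random

def persp_lse(x0, xs):
    """x0 * log-sum-exp(x/x0), the perspective of log-sum-exp (x0 > 0)."""
    return x0 * math.log(sum(math.exp(xi / x0) for xi in xs))

random.seed(1)
ok = True
for _ in range(1000):
    x0 = random.uniform(0.1, 5.0)
    xs = [random.uniform(-3, 3) for _ in range(4)]
    t = random.uniform(-5, 15)
    lhs = t >= persp_lse(x0, xs)                            # original constraint
    rhs = sum(math.exp((xi - t) / x0) for xi in xs) <= 1.0  # reformulated
    if lhs != rhs:
        ok = False
```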
Shortest distance from point and line | Calculate the intersection line:
$$\begin{cases}x+y&+&z&=0\\x&-&z&=4\end{cases}\implies y=-4-2z\;,\;\;x=4+z$$
and the line is $\;(4,-4,0)+t(1,-2,1)\;,\;\;t\in\Bbb R\;$ , so the distance is
$$\frac{||\;\left[(0,1,2)-(4,-4,0)\right]\times\left[(0,1,2)-(5,-6,1)\right]\;||}{||\;(4,-4,0)-(5,-6,1)\;||}=\frac{||\;(-4,5,2)\times(-5,7,1)\;||}{||\;(-1,2,-1)\;||}=$$
$$\frac{||\;(-9,-6,-3)\;||}{\sqrt6}=\frac{\sqrt{126}}{\sqrt6}=\sqrt{21}$$ |
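The computation is easy to verify numerically; this sketch recomputes the distance from the cross-product formula.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(sum(x * x for x in a))

P = (0, 1, 2)
A = (4, -4, 0)   # point on the line (t = 0)
B = (5, -6, 1)   # point on the line (t = 1)

# Both points satisfy x + y + z = 0 and x - z = 4.
on_planes = all(p[0] + p[1] + p[2] == 0 and p[0] - p[2] == 4 for p in (A, B))

# Distance from P to the line through A and B.
dist = norm(cross(sub(P, A), sub(P, B))) / norm(sub(A, B))
```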
Probability of selecting elements with replacement | If you're drawing with equal probability and subsequent draws are independent of each other, then the probability that you do not draw, for example, "1" in a single draw from $M$ elements is $\frac{M-1}{M}$.
The probability that you do not draw any "1"s in $N$ draws is the product of the individual probabilities, i.e. $\frac{M-1}{M}\cdot\frac{M-1}{M}\cdot\ldots\cdot\frac{M-1}{M}=\left(\frac{M-1}{M}\right)^N = \left(1 - \frac{1}{M}\right)^N$.
The probability that you draw at least 1 "1" in the $N$ draws is the complement of the probability that you draw none, i.e. it is $1 - \left(1 - \frac{1}{M}\right)^N$. |
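The formula can be cross-checked by exact counting, since all $M^N$ draw sequences are equally likely:

```python
def prob_at_least_one(M, N):
    """P(at least one '1' in N independent uniform draws from M symbols)."""
    return 1 - (1 - 1 / M) ** N

# Exact count: of the M**N equally likely sequences, (M-1)**N contain no '1'.
M, N = 6, 4
exact = (M ** N - (M - 1) ** N) / M ** N
formula = prob_at_least_one(M, N)
```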
Sum of digits divisible by $27$ | New answer instead of editing the - already accepted - answer because the argument is significantly different (and much simpler).
Let $Q(x)$ denote the digit sum of $x$.
Let $r\ge 1$. Then $n=10^r-1=\underbrace{99\ldots 9}_r$ is the smallest $n$ such that among any $n$ consecutive positive integers, at least one has digit sum a multiple of $9r$.
That no smaller $n$ works is immediately clear, because in $1,2,3,\ldots, 10^r-2$, all digit sums are $>0$ and $<9r$.
It remains to show that in any sequence of $n$ consecutive integers, a digit sum divisible by $9r$ occurs.
This is well-known for $r=1$. For $r>1$, consider $n$ consecutive positive integers
$$a,a+1,\ldots, a+n-1. $$
Among the first $9\cdot 10^{r-1}=n-(10^{r-1}-1)$ terms, one is a multiple of $9\cdot 10^{r-1}$. Say, $9\cdot 10^{r-1}\mid a+k=:b$ with $0\le k<9\cdot 10^{r-1}$. Then $Q(b)$ is a multiple of $9$, and as the lower $r-1$ digits of $b$ are zero, we have $Q(b+i)=Q(b)+Q(i)$ for $0\le i<10^{r-1}$ and hence
$$Q( b+10^j-1)=Q(b)+9j,\qquad 0\le j\le r-1.$$ (Note that $k+10^{r-1}-1<10^r-1$, so these terms are really all in our given sequence). It follows that $9r$ divides one of these $Q(b+10^j-1)$. |
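For $r=2$ (digit sums divisible by $18$, window $n=99$) the claim is easy to test by brute force; the sketch below checks both minimality and sufficiency over a range of starting points.

```python
def digit_sum(x):
    return sum(int(c) for c in str(x))

r = 2
n = 10 ** r - 1  # 99

# No smaller window works: 1..98 contains no digit sum divisible by 18.
small_window_fails = all(digit_sum(x) % (9 * r) != 0 for x in range(1, n))

# Every window of 99 consecutive integers (tested for many starting points)
# contains a digit sum divisible by 18.
windows_ok = all(
    any(digit_sum(a + i) % (9 * r) == 0 for i in range(n))
    for a in range(1, 3000)
)
```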
$p \mid q^r+1$, Prove that either $2r\mid p-1$ or $p \mid q^2-1$ | Note that $\gcd(2r,p-1)$ is not always $2$: since $r$ is a prime and $p-1$ is even, if $\gcd(2r,p-1)\neq 2$, then $r\mid p-1$, and hence $2r\mid p-1$. |
Does $\sum_{n=1}^{\infty} \frac1{2^{\sqrt{n}}}$ converge | The integral test tells us that $$\int_{1}^{\infty}\frac{1}{2^{\sqrt{t}}}dt\stackrel{u=\sqrt{t}}{=}2\int_{1}^{\infty}\frac{u}{2^{u}}du
$$ $$\stackrel{IBP}{=}\frac{1}{\log\left(2\right)}+\frac{2}{\log\left(2\right)}\int_{1}^{\infty}2^{-u}du=\frac{1+\log\left(2\right)}{\log^{2}\left(2\right)}$$ so the series converges. |
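The closed form can be checked by numerical quadrature; the sketch below integrates $2\int_1^\infty u\,2^{-u}\,du$ (after the substitution $u=\sqrt t$) with Simpson's rule, truncating the negligible tail.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The integrand u * 2^{-u} is negligible beyond u = 80.
numeric = 2 * simpson(lambda u: u * 2 ** (-u), 1.0, 80.0, 100_000)
closed_form = (1 + math.log(2)) / math.log(2) ** 2
```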
Alternative proof to $M$ maximal $\iff R/M$ is a field | Really long indeed!
Suppose $M$ is maximal and that $J$ is an ideal of $S=R/M$. Suppose $J$ contains a nonzero element $r+M$. Then $r\notin M$, so $rR+M=R$ and so $1=rx+y$, with $x\in R$ and $y\in M$. Therefore
$$
1+M=rx+y+M=rx+M=(r+M)(x+M)\in J
$$
so $J=S$.
Suppose $S=R/M$ has only the trivial ideals. Suppose $I$ is an ideal of $R$ with $M\subsetneq I$. Let $r\in I$, $r\notin M$. Then $r+M$ is a nonzero element of $S$, hence the ideal it generates is $S$. Therefore
$$
1+M=(r+M)(x+M)
$$
for some $x\in R$. This means $1=rx+y$, for some $y\in M$. Since $y\in M\subset I$, we conclude $1\in I$ and so $I=R$. |
Meaning of Dot Products in Regards to Linear Algebra | The dot product gives you information about lengths and angles.
The dot product of two vectors tells you the angle between them, via the equation $$u \cdot v = ||u|| ||v|| \cos \theta.$$ A consequence of this is that the dot product of a vector with itself tells you the length of the vector, via the equation $$v \cdot v = ||v||^2.$$
If you wanted to, you could actually define length and angle by means of the dot product, using the two equations above. |
Integral as infimum of integrals | Not always. For example, if $\mu$ is a Dirac measure at a point $x \neq 0$. |
Proving all solutions of an ODE to be periodic | If $\alpha = 0$, we get a linear ODE with constant coefficients whose solutions are $x(t) = A\cos(t) + B\sin(t)$ and so periodic. Assume that $\alpha \neq 0$. Performing the substitution $x'(t) = v(x(t))$, we have
$$ x''(t) = v'(x(t))x'(t) = v'(x(t))v(x(t)) $$
and so
$$ v'(x)v(x) + x - \sin(\alpha x) = 0 $$
or
$$ v \, dv = (\sin(\alpha x) - x) \, dx. $$
Integrating, we get
$$ \frac{v^2}{2} + \left( \frac{\cos (\alpha x)}{\alpha} + \frac{x^2}{2} \right) = C. $$
Renaming the constant $C$, we then have
$$ F(x,v) = \alpha(x^2 + v^2) + 2\cos(\alpha x) = C $$
and we found a conserved quantity. What does this mean? Given a solution $x(t)$ with initial conditions $x(t_0) = x_0$ and $x'(t_0) = x'_0$, the solution must satisfy
$$ F(x(t), v(x(t))) = F(x(t), x'(t)) \equiv F(x_0, x'_0) $$
and so $(x(t),x'(t))$ must lie on a fixed level set of $F$. The only critical point of $F$ is at the origin $(0,0)$ as the equation
$$ (\nabla F)(x,v) = 2\alpha(x - \sin(\alpha x), v) = (0, 0) $$
has a unique solution $v = x = 0$ (here we use $0 < \alpha \leq 1$). This critical point is a minimum and all level sets are compact (we can even say that if $C > 2$ then $F^{-1}(C)$ is diffeomorphic to $S^1$) which implies that the solutions exist for all time and must be periodic. |
Question about the ring of polynomials bounded on a real variety | I wish I had thought more about this before I posted. Anyway, here is a counterexample. Let $f\in \mathbb{R}[x,y]$ be the polynomial $y(x^2+1)-1$. Let $I$ be the ideal generated by $f$. Let $V$ be the zero-set of $f$ in $\mathbb{R}^2$. Writing the equation $f=0$ in the form $$y=\frac{1}{1+x^2},$$ it is clear that the polynomial $y$ is bounded on $V$. Moreover $y$ is not an element of the ring $\mathbb{R}[\mathbb{R}+I]$ because $y$ is not constant as a function on $V$. We must verify that $I$ is prime and real radical.
$I$ is prime: To check this, we note that the quotient ring $\mathbb{R}[x,y]/I$ is isomorphic to the ring $\mathbb{R}[x,1/(1+x^2)]$, which is an integral domain, being a subring of the field $\mathbb{R}(x)$.
$I$ is real radical: It is enough to find an order on the ring $\mathbb{R}[x,1/(1+x^2)]$. But the field $\mathbb{R}(x)$ can be ordered: for example, think of the elements as functions on $\mathbb{R}$ ordered by eventual domination.
It seems that the general problem of calculating the ring of bounded functions on a real variety is much much harder than I realized. If anyone has any references or thoughts on the matter please post. |
Bilinear transformation and eigenvalues | We know that if a matrix $\;T\;$ is invertible, then $\;Tu=\alpha u\implies T^{-1}u=\alpha^{-1}u\;$ , so if $\;Bu=\mu u\;$ , then $\;(B\pm I)u=(\mu\pm 1)u\;$ , and from here:
$$(B-I)^{-1}u=(\mu-1)^{-1}u\;,$$
so assuming the eigenvector is common we get
$$\lambda u=Au=(B+I)(B-I)^{-1}u=(B+I)((\mu-1)^{-1}u)=(\mu-1)^{-1}(B+I)u=\frac{\mu+1}{\mu-1}u$$
Deduce now the final line of the proof. |
Constructing a special point in quadrilateral | The ratio $\frac{AD}{BC}$ fixes the scale between the two triangles. So you are looking at a point $M$ such that
$$\frac{AM}{BM} = \frac{DM}{CM} = \frac{AD}{BC}$$
The last of these fractions is given from your quadrilateral. The other two restrict $M$ to lie on a certain Apollonian circle. So I'd construct these two circles, and find $M$ as their point of intersection.
Note that the triangles are not really similar since they have opposite orientation. Your question seems a bit vague in terms of orientation, hence I took the liberty of interpreting it like this. If you have more strict and different orientation requirements, please make them explicit. |
Show that there is a constant $M$ such that for all $x,y \in X$ we have $|f(x) - f(y)| \leq M |x-y| + \epsilon$. | Your proof looks fine, but I think it is more illustrative to prove it directly.
Choose $\epsilon>0$. Since $X$ is compact, $f$ is uniformly continuous so there is some $\delta>0$ such that if $\|x-y\| < \delta$ then $\|f(x)-f(y)\| < \epsilon$. Now let $M= \max_{\|x-y\| \ge \delta} { \|f(x)-f(y)\| \over \|x-y\|}$. The $\max$ exists because $(x,y) \to { \|f(x)-f(y)\| \over \|x-y\|}$ is
continuous on the compact set $\{(x,y) | x,y \in X, \|x-y\| \ge \delta \}$.
Then
$\|f(x)-f(y)\| \le M\|x-y\| + \epsilon$. |
CFG for L = { (+,-)* | #(-) - #(+) ≤ 3 at every position of the word } | Hint:
$$ \#(\texttt{-}) - \#(\texttt+) \leq 3 \Leftrightarrow \#(\texttt-) \leq 3 + \#(\texttt+) $$
Let's consider the following languages:
(a) $ \#(\texttt-) = 3 + \#(\texttt+) $
(b) $ \#(\texttt-) < 3 + \#(\texttt+) $
If $A$ generates the language in (a) and $B$ generates the language in (b), then your grammar is:
$$ S \to A \mid B $$
A little help:
$$ \begin{align*}
A &\to K\texttt-K\texttt-K\texttt-K \\
K &\to K\texttt-K\texttt+K\mid K\texttt+K\texttt-K\mid\varepsilon
\end{align*} $$
Try to think how you can construct the grammar $ B $ and you're done. |
$(X,Y)$ normal, find $P(\mathrm{sign}(X) \neq \mathrm{sign}(Y))$ | Hint: you can write $$
Y = \delta X+ \sqrt{1 - \delta^2} Z
$$
for some $Z$ independent of $X$, with a centered normal distribution of variance $1$.
Negating a statement | Your answer is "really correct". It is perhaps more usual in English to use "but" rather than "and" in a sentence as the one in your example. Formally (i.e., mathematically), there is no difference. |
Singular solution of first order differential equation | $$y = px + p^3\tag1$$which is of Clairaut's form and hence the general solution is $$y=cx+c^3\qquad\text{, where $~c~$ is an arbitrary constant.}\tag2$$
Since the given equation is of Clairaut's form, the $p$-discriminant and the $c$-discriminant will be exactly the same. Hence we find only the $p$-discriminant.
Differentiating $(1)$ by $~p~$, we have $$x+3p^2=0~.\tag3$$
which implies that the singular solution can only exist for $x\le 0$, since $p$ must be real in a real problem.
The differential equation $(1)$ can be written as,
$$y=px+p^3=\frac13p(x+3p^2)+\frac23px=\frac23px\qquad\text{(using $(3)$)}$$
Squaring both side,
$$y^2=\frac49p^2x^2= \frac49~x^2~\left(-\dfrac x3\right)=-\frac4{27}~x^3\qquad\text{(using $(3)$)}$$
$$\implies 4x^3+27y^2=0$$
which is the required singular solution.
Thanks to Lutz Lehmann. |
Is there any tri-angle ? | Take any triangle $\triangle ABC$ of area $100 \,\text{cm}^2$ with $B,C$ on a fixed line $\mathbf{L}\,$. Flatten the triangle by pushing vertex $A$ down towards $\mathbf{L}$ while sliding $B,C$ outwards along the line so as to keep the area constant. In the limit, the distances from any point inside the triangle to any of the three sides tend to $0\,$, so all heights will eventually become $\le 1 \,\text{cm}$ for $\,A\,$ sufficiently close to $\,\mathbf{L}\,$.
[ EDIT ] For a concrete example of such a triangle, consider a triangle $\triangle ABC$ isosceles at $A$ with base $400 \,\text{cm}$ and height $\frac{1}{2} \,\text{cm}\,$, so that the area is $\,\frac{1}{2} \cdot \frac{1}{2} \cdot 400 = 100 \,\text{cm}^2\,$. Then the equal sides will be $\sqrt{200^2+\left(\frac{1}{2}\right)^2} \,\text{cm}\,$ each, and the corresponding heights are $\,\frac{2 \,\cdot\, 100}{\sqrt{200^2+\left(\frac{1}{2}\right)^2}} \lt 1 \, \text{cm}\,$. |
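The arithmetic of the concrete example is easy to verify:

```python
import math

base = 400.0   # cm
h_base = 0.5   # cm, the height onto the base

area = 0.5 * base * h_base                  # = 100 cm^2
side = math.sqrt(200.0 ** 2 + h_base ** 2)  # length of each equal side
h_side = 2 * area / side                    # height onto an equal side

all_heights_small = h_base <= 1 and h_side <= 1
```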
Entropy and Joint Probabilty Distributions for iid random variables | Below is an expanded version of the proof given in Elements Of Information Theory by T.M Cover (2nd Edition, pg 40 - Lemma 2.10.1) for the inequality in question.
If $X_1$ and $X_2$ are iid (i.e. $X_1 \sim p(x)$ and $X_2 \sim p(x)$ )
\begin{align}
P(X_1 = X_2) &= \sum_{x \in \mathcal{X}}
P(X_1 = x , X_2 = x) \\
&= \sum_{x \in \mathcal{X}}
P(X_1 = x) P(X_2 = x) \\
&= \sum_{x \in \mathcal{X}} p(x) p(x) \\
&= \sum_{x \in \mathcal{X}} p^2(x)
\end{align}
Using Jensen's Inequality we know that if a function, $f(y)$, is convex then $\mathbb{E}_{p(x)} \left[ f(y) \right] \geq f\left( \mathbb{E}_{p(x)}[y] \right)$, with equality if and only if $y$ is a constant.
The function $f(y) = 2^{y}$ is convex and so if $z = \mathbb{E}_{p(x)} \left[\phi \right]$, then
$$f(z) = f(\mathbb{E}_{p(x)} \left[\phi \right]) \leq \mathbb{E}_{p(x)} [f (\phi) ]$$
Where
$$\mathbb{E}_{p(x)} [f (\phi) ] = \sum _{x \in \mathcal {X}} p(x) f(\phi)$$
Let $\phi = \log_2 p(x)$. Note that
\begin{align}
2^{-H(X)} &= 2^{\mathbb{E}_{p(x)}[\log _2 p(x)]} \\
&= f(y) |_{y = \mathbb{E}_{p(x)}[\log _2 p(x)]} \\
&= f(y) |_{y = \mathbb{E}_{p(x)}[\phi]} \\
&= f(\mathbb{E}_{p(x)}[\phi]) \\
&\leq \mathbb{E}_{p(x)} [f (\phi) ] \\
&= \mathbb{E}_{p(x)} 2^{\phi} \\
&= \mathbb{E}_{p(x)} 2^{\log _2 p(x)} \\
&= \mathbb{E}_{p(x)} p(x) \\
&= \textstyle{\sum} p(x)p(x)\\
&= \textstyle{\sum} p^2 (x) \\
&= P(X_1 = X_2)
\end{align}
What this means is $P(X_1 = X_2) \geq 2^{-H(X)}$, with equality if and only if $\log p(x)$ is a constant. If $\log p(x)$ is constant then $p(x)$ is also constant and so the distribution in that case is uniform. $\square$ |
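A quick numerical illustration of the inequality (the two distributions are my own choices):

```python
import math

def collision_prob(p):
    """P(X1 = X2) = sum p(x)^2 for iid draws from p."""
    return sum(q * q for q in p)

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

p_skewed = [0.5, 0.25, 0.125, 0.125]
p_uniform = [0.25] * 4

gap = collision_prob(p_skewed) - 2 ** (-entropy(p_skewed))            # strict inequality
gap_uniform = collision_prob(p_uniform) - 2 ** (-entropy(p_uniform))  # equality case
```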
An integral from 0 to infinity | For the evaluation of
$$\int_0^\infty \frac{\sin^{2n-1}x}{x}dx\quad\mbox{and}\quad
\int_0^\infty \frac{\sin^{2n-1}x\cos x}{x}d x,$$
apply the Power-reduction formulae,
$$\sin^{2n-1}x=\frac{1}{4^{n-1}}\sum_{k=0}^{n-1}(-1)^{k}\binom{2n-1}{n+k}\sin((2k+1)x),$$
note that
$$\sin((2k+1)x)\cos x=\frac{1}{2}\left(\sin((2k+2)x)+\sin(2kx)\right),$$
and use the fact that for $a>0$,
$$\int_{0}^{+\infty}\frac{\sin(ax)}{x}\,dx=\frac{\pi}{2}.$$
Hence
$$\int_0^\infty \frac{\sin^{2n-1}x}{x}dx=
\frac{1}{4^{n-1}}\sum_{k=0}^{n-1}(-1)^{k}\binom{2n-1}{n+k}\int_0^\infty\frac{\sin((2k+1)x)}{x}dx=\frac{n\pi}{4^n(2n-1)}\binom{2n}{n}$$
and
$$\int_0^\infty \frac{\sin^{2n-1}x\cos x}{x}dx=\frac{\pi}{2\cdot4^n(2n-1)}\binom{2n}{n}.$$ |
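The power-reduction formula, which drives the whole computation, can be spot-checked numerically:

```python
import math

def sin_power_reduction(n, x):
    """Right-hand side of the power-reduction formula for sin^(2n-1) x."""
    total = sum(
        (-1) ** k * math.comb(2 * n - 1, n + k) * math.sin((2 * k + 1) * x)
        for k in range(n)
    )
    return total / 4 ** (n - 1)

# Compare both sides at a few sample points for n = 2, 3, 4.
max_err = max(
    abs(math.sin(x) ** (2 * n - 1) - sin_power_reduction(n, x))
    for n in (2, 3, 4)
    for x in (0.1, 0.7, 1.3, 2.9)
)
```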
Summing a series of integrals | This really isn't so bad.
$$\int_0^i\frac{i}{(i+x^2)^{3/2}}dx=\frac{i}{\sqrt{i(i+1)}}.$$
So you're summing:
$$\sum_{i=m}^n\frac{i}{\sqrt{i(i+1)}}=\sum_{i=m}^n\sqrt{\frac{i}{i+1}}\;.$$
The latter sum unfortunately doesn't have a simple closed form. |
Prove that all the five sequences converge to the same point $P \in \mathbb{R}^3$. | Partial answer (the estimates below are a bit hard to make precise; feel free to edit/criticize/downvote):
Let $O = \frac{1}{5}(A_n+B_n+C_n+D_n+E_n)$ be the center of mass of the five points $A_n, B_n, C_n , D_n, E_n$. Note that $O$ is really independent of $n$ as the next five points are midpoints of the previous five. By a translation of the initial data, we assume $O$ is the origin.
Let $I_n = \max\{|A_n|, |B_n|, |C_n|, |D_n|, |E_n|\}$. It's obvious that $I_n\ge I_{n+1}$. To show your claim, it suffices to show that $I_n \to 0$. (Thus $P = O$)
One is tempted to show that $I_{n+1} <(1-\epsilon) I_n$ for some $\epsilon$ small independent of $n$. But this cannot be done in general. Indeed we do have
Claim:
$I_{n+6} < (1-\epsilon) I_n$ for some $\epsilon>0$ small, for all $n$.
To show this, WLOG let $I_n = |A_n|$.
If $\left|\frac{1}{2}(A_n+ B_n) \right| \le (1-\epsilon)I_n$, we go on to the next step (in the next step there would be at most 4 elements with $|\cdot|\ge (1-\epsilon)I_n$).
If not, then we can choose $\epsilon$ small so that
$$\left|\frac{1}{2}(A_n+ B_n) \right| \ge (1-\epsilon)|A_n|$$
only if $\cos \theta_{A_nB_n} >1-\delta$ and $|B_n| > (1-\delta)|A_n|$, where $\theta_{A_nB_n}$ is the angle between $A_n$ and $B_n$. ($\delta$ here is small, depending on $\epsilon$).
Now as $O$ is the center of mass of the five points, $C_n$ will not be close to $B_n$, and $E_n$ not to $A_n$. We choose $\epsilon>0$ small so that
$$\left| \frac{1}{2}(A_n +E_n) \right|, \left|\frac{1}{2}(B_n+ C_n) \right| \le (1-2\epsilon)|A_n|$$
Keep this in mind, we go to the next $(n+1)$ step.
Now we have $|A_{n+1}| \ge (1-\epsilon)I_n$ and $|B_{n+1}|, |E_{n+1}|\le (1-2\epsilon)I_n$. So $|A_{n+2}| , |E_{n+2}| \le (1-\epsilon) I_n$. In particular, in the $n+2$ step, at most three elements have $|\cdot | \ge (1-\epsilon) I_n$.
Then it is obvious that in the $n+6$ step, we have
$$I_{n+6} < (1-\epsilon)I_n$$
(Remark: I suspect one can show $I_{n+2} <(1-\epsilon)I_n$.) |
Opposite effective classes in a Grothendieck group | Here is an example. Let $k$ be a field and let $\mathcal{A}$ be the category of morphisms $f:V\to W$ of $k$-vector spaces such that $\ker f$ and $\operatorname{coker} f$ are both finite-dimensional. By the snake lemma, $\mathcal{A}$ is an abelian subcategory of the category of all morphisms of $k$-vector spaces, and in particular it is an abelian category.
Now let $A$ be the object $0\to k$, let $B$ be the object $k\to 0$, and let $C$ be the object $k\stackrel{1}\to k$. Note that there is an infinite direct sum of copies of $C$ in $\mathcal{A}$, so $[C]=0$ in $K_0(\mathcal{A})$. Since there is a short exact sequence $0\to A\to C\to B\to 0$, this means $[A]=-[B]$.
However, by the snake lemma, there is a homomorphism $\chi:K_0(\mathcal{A})\to\mathbb{Z}$ which sends $f:V\to W$ to $\dim\ker(f)-\dim\operatorname{coker}(f)$. In particular, $\chi([A])=-1$, so $[A]\neq 0$. (In fact, it is not difficult to see that $\chi$ is an isomorphism.) |
Difference between function, derivative and second derivative? | Not really; those examples are unrealistically simple. One of the most useful techniques is to match up zeroes of one graph $-$ places where it crosses the $x$-axis $-$ with ‘flat spots’ $-$ horizontal tangents $-$ of another. If graph $A$ has a zero everywhere that graph $B$ has a horizontal tangent, there’s a good chance that $A$ represents the derivative of the function whose graph is $B$. Of course you should then check further: is $A$ positive $-$ above the $x$-axis $-$ where $B$ is increasing, and negative where $B$ is decreasing? If so you’ve almost certainly found a match: either $A$ is the graph of $f\;'$ and $B$ that of $f$, or $A$ is the graph of $f\;''$ and $B$ that of $f\;'$.
Once you’ve matched up one pair like this, the third graph is generally pretty easy to identify. In the example just described, in which $A$ shows the derivative of the $B$ function, there are just two possibilities: either $C$ shows the derivative of the $A$ function, or $B$ shows the derivative of the $C$ function. It shouldn’t be hard to choose between the two by using the same ideas that I mentioned in the first paragraph. |
Natural question about harmonic functions | I don't think this is true. Consider, for example, the open square $Q=(0,1)\times (0,1)$. Define on the boundary of $Q$ a function $v$ by the following: $v(x)=1$ for $x\in Q_1=\{(z,1):\ z\in [0,1]\}$ and $v(x)=0$ on the rest.
Consider the problem
$$\tag{1}
\left\{ \begin{array}{rl}
\Delta u=0 &\mbox{ if $x\in Q$} \\
u=v &\mbox{ if $x\in \partial Q$}
\end{array} \right.
$$
Problem $(1)$ has a unique solution $u$, which is harmonic in $Q$. Now, for each $n=1,2,...$, define a sequence of functions $v_n\in H^{1/2}(\partial Q)\cap C(\partial Q)$ by the following: $v_n(x)\le v_{n+1}(x)$,
$v_n(x)=1$ for $x\in \{(z,1):\ z\in [1/2^{n+1},1-1/2^{n+1}]\}$,
$0\le v_n(x)\le 1$ for $x\in \{(z,1):\ z\in [1/2^{n+2},1/2^{n+1}]\cup[1-1/2^{n+1},1-1/2^{n+2}]\}$,
$v_n(x)=0$ for $x\in \{(z,1):\ z\in [0,1/2^{n+2}]\cup [1-1/2^{n+2},1]\}$.
For each $n$ there is a unique solution $u_n$ to the problem
$$
\left\{ \begin{array}{rl}
\Delta u=0 &\mbox{ if $x\in Q$} \\
u=v_n &\mbox{ if $x\in \partial Q$}
\end{array} \right.
$$
The sequence $u_n$ is harmonic in $Q$ and continuous up to $\partial Q$. By the maximum principle $u_n(x)\le u_{n+1}(x)$ for all $n$ and $u_n(x)\to u(x)$ for all $x\in \overline{Q}$.
PS: It is possible that the above sequence $u_n$ is not continuous on the boundary, because $Q$ is not regular; however, this can easily be fixed by considering regular sets and applying the same reasoning as here. For example, one could regularize $Q$ at the corners, or one could consider a circle and define $v$ to differ according to whether you take points in the upper or lower half of the circle. |
How to compute $\lim _{x\to 0}\frac{(1+x)^{1\over x}-e}{x}$ without using a series expansion? | L'Hospital's Rule saves the day.
$$\begin{align}
\lim_{x\to 0}\frac{(1+x)^{1/x}-e}{x}&=\lim_{x\to 0}\left(\frac{(1+x)^{1/x}}{x+1}\frac{x-(1+x)\log (1+x)}{x^2}\right)\\\\
&=e\lim_{x\to 0}\left(\frac{-\log (1+x)}{2x}\right)\\\\
&=-\frac{e}{2}
\end{align}$$ |
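Numerically the limit is easy to corroborate; computing $(1+x)^{1/x}$ via `log1p` (an implementation choice of mine, to avoid cancellation) gives:

```python
import math

def f(x):
    """((1+x)^(1/x) - e) / x, computed stably via log1p."""
    return (math.exp(math.log1p(x) / x) - math.e) / x

approx = f(1e-6)     # should be close to the limit -e/2
limit = -math.e / 2
```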
Integral test error approximation | You want $\sum_{n = N+1}^{\infty} \frac{1}{n^4} < .005$.
We have the integral comparison
$$\int_{N+1}^{\infty} \frac{dx}{x^4}< \sum_{n = N+1}^{\infty} \frac{1}{n^4}< \int_{N}^{\infty} \frac{dx}{x^4}$$
or
$$\frac{1}{3(N+1)^3} < \sum_{n = N+1}^{\infty} \frac{1}{n^4} < \frac{1}{3N^3}$$
The equation $\frac{1}{3N^3} = 0.005$ has the solution $N = 4.05...$ and we see that
$$\frac{1}{3 \cdot 5^3} < \sum_{n = 5}^{\infty} \frac{1}{n^4} < \frac{1}{3\cdot 4^3}$$
or
$$ 0.00266 < \sum_{n = 5}^{\infty} \frac{1}{n^4} < 0.00520$$
Still that does not guarantee that the remainder starting with the fifth term is less than $0.005$ but we are close. Let's get a better approximation:
$$\sum_{n = 5}^{\infty} \frac{1}{n^4} = \frac{1}{5^4} + \sum_{n = 6}^{\infty} \frac{1}{n^4}< \frac{1}{5^4} + \frac{1}{3\cdot 5^3} = 0.0016+ 0.00266... = 0.00426... < 0.005$$
Thus the first such $N$ is $4$ (anything larger is OK too).
$\bf{Added:}$
Thanks to the observation of @Claude Leibovici: we check
$$\frac{\pi^4}{90} - ( 1+ \frac{1}{2^4} + \frac{1}{3^4} + \frac{1}{4^4}) =0.00357...$$ while
$$\frac{\pi^4}{90} - ( 1+ \frac{1}{2^4} + \frac{1}{3^4} ) = 0.00747...$$ |
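All of the numbers above are quick to reproduce using the exact value $\sum_{n\ge 1} 1/n^4=\pi^4/90$:

```python
import math

zeta4 = math.pi ** 4 / 90  # exact value of sum 1/n^4

def tail(N):
    """Remainder sum_{n=N+1}^inf 1/n^4, via the exact value of the series."""
    return zeta4 - sum(1 / n ** 4 for n in range(1, N + 1))

# Integral-test bounds 1/(3(N+1)^3) < tail(N) < 1/(3 N^3), checked for N = 4.
bounds_ok = 1 / (3 * 5 ** 3) < tail(4) < 1 / (3 * 4 ** 3)
```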
Is it true that there is an exact sequence of the form $0 \to K \to A^m \to P \to 0$ where $K$ is finitely generated? | If $P$ is generated by $p_1,\ldots, p_m$, we obtain a homomorphism $f\colon A^m\to P$ by sending $(a_1,\ldots, a_m)$ to $a_1p_1+\ldots + a_mp_m$. This homomorphism is onto, giving rise to a short exact sequence
$$0\to K\to A^m\to P\to 0 $$
As $P$ is projective, we can pull the identity $P\to P$ to a homomorphism $g\colon P\to A^m$ such that $f\circ g=\operatorname{id}_P$. The sequence is said to split.
A sequence split on the right also splits on the left (this holds in a more general context): given $x\in A^m$, we see that $f(x-g(f(x)))=0$, hence $x\mapsto x-g(f(x))$ is a map $A^m\to K$, and is in fact an epimorphism.
Find a basis and dimension for the vector space $V=\bigl\{f(x)\in\mathbb{R}[x];\ \deg f<5,\ f''(0)=f(1)=f(-1)=0\bigr\}.$ | Hint:
Write $$
f(x)=a_0+a_1x+\cdots+a_4x^4.
$$
Find the relations among the $a_i$'s using $$f''(0)=f(1)=f(-1)=0.$$ For instance, what can you tell by $f(1)=0$?
Once you have the relations (precisely, linear equations) for these $a_i$'s, you would get a subspace of $\mathbb{R}^5$, the dimension of which is what you are looking for. |
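Without spoiling the derivation of the relations themselves, the resulting dimension can be confirmed by computing the rank of the constraint matrix; the elimination routine below is a generic sketch over the rationals.

```python
from fractions import Fraction

def rank(rows):
    """Row rank over the rationals by Gaussian elimination."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                factor = rows[i][c] / rows[r][c]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Constraints on (a0, ..., a4):  f''(0) = 2 a2,
# f(1) = a0 + a1 + a2 + a3 + a4,  f(-1) = a0 - a1 + a2 - a3 + a4.
constraints = [
    [0, 0, 2, 0, 0],
    [1, 1, 1, 1, 1],
    [1, -1, 1, -1, 1],
]
dim_V = 5 - rank(constraints)  # dimension of the solution space
```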
Calculate $P(1 \le X \le 2)$ | Hint: $P(a \leq X \leq b) = F(b) - \lim_{n\rightarrow\infty}F\left(a-\frac{1}{n}\right)$. |
Compactness implies Continuity? | $f(x) = 1$ if $x$ is rational, otherwise $f(x)=0.$ So $f(K)$ is either one or two points. |
Boundary layer problem | This is not a boundary layer perturbation problem. The proposed $ϵ=100x^2$ is not a small term around $x=1$.
One reduced equation is
$$
y'=y^2-2y+1=(y-1)^2
$$
with solution
$$
y\equiv 1\text{ or }y=1-\frac{1}{x-c}.
$$
Now one can add perturbation terms either to $y$ or to $(y-1)^{-1}$ |
Probability - Biased coin — Betting game | What you have stated is a version of the classical gambler's ruin problem. There is a good article on that at Wikipedia:
https://en.wikipedia.org/wiki/Gambler's_ruin
Since you put all your wealth at stake in the first bet, it is clear that the probability of ruin is larger than $1-p$. (That is the probability of losing everything in round one, and you can lose it later as well.) |
If G is a finite group and $x \in G$, there is an integer $n \geq 1$ such that $x^n = e$ | Suppose that $x \in G$. If $x = e$ then set $n = 1$. Suppose that $x \neq e$. Since $G$ is finite, the sequence $x^{n}$, $n \in \mathbb{N}$, has duplicates. Suppose that $x^{m} = x^{n}$ for distinct $m$ and $n$ with $m < n$. Then $x^{n - m} = e$ with $n - m \geq 1$. |
How to evaluate $\int _{-\infty }^{\infty }\!{\frac {\cos \left( x \right) }{{x}^{4}+1}}{dx}$ | This can be done using residues, with the function $f(z) = \exp(i z)/(z^4 + 1)$
and a contour that goes along the real axis and returns on a circular arc in the upper half plane. |
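As a numerical sanity check of this approach (not a substitute for carrying out the residue computation): sum $2\pi i$ times the residues at the two poles in the upper half plane and compare with direct quadrature. The helper names here are mine.

```python
import cmath
import math

# Simple poles of 1/(z^4 + 1) in the upper half plane.
poles = [cmath.exp(1j * math.pi * (2 * k + 1) / 4) for k in (0, 1)]

# Residue of exp(iz)/(z^4 + 1) at a simple pole z0 is exp(i z0)/(4 z0^3).
residue_sum = sum(cmath.exp(1j * z) / (4 * z ** 3) for z in poles)
integral_residues = (2j * math.pi * residue_sum).real

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Direct check: the integrand is even and decays like x^-4, so a finite
# range suffices.
integral_numeric = 2 * simpson(lambda x: math.cos(x) / (x ** 4 + 1),
                               0.0, 100.0, 200_000)
```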
What Is Exponentiation? | My chief understanding of the exponential and the logarithm come from Spivak's wonderful book Calculus. He devotes a chapter to the definitions of both.
Think of exponentiation as some abstract operation $f_a$ ($a$ is just some index, but you'll see why it's there) that takes a natural number $n$ and spits out a new number $f_a(n)$. You should think of $f_a(n) = a^n$.
To match our usual notion of exponentiation, we want it to satisfy a few rules, most importantly $f_a(n+m) = f_a(n)f_a(m)$. Like how $a^{n+m} = a^na^m$.
Now, we can extend this operation to the negative integers using this rule: take $f_a(-n)$ to be $1/f_a(n)$. then $f_a(0) = f_a(n-n) = f_a(n)f_a(-n) = 1$, like how $a^0=1$.
Then we can extend the operation to the rational numbers, by taking $f_a(n/m) = \sqrt[m]{f_a(n)}$. Like how $a^{n/m} = \sqrt[m]{a^n}$.
Now, from here we can look to extend $f_a$ to the real numbers. This takes more work than what's happened up to now. The idea is that we want $f_a$ to satisfy the basic property of exponentiation: $f_a(x+y)=f_a(x)f_a(y)$. This way we know it agrees with usual exponentiation for natural numbers, integers, and rational numbers. But there are a million ways to extend $f_a$ while preserving this property, so how do we choose?
Answer: Require $f_a$ to be continuous.
This way, we also have a way to evaluate $f_a(x)$ for any real number $x$: take a sequence of rational numbers $x_n$ converging to $x$, then $f_a(x)$ is $\lim_{n\to\infty} f_a(x_n)$. This seems like a pretty reasonable property to require!
Now, actually constructing a function that does this is hard. It turns out it's easier to define its inverse function, the logarithm $\log(z)$, which is the area under the curve $y=1/x$ from $1$ to $z$ for $0<z<\infty$. Once you've defined the logarithm, you can define its inverse $\exp(z) = e^z$. You can then prove that it has all the properties of the exponential that we wanted, namely continuity and $\exp(x+y)=\exp(x)\exp(y)$. From here you can change the base of the exponential: $a^x = (e^{\log a})^x = e^{x\log a}$.
To conclude: the real exponential function $\exp$ is defined (in fact uniquely) to be a continuous function $\mathbb{R}\to\mathbb{R}$ satisfying the identity $\exp(x+y)=\exp(x)\exp(y)$ for all real $x$ and $y$. One way to interpret it for real numbers is as a limit of exponentiating by rational approximations. Its inverse, the logarithm, can similarly be justified.
Finally, Euler's formula $e^{ix} = \cos(x)+i\sin(x)$ is what happens when you take the Taylor series expansion of $e^x$ and formally use it as its definition in the complex plane. This is more removed from intuition; it's really a bit of formal mathematical symbol-pushing. |
Finding Unions and intersections given two probabilities. | Because $\Pr(A\cap B)$ is in this case equal to $\Pr(A)\Pr(B)$, the events $A$ and $B$ are independent, so the answer you got is correct.
However, you would not necessarily be viewed as having written a correct solution. If for example $\Pr(A\cap B)=0.1$, multiplying $0.8$ by $0.6$ would yield the wrong answer.
I suggest using an argument like this. We have
$$\Pr(A^c\cap B)+\Pr(A\cap B)=\Pr(B).$$
We know two of the terms, so we can find the third, which is $0.6-0.12$.
Your other two arguments are correct. |
Generating function of derangements | You know that
$$e^z=\sum_{n\ge 0}\frac1{n!}z^n$$
and that
$$D(z)=\sum_{n\ge 0}\frac{d(n)}{n!}z^n\;.$$
Now take the Cauchy product:
$$D(z)e^z=\left(\sum_{n\ge 0}\frac{d(n)}{n!}z^n\right)\left(\sum_{n\ge 0}\frac1{n!}z^n\right)=\sum_{n\ge 0}c_nz^n\;,$$
where
$$c_n=\sum_{k=0}^n\left(\frac{d(k)}{k!}\cdot\frac1{(n-k)!}\right)=\sum_{k=0}^n\frac{d(k)}{k!(n-k)!}=\frac1{n!}\sum_{k=0}^n\frac{n!}{k!(n-k)!}d(k)\;.$$
What is another notation for $\dfrac{n!}{k!(n-k)!}$?
Now use the identity that you were given and one of the first power series that you learned to express $D(z)e^z$ as a rather simple function of $z$, and solve for $D(z)$. |
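As a quick numerical sanity check (not part of the original hint), the binomial sum appearing in $c_n$ can be evaluated with the standard derangement recurrence $d(n)=n\,d(n-1)+(-1)^n$, which is assumed here:

```python
from math import comb, factorial

def derangements(n):
    # d(0) = 1, d(n) = n*d(n-1) + (-1)**n
    d = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

# check that sum_k C(n,k) d(k) = n!, i.e. that every c_n equals 1,
# which is what makes D(z) e^z collapse to a very simple series
for n in range(10):
    assert sum(comb(n, k) * derangements(k) for k in range(n + 1)) == factorial(n)

print(derangements(4))  # 9
```

This confirms $c_n=1$ for small $n$ without giving away the closed form for $D(z)$.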
Can we add Lagrangian-like multipliers for a joint constraint $g(x, y)=0$ in a min-max problem? | When solving this kind of problem
$$
\min_x \max_y f(x, y) \ \ \text{ s.t. } \ g(x, y) = 0
$$
we are searching for saddle points of $f$ restricted to the constraint set $g=0$, and such points are handled by the usual Lagrange multiplier procedure, so
$$
L(x,y,\mu) = f(x,y)+\mu g(x,y)
$$
and the stationary points are determined by
$$
\nabla L = 0
$$
keeping only the stationary points of $L$ that are saddle points (minima in $x$, maxima in $y$). |
Would stating that $(-\infty, +\infty)= \mathbb{R}$ be correct? | Yeah it is correct.
The first notation means all real numbers larger than $-\infty$ and less than $+\infty$, and the second notation means all real numbers, but since any real number is larger than $-\infty$ and less than $+\infty$, it excludes no real numbers. |
Vector Space of Polynomial Functions | Suppose that $p_0, p_1, ..., p_m$ are polynomials in $P_m(F)$ such that $p_i(2)=0$ for each $i$. Prove that the set of vectors $p_0, ..., p_m$ is linearly dependent.
If each $p_i(t)$ evaluates to $0$ at $t = 2$, why can you conclude that the $p_i$ must therefore be linearly dependent?
For intuition: Try to construct linearly independent polynomials, say for $P_2(F)$, such that for each, $p_i(2) = 0$; and see why these polynomials cannot, in fact, be linearly independent.
Suppose the $m+1$ polynomials $p_0,\ldots,p_m$ with $p_i(2) = 0$ were linearly independent. Since $\dim P_m(F)=m+1$, they would form a basis of $P_m(F)$. But the constant polynomial $1$ does not vanish at $2$, while every linear combination of the $p_i$ does, so $1$ would be linearly independent of the $m+1$ $p_i$. That would give $m+2$ linearly independent vectors in $P_m(F)$. Impossible! |
Differentiation about the square root of x and y | Implicitly:
You'll want to differentiate the function to $x$
$$\frac{d}{dx}\sqrt x + \frac{d}{dx}\sqrt y = \frac{d}{dx}16$$
$$\frac{1}{2\sqrt{x}} + \frac{\frac{dy}{dx}}{2\sqrt{y}} = 0$$
$$\frac{dy}{dx}=-\frac{\sqrt{y}}{\sqrt{x}}=\frac{\sqrt{x}-16}{\sqrt{x}}$$ since $\sqrt y = 16-\sqrt x$ on the curve. |
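As a numeric sanity check (a sketch, assuming the curve is $\sqrt x+\sqrt y=16$, so that $y=(16-\sqrt x)^2$ explicitly), a central difference agrees with the formula:

```python
from math import sqrt

def y(x):
    # solve sqrt(x) + sqrt(y) = 16 for y
    return (16 - sqrt(x)) ** 2

x0, h = 9.0, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)   # central difference
formula = (sqrt(x0) - 16) / sqrt(x0)          # equals -sqrt(y)/sqrt(x) on the curve
print(numeric, formula)                       # both close to -13/3
```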
Property of integral and integrator. (NET June 2011) | 1 is definitely false. Think of a sequence of rectangles with decreasing width and increasing height
2 is false. The "typewriter sequence" is the standard counterexample.
3 is false, by the same example as 1.
4 is true (assuming that when you say each function is uniformly bounded, that means that there is a uniform bound for all the functions). This is the bounded convergence theorem. |
How can I actually solve this kind of partial differential equations? | You have found one particular solution, and since the PDE is linear any other solution will differ from that one by a solution of the homogeneous equation:
$$
x \frac{\partial z}{\partial x}+t \frac{\partial z}{\partial t}+y \frac{\partial z}{\partial y}=0
.
$$
This equation says that the directional derivative of $z$ in the radial direction is zero; in other words, $z$ is constant on each ray from the origin.
So the general solution of your PDE is
$$
z(x,y,t) = \frac{xyt}{3} + g(x,y,t)
$$
where $g$ is a function which depends only on the direction of the vector $(x,y,t)$, not on its magnitude. (Expressed in spherical coordinates, $g$ would depend only on the angles $\theta$ and $\phi$, not on the radial variable $r$.)
There are some issues to consider if you want a smooth function at the origin ($g$ must be constant in that case), but this is to be expected since the left-hand side of your PDE is zero there. |
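A quick symbolic check (a sketch with `sympy`, assuming the original equation was $x z_x + t z_t + y z_y = xyt$, consistent with the particular solution $xyt/3$; the particular $g$ below, homogeneous of degree zero, is just one assumed example of a function constant on rays):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# particular solution plus a sample function constant on rays from the origin
g = x * y / (x**2 + y**2 + t**2)      # homogeneous of degree 0
z = x * y * t / 3 + g

lhs = x * sp.diff(z, x) + t * sp.diff(z, t) + y * sp.diff(z, y)
print(sp.simplify(lhs - x * y * t))   # 0, so z solves the PDE
```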
Determining the relevant set/field for least upper bounds | The fact that $\mathbb{R}$ and $\mathbb{Q}$ are fields doesn't come into play here; what matters is the set we are looking for a supremum in.
Given an ordered set $T$, and a subset $S$, the supremum of $S$ in $T$ is the least element of $T$ that is greater than or equal to every element of $S$. That is, $x=\sup(S)$ must satisfy
$s\leq x$ for all $s\in S$
$x\leq y$ for all $y\in T$ such that ($s\leq y$ for all $s\in S$)
Now, depending on what $T$ is, an element satisfying these properties may or may not exist in $T$. If we consider $S_2$ as a subset of $\mathbb{R}$, there is a supremum for $S_2$: it is $\sqrt{2}$. Either you have not mentioned it, or the book leaves it implicit, but the claim is that $S_2$ has no supremum when we are thinking of it as living in $\mathbb{Q}$, i.e. $S=S_2$ and $T=\mathbb{Q}$.
Suppose a rational number $r$ has the property that $s\leq r$ for all $s\in S_2$. If we had $r^2<2$, then $t=2-r^2>0$ and $0<r<2$; for any $t>0$ there is some $n\in\mathbb{N}$ such that both $\frac{t}{2}>\frac{1}{n^2}>0$ and $\frac{t}{2}>\frac{4}{n}>0$. Then we have
$$(r+\tfrac{1}{n})^2=r^2+\tfrac{2r}{n}+\tfrac{1}{n^2}<r^2+\tfrac{4}{n}+\tfrac{1}{n^2}<r^2+t=2,$$
so $r$ cannot be a supremum for $S_2$ in $\mathbb{Q}$ because $r+\frac{1}{n}\in S_2$ but $r+\frac{1}{n}\not\leq r$.
Thus, any rational number $r$ that has the property that $s\leq r$ for all $s\in S_2$ must also have the property that $r^2\geq 2$. We know that equality is impossible, as $\sqrt{2}$ is irrational, so we must have that $2<r^2$. But by a similar argument, there is some $n\in\mathbb{N}$ such that $2<(r-\tfrac{1}{n})^2$, so that $r-\frac{1}{n}$ also has the property that $s\leq r-\frac{1}{n}$ for all $s\in S_2$; but $r-\frac{1}{n}<r$, so $r$ cannot be a supremum for $S_2$.
Thus, we have shown that there is actually no supremum for $S_2$, when it is considered as a subset of $\mathbb{Q}$; for any element of $\mathbb{Q}$ that is greater than everything in $S_2$, we can find a strictly smaller element of $\mathbb{Q}$ with that same property.
In fact, this argument actually shows that there is no supremum for $S_2$ in any dense subset of $\mathbb{R}$ that is missing $\sqrt{2}$. |
Asymptotic behavior of an integrable function | No, you are right there is a counterexample.
Join the following dots:
$$\left\{\left(n-\frac{1}{2^n},0\right),(n,1),\left(n+\frac{1}{2^n},0\right)\;:\;n\in\mathbb{N}^+\right\}.$$
and you will obtain the graph of a piecewise linear function with the desired property.
P.S. Here is an unbounded version:
$$\left\{\left(n-\frac{1}{4^n},0\right),(n,2^n),\left(n+\frac{1}{4^n},0\right)\;:\;n\in\mathbb{N}^+\right\}. $$ |
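A small numeric sketch of the first construction: each tent centered at $n$ has half-width $2^{-n}$ and height $1$, so the areas sum to $1$ while $f(n)=1$ for every positive integer $n$:

```python
def f(x, N=60):
    # piecewise-linear tents: peak 1 at each integer n, half-width 2**-n
    for n in range(1, N + 1):
        w = 2.0 ** -n
        if abs(x - n) < w:
            return 1.0 - abs(x - n) / w
    return 0.0

areas = [2.0 ** -n for n in range(1, 61)]   # area of the n-th tent is 2**-n
print(sum(areas))                           # ~1: f is integrable
print(f(5))                                 # 1.0: f does not tend to 0
```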
Prove $4^n-1$ is divisible by $3$, for all $n\in\Bbb N$? | Note that
$$4^n-1\equiv(1)^n-1\equiv 0 \mod 3$$ |
Pigeon holes principle | HINT: Notice that $257=2^8+1$, so you might guess that there will be $2^8$ pigeonholes. You have three atomic sentences. They can have $2^3=8$ different combinations of truth values. How many different truth tables can you build from these $8$ combinations of truth values? Now notice that if $p_1\iff p_2$ is a tautology, then $p_1$ and $p_2$ have the same truth table. (Why?) |
linear algebra, show B is a basis for $\mathbb{R}^3$ | To show it is a basis, pick a vector ${\bf x}=(x,y,z)$ in the space. What you want to show is that the system
$$a_1(1,0,1)+a_2(1,1,2)+a_3(1,2,4)=\bf{x}$$ always has a solution ${\bf v}=(a_1,a_2,a_3)$ regardless of the choice of ${\bf x}\in \Bbb R^3$. Another way would be to show they are linearly independent: since $\dim \Bbb R^3=3$, three linearly independent vectors will span $\Bbb R^3$. To this end, show that they give the zero vector only in the trivial way, that is:
$$a_1(1,0,1)+a_2(1,1,2)+a_3(1,2,4)=0$$
only when $a_1,a_2,a_3=0$.
For the second, just apply the previous solution with the special case ${\bf x}=(3,-1,3)$.
NOTE As julien has commented, it suffices to show either that the vectors are linearly independent or that they span $\Bbb R^3$. Why is this so? If the vectors are not linearly independent, they will not span $\Bbb R^3$, and if the vectors are linearly independent, they will span $\Bbb R^3$. This means the two statements are equivalent, and this stems from the fact that $\Bbb R^3$ has dimension $3$. |
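To see this concretely, a quick numerical check (a sketch using `numpy`; the coordinates found for $(3,-1,3)$ are of course what the exercise asks you to compute by hand):

```python
import numpy as np

B = np.array([[1, 0, 1],
              [1, 1, 2],
              [1, 2, 4]], dtype=float)    # rows are the three given vectors

# nonzero determinant <=> linear independence <=> basis of R^3
print(np.linalg.det(B))                   # 1.0 (nonzero)

# coordinates a of x = (3, -1, 3): solve a1*v1 + a2*v2 + a3*v3 = x
a = np.linalg.solve(B.T, np.array([3.0, -1.0, 3.0]))
print(a)                                  # a1, a2, a3 = 5, -3, 1
```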
If $c \in \mathbb{R}, c<0, A \subseteq \mathbb{R}$ prove that $\sup(cA) = c \cdot \inf(A)$ | Your attempt at a proof is, at the least, insufficient. I suggest you to use less symbols and to spell out what you need to prove.
Let $r=\inf(A)$; you want to prove
$cr\ge x$, for all $x\in cA$
for all $\varepsilon>0$, there exists $y\in cA$ such that $y>cr-\varepsilon$.
First condition. Since $r=\inf(A)$, we know that $r\le a$, for all $a\in A$; if $x\in cA$, then $x=ca$, for some $a\in A$; from $r\le a$ and $c<0$, it follows $cr\ge ca=x$.
Second condition. There exists $b\in A$ such that $r-\dfrac{\varepsilon}{c}>b$, because $-\varepsilon/c>0$. Then
$$
c\left(r-\frac{\varepsilon}{c}\right)<cb
$$
which is the same as
$$
cr-\varepsilon < cb
$$
and we can take $y=cb\in cA$. |
What is the limit behavior of this random sum? | Let $\phi$ be the characteristic function of $X_1$. Then $\phi_n$, the characteristic function of $S_n$, is given by
$$\phi_n(t)=\prod_{j=1}^n\phi\left(\frac tj\right),$$
so $(S_n,n\geqslant 1)$ converges in distribution if and only if for each $t\in\mathbb R$, the product $\prod_{j=1}^\infty\phi\left(\frac tj\right)$ is convergent (in the usual sense or $\phi(t/j)=0$ for some $j$).
There are cases where the product converges and cases where it diverges. For example, consider random variables with characteristic function $\phi(t):=e^{-|t|^\alpha}$, where $0\lt\alpha\lt 2$. Then the product is convergent if and only if $\alpha\gt 1$.
If $S_n$ converges in distribution to $0$ then $|\phi(t)|=1$ for each $t$. |
How to prove that the $\mathbb{Z}$-span of weights of a faithful $L$-module contains the root lattice? | $V$ being a faithful ${\mathfrak g}$-module means that the structure homomorphism $\rho: {\mathfrak g}\to{\mathfrak g}{\mathfrak l}(V)$ is injective. By definition, $\rho$ is a homomorphism of Lie algebras, but we may also view it as a morphism of ${\mathfrak g}$-modules if we equip ${\mathfrak g}$ with the adjoint action and ${\mathfrak g}{\mathfrak l}(V)$ with the ${\mathfrak g}$-action given by $X.- := [\rho(X),-]$.
Now your claim follows from the following observations: Firstly, by the injectivity of ${\mathfrak g}\to {\mathfrak g}{\mathfrak l}(V)$ any root of ${\mathfrak g}$ (i.e. any weight of the adjoint representation of ${\mathfrak g}$) is a weight of ${\mathfrak g}{\mathfrak l}(V)$. Secondly, ${\mathfrak g}{\mathfrak l}(V)\cong V\otimes_{\mathbb k} V^{\ast}$ as ${\mathfrak g}$-modules, and weights add upon tensoring. Can you fill in the details yourself? |
On the image of an embedding of $\mathbb{R}$ in $\mathbb{R}^3$ | $g_1(x)=x_2-x_1^2$ and $g_2(x)=x_3-x_1^3$ do that. |
Find the area included between an arc of cycloid $x=a(\theta - \sin (\theta))$, $y=a(1-\cos (\theta))$ and its base. | There are different ways to use integration to compute an area.
What you seem to have overlooked is that you are given a parametric equation for the cycloid, namely
$$\begin{align*}
x(\theta) &= a (\theta - \sin \theta) \\
y(\theta) &= a (1 - \cos \theta)
\end{align*}$$
which, for a fixed constant $a$, relates the $(x,y)$ coordinate of a point on the cycloid to a parameter $\theta$. Therefore, the expression for the area $$\int y \, dx$$ that you cite is not an integral with respect to $x$, but actually with respect to $\theta$. Written out completely, it actually means
$$A = \int_{\theta = \theta_0}^{\theta_1} y(\theta) \, \frac{d}{d\theta}[x(\theta)] \, d\theta.$$
In your case the interval of integration over half a period is $\theta \in [0, \pi]$ as the text states. |
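As a symbolic cross-check (a sketch with `sympy`), integrating over the half-period $[0,\pi]$ gives half the classical arch area $3\pi a^2$:

```python
import sympy as sp

theta, a = sp.symbols('theta a', positive=True)
x = a * (theta - sp.sin(theta))
y = a * (1 - sp.cos(theta))

# A = integral of y(theta) * x'(theta) dtheta over half a period
A_half = sp.integrate(y * sp.diff(x, theta), (theta, 0, sp.pi))
print(sp.simplify(A_half))   # 3*pi*a**2/2
```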
$X$ a normed space. $(x_n) \subset X$ s.t. $|f(x_n)| < M_f$ for all $n$ and $\forall f \in X^{\ast}$ then there is $C$ s.t. $\|x_n\| \le C$ | This is just the Banach-Steinhaus Theorem applied to the dual space $X^*$ which is always a Banach space, even if $X$ is only a normed space.
For $x \in X$ let $\hat x \in X^{**} = (X^*)^*$ be defined via $\hat x(f) = f(x)$ for all $f \in X^*$. Then, for all $n \in \mathbb{N}$ and $f \in X^*$ we have
$$
|\hat{x_n}(f)| = |f(x_n)| \leq M_f,
$$
i.e. the family of operators $(\hat{x_n})_{n \in \mathbb{N}}$ is pointwise bounded.
Banach-Steinhaus gives you that the family is uniformly bounded, i.e. there exists some $C > 0$ such that
$$
\|\hat{x_n}\| \leq C,
$$
for all $n \in \mathbb{N}$.
But $X$ is isometrically isomorphic to a subspace of its Bi-dual, i.e. $\|\hat{x_n}\| = \|x_n\|$. |
Linear algebra with a transformation i think | If $P$ is of the Jordan Canonical Form, then it is easy to verify that $P^2=P$ implies (2). In addition, you can verify that $P^2=P$ if and only if $P$ is a diagonal matrix with eigenvalues in $\{0,1\}$. |
What is a monotonic increasing function $[0,1] \to [0,1]$? | As André Nicolas wrote in the comments:
There are many choices, for example $x^\alpha$ for suitable $\alpha$, which could be constant but need not be. For your example, $\alpha=1/2$ will give a sort of OK fit. |
The greatest integer function | For $x\in [2,6)$, $x\to 3x^2$ is an increasing function and it attains the values in $[3\cdot 2^2,3\cdot 6^2)=[12,108)$. For any integer $k\in [12,107]$, let
$$I_k=\{x\in [2,6): k\leq 3x^2<k+1\}=[\sqrt{k/3},\sqrt{(k+1)/3}).$$
Note that if $x\in I_k$ then $\lfloor 3x^2\rfloor=k$ which means that $\lfloor 3x^2\rfloor$ is constant on each interval $I_k$.
Moreover
$|I_k|=\sqrt{(k+1)/3}-\sqrt{k/3}$ where $|I_k|$ is the length of $I_k$.
Hence by the definition of integral,
$$\int_{2}^6 \lfloor 3x^2\rfloor dx=\sum_{k=12}^{107} k|I_k|
=\frac{1}{\sqrt{3}}\sum_{k=12}^{107} k(\sqrt{k+1}-\sqrt{k})
\approx 206.005$$
P.S. For general information about the floor function take a look HERE. |
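A numerical cross-check of the value above (a quick sketch; the midpoint Riemann sum is an independent sanity check):

```python
from math import sqrt, floor

# sum of k * |I_k| with |I_k| = (sqrt(k+1) - sqrt(k)) / sqrt(3)
total = sum(k * (sqrt(k + 1) - sqrt(k)) for k in range(12, 108)) / sqrt(3)
print(total)   # ~206.005

# independent check: midpoint Riemann sum of floor(3x^2) on [2, 6]
N = 200_000
h = 4.0 / N
riemann = sum(floor(3 * (2 + (i + 0.5) * h) ** 2) * h for i in range(N))
print(riemann)
```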
Properties of Variance for Random Variables | Variances always add, so the answer should be $16+81=97$. |
Will there be a surjective homomorphism...? | Hint: the two parts are equivalent. If such a homomorphism exists, then its kernel is a subgroup of $\mathbb{Q}$ of index $2$. But the homomorphism does not exist. For choose some element $ x\in \mathbb{Q}$ that gets mapped to $1$. What does $\frac{x}{2}$ get mapped to? |
2011 IMC Section A Problem 3 | So we have $$...=1!^23!^2...2009!^2\cdot 2011!^2 \cdot 2\cdot 4\cdot 6...\cdot 2010\cdot 2012$$
$$=1!^23!^2...2009!^2\cdot 2011!^2 \cdot 2^{1006}\cdot \color{red}{1006!}$$ |
Show that the last component of an eigenvector of a tridiagonal hermitian matrix is not 0. | Let $A$ be a Hermitian tridiagonal matrix of order $n$, with $a_{i, i + 1} \ne 0$, $i = 1, \ldots, n - 1$.
Write $A = \begin{bmatrix} A^{(1)} & y\\y^* & a\end{bmatrix}$, as you have defined, and suppose that $x = \begin{bmatrix}x^{(1)} \\ 0\end{bmatrix}$ is an eigenvector. Then
$$Ax = \lambda x \implies \begin{bmatrix} A^{(1)} & y\\y^* & a\end{bmatrix} \begin{bmatrix}x^{(1)} \\ 0\end{bmatrix} = \lambda \begin{bmatrix}x^{(1)} \\ 0\end{bmatrix} \implies \begin{array}{l} A^{(1)} x^{(1)} = \lambda x^{(1)}\\ y^*x^{(1)} = 0\end{array}.$$
Since $A$ is tridiagonal with $a_{n-1,n} \ne 0$, $y^* = \begin{bmatrix}0 & \cdots & 0 & a^*_{n-1,n}\end{bmatrix}$, so that $y^*x^{(1)} = 0 \implies x^{(1)}_{n-1} = 0$.
Now, $A^{(1)}$ is a Hermitian tridiagonal matrix of order $n - 1$, with $a^{(1)}_{i, i+1} \ne 0$, $i = 1, \ldots, n - 2$, and having $x^{(1)} = \begin{bmatrix}x^{(2)} \\ 0 \end{bmatrix}$ as an eigenvector. Thus, by induction, we find that every component of $x$ is zero, which contradicts the assumption that $x$ is an eigenvector.
Thus, any eigenvector of $A$ must have a non-zero $n$th component. |
Hessian matrix of the mahalanobis distance wrt the Cholesky decomposition of a covariance matrix | Let $u=x-\mu$, $f:\Lambda\rightarrow\Lambda\Lambda^T$, $g:\Sigma\rightarrow u^T\Sigma^{-1}u$ and $h=g\circ f$.
Then $Dg_{\Sigma}: H\in S_k\rightarrow -u^T\Sigma^{-1}H\Sigma^{-1}u$ and $Df_{\Lambda}:K\in T_k\rightarrow K\Lambda^T+\Lambda K^T$, where $S_k$ is the set of symmetric matrices and $T_k$ is the set of lower triangular matrices. Thus $Dh_{\Lambda}:K\rightarrow -u^T\Sigma^{-1}(K\Lambda^T+\Lambda K^T)\Sigma^{-1}u=-2tr(\Lambda^T\Sigma^{-1}uu^T\Sigma^{-1}K)=-2<\Sigma^{-1}uu^T\Sigma^{-1}\Lambda,K>$
where $<Y,Z>$ is the real scalar product $tr(Y^TZ)$.
Let $\nabla (h)_{\Lambda}=-2\Sigma^{-1}uu^T\Sigma^{-1}\Lambda$; we'll say that the previous function is the gradient of $h$ because, for every $i\geq j$, $\dfrac{\partial h}{\partial \Lambda_{ij}}=\nabla(h)_{i,j}$; the strictly upper part of $\nabla (h)$ is useless.
The second derivative is the following symmetric bilinear form:
$D^2h_{\Lambda}:(K,L)\in T_k\times T_k\rightarrow 2u^T\Sigma^{-1}(L\Lambda^T+\Lambda L^T)\Sigma^{-1}(K\Lambda^T+\Lambda K^T)\Sigma^{-1}u-u^T\Sigma^{-1}(KL^T+LK^T)\Sigma^{-1}u$.
For example, if $p\geq q,r\geq s$, then $\dfrac{\partial^2 h}{\partial \Lambda_{pq}\partial\Lambda_{rs}}=D^2h_{\Lambda}(E_{pq},E_{rs})$.
Note that the Hessian of $h$ is a complicated tensor. |
solving laplace equation with separation of variables | The function $u=0$ cannot be a solution of the problem because of the boundary conditions (which are not all zero on the boundary).
In the first boundary conditions the first $u(\pi, y)=0$ must be replaced by $u(x,\pi)=0$ in order to have zero boundary conditions for $y$. After correcting this and applying the separation of variables we get:
$$u(x,y)=\sum_{n=1}^{\infty}\big(c_{n}\sinh(nx)+d_n\cosh(nx)\big)\sin(ny).$$ Now using the orthogonality of the functions $\sin(ny)$ and BC's for $x$ we will determine $c_n$ and $d_n$.
$$u(0, y)=0\Longrightarrow d_n=0.$$
$$u(\pi, y)=\sin y\cos y=\frac{1}{2}\sin(2y)=\sum_{n=1}^{\infty}c_n\sinh(n\pi)\sin(ny).$$
Orthogonality of $\sin(ny)$ implies that $$\int_0^\pi\sin(2y)\sin(ny)dy=\pi c_n\sinh(n\pi).$$
In this case we should be more careful, because the integral on the LHS is not zero for all $n$. It is zero (by orthogonality) for all $n$ except for $n=2$. When $n=2$, $$\int_0^\pi\sin^2(2y)dy=\frac{\pi}{2}$$
and hence $c_2=\displaystyle\frac{1}{2\sinh(2\pi)}$. Therefore, $$u(x,y)=\frac{\sinh(2x)\sin (2y)}{2\sinh(2\pi)}.$$ |
$ \int_{0}^{88} \sin \sqrt{x} $ dx, $n=4$ . Use the Midpoint Rule with the given value of $n$ to approximate the integral. | The default assumption in calculus is that angles, unless otherwise specified, are measured in radians. If that is the case, put your calculator in radian mode amd calculate
$$\frac{88}{4}\left(\sin(\sqrt{11})+\sin(\sqrt{33})+\sin(\sqrt{55})+\sin(\sqrt{77})\right).$$
To $5$ decimal places, I get $18.12003$. Rounded to say $4$ decimal places this is $18.1200$.
Remark: Weird problem, particularly using the default assumption that we are using radians. With $n=4$, one cannot expect a numerical method such as Trapezoidal Rule, Midpoint Rule, or Simpson's Rule, to produce even a crude estimate of the integral. |
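The computation can be scripted (a sketch; the function and interval are taken from the problem):

```python
from math import sin, sqrt

def midpoint_rule(f, a, b, n):
    # M_n = h * sum of f evaluated at the n subinterval midpoints
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# midpoints are 11, 33, 55, 77 for [0, 88] with n = 4
approx = midpoint_rule(lambda x: sin(sqrt(x)), 0, 88, 4)
print(approx)   # ~18.1200
```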
Find a 2nd independent solution of the given equation (given one solution)? | Write the differential equation in the form $ y''+P(x)y'+Q(x)y=0$
Since $x>1$, we have $x \neq 1$, so
$$y''-\frac x {x-1}y'+\frac y {x-1}=0$$
$$ \text {1)} \left (\frac {y_2} {y_1}\right )'=\frac {W(y_1,y_2)} {y_1^2}$$
$$\frac {y_2'y_1-y_1'y_2}{y_1^2}=\frac {W(y_1,y_2)} {y_1^2}$$
$$\frac {y_2'e^x-e^xy_2}{e^{2x}}=\frac {W(e^x,y_2)} {e^{2x}}$$
$${y_2'e^x-e^xy_2}= {W(e^x,y_2)}$$
$${y_2'e^x-e^xy_2}=\left|
\pmatrix{e^x & y_2\\
e^x & y'_2}
\right|$$
$${y_2'e^x-e^xy_2}= e^xy'_2-e^xy_2$$
2) Then use Abel's identity with $P(x)=-\dfrac x{x-1}$:
$$W=W_0e^{-\int P(x)dx}=W_0e^{\int \frac x {x-1}}=Ke^{\int 1+\frac 1 {x-1}}=Ke^xe^{ln|x-1|}=Ke^x(x-1) \text { since x>1}$$
$$W= e^xy'_2-e^xy_2 \implies y'_2-y_2=K(x-1)$$
I'll let you finish: solve the equation for $y_2$. It's a first-order linear equation.
Another way to integrate this equation...
$$(x-1)y''-xy'+y=0$$
$$(x-1)y''-xy'\color{red}{+y'-y'}+y=0$$
$$(x-1)y''\color{blue}{-xy'+y'}+\color{green}{y-y'}=0$$
$$(x-1)y''\color{blue}{-(x-1)y'}+y-y'=0$$
$$(x-1)(y''-y')-(y'-y)=0$$
Substitute $z=y'-y$
$$(x-1)z'-z=0$$
Integrate..
$$\int \frac {dz}{z}=\int \frac {dx}{x-1}$$
$$z=K_1(x-1)$$
$$y'-y=K_1(x-1)$$
Use $e^{-x}$ as integrating factor:
$$(ye^{-x})'=e^{-x}K_1(x-1)$$
$$ye^{-x}=\int e^{-x}K_1(x-1)dx+K_2$$
$$y(x)=K_1e^{x}\int e^{-x}(x-1)dx+K_2e^x$$
$$\boxed{y(x)=K_1x+K_2e^x}$$ |
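A symbolic check that the boxed general solution satisfies the original equation (a sketch with `sympy`):

```python
import sympy as sp

x, K1, K2 = sp.symbols('x K1 K2')
y = K1 * x + K2 * sp.exp(x)

# plug into (x - 1) y'' - x y' + y = 0
residual = (x - 1) * sp.diff(y, x, 2) - x * sp.diff(y, x) + y
print(sp.simplify(residual))   # 0 for all K1, K2
```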
Finding the derivative of an integral function | Since
$$y = c e^{-x} + e^{-x} \int_0^x \frac{\tan t}{t} \, dt,\tag1$$
as you correctly show
$$y' = -ce^{-x} + e^{-x} \frac{\tan x}{x} - e^{-x} \int_0^x \frac{\tan t}{t} \, dt.\tag2$$
Now if we rearrange (1) we have
$$e^{-x} \int_0^x \frac{\tan t}{t} \, dt = y - c e^{-x},$$
so substituting this result into (2) we have
$$y' = -c e^{-x} + e^{-x} \frac{\tan x}{x} - (y - ce^{-x}),$$
or
$$y' + y = \frac{e^{-x} \tan x}{x},$$
a first-order differential equation that is free from any integral sign and has (1) as a solution. |
A linear code must be able to send $256$ different messages s.t. it corrects one error. What is the least possible length of such a code? | Welcome to MSE!
I'd consider the Hamming bound,
$$q^n \geq |C|\sum_{i=0}^t {n\choose i}(q-1)^i,$$
where $|C|=2^8$, $q=2$, $d=2t+1=3$, i.e., $t=1$.
Search for the smallest block length $n\geq 8$ such that this inequality is satisfied. |
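The search can be scripted directly (a sketch; this checks only the sphere-packing necessary condition, not the existence of an actual length-$n$ code):

```python
from math import comb

def hamming_bound_ok(n, M=256, q=2, t=1):
    # sphere-packing bound: q^n >= M * sum_{i=0}^{t} C(n, i) (q - 1)^i
    return q ** n >= M * sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

n = 8
while not hamming_bound_ok(n):
    n += 1
print(n)   # 12
```

Here $n=12$ is the smallest length passing the bound, and it is in fact attained: shortening the $[15,11]$ Hamming code gives a $[12,8]$ single-error-correcting code.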
Finding the volume of a cone which is partially filled with water | Let $r_1,r_2$ be the radii of the upper circular water surface when the cone is base-down/base-up respectively, $r$ the cone's radius and $h$ the cone's overall height. Then we have, by similarity relations,
$$\frac{h-h_{w_1}}{r_1}=\frac hr=\frac{h_{w_2}}{r_2}$$
$$\frac{r(h-h_{w_1})}h=r_1,\frac{rh_{w_2}}h=r_2\tag1$$
Since the volume of water is unchanged,
$$\frac\pi3(r^2+rr_1+r_1^2)h_{w_1}=\frac\pi3r_2^2h_{w_2}$$
$$(r^2+rr_1+r_1^2)h_{w_1}=r_2^2h_{w_2}$$
Substituting the relations in $(1)$,
$$\left(r^2+r\left(\frac{r(h-h_{w_1})}h\right)+\left(\frac{r(h-h_{w_1})}h\right)^2\right)h_{w_1}=\left(\frac{rh_{w_2}}h\right)^2h_{w_2}$$
Dividing by $r^2$ and then multiplying by $h^2$, we get a quadratic in $h$:
$$3h_{w_1}h^2-3h_{w_1}^2h+h_{w_1}^3-h_{w_2}^3=0$$
Thus we can solve for $h$. If we are given only $h_{w_1}$ and $h_{w_2}$, then we are stuck; we cannot find $r,r_1,r_2$ even though they are now in known proportions. For example, two cones with $h=3$ and $r=1,2$ filled so that $h_{w_1}=1$ in both cases will have the same $h_{w_2}$.
If we are also given $r=h$, then the volume is simply $\frac\pi3h^3$ using the $h$ we have calculated. (Knowing that $r=h=a$ and the value of $a$ makes the problem trivial.) |
Proving that $\frac{ \cos (A+B)}{ \cos A-\cos B}=-\cot \frac{A-B}{2} \cot \frac{A+B}{2}$ | First, note that $\dfrac{\cos(A + B)}{\cos A - \cos B} \neq -\cot \left( \dfrac{A-B}{2} \right) \cot \left(\dfrac{A+B}{2} \right)$
In particular, if we use something like $A = \pi/6, B = 2\pi/6$, then the left is $0$ as $\cos(\pi/2) = 0$ and the right side is a product of two nonzero things.
I suspect instead that you would like to prove:
$$\dfrac{\cos A + \cos B}{\cos A - \cos B} = -\cot \left( \dfrac{A-B}{2} \right) \cot \left(\dfrac{A+B}{2} \right)$$
HINTS
And this looks to me like an exercise in the sum-to-product and product-to-sum trigonometric identities (wiki reference). In fact, if you just apply these identities to the top and the bottom, you'll get the result. |
If $M$ can be immersed in $\mathbb{R}^k\times(S^1)^l$ with codimension $1$, can it be immersed in $\mathbb{R}^{k+l}$ with codimension $1$? | Note that $\mathbb{R}\times S^1$ embeds in $\mathbb{R}^2$ via the map $(x, z) \mapsto e^xz$. Repeated application of this map gives an embedding $\varphi : \mathbb{R}^k\times(S^1)^l \to \mathbb{R}^{k+l}$.
If $f : X \to \mathbb{R}^k\times(S^1)^l$ is an immersion, then $\varphi\circ f : X \to \mathbb{R}^{k+l}$ is also an immersion (because $\varphi$ is an embedding). Moreover, the codimension of the immersion $f$ is equal to the codimension of the immersion $\varphi\circ f$ (because $\dim \mathbb{R}^k\times(S^1)^l = \dim \mathbb{R}^{k+l}$). |
How do I prove a sequence is Cauchy | As an easier example of how to apply the definition of a Cauchy sequence, define the sequence $\{\frac{1}{n}\}$. Given any $\epsilon>0$, you would like to find an $N$ such that for any $n,m>N$, $\left|\frac{1}{n}-\frac{1}{m}\right|<\epsilon$. This would certainly be the case if $\frac{1}{n},\frac{1}{m}<\frac{\epsilon}{2}$ since $\left|\frac{1}{n}-\frac{1}{m}\right|\leq\left|\frac{1}{n}\right|+\left|\frac{1}{m}\right|$. Therefore you need to force $n,m>\frac{2}{\epsilon}$. Conveniently enough, you are allowed to choose whichever $N$ you like, so choosing $N>\frac{2}{\epsilon}$ will do the trick. The above constitutes the work you do beforehand, now the proof.
Claim: The sequence $\{\frac{1}{n}\}$ is Cauchy.
Proof: Let $\epsilon>0$ be given and let $N>\frac{2}{\epsilon}$. Then for any $n,m>N$, one has $0<\frac{1}{n},\frac{1}{m}<\frac{\epsilon}{2}$. Therefore, $\epsilon>\frac{1}{n}+\frac{1}{m}=\left|\frac{1}{n}\right|+\left|\frac{1}{m}\right|\geq\left|\frac{1}{n}-\frac{1}{m}\right|$. Thus, the sequence is Cauchy as was to be shown. |
Find the number of sides of a polygon where it satisfies the relation $\frac{1}{A_1A_2} =\frac{1}{A_1A_3} + \frac{1}{A_1A_4}$ | Note that if the polygon $A_1A_2 \ldots A_{n}$ is a regular polygon, then the quadrilateral $A_1A_2A_3A_4$ is a cyclic quadrilateral. Furthermore, even $A_1A_3A_4A_5$ is a cyclic quadrilateral. We will shorten this to $ABCDE$ for convenience of avoiding subscripts.
We use the famous Ptolemy's theorem:
In a cyclic quadrilateral $ABCD$, $AB\cdot CD + BC \cdot AD = AC \cdot BD$.
Let $AB=BC=CD=DE=a$, $AC = BD = CE = b$, $AD = BE = c$, and $AE=d$.
Apply Ptolemy to $ABCD$: $a^2 + ac = b^2 \quad (1)$. Next, apply Ptolemy to $ACDE$: $ab+ad = bc \quad (2)$.
Apply Ptolemy to $ABDE$: $a^2 + bd = c^2 \quad (3)$.
Furthermore, we are given that $\dfrac{1}{a} = \dfrac{1}{b} + \dfrac{1}{c} \quad (4)$.
Solve for $d$ from $(2)$:
$$
d = \frac{bc-ab}{a}
$$
Substitute this value of $d$ in $(3)$:
$$
a^2 + b\left(\frac{bc-ab}{a}\right) = c^2 \implies a^2 + \left(\frac{b^2c-ab^2}{a}\right) = c^2
$$
Now, use $(4)$:
$$
a^2 + (b^2c-ab^2)\left(\frac{1}{b} + \frac{1}{c}\right) = c^2
$$
On simplification:
$$
a^2 + bc = c^2
$$
But, comparing $a^2+bc=c^2$ with $(3)$,
$$
bc=bd \implies c=d
$$
Hence the shape has exactly two distinct diagonal lengths, so there are four diagonals from $A$ to other vertices, two of length $b$ and two of length $c$. That means that the number of vertices is $4+3=7$, where the $3$ counts $A$ and its adjacent vertices.
Hence, the shape is that of a regular heptagon.
If the shape is not regular, nothing definitive can be said about the polygon.
By the way, if this would have been a regular polygon such that $A_1A_2 + A_1A_3 = A_1A_6$, then the regular polygon is a nonagon (again, Ptolemy's theorem is useful). |
Is a forest a tuple, a set or both? | Wiki: "A forest is an undirected graph in which any two vertices are connected by at most one path"
The important thing is that this forbids cycles. Now, assume V1=V2={A,B,C} and V3={}. Furthermore, let E1={AB,BC} and E2={AC,BC} (E3={}, of course). In both definitions, you produce the cycle AB, BC, CA, i.e. not a forest. However, note that this depends how exactly you interpret (i) as a graph (this is a bit of an unusual notation... thinking of it, defining a graph as a set with 3 elements seems weird...I strongly advise against it). |
Proof of LCM property. | Yes: $a\mid ak\implies a\mid\operatorname{lcm}(ak,cm)$. For the same reason, $c\mid\operatorname{lcm}(ak,cm)$. Therefore, $\operatorname{lcm}(a,c)\mid\operatorname{lcm}(ak.cm)$. |
discrete version of Laplacian | Here is a low-brow answer. If you accept that the discrete version of the second derivative of a function $f$ is $f(x+1) - 2f(x) + f(x-1)$, then the discrete Laplacian, say in two dimensions, is $\Delta f(x, y) = \frac{f(x+1, y) + f(x-1, y) + f(x,y+1) + f(x,y-1)}{4} - f(x,y)$. But the first term is just the transition probabilities of a random walk on $\mathbb{Z}^2$ where one moves to each of the four horizontally or vertically adjacent neighbors with equal probability. A similar statement is true in $n$ dimensions.
More generally one can define a discrete Laplacian on any graph which mimics the usual Laplacian. In fact there is an entire textbook by Doyle and Snell dedicated to working out potential theory on graphs. |
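The first paragraph can be illustrated numerically (a sketch with `numpy`; periodic boundary conditions on a small torus are an assumption made only to keep the transition matrix simple):

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
f = rng.normal(size=(n, n))

# transition matrix P of the simple random walk on the n x n torus:
# from each site, move to one of the 4 neighbors with probability 1/4
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            P[i * n + j, ((i + di) % n) * n + (j + dj) % n] = 0.25

# discrete Laplacian as (P - I) f: neighbor average minus the value itself
lap = (P - np.eye(n * n)) @ f.flatten()

neighbor_avg = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4
print(np.allclose(lap, (neighbor_avg - f).flatten()))   # True
```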
Questions regarding mutual independence of events | Yes it is possible. See https://en.wikipedia.org/wiki/Pairwise_independence for a standard example.
Yes it is possible, let $A_1 = \emptyset$ and the statement holds regardless of the rest of the $A_i$, which do not have to be independent.
No this does not imply mutual independence, or even pairwise independence. If $A_1$ and $A_2$ are disjoint, they cannot be independent unless at least one of them has probability zero. Intuitively, if I told you that $A_1$ occurred, then you know for certain that $A_2$ did not occur. |
What series does the function $\frac{1}{(1-ax)^r}$ generate? | The series is $$S=(1-ax)^{-r}=\sum_{k=0}^{\infty}(-1)^k {-r \choose k} (ax)^k=\sum_{k=0}^{\infty} {r+k-1 \choose r - 1} (ax)^k.$$
This is valid for $|x|<|a|^{-1}$. |
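A quick coefficient check (a sketch with sample values $r=3$, $a=2$; multiplying back by the polynomial $(1-ax)^r$ should recover $1$):

```python
from math import comb

r, a, N = 3, 2, 10

# claimed series coefficients of (1 - a x)^(-r)
c = [comb(r + k - 1, k) * a ** k for k in range(N)]

# coefficients of the polynomial (1 - a x)^r
p = [comb(r, j) * (-a) ** j for j in range(r + 1)]

# truncated product should be 1 + 0*x + 0*x^2 + ...
prod = [sum(p[j] * c[k - j] for j in range(min(r, k) + 1)) for k in range(N)]
print(prod)   # [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```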
If $2AB = (BA)^2+I_n$ then $1$ is an eigenvalue for $AB$ | The condition implies that $BA$ commutes with $AB$ and hence they are simultaneously triangularisable over $\mathbb C$.
Let $AB$ and $BA$ be simultaneously triangularised. Since $AB$ and $BA$ in general have identical spectra, if $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $BA$ along its diagonal, then the entries along the diagonal of $AB$ are $\lambda_{\sigma(1)},\ldots,\lambda_{\sigma(n)}$ for some permutation $\sigma$. So, the given condition implies that $f(\lambda_i)=\lambda_{\sigma(i)}$ where $f:z\mapsto (z^2+1)/2$.
In other words, $f$ is a bijection among the eigenvalues of $BA$. As $f$ maps real numbers to real numbers, it must also be a bijection among the real eigenvalues of $BA$.
Since $BA$ is a real matrix of odd dimension, real eigenvalues do exist. Now, as $f(x)\ge x$ on $\mathbb R$, $f$ must map the largest real eigenvalue of $BA$ to itself. Solving $f(x)=x$, we see that this eigenvalue is $1$.
Edit. As the OP points out in his comment, actually we only need to consider the largest real eigenvalue. Let $(\lambda,x)$ be an eigenpair of $BA$, where $\lambda$ is the largest real eigenvalue of $BA$. Then $(\frac{\lambda^2+1}2,x)$ is an eigenpair of $AB$. However, since $AB$ and $BA$ have identical spectra and $\frac{\lambda^2+1}2\ge \lambda$, we must have $\frac{\lambda^2+1}2=\lambda$ and hence $\lambda=1$. |
Complex series expansion of a particular form using formula | Yes this should be fine. It's helpful to note that
$$\frac1{z-a}-\frac1{z-b}={a-b\over(z-a)(z-b)}.$$
The identity now follows by dividing both sides by $a-b$ and expanding using geometric series. |
Does a set $A \subseteq [0,1]$ exist such that $A$ is homeomorphic to $[0,1] \setminus A$? | Such subsets do exist. The interval $[0,1]$ is homeomorphic to the extended real line $X=[-\infty, \infty]$ (with the standard topology). Now, let $A\subset X$ be the union of $\infty$ with the collection of intervals $[2n, 2n+1)$, $n\in {\mathbb Z}$. The set $A$ is homeomorphic to $B$, which is the union of $\infty$ and the collection of intervals $(2n, 2n+1]$: Send $\infty$ to itself and $[2n, 2n+1)\to (2n, 2n+1]$, $\forall n$ via linear maps. Composing this homeomorphism with the map $x\mapsto -x+1$, we get a homeomorphism $A\to X\setminus A$. qed
Edit: Here is a proof that for such an example ($A$ homeomorphic to $A^{\mathrm c}=[0,1]\setminus A$) the set $A$ has to consist of infinitely many components.
Suppose that $A$ is a finite union of intervals (I allow open, half-open, closed and degenerate intervals). For each interval $I$ define its "modified Euler characteristic" $\chi^c(I)$ as the number of vertices (end-points which belong to $I$) minus the number of edges (which is 1 if $I$ is nondegenerate and $0$ if $I$ is a singleton). Thus, for $I=[a,b]$, we get $\chi^c(I)=1$, while for $I=(0,1)$, we get $\chi^c(I)=-1$; we also have $\chi^c((0,1])=0$.
Now, extend $\chi^c$ to finite unions of intervals in the obvious fashion. For compact subsets with finitely many components, $\chi^c=\chi$, the usual Euler characteristic. One can (easily) show that $\chi^c$ is additive:
$$
\chi^c(\bigsqcup_{i=1}^n I_i)=\sum_{i=1}^n \chi^c(I_i)
$$
(this is false for the usual Euler characteristic!) and is invariant under homeomorphisms.
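The bookkeeping for $\chi^c$ on intervals is easy to mechanize; here is a small sketch (the interval encoding is my own) checking the definition and the additivity claim on one disjoint decomposition of $[0,1]$:

```python
# chi^c of an interval encoded as (lo, hi, left_closed, right_closed):
# (#endpoints belonging to the interval) - (1 edge if nondegenerate, else 0).
def chi_c(lo, hi, left_closed, right_closed):
    if lo == hi:                      # singleton {lo}: one vertex, no edge
        return 1
    return int(left_closed) + int(right_closed) - 1

assert chi_c(0, 1, True, True) == 1     # [0,1]
assert chi_c(0, 1, False, False) == -1  # (0,1)
assert chi_c(0, 1, False, True) == 0    # (0,1]

# Additivity over the disjoint decomposition [0,1] = [0,1/2) ⊔ {1/2} ⊔ (1/2,1]
parts = [(0, 0.5, True, False), (0.5, 0.5, True, True), (0.5, 1, False, True)]
print(sum(chi_c(*p) for p in parts))  # 1, matching chi^c of [0,1]
```

The parity argument then falls out: $\chi^c$ is always an integer, so $2\chi^c(A)=1$ has no solution.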
Now, if $A\subset [0,1]$ is a finite union of intervals and $A^{\mathrm c}$ is homeomorphic to $A$, then
$$
2\chi^c(A)=\chi^c(A)+ \chi^c(A^{\mathrm c})=\chi^c([0,1])=1
$$
which is absurd. The same works for the interval $(0,1)$.
The same argument works in higher dimensions, but you have to modify what "finite number of components" means. Instead, assume that $A$ is "semialgebraic", i.e. is given by a finite system of inequalities of the type $p_i(x)>0$, $p_j(x)\ge 0$, where $p$'s are polynomials of several variables. The key is that $\chi^c$ of the closed $n$-dimensional disk is $1$ and that $\chi^c$ is again additive. The modified Euler characteristic can be regarded as the "right" Euler characteristic for semialgebraic sets; it can be defined as the alternating sum of ranks of homology groups for the Delfs' homology theory ("homology with closed support", not to be confused with Borel-Moore!). This interpretation explains why $\chi^c$ is a topological invariant (this is no longer obvious with the 2nd definition below).
A more direct definition of $\chi^c$ is to consider "incomplete simplicial complexes" triangulating semialgebraic sets, i.e. generalized simplicial complexes where simplices might be missing some faces (like the interval $[0,1)$ is missing the vertex $1$) and then use the standard alternating sum of the face numbers, as I did above in the 1-dimensional case. |
Two points with “identical” local geometry on Riemann Manifolds | I would always be careful with the word "identical" in mathematics. In this situation, what I would say is that the local geometries of $s$ and $t$ are isometric if there exist open subsets $U \subset S$ and $V \subset T$, and an isometry $f : U \to V$, such that $s \in U$, $t \in V$, and $f(s)=t$. In case you don't know the definition, to say that $f$ is an isometry means that $f$ is a diffeomorphism, and for every $x \in U$ and $y=f(x) \in V$ the derivative map $D_x f : T_x U \to T_y V$ is a linear isomorphism that preserves the inner products on those spaces (given by the Riemannian metrics), i.e. for each $v,w \in T_x U$ we have $\langle v,w \rangle = \langle D_x f(v),D_x f(w) \rangle$. |
What is $\lim_{n\to \infty} {\sqrt[n+1]{(n+1)!} \over \sqrt[n]{n!}}$ | Note:
$$\frac{\sqrt[n+1]{n!}}{\sqrt[n]{n!}} <\frac{\sqrt[n+1]{(n+1)!}}{\sqrt[n]{n!}}<\frac{\sqrt[n+1]{(n+1)!}}{\sqrt[n+1]{n!}}.$$
Taking limit:
$$\lim_{n\to\infty} (n!)^{-\frac{1}{n(n+1)}}\le L \le \lim_{n\to\infty} (n+1)^{\frac{1}{n+1}} \Rightarrow$$
(both limits equal $1$, since $\frac{\ln (n!)}{n(n+1)}\le \frac{\ln n}{n+1}\to 0$ and $\frac{\ln(n+1)}{n+1}\to 0$)
$$1\le L \le 1 \Rightarrow $$
$$L=1.$$ |
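A numeric sanity check of the limit (mine, not part of the answer), computing the ratio via `math.lgamma` to avoid overflowing the factorials:

```python
import math

def ratio(n):
    # ((n+1)!)^(1/(n+1)) / (n!)^(1/n), via log-gamma: log(k!) = lgamma(k+1)
    return math.exp(math.lgamma(n + 2) / (n + 1) - math.lgamma(n + 1) / n)

for n in (10, 1000, 10**6):
    print(n, ratio(n))  # tends to 1 from above as n grows
```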
Express unit sphere as countable union of great circles? | You may reduce the dimension of your question by considering the intersections of your great circles with a suitably chosen equator.
So: can a circle be made up of a countable set of points?
Proving a criteria for a sequence of functions on $[0,1]$ converging to a function non uniformly | This is false. Let $f(x) = 0$, and:
$$
f_n(x) = \begin{cases}
x^n, &\text{if $x \in [0,1)$} \\
0, &\text{otherwise}
\end{cases} \\
$$
We have $f_n(x) \to f(x)$ pointwise, as $\lim\limits_{n \to \infty} x^n = 0$ $\forall x \in [0,1)$, but not uniformly as $\sup\limits_{x \in [0,1]} |f_n(x) - f(x)| = 1$ $\forall n$. However, for any sequence $x_n \to x$ we must have $f(x_n) = 0 \to 0 = f(x)$. |
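A quick numerical illustration (my own) of the two claims — pointwise convergence to $0$ and failure of uniform convergence:

```python
def f_n(x, n):
    # x**n on [0,1), and 0 at x = 1 (the "otherwise" branch)
    return x**n if 0 <= x < 1 else 0.0

# Pointwise: for each fixed x in [0,1), f_n(x) -> 0
print(f_n(0.9, 10), f_n(0.9, 500))

# Not uniform: the sup over [0,1] stays near 1 for every n
sup_est = max(f_n(k / 10**4, 50) for k in range(10**4 + 1))
print(sup_est)  # close to 1, attained near x = 1
```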
Evaluating and proving $\lim\limits_{x\to\infty}\frac{\sin x}x$ | You know that $-\dfrac{1}{x} \leq \dfrac{\sin(x)}{x} \leq \dfrac{1}{x}$
Now let $x \to \infty$ and apply the squeeze theorem. |
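Numerically (my own check), the squeeze is visible immediately:

```python
import math

for x in (10.0, 1e3, 1e6):
    print(x, math.sin(x) / x, 1 / x)  # |sin(x)/x| <= 1/x, which -> 0
```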
Window perimeter optimization | Start by identifying the constraint and the quantity you're optimizing (here, minimizing the perimeter for a fixed area):
Constraint: $A = 2xy+\frac{\pi x^2}{2} = 10$
Minimize: $P = 2y+2x+\pi x$
From here, solve the constraint for $y$ so that the objective becomes a function of $x$: $$10 = 2xy+\frac{\pi x^2}{2} \\ 2xy = 10-\frac{\pi x^2}{2} \\ y = \frac{5}{x}-\frac{\pi x}{4}$$
Plug this into the objective: $$2\left(\frac{5}{x}-\frac{\pi x}{4}\right)+2x+\pi x \longrightarrow \frac{1}{2} (4+\pi ) x+\frac{10}{x}$$
Then take the derivative of this equation: $$\frac{d}{dx}P=\frac{4+\pi }{2}-\frac{10}{x^2}$$
Set it equal to $0$ to find the critical points; only the positive root is admissible here: $$x = 2 \sqrt{\frac{5}{4+\pi}}$$
Since $P\to\infty$ both as $x\to 0^+$ and as $x\to\infty$, this critical point yields the minimum. Plug it back into the equation for the perimeter: $$P\left(2 \sqrt{\frac{5}{4+\pi}}\right) = 2 \sqrt{5 (4+\pi )} \approx 11.9512$$
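A numeric check of the critical point (my addition); the stationary point is where $P$ is smallest, since $P$ blows up at both ends of $(0,\infty)$:

```python
import math

def P(x):
    # perimeter with y eliminated: P(x) = (4 + pi)/2 * x + 10/x
    return (4 + math.pi) / 2 * x + 10 / x

x_star = 2 * math.sqrt(5 / (4 + math.pi))
print(x_star, P(x_star))               # P(x_star) ≈ 11.9512
print(P(0.5 * x_star), P(2 * x_star))  # nearby values are larger
```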
Prob. 2 (d), Sec. 27 in Munkres' TOPOLOGY, 2nd ed: Open supersets and ϵ-neighborhoods of compact set | Suppose by contradiction that no $U(A,\epsilon)$ is a subset of $U.$
For each $n\in \mathbb N$ pick $p_n\in U(A,1/n)\setminus U$ and then $a_n\in A$ with $p_n\in B_d(a_n,1/n).$ Since $A$ is compact we can take a subsequence $(a_{n_i})_i$ of the sequence $(a_n)_n$ with $(a_{n_i})_i$ converging to some $a\in A.$
Now take $m\in \mathbb N$ such that $B_d(a,1/m)\subset U.$ Take any $n_i>2m$ such that $d(a,a_{n_i})<1/(2m).$ Then $$d(a,p_{n_i})\leq d(a,a_{n_i})+d(a_{n_i},p_{n_i})<\frac1{2m}+\frac1{n_i}<\frac1m .$$ This implies $p_{n_i}\in B_d(a,1/m)\subset U,$ contrary to $p_{n_i}\not \in U.$ |
A Bound for the Error of the Numerical Approximation of a the Integral of a Continuous Function | This is where using randomness can help. Let $X_i\sim U(0,t)$ be independent and let $$I_n=\frac t n\sum_{i=1}^n f(X_i)$$
Then $I_n$ converges to $I=\int_0^tf(x)\,dx$ a.s. and in $L^2$: $$\sigma^2_n=E((I_n-I)^2)=\frac {t\int_0^tf^2(x)\,dx-I^2}n\to 0$$ regardless of how hairy your function is.
To get the simplest bound it's enough to know $M=\max_{x\in[0,t]} |f(x)|$, for then $\sigma_n^2\le\frac {M^2t^2}n$. |
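A minimal sketch of this estimator (the test function and sample size are my own choices):

```python
import math
import random

def mc_integral(f, t, n, seed=0):
    # I_n = (t/n) * sum_i f(X_i) with X_i ~ Uniform(0, t) i.i.d.
    rng = random.Random(seed)
    return t / n * sum(f(rng.uniform(0, t)) for _ in range(n))

# Example: the integral of sin over [0, pi] is exactly 2
est = mc_integral(math.sin, math.pi, 100_000)
print(est)  # close to 2; the error shrinks like sigma_n ~ n**(-1/2)
```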
Sum of certain binomial coefficients | Hint: Each term is just $\dbinom{q+k}{k} = \dbinom{q+k}q$. Then just hockey stick identity. |
Ramification of $5$ in $\mathbb{Q}( \sqrt[5]{n})$ | I will show that there is only one prime lying over $5$ in $\mathcal{O}_K$.
I will use the following proposition, which may be found in Neukirch's Algebraic Number Theory (prop. 8.2):
Proposition: Suppose the extension $L/K$ is generated by the zero $\alpha$ of the irreducible polynomial $f\in K\left[ X\right]$. Then the valuations $w_1,\dots,w_r$ of $L$ extending a valuation $v$ of $K$ correspond $1-1$ to the irreducible factors $f_1,\dots,f_r$ in the decomposition of $f$ over the completion $K_v$.
Using this proposition, it is enough to show that $X^5-n$ has a unique irreducible factor in $\mathbf{Q}_5$. If $n=0$, this is evident.
Otherwise, one may show that $X^5-n$ is irreducible. First note that $X^5-n$ has no quadratic factor in $\mathbf{Q}_5$: indeed, let $\alpha\in \overline{\mathbf{Q}_5}$ be a root of such a quadratic factor $Q$ so that $Q$ has roots $\alpha$ and $\alpha\zeta$ for some nontrivial fifth root of unity $\zeta$; then $\alpha^2\zeta$ and $\alpha(1+\zeta)$ are in $\mathbf{Q}_5$ (these are the coefficients of $Q$); from this, one deduces that $\alpha^2$ is in $\mathbf{Q}_5$, hence that $\zeta$ is in $\mathbf{Q}_5$, which leads to a contradiction. The same kind of reasoning shows that $X^5-n$ does not split, that it has no cubic factor with two linear factors nor any factor of degree $4$ (in all those cases, the contradiction is always the same: $\mathbf{Q}_5$ would contain a fifth primitive root of unity).
Thus, $X^5-n$ has a unique irreducible factor in $\mathbf{Q}_5$ which means that $5$ is totally ramified in $\mathcal{O}_K$. |
Limit with trigonometric function | $$\text{Let }\displaystyle y=\lim_{x\to1}\left(\frac{x^2+1}{2x}\right)^{\cot^2{\pi x}} \implies\ln y=\lim_{x\to1} \frac{\cos^2{\pi x}}{\sin^2\pi x}\cdot \ln\left(\frac{x^2+1}{2x}\right)$$
Setting $x-1=h,$
$$\ln y=\lim_{h\to0} \frac{\cos^2{\pi(h+1)}}{\sin^2\pi(h+1)}\cdot \ln\left(\frac{(h+1)^2+1}{2(h+1)}\right)$$
Now as $\displaystyle\frac{(h+1)^2+1}{2(h+1)}=1+\frac{h^2}{2(h+1)},$
$\displaystyle \cos^2{\pi(h+1)}=\{\cos(\pi+\pi\cdot h)\}^2=\{-\cos(\pi\cdot h)\}^2=\cos^2(\pi\cdot h)$
Similarly, $\displaystyle \sin^2{\pi(h+1)}=\sin^2(\pi\cdot h)$
Using $\displaystyle\lim_{u\to0}\frac{\ln(1+u)}u=1$ and $\displaystyle\lim_{v\to0}\frac{\sin v}v=1,$
$\displaystyle\ln y$
$$=\left(\lim_{h\to0}\cos(\pi\cdot h)\right)^2\lim_{h\to0}\frac{\ln\left(1+\frac{h^2}{2(h+1)}\right)}{\frac{h^2}{2(h+1)}}\cdot\frac1{\left(\lim_{h\to0}\frac{\sin(\pi\cdot h)}{\pi \cdot h}\right)^2}\cdot\frac1{\lim_{h\to0}2(h+1)\pi^2}$$
$$\implies \ln y=1^2\cdot1\cdot\frac1{1^2}\cdot\frac1{2\pi^2}$$
$$\implies \ln y=\frac1{2\pi^2}\implies y=e^{\frac1{2\pi^2}} $$ |
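A numeric check (my own) that the limit is indeed $e^{1/(2\pi^2)}\approx 1.052$, approaching $x=1$ from both sides:

```python
import math

def g(x):
    # ((x^2 + 1) / (2x)) ** cot(pi*x)^2
    cot2 = (math.cos(math.pi * x) / math.sin(math.pi * x)) ** 2
    return ((x * x + 1) / (2 * x)) ** cot2

target = math.exp(1 / (2 * math.pi**2))
for h in (1e-2, 1e-3, 1e-4):
    print(g(1 + h), g(1 - h), target)
```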
Covariance, covariance operator, and covariance function | The first definition is a special case of the second. Rather than a Hilbert space H, let's look at a Banach space $X$, and distinguish it from its dual space $X^*$. Every continuous linear functional $\varphi \in X^*$ is a random variable $\varphi : X \to \mathbb R$, so it makes sense to take expectations and covariances. We define the expectation of a functional by $\mathbb E[\varphi] = \int_X \varphi[x] \, \mathrm d \mathbf P(x)$, and the covariance of two functionals to be
$$\operatorname{cov}[\psi|\varphi] = \int_X \big( \psi[x] - \mathbb E[\psi]\big) \big( \varphi[x] - \mathbb E[\varphi]\big) \, \mathrm d \mathbf P(x).$$
Now, consider a probability measure $\mathbf P$ on a Hilbert space $X = H$. By the Riesz representation theorem, we know that the dual space $H^*$ is isomorphic to $H$, and all the functionals are of the form $\varphi_h[x] := \langle h, x \rangle$.
The mean can be represented by a single element $m \in H$, which is called the "Pettis integral" of $\mathbf P$. This element satisfies the property that $\mathbb E[\varphi_h] = \varphi_h[m] = \langle h, m \rangle$ for all $h \in H$.
Consequently,
$$\operatorname{cov}[\varphi_h|\varphi_{h'}] = \int_X \big\langle h, x - m \big\rangle \big\langle h', x-m \big\rangle \, \mathrm d \mathbf P(x).$$
This is the formula you were looking for. It's just a special case of the usual covariance formula, specialized to the setting where the random variables of interest are continuous linear functionals, and the probability space is a Hilbert space. |
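A finite-dimensional sketch (mine): take $H=\mathbb R^3$ with a Gaussian measure whose covariance matrix $C$ is diagonal, and check empirically that $\operatorname{cov}[\varphi_h|\varphi_{h'}]=\langle h, C h'\rangle$. All numbers below are illustrative choices.

```python
import random

rng = random.Random(0)
d = 3
c = [1.0, 4.0, 0.25]   # diagonal of the covariance matrix C
m = [2.0, -1.0, 0.5]   # the mean element of H

def sample():
    # draw x ~ N(m, C) coordinatewise (C is diagonal)
    return [m[i] + c[i] ** 0.5 * rng.gauss(0, 1) for i in range(d)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

h, hp = [1.0, 0.5, -2.0], [0.0, 1.0, 1.0]
N = 200_000
acc = 0.0
for _ in range(N):
    x = sample()
    xm = [x[i] - m[i] for i in range(d)]
    acc += dot(h, xm) * dot(hp, xm)   # <h, x - m> <h', x - m>
emp = acc / N
exact = sum(h[i] * c[i] * hp[i] for i in range(d))  # <h, C h'>
print(emp, exact)  # agree up to Monte Carlo error
```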
Lebesgue integrable implies zero almost everywhere | Hint: If $b\in L^\infty(\mathbb R),$ then there exists a sequence $g_n$ of bounded continuous functions on $\mathbb R$ such that $\|g_n\|_\infty\le \|b\|_\infty$ for all $n,$ and $g_n \to b$ pointwise a.e. |
What is the explicit formula for the general term of the sequence? | Hint:
$$
A_n = \begin{bmatrix}
a_n \\ a_{n-1}
\end{bmatrix}
=
\begin{bmatrix}
5a_{n-1} - 6a_{n-2} \\ a_{n-1}
\end{bmatrix}
=
\begin{bmatrix}
5 & -6 \\ 1 & 0
\end{bmatrix}
\begin{bmatrix}
a_{n-1}\\ a_{n-2}
\end{bmatrix} =
\begin{bmatrix}
5 & -6 \\ 1 & 0
\end{bmatrix} A_{n-1}.
$$ |
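The eigenvalues of that matrix are $2$ and $3$ (roots of $\lambda^2-5\lambda+6$), so $a_n=\alpha\,2^n+\beta\,3^n$ with $\alpha,\beta$ fixed by the initial values. The initial values aren't shown in the question here, so they are parameters in this cross-check of mine:

```python
def a_iter(n, a0, a1):
    # iterate a_n = 5*a_{n-1} - 6*a_{n-2} directly
    prev, cur = a0, a1
    for _ in range(n - 1):
        prev, cur = cur, 5 * cur - 6 * prev
    return cur if n >= 1 else a0

def a_closed(n, a0, a1):
    # a_n = alpha*2**n + beta*3**n, solving alpha+beta=a0, 2*alpha+3*beta=a1
    alpha, beta = 3 * a0 - a1, a1 - 2 * a0
    return alpha * 2**n + beta * 3**n

print([a_iter(n, 0, 1) for n in range(6)])
print([a_closed(n, 0, 1) for n in range(6)])  # the two agree
```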
How to find tangential accel? | You have $\theta=0.3t^2\implies \dot{\theta}=0.6t\implies \ddot{\theta}=0.6$
So for C, the tangential acceleration is $$r\ddot{\theta}=15\times0.6=9$$
And for D, the radial acceleration is $$r\dot{\theta}^2=15\times3.6^2=194.4$$ |
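Plugging in the numbers (the answer evaluates at $\dot\theta=3.6\ \mathrm{rad/s}$, i.e. at $t=6\ \mathrm s$, which I'm assuming from context):

```python
r = 15              # radius
t = 6.0             # time implied by theta_dot = 0.6 * t = 3.6
theta_dot = 0.6 * t
theta_ddot = 0.6

a_t = r * theta_ddot     # tangential: r * theta_ddot = 9
a_r = r * theta_dot**2   # radial: r * theta_dot**2 ≈ 194.4
print(a_t, a_r)
```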