title | upvoted_answer
---|---|
Limit of composite function: $\lim_{t \to 0} [\frac{\sin(\tan(t))}{\tan(t)}]$ | $u(0)$ is not involved in finding the limit. The required limit is $\lim_{s \to 0} u(s)=1$, because $s =\tan t \to 0$ as $t \to 0$ and $u(s) \to 1$ as $ s\to 0$.
Just apply the definition of the limit. There is no need to apply any theorem except the one which says $u(s) \to 1$ as $ s\to 0$. |
Coupon collector's problem worst case time? | If there are $n$ types of coupons, and you sample $k$ times with replacement, the probability you have $C=c$ different coupons has the recursion
$$\Pr(C=c|n,k)=\frac{n-c+1}{n}\Pr(C=c-1|n,k-1) + \frac{c}{n}\Pr(C=c|n,k-1)$$
starting at $\Pr(C=0|n,0)=1$ and $\Pr(C=c|n,0)=0$ for $c \ne 0$.
You can do the calculation for example on a spreadsheet to find $\Pr(C=9|n=9,k=26) \approx 0.62912$.
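For instance, the recursion is easy to run in a few lines of Python instead of a spreadsheet (a sketch; the function name is mine):

def coupon_probs(n, k):
    # P[c] = Pr(C = c | n, k), built up from k = 0 where Pr(C = 0) = 1
    P = [1.0] + [0.0] * n
    for _ in range(k):
        Q = [c / n * P[c] for c in range(n + 1)]
        for c in range(1, n + 1):
            Q[c] += (n - c + 1) / n * P[c - 1]
        P = Q
    return P

print(coupon_probs(9, 26)[9])  # ~0.62912, matching the value above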
More precisely, this is $\dfrac{9!}{9^{26}}S_2(26,9)$ where $S_2(r,n)$ is a Stirling number of the second kind and in general $$\Pr(C=c|n,k) = \dfrac{n!}{(n-c)!n^{k}} S_2(k,c).$$
To have $99\%$ certainty with $9$ types of coupons, you need to sample up to $58$ times; $99.9\%$ certainty requires $78$. |
Finding the eigenvalues and eigen vectors of linear transformation | Hint: We can rewrite $T$ as
$$
T \pmatrix{a\\b\\c\\d} = \pmatrix{a\\a+b\\b+c\\c+d}
$$
(strictly speaking, this is how $T$ acts on the coordinate vector with respect to a particular basis).
How can we represent the above transformation as a matrix? |
Real Analysis Continuous Function Problem | Assume $f(x_0)=0$ for some $x_0\in(-1,1)$. Then from $0=f(x_0)=f(x_0/2+x_0/2)=f(x_0/2)^2$ we conclude $f(x_0/2)=0$. By induction, $f(2^{-n}x_0)=0$ and by continuity $f(0)=0$. Then $f(x)=f(x+0)=f(x)f(0)=0$ for all $x\in(-1,1)$. As $f$ is not supposed to be the zero function, we conclude $f(x)\ne 0$ for all $x\in (-1,1)$. Together with $f(x)=f(x/2)^2\ge0$ we conclude $f(x)>0$ for all $x\in(-1,1)$.
Specifically $f(0)=f(0+0)=f(0)^2$ implies $f(0)=1$.
Pick $x_1\in (0,1)$ and let $a=\exp\left(\frac1{x_1}\ln f(x_1)\right)>0$. Let
$$ A=\{\,x\in(-1,1)\mid f(x)=a^x\,\}.$$
We have $x_1\in A$ and $0\in A$.
If $x,y,z\in(-1,1)$ with $x+y=z$ and two of $x,y,z$ are in $A$ then all are in $A$: For example if $x,y\in A$, then $f(z)=f(x)f(y)=a^xa^y=a^{x+y}=a^z$; and if $x,z\in A$, then $f(y)=\frac{f(z)}{f(x)}=\frac {a^z}{a^x}=a^{z-x}=a^y$.
In particular, $x\in A$ implies $-x\in A$ and then $nx\in A$ for all $n\in\mathbb Z$ with $nx\in(-1,1)$. Also $x\in A$ implies $\frac12x\in A$ and by induction $2^{-n}x\in A$ for all $n\in\mathbb N$. We conclude that $\frac m{2^n}x_1\in A$ for all $n\in \mathbb N$, $m\in\mathbb Z$ with $-1<\frac{m}{2^n}x_1<1$. Then $A$ is dense in $(-1,1)$ and by continuity of $f$, it is all of $(-1,1)$.
Note that this implies $\lim_{x\to 1}f(x)=a$. |
Properties of powerfully embedded subgroups | If $M$ is a nontrivial normal subgroup of a finite $p$-group $G$, then $[G,M] < M$, $[G,M]$ is normal in $G$, and $G$ centralizes $M/[G,M]$, so if we choose $L$ with $[G,M] \le L < M$ and $|M:L|=p$, then $L$ is normal in $G$ and $[M,G] \le L$. For Question (1) apply this with $M = [K,G]$.
For (2), by using the commutator identity $[ab,c] = [a,c][[a,c],b][b,c]$ (here $[a,b] = a^{-1}b^{-1}ab$) and $[N,G] \le Z_2(G)$, we get (for $n \in N$, $g \in G$)
$$[n^p,g] = [n,g]^p[[n,g],g]^{p(p-1)/2} = [n,g]^p[[n,g]^{p(p-1)/2},g]$$
from which the result follows (we are assuming that $p$ is odd). |
Extrema of $f:[-1,1]\to\mathbb{R}$ defined as $f(x)=1+|x|$ | Why so complicated?
We have $f(x) \ge 1$ for all $x$ and $f(0)=1.$
Furthermore: $f(x) \le 2 $ for all $x \in [-1,1]$ and $f( \pm 1)=2.$ |
Finding the direction vector for the tangent of two Surfaces | Yes, you are right. One approach is to take the cross product of the normal vectors of the two tangent planes. The other is to intersect the two planes and find the direction vector of the resulting line. Both are straightforward. Let $F(x,y,z) = 4x^2 + 3y + z^3 - 11$ and $G(x,y,z) = x^3 + 2y^2 -z - 8$. Then $\nabla F$ and $\nabla G$ are perpendicular to the level surfaces $F(x,y,z) = 0$ and $G(x,y,z) = 0$, i.e. to $S_1$ and $S_2$. So we have:
$$\nabla F \Big\vert_{(1,2,1)} = (\frac{\partial F}{\partial x}, \frac{\partial F}{\partial y}, \frac{\partial F}{\partial z})\Big\vert_{(1,2,1)} = (8x, 3, 3z^2) \Big\vert_{(1,2,1)} = (8, 3, 3)$$
$$\nabla G \Big\vert_{(1,2,1)} = (3x^2, 4y, -1) \Big\vert_{(1,2,1)} = (3, 8, -1)$$
And the cross product $(8, 3, 3) \times (3, 8, -1) = (-27, 17, 55)$.
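A quick numerical check of the cross product (a NumPy sketch; nothing here is from the original question):

import numpy as np
nF = np.array([8, 3, 3])   # gradient of F at (1, 2, 1)
nG = np.array([3, 8, -1])  # gradient of G at (1, 2, 1)
print(np.cross(nF, nG))    # [-27  17  55]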
I think you wrote $z$ as a function of $x,y$ and ran into messy fractions when computing the partial derivatives, especially for $S_1$, which involves a cube root ($z = \sqrt[3]{11-4x^2-3y}$). That is actually not necessary, since you can view each surface as a level surface of a three-variable function. |
Vector fields- differential topology | $p\in\mathbb{R}^{n+1}$ and so is $v$, hence the pair $(p,v)$ can be seen as living in $\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}$ which is $\mathbb{R}^{2n+2}$. |
Central Limit problem | \begin{align}
\Pr(Y<140) & = \Pr\left( \frac{Y-150}{\sqrt{120}} < \frac{140-150}{\sqrt{120}} \right) \\[10pt]
= {} & \Pr\left(Z<\frac{140-150}{\sqrt{120}}\right) = \Phi\left(\frac{140-150}{\sqrt{120}}\right).
\end{align} |
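Numerically, assuming the stated mean $150$ and variance $120$ of the answer above, this evaluates to about $0.18$ (a quick SciPy sketch):

from math import sqrt
from scipy.stats import norm
print(norm.cdf((140 - 150) / sqrt(120)))  # ≈ 0.1807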
How to Solve $\lim_{x\to \infty}\sqrt{x^2 + ax} - \sqrt{x^2 + bx}$ | $$
\begin{aligned}
\sqrt{x^{2}+ax}-\sqrt{x^{2}+bx}&=\frac{\sqrt{x^{2}+ax}-\sqrt{x^{2}+bx}}{\sqrt{x^{2}+ax}+\sqrt{x^{2}+bx}}\cdot(\sqrt{x^{2}+ax}+\sqrt{x^{2}+bx})\\[6pt]
&=\frac{x^{2}+ax-x^{2}-bx}{\sqrt{x^{2}+ax}+\sqrt{x^{2}+bx}}\\[6pt]
&=\frac{(a-b)x}{\sqrt{x^{2}+ax}+\sqrt{x^{2}+bx}}\\[6pt]
&=\frac{a-b}{\sqrt{1+a/x}+\sqrt{1+b/x}}.
\end{aligned}
$$
Now take the limit: as $x\to\infty$, both square roots tend to $1$, so the limit is $\frac{a-b}{2}$. |
Intuitive Understanding of Dependent Events in Probability | An example will make it clear.
If you throw two dice, there is no physical connection between them, so we say the two throws are independent.
Now consider another case: we draw balls from a bag containing balls of two colors, say red and blue. If we draw a blue ball first and don't replace it, then we have changed the likelihood of getting a ball of a particular color next time. We have changed the 'system' by removing a ball: it now has more balls of one color than the other. The 'system' remembers the change, so one event is dependent on the other. But dice have no memory.
Knowing the outcome of one die does not change how probability is distributed on the other die. So
$$P(A \text{ and } B)=P(A)\cdot P(B)$$
Thinking that past dice casts affect future ones is called the 'Gambler's Fallacy', and it has made some people lose lots of money at casinos.
Hope it helps. |
If $d$ is a metric on $M$, show that $\left|d(x,z) - d(y,z)\right|\leq d(x,y)$ for any $x,y,z\in M$. | $d(x,z)\leq d(x,y)+d(y,z)$ implies $d(x,z)-d(y,z)\leq d(x,y)$
$d(y,z)\leq d(y,x)+d(x,z)$ implies $d(y,z)-d(x,z)\leq d(x,y)$
This implies that $|d(x,z)-d(y,z)|\leq d(x,y)$. |
convergence of any sequence in infinite space | Define $X=\mathbb R^{\mathbb N}$ as the set of all functions from $\mathbb N$ to $\mathbb R.$
With any topology on $X,$ we define $\lim_{n\to \infty}f_n=f\iff \{f\}=\cap_{n\in \mathbb N}Cl(\{f_m: m\geq n\}).$
The (Tychonoff) product topology on $X$ is called the topology of point-wise convergence, because for $A\subset X$ and $f\in X$ we have $f\in \bar A $ iff there is a sequence $(f_n)_n$ in $A$ such that $\lim_{n\to \infty}|f_n(j)-f(j)|=0$ for each $j\in \mathbb N.$ So in this topology, a sequence $(f_n)_n$ converges to $f$ iff $f_n(j)$ converges to $f(j)$ for each $j\in \mathbb N.$ That is, iff $f_n\to f$ pointwise.
With respect to the uniform topology on $X$ we have $f\in \bar A$ iff there is a sequence $(f_n)_n$ in $A$ such that $\lim_{n\to \infty}\sup_{j\in \mathbb N}|f_n(j)-f(j)|=0.$ So in this topology, if $(f_n)_n$ converges to $f$ then $(\sup_{j\in \mathbb N}|f_n(j)-f(j)|)_n$ converges to $0,$ which requires that $f_n\to f$ pointwise.
Closure of an arbitrary $A\subset X$ in the box topology on $X$ cannot be described in terms of sequences. For example if $f(j)=0$ for all $j,$ and $A=\{g\in X: \forall j\;(g(j)>0)\}$ then $f\in \bar A$ but no sequence in $A$ converges to $f.$ This does not mean that non-trivial convergent sequences don't exist. For example if $f_n(1)=f(1)+1/n$ and $f_n(j)=f(j)$ for $j>1$ then $(f_n)_n$ converges to $f.$
But if a sequence $(f_n)_n$ converges to $f$ in the box topology then $f_n(j)$ must converge to $f(j)$ for each $j,$ so $f_n\to f$ pointwise. |
Show that $f$ is $3$-Lipschitz w.r.t the second component | Let us use the mean value theorem for functions with two variables, i.e., $f(x,y)-f(x,z)=\frac{\partial{}f}{\partial{}y}\Big|_{y=\xi}(y-z)$,
where $\xi$ is between $y$ and $z$.
Define
$f(x,y):=\frac{xy}{1+x^{2}+y^{2}}$ for $(x,y)\in{}D:=\{(x,y):\ x^{2}+y^{2}\leq1\}$.
It should be noted that $(x,y)\in{}D$ implies $|x^{2}-y^{2}|\leq{}x^{2}+y^{2}\leq1$ and $|x|\leq\sqrt{x^{2}+y^{2}}\leq1$. Then, we can compute that
$f(x,y)-f(x,z)=\frac{x(1+x^{2}-\xi^{2})}{(1+x^{2}+\xi^{2})^{2}}(y-z)$,
which yields
$|f(x,y)-f(x,z)|\leq\frac{|x|(1+x^{2}+\xi^{2})}{(1+x^{2}+\xi^{2})^{2}}|y-z|\leq\frac{1(1+1)}{(1+0)^{2}}|y-z|=2|y-z|$
for all $(x,y)\in{}D$, where $\xi$ is some number between $y$ and $z$. |
Composition Relations (Discrete Mathematics) | Note that $(g\circ f)\circ (f^{-1}\circ g^{-1})=g\circ (f\circ f^{-1})\circ g^{-1}=g\circ id_{A}\circ g^{-1}=g\circ g^{-1}=id_{A}$.
Also $(f^{-1}\circ g^{-1})\circ (g\circ f)=f^{-1}\circ (g^{-1}\circ g)\circ f=f^{-1}\circ id_{A}\circ f=f^{-1}\circ f=id_{A}$ .
So $(g\circ f)$ is bijective and its inverse is $f^{-1}\circ g^{-1}$. |
Solutions of the Laplace equation | You should also use the polar coordinate version of the Laplace operator
$$
\Delta u
= {\partial^2 u \over \partial r^2}
+{1 \over r} {\partial u \over \partial r}
+ {1 \over r^2} {\partial^2 u \over \partial \theta^2}
$$
and try $u(r, \theta) = f(r)$. This reduces to solving the ODE
$$
0 = g'(r) +{1 \over r} {g}
$$
for $g(r) = f'(r)$. |
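For what it's worth, this ODE is quickly checked symbolically (a SymPy sketch; it returns the familiar logarithmic solution $f(r) = C_1 + C_2 \ln r$):

import sympy as sp
r = sp.symbols('r', positive=True)
f = sp.Function('f')
# the radially symmetric Laplace equation: f'' + f'/r = 0
print(sp.dsolve(sp.Eq(f(r).diff(r, 2) + f(r).diff(r) / r, 0), f(r)))
# Eq(f(r), C1 + C2*log(r))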
Find all derangements for $n=4$ and $n=5$ | These kinds of things are actually really easy to find with constraint programming.
For a permutation of length $n$ we have the following constraints on a list:
- The list must have length $n$
- Every element $x$ must be in the range $1..n$
- All elements must be different
Looking at a derangement of length $n$ we have the additional constraints:
- Be a permutation
- Have no fixed points
We can put all of this together quite quickly in Prolog:
:- use_module(library(clpfd)).
% permut(L,N): L is a permutation of 1..N
permut(L,N) :-
length(L, N),
L ins 1..N,
all_different(L),
label(L).
% noFixpoint(L,P): no element of L equals its position (positions counted from P+1)
noFixpoint([],_P).
noFixpoint([X|Xs],N1) :-
X #\= N,
N1 #= N-1,
noFixpoint(Xs,N).
derangement(L,N) :-
permut(L,N),
noFixpoint(L,0).
findDerangements(N,Ls) :-
findall(L,derangement(L,N),Ls).
With derangement(L,N) we can now simply generate derangements of a certain size, or all of them (of a certain size) with findDerangements(N,Ls).
?- findDerangements(4,Ls), length(Ls,N).
Ls = [[2, 1, 4, 3], [2, 3, 4, 1], [2, 4, 1, 3], [3, 1, 4, 2], [3, 4, 1, 2], [3, 4, 2|...], [4, 1|...], [4|...], [...|...]],
N = 9.
?- findDerangements(5,Ls), length(Ls,N).
Ls = [[2, 1, 4, 5, 3], [2, 1, 5, 3, 4], [2, 3, 1, 5, 4], [2, 3, 4, 5, 1], [2, 3, 5, 1|...], [2, 4, 1|...], [2, 4|...], [2|...], [...|...]|...],
N = 44.
You can try it out online yourself. |
Verification of Disproof of Linear Diophantine Equations | There is also another problem with it.
For illustration, the equation $2x+3y+4z=1$ has an integer solution because $\gcd(2,3,4)=1$, while the equation $2x+4z=1$ has none, since $\gcd(2,4)=2$ does not divide $1$. On the other hand, the equation $2x+3y+5z=1$ is solvable, and so is each of the equations $2x+3y=1$, $2x+5z=1$ and $3y+5z=1$.
To summarize, your verification is true and false at the same time! |
Proof for bijection of a function between positive integers and nonprime positive integers. | Very interesting question!
Let us first prove that $S(m)$ cannot be prime. This follows from the fact that such a sequence cannot end in a prime, as the product would be divisible by this prime only once.
To prove it is a bijection from all integers greater than $1$ to all composite numbers, we give the inverse. The inverse $S^{-1}(n)$ is the greatest possible $m$ such that such a sequence exists. Note that this is well-defined for composite numbers, as you can take the sequence to consist of all primes dividing $n$ an odd number of times.
To prove this gives an inverse, we need to show $S^{-1}(S(m))=m$ holds for all $m$. This is because, if a sequence ending with $S(m)$ exists with a higher starting number than $m$, then taking the symmetric difference with the sequence starting with $m$ and ending with $S(m)$ gives a sequence starting with $m$ and with lower ending number than $S(m)$. |
2D system of hyperbolic equation (LeVeque) | Setting $p = R^{-1}q$, we have
$$
p_t + \Lambda_x p_x + \Lambda_y p_y = 0 \, .
$$
Since $\Lambda_x$ and $\Lambda_y$ are diagonal, this system is decoupled.
Applying the method of characteristics componentwise yields
$$
p(x,y,t) = R^{-1} q(x,y,t) = \begin{bmatrix}
F_1(\lambda^x_1 x -t, \lambda^y_1 x - \lambda^x_1 y) \\
F_2(\lambda^x_2 x -t, \lambda^y_2 x - \lambda^x_2 y)
\end{bmatrix}
$$
where $F_1$, $F_2$ are two arbitrary functions, and $\lambda_i^\alpha$ denotes the $i$-th diagonal entry of $\Lambda_\alpha$ (see this post, where the notations correspond to $\Phi = F_i$, $a=1$, $b=\lambda_i^x$, and $c=\lambda_i^y$). Now, it remains to apply the initial condition to determine the arbitrary functions:
$$
q(x,y,0) = R\, p(x,y,0) = \begin{bmatrix}
F_1(4 x, 2 x - 4 y) + F_2(2 x, -2 (x + y)) \\
F_1(4 x, 2 x - 4 y) - F_2(2 x, -2 (x + y))
\end{bmatrix} .
$$
From the initial condition $q_1(x,y,0) = \Bbb I_{x^2 + y^2 < 1}$ and $q_2(x,y,0) = 0$ in OP, we deduce that
\begin{aligned}
\tfrac12\Bbb I_{x^2 + y^2 < 1} &= F_2(2x,-2(x+y)) \\
& = F_2(X,Y) = \tfrac12\Bbb I_{X^2 + (Y+X)^2 < 4} \\
\tfrac12\Bbb I_{x^2 + y^2 < 1} &= F_1(4x,2 x - 4 y) \\
& = F_1(\xi,\eta) = \tfrac12\Bbb I_{\xi^2 + (\eta-\xi/2)^2 < 16}
\end{aligned}
where $\Bbb I$ is the indicator function. Finally,
$$
q(x,y,t) = \frac12 \begin{bmatrix}
\Bbb I_{(x-t/4)^2 + (y-t/8)^2 < 1} + \Bbb I_{(x-t/2)^2 + (y+t/2)^2 < 1} \\
\Bbb I_{(x-t/4)^2 + (y-t/8)^2 < 1} - \Bbb I_{(x-t/2)^2 + (y+t/2)^2 < 1}
\end{bmatrix}
$$
with the above expressions. |
An example of a sequence in the norm closed unit ball of $c_{00}$ has no weakly cluster point. | In this answer to your previous question we considered the sequence $(x_n)_n$ defined as $$x_n = \left(\frac12, \frac14, \frac18, \ldots, \frac1{2^n}, 0, 0, \ldots\right) \in B$$
We then showed that the sets $E_n = \{x_k : k \ge n\}$ are weakly closed and satisfy $\bigcap_{n=1}^\infty E_n = \emptyset$.
By taking complements we get $\bigcup_{n=1}^\infty E_n^c = c_{00}$.
Assume that $(x_n)_n$ has a weak cluster point $y \in c_{00}$. Then, since $\bigcap_{n=1}^\infty E_n = \emptyset$, there exists $n \in \mathbb{N}$ such that $y \in E_n^c$. Therefore $E_n^c$ is a weakly open neighbourhood of $y$, so $E_n^c$ contains infinitely many terms of the sequence $(x_n)_n$.
But this directly contradicts $E_n = \{x_k : k \ge n\}$, since $E_n^c$ can contain at most the terms $x_1, \ldots, x_{n-1}$. |
Why is the limit supremum an increasing sequence? Is my textbook incorrect? | Your argument is right. $b_k$ is the supremum of more numbers than $b_{k+1}$, hence it can hardly be smaller. So indeed we have $b_1\ge b_2\ge b_3\ge \ldots$. And likewise $c_1\le c_2\le c_3\le \ldots$ |
About this simple linear system $Ax=0$ under the condition $Dim(Ker(A))>0$ | No. Suppose for example that $A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}$. Then the kernel of $A$ is the linear span of $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$. In particular, every $x \in \ker(A)$ satisfies $x_2 = 0$.
A sufficient (but of course not necessary) condition to achieve what you want would be that every collection of $k-1$ columns is linearly independent. In that case the kernel vectors don't have a zero entry and you get what you want trivially.
Also note that your property is not basis-independent. So almost every change of basis would achieve this as well. |
Clarification on variance and expected value problem | Hints
How can you maximize $\mathbb{E}[Y]$? Note that it is given by
\begin{equation}
\mathbb{E}[Y] = \mathbb{P}(Y = 1) \cdot 1 + \mathbb{P}(Y = 2) \cdot 2 + \mathbb{P}(Y = 4) \cdot 4.
\end{equation}
The variance is given by
\begin{equation}
\operatorname{Var}(Y) = \mathbb{E}[Y^2] - \mathbb{E}[Y]^2.
\end{equation}
So what you want to do is maximize $\mathbb{E}[Y^2] - \mathbb{E}[Y]^2$, which means that you want to try to make $\mathbb{E}[Y^2]$ large and at the same time make $\mathbb{E}[Y]$ small. Remember that
\begin{equation}
\mathbb{E}[Y^2] = \mathbb{P}(Y = 1) \cdot 1^2 + \mathbb{P}(Y = 2) \cdot 2^2 + \mathbb{P}(Y = 4) \cdot 4^2.
\end{equation} |
Evaluate the following limit without using Stirling's formula | Let $L(a,n)=\prod_{k=1}^{\lfloor an \rfloor -1}\left(1-\frac{k}{n}\right)$.
We will use the inequalities $x < -\ln(1-x) < x+x^2$; the left one holds for all $0 < x < 1$, which is all we need below.
$\begin{aligned}
f(a,n)
&=-\ln(L(a, n))\\
&=-\sum_{k=1}^{\lfloor an \rfloor -1}\ln\left(1-\frac{k}{n}\right)\\
&=\sum_{k=1}^{\lfloor an \rfloor -1}-\ln\left(1-\frac{k}{n}\right)\\
&>\sum_{k=1}^{\lfloor an \rfloor -1}\frac{k}{n}
\qquad\text{since } -\ln(1-x) > x\\
&=\dfrac{(\lfloor an \rfloor -1)\lfloor an \rfloor}{2n}\\
&\ge\dfrac{(an-1)(an-2)}{2n}\\
&=n\,\dfrac{(a-1/n)(a-2/n)}{2}\\
&\to \infty
\qquad\text{for any } 0 < a < 1
\end{aligned}$
so
$L(a, n) \to 0$.
Here's a more accurate estimate.
$\begin{aligned}
f(a,n)
&=-\ln(L(a, n))\\
&=-\sum_{k=1}^{\lfloor an \rfloor -1}\ln\left(1-\frac{k}{n}\right)\\
&=-n\sum_{k=1}^{\lfloor an \rfloor -1}\dfrac1{n}\ln\left(1-\frac{k}{n}\right)\\
&\approx -n \int_0^a \ln(1-x)\,dx\\
&= -n \int_{1-a}^1 \ln(x)\,dx\\
&= -n \left.\bigl(x\ln(x)-x\bigr)\right|_{1-a}^1\\
&= -n \left(-1-\left((1-a)\ln(1-a)-(1-a)\right)\right)\\
&= -n \left(-1-(1-a)\ln(1-a)+1-a\right)\\
&= n \left(a+(1-a)\ln(1-a)\right)
\end{aligned}$
If $a = \frac14$, then $f(a, n) \approx n\left(\frac14+\frac34\ln\frac34\right)$. |
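As a numerical sanity check of the answer above, the exact product and the estimate agree to leading exponential order (a Python sketch; the names are mine, and math.prod needs Python 3.8+):

from math import exp, floor, log, prod

def L(a, n):
    return prod(1 - k / n for k in range(1, floor(a * n)))

a, n = 0.25, 1000
print(L(a, n))                               # the exact product
print(exp(-n * (a + (1 - a) * log(1 - a))))  # the estimate; both are of order 1e-15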
Is $\mathbb{Q}(\pi)$ a simple extension of $\mathbb{Q}\left(\frac{\pi^3}{1+\pi}\right)$? | Let $K=\mathbb{Q}(\frac{\pi^3}{1+\pi})$. Clearly $\pi$ satisfies the relation
$$\frac{x^3}{x+1}-\frac{\pi^3}{\pi+1}=0$$
Therefore $\pi$ is a root of the following irreducible polynomial of degree $3$:
$$x^3-\left(\frac{\pi^3}{1+\pi}\right)x-\left(\frac{\pi^3}{1+\pi}\right)\in K[x]$$
And to answer the question in the title of your post, yes, $\mathbb{Q}(\pi)$ is a simple extension of $K$ because there exists an $a\in\mathbb{Q}(\pi)$ such that
$$\mathbb{Q}(\pi)=K(a)$$
One obvious example is $a=\pi$, though there are infinitely many other $a$ that work too. |
Prove that $\operatorname{null} T_1 = \operatorname{null} T_2$ iff there exists an invertible operator $S$ such that $T_1=ST_2$. | Hint: Here's a proof for finite dimensional spaces: let $\{v_1,\dots,v_k\}$ be a basis for the shared null-space of $T_1$ and $T_2$. Extend this to a basis $\{v_1,\dots,v_n\}$ of $L$.
We want $S:W \to W$ to satisfy $T_1 v_j = ST_2 v_j$ for $j = k+1,\dots,n$. Why can we guarantee that such a map exists?
Try to recast this as a proof via quotient spaces. In particular, we have $\ker(T_1) = \ker(T_2) = K$, and $T_1$ and $T_2$ both induce maps $\tilde T_i: L/K \to W$. |
Can ADMM be applied to optimisation with matrix variables and inequality constraint? | Honestly, I do not see big issues here. Assume $n=2$ to clarify the ideas, and set $$ v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \ A= \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \ B= \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}, \ X= \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix}. $$Then $$ \|Xv\|_2 + \|X\|_F ^2 = \sqrt{ (v_1 x_{11} + v_2 x_{12} )^2 + (v_1 x_{21} + v_2 x_{22})^2 }+ x_{11} ^2 + x_{12} ^2 + x_{21} ^2 + x_{22} ^2, $$ $$\text{Tr}(AX) = a_{11} x_{11} + a_{12} x_{21} + x_{22} a_{22} + a_{21} x_{12} $$ and $$\text{Tr}(B)=b_{11} + b_{22}. $$
Then you want to minimize a nonlinear objective function (differentiable almost everywhere, but maybe you wanted $\|Xv\|_2 ^2$ instead?) under linear inequality constraints. This paper briefly explains how ADMM can deal with linear inequality constraints. |
Simple exercise on probability theory | Your working is fine. Note that
$$\frac{\binom75 \cdot \binom30 +\binom74 \cdot \binom31 + \binom73 \cdot \binom32 + \binom72 \cdot \binom33 }{\binom{10}{5}}=1$$ |
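This is an instance of the Vandermonde identity; a one-line check in Python:

from math import comb
print(sum(comb(7, 5 - i) * comb(3, i) for i in range(4)) == comb(10, 5))  # True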
Finding $a,b$ in this limit without using L’Hôpital’s rule | Set $x-1=h$
$$F=\lim_{h\to0}\dfrac{16^{1+h}-16(1+h)^4}{a^{1+h}-2(1+h)^b}$$
$$=16\cdot\lim_{h\to0}\dfrac{16^h-1-h(4+O(h))}{a(a^h-1)-2\left((1+h)^b-1\right)+a-2}$$
Divide numerator & denominator by $h\ne0$ as $h\to0$
$$F=16\cdot\dfrac{\lim_{h\to0}\dfrac{16^h-1}h-\lim_{h\to0}(4+O(h))}{a\lim_{h\to0}\dfrac{a^h-1}h-2\lim_{h\to0}\dfrac{(1+h)^b-1}h+\dfrac{a-2}h}$$
$a-2$ must be $0$ for the existence of the limit
$$F=16\cdot\dfrac{\ln16-4}{a\ln a-2b}=16\cdot\dfrac{4\ln2-4}{2\ln2-2b}$$
$\implies b=1$ |
Show that if y satisfies $y''+y=\sin^{2017}{x}\cos x$ then $y$ is a periodic function. | Let $f(t)=(\sin(t))^{2017}\cos(t)$. Put $$g(x)=\sin(x)\int_0^xf(t)\cos(t)dt-\cos(x)\int_0^x f(t)\sin(t)dt$$
Then we compute that $g$ is a particular solution of our differential equation. We want to show that $g$ is periodic with period $2\pi$. Writing this as $g(x+2\pi)-g(x)=0$, it is equivalent to showing that
$$\sin(x)\int_x^{x+2\pi}f(t)\cos(t)dt-\cos(x)\int_x^{x+2\pi}f(t)\sin(t)dt=0$$
It suffices to show that $\int_x^{x+2\pi}f(t)\cos(t)dt=\int_x^{x+2\pi}f(t)\sin(t)dt=0$. By computing the derivative of these functions, we see that they are constant; so it suffices to show that they vanish for a particular value of $x$. For the first, we take $x=-\pi$, and use the fact that $f(t)\cos(t)$ is odd to show that it is $0$. For the second, we take $x=0$, and use that a primitive of $f(t)\sin(t)$ is $\frac{\sin(t)^{2019}}{2019}$ to show it is $0$ also, and we are done. |
Lebesgue measure as a fixpoint: change of variables formulas | Before I start I'd like to point out that your change of variables formula only works for invertible functions (technically injective functions as well, but let me get back on that later). There is a more general version involving differential forms that works for arbitrary differentiable functions, but let's not complicate things.
Also I'd like to change the notation $\phi'(\mu)$ to $|\phi'| \mu$.
Anyway, using the fact that $\phi$ is invertible and that $\phi_*^{-1} \phi_*(\mu) = \mu$, we can show that for any $\mu$ satisfying $\phi_*(|\phi'| \mu) = \mu$ we also have that
$$
|\phi'| \mu = \phi_*^{-1}(\mu)
$$
or equivalently
$$
\tag{1}\label{*}
\def\d{\mathrm{d}}
\frac{\d\phi_*^{-1}(\mu)}{\d \mu} = |\phi'|
$$
Note that when $\eqref{*}$ holds for all translations, then $\mu$ must be the Lebesgue measure, since that is the only (complete) translation invariant measure.
Now let $\mu$ be any measure for which $\frac{\d \mu}{\d \lambda}$ exists (i.e. $\mu \ll \lambda$), then for any $f$
$$
\int_{\Omega} f \, \d \phi_*^{-1}(\mu) = \int_{\phi(\Omega)} (f \circ \phi^{-1}) \, \d \mu = \int_{\phi(\Omega)} (f \circ \phi^{-1}) \frac{\d \mu}{\d \lambda} \d \lambda = \int_{\Omega} f |\phi'| \, \frac{\d \mu}{\d \lambda}\d \lambda.
$$
So, using the properties of the Radon–Nikodym we find that ($\lambda$-almost everywhere)
$$
\frac{\d \phi_*^{-1}(\mu)}{\d \lambda} = \frac{\d \phi_*^{-1}(\mu)}{\d \mu} \frac{\d \mu}{\d \lambda} = |\phi'| \, \frac{\d \mu}{\d \lambda}
$$
which, provided $\frac{\d \mu}{\d \lambda} \neq 0$ almost everywhere, implies $\eqref{*}$. Therefore $\eqref{*}$ holds for all measures equivalent to the Lebesgue measure (i.e. $\mu \ll \lambda$ and $\lambda \ll \mu$).
As for what happens when $\mu \not\ll \lambda$ it's hard to say. It seems to depend quite a lot on the properties of $\phi$, but for any particular $\phi$ you can construct a pathological measure by starting with a measure $\mu$ and considering the sum
$$
\sum_{k=-\infty}^\infty (\phi_*|\phi'|)^k(\mu)
$$
where for $k \geq 0$
$$
\begin{align}
(\phi_*|\phi'|)^0(\mu) &= \mu \\
(\phi_*|\phi'|)^k(\mu) &= (\phi_*|\phi'|)^{k-1} \phi_*(|\phi'|\mu))\\
(\phi_*|\phi'|)^{-k}(\mu) &= (\phi_*|\phi'|)^{-k+1} (|\phi'|^{-1}\phi_*^{-1}(\mu))
\end{align}
$$
This sum won't always behave, but letting $\mu$ be $\delta_x$, the dirac delta measure at $x$, then as long as $x$ isn't a fixed point and $|\phi'(\phi^k(x))|$ doesn't become $0$ or undefined for any $k$ the resulting measure remains finite and well defined at least for all sets that only contain finitely many of the points in $\{ \phi^k(x) : k \in \mathbb{Z} \}$. Since this only excludes countably many $x$ there must be an $x$ for which the pathological construction works and is reasonably well behaved.
Appendix
To get back to the injective functions $f : \mathbb{R}^n \to U \subset \mathbb{R}^n$, note that those are still invertible when considered as functions onto their image $U$ rather than all of $\mathbb{R}^n$. This complicates things slightly, but all the steps in the first part of the proof still work. The main issue is that the pathological construction becomes somewhat more difficult: in case $\phi$ doesn't induce an isomorphism on any non-trivial subspace of $\mathbb{R}^n$, you're forced to let $x$ be a fixpoint, which results in the measure $\infty \delta_x$; technically this is still a counterexample, but it's not that interesting. |
Determinant of $M = \begin{pmatrix} I_n&iI_n \\iI_n&I_n \end{pmatrix}$ | There exists a generalization of Cofactor expansion called Laplace expansion. It is a very cumbersome method but it has a very natural and useful special case. Let
$$\begin{pmatrix}A & 0 \\ B & C\end{pmatrix}$$
be a matrix composed of block-matrices $A,B,C,0$ of appropriate dimensions so that this matrix makes sense. The matrix $0$ is a zero matrix. Then this result says
$$\det \begin{pmatrix}A & 0 \\ B & C\end{pmatrix} = \det (A) \det (C)$$
i.e. that the method for expanding upper triangular matrices actually also extends to 'upper-triangular-block-matrices'.
For your matrix we can add $-i$ times the second 'block-row' to the first block-row to get
$$\det M = \det \begin{pmatrix}2 I_n & 0 \\ iI_n & I_n \end{pmatrix} = \det(2I_n) \det(I_n) = 2^n$$ |
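A quick numerical check for a small $n$ (a NumPy sketch, not part of the original answer):

import numpy as np
n = 4
I = np.eye(n)
M = np.block([[I, 1j * I], [1j * I, I]])
print(np.linalg.det(M).real, 2 ** n)  # 16.0 16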
Zolotarev's Lemma and Quadratic Reciprocity | Zolotarev's Lemma relates the value of the Legendre symbol to the signature of a permutation of $\mathbb{Z}_p$. It is stated and proved below, along with its use in what is considered to be a very elegant proof of the quadratic reciprocity law.
Zolotarev's Lemma: Let $p$ be an odd prime, $a \in \mathbb{Z}_p^\times $, and $\tau_a : \mathbb{Z}_p \to \mathbb{Z}_p$ be the permutation of $\mathbb{Z}_p$ given by $\tau_a(x):= ax$; then
$$\left(\frac{a}{p}\right) = \text{sgn}(\tau_a) $$
Proof:
We determine the signature based on the parity of the cycle structure. Note
that the signature of a $k$-cycle is $(-1)^{k-1}$. Let $m = |a|$ in $\mathbb{Z}_p^\times$. Since $0$ is a singleton orbit (i.e. fixed point) of $\tau_a$, and therefore has no effect on its signature, it suffices to prove this for the restriction of $\tau_a$ to $\mathbb{Z}_p^\times$, as the signature will be the same for both. Each cycle has the form $(x,ax,a^2x,a^3x,\dots ,a^{m-1}x)$. Thus
the cycle structure consists of $(p-1)/m \ $ $\ m$-cycles, and its signature is therefore given by:
$ \\ $
$$
\text{sgn}(\tau_a)= \left((-1)^{m-1}\right)^{p-1 \over m} =
\begin{cases}
(-1)^{\frac{p-1}{m}} & \mathrm{if} \ m \text{ is even} \\
\\
\ \ \ 1 & \text{if } m \text{ is odd}
\end{cases}
$$
$ \\ $
Recall that $x^2 = 1 \; (\text{mod }p) \implies x = \pm 1 \; (\text{mod }p)$. If $m$ is even, then since $\left(a^{m/2}\right)^2 = a^m = 1$ and $ \ \frac{m}{2} < m = |a| $, we must have $a^{m/2} = -1$, so that $$ a^{\frac{p-1}{2}} = \left(a^{\frac{m}{2}}\right)^{\frac{p-1}{m}} = (-1)^{\frac{p-1}{m}} = \text{sgn}(\tau_a)$$
$ \\ $
If $m$ is odd, then $(2,m) = 1 \text{ and } 2,m \,\big| \, p\!-\!1 \implies 2m \, \big| \, p\!-\!1 $ and we have:
$$a^{\frac{p-1}{2}} = \left(a^m \right)^\frac{p-1}{2m} = 1^{\frac{p-1}{2m}} = 1 = \text{sgn}(\tau_a)$$
$ \\ $
Euler's criterion then finishes the argument.
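The lemma is also easy to test numerically; here is a small Python sketch (the helper names are mine):

def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p, mapped to {1, -1}
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def sign(perm):
    # signature from the cycle decomposition: a k-cycle contributes (-1)^(k-1)
    seen, s = set(), 1
    for i in range(len(perm)):
        k = 0
        while i not in seen:
            seen.add(i)
            i = perm[i]
            k += 1
        if k:
            s *= (-1) ** (k - 1)
    return s

p = 13
print(all(sign([a * x % p for x in range(p)]) == legendre(a, p)
          for a in range(1, p)))  # True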
$ \\ $
Corollary: If $p$ and $q$ are odd primes, $a \in \mathbb{Z}_q$, and $b \in \mathbb{Z}_p$, then $\left(\frac{p}{q}\right)$ and $\left(\frac{q}{p}\right)$ are equal to the signatures of the permutations $x \mapsto a + px$ of $\mathbb{Z}_q$ and $x \mapsto qx + b$ of $\mathbb{Z}_p$, respectively.
$ \\ $
Proof
The argument is symmetric. We shall prove it for $\left(\frac{p}{q}\right)$. Let $ a \in \mathbb{Z}_q$ and define the permutation $\sigma: \mathbb{Z}_q \to \mathbb{Z}_q$ by $\sigma(x):= a + x$. If $ a = 0$, then $\sigma = Id$ and $\text{sgn}(\sigma) = 1$. Otherwise, the permutation consists of a single $q$-cycle, $(x,a+x,2a+x,\dots,(q-1)a+x)$, and thus $\text{sgn}(\sigma) = 1$ also, as $q$ is odd. Letting $\tau_p$ be as defined above, the permutation $x \to a+px$ is equal to the composition $\sigma \tau_p$ and thus by Zolotarev's Lemma its signature is $\text{sgn}(\sigma \tau_p) = \text{sgn}(\sigma)\text{sgn}(\tau_p) = \text{sgn}(\tau_p) = \left(\frac{p}{q}\right)$.
$ \\ $
The Law of Quadratic Reciprocity: If $p$ and $q$ are odd primes then
$$\left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}$$
Proof
Let $\tau: \mathbb{Z}_{pq} \to \mathbb{Z}_p \times \mathbb{Z}_q \ \text{ and }
\ \lambda,\alpha : \mathbb{Z}_p \times \mathbb{Z}_q \to \mathbb{Z}_p \times \mathbb{Z}_q$ be permutations defined as follows:
$$
\begin{align}
\tau(x):=& \ (x,x) \\
\lambda(a,b):=& \ \left(a,a\!+\!p{}b\right) \\
\alpha(a,b):=& \ \left(q{}a\!+\!b,b\right)
\end{align}
$$
Now define the permutation $\varphi: \mathbb{Z}_{pq} \to \mathbb{Z}_{pq}$
by the rule $\varphi(a+pb):= qa+b$. This function is well-defined by the Division Algorithm, provided we view it as being defined only on the residues. It is routine to extend this argument to account for the congruence classes in general.
Note that $\varphi = \tau^{-1} \circ \alpha \lambda^{-1}\! \circ \tau$ and thus $\text{sgn}(\varphi) = \text{sgn}(\alpha)\text{sgn}(\lambda)$
We count the signature of $\varphi$ in two ways - by its cycle parity and then by its inversions. Looking at $\lambda$'s cycle structure, we note that for each $a \in \mathbb{Z}_p$ the restriction of $\lambda$ to $\{a\} \times \mathbb{Z}_q$ is still a permutation, and its cycle structure is identical to the cycle structure of the permutation $b \mapsto a+pb$ it induces in its second coordinate. In particular the restriction of $\lambda$ to $\{a\} \times \mathbb{Z}_q$ has a signature equal to $\left(\frac{p}{q}\right)$. We can then extend this function to the rest of $\mathbb{Z}_p \times \mathbb{Z}_q$ by making it the identity, and we can then view $\lambda$ as the $p$-fold composition of these permutations, and thus $\text{sgn}(\lambda) = \left(\frac{p}{q}\right)^p = \left(\frac{p}{q}\right)$. It is best to see this via an example. Similarly, $\text{sgn}(\alpha) = \left(\frac{q}{p}\right)$ and thus $$\text{sgn}(\varphi) = \left(\frac{p}{q}\right)\left(\frac{q}{p}\right)$$
We now count the inversions. Note that $$a_1 + p{}b_1 < a_2 +p{}b_2 \text{ and }q{}a_2+b_2 < q{}a_1+b_1 \implies a_1 - a_2 < p(b_2 - b_1) < p{}q(a_1 - a_2)$$
Since $a_1 - a_2$ gets larger when multiplied by the positive integer $pq$ we must have that $a_1 - a_2 > 0$ which then forces $b_2 - b_1 > 0$. It can also be seen that this is a sufficient condition for an inversion as well. Thus the pair $\left(a_1 + pb_1,a_2+pb_2 \right)$ represents an inversion under $\varphi$ if and only if $a_1 > a_2 \text{ and } b_2 > b_1$. Since given any pair of distinct integers one is necessarily larger than the other, any pair of doubles $(a_1,a_2),(b_1,b_2)$ corresponds to a unique inversion, provided we don't distinguish between $(a_1,a_2) \text{ and } (a_2,a_1)$ (and similarly for the $b_i$). The number of inversions is therefore
$$\binom{p}{2}\binom{q}{2} \equiv \frac{p-1}{2}\,\frac{q-1}{2} \pmod{2}$$
Equating the two values for $\text{sgn}(\varphi)$ gives us our result. |
Find number of triangles formed by lines (given: angle along x-axis) | Any set of three lines with pairwise different angles will form a triangle unless they pass through a common point. Excluding cases where three lines go through a common point, count the number of lines at each angle. I will take an example where the angles are $-10,-5,0,10,10,10,20,20,30,40,50$. We can represent this as $(1,1,1,3,2,1,1,1)$ because there are $3$ lines at $10$ and $2$ lines at $20$.
If all the lines were at different angles there would be ${11 \choose 3}=165$ triangles. We need to subtract the number of ways to get two parallel lines and one nonparallel, which is ${3 \choose 2}\cdot 8+{2 \choose 2}\cdot 9=33$. We then have to subtract the number of ways of getting three parallel lines, which is ${3\choose 3}=1$. The answer is then $131$.
If there are $n_i$ lines at angle $i$ and $n=\sum_i n_i$ is the total number of lines, the number of triangles is
$${n \choose 3}-\sum_i \left((n-n_i){n_i \choose 2} +{n_i \choose 3}\right)$$ |
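Applied to the example above (a short Python sketch; the variable names are mine):

from math import comb
counts = [1, 1, 1, 3, 2, 1, 1, 1]  # lines per angle, as in the example
n = sum(counts)
print(comb(n, 3) - sum((n - c) * comb(c, 2) + comb(c, 3) for c in counts))  # 131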
Inverse of ratio function | In general, because the graph of an inverse is the reflection of the graph of the function in the line $y=x$, the slope of the tangent to the inverse at the point $(x,f^{-1}(x))$ must be the reciprocal of the slope of the tangent to the function at $(f^{-1}(x),x)$:
$$
\frac d{dx}f^{-1}(x)=\frac 1 {f'( f^{-1}(x))}
$$
In your case $f(1)=2$ so $f^{-1}(2)=1$
So
$$
\left.\frac d{dx}f^{-1}(x)\right|_{x=2}=\frac 1 {f'( 1)}
$$ |
Intersected cone, a practical problem | I can't see your picture, but the intersection of a plane and cone will be a circle only if the plane is perpendicular to the axis of the cone. |
precise vs ordinary (?) definition of the limit | OK, the "$\epsilon$-$\delta$ definition" is for cases where the $\delta$ condition makes sense: $|x-a| < \delta$. This is meaningless for $a=\infty$. So a slight variant is used. You could call it the "$\epsilon$-$N$ definition" or something. Which one to use is determined by which one makes sense. We can imagine a third, similar, definition for
$$
\lim_{n\to\infty} a_n = \infty .
$$
Then neither the $\delta$ condition nor the $\epsilon$ condition makes sense. See if you can write the definition that will apply in that case. |
Finding if a particle of a parametric equation is moving horizontally. | Yes, certainly DNE means there is no solution in $t$ to $\dfrac{dx}{dt} = 0$, and hence, $\dfrac{dx}{dt}$ cannot equal zero, ever, and so certainly not at $t = \dfrac{\ln 5}{6}$.
What we need here is $$\dfrac{dy}{dx} = \dfrac{dy/dt}{dx/dt} = 0$$ with the denominator NOT equal to zero. And we've found it is not equal to zero (nor can it be). And that gives us $$\dfrac{dy}{dx} = \dfrac 0{dx/dt} = 0$$ |
Help with notation in linear algebra | You substitute for the argument ($x$) not the coefficients.
$\because p_n(x) = c_0+ c_1 x + c_2 x^2 + \cdots + c_n x^n$
$\therefore p_n(0) = c_0 + c_1 (0) + c_2 (0)^2 + \cdots + c_n (0)^n = c_0$
$\therefore p_n(1) = c_0 + c_1 (1) + c_2 (1)^2 + \cdots + c_n (1)^n = c_0 + c_1 + c_2 + \cdots + c_n$
So for quadratic polynomials (ie: $p_2(x)=c_0+c_1 x + c_2 x^2$) $$p_2(0)=0 \implies c_0 = 0 \\ p_2(1)=0 \implies c_0+c_1+c_2 = 0$$ |
Limit of $\mu(\theta)=\frac{e^{\theta}(\theta-1)+1}{(e^{\theta}-1)\theta}$ when $\theta \to \pm \infty$ & $\theta \to 0$ | Divide numerator and denominator by $\theta e^{\theta}$ to obtain
$$
\frac{1 - \theta^{-1} + \theta^{-1}e^{-\theta}}{1 - e^{-\theta}}.
$$
We have $\lim_{\theta\rightarrow\infty}e^{-\theta} = 0$ and $\lim_{\theta\rightarrow\infty}\theta^{-1}= 0$, so
$$
\lim_{\theta\rightarrow\infty}\frac{1 - \theta^{-1} + \theta^{-1}e^{-\theta}}{1 - e^{-\theta}} = \frac{1-0+0}{1-0} = 1.
$$
Can you do $\theta\rightarrow-\infty$ yourself? |
Show $\int_0^\infty \frac{\ln^2x}{(x+1)^2+1} \, dx=\frac{5\pi^3}{64}+\frac\pi{16}\ln^22$ | Here is a solution based on real methods. Note that $\frac{1}{x^2+2x+{2}}=\frac{x^2-2x+2}{x^4+4}$. Then
\begin{align}
&\int_0^\infty \frac{\ln^2x}{(x+1)^2+1}dx \\
= & \int_0^\infty \underset{x^2=\frac2{t^2}}{\frac{x^2\ln^2x }{x^4+4}}dx
-2\int_0^\infty \underset{x^2=2t}{\frac{x\ln^2x }{x^4+4}}dx
+ 2\int_0^\infty \underset{x^2=2t^2}{ \frac{\ln^2x }{x^4+4} }dx\\
&\hspace{-5mm} =
-\frac{\ln^22}8\int_0^\infty \frac{dt}{t^2+1}
-\frac18\int_0^\infty \frac{\ln^2tdt}{t^2+1}+\frac{\ln^22 }{2\sqrt2} \int_0^\infty \frac{dt}{t^4+1}
+ \sqrt2 \int_0^\infty \frac{\ln^2 t dt}{t^4+1} \\
= &
-\frac{\ln^22}8\left(\frac\pi2\right)
-\frac18\left(\frac{\pi^3}8\right) +\frac{\ln^22 }{2\sqrt2}\left(\frac\pi{2\sqrt2}\right)
+ \sqrt2 \left(\frac{3\pi^3\sqrt2}{64}\right) \\
=& \frac{5\pi^3}{64}+\frac\pi{16}\ln^22
\end{align} |
The degree extension of $\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}(\sqrt{2})$ is at most $2$ because $[\mathbb{Q}(\sqrt{3}),\mathbb{Q}] = 2$ | We have $[\Bbb Q(\sqrt3):\Bbb Q]=2$. This means that there is an irreducible polynomial of degree $2$ with rational coefficients where $\sqrt3$ is a root. (One such polynomial is $x^2-3$, although the exact polynomial expression is of little importance..)
Now for $[\Bbb Q(\sqrt3,\sqrt2):\Bbb Q(\sqrt2)]$. It corresponds to the degree of an irreducible polynomial with coefficients in $\Bbb Q(\sqrt2)$ where $\sqrt3$ is a root. Clearly the polynomial from before is a polynomial with valid coefficients and the right root. Is it irreducible? Don't know, don't care. Regardless, there is an irreducible polynomial of either first or second degree with $\sqrt3$ as a root, meaning the extension has either degree $1$ or degree $2$. |
Problem finding the number of r-element multi-subsets of the multi-set $M=\{ a_{1},a_{2},...,a_{n},m.b \} $ | Let $S=\{a_1,\cdots,a_n\}$ and consider the cases 1) $r\le n, r\le m,\;\;$ 2) $n\le r\le m,\;\;$ 3) $m\le r\le n$.
In each case, we can choose $i$ elements from $S$ and $r-i$ $b's$, where $i\le r$ and $i\le n$ and $r-i\le m$.
1) In case 1, $0\le i\le r$ and there are $\binom{n}{i}$ ways to choose $i$ elements from S, so
$\;\;\;$there are $\displaystyle\sum_{i=0}^{r}\binom{n}{i}$ multisets.
2) In case 2, $0\le i\le n$ so there are $\displaystyle\sum_{i=0}^{n}\binom{n}{i}=2^n$ multisets.
$\;\;\;$ (In this case, we can choose any subset of $S$, and $S$ has $2^n$ subsets.)
3) In case 3, $i\le r$ and $r-i\le m\implies r-m\le i\le r$,
$\;\;\;$so there are $\displaystyle\sum_{i=r-m}^{r}\binom{n}{i}$ multisets. |
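A brute-force check of these counts for small parameters (a Python sketch; the helper names are mine):

from itertools import combinations
from math import comb

def count_multisubsets(n, m, r):
    # choose i distinct a's and r - i copies of b
    return sum(comb(n, i) for i in range(max(0, r - m), min(n, r) + 1))

M = list(range(5)) + ['b'] * 3  # n = 5 distinct elements plus m = 3 copies of b
subsets = {tuple(sorted(c, key=str)) for c in combinations(M, 4)}
print(len(subsets), count_multisubsets(5, 3, 4))  # 30 30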
Proving a function $f:\mathbb{Z}_{nm} \rightarrow \mathbb{Z}_n \times \mathbb{Z}_m$ is an isomorphism | You first have to prove the map is well defined.
Define
$$
g\colon\mathbb{Z}\to\mathbb{Z}_n\times\mathbb{Z}_m
\qquad
g(r)=(\bar{r},\bar{r})
$$
This is a ring homomorphism with no assumption on $m$ and $n$. The kernel is
$$
\ker g=\{r\in\mathbb{Z}:r\in n\mathbb{Z}\cap m\mathbb{Z}\}=k\mathbb{Z}
$$
where $k=\operatorname{lcm}(n,m)$, by well known results on ideals in the ring of integers. Granted that $n$ and $m$ are coprime (which probably is what your exercise tells you), you have $\operatorname{lcm}(n,m)=nm$.
The homomorphism theorem therefore provides a unique injective ring homomorphism
$$
f\colon\mathbb{Z}_{nm}\to\mathbb{Z}_n\times\mathbb{Z}_m
$$
such that $f(\bar{r})=g(r)=(\bar{r},\bar{r})$.
Since $f$ is injective and domain and codomain have the same number of elements, $f$ is also surjective. |
Prove that the recurrence is true | The base case is fine if you include strings with three or more consecutive zeros:
$$ t4: 1000, 0000, 0001 $$
Breaking the problem down, you can consider the first bit as either $1$ or $0$. For $1$, there are no extra results and you get $t_{n-1}$. For $0$, you get a bonus set of cases where the first three bits are zero, which occurs $2^{n-3}$ times. Repeating the process for the second and third digits (minus the cases already counted in the $2^{n-3}$) gives the rest of the argument you were asked to prove. |
Integral of a piecewise given function | Define $$f(t,x) = \begin{cases} \frac 1{x_{\max}},& t\in[0,x]\\0,& t\notin [0,x]\end{cases}$$
Then $$p(t) = \int_0^{x_\max} f(t, x)\,dx$$
where the limits are from $0$ to $x_\max$, not $-\infty$ to $\infty$, because you said yourself that $x\in [0, x_\max]$. Note that in your definition of $p(t)$, $x$ is a dummy variable, not an actual part of the definition. Its only purpose is to make the notation work. You could switch to a different variable (other than $t$ and $x_\max$, which already have roles) without changing the meaning at all. So in saying $x\in [0, x_\max]$, you are just admitting that you goofed up the limits of the integration.
Now break it out into two cases:
$t < 0$ or $t > x_\max$. Then $t\notin [0,x]$ for all $x$ in the limits, so $f(t,x) = 0$ and $\int_0^{x_\max} 0\,dx = 0$
$t \in [0, x_\max]$. Then
$$\begin{align}p(t) &= \int_0^{x_\max} f(t, x)\,dx \\&= \int_0^t f(t, x)\,dx + \int_t^{x_\max} f(t, x)\,dx\\&=\int_0^t 0\,dx + \int_t^{x_\max} \dfrac 1{x_\max}\,dx\\&= 0 + \dfrac 1{x_\max}(x_\max -t)\\&=1 - \dfrac t{x_\max}\end{align}$$
Putting it together:
$$p(t) = \begin{cases}0,& t < 0\\1 - \dfrac t{x_\max}, & 0 \le t \le x_\max\\0,& x_\max < t\end{cases}$$ |
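A numeric sanity check of the final formula (a sketch; $x_{\max} = 2$ is an arbitrary choice of mine):

import numpy as np
x_max = 2.0
xs = np.linspace(0.0, x_max, 400001)
dx = xs[1] - xs[0]
for t in (0.5, 1.0, 1.5):
    f = np.where(t <= xs, 1.0 / x_max, 0.0)  # for t >= 0, t is in [0, x] iff t <= x
    print(f.sum() * dx, 1 - t / x_max)       # the two columns agree closely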
eigen value of the gradient operator | $\def\R{{\bf r}}
\def\K{{\bf k}}
\def\o{\cdot}$The function $\phi(\R) = e^{a \K\o\R}$ is an eigenfunction of the differential operator $\nabla$ with eigenvalue $a\K$.
This can be shown by proving that
$\nabla \phi(\R) = a\K \phi(\R)$
for the given $\phi(\R)$.
For the $x$-component
\begin{eqnarray*}
\nabla_x e^{a \K\o\R} &=& \frac{\partial}{\partial x}
\exp a (k_x x+ k_y y + k_z z) \\
&=& a k_x \exp a (k_x x+ k_y y + k_z z) \\
&=& a k_x e^{a \K\o\R}.
\end{eqnarray*}
The other components go similarly.
The result follows. |
Norm of $ T:(C([0,1]),||\cdot||_\infty)\rightarrow\mathbb{R}$ where $Tf=\sum_{k=1}^n a_kf(t_k)$? | WLOG, one can assume that $a_k\ne0$ for all $k$ (if not, just drop all the zero-values, and in the case that all are zero, the problem is trivial).
Take $f(x)$ a continuous function with these properties:
$f(t_k)=\frac{|a_k|}{a_k}$.
$|f(x)|\le 1$ for all $x\in[0,1]$.
For example, you can take the polygonal through the points $\{(t_k,f(t_k)),\,1\le k\le n\}\cup\{(s_k,0),1\le k\le n-1\}$, where $s_k=\frac{t_k+t_{k+1}}{2}$ (the mid point of the interval $[t_k,t_{k+1}]$).
Then it is clear that $\|f\|_\infty=1$ (observe that $|f(t_k)|=1$) and $Tf=\sum_{k=1}^n |a_k|$.
This together with your calculations leads to $\|T\|=\sum_{k=1}^n|a_k|$. |
If totally disconnectedness does not imply the discrete topology, then what is wrong with my argument? | Note that being totally disconnected means that the only connected nonempty subsets are the singletons. To see that, for example, $\{ a , b \}$ is not connected, it suffices to find open subsets $U , V \subseteq X$ with the following properties:
$U \cap \{ a,b \} = \{ a \}$;
$V \cap \{ a,b \} = \{ b \}$;
This would not imply, however, that either $\{ a \}$ or $\{ b \}$ is an open subset of $X$. This only means that $\{ a \}$ and $\{ b \}$ are open subsets of the subspace $\{ a,b \}$ of $X$.
To give more details with regards to an actual example, consider the rationals $\mathbb{Q}$ as a subspace of the real line. Given any subset $A \subseteq \mathbb{Q}$ of size $> 1$, pick $p,q \in A$ with $p < q$. Then there is an irrational number $x$ such that $p < x < q$. Note that $U = ( - \infty , x ) \cap \mathbb{Q}$ and $V = ( x , + \infty ) \cap \mathbb{Q}$ are open subsets of $\mathbb{Q}$ which have the following properties:
$U \cap A \neq \emptyset \neq V \cap A$;
$A \subseteq U \cup V$; and
$( U \cap V ) \cap A = \emptyset$.
This demonstrates that $A$ cannot be a connected subset of $\mathbb{Q}$; and in general, the only nonempty connected subsets of $\mathbb{Q}$ are the singletons.
Note, however all nonempty open subsets of $\mathbb{Q}$ are infinite. |
Correct logic of permuting 5 men and 5 women to find probability of different highest women rank | In your calculations,
$5$ is the number of possible top-ranked women
$\binom{5}{k}$ is the number of ways of choosing the $k$ men ranked above the top-ranked woman (note that $\binom{5}{k} = \binom{5}{5-k}$, which is the form appearing below)
$(9 - k)!$ is the number of ways of arranging the $9 - k$ people whose rankings are lower than that of the top-ranked woman
$10!$ is the number of possible sequences of rankings
In your numerators, you failed to multiply by the number of ways the men who are selected before the first woman can be ranked. Observe that
\begin{align*}
P(X = 1) & = \frac{0! \cdot 5 \cdot \binom{5}{5} \cdot 9!}{10!} = \frac{1}{2}\\
P(X = 2) & = \frac{1! \cdot 5 \cdot \binom{5}{4} \cdot 8!}{10!} = \frac{5}{18}\\
P(X = 3) & = \frac{2! \cdot 5 \cdot \binom{5}{3} \cdot 7!}{10!} = \frac{5}{36}\\
P(X = 4) & = \frac{3! \cdot 5 \cdot \binom{5}{2} \cdot 6!}{10!} = \frac{5}{84}\\
P(X = 5) & = \frac{4! \cdot 5 \cdot \binom{5}{1} \cdot 5!}{10!} = \frac{5}{252}\\
P(X = 6) & = \frac{5! \cdot 5 \cdot \binom{5}{0} \cdot 4!}{10!} = \frac{1}{252}
\end{align*}
The reason you obtained the correct answer for $X = 1$ and $X = 2$ is that $0! = 1! = 1$.
Note: The author(s) of your text are calculating the probability that the top-ranked woman is in the $k$th position. For that to occur, we must select $k - 1$ of the five men and one of the five women while choosing $k$ of the $10$ people, and the chosen woman must then be ranked last among those $k$ people, which happens with probability $\frac{1}{k}$. Hence, the answer in the text is equivalent to
\begin{align*}
P(X = 1) & = \frac{\binom{5}{0}\binom{5}{1}}{1\cdot\binom{10}{1}} = \frac{1}{2}\\
P(X = 2) & = \frac{\binom{5}{1}\binom{5}{1}}{2\binom{10}{2}} = \frac{5}{18}\\
P(X = 3) & = \frac{\binom{5}{2}\binom{5}{1}}{3\binom{10}{3}} = \frac{5}{36}\\
P(X = 4) & = \frac{\binom{5}{3}\binom{5}{1}}{4\binom{10}{4}} = \frac{5}{84}\\
P(X = 5) & = \frac{\binom{5}{4}\binom{5}{1}}{5\binom{10}{5}} = \frac{5}{252}\\
P(X = 6) & = \frac{\binom{5}{5}\binom{5}{1}}{6\binom{10}{6}} = \frac{1}{252}
\end{align*} |
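A quick simulation agrees with these values (a Python sketch):

from random import shuffle
people, trials = ['M'] * 5 + ['W'] * 5, 200000
counts = [0] * 7
for _ in range(trials):
    shuffle(people)
    counts[people.index('W') + 1] += 1  # position of the top-ranked woman
print([c / trials for c in counts[1:]])
# ≈ [0.5, 0.278, 0.139, 0.060, 0.020, 0.004]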
Definition of a bilinear map | $f(ae_1,ce_1)=acf(e_1,e_1)$ because $f$ is bilinear (meaning linear in both arguments).
Since $f$ is linear in the first argument, $f(ae_1,ce_1)=af(e_1,ce_1)$.
Since $f$ is linear in the second argument, $af(e_1,ce_1)=acf(e_1,e_1)$. |
How to calculate a number of $x$ values such that $f(x)$ is a quadratic residue modulo $m$? | Step 1: You can ignore the $-b$ since for each residue of $(3x+3a)^2-b$ there is a corresponding residue of $(3x+3a)^2$ whichi s just $b$ more than the former.
Step 2: Similarly you can ignore the $a$ since for each value of $(3x+3a)$ there is a value $(3y)$ with $y=x+a \pmod m$.
So we want to find the number of quadratic residues of the form $(3x)^2 \pmod m$.
If $m$ is a prime greater than $3$, then there are $\frac{m+1}{2}$ quadratic residues mod $m$ (counting $0$), and for each such residue $r$, say $r=q^2$, since $3$ has an inverse mod $m$,
$x=3^{-1}q\pmod m$ is a number such that $(3x)^2 = r.$ So if $m$ is an odd prime greater than $3$, then the answer is $\frac{m+1}{2}$.
If $m$ is the product of distinct primes all greater than $3$ then again we can find $x=3^{-1}\pmod m$ and the number of quadratic residues matches the number of possible values of $(3x)^2$; and that is the product of the number of quadratic residues for each factor. So for example, when $m = 5\cdot 11 \cdot 19$ there are
$3\cdot 6 \cdot 10$ possible values of $(3x)^2\pmod m$.
If $m=3$ or $m=9$, then there of course is only one residue, namely $(3x)^2=0$.
Other cases can be solved as well, but counting the residues for an arbitrary prime power is a bit trickier. |
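For example, a quick enumeration confirms the prime case (a Python sketch):

p = 29
print(len({(3 * x) ** 2 % p for x in range(p)}), (p + 1) // 2)  # 15 15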
Total ordering on complex numbers | If we had an order on the complex numbers, then either $i \prec 0$ or $0 \prec i$.
If $0 \prec i$, then $$0i \prec ii \implies 0 \prec -1$$
Then since $0 \prec -1$, we see that $0 \prec (-1)^2 = 1$. Using (iii) we get
$$0 \prec -1 \implies 1 = 0 + 1 \prec -1 + 1 = 0 \implies 1 \prec 0 \prec 1$$
contradicts (i). The case that $i \prec 0$ is similar. Just use (ii) and add $(-i)$ both sides. |
Proving exponent law for real numbers using the supremum definition only | $
\def\qq{\mathbb{Q}}
$ ... there exists a rational number $t≤x$ such that $(a^x)^r < (a^t)^r ≤ \sup\{a^{rs}:s∈\qq,s≤x\}$ ...
This is correct. After this you then concluded $a^x < a^t$, but I am not sure whether you actually understood how to get it, because you cannot simply "take $r$-th root" unless you have already proven the needed inequalities involving rational powers of reals. Some work is needed here.
Moreover, you should not immediately seek a proof by contradiction when doing real analysis. Instead, focus on the underlying structure. Here, we can in fact directly prove the desired result. Here is a sketch (I'll leave the proof of each substep to you):
$(a^x)^y = \sup \{ ( \sup\{ a^r : r∈\qq_{<x} \} )^s : s∈\qq_{<y} \}$
$ = \sup \{ \sup\{ (a^r)^s : r∈\qq_{<x} \} : s∈\qq_{<y} \}$
$ = \sup \{ (a^r)^s : r∈\qq_{<x} ∧ s∈\qq_{<y} \}$
$ = \cdots$
It should be easy to finish now, using the definition of multiplication of reals and the properties of exponentiation for rationals. |
If H is a subgroup of a group G and K is a normal subgroup of G the show that K is a normal subgroup of HK. | Well, you know that $K \subseteq HK \subseteq G$, right? And you have $K \unlhd G$, so $gkg^{-1} \in K$ for every $g \in G$, $k \in K$. A fortiori, this holds for every $g \in HK$. Hence $K$ is a normal subgroup of $HK$. |
How many $3 \times 3$ integer matrices are orthogonal? | Dot products of columns can be used (they must be $0$). The first column of matrix $A$ has $6 = 2\cdot3$ possibilities: a location for the entry $1$ or $-1$, and its sign. Once that location is chosen, the rest of that row and column must be zeros, which leaves only $2\cdot2 = 4$ possible choices of location and sign for the second column, and the third column is left with only $2$.
6*4*2=48.
But if you want only rotation matrices (right-handed frames), there are just $24$, because the third column is then determined as the cross product of the first and second columns. |
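Brute force confirms the count; since each column of an integer orthogonal matrix is an integer unit vector, every entry lies in $\{-1,0,1\}$, so $3^9$ candidates suffice (a NumPy sketch):

import numpy as np
from itertools import product
count = 0
for entries in product((-1, 0, 1), repeat=9):
    A = np.array(entries).reshape(3, 3)
    if (A @ A.T == np.eye(3)).all():
        count += 1
print(count)  # 48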
Area of region r(theta) | The complication that arises when a polar curve is described by its "radius-squared" being given by a trigonometric function is that we may need to deal with negative and even imaginary radii. There is a simple protocol for working with "negative" radii, but having imaginary ones will introduce gaps in the angles covered by the curve.
In the case of the lemniscate described by $ \ r^2 \ = \ 162 \ \cos (2 \theta) \ $ , in the principal circle $ \ 0 \ \le \ \theta \ < \ 2 \pi \ $ , there are both positive and negative radii in the intervals $ \ 0 \ \le \ \theta \ \le \ \frac{\pi}{4} \ $ , $ \ \frac{3 \pi}{4} \ \le \ \theta \ \le \ \frac{5 \pi}{4} \ $ , and $ \ \frac{7 \pi}{4} \ \le \ \theta \ < \ 2 \pi \ $ . This has the effect of simultaneously "sweeping out" the half of one lobe that lies in the first quadrant and the half of the other lobe found in the third quadrant; these tracings of the curve meet at the origin for $ \ \theta \ = \ \frac{\pi}{4} \ $ . The "tracing" is then interrupted by having imaginary radii until $ \ \theta \ = \ \frac{3 \pi}{4} \ $ , where the other two half-lobes are then swept out up to $ \ \theta \ = \ \pi \ $ , completing the lemniscate. (Continuing further in angle simply re-traces the curve.)
The origin represents the angles $ \ \frac{(2k + 1) \ \pi}{4} \ , \ $ for all integers $ \ k \ $ .
To evaluate the area of the lemniscate, we can most easily avoid confusion by working with the portion of the right-hand lobe lying in quadrant I , which is traced over the interval $ \ 0 \ \le \ \theta \ \le \ \frac{\pi}{4} \ . $ We may integrate over this region and multiply the result by 4 :
$$ A \ = \ \ 4 \ \int_0^{\pi / 4} \ \frac{1}{2} [r(\theta)]^2 \ \ d\theta \ \ = \ \ 2 \ \int_0^{\pi / 4} \ 162 \ \cos (2 \theta) \ \ d\theta $$
$$ = \ 2 \cdot 162 \ \left[ \ \frac{1}{2} \ \sin (2 \theta) \ \right] \vert_0^{\pi / 4} \ = \ 162 \ ( \ \sin \ \frac{\pi}{2} \ - \ \sin \ 0 \ ) \ = \ 162 \ \ . $$
Indeed, we see that we can easily generalize this to say that the area of a lemniscate described by $ \ r^2 \ = \ C \ \cos (2 \theta) \ $ is just $ \ A \ = \ C \ $ .
[As a check on the credibility of our result, we can take the total area of the lemniscate as approximated by two ellipses with major axes of $ \ \sqrt{162} \ = \ 9\sqrt{2} \ $ . If we use the "height above" the $ \ x-$ axis at $ \ \theta \ = \ \frac{\pi}{6} \ $ as the semi-minor axis, this is $ \ [ \ \sqrt{162 \ \cos (2 \cdot \frac{\pi}{6})} \ ] \ \sin \frac{\pi}{6} \ = \ \frac{9}{2} \ $ . The area approximation by ellipses is then $ \ 2 \cdot \pi a b \ = \ 2 \ \cdot \ \pi \ \cdot \ \frac{9}{2} \sqrt{2} \ \cdot \ \frac{9}{2} \ = \ \frac{81 \sqrt{2}}{2} \pi \ \approx \ 180 \ $ , which agrees to about a 10% difference.] |
Does the series $\sum_{k=1}^{\infty}\left(\sqrt{k+\frac{1}{k}}-\sqrt{k}\right)$ converge or diverge? | It converges. Just multiply each term with $\frac{\sqrt{k+\frac{1}{k}}+\sqrt{k}}{\sqrt{k+\frac{1}{k}}+\sqrt{k}}$.
$$\sum_{k=1}^{\infty}\left(\sqrt{k+\frac{1}{k}}-\sqrt{k}\right) = \sum_{k=1}^\infty \frac{k+ \frac{1}{k} - k}{\sqrt{k+\frac{1}{k}}+\sqrt{k}} = \sum_{k=1}^\infty \frac{\frac1k}{\sqrt{k+\frac{1}{k}}+\sqrt{k}} \le \sum_{k=1}^\infty \frac{1}{2k^{3/2}} < +\infty$$ |
Example of an increasing, integrable function $f:[0,1]\to\mathbb{R}$ which is discontinuous at all rationals? | In general for any countable set $C \subset \mathbb R$ you can find a monotone function that is discontinuous only on $C$. Your particular case has already been answered elsewhere on the site, see this answer of Brian Scott. |
limit of the sequence $s_n=\sum_{k=1}^n (-1)^{k+1}a_k$ | You have the series
$$s_n=\sum_{k=1}^n (-1)^{k+1}a_k \tag{1}\label{eq1A}$$
Note by "decreasing" I assume it means $\le$ and by "increasing" I assume it means $\ge$. If it's to be considered "strict" instead, then replace the $\le$ with $\lt$, and $\ge$ with $\gt$ below.
You have
$$\lim_{k\rightarrow\infty}a_k=0 \tag{2}\label{eq2A}$$
Since the sequence $s_{2n + 1}$, $n \geq 0$, is decreasing, you thus have that
$$\begin{equation}\begin{aligned}
s_{2n+3} - s_{2n+1} & \le 0 \\
\sum_{k=1}^{2n+3} (-1)^{k+1}a_k - \sum_{k=1}^{2n+1} (-1)^{k+1}a_k & \le 0 \\
(-1)^{(2n+3) + 1}a_{2n+3} + (-1)^{(2n+2) + 1}a_{2n+2} & \le 0 \\
a_{2n+3} - a_{2n+2} & \le 0 \\
a_{2n+2} & \ge a_{2n+3}
\end{aligned}\end{equation}\tag{3}\label{eq3A}$$
Similarly, since the sequence $s_{2n}$, $n\geq1$, is increasing. you get
$$\begin{equation}\begin{aligned}
s_{2n+2} - s_{2n} & \ge 0 \\
\sum_{k=1}^{2n+2} (-1)^{k+1}a_k - \sum_{k=1}^{2n} (-1)^{k+1}a_k & \ge 0 \\
(-1)^{(2n+2) + 1}a_{2n+2} + (-1)^{(2n+1) + 1}a_{2n+1} & \ge 0 \\
-a_{2n+2} + a_{2n+1} & \ge 0 \\
a_{2n+1} & \ge a_{2n+2}
\end{aligned}\end{equation}\tag{4}\label{eq4A}$$
Using $n = 0$ in \eqref{eq3A} gives
$$a_2 \ge a_3 \tag{5}\label{eq5A}$$
while using $n = 1$ in \eqref{eq4A} gives
$$a_3 \ge a_4 \implies a_2 \ge a_3 \ge a_4 \tag{6}\label{eq6A}$$
Now, repeating the procedure using $n = 1$ in \eqref{eq3A} and $n = 2$ in \eqref{eq4A}, and combining the results with \eqref{eq6A}, gives
$$a_2 \ge a_3 \ge a_4 \ge a_5 \ge a_6 \tag{7}\label{eq7A}$$
You can fairly easily prove, such as by using induction (which I'm leaving to you to do), that
$$a_k \ge a_{k + 1}, \; \forall \; k \ge 2 \tag{8}\label{eq8A}$$
This means the $a_k$ terms, apart from possibly $a_1$, are all monotonically decreasing. Thus, using \eqref{eq2A}, the alternating series test states that $s_n$ converges.
Note in your proof attempt, if you let $x_n = s_{2n}$ and $y_n = s_{2n+1}$, then $x_n - y_n = -(-1)^{(2n + 1) + 1}a_{2n + 1} = -a_{2n + 1}$, which is actually $\le 0$ instead as you have $x_n$ and $y_n$ mixed around. Also, in your next line where you state
$$\lim(y_n - x_n) = \lim(s_1 + s_3 + \cdots) + \lim(s_2 + s_4 + \cdots) = \lim s_n$$
you seem to use that $y_n$ and $x_n$ are the sums of $s_{2n + 1}$ and $s_{2n}$ from $1$ up to $n$ rather than just being those terms as you earlier defined them.
Nonetheless, with your definitions, you have $y_n - x_n = a_{2n + 1}$, so $\lim_{n \to \infty}(y_n - x_n) = 0$. Also, as shown in \eqref{eq8A}, all $a_{k}$ for $k \ge 2$ are non-negative (as it's a monotonically decreasing series with a limit of $0$) so for every $n \in \mathbb{N}$, $y_n - x_n \ge 0$. Thus, you could also use the result which you stated you proved earlier. |
If the reflection of the hyperbola $xy = 4$ in the line $x - y + 1 = 0$ is $xy = mx + ny + l$ find $m + n + l$ | Any point on $xy=4$ will be $P(2t,2/t)$
Now any point on $L:x-y+1=0$ can be written as $Q(u,u+1)$
If $R(h,k)$ is any point on $$xy=mx+ny+l,$$
$L$ will be the perpendicular bisector of $PR$.
The equation of $PR:$ $$x+y=2t+2/t\implies h+k=2t+2/t$$
Also, $Q$ being the midpoint of $PR,$
$$2u=2t+h\iff h=?,2(u+1)=2/t+k\iff k=?$$
$$2u-2t+2(u+1)-2/t=2t+2/t\iff2u=2t+2/t-1$$
$$\implies h=2u-2t=2/t-1\text{ and } k=2u+2-2/t=1+2t$$
$$\implies(h+1)(k-1)=2/t\cdot2t=4,$$ i.e. $hk = h - k + 5$, so $m=1$, $n=-1$, $l=5$ and $m+n+l=5$. |
What does it mean for a domain to lie left to a path? | Imagine walking along the path in the direction in which it’s oriented. Put your arms out; the left arm points into the region. |
Expected values in a sequence | As already mentioned, $E[a_n]=(2\mu)^n$ for every $n$, whatever the variance is. An explanation for the deviation observed in the simulations might be the following (although I fail to see how the graph in the post could represent any sequence $(a_n)$ generated as the OP explains, since almost surely $a_n\lt0$ for infinitely many $n$... unless one plots $|a_n|$ and not $a_n$?).
Assume that $(x_n)$ is i.i.d. normal with mean $1$ and variance $v$, and consider $y_n=x_1x_2\cdots x_n$, then $y_n$ is $a_n/(2\mu)^n$ for some variance $v$ and indeed, $E[y_n]=1$. But the almost sure behaviour of $y_n$ (the one simulations would exhibit) is quite different.
To wit, considering the normal density $f_v$ with mean $1$ and variance $v$, one sees that $x_k$ is in the infinitesimal interval $(x,x+\mathrm dx)$ approximately $nf_v(x)\mathrm dx$ times hence
$$
\sum_{k=1}^n\log|x_k|\approx\int\log|x|\,nf_v(x)\mathrm dx.
$$
This heuristics can be made rigorous, which shows that, when $n\to\infty$,
$|y_n|=\mathrm e^{nI(v)+o(n)}$, where
$$
I(v)=\int_\mathbb R\log|x|\,f_v(x)\mathrm dx=\frac1{\sqrt{2\pi}}\int_\mathbb R\log|1+x\sqrt{v}|\,\mathrm e^{-x^2/2}\mathrm dx.
$$
Thus, the ratio between the blue curve and the red curve should behave like $\mathrm e^{nI(v)}$. Qualitatively, this explains the straight lines in the simulations.
Quantitatively, the parameters used in the simulation (mean $2$, variance $1/2$) correspond to $v=1/8$, and $I(1/8)\approx-.0825$, which is negative, hence indeed the ratio goes to zero, almost surely. For $n=800$, this indicates a ratio red/blue of order $10^{29}$, which is much greater than what the simulations indicate.
My guess is that the parameters are actually mean $2$ and standard deviation $1/2$ (not variance). This guess yields $v=1/16$, $I(1/16)\approx-.0352$ and a ratio red/blue of order $10^{13}$, which seems compatible with the simulations. |
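The dichotomy is easy to reproduce numerically; here is a hedged sketch (the parameters are my choice): the empirical mean of $y_n$ hovers around $1$, while the typical size, measured by the median, follows $\mathrm e^{nI(v)}$.
```python
import numpy as np

# Products of n i.i.d. N(1, v) variables with v = 1/16.
rng = np.random.default_rng(0)
n, trials, v = 100, 100_000, 1 / 16
y = rng.normal(1.0, np.sqrt(v), size=(trials, n)).prod(axis=1)

print(y.mean())              # near E[y_n] = 1 (noisy, since the tail is heavy)
print(np.median(np.abs(y)))  # the typical size, far below the mean
print(np.exp(n * -0.0352))   # predicted e^{n I(1/16)}, about 0.03
```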
A Galois Theory Question | These are called Artin-Schreier extensions. Everything you want to know is at Wikipedia or by searching on that term. |
If $u\in L^1(\mathbb T)$ why $u\in \mathcal S'(\mathbb R^n)$? | Due to the periodicity of $u$ you have
\begin{align*}
\left|\int_{\mathbb R^n}u\varphi\,dx\right|
&\le\,\int_{\mathbb R^n}|u||\varphi|\,dx = \sum_{k\in\mathbb Z^n}\int_{\mathbb T}|u||\varphi(x+k)|\,dx\\
&= \sum_{k\in\mathbb Z^n}\int_{\mathbb T}|u||\varphi(x+k)|(1+|x+k|)^{2n}\cdot\frac{1}{(1+|x+k|)^{2n}}\,dx\\
&\le p_{2n}(\varphi)\|u\|_{L^1}\sum_{k\in\mathbb Z^n}\sup_{x\in\mathbb T}\frac{1}{(1+|x+k|)^{2n}}\\
&\le 2^np_{2n}(\varphi)\|u\|_{L^1}\sum_{k\in\mathbb N^n}\sup_{x\in\mathbb T}\frac{1}{(1+|x+k|)^{2n}}\\
&\le 2^np_{2n}(\varphi)\|u\|_{L^1}\sum_{k\in\mathbb N^n}\frac{1}{(1+|k|)^{2n}}.
\end{align*}
The last series converges: By means of AM-GM we have
$$
1+|k| = 1+\sqrt{\sum_1^nk_j^2}\ge\sqrt{1+\sum_1^nk_j^2}\ge\frac{1}{\sqrt{2n}}\sqrt{\sum_1^n(1+k_j)^2}\ge \frac{1}{\sqrt 2}\prod_1^n(1+k_j)^{1/n}.
$$
That is,
$$
(1+|k|)^{2n}\,\ge\,\frac{1}{2^n}\prod_1^n(1+k_j)^{2}.
$$
This gives
$$
\sum_{k\in\mathbb N^n}\frac{1}{(1+|k|)^{2n}}\,\le\,2^n\sum_{k_1=0}^\infty\cdots\sum_{k_n=0}^\infty\frac 1{(1+k_1)^2}\cdots\frac 1{(1+k_n)^2} = 2^n\left(\sum_{k=0}^\infty\frac 1{(1+k)^2}\right)^n,
$$
which is a finite number. |
Why multiplication isn't the monoid of number instead of summation since both operations are monoidal? | You are right, but in order to disambiguate, you can talk about the additive monoid of natural numbers, that is, $(\mathbb{N}, +, 0)$, versus the multiplicative monoid of natural numbers, $(\mathbb{N}, \times, 1)$.
Answer to your comment. To start with, you can look at the Wikipedia entry Monoid. |
Expected Number of steps to delete all nodes of a tree | Depending on what kind of answer you expect, this is one answer or a long comment. You can bound $ e_n $, the maximum expected trimming number ("trimming" sounds just right) of an $ n $-vertex tree, by $ \frac{n + 1}2 $ by induction:
$ \bullet $ Initialization: $ e_1 = 1 = \frac{1 + 1}2 $.
$ \bullet $ Induction: Take an $ n $-vertex tree and notice that trimming the root gives you $ 1 $ step and all the $ n - 1 $ other trimmings give you at most $ 1 + e_i $ steps for some $ 1 \le i \le n - 1 $ (which corresponds to the number of vertices remaining). So $ e_n \le 1 + \frac{e_{i_1} + \dots + e_{i_{n - 1}}}n \le 1 + \frac{(n - 1)\frac n2}n = \frac{n + 1}2 $.
Reciprocally, this maximum is reached by the tree of depth one (one vertex connected with the $ n - 1 $ others) as it attains the equality case of the inequality in the induction.
You can prove that the trimming number of $ L_n $ the line graph is $ H_n = 1 + \dots + \frac 1n $ the $ n $-th harmonic number by induction:
$ \bullet $ Initialization: $ L_1 $ has trimming number $ 1 = H_1 $.
$ \bullet $ Induction: Notice that trimming $ L_n $ gives $ L_i $ for $ i $ equiprobably chosen in $ [\![0, n - 1]\!] $. That means it has trimming number $ 1 + \frac{H_1 + \dots + H_{n - 1}}n = 1 + \frac{\sum_{i = 1}^{n - 1} \frac {n - i}i}n = \frac 1n + \frac{\sum_{i = 1}^{n - 1} \frac ni}n = H_n $
I highly conjecture this is the minimal trimming number of a $ n $-vertices tree. |
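Both formulas are easy to check by simulation; here is a minimal sketch (the encodings of the two trees are mine): the depth-one tree averages $(n+1)/2$ trims and the line graph averages $H_n$.
```python
import random

def trim_star(n):
    # one root with n - 1 leaves attached
    steps, alive = 0, n
    while alive > 0:
        steps += 1
        if random.randrange(alive) == 0:   # the root was chosen: whole tree gone
            alive = 0
        else:                              # a leaf was chosen
            alive -= 1
    return steps

def trim_path(n):
    # line graph rooted at one end; vertices sit at depths 0 .. alive-1
    steps, alive = 0, n
    while alive > 0:
        steps += 1
        alive = random.randrange(alive)    # cutting depth d removes depths >= d
    return steps

n, reps = 10, 200_000
print(sum(trim_star(n) for _ in range(reps)) / reps)   # ~ (n + 1)/2 = 5.5
print(sum(trim_path(n) for _ in range(reps)) / reps)   # ~ H_10 = 2.9289...
```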
Computing the volume of a fundamental domain of a lattice | If we rotate $V$ so that it becomes the standard $\mathbb R^n$ (i.e. all $v_i$ have $0$ entries in positions $n+1,\ldots,m$), we can cut off the higher components and compute the determinant of the resulting $n\times n$ matrix.
The same can be achieved by adding, one by one, vectors $v_{n+1},\ldots,v_m$ such that each of these has unit length and is orthogonal to all preceding vectors. Then compute the resulting $m\times m$ determinant.
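Numerically, both procedures agree with the Gram-determinant formula $\sqrt{\det(V^\top V)}$; a small sketch (the example matrix is mine), realising the rotation through a full QR decomposition:
```python
import numpy as np

V = np.array([[1., 0.],    # the n = 2 basis vectors as columns,
              [0., 1.],    # living in m = 3 dimensions
              [1., 1.]])
m, n = V.shape

Q, R = np.linalg.qr(V, mode='complete')     # Q is an m x m orthogonal matrix
vol_rotate = abs(np.linalg.det(R[:n, :n]))  # rotate, cut off, take n x n det
W = np.hstack([V, Q[:, n:]])                # append orthonormal v_{n+1}, ..., v_m
vol_extend = abs(np.linalg.det(W))          # m x m determinant
vol_gram = np.sqrt(np.linalg.det(V.T @ V))  # Gram formula, for comparison

print(vol_rotate, vol_extend, vol_gram)     # all equal sqrt(3) = 1.732...
```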
Number theory problem - contradiction | With your rather scarce data, you have a contradiction. If $\;p\;$ is a prime dividing $\;c\;$ , then:
$$\left(\frac{ab}c=d\iff ab=cd\right)\implies p\mid ab\;,\;\;p\nmid a\implies p\mid b $$
Taking the maximal power of each prime dividing $\;c\;$, the same argument shows that every prime power dividing $\;c\;$ also divides $\;b\;$; hence $\;c\mid b\;$ and $\;b\ge c\;$, contradiction.
Show that given $N$ iid variates $X_i$ uniform on (0,1), $P(\max_i X_i > \frac{1}{2}\sum_i X_i)$ is $\frac{1}{(N-1)!}$ | Let $X_j$ be independent uniform random variables on $(0,1)$. Conditioned on $\max(X_1, \ldots, X_N) = X_m$, the other $X_j$ are independent uniform on $(0, X_m)$. Thus your probability is the probability that the sum of $N-1$ independent uniform(0,1) random variables is less than 1. The region
$A = \{(x_1, \ldots, x_{N-1}) \in \mathbb R^{N-1}: x_i \ge 0,\ x_1 + \ldots + x_{N-1} \le 1\}$ is a simplex with vertices $0$ and the $e_j$ (the standard unit vectors).
But you'll find the answer is $1/(N-1)!$, not $1/(2 N!)$. |
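A Monte Carlo sketch confirming $1/(N-1)!$:
```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
for N in (3, 4, 5):
    x = rng.random((500_000, N))
    p = (x.max(axis=1) > 0.5 * x.sum(axis=1)).mean()
    print(N, p, 1 / factorial(N - 1))   # empirical frequency vs 1/(N-1)!
```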
Mathematics of MC Escher | "The Symmetries of Things" by Conway et al has some really interesting material at different levels including wallpaper groups, and also some examples (and great illustrations) which could be used in a high school or maths club. |
Describe the space of solutions to a simple matrix equation | We assume $n=p$. Then $Z=\{U,V\in M_n(\mathbb{C})|UV^T=VU^T\}$ is an algebraic variety; let $\Delta_n=dim(Z)$, that is the maximal dimension of its connected components. Note that we work over the complex numbers.
Proposition: $\Delta_n=(3n^2+n)/2$.
Proof: i) Let $U\in M_n$ be a fixed matrix. According to Horn, Sergeichuk, ArXiv 0709.2473, lemma 2, $U$ is congruent to $diag(B,J_{r_1},\cdots,J_{r_s})$ where $J_k$ is the nilpotent Jordan block of dimension $k$ and $B$ is invertible. We give a proof when $U=diag(B,J),B\in M_p,J\in M_q,p+q=n$; indeed, the general case works in the same way and gives the same result. Putting $V^T=\begin{pmatrix}P&Q\\R&S\end{pmatrix}$, we obtain $BP,JS$ symmetric and $Q=B^{-1}(JR)^T$. $P$ depends on $p(p+1)/2$ parameters, $S$ depends on $q(q+1)/2$ parameters and $(Q,R)$ depends on $pq$ parameters. Finally $V^T$ or $V$ depends on $n(n+1)/2$ independent parameters, that is, a function of $n$ only.
ii) The maximal dimension with respect to $U$ is obtained for $p=n$ and therefore $\Delta_n=n^2+n(n+1)/2$.
EDIT. Let $\Delta_{n,p}=dim(\{U,V\in M_{n,p}(\mathbb{C})|UV^T=VU^T\})$. Note that "$UV^T$ symmetric" is equivalent to $n(n-1)/2$ relations; thus $\Delta_{n,p}\geq 2np-n(n-1)/2=(4np-n^2+n)/2$. Numerical calculations seem to "show" the following: if $p\leq n$, then $\Delta_{n,p}=np+p(p+1)/2$ and if $p\geq n$, then $\Delta_{n,p}=(4np-n^2+n)/2$.
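The fiber dimension $n(n+1)/2$ in part i) can be checked numerically: for fixed $U$ the equation is linear in $V$, so one can assemble the matrix of $V\mapsto UV^T-VU^T$ on the $n^2$ entries of $V$ and read off the kernel dimension (a sketch with a random, hence generic, $U$):
```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
U = rng.standard_normal((n, n))    # generic U

# Matrix of the linear map V -> U V^T - V U^T on the n^2 entries of V.
E = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        V = np.zeros((n, n))
        V[i, j] = 1.0
        E[:, i * n + j] = (U @ V.T - V @ U.T).ravel()

kernel_dim = n * n - np.linalg.matrix_rank(E)
print(kernel_dim, n * (n + 1) // 2)   # both 10, so dim Z = n^2 + 10 = 26 here
```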
Proving that $f = X^3+2$ is irreducible in $\mathbf{F}_{49}[X]$, and is $f$ irreducible over all $\mathbf{F}_{7^n}$ for $n$ even? | Since $X^3+2$ is irreducible over $\Bbb F_7$, any extension of $\Bbb F_7$ where it is reducible must contain $\Bbb F_{7^3}$, as that's the smallest extension of $\Bbb F_7$ where an irreducible cubic has a root.
Thus $\Bbb F_{49}$ cannot have a root. On the other hand, any extension of $\Bbb F_{7^3}$ will have a root. Thus, over, for instance, $\Bbb F_{7^6}=\Bbb F_{(7^3)^2}$ the polynomial will have a root and thus be reducible. |
Probability that exactly k out of n candidates are hired in the Hiring Problem? | Let $P(n,k)$ be the probability of exactly $k$ hires among $n$ candidates. I'm not sure that there's a compact algebraic formula for $P(n,k)$, but there's a simple recursive one. We have the boundary conditions, $$
\begin{align}
P(n,k)=0,\ k>n\\
P(n,1)=\frac1n\\
P(n,n)=\frac1{n!}
\end{align}
$$
Call the least favorable candidate $1$, the second-worst $2$, and so on. If candidate $m$ comes on day $1$, the $m-1$ candidates worse than $m$ may be ignored and we will hire $k$ candidates if and only if we hire exactly $k-1$ from the remaining $n-m$. Since the probability that candidate $m$ is the first to show up is $\frac1n$, we have $$P(n,k)=\frac1n\sum_{m=1}^{n-k+1}P(n-m,k-1),\ 1<k\leq n$$ (the sum stops at $m=n-k+1$ because $P(n-m,k-1)=0$ once $k-1>n-m$).
P.S. As you suspected, the accepted answer for the cited question is wrong, but a correct one has been posted recently.
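Incidentally, a hire happens exactly at each left-to-right maximum (record) of the arrival order, so $P(n,k)$ is the number of permutations of $n$ elements with $k$ records divided by $n!$, an unsigned Stirling number of the first kind over $n!$. A brute-force sketch checking the recursion against direct enumeration:
```python
from fractions import Fraction
from functools import lru_cache
from itertools import permutations
from math import factorial

@lru_cache(maxsize=None)
def P(n, k):
    if k < 1 or k > n:
        return Fraction(0)
    if k == 1:
        return Fraction(1, n)
    return Fraction(1, n) * sum(P(n - m, k - 1) for m in range(1, n - k + 2))

def brute(n, k):
    # count permutations whose number of left-to-right maxima equals k
    good = sum(1 for perm in permutations(range(n))
               if sum(v == max(perm[:i + 1]) for i, v in enumerate(perm)) == k)
    return Fraction(good, factorial(n))

assert all(P(n, k) == brute(n, k) for n in range(1, 7) for k in range(1, n + 1))
print("recursion agrees with brute force up to n = 6")
```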
Probability of all trial success in infinite Bernoulli trials | HINT: The probability of $n$ successes in a row is $p^n.$ What happens as $n\to\infty$? What if $p=1$? What if $p<1?$ |
Pareto optimality in matching pennies | Matching pennies is a zero-sum game. In a zero-sum game, all strategy profiles are Pareto-optimal, as there is a fixed sum to be distributed and it's impossible to redistribute it without making someone worse off. |
Evaluating the statement an "An injective (but not surjective) function must have a left inverse" | I'm not sure what you mean by “injective-only”. Anyway, what you want is, given $f\colon A\to B$ that is injective, a map $g\colon B\to A$ such that $g\circ f$ is the identity on $A$, that is, for all $a\in A$, $g(f(a))=a$.
First let me add the assumption that $A$ is not empty (otherwise the result is false). Let $a_0\in A$. Which one? It's irrelevant.
Now, how do you define a map $g\colon B\to A$?
Let's look at step 1 in your picture. If $b\in B$, there are two cases:
$\bullet$ $b=f(a)$, for some $a\in A$ (example: $s=f(2)$)
$\bullet$ $b\ne f(a)$, for all $a\in A$ (example: $t$)
In the first case, define
$$
g(b)=a
$$
which is justified because there is a unique $a\in A$ such that $b=f(a)$, by injectivity.
In the second case, define $g(b)=a_0$.
Thus we have a map $g\colon B\to A$. If $a\in A$, then
$$
g(f(a))=a
$$
by definition of $g$, so $g$ is really the sought left inverse of $f$.
The point about $a_0$ sometimes confuses beginners (just like those pictures do, actually). There is no special role about $a_0$; it is needed in order to define $g$ also on elements “not reached by $f$”. One might define $g$ on these elements by saying “pick an element of $A$ at random”, but this would be much more difficult to formalize. We are looking for one left inverse, not for all of them. |
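For what it's worth, here is the construction carried out on a small made-up example (names chosen for illustration only):
```python
A = {1, 2, 3}
B = {'r', 's', 't', 'u'}
f = {1: 'r', 2: 's', 3: 'u'}       # injective, not surjective: 't' is missed

a0 = next(iter(A))                 # some fixed element of A; which one is irrelevant
g = {b: a0 for b in B}             # second case: elements not reached by f
g.update({image: a for a, image in f.items()})   # first case: invert the graph of f

assert all(g[f[a]] == a for a in A)   # g(f(a)) = a, so g is a left inverse of f
```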
Question about convergence of series | Defining $\displaystyle\int_{-\infty}^{\infty}x\,dx=\lim_{M\rightarrow\infty}\displaystyle\int_{-M}^{M}x\,dx=0$ is taking the integral in the principal value sense. In most cases, unless it is stated otherwise, we do not take the principal value; rather, we define $\displaystyle\int_{-\infty}^{\infty}x\,dx=\lim_{M,N\rightarrow\infty}\int_{-M}^{N}x\,dx$ with $M$ and $N$ tending to infinity independently, which does not exist. This is called the improper Riemann integral.
Formula for Repeated Integration of Exponentials | Let $f_k(t) = k ! \int _ { 0 } ^ { t } \int _ { 0 } ^ { t - s _ { 1 } } \cdots \int _ { 0 } ^ { t - \sum _ { i = 1 } ^ { k - 1 } s _ { i } } \prod _ { j = 1 } ^ { k } e ^ { - j s _ { j } } d s _ { k } \cdots d s _ { 1 }$. Consider $f_k'(t)$:
$$
f_k'(t) = k! \left.\int _ { 0 } ^ { t - s _ { 1 } } \cdots \int _ { 0 } ^ { t - \sum _ { i = 1 } ^ { k - 1 } s _ { i } } \prod _ { j = 1 } ^ { k } e ^ { - j s _ { j } } d s _ { k } \cdots d s _ { 2 }\right|_{s_1=t} + \\
k! \int _ { 0 } ^ { t } \left.\int _ { 0 } ^ { t - s _ { 1 } -s_2} \cdots \int _ { 0 } ^ { t - \sum _ { i = 1 } ^ { k - 1 } s _ { i } } \prod _ { j = 1 } ^ { k } e ^ { - j s _ { j } } d s _ { k } \cdots d s _ { 3 }\right|_{s_2=t-s_1}ds_1
+ \ldots+\\
k! \left. \int_0^t\int _ { 0 } ^ { t - s _ { 1 } } \cdots \int _ { 0 } ^ { t - \sum _ { i = 1 } ^ { k - 2 } s _ { i } } \prod _ { j = 1 } ^ { k } e ^ { - j s _ { j } }\right|_{s_k=t-s_1\ldots-s_{k-1}}
d s _ { k-1 } \cdots d s _ { 1 }
$$
All terms here except for the last one are zero since in one of the integrals both limits are $0$. The term inside the last integral is:
$$
\left.\prod _ { j = 1 } ^ { k } e ^ { - j s _ { j } }\right|_{s_k=t-s_1...-s_{k-1}} =
e^{-k(t-s_1\ldots-s_{k-1})}\prod _ { j = 1 } ^ { k-1 } e ^ { - j s _ { j } }
$$
The coefficient before the product:
$$
e^{-k(t-s_1\ldots-s_{k-1})}=1-k\int_0^{t-s_1\ldots-s_{k-1}}e^{-ks_k}ds_k.
$$
Collecting everything together, we get:
$$
f_k'(t)=kf_{k-1}(t) - kf_k(t).
$$
Now we can use induction. We directly check that $f_1(t)=1-e^{-t}$. Then we assume that $f_{k-1}(t)=(1-e^{-t})^{k-1}$. The relation above then becomes an initial value problem:
$$
f_k'(t) = k(1-e^{-t})^{k-1} - kf_k(t)
$$
with initial condition $f_k(0) = 0$. We show by direct calculation that the function $y(t)=(1-e^{-t})^k$ satisfies the differential equation and the initial condition. Finally, the Picard–Lindelöf theorem states that this initial value problem has a unique solution, which we have found.
Edit A simpler approach.
One can notice that the integration is done over the simplex $\{\,s_j\ge 0,\ \sum_j s_j\le t\,\}$, which is symmetric under any permutation of the variables. So it's possible to change the order of integration:
$$
\int_0^t\int_0^{t-s_1}\dots\int_0^{t-s_1\ldots-s_{k-1}} F ds_k\ldots ds_1 = \\
\int_0^t\int_0^{t-s_k}\int_0^{t-s_k-s_1}\dots\int_0^{t-s_k-s_1\ldots-s_{k-1}} F ds_{k-1}\ldots ds_1ds_k.
$$
Thus, we can simply write:
$$
f_k(t) = k\int_0^t e^{-ks_k}f_{k-1}(t-s_k) ds_k = k\int_0^t (1-e^{-t+s_k})^{k-1}e^{-ks_k}ds_k = \\
k \int_0^t (e^{-s_k}-e^{-t})^{k-1} e^{-s_k}ds_k =
-k\int_1^{e^{-t}} (u-e^{-t})^{k-1}du = -\left.(u-e^{-t})^k\right|_{1}^{e^{-t}}=(1-e^{-t})^k
$$ |
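The closed form is easy to test numerically via the simpler recursion just derived (a sketch using adaptive quadrature):
```python
import math
from scipy.integrate import quad

def f(k, t):
    if k == 0:
        return 1.0    # empty product: f_0 = 1
    # f_k(t) = k * int_0^t e^{-k s} f_{k-1}(t - s) ds
    val, _ = quad(lambda s: math.exp(-k * s) * f(k - 1, t - s), 0, t)
    return k * val

t = 1.5
for k in range(1, 5):
    print(k, f(k, t), (1 - math.exp(-t)) ** k)   # the two columns agree
```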
Problem with counting conditional expectation with PDF | It's useful to note that the integral over some set equals the integral over the entire space of the function multiplied by the indicator of this set. So $\int_{\{X < Y\}} X dP = \int X \cdot \mathbb{I}_{X < Y} dP$. And $X \cdot \mathbb{I}_{X < Y}$ is a function of $X$ and $Y$, so we can find its expectation by integrating its product with the joint density over $\mathbb{R}^2$:
$\int X \cdot \mathbb{I}_{X < Y} dP = \int_\mathbb{R_+^2}\ dx \ dy\ e^{-x -y} x \cdot \mathbb{I}_{x < y} = \int_0^\infty\ dy \int_0^y\ dx\ x\, e^{-x - y}$
Number of Trees with n Nodes | This is not a solution, or even a useful hint, but perhaps these comments will be useful to someone.
Let $t(n,h)$ be the number of binary trees of height $h$ having $n$ nodes; if I understand correctly, you’re to find some sort of usable expression for $t(n,h)$. That appears to me to be a very hard problem.
A few results are easy: $t(h+1,h)=2^h$, $t(n,h)\ne 0$ iff $h<n<2^{h+1}$, $t(2^{h+1}-1,h)=1$, and of course $\sum_h t(n,h)=C_n$, the $n$-th Catalan number. Summing in the other direction, $\sum_n t(n,h)$ is the $h$-th term of OEIS A001699, for which the OEIS entry mentions no closed form. Here’s a table of $t(n,h)$ for $0\le n\le 8$ and $0\le h\le 7$:
$$\begin{array}{l|cccccccc|c}
n\backslash h&0&1&2&3&4&5&6&7&\text{Total}\\ \hline
0&1&&&&&&&&1\\
1&1&&&&&&&&1\\
2&0&2&&&&&&&2\\
3&0&1&4&&&&&&5\\
4&0&0&6&8&&&&&14\\
5&0&0&6&20&16&&&&42\\
6&0&0&4&40&56&32&&&132\\
7&0&0&1&68&152&144&64&&429\\
8&0&0&0&94&376&480&352&128&1430
\end{array}$$
An analysis like the one that leads to the Catalan recurrence for binary trees on $n$ nodes yields a very messy recurrence for $t(n,h)$:
$$t(n+1,h+1)=2\sum_{k=h+1}^nt(k,h)\sum_{i=0}^{h-1}t(n-k,i)+\sum_{k=h+1}^{n-h-1}t(k,h)t(n-k,h)\;.\tag{1}$$
Without the factor of $2$, the double summation counts the number of ways of building a binary tree on $n+1$ vertices whose left subtree has height $h$ and whose right subtree has height less than $h$; doubling this adds the trees whose right subtrees have height $h$ and whose left subtrees have height less than $h$. The last term on the right counts the binary trees on $n+1$ vertices whose left and right subtrees both have height $h$.
$(1)$ does reduce to something reasonable in special cases. For example, it’s easily verified that
$$t(h+2,h)=2\Big(t(h,h-1)+t(h+1,h-1)\Big)$$
for $h\ge 2$.
This is messy enough, however, that I can’t help wondering whether this is the right problem. By the way, it doesn’t appear to help to replace of height h with of height at most h: that just replaces the entries in the table above with cumulative row sums, which don’t appear to be any nicer.
Added: $(1)$ requires that $t(0,0)=1$ and that the empty tree have height $0$. I’ve added a row to the table to reflect this.
Gerry Myerson has found OEIS A073429 and OEIS A073345, which are versions of the table above. Unfortunately, it appears that not much more is known. |
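For completeness, the table is straightforward to regenerate from the cumulative counts $T(n,h)$ of binary trees with $n$ nodes and height at most $h$ (a sketch; here only the empty tree has "height $-1$", which sidesteps the convention issue from the note above):
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n, h):
    """Binary trees with n nodes and height <= h (a single node has height 0)."""
    if n == 0:
        return 1
    if h < 0:
        return 0
    return sum(T(k, h - 1) * T(n - 1 - k, h - 1) for k in range(n))

def t(n, h):
    return T(n, h) - T(n, h - 1)   # exactly height h

for n in range(1, 9):
    print(n, [t(n, h) for h in range(8)])   # reproduces the rows above
```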
Integral involving logarithm and cosine | We have
$$I=\dfrac12\int_0^{2\pi} -\log(a^2+b^2-2ab\cos{t})dt=\int_0^{\pi} -\log(a^2+b^2-2ab\cos{t})dt$$
We then have
$$I=-2\pi\log(b) -\int_0^{\pi} \log((a/b)^2+1-2(a/b)\cos{t})dt$$
Let us call $a/b$ as $a$ from now on. Hence, we want to evaluate the integral
$$I(a) = \displaystyle \int_0^{\pi} \ln \left(1-2a \cos(x) + a^2\right) dx$$ Some preliminary results on $I(a)$. Note that we have $$I(a) = \underbrace{\displaystyle \int_0^{\pi} \ln \left(1+2a \cos(x) + a^2\right) dx}_{\spadesuit} = \overbrace{\dfrac12 \displaystyle \int_0^{2\pi} \ln \left(1-2a \cos(x) + a^2\right) dx}^{\clubsuit}$$ $(\spadesuit)$ can be seen by replacing $x \mapsto \pi-x$ and $(\clubsuit)$ can be obtained by splitting the integral from $0$ to $\pi$ and $\pi$ to $2 \pi$ and replacing $x$ by $\pi+x$ in the second integral.
Now let us move on to our computation of $I(a)$.
\begin{align}
I(a^2) & = \int_0^{\pi} \ln \left(1-2a^2 \cos(x) + a^4\right) dx = \dfrac12 \int_0^{2\pi} \ln \left(1-2a^2 \cos(x) + a^4\right) dx\\
& = \dfrac12 \int_0^{2\pi} \ln \left((1+a^2)^2-2a^2(1+ \cos(x))\right) dx = \dfrac12 \int_0^{2\pi} \ln \left((1+a^2)^2-4a^2 \cos^2(x/2)\right) dx\\
& = \dfrac12 \int_0^{2\pi} \ln \left(1+a^2-2a \cos(x/2)\right) dx + \dfrac12 \int_0^{2\pi} \ln \left(1+a^2+2a \cos(x/2)\right) dx
\end{align}
Now replace $x/2=t$ in both integrals above to get
\begin{align}
I(a^2) & = \int_0^{\pi} \ln \left(1+a^2-2a \cos(t)\right) dt + \int_0^{\pi} \ln \left(1+a^2+2a \cos(t)\right) dt = 2I(a)
\end{align}
Now for $a \in [0,1)$, this gives us that $I(a) = 0$. This is because we have $I(0) = 0$ and $$I(a) = \dfrac{I(a^{2^n})}{2^n}$$ Now let $n \to \infty$ and use continuity to conclude that $I(a) = 0$ for $a \in [0,1)$. Now let's get back to our original problem. Consider $a>1$. We have
\begin{align*}
I(1/a) & = \int_0^{\pi} \ln \left(1-\dfrac2{a} \cos(x) + \dfrac1{a^2}\right)dx\\
& = \int_0^{\pi} \ln(1-2a \cos(x) + a^2) dx - 2\int_0^{\pi} \ln(a)dx\\
& = I(a) - 2 \pi \ln(a)\\
& = 0 & \text{(Since $1/a < 1$, we have $I(1/a) = 0$)}
\end{align*}
Hence, we get that
$$I(a) = \begin{cases} 2 \pi \ln(a) & a \geq 1 \\ 0 & a \in [0,1] \end{cases}$$
Undoing the substitution $a\mapsto a/b$ from the start: the original integral equals $-2\pi\log(b)-I(a/b)$, which is $-2\pi\log(b)$ when $a\le b$ and $-2\pi\log(a)$ when $a\ge b$; in short, $-2\pi\log\max(a,b)$.
Adapted from here
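A numeric check of the piecewise formula (sketch):
```python
import math
from scipy.integrate import quad

def I(a):
    val, _ = quad(lambda x: math.log(1 - 2 * a * math.cos(x) + a * a), 0, math.pi)
    return val

for a in (0.3, 0.9, 1.5, 4.0):
    print(a, I(a), 0.0 if a <= 1 else 2 * math.pi * math.log(a))
```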
Assuming that $|z|<1$, calculate: $\sum_{n=2}^{\infty}(n^2-3n+2){z^{n-1}}$ | \begin{split}
\sum_{n=2}^{\infty}(n^2-3n+2)z^{n-1}
&= \sum_{n=2}^{\infty}(n-1)(n-2)z^{n-1} \\
&= z^2 \sum_{n=2}^{\infty}(n-1)(n-2)z^{n-3} \\
&= z^2 \frac{d^2}{dz^2} \sum_{n=1}^{\infty}z^{n-1}\\
&= z^2 \frac{d^2}{dz^2} \sum_{n=0}^{\infty}z^{n}\\
&= z^2 \frac{d^2}{dz^2} \frac{1}{1-z}\\
&= z^2 \frac{2}{(1-z)^3}
\end{split}
This proof holds for all complex $z$ with $|z|<1$, which means that, in particular, it also holds for all real $z$ with $0<z<1$. In this case, from the task setting itself, the answer must be positive, since only positive terms are added. So if the result is claimed to be $ \frac{-2 z^2}{(1-z)^3}$, this is false.
Show that $e^{\alpha t} \ast (\frac{t^k}{k!}e^{\alpha t}) = \frac{t^{k+1}}{(k + 1)!}e^{\alpha t}$ for the convolution | By definition $f(t)=e^{\alpha t}$ and $g(t)=\frac{t^k}{k!}e^{\alpha t}$. Plugging these function into the integral,
$$\int\limits_0^tf(t - s)g(s)ds=\int\limits_0^t e^{\alpha(t-s)}\cdot\frac{s^k}{k!}e^{\alpha s}ds.$$
We may pull out the constants (in terms of $s$) to get to$$\frac{e^{\alpha t}}
{k!}\int\limits_0^t s^k\underbrace{e^{-\alpha s}\cdot e^{\alpha s}}_{=1} ds=\frac{e^{\alpha t}}
{k!}\int\limits_0^t s^k ds=\frac{e^{\alpha t}}{k!}\left[\frac{t^{k+1}}{k+1}-\frac{0^{k+1}}{k+1}\right]=\frac{t^{k+1}}{(k + 1)!}e^{\alpha t}.$$
In the last step we used the defining property $k!\cdot(k+1)=(k+1)!$ of the factorial. |
Fundamental group of a space without a point | If $X$ is a closed, connected, $n$-dimensional manifold, then the point $\{pt\}\subset X$ will admit a geodesically convex open neighborhood $N$. Hence, $N-\{pt\}\simeq S^{n-1}$ and we can apply Van-Kampen's theorem to the cover $\{N,X-\{pt\}\}$. We get
$$\pi_1(X)\cong \pi_1(X-\{pt\})\ast_{\pi_1(S^{n-1})}\pi_1(N)\;.$$
In particular, if the manifold has dimension $\geq 3$ then it will always be true that $\pi_1(X-\{pt\})\cong \pi_1(X)$, since $N$ is contractible and $\pi_1(S^{n-1})\cong 1$. If $n=2$ and the map
$$i_*:\mathbb{Z}\cong\pi_1(S^1)\to \pi_1(X-\{pt\})\;,$$
induced by the inclusion sends the generator to the trivial element, then you also have an isomorphism. In particular, an orientable surface of genus $g$ will also have the property you want. |
Chebyshev's theorem, is my answer correct to this question? | LGTM.
Chebyshev says that $P(|X-5|/6\ge A)\le 1/A^2$, so $A^2=4/3$ and $c=6A$. |
Question about classifying critical points when finding extrema | If $f_{xx} = 0$, then $f_{xx} f_{yy} - f_{xy}^2 = - f_{xy}^2$. This is never positive: it equals $0$ if $f_{xy} = 0$ and is negative otherwise.
Prove that if $f:(a, b) \to \mathbb{R}$ is defined on an open interval then $f$ is continuous iff $ \lim_{x\to c} f(x) =f(c)$ | Let $x_{n}:=2-\frac{1}{n}$. Then, for each $n$, $x_{n}< 2$, so
$$
f(x_{n})=18-\left(2-\frac{1}{n}\right)^{2}=18-4+\frac{4}{n}-\frac{1}{n^{2}}\to 14\qquad\text{as }n\to\infty.
$$
But, $f(2)=15$. |
How to solve $14t - 4.9t^2 = -4.9(t-2)^2$? | By expanding one gets
$$
-4.9(T-2)^2=-4.9\times T^2+4.9\times(4T)-4.9\times4=-4.9\: T^2+19.6\:T-19.6
$$ then the equation
$$
14T-4.9\: T^2=-4.9(T-2)^2
$$ reads
$$
14\:T-19.6\:T=-19.6
$$ that is (simplifying and dividing by $-7$)
$$
0.8\:T=2.8
$$ as announced. |
Prime factorization of factorials | $$n! = \prod_p p^{\sum_{k \ge 1} \lfloor n/p^k\rfloor}$$
You are given a prime decomposition $A =\prod_p p^{a_p}$ and you want to know whether there exists $r$ such that $r! = \prod_p p^{a_p}$. Of course you can compute $A$ and use dichotomy (binary search) for finding $r$.
Otherwise, note that if $n$ is even then it is fully determined by $a_2 = \sum_{k \ge 1} \lfloor n/2^k\rfloor$.
So you can make use only of $a_2$ for finding (if it exists, using dichotomy) the even $n$ such that $a_2 = \sum_{k \ge 1} \lfloor n/2^k\rfloor$.
Then look at the other exponents to decide whether the correct result is $r = n$ or $r=n+1$, or whether $A$ is not a factorial at all.
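A sketch of this procedure (function names are mine; it leans on sympy for the factorization and the prime list):
```python
from sympy import factorint, primerange

def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def matches(n, f):
    """Is the factorization dict f exactly that of n! ?"""
    if any(p > n for p in f):
        return False
    return all(legendre(n, p) == f.get(p, 0) for p in primerange(2, n + 1))

def factorial_inverse(A):
    """Return r with r! = A, or None; the bisection uses a_2 only."""
    f = factorint(A)
    a2 = f.get(2, 0)
    lo, hi = 0, a2 + 1            # legendre(2m, 2) >= m, so m <= a2
    while lo < hi:                # bisection: legendre(2m, 2) increases in m
        m = (lo + hi) // 2
        if legendre(2 * m, 2) < a2:
            lo = m + 1
        else:
            hi = m
    n = 2 * lo
    if legendre(n, 2) != a2:
        return None               # no even n matches a_2: not a factorial
    if matches(n, f):
        return n
    if matches(n + 1, f):
        return n + 1
    return None

print(factorial_inverse(3628800))    # 10! -> 10
print(factorial_inverse(39916800))   # 11! -> 11
print(factorial_inverse(123456))     # not a factorial -> None
```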
"indecomposability" in theory of groups and topological spaces | There is a paper of A.L.S. Corner, A note on rank and direct decompositions of torsion-free Abelian groups, Proc. Cambridge Philos. Soc. 57 (1961) 230–233. (1961) where he shows the following. (I quote from the review on mathscinet.)
“Let $N$ and $k$ be natural numbers with $N>k$, and let $N=r_1+\cdots+r_k$ be any representation of the number $N$ as the sum of $k$ natural numbers. Then there exist an abelian group $G$ without torsion and subgroups $A_1,A_2,\ldots,A_k$ of $G$ such that
(a) the rank of subgroup $A_i$ is equal to $r_i$,
(b) $G=\sum_{i=1}^k A_i$ and
(c) the subgroups $A_i$ are indecomposable ($i=1,2,\ldots,k$).”
I haven't read this paper so I don't know whether the notation $\sum$ in this review means ‘direct sum’. If it does — and the Wikipedia article implies that it does — then this gives a positive answer to (1). |
Erwin Kreyszig's Introductory Functional Analysis With Applications, Problem 8, Section 2.7 | The range of this operator is a subspace of $c_{0}$, the space of sequences tending to zero. It can be characterized as
$$
\{\{a_{i}\}\in l^{\infty} : \exists\, C>0 \text{ such that } |i\,a_{i}|\le C,\ \forall i\}.
$$
The inverse map
$$
\{a_{i}\}\rightarrow \{ia_{i}\}
$$
is not bounded under the $l^{\infty}$-norm: applied to the unit sequences $e_{n}$ (which lie in the range), it returns $n\,e_{n}$, so the norm ratio is $n$, which is unbounded.
Does Lipschitz continuity of a convex imply boundedness of the domain of its Fenchel conjugate | Yes, the domain of its Fenchel conjugate is bounded. In particular
\begin{equation}
\text{dom} \, g^* \subseteq B(0,L_{g}),
\end{equation}
where $B(0,L_{g})$ denotes the ball with radius $L_{g}$ around the origin.
First note that for every Lipschitz continuous function
\begin{equation}
\lVert g(x) \rVert \le \lVert g(x) - g(0) \rVert + \lVert g(0) \rVert \le C + L_g \lVert x \rVert, \quad \forall x \in \mathcal{H}
\end{equation}
for some constant $C > 0$.
Thus, for any $y \in \mathcal{H}$ with $\lVert y \rVert > L_g$ and any $\lambda \in \mathbb{R}_+$,
\begin{equation}
\begin{aligned}
g^*(y) = \sup_x \{ \langle y, x \rangle - g(x) \} \ge& \langle y, \lambda y \rangle - g(\lambda y) \\
\ge& \lambda \lVert y \rVert^2 - \lambda L_g \lVert y \rVert - C \\
=& \lambda \lVert y \rVert \left( \lVert y \rVert - L_g \right) - C,
\end{aligned}
\end{equation}
proving that $g^*(y) = +\infty$ by letting $\lambda$ go to $+\infty$.
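A one-dimensional numeric sketch with $g(x)=|x|$, so $L_g=1$: the supremum defining $g^*$ stays at $0$ for $|y|\le 1$ and grows without bound (here, with the grid) as soon as $|y|$ exceeds $1$.
```python
import numpy as np

x = np.linspace(-1e4, 1e4, 200_001)      # a finite stand-in for the sup over R
for y in (0.5, 0.99, 1.01, 1.5):
    print(y, np.max(y * x - np.abs(x)))  # 0, 0, then values growing with the grid
```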
faithful flatness for the group scheme | Assume that $T$ is irreducible. Let $R$ be the coordinate ring of the affine variety $T$ and let $A$ be the coordinate ring of $S$. Then $f$ gives a map $\phi:R\rightarrow A$. Since $f$ is dominant and $R$ is reduced, it follows that $\phi$ is injective. Hence we can assume $R\subseteq A$. Since $T$ is irreducible, $R$ is a domain. Then the theorem you mentioned states that there exists $r\in R$ and $a\in A$ such that $R_r\rightarrow A_a$ is faithfully flat.
But $R_r$ is the coordinate ring of the open subvariety $D(r)\subseteq T$ and $A_a$ is the coordinate ring of $D(a)\subseteq S$. So we have shown that we get a faithfully flat morphism $D(a)\rightarrow D(r)$ by restricting the domain and codomain of $f$. But faithfully flat morphisms of varieties are surjective, so $D(r)\subseteq T$ is the image of $D(a)$ under $f$, and hence is contained in $f(S)$. Since $T$ is irreducible, the open set $D(r)$ is dense, so it is the desired dense open set.
For the case where $T$ may not be irreducible, let $T'$ be an irreducible component. The map $f:f^{-1}(T')\rightarrow T'$ has dense image, so we may apply the case considered above. We have a dense open set $U\subseteq T'$ which is contained in $f(f^{-1}(T'))\subseteq f(S)$.
Do this for each component $T_1,\ldots T_n$ to get dense open sets $U_i\subseteq T_i$ with $U_i\subseteq f(S)$. Then $\bigcup U_i$ is dense in $T=\bigcup T_i$, and can be seen to be the desired dense open subset. |
get a 'win' in even experiment | Let $p_e$ be the sought probability. To get $x$ in an even toss, we must either get $yx$ in the first two rounds (which happens with probability $(1-p)p$, of course) or we must start with $yy$ and then get $x$ in an even toss of the subsequent rounds, which happens with probability $(1-p)^2\cdot p_e$. All in all we find
$$p_e = (1-p)p+(1-p)^2p_e.$$
Hence
$$ p_e = \frac{(1-p)p}{1-(1-p)^2}=\frac{1-p}{2-p}.$$ |
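A short simulation sketch (with an arbitrary $p$):
```python
import random

p, trials, even_wins = 0.3, 200_000, 0
for _ in range(trials):
    toss = 1
    while random.random() >= p:   # keep tossing until the first success
        toss += 1
    even_wins += toss % 2 == 0
print(even_wins / trials, (1 - p) / (2 - p))   # both about 0.4118
```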
$\operatorname{lcm}(\operatorname{gcd}(a,b),c)=\operatorname{gcd}(\operatorname{lcm}(a,c),\operatorname{lcm}(b,c))$ for any $a,b,c \in \mathbb{Z}$ | Using prime factor decompositions and remembering $\mathrm{lcm}(p^a,p^b)=p^{\max(a,b)}$, $\mathrm{gcd}(p^a,p^b)=p^{\min(a,b)}$ for primes $p$, one reduces the claim to the equation
$\max(\min(a,b),c) = \min(\max(a,c),\max(b,c))$
for $a,b,c \in \mathbb{N}$. We have $a \leq b$ or $b \leq a$. By symmetry, we may assume $a \leq b$. Then the LHS is $\max(a,c)$, and the RHS also equals $\max(a,c)$ since $\max(a,c) \leq \max(b,c)$.
So actually the equation above holds in every linear order. The crux is that although $(\mathbb{N} \setminus \{0\},|)$ is not a linear order, it embeds into a product of linear orders, using prime factor decompositions. More generally we see that the lattice of ideals of a PID is distributive (which fails for other rings). |
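A brute-force sketch over a small box (math.lcm needs Python 3.9+):
```python
from math import gcd, lcm

assert all(lcm(gcd(a, b), c) == gcd(lcm(a, c), lcm(b, c))
           for a in range(1, 31) for b in range(1, 31) for c in range(1, 31))
print("identity holds for all 1 <= a, b, c <= 30")
```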
"Double" Density of Bochner Space | The simple $V_h$-valued functions are dense in the simple $V$-valued functions under the $L^2((0,T),V)$ metric. To see this, if $f = \sum_{k=1}^K v_k \chi_{A_k}$ is a simple $V$-valued function, let $V_h \ni v_{k,n} \to v_k$ as $n \to \infty$, and let $f_n = \sum_{k=1}^K v_{k,n} \chi_{A_k}$. Then $f_n \to f$ in $L^2((0,T),V)$.
Also, if $A$ is dense in $B$, and $B$ is dense in $C$, then $A$ is dense in $C$.
Hence the simple $V_h$-valued functions are dense in $L^2((0,T),V)$.
solving linear ODEs for mixing problems with function notation | The technique of separation of variables is really the chain rule in disguise. You have
$$\frac{dS}{dt} = 10-\frac{2S}{100}=\frac{1000-2S}{100}.$$
So
$$\frac{\frac{dS}{dt}}{1000-2S} = \frac{1}{100}.$$
Now (introducing a dummy variable $u$) you integrate both sides from $u=0$ to $u=t$:
$$\int_0^t \frac{\frac{dS}{du}}{1000-2S} du = \int_0^t \frac{1}{100} du.$$
Now the left side you can integrate using the change of variables formula, by taking $v=1000-2S$, so that $dv=-2\,dS$. So you get
$$-\frac{1}{2}\int_{1000-2S(0)}^{1000-2S(t)} \frac{1}{v} dv = \frac{t}{100}.$$
Now you solve this for $S(t)$. This change of variables in integration is justified by the chain rule for derivatives and the fundamental theorem of calculus. |
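If you want to double-check the endpoint of this computation, sympy reproduces it (a sketch; the initial amount $S(0)=0$ is my assumption, adjust to your problem's data):
```python
import sympy as sp

t = sp.symbols('t')
S = sp.Function('S')
ode = sp.Eq(S(t).diff(t), 10 - 2 * S(t) / 100)
sol = sp.dsolve(ode, S(t), ics={S(0): 0})
print(sol)   # Eq(S(t), 500 - 500*exp(-t/50))
```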
Is it always true that $\|\vec{x} + \vec{y}\| \geq \|\vec{x}\| - \|\vec{y}\|$ over $\mathbb{R}^n$? | Hint: Apply the triangle inequality to $x=(x+y)+(-y)$.