Evaluate the limit by using epsilon-M approach. | In the case $x=y$ there is nothing to prove, so suppose WLOG that $x>y$. Then the mean value theorem shows
$$
\frac{e^x-e^y}{x-y}=e^\xi\leq e
$$
for some $\xi\in(y,x)\subseteq(0,1)$. This proves the claim.
Your task is now to check whether the mean value theorem was applicable at all.
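A quick numeric sanity check (my own sketch, assuming $x,y\in[0,1]$ as the condition $\xi\in(0,1)$ suggests):

```python
# Spot-check that (e^x - e^y)/(x - y) <= e for x, y in [0, 1].
import math, random

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(0, 1), random.uniform(0, 1)
    if a != b:
        q = (math.exp(a) - math.exp(b)) / (a - b)
        assert q <= math.e + 1e-9  # tiny tolerance for floating-point error
print("bound holds on all samples")
```
|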
Proof by Contradiction: $100$ Balls & $9$ Boxes | The statement of the problem contains a very big hint: prove it by contradiction. That means assuming that the result is false and trying to derive a contradiction from that assumption. What would make the result false? There would have to be some way to distribute $100$ balls amongst nine boxes so that no box contained $12$ or more balls, i.e., so that every box contained at most $11$ balls. What can you then conclude about the total number of balls in the nine boxes?
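Spelling out the arithmetic the hint points to: if every box held at most $11$ balls, the total would be at most
$$9\cdot 11=99<100,$$
contradicting the fact that all $100$ balls were distributed.
|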
Proving a topological space is separable | Looking at covers by open balls is a good idea, but not in quite the manner that you seem to have in mind. For $n\in\Bbb Z^+$ let
$$\mathscr{U}_n=\left\{B\left(x,\frac1n\right):x\in X\right\}\;;$$
this is an open cover of $X$, so by compactness it has a finite subcover $\mathscr{V}_n$. This means that there is a finite $F_n\subseteq X$ such that
$$\mathscr{V}_n=\left\{B\left(x,\frac1n\right):x\in F_n\right\}\;.$$
Now see what you can do with the set $D=\bigcup_{n\in\Bbb Z^+}F_n$. |
Proving that $x$ is an integer, if the differences between any two of $x^{1919}$, $x^{1960}$, and $x^{2100}$ are integers | First, prove that $x$ is rational.
Say $x^{2100} - x^{1919} = a > 0$ and $x^{1960} - x^{1919} = b > 0$.
Obviously, $x$ is algebraic, of degree at most $1960$. We need to show that the $2099$ other complex solutions to the first equation and the $1959$ other solutions to the second are distinct.
Here, it helps to plot the complex numbers $z$ such that $z^{2100} - z^{1919}$ is a positive real, and similarly with $z^{1960} - z^{1919}$. That graph should look like a bunch of concentric "lines".
We have an asymptotic expansion for the roots:
The roots of $x^{2100} - x^{1919} = a$ are $\alpha + \frac 1 {2100}\alpha^{-180} + \frac {1739}{8820000}\alpha^{-361} + \ldots$ where $\alpha^{2100} = a$.
Similarly, the roots of $x^{1960} - x^{1919} = b$ are $\beta + \frac 1 {1960}\beta^{-40} + \frac{1879}{7683200}\beta^{-81} + \ldots$ where $\beta^{1960} = b$
Looking at the first term removes every candidate root direction except $10$ of them: since $\gcd(2100,1960) = 10$, the only possible common arguments for $\alpha$ and $\beta$ are the $10$ multiples of $\pi/5$: $\alpha^{10}$ and $\beta^{10}$ have to be positive reals if you want the two roots to have a remote chance at being equal.
We are left with the real axis and $8$ pairs of "lines" that look very close together.
Looking at the second term of the expansion, we see that it is real, and very quickly $\beta^{-40}/1960$ becomes much larger than $\alpha^{-180}/2100$, so unless we pick real roots, those lines stay disjoint from each other.
Hence any simultaneous solution of the two equations has to be one of the two real solutions. And again, the negative solutions are $-x + \frac 2 {2100}x^{-180} + \ldots$ and $-x + \frac 2 {1960} x^{-40} + \ldots$, and they are different for $x$ large enough.
Once you know that $x$ has only one conjugate, hence is rational, write $x = c/d$ with coprime $c$ and $d$.
Since $c^{2100} = c^{1919}d^{181} + ad^{2100}$, $d^{181}$ divides $c^{2100}$, hence if $p$ is a prime factor of $d$, $p$ also divides $c$, which is impossible. Hence $d=1$ and $x$ is an integer. |
Common quadratic Lyapunov function with all convex combinations Hurwitz | See
R. Shorten and K. Narendra, “Necessary and sufficient conditions for
the existence of a common quadratic Lyapunov function for two stable
second order linear time-invariant systems,” in Proc. Amer. Control
Conf. , 1999, pp. 1410–1414.
and
R. Shorten, K. Narendra, and O. Mason, “A result on common quadratic
Lyapunov functions,”IEEE Trans. Automat. Control , vol. 48, no. 1, pp.
110–113, Jan. 2003
The results are summarized in Theorem 1 of
Lin, H. and Antsaklis, P.J. (2009). Stability and stabiliz-ability of
switched linear systems: A survey of recent re-sults.IEEE Transactions
on Automatic Control , 54(2), 308–322. doi:10.1109/TAC.2008.2012009. |
Closed range operator | The range of $V$ is a Banach space, since it is a closed subspace of $H$. So the question reduces to: when is the range of a bounded operator between two Banach spaces closed?
That was already answered here: When is the image of a linear operator closed? |
How to convert a diophantine equation into parametric form? | $\frac 97=1+\frac27=1+\frac1{\frac72}=1+\frac1{3+\frac12}$
So, the last but one convergent is $1+\frac13=\frac43$
Using the convergent property of continued fractions, $7\cdot4-9\cdot3=1$
$7x+9y=5(7\cdot4-9\cdot3)\implies 7(x-20)=-9(y+15)\implies x-20=\frac{-9(y+15)}{7}$ which is an integer.
So, $7\mid(y+15)$ as $(7,9)=1$ $\implies \frac{x-20}{-9}=\frac{y+15}7=z$ for some integer $z$
So, $y=7z-15=7(z-3)+6=7w+6$ where $w=z-3$ is any integer.
So, $x=-9z+20=-9(w+3)+20=-(9w+7)$
Alternatively, by observation $7x+9y=5=14-9$
or $7(x-2)=-9(y+1)$
or, $\frac{x-2}{-9}=\frac{y+1}7$
$\frac{-9(y+1)}7=x-2$ which is an integer, so is $\frac{y+1}7$ as $(7,9)=1$
So, $\frac{x-2}{-9}=\frac{y+1}7=u$ where $u$ is any integer.
So, $x=-9u+2,y=7u-1$
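A quick check (my own sketch) that both parametrizations really solve $7x+9y=5$:

```python
# Verify both parametrizations of 7x + 9y = 5 over a range of parameters.
for u in range(-5, 6):
    x, y = -9 * u + 2, 7 * u - 1
    assert 7 * x + 9 * y == 5
for w in range(-5, 6):
    x, y = -(9 * w + 7), 7 * w + 6
    assert 7 * x + 9 * y == 5
print("both parametrizations verified")
```
|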
Pizza topping combination problem | You are correct in how many ways there are to make $1$ pizza. There are multiple ways to solve, but $120$ is the correct answer (your calculation is correct).
However, your mistake is when you go to two pizzas. If there are $120$ ways to order a pizza, then there are
$$
\binom{120}{2}+120=\binom{121}{2}=7260
$$
ways to order two pizzas.
Consider ordering one pizza with chicken ($1$ topping) and one pizza with pepperoni ($1$ topping). This is no different than ordering one pizza with pepperoni ($1$ topping) and one pizza with chicken ($1$ topping). My counting counts this combination once, but yours counts them separately.
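A brute-force confirmation (a sketch; the $120$ single-pizza count is taken as given):

```python
# Count unordered pairs of pizzas three ways; all give 7260.
from math import comb
from itertools import combinations_with_replacement

n_single = 120                                  # ways to make one pizza (given)
print(comb(n_single, 2) + n_single)             # two different pizzas + two equal
print(comb(n_single + 1, 2))                    # the same count, C(121, 2)
print(sum(1 for _ in combinations_with_replacement(range(n_single), 2)))
```
|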
Spectrum of an unbounded operator | Yes, it is enough. Since $A_0$ is positive, symmetric and surjective, it is densely defined, injective and self-adjoint. Therefore, $A_0^{-1}$ is compact and self-adjoint with trivial null space. So $A_{0}^{-1}$ has an orthonormal basis of eigenfunctions $\{e_n \}$ with corresponding eigenvalue sequence of positive numbers
$$
\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \cdots \ge \lambda_n \ge \cdots \rightarrow 0
$$
Thus $A_0e_n = \lambda_n^{-1}e_n$ and $\lambda_n^{-1}\rightarrow\infty$. |
Complex function mapping | Picard's Theorem says that the range of any non-constant entire function is all of $\mathbb C$ or $\mathbb C \setminus \{c\}$ for some complex number $c$. Since your arc has more than one point, the result follows.
Ref.: https://en.wikipedia.org/wiki/Picard_theorem |
Are all limits solvable without L'Hôpital Rule or Series Expansion | $$L_1=\lim_{x\to0}\frac{\tan x-x}{x^3}\quad L_2=\lim_{x\to0}\frac{\sin x-x}{x^3}\quad L_3=\lim_{x\to0}\frac{\ln(1+x)-x}{x^2}\\L_4=\lim_{x\to0}\frac{e^x-x-1}{x^2}\quad L_5=\lim_{x\to0}\frac{\sin^{-1}x-x}{x^3}\quad L_6=\lim_{x\to0}\frac{\tan^{-1}x-x}{x^3}$$
Yes, provided we know beforehand that each limit exists.
For $L_1$:
$$L_1=\lim_{x\to0}\frac{\tan x-x}{x^3}\\
L_1=\lim_{x\to0}\frac{\tan 2x-2x}{8x^3}\\
4L_1=\lim_{x\to0}\frac{\frac12\tan2x-x}{x^3}\\
3L_1=\lim_{x\to0}\frac{\frac12\tan{2x}-\tan x}{x^3}\\
=\lim_{x\to0}\frac{\tan x}x\frac{\frac1{1-\tan^2x}-1}{x^2}\\
=\lim_{x\to0}\frac{(\tan x)^3}{x^3}=1\\
\large L_1=\frac13$$
For $L_2$:
$$L_2=\lim_{x\to0}\frac{\sin x-x}{x^3}\\
L_2=\lim_{x\to0}\frac{\sin 2x-2x}{8x^3}\\
4L_2=\lim_{x\to0}\frac{\frac12\sin 2x-x}{x^3}\\
3L_2=\lim_{x\to0}\frac{\frac12\sin 2x-\sin x}{x^3}
=\lim_{x\to0}\frac{\cos x-1}{x^2}\frac{\sin x}x\\
L_2=\frac13\lim_{x\to0}\frac{\cos x-1}{x^2}\\
L_2=\frac13\lim_{x\to0}\frac{\cos 2x-1}{4x^2}\\
4L_2=\frac13\lim_{x\to0}\frac{\cos 2x-1}{x^2}\\
3L_2=\frac13\lim_{x\to0}\frac{\cos 2x-\cos x}{x^2}\\
3L_2=\frac13\lim_{x\to0}\frac{-2\sin^2\left(\frac x2\right)(2\cos x+1)}{x^2}\\
\large L_2=-\frac16$$
For $L_3$:
$$L_3=\lim_{x\to0}\frac{\ln(1+x)-x}{x^2}\\
L_3=\lim_{x\to0}\frac{\ln(1+2x)-2x}{4x^2}\\
2L_3=\lim_{x\to0}\frac{\frac12\ln(1+2x)-x}{x^2}\\
L_3=\lim_{x\to0}\frac{\frac12\ln(1+2x)-\ln(1+x)}{x^2}\\
2L_3=\lim_{x\to0}\frac{\ln(1+2x)-2\ln(1+x)}{x^2}\\
2L_3=\lim_{x\to0}\frac{\ln\left(1-\frac{x^2}{(1+x)^2}\right)}{x^2}\\
\large L_3=-\frac12
$$
For $L_4$:
$$L_4=\lim_{x\to0}\frac{e^x-x-1}{x^2}\\
4L_4=\lim_{x\to0}\frac{e^{2x}-2x-1}{x^2}\\
3L_4=\lim_{x\to0}\frac{e^{2x}-e^x-x}{x^2}\\
12L_4=\lim_{x\to0}\frac{e^{4x}-e^{2x}-2x}{x^2}\\
6L_4=\lim_{x\to0}\frac{\frac12e^{4x}-\frac12e^{2x}-x}{x^2}\\
3L_4=\lim_{x\to0}\frac{\frac12e^{4x}-\frac32e^{2x}+e^x}{x^2}\\
3L_4=\frac12\lim_{x\to0}\frac{e^x(e^x-1)^2(e^x+2)}{x^2}\\
\large L_4=\frac12$$
For $L_5$:
$$L_5=\lim_{x\to0}\frac{\sin^{-1}x-x}{x^3}\\
8L_5=\lim_{x\to0}\frac{\sin^{-1}2x-2x}{x^3}\\
4L_5=\lim_{x\to0}\frac{\frac12\sin^{-1}2x-x}{x^3}\\
3L_5=\lim_{x\to0}\frac{\frac12\sin^{-1}2x-\sin^{-1}x}{x^3}\\
6L_5=\lim_{x\to0}\frac{\sin^{-1}2x-2\sin^{-1}x}{x^3}\\
6L_5=\lim_{x\to0}\frac{\sin^{-1}\left(-4 x^3-2 \sqrt{1-4 x^2} \sqrt{1-x^2} x+2 x\right)}{x^3}\\
6L_5=\lim_{x\to0}\frac{-4 x^3+2x(1- \sqrt{1-4 x^2} \sqrt{1-x^2})}{x^3}\\
6L_5=\lim_{x\to0}-4+2\frac{(1- \sqrt{1-5 x^2+4x^4})}{x^2}$$
You might object that the binomial theorem already counts as a series expansion; if not, well and good, and if so, here is a way around it:
Now let $\sqrt{1-5 x^2+4x^4}=\sum a_kx^k$; squaring both sides, $$1-5x^2+4x^4=a_0^2+2a_0a_1x+(2a_0a_2+a_1^2)x^2+(2a_0a_3+2a_1a_2)x^3+(2a_0a_4+2a_1a_3+a_2^2)x^4+\dots$$
Now taking positive branch:
$$a_0=1,a_1=0,a_2=-5/2,a_3=0,a_4=-9/8,...$$
So:
$$6L_5=\lim_{x\to0}-4+2\frac{(1- (1-5x^2/2-9x^4/8...))}{x^2}\\\large L_5=\frac16$$
For $L_6$:
$$L_6=\lim_{x\to0}\frac{\tan^{-1}x-x}{x^3}\\
4L_6=\lim_{x\to0}\frac{\frac12\tan^{-1}2x-x}{x^3}\\
3L_6=\lim_{x\to0}\frac{\tan^{-1}2x-2\tan^{-1}x}{2x^3}\\
6L_6=\lim_{x\to0}\frac{\tan^{-1}\left(-\frac{2 x^3}{3 x^2+1}\right)}{x^3}\\
L_6=-\frac13$$
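A numeric sanity check of all six values (a quick sketch; $x$ is chosen small but not so small that floating-point cancellation dominates):

```python
# The six difference quotients at x = 1e-4; expected: 1/3, -1/6, -1/2, 1/2, 1/6, -1/3.
import math

x = 1e-4
print((math.tan(x) - x) / x**3)        # ~  0.3333
print((math.sin(x) - x) / x**3)        # ~ -0.1667
print((math.log(1 + x) - x) / x**2)    # ~ -0.5000
print((math.exp(x) - x - 1) / x**2)    # ~  0.5000
print((math.asin(x) - x) / x**3)       # ~  0.1667
print((math.atan(x) - x) / x**3)       # ~ -0.3333
```
|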
What is the degree of the fourier expansion | The degree is always equal to $n$. More precisely, the coefficient of $x_1\dots x_n$ in the Fourier-Walsh expansion is equal to $$\frac{(-1)^{k-1}\binom{2k-1}{k}k}{4^{k-1}(2k-1)}$$ where $n=2k-1$.
Proof. Let $L_n$ be the (unique) polynomial of degree at most $n$ such that $L_n(2j-n)=\operatorname{sgn}(2j-n)$ for $j=0,\dots,n$. I don't know if these polynomials have a name: the interpolation problem looks fairly natural. If you plot $L_9$ together with the signum function, they agree exactly at the odd integers.
Clearly, $f(x)=L_n(x_1+\dots+x_n)$ for all $x_1,\dots,x_n\in \{-1,1\}$. From this representation you get the Fourier expansion simply by reducing all exponents mod $2$. For example, $L_3(x)=\frac{-1}{12}x^3+\frac{13}{12}x$, and after expanding and cleaning up $L_3(x_1+x_2+x_3)$ we arrive at
$$\begin{align}
&\frac{-1}{12}\Big(x_1^3+x_2^3+x_3^3 +3(x_1x_2^2+x_1x_3^2+x_2x_1^2+x_2x_3^2+x_3x_1^2+x_3x_2^2)+6x_1x_2x_3\Big)+\frac{13}{12}(x_1+x_2+x_3) \\ &= \frac{-1}{12}\Big(x_1+x_2+x_3 +3(2x_1+2x_2+2x_3)+6x_1x_2x_3\Big)+\frac{13}{12}(x_1+x_2+x_3) \\
&= -\frac{1}{2}x_1x_2x_3 +\frac12 (x_1+x_2+x_3)
\end{align}$$
The only way to get $x_1 \dots x_n$ via this process is from the expansion of $(x_1+\dots+x_n)^n$, which yields $n!\,x_1\dots x_n$. Thus, the coefficient of $x_1\dots x_n$ in the Fourier expansion is $n!\,c_n$ where $c_n$ is the leading coefficient of $L_n$.
Presumably, there is a quick way to see that $c_n\ne 0$, i.e., the degree of $L_n$ is indeed $n$. But just for fun, I'm going to compute $c_n$ precisely. By the Lagrange interpolation formula,
$$\begin{align}
L_n(x) &= \sum_{j=0}^n \operatorname{sgn}(2j-n)\prod_{0\le i\le n, i\ne j}\frac{x-(2i-n)}{(2j-n)-(2i-n)} \\
&= 2^{-n}\sum_{j=0}^n \operatorname{sgn}(2j-n)\prod_{0\le i\le n, i\ne j}\frac{x-(2i-n)}{j-i}
\end{align}\tag{1}$$
Therefore,
$$\begin{align}
c_n &= 2^{-n}\sum_{j=0}^n \operatorname{sgn}(2j-n)\prod_{0\le i\le n, i\ne j}\frac{1}{j-i} \\
&= -2^{1-n}\sum_{j=0}^{k-1} \prod_{0\le i\le n, i\ne j}\frac{1}{j-i} \\
\end{align}\tag{2}$$
where the last step is based on the fact that the reflection $j\mapsto n-j$ changes the sign of
$\prod_{0\le i\le n, i\ne j}(j-i)$. Since
$$\prod_{0\le i\le n, i\ne j}(j-i) = (-1)^{n-j} j! (n-j)!\tag{3}$$
it follows that
$$\begin{align}
n!\, c_n &= -2^{1-n}\sum_{j=0}^{k-1} (-1)^{n-j} \binom{n}{j} \\
&= -2^{2-2k}\sum_{j=0}^{k-1} (-1)^{2k-1-j} \binom{2k-1}{j} \\
&= 4^{1-k}\sum_{j=0}^{k-1} (-1)^{j} \binom{n}{j} \\
\end{align}\tag{4}$$
Since Maple 16 is a better combinatorialist than me, I trust it when it says
$$ \sum_{j=0}^{k-1} (-1)^{j} \binom{n}{j} = (-1)^{k-1} \frac{k}{2k-1} \binom{2k-1}{k}\tag{5}$$
Hence
$$n!\,c_n=\frac{(-1)^{k-1}\binom{2k-1}{k}k}{4^{k-1}(2k-1)}\tag{6}$$ as claimed. $\Box$
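A brute-force cross-check of the claimed coefficient (my own sketch, reading $f$ as the majority function $\operatorname{sgn}(x_1+\dots+x_n)$ with $n$ odd, as the proof indicates):

```python
# Compare E[f(x) x_1...x_n] over {-1,1}^n with the closed formula.
from itertools import product
from math import comb, prod

def top_coefficient(n):
    total = 0
    for x in product([-1, 1], repeat=n):
        sign = 1 if sum(x) > 0 else -1           # majority; n odd, so no ties
        total += sign * prod(x)                  # f(x) * x_1 ... x_n
    return total / 2**n

for n in (3, 5, 7):
    k = (n + 1) // 2
    formula = (-1)**(k - 1) * comb(2*k - 1, k) * k / (4**(k - 1) * (2*k - 1))
    print(n, top_coefficient(n), formula)        # the two columns agree
```
|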
How does one deal with a homomorphism on a coproduct of groups? | Since $G$ contains subgroups isomorphic to $G_{i}$ for each $i\in I$, you have embeddings $\iota_j\colon G_j\to G$ given by the subgroup inclusions. By the universal property of the coproduct, there is a unique homomorphism $\Phi$ from the coproduct to $G$ induced by the subgroup embeddings $\iota_j$. This homomorphism $\Phi$ is such that the natural inclusion $G_k\hookrightarrow C$ into the coproduct followed by $\Phi$ gives $\iota_k$.
This is just the "universal property of the coproduct". We have an explicit description of the coproduct, given by the free product; this is what is used in the second sentence to study the kernel of this $\Phi$.
Added.
Okay, let's start from scratch, because I don't think Lang is very clear on this part.
Though I'm stating this definition for groups, the fact that they are groups is really immaterial; the definition only uses "objects" and "morphisms", so it is a "categorical" definition. In fact, it's the definition of "coproduct object" in the category of groups.
Definition. Let $\{G_i\}_{i\in I}$ be a family of groups. A coproduct of the family is an ordered pair, $(C,\{\iota_j\}_{j\in I})$, where $C$ is a group, and for each $j\in I$, $\iota_j\colon G_j\to C$ is a group homomorphism, such that $(C,\{\iota_j\}_{j\in I})$ satisfies the following "universal property":
Given any group $K$, and any family $\{f_j\}_{j\in I}$ of morphisms, $f_j\colon G_j\to K$, there exists a unique morphism $\mathcal{F}\colon C\to K$ such that $f_j = \mathcal{F}\circ \iota_j$ for all $j\in I$.
The morphisms $\iota_j\colon G_j\to C$ are called the "coproduct inclusions" (we don't know ahead of time whether they are actually inclusions, but in any category with a zero object, such as the category of groups, they will be inclusions as I show below), and we often refer to $C$ as "the coproduct" (even though, technically, the coproduct is both the group and the inclusions).
As is usually the case, the definition guarantees that if there is a coproduct, then it is unique-up-to-unique-isomorphism (where the isomorphism must also "respect" the inclusions):
Theorem. Let $\{G_i\}$ be a family of groups. If $(C,\{\iota_j\}_{j\in I})$ and $(D,\{\kappa_j\}_{j\in I})$ are both coproducts of the family, then there exists a unique isomorphism $\Phi\colon C\to D$ such that $\kappa_j = \Phi\circ \iota_j$ for all $j\in I$.
Proof. Since $\{\kappa_j\}_{j\in I}$ is a family of maps from the $G_j$ to $D$, the universal property for $C$ guarantees the existence of a unique $\Phi\colon C\to D$ such that $\kappa_j=\Phi\circ\iota_j$ for all $j$. It only remains to show that $\Phi$ is an isomorphism.
Now we use the fact that $D$ is also a coproduct; since $\{\iota_j\}$ is a family of maps from the $G_j$ to $C$, the universal property of $D$ guarantees the existence of a unique $\Psi\colon D\to C$ such that $\iota_j=\Psi\circ\kappa_j$ for all $j$.
Also, if we look at the family $\{\iota_j\}$, the universal property of $C$ guarantees the existence of a unique homomorphism $F\colon C\to C$ such that $\iota_j=F\circ\iota_j$. Since putting $F=\mathrm{id}_C$ works, it is the unique map with this property. Consider now $\Psi\circ\Phi$: we have
$$(\Psi\circ\Phi)\circ\iota_j = \Psi\circ(\Phi\circ\iota_j) = \Psi\circ\kappa_j = \iota_j$$
for all $j\in I$. So $\Psi\circ\Phi=\mathrm{id}_C$. Symmetrically, $\Phi\circ\Psi=\mathrm{id}_D$, so $\Phi$ is an isomorphism, as claimed. $\Box$
Since I mentioned it above, let me prove that the inclusions are actually inclusions.
Theorem. Let $\{G_i\}_{i\in I}$ be a family of groups. If $(C,\{\iota_j\}_{j\in I})$ is a coproduct of the family, then $\iota_j$ is one-to-one for every $j\in I$.
Proof. Fix $j\in I$. Consider the family of maps $f_k\colon G_k\to G_j$, where $f_j=\mathrm{id}_{G_j}$, and $f_k$ is the zero map if $k\neq j$. By the universal property of $C$, we have a unique homomorphism $\Phi\colon C\to G_j$ such that $f_k = \Phi\circ \iota_k$ for each $k$. In particular, $\mathrm{id}_{G_j} = \Phi\circ\iota_j$, so $\iota_j$ is one-to-one. $\Box$
So, if $C$ is a coproduct of $\{G_i\}$, then it contains subgroups isomorphic to each $G_i$. In fact, a bit more is true:
Theorem. Let $\{G_i\}_{i\in I}$ be a family of groups, and let $(C,\{\iota_j\}_{j\in I})$ be a coproduct for the family. Let $K=\bigl\langle \iota_j(G_j)\mid j\in I\bigr\rangle$. Then $C=K$; that is, the coproduct is generated by the images of the groups $G_j$ under the coproduct inclusions.
Proof. Consider the maps $\iota_j\colon G_j\to K$; by the universal property of $C$, there is a unique morphism $\Phi\colon C\to K$ such that for all $j$, $\iota_j = \Phi\circ\iota_j$. Now let $\Psi\colon K\to C$ be the subgroup inclusion. Then $\Psi\circ\Phi\colon C\to C$ satisfies $\iota_j = (\Psi\circ\Phi)\circ\iota_j = \mathrm{id}_C\circ\iota_j$. By the uniqueness clause of the universal property for $C$, it follows that $\Psi\circ\Phi=\mathrm{id}_C$, and in particular $\Phi$ is one-to-one and $\Psi$ is onto. But $\Psi$ is the subgroup inclusion $K\hookrightarrow C$; so being onto means that $K=C$, as claimed. $\Box$
Now, all of this is fine and well, but we haven't actually proven that there is such a thing as a coproduct of a family. There may not be! We can prove that if it exists it has all sorts of wonderful properties, but until we actually show that there is such an animal, all those theorems and properties may well be completely moot. So at some point, we usually need to abandon the "abstract nonsense" and get our hands dirty and actually come up with a group that is the coproduct.
In the case of families of groups, this object is the free product of the groups.
Definition. Let $\{G_i\}_{i\in I}$ be a family of groups. The free product of the family, $\mathop{*}\limits_{i\in I}G_i$, is the following group:
The underlying set of $\mathop{*}_{i\in I}G_i$ consists of all finite (possibly empty) sequences of elements of the form $x_{i_1}x_{i_2}\cdots x_{i_m}$, where $i_j\in I$, $x_{i_j}\in G_{i_j}$, no $x_{i_j}$ is the identity element of the corresponding $G_{i_j}$; and for $k=1,\ldots,m-1$, $i_k\neq i_{k+1}$.
The operation on $\mathop{*}_{i\in I}G_i$ is the following: given $x_{i_1}\cdots x_{i_m}$ and $y_{j_1}\cdots y_{j_n}$,
if $i_m\neq j_1$, then the product is just the concatenation of the two words, $x_{i_1}\cdots x_{i_m}y_{j_1}\cdots y_{j_n}$.
If $i_m=j_1$ and $x_{i_m}\neq (y_{j_1})^{-1}$, then the product is
$x_{i_1}\cdots x_{i_{m-1}}z_{i_m}y_{j_2}\cdots y_{j_n}$, where $z=x_{i_m}y_{j_1}$.
If $i_m=j_1$ and $x_{i_m}=(y_{j_1})^{-1}$, then the product is the same as the product of $x_{i_1}\cdots x_{i_{m-1}}$ and $y_{j_2}\cdots y_{j_n}$.
(That is: concatenate, and simplify if possible).
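Here is a minimal executable sketch of this multiplication (my own illustration; the two factors are taken to be the cyclic groups $\mathbb{Z}/2$ and $\mathbb{Z}/3$, written additively):

```python
# Reduced words in a free product: lists of (group_index, element) with no
# identity letters and no two adjacent letters from the same group.
groups = {0: 2, 1: 3}  # group_index -> n, the cyclic group Z/n under addition

def multiply(word1, word2):
    """Concatenate two reduced words and simplify at the junction."""
    w1, w2 = list(word1), list(word2)
    while w1 and w2 and w1[-1][0] == w2[0][0]:
        i = w1[-1][0]
        z = (w1[-1][1] + w2[0][1]) % groups[i]   # multiply the two letters in G_i
        w1.pop(); w2.pop(0)
        if z != 0:                               # non-identity: keep merged letter
            w1.append((i, z))
            break                                # the word is now reduced
        # identity: the letters cancel; keep reducing at the new junction
    return w1 + w2

# (1,1)(0,1) * (0,1)(1,2): the 0-letters merge to the identity and cancel,
# then 1 + 2 = 0 in Z/3 cancels as well, leaving the empty word.
print(multiply([(1, 1), (0, 1)], [(0, 1), (1, 2)]))  # -> []
```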
Proving this is a group is straightforward except for proving associativity of the operation; for that, there is a clever idea due to van der Waerden that simplifies things enormously. The idea is explained in George Bergman's Universal Algebra notes; in the case of free groups, it is in page 32, starting where it says "But there is an elegant trick..."; the case of the free product of two groups, with the same idea, is given in Proposition 3.6.5, page 49, of the same notes.
Now, if $\mathop{*}\limits_{i\in I} G_i$ is the free product of the family, then for each $j$ we have a homomorphism $\iota_j\colon G_j\to \mathop{*}\limits_{i\in I}G_i$, given by mapping $x_j$ to the word of length one "$x_j$", if $x_j\neq 1$, and to the empty word if $x_j=1$. We now have:
Theorem. Let $\{G_i\}_{i\in I}$ be a family of groups. Then $\Bigl( \mathop{*}\limits_{i\in I}G_i, \{\iota_j\}_{j\in I}\Bigr)$ is a coproduct for the family.
Proof. Let $K$ be any group, and let $f_j\colon G_j\to K$ be homomorphisms from each member of the family to $K$. Define $\mathcal{F}\colon \mathop{*}\limits_{i\in I}G_i\to K$ by
$$\mathcal{F}(x_{i_1}\cdots x_{i_m}) = f_{i_1}(x_{i_1})f_{i_2}(x_{i_2})\cdots f_{i_m}(x_{i_m}).$$
It is not hard to verify that this is a homomorphism, and has the required universal property. Moreover, the values of $\mathcal{F}$ are forced by the condition $f_j = \mathcal{F}\circ\iota_j$, so $\mathcal{F}$ is unique, as required by the definition of coproduct. $\Box$
So there is a coproduct for any family, given by the free product.
A few observations about the free product $\mathop{*}\limits_{i\in I}G_i$:
(a) It is generated by a family of subgroups isomorphic to the $G_i$, namely the images $\iota_i(G_i)$.
(b) These subgroups are "independent", in the sense that a product of the form
$$x_{i_1}\cdots x_{i_m}$$
in which $x_{i_j}$ is in the subgroup corresponding to $G_{i_j}$, $x_{i_j}\neq 1$ for all $j$, and for each $k=1,\ldots,m-1$, $i_k\neq i_{k+1}$, equals the identity if and only if $m=0$.
What Lang is showing is that these two conditions also characterize the coproduct. Suppose that $G$ is a group as described in the theorem. For each $j$, we have an embedding $f_j\colon G_j\to G$ (the subgroup inclusion). By the universal property of the coproduct, there must be a (unique) homomorphism $\Phi\colon\mathop{*}\limits_{i\in I}G_i \to G$ such that for each $j\in I$, $f_j=\Phi\circ\iota_j$. The map $\Phi$ is onto because $G$ is generated by the $G_j$. And in fact, $\Phi$ is one-to-one: suppose there is an element of the free product that lies in the kernel of $\Phi$. Then this element is a finite sequence of elements with the properties that describe the elements of the free product, i.e., it is something of the form $x_{i_1}\cdots x_{i_m}$, where $i_k\neq i_{k+1}$, $x_{i_j}\in G_{i_j}$, $x_{i_j}\neq 1$. Then, since it is in the kernel of $\Phi$, we have
$$\begin{align*}
1 &= \Phi(x_{i_1}\cdots x_{i_m})\\
&= \Phi(x_{i_1})\cdots \Phi(x_{i_m})\\
&= f_{i_1}(x_{i_1}) \cdots f_{i_m}(x_{i_m})\\
&= x_{i_1}\cdots x_{i_m}.
\end{align*}$$
But condition (b) in the theorem says that the only way this can happen is if $m=0$, so the only element in the kernel of $\Phi$ is the identity of the free product. Thus, $\Phi$ is an isomorphism, so these two properties (properties (a) and (b)) actually also characterize the coproduct of the family of groups.
This proposition allows you to go freely from the universal property of the coproduct, to the precise construction of the free product, to just knowing you have a group generated by "independent" copies of the $G_j$. It lays the groundwork that allows you to use whichever one may be more convenient at any given time.
I hope that helps. |
Span and Smallest Submodule Proof | Let $J$ be any submodule containing $X$. Then show that $J$ contains $\operatorname{span}(X)$.
This is not very difficult: use the definition of $\operatorname{span}(X)$ to show that since $J$ is a module containing $X$, it must contain $\operatorname{span}(X)$. |
How to price a supershare option; expected value of a payoff function? | The Black-Scholes model under the real-world measure is described by
\begin{align}dS_t &= \mu S_t dt + \sigma S_t dW_t \\
dB_t &= rB_t dt\end{align}
The fundamental theorem of asset pricing says the market above is free of arbitrage if and only if for every choice of numeraire ("currency"), $N_t$, there is a probability measure equivalent to the real-world measure such that under this measure the relative price of any asset, $\frac{Y_t}{N_t}$, is a martingale. It is this principle (combined with completeness of course) that makes pricing of contingent claims possible.
The expression you have for $S_t$ is valid under the risk-neutral measure, i.e. when you take the bond as the numeraire ($N_t = B_t$). In that case, the asset prices follow the process
\begin{align}dS_t &= \color{red}r S_t dt + \sigma S_t dW^{\color{red}M}_t \\
dB_t &= rB_t dt\end{align}
You can check for yourself that $\frac{S_t}{B_t}$ is now a martingale.
But $B_t$ is not the only possible choice for a numeraire. You can also choose the stock price, $S_t$ as the numeraire. In that case you would require $\frac{S_t}{S_t}$ and $\frac{B_t}{S_t}$ to be martingales. The former is always a martingale regardless of the measure. For the latter to be a martingale it must hold that
\begin{align}dS_t &= \color{red}{(r+\sigma^2)} S_t dt + \sigma S_t dW^{\color{red}S}_t \\
dB_t &= rB_t dt\end{align}
You can then say
$$S_T = S_0 e^{\left(r \color{red}+ \frac{1}{2}\sigma^2\right)T + \sigma W_T^S} $$
So now for the value of this contract you can write (due to the martingale property)
$$\frac{C_0}{S_0} = E\left[\frac{C_T}{S_T}\right]$$
We have
$$\frac{C_T}{S_T} = \frac{1}{S_T}\frac{S_T}{K_1}\mathbb{1}_{K_1 \leq S_T \leq K_2} = \frac{1}{K_1}\mathbb{1}_{K_1 \leq S_T \leq K_2}$$
Hence
$$E\left[\frac{C_T}{S_T}\right] = \frac{1}{K_1}P\{K_1 \leq S_T \leq K_2\}$$
Then
$$C_0 = \frac{S_0}{K_1}P\{K_1 \leq S_T \leq K_2\}$$
$$C_0 = \frac{S_0}{K_1}P\{\log{K_1} \leq \log{S_T} \leq \log{K_2}\}$$
$$C_0 = \frac{S_0}{K_1}P\{\log{K_1} \leq \log{S_0}+ \left(r + \frac{1}{2}\sigma^2\right)T + \sigma W_T^S \leq \log{K_2}\}$$
$$C_0 = \frac{S_0}{K_1}P\{\log{\frac{K_1}{S_0}}-\left(r + \frac{1}{2}\sigma^2\right)T \leq \sigma W_T^S \leq \log{\frac{K_2}{S_0}}-\left(r + \frac{1}{2}\sigma^2\right)T \}$$
$$C_0 = \frac{S_0}{K_1}P\{\frac{\log{\frac{K_1}{S_0}}-\left(r + \frac{1}{2}\sigma^2\right)T}{\sigma \sqrt{T}} \leq Z \leq \frac{\log{\frac{K_2}{S_0}}-\left(r + \frac{1}{2}\sigma^2\right)T}{\sigma \sqrt{T}} \}$$
In the last equality $Z$ denotes a standard normal random variable.
If you then define
$$d_i = \frac{\log{\frac{K_i}{S_0}}-\left(r + \frac{1}{2}\sigma^2\right)T}{\sigma \sqrt{T}}$$
for $i = 1,2$, you get the formula
$$C_0 = \frac{S_0}{K_1} \left(\Phi(d_2) - \Phi(d_1) \right)$$
where $\Phi(x)$ is the cumulative distribution function of the standard normal random variable.
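For a quick numeric confirmation (my own sketch; the parameter values are made up), the closed form can be compared against a risk-neutral Monte Carlo estimate of $e^{-rT}E^Q\left[\frac{S_T}{K_1}\mathbb{1}_{K_1 \leq S_T \leq K_2}\right]$:

```python
# Closed-form supershare price vs. a risk-neutral Monte Carlo estimate.
import math, random
from statistics import NormalDist

def supershare_price(S0, K1, K2, r, sigma, T):
    Phi = NormalDist().cdf
    def d(K):
        return (math.log(K / S0) - (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S0 / K1 * (Phi(d(K2)) - Phi(d(K1)))

def supershare_mc(S0, K1, K2, r, sigma, T, n=200_000, seed=0):
    rng, total = random.Random(seed), 0.0
    for _ in range(n):
        Z = rng.gauss(0.0, 1.0)
        # risk-neutral dynamics: S_T = S0 exp((r - sigma^2/2) T + sigma sqrt(T) Z)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)
        if K1 <= ST <= K2:
            total += ST / K1
    return math.exp(-r * T) * total / n

print(supershare_price(100, 90, 110, 0.05, 0.2, 1.0))  # closed form
print(supershare_mc(100, 90, 110, 0.05, 0.2, 1.0))     # ~ the same value
```
|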
Path properties of Brownian Motion: relation between its maximum and hitting time | The two sets $\{T_a<t\}$ and $\{M(t)>a\}$ are equal up to a null set, because the event $M(t)=a$ has probability zero. Indeed, both $T_a$ and $M(t)$ are continuously distributed random variables: Distribution of hitting time of line by Brownian motion. |
Proving Isomorphism Linear SubSpaces | How about defining a linear map $L: X\to \frac{X+Y}{Y}$ via $L(x)=x+Y$?
Is $L$ surjective? What is the kernel of $ L$? |
When are the sections of the structure sheaf just morphisms to affine space? | The scheme $\mathrm{Spec}(k[X])=\mathbf{A}_k^1$ is the universal locally ringed space with a morphism to $\mathrm{Spec}(k)$ and a global section (namely $X$). What I mean by this is that for any locally ringed space $X$ with a morphism to $\mathrm{Spec}(k)$ (equivalently, $\mathscr{O}_X(X)$ is a $k$-algebra) and any global section $s\in\mathscr{O}_X(X)$, there is a unique morphism of locally ringed $k$-spaces $f\colon X\rightarrow\mathbf{A}_k^1$ such that $X\mapsto s$ under the pullback map $f^*:k[X]=\mathscr{O}_{\mathbf{A}_k^1}(\mathbf{A}_k^1)\rightarrow\mathscr{O}_X(X)$. This is a special case of my answer here: on the adjointness of the global section functor and the Spec functor |
Question regarding positivity of integrals | If $f(t) \ge 0$ for all $t \in [a,b]$, then the integral $\int_a^x f(t)dt \ge 0$ for $x \in [a,b]$. Likewise $\int_x^b f(t)dt \ge 0$ for $x \in [a,b]$. This is clear if you think of integration as finding the signed area under a curve. The area can't be negative if the graph of $f(t)$ never goes below the $t$-axis.
This could also be proved more rigorously by showing that any Riemann sum $\sum_{i=1}^n f(t_i)\Delta t_i$ must be non-negative. |
Proving Girsanov Theorem | Recall that if $M,N$ are continuous local martingales then $[M,N]$ is the unique finite variation process such that $MN - [M,N]$ is a continuous local martingale and we have that
$$\sum_{\pi_n} (M_{t_i^n}- M_{t_{i-1}^n})(N_{t_i^n}- N_{t_{i-1}^n}) \to [M,N]_t$$
in probability as the mesh of the partition $\pi_n$ goes to $0$.
So it will be enough to show that if $V$ is a finite variation process then $[V,N] = 0$ for a continuous local martingale $N$. Write $\|V\|_{1,T} = \sup \sum |V_{t_i} - V_{t_{i-1}}|$ for the $1$-variation of $V$ where the $\sup$ is over all partitions of $[0,T]$. Then we have for a partition $\pi_n$ of $[0,t]$,
$$\bigg | \sum_{\pi_n} (V_{t_i^n} - V_{t_{i-1}^n})(N_{t_i^n}- N_{t_{i-1}^n}) \bigg | \leq \|V\|_{1,t} \max_{\pi_n} |N_{t_i^n}- N_{t_{i-1}^n}|$$
and the right hand side goes to $0$ as the mesh of $\pi_n$ goes to $0$ since $N$ is uniformly continuous on $[0,t]$. This implies that $[V,N]_t = 0$ as desired. |
Formality of a statement in a calculus exercise, proof check | There is a problem in the part you are concerned about, where you say $f(x) \geq 1+\epsilon$ for all $x \in I_{x_0}$. We cannot lower bound $f(x)$ by $1+\epsilon$ for an arbitrary $\epsilon$.
What we know is that for any $\epsilon>0$, there exists $\delta>0$ such that for all $x$ satisfying $|x-x_0|<\delta$, we have $|f(x)-f(x_0)|<\epsilon$. In particular, this means that $f(x_0)-\epsilon<f(x)<f(x_0)+\epsilon$ for all $x \in I_{x_0}:=(x_0-\delta,x_0+\delta)$. So we can conclude that $f(x)>f(x_0)-\epsilon \geq 1-\epsilon$. This inequality doesn't look good enough because we need $f(x)$ to be strictly greater than $1$. The trick is that we have to choose a particular $\epsilon$ that will give us a good enough inequality.
Recall that $f(x_0)$ is assumed to be strictly greater than $1$. In other words, $f(x_0)-1>0$. So choose $\epsilon:=\frac{f(x_0)-1}{2}>0$. Then for all $x \in I_{x_0}$, we have $f(x)>f(x_0)-\frac{f(x_0)-1}{2} = 1+\frac{f(x_0)-1}{2}>1$.
There is one other small issue, which is when you say $\int_0^1 f^n(x) \ dx \geq (1+\epsilon)^2 \cdot 2\delta$. This assumes that the interval $I_{x_0}$ is entirely contained in $[0,1]$. You can take care of this by just choosing a small enough $\delta$ so that $I_{x_0}\subseteq [0,1]$. |
Prove (without quoting any theorems) that polynomials on [0,1] are continous | Steps:
Prove that constant functions are continuous
Prove that the identity function $f(x)=x$ is continuous
Prove that if $f(x)$ and $g(x)$ are continuous, then so are $f+g$ and $f\cdot g$.
This suffices to prove that all polynomials are continuous. |
Proving infeasibility using Duality | Hint
The dual for this problem is $${\max g(\lambda_1,\lambda_2)\\\text{s. t.}\\\lambda_1,\lambda_2\succeq 0}$$where $$g(\lambda_1,\lambda_2){=\inf_{x}c^Tx+\lambda_1^TAx+\lambda_2^Tx\\=\inf_{x}(c+A^T\lambda_1+\lambda_2)^Tx}$$Now, when is the dual problem infeasible? How is it applied here? |
Show that $(\Bbb{R}, +)$ is isomorphic to $(\Bbb{R},\,\cdot\,)$ | As I am also studying some of these, I would like to share some tips that I found useful.
First of all, you could begin by writing down a homomorphism. In your example, it might help writing $\phi:(\mathbb{R,+})\rightarrow(\mathbb{R,\times})$ such that $\phi(x+y)=\phi(x)\phi(y)$ with $x,y\in(\mathbb{R},+)$. Now it's clearer that you may want to seek something with exponentials. Then prove that it's actually a bijective map, thus an isomorphism.
Also simplifying the problem might help. From your example, we found that there's an isomorphism between $(\mathbb{R,+})$ and $(\mathbb{R_{>0},\times})$. Now how could we extend it to negative numbers?
Or how could we prove that we can't do it (which is your case)?
Having a picture may also help if you have simpler sets. For example, showing that there's a homomorphism between $\mathbb{R}$ and $\mu(\mathbb{C}):=\{z\in\mathbb{C}:|z|=1\}$. A picture of $\mu(\mathbb{C})$ will show that you need to map all real numbers to the unit circle in the complex plane. Thus, you'll likely think about a trigonometric circle in the complex plane.
Also, using the isomorphism theorems will help. For example, from the above example, you'll find that a clever choice of $\theta$ in $x\mapsto e^{i\theta x}$ and a direct application of the First Isomorphism Theorem will yield that $\mathbb{R}/\mathbb{Z}$ is isomorphic to $\mu(\mathbb{C})$. |
Find a linear, homogeneous differential equation with constant coefficients, such that $y(x)=(x+\cos(x))\cos(x)$ is one of its solutions | You can write your solution as
$$y(x)=(x+\cos x)\cos x=x\cos x+\cos^2 x=x\cos x+\frac{1}{2}+\frac{1}{2}\cos 2x$$
Then an equation whose characteristic polynomial contains the factors $(x^2+1)^2$, $x$ and $x^2+4$ will be your answer. For example $p(x)=x(x^2+1)^2(x^2+4)$, and your ODE is
$$ \boxed{y^{(7)}(x)+6y^{(5)}(x)+9y^{'''}(x)+4y'(x)=0}$$ |
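A quick symbolic check (a sketch using sympy):

```python
# Verify that y(x) = (x + cos x) cos x satisfies y^(7) + 6 y^(5) + 9 y''' + 4 y' = 0.
import sympy as sp

x = sp.symbols('x')
y = (x + sp.cos(x)) * sp.cos(x)
lhs = sp.diff(y, x, 7) + 6 * sp.diff(y, x, 5) + 9 * sp.diff(y, x, 3) + 4 * sp.diff(y, x)
print(sp.simplify(lhs))  # 0
```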
Finding $x$, $y$ and $z$ | $y=0.23x$
$z=0.25x$
$x+0.23x-0.25x=300$
$0.98x=300$
$x=\dfrac{300}{0.98}\approx306.122$
Can you take it from here? |
Minimum value of a trigonometric function | Let $$f(\theta)=\frac{a}{\cos(\theta)}+\frac{b}{\sin(\theta)}.$$ Then the first derivative is given by
$$f'(\theta)=\frac{a\sin(\theta)}{\cos^2(\theta)}-\frac{b\cos(\theta)}{\sin^2(\theta)};$$
then you must solve the equation $$f'(\theta)=0$$ for $\theta$. |
Let $f:[-1,1]\to\Bbb R$ be a continuous and differentiable function. Prove that there exists a point $c\in(-1,1)$ such that $f'(c)=0$ | This is where the proof (and not just the statement) of Rolle's theorem comes into play. Since $f(0)<f(-1)$ and $f(0)<f(1)$, it follows that $f$ attains its minimum value at some interior point $c\in(-1,1)$. Since $f$ is differentiable at the interior minimum $c$, Fermat's theorem gives $f'(c) =0$. There is no need to invoke additional theorems like the IVT. |
Test for countability | Define $f: \mathbb{Z}_+^3\to \mathbb{N}$, $f(a,b,c)=2^a3^b5^c$. From the fundamental theorem of arithmetic it follows that $f$ is 1-1, since $2^a3^b5^c=2^x3^y5^z \Rightarrow (a,b,c)=(x,y,z)$. This should be enough.
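A finite sanity check (my own sketch; a small box of exponents just to illustrate injectivity):

```python
# Check that (a, b, c) -> 2^a 3^b 5^c has no collisions on a small range.
from itertools import product

seen = {}
for a, b, c in product(range(1, 8), repeat=3):
    n = 2**a * 3**b * 5**c
    assert n not in seen, ((a, b, c), seen[n])
    seen[n] = (a, b, c)
print(len(seen), "values, no collisions")  # 343
```
|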
How to identify that a polynomial has factor $(x+y)^2$ | Let $t:=x+y$.
Then we must have
$$P(x,t-x)=t^2Q(x,t-x).$$
E.g.
$$P(x,y)=x^3+x^2-xy^2+2xy-y^3+y^2+yx^2\to P(x,t-x)=2xt^2-t^3+t^2.$$
You can simply check that the polynomial has a double root at $t=0$,
$$\left.P(x,t-x)\right|_{t=0}=P(x,-x)=0,$$
$$\left.\frac{\partial P}{\partial y}(x,t-x)\right|_{t=0}=\frac{\partial P}{\partial y}(x,-x)=0.$$
Indeed
$$x^3+x^2-x(-x)^2+2x(-x)-(-x)^3+(-x)^2+(-x)x^2=0$$
and
$$x^2-2x(-x)+2x-3(-x)^2+2(-x)=0.$$
Another option is to divide by $(x+y)^2=x^2+2xy+y^2$:
$$x^3+x^2-xy^2+2xy-y^3+y^2+yx^2\\
=x(x^2+2xy+y^2)-x^2y-2xy^2-y^3+x^2+2xy+y^2\\
=(x-y)(x^2+2xy+y^2)+x^2+2xy+y^2\\
=(x-y+1)(x^2+2xy+y^2).$$
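A quick computer check (a sympy sketch):

```python
# Confirm that (x + y)^2 divides P and recover the cofactor.
import sympy as sp

x, y = sp.symbols('x y')
P = x**3 + x**2 - x*y**2 + 2*x*y - y**3 + y**2 + y*x**2
q, r = sp.div(P, (x + y)**2, x)   # polynomial division with respect to x
print(sp.expand(r))               # 0, so (x + y)^2 divides P
print(sp.factor(P))               # (x - y + 1)*(x + y)**2
```
|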
Questions and Solutions in Brownian Motion and Stochastic Calculus? | René L. Schilling/Lothar Partzsch: Brownian Motion - An Introduction to Stochastic Processes (Solutions) |
Matrix determinant operations. | You can add any row to any other (distinct) row without changing the determinant, but in the first step you replaced R3 by $-$R3 (plus R1; the adding part is fine). Negating a row negates the determinant. |
In $\sin(\sin(x))$ Why should I Calculate the $\sin$ of $(\sin(x))$ radians not the $\sin$ of $\sin(x)$ degrees? | $\sin(\sin x)$ can be evaluated with $x$ in radians or in degrees.
You will get different answers, so we need to clarify which unit is used.
For example, in radians $$\sin(\sin(90)) \approx 0.7795$$ but in degrees $$\sin(\sin(90))\approx0.017$$
The trig formulas for differentiation or integration are only valid in radians, but the trig identities are valid in any unit.
For instance $$ \sin^2 (x) + \cos^2(x) =1$$ is valid in any unit, as is $$\tan (x) = \frac {\sin (x)}{\cos(x)}$$
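The two readings can be checked numerically (a sketch; Python's math.sin works in radians):

```python
import math

# Both sines in radians: sin(90 rad) ~ 0.8940, then sin(0.8940) ~ 0.7795.
print(math.sin(math.sin(90)))
# Both sines in degrees: the inner sine of 90 degrees is 1 (a pure number),
# which the degree-mode outer sine then reads as 1 degree.
print(math.sin(math.radians(math.sin(math.radians(90)))))  # ~ 0.01745
```
|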
Existence and uniqueness of solutions to $Ax=b$ in terms of number of equations and number of variables | There is this general result:
The linear system $Ax=b$ has solutions if and only if the matrix $A$ and the augmented matrix $[A|b]$ have the same rank. If it is the case, this common rank is the codimension of the (affine) subspace of solutions. |
Showing that a quadratic cannot have 3 roots using Determinants | If the matrix equation
$$\begin{bmatrix}a^2&a&1\\b^2&b&1\\c^2&c&1\end{bmatrix}\begin{bmatrix}p\\q\\r\\\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$$
has a non-trivial solution $(p,q,r)\neq (0,0,0)$, then the matrix is singular by definition. But its determinant is a Vandermonde determinant, equal up to sign to $(a-b)(b-c)(c-a)$, which is nonzero when $a$, $b$, $c$ are distinct; so the only solution is the trivial one. |
Set of points equidistant from two points in hyperbolic space. | Hint 1: Find a circle inversion that takes $p$ to $q$. This is an isometry, so what can you conclude about its fixed-point set? (extra hint: consult Will Jagy's comment)
Hint 2: Write down a Möbius transformation ($\operatorname{SL}_2(\mathbb{R})$ for the upper-half plane or $\operatorname{SU}(1,1)$ for the Poincaré disk) that takes $p$ to $q$. Identify its fixed-point set. Argue as in 1.
For geometric constructions (i.e., playing with circles), I find the Poincaré disk much more intuitive. Your mileage may vary. |
Uniformly differentiable diffeomorphisms | Let $f \colon \mathbb{R}^n \to \mathbb{R}^m$ be a $C^1$ function (notice that $n \ne m$ is not excluded). Let $K \subset \mathbb{R}^n$. For any $\epsilon > 0$ there exists $\delta > 0$ such that if $x \in K$ and $\lVert h \rVert < \delta$ then $\lVert Df(x) - Df(x + h) \rVert < \epsilon$.
Take $\epsilon > 0$, $x \in K$ and $h$ with $\lVert h \rVert < \delta$. By the $C^1$ Mean Value Theorem (see, e.g., Thm. 12 on p. 278 of: C. C. Pugh, Real Mathematical Analysis), there holds
$$
f(x + h) - f(x) = \Bigl(\int\limits_{0}^{1} Df(x + t h) \, dt \Bigr) h.
$$
Therefore,
$$
f(x + h) - f(x) - Df(x) h = \Bigl(\int\limits_{0}^{1} (Df(x + t h) - Df(x)) \, dt \Bigr) h
$$
with
$$
\Bigl\lVert \int\limits_{0}^{1} (Df(x + t h) - Df(x)) \, dt \Bigr\rVert \le \int\limits_{0}^{1} \lVert Df(x + t h) - Df(x) \rVert \, dt < \epsilon.
$$ |
Definition of minus power in cyclic group | Let $G$ be a group and $g\in G$. For $r\in \Bbb N_0$ we define $g^r\in G$ recursively via $g^{k+1}:=g\cdot g^k$ with base case $g^0:=1\in G$. We extend this to $r\in \Bbb Z$ by taking inverses, i.e., $g^{-r}=(g^{-1})^r$.
In more fancy language: For every group $G$ and $g\in G$, there exists a unique homomorphism $\phi_g\colon \Bbb Z\to G$ such that $\phi_g(1)=g$. We write $g^r$ for $\phi_g(r)$. Note that $\phi_g(-1)$ is the inverse of $g$, so ambiguity with the notation $g^{-1}$ for the inverse of $g$ does not arise. The usual power laws $g^{r+s}=g^r\cdot g^s$ and $g^{rs}=(g^r)^s$ follow readily from the homomorphism property.
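A tiny executable rendering of the recursive definition (my own sketch; the group is passed in through its operation, inverse, and identity):

```python
# g^r for any group, following the recursion g^0 = 1, g^{k+1} = g * g^k,
# and g^{-r} = (g^{-1})^r for negative exponents.
def power(g, r, op, inv, identity):
    if r < 0:
        return power(inv(g), -r, op, inv, identity)
    if r == 0:
        return identity
    return op(g, power(g, r - 1, op, inv, identity))

# Example: Z/7 under addition, where "g^r" means r copies of g added together.
print(power(3, -2, lambda a, b: (a + b) % 7, lambda a: (-a) % 7, 0))  # 1
```
|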
How many logical connectives are possible involving n simple propositions:p1, p2, . . . , pn? | In general, there are $2^{2^n}$ such connectives possible.
This is because for each of the $2^n$ possible truth-combination of the $n$ propositions, the connective can give an output of either True or False.
So, for example, there are $2^{2^3}=2^8=256$ possible $3$-place ('ternary') connectives.
Also, notice that there are $2^{2^2}=2^4=16$ possible binary connectives, so your calculation was wrong. In fact, you didn't even get to $12$, because with $\land$, $\lor$, $\to$, and $\leftrightarrow$ (that's $4$) and their negations (that's $4$ more) you only get to $8$.
The connectives you missed: the connective that always gives True, the connective that returns the value of the left operand, the connective that returns the value of the right operand, the converse implication $\leftarrow$, and their $4$ negations.
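For completeness, a quick enumeration (a sketch):

```python
# List all 2^(2^2) = 16 binary connectives as truth tables.
from itertools import product

rows = list(product([False, True], repeat=2))    # the 2^2 input rows
tables = list(product([False, True], repeat=4))  # one output per input row
print(len(tables))                               # 16
for outputs in tables:
    print(dict(zip(rows, outputs)))
```
|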
Quick way to find all roots of a polynomial? | The other two roots are $\omega\sqrt[3]{2}$ and $\omega^2\sqrt[3]{2}$, where $\omega$ is a primitive cubic root of unity. It just so happens that the primitive cubic roots of unity are easy to express, because they are the "other two" roots of $x^3-1 = (x-1)(x^2+x+1)$, so they can be found using the quadratic formula.
In general, the $n$ complex roots of $x^n-a$, with $a\in\mathbb{R}$, are given by
$$\sqrt[n]{a},\quad \zeta_n\sqrt[n]{a},\quad \zeta_n^2\sqrt[n]{a},\quad\ldots,\quad \zeta_n^{n-1}\sqrt[n]{a},$$
where $\zeta_n$ is a primitive $n$th root of unity.
Also, if you happen to know one root of a cubic polynomial, then you can always divide and solve the resulting quadratic. Here, you have $x^3-2$, and you know that $x-\sqrt[3]{2}$ is a factor (because $\sqrt[3]{2}$ is a root). Factoring, you have
$$x^3 - 2 = (x-\sqrt[3]{2})(x^2 + \sqrt[3]{2}x + \sqrt[3]{4}).$$
So the other two roots are the roots of the quadratic, which can be found using the quadratic formula:
$$r_1 = \frac{-2^{1/3} + \sqrt{2^{2/3} - 4\cdot 2^{2/3}}}{2},\qquad r_2 = \frac{-2^{1/3} - \sqrt{2^{2/3} - 4\cdot 2^{2/3}}}{2}.$$
Simplify the square root, factor out $2^{1/3}$, and you get the expressions as well. E.g.,
$$\begin{align*}
r_1 &= \frac{-2^{1/3}+\sqrt{2^{2/3}-4\cdot 2^{2/3}}}{2}\\
&= \frac{-2^{1/3}+\sqrt{2^{2/3}(-3)}}{2}\\
&= \frac{-2^{1/3} + 2^{1/3}\sqrt{-3}}{2}\\
&= \left(\frac{-1+\sqrt{-3}}{2}\right)2^{1/3}\\
&= \left( - \frac{1}{2} + \frac{\sqrt{3}}{2}i\right)\sqrt[3]{2}.
\end{align*}$$
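A numeric confirmation (my own sketch, using numpy):

```python
# The three roots of x^3 - 2 are cbrt(2) times the cube roots of unity.
import numpy as np

roots = np.roots([1, 0, 0, -2])
cbrt2 = 2 ** (1 / 3)
omega = np.exp(2j * np.pi / 3)     # primitive cube root of unity
for e in (cbrt2, cbrt2 * omega, cbrt2 * omega**2):
    print(min(abs(roots - e)))     # each ~ 1e-15
```
|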
Show that spectral radius does not depend on equivalent norm using Gelfand's formula | $\newcommand{\nrm}[1]{\left\lVert{#1}\right\rVert}\newcommand{\norm}{\nrm{\bullet}}$The definition of being equivalent norms is $$\norm_1\sim\norm_2\iff \exists c,b>0,\ c\norm_1\le \norm_2\le b\norm_1$$
Now, use $h>0\implies\lim_{n\to\infty}\sqrt[n]{h}=1$ to evaluate $\limsup\limits_{n\to\infty}$ and $\liminf\limits_{n\to\infty}$ of $\sqrt[n]{\nrm{A^n}_2}$ |
Class group of a field | This isn't the method you have in mind, but there is a way to compute class groups all day every day.
A quadratic form $ax^2+bxy+cy^2$ (henceforth abbreviated as $(a,b,c)$) has discriminant $D=b^2-4ac$. The discriminant of $\mathbb{Q}(\sqrt{-14})$ is $-56$. Thus look at forms of discriminant $D=-56$.
Forms correspond to ideals: $(a,b,c)$ corresponds to the ideal
$$\left(a, \frac{-b+\sqrt{D}}{2}\right)$$
And equivalent forms (under linear transforms of $x$ and $y$) correspond to equivalent ideals. Now reduced forms for $D<0$ are those for which $$|b|\leq a\leq c$$ and these have the property that $$|b|\leq \sqrt{\frac{|D|}{3}}$$
We can now enumerate all the ideal classes. If $D=-56$ then $|b|\leq 4$ so
$$b=0, \pm 2$$
If $b=0$ we get
$$(1,0,14)$$
$$(2,0,7)$$ if $b=\pm 2$ we get
$$(3,2,5)$$
$$(3,-2,5)$$
So the class number is $4$ and the ideal classes are represented by
$$(1)$$
$$(2,\sqrt{-14})$$
$$(3,1-\sqrt{-14})$$
$$(3,1+\sqrt{-14})$$
One can indeed go further and show that the ideal class group is
$\mathbb{Z}_4$ generated by $(3,1+\sqrt{-14})$.
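The reduced-form enumeration is easy to automate (a sketch; it uses the reduction convention $b\ge0$ whenever $|b|=a$ or $a=c$):

```python
# Enumerate reduced binary quadratic forms of discriminant D = -56.
D = -56
forms = []
a_max = int((abs(D) / 3) ** 0.5)
for a in range(1, a_max + 1):
    for b in range(-a, a + 1):
        if (b * b - D) % (4 * a) == 0:
            c = (b * b - D) // (4 * a)
            if c >= a and not (b < 0 and (abs(b) == a or a == c)):
                forms.append((a, b, c))
print(forms)  # [(1, 0, 14), (2, 0, 7), (3, -2, 5), (3, 2, 5)]
```
|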
Local maximum implies that the second partial derivative is non positive | What is a maximum? It is a point $x_0\in \mathbf{R}^d$ so that for all $x$ "near" $x_0$, $f(x_0)\geqslant f(x)$. So if you take the directional derivative
$$
D_h(f):= \lim_{t\rightarrow 0}\frac{f(x_0+th)-f(x_0)}{t}
$$
for any unit vector $h$ (corresponding to any direction), $D_h(f)$ taken as a one-sided limit $t\to0^+$ should be non-positive (since $f(x_0+th)-f(x_0)\leqslant 0$ for small $t>0$). What are partials? They are just directional derivatives with respect to a coordinate vector. Let $(e_1,e_2,\cdots,e_d)$ be a basis of $\mathbf{R}^d$, then $D_{e_i}(f)=\frac{\partial f}{\partial e_i}$. So if you can show that $D_h(f)\leqslant 0$ for all unit vectors $h$, you will get your result.
Hint (for showing $D_h(f)\leqslant 0$): What will the sign of $f(x_0+th)-f(x_0)$ be for unit vectors $h$ and $t$ small? |
What is the way to show the following derivative problem? | Hint: For some $c\in[0,1]$, $f'(c)=0$ (can you see why?). Now apply the mean value theorem to $f'$ to bound $f'(x)$ for any other $x\in[0,1]$. |
Finding limits of a line integral with vector fields | The easiest way to determine the limits of a line integral is simply to look at the function $r(t)$. Since your curve $C$ is an ellipse, your wild guess is right, because $r(t)$ runs over $C$ exactly once as $t$ runs from $0$ to $2\pi$. The limits do not depend on the vector field $F$; they must only be such that $r(t)$ runs over $C$ once as $t$ goes over the chosen domain of $r$. What I mean by "once" is that, for instance, had we chosen $0$ and $4\pi$ for the limits, you would run over $C$ twice.
Did you just need confirmation or were you actually wondering about something more?
Hope that helps, |
The general linear group is closed | $GL(n)$ is an algebraic group, that is, it is a variety (described by polynomials) and the group multiplication and inverse are also polynomial. For example, $GL(2)$ is the set of all $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ with $ad-bc\ne 0$, which can also be described as the set of all $(a,b,c,d,D)$ that are zeroes of the single polynomial $(ad-bc)D-1$. Note that we are seeing five, not four variables here! Hence we are talking about a (Zariski) closed subset of $F^{n^2+1}$, not of $F^{n^2}$.
Also note that adding the $D$ variable is not just a trick to convert an inequality into an equality. Instead, it is essential to make inversion polynomial, the inverse of $(a,b,c,d,D)$ being $(dD,-bD,-cD,aD,ad-bc)$. |
$n$ cities with two ways | You cannot have $n \lt 3$ to satisfy the existence of a plane direct connection and a car direct connection.
For $n=3$ you are going to have three direct connections, with at least two of some particular type. That type of connection then links all three cities directly or indirectly.
For $n \gt 3$, given A and B, take any two other cities. Between these four cities there are six direct connections. So at least three of the direct connections are of a particular type. This particular type directly or indirectly connects all four cities, except when it forms a triangle between three of them and the other type provides direct connections between the fourth city and the other three. Therefore in every case the four cities are linked directly or indirectly by a single type of connection. A picture of the four cities with at least three direct connections of one type drawn in (say, in black) may help here.
So you can get between any two cities on a single type of connection, passing through at most two other cities. |
Posterior for Gamma prior and Gamma likelihood with known shape | I think it is only a matter of parametrizing the gamma density. If you take $\beta$ as the scale parameter (so the rate is $\theta=1/\beta$), the Gamma density is the following
$$f_X(x)=\frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}$$
Thus the likelihood is
$$p(\mathbf{x}|\theta)\propto\theta^{30}\cdot e^{-\theta \Sigma_i X_i}$$
The prior is
$$\pi(\theta)\propto \theta^9\cdot e^{-\theta/2}$$
Thus the posterior is
$$\pi(\theta|\mathbf{x})\propto\theta^{39}\cdot e^{-\theta(1/2+\Sigma_i X_i)}$$
Now we immediately recognize that the posterior is a $Gamma\Big[40;\frac{2}{2 \Sigma_i X_i+1}\Big]$
Obviously it is related to a $\chi_{(80)}^2$, which can be obtained by a simple linear transformation of the posterior.
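A quick numerical check of the conjugate update (my own sketch; the $\theta^{30}$ likelihood factor is read here as $30$ exponential observations with rate $\theta$):

```python
# Compare the claimed Gamma(40, scale 2/(2*sum_x + 1)) posterior with a
# grid-normalized prior x likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 1.7, size=30)   # simulated data, true rate 1.7
s = x.sum()

theta = np.linspace(1e-4, 8, 4000)
unnorm = theta**39 * np.exp(-theta * (0.5 + s))        # prior * likelihood
grid_post = unnorm / (unnorm.sum() * (theta[1] - theta[0]))
closed = stats.gamma.pdf(theta, a=40, scale=2 / (2 * s + 1))
print(np.max(np.abs(grid_post - closed)))              # small (grid error only)
```
|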
Upper Bound for Polynomial Using Evenly Spaced Points | Let $n\geq3$ and let $x_i:=\frac{i-1}{n-1}$, so that we want to determine the maximum of the polynomial
$$f_n(x):=\prod_{i=1}^n(x-x_i)=\prod_{i=1}^n\left(x-\frac{i-1}{n-1}\right),$$
on the interval $[0,1]$. If $n$ is odd then for all $x\in(x_1,x_2)$ and $i>2$ we have
$$\frac{1-i}{n-1}<x-x_i<\frac{2-i}{n-1}<0,$$
from which it follows that
$$(x-x_1)(x-x_2)\prod_{i=3}^n\frac{2-i}{n-1}
<f_n(x)
<(x-x_1)(x-x_2)\prod_{i=3}^n\frac{1-i}{n-1}.$$
The maximum of $(x-x_1)(x-x_2)$ on the interval $(x_1,x_2)$ is at the midpoint $\frac{x_1+x_2}{2}=\frac{1}{2(n-1)}$, yielding the following bounds for the maximum $M_n$ of $f_n$ on the interval $(x_1,x_2)$:
$$M_n
>\left(\tfrac{1}{2(n-1)}-x_1\right)\left(\tfrac{1}{2(n-1)}-x_2\right)\prod_{i=3}^n\frac{2-i}{n-1}
=\frac14\frac{(n-2)!}{(n-1)^n}.
$$
$$M_n
<\left(\tfrac{1}{2(n-1)}-x_1\right)\left(\tfrac{1}{2(n-1)}-x_2\right)\prod_{i=3}^n\frac{1-i}{n-1}
=\frac14\frac{(n-1)!}{(n-1)^n},$$
Similarly, if $n$ is even then for all $x\in(x_2,x_3)$ and $i>3$ we have
$$(x-x_1)(x-x_2)(x-x_3)\prod_{i=4}^n\frac{3-i}{n-1}
<f_n(x)
<(x-x_1)(x-x_2)(x-x_3)\prod_{i=4}^n\frac{2-i}{n-1},$$
and some basic algebra shows that the maximum of $(x-x_1)(x-x_2)(x-x_3)$ is at $x=\frac{1+\sqrt{3}}{3(n-1)}$, so
$$M_n>\left(\tfrac{1+\sqrt{3}}{3(n-1)}-x_1\right)
\left(\tfrac{1+\sqrt{3}}{3(n-1)}-x_2\right)
\left(\tfrac{1+\sqrt{3}}{3(n-1)}-x_3\right)\prod_{i=4}^n\frac{3-i}{n-1}
=2\frac{\sqrt{3}}{9}\frac{(n-3)!}{(n-1)^n},$$
$$M_n<\left(\tfrac{1+\sqrt{3}}{3(n-1)}-x_1\right)
\left(\tfrac{1+\sqrt{3}}{3(n-1)}-x_2\right)
\left(\tfrac{1+\sqrt{3}}{3(n-1)}-x_3\right)\prod_{i=4}^n\frac{2-i}{n-1}
=2\frac{\sqrt{3}}{9}\frac{(n-2)!}{(n-1)^n}.$$
Proof that the maximum is in $(x_1,x_2)$ if $n$ is odd, and in $(x_2,x_3)$ if $n$ is even:
(A bit messy, might clean up later)
It is not hard to see that for $x\in(x_k,x_{k+1})$
we have $\operatorname{sgn}f(x)=(-1)^{n-k}$. So the maximum is in some interval $(x_k,x_{k+1})$ with $k\equiv n\pmod{2}$.
The symmetry in the product shows that for all $x\in(0,1]$ we have
$$f\left(x-\frac{1}{n-1}\right)
=\frac{x-x_n}{x-x_1}f\left(x\right)
=(1-x^{-1})f(x),$$
and of course for $x\in[\tfrac12,1]$ we have $|1-x^{-1}|\leq1$, so the maximum value of $f$ on the interval $[\tfrac12,1]$ is assumed on the interval $(x_{n-2},x_{n-1})$. It is clear that
$$f(1-x)=(-1)^nf(x),$$
for all $x\in[0,1]$, from which it follows that the maximum value of $f$ on the interval $[0,1]$ is assumed on $(x_1,x_2)$ if $n$ is odd, and on $(x_2,x_3)$ if $n$ is even.
The polynomial $f$ of degree $n$ has precisely $n$ distinct roots in the interval $[0,1]$. It follows that the polynomial $f'$ has precisely one root in each interval $(x_k,x_{k+1})$ for $k\in\{1,\ldots,n-1\}$. The maximum of $f$ is then assumed at the unique root of $f'$ in the interval $(x_k,x_{k+1})$, where $k=1$ if $n$ is odd and $k=2$ if $n$ is even. |
Prove that the ideal $I = \left( 3, 2 + \sqrt{-5} \right)$ is a prime ideal in $\mathbb{Z}\left[ \sqrt{-5} \right]$. | Hint:
\begin{align*}
\frac{\mathbb{Z}\left[\sqrt{-5}\right]}{\left(3, 2 + \sqrt{-5}\right)} &\cong \frac{\mathbb{Z}[x]/(x^2 + 5)}{(3, 2 + x, x^2 + 5)/(x^2 + 5)} \cong \frac{\mathbb{Z}[x]}{(3, 2+x, x^2 + 5)}
\end{align*} |
Inequality $(n+1)^{-s} \leq (2n)^{-s}$ true for all $s\leq1$ and natural $n$? | You cannot use this inequality if $s<0$. Note that for $s=0$ equality holds in that passage. But if $s<0$ the divergence of the series is quite trivial: for example, rewrite $$\frac{1}{\left(n+1\right)^{s}}+\dots+\frac{1}{\left(2n\right)^{s}}$$ and use the fact that $s<0$, so each term is at least $1$. |
Find the number of digits in $2^{2^{22}}$ without logarithms. | We know that $2^{10}$ is about $1000$ and $2^{20}$ is about $1,000,000$. So $2^{22}$ is about $4$ million. Then
$$2^{2^{22}} \approx 2^{4000000} = (2^{10})^{400000} \approx (10^3)^{400000}=10^{1200000}.$$
So we expect about $1.2$ million digits. By logs we get about $1.26$ million.
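Python's big integers make the exact count easy to confirm (a sketch; the str-digit limit line requires Python 3.11+):

```python
# Digit count of 2^(2^22): log10 estimate vs. exact big-integer count.
import math, sys

e = 2 ** 22                                   # 4194304
print(math.floor(e * math.log10(2)) + 1)      # 1262612
sys.set_int_max_str_digits(2_000_000)         # allow the huge decimal conversion
print(len(str(2 ** e)))                       # 1262612 (may take a while)
```
|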
Does this derivative exist at 0 | Using the chain rule we have that for $x\in\Bbb R\setminus\{0\}$
$$f'(x)=-\sin(e^{|x|}-1)e^{|x|}\operatorname{sign}(x)$$
and
$$\lim_{x\to 0}f'(x)=0$$
where the above limit is equivalent to
$$f'(0)=\lim_{x\to 0^+}\frac{\cos(e^x-1)-1}{x}=\lim_{x\to 0^-}\frac{\cos(e^{-x}-1)-1}{x}=0$$
due to L'Hôpital rule.
P.S.: don't trust Wolfram Alpha or any other mathematical software. |
The random variables $X$ and $Y$ have the joint density $f_{X,Y}(x,y)=...$ | I suggest rewriting the joint density using the indicator function $\mathbf1_{a\in A}$, which equals $1$ if $a\in A$ and equals $0$ otherwise.
So you have
\begin{align}
f_{X,Y}(x,y)=e^{-y}\mathbf1_{0<x<y}&=\begin{cases}\frac{1}{y}\mathbf1_{0<x<y}\,ye^{-y}\mathbf1_{y>0}
\\\\e^{-(y-x)}\mathbf1_{y>x}\,e^{-x}\mathbf1_{x>0}\end{cases}
\end{align}
We have expressed the joint density in two different ways, in each case $f_{X,Y}$ factors as the product of a conditional density and a marginal density.
From the first case, $$f_{X\mid Y=y}(x)=\frac{1}{y}\mathbf1_{0<x<y}$$
And from the second case, $$f_{Y\mid X=x}(y)=e^{-(y-x)}\mathbf1_{y>x}$$
At this point you can calculate the conditional means simply from definition. Or you might identify $X\mid Y$ as having a uniform distribution over $(0,Y)$ and $Y\mid X$ as having a shifted exponential distribution (i.e. $[Y\mid X]-X$ has an exponential distribution with mean $1$), from which the expectations follow easily. |
Combinatorial proof of $P(n|p|q+1) = P(n-q|p|\le q+1) - P(n-p-q|p|\le q)$ | Here is a bijective proof of
$$P(n|p|q+1)+ P(n-p-q|p|\le q) = P(n-q|p|\le q+1) $$
Given a partition $\lambda$ of $n-q$ into $p$ parts which are all at most $q+1$, there are two cases.
If all of the parts of $\lambda$ are greater than $1$, then subtracting one from each part leaves a partition of $n-p-q$ into $p$ parts of sizes at most $q$.
If one of the parts of $\lambda$ is equal to $1$, then deleting that part and adding a part of size $q+1$ leaves a partition of $n$ into $p$ parts whose largest part is $q+1$.
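The identity is easy to check by brute force for small parameters (my own sketch; `parts_le`/`parts_eq` are ad-hoc helper names):

```python
# parts_eq(n, p, m): partitions of n into exactly p parts with largest part m;
# parts_le(n, p, m): same but with largest part at most m.
from functools import lru_cache

@lru_cache(maxsize=None)
def parts_le(n, p, m):
    if p == 0:
        return 1 if n == 0 else 0
    # choose the largest part k <= m, then partition the rest into parts <= k
    return sum(parts_le(n - k, p - 1, k) for k in range(1, min(n, m) + 1))

def parts_eq(n, p, m):
    return parts_le(n, p, m) - parts_le(n, p, m - 1)

for n in range(1, 16):
    for p in range(1, n + 1):
        for q in range(1, n):
            assert parts_eq(n, p, q + 1) == parts_le(n - q, p, q + 1) - parts_le(n - p - q, p, q)
print("identity verified for all small n, p, q")
```
|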
Why is cos and sin used in plotting this phase portrait | It's not necessary at all. All it does is decide the initial condition for each of the curves in the plot. So, what sin, cos, pi allow the code authors to do is to make ten starting points evenly spaced around a small circle centred on the origin.
Note the /5 there. That makes theta go, not from $0$ to $10\pi$, but from $0$ to $2\pi$. Then taking the sine and cosine of theta/5 traces a circle, and the factor 1e-5 makes the circle small (radius $10^{-5}$).
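A reconstruction of the idea (a sketch; the original code isn't shown, so the names here are assumed):

```python
# Ten starting points evenly spaced on a circle of radius 1e-5 around the origin.
import numpy as np

theta = np.linspace(0, 10 * np.pi, 10, endpoint=False)
x0 = 1e-5 * np.cos(theta / 5)   # theta/5 runs from 0 to 2*pi
y0 = 1e-5 * np.sin(theta / 5)
print(np.column_stack([x0, y0]))
```
|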
What happens to the trace if you multiply with orthogonal matrices | No, the statement is not true. As a counterexample, consider
$$
A = Q = V = \pmatrix{1&0\\0&1}, \quad U = \pmatrix{0&1\\1&0}.
$$ |
Use the arithmetic mean-geometric mean inequality to establish that $-x < n < m$ implies $(1+x/n)^{n} \leq (1 + x/m)^{m}$ | We also need $n>0$, otherwise your inequality is wrong.
For $n>0$ since $$\frac{n}{m}+\frac{m-n}{m}=1,$$ By AM-GM we obtain $$1+\frac{x}{m}=\frac{n}{m}\left(1+\frac{x}{n}\right)+\frac{m-n}{m}\geq\left(1+\frac{x}{n}\right)^{\frac{n}{m}}\cdot1^{\frac{m-n}{m}}=\left(1+\frac{x}{n}\right)^{\frac{n}{m}},$$ which gives your inequality. |
How many subsets of $A=\{0,1,2,...,n\}$ are there such that no consecutive numbers come together | Let $A_n = \{1,2,3,\dots,n\}$. Let $G_n$ be the number of "good" subsets of $A_n$. Here I will consider the empty set to be a "good" subset of each $A_n$.
For each subset $S$ of $A_n$ there is a function $f_{n,S}:A_n \to \{0,1\}$ defined by $f_{n,S}(x)=
\begin{cases}
0 & \text{If $x \not \in S$} \\
1 & \text{If $x \in S$} \\
\end{cases}$
\begin{array}{c}
& f \\
\text{subset} & 1 & \text{good?} \\
\hline
& 0 &\checkmark \\
1 & 1 &\checkmark \\
\hline
\end{array}
So $G_1=2$.
\begin{array}{c}
& f \\
\text{subset} & 12 & \text{good?} \\
\hline
& 00 &\checkmark \\
1 & 10 &\checkmark \\
2 & 01 &\checkmark \\
12 & 11 \\
\hline
\end{array}
So $G_2=3$.
\begin{array}{c}
& f \\
\text{subset} & 123 & \text{good?} \\
\hline
& 000 &\checkmark &\text{Compare to $A_2$}\\
1 & 100 &\checkmark \\
2 & 010 &\checkmark \\
12 & 110 \\
\hline
3 & 001 &\checkmark &\text{Compare to $A_1$}\\
13 & 101 &\checkmark \\
23 & 011 & \\
123 & 111 \\
\hline
\end{array}
So $G_3=G_1+G_2=5$.
\begin{array}{c}
& f \\
\text{subset} & 1234 & \text{good?} \\
\hline
& 0000 &\checkmark &\text{Compare to $A_3$}\\
1 & 1000 &\checkmark \\
2 & 0100 &\checkmark \\
12 & 1100 \\
3 & 0010 &\checkmark \\
13 & 1010 &\checkmark \\
23 & 0110 & \\
123 & 1110 \\
\hline
4 & 0001 &\checkmark &\text{Compare to $A_2$}\\
14 & 1001 &\checkmark \\
24 & 0101 &\checkmark \\
124 & 1101 \\
34 & 0011 & \\
134 & 1011 & \\
234 & 0111 & \\
1234 & 1111 \\
\hline
\end{array}
So $G_4=G_2+G_3=8$.
So it seems that $G_1=2, \quad G_2=3, \quad$ and $G_{n+2}=G_n + G_{n+1}$. Thus $G_n = F_{n+2}$, the $(n+2)^{\text{th}}$ Fibonacci number.
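A brute-force check (my own sketch):

```python
# Count subsets of {1,...,n} with no two consecutive elements; compare to F_{n+2}.
from itertools import combinations

def good_subsets(n):
    total = 0
    for r in range(n + 1):
        for s in combinations(range(1, n + 1), r):
            if all(b - a > 1 for a, b in zip(s, s[1:])):
                total += 1
    return total

fib = [0, 1]
while len(fib) < 15:
    fib.append(fib[-1] + fib[-2])

print([good_subsets(n) for n in range(1, 10)])  # [2, 3, 5, 8, 13, 21, 34, 55, 89]
assert all(good_subsets(n) == fib[n + 2] for n in range(1, 10))
```
|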
Connected topological spaces, product is connected | First prove that any finite product of connected spaces is connected. This can be done by induction on the number of spaces, and for two connected spaces $X$ and $Y$ we see that $X \times Y$ is connected, by observing that $X \times Y = (\cup_{x \in X} (\{x\} \times Y)) \cup X \times \{y_0\}$, where $y_0 \in Y$ is some fixed point. This is connected because every set $\{x\} \times Y$ is homeomorphic to $Y$ (hence connected) and each of them intersects $X \times \{y_0\}$ (in $(x,y_0)$), which is also connected, as it is homeomorphic to $X$. So the union is connected by standard theorems on unions of connected sets. Now finish the induction. So for every finite set $I$, $\prod_{i \in I} X_i$ is connected.
Now if $I$ is infinite, fix points $p_i \in X_i$ for each $i$, and define $$Y = \{ (x_i) \in \prod_i X_i: \{i \in I: x_i \neq p_i \} \text{ is finite }\}$$
Now for each fixed finite subset $F \subset I$, define $Y_F = \{ (x_i) \in \prod_i X_i: \forall i \notin F: x_i = p_i \}$. By the obvious homeomorphism, $Y_F$ is homeomorphic to $\prod_{i \in F} X_i$, which is connected by the first paragraph. So all $Y_F$ are connected, all contain the point $(p_i)_{i \in I}$ of $\prod_{i \in I} X_i$, and their union (over all finite subsets $F$ of $I$) equals $Y$. So again by standard theorems on the union of connected subsets of a space, $Y$ is a connected subspace of $\prod_{i \in I} X_i$.
Finally note that $Y$ is dense in $\prod_{i \in I} X_i$, because every basic open subset $O$ of the product depends on a finite subset of $I$, in the sense that $O = \prod_{i \in I} U_i$ where all $U_i \subset X_i$ are non-empty open and there is some finite subset $F \subset I$ such that $U_i = X_i$ for all $i \notin F$. Pick $q_i \in U_i$ for $i \in F$ and set $q_i = p_i $ for $i \notin F$. Then $(q_i)_{i \in I}$ is in $O \cap Y_F \subset O \cap Y$, so every (basic) open subset of $\prod_{i \in I} X_i$ intersects $Y$.
Now use that the closure of a connected set is connected to conclude that $\prod_{i \in I} X_i$ is connected, also for infinite $I$. |
Nilpotent elements in a quotient ring. | No.
Consider $\mathbb{Q}[x]$ (rational polynomials) and the ideal $I=(x^2)$ (the principal ideal generated by $x^2$).
Notice that $\mathbb{Q}[x]$ is an integral domain (so no zero divisors so no nilpotent elements). Thus $I$ contains all of the nilpotent elements (since there are none).
On the other hand, $\mathbb{Q}[x]/I$ does have nilpotent elements. In particular $(x+I)^2=x^2+I=0+I$ and $x+I \not= 0+I$.
So quotienting can create new nilpotent elements.
Another example along the same lines: consider $\mathbb{Z}$ (an integral domain, so no nilpotents). Then $8\mathbb{Z}$ contains all nilpotents (again, since there aren't any). But $\mathbb{Z}/8\mathbb{Z} = \mathbb{Z}_8$ does have nilpotent elements. For example, $2 \not=0$ in $\mathbb{Z}_8$ but $2^3=8=0$.
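A quick computational check (a Python one-liner of my own) lists the nilpotent elements of $\mathbb{Z}_8$:

```python
# x in Z_8 is nilpotent iff x^3 = 0 there, since x^3 is divisible by 8 for even x.
print([x for x in range(8) if pow(x, 3, 8) == 0])  # [0, 2, 4, 6]
```

So the quotient has three nonzero nilpotents, even though $\mathbb{Z}$ itself has none. |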
If bimodules are important, why not bi-vector spaces? | Unlike the case of vector spaces, it's not particularly natural here to restrict attention to fields for bimodules. As paul garrett says, the basic problem is that if $M$ is a $(K_1, K_2)$-bimodule over a field $k$, where $K_1, K_2$ are both fields, then equivalently $M$ is a $K_1 \otimes_k K_2$-module, and $K_1 \otimes_k K_2$ need not be a field.
For a simple example, if $k = \mathbb{R}$ and $K_1 = K_2 = \mathbb{C}$, then
$$\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \times \mathbb{C}.$$
So in the nice cases, instead of a field you get a finite direct product of fields (e.g. if $K_1, K_2$ are both finite separable extensions of $k$), which is not too bad. What this means is that every $(\mathbb{C}, \mathbb{C})$-bimodule over $\mathbb{R}$ is a direct sum of copies of two simple bimodules, one given by $\mathbb{C}$ with the obvious bimodule structure and one given by $\mathbb{C}$ where one of the multiplications has been "twisted" by complex conjugation.
But worse things can happen: if $k = \mathbb{F}_p(t)$ and $K_1 = K_2 = k[x]/(x^p - t)$, then
$$K_1 \otimes_k K_2 \cong K_1[x]/(x - \sqrt[p]{t})^p$$
which has a nontrivial nilpotent, and in particular which is not semisimple. |
Prove the inequality: $\int_0^2 \frac{1}{2+\arctan x} dx \geq \ln 2$ | There are two important things to remember to start with here:
$\arctan(0)=0$
The derivative of $\arctan(x)$ with respect to $x$ is $\frac{1}{x^2+1}$ (something you will have learned from differential calculus), which is a positive number strictly less than $1$ for every $x\neq0$, and equal to $1$ only at $x=0$.
These two facts, together with the fact that the derivative of $f(x)=x$ is identically equal to $1$, show that on the interval $(0,2)$, $x$ is always strictly larger than $\arctan(x)$.
Using this, we find that on $(0,2)$ the quantity $\frac{1}{2+\arctan(x)}$ is always strictly larger than $\frac{1}{2+x}$ (since we are dividing by a smaller positive amount), from which it follows that $\int_0^2 \frac{1}{2+\arctan(x)}\,dx \geq \int_0^2\frac{1}{2+x}\,dx.$
Finally, evaluating the integral on the right yields $\ln(4)-\ln(2)=\ln(2)$, which, substituted into the inequality, completes the proof.
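A numerical check (a SciPy sketch of mine, not part of the proof) shows the inequality holds with room to spare:

```python
import numpy as np
from scipy.integrate import quad

lhs, _ = quad(lambda x: 1.0 / (2.0 + np.arctan(x)), 0.0, 2.0)
print(lhs, np.log(2))  # approximately 0.75 versus ln 2 ≈ 0.693
```

The comparison integral $\int_0^2\frac{dx}{2+x}$ is what turns this observation into a proof. |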
Why when $D^2 = D$ is D not the identity matrix | Matrices that fulfill $D^2=D$ are projections, and there is only one projection that maps onto the whole vector space: the identity.
Your example projects everything onto the line spanned by $\begin{pmatrix}1\\1\end{pmatrix}$ and is not invertible.
In your derivation you assumed the matrix to be non-singular, but apart from the identity that is never the case for projections.
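A small NumPy check (my own illustration, using the orthogonal projection onto $\operatorname{span}\{(1,1)^T\}$, which may differ slightly from the matrix in the question) makes this concrete:

```python
import numpy as np

# Orthogonal projection onto span{(1,1)}: D = v v^T / (v^T v).
v = np.array([[1.0], [1.0]])
D = (v @ v.T) / (v.T @ v)

print(np.allclose(D @ D, D))     # True: D is idempotent, D^2 = D
print(np.linalg.matrix_rank(D))  # 1: D is singular, hence not invertible
```

Idempotent yet singular: exactly the situation described above. |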
$x\in{\rm span}(S\cup\{y\}), x\notin{\rm span}(S)$ implies $y\in{\rm span}(S\cup\{x\})$ | The statement is false for arbitrary modules. To take a simple example, let the module be $\Bbb Z$ as a $\Bbb Z$-module with the usual operations, and let $S=\emptyset$, $x=2$, $y=1$. Then $x\in{\rm span}(y)=\Bbb Z$ but $y\notin{\rm span}(x)=2\Bbb Z$. The statement $x_i\notin{\rm span}(\{x_j:j\in S\setminus i\})$ corresponds to $x_i\ne\sum_{j\in S}a_jx_j$ for any finitely supported family $\{a_j:j\in S\}$, which does not match the usual symmetric notion of linear independence, $\sum_{j\in S}a_jx_j=0$ only if all $a_j=0$, in an arbitrary module (since a nontrivial representation of $0$ does not necessarily allow any one of the vectors to be written in terms of the others). To use the $\Bbb Z$ example again, $\{2,3\}$ is linearly dependent in the usual sense, because $2\cdot 3-3\cdot 2=0$, but $2\notin{\rm span}(3)$ and $3\notin{\rm span}(2)$. There is no simple interpretation of this notion of linear independence in terms of spans.
For a module over a skew field $F$ or a vector space, the statement is true, and the proof is similar to the given one. Let $T=\{z:\exists b\in F,\,z-by\in{\rm span}(S)\}$. Then it is easy to show that $T$ is a subspace, and of course it contains $y$ and the vectors in $S$. Thus ${\rm span}(S\cup\{y\})\subseteq T$, so $x\in T$ and there is a $b$ such that $x-by\in{\rm span}(S)$. If $b=0$, then $x\in{\rm span}(S)$, so $b\ne0$ and $b^{-1}$ exists. Then $y=b^{-1}(x-(x-by))$, and since $x,x-by\in{\rm span}(S\cup\{x\})$, we have $y\in{\rm span}(S\cup\{x\})$. |
Does pointwise convergence to a continuous function in compact set imply uniform convergence? | No, if there is no condition that the $f_n$ are continuous. Counterexample: $K = [0,1]$, $f_n(x)=0$ for $x\neq1/n$ and $f_n(1/n)=1$, with $f(x) \equiv 0$.
EDIT: For continuous functions it is still not true. Take $K$ and $f$ as above, and let $f_n(x)=0$ for $x < 1/(2n)$ and for $x > 3/(2n)$, with $f_n(1/n)=1$ and $f_n$ linear on the intervals $[1/(2n),1/n]$ and $[1/n,3/(2n)]$.
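To see the failure of uniform convergence numerically, here is a sketch (my own encoding of the tent functions just described):

```python
import numpy as np

def f_n(x, n):
    """Tent of height 1 supported on [1/(2n), 3/(2n)], peaking at x = 1/n."""
    return np.clip(1.0 - 2.0 * n * np.abs(x - 1.0 / n), 0.0, 1.0)

x = np.linspace(0.0, 1.0, 100001)
for n in (1, 10, 100, 1000):
    print(n, f_n(x, n).max())  # sup|f_n - 0| = 1.0 for every n
```

Each $f_n$ is continuous and $f_n\to0$ pointwise on $K$, yet $\sup_K|f_n-f|=1$ for all $n$. |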
$\mathbb {Z}^2$ can be generated by the vectors $(w,x)$ and $(y,z)$, where $w,x,y,z \in \mathbb {Z}$, if $wz-xy= \pm1$ is satisfied. | $\mathbb Z$ is generated by $\pm1$, so $\mathbb Z^2$ is generated by $(\pm1,0)$ and $(0,\pm 1)$.
Given the condition, we have $(\pm1,0)=z(w,x)-x(y,z)$ and $(0,\mp1)=y(w,x)-w(y,z),$
so $(w,x)$ and $(y,z)$ generate $\mathbb Z^2$. |
Prove $\mathbb{R}^3$ is not the product of two identical topological spaces | As has been pointed out in the comments, this question has been asked and answered on MathOverflow. I have replicated the accepted answer by Tyler Lawson below.
No such space exists. Even better, let's generalize your proof by converting information about path components into homology groups.
For an open inclusion of spaces $X \setminus \{p\} \subset X$ and a field $k$, we have isomorphisms (the relative Kunneth formula)
$$
H_n(X \times X, X \times X \setminus \{(p,p)\}; k) \cong \bigoplus_{i+j=n} H_i(X,X \setminus \{p\};k) \otimes_k H_j(X, X \setminus \{p\};k).
$$
If the product is $\mathbb{R}^3$, then the left-hand side is $k$ in degree $3$ and zero otherwise, so something on the right-hand side must be nontrivial. However, if $H_*(X, X \setminus \{p\};k)$ were nontrivial in some degree $n$, then the left-hand side would be nontrivial in degree $2n$; since $3$ is odd, this is impossible. |
Show power set union identity | $a\in X\Rightarrow a\subseteq\cup X$.
Proving that $\wp\left(a\right)\in\wp\left(\wp\left(\cup X\right)\right)$
comes to the same as proving that $\wp\left(a\right)\subseteq\wp\left(\cup X\right)$
which is a direct consequence of $a\subseteq\cup X$. |
Potential of Sphere from Surface Integrals | The question sounds correct. One thing you may be missing out on, however, is that you (probably) are not required to use rectangular coordinates. We want to calculate the potential at some point $P$ due to something like charge or mass that is evenly distributed over a spherical surface.
First, a sketch of the surface showing relevant features, namely the surface, a differential element of area on the surface and the point $P$ at which the potential is to be calculated. Now here's a trick: put $P$ on the z-axis of the reference frame. We should be free to pick any frame we want, so we pick a frame whose origin is at the center of the sphere and whose z-axis goes through $P$.
Second, write an expression for contribution to the potential due to our element of area. That contribution is $\mu dA/R_{ps}$, where $R_{ps}$ is the distance from element of surface area to $P$. So, we are talking about evaluating the integral $$\int \frac{\mu}{R_{ps}}dA $$
Third, we need coordinates to identify the points on the surface that are included in the integral and we want to express $dA$ and $R_{ps}$ in those same coordinates. Since we have a spherical surface, let's use spherical coordinates.
The spherical coordinates I use are $(r,\theta,\phi)$, where $\theta$ is the polar angle that goes from $0$ to $\pi$ and $\phi$ is the azimuthal angle that goes from $0$ to $2\pi$. So, on the sphere of radius $a$, the area element is $a^2\sin{\theta}\,d\theta\, d\phi$.
The distance $R_{ps}$ can be obtained from the law of cosines. A separate sketch showing the distances involved may be helpful. We let the distance from the origin to point $P$ be $b$, and get $R_{ps}^2=a^2+b^2-2ab\cos\theta$.
Fourth, rewrite the integral in spherical coordinates to obtain $$\int \int \frac{a^2\mu\sin\theta}{\sqrt{a^2+b^2-2ab\cos\theta}}d\theta d\phi $$
Fifth, evaluate the integral over the proper limits and reduce the result to its simplest form. The integral itself is easy. There is one trick to keep in mind when we eliminate the square roots: we must consider separately what happens when $b<a$, when $b=a$ and when $b>a$. Really, all we are doing is evaluating $R_{ps}$ at three positions, so our sketch and the law of cosines may be helpful.
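If the integration itself is the sticking point, a symbolic computation (a SymPy sketch of mine, using the substitution $u=\cos\theta$ and handling the exterior case by writing $b=a+c$ with $c>0$) reproduces the classical result:

```python
import sympy as sp

a, c, mu, u = sp.symbols('a c mu u', positive=True)
b = a + c  # exterior point: b > a; the interior case b < a is analogous

# After u = cos(theta), the double integral reduces to 2*pi times a u-integral.
V = 2 * sp.pi * sp.integrate(a**2 * mu / sp.sqrt(a**2 + b**2 - 2*a*b*u), (u, -1, 1))
print(sp.simplify(V - 4 * sp.pi * a**2 * mu / b))  # 0, i.e. V = (total source)/b
```

For $b<a$ the same computation gives $V=4\pi a\mu$, independent of $b$, so the potential is constant inside the shell.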
Sixth, as a sanity check, is this the same solution as the physics approach? In physics we talk about attractive forces like gravity, which has negative potential for positive $\mu$, and about repulsive forces, like the Coulomb force, which has a positive potential for positive $\mu$. |
(Proof-check) for the expression of total variation of a continuously differentiable function | It is a nice characterization of the total variation of a $C^1$ function in terms of the integral of the derivative. Note that this holds more generally if $f$ is only assumed to be absolutely continuous and the integral is taken in the Lebesgue sense.
In any case, your proof is perfectly fine! |
Undetermined vs. Undefined | Division by zero is undefined in every case.
In calculus, the phrase “$0/0$ is an indeterminate form” means that you have a limit of the form
$$
\lim_{x\to a}\frac{f(x)}{g(x)}
$$
where
$$
\lim_{x\to a}f(x)=0
\qquad\text{and}\qquad
\lim_{x\to a}g(x)=0
$$
but $f(x)/g(x)$ is defined in a set having $a$ as a limit point (usually, but not necessarily, a punctured neighborhood of $a$), and nowhere do you actually compute $0/0$, which makes no sense. In this case you can apply no standard theorem on limits, and the limit, if it exists, must be computed with some technique other than simply substituting the value $a$.
Some say that the value of the fraction $0/0$ (no reference to limits) is undetermined, but this has no real usefulness. |
Is it possible to define the standard function notation $f(x)=y$ in terms of an arbitrary relation? | Suppose $R$ is a set of ordered pairs. Let $dom(R)=\{x: \exists y(\langle x,y\rangle\in R)\}$. There is a natural function with domain $dom(R)$ which captures the behavior of $R$: $$\gamma_R: x\mapsto\{y: \langle x,y\rangle\in R\}.$$ The relation $R$ is a function iff $\gamma_R(x)$ is a singleton for every $x\in dom(R)$, and so for $f$ a function we can think of "$f(x)=y$" as a shorthand for "$\gamma_f(x)=\{y\}$."
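A small Python sketch (names are mine) of this construction:

```python
def gamma(R):
    """Given a set R of ordered pairs, return the map x -> {y : (x, y) in R}."""
    out = {}
    for x, y in R:
        out.setdefault(x, set()).add(y)
    return out

R = {(1, 'a'), (1, 'b'), (2, 'c')}
g = gamma(R)
print(g)                                     # {1: {'a', 'b'}, 2: {'c'}}
print(all(len(v) == 1 for v in g.values()))  # False: R is not a function
```

Here $R$ fails to be a function precisely because $\gamma_R(1)$ is not a singleton. |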
Understanding how to construct a tall narrow tree | What a nice coincidence, I was actually present at that talk!
We want to show the following:
There is a tree $T\subseteq {}^{<\eta}2$ in $W$ (ordered by end-extension) with levels of size ${<}\delta$ so that for any $\alpha<\eta$, $\left.s\right|_\alpha\in T_\alpha$.
Observe that the problem at hand is only interesting when $\delta\leq\eta<\lambda$.
First note that $\lambda$ is a strong limit cardinal in $W$, too. Thus there is some $\theta<\lambda$ and a bijection $h:({}^{<\eta}2)^W\rightarrow \theta$ in $W$.
We now make use of the third variant of the $\lambda$-uniform $\delta$-covering as defined in the slides:
Whenever $f:\lambda\rightarrow\lambda$ is in $V$, there is $B\subseteq\lambda\times\lambda$ in $W$ with all vertical slices of size ${<}\delta$ and $f\subseteq B$.
As $\theta,\eta<\lambda$, this surely still holds if we let $f$ be a function $f:\eta\rightarrow\theta$ and demand $B$ to be a subset of $\eta\times\theta$. In our case, we take
$$f:\eta\rightarrow\theta,\ \alpha\mapsto h(\left.s\right|_\alpha)$$
This is well-defined by our assumption that $\left.s\right|_\alpha\in W$ for all $\alpha<\eta$. Hence we get a $B\subseteq \eta\times\theta$ in $W$ with vertical slices of size ${<\delta}$ and $f\subseteq B$. We can furthermore assume that whenever $(\alpha,\beta)\in B$, then $h^{-1}(\beta)\in{}^\alpha 2$. You can think of the vertical slice of $B$ at $\alpha<\eta$, that is $B_\alpha=\{\beta\mid (\alpha, \beta)\in B\}$, (or rather of $h^{-1}[B_\alpha]$) as a guess for $\left.s\right|_\alpha$. This readily gives us the desired tree $T$ in $W$: the $\alpha$-th level of $T$ is
$$T_\alpha=\{t\in {}^\alpha 2\mid h(t)\in B_\alpha\}$$ |
Binomial Function and the hypergeometric function | You should express $h(r|M, N, n)$ and the hypergeometric function in terms of the Pochhammer symbol. To do so, you need the following properties:
$$\binom mn = \frac{m!}{n! (m-n)!}$$
$$ m! = (1)_m$$
$$ (m)_{n+k} = (m)_n (m+n)_k$$
$$(m)_{-k} = \frac{(-1)^k}{(1-m)_k}$$
Using these properties, $h(r|M, N, n)$ becomes :
$$h(r|M, N, n) = \frac{(N-M)!(N-n)!}{N! (N-M-n)!}\,\frac{(-M)_r (-n)_r}{r! (N-M-n+1)_r}$$
Then
\begin{eqnarray}
G(t) &=& \frac{(N-M)!(N-n)!}{N! (N-M-n)!}\,\sum_{r=0}^n \frac{(-M)_r (-n)_r}{r! (N-M-n+1)_r} t^r \\ &=& \frac{(N-M)!(N-n)!}{N! (N-M-n)!}\, F(-M, -n, N-M-n+1;t)
\end{eqnarray}
Using the Chu-Vandermonde identity, you can show that
$$F(-M, -n, N-M-n+1;1) = \frac{N! (N-M-n)!}{(N-M)!(N-n)!},$$
which gives you the desired result.
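A numerical spot check (using mpmath's hypergeometric routine, with concrete values $N=20$, $M=7$, $n=5$ of my own choosing):

```python
from mpmath import hyp2f1, factorial

N, M, n = 20, 7, 5
lhs = hyp2f1(-M, -n, N - M - n + 1, 1)
rhs = factorial(N) * factorial(N - M - n) / (factorial(N - M) * factorial(N - n))
print(lhs, rhs)  # the two values agree
```

The series terminates because $-M$ and $-n$ are negative integers, so the evaluation is exact. |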
Let $i: X \to Y$ be the inclusion of a closed subvariety. Then $i$ is a finite morphism | This is all a bit laboured. We can reduce to the affine case: $X=\textrm{Spec}\,R$ and $Y=\textrm{Spec}\,S$. For $i:X\to Y$
to be a closed immersion (the inclusion of a closed variety
is a special case), then the corresponding ring map $S\to R$
is a surjection: basically $R=S/I$ for the ideal $I$ for
which $X=V(I)$. Therefore
$R$ is a finitely-generated $S$-module, and so $i$ is a finite morphism. |
Prove that when AC matrices are multiplied and equal the identity matrix the solution of Ax=b is consistent for all real numbers | So, I misunderstood the meaning of consistent: we are just looking for at least one solution. Let $b\in\mathbb{R}^{m}$. Then $ACb=b$, which implies that $Cb$ is a solution to the equation $Ax=b$. This solution may not be unique; I wasn't able to come up with a quick example, but it is only guaranteed to be unique if $m=n$. |
Kernel cokernel correspondence? | The categories involved are the full subcategories of the morphism category of $C$. Its objects are morphisms of $C$ and its arrows are commutative squares.
As for the correspondence, just think of kernel of $f$ not as of an object but as of a morphism ("the inclusion of kernel into the domain of $f$") and likewise think of cokernel. Then a kernel of a normal epi is a normal mono and a cokernel of normal mono is a normal epi, and taking (co)kernels is functorial. |
Generating functions combinatorical problem | Here’s one elementary approach. As you see, it confirms your result.
Let $b,g$, and $r$ be the numbers of blue, green, and red balls chosen to make up a set of $10$ balls. You’re looking for the number of solutions in non-negative integers to the equation $$b+g+r=10\;,\tag{1}$$ subject to the condition that $g\le 5$ and $r\le 5$. (You also have to have $b\le 10$, but that imposes no additional constraint when the sum is to be $10$.) Without the upper bounds this is a standard stars-and-bars problem whose solution is
$$\binom{10+3-1}{3-1}=\binom{12}2\;.$$
However, this count includes solutions with too many green or red balls. Let $g'=g-6$; then there is a bijection between solutions to $$b+g'+r=4\tag{2}$$ in non-negative integers and solutions to $(1)$ for which $g>5$. Thus, we need only count solutions to $(2)$ to get the number of solutions to $(1)$ with $g>5$. This is another stars-and-bars problem, and the answer is
$$\binom{4+3-1}{3-1}=\binom62\;.$$
Similarly, there are $\dbinom62$ solutions to $(1)$ that have $r>5$. There are no solutions that exceed the upper limits on both $g$ and $r$, so the number of solutions to $(1)$ that satisfy all of the conditions is
$$\binom{12}2-2\binom62=66-30=36\;.$$
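The count is small enough to verify by brute force (a short Python check of mine):

```python
# b, g, r = numbers of blue, green, red balls; g and r are capped at 5.
count = sum(1 for b in range(11) for g in range(6) for r in range(6)
            if b + g + r == 10)
print(count)  # 36
```

This agrees with the inclusion-exclusion count above. |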
A function $f:I\times I\longrightarrow I$ continuous in each variable, where $I=[0,1]$ | When we say that $f:I\times I\longrightarrow I$ is continuous, we mean that $f$ is continuous at every point $(x,y)\in I\times I$ jointly in the pair of variables; this is a stronger condition than continuity in each variable separately. |
Another question on integral domains having special kind of field of fractions | If $M\subset Frac(R)$ is a nonzero submodule which is free, then it must be cyclic, since $M\otimes_R Frac(R)=Frac(R)$ is free of rank $1$ over $Frac(R)$. Or by a more elementary argument, if $x=\frac{a}{b}$ and $y=\frac{c}{d}$ are two distinct nonzero elements of $M$ with $a,b,c,d\in R$, then $bcx=ady$, which is a nontrivial relation between $x$ and $y$. So $x$ and $y$ cannot both be part of a free generating set for $M$, and $M$ cannot be generated freely by more than one element. So this is equivalent to your previous question: the answer is that $R$ satisfies your condition iff $R$ is a field or a DVR. |
How to work out this problem using Stolz's theorem? | Let $f(x) = x (1 - q x)$. For $0 < x < 1/q$ we have $0 < 1 - q x < 1$ so $0 < f(x) < x$.
Since the only fixed point of the continuous function $f$ is $0$, the limit of the decreasing sequence $x_n$ is $0$. Now if $a_n = 1/x_n$, we have $a_{n+1} - a_n = \dfrac{q}{1 - q x_n}$.
Take $b_n = n$ in the statement of the Stolz–Cesàro theorem, as in http://en.wikipedia.org/wiki/Stolz_theorem. Since $x_n\to0$, we get $a_{n+1}-a_n\to q$, hence $\frac{a_n}{n}\to q$, that is, $n x_n\to \frac1q$.
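Numerically (a quick sketch of mine, with $q=2$ and $x_1=0.1$), one can watch $nx_n\to 1/q$:

```python
q, x = 2.0, 0.1  # x_1 must lie in (0, 1/q)
for n in range(1, 100001):
    if n in (10, 100, 1000, 10000, 100000):
        print(n, n * x)  # n * x_n -> 1/q = 0.5
    x = x * (1.0 - q * x)  # x_{n+1} = x_n (1 - q x_n)
```

The approach is slow, since $a_n = qn + O(\log n)$, roughly speaking. |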
Finding $c,b$ which minimizes $E(|X-c|)$ and $E[(X-b)^2]$ | It is a fact, that the median minimizes the sum of absolute deviations. See here, for example, for explanations and proofs.
Since $5$ is the median of the given data set and the weights are all equal to $\frac 17$, it minimizes $E(|X-c|)$. |
An uncountable subset in $\Bbb{R}^n$ in which each $n$ elemented subset is a base | $$\{\,(1,t,\ldots,t^{n-1})\mid t\in\mathbb R\,\}$$ |
How to show $x\in \{y\in\mathbb R: 2^{k-2}\leq |y|\leq 2^{k+1}\}\Leftrightarrow k=j, j+1, j+2$? | Hint: Note that $\{y\in\mathbb{R}\mid 2^{k-2}\leq |y|\leq 2^{k+1}\}$ is equal to $$\{y\in\mathbb{R}\mid 2^{k-2}\leq |y|\leq 2^{k-1}\}\cup\{y\in\mathbb{R}\mid 2^{k-1}\leq |y|\leq 2^k\}\cup\{y\in\mathbb{R}\mid 2^k\leq |y|\leq 2^{k+1}\}.$$ |
Number of solutions for x^2=a(mod m) | I think I got it: I decompose $m$ into pairwise coprime factors and check when $x^2\equiv a\pmod m$ has solutions modulo each factor. Say $x^2\equiv 11^2\pmod{1800}$. Because $1800=9\cdot 8\cdot 25$, this has solutions if and only if
$x^2\equiv 11^2\equiv 4\pmod 9$
$x^2\equiv 11^2\equiv 1\pmod 8$
$x^2\equiv 11^2\equiv 21\pmod{25}$
These give
$x\equiv\pm2\pmod 9$
$x\equiv\pm1,\pm3\pmod 8$
$x\equiv\pm11\pmod{25}$
so by the Chinese remainder theorem it has $2\cdot4\cdot2=16$ solutions. Is it right?
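A brute-force check (a short Python script of my own) confirms the count:

```python
print(len([x for x in range(1800) if (x * x - 11**2) % 1800 == 0]))  # 16
```

So yes, there are exactly $2\cdot4\cdot2=16$ solutions modulo $1800$. |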
Clarification on the order of an element in the Pohlig-Hellman algorithm for groups whose order is a prime power | The order of $h_k$ does divide $p$; for $k=0$ this is immediate from the definition, as
$$h_0:=(g^{-x_0}h)^{p^{e-1-0}}=(g^{-0}h)^{p^{e-1}}=h^{p^{e-1}},$$
and so by Lagrange's theorem the order of $h_0$ divides $p$. For every $k\geq0$ we have
\begin{eqnarray*}
h_{k+1}^p&=&\left((g^{-x_{k+1}}h)^{p^{e-1-(k+1)}}\right)^p\\
&=&\left(g^{-(x_k+p^kd_k)}h\right)^{p^{e-1-k}}\\
&=&\left(g^{-p^kd_k}g^{-x_k}h\right)^{p^{e-1-k}}\\
&=&g^{-p^{e-1}d_k}(g^{-x_k}h)^{p^{e-1-k}}\\
&=&\gamma^{-d_k}h_k,
\end{eqnarray*}
which shows that $h_{k+1}^p=e$ by definition of $d_k$, so the order of $h_{k+1}$ divides $p$. |
11 Teams, 3 Medals, How many Combinations? | The answer is $$\color{blue}{\binom{11}{3}}\times\color{red}{3!}=990.$$
Explanation:
In blue is the binomial coefficient of $11$ by $3$: it tells us in how many different ways we can choose $3$ teams out of the $11$.
In red is the factorial of $3$: it tells us in how many ways we can arrange the three leading teams on the podium.
For each set of $3$ winning teams we multiply by the number of possible orderings of the three leading teams. This is how we arrived at the provided answer. |
What does it mean that there is an "isomorphism of homsets" due to an exponential object? | (i) : No : start from $h : Z\to X^Y$; then you have $h\times id_Y : Z\times Y\to X^Y\times Y$, and then you can compose with $\mathbf{apply}$ to get $g:= \mathbf{apply}\circ (h\times id_Y) : Z\times Y\to X$.
Then by uniqueness, $\lambda g = h$.
(ii) : A bijection between sets is an isomorphism in the category of sets $\mathbf{Set}$.
But here it's even more than that, because the isomorphism $\hom(Z\times Y, X) \cong \hom(Z, X^Y)$ is natural in all three variables, so it is actually an isomorphism between two functors in the category of functors $C^{op}\times C^{op}\times C\to \mathbf{Set}$. |
$f$ decreasing monotonically decreasing , Prove: for every $\alpha \in (0,1)$ :$\int_{0}^{\alpha} f(x)dx \geq \alpha\int_{0}^{1}f(x)dx$ | $\int_0^{\alpha} f(x) dx = \alpha \int_0^1 f(\alpha t) d t \geq \alpha \int_0^1 f(t) d t$
First equality is integration by substitution with $x=\alpha t$, then inequality holds since $\alpha t \leq t$ and $f$ is decreasing. |
Proving MLE is asymptotically normal. | Switching up the notation a little bit here to move $\theta_0$ and $\theta^*$ outside until after we take the derivative.
Write out $Z_n$ as:
\begin{equation}
Z_n = \frac{1}{n}\sum_{i=1}^n \frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*}
\end{equation}
For now let's assume $\theta^*$ is a constant, because the $X_i$ are sampled from the true $\theta_0$, so it doesn't affect our approach until the end.
Using the law of large numbers we have:
\begin{equation}
\frac{1}{n}\sum_{i=1}^n \frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*} \overset{a.s.}{\rightarrow} E_{\theta_0}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*}]
\end{equation}
Using the dominated convergence property of conditional expectation we can rearrange the expectation to condition on a random $\theta^*$:
\begin{equation}
\begin{split}
E_{\theta_0|\theta^*}[\frac{1}{n}\sum_{i=1}^n \frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*} | \theta^*] &\overset{a.s.}{\rightarrow} E_{\theta_0|\theta^*}[ E_{\theta_0|\theta^*}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*}|\theta^*]|\theta^*]\\
\end{split}
\end{equation}
Noting that:
\begin{equation}
E_{\theta_0|\theta^*}[ E_{\theta_0|\theta^*}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*}]|\theta^*] = E_{\theta_0|\theta^*}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*} | \theta^*]
\end{equation}
is a random function of $\theta^*$, and it is almost surely continuous in $\theta^*$.
Using your fact that $\theta^*\overset{a.s.}{\rightarrow}\theta_0$, by the continuous mapping theorem:
\begin{equation}
\begin{split}
E_{\theta_0|\theta^*}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*} | \theta^*] \overset{a.s.}{\rightarrow} E_{\theta_0|\theta_0}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta_0} | \theta_0]
\end{split}
\end{equation}
Given that the convergence is a.s. we are conditioning on a set of probability one, and can remove the conditioning to get:
\begin{equation}
\begin{split}
E_{\theta_0|\theta_0}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta_0} | \theta_0] = E_{\theta_0}[\frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta_0}]
\end{split}
\end{equation}
Which is, up to a minus sign, the Fisher information.
Edit: I need to show that the dominated convergence theorem holds.
Let $X_n = \frac{1}{n}\sum_{i=1}^n \frac{d^2}{d\theta^2}[\log f(X_i|\theta)]|_{\theta^*}$.
We need to find $M$ such that $||X_n||_{L_1}\leq ||M||_{L_1}$ almost surely in $P_{\theta_0}$ probability. Assuming the supremum is attained, choose $(x',\theta')$ achieving $\sup_{x,\theta^*} ||\frac{d^2}{d\theta^2}[\log f(x|\theta)]|_{\theta^*}||_{L_1}$.
Then, specifying $M = \frac{d^2}{d\theta^2}[\log f(x'|\theta)]|_{\theta'}$, we have:
\begin{equation}
||X_n||_{L_1} \leq \frac{1}{n}\sum_{i=1}^n ||M||_{L_1} = \frac{n}{n}||M||_{L_1} = ||M||_{L_1}
\end{equation}
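To make the convergence concrete, here is a simulation sketch (my own, for the Bernoulli($\theta_0$) model, where $\frac{d^2}{d\theta^2}\log f(x|\theta)=-\frac{x}{\theta^2}-\frac{1-x}{(1-\theta)^2}$ and the Fisher information is $I(\theta_0)=\frac{1}{\theta_0(1-\theta_0)}$):

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 0.3
fisher = 1.0 / (theta0 * (1.0 - theta0))  # I(theta0) ≈ 4.76

for n in (100, 10_000, 1_000_000):
    x = rng.binomial(1, theta0, size=n)
    theta_star = x.mean()  # the MLE, converging a.s. to theta0
    z_n = np.mean(-x / theta_star**2 - (1 - x) / (1 - theta_star)**2)
    print(n, z_n, -fisher)  # Z_n -> -I(theta0)
```

Here $Z_n$ is the averaged second derivative evaluated at $\theta^*$, exactly as above. |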
Whether two matrices $M_1$ and $M_2$ are the same, given that their entry-wise sums are the same and $M_2=UM_1V$, where $U$, $V$ are both orthogonal | No, we can't:
let $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ and $B=\begin{pmatrix}a&b\\c&d\end{pmatrix}$ with $a,b,c,d\in \mathbb{R}$
$A\cdot A^T = I \Rightarrow A$ is orthogonal
$A\cdot B \cdot A^T =\begin{pmatrix}d&c\\b&a\end{pmatrix}$
Therefore the entry-wise sum is the same, while $A B A^T\neq B$ whenever $a\neq d$ or $b\neq c$.
qed
Edit: yes, the statement with 'can imply' is always true; you can simply take $I$:
$$I\cdot I^T = I \Rightarrow I$$ is orthogonal,
and $A = I\cdot A\cdot I$. |
Prove that the square of an integer $a$ is of the form $a^2=3k$, or $a^2=3k+1$, where $k\in \mathbb{Z} $ | By the division algorithm
\begin{equation}
x=3q+r, \quad r \in \{0, 1, 2\}
\end{equation}
writing
\begin{align}
x^{2} &= 9q^{2}+r^{2} +6qr \\
&= 3(3q^{2}+2qr)+r^{2}
\end{align}
The result follows immediately when $r=0$ or $r=1$. If $r=2$ then clearly $r^{2}=4=1+3$, and writing $\lambda=3q^{2}+2qr$ we thus have
\begin{align}
x^{2} &=3 \lambda + 1 + 3 \\
&= 3(\lambda + 1) +1
\end{align}
(Clearly $\lambda$ is an integer.)
Q.E.D.
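A one-line check (mine) that squares only leave remainders $0$ and $1$ modulo $3$:

```python
print(sorted({x * x % 3 for x in range(100)}))  # [0, 1]
```

Remainder $2$ never occurs, matching the case analysis above. |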
In a Group, is the existence of the left identity equivalent to the existence of the unique two sided identity? | Suppose $L\cdot A=I$, that is, $L$ is a left inverse of $A$, and $A\cdot R =I$, that is, $R$ is a right inverse. Then $$L=L\cdot (A\cdot R)=(L\cdot A)\cdot R=R.$$ A similar argument shows that the identity is also unique. |
9 people standing in a circle. Groups of 3 chosen randomly*. Optimal position so that each is equidistant from other 2 in group, w/o communication. | This is not always possible. Suppose that the people have picked groups such that we can number the people $1,\ldots,k$ in such a way that for $1 < i < k$, the group of $i$ contains ${i-1}$ and ${i+1}$; such that the group of $1$ consists of $k$ and $2$, and the group of $k$ consists of $k-1$ and $2$. That is, they almost form a kind of circle, except $k$ has chosen $2$ instead of $1$.
Now suppose towards a contradiction that an optimal solution exists: then the distance of $k$ to $1$ should equal the distance $1$ to $2$, which should equal the distance $2$ to $3$, and so on, which should equal the distance $k-1$ to $k$, which should equal the distance $k$ to $2$. But then $k$ is equidistant from $1,2,k-1$, which is impossible. |
Does $C^*(G) \cong C^*(H)$ imply that $\mathbb{C}G \cong \mathbb{C}H$? | Suppose $G$ and $H$ are abelian. Then their group C*-algebras are isomorphic iff their Pontryagin duals are homeomorphic. So take, for example, $G$ to be the Prufer 2-group and $H$ to be the Prufer 3-group. Their Pontryagin duals are the 2-adic and 3-adic integers, which are homeomorphic but not isomorphic even as abstract groups. I'm not sure off the top of my head if the group algebras are isomorphic though. But I suspect they can be distinguished by their self-adjoint subalgebras. |
Prove $\Delta(|x|^m)= m(m+n-2)|x|^{m-2}$ pointwise in $\mathbb{R}^n\backslash\{0\}$ for all $ m \in \mathbb{R}$ | Observe when $x\neq 0$ we have
\begin{align}
\Delta(|x|^m) =&\ \nabla\cdot\nabla |x|^m = \nabla\cdot\left(m|x|^{m-2}x\right)\\
=&\ m(\nabla |x|^{m-2})\cdot x+m|x|^{m-2}\nabla\cdot x \\
=&\ m(m-2) |x|^{m-4}x\cdot x + mn |x|^{m-2}\\
=&\ (m(m-2)+mn)|x|^{m-2}\\
=&\ m(m+n-2)|x|^{m-2}.
\end{align}
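The identity can be confirmed symbolically in a fixed dimension, say $n=3$ (a SymPy sketch of mine; then $m(m+n-2)=m(m+1)$):

```python
import sympy as sp

x, y, z, m = sp.symbols('x y z m', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

lap = sum(sp.diff(r**m, v, 2) for v in (x, y, z))
print(sp.simplify(lap - m * (m + 1) * r**(m - 2)))  # 0
```

The same check works verbatim in any other fixed dimension $n$. |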
Example when the tail $\sigma$-algebra is not generated by the even and odd tail $\sigma$-algebras | My suggestion for a solution, based on @saz's comment:
We look at $\Omega=\{0,1\}^\mathbb N$ as the product of $\{0,1\}$ with the discrete topology $\mathbb N$ times.
For each $n$, we define $\pi_n:\Omega\rightarrow\{0,1\}$ the projection on the $n$-th coordinate, and $\mathcal F_n=\sigma(\pi_n)$, that is, $\mathcal F_n=\{\Omega,\varnothing,\pi_n^{-1}(0),\pi_n^{-1}(1)\}$.
We then define $T_n$, $T$, $T^0_n$, $T^1_n$, $T^1$ and $T^0$ as in the question.
Let $A$ denote the set of sequences $(x_n)$ such that $x_{2n}=x_{2n+1}$ infinitely often.
Then $A$ is a tail event, since omitting a finite number of indices from such a sequence does not change whether $x_{2n}=x_{2n+1}$ occurs infinitely often. Thus, $T$ is not a trivial $\sigma$-algebra.
Claim: $T^0$ is a trivial $\sigma$-algebra.
Proof: for every $k>m$, the $\sigma$-algebras $\mathcal F_{2k}$ and $\mathcal F_{2m}$ are generated by different coordinates and share only the trivial events, i.e. $\mathcal F_{2k}\cap \mathcal F_{2m}=\{\Omega,\varnothing\}$; hence for every $n\geq 2$, $\bigcap\limits_{1\leq k\leq n}\mathcal F_{2k}=\{\Omega,\varnothing\}$, and the intersection of sigma-algebras is a sigma-algebra.
Similarly, $T^1$ is a trivial sigma-algebra and so is $\sigma(T^0,T^1)$.
Reminder, I have already shown that $\sigma(T^0,T^1)\subset T$.
Is this proof good enough to show that the inclusion $\sigma(T^0,T^1)\subset T$ can be strict? |
Are representables on the étale site on topological space sheaves? | Yes, this is true; the point is that étale maps locally have continuous inverses. An étale cover of a topological space $X$ is a space $Y$ together with a map $p:Y\to X$ which is surjective and a local homeomorphism. To say representables are sheaves is to say that if $Z$ is a topological space and $f:Y\to Z$ is a continuous map such that $p(x)=p(y)$ implies $f(x)=f(y)$, then there is a unique continuous map $g:X\to Z$ such that $gp=f$. Clearly there is a unique such map of sets $g$, so it suffices to check that this $g$ is continuous.
Fix a point $x\in X$. Since $p$ is surjective, there is some $y\in Y$ such that $p(y)=x$. Since $p$ is a local homeomorphism, there is an open neighborhood $U\subseteq Y$ of $y$ such that $p(U)$ is open in $X$ and $p$ restricts to a homeomorphism $p_U:U\to p(U)$. For all $z\in p(U)$, we then have $g(z)=g(p(p_U^{-1}(z)))=f(p_U^{-1}(z))$. Thus $g|_{p(U)}=f\circ p_U^{-1}$ and is hence continuous. Thus $g$ is continuous in a neighborhood of any point of $X$, and hence continuous on $X$. |
How can we prove a graph is planar iff its subgraph is planar? | but how can we prove backward?
We cannot: every graph contains a planar subgraph (for instance, a single vertex), so if planarity of a subgraph implied planarity of the whole graph, then every graph would be planar, and that's obviously not the case. |