Laurent Series expansion without geometric series | Let $\alpha=e^{(2n+1)\pi i/6}$ be one of the roots of $z^6+1$ and $\alpha w=z-\alpha$.
$$
\begin{align}
\frac1{z^6+1}
&=\frac1{1-(1+w)^6}\tag{1}\\
&=-\frac1w\frac1{6+15w+20w^2+15w^3+6w^4+w^5}\tag{2}\\
&=\sum_{k=-1}^\infty b_kw^k\tag{3}\\
&=-\frac1{6w}+\frac5{12}-\frac{35}{72}w+\frac{35}{144}w^2+\frac{119}{864}w^3+\dots\tag{4}
\end{align}
$$
Explanation:
$(1)$: $z^6+1=1+\alpha^6(1+w)^6=1-(1+w)^6$
$(2)$: Binomial Theorem
$(3)$: label the powers of $w$ in the expansion of $(2)$
$(4)$: multiply both sides of $(2)$ and $(3)$ by $w\!\left(6+15w+20w^2+15w^3+6w^4+w^5\right)$:
$\phantom{(3)\,}$ $\color{#C00000}{-1}=\left(6+15w+20w^2+15w^3+6w^4+w^5\right)\sum\limits_{k=-1}^\infty b_kw^{k+1}$
$\phantom{(3)\,}$ $\phantom{-1}=\color{#C00000}{6b_{-1}}+\color{#00A000}{(15b_{-1}+6b_0)}w+\color{#00A000}{(20b_{-1}+15b_0+6b_1)}w^2$
$\phantom{(3)\,}$ $\phantom{-1}+\color{#00A000}{(15b_{-1}+20b_0+15b_1+6b_2)}w^3+\color{#00A000}{(6b_{-1}+15b_0+20b_1+15b_2+6b_3)}w^4$
$\phantom{(3)\,}$ $\phantom{-1}+\sum\limits_{k=4}^\infty\color{#0000F0}{(b_{k-5}+6b_{k-4}+15b_{k-3}+20b_{k-2}+15b_{k-1}+6b_k)}w^{k+1}$
$\phantom{(3)\,}$ The red term is $-1$ and gives $b_{-1}=-\frac16$
$\phantom{(3)\,}$ The green terms are $0$ and give the other coefficients in $(4)$.
$\phantom{(3)\,}$ The blue term in the sum is $0$ and gives the recursion in $(5)$.
where, for $k\ge4$,
$$
b_k=-\frac{15b_{k-1}+20b_{k-2}+15b_{k-3}+6b_{k-4}+b_{k-5}}6\tag{5}
$$
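To see the recursion in action, here is a minimal sketch in Python (my addition, using exact rational arithmetic) that reproduces the coefficients in $(4)$ from the red and green terms and then iterates $(5)$:

```python
from fractions import Fraction

b = [Fraction(-1, 6)]                     # b_{-1}, from the red term
green = [[15], [20, 15], [15, 20, 15], [6, 15, 20, 15]]
for row in green:                         # b_0 .. b_3, from the green terms
    b.append(-sum(Fraction(c) * bk for c, bk in zip(row, b)) / 6)
for k in range(4, 10):                    # b_4, b_5, ... from recursion (5)
    b.append(-(15*b[-1] + 20*b[-2] + 15*b[-3] + 6*b[-4] + b[-5]) / 6)

print(b[:5])  # -1/6, 5/12, -35/72, 35/144, 119/864, matching (4)
```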
Then substitute $w=\frac{z-\alpha}\alpha$ into $(3)$ to get
$$
\frac1{z^6+1}=\sum_{k=-1}^\infty\frac{b_k}{\alpha^k}(z-\alpha)^k\tag{6}
$$ |
Show that this estimator is not unbiased poisson($\theta$) | Assuming the $X_i$ are independent,
we have
\begin{align}
E\left[ \bar{X}^2 \right] &= \frac1{n^2}E\left[ \left(\sum_{i=1}^nX_i\right)^2 \right]\\
&= \frac1{n^2}E\left[ \left(\sum_{i=1}^nX_i^2\right)+ 2\sum_{i<j} X_i X_j\right]\\
&= \frac1{n^2}\left[ \left(\sum_{i=1}^nE[X_i^2]\right)+ 2\sum_{i<j} E[X_i]E[ X_j]\right]\\
&= \frac1{n^2} \left[ \left(\sum_{i=1}^n(Var[X_i]+E[X_i]^2)\right)+ n(n-1) \theta^2\right]\\
&= \frac1{n^2} \left[ \left(\sum_{i=1}^n(\theta+\theta^2)\right)+ n(n-1) \theta^2\right]\\
&= \frac1{n^2}(n \theta + n^2 \theta^2)\\
&= \theta^2 + \frac{\theta}{n}
\end{align}
Hence it is biased.
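If you want to see the bias numerically, here is a quick Monte Carlo sketch (with hypothetical values $\theta=2$, $n=5$):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000
xbar2 = rng.poisson(theta, size=(reps, n)).mean(axis=1) ** 2

print(xbar2.mean())          # ~ 4.4
print(theta**2 + theta / n)  # 4.4 exactly, i.e. theta^2 + theta/n
```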
To make it unbiased,
note that we have
$$E\left[ \bar{X}^2- \frac{\theta}{n}\right] = \theta^2$$
If $U$ is an unbiased estimator for $\theta$, then
$$E\left[ \bar{X}^2- \frac{U}{n}\right] = \theta^2$$
I will leave the task of finding an unbiased estimator for $\theta$ as an exercise. |
Solve the equation for $\theta$ | Hint: $$R = \frac{v_0^2}{g}(b\sin^2{\theta} + \sin{2\theta})\\b\sin^2{\theta} + \sin{2\theta}=\dfrac{gR}{v_0^2}\\
b\sin^2{\theta} + 2\sin{\theta}\cos{\theta}=\dfrac{gR}{v_0^2}$$ Now divide by $\cos^2{\theta}$; then you have a quadratic equation in $\tan\theta$:
$$b\tan^2{\theta} + 2\tan{\theta}=\dfrac{gR}{v_0^2}\cdot\dfrac{1}{\cos^2 {\theta}}\\
b\tan^2{\theta} + 2\tan{\theta}=\dfrac{gR}{v_0^2}\left({1+\tan^2 {\theta}}\right)$$ Finally you will have
$$\left(b-\dfrac{gR}{v_0^2}\right)\tan^2\theta+2\tan\theta-\dfrac{gR}{v_0^2}=0\\
\tan\theta=\dfrac{-2\pm\sqrt{4+4\left(b-\dfrac{gR}{v_0^2}\right)\dfrac{gR}{v_0^2}}}{2\left(b-\dfrac{gR}{v_0^2}\right)}$$
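As a numeric sanity check, here is a minimal sketch (with hypothetical values $b=0.5$, $g=9.8$, $R=50$, $v_0=30$) that plugs both roots back into the original equation:

```python
import math

b, g, R, v0 = 0.5, 9.8, 50.0, 30.0          # hypothetical values
c = g * R / v0**2
disc = math.sqrt(4 + 4 * (b - c) * c)
for t in [(-2 + disc) / (2 * (b - c)), (-2 - disc) / (2 * (b - c))]:
    theta = math.atan(t)
    residual = b * math.sin(theta)**2 + math.sin(2 * theta) - c
    print(theta, residual)                  # residual ~ 0 for both roots
```

Both roots satisfy the original equation to machine precision. |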
Find the probability of $P(B|-A)$ given $P(B)$, $P(A|B)$, $P(A|-B)$? | Assuming that $-A$ denotes the complement of $A$ we have by Bayes rule that $$P(B|A^c)=\frac{P(A^c|B)P(B)}{P(A^c)}=\frac{\left(1-P(A|B)\right)P(B)}{1-P(A)}$$ which should be given according to the exercise (except for $P(A)$ which you have found as you say). |
Algebra struggles: 2 problems in 1 | I'll start with 1) for now since I haven't quite figured 2) out. If nobody else is interested, I'll come back and edit this in later.
For 1), it helps to unpack the definition of $S^X$. I assume that $S^X$ is the set of functions from a set $X$ to $S$; that is,
$$
S^X := \{f: X \to S \}
$$
and that this naturally inherits its ring structure from that of $S$. Specifically, given functions $f, g, h \in S^X$, it only makes sense to define
$$
(f+g)(x) := f(x)+g(x),
$$ and
$$
(f\cdot g)(x) := f(x)\cdot g(x).
$$
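A tiny concrete sketch of these pointwise operations (my hypothetical example, with $S=\mathbb{Z}$, $X=\{0,1,2\}$, and functions encoded as dicts):

```python
X = [0, 1, 2]
f = {x: x + 1 for x in X}        # f(x) = x + 1 in S = Z
g = {x: 2 * x for x in X}        # g(x) = 2x

add = lambda f, g: {x: f[x] + g[x] for x in X}   # (f+g)(x) := f(x)+g(x)
mul = lambda f, g: {x: f[x] * g[x] for x in X}   # (f.g)(x) := f(x).g(x)

print(add(f, g))   # {0: 1, 1: 4, 2: 7}
print(mul(f, g))   # {0: 0, 1: 4, 2: 12}
```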
Now, what would left-distributivity even mean in this context? Basically, we have to verify
$$
\big(h\cdot (f+g)\big)(x) = (h\cdot f)(x) + (h \cdot g)(x) = \big( (h \cdot f)+(h \cdot g) \big)(x).
$$
I think if you just figure out how to interpret the left hand side, you'll see it's really just symbol pushing, and that distributivity of $(S^X, +, \cdot)$ inherits distributivity from $S$ in a very straightforward way. |
Polynomial expansion (plus/minus trick in statistics) | The cross term is as you would expect from $(x+y)^2=x^2+y^2+2xy$:
\begin{align}
(a-b)^2 &= ((a-c)+(c-b))^2 \\
&=(a-c)^2 + (c-b)^2 + 2(a-c)(c-b)
\end{align}
So for any constant $\alpha \in \mathbb{R}$ you get
\begin{align}
\left(X_i-\overline{X}\right)^2 &= \left(X_i-\alpha + \alpha - \overline{X}\right)^2 \\
&= (X_i-\alpha)^2 + \left(\alpha - \overline{X}\right)^2 + 2\left(X_i-\alpha\right)\left(\alpha-\overline{X}\right)
\end{align}
But since $(\alpha - \overline{X})$ does not depend on $i$, when you sum the last term over $i \in \{1, ..., n\}$ and then divide by $n$, you get
\begin{align}
\frac{1}{n}\sum_{i=1}^n 2(X_i-\alpha)(\alpha-\overline{X})&= 2(\alpha-\overline{X})\frac{1}{n}\sum_{i=1}^n(X_i-\alpha) \\
&= 2\left(\alpha-\overline{X}\right)\left(\overline{X}-\alpha\right)
\end{align}
Thus, indeed we get:
$$ \boxed{\frac{1}{n}\sum_{i=1}^n \left(X_i-\overline{X}\right)^2 = \frac{1}{n}\sum_{i=1}^n(X_i-\alpha)^2 - \left(\overline{X}-\alpha\right)^2 \quad \forall \alpha \in \mathbb{R}}$$
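A quick numerical check of the boxed identity (a sketch with hypothetical data and $\alpha=2$):

```python
import numpy as np

x = np.array([1.0, 4.0, 2.0, 7.0, 5.0])   # hypothetical sample
alpha = 2.0
lhs = np.mean((x - x.mean())**2)
rhs = np.mean((x - alpha)**2) - (x.mean() - alpha)**2
print(lhs, rhs)                            # both 4.56
```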
Similarly it can be shown
\begin{align}
&\frac{1}{n}\sum_{i=1}^n(X_i-\overline{X})(Y_i-\overline{Y}) \\
&= \frac{1}{n}\sum_{i=1}^n(X_i-\alpha)(Y_i-\beta) - (\overline{X}-\alpha)(\overline{Y}-\beta) \quad \forall \alpha, \beta \in \mathbb{R}
\end{align}
This is similar to the following identities:
\begin{align}
Var(X) &= Var(X-\alpha) \quad \forall \alpha \in \mathbb{R}\\
Cov(X,Y) &= Cov(X-\alpha, Y-\beta) \quad \forall \alpha, \beta \in \mathbb{R}
\end{align}
where we recall
\begin{align}
Var(X) &= E[(X-E[X])^2] = E[X^2] - E[X]^2\\
Cov(X,Y)&= E[(X-E[X])(Y-E[Y])] = E[XY]-E[X]E[Y]
\end{align}
For example, for all $\alpha \in \mathbb{R}$ we get
\begin{align}
E[(X-E[X])^2] &= Var(X) \\
&= Var(X-\alpha)\\
&= E[(X-\alpha)^2] - (E[X]-\alpha)^2
\end{align} |
Solution Operator for inhomogenous Dirichlet Problem | Okay, here's an attempt at an answer.
First I don't think the $L^2(\partial \Omega)$ is the appropriate norm to use here, since the space $H^{1/2}(\partial \Omega)=: \text{Range}(Tr)$ is dense in $L^2(\partial \Omega)$, so continuity in this latter norm would imply a bounded extension to $L^2$, which is not the case. Instead you want to use the norm of $H^{1/2}(\partial \Omega)$ defined as
$$
\| f\|_{H^{1/2}}:= \inf\{ \| F \|_{H^1(\Omega)} : f=Tr(F) \}.
$$
With this you can write a solution of
$$
\begin{cases}
-\Delta u = 0 \text{ in } \Omega \\
u = \varphi \text{ on } \partial\Omega.
\end{cases}
$$
as $u=w+\Phi$ where $\Phi$ is an $H^1$ extension of $\varphi$ and $w$ solves
$$
\begin{cases}
-\Delta w = \Delta \Phi \text{ in } \Omega \\
w = 0 \text{ on } \partial\Omega.
\end{cases}
$$
With this we can estimate
$$
\| \nabla u \|_{L^2(\Omega)} \leq \| \nabla w\|_{L^2(\Omega)} + \| \nabla \Phi\|_{L^2(\Omega)} \leq 2 \| \nabla \Phi \|_{L^2(\Omega)}.
$$
Taking now the infimum over $\Phi$ we get the desired result. |
What are the odds of guessing a 4 digit number if told how many you have correct? | For any 4 digit sequence, assuming that 0000 is a valid sequence for the number guessing game that you have devised, it will take a maximum of 34 appropriately chosen guesses to correctly determine the number that you picked (for variants of the game, adjust the count accordingly).
Methodology:
Guessing the unique number sequences; 0000, 1111, 2222, ... 9999 (10 initial guesses)
This gives you the count of each digit occurring in the sequence; with this standardized guessing, the maximum number of initial guesses is needed precisely when the sequence contains a $9$, since its count is only confirmed by the final guess.
Then, when you have the digits of the sequence ($4$ total), they can be arranged in at most $4!$ different orders; 0123, 0132, 0213, 0231, 0321, 0312, ... 3210
10 + 4! = 10 + 4*3*2*1 = 34.
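Here is a minimal simulation sketch of this two-phase strategy (phase 1: the ten uniform guesses; phase 2: trying arrangements), confirming the worst case of 34:

```python
from itertools import permutations

def guesses_needed(secret):                 # secret: a 4-character digit string
    count, digits = 0, []
    for d in "0123456789":                  # phase 1: guess dddd, learn counts
        count += 1
        digits += [d] * secret.count(d)
        if d * 4 == secret:
            return count
    for p in set(permutations(digits)):     # phase 2: try each arrangement
        count += 1
        if "".join(p) == secret:
            return count

print(max(guesses_needed(f"{n:04d}") for n in range(10000)))  # 34
```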
Now, the probability of guessing any individual number is 1:10, and the odds of guessing the right number is still 1:10,000 simply based on probability. However, "guessing" in the right manner narrows down the individual probability to 1:34. |
Finding Range of a function - Discrete | The range of $f$ is $$\{-1,1\}$$
The range of $g$ is $$\{-1/2, 1/2, 7/2, 17/2, 31/2, ..., (2n^2-1)/2,...\}$$
The range of $h$ is the interval $$[2, \infty)$$ |
Looking for Clarification on a proof of Density of Q in R | You can obtain $m_1$ and $m_2$ by applying the Archimedean property as cited with $x=1 > 0$ and $y= nx$ or $y = -nx$.
Only $x$ in the Archimedean property needs to be positive. You can make a little sketch to see it. |
Harmonic Mean questions | The harmonic mean of $x_1,\ldots,x_n$ is defined as
$$
\frac{n}{x_1^{-1}+x_2^{-1}+\ldots+x_n^{-1}}
$$
Usually, you would want all $x_i$ to be positive, so that the denominator is guaranteed not to be zero. In case of $n=2$ you get, like you said,
$$
\frac{2}{x_1^{-1}+x_2^{-1}}=\frac{2}{\frac{x_2}{x_1x_2}+\frac{x_1}{x_1x_2}}=\frac{2x_1x_2}{x_1+x_2}.
$$
It is a special case of a power or generalised mean. A power mean of positive numbers $x_1,\ldots,x_n$ with exponent $\mu$, where $\mu$ is a real number, is defined as
$$
\left( \frac{x_1^\mu+x_2^\mu+\ldots+x_n^\mu}{n}\right)^{1/\mu}
$$
For $\mu=1$ you get the arithmetic mean, for $\mu=-1$ you get the harmonic mean. It can be shown that the mean is monotone in $\mu$, and the limit as $\mu\to 0$ is the geometric mean, while the limits at $\mu=+\infty,-\infty$ are, respectively, the maximum and the minimum.
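A short sketch illustrating this monotonicity and the limiting cases (hypothetical sample $1, 4, 9$):

```python
def power_mean(xs, mu):
    return (sum(x**mu for x in xs) / len(xs)) ** (1 / mu)

xs = [1.0, 4.0, 9.0]
for mu in [-50, -1, 0.001, 1, 50]:
    print(mu, power_mean(xs, mu))
# increasing in mu; mu = -1 gives the harmonic mean ~2.204,
# mu near 0 approaches the geometric mean 36**(1/3) ~ 3.302,
# and large |mu| approaches the maximum/minimum
```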
In fact, there are also continuous and weighted analogues of these, and similar facts are true. |
Are these subrings of $\Bbb Q$? | $1$ & $2$ are fine, but $\frac {2}{ 2} = \frac {1} {1}$ has odd numerator, when the fraction is completely reduced - so this isn't a counterexample. But $3-1$ is. |
How does the equation $a^2 \frac{\partial^2 f}{\partial x^2} - \frac{\partial^2 f}{\partial y^2}=0$ change with the change of variables? | Strong hint
How about if you write this:
$$
g(u, v) = f(\frac{u+v}{2}, \frac{u-v}{2a}),
$$
instead?
Now you're not using the name "f" for two different functions. You can then write
$$
g(x+ay, x-ay) = f(x, y),
$$
and discover that (using subscript numerals to indicate derivatives, so that $f_1$ means "derivative of $f$ with respect to its first argument")
\begin{align}
f_1(x, y) &= g_1(x+ ay, x-ay) \cdot 1 + g_2(x+ay, x-ay) \cdot 1\\
f_2(x, y) &= g_1(x+ ay, x-ay) \cdot a + g_2(x+ay, x-ay) \cdot (-a)
\end{align}
and then, differentiating each of these, a formula for $f_{11}$ and $f_{22}$, which you can substitute into your original equation, which told you that
$$
a^2 f_{11}(x, y) -f_{22}(x, y) = 0.
$$
That will give you an expression relating various derivatives of $g$ to zero.
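If you want to automate the symbol pushing, here is a sketch using sympy (my addition, not part of the original hint):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
g = sp.Function('g')
f = g(x + a*y, x - a*y)          # f(x, y) = g(u, v) with u = x+ay, v = x-ay

expr = sp.simplify(a**2 * sp.diff(f, x, 2) - sp.diff(f, y, 2))
print(expr)   # a constant multiple of a single mixed second derivative of g
```

(Up to notation, the printed expression is $4a^2$ times one mixed second derivative of $g$.) |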
how do determine the distribution of outcomes for a given probability? | You want the binomial distribution because each block is independently karma or not-karma (whatever that means); that is, the probability of a success or a failure in each trial is constant and independent of the other trials.
In general, the probability that there are k successes out of n trials in a binomial distribution is ${n\choose k} p^k(1-p)^{n-k}$ (where p is the probability of a success). In your case the probability of 'k' "karma blocks" would be ${600 \choose k} (0.001)^k(0.999)^{600-k}$.
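For instance, a minimal sketch to evaluate a few of these probabilities:

```python
from math import comb

p, n = 0.001, 600
pmf = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)

print([round(pmf(k), 4) for k in range(4)])  # [0.5487, 0.3295, 0.0988, 0.0197]
```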
What's actually happening is you determine there must be exactly $k$ karma-blocks, and that probability is $0.001^k$. The rest of the 600 have to be failures, hence $0.999^{600-k}$ (because if it's not a success, it's a failure, 1-0.001=0.999). And you don't care about order so you multiply by the number of different orders there could be for k karma blocks in 600 blocks, which is $600 \choose k$ |
Cannot simplify this boolean expression | $$x'y'w'+yz+xzw'\overset{Adjacency \times 3}=$$
$$x'y'zw'+x'y'z'w'+yzw+yzw'+xyzw'+xy'zw'\overset{Absorption}=$$
$$x'y'zw'+x'y'z'w'+yzw+yzw'+xy'zw'\overset{Idempotence \times 2}=$$
$$x'y'zw'+x'y'z'w'+yzw+yzw'+yzw'+xy'zw'+x'y'zw'\overset{Adjacency}=$$
$$x'y'zw'+x'y'z'w'+yzw+yzw'+yzw'+y'zw'\overset{Adjacency \times 3}=$$
$$x'y'w'+yz+zw'$$
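A brute-force truth-table check of the simplification (a quick sketch, not part of the derivation):

```python
from itertools import product

f = lambda x, y, z, w: (not x and not y and not w) or (y and z) or (x and z and not w)
g = lambda x, y, z, w: (not x and not y and not w) or (y and z) or (z and not w)

print(all(f(*v) == g(*v) for v in product([False, True], repeat=4)))  # True
```

So the two expressions agree on all $16$ assignments. |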
Requirement for $x$ to be different from $c$ in a theorem regarding limits | The condition is necessary if limit is to mean what we want it to mean.
Let $f(x)=x^2+3$ when $x\ne 0$, and let $f(0)=88$. We want to have $\lim_{x\to 0}f(x)=3$. If we allow sequences $(x_n)$ such that $x_k=0$ for infinitely many $k$ (and $x_k\ne 0$ for infinitely many $k$), then $\lim_{n\to\infty}f(x_n)$ will not exist. |
Non-associative, non-commutative binary operation with a identity element | No, you can't say that the operation has no identity just because it is not associative or commutative. To say that $e$ is an identity for the operation is to say that $e*a=a=a*e$ for all $a$. To prove that there is no identity, you would need to prove that no such $e$ exists; there's no reason that an identity automatically can't exist just because the operation is not commutative or associative.
To figure out whether there is an identity, then, you should look at what the equation $e*a=a=a*e$ says. For your operation, it says that $e$ must satisfy $$e+2a=a=a+2e$$ for all $a\in\mathbb{Z}$. Is there any such integer $e$? |
For all $\omega \neq 0$, $\rho(L_{\omega}) \ge |1 - \omega|$, where $L_{\omega}$ is the SOR matrix | For $\omega \in \Bbb C \setminus \{0\}$ you have :
\begin{align*}
\det\left(L_{\omega}\right) &= \det\left(\left(\frac1{\omega} D - E\right)^{-1} \left(\frac{1-\omega}{\omega} D + F\right) \right) \\
&= \frac{\det\left(\frac{1-\omega}{\omega} D + F\right)}{\det \left(\frac1{\omega} D - E\right)}
\end{align*}
But your decomposition is such that $E$ and $F$ are strictly triangular, so we end up with:
\begin{gather*}
\det \left(\frac1{\omega} D - E\right) = \frac{1}{\omega^{n}}\prod a_{ii}\\
\det \left(\frac{1-\omega}{\omega} D + F\right) = \frac{\left(1 - \omega \right )^{n}}{\omega^{n}}\prod a_{ii}\\
\end{gather*}
That gives us: $\det\left(L_{\omega}\right) = \left(1 - \omega\right)^{n}$
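A quick numerical check of this determinant identity (a sketch with a hypothetical $4\times 4$ matrix and $\omega = 1.3$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, omega = 4, 1.3
A = rng.standard_normal((n, n)) + n * np.eye(n)
D = np.diag(np.diag(A))
E, F = -np.tril(A, -1), -np.triu(A, 1)           # decomposition A = D - E - F

L = np.linalg.solve(D/omega - E, (1 - omega)/omega * D + F)
print(np.linalg.det(L), (1 - omega)**n)          # both ~ 0.0081
print(np.abs(np.linalg.eigvals(L)).max() >= abs(1 - omega))  # True
```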
But you also know by definition of the spectral radius that $\rho(L_{\omega}) = \displaystyle \max_{i} |\lambda_i|$.
Raising to the power $n$, this gives:
\begin{align*}
\rho(L_{\omega})^{n} &\ge \prod_{i} |\lambda_i| \\
&= |\det\left(L_{\omega}\right)| \\
&= |1 - \omega|^{n}
\end{align*}
And from that follows $\rho(L_{\omega}) \ge |1 - \omega|$. |
Proof $2$ Lines Are Parallel | Lines $a$ and $b$ aren't necessarily parallel (unless $c$ and $d$ are supposed to be parallel). Just redraw the picture, keeping the angles fixed, but making it so $c$ and $d$ don't look parallel (or imagine grabbing the point with the angles $5$,$6$,$7$,$8$ and rotating that point and the lines $a$ and $d$ just a little bit). |
$\nabla \frac{1}{|x|}$ is Lipschitz continuous | You can just try to be a bit more explicit and compute the square norm:
$$\left|\dfrac{x}{|x|^3} - \dfrac{y}{|y|^3}\right|^2 = \dfrac{(x^T|y|^3 - y^T|x|^3)(x|y|^3 - y|x|^3)}{|x|^6|y|^6} = \dfrac{|x|^4+|y|^4 - 2|x||y|x^Ty}{|x|^4|y|^4}.$$
Now use Cauchy-Schwarz as:
$$x^Ty\leq |x||y|$$
then you are basically done. |
Why the negative sign on that line? | He doesn't do exactly that integral. Note the sign in front of $v(t)$ in the denominator. |
Trace map of a finite étale morphism | Let $\rho : A \to B$ be a finite flat morphism of rings. The corresponding morphism of schemes $f : X \to Y$ is finite flat and therefore proper. By Grothendieck duality you have an adjunction $(f_* , f^!)$, where $f_* : \textrm{Qcoh}(X) \to \textrm{Qcoh}(Y)$ is the direct image and $f^!$ its right adjoint. This adjoint can be made precise as follows:
$$f^!(\mathcal{G}) = \mathcal{H}om_{\mathcal{O}_Y}(f_*\mathcal{O}_X,\mathcal{G})^\sim$$ for $\mathcal{G} \in \textrm{Qcoh}(Y)$, where $(\cdot)^\sim$ denotes the equivalence between $\mathcal{O}_X$-modules and $f_*\mathcal{O}_X$-modules over $Y$, as $f$ is an affine morphism. Now the co-unit $f_*f^! \mathcal{G} \longrightarrow \mathcal{G}$ of the adjunction applied to $\mathcal{O}_Y$ yields the map $$ \mathrm{Tr}_{\rho} \colon B \longrightarrow A$$ defined as follows: each $b\in B$ acts on $B$ (viewed as an $A$-module through $\rho$) by multiplication. Since $B$ is finite flat over $A$ and $A$ is Noetherian, the $A$-module $B$ is a locally free $A$-module and multiplication by $b$ is therefore locally (on an open subset $D(a) \simeq \mathrm{Spec}(A_a)\subseteq \mathrm{Spec}(A)$, for an $a\in A$ and under some isomorphism $B_a \simeq A_a^n$) given by multiplication by a matrix. We define $\textrm{Tr}_{\rho}(b)$ to be the trace of this matrix. As the trace of a matrix is independent of the choice of basis, this homomorphism of $A$-modules is well defined. I think though that the name may even come from the "simplest" case: the trace of an element in a finite extension of fields.
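To make that last remark concrete, here is a tiny sketch (my hypothetical example $A=\mathbb{Q}$, $B=\mathbb{Q}(\sqrt5)$ with basis $\{1,\sqrt5\}$) computing the trace as the trace of a multiplication matrix:

```python
import sympy as sp

p, q = sp.symbols('p q')
# multiplication by b = p + q*sqrt(5) on the basis {1, sqrt(5)}:
#   b*1       = p*1  + q*sqrt(5)
#   b*sqrt(5) = 5q*1 + p*sqrt(5)
M = sp.Matrix([[p, 5*q],
               [q, p]])
print(M.trace())   # 2*p, the field trace of p + q*sqrt(5)
```

This recovers the classical field trace. |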
Properties that better "capture" compactness. | The first elementary but crucial property, I would think of when talking about consequences of compactness, would be the following:
Proposition $1$. Let $X$ be a compact topological space and let $(K_i)_{i\in I}$ be a family of closed sets of $X$, then:
$$\bigcap_{i\in I}K_i\neq\varnothing\iff\left(\forall J\subset I,\#J<+\infty\Rightarrow\bigcap_{j\in J}K_j\neq\varnothing\right).$$
Proof. The direct implication is obvious.
For the reverse one let us proceed by the contrapositive and assume that one has:
$$\bigcap_{i\in I}K_i=\varnothing.$$
Then, taking the complementary, one gets:
$$\bigcup_{i\in I}X\setminus K_i=X.$$
Since $X$ is compact, there exists $J\subset I$ finite such that:
$$\bigcup_{j\in J}X\setminus K_j=X.$$
Whence the result taking again the complementary. $\Box$
Remark 1. This proposition is frequently applied when $(K_i)_{i\in I}$ is a decreasing sequence of nonempty compact sets of a topological space, not necessarily compact itself.
Here is an explicit example where proposition $1$ is used. It is also a good example of what one can expect from compactness assumptions.
Theorem $1$. Let $E$ be a real vector space endowed with a dot product, let $K$ be a nonempty compact convex set of $E$ and let $G$ be a compact subgroup of $\textrm{GL}(E)$. Assume that $G$ stabilises $K$, then there exists $x\in K$ such that for all $g\in G$, $g(x)=x$.
The key lemma of theorem $1$ is the following:
Lemma $1$. Let $T$ be a continuous linear map of $E$ and assume that $T(K)\subset K$, then there exists $x\in K$ such that $T(x)=x$.
Proof. Let $x_0\in K$ and let define the following sequence:
$$x_{n+1}=\frac{1}{n+1}\sum_{k=0}^nT^k(x_0)\in K,$$
using compactness of $K$, one can assume that $(x_n)_n$ converges toward $x \in K$. To conclude, notice that:
$$\|T(x_n)-x_n\|=\frac{\|T^{n+1}(x_0)-x_0\|}{n+1}\leqslant\frac{\textrm{diam}(K)}{n+1}.$$
Therefore, taking $n\to+\infty$ shows that $x$ is a fixed point of $T$. Whence the result. $\Box$
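A small numerical illustration of lemma $1$ (a hypothetical sketch: $K$ the closed unit disk, $T$ a rotation, whose Cesàro averages converge to the fixed point $0$):

```python
import numpy as np

theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # maps the unit disk to itself
x0 = np.array([0.9, 0.1])

avg = np.mean([np.linalg.matrix_power(T, k) @ x0 for k in range(2000)], axis=0)
print(avg)   # ~ [0, 0], a fixed point of T in K
```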
Then, using proposition $1$ applied to:
$$K_u:=\{x\in K\textrm{ s.t. }u(x)=x\},$$
one sees that theorem $1$ is true if and only if lemma $1$ holds for a finite number of continuous linear maps. The rough sketch of a proof of theorem $1$ does not capture why the compactness of $G$ is important, but it is.
This result is called Kakutani's fixed point theorem and has deep applications. Here is a modest sample:
Proposition $2.$ Let $G$ be a compact subgroup of $\textrm{GL}_n(\mathbb{R})$, then $G$ is conjugated to a subgroup of $O(n)$.
Remark $2$. The proof uses the fact that the convex hull of a compact set is again compact, which follows from Carathéodory's theorem.
Perhaps, more substantial is the following:
Theorem $2$. Let $G$ be a compact topological group, then there exists a unique Borel probability measure $\mu$ on $G$ such that for all $g\in G$ and all Borel measurable subset $A$ of $G$, one has:
$$\mu(gA)=\mu(A).$$
In particular, for every measurable map $f\colon G\rightarrow\mathbb{R}$ and all $g\in G$, one has:
$$\int_Gf(gx)\,\mathrm{d}\mu(x)=\int_Gf(x)\,\mathrm{d}\mu(x).$$
The existence of such measures is of importance in Riemannian geometry.
Proposition $3.$ Let $G$ be a compact Lie group, then $G$ can be endowed with a bi-invariant Riemannian metric i.e. such that left and right translations are isometries.
Proof. Let $\langle\cdot,\cdot\rangle$ be a dot product on $\mathfrak{g}$, let $\mu$ be the measure of theorem $2$ and define:
$$\forall x,y\in\mathfrak{g},\langle x,y\rangle_G:=\int_{g\in G}\langle\textrm{Ad}(g)x,\textrm{Ad}(g)y\rangle\,\mathrm{d}\mu(g),$$
then $\langle\cdot,\cdot\rangle_G$ is a dot product on $\mathfrak{g}$ which is $\textrm{Ad}\colon G\rightarrow\textrm{GL}(\mathfrak{g})$ invariant and $h\in\Gamma(TG\otimes TG)$ defined by:
$$h_g:={L_g}^*\langle\cdot,\cdot\rangle_G$$
is a bi-invariant metric on $G$. Whence the result. $\Box$
Remark 3. In fact, the set of bi-invariant metrics on $G$ is in bijective correspondence with the set of $\textrm{Ad}$-invariant dot products on $\mathfrak{g}$.
From proposition $3$, it is straightforward to deduce the following:
Theorem $3$. The exponential map of a compact Lie group is surjective.
Proof. Let $G$ be a compact Lie group. By proposition $3$, let $h$ be a bi-invariant metric on $G$ and let $\nabla$ be the associated Levi-Civita connection; then, using Koszul's formula, one has:
$$\forall X,Y\in{}^G\Gamma(TG),\nabla_XY=\frac{1}{2}[X,Y].$$
From there, it is easy to see that the geodesics starting from the identity element of $G$ are the one parameter subgroups of $G$. Therefore, the Lie theoretical exponential map of $G$ coincides with the Riemannian exponential of $(G,h)$. Whence the result, by the Hopf–Rinow theorem. $\Box$
Remark 4. In the proof, I used that an $\textrm{Ad}$-invariant bilinear form $B$ on $\mathfrak{g}$ is $\textrm{ad}$-alternate, namely:
$$B([X,Y],Z)=B(X,[Y,Z]).$$
Example $1$. The matrix exponential map from the set of skew-symmetric matrices to $\textrm{SO}(n)$ is surjective, since for matrix Lie groups, the exponential map is the usual one. |
Bellman Criteria (Graphtheory) | The distance between two vertices is by definition the length of a shortest path between them. So if $d(s,v)+d(v,t)=d(s,t)$ there is an $st$-path containing $v$, which is a shortest path from $s$ to $t$.
Conversely assume that $d(s,v)+d(v,t)\neq d(s,t)$ and note that by definition of the latter distance we indeed have $d(s,v)+d(v,t)>d(s,t)$. Moreover any $st$-path $P$ containing $v$ satisfies $\operatorname{len} P \geq d(s,v) + d(v,t)$, since it decomposes into an $sv$-path $P_s$ and $vt$-path $P_t$, which by definition satisfy $\operatorname{len} P_s \geq d(s,v)$ and $\operatorname{len} P_t \geq d(v,t)$. But this way $v$ cannot possibly lie on a shortest $st$-path. |
Definition of ordinal | A set is an ordinal if it's transitive and well-ordered with respect to $\in$.
Well-ordered means that every non-empty subset has a least element. Transitive means that every element is also a subset.
What this means for your example is that $\alpha = \{ x, \{ x \}\}$ is transitive if $x \subset \alpha$. The only case where $\alpha$ is transitive is if $x = \emptyset$, in all other cases $\alpha$ is not actually an ordinal because then you don't have $x \subset \alpha$.
Edit
The ordinals are an extension of the natural numbers, see here, and the natural numbers start at $0$, which is the empty set $\emptyset$. So every nonzero ordinal has to contain $0$, that is, the empty set.
Hope this helps. |
Ordered subsequences in die rolls with Markov chains | I get exactly the same answers you do.
I don't find it so surprising that there isn't all that much advantage in being close to the goal, since the most likely thing to happen, by far, is that we will go back to the beginning.
If you think about setting up a system of linear equations to solve for the expectations you get for example $$e_0=1+\frac56e_0+\frac16e_1$$ since we always have to roll once, and with probability $\frac56$ we make no progress and with probability $\frac16$ we got to state $1$. This gives $e_0=6+e_1$, so that there's hardly any gain from getting started.
Of course, this is the same system of equations you wrote in matrix form, but I somehow always find it more concrete when I write it out like they showed me in high school. |
Exterior derivative of a $\mathfrak{g}$-valued form | For a general vector bundle $E\to M$, in order to define exterior derivation on $E$-valued differential forms one needs a connection on $E$. The vector bundle at hand is trivial, and so, it has the trivial connection. Consequently, there is a natural exterior derivation of $\mathfrak{g}$-valued forms.
The following is one possible way to understand the picture. This way is not canonical, but very clear. Let $\xi_1,\ldots,\xi_n$ be a basis of $\mathfrak{g}$. Then a $\mathfrak{g}$-valued form $\omega$ can be written as $$\omega=\sum\xi_i\otimes\omega_i,$$ where every $\omega_i$ is an ordinary differential form. The exterior derivative is then given by $$d\omega=\sum\xi_i\otimes d\omega_i.$$
Edit: A word of clarification, thanks to the comment by Pedro. The above computation yields a well-defined result for $d\omega$, which is independent of the chosen basis. However, the basis is not canonical, and neither are the $\omega_i$'s. Hence, this approach, as easy to understand as it is, may not be the right way to think of the matter. |
Prove interval is open by considering function | For a continuous function, the preimage of an open set is open.
Here $(-\infty, r)$ is an open set in $\mathbb{R}$. |
How do I convert OS coordinates (X and Y) to longitude and latitude coordinates? | It is just a transverse Mercator projection. The library GeographicLib (written
by me) has a class OSGB which converts eastings + northings to latitude + longitude (and also deals with OS grid references). If you want to know how to calculate the transverse Mercator projection accurately, see Transverse Mercator with an accuracy of a few nanometers (a preprint is available here). |
Meaning/Justification for Describing Functions as 'Orthogonal' | Indeed, there is deep theory involved behind all of this. When you learn more about inner product/Hilbert spaces, you will see spaces that have a certain operation on them called an inner product whose axioms/properties you can find in the link. These spaces are interesting because inner products give us a sense of "angle" or "direction" (note how its properties generalize the dot product of Euclidean space). Further, we can define a norm on that space with this inner product: $ \|x\| = \sqrt{\langle x,x \rangle}$ and then this norm gives us an idea of the "length" of a vector in this space.
Then we define a metric on this space with $ d(x,y) = \|x-y\|$ and this metric gives us an idea of distance between two points (which leads us to think about convergence of sequences and other analysis-type questions). This metric also then allows us to generalize open sets which endows a sense of whether some points are "close together" or "split apart", and these open sets then induce a topology, which allows us to study topological properties such as continuity and connectedness.
Clearly that is just an overload of new information, so you may wonder - what does it all mean? Well essentially each of these stages are abstractions that are structurally similar to the well known Euclidean space. Hilbert spaces are the most similar to $\mathbb{R}^n$ - in them we have all the ideas above, of angle, direction, magnitude of a vector, distance between vectors, neighbourhoods of vectors, and convergence of every Cauchy sequence. This is why Hilbert spaces are so interesting - we have plenty of intuition about how they should behave, they have enough structure on them to give us plenty of useful results, but they are general enough to be substantially different from $\mathbb{R}^n$.
One example of a Hilbert space is the Lebesgue square-integrable functions with the inner-product $\langle f,g \rangle = \int_X f\cdot \bar g\; \mathrm d\mu$, and you are considering one of them. The integral in your post corresponds to the inner product of this space (remember, this is the generalization of a dot product) and just as when the dot product of Euclidean vectors was $0$ we declared them orthogonal, we do the same here. As anon commented, when two functions are orthogonal they are linearly independent, which is a familiar property. If you have enough linearly independent vectors chosen wisely to form a basis, we should be able to form any vector in this space as a linear combination of the basis vectors - that is a Fourier series. |
Finding probability that a car experiences a failure | I think you just misread the question: it means the first failure occurring in the last quarter of some calendar year; it need not be in the first year.
\begin{align}
\sum_{n=0}^\infty P\left(n+1 > T > n + \frac34\right) &=\sum_{n=0}^\infty \left(F(n+1)-F\left(n+\frac34\right)\right)\\
&= \sum_{n=0}^\infty \left(\exp \left(-\frac{n+\frac34}2 \right) - \exp\left( -\frac{n+1}{2}\right) \right)\\
&= \left( \exp\left(-\frac38 \right) -\exp\left( -\frac12\right)\right)\sum_{n=0}^\infty \exp(-n/2) \\
&=\frac{\left( \exp\left(-\frac38 \right) -\exp\left( -\frac12\right)\right)}{1-\exp(-1/2)} \\
&\approx 0.205
\end{align}
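A quick simulation sketch confirming the value (assuming, as in the computation above, that $T$ is exponential with mean $2$):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.exponential(scale=2.0, size=1_000_000)

frac = T - np.floor(T)          # position of the failure within its year
print((frac > 0.75).mean())     # ~ 0.205
```

The empirical frequency matches the computed $0.205$. |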
Significance of $G$ being reductive | If $G$ is a connected, reductive group, you should think of the structure of $G$ in terms of its roots, and the facts about parabolic and Levi subgroups that the roots entail. Here is part of the story. Let $B$ be a Borel subgroup of $G$, and let $T$ be a maximal torus of $G$. For an example, you may keep in mind $G = \textrm{GL}_n, B$ the upper triangulars, and $T$ the diagonal matrices. There is a unique Borel subgroup $B^-$ (think lower triangular) containing $T$ such that $B \cap B^- = T$. Let $U, U^-$ be the unipotent radicals of $B, B^-$. Then $B = TU, B^- = TU^-$ (semidirect products).
Let $\mathfrak g$ be the Lie algebra of $G$ (think all $n$ by $n$ matrices). The adjoint action $\textrm{Ad}: T \rightarrow \textrm{GL}(\mathfrak g)$ (think conjugation by a diagonal matrix) is a rational representation of $T$. Then $\mathfrak g$ decomposes into a direct sum of one dimensional spaces
$$\mathfrak g = \bigoplus\limits_{\alpha \in X(T)} \mathfrak g_{\alpha}$$
where $\alpha: T \rightarrow \mathbb{G}_m$ is a rational homomorphism of $T$ which acts on $\mathfrak g_{\alpha}$ as $\textrm{Ad}(t)X_{\alpha} = \alpha(t) X_{\alpha}$. Then $\mathfrak g_0$ is the Lie algebra of $T$ (think all diagonal matrices), and it is a consequence of reductivity of $G$ that for $0 \neq \alpha \in X(T)$, $\mathfrak g_{\alpha}$ is either one or zero dimensional. Those $0 \neq \alpha$ for which $\mathfrak g_{\alpha}$ is one dimensional are called the roots of $T$ in $G$ (in our example, the $\mathfrak g_{\alpha}$ will be the one dimensional spaces spanned by the elementary matrices $E_{ij}, i \neq j$). Let $\Phi$ be the set of roots.
Let $\mathfrak u, \mathfrak u^-$ be the Lie algebras of $U, U^-$ (for $\mathfrak u$, think upper triangular matrices with zeroes on the diagonal, similarly $\mathfrak u^-$). Then $\mathfrak g = \mathfrak u^- \oplus \mathfrak g_0 \oplus \mathfrak u$, and each of $\mathfrak u, \mathfrak u^-$ is a direct sum of the various one dimensional spaces $\mathfrak g_{\alpha} : \alpha \in \Phi$. Together they exhaust all the roots. Those $\alpha$ for which $\mathfrak g_{\alpha} \subseteq \mathfrak u$ will be called positive, and the others negative. Let $\Phi^+$ be the set of positive roots, and $\Phi^-$ the set of negative roots. Then $\Phi^- = - \Phi^+$.
Let $\Delta$ be the set of positive roots which cannot be expressed as a sum of two other positive roots (if $e_1, ... , e_n$ is the usual basis for $X(T)$, then $\Delta = \{e_1 - e_2, ... , e_{n-1} - e_n \}$, and $\Phi^+ = \{ e_i - e_j : i < j \}$). For each root $\alpha$, there is a unique connected one dimensional algebraic group $U_{\alpha}$, isomorphic to $\mathbb{G}_a$, which is normalized by $T$ and whose Lie algebra is $\mathfrak g_{\alpha}$ (if $\alpha = e_i - e_j$, then $\mathfrak g_{\alpha}$ has basis $E_{ij}$, and $U_{\alpha} = \{ I + xE_{ij} : x \in \mathbb G_a \}$).
The standard Levi subgroups of $G$ are in bijection with the subsets of $\Delta$. Specifically, if $\theta$ is any subset of $\Delta$, let
$$T_{\theta} = (\bigcap\limits_{\alpha \in \theta} \textrm{Ker}(\alpha))^0$$
and define the Levi subgroup $M_{\theta} = Z_G(T_{\theta})$. For example, in $\textrm{GL}_4$, let $\theta = \{e_1 - e_2, e_3 - e_4\}$. Then $T_{\theta} = \textrm{diag}(x,x,y,y)$, and $M_{\theta}$ is the block diagonal sum of two copies of $\textrm{GL}_2$. Then $M_{\theta}$ is connected reductive, with maximal torus $T$ and Borel subgroup $M_{\theta} \cap B$, with respect to which it has its own set of roots, namely those roots of $\Phi$ obtained by linear combination of $\theta$.
The standard parabolic subgroups of $G$ are also in bijection with the subsets of $\Delta$. Every positive root is a unique positive, integer-linear combination of elements of $\Delta$. As $\alpha$ ranges through those positive roots which, when written as a combination of things in $\Delta$, have a nonzero coefficient belonging to something outside $\theta$, the subgroup $N_{\theta}$ generated by these $U_{\alpha}$ is a connected, unipotent group. The parabolic subgroup $P_{\theta}$ is $M_{\theta}N_{\theta}$, which is a semidirect product, with $M_{\theta}$ normalizing $N_{\theta}$. For example, in the example I mentioned with $\textrm{GL}_4$,
$$N_{\theta} = \begin{pmatrix} 1 & 0 & a & b \\ 0 & 1 & c & d \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
The standard parabolic subgroups $P_{\theta} : \theta \subseteq \Delta$ are exactly those subgroups of $G$ which contain $B$. Note that $P_{\Delta} = G, P_{\emptyset} = B$. |
A question on Lyapunov exponents associated with a fixed point of a vector field. | When $x_0$ is a critical point we have $X(t;x(t,x_{0}))=e^{At}$ with $A=d_{x_0}f$.
So what matters are the vectors in each generalized eigenspace.
For example, if $Av=\lambda v$ with $v\ne0$, then $e^{At}v=e^{\lambda t}v$ and so $\chi(x_{0},v)={\rm Re\,}\lambda$. This gives what you want (by taking sums of generalized eigenvectors). |
Here is a riddle that I have no idea how to solve. | Okay, the solution to this riddle is actually written in the blog where you found it, but since you are asking for ideas on how to approach it, I'll give you mine.
I found this solution (using GLPK)
$$
\begin{array}{cccc}
x_1 = 13 & x_2 = 8 & x_3 = 7 & x_4 = 6 \\
x_5 = 12 & x_6 = 2 & x_7 = 14 & x_8 = 11 \\
x_9 = 1 & x_{10} = 5 & x_{11} = 16 & x_{12} = 3 \\
x_{13} = 4 & x_{14} = 10 & x_{15} = 15 & x_{16} = 9
\end{array}
$$
where I'm using the same notation you did above. I also tried to figure out what the saying was, but I got a nonsense group of letters, which I guess is because there exist, as you noticed, at least 8 symmetric solutions. I saw in the blog that some people solved the riddle just by looking at the letters, but I am more of a numbers person, so I'll just skip that "saying" part.
Linear algebra part
My approach consists of using the 8 conditions you already wrote, but adding the condition that each number between 1 and 16 must appear exactly once. Also, we have to face the problem that our variables are integers, not real numbers. I don't even know if it's possible to express such conditions using only linear algebra equations; a start would be adding this
$$
\sum_{i=1}^{16} x_i = 136
$$
but we still lack equations and I don't know how to proceed.
So I have used linear programming, and GLPK to solve it given these conditions.
Linear programming and approach used (a bit long)
I don't know if you are familiar with linear programming, but I will explain the procedure in a way that can hopefully be understood.
Basically we define a new set of variables $x_{ij}$, which are binary (i.e. $0$ or $1$) and are related to your variables this way
$$
x_i = \sum_{j=1}^{16} x_{ij}\cdot j
$$
Of course, we then only allow one of these $x_{ij}$ to be different from zero for each $i$, because "every unknown is only one number", so we add the condition
$$
\sum_{j=1}^{16} x_{ij} = 1
$$
In the same way, we also want "each number to appear only once", which gives a very similar condition (note the change in the index of the sum)
$$
\sum_{i=1}^{16} x_{ij} = 1
$$
Finally, we substitute these new $x_{ij}$ into the 8 equations we had at the start and run GLPK. It tries to find a feasible solution, and it succeeds, giving the one I wrote above. I guess on different machines the program might find different solutions, since we already know it's not unique.
Extra (I almost found the saying)
I was a bit disappointed I didn't get the saying, so I decided to cheat a bit and took a look at the blog; the text is "do a good turn daily". His solution involves Y being the last letter, which means $x_{12}=16$. So I put this restraint on my program and... still got a different solution.
$$
\begin{array}{cccc}
x_1 = 8 & x_2 = 4 & x_3 = 15 & x_4 = 7 \\
x_5 = 6 & x_6 = 12 & x_7 = 9 & x_8 = 10 \\
x_9 = 11 & x_{10} = 14 & x_{11} = 3 & x_{12} = 16 \\
x_{13} = 5 & x_{14} = 13 & x_{15} = 2 & x_{16} = 1
\end{array}
$$
Not sure how many solutions this annoying star has :) |
Prove that $1^3 + 2^3 + ... + n^3 = (1+ 2 + ... + n)^2$ | HINT: You want that last expression to turn out to be $\big(1+2+\ldots+k+(k+1)\big)^2$, so you want $(k+1)^3$ to be equal to the difference
$$\big(1+2+\ldots+k+(k+1)\big)^2-(1+2+\ldots+k)^2\;.$$
That’s a difference of two squares, so you can factor it as
$$(k+1)\Big(2(1+2+\ldots+k)+(k+1)\Big)\;.\tag{1}$$
To show that $(1)$ is just a fancy way of writing $(k+1)^3$, you need to show that
$$2(1+2+\ldots+k)+(k+1)=(k+1)^2\;.$$
Do you know a simpler expression for $1+2+\ldots+k$?
(Once you get the computational details worked out, you can arrange them more neatly than this; I wrote this specifically to suggest a way to proceed from where you got stuck.) |
Is it defined the product of vectors of different spaces? | If you are searching for a short answer, here it is: no, it is not defined. However, I think it is useful to think about the following question:
Are we searching for a kind of inner product, or are we searching for something else?
Well, if we are searching for an inner product, the answer is still no: of course you can add a $0$ as the third real coordinate of $v_1$, but it is not clear why to use $0$; one could use any $r\in \mathbb{R}$. Probably you are trying to add $0$ because you are thinking of $\mathbb{R}^2$ as the plane $z=0$ inside $\mathbb{R}^3$. Of course the plane $z=0$ is isomorphic to $\mathbb{R}^2$, but inside $\mathbb{R}^3$ there are infinitely many isomorphic copies of $\mathbb{R}^2$, not only that one.
In a different fashion, there are infinitely many monomorphism (injective linear maps)
$$ \mathbb{R}^2 \to \mathbb{R}^3$$
and among these there is not a "natural", "canonical" one.
A different question is: is there a way to "multiply" two vector spaces in order to obtain another vector space? This time we are not asking for an inner product, but for a sort of "product" which takes vectors and gives vectors.
The answer is yes, there is. The construction is called the tensor product of two vector spaces, and it is performed by taking a suitable quotient built out of the cartesian product. If you are interested, you can find further information about this in many linear algebra books. |
injective open map between two euclidean spaces | There does not exist such a function even if we do not assume continuity.
$f(\mathbb R^2)\subset \mathbb R$ must be an open set, and we have a bijective continuous function $f^{-1}:f(\mathbb R^2)\to \mathbb R^2$. This map is a homeomorphism onto its image when restricted to each closed subinterval of $f(\mathbb R^2)$, and a subspace of $\mathbb R^2$ homeomorphic to a closed interval in $\mathbb R$ has empty interior. Since $f(\mathbb R^2)$ is a countable union of such closed intervals, so is $\mathbb R^2$. But this contradicts the Baire category theorem. |
5 black, 7 red, 9 blue, and 6 white marbles. | There are, in general, $n!$ ways to arrange $n$ objects. So you'd have $(5+7+9+6)!$ but then as all the black marbles are identical, their permutations shouldn't be counted. For every 'good' permutation, you also have another $5!7!9!6!$ that only differ from it by permutations of same colored marbles. Thus the result is $\frac{(5+7+9+6)!}{5!7!9!6!}$.
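For instance, a one-liner to evaluate this count (a sketch using exact integer arithmetic):

```python
from math import factorial

total = factorial(27) // (factorial(5) * factorial(7) * factorial(9) * factorial(6))
print(total)   # the number of distinguishable arrangements of all 27 marbles
```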
For the probability, consider that whites must be in pairs or triples. This is equivalent to solving the previous problem for $3$ and $2$ white marbles. Notice that the two cases overlap when all $6$ marbles are in one group. |
Finding latitude and longitude | You do not know where, or how near the Greenwich/prime meridian, you are located. Therefore, longitude cannot be found. Otherwise, your latitude looks correct. |
Is the Cauchy–Schwarz inequality valid for the Lebesgue space $L^2(0,T,L^2(\Omega))$? | Yes, for the first question, since $L^2(0,T;L^2(\Omega))$ is a Hilbert space. For the second question, note that $\left\| \int_{0}^{t} h(s)ds\right\|_{L^2(\Omega)} \leq \int_{0}^{t} \left\|h(s)\right\|_{L^2(\Omega)}ds$ and $$\int_{0}^{t}\|h(s)\|_{L^2(\Omega)}ds \leq \left( \int_{0}^{t}1^2ds \right)^{\frac{1}{2}}\left( \int_{0}^{t}\|h(s)\|_{L^2(\Omega)}^2 ds\right)^{\frac{1}{2}}.$$ Then,
$$\left\| \int_{0}^{t} h(s)ds\right\|_{L^2(\Omega)}^{2} \leq t \int_{0}^{t}\|h(s)\|_{L^2(\Omega)}^2 ds.$$ |
Does $|\textbf{x}-\textbf y|<\delta$ imply $|x_1- y_1|<\delta$ and $|x_2- y_2|<\delta$ | Note that $|x_1-y_1|^2\leq |x_1-y_1|^2+|x_2-y_2|^2$. The result follows from taking square roots on both sides. |
Do non-symmetric "strongly positive definite" matrices have unique positive definite square roots? | Your condition "$A$ strongly PD" is equivalent to "$A+A^T$ is symmetric $>0$". According to,
Largest eigenvalues of matrix and its doubled symmetric part
every $\lambda\in spectrum(A)$ satisfies $Re(\lambda)>0$.
Thus $A$ admits $\log(A)$ as its principal logarithm and the principal square root $A^{1/2}$ is well defined; cf the first part of my post in
When is square root of transpose and transpose of square root of a matrix are equal?
Moreover, $A^{1/2}$ is the unique square root of $A$ whose all the eigenvalues have a positive real part. Thus, if $A$ admits a strong square root, then it is unique.
EDIT. The difficult part is to see if $A^{1/2}+{A^{1/2}}^T$ is $>0$.
That is true; cf. Corollary 8 in
https://www.sciencedirect.com/science/article/pii/S0024379500002433
cf. also
Square root of positive definite nonsymmetric matrix
where this question was studied. |
Formula for the square root of a symetric, positive definite 2x2 matrix | If you guarantee that $\Sigma$ is positive definite, then if you choose plus signs in both $s$ and $t$ in Wikipedia formula, you will get a positive definite matrix.
In other words:
$$
s=\sqrt{v_{11}v_{22}-v_{12}^2}\qquad t=\sqrt{v_{11}+v_{22}+2s}\\
\sqrt{\Sigma} = \frac1t(\Sigma+sI)
$$
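A minimal numerical sketch of this formula (hypothetical entries), checking that the result squares back to $\Sigma$:

```python
import numpy as np

v11, v12, v22 = 4.0, 1.0, 3.0
Sigma = np.array([[v11, v12], [v12, v22]])

s = np.sqrt(v11 * v22 - v12**2)
t = np.sqrt(v11 + v22 + 2 * s)
root = (Sigma + s * np.eye(2)) / t

print(root @ root)   # recovers Sigma
```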
You can easily see that:
$$
u^T\sqrt{\Sigma}u=\frac1t(u^T\Sigma u+s\|u\|^2).
$$
is always positive if $u^T\Sigma u$ is positive. |
Cayley Table Question | A Cayley table does multiplication for you.
To compute powers of $x$, first look at the row and column for $x$: this gives you $x^2$.
To get $x^3$, find the row for $x$ and the column for $x^2$. (Or you can do the column for $x$ and the row for $x^2$.) This gets you $x^3$.
Adapt this to do greater powers of $x$.
Then you just need to count how many steps it takes you to reach the identity element, and you have the order. |
Pareto distribution derivation | I'm with you till the fifth line where you start integrating. For the first term, you have $$ \int_0^\infty h^{m-1}e^{-h/\theta}dh.$$ Substitute in $u=h/\theta$ and get $$\int_0^\infty h^{m-1}e^{-h/\theta}dh= \theta^m\int_0^\infty u^{m-1}e^{-u}du = \theta^m \Gamma(m)$$ where we used the formula$$ \Gamma(z) = \int_0^\infty x^{z-1}e^{-x}dx.$$
For the second term, pull the $y$ and $R$ stuff out front and combine the powers of $h$ in the integral to get $$ \frac{y^{-2/\alpha}}{R^2}\int_0^\infty h^{m-1+2/\alpha} e^{-h/\theta}dh.$$ Making the same substitution gives $$ \frac{y^{-2/\alpha}}{R^2}\int_0^\infty h^{m-1+2/\alpha} e^{-h/\theta}dh =\frac{y^{-2/\alpha}}{R^2}\theta^{m+2/\alpha}\int_0^\infty u^{m-1+2/\alpha}e^{-u}du = \frac{y^{-2/\alpha}}{R^2}\theta^{m+2/\alpha}\Gamma(m+2/\alpha).$$
Putting it all together and dividing by the denominator $\Gamma(m)\theta^m$ gives $$ 1- \frac{y^{-2/\alpha}}{R^2} \theta^{2/\alpha}\frac{\Gamma(m+2/\alpha)}{\Gamma(m)}$$
So I agree with the answer except for a factor of $m^{2/\alpha}$ in the second term. |
How is false position method used when the root is a double root? | The root $x=3$ can not be found with the false position method. Any root found by a bracketing method has to be a zero crossing, with values of both signs in every neighborhood of the root. You also need to start the method with a bracketing interval, which here has to contain the other single root $x=1$. The method will converge to that root, possibly involving a longer initial phase where one interval end point moves towards $x=3$.
For a polynomial function, you can eliminate the multiple roots by computing the GCD with its derivative and dividing it out. This depends on the coefficients being integers or given as rationals.
Starting the regula falsi method for the given polynomial on the interval $[-2,5]$, the method in some sense "finds" the root at $x=3$ in that the active interval end moves toward that point, the lower end $-2$ never changing. However, the bracketing interval does not change and the convergence is very slow, for instance $x[ 5] = 3.50$, $x[ 8] = 3.38$, $x[ 12] = 3.30$, $x[22] = 3.20$, $x[ 55] = 3.10$.
Changing to the Illinois modification does not help much, as this anti-stalling measure is geared towards simple roots in the standard situation of a convex increasing function (or any of its flipped variants) over the interval. The double root still leads to long stalling segments. Enhancing this variant by a stalling count and an over-relaxed Aitken's delta-squared formula restores fast convergence to the root $-1$. |
Solve $\log\left(\frac{1.07^x}{1050-2.5x}\right)=\log\left(\frac{1.2}{828}\right)$ for $x$. | As said in comments, the solution is given in terms of the Lambert function.
If you plot the function $$f(x)=\frac{1.07^x}{1050-2.5x}-\frac{1}{690}$$ you should notice that the solution is very close to $x=6$; this means that you could start Newton's method there and converge quite fast, as shown below
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 6\\
1 & 5.993055006 \\
2 & 5.993053313 \\
3 & 5.993053313
\end{array}
\right)$$ Sooner or later, you will learn that any equation which can be written or rewritten as $$A+B x+C \log(D+Ex)=0$$ has solution(s) in terms of the Lambert function.
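A minimal sketch of the iteration (using a numerical derivative, so nothing beyond the constants in the question is assumed):

```python
f = lambda x: 1.07**x / (1050 - 2.5 * x) - 1 / 690

x, h = 6.0, 1e-7                 # starting guess read off the plot
for _ in range(4):
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    x -= f(x) / fprime
    print(x)                     # settles at 5.993053313...
```

This reproduces the table above. |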
If every polynomial in $F[x]$ splits then there exists no nontrivial algebraic extension | Let $E$ be an algebraic extension of $F$ and let $\alpha \in E$.
Then $F(\alpha) \cong F[x]/(f(x))$, where $f(x)$ is the minimal polynomial of $\alpha$.
Since $f$ is irreducible, it must have degree $1$ and so $\alpha \in F$.
Thus $E \subseteq F$ and so $E=F$. |
The derivative of a two-to-one complex function has no zeros. | If $f'$ has a zero of order $k$ at $a$, then in some neighborhood of $a$, $f$ can be written as $f(z)=f(a)+((z-a) h(z))^{k+1}$ where $h$ is a holomorphic function with $h(a)\ne 0$.
So, the values near $f(a)$ are attained $k+1$ times in this neighborhood, while $f(a)$ itself is attained exactly once in this neighborhood.
By the 2-to-1 assumption, $k$ must be equal to $1$. Also, the value $f(a)$ must be attained the second time at some point $b$. By the open mapping property, every value near $f(a)$ is also attained in a neighborhood of $b$. But this makes them have at least three preimages, a contradiction. |
'Categorical' characterization of invertible elements of monoid | Without having your actual application in mind, my first reaction is as follows.
"Categorically", we tend to identify elements $m \in M$ with homomorphism $\mathbb{N} \to M$. And we can identify invertible elements of $M$ with homomorphisms $\mathbb{Z} \to M$.
So, an element $m : \mathbb{N} \to M$ is invertible if and only if it can be written as a composite of the form $\mathbb{N} \to \mathbb{Z} \to M $, where the first map is the canonical inclusion.
My next reaction, again still not knowing if it's suitable for your application, is to pick out the functor $\mathrm{Core}$ (the group of invertible elements) in an abstract way.
If $U : \mathbf{Groups} \to \mathbf{Monoids}$ is the forgetful functor, then I believe there is an adjunction $\hom_{\mathbf{Monoids}}(U(G), M) \cong \hom_{\mathbf{Groups}}(G, \mathrm{Core}(M))$.
Thus, you can identify the functor $\mathrm{Core}$ as being the right adjoint to $U$, and the counit $U \mathrm{Core}(M) \to M$ picks out its submonoid of invertible elements.
And, of course, an element $m \in M$ is invertible if and only if it lies in this submonoid. |
Triangle Inequality: use to prove convergence (psi function elliptic functions) | The triangle inequality is actually a combined inequality. That is, $\;||x|-|y||\le|x+y|\le|x|+|y|.$ In your case, you have written $\;w\;$ for $\;\omega\;$, which is okay. Notice that all that matters is $\;|z|<\frac12|w|,\;$ which implies $\;|w|-|z|>\frac12|w|.\;$ Now the (reverse) triangle inequality states $\;||z|-|w||\le|z-w|$
and thus, $\;\frac12|w|<|z-w|.\;$ For the rest,
$\;|z-2w|=|z+(-2w)|\le|z|+|\!-\!2w|<\frac12|w|+2|w|=\frac52|w|.$ |
Is there a standard ordering of non-ordinal order types? | If $A$ and $B$ are totally ordered sets of order type $\alpha$ and $\beta$ respectively, then $\alpha\le\beta$ means that $A$ is isomorphic to a subset of $B$. (This agrees with the usual ordering of ordinal numbers if $A$ and $B$ are well-ordered sets.) This relation $\le$ is a quasi-ordering: reflexive and transitive but not antisymmetric. The notation $\alpha\lt\beta$ means that $\alpha\le\beta$ while $\beta\not\le\alpha$. |
Some help on Bayes Statistics/probability problem | $P(\text{enzyme})$ is N/95 and N is something to be adjusted.
For $P(\text{illness})$ you use the very useful breakdown for the denominator in Bayes' theorem $P(B)=P(B\mid A)~P(A) + P(B\mid \overline A)~(1-P(A))$, here $$P(\text{illness}\mid \text{enzyme})~P(\text{enzyme})+P(\text{illness}\mid\text{normal})~(1-P(\text{enzyme}))$$ |
Why is there never a proof that extending the reals to the complex numbers will not cause contradictions? | There are several ways to introduce the complex numbers rigorously, but simply postulating the properties of $i$ isn't one of them. (At least not unless accompanied by some general theory of when such postulations are harmless).
The most elementary way to do it is to look at the set $\mathbb R^2$ of pairs of real numbers and then study the two functions $f,g:\mathbb R^2\times \mathbb R^2\to\mathbb R^2$:
$$ f((a,b),(c,d)) = (a+c, b+d) \qquad g((a,b),(c,d))=(ac-bd,ad+bc) $$
It is then straightforward to check that

- $(\mathbb R^2,f,g)$ satisfies the axioms for a field, with $(0,0)$ being the "zero" of the field and $(1,0)$ being the "one" of the field,
- the subset of pairs with second component being $0$ is a subfield that's isomorphic to $\mathbb R$,
- the square of $(0,1)$ is $(-1,0)$, which we've just identified with the real number $-1$, so let's call $(0,1)$ $i$, and
- every element of $\mathbb R^2$ can be written as $a+bi$ with real $a$ and $b$ in exactly one way, namely $(a,b)=(a,0)+(b,0)(0,1)$.
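A minimal runnable sketch of this construction (my addition, with floats standing in for the reals):

```python
def f(p, q):                        # addition of pairs
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def g(p, q):                        # multiplication of pairs
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)
print(g(i, i))                      # (-1.0, 0.0), i.e. i**2 = -1
```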
With this construction in mind, if we ever find a contradiction involving complex number arithmetic, this contradiction can be translated to a contradiction involving plain old (pairs of) real numbers. Since we believe that the real numbers are contradiction-free, so are the complex numbers. |
A simple problem on first order differential equations | If you cannot express your derivative then you deal with an implicit differential equations They are very important and extensively studied. Depending on what is your level of preparation you can find differnt accounts how to deal with such equations. A particularly interesting and very geometric treatment of such equations is given in Arnold's Geometric methods in the theory of ODE.
Here is just one example how the solutions can look like. |
Does there exist an open subset of R of Lebesgue measures .5 whose closure has Lebesgue measure 1? | Say $(r_n)$ is an enumeration of the rationals in $(0,1)$. Choose $a_n>0$ with $\sum 2a_n<1/2$ and let $$E=(0,1)\cap\bigcup_n(r_n-a_n,r_n+a_n).$$Then $E$ is open, dense, and $m(E)<1/2$.
Now for $\alpha\in(0,1)$ let $$S_\alpha=E\cup(0,\alpha).$$Note that $m(S_\alpha)$ depends continuously on $\alpha$; hence there exists $\alpha$ with $m(S_\alpha)=1/2$. |
Compact Kähler manifolds with $dim_\mathbb{C}H^{1,1}(X)=1$ | As mentioned in the comments, now I think the argument in the book is incorrect, at least with the given assumption. The following was my first attempt to answer this question, where the crossed sentences are insufficient:
Every compact Kahler manifold $X$ admits a Kahler form $[\omega]\in H^2(X,\mathbb C)$ which is nonzero. Moreover, $[\omega]$ is real and of type $(1,1)$. Therefore, by the assumption that $\dim_{\mathbb C}H^{1,1}(X)=1$, $[\omega]$ is a real element of the complex vector space $\mathbb C\cong H^{1,1}(X)$. One can rescale $[\omega]$ by a constant $r\in\mathbb R^+$, such that $r[\omega]\in H^2(X,\mathbb Z)$ is integral and still stay in the Kahler cone.
For your confusion, you can check from the fact $H^{1,1}(X)$ is a complex subspace of $H^{2}(X,\mathbb C)$ such that $\overline{H^{1,1}(X)}=H^{1,1}(X)$, it follows that $$H^{1,1}(X)= \big (H^{1,1}(X)\bigcap H^2(X,\mathbb R)\big )\otimes_{\mathbb R} \mathbb C.$$
So in particular, the intersection $\require{enclose}\enclose{horizontalstrike}{H^2(X,\mathbb Z)\cap H^{1,1}(X)\neq \{0\}}$.
The reason for the insufficiency is the following: Since $H^{1,1}(X)$ is 1-dimensional, the Kahler cone is a ray $\mathbb R^+$ in the real vector space $H^2(X,\mathbb R)$, so $X$ is a Hodge manifold iff that ray passes through a (nonzero) integral element in $H^2(X,\mathbb R)$. Explicitly, let's say $h^{2,0}=h^{0,2}=1$, and $v_1,v_2,v_3$ is a basis of $H^2(X,\mathbb Z)$; then $[\omega]=\sum_{i=1}^3r_iv_i$ for some $r_i\in \mathbb R$. Can we guarantee that $[r_1:r_2:r_3]=[n_1:n_2:n_3]$ with $n_i$ integers? No, and a general weight two Hodge structure won't satisfy that. |
Homology homomorphism induced by linear isomorphism | From the outset, I would use the fact that both connected components of $\mathrm{GL}_n(\mathbf R)$ are path-wise connected, assuming that $n>0$. Hence, it suffices to prove the statement for the identity matrix and the diagonal matrix $\mathrm{diag}(-1,1,\ldots,1)$. Then you apply your argument to these two maps. For the former you get the identity on $H_{n-1}(S^{n-1})$ of course, for the latter you can show with Mayer-Vietoris that you get $-\mathrm{id}$ on $H_{n-1}(S^{n-1})$. |
Average length of the longest segment | The answer to (B) is actually given in both Yuval Filmus' and my answers to the question about the average length of the shortest segment. It's $$\frac{1}{n} H_n,$$ where $H_n = \sum_{k=1}^n \frac{1}{k},$ i.e., the $n$th harmonic number.
"Clever" is of course subjective, but here's an argument for (A) in the $n$-piece case. At least there's only one (single-variable) integration in it. :)
If $X_1, X_2, \ldots, X_{n-1}$ denote the positions on the rope where the cuts are made, let $V_i = X_i - X_{i-1}$, where $X_0 = 0$ and $X_n = 1$. So the $V_i$'s are the lengths of the pieces of rope.
The key idea is that the probability that any particular $k$ of the $V_i$'s simultaneously have lengths longer than $c_1, c_2, \ldots, c_k$, respectively (where $\sum_{i=1}^k c_i \leq 1$), is $$(1-c_1-c_2-\ldots-c_k)^{n-1}.$$ This is proved formally in David and Nagaraja's Order Statistics, p. 135. Intuitively, the idea is that in order to have pieces of size at least $c_1, c_2, \ldots, c_k$, all $n-1$ of the cuts have to occur in intervals of the rope of total length $1 - c_1 - c_2 - \ldots - c_k$. For example, $P(V_1 > c_1)$ is the probability that all $n-1$ cuts occur in the interval $(c_1, 1]$, which, since the cuts are randomly distributed in $[0,1]$, is $(1-c_1)^{n-1}$.
If $V_{(n)}$ denotes the largest piece of rope, then
$$P(V_{(n)} > x) = P(V_1 > x \text{ or } V_2 > x \text{ or } \cdots \text{ or } V_n > x).$$ This calls for the principle of inclusion/exclusion. Thus we have, using the "key idea" above,
$$P(V_{(n)} > x) = n(1-x)^{n-1} - \binom{n}{2} (1 - 2x)^{n-1} + \cdots + (-1)^{k-1} \binom{n}{k} (1 - kx)^{n-1} + \cdots,$$
where the sum continues until $kx > 1$.
Therefore,
$$E[V_{(n)}] = \int_0^{\infty} P(V_{(n)} > x) dx = \sum_{k=1}^n \binom{n}{k} (-1)^{k-1} \int_0^{1/k} (1 - kx)^{n-1} dx = \sum_{k=1}^n \binom{n}{k} (-1)^{k-1} \frac{1}{nk} $$
$$= \frac{1}{n} \sum_{k=1}^n \frac{\binom{n}{k}}{k} (-1)^{k-1} = \frac{H_n}{n},$$
where the last step applies a known binomial sum identity. |
Sum of functions and $L^p$ spaces | Let $A=f^{-1} ([-1,1]) , B=A=f^{-1} ((-\infty,-1)\cup (1,\infty ))$.
Take $$f_1 (x)=\begin{cases} f(x) \mbox{ for } x\in B \\ 0 \mbox{ for } x\notin B\end{cases}$$
$$f_2 (x)=\begin{cases} f(x) \mbox{ for } x\in A \\ 0 \mbox{ for } x\notin A\end{cases}$$ |
Kernel of epimorphism from $F_{k}$ to $Z_{n}^k$ | Hint: the kernel of every such epimorphism is the verbal subgroup of $F_k$ determined by the set of words $\{x^n, [x,y]\}$. |
How badly can GCH fail? | As many as you want. There is no upper bound. The cofinality of $2^\kappa$ must exceed $\kappa$,of course. See Easton's theorem for the details regarding regular cardinals. |
Finding All Solutions For $\sin(x) = x^2$ | It's quite obvious that there are no solutions when $x<0$, so we will look for $x\ge0$. You have found that $x=0$ satisfies the equation. Let's analyze for $x>0$:
Take $f(x)=x^2$ and $g(x)=\sin(x)$.
For $x=\frac{\pi}{4}$, some calculations give $f(\frac{\pi}{4})\approx 0.617$ while $g(\frac{\pi}{4}) \approx 0.707$: $$f(\frac{\pi}{4}) < g(\frac{\pi}{4})$$
For $x=1$, $f(1)=1$ but $g(1)<1$, since $\sin(x)$ is increasing for $x\in[0,\pi/2]$ and $\sin(\pi/2)=1$; hence $$f(1)>g(1)$$
which means $f$ overtakes $g$ somewhere in $(\pi/4,1)$, so by the intermediate value theorem the graphs intersect in this interval. Now you just need to prove that they can't intersect more than once for $x>0$.
What is the status of the purported proof of the ABC conjecture? | An update by Brian Conrad is here:
http://mathbabe.org/2015/12/15/notes-on-the-oxford-iut-workshop-by-brian-conrad/ |
Evaluating $\int \ln(2x+3) \mathrm{d}x$ | There is no mistake. $C$ is an arbitrary constant and $-\frac 3 2+C$ is just another constant $C'$. And there is no better way to answer this question. |
Prove that : $\gcd ( a.P(X) + b.Q(X), cP(X) + dQ(X) ) = \gcd ( P(X), Q(X) )$ | Let $$r(x)= \gcd ( a.P(X) + b.Q(X), cP(X) + dQ(X) )$$ and $$s(x) = \gcd ( P(X), Q(X) ).$$
Writing $P=sP'$ and $Q= sQ'$, we have $$aP+bQ = s(aP'+bQ')$$ and $$cP+dQ = s(cP'+dQ')$$ which means that $s\mid aP+bQ$ and $s\mid cP+dQ$, so $\boxed{s\mid r}$.
Vice versa, we prove $r\mid s$ (and so $s=r$, up to units): $$r\mid c(aP+bQ)-a(cP+dQ) = (cb-ad)Q $$
so, provided $ad-bc\neq 0$ (the nonzero scalar $cb-ad$ is a unit), $r\mid Q$; in the same way $r\mid d(aP+bQ)-b(cP+dQ)=(ad-bc)P$ gives $r\mid P$, so $\boxed{r\mid s}$
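As a sanity check (my own example polynomials, with the hypothesis $ad-bc\neq 0$ made explicit above), one can verify the statement symbolically:

```python
from sympy import symbols, gcd, expand

x = symbols('x')
P = (x + 2)*(x**2 + 1)   # sample polynomials with gcd(P, Q) = x + 2
Q = (x + 2)*(x + 1)
a, b, c, d = 2, 5, 1, 3  # ad - bc = 1, nonzero

print(gcd(expand(a*P + b*Q), expand(c*P + d*Q)))  # x + 2
print(gcd(expand(P), expand(Q)))                  # x + 2
```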
prove that if $\lambda$ is an eigenvalue of T then $\bar\lambda$ is eigenvalue of $T^*$ | If $Tv=\lambda v$, with $v\ne 0$, then, for all $u\in V$,
$$\langle T^*u-\bar\lambda u,v\rangle=\langle T^*u,v\rangle-\langle\bar\lambda u,v\rangle=\langle u,Tv\rangle-\lambda\langle u,v\rangle=0
$$
This means that the image of $T^*-\bar\lambda I$ is contained in the orthogonal complement of $v$, so $T^*-\bar\lambda I$ is not surjective. What can you say, now?
Of course, the assumption is that the space $T$ operates on is finite dimensional, because the assertion is false for infinite dimensional spaces. |
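In the finite-dimensional case the statement is also easy to check numerically; here is a small NumPy experiment (my own, on a random complex matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# eigenvalues of the adjoint T* are the complex conjugates of those of T
lhs = np.sort_complex(np.linalg.eigvals(T.conj().T))
rhs = np.sort_complex(np.conj(np.linalg.eigvals(T)))
print(np.allclose(lhs, rhs))  # True
```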
What are the advantages of ending a proof with “QED”? | It is part of the grammar of mathematical writing/discourse.
A proof is a key unit of mathematical discourse. It is important therefore to have efficient markers of the beginning and end of the unit. It doesn't matter what these are - the specifics are arbitrary.
You mention the end of the proof. There are conventional beginnings too: eg "Theorem 2" or "Proposition 5.6" or "Lemma 3.2" followed by a statement of what is to be proved. Why not just leave these out?
Why is it not redundant: well sometimes people write or say things like "in the proof of proposition 5" - and if we have markers of the beginning and end of the proof we know what the point of reference is.
The key thing is that it costs little and adds to efficiency and accuracy of communication. |
Is there any pde whose solution evolves as a partial Fourier integral? | (Converted and fixed from a comment)
Let $F(t,x) = \rho(t) \exp(itx)$, then the solution to the PDE
$$ \partial_t u = F $$
with $0$ initial data is exactly
$$ u(t,x) = \int_0^t \rho(s) \exp(isx) \mathrm{d}s. $$ |
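One can let a computer algebra system confirm the differentiation of the integral with variable upper limit (a sketch; the symbol `rho` stands for an arbitrary smooth density):

```python
import sympy as sp

t, x, s = sp.symbols('t x s', real=True)
rho = sp.Function('rho')

# u(t, x) = integral_0^t rho(s) exp(isx) ds, left unevaluated by sympy
u = sp.integrate(rho(s) * sp.exp(sp.I * s * x), (s, 0, t))

# Leibniz rule: d/dt of the integral is the integrand evaluated at s = t
print(sp.simplify(sp.diff(u, t) - rho(t) * sp.exp(sp.I * t * x)))  # 0
```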
Permutation equation | Here is a starter. It is convenient to look at the cycle notation of $\alpha$:
\begin{align*}
\color{blue}{\alpha=(1\ 7)(2\ 6)(3\ 5\ 10)(4\ 9\ 8)}\tag{1}
\end{align*}
We see that $\alpha$ consists of two involutions $(1\ 7)$ and $(2\ 6)$, which have order two, and two cycles of length $3$, which have order $3$, namely
\begin{align*}
(1\ 7)^2=\varepsilon\qquad\qquad&(3\ 5\ 10)^2=(3\ 10\ 5)\\
&(3\ 5\ 10)^3=\varepsilon\tag{2}\\
(2\ 6)^2=\varepsilon\qquad\qquad&(4\ 9\ 8)^2=(4\ 8\ 9)\\
&(4\ 9\ 8)^3=\varepsilon\\
\end{align*}
with $\varepsilon$ the identity permutation.
From (1) and (2) we see $\alpha^{30}=\varepsilon$ and consequently
\begin{align*}
\color{blue}{\alpha^{31}}=\alpha^{30}\circ \alpha=\varepsilon\circ \alpha\color{blue}{=\alpha}
\end{align*}
We also derive from (2)
\begin{align*}
(3\ 10\ 5)^{29}&=(3\ 5\ 10)^{58}=(3\ 5\ 10)^{3\cdot19+1}=(3\ 5\ 10)\\
(4\ 8\ 9)^{29}&=(4\ 9\ 8)^{58}=(4\ 9\ 8)^{3\cdot19+1}=(4\ 9\ 8)\\
(1\ 7)^{29}&=(1\ 7)^{2\cdot 14+1}=(1\ 7)\\
(2\ 6)^{29}&=(2\ 6)^{2\cdot 14+1}=(2\ 6)\\
\end{align*}
It follows:
\begin{align*}
\color{blue}{((1\ 7)(2\ 6)(3\ 10\ 5)(4\ 8\ 9))^{29}=\alpha}
\end{align*} |
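If you want to double-check the computation, sympy's permutation class makes this easy (note that its cycles are $0$-based, so every entry below is shifted down by one):

```python
from sympy.combinatorics import Permutation

alpha = Permutation([[0, 6], [1, 5], [2, 4, 9], [3, 8, 7]], size=10)
beta  = Permutation([[0, 6], [1, 5], [2, 9, 4], [3, 7, 8]], size=10)

print(alpha**31 == alpha)  # True, since alpha^30 is the identity
print(beta**29 == alpha)   # True: beta solves x^29 = alpha
```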
real analysis : futher properties of the integral | I think you meant $f_t=\frac{\partial f}{\partial t}$ rather.
\begin{eqnarray*}F'(t)&=&\lim_{h\to 0}\frac{1}{h}\left(\int_a^b f(x,t+h)\,dg(x)-\int_a^b f(x,t)\,dg(x)\right)\\
&=&\lim_{h\to 0}\int_a^b\frac{f(x,t+h)-f(x,t)}{h}\,dg(x)\\
&=&\lim_{h\to 0}\int_a^b f_t(x,t+\zeta)\,dg(x)\\
&=&\int_a^b f_t(x,t)\,dg(x)
\end{eqnarray*}
where $\zeta=\zeta(h,x)$ satisfies $0\le \zeta\le h$ by the Mean Value Theorem. Continuity of $f_t$ implies that it is bounded on $[a,b]$, and the last equality follows from the Bounded Convergence Theorem and the continuity of $f_t$.
Determine the curve points | For (b): The vector joining the point(s) of extremal distance and $(0,0,8)$ are orthogonal to the tangent plane in that point, hence we have
$$\bigl\langle\begin{pmatrix}1\\0\\8x \end{pmatrix},\begin{pmatrix} x\\y\\4x^2+y^2-8\end{pmatrix}\bigr\rangle=0\text{ and }
\bigl\langle\begin{pmatrix}0\\1\\2y \end{pmatrix},\begin{pmatrix}x\\ y\\
4x^2+y^2-8\end{pmatrix}\bigr\rangle=0.$$
If I'm not mistaken there are five solutions $(x,y)$, namely $(0,0)$, $(0,\pm\sqrt{15/2})$ and $(\pm\sqrt{63/32},0)$.
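A quick symbolic check of those five solutions (writing out the two dot products above):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
eq1 = x + 8*x*(4*x**2 + y**2 - 8)  # <(1,0,8x), (x,y,4x^2+y^2-8)> = 0
eq2 = y + 2*y*(4*x**2 + y**2 - 8)  # <(0,1,2y), (x,y,4x^2+y^2-8)> = 0

# five real solutions: (0,0), (0, ±sqrt(30)/2), (±3*sqrt(14)/8, 0),
# i.e. (0, ±sqrt(15/2)) and (±sqrt(63/32), 0)
print(sp.solve([eq1, eq2], [x, y]))
```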
Prove the inequality without using the concept of Arithmetic and Geometric mean inequality | Assuming $0<r<1$,
$$\frac{1-r^{2m+1}}{r^m(1-r)}=\frac{1+r+r^2+\cdots+r^{2m}}{r^m}
=\frac1{r^m}+\frac1{r^{m-1}}+\cdots+\frac1r+1+r+r^2+\cdots+r^m.$$
If you can prove that $1/r^k+r^k>2$ for $k=1,2,\ldots,m$ (hint: expand $(1-r^k)^2>0$ and divide by $r^k$), then this will be $>2m+1$.
Solution to the 2nd Order ODE using Variation of Parameters | Try a particular solution of the form $a_{n_p}(t) = A\cos\omega t + B\sin\omega t$; plugging that into the equation
$$ a_{n_p}'' + (nc)^2 a_{n_p} = C_n\sin\omega t, \qquad C_n = \frac{2(-1)^{n+1}\omega ^2A}{n\pi} $$
(with $C_n$ coming from the forcing term of the original problem) yields
$$ (-\omega^2 + (nc)^2)(A\cos\omega t + B\sin\omega t) = C_n\sin\omega t $$
Matching the coefficients of the sines and cosines yields
$$
A = 0
\qquad
B = \frac{C_n}{-\omega^2+(nc)^2}
$$
This heuristic for finding particular solutions of linear autonomous equations, when the source term is a sum of functions of the form $e^{-st}t^n$, is called the method of undetermined coefficients.
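A short symbolic verification of the particular solution found above (valid away from resonance, i.e. for $\omega \neq nc$):

```python
import sympy as sp

t, w, n, c, C = sp.symbols('t omega n c C_n', positive=True)
B = C / ((n*c)**2 - w**2)     # coefficient from matching the sin(omega t) terms
a_p = B * sp.sin(w*t)         # particular solution with A = 0

residual = sp.diff(a_p, t, 2) + (n*c)**2 * a_p - C*sp.sin(w*t)
print(sp.simplify(residual))  # 0
```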
Evaluating improper double integral with Lebesgue | You don't need to turn to Lebesgue integration to add rigor. The result can be obtained rigorously within the framework of the multi-dimensional Riemann integral.
The Riemann integral is naturally defined over bounded rectangles and extended to more general (rectifiable) regions with an indicator function. Before we even begin to consider Fubini's theorem to manipulate iterated integrals, we first need to define what the improper integral means. In this case, the restriction of $f(x,y) = 2xy/(1-y^4)$ is continuous and Riemann integrable over $S_b = \{(x,y): 0 \leqslant y \leqslant \sqrt{x} \leqslant b\}$ for any $b < 1$, with the integral given by
$$I_b = \int_{S_b}f = \int_{[0,b]^2}f \, \chi_{S_b}$$
There are some technicalities regarding how to precisely define the improper (or extended) Riemann integral for arbitrary regions, but it boils down in this case to $I = \lim_{b \to 1-} I_b$ when the limit exists.
At this point we apply Fubini's theorem specifically for Riemann integrals, as discussed for example in Analysis on Manifolds by Munkres or Calculus on Manifolds by Spivak. This states simply that if $f$ is integrable and the iterated Riemann integrals exist (both of which are satisfied here), then we have
$$\begin{align}I_b = \int_0^b \left(\int_0^b \frac{2xy}{1 - y^4}\, \chi_{S_b} \, dy\right) \, dx &= \int_0^b \left(\int_0^b \frac{2xy}{1 - y^4}\, \chi_{S_b} \, dx\right) \, dy \\ &= \int_0^b \left(\int_{y^2}^b \frac{2xy}{1 - y^4}\, \, dx\right) \, dy \\ &= \int_0^b \frac{(b^2-y^4)y}{1 - y^4}\, dy \\ &= \int_0^1 \frac{(b^2-y^4)y}{1 - y^4} \chi_{y \leqslant b} \, dy \end{align}$$
Since, for all $y \in [0,1]$ we have
$$\left| \frac{(b^2-y^4)y}{1 - y^4} \chi_{y \leqslant b} \right| \leqslant y \leqslant 1$$
it follows by the bounded convergence theorem that
$$I = \lim_{b \to 1-}I_b = \int_0^1 \lim_{b \to 1-} \frac{(b^2-y^4)y}{1 - y^4} \chi_{y \leqslant b} \, dy = \int_0^1 y \, dy = \frac{1}{2}$$
Lebesgue and Riemann integrals coincide on bounded rectangles, so the demonstration could also be made in the context of Lebesgue integrals in an analogous fashion with the dominated convergence theorem. |
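Numerically, the truncated integrals $I_b$ indeed approach $1/2$ as $b \to 1^-$; here is a quick check with scipy, using the single-variable form of $I_b$ derived above:

```python
from scipy.integrate import quad

for b in [0.9, 0.99, 0.999, 0.9999]:
    # I_b = integral_0^b (b^2 - y^4) y / (1 - y^4) dy
    I_b, _ = quad(lambda y: (b**2 - y**4) * y / (1 - y**4), 0, b)
    print(b, I_b)  # values increase toward 0.5
```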
Does the absence of horizontal lines shows that there are no $n,m\in \mathbb{N}$ such that $n^2=2m^2$? | You consider a horizontal line such that $n^2 = 2m^2$.
Note that $n^2$ is even and therefore $n$ is even.
Suppose we start and move the horizontal line upwards. If we find a line - we are done.
We are now at a position that we have not found a line yet.
Suppose we have found a horizontal line that matches $n^2 = 2m^2$.
As $n$ is even, $m$ cannot be even, otherwise we would have found a horizontal line already for half the values, as $(2n)^2 = 2(2m)^2 \Rightarrow n^2 = 2 m^2$.
So $n$ is even and $m$ is odd. Thus we can write $n=2k$ and $m=2\ell+1$,
whence we obtain
$$
2 k^2 = 4\ell^2+2\ell+1,
$$
but the left side is even and the right side is odd, so we cannot find such number, i.e. we cannot find such a horizontal line by moving the line upwards.
Another proof:
Suppose we have a horizontal line such that $n^2 = 2m^2$.
Then due to the grid we can write $n=m+a$, where $a$ is a positive integer (since $n^2=2m^2$ forces $m<n<2m$ for $m>0$), and we obtain
$$
(m+a)^2 = 2m^2 \Rightarrow (m-a)^2 = 2a^2,
$$
however $m-a<m+a$, so there is a lower horizontal line.
Since for every horizontal line there exists a lower horizontal line, there is no lowest horizontal line, unless $m+a=m-a$, i.e. $a=0$, which forces $n=m$ and hence $n=m=0$. Consequently, the only horizontal line is the trivial one.
$c=\text{gcd}(a,b)$ means $\exists{x,y}\in{\mathbb{Z}}$ so that $a=cx$ and $b=cy$. Show $\text{gcd}(x,y)=1$ | $x=dx',y=dy'\implies a=(cd)x', b=(cd)y'$ and so $cd$ is a common divisor of $a$ and $b$.
This implies that $cd \mid c$ (or $cd \le c$), which can only happen if $d=1$. |
All associated primes appear in a series of submodules | You have to show two identities:
$$ \rm{Ass}(M) \subset \rm{Ass}(N) \cup \rm{Ass}(M/N)$$
and for any $\mathfrak p \in \rm{Spec}(\rm R)$, $$ \rm{Ass}(R/\mathfrak p) = \{\mathfrak p\}.$$ |
Need help with substitution for integration | From $
-\mathrm{y}^{\prime}=(\mathrm{rk}-\alpha) \mathrm{y}-\mathrm{r}
$, we get $\mathrm{y}^{\prime}=-(\mathrm{rk}-\alpha) \mathrm{y}+\mathrm{r}$.
Thus we obtain the following linear first - order differential equation
$$ \mathrm{y}^{\prime}+(\mathrm{rk}-\alpha) \mathrm{y}=\mathrm{r}.$$
We get that the integrating factor is
$$
\mu(\mathrm{t})=\mathrm{e}^{\int(\mathrm{rk}-\alpha) \mathrm{dt}}=\mathrm{e}^{(\mathrm{rk}-\alpha) \mathrm{t}}.
$$
Multiplying the differential equation by the integrating factor $\mu(\mathrm{t})$, we obtain
$$\mathrm{e}^{(\mathrm{rk}-\alpha) \mathrm{t}} \mathrm{y}^{\prime}+\mathrm{e}^{(\mathrm{rk}-\alpha) \mathrm{t}}(\mathrm{r} \mathrm{k}-\alpha) \mathrm{y}=\mathrm{re}^{(\mathrm{rk}-\alpha) \mathrm{t}},$$
which we can write as $$\left(e^{(\mathrm{rk}-\alpha) \mathrm{t}} \mathrm{y}(\mathrm{t})\right)^{\prime}=\mathrm{re}^{(\mathrm{rk}-\alpha) \mathrm{t}}.$$
Integrating both sides with respect to $t$,
$$
\mathrm{y}\, \mathrm{e}^{(\mathrm{rk}-\alpha) \mathrm{t}} = \int \mathrm{r}\, \mathrm{e}^{(\mathrm{rk}-\alpha) \mathrm{t}}\, \mathrm{dt}+C = \frac{\mathrm{r}}{\mathrm{rk}-\alpha}\,\mathrm{e}^{(\mathrm{rk}-\alpha) \mathrm{t}}+C,
$$
so, assuming $\mathrm{rk}\neq\alpha$,
$$
\mathrm{y}(\mathrm{t}) = \frac{\mathrm{r}}{\mathrm{rk}-\alpha}+C\,\mathrm{e}^{-(\mathrm{rk}-\alpha) \mathrm{t}}.
$$
What am I doing wrong with this second order ODE? | What you should realize is that you don't need a change of variables here; recognize instead that this is an Euler-Cauchy problem, which can be solved as follows:
Let $v=t^m$; then substituting into your DE you get:
$$ m(m-1)t^{m-2} = -kmt^{m-2} + kt^{m-2} \\
\implies m(m-1) +km -k =0 \\
\implies m=1,m=-k \\$$
therefore giving:
$$ v(t) = At^1 + Bt^{-k} \implies v(t) = At + Bt^{-k}$$
This holds for all $k \in \mathbb{R}$, but note that it holds for all $t \in (-\infty,0) \cup (0,+\infty)$: the point $t=0$ needs a case distinction, in that if $k>0$ then we need $t \neq 0$, but if $k<0$ then $t\in (-\infty,\infty)$.
Since you did say you want to learn about the reduction to first order, just note that when you do that you typically let $u$ be something like $u= t^m \dot{v}$, so that $\dot{u}$ involves the second derivative of $v$.
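For completeness, a symbolic check of the general solution; here I am assuming the original DE was the Euler-Cauchy equation $t^2\ddot v + kt\dot v - kv = 0$, the form consistent with the indicial computation above:

```python
import sympy as sp

t, k, A, B = sp.symbols('t k A B')
v = A*t + B*t**(-k)

# assumed Euler-Cauchy form: t^2 v'' + k t v' - k v = 0
lhs = t**2*sp.diff(v, t, 2) + k*t*sp.diff(v, t) - k*v
print(sp.simplify(lhs))  # 0
```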
Extension of Definition of Concavity | A function $f: I \to \Bbb R$ is strictly convex (aka strictly “concave up”) if
$$ \tag 1
f((1 - \lambda)a + \lambda b) < (1 - \lambda)f(a) + \lambda f(b)
$$
for all $a, b \in I$ with $a \ne b$ and all $\lambda \in (0, 1)$. For a differentiable function this is equivalent to
$$ \tag 2
f(y) - f(x) > (y - x)f'(x)
$$
for all $x, y \in I$ with $x \ne y$.
To see that $(2)$ implies $(1)$, set $c = (1 - \lambda)a + \lambda b$. Then
$$
\begin{align}
f(a) &> f(c) + (a-c) f'(c) = f(c) + \lambda (a-b) f'(c) \\
f(b) &> f(c) +(b-c) f'(c) = f(c) +(1-\lambda) (b-a) f'(c)
\end{align}
$$
and therefore
$$
(1-\lambda)f(a) + \lambda f(b) > (1-\lambda)f(c) + \lambda f(c) = f(c) \, .
$$ |
How to solve this limit with trigonometric functions?. Should I use the Squeeze Theorem? | It is quite simple using asymptotic analysis
$|\cos^3r+\sin r |$ is bounded, so $r^2(\cos^3r+\sin r)=O\bigl(r^2\bigr)$;
A polynomial is asymptotically equivalent to its leading term, so $(r^2+1)(r-3)\sim_\infty r^3$, and consequently
$$\frac{r^2(\cos^3r+\sin r)}{(r^2+1)(r-3)}=\frac1{r^3}O\bigl(r^2\bigr)=O\Bigl(\frac1r\Bigr),$$
and $\lim_{r\to\infty}\frac1r=0$. |
Showing that the ring of $n\times n$ matrices has exactly two 2 sided ideals, even though it is not a division ring | You can "move around" the single non-zero entry via row and column operation matrices; since ideals are closed under scaling and addition, you can generate everything.
Also, I'm not sure what you're confused by in your second question - there are non-zero elements of $\mathrm{Mat}(F)$ that are not invertible; a division ring is a ring in which every non-zero element is invertible; therefore $\mathrm{Mat}(F)$ is not a division ring. |
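To make the "moving the entry around" concrete: with matrix units $E_{pq}$ (a $1$ in position $(p,q)$, zeros elsewhere), one has $E_{ik} A E_{lj} = a_{kl} E_{ij}$, which an ideal must contain whenever it contains $A$. A small NumPy illustration of that identity:

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit: 1 in position (i, j), zeros elsewhere."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

A = np.arange(9.0).reshape(3, 3)
i, j, k, l = 0, 2, 1, 1
# E_ik A E_lj extracts entry (k, l) of A and places it at position (i, j)
print(np.allclose(E(i, k) @ A @ E(l, j), A[k, l] * E(i, j)))  # True
```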
Over a PID, $\text{rank}(F/N)=0 \Longleftrightarrow\text{rank}(F)=\text{rank}(N)$? | Hint: $\text{rank}(F/N)=0\Rightarrow F/N$ is a torsion module, not $\{ 0\}.$ But the conclusion is true. Use Structure Theorem for Finitely Generated Modules over a PID
Since $N$ is submodule of $F,$ there is a basis $y_1, y_2, \cdots , y_n$ of $F$ and $d_1, d_2, \cdots , d_r \in D$ with $d_1| d_2| \cdots |d_r$ such that $d_1y_1, d_2y_2, \cdots , d_r y_r$ is a basis of $N$ (rank $N = r$) and $F/N \cong D^{n-r} \times D/d_1D \times \cdots \times D/d_rD.$
In this case, rank$(F/N) = 0 \Rightarrow r = n.$ On the other hand, let $r = n.$ Then $F/N$ is a torsion module and hence is of rank zero.
Why does a real spectrum not have holes? | Your statement "a real spectrum is simply connected" is wrong. To give a concrete example, let $A = B(\mathbb C^2)$, where $\mathbb C^2$ carries the standard inner product. Let $$ a = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} $$
Then $\sigma(a) = \{0,1\}$, which is not connected. Note that - in general - on $B(\mathbb C^n)$, the spectrum consists of at most $n$ points and is almost never connected.
As Murphy writes on page 11, proof of Thm. 1.2.8, by $\sigma_A(b)$ "having no holes", he means that $\mathbb C \setminus \sigma_A(b)$ is connected. But this is obviously true for bounded subsets of $\mathbb R$. |
Where is the tangent line to this curve horizontal? | If you want a horizontal tangent, then you need to look for where ${dy\over dx}=0$.
Edit: As indicated, solve ${dy\over dx}=0\implies y(2.5-1/x)=0\implies y=0\text{ or }x={2\over 5}$. But $y=0$ is not in the domain of the original relation, $\ln(xy)=2.5x$, so the horizontal tangent occurs only when $x=2/5\implies y=5e/2$.
This is depicted graphically below: the dashed line is the horizontal tangent passing through $(2/5,5e/2)$ on the graph of $\ln(xy)=2.5x$.
Why is the probability of two elements $y_i$, $y_j$ being compared equal to $2/(j - i + 1)$ in random quicksort? | Elements $y_i$ and $y_j$ remain in the same sublist as long as you choose pivots that are on the same side of them (both less or both greater). There comes a one point in the algorithm at which you choose a pivot that's either between them or one of them. If it's between them, they will never be compared. If it's one of them, they will be compared. At this point, all $j-i+1$ of these elements are candidates for the pivot, and they all have the same probability of being chosen. That's two favourable outcomes out of $j-i+1$ equiprobable outcomes, for a probability of $2/(j-i+1)$. |
Is the set $\{5a+2:a\in\mathbb Z\}$ a subset of $\{4b+3:b\in\mathbb Z\}$? | Your argument is correct. Since $12 \in X$ and $12 \not \in Y$, $X$ is not a subset of $Y$. |
Prove that there exist only one analytic function that holds conditions | Let $g$ be a function that satisfies the same conditions. Then$$(\forall n\in\mathbb{N}):f\left(1+\frac in\right)=g\left(1+\frac in\right).$$Therefore, by the identity theorem and because the limit $\lim_{n\to\infty}1+\frac in$ exists, $g=f$. |
ordering of intervals | You have nothing that indicates the direction of intersection. For example, if $a=(0,4), b=(3,5)$ you have the matrix $\begin {pmatrix} 4&1\\1&2 \end {pmatrix}$, but $b$ could be $(-1,1)$ just as well and give the same matrix. If you have groups of intervals that do not intersect there is nothing to indicate the order at all. For example, if $a=(0,4), b=(3,5), c=(10,11)$ you have the matrix $\begin {pmatrix} 4&1&0\\1&2&0\\0&0&1 \end {pmatrix}$ and you can't position $c$ with respect to $a,b$. If you have 2D objects, even if they are all rectangles, I have no idea what your matrix should look like. |
Generalized Formula for the Probability of the Union of $n$ events occurring? | We have
\begin{align}
P(A_1\cup A_2\cup A_3)= & \;\;\;\;P(A_1)+P(A_2) +P(A_3)
\\
&
-\underbrace{P(A_1\cap A_{2})}_{1<2}
-\underbrace{P(A_1\cap A_{3})}_{1<3}
-\underbrace{P(A_2\cap A_{3})}_{2<3}
\\
&
+\underbrace{P(A_1\cap A_{2}\cap A_{3})}_{1<2<3}
\end{align}
and
\begin{align}
P(A_1\cup A_2\cup A_3\cup A_4)= & \;\;\;\;P(A_1)+P(A_2) +P(A_3)+P(A_4)
\\
&
-\underbrace{P(A_{1}\cap A_{2})}_{1<2}
-\underbrace{P(A_{1}\cap A_{3})}_{1<3}
-\underbrace{P(A_{1}\cap A_{4})}_{1<4}
\\
&
-\underbrace{P(A_{2}\cap A_{3})}_{2<3}
-\underbrace{P(A_{2}\cap A_{4})}_{2<4}
\\
&
-\underbrace{P(A_{3}\cap A_{4})}_{3<4}
\\
&
+\underbrace{P(A_{1}\cap A_{2}\cap A_{3})}_{1<2<3}
+\underbrace{P(A_{1}\cap A_{2}\cap A_{4})}_{1<2<4}
+\underbrace{P(A_{1}\cap A_{3}\cap A_{4})}_{1<3<4}
+\underbrace{P(A_{2}\cap A_{3}\cap A_{4})}_{2<3<4}
\\
&
-\underbrace{P(A_1\cap A_{2}\cap A_{3}\cap A_4)}_{1<2<3<4}.
\end{align}
In general,
\begin{align}
P\Big( \bigcup_{i=1}^{n}A_i\Big)
=&-(-1)^1\sum_{i_1=1}^{n}P(A_{i_1})
\\
&-(-1)^2\sum_{1\leq i_1<i_2\leq n}P(A_{i_1}\cap A_{i_2})
\\
&-(-1)^3\sum_{1\leq i_1<i_2<i_3\leq n}P(A_{i_1}\cap A_{i_2}\cap A_{i_3})
\\
&-(-1)^4\sum_{1\leq i_1<i_2<i_3<i_4\leq n}P(A_{i_1}\cap A_{i_2}\cap A_{i_3}\cap A_{i_4})
\\
&\quad\quad\quad\vdots
\\
&\ldots-(-1)^n \sum_{1\leq i_{1}<i_{2}<i_{3}<\ldots <i_{n}\leq n}P(A_{i_1}\cap A_{i_2}\cap A_{i_3}\cap \ldots \cap A_{i_n})
\end{align}
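One way to convince yourself of the general formula is to test it on a small finite probability space (the events below are an arbitrary toy example of mine):

```python
from itertools import combinations
from fractions import Fraction

omega = set(range(12))  # uniform probability on 12 points
events = [set(range(0, 6)), set(range(4, 9)), set(range(7, 12)), {0, 5, 10}]
P = lambda S: Fraction(len(S), len(omega))

union = P(set().union(*events))
incl_excl = sum(
    (-1) ** (k - 1)
    * sum(P(set.intersection(*c)) for c in combinations(events, k))
    for k in range(1, len(events) + 1)
)
print(union, incl_excl, union == incl_excl)  # equal
```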
Show that if $Y$ is path-connected, then the set $[I, Y]$ (the homotopy classes of maps from $I$ to $Y$) has a single element | You can do this.
Note that the identity map on $I$ is homotopic to the constant
map taking $I$ to $0$. Composing with $f$, $f$ is homotopic
to the constant map taking $I$ to $f(0)$.
If $Y$ is path-connected and $y_0$ is your favourite point there,
then there is a path from $f(0)$ to $y_0$. This induces a homotopy
from the constant map taking $I$ to $f(0)$ and
the constant map taking $I$ to $y_0$. Therefore $f$ is homotopic
to the constant map taking $I$ to $y_0$.
One could replace $I$ by any contractible space in the foregoing. |
How many operation are required to sort a array of numbers. | I don't believe so. This is equivalent to computing the Kendall tau coefficient. There are a couple algorithms for doing this (one of which involves using a Merge sort to compute how many steps Bubble sort would take) but they are $O(n\cdot log\ n)$. |
Find the equivalent value of radians | According to what you are given, an angle $\theta$ and an angle $\theta + 2 \pi k$ are defined as coterminal, with rays from the origin terminating at the same point on the circle. This means that an angle of $2 \pi$ represents a full turn around the circle, equivalent to 360 degrees.
Therefore to get from an angle to the opposite angle, you add a half turn, or $\tfrac12\cdot 2 \pi = \pi.$ The angle $\theta + \pi$ will give the opposite ray to $\theta.$
A more productive method is to draw a circle with center at the origin with radius $= r$. The radius $r$ can be any positive number measured outward from the origin. (Later in polar graphs, a negative radius is interpreted as measuring backwards, along the opposite ray.)
Then we define (no depth, just what they are), for any point $P(x,y)$ on the circle: $\sin \theta = y/r$, $\cos \theta = x/r$, $\tan \theta = y/x = $ slope of $OP$. And similarly define the angle $\theta = \text{arc}/r$, where $\text{arc}$ is the part of the circle traced by the end of the ray going from an angle of zero to the terminal ray of $\theta$.
Set up this way, radian measure is a ratio just like sine, cosine, and tangent. It is clearly defined and visible. It is a pure number with no units (technically one should not say "radians": a radian is not a unit).
Clearly the full circle or 360 degrees is $2 \pi r / r = 2 \pi,$ the half circle or 180 degrees is $\pi$ and so on. |
Lines in coordinate geometry from euclidean geometry | By one of the axioms of Euclidean geometry, two distinct points define a unique line. If you plug distinct points into your equation, you don't get a unique solution, but instead a one-dimensional space of solutions: an equation and all its multiples. You can consider this an equivalence class, as usually done for homogeneous coordinates.
For the converse, you'd have to verify that every such equation contains at least two distinct points. You could pick those for $x=0$ and for $x=1$, but you'd have to provide some alternate handling for $b=0$. In that case you could swap the roles of $x$ and $y$, and hence of $a$ and $b$ as well. But what if $a=b=0$? Then you don't have a line. For $c=0$, every point will satisfy the equation, while for $c\neq0$ no (finite) point satisfies the equation. So you must take care to exclude the case $a=b=0$. |
Proof verification that $t(n+1)=t(n) + \pi$ using mathematical induction | This might provide some additional clarity:
In the "inductive step", you may assume that the formula holds for $t\left(k\right)$, and then show that $t\left(k+1\right)=t\left(k\right)+\pi$:
$$
t\left(k+1\right)=\left(\left(k+1\right)-2\right)\pi
$$
from the definition of $t\left(n\right)$. This leads to:
$$
t\left(k+1\right)=\left(\left(k-2\right)+1\right)\pi
$$
$$
t\left(k+1\right)=\left(k-2\right)\pi+\pi
$$
$$
t\left(k+1\right)=t\left(k\right)+\pi
$$
I hope this helps. |
$f(w)=\frac{w}{4w^{2}-1}$, find max value of $|f(w)|$ in $|w|\geq1$ | You already showed that $|f(w)|=|\frac{1}{4w-\overline{w}}|$ To maximize this, we should minimize $|4w-\overline{w}|$. Geometrically, this is achieved if $w$ and $\overline{w}$ point in the same direction, i.e., $4w=\lambda\overline{w}$, with $\lambda>0$. We see that $w=1$ and the maximum is $\frac{1}{3}$. |
Are the path connectedness of $X_0$ and $X_1$ necessary in Seifert-van Kampen theorem? | I think I knew why $\tilde{X} \subset \tilde{X}_0 \cup \tilde{X}_1$.
Let $X_0$ and $X_1$ be subspaces of $X$ such that the interiors cover X. Suppose $X_{01}=X_0\cap X_1$ is path connected.
For any point $x\in \tilde{X}$, there is a path $a:I\to \tilde{X}$ such that $a(0)=\ast$ and $a(1)=x$. We can find $0=t_0<t_1<...<t_{n-1}<t_n=1$ such that for all $i=0,...,n-1$, $a([t_i,t_{i+1}])\subset$int$(X_v)$ for some $v=0,1$. (The Lebesgue number of the open covering is used here.) We may assume that $a([t_{n-1},t_n])\subset$ int$(X_0)$. There is a minimal integer $k$ such that $a([t_k,t_n])\subset$ int($X_0$). Hence $a(t_k)\in X_{01}$. There is a path $b:I\to X_{01}$ such that $b(0)=\ast$ and $b(1)=a(t_k)$ since $X_{01}$ is path connected. Then the product path of $b$ and $a|_{[t_k, t_n]}$ is a path in $X_0$ connecting $\ast$ and $x$. Hence $x\in \tilde{X}_0$. And the path connectedness of $X_{01}$ is crucial here.
Use three 11's and various math symbols to make an equation equal to 6 | $\large 6=\left( \sqrt{\sqrt{\frac{11+11!!!}{11}}}\right)\LARGE!$
where $n!!! = n(n-3)(n-6)\cdots$ is the triple factorial.
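One can confirm the arithmetic in a couple of lines (the helper function is mine):

```python
from math import factorial, isqrt

def triple_factorial(n):
    """n!!! = n * (n-3) * (n-6) * ... down to 1 or 2."""
    return n * triple_factorial(n - 3) if n > 2 else max(n, 1)

inner = (11 + triple_factorial(11)) // 11  # (11 + 880) / 11 = 81
print(factorial(isqrt(isqrt(inner))))      # sqrt(sqrt(81)) = 3, and 3! = 6
```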
Masters' in Applied or Computational Math | It is very difficult to quantify where a student "ends up" after completing a program. It is easy enough to determine what percentage of undergraduates move on to graduate school, but for post-graduate work, where do you draw the line? Immediately after graduation? 5 years? 10? Do you quantify the percentage that attempt to stay in academia? Does it have any relevance if the person gets a job after getting their degree, hates it, and quits 6 months later to open a surf shop in Cabo?
Regardless of those issues, there are some other things to consider. Many schools accept students into Master's programs only on "special case" bases, such as when an employer is paying for the degree, or the student is attending school part time prior to formally applying for a PhD program. This is not always true, but in general, it has become more difficult and expensive to fund PhD students; the cost of supporting a Master's student is in fact worse (most graduate students do not provide the university/their advisor with a return on investment in the first two years; the real gain only comes after the student is mostly done with coursework and can spend time actually doing things). Consequently, finding funding for Master's students is even more increasingly difficult.
This, combined with extremely high inflation in tuition and fees in the US, makes Master's programs quite expensive. Adding to these frustrations are the fact that some Master's programs are not optimally-designed for your objectives; as an anecdote, the university where I am studying puts more stringent course requirements on a student for the M.S. program than the M.A. program, meaning that if I wanted to do an M.S., I am essentially limited to an algebra program -- it is otherwise financially and temporally unfeasible to attempt an analysis program or topology program with the coursework requirements imposed on the M.S. degree. The M.A. degree, which requires no thesis, leaves more options open, and so I can actually specialize in the M.A. program more, despite my interest in completing a thesis in line with my job-oriented work.
So, in order to measure expense vs. quality-of-education, you truly have to consider many, many factors:
- Does the university have a program for you in-name-only (as in, they may offer the program on paper, but is it really something designed for what you want to use it for)?
- Are you able to qualify for loans, in-state tuition, scholarships, or other support?
- Do you intend on going to school full-time?
- You are concerned with industry placement. Which industry?
- How do you intend to leverage your Master's degree for career gain? (This question is particularly important to understand, since you are explicitly looking to enter industry.) Do you intend to do so through contacts, publications, experience, original research, or ...?
So in short, there are many good programs everywhere. Without knowing where you are from or what you can qualify for, it is impossible to give a proper valuation of a university that suits your needs. What you should really be looking at is first: what's the best university that suits your interests; second: how do you intend to pay for it AND support yourself during that time (after all, food is very important to mathematicians); third: what do you intend to do after completing the program. |