How to find a linear extension of a poset | Given your finite poset $P$, it is clear that the first element in your linear extension must be a minimal element (else you'll not have a linear extension). Moreover, if you picked a partial linear extension -- elements $a_1, \dots, a_k \in P$ with the property that $a_i < a_j$ in $P$ implies $i < j$ (for $1 \leq i, j \leq k$), then the next element $a_{k+1}$ in your linear extension must be a minimal element of $P \setminus \{a_1, \dots, a_k\}$.
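A minimal Python sketch of this greedy procedure (the `less_than` interface is a hypothetical choice of mine, not fixed by the question):

```python
def linear_extension(elements, less_than):
    """Build a linear extension greedily: repeatedly pick a minimal element
    of the not-yet-chosen part.  `less_than(a, b)` should return True iff a < b
    in the partial order (a hypothetical interface, not fixed by the question)."""
    remaining = set(elements)
    order = []
    while remaining:
        minimal = next(a for a in remaining
                       if not any(less_than(b, a) for b in remaining if b != a))
        order.append(minimal)
        remaining.remove(minimal)
    return order

# Example: divisibility order on {1, 2, 3, 4, 6, 12}
print(linear_extension([1, 2, 3, 4, 6, 12],
                       lambda a, b: a != b and b % a == 0))
# e.g. [1, 2, 3, 4, 6, 12] -- one of the valid linear extensions
```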
Conversely, if you follow that recursive procedure, you'll always generate a linear extension of $P$ (exercise!). Thus, to find linear extensions of your given posets, proceed one element at a time, always picking a minimal element of the poset of the remaining not-yet-chosen elements. |
detect when a line crosses a curve more than 2 times | Hint: If a line (such as the green one) passes through the curve then there is an "earlier" slope that was greater than it.
A line that doesn't pass through the curve twice must have a slope that is the maximum of all the slopes before it and the minimum of all the slopes to come. If any later slope is less than it, then the line passes through the curve twice.
So measure the slopes $m_1, m_2, m_3, m_4$. If you ever get a slope with $m_{i+1} \le m_i$, then the lines with slopes $m_{i+1}$ and every earlier slope $m_j$ satisfying $m_{i+1} \le m_j \le m_i$ pass through the curve twice. |
Proof of parallel lines | Possibly the last steps of a proof
This is no full proof, just some observations which might get you started, but which just as well might be leading in the completely wrong direction.
You could start from the end, i.e. with the last step of your proof, and work backwards.
$P$ is the midpoint of $FB$ and $Q$ is the midpoint of $FC$. Therefore the triangle $BCF$ is similar to $PQF$, and since they have the edges at $F$ in common, the edges opposite $F$ have to be parallel. So your next question is: why are these the midpoints?
You can observe that $NP$ is parallel to $AB=AE$, and $NQ$ is parallel to $CD=CE$. Since $N$ is the midpoint of $FE$, the $\triangle FNP$ is the result of dilating $\triangle FEB$ by a factor of $\tfrac12$ with center $F$. Likewise for $\triangle FNQ$ and $\triangle FEC$. So this explains why $P$ and $Q$ are midpoints as observed, but leaves the question as to why these lines are parallel.
Bits and pieces
I don't have the answer to that question yet. But I have a few other observations which I have not proven either but which might be useful as pieces of this puzzle.
$\measuredangle DBE = \measuredangle ECA = \measuredangle NMG = \measuredangle HMN$. The first equality is due to the cocircularity of $ABCD$, but the others are unexplained so far.
$\measuredangle MGN = \measuredangle NHM$, which implies that the circles $MGN$ and $MHN$ have equal radius, and the triangles formed by these three points each are congruent. |
Calculating "lucky 6" game value | Let $P_n$ denote the probability of scoring on the nth draw.
$P_n = {n \choose 6}/ {48 \choose 6} -\sum_{i=5}^{n-1} P_{i} $ for $n >5$. With $P_0=P_1= \cdots=P_5=0$.
I'm not familiar with the game, but I believe in order to "score" you must get all 6 digits correct.
Define $N_n$ as the number of ways to win on the $n$th draw; then:
$N_6 =1$
$N_7 =7-1=6$
$N_8 = {8 \choose 6}-6-1= 28-6-1=21$
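A short Python sketch (my own, just evaluating the recursion above) confirms these counts and that the probabilities sum to $1$:

```python
from math import comb

TOTAL, PICKS = 48, 6
C = comb(TOTAL, PICKS)
P = [0.0] * (TOTAL + 1)              # P[n]: probability of scoring exactly on the nth draw
for n in range(PICKS, TOTAL + 1):
    P[n] = comb(n, PICKS) / C - sum(P[PICKS:n])

# N_n from above: 1, 6, 21 ways to win on the 6th, 7th, 8th draw
for n, N in [(6, 1), (7, 6), (8, 21)]:
    assert abs(P[n] - N / C) < 1e-15
print(sum(P))   # ≈ 1 (if all 48 numbers end up drawn, you always score eventually)
```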
Someone has already written the odds for this here: https://wizardofodds.com/games/lucky-6-35-48/
Except the winnings listed are different. |
Sections of holomorphic vector bundles | For any neighborhood $U$ of $p\in X$ over which there are local trivializations for both bundles, in the corresponding coordinate charts the map $f$ has representation
$$U\times\mathbb{C}^n\to U\times\mathbb{C}^m$$
$$(u,v)\to(u,\tau(u)\cdot v)$$
where $\tau$ takes values in the space of $m\times n$ matrices.
Since $f$ is surjective and a bundle homomorphism it is surjective when restricted to each fiber. So for any $u_0$, $\tau(u_0)$ is an $m\times n$ matrix that represents a surjective linear map. Thus the matrix $\tau(u_0)$ is of maximal rank and must have an invertible $m\times m$ submatrix.
Assume $\tau(u_0)$ is of the form $(A\ B)$ with $A$ invertible. By continuity, that same submatrix is invertible for all $\tau(u)$ in a possibly smaller neighborhood $V$ of $u_0$. So the coordinate representation is now $(u,v)\to(u,(A(u)\ B(u))\cdot v)$ where $A$ and $B$ are matrices and $A$ is invertible. We can invert that submatrix $A$ for all $u\in V$ to get a holomorphic function $\phi:V\to GL(m,\mathbb{C})$ sending $u\to A\to A^{-1}$. Ie. $\phi(u)=A(u)^{-1}$.
Let the section $s$ have coordinate representation $(u,r(u)):U\to U\times\mathbb{C}^m$. Now consider the map
$$t:V\to V\times\mathbb{C}^m\to V\times\mathbb{C}^n$$
given by
$$\Bigg(Id\times\binom{\phi(u)}{0}\Bigg) \circ s$$
sending
$$u\to(u,r(u))\to\Bigg(u,\binom{\phi(u)\cdot r(u)}{0}\Bigg)$$
The map $t$ is a local section of $E$. Composing with $f$
$$(f\circ t)(u)=f\Bigg(u,\binom{\phi(u)\cdot r(u)}{0}\Bigg) = \Bigg(u,\tau(u)\cdot\binom{\phi(u)\cdot r(u)}{0}\Bigg)$$
$$= \Bigg(u,(A(u)\ B(u))\cdot\binom{A(u)^{-1}\cdot r(u)}{0}\Bigg)$$
$$=(u, A(u)\cdot A(u)^{-1}\cdot r(u) + B(u)\cdot 0)$$
$$=(u,r(u)) = s(u)$$ |
Probability distribution of halving a segment | A family of distributions often used to model the breaking of a linear piece is the beta family. For every $a\gt0$ and $b\gt0$, the beta $(a,b)$ distribution has density
$$
\mathrm B(a,b)^{-1}x^{a-1}(1-x)^{b-1}\mathbf 1_{0\lt x\lt1}.
$$
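A tiny scipy sketch (illustrative parameter values of my own choosing) checking this density against the library implementation:

```python
import numpy as np
from math import gamma
from scipy.stats import beta

def beta_pdf(x, a, b):
    """The density above: B(a,b)^{-1} x^{a-1} (1-x)^{b-1} on (0,1)."""
    B = gamma(a) * gamma(b) / gamma(a + b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / B

x = np.linspace(0.05, 0.95, 7)
for a, b in [(0.5, 0.5), (2, 2), (2, 5)]:        # a few illustrative shapes
    assert np.allclose(beta_pdf(x, a, b), beta.pdf(x, a, b))
# For a > 1 and b > 1 the density drops to zero at both endpoints,
# as mentioned below.
```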
Some justifications for this choice come from Bayesian considerations. Several shapes are possible. For shapes dropping to zero at $0$ and $1$, choose $a\gt1$ and $b\gt1$. |
Pointwise convergence almost surely of an Approximation sequence | Well, this problem is rather from analysis, since the only probabilistic fact used here is that for a.a. $\omega\in \Omega$ the function $s\mapsto X(s,\omega)$ is right-continuous in $s$. For simplicity of notation, let us take a right-continuous function $f:[0,t]\to \mathbb R$:
$$
\lim\limits_{h\downarrow 0}f(s+h) = f(s)\quad\text{ for all }s\in[0,t)
$$
and define
$$
\begin{cases}
t_{k,n} &= \frac{kt}{2^n},
\\
\\
f_{k,n} &= f\left(t_{k+1,n}\right),
\\
\\
I_{k,n}(s) &= 1_{(t_{k,n},t_{k+1,n}]}(s)
\end{cases}
$$ for $n\in\mathbb N $ and $k = 0,1,2,\dots,2^{n}-1$. Let us show that for any $s\in [0,t)$ we have
$$
\lim\limits_{n\to\infty}\sum\limits_{k=0}^{2^n-1}f_{k,n}I_{k,n}(s) = f(s).
$$
Fix $s$ and let $\hat k(n)$ be the unique index $k$ with $I_{k,n}(s) = 1$, i.e. the index of the interval of the partition where $s$ falls. Then:
$$
\sum\limits_{k=0}^{2^n-1}f_{k,n}I_{k,n}(s) = f_{\hat k(n),n} = f\left(t_{\hat k(n)+1,n}\right)
$$
and $s<t_{\hat k(n)+1,n}$ together with the fact that $t_{\hat k(n)+1,n}-s\leq 2^{-n}t$ imply that $t_{\hat k(n)+1,n}\downarrow s$ as $n\to\infty$. As a result, $\lim\limits_{n\to\infty}f\left(t_{\hat k(n)+1,n}\right) = f(s)$ by right-continuity. Finally, if $s =t$ then
$$
\lim\limits_{n\to\infty}\sum\limits_{k=0}^{2^n-1}f_{k,n}I_{k,n}(t) = f(t)
$$
and there is nothing to prove. |
Prove inequality: $\frac{P+2004a}{P-2a}\cdot\frac{P+2004b}{P-2b}\cdot\frac{P+2004c}{P-2c}\ge2007^3.$ | Let
$$b+c-a=s,\quad c+a-b=t,\quad a+b-c=u$$
Then we have
$$a=\frac{t+u}{2},\quad b=\frac{s+u}{2},\quad c=\frac{s+t}{2}$$
So, using AM-GM inequality and letting $d=1003$,
$$\begin{align}&\frac{P+2004a}{P-2a}\cdot\frac{P+2004b}{P-2b}\cdot\frac{P+2004c}{P-2c}\\\\&=\left(1+d\frac ts+d\frac us\right)\left(1+d\frac st+d\frac ut\right)\left(1+d\frac su+d\frac tu\right)\\\\&=2d^3+3d^2+1+(d^3+d^2+d)\left(\frac su+\frac st+\frac tu+\frac ts+\frac us+\frac ut\right)+d^2\left(\frac{s^2}{tu}+\frac{t^2}{su}+\frac{u^2}{st}\right)\\\\&\ge 2d^3+3d^2+1+6(d^3+d^2+d)\sqrt[6]{1}+3d^2\sqrt[3]{1}\\\\&=(2d+1)^3\\\\&=2007^3\end{align}$$ |
Distribution of sum of two exponential random variables | Hint
First of all, we have $z\ge 0$. Therefore $$\Pr\{X+Y\le z\}=\Pr\{X<z-Y\}=\int_0^\infty \Pr\{X<z-y\}f_Y(y)\,dy=\int_0^z (1-e^{-\lambda (z-y)})\,\lambda e^{-\lambda y}\,dy,$$ where the last equality holds since $$\Pr\{X<z-y\}=0 \text{ whenever } y>z.$$ |
Jordan-Chevalley vs Jordan normal decomposition | This answer turned out far longer than initially planned:
It explains the connection between the Jordan-Chevalley decomposition and the Jordan normal form, why Petersen only considers a single Jordan block, and what the Jordan-Chevalley decomposition is useful for.
The connection between the Jordan-Chevalley decomposition and the Jordan normal form:
As it has already been explained in the comments, the Jordan-Chevalley decomposition of $T$ can be derived from its Jordan canonical form:
Suppose that $\mathcal{B}$ is a basis of $V$ with respect to which the operator $T$ is given by a matrix $[T] \in \operatorname{M}_n(\mathbb{C})$ which is in Jordan normal form, say
$$
[T]
= \begin{pmatrix}
J_{n_1}(\lambda_1) & & \\
& \ddots & \\
& & J_{n_t}(\lambda_t)
\end{pmatrix}.
$$
(Here the $\lambda_i$ are not necessarily pairwise distinct.)
Then with respect to $\mathcal{B}$ the operators $S$ and $N$ are given by the matrices
$$
\begin{pmatrix}
\lambda_1 I_{n_1} & & \\
& \ddots & \\
& & \lambda_t I_{n_t}
\end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix}
J_{n_1}(0) & & \\
& \ddots & \\
& & J_{n_t}(0)
\end{pmatrix}.
$$
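As a concrete illustration (a sketch of mine, not part of the original argument), sympy can produce both decompositions at once: take the Jordan form, keep its diagonal as $S$, and let $N$ be the rest. The example matrix is hypothetical, built by conjugating a known Jordan matrix so that all arithmetic stays exact.

```python
import sympy as sp

# Hypothetical example: conjugate a known Jordan matrix by a unitriangular matrix.
J0 = sp.Matrix([[2, 1, 0, 0],
                [0, 2, 0, 0],
                [0, 0, 3, 0],
                [0, 0, 0, 3]])
Q = sp.Matrix([[1, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1],
               [0, 0, 0, 1]])
T = Q * J0 * Q.inv()                       # the operator, written in some other basis

P, J = T.jordan_form()                     # sympy's convention: T = P * J * P**-1
D = sp.diag(*[J[i, i] for i in range(J.rows)])
S = P * D * P.inv()                        # diagonalizable part
N = T - S                                  # nilpotent part

assert S * N == N * S                      # S and N commute
assert N ** 4 == sp.zeros(4)               # N is nilpotent
assert S.is_diagonalizable()
```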
One could also go the other way around, and derive the Jordan normal form of $T$ from its Jordan-Chevalley decomposition:
Every eigenspace $V_\lambda(S)$ is $N$-invariant, since $S$ and $N$ commute.
Since $N$ is nilpotent, the same goes for the restrictions $N|_{V_\lambda(S)}$.
Thus we can find for every $\lambda \in \mathbb{C}$ a basis $\mathcal{B}_\lambda$ for $V_\lambda(S)$ with respect to which the operator $N|_{V_\lambda(S)}$ is given by a matrix which is in Jordan normal form, say $[N|_{V_\lambda(S)}] = \bigoplus_{j=1}^{n(\lambda)} J_{n(\lambda,j)}(0)$ (here we use that finite-dimensional nilpotent operators always have a Jordan normal form, and that $0$ is the only eigenvalue of a nilpotent operator).
Since $S$ is diagonalizable we have that $V = V_{\lambda_1}(S) \oplus \dotsb \oplus V_{\lambda_r}(S)$ (with the $\lambda_i$ being pairwise distinct), so it follows that the union $\mathcal{B} := \bigcup_{i=1}^r \mathcal{B}_{\lambda_i}$ is a basis of $V$.
With respect to $\mathcal{B}$ the operator $N$ is given by the block diagonal matrix $[N] = \bigoplus_{i=1}^r \bigoplus_{j=1}^{n(\lambda_i)} J_{n(\lambda_i, j)}(0)$, which is again in Jordan normal form, and the operator $S$ is given by the diagonal matrix $[S] = \bigoplus_{i=1}^r \lambda_i I_{\dim V_{\lambda_i}(S)}$.
So with respect to $\mathcal{B}$ the operator $T = S + N$ is given by the matrix
\begin{align*}
[T]
= [S] + [N]
&= \left( \bigoplus_{i=1}^r \lambda_i I_{\dim V_{\lambda_i}(S)} \right)
+ \left( \bigoplus_{i=1}^r \bigoplus_{j=1}^{n(\lambda_i)} J_{n(\lambda_i, j)}(0) \right)
\\
&= \bigoplus_{i=1}^r \bigoplus_{j=1}^{n(\lambda_i)} J_{n(\lambda_i, j)}(\lambda_i),
\end{align*}
which is in Jordan normal form.
Altogether this shows that the Jordan-Chevalley decomposition and the Jordan normal form are equivalent, and how one can be derived from the other.
This observation actually holds for arbitrary fields:
An operator $T \colon V \to V$ on a finite-dimensional $k$-vector space $V$ has a Jordan-Chevalley decomposition (into commuting diagonalizable and nilpotent parts) if and only if it has a Jordan normal form.
Also note that the decomposition
$$
V = \ker(T - \lambda_1 I)^{m_1} \oplus \dotsb \oplus \ker(T - \lambda_k I)^{m_k},
$$
which is used to construct the Jordan-Chevalley decomposition is precisely the generalized eigenspace decomposition, which is used to show the existence of the Jordan normal form.
Regarding the number of Jordan blocks:
I am not very familiar with the Frobenius canonical form which Petersen uses here, but I think I (kind of) understand where the problem comes from, and how to solve it.
You are right that we may need more than one Jordan block if we look at the restriction of $T$ to $\ker (T - \lambda_i)^{m_i}$;
the matrix representation of $[T|_{\ker (T - \lambda_i)^{m_i}}]$ consists of all Jordan blocks for the eigenvalue $\lambda_i$.
This is why Petersen further decomposes $\ker (T - \lambda_i)^{m_i}$ into cyclic subspaces:
This means that we have reduced the problem to a situation where $T$ has only one eigenvalue.
Given the Frobenius canonical form the problem is then further reduced to [proving] the statement for companion matrices, where the minimal polynomial has only one root.
Let $C_p$ be a companion matrix with $p(t) = (t - \lambda)^n$.
(From Linear Algebra by Peter Petersen, page 150, proof of Theorem 25.)
So we further decompose
$$
\ker (T - \lambda_i)^{m_i}
= C_1 \oplus \dotsb \oplus C_{k(i)}
$$
where the $C_j$ are cyclic subspaces.
We fix some $j$ and set $C := C_j$ and $n := \dim C$.
Since $C$ is cyclic, we find that the characteristic polynomial and minimal polynomial of $T|_C$ coincide (I assume that this has already been shown before); we will refer to this polynomial as $p$.
We know that this minimal polynomial $p(t)$ of $T|_C$ divides the minimal polynomial of $T|_{\ker (T - \lambda_i)^{m_i}}$, which is given by $(t - \lambda_i)^{m_i}$.
So $p(t)$ is of the form $p(t) = (t - \lambda_i)^{m'_i}$ with $m'_i \leq m_i$.
Together with $\deg p = \dim C = n$ we find that $p(t) = (t - \lambda_i)^n$.
We now consider the matrix
$$
J
:= \begin{pmatrix}
\lambda & 1 & & \\
& \ddots & \ddots & \\
& & \ddots & 1 \\
& & & \lambda
\end{pmatrix}
\in \operatorname{M}_n(\mathbb{C}).
$$
From an earlier part of the chapter (namely part 4, The Minimal Polynomial, page 120, Proposition 17) we know that the minimal polynomial of $J$ is given by $(t - \lambda)^n = p(t)$.
Since the minimal polynomial of $J$ is of maximal degree it equals its characteristic polynomial;
from this it follows that $J$ is similar to the companion matrix of its characteristic polynomial $p(t)$, which we will refer to as $C_p$.
(Petersen seems to have shown this before, but gives no explicit reference in the proof.)
Since the minimal and characteristic polynomial of $T|_C$ coincide, we find that when we represent $T|_C$ with respect to some basis of $C$ by a matrix $A \in \operatorname{M}_n(\mathbb{C})$, then $A$ is also similar to the companion matrix $C_p$.
Hence we find that $A$ and $J$ are similar, so there exists a basis of $C$ with respect to which $T|_C$ is represented by $J$.
(There might be some redundancy in the above argumentation.)
Note that we have shown that the decomposition $\ker (T - \lambda_i)^{m_i} = C_1 \oplus \dotsb \oplus C_{k(i)}$ into cyclic subspaces corresponds precisely to the decomposition of $[T|_{\ker (T - \lambda_i)^{m_i}}]$ into Jordan blocks.
Since we restrict our attention to a single cyclic subspace, we also get only one Jordan block.
I have to admit that I find Petersen’s proof somewhat strange:
What he actually does is to construct the Jordan normal form by constructing a decomposition $V = \bigoplus_{i=1}^k \bigoplus_{j=1}^{k'(i)} C_{\lambda_i, j}$ into cyclic subspaces $C_{\lambda_i, j}$, and then showing that for each $C_{\lambda_i, j}$ there exists a basis with respect to which $T|_{C_{\lambda_i, j}}$ is given by a matrix $[T|_{C_{\lambda_i, j}}]$ which is a Jordan block.
Then he constructs the Jordan-Chevalley decomposition from the Jordan normal form — without ever mentioning the Jordan normal form.
I suppose that this doesn’t help understanding the difference between the two constructions.
Advantages of the Jordan-Chevalley decomposition:
One way to think about the Jordan-Chevalley decomposition is to regard it as a coordinate-free version of the Jordan normal form:
To talk about the Jordan normal form of $T$ we need to associate to $T$ a matrix $[T]$, which requires the use of a basis.
The Jordan-Chevalley decomposition on the other hand has no such requirements.
What has not been mentioned so far, but is very useful, is that $S$ and $N$ can be expressed as polynomials of $T$, i.e. there exist polynomials $p(t), q(t) \in \mathbb{C}[t]$ with $S = p(T)$ and $N = q(T)$.
As far as I know, this has no analogue in terms of the Jordan normal form.
The Jordan-Chevalley decomposition also has the advantage that it generalizes more easily to other settings:
One can generalize the notion of a diagonalizable operator to that of a semisimple operator (if we work over an algebraically closed field then both notions coincide).
Then one can also generalize the Jordan-Chevalley decomposition accordingly.
One can generalize the Jordan-Chevalley decomposition to finite-dimensional, semisimple complex Lie algebras:
If $\mathfrak{g}$ is such a Lie algebra, then every element $x \in \mathfrak{g}$ can be uniquely written as $x = s + n$ where $s, n \in \mathfrak{g}$ are semisimple, resp. nilpotent elements which commute.
One can generalize the additive Jordan-Chevalley decomposition, which we have encountered so far, to the multiplicative Jordan-Chevalley decomposition: Every $T \in \operatorname{GL}_n(\mathbb{C})$ can be uniquely decomposed as $T = S'U'$ with $S' \in \operatorname{GL}_n(\mathbb{C})$ being diagonalizable and $U' \in \operatorname{GL}_n(\mathbb{C})$ being unipotent.
(The additive and multiplicative Jordan-Chevalley decompositions $T = S + N$ and $T = S' U'$ are related by $S = S'$ and $U' = 1 + S^{-1} N$.) |
Monotonic transformation preserves extrema | First of all, make sure you understand what is meant: The locations of extrema for $f$ are also the locations of extrema for $g\circ f$. It is not the maximum and minimum values taken on that are the same, but the places where those values occur.
The key here is that $g$ is monotone. Either $g$ is increasing, or $g$ is decreasing. Let me discuss increasing first. That means for every $x \le y$ we have $g(x) \le g(y)$.
Now if $x_0$ is a local maximum of $f$, then for all $x$ sufficiently close to $x_0, f(x) \le f(x_0)$. But since $g$ is increasing, that means $g(f(x)) \le g(f(x_0))$. Since this holds for all $x$ sufficiently close to $x_0$, that means $x_0$ is a local maximum of $g\circ f$ as well.
A local minimum $x_1$ works the same way. For all $x$ sufficiently near $x_1, f(x_1) \le f(x)$, so $g(f(x_1)) \le g(f(x))$, so $x_1$ is also a local minimum of $g\circ f$.
By dropping the restriction of $x$ to neighborhoods of $x_0$ and $x_1$, the same argument shows that global maximums and minimums of $f$ are also global maximums and minimums of $g\circ f$.
When $g$ is decreasing, the only thing that changes is that $g\circ f$ will have minima where $f$ has maxima, and vice versa. That $g$ is decreasing means that if $x \le y$, then $g(x) \ge g(y)$. So if $f(x) \le f(x_0)$, then $g(f(x)) \ge g(f(x_0))$. So if $x_0$ is a (local) maximum of $f$, it is a local minimum of $g\circ f$. And similarly for minima. |
The Neumann series for the resolvent will converge down to the first singularity of $R(λ)$. | There is a unitary $U$ such that $A=U^*DU$ where $D$ is a diagonal matrix with eigenvalues $\rho = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge 0$. Then
$$
(\lambda I-A)^{-1}=U^*\begin{pmatrix}\frac{1}{\lambda-\lambda_1} & 0 & 0 & \cdots & 0 \\
0 & \frac{1}{\lambda-\lambda_2} & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \frac{1}{\lambda-\lambda_n}\end{pmatrix}U.
$$
Then, for $|\lambda| > \rho$, you can expand
\begin{align}
\frac{1}{\lambda-\lambda_k}& = \frac{1}{\lambda}\frac{1}{1-\frac{\lambda_k}{\lambda}}=\sum_{m=0}^{\infty}\frac{\lambda_k^m}{\lambda^{m+1}}.
\end{align}
So,
$$
(\lambda I-A)^{-1}=\sum_{m=0}^{\infty}\frac{1}{\lambda^{m+1}}
U^*\begin{pmatrix}\lambda_1^{m} & 0 & 0 & \cdots & 0 \\
0 & \lambda_2^m & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \lambda_n^m
\end{pmatrix}U, \;\;\; |\lambda| > \rho.
$$
This can be written as
$$
(\lambda I - A)^{-1}=\sum_{m=0}^{\infty}\frac{1}{\lambda^{m+1}}A^{m},
\;\;\; |\lambda| > \rho.
$$ |
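A quick numerical illustration of this expansion (a sketch with a random symmetric positive semidefinite matrix standing in for $A$; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
A = B @ B.T                                  # symmetric PSD, so A = U* D U as above
rho = np.max(np.linalg.eigvalsh(A))          # spectral radius

lam = 1.2 * rho                              # any |lambda| > rho works
resolvent = np.linalg.inv(lam * np.eye(5) - A)
partial, term = np.zeros((5, 5)), np.eye(5) / lam   # term = A^m / lam^(m+1)
for _ in range(200):
    partial, term = partial + term, term @ A / lam
print(np.max(np.abs(resolvent - partial)))   # ~ machine precision: the series converges
```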
Orthonormal bases for Hilbert spaces | For each $n \in \mathbb{N}$, consider the set
$$
\{x_{\alpha} : |(x_{\alpha}, y)| \geq 1/n\}
$$
Show that this set is finite, so the union over all the $n$'s is countable, which is the set you are talking about. |
Probability of car starting first time | Let me rename the events, because I keep thinking event A refers to Andre.
Let A be the event that Andre's car starts the first time and let C be the event that Claude's car starts the first time. We are asked for $$\Pr(A|A\cup C)=\frac{\Pr(A\cap(A\cup C))}{\Pr(A\cup C)}=\frac{\Pr(A)}{\Pr(A\cup C)}$$
Now, $\Pr(A\cup C)=\Pr(A)+\Pr(C)-\Pr(A\cap C) = .7+.8-.7\cdot.8=.94$, so the required probability is $$\frac{.7}{.94}=\frac{35}{47}$$ |
How to prove that block matrix may be ill conditioned | Let $M_{\gamma}=\left[\begin{array}{cc}{X^{\top} X+\frac{1}{\gamma} I} & {X^{\top} e} \\ {e^{\top} X} & {m}\end{array}\right]
$. $M_{\gamma}$ is an $(n+1)\times(n+1)$ symmetric matrix.
We can see that $[u^T,v]\,M_{\gamma}\,[u^T,v]^T\geq \frac{1}{\gamma}\|u\|^2$ where $u\in\mathbb{R}^n$, $v\in\mathbb{R}$. Then $M_{\gamma}\geq 0$; moreover we see that the smallest eigenvalue depends on $1/\gamma$.
EDIT. If we remove the block $(1/\gamma) I$, then the obtained matrix $M$ is $\operatorname{Gram}(C_1,\cdots,C_n,e)$, where $X=[C_1,\cdots,C_n]$. A generic $X$ (choose a random $X$) has full rank $m$ ($dim(\ker(X))=n-m$) and $rank(M)=m$. Clearly, if $u\in \ker X$, then $[u^T,0]^T$ is an eigenvector of $M_{\gamma}$ associated to the eigenvalue $1/\gamma$.
In general, there are exactly $n-m$ eigenvalues equal to $1/\gamma$. Note that, if $\gamma$ is large, then the smallest eigenvalue $\lambda_{\min}$ is often $1/\gamma$ but not always. Moreover, the largest eigenvalue $\lambda_{\max}$ varies little when $\gamma$ varies (say $\lambda_{\max}\approx \tau$).
If we consider the condition number of $M_{\gamma}$ associated to the spectral norm ($K(M_{\gamma})=\lambda_{\max}/\lambda_{\min}$), then $K(M_{\gamma})\geq \tau\gamma$. Finally, if $\gamma$ is large, then $M_{\gamma}$ is ill-conditioned. |
Completely regular spaces with common dense subspace. | No: Take $X=[0,1]$ with the usual topology and $Y=[0,1]\cap\Bbb Q$. |
A question about a proof of $\operatorname{ord}(\mathbb{Q}/\mathbb{Z})<\infty$ | Given a group $G$ and a normal subgroup $H$, remember that the group operation on $G/H$ is defined by
$$(g_1H)(g_2H)=(g_1g_2)H$$
Therefore the group operation on $\mathbb{Q}/\mathbb{Z}$ is
$$(r_1+\mathbb{Z})+(r_2+\mathbb{Z})=(r_1+r_2)+\mathbb{Z}$$
In particular, in your situation with $r=\frac{m}{n}$ with $n$ a positive integer,
$$n\cdot (r+\mathbb{Z})=\underbrace{(r+\mathbb{Z})+\cdots+(r+\mathbb{Z})}_{n\text{ times}}=(n\cdot r)+\mathbb{Z}=m+\mathbb{Z}=\mathbb{Z}$$
I'd say the $n\mathbb{Z}$ was just a typo. |
What conditions do I need if I want to prove a relation which is both symmetric and transitive is also reflexive? | The proof is wrong and you are wrong. After all, there are relations which are both symmetric and transitive which are not reflexive. Take the empty relation for instance (that is, we never have $a\mathrel Rb$).
However, if you add the hypothesis that for each $a\in A$, there is some $b\in A$ such that $a\mathrel Rb$, then the first proof that you described works. |
Math behind Maximum-Product in Rod-Cutting Problem | This problem is essentially IMO $1976/P4$.
The idea is that using maximum number of $3$'s is most efficient.
Since larger numbers can be broken down into smaller ones, keeping the sum same but increasing the product.
For example, if any part is $5$ it can be broken down into a $3$ and $2$ increasing the product contribution from $5$ to $6$. A $6$ into two $3$'s - product contribution : $6$ to $9$. A $7$ into $3,4$ or $3,2,2$ - product contribution : $7$ to $12$.
The reason is simple inequality :
$$xy \ge (x+y)\cdot 1 \iff (x-1)(y-1)\ge 1$$
That is, it is beneficial to convert a single part $(x+y)$ into two parts $x$ and $y$ as long as $x,y \ge 2$. Now compare the two ways of splitting a total of $6$: $6=2+2+2=3+3$, but $$3\times 3 > 2 \times 2 \times 2.$$
So we observe that using as many $3$'s as possible is optimal; the short sketch below checks this against a brute-force search.
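A minimal Python check (my own; `max_product` brute-forces all splits into at least two parts, `by_threes` applies the rule stated next):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def max_product(n):
    """Maximum product over all ways of writing n as a sum of at least two
    positive parts (the remainder after cutting off a part may stay unsplit)."""
    if n <= 1:
        return 1
    return max(i * max(max_product(n - i), n - i) for i in range(1, n))

def by_threes(n):
    """The rule stated next: all 3's, except one 4 (n ≡ 1) or one 2 (n ≡ 2) mod 3."""
    if n % 3 == 0:
        return 3 ** (n // 3)
    if n % 3 == 1:
        return 4 * 3 ** ((n - 4) // 3)
    return 2 * 3 ** ((n - 2) // 3)

assert all(max_product(n) == by_threes(n) for n in range(5, 60))
```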
For $n \equiv 0 \pmod 3$, all parts should be $3$'s. For $n \equiv 1 \pmod 3$ use all $3$'s except for one $4$, and for $n \equiv 2 \pmod 3$ use all $3$'s except for one $2$. |
Is this proof correct: $\liminf {\frac{\sin(\frac{\pi n}{8})n^n}{n!}} = 0$? | No, your proof is not correct.
Consider the subsequence $n_k:=12+16k$ for $k\geq 0$, then
$$\sin(\pi n_k/8)=\sin\left(\frac{3\pi}{2}+2k\pi\right)=-1.$$
Moreover for $n\geq 2$
$$\frac{n^n}{n!}=n\cdot\prod_{j=2}^n\frac{n}{j}\geq n.$$
(which means that the sequence $\frac{n^n}{n!}$ is unbounded!).
Hence
$$\liminf_{n\to+\infty} {\frac{\sin(\frac{\pi n}{8})n^n}{n!}}
\leq -\lim_{k\to+\infty} \frac{{n_k}^{n_k}}{n_k!}\leq -\lim_{k\to+\infty} n_k=-\infty.$$ |
A problem about tangents and areas. | Firstly we will express the curve in parametric form:
Let the point $P_1$ be $(t,t(t-2)^2)$, and let the parameter values at $P_2$ and $P_3$ be $p$ and $q$ respectively.
The equation of the tangent at $P_1$ is
$$y-(t^3-4t^2+4t)=(3t^2-8t+4)(x-t)$$
At $P_2$, $x=p$, so we substitute the coordinates of $P_2(p,p(p-2)^2)$ into this equation and simplify. We are aided by the fact that $(p-t)^2$ must be a factor, and this equation reduces to:
$$p=4-2t$$
From this we can deduce also that $$q=4-2p$$
The area $$S_1=\left|\int^p_t (x^3-4x^2-x(3t^2-8t)+2t^3-4t^2 )dx\right|$$
After some algebra and applying the condition $p=4-2t$, this simplifies to $$S_1=\frac{1}{12}(3t-4)^4$$
We can therefore immediately deduce that $$S_2=\frac{1}{12}(3p-4)^4=\frac{1}{12}(3t-4)^4\times 2^4$$
and hence the result $$\frac{S_1}{S_2}=\frac{1}{16}$$ |
joint probability function of 2 independent Bernoilli trials | Your work is correct, except at the very end.
The correct calculation is: $$\Pr[X_2 = x_2] = \sum_{x_1 = 1}^{x_2 - 1} p^2 (1-p)^{x_2-2} = (x_2 - 1)p^2(1-p)^{x_2-2}.$$ This is simply because the summand $p^2(1-p)^{x_2-2}$ is constant with respect to the index of summation $x_1$. I'm not sure how you got $(1+2+\cdots+(x_2-1))$, because there's no $x_1$ term in the summand. |
Existence of Riemann–Stieltjes integral | If $\int_{a}^{b}fdg$ exists, then, for every $\varepsilon > 0$, there exists $\delta > 0$ such that
$$
\left|\sum_{\mathcal{P}}f(x_k^{\star})\{g(x_k)-g(x_{k-1})\}-
\sum_{\mathcal{P'}}f(x_k'^{\star})\{g(x_k')-g(x_{k-1}')\}\right| < \varepsilon
$$
whenever $\|\mathcal{P}\| < \delta$ and $\|\mathcal{P'}\| < \delta$. If you use the same partition points for $\mathcal{P}$ and $\mathcal{P'}$ and allow one of the evaluation points to vary, one consequence of the above is that
$$
|f(x_k^{\star})-f(x_{k}'^{\star})||g(x_{k})-g(x_{k-1})| < \varepsilon \;\;\;\; (\dagger)
$$
whenever $x_{k-1} \le x_k^{\star},x_k'^{\star} \le x_{k} <x_{k-1}+\delta$. Suppose $g$ is discontinuous at some $x \in (a,b)$; $g$ has limits from the left and the right of $x$, which means that either $g(x+0)\ne g(x)$ or $g(x-0)\ne g(x)$, or both. There are two cases to consider:
Case 1: $g(x-0)=g(x+0)$. Then $g(x+0)\ne g(x)$ and $g(x-0)\ne g(x)$, which means there exists $\sigma > 0$ such that $|g(x_l)-g(x)| > \sigma$ and $|g(x)-g(x_r)| > \sigma$ for all $x-\delta_0 < x_l < x < x_r < x+\delta_0$, provided $\delta_0$ is chosen sufficiently small. Let $\varepsilon > 0$ be given, and choose $\delta$ so that $(\dagger)$ holds with $\varepsilon$ replaced by $\sigma\varepsilon$. Let $\delta_1$ be the minimum of $\delta_0$ and $\delta$.
Then
$$
|f(x')-f(x)|\sigma < \sigma\varepsilon,\;\;\; x-\delta_1 < x' < x \\
|f(x)-f(x'')|\sigma < \sigma\varepsilon,\;\;\; x < x'' < x+\delta_1.
$$
Hence, $f(x-0)=f(x)=f(x+0)$, making $f$ continuous at $x$.
Case 2: $g(x-0)\ne g(x+0)$. In this case there exists $\sigma > 0$ such that $|g(x_l)-g(x_r)| > \sigma$ for all $x-\delta_0 < x_l < x < x_r < x+\delta_0$, provided $\delta_0$ is chosen sufficiently small. Let $\varepsilon > 0$ be given. Choose $\delta > 0$ so that $(\dagger)$ holds with $\varepsilon$ replaced by $\sigma\varepsilon$. Let $\delta_1$ be the minimum of $\delta_0$ and $\delta$. Then
$$
|f(x')-f(x'')|\sigma < \sigma\varepsilon,\;\;\; x-\delta_1 < x' , x'' < x+\delta_1
$$
Set $x''=x$ in order to conclude that $\lim_{x'\rightarrow x}f(x')=f(x)$.
I'll leave the endpoint cases at $x=a$, $x=b$ to you. |
Convex functions, geometric interpretation !! | You are correct. If $f \colon \Omega \to \mathbf R$ is differentiable at $x_0$, then
$$ T(x) := f(x_0) + Df(x_0)(x-x_0) = \langle\nabla f(x_0), x-x_0\rangle + f(x_0) $$
is the tangent hyperplane to the graph of $f$ at $x_0$. Note that $f$ and $T$ are equal at $x_0$ and their first derivatives agree as well, as expected. We have, as in the $n=1$ case, that
$$ f(x_0) = T(x_0), \quad Df(x_0) = DT(x_0). $$ |
$\int_{-\infty}^\infty \frac{e^{pz}}{e^z-1}dz$ Cauchy principal value | Your values of $\theta$ should be $0 \le \theta \le \pi$ traveled backwards. Interchanging limits and integration, we thus have
$$\begin{align}
\lim_{\epsilon \to 0}\int_{\pi}^{0} \frac{e^{p \epsilon e^{i \theta}}}{e^{\epsilon e^{i \theta}}-1} i \epsilon e^{i \theta}d\theta &= \int_{\pi}^{0} \lim_{\epsilon \to 0}\frac{e^{p \epsilon e^{i \theta}}}{e^{\epsilon e^{i \theta}}-1} i \epsilon e^{i \theta}d\theta
\\&=i\int_{\pi}^{0}e^{i \theta}\lim_{\epsilon \to 0}\frac{e^{p \epsilon e^{i \theta}} \epsilon }{e^{\epsilon e^{i \theta}}-1}d\theta
\\&=i\int_{\pi}^{0}e^{i \theta}e^{-i\theta}d\theta
\\&=-i\pi
\end{align}
$$
Which yields the answer you would expect, canceling the imaginary part. We thus only get the real part. |
Solution verification for evaluating $\lim\limits_{x \to +\infty}\dfrac{\int_0^{x}|\sin t|{\rm d}t}{x}$ | As the function is periodic, the limit is also the average value over a single period (you sum arbitrarily many whole periods plus a single incomplete one, which is bounded), hence
$$\frac1\pi\int_0^\pi|\sin x|\,dx=\frac2\pi.$$ |
Factorising Iterated Integrals | The idea is that $h(y)$ is a constant (relative to $x$), so you can pull it through the $\mathrm{d}x$-integral: $$\int g(x) h(y) \, \mathrm{d}x = h(y) \Big(\int g(x) \, \mathrm{d}x\Big).$$ $\int g(x) \, \mathrm{d}x$ is a constant, so you can pull it through the $\mathrm{d}y$-integral: $$\int h(y) \Big(\int g(x) \, \mathrm{d}x\Big) \, \mathrm{d}y = \Big( \int g(x) \, \mathrm{d}x \Big) \int h(y) \, \mathrm{d}y.$$ This is rigorous as long as both $\int g(x) \, \mathrm{d}x$ and $\int h(y) \, \mathrm{d}y$ exist in whatever sense you need them to. |
Find integral solutions for $2x^2+y^2=2\times(1007)^2+1$ | Looking at the equation modulo 2 shows that $y$ is odd.
Then looking at it modulo 4 shows that $x$ is odd as well.
(The square of an odd number is always $1\bmod4$.)
A brute force search shows that the solutions $(x,y)$ are (335,1343), (593,1151), (965,407) and (1007,1).
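For completeness, a minimal brute-force sketch (Python) of that search:

```python
target = 2 * 1007 ** 2 + 1
solutions = []
for x in range(1, 1008, 2):            # x must be odd, by the mod-4 argument above
    y2 = target - 2 * x * x
    y = int(round(y2 ** 0.5))
    if y * y == y2 and y % 2 == 1:     # y must be odd, by the mod-2 argument above
        solutions.append((x, y))
print(solutions)   # [(335, 1343), (593, 1151), (965, 407), (1007, 1)]
```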
I fail to see enough structure in the answer to be able to explain it. |
Reference request : existence and uniqueness of solution to a certain class of SPDE | Section 3.7 of P.L. Chow's book on Stochastic Partial Differential Equations discusses this type of equations (only on a finite domain but I think that is what you want). |
Is this a valid strong induction proof? (2 base cases) | You practically did the same thing. Your solution is completely fine. |
Prove or disprove: $(\frac{1}{n})^n(1 - \frac{1}{n})^{n^2-n} \simeq \frac{1}{n!}$ as $n \rightarrow \infty$. | $$\frac{1}{n^n}\left(1 - \frac{1}{n}\right)^{n^2-n} \sim \frac{1}{n!}
\iff
\left(1 - \frac{1}{n}\right)^{n^2}\left(1 - \frac{1}{n}\right)^{-n}
\sim \frac{n^n}{n!}
\iff \left(1 - \frac{1}{n}\right)^{n^2}
\sim \frac {e^{n-1}}{\sqrt{2\pi n}}
$$
The LHS goes to $0$, since $n^2\log\left(1-\frac1n\right)=-n-\frac12+O\left(\frac1n\right)$ gives $\left(1-\frac1n\right)^{n^2}\sim e^{-n-1/2}$, while the RHS goes to infinity, so the claimed asymptotic equivalence fails. |
Proving $a^n + b$ is divisible by $c$ inductively | Show by induction $6^{2n} +4 $ divisible by $5.$
Check: $n=1$ is OK (so is $n=0$).
Hypothesis: $6^{2n} +4$ is divisible by $5$.
Step: $n+1;$
$6^{2n+2}+ 4 = 6^{2n} 6^2 +4 =$
$ (6^{2n} +4 -4)6^2 +4=$
$(6^{2n}+4)6^2 -4(6^2) +4=$
$ (6^{2n} +4)6^2 -4(6^2-1)=$
$(6^{2n}+4)6^2 - 4(35);$
The first term is divisible by $5$, hypothesis, the second term $-4(35)$ is divisible by $5 $,
hence the sum of the $2$ terms is divisible by $5$, I.e.
$6^{2(n+1)} +4$ is divisible by $5.$ |
Integral representation of the Riemann zeta function | Writing $\tan^{-1}\cot(\pi x)$ is in fact just a very fancy/silly way to write the sawtooth function.
Let $B_1(x)=s(x)=\{x\}-\frac{1}{2}$ denote the sawtooth function. Then note that $$\tan^{-1}\cot(\pi x)=\pi^2 s(x)^2.$$ Their identity is then $$\zeta(s)=\frac{s}{s-1}-\frac{1}{2}+\frac{s}{8}-\frac{s(s+1)}{2}\int_{1}^{\infty} \left(\{u\}-\frac{1}{2}\right)^2 u^{-s-2}du,$$ however, as you have noted, this isn't correct either. So what is the real identity lurking here, and how do we prove it? In fact, we have that $$\zeta(s)=\frac{s}{s-1}-\frac{1}{2}-\frac{s(s+1)}{2}\int_{1}^{\infty}\left(\{u\}^{2}-\{u\}\right)u^{-s-2}du $$ and this follows from integration by parts twice.
Writing $$\zeta(s)=\sum_{n=1}^{\infty}n^{-s}=\int_{1}^{\infty}x^{-s}d\lfloor x\rfloor,$$ and applying integration by parts, we have $$\zeta(s)=\frac{s}{s-1}-s\int_{1}^{\infty}\{u\}u^{-s-1}du.$$ Now, we could proceed from here, but instead we'll rewrite this in terms of the first periodic Bernoulli polynomial $s(x)$ to obtain $$\zeta(s)=\frac{s+1}{2\left(s-1\right)}-s\int_{1}^{\infty}B_{1}(\{u\})u^{-s-1}du.$$
Applying integration by parts again, $$\int_{1}^{\infty}B_{1}(\{u\})u^{-s-1}du=u^{-s-1}\int_{1}^{u}B_{1}(\{t\})dt\biggr|_{1}^{\infty}+(s+1)\int_{1}^{\infty}u^{-s-2}\left(\int_{1}^{u}B_{1}(\{t\})dt\right)du.$$ Now, as $$\int_{0}^{x}B_{1}(\{u\})du=\frac{B_{2}\left(\{x\}\right)-B_{2}(0)}{2}=\frac{1}{2}\left\{ x\right\} ^{2}-\frac{1}{2}\left\{ x\right\},$$ we arrive at the desired identity. We could continue in this way, and rewrite the identity as $$\zeta(s)=\frac{s}{s-1}-\frac{1}{2}+\frac{s}{12}-\frac{s(s+1)}{2}\int_{1}^{\infty}B_{2}(\{u\})u^{-s-2}du,$$ applying integration by parts again, using the fact that $$\int_0^x B_n(\{t\})dt = \frac{B_{n+1}(\{x\})-B_{n+1}(0)}{n+1},$$ and this yields a method of extending $\zeta(s)$ to the half plane $\text{Re}(s)>-n$ where the formula involves the first $n+1$ Bernoulli numbers. In fact, some combinatorial analysis can turn this into an alternate proof that $$\zeta(-n)=-\frac{B_{n+1}}{n+1},$$ and hence if combined with the functional equation for the zeta function we obtain an alternate proof for the value of $\zeta(2k)$. |
right definition of correct space of domain and range for a self-adjoint Operator | Showing an operator such as $P$ is symmetric on its domain $\mathcal{D}(P)$ is the first place to start. All selfadjoint operators are symmetric, but not necessarily the other way around. So, first show that
$$
(Pf,g)=(f,Pg),\;\;\; f,g \in \mathcal{D}(P).
$$
The inner-product should be the $L^{2}(\Omega)$ inner-product. The domain $\mathcal{D}(P)$ consists of all $u\in W^{2}(\Omega)$ which satisfy the boundary condition $\alpha \frac{\partial u}{\partial n}+\beta u=0$. And it is useful to show $P$ is semibounded if it is; that is, is there a constant $m$ such that
$$
(Pf,f) \ge m\|f\|^{2},\;\;\; f \in \mathcal{D}(P)?
$$
It's not important to establish that $\mathcal{D}(P)$ is dense in $W^{2}(\Omega)$--in fact, that won't be true. But it is important to show that $\mathcal{D}(P)$ is dense in $L^{2}(\Omega)$. If $P$ is not densely-defined in $L^{2}(\Omega)$, then an adjoint $P^{\star}$ isn't well-defined, and selfadjoint has no meaning. The underlying Hilbert space $H$ is $L^{2}(\Omega)$. The fact that $W^{2}(\Omega)$ is a Hilbert space is not directly relevant; you're just trying to find a domain $\mathcal{D}(P)$ on which $P$ will be a densely-defined selfadjoint linear operator. So, the setting is
$$
P : \mathcal{D}(P)\subset L^{2}(\Omega)\rightarrow L^{2}(\Omega),
$$
and you want to choose that domain so that $P=P^{\star}$, where the adjoint is taken in $L^{2}(\Omega)$.
It may seem hard to prove that the domain is dense. However, if you can show that $P$ is symmetric and that $P-\lambda I$, $P-\overline{\lambda}I$ are surjective for some $\lambda \in \mathbb{C}\setminus\mathbb{R}$, then the domain must be dense and $P$ must be selfadjoint. If $P$ is bounded below by some real $m$ as above, one need only verify that $P+\lambda I$ is a surjection for some real $\lambda > -m$. Either verification usually comes down to classical solvability for the PDE. |
Convert numbers from one base to another using repeated divisions. | Here are a couple of examples, the first with almost full detail, the second with less.
First I’ll convert $156_{16}$ to base ten using repeated division in base sixteen. I’ll use $A,B,C,D,E$, and $F$ for the base sixteen digits corresponding to base ten $10,11,12,13,14$, and $15$. I’ll also use a subscript $s=16$ to indicate that a number is to be interpreted in base sixteen.
Divide $156_s$ by $A_s$. Do this just as you would in base ten: $A_s$ won’t go into $1_s$, but it will go into $15_s$. In fact $15_s=1\cdot 16+5=21$, and $A_s=10$, so it goes twice. The first digit of your quotient is $2_s$, so you need to subtract $2_s\cdot A_s$ from $15_s$.
$2_s\cdot A_s=2\cdot 10=20=1\cdot16+4=14_s$, and $15_s-14_s=1_s$, so after you bring down the $6_s$, you’re left dividing $A_s$ into $16_s$.
Similarly, $16_s=1\cdot 16+6=22$, so $A_s$ goes in twice. After you repeat the previous step (with suitable minor modifications) you have your full quotient $22_s$ and overall remainder $2_s$, as shown below.
$$\begin{array}{}
&&&2&2\\
&&\text{_}&\text{_}&\text{_}\\
A&)&1&5&6\\
&&1&4\\
&&-&-&-\\
&&&1&6\\
&&&1&4\\
&&&-&-\\
&&&&\color{red}2
\end{array}$$
Now divide $22_s$ by $A_s$. $22_s=2\cdot 16+2=34$, so the integer part of the quotient is $3_s$:
$$\begin{array}{}
&&&3\\
&&\text{_}&\text{_}\\
A&)&2&2\\
&&1&E\\
&&-&-\\
&&&\color{red}4\\
\end{array}$$
Finally, divide this last quotient, $3_s$, by $A_s$:
$$\begin{array}{}
&&0\\
&&\text{_}\\
A&)&3\\
&&0\\
&&-\\
&&\color{red}3\\
\end{array}$$
Read off the red remainders in reverse order: $156_s=342$.
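If you want to automate this, here is a small Python sketch of the same procedure (the function names are mine); it does the long division directly on the digit list, so all the arithmetic happens in the source base, exactly as above:

```python
def divmod_digits(digits, base, divisor):
    """Long division of a number given as a digit list (most significant first,
    written in `base`) by a small integer `divisor`; returns (quotient_digits,
    remainder).  This is exactly the by-hand division above, one digit at a time."""
    quotient, remainder = [], 0
    for d in digits:
        remainder = remainder * base + d
        quotient.append(remainder // divisor)
        remainder %= divisor
    while len(quotient) > 1 and quotient[0] == 0:   # strip leading zeros
        quotient.pop(0)
    return quotient, remainder

def convert(digits, src_base, dst_base):
    """Repeatedly divide by dst_base and read the remainders in reverse order."""
    remainders = []
    while digits != [0]:
        digits, r = divmod_digits(digits, src_base, dst_base)
        remainders.append(r)
    return remainders[::-1] or [0]

print(convert([1, 5, 6], 16, 10))   # [3, 4, 2]:  156_s = 342
# The second worked example below can be checked with convert([2, 11, 10], 16, 3).
```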
Here’s one a little more complicated, the conversion of $2BA_s$ to base three.
$$\begin{array}{ccccc|cccc|cccc|cccc|ccc}
&&&E&8&&&4&D&&&1&9&&&&8&&&\color{red}2\\
&&\text{_}&\text{_}&\text{_}&&&\text{_}&\text{_}&&&\text{_}&\text{_}&&&\text{_}&\text{_}&&&\text{_}\\
3&)&2&B&A&3&)&E&8&3&)&4&D&3&)&1&9&3&)&8\\
&&2&A&&&&C&&&&3&&&&1&8&&&6\\
&&-&-&-&&&-&-&&&-&-&&&-&-&&&-\\
&&&1&A&&&2&8&&&1&D&&&&\color{red}1&&&\color{red}2\\
&&&1&8&&&2&7&&&1&B\\
&&&&-&&&-&-&&&-&-\\
&&&&\color{red}2&&&&\color{red}1&&&&\color{red}2
\end{array}$$
That last quotient of $2$ is less than the divisor, so the next division will have a $0$ quotient and remainder of $\color{red}2$, so I’ve skipped the step and colored the quotient instead. Reading the remainders in reverse order, we have $2BA_s=221212_t$ (where the subscript $t$ indicates base three).
Check: $$2BA_s=2\cdot 256+11\cdot16+10=698\;,$$ and $$221212_t=2\cdot 243+2\cdot81+1\cdot27+2\cdot9+1\cdot3+2=698\;.$$ |
How do I show that this mapping is Lipschitz continuous | Write $$q(u)=\sum_t \rho_t (u)$$
First note that $q$ is Lipschitz on $W$. Also, there exist constants $\delta_1,\delta_2,\delta_3>0$ (depending on $W$) such that $$\tag{1}\delta_1\leq q(u)\leq \delta_2,\ \forall\ u\in W$$
$$\tag{2}\rho_t(u)\leq\delta_3,\ \forall\ u\in W$$
Note
\begin{eqnarray}
|\varphi_t(u)-\varphi_t(v)| &=& \left|\frac{\rho_t(u)}{q(u)}-\frac{\rho_t(v)}{q(v)}\right| \nonumber \\
&=& \left|\frac{1}{q(u)q(v)}\left\{q(v)\left[\rho_t(u)-\rho_t(v)\right]-\rho_t(v)\left[q(u)-q(v)\right]\right\}\right| \nonumber \\
&\le& \frac{q(v)}{q(u)q(v)}\left|\rho_t(u)-\rho_t(v)\right|+\frac{\rho_t(v)}{q(u)q(v)}\left|q(u)-q(v)\right|
\end{eqnarray}
If $u,v\in W$ we conclude from $(1),(2)$ and the last inequality that $\varphi_t$ is Lipschitz in $W$. |
The fundamental group of preimage of covering map | The question with regard to pathconnected is answered in this stackexchange question, which refers to the book Topology and Groupoids (T&G). A paper was published on this as Groupoids and the Mayer-Vietoris sequence Journal of Pure and Applied Algebra 30 (1983) 109-129.
The first step is to translate the problem into one on groupoids, since a covering map of spaces $p: X \to Y$ induces a covering morphism $\pi_1(p): \pi_1(X) \to \pi_1(Y)$ of fundamental groupoids, which is a special case of a fibration of groupoids dealt with in the paper. Covering morphisms of groupoids carry all the algebraic information usually obtained from covering maps.
In that algebraic model it is easier to analyse the situation, and to get the type of exact sequence required.
The conclusion with regard to fundamental groups is that they can be seen as pullbacks of other fundamental groups. |
How to prove that a 4-regular graph with 8 vertices has a cycle of length 8 | Dirac's Theorem from 1952 says that if $n\ge3$ then a simple graph with $n$ vertices is Hamiltonian if every vertex has degree $n/2$ or greater. Here $n=8$ and every vertex has degree $4=n/2$, so (assuming the graph is simple) it is Hamiltonian, i.e. it contains a cycle of length $8$. See also Ore's Theorem from 1960. |
Are these functions Riemann integrable on $[0,1]$ using this theorem? | $\frac{1}{x} \sin(1/x)$ is not bounded: consider the sequence of points $x_n=\frac{1}{\pi/2+2n\pi}$ and notice $\sin(1/x_n)=1$.
Also, in all of these you should be a bit careful: you need to give some alternate definition of them at $x=0$ in order for the question of Riemann integrability to make sense. As your theorem shows, the choice of this value is of no consequence, but still, the concept of proper Riemann integrability is restricted to functions whose domain is a closed bounded interval. |
3 co-ordinates given. Equation of perpendicular line. | HINT
angular coefficient line AB: $$m_{AB}=\frac{-9-1}{-6+1}=2$$
angular coefficient line perpendicular to AB: $$m=-\frac{1}{m_{AB}}=-\frac12$$
line through point C perpendicular to AB: $$y-(-2)=m(x-(-2))\implies y+2=-\frac12 x-1\\\implies y+\frac12x+3=0\implies 2y+x+6=0$$ |
Mistake in reasoning regarding initial value problem | You forgot the constant of integration. You should obtain $\ln z = \ln y + k$ where $k$ is a constant. From the boundary conditions, $k = \ln 2$. |
Exercise in Chapter 1 of Switzer's Algebraic Topology book | Your proof is correct, but I suggest to use the notation introduced in your question. That is, use $T_\phi(X)$ instead of $\Phi$ (base point suppressed). Proceed as follows:
Switzer's construction works for any $\theta : A \to B$ to give $T_\theta(X) : [B,X] \to [A,X]$.
Clearly $T_\theta(X)$ only depends on the pointed homotopy class $[\theta]$ of $\theta$.
Obviously $T_{id}(X) = id$ and $T_{\chi \circ \theta}(X) = T_\theta(X) \circ T_\chi(X)$.
If $\phi$ is a pointed homotopy equivalence and $\psi$ is a homotopy inverse, then $T_\phi(X) \circ T_\psi(X) = T_{\psi \circ \phi}(X) = T_{id}(X) = id$, similarly $T_\psi(X) \circ T_\phi(X) = id$. This means that $T_\phi(X)$ and $T_\psi(X)$ are bijections which are inverse to each other.
For the converse, take $X = K$ and let $\psi : L \to K$ represent the unique homotopy class in $[L;K]$ such that $T_\phi(K)([\psi]) = [id] \in [K;K]$ (recall that $T_\phi(X) : [L;X] \to [K;X]$ is a bijection for all $X$). Then $[\psi] \circ [\phi] = [\psi \circ \phi] = T_\phi(K)([\psi]) = [id]$. By 2. we get $T_\phi(L) \circ T_\psi(L) = T_{\psi \circ \phi}(L) = id$, thus $T_\psi(L) = T_\phi(L)^{-1}$, i.e. $T_\psi(L) : [K;L] \to [L;L]$ is a bijection. Let $\phi' : K \to L$ represent the unique homotopy class in $[K;L]$ such that $T_\psi(L)([\phi']) = [id] \in [L;L]$. This means $[\phi'] \circ [\psi] = [id]$. Thus $[\psi]$ has a right and a left inverse. It is a general theorem of category theory that if a morphism $v : B \to A$ has a right inverse $u : A \to B$ (i.e. $v \circ u = id_A$) and a left inverse $u' : A \to B$ (i.e. $u' \circ v = id_B$), then $v$ is an isomorphism and $u = u' = v^{-1}$.
To see this, note that $u = id_B \circ u = (u' \circ v) \circ u = u' \circ (v \circ u) = u' \circ id_A = u'$. |
Computation of the probability density function for $(X,Y) = \sqrt{2 R} ( \cos(\theta), \sin(\theta))$ | As usual, fix a bounded measurable function $\varphi$ and consider
$$
(*)=\mathrm E(\varphi(X,Y))=\mathrm E(\varphi(\sqrt{2R}\cos\Theta,\sqrt{2R}\sin\Theta)),
$$
hence
$$
(*)=\iint[r\gt0,0\lt\theta\lt2\pi]\,\varphi(\sqrt{2r}\cos\theta,\sqrt{2r}\sin\theta)f_R(r)\mathrm dr\frac{\mathrm d\theta}{2\pi}.
$$
The change of variables $x=\sqrt{2r}\cos\theta$ and $y=\sqrt{2r}\sin\theta$, yields $2r=x^2+y^2$ and the Jacobian $\mathrm dr\mathrm d\theta=\mathrm dx\mathrm dy$, hence
$$
(*)=\frac1{2\pi}\iint \varphi(x,y)f_R\left(\frac{x^2+y^2}2\right)\mathrm dx\mathrm dy.
$$
This proves that
$$
\color{red}{f_{X,Y}(x,y)=\frac1{2\pi}f_R\left(\frac{x^2+y^2}2\right)}.
$$
One can recover the marginal densities $f_X=f_Y$ from here by the formula
$$
f_X(x)=f_Y(x)=\int f_{X,Y}(x,y)\mathrm dy=\frac1{2\pi}\int f_R\left(\frac{x^2+y^2}2\right)\mathrm dy,
$$
that is,
$$
f_X(x)=f_Y(x)=\frac1{\pi}\int_{y\gt0} f_R\left(\frac{x^2+y^2}2\right)\mathrm dy.
$$
The change of variable $2r=x^2+y^2$ yields $r\gt\frac12x^2$ and $\mathrm dr=\sqrt{2r-x^2}\mathrm dy$, hence
$$
\color{red}{f_X(x)=f_Y(x)=\frac1{\pi}\int_{x^2/2}^{+\infty}\frac{f_R(r)}{\sqrt{2r-x^2}}\mathrm dr}.
$$ |
Conflicting answer for determining cake today | If $n=2$ and $m=3$ then you count $\binom{2+3-1}2=6$ possible birthday sequences. I reckon that comes to the $3$ possibilities that there are $2$ birthdays on the same day plus the $3$ possibilities that there are $2$ birthdays on distinct days (correct me if I am wrong). However these possibilities are not equiprobable.
Observe that the probability that you and your team-mate both have your birthday on e.g. day $1$ is $(\frac13)^2$, but the probability that one of you has it on day $1$ and the other on day $2$ is $2\times(\frac13)^2$. |
Morse functions are dense in $\mathcal{C}^\infty(X,\mathbb{R})$. | By compactness of $X$, the function $\|x\|$ is bounded on $X$, and by the theorem, there exists
$$a\;\in\;\mathbb{B}^N\Big(0,\frac{\varepsilon}{\max_{x\in X}\|x\|}\Big),$$ for which $f_a$ is a Morse function. Then by Cauchy-Schwarz-Bunyakowsky,
$$\sup_{x\in X}|f(x)\!-\!f_a(x)|= \sup_{x\in X}|\langle x,a\rangle|\leq \sup_{x\in X}\|x\|\|a\|\leq \varepsilon.$$ |
Find $\underset{x\rightarrow 0}\lim{\frac{1-\cos{x}\sqrt{\cos{2x}}}{x\sin{x}}}$ | As $x\to 0$, $x\sin x\sim x^2$. Also
$$\cos x=1-\frac{x^2}2+O(x^4),$$
$$\cos2x=1-2x^2+O(x^4),$$
$$\sqrt{\cos2x}=1-x^2+O(x^4),$$
$$(\cos x)(\sqrt{\cos2x})=1-\frac{3x^2}2+O(x^4),$$
$$1-(\cos x)(\sqrt{\cos2x})\sim\frac{3x^2}2.$$
Therefore
$$\lim_{x\to0}\frac{1-(\cos x)(\sqrt{\cos2x})}{x\sin x}=\frac32.$$ |
What precisely is a vacuous truth? | No. The phrase "vacuously true" is used informally for statements of the form $\forall a \in X: P(a)$ that happen to be true because $X$ is empty, or even for statements of the form $\forall a \in X: Q(a) \to P(a)$ that happen to be true because no $a \in X$ satisfies $Q(a)$. In both cases, it is irrelevant what statement $P(a)$ is.
I guess you could turn this into a formal definition of a property of statement, but that's not standard. |
Variation on Chen-iterated integrals | Let
$$
I_n(b)=\int\limits_a^b dx_1
\int\limits_a^{x_1} dx_2
\int\limits_a^{x_2} dx_3 \cdots
\int\limits_a^{x_{n-1}} dx_n \;\;
f(x_1)\,f(x_2)\cdots f(x_n).
$$
We use mathematical induction to show
$$ I_n(b)=\frac{1}{n!}I_1^n(b).\tag{1} $$
Note that $f(x)\,dx=dI_1(x)$. For $n=2$,
\begin{eqnarray}
I_2(b)&=&\int\limits_a^b dx_1
\int\limits_a^{x_1} dx_2 f(x_1)\,f(x_2)\\
&=&\int\limits_a^b\bigg[\int\limits_a^{x_1}f(x_2)dx_2\bigg]f(x_1)dx_1\\
&=&\int\limits_a^bI_1(x_1)I_1'(x_1)dx_1\\
&=&\frac12I_1^2(b).
\end{eqnarray}
Suppose for $k=n-1$, (1) holds, namely
$$
I_{n-1}(b)=\int\limits_a^b dx_1
\int\limits_a^{x_1} dx_2
\int\limits_a^{x_2} dx_3 \cdots
\int\limits_a^{x_{n-2}} dx_{n-1} \;\;
f(x_1)\,f(x_2)\cdots f(x_{n-1})=\frac{1}{(n-1)!}I_1^{n-1}(b).
$$
Then
\begin{eqnarray}
I_n(b)&=&\int\limits_a^b dx_1
\int\limits_a^{x_1} dx_2
\int\limits_a^{x_2} dx_3 \cdots
\int\limits_a^{x_{n-1}} dx_n \;\;
f(x_1)\,f(x_2)\cdots f(x_n)\\
&=&\int\limits_a^bI_{n-1}(x_1)f(x_1)dx_1\\
&=&\frac{1}{(n-1)!}\int\limits_a^bI_1^{n-1}(x_1)f(x_1)dx_1\\
&=&\frac{1}{n!}I_1^{n}(b).
\end{eqnarray}
Namely, (1) holds for $k=n$, which completes the induction. |
Wrong result in Fourier transform | Observations on the attempt
Let's take a look at the statement of Jordan's Lemma:
Let $f$ be a continuous function on $\mathbb C$ and $\gamma_R$ an arc of a circle of radius $R$ centered at the origin, spanning angles $[\theta_1,\theta_2]$ with $0\le \theta_1<\theta_2\le \pi$. If
$$\lim_{R\to \infty} \max_{[\theta_1,\theta_2]}|f(Re^{i\theta})|=0 $$
Then for every $\omega >0$
$$\lim_{R\to \infty}\int_{\gamma_R}f(z)e^{i\omega z}\mathrm d z=0 \tag {$\dagger$}$$
The essential error in my previous computation regarded the fact that if $\xi>0$ we cannot apply Jordan's Lemma in the upper plane, because $(\dagger)$ holds only for positive $\omega$.
Correct resolution
First of all, we can use Jordan's lemma on the upper plane to compute integrals of the kind
$$\int_\mathbb R e^{ix}R(x)\mathrm d x \tag{$\star$}$$
where $R(x)$ is a rational function of $x$ such that $\lim_{|z|\to \infty}zR(z)=0$. In this case in particular, $R(x)=\frac{1}{1+x^2}$.
To bring the integral
$$\hat f(\xi)=\int_\mathbb Re^{-i\xi x}\frac{1}{1+x^2}\mathrm d x$$
to $(\star)$ we perform the change of variables $y=-\xi x$. By change of variables in a Lebesgue integral,
$$\begin{align}\int_\mathbb Re^{-i\xi x}\frac{1}{1+x^2}\mathrm d x
&=\int_\mathbb Re^{iy}\frac{1}{1+\frac{y^2}{\xi^2}}\frac{\mathrm dy}{|\xi|}
\\
&=|\xi|\int_\mathbb Re^{iy}\frac{1}{\xi^2+y^2}\mathrm dy
\end{align}$$
In this form, it is sufficient to apply the residue theorem on the upper plane. There are two cases:
$\xi>0$: in this case, the residue in the upper plane is located in $i\xi$. This yields
$$\begin{align}\hat f(\xi) &
=\xi\int_\mathbb Re^{iy}\frac{1}{(y+i\xi)(y-i\xi)}\mathrm dy
\\
&=\xi \cdot 2\pi i \text{Res}\left (e^{iy}\frac{1}{(y+i\xi)(y-i\xi)},y=i\xi\right)
\\
&=2\pi i \xi \frac{e^{-\xi}}{2i\xi}=\pi e^{-\xi}
\end{align}$$
$\xi<0$: in this case, the residue in the upper plane is located in $-i\xi$. This yields
$$\begin{align}\hat f(\xi) &
=-\xi\int_\mathbb Re^{iy}\frac{1}{(y+i\xi)(y-i\xi)}\mathrm dy
\\
&=-\xi \cdot 2\pi i \text{Res}\left (e^{iy}\frac{1}{(y+i\xi)(y-i\xi)},y=-i\xi\right)
\\
&=-2\pi i \xi \frac{e^{\xi}}{-2i\xi}=\pi e^{\xi}
\end{align}$$
The final result can be written as
$$\hat f(\xi)=\pi e^{-|\xi|}$$ |
Special values for 3D rotations matrices | For me the case $\theta=k\pi/2$ with $k=0,1,2,3$ is special because it generates rotation matrices whose entries lie exclusively in $\{-1,0,1\}$. Compositions of these matrices again have entries in $\{-1,0,1\}$ (there are $24$ such matrices), so they are good for simple exercises with rotation matrices and for checking different rotation formulas, for example Rodrigues' rotation formula; a short sketch verifying the count follows. |
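A small numpy sketch (mine) that verifies the count of $24$ by closing the quarter-turn rotations about the coordinate axes under composition:

```python
import numpy as np
from itertools import product

def rot(axis, k):
    """Rotation by k*pi/2 about a coordinate axis; all entries are -1, 0 or 1."""
    c, s = [(1, 0), (0, 1), (-1, 0), (0, -1)][k % 4]   # (cos, sin) of k*pi/2
    m = {'x': [[1, 0, 0], [0, c, -s], [0, s, c]],
         'y': [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         'z': [[c, -s, 0], [s, c, 0], [0, 0, 1]]}
    return np.array(m[axis])

# Close the set of quarter-turn rotations under composition.
group = {tuple(rot(a, k).flatten()) for a, k in product('xyz', range(4))}
while True:
    products = {tuple((np.array(g).reshape(3, 3) @ np.array(h).reshape(3, 3)).flatten())
                for g in group for h in group}
    if products <= group:
        break
    group |= products
print(len(group))   # 24 matrices, all with entries in {-1, 0, 1}
```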
NMAT Practice Exam Math Problem: $3^{n+2}+(3^{n+3}-3^{n+1}) =~?$ | If you are allowed to assume a typo of $+$ for the similar $\div$, you can get one of the choices:
$$3^{n+2}\div(3^{n+3}-3^{n+1}) = \frac38.$$
Since all the choices have denominators, you need to do a division or use negative exponents to get them. |
Find the determinant of the following general matrix | Here is a partial answer. For convenience, let us drop the subscript $r$. Write $C=\pmatrix{A&R\\ S&T}$, where $T$ is $r(s-1)\times r(s-1)$. Note that
$$
T^{-1} =
\left[\begin{array}{rrrrr}
A^{-1}&-A^{-1}BA^{-1}&A^{-1}BA^{-1}BA^{-1}&\cdots&(-1)^{s-2}(A^{-1}B)^{s-2}A^{-1}\\
&A^{-1}&-A^{-1}BA^{-1}&\ddots&\ddots\\
&&\ddots&\ddots&\ddots\\
&&A^{-1}&-A^{-1}BA^{-1}&A^{-1}BA^{-1}BA^{-1}\\
&&&A^{-1}&-A^{-1}BA^{-1}\\
&&&&A^{-1}
\end{array}\right].
$$
Therefore, using Schur complement, we get
\begin{align*}
\det(C) &= \det(T)\det(A-RT^{-1}S)\\
&= \det(A)^{s-1} \det(A - (-1)^{s-2}B(A^{-1}B)^{s-2}A^{-1}B)\\
&= \det(A)^s \det(I - (-1)^{s-2}A^{-1}B(A^{-1}B)^{s-2}A^{-1}B)\\
&= (-t^{2r})^s \det(I - (-A^{-1}B)^s).
\end{align*}
So, the question boils down to finding $\det(I - (-A^{-1}B)^s)$. |
Can $\sum_i{d(m_i,Pn_i)^2}$ be minimized over $P$ using linear least squares? | Your problem is much simpler...
Minimization can be performed independently for unknowns $(p_{1}, p_{2})$ and $(p_{3}, p_{4})$.
$$
I(p_{1}, p_{2}, p_{3}, p_{4}) = \sum \limits_{i = 1}^{N}((m_{i, 1} - p_{1} n_{i, 1} - p_{2} n_{i, 2})^{2} + (m_{i, 2} - p_{3} n_{i, 1} - p_{4} n_{i, 2})^{2}) = \\ = I_{1}(p_{1}, p_{2}) + I_{2}(p_{3}, p_{4})
$$
where
$$
I_{1}(p_{1}, p_{2}) = \sum \limits_{i = 1}^{N}(m_{i, 1} - p_{1} n_{i, 1} - p_{2} n_{i, 2})^{2}
$$
and
$$
I_{2}(p_{3}, p_{4}) = \sum \limits_{i = 1}^{N}(m_{i, 2} - p_{3} n_{i, 1} - p_{4} n_{i, 2})^{2}
$$
can be minimized independently with the usual least-squares method; a small numerical sketch follows. |
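A small numpy sketch (with synthetic data of my own) of the two independent fits:

```python
import numpy as np

# Hypothetical data: N points m_i (targets) and n_i (inputs), each in R^2.
rng = np.random.default_rng(0)
N = 100
n_pts = rng.normal(size=(N, 2))                 # rows are n_i = (n_{i,1}, n_{i,2})
P_true = np.array([[1.5, -0.3], [0.7, 2.0]])
m_pts = n_pts @ P_true.T + 0.01 * rng.normal(size=(N, 2))   # m_i ≈ P n_i

# Row 1 of P from the first coordinates, row 2 from the second coordinates.
row1, *_ = np.linalg.lstsq(n_pts, m_pts[:, 0], rcond=None)
row2, *_ = np.linalg.lstsq(n_pts, m_pts[:, 1], rcond=None)
P_est = np.vstack([row1, row2])
print(P_est)    # close to P_true
```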
Calculating the length of a helix | Hint:
The parametric equation of the helix can be written $$x=r\cos(t),y=r\sin(t),z=ht.$$
Using the differential arc length we obtain
$$S(t)=\int_0^t\sqrt{\dot x^2+\dot y^2+\dot z^2}\,dt=\int_0^t\sqrt{r^2+h^2}\,dt=\sqrt{r^2+h^2}\,t.$$
Remains to make the connection to your own descriptive parameters. |
A sharper bound than Chernoff for a sum of random variables | Looks like you can use CLT for Normal approximation to Binomial. $1000$ is a pretty large number so you can say that your probability is "like" a normal cdf with specified parameters.
In other words, denoting $S_n=\sum_{i=1}^n X_i$ we have by CLT that $\dfrac{S_n-\dfrac{n}{2}}{\sqrt{\dfrac{n}{4}}}\to \mathcal N(0,1)$ weakly, as $n\to\infty$.
Thus $P[S_n\leq t]\sim \Phi\left(\dfrac{t-\dfrac{n}{2}}{\sqrt{\dfrac{n}{4}}}\right)$ for large enough $n$. Your $n=1000,t=599$. Plug them in. |
The integral $\int_{U}u^{4}$ is well-defined for $u\in H_{0}^{2}(U)$ and $U\subset\mathbb{R}^{n}$ open, bounded, $n\leq3$. | If $n=3$, then the Sobolev conjugate of $2$ is equal to $6$, and since $\nabla^2 u\in L^2(U)$, we have $\nabla u\in L^6(U)$. Hence, Morrey's theorem implies that $u\in L^{\infty}(U)$.
If $n=2$, then $\nabla u$ belongs to every $L^p(U)$ space, for any $p\geq 1$. In particular $\nabla u\in L^3(U)$, and Morrey's theorem shows that $u\in L^{\infty}(U)$. |
Show continuously differentiable image of Jordan measurable set is Jordan measurable | $\DeclareMathOperator{\diam}{diam}
\DeclareMathOperator{\vol}{vol}
\DeclareMathOperator{\itr}{int}
\DeclareMathOperator{\cl}{cl}$For the first part you are on the right track.
Continuity alone is not enough to bound the volume of the image of each box $B_i$ under $G$ by the volume of $B_i$. Here you should use the fact that a $C^1$ function on a compact set is Lipschitz, that is, $G$ is Lipschitz, say with constant $L$. Then $\diam(G(B_i)) \le L \diam(B_i)$. Without loss of generality assume that these boxes are cubes. Therefore
$$ \vol(G(B_i)) \le \diam(G(B_i))^{\,n} \le \big( L \diam(B_i) \big)^{n} = \big( L\sqrt{n} \big)^{n} \vol(B_i), $$
since a cube of side $\ell$ has volume $\ell^n$ and diameter $\sqrt{n}\,\ell$, while $G(B_i)$ is contained in a cube of side $\diam(G(B_i))$. Summing over $i$, we are done.
Now for the second part you are also on the right track, you just need to ensure that $G(\partial T) \subset \partial G(T)$.
Note that since $G$ is continuous then $G(\cl{T}) \subset \cl{G(T)}$ and the preimage of open sets are open. So $G(\partial T) \subset G(\cl{T}) \subset \cl{G(T)}$. Moreover $G^{-1}(\itr G(T))$ is open and $G^{-1}(\itr G(T)) \subset G^{-1}(G(T))$, hence $G^{-1}(\itr G(T)) \subset \itr G^{-1}(G(T))$.
By the injectivity of $G$ it follows that $G^{-1}(G(T)) = T$. Then $G^{-1}(\itr G(T)) \subset \itr T$, whence $ \itr G(T) \subset G(\itr T)$.
This last inclusion implies that $G(\partial T) \bigcap \itr G(T) \subset G(\partial T) \bigcap G(\itr T) = \emptyset$, because $\partial T \bigcap \itr T = \emptyset$ and $G$ is injective.
Putting things together we have $G(\partial T)\subset \cl{G(T)} = \partial G(T) \bigcup \itr G(T)$ and $G(\partial T) \bigcap \itr G(T) = \emptyset$. Therefore $G(\partial T)\subset \partial G(T)$ as we wanted to show. |
Structure constants for and the adjoint representation and meaning in $sl(2,F)$ | The adjoint ${\rm ad}\,h$ is the linear map $x\mapsto[h,x]$, or simply $[h,-]$ for abbreviation. To determine the matrix of this linear map, we calculate its effect on the basis vectors $e,f,h$:
$$\color{Red}{[h,}e\color{Red}{]}=\color{Blue}{2}e+\color{Blue}{0}f+\color{Blue}{0}h$$
$$\color{Red}{[h,}f\color{Red}{]}=\color{Blue}{0}e\color{Blue}{-2}f+\color{Blue}{0}h \tag{$\circ$}$$
$$\color{Red}{[h,}h\color{Red}{]}=\color{Blue}{0}e+\color{Blue}{0}f+\color{Blue}{0}h $$
Therefore the matrix of this linear map is given by
$${\rm ad}\,h=\begin{pmatrix}2 & \,0 & 0 \\ 0 & -2 & 0 \\ 0 & \,0 & 0\end{pmatrix} $$
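As a quick numerical check (a sketch of mine, using the standard $2\times2$ matrices for $e,f,h$), one can recompute these brackets and read off the columns of ${\rm ad}\,h$:

```python
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])
bracket = lambda a, b: a @ b - b @ a

print(np.array_equal(bracket(h, e), 2 * e))    # True: [h,e] = 2e
print(np.array_equal(bracket(h, f), -2 * f))   # True: [h,f] = -2f
print(np.array_equal(bracket(h, h), 0 * h))    # True: [h,h] = 0
# Hence the columns of ad h in the basis (e, f, h) are (2,0,0), (0,-2,0), (0,0,0).
```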
how come we come up with $2$ instead of $[h,e]=2e$?
Constants and equations are different things. If we compute the lie bracket of two basis vectors, the result will be expressible as a linear combination of basis vectors. "Structure constants" refer to the coefficients of these basis vectors in such sums.
The coordinates of the vector $(1,0,0)\in\Bbb C^3$ are not $(1,0,0)$, the coordinates are the actual scalars $1,0,0$ in that order. Similarly the structure constants that appear when writing $[h,e]$ as a linear combination of $e,f,h$ are $2,0,0$ in that order.
what is the major points of these "structure constants"? Why are they useful - especially as I can seemingly just calculate the lie bracket to figure them out and don't need to figure out some summation.
Do you really want to compute $(\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix}) (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) - (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) (\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix})$ every single time you need $[h,e]$? That's a lot of superfluous matrix multiplication when all you'd have to do instead is memorize the simple fact that $[h,e]=2e$ to avoid all of that tedious work. What if the elements of the lie algebra are $8\times8$ matrices, would you rather compute every lie bracket over and over again by hand for the rest of your life, or simply compute them once and get it over with? What if the elements of the lie algebra aren't matrices at all, they're just abstract vectors - in what sense are you "calculating" the lie brackets then?
Not to mention, if you want to write the product of basis vectors as a linear combination of the basis vectors, then yes you do need to "figure out some summation" one way or another. One might as well figure out it once, write down the appropriate coefficients of the basis vectors (the structure constants), and then reuse that information later whenever it comes up again.
Suppose $R$ is a not necessarily associative or unital $S$-algebra (I am thinking in particular of $S$s that are commutative domains or fields like $S=\Bbb Z,\Bbb Q,\Bbb R,\Bbb C$ but these facts are more general) which has basis elements $r_1,\cdots,r_n$. That is, every element is uniquely expressible as a sum $s_1r_1+\cdots+s_nr_n$ for scalars $s_1,\cdots,s_n\in S$. Then for each $1\le i,j\le n$ we can write the products $r_ir_j$ as a $S$-linear combination of basis elements, say as $r_ir_j=\sum_{k=1}^n c_{ij}^k r_k$. These structure constants $c_{ij}^k$ completely determine the structure of the ring. All you would need to do is to write down the structure constants for another person to compute anything in the ring. They would know every element is a combination of basis elements, and they'd be able to compute the product of two sums of basis elements using distributivity and these structure constants.
For example, suppose I told you the structure constants of some nonassociative $\Bbb Z$-algebra I have, every element of which is $ax+by$ for some $a,b\in\Bbb Z$, are given by the following equations:
$$\begin{array}{ll} xx=x+y & xy=x \\ yx=y & yy=x-y \end{array}$$
If you want to see the constants more clearly, write it like this:
$$\begin{array}{ll} xx=\color{Blue}{1}x+\color{Blue}{1}y & xy=\color{Blue}{1}x+\color{Blue}{0}y \\ yx=\color{Blue}{0}x+\color{Blue}{1}y & yy=\color{Blue}{1}x\color{Blue}{-1}y \end{array}$$
Notice how this time the product of two basis elements can be a nontrivial combination of basis elements, instead of just a single term (at most) as in our nice $e,f,h$ situation. The act of rewriting the product of two basis elements as a linear combination of basis elements is the act of using structure constants. The constants and the equations are literally the same information. Indeed writing a linear map as a matrix is the same idea: a priori you have a bunch of equations, describing how applying the operator to a basis element yields something that can be written as a sum of basis elements, and then you collect all of the coefficients together in a matrix.
Can you use these equations to compute $(3x+2y)(2x-3y)$ as $ax+by$ for some $a,b\in\Bbb Z$? Sure you can; distribute and then use the equations. If I had omitted any of the four equations, would you still be able to calculate the product? Nope. So you see these structure constants are necessary and sufficient conditions to doing calculations in the ring. In particular this applies with lie algebras, since they are nonassociative nonunital algebras over a field (the lie bracket is the "multiplication" in the ring). If you wanted to store a lie algebra in a computer and then query it later to do lie bracket calculations, you would store the structure constants and then program the computer to distribute and then evaluate products of basis elements using structure constants. |
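For instance, carried out explicitly with the table above:
$$(3x+2y)(2x-3y)=6\,xx-9\,xy+4\,yx-6\,yy=6(x+y)-9x+4y-6(x-y)=-9x+16y.$$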
If $G$ is an abelian group of order $p^n$? | By the definition of invariants of $G$, it can be decomposed as a product of cyclic groups of orders $p^{n_1},p^{n_2},\ldots,p^{n_k}$. The first factor already contains an element of order $p^{n_1}$, so you are sure this order occurs in $G$. Now directly from the definition, the order of an element in a product of groups$~G_i$ is the least common multiple of the orders of its components in the factors$~G_i$. Here you can easily see that the orders of those components always divide $p^{n_1}$, so the maximal value for that least common multiple is $p^{n_1}$ (and all other orders that occur divide this number).
Find $\mathbb{Q}$-basis for the field $\mathbb{Q}(e^{\frac{2\pi i}{n}})$ (as a vector space) | It is true that $\zeta_p$ ($p$-prime) generates a normal basis for $\Bbb{Q}(\zeta_m,\zeta_p)/\Bbb{Q}(\zeta_m)$ when $p\nmid m$,
the proof is that a non-trivial $\Bbb{Q}(\zeta_m)$-linear relation between the $\zeta_p^1,\ldots,\zeta_p^{p-1}$ would mean $\zeta_p$ is a root of a polynomial $\in \Bbb{Q}(\zeta_m)[x]$ of degree $\le p-2$.
Thus $\prod_{p| n}\zeta_p $ and hence $\zeta_n$ generate a normal basis for $\Bbb{Q}(\zeta_n)/\Bbb{Q}$ whenever $n$ is square-free.
If $n$ is not square-free then it fails because $\sum_{a\le n,\gcd(a,n)=1} \zeta_n^a = \mu(n)=0$ |
Composition series of the ring R of 3x3 real matrices as an R-module and as a Real-Module | as a vector space over $\mathbb{R}$, $M_3$ is just $\mathbb{R}^9$ and the composition series is $0,\mathbb{R},\mathbb{R}^2,...,\mathbb{R}^9$.
as a (left) module over itself $M_3$ is the direct sum of the column spaces, and a composition series is
$$
0,
\left(
\begin{array}{ccc}
0&0&*\\
0&0&*\\
0&0&*\\
\end{array}
\right),
\left(
\begin{array}{ccc}
0&*&*\\
0&*&*\\
0&*&*\\
\end{array}
\right),
M_3
$$ |
Calculate the radical | You are on the right path. $\langle y,w\rangle\cap\langle y,z\rangle=\langle y,wz\rangle$ and $\langle x,w\rangle\cap\langle x,z\rangle=\langle x,wz\rangle$, so the answer you are looking for is $\langle y,wz\rangle\cap\langle x, wz\rangle=\langle yx,wz\rangle$. |
Show that the Lebesgue Stieltjes measure corresponding to $\alpha(x) = \mu((0,x])$ is $\mu$. | I don't see why you think you need to show they are the same for all Lebesgue measurable sets. Consider the example $\alpha$ being the Cantor function https://en.wikipedia.org/wiki/Cantor_function. Then the Cantor set has $\mu$ measure equal to $1$, but Lebesgue measure zero. Then you should be able to find a subset $E$ of the Cantor set which is non-measurable with respect to $\mu$, but clearly $E$ is Lebesgue measurable. |
Evaluate$\int_{1}^{\infty}$ $\frac{1-(x-[x])}{x^{2-\sigma}}$dx where [x] denotes greatest integer function and $0<\sigma<1$ | Trying to continue along your way.
You wrote
$$\int_1^\infty(1-x+\lfloor x\rfloor )\, x^{\sigma -2}\,dx=\sum_{n=1}^\infty \int_n^{n+1}(n+1-x)\,x^{\sigma -2}\,dx=\sum_{n=1}^\infty I_n$$
$$I_n=\int_n^{n+1}(n+1-x)\,x^{\sigma -2}\,dx=\frac{ (n+\sigma )n^{\sigma }-n (n+1)^{\sigma }}{n (1-\sigma) \sigma }=\frac{n^{\sigma -1} (n+\sigma )-(n+1)^{\sigma } } {(1-\sigma)\, \sigma }$$ and here the problem starts to be difficult if you are not familiar with the zeta function.
Hoping that you are, the result should be
$$\frac{1+\sigma\, \zeta (1-\sigma ) } {(1-\sigma) \,\sigma }$$ |
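For completeness, one route to that closed form (a sketch, using the classical representation $\zeta(s)=\lim_{N\to\infty}\bigl(\sum_{n=1}^{N}n^{-s}-\frac{N^{1-s}}{1-s}\bigr)$, valid for $0<s<1$): writing $n^{\sigma-1}(n+\sigma)-(n+1)^{\sigma}=\bigl(n^{\sigma}-(n+1)^{\sigma}\bigr)+\sigma n^{\sigma-1}$ and summing,
$$\sum_{n=1}^{N}\Bigl(n^{\sigma-1}(n+\sigma)-(n+1)^{\sigma}\Bigr)=1-(N+1)^{\sigma}+\sigma\sum_{n=1}^{N}n^{\sigma-1}\;\xrightarrow[N\to\infty]{}\;1+\sigma\,\zeta(1-\sigma),$$
since $\sum_{n=1}^{N}n^{\sigma-1}=\frac{N^{\sigma}}{\sigma}+\zeta(1-\sigma)+o(1)$ and $(N+1)^{\sigma}-N^{\sigma}\to0$ for $0<\sigma<1$. Dividing by $(1-\sigma)\,\sigma$ gives the stated value.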
proving a random variable is a martingale | $4S_n-n\leq 4n-n=3n$, and $4S_n-n\geq -4n-n=-5n$. Therefore $|N_n|\leq 5n$, so $\mathbb{E}[|N_n|]\leq 5n<\infty$.
You're right that the sequence of random variables $\{N_n\}$ is not uniformly bounded, but each individual random variable $N_n$ is bounded. |
Practice Problems on the Elimination of Quantifiers | Model theory texts typically have a section devoted to quantifier elimination.
For example, David Marker's book (Google Books link) shows quantifier elimination for the theories of dense linear orders, divisible abelian groups, Presburger arithmetic, algebraically closed fields, and real closed fields all in the section pointed to there. The exercises for that particular chapter have some simple theories to work out quantifier elimination for yourself as well.
Other parts of the book have scattered results on some other theories, and some general theorems that can help with proving quantifier elimination. |
On the notion of tensor in Riemannian Geometry | Such fields do not exist on general neighborhoods of a point $p$ (since for example the whole manifold is a neighborhood of $p$). But for sufficiently small neighborhoods they do exist as the example of the coordinate vector fields of a chart containing $p$ shows. |
Showing a set G is closed under addition | Closure:
If $b,c\in G$ then $b=am$ and $ c=an$, so $b+c=a(m+n)\in G$.
(Follows from closure of $\mathbb Z$ under addition.)
Identity:
$0=a0\in a\mathbb Z$
Inverse:
If $b=ma$ then $-b=(-m)a\in G$.
Associative: follows from associativity of addition in $\mathbb Z$. |
Showing that $\mathcal{O}(X_f)\cong\mathcal{O}(X)_f$ without schemes language | Here's the proof from Hartshorne. I believe what you want is part (b). It's pretty well written and followable, I was able to follow it without knowing anything about schemes. If you want to discuss any particular points of it let me know. |
Prove $e^x$ limit definition from limit definition of $e$. | If you accept that exponentiation is continuous, then certainly
$$\left(\lim_{n\to\infty}\left(1+\frac1n\right)^n\right)^x = \lim_{n\to\infty}\left(1+\frac1n\right)^{nx}$$
But if $u=nx$, then by substitution we have
$$
\lim_{n\to\infty}\left(1+\frac1n\right)^{nx}=\lim_{u\to\infty}\left(1+\frac{x}{u}\right)^u
$$ |
Chinese Remainder Theorem with additional conditions | It is not necessarily true that such an $x$ exists. Take for example $s=1$ and $p_1=5$, $a_1=2$, and let $p_2,p_3,\dots$ be all of the other primes. The only integers $x$ that are not divisible by any of the primes $p_2,p_3,\dots$ are those of the form $\pm 5^k$, and none of those are congruent to $2$ (mod $5$).
(I see Yuval Filmus had the same idea.) |
Field norm well-behaved with respect to minimal polynomial | Let $S,T$ be the (finite) sets of embeddings of $\mathbb{Q}(\beta)$ and $\mathbb{Q}(\alpha)$ in $\mathbb{C}$. Let $n=|S|$, and, for each $0 \leq k \leq n$, $(-1)^kb_k$ be the $k$-th elementary symmetric polynomial in the elements of $S$.
Then $min_{\beta/\mathbb{Q}}(\alpha)=\prod_{\sigma \in S}{(\alpha-\sigma(\beta))}=\sum_{k=0}^n{\alpha^kb_{n-k}}$.
So the norm $\mathbb{Q}(\alpha)/\mathbb{Q}$ of the LHS is $\prod_{\tau \in T}\sum_{k=0}^n \tau(\alpha)^k\, b_{n-k}=\prod_{\tau \in T}\prod_{\sigma \in S}\bigl(\tau(\alpha)-\sigma(\beta)\bigr)$.
So actually, the quotient $LHS/RHS$ for you is equal to $(-1)^{|S||T|}$ and the equality holds iff $\alpha$ and $\beta$ are conjugates or $|S|$ or $|T|$ is even. Otherwise, there is a sign. |
Method of moments estimator of $θ$ using a random sample from $X \sim U(0,θ)$ | The method of moments estimator is $\hat \theta_n = 2\bar X_n,$ and it is unbiased.
It has a finite variance (which decreases with increasing $n$)
and so it is also consistent; that is, it converges in probability to $\theta.$
I have not checked your proof of consistency, which seems inelegant and incorrect (for one thing, the $\epsilon$ disappears in the second line).
You should be able to use a straightforward application of Chebyshev's inequality to show that
$\lim_{n \rightarrow \infty}P(|\hat \theta_n - \theta| <\epsilon) = 1.$
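One way to carry that out: since $\hat\theta_n$ is unbiased with $\operatorname{Var}(\hat\theta_n)=\operatorname{Var}(2\bar X_n)=\theta^2/(3n)$, Chebyshev's inequality gives
$$P\bigl(|\hat \theta_n - \theta| \ge \epsilon\bigr) \le \frac{\operatorname{Var}(\hat\theta_n)}{\epsilon^2} = \frac{\theta^2}{3n\,\epsilon^2} \longrightarrow 0 \quad\text{as } n\to\infty.$$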
However, $\hat \theta_n$ does not have the minimum variance among unbiased
estimators. The maximum likelihood estimator is the maximum of the $n$
values $X_i$ (often denoted $X_{(n)}).$ The estimator $T = cX_{(n)},$ where $c$ is a constant depending on $n,$ is unbiased and has minimum variance among
unbiased estimators (UMVUE).
Both estimators are illustrated below for $n = 10$ and $\theta = 5$ by
simulations in R statistical software. With a 100,000 iterations
means and variances should be accurate to about two places. They
are not difficult to find analytically.
m = 10^5; n = 10; th = 5
x = runif(m*n, 0, th)
DTA = matrix(x, nrow=m) # m x n matrix, each row a sample of 10
a = rowMeans(DTA) # vector of m sample means
w = apply(DTA, 1, max) # vector of m maximums
MM = 2*a; UMVUE = ((n+1)/n)*w
mean(MM); var(MM)
## 5.003658 # consistent with unbiasedness of MM
## 0.8341769 # relatively large variance
mean(UMVUE); var(UMVUE)
## 5.002337 # consistent with unbiasedness of UMVUE
## 0.207824 # relatively small variance
The histograms below illustrate the larger variance of the method of
moments estimator. |
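For reference, the exact variances are also easy to obtain (a quick check using $\operatorname{Var}(X_i)=\theta^2/12$ and the density $nx^{n-1}/\theta^n$ of $X_{(n)}$ on $(0,\theta)$):
$$\operatorname{Var}(2\bar X_n)=\frac{\theta^2}{3n}\approx 0.833,\qquad \operatorname{Var}\!\left(\frac{n+1}{n}X_{(n)}\right)=\frac{(n+1)^2}{n^2}\cdot\frac{n\,\theta^2}{(n+1)^2(n+2)}=\frac{\theta^2}{n(n+2)}\approx 0.208$$
for $n=10$ and $\theta=5$, in line with the simulated values above.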
Solving the following trigonometric identities | Friend, you are given $a=\frac{\pi}{2}-(b+c)$,
so plug in the value $b+c=\frac{\pi}{2}-a$.
This gives $\sin(a)\sin(a/2)=\sin(a)$,
which forces $\sin(a)=0$ or $\sin(a/2)=1$.
The rest of the work remains on your part!
How can be this quadratic diagonalizable? | I'm not sure if I understand your question, because, as far as I know, every quadratic form is "diagonalizable"; that is, can be transformed into a sum of squares with a linear change of variables.
I think you've mixed two different things: "diagonalization" of a (general) endomorphism, and "diagonalization" of a quadratic form. The first is not always possible (though it is in this case, see the end of my answer), the second always is.
Moreover, you have got an easy algorithm to do this: just perform elementary transformations on your matrix $A$, as if you were looking for a row echelon form, BUT, whenever you perform a row transformation you must do the same transformation with columns.
For instance, you can start subtracting the first row from the second and the third, getting
$$
\begin{pmatrix}
1 & 1 & 1 \\
0 & a-1 & a-1 \\
0 & a-1 & 2
\end{pmatrix}
$$
And now you have to do the same operations with columns:
$$
\begin{pmatrix}
1 & 0 & 0 \\
0 & a-1 & a-1 \\
0 & a-1 & 2
\end{pmatrix}
$$
If you now subtract the second row from the third one and do the same with columns, you'll get a diagonal form for your quadratic one:
$$
\begin{pmatrix}
1 & 0 & 0 \\
0 & a-1 & 0 \\
0 & 0 & 3-a
\end{pmatrix}
$$
So, after a change of variables, your form becomes:
$$
q(\overline{x}, \overline{y}, \overline{z}) = \overline{x}^2 + (a-1)\overline{y}^2 + (3-a)\overline{z}^2 \ .
$$
And if you want to know which change of variables gives you this reduced quadratic form, there is an easy way to obtain it. Namely, if
$$
X =
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}
$$
and $\overline{X}$ is the analogous vector for the new variables, then they are related through a linear relation $X = S\overline{X}$ and you can find $S$ as follows: add to your matrix $A$ the identity matrix $I$ like this:
$$
(A\ |\ I) \ .
$$
Now, perform to $I$ the same row operations you do to $A$ (just row operations!). In the end, you'll get a matrix
$$
(D\ |\ S^t)
$$
where $D$ is the previous diagonal one, up there, and the one appearing on the right is the transpose of the change of variables matrix. And the relation among all these matrices that have appeared here so far is:
$$
D = S^tA S \ .
$$
Another way to do this, though with more involved computations: your matrix $A$ is a symmetric one, right? Well, from the endomorphism point of view, every symmetric real matrix diagonalizes, with an orthonormal basis of eigenvectors. So, you have a diagonal matrix $D$ and an orthogonal change-of-basis matrix $S$ (hence, $S^{-1} = S^t$) such that the above relation between all of them holds.
Find a closed form for this infinite sum: $ 1+\frac 1 2 +\frac{1 \times2}{2 \times 5}+\frac{1 \times2\times 3}{2 \times5\times 8}+ \dots$ | First of all, let us note that
$$\int_0^1(1-x)^{n-1}x^{-1/3}\,dx=\mathrm{B}\left(n,\frac23\right)=\frac{\Gamma(n)\Gamma(\frac23)}{\Gamma(n+\frac23)}=\frac{(n-1)!\,\Gamma(\frac23)}{(n-\frac13)(n-\frac43)\ldots\cdot \frac23 \Gamma(\frac23)}=\frac{3^n (n-1)!}{2\cdot 5\cdot\ldots\cdot(3n-1)},$$
which is almost the main term of our series (becomes it after multiplicating by $n/3^n$). Then the series in question without initial term $1$ equals to
$$S=\sum_{n=1}^\infty \frac{n!}{2\cdot 5\cdot\ldots\cdot(3n-1)}=\sum_{n=1}^\infty \frac{n}{3^n}\int_0^1(1-x)^{n-1}x^{-1/3}\,dx=$$
$$=\sum_{n=1}^\infty \frac{n}{3^n}\int_0^1(1-u^{3/2})^{n-1}\frac32\,du=\frac12\int_0^1 \sum_{n=1}^\infty n\left(\frac{1-u^{3/2}}{3}\right)^{n-1}\,du.$$
Using the relation $$\sum_{n=1}^\infty nq^{n-1}=\frac{1}{(1-q)^2}$$ (which is just the derivative of $\sum\limits_{n=1}^\infty q^{n}=\frac{1}{1-q}$) with $q=\frac{1-u^{3/2}}{3}$ we get
$$S=\frac12\int_0^1\frac{du}{(1-\frac{1-u^{3/2}}{3})^2}=\frac92\int_0^1\frac{du}{(2+u^{3/2})^2}=9\int_0^1 \frac{t\,dt}{(t^3+2)^2}.$$
Since we have antiderivative for this function of the form
$$\int \frac{9t\,dt}{(t^3+2)^2}=\frac{3t^2}{2(t^3+2)}+\frac{1}{4\sqrt[3]{2}}\ln(t^3+2)-\frac{3}{4\sqrt[3]{2}}\ln(t+\sqrt[3]{2})+\frac{\sqrt3}{2\sqrt[3]{2}}\arctan\frac{\sqrt[3]{4}t-1}{\sqrt3}+C,$$ by the fundamental theorem of calculus $$S=\frac12+\frac{1}{4\sqrt[3]{2}}\ln3-\frac{3}{4\sqrt[3]{2}}\ln(1+\sqrt[3]{2})+\frac{\sqrt3}{2\sqrt[3]{2}}\arctan\frac{\sqrt[3]{4}-1}{\sqrt3}+\frac{\sqrt3}{2\sqrt[3]{2}}\arctan\frac{1}{\sqrt3}=$$
$$=\frac12+\frac{1}{4\sqrt[3]{2}}\ln(\sqrt[3]{2}-1)+\frac{\sqrt3}{2\sqrt[3]{2}}\left(\frac{\pi}{6}+\arctan\frac{\sqrt[3]{4}-1}{\sqrt3}\right).$$
Adding again omitted earlier first term $1$, we obtain an expression for required sum which is equivalent to one given by Mathematica. |
If $c\in \mathbb F_p^\times $, then $x^m-c=0$ has at most $m$ solution in $\mathbb F_p^\times $. | Well, its a general fact that any polyomial $f\in K[x]$ of degree $m$ with coefficients in a field $K$ has at most $m$ roots in $K$.
One can show that for each $a\in K$ using the division theorem/algorithm:
$f(a) = 0$ iff $f(x) = (x-a)g(x)$,
where $g\in K[x]$.
This can be iterated (apply to $g$ next) by induction such that
$f(x) = (x-a_1)\cdots (x-a_k)g(x)$
where $g\in K[x]$ and $g$ has no root in $K$.
By comparing degrees, its clear that $k\leq $ degree of $f$. |
equivalence of definitions | If you can find $X_1,X_2,\dots,X_n$ such that $X=\frac{1}{n}\sum X_i$ then we can simply choose $Y_i=X_i/n$ and we have $X=\sum Y_i$.
One can do something analogous in the other direction.
So both definitions are equivalent. |
Null space of the sum of two matrices | As the other answers have indicated, your assertions are not true. However, a moment's thought does reveal that we can salvage a partial result:
$$N(A+B) \supseteq N(A)\cap N(B)$$
which effectively rests on the fact that $0 + 0 = 0$ in any vector space.
As per request, a proof of the above. We are asserting that $v \in N(A) \cap N(B)$ implies $v \in N(A+B)$. Now suppose that the premise holds, i.e. $v \in N(A)\cap N(B)$:
$$\begin{align*}(A+B)v &= Av + Bv & & \text{by definition of $(A+B)v$} \\
&= 0 + 0 & & \text{since $v \in N(A)$ and $v \in N(B)$} \\ &= 0\end{align*}$$
hence $v \in N(A+B)$. $\blacksquare$ |
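The inclusion can be strict: for instance, with $A = I$ and $B = -I$ we have $N(A)\cap N(B) = \{0\}$, while $N(A+B) = N(0)$ is the whole space.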
Show that $ x_n = \left(1 + \frac{r}{n} \right)^n $ has an upper bound | If you know that $\log(x)\leq x-1$, which is clear from the symmetry with the inverse-function inequality $\text e^x\geq x+1$, it follows that
$$
0\leq \log(x_n)=n\ \log\left(1+\frac{r}{n}\right)\leq r
$$
showing that $x_n$ is bounded. In fact this shows that $1\leq x_n\leq \text{e}^r$. Also it should be noted here that $\log(x)$ is an increasing function thus preserving inequalities. |
How many boys and girls should be included in the sample. | Consider it this way: The total number of vegetarians is, of course, the sum of male vegetarians and female vegetarians. So you can split the task into two tasks:
find the number of male vegetarians
find the number of female vegetarians
Of course for the first task, you'd interview only boys, and for the second task you'd interview only girls.
Now clearly, your estimate for the male vegetarians gets better if you interview more boys, and your estimate for female vegetarians gets better if you interview more girls.
However you've got the additional restriction that the total number of people you interview is 40. That is, the more boys you interview, the less girls you can interview.
For the extreme cases, imagine that you interview only boys. Then you have the best estimate for the number of male vegetarians, but exactly zero information about the number of female vegetarians. Similarly, if you interview only girls, you get the best estimate for the number of female vegetarians, but no information about the male vegetarians.
Now this tradeoff means that there's some specific number of boys and girls where the combined estimate will be best. The weight factors are exactly to determine that specific number.
Note that the more people there are in each group, the more people you have to ask to get a good picture; as an extreme example, if there are only ten girls, by asking nine of them you already get a pretty good estimate of the number of vegetarians (you're at worst off by one), while asking nine of a million won't tell you much. So since there are more boys than girls in the school, it's no surprise that you also should ask more boys than girls.
Now if you just randomly choose 40 people, it might be that you choose the optimal number of boys and girls, but it is not at all guaranteed. Since by definition, you'll get worse result if the number is not optimal, leaving it to chance is obviously a worse option than interviewing the optimal number of each.
There's one exception: If there is no difference between vegetarianism rates for boys and girls, then it doesn't matter whether you select boys or girls. Then just randomly selecting 40 people will be just as good. But it's an assumption you don't really want to make if you can avoid it. |
How to compute $\int_{0}^{\frac{\pi}{2}}\frac{\sin(x)}{\sin^3(x)+ \cos^3(x)}\,\mathrm{d}x$? | Note
\begin{align}
\int_{0}^{\infty} \frac{\mathrm{d}u}{1+u^3}
&=\frac13
\int_{0}^{\infty} \left(\frac{1}{1+u}+ \frac{2-u}{u^2-u+1}\right)du\\
&= \frac13\int_{0}^{\infty} \left(\frac{1}{1+u}-\frac12 \frac{2u-1}{u^2-u+1}+ \frac32\frac1{(u-\frac12)^2+\frac34} \right)du\\
&=\frac13 \left( \ln\frac{u+1}{\sqrt{u^2-u+1}}+\sqrt3\,\tan^{-1}\frac{2u-1}{\sqrt3}\right)\bigg|_0^\infty=\frac{2\pi}{3\sqrt3}
\end{align} |
Is there a special name for matrices A and C if ABC=B | Assuming that $A,B$ are invertible, then the relation $ABC=B$ implies that $A^{-1}$ and $C$ are similar matrices. Indeed, we have
$$ C = (AB)^{-1}B = B^{-1}A^{-1}B$$ |
Linear orders with dense isomorphic subsets | Yes. Indeed, $X$ must be canonically isomorphic to the Dedekind completion (with endpoints) of $X'$, and similarly for $Y$. To be precise, let us say a Dedekind cut in $X'$ is a downward closed subset $L\subseteq X'$ which contains its supremum, if the supremum exists (note we do not require $L$ to be nonempty or a proper subset, to allow for cuts corresponding to endpoints). Every element $x\in X$ determines a Dedekind cut $$L(x)=\{y\in X':y\leq x\},$$ and if $L(x)=L(x')$ then $x=x'$ since otherwise there would be an element of $X'$ between $x$ and $x'$ by density. (Note that here I take "dense" to mean dense in the order sense, not the topological sense, but they are equivalent given that $X$ is connected, which implies the open interval $(x,x')$ in $X$ must be nonempty.) Moreover, every Dedekind cut $L\subseteq X'$ comes from an element of $X$ in this way: since $X$ is complete, $L$ has a supremum $x$ in $X$, and then $L=L(x)$.
So, $x\mapsto L(x)$ is a bijection between $X$ and the set of Dedekind cuts on $X'$, and it is moreover order-preserving when you order the Dedekind cuts by inclusion. So, $X$ is isomorphic to the completion of $X'$. |
Is this a valid step in a convergence proof? | Your statement is valid, but it doesn't use the definition of convergence (i.e., the $\epsilon-N$ definition of convergence). To prove that the sequence converges to $3/4$, consider
$$\left|\frac{3n^2 + 1}{4n^2 + n + 2} - \frac{3}{4}\right| = \left|\frac{-3n - 2}{4(4n^2 +n + 2)}\right| = \frac{3n + 2}{4(4n^2 + n + 2)} < \frac{3n + 2n}{16n^2} = \frac{5}{16n} \tag{*}$$
Given $\epsilon > 0$, setting $N > 5/(16\epsilon)$ will make the rightmost side (and hence the leftmost side) of $(*)$ less than $\epsilon$ for all $n\ge N$. Therefore,
$$\lim_{n\to \infty} \frac{3n^2 + 1}{4n^2 + n + 2} = \frac{3}{4}.$$ |
Controllability of LTI Networks | As @jnez71 proposed the first $\boldsymbol{B}$ does give you more control authority, because your input vector $\boldsymbol{u}=[u_1,u_2,u_3,u_4]^T$ has four components instead of a scalar component $u$ for the vector $\boldsymbol{b}$. Because of this, you have a higher degree of freedom for the control of your system.
In order to see this, we can look at a simplified system
$$\dot{x}_1=x_1 + u_1$$
$$\dot{x}_2=x_2 - u_2$$
we can quickly see that $u_1=-x_1-k_1x_1$ and $u_2=x_2+k_2x_2$ will drive the system to the origin if $k_1$ and $k_2$ are positive.
If we look at the same system but with only one input we will have
$$\dot{x}_1=x_1 + u$$
$$\dot{x}_2=x_2 - u.$$
If we apply $u=-x_1-k_1x_1$, we can stabilize the first equation for $k_1>0$. The problem is now that the second equation will now be directly affected by this choice of $u$.
$$\dot{x}_2=x_2+x_1+k_1x_1$$
Hence, we cannot freely choose $u$ without constraining the values of $k_1$ which will lead to an asymptotic behavior of the system. It turns out that for this example we will not be able to stabilize the system with such a control law.
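One way to make this precise (a quick check, not spelled out above): the sum $z=x_1+x_2$ satisfies
$$\dot z=\dot x_1+\dot x_2=(x_1+u)+(x_2-u)=x_1+x_2=z,$$
so $z(t)=z(0)\,e^{t}$ regardless of how $u$ is chosen; the input cancels out of this direction entirely, and no control law can stabilize it.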
I hope you can see how we lost control authority by having a vector instead of a matrix as the input matrix. |
Finding all possible combination **patterns** - as opposed to all possible combinations | You need to split the count according to the number of different numbers used. There are $4$ grids using one number each, but they all yield the same pattern.
At the other extreme there are
$$4^{25}-\binom43\cdot3^{25}+\binom42\cdot2^{25}-\binom41\cdot1^{25}=4^{25}-4\cdot3^{25}+6\cdot2^{25}-4$$
grids that use all four numbers. The four numbers can be permuted in $4!=24$ different ways, so there are
$$\frac1{24}\left(4^{25}-4\cdot3^{25}+6\cdot2^{25}-4\right)$$
patterns.
Given two colors, say $a$ and $b$, we can fill them into the grid in $2^{25}-2$ ways (since we need to exclude the one-number grids). However, that counts each pattern twice, since we can interchange $a$ and $b$, so this case contributes $2^{24}-1$ patterns.
Similarly, three colors can be filled into the grid in
$$3^{25}-\binom32\cdot2^{25}+\binom31\cdot1^{25}=3^{25}-3\cdot2^{25}+3$$
ways, but the colors can be permuted in $3!=6$ ways, so the number of patterns is only
$$\frac16\left(3^{25}-3\cdot2^{25}+3\right)=\frac12\left(3^{24}-2^{25}+1\right)\;.$$ |
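Putting the pieces together (a side remark, not needed for the computation): the total number of patterns is
$$1+\bigl(2^{24}-1\bigr)+\frac{3^{25}-3\cdot2^{25}+3}{6}+\frac{4^{25}-4\cdot3^{25}+6\cdot2^{25}-4}{24}=\sum_{k=1}^{4}S(25,k),$$
where $S(25,k)$ denotes the Stirling number of the second kind, counting the partitions of the $25$ cells into exactly $k$ nonempty unlabeled blocks.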
Show that the gradient of the curve is positive for all x in the given interval (for a trig function) | Firstly note that your function is undefined at $x=-\pi$. So I think the interval should be $-\pi<x<\pi$.
As @lab bhattacharjee mentioned in the comments we have $\cos(2x)=2\cos^{2}(x)-1$ so $$\cos(x)=2\cos^{2}(\frac{x}{2})-1$$ and $\cos(x)+1=2\cos^{2}(\frac{x}{2}).$
Clearly $\cos^{2}(\frac{\pi}{2})=\cos^{2}(\frac{-\pi}{2})=0$, $\cos^2(0)=1$ and it follows that $0<2\cos^2(\frac{x}{2})\leq2$ for all $x\in(-\pi,\pi).$
Hence $$\frac{dy}{dx}=\frac{1}{1+\cos(x)}=\frac{1}{2\cos^{2}(\frac{x}{2})}$$ and from the previous calculations we have $\frac{1}{2\cos^{2}(\frac{x}{2})}\geq\frac{1}{2}$ for $-\pi<x<\pi.$ |
Automorphism with zero trace | For $n=2$ I used that $GL_n(K)$ is generated by transvections and dilatations, and then that all transvections and dilatations are products of automorphisms with zero trace. For a generalisation maybe we should try induction; I'll be glad to see a solution for the generalisation.
If $\mathcal A$ generates $\mathcal S$ then $\sigma (X )=\sigma (X ^{-1 } ( \mathcal A ))$ | It comes to proving that: $$X^{-1}\left(\sigma\left(\mathcal{C}\right)\right)=\sigma\left(X^{-1}\left(\mathcal{C}\right)\right)$$
where in general $\sigma\left(\mathcal{D}\right)$ denotes the smallest
$\sigma$-algebra that contains $\mathcal{D}$.
On base of the fact that $\sigma\left(\mathcal{C}\right)$ is a $\sigma$-algebra
it is straightforward to prove that $X^{-1}\left(\sigma\left(\mathcal{C}\right)\right)$
is also a $\sigma$-algebra, and this, together with $X^{-1}\left(\mathcal{C}\right)\subseteq X^{-1}\left(\sigma\left(\mathcal{C}\right)\right)$, leads to $\sigma\left(X^{-1}\left(\mathcal{C}\right)\right)\subseteq X^{-1}\left(\sigma\left(\mathcal{C}\right)\right)$,
which is the part that you already worked on in your question.
To prove the other side it must be verified that any collection of
the form $\left\{ Z\mid X^{-1}\left(Z\right)\in\mathcal{D}\right\} $
is a $\sigma$-algebra whenever $\mathcal{D}$ is a $\sigma$-algebra. This also is straightforward and applying this result on $\mathcal{D=}\sigma\left(X^{-1}\left(\mathcal{C}\right)\right)$
we find that $\mathcal{E}:=\left\{ Z\mid X^{-1}\left(Z\right)\in\sigma\left(X^{-1}\left(\mathcal{C}\right)\right)\right\} $
is a $\sigma$-algebra. It is evident that $\mathcal{C}\subseteq\mathcal{E}$ which allows the conclusion that $\sigma\left(\mathcal{C}\right)\subseteq\mathcal{E}$
or equivalently $X^{-1}\left(\sigma\left(\mathcal{C}\right)\right)\subseteq\sigma\left(X^{-1}\left(\mathcal{C}\right)\right)$. |
About Fourier transform | Here you start with $\ell^2(\Gamma)=\ell^2(\mathbb Z)$. The left regular representation $\lambda$ maps every $n\in\mathbb Z$ into $B(\ell^2(\mathbb Z))$ by $\lambda(n)f(m)=f(m-n)$.
The Fourier transform $\ell^2(\mathbb Z)\to L^2(\mathbb T)$ in this context is the map $U:f\mapsto \sum_{n\in\mathbb Z}f(n)\,e^{int}$.
Now, using that $\{e^{int}\}_n$ is an orthonormal basis,
$$
\langle U\lambda(n)f,U\lambda(n)f\rangle_{L^2(\mathbb T)}=\langle \sum_{m\in\mathbb Z}f(m-n)\,e^{imt},\sum_{m\in\mathbb Z}f(m-n)\,e^{imt}\rangle_{L^2(\mathbb T)}\\
=\sum_{m\in\mathbb Z}|f(m-n)|^2=\sum_{m\in\mathbb Z}|\lambda(n)f(m)|^2=\langle\lambda(n)f,\lambda(n)f\rangle.
$$
This shows that $U$ is isometric, and it is easy to check that it is surjective: so it is a unitary.
As $C^*_\lambda(\mathbb Z)$ is generated by $\lambda(1)$, we have that $UC^*_\lambda(\mathbb Z)U^*$ is generated by $U\lambda(1)U^*$. This is multiplication by $e^{it}$ since
\begin{align}
\langle U\lambda(1)U^*\sum_{m\in\mathbb Z}c_m\,e^{imt},\sum_{m\in\mathbb Z}c_m\,e^{imt}\rangle
&=\langle\lambda(1)(c_m),(c_m)\rangle=\langle (c_{m-1}),(c_m)\rangle\\ \ \\
&=\sum_{m\in\mathbb Z} c_{m-1}\overline{c_m}\\ \ \\
&=\langle e^{it}\,\sum_{m\in\mathbb Z}c_m\,e^{imt},\sum_{m\in\mathbb Z}c_m\,e^{imt}\rangle.
\end{align}
So $UC^*_\lambda(\mathbb Z)U^*$ is the closure of the trigonometric polynomials, that is $C(\mathbb T)$. |
weakly-* convergence and dominated convergence | You have
$$\int_0^T (g_n, h_n(t))_{L^2} \, \mathrm dt = ( g_n, \int_0^T h_n(t)\,\mathrm d t)_{L^2}$$
and
$\int_0^T h_n(t)\,\mathrm d t$ converges weakly in $H^s(\mathbb R)$, hence in $L^2(\mathbb R)$. Together with the strong convergence of $g_n$, this is enough to infer the convergence of the scalar product. |
Bijective function without knowing codomain | I found in my friends notes that the function $f(x) = (x-3)^{1/2}$ ,where x is any positive real number
Really any positive real number? I suppose only for $x-3\ge 0 \iff x \ge 3$.
Other than that, if the domain is $[3,+\infty)$, then you are right that the codomain needs to be (restricted to) $[0,+\infty)$ for $f$ to be surjective and then also bijective. |
Taylor series and Laurent series | The Laurent series is unique. If a function has a Taylor series and a Laurent series about a point, then the two coincide, which means the function is analytic at that point.
Finding a conformal map from unit disk to half-plane | Here's an outline; I'll leave the details to you:
The map you have will send the unit disc to a half plane. To get from a half plane to all of $\mathbb{C}$ minus a ray, postcompose with $z\mapsto z^2$. Now, to get the missing ray where you want it, rotate and translate.
Lastly, look at the pre-image of $0$. You can precompose with an automorphism of the disk sending $0$ to that point. Then all that's left is to check that, when you compose all these maps, the derivative is a positive number. |
Derivative: Square Root | What you have is correct. You've applied the product rule correctly, and you've applied the chain rule correctly on the right term. Good job!
If your choices are missing square roots, then do as Ross suggested and multiply top and bottom by $\sqrt{5-3x}$. |
Find the curve defined by the equation $\frac{z-z_2}{z_1-z_2}=\Big(\overline{\frac{z-z_2}{z_1-z_2}}\Big)$ | You are correct (assuming $z_1 \ne z_2$ of course, otherwise the problem is ill posed).
For a shorter proof, let $\lambda = \frac{z-z_2}{z_1-z_2}$ and note that the premise is $\lambda=\overline{\lambda}$ which is equivalent to $\lambda \in \mathbb{R}$.
Then it follows that $z = z_2 + \lambda(z_1 - z_2)$ with $\lambda \in \mathbb{R}$ which is the line passing through $z_2$ (at $\lambda = 0$) and $z_1$ (at $\lambda = 1$). |
Higher dimensional cross product equivalent | Cross product $a \times b$ of two vectors $a, b \in \mathbb{R}^3$ is designed to satisfy
$$\langle a\times b, x\rangle = \det(a,b,x),\qquad\forall x\in\mathbb{R}^3,$$
where $\langle \cdot, \cdot \rangle$ is an inner product. And of course, this relation can be used to prove all the relevant properties of $a \times b$. Likewise, the $n$-dimensional cross product can be defined as a function $\operatorname{Cross}(\cdots)$ of $(n-1)$-vectors in $\mathbb{R}^{n-1}$, given by
$$\langle \operatorname{Cross}(a_1,\cdots,a_{n-1}), x \rangle = \det(a_1,\cdots,a_{n-1},x),\qquad\forall x\in\mathbb{R}^n.$$
This is almost exactly the same as what you have constructed, possibly except for the sign choice. Indeed, the following observations can be extracted from the properties of determinant.
$a_1, \cdots, a_{n-1}$ are linearly independent if and only if $\operatorname{Cross}(a_1, \cdots, a_{n-1}) \neq 0$.
If $x \in \operatorname{span}(a_1, \cdots, a_{n-1})$, then $\langle \operatorname{Cross}(a_1, \cdots, a_{n-1}), x \rangle = 0$. In particular, $a_1, \cdots, a_{n-1}$ are orthogonal to $\operatorname{Cross}(a_1, \cdots, a_{n-1})$.
$\operatorname{Cross}(\cdots)$ is multi-linear, meaning that it is linear in each argument. |
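If one wants to compute with this, the definition translates directly into code: the $k$-th component of $\operatorname{Cross}(a_1,\cdots,a_{n-1})$ is $\det(a_1,\cdots,a_{n-1},e_k)$. Here is a minimal sketch in R (not part of the original answer; the function name `cross` is my own, and since a determinant is invariant under transposition the vectors may simply be stacked as rows):
# k-th component of Cross(a_1, ..., a_{n-1}) is det(a_1, ..., a_{n-1}, e_k)
cross <- function(...) {
vs <- do.call(rbind, list(...))   # (n-1) x n matrix, one input vector per row
n <- ncol(vs)
stopifnot(nrow(vs) == n - 1)
sapply(1:n, function(k) det(rbind(vs, diag(n)[k, ])))
}
cross(c(1, 0, 0), c(0, 1, 0))                        # usual cross product: 0 0 1
cross(c(1, 0, 0, 0), c(0, 1, 0, 0), c(0, 0, 1, 0))   # in R^4: 0 0 0 1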
Normal subgroup of automorphisms of a free group | Arturo Magidin's excellent answer explains how to show that $m(\langle A,B,C\rangle)=\{\pm 1\}$, and also why this does not in general imply that your subgroup is normal. However, I want to explain why in this case it is true that $\langle A,B,C\rangle$ is the full preimage of $\{\pm 1\}$, and thus is normal.
The key is a theorem of Nielsen (1) on what is now called $IA_n$: this is by definition the kernel of the natural surjection $Aut(F_n)\to GL_n\mathbb{Z}$, so it fits into a short exact sequence
$$1\to IA_n\to Aut(F_n)\to GL_n\mathbb{Z}\to 1$$
This group of automorphisms which are the Identity on the Abelianization has been well-studied, starting with Magnus (2), who proved that $IA_n$ is finitely generated by giving an explicit set of generators: type (A) which for given $i$ and $j$ conjugates $x_i$ by $x_j$ and fixes all other generators, and type(B) which for given $i$, $j$, and $k$ multiplies $x_i$ by $x_jx_kx_j^{-1}x_k^{-1}$ and fixes all other generators.
But anyway, we are interested here in $IA_2$, and Nielsen proves in (1) that $IA_2$ is just $Inn(F_2)$, the group of automorphisms obtained by conjugation. This is isomorphic to $F_2$ and generated by $conj_X\colon X\mapsto X, Y\mapsto XYX^{-1}$ and by $conj_Y\colon X\mapsto YXY^{-1}, Y\mapsto Y$.
We know that your group $\langle A,B,C\rangle$ surjects onto $\{\pm 1\}$, so to show it is the full preimage of $\{\pm 1\}$ it suffices to show that $\langle A,B,C\rangle$ contains the full preimage of $\{1\}$, which is $IA_2$. But we can compute by hand that $AB=conj_Y$ and $CB=conj_X$. So by Nielsen's theorem your group contains $IA_2$, and thus fits into a short exact sequence
$$1\to IA_2\to \langle A,B,C\rangle\to \{\pm 1\}\to 1$$
In particular, your subgroup is normal in $Aut(F_2)$.
Finally, here is another method of proving that $\langle A,B,C\rangle$ is normal which does not require quoting any papers written in German. We know that the elementary Nielsen transformations provide a generating set for $Aut(F_n)$. This generating set becomes particularly simple in the case when $n=2$: we have three generators $R\colon X\mapsto X^{-1}, Y\mapsto Y$, $S\colon X\mapsto Y, Y\mapsto X$, and $T\colon X\mapsto XY,Y\mapsto Y$. To show that $\langle A,B,C\rangle$ is normal in $Aut(F_2)$ means that it is invariant under conjugation under any element of $Aut(F_2)$. But since $R,S,T$ generate $Aut(F_2)$, it is enough just to show that $\langle A,B,C\rangle$ is invariant under conjugation by $R$, by $S$, and by $T$.
To do this for $R$, for example, you just need to compute the conjugates $RAR^{-1}$, $RBR^{-1}$, and $RCR^{-1}$ and show that each can be written as a combination of $A$, $B$, and $C$. In this case this is quite easy: we have $RAR^{-1}=A$, $RBR^{-1}=B$, and $RCR^{-1}=C^{-1}$. Now all you have to do is do the same for $SAS^{-1}$, $SBS^{-1}$, $SCS^{-1}$, $TAT^{-1}$, $TBT^{-1}$, and $TCT^{-1}$, and you've proved that $\langle A,B,C\rangle$ is normal!
(1) Nielsen, Die Isomorphismen der allgemeinen unendlichen Gruppe mit zwei Erzeugenden, Math. Ann. 78 (1964), 385-397
(2) Magnus, Über n-dimensinale Gittertransformationen, Acta Math. 64 (1935), 353-367. |
Show that the following expansion is valid for first $n$ terms | You seem to have the wrong partial fractions. I get
\begin{eqnarray*}
\frac{3x^2+2x}{x^3-2x^2+3x-6} &=& \frac{16}{7(x-2)} + \frac{5x
+24}{7(x^2+3)} \\
&=& \frac{-8}{7} \frac{1}{1-\frac{x}{2}} + \frac{5x+24}{21} \frac{1}{1-\frac{-x^2}{3}}. \\
&& \\
\text{Using the series for $\;\frac{1}{1-x}$} && \text{for both of the above fractions} : \\
\frac{-8}{7} \frac{1}{1-\frac{x}{2}} &=& \frac{-8}{7} \left(1 + \frac{x}{2} + \left(\frac{x}{2}\right)^2 + \left(\frac{x}{2}\right)^3 + \cdots \right)\qquad\text{for $|x| \lt 2$} . \\
&& \\
\frac{5x+24}{21} \frac{1}{1-\frac{-x^2}{3}} &=& \frac{5x+24}{21} \left(1 - \frac{x^2}{3} + \left(\frac{x^2}{3}\right)^2 - \left(\frac{x^2}{3}\right)^3 + \cdots \right) \\
&=& \frac{1}{21} \left(24 + 5x - 8x^2 - \frac{5x^3}{3} + \cdots \right)\qquad\text{for $|x| \lt \sqrt{3}$} . \\
&& \\
\text{Summing,}\quad \frac{3x^2+2x}{x^3-2x^2+3x-6} &=& -\frac{1}{3}x - \frac{2}{3}x^2 - \frac{2}{9}x^3+\cdots . \\
&& \\
\end{eqnarray*}
From the radii of convergence of the partial fractions, this series is valid for $|x|\lt \sqrt{3}$. |