Understanding derivation in Robbins (1952) | I struggled with the expression for the last couple of days; very frustrating, but in the end rewarding, since I think the derivation below shows how to get the recursion. Unfortunately I do not have an intuitive explanation for the recursion...
Define the events
$H_i := \text{"heads in flip } i$",
$A_i := \text{"coin $A$ is used in flip $i$"}$ and
$B_i := \text{"coin $B$ is used in flip $i$"}$. Then you get the following:
\begin{align*}
p_{i+1} &= \mathbb{P}(H_{i+1}) \\
&= \mathbb{P}(H_{i+1} | H_i, A_i)\mathbb{P}(H_i|A_i)\mathbb{P}(A_i)
+ \mathbb{P}(H_{i+1} | H_i^c, A_i)\mathbb{P}(H_i^c|A_i)\mathbb{P}(A_i)\\
& \enspace \enspace \enspace \enspace + \mathbb{P}(H_{i+1} | H_i, B_i)\mathbb{P}(H_i|B_i)\mathbb{P}(B_i)
+ \mathbb{P}(H_{i+1} | H_i^c, B_i)\mathbb{P}(H_i^c|B_i)\mathbb{P}(B_i) \\
&= \alpha^2\mathbb{P}(A_i) + \beta (1-\alpha)\mathbb{P}(A_i) + \beta^2\mathbb{P}(B_i) +
\alpha(1-\beta)\mathbb{P}(B_i) \\
&= \alpha^2\mathbb{P}(A_i) + \beta^2\mathbb{P}(B_i) + \beta(1-\alpha) + \alpha(1-\beta) -\mathbb{P}(B_i)\beta(1-\alpha) - \mathbb{P}(A_i)\alpha(1-\beta)\\
&= \alpha\mathbb{P}(A_i)(\alpha + \beta - 1) + \beta\mathbb{P}(B_i)(\alpha + \beta - 1) + \beta(1-\alpha) + \alpha(1-\beta) \\
&= (\alpha + \beta - 1)(\alpha \mathbb{P}(A_i) + \beta \mathbb{P}(B_i)) + \beta(1-\alpha) + \alpha(1-\beta) \\
&= (\alpha + \beta - 1)p_i + \beta(1-\alpha) + \alpha(1-\beta)
\end{align*}
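As a sanity check on the recursion, here is a small Monte Carlo simulation (a minimal sketch: it assumes the "play the winner" rule implied by the conditional probabilities above, namely keep the current coin after heads and switch after tails; the values $\alpha=0.7$, $\beta=0.4$ and starting with coin $A$ are arbitrary choices):

```python
import random

alpha, beta = 0.7, 0.4          # assumed head probabilities of coins A and B
n_flips, n_runs = 15, 100_000

heads = [0] * n_flips
for _ in range(n_runs):
    coin = alpha                # start with coin A (an arbitrary choice)
    for i in range(n_flips):
        if random.random() < coin:
            heads[i] += 1       # heads: keep the same coin
        else:
            coin = beta if coin == alpha else alpha  # tails: switch coins

p = alpha                       # p_1 = alpha when coin A is used first
for i in range(n_flips):
    print(f"flip {i+1}: simulated {heads[i]/n_runs:.4f}, recursion {p:.4f}")
    p = (alpha + beta - 1) * p + beta * (1 - alpha) + alpha * (1 - beta)
```

The two printed columns should agree up to Monte Carlo noise. |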
How do you compute the pull-back of a complex differential (1,1)-form given its potential? | Suppose the charts on $Y$ follow coordinates $w \in \mathbb{C}^n$. Then we may write $\omega$ locally in a chart as
$$ \omega(w) = \frac{\partial^2 f}{\partial w \partial \bar w }(w) dw \wedge d\bar w $$
Now if we have a nice holomorphic map $F: X \to Y$ (i.e. structure preserving), the pull-back is just as you've written, $F^* \omega = \omega \circ F$. This amounts to swapping the coordinates in the charts. Suppose that the charts on $X$ follow coordinates $z \in \mathbb{C}^n$; then in the compatible charts we'll have $w = F(z)$. So
$$F^* \omega(z) = \omega \circ F(z) =\frac{\partial^2 f}{\partial w \partial \bar w }(F(z)) d F(z) \wedge d \overline {F(z)} $$
You may rewrite the partial derivatives in terms of $z$ if you wish, using the chain rule.
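For concreteness, in one variable ($n=1$): holomorphy gives $dF(z)=F'(z)\,dz$ and $d\overline{F(z)}=\overline{F'(z)}\,d\bar z$, so
$$F^* \omega(z) = \frac{\partial^2 f}{\partial w \partial \bar w }(F(z))\,|F'(z)|^2\, dz \wedge d\bar z\,,$$
i.e. the pulled-back form has potential $f\circ F$. |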
Closure/Interior Operator Definition | The closure of $V$ is the intersection of all closed subsets of $X$ that contain $V$ as a subset. And a closed set in $X$ is one whose complement
relative to $X$ is in the topology $\mathcal{T}$.
So $$\operatorname{cl}(V) = \bigcap \{A \subseteq X: (X\setminus A \in \mathcal{T}) \land (V \subseteq A)\}$$
In the OP's definition of closure he's intersecting the complement of $X$, which is empty, as $X$ is the whole space. |
If $T$ maps $\mathbb{R}^n$ onto $\mathbb{R}^m$, is the nullity of $T$ equal to $n-m$? | By the rank-nullity theorem, the dimension of the domain of a linear map equals the dimension of its image plus its nullity.
As $T$ is supposed to be onto, the dimension of the image of $T$ is equal to $m$ and indeed the nullity is equal to $n-m$. |
How to prove the sequence is Cauchy | For part (a), you should get (for $m \le n$ and $m, n > N$),
$$
|s_m - s_n| = \sum_{i=m+1}^n \frac{1}{i!}
$$
Now one thing you could do is notice that $\frac{1}{i!} < \frac{1}{i(i-1)} = \frac{1}{i-1} - \frac{1}{i}$. So that gives you
$$
|s_m - s_n| < \sum_{i=m+1}^n \left(\frac{1}{i-1} - \frac{1}{i}\right)
$$
Now this sum is more manageable, if you write it out you will see that most of the terms cancel and you are left with $\frac{1}{m} - \frac{1}{n}$. And so you have
$$
|s_m - s_n| < \left|\frac1m - \frac1n\right| < \frac{2}{N}
$$
and so that should give you the ability to pick $N$ depending on $\epsilon$ so that $|s_m - s_n| < \epsilon$.
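A quick numerical illustration of the telescoped bound (a minimal sketch; it assumes $s_n=\sum_{i=1}^n \frac{1}{i!}$, matching the sum above):

```python
from math import factorial

s = lambda n: sum(1 / factorial(i) for i in range(1, n + 1))
for m, n in [(3, 7), (5, 20), (10, 50)]:
    print(abs(s(m) - s(n)), "<", 1 / m - 1 / n)   # the gap sits below 1/m - 1/n
```

The gap shrinks like $1/m$, which is exactly what the choice of $N$ exploits. |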
Linear algebra questions | Hint
Take any subspaces $M$ and $N$ of $V$ such that $M\not\subset N$ and $N\not\subset M$; then $M\cup N$ isn't a subspace of $V$ (why?)
Any subspace contains the zero vector. |
Aftermath of the incompleteness theorem proof | This is an extension of a comment below Arthur Fischer's answer. For concreteness we will work with the theory PA, but in principle any sufficiently strong theory would act the same.
Working in PA, or even in the weaker theory PRA, we can formally prove the implication
$$
\text{Con}(PA) \to G_{PA}
$$
where $G_{PA}$ is the Gödel sentence of PA. This leads to two conclusions:
Because we know from the first incompleteness theorem that $G_{PA}$ is not provable in PA, and because $\text{Con}(PA) \to G_{PA}$ is provable in PA, $\text{Con}(PA)$ must not be provable in PA. This is the standard way to prove the second incompleteness theorem.
If we were working in a setting where we had already assumed $\text{Con}(PA)$, and we have access to the normal resources of PRA, then we can prove $G_{PA}$. In particular, when we are proving the incompleteness theorems we are working in normal mathematics, which includes much more than just PRA, and we assume $\text{Con}(PA)$ when we are proving that theorem. Under that assumption we can prove $G_{PA}$ in normal mathematics. Thus the Gödel sentence is "true" in exactly the same sense that $\text{Con}(PA)$ is true when we assume it as a hypothesis to prove the incompleteness theorem.
The key point in the second bullet is that we don't prove $G_{PA}$ starting with nothing. We prove $G_{PA}$ starting with the knowledge or assumption that $\text{Con}(PA)$ holds. If we did not assume the truth of $\text{Con}(PA)$, or have a separate proof of $\text{Con}(PA)$, the argument in that bullet would be useless. But once we do assume $\text{Con}(PA)$ is true, it only takes a very weak theory (PRA) to deduce that $G_{PA}$ is also true under that assumption. |
a basic question on positive definite matrices | This is called Sylvester's criterion. A proof can be found at the link.
Edit: As asked by the OP, here are some remarks:
Let $A=\begin{pmatrix}a_{11} &\cdots &a_{1n}\\
\vdots &\ddots &\vdots \\
a_{n1} &\cdots &a_{nn} \end{pmatrix}\in \mathcal M_{n\times n}(\Bbb C)$, for some $n\in \Bbb N$, and for every $t\in [n]$ define $A_t=\begin{pmatrix}a_{11} &\cdots &a_{1t}\\
\vdots &\ddots &\vdots \\
a_{t1} &\cdots &a_{tt} \end{pmatrix}$.
It is true that $A$ is negative definite if, and only if, $(\forall t\in [n])\left((-1)^t\det (A_t)>0\right)$.
Also $A$ is positive semidefinite if, and only if, all its principal submatrices have non-negative determinants.
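A quick randomized check of the criterion (a minimal sketch using real symmetric matrices; the size and number of trials are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def sylvester_pd(A):
    # positive definite iff every leading principal minor det(A_t) is > 0
    return all(np.linalg.det(A[:t, :t]) > 0 for t in range(1, len(A) + 1))

for _ in range(1000):
    M = rng.normal(size=(4, 4))
    A = M + M.T                                   # random symmetric matrix
    assert sylvester_pd(A) == bool(np.all(np.linalg.eigvalsh(A) > 0))
print("leading principal minors agree with the eigenvalue test")
```

Replacing the test by $(-1)^t\det(A_t)>0$ checks the negative definite variant in the same way. |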
A closed formula for the measure of the union of $n$ sets | Hint: The base $n=2$ step is easy to verify, as you've said. For the induction step, suppose that it holds for any $n$ sets, and observe that $$\bigcup_{k=1}^{n+1}A_k=\left(\bigcup_{k=1}^nA_k\right)\cup A_{n+1}$$ and $$\bigcup_{k=1}^n\left(A_k\cap A_{n+1}\right)=\left(\bigcup_{k=1}^nA_k\right)\cap A_{n+1}.$$ Now, apply the base result and the induction hypothesis. |
Solving this permutation | Recall that $p(n,r)$ is the probability that the first shuffle of the deck of $n$ cards (by the prescribed method) will leave exactly $r$ unsorted cards. In order for this to happen, the first shuffle must put $n_1$ cards in correct order at the top of the deck and $n_2$ cards in correct order at the bottom of the deck, where $n-(n_1+n_2)=r$ is the number of cards left in the middle still to be shuffled.
There are $r!$ possible shuffled arrangements of the unsorted $r$ cards, and they’re all equally likely. However, some of them put the right card at the top of this deck of $r$ cards, which means that it would have been added to the top sorted stack. We don’t want to include those: they don’t leave us with $r$ unsorted cards. If the right card ends up at the top of the $r$ cards, the other $r-1$ cards could still be in any order, so there are $(r-1)!$ such permutations. There are also $(r-1)!$ permutations that put the right card on the bottom of the $r$ cards, where it would have gone into the bottom sorted stack. Thus, so far we have $r!-2(r-1)!$ permutations that leave the whole stack of $r$ cards unsorted.
However, we’ve subtracted too much: we subtracted some permutations twice, because some put the right card on the top and the right card on the bottom. Those merely leave the middle $r-2$ cards shuffled, so there are $(r-2)!$ of them. We subtracted them twice, so we have to add them back in to get the correct total number of permutations that leave all $r$ cards unsorted:
$$\begin{align*}
r!-2(r-1)!+(r-2)!&=r(r-1)(r-2)!-2(r-1)(r-2)!+(r-2)!\\
&=\Big(r(r-1)-2(r-1)+1\Big)(r-2)!\\
&=(r^2-3r+3)(r-2)!\;.
\end{align*}$$
Now $n_1$, the number of cards in the top sorted set, could be anywhere from a minimum of $0$ up to a maximum of $n-r$, if all of the sorted cards are at the top. To put it a little differently, the first one of the $r$ unsorted cards can be anywhere from the first card in the deck to the $(n-r+1)$-st card. That’s a total of $n-r+1$ different positions that it can occupy. For each of those possible positions of the $r$ unsorted cards there are $(r^2-3r+3)(r-2)!$ permutations of those cards that leave them unsorted, i.e., that don’t put the right card on either the top or the bottom. There is only one possible permutation of the top $n_1$ and bottom $n_2$ cards that gets them in the right places. Thus, we have a total of $$(n-r+1)(r^2-3r+3)(r-2)!$$ permutations of the whole deck that leave an unsorted section of $r$ cards to be shuffled again.
Finally, there are altogether $n!$ possible permutations of the deck, so the fraction that produce an unsorted batch of exactly $r$ cards is
$$\frac{(n-r+1)(r^2-3r+3)(r-2)!}{n!}\;.$$
Since the permutations are equally likely, that is also the probability of leaving exactly $r$ cards to be sorted.
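The count is easy to confirm by brute force over all $n!$ permutations (a minimal sketch; it assumes the top and bottom sorted stacks are taken as large as possible, which is how the argument above counts them, and $n=7$ is an arbitrary test size):

```python
from itertools import permutations
from math import factorial

def middle(p):
    """Cards left unsorted after stripping the sorted top and bottom runs."""
    n, n1, n2 = len(p), 0, 0
    while n1 < n and p[n1] == n1 + 1:
        n1 += 1
    while n2 < n and p[n - 1 - n2] == n - n2:
        n2 += 1
    return max(n - n1 - n2, 0)

n = 7
counts = {}
for p in permutations(range(1, n + 1)):
    r = middle(p)
    counts[r] = counts.get(r, 0) + 1
for r in range(2, n + 1):
    print(r, counts.get(r, 0), (n - r + 1) * (r*r - 3*r + 3) * factorial(r - 2))
```

For each $r$ from $2$ to $n$ the two printed numbers coincide. |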
lifting property of universal cover of $\mathbb{RP}^n$ | In general, the lifting property says that given a map $h:Y\to X$, and a covering map $p: X' \to X$, the map $h$ lifts to a map $\hat h: Y\to X'$ (with designated base point $y_0$ mapping to designated base point $x_0'$ above $x_0=h(y_0)$) if and only if the containment $h_*(\pi_1(Y,y_0))\subseteq p_*(\pi_1(X',x_0'))$ is satisfied. In particular, if $Y$ is simply connected then this will always be true, so the answer to your first question is yes. On the other hand, the answer to the second question is no: a simple counterexample is given by $Y=X=S^1$ with $h$ being the identity map; if $h$ lifted to a map $\hat h: S^1 \to \mathbb R$, then $\hat h$ would be null-homotopic, and hence $h=p \circ \hat h$ would be null-homotopic as well, a contradiction, since we know that in fact $h$ is not null-homotopic. |
Why does the axiom of foundation hold for every class? | Let $A$ be a class and $x\in A$. The relativized axiom of foundation states
$$\forall x\in A:(x\neq\varnothing)^A.$$
You can see that $(x\neq\varnothing)^A$ holds iff $x\cap A$ is nonempty, and then you apply the axiom of foundation to $x\cap A$. |
How many base-out configurations would be possible in sleazeball? | Each base can either be empty or contain a player. That's two possible states per base. There are seven independent bases. So that's $2^7$ states over all bases. Now, multiply by 5, since there are 5 possibilities for number of "outs" for each base state. |
What is the condition that a cubic equation $x^3+ax^2+bx+c=0$ has exactly three positive real roots? | For a cubic polynomial to have $3$ positive real roots, we can split the question into two parts: (1) the roots are real; (2) all roots are positive.
Let $f(x)=x^3+ax^2+bx+c$ and find $f'(x)$.
$f'(x)=3x^2+2ax+b=0$ must have two real roots, so
$$4a^2-12b\ge 0.$$
Let $f'(x)=0$ for $x=\alpha,\beta$.
Now, for all three roots to be real, the condition is $f(\alpha)\cdot f(\beta)\le 0$.
Also, for all three roots to be positive, the point of local maximum should be positive, i.e. if $\alpha \lt\beta$ then $\alpha \gt 0$; thus both roots of the quadratic $f'(x)$ should be positive. Along with this, ensure that $f(0)=c\lt 0$.
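These conditions can be stress-tested against the actual roots (a minimal sketch; the sampling range is arbitrary, and boundary cases such as repeated roots have probability zero under continuous sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

def has_three_positive_roots(a, b, c):
    r = np.roots([1, a, b, c])
    return bool(np.all(np.abs(r.imag) < 1e-8) and np.all(r.real > 0))

def criterion(a, b, c):
    disc = 4*a*a - 12*b
    if disc < 0:
        return False
    alpha = (-2*a - np.sqrt(disc)) / 6            # smaller critical point
    beta = (-2*a + np.sqrt(disc)) / 6
    f = lambda x: x**3 + a*x**2 + b*x + c
    return f(alpha) * f(beta) <= 0 and alpha > 0 and c < 0

bad = sum(criterion(a, b, c) != has_three_positive_roots(a, b, c)
          for a, b, c in rng.uniform(-5, 5, size=(5000, 3)))
print(bad, "mismatches")   # expect 0 up to floating-point edge cases
```

Up to floating-point noise near repeated roots, the two tests agree. |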
Integrating double integral with spherical coordinates problem with interpret a domain | The limits are $x^2+y^2\le 2y \ ,\ x\ge0 \ , y\ge\frac{1}{2} $
You've found the upper limit, i.e. $r = 2\sin\theta$.
For $y\ge\frac{1}{2} \ , \ r\sin\theta \ge\frac{1}{2} $
So, the lower limit of $r$ is $\frac{1}{2\sin\theta}$
Also, $x\ge0$ gives $r\cos\theta\ge0$, i.e. $\cos\theta\ge0$. So, $\theta\le\pi/2$.
At $y=\frac{1}{2}$, $\theta = \frac{\pi}{6}$ [as $1\cdot\sin(\pi/6) = 1/2$, the two $r$-limits meeting at $r=1$]
$$I = \int^{\pi/2}_{\pi/6}\int^{2\sin\theta}_{\frac{1}{2\sin\theta}}r^2\cos\theta \ dr\, d\theta = \frac{1}{3}\int^{\pi/2}_{\pi/6}\bigg\{8\sin^3\theta - \frac{1}{8\sin^3\theta}\bigg\}\cos\theta\, d\theta$$
Now let $u = \sin\theta$, $du = \cos\theta\, d\theta$
At $\theta = \pi/6$, $u=1/2$ and at $\theta = \pi/2$, $u=1$
So, $$I = \frac{1}{3}\int^{1}_{1/2}\bigg\{8u^3- \frac{1}{8u^3}\bigg\}du = \frac{1}{3}\bigg[8\cdot\frac{1}{4}\Big(1-\frac{1}{16}\Big) - \frac{1}{8}\cdot\frac{-1}{2}\bigg(1 - \frac{1}{(1/2)^2}\bigg)\bigg]$$
$$I = \frac{1}{3}\Big[ \frac{15}{8} - \frac{3}{16}\Big] = \frac{1}{3}\cdot\frac{27}{16} = \frac{9}{16}$$
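A numerical cross-check (a minimal sketch; it assumes the integrand of the original problem is $x=r\cos\theta$, which is consistent with the $r^2\cos\theta$ above):

```python
import numpy as np
from scipy import integrate

val, err = integrate.dblquad(
    lambda r, t: r**2 * np.cos(t),         # inner variable r, outer variable t
    np.pi/6, np.pi/2,                      # theta limits
    lambda t: 1/(2*np.sin(t)),             # lower r limit
    lambda t: 2*np.sin(t))                 # upper r limit
print(val, 9/16)                           # both approximately 0.5625
```

Both printed numbers agree to quadrature accuracy. |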
One particular equation involving floor function | If $x \ge 0$ then $0 < \frac {x+1}{x+7} < 1$ and $0 \le \frac x{x +3} < 1$ so $[\frac{x+1}{x+7}] = [\frac x{x+3}] = 0$. So all $x \ge 0$ are solutions.
If $x < 0$, it is a matter of solving $[\frac {y - 1}{y-7}] = [\frac y{y- 3}]$ for $y = - x > 0$.
If $y = 1$ then $[\frac {y - 1}{y-7}] = 0 \ne [\frac y{y- 3}] = [- 1/2] = -1$.
Otherwise $\frac {y-1}{y-7} = 1 + \frac 6{y-7}$ and $\frac y{y-3} = 1 + \frac 3{y-3} $
so $[\frac {y - 1}{y-7}]=[\frac y{y- 3}]\iff [\frac 6{y-7}] =[\frac 3{y-3}] $
$[\frac 6{y-7}] = k \iff k \le \frac 6{y-7}< k + 1 \iff \frac 6{k}+7\ge y >\frac6{k+1} +7 $
Likewise $[\frac 3{y-3}] = k \iff \frac 3k + 3 \ge y > \frac 3{k+1} +3$
So for $y > 0$, $[\frac 3{y-3}]= [\frac 6{y-7}] \iff \min(\frac 6k + 7,\frac 3k + 3) \ge y > \max (\frac 6{k+1} + 7, \frac 3{k+1} + 3)$ for some integer $k$.
If $k = 0$ then this is true if $y > 13$ and so $x < -13 \implies [\frac {x+1}{x+7} ]=[\frac {x}{x+3}] = 1$
If $k \ge 1$ then this is true if $\frac 3k + 3 \ge y > \frac 6{k+1} + 7 \ge \frac 6{2k} + 7 = \frac 3k +7$ which is impossible.
If $k = -1$ then to be true, $0 \ge y $ which is impossible.
If $k = -2$ then $\frac 32 \ge y > 1$ allows $[\frac{x+1}{x+7}] =[\frac{x}{x+3}] = -1$.
if $k \le -3$ then we would need $\frac 3{k} + 3 \ge y > \frac 6{k + 1} + 7$; but $\frac 3k + 3 < \frac 6{k+1} + 7$ for every $k \le -3$, so this is impossible.
So the solution is $x < -13$; $-\frac 32 \le x < -1$; $x \ge 0$.
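The claimed solution set can be spot-checked directly (a minimal sketch; the sample points are arbitrary picks inside and just outside the three pieces):

```python
import math

eq = lambda x: math.floor((x + 1)/(x + 7)) == math.floor(x/(x + 3))

inside = [-20, -13.001, -1.5, -1.25, 0, 3.7, 100]
outside = [-13, -5, -1.6, -1, -0.5]
assert all(eq(x) for x in inside)
assert not any(eq(x) for x in outside)
print("sample points match x < -13, -3/2 <= x < -1, x >= 0")
```

Note $x=-7$ and $x=-3$ must be avoided, since one of the fractions is undefined there. |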
non trivial representation of the identity in the symmetric group by different transpositions | Try
$$(14)(34)(12)(13)(24)(23)$$
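You can verify mechanically that this product of six distinct transpositions is the identity in $S_4$ (a minimal sketch; permutations are value lists on $\{0,\dots,3\}$ and the product is applied as function composition, rightmost factor first):

```python
from functools import reduce

def transposition(a, b, n=4):
    p = list(range(n))
    p[a - 1], p[b - 1] = p[b - 1], p[a - 1]
    return p

compose = lambda p, q: [p[i] for i in q]    # (p o q)(i) = p[q[i]]

ts = [(1, 4), (3, 4), (1, 2), (1, 3), (2, 4), (2, 3)]
prod = reduce(compose, (transposition(a, b) for a, b in ts))
print(prod == list(range(4)))               # True: the product is the identity
```

Note that each of the six transpositions of $S_4$ appears exactly once. |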
Total derivative in high dimension | The situations you describe are not the same. In the first case, you are taking the derivative of two functions: $g$ and $\widetilde{g}=g \circ h$ and concluding that they are equal at every point. (Which, as you say, is natural.)
In the second case, you are considering the derivative of a function at two different points. If you do the same for the original function $g$, you will find similarly that $Dg_{a}=-Dg_{-a}$, which does not contradict anything.
Likewise, if you put $f:x \mapsto x^2$ and $g: x \mapsto f(-x)$, you will find that the derivative is the same at every point, which would be the similar thing to do with respect to the first case. |
Why $(nkn^{-1})k^{-1}=n(kn^{-1}k^{-1})\in N\cap K$? | $(nkn^{−1})k^{−1}$ lies in $K$ since $K$ is normal, and
$n(kn^{−1}k^{−1})$ lies in $N$ since $N$ is normal.
Since both expressions are the same, they lie in the intersection $N\cap K$. |
Series Expansion of $\arcsin\left(\frac{a}{a+x}\right)$ | To answer my own question, the most accurate I can get is by using
$$
\arcsin\left(x\right)=2\arctan\left(\frac{x}{1+\sqrt{1-x^{2}}}\right)
$$
so
$$
\arcsin\left(\frac{a}{a+x}\right)=\frac{\pi}{2}-\frac{\sqrt{2}}{\sqrt{a}}x^{1/2}+\frac{5}{6a^{3/2}\sqrt{2}}x^{3/2}+\cdots
$$ |
Finding generating series from equation | $$f(x)=\frac{1-x^3}{1-x}f(x^2)=\frac{1-x^3}{1-x}\frac{1-x^6}{1-x^2}f(x^4)=
\cdots.$$
Therefore
$$f(x)=f(0)\prod_{k=0}^\infty\frac{1-x^{3\cdot 2^k}}{1-x^{2^k}}.$$ |
Find the Pontrjagin dual of $\Bbb Z_n$ and $\Bbb Z$ | If $\phi: \Bbb{Z}_n \to \Bbb{C}^*$, put $z = \phi(1) \in \Bbb{C}^*$. Then,
\begin{align}
z^n &= \underbrace{\phi(1) \cdot \cdots \cdot \phi(1)}_{n} \\
&= \phi(\underbrace{1 + \cdots + 1}_{n}) \\
&= \phi(n \cdot 1) \\
&= \phi(0) \\
&= 1,
\end{align}
so $z$ is an $n$-th root of unity. By the fundamental theorem of algebra, there are exactly $n$ of these. If $\zeta$ is a primitive $n$-th root of unity, then $z = \zeta^k$ for some $k \in \{0, 1, \ldots, n-1\}$. This parametrizes possible values of $z$, so the dual is a group of size $n$.
It turns out that the dual group is cyclic, so
$$
\operatorname{Hom}(\Bbb{Z}_n, \Bbb{C}^*) \cong \Bbb{Z}_n.
$$
Why? Say that $\psi: \Bbb{Z}_n \to \Bbb{C}^*$ with $\psi(1) = \zeta$, a primitive $n$-th root of unity. Then, the following calculation shows that any other $\phi: \Bbb{Z}_n \to \Bbb{C}^*$ is a multiple of $\psi$:
$$
\phi(1) = \zeta^k = \bigl( \psi(1) \bigr)^k = (k\psi)(1),
$$
and any homomorphism from $\Bbb{Z}_n$ is uniquely determined by where it sends $1$.
The case of $\operatorname{Hom}(\Bbb{Z}, \Bbb{C}^*)$ is easier. Any homomorphism $\phi: \Bbb{Z} \to \Bbb{C}^*$ is uniquely determined by $z = \phi(1)$. Unlike the case of the finite cyclic group, there are no restrictions on $z \in \Bbb{C}^*$. Thus,
$$
\operatorname{Hom}(\Bbb{Z}, \Bbb{C}^*) \cong \bigl( \Bbb{C}^*, \times \bigr).
$$ |
Rotating Around A Regular Polygon in Polar Coordinates | In your image suppose other vertices for the triangle on the right side are B, C because you have already drawn O vertex there.
Now some mathematical calculations.
$AB=\sin(36)$
$OB=r$
$PA=P\cos(36-\phi)$
$OA=\cos(36)$
Now apply: $a^2+b^2=c^2$
$OA^2+PA^2=PO^2$
$P^2\cos^2(36-\phi)+\cos^2(36)=P^2$
Solving you get:
$P=\sqrt{\frac{\cos^2(36)}{\sin^2(36-\phi)}}$
Is this the correct solution? Please let me know. |
Solve Diophantine equation: $xy+5y=2x+(y+2)^2$ | It is equivalent to
$$x(y-2)=y^2-y+4\iff x=\frac{y^2-y+4}{y-2}=y+1+\frac6{y-2}$$
for $y\neq2$.
EDIT
As @lab bhattacharjee suggested, we may also arrange the equation with $y$ as the subject.
$$y^2-(1+x)y+(2x+4)=0\tag1$$
For integer $y$, we must have some integer $z$ such that
$$\Delta=z^2=(1+x)^2-4(2x+4)=x^2-6x-15=(x-3)^2-24$$
$$\iff(x-3-z)(x-3+z)=24$$
So there exists some integer $a=x-3-z$ and $\frac{24}{a}=x-3+z$.
Thus we have $$a+\frac{24}a=2x-6\iff x=\frac a2+\frac{12}a+3$$
So $a=\pm2,\pm4,\pm6,\pm12$ (both $a$ and $\frac{24}a$ must be even for $a+\frac{24}a$ to be even), which gives $x=8,10,-2,-4$. Substitute each value of $x$ into $(1)$ and we get
$$(x,y)=(8,4),(8,5),(10,3),(10,8),(-2,0),(-2,-1),(-4,1),(-4,-4)$$
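A brute-force scan confirms the list (a minimal sketch; the window is arbitrary but safely large, since $y-2$ must divide $6$ and hence $|y-2|\le 6$):

```python
sols = [(x, y) for x in range(-100, 101) for y in range(-100, 101)
        if x*y + 5*y == 2*x + (y + 2)**2]
print(sols)
```

This prints exactly the eight pairs above. |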
Derivative of squared form | I'll assume you know how to calculate the derivative of the inverse function, i.e.
$$\eqalign{
h(z)=g^{-1}(z) \implies \frac{dh}{dz} = h'(z) = \frac{1}{g'(h(z))} \cr
}$$
Define the vectors
$$\eqalign{
e_k &\in {\mathbb R}^{n\times 1},\quad
\varepsilon_k \in {\mathbb R}^{2\times 1}&{\rm (standard\,basis)} \cr
z_k &= e_k^TX_r\beta_r,\quad p_k = h(z_k),\quad q_k = h'(z_k)&\in{\mathbb R}^{1}\cr
w_k &= \pmatrix{e_k^TX_c\beta_c\\p_k} - \pmatrix{\varepsilon_1^TY_k\\\varepsilon_2^TY_k}&\in{\mathbb R}^{2\times 1} \cr
&= \big(e_k^TX_c\beta_c-\varepsilon_1^TY_k\big)\varepsilon_1 + \big(p_k-\varepsilon_2^TY_k\big)\varepsilon_2 \cr
dw_k &= \varepsilon_2\,dp_k = \varepsilon_2q_k\,dz_k \cr
}$$
Write the $u_k$ vectors in terms of these new vectors. Then find the differential and gradient.
$$\eqalign{
u_k &= w_k^T\Sigma^{-1}w_k = \Sigma^{-1}:w_kw_k^T \cr
du_k &= 2\Sigma^{-1}w_k:dw_k \cr
&= 2\Sigma^{-1}w_k:\varepsilon_2\,q_k\,dz_k \cr
&= \Big(2\varepsilon_2^T\Sigma^{-1}w_k\Big)q_k\Big(e_k^TX_r\,d\beta_r\Big) \cr
\frac{\partial u_k}{\partial\beta_r}
&= \Big(2\varepsilon_2^T\Sigma^{-1}w_k\Big)q_k\Big(X_r^Te_k\Big) \cr
&= \Big(2\varepsilon_2^T\Sigma^{-1}w_k\Big)\,X_r^TQe_k \cr
}$$
where the matrix $\,Q={\rm Diag}(q_k)\in{\mathbb R}^{n\times n}$
In some intermediate steps, a colon was used to denote the trace/Frobenius product, i.e.
$$\eqalign{A:B = {\rm Tr}(A^TB)}$$ |
Counting distinct sets of pairwise distances for some number of passengers loaded onto different cars of a ferris wheel | Your numerical data don’t agree with your statement of the problem, though they’re related. I’ll solve the problem as you stated it and show what related problem your numerical data solve.
Let the riders be $A_0,A_1,\dots,A_{N-1}$. Number the cars clockwise from $0$ through $M-1$, with $A_0$ occupying Car $0$. If $A_k$ occupies car $c_k$, the distance from $A_0$ to $A_k$ is $c_k$. Thus, the relative placements of all of the riders are completely determined by the list $\langle c_1,\dots,c_{N-1}\rangle$ of distances from $A_0$, and the other distances between riders can be inferred from these, since $M$ is known. The number of distinct ordered lists of distances is therefore equal to the number of lists of the form $\langle c_1,\dots,c_{N-1}\rangle$, where $c_k$ is the distance from $A_0$ to $A_k$. This is simply the number of $(N-1)$-tuples of distinct integers from the set $\{1,2,\dots,M-1\}$, which is $$(M-1)(M-2)\cdots(M-N+1)=\frac{(M-1)!}{(M-N)!}\;:$$ there are $M-1$ possible locations for $A_1$, after which $A_2$ can occupy any of the $M-2$ remaining cars, and so on.
For $M=N=3$, for instance, there are $(3-1)(3-2)=2$ possible lists, $\langle 1,2\rangle$ and $\langle 2,1\rangle$, depending on whether the clockwise seating order is $A_0,A_1,A_2$ or $A_0,A_2,A_1$. Similarly, for $N=3$ and $M=4$ there are $6$ possible lists, not $3$: $\langle 1,2\rangle,\langle 2,1\rangle,\langle 1,3\rangle,\langle 3,1\rangle,\langle 2,3\rangle$, and $\langle 3,2\rangle$.
Suppose, now, that you don’t care about the identities of the riders other that $A_0$. That is, you’re interested just in the set of inter-rider distances, not in which distance goes with which pair of riders. Then all you need to know is which set of $N-1$ cars are occupied by riders $A_1$ through $A_{N-1}$: you need to know the set $\{c_1,\dots,c_{N-1}\}$ of numbers, but not the order in which its members are associated with riders $A_1$ through $A_{N-1}$. You can still infer from this set what all of the other inter-rider distances are; you just don’t know which distance goes with which pair of riders. For this problem you’re counting the $(N-1)$-element subsets of the $(M-1)$-element set $\{1,\dots,M-1\}$, a number which is given by the binomial coefficient
$$\binom{M-1}{N-1}=\frac{(M-1)!}{(N-1)!(M-N)!}\;.$$
This expression gives the numbers from your brute force calculations.
The relationship between the two counts is straightforward: each set of $N-1$ car numbers can be arranged in an ordered list in $(N-1)!$ ways, so the number of lists must be $(N-1)!$ times the number of unordered sets, and indeed
$$\frac{(M-1)!}{(M-N)!}=(N-1)!\binom{M-1}{N-1}\;.$$
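Both counts are easy to confirm by enumeration (a minimal sketch; $M=6$, $N=3$ is an arbitrary test case):

```python
from itertools import permutations
from math import comb, factorial

M, N = 6, 3
lists = set(permutations(range(1, M), N - 1))     # ordered (c_1, ..., c_{N-1})
sets = {frozenset(t) for t in lists}              # unordered {c_1, ..., c_{N-1}}
print(len(lists), factorial(M - 1) // factorial(M - N))   # 20 20
print(len(sets), comb(M - 1, N - 1))                      # 10 10
```

The ratio of the two counts is $(N-1)!=2$, as derived above. |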
Proof that G is Abelian given $xy=zx\implies y=z$ for $x,y,z \in G$ | This is just Ihf's proof simplified:
By the given condition:
$$x(yx)=(xy)x \Rightarrow yx=xy$$ |
Totally ordered sets | Yes, so long as $T$ is nonempty. Since $T$ is totally ordered, then minimal is equivalent to minimum (one direction is easy, the other follows by totality/comparability). Similarly for maximal and maximum. |
If a functional sequence $f_n$ converges to $f$ uniformly then the mean of $f_n$ converges to $f$ proof help | You were almost at the end ;-)
From here: $|\frac{1}{n}\sum_{i=1}^n f_i(x) - f(x)| = \frac{1}{n}|\sum_{i=1}^n(f_i(x)-f(x))| \leq \frac{1}{n}\sum_{i=1}^n|f_i(x)-f(x)|$
you can use the fact that $\exists N,\forall n>N, |f_n(x) - f(x)| \leq\varepsilon$. Let's call $N$ that integer.
Then $\frac{1}{n}\sum_{i=1}^n|f_i(x)-f(x)| = \frac{1}{n}\sum_{i=1}^N|f_i(x)-f(x)| + \frac{1}{n}\sum_{i=N+1}^n|f_i(x)-f(x)|$
$ \leq \frac{1}{n}\sum_{i=1}^N|f_i(x)-f(x)| + \frac{(n-N)\varepsilon}{n}$
$\leq \frac1n N(L+M)+ \frac{(n-N)\varepsilon}{n}$ (note that this bound does not depend on $x$ anymore)
For sufficiently large $n$, the first term is close to $0$, and the second is less than $\varepsilon$. |
number of subsets of the positive integers that whose members sum to n | So you're going to find the different ways for partitioning a number. There is a very nice explanation here:
https://en.wikipedia.org/wiki/Partition_(number_theory) |
When is a matrix a constant multiple of unit matrix | For part 1: if eigenvectors $v_1, v_2$ of an $n \times n$ matrix $A$ share an eigenvalue $\lambda$, then so do all linear combinations of $v_1, v_2$:
$$
A(a_1v_1 + a_2v_2) = a_1A(v_1) + a_2A(v_2) = a_1\lambda v_1 + a_2 \lambda v_2 = \lambda (a_1v_1 + a_2v_2).
$$
Thus, if $A$ has a set of eigenvectors sharing an eigenvalue $\lambda$ which span the whole space, then in fact every vector of the space is an eigenvector. In particular, this is true for the standard basis $e_1, \ldots, e_n$. Now note that the $i$th column of $A$ is equal to $Ae_i$, and
$$
Ae_i = \lambda e_i,
$$
so that $A = \lambda I_n$.
For the second part: hint, consider the matrix
$$
B = \begin{pmatrix}
2 & 1 & 0\\
0 & 2 & 1 \\
0 & 0 & 2
\end{pmatrix}.
$$
Also, for any invertible matrix $P$, consider $PBP^{-1}$. |
Is $l_p$ a closed subspace of $c_0$? | No. Let $a=(k^{-1/p})_k$ and
$$
a^n=(1,2^{-1/p},\dots,n^{-1/p},0,0,0,\dots).
$$
Then $a^n$ converges to $a$ in $c_0$, but $a\notin\ell^p$. |
Prove that $\sum_{n=0}^\infty e^{-nz}$ is analytic in the right half plane $\text{Re}(z)>0$ | Let $K$ be a compact subset of $\{z\in \mathbb C \ | \ \text{Re}(z)>0\}$.
Then, there exist $a,b\in\mathbb R$ such that $0<a\leq b$ and $K\subset \{z\in \mathbb C \ | \ a\leq \text{Re}(z)\leq b\}$.
Observe that $\displaystyle \sum_{n=0}^{+\infty}e^{-nz}$ converges uniformly on $K$: indeed $|e^{-nz}|=e^{-n\operatorname{Re}(z)}\le e^{-na}$ on $K$, and $\sum_{n\ge0} e^{-na}$ converges since $a>0$, so the Weierstrass $M$-test applies; a locally uniform limit of analytic functions is analytic. |
What is the property of the eigenvalues of this matrix? | By the given condition, we see that $A=\frac12I+K$ for some skew-Hermitian matrix $K$. Since skew-Hermitian matrices have purely imaginary eigenvalues, it follows that the real part of every eigenvalue of $A$ is $\frac12$.
The eigenvalues can be non-real. E.g. consider $A=\frac12\pmatrix{1&-1\\ 1&1}$. Its eigenvalues are $\frac1{\sqrt{2}}e^{\pm i\pi/4}$. |
Characterization of separable spaces | The answer is NO. Let $X=\ell^{\infty}$ and $x^{*}_n(x)=x_n$. Then $x_n^{*}$ is a continuous linear functioinal on $X$ and $\|x\|=\sup | \langle x, x_n^{*} \rangle|$ for all $x \in X$ but $X$ is nor separable. |
How does Hilbert's axiomatization relate to set theory? | The resolution is that Hilbert's axioms are not first-order axioms. They refer to sets (or "systems") as well as to individual objects.
This is most visible in the axiom of continuity, which directly refers to sets. The purpose of this axiom is to prevent issues which affect Euclid's axioms. One such issue is that the rational plane $\mathbb{Q} \times \mathbb{Q}$ satisfies many of the axioms of geometry (including the natural FOL versions of Euclid's axioms). Some axiom is needed to prove, for example, that the line $y = x$ intersects the circle of radius $1$ centered at the origin.
Hilbert's axioms are most naturally viewed in second-order logic. In that framework, "set" is a logical concept (like "equality" and "or") rather than a term of the theory being studied.
At the time that Hilbert was working on geometry, the distinction between first-order and second-order logic was not yet understood. So it is not at all surprising that Hilbert would not make any distinction, and would not view "set" as an undefined term of geometry. From the 19th century point of view there was simply "logic", which had not yet been formalized. It took until the 1940s or 1950s for our contemporary understanding of logic to be completely developed.
There are first-order axiom systems for geometry, such as Tarski's axioms. These are different from Hilbert's axioms in various ways, but in particular they do not try to include a "completeness" or "continuity" axiom analogous to Hilbert's axiom. |
Find the first term and the difference of an arithmetic progression, given two relations between its terms | The sum of the first $15$ terms is $\frac{15}2 (2a + 14d)$, and the sum of the first $9$ terms is $\frac{9}{2}(2 a + 8d)$.
Their sum together is $279$, so
$$\frac{15}2 (2a + 14d) + \frac{9}{2}(2 a + 8d) = 279$$
is your second equation. |
Is the integral of computable functions computable? | Answering my own question:
First of all, I need to specify the range of $f_n$, which are reals. This means that we can think of $f_n$ as a Turing machine which given oracle $\omega$ and a rational $\epsilon>0$, outputs a rational $r$ such that $|r-f_n(\omega)|<\epsilon$. Denote this $r$ with $f_n^{(\epsilon)}(\omega)$.
Given that $\int f_n(\omega)d\omega$ will be a real, we need to show that for a given $\epsilon$, we can compute in finite time a rational $r$ such that $|r-\int f_n(\omega)d\omega|<\epsilon$.
Now some notation: For any $\omega\in 2^{\mathbb{N}}$, let $\omega| m$ be the string containing the first $m$ digits of $\omega$. Also, for a finite string $x$ of length $m$, let $\Omega_x = \{ \omega\in 2^{\mathbb{N}}: \omega | m=x\}$. Finally, for a finite string $x$, let $x0_{\infty}$ be the infinite sequence which starts with $x$ and simply adds an infinite amount of zeros at the end.
For any $\omega$, the computation of $f_n^{(\epsilon)}(\omega)$ uses only a finite part of $\omega$, say the first $m$ digits. This means that $f_n^{(\epsilon)}$ is constant on $\Omega_{\omega | m}$. As this holds for any $\omega$, these sets $\Omega_{\omega | m}$ ($m$ can dependent on $\omega$) form an open covering of the whole space $2^{\mathbb{N}}$. Since $2^{\mathbb{N}}$ is compact, there are finitely many $\Omega_{\omega_1 | m_1}, \Omega_{\omega_2 | m_2},\ldots,\Omega_{\omega_k | m_k}$ which cover $2^{\mathbb{N}}$. This means that for any $\omega$, the computation of $f_n^{(\epsilon)}$ uses at most $M=\max\{m_1,m_2,\ldots,m_k \}$ digits.
The final ingredient is that we can effectively check which digits of $\omega$ were used for the computation of $f_n^{(\epsilon)}(\omega)$.
Now for the final algorithm: The inputs are $n$ and a rational $\epsilon>0$.
For any $m\in \mathbb{N}$, consider the sum
$$ \sum_{\text{string }x \text{ of length }m} 2^{-m} f_n^{(\epsilon)}(x0_{\infty}).$$
At each step, check if the computation required more than $m$ digits, and if it did, move on to $m+1$. If for all $x$ this did not happen, then $m=M$ and the sum above is equal to $\int f_n^{(\epsilon)}(\omega)d\omega$, which differs by at most $\epsilon$ from $\int f_n(\omega)d\omega$. |
Riemann Sums - $ \lim_{n\to +\infty} \prod_{k=1}^{n}\left(1+{k^2\over n^2}\right)^{1\over n} = ? $ | $$\prod_{k=1}^{n}\left(1+\frac{k^2}{n^2}\right)^{1/n} = \exp\left[\frac{1}{n}\sum_{k=1}^{n}\log\left(1+\frac{k^2}{x^2}\right)\right]$$
hence, by Riemann sums and integration by parts:
$$ \lim_{n\to +\infty}\prod_{k=1}^{n}\left(1+\frac{k^2}{n^2}\right)^{1/n} = \exp\left[\int_{0}^{1}\log(1+x^2)\,dx\right]=\color{red}{2\cdot e^{\pi/2-2}}.$$
Explanation:
$$\begin{eqnarray*} \int_{0}^{1}\log(1+x^2)\,dx &=& \left.x\log(1+x^2)\right|_{0}^{1}-\int_{0}^{1}\frac{2x^2}{1+x^2}\,dx\\&=&\log(2)-2+2\int_{0}^{1}\frac{dx}{1+x^2}\\&=&\log(2)-2+2\arctan(1). \end{eqnarray*}$$
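A direct numerical check (a minimal sketch; $n=10^6$ is an arbitrary cutoff):

```python
import math

n = 10**6
log_mean = sum(math.log1p((k / n)**2) for k in range(1, n + 1)) / n
print(math.exp(log_mean), 2 * math.exp(math.pi/2 - 2))   # both approximately 1.302
```

The two printed values agree to several decimal places. |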
Finding the characteristic function of a 2nd order ODE with variables on both sides | $$x^2y''+xy'+y=\ln(x)$$
Substitute $x=e^t$ the equation becomes:
$$y''+y=t$$
Which is easier to integrate: the general solution is $y=c_1\cos t+c_2\sin t+t$, that is, $y=c_1\cos(\ln x)+c_2\sin(\ln x)+\ln x$. |
What is the complement of an element of a generated σ-algebra? | This problem got solved. I cite Hayden's comment(the 1st comment) as an answer so that I can close it:
"$X$ has to be a subset of $Ω$ (this is part of what it means to be a σ-algebra on $Ω$), and yes, $X^c = Ω - X$". |
Finite Element method Implementation | When you apply the FEM to a differential equation, you first write the problem in the weak/variational form, for a narrower solution space, for example
$${V} = \{ { \omega :| \omega | + |\omega ' | < \infty } \}.$$
Then, to actually solve the problem, you have to discretize that solution space, i.e., you have to consider a finite-dimensional subspace of the solution space. Most likely, you decide to consider your solution as a linear combination of hat functions and hence
$V_h = \left\{ {{\omega _h} \in V:{\omega _h}{\textrm{ is piecewise linear, continuous on the chosen mesh}}}\right\}$
and the basis of this space consists of $\{\varphi_j\}^{N}_{j=1}$, where
$${\varphi _j}\left( x \right) = \left\{ {\begin{array}{*{20}{c}}{\frac{{x - {x_{j - 1}}}}{h}}&{,x \in [{x_{j - 1}},{x_j}]}\\{\frac{{{x_{j + 1}} - x}}{h}}&{,x \in [{x_j},{x_{j + 1}}]}\\0&{,x \notin [{x_{j - 1}},{x_{j + 1}}]}\end{array}} \right. , j=1,\cdots ,N.$$
At this point, I hope you've already figured out the answer to your question :)
Each time you modify your mesh, you also modify your hat functions, that is, the basis of your discrete solution space. Thus, to find your new solution in the new basis, you have to perform the calculations all over again.
-EDIT-
If you're looking for a method that allows you to avoid computing your solution all over again, then you should read about Structured Adaptive Mesh Refinement (SAMR), used within the finite difference method (FDM) framework.
A good way to get a grasp of the concept is by reading the following master thesis:
"A patch-based partitioner for
Structured Adaptive Mesh Refinement" by A. Vakili |
Can we write a closed subset of $\mathbb R^d$ as a union of connected sets? | Any subset of $\mathbb R^{d}$ is a union of singleton sets which are connected. |
Finding the sixth roots of $-8i$ | Here is how you advance
$$ z^6=-8i = 8 e^{-\frac{\pi i}{2}}=8 e^{-\frac{\pi i}{2}+2k\pi i}\implies z= 8^{1/6} e^{-\frac{\pi i}{12}+\frac{k\pi i}{3}},\quad k=0,1,2,3,4,5. $$ |
Justify indirect proof rule with law of excluded middle | $\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline #2\end{array}}$
Am I approaching what is requested in a proper manner ?
Yes.
You have used Explosion (ex falso quodlibet.) and now need to use LEM and disjunction elimination.
Also if you do not wish to combine the first and third proof into one, then you need to add a conditional introduction so that you can validly justify deriving $\bot$ from the second assumed $\neg A$ (with conditional elimination).
(Plus, you do not technically need to reiterate the assumption in the second subproof, but some proof checkers may require it.)
$$\fitch{}{\fitch{~~~~~1.~~\neg A}{~~~~~~~\vdots\\~~~~~k.~~\bot}\\k{+}1.~~
\neg A\to\bot\hspace{4ex}{\to}\mathsf I~1{-}(k)\\\fitch{k{+}2.~~A\hspace{8ex}}{k{+}3.~~A\hspace{8ex}\mathsf R~(k{+}2)}\\\fitch{k{+}4.~~\neg A\hspace{6.5ex}}{k{+}5.~~\bot\hspace{7.5ex}{\to}\mathsf E~(k{+}4), (k{+}1)\\k{+}6.~~A\hspace{8ex}\mathsf X~(k{+}5)}\\k{+}7.~~A\vee\neg A\hspace{5ex}\textsf{LEM}\\k{+}8.~~A\hspace{11ex}{\vee}\mathsf E~(k{+}7),(k{+}2){-}(k{+}3),(k{+}4){-}(k{+}6)}$$ |
How to weight three different variables to create a ranking | If $x_1,\ldots,x_n$ are the scores in $n$ categories assigned to some school, pick some $p$ from $(0,\infty)$ and compute the total score as $$
\bar x = \left(\frac{1}{n}\sum_{k=1}^n \textrm{sgn}(x_k)\sqrt[p]{|x_k|} \right)^p \text{ where }\textrm{sgn}(x) = \begin{cases} -1 & \text{if $x < 0$} \\ 0 & \text{if $x=0$} \\ 1 &\text{if $x > 0$} \end{cases}
$$
If $x_1 = \ldots = x_n = x$, you always get $\bar x = (\frac{1}{n}n\,\textrm{sgn}(x)\sqrt[p]{|x|})^p = \textrm{sgn}(x)\,|x| = x$. The larger $p$ gets, the less large scores in a single category influence the value. For example, if $x_1=x_2=0$, $x_3=x$, then $\bar x = \frac{x}{3^p}$.
To incorporate weights into this, simply replace the $\frac{1}{n}\sum \ldots$ part, which simply averages the $p$-th roots of the $x_k$, by a weighted average. If your weights are $w_1,\ldots,w_n$, the weighted total score is $$
\bar x = \left(\frac{1}{w}\sum_{k=1}^n w_k\textrm{sgn}(x_k)\sqrt[p]{|x_k|} \right)^p
\text{ where $w = w_1 + \ldots + w_n$.}
$$
You'll have to play with various values for $p$ to see which work well. Generally, if $p > 1$, values close to zero will influence the result more than values far away from zero. On the other hand, if $p < 1$, then large values will have more influence. In the limit $p \to 0$, $\bar x$ is the value furthest away from zero, with the other values having no influence (i.e., the opposite of what you seem to want). So you'll want to only look at values $p > 1$.
Note that the business with $\textrm{sgn}$ and the absolute value $|x_k|$ is only necessary if the scores can be negative. It simply works around the issue that $\sqrt[p]{x_k}$ isn't defined for negative $x_k$, so what we do is we take the $p$-th root of the absolute value $|x_k|$, and then add the original sign of $x_k$ back by multiplying with $\textrm{sgn}(x_k)$.
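Here is the weighted scoring rule in code (a minimal sketch; the example scores and weights are made up):

```python
import math

def score(xs, ws, p):
    """Average the signed p-th roots of the scores, then raise back to the p."""
    s = sum(w * math.copysign(abs(x) ** (1 / p), x)
            for x, w in zip(xs, ws)) / sum(ws)
    return math.copysign(abs(s) ** p, s)

print(score([4, 4, 4], [1, 1, 1], p=2))   # 4.0: equal scores are preserved
print(score([0, 0, 4], [1, 1, 1], p=2))   # 0.444... = 4/3**2, one high score damped
print(score([0, 0, 4], [1, 1, 2], p=2))   # 1.0: weighting the strong category up
```

Raising $p$ damps single-category spikes further, exactly as described above. |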
Existence of a certain bouned linear functional in the dual of a Hilbert space | Hint: Try a linear functional of the form $\psi(f) = (f,g)_H$ for some $g \in H$. (In fact, by the Riesz representation theorem, every bounded linear functional has this form.) What should $g$ be to get $\psi$ as desired by the problem? |
Graphing a line of an expected value | The answer is that you can't if you want to place a scatter plot going through the lines. Knowing only the mean and variance of $X$ is not enough to know its probability distribution unless, say, you assume it is Gaussian. But to plot the line means that you want to show how the $(x,y)$ pairs vary with respect to the three lines you mention. Of course you can plot $y=x-1$, $y=x-2$ and $y=x-3$. But that doesn't require knowing the distribution for $X$. |
Calculate cartesian coordinates from lattice points in hexagonal closest packing (HCP) | Try $\,\textrm{HCP}(i,j,k) =
(i+(j+m)/2,\, (3j+m)/\sqrt{12},\, k\sqrt{2/3})\,$ where $\,m=\textrm{mod}(k,2)\,$ and $\,i,j,k\,$ are any integers.
Check this with the OEIS sequence A004012 "Theta series of hexagonal close-packing.". What the formula expresses with the use of $\,m\,$ the parity of $\,k\,$ is that HCP is not a lattice but the union of two lattices. The first is centered at the origin and the second is a shifted copy of the first. |
Number of polynomial functions $\mathbb{Z}_3 \rightarrow \mathbb{Z}_3$ | All functions from $\mathbb{Z}_3$ to $\mathbb{Z}_3$ are polynomial functions.
And there are $3^3$ functions from $\mathbb{Z}_3$ to $\mathbb{Z}_3$.
This is because you have $3$ choices as to what $f(0)$ should be. And for every such choice, there are $3$ choices as to what $f(1)$ should be. And for every choice of what $f(0)$ and $f(1)$ are, there are $3$ choices for $f(2)$.
Remarks: $1.$ If $F$ is any finite field, all functions from $F$ to $F$ are polynomial functions. Let the field have $n$ elements $a_1,\dots,a_n$. Let $P_i(x)$ be the polynomial
$$\frac{1}{x-a_i}\prod_{j=1}^n (x-a_j).$$
Then for any function $f$, we can find constants $c_i$ such that
$$f(x)=\sum_i c_iP_i(x).$$
This is the same as the usual Lagrange interpolation process.
$2.$ In your case, any two distinct polynomials of degree $\le 2$ produce distinct polynomial functions, and all polynomial functions arise in this way. So your polynomial functions are all functions $f(x)=a_0+a_1x+a_2x^2$, where the $a_i$ range over $\mathbb{Z}_3$.
Polynomials of degree $\ge 3$ do not produce any new functions. You can think of this as a consequence of the fact that the function $x^3$ is the same function as the function $x$.
This gives us another way of seeing that all $27$ functions are polynomial functions. Suppose that $P(x)$ and $Q(x)$ are polynomials of degree $\le 2$ which determine the same function. Then $P(x)-Q(x)$ determines the identically $0$ function. But a non-zero polynomial cannot have more roots than its degree. There are therefore $27$ different polynomial functions that only use polynomials of degree $\le 2$. Since there are only $27$ functions in total, any function must be given by a polynomial of degree $\le 2$. The same idea works in any finite field.
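A direct enumeration confirms the count (a minimal sketch):

```python
from itertools import product

funcs = {tuple((a0 + a1*x + a2*x*x) % 3 for x in range(3))
         for a0, a1, a2 in product(range(3), repeat=3)}
print(len(funcs))   # 27 = 3**3: every function Z_3 -> Z_3 is polynomial
```

All $27$ value tables are distinct, so polynomials of degree $\le 2$ already give every function. |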
Let $(A; ≼)$ be any poset. Prove that the relation $≺ := ≼ ∩ ≠$ on A is transitive. | Let $a, b, c \in A$ be such that $a ≺ b$ and $b ≺ c$. By the definition of $≺$, this means that$ (1) a ≼ b$, $(2) a \neq b$, $(3) b ≼ c$, and $(4) b \neq c$. From (1) and (3), and the transitivity of $≼$, it follows that $a ≼ c$. It remains to show that $a \neq $c. Assume towards a contradiction that $a = c$. This, together with (1), implies that c ≼ b. But this, together with (3) and the antisymmetry of $≼$, implies that $ b = c$, which is a contradiction to (4).
Overall, we showed that for any $a, b, c ∈ A$ such that $a ≺ b$ and $b ≺ c$, we have $a ≼ c$ and $a \neq c$. This, by the definition of $≺$, means that $a ≺ c$. |
Finding eigenvalues of a system of ODE | First consider the initial values $y(0) = 0, \; y'(1) = 0$. After you are able to do this problem, using your initial values won't be as difficult.
You wish to find the eigenvalues and eigenfunctions of $y''(t) + \lambda y(t) = 0$ such that $0 \leq t \leq 1$ and $y(0) = 0, \; y'(1) =0 $.
Let $x_{1}(t) = y(t)$ and $x_{2}(t) = y'(t)$. Then the initial values become $x_{1}(0) = 0$ and $x_{2}(1) = 0$.
Let $\mathbf{X} = \left ( \begin{matrix}x_{1}(t) \\ x_{2}(t) \end{matrix}\right )$. We now have the following first order system $$\mathbf{X}' = \left ( \begin{matrix}0 & 1\\ -\lambda & 0 \end{matrix}\right ) \mathbf{X}$$
Where $\mathbf{A} = \left ( \begin{matrix}0 & 1\\ -\lambda & 0 \end{matrix}\right )$ is the coefficient matrix.
I find the general solution to be $\mathbf{X} = c_{1}\left ( \begin{matrix}1 \\ i\sqrt[]{\lambda} \end{matrix} \right ) e^{i\sqrt[]{\lambda}t} + c_{2} \left ( \begin{matrix}1 \\ -i\sqrt[]{\lambda} \end{matrix} \right )e^{-i\sqrt[]{\lambda}t}$.
Now we examine $\mathbf{X}$ for which values of $\lambda$ allow for a non-trivial solution. We split $\lambda$ into three cases. Namely, $\lambda < 0, \; \lambda =0, \; \lambda > 0$. Note that any real number possesses a square root. Therefore, for the first case, $\lambda < 0$, let $\lambda = -k^2$ for $k \in \mathbb{R}$. Then we obtain $$\mathbf{X} = c_{1} \left ( \begin{matrix}1 \\ -k \end{matrix} \right )e^{-k t} + c_{2} \left ( \begin{matrix}1 \\ k \end{matrix} \right )e^{k t}$$ By the first initial value condition $x(0)=0$, we have $$c_{1} + c_{2} = 0 \implies c_{1} = -c_{2}$$ Now by the second initial value condition we have $$-c_{1} k e^{-k} + c_{2}ke^{k} = 0$$ Then $c_{2} k \left ( e^{-k}+e^{k} \right ) = 0$. Note that $e^{-k} + e^{k} = 2 \cosh(k)$. Either $c_{2}k = 0$ or $2\cosh(k) = 0$. Since $2\cosh(k) = 0$ for nonreal $k$, then $c_{2}k = 0$ and since $k \neq 0$ then $c_{2} = 0 \implies c_{1} = 0$.
Hence, for $\lambda < 0$, $\mathbf{X}$ is trivial.
Now continue this process for $\lambda = 0$ and $\lambda > 0$ until you find $\mathbf{X}$ to be nontrivial. Then you can examine the eigenvalue and eigenfunction corresponding to that $\lambda$. |
Is "the reals" a slang term for $\mathbb{R}^d$ where $d = 1$? | ”The reals" are indeed the real numbers, though I don't know if I'd call it a slang term. |
Singular values and trace norm of a submatrix | The inequality $\sigma_1(A)\ge\sigma_1(B)\ge\sigma_2(A)\ge\sigma_2(B)\ge\cdots\ge\sigma_{m-1}(A)\ge\sigma_{m-1}(B)\ge\sigma_m(A)$ holds if $A$ is a (square) positive semidefinite matrix. It doesn't hold in general, not even if $A$ is Hermitian. E.g. when
$$
A=\pmatrix{0&0&1\\ 0&0&0\\ 1&0&0},
$$
the three singular values of $A$ are $1,1,0$ but the two singular values of $B$ are $0,0$. In this counterexample, we also have $\sum_{i=2}^m\sigma_i(A)=1>0=\sum_{i=1}^{m-1}\sigma_i(B)$.
Another counterexample: let
$$
A=\pmatrix{0&3&0\\ 2&0&-2\\ 1&0&1}.
$$
The three singular values of $A$ are $3,2\sqrt{2},\sqrt{2}$ and the two singular values of $B$ are $\sqrt{5}$ and $0$. Here we have $\sigma_2(A)=2\sqrt{2}>\sqrt{5}=\sigma_1(B)$ and $\sum_{i=2}^m\sigma_i(A)=3\sqrt{2}>\sqrt{5}=\sum_{i=1}^{m-1}\sigma_i(B)$.
It is true that $\sigma_i(A)\le\sigma_{i-2}(B)$ for $3\le i\le\min\{m,n\}$. Actually, if we delete a row (resp. a column) of $A$ to obtain a matrix $C$, we get $\sigma_j(A)\le\sigma_{j-1}(C)$. Similarly, if we delete a column (resp. a row) of $C$ to obtain a matrix $B$, we get $\sigma_k(C)\le\sigma_{k-1}(B)$. Combine the two inequalities, we get $\sigma_i(A)\le\sigma_{i-2}(B)$.
Interestingly, the inequality $\sigma_j(A)\le\sigma_{j-1}(C)$ can be obtained from the interlacing inequality $\lambda_1(A)\ge\lambda_1(B)\ge\cdots\ge\lambda_{m-1}(A)\ge\lambda_{m-1}(B)\ge\lambda_m(A)$ for eigenvalues of Hermitian matrices. For a proof, see corollary 7.3.6 of Horn and Johnson's Matrix Analysis (2nd ed.). |
Prove that a recurrent sequence $u_n$ is convergent if $\lim \limits_{n \to \infty}(u_{n+1}-u_n)=0$ | Lemma : If $(u_n)$ is a sequence such that $u_{n+1} - u_n \rightarrow 0$, then the set of all the adherence value (the word in french is "valeur d'adhérence") is an interval.
Proof: Let $x$ and $y$ be two different adherence values (if they exist). Let $c\in ]x,y[$. To show it's an adherence value of $u$, we just need to show that $$\forall \varepsilon >0, \forall N\in \mathbb{N}, \exists n\geqslant N, |u_n -c| \leqslant \varepsilon$$
We take $\varepsilon >0$. Since $u_{n+1} - u_n \rightarrow 0$, there exists $N$ such that for all $n\geqslant N$, $|u_{n+1} - u_n|\leqslant \varepsilon$. Since $x$ is an adherence value of $u$, with $x<c$, there exists a rank $p\geqslant N$ such that $u_p <c$. Since $y$ is also one with $y>c$, there exists $q>p$ such that $u_q >c$. Let's consider the smallest $n>p$ such that $u_n >c$. Then we have $$u_n >c, u_{n-1}\leqslant c \text{ and } |u_n-u_{n-1}|\leqslant \varepsilon$$
So $u_n$ is in $]c, c+\varepsilon]$, which gives the result
Proof of the theorem :
We first note that all adherence values are fixed points of $f$.
From the Lemma, we get that if we have $2$ adherence values, for instance $x$ and $y$, then $[x,y]$ consists only of adherence values, and so of fixed points.
That means that, since we have to go from $x$ to $y$ with jumps that tend to $0$, once the length of the jumps becomes strictly smaller than $|y-x|/2$ the sequence takes a value in $[x,y]$, hence takes a fixed point as a value, so that the sequence is constant after this rank.
That contradicts the fact that $x$ and $y$ are adherence values.
So $(u_n)$ converges. |
Prove $x \notin x$ without regularity? | A Quine atom is a solution to $x=\{x\}$. Working within ZFC we can construct a class model of ZFC${}-{}$Regularity${}+{}$"there exist $\aleph_0$ Quine atoms", by defining the cumulative hierarchy on top of a set of postulated atoms:
Let $A = \{0\}\times \mathbb N $ and define by transfinite induction
$$ U_\alpha = A \cup \{1\} \times \Big[ \mathcal P(\bigcup_{\beta<\alpha} U_\beta ) \setminus \{ \{a\} \mid a\in A\}\Big]$$
and then let
$$ \mathbf U = \bigcup_{\alpha\in\mathbf{ON}} U_\alpha $$
Further define the relation $\tilde \in $ by
$$ x \mathop{\tilde\in} y \iff \begin{cases} x=y & \text{if }y=\langle 0,z\rangle \\ x\in z & \text{if }y=\langle 1,z\rangle \end{cases} $$
Then, jumping to the metalevel, we can see that $(\mathbf U,\tilde\in)$ is a model for every axiom of ZFC except for Regularity, and that each $\langle 0,n\rangle\in\mathbf U$ is a Quine atom.
Therefore, if ZFC is consistent, then ZFC $-$ Regularity does not prove $x\notin x$. |
Modulus of Elasticity | Force is proportional to deformation of the spring. Bulb mass plays no role.
$$ 2 \cdot \dfrac{6-2}{6} = 1.333\ \mathrm{N} $$ |
Maximum distance after 99 steps with 90 degree turns after each | Should be 100m. Any additional right turns just take you back to your starting point. Therefore you go 100m by going right left right left... and with the 100 right turns remaining you simply go in circles. |
I would like to calculate $\lim_ {n \to \infty} {\frac{n+\lfloor \sqrt{n} \rfloor^2}{n-\lfloor \sqrt{n} \rfloor}}$ | You may observe that, as $n \to \infty$,
$$
\begin{align}
{\frac{n+\lfloor \sqrt{n} \rfloor^2}{n-\lfloor \sqrt{n} \rfloor}}&={\frac{2n+(\lfloor \sqrt{n} \rfloor-\sqrt{n})(\lfloor \sqrt{n} \rfloor+\sqrt{n})}{n-\lfloor \sqrt{n} \rfloor}}\\\\
&={\frac{2+(\lfloor \sqrt{n} \rfloor-\sqrt{n})(\lfloor \sqrt{n} \rfloor+\sqrt{n})/n}{1-\lfloor \sqrt{n} \rfloor/n}}
\\\\& \to 2
\end{align}
$$ since, as $n \to \infty$,
$$
\left|\frac{\lfloor \sqrt{n} \rfloor}{n}\right|\leq\frac{\sqrt{n}}{n} \to 0
$$ and
$$
\left|\frac{(\lfloor \sqrt{n} \rfloor-\sqrt{n})(\lfloor \sqrt{n} \rfloor+\sqrt{n})}{n}\right|\leq\frac{2\sqrt{n}}{n} \to 0.
$$
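A quick numerical check (a minimal sketch):

```python
import math

for n in [10, 10**3, 10**6, 10**9]:
    f = math.isqrt(n)
    print(n, (n + f * f) / (n - f))   # approaches 2
```

The ratio approaches $2$ at rate $O(1/\sqrt n)$. |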
Why is the Padé approximant typically written in this form? | Well the most obvious reason is that if we had $b_0$ replace $1$ it would be a useless degree of freedom (at least over a field). This is basically it. One area where rational functions arise is in signal processing and in the (frequency/Z domain) transfer functions of infinite impulse response filters, i.e. filters defined by a linear recurrence.
Let $x$ and $y$ be sequences and $y = R(Z^{-1})x$ where $Z^{-1}$ is a delay operator $(Z^{-1}x)_n = x_{n-1}$. We have $y + \sum_{k=1}^N b_k Z^{-k} y = \sum_{j=0}^M a_j Z^{-j} x$ which is $$y_n = \sum_{j=0}^M a_j x_{n-j} - \sum_{k=1}^N b_k y_{n-k}$$ This is a causal linear recurrence equation. That is, it calculates the next output value $y_n$ in terms of previous and current input values $x_{n-j}$ and previous output values $y_{n-k}$. There's little reason to have the equation solve for $b_0y_n$; we'd just need to divide through anyway to get the current output. It's simpler and cheaper to just include it in the polynomial coefficients.
To make it closer to Padé approximants, just replace the delay operators with inverse differential operators.
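The recurrence translates directly into code (a minimal sketch of a direct-form IIR filter; the demo coefficients are made up):

```python
def iir(x, a, b):
    """y[n] = sum_j a[j] x[n-j] - sum_k b[k] y[n-k], with b indexed from 1."""
    y = []
    for n in range(len(x)):
        acc = sum(aj * x[n - j] for j, aj in enumerate(a) if n - j >= 0)
        acc -= sum(bk * y[n - k] for k, bk in enumerate(b, start=1) if n - k >= 0)
        y.append(acc)
    return y

# impulse response of y[n] = x[n] + 0.5 y[n-1], i.e. transfer 1/(1 - 0.5 Z^-1)
print(iir([1, 0, 0, 0, 0], a=[1.0], b=[-0.5]))   # [1, 0.5, 0.25, 0.125, 0.0625]
```

Normalizing $b_0=1$ is precisely what lets the loop solve for $y[n]$ directly. |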
Is there a simpler calculation for the probability of exactly x occurrences over n trials? | The simplified equation that I didn't know the name of is the binomial probability mass function:
\begin{equation}
p_{x}=\binom{n}{x}\,p^{x}\,(1-p)^{n-x}
\end{equation} |
Shrinking Lemma for Arbitrary Open Covers of Normal Spaces | This paper notes that a space $X$ is countably shrinking (i.e. every countable open cover has a shrinking, as you describe) iff $X$ is normal and countably paracompact. And as there are so-called Dowker spaces (in ZFC) that are normal and not countably paracompact, there are normal spaces in which there are covers without a shrinking, e.g. Mary-Ellen Rudin's Dowker space.
It is the case that we can have a shrinking for all point-finite covers, a proof of which I wrote down here; also a proof by transfinite induction. |
Show $(\frac{\partial f}{\partial x})^2-(\frac{\partial f}{\partial y})^2=(\frac{\partial f}{\partial r})^2-1/{r^2}(\frac{\partial f}{\partial t})^2$ | to go the other way round,
$$
f_x=f_r r_x + f_t t_x \\
f_y=f_r r_y + f_t t_y
$$
since
$$
r^2 = x^2-y^2
$$
we have
$$
r_x= \frac{x}{r} \\
r_y= -\frac{y}{r}
$$
also
$$
t = \tanh^{-1}\frac{y}{x}
$$
so
$$
t_x = -\frac{y}{r^2} \\
t_y = \frac{x}{r^2}
$$
this gives:
$$
(f_x)^2 - (f_y)^2 = \left(\frac{x}{r}f_r -\frac{y}{r^2}f_t\right)^2 -\left(-\frac{y}{r}f_r +\frac{x}{r^2}f_t\right)^2 \\
= (f_r)^2 - \frac1{r^2} (f_t)^2
$$
using the same identity as before, and again with cancellation of the mixed terms, which this time are $-\frac{2xy\,f_rf_t}{r^3}$ in both squares
it is worth noting that the symmetry shown by the relations:
$$
\begin{pmatrix} f_r \\f_t \end{pmatrix}=\begin{pmatrix}x_r & y_r \\ x_t & y_t
\end{pmatrix} \begin{pmatrix} f_x \\ f_y \end{pmatrix}
$$
$$
\begin{pmatrix} f_x \\f_y \end{pmatrix}=\begin{pmatrix}r_x & t_x \\ r_y & t_y
\end{pmatrix} \begin{pmatrix} f_r \\ f_t \end{pmatrix}
$$
is expressed in terms of the Jacobian determinants:
$$
\frac{\partial(x,y)}{\partial(r,t)} \frac{\partial(r,t)}{\partial(x,y)} = 1
$$ |
Linear algebra question with dual basis? | Suppose the matrix is singular, so that there exists a column linearly dependent in the preceeding columns, say
$$\begin{pmatrix}a_i(b_1)\\a_i(b_2)\\\ldots\\a_i(b_n)\end{pmatrix}=\sum_{k=1}^{i-1}c_k\begin{pmatrix}a_k(b_1)\\a_k(b_2)\\\ldots\\a_k(b_n)\end{pmatrix}\iff$$
$$\forall\,1\le m\le n\;\;,\;\;a_i(b_m)=\sum_{k=1}^{i-1}c_ka_k(b_m)=\left(\sum_{k=1}^{i-1}c_ka_k\right)(b_m)$$
But since a linear functional (and, in fact, any linear map) is completely and uniquely determined by its values on some basis of the domain, we get that $\;a_i\;$ is a linear combination of $\;a_1,...,a_{i-1}\;$ , which of course contradicts the fact that $\;a_1,...,a_n\;$ is a basis of $\;V^*\;$ |
Maxima & Minima Word Problem | One way to check which one is Maximum or Minimum is to take the second derivative and check the sign of the second derivative at critical points(https://www.math.hmc.edu/calculus/tutorials/secondderiv/). |
the top chern class of the holomorphic tangent bundle is the euler class | The answer is affirmative if $X$ is compact and compactness is also necessary. A reference for the first statement can be found in the discussion after Corollary 5.1.4, the item (ii) (page 235) in the book Complex Geometry by Huybrechts. This is also known as the Gauss-Bonnet-Formula and this Wikipedia Link contains a counterexample for the noncompact case.
Edit. I give a proof for the Gauss-Bonnet formula. I use some standard terminology: $X$ is now a smooth complex projective variety over $\mathbb C$ and we denote by $\Omega_X$ its sheaf of relative differentials. We denote sheaf cohomology by $\mathcal H^\bullet$ and singular cohomology by $H^\bullet$. Also, $\mathcal T_X$ is the tangent sheaf of $X$, $\operatorname{ch}$ is the exponential Chern character, $\operatorname{td}$ the Todd class and $\chi$ the Euler characteristic. Finally, $X^{\mathrm{an}}$ denotes the analytification of $X$, i.e. the GAGA-associated complex manifold.
Write $\Omega_X^p:=\bigwedge^p\Omega_X$ for the $p$-th exterior power of $\Omega_X$. We will use the Borel-Serre-Identity, given in Fulton's Intersection Theory as Example 3.2.5. It says
\begin{equation}
\sum_{p=0}^d (-1)^p\cdot\operatorname{ch}(\Omega_X^p)\cdot\operatorname{td}(\mathcal T_X) = c_d(\mathcal T_X). \end{equation}
Note that $d$ is the rank of $\Omega_X$ and $\Omega_X^\vee=\mathcal T_X$ by definition. As a second tool, we require the Hirzebruch-Riemann-Roch Theorem to conclude
\begin{equation}
\int_X \operatorname{ch}(\Omega_X^p) \cdot \operatorname{td}(\mathcal T_X)
= \chi(X,\Omega_X^p).
\end{equation}
Finally, we require the Hodge Decomposition Theorem, which we quote from Corollaries 2.6.21 and 3.2.12 in Huybrecht's book as
\begin{equation}
H^r(X^{\mathrm{an}},\mathbb C) = \bigoplus_{p+q=r} \mathcal H^q(X,\Omega_X^p).
\end{equation}
Putting it all together, we obtain
\begin{align*}
\int_X c_d(\mathcal T_X)
&= \sum_{p=0}^d (-1)^p\cdot \int_X \operatorname{ch}(\Omega_X^p)\cdot\operatorname{td}(\mathcal T_X)
&& \text{by Borel-Serre}
\\ &= \sum_{p=0}^d (-1)^p \cdot \chi(X,\Omega_X^p)
&& \text{by HRR}
\\ &= \sum_{p,q} (-1)^{p+q}\cdot\operatorname{rank}(\mathcal H^q(X,\Omega_X^p)) &&
\\ &= \sum_{r=0}^{2d} (-1)^r \cdot \dim_{\mathbb C} H^r(X^{\mathrm{an}},\mathbb C)
&& \text{by the Hodge Decomposition}
\\ &= \chi(X^{\mathrm{an}}).
\end{align*} |
How do you solve this trig/geometry question? | HINT:
Using Werner Formulas, $$\sin A+\sin B+\sin C+\sin D=4$$
Now for real $x$, $\sin x\le1$
Can you take it home from here? |
Show that $ \ d(d \omega)=0 \ $ | Your $d\omega$ should be
$$d \omega=(\frac{\partial h}{\partial y}-\frac{\partial g}{\partial z}) dy \wedge dz+ (\frac{\partial f}{\partial z}-\frac{\partial h}{\partial x}) dz \wedge dx + (\frac{\partial g}{\partial x}-{\frac{\partial f}{\partial y}}) dx \wedge dy ,
$$
so
\begin{align}
d^2 \omega & = d \Big((g_x-f_y) dx \wedge dy + (h_y-g_z)dy\wedge dz + (h_x-f_z)dx \wedge dz \Big) \\ & = (g_{xz} - f_{yz}) dz\wedge dx \wedge dy + (h_{yx}-g_{zx}) dx\wedge dy \wedge dz + (h_{xy}-f_{zy}) dy\wedge dx \wedge dz.
\end{align}
Next, arrange the terms in the second equality into just one term $ (\cdots)\, dx \wedge dy \wedge dz$ and use the fact that $f_{xy}=f_{yx}$ and so on. You'll find that the terms will cancel each other.
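The cancellation can also be checked symbolically (a minimal sketch using sympy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f, g, h = (sp.Function(n)(x, y, z) for n in 'fgh')

# coefficient of dx ^ dy ^ dz in d(d omega), after reordering the wedge products
coeff = (sp.diff(g.diff(x) - f.diff(y), z)      # from dz ^ dx ^ dy = +dx ^ dy ^ dz
         + sp.diff(h.diff(y) - g.diff(z), x)    # from dx ^ dy ^ dz
         - sp.diff(h.diff(x) - f.diff(z), y))   # from dy ^ dx ^ dz = -dx ^ dy ^ dz
print(sp.simplify(coeff))                        # 0, by equality of mixed partials
```

Equality of mixed partials does all the work. |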
Manipulating the definition of $e$ | Both are true. Note that
$$\lim_{\circ\to\infty}\left(1+\frac{1}{\circ}\right)^\circ=e$$
and that $t^2\to\infty$ as $t\to\pm\infty$. |
$M/im(f)$ is Torsion | Let $M$ be $R\times R$ and let $f:R\to M$ be given by $f(r)=(r,0)$.
Then $M/Im(f)\cong R$ is not torsion. |
Lagrange multipliers - confused about when the constraint set has boundary points that need to be considered | In many extremal problems the set $S\subset{\mathbb R}^n$ on which the extrema of some function $f$ are sought is stratified, i.e., consists of points of different nature: interior points, surface points, edges, vertices. If an extremum is assumed in an interior point it comes to the fore as solution of the equation $\nabla f(x)=0$. An extremum which is at a (relative) interior point of a surface or an edge comes to the fore by Lagrange's method or via a parametrization of this surface or edge. Here (relative) interior refers to the following: Lagrange's method deals only with constrained points from which you can march in all tangent directions of the submanifold (surface, edge, $\ldots$) defined by the constraint(s), all the while remaining in $S$. Now at a vertex there are forbidden marching directions on all surfaces meeting at that vertex. If the extremum is taken on such a vertex it only comes to the fore if you have deliberately taken all vertices into your candidate list.
Now your $S_1$ is an arc in the plane with two endpoints. (The latter are not immediately visible in your presentation of $S_1$, but you have found them.) Your candidate list then should contain all relative interior points of the arc delivered by Lagrange's method plus the two boundary points.
The circle $S_2\!: \ x^2+y^2=1$ however has only "interior" points. The candidate list then contains only the points found by Lagrange's method. |
Show the space of continuous functions on $[0,1]$ with the 2 norm for functions is not complete. | Hint: Let $f_n(x)=0$ for $x <\frac 1 2$, $1$ for $x >\frac 1 2+\frac 1 n$ and $f_n(x)=n(x-\frac 1 2)$ for $\frac 1 2 \leq x \leq \frac 1 2+\frac 1 n$. Then $(f_n)$ is a Cauchy sequence which does not converge to a continuous function in the norm. |
What is the value of $\operatorname{Arg}(\frac{z}{\bar{z}})$? | $$z=re^{i\theta}, \bar{z} = re^{-i\theta} \implies\frac{z}{\bar{z}} = e^{2i\theta} $$
$\arg (\frac{z}{\overline{z}}) = 2\theta = 2 \arg (z)$, and for the principal value $\operatorname{Arg}$ this holds modulo $2\pi$ (reduce $2\operatorname{Arg}(z)$ back into $(-\pi,\pi]$ if needed).
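A small numeric check of the mod-$2\pi$ caveat (my own sketch):

```python
import cmath

# phase() returns the principal argument in (-pi, pi].
for z in (3 - 4j, -1 + 0.1j):           # the second example forces a wrap-around
    lhs = cmath.phase(z / z.conjugate())
    rhs = 2 * cmath.phase(z)
    d = (lhs - rhs) % (2 * cmath.pi)
    print(z, min(d, 2 * cmath.pi - d))  # ~ 0: equal modulo 2*pi
```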
Affine Cipher Question | $$12a = -4 + 26k$$
$$ 6a = -2 + 13k$$
$$6a \equiv -2 \pmod {13}$$
$$12a \equiv -a \equiv -4 \pmod {13}$$
$$a \equiv 4 \pmod {13}$$
Hence Case 1: $a \equiv 4 \pmod {26}$ or Case 2: $a \equiv 17 \pmod {26}$.
Now you can explore the corresponding value for $b$ for each case. |
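A brute-force check of the congruence (assuming $12a \equiv -4 \pmod{26}$ is the equation extracted from the cipher; note that a valid affine key usually also needs $\gcd(a,26)=1$, which would rule out Case 1):

```python
from math import gcd

# All residues a mod 26 with 12*a = -4 (mod 26).
sols = [a for a in range(26) if (12 * a) % 26 == (-4) % 26]
print(sols)                                  # [4, 17]
print([a for a in sols if gcd(a, 26) == 1])  # [17] -- the usable affine key
```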
Show that the linear operator is zero. | Take
$$T=\begin{pmatrix}0&0\\0&1\end{pmatrix}\implies T^m=T\;,\;\;\forall\,m\in\Bbb N$$
and also
$$T\binom10=\binom00\implies T^m\binom10=\binom00$$
yet $\;T\neq0\;\implies$ the claim is false. |
Clarification from old post: Union of sigma-algebras is non sigma-algebra | Let $\mathscr A_i$ be the Lebesgue $\sigma-$algebra over $[0,i)$. We have now a sequence of monoton classes. Their union, however is not a $\sigma-$algebra because $[0,+\infty)$ is not included. On the other hand all the sets of the form $[0,i)$ are included. As a result their union should be included, but it is not.
I can see that the old post answers the question... (a different way though) You can ignore my answer. |
Draw a simple graph where its vertices can be divided into 2 sets where every edge joins a vertex in one set to a vertex in the other | Suppose our two vertex sets have $m$ and $n$ vertices respectively; if every edge links one vertex type to another, there can be at most $mn$ edges, when every pair of different-type vertices is connected by an edge.
In this question, $m+n=6$, so $(m,n)$ can be $(1,5),(2,4),(3,3)$, up to swapping the two sets. These vertex sets allow for $5$, $8$ and $9$ edges respectively. We reject $(m,n)=(1,5)$ as we need $8$ edges. The other two choices lead to the two distinct solutions of the problem: the complete bipartite graph $K_{2,4}$ (all $8$ possible edges), and $K_{3,3}$ with any one of its $9$ edges removed.
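For concreteness, here is a small sketch (labels chosen arbitrarily) listing the edge sets of the two solutions:

```python
from itertools import product

# Vertices 0,1 (resp. 0,1,2) in one part, the rest in the other part.
k24 = list(product([0, 1], [2, 3, 4, 5]))             # K_{2,4}: all 8 cross edges
k33_minus = list(product([0, 1, 2], [3, 4, 5]))[:-1]  # K_{3,3} minus one edge
print(len(k24), k24)
print(len(k33_minus), k33_minus)
```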
Explaining Characteristics of a PDE | The method of characteristics consists in setting $u(x,y) = u(x(s),y(s))$, and transform the PDE in $(x,y)$ into an ODE in $s$. Here,
$$
\frac{\mathrm{d}u}{\mathrm{d}s} = x'\, u_x + y'\, u_y \, .
$$
We set the equations of characteristics
$$
x' = 1 \qquad\text{and}\qquad y' = \frac{x}{y}\, ,
$$
such that $\mathrm{d}u/\mathrm{d}s = 0$, i.e. $u$ is constant along the characteristics.
Thus, $$ \frac{y'}{x'} = \frac{\mathrm{d}y}{\mathrm{d}x} = \frac{x}{y} \, , $$ whose solutions are $y^2 = x^2 + C$ with constant $C$. The value of $u$ is completely determined by the constant $C$, so the solutions of the PDE are
$$
u(x,y) = f(y^2 - x^2)\, ,
$$
where $f$ is any differentiable function. To go further, this post is related. |
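One can let a CAS confirm that $u(x,y)=f(y^2-x^2)$ is annihilated by the operator built from the characteristics, $u_x + \frac{x}{y}\,u_y$ (a verification sketch, assuming sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')
u = f(y**2 - x**2)

# The characteristic equations x' = 1, y' = x/y encode u_x + (x/y) u_y = 0.
print(sp.simplify(sp.diff(u, x) + (x / y) * sp.diff(u, y)))  # 0
```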
Extrema of a functional | Experimental idea
Set $\sin(u)=\frac{y''}{\sqrt{1+y''^2}}\implies y''=\tan(u)$. Then the Euler-Lagrange equation reduces to $(y'\sin(u))'=c_1+\sqrt{1+y''^2}$, which further transforms to
$$
\tan(u)\sin(u)+y'\cos(u)u'=c_1+\frac1{\cos(u)}
\implies u'=\frac{\frac{c_1}{\cos(u)}+1}{y'}
$$
With $y'=v$ this gives the first order system
\begin{align}
y'&=v,& y(0)&=0,&y(1)&=1\\
v'&=\tan(u),& v(0)&=0,&v(1)&=2\\
u'&=\frac{\frac{c_1}{\cos(u)}+1}{v}
\end{align}
This could be useful as a compact formulation for a BVP solver.
Constant solutions
If $c_1+\cos(u)=0$, then this gives a valid constant solution. Thus $y''=C\implies y=A+Bx+\frac12Cx^2$. This also includes the case $v=y'=0\implies y=A+Bx$. With the boundary conditions, the linear functions can be ruled out, and the quadratic family gives the solution $y(x)=x^2$. This does not prove that it is the only solution or the optimal one among them.
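One can sanity-check that the quadratic candidate is consistent with the first-order system above by integrating it forward from a point away from the $v=0$ singularity (a numerical sketch assuming scipy; $c_1=-\cos(\arctan 2)=-1/\sqrt5$ is the constant-branch value):

```python
import numpy as np
from scipy.integrate import solve_ivp

c1 = -1.0 / np.sqrt(5.0)  # = -cos(arctan(2)): the value making u' = 0 on y = x^2

def rhs(x, s):
    y, v, u = s
    return [v, np.tan(u), (c1 / np.cos(u) + 1.0) / v]

x0 = 0.1  # start on the candidate y = x^2, avoiding the v = 0 point at x = 0
sol = solve_ivp(rhs, (x0, 1.0), [x0**2, 2 * x0, np.arctan(2.0)],
                rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], sol.y[1, -1])  # ~ 1 and ~ 2: the boundary data at x = 1
```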
One further integration
Assuming that the denominators are not zero or at least not constant zero, combining the last two equations gives,
$$
\frac{v'}{v}=\frac{\sin(u)u'}{c_1+\cos(u)}\implies v=\frac{c_2}{c_1+\cos(u)}
$$
and then
$$
\frac{\cos(u)u'}{(c_1+\cos(u))^2}=\frac1{c_2}
$$
which in principle can be integrated... |
Norm inequality for sum and difference of positive-definite matrices | This answer only answers the first question. It does so in a way that may be unnecessarily complicated for your taste.
If $A$ and $B$ are Hermitian matrices, let $A\leq B$ mean that $B-A$ is positive semidefinite. A Hermitian matrix is positive semidefinite if and only if all of its eigenvalues are nonnegative, and the spectral norm of a Hermitian matrix is the maximum of the absolute values of its eigenvalues.
Then $$-\|X_2\|I\leq -X_2\leq X_1-X_2\leq X_1\leq \|X_1\|I.$$ This implies that all of the eigenvalues of $X_1-X_2$ lie in the interval $[-\|X_2\|,\|X_1\|]$, which implies that $\|X_1-X_2\|\leq \max\{\|X_1\|,\|X_2\|\}$.
On the other hand, $0\leq X_1\leq X_1+X_2$ implies that $\|X_1\|\leq\|X_1+X_2\|$, and similarly $\|X_2\|\leq \|X_1+X_2\|$, so $\max\{\|X_1\|,\|X_2\|\}\leq \|X_1+X_2\|$. |
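Both inequalities are easy to stress-test numerically (my own sketch, using random Wishart-type PSD matrices and the spectral norm):

```python
import numpy as np

rng = np.random.default_rng(0)
spec = lambda M: np.linalg.norm(M, 2)     # spectral norm of a matrix

for _ in range(1000):
    A, B = rng.standard_normal((2, 5, 5))
    X1, X2 = A @ A.T, B @ B.T             # positive semidefinite by construction
    assert spec(X1 - X2) <= max(spec(X1), spec(X2)) + 1e-9
    assert max(spec(X1), spec(X2)) <= spec(X1 + X2) + 1e-9
print("ok")
```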
Prove $\forall u,v,x,y,z,w \in \mathbf{R}^+, \frac{u}{v} < \frac{x}{y} \wedge \frac{x}{y} < \frac{z}{w} \implies \frac{u + z}{v+w} < \frac{z}{w}$ | Notice you are given by transitivity
$$ \frac{z}{w} > \frac{u}{v} $$
Hence $zv > uw$. Therefore
$$ z(v + w) = zv + zw > uw + zw = (u+z)w \iff \frac{z}{w} > \frac{u+z}{v+w}$$ |
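A quick exact-arithmetic check on sample values (any positives satisfying the hypotheses will do):

```python
from fractions import Fraction

u, v, x, y, z, w = 1, 3, 2, 5, 3, 4
assert Fraction(u, v) < Fraction(x, y) < Fraction(z, w)  # hypotheses hold
print(Fraction(u + z, v + w) < Fraction(z, w))           # True
```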
Integrating $\int_{0}^{\infty} x^{-k} e^{-a(x+d)^2} dx$ where $k \in \mathbb{Z}_{+}$ | This integral does not exist. Consider the case when $k=1$. (I think other cases reduce to this via integration by parts.) We know $\int_0^n\frac{dx}{x}$ diverges. But our integral $\int_0^n\frac{e^{-a(x+d)^2}dx}{x}\ge\int_0^n\frac{mdx}{x}$ where $m$ is the minimum of $e^{-a(x-d)^2}$ on $[0,n]$. |
Multivariable limit studying discontinuity at $x = y$ line | After having thought about this for a while, I havent't quite figured it out, however I have a suggestion. If you factor $(x^2 - y^2)^2$ as $(x+y)^2 (x-y)^2$ then you can rewrite the original expression and have one factor that does not evaluate to $0$ when taking the limit as $x$ approaches $y$. Now you are left with an indeterminate limit, so you can use L'Hopital's rule after this. All of this should yield an expression you can continue to manipulate, since you have gotten rid of the squared terms inside the paranthesis in the numerator in the original expression. This is as far as I've tried to work the problem. It seems promising though, so it might be worth a shot. I tried plotting the graph and it looks a little weird, so you might not have a well defined limit at $x=y$, meaning you should consider the limit when $x$ goes to $y$ from above and when $x$ goes to $y$ from below.
Hope this gets you somewhere if you decide to try it out! Best of luck. |
What is the domain of definition of $f(x,y)=\frac{y-x}{1+\ln(x)}$? | Strictly speaking, the expression for $f(x,y)$ is meaningless unless the precise domain of definition is provided. This is because a function, by definition, is a triple $(f,X,Y)$ where $X$ is the domain of definition, $Y$ is the range, and $f$ is a function.
However, mathematics is often not very strict, so that in "practice" we get an expression such as $f(x,y)$ for which we have to figure out the precise domain of definition. Of course, we must assume something about $(x,y)$. For example, here it is tacitly assumed that $(x,y)\in\mathbb{R}^2$. (but that's only because of convention; in truth, there is nothing to prevent an alien from interpreting $(x,y)$ as an element of the Cartesian product of any two sets). Therefore, we must find the largest possible subset of $\mathbb{R}^2$ for which the expression $f(x,y)$ is well defined. That will be our domain of definition. By inspection, we see that we must avoid nonpositive $x$ and such $x$ for which $\ln x=-1$. The first condition gives the set $A=\{(x,y)\in\mathbb{R}^2| x>0\}$ and the second condition gives $B=\{(x,y)\in\mathbb{R}^2| x\neq e^{-1}\}$. Thus, the domain of definition is $A\cap B$ - the set where both conditions are met. |
A question about Cauchy Integral Theorem. | Since $dz$ is a line element,
$$\int_D f(z) \ dz$$
doesn't make sense. So, I suppose the general answer would be that we do so because this is a theorem about line integrals.
This theorem tells you that if you pick any piecewise smooth closed curve $\gamma$ and $f$ is holomorphic in the region bounded by $\gamma$, then
$$\int_\gamma f(z) \ dz = 0.$$
Yes, in a sense, $D$ is "inside" $\partial D$. |
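A numeric illustration of the line-integral statement (a sketch; $f(z)=z^2$ is entire, $\gamma$ the unit circle):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * t)                                         # gamma(t) = e^{it}
integral = np.sum(z**2 * 1j * z) * (2.0 * np.pi / t.size)  # sum of f(z) dz
print(abs(integral))                                       # ~ 0 up to rounding
```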
A problem regarding a generalization of Schwartz space | These are the functions with "superpolynomial decay". I have not seen the whole space studied much as a whole, because it does not have particularly good regularity properties (at least, no better than Schwartz or $L^p$ spaces), but I have seen various linear subspaces with greater regularity studied more often. You can equivalently write your defining condition by saying that for every $f$ in your space and every $k \geq 1$, there exists a constant $C_{f,k} > 0$ that, for all $x \in \mathbb{R}^n$,
$$
|f(x)| \leq C_{f,k} \exp(-k \log |x|)
$$
It's a lot easier to get work done if you have some uniform regularity, e.g. requiring that you have some uniform $C_k > 0$ such that $C_{f,k} \leq C_k$ for all $f$. Another option: there exist some $b > 0$ and some $t > 1$, such that for any $f$, there exists $C_f > 0$ with
$$
|f(x)| \leq C_f \exp(-b \log^t|x|)
$$
This kind of bound comes up a lot when estimating decay of correlations in statistical physics and dynamical systems. Edit in response to a comment: For example, this paper by Mark Holland (ETDS 25 (1), 2005, pp. 131-159). I found that paper by following a reference in a paper by Benoît R. Kloeckner (ETDS 40 (3), 2020, pp. 714-750), which I found by Googling "sub-polynomial decay".
Yes, this space is contained in $L^p(\mathbb{R}^n)$. Same argument as for Schwartz space. Edit: not quite right, see the comments: you need measurability.
3. and 4. Since it contains Schwartz space (which is dense in $L^p(\mathbb{R}^n)$ for every $n$ and $p\in[1,\infty)$), this space is certainly dense in $L^p(\mathbb{R}^n)$. It is not dense in $L^\infty$ (again, see comments; adding continuity would make it worse).
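For a concrete member of the space, take $f(x)=\exp(-\log^2(1+|x|))$ (my own sample, in dimension $n=1$); a quick check that it beats every power:

```python
import numpy as np

f = lambda x: np.exp(-np.log1p(np.abs(x))**2)
x = 10.0 ** np.arange(2, 8)          # 1e2, ..., 1e7
for k in (1, 5, 10):
    print(k, (x**k) * f(x))          # each row eventually heads to 0
```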
Nonlinear integral equations and successive approximations to solution | What they are referring to is in Banach Fixed Point Theorem, the unique fixed point of a contraction map $T$ on a metric space $X$ can be found by considering any $x_0\in X$ and defining the sequence
$$x_{n+1} := T(x_n) = T^n(x_0)$$
This is Cauchy and has a limit hence $x^*$ which satisfies
$$T(x^{*}) := T(\lim_{n\rightarrow \infty} T(x_n)) = T(\lim_{n\rightarrow \infty} T^n(x_0)) = \lim_{n\rightarrow \infty} T^n(x_0) = x^*$$
This allows you to keep approximating the solution to the integral equation, by taking any $C([a,b])$ function and applying the map repeatedly. The proof looks good though. |
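Here is a minimal sketch of the successive approximations in practice (my own example equation $u(x)=x+\tfrac14\int_0^1\sin(xt)\,u(t)\,dt$, whose small kernel makes $T$ a contraction on $C([0,1])$; trapezoidal discretization):

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5  # trapezoid weights
K = 0.25 * np.sin(np.outer(x, x))                       # contraction factor <= 1/4

u = np.zeros(n)                     # any starting function works
for _ in range(100):
    u_new = x + K @ (w * u)         # u_{k+1} = T(u_k)
    if np.max(np.abs(u_new - u)) < 1e-13:
        break
    u = u_new
print(np.max(np.abs(u - (x + K @ (w * u)))))  # fixed-point residual ~ 0
```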
A very different alternative form of the geometric series | $$\lim_{n\to \infty} \sum_{k=1}^n \frac{1}{\sqrt[x]{n^{x-1}k}} = $$
$$\lim_{n\to \infty}\sum_{k=1}^n (n^{x-1}k)^{-\frac{1}{x}} = $$
$$\lim_{n\to \infty}\frac{1}{n}\sum_{k=1}^n \left(\frac{k}{n}\right)^{-\frac{1}{x}} = $$
$$\int_{0}^1t^{-\frac{1}{x}}dt = \frac{x}{x-1} \qquad (x>1,\ \text{so the endpoint singularity is integrable}) $$
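A quick numerical check for $x=2$ (the limit should be $2$; a sketch, and convergence is slow because of the $t^{-1/2}$ endpoint):

```python
import numpy as np

x = 2.0
for n in (10**3, 10**5, 10**7):
    k = np.arange(1, n + 1, dtype=float)
    print(n, np.sum((n**(x - 1) * k) ** (-1.0 / x)))   # -> x/(x-1) = 2
```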
Finding the logarithm to the nearest integer without a calculator | $$2^{10}=1024\approx1000$$ hence $$10^9\approx(1024)^3=2^{30}.$$
As the initial relative error is $2.4\%$, after cubing the value the relative error will be close to $7.2\%$, which is nowhere near enough to be off by one half on the exponent.
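With a calculator one can confirm how comfortable the margin is (for one direction of the estimate):

```python
from math import log10

print(log10(2**30))         # 9.0309..., so the nearest integer is 9
print(round(log10(2**30)))  # 9
```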
Is an infinite set of disjoint subsets of the natural numbers with an infinite number of members each, countable or uncountable. | A set of pairwise disjoint non-empty subsets of $\Bbb{N}$ must be countable. To see this, let $U$ be any set of pairwise disjoint non-empty subsets of $\Bbb{N}$. Then each $n \in \Bbb{N}$ belongs to at most one $X \in U$. So the partial function that maps $n \in \Bbb{N}$ to the set $X \in U$ that contains it (if there is one) is well-defined and onto. So $U$ is the image of a partial function on $\Bbb{N}$ and must be countable.
Disclaimer: "countable" means "countably infinite or finite" in the above. |
Matrix Inverse and transpose | \begin{eqnarray*}
I=A+BA=(I+B)A
\end{eqnarray*}
then $A^{-1}=(I+B)$ |
What is the number of ways to rearrange a set of n members into subsets of all possible sizes from 1 to n ?. | These are the Bell numbers. They can be calculated using the recurrence relation
$$B_{n+1}=\sum_{k=0}^n\binom nkB_k.$$ |
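The recurrence translates directly into code (a minimal sketch):

```python
from math import comb

# B[n+1] = sum_{k=0}^{n} C(n, k) * B[k], with B[0] = 1.
B = [1]
for n in range(9):
    B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
print(B)  # [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147]
```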
A different type binomial expansion problem | Let $(1+x+x^2)^n=\sum_ka_kx^k$. Then:
\begin{align}
(1+x^2+x^4)^n&=(1-x^{-1}+x^{-2})^n(1+x+x^2)^nx^{2n}\\
\sum_ja_jx^{2j}&=\sum_k(-1)^ka_kx^{-k}\sum_ja_jx^jx^{2n}\\
&=\sum_j\sum_k(-1)^ka_ka_jx^{2n+j-k}\\
&=\sum_j\sum_k(-1)^ka_ka_{k+j-2n}x^j\\
\end{align}
The $x^{2n}$ coefficient on the left side is $a_n$; the same coefficient on the right side is $\sum_k(-1)^ka_k^2$. |
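The resulting identity $a_n=\sum_k(-1)^k a_k^2$ is easy to verify with a CAS (a sketch assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(1, 7):
    a = sp.Poly((1 + x + x**2)**n, x).all_coeffs()[::-1]  # a[k] = coeff of x^k
    print(n, a[n] == sum((-1)**k * a[k]**2 for k in range(2 * n + 1)))  # True
```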
Order of conjugate element equals the order of the element | You have already shown that for any $g, x\in G$, we have $\lvert xgx^{-1} \rvert \leq \lvert g\rvert$.
To complete the argument, we have only to observe that $g = x^{-1}(xgx^{-1})x$. Applying the fact in the first line of this answer, we have
$$
\lvert x^{-1}(xgx^{-1})x\rvert \leq \lvert xgx^{-1}\rvert
$$
which means that $\lvert g \rvert \leq \lvert xgx^{-1}\rvert$. So we are done. |
Proof by induction: Show that $7|5^{2n}-2^{5n}$ | I wouldn't do this by induction. Modular arithmetic is your friend!
$$5^{2n} - 2^{5n} = 25^n - 32^n\equiv 4^n - 4^n \equiv 0 \pmod{7}.$$
(You could turn this into a proof by induction: show that for all $n\in \mathbb{N}$ the remainders upon division by $7$ of $5^{2n}$ and $2^{5n}$ are the same. But it really wouldn't be that nice.) |
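An empirical check of the congruence for small $n$ never hurts:

```python
# 5**(2n) - 2**(5n) should be divisible by 7 for every n.
print(all((5**(2 * n) - 2**(5 * n)) % 7 == 0 for n in range(50)))  # True
```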
Truthtelling/Lying question (If B is lying then I'm lying) | The last one is not true. If $B$ tells the truth, then the implication $A$ is stating is a true implication. Hence, $A$ told the truth. Do you understand? |
Inner product on the k-tensor space | Hint:
If the tensors $e^{*}_{i_{1}}\otimes\cdots\otimes e^{*}_{i_{k}}$ are to be pairwise orthogonal and normalized, we should have
$\langle e^{*}_{i_{1}}\otimes...\otimes e^{*}_{i_{k}}, e^{*}_{j_{1}}\otimes...\otimes e^{*}_{j_{k}} \rangle = \delta_{i_1j_1} \cdots \delta_{i_kj_k}$
Now, by ...., you can extend this definition to .... |
Prove that if $\lim_n x_n = 0$ then $\lim_n \frac{x_1+x_2+...x_n}{n}=0$ | Fix $\epsilon$ to be as small as you please. Then (by definition) there exists an $N$ sufficiently large such that $|x_n| < \epsilon$ for all $ n >N$.
Then we have that
$$ \bigg| \frac{x_1 + x_2 + \cdots + x_n}{n} \bigg| \leq \bigg|\frac{x_1 + x_2 + \cdots + x_N}{n}\bigg| + \frac{|x_{N+1}| + \cdots + |x_n|}{n} < \bigg|\frac{x_1 + x_2 + \cdots + x_N}{n} \bigg| + \frac{n - N}{n}\, \epsilon \leq \bigg|\frac{x_1 + x_2 + \cdots + x_N}{n} \bigg| + \epsilon $$
Letting $n \to \infty$, the first term vanishes (its numerator is a fixed finite sum), so the limit superior of the averages is at most $\epsilon$. Since $\epsilon$ was arbitrary, the averages must converge to $0$.
For practice, you should try reproducing the same argument where, instead of $x_n \to 0$ we have $x_n \to c$ for some real number $c$ and the problem is to show $$\frac{x_1 + x_2 + ... x_n}{n} \to c $$ |
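For a numeric feel (a sketch with $x_n = 1/n$, which tends to $0$), the Cesàro means tend to $0$ too, only more slowly:

```python
import numpy as np

N = 10**6
x = 1.0 / np.arange(1, N + 1)
means = np.cumsum(x) / np.arange(1, N + 1)    # (x_1 + ... + x_n) / n
print(means[[9, 999, N - 1]])                 # ~ [0.29, 0.0075, 1.4e-05]
```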
Right-adjoint functors are left-exact? | As Martin Brandenburg mentioned, this holds in a much more general context:
Let $C,D$ be categories and $F\colon C\rightarrow D$, $G\colon D\rightarrow C$ functors, such that $F$ is left adjoint to $G$.
Then $F$ preserves all colimits and $G$ preserves all limits. In particular, $G$ preserves kernels and is therefore left-exact whenever it makes sense to talk about exactness. (Dually: $F$ preserves cokernels, so it is right-exact.)
This isn't hard to prove: you can check by hand (via the universal property) that covariant hom-functors $Hom(A,\_)$ preserve limits, and then combine adjointness with this preservation of limits:
$$Hom(A,G(lim(X_i)))\cong Hom(F(A),lim(X_i))\cong lim(Hom(F(A),X_i))\cong lim(Hom(A,G(X_i)))\cong Hom(A,lim(G(X_i)))$$
All these isomorphisms are natural in $A$, so we get $Hom(\_,G(lim(X_i)))\cong Hom(\_,lim(G(X_i)))$, and since the Yoneda embedding is an embedding, $G(lim(X_i))\cong lim(G(X_i))$. Dually, this works for left adjoints and colimits.
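(For instance, applied to the free-forgetful adjunction between groups and sets, the limit half recovers the familiar fact that the underlying set of a product of groups is the product of the underlying sets.)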
Show that $\mathcal{T}=\{\emptyset, \mathbb{R}^2, \underset{\epsilon>0}{\bigcup}B_\epsilon(\vec{0})\}$ is a topology on $\mathbb{R}^2$ | A mistake in notation: the topology is $$\{0, \mathbb{R}^2\} \cup \{B_\epsilon(0)\}_{\epsilon > 0}.$$
On the closed under arbitrary unions part. If $S$ is bounded, the first part is correct but you are missing the second. You want to show $\cup_{\epsilon \in S} B_\epsilon(0) = B_N(0)$. You have shown $\cup_{\epsilon \in S} B_\epsilon(0) \subset B_N(0)$ but not the opposite inclusion.
If $S$ is unbounded, the idea is correct but the execution is not. You want to show $\cup_{\epsilon \in S} B_\epsilon(0) = \mathbb{R}^2$. The inclusion $\cup_{\epsilon \in S} B_\epsilon(0) \subset \mathbb{R}^2$ is as you have done. But the issue with the opposite inclusion is that $S$ being unbounded does not imply $S$ contains all $\epsilon > 0$. What it does imply is that for all $n \in \mathbb{N}$, there exists $s \in S$ such that $s > n$. For instance, if $S = \{0.5, 1.5, 2.5, \cdots\}$ then $S$ is unbounded but does not contain any natural numbers. (In fact you do not even require the Archimedean property; it's enough that for all $\delta > 0$ there exists $s \in S$ such that $s > \delta$. Then consider a 'smart' value of $\delta$.)
On the closed under finite intersection part. Note a finite set always has a minimum, which is equal to the infimum. Saying it has a minimum is better in this case because it emphasises membership, i.e. that $\epsilon_j \in \{\epsilon_1, \cdots, \epsilon_k\}$. I'll leave you to think about how, but this resolves your concerns in the last proof.
As a remark, your concerns in the last proof 'validate' in a sense why we only permit closure under finite intersections; indeed, if we permitted countable intersections then it's possible $\cap_{\epsilon \in S} B_\epsilon(0)$ could be a closed ball, not an open ball. |