If $...E_3\subset E_2 \subseteq E_1$ and $\mu(E_i)<\infty$ then $\mu(\cap E_n)=\lim(\mu(E_n))$ | There is a small typo in the first step after (1), which should explain it. We have
$$E_1\setminus \cap E_n = E_1\setminus(\cup E_n^c)^c= E_1\cap (\cup E_n^c) = \cup (E_1\cap E_n^c).$$
Does that help? |
From a short exact sequence $\mathbb{Z}_k\cong \mathbb{Z}/k\mathbb{Z}$, $k>1$? | In an exact sequence, we have that $\ker(\psi) = \operatorname{Im}(\varphi)$. And we have by exactness that $\varphi$ is injective, so $\operatorname{Im}(\varphi) \cong A$, so that gets us both of them.
Something good to think about is that the reason taking all sorts of these quotients makes sense (i.e., we stay in the category of abelian groups or the category of modules) is that we are dealing with so-called abelian categories.
Edit: With regards to your second question about the specific example, but also in general (I don't have a way with words)... You are jumping ahead too much. We just have the quotient $C/\operatorname{Im}(\varphi)$ for that specific map $\varphi$. |
Evaluating integral of Green's theorem | Maybe a few pics might be of use. Here's the surface whose volume you're trying to get:
Here are the "left" and "right" bits I was talking about in the comments:
This is the antisymmetry that was being referred to. The integrand on the left is precisely the negative of the integrand of the right. Due to this, the integral for the left section would be the negative of the integral for the right section, and adding those two integrals will yield the value of zero. |
Is the laplacian of this test functions bounded? | No, by an almost identical argument to my answer to your previous question. By the divergence theorem,
$$0=\int \Delta\phi_j=\int_{K_j} \Delta\phi_j+\int_{K_{j+1}\setminus K_j} \Delta\phi_j=\operatorname{vol}(K_j)+\int_{K_{j+1}\setminus K_j}\Delta\phi_j,$$
so
$$\sup(-\Delta\phi_j)\geq \frac{1}{\operatorname{vol}(K_{j+1}\setminus K_j)}\int_{K_{j+1}\setminus K_j}-\Delta\phi_j = \frac{\operatorname{vol}(K_j)}{\operatorname{vol}(K_{j+1}\setminus K_j)}\to\infty.$$ |
Solving this problem with Lagrange multipliers method | Because $a$ is a positive constant we can ignore it and minimise
$$
F(x)=3 x_1+5 x_2
$$
Rewrite the inequalities with slack variables:
$$
\frac{2.16}{x_1}+\frac{10}{x_2}\leq 1 \implies \frac{2.16}{x_1}+\frac{10}{x_2}+s_1^2= 1\\
g_1=\frac{2.16}{x_1}+\frac{10}{x_2}+s_1^2-1
$$
Now the second one:
$$
x_1 \geq 11.25\implies x_1 = 11.25+s_2^2\\
g_2= x_1-11.25-s_2^2
$$
Finally, the third one:
$$
x_2 \geq 18.75 \implies x_2 = 18.75+s_3^2\\
g_3 = x_2 - 18.75-s_3^2
$$
Form the Lagrangian:
$$
\begin{split}
L(x_1,x_2,\lambda_1,\lambda_2,\lambda_3,s_1,s_2,s_3)
&= 3 x_1 + 5 x_2 \\
&- \lambda_1
\left(\frac{2.16}{x_1}+\frac{10}{x_2}+s_1^2-1\right) \\
&- \lambda_2 \left(x_1-11.25-s_2^2\right) \\
&- \lambda_3 \left(x_2 - 18.75-s_3^2\right)
\end{split}
$$
This implies the following partial derivatives:
$$
\begin{split}
\frac{\partial L}{\partial x_1} &= 3+\frac{2.16\lambda_1}{x_1^2}-\lambda_2\\
\frac{\partial L}{\partial x_2} &= 5+{{10\lambda_1}\over{x_2^2}}-\lambda_3\\
\frac{\partial L}{\partial s_1} &= -2s_1\lambda_1\\
\frac{\partial L}{\partial s_2} &= 2s_2\lambda_2\\
\frac{\partial L}{\partial s_3} &= 2s_3\lambda_3
\end{split}
$$
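For a quick numerical cross-check of whatever stationary point these equations yield, one can hand the same problem to an off-the-shelf constrained solver; a minimal sketch, assuming SciPy is available (the starting point is just a feasible guess):
import numpy as np
from scipy.optimize import minimize

F = lambda x: 3*x[0] + 5*x[1]
constraints = [
    {'type': 'ineq', 'fun': lambda x: 1 - 2.16/x[0] - 10/x[1]},  # SciPy's 'ineq' means fun(x) >= 0
    {'type': 'ineq', 'fun': lambda x: x[0] - 11.25},
    {'type': 'ineq', 'fun': lambda x: x[1] - 18.75},
]
res = minimize(F, x0=np.array([20.0, 30.0]), constraints=constraints)
print(res.x, F(res.x))  # candidate minimizer and its objective value
Any stationary point extracted from the Lagrangian should match this output. |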
Understanding the validity of this proof for intersection of power sets | The answer is not stating that $Y$ is an element of $A$ and $B$, it is saying that every element $y \in Y$ is an element of $A$ and $B$. In your example $1 \in A$ and $1 \in B$. |
Scalar extension in polynomial ring | Since $B$ is an $A$-algebra, you have a ring homomorphism $f:A\to B$, and this induces a functor $f^*:B-\mathbf{Alg}\to A-\mathbf{Alg}$, the restriction of scalars, that takes a $B$-algebra $K$ to $K$, with the same abelian group structure but scalar multiplication defined by
$$a\cdot_A k=f(a)\cdot_B k$$
for all $k\in K$.
If we denote $U_A:A-\mathbf{Alg}\to \mathbf{Set}$ and $U_B:B-\mathbf{Alg}\to \mathbf{Set}$ the forgetful functors, then we have $U_A\circ f^*=U_B$.
Now the forgetful functors have left adjoints $F_A\dashv U_A$, $F_B\dashv U_B$, which are precisely the "polynomial algebra" functors; and moreover the functor $(B\otimes_A\_) : L\mapsto B\otimes_AL$ is the left adjoint to $f^*$. Then the composition of two left adjoints is left adjoint to the composition of their right adjoints, i.e. $(B\otimes_A \_)\circ F_A \dashv U_A\circ f^*=U_B$; and since adjoints are unique up to isomorphism this implies $(B\otimes_A \_)\circ F_A\simeq F_B$. In particular,
$$B\otimes_A A[x]=B\otimes F_A(\{\ast\})\simeq F_B(\{\ast\})=B[x].$$ |
Every ordered space has an ordered compactification. | The statement you are trying to prove is false. As per this paper:
an ordered space has an ordered compactification if and only if it is completely regularly ordered.
(see the top of page 2, and the book "Topology and Order" by Nachbin cited therein.) |
how many points belong to the quadric $x_0^2+x_1^2+x_2^2+x_3^2=0$ in $\mathbb{P}_3$ over $\mathbb{F}_9$ | You can try a brute force approach: Simply have a small computer program iterate over all the possible values for your $x_i$. For example in Sage:
from itertools import product
F9 = FiniteField(9)
len([(a,b,c,d) for a,b,c,d in product(F9, repeat=4) if a*a+b*b+c*c+d*d==0])
You will find 801 combinations. One of them is the null vector. And for each of the other values, you get $9-1=8$ possible representations of each point. So you have 100 points on your quadric. Of course if you already use Sage for finite field arithmetic (as I do because I'm lazy), you can leave the rest to it, too:
P3.<x0,x1,x2,x3> = ProjectiveSpace(3, F9)
s = P3.subscheme(x0^2+x1^2+x2^2+x3^2)
s.count_points(1) # 1 because we want this for field of size 9^1
len(s.rational_points()) # without len this gives you the coordinates |
Equation rearranging - what are $x$ and $y$ in terms of $u$ and $v$? | Be careful what you wish for. Here's the solution for just $y$, produced by Mathematica:
$$y = -\frac{1}{2} \sqrt{\frac{1}{3} \left(u^2-v\right)+\frac{\sqrt[3]{2
\left(u^2-v\right)^3-288 \left(u^2-v\right)+432 u^2+24 \sqrt{3} \sqrt{u^6 v-3 u^4
v^2-4 u^4+3 u^2 v^3+80 u^2 v-v^4+32 v^2-256}}}{3 \sqrt[3]{2}}+\frac{\sqrt[3]{2}
\left(u^4-2 u^2 v+v^2+48\right)}{3 \sqrt[3]{2 \left(u^2-v\right)^3-288
\left(u^2-v\right)+432 u^2+24 \sqrt{3} \sqrt{u^6 v-3 u^4 v^2-4 u^4+3 u^2 v^3+80 u^2
v-v^4+32 v^2-256}}}+v}-\frac{1}{2} \sqrt{\frac{1}{3}
\left(v-u^2\right)+u^2-\frac{\sqrt[3]{2 \left(u^2-v\right)^3-288
\left(u^2-v\right)+432 u^2+24 \sqrt{3} \sqrt{u^6 v-3 u^4 v^2-4 u^4+3 u^2 v^3+80 u^2
v-v^4+32 v^2-256}}}{3 \sqrt[3]{2}}-\frac{\sqrt[3]{2} \left(u^4-2 u^2
v+v^2+48\right)}{3 \sqrt[3]{2 \left(u^2-v\right)^3-288 \left(u^2-v\right)+432 u^2+24
\sqrt{3} \sqrt{u^6 v-3 u^4 v^2-4 u^4+3 u^2 v^3+80 u^2 v-v^4+32 v^2-256}}}-\frac{8
u^3-8 u \left(u^2-v\right)}{4 \sqrt{\frac{1}{3} \left(u^2-v\right)+\frac{\sqrt[3]{2
\left(u^2-v\right)^3-288 \left(u^2-v\right)+432 u^2+24 \sqrt{3} \sqrt{u^6 v-3 u^4
v^2-4 u^4+3 u^2 v^3+80 u^2 v-v^4+32 v^2-256}}}{3 \sqrt[3]{2}}+\frac{\sqrt[3]{2}
\left(u^4-2 u^2 v+v^2+48\right)}{3 \sqrt[3]{2 \left(u^2-v\right)^3-288
\left(u^2-v\right)+432 u^2+24 \sqrt{3} \sqrt{u^6 v-3 u^4 v^2-4 u^4+3 u^2 v^3+80 u^2
v-v^4+32 v^2-256}}}+v}}+v}-\frac{u}{2}$$ |
Motivation behind the properties of sigma algebra | Basically the idea behind sigma-algebras is this. We'd like to be able to assign measures or probabilities to sets, but it turns out we can't do that for every set. So we try to do the best we can, getting at least enough to handle the kinds of sets that arise in practice. Starting with, say, the open sets and using the operations of complements, finite unions and countable increasing unions gives you the Borel sets, which are enough: you will never encounter a non-Borel set in "real life". |
Operators between normed linear spaces | I am guessing that you are asking whether a bounded additive homomorphism $T$ must be linear.
If $T$ is bounded, then it is continuous. Let $\overline{B}$ be the closed unit ball, then for some $K< \infty$ we have $\|T(x)\| \le K$ for all $x \in \overline{B}$. Since $T(x)-T(y) = T(x-y)$, we need only examine continuity at zero. We have $T(nx) = n T(x)$ for $n \in \mathbb{N}$, so if $\|x\| \le \frac{1}{n}$ (equivalently $\|nx\| \le 1$), we have $\|T(x)\| \le \frac{1}{n} K$. Continuity follows.
If the field is real, then it is straightforward to show that $T({n \over m} x) = {n \over m} T(x)$, and continuity shows this is true for all scalars.
Elaboration: $0 = T(x + (-x)) = T(x) + T(-x)$, so $T(-x) = -T(x)$. We have $T(nx) = n T(x)$ for $n \in \{0,1,...\}$, and $T(m (\frac{1}{m} x)) = m T(\frac{1}{m} x)$, which gives $T(\frac{1}{m} x) = \frac{1}{m} T(x)$. Combining shows $T(qx) = qT(x)$ for all $q \in \mathbb{Q}$.
Let $T : \mathbb{C} \to \mathbb{C}$ be given by $T(x) = \overline{x}$. Then $T$ is a bounded additive homomorphism, but is not linear.
If, in addition, $T(ix) = iT(x)$, then the result also holds over the complex field. |
When $\pi_1(X)\neq 0$ and $H_1(X) = 0$ | $H_1(X)$ is $\pi_1(X)/[\pi_1(X),\pi_1(X)]$, so you can take a non-abelian simple discrete group $G$ and let $X$ be $K(G,1)$, the Eilenberg-MacLane space. |
Why is wau not equal to one? | What Vi Hart did was pretty much the same as what the people behind dihydrogen monoxide (written out, it looks like H$_2$O) did. She just hid the name behind something really cool sounding as a joke. |
Does convexity at a single point imply convexity w.r.t finite convex combinations? | Yes, the conclusion holds. From reading the new comments, I guess my solution is something like a "supporting line".
Lemma: For $x_1 \in (0,c)$ and $x_2 \in (c,+\infty)$ from "$\phi$ convex at $c$" follows
$$\frac{\phi(x_2)-\phi(c)}{x_2-c} \ge \frac{\phi(c)-\phi(x_1)}{c-x_1}.$$
Geometrically that means the line through the points $(c,\phi(c))$ and $(x_2,\phi(x_2))$ has at least the slope of the line through $(x_1,\phi(x_1))$ and $(c,\phi(c))$.
Proof of Lemma: The choice of
$$\alpha=\frac{x_2-c}{x_2-x_1} \Rightarrow 1-\alpha=\frac{c-x_1}{x_2-x_1}$$
leads to $\alpha x_1+(1-\alpha)x_2=c$. Note that because of $x_1 < c < x_2$ the quotients are defined and we have $\alpha,1-\alpha \ge 0$, so $\alpha \in [0,1]$ as required. So we can apply "$\phi$ convex at $c$" and get
$$\phi(c) \le \frac{x_2-c}{x_2-x_1}\phi(x_1) + \frac{c-x_1}{x_2-x_1}\phi(x_2).$$
We want to isolate $\phi(x_2)$ and multiply the inequality with the positive $\frac{x_2-x_1}{c-x_1}$ to get
$$\frac{x_2-x_1}{c-x_1} \phi(c) \le \frac{x_2-c}{c-x_1}\phi(x_1) + \phi(x_2) \Rightarrow \phi(x_2) \ge \frac{x_2-x_1}{c-x_1} \phi(c) - \frac{x_2-c}{c-x_1}\phi(x_1).$$
From the latter inequality we subtract $\phi(c)$ on both sides and then divide by the positive $(x_2-c)$:
$$\frac{\phi(x_2)-\phi(c)}{x_2-c} \ge \frac{x_2-x_1}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1) - \frac{\phi(c)}{x_2-c}.$$
The term on the right hand side can now be simplified by first uniting the coefficients of $\phi(c)$ and then cancelling $(x_2-c)$:
$$\begin{eqnarray}
\frac{x_2-x_1}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1) - \frac{\phi(c)}{x_2-c} & = & \frac{(x_2-x_1) - (c-x_1)}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1)\\
& = & \frac{x_2-c}{(c-x_1)(x_2-c)}\phi(c) - \frac{1}{c-x_1}\phi(x_1)\\
& = & \frac{1}{c-x_1}\phi(c) - \frac{1}{c-x_1}\phi(x_1)\\
& = & \frac{\phi(c) - \phi(x_1)}{c-x_1},\\
\end{eqnarray}
$$
which finally proves the lemma!
Now for the OP's problem. Let the $x_i, \lambda_i$ be given for $i=1,2,\ldots,k$ and let's assume they fulfill the stated properties.
If all $x_i$ are at least $c$, then we have $x_i=c$ for all $i=1,2,\ldots,k$, and the conclusion is trivial. So assume at least one $x_i < c$ exists. W.l.o.g. assume that $x_1,x_2,\ldots,x_r < c$ while $x_{r+1},\ldots,x_k \ge c$ for some $r \in \{1,2,\ldots,k\}$.
Let
$$m:=\max_{i=1,2,\ldots,r} \frac{\phi(c)-\phi(x_i)}{c-x_i}.$$
Let $i_{max}$ be one index where the maximum is realized.
I claim that all points $(x_i,\phi(x_i))$ lie on or above the line
$$y=m(x-c)+\phi(c)$$
passing through $(c,\phi(c)).$
If $i \le r$ we know by the definition of $m$ that
$$\frac{\phi(c)-\phi(x_i)}{c-x_i} \le m,$$
so multiplying with the positive $(c-x_i)$ yields
$$\phi(c)-\phi(x_i) \le m(c-x_i) \Rightarrow \phi(c) + m(x_i-c) \le \phi(x_i),$$
as desired.
If $x_i=c$, then $(x_i,\phi(x_i))=(c,\phi(c))$ obviously lies on the line.
If $x_i > c$, then we know by the lemma (remember that $x_i > c, x_{i_{max}} < c$, so the lemma can be applied) that
$$\frac{\phi(x_i)-\phi(c)}{x_i-c} \ge \frac{\phi(c)-\phi(x_{i_{max}})}{c-x_{i_{max}}} = m,$$
so we get immediately
$$\phi(x_i)-\phi(c) \ge m(x_i-c) \Rightarrow \phi(x_i) \ge \phi(c) + m(x_i-c).$$
This proves that indeed all $(x_i,\phi(x_i))$ lie at or above the line $y=m(x-c)+\phi(c)$, and the proof now concludes easily:
$$\begin{eqnarray}
\sum_{i=1}^k\lambda_i\phi(x_i) & \ge & \sum_{i=1}^k\lambda_i(\phi(c)+m(x_i-c))\\
& = & \sum_{i=1}^k\lambda_i\phi(c) + m \sum_{i=1}^k\lambda_ix_i - m \sum_{i=1}^k\lambda_ic\\
& = & \phi(c) \sum_{i=1}^k\lambda_i +mc - mc\times1\\
& = & \phi(c)\\
& = & \phi(\sum_{i=1}^k\lambda_ix_i),\\
\end{eqnarray}$$
which is exactly the statement to prove. |
$|S_1...S_n| = |S_1|...|S_n|$ (Sylow subgroups of a group $G$) | Let $x\in S_i$, $y\in S_j$, $i\neq j$. Then $[x,y]=xyx^{-1}y^{-1}=x(yx^{-1}y^{-1})$; since $S_i$ is normal, $yx^{-1}y^{-1}\in S_i$, thus $[x,y]\in S_i$. Also $[x,y]=(xyx^{-1})y^{-1}$, and since $S_j$ is normal, $xyx^{-1}\in S_j$, so $[x,y]\in S_j$. Hence $[x,y]\in S_i\cap S_j=\{e\}$, i.e. $xy=yx$.
The map $f:S_1\times \cdots \times S_n\rightarrow G$, $f(s_1,\ldots,s_n)=s_1\cdots s_n$, is well defined and injective. To see this: $s_1\cdots s_n=e$ implies that $(s_1\cdots s_n)^{p_2^{k_2}\cdots p_n^{k_n}}=s_1^{p_2^{k_2}\cdots p_n^{k_n}}=e$, which implies that $s_1=e$, since $\gcd(p_1,p_i)=1$ for $i>1$. Recursively, we deduce that $s_i=e$ for all $i$, and $f$ is injective, thus an isomorphism since the source and target of $f$ have the same cardinality. |
Proving a little tough trigonometric identity | You might want to use the following:
$\sin(A - B) = \sin(A) \cos(B) - \cos(A) \sin(B)$ |
Does this series converge, and if so to what value?: $\sum_{n=0}^\infty \left\{\frac{1}{(n+1)^2} \right\}\ln(2n+1)$ | To prove convergence:
For any $\varepsilon > 0$, we have $\ln (2n+1) < n^{\varepsilon}$ for all $n > n_{\varepsilon}$ for a certain $n_{\varepsilon}$ depending on $\varepsilon$ (hence the subscript).
So, let $\varepsilon = \frac{1}{2}$, then we can compare this series to $\displaystyle \sum_{n=0}^{\infty} \frac{1}{(n+1)^{1.5}}$, which converges by the $p$-series test since $p = 1.5$ is greater than $1$.
Note that finding the exact value of a sum is often much more difficult than proving whether or not it converges. I calculated up to the $100000$th partial sum and got $S \approx 1.2713728 \dots$
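For reference, that partial sum takes only a couple of lines of Python (standard library) to reproduce:
import math

s = sum(math.log(2*n + 1) / (n + 1)**2 for n in range(100001))
print(s)  # the partial sum quoted above, ~1.2713...
Comparing with $\int \ln(2x)/x^2\,dx$, the tail beyond $n=10^5$ contributes roughly $(\ln(2N)+1)/N \approx 1.3\times10^{-4}$. |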
Using Gram-Schmidt to find the QR decomposition | You're computing $u_2$ wrongly.
I find it useful to set up a systematic way, where the information can be picked up easily.
Let $v_1$, $v_2$ and $v_3$ be the three columns of $A$.
GS1
$u_1=v_1$
$\langle u_1,u_1\rangle=2$
GS2
$\alpha_{12}=\dfrac{\langle u_1,v_2\rangle}{\langle u_1,u_1\rangle}=\dfrac{1}{2}$
$u_2=v_2-\alpha_{12}u_1=\begin{bmatrix}1/2\\1\\-1/2\end{bmatrix}$
$\langle u_2,u_2\rangle=\dfrac{3}{2}$
GS3
$\alpha_{13}=\dfrac{\langle u_1,v_3\rangle}{\langle u_1,u_1\rangle}=\dfrac{1}{2}$
$\alpha_{23}=\dfrac{\langle u_2,v_3\rangle}{\langle u_2,u_2\rangle}=\dfrac{1}{3}$
$u_3=v_3-\alpha_{13}u_1-\alpha_{23}u_2=\begin{bmatrix}-2/3\\2/3\\2/3\end{bmatrix}$
$\langle u_3,u_3\rangle=\dfrac{4}{3}$
Matrix $Q$
The matrix $Q$ has as columns the vectors $u_1$, $u_2$ and $u_3$ divided by their norms:
$$
Q=\begin{bmatrix}
1/\sqrt{2} & 1/\sqrt{6} & -1/\sqrt{3} \\
0 & 2/\sqrt{6} & 1/\sqrt{3} \\
1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3}
\end{bmatrix}
$$
Matrix $R$
The matrix $R$ is obtained by multiplying each row of the upper unitriangular with the entries $\alpha_{ij}$ by the norm of the corresponding $u$ vector.
$$
R=\begin{bmatrix}
1 & 1/2 & 1/2 \\
0 & 1 & 1/3 \\
0 & 0 & 1
\end{bmatrix}
\begin{array}{l}\cdot\sqrt{2}\\\cdot \sqrt{6}/2\\\cdot 2/\sqrt{3}\end{array}=
\begin{bmatrix}
\sqrt{2} & \sqrt{2}/2 & \sqrt{2}/2 \\
0 & \sqrt{6}/2 & \sqrt{6}/6 \\
0 & 0 & 2/\sqrt{3}
\end{bmatrix}
$$
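As a numerical sanity check (assuming NumPy; the columns of $A$ are read off from the computations above as $v_1=(1,0,1)^T$, $v_2=u_2+\frac12 u_1=(1,1,0)^T$, $v_3=u_3+\frac12 u_1+\frac13 u_2=(0,1,1)^T$):
import numpy as np

A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
Q = np.array([[1/np.sqrt(2),  1/np.sqrt(6), -1/np.sqrt(3)],
              [0,             2/np.sqrt(6),  1/np.sqrt(3)],
              [1/np.sqrt(2), -1/np.sqrt(6),  1/np.sqrt(3)]])
R = np.array([[np.sqrt(2), np.sqrt(2)/2, np.sqrt(2)/2],
              [0,          np.sqrt(6)/2, np.sqrt(6)/6],
              [0,          0,            2/np.sqrt(3)]])
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))  # True True
Both checks pass: $QR=A$ and $Q$ has orthonormal columns. |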
If $\alpha$ irrational, then $F(x,y)=(x+\alpha,x+y)\mod1$, $T^{2}\to T^{2}$ preserves Lebesgue measure and is not weak mixing | For 1, don't try to find an explicit expression. The point is that for each $x \in [a-\alpha,b-\alpha)$, there are exactly $d-c$ choices for $y$, so $\lambda(F^{-1}([a,b)\times[c,d))) = (b-a)(d-c)$.
For 2, the map $\mathbb{T} \to \mathbb{T}, x\mapsto x+\alpha$ is not weakly mixing, so no way this map is. To make this rigorous, consider $A = B = (0,\frac{1}{2})\times \mathbb{T}$ (say); then we'd want $$\frac{1}{N}\sum_{n=0}^{N-1} \left|\lambda\left(S^{-n}\left((0,\frac{1}{2})\right) \cap \left(0,\frac{1}{2}\right)\right)-\lambda\left((0,\frac{1}{2})\right)\lambda\left((0,\frac{1}{2})\right)\right| \to 0,$$ where $S: \mathbb{T} \to \mathbb{T}, Sx := x+\alpha$. But this is false (think about it pictorially/intuitively -- it's the reason that $S$ is not weakly mixing). |
Is there always a way to pick a card from each number? | Let $k > 2$. If I want to remove at least $k - 1$ cards with the same number, I will have to select $k - 1$ rows among the $k$ rows we have, and then I will be able to select the pile I want in the last row. For one card number, I can then have at most $2k$ selections. I then have at most $2nk$ selections such that there is less than 2 cards for a particular card number. The condition you want is then:
$$ 2^{k-1} > nk. $$ |
Help understanding reciprocal polynomial of even degree | This polynomial is not reciprocal? Try substituting in $1/x$ for $x$. |
Why do these parallelograms have the same area | Consider the following figure, in which the blue region is a single quadrilateral that encloses both of your parallelograms:
This quadrilateral has some area. We don't need to measure the area, just recognize that it is a well-defined area. Call that area $A_1.$
Let's change the color of part of the figure, forming a yellow triangle as shown below:
Now return to the blue quadrilateral and again color a triangular portion yellow:
The two yellow triangles are congruent, which you can confirm in any of various ways.
(You can independently show three pairs of congruent angles and three pairs of congruent sides.)
So if one triangle has area $A_2,$ the other also has area $A_2.$
See below gif:
In the second figure we see that the areas of parallelogram $1$ plus a yellow triangle together add up to the area of the quadrilateral in the first figure:
$$ \operatorname{Area}(\text{parallelogram $1$}) + A_2 = A_1.$$
In the third figure we see that the areas of parallelogram $2$ plus a yellow triangle together add up to the area of the quadrilateral in the first figure:
$$ \operatorname{Area}(\text{parallelogram $2$}) + A_2 = A_1.$$
That is,
$$\operatorname{Area}(\text{parallelogram $1$}) = A_1 - A_2 =
\operatorname{Area}(\text{parallelogram $2$}).$$ |
Multiple Choice: What is the design for this experiment? | Your solution appears to be for a different question. In this case, it's a randomized block design. The "randomized" comes from the fact that the assignment to either treatment or control is random for each plant. The "block" refers to the fact that there are three varieties of the plant, and randomization occurs within those blocks (15 treatment + 15 control for the 30 plants in each block), rather than across the entire sample of the study (45 treatment + 45 control across all 90 plants).
An observational study would be one where the plants had already been planted in fertilizer and you weren't able to randomly assign treatment/control.
A matched pairs design typically involves taking pairs of plants that are similar in several characteristics, then assigning one to treatment and one to control.
Completely randomized would be if you just assigned 45 to treatment and 45 to control without accounting for the three varieties. |
Under what minimal conditions on $d,c>0$, do we have $\lim_{n \to \infty}\sum_{n/2 \le k \le n}{n\choose k}(1-1/n^c)^{k\choose d} = 0$? | This holds if and only if $c<d-1$.
I’ll write $f(n)\sim g(n)$ for $g(n)=f(n)(1+o(1))$, that is, $\lim_{n\to\infty}g(n)/f(n)=1$. Also, $n/2$ stands for $\lceil n/2\rceil$. Let
$$S_{d,c}(n)=\sum_{k=n/2}^n\binom nk(1-n^{-c})^{\binom kd}$$
be the quantity we want to estimate. We have
$$\binom{n/2}d\sim\frac{n^d}{2^dd!}$$
and
$$\log(1-n^{-c})\sim-n^{-c},$$
thus
$$\log\left((1-n^{-c})^{\binom{n/2}d}\right)\sim-\frac{n^{d-c}}{2^dd!}.$$
Also, for $n/2\ge d$,
$$\binom n{n/2}(1-n^{-c})^{\binom{n/2}d}\le S_{d,c}(n)\le\sum_{k=n/2}^n\binom nk(1-n^{-c})^{\binom{n/2}d}\le2^n(1-n^{-c})^{\binom{n/2}d},$$
while
$$\log\binom n{n/2}\sim\log2^n=n\log2,$$
hence
$$\log S_{d,c}(n)=n(\log2+o(1))-\frac{n^{d-c}}{2^dd!}(1+o(1)).$$
Thus, if $d-c>1$, the $n^{d-c}$ term dominates, and
$$S_{d,c}(n)=\exp\left(-\frac{n^{d-c}}{2^dd!}(1+o(1))\right)\to0\qquad(n\to\infty),$$
whereas if $d-c<1$, the $n$ term dominates, and
$$S_{d,c}(n)=\exp\bigl(n(\log2+o(1))\bigr)\to+\infty\qquad(n\to\infty).$$
If $d-c=1$, we have
$$\log S_{d,c}(n)=\left(\log2-\frac1{2^dd!}+o(1)\right)n,$$
and since $\log2>1/(2^dd!)$ for $d\ge1$, again
$$S_{d,c}(n)\to+\infty\qquad(n\to\infty).$$
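A rough numerical illustration of the threshold $c=d-1$ (a Python sketch, standard library only; it evaluates $S_{d,c}(n)$ through log-gamma to dodge overflow):
import math

def S(n, d, c):
    q = math.log1p(-n**(-c))  # log(1 - n^{-c})
    total = 0.0
    for k in range(-(-n // 2), n + 1):  # k = ceil(n/2), ..., n
        log_nk = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        binom_kd = math.exp(math.lgamma(k + 1) - math.lgamma(d + 1) - math.lgamma(k - d + 1))
        total += math.exp(log_nk + binom_kd * q)
    return total

for c in (1.0, 2.5):  # with d = 3 the threshold is c = d - 1 = 2
    print(c, [S(n, 3, c) for n in (20, 40, 80)])
For $c=1.0<d-1$ the values collapse towards $0$ as $n$ grows, while for $c=2.5>d-1$ they blow up, matching the analysis. |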
Writing down an arbitrary curve in a formal way | If you want all degree-$n$ terms as well as lower-order terms:
$$f(x,y) = \sum_{\substack{i+j\le n \\ i,j\ge 0}}\alpha_{i,j} \;x^i y^j.$$
If you want it to be homogeneous of degree $n$:
$$f(x,y) = \sum_{\substack{i+j = n \\ i,j\ge 0}}\alpha_{i,j} \;x^i y^j.$$
(If you want the TeX commands for these, right-click them, then "Show Math As" -> "TeX Commands".) |
For a Noetherian ring $R$, we have $\text{Ass}_R(S^{-1}M)=\text{Ass}_R(M)\cap \{P:P\cap S=\emptyset\}$ | $\mathfrak p\in\operatorname{Ass}_{R}(S^{-1}M)\implies\exists x\in M,\ \mathfrak p=\operatorname{Ann}_{R}(x/1)$.
This shows that $\mathfrak p\supseteq\operatorname{Ann}_R(x)$. (If you think at this point that $\mathfrak p=\operatorname{Ann}_R(x)$, forget it!)
Moreover, $\mathfrak p$ is minimal over $\operatorname{Ann}_R(x)$: if $\mathfrak p\supseteq\mathfrak p'\supseteq\operatorname{Ann}_R(x)$, then let $a\in\mathfrak p$, and from $\frac a1\cdot\frac x1=\frac01$ deduce that there is $s\in S$ such that $sa\in\operatorname{Ann}_R(x)$, so $sa\in\mathfrak p'$ hence $a\in\mathfrak p'$.
Now one uses that $R$ is Noetherian and conclude that $\mathfrak p\in\operatorname{Ass}_{R}(M)$. |
Trapezoidal integration rule error analysis. | Calculate backwards, with partial integration:
\begin{align}
\int_a^b(x-a)(x-b)f''(x)dx
&=[(x-a)(x-b)f'(x)]_a^b-\int_a^b(2x-a-b)f'(x)dx\\
&=0-0-[(2x-a-b)f(x)]_a^b+\int_a^b2f(x)dx\\
&=-(b-a)(f(b)+f(a))+2\int_a^bf(x)dx
\end{align}
so that indeed the claimed identity holds.
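The identity is easy to spot-check symbolically, e.g. with SymPy on a few concrete test functions (a sketch; any smooth $f$ would do):
import sympy as sp

x, a, b = sp.symbols('x a b')
for f in (sp.exp(x), x**4, sp.sin(x)):
    lhs = sp.integrate((x - a)*(x - b)*sp.diff(f, x, 2), (x, a, b))
    rhs = -(b - a)*(f.subs(x, b) + f.subs(x, a)) + 2*sp.integrate(f, (x, a, b))
    print(sp.simplify(lhs - rhs))  # should print 0
Each difference simplifies to $0$, as the computation above predicts. |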
Definition of "the same" | Cosets are sets themselves, so "the same" refers to set equality. Two sets are equal iff they have the same elements. |
How does one evaluate $I(\alpha,\beta,\gamma)= \int_{0}^{\infty} \cos\left(\frac{x(x^2-\alpha^2)}{x^2-\beta^2}\right)\frac{1}{x^2+\gamma^2} \, dx.$ | For $\alpha, \beta \geq 0$ and $\gamma > 0$ let
$$ f_{\alpha,\beta,\gamma} (z) = \frac{\exp \left(\mathrm{i} z \frac{z^2 - \alpha^2}{z^2 - \beta^2}\right)}{z^2 + \gamma^2} \, , \, z \in \mathbb{C} \setminus \{\pm\beta, \pm \mathrm{i} \gamma\} \, . $$
By symmetry we have
$$ I (\alpha,\beta,\gamma) = \frac{1}{2} \int \limits_{-\infty}^\infty f_{\alpha,\beta,\gamma} (x) \, \mathrm{d} x \, .$$
Clearly, the integral exists for every combination of parameters and is bounded by $\frac{\pi}{2 \gamma}$ .
We can compute its value using the residue theorem. First note that we have
$$ \operatorname{Res} (f_{\alpha,\beta,\gamma}, \mathrm{i} \gamma) = \frac{1}{2 \mathrm{i} \gamma} \exp \left(- \gamma \frac{\alpha^2 + \gamma^2}{\beta^2 + \gamma^2}\right) $$
and (by Jordan's lemma)
$$ \lim_{R \to \infty} \int \limits_{\Gamma_R} f_{\alpha,\beta,\gamma} (z) \, \mathrm{d} z = 0 \, ,$$
where $\Gamma_R$ is a semi-circle of radius $R > 0$ in the upper half-plane. A naive application of the residue theorem therefore yields
$$ I (\alpha,\beta,\gamma) = \frac{1}{2} \left[ 2 \pi \mathrm{i} \operatorname{Res} (f_{\alpha,\beta,\gamma}, \mathrm{i} \gamma) - \lim_{R \to \infty} \int \limits_{\Gamma_R} f_{\alpha,\beta,\gamma} (z) \, \mathrm{d} z \right]= \frac{\pi}{2 \gamma} \exp \left(- \gamma \frac{\alpha^2 + \gamma^2}{\beta^2 + \gamma^2}\right) \, .$$
While the result is indeed correct, this calculation is only valid for $\alpha = \beta$ . In this case we obtain (as already mentioned in the question) $I(\alpha,\alpha,\gamma) = \frac{\pi}{2 \gamma}\mathrm{e}^{- \gamma}$ .
If $\alpha \neq \beta$ , $f_{\alpha,\beta,\gamma}$ has essential singularities on the real axis, namely at $\pm \beta$ . Thus we need to deform our contour using small semi-circles $\gamma_\varepsilon (\pm \beta)$ of radius $\varepsilon > 0$ centred at these points and show that their contribution to the integral vanishes as $\varepsilon \to 0$ .
For simplicity, I will only discuss the case $\beta = 0$ in detail. We want to find
$$ \int \limits_{\gamma_\varepsilon (0)} f_{\alpha,0,\gamma} (z) \, \mathrm{d} z = \varepsilon \int \limits_0^\pi \frac{\mathrm{e}^{\mathrm{i} \left[\phi + \varepsilon \mathrm{e}^{\mathrm{i}\phi} - \alpha^2 \varepsilon^{-1} \mathrm{e}^{-\mathrm{i} \phi}\right]}}{\gamma^2+\varepsilon^2 \mathrm{e}^{2 \mathrm{i} \phi}} \, \mathrm{d} \phi = \frac{\varepsilon}{\gamma^2} \int \limits_0^\pi \mathrm{e}^{\mathrm{i} \left[\phi - \alpha^2 \varepsilon^{-1} \mathrm{e}^{-\mathrm{i} \phi}\right]} \left[1 + \mathcal{O} (\varepsilon) \right] \, \mathrm{d} \phi \, .$$
The leading-order term can actually be calculated analytically: for $z \in \mathbb{C}$ we have
$$ \int \limits_0^\pi \mathrm{e}^{\mathrm{i} \left[\phi - z \mathrm{e}^{-\mathrm{i} \phi}\right]} \, \mathrm{d} \phi = \mathrm{i} \left[(2 \operatorname{Si}(z) - \pi) z + 2 \cos(z)\right] \, . $$
The sine integral $\operatorname{Si}$ satisfies $\lim_{x \to \infty} \operatorname{Si}(x) = \frac{\pi}{2}$, so we obtain
$$ \int \limits_{\gamma_\varepsilon (0)} f_{\alpha,0,\gamma} (z) \, \mathrm{d} z \sim \frac{\mathrm{i} \varepsilon}{\gamma^2} \left[(2 \operatorname{Si}(\alpha^2 \varepsilon^{-1}) - \pi) \alpha^2 \varepsilon^{-1} + 2 \cos(\alpha^2 \varepsilon^{-1})\right] \stackrel{\varepsilon \to 0}{\longrightarrow} 0 $$
as desired. Now we are allowed to apply the residue theorem, which yields
\begin{align}
I(\alpha,0,\gamma) &= \frac{1}{2} \left[ 2 \pi \mathrm{i} \operatorname{Res} (f_{\alpha,0,\gamma}, \mathrm{i} \gamma) - \lim_{R \to \infty} \int \limits_{\Gamma_R} f_{\alpha,0,\gamma} (z) \, \mathrm{d} z - \lim_{\varepsilon \to 0} \int \limits_{\gamma_\varepsilon (0)} f_{\alpha,0,\gamma} (z) \, \mathrm{d} z \right] \\
&= \frac{\pi}{2 \gamma} \exp \left(- \frac{\alpha^2 + \gamma^2}{\gamma}\right) \, .
\end{align}
Almost the same calculation yields
\begin{align}
I(\alpha,\beta,\gamma) &= \frac{1}{2} \left[ 2 \pi \mathrm{i} \operatorname{Res} (f_{\alpha,\beta,\gamma}, \mathrm{i} \gamma) - \lim_{R \to \infty} \int \limits_{\Gamma_R} f_{\alpha,\beta,\gamma} (z) \, \mathrm{d} z - \lim_{\varepsilon \to 0} \int \limits_{\gamma_\varepsilon (\beta)} f_{\alpha,\beta,\gamma} (z) \, \mathrm{d} z - \lim_{\varepsilon \to 0} \int \limits_{\gamma_\varepsilon (-\beta)} f_{\alpha,\beta,\gamma} (z) \, \mathrm{d} z \right] \\
&= \frac{\pi}{2 \gamma} \exp \left(- \gamma \frac{\alpha^2 + \gamma^2}{\beta^2 + \gamma^2}\right)
\end{align}
for $\beta > 0$ .
Combining these results we conclude that the integral is given by
$$ I(\alpha,\beta,\gamma) = \frac{\pi}{2 \gamma} \exp \left(- \gamma \frac{\alpha^2 + \gamma^2}{\beta^2 + \gamma^2}\right) $$
for every $\alpha,\beta \geq 0$ and $\gamma > 0$ . |
Why $\lim_{x \to ∞} 1^x=1$ even though $1^∞$ is indeterminate? | Well, the fact that $1^\infty$ is indeterminate does not mean that $f(x)^{g(x)}$ has no limit when $f(x)$ approaches $1$ and $g(x)$ approaches $\infty$. It only means that you cannot deduce the limit of $f(x)^{g(x)}$ from the information "$f(x)$ approaches $1$ and $g(x)$ approaches $\infty$". For some $f$ and $g$ satisfying this hypothesis, the limit can exist, and be $1$, or $4$, or $\pm \infty$, or anything, or not exist. It turns out that for $f$ constant equal to $1$, $f^{g}$ is constant equal to $1$, and the limit is $1$. But the fact that you can compute the limit in this case does not contradict the indeterminacy statement.
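To see the indeterminacy concretely, here is a quick numerical sketch (plain Python) of three pairs with $f\to1$ and $g=x\to\infty$ but three different limits:
import math

x = 1e6
print((1 + 1/x)**x)            # f = 1 + 1/x : limit is e ~ 2.71828
print((1 + math.log(4)/x)**x)  # f = 1 + ln(4)/x : limit is 4
print(1.0**x)                  # f constant 1 : limit is 1
Same "$1^\infty$" shape, three different answers. |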
A connection between integral and sums [Trapezoidal rule perhaps?] | Yes, it's the trapezoidal rule in action. To get the desired convergence and not only the boundedness, note that
$$\int_{a_k}^{b_k} f'(c_k)(x-c_k)\,dx = 0.$$
Hence you can estimate
$$\Biggl\lvert \int_{a_k}^{b_k} f'(x)(x-c_k)\,dx\Biggr\rvert \leqslant \int_{a_k}^{b_k} \lvert f'(x) - f'(c_k)\rvert\cdot \lvert x - c_k\rvert\,dx \leqslant \omega_{f'}\bigl(\tfrac{1}{2n}\bigr) \int_{a_k}^{b_k} \lvert x-c_k\rvert\,dx,$$
where $\omega_{f'}$ is the modulus of continuity of $f'$,
$$\omega_{f'}(\delta) = \max \:\{\lvert f'(x) - f'(y)\rvert : \lvert x-y\rvert \leqslant \delta\}.$$
Since $f'$ is uniformly continuous, we have $\lim\limits_{\delta \to 0} \omega_{f'}(\delta) = 0$, as needed, leading to
$$\Biggl\lvert \sum_{k = 1}^n f\biggl(\frac{k}{n}\biggr) - n\int_0^1 f(x)\,dx - \frac{f(1) - f(0)}{2}\Biggr\rvert \leqslant K\cdot \omega_{f'}\biggl(\frac{1}{2n}\biggr).$$
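Numerically the convergence is easy to watch; a small sketch (plain Python) with $f(x)=e^x$ on $[0,1]$:
import math

f = math.exp
integral = math.e - 1                  # exact value of int_0^1 e^x dx
for n in (10, 100, 1000):
    s = sum(f(k / n) for k in range(1, n + 1))
    print(n, s - n * integral)         # tends to (f(1) - f(0))/2 = (e - 1)/2
The differences approach $(e-1)/2 \approx 0.8591$, as the estimate above predicts. |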
Alternative Definitions of a Sigma Algebra | They're saying the same thing: a countable collection of sets is in the sigma-algebra.
It's just a notation convention. You can talk of individual sets $A_1\in A$, $A_2\in A$,... OR you can say a set $B=\{A_1,A_2,...\}\subset A$, which means each of the elements of $B$ are in $A$. |
Question with Optimization | Hint: Distance-wise, the hypotenuse is the shortest distance, as you noted. However, that path is entirely underwater, which is very expensive. The other extreme would be to lay 3 km of land line west, and then 2km of underwater line north, but that has a large distance and may still cost too much. The optimal way is probably somewhere in the middle: lay some land line to the west for $x$ kilometers, and then take the straight-line distance $\sqrt{2^2+(3-x)^2}$ to the island from there. |
prove $f^{-1}(C \cap D) = f^{-1}(C) \cap f^{-1}(D)$ | To solve these kinds of questions, you want to show that $$f^{-1}(C\cap D) \subseteq f^{-1}(C) \cap f^{-1}(D)$$ and $$f^{-1}(C\cap D) \supseteq f^{-1}(C) \cap f^{-1}(D).$$
I will show $\subseteq$ here; you should try the other direction.
Suppose $x \in f^{-1} (C \cap D)$. This means $f(x) \in (C\cap D)$, which further implies $f(x) \in C$ AND $f(x) \in D$. Thus, $x \in f^{-1}(C)$ AND $x \in f^{-1}(D)$. |
What is the difference between logarithmic decay vs exponential decay? | The "Square is a rectangle" relationship is an example where the square is a special case of a rectangle.
"Exponential decay" gets its name because the functions used to model it are of the form $f(x)=Ae^{kx} +C$ where $A>0$ and $k<0$. (Other $k$'s above $0$ yield an increasing function, not a decaying one.)
Similarly for "logarithmic decay," it gets its name since its modeled with functions of the form $g(x)=A\ln(x)+C$ where $A<0$.
These two families of functions do not overlap, so neither is a special case of the other. The giveaway is that the functions with $\ln(x)$ aren't even defined on half the real line, whereas the exponential ones are defined everywhere. |
Assume $a$ is the rate of convergence of an algorithm, then how to understand $\frac{1}{1-a}$ | The expression comes from the geometric series $$1+a+a^2+a^3+\cdots=\frac{1}{1-a}$$
I do not know how to prove the given limit, but I think the left side of this identity is helpful.
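A two-line numerical check of that identity (plain Python):
a = 0.9
print(sum(a**k for k in range(2000)), 1 / (1 - a))  # both ~10.0
Truncating at $2000$ terms is more than enough here, since $0.9^{2000}$ is negligible. |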
How is $ [(x+h)^{1/3} - x^{1/3}] [(x+h)^{2/3} +x^{1/3}(x+h)^{1/3}+ x^{2/3}] $ simplified to become $ (x+h-x) $? | $$a^3-b^3=(a-b)(a^2+ab+b^2)$$
Replace $a=(x+h)^{\frac13}$ and $b=x^{\frac13}$. |
What are the multiplication properties of symmetric, anti-symmetric, triangular and diagonal matrices | Matrix multiplication is defined for any two matrices with matching dimensions: for $A \in \mathbf{K}^{n_a \times m_a}$ and $B \in \mathbf{K}^{n_b \times m_b}$ over any field $\mathbf{K}$, the product exists iff $m_a = n_b$, and is given by
$$
(A \cdot B)_{ij} = \sum_{k=1}^{m_a} A_{ik} \cdot B_{kj},
$$
for any $i \in \{1, \dots, n_a\}$ and $j \in \{1, \dots, m_b\}$ and thus the resulting matrix $C := A \cdot B \in \mathbf{K}^{n_a \times m_b}$.
E.g. for diagonal matrices the above rule boils down to the entrywise multiplication of the diagonal entries, provided the matrices have the same size.
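A direct transcription of that rule (plain Python, no libraries), together with the diagonal special case:
def matmul(A, B):
    n_a, m_a, m_b = len(A), len(A[0]), len(B[0])
    assert m_a == len(B), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m_a))
             for j in range(m_b)] for i in range(n_a)]

print(matmul([[1, 0], [0, 2]], [[3, 0], [0, 4]]))  # [[3, 0], [0, 8]]
The product of the two diagonal matrices is again diagonal, with the diagonal entries multiplied pairwise. |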
evaluation of the integral of a certain logarithm | We have:
$$ I = \int_{\frac{a^2-1}{a^2+1}}^{1}\frac{\log x}{(1-x)^{3/2}(1+x)^{1/2}}\,dx $$
where the integrand function is integrable over $(0,1)$. Moreover:
$$ 0 > I > \int_{0}^{1}\frac{\log x}{(1-x)^{3/2}(1+x)^{1/2}}\,dx = -\frac{\pi}{2}-\log 2.$$
Integration by parts gives:
$$ I = x\,\left.\log\frac{x^2-1}{x^2+1}\right|_{a}^{+\infty}-\int_{a}^{+\infty}x\left(\frac{1}{x-1}+\frac{1}{x+1}-\frac{2x}{x^2+1}\right)\,dx,$$
or:
$$ I = -a\log\frac{a^2-1}{a^2+1}-\int_{a}^{+\infty}\frac{4x^2}{x^4-1}\,dx = -a\log\frac{a^2-1}{a^2+1}-4\int_{0}^{1/a}\frac{dx}{1-x^4}$$
so:
$$ I = -a\log\frac{a^2-1}{a^2+1}-2\left(\operatorname{arccot} a+\operatorname{arccoth} a\right),$$
or:
$$\color{red}{ I = -a\log\frac{a^2-1}{a^2+1}+\log\frac{a-1}{a+1}-\pi+2\arctan a}. $$ |
Induction proof of $\sum_{i=1}^n i^3 = \frac{n^2(n+1)^2}{4}$ | Steps
Verify the equality for $n=1$
Assume we have the equality for some $n\ge1$
Now for $n+1$ we have
$$\sum_{i=1}^{n+1}i^3=\sum_{i=1}^{n}i^3+(n+1)^3=n^2(n+1)^2/4+(n+1)^3=(n+1)^2\left[\underbrace{n^2+4n+4}_{=(n+2)^2}\right]/4$$
Conclude.
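A quick numerical check of the closed form (plain Python):
for n in range(1, 10):
    assert sum(i**3 for i in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
No assertion fires, so the formula holds for $n=1,\dots,9$. |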
Matrix Norm Inequality proof | The key fact is that $\|A^*A\|\,I-A^*A$ is positive-semidefinite, and then so is $B^*(\|A^*A\|-A^*A)B=\|A^*A\|B^*B-B^*A^*AB$.
Then, as the eigenvalues of a positive-semidefinite matrix are all non-negative, we have
$$
\|AB\|_F^2=\mbox{Tr}((AB)^*AB)=\mbox{Tr}(B^*A^*AB)\leq\mbox{Tr}(\|A^*A\|B^*B)=\|A^*A\|\,\mbox{Tr}(B^*B)\\ =\|A^*A\|\,\|B\|_F^2=\|A\|^2\,\|B\|_F^2.
$$
Now,
$$
\|ABC\|_F=\|A(BC)\|_F\leq\|A\|\,\|BC\|_F=\|A\|\,\|C^*B^*\|_F\leq\|A\|\,\|C^*\|\,\|B^*\|_F
=\|A\|\,\|B\|_F\,\|C\|.
$$
Edit: here's a short proof of the fact in the first line. As $A^*A$ is positive-semidefinite, it is diagonalizable, i.e. $A^*A=VDV^*$ for a unitary $V$ and diagonal $D$. As $\|A^*A\|=\|D\|$, it is clear that $\|A^*A\|\,I-D$ is positive-semidefinite (it is a diagonal matrix with non-negative diagonal entries). Then
$$
\|A^*A\|\,I-A^*A=\|A^*A\|\,VV^*-VDV^*=V(\|A^*A\|\,I-D)V^*
$$
is positive-semidefinite.
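A random numerical spot-check of the final inequality (assuming NumPy; ord=2 is the spectral norm $\|\cdot\|$ and 'fro' the Frobenius norm):
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
lhs = np.linalg.norm(A @ B @ C, 'fro')
rhs = np.linalg.norm(A, 2) * np.linalg.norm(B, 'fro') * np.linalg.norm(C, 2)
print(lhs <= rhs)  # True
Repeating with other seeds and sizes never violates the bound, as the proof guarantees. |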
On a complement of an open set of a circle in $\Bbb{R}^{3}:$Path connected | I would do the following. Using the fact that the relation of two points being connected by a path is transitive.
If $P\in V_1$ has $z$-coordinate $\ge0$, then the straight line from $P$ to $Q_1=(0,0,1)$ is entirely in $V_1$.
If $P\in V_1$ has $z$-coordinate $\le0$, then the straight line from $P$ to $Q_2=(0,0,-1)$ is entirely in $V_1$.
The straight line connecting $Q_1$ and $Q_2$ is entirely in $V_1$.
Therefore any pair of points $P_1,P_2$ in $V_1$ can be joined by a polygonal path in $V_1$. Either via $Q_1$, $Q_2$ or both. |
Is there a subgroup of $S_7$ that's isomorphic to $\Bbb Z_2\times \Bbb Z_4$? | $\mathbb{Z}_2 \times \mathbb{Z}_4$ looks like two independent cyclic groups, one with order $2$ and the other with order $4$. Can this easily happen? Yes, because $2+4 < 7$.
Thus, let $\mathbb{Z}_2 \times \mathbb{Z}_4$ be represented as $\langle a, b \mid a^2 = b^4 = 1, ab = ba \rangle$.
Then map $a \mapsto (1 2)$ and $b \mapsto (4567)$. Here, I'm using cycle notation.
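One can confirm this concretely, e.g. with SymPy's permutation groups (points are $0$-indexed, so the cycles above become $(0\,1)$ and $(3\,4\,5\,6)$):
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([1, 0, 2, 3, 4, 5, 6])  # the transposition (1 2)
b = Permutation([0, 1, 2, 4, 5, 6, 3])  # the 4-cycle (4 5 6 7)
G = PermutationGroup([a, b])
print(G.order(), G.is_abelian)  # 8 True
An abelian group of order $8$ with generators of orders $2$ and $4$ meeting trivially is exactly $\Bbb Z_2\times\Bbb Z_4$. |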
Successor function paradox for $\mathfrak{C}$ such that $\mathfrak{B}=(\omega;\lt,S)\preccurlyeq\mathfrak{C}$ | I'll give you some hints, by way of critiquing what you've written.
This was quite simple: in fact an initial segment of a set $X$ need not to be an element of $X$...
True, but irrelevant, as far as I can see.
...so if $B = C = \omega$ then $\omega$ is an initial segment of $C$, and if $C$ contain $B$, every element of $C$ is an ordinal so $\omega$ is again an initial segment.
No. $\mathfrak{C}$ consists of an arbitrary set $C$ equipped with an arbitrary binary relation $<$ and an arbitrary unary function $S$. We have $\omega\subseteq C$, and we know that $<$ is a linear order on $C$, since $(\omega;<,S)$ is an elementary substructure of $\mathfrak{C}$, but the other elements of $C$ need not be ordinals, and even if they are, the order $<$ on $C$ need not agree with the ordinary ordering of ordinals. For example, a priori we might have $C = \omega\cup \{\mathbb{R},\aleph_7\}$, with $0 < 1 < \mathbb{R} < \aleph_7 < 2 < 3 < \dots$ (in which case $\omega$ would not be an initial segment).
Instead, to prove $\omega$ is an initial segment of $C$, you need to use the fact that $\mathfrak{B}$ is an elementary substructure of $\mathfrak{C}$. For example, we have $\mathfrak{B}\models \lnot \exists x\, (1 < x \land x < 2)$, also $\mathfrak{C}\models \lnot \exists x\, (1 < x \land x < 2)$, so the above picture (with elements of $C\setminus \omega$ wedged between $1$ and $2$) can't happen.
The Hint makes you understand that $\omega$ cannot be an element of $C$.
Actually, there's nothing to stop $\omega$ being an element of $C$! Remember, the ordering $<$ is an arbitrary linear order on $C$. But really, the actual identity of the elements of $C\setminus \omega$ is irrelevant for solving this problem. All that matters is their order type, i.e. how they're related under $<$ (and also $S$).
To verify, I used the Tarski-Vaught criterion (II.16.5).
Why are you using Tarski-Vaught? The Tarski-Vaught criterion is useful for proving that $\mathfrak{B}\preceq \mathfrak{C}$. In this problem, you're given the assumption that $\mathfrak{B}\preceq \mathfrak{C}$.
...using as $\psi(x,y)$ the formula $(y\neq 0\land x < y \rightarrow \lnot (y = S(x)))$.
Now if $y = \omega$, for every $a\in \mathfrak{B}$ the $\psi$ is true, but that $y$ is not in $B$ and this is a contradiction for the Tarski-Vaught criterion. Thus $\omega\notin C$.
This is all quite confused. It looks like you think that Tarski-Vaught says that if there is some $c\in C$ such that for all $b\in B$, $\mathfrak{C}\models \psi(b,c)$, then there is some $b'\in B$ such that for all $b\in B$, $\mathfrak{B}\models \psi(b,b')$. If so, you have not parsed Tarski-Vaught correctly. In the correct statement, a particular $b\in B$ is fixed in advance, and then we get that if $\mathfrak{C}\models \exists y\, \psi(b,y)$, then you can find a witness $b'\in B$ which works for that fixed $b$, in the sense that $\mathfrak{B}\models \psi(b,b')$.
In any case, I think the idea behind what you wrote is this: Suppose $\mathfrak{C}\models \exists y\, (y\neq 0 \land \forall x\, \lnot (S(x) = y))$. Since $\mathfrak{B}\preceq \mathfrak{C}$, also $\mathfrak{B}\models \exists y\, (y\neq 0 \land \forall x\, \lnot (S(x) = y))$ [note that we can use the definition of elementary substructure directly here, rather than appealing to Tarski-Vaught]. But that's a contradiction, since every non-zero natural number is a successor.
The above argument is correct, and it shows that every element of $C\setminus \omega$ is a successor (which is an important step in solving (2)). But it does not show that $\omega\notin C$. Remember, the elements of $C$ don't have to be ordinals, and the ordering $<$ doesn't have to have anything to do with the ordinary ordering of the ordinals! $\omega$ can be a successor in $\mathfrak{C}$. For this reason, it's really better to forget altogether about the question of whether $\omega$ is in $\mathfrak{C}$ or not, and just think of the elements of $C\setminus \omega$ as arbitrary "stuff" ("points", as Andrés says in his comment). |
How to find midpoint of an Arc on a 3D Plane | The simplest way in my view would be to take the midpoint of $AB$ and scale it to a point on the sphere (i.e., a point whose distance from $O$ is the radius).
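In code this is one line of vector algebra; a sketch assuming NumPy, with $A$ and $B$ on a sphere centred at $O$ (degenerate only when $A$ and $B$ are antipodal, since the chord midpoint is then $O$ itself):
import numpy as np

def arc_midpoint(A, B, O):
    A, B, O = map(np.asarray, (A, B, O))
    m = (A + B) / 2 - O                    # chord midpoint, relative to the centre
    r = np.linalg.norm(A - O)              # sphere radius
    return O + r * m / np.linalg.norm(m)   # scale back out to the sphere

print(arc_midpoint([1., 0., 0.], [0., 1., 0.], [0., 0., 0.]))  # [0.7071 0.7071 0.]
This returns the midpoint of the minor arc; the major-arc midpoint is its antipode. |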
What does "if the rightmost column of the augmented matrix contains a pivot then the system has no solutions" mean? | As I interpret it, they mean by "augmented matrix" the full coefficient matrix, where the equation
$$
\cases{ax + by = c\\dx+ey=f}
$$
gets rewritten to the matrix
$$
\begin{bmatrix}a&b&c\\d&e&f\end{bmatrix}
$$
I hope you can see that if this matrix has a pivot in the right-most column, then the system has no solution.
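A tiny concrete instance (assuming SymPy): the inconsistent system $x+y=1$, $x+y=2$ produces exactly such a pivot after row reduction:
import sympy as sp

M = sp.Matrix([[1, 1, 1],
               [1, 1, 2]])
print(M.rref()[0])  # Matrix([[1, 1, 0], [0, 0, 1]])
The pivot in the right-most column corresponds to the absurd equation $0=1$, so there is no solution. |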
Poisson process of satellite launches | If $G$ is the cumulative distribution function of the lifetime of the satellite, a satellite that was launched at time $x$ has probability $1 - G(t-x)$ of still being up at time $t > x$, independently of all other satellites. Thus the launches of satellites still up at time $t$ form an inhomogeneous Poisson process of rate $\lambda (1 - G(t-x))$. The number of these that were launched in the time interval $(a,s)$ is then a Poisson random variable with parameter $\mu = \lambda \int_a^s (1-G(t-x))\; dx$, and the probability that this is $0$ is $\exp(-\mu)$. |
properties of truncated tent maps on the unit interval | In case anyone ever comes across this question again, here are some answers:
There was a typing error in the script. It was supposed to say $T_0$ only has one fixed point, which is obviously true, because $T_0 (x) = 0$. Thus $T_0(0) = 0$ is the only fixed point. The second part of the statement is correct.
Let $F(\cdot)$ denote the set of fixed points of a function on an interval. Then $$F(T_h) \subset F(T_1)$$ on the interval $[0,h)$. Notice that this is only valid for the half-open interval, since $T_h(h) = h$ is a fixed point for every $T_h$, but not for $T_1$. Likewise, $$F(T_1) \subset F(T_h)$$ on $[0,h]$. |
Area under the curve described by θ=ar | I'm hoping this is relevant to your problem. The area of a polar function $r(\theta)$ is given by...
$$A=\int_{\alpha}^{\beta} {1 \over 2} \cdot r^2 \ d \theta$$
(This is fairly intuitive, its exactly like the area formula for a circle, except now its in integral form to allow tiny radius changes to be summed up to a total area. Test this out on a constant $r(\theta)$ to see how it gives the area of a circle.)
The domains of integration are angles. In your case, its very easy to convert $\theta(r)$ to $r(\theta)$. Your function is given by...
$$r(\theta)={\theta \over a}$$
Thus the area is given by...
$$A=\int_{\alpha}^{\beta} {1 \over {2 \cdot a^2}} \cdot {\theta}^2 \ d \theta$$
$$\Rightarrow A={{\beta^3-\alpha^3} \over {6 a^2}}$$
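A quick numerical cross-check of that evaluation (plain Python, midpoint rule, with arbitrary values for $a$, $\alpha$, $\beta$):
import math

a, alpha, beta = 2.0, 0.0, math.pi
N = 100000
h = (beta - alpha) / N
numeric = sum(0.5 * ((alpha + (i + 0.5) * h) / a)**2 * h for i in range(N))
print(numeric, (beta**3 - alpha**3) / (6 * a**2))  # the two values agree
Both outputs are $\approx 1.29193$, confirming the antiderivative. |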
Simplify limit problem $\lim\limits_{x\to 0}\frac{\sqrt{2+x^2}-\sqrt{2-x^2}}{x^2}$ | $\displaystyle\quad\lim_{x\to 0}\frac {2x^2}{x^2(\sqrt{2+x^2} + \sqrt{2-x^2})}$
$\displaystyle=\lim_{x\to 0}\frac {2}{(\sqrt{2+x^2} + \sqrt{2-x^2})}$
$\displaystyle=\frac {2}{(\sqrt{2+0^2} + \sqrt{2-0^2})}$
$\displaystyle=\frac {2}{(\sqrt{2} + \sqrt{2})}=\frac{1}{\sqrt{2}}$ |
Lebesgue measure and outer measure | If $m^*(A)=\infty$: Let $G=\Bbb R.$
If $m^*(A)<\infty$: For $n\in \Bbb N$ let $A\subset G_n\subset \Bbb R$ where $G_n$ is open and $m^*(A)\le m(G_n)\le m^*(A)+1/n.$ Let $G=\cap_{n\in \Bbb N}G_n.$
Use this important general property:
$(\bullet)$ If $\{G_n: n\in \Bbb N\}$ is a countable family of measurable sets and each $G_n$ has finite measure then $m(\cap_{n\in \Bbb N}G_n)=\inf_{n\in \Bbb N}m(H_n)$ where $H_n=\cap_{j=1}^nG_j.$
Proof of $(\bullet)$: Let $G=\cap_{n\in \Bbb N}G_n.$ For $n\in \Bbb N$ let $J_n=H_n\setminus H_{n+1}.$ Then $\{G\}\cup \{J_n:n\in \Bbb N\}$ is a countable family of pair-wise disjoint measurable sets and for each $n\in \Bbb N$ we have $H_n=G\cup (\cup_{j\ge n}J_j)$ so $$ (*)\quad m(H_n)=m(G)+\sum_{j=n}^{\infty}m(J_j).$$ Now $\sum_{j\in \Bbb N} m(J_j)$ is a convergent series of non-negative reals... (It sums to $m(H_1\setminus G)=m(G_1\setminus G)$)... so $$(**)\quad \lim_{n\to \infty}\sum_{j=n}^{\infty}m(J_j)=0.$$ Apply $(**)$ to $(*)$ to see that $\langle m(H_n)\rangle_{n\in \Bbb N}$ is a decreasing sequence converging to $m(G).$
Remark: $(\bullet)$ also holds with the weaker condition that $m(G_{n_0})<\infty$ for at least one $n_0$: Apply $(\bullet)$ to $\{G'_n:n\in \Bbb N\}=\{G_n\cap G_{n_0}:n\in\Bbb N\}.$ It does not hold for all families, e.g. if $G_n=[n,\infty).$ |
maximum and minimum of this expression | This is equivalent to extremizing the Rayleigh quotient $\frac{x^TAx}{x^Tx}$, where $[A]_{ii}=2a_{ii}$ and $[A]_{ij}=a_{ij}$ when $i\neq j$. Since $x^TAx=x^T\left(\frac{A+A^T}{2}\right)x$, the max and min are the largest and smallest eigenvalues of the symmetric part $\frac{A+A^T}{2}$; when $A$ is symmetric (as it is here, if $a_{ij}=a_{ji}$), these are just the extreme eigenvalues of $A$ itself. Thus one way of solving this question is to compute those eigenvalues and take the largest and smallest. |
Help to proof a Cumulative Distribution Function | If $x\le y$, $0\le F(x)\le F(y)\le 1$, so $0\le F^r(x)\le F^r(y)\le 1$. |
A left ideal need not be a right $R$ module | What about
$$\left\{\;\begin{pmatrix}a&0\\b&0\end{pmatrix}\;\right\}\le M_2(\Bbb R)\;\;?$$
It's a left ideal of its ring, but not a right one. |
Binomial theorem incomplete expansion | That's because
$$
\sum_{0\,\le\,k\,\le\,r} (-1)^{k} \binom{n}{k}
= \sum_{0\,\le\,k\,\le\,r} (-1)^{k} \binom{r-k}{r-k} \binom{n}{k}
= (-1)^{r} \sum_{0\,\le\,k\,\le\,r} \binom{-1}{r-k} \binom{n}{k}
= (-1)^{r} \binom{n-1}{r}
$$
(here $\binom{-1}{r-k}=(-1)^{r-k}$, and the last step is the Vandermonde convolution)
after which
$$
28 = 7 \cdot 4 = {{8 \cdot 7} \over {1 \cdot 2}}\quad \to \quad \left( {n = 9,\;r = 2} \right)\; \vee \;\left( {n = 9,\;r = 6} \right)
$$ |
Alternative way to check cauchy riemann equation. | As the question asks to use Cauchy-Riemann equations so either you convert it to get $u$ and $v$ in $x$ and $y$; or use polar coordinates $r$, $\theta$ using $z=re^{i\theta}$, i.e.
$f(re^{i\theta})=e^{-\cos\theta/r^4}\cos\left(\frac{\sin\theta}{r^4}\right)+i\,e^{-\cos\theta/r^4}\sin\left(\frac{\sin\theta}{r^4}\right)$.
Then the Cauchy-Riemann equations in polar form are: $\dfrac{\partial u}{\partial r}=\dfrac{1}{r}\dfrac{\partial v}{\partial \theta},\quad \dfrac{\partial v}{\partial r}=-\dfrac{1}{r}\dfrac{\partial u}{\partial \theta}$ |
Finding the bound of a sequence sum | Take the simple case of maximizing $J = a_0 + a_0 a_1$ subject to $a_0 + a_1 = A$. The constraint tells us
$a_1 = A - a_0$ so $J = a_0 + a_0(A - a_0) = (A + 1)a_0 - a_0^2$, which has an extremum at $a_0 = (A + 1)/2$, $a_1 = (A - 1)/2$, its value there $(A + 1)^2/4$. Might help a bit, and it does give you a counterexample.
Thanks to @Mindlack for pointing out an algebra misstep of mine, now fixed. |
Show that the statement in the Law of Quadratic Reciprocity can be written (as Gauss did as) | I think of Quadratic Reciprocity as saying that for odd primes $p$ and $q$, we have $(p/q)=(q/p)$ unless both $p$ and $q$ are congruent to $1$ modulo $4$. In that case, $(p/q)=-(q/p)$.
Now let us see whether the formula of the post does the same thing. Recall that $(-1/p)=1$ if $p$ is of the form $4k+1$, while $(-1/p)=-1$ if $p$ is of the form $4k+3$.
Let $q=4k+1$. Then we know $(p/q)=(q/p)$. The formula of the post gives $(p/q)=(q(-1)^{2k}/p)$, which is correct.
Let $q=4k+3$. Then the formula of the post gives $(p/q)=(-q/p)$. But $(-q/p)=(-1/p)(q/p)$. If $p$ is of the form $4l+1$, this gives the correct result $(q/p)$. If $p$ is of the form $4l+3$, then again we get the correct result $-(q/p)$.
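A quick spot-check of Gauss's formulation over small primes (assuming SymPy's legendre_symbol):
from itertools import combinations
from sympy.ntheory import legendre_symbol

for p, q in combinations([3, 5, 7, 11, 13, 17, 19, 23], 2):
    assert legendre_symbol(p, q) == legendre_symbol((-1)**((q - 1)//2) * q % p, p)
print("(p/q) = ((-1)^((q-1)/2) q / p) holds for all pairs tested")
No assertion fires, matching the case analysis above. |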
derivative curve formula | Hint $$c\times r=(r\times \dot r)\times r=(r\cdot r)\dot r-(\dot r\cdot r)r$$ |
how to prove $\frac {(a_1+...+a_k-1)!}{a_1!...a_k!} \in \Bbb N$ | Observe that this is reminiscent of the multinomial coefficients:
$$\binom{a_1+\ldots+a_k -1}{a_1, \ldots, a_i-1, \ldots, a_k}$$
for each $i, 1 \le i \le k$; we know all of those are integers.
The quantity $\dfrac{(a_1+\ldots+a_k-1)!}{a_1! \cdots a_k!}$ results from them by dividing by $a_i$. So for each $i$:
$$a_i \dfrac{(a_1+\ldots+a_k-1)!}{a_1! \cdots a_k!}$$
is an integer. Can you take it from here? |
Trying to understand the Whitney Embedding Theorem | You are mixing up the imbedding into $\mathbb{R}^3$ and its implication on the topology of the complement with the manifold itself. A smooth closed $1$-manifold can always be diffeomorphically mapped onto the unit circle in $\mathbb{R}^2$.
You will not be able to recover its embedding into three-space from this (the knot), but that's not what Whitney's theorem is about. |
Hahn Banach geometric form | According to the Hahn-Banach first geometric form,
$\exists l\in E^\ast\setminus\{0\}$, $c\in\mathbb{R}$ s.t.
\begin{eqnarray}
\forall x\in A, y\in M \quad l(x)\leq c \leq l(y).
\end{eqnarray}
Then we have $l(M)=\{l(0)\}$ from the second inequality above.
Thus, putting $H:=l^{-1}(l(0))$, we are done. |
Regarding function where the slope is infinity or negative infinity | The answer, basically, is that infinity is weird.
There are a couple distinct ways to extend the real line to include infinity. One way is to add two elements, one at either "end" of the number line; often they're denoted $+\infty$ and $-\infty$. This gives a structure called the extended real line.
Another approach, though, is to view the real line as a sort of horseshoe-shape, where both ends curve around towards each other. In this view, the natural thing to do is to add a single point (frustratingly also usually called "$\infty$") which "glues" the ends together. This results in a structure called the projective real line, and is also the one-point compactification of the real line.
What you've discovered is that - essentially - "slope" is a map from $\{$lines$\}$ to the projective real line, rather than from $\{$lines$\}$ to the extended real line. There are other contexts where the extended real line is the "right" object to be looking at instead of the projective real line.
The crucial methodological takeaway, in my opinion, is:
The idea of infinity is complicated enough that it has many different instantiations in mathematics; these versions often have wildly different properties, and are more or less useful/interesting/cool depending on the specific context. |
Graph properties question | We will show that if $(1)$ and $(3)$ fail, then $(2)$ must hold. In fact, it is possible to show a slightly stronger result: that if $\chi(G) \geq r + 1$ and there exists a tree $T$ on $r + 1$ vertices such that $G$ does not contain an induced copy of $T$, then $G$ must contain an induced cycle on at most $r + 1$ vertices.
We will need two lemmas. Recall that $\delta(G)$ denotes the minimum degree of a graph $G$.
Lemma 1. Any graph $G$ satisfies
$$\chi(G) \leq \max\{\delta(H) \mid H \subseteq G\} + 1.$$
Lemma 2. If $\delta(G) \geq r$ and $T$ is a tree on $r + 1$ vertices, then $G$ contains a copy of $T$.
Now we are ready to begin.
Proof: Fix $r \in \mathbb{N}$ and suppose that $G$ is not $r$-colorable. By Lemma 1, $\chi(G) \geq r + 1$ implies that $\max\{\delta(H) \mid H \subseteq G\} \geq r$, that is, that $G$ has a subgraph $H$ of minimum degree at least $r$. By Lemma 2, $H$, and hence $G$, contains a copy of every tree on $r + 1$ vertices.
Suppose that there is some tree $T$ on $r + 1$ vertices such that $G$ does not contain an induced copy of $T$. Since $G$ does contain a copy of $T$, that copy is not induced, so some edge of $G$ joins two of its vertices that are non-adjacent in $T$; this edge, together with the path in $T$ between its endpoints, closes a cycle, so $G$ must contain a cycle on at most $r + 1$ vertices. Furthermore, the (not necessarily unique) smallest such cycle must be induced, since a chord of a shortest cycle would produce a shorter one. This completes the proof. |
Find a basis for a subspace | No, your solution is incorrect.
$$
S= \{p \in P_2 \mid p(1)=0\}
$$
So a generic $p \in S$ would look like $p=ax^2+bx+c$ then
$p(1)=a\cdot1^2+b\cdot1+c=0 \iff c=-a-b$
So we can replace that in our generic $p$:
$$
p=ax^2+bx+(-a-b)
$$
Rearranging...
$$
p=a(x^2-1)+b(x-1)
$$
So your basis is $B=\{x^2-1,x-1\}$ |
Computing the variance of an estimator | You have a sign mistake. For any real random variable $X$, it holds that $Var(X) = Var(-X)$. So your calculation should be
$$ Var(T) = Var(\frac{1}{2}X_1) + Var(X_2) + Var(\frac{1}{2}X_3) $$
where we assume that the samples are independent.
Therefore,
$$ Var(T) = \frac{1}{4}\sigma^2 + \sigma^2 + \frac{1}{4}\sigma^2 = 1.5\sigma^2 $$ |
3 Spanish teams in champions league | There are 3 possible ways they can meet: Sevilla vs Barcelona, Sevilla vs Real Madrid, Real Madrid vs Barcelona. These are disjoint, and the chance of any particular draw happening is $1/7$ (since any one team can play any of the other $7$, randomly. So the probability of two spanish teams meeting is $3/7$, and the probability of avoiding each other is $4/7$.
An alternative method is to consider this: the first spanish team is picked. There is a $2/7$ chance for them to match one of the other two spanish teams. Suppose they don't. Then from the remaining $6$ teams, the chance of the two spanish teams matching is $1/5$, by the fact that any pairing is equally likely from the perspective of one of these remaining teams. So the probability of matching up is $$\frac27+\frac57\cdot\frac15=\frac37$$ as before.
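A quick Monte Carlo sanity check of the $3/7$ (plain Python; teams $0,1,2$ stand in for the three Spanish sides):
import random

def spanish_pair(trials=200000):
    hits = 0
    for _ in range(trials):
        teams = list(range(8))
        random.shuffle(teams)                  # a uniformly random draw of 4 ties
        pairs = zip(teams[::2], teams[1::2])
        if any(a < 3 and b < 3 for a, b in pairs):
            hits += 1
    return hits / trials

print(spanish_pair(), 3 / 7)  # both ~0.4286
The simulated frequency sits within sampling error of $3/7$. |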
problem regarding fundamental and homology groups | To see why homology is supposed to "count holes", take the case where $n=1$. Then a singular $1$-chain is just a path in $X$ (since the $1$-simplex is basically $[0;1]$). So a cycle is a sum of path so that the endpoints "telescope". For instance, if you put paths one after the other to make up a triangle, this will be a $1$-cycle. Now a $2$-boundary is the boundary (yes...) of a full triangle in $X$ (actually a sum of such things). So if you take $\mathbb{R}^2$ minus the origin, and take a $1$-cycle as described above to loop around the origin, you won't be able to find a full triangle that has that as a boundary, because the full triangle would have to include the missing point. On the other hand, if you don't have this hole, then no problem, you just take the obvious full triangle with this boundary.
As for your other question, it's true that homotopy groups and homology groups are deeply linked. For instance (let's assume $X$ is arcwise connected to avoid mentioning basepoints), $H_1(X)=\pi_1(X)^{ab}$, and more generally for $n\geqslant 2$ if $\pi_i(X)$ is trivial for $i<n$ then $H_i(X)$ is also trivial for $i<n$, and $H_n(X)=\pi_n(X)$.
The $\pi_n(X)$ are a "perfect" homotopical invariant, in the sense that for nice spaces, if $f:X\to Y$ induces isomorphisms $\pi_n(X)\to \pi_n(Y)$ for all $n$, then $X$ and $Y$ are homotopically equivalent. But they are close to impossible to compute in general.
Whereas homology is much weaker, but on the other hand is very computable. So it is a sort of poor man's homotopy group : it's less powerful, but you can actually get it. This can already be seen in $H_1(X)=\pi_1(X)^{ab}$ : when you compute $H_1$ instead of $\pi_1$, you are in a sense settling for less information. |
Inequality involving infinite sum | Let $f(x) = x\exp(-x^2)$. Note that $f(x)$ is decreasing for $x \geq 1$. Therefore,
\begin{align*}
\sum_{n=1}^{\infty}{f(n)}
&= f(1) + \sum_{n=2}^{\infty}{f(n)} \\
&\leq \exp(-1) + \int_{1}^{\infty}f(x)dx \\
&= \exp(-1) + \int_{1}^{\infty}x\exp(-x^2)dx \\
&= \exp(-1) -2^{-1}\exp(-x^2)\bigg|_{1}^{\infty} \\
&= \frac{3}{2e}
\end{align*}
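Numerically (plain Python) the two sides compare as follows:
import math

s = sum(n * math.exp(-n**2) for n in range(1, 50))
print(s, 3 / (2 * math.e))  # ~0.4049 <= ~0.5518
Truncating at $n=50$ loses nothing visible, since the terms decay like $e^{-n^2}$. |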
One one onto functions | Ok. Here goes a little generalization (maybe) and a hint:
Let $A$ be a countable set; then $A'_n = \underbrace{A \times A \times \ldots \times A}_{\text{n times}}$ is also countable. Since countable sets are in bijection with $\mathbb{N}$, there exists a bijection from $A$ to $A'_n$ for all $n \in \mathbb{N}$.
Hint: Try using induction on $n$. (Observe that $A'_1 = A$ so $A'_1$ is countable). |
Tensors = matrices + covariance/contravariance? | Usually, a matrix is thought of a representation of a linear operator: a map that takes a vector and spits out another vector. Say $A$ is some linear operator and $v$ is some vector, then $A(v)$ is the output vector.
An equivalent way of looking at it, however, is to say that there is a map $B$ that takes two vectors $v, w$ and spits out a scalar, given by $B(v,w) = A(v) \cdot w$, say. Such a map is what is usually described in the literature when talking about tensors.
Where do contravariance and covariance come in? Well, the above idea of a tensor is actually a bit of a cheat. There might not be an inner product; we might not be able to freely convert between vectors and covectors using it. So instead of saying that $B$ takes two vectors as arguments, let $B$ be a map taking one vector $v$ and a covector $\alpha$ instead, so that $B(v, \alpha) = \alpha(A(v))$.
(You'll note that, if there is a way to convert from vectors to covectors, then any tensor acting on $p$ vectors and $q$ covectors could be converted to one that acts on $p+q$ vectors, for instance.)
A general tensor could take any number of vector or covector arguments, or a mix of the two in any number.
In physics, it's common to look at the components of a tensor with respect to some basis--rather than supply whatever vectors or covectors that might be relevant to a problem, we supply a set of basis vectors and covectors instead, so we need only remember the coefficients. If $e_i$ is the $i$th basis vector and $e^j$ is the $j$th basis covector, then $B(e_i, e^j) = {B_i}^j$ takes us from the more math-inclined definition of a tensor to the more familiar form to a physicist.
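To make the last paragraph concrete, here is a small numpy illustration (a sketch: the operator $A$ is an arbitrary choice, and the standard dual basis is assumed, so the components ${B_i}^j$ come out as transposed matrix entries):

```python
import numpy as np

# an arbitrary linear operator A on R^3, represented as a matrix
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

e = np.eye(3)  # rows serve as both basis vectors e_i and covectors e^j

# components B_i^j = e^j(A(e_i)), which here equal A[j, i]
B = np.array([[e[j] @ (A @ e[i]) for j in range(3)] for i in range(3)])
print(np.allclose(B, A.T))  # True
```
Here entry $(i,j)$ of `B` is the component ${B_i}^j$. |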
If $A$ is $n\times n$ matrix with $(A-I)^2=0$ then which of the following is true? | $(A-I)^2=0\;$ implies that all the eigenvalues of $A$ are $1$ (you can see this through its minimal polynomial, or in a lot of other ways).
It means that its Jordan form has all the elements on the diagonal equal to one.
Since trace and determinant are invariant under similarity, you have that $\operatorname{trace}(A)=n$ and $\det(A)=1$. |
derivative of many roots | Consider $\ln y=...$
Then $\frac 1y\frac{dy}{dx}=...?$ |
Probability of not picking a particular ball, $W1$, out of an urn on the 1st pick, and not picking $W2$ on the 2nd pick? | The probability that white ball 1 is chosen is $\frac{3}{13}$, so $\Pr(Y_1=0)=\frac{10}{13}$.
Given that white ball 1 is not chosen, the probability that white ball 2 is chosen is $\frac{3}{12}$, so $\Pr(Y_2=0|Y_1=0)=\frac{9}{12}$.
Multiply: $\Pr(Y_1=0,\ Y_2=0)=\frac{10}{13}\cdot\frac{9}{12}=\frac{15}{26}$. |
Proof regarding $\operatorname{Spec}A$ being irreducible (Atiyah-MacDonald). | The nilradical $\mathfrak{N}$ of a commutative ring is the intersection of all the prime ideals in the ring.
Suppose first that $\mathrm{Spec}(A)$ is irreducible. Now let $ab \in \mathfrak{N}$. This means that every prime ideal of $A$ contains $ab$. Therefore, $\mathrm{Spec}(A) = V(ab) = V(a) \cup V(b)$. Since $\mathrm{Spec}(A)$ is irreducible, this means either $\mathrm{Spec}(A) = V(a)$ or $\mathrm{Spec}(A) = V(b)$. Therefore, either $a$ or $b$ is contained in all the prime ideals of $\mathrm{Spec}(A)$, that is, either $a \in \mathfrak{N}$ or $b \in \mathfrak{N}$. I think it's more 'natural' to work with closed subsets as the Zariski topology is defined in terms of closed subsets.
Suppose $\mathfrak{N}$ is a prime ideal of $A$. Now $\mathrm{Spec}(A) = V(\mathfrak{N})$. Any closed subset $V(I)$, where $I$ is an ideal of $A$, contains $\mathfrak{N}$ if and only if $I$ is contained in all the prime ideals of $A$, that is, $I \subset \mathfrak{N}$, which implies $V(\mathfrak{N})\subset V(I)$. So $V(\mathfrak{N})$ is the smallest closed subset containing $\mathfrak{N}$. So $\mathrm{Spec}(A) = \overline{ \{ \mathfrak{N} \}}$ is the closure of a single point, therefore it is irreducible.
More generally, for any subset $Y$ of $\mathrm{Spec}(A)$, you can define $I(Y) := \bigcap_{\mathfrak{p} \in Y} \mathfrak{p}$. Similar arguments as above also show that $Y$ is irreducible if and only if $I(Y)$ is a prime ideal. |
How can one prove this geometric inequality? | If you go on reading the page you linked, you'll discover this proof:
"Let $u=DZ$, $v=FX$, $t=BY$. It's not hard to see that there exists $r$ such that the circles centered at $B, D, F$ with radii $rt, ru, rv$ have a point in common. Call this point $O$. We claim $r \leq \frac {2}{\sqrt {3}}$. Since $\angle BOD + \angle DOF + \angle FOB = 360°$ one of these angles is at least $120°$, say without loss of generality $\angle FOD$. Then using the Law of Cosines on $\Delta FOD$, $\frac { - 1}{2} = \cos 120 \geq \cos \angle FOD = \frac{OF^2 + OD^2 - FD^2}{2OF\cdot OD}$ ...
Now if $r > \frac {2}{\sqrt {3}}$ this rearranges to $2uv > u^2 + v^2$ contradicting the Arithmetic Mean-Geometric Mean inequality. So $r \leq \frac {2}{\sqrt {3}}$ and $O$ satisfies the conditions. Now $$AC \cdot (BD + BF - DF) = 2 AC \cdot BZ \geq 2 AC \cdot \frac{\sqrt {3}}{2} BO \geq 2\sqrt{3}[ABCO]$$
Adding up similar inequalities gives the result." |
Which formula do I use to integrate $ \int {\sqrt{x^2 + 81} \over 2} \,dx $ | Hint: Put $x = 9 \tan{t}$, then $x^{2}+81 = 81(\tan^{2}(t)+1) = 81 \cdot \sec^{2}(t)$.
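Carrying the hint through (a sketch): $dx = 9\sec^{2}(t)\,dt$ and $\sqrt{x^{2}+81} = 9\sec(t)$, so the integral becomes $\frac{81}{2}\int \sec^{3}(t)\,dt$. Applying the standard reduction formula for $\int \sec^{3}(t)\,dt$ and converting back to $x$ gives
$$\int \frac{\sqrt{x^2 + 81}}{2}\,dx = \frac{x\sqrt{x^2+81}}{4} + \frac{81}{4}\ln\left(x+\sqrt{x^2+81}\right) + C,$$
which can be verified by differentiation. |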
Minimising a black box function | A relatively simple way to optimize a continuous, non-differentiable function is Nelder-Mead simplex optimization (which, as the linked Wikipedia article comments, is "[n]ot to be confused with Dantzig's simplex algorithm for the problem of linear optimization"). Nelder-Mead has been implemented in many computational systems (e.g. optim(...,method="Nelder-Mead") in R, scipy.optimize.minimize(..., method='Nelder-Mead') in Python, etc. etc.; Numerical Recipes will give you versions in FORTRAN, C, and C++). It's also simple enough that you could implement it yourself if necessary. The simplex method works by constructing an $n$-dimensional simplex of evaluation points and then iteratively using simple heuristic rules (e.g. "find the worst point and establish a new trial point by reflecting through the opposing face of the simplex") to update until the simplex converges approximately to a point.
It's hard to say how many function evaluations would be required, but I would say you could expect on the order of dozens to hundreds of evaluations for a reasonably well-behaved 8-dimensional optimization problem. At $\approx$ 5 minutes per evaluation that would be tedious but not unfeasible.
It's not clear whether the (0,1) bounds on your inputs are hard constraints (i.e., are you pretty sure that the optimum is inside that space, or do you need to constrain the solution to that space)? If they are, things get a bit harder as most Nelder-Mead implementations don't allow for box constraints (i.e., independent upper/lower bounds on parameters). You can take a look at Powell's BOBYQA method, which might actually be a little more efficient than Nelder-Mead (although also more complex; FORTRAN implementations are available ...)
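For concreteness, here is a minimal sketch of the scipy call mentioned above; `black_box` is a hypothetical stand-in for the expensive 5-minute simulation, and the tolerances and evaluation budget are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def black_box(x):
    # stand-in for the real simulation; a smooth test function on [0,1]^8
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

x0 = np.full(8, 0.5)  # start at the centre of the unit cube
res = minimize(black_box, x0, method='Nelder-Mead',
               options={'xatol': 1e-3, 'fatol': 1e-3, 'maxfev': 300})
print(res.x, res.fun, res.nfev)  # res.nfev counts function evaluations
```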
If you can guess that your function is differentiable (which seems like a good guess if the underlying simulation process is deterministic and doesn't contain sharp if/then switches), then you could also (in principle) compute gradients at each point by finite difference methods, i.e.
$$
\frac{\partial f(x_1,...,x_i,...,x_n)}{\partial x_i} \approx (1/\delta) \cdot \left(f(x_1^*,...,x_i^*+\delta,...,x_n^*) - f(x_1^*,...,x_i^*,...,x_n^*)\right)
$$
for a $\delta$ that is neither too small (roundoff/floating-point error) nor too big (approximation error). Then you can use any derivative-based method you like. In practice this isn't always worth it; the combination of cost and numerical error in the finite-difference computations means this method may be dominated by the derivative-free methods described above.
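A sketch of the forward-difference gradient just described (the step `delta` is an assumption to be tuned against the roundoff/approximation trade-off; each gradient costs $n+1$ function evaluations):

```python
import numpy as np

def fd_gradient(f, x, delta=1e-4):
    # forward-difference estimate of the gradient of f at x
    f0 = f(x)
    g = np.empty_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += delta
        g[i] = (f(xp) - f0) / delta
    return g
```
At 5 minutes per evaluation, a single gradient of an 8-dimensional function already costs about 45 minutes, which is the practical caveat mentioned above. |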
Can any finite group be realized as the automorphism group of a directed acyclic graph? | Given an undirected graph $X=(V,E)$ with automorphism group $G$, we form the directed graph $\hat{X}=(\hat{V},\hat{E})$ by setting
$$\hat{V}=V\cup\{v_e|e\in E\}$$
$$\hat{E}=\{(v,v_{\{v,w\}})|\{v,w\}\in E\}.$$
Clearly each $v\in V$ has indegree zero, and each $v\in\hat{V}\setminus V$ has indegree two and outdegree zero. Every directed edge therefore runs from $V$ into $\hat{V}\setminus V$, so every directed path has length at most one; it follows that $\hat{X}$ has no directed cycles and so is a DAG.
It should be clear that every automorphism $g$ on $X$ induces an automorphism $\hat{g}$ on $\hat{X}$ by simply mapping $v$ to $g(v)$ and mapping $v_e$ to $v_{g(e)}$. Because the $v_{\{w_1,w_2\}}$ are 'trapped' between the vertices $w_1$ and $w_2$, it should also be clear that no new automorphisms can act on $\hat{X}$: the image of $v_{\{w_1,w_2\}}$ is fully determined by the images of $w_1$, $w_2$ and a possible reordering of the edges in $E$ between $w_1$ and $w_2$, which is all taken into account by the above induced action.
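If it helps to experiment with the construction, here is a sketch using networkx (the helper name `to_dag` is hypothetical); it directs each edge of $X$ toward a freshly added midpoint vertex:

```python
import networkx as nx

def to_dag(X):
    # build X_hat: direct every edge {v, w} of X toward a new midpoint vertex
    D = nx.DiGraph()
    D.add_nodes_from(X.nodes)
    for v, w in X.edges:
        mid = ('e', frozenset((v, w)))
        D.add_edge(v, mid)
        D.add_edge(w, mid)
    return D

C4 = nx.cycle_graph(4)  # automorphism group: dihedral of order 8
print(nx.is_directed_acyclic_graph(to_dag(C4)))  # True
```
The midpoint vertices play the role of the $v_e$ above. |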
Find volume of cube with the help of eqn of plane | Since we have parallel planes we can find the distance between the planes.
Let $x = y = 0$.
So $6(0) - 3(0) + 2z + 4 = 0 \Rightarrow 2z = -4 \Rightarrow z = -2$.
Thus a point in that plane is $(0,0,-2)$. Now use the formula for the distance between the other plane and this point. That is,
$D = \frac{|ax+by+cz+d|}{\sqrt{a^{2}+b^{2}+c^{2}}}$
So $D = \frac{|6(0)-3(0)+2(-2)+1|}{\sqrt{6^{2}+(-3)^{2}+2^{2}}} = \frac{|-4 + 1|}{\sqrt{36+9+4}} = \frac{3}{7}$.
Thus a cube with faces in these two planes has side length $\frac{3}{7}$.
So its volume is $\left(\frac{3}{7}\right)^{3} = \frac{27}{343}$. |
Proof: Derivative of $(-1)^{x}$ | Since
$$(-1)^{x}=(e^{i\pi})^x=e^{i\pi x}$$
We have
$$
\dfrac{d}{dx}\left((-1)^{x}\right)=\dfrac{d}{dx}\left(e^{i\pi x}\right)=i\pi e^{i\pi x}=i\pi(-1)^{x}
$$
For higher order derivatives
$$
\dfrac{d^{n}}{dx^{n}}\left((-1)^{x}\right)=(i\pi)^n(-1)^{x}
$$ |
Prove that the set of $n$-by-$n$ real matrices with positive determinant is connected | To continuosly transform $A$ to $I$, you can perform "continuous row and column operations" as these don't change the determinant:
Ensure that $a_{n,n}\ne0$, by continuously adding some other row to row $n$ if necessary (the existence of such a row is guaranteed by $\det A\ne 0$).
Continuously subtract a multiple of the $n$th row from all other rows until $a_{in}=0$ for all $i< n$. Also continuously subtract multiples of the $n$th column from all other columns to ensure $a_{ni}=0$ for $i<n$
Now recurse, i.e. perform steps 1 and 2 with the top left $(n-1)\times (n-1)$ submatrix etc. In the end you have a diagonal matrix with unchanged determinant
By now you have a diagonal matrix with the original positive determinant, hence negative entries come in pairs. Such negative pairs can be continuously made positive as follows by row and column operations:
$$\begin{pmatrix}-a&0\\0 &-b\end{pmatrix}\to \begin{pmatrix}-a&0\\-a &-b\end{pmatrix}\to \begin{pmatrix}0&b\\-a &-b\end{pmatrix}\to\begin{pmatrix}0&b\\-a &0\end{pmatrix}\\
\to\begin{pmatrix}b&b\\-a &0\end{pmatrix}
\to\begin{pmatrix}b&0\\-a &a\end{pmatrix}
\to\begin{pmatrix}b&0\\0 &a\end{pmatrix}
$$
Now we have a diagonal matrix with positive entries, and these can be continuously changed to $1$, thus producing $I$. Note that only this last step changed the determinant at all.
I specifically wanted to avoid rotating vectors with transcendental functions. Instead, the resulting curve above is piecewise linear and if we start with rational $A$ we can have rational matrices at every rational time $t$. |
Is $z^{-1}(e^z-1)$ surjective? | Since $f(z)=\frac{e^z-1}{z}$ is entire and non-constant, by Little Picard's theorem it is either surjective or just misses a single complex value. Let us assume to be in the second case. The missing value has to be a real value by Schwarz' reflection principle, since $f$ is real over the real line (assuming that $f$ misses some $w\in\mathbb{C}\setminus\mathbb{R}$, it also misses $\overline{w}$). The missing value is not zero since $f(2\pi i)=0$. The missing value is not some $\alpha>1$ since the line $y=1+\alpha z$ meets the graph of $g(z)=e^z$ at some point with a positive abscissa by the convexity of $g(z)$. The missing value is not some $\alpha\in(0,1)$ since the line $y=1+\alpha z$ meets the graph of $g(z)=e^z$ at some point with a negative abscissa by the convexity of $g(z)$. Negative values are also attained, hence it follows that the only missing value can be $1$, but $1=f(0)$, so $f$ is surjective. |
Compact operator with closed range has finite dimensional range | Let $Z=T(X)$. Then $Z$ is also a Banach space, as a closed subspace of a Banach space, and $T:X\to Z$ is onto, and hence open, due to Open Mapping Theorem. If $Z$ were infinite dimensional, then $T$ would not be compact, as open sets in infinite dimensional spaces are not pre-compact. |
Manifold with $\pi_1(M)=F_n$ | Your construction of the 3-manifold $M_n$ can be reworded by saying that it is a connected sum of the form
$$(*) \qquad M_n \, = \, \mathbb{R}^3 \, \# \, \underbrace{(S^2 \times S^1) \, \# \, \cdots \, \# \, (S^2 \times S^1)}_{n \,\,\text{times}}
$$
The group you want to obtain instead is simply the free product $F_{n-2} * \mathbb{Z}^2$. This is obtained by altering your construction, replacing two of the $S^2 \times S^1$ connected summands with a single $T^2 \times \mathbb{R}$ connected summand:
$$(**) \qquad \mathbb{R}^3 \, \# \, \underbrace{(S^2 \times S^1) \, \# \, \cdots \, \# \, (S^2 \times S^1)}_{n-2 \,\,\text{times}} \, \# \, (T^2 \times \mathbb{R})
$$
As noted in the comments, you can drop the $\mathbb{R}^3$ connected summand of $(*)$ without changing the fundamental group and you get a compact manifold. But although you can also drop the $\mathbb{R}^3$ connected summand of $(**)$ without changing the fundamental group, that will not result in a compact manifold because the $T^2 \times \mathbb{R}$ connected summand remains noncompact. |
How to prove $\sum _{k=1}^{\infty } (-1)^k H_{\frac{2 k}{3}} = -\frac{\pi }{2 \sqrt{3}}+\frac{3 \pi }{8}-\frac{3}{4} \log (2)$? | Start by using the integral representation of the harmonic number, $H_n=\int_0^1\frac{1-x^n}{1-x}\ dx$; we have
$$\sum_{k=0}^\infty(-1)^k H_{\frac{2k}{3}}=\int_0^1\frac{1}{1-x}\sum_{k=0}^\infty\left((-1)^k-(-x^{\frac23})^k\right)\ dx$$
$$=\int_0^1\frac{1}{1-x}\left(\frac12-\frac{1}{1+x^{\frac23}}\right)\ dx\overset{x\to x^3}{=}-\frac32\int_0^1\left(\frac{x}{1+x^2}-\frac{1}{1+x^2}+\frac{1}{1+x+x^2}\right)\ dx$$
$$=-\frac32\left[\frac12\ln(1+x^2)-\tan^{-1}x+\frac{2}{\sqrt{3}}\tan^{-1}\left(\frac{1+2x}{\sqrt{3}}\right)\right]_0^1$$
$$= -\frac{3}{4} \ln2+\frac{3 \pi }{8}-\frac{\pi }{2 \sqrt{3}}$$
Note that I used Grandi series $\sum_{k=0}^\infty (-1)^k=\frac12$. |
If $f(z)$ has order of growth $\rho$, then $f(z)/z^{\ell}$ has order of growth $\leqslant\rho$ | According to Wikipedia, the order of an entire function is expressed in terms of its coefficients by $$\rho= \limsup_{n \to \infty} \frac {n\log n}{-\log |a_n|} . $$ Dividing by $z^{\ell}$ shifts the coefficients, $a_n \mapsto a_{n+\ell}$, which does not change $\rho.$ |
Solutions manual for Analysis On Manifolds | As of July 2017, one can find some detailed solutions of some problems from chapters 4-6 in: Herman Jaramillo, "Solution to selected problems of Munkres Analysis on Manifolds Book," at:
http://s3.amazonaws.com/elasticbeanstalk-us-east-1-200981706290/wufu/573279464f6e8
I will attach the file here if Stack permits attachments. Hmm, apparently not. |
If $E(z)= \sum _{n=0 }^{\infty }\frac {z ^n } {n! } $, how is $E(0) $ defined? | $$E(z)=1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots\\
E(0)=1+0+\frac{0^2}{2!}+\frac{0^3}{3!}+\cdots=1$$ |
Herbrand Logic exercise on multidimensional induction | In the Herbrand logic course, linear induction is of the form:
\begin{align}
&p(a)\\
&\forall n.(p(n)\implies p(s(n)))\\
&---------\\
&\forall n.(p(n))
\end{align}
While Multidimensional induction is of the form:
\begin{align}
&\forall m.(p(a,m))\\
&\forall n.(\forall m.\big(p(n,m)\big)\implies \forall m.\big(p(s(n),m) \big))\\
&-----------------\\
&\forall n.(\forall m.\big(p(n,m)\big))
\end{align}
It is called multidimensional induction because in effect you are doing induction on $n$ for each and every $m$, of which there could be many.
In both forms, the goal is to find the base case, then assume a particular case, and then prove from this particular case that the next case is true. When using multidimensional induction, the assumption of a particular case is always quantified over $\forall m.\big(p(n,m)\big)$, which narrows the search of where to start.
In this particular example, you are asked to prove $\forall y.(\forall z.(e(a,y) \land e(y,z) \implies e(a,z)))$, this means your base case should be $(1)$, and the particular case $(2)$.
$$\tag{1}\forall z.(e(a,a) \land e(a,z) \implies e(a,z))$$
$$\tag{2}\forall z.(e(a,y) \land e(y,z) \implies e(a,z))$$
It is noteworthy that the particular case $(2)$ is the same as the thing you are trying to prove, less one universal quantifier. This is true for all multidimensional induction problems (including linear induction).
After assuming $(2)$, your goal will be to prove $\forall z.(e(a,s(y)) \land e(s(y),z) \implies e(a,z))$. This can be done by assuming its antecedent, $e(a,s(y)) \land e(s(y),z)$, then using the fact that $e(a,s(y))$ is in contradiction with premise $4$ of the question to prove $e(a,z)$.
Once this has been done, you can universally quantify over everything, and then use induction in the last step to complete the question.
EDIT: After your edit, you have completed the base case and are now on the inductive step. Your goal is to prove $(3)$.
$$\tag{3}\big(\forall z.(e(a,y) \land e(y,z) \implies e(a,z))\big)\implies\big(\forall z.(e(a,s(y)) \land e(s(y),z) \implies e(a,z))\big)$$
You have already made the first step toward this goal, by assuming $\forall z.(e(a,y) \land e(y,z) \implies e(a,z))$.
The next step is to arrive at $e(a,s(y)) \land e(s(y),z) \implies e(a,z)$, and universally quantify over it. This is best achieved by assuming $(4)$, and deriving $e(a,z)$.
$$e(a,s(y)) \land e(s(y),z)\tag{4}$$
The derivation of $e(a,z)$ from the assumption $(4)$ can be made by exploiting the fact that your assumption is in contradiction with the $4^{th}$ premise. It is possible to assume $\sim e(a,z)$, then derive $e(a,s(y))$ and $\sim e(a,s(y))$. |
Showing holomorphy of integral | You can start by arguing that $\hat{f}(\lambda)=\int_{\mathbb{R}}f(x)e^{-i\lambda x}dx$ is continuous everywhere on $\mathbb{C}$, which follows from the dominated convergence theorem. Then you can apply Morera's theorem by showing that $\int_{\Delta}\hat{f}(\lambda)d\lambda=0$ for all triangles in $\mathbb{C}$, which follows by interchanging the order of integration using Fubini's Theorem:
$$
\int_{\Delta}\hat{f}(\lambda)d\lambda=\int_{\mathbb{R}}f(x)\int_{\Delta}e^{-ix\lambda} d\lambda dx = 0.
$$
The conclusion of Morera's theorem is that $\hat{f}$ is an entire function. |
Proving closure for a set with $e^{tH}$ | Here, all your matrices are of the form $e^{tH}$ for some $t$. Thus, if $M_1,M_2 \in G$, there exist $t_1,t_2 \in \mathbb{R}$ such that $M_1 = e^{t_1 H}$ and $M_2= e^{t_2H}$. Consequently, as $t_1H$ and $t_2H$ commute, you can say:
\begin{align}
M_1 \times M_2 = e^{t_1H}\times e^{t_2H} = e^{(t_1+t_2)H} \in G
\end{align}
The key is that $e^{A+B} = e^A e^B$ if $A$ and $B$ commute.
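A quick numerical illustration of this closure (a sketch; the generator $H$ here is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # any square matrix will do as the generator

t1, t2 = 0.7, -1.3
lhs = expm(t1 * H) @ expm(t2 * H)
rhs = expm((t1 + t2) * H)
print(np.allclose(lhs, rhs))  # True, since t1*H and t2*H commute
```
With this particular $H$, the group $G$ is the rotation group $SO(2)$. |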
Is there a standard terminology for $|f^{-1}(y)|=1$ for almost every $y$? | At least in the context of elasticity theory, this is known as "injectivity almost everywhere", see, e.g., Ciarlet's "Mathematical Elasticity vol. 1", Problem 5.7 (following the work of Ball) and Section 7.9 (following Ciarlet-Nečas). |
equivalence norm of matrix | It depends on what you want to prove.
About the $p$-norms on an $n$-dimensional space $V$ (with $n < + \infty$), the main way to compare the different norms is that
$$\forall 1 \leq p \leq q, \quad \forall x \in V, \quad ||x||_q \leq ||x||_p \leq n^{\frac{1}{p}-\frac{1}{q}} ||x||_q$$
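A quick numerical check of the displayed inequality (a sketch; the random vector and the choice $p=1$, $q=2$ are arbitrary):

```python
import numpy as np

x = np.random.randn(10)
p, q, n = 1.0, 2.0, len(x)
lp, lq = np.linalg.norm(x, p), np.linalg.norm(x, q)
print(lq <= lp <= n ** (1/p - 1/q) * lq)  # True
```
With $p=1$, $q=2$ this is the familiar $\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\,\|x\|_2$. |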
Why must a primitive root be less than and relatively prime to n? | They have to be relatively prime to $n$ because if $x$ is a primitive root, then $x^k \equiv 1 \pmod{n}$ for some $k$, and therefore $x^{k-1}$ must be the multiplicative inverse of $x$. But only numbers relatively prime to $n$ have a multiplicative inverse modulo $n$.
They don't strictly speaking have to be smaller than $n$, but any number larger than $n$ is equivalent modulo $n$ to some number smaller than $n$. Remember that $$
a \equiv b \mod n \quad\Leftrightarrow\quad \exists k \in \mathbb{Z} \,:\, a - b = kn \text{.}
$$
So if $a \geq n$, we can subtract $n$ until we reach an integer $b$ within $[0,n-1]$. If we had to subtract $k$ times, we have $b = a - kn$, i.e. $a - b = kn$, so $a \equiv b \mod n$.
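A brute-force check of the definition (a sketch; the helper name `is_primitive_root` is hypothetical):

```python
from math import gcd

def is_primitive_root(x, n):
    # x is a primitive root mod n iff its powers hit every
    # residue coprime to n, i.e. its order equals phi(n)
    if gcd(x, n) != 1:
        return False
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    powers, y = set(), 1
    for _ in units:
        y = y * x % n
        powers.add(y)
    return len(powers) == len(units)

print([x for x in range(1, 7) if is_primitive_root(x, 7)])  # [3, 5]
```
Note that `is_primitive_root(x, n)` and `is_primitive_root(x % n, n)` always agree, reflecting the reduction argument above. |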
Asymptotic behaviour of the solution of a PDE | The solution by characteristics that you give, is only valid when $(r-\beta)e^{\alpha t}+\beta\in[0,1]$, that is, for
$$ \beta-\beta e^{-\alpha t} \le r \le \beta+(1-\beta) e^{-\alpha t}, $$
an interval of width $e^{-\alpha t}$. Outside that interval, the characteristic will hit one of the boundary conditions at $r=0$ or $r=1$, and so $u(t,r)=0$ at those points.
You should find that $\int_0^1 u(t,r) \,dr = \int_0^1 u_0(r)\,dr=\beta$ (a constant).
Conclusion: The asymptotic limit of the solution as $t\to\infty$ is $\beta\delta_\beta$, where $\delta_\beta$ is a delta function located at $\beta$ (sometimes written $\delta_\beta(r)=\delta(r-\beta)$).
Edited to add:
This analysis assumes $0<\beta<1$ and $\alpha>0$. If $\alpha=0$, the solution is of course independent of $t$. And if $\beta=0$ or $\beta=1$, the analysis needs to be changed a bit, but much the same will still hold.
Edit the second:
Here is a picture of the solution. I chose $u_0(r)=4(r-r^2)$, $\beta=2/3$, and $\alpha>0$. The red graph is the initial condition $u_0$,
blue is the solution for $e^{\alpha t}=2$, and the tallest one (khaki) is for $e^{\alpha t}=4$. The area under each curve is the same in each case.
Also, I removed my totally misguided “consistency check” from the answer. Sorry about that; not enough caffeine, I suppose. |
Newton Raphson Method for double roots | The usual method is slow for double roots because of the following Taylor series argument. Assume $f$ is twice continuously differentiable and $r$ is a single root, i.e. $f(r)=0$ and $f'(r) \neq 0$. Then
\begin{eqnarray*}\left ( x - \frac{f(x)}{f'(x)} \right ) - r & = & x - r - \left ( x - r - \frac{f''(r)}{2 f'(r)} (x-r)^2 + o((x-r)^2) \right ) \\
& = & \frac{f''(r)}{2 f'(r)} (x-r)^2 + o((x-r)^2)
\end{eqnarray*}
What I did here was Taylor expand $\frac{f(x)}{f'(x)}$ to second order about $x=r$, and then substitute in the fact that $f(r)=0$, which cancels a lot of terms. The result means that when $r$ is a single root, the method converges quadratically. Roughly speaking this means that the number of correct digits double at each step, once the error is small enough. On the other hand, if it is a double root (i.e. $f(r)=f'(r)=0$ and $f''(r) \neq 0$), we have a different situation. Here we cannot expand $f(x)/f'(x)$ about $x=r$ because this function is not defined there. Instead we must do the following:
\begin{eqnarray*}\left ( x - \frac{f(x)}{f'(x)} \right ) - r & = & x-r - \frac{1/2 f''(r) (x-r)^2 + o((x-r)^2)}{f''(r)(x-r) + o(x-r)} \\
& = & x-r -\frac{1/2 (x-r) + o(x-r)}{1 + o(1)} \\
& = & x-r - (1/2 (x-r) + o(x-r))(1+o(1)) \\
& = & 1/2 (x-r) + o(x-r)
\end{eqnarray*}
This means the method converges linearly with a coefficient of $1/2$. This means the error is approximately halved at each step, once the error is small enough, which basically means that you gain a correct binary digit at each step. This gets even worse for higher order roots: for a root of order $n$ we have a coefficient of $1-\frac{1}{n}$.
There is a modified Newton method which can detect proximity to a non-simple root and modify $f$ in such a way that quadratic convergence is recovered. If you know the order of the root is $n$, then you can use
$$x_{k+1} = x_k - n \frac{f(x_k)}{f'(x_k)}.$$
A similar argument to the first one shows that this converges quadratically. If you don't know the order of the root, there is a method which can estimate it: see http://mathfaculty.fullerton.edu/mathews/n2003/NewtonAccelerateMod.html
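A sketch comparing the two iterations on the double root of $f(x)=(x-1)^2$ (the helper name and step counts are illustrative):

```python
def newton(f, df, x0, m=1, steps=8):
    # x <- x - m*f(x)/df(x); m=1 is standard Newton,
    # m = multiplicity of the root restores quadratic convergence
    x = x0
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        if dfx == 0.0:
            break
        x -= m * fx / dfx
        print(x)
    return x

f  = lambda x: (x - 1.0) ** 2   # double root at x = 1
df = lambda x: 2.0 * (x - 1.0)

newton(f, df, 2.0)        # error halves each step: 1.5, 1.25, 1.125, ...
newton(f, df, 2.0, m=2)   # lands on 1.0 exactly in one step for this f
```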
Edit: the symbol $o(f(x))$, called "little oh notation", is a standard shorthand. It means an unspecified function $g(x)$ such that $g(x)/f(x) \to 0$ as $x$ tends to something specified by context (usually $0$ or $\infty$, in this case $0$). So for example, calculus tells us that
$$\lim_{h \to 0} \frac{f(x+h) - f(x) - h f'(x)}{h} = 0$$
which is the same as
$$f(x+h) - f(x) - h f'(x) = o(h)$$
Little oh notation has a weaker counterpart, called "Big oh notation". That is, the symbol $O(f(x))$ means an unspecified function $g(x)$ such that $g(x)/f(x)$ is bounded as $x$ tends to something specified by context. |
How to prove that every ordered field has a subfield isomorphic to $\mathbb{Q}$ using a certain provided function? | If $e:\mathbb{Q}\rightarrow K$ is an embedding, then the image of $e$ is a subfield of $K$ which is isomorphic to $\mathbb{Q}$.
For solving exercise 2, how do you think $\varphi$ should be extended? What, for instance, should $-{5\over 3}=-{1+1+1+1+1\over 1+1+1}$ be sent to?
(Incidentally, when thinking about these problems, it may be best to explicitly distinguish between the natural number $1$ and the multiplicative identity $1_K$ of the field $K$. So, $\varphi(1+1+...+1+1)=1_K+1_K+...+1_K+1_K$.) |
What are the necessary and sufficient conditions for a real square matrix (not necessarily invertible) to have a strictly real arctangent? | A priori, we can define $\arctan(z)$ for every $z\in U=\mathbb{C}\setminus\{-i,i\}$; roughly speaking, it suffices to put $\arctan(z)=\dfrac{1}{2i}\log\left(\dfrac{z-i}{z+i}\right)+\text{constant}$. Yet, we must work with one or several determinations of the $\log$ function. Note also that $U$ is not simply connected.
If we do not impose continuity on $\arctan$, then we can do as Maple does: but the obtained function is not continuous (for example) on $\{ib;b>1\}$. Otherwise, we have to restrict the domain of definition to a simply connected set; for example
$V=\mathbb{C}\setminus\{ib;|b|\geq 1\}$.
Then we can define a holomorphic function $\arctan$ on $V$ (with uniqueness if we put $\arctan(1)=\pi/4$).
The extension to the $n\times n$ complex matrices with spectrum included in $V$ is standard (use complex Jordan form); moreover, the $\arctan$ of a real matrix is real (use real Jordan form). |