Finding the concavity of a function without having to plot | For simplicity let us assume that the second derivative is everywhere defined and continuous. You have found that $f''(x)=0$ at $x=4$ and at $x=-3$. That is not enough to determine concavity, it only locates the points where concavity might change.
In most simple cases, concavity will change at these points, but it need not. To determine concavity, you need to examine the sign of $f''(x)$ in the intervals $(-\infty,-3)$, $(-3,4)$, and $(4,\infty)$. (Under our conditions, the sign of $f''(x)$ can only change at $-3$ and $4$.)
If for example $f''(x)=(x+3)(x-4)$, then $f''(x)$ is positive in $(-\infty,-3)$, negative in $(-3,4)$, and positive in $(4,\infty)$, so we get concave up, then down, then up.
However, if $f''(x)=(x+3)^2(x-4)$, the story changes: we get down, down, up. And in the case $f''(x)=(x+3)^2(x-4)^2$, we get up, up, up. |
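These three sign patterns are easy to confirm numerically. A quick sanity check (the sample points $-10$, $0$, $10$, one per interval, are my choice):

```python
# f'' for each case; sample one point in each interval determined by
# the roots x = -3 and x = 4
def signs(f2, points=(-10, 0, 10)):
    return tuple('+' if f2(t) > 0 else '-' for t in points)

case1 = lambda t: (t + 3) * (t - 4)            # up, down, up
case2 = lambda t: (t + 3)**2 * (t - 4)         # down, down, up
case3 = lambda t: (t + 3)**2 * (t - 4)**2      # up, up, up

print(signs(case1), signs(case2), signs(case3))
# ('+', '-', '+') ('-', '-', '+') ('+', '+', '+')
```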
The ratio that the orthocenter divides an altitude into | For an acute triangle we obtain: $$\frac{AH}{HD}=\frac{\frac{AE}{\cos\measuredangle HAE}}{BD\tan\measuredangle HBD}=\frac{\frac{c\cos\alpha}{\sin\gamma}}{c\cos\beta\cot\gamma}=\frac{\cos\alpha}{\cos\beta\cos\gamma}$$ |
Solving the integral $\int_{-1}^1 2\sqrt{2-2x^2}\,dx$ | $$\int 2\sqrt{2-2x^2}\, dx=2\sqrt{2}\int \sqrt{1-x^2}\, dx$$
For the expression to make sense, we must have $1-x^2\ge 0$, i.e. $-1\le x\le 1$, so there exists $t\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ such that $x=\sin t$. Then $dx=\cos t\, dt$.
$$=2\sqrt{2}\int\cos^2 t\, dt=2\sqrt{2}\int \frac{1+\cos(2t)}{2}\, dt$$
$$=\sqrt{2}\left(\int dt+\frac{1}{2}\int \cos(2t)\, d(2t)\right)$$
$$=\sqrt{2}\left(t+\frac{1}{2}\sin(2t)\right)+C=\sqrt{2}\left(\arcsin x+\sin t\cos t\right)+C$$
$$=\sqrt{2}\left(\arcsin x+x\sqrt{1-x^2}\right)+C$$
$$\int_{-1}^12\sqrt{2-2x^2}\, dx=\sqrt{2}\left(\arcsin (1)+1\sqrt{1-1^2}\right)-\sqrt{2}\left(\arcsin (-1)+(-1)\sqrt{1-(-1)^2}\right)$$
$$=\sqrt{2}\left(\arcsin (1)-\arcsin (-1)\right)=\sqrt{2}\left(\frac{\pi}{2}-\left(-\frac{\pi}{2}\right)\right)=\pi\sqrt{2}$$ |
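As a sanity check, SymPy reproduces the value $\pi\sqrt{2}$ directly:

```python
import sympy as sp

x = sp.symbols('x')
val = sp.integrate(2 * sp.sqrt(2 - 2 * x**2), (x, -1, 1))
print(val)  # sqrt(2)*pi, numerically about 4.4429
```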
How to show $|F_1-v|+|F_2-v|=c$ for an ellipse with foci $F_1, F_2$ | Per Blue’s request, I will finish my answer outlined above...
$$|v-F_1|=\sqrt{(p\cos(t)-ep)^2+(q\sin(t)-0)^2}=\sqrt{p^2\cos^2(t)-2p^2e\cos(t)+e^2p^2+q^2\sin^2(t)}$$ $$|v-F_2|=\sqrt{(p\cos(t)+ep)^2+(q\sin(t)-0)^2}=\sqrt{p^2\cos^2(t)+2p^2e\cos(t)+e^2p^2+q^2\sin^2(t)}$$
Use the change of variable $e=\sqrt{1-\frac{q^2}{p^2}}\implies q^2=p^2-p^2e^2\implies$ \begin{equation} \label{eq1}
\begin{split}
|v-F_1|& = \sqrt{p^2\cos^2(t)-2p^2e\cos(t)+e^2p^2+(p^2-p^2e^2)\sin^2(t)} \\
& = \sqrt{p^2\cos^2(t)-2p^2e\cos(t)+e^2p^2+p^2\sin^2(t)-p^2e^2\sin^2(t)}\\
&=\sqrt{p^2-2p^2e\cos(t)+e^2p^2-p^2e^2\sin^2(t)} \\
& =\sqrt{p^2-2p^2e\cos(t)+p^2e^2\cos^2(t)}\\
&=\sqrt{p^2\left(1-2e\cos(t)+e^2\cos^2(t)\right)} \\
&=\sqrt{p^2\left(1-e\cos(t)\right)^2} \\
&=p(1-e\cos(t))
\end{split}
\end{equation}
A similar lengthy calculation shows that: $$|v-F_2|=p(1+e\cos(t))$$
Hence $|v-F_1|+|v-F_2|=p(1-e\cos(t))+p(1+e\cos(t))=2p$
Thus we have shown that the sum of the distances between both foci and a point, $v$, on the ellipse is constant namely $2p$. |
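A quick numerical check of this identity (the semi-axes $p=5$, $q=3$ are arbitrary sample values, not from the problem):

```python
import math

p, q = 5.0, 3.0                          # sample semi-axes, p > q
e = math.sqrt(1 - q**2 / p**2)           # eccentricity
F1, F2 = (e * p, 0.0), (-e * p, 0.0)     # foci at (+-ep, 0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

for t in [0.0, 0.5, 1.7, 3.0, 5.1]:
    v = (p * math.cos(t), q * math.sin(t))   # point on the ellipse
    assert abs(dist(v, F1) + dist(v, F2) - 2 * p) < 1e-9
print("sum of focal distances equals 2p at every sampled point")
```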
If $A+B = I_n$ and $A^2 +B^2 = O_n$ then $A$ and $B$ are invertible and $(A^{-1}+B^{-1})^n = 2^n I_n$ | First, let's show that A is invertible:
$$\begin{align} A+B=I\\B=I-A\\A^2+(I-A)^2=0\\I-2A+2A^2=0\\A(2(I-A))=I\\A^{-1}=2(I-A)\end{align}$$
Similarly, $B^{-1}=2(I-B)$, so $A^{-1}+B^{-1}=2(2I-A-B)=2I$ (because $A+B=I$), so:
$$(A^{-1}+B^{-1})^n=(2I)^n=2^nI.$$ |
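A concrete numerical check: the matrix $A$ below is one hand-picked real solution of $A^2+(I-A)^2=0$ (its eigenvalues are the required $(1\pm i)/2$); it is chosen for illustration only.

```python
import numpy as np

# One real solution of A^2 + (I - A)^2 = 0 (hand-picked for illustration)
A = 0.5 * np.array([[1.0, 1.0],
                    [-1.0, 1.0]])
B = np.eye(2) - A
assert np.allclose(A @ A + B @ B, 0)                     # A^2 + B^2 = 0

S = np.linalg.inv(A) + np.linalg.inv(B)
assert np.allclose(S, 2 * np.eye(2))                     # A^{-1} + B^{-1} = 2I
n = 5
assert np.allclose(np.linalg.matrix_power(S, n), 2**n * np.eye(2))
print("all identities verified for n =", n)
```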
Finding extrema of a continuous, univariate function. | A qualitative approach would be to use David's hints about the size and shape of the graph of $f(x)$, and proceed on a case-by-case basis.
If $a=0$, the graph is a horizontal line.
If $a>0$, the graph of $f$ over all real numbers is a parabola that has vertex at $(b,c)$ and opens upward. Now compare $b$ to $0$ and $1$. If $b < 0$, then $f(b) < f(0) < f(1)$. Likewise, if $b > 1$, then $f(0) > f(1) > f(b)$. If $0 < b < 1$, you need to compare $f(0) = ab^2 + c$ with $f(1) = a(b-1)^2 + c$ and $f(b) = c$.
If $a<0$, the graph of $f$ over all real numbers is a parabola that has vertex at $(b,c)$ and opens downward. You can proceed similarly to the previous case. |
Volume of the solid above the parabolic cylinder $z=1-y^2$ | Note that $x^2+y^2-x=(x-\frac12)^2 +y^2-\frac14=0$. Recenter the circle with $x-\frac12=u$ and integrate the volume over the disk $u^2+y^2<\frac14$:
$$\int_{u^2+y^2<\frac14} [(1-y^2)-( -2x-3y-10)]dudy \\
=\int_{u^2+y^2<\frac14} (2u+3y-y^2 +12)dudy
=\int_{u^2+y^2<\frac14} (12-y^2)dudy \\
=\int_0^{2\pi}\int_0^{1/2} (12-r^2 \sin^2\theta)rdr d\theta =\frac{191\pi}{64}\\
$$ |
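SymPy confirms the final polar integral (a sanity check only):

```python
import sympy as sp

r, theta = sp.symbols('r theta')
V = sp.integrate((12 - r**2 * sp.sin(theta)**2) * r,
                 (r, 0, sp.Rational(1, 2)), (theta, 0, 2 * sp.pi))
print(V)  # 191*pi/64
```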
Equation of a plane given two lines | Your lines are $${\bf l_1}(t)=t(3,-1,1)+(0,1,0)$$ $${\bf l_2}(t)=t(3,-1,1)+(0,1,-2)$$
These are parallel lines through different points. Thus, you may first find a vector $\bf v$ which starts at the first and ends at the second, and then take the cross product of this $\bf v$ with your direction ${\bf w}=(3,-1,1)$ to obtain a normal vector to the plane you seek.
Make a drawing if you cannot visualize what is going on; it usually helps. |
Unbounded and close. | If it is true whenever $D(A)$ is dense it should also be true whenever $D(A)=B$. But when $D(A)=B$ the operator is closed iff it is bounded by Closed Graph Theorem. So any unbounded operator with domain $B$ is a counter-example to your statement. |
Strong limit cardinal - power set operation | $\beth_1=2^{\aleph_0}$ is not a strong limit cardinal. $\beth_0=\aleph_0$ is a strong limit cardinal, as is $\beth_\omega$; see here. |
Find the Maclaurin series of f(x)=(arctan(x)-x)/x^3 | $$\arctan(x)=\sum_{n\geq 0}\frac{(-1)^n}{2n+1} x^{2n+1}\tag{1} $$
$$\arctan(x)-x=\sum_{n\geq 1}\frac{(-1)^n}{2n+1} x^{2n+1}\tag{2} $$
$$\frac{\arctan(x)-x}{x^3}=\sum_{n\geq 1}\frac{(-1)^n}{2n+1} x^{2n-2}=\color{red}{\sum_{n\geq 0}\frac{(-1)^{n+1}}{2n+3}x^{2n}}\tag{3} $$
The radius of convergence (i.e. $1$) is left unchanged by our manipulations. |
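A quick SymPy check of the boxed series (the truncation order is my choice):

```python
import sympy as sp

x = sp.symbols('x')
f = (sp.atan(x) - x) / x**3
s = sp.series(f, x, 0, 7).removeO()          # -1/3 + x^2/5 - x^4/7 + x^6/9
claimed = sum(sp.Rational((-1)**(n + 1), 2 * n + 3) * x**(2 * n)
              for n in range(4))
assert sp.expand(s - claimed) == 0
print(s)
```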
The total number of subsets is $2^n$ for $n$ elements | "Include A?" is stage 1
"Include B?" is stage 2
"Include C?" is stage 3. |
Is this correct? Trying to prove that the space $(C(K,\mathbb{R}^m), \| \cdot \|_{\infty})$ with $K\subseteq \mathbb{R}^n$ compact, is complete. | He is skipping some details. Here is a proof: given $\epsilon >0$ there exists $p$ such that $\|f_k(x)-f_m(x)\| <\epsilon $ for all $k,m \geq p$ and all $x \in K$. In this inequality we can let $m \to \infty$ to get $\|f_k(x)-f(x)\| \leq \epsilon $ for all $k \geq p$ and all $x \in K$, and this proves that $f_n \to f$ in the norm. |
Prove $ I = \int_{-\infty}^\infty dz \frac{1}{(z^2-a^2)^2} \frac{1}{(z-a)^2+b^2} $ diverges | The improper integral of a real function is usually defined for a function $f$ with singularities by splitting the domain at each of $f$'s singularities and taking an improper integral on each of the resulting sections. To be more precise, if we want to integrate $f$ from $a$ to $b$, and it has singularities at $x_1 < x_2 < \ldots < x_n$ (between $a$ and $b$), then we usually define
$$
\int_a^b f(x) \, dx = \int_a^{x_1} f(x) \, dx + \ldots + \int_{x_n}^b f(x) \, dx.
$$
The integral on the LHS is said to converge if and only if all of the (improper) integrals on the RHS do. This is a bit of an abuse of notation, and it's often not made explicit enough that this is what the convention is, so it can often be a source of confusion.
Your example with the integral of $1/x$ is something that gets taught to high school students everywhere, and, while perhaps useful for its simplicity, it just exacerbates this confusion when more rigour is necessary. In fact, if you analyse it by splitting up the domain (as is typically the convention) then we see that the improper integral is not necessarily defined for any pair of endpoints:
$$
\int_{-1}^{1} \frac 1 x \, dx = \int_{-1}^{0} \frac 1 x \, dx + \int_{0}^{1} \frac 1 x \, dx,
$$
and neither of the integrals on the RHS are convergent. Thus it is inaccurate to say that "the integral of $1/x$ with respect to $x$ is $\ln |x| + C$"; this fails when you try to integrate over the singularity. (There are ways to assign values to the integrals that go over 0, such as the Cauchy principal value, but these are not the "default".)
With regards to your original problem, then, we note that the function
$$
f(z) = \frac{1}{(z^2 - a^2)^2} \frac{1}{(z-a)^2 + b^2}
$$
has singularities at $z = \pm a$ and nowhere else, as you observed, so the definition of your integral is
$$
\int_{-\infty}^{\infty} f(z) \, dz = \int_{-\infty}^{-a} f(z) \, dz + \int_{-a}^{a} f(z) \, dz + \int_{a}^{\infty} f(z) \, dz,
$$
and it is not hard to observe that e.g. the middle integral here diverges, because it has positive integrand and
\begin{align*}
\int_{-a}^{a} f(z) \, dz &\geq \frac{1}{4a^2 + b^2} \int_{-a}^{a} \frac{1}{(x^2 - a^2)^2}
\, dx \\
&\geq \frac{1}{4a^2 + b^2} \int_{\max(a-1,-a)}^{a} \frac{1}{(x^2 - a^2)^2} \, dx \\
&\geq \frac{1}{4a^2 + b^2} \int_{\max(a-1,-a)}^{a} \frac{1}{4a^2(a-x)^2} \, dx
\end{align*}
using $(x^2 - a^2)^2 = (a-x)^2(a+x)^2 \leq 4a^2(a-x)^2$ on this interval in the last step. The final integral can easily be seen to diverge by (for instance) explicitly integrating and noticing that the limit that defines the improper integral,
$$
\int_{\max(a-1,-a)}^{a} \frac{dx}{(a-x)^2} = \lim_{c \to a^{-}} \int_{\max(a-1,-a)}^{c} \frac{dx}{(a-x)^2} = \lim_{c \to a^{-}} \left( \frac{1}{a-c} - \frac{1}{a - \max(a-1,-a)} \right),
$$
is divergent. Thus the integral in question is divergent. |
Series convergence with log and exponential function | A few hints:
I. Notice that for $n\ge 1$, if $0\le a_n\le 1$, then $0\le(a_n)^{n+1}\le a_n$. It should be possible from there to show $a_n$ is positive and monotonic decreasing. The ratio test can be applied from there, since we know that $\frac{a_{n+1}}{a_n}=(a_n)^n$.
II. For sufficiently large $n$, the ratio $\frac{n+\ln(n)}{n+10}$ will be larger than one. This means that for large $n$, $a_{n+1}>a_n$ provided the terms are positive. Consider what this means for the series. |
Field Extensions and their dimensions | This is a great question. Both the degree formula for field extensions and Lagrange's theorem are special cases of a Lemma taking place in a symmetric monoidal category. Here is this Lemma for the case of abelian groups.
Lemma: Let $M$ be a free left $S$-module, $R \to S$ a homomorphism of rings, such that $S$ is free as a left $R$-module. Then the left $R$-module $M|_R$ is free. Specifically, if $B$ is an $S$-basis of $M$ and $C$ is an $R$-basis of $S$, then $C \cdot B$ is an $R$-basis of $M|_R$ with cardinality $|C \cdot B| = |C|\cdot |B|$.
Proof: The basis $B$ induces an isomorphism $M \cong S^{\oplus B}$, hence $M|_R \cong (S|_R)^{\oplus B}$, and $C$ induces an isomorphism $S|_R \cong R^{\oplus C}$. Thus, $M|_R \cong (R^{\oplus C})^{\oplus B} \cong R^{\oplus C \times B}$. By construction this maps $(c,b) \in C \times B$ to $c \cdot b \in M$. QED
The degree formula for field extensions is an immediate corollary.
Now notice that the proof of this Lemma is entirely formal. We don't need any elements at all. Therefore we can generalize this Lemma to arbitrary symmetric monoidal categories with coproducts which distribute over $\otimes$. Then $R,S$ are monoid objects, $M$ is a left $S$-module object, which is called free when it is a coproduct of copies of $S$. The same proof as above works.
Let us apply this to the cartesian category of sets ($\otimes=\times$ and $\oplus=\sqcup$). Then $R,S$ are monoids in the usual sense. Let us restrict to the case of two groups $H,G$ equipped with a homomorphism of groups $H \to G$, w.l.o.g. injective. Then $M$ is a $G$-set. By decomposing $M$ into orbits, we find that $M$ is a coproduct of copies of $G$, i.e. it is a free $G$-set. A basis consists of a system of representatives $B$ for the orbits. Similarly, we find that $G$ is a free $H$-set, a basis consists of a system of representatives $C$ for the right cosets of $H$ in $G$. The Lemma tells us that $C \cdot B$ is a system of representatives for the orbits of $M|_H$. In other words:
$$[H/M] = [H/G] \cdot [G/M].$$
Applying this to a group $K$ equipped with an injective homomorphism $G \to K$, we obtain Lagrange's theorem
$$[H/K] = [H/G] \cdot [G/K].$$
Right modules give the corresponding formula for left cosets. |
construct triangle with $\hat C$ and length of the bisector of $\hat C$ and side c | Let $\triangle ABC$ such that $m(\angle ACB)=\gamma$, $S$ is the intersection point between the angle bisector of $\angle ACB$ and side $AB$, $CS=w$, $AB=c$, and $m(\angle CSB)=\theta$.
Using the sine rule in $\triangle ACS$ and $\triangle BCS$, we get:
$$SB=w \frac{\sin{\frac{\gamma}{2}}}{\sin(\theta+ \frac{\gamma}{2})} \quad(1) $$
and
$$AS=w \frac{\sin{\frac{\gamma}{2}}}{\sin(\theta- \frac{\gamma}{2})}. \quad(2) $$
Recalling that $AS+SB=c$ and after some algebra with trigonometric identities, we get:
$$-\frac{c}{w \sin \gamma}\sin^2(\theta)+\sin(\theta)+\frac{c}{2w}(\frac{1}{\sin \gamma}-\frac{1}{\tan \gamma})=0. \quad (3)$$
Solving for $\sin\theta$ the quadratic equation $(3)$, we can easily construct the $\triangle ABC$. |
Decide belonging of an element to an ideal | The ring is k[x,y,z].
I don't think this is correct. You have only found some rationally functions that satisfy the representation.
One approach is to argue based on the degree of the polynomials on both sides. |
Problem on divergence, rotation, flux | It does seem a bit on the ugly side in that the region seems to be now down to one square pyramid with vertex at $(0,0,2)$. I think the $x$-component of $\vec\nabla\times\vec V$ should be $(y+1)e^y$: check it. For part b), I get your answer now that the domain of $w$ has been corrected. For the Jacobian, I get
$$J=\left|\det\begin{bmatrix}\frac{\partial x}{\partial u}&\frac{\partial x}{\partial v}&\frac{\partial x}{\partial w}\\
\frac{\partial y}{\partial u}&\frac{\partial y}{\partial v}&\frac{\partial y}{\partial w}\\
\frac{\partial z}{\partial u}&\frac{\partial z}{\partial v}&\frac{\partial z}{\partial w}\end{bmatrix}\right|
=\left|\det\begin{bmatrix}w&0&u\\0&w&v\\0&0&-2\end{bmatrix}\right|=\left|-2w^2\right|=2w^2$$
Then
$$\begin{align}\int\int_{\partial\Omega}\vec V\cdot d^2\vec A&=\int\int\int_{\Omega}\vec\nabla\cdot\vec Vd^3\text{volume}\\
&=\int_0^1\int_{-2}^2\int_{-2}^2w^2u^2\cdot2w^2\,du\,dv\,dw\\
&=2\cdot\frac{2}{3}\cdot2^3\cdot2\cdot2\cdot\frac{1}{5}\cdot1^5=\frac{128}{15}\end{align}$$
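The iterated integral for part b) can be checked symbolically:

```python
import sympy as sp

u, v, w = sp.symbols('u v w')
jac = 2 * w**2                        # |J| from the change of variables
val = sp.integrate(w**2 * u**2 * jac, (u, -2, 2), (v, -2, 2), (w, 0, 1))
print(val)  # 128/15
```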
For part c) I think you are supposed to use Stokes' theorem to get
$$\oint_{\partial F_c}\vec V\cdot d\vec r=\int\int_{F_c}\vec\nabla\times\vec V\cdot d^2\vec A$$
$w=0$ is above the pyramid, $c=y=vw\le2w$ so $w=\frac c2$ is the upper surface of the pyramid, and $w=1$ is the base of the pyramid. We need $\vec r$ along the surface:
$$\vec r=\langle x,y,z\rangle=\langle uw,vw,2-2w\rangle=\langle uw,c,2-2w\rangle$$
$$d\vec r=\langle w,0,0\rangle\,du+\langle u,0,-2\rangle\,dw$$
Then the vector areal element is
$$\begin{align}d^2\vec A&=\langle w,0,0\rangle\,du\times\langle u,0,-2\rangle\,dw\\
&=\pm\langle0,2w,0\rangle\,du\,dw=\langle0,2w,0\rangle\,du\,dw\end{align}$$
where the sign is chosen so that it points away from the $z$-axis, assuming $c>0$. So there are $2$ integrals to evaluate:
$$\int_{\frac c2}^1\int_{-2}^2\langle c^2(2-2w),cu^2w^2,ce^c\rangle\cdot\langle0,2w,0\rangle\,du\,dw=2c\cdot\frac{1}{4}\left(1-\frac{c^4}{16}\right)\cdot\frac{2}{3}\cdot2^3$$
$$\int_{\frac c2}^1\int_{-2}^2\langle (c+1)e^c,c^2,2cuw-2c(2-2w)\rangle\cdot\langle0,2w,0\rangle\,du\,dw=2c^2\cdot\frac{1}{2}\left(1-\frac{c^2}{4}\right)\cdot4$$
The condition we are trying to satisfy is
$$\frac16c(16-c^4)=c^2(4-c^2)$$
So either $c=0$, $c^2-4=0$, or $\frac16(4+c^2)=c$. The last has the solutions $c=3\pm\sqrt5$ of which only $c=3-\sqrt5$ is valid, and $c=\pm2$ is degenerate, so it looks like the only thing left is $c=0$, which seems rather odd because how can you point away from the $z$-axis when $y=0$? When $c=0$ neither vector field has a nonzero $y$-component, so I guess that's a reasonable solution. I suppose it can be seen that $c\ge0$ because otherwise the two vector fields have $y$-components that disagree in sign. |
How to generalize Newman's simplification of O-Tauberian theorem? | I have found a work on arxiv that proves the very similar theorem and here is the link of it:
https://arxiv.org/abs/1406.0427
It is Ryo-Kato's generalization of Kable's generalization of Wiener-Ikehara Tauberian theorem. Kable states for $k$ of form $\frac{1}{n}$, and Ryo-Kato states it for all rational $k$. The real $k$ is still a mystery, though. |
On the hilbert polynomial of a coherent sheaf over a projective scheme | If $d = 0$, then the right hand side is just
$$\chi(E) \cdot \binom{m-1}{0} = \chi(E).$$
This is consistent with the fact that if $E$ is supported on finitely many points, then $E = \bigoplus_{P \in X} E_P$, i.e. it is the finite direct sum of skyscraper sheaves of its stalks. Twisting does not change the stalks, so $E(m) = E$ for all $m \in \mathbb{Z}$.
This is true because if $D$ is any effective Cartier divisor, then $I_D = \mathcal{O}(-D)$. In this case, $H$ is a hyperplane section, so $I_H = \mathcal{O}(-1)$. Tensoring the defining short exact sequence for $H$ $$ 0 \to \mathcal{O}_X(-1) \to \mathcal{O}_X \to \mathcal{O}_H \to 0 $$
with $E$ leads to the sequence
$$0 \to E(-1) \to E \to E|_H \to 0.$$
This sequence is exact on the left, because $H$ is regular with respect to $E$.
The regularity implies that $H$ contains none of the associated points of $E$, see the paragraph directly under Definition 1.1.11. But this means that $H$ misses all generic points of the irreducible components of the support of $E$, see stacks-project, associated primes. So $H$ does not contain any component of $\operatorname{Supp}(E)$, hence the dimension of $E$ drops by $1$.
We have $$f(m) = \chi(E(m)) - \sum_{i = 0}^{d} \chi(E|_{\bigcap_{j \leq i}H_j})\binom{m + i - 1}{i}.$$
For $m = 0$ the binomial coefficients become $\binom{i-1}{i}$, which equals zero for $i \geq 1$ and equals $1$ for $i = 0$ (at least I hope this is the intention, as I don't see any other way the statement could be correct).
So $f(0) = \chi(E) - \chi(E) = 0$. |
Proof a corollary on Hahn-Banach theorem | Recall the quotient space $E/F$, whose elements are of the form
$$[x]:=x+F=\{x+y\in E\,:\,y\in F\}$$
where $x\in E$. This is always a vector space, but given a normed structure on $E$ it is not necessarily obvious how to introduce a related norm on $E/F$. However, if $F$ is closed then we can define a norm as
$$\|[x]\|_{E/F}:=\inf_{y\in F}\|x+y\|$$
It is not immediately obvious that this is even a well-defined map, so we shall demonstrate this. Suppose $x,y\in E$ are such that $[x]=[y]$. Then there exists $z\in F$ such that $y=x+z$. Observe that
$$\{\|x+w\|\,:\,w\in F\}=\{\|x+z+w\|\,:\,w\in F\}$$
and hence $\|[x]\|_{E/F}=\|[y]\|_{E/F}$, so the map is well-defined. It should be reasonably obvious that $\|\alpha[x]\|_{E/F}=|\alpha|\|[x]\|_{E/F}$, and the triangle inequality is also not too difficult: given $x,y\in E$ and $u,v\in F$ we have
$$\|[x+y]\|_{E/F}\le\|x+y+u+v\|\le\|x+u\|+\|y+v\|,$$
so taking infima over all $u,v\in F$ we get $\|[x+y]\|_{E/F}\le\|[x]\|_{E/F}+\|[y]\|_{E/F}$. To show $\|\cdot\|_{E/F}$ is indeed a norm it suffices to show that if $\|[x]\|_{E/F}=0$ then $[x]=[0]$. Suppose $\|[x]\|_{E/F}=0$ and let $(y_n)$ be a sequence in $F$ such that $\|x+y_n\|\to0$. Then $y_n\to-x$ as $n\to\infty$, so since $F$ is closed we have $x\in F$. In particular, this implies $[x]=[0]$. Hence $\|\cdot\|_{E/F}$ is indeed a norm.
Now use the most common corollary to Hahn-Banach: given a normed vector space $X$ and $x\in X$, $x\ne0$ there exists $x^*\in X^*$ such that $\|x^*\|=1$ and $\langle x^*,x\rangle=\|x\|$. Let $[x_0]\in E/F$, $[x_0]\ne[0]$, so by the stated result there is $f\in(E/F)^*$ such that $\|f\|=1$ and $\langle f,[x_0]\rangle=\|[x_0]\|_{E/F}$.
Define $\phi:E\rightarrow\mathbb{K}$ by $\phi(x)=\langle f,[x]\rangle$. Clearly $\phi$ is linear, and $|\phi(x)|\le\|f\|\|[x]\|_{E/F}\le\|f\|\|x\|$, so $\phi\in E^*$. If $x\in F$, then $\phi(x)=\langle f,[x]\rangle=\langle f,[0]\rangle=0$. However $\phi\ne0$, since $\phi(x_0)=\|[x_0]\|_{E/F}>0$. This completes our proof. |
Discrete monotone decreasing distribution | In economics and related disciplines, it is common to use a discount factor to evaluate streams of profits etc (usually to discount future values rather than past values, but the idea is the same). For example if we have discount factor $
\beta\in(0,1)$, then the "present value" $V_T$ at $T$ of the stream of daily values $v_1,\ldots,v_T$ is given by
$$V_T=\beta^{T-1}v_1+\beta^{T-2}v_2+\cdots+\beta v_{T-1}+v_T=\sum_{t=1}^{T}\beta^{T-t} v_T.$$
This means that
$$V_T=v_T+\beta V_{T-1}.$$
The choice of $\beta$ is arbitrary. It depends upon how much you care about past values relative to present values.
If you want, you can normalize $V_T$. Suppose the daily values are in $[0,\bar{v}]$. Then
$$0\leq V_T\leq \overline{v}(1+\beta+\beta^2+\cdots+\beta^{T-1})=\frac{\overline{v}(1-\beta^T)}{1-\beta}$$
So if we let $$\overline{V}_T=\frac{1-\beta}{\overline{v}(1-\beta^T)}V_T$$
then $\overline{V}_T\in[0,1]$. |
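The recursion $V_T=v_T+\beta V_{T-1}$ matches the direct sum; a small check with sample values of my choosing:

```python
beta = 0.9
values = [3.0, 1.0, 4.0, 1.0, 5.0]      # sample daily values v_1, ..., v_T

# direct sum: V_T = sum_t beta^(T-t) v_t
T = len(values)
V_direct = sum(beta**(T - t) * v for t, v in enumerate(values, start=1))

# recursive form: V_t = v_t + beta * V_{t-1}, starting from V_0 = 0
V = 0.0
for v in values:
    V = v + beta * V

assert abs(V - V_direct) < 1e-12
print(V)
```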
A question about quotient under group action | Consider the natural map between $(X/H)$ and $(X/G)$ given by sending the orbit of a point $x$ under $H$ to the orbit of $x$ under $G$. This map is continuous, because the preimages of open sets in $(X/G)$ are unions of open sets in $(X/H)$. It is constant on each $G$-orbit (which is a $G/H$-orbit), so it induces a continuous bijection between the quotient $(X/H)/(G/H)$ and $(X/G)$.
Now do the same thing, sending each point in $X$ to its image in $(X/H)$ and then to its image in $(X/H)/(G/H)$. The preimage of an open set in $(X/H)/(G/H)$ is a union of open sets in $(X/H)$, and the preimages of these are open sets in $X$. Also, this map is constant on $G$-orbits. So we get a continuous bijection between $(X/H)/(G/H)$ and $(X/G)$ which is the inverse of the previous map, giving a homeomorphism. |
Cyclic quadrilateral with equal area and perimeter | Any cyclic quadrilateral (a quadrilateral that can be inscribed in a circle) can be scaled to have equal perimeter and area. Assume that in a unit circle an inscribed quadrilateral has area $A$ and perimeter $P$. Then scaling to a circle of radius $r$ gives a similar quadrilateral of area $r^2A$ and perimeter $rP$. Setting $r = P/A$ makes these equal.
The answer then comes down to noticing that there are infinitely many (dissimilar) cyclic quadrilaterals in a unit circle. Indeed we can fix one side of the quadrilateral to be a diameter and still obtain infinitely many dissimilar cyclic quadrilaterals by varying the length and position of the "opposite" side.
Added: There is no upper bound on the area (or perimeter) of a cyclic quadrilateral having equal area and perimeter. To see this we may restrict attention to rectangles, which are cyclic quadrilaterals, and note that "area equals perimeter" can be stated in terms of width $x \gt 0$ and height $y \gt 0$:
$$ xy = 2(x+y) $$
$$ (x-2)(y-2) = 4 $$
We recognize this as the equation of a hyperbola, having asymptotes $x=2,y=2$, with points on its upper branch in the first quadrant arbitrarily far from the origin. That is, take $x$ to be as big as we want, and solve for $y = 2 + 4/(x-2)$. Now the area and perimeter are equal, and this quantity exceeds $2x$.
On the other hand there is a minimum area (resp. perimeter). From the analysis above we should minimize $P^2/A$ among all cyclic quadrilaterals, which is tantamount to maximizing the area among cyclic quadrilaterals with specified perimeter. The isoperimetric inequality for quadrilaterals says that among quadrilaterals with the same perimeter, the one with the greatest area is regular, i.e. a square, which is of course cyclic. Thus $P^2/A \ge 16$, and we attain this lower bound by taking a square with side 4.
Also, among quadrilaterals with specified side lengths, the maximum area is that of a cyclic quadrilateral. |
counting symmetric matrices over a finite field $\mathbb{F}_q$ | Note that a symmetric matrix is completely determined by the elements on and below the main diagonal (or, equivalently, on and above the main diagonal). More technically, if $A = [a_{ij}]$ is a square matrix of order $n$, then one can freely choose the elements $a_{ij}$ for which $j \leq i$; the remaining elements (for which $j > i$) can then be found by symmetry, as $a_{ij} = a_{ji}$.
How many such positions "on or below the main diagonal" are there? We need to count the number of tuples of natural numbers $(i, j)$ for which $1 \leq j \leq i \leq n$. Fixing $i$, there are exactly $i$ possibilities for $j$: 1, 2, ..., $i$. We get for the total number of positions:
$$
\sum_{i = 1}^{n}i = \frac{n(n+1)}{2} = {n+1 \choose 2}
$$
I think the rest should be clear, but let me know if it's not. |
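The resulting count $q^{n(n+1)/2}$ can be verified by brute force over $\mathbb{Z}/q\mathbb{Z}$ for small cases (the sample pairs $(n,q)$ are my choice):

```python
from itertools import product

def count_symmetric(n, q):
    """Brute-force count of symmetric n-by-n matrices with entries in Z/qZ."""
    count = 0
    for entries in product(range(q), repeat=n * n):
        M = [entries[i * n:(i + 1) * n] for i in range(n)]
        if all(M[i][j] == M[j][i] for i in range(n) for j in range(i)):
            count += 1
    return count

for n, q in [(2, 2), (2, 3), (3, 2)]:
    assert count_symmetric(n, q) == q ** (n * (n + 1) // 2)
print("brute force agrees with q^(n(n+1)/2)")
```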
In any triangle $ABC$, $\frac{\sin(B-C)}{\sin(B+C)}=\frac{b^2-c^2}{a^2}$. | Note that
\begin{align*}
\frac{\sin(B-C)}{\sin (B+C)}&=\frac{\sin B \cos C-\cos B \sin C}{\sin B \cos C+\cos B \sin C}\\
&=\frac{\frac{\sin B}{\sin C} \cos C-\cos B}{\frac{\sin B}{\sin C} \cos C+\cos B}\\
&=\frac{\frac{b}{c} \frac{c^2-a^2-b^2}{2ab}-\frac{b^2-a^2-c^2}{2ac}}{\frac{b}{c} \frac{c^2-a^2-b^2}{2ab}+\frac{b^2-a^2-c^2}{2ac}}\\
&=\frac{b^2-c^2}{a^2}.
\end{align*} |
Calculation of Klein bottle Integral Simplicial Homology the $H_{1}$ case | I can choose the simplicial complex of a square to be :
$C_0 = \mathbb Z \langle a_1, a_2, a_3, a_4 \rangle$
$C_1 = \mathbb Z \langle x_1, x_2, x_3, x_4, x_5\rangle$
$C_2 = \mathbb Z \langle \Delta_1,\Delta_2 \rangle$
according to this drawing:
If I glue $x_1$ to $x_4$ and $x_2$ to $x_5$ (hence also $a_1$ to $a_2$ to $a_3$ to $a_4$), this induces a simplicial complex
$C_0 = \mathbb Z \langle a_1 \rangle$
$C_1 = \mathbb Z \langle x_1, x_2, x_3\rangle$
$C_2 = \mathbb Z \langle \Delta_1,\Delta_2 \rangle$
for the Klein bottle $K$.
Now the boundaries are:
$\partial_0 a_1 = 0$
$\partial_1 x_1 = 0$
$\partial_1 x_2 = 0$
$\partial_1 x_3 = 0$
$\partial_2 \Delta_1 = x_1 + x_2 + x_3$
$\partial_2 \Delta_2 = - x_4 + x_5 - x_3 = -x_1 + x_2 - x_3$
We can now calculate explicitly $H_1(K)$ :
$$
H_1(K) = \frac{\ker \partial_1}{\operatorname{im} \partial_2}
\\ = \frac{\mathbb Z \langle x_1, x_2, x_3\rangle}{\mathbb Z \langle x_1 + x_2 + x_3,\ -x_1 + x_2 - x_3\rangle}
\\ = \mathbb Z \langle x_1, x_2, x_3 \mid x_1 + x_2 + x_3,\ -x_1 + x_2 - x_3\rangle
\\ = \mathbb Z \langle x_1, x_2 \mid 2x_2 = 0\rangle \quad (\text{substituting } x_3 = -x_1 - x_2 \text{ and adding the two relations})
\\ \cong \mathbb Z \oplus \mathbb Z_2
$$ |
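As a cross-check (a presentational aside, not part of the answer), the Smith normal form of the boundary matrix recovers the invariant factors of $\operatorname{im}\partial_2\subset\mathbb Z^3$; a zero column pads the matrix to square form:

```python
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

# Columns: boundaries of Delta_1 and Delta_2 in the basis (x1, x2, x3),
# plus a zero column to make the matrix square
D2 = sp.Matrix([[1, -1, 0],
                [1,  1, 0],
                [1, -1, 0]])
snf = smith_normal_form(D2, domain=sp.ZZ)
print(snf)  # invariant factors 1, 2 (and a 0), so H_1 = Z/1 + Z/2 + Z
```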
Some questions about highly composite numbers | Answer to the second question:
Suppose that $n>1680$ is highly composite and is not a multiple of $9$. Then:
$$n=2^r\cdot 3\cdot 5\cdot\ldots\cdot p_k$$
where $p_k\ge11$; that is, $k\ge 5$.
Then $t(n)=(r+1)2^{k-1}$. Define
$$n'=\frac{9n}{p_k}<n$$
Then $$t(n')=(r+1)4\cdot2^{k-3}=t(n)$$
which contradicts the fact that $n$ is highly composite. |
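The claim can be spot-checked by brute force (the search bound $30000$ is an arbitrary choice of mine, so this is evidence, not a proof): every highly composite number in $(1680, 30000]$ is a multiple of $9$.

```python
def num_divisors(n):
    d, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            d += 1 if i * i == n else 2
        i += 1
    return d

# highly composite numbers are the record-setters for the divisor count
hcn, best = [], 0
for n in range(1, 30001):
    t = num_divisors(n)
    if t > best:
        hcn.append(n)
        best = t

large = [n for n in hcn if n > 1680]
print(large)
print(all(n % 9 == 0 for n in large))  # True
```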
Prove a set is not connected | Proof by contrapositive argument.
First show that if $C$ is a connected subspace of $X$ that intersects both $A$ and $X\setminus A$ for some subset $A$ of $X$, then $C$ intersects the boundary of $A$. Then
prove that if $M$ is connected and $A$ is a non-empty proper subset of $M$, then the boundary of $A$ is non-empty.
Since $M$ is connected and $\emptyset\neq A\neq M $, we have $M$ intersects both $A$ and $M\setminus A$ and so $M$ intersects the boundary of $A$, i.e. $M\cap \operatorname{Bd}(A)\neq \emptyset$. Hence $\operatorname{Bd}(A)\neq \emptyset$. |
translate linear coordinates to circular space | WLOG I assume that $P_0=(-1,0)$ and $P_4=(1,0)$ (if not, a translation+rotation+scaling will achieve that).
Now consider a circle of radius $r$ passing through the two given points. By Pythagoras, its center is at $(0,-h)$ with $h:=\sqrt{r^2-1}$, and the half-aperture angle is $\alpha=\arcsin\dfrac1r$.
Using polar coordinates with the pole at $(0,-h)$, you will map the abscissas to angles ($x\to\theta=\dfrac\pi2+x\alpha$) and the ordinates to moduli ($y\to\rho=r+y$), and
$$X=(r+y)\cos(\frac\pi2+x\alpha)=(r+y)\sin(x\alpha),\\Y=(r+y)\sin(\frac\pi2+x\alpha)-h=(r+y)\cos(x\alpha)-h.$$
This way the origin $(0, 0)$ maps to the Cartesian coordinates $(0,r-h)$ and the endpoints $(\pm1,0)$ to themselves.
Note that for very large $r$, $\alpha\approx\dfrac1r$ and $h\approx r$ so that
$$X\approx(r+y)\frac xr\approx x,$$
$$Y\approx (r+y)\cdot1-h\approx y.$$ |
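A quick check of the mapping (with an arbitrary sample radius $r=3$): the endpoints $(\pm1,0)$ are fixed and the origin goes to $(0,r-h)$.

```python
import math

r = 3.0                                  # sample radius (any r >= 1 works)
h = math.sqrt(r**2 - 1)
alpha = math.asin(1 / r)                 # half-aperture angle

def to_arc(x, y):
    """Map linear coordinates (x, y) to the circular coordinates (X, Y)."""
    return ((r + y) * math.sin(x * alpha),
            (r + y) * math.cos(x * alpha) - h)

assert all(abs(a - b) < 1e-12 for a, b in zip(to_arc(1, 0), (1.0, 0.0)))
assert all(abs(a - b) < 1e-12 for a, b in zip(to_arc(-1, 0), (-1.0, 0.0)))
assert all(abs(a - b) < 1e-12 for a, b in zip(to_arc(0, 0), (0.0, r - h)))
print("endpoints fixed; origin maps to the apex (0, r - h)")
```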
Set of elements yielding Reals when squared | Another approach is to convert the complex number to its polar form.
Consider a nonzero complex number $z=\sqrt{m}\,e^{\theta i}$ (with $m=|z|^2>0$), so that $z^2=m\cdot e^{2\theta i}$:
$$\begin{align}
& \Im\left(z^2\right)=0 \\
& \Im\left(m\cdot e^{2\theta i}\right)=0 \\
& 2m\cdot\cos(\theta)\sin(\theta)=0 \\
& \cos(\theta)\sin(\theta)=0 \\
\end{align}$$
This means that $\cos(\theta)=0$ or $\sin(\theta)=0$:
$$\begin{align}
& \cos(\theta)=0 \\
\implies& \theta=\frac{\pi}{2}+\pi n,\,n\in\mathbb{Z} \\
& \sin(\theta)=0 \\
\implies& \theta=\pi n,\,n\in\mathbb{Z} \\
\end{align}$$
This means that:
$$\theta=\frac{\pi}{2}n,\,n\in\mathbb{Z}$$
Therefore, $S$ is comprised only of numbers that form an angle measuring a multiple of $90^{\circ}$ on the complex plane. In other words, numbers on the real and imaginary axes of the complex plane. |
Inequality $\left(1-\frac{x(1-x)+y(1-y)}{1-x+1-y}\right)^2+(1-\frac{x+y}{2})^2\geq (1-x)^2+(1-y)^2$ | For $x+y\neq2$ by AM-GM we obtain:$$\left(1-\frac{x(1-x)+y(1-y)}{1-x+1-y}\right)^2+\left(1-\frac{x+y}{2}\right)^2=$$
$$=\left(\frac{(1-x)^2+(1-y)^2}{2-x-y}\right)^2+\left(\frac{2-x-y}{2}\right)^2\geq$$
$$\geq2\sqrt{\left(\frac{(1-x)^2+(1-y)^2}{2-x-y}\right)^2\left(\frac{2-x-y}{2}\right)^2}= (1-x)^2+(1-y)^2$$ |
solve $x^2 \equiv 24 \pmod {60}$ | You were correct.
$$x^2\equiv 24\pmod{\! 60}\iff \begin{cases}x^2\equiv 24\equiv 0\pmod{\! 3}\\ x^2\equiv 24\equiv 0\pmod{\! 4}\\ x^2\equiv 24\equiv 4\pmod{\! 5}\end{cases}$$
$$\iff \begin{cases}x\equiv 0\pmod{\! 3}\\ x\equiv 0\pmod{\! 2}\\ x\equiv \pm 2\pmod{\! 5}\end{cases}$$
If and only if at least one of the two cases holds:
$1)$ $\ x\equiv 0\pmod{\! 6},\ x\equiv 2\pmod{\! 5}$
$2)$ $\ x\equiv 0\pmod{\! 6},\ x\equiv -2\pmod{\! 5}$
You can use Chinese Remainder theorem as follows (when I create new variables, they're integers):
$$x\equiv 0\pmod{\! 6}\iff x=6k$$
$$1)\ \ \ x\equiv 2\pmod{\! 5}\iff \color{#00F}6k\equiv \color{#00F}1k\equiv 2\pmod{\! 5}$$
$x=6(5n+2)=30n+12$.
$$2)\ \ \ x\equiv 3\pmod{\! 5}\iff \color{#00F}6k\equiv \color{#00F}1k\equiv 3\pmod{\! 5}$$
$x=6(5n+3)=30n+18$.
Another way you can use CRT (which is basically just finding an $x$ that works in $[0,30)$):
$1)\ \ \ (x\equiv 0\equiv 12\pmod{\! 6}$ and $x\equiv 2\equiv 12\pmod{\! 5})\iff x\equiv 12\pmod{\! 6\cdot 5},$
because (since $(6,5)=1$):
$$6,5\mid x-12\iff 6\cdot 5\mid x-12$$
Using this, in case $2)$ in the same way you find that $18$ works ($18\equiv 0\pmod{\! 6}$ and $18\equiv 3\pmod{\! 5}$).
So you have the congruence holds iff $x=30m\pm 12$ for some $m\in\Bbb Z$. |
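Brute force over one period $\{0,\dots,59\}$ confirms the solution set:

```python
solutions = sorted(x for x in range(60) if (x * x) % 60 == 24)
print(solutions)  # [12, 18, 42, 48]
# i.e. exactly the classes 30m + 12 and 30m + 18 = 30(m+1) - 12
assert all(x % 30 in (12, 18) for x in solutions)
```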
An inequality with respect to sum of power of prime factors | The above inequality is true for any $x \ge 1$. We only need the following lemma:
$$\forall a,b \in \mathbb N\setminus\{1\}: a^x+b^x\le (ab)^x$$
which is proven by observing:
$$\frac 1{a^x}+\frac1{b^x}\le \frac1a+\frac1b\le\frac12+\frac12=1$$
and multiplying the result by $(ab)^x$ on both sides. This proves the inequality after strong induction on $n$, i.e. the number of prime divisors (including multiplicities), as follows:
The case $n=1$ is trivial and the case $n=2$ follows from the lemma.
Suppose the result holds for $n=k$. Consider $L = \prod_{i=1}^{k+1} p_i$.
-Suppose $q_i \ne L\ \forall i$, and choose $q_j \ne 1$.
Let $S$ be a subset of $\{1, \dots, n\}$ such that $\prod_{i \in S} p_i = q_j$.
By induction hypothesis, $\sum_{i \in S}p_i^x \le q_j^x$ and $\sum_{i \notin S} p_i^x \le \sum_{i\ne j} q_i^x$.
Hence $\sum_{i=1}^n p_i^x \le q_j^x+\sum_{i\ne j} q_i^x =\sum_{i=1}^m q_i^x$.
-The case $q_j=L$ follows easily from induction as well:
$$\sum_{i=1}^m q_i^x \ge L^x \ge p_1^x + \left(\frac L {p_1}\right)^x \ge p_1^x + p_2^x + \left(\frac L {p_1p_2}\right)^x \ge \dots \ge \sum_{i=1}^n p_i^x$$
This concludes the induction step.
The inequality is not necessarily true for $0<x<1$. For $n>m$, when $x \to 0$:
$$\sum_{i=1}^n p_i^x\to n, \sum_{i=1}^m q_i^x\to m$$
hence eventually the inequality fails. For example, we have $2^{1/2} + 2^{1/2} > 4^{1/2}$. |
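As a sanity check (not part of the proof), here is a small numeric experiment; the groupings of $60=2\cdot2\cdot3\cdot5$ below are chosen by hand, not exhaustively:

```python
# p_i: prime factorization with multiplicity; q_i: some hand-picked groupings.
primes = [2, 2, 3, 5]
groupings = [[60], [4, 15], [2, 30], [2, 2, 15], [12, 5], [2, 6, 5]]

def power_sum(nums, x):
    return sum(n ** x for n in nums)

x = 1.7  # any exponent >= 1 should satisfy the inequality
assert all(power_sum(primes, x) <= power_sum(g, x) for g in groupings)

# The failure for 0 < x < 1 noted above: 2^(1/2) + 2^(1/2) > 4^(1/2).
assert power_sum([2, 2], 0.5) > power_sum([4], 0.5)
```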
Minimizing the length of a pipeline between cities | You are looking for the Fermat point, $M(x)$, of the triangle whose vertices are $A=(6,3)$, $B=(0,4)$, and $C=(x,0)$, and trying to minimize the value
$$d(x)=|AM|+|BM|+|CM|$$
According to this calculation, the angles between $\overline{AM},\overline{BM},$ and $\overline{CM}$ are all $\frac{2\pi}{3}$. Appealing to intuition, we see that $d(x)$ will be minimized when $M(x)=(x,y)$, that is, when $M$ sits directly above $C=(x,0)$. Using these two facts, we see that the Fermat point $M$ minimizing $d$ will be the intersection of the following two lines:
$$y=-\frac{x}{\sqrt{3}}+4\\y=\frac{x-6}{\sqrt{3}}+3$$
This is because these lines pass through the known points $A$ and $B$, and intersect each other and any vertical line at angles of $\frac{2\pi}{3}$. Their intersection occurs at $M=(3+\frac{\sqrt{3}}{2},\frac{7}{2}-\sqrt{3})$. Hence $C=(3+\frac{\sqrt{3}}{2},0)$, and a calculation shows that the minimum length of piping required is then
$$d(3+\frac{\sqrt{3}}{2})=\frac{7}{2}+3\sqrt{3}$$
This problem is related to the Motorway problem, which can be solved using soap bubbles, as is discussed in this entertaining video. |
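Not in the original answer: a brute-force numeric check of the closed form, minimizing $|AM|+|BM|+|CM|$ over junction positions $M=(m_x,m_y)$ with $C=(m_x,0)$ directly below:

```python
import math

A, B = (6.0, 3.0), (0.0, 4.0)

def cost(mx, my):
    # junction at M = (mx, my); city C = (mx, 0) sits directly below M
    return math.hypot(A[0] - mx, A[1] - my) + math.hypot(B[0] - mx, B[1] - my) + my

best = min(((cost(mx / 100, my / 100), mx / 100, my / 100)
            for mx in range(200, 601)      # 2.00 <= mx <= 6.00
            for my in range(0, 501)),      # 0.00 <= my <= 5.00
           key=lambda t: t[0])

print(best)  # cost ≈ 7/2 + 3*sqrt(3) ≈ 8.696 at M ≈ (3.866, 1.768)
```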
Show that the singleton set is open in a finite metric space. | Define $r(x) = \min \{d(x,y): y \in X, y \neq x\}$. This is a minimum of finitely many strictly positive numbers (as all $d(x,y) > 0$ when $x \neq y$). So $r(x) > 0$.
Suppose $y \in B(x,r(x))$ and $y \neq x$. Then by definition of being in the ball $d(x,y) < r(x)$ but $r(x) \le d(x,y)$ by definition of $r(x)$. Contradiction. So $B(x, r(x)) = \{x\}$ and the latter set is open. |
Let $k = (a+b,a^2+b^2-ab)$. If $(a,b)=1$ then $k = 1$ or $k=3$. | Here is a simpler proof that makes it clear why that cannot occur.
${\rm mod}\ k\!:\ b\!+\!a\equiv 0\,\Rightarrow\, b\equiv -a\,\Rightarrow\, 0\equiv a^2\!+b^2\!-ab\equiv 3a^2,\, $ so $\ \color{#0a0}{k\mid 3a^2}$
But $\ k\mid a\!+\!b\,\Rightarrow\, \color{#c00}{(k,a)} = (k,a,a\!+\!b) = (k,a,b) \color{#c00}{= 1},\,$ by $\ (a,b)= 1$
Therefore, by Euclid's Lemma, $\ \color{#c00}{(k,a)=1},\ \color{#0a0}{k\mid 3a^2}\,\Rightarrow\, k\mid 3\quad $ QED |
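The conclusion is easy to test exhaustively for small coprime pairs (a quick sketch, not part of the proof):

```python
from math import gcd

# For all coprime pairs (a, b) with 1 <= a, b <= 200,
# gcd(a+b, a^2+b^2-ab) is always 1 or 3 -- and both values occur.
seen = set()
for a in range(1, 201):
    for b in range(1, 201):
        if gcd(a, b) == 1:
            k = gcd(a + b, a * a + b * b - a * b)
            assert k in (1, 3), (a, b, k)
            seen.add(k)
print(seen)  # {1, 3}
```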
How many linear transformations $T$ from $V_F$ to $V_F$ are there such that $W=\operatorname{im}T$ or $W=\ker T$? ($V_F$ and $F$ finite) | Pick a basis for $V_F$. Then $T$ is completely determined by where it sends its basis vectors.
If the dimension of $V_F$ is $n$ and the dimension of $W$ is $m$, pick a basis for $V_F$ including a basis for $W$. Then for the kernel of $T$ to be $W$, it needs to send those $m$ basis vectors to $0$ and the other $n - m$ vectors to a linearly independent set of vectors.
Say $n - m = k$, so we want to find how many ways there are to pick vectors $v_1,v_2,\ldots,v_k$ that are linearly independent. If the size of $F$ is $q$, then there are $q^n - 1$ choices for $v_1$ (anything but the origin). Then, the span of $v_1$ consists of $q$ points, so $v_2$ can be any of $q^n - q$ vectors. Similarly $v_3$ can be any of $q^n - q^2$. In general, the total number of linear transformations having $W$ as kernel is:
$$\prod_{i = 0}^{k - 1} (q^n - q^i)$$
For $W$ as the image, we do two things: First, we have to choose a $k-$dimensional subspace to be the kernel. We do this in essentially the same way: We pick $v_1,v_2,\ldots,v_k$ linearly independent, to be the basis of our subspace. Then we divide by the number of isomorphisms of our subspace. We can count this as simply the number of maps from this space to itself with trivial kernel, so the number of $k-$dimensional subspaces is:
$$\frac{\prod_{i = 0}^{k - 1}(q^n - q^i)}{\prod_{i = 0}^{k - 1}(q^k - q^i)}$$
Once we have our kernel picked out, pick a basis for the remainder and map those to some basis of $W$. This can be done in the same number of ways as there are isomorphisms of $W$, so the final number of maps with $W$ as an image:
$$\frac{\prod_{i = 0}^{k - 1}(q^n - q^i)\prod_{i = 0}^{m - 1}(q^m - q^i)}{\prod_{i = 0}^{k - 1}(q^k - q^i)}$$
A small note on these formulas: They don't make a ton of sense when $m = 0$ or $m = n$, when you should replace a product from $i = 0$ to $-1$ with the number $1$. |
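These counts can be verified by brute force for a small case; the sketch below (not from the original answer) takes $q=2$, $n=3$, $W=\operatorname{span}\{e_1\}$, so $m=1$ and $k=2$:

```python
from itertools import product

q, n, m = 2, 3, 1
k = n - m
vecs = list(product(range(q), repeat=n))

def image_of(cols, v):
    # linear map sending basis vector e_j to cols[j]
    return tuple(sum(c[i] * x for c, x in zip(cols, v)) % q for i in range(n))

W = {(0, 0, 0), (1, 0, 0)}  # the fixed 1-dimensional subspace span{e_1}

maps = list(product(vecs, repeat=n))  # every linear map, as a tuple of columns
ker_count = sum(1 for cols in maps
                if {v for v in vecs if image_of(cols, v) == (0,) * n} == W)
im_count = sum(1 for cols in maps if {image_of(cols, v) for v in vecs} == W)

ker_formula = (q**n - 1) * (q**n - q)                 # prod_{i<k} (q^n - q^i)
subspaces = ker_formula // ((q**k - 1) * (q**k - q))  # number of k-dim subspaces
im_formula = subspaces * (q**m - 1)                   # times |GL(W)| for m = 1

print(ker_count, im_count)  # 42 7
assert ker_count == ker_formula and im_count == im_formula
```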
Show that $\Bbb{Z}[X] /\langle2,X^2+1\rangle $ is a ring with 4 elements but it is not isomorphic with $\Bbb{Z}_2 \times \Bbb{Z}_2$ | Doing the computation with quotient rings (rather than with the ideal) is often easier: ${\Bbb Z}[X]/(2,X^2+1) \cong {\Bbb F}_2[X]/(X^2+1) = {\Bbb F}_2[X]/((X+1)^2) \cong {\Bbb F_2}[X]/(X^2)$. Writing $\epsilon$ for the residue class of $X$, this ring has elements $0, 1, \epsilon, \epsilon + 1$ and it has $\epsilon^2 = 0$. In particular this ring has a nonzero nilpotent element, whereas every element $r$ of $\Bbb Z_2 \times \Bbb Z_2$ satisfies $r^2 = r$; hence the two rings are not isomorphic.
variable of integration not in function | $$\int_0^x ax^2dy=ax^2\int_0^xdy=ax^2\cdot y\bigg|_0^x=ax^2(x-0)=ax^3$$ |
Propose a decreasing function bounded between $\frac{\log(M)}{M}$ and $\frac{1}{\log(M)}$ | Take $\frac1{\sqrt M}$, which is the geometric mean of your functions. |
Question About the Failure of Uniform Convergence | $f_n \to f$ uniformly if $\forall \epsilon>0$ exists $N \in \Bbb{N}$ such that $|f_n(x)-f(x)|<\epsilon ,\forall n \in \Bbb{N},\forall x \in D_0$
The negation is:
There exists $s>0$ such that: $\forall N \in \Bbb{N}$ there exist $n \geq N$ and $x \in D_0$ such that $|f_n(x)-f(x)| \geq s$.
So for $N=1,2,3,\ldots$ there exist $n_N \geq N$ and $x_N \in D_0$ such that $|f_{n_N}(x_N)-f(x_N)| \geq s$.
10 People and 2 Rooms - Probability Puzzle | For nonnegative integer $n$ and $k\in\left\{ 0,1,2,3,4,5\right\} $
let $p_{n,k}$ denote the probability that after $n$ minutes there
are $k$ persons in the room that does not contain more persons than
the other room.
Then we are interested in the sequence $\left(p_{n,5}\right)_{n}$.
We find the following equalities:
$2p_{0,0}=1=2p_{0,1}$ and $p_{0,2}=p_{0,3}=p_{0,4}=p_{0,5}=0$
And for every $n$:
$10p_{n+1,0}=p_{n,1}$
$10p_{n+1,1}=10p_{n,0}+2p_{n,2}$
$10p_{n+1,2}=9p_{n,1}+3p_{n,3}$
$10p_{n+1,3}=8p_{n,2}+4p_{n,4}$
$10p_{n+1,4}=7p_{n,3}+10p_{n,5}$
$10p_{n+1,5}=6p_{n,4}$
Assuming that for every $k$ the limit $p_{k}:=\lim_{n\to\infty}p_{n,k}$
exists, we find:
$p_{0}+p_{1}+p_{2}+p_{3}+p_{4}+p_{5}=1$
$10p_{0}=p_{1}$
$10p_{1}=10p_{0}+2p_{2}$
$10p_{2}=9p_{1}+3p_{3}$
$10p_{3}=8p_{2}+4p_{4}$
$10p_{4}=7p_{3}+10p_{5}$
$10p_{5}=6p_{4}$
This can be solved (do it yourself) and leads to: $$p_{5}=\frac{126}{512}=\frac{63}{256}=0.24609375$$ |
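If you don't want to solve the linear system by hand, you can also just iterate the recurrences numerically from the starting distribution given above (a quick sketch, not part of the original answer):

```python
# Iterate the recurrences from p_{0,0} = p_{0,1} = 1/2 until they settle.
p = [0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
for _ in range(300):
    p = [0.1 * p[1],
         1.0 * p[0] + 0.2 * p[2],
         0.9 * p[1] + 0.3 * p[3],
         0.8 * p[2] + 0.4 * p[4],
         0.7 * p[3] + 1.0 * p[5],
         0.6 * p[4]]

print(p[5])  # ≈ 63/256 = 0.24609375
```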
Markov Chain for a teleportation machine | If you are in another place, the probability you come back to the initial place is
the probability you do not stay still $(1-\alpha)$ multiplied by the probability the initial place is chosen $\left(\frac{1}{n-1}\right)$, so
$$\beta = \frac{1-\alpha}{n-1}$$ |
Variance of conditional expectation and constant term using LIE | Using the identity
$$
\operatorname{Var}(U+V)=\operatorname{Var}(U) + \operatorname{Var}(V) + 2\operatorname{Cov}(U,V)\tag1$$
you just need to show that the covariance between $U:=E(Y\mid X)$ and $V:=\epsilon$ is zero. To do so, apply LIE to write
$$
E (UV) = E[ E(UV\mid X) ]= E [ U E(V\mid X) ],\tag2
$$
the last equality following from the fact that $U$ is $X$-measurable. But we are told $E(V\mid X)$ is a constant, say $c$. Upon taking expectations we deduce $c=E(V)$. Continuing (2) we get
$$
E (UV) = E[ U E(V) ] = E(U) E(V)
$$
and we've proved $U$ and $V$ have zero covariance. |
How does this optimal classifier make sense in case of continuous random variable? | Some comments:
You can get intuition from assuming that the set up is that $(X,Y)$ is some process where $Y$ is sampled from a distribution that depends on the realization of $X$. For instance, maybe $X \sim Unif([0,1])$, and $Y$ is a sample from an independent coin with bias $X$. Conditioned on $X = 1/2$, $Y$ is a fair coin. This is pretty close to the learning theory context anyway -- there are some features, $X$, and the class $Y$ is some random function of the features.
This situation is also essentially general, in a way that is made precise in 3. So, there's really no harm in imagining that this is the story with the data you are trying to learn a classifier for. (Since $Y$ is a binary random variable, you can skip to 5.)
If $(X,Y)$ has a continuous pdf $p(x,y)$, then you can define $p_x(y) = \frac{ p(x,y)}{ \int_{\mathbb{R}} p(x,y) dy }$ as the pdf of $Y$ conditioned on $X = x$. You need that the integral in the denominator is nonzero, but this is a weaker condition than $P(X = x) > 0$. In this specific case, $Y$ is a binary variable, so we'd have $p_x(y) = \frac{ p(x,y)}{p(x,0) + p(x,1)}$. See wikipedia for more, though I'll now discuss some of the formalism.
You can define a notion of conditional probability for measure zero sets, called disintegration of measure. It's really not necessary for learning theory, and since building it in general is pretty technical, I wouldn't worry about it unless it interests you (if it does, then the survey on wikipedia by Chang and Pollard is worth reading, as is Chapter 5 in Pollard's "User's Guide"). One important comment though is that you have to build up all of the conditional distributions at once: they are defined a.e. as a family in the distribution over $X$. Otherwise, you have problems like this: https://en.wikipedia.org/wiki/Borel%E2%80%93Kolmogorov_paradox
You can verify that $p_x(y)$ as defined above actually gives a disintegration. I'm not sure what conditions are necessary for this to hold, other than that $p_x(y)$ is well defined, and all the integrals you write down in that verification make sense. In particular, I don't think that $p(x,y)$ needs to be a continuous pdf, but would want to find a reference to double check.
Here's a sketch of the verification, for notation $\mu_x, \nu$ see wikipedia. (Note that there is some notation clash -- what they call $Y$ is here called $X \times Y$): The pushforward measure is $d \nu(x) = (\int_{\mathbb{R}} p(x,y) dy) dx$. $\mu_x(y) = p_x(y) dy$ on the fiber $\{x\} \times \mathbb{R}$. When you plug this into the formula from wikipedia, $\int_X (\int_{\pi^{-1}(x)} f(x,y) d \mu_x(y) ) d\nu(x)$, you get:
$$\int_{\mathbb{R}} \int_{\mathbb{R}} f(x,y) \frac{ p(x,y)}{ \int_{\mathbb{R}} p(x,y) dy } dy (\int_{\mathbb{R}} p(x,y) dy) dx = \int_{\mathbb{R}^2} f(x,y) p(x,y) dxdy.$$
From the learning theory point of view, I think it makes sense to imagine fixing a disintegration, and treating that as the notion of conditional probability for $Y$. Even though it is only defined a.e. in $X$, you are not classifying some arbitrary $X$, but one produced from the distribution. Thus, you'll never 'see' disagreements between two different fixed choices of disintegrations. In particular, you can take particularly nice disintegrations given by the formula $p_x(y)$. Also, this means you can treat your distribution as if it is of the kind described in the first bullet.
If $Y$ is a $\{0,1\}$ random variable, $P(Y = 1) = \mathbb{E}[Y]$. Another way that we can define $P ( Y = 1 | X = x) = E [ Y | X = x]$ is via conditioning; the random variable $E [ Y |X ]$ is $\sigma(X)$ measurable, so there is a measurable function $f$ with $E [ Y |X ] = f(X)$. You can then define $E[Y | X = x] = f(x)$. Note that, like disintegration, this is only defined up to almost sure equivalence, since $E[Y|X]$ is only unique up to almost sure equivalence. However, you can pick nice representatives. For instance, if $Y$ is an independent coin flip from $X$ with bias $p$, then $E[Y|X] = p$, so we can take $E[ Y|X = x] = p$. |
Find the least value of $n$ such that $n^{25}\equiv_{83}37$ | We describe a somewhat general procedure, which works for solving $x^k\equiv a\pmod{p}$, when $k$ and $p-1$ are relatively prime. It is overkill for our problem.
Using the Extended Euclidean algorithm, or otherwise, note that $25a\equiv 1\pmod{82}$, where $a=23$. So we have found the modular inverse of $25$ modulo $82$. So $25a=82b+1$, where $b$ happens to be $7$.
It follows that
$$(37^{23})^{25}\equiv (37^b)^{82}\cdot 37^1\equiv 37\pmod{83}.$$
Thus an answer to the problem, but of very much the wrong size, is $37^{23}$. We want to find the remainder when this is divided by $83$.
Use modular exponentiation, preferably an efficient form such as the Binary Method. |
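In Python the whole computation is one call to three-argument `pow`, which implements exactly this kind of binary-method modular exponentiation. Since $\gcd(25,82)=1$, the map $n\mapsto n^{25}$ is a bijection mod $83$, so the residue found is the unique, hence least nonnegative, solution:

```python
n = pow(37, 23, 83)  # fast modular exponentiation (binary method)
print(n)             # 69
assert pow(n, 25, 83) == 37
```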
How can I implement a Deterministic finite automaton which accepts strings having specific words. | Here is a NFA accepting your language, where I the an initial state, and F is a final state.
It reads the word, and, in a non-deterministic way, choses to read one of the subword you want.
Then, you "just" have to determinise it (be careful, it can have up to $2^8$ states… you may want to rewrite the NFA with less states) |
How can I prove that $2/9<x^4+y^4<8$? | 1) to show $x^4+y^4 < 8$, you already have a good solution by squaring $x^2+y^2<2+xy$.
2) By AM-GM and Cauchy Schwarz, we get
$$\left(x^4+y^4+\frac{x^4+y^4}2\right)(3) \ge (x^4+y^4+x^2y^2)(1^2+1^2+(-1)^2) \ge (x^2+y^2-xy)^2 > 1$$
hence $x^4+y^4 > \frac29$
For the more general case, you could use Power Means in addition to the above, i.e. for $n>2$:
$$\sqrt[2^n]{\frac{x^{2^n}+y^{2^n}}2} \ge \sqrt[4]{\frac{x^4+y^4}2} > \frac1{\sqrt3} \implies x^{2^n}+y^{2^n} > \frac2{3^{2^{n-1}}}$$
which is a tighter bound than your inequality. |
Frequency response and low pass filtering | Since discrete-time filters have a $2\pi$-periodic frequency response, the frequency $2\pi$ corresponds to frequency $0$. So if you had a zero response at frequency $2\pi$ you would also have a zero response at frequency $0$, which cannot be the case for a lowpass filter. On the other hand, the frequency $\pi$ is the highest possible frequency for a discrete-time system, so it is this frequency that a lowpass filter must suppress. |
The definition of a directional derivative | This is basically just definition of the derivative, nothing more.
If you have
$$f(h)=u((x,t)+he)$$
then the derivative of $f$ at $h=0$ is, by definition,
$$f'(0)=\frac{\mathrm{d}f(h)}{\mathrm{d}h}|_{h=0}=\lim\limits_{h\to 0} \frac{f(h)-f(0)}{h-0}=\lim\limits_{h\to 0} \frac{f(h)-f(0)}{h}=\lim\limits_{h\to 0} \frac{u((x,t)+he)-u(x,t)}{h}$$ |
Equality of affine subspaces | In fact, the vector subspace is determined by the affine subspace.
If $a+U=b+W$, where $U,W$ are subspaces of $V$, then $U=W$.
Proof. Let $A=a+U$ and $B=b+W$.
Notice that $a=a+\vec 0\in A$, which implies $a\in B$, i.e., $a=b+\vec w$ for some $\vec w\in W$ and we get
$$b-a\in W.$$
By a very similar argument we can show $a-b\in U$.
$\boxed{U\subseteq W}$: Consider arbitrary $\vec u\in U$. Then we get $a+\vec u\in B$, which means that
$$a+\vec u=b+\vec w$$
for some $\vec w\in W$. From this we get
$$\vec u = \underset{\in W}{\underbrace{(b-a)}}+\underset{\in W}{\underbrace{\vec w}}$$
$\boxed{W\subseteq U}$: Proof of this part is analogous. (Or simply use the first part and symmetry.) $\square$
I will just mention that getting from $a+\vec u=b+\vec w$ to $\vec u = (b-a)+\vec w$ might still need a bit of work. This basically depends on the definition of affine space (and affine subspace) you are using. (But if you only work with $\mathbb R^n$, then it is just the usual addition and subtracting of ordered $n$-tuples.) |
Does 'connected surface' in differential geometry actually mean 'path-connected surface'? | (The question is basically answered in the comment)
As a topological manifold is locally path connected, a connected manifold is automatically path connected, since a connected, locally path connected topological space is path connected.
Conceptual difference between weighted arithmetic mean and ordinary arithmetic mean | No, it's right to think that after one hour "each" machine has produced on average 30 tons making a total of 60 tons. |
How to find the values of $\alpha$ in this second order differential equation | The solution to the differential equation is
$$
y(x)=c_1 \cosh x + c_2 \sinh x
$$
and imposing the initial conditions we have
$$
y(x)=2 (\cosh x + \alpha \sinh x)=2\sqrt{1-\alpha^2}\cosh(x+A)
$$
for an opportune value of $A$ ($=\tanh^{-1}\alpha$, assuming $|\alpha|<1$).
Now it should be obvious how to go on and determine the required values of $\alpha$.
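A quick numeric check of the rewriting, valid for $|\alpha|<1$ with $A=\tanh^{-1}\alpha$ (my check, not part of the original answer):

```python
import math

alpha, x = 0.5, 1.3
A = math.atanh(alpha)
lhs = 2 * (math.cosh(x) + alpha * math.sinh(x))
rhs = 2 * math.sqrt(1 - alpha**2) * math.cosh(x + A)
assert math.isclose(lhs, rhs)
print(lhs, rhs)
```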
Not all tangents to a plane curve are bitangents | My first instinct was to approach this in terms of analytic curves (say, in the projective plane) and their dual curves. But this is total overkill. It should just be an elementary exercise in the differential geometry of curves, if we make some reasonable assumptions. When $X$ is contained in a line, its curvature $\kappa$ is identically $0$. Real analyticity of $X$ and the assumption that $X$ is not contained in a line will tell you that the zeroes of $\kappa$ must be isolated, and then the proof will work. (I leave it to you to ponder what happens if you just assume $X$ is $C^2$. How horrendous can the $0$-set of the curvature be then?)
It is really a local exercise, proceeding by contradiction. Assume that we arclength parametrize $X$ by $\alpha(s)$ and let $\beta(s)$ be the (locally, a?) "companion point" so that the tangent lines of $\alpha$ and $\beta$ agree. Write $\beta(s) = \alpha(s)+\lambda(s)T(s)$, where $T(s)$ is the unit tangent vector at $\alpha(s)$. Assuming $\lambda$ is differentiable (away from trouble points this should follow from the implicit function theorem), we have
$$\beta'(s) = \alpha'(s) + \lambda'(s)T(s) + \lambda(s)\kappa(s)N(s),$$
where $T(s),N(s)$ give a smoothly-varying orthonormal basis for $\Bbb R^2$. The fact that $\beta'(s)$ is a scalar multiple of $\alpha'(s)=T(s)$ tells us that $\lambda(s)\kappa(s)=0$ for all $s$. So, away from zeroes of $\kappa$ we must have $\lambda = 0$, which means there is no bitangent companion point at all.
EDIT: @JohnHughes's point is well taken. Since this situation cannot occur (bitangencies occur generically only in isolated situations), it is very difficult to have good intuition. Here is a direct approach, again assuming real analyticity to get only isolated zeroes for $\kappa$. Because we want to work with the tangent lines as affine lines, and not as subspaces, it is most natural to work projectively. I can translate the notation I use if necessary.
Given the curve $\alpha\colon I\to\Bbb R^2$ (for an appropriate interval $I$), again assumed to be arclength-parametrized, we consider $f\colon I\to \Bbb R^3$, with $f(s) = (\alpha(s),1)$. To say that $\alpha(t)$ lies on the tangent line at $\alpha(s)$ is to say that $f(s)\wedge f'(s)\wedge f(t) = 0$ (i.e., the three vectors $f(s),f'(s),f(t)$ are linearly dependent). Symmetrically, to say that $\alpha(s)$ lies on the tangent line at $\alpha(t)$ is to say that $f(t)\wedge f'(t)\wedge f(s) = 0$. So we're interested in pairs $(s,t)$ with $s\ne t$ and $$F(s,t) = \big(f(s)\wedge f'(s)\wedge f(t),f(t)\wedge f'(t)\wedge f(s)\big) = (0,0).$$
(To be a bit more careful, throughout we must choose an identification $\Lambda^3(\Bbb R^3) \cong \Bbb R$. To be concrete, let's set $T(s)\wedge N(s)\wedge (0,0,1)$ equal to $1$.) Now, we just calculate:
$$DF(s,t) = \begin{bmatrix} f(s)\wedge f''(s)\wedge f(t) & f(s)\wedge f'(s)\wedge f'(t) \\ f(t)\wedge f'(t)\wedge f'(s) & f(t)\wedge f''(t)\wedge f(s)\end{bmatrix} = \begin{bmatrix} -\lambda(s,t)\kappa(s) & 0 \\ 0 & \lambda(s,t)\kappa(t)\end{bmatrix},$$
where, at a point $(s,t)$ with $F(s,t)=(0,0)$, we write $\alpha(t) = \alpha(s)+\lambda(s,t)T(s)$. Note that $\alpha(s)\ne\alpha(t)$ tells us that $\lambda(s,t)\ne 0$. So, so long as $\kappa(s)$ and $\kappa(t)$ are both nonzero, $DF(s,t)$ will be nonsingular, which means that — away from zeroes of curvature — $F^{-1}(0,0)$ consists of isolated points, as required. |
Minimum and maximum Distance of a point from an ellipse | Introduce the following variables (Leibovici style$^*$)
$$u=2(3x+4y)\,\,\text{ and } \,\, v=3(4x-3y).$$
Then we get a circle of radius $30$ in the $u,v$ coordinate system:$$u^2+v^2=900.$$
In this coordinate system the point from which we measure the distance of the points on the circle is
$$(30, 0).$$
So, the closest point on the circle to this point is at $(30,0)$ and the farthest point on the circle is at $(-30,0)$.
The minimum distance is $0$ and the corresponding point on the ellipse is $\left(\frac95,\frac{12}5\right)$.
For the other (farthest) point, use the inverse of the transformation we used to get the coordinates of the point $(-30,0)$ in the $(x,y)$ system. Or solve the following system of equations
$$-30=6x+8y,\qquad 0=12x-9y.$$
Then calculate the distance of the two points that we have just found.
EDIT
The solution is $\left(-\frac95,-\frac{12}5\right)$. The closest point is $\left(\frac95,\frac{12}5\right)$. The distance is
$$\sqrt{\left(\frac{18}5\right)^2+\left(\frac{24}5\right)^2}=6.$$
$^*$Claude Leibovici suggested in a later deleted comment the transformation
$$u=3x+4y\,\,\text{ and } \,\, v=4x-3y.$$ |
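A numeric check of the whole answer (mine, not part of it): parametrize the circle $u^2+v^2=900$, invert the substitution, and scan distances from $\left(\frac95,\frac{12}5\right)$:

```python
import math

P = (9 / 5, 12 / 5)  # the point from which distances are measured

def point(t):
    # invert u = 2(3x+4y), v = 3(4x-3y) for (u, v) = (30 cos t, 30 sin t)
    u, v = 30 * math.cos(t), 30 * math.sin(t)
    x = (3 * (u / 2) + 4 * (v / 3)) / 25
    y = (4 * (u / 2) - 3 * (v / 3)) / 25
    return (x, y)

dists = [math.dist(point(2 * math.pi * i / 10000), P) for i in range(10000)]
print(min(dists), max(dists))  # ≈ 0 and ≈ 6
```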
A condition on function $f(x+a)+f(x+b)=\frac{1}{2}f(2x)$ implies periodicity | Replacing $x$ by $x+a$ and defining $f(x+2a)=g(x),$ we have $$g(x)+g(x+b-a)=\frac12\,g(2x).$$ Replacing $x$ by $2(b-a)x$ and defining $g(2(b-a)x)=h(x),$ we arrive at $$h(x)+h\left(x+\frac12\right)=\frac12\,h(2x).$$ Replacing $x$ by $x+\frac14$ and adding the result to the original equation, we get $$h(x)+h\left(x+\frac14\right)+h\left(x+\frac12\right)+h\left(x+\frac34\right)=\frac12\left[h(2x)+h\left(2x+\frac12\right)\right]=\frac14\,h(4x).$$ Iterating that process (i.e. replacing $x$ by $x+\frac1{2^n}$ and adding to the previous equation), we have
$$\sum^{2^n-1}_{k=0}h\left(x+\frac{k}{2^n}\right)=\frac1{2^n}\,h\left(2^n x\right),$$ i.e.
$$\frac1{2^n}\,\sum^{2^n-1}_{k=0}h\left(x+\frac{k}{2^n}\right)=\frac1{2^{2n}}\,h\left(2^n x\right).$$ Under our assumptions, $h$ is integrable, and $h(x)=o(x^2),$
so by letting $n\to\infty,$ we see that $$\int^1_0 h(x+t)\,dt=\int^{x+1}_x h(t)\,dt=0.$$ Since $h$ is continuous, we can differentiate, so $$h(x+1)-h(x)=0.$$ Thus, the original function $f$ must have period $2(b-a).$ |
Boundedness of a signal and its derivative implies convergence | Function $z(t)=\sin(t)$ satisfies both assumptions. However, $\int_{T}^{\infty} \sin(\tau)d\tau$ does not converge (for any choice of $T$). Extra assumptions are required, however, I don't known which of them (if any) are necessary (so that they could be described as "minimal"). |
$\cos{y}=x+\frac{1}{x}$, possible for any values of $y$? | The value of $\cos(y)$ is between $-1$ and $1$. Therefore, for this equality to hold, $x+\frac{1}{x}$ must be between $-1$ and $1$.
If $x>1$, then $x+\frac{1}{x}>x>1$.
If $x=1$, then $x+\frac{1}{x}=2>1$.
If $0<x<1$, then $\frac{1}{x}>1$, so $x+\frac{1}{x}>\frac{1}{x}>1$.
If $x=0$, then $x+\frac{1}{x}$ is not defined.
If $-1<x<0$, then $\frac{1}{x}<-1$ so $x+\frac{1}{x}<\frac{1}{x}<-1$.
If $x=-1$, then $x+\frac{1}{x}=-2<-1$.
If $x<-1$, then $x+\frac{1}{x}<x<-1$.
Since $x+\frac{1}{x}$ is either greater than $1$ or less than $-1$, the equality can never hold (at least in the reals).
Alternately, one could use calculus and compute minima and limits as $x\rightarrow\pm\infty$ and $x\rightarrow 0^{\pm}$. However, since the necessary inequality can be proved with only elementary facts, I chose this method. |
Compactness and Strictly Finer Topologies. | Consider the identity map $e$ from $\langle A,\tau_2\rangle$ to $\langle A,\tau_1\rangle$. Since $\tau_2\supseteq\tau_1$, $e$ is continuous. Since $\tau_2\supsetneqq\tau_1$, there is a set $K\subseteq A$ such that $K$ is closed in $\tau_2$ but not in $\tau_1$. If $\langle A,\tau_2$ were compact, $K$ would be compact in $\tau_2$, and since $e$ is continuous, $K=e[K]$ would be compact in $\tau_1$. But $K$ is not $\tau_1$-closed, and $\tau_1$ is Hausdorff, so ... ? |
The relationship of ${\frak m+m=m}$ to AC | Suppose that $A$ is infinite and Dedekind finite. Then $\mathfrak m=|A\cup\mathbb N|$ satisfies that $|A|<\mathfrak m$, $\aleph_0<\mathfrak m$, and $\mathfrak m+\mathfrak m>\mathfrak m$.
To see the last inequality, note that if $\mathfrak m+\mathfrak m=\mathfrak m$ then $A\times 2$ embeds into $A\cup\mathbb N$, say via $f$, but only a finite subset of it embeds into $\mathbb N$, so a set strictly larger than $A$ must embed into $A$.
Note that being Dedekind infinite is the same as embedding $\omega$, so if we require $\mathfrak m+\mathfrak m=\mathfrak m$ for all infinite cardinals $\mathfrak m$, or even for all Dedekind-infinite cardinals, then there are no Dedekind-finite sets. But no, Countable Choice is strictly stronger than the lack of infinite Dedekind finite sets. This is due to Pincus, see this MO question.
As for whether $\mathfrak m+\mathfrak m=\mathfrak m$ (the idemmultiple hypothesis) gives us Countable Choice, the answer is again no, as proved by
Gershon Sageev. An independence result concerning the axiom of choice, Ann. Math. Logic 8, (1975), 1–184. MR0366668 (51 #2915). |
Converting $\ln|x| = h$ to $x = e^h$ | CAUTION:
Upon reading the given reference, it turns out that at this point the author is not after the full solution, but after an integrating factor. This is why he does not care about the integration constant nor about the sign. Thus any solution can do and $e^{h(x)}$ is good enough.
This makes my answer below somewhat useless.
The true story is more complicated.
From
$$\frac{dF}F=p\,dx$$
we draw the indefinite solution
$$\log|F|=h+c$$
and
$$|F|=e^{h+c},$$
$$F=\pm e^{h+c}=\pm e^ce^h=Ce^h$$ where $C$ can have any sign.
Notice that the sign of $F$ may nowhere change, because that would make the integral on $F$ undefined.
If you have an initial condition, say $F(0)=F_0$, you can write
$$\int_{F_0}^F\frac{df}f=\int_0^x p(t)\,dt=h(x)-h(0),$$
giving
$$F=F_0e^{h(x)-h(0)}$$ and the sign of $F$ is determined by that of $F_0$. |
Identifying some properties of a set | Hint/Solution:
$(1).$ Note that $(m,n)$ is in $S$ and is also a limit point of $S$; hence $S$ is not discrete.
$(2).$ Note that elements of the form $(m+ \frac{1}{4^p},n)$ are also limit points of $S$.
$(3,4).$ If $A$ is countable then $\mathbb R^2\setminus A$ is path connected.
Orthogonal transformation of standard normal sample | Yes, technically $\displaystyle{Var(Y_i) = \sum_{k=1}^n v_{ki}^2}$ would be the more appropriate equation. But actually it doesn't really matter and $\displaystyle{\sum_{k=1}^n v_{ki}^2 = \sum_{k=1}^n v_{ik}^2 = 1}$. This is because if $O$ is an orthogonal matrix, then $O^T$ is also an orthogonal matrix.
$$\mathbb{E}[Y_i Y_j] = \mathbb{E}\left[ \left(\sum_{k=1}^n v_{ki} X_k \right) \left(\sum_{l=1}^n v_{lj} X_l \right) \right]$$
If you expand that out, all the cross terms vanish because $\mathbb{E}[X_k X_l] = 0$ for $k \neq l$ (because $X_k$ and $X_l$ are uncorrelated), leaving $\mathbb{E}[Y_i Y_j]=\sum_{k=1}^n v_{ki}v_{kj}$, which is $0$ for $i \neq j$ since distinct columns of an orthogonal matrix are orthogonal.
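The surviving terms are the column inner products $\sum_k v_{ki}v_{kj}$, i.e. the $(i,j)$ entry of $V^TV=I$. A tiny numeric illustration with a rotation matrix (my example, not from the answer):

```python
import math

theta = 0.7
V = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]  # an orthogonal 2x2 matrix

# sum_k v_ki * v_kj is Var(Y_i) when i == j and Cov(Y_i, Y_j) otherwise
for i in range(2):
    for j in range(2):
        s = sum(V[k][i] * V[k][j] for k in range(2))
        assert math.isclose(s, 1.0 if i == j else 0.0, abs_tol=1e-12)
```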
Isomorphisms in the category of real vector spaces | To complete the solution in the product case, you have to show "uniqueness up to unique isomorphism".
For $\mathbb R[t]$, what you want to show, is that it is a coproduct. Therefore, you have to give maps
$$ \iota_j: \mathbb R \to \mathbb R[t], $$
and then again you show that $\mathbb R[t]$ together with these injections $\iota_j$ gives you a coproduct. This will imply that $\mathbb R[t]\cong\oplus_{i=1}^\infty\mathbb R$.
modular problem in arithmetic | As $41$ is a prime number, $\mathbf Z/41\mathbf Z$ is a field/ In any field, a quadratic polynomial has at most 2 roots. In particular, an element has at most 2 roots, which are opposite. The polynomial $x^2-40$ has $9$ as a root, hence $-9=32$ is the other root.
Same argument for $x^2\equiv 20\mod 71$ since $71$ is prime. |
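Both claims are easy to confirm by brute force (my check, not part of the answer):

```python
# Square roots of 40 mod 41 and of 20 mod 71; each pair sums to the prime.
roots_41 = sorted(r for r in range(41) if r * r % 41 == 40)
roots_71 = sorted(r for r in range(71) if r * r % 71 == 20)
print(roots_41, roots_71)  # [9, 32] [34, 37]
```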
Problem Solving with Straight Line Graphs? | Hint: use the Distance formula
Write distance between $A$ and point $P(x,y)$ in terms of $x$ and $y$, then, replace $y$ with $x$ using the relation for $x$ and $y$ ($y=3x$).
Now, once you've got this relation, equate it to $\sqrt{74}$ and solve for $x$. Again relate $x=y/3$ to get $y$.
This way, you'll get two $x$ values, and two corresponding values of $y$, giving you two points $B$ and $C$.
The distance is: $$\sqrt{(-1-x)^2+(5-y)^2}=\sqrt{(-1-x)^2+(5-3x)^2}$$
Now, equating and solving:
$$\sqrt{(-1-x)^2+(5-3x)^2}=\sqrt{74}$$
implies
$$10x^2 -28x -48=0$$
which has solutions:
$$x=4,-6/5$$
Corresponding $y$ are:
$$y=12,-18/5$$ |
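You can confirm that both points really lie at distance $\sqrt{74}$ from $A=(-1,5)$ (a quick check, not part of the hint):

```python
import math

A = (-1.0, 5.0)
for x in (4.0, -6 / 5):
    y = 3 * x  # the point lies on the line y = 3x
    assert math.isclose(math.dist(A, (x, y)), math.sqrt(74))
    print((x, y))
```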
Integral, Measurable Functions | You need to have a.s convergence to apply the dominated convergence. Your condition
$$\lim_{n\to \infty}\int_{0}^{1}|f_n|=0$$
assures that there exists a subsequence $f_{n_k}$ of $f_n$ that converges a.s to $0$. Indeed, observe that the above integral convergence implies that $f_n$ converges to $0$ in $L_1([0,1])$. Hence, we know that there is a subsequence $f_{n_k}$ that converges almost surely to $0$. Now if
$$\lim_{n\to \infty}\int_{0}^{1}|f_n|^2 \neq 0$$
There is a subsequence, say $f_{n_k}$, of $f_n$ and $\epsilon>0$ such that
$$\tag{1} \int_{0}^{1}|f_{n_k}|^2\geq \epsilon$$
for every $k$. From the discussion above, we can extract a further subsequence of $f_{n_k}$ that converges a.s to $0$. Denote the subsequence of $f_{n_k}$ that converges a.s to $0$ again by $f_{n_k}$ for simplicity. Then, $|f_{n_k}|^2\leq g$ and $|f_{n_k}|^2\to 0$ a.s combined with the dominated convergence implies
$$\lim_{k\to \infty}\int_{0}^1|f_{n_k}|^2=0$$
which of course contradicts $(1)$. |
How can be a set of partial isomorphisms defined from a n-back-and-forth system? | I'll try to answer with some comments.
This approach to structures originates with Roland Fraïssé.
In Heinz-Dieter Ebbinghaus & Jörg Flum & Wolfgang Thomas, Mathematical logic (2nd ed, 1984), we have :
XI.1.1 Definition [page 180] : Let $\mathfrak A$ and $\mathfrak B$ be ($S$)-structures and let $p$ be a map. $p$ is said to be a partial isomorphism from $\mathfrak A$ to $\mathfrak B$ [...].
And :
XI.1.3 Definition [page 182] : [Two structures] $\mathfrak A$ and $\mathfrak B$ are said to be finitely isomorphic, written $\mathfrak A \cong_f \mathfrak B$, iff there is a sequence $(I_n)_{n \in \mathbb N}$ with the following properties :
(a) Every $I_n$ is a nonempty set of partial isomorphism from $\mathfrak A$ to $\mathfrak B$.
[...].
Here condition (a) amounts to : for every $n$ there is $I_n$ such that...
Finally :
XI.1.4 Definition : $\mathfrak A$ and $\mathfrak B$ are said to be partially isomorphic, written $\mathfrak A \cong_p \mathfrak B$, iff there is a set $I$ such that
(a) $I$ is a nonempty set of partial isomorphism from $\mathfrak A$ to $\mathfrak B$.
[...].
According to Ebbinghaus' comment [page 182] :
Informally we can express (b) and (c) [the back-and-forth conditions] as follows : [a] partial isomorphisms $s \in I_{n+1}$ can be extended $(n+1)$ times; the corresponding extensions lie in $I_n, I_{n-1}, \ldots, I_1$ and $I_0$.
In order to understand the difference between $\mathfrak A \cong_f \mathfrak B$ and $\mathfrak A \cong_p \mathfrak B$ we have to see :
XI.1.5 Lemma :
(c) if $\mathfrak A \cong_f \mathfrak B$ and $\mathfrak A$ is finite, then $\mathfrak A \cong \mathfrak B$ [i.e.they are isomorphic].
(d) if $\mathfrak A \cong_p \mathfrak B$ and $A$ and $B$ are at most countable, then $\mathfrak A \cong \mathfrak B$.
All this "machinery" converges towards :
XI.2.1 Fraïssé's Theorem. Let $S$ be a finite symbol set and $\mathfrak A, \mathfrak B$ $S$-structures. Then :
$\mathfrak A \equiv \mathfrak B$ [i.e. elementary equivalent] iff $\mathfrak A \cong_f \mathfrak B$.
We may compare with Bruno Poizat, A Course in Model Theory : An Introduction to Contemporary Mathematical Logic (2000 - french ed, 1985) we can find the notion of local isomorphism between two relations $R,R'$ [page 2].
He defines $p$-isomorphism, which is a local iso "extendible" $p$ times. He uses the symbol $S_p(R,R')$ to denote the set of $p$-isomorphisms between $R$ and $R'$.
Poizat defines $\omega$-isomorphism (or elementary local isomorphism) as a local iso $s$ that is a $p$-isomorphism for every $p \ge 0$.
Then he defines the relations of $p$-equivalence between $k$-tuples [page 4] and of $\omega$-equivalence: two $k$-tuples are $\omega$-equivalent if they are $p$-equivalent for every $p$.
Thus, $\omega$-equivalence corresponds to Ebbinghaus' finitely isomorphic : $\mathfrak A \cong_f \mathfrak B$.
Then Poizat defines (in a purely algebraic way, in the style of Fraïssé) elementary equivalence in terms of the following condition :
The empty function [$f_0$] is a $p$-isomorphism from $R$ to $R'$ for every $p$ (i.e. $S_\omega(R,R')$ is nonempty).
Poizat proves [page 5] :
Th 1.3. : If $R$ is finite, defined on $p$ elements, then every relation $S$ that is $(p+1)$-equivalent to it is isomorphic to it.
which looks like Ebbinghaus' part (c) of XI.1.5 Lemma.
Poizat [page 11] extends the back-and-forth conditions for local isomorphism to ordinals, and introduces the notions of $\infty$-isomorphism and $\infty$-equivalence.
He proves [page 13] :
Th 1.14. : Two $\infty$-equivalent denumerable relations are isomorphic.
which looks like Ebbinghaus' part (d) of XI.1.5 Lemma.
Finally, he states [page 24] :
Th.2.2 (Fraïssé's Theorem). : ...
from which he derives as an immediate consequence :
two $m$-relations are elementary equivalent iff they satisfy the same sentences
which is what we may expect...
In Poizat, page 11, we can find a useful comment :
It is important not to confuse $\infty$-isomorphism with (for example) $\omega$-isomorphism. If we adopt Ehrenfeucht's formulation of Fraïssé's back-and-forth method, then elementary equivalence can be characterized as follows : Consider two players, the first choosing an element in $R$ or $R'$ each round, the second replying with an element in the universe of the other relation. By definition, the second player wins the game in $p$ rounds if, at the end of $p$ choices, they have two (locally) isomorphic $p$-tuples; to say that two relations are elementarily equivalent is to say that for every $p$ the second player has a strategy guaranteed to win the $p$-stage game, that is to say, a strategy, depending on $p$ [emphasis added], that is effective, provided that he knows in advance that only $p$ rounds will be played. On the other hand, in the case of $\infty$-equivalence, the second player has a uniform winning strategy [emphasis added], always the same, that makes him win no matter how many rounds are played.
find an expression for $A^n$ for any positive integer N | I see you have the eigenvectors and the eigenvalues of matrix $A$... So build a unitary matrix $U$ (in your case the matrix $U$ is orthogonal), with columns the eigenvectors of $A$. That is, $U=\begin{pmatrix}\ \frac{1}{2} &\frac{1}{2} &\frac{\sqrt{2}}{2}\\ \frac{1}{2} &\frac{1}{2} &\frac{-\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} &\frac{-\sqrt{2}}{2} &0 \end{pmatrix}.$ Since the columns of $U$ are orthonormal eigenvectors, $A=U\Lambda U^{*}=U\Lambda U^{T}$ (as $U\in\mathbb{R}^{n\times n}$ is orthogonal), and hence $A^{n}=U\Lambda^{n}U^{T}$. Matrix $\Lambda$ is a diagonal matrix with entries the eigenvalues of matrix $A$. So, $\Lambda^{n}=\begin{pmatrix} (-\sqrt{2})^n & 0 & 0\\ 0 & (\sqrt{2})^{n} & 0 \\ 0 & 0 & 0 \end{pmatrix}$. Note that the matrix $A$ is diagonalizable, but it is not nonsingular (invertible).
Note also that $A=\begin{pmatrix} x & 0 & 1\\ 0 & x & 1 \\ 1 & 1 & x \end{pmatrix}$ can be written as $A=B+xI$. So again $A^{n}=U\Lambda^{n}U^{T}$, but now $\Lambda^{n}=\begin{pmatrix} (-\sqrt{2}+x)^n & 0 & 0\\ 0 & (\sqrt{2}+x)^{n} & 0 \\ 0 & 0 & x^{n} \end{pmatrix}$.
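As a sanity check of the identity $A^n = U\Lambda^n U^T$ (a pure-Python sketch of my own; the $2\times2$ symmetric example below is not the matrix from the question, but its eigendecomposition is known in closed form):

```python
import math

# Symmetric example: A = [[0, 1], [1, 0]] has eigenvalues 1, -1 with
# orthonormal eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2).
s = 1 / math.sqrt(2)
U = [[s, s], [s, -s]]          # columns are the eigenvectors
lam = [1.0, -1.0]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

n = 5
Lam_n = [[lam[0] ** n, 0.0], [0.0, lam[1] ** n]]
A_pow = matmul(matmul(U, Lam_n), transpose(U))     # U Λ^n U^T

# Compare with repeated multiplication of A itself.
A = [[0.0, 1.0], [1.0, 0.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    P = matmul(P, A)

err = max(abs(A_pow[i][j] - P[i][j]) for i in range(2) for j in range(2))
print(err)  # essentially 0 (floating-point round-off only)
```

The two computations agree to machine precision, which is all the check is meant to show.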
Help understanding John Lee's definition of curvature | 1- If length were sufficient a circle of length (perimeter) $2\pi$ would be the same as a square whose side is $\frac{2\pi}{4}$.
2- a unit speed parametrization is defined as follows: if $x(t)$ is an arbitrary parametrization take $s$ to be defined by $\frac{ds}{dt}=\vert\vert x'(t)\vert\vert$. You check that $\vert\vert \frac{dx}{ds} \vert\vert=1$. $x(s)$ is the unit speed parametrization of the curve |
Finding Solutions of Sturm-Liouville Equation Satisfying Boundary Conditions and Checking Orthogonality of Eigenfunctions | If $\lambda>0$, the general solution of the DE is
$$y=C_1\, e^{-x} \sin{\left( x\, \sqrt{\lambda }\right) }+C_2\, e^{-x} \cos{\left( x\, \sqrt{\lambda }\right) }$$
Then eigenvalues and eigenfunctions are
$$\lambda_k=k^2,$$
$$y_k=e^{-x}\sin(kx),$$
$k=1,2,\ldots$
If $\lambda\leqslant 0$, the only solution of the DE satisfying the boundary conditions
$$y(0) = y(\pi) = 0$$
is
$$y=0.$$
Eigenfunctions are orthogonal with the weight function $r(x)=e^{2x}$:
$$\langle y_k, y_m\rangle=\int_0^\pi e^{2x}y_k y_m\,dx=\int_0^\pi\sin(kx)\sin(mx)\,dx=0,$$
if $k\neq m$. |
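The orthogonality claim is easy to check numerically; here is a small midpoint-rule sketch (my own, pure Python) for the pair $k=2$, $m=3$:

```python
import math

def inner(k, m, N=20000):
    """Midpoint-rule approximation of ∫_0^π e^{2x} y_k y_m dx
    with y_k = e^{-x} sin(kx); the weight cancels the e^{-x} factors,
    leaving ∫ sin(kx) sin(mx) dx."""
    h = math.pi / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * h
        yk = math.exp(-x) * math.sin(k * x)
        ym = math.exp(-x) * math.sin(m * x)
        total += math.exp(2 * x) * yk * ym
    return total * h

print(inner(2, 3))  # ≈ 0 (orthogonal)
print(inner(2, 2))  # ≈ π/2 (the squared norm)
```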
Find solution of recurrence relation | As suggested in the comments, and with quite a few shortcuts (e.g. Wolfram Alpha), substituting $a_n=x_n^2$ gives
$$a_n=a_{n-1}+6a_{n-2}+7^n \tag{1}$$
this also means
$$a_{n+1}=a_{n}+6a_{n-1}+7^{n+1} \tag{2}$$
we multiply $(1)$ by 7
$$7a_n=7a_{n-1}+42a_{n-2}+7^{n+1} \tag{3}$$
now subtract $(3)$ from $(2)$
$$a_{n+1}-7a_n=a_{n}+6a_{n-1}-7a_{n-1}-42a_{n-2} \iff \\
a_{n+1}-8a_n+a_{n-1}+42a_{n-2}=0$$
which is a linear homogeneous recurrence with characteristic polynomial
$$x^3-8x^2+x+42=0$$
with roots $-2, 3, 7$ and general solution
$$a_n=A(-2)^n+B3^n+C7^n \tag{4}$$
Now, from $x_0=x_1=1 \Rightarrow a_0=a_1=1$ and from $(1) \Rightarrow a_2=56$ leading to
$$a_0=1=A+B+C$$
$$a_1=1=-2A+3B+7C$$
$$a_2=56=4A+9B+49C$$
solving this system of linear equations leads to
$$a_n=\frac{67}{45}(-2)^n-\frac{37}{20}3^n+\frac{49}{36}7^n \tag{5}$$
and
$$x_n=\sqrt{\frac{67}{45}(-2)^n-\frac{37}{20}3^n+\frac{49}{36}7^n}$$ |
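The closed form $(5)$ can be verified exactly with rational arithmetic; a quick sketch:

```python
from fractions import Fraction

def a_closed(n):
    # a_n = (67/45)(-2)^n - (37/20) 3^n + (49/36) 7^n
    return (Fraction(67, 45) * (-2) ** n
            - Fraction(37, 20) * 3 ** n
            + Fraction(49, 36) * 7 ** n)

# Initial conditions match
assert a_closed(0) == 1 and a_closed(1) == 1 and a_closed(2) == 56

# Recurrence a_n = a_{n-1} + 6 a_{n-2} + 7^n holds exactly
for n in range(2, 30):
    assert a_closed(n) == a_closed(n - 1) + 6 * a_closed(n - 2) + 7 ** n

print(a_closed(5))  # → 22379, an exact integer
```

(Every value is an integer even though the coefficients are fractions, as the recurrence forces.)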
Interesting Differential equation problem $x\frac{dy}{dx} + 3\frac{dx}{dy} = y^2 $. | We exchange the roles of $x$ and $y$; therefore
$$y\dfrac{dx}{dy}+3\dfrac{dy}{dx}=x^2$$
gives the equation
$$x^2y'-3y'^2=y$$
which has two following roots
$$y'=\dfrac{-x^2\pm\sqrt{x^4-12y}}{-6}$$
now let $z^2=x^4-12y$ which leads us to DE
$$(2x^3-x^2+z)dx-zdz=0$$
which can be converted to an exact equation with $\dfrac{\partial M}{\partial z}=1$ and $\dfrac{\partial N}{\partial x}=0$ (in the standard notation). After solving it and substituting back, exchange the roles of $x$ and $y$ again.
If $a_k:=\tan( \sqrt{2} + \frac{k\pi}{2011})$, then evaluate $ \frac{a_1+a_2+\cdots+a_{2011}}{a_1a_2~\cdots~a_{2011}}$ | Using Sum of tangent functions where arguments are in specific arithmetic series,
$$\tan(2m+1)x=\dfrac{\binom{2m+1}1 t-\binom{2m+1}3t^3+\cdots+(-1)^m\binom{2m+1}{2m+1}t^{2m+1}}{\binom{2m+1}0-\binom{2m+1}2t^2+\cdots+(-1)^m\binom{2m+1}{2m}t^{2m}}$$
So, if $\tan(2m+1)x=\tan y$
$(2m+1)x=k\pi+y$ where $k$ is any integer
$x=\dfrac{k\pi+y}{2m+1}; k=1,2,\cdots,2m+1$
So, the roots of $$\tan y=\dfrac{\binom{2m+1}1 t-\binom{2m+1}3t^3+\cdots+(-1)^m\binom{2m+1}{2m+1}t^{2m+1}}{\binom{2m+1}0-\binom{2m+1}2t^2+\cdots+(-1)^m\binom{2m+1}{2m}t^{2m}}$$
$$\iff(-1)^mt^{2m+1}-\tan y(-1)^m(2m+1)t^{2m}+\cdots+\tan y=0$$ are $\tan x;x=\dfrac{k\pi+y}{2m+1}, k=1,2,\cdots,2m+1$
Can you apply Vieta's formulas?
How to compute $\int_0^\infty e^{-ax}cos(bx) dx $? | We can use $\cos(bx)=\Re(e^{ibx})$ to obtain
\begin{eqnarray}
I\equiv \Re \int_0^\infty e^{-ax} e^{ibx}dx=\Re \int_0^\infty e^{-\alpha x} dx=\Re\left(\frac{1}{\alpha}\right)
\end{eqnarray}
where $\alpha \equiv a-ib$, assuming $a>0$. Thus we obtain
$$
I= \Re\left(\frac{1}{\alpha}\right)=\Re\left( \frac{1}{a-ib}\right)=\Re\left( \frac{a+ib}{a^2+b^2}\right)
$$
You can see that
\begin{equation}
I= \frac{a}{a^2+b^2}
\end{equation} |
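A quick numerical sanity check of $I=a/(a^2+b^2)$ (my own sketch, truncating the integral at a large upper limit, which costs only an $e^{-aT}$ tail):

```python
import math

def I_numeric(a, b, T=50.0, N=200000):
    """Midpoint-rule approximation of ∫_0^T e^{-ax} cos(bx) dx."""
    h = T / N
    return sum(math.exp(-a * (i + 0.5) * h) * math.cos(b * (i + 0.5) * h)
               for i in range(N)) * h

a, b = 1.0, 2.0
print(I_numeric(a, b))        # ≈ 0.2
print(a / (a ** 2 + b ** 2))  # = 0.2 exactly
```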
Fixed points of orthogonal complement of regular representation | The Artin-Wedderburn theorem says $\mathbb{C}[G]\cong\bigoplus\mathrm{End}(U)$ as $U$ ranges over all irreps of $G$.
If $N\trianglelefteq G$ is a normal subgroup then any irrep $G/N\to GL(V)$ can be turned into an irrep $G\to GL(V)$ by simply composing $G\to G/N\to GL(V)$. Then AW applies to $\mathbb{C}[G/N]$ too - the summands in $\mathbb{C}[G/N]$'s AW decomposition are precisely those in $\mathbb{C}[G]$'s decomposition corresponding to those $G$-irreps $U$ which "came from" being irreps of $G/N$ first.
Define $\rho_{G/N}$ to be this subalgebra of $\mathbb{C}[G]$. I assume by $\rho_{G/N}^\perp$ you mean the orthogonal complement with respect to the obvious inner product (in which $G$ is an orthonormal basis). Since $G$ acts on $\mathbb{C}[G]$ by left-multiplication unitarily, $\rho_{G/N}^\perp$ must be a subrep of $\rho_G=\mathbb{C}[G]$ complementary to $\rho_{G/N}$. This forces $\rho_{G/N}^\perp$ to be a direct sum of the $\mathrm{End}(U)$s corresponding to irreps $U$ that do not come from irreps of $G/N$.
The condition $(\rho_{G/N}^\perp)^H\ne0$ may be rewritten as $\dim(\rho_{G/N})^H<\dim(\rho_G)^H$.
Exercise. Show how to turn a transversal for $G/H$ into a basis for $\mathbb{C}[G]^H$.
Thus, we may further rewrite the condition as $[G:HN]<[G:H]$, or simply $N\not\subseteq H$. You can apply this to your particular situation of prime-power cyclic groups if you want. |
Continuity of Translation of the Trigonometric Polynomials | The fact that you are missing is that translation is a diagonal operator in the basis of exponentials. This simple fact is fundamental to the whole Fourier business, since from translations we get derivatives etc. Details follow.
In order to prove that $B_c$ contains all trigonometric polynomials, it suffices to show that it contains monomials $t\mapsto \exp(int) $. The latter follows from $\exp(in(t+\tau))-\exp(int) =(\exp(in\tau)-1)\exp(int)$. Indeed, the norm of this difference is $|\exp(in\tau)-1|\|\exp(int)\|\to0$ as $\tau\to0$. |
Total function and termination | Since the definitions of "total function" say that it is defined for all possible inputs, it seems that, yes, it must terminate (otherwise it would not be defined for any input on which it does not terminate).
Finding roots of unity? | First, there are at most $n$ $n$th roots of unity, because $x^n-1$ can have at most $n$ roots (as a consequence of the Factor Theorem applied in $\mathbb{C}$).
Second, if $\omega$ is an $n$th root of unity, that means that $\omega^n = 1$. But then, for any integer $k$, we have
$$(\omega^k)^n = \omega^{kn} = (\omega^n)^k = 1^k = 1,$$
so $\omega^k$ is also an $n$th root of unity.
So now the question is which ones are different? If $\omega$ is such that $\omega^n=1$ but $\omega^{\ell}\neq 1$ for any $0\lt \ell\lt n$, then $\omega^r=\omega^s$ if and only if $\omega^{r-s}=1$, if and only if $n|r-s$: indeed, using the division algorithm, we can write $r-s$ as $qn + t$, with $0\leq t\lt n$ (division with remainder). So then
$$1=\omega^{r-s} = \omega^{qn+t} = \omega^{qn}\omega^t = (\omega^n)^q\omega^t = 1^q\omega^t = \omega^t.$$
But we are assuming that $\omega^t\neq 1$ if $0\lt t\lt n$; since $0\leq t\lt n$ and $\omega^t=1$, the only possibility left is that $t=0$; that is, that $n|r-s$.
Thus, if $\omega^{\ell}\neq 1$ for $0\lt \ell\lt n$ and $\omega^n=1$, then $\omega^r=\omega^s$ if and only if $n|r-s$, which is the same as saying $r\equiv s\pmod{n}$.
So it turns out that $\omega^0$, $\omega^1$, $\omega^2,\ldots,\omega^{n-1}$ are all different (take any two of the different exponents: the difference is not a multiple of $n$), and they are all roots of $x^n-1$, and so they are all the roots of $x^n-1$.
So it all comes down to finding an $\omega$ with the property that $\omega^k\neq 1$ for $0\lt k\lt n$, but $\omega^n=1$. And
$$\large\omega = e^{2\pi i/n}$$
has that property. |
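The argument above can be watched in action with Python's `cmath` for, say, $n=5$: the powers $\omega^0,\dots,\omega^4$ are pairwise distinct and each is a root of $x^5-1$.

```python
import cmath

n = 5
w = cmath.exp(2j * cmath.pi / n)   # primitive n-th root of unity

roots = [w ** k for k in range(n)]

# Each power is again an n-th root of unity.
for r in roots:
    assert abs(r ** n - 1) < 1e-9

# The n powers are pairwise distinct.
for i in range(n):
    for j in range(i + 1, n):
        assert abs(roots[i] - roots[j]) > 1e-9

# And w^k = 1 only when n | k, as argued above.
assert all(abs(w ** k - 1) > 1e-9 for k in range(1, n))
print("all checks passed")
```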
Is a single point boundaryless? | I think your confusion partly lies in not being very clear on what 'points' and boundaries of a space mean. For instance, if we forget about manifolds and such like, and just consider two linear equations over the vector space $\mathbb{R}^2$ that intersect once, then the 'point' is a vector in the vector space $\mathbb{R}^2$.
Similarly for other mathematical spaces, points in the space have other meanings -- think for example what 'points' of interest might be in algebraic geometry or function spaces.
A topological manifold $X$ with boundary (of dimension $n$) is a topological space where each point $x\in X$ has a neighbourhood homeomorphic to $\mathbb{R}^n$ or to the closed upper half space in $\mathbb{R}^n$.
Here again we need to be careful -- let $D^2$ denote the open ball in $\mathbb{R}^2$, that is, $D^2=\{x\in\mathbb{R}^2 \mid \|x\|<1\}$. As a manifold, $D^2$ does not have boundary. As a subspace of $\mathbb{R}^2$, we see that $D^2$ has boundary $S^1$.
In short, if $X$ is a topological $n$-manifold with boundary denoted $\partial X$, then $\partial X$ is an $(n-1)$-manifold (with the relative topology) such that the boundary of $\partial X$ is empty -- $\partial\partial X=\emptyset$.
Thus to answer your question, you need to decide what sort of space you are in, what are the points and how they are determined, and also what you mean by a boundary. Hope this is helpful, I have tried to be as intuitive as possible. |
Most elementary proof that a determinant is divisible by $m$ | Take the matrix (unity matrix except for last column) $$Q=\pmatrix{1&0&\dots&&10^{n-1}\cr0&1&0&\dots&10^{n-2}\cr\dots&0\cr0&\dots&&&1\cr}$$Its determinant is obviously 1, and the last column of $A\,Q$ is obviously the rows of $A$ interpreted in base 10. If the whole last column of $A\,Q$ can be divided by $m$, so can its determinant, and since the determinant of Q is 1, the determinant of $A\,Q$ is the same as the determinant of $A$. |
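A concrete instance (my own small example, not from the question): the rows of the matrix $A$ below read $133$, $245$, $329$ in base $10$, all multiples of $7$, and indeed $\det A=-7$ is divisible by $7$.

```python
A = [[1, 3, 3],
     [2, 4, 5],
     [3, 2, 9]]

def det3(M):
    # Cofactor expansion along the first row (exact integer arithmetic).
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

m = 7
# Interpret each row as a base-10 number (this is the last column of A·Q).
row_values = [sum(d * 10 ** (len(row) - 1 - i) for i, d in enumerate(row))
              for row in A]
print(row_values)       # [133, 245, 329], each divisible by 7
assert all(v % m == 0 for v in row_values)

d = det3(A)
print(d)                # → -7
assert d % m == 0
```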
Solve for $k$ when the equation has equal roots | $b^2-4ac=4^2-4(2)(-k)=16+8k=0$ imply $k=-2$ |
The two-daughter-problem | I think the confusion arises because the classical boy-girl problem is ambiguous:
'You know that Mr.Smith has two kids, one of which is a girl. What is the chance she has a sister?'
The ambiguity here is that from this description, it is not clear how we came to know that 'Mr.Smith has two kids, one of which is a daughter.'
Consider the following two scenarios:
Scenario 1:
You have never met Mr. Smith before, but one day you run into him in the store. He has a little girl with him, which he tells you is one of his two children.
Scenario 2:
You are a TV producer, and you decide to do a show on 'what is it like to raise a daughter?' and you put out a call for such parents to come on the show. Mr.Smith agrees to come on the show, and as you get talking he tells you that he has two children.
Now notice: the original description applies to both cases. That is, in both cases it is true that you know that 'Mr.Smith has two children, one of which is a daughter'.
However, in scenario 1, the chance of Mr. Smith having two daughters is $\frac{1}{2}$, but in scenario 2 it is $\frac{1}{3}$. The difference is that in the first scenario one specific child has been identified as female (and thus the chance of having two daughters amounts to her sibling being female, which is $\frac{1}{2}$), while in the second scenario no specific child is identified, so we can't talk about 'her sibling' anymore, and instead have to consider a conditional probability which turns out to be $\frac{1}{3}$.
Now, your original scenario, where you don't know anything about Mr. Smith other than that he has two children, and then Mr.Smith says 'I am so happy Victoria got a scholarship!' is like scenario 1, not scenario 2. That is, unless Mr. Smith has two daughters called Victoria (which is possible, but extremely unlikely, and if he did one would have expected him to say something like 'my older Victoria'), with his statement Mr.Smith has singled out one of his two children, making it equivalent to scenario 1.
Indeed, I would bet that most real life cases where at some point it is true that 'you know of some parent to have two children, one of which is a girl' are logically isomorphic to scenario 1, not scenario 2. That is, the classic two-girl problem is fun and all, but most of the time the description of the problem is ambiguous from the start, and if you are careful to phrase it in a way so that the answer is $\frac{1}{3}$, you will realize how uncommon it is for that kind of scenario to occur in real life. (Indeed, notice how I had to work pretty hard to come up with a real life scenario that is at least somewhat plausible).
Finally, all the variations of whether Victoria is the oldest, youngest, or whether you don't even know her name ('Mr. Smith tells you one his children got a scholarship to the All Girls Academy') do not change any of the probabilities (as you argued correctly): in most real life scenarios, the way you come to know that 'Mr.Smith has two children, one of which is a girl' (and I would say that includes your original scenario) means that the chance of the other child being a girl is $\frac{1}{2}$, not $\frac{1}{3}$.
So, when at the end of you original post you ask "where is my error?" I would reply: your 'error' is that you assumed that the correct answer should be $\frac{1}{3}$, and that since your argument implied that is would be $\frac{1}{2}$, you concluded that there must have been an error in your reasoning. But, as it turns out, there wasn't! For your scenario, the answer is indeed $\frac{1}{2}$, and not $\frac{1}{3}$. So your 'error' was to think that you had made an error!
Put a different way: you were temporarily blinded by the pure math ( and I say 'temporarily', because you ended up asking all the right citical questions, and later realized that the classic two-girl problem is ambiguous: good job!). But what I mean is: we have seen this two-girl problem so often, and we have been told that the solution is $\frac{1}{3}$ so many times, that you immediately assume that also in your descibed scenario that is the correct answer... When in fact that is not case because the initial assumptions are different: the classic problem assumes a Type 2 scenario, but the original scenario described in your post is a Type 1 scenario.
It's just like the Monty Hall problem ... We have seen it so often, that as soon as it 'smells' like the Monty Hall problem, we say 'switch!' ... when in fact there are all kinds of subtle variants in which switching is not any better, and sometimes even worse!
Also take a look at the Monkey Business Illusion: we have see that video of the gorilla appearing in the middle of people passing a basketball so many times that we can now surprise people on the basis of that! |
Limit $\lim\limits_{n\to\infty}((n-1)!)^{\frac{1}{n}}$ | Hint
$$\ln \left( \sqrt[n]{(n - 1)!}\right)=\frac{\ln(1)+\ln(2)+\cdots+\ln(n-1)}{n} \,.$$
Now if you use S-C you get exactly what you said.
I guess you used a consequence of SC, which says:
C: If $a_n >0$ and $\lim_n \frac{a_n}{a_{n-1}}$ exists and is equal to $l$ then
$$\lim_n \sqrt[n]{a_n}=l$$ |
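Numerically one can watch $\sqrt[n]{(n-1)!}$ diverge, using $\ln((n-1)!)=\ln\Gamma(n)$ via `math.lgamma` to avoid huge integers (a sketch; by Stirling's formula the values grow like $n/e$, consistent with the divergence the S-C argument gives):

```python
import math

def nth_root_factorial(n):
    # ((n-1)!)^(1/n) = exp(ln Γ(n) / n); lgamma avoids overflow.
    return math.exp(math.lgamma(n) / n)

for n in (10, 100, 1000, 5000):
    v = nth_root_factorial(n)
    print(n, v, v / n)   # v grows without bound; v/n approaches 1/e
```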
Find the solutions to $e^{\frac{1}{z}} =\displaystyle{ \frac{e^{1+i}}{\sqrt 2}}$ | We have $$\Large e^{1\over z}=e^{1+i-\ln \sqrt 2}$$therefore $$\dfrac{1}{z}=1-\ln \sqrt 2+i(1+2k\pi)$$which means that $$z=\dfrac{1-\ln\sqrt 2-i(1+2k\pi)}{(1-\ln\sqrt 2)^2+(1+2k\pi)^2}$$ |
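A quick `cmath` check of the solution family (my own sketch): for each $k$, the formula for $z$ should satisfy $e^{1/z}=e^{1+i}/\sqrt 2$.

```python
import cmath
import math

def z_k(k):
    # z = (u - i v) / (u^2 + v^2) with u = 1 - ln√2, v = 1 + 2kπ,
    # so that 1/z = u + i v.
    u = 1 - math.log(math.sqrt(2))
    v = 1 + 2 * k * math.pi
    return complex(u, -v) / (u * u + v * v)

target = cmath.exp(1 + 1j) / math.sqrt(2)
for k in (-1, 0, 1):
    assert abs(cmath.exp(1 / z_k(k)) - target) < 1e-9
print("all branches match")
```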
Polar Coordinate Change of Variable when Domain is a Sphere | A simple example will show where $1/s^{n-1}$ comes from. Suppose $u(x)=0$ and $u=1$ on $\partial B(x,s).$ Then the left side equals
$$\int_{\partial B(0,1)} 1 \,dS= S(\partial B(0,1))$$
while the right side equals
$$(1/s^{n-1})\int_{\partial B(x,s)} 1 \,dS= (1/s^{n-1})S(\partial B(x,s)).$$
Now a sphere of radius $s$ has surface area equal to $s^{n-1}$ times the surface area of the unit sphere. That's why the $1/s^{n-1}$ is there on the right side. |
Is $1 \otimes 1$ always the unity element in a tensor product? | Yes, $1\otimes1$ is the unity in the tensor product algebra, because it satisfies
$$
(1\otimes1)(c\otimes d)=c\otimes d=(c\otimes d)(1\otimes1)
$$
so the property also transfers to every element of $C\otimes_BD$.
If it happens that $1\otimes1=0$, then you can conclude that $C\otimes_BD=\{0\}$.
The ring $\{0\}$ has a unity, namely $0$. There is no requirement that in a ring the unity (identity) be nonzero.
The condition $1\ne0$ is imposed on domains (and so on fields), because there are very good reasons not to consider $\{0\}$ a domain.
Munkres order topology difference between definition of simple order using $<$ instead of $\leq$ | I'm pretty certain Munkres does it the way he does because when you define the order topology on a totally ordered set, you need to use $<$ to define open intervals. I think it is more common to use $\leq$ than $<$ in general when speaking about orders, but in this case, Munkres has specific use for $<$ where $\leq$ just won't work, and therefore it's reasonable to let that be the symbol of choice. |
Determinant of Fisher information | If the log-likelihood is quadratic (i.e., the estimator is normally distributed), then the Fisher information is the reciprocal of the variance of the estimator (hence, the lower the variance of the estimator, the more "information" is provided by the data about the parameter...loosely speaking). The square root of the reciprocal of the Fisher Information is the standard error of an estimator (assuming approx. quadratic log-likelihood).
However, if the log-likelihood is not quadratic, then its reciprocal no longer represents the variance of the estimator. You either need to transform the estimator to give it a quadratic log-likelihood or use computational methods to determine the error. |
Distribution of Galton Watson process with exponential offspring | $\sum\limits_{k=1}^{\infty}p^{k}P(Z_t=k)=Ep^{Z_t}$. If $f(s)=Es^{Z_1}$ then $Es^{Z_t}$ is the composition of $f$ with itself $t$ times, usually denoted by $f_t(s)$. Thus $P(D=t)=f_t(p)$. |
How can one formalize and prove things about floating-point numbers, just as one can do with rational or real numbers? | They are fundamentally different in that (given any particular floating-point representation) there are only finitely many of them. And the basic arithmetic operations on floating-point numbers do not satisfy nearly as many nice laws as arithmetic on true reals or rationals do -- for example, $(a+b)+c = a+(b+c)$ is not true for floating point; neither is $(a+b)-b = a$.
There's some mathematical reasoning about floating-point arithmetic going on in numerical analysis, in particular with respect to how much error relative to true arithmetic on reals various formulas can introduce. That often (but not always) has a somewhat conservative character, overestimating the errors rather than going to the trouble of modeling them exactly.
Also, some people have developed complete machine-readable formalizations of floating-point arithmetic, in the context of using them to create computer-verified proofs that specific microprocessor designs implement it correctly. |
Is there a way to solve an underdetermined linear system $Ax=b$ using the Fast Multipole Method? | The minimum norm solution satisfies the normal equations $x = A^\top (AA^\top)^{-1}b$. If both the matrix-vector product $y \mapsto Ay$ and the matrix-transpose-vector product $y \mapsto A^\top y$ can be computed quickly, then $AA^\top y = b$ is a system of linear equations with a positive definite matrix and can be solved by the conjugate gradient method. The product $y \mapsto AA^\top y$ can be computed by composing the fast matrix-transpose-vector and matrix-vector products and is thus fast. Thus, you can compute $y = (AA^\top)^{-1}b$ using conjugate gradient and $x = A^\top y$ using the fast matrix-transpose-vector product.
The conjugate gradient approach is only advised on speed and accuracy grounds if the matrix $A$ is well-conditioned in the sense that $\kappa(A) := \sigma_{\rm max}(A) / \sigma_{\rm min}(A)$ is modestly sized. The condition number of $AA^\top$ is the square of the condition number of $A$: $\kappa(AA^\top) = (\kappa(A))^2$. If the condition number of $AA^\top$ is large, the conjugate gradient method will take many iterations. If you can devise an effective preconditioner, this may help accelerate your method.
One might hope for a direct method which need not have an iteration count depending on the conditioning of $A$. I do not believe such an algorithm is known if fast multiplications by $A$ are computed by the fast multipole method. However, there are a variety of other FMM-related ways of representing and computing with a matrix $A$ with rank structure which may be worth investigating. This is a very broad and active field with many competing methods championed by different researchers. One possible low-rank structure to look into is the HSS matrix structure. If your matrix $A$ has not just FMM but HSS structure, I believe it's almost certain the $ULV$ factorization approach of Xi and coauthors can be adapted to the minimum-norm case (the original algorithm is in the least-squares context). |
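For a tiny underdetermined system the normal-equations formula $x = A^\top(AA^\top)^{-1}b$ can be carried out exactly (a hand-rolled sketch of my own with rational arithmetic and an explicit $2\times2$ inverse; at scale, and with only fast matvecs available, one would replace the inverse with conjugate gradient as described above):

```python
from fractions import Fraction as F

A = [[F(1), F(0), F(1)],
     [F(0), F(1), F(1)]]
b = [F(1), F(2)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(col) for col in zip(*A)]
G = matmul(A, At)                       # G = A A^T, here 2x2 and SPD

# Solve G y = b via the explicit 2x2 inverse (CG would be used at scale).
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
y = [(G[1][1] * b[0] - G[0][1] * b[1]) / det,
     (G[0][0] * b[1] - G[1][0] * b[0]) / det]

x = [sum(At[i][j] * y[j] for j in range(2)) for i in range(3)]
print(x)                                # the minimum-norm solution, (0, 1, 1)

# x solves A x = b exactly:
assert all(sum(A[i][j] * x[j] for j in range(3)) == b[i] for i in range(2))
```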
connected components are connected | Let $C_{x_0} = \bigcup_{\alpha \in \Gamma} A_\alpha$, where each $A_\alpha$ is a connected set containing $x_0$. Suppose $U$ and $V$ are open sets that disconnect $C_{x_0}$. Without loss of generality, let $x_0 \in U$, so that $A_\alpha \cap U \neq \emptyset$ for all $\alpha \in \Gamma$.
Now, there is $\beta \in \Gamma$ such that $A_\beta \cap V \neq \emptyset$ (otherwise, $C_{x_0} \subseteq U$). In the relative topology on $A_\beta$, $U \cap A_\beta$ and $V \cap A_\beta$ are both open and so disconnect $A_\beta$ (a contradiction with the fact that $A_\beta$ is connected).
Defining division by zero | Since $0+0=0$, we have $a=0\cdot \infty_a=(0+0)\cdot\infty_a=0\cdot\infty_a+0\cdot\infty_a=a+a$. So $a=a+a$, and therefore $a=0$.
It certainly makes life simpler to have everything equal to $0$. |
When to use $\sin$ and $\cos$ to find $x$,$y$ components? | It depends on your definition of the angle:
In the picture as drawn, $x$ is $r\cos\alpha$ and $y$ is $r\sin\alpha$. But if I chose a different convention for $x$, $y$ or $\alpha$ I would need a different equation. |
Advanced Induction proof | Induction on $n$ doesn't work for this problem. Rather, it is the observation that if $A = \{a_1, \dots, a_m\}$ and $a_1 = \max A$ then $\sum A \le m a_1$. That is, each term in the sum is less than or equal to $a_1$ and therefore
$$ \sum A = \sum_{i = 1}^m a_i \le \sum_{i = 1}^m a_1 = m a_1. $$
You can prove this formally by induction on $m$ if you want:
$$ a_m + \sum_{i = 1}^{m - 1} a_i \le a_m + (m - 1)a_1 \le a_1 + (m - 1)a_1 = ma_1. $$ |
B is the set of bitstrings s of size 8. Which is a function from B to N? | To be a function, you need to assign a single value to each element of the domain. In your case, there are $2^8$ strings in $s$. For the first one, if the string is $a=01010101$, what is $f(a)$? You can't give a single answer, so this is not a function. For the next two, think about the same question-can I give a clear value to the function for every one of the bitstrings? |