Unbounded function on compact interval? | For instance, $f:[-1,1] \rightarrow \mathbb R$ defined as $f(x)=1/x$ if $x\neq 0$ and $f(0)=0$ is defined on a compact domain $[-1,1]$ but it is not bounded.
Recall the Weierstrass theorem:
"Every continuous function on a compact domain has at least one maximum and one minimum"
So taking the contrapositive of the above statement we obtain:
"If a function defined on a compact domain has no maximum or no minimum, then it must be discontinuous."
In other words, if you look for an unbounded function (in particular, one with no maximum or minimum) defined on a compact domain, then you must look for a discontinuous function. |
Flip a fair coin 5 times. $A$ is the event that the first flip is heads, and $B$ is the event that you get at least three heads. What is $P(A|B)$? | Given the event $B$, that you get at least three heads in five flips, there are $\binom{5}{3} + \binom{5}{4} + \binom{5}{5}$ equiprobable such outcomes. How many of these have heads in the first flip? If there were exactly $3$ heads flipped and the first flip is heads, then there are $\binom{4}{2}$ ways to distribute the remaining $2$ heads and $2$ tails; if there are exactly $4$ heads flipped, there are $\binom{4}{1}$ ways to distribute the remaining $3$ heads and $1$ tail; and if $5$ heads are flipped there is only $1$ outcome. So the desired probability is $$\frac{\binom{4}{2} + \binom{4}{1} + \binom{4}{0}}{\binom{5}{3} + \binom{5}{4} + \binom{5}{5}} = \frac{6+4+1}{10+5+1} = \frac{11}{16}.$$
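As a quick sanity check, here is a minimal C simulation of the fair case (the trial count and seed are arbitrary choices):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    srand(42);                       /* fixed seed for reproducibility */
    long trials = 1000000, b = 0, ab = 0;
    for (long t = 0; t < trials; t++) {
        int heads = 0, first = 0;
        for (int i = 0; i < 5; i++) {
            int h = rand() % 2;      /* fair flip: 0 = tails, 1 = heads */
            if (i == 0) first = h;
            heads += h;
        }
        if (heads >= 3) { b++; if (first) ab++; }
    }
    /* conditional frequency; exact value is 11/16 = 0.6875 */
    printf("P(A|B) ~ %f\n", (double)ab / b);
    return 0;
}

Running it prints an estimate close to $0.6875$.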
The enumeration above is a pure counting approach. We can consider a more general approach in which the probability of heads is some $p \in (0,1)$ rather than the fair case $p = 1/2$. This is done via Bayes' theorem:
$$\Pr[A \mid B] = \frac{\Pr[B \mid A]\Pr[A]}{\Pr[B]}.$$ The unconditional number of heads is a binomial random variable, $$X \sim \operatorname{Binomial}(n = 5, p), \\ \Pr[X = x] = \binom{5}{x} p^x (1-p)^{5-x},$$ hence event $B$ is equivalent to $X \ge 3$, and $$\Pr[B] = \Pr[X \ge 3] = \sum_{x=3}^5 \binom{5}{x} p^x (1-p)^{5-x}.$$ The conditional random variable $B \mid A$ corresponds to the outcome $Y \ge 2$ where $Y$ is binomial but with $n = 4$, because flips are independent; i.e. $$Y \sim \operatorname{Binomial}(n = 4, p).$$ And of course, $\Pr[A] = p$. So it follows that $$\Pr[A \mid B] = \frac{\Pr[Y \ge 2]p}{\Pr[X \ge 3]} = \frac{p \sum_{y=2}^4 \binom{4}{y} p^y (1-p)^{4-y}}{\sum_{x=3}^5 \binom{5}{x} p^x (1-p)^{5-x}} = \frac{6-8p+3p^2}{10-15p+6p^2}.$$ Note when $p = 1/2$ we recover the original answer $11/16$. |
A couple of questions based on the order of subgroups and normal subgroups. | By counting elements, you can show that $|N|=6$, so it can't have any elements of order $4$, as $4\nmid 6$. Note that this doesn't involve realizing $N$ as $S_3$.
As you mention, $|A|=3$ and hence $|H|=3$. Since $\frac{6}{3}=2$, $H$ is normal in $N$. Let me know if you need help with the last part about $H\vartriangleleft GL_3(\mathbb{R})$, but you should try for yourself. |
How to solve a differential equation $(x^2y+y^5)dx+(x^3-xy^4)dy=0$? | Probably, I can help a bit.
It seems to me you can try the substitution
$$
f = - \frac{1}{xy},\quad g = - \frac{y^3}{x^3}.
$$
Since
$$
df = \frac{dx}{x^2 y} + \frac{dy}{x y^2},\quad
dg = \frac{3 y^3 dx}{x^4} - \frac{3 y^2 dy}{x^3},
$$
you can write
$$
df + \frac{1}{3} dg = \frac{\left(x^2y + y^5\right)dx+ \left(x^3-x y^4\right)dy}{x^4 y^2}.
$$
Now the solution should be straightforward.
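Carrying the hint one step further: the original equation says exactly $df + \frac{1}{3}\,dg = 0$, so the general solution is
$$
f + \frac{g}{3} = -\frac{1}{xy} - \frac{y^3}{3x^3} = C.
$$ |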
Sine Curve on the Y axis | It's impossible to get the entire curve you showed in the form $y=f(x)$, because each $x$ value has several $y$ values above it in the graph. But you can get a portion of the curve with the equation $y = \arcsin(-x)$ for $-1 \le x \le 1$. |
sketching lines and curves in the complex plane. | The line $x = 1$ is parametrized by $\gamma(t) = 1 + it$, or, in real form, $(1, t)$.
The polar description $\theta = \pi/2$ is problematic for two reasons:
It describes the imaginary axis (not the line $x = 1$).
The complex squaring map in polar coordinates does not square $\theta$, but instead sends $re^{i\theta} \sim (r, \theta)$ to $r^{2} e^{2i\theta} \sim (r^{2}, 2\theta)$. |
Formal basis for computing the differential in trig substitution | Got it. Trig substitution is $u$-sub applied backwards where $x$ is a function of $\theta$ so that
$$\int_{a}^{b} f(\varphi(\theta))\varphi'(\theta) \, d\theta = \int_{\varphi(a)}^{\varphi(b)} f(x) \, dx$$
where $x := \varphi(\theta)$, so by identification we also have $dx := \varphi'(\theta) \, d\theta$, granted that $\varphi$ is continuously differentiable on the interval of integration.
Applying the same reasoning to the following integral
$$\int_{-1}^{1} \sqrt{1 - x^2} \, dx$$
we eventually obtain
$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \sqrt{1 - \sin(\theta)^2} \cos(\theta) \, d\theta = \int_{\sin\left(-\frac{\pi}{2}\right)}^{\sin\left(\frac{\pi}{2}\right)} \sqrt{1 - x^2} \, dx$$
where $x := \sin(\theta)$ and $dx := \cos(\theta) \, d\theta$.
This method is justified by the fact that $\sin(\theta)$ is continuously differentiable on
$$-\frac{\pi}{2} \leqslant \theta \leqslant \frac{\pi}{2}.$$
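For completeness, finishing the example: on this interval $\sqrt{1 - \sin(\theta)^2} = \cos(\theta) \ge 0$, so the left-hand integral is
$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cos(\theta)^2 \, d\theta = \frac{\pi}{2},$$
which is indeed the area of the upper half of the unit disk, $\int_{-1}^{1} \sqrt{1-x^2}\,dx$. |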
How to combine $(x - \mu_0)^TA(x - \mu_0) - (x - \mu_1)^T A (x - \mu_1)$ | $$(x - \mu_0)^TA(x - \mu_0) - (x - \mu_1)^T A (x - \mu_1) $$
is equal to
$$x^TAx - x^TA \mu_0 - \mu_0^T A x + \mu_0^T A \mu_0-(x^TAx - x^TA \mu_1 - \mu_1^T A x + \mu_1^T A \mu_1)$$
Notice that the $x^T A x$ terms cancel out:
$$- x^TA \mu_0 - \mu_0^T A x + \mu_0^T A \mu_0 + x^TA \mu_1 + \mu_1^T A x - \mu_1^T A \mu_1$$
You can combine similar terms like
$$ \mu_0^T A \mu_0 - \mu_1^T A \mu_1 + x^TA (\mu_1-\mu_0) + (\mu_1-\mu_0)^T A x $$
If $A$ is symmetric, notice that the last two terms are equal, i.e.
$$x^TA (\mu_1-\mu_0) = (\mu_1-\mu_0)^T A x$$so we can write
$$ \mu_0^T A \mu_0 - \mu_1^T A \mu_1 + 2x^TA (\mu_1-\mu_0)\tag{1} $$
An alternative term could be achieved by realizing that
$$(\mu_1 - \mu_0)^T A (\mu_1 + \mu_0) = \mu_1^T A \mu_1 - \mu_0^T A \mu_0 +\mu_1^T A \mu_0 - \mu_0^T A \mu_1$$
again under the assumption that $A$ is symmetric, $\mu_0^T A \mu_1 = \mu_1^T A \mu_0$ hence
$$(\mu_1 - \mu_0)^T A (\mu_1 + \mu_0) = \mu_1^T A \mu_1 - \mu_0^T A \mu_0 \tag{2}$$
The RHS of $(2)$ is the negative of the first two terms appearing in $(1)$, pretty cool huh? Substituting into $(1)$:
$$ -(\mu_1 - \mu_0)^T A (\mu_1 + \mu_0) + 2x^TA (\mu_1-\mu_0) $$
Again using symmetry, you can write
$$ -(\mu_1 + \mu_0)^T A (\mu_1 - \mu_0) + 2x^TA (\mu_1-\mu_0) =(2x - \mu_0 - \mu_1)^T A (\mu_1-\mu_0)$$
BONUS
You say you are also interested in $$(x - \mu_0)^T A^{-1} (x - \mu_0) - (x - \mu_1)^T A^{-1} (x - \mu_1)$$Using similar steps and under the same symmetric assumption, you'll get
$$(2x - \mu_0 - \mu_1)^T A^{-1} (\mu_1-\mu_0)$$
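As a quick numerical check of the final identity, a minimal C sketch (the symmetric $A$ and the vectors are arbitrary test values):

#include <stdio.h>

/* bilinear form v^T A w for a 2x2 matrix A */
double qf(const double A[2][2], const double v[2], const double w[2]) {
    double s = 0.0;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            s += v[i] * A[i][j] * w[j];
    return s;
}

int main(void) {
    double A[2][2] = {{2.0, 1.0}, {1.0, 3.0}};   /* symmetric */
    double x[2] = {0.7, -1.2}, m0[2] = {0.5, 2.0}, m1[2] = {-1.0, 0.3};
    double d0[2], d1[2], u[2], v[2];
    for (int i = 0; i < 2; i++) {
        d0[i] = x[i] - m0[i];
        d1[i] = x[i] - m1[i];
        u[i] = 2.0 * x[i] - m0[i] - m1[i];
        v[i] = m1[i] - m0[i];
    }
    double lhs = qf(A, d0, d0) - qf(A, d1, d1);
    double rhs = qf(A, u, v);
    printf("lhs = %f, rhs = %f\n", lhs, rhs);
    return 0;
}

Both printed values agree, as the algebra predicts. |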
Lp spaces and supremum | We must assume $p,q \geq 1$. To answer your question:
Think about when equality holds in Hölder's inequality. Also, you can always scale the function to get $||f||_p=1$. |
elementary calculus inquiry | $\newcommand{\del}[2]{\frac{\partial #1}{\partial #2}}$
Assuming $u$ is a smooth (or at the very least twice continuously differentiable) function, you can use the chain rule. You know that $$\del{u}{x}=\del{\rho}{x}\del{u}{\rho} + \del{\phi}{x}\del{u}{\phi}$$
and similarly for $y$. You can iterate the relation to derive higher-order derivatives and combine your results to get your identity. |
Proof for a relation | It is not too hard to prove that:
$x^2\equiv x^2 \pmod 4$ (reflexivity)
$x^2\equiv y^2 \pmod 4\Rightarrow y^2\equiv x^2 \pmod 4$ (symmetry)
And $x^2\equiv y^2 \pmod 4\wedge y^2\equiv z^2 \pmod 4\Rightarrow x^2\equiv z^2 \pmod 4$ (transitivity) |
What does Liu mean by "topological open/closed immersion" in his book "Algebraic Geometry and Arithmetic Curves"? | Yes, that's a correct definition. Yours (1.) is also equivalent to each of the following:
2. $f(X)$ is open (closed) and $f$ is a homeomorphism onto its image
3. $f$ is open (closed) and a homeomorphism onto its image
If we then define an immersion to be a homeomorphism onto its image, then an open (closed) immersion really is an immersion that is open (closed).
Note: it is also called an embedding, which is safer to use than immersion, because it is closer to the terminology used in differential geometry. |
Not equivalent distance | Suppose that the two distances are equivalent. Then there exists $c>0$ such that (with $y=0$)
$|\exp (x)-1| \le c |x|$ for all $x$.
For $x>0$ this gives $\exp (x) \le cx+1$.
But this is absurd, since $\exp(x)$ grows faster than any linear function. |
Prove or disprove f is contractant | HINT: take $f(x)=x^2/2$ on $[0,1]$; it is not a contraction. |
For what $a>0$ does this integral converge? $\int_{1}^{\infty} {\frac{x^a(\sin x+2)}{x^{2a}\ln(a)}}$ | We can ignore $\ln(a)$ in the denominator: it is constant in $x$ (and well defined for all $a>0$ with $a\neq 1$), therefore it doesn't affect convergence. Then in our domain,
$$
\frac 1 {x^a} \leq \frac{x^a(\sin x+2)}{x^{2a}} \leq \frac 3 {x^a}
$$
which proves, as you concluded, that the integral is convergent for $a>1$ and divergent for $a\leq1$.
Regarding the absolute value technique, it works for non-negative functions. But also for non-positive functions. If $f(x)\leq 0$ then $\int f(x) = -\int -f(x)$ where $-f(x)\geq 0$. What I've written is essentially the same thing, since as you can see, the function is bounded from below by a positive function, so you can put absolute value signs everywhere if you like. |
Nullhomotopic loops in $R^3$ minus two disjoint circles. | You can interpret this problem by way of another: how can you hang a painting with two pins and a string so that if either pin is removed the painting falls? This might seem completely out of left field but there is a direct analogy here. For one, replace these pins with rubber bands (perhaps being held up on the wall by tape), so that wrapping the string around the pin is replaced by inserting it into the loop of the rubber band. These rubber bands will serve as our two disjoint circles. Furthermore, the painting will contain a path connecting the two endpoints of the string, and it can just deformation retract onto this, so the painting and the string serve as the loop.
Let's consider first the initial configuration of the painting hanging. It's not falling because somehow the string is looped into the rubber bands in such a way that it cannot be removed without cutting. The string cannot cut through the rubber band itself, in the same way that a loop going through the center of a circle is not null homotopic. The only way for the painting to fall is for the string to be able to unhook from the inside of the rubber bands, thus allowing the string and painting to be completely separate from the rubber bands. This is essentially saying that the corresponding loop is null homotopic!
Furthermore, going from $Z$ to $X$ or $Y$ is analogous to removing one of the rubber bands from the system. Adding in $C_1$ or $C_2$ allows our loop to pass through that area freely, just as removing a rubber band allows our string to fall through the area where it used to be.
I hope this convinces you that this problem is the same as the painting one I'm describing. For me, I find this much easier to visualize and understand. For one, you can pretty easily get some string and pins and go for it! Unfortunately, I don't think I have the ability to paint a mental picture of such a construction, nor can I draw one. So I'll leave you with this fun little video which shows a solution to this painting hanging problem and does a good job of motivating how you can arrive at such a solution. |
Evaluate integral with gaussian curvature | $
\renewcommand \t\theta
\renewcommand \f\phi
\renewcommand \p\phi
\renewcommand {\d}{\partial\,}
\newcommand \T \Theta
\newcommand \P \Phi
\newcommand \F \Phi
$
HINT: Recall that the Gaussian curvature of a surface is equal to the Jacobian determinant of the Gauss map of this surface.
One can compute the integral $
\int_0^{2\pi}\!\int_0^{\pi}K(x,y)\sqrt{\left\vert g_{ij}\right\vert} \, dy\,dx $ on the surface $M$ by mapping $M$ to the unit sphere $S^2$, i.e. by making the change of variables defined by the Gauss map $G$.
$$
\big(\, \t, \f\, \big) = G\big(\, x, y \, \big)
\iff
\begin{cases}
\t = \T(x,y) \\
\f = \P(x,y),
\end{cases}
\quad
\big(\, x, y \, \big) = G^{-1}\big(\, \t , \p \, \big)
\iff
\begin{cases}
x = X(\t, \p) \\
y = Y(\t, \p).
\end{cases}
$$
Changing variables in the integral requires multiplying the integrand by the Jacobian determinant $J_{G^{-1}}$ of the map $G^{-1}:S^2\to M$.
$$
\int_0^{2\pi}\int_0^{\pi}K(x,y)\sqrt{\left\vert g_{ij}\right\vert}\, dy\,dx
= \int_0^{2\pi}\int_0^{\pi}K J_{G^{-1}}\sqrt{\left\vert g_{ij}\right\vert} \, d\t\,d\p,
$$
where $J_{G^{-1}}$ is the determinant of the Jacobian matrix of $G^{-1}$:
$\quad
J_{G^{-1}} =
\left\vert \dfrac{D \left(\, x , y \, \right)}{D\left(\, \t , \p \, \right)}\right\vert
=
\begin{vmatrix}
\frac{\d X}{\d \t} & \frac{\d X}{\d \p} \\
\frac{\d Y}{\d \t} & \frac{\d Y}{\d \p}
\end{vmatrix}
$
Since the Jacobian of a map is the reciprocal of the Jacobian of its inverse,
$$
J_{G^{-1}} = \left\vert\dfrac{D(x,y)}{D(\t,\p)}\right\vert
= \left\vert\dfrac{D(\t,\p)}{D(x,y)}\right\vert^{-1}
= \dfrac{1}{J_{G}},
$$
and since the Jacobian determinant of the Gauss map is equal to the Gaussian curvature, $K = J_G$, the product $K\,J_{G^{-1}}$ equals $1$: we get rid of $K(x,y)$ and can compute the integral.
I think you can pick it up from here and make the final step by plugging everything into the integral and computing it.
To summarize, in order to compute this integral we use the following facts:
Integration by substitution involves multiplying the integrand by a Jacobian
The Jacobian of an inverse transformation is the reciprocal of the Jacobian of the direct transformation
The Gaussian curvature is equal to the Jacobian of the Gauss map
PS: The integral you are computing looks a lot like the total Gaussian curvature. |
How do you go about proving the existence of a polynomial in an ideal? | Let $G$ be the greatest common divisor of $P$ and $Q$.
Since $P$ and $Q$ are multiples of $G$, we have $P,Q \in <G>$, which implies that $<P,Q>\subset <G>$.
For the other inclusion, Bézout's theorem says that there exist polynomials $U$ and $V$ such that $G=PU+QV$. Therefore, $G$ is an element of $<P,Q>$, so $<G>\subset<P,Q>$.
As a conclusion, we have $<P,Q>=<G>$. |
Why is the directional derivative maximal in the direction of gradient? | Let $P$ be the tangent plane to $g=0$. We want to maximize $\nabla f\cdot v$ subject to $v\in P$, $\|v\|=1$. Let $(v,w)$ be an orthonormal basis of $P$. A reformulation is the maximization of $x (\nabla f\cdot v)+y(\nabla f\cdot w)$ for $x^2+y^2=1$. But this is a linear function in $(x,y)$ which is, on a unit circle, maximized in a multiple of its own gradient $(\nabla f\cdot v, \nabla f\cdot w)$. So the maximal ascent in $P$ is in the direction $v (\nabla f\cdot v)+w(\nabla f\cdot w)$ which is also equal to a multiple of the projection of $\nabla f$ to $P$.
Intuitively, the steepest ascent is in the direction of $\nabla f$, and if you are not allowed to go there, then you at least want to go in the direction as close to $\nabla f$ as possible. But this is nothing else than the projection of $\nabla f$ to $P$. |
Proving equivalence of 3 facts without axiom of infinity | The axiom of infinity states "There exists an inductive set". An infinite set is a set which is not finite. Inductive sets are not finite, so the axiom of infinity tells us that there is an infinite set. But specifically, it tells us that there is an inductive set.
Of the three implications, $1\implies 2$ is trivial, by noting that no finite set is inductive; and $3\implies 1$ is trivial by noting that $\omega$ is an inductive set. So the only part left is $2\implies 3$.
You have to show now why, given that there exists an infinite set, we can deduce the existence of $\omega$. If you already know what a transitive closure is, and what the rank of a set is, then this should be sufficient for a quick proof. If you don't, you might have to do this slightly more "by hand".
Here's a small hint: Let $X$ be an infinite set. Use the power set and separation axioms to conclude that the set $\{A\subseteq X\mid A\text{ is finite}\}$ exists. Now use replacement to prove that $\omega$ exists. |
Find the volume of the region bounded by a sphere and a paraboloid using cylindrical and spherical coordinates. | In cylindrical coordinates, the volume is
$$\int^{2\pi}_0 \int_0^{\sqrt3} \int^{-\frac{r^2}{3}}_{-\sqrt{4-r^2}} r\>dz\,dr\,d\theta
= \frac{19\pi}6$$
and, in spherical coordinates
$$\int^{2\pi}_0 \int_{\pi/2}^{2\pi/3} \int_0^{-\frac{3\cos\phi}{\sin^2\phi}}
\rho ^2\sin\phi \,d\rho d\phi d\theta
+\int^{2\pi}_0 \int^{\pi}_{2\pi/3} \int_0^{2}
\rho ^2\sin\phi \,d\rho d\phi d\theta = \frac{19\pi}6$$
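As a numerical cross-check: the limits above place the region between the sphere $z=-\sqrt{4-r^2}$ below and the paraboloid $z=-\frac{r^2}{3}$ above, so a midpoint rule on the reduced single integral should reproduce $\frac{19\pi}{6}$. A minimal C sketch:

#include <stdio.h>
#include <math.h>

/* midpoint rule for V = 2*pi * int_0^sqrt(3) (sqrt(4 - r^2) - r^2/3) r dr */
int main(void) {
    const double pi = 3.14159265358979323846;
    int n = 1000000;
    double b = sqrt(3.0), h = b / n, sum = 0.0;
    for (int i = 0; i < n; i++) {
        double r = (i + 0.5) * h;
        sum += (sqrt(4.0 - r * r) - r * r / 3.0) * r * h;
    }
    printf("numeric V = %.6f, 19*pi/6 = %.6f\n", 2.0 * pi * sum, 19.0 * pi / 6.0);
    return 0;
}

Both printed numbers agree to the shown precision. |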
Upper bound of this fraction? | The numerator and denominator represent two planes, when each is set equal to $d_1$ and $d_2$ respectively.
So either they are parallel and there are no common points, or they are coincident, and the ratio is constant everywhere, or they have a common intersection line where the ratio is $d_1 / d_2$.
Then if you increase $d_1$, the upper plane will move, and the intersection line will move as well. |
Why $\sqrt{x^2}$ is not equal to $\big(\sqrt{x}\big)^2$? | It is true for all $x \ge 0$, but if $x \lt 0$, $\sqrt x$ is not defined in the reals, so $(\sqrt x)^2$ is also not defined. On the other hand, $\sqrt {x^2}$ is defined and equals $|x|$ for all real $x$. |
Cantelli's inequality and Chebyshev's inequality in comparison | "Better" stands for a sharper bound, that is the bound for Cantelli's inequality is smaller than the bound in Chebyshev's inequality. More on this after I answer the second question.
To get equality for Cantelli's inequality: let the distribution of $X$ be a Bernoulli distribution with parameter $p$, $X \sim \mathrm{Be}(p)$. Take $\alpha = 1 - p$. Then the Cantelli inequality gives
$$ \begin{align} P(X - EX \geq \alpha) &\leq \frac{\sigma^2}{\sigma^2 + \alpha^2} \\\\ &= \frac{p(1-p)}{p(1-p) + (1-p)^2} \\\\ &= p \end{align} $$
But the true probability
$$\begin{align} P(X - EX \geq \alpha) &= P(X - p \geq 1-p) \\\\ &= P(X \geq 1) \\\\ &= p \end{align} $$
is the same as the bound from Cantelli's inequality. This means that the Cantelli inequality cannot be improved without further assumptions.
The question now is when is Cantelli's inequality better than Chebyshev's inequality.
Cantelli's Inequalities :
$$P(X-EX \geq \alpha) \leq \frac{\sigma^2}{\sigma^2 + \alpha^2} $$
$$P(|X-EX| \geq \alpha) \leq \frac{2\sigma^2}{\sigma^2 + \alpha^2} $$
Chebyshev's inequality:
$$P(|X-EX| \geq \alpha) \leq \frac{\sigma^2}{\alpha^2}$$
The two-sided Cantelli bound is sharper than Chebyshev's exactly when
$$ \frac{2\sigma^2}{\sigma^2 + \alpha^2} \leq \frac{\sigma^2}{\alpha^2},$$
which reduces to $\alpha^2 \leq \sigma^2$. But in that range both bounds are at least $1$, hence not very useful. However, the one-sided Cantelli inequality is indeed useful: as shown above it can be sharp, and it also helps when Chebyshev's inequality isn't good enough.
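For a concrete one-sided comparison: with $\sigma^2 = 1$ and $\alpha = 2$, Cantelli gives $P(X - EX \geq 2) \leq \frac{1}{5}$, while Chebyshev only gives $P(|X-EX| \geq 2) \leq \frac{1}{4}$, a weaker bound for a larger (two-sided) event. |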
I am looking for a modern and thorough exposition for presentations of groups | The answer occurs in the comments above. I summarize:
"Combinatorial Group Theory: Presentations of Groups in Terms of Generators and Relations" by W. Magnus and others, 444 pages, ISBN 0486438309, 2004 reprint of 1976 edition
"Presentations of Groups, 2ed" by D.L. Johnson, 232 pages, ISBN 0521585422, 2008 reprint of 1997 edition
"Topics in the theory of group presentations" by D. L. Johnson, 311 pages, ISBN 9780521231084, 2008 reprint of 1980 edition |
Why do mathematicians use only symmetric matrices when they want positive semi-definite matrices? | First, one can argue that non-symmetric positive definite matrices are pathological, in the sense that when you move to the complex case all positive definite matrices are hermitian.
For a non-symmetric positive definite matrix you can say little more than the fact that its eigenvalues have positive real part. You don't have many of the nice properties that symmetry adds. For instance, without symmetry you don't even have that the singular values agree with the eigenvalues, nor diagonalizability.
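A concrete instance of the last point: $A = \begin{pmatrix} 2 & 2 \\ 0 & 1\end{pmatrix}$ satisfies $x^TAx = x_1^2 + (x_1+x_2)^2 > 0$ for all $x \neq 0$, and its eigenvalues are $2$ and $1$, but its singular values are $\sqrt{(9\pm\sqrt{65})/2} \approx 2.92$ and $0.68$.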
Edit: here is why in the complex case, positive semidefinite implies hermitian. Actually, the proof implies that in the complex case $A$ is hermitian if and only if $x^*Ax\in\mathbb R$ for all $x$.
Assume $x^*Ax\in\mathbb R$ for all $x$. Then
$$
\mathbb R\ni(y+\alpha x)^*A(y+\alpha x)=y^*Ay+\overline\alpha\,x^*Ay+\alpha\,y^*Ax+|\alpha|^2\,x^*Ax.
$$
As this expression is real, it equals its complex conjugate
$$
y^*Ay+\alpha\,y^*A^*x+\overline\alpha\,x^*A^*y+|\alpha|^2\,x^*Ax.
$$
So
$$
\overline\alpha\,x^*Ay+\alpha\,y^*Ax=\alpha\,y^*A^*x+\overline\alpha\,x^*A^*y.
$$
Taking first $\alpha=1$ and then $\alpha=i$, we get
$$
x^*Ay+y^*Ax=y^*A^*x+x^*A^*y,
$$
$$
-i\,x^*Ay+i\,y^*Ax=i\,y^*A^*x-i\,x^*A^*y.
$$
Multiplying the first equation by $i$ and adding, we get
$$
2i\,y^*Ax=2i\,y^*A^*x.
$$
As this works for any $x,y$, we deduce that $A=A^*$. |
If $p>3$ and $p+2$ are twin primes then $6\mid p+1$ | The easy way.
Note that one of $p,p+1,p+2$ must be divisible by $3$, since they are three consecutive numbers, and since $p$ and $p+2$ are prime, that must be $p+1$. We can do the same to show that $p+1$ is divisible by $2$.
Looking modulo $6$.
We can look $\mod 6$. We see that
\begin{align}
6k+0\equiv 0\mod 6&\Rightarrow 6|6k+0\\
6k+1\equiv 1\mod 6&\Rightarrow \text{possibly prime}\\
6k+2\equiv 2\mod 6&\Rightarrow 2|6k+2\\
6k+3\equiv 3\mod 6&\Rightarrow 3|6k+3\\
6k+4\equiv 4\mod 6&\Rightarrow 2|6k+4\\
6k+5\equiv 5\mod 6&\Rightarrow \text{possibly prime}\\
\end{align}
So for a number to be prime it must be either of the form $6k+1$ or $6k-1$ (equivalent to $6k+5$ since $6k-1=6(k-1)+5$). So if you have two primes, $p$ and $p+2$, then $p=6k-1$ and $p+2=6k+1$ for some $k$; thus, $p+1=6k$ is a multiple of $6$. |
Interpolation inequality | By restricting the function to a line, i.e., considering the function, $t \mapsto u(a+tb)$ for some point $a\in \mathbb{R}^n$ and a unit vector $b\in\mathbb{R}^n$, you can reduce the problem to the case $n=1$. Now the problem is for a $\mathcal{C}^2$ function $f:\mathbb{R}\to\mathbb{R}$ to show $\|f'\| \le \epsilon \|f''\| + C\|f\|$. The idea is that if you have a point $x_0$ and a constant $M>0$ such that $f'(x_0)\ge M+1$ (or $-f'(x_0)\ge M+1$), and if you have a uniform bound $\|f''\|\le K$, then $f'\ge M$ (or $-f'\ge M$) whenever $|x-x_0| \le 1/K$, and then $|f(x_0+1/K)-f(x_0-1/K)| \ge 2M/K$, so $\| f\| \ge M/K$. This implies the desired inequality by juggling of constants. |
Airy differential equation and Galois group | This differential Galois group $G$ of the Airy equation is $SL_2$. Therefore it is connected, i.e. $G = G^0$.
It is computed for example in Examples 4.29 of [1]. See also example 6.21.
[1] Andy R. Magid. Lectures on differential Galois theory, volume 7 of University Lecture Series. American Mathematical Society, Providence, RI, 1994. |
How to do an Epsilon/N Argument for this sequence | We have $n<2^n$ and $4^n=(2^n)^2$, so the quotient $a_n$ satisfies $\displaystyle 0<a_n<\frac1{2^n}<\frac1n$. Now, given $\epsilon>0$ there is an $N$ with $1/N<\epsilon$, so any $n>N$ also satisfies $1/n<\epsilon$. |
How to directly show convergence in probability | $$\hat{\theta}_n - \theta = \begin{cases} 0 & W.P. \frac{n-1}{n} \\ nk-\theta & W.P. \frac{1}{n}\end{cases}$$
Hence, for every $\epsilon > 0$, $\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| \geq \epsilon) \leq \lim_{n \to \infty} \frac{1}{n} = 0$. |
Understanding how to evaluate $\lim_{x\to\frac\pi2} \frac{2^{-\cos x}-1}{x-\frac\pi2}$ | It suffices to write :
$$\frac{2^{\sin u}-1}{u} = \frac{2^{\sin u}-1}{\sin u}\frac{\sin u}{u}$$
and you know that
$$\lim_{u\to 0} \frac{2^{\sin u}-1}{\sin u} = \lim_{U\to 0} \frac{2^{U}-1}{U} = \ln(2)$$
and
$$\lim_{u\to 0} \frac{\sin u}{u}=1.$$
Another method: setting $f(x) = 2^{\sin x}=e^{\ln(2)\sin x}$, we see that $f'(x) = \ln(2)\cos x\,e^{\ln(2)\sin x}$, hence
$$\lim_{u\to 0} \frac{2^{\sin u}-1}{u} = \lim_{u\to 0}\frac{f(u)-f(0)}{u-0} = f'(0) = \ln(2).$$ |
Showing $\sum 1/a_i<2$: is my proof correct? | I am not sure about your proof, but I suggest a different approach (using your idea to compare with the sum of the inverses of the powers of $2$): try to prove a stronger result, that is $$\sum_{i=0}^n \frac{1}{a_i} \le \sum_{i=0}^n \frac{1}{2^i} = 2-\frac{1}{2^{n}}$$
under the same hypothesis.
Assume as you did that $a_0<a_1<a_2<\cdots <a_n$.
As you said, one can easily show that
$$S_j:=\sum_{i=0}^j a_i \ge \sum_{i=0}^j 2^i=2^{j+1}-1=: D_j$$
for all $0\le j\le n$.
Moreover, one has $a_i2^i<a_{i+1}2^{i+1}$ obviously.
Now, we will compute
$$\sum_{i=0}^n \left(\frac{1}{2^i}-\frac{1}{a_i}\right)= \sum_{i=0}^n \frac{1}{2^ia_i}\left(a_i-2^i\right)=\sum_{i=0}^n \frac{1}{2^ia_i}\left((S_i-S_{i-1})-(D_i-D_{i-1})\right)=$$
$$= \sum_{i=0}^{n-1} \left(\frac{1}{2^ia^i}-\frac{1}{2^{i+1}a_{i+1}}\right)\left(S_i-D_i\right)+\frac{1}{2^na_n}\left(S_n-D_n\right) \ge 0$$
as $S_i-D_i\ge 0$ for all $i\ge 0$ as it is $$\left(\frac{1}{2^ia^i}-\frac{1}{2^{i+1}a_{i+1}}\right)> 0$$ |
Number of compositions of $n$ into $k$ parts with each part at most $1$ | The complement of "all parts at most $1$" is "at least one part of at least $2$", not "all parts at least $2$". You need $n$ ones and $k-n$ zeros, so you are choosing the locations of the ones in the list. You can do that in $k\choose n$ ways. |
Span of two vectors is the same as the Span of the linear combination of those two vectors. | Well, the RHS is quite trivially a subset of the LHS. Conversely, you can recover both $x$ and $y$ very easily from the vectors $x+y$ and $x-y$. Namely, $1/2\cdot(x+y)+1/2\cdot(x-y)=x$. Then once you have $x$, you can subtract it from $x+y$ to get $y$. Thus we have the reverse inclusion.
Hence they are equal. |
Abstract Algebra: Ring Homomorphisms | The assertion does not appear to be true. Let $m=n=2$. Then $m/d=n/d=1$. But by the definition of $\tau$, we have $\tau(1)=([1]_m,[1]_n)$.
Edit: For a somewhat less trivial example, let $m=6$ and $n=10$. Then $m/d=3$ and $n/d=5$. Note that $\tau(15)=([3]_m,[5]_n)$. |
$\sum_{\alpha<\omega_2}|\alpha|^{\aleph_0}=\aleph_2\cdot\aleph_1^{\aleph_0}$ | The sum has $\aleph_2$ terms, each of which is at most $\aleph_1^{\aleph_0}$, so it’s bounded above by $\aleph_2\cdot\aleph_1^{\aleph_0}$. On the other hand, it’s clearly at least $\aleph_2$ and at least $\aleph_1^{\aleph_0}$, so it’s bounded below by their maximum, which is simply their product. |
A zero-dimension set and self-referencial equation | I will use throughout the notation given in the problem; in particular, the functions $f_i$ are those given in the hypothesis.
To prove the theorem we will find a clopen basis. For simplicity define
$$\bigcirc_{k=1}^sf_k =f_s\circ f_{s-1}\circ\cdots\circ f_{1}$$
That is, a successive composition of functions. Now take a sequence $\{i_k\}_{k=1}^\infty$ all of whose members are integers between $1$ and $n$ ($1\leq i_k\leq n$ for every $k$).
The letters $r_1,..., r_n$ will denote the contraction constants (that is, the real numbers such that $|f_i(x)-f_i(y)|=r_i|x-y|$ for every $x,y$ in the domain).
Since $K$ is compact, $A=\text{diam } K$ is finite.
First claim: Each $f_{i}[K]$ is a clopen set.
proof: Each $f_{i}[K]$ is compact, since $K$ is compact, and hence closed, being the continuous image of a compact set in a Euclidean space. By the condition $f_{i}[K]\cap f_j[K]=\emptyset$, the distance between any two of the sets $f_i[K]$ is positive. The complement of $f_i[K]$ in $K$ is
$$\bigcup_{j\not= i}f_j[K] $$
and this set is closed, being a finite union of closed sets; hence $f_i[K]$ is open in $K$, therefore clopen.
Second claim: For every sequence $\{i_k\}_{k=1}^\infty$ as above, the following is true:
$$\lim_{s\rightarrow \infty}\text{diam }\left[\bigcirc_{k=1}^sf_{i_k}[K]\right]=0 $$
proof: Define $g_s=\bigcirc_{k=1}^sf_{i_k}$. For $x,y\in K$, the equality $|f_{i_k}(x)-f_{i_k}(y)|=r_{i_k}|x-y|\leq r_{i_k}A $ is the base step of a proof by induction of
$$|g_{s}(x)-g_{s}(y)|=r_{i_s}r_{i_{s-1}}\cdots r_{i_1}|x-y|\leq r_{i_s}r_{i_{s-1}}\cdots r_{i_1} A, $$ which implies
$$\text{diam }g_s[K]\leq r_{i_s}r_{i_{s-1}}\cdots r_{i_1} A, $$
and the product $r_{i_s}r_{i_{s-1}}\cdots r_{i_1} A$ is bounded by $AR^s$, where $R=\max\{r_1,...,r_n\}<1$, so $AR^s\rightarrow 0$ as $s\rightarrow\infty$. This proves the second claim.
Third claim: Taking indices $i_1,..., i_m$, each between $1$ and $n$, we have that
$$K=\bigcup_{i_m,...,i_1=1}^n \bigcirc_{k=1}^mf_{i_k}[K] $$
and that $f_{i_m}\circ\cdots\circ f_{i_1}[K]\cap f_{i_m'}\circ\cdots\circ f_{i_1'}[K]=\emptyset$ whenever the index tuples differ. This implies that these sets are clopen.
As an illustration I will show the case $m=2$. We have that
$$K=\bigcup_{i=1}^nf_i[K]$$
Then $f_j[K]=\bigcup_{i=1}^nf_jf_i[K]$, consequently
$$K=\bigcup_{j=1}^n\bigcup_{i=1}^nf_jf_i[K]$$
And, since $f_i[K]\subset K$, we have $f_jf_i[K]\subset f_j[K]$, so $f_jf_i[K]\cap f_{j'}f_{i'}[K]=\emptyset$ if $j\not=j'$. It is clear that the sets $f_jf_i[K]$ are clopen (the reason is analogous to the proof of the first claim).
These three claims prove the theorem: given any neighborhood of a point in $K$, I can take a sufficiently small (by the second claim) set of the form $f_{i_k}\circ\cdots\circ f_{i_1}[K]$ containing the point, and this set is clopen (by the first and third claims).
Then the family of sets $f_{i_k}\circ\cdots\circ f_{i_1}[K]$ forms a clopen basis, so $K$ is zero-dimensional. |
Need help showing this map is well-defined. | You need to show that $(a,b)\equiv(a',b')$ implies $ab\bmod d=a'b'\bmod d$.
Here, $(a,b)\equiv(a',b')$ means that $a\equiv a'\pmod m$ and $b\equiv b'\pmod n$. |
Lower bound of generating a biased coin? | This answer concerns the maximal (rather than expected) number of tosses, which is not what is asked for.
After flipping a fair coin $n$ times, you have $2^n$ equally likely outcomes. Every event defined in terms of these outcomes has probability $k/2^n$ for some $k\in\{0,\dots,2^n\}$. And conversely, for every $k$ there is such an event. Conclusion:
The required number of flips is $ \inf\{n: 2^np\in\mathbb Z\}$,
which is infinite when $p$ is not a dyadic rational; meaning that for such $p$ you can't simulate the biased coin with any fixed number of flips.
Example: if $p=0.375 = 3/8$, then $2^3 p = 3 \in \mathbb Z$, so you need $3$ flips.
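A minimal C sketch of the $p = 0.375$ example: read the three fair flips as the bits of a uniform number in $\{0,\dots,7\}$ and call it "heads" on exactly $3$ of the $8$ outcomes.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    srand(1);
    long trials = 1000000, heads = 0;
    for (long t = 0; t < trials; t++) {
        int u = 0;
        for (int i = 0; i < 3; i++)
            u = 2 * u + (rand() % 2);   /* one fair flip per bit */
        if (u < 3) heads++;             /* 3 of 8 outcomes count as heads */
    }
    printf("empirical p ~ %f (target 0.375)\n", (double)heads / trials);
    return 0;
}

The printed frequency is close to $0.375$. |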
How do I know when I can do something to both sides in an inequality? | You can use intuition to know whether an inequality manipulation is valid. For example, it is easy to see that adding $2$ to both sides of $a \gt b$ makes $a+2 \gt b+2$ true. However, if you multiply both sides by $-1$, $-a \gt -b$ is FALSE. This is because negative numbers with big magnitudes, like $-7$, are smaller than negative numbers with smaller magnitudes, like $-2$; thus if you negate both sides of an inequality, the side with the bigger magnitude becomes the smaller one.
However, you can also use rigorous math to see when an inequality manipulation is valid. If a function $f(x)$ is increasing for all $x$, then $f(a) > f(b)$ if $a \gt b$, which you can see intuitively. So, if we want to see if adding $2$ to both sides of an inequality preserves the inequality, we can let $f(x) = x+2$. From the graph of this, we can tell $f(x) = x+2$ is increasing, thus if $a \gt b$, then $f(a) = a+2 \gt f(b) = b+2$ or $a+2 \gt b+2$. However, a function like $f(x) = -x$ is decreasing for all $x$, which you can tell from the graph, thus $f(a) < f(b)$ if $a \gt b$. Thus, if we have $a \gt b$, we have $f(a) = -a < f(b) = -b$ or $-a \lt -b$.
See if you can apply similar logic on inequality manipulations like taking the reciprocal of both sides, or the absolute value. |
finite difference equations | Here is what I understood of your question, which is quite confusing.
A simple reference for formulas for finite differences, and their coefficients, is found on Wikipedia, for backward, forward and central differences.
Regarding being explicit or implicit:
You didn't say where you want to apply those finite difference discretizations. Depending on whether you choose backward/forward/central for each term of your equation, you get an implicit or explicit scheme.
If you specify the equation it will be easier; for example, for the heat equation:
$$ U_t = \alpha U_{xx} $$
Using a forward difference in time and a second-order central difference for the space derivative at position $x_j$ (FTCS), we get the explicit scheme:
$$ \frac{u_j^{t+1}-u_j^t}{\Delta t} = \alpha \frac{u_{j+1}^t - 2u_j^t + u_{j-1}^t}{h^2} $$
where $\Delta t$ is the time step and $h$ the space step.
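A minimal C sketch of this FTCS scheme (the grid size, boundary conditions and initial data are arbitrary illustrative choices; stability requires $\alpha\,\Delta t/h^2 \le 1/2$):

#include <stdio.h>

#define NX 51

int main(void) {
    double u[NX], unew[NX];
    double alpha = 1.0, h = 1.0 / (NX - 1), dt = 0.4 * h * h / alpha;
    for (int j = 0; j < NX; j++)             /* initial square bump */
        u[j] = (j > NX / 4 && j < 3 * NX / 4) ? 1.0 : 0.0;
    for (int t = 0; t < 500; t++) {
        for (int j = 1; j < NX - 1; j++)     /* FTCS update */
            unew[j] = u[j] + alpha * dt / (h * h) * (u[j+1] - 2.0 * u[j] + u[j-1]);
        unew[0] = unew[NX - 1] = 0.0;        /* Dirichlet boundaries */
        for (int j = 0; j < NX; j++) u[j] = unew[j];
    }
    printf("u at the center after 500 steps: %f\n", u[NX / 2]);
    return 0;
}

Taking $\Delta t$ larger than $h^2/(2\alpha)$ makes the iteration blow up, the usual price of an explicit scheme. |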
a question for arbitrary union of compact sets | Any infinite space in the cofinite topology has the property that all of its subsets are compact and so the union of compact subsets is automatically compact too.
Note that this space is just $T_1$, if $X$ were Hausdorff (or even just KC) then “any union of compact subsets is compact” implies that $X$ is finite and discrete. |
How to solve problems such as: find $A(x)$, the accumulation function of $f(x)=x^2$ such that $A(x)= \frac{x^3}3$? | We require $A(x) = x^3/3$ and we compute
$$ A(x) = \int_a^x t^2 \,\mathrm{d}t = \left. \frac{t^3}{3} \right|_{t=a}^x = \frac{x^3}{3} - \frac{a^3}{3} \text{.} $$
Equality of these two expressions for $A(x)$ requires $-a^3/3 = 0$, which forces $a = 0$.
The general technique, demonstrated above, is to compute the definite integral, then solve the resulting equality for the value or values of $a$ that satisfy it.
Warning: The set of accumulation functions is not the full set of antiderivatives and it can be confusing to think otherwise at this time.
An example of this deficiency of accumulation functions is: find $b$ such that $B(x)$, the accumulation function of $2x$ is $B(x) = x^2 + 10$. We proceed by the generic technique:
$$ \int_b^x 2t \,\mathrm{d}t = \left. \frac{2t^2}{2} \right|_{b}^x = x^2 - b^2 \text{.} $$
This gives the equation $10 = -b^2$, which has no solution (because the left-hand side is positive and the right-hand side is nonpositive for all choices of $b$). However, $\frac{\mathrm{d}}{\mathrm{d}x} \left( x^2+10 \right) = 2x$, so this $B$ is an example of an antiderivative of $2x$ which is not an accumulation of $2x$.
Also, beware this notion of "basic": any identity that happens to contain a constant offset makes "basic"-ness unclear. Which is more "basic": $\sec^2 \theta$ or $\tan^2 \theta + 1$? The two expressions are equal for all $\theta$ and the "${}+1$" is not optional. |
Adding the reverse of digits | This is a simple recursive algorithm (in C; it needs math.h for pow and log10):
#include <math.h>

// Precondition: n >= 0
// Postcondition: returns n with its digits reversed
int reverse(int n)
{
    if (n < 10) {
        return n;
    } else {
        int last_digit = n % 10;
        // highest power of 10 not exceeding n, e.g. 100000 for n = 123456
        int ten_power = (int)pow(10, (int)log10(n));
        return last_digit * ten_power + reverse(n/10);
    }
}
So, reverse(123456) returns $654321$.
If $n \geq 0$, the formula may be written as
$$\mathrm{rev}(n) = \begin{cases}
n,&\text{if }n < 10\\
k\cdot 10^{\lfloor\log_{10}(n)\rfloor} + \textrm{rev}(\lfloor n/10\rfloor),&\text{otherwise}
\end{cases}$$
where $k$ is the last digit of $n$. That is, $k$ is the remainder of $n/10$, or equivalently, $$k = n - 10 \lfloor \frac{n}{10} \rfloor$$ |
Find the meaning of this proportionality | The first concept is correct: as $v$ increases, the RHS decreases and thus the LHS must decrease as well.
However, how that decrease happens is not quite what you describe. Either $A$ must decrease or $B$ must increase. Either of those is sufficient. |
What is the Probability that coin is tossed three times | It is worth going through the effort of calculating the probability via the definition of conditional probability in early examples.
$$Pr(A\mid B):=\frac{Pr(A\cap B)}{Pr(B)}$$
Let $B$ be the event that the first coin flipped is not a head (i.e. the first coin flipped turned up tails).
Let $A$ be the event that the coin is flipped exactly three times.
We are tasked with calculating $Pr(A\mid B)$, the probability that the coin is flipped exactly three times given that the first flip did not turn up heads.
We can draw ourselves a tree diagram or however else we like to arrive at the following table of outcomes and respective probabilities:
$$\begin{array}{|c|c|}\hline\text{Outcome}&\text{Probability}\\\hline H&\frac{1}{2}\\\hline TH&\frac{1}{4}\\\hline TTH&\frac{1}{8}\\\hline TTT&\frac{1}{8}\\\hline\end{array}$$
It is worth taking a moment to check that this does in fact make sense as a probability distribution by verifying that the probabilities add up to exactly one. Indeed $\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}$ does equal $1$.
The event that the first flip is not heads corresponds to all of the above listed outcomes except the first and so occurs with probability $\frac{1}{4}+\frac{1}{8}+\frac{1}{8}=\frac{1}{2}$ so we learn that $Pr(B)=\frac{1}{2}$.
The event that the first flip is not heads and it takes three flips in total corresponds to the last two outcomes in the above table and so occurs with probability $\frac{1}{8}+\frac{1}{8}=\frac{1}{4}$ so we learn that $Pr(A\cap B)=\frac{1}{4}$.
Putting this information together, we get:
$$Pr(A\mid B)=\frac{Pr(A\cap B)}{Pr(B)}=\frac{1/4}{1/2}=\frac{1}{2}$$ |
Fatou's Lemma in nonpositive function | This is correct. Using what you said about the limsups,
$$\int \limsup f_n = \int - \liminf (-f_n)$$
As $-f_n$ is nonnegative, applying Fatou one gets $\int \liminf (-f_n) \leqslant \liminf \left(-\int f_n\right)$, so
$$\int \limsup f_n \geqslant - \liminf (- \int f_n) = \limsup \int f_n $$ |
Generating sets for topological groups | The following proof at least works in the case of compact Lie groups and will transfer over to topological groups iff the natural projection $\pi:G\rightarrow G/N$ is a quotient map in the sense that a subset of $G/N$ is open iff its preimage in $G$ is. I've only ever studied compact Lie groups, so I simply don't know if this is true of compact topological groups or not.
Let $U$ be an open subset of $G$. We wish to find a point in the subgroup generated by $X$ and $Y$ in $U$.
We know $\pi(U)$ is open in $G/N$ so that there is some element $y\in Y$ with $\pi(y)\in \pi(U)$. So there is a $u\in U$ with $\pi(u) = \pi(y)$. But then $uy^{-1} \in N$. So $uy^{-1} = n$ for some $n\in N$.
Now, right multiplication by $y^{-1}$, thought of as a map from $G$ to itself is a homeomorphism. Hence, if $V$ is an open set around $u$ entirely contained in $U$, then $Vy^{-1}$ is an open set around $n$. Since $X$ is dense in $N$, there is some $x\in X\cap(N\cap Vy^{-1})$.
Hence, $vy^{-1} = x$ for some $v\in V\subseteq U$. But $v = xy$, so we've found an element in the subgroup generated by $X$ and $Y$ lying in $U$.
Edit: According to Wikipedia, the coset space $G/N$ is always given the quotient topology and the natural projection is automatically open, so apparently my proof works for all compact topological groups. |
If $y = \sin^{-1} xy$, how do I verify that $xy' + y = y' \sqrt{1-x^{2}y^{2}}$ | The question is merely about finishing the derivative evaluation.
The derivative of $\sin^{-1}u$ with respect to $x$ is $\dfrac{u'}{\sqrt{1-u^2}}$.
After that, apply the chain rule and product rule with $u = xy$ while differentiating:
$$y'= \dfrac {y+ x y'}{\sqrt {1-x^2y^2}},$$ and multiplying both sides by $\sqrt{1-x^2y^2}$ gives $xy' + y = y'\sqrt{1-x^2y^2}$, as required. |
If functions agree at all but finitely many points then the integrals are the same | Your idea of considering the difference is a good one, since your problem is then seen to be equivalent to proving that a function which is $0$ except at finitely many points is integrable and must have integral equal to $0$ (try to understand why if this isn't clear)
In order to prove this statement, taking partitions with small diameter will show that you can let your upper and lower sums be as near zero as you want (they will be bounded by $\pm N \cdot\mathrm{diam}(P)^n\cdot\max |f|$, where $N$ is the number of points where the function is not zero).
Now, to see why $g$ is integrable, notice that $g=(g-f)+f$.
Therefore, $\int g=\int(g-f)+\int f=\int f$.
You mention in the question that the book does not say that sum of integrable functions is integrable before this point. If you are uncomfortable with using this fact, try to adapt the proof above to avoid using it. The core idea is in the second paragraph of this answer. |
Will it be correct to intregrate when answer comes in via square roots since they can be positive or negative? | $$
\int_0^2x(x+2)^{1/2}dx
$$
If $t^2 = x+2$ then $2t\,dt = dx$ and $t^2 - 2 = x$.
As $x$ goes from $0$ to $2$, then $t^2$ goes from $2$ to $4$, so $t$ goes from${}\,\ldots\,{}$ what, to what?
We could let $t$ go from $\sqrt 2$ to $2$. Then we have
$$
\int_{\sqrt 2}^2 (t^2-2) t\,(2t\,dt) = 2 \int_{\sqrt 2}^2 (t^4 - 2t^2)\,dt = 2\left[ \frac{t^5} 5 - \frac{2t^3} 3 \right]_{\sqrt 2}^2 = \frac {32 + 16\sqrt 2} {15}.
$$
Or we could let $t$ go from $-\sqrt 2$ to $-2$. Then we have
$$
\int_{-\sqrt 2}^{-2} (t^2-2)(-t)(2t\,dt) = -2\int_{-\sqrt 2}^{-2} (t^4-2t^2)\,dt = - 2 \left[ \frac{t^5} 5 - \frac{2t^3} 3 \right]_{-\sqrt 2}^{-2} = \frac{32 + 16\sqrt 2}{15}.
$$
The answer is the same either way.
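A quick numerical check of this value, as a minimal C sketch:

#include <stdio.h>
#include <math.h>

/* midpoint rule for int_0^2 x*sqrt(x+2) dx vs (32 + 16*sqrt(2))/15 */
int main(void) {
    int n = 1000000;
    double h = 2.0 / n, sum = 0.0;
    for (int i = 0; i < n; i++) {
        double x = (i + 0.5) * h;
        sum += x * sqrt(x + 2.0) * h;
    }
    printf("numeric = %.6f, closed form = %.6f\n",
           sum, (32.0 + 16.0 * sqrt(2.0)) / 15.0);
    return 0;
}

Both print $\approx 3.641828$. |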
Quasilinear PDE $\left\{\left(x+y\right)\frac{\partial }{\partial x}+\frac{\partial }{\partial z}\:\right\}u\left(x,y,z\right)=0$ | Well you can try and verify your answer relatively easily. Let $\Phi = \Phi(u,v)$:
$$\partial_x u = \frac{1}{z}\partial_v\Phi(y,\frac{x+y}{z})$$
$$\partial_z u = -\frac{x+y}{z^2}\partial_v\Phi(y,\frac{x+y}{z})$$
But in general
$$(x+y)\partial_x u + \partial_z u = \bigg[\frac{x+y}{z} - \frac{x+y}{z^2}\bigg]\partial_v\Phi(y,\frac{x+y}{z}) \neq 0$$
So your solution is not correct. Where you went wrong is integrating
$$\frac{dx}{x+y} = dz \implies \ln(x+y) = z + C_2$$
so
$$\frac{x+y}{e^z} = \tilde{C}_2$$
The general solution is then
$$u(x,y,z) = \Phi(y, \frac{x+y}{e^z})$$
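As a check, the same verification as above now succeeds: with $v = \frac{x+y}{e^z}$,
$$\partial_x u = e^{-z}\partial_v\Phi, \qquad \partial_z u = -(x+y)e^{-z}\partial_v\Phi,$$
so $(x+y)\partial_x u + \partial_z u = \big[(x+y)e^{-z} - (x+y)e^{-z}\big]\partial_v\Phi = 0$, as required. |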
Prove $-f'(x)$ using difference quotient? | Let $g(x) = f(-x)$,
so
$g'(x) = -f'(-x)$
or
$f'(x) = -g'(-x)$.
Then
$\frac{f(c-t)-f(c)}{t}
=\frac{g(t-c)-g(-c)}{t}
\to g'(-c)
=-f'(c)
$. |
Solving $ f(\log x)$ | The solution $y > e$ of the equation $y/\log(y) = z$ is $y = - z\; \rm{LambertW}(-1,-1/z)$ (in Maple's terminology) for $z > e$. |
An idea to compute the following integral $ \int_0^1 s^a (1-s)^b e^{-c/s} \, ds$ | $\int_0^1s^a(1-s)^be^{-\frac{c}{s}}~ds$
$=\int_\infty^1\left(\dfrac{1}{t}\right)^a\left(1-\dfrac{1}{t}\right)^be^{-ct}~d\left(\dfrac{1}{t}\right)$
$=\int_1^\infty t^{-a-b-2}(t-1)^be^{-ct}~dt$
$=\int_0^\infty(t+1)^{-a-b-2}t^be^{-c(t+1)}~d(t+1)$
$=e^{-c}\int_0^\infty t^b(t+1)^{-a-b-2}e^{-ct}~dt$
$=e^{-c}~\Gamma(b+1)U(b+1,-a,c)$ (according to https://en.wikipedia.org/wiki/Confluent_hypergeometric_function#Integral_representations) |
Laplacian Boundary Value Problem | As pointed out in comments by Shucao Cao and Ray Yang, Gilbarg-Trudinger is the place to look up such estimates: it does not have 3704 MathSciNet citations for nothing (as of today). In addition to global Hölder and Sobolev estimates (which require appropriate smoothness of $\partial M$ and $f$) there is a simple interior estimate for the Laplace equation, which applies in great generality. I quote it below.
Theorem 2.10. Let $u$ be harmonic in $\Omega\subseteq \mathbb R^n$ and let $\Omega'$ be a compact subset of $\Omega$. Then for any multi-index $\alpha$ we have
$$\sup_{\Omega'} |D^\alpha u|\le \left(\frac{n|\alpha|}{d}\right)^{|\alpha|}\sup_\Omega |u|$$ where $d=\operatorname{dist}(\Omega',\partial\Omega)$.
By the maximum principle, you can replace $\sup_\Omega |u|$ with $\sup_{\partial \Omega}|f|$ where $f$ is the boundary data. |
Show convergence of a given series and find the limit. | Hint: Use the integral test to verify that the series converges. Take $f(x)=\frac{1}{x(x+2)}$; $f(x)$ is positive and monotonically decreasing on $[1,\infty)$.
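For the limit itself (assuming the sum runs over $n \ge 1$), partial fractions make the series telescope:
$$\sum_{n=1}^\infty \frac{1}{n(n+2)} = \frac{1}{2}\sum_{n=1}^\infty \left(\frac{1}{n} - \frac{1}{n+2}\right) = \frac{1}{2}\left(1 + \frac{1}{2}\right) = \frac{3}{4}.$$ |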
How to prove this limit's equality? | This looks obvious but it does need a sound proof. Before giving a formal $\epsilon, \delta$ type argument it is always better to have an informal argument and one also needs to understand that rigor is different from formalism.
So let us assume that $f(x) \to L$ as $x \to 0^{+}$. Now as $x \to 0^{+}$ we can see that $y = 1/x \to \infty$. And rewording it we see that as $y \to \infty$ we have $x = 1/y \to 0^{+}$ and hence $f(1/y) \to L$ as $y \to \infty$. Since the name of the variable is immaterial when calculating limits we can as well write $f(1/x) \to L$ as $x \to \infty$.
This can be easily translated into formal $\epsilon, \delta$ type argument. The statement $f(x) \to L$ as $x \to 0^{+}$ implies that for any $\epsilon > 0$ we have a number $\delta > 0$ such that $|f(x) - L| < \epsilon$ whenever $0 < x < \delta$.
We need to show that $f(1/x) \to L$ as $x \to \infty$. This would require us to show that corresponding to any $\epsilon > 0$ we must be able to find a number $N > 0$ such that $|f(1/x) - L| < \epsilon$ whenever $x > N$. To find $N$ we first choose a $\delta > 0$ corresponding to $\epsilon$ such that $|f(x) - L| < \epsilon$ whenever $0 < x < \delta$. Let $N = 1/\delta$ so that $N > 0$. Now consider a new variable $y$ such that $y > N$. This means that $0 < 1/y < \delta$ and hence $|f(1/y) - L| < \epsilon$. This shows that $f(1/y) \to L$ as $y \to \infty$ or changing the variable name $f(1/x) \to L$ as $x \to \infty$. |
What is the difference between nested set-builders and set builders with combined predicates? As they express sets of sets. | Example:
Let $\mathbb X=\{1,2\}$ and $\mathbb Y=\{3,4\}$, and $f(x,y)=xy$.
$\{\{f(x,y)\}:x\in\mathbb X,y\in\mathbb Y\}=\{\{3\},\{4\},\{6\},\{8\}\}$
$\{\{f(x,y):y\in\mathbb Y\}:x\in\mathbb X\}=\{\{3,4\},\{6,8\}\}$
The first is a set of sets containing a single element. The latter is a set of sets, each containing multiple elements. |
Proof that $\text{Diff}(M)$ is a topological group | Ok, I think I worked this out. The key insight, oddly enough, was that $\operatorname{Diff}(M)$ is first countable. That helps immensely, because now instead of using the "preimages of open sets are open" definition of continuity, I can use convergent sequences. This simplified things enough for me to understand what was going on.
[To see it is first countable, it's pretty easy to cover $M$ with a countable basis of precompact charts $\{U_i\}$, and then to check that the open sets
$$ N_{1/m}^r(f; \overline{U_i},U_j,U_k) $$
form a countable subbasis around $f$, where all the subscripts and the superscript are positive integers.]
Now if $f_n\xrightarrow{\operatorname{Diff}}f$, then in terms of the topology described in the question, this means that for any tuple $(\epsilon, r, K, U, V)$, there is a corresponding $N$ such that for $n\ge N$,
1. $f_n(K)\subset V$
2. $\lVert f_n^{(i)}-f^{(i)}\rVert_K<\epsilon$ for $0\le i\le r$
Since the compact-open topology is coarser than this topology, convergence in the latter implies convergence in the former. So we can get rid of (1) and instead say that $f_n\xrightarrow{\operatorname{Diff}}f$ is equivalent to
3. $f_n\xrightarrow{M}f$
4. $f_n^{(i)}\xrightarrow{K}f^{(i)}$ for all $i$ and all valid $K$
[I'm writing $a\xrightarrow{L} b$ to mean uniform convergence on the compact set $L$. I'm writing $a\xrightarrow{\operatorname{Diff}}b$ to mean convergence in the topology on $\operatorname{Diff}(M)$.]
Finally, since we already know inversion and composition are continuous in the compact-open topology, we can focus on (4).
Now for inversion, suppose $f_n\xrightarrow{\operatorname{Diff}}f$. Then using the fact that $f^{-1}\circ f(x)=x$ and applying the chain rule repeatedly, we see that we can write
$$ f^{-(r)}\circ f(x)=c_r(f^{(1)}(x), f^{(2)}(x), \ldots, f^{(r)}(x))$$
where $c_r:\mathbb{R}^+\times\mathbb{R}^{r-1}\rightarrow\mathbb{R}$ is continuous. By choosing $n$ large enough, we can restrict the domain of $c_r$ [essentially by (1) and/or (3) above], and we can then assume $c_r$ is uniformly continuous. Then (4) plus the above equation implies
$$ f_n^{-(r)}\circ f_n\xrightarrow{K}f^{-(r)}\circ f$$
Finally, (3) implies we also have
$$ f_n^{-(r)}\circ f_n\xrightarrow{K}f_n^{-(r)}\circ f$$
and the fact that $f$ is a diffeomorphism shows we really have
$$ f_n^{-(r)}\xrightarrow{K}f^{-(r)}$$
which is enough for (4), and thus inversion is continuous.
For composition, again the chain rule applied to $g\circ f$ gives an equation like
$$ (g\circ f)^{(r)}(x) = d_r(g^{(1)}\circ f(x), \ldots, g^{(r)}\circ f(x), f^{(1)}(x), \ldots, f^{(r)}(x))$$
where we can assume $d_r$ is uniformly continuous.
Then (4) applied to $g$ shows we have
$$ (g_n\circ f)^{(r)}\xrightarrow{K}(g\circ f)^{(r)} $$
and (4) applied to $f$ then gives
$$ (g_n\circ f_n)^{(r)}\xrightarrow{K}(g\circ f)^{(r)} $$
The real insight here is that using the "convergent sequence" definition of continuity really simplifies the notation and presentation, and then really it all falls on the chain rule. |
$\aleph_1 = 2^{\aleph_0}$ | Your friend was incorrect when he defined $\aleph_1$ to be $2^{\aleph_0}$. It is possible that $\aleph_1=2^{\aleph_0}$ (this statement is called the "continuum hypothesis"), but it is also possible that $\aleph_1<2^{\aleph_0}$. Indeed, it is known that the usual axioms of set theory (ZFC) cannot decide this question: it is consistent with ZFC that CH holds, and consistent with ZFC that CH fails. In fact, essentially the only thing that ZFC can prove about $2^{\aleph_0}$ is that it has uncountable cofinality; for instance, it is consistent with ZFC that $2^{\aleph_0}=\aleph_{\omega^2\cdot 3+17}$, which is pretty wild!
So what are the right definitions?
$2^{\aleph_0}$ is defined to be the cardinality of the set of functions from $\mathbb{N}$ (or, any set of size $\aleph_0$) to $\{0,1\}$ (or, any set of size $2$). It turns out that this is also the cardinality of $\mathbb{R}$; this isn't hard to show, but also isn't trivial (an important issue is the fact that a real number can have multiple decimal expansions). By Cantor's diagonal argument, $2^{\aleph_0}$ is uncountable.
Meanwhile, $\aleph_1$ is defined to be the cardinality of the smallest uncountable ordinal; equivalently, it is the number of distinct ordertypes of countable ordinals. By an argument similar to that of the Burali-Forti paradox, $\aleph_1$ is uncountable, and (assuming a very mild version of the axiom of choice) $\aleph_1$ is the least uncountable cardinality.
Note that these two objects have very different definitions, so it shouldn't be surprising that they are not perfectly related. They are often conflated, possibly due to the fact that $2^{\aleph_0}$ is often assumed to be the next "nicely definable" cardinality after $\aleph_0$ (which is an interesting claim which is studied extensively in descriptive set theory); however, this conflation is indeed incorrect unless the author explicitly assumes CH (or some additional axiom implying CH).
Incidentally, it is quite easy to show that $\aleph_0^k=\aleph_0$ for finite $k$, so your friend was correct there. |
If an infinite well-ordered set has initial segments of finite cardinality only, is the set isomorphic to $\mathbb N$? | Yes. This is true. And you should build an explicit isomorphism, because there is really just one isomorphism. Let me give you a hint about that.
HINT: Every initial segment has a unique cardinality. |
$A$ is diagonalizable $\iff \phi$ is diagonalizable | If we start with 2), set $C$ to be the matrix whose columns are the $\alpha_i$. If we start with 1) let $\alpha_i$ be the columns of $C$ (and let $D$ denote the diagonal matrix of eigenvalues $\lambda_i$). In either direction, note that
$$
C^{-1}AC = D \iff\\
AC = CD \iff \\
A \pmatrix{\alpha_1 & \alpha_2 & \cdots & \alpha_n} =
\pmatrix{\alpha_1 & \alpha_2 & \cdots & \alpha_n} \pmatrix{\lambda_1\\&\lambda_2\\&&\ddots\\&&&\lambda_n} \iff\\
\pmatrix{A\alpha_1 & A{\alpha_2} & \cdots & A \alpha_n} = \pmatrix{\lambda_1 \alpha_1 & \lambda_2 \alpha_2 & \cdots & \lambda_n \alpha_n} \iff\\
A \alpha_i = \lambda_i \alpha_i \qquad i = 1,2,\dots,n
$$ |
A class of finite groups with no element of order $\geq 8$ | This is, I think, a realization, as a permutation group, of Levent's construction in the comments:
Consider the subgroup of the symmetric group $S_{3n}$ generated by the 3-cycles $(123),(456),(789),\dots$ and the involution $b=(12)(45)(78)\cdots$. This will have $3^n$ elements of order 2 (all looking a lot like $b$), $3^n-1$ elements of order 3 (products of the 3-cycles), and the identity. |
Uniform convergence of sequence of analytic functions | The best way to prove this is to use the fact that analytic functions have the Mean Value Property: $m(B)|f_n(z)-f_m(z)|=|\int_B (f_n-f_m)| \leq \int_B |f_n-f_m| \leq \int_{\Omega} |f_n-f_m| $ where $B$ is any open ball around $z$ contained in $\Omega$ and $m$ is Lebesgue measure. This inequality proves that $f_n$ converges uniformly on any open ball whose closure is contained in $\Omega$. Hence $(f_n)$ is a normal family, $f$ is analytic and $f_n \to f$ uniformly on compact subsets of $\Omega$. |
Using the sequential Criterion, give a proof that $\lim\limits_{x\to 0} f(x)$ does not exist, where: $f(x) = -1$, $x \leq 0$ or $x$, $x>0$ | Hint: Define a sequence $(x_n)$ as follows: $x_n= \frac{1}{2n}$ for $n$ even, $x_n = -\frac{1}{2n}$ for $n$ odd.
Added: Here $x_n \to 0.$ On the other hand $f(x_n)$ is not convergent. For $n$ even we have $f(x_n)=\frac{1}{2n} \to 0,$ where as for all $n$ odd $f(x_n) = -1.$ |
Evaluation of integral involving $ \tanh(ax) $ | A way forward, albeit not very pretty, is to express the $\tanh$ term as exponentials:
$$\begin{align} \tanh{a r} &= \frac{1-e^{-2 a r}}{1+e^{-2 a r}}\\ &= (1-e^{-2 a r}) \sum_{k=0}^{\infty} (-1)^k e^{-2 k a r} \\ &= \sum_{k=0}^{\infty} (-1)^k (e^{-2 k a r}-e^{-2 (k+1)a r}) \end{align}$$
so that the integral becomes
$$\begin{align}\int_{0}^{\sqrt{x}}dr \:\frac{r}{\sqrt{x-r^{2}}} \sum_{k=0}^{\infty} (-1)^k (e^{-2 k a r}-e^{-2 (k+1)a r}) &= \sum_{k=0}^{\infty} (-1)^k \int_{0}^{\sqrt{x}}dr \:\frac{r}{\sqrt{x-r^{2}}} (e^{-2 k a r}-e^{-2 (k+1)a r}) \\ \end{align}$$
It turns out that the integrals may be evaluated in terms of Bessel and Struve functions:
$$\int_{0}^{\sqrt{x}}dr \:\frac{r}{\sqrt{x-r^{2}}} (e^{-2 k a r}-e^{-2 (k+1)a r}) = \frac{1}{2} \pi \sqrt{x} (\pmb{L}_{-1}(2 a \sqrt{x} k)-\pmb{L}_{-1}(2 a \sqrt{x} (k+1))-I_1(2 a \sqrt{x} k)+I_1(2 a \sqrt{x} (k+1)))$$
This is about as far as I am able to go for now. I cannot determine if the series is summable in closed form; Mathematica is not able to do it, but that does not mean it cannot be done. Also I do not have a feel for the rate of convergence of the series, so I am not sure how useful it is compared to a simple numerical evaluation or, as you imply, a power series approximation for small $r$. |
Intuition behind Heaviside expansion formula | $\DeclareMathOperator{\Re}{Re}\DeclareMathOperator{\Im}{Im}\DeclareMathOperator{\Res}{Res}\def\e{\mathrm{e}}\def\i{\mathrm{i}}\def\d{\mathrm{d}}$In fact, your first formula should be for $F(s) = \dfrac{P(s)}{Q(s)}$ where $\deg P < \deg Q$ and all zeros of $Q$ are simple, and the second one should be$$
\mathcal{L}^{-1}(F)(t) = \frac{P(0)}{R(0)} + \sum_{k = 1}^n \frac{P(s_k)}{s_k R'(s_k)} \e^{s_k t}
$$
where $n = \deg R \geqslant \deg P$ and all zeros of $s R(s)$ are simple.
Lemma: Denote$$
γ(a, r; θ_1, θ_2) = \{a + r\e^{\i θ} \mid θ_1 < θ < θ_2 \},\\
D(a, r) = \{z \in \mathbb{C} \mid \Re z < a,\ |z - a| > r \},\\
E(a, r) = \{z \in \mathbb{C} \mid \Im z > 0,\ |z - a| > r \}.
$$
If $f$ is continuous on $D(a, r_0)$ and $\lim\limits_{\substack{|z| → ∞\\z \in D(a, r_0)}} f(z) = 0$, then$$
\lim_{r → ∞} \int\limits_{γ(a, r; \frac{π}{2}, \frac{3π}{2})} f(z) \e^{tz} \,\d z = 0. \quad \forall t > 0
$$
Proof: By making substitution $w = -\i(z - a)$,$$
\int\limits_{γ(a, r; \frac{π}{2}, \frac{3π}{2})} f(z) \e^{tz} \,\d z = \int\limits_{γ(0, r; 0, π)} f(\i w + a) \e^{t(\i w + a)} \,\d w = \e^{ta} \int\limits_{γ(0, r; 0, π)} f(\i w + a) \e^{\i tw} \,\d w.
$$
Because $g(w) := f(\i w + a)$ is continuous on $E(0, r_0)$ and$$
\lim_{\substack{|w| → ∞\\w \in E(0, r_0)}} g(w) = \lim\limits_{\substack{|w| → ∞\\w \in E(0, r_0)}} f(\i w + a) = \lim_{\substack{|z| → ∞\\z \in D(a, r_0)}} f(z) = 0,
$$
by Jordan's lemma,$$
\lim_{r → ∞} \int\limits_{γ(a, r; \frac{π}{2}, \frac{3π}{2})} f(z) \e^{tz} \,\d z = \e^{ta} \lim_{r → ∞} \int\limits_{γ(0, r; 0, π)} g(w) \e^{\i tw} \,\d w = 0.
$$
Now back to find $\mathcal{L}^{-1}(F)$ for $F(s) = \dfrac{P(s)}{Q(s)}$ where $\deg P < \deg Q$ and all zeros of $Q$ are simple. $Q$ can be written as $Q(s) = c \prod\limits_{k = 1}^n (s - s_k)$ where $c \in \mathbb{C}^*$ and $s_1, \cdots, s_n$ are distinct complex numbers, thus all singularities of $F$ are $s_1, \cdots, s_n$. By taking $a = \max\limits_{1 \leqslant k \leqslant n} \Re s_k + 1$,$$
\mathcal{L}^{-1}(F)(t) = \frac{1}{2π\i} \lim_{r → ∞} \int_{a - \i r}^{a + \i r} F(s) \e^{ts} \,\d s. \quad (t > 0)
$$
Denote $r_0 = \max\limits_{1 \leqslant k \leqslant n} |s_k|$, then $F$ is continuous on $D(a, r_0)$ and $\deg P < \deg Q$ implies $\lim\limits_{\substack{|z| → ∞\\z \in D(a, r_0)}} F(z) = 0$. For any $r > r_0$, by the residue thorem,$$
\frac{1}{2π\i} \int_{a - \i r}^{a + \i r} F(z) \e^{tz} \,\d z + \frac{1}{2π\i} \int\limits_{γ(a, r; \frac{π}{2}, \frac{3π}{2})} F(z) \e^{tz} \,\d z = \sum_{k = 1}^n \Res(F(z) \e^{tz}, s_k).
$$
Thus by the lemma,$$
\frac{1}{2π\i} \lim_{r → ∞} \int_{a - \i r}^{a + \i r} F(s) \e^{ts} \,\d s = \sum_{k = 1}^n \Res(F(z) \e^{tz}, s_k).
$$
Now, note that $Q'(z) = c\sum\limits_{k = 1}^n \prod\limits_{j ≠ k} (z - s_j)$. For each $k$, since $s_1, \cdots, s_n$ are distinct, the function$$
(z - s_k)F(z) \e^{tz} = \frac{P(z)}{c\prod\limits_{j ≠ k} (z - s_j)} \e^{tz}$$
is holomorphic in a neighborhood of $s_k$. Thus$$
\Res(F(z) \e^{tz}, s_k) = \lim_{z → s_k} (z - s_k)F(z) \e^{tz} = \frac{P(s_k)}{c\prod\limits_{j ≠ k} (s_k - s_j)} \e^{t s_k} = \frac{P(s_k)}{Q'(s_k)} \e^{t s_k},
$$
where the last step is because $\prod\limits_{j ≠ k'} (s_k - s_j) = 0$ for $k' ≠ k$. Therefore,$$
\mathcal{L}^{-1}(F)(t) = \sum_{k = 1}^n \Res(F(z) \e^{tz}, s_k) = \sum_{k = 1}^n \frac{P(s_k)}{Q'(s_k)} \e^{s_k t}.
$$
For the second formula, take $Q(s) = s R(s)$. Since the zeros of $Q$ are $s_0 = 0, s_1, \cdots, s_n$ and $Q'(s) = R(s) + s R'(s)$, we get$$
\mathcal{L}^{-1}(F)(t) = \sum_{k = 0}^n \frac{P(s_k)}{Q'(s_k)} \e^{s_k t} = \frac{P(s_0)}{Q'(s_0)} \e^{s_0 t} + \sum_{k = 1}^n \frac{P(s_k)}{Q'(s_k)} \e^{s_k t} = \frac{P(0)}{R(0)} + \sum_{k = 1}^n \frac{P(s_k)}{s_k R'(s_k)} \e^{s_k t}
$$ |
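As a sanity check of the second formula, here is a sympy sketch on the arbitrarily chosen example $F(s) = \frac{1}{s(s^2+4)}$, i.e. $P(s)=1$, $R(s)=s^2+4$:

```python
# sympy check of the Heaviside formula on F(s) = 1/(s(s^2+4)),
# i.e. P(s) = 1 and R(s) = s^2 + 4 (an arbitrary example).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
P, R = sp.Integer(1), s**2 + 4

roots = sp.roots(R, s)  # the nonzero simple zeros s_1, ..., s_n
formula = P.subs(s, 0)/R.subs(s, 0) + sum(
    (P/(s*sp.diff(R, s))).subs(s, sk)*sp.exp(sk*t) for sk in roots)

direct = sp.inverse_laplace_transform(P/(s*R), s, t)
print(sp.simplify(sp.expand_complex(formula) - direct))  # 0
```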
How can I get the value for $\theta$ when $\cos(\theta) = \frac{\sqrt{3}-1}{2\sqrt2}$, $\sin(\theta) = \frac{\sqrt3+1}{2\sqrt2}$ | I can give you a better way: we know that $\sin(2\theta)=2\sin(\theta)\cos(\theta)$.
So, multiplying both the given equations, we get
$\sin(2\theta)=\frac{2(3-1)}{8}=\frac{1}{2}$
$2\theta=\frac{5\pi}{6}$
$\theta=\frac{5\pi}{12}$
But, in most general cases, you have to estimate the values. The intuition as to how to solve the problem, or what A and B to use, can be improved by practice.
EDIT: $2\theta = \frac{5\pi}{6}$ rather than $\frac{\pi}{6}$ because $\sin\theta$ is greater than $\cos\theta$, and hence $\theta > 45^\circ$.
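A quick numerical confirmation in plain Python:

```python
# Numerical check that theta = 5*pi/12 matches both given values.
import math

theta = 5*math.pi/12
print(math.isclose(math.cos(theta), (math.sqrt(3) - 1)/(2*math.sqrt(2))))  # True
print(math.isclose(math.sin(theta), (math.sqrt(3) + 1)/(2*math.sqrt(2))))  # True
```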
Second Stiefel-Whitney class of $\mathbb{C}\text{P}^2\#\overline{\mathbb{C}\text{P}^2}$ | The second Stiefel–Whitney class of $Y=\mathbb{CP}^2$ is non-trivial. Similarly for $Z=\mathbb{CP}^2\setminus \{pt\}$. Namely, the inclusion $Z\rightarrow Y$ pulls back the tangent bundle from $Y$ to $Z$, and on cohomology it induces an isomorphism on $H^2(\cdot;\mathbb{Z}_2)$.
Consider now the inclusion $i:Z\rightarrow X$, where $X$ is the space of the OP. The tangent bundle is again pulled back to Z, and $i^*(w_2(X))=w_2(Z)\not=0$. Thus $w_2(X)\not=0$.
This is a more general phenomenon: the first Stiefel–Whitney class of a vector bundle over a CW complex (i.e. whether the bundle is orientable or not) depends only on the restriction of the bundle to the one-skeleton. Something similar is true of the second Stiefel–Whitney class and the two-skeleton.
Q on proof of periods of non-constant meromorphic function - either {0}, {nw_1} or {nw_1 + mw_2} | That's quite immediate: For every $w\in \Omega$, $a\in B(w, \epsilon)\cap \Omega$, $b=a-w\in \Omega$, since $\Omega$ is a group. Now, can you show that $|b-0|<\epsilon$? Here $B(p, r)$ is the open disk with the center $p$ and radius $r$. |
Factorial Divides Rising Power Proof Help | The simplest proof of this I know is that the binomial coefficient $\binom ab$ is an integer for suitably chosen $a$ and $b$. There are combinatorial proofs and induction proofs of that.
I mention this because the link with binomial coefficients does not seem to be mentioned very often in this context, yet the problems are identical. |
Why does $C_1\cap C_2\cap C_3\cap \ldots = \{x:x=2\}$? | The book is right!
Observe that if $x=2$ then $2 \geq x > 2 - \frac{1}{k}$ for every $k$, so $x \in C_k$ for every $k$, i.e. $x \in \bigcap_{k} C_k$.
The point is that strict inequalities are usually lost in the limit: in the limit they may become equalities.
Double integral over annulus | Since $r^2 = x^2 + y^2$ (do you see why?), we're considering the annulus $\sqrt{a} \le r \le \sqrt{b}$. Then the integral is
$$\iint_D\ e^{-r^2}\ dA,$$
The area element is $dA = r\ dr\ d\theta$ in polar coordinates. Now what do you make the limits of integration? |
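Once you have worked out the limits, you can check the result numerically; here is a scipy sketch ($a=1$, $b=4$ are arbitrary, and the comparison value $\pi(e^{-a}-e^{-b})$ is the closed form the finished polar computation yields, so skip this if you want to work it out yourself first):

```python
# Numerical check of the polar set-up with scipy.
import numpy as np
from scipy.integrate import dblquad

a, b = 1.0, 4.0
val, err = dblquad(lambda r, th: np.exp(-r**2)*r,   # integrand times Jacobian r
                   0, 2*np.pi,                      # theta from 0 to 2*pi
                   np.sqrt(a), np.sqrt(b))          # r from sqrt(a) to sqrt(b)
print(val, np.pi*(np.exp(-a) - np.exp(-b)))         # the two should agree
```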
Mersenne prime variation | There are certainly numbers of that form that are prime, some examples:
$3^2-2 = 7$
$3^4-2 = 79$
$3^5-2 = 241$
$5^2-2 = 23$
$7^2-2 = 47$
and some numbers of that form that are not prime:
$3^3-2 = 25 = 5\cdot 5$
$5^3-2 = 123 = 3\cdot 41$
$5^4-2 = 623 = 7\cdot 89$
One of the reasons the concept of Mersenne primes is so known is that it's relatively simple to test Mersenne numbers for primality (it's quite basic to show that the exponent must be prime, which is just a simple prerequisite, but one that eliminates a fair number of candidates). I haven't heard of any tests optimised for numbers of this form (and as some of the examples show, we don't have a similar simple rule for the exponents), which probably makes these numbers much less studied.
As a result, much less is probably known about them, including whether there are infinitely many.
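A short brute-force scan (sympy's `isprime` assumed; the ranges are arbitrary) reproduces the examples above. Note that $3^4-2=79$ appears: unlike the Mersenne case, composite exponents can give primes here.

```python
# Brute-force scan for primes of the form p^n - 2 (bounds chosen arbitrarily).
from sympy import isprime, primerange

for p in primerange(3, 12):
    for n in range(2, 8):
        if isprime(p**n - 2):
            print(f"{p}^{n} - 2 = {p**n - 2} is prime")
```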
Minimum distance for a site between two towns | We can assume the site is on the line segment $AB$.
Let $x$ be the distance of the site from $A$, where $0 \le x \le 12$.
The total distance traveled is given by $d=(3n)x + n(12-x)$, where $3n$ and $n$ are the (fixed) numbers of trips to $A$ and $B$, respectively.
Then $d = (2n)x + 12n$, hence for $x \in [0,12]$, the minimum is achieved at $x=0\;$(i.e., place the site at $A$). |
Characterisation of conditional expectation, $Z = \mathbb{E}[X|G]$ | Since $\mathcal{G}$ is generated by a finite partition $B_1, \ldots, B_k$, show that a $\mathcal{G}$-measurable random variable $Z$ is necessarily constant on each $B_i$. (That is, if $\omega$ and $\omega'$ are both in $B_i$, then $Z(\omega) = Z(\omega')$.)
Thus it suffices to find the value that $Z$ takes on each $B_i$. The condition on $Z$ implies $E[I_{B_i} X] = E[I_{B_i} Z]$, but since $Z$ is some constant $z_i$ on $B_i$, we have $E[I_{B_i} X] = z_i P(B_i)$. Thus the value $z_i$ that $Z$ takes on $B_i$ is $E[I_{B_i} X] / P(B_i) =: E[X \mid B_i]$. |
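A Monte Carlo illustration of this (numpy assumed; the sample space, partition and $X$ below are arbitrary choices):

```python
# On a finite partition B_1, B_2, B_3, the conditional expectation Z = E[X|G]
# is constant on each block, equal to E[1_B X]/P(B) there.
import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(0, 1, 500_000)        # sample points of Omega = [0, 1)
X = omega**2                              # an arbitrary random variable
block = np.digitize(omega, [0.3, 0.7])    # partition at 0.3 and 0.7

Z = np.empty_like(X)
for i in range(3):
    m = block == i
    Z[m] = X[m].mean()                    # empirical E[1_B X]/P(B)

for i in range(3):                        # defining property E[1_B X] = E[1_B Z]
    m = block == i
    print(np.mean(X*m), np.mean(Z*m))     # each pair nearly equal
```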
Second order linear differential equation with distributions: $T''+zT=0$ | Let $a = z^{1/2}.$ The differential equation can then be written as
$(D+ia)(D-ia)T = 0,$ where $D$ is the differentiation operator. We multiply this with the nowhere vanishing integrating factor $e^{iax}$:
$$e^{iax}(D+ia)(D-ia)T = 0.$$
We can now do the rewrite $e^{iax}(D+ia)\bullet = De^{iax}\bullet,$ which gives
$De^{iax}(D-ia)T = 0.$
We can now take the antiderivative:
$e^{iax}(D-ia)T = A,$
where $A$ is a constant, i.e. $(D-ia)T = A e^{-iax}.$
Then we multiply this equation with $e^{-iax}$ giving $e^{-iax} (D-ia) T = A e^{-i2ax}$ or $De^{-iax}T = Ae^{-i2ax},$ which after taking the antiderivative results in $e^{-iax}T = A \frac{1}{-i2a} e^{-i2ax} + B,$ where $B$ is a constant.
Thus, we get $T = A \frac{1}{-i2a} e^{-iax} + B e^{iax},$ where $A$ and $B$ are constants.
As you can see, when $a=0$ we get a problem. Solve the equation for that case separately. |
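A quick symbolic check (sympy assumed) that the result solves $T''+a^2T=0$, i.e. the original equation with $z=a^2$:

```python
# sympy check that T = A/(-2ia) e^{-iax} + B e^{iax} solves T'' + a^2 T = 0.
import sympy as sp

x, a, A, B = sp.symbols('x a A B', nonzero=True)
T = A/(-2*sp.I*a)*sp.exp(-sp.I*a*x) + B*sp.exp(sp.I*a*x)
print(sp.simplify(sp.diff(T, x, 2) + a**2*T))  # 0
```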
What is this result called: $\int_a^b f(x)g(x)dx=f(c)\int_a^b g(x)dx$? | Mean value theorem for integration. In case you want to know how this name came about:
taking $g=1$, we see that there is $c$ with
$$f(c)=\frac{1}{b-a}\int_a^b f \ dx,$$
the right hand side of which is called the integral mean of $f$ over $[a,b]$. (These integral means are fairly important, e.g. because they appear in the Lebesgue differentiation theorem; in fact so important that there is even a standard notation for them, a barred integral sign, which unfortunately does not render here.)
Determining whether two events are "the same" or not | The given statement means that $P(X=i)=P(Y=j)$ provided that $i+j=N$ is a constant. Now from the description of $X$ and $Y$, $X\sim\operatorname{binom}(N,p)$ and $Y\sim\operatorname{binom}(N,1-p)$, so
$$P(X=i)=\binom{i+j}ip^i(1-p)^j$$
and
$$P(Y=j)=\binom{i+j}j(1-p)^jp^i$$
It is well-known that $\binom{i+j}i=\binom{i+j}j$. The equality follows. |
Is $dX/dt=X(t)$ the correct ODE for $X(t)=e^t$? | The differential equation
$$ \frac{d X}{dt}=X(t)$$
has the general solution
$$X(t)=Ce^t$$
where $C \in \mathbb R.$ |
Dirichlet boundary value problem in convex domains with discontinuous boundary values | Quote from the article:
Let $C$ be the class of smooth, bounded and convex domains in $\mathbb R^n$ such that $K$ belongs to the boundary of the domain. Let $\Omega \in C$, we denote furthermore by $u_\Omega$ the function fulfilling
$$\begin{align} \Delta u_\Omega &= 0, \\
u_\Omega &= 0 \text{ in }\partial \Omega \setminus K \\
u_\Omega &= 1 \text{ in } K \end{align}$$
First, we cannot expect $u_\Omega$ to be in $H^1(\Omega)$; the jump between $0$ and $1$ will have a "ripple effect" inside the domain, making $|\nabla u|^2$ just large enough for the integral to diverge. A typical example of this behavior, in the plane, is $u(z) = \frac{1}{\pi}\arg z$ on the upper half-plane. Along the real axis, this function jumps from $0$ to $1$ at the origin. (I only consider the local behavior near $0$, which is representative of what happens in your case.) Since $|\nabla u(z)| \approx |z|^{-1}$, the $L^2$ norm of $|\nabla u|$ is infinite.
So, $u_\Omega$ is not a variational (Dirichlet-energy-minimizing) solution, since it has infinite energy. Sobolev spaces don't play a role in its existence. The existence and uniqueness are established with the help of potential theory. Key words: Perron solution, harmonic measure, Poisson kernel, Green function. One reference is section 2.8 of Gilbarg & Trudinger. The Perron solution is uniquely defined for every bounded function on the boundary. It satisfies $\lim_{y \rightarrow x_0} u(y) = g(x_0) $ whenever $g$ is continuous at $x_0$ and $x_0$ is a regular boundary point. In a convex domain, every boundary point is regular. |
Eigenvectors in non-orthogonal coordinate system | You go about finding eigenvalues and eigenvectors the same way regardless of the coordinate system you’re working in, but the results will be specific to that coordinate system: in general, eigenvalues in different coordinate systems will be different and eigenvectors will not be related to each other via the coordinate transformation. Therefore, if you want to know the principal axes in the standard coordinate system, you’ll have to transform the ellipsoid’s matrix first.
A simple two-dimensional example to illustrate: Consider the ellipse given by $x^2/25+y^2/9=1$ in the standard basis. The corresponding matrix is $$C=\begin{bmatrix}\frac1{25}&0\\0&\frac19\end{bmatrix}.$$ Its eigenvalues are of course $1/25$ and $1/9$, with the standard basis vectors for the corresponding eigenvectors. If we rotate the $y$-axis thirty degrees so that the angle between the positive axis directions is $120°$, the corresponding change-of-basis matrix is $$M=\begin{bmatrix}1&\frac1{\sqrt3}\\0&\frac2{\sqrt3}\end{bmatrix}$$ and the matrix of the ellipse relative to this new basis is $$C'=\begin{bmatrix}\frac1{25}&-\frac1{50}\\-\frac1{50}&\frac7{75}\end{bmatrix}.$$ Its eigenvalues are $1/10$ and $1/30$, with respective eigenvectors $(1,-3)^T$ and $(3,1)^T$, which, even if normalized, are clearly not the images under $M$ of the eigenvectors of $C$. |
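Here is a numpy sketch reproducing those numbers; the convention assumed below is that coordinates transform as $x_{\text{new}} = Mx_{\text{old}}$, so the matrix of the quadratic form transforms as $C' = M^{-T}CM^{-1}$:

```python
# numpy reproduction of the worked example above.
import numpy as np

C = np.diag([1/25, 1/9])
M = np.array([[1.0, 1/np.sqrt(3)],
              [0.0, 2/np.sqrt(3)]])
Mi = np.linalg.inv(M)
Cp = Mi.T @ C @ Mi
print(Cp)                      # [[ 0.04, -0.02], [-0.02, 7/75 ]]
w, v = np.linalg.eigh(Cp)
print(w)                       # [1/30, 1/10]
print(v)                       # columns ~ (3,1)/sqrt(10) and (1,-3)/sqrt(10)
```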
proving a set is closed | The intersection of two open sets is open. Your set is the intersection of the circle and the other shape, both of which are open as you've noted.
local expression for affine connections | The claim here is that in that local coordinate system, the field $\nabla_X Y$ is equal to the thing on the right. That's completely obvious, in the sense that $X$ is equal to the thing in the subscript (within this patch) and $Y$ is equal to the thing on the right (within this patch), so the left and right hand sides are equal just by substitution.
You might well ask "Well, what if we have two overlapping patches? Then we'd have two DIFFERENT expressions for $\nabla_X Y$. How do we know that they're equal?" And the answer is "That depends on how your source defines that covariant derivative." It might be that they're equal because the covariant derivative has been shown to be well-defined everywhere through some coordinate-invariant means; it might be because through explicit computation, the two different coordinate expressions have been shown equal. There are probably other proofs as well, but I can't say which one applies without knowing which definition you've used. |
$z\mapsto \frac1{z^2-1/2}$ uniformly approximable by polynomials over the unit circle | Hint: compare $\oint_\Gamma z f(z)\; dz$ to what you would get with a polynomial, where $\Gamma$ is the positively oriented unit circle. |
Finding loci of possible points satisfying vector simultaneous equations | You don't need to find an expression fo $\vec {x}\,$:$\,$ it is enough to verify that $\|\vec{x}-\vec {c'}\|$ is constant
(with respect to
$\vec{x}$) where $\vec {c'}$ is the (orthogonal) projection of $\vec {c}$ onto the plane.
If $d$ is the distance of $\vec {c}$ from the plane, then
$$d=\frac {(\vec{b}-\vec{c})\cdot\vec{a}}{\|\vec{a}\|}$$ and $$\vec {c'}=\vec {c} + \frac d{\|\vec{a}\|}\vec{a}$$ So check yourself that $$(\vec{x}-\vec {c'})\cdot(\vec{x}-\vec {c'})=r^2-d^2,$$ remembering that $$\vec{a}\cdot\vec{x}=\vec{a}\cdot\vec{b}.$$ See W. A. Meyer, Geometry and Its Applications (2006), Section 5.3, Exercise 15, p. 241.
Addendum
Of course the circle can be parameterized in a typical way.
Let $\,\vec {u}=\vec {b}-\vec {c'}$ and $\,\vec{v}=\vec {u} \times \vec {a}$ .
Then $$\vec {x}= \vec {c'}+ \sqrt {r^2-d^2} \left (\frac {\vec {u}}{\|\vec {u}\|} \cos t + \frac {\vec {v}}{\|\vec {v}\|} \sin t \right)$$ with $\,0 \le t < 2\pi$ . |
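A numpy sketch of this parameterization on arbitrary test data, checking that the points lie on both the plane and the sphere:

```python
# Every x(t) should satisfy a.x = a.b (the plane) and |x - c| = r (the sphere).
import numpy as np

a = np.array([0.0, 0.0, 1.0])   # plane normal
b = np.array([1.0, 2.0, 0.5])   # a point on the plane a.x = a.b
c = np.array([0.0, 0.0, 0.0])   # sphere center
r = 2.0

d = np.dot(b - c, a)/np.linalg.norm(a)
cp = c + d/np.linalg.norm(a)*a               # projection c' of c onto the plane
u = b - cp
v = np.cross(u, a)

t = np.linspace(0, 2*np.pi, 9)
xs = cp + np.sqrt(r**2 - d**2)*(np.outer(np.cos(t), u/np.linalg.norm(u))
                                + np.outer(np.sin(t), v/np.linalg.norm(v)))
print(np.allclose(xs @ a, np.dot(a, b)))               # True: on the plane
print(np.allclose(np.linalg.norm(xs - c, axis=1), r))  # True: on the sphere
```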
Whether the graph of rational map is closed | Let $X=\mathbb{A}^1$ and $Y=X-\{0\}$. Then $Y$ is open in $X$ and the identity map $Y\to Y$ gives a rational map (not defined at $0$) $X\to Y$. Then you can easily check that the graph as you describe is closed. |
Is family of sets bounded from above an equivalence class of certain equivalence relation? | As Don Thousand already points out in the comments, all elements of $\mathcal{A}$ are equivalent (under $R$). Just repeating the argument here: given any two sets $A$ and $B$ bounded above, there is an upper bound $M$ for both of them. Then $A \cap [M+1, \infty) = B \cap [M+1, \infty) = \emptyset$.
To see that $\mathcal{A}$ is indeed an equivalence class, we have to show that there is no set that is not in $\mathcal{A}$, which is related to something in $\mathcal{A}$. Let $B \subseteq \mathbb{R}$, $B \not \in \mathcal{A}$. For any $A \in \mathcal{A}$ we have some upper bound $M$, so $A \cap [M+1, \infty) = \emptyset$. Since $B \not \in \mathcal{A}$, we must have that $B$ is unbounded and thus $B \cap [M+1, \infty) \neq \emptyset$. So we see that $B$ is not related to anything in $\mathcal{A}$, and so $\mathcal{A}$ is indeed an $R$-equivalence class. |
Find $(a,b)$ such that $\lim\limits_{x\to 0}\frac{f(x)}{x}=1$ implies $\lim\limits_{x\to 0}\frac{x(1+a\cos x)-b\sin x}{(f(x))^3}=1$ | Since we can write
$$\lim_{x\to 0}\frac{x(1+a\cos x)-b\sin x}{(f(x))^3}=\lim_{x\to 0}\frac{x(1+a\cos x)-b\sin x}{x^3}\cdot\frac{1}{(f(x)/x)^3}$$
we have to have
$$\lim_{x\to 0}\frac{x(1+a\cos x)-b\sin x}{x^3}=1$$
We can write
$$\lim_{x\to 0}\frac{x(1+a\cos x)-b\sin x}{x^3}\tag1$$$$=\lim_{x\to 0}\frac{1+a\cos x-b\frac{\sin x}{x}}{x^2}\tag2$$
But we cannot write $(2)$ as
$$\lim_{x\to 0}\frac{1+a\cos x-b\cdot 1}{x^2}$$
From $(1)$, by L'Hôpital's rule,
$$(1)=\lim_{x\to 0}\frac{1+a\cos x-ax\sin x-b\cos x}{3x^2}\tag3$$
Here, we have to have
$$1+a-b=0\tag 4$$
Using L'Hôpital's rule several times,
$$\begin{align}(3)&=\lim_{x\to 0}\frac{1-ax\sin x-\cos x}{3x^2}\\&=\lim_{x\to 0}\frac{-a(\sin x+x\cos x)+\sin x}{6x}\\&=\lim_{x\to 0}\frac{-2a\cos x+ax\sin x+\cos x}{6}\\&=\frac{-2a+1}{6}\end{align}$$
and so
$$\frac{-2a+1}{6}=1\tag5$$
Now solve $(4)$ and $(5)$ to get $a=-\frac{5}{2}$ and $b=1+a=-\frac{3}{2}$.
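A sympy verification of the resulting values (assuming sympy):

```python
# Check that a = -5/2, b = -3/2 give the required limit.
import sympy as sp

x = sp.symbols('x')
a, b = sp.Rational(-5, 2), sp.Rational(-3, 2)
print(sp.limit((x*(1 + a*sp.cos(x)) - b*sp.sin(x))/x**3, x, 0))  # 1
```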
How to find all polynomials P(x) such that $P(x^2-2)=P(x)^2 -2$? | Lemma: If $P(x)^2$ is a polynomial in $x^2$, then so is either $P(x)$ or $P(x)/x$.
By the lemma, there is a polynomial $Q$ such that $P(x)=Q(x^2-2)$ or $P(x)=xQ(x^2-2)$.
Then $Q((x^2-2)^2-2)=Q(x^2-2)^2−2$ or $(x^2-2)Q((x^2-2)^2-2)=x^2Q(x^2-2)^2-2$
Substituting $x^2-2=y$ yields $Q(y^2-2)=Q(y)^2-2$ or $yQ(y^2-2)=(y+2)Q(y)^2-2$, respectively.
Suppose that $yQ(y^2-2)=(y+2)Q(y)^2-2$. Setting $y=-2$ we obtain that $Q(2)=1$.
Note that, if $a\neq 0$ and $Q(a)=1$, then also $aQ(a^2-2)=(a+2)-2=a$ and hence $Q(a^2-2)=1$.
We thus obtain an infinite sequence of points at which Q takes value 1.
Namely, the sequence given by $a_{n+1}=a_n^2-2$.
Therefore $Q\equiv1$, and hence $P(x)=x$ in this case. It follows that if $P(x)\neq x$, then $P(x)=Q(x^2-2)$.
Now we can easily list all solutions: these are the polynomials of the form $T(T(⋯(T(x))⋯))$, where $T(x)=x^2-2$.
NB: proof of the Lemma.
Let $P(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$, $a_n\neq 0$.
The coefficient of $x^{2n-1}$ in $P(x)^2$ is $2a_na_{n-1}$, from which we get $a_{n-1}=0$. Now the coefficient of $x^{2n-3}$ equals $2a_na_{n-3}$; hence $a_{n-3}=0$, and so on. Continuing in this manner we conclude that $a_{n-2k-1}=0$ for $k=0,1,2,\dots$, i.e. $P(x)=a_nx^n+a_{n-2}x^{n-2}+a_{n-4}x^{n-4}+\cdots$.
How can I prove that the number of leaves in a non-empty full K-ary tree is (K − 1)n + 1 | A proof indeed is a simple induction with respect to the height $h$ of the tree. Denote by $n_h$ the number of internal nodes of a complete $K$-ary tree and by $l_h$ the number of its leaves. At the base of induction for $h=1$ we have $n_h=1$ and $l_h=K$, so the formula $l_h=(K-1)n_h+1$ is verified. At the induction step each leaf of a tree of height $h$ becomes an internal vertex, but generates $k$ leaves of a tree of a height $h+1$. Thus $n_{h+1}=n_h+l_h$ and
$$(K-1)n_{h+1}+1=(K-1)(n_h+l_h)+1=\big[(K-1)n_h+1\big]+(K-1)l_h=l_h+(K-1)l_h=Kl_h=l_{h+1}.$$
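The count can also be checked without the height restriction, since any full $K$-ary tree is obtained from a single root by repeatedly expanding some leaf into an internal node with $K$ children; here is a plain-Python sketch of that invariant:

```python
# Leaf-count invariant for full K-ary trees grown by leaf expansions.
def grow(K, expansions):
    internal, leaves = 0, 1        # a lone root is a leaf
    for _ in range(expansions):
        internal += 1              # a leaf becomes internal ...
        leaves += K - 1            # ... gaining K children, losing itself
        assert leaves == (K - 1)*internal + 1
    return internal, leaves

print(grow(3, 100))                # (100, 201), and 201 == 2*100 + 1
```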
Why is the obelus sign $\div$ exclusively used in elementary school education | In elementary school, it's hard for kids to "switch between notation", as we have done later on in our lives.
We start with $\div$, then we go to $/$, and now we do something like $\frac{a}{b}$.
In elementary, students are not familiar with "fractions", or just know the very basics about them. Hence the $\frac{a}{b}$ ratio doesn't make sense ... not until you actually use fractions in middle school and so on.
It's important to realise that the slash $/$ refers to a fraction: when we write $(a/b)$, it's because we are lazy and do not want to write $\frac{a}{b}$. So technically it's the same thing as a fraction.
This is why students in elementary school just use the plain $\div$ symbol. It's just a symbol; no fractions or anything complicated yet. It's the same reason kids in elementary school use $\times$ to indicate multiplication rather than the dot $\cdot$ we use when we are older: they do not yet deal with more advanced mathematics, and the symbols we use as adults really only appear once we see more of the subject. In elementary school, the math is very basic, hence basic symbols.
In the end it's because of the type of math we do. Imagine writing something like this:
$$\int^{\dfrac{x}{5}}_{0}(x^3+\frac{x^2}{5}+\ln(\sin(\frac{x}{4})))dx$$
$$\int^{x\div 5}_{0}(x^3+(x^2 \div 5)+\ln(\sin(x\div 4)))dx$$
We even had to add extra parentheses to make it clear that $x^2\div 5$ is a single term. Heavier math calls for more compact symbols.
Uniformly convergence of series on compact set | Hint: Weierstrass $M$-test works also for complex series. Fix a compact set $K$. If $z \in K$ and $|n|$ is large enough, the denominator doesn't vanish on $K$. Estimate the supremum norm of
$$
\frac{1}{(z-n)^2}
$$
for large $n$ on $K$. (Note that the sum isn't defined for $z \in \mathbb{Z}$.) |
Prove a group action is wandering | I assume that the manifold is finite dimensional.
By definition, the action of $G$ on $M$ is proper if and only if for any compact subset $C\subset M$, the set $G_C=\{g\in G: g(C)\cap C\neq\emptyset\}$ is finite.
Let $x\in M$. Since $M$ is finite dimensional, $x$ has a neighborhood $U_x$ whose closure $U'_x$ is compact.
Since the action is proper, $G_{U'_x}$ is finite; hence $G_{U_x}\subset G_{U'_x}$ is finite as well.
cardinality of finite algebra in measure theory | Let $\mathscr{A}$ be the set of finite subsets of $\Bbb N$ together with their complements. |
Is the function complex differentiable or even holomorphic? | Note that if $z=x+iy$, then
$$
\left( \overline{z}\right)^2 =(x-iy)^2 = x^2-y^2-2ixy
$$
Letting $u(x,y)=x^2-y^2$ and $v(x,y)=-2xy$, you should verify that the Cauchy-Riemann equations are only satisfied if $x=y=0$. Thus $f$ is only complex differentiable at $z_0=0$. |
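A quick sympy confirmation:

```python
# The Cauchy-Riemann system for u = x^2 - y^2, v = -2xy.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u, v = x**2 - y**2, -2*x*y
print(sp.solve([sp.Eq(sp.diff(u, x), sp.diff(v, y)),
                sp.Eq(sp.diff(u, y), -sp.diff(v, x))], [x, y]))
# {x: 0, y: 0} -- satisfied only at the origin
```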
What is the limit of $\lim\limits_{x→∞}\frac{\sin x}{x}$ | we can see that for $x>0$ we have
$$-\frac{1}{x}\le\frac{\sin x}{x}\le+\frac{1}{x}$$
then by the squeeze theorem you can conclude that $\lim\limits_{x\to+\infty}\frac{\sin x}{x}=0$, since $\lim\limits_{x\to+\infty}-\frac{1}{x}=\lim\limits_{x\to+\infty}+\frac{1}{x}=0$
How to prove that $E^Y(h(X)|y) = \int h(x)\mathcal{L}_{X|Y}(dx|y)$? | As you already pointed out, the assertion holds if $h(x) = 1_C(x)$ is the indicator function of a measurable set $C$. By the linearity of the (conditional) expectation, this implies that
$$\mathbb{E}(h(X) \mid Y=y) = \int h(x) \mathcal{L}_{X \mid Y}(dx \mid y)$$
holds for any simple function $h$, i.e. any function $h$ of the form
$$h(x) = \sum_{j=1}^n a_j 1_{C_j}(x).$$
Now if $h \geq 0$ is a measurable function, then there exists a sequence of simple functions $(h_j)_{j \in \mathbb{N}}$ such that $h_j \geq 0$, $h_j(x) \uparrow h(x)$ as $j \to \infty$ for all $x$. Using the monotone convergence theorem (MCT) and the fact we already know that the assertion holds for simple functions we get
$$\begin{align*} \mathbb{E}(h(X) \mid Y) &\stackrel{\text{MCT}}{=} \lim_{j \to \infty} \mathbb{E}(h_j(X) \mid Y) \\ &= \lim_{j \to \infty} \int h_j(x) \mathcal{L}_{X \mid Y}(dx \mid Y) \\ &\stackrel{\text{MCT}}{=} \int h(x) \, \mathcal{L}_{X \mid Y}(dx \mid Y) \end{align*}$$
which shows that
$$\mathbb{E}(h(X) \mid Y=y) = \int h(x) \, \mathcal{L}_{X \mid Y}(dx \mid y).$$
Finally, if $h$ is a measurable function such that $h(X) \in L^1$, then we can write $h=h^+-h^-$ and apply the first part of the proof to positive part $h^+ \geq 0$ and negative part $h^- \geq 0$ and use again the linearity of the integral. |