Integral form of $\Gamma (x)$ | In order to expand my comment: put for $0\leq t\leq 1$: $\displaystyle S_n(t)=\sum_{k=0}^n\frac{(-t)^k}{k!}$. We have
$$\sup_{0\leq t\leq 1}|\exp(-t)-S_n(t)| = \sup_{0\leq t\leq 1}\left|\sum_{k=n+1}^{+\infty}\frac{(-t)^k}{k!}\right|\leq \sum_{k=n+1}^{+\infty}\frac 1{k!},$$
and since the series $\displaystyle\sum_{k=0}^{+\infty}\frac 1{k!}$ is convergent, the sequence $\{S_n\}$ converges uniformly to $t\mapsto \exp (-t)$ on $\left[0,1\right]$. Therefore, we can switch the series and the integral for $x\geq 1$:
$\begin{align*}
\int_0^1t^{x-1}e^{-t}dt&=\int_0^1t^{x-1}\left(\sum_{k=0}^{+\infty}\frac{(-1)^kt^k}{k!}\right)dt\\
&=\sum_{k=0}^{+\infty}\frac{(-1)^k}{k!}\int_0^1 t^{x+k-1}dt\\
&=\sum_{k=0}^{+\infty}\frac{(-1)^k}{k!(x+k)}.
\end{align*}$
For $0<x<1$, we can use the dominated convergence theorem since $|t^{x-1}S_n(t)|\leq e\cdot t^{x-1}$, which is integrable on $\left[0,1\right]$ (a simpler argument works in the case $x\geq 1$). |
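A quick numeric sanity check of the final identity (a Python sketch; the test value $x = 1.5$ is an arbitrary choice):

```python
# Compare the integral on [0,1] with the alternating series, for x = 1.5.
from math import exp, factorial
from scipy.integrate import quad

x = 1.5
integral, _ = quad(lambda t: t**(x - 1) * exp(-t), 0, 1)
series = sum((-1)**k / (factorial(k) * (x + k)) for k in range(30))
print(integral, series)   # both ~ 0.3790
```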
Trick to remember the difference between $\sqrt[b]{a^b}=a$ and $\log_a{a^b}=b$? | Wanted to comment, but I don't have enough reputation...
Both results come straightforwardly from the definitions. If you have them clear in your mind, you shouldn't have this problem... Indeed, the square root is, in some domain (for example, in $\mathbb{R}_{\geq 0}$), the inverse function of the power. This means precisely that for any $a\in\mathbb{R}_{\geq 0}$ you have $\sqrt[n]{a^n}=a$.
As for the logarithm with base $a$, again with some care about the domain, it is the inverse function of the power with base $a$, so, again, you get precisely your result.
Also a numerical example could do the trick. Just think about $\sqrt[2]{a^2}$ (if you want, also give a numerical value to $a$, but choose it different from $b=2$; for example, for $a=3$ you get the square root of $9$, which is $a$). As for the logarithm, just think about the logarithm with base $10$, which "counts" the number of zeroes of the powers of $10$: $\log_{10}(10^b)=b$, because $10^b$ has $b$ zeroes. |
Parameterize a polynomial with no real roots | Let $p(x)=1+c_1x+\dots+c_nx^n$. Since $p(0)=1>0$, if $p$ has no real roots, it must be positive everywhere. This implies $c_n>0$ (otherwise there would be a positive root) and $n$ even (otherwise there would be a negative root). Applying Descartes' rule of signs to $p(x)$ and $p(-x)$, we get the following necessary condition: the sequences of coefficients
$$
1,\,c_1,\,c_2,\,c_3,\dots,c_n,\quad\text{and}\quad
1,\,-c_1,\,c_2,\,-c_3,\dots,c_n
$$
must have an even number of sign changes. |
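A small Python illustration of the necessary condition (the polynomial $p(x)=1+x+x^2$ is a hypothetical example with no real roots):

```python
# Count sign changes in the coefficient sequences of p(x) and p(-x).
import numpy as np

def sign_changes(coeffs):
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

c = [1, 1, 1]                                      # p(x) = 1 + x + x^2
c_neg = [ci * (-1)**i for i, ci in enumerate(c)]   # coefficients of p(-x)
print(sign_changes(c), sign_changes(c_neg))        # 0 and 2: both even
print(np.roots(c[::-1]))                           # complex roots only
```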
Prove $\textbf{F}^{\{1,2\}}$ and $\textbf{F}^{2}$ are isomorphic | Your proof looks fine to me! Well done! |
If $A$ is a permutation matrix, then why is $A^{T}A=I$? | Well, if $A\in \mathcal{M}_{n}(\mathbf{R})$ then we know that the columns of such a permutation matrix form an orthonormal basis for $\mathbf{R}^n$. So, $A$ is orthogonal and $A^TA=I_n$ by a standard fact about orthogonal matrices. To see why this is the case, observe that the entries of $A^TA$ are the dot products of pairs of columns of $A$. By orthonormality, $A^TA$ is the identity matrix.
For example, take
$$ A=\begin{bmatrix}
1&0&0\\
0&0&1\\
0&1&0
\end{bmatrix}.$$
$$ A^TA=
\begin{bmatrix}
1&0&0\\
0&0&1\\
0&1&0
\end{bmatrix}\begin{bmatrix}
1&0&0\\
0&0&1\\
0&1&0
\end{bmatrix}=I_3.$$ |
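A one-line check of the example with NumPy:

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])
print(A.T @ A)                           # the 3x3 identity matrix
print(np.allclose(A.T @ A, np.eye(3)))   # True
```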
$UTM_n[D]$ is artinian | Hint: right ideals are also right $D$-subspaces. What can you say about the $D$-dimension of $UTM_n(D)$?
The same argument works on the left side. |
If $f\colon\mathbb{Z}\to F$ is an onto morphism and $F$ is a field, then $\mathbb{Z}_p\cong F$ where $p$ is a prime. | We can assume $\ker(f)\neq (0)$: otherwise $f$ would be an isomorphism and $\Bbb Z$ would be a field, which it is not.
Since $\Bbb Z$ is a PID, we can write $\ker(f)=(n)$, where $n\in\Bbb Z^+$. You are of course right that $\Bbb Z_n$ is a field $\iff n$ is prime. One way to show this:
Suppose $n=ab$, so that $\ker(f)=(ab)$; then $f(ab)=f(a)f(b)=0$ in $F$. Since $F$ is a field, and hence an integral domain, we must have $f(a)=0$ or $f(b)=0$, i.e. $n=ab$ divides $a$ or $b$, which forces $b$ or $a$ to be a unit. This means by definition that $n$ is prime. |
Let $b>a>0$ , prove $\int_a^b\ln(x)\leq\frac{b^2-a^2}{2}$ | My process for proving this would be as follows.
To start, let's quickly prove that $\ln x < x$ for all $x > 0.$ We can do this by first showing this for all $0 < x \leq 1,$ then all $x > 1.$
If $0 < x \leq 1,$ then by the monotonicity of the natural log we have $\ln x \leq 0,$ and because $0 < x$ we have $\ln x < x.$
If $x > 1,$ then consider the function $f(x) = x - \ln x.$ Its derivative, $f'(x) = 1 - \frac1{x},$ is strictly positive for $x > 1.$ So, $f$ is strictly increasing for $x > 1,$ and because $f(1) = 1,$ we get $f(x) > 1 > 0$ for all $x > 1,$ i.e. $\ln x < x.$
Now let's use this to prove our desired statement. If $f(x) \leq g(x)$ for all $x \in [a,b],$ then $\int_a^b f(x) dx \leq \int_a^b g(x) dx.$
So, $\ln x \leq x$ on the region $[a,b]$ for any $0 < a < b$ implies that $\int_a^b \ln x dx \leq \int_a^b x dx = \frac{b^2 - a^2}{2},$ which is our desired statement. |
Limit point compact subspace of Hausdorff space | In a general topological space (even a Hausdorff one) closedness of a subset cannot be expressed in terms of sequences. If the limit of every sequence in $X$ belongs to $X$ you cannot conclude that $X$ is closed. Such an argument works in metric spaces but not in general topological spaces. |
An odd application of Jensen's inequality. | If $x=0$ the inequality is immediate. Otherwise, dividing through by $|x|^p$ and renaming $x:=y/x$, it is enough to show for $x\in\mathbb{R}$ that
$1-|1-x|^p\le (1/t)(|1+tx|^p-1),$ i.e., $1+t\le |1+tx|^p+t|1-x|^p$, i.e., $2-|1+tx|^p\le 1-t+t|1-x|^p$. By Jensen's inequality for the function $x\mapsto |x|^p$, this follows if $2-|1+tx|^p\le |(1-t)1+t(1-x)|^p=|1-tx|^p,$ i.e., $1\le (1/2)|1+tx|^p+(1/2)|1-tx|^p,$ which follows by another application of Jensen's inequality. |
What is the maximum likelihood estimation for a binomial distribution with zero successes in the data? | The MLE for Bernoulli distribution is computed here.
In your case it leads to $\theta=0$. And MLE doesn’t change when you change from closed to open interval for $\theta$. |
Positive integers expressable as sums of powers of 2 | We can do it by (strong) induction. Let $P(x)$ be the assertion that $x$ is a sum of $0$ or more distinct powers of $2$. The number $0$ is a sum of $0$ or more distinct powers of $2$. So $P(0)$ holds.
Suppose that $P(k)$ is true for all $k\lt x$. We show that $P(x)$ is true. Let $2^p$ be the largest power of $2$ which is $\le x$. Then $x-2^p \lt 2^{p}$ (otherwise $2^{p+1}\le x$, contradicting maximality). By the induction hypothesis, $x-2^p$ is expressible as a sum of $0$ or more distinct powers of $2$. All these powers of $2$ are $\lt 2^p$, since $x-2^p\lt 2^p$. It follows that $x=2^p$ plus a sum of $0$ or more distinct powers of $2$ that are less than $2^p$. So $x$ is a sum of distinct powers of $2$. |
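The induction translates directly into a greedy algorithm; a small Python sketch:

```python
# Repeatedly peel off the largest power of 2 that is <= x.
def powers_of_two(x):
    parts = []
    while x > 0:
        p = 1
        while 2 * p <= x:   # find the largest power of 2 that is <= x
            p *= 2
        parts.append(p)
        x -= p              # x - 2^p < 2^p, so the parts stay distinct
    return parts

print(powers_of_two(100))   # [64, 32, 4]
```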
Separable differential equation $x^2 y'' = 2y$ | The question asks to verify the statement in OP. The second derivative of $y: x\mapsto x^2-x^{-1}$ w.r.t. $x$ is $y'': x\mapsto 2 - 2 x^{-3}$. Hence, $$x^2 y''(x) = 2x^2 - 2 x^{-1} = 2 y(x) .$$
The statement in OP is correct.
To solve the differential equation, we proceed the same way as for every Cauchy-Euler equation by using the trial solution $y=x^m$ or by making the change of variable $t=\ln x$. |
Reference request: Introduction to Applied Differential Geometry for Physicists and Engineers | Gauge Fields, Knots and Gravity by Baez and Muniain. The mathematical prerequisites are few and the book gives a great amount of physical intuition without being too sloppy. You will not find many 'engineering' applications of manifolds here, but electromagnetism and other gauge theories are treated extensively. |
Evaluating a complex definite integral of $e^{-st^{2}+it}$ from t=0 to t= infinity maybe gamma function?? | This becomes the Gaussian integral by performing a suitable substitution.
$$\begin{align}
\int_{0}^{\infty} e^{-st^{2}+it}dt
&=\int_{0}^{\infty} e^{-s(t^{2}-\frac{i}st)}dt\\
&=\int_{0}^{\infty} e^{-s((t-\frac{i}{2s})^2+\frac1{4s^2})}dt\\
&=\int_{0}^{\infty} e^{-s(t-\frac{i}{2s})^2-\frac1{4s}}dt\\
&=e^{-\frac1{4s}}\int_{0}^{\infty} e^{-s(t-\frac{i}{2s})^2}dt\\
\end{align}$$
Then applying the substitution $u=\sqrt{s}(t-\frac{i}{2s})\implies du=\sqrt{s}dt$ gives
$$\begin{align}
e^{-\frac1{4s}}\int_{0}^{\infty} e^{-s(t-\frac{i}{2s})^2}dt
&=\frac{1}{e^{\frac1{4s}}\sqrt{s}}\int_{-\frac{i}{2\sqrt{s}}}^{\infty} e^{-u^2}du\\
&=\frac{1}{e^{\frac1{4s}}\sqrt{s}}\int_{-\frac{i}{2\sqrt{s}}}^0 e^{-u^2}du+\frac{1}{e^{\frac1{4s}}\sqrt{s}}\int_0^\infty e^{-u^2}du\\
&=\frac{i\sqrt{\pi}}{2e^{\frac1{4s}}\sqrt{s}}\mathrm{erfi}\left(\frac{1}{2\sqrt{s}}\right)+\frac{\sqrt{\pi}}{2e^{\frac1{4s}}\sqrt{s}}\\
&=\frac{\sqrt{\pi}}{2e^{\frac1{4s}}\sqrt{s}}\left(1+i\mathrm{erfi}\left(\frac{1}{2\sqrt{s}}\right)\right)\\
\end{align}$$ |
Examples of symplectromorphism other than $Sp(V)$ | Take any function $f$ with compact support and the Hamiltonian vector field defined by $\omega(X_f,.)=df$. The flow $\phi_t$ of $X_f$ is a flow of symplectomorphisms. If $f$ is not constant, then there exists $R$ such that for $\|x\|\geq R$, $\phi_t(x)=x$, but there exists $r>0$ such that the restriction of $\phi_t$ to the ball $B(0,r)$ is not the identity. |
Trigonometric formula simplifies to $\sin x\cos x[\tan x+\cot x]$ | The $\cos(2\pi+x)$ and $\cot(2\pi+x)$ terms are easy to get rid of: the basic trigonometric functions have period $2\pi$. So $\cos(2\pi +x)=\cos x$, and $\cot(2\pi+x)=\cot x$.
As to $\cos(3\pi/2+x)$, we can use the addition formula. It is $\cos(3\pi/2)\cos x-\sin(3\pi/2)\sin x$. This is simply $\sin x$. That's because $\cos(3\pi/2)=0$ and $\sin(3\pi/2)=-1$.
Your turn. You need to deal with $\cot(3\pi/2 -x)$. Use the procedure of the preceding paragraph, using the fact that $\cot t=\frac{\cos t}{\sin t}$. |
The multiplicative group of units in $\mathbb{Z}_{p}$ is isomorphic to $\mathbb{Z}_{p} \times C_{n}$ with $n=\max\{p-1,2\}$. | I'll follow up my comment with a sketch of how I assume the exercise is meant to be solved. People with knowledge in $p$-adics will see I try to recover the standard (for $p \neq 2$) $\Bbb Z_p^\times \simeq \mu(\Bbb Q_p) \times (1+p\Bbb Z_p)$ -- and Teichmüller representatives/roots of unity on the first factor, and the second factor being $\simeq (1+p)^{\Bbb Z_p} \simeq (\Bbb Z_p, +)$ -- with arithmetic congruences modulo $p$-powers.
Let's look at the rings $\Bbb Z/p^j\Bbb Z$. Everyone since Euler knows that the unit group $(\Bbb Z/p^j\Bbb Z)^\times$ has order $(p-1)\cdot p^{j-1}$, which suggests it's of the form $C_{p-1} \times C_{p^{j-1}}$, where $C_n$ denotes the cyclic group of order $n$. (That's actually not the case for $p=2$ ($j\ge 3$), see below, but indeed for all other primes.) Well great: If that is the case, and furthermore if we can establish these isomorphisms in a compatible way when we let $j$ vary, then we have
$$\Bbb Z_p^\times \simeq\varprojlim (\Bbb Z/p^j\Bbb Z)^\times \simeq \varprojlim (C_{p-1} \times C_{p^{j-1}}) \simeq \varprojlim C_{p-1} \times \varprojlim C_{p^{j-1}} \simeq C_{p-1} \times \Bbb Z_p$$
(there's a few things to check here maybe, but this chain of isomorphisms should be straightforward).
For $p \neq 2$, such compatible isomorphisms $(\Bbb Z/p^j\Bbb Z)^\times \simeq C_{p-1} \times C_{p^{j-1}}$ can be found like this:
First, the $C_{p^{j-1}}$-part. I claim that the element $1+p$ (formally, its various residues mod $p^j$) generates such a cyclic subgroup. Namely, let $y_j$ be the residue of $1+p$ in $\Bbb Z/p^j\Bbb Z$.
Check that the order of $y_j$ in $(\Bbb Z/p^j\Bbb Z)^\times$ is $p^{j-1}$, i.e. that $(1+p)^{p^k} \not \equiv 1$ mod $p^{j}$ for $k <j-1$, but $(1+p)^{p^{j-1}} \equiv 1$ mod $p^{j}$. For this, you need some playing with $p$-divisibility of binomial coefficients $\binom{p^k}{n}$, and at some point you really need $p \neq 2$.
Compatibility is clear, as the projection $(\Bbb Z/p^j\Bbb Z) \rightarrow (\Bbb Z/p^i\Bbb Z)$ sends $y_j$ to $y_i$.
The $C_{p-1}$ part needs different considerations. A crucial fact here, proved by induction, is
$$a \equiv b \text{ mod } p \Rightarrow a^{p^{j-1}} \equiv b^{p^{j-1}} \text{ mod } p^j \quad (*)$$
With this, for $a \in\lbrace 1, ..., p-1\rbrace$, check that the residues of $a^{p^{j-1}}$ in $\Bbb Z/p^j\Bbb Z$ are distinct from each other and form a subgroup isomorphic to $(\Bbb Z/p\Bbb Z)^\times$, which is cyclic of order $p-1$.
Choose and fix $2 \le a \le p-1$ such that its residue mod $p$ is a generator of $(\Bbb Z/p\Bbb Z)^\times$. Then $x_j :=$ the residue of $a^{p^{j-1}}$ in $(\Bbb Z/p^j\Bbb Z)^\times$ is a generator of the $C_{p-1}$ we want, and by the fact $(*)$ and iterations of Fermat's little theorem, we also have that the projection $(\Bbb Z/p^j\Bbb Z) \rightarrow (\Bbb Z/p^i\Bbb Z)$ sends $x_j$ to $x_i$.
Finally, for $p=2$, the unit group turns out to be not $C_{2^{j-1}}$, but $C_2 \times C_{2^{j-2}}$ (for $j \ge 3$). To adapt the above, for $y_j$ instead of $1+p$ take $1+p^2$ (i.e. $5$) mod $2^j$, and show its order in the unit group is $2^{j-2}$. Funnily, here the constant $C_2$ part is easy, namely it's generated by $x_j :=$ the residue of $-1$ in $(\Bbb Z/2^j\Bbb Z)$. |
Looking books about the topology of n-manifold ($n > 4$) | Kosinski's "Differential Manifolds" and Milnor's h-cobordism theorem lecture notes I consider to be two of the standard high-dimensional manifold theory textbooks.
Kosinski, Differential Manifolds, Volume 138 (Pure and Applied Mathematics)
Milnor, John, Lectures on the h-cobordism theorem, notes by L. Siebenmann and J. Sondow, Princeton University Press, Princeton, NJ, 1965. v+116 pp.
Another popular reference in the PL case is:
Rourke, Colin Patrick; Sanderson, Brian Joseph, Introduction to piecewise-linear topology, Springer Study Edition, Springer-Verlag, Berlin-New York, 1982. ISBN 3-540-11102-6.
I believe they go as far as the $s$-cobordism theorem in that reference but I don't have a copy.
The standard references for the relationship between topological, smooth and PL structures would be Kirby and Siebenmann:
Kirby, Robion C.; Siebenmann, Laurence C. (1977) Foundational Essays on Topological Manifolds, Smoothings, and Triangulations. Princeton, NJ: Princeton Univ. Pr.. ISBN 0-691-08191-3
Of all the above references I find this one the least reader-friendly, as it tends to be heavier on technical constructions and steers away from narrative. The above is still missing a lot of details -- for much of this material people still go back to Milnor's papers rather than book references. Milnor's writing is quite pleasant; for example, you'll have to go there for his work on microbundles. |
The Vector (13,-15) is a linear combination of the vectors (1,5) and (3,c)? Find the scalar c to make this true. | If you want to do it without matrices too, the solution is straightforward. Just eliminate one of the variables, say $r$ (because it doesn't have $c$ in it). Then we have
$$5\cdot 13+(c-15)\cdot s = -15 \implies s\cdot(c-15) = -80.$$ Now observe that $s$ always has a unique solution when $c\neq 15$. When $c=15$, the LHS is $0$ while the RHS is $-80$, so there is no solution for $s$; every $c\neq 15$ works. In other terms, $x^{-1}$ exists when $x\neq 0$ ($x\in \mathbb R$), and $\mathbb R$ is a field. Now try to generalize this statement to vector spaces and you get the matrix solution too (the notion of invertibility of a matrix). |
How can a subset be disjointed? | The statement $A\cap B\cap C=\varnothing$ doesn't mean that the sets are disjoint. It means that $C$ and $A\cap B$ are disjoint.
Indeed, recall that $A\triangle B$ is the same as $(A\cup B)\setminus (A\cap B)$. So if $C\subseteq A\triangle B$ then $C\cap(A\cap B)=\varnothing$. |
Proving a solution exists to an integral equation | Define $T: C[0,1] \to C[0,1]$ by
$$(Tf)(x)=\int_{0}^{x} f(x-t)e^{-t^2}dt + g(x).$$
Then show that
$$\|Tf-Th\|_{\infty} \le \frac{4}{5}\|f-h\|_{\infty}.$$
Proceed with Banach's fixed point theorem. |
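A discretized numeric sketch of the resulting fixed-point iteration (the choice $g(x)=\sin x$ is hypothetical; any continuous $g$ works):

```python
import numpy as np

n = 501
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
g = np.sin(x)
w = np.exp(-x**2)                         # e^{-t^2} on the t-grid

def T(f):
    Tf = np.zeros(n)
    for j in range(1, n):
        y = f[j::-1] * w[:j + 1]          # f(x_j - t) e^{-t^2} for t in [0, x_j]
        Tf[j] = h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)   # trapezoid rule
    return Tf + g

f = np.zeros(n)
for _ in range(200):                      # contraction: errors shrink by ~4/5 per step
    f = T(f)
print(np.abs(T(f) - f).max())             # ~ 0: a numerical fixed point
```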
Multiplying matrices, need some clarification simple dot product | The inner dimensions need to match (# of columns in first matrix = # of rows in second matrix). If $A$ is m by n (m rows, n columns), and $B$ is n by p, then $AB$ is m by p (and is undefined otherwise). The $(i,j)$ entry ($i$th row, $j$th column) of $AB$ is the $i$-th row of $A$ dotted with the $j$-th column of $B$.
Thus, the final matrix multiplication is undefined. |
Showing a linear function exists from Hahn-Banach theorem, with equality at a point. | You can think of the function $p$ restricted to $Span\{x_0\}$ as a one-dimensional function $a \mapsto p(ax_0)$ that is convex and piecewise linear (with one knot at $a=0$), by sublinearity. Then it is easy to find a linear function $F$ on $Span\{x_0\}$ such that $F \le p$ and $F(x_0)=p(x_0)$, and then use Hahn-Banach to extend to $X$.
More detail:
For $a>0$, $p(ax_0)=a p(x_0)$. For $a < 0$, $p(ax_0) = -ap(-x_0)$. So the one-dimensional function has slope $-p(-x_0)$ on $(-\infty,0)$ and slope $p(x_0)$ on $(0,\infty)$. Furthermore, $p(x_0) \ge -p(-x_0)$ by sublinearity. So choosing $F$ to satisfy $F(ax_0):= a p(x_0)$ for all $a \in \mathbb{R}$ works. |
Explain why a system of more than 2 equations has only one solution | If $c \in \mathbf R^3$ is a vector such that $Ac = b$, then the solutions of $Ax = b$ are precisely
$$
c + \operatorname{null} A = \{c + d : d \in \mathbf R^3 \text{ and } Ad = 0\}.
$$
If you can use or justify this, then all you need to do is show that the homogeneous system $Ax = 0$ has only the trivial solution $x = (0, 0, 0)^T$. This is true if and only if after performing elementary row operations to $A$ to get a matrix in row echelon form there are exactly three (the maximum possible number) pivots. If you had fewer pivots, then there would be free variables.
Do you know how to put a matrix in row echelon form? We can certainly go over that. |
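For instance, with SymPy (the matrix below is a hypothetical stand-in for your $A$):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 1, 0],
            [1, 1, 2]])
rref_form, pivots = A.rref()
print(pivots)   # (0, 1, 2): three pivots, so Ax = 0 has only the trivial solution
```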
Evaluating the limit $\lim_{x \to 1} (x^3 - 1) / (x - 1)$ | Hint
$$x^3-1=(x-1)(x^2+x+1)$$ |
What is the variance and expectation of the number of people wearing black socks? | Presumably, owning a cat and wearing black socks are to be treated as independent events
(although, if cat owners prefer to wear socks that don't show their cat's hair
as much, that might not be the case). The number $Y$ of people interviewed is indeed a geometric random variable (in the version that has support $\{1,2,3,\ldots\}$, i.e. the number of trials up to and including the first success). If $X$ is the number of people interviewed wearing black socks, then the conditional distribution of $X$ given $Y$ is binomial with parameters $Y$ and $p$.
My interpretation of the question is that she asks each person about socks immediately after asking that person about cats. That's certainly more practical than having to keep everybody waiting until she meets a cat-owner, but (unless
the interview subjects get so annoyed at waiting that they don't answer correctly) the resulting distribution would be the same. |
Question about polynomial ring and coefficients | As $a F = a c_1 f_1 + \dots + a c_k f_k$, the questions is really if given
$c_1 , \dots, c_k$ there is an $a$ such that $ac_1 = \dots = ac_k$.
However, if $a\neq 0$, then $a c_i = a c_j $ if and only if $c_i = c_j$.
While for $a=0$ of course $ac_i = ac_j$.
Thus, yes $a=0$ works, but this might not be what you want.
Other than that, no, unless the $c_i$ are all equal in the first place. |
Dimension of sum of subspaces | The dimension of $U \cap W$ can be no more than the dimension of $U$ nor more than the dimension of $W$. So the dimension of $U \cap W$ is at most $15$.
Since both $U$ and $W$ are subspaces of $V$, which has dimension $29$, the dimension of $U+W$ is at most $29$.
Putting these two together, we see that:
\begin{align*}
10 &\leq \dim (U \cap W) \leq 15 \\
24 &\leq \dim (U + W) \leq 29
\end{align*}
I like to think of it as $29$ degrees of freedom available in $V$ and $U+W$ has at least the degrees of freedom occupied by $U$, which is $24$, and cannot surpass $29$. $U \cap W$ can do no more than $W$, which has $15$ degrees of freedom, and, trying to minimize the intersection with $U$, which has already occupied $24$ degrees of freedom, there are only $5$ left in $V$ for $W$, meaning it is forced to share at least $10$ with $U$. |
Calculation of two dimensional Fourier transform on a disk using FFT? | You would have to define first what the DFT on a disk is. The DFT can in general be defined for any group of translations. Take the group defined by two translation vectors in the plane and you get the DFT on the torus or unwrapped a rectangle or parallelogram.
I don't see a similar group action for the circle. |
Why can't we keep adding axioms forever? | The challenge is that the theory has to remain effective. You are asking about adding new axioms in stages. For the first few stages things are clear. We can make $F_2 = F + G_1$, $F_3 = F_2 + G_2$, etc.
So we can make the theory $F + \{ G_i : i \in\mathbb{N}\}$. Let's call that $F_\omega$. That will be an effective theory as well. This means that, just from $F$, we can enumerate the entire sequence of formulas $\{G_i : i \in \mathbb{N}\}$.
So we can then continue, making $F_{\omega + 1} = F_{\omega} + G_{\omega}$, $F_{\omega + 2} = F_{\omega+ 1} + G_{\omega+1}$, etc.
Eventually, we can make $F_{\omega + \omega}$ in the same way. To do this, we need to enumerate the entire sequence of formulas $\{G_\alpha: \alpha < \omega + \omega\}$, but we can do that effectively by enumerating $\{G_i : i \in \mathbb{N}\}$, then $G_\omega$, then $\{G_{\omega + i} : i \in \mathbb{N}\}$. So, as long as we have a good grasp of the overall sequence of extensions we have made, we can enumerate the necessary sequence of Goedel sentences.
For the incompleteness theorem to apply, we need the theory at hand to be effective. The problem, as you can see, is that we have to have a computable way to keep track of the "stages", so that we know which sentences have been added at each stage. There are limits to how many stages can be described in a coherent, computable way. The stages are usually treated as ordinals, re-using a concept from set theory.
Essentially, in order to create the Goedel sentence $G_\alpha$ for some ordinal $\alpha$, we need to have an effective way of describing $\alpha$. Recall that the Goedel sentence for a theory refers to an effective axiomatization of the theory, and to axiomatize one of the theories in our sequence of extensions, we need to know just how much it has been extended.
The easiest limit on how long we can keep going is that the number of stages can be at most countable, especially when the theory at hand is only countable. If there are only countably many sentences overall, then we can't continue adding Goedel sentences an uncountable number of times.
It turns out, however, that if we want the description of the stages to be effective, it cannot come anywhere near uncountable. The least upper bound of the computable ordinals, known as the "Church-Kleene ordinal", $\omega_1^{CK}$, is countable. So our sequence of extensions will eventually have to stop, not because we run out of sentences, but because we run out of effective ways to describe the overall sequence of extensions we have made. |
On bernoulli percolation, increasing events and Russo's formula | I think it's better if you use the 'standard coupling' of percolation and express the event (which is not strictly increasing) in terms of the uniforms only, to realize the probability is $0$. (The proof won't depend on dimension, and will be valid on many graphs.)
Also, the events you have considered are not both increasing (one is increasing, the other is decreasing), so the FKG inequality will not provide you with the bound you want. |
Generalized Cross Product | If $x_1,\dotsc,x_{n-1} \in \mathbb{R}^n$, one defines $x_1 \times \cdots \times x_{n-1} \in \mathbb{R}^n$ to be the unique vector such that
$$
\forall y \in \mathbb{R}^n, \quad \langle x_1 \times \cdots \times x_{n-1},y \rangle = \operatorname{det}(x_1,\dotsc,x_{n-1},y),
$$
where the determinant is being viewed as a function of the rows or columns of the usual matrix argument, i.e., as the unique antisymmetric $n$-form $\operatorname{det} : \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \mathbb{R}$ such that $\det(e_1,\dotsc,e_n) = 1$ for $\{e_k\}$ the standard ordered basis of $\mathbb{R}^n$.
Now, suppose that $x_1,\dotsc,x_{n-1} \in \mathbb{R}^n$ are linearly independent, and hence span a hyperplane $H$ (an $(n-1)$-dimensional subspace) in $\mathbb{R}^n$. Then, in particular, $x_1 \times \cdots \times x_{n-1} \neq 0$ is orthogonal to each $x_k$, and hence defines a non-zero normal vector to $H$; write $$x_1 \times \cdots \times x_{n-1} = \|x_1 \times \cdots \times x_{n-1}\|\hat{n}$$ for $\hat{n}$ the corresponding unit normal. Let $y \notin H$. Then $x_1,\dotsc,x_{n-1},y$ are linearly independent and span an $n$-dimensional parallelepiped $P$ with $n$-dimensional volume
$$
|\operatorname{det}(x_1,\dotsc,x_{n-1},y)| = |\langle x_1 \times \cdots \times x_{n-1},y\rangle| = \|x_1 \times \cdots \times x_{n-1}\||\langle \hat{n},y\rangle|.
$$
Now, with respect to the decomposition $\mathbb{R}^n = H^\perp \oplus H$, let
$$
T = \begin{pmatrix} I_{H^\perp} & 0 \\ M & I_{H} \end{pmatrix}
$$
for $M : H^\perp \to H$ given by $$M(c \hat{n}) = -c \langle \hat{n},y \rangle^{-1} P_H y = -c\langle \hat{n},y\rangle^{-1}(y-\langle\hat{n},y\rangle\hat{n}),$$ where $P_H(v)$ denotes the orthogonal projection of $v$ onto $H$. Then $T(P)$ is an $n$-dimensional parallelepiped with vertices $Tx_1 = x_1,\dotsc,Tx_{n-1}=x_{n-1}$, and
$$
Ty = \langle \hat{n},y \rangle \hat{n} = P_{H^\perp} y = y - P_H y,
$$
with the same volume as $P$. On the one hand, since $Ty = y - P_H y$ for $P_H y \in H = \{x_1 \times \cdots \times x_{n-1}\}^\perp$,
$$
\operatorname{Vol}_n(T(P)) = |\operatorname{det}(Tx_1,\dotsc,Tx_{n-1},Ty)|\\ = |\operatorname{det}(x_1,\dotsc,x_{n-1},y-P_H y)|\\ = |\operatorname{det}(x_1,\dotsc,x_{n-1},y)|\\ = \|x_1 \times \cdots \times x_{n-1}\||\langle \hat{n},y\rangle|.
$$
On the other hand, since $Ty \in H^\perp$, $T(P)$ is an honest cylinder with height $\|Ty\| = |\langle \hat{n},y\rangle|$ and base the $(n-1)$-dimensional parallelepiped $R$ spanned by $x_1,\dotsc,x_{n-1}$, so that
$$
\operatorname{Vol}_n(T(P)) = \operatorname{Vol}_{n-1}(R)|\langle \hat{n},y\rangle|.
$$
Thus,
$$
\operatorname{Vol}_{n-1}(R)|\langle \hat{n},y\rangle| = \operatorname{Vol}_n(T(P)) = \|x_1 \times \cdots \times x_{n-1}\||\langle \hat{n},y\rangle|,
$$
so that
$$
\operatorname{Vol}_{n-1}(R) = \|x_1 \times \cdots \times x_{n-1}\|,
$$
as required.
EDIT: Theoretical Addendum
Let's see what $\phi x_1 \times \cdots \times \phi x_{n-1}$ is in terms of $x_1 \times \cdots \times x_{n-1}$ for $\phi$ a linear transformation on $\mathbb{R}^n$.
Define a linear map $T : (\mathbb{R}^n)^{\otimes(n-1)} \to (\mathbb{R}^n)^\ast$ by
$$
T : x_1 \otimes \cdots \otimes x_{n-1} \mapsto \operatorname{det}(x_1,\cdots,x_{n-1},\bullet),
$$
so that if $S : \mathbb{R}^n \to (\mathbb{R}^n)^\ast$ is the isomorphism $v \mapsto \langle v,\bullet \rangle$, then
$$
x_1 \times \cdots \times x_{n-1} = (S^{-1}T)(x_1 \otimes \cdots \otimes x_{n-1}).
$$
Now, since the determinant is antisymmetric, so too is $T$, and hence $T$ descends to a linear map $T : \bigwedge^{n-1} \mathbb{R}^n \to (\mathbb{R}^n)^\ast$,
$$
x_1 \wedge \cdots \wedge x_{n-1} \mapsto \operatorname{det}(x_1,\cdots,x_{n-1},\bullet);
$$
indeed, if $\operatorname{Vol} = e_1 \wedge \cdots \wedge e_n$ for $\{e_k\}$ the standard ordered basis for $\mathbb{R}^n$, then for any $y \in \mathbb{R}^n$,
$$
\langle x_1 \times \cdots \times x_{n-1},y \rangle \operatorname{Vol} = \operatorname{det}(x_1,\cdots,x_{n-1},y)\operatorname{Vol} = x_1 \wedge \cdots \wedge x_{n-1} \wedge y,
$$
which, in fact, shows that
$$
x_1 \times \cdots \times x_{n-1} = \ast (x_1 \wedge \cdots \wedge x_{n-1}),
$$
where $\ast : \wedge^{n-1} \mathbb{R}^n \to \mathbb{R}^n$ is the relevant Hodge $\ast$-operator. Thus, a cross product is really an $(n-1)$-form in the orientation-dependent disguise given by the Hodge $\ast$-operator; in particular, it will really transform as an $(n-1)$-form, as we'll see now.
Now, let $\phi : \mathbb{R}^n \to \mathbb{R}^n$ be linear. Observe that the adjugate matrix $\operatorname{Adj}(\phi)$ of $\phi$ can be invariantly defined as the unique linear transformation $\operatorname{Adj}(\phi) : \mathbb{R}^n \to \mathbb{R}^n$ such that for any $\omega \in \bigwedge^{n-1} \mathbb{R}^n$ and $y \in \mathbb{R}^n$,
$$
(\wedge^{n-1}\phi)\omega \wedge y = \omega \wedge \operatorname{Adj}(\phi) y,
$$
e.g., in our case,
$$
x_1 \wedge \cdots \wedge x_{n-1} \wedge \operatorname{Adj}(\phi) y = (\wedge^{n-1}\phi)(x_1 \wedge \cdots \wedge x_{n-1}) \wedge y = \phi x_1 \wedge \cdots \wedge \phi x_{n-1} \wedge y,
$$
and that, as a matrix, $\operatorname{Adj}(\phi) = \operatorname{Cof}(\phi)^T$, where $\operatorname{Cof}(\phi)$ denotes the cofactor matrix of $\phi$. Then for any $y$,
$$
\langle \phi x_1 \times \cdots \times \phi x_{n-1},y \rangle \operatorname{Vol} = \operatorname{det}(\phi x_1,\cdots,\phi x_{n-1},y)\operatorname{Vol}\\ = \phi x_1 \wedge \cdots \wedge \phi x_{n-1} \wedge y\\ = (\wedge^{n-1}\phi)(x_1 \wedge \cdots \wedge x_{n-1}) \wedge y\\ = (x_1 \wedge \cdots \wedge x_{n-1}) \wedge \operatorname{Adj}(\phi)y\\ = \langle x_1 \times \cdots \times x_{n-1},\operatorname{Adj}(\phi)y \rangle \operatorname{Vol}\\ = \langle \operatorname{Cof}(\phi)(x_1 \times \cdots \times x_{n-1}),y \rangle \operatorname{Vol},
$$
and hence, since $y$ was arbitrary,
$$
\phi x_1 \times \cdots \times \phi x_{n-1} = \operatorname{Cof}(\phi)(x_1 \times \cdots \times x_{n-1}) = (\ast \circ \wedge^{n-1}\phi \circ \ast^{-1})(x_1 \times \cdots \times x_{n-1}),
$$
in terms of the Hodge $\ast$-operation and the invariantly defined $\wedge^{n-1}\phi$. |
Equivalent definitions of Borel $\sigma$-algebra | No, there are many more Borel sets than those. The Borel sets contain (by definition) all countable intersections of open sets, all complements of open sets (i.e. all closed sets), so all countable unions of closed sets too. These are just the first steps in the Borel hierarchy, which has $\omega_1$ levels for separable complete metric spaces like $\Bbb R$. It's not an exhaustive list of the Borel sets and how they can be built up from open sets. |
Show $\liminf\limits_{n\to \infty} X_n = 0$ almost surely for $X_n$ uniform on $(0,n)$ and $(X_n)$ independent | Let $\varepsilon>0$ and set $$A_n:=A_n(\varepsilon)=\{X_n<\varepsilon\}.$$
Since the $X_n$ are independent, so are the $A_n$'s. Since $\mathbb P(A_n)=\frac{\varepsilon}{n}$ for $n\geq \varepsilon$, we have that $$\sum_{n=1}^\infty \mathbb P(A_n)=\infty .$$
Using the second Borel-Cantelli lemma, we get $$\mathbb P\{A_n\ \ \text{i.o.}\}=\mathbb P\{X_n<\varepsilon\ \ \text{i.o.}\}=1.$$
Therefore, $$\mathbb P\left\{\liminf_{n\to \infty }X_n\leq \varepsilon\right\}=1.$$
Since $\varepsilon>0$ is arbitrary and $X_n\geq 0$, the claim follows.
Small exercise :-)
If you want, you can prove that if the $X_n$ are independent uniform on $[0,n^\alpha]$ for an $\alpha >1$, then $$\liminf_{n\to \infty }X_n=+\infty \ \ \ a.s.$$ |
conjugacy of elements of group $GL_2(\mathbb{Z_p})$ | An element of order $p$ is contained in a Sylow $p$-subgroup, so it will be conjugate to an element of a fixed Sylow subgroup, in this case the subgroup of upper triangular matrices with $1$ on the diagonal. It is worth noting that all the elements $\ne I$ of that subgroup are conjugate. |
Space of Riemann integrable functions is complete under uniform metric. | You have already shown that $f_n\to f$ uniformly on $[a,b]$, so for a given $\epsilon>0$ there is an integer $N$ such that
$n>N\Rightarrow \sup |f(x)-f_n(x)|<\epsilon.$ But then
$\int (f_n-\epsilon)\le \underline \int f\le \overline \int f\le \int (f_n+\epsilon)\Rightarrow \overline \int f-\underline \int f\le 2\epsilon (b-a), $
which shows that $f$ is Riemann integrable. |
How to show $V$ is an open subset of $NM$? | Hint:
If $f:X\to Y$ is continuous and $U\subset Y$ is open, then $f^{-1}(U)$ is open. |
Profinite Local Ring | Since any quotient of a local ring is local, we see that $A$ is the proj. lim. of finite local rings, and that its maximal ideal is the proj. lim. of the maximal ideals of these rings. Thus if $A_i$ is any finite quotient of $A$, we see
that $m^n$ has zero image in $A_i$ for $n$ large enough.
Thus $\bigcap_n m^n$ lies in the kernel of $A \to A_i$ for all $i$, and so vanishes. |
Finding all homomorphisms between $S_3$ and $\mathbb Z_8$ | Hint: consider that, for any homomorphism $\tau$, $[G,G] \subset Ker(\tau)$, so you have to count homomorphisms from $\mathbb{Z}_2$ to $\mathbb{Z}_8$, and there are two of them. |
$\chi^2$ critical value ranges | If $V\sim \chi^2(7)$ and $F$ denotes its distribution function, i.e. $F(v)=P(V\leq v)$, then these values express that $F(1.239)=0.01, F(2.167)=0.05,\ldots, F(18.48)=0.99$. Using this we can obtain
$$
P(1.239<V\leq 18.48)=P(V\leq 18.48)-P(V\leq 1.239)=F(18.48)-F(1.239)=0.98.
$$
And then
$$
P(V\notin (1.239,18.48])=1-0.98=0.02
$$
as you claimed. |
Unbiased estimator for $p^2$. Bernoulli distribution. | Your notation is confusing. $\hat p^2$ could either mean $(\hat p)^2$, or it could mean $\widehat{p^2}$. You also don't define $\hat p$ itself.
The question asks for an unbiased estimator of $p^2$. Naturally, an unbiased estimator of $p$ is $$\hat p = \bar X = \frac{1}{n}\sum_{i=1}^n X_i,$$ the sample mean of observations. We can confirm this by computing $$\operatorname{E}[\hat p] = \operatorname{E}\left[\frac{1}{n} \sum_{i=1}^n X_i\right] = \frac{1}{n} \sum_{i=1}^n \operatorname{E}[X_i] = \frac{1}{n} \sum_{i=1}^n p = \frac{1}{n} \cdot np = p.$$
What if we simply took as our estimator for $p^2$ $$(\hat p)^2 = (\bar X)^2 = \left(\frac{1}{n} \sum_{i=1}^n X_i\right)^2?$$ What is the expectation of this value? Well, there are a few ways we can compute it. The naive way is to perform the expansion; i.e. $$\operatorname{E}\left[\left(\frac{1}{n} \sum_{i=1}^n X_i \right)^2\right] = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \operatorname{E}[X_i X_j].$$ When $i \ne j$, $\operatorname{E}[X_i X_j] = \operatorname{E}[X_i]\operatorname{E}[X_j]$ because they are independent; but when $i = j$, we have $$X_i X_j = X_i^2,$$ hence $$\operatorname{E}[X_i^2] = \operatorname{Var}[X_i] + \operatorname{E}[X_i]^2 = p(1-p) + p^2 = p.$$ Since in our double sum we have $n$ instances where $i = j$ and $n^2$ terms in total, it follows that $$\operatorname{E}[(\hat p)^2] = \frac{1}{n^2} \left( (n^2-n) p^2 + n p\right) = \frac{p(1-p)}{n} + p^2.$$ So the bias here is $p(1-p)/n$. To rid ourselves of it, we should collect like terms in $p$ to get $$\operatorname{E}[(\hat p)^2] = \frac{p}{n} + \frac{n-1}{n} p^2,$$ from which we see from linearity of expectation that $$\operatorname{E}\left[(\hat p)^2 - \frac{\hat p}{n}\right] = \frac{n-1}{n} p^2.$$ Notice while it looks like we simply replaced $p$ with $\hat p$ and rearranged the equation, this is not what we actually did. What actually happened is $$\operatorname{E}\left[(\hat p)^2 - \frac{\hat p}{n}\right] = \operatorname{E}[(\hat p)^2] - \frac{1}{n}\operatorname{E}[\hat p] = \left(\frac{p}{n} + \frac{n-1}{n} p^2\right) - \left(\frac{p}{n}\right) = \frac{n-1}{n} p^2.$$ Therefore, our unbiased estimator should be $$\widehat{p^2} = \frac{n}{n-1}\left((\hat p)^2 - \frac{\hat p}{n}\right).$$ |
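A Monte Carlo spot-check of the unbiasedness (the values $p=0.3$, $n=10$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 0.3, 10, 200_000
X = rng.binomial(1, p, size=(trials, n))
phat = X.mean(axis=1)
est = n / (n - 1) * (phat**2 - phat / n)   # the unbiased estimator derived above
print(est.mean(), p**2)                    # both ~ 0.09
print((phat**2).mean())                    # biased: ~ p^2 + p(1-p)/n = 0.111
```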
Combinatorics: Three letters are to be selected from the letters in the word questions... | Your approach is correct. You are basically making a case for each way to get three letters, and then adding all of these individual cases together. The number of distinct letters in CONSTANT is 6.
Hence, the number of 3-letter words is $${6 \choose 3}\cdot 3! + {2 \choose 1}\cdot{5 \choose 1}\cdot 3 = 120 + 30 = 150.$$ |
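A brute-force confirmation in Python:

```python
from itertools import permutations

words = set(permutations("CONSTANT", 3))   # distinct ordered 3-letter selections
print(len(words))                          # 150
```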
How to know if a point belongs more or less to a circle? | You must first define the maximal allowed deviation $\varepsilon$.
If we denote:
d := Distance from center
Then you would have the pseudo code:
if abs(d - r) <= ε then {the point can be considered as being on the circle}, with r being the radius of the circle. |
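A runnable Python version of the pseudocode (all names are illustrative):

```python
from math import hypot

def on_circle(px, py, cx, cy, r, eps):
    d = hypot(px - cx, py - cy)   # distance from the center (cx, cy)
    return abs(d - r) <= eps

print(on_circle(3.01, 4.0, 0.0, 0.0, r=5.0, eps=0.05))   # True
```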
Normal Distribution given a Probability Inequality? | Hints:
$P(|Y-a| < b) = P(a - b < Y < a + b)$ and
$Z = (Y-\mu) / \sigma$ is standard normal where $\mu = 0.3$ and $\sigma = \sqrt{0.7}$. |
Random walk of geometric random variables | Looks like textbook "renewal theory" (see, for example, the Gallager book Discrete Stochastic Processes). Think of $\{T_1, T_2, T_3, \ldots\}$ as i.i.d. inter-arrival times of a process where jobs arrive over time. Then I believe your definition of $\tau(t)$ is just the number of jobs that have arrived during the time interval $[0,t]$. Can we call this $N(t)$ instead? (I like that notation better). So let's define $N(t) = \tau(t)$.
Basic renewal results (mainly just the law of large numbers) can be used to show that $N(t)/t \rightarrow 1/E[T]$ with probability 1. A more complicated result shows that $E[N(t)]/t \rightarrow 1/E[T]$.
In your case, the $\{T_i\}_{i=1}^{\infty}$ times are geometric random variables, which makes things a bit simpler. The arrival process is then equivalent to a discrete time process with i.i.d. Bernoulli arrivals with probability $p$. Every slot we independently get one arrival with probability $p$, else we get no arrival. The expected number of arrivals in $k$ slots is $kp$. Thus:
$$ \boxed{E[N(t)] = tp \: \: \forall t \in \{1, 2, 3, \ldots\}} $$
The discrete nature of the process ensures that there are no arrivals except at integer times, so, for example, $E[N(3.4)] = E[N(3)] = 3p$.
If you changed $\{T_i\}_{i=1}^{\infty}$ to i.i.d. exponential random variables with rate $\lambda$, then $N(t)$ would simply be a Poisson process and $E[N(t)] = \lambda t$ for all $t \geq 0$. |
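A quick simulation of the boxed identity, using the equivalent Bernoulli-arrivals view ($p=0.25$, $t=40$ are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(1)
p, t, runs = 0.25, 40, 100_000
# One Bernoulli(p) arrival indicator per slot; N(t) counts arrivals in slots 1..t
N = rng.binomial(1, p, size=(runs, t)).sum(axis=1)
print(N.mean(), t * p)   # both ~ 10
```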
Intuition behind the construction of a fixed point from Kleene's fixed point theorem | One intuition that would be helpful for coming up with this sort of thing yourself is that these fixed point constructions are related to diagonalization. A very general case of this is Lawvere's fixed point theorem, but that is even essentially just using a very general setting to interpret the sort of fixed point function one would define in (untyped) lambda calculus:
$$λf. (λx. f(x\ x))(λx. f(x\ x))$$
Applied to a given $f$, this satisfies the equivalence:
$$(λx. f(x\ x)) (λx. f(x\ x)) \\
= \\
f ((λx. f(x\ x))(λx. f(x\ x)))
$$
So is a fixed point of $f$. This involves two cases of self-application: $x$ is applied to itself and the subterm $λx. f(x\ x)$ is applied to itself. This double use of self-application is the essence of diagonal arguments.
The fixed point theorem you cite is doing something very similar, but it needs to talk about mediating between functions and their encoding as natural numbers. If we ignore some of that, we get:
$$
f\ x = x\ x \\
v\ n\ x = f\ n\ x = n\ n\ x \\
s\ n\ x = v\ n\ x = n\ n\ x \\
r\ n = h\ (s\ n) = h (λx. n\ n\ x)
$$
So, at this point, $r$ should look very similar to $λn. h(n\ n)$, which is one of the self-applications above. To get the fixed point the usual way, $r$ needs to be applied to itself, which is the reason you should suspect $[s](r)$, as $[s]$ is the self-application function. It is essentially:
$$
s\ r = r\ r = (λn. h(λy. n\ n\ y))\ (λn. h(λy. n\ n\ y))
$$
which is almost the term I wrote above, modulo $f = λx. f\ x$ (the difference in this case has to do with the encoding, I think).
Often these kind of diagonal arguments are used to deduce contradictions. For instance, Russell's paradox and the proof of undecidability of the halting problem carry out this sort of diagonalization to produce a fixed point of logical negation. Since logical negation doesn't have a fixed point, you deduce that the assumptions used to construct the fixed point are false. However, in the case of total computable functions, the premises for carrying out the diagonalization are true, and enable you to construct fixed points that do exist. |
Constructing a function discontinuous precisely in a given closed/open set. | Let $U$ be the open set. Set $f(x)=d(x,\mathbb R \setminus U)\cdot\chi_{U\cap\mathbb Q}(x).$ |
An Inequality for sides and diagonal of convex quadrilateral from AMM | The simplest special case is when $\overline{ABCD}$ is a square with side length $a$. In this case, the diagonals are of equal length and given by $a\sqrt{2}$.
$4a \ge 2a\sqrt{2},\quad 4 \ge 2\sqrt{2},\quad$the conjecture holds.
The next simplest case is when $\overline{ABCD}$ is a rectangle of side lengths $a$ and $b$. In this case the diagonals are still equal and given by $\sqrt{a^2 +b^2}$, so
$2(a+b)\ge 2\sqrt{a^2 + b^2}, \quad a+b\ge \sqrt{a^2 + b^2}, \quad a^2 + 2ab + b^2 \ge a^2 + b^2, \quad 2ab \ge 0, \quad$ and the conjecture holds.
The next case is when $\overline{ABCD}$ is a parallelogram, with sides $a, b$ and interior angles $h, k$. The diagonals can be given by the law of cosines: $d_1 = \sqrt{a^2 + b^2 + 2ab\cos(h)}$, and $d_2 = \sqrt{a^2 + b^2 + 2ab\cos(k)}$. These can both be written in terms of one interior angle, since $h, k \in [0, \pi]$ and $h=\pi-k$, and $\cos(\pi-h) = -\cos(h)$. I.e.: $d_{1,2} = \sqrt{a^2 + b^2 \pm 2ab\cos(h)}$.
$2(a+b) \ge \sqrt{a^2 + b^2 + 2ab\cos(h)} + \sqrt{a^2 +b^2 -2ab\cos(h)}$
$4(a^2 + 2ab + b^2) \ge a^2 + b^2 + 2ab\cos(h) + 2\sqrt{(a^2 + b^2 + 2ab\cos(h))(a^2+b^2 -2ab\cos(h))} + a^2 +b^2 -2ab\cos(h)$
$4a^2 + 8ab + 4b^2 \ge 2a^2 + 2b^2 + 2\sqrt{(a^2 + b^2 + 2ab\cos(h))(a^2+b^2 -2ab\cos(h))}$
$a^2 + 4ab + b^2 \ge \sqrt{(a^2 + b^2 + 2ab\cos(h))(a^2+b^2 -2ab\cos(h))}$
$(a^2 + 4ab + b^2)^2 \ge (a^2 + b^2 + 2ab\cos(h))(a^2+b^2 -2ab\cos(h))$
(in the interest of space, I’m gonna skip the foiling and ask you to trust me that this simplifies down to the following:)
$2a^3b+4a^2b^2+2ab^3\ge-a^2b^2\cos^2(h)$
The $\cos^2(h)$ term is bounded in $[0, 1]$, so the right hand side is always $\le 0$, while the left hand side is strictly positive. The inequality therefore always holds, and the conjecture is verified for parallelograms.
In all of the above cases, $\overline{EF} = 0$, because for all parallelograms, the diagonals intersect at their midpoints.
The next case is a right trapezoid, with sides $b_1, b_2, h,$ and the last side can be given by $\sqrt{h^2 + (b_2-b_1)^2}$. Assume $b_1 \le b_2$. This is the first case in which the line $\overline{EF}$ will come into play. In a right trapezoid, the line $\overline{EF}$ lies on the perpendicular bisector of $h$, and its length is given by $\frac{b_2-b_1}{2}$. And the lengths of the diagonals $d_1, d_2$ can be given by $d_1 = \sqrt{h^2 +b_1^2},$ and $d_2=\sqrt{h^2+b_2^2}$. Finally we can write out our inequality as:
$h + b_1 + b_2 + \sqrt{h^2 + (b_2-b_1)^2} \ge \sqrt{h^2 + b_1^2} + \sqrt{h^2 + b_2^2} +2\frac{(b_2-b_1)}{2}$
But alas, in attempting to solve this inequality I realized I would have to square a four term polynomial. I have already shown geometrically that the conjecture holds for all parallelograms. The next cases to consider are this case of the right trapezoid, then the general trapezoid, and finally the general quadrilateral. If you still want a purely geometric solution, perhaps I’ll come back and give it another shot. |
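In the meantime, here is a random numeric spot-check of the general conjecture, assuming $E,F$ are the midpoints of the diagonals (convex quadrilaterals are sampled as points in order around a circle):

```python
import numpy as np

rng = np.random.default_rng(2)
worst = np.inf
for _ in range(20_000):
    ang = np.sort(rng.uniform(0, 2 * np.pi, 4))
    A, B, C, D = np.column_stack([np.cos(ang), np.sin(ang)])   # vertices in order
    per = sum(np.linalg.norm(u - v) for u, v in [(A, B), (B, C), (C, D), (D, A)])
    d1, d2 = np.linalg.norm(A - C), np.linalg.norm(B - D)
    EF = np.linalg.norm((A + C) / 2 - (B + D) / 2)   # midpoints of the diagonals
    worst = min(worst, per - (d1 + d2 + 2 * EF))
print(worst)   # stays >= 0 (up to float error)
```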
Find all $\alpha$ such that $n^\alpha\chi_{[n,n+1]}$ converges weakly to 0 in $L^p$. | Hints, assuming $1<p<\infty$ (the answer is slightly different for the endpoint cases):
First, if $f_n\to0$ weakly in $L^p$ then $||f_n||_p$ is bounded (by the Uniform Boundedness Principle). Second, if $g\in L^q$ then $\sum\int_n^{n+1}|g|^q<\infty$, hence $\int_n^{n+1}|g|^q\to0$. |
How to differentiate e to a function? | Differentiate complex functions the same way you would with real functions. Treat $i$ as a constant. |
Function Quantification and Set Theory | When studying first-order set theory, there are two separate notions of relation:
Certain kinds of formulas
Certain kinds of sets
Generally, when someone quantifies over relations (or functions or predicates) in this context, they mean the latter.
Explicitly, for any set $S$, set theory defines another set $\mathcal{P}(S)$. The quantifier "for every predicate $x$ on $S$" means "$\forall x \in \mathcal{P}(S)$".
(Yes, $\mathcal{P}(S)$ is usually given as the power set of $S$. A subset $T \subseteq S$ corresponds to the predicate "___ is in $T$")
Set theory is itself a form of higher-order logic. Commonly, when working in first-order set theory, one tends to use the set-theoretic incarnation of logical notions as much as possible. The few occasions when one has to fall back to the ambient first-order language, one often uses different language, such as speaking of "proper classes" or "class functions".
E.g. what you are (or seem to be) calling a "function", a set theorist would call a "class function". |
Regularity of sigma finite measures on borel sigma algebra | Since $\mu$ is $\sigma$-finite on $S$, there exists some sequence of measurable subsets $S_k \uparrow S$ with $\mu(S_k) < \infty$. Each of these measurable subsets itself generates a sigma algebra which we can call $\mathcal{A}_k$, derived from the original Borel sigma algebra on $S$, i.e. $\mathcal{B}(S)$, which we'll just call $\mathcal{A}$ for short henceforth:
$$\mathcal{A}_k = \{E \cap S_k : E \in \mathcal{A} = \mathcal{B}(S)\}$$
You can check that each such $\mathcal{A}_k$ is indeed a sigma algebra on the set $S_k$. Moreover, it can be shown that each $\mathcal{A}_k$ is bounded, because for every $F \in \mathcal{A}_k$ there exists some $F' \in \mathcal{B}(S)$ with $F = F' \cap S_k$. That's just the definition of $\mathcal{A}_k$. And so,
$$\mu(F) = \mu(F' \cap S_k) \leq \mu(S_k) < \infty$$
So, we now have a family of bounded sigma algebras increasing to the original sigma algebra. And we can use continuity of the measure from below to show that if the result holds on all bounded sigma algebras, and thus on each bounded algebra in our sequence, then it follows for the limit of that sequence.
I'll just show you how it follows for one of your results, the other is similar. So you want to show:
$$\mu(B) = \sup_{F \subset B} \mu (F)$$
For all $B \in \mathcal{A}$, where $F$ is open. Start by defining $B_k = B \cap S_k$. Since each $B_k \in \mathcal{A}_k$ and $\mathcal{A}_k$ is bounded, we have by assumption that:
$$\mu(B_k) = \sup_{F \subset B_k} \mu(F)$$
Now, since $S_k \uparrow S$, then $B_k \uparrow B$, so by the continuity of our measure from below:
$$\mu(B_k) \uparrow \mu(B)$$
Now, we have both a supremum and a limit in $\mu$. With both, we can get as close as we want to $\mu(B)$, in other words for any $\epsilon$ we can find an open set $F$ and some $k$ with:
$$F \subset B_k \subset B$$
and $\mu(B) - \epsilon < \mu(F) \le \mu(B)$, which implies that the supremum is indeed $\mu(B)$. Not sure how that only warranted a mere throwaway sentence "We may clearly assume that $\mu$ is bounded" in the image you sent. Maybe there's a more obvious explanation. |
Optimal probability measure | For a finite sequence $Y= (y_0,\dots,y_m)$, let $\left <Y\right>\subset A^{m+1}$,
$$\left<Y\right> = \{ (y_i,\, y_{i+1 \bmod (m+1)},\, \dots,\, y_{i+m \bmod (m+1)}) : 0 \leq i \leq m \},$$
be the set of cyclic shifts of $Y$.
Now let $$\mathscr F = \left\{\left<Y\right> : Y\in A^{m+1}\right\}$$
be the shift invariant $\sigma$-algebra on $A^{m+1}$. As $\mathscr F$ is a partition $\sigma$-algebra we have
$$\mathbb P^m\left[F \left| \mathscr F\right.\right](Y) = \frac 1{m+1} \sum_{\hat Y\in\left<Y\right>} F(\hat Y).$$
Therefore if $\bar Y$ is such that
$\sum_{\hat Y\in\left<\bar Y\right>} F(\hat Y)\geq\sum_{\hat Y\in\left< Y\right>} F(\hat Y)$ for every $Y\in A^{m+1}$ we have
$$\begin{array}{rl}
\mathbb P^m\left[F \right] &= \int_{A^{m+1}} \mathbb P^m\left[F \left| \mathscr F\right.\right](X) d \mathbb P^m(X) \\
&\leq \int_{A^{m+1}} \mathbb P^m\left[F \left| \mathscr F\right.\right](\bar Y) d \mathbb P^m(X) \\
&=\frac 1{m+1}\sum_{\hat Y\in\left<\bar Y\right>} F(\hat Y)
\end{array}$$
So set $\mathbb P$ to be the measure constructed by choosing $(x_0,\dots,x_m)$ uniformly from $\left<\bar Y\right>$ and setting $x_k = x_{k \mod (m+1)}$ for $k> m$ then $\mathbb P$ is shift invariant by construction and
$$\mathbb P^m\left[F \right] =\frac 1{m+1}\sum_{\hat Y\in\left<\bar Y\right>} F(\hat Y)$$
attains its maximal value. |
The permutation/combination puzzle: What are the maximum number of tickets possible in a game of Tambola? | Please note that there is a mistake in the question. The conditions
Column 1 on any ticket has numbers between 1-9
Column 2 on any ticket has numbers between 10-19
Column 3 on any ticket has numbers between 20-29
Column 4 on any ticket has numbers between 30-39
Column 5 on any ticket has numbers between 40-49
Column 6 on any ticket has numbers between 50-59
Column 7 on any ticket has numbers between 60-69
Column 8 on any ticket has numbers between 70-79
Column 9 on any ticket has numbers between 80-90
are wrong, not only because they don’t represent the actual game, but also because it would be strange to allow 9 numbers in the first column, 11 in the last, and 10 in the rest. In the following, I solve the problem with the real life constraints:
Column 1 on any ticket has numbers between 1-10
Column 2 on any ticket has numbers between 11-20
Column 3 on any ticket has numbers between 21-30
Column 4 on any ticket has numbers between 31-40
Column 5 on any ticket has numbers between 41-50
Column 6 on any ticket has numbers between 51-60
Column 7 on any ticket has numbers between 61-70
Column 8 on any ticket has numbers between 71-80
Column 9 on any ticket has numbers between 81-90
In each ticket, we have $k$ columns with 3 numbers, $l$ columns with 2 numbers and $m$ columns with 1 number, with $k+l+m=9$ and $2k+l=6$. For each 3-numbered column, we have $10\cdot9\cdot8$ choices; for each 2-numbered column, we have $10\cdot9$, and for each 1-numbered column, simply $10$. They are independent from each other, so we have $10^{k+l+m}9^{k+l}8^{k}$ choices, and we have $$\binom{9}{k,l,m}={9!\over k!l!m!}$$ ways to assign them to columns. To actually place the numbers in the columns they belong to, we have $3^{m}$ choices for the nonempty cells of 1-numbered columns, and $3^{l}$ choices for the empty cells of 2-numbered columns. They are relevant, because the patterns receiving prizes in the real game are row-based (this might not be apparent from the original question) and different empty places generate functionally distinct tickets.
Note that $0\le k\le 3,\,\,\,l=6-2k,\,\,\,m=k+3$, and putting it all together, we get the number of distinct tombola tickets:
$$\sum_{k=0}^{3}\binom{9}{k,l,m}10^{k+l+m}9^{k+l}8^{k}3^{l+m}=\sum_{k=0}^{3}{9!\over k!(6-2k)!(k+3)!}10^{9}9^{6-k}8^{k}3^{9-k} =$$
$$ = 3548382664428000000000 \approx 3.5\cdot10^{21}$$
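The sum is easy to recompute exactly in Python:

```python
from math import factorial

total = sum(
    factorial(9) // (factorial(k) * factorial(6 - 2 * k) * factorial(k + 3))
    * 10**9 * 9**(6 - k) * 8**k * 3**(9 - k)
    for k in range(4)
)
print(total)          # 3548382664428000000000
print(total * 6**8)   # 5959920297295899648000000000 (the unordered-column count below)
```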
Note that the game is very popular in Italy and there are several sites which produce tickets for neighborhood festivals, church fundraisers, and similar occasions. I don’t link to them because this could be misunderstood as spamming. Just google “cartelle tombola” to find them. What is relevant here is that many of them (possibly around half) don’t follow the constraint of ordered columns. This would increase the number of possible tickets by a factor of $3!^9$, if we count tickets differing only by a permutation of the entire rows as distinct. However, given the rules of the actual game, they would be functionally identical and having two such tickets simultaneously in the game would be an embarrassment. Therefore, I don’t consider them distinct, and to get the number of distinct tombola tickets with unordered columns I multiply the preceding result by $6^8$, to get
$$ 5959920297295899648000000000 \approx 5.9\cdot10^{27}$$
Note: the name of the game is TOM-bola, with the stress on the first syllable. In fact, it is related to the English word tumble. |
Existence of left invariant $n-$forms in a lie Group | You already proved that a left invariant $n$-form would be unique up to a constant. Now, to prove that there indeed exists such a left invariant $n$-form, let $X_1,\ldots,X_n$ be a basis of $T_eG$. Let $\theta^1,\ldots,\theta^n$ be its dual basis in $T_eG^*$. Then $\omega_e=\theta^1\wedge\cdots\wedge \theta^n$ is a non-zero alternating $n$-form on $T_eG$ because $\omega_e(X_1,\ldots,X_n) = 1$. Now, one can define an $n$-form $\omega$ on $G$ by requiring it to be left invariant, that is
$$
\omega_g = \omega_e\left((\mathrm{d}L_{g^{-1}})\cdot,\ldots,\left(\mathrm{d}L_{g^{-1}}\right)\cdot \right)
$$
It is a smooth $n$-form, and it is left invariant by construction. The smoothness follows from the fact that $g \mapsto \mathrm{d}L_{g^{-1}}$ is smooth because $G$ is a Lie group! Then, if $Y$ is a vector field, in the left invariant frame $X_1,\ldots,X_n$ created from the basis on $T_eG$, one can write
$$
Y(g) =\sum_{i=1}^n Y^i(g)\left(\mathrm{d}L_gX_i\right)
$$
and then
$$
\omega_g(Y_1(g),\ldots,Y_n(g)) = \omega_g\left(\sum_{i}Y_1^i(g)X_i(g),\cdots,\sum_{i}Y_n^i(g)X_i(g) \right)
$$
And using linearity and left-invariance of $\omega_g$ and $X_j(g)$, the smoothness follows from the smoothness of the function $g\mapsto Y_j^i(g)$. Hence, $\omega$ is smooth. |
How many min, max and saddle points does $f(x,y) = (x+y)\sin(x-y)$ have? | If, at a critical point $(a, b)$, $f_{xx}f_{yy} − f^2_{xy} < 0$, then $(a, b)$ is a saddle point.
If, at a critical point $(a, b)$, $f_{xx}f_{yy} − f^2_{xy} > 0$, then $(a, b)$ is either a maximum or a minimum.
Distinguish between these as follows:
– if $f_{xx} < 0$ and $f_{yy} < 0$ at $(a, b)$ then $(a, b)$ is a maximum point.
– if $f_{xx} > 0$ and $f_{yy} > 0$ at $(a, b)$ then $(a, b)$ is a minimum point.
Now if $f(x,y) = (x+y) \sin(x-y)$, then
$$f_x = \sin(x-y) + (x+y)\cos(x-y)$$
$$f_{xx} = 2\cos(x-y) - (x+y)\sin(x-y)$$
$$f_{xy} = (x+y)\sin(x-y)$$
$$f_y = \sin(x-y) - (x+y)\cos(x-y)$$
$$f_{yy} = -2\cos(x-y) - (x+y)\sin(x-y)$$
Note that $f_{xx}$ and $f_{yy}$ differ only in the sign of the cosine term. Writing $c=\cos(x-y)$ and $S=(x+y)\sin(x-y)$,
$$f_{xx}f_{yy} − f^2_{xy} = (2c-S)(-2c-S) - S^2 = -4\cos^2(x-y).$$
Critical points:
Adding and subtracting the equations $f_x=0$ and $f_y=0$ gives $\sin(x-y)=0$ and $(x+y)\cos(x-y)=0$, so $x-y=k\pi$ and $x+y=0$; the critical points are $\left(\frac{k\pi}{2},\,-\frac{k\pi}{2}\right)$ for $k\in\mathbb Z$.
At each of these points $\cos^2(x-y)=1$, so $f_{xx}f_{yy}-f^2_{xy}=-4<0$: every critical point is a saddle point. Hence $f$ has infinitely many saddle points and no maximum or minimum points. |
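A symbolic check of the discriminant with SymPy:

```python
from sympy import symbols, sin, diff, simplify

x, y = symbols('x y')
f = (x + y) * sin(x - y)
fxx, fyy = diff(f, x, 2), diff(f, y, 2)
fxy = diff(diff(f, x), y)
print(simplify(fxx * fyy - fxy**2))   # -4*cos(x - y)**2 (possibly in an equivalent form)
```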
Find value of $a$ and $b$ | Hint
$$ \frac {x(1+a\cos x) -b\sin x}{(f(x))^3}= \frac {x(1+a\cos x) -b\sin x}{x^3}\frac{x^3}{(f(x))^3}$$
and
$$\lim_{x\to 0} \frac{x^3}{(f(x))^3}=1$$ |
At what value(s) of $x$ does $\cos x = 5x$? | It's not sophisticated, but you can approximate the answer using a series expansion
$$
\cos(x) = 1-\frac{x^2}{2}+\frac{x^4}{24}-\frac{x^6}{720}+\cdots = 5x
$$
Then, ignore the really small terms...
$$
\cos(x) \approx 1-(x^2/2) = 5x
$$
Then, rearrange to
$$
-x^2 - 10x + 2 = 0
$$
... giving a positive root of $3\sqrt{3}-5$
Checking:
$\cos(3\sqrt{3}-5) = 0.9808237171904401982886706009894$
$5(3\sqrt{3}-5) = 0.98076211353315940291169512258809$
You could get a more accurate answer by including more terms of the series expansion, if you know how to compute the roots of higher order equations like
$$
-x^6+30 x^4-360 x^2-3600 x + 720 = 0
$$
which is the equation you get from including one more term of the series expansion. |
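For comparison, a direct numerical root (using SciPy's bracketing solver):

```python
from math import cos, sqrt
from scipy.optimize import brentq

root = brentq(lambda x: cos(x) - 5 * x, 0, 1)
print(root)             # ~ 0.19616
print(3 * sqrt(3) - 5)  # ~ 0.19615, the series approximation above
```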
Can we factor an integer N if we know some of the digits of an associated number M? | $N = pq$
$M = (pq)(p+q)(0\dots 01) = rs$
It looks to me that you intend to have, somewhat using your notation,
$r = (p)(0\dots 01)$ and $s = (q)(0\dots 01)$
Let's say $r=p10^A+1$ and $s = q10^B + 1$.
If $(p+q)$ is going to be in the middle, then we must have $A=B$. So
$r=p10^A+1$ and $s = q10^A + 1$.
Hence $M = rs = pq10^{2A} + (p+q)10^A + 1$
The diagram below shows why $A$ needs to be the number of digits in $p+q$.
$$M = (pq)
\underbrace{
\overbrace{(p+q)}^\text{A-digits}
\overbrace{(0\dots 01)}^\text{A-digits}}_\text{2A-digits}$$
So
$r=p10^A+1 \; \text{and} \; s = q10^A + 1 \;
\text{where $A$ is the number of digits in $p+q$}$.
You asked
So the real question is: can we build a 9 digit number if we only knew 5
of its digits? And of course, at this point I don't know if the proportion
of unknown to known digits is constant.
Do you mean can we build a $9$-digit $N$ if we only know $5$ of its digits?
Well, if $pq$ is a $B$-digit number, then we need the first $A+B$ digits of $M$ to find what $p$ and $q$ are. This can get tricky, however, if $p+q$ ends in some zeros; for example, $p+q = 320$.
For example, if you have $M=679104001$ but you only see $M = 67910****$, then the only possibility is
$pq = 679$ and $p+q = 10*$.
The only possibilities are $\{p,q\}=\{1, 679\}$ and $\{p,q\}=\{7, 97\}$. It isn't hard to see that the second option is the correct one.
Consider $N = 360$ and $M = 3603**1$, a $7$-digit number.
Then we know M = (360)(3*)(01)
So pq = 360 and p+q = 3*.
Assuming $p<q$, there are only two possibilities for p and q such that p+q = 3*:
\begin{align}
p=\color{red}{15}, q=\color{blue}{24}
&\implies
\left\{
\begin{array}{l}
p+q=\color{green}{39} \\
M=(360)(\color{green}{39})(01) \\
r=(\color{red}{15})(01) \\
s=(\color{blue}{24})(01) \\
\end{array}
\right.
\\
p=\color{red}{18}, q=\color{blue}{20}
&\implies
\left\{
\begin{array}{l}
p+q=\color{green}{38} \\
M=(360)(\color{green}{38})(01) \\
r=(\color{red}{18})(01) \\
s=(\color{blue}{20})(01) \\
\end{array}
\right.
\\
\end{align}
In general, if you know $pq$, then $M=(pq)(p+q)(0\dots01)$, where the middle block $(p+q)$ and the suffix $(0\dots01)$ have the same number of digits. Since you know what $pq$ is, there are only a finite number of possibilities for $p$ and $q$, and hence only a finite number of possibilities for the value of $p+q$.
In general, if you are given $N = pq$, then there are as many possibilities for $r$, $s$, and $M=rs$ as there are factorizations $N=pq$ with $p \le q$. For example, if you know that $pq = 1001$:
\begin{array}{|rrrr|rrr|c|}
\hline
N & p & q & p+q & r & s & M & \% \text{ of digits in M}\\
\hline
1001 & 1 & 1001 & 1002 & 10001 & 10010001 & 100110020001 & 50\\
1001 & 7 & 143 & 150 & 7001 & 143001 & 1001150001 & 60 \\
1001 & 11 & 91 & 102 & 11001 & 91001 & 1001102001 & 60 \\
1001 & 13 & 77 & 90 & 1301 & 7701 & 10019001 & 75 \\
\hline
\end{array} |
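A small sketch in plain Python (no dependencies) that reproduces the table above from $N$ alone:

```python
# For a known N = p*q, list the factor pairs p <= q and the resulting r, s, M.
N = 1001
for p in range(1, int(N**0.5) + 1):
    if N % p == 0:
        q = N // p
        A = len(str(p + q))     # the digit count of p+q fixes the block width
        r = p * 10**A + 1
        s = q * 10**A + 1
        M = r * s               # equals N*10^(2A) + (p+q)*10^A + 1
        print(p, q, p + q, r, s, M)
```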
Complex Manifold Basic Books | Follow Abel's motto: Read The Masters!
Since Kodaira and Grauert are among the undisputed masters in complex manifold theory, I recommend:
A) Complex Manifolds and Deformation of Complex Structures
B) From Holomorphic Functions to Complex Manifolds
The Kodaira book A) is chock-full of examples. You should essentially begin by studying them, which means reading from page 28 to page 59.
The Fritzsche-Grauert book B) is very rich but you should jump as soon as possible to Chapter IV, pages 153 to 171, which are essentially self-contained.
There are many other books on complex manifolds/varieties/several complex variables: Grauert-Remmert, Gunning-Rossi, Hörmander, Huybrechts, Krantz, Narasimhan, Shabat, Taylor, Wells, ... but none is (in my opinion) more accessible than the above two.
Use Maclaurin Series to evaluate the definite integral correct to within an error $\lt 0.0001$ | You were on the right track, but you made a small error. The indefinite integral is
$$\int{1\over1+x^5}dx=\sum_{n=0}^\infty{(-1)^nx^{5n+1}\over5n+1}+C$$
(You had the sum with an $x^{5n}$ instead of $x^{5n+1}$.) Thus
$$\int_0^{0.2}{1\over1+x^5}dx=\sum_{n=0}^\infty{(-1)^n(0.2)^{5n+1}\over5n+1}={1\over5}-{1\over6\cdot5^6}+{1\over11\cdot5^{11}}-\cdots$$
Now the nice thing about an alternating series whose terms decrease in absolute value (which is certainly the case here) is that no matter where you stop, the error is no greater than the first omitted term. So if you want the error to be less than $0.0001$, you just have to look for the first term in the series that is smaller than that. But this happens almost immediately:
$${1\over6\cdot5^6}={1\over93750}\lt{1\over10000}=0.0001$$
So
$$\int_0^{0.2}{1\over1+x^5}dx\approx{1\over5}$$
is good to within $0.0001$. |
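A quick numerical cross-check in plain Python: take enough terms of the alternating series to serve as a reference value, then look at the one-term estimate.

```python
# Partial sums of sum_n (-1)^n (0.2)^(5n+1) / (5n+1).
reference = sum((-1)**n * 0.2**(5*n + 1) / (5*n + 1) for n in range(6))
print(reference)             # 0.1999893...
print(abs(0.2 - reference))  # ~1.07e-5, comfortably below the 1e-4 target
```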
Cubic Equation with one real root | You know that one factor of this cubic equation is $(x-1)$ from the root. Therefore, you can equate the above polynomial to $(x-1)(x^2 + ax +b)$ and use that to get the relationship between $h$ and $k$ on the one side and $a$ and $b$ on the other.
You also know that $(x^2 +ax +b)$ has no real roots. Using the discriminant should give you enough to finish the question. |
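A sketch of the coefficient matching (with the cubic normalized to be monic, an assumption here):
$$(x-1)(x^2+ax+b)=x^3+(a-1)x^2+(b-a)x-b,$$
so comparing coefficients expresses $a$ and $b$ in terms of $h$ and $k$, and the no-real-roots condition on the quadratic factor is the negative discriminant, $a^2-4b<0$.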
Understanding the multiplication of fractions | By definition $\ x = a/b\ $ is the unique solution of $\ b\,x=a$
By definition$\ y = c/d\ $ is the unique solution of $\ d\,y = c$
Multiplying these equations $\,\Rightarrow\, bd\,xy = ac\ $ so $\ xy = (ac)/(bd),\ $ i.e. $\ \dfrac{a}b\dfrac{c}d = \dfrac{ac}{bd}$
Remark $ $ We used basic laws of the underlying ring $\Bbb Z$ (notably '*' is associative & commutative, and $\,b,d\neq0\,\Rightarrow\, bd\neq 0)$
Analogous "reductionist" arguments apply elsewhere, e.g.
By definition $\ x = \sqrt2 \ $ is the unique solution $>0\,$ of $\ x^2 = 2$
By definition $\ y = \sqrt 3\ $ is the unique solution $>0\,$ of $\ y^2 = 3$
Multiplying these equations $\,\Rightarrow\, (xy)^2= 6\ $ thus $\ xy = \sqrt 6,\ $ i.e. $\ \sqrt2 \sqrt 3 = \sqrt 6,\,$ which reduces the multiplication of these algebraic integers to that of integers, just as above we reduced the multiplication of fractions to the multiplication of (pairs of) integers.
Generally the properties of the extended number systems follow from the fact that we desire (so require) it to have the same algebraic structure (e.g. satisfy ring axioms) as the base structure, i.e. we wish the essential arithmetical laws to persist in the extended number system. This was the principle that guided many extensions of number systems historically (e.g. see the Hankel or Peacock Permanence Principle), which nowadays is a basic constituent of the axiomatic method. |
Exterior differentiation of one form on a smooth manifold | Let $w=fdg$ where $f, g$ are $0$-forms
Evaluate the first term
$$dw(V, W)=df\wedge dg(V, W)=df(V)dg(W)-df(W)dg(V)=V(f)W(g)-V(g)W(f)$$
the second term
$$Vw(W)-Ww(V)-w([V,W])$$
$$=V(fdg(W))-W(fdg(V))-fdg([V,W])$$
$$=V(fW(g))-W(fV(g))-f(VW(g)-WV(g))$$
$$=V(f)W(g)+fVW(g)-W(f)V(g)-fWV(g)-fVW(g)+fWV(g)$$
$$=V(f)W(g)-V(g)W(f),$$
so the two expressions agree.
Why does the curvature approach $\infty$ at cusps? | Curvature measures the rate at which the tangent rotates as you move along the curve. A large curvature value means that the tangent is turning very quickly. Or, saying it another way, curvature measures the change in the tangent direction per unit step along the curve. At a cusp, the tangent rotation is infinitely fast because the tangent direction jumps from one value to another in a zero length step.
Another explanation: curvature is the reciprocal of radius of curvature. At a cusp, radius of curvature is zero, so curvature is infinite. |
Show that $b(\ ., \ .)$ it is not coercive. | First note that you have
\begin{align*}b(u, u)
&= \int_0^T((u(t), \ u(t)))dt- \int_0^T(u(t), \ u'(t))dt \\
&= \int_0^T((u(t), \ u(t)))dt- \frac12 \, \int_0^T \frac{d}{dt}(u(t), \ u(t))dt \\
&= \int_0^T((u(t), \ u(t)))dt- \frac12 \, (u(T), \ u(T)) + \frac12 \, (u(0), \ u(0)) \\
&= \int_0^T((u(t), \ u(t)))dt+ \frac12 \, (u(0), \ u(0)).
\end{align*}
From here, you can find easy examples showing that $b$ is not coercive. Try $u(t,x) = u_1(t) \, u_2(x)$ with arbitrary $u_2$ and suitable $u_1$. |
Is $\mathbb P((X,Y)\in A)=\int_{\mathbb R}\mathbb P(X\in A^y\mid Y=y)\mu_Y(dy)$ always true? | Answer: Yes!
Proof.
Let's fix the probability space $(\Omega,\mathcal F,\mathbb P)$. I'm wondering how you define conditional probability on the event $\{Y=y\}$. A natural way to do so is through the conditional expectation (definition with $\sigma$-algebra etc). Let $g$ be a Borel-measurable function such that
$$g(Y)=\mathbb E[\mathbf{1}_{\{(X,Y)\in E\}}\mid \sigma(Y)]\ \ \ \text{a.s. }$$
Then we set
$$\mathbb P(X\in E^y\mid Y=y):=g(y)$$
You ask whether
$$\mathbb P((X,Y)\in E)=\int_\mathbb Rg(y)\,d\mu_Y(y)$$
is true or not.
Let's see, we have using change of measure formula and the definition of conditional expectation
$$\int_\mathbb Rg(y)\,d\mu_Y(y)=\int_\Omega g(Y)\,d\mathbb P=\int_\Omega \mathbf{1}_{\{(X,Y)\in E\}}\,d\mathbb P=\mathbb P((X,Y)\in E)$$
Formula proved! |
Exponential decay leeds to analytic Fourier transform - Paley-Wiener Theorem | Let $d =1$. The condition $|f(x)| e^{|\alpha|x} \in L^1$ implies $F(\omega)=\int_{-\infty}^\infty f(x) e^{-i \omega x}dx$ and all its derivatives $F^k(\omega)=\int_{-\infty}^\infty f(x) (-ix)^k e^{-i \omega x}dx$ converge absolutely and uniformly for $|\Im(\omega)| \le \alpha-\epsilon$. Thus $F(\omega)$ is holomorphic (and hence analytic) for $|\Im(\omega)| < \alpha$.
You could also show directly that its Taylor series $$F(\omega) = \sum_{k=0}^\infty \frac{F^{(k)}(\omega_0)}{k!} (\omega-\omega_0)^k = \sum_{k=0}^\infty \frac{(\omega-\omega_0)^k}{k!} \int_{-\infty}^\infty f(x) (-ix)^k e^{-i \omega_0 x}dx$$
is dominated by
$$\sum_{k=0}^\infty \frac{|\omega-\omega_0|^k}{k!} \int_{-\infty}^\infty |f(x)x^k e^{-i \omega_0 x}|dx,$$
which converges for $|\Im(\omega_0)| + |\omega-\omega_0| < \alpha$.
How can I solve the differential equation $ \frac{\mathrm d^2y}{\mathrm dx^2}(y\frac{\mathrm dx}{\mathrm dy}+x) = -A^2\frac{\mathrm dy}{\mathrm dx} $? | Note that the equation is unchanged if you double $y$, and is also unchanged if you double $x$.
So substitute $y=\exp(w)$ and $x=\exp(t)$. The result will involve $dw/dt$ and $d^2w/dt^2$ but not $w$ because the equation is unchanged if you add a constant to $w$. It won't involve $t$ explicitly because the equation is unchanged if you add a constant to $t$.
The result is a first-order separable DE in $v(t)=dw/dt$ |
Equivalency of simultaneously block diagonalizing two matrices and finding invariant subspaces | I don't know of a reference, but here is a way to see the equivalence.
When you write a matrix in block form, you decompose the underlying space(s) as a direct sum.
You can also see writing a matrix in a given basis this way; the summands are one-dimensional, corresponding to the chosen basis vectors.
If the domain is $\mathbb R^n=D_1\oplus D_2\oplus\dots\oplus D_k$ and the target is $\mathbb R^l=T_1\oplus\dots\oplus T_m$ (as direct sums of linearly independent subspaces), then an $l\times n$ matrix $A$ can be written as a $m\times k$ block matrix.
Since zero-dimensional spaces $D_i$ or $T_i$ should be excluded (they're just silly), we have $l\geq m$ and $n\geq k$ but there are no other constraints.
It makes sense to say that the matrix $A$ is block-diagonal if $k=m$ and all off-diagonal blocks are zero.
Observe that $k=m$ does not imply $n=l$.
Considering $A$ as a mapping $\mathbb R^n\to\mathbb R^l$, this is the same as requiring that $A(D_i)\subset T_i$ for all $i$.
If you had $A(D_i)\not\subset T_i$, then the component of $A(D_i)$ in $T_j$ would be non-zero for some $j\neq i$, meaning that the block at $(j,i)$ is non-zero.
If you have a square matrix, it is often most convenient to choose $D_i=T_i$ for every $i$, and this is what is typically meant by a block matrix.
(Observe that if you change the basis of a matrix, you apply the same change on both sides of the matrix.)
In this block structure block-diagonality means that $A(D_i)\subset D_i$, which means that the space $D_i$ is an invariant subspace for $A$.
That is, block-diagonialization amounts to finding subspaces $D_i$ so that the original space is the direct sum of them and $A(D_i)\subset D_i$ for all $i$.
If you have two matrices that are simultaneously block-diagonal, they both have to satisfy the block-diagonal assumption in the same basis.
(Simultaneity means precisely that the same basis works for both.)
That is, two $n\times n$ matrices $A$ and $B$ are simultaneously block-diagonalized by the subspaces $D_1\oplus\dots\oplus D_k$ if and only if all the spaces are invariant for both:
$A(D_i)\subset D_i$ and $B(D_i)\subset D_i$ for all $i$. |
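As a concrete illustration, a minimal NumPy sketch with a made-up pair of matrices: both $A$ and $B$ below preserve $D_1=\operatorname{span}\{e_1,e_2\}$ and $D_2=\operatorname{span}\{e_3\}$, which is exactly simultaneous block-diagonality in the standard basis.

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [3., 4., 0.],
              [0., 0., 5.]])
B = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 7.]])

D1 = np.eye(3)[:, :2]   # basis of the first subspace, as columns
# A(D1) stays inside D1 exactly when the e3-component of A @ D1 vanishes.
print(np.allclose((A @ D1)[2, :], 0), np.allclose((B @ D1)[2, :], 0))  # True True
```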
Square integrable functions with singularities | In any interval $(a,b)$ take a point $c$ and consider $f(x)=\frac 1 {|x-c|^{t}}$ where $0<t<1/2$. Then $f$ is square integrable with a singilarity. |
Show that $\Omega^1(X) \to \operatorname{Rh}^1(X)$ is injective. | Suppose $\omega = dg$ for some smooth function $g : X \to \mathbb{C}$. As $dg = \partial g + \bar{\partial g}$ and $\omega$ is a $(1, 0)$-form, $\bar{\partial g} = 0$. That is, $g$ is holomorphic and hence constant. Therefore $\omega = dg = 0$. |
Range of $\log(16-4x^2-4y^2-z^2)$ | $$\text{The argument of the log must be positive:}$$
$$\ 16-4x^2-4y^2-z^2>0$$
$$\ 4x^2+4y^2+z^2<16$$
$$\ \frac{x^2}{4}+\frac{y^2}{4}+\frac{z^2}{16}<1$$
$$\ (\frac{x}{2})^2+(\frac{y}{2})^2+(\frac{z}{4})^2<1$$
$$\text {so the domain is a region enclosed in an ellipsoid.}$$
$$\ \\ \text{Now you calculate what happens to the range 'moving in the domain':}$$
$$\text{note that }$$
$$\ log(16-4x^2-4y^2-z^2)=log[16-(4x^2+4y^2+z^2)]=log[16-p(x,y,z)]$$
$$\ \text{and }\ p(x,y,z)≥0 \forall(x,y,z).$$
$$\text{In the origin }p(x,y,z)=0 \text{ so you have the maximum value in the range: }$$
$$\ f(x,y,z)_{max}=log16$$
$$\text{Approaching the surface of the ellipsoid instead, the argument of the log tends to }\ 0^+$$
$$\text{so }\ f(x,y,z)\to -\infty$$
$$\text{The total range should be therefore: }\ (-\infty,log16]$$ |
Pullback in the category of graphs | Your intuition that the pullback "sounds like the "minimal" (actually maximal) compatible multigraph is true, and in fact is true in many more cases.
This is because the pullback of $X\xrightarrow{f}Z\xleftarrow{g}Y$ in any category is the equalizer of the parallel pair $X\times Y \rightrightarrows Z$ given by $f\circ\text{pr}_X$ and $g\circ\text{pr}_Y$.
Specializing to your case of multigraphs:
- the product of $G_1 = (V_1,E_1,r_1)$ and $G_2 = (V_2,E_2,r_2)$ is $(V_1\times V_2,E_1\times E_2,r_1\times r_2)$
- the equalizer of a parallel pair $f,g:G_1\rightrightarrows G_2$ is the maximal subgraph of $G_1$ where $f=g$
Combining these two, we get
- the pullback of $G_1\xrightarrow{f}G\xleftarrow{g}G_2$ is the maximal subgraph of $(V_1\times V_2,E_1\times E_2,r_1\times r_2)$ on which $f\circ\text{pr}_{G_1} = g\circ\text{pr}_{G_2}$
Does this principle have a name? $A\subset B\subset A \implies A=B$ | The inclusion $\subset$ - relation is "anti-symmetric":
$$ A \subset B, B \subset A \implies A=B.$$ |
Definition of a bounded, nonempty subset of real numbers. | A bounded set is one which could be contained in an interval $[a,b]$
It could be finite or infinite, continuous or discrete.
For example the set $\{1,1/2,1/3,...\}$ is a nonempty bounded set because it is contained in $[0,1]$
The interval $(0,1)$ is bounded because it is contained in $[0,1]$
The set $\{1,2,3,4,5\}$ is bounded because it is contained in $[1,5]$ |
Suppose that a function $f\colon[0,1]\rightarrow \mathbb{R}$ is continuous, $f\left(0\right)=0$ | Let $x_0 = \sup \{x \in [0,1]: f(x) = 0\}$. You need to show two things. First, that $f(x_0) = 0$, and second that $f(x) > 0$ for all $x > x_0$. The intermediate value theorem may be helpful for the latter part. |
Contragredient representation of a finite group | If $u,v$ are maps with the same domain and codomain, and $u(x)=v(x)$ for every $x$ in the domain, then $u=v$ as maps. This applies to the symmetries of a function space just the same:
$$\begin{array}{c l} & \forall f\in V^*, & \forall x\in V: & (\rho_s^*f)(\rho_sx)=f(x)=(\sigma_sf)(\rho_sx) \\ \iff & \forall f\in V^*, & \forall v\in V: & (\rho_s^*f)(v)=(\sigma_sf)(v) \\ \iff & \forall f\in V^*: & & \rho_s^*f=\sigma_sf \\ \iff & & &\rho_s^*=\sigma_s. \end{array}$$
Note the substitution $v=\rho_sx$ ($\rho_sV=V$ because $\rho_s$ is always invertible). This applies for every $s\in G$.
The moral: recognize reparametrizations of universal quantification, and recognize that universally quantified equalities in a space pass up higher to equalities in the relevant function space. |
If $H$ is a group of order $6$ and $f:S_n\to H$ surjective homomorphism, then what is $n$? | $n$ has to be greater than $3$, since $|S_n|\gt6$. But $n\lt5$ since $S_n$ has only $A_n$ as a normal subgroup for $n\ge 5$. Therefore $n=4$. |
How to represent the point-wise devision in mathematical symbols? | $$
\left\{X_{ij}\right\}_{nm} \rightarrow \forall i < n \quad \forall j < m
$$
$$
Z_{ij} = \frac{1}{1+e^{-\beta X_{ij}}}
$$ |
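If the goal is implementation rather than notation, the entrywise map needs no indices at all; a sketch assuming NumPy, with made-up shapes:

```python
import numpy as np

X = np.random.randn(4, 3)              # a hypothetical n-by-m matrix
beta = 2.0
Z = 1.0 / (1.0 + np.exp(-beta * X))    # applied entrywise; Z has X's shape
```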
Taylor Series Based Numerical Method for Non-Linear Equations | The Householder (?) methods are a generalization of Newton's and Halley's method with the closed formula
$$
x_{n+1} = x_n+k\frac{(1/f)^{(k-1)}(x_n)}{(1/f)^{(k)}(x_n)}
\text{ i.e., }
e_{n,k}=k\frac{(1/f)^{(k-1)}(x_n)}{(1/f)^{(k)}(x_n)}
$$
Let the local power series of $f$ and $g=1/f$ be $f(x+e)=\sum a_ke^k$ and $g(x+e)=\sum b_ke^k$. The product of the two has to be the constant $1$. If the closest singularity of $g$, i.e., the closest root of $f$ as a complex function, is real (and simple, and the following limit exists, etc.), then it is located at $x_+=x+e$ with
$$
e=\lim_{k\to\infty}\frac{b_{k-1}}{b_k}=\lim_{k\to\infty}k\frac{g^{(k-1)}(x)}{g^{(k)}(x)}
$$
Now introduce $e_k=\frac{b_{k-1}}{b_k}$, so that for $j<k$
$$
b_j=b_k e_k\cdots e_{j+1}
$$
and for $k>1$ the Cauchy product formula reads as
\begin{align}
0&=a_0b_k+a_1b_{k-1}+a_2b_{k-2}+…+a_kb_0
\\
&=a_0b_k+(a_1+a_2e_{k-1}+a_3e_{k-1}e_{k-2}+\cdots+a_ke_{k-1}\cdots e_1)b_{k-1}
\end{align}
so that
$$
e_k=\frac{b_{k-1}}{b_k}=-\frac{a_0}{a_1+e_{k-1}(a_2+e_{k-2}(a_3+\cdots))}
=-\frac{f}{f'+e_{k-1}\left(\frac{f''}{2!}+e_{k-2}\left(\frac{f'''}{3!}+\cdots\left(e_1\frac{f^{(k)}}{k!}\right)\right)\right)}
$$ |
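A minimal sketch of this recursion in plain Python, for $f(x)=x^2-2$ (so $a_0=f(x)$, $a_1=f'(x)$, $a_2=1$, and $a_j=0$ for $j\ge3$); the corrections $e_k$ should approach the offset to the nearest root, $\sqrt2-x$:

```python
def corrections(x, k):
    a = [x * x - 2.0, 2.0 * x, 1.0] + [0.0] * k   # Taylor coefficients of f at x
    e = []                                        # e[m-1] holds e_m
    for m in range(1, k + 1):
        t = a[m]
        for i in range(1, m):            # nest: t <- a_{m-i} + e_i * t
            t = a[m - i] + e[i - 1] * t
        e.append(-a[0] / t)
    return e

print(corrections(1.0, 6))
# [0.5, 0.4, 0.4166..., 0.4137..., 0.4142..., ...] -> sqrt(2) - 1 = 0.41421...
```

The first two values reproduce the Newton and Halley steps at $x=1$.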
the "unit speed" anlogue of the evolute of the curve | Pez, if I understand correctly, you want to know how each polygon vertex $v = (a_i,b_i)$ moves as you move each edge along the edge normal by $\epsilon$?
First, compute the normals $N_1, N_2$ of the two edges adjacent to $v$. Then the inflated vertex position $v'$ is given by
$$v' = v + \frac{N_1+N_2}{\|N_1+N_2\|} \epsilon \sec \frac{\psi}{2},$$
where $\psi$ is the angle between the normals (see the diagram at https://www.dropbox.com/s/glpwplrtim8fkl7/kites.pdf). Applying the cosine half-angle formula I get
$$v' = v + \frac{(N_1+N_2)\epsilon\sqrt{2}}{\|N_1+N_2\|\sqrt{1+N_1\cdot N_2}} = v + \frac{(N_1+N_2)\epsilon}{1+N_1\cdot N_2}.$$
As a sanity check, notice that $(v'-v)\cdot N_1 = \epsilon$ and likewise for $N_2$. |
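A sketch of the final formula in NumPy (the function name and the right-angle test corner are made up for illustration; $N_1, N_2$ are assumed to be unit edge normals):

```python
import numpy as np

def inflate_vertex(v, N1, N2, eps):
    # v' = v + eps * (N1 + N2) / (1 + N1 . N2)
    return v + eps * (N1 + N2) / (1.0 + N1 @ N2)

v  = np.array([0.0, 0.0])
N1 = np.array([0.0, 1.0])
N2 = np.array([1.0, 0.0])            # a right-angle corner
vp = inflate_vertex(v, N1, N2, 0.1)
print((vp - v) @ N1, (vp - v) @ N2)  # both 0.1, matching the sanity check
```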
What are the most prominent uses of transfinite induction outside of set theory? | Here's an example due to Erdos and Hajnal:
Theorem: There is a partition of plane into countably many pieces such that the distance between any two points in the same piece is irrational.
Corollary: Every non (Lebesgue) null subset $X$ of plane contains a non null subset $Y$ such that the distance between any two points of $Y$ is irrational.
Open question: Can we strengthen the corollary to: Every non null subset $X$ of plane contains a subset $Y$ of same outer measure as $X$ such that the distance between any two points of $Y$ is irrational?
Proof of theorem: By induction on $\kappa$, we show that
$(\star)$: Whenever $X \subseteq \mathbb{R}^2$ has size $\kappa$, there is a well ordering $\preceq$ of $X$ such that for every $x \in X$, the set of $\preceq$-predecessors of $x$ which are at a rational distance from $x$ is finite.
Note that $(\star)$ suffices to construct a rational distance free partition of $X$ into countably many subsets.
When $\kappa \leq \omega$, this is obvious. So assume this is true for all $X \subseteq \mathbb{R}^2$ such that $|X| < \kappa$ where $\kappa \geq \aleph_1$. Let $|X| = \kappa$. Inductively construct $\langle X_i : i < \kappa \rangle$ such that the following hold:
(0) $X_i$'s are increasing continuous and $X = \bigcup \{X_i : i < \kappa\}$
(1) $|X_i| = \max(\aleph_0, |i|)$
(2) Whenever $x \neq y$ are from $X_i$, $z \in X$, and the distance of $z$ from each of $x, y$ is rational, then $z \in X_i$
Let $\preceq_i$ be a well order on $X_i$ witnessing $(\star)$. Define a well order $\preceq$ on $X$ as follows: If $x, y \in X_i \setminus \bigcup \{X_j : j < i\}$, then $x \preceq y$ iff $x \preceq_i y$. If $x \in X_i \setminus \bigcup \{X_j : j < i\}, y \in \bigcup \{X_j : j < i\}$, then $y \preceq x$. It is easy to check that $\preceq$ witnesses $(\star)$.
Komjath extended this to $\mathbb{R}^n$ for every $n$. The proof is slightly more complicated - $(\star)$ is replaced by a different statement which is again proved by transfinite induction (Note that $(\star)$ is false when $n \geq 3$). |
Finding the limit of $2^{-1/\sqrt {n}}$ as $n \rightarrow \infty$. | Since $2^x$ is continuous, $\displaystyle{\lim_{n \to \infty}} 2 ^{-\frac{1}{\sqrt{n}}} = 2^{\displaystyle{\lim_{n \to \infty}} -\frac{1}{\sqrt{n}}}= 2^0 = 1$ |
Kronecker Products and Powers notation | $ x^{\otimes\mkern1.5mu n}=\underbrace{x\otimes x\otimes \dots\otimes x}_{n\enspace\text{times}} $ |
Where did I go wrong in my calculation? (Complex numbers) | If I understand your picture correctly, the answer that you got is$$2\left(\cos\left(\frac{7\pi}6\right)+\sin\left(\frac{7\pi}6\right)i\right),\tag1$$whereas Maple got$$2\left(\cos\left(-\frac{5\pi}6\right)+\sin\left(-\frac{5\pi}6\right)i\right).\tag2$$Am I right? If so, what's the problem? After all, $(1)=(2)$. |
Show that $\boldsymbol{\mathrm{F}}$ is independent of path. | As you showed, the vector field is conservative, so it doesn't matter which path you take, the only thing you need are the starting and end point.
First, as $\mathrm{F}$ is conservative, you have to calculate a function $f$ such that $\nabla f=\mathrm{F}$. An easy way to do this is using this formula:
$$\displaystyle f(x,y) = \int_{0}^{x}\mathrm{F}_{1}(t,0)dt + \int_{0}^{y}\mathrm{F}_2(x,t)dt$$
where $\mathrm{F}_1,\mathrm{F}_2$ are the first and second components of the vector field $\mathrm{F}$. Therefore $$\displaystyle \int_{C}\mathrm{F}\cdot dr = f(x,y)\Big|^{r_1}_{r_0}$$
where $r_1$ and $r_0$ are the end and starting points, respectively.
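For instance, with the hypothetical field $\mathrm{F}(x,y)=(2xy,\;x^2)$ the formula gives
$$f(x,y)=\int_0^x \mathrm{F}_1(t,0)\,dt+\int_0^y \mathrm{F}_2(x,t)\,dt=\int_0^x 0\,dt+\int_0^y x^2\,dt=x^2y,$$
and indeed $\nabla(x^2y)=(2xy,\,x^2)=\mathrm{F}$.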
Why is the $n$-sphere a Smooth Manifold? | Write $f(x)=\|x\|^2$. This is a smooth map $\mathbb{R^{n+1}}\rightarrow\mathbb{R}$. The $n$-sphere is given as
$$S^n = \{x\in\mathbb{R}^{n+1}\,:\,\|x\|^2=1\} = f^{-1}(1)$$
Since $1$ is a regular value of $f$ (check it!), $S^n$ is a smooth $n$-dimensional submanifold of $\mathbb{R}^{n+1}$ by the submanifold (regular value) theorem.
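Checking that $1$ is a regular value takes one line: for $x\in f^{-1}(1)$,
$$df_x(v)=2\langle x,v\rangle,\qquad df_x(x)=2\|x\|^2=2\neq0,$$
so $df_x$ is surjective onto $\mathbb{R}$ at every point of the level set.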
Linearly independent matrix proof | If S is a linear independent set, then $a_1u_1+a_2u_2+...+a_nu_n=0$ implies all $a_i=0$ by definition. Thus
$$b_1u_1+b_2\sum_i^2u_i+...+b_n\sum_i^nu_i=\sum_i^nb_iu_1+\sum_i^{n-1}b_iu_2+...+b_nu_n=0$$
implies $\sum_i^kb_i=0$ for all $k$, namely $b_i=0$ for all $i$. That is, S' is also a linear independent set.
The reverse direction is similar and easy now. |
How can I integrate $\int{1\over 2x+2}$ | You can do this
$$\int{1\over 2x+2}dx = \frac 12\int{dx\over x+1} = \frac{1}{2}\int{\frac{du}{u}} = \frac{1}{2}\ln|u| + c$$
where $u = x + 1$, $du = dx$. You know that
$$ \int{\frac{du}{u}} = \ln|u| + c $$
Prove the definition of the arcsin(s). | You can use the fact that$$\arcsin'(x)=\frac1{\sin'(\arcsin x)}=\frac1{\cos(\arcsin x)}=\frac1{\sqrt{1-x^2}}.$$So, and since $\arcsin(0)=0$, you get that$$\arcsin s=\int_0^s\frac{\mathrm dx}{\sqrt{1-x^2}}.$$ |
continuity of the derivative under certain conditions | You can use l'Hospital by continuity of $\,f\,$ at $\,x_0\,$:
$$f'(x_0):=\lim_{x\to\ x_o}\frac{f(x)-f(x_0)}{x-x_0}\stackrel{\text{l'H}}=\lim_{x\to x_0}f'(x)$$
since you know the rightmost limit exists (finitely, I suppose), the limit defining the leftmost term exists (finitely), and this says $\,f'(x_0)\,$ exists. |
If $f$ is continuous and $3\geq f(x)\geq 1$ for all $x\in[0,1]$, show the integral inequality. | By C-S
$$ \int\limits_0^1f(x)dx\int\limits_0^1\frac{1}{f(x)}dx \geq \left(\int\limits_0^11dx\right)^2=1.$$
For the proof of the right inequality, use Schweitzer's inequality.
See here:
http://www.ssmrmh.ro/wp-content/uploads/2016/08/INTEGRAL-FORMS-FOR-SCHWEITZERS-AND-POLYA-SZEGOS-INEQUALITIES.pdf |
Minimum length of a nontrivial word in $[[F_n,F_n],F_n]$ | There are no such elements of length $4$ or $6$.
As I said in my comment, the only length $4$ word in $\gamma_2(F_n) = [F_n,F_n]$ is $aba^{-1}b^{-1}$ where $a^{\pm 1}$ and $b^{\pm 1}$ are free generators, and that does not lie in $\gamma_3(F_n)$.
By considering cyclic conjugates, we can see that the only essentially distinct words of reduced length $6$ in $\gamma_2(F_n)$ are $a^2ba^{-2}b^{-1}$, $aba^{-1}cb^{-1}c^{-1}$, and $abca^{-1}b^{-1}c^{-1}$, where $c^{\pm 1}$ is a third free generator.
The first of these has nontrivial image in the quotient $P = \langle a,b \mid a^{9}=b^3=1,\ a^b=a^4 \rangle$ of $F_n$, a group of order $27$ in which $\gamma_3(P)=1$, so it does not lie in $\gamma_3(F_n)$.
For the other two words, we can consider their images in the quotient $H = \langle a,b,c,\ldots \mid b=c \rangle$ of $F_n$, which is free of rank $n-1$, and we get the words
$aba^{-1}b^{-1}$ and $ab^2a^{-1}b^{-2}$, neither of which lie in $\gamma_3(H)$.
So there are no words of length less than $8$ in $\gamma_3(F_n)$. Of course this type or argument will only work in relatively straightforward examples. I don't know how you would go about finding the minimal length of a word in $\gamma_k(F_n)$ for higher values of $k$. |
Another question in Bott-Tu. | I was going to write a comment: What Thomas Rot said plus "the pullback along the inclusion $V\times F\to U\times F$ commutes with the Künneth-isomorphism".
This however deserves some explanations.
Note that
We can choose an isomorphism $\varphi\colon \pi^{-1}U\to U\times F$ which is compatible with $\pi$. (More precisely, $\varphi$ shall factorise $\pi|_{\pi^{-1}U}$ over the canonical projection $U\times F\to U$.) In particular, under $\varphi$ the inclusion $\pi^{-1}V\subset\pi^{-1}U$ corresponds to $V\times F\subset U\times F$.
Thus, we can reduce the problem to the case where $\pi$ is the projection $U\times F\to U$.
The inclusion $V\to U$ induces isomorphisms on cohomology groups, simply because both are contractible: all these groups are trivial except $H^0$, and that case is easy.
(I don't know what kind of cohomology you want to use; I've read De Rham somewhere, that's fine. I'm not going to use it explicitly anyway, but let's say we have coefficients in a field if you want to use something like singular cohomology instead.)
Now we can finally see that the pullback is in fact an isomorphism. From the projections $U\times F \to U$ and $U\times F\to F$, we get homomorphisms $H^*(U)\to H^*(U\times F)$ and $ H^*(F)\to H^*(U\times F)$ respectively (and likewise for $V$).
Using these, for each $k,l$ with $n = k+l$ we get a homomorphism $H^k(U)\otimes H^l(F)\to H^n(U\times F)$ and likewise for $V$ and it is easy to see that these maps make the diagram
$$\require{AMScd}
\begin{CD}
H^k(V)\otimes H^l(F) @>>{i^*\otimes \mathrm{id}}> H^k(U)\otimes H^l(F)\\
@VVV @VVV \\
H^n(V\times F) @>{j^*}>> H^n(U\times F)
\end{CD}
$$
commute, where $i$ and $j$ are the canonical inclusions.
By our second note, the above horizontal map is an isomorphism, and so is the sum of all of them, $$\bigoplus_{k+l = n}H^k(V)\otimes H^l(F) \to \bigoplus_{k+l = n}H^k(U)\otimes H^l(F).$$
(On both sides, all but one of the summands is trivial, where the non-trivial ones are $H^0(V)\otimes H^n(F)\cong H^n(F)\cong H^0(U)\otimes H^n(F)$.)
To the very end, by the Künneth theorem, the vertical homomorphisms in the commutative diagram
$$\require{AMScd}
\begin{CD}
\bigoplus_{k+l = n}H^k(V)\otimes H^l(F) @>{}>> \bigoplus_{k+l = n}H^k(U)\otimes H^l(F)\\
@VVV @VVV \\
H^n(V\times F) @>{}>> H^n(U\times F)
\end{CD}
$$
are also isomorphisms. Hence so is the bottom map and this is what we wanted to prove. |
Why are Lebesgue measurable sets defined in connection to intersections with other sets? | Lebesgue's outer measure $m^*$ is constructed as an attempt to extend the notion of length of an interval, in a coherent way, to all of $\mathcal P(\mathbb R)$.
But then Vitali sets exist, which show that $m^*$ cannot be a measure on all of $\mathcal P(\mathbb R)$. Caratheodory's criterion gives you a family of subsets $\mathcal M$ that satisfies three things:
- restricted to $\mathcal M$, the outer measure $m^*$ is a measure
- $\mathcal M$ is a $\sigma$-algebra
- $\mathcal M$ contains all open sets, and thus the Borel $\sigma$-algebra
The three points together give us a big enough domain (bigger than the Borel $\sigma$-algebra) on which the idea of $m^*$ works and actually provides a measure: the Lebesgue measure.
On the question about how Caratheodory came up with such an idea, I cannot really say. With the criterion already in place, one may think that the motivation is to take sets where $m^*$ is additive. |
Show that $ \forall{x}\exists{y}{(P(x) \to Q(y))} \vdash \exists{y}\forall{x}{(P(x) \to Q(y))} $ | A proof of the problem given, adapted from @Bram28's answer |
Marginal distribution of $X$ when $X|m \sim Pois(m)$ and $M \sim \Gamma (2,1) $ | Looks almost fine to me. Note that it is indeed continuous as function of $m$ but it is not continuous as function of $x$. However, do note that what you calculated here is the joint pdf of $X$ and $M$. What you should want is the density of only $x$. As you usually would approach, this is done by integrating out $m$, i.e.,
$$P(X=x) = \int_{0}^{\infty} P(X=x|M=m)\cdot f_M(m)\,\mathrm{d} m. $$
I think this will answer your question. I leave the calculations up to you. However, if you get stuck again, let me know in the comments where. |
How to graph the equation: $y=\frac {x-2}{x+1}$? | Rewrite as $y=1-3/(x+1)$, or $(y-1)= -3/(x+1)$
This is simply the standard reciprocal graph, with the axes relabeled and rescaled. The standard $Y$-axis, representing $X=0$, is now marked $x=-1$; that is, the $x$-numbers are moved one to the right, so the old $X$-axis labels $0,1,2,3$ become $-1, 0, 1, 2,\dots$
The old $X$-axis, or $Y=0$, becomes $y=1$, and we rescale by a factor of $-3$. So the old $Y$-coordinates $0, 1, 2, 3,\dots$ become $1, -2, -5, -8,\dots$
And that's it. |