Inverse of a multi-variable function | I haven't studied all of the technicalities behind this, but here are my two cents.
Think about how this works in the $\mathbb{R} \to \mathbb{R}$ case: you want to show that for every $y$, there is only one $x$. In the algebraic procedure, it's a clean $x \leftrightarrow y$ swap: solve for $y$, and you get the inverse function.
In the $\mathbb{R}^2 \to \mathbb{R}^2$ case (say $(x, y) \mapsto (z_1, z_2)$), you want to show for every $(z_1, z_2)$, there is only one $(x, y)$. There isn't an easy way to do this, because there's not an easy one-to-one correspondence between one of the variables in $(x, y)$ and another one of the variables in $(z_1, z_2)$. You can't, for example, necessarily say that $x \leftrightarrow z_1$ and $y \leftrightarrow z_2$ are valid swaps, because $z_1$ and $z_2$ are both dependent on $x$ and $y$. Thus, the best you can do is manipulate $z_1$ and $z_2$ until you get $x$ and $y$ in terms of $z_1$ and $z_2$.
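As a concrete illustration (my own made-up example): if $z_1 = x + y$ and $z_2 = x - y$, then adding and subtracting the two equations gives $$x = \frac{z_1 + z_2}{2}, \qquad y = \frac{z_1 - z_2}{2},$$ which is exactly the kind of simultaneous manipulation described above. |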
Expressing Δf with an integral of a differential (infinitesimal) | This is just the fundamental theorem of calculus. If $\Delta x$ is a finite distance on the $x$-axis, then the corresponding change in $f$ is simply $$\Delta f = f(x + \Delta x) - f(x) = \int_x^{x + \Delta x} f'(t)\, dt.$$
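For instance (my own check): with $f(x) = x^2$, the right-hand side is $\int_x^{x+\Delta x} 2t\,dt = (x+\Delta x)^2 - x^2 = 2x\,\Delta x + (\Delta x)^2$, which is indeed $\Delta f$. |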
A stick of unit length is cut into three pieces of lengths $X, Y, Z$ according to its length in a sequence of two cuts. Find Cov(X,Y). | The length $S$ of the shorter piece is uniform over $\left[0,\frac12\right]$. Given $S=s$, the length $X$ of the shorter piece in the second cut is uniform over $\left[0,\frac s2\right]$, and the length $Y$ of the longer piece in the second cut is uniform over $\left[\frac s2,s\right]$. Thus we have
\begin{align}
\mathbb E[X]&=2\int_0^\frac12\mathrm ds\,\frac14s=\frac1{16}\;,\\
\mathbb E[Y]&=2\int_0^\frac12\mathrm ds\,\frac34s=\frac3{16}\;,\\
\mathbb E[XY]&=2\int_0^\frac12\mathrm ds\,\frac2s\int_0^\frac s2\mathrm dx\,x(s-x)\\
&=2\int_0^\frac12\mathrm ds\,\frac2s\left(\frac18s^3-\frac1{24}s^3\right)\\
&=2\int_0^\frac12\mathrm ds\,\frac{s^2}6\\
&=\frac1{72}\;,
\end{align}
and the covariance is
\begin{align}
\operatorname{Cov}(X,Y)&=\mathbb E[XY]-\mathbb E[X]\mathbb E[Y]\\
&=\frac1{72}-\frac1{16}\cdot\frac3{16}\\
&=\frac5{2304}\\
&\approx0.002\;.
\end{align}
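A quick Monte Carlo check of this value (my own sketch of the model above, in R):
set.seed(1)
u <- runif(1e6)        # first cut point on the unit stick
s <- pmin(u, 1 - u)    # shorter piece, uniform on [0, 1/2]
v <- s * runif(1e6)    # second cut point inside the shorter piece
x <- pmin(v, s - v)    # shorter piece of the second cut
y <- pmax(v, s - v)    # longer piece of the second cut
cov(x, y)              # about 0.00217, close to 5/2304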
So the correlation from the first cut slightly outweighs the anticorrelation from the second cut. |
What does $~ u(\cdot, t)$ mean when referring to a function? | This indicates that we are viewing $u$ as a function of the first variable only, with the second variable fixed at $t$.
Another way to say this is that if $u$ is a function from $\mathbb{R}^2$ to $\mathbb{R}$, and $t \in \mathbb{R}$, then $u(\cdot,t)$ is another name for the function from $\mathbb{R}$ to $\mathbb{R}$ which we might define by $f(x) = u(x,t)$.
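A minimal illustration in R (my own toy example; the particular $u$ is an arbitrary assumption for demonstration):
u <- function(x, t) x^2 + t   # some two-variable function
t0 <- 3
f <- function(x) u(x, t0)     # f is the function written u(., t0)
f(2)                          # same as u(2, 3), namely 7
Here f is precisely the one-variable function denoted $u(\cdot, t_0)$. |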
What are the triangle free graphs on $\lfloor\frac{n^2}{4}\rfloor$ edges. | This is a special case of Turán's theorem for $r=2$. The unique (up to isomorphism) edge-maximal triangle-free graph is a Turán graph, which is a complete bipartite graph on $n$ vertices such that the sizes of the parts differ by at most one. |
Existence of a bounded function on the disk with specified values | Here's another way. If we let $g(z) = [f(z)]^2 -(1-z)^2$, then $g$ is bounded on the unit disk and $g(x_n)=0$ for all $x_n = 1-\frac{1}{n}$. By the Blaschke condition (here $\sum_n (1-x_n) = \sum_n \frac1n$ diverges), we must have $g \equiv 0$. But this implies $h(z)= \frac{f(z)}{1-z}$ is a continuous function taking values in the disconnected set $\{-1,1\}$, so $h$ must be constant. This contradicts $h(x_n)=(-1)^n$. |
$\Delta \vec{v}=0$ implies $\nabla\cdot \vec{v}=\nabla\times \vec{v}=0$? | I assume the statement $\nabla \cdot \vec v = \nabla \times \vec v = 0$ is a minor abuse of notation, since as pointed out in the comments, $\nabla \cdot \vec v$ is a scalar but $\nabla \times \vec v$ is a vector. Furthermore, I assume that $\vec v$ is a $C^2$ vector field on $\Bbb R^3$, so that all the derivatives make sense.
These things being said, here is a proof which, though not directly employing the given hint, goes a little further than requested and shows that $\vec v = 0$ identically, from which the required results immediately and trivially follow.
By $\Delta$ we mean $\nabla^2$, and so $\Delta \vec v = \nabla^2 \vec v$, where by $\nabla^2 \vec v$ we mean, in a global Cartesian coordinate system, that $\nabla^2$ is to be applied to each component of $\vec v$ separately, so that taking
$\vec v = (v_1, v_2, v_3) \tag{1}$
we have
$\nabla^2 \vec v = (\nabla^2 v_1, \nabla^2 v_2, \nabla^2 v_3). \tag{2}$
(2) may also be directly validated by working through our OP Dan Smith's given equation
$\nabla^2 \vec v = \Delta \vec v = \nabla (\nabla \cdot \vec v) - \nabla \times (\nabla \times \vec v) \tag{3}$
in Cartesian coordinates, a straightforward task requiring mainly careful bookkeeping of indices:
$\nabla \times \vec v = (v_{3, 2} - v_{2, 3}, v_{1, 3} - v_{3, 1}, v_{2, 1} - v_{1, 2}) \tag{4}$
where we have used the notation
$v_{1, 2} = \dfrac{\partial v_1}{\partial x_2} \tag{5}$
and so forth, where $x_1$, $x_2$, $x_3$ are the coordinates on $\Bbb R^3$. From (4) it follows that the $1$-component of $\nabla \times (\nabla \times \vec v)$ is
$(\nabla \times (\nabla \times \vec v))_1 = v_{2, 12} - v_{1, 22} - v_{1, 33} + v_{3, 13}, \tag{6}$
and the $1$-component of $\nabla(\nabla \cdot \vec v)$ is
$(\nabla(\nabla \cdot \vec v))_1 = v_{1, 11} + v_{2, 21} + v_{3, 31}; \tag{7}$
we see from (6) and (7) that
$(\nabla(\nabla \cdot \vec v) - \nabla \times (\nabla \times \vec v))_1 = v_{1, 11} + v_{1, 22} + v_{1, 33}, \tag{8}$
and the corresponding results hold for the $2$ and $3$ components, so
$\nabla^2 \vec v = \Delta \vec v = (\nabla^2 v_1, \nabla^2 v_2, \nabla^2 v_3) \tag{9}$
as claimed. It follows then from $\Delta \vec v = 0$ that $\nabla^2 v_i = 0$ for $1 \le i \le 3$; each component of $\vec v$ is a harmonic function on $\Bbb R^3$. But a harmonic function which vanishes outside of a bounded set such as $V$ must be identically zero; this follows from the mean value property of harmonic functions $u$, which in the present circumstances states that for $p \in \Bbb R^3$
$u(p) = \dfrac{1}{4 \pi R^2}\int_{\partial B(p, R)} u dS, \tag{10}$
where $B(p, R)$ is the ball of radius $R$ centered at $p$, and $dS$ is the surface area measure on the sphere bounding said ball. By taking $R$ sufficiently large for any particular $p$, the sphere $\partial B(p, R)$ lies entirely outside the bounded set $V$, so $u$ vanishes on it and hence the integral in (10) vanishes, giving $u(p) = 0$. Thus we see that $\vec v = 0$ everywhere, whence in particular $\nabla \cdot \vec v = 0$ and $\nabla \times \vec v = 0$ as well. QED.
Hope this helps. Cheers,
and as always,
Fiat Lux!!! |
What will be the value of $h'(1)?$ | Let $h(x)$ be given by the integral
$$h(x)=\int_0^x \int_0^x f(u,v)\,du\,dv$$
Then, using Leibniz's Rule for differentiating under the integral sign, the derivative $h'(x)$ is given by
$$\bbox[5px,border:2px solid #C0A000]{h'(x)=\int_0^x f(u,x)\,du+\int_0^x f(x,v)\,dv} \tag 1$$
To see this perhaps a bit more easily, we let $F(x,v)\equiv \int_0^x f(u,v)\,du$. Then, we can write
$$h(x)=\int_0^x F(x,v)\,dv$$
Now, using Leibniz's Rule, we see that
$$h'(x)=F(x,x)+\int_0^x \frac{\partial F(x,v)}{\partial x}\,dv \tag 2$$
Using $F(x,x)=\int_0^x f(u,x)\,du$ and $\frac{\partial F(x,v)}{\partial x}=f(x,v)$ in $(2)$ reveals that
$$\bbox[5px,border:2px solid #C0A000]{h'(x)=\int_0^x f(u,x)\,du+\int_0^x f(x,v)\,dv}$$
which agrees with the result in $(1)$.
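As a quick sanity check (my own example): take $f(u,v) = u+v$. Then $h(x)=\int_0^x\int_0^x (u+v)\,du\,dv = x^3$, so $h'(x)=3x^2$, while the boxed formula gives $\int_0^x (u+x)\,du+\int_0^x (x+v)\,dv = \tfrac{3x^2}{2}+\tfrac{3x^2}{2} = 3x^2$, in agreement. |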
Upper bound on minimum number of moves to solve the $m\times n$ sliding puzzle | As noted in the comments, the asymptotic bound is precisely $\Theta(n^3)$ (for the general case, with $m\geq n$, it's actually $\Theta(m^2n)$; for clarity's sake I'm just considering $m=n$, but it's easy to adapt most of these arguments to the general case). The '180 flip' position establishes the lower bound, since there are $n^2-(\frac{n}{3})^2 = \frac{8}{9}n^2$ tiles in the outer third of rows and columns (that is, with $x$ or $y$ coordinate either $\lt n/3$ or $\gt 2n/3$) and each of those tiles must move at least $n/3$ places to get to its final position. Any one move can only change the distance-from-target of one tile by one unit, so that sets the $\Omega(n^3)$ lower bound.
The upper bound comes from the standard 'fill cells one by one' approach. Since each tile is no more than $2n$ cells from its final position and moving it a single square takes $O(1)$ moves (for instance, to move a cell into an empty square above it and leave the empty square above it, move the empty square D,R,U,U,L) we can fill all but one of the squares in a row in $O(n^2)$ time without having to displace any already-set tiles. The final square in a row takes more care, but that one can be filled by shifting the leftmost tile in a row down, shifting the rest of the row right, setting the final tile into either of the two rightmost squares (without disturbing any of the other cells in the row) and then shifting the rest of the row back; this is a complicated process but only adds $O(n)$ moves per row. A similar approach sets the bottom row once the rest of the puzzle is complete: $O(n)$ moves suffice to move any three tiles from the bottom row into a 'canonical position' in the bottom-right corner, perform a standard 3-cycle on them, and then undo the movements to bring them back to their original locations. Since any even permutation of the tiles in the bottom row can be expressed as a product of $O(n)$ 3-cycles, this adds $O(n^2)$ time to the total effort. In the $m\geq n$ case, the above procedure should yield $O(m^2n)$, the same as your estimate.
'All' that's left at this point is the hardest part of the problem: establishing the constant on the $n^3$ term... |
Showing a metric space is not complete. | Let $n \in \mathbb{N}$, and define
$ f_n(x) = \left\{\def\arraystretch{1.2}%
\begin{array}{@{}c@{\quad}l@{}}
0 & \text{if } 0 \leq x \leq 1/2\\
2nx - n & \text{if } 1/2 < x \leq (n + 1)/(2n)\\
1 & \text{if } (n + 1)/(2n) < x \leq 1\\
\end{array}\right.$
Then $\{f_n\}$ is a Cauchy sequence whose limit is
$ f(x) = \left\{\def\arraystretch{1.2}%
\begin{array}{@{}c@{\quad}l@{}}
0 & \text{if } 0 \leq x \leq 1/2\\
1 & \text{if } 1/2 < x \leq 1\\
\end{array}\right.$
because $d(f,f_n) = \int_0^1 |f(x) - f_n(x)|\, dx = 1/(4n)$.
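To spell out that last integral (my own verification): $f$ and $f_n$ differ only for $\frac12 < x \le \frac{n+1}{2n}$, where $|f(x)-f_n(x)| = 1-(2nx-n)$, so the distance is the area of a triangle with base $\frac{1}{2n}$ and height $1$, namely $\frac{1}{4n}$, which tends to $0$. |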
What are the rules used for this integral representation? | For fixed $w$, we consider $w^{-\frac{\alpha}{\beta}}\displaystyle\int_0^w \frac{z^{\frac{\alpha}{\beta}-1}}{1-z} \mathrm{d} z$, which you called $f(w)$.
Let $z=wu$. Hence, if $z=w$, then $u=1$; if $z=0$, then $u=0$. Also, $\mathrm{d} z=w\,\mathrm{d}u$. On the other hand,
$$\frac{z^{\frac{\alpha}{\beta}-1}}{1-z}=\frac{w^{\frac{\alpha}{\beta}-1}u^{\frac{\alpha}{\beta}-1}}{1-wu}.$$
Now putting all these calculations together, we obtain
$$w^{-\frac{\alpha}{\beta}}\displaystyle\int_0^w \frac{z^{\frac{\alpha}{\beta}-1}}{1-z} \mathrm{d} z=w^{-\frac{\alpha}{\beta}}\int_0^1\frac{w^{\frac{\alpha}{\beta}-1}u^{\frac{\alpha}{\beta}-1}}{1-wu}\cdot w\,\mathrm{d}u=\int_0^1\frac{u^{\frac{\alpha}{\beta}-1}}{1-wu} \mathrm{d}u,$$
as required. |
Show that there is a continuous function $g$ on $[-\pi,\pi]$ satisfying $||f-g||_{2} < \epsilon$ | Remark: You don't need to use very powerful machinery to prove this statement, i.e. Weierstrass's second approximation theorem.
Hint: Fix $\varepsilon>0$. Since $f$ is Riemann integrable, there exists a partition of $[-\pi, \pi]$, say $\mathcal{P}=\{x_0=-\pi, x_1, \ldots, x_{N-1}, x_N=\pi\}$, such that $\sum^N_{i=1} M_i\Delta x_i-\sum^N_{i=1} m_i\Delta x_i<\varepsilon$, where $M_i$ is the supremum of $f$ on $[x_{i-1}, x_i]$ and $m_i$ the infimum. Then define
\begin{align}
m_i\leq g(t) := \frac{x_i-t}{\Delta x_i}f(x_{i-1})+\frac{t-x_{i-1}}{\Delta x_i}f(x_i) \leq M_i
\end{align}
if $t \in [x_{i-1}, x_i]$, i.e. you linearly interpolated the points. Observe $g$ is continuous on $[-\pi, \pi]$. |
Prove that the family $\{l_x :\, x \in X\}$ is a basis for $V^*$. | Since $X$ is a finite set, let $x_1,x_2,\dots,x_n$ be the distinct elements of $X$. Now, for each $i$ between $1$ and $n$, consider the function $f_i \in V$ defined by
$$f_i(x_j) = \delta_{ij} = \begin{cases}
1 & \textrm{if } i = j \\ 0 & \textrm{if } i \neq j
\end{cases}$$
Exercise. Prove that the functions $f_1,f_2,\dots,f_n$ form a basis for $V$.
Now, in order to prove that $\{l_x\}_{x \in X} = \{l_{x_1}, l_{x_2}, \dots, l_{x_n}\}$ is a basis for $V^*$, we only need to prove linear independence (why?).
So, suppose that the zero $0 \in V^*$ can be written as
$$0 = a_1l_{x_1} + a_2l_{x_2} + \cdots + a_nl_{x_n}$$
for some choice of scalars $a_1,a_2,\dots,a_n$ in $F$. Now, evaluating the preceding equation in $f_1$ we obtain
\begin{align}
0 = 0(f_1) &= (a_1l_{x_1} + a_2l_{x_2} + \cdots + a_nl_{x_n})(f_1) \\
&= a_1l_{x_1}(f_1) + a_2l_{x_2}(f_1) + \cdots + a_nl_{x_n}(f_1) \\
&= a_1f_1(x_1) + a_2f_1(x_2) + \cdots + a_n f_1(x_n) = a_1
\end{align}
and similarly, $a_2 = \cdots = a_n = 0$. |
Why is a two sheeted hyperboloid regular but two cones connected at their start point not | Basically by the same reason why $\sqrt{1+x^2}$ is differentiable, whereas $\sqrt{x^2}(=\lvert x\rvert)$ is not: the surface $x^2=y^2+z^2$ is the union of the surfaces $x=\pm\sqrt{y^2+z^2}$ and you have a problem concerning differentiability when $(x,y,z)=(0,0,0)$. But you have no such problem with the surfaces $x=\pm\sqrt{1+y^2+z^2}$; each one of them is well-behaved at any point, as far as differentiability is concerned. |
Two Different Statements about Schur's Decomposition Theorem (Linear Algebra) | Proof of the equivalence
$(2) \implies (1)$
Let $\mathcal E' = \mathcal B U^*$, then $T (\mathcal B) = \mathcal B U^* A U $, or $T (\mathcal E' U) = \mathcal E' U U^* A U = \mathcal E' A U $, hence $T (\mathcal E') = \mathcal E' A$, i.e. $\mathcal M(T, \mathcal E') = A$ is upper triangular. Now apply Gram-Schmidt orthogonalization process to $\mathcal E'$ to obtain an orthonormal basis $\mathcal E$, then $\mathcal M(T, \mathcal E)$ is still upper triangular, hence the statement.
$(1) \implies (2)$
From now on, the bold-italic letters represent matrices, the regular letters are mappings, and the calligraphic letters are bases. Also $\mathbb C^n \cong \mathrm M_{n,1}(\mathbb C)$ is the collection of $n \times 1$ complex matrices.
Let $\boldsymbol S = \mathcal M(T, \mathcal B)$, on $W = \mathbb C^n$, define linear operator $S \colon \boldsymbol x \mapsto \boldsymbol {Sx}$ where $\boldsymbol x \in \mathbb C^n$, then apply $(1)$ to the operator $S$, there exists some ONB $\mathcal F$ of $\mathbb C^n$ that $\mathcal M(S, \mathcal F) = \boldsymbol A$ is upper triangular. Let $\mathcal E$ be the standard basis of $\mathbb C^n$, and let $\mathcal F = \mathcal E \boldsymbol U$, then $\boldsymbol U$ is unitary. Write $\boldsymbol U^*$ instead of $\boldsymbol U$, i.e. $\mathcal F = \mathcal E \boldsymbol U^*$ and $S(\mathcal F) = \mathcal F \boldsymbol A$ becomes $S (\mathcal E \boldsymbol U^*) = \mathcal E \boldsymbol U^* \boldsymbol A$, hence $S (\mathcal E) = \mathcal E (\boldsymbol U^*\boldsymbol {AU})$. But $S (\mathcal E) = \boldsymbol S$, hence $\boldsymbol S = \boldsymbol {U}^* \boldsymbol {AU}$. Hence the statement. |
Prove that a simple graph with $n \ge 2$ and at least $\frac{(n-1)(n-2)}{2}+1$ edges is connected. | (b) Any vertex is incident to at most $n-2$ edges because $G$ is not connected, so $\frac{(n-1)(n-2)}{2}+1-(n-2)=\frac{(n-2)(n-3)}{2}+1$
(c) Suppose $G$ with $n$ vertices is not connected and has at least $\frac{(n-1)(n-2)}{2}+1$ edges. Then, removing a vertex, we have a graph that is not connected, with $n-1$ vertices and at least $\frac{(n-2)(n-3)}{2}+1$ edges, contradicting our induction hypothesis for the $n-1$ case. |
If $\big|\int g\varphi \, d\mu\big|\le M\int\lvert\varphi\rvert\,d\mu,$ for all simple functions $\varphi,\,$ then $\lvert g\rvert \le M,$ a. e. | Let
$$
A=\{x\in X :\lvert g(x)\rvert>M\}.
$$
Then $A=\bigcup_{n\in\mathbb N} A_n$, where
$$
A_n=\left\{x\in X :\lvert g(x)\rvert>M+\frac{1}{n}\right\}.
$$
It suffices to show that $\mu(A_n)=0$, for all $n\in\mathbb N$.
Let $\varphi_n=\chi_{A_n}\cdot\mathrm{sgn}\,g$, where $\chi_{A_n}$ is the characteristic function of $A_n$ and $\,\mathrm{sgn}\,g(x)=1$ if $g(x)>0$ and $\,\mathrm{sgn}\,g(x)=-1$ if $g(x)<0$.
Then if $\mu(A_n)>0$, we have that
$$
\left|\int g\varphi_n \,d\mu\,\right| =\int_{A_n} \lvert g\rvert\, d\mu
\ge
\left(M+\frac{1}{n}\right)\mu(A_n)>M\mu(A_n)=M\int_X\lvert\varphi_n\rvert\,d\mu,
$$
which is a contradiction. Thus $\mu(A_n)=0$, for all $n$, and finally $\mu(A)=0$. |
Indexing problem in array | Let $F(n)$ be the number of partitions for an array of $n$ elements. My claim is that $F(1) = 0$, $F(2) = 1$, and $F(n) = F(n-1) + F(n-2)$ when $n > 2$. It is trivial to see that the values for $F(1)$ and $F(2)$ are correct, so the tricky part is proving the recursive formula. To see that it is true, note that the partitions of an array $[a_1, a_2, ..., a_n]$ come in two different forms:
The cases where $[a_1, a_2]$ is part of the partition
The cases where $[a_1, a_2]$ is not part of the partition
You can get all of the partitions in case (1) by generating the partitions for the smaller array $[a_3, a_4, ..., a_n]$ and adding in $[a_1, a_2]$ for each partition. This means there are $F(n-2)$ partitions like this.
You can get all of partitions in case (2) by generating the partitions for the array with $a_1$ missing ($[a_2, a_3, ..., a_n]$) and then prepending $a_1$ to the sub-array containing $a_2$ of the partitions. This means we have $F(n-1)$ such partitions.
The total number of partitions would then be the sum of these two cases, which is $F(n) = F(n-1) + F(n-2)$. This is exactly the Fibonacci sequence except shifted over (normally $F(1) = 1$), which has the closed-form solution:
$$ F(n)=\frac{1}{\sqrt{5}}\left(\frac{1+\sqrt{5}}{2}\right)^{n-1}- \frac{1}{\sqrt{5}}\left(\frac{1-\sqrt{5}}{2}\right)^{n-1}$$
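A short numerical check (my own sketch) that the recurrence and the closed form agree:
fib <- function(n) {              # recurrence with F(1) = 0, F(2) = 1
  if (n == 1) return(0)
  if (n == 2) return(1)
  a <- 0; b <- 1
  for (i in 3:n) { s <- a + b; a <- b; b <- s }
  b
}
fib_closed <- function(n) (((1 + sqrt(5))/2)^(n-1) - ((1 - sqrt(5))/2)^(n-1)) / sqrt(5)
c(fib(10), fib_closed(10))        # both give 34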
This means your complexity is in fact exponential. |
Are $L^p$ norms continuous on finite spaces | Assume for simplicity that the underlying measure space $(X,\mu)$ satisfies $\mu(X)=1$. If $1\leq q\leq p<\infty$ then
$$ ||f||_q\leq ||f||_p $$
by Jensen's inequality. This implies that $L^p\subset L^q$, so the $L^q$ norm can be restricted to $L^p$.
To show that $||\cdot||_q:L^p\to\mathbb{R}$ is continuous, it is enough to show that $||f_n||_q\to ||f||_q$ for all $\{f_n\},f\in L^p$ with $f_n\to f$ in $L^p$.
But if $f_n\to f$ in $L^p$ then $||f_n-f||_p\to 0$, and since $||f_n-f||_q\leq ||f_n-f||_p$ for all $n$, this implies that $||f_n-f||_q\to 0$ as well. The triangle inequality in $L^q$ implies that
$$ |||f_n||_q-||f||_q|\leq ||f_n-f||_q $$
hence it follows that $||f_n||_q\to ||f||_q$. |
Expected number of 5-sticker packs needed to complete a Panini soccer album | The usual coupon collector problem, where you get the coupons one by one, would say the expected number to get a set is $200H_{200} \approx 200(\ln(200)+\gamma)\approx 1175$ coupons. Naively you would expect to need $1175/5=235$ packs. This will be correct if the stickers are randomly distributed in the packs, including the chance that there are duplicates in a pack. I don't know how to calculate it, but suspect prohibiting duplicates will marginally reduce the expected number of packs.
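A one-line check of that expectation (my sketch):
200 * sum(1 / (1:200))   # 1175.606 coupons, i.e. about 235 packs of 5
This matches the estimate above. |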
Let $G$ be a group of order $1365$. Prove $G$ is not simple. | As @lulu suggests, this is a situation where we should apply a simple counting argument. This is a common theme in Sylow-theorem exercises.
Suppose for contradiction that there is a simple group $G$ of order $1365$. For this group, we must have $n_3, n_5, n_7, n_{13} > 1$, so by your computations we get
$$n_3 \geq 7$$
$$n_5 \geq 21$$
$$n_7 = 15$$
$$n_{13} = 105$$
This means the group has $1260=105\times 12$ elements of order $13$, $90$ elements of order $7$, at least $84$ elements of order $5$, and at least $14$ elements of order $3$ (make sure you understand how to prove all of this!!)
In total, we have accounted for at least $1260 + 90 + 84 + 14 = 1448$ distinct elements of $G$. This is impossible, because $\lvert G \rvert < 1448$. |
Countably Saturated Models | Below is an example in an infinite language; it's not hard to modify it to work in a finite language.
Actually, barring further context there's a serious typo in the text. All complete theories have countably saturated models; what's true is that we don't always have countable countably saturated models.
Interestingly, the existence of fully saturated models of any cardinality of arbitrary complete theories is independent of the usual axioms of set theory!
For $i\in\mathbb{N}$ let $U_i$ be a unary relation symbol. For $A,B$ disjoint finite sets of natural numbers, let $$\psi_{A,B}\equiv\exists x\,\Big(\big[\bigwedge_{a\in A}U_a(x)\big]\wedge\big[\bigwedge_{b\in B}\neg U_b(x)\big]\Big).$$ Let $$T=\{\psi_{A,B}: A,B\mbox{ disjoint finite sets of naturals}\}.$$
Intuitively, a model of $T$ amounts to a bunch of subsets of $\mathbb{N}$, possibly with repetition (and conversely, a multiset of subsets of $\mathbb{N}$ yields a model of $T$ precisely when it is appropriately "dense" in $\mathcal{P}(\mathbb{N})$).
If $M\models T$ is countably saturated, then for each $S\subseteq\mathbb{N}$ there must be some $c\in M$ such that for all $i$ we have $M\models U_i(c)$ iff $i\in S$. But this means that no model of $T$ with cardinality $<2^{\aleph_0}$ is countably saturated.
(It's not hard to modify the above to use only a finite language.)
In general, the point is that a structure's cardinality can't be smaller than the number of distinct types realized in it (trivially), and reasonably saturated models have to realize lots of types. |
How to calculate probability of this random variable if all is known is the mean and variance? | Poisson distribution? I agree with @user1, that it's reasonable to assume the number of lightbulbs turning on in an hour is Poisson. It is typical to talk of Poisson events as taking place with a certain average number within a particular period of time.
Also, Poisson distributions are among those that have mean and variance equal. It would add to the strength of this supposition if you have recently covered Poisson distributions. So I'd say to go ahead and "jump."
Desired probability. So consider $X \sim \mathsf{Pois}(\lambda = 1000).$ Then you seek $P(X > 1200) = 1 - P(X \le 1200) \approx 0.$
Normal approximation: With such a large mean as $\lambda = 1000,$ one can use the normal approximation to Poisson distributions to find this
probability in terms of a normal distribution with $\mu = 1000, \sigma = \sqrt{1000}= 31.62278.$
So you can get the answer by standardizing and using printed normal tables.
Because $(1200 - 1000)/31.62278 \approx 6.32$ standard deviations above the mean, you can guess that the probability is essentially $0.$
Software: You can also use statistical computer software or a statistical calculator to find the exact Poisson probability. In R, where ppois is a Poisson CDF, the
computation is as shown below:
1 - ppois(1200, 1000)
[1] 3.884939e-10
In terms of the normal approximation, R gives nearly $0$ again:
1 - pnorm(1200, 1000, sqrt(1000))
[1] 1.269814e-10
If you do this by standardizing and using normal tables,
you will see that z-scores above about 3.5 are off the table, so you would have to understand that a z-score above $6$ corresponds to an answer near $0$ (which might be one point of this exercise).
Below is a plot of the Poisson probabilities (with values between 860 and 1200) along with the density function of the approximating normal distribution.
x = 860:1200; pdf = dpois(x, 1000)
plot(x, pdf, type="h", col="blue")
abline(h=0, col="green2")
curve(dnorm(x,1000,sqrt(1000)), add=T, col="brown", lwd=2, lty="dashed")
abline(v = 1200) |
Why is 'isomorphism' defined more generally in Category theory than in Abstract Algebra? | Notice that Awodey writes "Moreover, in some cases only the abstract definition makes sense, for example, in the case of a monoid", not "in the case of monoids".
For Awodey, a monoid is a category with one object, and the isomorphisms are the invertible arrows, i.e. the invertible elements of the monoid. Since the arrows aren't actually functions, it doesn't make sense to ask if they are bijective. |
Poker, number of three of a kind, multiple formulaes | In counting the number of hands with three of a kind we must not include those that have four of a kind or a full house.
As @Jean-Sébastien notes in the comments, your formula counts
$$\# (\textrm{three of a kind})
+ 4\# (\textrm{four of a kind})
+ \# (\textrm{full house})$$
or
$$54,912 + 4\times 624 + 3,744.$$
The factor of four arises since
$\rm AAA{\underline A} = AA{\underline A}A = A{\underline A}AA = {\underline A}AAA$.
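A quick numerical cross-check (my own sketch; the second line is the standard direct count of three-of-a-kind hands, not taken from the question):
54912 + 4 * 624 + 3744                              # 61152, what the formula counts
choose(13, 1) * choose(4, 3) * choose(12, 2) * 4^2  # 54912, three of a kind only
The difference is exactly the over-counted four-of-a-kind and full-house hands. |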
convex function bounded | $X=l_2$ and $f(x) = \sum_{n=1}^\infty {1 \over n} x_n^2$. Note that $f$ is convex and $f^{-1}(\{0\}) = \{0\}$.
However, $f^{-1}(\{1\})$ includes $\sqrt{n}e_n$ for all $n$. |
How to draw the reverse automaton for this language to show that L is regular. | We need to remember whether there was a carry bit, and whether the word we have read so far is valid. So, 3 states will be enough: valid without carry bit ($a$), valid with carry bit ($b$), invalid ($c$). The initial state is $a$; the accepting state is also $a$.
If we read $n$ bits so far, giving number $x$ in the first row, $y$ in the second and $z$ in the third, automaton will be in state $a$ iff $x + y = z$, in state $b$ iff $x + y = 2^n + z$, and in $c$ otherwise.
Transitions are pretty straightforward:
anything from $c$ goes to $c$
$000$ transitions $a$ to $a$, and $b$ to $c$
$001$ transitions $a$ to $c$ and $b$ to $a$
$010$ transitions $a$ to $c$ and $b$ to $b$
$110$ transitions $a$ to $b$ and $b$ to $c$
$111$ transitions $a$ to $c$ and $b$ to $b$
and so on.
In general, if we read $uvw$ and assume $p = 0$ if the state was $a$ and $p = 1$ if it was $b$, we need to check $w = u \oplus v \oplus p$ (if not, move to $c$), and move to $a$ if $u + v + p < 2$ and to $b$ otherwise.
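That rule is small enough to execute directly; here is a sketch in R (my own illustration, feeding in the columns least-significant bit first):
step <- function(state, u, v, w) {
  if (state == "c") return("c")
  p <- if (state == "b") 1 else 0
  if (w != (u + v + p) %% 2) return("c")  # sum bit must match
  if (u + v + p >= 2) "b" else "a"        # carry decides the next state
}
s <- "a"   # check 3 + 1 = 4: columns (1,1,0), (1,0,0), (0,0,1)
s <- step(s, 1, 1, 0); s <- step(s, 1, 0, 0); s <- step(s, 0, 0, 1)
s          # ends in "a"
Ending in state $a$ means the addition checks out, so the word is accepted. |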
Subject of the equation | Your solution is OK, but I think you should rewrite the last term of the solution as: $$l = +\sqrt{\frac{Lw - 2M}w} \lor l = -\sqrt{\frac{Lw - 2M}w}$$ |
True or False: Set theory | Right, but you should say that there are numbers (infinitely many of them, in fact) $x$ such that $xy=y$.
Right, but for the wrong reasons: denying $x\leqslant y$ is the same thing as asserting that $x\color{red}>y$.
It's false: for any set $A$, $A\setminus\emptyset=A$. In particular $\{\emptyset\}\setminus\emptyset=\{\emptyset\}\ne\emptyset$. (Note: I use the notation $X\setminus Y$ instead of $X-Y$.)
It's true: the empty set is a subset of any set. |
Sum of binomial coefficients so that the sum equals ${n\choose n/2}$ | This is an interesting problem from a numerical point of view.
Transposed in the real domain, you are looking for $m$ such that
$$\color{blue}{\frac{\, _2F_1(1,m-n+1;m+2;-1)}{\Gamma (m+2) \,\,\Gamma (n-m)} =\frac{2^n}{\Gamma (n+1)}-\frac{1}{\Big[\Gamma \left(\frac{n}{2}+1\right)\Big]^2}}$$ where $m$ and $n$ are real numbers.
This equation is not difficult to solve using Newton's method with $m_0=\frac n 2$. This starting point is justified by the left part of the trivial double inequality
$$\binom{n}{m} \leq\sum_{k=0}^m\binom{n}{k}\leq (m+1)\binom{n}{m}$$ which means that we already know that $m \leq \frac n 2$. I did not find any simple way to use the right part of the above inequality (this is no longer true: have a look at the $\color{red}{\text{update}}$).
For example, for $n=10$, the iterates are
$$\left(
\begin{array}{cc}
k & m_k \\
0 & 5.000000000 \\
1 & 3.419647982 \\
2 & 3.407971414 \\
3 & 3.407943361
\end{array}
\right)$$
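A crude sanity check of this root (my own sketch): for $n=10$ the target is $\binom{10}{5}=252$, which should fall between the integer partial sums at $m=3$ and $m=4$.
cumsum(choose(10, 0:10))   # 1 11 56 176 386 638 848 968 1013 1023 1024
choose(10, 5)              # 252, between 176 (m = 3) and 386 (m = 4)
This is consistent with the root $m \approx 3.4079$ found above.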
Below are some results (I let you round the results the way you want).
$$\left(
\begin{array}{cc}
n & m \\
10 & 3.40794 \\
20 & 7.41879 \\
30 & 11.5964 \\
40 & 15.8702 \\
50 & 20.2093 \\
60 & 24.5969 \\
70 & 29.0227 \\
80 & 33.4793 \\
90 & 37.9619 \\
100 & 42.4665 \\
110 & 46.9903 \\
120 & 51.5309 \\
130 & 56.0864 \\
140 & 60.6554 \\
150 & 65.2365 \\
160 & 69.8287 \\
170 & 74.4310 \\
180 & 79.0426 \\
190 & 83.6628 \\
200 & 88.2910 \\
210 & 92.9267 \\
220 & 97.5694 \\
230 & 102.219 \\
240 & 106.874 \\
250 & 111.535 \\
260 & 116.202 \\
270 & 120.874 \\
280 & 125.550 \\
290 & 130.232 \\
300 & 134.918
\end{array}
\right)$$
This looks very close to linear. Using these numbers, a quick and dirty linear regression $m=a +b \,n$ leads to $R^2=0.999957$
$$\begin{array}{clclclclc}
\text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\
a & -2.6778 & 0.2028 & \{-3.0938,-2.2618\} \\
b & +0.4561 & 0.0011 & \{+0.4538,+0.4585\} \\
\end{array}$$
Using this empirical model for $n=400$, it gives $m=179.775$ while the solution is $181.986$.
As I wrote in comments, this also works for non integer values of $n$. For $n=123.456$, $m=53.1037$.
Update
I managed to use
$$\sum_{k=0}^m\binom{n}{k}\leq (m+1)\binom{n}{m}$$ defining the function
$$f(m)=(m+1)\binom{n}{m}- {n\choose \frac{n}{2}}$$ which was expanded as a series to
$O\left(\left(m-\frac{n}{2}\right)^3\right)$. Solving the quadratic, the approximate solution is given by
$$m=\frac n 2-\frac{n}{1+\sqrt{n} \sqrt{(n+2) \psi ^{(1)}\left(\frac{n}{2}\right)-\frac{3
n+8}{n^2}}}$$ which is a much better starting point as shown below
$$\left(
\begin{array}{ccc}
n & \text{approximation} & \text{solution} \\
50 & 20.5142 & 20.2093 \\
100 & 43.4413 & 42.4665 \\
150 & 66.8507 & 65.2365 \\
200 & 90.5099 & 88.2910 \\
250 & 114.329 & 111.535 \\
300 & 138.261 & 134.918
\end{array}
\right)$$
The asymptotics of the approximation is
$$m=\frac n2 \left(1-\sqrt{\frac 2 n}+\frac 1 n+O\left(\frac{1}{n^{3/2}}\right)\right)$$
Update
I later found this question; @user940 gave a very interesting asymptotic approximation. Adapted to your problem, we look for the solution $m$ of the equation
$$2^{n-1} \left(1-\text{erf}\left(\frac{n-2 m}{\sqrt{2n}
}\right)\right)=\binom{n}{\frac{n}{2}}$$ that is to say
$$\text{erf}\left(\frac{n-2 m}{\sqrt{2n} }\right)=1-\frac{2\, \Gamma \left(\frac{n+1}{2}\right)}{\sqrt{\pi } \,\Gamma \left(\frac{n}{2}+1\right)}$$ This can be inverted using approximations of the error function (have a look here).
This would give
$$\left(
\begin{array}{cc}
50 & 20.7060 \\
100 & 42.9608 \\
150 & 65.7299 \\
200 & 88.7840 \\
250 & 112.028 \\
300 & 135.410
\end{array}
\right)$$ which is significantly better for large values of $n$.
Concerning the asymptotics of $m$, using
$$\text{erf}(x)=1+e^{-x^2} \left(-\frac{1}{\sqrt{\pi }
x}+O\left(\frac{1}{x^3}\right)\right)$$ we have
$$m=\frac n 2-\frac {\sqrt n } 2 \sqrt{W(t)}\qquad \text{where} \qquad t=\frac 12\left(\frac{\Gamma \left(\frac{n+2}{2}\right)}{\Gamma \left(\frac{n+1}{2}\right)} \right)^2$$ $W(.)$ being Lambert function. So, as expected earlier, a logarithmic contribution in the asymptotics of $m$. |
How to construct a transversal of three lines with given ratio? | Project your whole scene orthogonally onto a plane which is orthogonal to $b$. In that plane, you have a single point $B'$ which will be the image of both $b$ and $B$, and two lines $a'$ and $c'$ which are the images of $a$ and $c$ under this orthogonal projection. Then take line $a'$ and construct lines $g_1$ and $g_2$ parallel to $a'$ but at twice its distance from $B'$. There are two such lines, one on either side of $B'$. Intersect these lines with $c'$ and you obtain two points $C'_1$ and $C'_2$. Connect them with $B'$ and intersect with $a'$ to obtain $A'_1$ and $A'_2$.
Now project these points back onto $a$ and $c$ and you obtain $A_1,A_2,C_1,C_2$. Their connection will intersect $b$ in $B_1$ and $B_2$. |
Distribution of integer solution pairs (x,y) for $2x^2 = y^2 + y$ | If you have an equation $2x^2 = y^2+y = y(y+1)$, then, since $y$ and $y+1$ are coprime, either $y$ is a square and $y+1$ twice a square, or $y$ is twice a square and $y+1$ a square. Either way, $y$ corresponds to a solution of
$$k^2 - 2m^2 = \pm 1.\tag{1}$$
The (positive) integer solutions are obtained from the powers of $1+\sqrt{2}$, if $k_n + m_n\sqrt{2} = (1+\sqrt{2})^n$, then
$$k_n^2 - 2 m_n^2 = (-1)^n,$$
and we obtain the possible values for $y$ as $y = k_{2n+1}^2 = 2m_{2n+1}^2-1$ resp. $y = 2m_{2n}^2 = k_{2n}^2-1$.
Since $k_{n+1} + m_{n+1}\sqrt{2} = (1+\sqrt{2})(k_n+m_n\sqrt{2}) = k_n + 2m_n + (k_n + m_n)\sqrt{2}$, it is easy to obtain the next $(k,m)$ pair from any particular one, and via that, you also get a recurrence for the $(x,y)$ pairs.
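In both cases $y(y+1) = 2(km)^2$, so the corresponding $x$ is $km$. A short R sketch of the recurrence (my own illustration):
k <- 1; m <- 1                            # k + m*sqrt(2) = (1 + sqrt(2))^1
for (n in 1:5) {
  y <- if (n %% 2 == 1) k^2 else 2 * m^2  # odd/even power cases from above
  cat("y =", y, " x =", k * m, "\n")      # (y, x) = (1,1), (8,6), (49,35), (288,204), (1681,1189)
  kk <- k + 2 * m; m <- k + m; k <- kk    # multiply by 1 + sqrt(2)
}
Each printed pair satisfies $2x^2 = y^2 + y$. |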
Solutions of the equation $((z-1)/z)^4=1$ | To solve $\left(\frac{z-1}z\right)^4=1$, one can proceed in two steps:
First, solve $w^4=1$ (apparently, this is already done)
Then, solve $\frac{z-1}z=w$, for any given $w$ (and then, one is led to treat separately the cases $w=1$ and $w\ne1$) |
Show that the real part of the root of an equation is constant | Since $z=0$ is not a root, our roots all satisfy $x^n =-1$ where $x=1+1/z.$ So the solutions are $$z= \frac{1}{ \cos \left(\dfrac{(1+2k)\pi}{n}\right)-1+ i\sin \left(\dfrac{(1+2k)\pi}{n}\right)}.$$
Multiply the numerator and denominator by $\cos \left(\dfrac{(1+2k)\pi}{n}\right)-1- i\sin \left(\dfrac{(1+2k)\pi}{n}\right)$ and use some basic trig identities to see that the real part is $-1/2.$ |
Orientation preserving diffeomorphism. | If $F \colon (N,[\omega_N]) \to (M, [\omega_M]) $ is an orientation-preserving diffeomorphism what can you say about $F^* \omega_M$? Hint. You get some condition on the determinant of a Jacobi matrix.
Knowing the property of the determinant, use now the chain rule to ensure that the transition maps between charts in the atlas on $N$ that you obtain satisfy the requirements for this new atlas to be oriented.
Edit: The above is an attempt to stimulate some thinking on the problem. More specifically, one needs to have all the definitions handy in order to see that it is actually much easier than it seems to be at the first sight. Apparently, this question has been duplicated, so we now have some more information on the necessary background :-)
Now let me present a more elaborated answer.
This question is to solve Problem 20.3 from L.Tu "Introduction to Manifolds", p.209.
The key concept for this problem is when an atlas specifies the orientation on an oriented manifold. This notion is explained in the bottom of p. 207 of the cited textbook. I shall try to rephrase this in my own words.
Definition. An atlas $\{ (V_{\alpha}, \psi_{\alpha} ) \}_{\alpha \in A} $ specifies the orientation $[\omega_M]$ of an oriented manifold $(M, [\omega_M])$ if
$$
\psi_{\alpha}^* \epsilon^n \in [\omega_M |_{V_{\alpha}}]
$$
for each $\alpha \in A$, where $\epsilon^n$ is the standard Euclidean volume form, that is
$$
\epsilon^n := \mathrm{d} r^1 \wedge \dots \wedge \mathrm{d} r^n
$$
and $\omega_M |_{V_{\alpha}}$ is the restriction of a form $\omega_M$ onto $V_{\alpha}$.
(Functions $r^i$ here are the standard Euclidean coordinates on $\mathbb{R}^n$)
In the problem we are asked to show that the atlas $\{ (U_{\alpha}, \phi_{\alpha}) \}_{\alpha \in A}$ on manifold $N$, constructed so that $U_{\alpha} := F^{-1} V_{\alpha}$ and $\phi_{\alpha} := \psi_{\alpha} \circ F$ for each $\alpha \in A$, specifies the orientation $[F^* \omega_M]$ on $N$.
This brings us very close to the solution. I must stop here as this question is tagged as a homework, but really one needs just check that we have some everywhere positive functions involved.
In particular, there is no need to check Jacobians! |
Proper definition of well-founded induction on a class | You don't need the stronger version of well-foundedness. The idea is that any failure of well-foundedness is in some sense "local" and can be witnessed on just a set.
Let me first sketch the argument intuitively. Suppose there is a nonempty subclass $C\subseteq X$ with no minimal element. Pick an element $c_0\in C$. Since $c_0$ is not minimal, you can pick $c_1\in C$ such that $c_1Rc_0$. Since $c_1$ is not minimal, you can pick $c_2\in C$ such that $c_2Rc_1$. Continuing recursively, you can pick a sequence $(c_n)_{n\in\mathbb{N}}$ of elements of $C$ such that $c_{n+1}Rc_n$ for each $n$. But now consider $\{c_n\}_{n\in\mathbb{N}}$. This is a set by Replacement since $\mathbb{N}$ is a set, and has no minimal element, which is a contradiction.
Now, there are some technical issues with this argument as stated. To actually carry out this recursion, we need to use the axiom of choice to choose our infinitely many $c_n$'s. Moreover, doing so requires us to fix a choice function ahead of time, which must be defined on a set (the axiom of choice only applies to sets, not classes), so we need to know ahead of time there is a set from which we will be making all our choices. The trick now is that we can use the set-like condition on $R$ to find such a set. And in fact, once we've found such a set, we can formulate the argument to avoid using the axiom of choice.
Here's how it works. After picking $c_0$, instead of picking a single $c_1\in C$ such that $c_1Rc_0$, we take the set $C_1$ of all such elements $c_1$, which is a set since $R$ is set-like. For convenience, let me give a name to the operation we just did: if $c\in C$, let $F(c)$ be the set of all $d\in C$ such that $dRc$. So we've defined $C_1=F(c_0)$.
Now for the next step, we'd like to do the same thing and define $C_2=F(c_1)$, except that we don't have a single element $c_1$ but instead an entire set $C_1$ of "candidate" values for $c_1$. That's no problem; we just do it for all our candidates. That is, we define $C_2=\bigcup_{c\in C_1}F(c)$. We can continue this recursively to define a sequence $(C_n)_{n\in\mathbb{N}}$ of sets such that $C_0=\{c_0\}$ and $C_{n+1}=\bigcup_{c\in C_n}F(c)$ for each $n$.
Now if we take $S=\bigcup_{n\in\mathbb{N}} C_n$, this $S$ is a set and we can carry out our original intuitive argument by fixing a choice function on its nonempty sets ahead of time. Actually, though, we don't even need to make that argument. We can just directly observe that $S$ has no minimal element: any $c\in S$ is in some $C_n$, and then $F(c)\subseteq C_{n+1}\subseteq S$. Since $c$ is not minimal in $C$, $F(c)$ is nonempty, and any element of it witnesses that $c$ is not minimal in $S$.
Finally, let me remark that assuming the axiom of regularity, you don't even need to assume that $R$ is set-like, since you can use Scott's trick. Namely, in the argument above, instead of letting $F(c)$ be the set of all $d\in C$ such that $dRc$ (which may not be a set if $R$ is not set-like), you can let $F(c)$ be the set of all such $d$ of minimal rank. That is, let $\alpha$ be the least ordinal such that there exists $d\in C\cap V_\alpha$ such that $dRc$, and then let $F(c)$ be the set of all such $d\in C\cap V_\alpha$ (which is a set since $V_\alpha$ is a set). With this modified definition of $F$, the rest of the argument still works just as before. |
An identity map which is not null-homotopic | The following answer has been discussed in the comments. Since in your example the $d^n$s are all multiplication by $2$, you are looking for maps $s^n$ and $s^{n+1}$ such that $$x=2s^n(x)+s^{n+1}(2x)=2(s^n(x)+s^{n+1}(x))$$ for all $x\in \mathbb{Z}/4\mathbb{Z}$. But this is impossible, since not every $x\in\mathbb{Z}/4\mathbb{Z}$ is divisible by $2$. |
Need help with proving that group is not finitely-generated | Your idea about prime numbers is a good one! Let $T$ be the set of primes which appear in either the numerator or denominator of some element of $S$. Then $T$ is finite (why?) so there is some prime $p \not \in T$. Show by unique factorization that $p$ cannot be the product of powers of elements of $S$. |
Counting how many Sudoku-like grids of numbers there are | $$9! \bigg[ \sum_{k=0}^3 \bigg( {3\choose k}{3 \choose 3-k}3! \bigg)^2 3! \bigg] \tag 1$$
is not correct. The observation is right that the number of entries in the first three of the second row that are taken from the second three of the first row must match the number of entries in the second three taken from the third three. I would pull the $3!$ out of the sum, because the sum allocates the numbers to the groups of three in the second row. You then have $3!^3$ because you can reorder each group of three as you wish. That gives your expression, but I think this makes it clearer how you found it. Then in the third row the set of numbers in each group of three is determined, but they can come in any order, which gives another factor of $3!^3$, so I would represent it as
$$9! 3!^6\bigg[ \sum_{k=0}^3 \bigg( {3\choose k}{3 \choose 3-k} \bigg)^2 \bigg] \tag 1$$ |
solutions of $az^2 + bz + c = 0$ have the same modulus $r > 0$ | Let $p,q$ be the roots of the equation. Then $$\bar p=\frac{r^2}{p},\qquad \bar q=\frac{r^2}{q}.$$ Now by Vieta's formulas $$\color{blue}{p+q=\frac{-b}{a}},\qquad pq=\frac{c}{a},$$ hence taking the conjugate of the blue equation $$\color{blue}{\frac{-\overline{ b}}{\bar a}=\bar p+\bar q}=r^2\left(\frac{p+q}{pq}\right)=\frac{-r^2b}{c}$$ $$\iff b\bar c=r^2a \bar b$$ |
Proving that $\lim \limits_{b \rightarrow \infty} F(1,b,1;\frac{z}{b})=e^z$ | From the series definition of the hypergeometric function it follows that
$$_2F_1(1,b;1;z)=(1-z)^{-b}.$$
In particular, $\,_2F_1\!\left(1,b;1;\frac zb\right)=\left(1-\frac zb\right)^{-b}\to e^z$ as $b\to\infty$. Does this solve your problem? |
Probability of having the same birthday | This is a "real-world" problem, so at best we can make an estimate based on a reasonable set of assumptions. For example, if your birthday is on February $29$, the answer depends very much on the range of ages of your friends.
Let us construct a mathematical model of the situation. We assume that the year has $365$ days (not quite true) and that birthdays are uniformly distributed over the year. This again is not quite true, the distribution is not uniform, and there is some variation from country to country. We also assume that your choice of friends is birthday-independent.
If we pick one of your friends at random, the probability that he/she has a birthday different from yours is $\frac{364}{365}$. So if you have $n$ friends, by independence the probability they all have birthdays different from yours is $\left(\frac{364}{365}\right)^n$. Thus the probability that at least one of your friends has the same birthday as yours is $1-\left(\frac{364}{365}\right)^n$.
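Evaluating this for a few values of $n$ (my sketch):
n <- c(10, 50, 100, 253)
round(1 - (364/365)^n, 3)   # 0.027 0.128 0.240 0.500
So with about $253$ friends the probability first exceeds $1/2$. |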
Is first homology (with integer coefficients) always a vector space? | You can very well have representations over arbitrary groups, or in fact any type of "structure": for any $X$ that has a reasonable $Aut(X)$, a representation of $G$ on $X$ is a morphism $G\to Aut(X)$.
A lot of the time in linear representations we work over fields because it's somewhat easier (vector spaces are easier to understand than modules, most of the time), but it's sometimes also interesting to work over $\mathbb Z$, that is, to work with abelian groups. |
How many ways exist to get from $A$ with the following conditions in the image? | Your reasoning is good, but there are more options. Numbering the circular tracks $1,2$ and $3$ from inside to out, and noting that changes of direction are onto or off a circular track, your option (A) visits tracks $1,2,3$ in that order and (B) is $3,2,3$ but also possible are $2,1,3$ and $3,1,3$.
Avoiding visiting the same point might get more complicated with the other options. I'm not completely sure you avoided that in assessing option (B) either. |
Verifying that a computed density is a probability density | Let $(a_{k})_{k\in\Bbb N}$ be arbitrary numbers. We make the following claim: For $n\in\Bbb N$ holds for all $i\in\{1,\ldots , n\}$
$$\sum_{k=i}^n (-1)^{i+k} a_k \binom{k-1}{i-1} \binom{n}{k} = \sum_{m=i}^n \binom{m-1}{i-1} \sum_{k=i}^m (-1)^{i+k} a_k \binom{m-i}{k-i}$$
For the proof assume this holds for $n-1$. Then for $i\leq n-1$ holds
$$\sum_{k=i}^n (-1)^{i+k} a_k \binom{k-1}{i-1} \binom{n}{k} = \sum_{k=i}^{n-1} (-1)^{i+k} a_k \binom{k-1}{i-1} \binom{n}{k} + (-1)^{i+n} a_n \binom{n-1}{i-1}\\
= \sum_{k=i}^{n-1} (-1)^{i+k} a_k \binom{k-1}{i-1} \binom{n-1}{k} + \sum_{k=i}^{n-1} (-1)^{i+k} a_k \binom{k-1}{i-1} \binom{n-1}{k-1} + (-1)^{i+n} a_n \binom{n-1}{i-1}\\
=\sum_{m=i}^{n-1} \binom{m-1}{i-1} \sum_{k=i}^m (-1)^{i+k} a_k \binom{m-i}{k-i}+\sum_{k=i}^{n} (-1)^{i+k} a_k \binom{k-1}{i-1} \binom{n-1}{k-1}\\
=\sum_{m=i}^{n-1} \binom{m-1}{i-1} \sum_{k=i}^m (-1)^{i+k} a_k \binom{m-i}{k-i}+ \binom{n-1}{i-1}\sum_{k=i}^{n} (-1)^{i+k} a_k \binom{n-i}{k-i}\\
=\sum_{m=i}^{n} \binom{m-1}{i-1} \sum_{k=i}^m (-1)^{i+k} a_k \binom{m-i}{k-i}
$$
Since for $n=1$ or $i=n$ the statement is clear the claim follows by induction.
From the claim the desired identity follows for $n$ by choosing $a_k = \lambda_k^N$. (I went the way with arbitrary $a_k$ because the $\lambda_k^N$ depend on $n$.) |
Limit $\lim_{n \rightarrow \infty } \frac{1}{\sqrt{nn}} + \frac{1}{\sqrt{n(n+1)}}+ \frac{1}{\sqrt{n(n+2)}}+\cdots+ \frac{1}{\sqrt{n(n+n)}} .$ | Now
\begin{align*}
\lim_{n\to \infty} \sum_{k=0}^{n}
\frac{1}{n} \frac{1}{\sqrt{1+\frac{k}{n}}} &=
\int_{0}^{1} \frac{dx}{\sqrt{1+x}} \\
&= \left[ 2\sqrt{1+x} \right]_{0}^{1} \\
&= 2(\sqrt{2}-1)
\end{align*} |
Proof of L'Hospitals Rule | One can prove it using linear approximation:
Recall the definition of the derivative, $f'(x)$:
$$
f'(x) = \lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{h}
$$
which we can write as
$$
f'(x) = \frac{f(x+h)-f(x)}{h} + \eta(h)
$$
where $\eta(h)$ is a function such that $\lim_{h\rightarrow 0} \eta(h)=0$ (and is continuous).
Then in the above displayed equation multiply through by $h$, which gives (after rearrangement and switching $\eta(h)$ with $-\eta(h)$)
$$
f(x+h) = f(x) +f'(x)h+h\cdot \eta(h).
$$
Now consider the setting of L'Hôpital's rule for the limit of $f(x)/g(x)$ as $x$ goes to $x_0$, so we have
$f(x_0)=g(x_0)=0$. Then by using linear approximation as above in both numerator and denominator (and writing the $\eta$-function for $g$ as $\epsilon$), we get
$$
\frac{f(x_0+h)}{g(x_0+h)}=\frac{f'(x_0)h + h\cdot \eta(h)}{g'(x_0)h+h\cdot \epsilon(h)}
$$
and now, in both numerator and denominator, there is a common factor $h$ we can cancel. Taking the limit as $h \rightarrow 0$ now gives the result.
This proof is an upgraded version of Bernoulli's original one. See https://mathoverflow.net/a/51691 |
Can a relation be acyclic and complete but not transitive? | Yes, it's true: $\def\R{\mathop{\mathrm R}}$
Let $\R\subseteq A\times A$ be an acyclic and complete relation.
Suppose $a\R b$ and $b\R c$. If we don't have $a\R c$, then by completeness, $c\R a$ follows, but then this produces a cycle $a\R b\R c\R a$. |
Discrete Uniform random variable calculate mean and var | Hint: if $Z$ is the number of winning tickets, find $E[Z]$ and $E[Z^2]$ by conditioning on the number of tickets he buys. |
Maximum-Likelihood-estimator for number of marbles | Let's start from the beginning. The joint probability of the sample $\boldsymbol x = (x_1, \ldots, x_N)$, where each $$x_i \sim \operatorname{DiscreteUniform}(1,n)$$ is IID, is given by $$f(\boldsymbol x \mid n) = \prod_{i=1}^N f(x_i \mid n) = n^{-N} \mathbb 1(1 \le x_{(1)} \le x_{(N)} \le n).$$ Consequently, the joint likelihood is proportional to $$\mathcal L(n \mid \boldsymbol x) \propto f(\boldsymbol x \mid n),$$ and this likelihood is maximized for the parameter $n$ and a fixed $N$ when $n$ is made as small as possible; i.e., $n = x_{(N)}$. Thus $$\hat n = T(\boldsymbol x) = x_{(N)} = \max_i x_i,$$ the maximum order statistic. So far, our solutions agree. Now, the distribution of $\hat n$ is given by $$\Pr[x_{(N)} = k] = \Pr[x_{(N)} \le k] - \Pr[x_{(N)} \le k-1],$$ where $$\Pr[x_{(N)} \le k] = \Pr\left[\bigcap_{i=1}^N (x_i \le k)\right] = \prod_{i=1}^N \Pr[x_i \le k] = \left(\frac{k}{n}\right)^N, \quad k \in \{1, \ldots, n\}.$$ Hence $$\Pr[x_{(N)} = k] = \left(\frac{k}{n}\right)^N - \left(\frac{k-1}{n}\right)^N = \frac{k^N - (k-1)^N}{n^N}.$$ The expectation is $$\operatorname{E}[\hat n] = \sum_{k=1}^n k \left(\frac{k^N - (k-1)^N}{n^N}\right).$$ So far, we agree. However, the calculation of this sum is not trivial. First, it partially telescopes: $$\operatorname{E}[\hat n] = n - \sum_{k=1}^{n-1} \frac{k^N}{n^N}.$$ You can convince yourself of this by writing out the first few terms, and seeing how they cancel (a formal proof is possible but I leave it as an exercise). Second, dividing by $n$ yields $$\frac{\operatorname{E}[\hat n]}{n} = 1 - \frac{1}{n} \sum_{k=1}^{n-1} \left(\frac{k}{n}\right)^N.$$ We are now interested in the limit of this sum; namely, $$g(N) = \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \left(\frac{k}{n}\right)^N.$$ Note that I added the $k = 0$ term, which does not affect the value of the sum since this term equals zero for positive integers $N$. Then we recognize $g$ as a Riemann sum $$\int_{x=a}^b h(x) \, dx = \lim_{n \to \infty} \frac{b-a}{n} \sum_{k=0}^{n-1} h\left(a + \frac{b-a}{n} k\right),$$ for the choice $a = 0$, $b = 1$, $h(x) = x^N$. Therefore, $$g(N) = \int_{x=0}^1 x^N \, dx = \frac{1}{N+1},$$ and $$\lim_{n \to \infty} \frac{\operatorname{E}[\hat n]}{n} = 1 - \frac{1}{N+1} = \frac{N}{N+1}.$$ This limit is clearly less than $1$ and is a function of the number of draws made.
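A simulation of this estimator (my own sketch, with $n = 100$ and $N = 10$ draws):
set.seed(1)
nhat <- replicate(1e5, max(sample.int(100, 10, replace = TRUE)))
mean(nhat) / 100   # about 0.914, close to the limit N/(N+1) = 10/11
The small excess over $10/11 \approx 0.909$ is the finite-$n$ correction. |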
Can two equations be subtracted? | You are given:
$$\begin{cases} T=Ma \\ T-mg=-ma \end{cases}$$
As suggested, subtract the two equations:
$$T-(T-mg)=Ma-(-ma)$$
This gives the following. Notice that the reason that you got a negative sign instead is due to a mistake with the signs on equation $(1)$.
$$mg=Ma\color{red}{+}ma$$
Now, here comes the trick you probably missed. You can factorise the right-hand side:
$$mg=a(M+m)$$
This will give you what you require after rearranging for $a$. |
Simple joint probability question of two dice throw | Give the dice a number.
Let $D_1$ denote the result of the first die and let $D_2$ denote the result of the second.
Then:
$$P(X=0,Y=1)=P(D_1\in\{2,6\}, D_2\in\{1,3,5\})+P(D_1\in\{1,3,5\}, D_2\in\{2,6\})=$$$$2P(D_1\in\{2,6\})P(D_2\in\{1,3,5\})=2\frac26\frac36$$
Edit (after edit of question):
$$P(X=1,Y=2)=P(D_1=4, D_2\in\{2,6\})+P(D_1\in\{2,6\}, D_2=4)=$$$$2P(D_1=4)P(D_2\in\{2,6\})=2\frac16\frac26$$ |
Are partial derivatives always commutative? When is $\frac{\partial^2}{ \partial x\partial y}f(x,y)\neq\frac{\partial^2}{\partial y\partial x}f(x,y)$? | A classical example of a function not satisfying $f_{xy}=f_{yx}$ is $$f(x,y) = \begin{cases} \frac{xy(y^2-x^2)}{x^2+y^2} & (x,y) \neq (0,0)\\ 0 & (x,y) = (0,0)\end{cases}$$ At the point $(x,y) = (0,0)$ we have $f_{xy}(0,0) = 1$, but $f_{yx}(0,0) = -1$.
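A finite-difference check of those two values (my own sketch):
f <- function(x, y) ifelse(x == 0 & y == 0, 0, x * y * (y^2 - x^2) / (x^2 + y^2))
fx <- function(y, h = 1e-6) (f(h, y) - f(-h, y)) / (2 * h)  # ~ f_x(0, y)
fy <- function(x, h = 1e-6) (f(x, h) - f(x, -h)) / (2 * h)  # ~ f_y(x, 0)
(fx(1e-3) - fx(-1e-3)) / 2e-3   # ~  1, i.e. f_xy(0,0)
(fy(1e-3) - fy(-1e-3)) / 2e-3   # ~ -1, i.e. f_yx(0,0)
The two mixed partials really do disagree at the origin. |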
How many different combination we can construct? | Your answer to the first question is far too high: you have five main choices, two side choices and four drink choices, making $5 \times 2 \times 4$ overall possible combinations, since you want one of each.
On the second, the answer is yes. Note that if you want no matches week-to-week then you will have to alternate sides each week, but you have enough flexibility on mains and drinks to do this. It is not too difficult to make a list of $40$ combinations to prove this. Here is one possibility:
HFS
CAT
NFJ
WAW
SFS
HAT
CFJ
NAW
WFS
SAT
HFJ
CAW
NFS
WAT
SFJ
HAW
CFS
NAT
WFJ
SAW
HFT
CAS
NFW
WAJ
SFT
HAS
CFW
NAJ
WFT
SAS
HFW
CAJ
NFT
WAS
SFW
HAJ
CFT
NAS
WFW
SAJ |
Canonical form of PDE | I have now solved it by putting $x^3 = (\xi - \eta)^{3/2}$ and then integrating the canonical equation will give the given solution.
Edit(Complete solution):
Comparing the given PDE with the general form $A(x, y)u_{xx} + B(x, y)u_{xy} + C(x, y)u_{yy} = \Phi(x, y, u, u_x, u_y)$ we find
$$A = x, B= 2x^2, C = 0$$
Now, the characteristic curves can be found by the equation $\frac{dy}{dx} = \frac{B \pm\sqrt{B^2-4AC}}{2A} = 0, 2x$
Integrating, we find the 2 curves as $y = $constant and $y-x^2 = $constant, so we choose
$$\xi = y,\qquad \eta = y-x^2$$
and apply the relations found by the chain rule for differentiating $u$ w.r.t. $\xi$ and $\eta$:
$u_x = u_\xi \xi_x + u_\eta \eta_x, \quad u_y = u_\xi \xi_y + u_\eta \eta_y,$
$u_{xx} = u_{\xi\xi}\xi^2_x + 2u_{\xi\eta}\xi_x\eta_x + u_{\eta\eta}\eta^2_x + u_\xi \xi_{xx} + u_\eta \eta_{xx},$
$u_{xy} = u_{\xi\xi}\xi_x\xi_y + u_{\xi\eta}(\xi_x\eta_y + \xi_y\eta_x) + u_{\eta\eta}\eta_x\eta_y + u_\xi \xi_{xy} + u_\eta \eta_{xy},$
$u_{yy} = u_{\xi\xi}\xi^2_y + 2u_{\xi\eta}\xi_y\eta_y + u_{\eta\eta}\eta^2_y + u_\xi \xi_{yy} + u_\eta \eta_{yy},$
and substituting in the original equation will give $4x^3u_{\xi\eta} = -1$.
We want to eliminate $x$ so that the equation consists only of $\xi$ and $\eta$, so put $$x^3 = (\xi - \eta)^{3/2}$$ then integrate the equation once w.r.t. $\xi$, then w.r.t. $\eta$, to get the given solution. |
Probability of opening 3 doors in 3 moves | If the doors are chosen uniformly at random, and each choice is independent of the previous choices, then there are $3^3 = 27$ unique sequences of $3$ moves. Of these, there are $3! = 6$ sequences in which each door is chosen exactly once, thereby each door is opened when the initial state of each door is closed. Therefore, the probability in which all three doors are opened at the end of $3$ moves is $6/27 = 2/9$, as in your first solution.
The second solution is incorrect because, although it is true that the number of ordered triples $(x,y,z)$ of nonnegative integers satisfying $x + y + z = 3$ is $10$, it is not true that each such triple occurs with equal probability, which is what you are implicitly assuming when you say the only elementary outcome for which all three doors end in the open state corresponds to $(1,1,1)$, hence the probability is $1/10$. You can see this immediately by observing that the outcome $(3,0,0)$ can only happen with probability $1/27$, whereas the outcome $(1,1,1)$ can occur from selecting the sequence $(a,b,c)$, or $(b,c,a)$, or $(c,a,b)$, etc.
This is a common mistake, much in the way that when we toss a fair coin twice, the outcome $(H, T)$ is distinct from $(T, H)$, hence the probability of one head and one tail is not $1/3$ but $1/2$.
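For completeness, a brute-force enumeration of the $27$ equally likely sequences (my own sketch, assuming each move toggles the chosen door):
seqs <- expand.grid(1:3, 1:3, 1:3)   # all 27 move sequences
open <- apply(seqs, 1, function(s) all(table(factor(s, levels = 1:3)) %% 2 == 1))
mean(open)                           # 6/27 = 2/9
A door ends open exactly when it is toggled an odd number of times, which with three moves forces each door to be chosen once. |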
If $A\in M_3\left(\mathbb C\right)$ is an invertible matrix such that $2A^2=4A+A^3$ | Since $A^2-2A+4I=0$, we have $A^2-2A=-4I$. From the original equation together with this,
$$
A^3=2A^2-4A=2(A^2-2A)=-8I.
$$
Then $(\det A)^3=-8^3$, which tells us that $\det A$ is either $4-4i\sqrt3$ or $4+4i\sqrt3$. The real root is not an option, because no combination of the eigenvalues can have a real product. So $(a)$ is false.
From $A^2-2A+4I=0$, the eigenvalues are either $(1+i\sqrt3,1+i\sqrt3,1-i\sqrt3)$, or $(1-i\sqrt3,1-i\sqrt3,1+i\sqrt3)$.
For (c), $A-2I=A^2/2 $, so $(A-2I)^3=A^6/8=(A^3)^2/8=(-8I)^2/8=8I$. So $\text{Tr}\,(A-2I)^3=\text{Tr}\,(8I)=24$, and (c) is true.
Since $A/2-A^2/4=I $, we deduce that $A^{-1}=I/2-A/4$. Then $$\text {adj}\,A=(\det A)\,A^{-1}=(\det A)\, (I/2-A/4)=\frac18\,(\det A)\,(-A^2).$$ As we know from above that $\det A$ is not real, $(d)$ is false.
We can also use $A/2-A^2/4=I$, written as $(A/2)(I-A/2)=I$, to deduce that $(A/2)^{-1}=I-A/2$. And so
$$
\text{adj}\,\frac{A}2=(\det\frac A2)\,(I-\frac A2)=\frac18\,(\det A)\,(I-\frac A2).
$$
Then
$$
\det(\text{adj}\,\frac A2)=\frac{(\det A)^3}{8^3}\,\det(I-\frac A2)=-\det(I-\frac A2).
$$
This cannot be $1$: the eigenvalues of $I-\frac A2$ are $\frac{1\mp i\sqrt3}{2}=e^{\mp i\pi/3}$, so $\det(I-\frac A2)=e^{\mp i\pi/3}$, and $-e^{\mp i\pi/3}\ne 1$. |
Basic Limit Theorem proof for a markov chain with period $\ge 2$ | Some hints for how to proceed:
(a) I don't think induction will be all that necessary or helpful here. Fix a state $x$; the $d$ classes will correspond to the periodicity of the original chain. That is, one class will be $\{y \colon p^{nd}(x,y) > 0$ for some $n\}$, another will be $\{y \colon p^{nd +1}(x,y) > 0$ for some $n\}$, then $\{y \colon p^{nd+2}(x,y) > 0$ for some $n\}$, etc. Your task is now to prove that this does indeed give an equivalence relation and that these classes are actually distinct.
(b) This should come straight from the definition of periodicity (or aperiodicity) of a Markov Chain.
(c) Use the Basic Theorem for Markov Chains on the chain $\{Y_n\}$; then, use the fact that one "step" in the $\{Y_n\}$ chain is equivalent to $d$ steps on the $\{X_n\}$ chain. |
Exercise 3.3.25 of Karatzas and Shreve | First of all, your calculation is not correct. Itô's formula gives
$$\begin{align*} M_T^{2m} &= 2m \cdot \int_0^T M_t^{2m-1} \, dM_t + m \cdot (2m-1) \cdot \int_0^T M_t^{2m-2} \, d \langle M \rangle_t \\ \Rightarrow \mathbb{E}(M_T^{2m}) &= m \cdot (2m-1) \cdot \mathbb{E} \left( \int_0^T M_t^{2m-2} \cdot X_t^2 \, dt \right)\tag{1} \end{align*}$$
where we used in the second step that the stochastic integral is a martingale and $d\langle M \rangle_t = X_t^2 \, dt$. Note that $(M_t^{2m-2})_{t \geq 0}$ is a submartingale; therefore we find by the tower property
$$\mathbb{E}(M_t^{2m-2} \cdot X_t^2) \leq \mathbb{E}\big(\mathbb{E}(M_T^{2m-2} \mid \mathcal{F}_t) \cdot X_t^2\big) = \mathbb{E}(M_T^{2m-2} \cdot X_t^2). \tag{2}$$
Hence, by Fubini's theorem,
$$\mathbb{E} \left( \int_0^T M_t^{2m-2} \cdot X_t^2 \, dt \right) \leq \mathbb{E} \left( M_T^{2m-2} \cdot \int_0^T X_t^2 \, dt \right).$$
By applying Hölder's inequality (for the conjugate coefficients $p=\frac{2m}{2m-2}$, $q=m$), we find
$$\mathbb{E} \left( \int_0^T M_t^{2m-2} \cdot X_t^2 \, dt \right) \leq \left[\mathbb{E} \left( M_T^{2m} \right) \right]^{1-\frac{1}{m}} \cdot \left[ \mathbb{E} \left( \int_0^T X_t^2 \, dt \right)^m \right]^{\frac{1}{m}}. \tag{3}$$
Combining $(1)$ and $(3)$ yields
$$\bigg(\mathbb{E}(M_T^{2m}) \bigg)^{\frac{1}{m}} \leq m \cdot (2m-1) \cdot \left[ \mathbb{E} \left( \int_0^T X_t^2 \, dt \right)^m \right]^{\frac{1}{m}}.$$
Finally, by Jensen's inequality,
$$\left(\int_0^T X_t^2 \, dt \right)^m \leq T^{m-1} \cdot \int_0^T X_t^{2m} \, dt.$$
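Taking expectations of this, plugging into the previous inequality, and raising both sides to the $m$-th power gives the final moment bound:
$$
\mathbb{E}(M_T^{2m}) \leq \big(m(2m-1)\big)^m \, T^{m-1}\, \mathbb{E}\left(\int_0^T X_t^{2m}\,dt\right).
$$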
This finishes the proof. |
May I know how this integral was evaluated by using the theory of elliptic integrals? | I will show the relation to the elliptic integrals, as I said in a comment (and Albas in the next comment):
$$\sin(x)=-\sin(-x)=-\cos \left(x +\frac{\pi}{2} \right)=-1+2 \sin^2 \left( \frac{x}{2}+\frac{\pi}{4} \right)$$
$$\phi=\frac{x}{2}+\frac{\pi}{4}$$
$$x=2\phi-\frac{\pi}{2}$$
$$\alpha=\frac{a}{2}+\frac{\pi}{4}$$
$$\beta=\frac{b}{2}+\frac{\pi}{4}$$
$$\gamma=\sqrt{\frac{2}{c+1}}$$
$$\int_a^b \frac{\sin(x)}{\sqrt{c-\sin(x)}}dx=\sqrt{2} \gamma \int_\alpha^\beta \frac{-1+2 \sin^2 \phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}d\phi=$$
$$=2\sqrt{2} \gamma \int_\alpha^\beta \frac{\sin^2 \phi ~d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}-\sqrt{2} \gamma \int_\alpha^\beta \frac{d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}$$
The second integral is an incomplete elliptic integral of the first kind, $F(\beta, \gamma)-F(\alpha, \gamma)$, and the first can be calculated in terms of incomplete elliptic integrals of the first and second kind.
I will show the full solution for the easiest case of complete elliptic integrals.
$$\alpha=0$$
$$\beta=\frac{\pi}{2}$$
This means that:
$$a=-\frac{\pi}{2}$$
$$b=\frac{\pi}{2}$$
Now the second integral will just be:
$$ \int_0^{\frac{\pi}{2}} \frac{d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}=K(\gamma)$$
On the other hand, the complete elliptic integral of the second kind is defined:
$$\int_0^{\frac{\pi}{2}} \sqrt{1-\gamma^2 \sin^2 \phi} ~~d\phi=E(\gamma)$$
$$\frac{d E}{d \gamma}=-\gamma \int_0^{\frac{\pi}{2}} \frac{\sin^2 \phi ~d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}=\frac{1}{\gamma}(E(\gamma)-K(\gamma))$$
So the first integral is:
$$\int_0^{\frac{\pi}{2}} \frac{\sin^2 \phi ~d\phi}{\sqrt{1-\gamma^2 \sin^2 \phi}}=\frac{1}{\gamma^2}(K(\gamma)-E(\gamma))$$ |
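Putting the two pieces together, for $a=-\frac{\pi}{2}$ and $b=\frac{\pi}{2}$ we get
$$\int_{-\pi/2}^{\pi/2} \frac{\sin(x)}{\sqrt{c-\sin(x)}}dx=\frac{2\sqrt{2}}{\gamma}\big(K(\gamma)-E(\gamma)\big)-\sqrt{2}\,\gamma\, K(\gamma).$$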
Evaluate $\int 5x^4 \sqrt{x^5+5}dx$ using u substitution | That answer is correct and the work is correct (but a bit excessive). |
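For reference, the short version of that substitution: with $u=x^5+5$ we have $du=5x^4\,dx$, so
$$\int 5x^4\sqrt{x^5+5}\,dx=\int\sqrt{u}\,du=\frac23 u^{3/2}+C=\frac23\left(x^5+5\right)^{3/2}+C.$$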
Help with surjectivity of a function | Your approach is a bit non-concrete (though I can appreciate that to recognise this might take some more experience).
For a more concrete approach: there are at least two possible ways to approach this.
1. Restrict one of the coordinates to something well-behaved. $2\cos(t)$ is equal to $0$ precisely when… what? And how does $-\sin(t)$ behave under those conditions?
2. Are the projections $2 \cos(t)$ and $-\sin(t)$ surjective? (This is a necessary condition for $(2 \cos(t), -\sin(t))$ to be surjective.)
(On second thoughts, the second method is a degenerate version of the first.) |
Is the halting problem an example of a paradox that arises under very specific conditions (like a division by zero), or is it more general than that? | Assuming your program is perfectly implemented, it may fail to terminate on specific inputs.
To facilitate language, let us call the "halts"/"doesn't halt" object "the program" and the code under inspection "the machine". (This is just to give clear words to distinguish what we are talking about.)
The program is to determine whether the machine will halt when fed a specified input. (For instance, when supplied with 0-length input.)
This may not seem powerful, but it is a method to test the consistency of logical systems. For instance, this article describes such compositions -- one machine that produces instances of a problem type, and a program that halts if an instance satisfying a given requirement is found. One can ask whether it is possible to prove that such a program halts. It turns out that an 8000-state Turing machine (Turing machines being models of computation simple enough to prove things about) is sufficiently complicated that its halting encodes the consistency of ZF plus a large cardinal axiom. Since ZF plus a large cardinal axiom cannot prove its own consistency, it is not strong enough to resolve whether the program halts. Maybe it runs forever. Maybe it halts. Set theory is too weak to say which.
So your program has to be able to resolve questions more difficult than can be resolved by any chain of reasoning in one of our most successful theories for modeling counting and computation.
What you would see if you tried this is that for certain pairs of machines and inputs, your program would run forever, and at no time during its running would anyone (including programs and machines) be able to determine whether your program will eventually halt or not. That is, the complexities inherent in the machine/input pair are so vast that the program keeps finding more and more alleys to check to determine whether there is an eventual halt. And it never finishes the check. |
clarifying the characteristic function | $F$ is the cumulative distribution function(cdf) of $X$.
You will see different definitions of the expectation depending on whether the cdf or the density function $f_X$ is used:
$$E[g(X)]=\int_{\mathbb{R}}{g(x)f_X(x)dx}$$
or you replace $f_X$ by $F$ using $$dF(x)=f_X(x)dx$$
$$E[g(X)]=\int_{\mathbb{R}}{g(x)dF(x)}$$ |
Is $\sum \cos(n \pi) \frac{n}{n^2+1}$ conditionally or absolutely convergent? | Simply notice that $\frac{n}{n^2+1}\sim_{+\infty}\frac{1}{n}$, hence the series doesn't converge absolutely. |
An explanation of the definition of the characteristic function of the bivariate stable distribution | Let me make an analogy to a characteristic function of a real valued random variable.
Given a random variable $X$, its characteristic function is the map $\varphi_X \colon \mathbb{R} \rightarrow \mathbb{C}$ defined as:
$$ \varphi_X(u) \, \colon = \mathbf{E} \left[ e^{ i u X} \right].$$
Note that since the expectation is taken over the variable $X$, this becomes a function of $u$ only. Characteristic functions are useful, since if you have two random variables $X, \, Y$ and you can show that
$$\varphi_X(u) = \varphi_Y(u), \qquad \forall \, u \in \mathbb{R}$$
then it follows that $X$ and $Y$ have the same distribution.
Now returning back to your context: in your case the variable does not take values in $\mathbb{R}$, but rather in $\mathbb{R}^d$. The equivalent form of a characteristic function for a variable $X \in \mathbb{R}^d$ is a function $\varphi_X \colon \mathbb{R}^d \rightarrow \mathbb{C}$, where now:
$$\varphi_X( u) \, \colon = \mathbf{E} \left[ e^{i u^T X} \right],$$
note that we can take the expectation as $u^T X = \sum_{i=1}^d u_i X_i \in \mathbb{R}$ (since expectations only make sense for `1-dimensional' variables).
As in the case of $d = 1$, if you can show that given two variables $X, Y$ taking values in $\mathbb{R}^d$ such that
$$\varphi_X(u) = \varphi_Y(u), \qquad \forall \, u \in \mathbb{R}^d$$
then it follows that the variables are equal in distribution.
Further Introduction to Characteristic Functions
In this I go into some more detail to motivate characteristic functions. I return to the scenario that $X$ is a real valued variable, and $\varphi_X \colon \mathbb{R} \rightarrow \mathbb{C}$.
A common question in probability / statistics is: given two random variables / sets of data, how do I know if they have the same distribution.
A first approach might be to check: do they have the same expectation (mean, if doing statistics)? So suppose we have variables $X, Y$ we might say
$$ \text{Is } \mathbf{E}[X] = \mathbf{E}[Y] \text{ true?}$$
If it's not, then we'd conclude that they definitely do not have the same distribution.
But, is the converse true? If $X, \, Y$ have the same mean, do they have the same distribution? The answer is no.
Example
Suppose $X \sim N(0,1)$ (a normal distribution with mean $0$ and standard deviation $1$), and $Y \sim \text{Unif}[-1,1]$ (a uniform variable on the interval $[-1,1]$). Both variables have the property
$$ \mathbf{E}[X] = \mathbf{E}[Y] = 0$$
However they definitely don't have the same distribution (since $\mathbf{P}[X > 1] > 0$, whilst $\mathbf{P}[Y > 1] = 0$).
So we might then ask: "If two random variables have the same mean, and the same variance... Do they have the same distribution?". Again the answer to this is no.
Example
Suppose $X \sim N(1,1)$ and $Y \sim \text{Poi}(1)$ (a Poisson variable with mean $1$). Then
$$\mathbf{E}[X] = \mathbf{E}[Y] = 1, \qquad \text{Var}(X) = \text{Var}(Y) = 1$$
Similarly, for any given integer $K \geq 0$ we can construct examples where
$$\mathbf{E}[ X^k] = \mathbf{E}[Y^k] \qquad \text{for all } 0 \le k \le K,$$
and yet $X$ and $Y$ do not have the same distribution. We call $\mathbf{E}[X^k]$ the $k$-th moment of $X$.
However... It turns out (and this requires proving, together with a condition controlling the growth of the moments) that if you can show that all moments of the distributions agree, i.e. if
$$\mathbf{E}[X^k] = \mathbf{E}[Y^k], \qquad \forall \, k \geq 0$$
then $X \sim Y$.
How does this relate to the characteristic function?
A similar statement from real analysis says that a power series uniquely determines a function. So the insight is: if we create a power series with the moments of $X$ as coefficients, and then we find a variable $Y$ with the same power series... Then: uniqueness of power series implies they have the same coefficients, i.e. the same moments, and therefore have the same distribution!
So we construct the power series:
$$ f(u) = \mathbf{E}[X^0] + \mathbf{E}[X^1] u + \mathbf{E}[X^2] \frac{u^2}{2!} + \mathbf{E}[X^3] \frac{u^3}{3!} + \cdots$$
We introduce the parameter $u$ purely as an artificial way to make a function from the sequence of moments $\{\mathbf{E}[X^k]\}_{k \geq 0}$. Using linearity of expectations, and the convenience of dividing the moments by factorials, we get:
$$ f(u) = \mathbf{E}\left[ 1 + uX + \frac{u^2X^2}{2!}+ \frac{u^3X^3}{3!} + \cdots \right] = \mathbf{E}[ e^{uX} ]$$
The function $f$ is almost the characteristic function (it's missing the $i$), and is known as the moment generating function. The above argument therefore says that if we have two random variables $X,Y$ and they have the same moment generating function, then they have the same distribution.
Introducing the factor $i$, i.e. passing to $f(iu)$, changes very little: the claim is still true that $\varphi(u) = f(iu)$ uniquely determines the distribution. One advantage is that $|e^{iuX}| = 1$, so the expectation defining $\varphi$ always exists; beyond that, the $i$ is largely historical, inasmuch as $\varphi$ is essentially the Fourier transform from real analysis.
So in short: the moments of a random variable uniquely determine the distribution. Therefore, a power series with the moments as coefficients uniquely determines the distribution. The characteristic function is exactly that: a power series, in some parameter $u$, with the moments of a random variable as coefficients.
The value of $u$ itself is not of interest: $u$ is a way to encode a sequence of information (the coefficients), as a function over a continuous parameter. |
How to conclude this solution is periodic? | For simplicity I will denote $U$ by $x$. Suppose $x$ is a non-zero solution of your ODE. Then $x$ solves the Hamiltonian equation
$$\tag{HS}
\ddot{y}(t)+y^3(t)=0,
$$
and in particular
$$
\frac{1}{2}\dot{x}^2(t)+\frac{1}{4}x^4(t)=h
$$
for some constant $h>0$ not dependending on $t$. Therefore
$$
(x,\dot{x})=(\pm h^{1/4}\sqrt{2\sin\theta},\sqrt{2h}\cos\theta),
$$
for some function $\theta:\mathbb{R} \to [2k\pi,(2k+1)\pi],\ k \in \mathbb{Z}$. Since $x$ solves (HS), it follows that $\theta$ solves the ODE
$$
\ddot{\theta}=2\sqrt{h}\cos\theta.
$$
The function
$$\tag{1}
t \mapsto \phi(t)=\theta(t/(\sqrt{2}\,h^{1/4}))-\frac{\pi}{2}
$$
then solves the (mathematical) pendulum equation
$$\tag{P}
\ddot{\phi}=-\sin\phi.
$$
We recall that any solution $\phi$ of (P) has constant energy, i.e.
$$
E(t)=\frac{1}{2}\dot{\phi}^2(t)+1-\cos\phi(t)=E(0) \quad \forall t.
$$
It is well known that (P) admits periodic, homoclinic, and heteroclinic solutions.
Let $\phi_\tau$ be a periodic solution of (P) with period $\tau>0$.
Then
$$
x_\tau(t)=\pm h^{1/4}\sqrt{2\cos\phi_\tau(\sqrt{2}\,h^{1/4}t)}
$$
is a periodic solution of (HS) with period $T_h=\tau/(\sqrt{2}\,h^{1/4})$ and energy $h$.
Let $x^T$ be a periodic solution of (HS) with period $T=T_h>0$ and energy $h>0$. Setting
$$
r=\sqrt{2}h^{1/4},
$$
we have
\begin{eqnarray}
T_h&=&2\sqrt{2}\int_{-r}^r\frac{dy}{\sqrt{4h-y^4}}=4\sqrt{2}\int_0^r\frac{dy}{\sqrt{4h-y^4}}\cr
&=&\frac{4\sqrt{2}}{r}\int_0^1\frac{ds}{\sqrt{1-s^4}}=4h^{-1/4}\int_0^1\frac{ds}{\sqrt{1-s^4}}.
\end{eqnarray} |
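(For reference, the substitution $u=s^4$ turns the last integral into a Beta integral: $\int_0^1\frac{ds}{\sqrt{1-s^4}}=\frac14 B\left(\frac14,\frac12\right)=\frac{\Gamma(1/4)\Gamma(1/2)}{4\Gamma(3/4)}\approx 1.31103$, so $T_h\approx 5.2441\,h^{-1/4}$.)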
Nash Equilibrium in General sum game | Start from $P^{T}Bq \geq P^{T} B e_j$. You correctly replaced $e_j = (1,0)$ (play $M_1$) to get $p_1 \ge 1/3$. Now replace $e_j = (0,1)$ (play $M_2$) to get $p_1 \le 1/3$ and conclude $p_1^*=1/3$.
Given that $p_1 + p_2 \le 1$ and that there are no other constraints to satisfy, conclude that there is a continuum of equilibria in mixed strategies where Row plays $L_1$ with probability $p^*_1 = 1/3$, $L_2$ with probability $p^*_2 \in [0,2/3]$ and $L_3$ with probability $1-p^*_1-p^*_2$ in $[0,2/3]$. |
How do you calculate Pr(A | B, C, D, E) if you know Pr( A ∩ B ∩ C ∩ D ∩ E) and Pr(B ∩ C ∩ D ∩ E)? | If you need to calculate $P(A\mid B,C,D,E)$ which is just another notation for $P(A\mid B\cap C\cap D\cap E)$, then your proposed calculation is merely using the definition of the conditional probability of $A$ given that $B, C, D, E$ all occurred. The chain rule, on the other hand, says that $$P(A\cap B\cap C\cap D\cap E) = P(E)P(D\mid E)P(C\mid D,E)P(B\mid C,D,E)P(A\mid B,C,D,E),$$ that is, we use the conditional probability to find the unconditional probability that all the events occurred, not the other way around.
The chain rule can be "justified" as saying that in order for $A,B,C,D,E$ all
to occur, we must assume that $E$ occurred (which has probability $P(E)$), and
having assumed that, we also need to assume that $D$ occurred which has
probability not $P(D)$ but rather $P(D\mid E)$ since we have already assumed
that $E$ occurred; and then we need to assume that $C$ occurred for which we
use the conditional probability $P(C\mid D,E) = P(C\mid D\cap E)$, and so on.
Note that the first multiplication on the right gives
$$P(E)P(D\mid E) = P(E) \frac{P(D\cap E)}{P(E)} = P(D\cap E)$$
and so the next one gives
$$P(D\cap E)P(C\mid D\cap E) = P(D\cap E)\frac{P(C\cap (D\cap E))}{P(D\cap E)}
= P(C\cap (D\cap E))$$
Do you see a pattern emerging? |
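(For completeness: the next product gives $P(B\cap C\cap D\cap E)$, and the final one gives
$$P(B\cap C\cap D\cap E)\,P(A\mid B\cap C\cap D\cap E)=P(A\cap B\cap C\cap D\cap E),$$
which is exactly the left-hand side of the chain rule.)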
From an equality to a comparison | There is no answer to this question. Consider for example when a reverse of the above formula holds:
$$\forall i \in n : ( f_i \neq 0 \wedge g_i \neq 0 ) \wedge P f \le P
g \Rightarrow \forall i\in n: f_i \ge g_i.$$
This case can't be distinguished from what is wanted in the question, without explicitly specifying a partial order for the image of $P$. |
Give an example of a function that fails to satisfy the Lipschitz condition at a point of continuity. | Hint: If $f$ is differentiable at $x$, then the quantity
$$\left| \frac{f(y) - f(x)}{y - x} \right|$$
must be bounded in a neighborhood of $x$ (why?). How does this relate to the Lipschitz condition?
For the other question, consider a function with a vertical tangent line, such as $x^{1/3}$. |
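For instance, for $f(x)=x^{1/3}$ at $x=0$:
$$\left|\frac{y^{1/3}-0^{1/3}}{y-0}\right|=|y|^{-2/3}\longrightarrow\infty\quad\text{as } y\to0,$$
so no Lipschitz constant works on any neighbourhood of $0$, even though $f$ is continuous there.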
$\sigma$-algebra containing all singletons | The $\sigma$-algebra $\mathcal{A}$ is called the countable-cocountable $\sigma$-algebra and is defined as follows:
$$\mathcal{A} := \{A \subseteq \mathbb{R}; \text{$A$ or $\mathbb{R} \backslash A$ is countable}\}.$$
Any $\sigma$-algebra containing all singletons has to contain all countable sets (since they can be written as a countable union of singletons) and their complements. Consequently,
$$\sigma(\{\{x\}: x \in \mathbb{R}\}) \supseteq \mathcal{A}.$$
(Here the left-hand side denotes the minimal $\sigma$-algebra containing all singletons). On the other hand, it is not difficult to check that $\mathcal{A}$ is a $\sigma$-algebra, and therefore
$$\sigma(\{\{x\}: x \in \mathbb{R}\}) = \mathcal{A}.$$
If every five point subset of a metric space can be isometrically embedded in the plane then is it possible for the metric space also? | "Embeddings" below are isometric; $(X,d)$ is a metric space.
In fact $X$ can be embedded in $\Bbb R^n$ if and only if every subset of $X$ containing no more than $n+3$ points can be embedded in $\Bbb R^n$. Thanks to @achillehui for noticing that the first version of this post was wrong, for stating the correct version of the result, and for telling us it's due to Menger (Untersuchungen über allgemeine Metrik, Mathematische Annalen, 100:75–163, 1928) way back in 1928. Edit I've finally seen how to give an example showing that $n+3$ is best possible. See the bonus section at the bottom.
The proof is by induction on $n$. You may notice that the structure of the proof for $n=1$ is identical to the structure of the induction step. In fact the case $n=1$ could be derived by induction from the case $n=0$. That might involve some head-scratching; it seems best just to give the case $n=1$ separately.
In the case $n=1$ various plausible but not quite trivial lemmas can be replaced by the following triviality: If $X$ can be embedded in $\Bbb R$, $a,b$ are two distinct points of $X$, $x,y$ are two distinct points of $\Bbb R$, and $|x-y|=d(a,b)$, then there is exactly one embedding $f:X\to\Bbb R$ with $f(a)=x$ and $f(b)=y$. (For existence, replace the given embedding $g$ by $\pm g+c$. For uniqueness, note that if $|f(a)-t|=|f(a)-s|$ and $t\ne s$ then $f(a)=(s+t)/2$, and similarly for $f(b)$.)
Theorem 1. $X$ can be embedded in $\Bbb R$ if and only if any four points of $X$ can be embedded in $\Bbb R$.
Proof: For the less trivial direction: We may assume $X$ has more than one point. Choose $p,q\in X$ and define $f(p)=0$, $f(q)=d(p,q)$.
Suppose $r\in X$, $r\ne p,q$. The hypothesis and the comment above (with $\{p,q,r\}$ in place of $X$) shows that there is a unique embedding $g$ of $p,q,r$ with $g(p)=f(p)$ and $g(q)=f(q)$. Define $f(r)=g(r)$.
We have defined $f:X\to\Bbb R$ such that $|f(r)-f(p)|=d(r,p)$ and $|f(r)-f(q)|=d(r,q)$ for every $r$. We are done if we can show that $|f(r)-f(s)|=d(r,s)$ for $r,s\notin\{p,q\}$. As above there is a unique embedding $g$ of $p,q,r,s$ in $\Bbb R$ with $g(p)=f(p)$ and $g(q)=f(q)$. Uniqueness of the embedding of $p,q,r$ shows that $g(r)=f(r)$. Similarly $g(s)=f(s)$, so $|f(r)-f(s)|=|g(r)-g(s)|=d(r,s)$. QED.
The case $n>1$ requires a few preliminaries.
Lemma 0. Suppose $x_1,\dots,x_N,y_1,\dots,y_N\in\Bbb R^n$ and $|x_j-x_k|=|y_j-y_k|$ for all $j,k$. Then there is an isometry $f:\Bbb R^n\to\Bbb R^n$ with $f(x_j)=y_j$.
Proof: Let $v_j=x_j-x_1$, $w_j=y_j-y_1$. We have $|v_j|=|w_j|$ and $|v_j-v_k|=|w_j-w_k|$. The parallelogram law shows that $|v_j+v_k|=|w_j+w_k|$, so that $v_j\cdot v_k=w_j\cdot w_k$ by polarization. Hence $$\left|\sum t_jv_j\right|=\left|\sum t_jw_j\right|.$$In particular $\sum t_jv_j=0$ if and only if $\sum t_jw_j=0$, so that there exists a linear map $T:span(v_j)\to span(w_j)$ with $$T\left(\sum t_jv_j\right)=\sum t_jw_j.$$Now $T$ is evidently an isometry; extend $T$ to an orthogonal map from $\Bbb R^n$ to itself by mapping the orthogonal complement of $span(v_j)$ to the orthogonal complement of $span(w_j)$; then $f(x)=T(x-x_1)+y_1$ is the required isometry. QED.
Definition We say $F\subset\Bbb R^n$ is coplanar if there exist $c\in\Bbb R$ and $v\in\Bbb R^n$ with $v\ne0$ such that $x\cdot v=c$ for every $x\in F$.
Lemma 1. Suppose $x_1,\dots,x_N\in\Bbb R^{n}$ and $\{x_1,\dots,x_N\}$ are not coplanar. If $|x_j-y|=|x_j-z|$ for all $j$ then $y=z$.
Proof: Saying $|x_j-y|=|x_j-z|$ is the same as $(x_j-\frac{y+z}{2})\cdot(z-y)=0$; if $y\ne z$ this says that $x_1,\dots,x_N$ are coplanar. QED.
Lemma 2. Suppose $F\subset\Bbb R^n$ is not coplanar. Then $F$ has a non-coplanar subset containing exactly $n+1$ points.
Proof: Fix $x_0\in F$. The set $\{x-x_0:x\in F\}$ spans $\Bbb R^n$, so it must contain a basis. QED.
Theorem 2. $X$ can be embedded in $\Bbb R^n$ if and only if every subset of $X$ containing no more than $n+3$ points can be embedded in $\Bbb R^n$.
Proof: The proof is by induction on $n$. The case $n=1$ is exactly Theorem 1 above. Suppose then that the theorem holds for embeddings into $\Bbb R^n$, and suppose that every subset of $X$ containing no more than $n+4$ points can be embedded in $\Bbb R^{n+1}$.
We want to show that $X$ can be embedded in $\Bbb R^{n+1}$, so we may assume that $X$ cannot be embedded in $\Bbb R^n$. By induction there exists $S \subset X$, with $|S|\le n+3$, such that $S$ cannot be embedded in $\Bbb R^n$. By hypothesis there exists an embedding $f:S\to\Bbb R^{n+1}$. Since $S$ cannot be embedded in $\Bbb R^n$, the image $f(S)$ cannot be coplanar. Applying Lemma 2 we see this: There exist $p_1,\dots,p_{n+2}\in X$ and an embedding $f$ of $p_1,\dots,p_{n+2}$ in $\Bbb R^{n+1}$ such that if we set $x_j=f(p_j)$ then $x_1,\dots,x_{n+2}$ are not coplanar.
Suppose that $q\ne p_1,\dots,p_{n+2}$. The hypothesis and Lemma 1 show that there is exactly one $x\in\Bbb R^{n+1}$ with $|x-x_j|=d(q,p_j)$ for all $j$; define $f(q)=x$.
We have defined $f:X\to\Bbb R^{n+1}$. To show that $f$ is an isometry it is enough to show that if $r,q\ne p_1,\dots p_{n+2}$ then $|f(r)-f(q)|=d(r,q)$. The hypothesis shows that there is an embedding $g:\{r,q,p_1,\dots,p_{n+2}\}\to\Bbb R^{n+1}$, and now Lemma 0 shows that there exists such a $g$ with $g(p_j)=f(p_j)$. Now Lemma 1 shows that $g(r)=f(r)$ and $g(q)=f(q)$, so that $|f(r)-f(q)|=|g(r)-g(q)|=d(r,q)$. QED
Bonus In fact $n+3$ is best possible.
In the first version of this answer I "proved" Theorem 2 with $2n+1$ in place of $n+3$. The induction was ok, more or less the same as above; the problem was that I'd "proved", largely on the basis of wishful thinking, that Theorem 1 was true with $3$ in place of $4$. Thanks again to @achillehui for pointing out that was wrong.
I think the simplest way to show that $4$ is best possible for $n=1$ is this: Let $X$ consist of the four points $(0,0),(0,1),(1,0),(1,1)$, with the $\ell^1$ metric (that is, $d(x,y)=|x_1-y_1|+|x_2-y_2|$). It's easy to see that any $3$-point subset of $X$ embeds in $\Bbb R$, and that $X$ itself does not.
That example actually generalizes to $n\ge1$, but when it's presented that way it's not so clear how. I'm going to give a fairly careful description of an example showing that $5$ is best possible for $n=2$. I claim that it is clear that the example below generalizes to show that $n+3$ is best possible for $n\ge2$, and I leave as an unimportant exercise to show how the example for $n=1$ above is actually the same construction as what I give for $n=2$, if you look at it right.
Let $a_1,a_2,a_3$ be the vertices of an equilateral triangle in the plane. Let $c$ be the center of that triangle. Let $X=\{a_1,a_2,a_3,c,c'\}$, where $c'$ is something else. Define a metric on $X$: First, define the distance between any two points of $\{a_1,a_2,a_3,c\}$ to be the euclidean distance as points in the plane. Define $$d(c',a_j)=d(c,a_j)$$and set $$d(c,c')=2h,$$where $h$ is the perpendicular distance from $c$ to one of the sides of our triangle. No need to scratch your head over the triangle inequality, that will follow when we show that any $4$-point subset of $X$ can be embedded in the plane.
If we remove $c'$ from $X$ what remains is a subset of the plane. If we remove $c$ we can set $f(c')=c$ to embed the remaining four points.
Suppose we remove $a_3$. Map each of the points $a_1,a_2,c$ to itself. Map $c'$ to the point on the other side of the segment $[a_1,a_2]$, so we get a rhombus with diagonals $[a_1,a_2]$ and $[c,f(c')]$.
So any four points of $X$ can be embedded in the plane. But $X$ itself does not embed in the plane; if this is not clear it can be proved from Lemma 1 and Lemma 0 using arguments as above.
So $5$ is best possible for $n=2$. And I assert again that the same construction shows that $n+3$ is best possible for $n\ge2$; if anyone wants to write a formal description go for it. (For $n=3$ we start with the vertices and center of a regular tetrahedron and add a sort of duplicate of the center point...) |
Which iterative method is best for finding the polynomial's root and why? | As was mentioned in the comments by @LutzL (including the error he spotted), we have
$$\tag 1 x_{i+1}=g(x_i), \qquad g(x)=\sqrt{\frac16(x^3+4x+7)}$$
We can use the following test (see these notes for theory) to determine if the iteration converges.
$$\displaystyle \max_{a \le x \le b} | g'(x) | < 1$$
For $(1)$, we have
$$\tag 2g'(x) = \frac{3 x^2+4}{2 \sqrt{6} \sqrt{x^3+4 x+7}}$$
The maximum of $(2)$ occurs at $x = 2$ and is
$$\displaystyle \max_{1 \le x \le 2} \left|\frac{3 x^2+4}{2 \sqrt{6} \sqrt{x^3+4 x+7}} \right| = 4 \sqrt{\frac{2}{69}} \approx 0.6810052246069989 \lt 1$$
So, this is a contraction and the iteration converges to the root. The iterates with $x_0 = 1.5$ are
$1.5,1.65202,1.73766,1.78873,1.82017,1.83988,1.85238,1.86036,1.86548,1.86877,1.87089,1.87226,1.87314,1.87371,1.87408,1.87432,1.87447,1.87457,1.87464,1.87468,1.87471,1.87472,1.87473,1.87474,1.87475,1.87475$
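For reference, a minimal sketch (in Python; the setup is mine, not part of the original answer) of the fixed-point loop that produces these iterates:

    from math import sqrt

    def g(x):
        # iteration function from (1): g(x) = sqrt((x^3 + 4x + 7)/6)
        return sqrt((x**3 + 4*x + 7) / 6)

    x = 1.5  # starting guess x_0
    for _ in range(25):
        x = g(x)
        print(round(x, 5))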
Try this test on the two other methods. |
Showing How Prime Factorization Helps Solving Problems | Not exactly a problem, more of a curiosity or trick, but maybe this will qualify:
"Choose a random 3 digit number such as 379. Repeat it to make 379379. If you now divide this number by 7, then the answer by 11, then the answer by 13, each division works exactly (no remainder) and the final answer is the number first chosen - why does this work?"
Explaining this simply depends on knowing the factorisation $1001 = 7 \times 11 \times 13$ (but it has been known to keep high-school students puzzled and entertained for a while). |
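In symbols: repeating a three-digit number $n$ produces
$$1000n+n=1001n=7\times11\times13\times n,$$
so dividing by $7$, $11$ and $13$ in turn recovers $n$; e.g. $379379=379\times1001$.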
A generalized derivative | Hint:
This is just the composition of $f$ with $g$, where $g\colon x\mapsto x^\alpha$. So just apply the chain rule along with the power rule:
$$
\begin{align}
\mathcal D^{*^\alpha}&=\frac{\rm d}{{\rm d}x}\,g(f(x))\,\Big|_{x=x_0}\\
&=g'(f(x))\cdot f'(x)\,\Big|_{x=x_0}\\
&=\alpha f(x)^{\alpha-1}\cdot f'(x)\,\Big|_{x=x_0}
\end{align}
$$
Use this result and see where it leads.
Remark: $\mathcal D^{*^\alpha}$ can be understood to be the set made by raising the derivative of a specific function to all powers, provided that $\alpha$ is considered to be non-constant. |
Infinite-dimensional translation-invariant measure | The statement in the question "Why is there no translation-invariant measure on an infinite-dimensional Euclidean space?" is not correct.
(i) A counting measure defined in infinite-dimensional Euclidean space is an example of such measure which is translation-invariant.
(ii) There does not exist a translation-invariant Borel measure on the infinite-dimensional Euclidean space $\ell_2$ which gets the value 1 on the unit ball. Indeed, assume the contrary and let $\mu$ be such a measure. Let $(e_k)_{k \in N}$ be the standard basis, with $||e_k||=1$ for $k \in N$, and let $B_k$ be the open ball with center $\frac{e_k}{2}$ and radius $r$ less than $\frac{\sqrt{2}}{4}$. Then $(B_k)_{k \in N}$ is a family of pairwise disjoint open balls of radius $r$, all contained in the unit ball. On the one hand, the $\mu$-measure of each $B_k$ must be zero, because otherwise the $\mu$-measure of the unit ball would be $+\infty$. On the other hand, since $\ell_2$ is separable, $\ell_2$ can be covered by countably many translates of $B_1$, which together with the translation-invariance of $\mu$ implies that the $\mu$-measure of $\ell_2$ is zero. This is a contradiction, and assertion (ii) is proved.
(iii) There exists a translation-invariant measure on an infinite-dimensional Euclidean space $\ell_2$ which gets the value 1 on the parallelepiped $P$ defined by $P=\{x \in \ell_2 : |\langle x,e_k\rangle|\le \frac{1}{2^k} \text{ for all } k \in N\}$.
Let $\lambda$ be the infinite-dimensional Lebesgue measure on $R^{\infty}$ (see Baker R., "Lebesgue measure" on $\mathbb{R}^{\infty}$, Proc. Amer. Math. Soc. 113 (1991), no. 4, 1023–1029). We set
$$
(\forall X)(X \in {\cal{B}}(\ell_2) \rightarrow \mu(X)=\lambda(T(X)))
$$
where ${\cal{B}}(\ell_2)$ denotes the $\sigma$-algebra of Borel subsets of $\ell_2$
and the mapping $T : \ell_2 \to R^{\infty} $ is defined by: $T(\sum_{k \in N}a_ke_k)=(2^{k-1}a_k)_{k \in N}$.
Then $\mu$ satisfies all the conditions stated in (iii).
P.S. There exist many interesting translation-invariant non-sigma-finite Borel measures in infinite-dimensional separable Banach spaces (see, for example, G. Pantsulaia, On generators of shy sets on Polish topological vector spaces, New York J. Math. 14 (2008), 235–261).
Probability question about $n$ balls and $n$ boxes | Part (a) looks good.
But (b) does not. Look at the complementary event, that there are no empty boxes. How many ways can that happen? |
Find all integers n (positive, negative, or zero) so that $n^3 - 1$ is divisible by $n + 1$ | If $n + 1$ is a factor, then note that $n \equiv -1 \pmod{n + 1}$. As such, $n^3 - 1 \equiv \left(-1\right)^3 - 1 = -2 \pmod{n + 1}$. This can only be the case if $n + 1 = \pm 2$ or $n + 1 = \pm 1$, i.e., $n = 0, 1, -2, -3$.
How does $\sqrt{y^2 + y^2} = 1$ give $y = -1/\sqrt{2}$ here? | $2y^2 = 1$
at this point you need to divide both sides by 2:
$y^2 = \frac12$. (You said $y^2 = 2$)
so
$y = \pm \sqrt{\frac 12}$. As $y$ is negative,
$y = - \sqrt{\frac 12} = \frac {-1}{\sqrt 2}$. |
Proving the Markov inequality for a non-negative random variable | Comment: Sketch of proof for a continuous random variable. Can you give a justification for each step?
$$E(X) = \int_0^\infty\! xf(x)\,dx \ge \int_b^\infty\! xf(x)\,dx
\ge \int_b^\infty\! bf(x)\,dx = b\int_b^\infty\! f(x)\,dx = bP(X \ge b). $$
The proof for a discrete random variable involves summations instead of integrals; other, more general types of integrals give a unified proof, if you know about them.
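For instance, in the discrete case (writing $p$ for the probability mass function) the same three steps read
$$E(X)=\sum_x x\,p(x)\ \ge\ \sum_{x\ge b}x\,p(x)\ \ge\ b\sum_{x\ge b}p(x)=b\,P(X\ge b).$$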
Gentzen Cut elimination: Why do we have to "go infinite"? | The claim on the slides is well known: Peano arithmetic does not admit cut elimination with finitary deductions. It's theorem 10.4.12 in Basic Proof Theory by Troelstra and Schwichtenberg.
In the comments, Zhen Lin pointed out that $\text{PA}_\omega$, which is PA augmented by the $\omega$-rule, proves Con(PA). This is correct - $\text{PA}_\omega$ is much stronger than PA.
However, there is a second subtlety in the cut elimination proof. Looking at the embedding theorem, slide 30 of the linked slides, we see that if $\text{PA} \vdash \Gamma \to \Delta$ then not only does $\text{PA}_\omega$ prove $\Gamma \to \Delta$, in fact $\text{PA}_\omega \vdash^{\omega + m}_k \Gamma \to \Delta$ for some $k,m < \omega$. This latter fact is key to the actual ordinal analysis of $\text{PA}$. This is proved in detail in sections 10.3 and 10.4 in Troelstra and Schwichtenberg, although they use $Z$ for PA and $Z^\infty$ for $\text{PA}_\omega$. |
construct a representation of a $C^*$ algebra | The GNS construction works for any positive linear functional (not just tracial states), and every $C^*$-algebra has plenty of positive linear functionals. |
How do you integrate $\int \frac{1}{\sqrt{2ax-x^2}}dx$? | Actually, that integral is equal to$$\int\frac1{\sqrt{a^2-(x-a)^2}}\,\mathrm dx=\frac1a\int\frac1{\sqrt{1-\left(\frac{x-a}a\right)^2}}\mathrm dx=\arcsin\left(\frac{x-a}a\right)+c.$$ |
Number of elements of a finite field | Ok so the fact that the field $F$ is finite tells you a lot. First, the characteristic must be $p$ for some prime, since fields of characteristic $0$ are infinite. Secondly, since $F$ itself is finite, $F$ must be finite-dimensional as a vector space over $\mathbb{Z}/p\mathbb{Z}$.
So let's give $F$ a basis $x_1, ..., x_n$. Then the elements of $F$ can all be written uniquely as $\alpha_1 x_1 + \alpha_2 x_2 + ... + \alpha_n x_n$ for $\alpha_1, \alpha_2, ..., \alpha_n\in\mathbb{Z}/p\mathbb{Z}$. Also each such linear combination gives an element of $F$.
How many choices are there for the alphas? Well there are $p$ choices for each alpha so we have that $|F| = p^n$. |
eigenvalues by inspection | In this case you can see the eigenvalues "by inspection".
Item #1: If you subtract the identity matrix from your matrix, you get three repeated rows,
i.e. $A-I$ has rank $1$ only. This means that the eigenspace associated with $\lambda=1$ is $2$-dimensional; in other words, it is a double eigenvalue (and a double root of the characteristic polynomial).
Item #2: The sum of entries on all rows is equal to $7$. This means that the vector $(1,1,1)^T$ is an eigenvector belonging to $\lambda=7$.
This is a $3\times3$ matrix, so that's all: the eigenvalues are $1$, $1$ and $7$.
Infinite dimensional vector space, and infinite dimensional subspaces. | Let $v_1,v_2,\dots$ be infinitely many linearly independent vectors in $V$.
Define
$U_n:={\rm span}(v_n,v_{n+1},v_{n+2},\dots)$. |
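Then each $U_n$ is infinite-dimensional, and $U_1\supsetneq U_2\supsetneq\cdots$ is a strictly decreasing chain of subspaces, since linear independence gives $v_n\in U_n\setminus U_{n+1}$.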
Different ways to solve a number sequencing problem | This is just a somewhat more compact notation when using generating functions. We can select either $1$ or $2$ and encode this as exponents in the expression
\begin{align*}
x^1+x^2
\end{align*}
Since we want strings built from the alphabet $\{1,2\}$ which sum up to $N$, we consider all expressions $(x^1+x^2)^j$ with $j\geq 0$ and extract the coefficient of $x^N$.
We use the coefficient of operator $[x^N]$ to denote the coefficient of $x^N$ of a series.
We obtain this way
\begin{align*}
[x^N]\frac{1}{1-(x+x^2)}&=[x^N]\sum_{j=0}^\infty(x+x^2)^j\tag{1}\\
&=\sum_{j=0}^N[x^{N-j}](1+x)^j\tag{2}\\
&=\sum_{j=0}^N[x^{N-j}]\sum_{l=0}^j\binom{j}{l}x^l\\
&=\sum_{j=0}^N\binom{j}{N-j}\tag{3}
\end{align*}
Comment:
In (1) we represent all strings built from the alphabet $\{1,2\}$ by a geometric power series.
In (2) we use the linearity of the coefficient of operator and apply the rule $$[x^{p+q}]A(x)=[x^p]x^{-q}A(x)$$ We also respect that the coefficient is non-negative and restrict the upper limit of the series by $N$.
In (3) we select the coefficient of $x^{N-j}$. Note the convention $\binom{r}{k}=0$ if $r\in\mathbb{C}$ and $k$ a negative integer (see e.g. (5.1) p. 154 in Concrete Mathematics). |
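As a quick sanity check, for $N=4$:
$$\sum_{j=0}^4\binom{j}{4-j}=\binom22+\binom31+\binom40=1+3+1=5,$$
matching the five strings $1111, 112, 121, 211, 22$.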
Question on asymptotes | Yes, provided that the limit of the derivative exists. |
For what non trivial values of $a$ and $k$, $a^k + 1$ will be prime? | consider the case $k=3$ then we get $$a^3+1=(a+1)(1-a+a^2)$$ |
The sum of the squares of the prime factors | The density of such integers is exactly $1 - \log 2$. For the lower bound consider the integers $n$ with $\leq 2\log\log n$ prime factors and that are $n^{1/2 - \varepsilon}$ smooth (i.e all of their prime factors are $< n^{1/2 - \varepsilon}$.. Such integers satisfy the condition $d_n \leq n$ for all large enough $n$. By a theorem of Alladi matwbn.icm.edu.pl/ksiazki/aa/aa49/aa4918.pdf the density of such integers is at least $1 - \log 2 - 100 \varepsilon$. Squeeze $\varepsilon \rightarrow 0$.
For reference, note that the density of integers with $\leq 2 \log\log n$ prime factors is $1$, while the density of integers with all of their prime factors $\leq n^{1/2}$ is $1 - \log 2$ (the $1 - \log 2$ corresponds to Dickman's function evaluated at $2$).
Call the above set of integers $A$. Consider the integers $B$ which are not $n^{1/2}$ smooth (i.e have one prime factor $> n^{1/2}$). The density of $B$ is $\log 2$. In addition $A$ and $B$ are disjoint while the density of their union is $1$. The set $C$ of integers with $d_n \leq n$ contains $A$ and is disjoint from $B$. It follows that $d(A) \leq d(C) \leq 1 - d(B)$. Therefore $d(C) = 1- \log 2$. |
Convergence of $\int_{0}^{\pi} \frac{\sin{(x)}}{(x+n\pi)^{p}} dx$ | How about $$\frac{1}{(x+n\pi)^p} \leq \frac{1}{\pi^p n^p}?$$
Getting rid of $x$ entirely will make your estimations that much easier...
Another way is to simply calculate $$\int_0^\pi \frac{1}{(x+n\pi)^p} dx$$ which equals $$\frac{(x+n\pi)^{1-p}}{1-p}\Big|_0^\pi = \frac{1}{1-p}\left(\frac{1}{((n+1)\pi)^{p-1}} - \frac{1}{(n\pi)^{p-1}}\right)$$
for $p\neq 1$ and $$\ln\left(\frac{(n+1)}{n}\right)$$ for $p=1$. The convergence is obvious in all cases. |
Show $(-1)^n+\frac 1n$ diverges from the definition | Hint: Assume it has a limit $L$. Clearly $L>0$ or $L<0$. Treat each case separately and demonstrate infinitely many points that are $<0$ (if $L>0$) and vice versa for the second case.
This shows you can find an $\varepsilon>0$ (what is it exactly?) such that $|s_k -L|>\varepsilon$ for some $k>N$ for any $N$. |
How to provide a counterexample for if $n$ is prime, then $2^n -1$ is prime | No. There are ways to test, for prime $n$, whether $2^n -1$ is prime more efficiently than by trial factoring, but showing that some $n$ fails such a test still requires exhibiting a counterexample. A proof that avoided this would be nonconstructive; and I doubt there is a nonconstructive proof that not every prime $n$ yields a prime $2^n-1$.
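(For reference, the smallest counterexample is $n=11$: it is prime, yet $2^{11}-1=2047=23\times89$ is composite.)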
Bezier curve, constrained shortest path and minimum time: is the optimal curve always of minimal degree? | If by velocity you mean $\mathbf{B}'(t)$, then the diagram below shows that the answer to your first question is no. The red curve is the quadratic Bézier $(ACB)$, the green curve is the quintic Bézier $APDEQB$. Control points $P$ and $Q$ satisfy $AP/AC=BQ/BC=2/5$, so that initial and final velocities are the same for both curves. But green curve is shorter.
EDIT.
Initial velocity is $\mathbf{B}'(0)=2(C-A)$
for the quadratic Bézier and
$\mathbf{B}'(0)=5(P-A)$ for the quintic Bézier.
But $PA/CA=2/5$, hence those two results are equal, both in direction and magnitude. The same holds for $\mathbf{B}'(1)$.
EDIT 2.
With your definition of velocity, every Bézier curve is covered in unit time:
$$
t=\int_0^L{ds\over v}=\int_0^1{|\mathbf{B}'(t)|dt\over|\mathbf{B}'(t)|}=
\int_0^1dt=1.
$$
Which, after all, is obvious: a point on any Bézier joining $A$ and $B$ "starts" from $A$ at $t=0$ and "arrives" at $B$ at $t=1$. |
Exsitence of element of a certain order in an infinite abelian group | The answer is positive. All elements have finite order divisible by $m$, so your proof (https://math.stackexchange.com/a/1361378/211913) works fine even in that case. Note that you have never used the finiteness of the group, but only the (bounded) finiteness of the order of each element. |
What is the difference between $\propto$ and $\sim$ | Like you said, $a \propto b$ means: a is proportional to b and so:
$a \propto b\Rightarrow a=k\cdot b$ for some $k \in \mathbb{R}$
In contrast, in asymptotics $a \sim b$ means: $a$ is asymptotically equivalent to $b$, that is:
$a \sim b \Rightarrow \lim_{x\rightarrow \infty} \dfrac{a(x)}{b(x)} = 1 $
So, $\sim$ indicates identical asymptotic behavior for arbitrary functions $a,b$. Maybe functions over time for your usecase?
Note that neither statement implies the other: $a \propto b$ constrains all values of $a$ and $b$ but does not imply $a \sim b$ unless $k = 1$, while $a \sim b$ only constrains the limiting behaviour.
Understanding the necessity of a condition in a question of group theory | It's kind of automatic: if two functions $f,g:S\rightarrow S$ (where $S$ is any set) are such that $f\circ g$ is bijective, then $f$ is surjective and $g$ is injective. Hence $f^2$ being the identity implies $f$ is also bijective. |
2D point projection on an ellipse | Let's assume that the ellipse is centered at the origin. If it's not, then translate the ellipse and the points to make this so.
Given a point $(x_0, y_0)$ that you want to project, you first find the angle $\theta$ between the $x$-axis and the ray leading to the point: in code, use $\theta = \text{atan2}(y_0, x_0)$. Then the projected point $(x,y)$ can be calculated using
$$k = \frac{ab}{\sqrt{ {b^2}\cos^2{\theta} + {a^2}\sin^2{\theta} }}$$
$$x = k \cos\theta$$
$$y = k \sin\theta$$ |
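A minimal sketch of this in Python (the function name is mine), assuming the ellipse is already centered at the origin with semi-axes $a$ and $b$:

    import math

    def project_onto_ellipse(x0, y0, a, b):
        # angle of the ray from the center through (x0, y0)
        theta = math.atan2(y0, x0)
        # distance from the center to the ellipse along that ray
        k = a * b / math.sqrt((b * math.cos(theta))**2 + (a * math.sin(theta))**2)
        return k * math.cos(theta), k * math.sin(theta)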
Continuous function between discrete topological spaces | A map $f:X\to T $ is continuous if for every open set $U \subset T $, the set $f^{-1}(U) $ of all $x\in X $ such that $f (x) \in U $ is open. If $X$ has the discrete topology, then every subset of $X$ is open, so $f^{-1}(U) $ is open. Therefore $f $ is continuous.