tag | question_body | accepted_answer | second_answer
---|---|---|---|
linear-algebra | <p>Can someone point me to a paper, or show here, why symmetric matrices have orthogonal eigenvectors? In particular, I'd like to see a proof that for a symmetric matrix $A$ there exists a decomposition $A = Q\Lambda Q^{-1} = Q\Lambda Q^{T}$ where $\Lambda$ is diagonal.</p>
| <p>For any real matrix $A$ and any vectors $\mathbf{x}$ and $\mathbf{y}$, we have
$$\langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle.$$
Now assume that $A$ is symmetric, and $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda$ and $\mu$. Then
$$\lambda\langle\mathbf{x},\mathbf{y}\rangle = \langle\lambda\mathbf{x},\mathbf{y}\rangle = \langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle = \langle\mathbf{x},A\mathbf{y}\rangle = \langle\mathbf{x},\mu\mathbf{y}\rangle = \mu\langle\mathbf{x},\mathbf{y}\rangle.$$
Therefore, $(\lambda-\mu)\langle\mathbf{x},\mathbf{y}\rangle = 0$. Since $\lambda-\mu\neq 0$, it follows that $\langle\mathbf{x},\mathbf{y}\rangle = 0$, i.e., $\mathbf{x}\perp\mathbf{y}$.</p>
<p>Now find an orthonormal basis for each eigenspace; since the eigenspaces are mutually orthogonal, these vectors together give an orthonormal subset of $\mathbb{R}^n$. Finally, since symmetric matrices are diagonalizable, this set will be a basis (just count dimensions). The result you want now follows.</p>
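<p>For readers who want to see this concretely, here is a small numerical sanity check in Python/NumPy (an illustration, not a substitute for the proof):</p>
<pre><code>import numpy as np

# Sanity check: a symmetric matrix has an orthonormal eigenbasis,
# so A = Q diag(w) Q^T with Q^T Q = I.
M = np.random.rand(5, 5)
A = (M + M.T) / 2                 # symmetrize
w, Q = np.linalg.eigh(A)          # eigh returns orthonormal eigenvectors
print(np.allclose(Q.T @ Q, np.eye(5)))       # Q is orthogonal
print(np.allclose(Q @ np.diag(w) @ Q.T, A))  # A = Q Lambda Q^T
</code></pre>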
| <p>Since being symmetric is the property of an operator, not just its associated matrix, let me use <span class="math-container">$\mathcal{A}$</span> for the linear operator whose associated matrix in the standard basis is <span class="math-container">$A$</span>. Arturo and Will proved that a real symmetric operator <span class="math-container">$\mathcal{A}$</span> has real eigenvalues (thus real eigenvectors) and that eigenvectors corresponding to different eigenvalues are orthogonal. <em>One question still stands: how do we know that there are no generalized eigenvectors of rank more than 1, that is, all Jordan blocks are one-dimensional?</em> Indeed, by referencing the theorem that any symmetric matrix is diagonalizable, Arturo effectively threw the baby out with the bathwater: showing that a matrix is diagonalizable is tautologically equivalent to showing that it has a full set of eigenvectors. Assuming this as a given dismisses half of the question: we were asked to show that <span class="math-container">$\Lambda$</span> is diagonal, and not just a generic Jordan form. Here I will untangle this bit of circular logic.</p>
<p>We prove by induction on the number of eigenvectors: it turns out that finding an eigenvector of a symmetric matrix (and at least one exists for any matrix) always allows us to generate another one, orthogonal to all the previous ones. So we will run out of dimensions before we run out of eigenvectors, making the matrix diagonalizable.</p>
<p>Suppose <span class="math-container">$\lambda_1$</span> is an eigenvalue of <span class="math-container">$A$</span> and there exists at least one eigenvector <span class="math-container">$\boldsymbol{v}_1$</span> such that <span class="math-container">$A\boldsymbol{v}_1=\lambda_1 \boldsymbol{v}_1$</span>. Choose an orthonormal basis <span class="math-container">$\boldsymbol{e}_i$</span> so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span>. The change of basis is represented by an orthogonal matrix <span class="math-container">$V$</span>. In this new basis the matrix associated with <span class="math-container">$\mathcal{A}$</span> is <span class="math-container">$$A_1=V^TAV.$$</span>
It is easy to check that <span class="math-container">$\left(A_1\right)_{11}=\lambda_1$</span> and all the rest of the numbers <span class="math-container">$\left(A_1\right)_{1i}$</span> and <span class="math-container">$\left(A_1\right)_{i1}$</span> are zero. In other words, <span class="math-container">$A_1$</span> looks like this:
<span class="math-container">$$\left(
\begin{array}{c|ccc}
\lambda_1 & \\
\hline & & \\
& & B_1 & \\
& &
\end{array}
\right)$$</span>
Thus the operator <span class="math-container">$\mathcal{A}$</span> breaks down into a direct sum of two operators: <span class="math-container">$\lambda_1$</span> in the subspace <span class="math-container">$\mathcal{L}\left(\boldsymbol{v}_1\right)$</span> (<span class="math-container">$\mathcal{L}$</span> stands for linear span) and a symmetric operator <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> whose associated <span class="math-container">$(n-1)\times (n-1)$</span> matrix is <span class="math-container">$B_1=\left(A_1\right)_{i > 1,j > 1}$</span>. <span class="math-container">$B_1$</span> is symmetric thus it has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which has to be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span> and the same procedure applies: change the basis again so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span> and <span class="math-container">$\boldsymbol{e}_2=\boldsymbol{v}_2$</span> and consider <span class="math-container">$\mathcal{A}_2=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1,\boldsymbol{v}_2\right)^{\bot}}$</span>, etc. After <span class="math-container">$n$</span> steps we will get a diagonal matrix <span class="math-container">$A_n$</span>.</p>
<p>There is a slightly more elegant proof that does not involve the associated matrices: let <span class="math-container">$\boldsymbol{v}_1$</span> be an eigenvector of <span class="math-container">$\mathcal{A}$</span> and <span class="math-container">$\boldsymbol{v}$</span> be any vector such that <span class="math-container">$\boldsymbol{v}_1\bot \boldsymbol{v}$</span>. Then
<span class="math-container">$$\left(\mathcal{A}\boldsymbol{v},\boldsymbol{v}_1\right)=\left(\boldsymbol{v},\mathcal{A}\boldsymbol{v}_1\right)=\lambda_1\left(\boldsymbol{v},\boldsymbol{v}_1\right)=0.$$</span> This means that the restriction <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> is an operator of rank <span class="math-container">$n-1$</span> which maps <span class="math-container">${\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> into itself. <span class="math-container">$\mathcal{A}_1$</span> is symmetric for obvious reasons and thus has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which will be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span>.</p>
|
matrices | <p>Let $A$ be an $n\times n$ complex nilpotent matrix. Then we know that because all eigenvalues of $A$ must be $0$, it follows that $\text{tr}(A^k)=0$ for all positive integers $k$.</p>
<p>What I would like to show is the converse, that is, </p>
<blockquote>
<p>if $\text{tr}(A^k)=0$ for all positive integers $k$, then $A$ is nilpotent.</p>
</blockquote>
<p>I tried to show that $0$ must be an eigenvalue of $A$, then try to show that all other eigenvalues must be equal to 0. However, I am stuck at the point where I need to show that $\det(A)=0$.</p>
<p>May I know of the approach to show that $A$ is nilpotent?</p>
| <p>Assume that for all $k=1,\ldots,n$, $\mathrm{tr}(A^k) = 0$, where $A$ is an $n\times n$ matrix. <br />
We consider the eigenvalues in $\mathbb C$.</p>
<p>Suppose $A$ is not nilpotent, so $A$ has some non-zero eigenvalues $\lambda_1,\ldots,\lambda_r$. <br />
Let $n_i$ be the multiplicity of $\lambda_i$; then $$\left\{\begin{array}{ccc}n_1\lambda_1+\cdots+n_r\lambda_r&=&0 \\ \vdots & & \vdots \\ n_1\lambda_1^r+\cdots+n_r\lambda_r^r&=&0\end{array}\right.$$
So we have $$\left(\begin{array}{cccc}\lambda_1&\lambda_2&\cdots&\lambda_r\\\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_r^2 \\ \vdots & \vdots & \vdots & \vdots \\ \lambda_1^r & \lambda_2^r & \cdots & \lambda_r^r\end{array}\right)\left(\begin{array}{c}n_1 \\ n_2 \\ \vdots \\ n_r \end{array}\right)=\left(\begin{array}{c}0 \\ 0\\ \vdots \\ 0\end{array}\right)$$
But $$\mathrm{det}\left(\begin{array}{cccc}\lambda_1&\lambda_2&\cdots&\lambda_r\\\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_r^2 \\ \vdots & \vdots & \vdots & \vdots \\ \lambda_1^r & \lambda_2^r & \cdots & \lambda_r^r\end{array}\right)=\lambda_1\cdots\lambda_r\,\mathrm{det}\left(\begin{array}{cccc} 1 & 1 & \cdots & 1 \\ \lambda_1&\lambda_2&\cdots&\lambda_r\\\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_r^2 \\ \vdots & \vdots & \vdots & \vdots \\ \lambda_1^{r-1} & \lambda_2^{r-1} & \cdots & \lambda_r^{r-1}\end{array}\right)\neq 0$$
(Vandermonde)</p>
<p>So the system has a unique solution, which is $n_1=\ldots=n_r=0$. But the $n_i$ are positive integers, being multiplicities. Contradiction.</p>
| <p>If the eigenvalues of $A$ are $\lambda_1$, $\dots$, $\lambda_n$, then the eigenvalues of $A^k$ are $\lambda_1^k$, $\dots$, $\lambda_n^k$. It follows that if all powers of $A$ have zero trace, then $$\lambda_1^k+\dots+\lambda_n^k=0\qquad\text{for all $k\geq1$.}$$ Using <a href="http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums">Newton's identities</a> to express the elementary symmetric functions of the $\lambda_i$'s in terms of their power sums, we see that all the coefficients of the characteristic polynomial of $A$ (except that of greatest degree, of course) are zero. This means that $A$ is nilpotent.</p>
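<p>Here is a short Python/NumPy sketch of that argument: recover the elementary symmetric functions of the eigenvalues from the power sums $\mathrm{tr}(A^k)$ via Newton's identities, and watch them all vanish for a nilpotent matrix:</p>
<pre><code>import numpy as np

def elementary_from_traces(A):
    # Newton's identities: k e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i,
    # where p_i = tr(A^i) are the power sums of the eigenvalues.
    n = A.shape[0]
    p = [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, n + 1)]
    e = [1.0]
    for k in range(1, n + 1):
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i - 1]
                     for i in range(1, k + 1)) / k)
    return e[1:]

# A strictly upper-triangular matrix is nilpotent, so all traces tr(A^k)
# vanish and the characteristic polynomial should reduce to x^n.
A = np.triu(np.random.rand(4, 4), k=1)
print(elementary_from_traces(A))  # ~ [0, 0, 0, 0]
</code></pre>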
|
differentiation | <p>I don't understand something about L'Hôpital's rule. In this case: </p>
<p>$$
\begin{align}
& {{}\phantom{=}}\lim_{x\to0}\frac{e^x-1-x^2}{x^4+x^3+x^2} \\[8pt]
& =\lim_{x\to0}\frac{(e^x-1-x^2)'}{(x^4+x^3+x^2)'} \\[8pt]
& =\lim_{x\to0}\frac{(e^x-2x)'}{(4x^3+3x^2+2x)'} \\[8pt]
& =\lim_{x\to0}\frac{(e^x-2)'}{(12x^2+6x+2)'} \\[8pt]
& = \lim_{x\to0}\frac{(e^x)'}{(24x+6)'} \\[8pt]
& = \lim_{x\to0}\frac{e^x}{24} \\[8pt]
& = \frac{e^0}{24} \\[8pt]
& = \frac{1}{24}
\end{align}
$$</p>
<p>Why do we have to keep on solving after this step:</p>
<p>$$\displaystyle\lim_{x\to0}\dfrac{(e^x-2)'}{(12x^2+6x+2)'}$$</p>
<p>Can't I just plug in $x=0$ and compute the limit at this step giving me:</p>
<p>$$\dfrac{1-2}{0+0+2}=-\dfrac{1}{2}$$</p>
<p>I'm very confused, because I get different probable answers for the limit depending on when I stop differentiating, yet clearly $-\frac1{2}\neq \frac 1{24}$.</p>
| <p>Once your answer is no longer in the form 0/0 or $\frac{\infty}{\infty}$ you must stop applying the rule. You only apply the rule to attempt to get rid of the indeterminate forms. If you apply L'Hopital's rule when it is not applicable (i.e., when your function no longer yields an indeterminate value of 0/0 or $\frac{\infty}{\infty}$) you will most likely get the wrong answer. </p>
<p>You should have stopped differentiating the top and bottom once you got to this:</p>
<p>$\dfrac{e^x-2x}{4x^3+3x^2+2x}$. Taking the limit there gives you $1/0$. The limit is nonexistent. </p>
<p>Also, don't be tempted to say "infinity" when you see a $0$ in the denominator and a non-zero number in the numerator. It may not be the case. For example, the function $\frac{1}{x}$ approaches $+\infty$ from the right and $-\infty$ from the left as $x$ approaches $0$. It's not necessarily infinite; it's best just to leave it as "nonexistent". </p>
| <p>After differentiating just once, you get $$\lim_{x \to 0} \dfrac{e^x-2x}{4x^3+3x^2+2x}$$ which "evaluates" to $\dfrac 10$, i.e., the numerator approaches $1$, and the denominator approaches $0$. Hence, L'Hopital <strong>no longer applies</strong> and we have $$\lim_{x \to 0} \dfrac{e^x-2x}{4x^3+3x^2+2x}\quad\text{does not exist}.$$ </p>
<p><a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule">L'Hopital's rule</a> applies <em>provided and only while</em> a limit evaluates to an "indeterminate" form: e.g., $\dfrac 00, \;\text{or}\;\dfrac {\pm\infty}{\pm\infty}$.</p>
|
linear-algebra | <ol>
<li><p>How does <span class="math-container">$ {\sqrt 2 \over 2} = \cos (45^\circ)$</span>?</p>
</li>
<li><p>Is my graph (the one underneath the original) accurate with how I've depicted the representation of the triangle that the trig function represent? What I mean is, the blue triangle is the pre-rotated block, the green is the post-rotated block, and the purple is the rotated change (<span class="math-container">$45^\circ$</span>) between them.</p>
</li>
<li><p>How do these trig functions in this matrix represent a clockwise rotation? (Like, why does "<span class="math-container">$-\sin \theta $</span> " in the bottom left mean clockwise rotation... and "<span class="math-container">$- \sin \theta $</span> " in the upper right mean counter clockwise? Why not "<span class="math-container">$-\cos \theta $</span> "? <span class="math-container">$$\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta & \cos \theta \end{bmatrix}$$</span></p>
</li>
</ol>
<p><img src="https://i.sstatic.net/mAexq.jpg" alt="enter image description here" /></p>
<p>Any help in understanding the trig representations of a rotation would be extremely helpful! Thanks</p>
| <p>Here is a <strong>"small" addition to the answer by @rschwieb</strong>:</p>
<p>Imagine you have the following rotation matrix:</p>
<p><span class="math-container">$$
\left[
\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
$$</span></p>
<p>At first one might think this is just another identity matrix. Well, yes and no. This matrix can represent a rotation around all three axes in 3D Euclidean space with...<strong>zero degrees</strong>. This means that no rotation has taken place around any of the axes.</p>
<p>As we know <span class="math-container">$\cos(0) = 1$</span> and <span class="math-container">$\sin(0) = 0$</span>.</p>
<p>Each column of a rotation matrix represents one of the axes of the space it is applied in, so if we have <strong>2D</strong> space, the default rotation matrix (that is - no rotation has happened) is</p>
<p><span class="math-container">$$
\left[
\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right]
$$</span></p>
<p>Each column in a rotation matrix represents the state of the respective axis so we have here the following:</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c}
1 & 0\\
0 & 1
\end{array}
\right]
$$</span></p>
<p>First column represents the <strong>x</strong> axis and the second one - the <strong>y</strong> axis. For the 3D case we have:</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
$$</span></p>
<p>Here we are using the canonical base for each space that is we are using the <strong>unit vectors</strong> to represent each of the 2 or 3 axes.</p>
<p>Usually I am a fan of explaining such things in 2D; however, in 3D it is much easier to see what is happening. Whenever we want to rotate around an axis, we are basically saying "The axis we are rotating around is the anchor and will NOT change. The other two axes however will".</p>
<p>If we start with the "no rotation has taken place" state</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
$$</span></p>
<p>and want to rotate around - let's say - the <strong>x</strong> axis we will do</p>
<p><span class="math-container">$$
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & \cos(\theta) & -\sin(\theta)\\
0 & \sin(\theta) & \cos(\theta)
\end{array}
\right] .
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right] =
\left[
\begin{array}{c|c|c}
1 & 0 & 0\\
0 & \cos(\theta) & -\sin(\theta)\\
0 & \sin(\theta) & \cos(\theta)
\end{array}
\right]
$$</span></p>
<p>What this means is:</p>
<ul>
<li>The state of the <strong>x axis</strong> remains unchanged - we've started with a state of no rotation so the x axis will retain its original state - the unit vector <span class="math-container">$\left[\begin{array}{c}1\\0\\0\end{array}\right]$</span></li>
<li>The state of the <strong>y and z axis</strong> has changed - instead of the original <span class="math-container">$\left[\begin{array}{c}0\\1\\0\end{array}\right]$</span> (for y) and <span class="math-container">$\left[\begin{array}{c}0\\0\\1\end{array}\right]$</span> (for z) we now have <span class="math-container">$\left[\begin{array}{c}0\\\cos(\theta)\\\sin(\theta)\end{array}\right]$</span> (for the new orientation of y) and <span class="math-container">$\left[\begin{array}{c}0\\-\sin(\theta)\\\cos(\theta)\end{array}\right]$</span> (for the new orientation of z).</li>
</ul>
<p>We can continue applying rotations around this and that axis and each time this will happen - the axis we are rotating around remains as it was in the previous step and the rest of the axes change accordingly.</p>
<p>Now when it comes to 2D we have
<span class="math-container">$$
R(\theta) = \left[
\begin{array}{c|c}
\cos(\theta) & -\sin(\theta)\\
\sin(\theta) & \cos(\theta)
\end{array}
\right]
$$</span></p>
<p>for counterclockwise rotation and</p>
<p><span class="math-container">$$
R(-\theta) = \left[
\begin{array}{c|c}
\cos(\theta) & \sin(\theta)\\
-\sin(\theta) & \cos(\theta)
\end{array}
\right]
$$</span></p>
<p>for clockwise rotation. Notice that both column vectors are different. This is because in 2D none of the two axes remains idle and both need to change in order to create a rotation. This is why also the 3D version has two of the three axes change simultaneously - because it is just a derivative from its 2D version.</p>
<p>When it comes to rotating clock- or counterclockwise you can always use the <strong>left or right hand rule</strong>:</p>
<ol>
<li>Use your right or left hand to determine the axes:</li>
</ol>
<p><a href="https://i.sstatic.net/kJdJU.gif" rel="noreferrer"><img src="https://i.sstatic.net/kJdJU.gif" alt="enter image description here"></a></p>
<ol start="2">
<li>See which way is clock- and which way is counterclockwise. In the image below the <strong>four</strong> finger tips that go straight into your palm <strong>always</strong> point along the direction of rotation (<a href="https://en.wikipedia.org/wiki/Right-hand_rule" rel="noreferrer">right hand rule</a>):</li>
</ol>
<p><a href="https://i.sstatic.net/n6k7r.png" rel="noreferrer"><img src="https://i.sstatic.net/n6k7r.png" alt="enter image description here"></a></p>
<p>Once you pick one of the two hands, stick with it and use it until the end of the specific task; otherwise the results will probably end up screwed up. <strong>Notice also that this rule can also be applied to 2D</strong>. Just remove (but not cut off) the finger that points along the <strong>z</strong> axis (or whichever dimension of the three you don't need) and do your thing.</p>
<p>A couple of <strong>must knows</strong> things:</p>
<ol>
<li><p>Matrix multiplication is generally not commutative - what this means is that <span class="math-container">$A.B \ne B.A$</span></p></li>
<li><p>Rotation order is determined by the multiplication order (due to 1)) - there are a LOT of rotation conventions (RPY (roll,pitch and yaw), Euler angles etc.) so it is important to know which one you are using. If you are not certain pick one and stick with it (better have one consistent error than 10 different errors that you cannot follow up on) (see <a href="https://en.wikipedia.org/wiki/Rotation_matrix#General_rotations" rel="noreferrer">here</a> for some compact information on this topic)</p></li>
<li><p>Inverse of a rotation matrix rotates in the opposite direction - if for example <span class="math-container">$R_{x,90}$</span> is a rotation around the x axis with +90 degrees, the inverse will do <span class="math-container">$R_{x,-90}$</span>. On top of that, rotation matrices are awesome because <span class="math-container">$A^{-1} = A^T$</span>, that is, the inverse is the same as the transpose (see the sketch below for a quick check of points 1 and 3)</p></li>
</ol>
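<p>If it helps to experiment, here is a small Python/NumPy sketch of points 1 and 3 above (a quick numerical check, with the axis conventions used in this answer):</p>
<pre><code>import numpy as np

def Rx(t):  # rotation about the x axis, as in the matrix above
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Rz(t):  # rotation about the z axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

t = np.pi / 4
print(np.allclose(Rx(t) @ Rx(t).T, np.eye(3)))    # inverse = transpose
print(np.allclose(np.linalg.inv(Rx(t)), Rx(-t)))  # inverse rotates the other way
print(np.allclose(Rx(t) @ Rz(t), Rz(t) @ Rx(t)))  # False: order matters
</code></pre>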
| <p>All of the trigonometry will be clear if you examine what happens to the points $(1,0)$ and $(0,1)$ under these transformations. After they have moved, drop a perpendicular vertically and a line through the origin and consider the triangle formed. They will be (sometimes degenerate) triangles with hypotenuse 1, and then you will see why each of their legs has measure $\sin(\phi)$ or $\cos(\phi)$ etc.</p>
<p>Here's what I mean: after a $\pi/6$ rotation counterclockwise, the point $(1,0)$ has moved to $(\sqrt{3}/2,1/2)$. This point, in addition to $(0,0)$ and the point directly below it on the $x$ axis, $(\sqrt{3}/2,0)$, forms a right triangle with hypotenuse $1$. Look at the lengths of the short sides of the triangle. Try to do the same thing with an angle $\phi$ between $0$ and $\pi/2$, and analyze what the sides of the triangle have to be in terms of $\sin(\phi)$ and $\cos(\phi)$.</p>
<p>Because a rotation in the plane is totally determined by how it moves points on the unit circle, this is all you have to understand.</p>
<p>You don't actually need a representation for <em>both</em> clockwise and counterclockwise. You can use the counterclockwise one all the time, if you agree that a clockwise rotation would be a negative counterclockwise rotation. That is, if you want to perform a clockwise rotation of $\pi/4$ radians, then you should use $\phi=-\pi/4$ in the counterclockwise rotation representation. </p>
<p>The fact that $\sin(-\phi)=-\sin(\phi)$ accounts for the change in the sign of sine between the two representations, and the fact that the $\cos(\phi)$ doesn't change is because $\cos(-\phi)=\cos(\phi)$. You may as well just pick the counterclockwise representation scheme, and perform <em>both</em> clockwise and counterclockwise rotations with it.</p>
<hr>
<p>To provide some extra evidence that it makes sense these are rotation matrices, you can check that the columns of these matrices always have Euclidean length 1 (easy application of the $\sin^2(x)+\cos^2(x)=1$ identity.) Moreover, they are orthogonal to each other. That means they are orthogonal matrices, and since their determinant is $1$, they represent rotations. They satisfy $UU^T=U^TU=I_2$. This demonstrates that $U^T=U^{-1}$, and now you'll notice that the transpose of the counterclockwise representation gives you the clockwise representation! Of course, rotating clockwise and rotating counterclockwise by $\phi$ radians are inverse operations.</p>
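<p>For instance, a short Python/NumPy check of these facts for $\phi=\pi/4$ (just a numerical illustration):</p>
<pre><code>import numpy as np

t = np.pi / 4
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # counterclockwise rotation
print(R @ np.array([1, 0]))               # [0.7071, 0.7071] = (sqrt(2)/2, sqrt(2)/2)
print(np.allclose(R.T @ R, np.eye(2)))    # columns are orthonormal
print(np.allclose(R.T, np.linalg.inv(R))) # transpose = inverse = clockwise rotation
</code></pre>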
|
logic | <p>In a probability course, a game was introduced which a logical approach won't yield a strategy for winning, but a probabilistic one will. My problem is that I don't remember the details (the rules of the game)! I would be thankful if anyone can complete the description of the game. I give the outline of the game, below.</p>
<p>Some person (A) hides a 100 or 200 dollar bill, and asks another one (B) to guess which one is hidden. If B's guess is correct, something happens, and if not, something else (this is what I don't remember). The strange point is, B can think of a strategy that always ends with a positive expected amount, but now A can deduce that B will use this strategy, and finds a strategy to overcome B. Now B knows A's strategy, and will use another strategy, and so on. So, before even playing the game once, there is an infinite chain of strategies which A and B choose successively!</p>
<p>Can you complete the story? I mean, what happens when B's guess correct and incorrect?</p>
<p>Thanks.</p>
| <p>In the <a href="http://blog.plover.com/math/envelope.html" rel="noreferrer">Envelope Paradox</a> player 1 writes any two different numbers $a< b$ on two slips of paper. Then player 2 draws one of the two slips each with probability $\frac 12$, looks at its number $x$, and predicts whether $x$ is the larger or the smaller of the two numbers.</p>
<p>It appears at first that no strategy by player 2 can achieve a success rate greater than $\frac 12$. But there is in fact a strategy that will do this.</p>
<p>The strategy is as follows: Player 2 should first select some probability distribution $D$ which is positive everywhere on the real line. (A normal distribution will suffice.) She should then select a number $y$ at random according to distribution $D$. That is, her selection $y$ should lie in the interval $I$ with probability exactly $$\int_I D(x)\; dx.$$ General methods for doing this are straightforward; <a href="https://en.wikipedia.org/wiki/Box-Muller_transform" rel="noreferrer">methods for doing this</a> when $D$ is a normal distribution are well-studied.</p>
<p>Player 2 now draws a slip at random; let the number on it be $x$.
If $x>y$, player 2 should predict that $x$ is the larger number $b$; if $x<y$ she should predict that $x$ is the smaller number $a$. ($y=x$ occurs with probability 0 and can be disregarded, but if you insist, then player 2 can flip a coin in this case without affecting the expectation of the strategy.)</p>
<p>There are six possible situations, depending on whether the selected slip $x$ is actually the smaller number $a$ or the larger number $b$, and whether the random number $y$ selected by player 2 is less than both $a$ and $b$, greater than both $a$ and $b$, or in between $a$ and $b$.</p>
<p>The table below shows the prediction made by player 2 in each of the six cases; this prediction does not depend on whether $x=a$ or $x=b$, only on the result of her comparison of $x$ and $y$: </p>
<p>$$\begin{array}{r|cc}
& x=a & x=b \\ \hline
y < a & x=b & \color{blue}{x=b} \\
a<y<b & \color{blue}{x=a} & \color{blue}{x=b} \\
b<y & \color{blue}{x=a} & x=a
\end{array}
$$</p>
<p>For example, the upper-left entry says that when player 2 draws the smaller of the two numbers, so that $x=a$, and selects a random number $y<a$, she compares $y$ with $x$, sees that $y$ is smaller than $x$, and so predicts that $x$ is the larger of the two numbers, that $x=b$. In this case she is mistaken. Items in blue text are <em>correct</em> predictions.</p>
<p>In the first and third rows, player 2 achieves a success with probability $\frac 12$. In the middle row, player 2's prediction is always correct. Player 2's total probability of a successful prediction is therefore
$$
\frac12 \Pr(y < a) + \Pr(a < y < b) + \frac12\Pr(b<y) = \\
\frac12(\color{maroon}{\Pr(y<a) + \Pr(a < y < b) + \Pr(b<y)}) + \frac12\Pr(a<y<b) = \\
\frac12\cdot \color{maroon}{1}+ \frac12\Pr(a<y<b)
$$</p>
<p>Since $D$ was chosen to be everywhere positive, player 2's probability $$\Pr(a < y< b) = \int_a^b D(x)\;dx$$ of selecting $y$ between $a$ and $b$ is <em>strictly</em> greater than $0$ and her probability of making a correct prediction is <em>strictly</em> greater than $\frac12$ by half this strictly positive amount.</p>
<p>This analysis points toward player 1's strategy, if he wants to minimize player 2's chance of success. If player 2 uses a distribution $D$ which is identically zero on some interval $I$, and player 1 knows this, then player 1 can reduce player 2's success rate to exactly $\frac12$ by always choosing $a$ and $b$ in this interval. If player 2's distribution is everywhere positive, player 1 cannot do this, even if he knows $D$. But player 2's distribution $D(x)$ must necessarily approach zero as $x$ becomes very large. Since Player 2's edge over $\frac12$ is $\frac12\Pr(a<y<b)$ for $y$ chosen from distribution $D$, player 1 can bound player 2's chance of success to less than $\frac12 + \epsilon$ for any given positive $\epsilon$, by choosing $a$ and $b$ sufficiently large and close together. And even if player 1 doesn't know $D$, he should <em>still</em> choose $a$ and $b$ very large and close together. </p>
<p>I have heard this paradox attributed to Feller, but I'm afraid I don't have a reference.</p>
<p>[ Addendum 2014-06-04: <a href="https://math.stackexchange.com/q/709984/25554">I asked here for a reference</a>, and was answered: the source is <a href="https://en.wikipedia.org/wiki/Thomas_M._Cover" rel="noreferrer">Thomas M. Cover</a> “<a href="http://www-isl.stanford.edu/~cover/papers/paper73.pdf" rel="noreferrer">Pick the largest number</a>”<em>Open Problems in Communication and Computation</em> Springer-Verlag, 1987, p152. ] </p>
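<p>If you want to see the edge numerically, here is a Monte Carlo sketch in Python; the range of player 1's numbers and the normal threshold distribution are arbitrary choices for illustration:</p>
<pre><code>import random

def trial():
    a, b = sorted(random.sample(range(-50, 51), 2))  # player 1's two numbers
    x = random.choice([a, b])      # the slip player 2 draws
    y = random.gauss(0, 30)        # y drawn from an everywhere-positive density D
    predict_larger = x > y         # predict "x = b" iff x > y
    return predict_larger == (x == b)

n = 200_000
print(sum(trial() for _ in range(n)) / n)  # reliably above 0.5
</code></pre>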
| <p>I know this is a late answer, but I'm pretty sure I know what game OP is thinking of (and none of the other answers have it right).</p>
<p>The way it works is person A chooses to hide either $100$ or $200$ dollars in an envelope, and person B has to guess the amount that person A hid. If person B guesses correctly they win the money in the envelope, but if they guess incorrectly they win nothing.</p>
<p>If person A uses the predictable strategy of putting $100$ dollars in the envelope every time, then person B can win $100$ dollars every time by guessing $100$ correctly.</p>
<p>If person A instead chooses to randomly put either $100$ or $200$, then person B can guess $200$ every time--he'll win half the time, so again will win $100$ dollars per game on average.</p>
<p>But a third, better option for A is to randomly put $100$ in the envelope with probability $2/3$, and $200$ dollars in with probability $1/3$. If person B guesses $100$, he has a $2/3$ chance of being right so his expected winnings are $\$66.67$. If person B guesses $200$, he has a $1/3$ chance of being right so his expected winnings are again $\$66.67$. No matter what B does, this strategy guarantees that he will win only $\$66.67$ on average.</p>
<p>Looking at person B's strategy, he can do something similar. If he guesses $100$ with probability $2/3$ and $200$ with probability $1/3$, then no matter what strategy person A uses he wins an average of $\$66.67$. These strategies for A and B are the <a href="https://en.wikipedia.org/wiki/Nash_equilibrium" rel="noreferrer">Nash Equilibrium</a> for this game, the set of strategies where neither person can improve their expected winnings by changing their strategy.</p>
<p>The "infinite chain of strategies" you mention comes in if either A or B start with a strategy that isn't in the Nash equilibrium. Suppose A decides to put $100$ dollars in the envelope every time. The B's best strategy is of course to guess $100$ every time. But given that, A's best strategy is clearly to put $200$ dollars in the envelope every time, at which point B should change to guessing $200$ every time, and so on. In a Nash equilibrium, though, neither player gains any advantage by modifying their strategy so such an infinite progression doesn't occur.</p>
<p>The interesting thing about this game is that although the game is entirely deterministic, the best strategies involve randomness. This actually turns out to be true in most deterministic games where the goal is to predict what your opponent will do. Another common example is <a href="https://en.wikipedia.org/wiki/Rock-paper-scissors" rel="noreferrer">rock-paper-scissors</a>, where the equilibrium strategy is unsurprisingly to choose between rock, paper, and scissors with equal probability.</p>
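<p>A short Python simulation of the equilibrium (the strategy functions are illustrative stand-ins) shows the $\$66.67$ value directly:</p>
<pre><code>import random

def play(hide, guess):
    amount = hide()
    return amount if guess() == amount else 0

hide = lambda: random.choices([100, 200], weights=[2, 1])[0]  # A's 2/3-1/3 mix
n = 200_000
for guess in (lambda: 100, lambda: 200):   # either pure strategy for B
    print(sum(play(hide, guess) for _ in range(n)) / n)  # ~ 66.67 both times
</code></pre>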
|
linear-algebra | <p>Why do we care about eigenvalues of graphs?</p>
<p>Of course, any novel question in mathematics is interesting, but there is an entire discipline of mathematics devoted to studying these eigenvalues, so they must be important.</p>
<p>I always assumed that spectral graph theory extends graph theory by providing tools to prove things we couldn't otherwise, somewhat like how representation theory extends finite group theory. But most results I see in spectral graph theory seem to concern eigenvalues not as means to an end, but as objects of interest in their own right.</p>
<p>I also considered practical value as motivation, e.g. using a given set of eigenvalues to put bounds on essential properties of graphs, such as maximum vertex degree. But I can't imagine a situation in which I would have access to a graph's eigenvalues before I would know much more elementary information like maximum vertex degree.</p>
<p>(<em>EDIT:</em> for example, dtldarek points out that $\lambda_2$ is related to diameter, but then why would we need $\lambda_2$ when we already have diameter? Is this somehow conceptually beneficial?)</p>
<blockquote>
<p>So, what is the meaning of graph spectra intuitively? And for what practical purposes are they used? Why is finding the eigenvalues of a graph's adjacency/Laplacian matrices more than just a novel problem?</p>
</blockquote>
| <p>This question already has a number of nice answers; I want to emphasize the breadth of
this topic.</p>
<p>Graphs can be represented by matrices - adjacency matrices and various flavours of
Laplacian matrices. This almost immediately raises the question as to what are
the connections between the spectra of these matrices and the properties of the
graphs. Let's call the study of these connections "the theory of graph spectra".
(But I am not entirely happy with this definition, see below.) It is tempting to view the map
from graphs to eigenvalues as a kind of Fourier theory, but there are difficulties
with this analogy. First, graphs in general are not determined by their eigenvalues.
Second, which of the many adjacency matrices should we use?</p>
<p>The earliest work on graph spectra was carried out in the context of the Hueckel
molecular orbital theory in Quantum Chemistry. This led, among other things, to work
on the matching polynomial; this gives us eigenvalues without adjacency matrices
(which is why I feel the above definition of the topic is unsatisfactory). A more recent
manifestation of this stream of ideas is the work on the spectra of fullerenes.</p>
<p>The second source of the topic arises in Seidel's work on regular two-graphs,
which started with questions about regular simplices in real projective space
and led to extraordinarily interesting questions about sets of equiangular lines
in real space. The complex analogs of these questions are now of interest to quantum
physicists - see SIC-POVMs. (It is not clear what role graph theory can play here.)
In parallel with Seidel's work was the fundamental paper by Hoffman and
Singleton on Moore graphs of diameter two. In both cases, the key observation was
that certain extremal classes of graphs could be characterized very naturally
by conditions on their spectra. This work gained momentum because a number of sporadic
simple groups were first constructed as automorphism groups of graphs. For graph
theorists it flowered into the theory of distance-regular graphs, starting with
the work of Biggs and his students, and still very active. </p>
<p>One feature of the paper of Hoffman and Singleton is that its conclusion makes no reference
to spectra. So it offers an important graph theoretical result for which the "book proof"
uses eigenvalues. Many of the results on distance-regular graphs preserve this feature.</p>
<p>Hoffman is also famous for his eigenvalue bounds on chromatic numbers, and related
bounds on the maximum size of independent sets and cliques. This is closely related
to Lovász's work on Shannon capacity. Both the Erdős-Ko-Rado
theorem and many of its analogs can now be obtained using extensions of these techniques.</p>
<p>Physicists have proposed algorithms for graph isomorphism based on the spectra of
matrices associated to discrete and continuous walks. The connections between
continuous quantum walks and graph spectra are very strong.</p>
| <p>I can't speak much to what traditional Spectral Graph Theory is about, but my personal research has included the study of what I call "Spectral Realizations" of graphs. A spectral realization is a special geometric realization (vertices are not-necessarily-distinct points, edges are not-necessarily-non-degenerate line segments, in some $\mathbb{R}^n$) derived from the eigenvectors of a graph's adjacency matrix.</p>
<blockquote>
<p>In particular, if the <em>rows</em> of a matrix constitute a basis for some eigenspace of the adjacency matrix of a graph $G$, then the <em>columns</em> of that matrix are coordinate vectors of (a projection of) a spectral realization.</p>
</blockquote>
<p>A spectral realization of a graph has two nice properties:</p>
<ul>
<li>It's <em>harmonious</em>: Every graph automorphism induces a rigid isometry of the realization; you can <em>see</em> the graph's automorphic structure!</li>
<li>It's <em>eigenic</em>: Moving each vertex to the vector-sum of its immediate neighbors is equivalent to <em>scaling</em> the figure; the scale factor is the corresponding eigenvalue.</li>
</ul>
<p>Well, the properties are nice <em>in theory</em>. Usually, a spectral realization is a jumble of collapsed segments, or is embedded in high-dimensional space; such circumstances make a realization difficult to "see". Nevertheless, a spectral realization can be a helpful first pass at visualizing a graph. Moreover, a graph with a high degree of symmetry can admit some visually-interesting low-dimensional spectral realizations; for example, the skeleton of the truncated octahedron has this modestly-elaborate collection:</p>
<p><img src="https://i.sstatic.net/ffIyp.png" alt="Spectral Realizations of the Truncated Octahedron"></p>
<p>For a gallery of hundreds of these things, see the PDF linked at my Bloog post, <a href="http://daylateanddollarshort.com/bloog/spectral-realizations-of-graphs/" rel="noreferrer">"Spectral Realizations of Graphs"</a>.</p>
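<p>As a concrete illustration (a Python/NumPy sketch, with the $6$-cycle as an arbitrary example): the two-dimensional eigenspace of $C_6$ for eigenvalue $2\cos(2\pi/6)=1$ realizes the graph as a regular hexagon, and the "eigenic" property is visible as $AX=\lambda X$:</p>
<pre><code>import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                   # adjacency matrix of the cycle C_6
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

w, V = np.linalg.eigh(A)
lam = 2 * np.cos(2 * np.pi / n)      # = 1, an eigenvalue of multiplicity 2
X = V[:, np.isclose(w, lam)]         # rows = vertex coordinates in the plane
print(np.allclose(A @ X, lam * X))   # eigenic: neighbor sums scale the figure
print(np.round(X, 3))                # a regular hexagon, up to rotation and scale
</code></pre>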
<p>Since many mathematical objects decompose into eigen-objects, it probably comes as no surprise that <em>any geometric realization of a graph is the sum of spectral realizations of that graph</em>. (Simply decomposing the realization's coordinate matrix into eigen-matrices gets most of the way to that result, although the eigen-matrices themselves usually represent "affine images" of properly-spectral realizations. The fact that affine images decompose into a sum of <em>similar</em> images takes <a href="http://daylateanddollarshort.com/bloog/extending-a-theorem-of-barlotti/" rel="noreferrer">an extension of a theorem of Barlotti</a>.) There's likely something interesting to be said about how each spectral component influences the properties of the combined figure.</p>
<p>Anyway ... That's why <em>I</em> care about the eigenvalues of graphs.</p>
|
linear-algebra | <p>We often learn in a standard linear algebra course that a determinant is a number associated with a square matrix. We can define the determinant also by saying that it is the sum of all the possible configurations picking an element from a matrix from different rows and different columns multiplied by (-1) or (1) according to the number inversions. </p>
<p>But how is this notion of a 'determinant' derived? What is a determinant, actually? I searched up the history of the determinant and it looks like it predates matrices. How did the modern definition of a determinant come about? Why do we need to multiply some terms of the determinant sum by (-1) based on the number of inversions? I just can't understand the motivation that created determinants. We can define determinants, and see their properties, but I want to understand how they were defined and why they were defined to get a better idea of their important and application.</p>
| <p>I normally have two ways of viewing determinants without appealing to higher-level math like multilinear forms.</p>
<p>The first is geometric, and I do think that most vector calculus classes nowadays should teach this interpretation. That is, given vectors $v_1, \ldots, v_n \in \mathbb{R}^n$ dictating the sides of an $n$-dimensional parallelepiped, the volume of this parallelepiped is given by $|\det(A)|$ (the sign of $\det(A)$ records orientation), where $A = [v_1 \ldots v_n]$ is the matrix whose columns are given by those vectors. We can then view the determinant of a square matrix as measuring the volume-scaling property of the matrix as a linear map on $\mathbb{R}^n$. From here, it would be clear why $\det(A) = 0$ is equivalent to $A$ not being invertible - if $A$ takes a set with positive volume and sends it to a set with zero volume, then $A$ has some direction along which it "flattens" points, which would precisely be the null space of $A$. Unfortunately, I'm under the impression that this interpretation is at least semi-modern, but I think this is one of the cases where the modern viewpoint might be better to teach new students than the old viewpoint.</p>
<p>The old viewpoint is that the determinant is simply the result of trying to solve the linear system $Ax = b$ when $A$ is square. This is most likely how the determinant was first discovered. To derive the determinant this way, write down the generic matrix and then proceed by Gaussian elimination. This means you have to choose nonzero leading entries in each row (the pivots) and use them to eliminate subsequent entries below. Each time you eliminate the rows, you have to multiply by a common denominator, so after you do this $n$ times, you'll end up with the sum of all the permutations of entries from different rows and columns merely by virtue of having multiplied out to get common denominators. The $(-1)^k$ sign flip comes from the fact that at each stage in Gaussian elimination, you're subtracting. So on the first step you're subtracting, but on the second step you're subtracting a subtraction, and so forth. At the very end, by Gaussian elimination, you'll obtain an echelon form (upper triangular), and one knows that if any of the diagonal entries are zero, then the system is not uniquely solvable; the last diagonal entry will precisely be the determinant times the product of the values of previously used pivots (up to a sign, perhaps). Since the pivots chosen are always nonzero, then it will not affect whether or not the last entry is zero, and so you can divide them out.</p>
<p>EDIT: It isn't as simple as I thought, though it will work out if you keep track of what nonzero values you multiply your rows by in Gaussian elimination. My apologies if I mislead anyone.</p>
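<p>To connect with the definition quoted in the question, here is a small Python sketch (illustrative, not historical) that computes the determinant as the signed sum over configurations (one entry from each row and column, signed by the number of inversions) and compares it with a library routine:</p>
<pre><code>import numpy as np
from itertools import permutations

def det_leibniz(A):
    # Sum over all ways of picking one entry from each row and column,
    # signed by (-1)^(number of inversions of the column choice).
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        term = (-1.0) ** inversions
        for i in range(n):
            term *= A[i, perm[i]]
        total += term
    return total

A = np.random.rand(4, 4)
print(det_leibniz(A), np.linalg.det(A))  # agree up to rounding error
</code></pre>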
| <p>The determinant was originally "discovered" by Cramer when solving systems of linear equations necessary to determine the coefficients of a polynomial curve passing through a given set of points. Cramer's rule, for giving the general solution of a system of linear equations, was a direct result of this.</p>
<p>This appears in Gabriel Cramer, "Introduction a l'analyse des lignes courbes algebriques" (Introduction to the analysis of algebraic curves), Geneve, Chez les Freres Cramer & Cl. Philibert, (1750). It is cited as a footnote on p. 60, which reads (from French):</p>
<p>"I think I have found [for solving these equations] a very simple and general rule, when the number of equations and unknowns do not pass the first degree [e.g. are linear]. One finds this in Appendix No. 1." Appendix No. 1 appears on p. 657 of the same text. The text is available online, for those who can read French.</p>
<p>The history of the determinant appears in Thomas Muir, "The Theory of Determinants in the Historical Order of Development," Dover, NY, (1923). This is also available online.</p>
|
geometry | <p>I was puzzled by a question my colleague asked me, and now seeking your help.</p>
<p>Suppose you and your friend* end up on a big sphere. There are no visual cues on where on the sphere you both are, and the sphere is way bigger than you two. There are no means of communication. You can determine your relative position and direction by navigating the stars**. You can move anywhere, and your friend too. </p>
<p>Upon inspecting the sphere, you see it is rock-solid, so you cannot create markings. To protect the environment, you are not allowed to leave other stuff, like a blood trace or breadcrumbs.</p>
<p>You have been put on the sphere without being able to communicate a plan. </p>
<p><strong>How would you be able to find each other (come within a certain distance $\epsilon$?) What would be the optimal strategy to move?</strong></p>
<p>*Since you are here, you must be a rational person. For this puzzle we assume your friend is rational too..<sub>Which makes it odd that you end up on that sphere anyway</sub></p>
<p>**While you can determine your position relatively, you are on a sphere in a galaxy so far away that you cannot determine absolute 'north', 'south' etc. by the stars.</p>
| <p>Move at random.</p>
<p>Any deterministic strategy you choose has a chance that your partner will choose the exactly opposite strategy, so you end up moving along more or less antipodal paths and never meet. So deterministic strategies have to be avoided.</p>
<p>You might make some adjustments to your random strategy. For example, you could prefer to walk longer distances in a straight line as opposed to choosing a completely new direction after every centimeter of movement. Depending on what your partner does, some of that might improve things. But to accurately judge whether it does, you'd need some probabilistic model of what plan your partner is likely to choose, and getting that right would pretty much amount to a pre-agreed plan. So you can't even know the probability distribution of plans for your partner, hence you can't quantitatively compare strategies against one another.</p>
| <p>As per "You have been put on the sphere without being able to communicate a plan." I'm going to assume you cannot even assume what plan your partner may come up with and there is no prior collaboration.</p>
<p>Given the potential symmetric nature of the problem, there must be a random element to break that symmetry, should you both accidentally choose mirror strategies. The problem is there's no guarantee that your friend will select an optimal strategy, however, if your friend is smart and/or exhaustive, they will realise two things:</p>
<ol>
<li>If you are both moving, the chances of running into each other can be nil given non-overlapping patterns.</li>
<li>If one of you stays still, the other can eventually find you with an exhaustive search.</li>
</ol>
<p>So the first thing to do is to calculate the size of the sphere (by picking a direction and walking until you arrive back at the start point, or by some other, more efficient technique). At that point, you can work out an exhaustive search pattern and the duration to perform one (a spiral pattern is close to optimal but difficult for a human to perform). That duration becomes your decision-making period.</p>
<p>Once per period, you flip a coin. Heads, you do an exhaustive search. Tails, you stay put. In each period, you have a 50/50 chance of doing the opposite of your partner, and thus of discovering each other in the course of the exhaustive search.</p>
<p>There are two extreme cases that are covered by this approach. If your partner decides to never move, obviously they will be found on your first exhaustive search. If they decide to permanently move, either randomly, or according to some pattern, there's always the chance of happening upon them accidentally during your search sweeps, which you have to rely on if they're not being exhaustive and their movement does not cover your 'stay put' spot. Otherwise, when you stay put you guarantee they'll eventually find you.</p>
|
logic | <p>In Cantor's diagonal argument, it takes countably infinitely many steps to construct a number that is different from every number in a countably infinite sequence, so it seems the proof itself takes infinitely many steps. Is that a valid proof?</p>
| <p>Take the Cantorian diagonal argument that, given a countable sequence of infinite binary strings, there must be a string not in the sequence. To get the argument to fly you don't need to actually <em>construct</em> the anti-diagonal string in the sense of print out all the digits (that would indeed be an infinite task)! You just need to be able to <em>specify</em> the string -- as is familiar, it is the one whose $n$-th digit is 1 if the $n$-digit of the $n$-th string in the countable sequence is 0, and is 0 otherwise. And just from that (finite!) <em>specification</em>, it follows that this specified infinite string is distinct from all the strings on the original list. You don't have to actually, per impossibile, <em>construct</em> (in the sense of write down all of) the string to see that!</p>
<p>It's the same, of course, with the Cantorian argument for e.g. the uncountability of the reals between 0 and 1.</p>
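<p>The finiteness of the specification is easy to see in code. In this small Python sketch (an illustration only), an infinite binary string is modeled as a function from positions to bits, and the anti-diagonal is specified in one line without constructing anything infinite:</p>
<pre><code># An enumeration of binary strings is modeled as a function:
# strings(n) is the n-th string, itself a function from positions to bits.
def anti_diagonal(strings):
    # A finite specification of an infinite object: flip the n-th bit
    # of the n-th string. No infinite construction is performed.
    return lambda n: 1 - strings(n)(n)

# Example enumeration: the n-th string has a single 1, in position n.
strings = lambda n: (lambda k: 1 if k == n else 0)
d = anti_diagonal(strings)
print([d(n) for n in range(8)])           # all 0s; d differs from strings(n) at position n
print([strings(3)(k) for k in range(8)])  # [0, 0, 0, 1, 0, 0, 0, 0]
</code></pre>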
| <p>In the usual logic employed in mathematics and in most related systems, a proof has to have only finitely many steps. This means that on the level of the formal, uninterpreted language, a proof has to have only finitely many steps.</p>
<p>But one can use a language in which all proofs are finite to make arguments about infinitary theories, such as set theory. And the usual language of logic is expressive enough to state infinitary facts finitely. The intuitive conjunction of the infinitely many statements "$1<2$", "$2<3$", "$3<4$", and so on is the same thing as "$\forall n: n<n+1$", which consists of a total of $8$ symbols and is provable in finitely many steps from the axioms of <a href="http://en.wikipedia.org/wiki/Peano_arithmetic">Peano arithmetic</a>.</p>
<p>It is easy to create a finite theory that has only infinite models. For example, it is very easy to axiomatize the theory of strict partial orders without maximal elements with three axioms: </p>
<ol>
<li>$\forall x:\neg(x>x)$ </li>
<li>$\forall x,y,z:(x<y)\wedge (y<z)\implies (x<z)$</li>
<li>$\forall x\exists y:x<y$. </li>
</ol>
<p>By a standard argument, every finite strict partial order has a maximal element, so this theory describes only infinite strict partial orders.</p>
<p>In set theory, we only use finite proofs and have a countable language. This gives rise to the so-called <a href="http://en.wikipedia.org/wiki/Skolem%27s_paradox">Skolem paradox</a> that a theory dealing with uncountable sets can have a model of countable cardinality. But inside the theory, we can deal with many sizes of infinity and make operations that would correspond to infinitely many operations in a weaker language. And it is the formal language in which set theory is formulated in which all proofs are of finite length.</p>
|
probability | <p>I have trouble understanding the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science.</p>
<p>From the purely mathematical point of view, I think it would be uncontroversial to say that Bayes' theorem does not amount to a particularly sophisticated result. Indeed, the relation
<span class="math-container">$$P(A|B)=\frac{P(A\cap B)}{P(B)}=\frac{P(B\cap A)P(A)}{P(B)P(A)}=\frac{P(B|A)P(A)}{P(B)}$$</span>
is a one line proof that follows from expanding both sides directly from the definition of conditional probability. Thus, I expect that what people find interesting about Bayes' theorem has to do with its practical applications or implications. However, even in those cases I find the typical examples being used as a justification of this to be a bit artificial.</p>
<hr />
<p>To illustrate this, the classical application of Bayes' theorem usually goes something like this: Suppose that</p>
<ol>
<li>1% of women have breast cancer;</li>
<li>80% of mammograms are positive when breast cancer is present; and</li>
<li>10% of mammograms are positive when breast cancer is not present.</li>
</ol>
<p>If a woman has a positive mammogram, then what is the probability that she has breast cancer?</p>
<p>I understand that Bayes' theorem allows to compute the desired probability with the given information, and that this probability is counterintuitively low. However, I can't help but feel that the premise of this question is wholly artificial. The only reason why we need to use Bayes' theorem here is that the full information with which the other probabilities (i.e., 1% have cancer, 80% true positive, etc.) have been computed is not provided to us. If we have access to the sample data with which these probabilities were computed, then we can directly find
<span class="math-container">$$P(\text{cancer}|\text{positive test})=\frac{\text{number of women with cancer and positive test}}{\text{number of women with positive test}}.$$</span>
In mathematical terms, if you know how to compute <span class="math-container">$P(B|A)$</span>, <span class="math-container">$P(A)$</span>, and <span class="math-container">$P(B)$</span>, then this means that you know how to compute <span class="math-container">$P(A\cap B)$</span> and <span class="math-container">$P(B)$</span>, in which case you already have your answer.</p>
<hr />
<p>From the above arguments, it seems to me that Bayes' theorem is essentially only useful for the following reasons:</p>
<ol>
<li>In an adversarial context, i.e., someone who has access to the data only tells you about <span class="math-container">$P(B|A)$</span> when <span class="math-container">$P(A|B)$</span> is actually the quantity that is relevant to your interests, hoping that you will get confused and will not notice.</li>
<li>An opportunity to dispel the confusion between <span class="math-container">$P(A|B)$</span> and <span class="math-container">$P(B|A)$</span> with concrete examples, and to explain that these are very different when the ratio between <span class="math-container">$P(A)$</span> and <span class="math-container">$P(B)$</span> deviates significantly from one.</li>
</ol>
<p>Am I missing something big about the usefulness of Bayes' theorem? In light of point 2., especially, I don't understand why Bayes' theorem stands out so much compared to, say, the Borel-Kolmogorov paradox, or the "paradox" that <span class="math-container">$P[X=x]=0$</span> when <span class="math-container">$X$</span> is a continuous random variable, etc.</p>
| <p>You are mistaken in thinking that what you perceive as "the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science" is really "the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science." But it's probably not your fault: This usually doesn't get explained very well.</p>
<p>What is the probability of a Caucasian American having brown eyes? <b>What does that question mean?</b> By one interpretation, commonly called the frequentist interpretation of probability, it asks merely for the proportion of persons having brown eyes among Caucasian Americans.</p>
<p>What is the probability that there was life on Mars two billion years ago? <b>What does that question mean?</b> It has no answer according to the frequentist interpretation. "The probability of life on Mars two billion years ago is <span class="math-container">$0.54$</span>" is taken to be meaningless because one cannot say it happened in <span class="math-container">$54\%$</span> of all instances. But the Bayesian, as opposed to frequentist, interpretation of probability works with this sort of thing.</p>
<p>The Bayesian interpretation applied to statistical inference is immune to various pathologies afflicting that field.</p>
<p>Possibly you have seen that some people attach massive importance to the Bayesian interpretation of probability and mistakenly thought it was merely massive importance attached to Bayes's theorem. People who do consider Bayesianism important seldom explain this very clearly, primarily because that sort of exposition is not what they care about.</p>
| <p>While I agree with Michael Hardy's answer, there is a sense in which Bayes' theorem is more important than any random identity in basic probability. Write Bayes' Theorem as</p>
<p><span class="math-container">$$\text{P(Hypothesis|Data)}=\frac{\text{P(Data|Hypothesis)P(Hypothesis)}}{\text{P(Data)}}$$</span></p>
<p>The left hand side is what we usually want to know: given what we've observed, what should our beliefs about the world be? But the main thing that probability theory gives us is in the numerator on the right side: the frequency with which any <em>given</em> hypothesis will generate particular kinds of data. Probabilistic models in some sense answer the wrong question, and Bayes' theorem tells us how to combine this with our prior knowledge to generate the answer to the right question.</p>
<p>Frequentist methods that try not to use the prior have to reason about the quantity on the left by indirect means or else claim the left side is meaningless in many applications. They work, but frequently confuse even professional scientists. E.g. the common misconceptions about <span class="math-container">$p$</span>-values come from people assuming that they are a left-side quantity when they are a right-side quantity.</p>
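<p>For what it's worth, the mammogram example from the question becomes a few lines of Python once framed this way, with $P(\text{Data}|\text{Hypothesis})$ in the numerator and $P(\text{Data})$ in the denominator:</p>
<pre><code># Numbers from the question: P(cancer) = 0.01,
# P(positive | cancer) = 0.80, P(positive | no cancer) = 0.10.
p_c, p_pos_c, p_pos_h = 0.01, 0.80, 0.10

p_pos = p_pos_c * p_c + p_pos_h * (1 - p_c)  # P(Data), by total probability
print(p_pos_c * p_c / p_pos)                 # P(cancer | positive) ~ 0.0748
</code></pre>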
|
linear-algebra | <p>Let $A$ be an $n\times n$ complex nilpotent matrix. Then we know that because all eigenvalues of $A$ must be $0$, it follows that $\text{tr}(A^n)=0$ for all positive integers $n$.</p>
<p>What I would like to show is the converse, that is, </p>
<blockquote>
<p>if $\text{tr}(A^n)=0$ for all positive integers $n$, then $A$ is nilpotent.</p>
</blockquote>
<p>I tried to show that $0$ must be an eigenvalue of $A$, then try to show that all other eigenvalues must be equal to 0. However, I am stuck at the point where I need to show that $\det(A)=0$.</p>
<p>May I know of the approach to show that $A$ is nilpotent?</p>
| <p>Assume that for all $k=1,\ldots,n$, $\mathrm{tr}(A^k) = 0$, where $A$ is an $n\times n$ matrix. <br />
We consider the eigenvalues in $\mathbb C$.</p>
<p>Suppose $A$ is not nilpotent, so $A$ has some non-zero eigenvalues $\lambda_1,\ldots,\lambda_r$. <br />
Let $n_i$ be the multiplicity of $\lambda_i$; then $$\left\{\begin{array}{ccc}n_1\lambda_1+\cdots+n_r\lambda_r&=&0 \\ \vdots & & \vdots \\ n_1\lambda_1^r+\cdots+n_r\lambda_r^r&=&0\end{array}\right.$$
So we have $$\left(\begin{array}{cccc}\lambda_1&\lambda_2&\cdots&\lambda_r\\\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_r^2 \\ \vdots & \vdots & \vdots & \vdots \\ \lambda_1^r & \lambda_2^r & \cdots & \lambda_r^r\end{array}\right)\left(\begin{array}{c}n_1 \\ n_2 \\ \vdots \\ n_r \end{array}\right)=\left(\begin{array}{c}0 \\ 0\\ \vdots \\ 0\end{array}\right)$$
But $$\mathrm{det}\left(\begin{array}{cccc}\lambda_1&\lambda_2&\cdots&\lambda_r\\\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_r^2 \\ \vdots & \vdots & \vdots & \vdots \\ \lambda_1^r & \lambda_2^r & \cdots & \lambda_r^r\end{array}\right)=\lambda_1\cdots\lambda_r\,\mathrm{det}\left(\begin{array}{cccc} 1 & 1 & \cdots & 1 \\ \lambda_1&\lambda_2&\cdots&\lambda_r\\\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_r^2 \\ \vdots & \vdots & \vdots & \vdots \\ \lambda_1^{r-1} & \lambda_2^{r-1} & \cdots & \lambda_r^{r-1}\end{array}\right)\neq 0$$
(Vandermonde)</p>
<p>So the system has a unique solution which is $n_1=\ldots=n_r=0$. Contradiction.</p>
| <p>If the eigenvalues of $A$ are $\lambda_1$, $\dots$, $\lambda_n$, then the eigenvalues of $A^k$ are $\lambda_1^k$, $\dots$, $\lambda_n^k$. It follows that if all powers of $A$ have zero trace, then $$\lambda_1^k+\dots+\lambda_n^k=0\qquad\text{for all $k\geq1$.}$$ Using <a href="http://en.wikipedia.org/wiki/Newton%27s_identities#Expressing_elementary_symmetric_polynomials_in_terms_of_power_sums">Newton's identities</a> to express the elementary symmetric functions of the $\lambda_i$'s in terms of their power sums, we see that all the coefficients of the characteristic polynomial of $A$ (except that of greatest degree, of course) are zero. This means that $A$ is nilpotent.</p>
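<p>A quick sanity check of this equivalence on an example (a sketch using sympy; the matrix is an arbitrary nilpotent specimen, nothing special):</p>
<pre><code>import sympy as sp

A = sp.Matrix([[1, 1, 0],
               [-1, -1, 1],
               [0, 0, 0]])
print([(A**k).trace() for k in (1, 2, 3)])  # [0, 0, 0]: all power traces vanish
print(A**3)                                 # the zero matrix, so A is nilpotent
print(A.charpoly().as_expr())               # lambda**3: lower coefficients all zero
</code></pre>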
|
probability | <p>What is the most efficient way to simulate a 7-sided die with a 6-sided die? I've put some thought into it but I'm not sure I get somewhere specifically.</p>
<p>To create a 7-sided die we can use a rejection technique. 3-bits give uniform 1-8 and we need uniform 1-7 which means that we have to reject 1/8 i.e. 12.5% rejection probability.</p>
<p>To create $n$ rolls of a $7$-sided die we need $\lceil \log_2( 7^n ) \rceil$ bits. This means that our rejection probability is $p_r(n)=1-\frac{7^n}{2^{\lceil \log_2( 7^n ) \rceil}}$.</p>
<p>It turns out that the rejection probability varies wildly but for $n=26$ we get $p_r(26) = 1 - \frac{7^{26}}{2^{\lceil log_2(7^{26}) \rceil}} = 1-\frac{7^{26}}{2^{73}} \approx 0.6\%$ rejection probability which is quite good. This means that we can generate with good odds 26 7-die rolls out of 73 bits.</p>
<p>Similarly, if we throw a fair die $n$ times we get a number from $0$ to $6^n-1$, which gives us $\lfloor \log_2(6^{n}) \rfloor$ bits by rejecting everything at or above $2^{\lfloor \log_2(6^{n}) \rfloor}$. Consequently the rejection probability is $p_r(n)=1-\frac{2^{\lfloor \log_2( 6^{n} ) \rfloor}}{6^n}$.</p>
<p>Again this varies wildly but for $n = 53$, we get $p_r(53) = 1-\frac{2^{137}}{6^{53}} \approx 0.2\%$ which is excellent. As a result, we can roll the 6-face die 53 times and get ~137 bits.</p>
<p>This means that we get about $\frac{137}{53} \cdot \frac{26}{73} = 0.9207$ 7-face die rolls per 6-face die roll, which is close to the optimum $\frac{\log 6}{\log 7} = 0.9208$.</p>
<p>Is there a way to get the optimum? Is there an way to find those $n$ numbers as above that minimize errors? Is there relevant theory I could have a look at?</p>
<p>P.S. Relevant python expressions:</p>
<pre><code>min([ (i, round(1000*(1-( 7**i ) / (2**ceil(log(7**i,2)))) )/10) for i in xrange(1,100)], key=lambda x: x[1])
min([ (i, round(1000*(1- ((2**floor(log(6**i,2))) / ( 6**i )) ) )/10) for i in xrange(1,100)], key=lambda x: x[1])
</code></pre>
<p>P.S.2 Thanks to @Erick Wong for helping me get the question right with his great comments.</p>
<p>Related question: <a href="https://math.stackexchange.com/questions/685395/is-there-a-way-to-simulate-any-n-sided-die-using-a-fixed-set-of-die-types-for">Is there a way to simulate any $n$-sided die using a fixed set of die types for all $n$?</a></p>
| <p>Roll the D6 twice. Order the pairs $(1,1), (1,2), \ldots, (6,5)$ and associate them with the set $\{1,2,\ldots,35\}$. If one of these pairs is rolled, take the associated single value and reduce it mod $7$. So far you have a uniform distribution on $1$ through $7$. </p>
<p>If the pair $(6,6)$ is rolled, you are required to start over. This procedure will probabilistically end at some point. Since each iteration has a $\frac{35}{36}$ chance of not requiring a repeat, the expected number of iterations is only $\frac{36}{35}$. More specifically, the number of iterations of this 2-dice-toss process has a geometric distribution with $p=\frac{35}{36}$. So $P(\mbox{requires $n$ iterations})=\frac{35}{36}\left(\frac{1}{36}\right)^{n-1}$</p>
<hr>
<p>Counting by die rolls instead of iterations, with this method, $$\{P(\mbox{exactly $n$ die rolls are needed})\}_{n=1,2,3,\ldots}=\left\{0,\frac{35}{36},0,\frac{35}{36}\left(\frac{1}{36}\right),0,\frac{35}{36}\left(\frac{1}{36}\right)^{2},0,\frac{35}{36}\left(\frac{1}{36}\right)^{3},\ldots\right\}$$</p>
<p>The method that uses base-6 representations of real numbers (@Erick Wong's answer) has
$$\{P(\mbox{exactly $n$ die rolls are needed})\}_{n=1,2,3,\ldots}=\left\{0,\frac{5}{6},\frac{5}{6}\left(\frac{1}{6}\right),\frac{5}{6}\left(\frac{1}{6}\right)^{2},\frac{5}{6}\left(\frac{1}{6}\right)^{3},\frac{5}{6}\left(\frac{1}{6}\right)^{4},\ldots\right\}$$</p>
<p>Put another way, let $Q(n)=P(\mbox{at most $n$ die rolls are needed using this method})$ and $R(n)=P(\mbox{at most $n$ die rolls are needed using the base-6 method})$. Then we sum the above term by term, and for ease of comparison, I'll use common denominators:</p>
<p>$$\begin{align}
\{Q(n)\}_{n=1,2,\ldots} &= \left\{0,\frac{45360}{36^3},\frac{45360}{36^3},\frac{46620}{36^3},\frac{46620}{36^3},\frac{46655}{36^3},\frac{46655}{36^3},\ldots\right\}\\
\{R(n)\}_{n=1,2,\ldots} &= \left\{0,\frac{38880}{36^3},\frac{45360}{36^3},\frac{46440}{36^3},\frac{46620}{36^3},\frac{46650}{36^3},\frac{46655}{36^3},\ldots\right\}\\
\end{align}$$</p>
<p>So on every other $n$, the base-6 method ties with this method, and otherwise this method is "winning".</p>
<hr>
<p>EDIT Ah, I first understood the question to be about simulating <em>one</em> D7 roll with $n$ D6 rolls, and minimizing $n$. Now I understand that the problem is about simulating $m$ D7 rolls with $n$ D6 rolls, and minimizing $n$.</p>
<p>So alter this by keeping track of "wasted" random information. Here is the recursion in words (a Python sketch of it follows the list below). I am sure that it could also be coded quite compactly in perl:</p>
<p>Going into the first roll, we will have $6$ uniformly distributed outcomes (and so not enough to choose from $\{1,...,7\}$.) This is the initial condition for the recursion.</p>
<p>Generally, going into the $k$th roll, we have some number $t$ of uniformly distributed outcomes to consider. Partition $\{1,\ldots,t\}$ into $$\{1,\ldots,7; 8,\ldots,14;\ldots,7a;7a+1,\ldots,t\}$$ where $a=t\operatorname{div}7$. Agree that if the outcome from the $k$th roll puts us in $\{1,\ldots,7a\}$, we will consider the result mod $7$ to be a D7 roll result. If the $k$th roll puts us in the last portion of the partition, we must re-roll.</p>
<p>Now, either</p>
<ul>
<li><p>we succeeded with finding a D7 roll. Which portion of the partition were we in? We could have uniformly been in the 1st, 2nd, ..., or $a$th. The next roll therefore will give us $6a$ options to consider, and the recursion repeats.</p></li>
<li><p>we did not find another D7 roll. So our value was among $7a+1, \ldots, t$, which is at most $6$ options. However many options it is, call it $b$ for now ($b=t\operatorname{mod}7$). Going into the next roll, we will have $6b$ options to consider, and the recursion repeats.</p></li>
</ul>
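<p>Here is a minimal Python sketch of that recursion (in lieu of the perl), assuming <code>rand6()</code> is a fair D6; the carried state is an integer <code>x</code> uniform on $\{0,\ldots,t-1\}$:</p>
<pre><code>import random

def rand6():
    return random.randint(1, 6)

def d7_rolls(m):
    """Generate m fair D7 rolls, recycling leftover randomness."""
    out = []
    x, t = 0, 1                    # x is uniform on {0, ..., t-1}; initially trivial
    while len(out) < m:
        x = 6 * x + rand6() - 1    # absorb one D6 roll: uniform on {0, ..., 6t-1}
        t *= 6
        a, b = divmod(t, 7)        # partition {0, ..., t-1} into 7a values + b leftovers
        if x < 7 * a:              # success: x mod 7 is a fair D7 result
            out.append(x % 7 + 1)
            x, t = x // 7, a       # leftover is uniform on {0, ..., a-1}
        else:                      # failure: keep the b leftover values
            x, t = x - 7 * a, b
    return out
</code></pre>
<p>Working out the stationary distribution of the carried state suggests this needs about $1.37$ D6 rolls per D7 output in the long run, better than the $2\cdot\frac{36}{35}\approx 2.06$ of restarting from scratch, though still short of the entropy bound $\frac{\log 7}{\log 6}\approx 1.086$ because the small carried state rounds away some randomness.</p>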
| <p>In the long run, just skip the binary conversion altogether and go with some form of arithmetic coding: use the $6$-dice rolls to generate a uniform base-$6$ real number in $[0,1]$ and then extract base-$7$ digits from that as they resolve. For instance:</p>
<pre><code>int rand7()
{
    static double a=0, width=7; // persistent state: current interval [a, a+width) in [0,7)
    // refine the interval with base-6 digits until its integer part is resolved
    while ((int)(a+width) != (int)a)
    {
        width /= 6;
        a += (rand6()-1)*width;
    }
    int n = (int)a;     // the resolved base-7 digit
    a -= n;             // keep the fractional part as leftover randomness
    a *= 7; width *= 7; // rescale back to [0,7) for the next call
    return (n+1);
}
</code></pre>
<p>A test run of $10000$ outputs usually requires exactly $10861$ dice rolls, and occasionally needs one or two more. Note that the uniformity of this implementation is not exact (even if <code>rand6</code> is perfect) due to floating-point truncation, but should be pretty good overall.</p>
|
combinatorics | <p>This picture was in my friend's math book: </p>
<p><a href="https://i.sstatic.net/7ngrZ.jpg"><img src="https://i.sstatic.net/7ngrZ.jpg" alt=""></a> </p>
<p>Below the picture it says: </p>
<blockquote>
<p>There are $3072$ ways to draw this flower, starting from the center of
the petals, without lifting the pen.</p>
</blockquote>
<p>I know it's based on combinatorics, but I don't know how to show that there are actually $3072$ ways to do this. I'd be glad if someone showed how to show that there are exactly $3072$ ways to draw this flower, starting from the center of the petals, without lifting the pen (assuming that $3072$ is the correct amount).</p>
| <p>First you have to draw the petals. There are $4!=24$ ways to choose the order of the petals and $2^4=16$ ways to choose the direction you go around each petal. Then you go down the stem to the leaves. There are $2! \cdot 2^2=8$ ways to draw the leaves. Finally you draw the lower stem. $24 \cdot 16 \cdot 8=3072$</p>
| <p>At the beginning you could go 8 different ways, then 6 different ways, then 4, then 2; at the bottom of the picture you could go 4 different ways at first and 2 at the end.
$8\cdot6\cdot4\cdot2\cdot4\cdot2 = 3072$
|
combinatorics | <p>Any hexagon in Pascal's triangle, whose vertices are 6 binomial coefficients surrounding any entry, has the property that:</p>
<ul>
<li><p>the product of non-adjacent vertices is constant.</p></li>
<li><p>the greatest common divisor of non-adjacent vertices is constant.</p></li>
</ul>
<p>Below is one such hexagon. As an example, here we have that <span class="math-container">$4 \cdot 10 \cdot 15 = 6 \cdot 20 \cdot 5$</span>, as well as <span class="math-container">$\gcd(4, 10, 15) = \gcd(6,20,5)$</span>.</p>
<p><span class="math-container">$$ 1 \\
1 \qquad 1\\
1\qquad 2\qquad 1\\
1\qquad3\qquad3\qquad1\\
1\qquad\mathbf{4}\qquad\mathbf{6}\qquad4\qquad1\\
1\qquad\mathbf{5}\qquad10\qquad\mathbf{10}\qquad5\qquad1
\\
1\qquad6\qquad\mathbf{15}\qquad\mathbf{20}\qquad15\qquad6\qquad1$$</span></p>
<p>There is a quick proof <a href="http://www.fq.math.ca/Scanned/12-1/gupta.pdf" rel="noreferrer">here (pdf).</a> The original proof should be in <em>V. E. Hoggatt, Jr., & W. Hansell. "The Hidden Hexagon Squares." The Fibonacci Quarterly 9(1971):120, 133.</em> but I cannot access it.</p>
<p>I am, however, intereseted in a purely <em>combinatorial</em> proof. I do not know how to approach this at all: I cannot see what the non-adjacent vertices represent and/or I do not know how to remodel their meaning. Can anyone help?</p>
<p><strong>EDIT:</strong> To specify my question more closely, what I am looking for is some natural bijection between the two sets of triads that create the hexagon.</p>
<p>Thanks.</p>
| <p>In symbols, the identity is </p>
<p>$$\left({n-1\atop m-1}\right)\left({n\atop m+1}\right)\left({n+1\atop m}\right) =
\left({n\atop m-1}\right)\left({n-1\atop m}\right)\left({n+1\atop m+1}\right).$$</p>
<p>The usual combinatorial interpretation of a binomial coefficient $\left({n\atop m}\right)$ is that it counts subsets of size $m$ from a set of size $n$. Multiplication is usually interpreted as independent choice in sequence: $f(n)g(n)$ counts the process of picking one of $f(n)$ configurations, then independently picking one of $g(n)$ items.</p>
<p>Putting this together, the LHS counts subsets of size $m-1$ from a set of size $n-1$, then subsets of size $m$ from an (independent) set of size $n+1$, then (again independently) subsets of size $m+1$ from a set of size $n$. This corresponds one-to-one with the RHS because the things counted by the LHS can be counted in a different way by the RHS: For the RHS distinguish an element of the $n$ set and one of the $n+1$ set. What's left over for those two sets can be chosen by $\left({n-1\atop (m+1)-1}\right)$ and $\left({(n+1)-1\atop m-1}\right)$ respectively, and then the two distinguished elements can be included to be (possibly) chosen in the $n-1$ set to account for $\left({(n-1) +2 \atop (m-1)+2}\right)$.</p>
<p>To be clearer about the combinatorial interpretation, there are three sets, of size $n-1$, $n$, and $n+1$, from which you choose subsets of size $m-1$, $m+1$, and $m$, respectively. Another way to count this situation is to take 1 item each out of the $n$ and $n+1$ sets, and add them to the $n-1$ set. So now you're counting out of sets of size $n+1$, $n-1$, and $n$, from which you choose subsets of size $m+1$, $m$, and $m-1$, respectively.</p>
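<p>Before hunting for the bijection, one can at least confirm the identity numerically (a brute-force sketch using Python's <code>math.comb</code>):</p>
<pre><code>from math import comb

def lhs(n, m): return comb(n-1, m-1) * comb(n, m+1) * comb(n+1, m)
def rhs(n, m): return comb(n, m-1) * comb(n-1, m) * comb(n+1, m+1)

# e.g. n=5, m=2 reproduces 4*10*15 = 6*20*5 = 600 from the question
assert all(lhs(n, m) == rhs(n, m)
           for n in range(2, 40) for m in range(1, n))
</code></pre>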
| <p>I've left some questions as comments to Mitch's answer, and am hoping that my confusions about that answer will get cleared up soon. Meanwhile, I started to think about how I would approach this problem. I don't have a satisfying answer yet; the best I've been able to come up with requires introducing an additional factor on both sides of the identity. The modified identity (which is algebraically equivalent to the unmodified one) has a clear combinatorial meaning, but I don't yet see a way to interpret the unmodified identity in combinatorial terms.</p>
<p>It's nice to generalize the identity slightly. Starting with the identity as written in Mitch's answer,
<span class="math-container">$$
\binom{n - 1}{m - 1} \binom{n}{m + 1} \binom{n + 1}{m} =
\binom{n}{m - 1} \binom{n - 1}{m} \binom{n + 1}{m + 1},
$$</span>
we replace the <span class="math-container">$1$</span> with <span class="math-container">$r$</span> everywhere to obtain
<span class="math-container">$$
\binom{n - r}{m - r} \binom{n}{m + r} \binom{n + r}{m} =
\binom{n}{m - r} \binom{n - r}{m} \binom{n + r}{m + r}.
$$</span>
This is also an identity, as we show below. Just as in the original identity, the binomial coefficients that appear form the vertices of a hexagon (which we might call the radius-<span class="math-container">$r$</span> hexagon) centered at <span class="math-container">$\binom{n}{m}$</span> in Pascal's triangle. Note that the GCD property mentioned in the original post only holds for <span class="math-container">$r=1,$</span> while the identity holds for all <span class="math-container">$r.$</span> We concern ourselves only with the identity.</p>
<p>We prove the radius-<span class="math-container">$r$</span> identity starting from an elementary identity relating different ways of representing the trinomial coefficient as a product of binomial coefficients:
<span class="math-container">$$
\binom{n}{k}\binom{k}{a}=\binom{n}{a}\binom{n-a}{k-a}=\binom{n}{n-k,k-a,a}.
$$</span>
This has a combinatorial interpretation, as discussed <a href="https://math.stackexchange.com/q/534202/3736">here</a>. The following three variants of this identity are useful here:
<span class="math-container">$$
\begin{aligned}
\binom{n}{r}\binom{n-r}{m-r}&=\binom{n-m+r}{r}\binom{n}{m-r}\\
\binom{m+r}{r}\binom{n}{m+r}&=\binom{n}{r}\binom{n-r}{m}\\
\binom{n-m+r}{r}\binom{n+r}{m}&=\binom{m+r}{r}\binom{n+r}{m+r}.
\end{aligned}
$$</span>
The rightmost factors on the left side of these equations match the three factors on the left side of the identity, while the rightmost factors on the right side of these equations match the three factors on the right side of the identity. Furthermore, the leftmost factors on the left side of these equations are the same, but permuted, as the leftmost factors on the right side of these equations.</p>
<p>These observations suggest the idea of multiplying both sides of the radius-<span class="math-container">$r$</span> identity by
<span class="math-container">$$
\binom{n}{r}\binom{m+r}{r}\binom{n-m+r}{r}
$$</span>
to get
<span class="math-container">$$
\begin{aligned}
&\binom{n}{r}\binom{n - r}{m - r} \cdot \binom{m+r}{r}\binom{n}{m + r} \cdot \binom{n-m+r}{r}\binom{n + r}{m}\\
&\qquad= \binom{n-m+r}{r}\binom{n}{m - r} \cdot \binom{n}{r}\binom{n - r}{m} \cdot \binom{m+r}{r}\binom{n + r}{m + r}.
\end{aligned}
$$</span>
The two sides of this identity can be thought of as different ways of answering the following question: there are <span class="math-container">$n$</span> students, <span class="math-container">$n$</span> teachers, and <span class="math-container">$n+r$</span> administrators. A committee is to be formed having <span class="math-container">$m$</span> students, <span class="math-container">$m+r$</span> teachers, and <span class="math-container">$m+r$</span> administrators. From this committee, a subcommittee is to be formed having <span class="math-container">$r$</span> students, <span class="math-container">$r$</span> teachers, and <span class="math-container">$r$</span> administrators. In how many ways can this be done?</p>
<p>On the left side, this is accomplished by</p>
<ul>
<li>choosing <span class="math-container">$r$</span> students to be on the subcommittee, then choosing <span class="math-container">$m-r$</span> additional students to fill out the committee,</li>
<li>choosing <span class="math-container">$m+r$</span> teachers to be on the committee, then from these choosing <span class="math-container">$r$</span> to be on the subcommittee,</li>
<li>choosing <span class="math-container">$m$</span> administrators to be on the committee but not the subcommittee, then choosing <span class="math-container">$r$</span> additional administrators to be on the subcommittee.</li>
</ul>
<p>On the right side, it is accomplished by</p>
<ul>
<li>choosing <span class="math-container">$m-r$</span> students to be on the committee but not the subcommittee, then choosing <span class="math-container">$r$</span> additional students to be on the subcommittee,</li>
<li>choosing <span class="math-container">$r$</span> teachers to be on the subcommittee, then choosing <span class="math-container">$m$</span> additional teachers to fill out the committee,</li>
<li>choosing <span class="math-container">$m+r$</span> administrators to be on the committee, then from these choosing <span class="math-container">$r$</span> to be on the subcommittee.</li>
</ul>
<p>Clearly we get the same set of committee and subcommittee assignments either way, so the two sides must be equal.</p>
<p>This proof is unsatisfactory since we had to multiply the identity by the extraneous factor
<span class="math-container">$$
\binom{n}{r}\binom{m+r}{r}\binom{n-m+r}{r}
$$</span>
in order to be able to state our combinatorial interpretation. I have not yet been able to find a method that avoids this.</p>
<p><strong>Added 26 January 2014:</strong> I should have looked at the <a href="http://www.fq.math.ca/Scanned/12-1/gupta.pdf" rel="noreferrer">linked pdf</a> in the question before posting. There the identity is further generalized to
<span class="math-container">$$
\binom{n - r}{m - s} \binom{n}{m + r} \binom{n + s}{m} =
\binom{n}{m - s} \binom{n - r}{m} \binom{n + s}{m + r},\qquad\qquad(*)
$$</span>
which corresponds to a hexagon with side lengths alternately <span class="math-container">$r$</span> and <span class="math-container">$s.$</span> The proof above works with small modifications. Multiply both sides by
<span class="math-container">$$
\binom{n}{r}\binom{m+r}{r}\binom{n-m+s}{r}
$$</span>
to get
<span class="math-container">$$
\begin{aligned}
&\binom{n}{r}\binom{n - r}{m - s} \cdot \binom{m+r}{r}\binom{n}{m + r} \cdot \binom{n-m+s}{r}\binom{n + s}{m}\\
&\qquad= \binom{n-m+s}{r}\binom{n}{m - s} \cdot \binom{n}{r}\binom{n - r}{m} \cdot \binom{m+r}{r}\binom{n + s}{m + r}.
\end{aligned}
$$</span>
The interpretation of the three "trinomial pairs" that appear on left and on right is similar to before.</p>
<p><strong>Added 8 February 2014:</strong> There are, in fact two similar and related, but distinct, proofs along these lines. After permuting factors on both sides of the identity <span class="math-container">$(*)$</span> in the section above to get
<span class="math-container">$$
\binom{n - r}{m - s} \binom{n + s}{m} \binom{n}{m + r} =
\binom{n - r}{m} \binom{n}{m - s} \binom{n + s}{m + r},
$$</span>
we multiply both sides by
<span class="math-container">$$
\binom{n-m-r+s}{s}\binom{m}{s}\binom{n+s}{s}
$$</span>
and obtain
<span class="math-container">$$
\begin{aligned}
&\binom{n-m-r+s}{s}\binom{n - r}{m - s} \cdot \binom{m}{s}\binom{n + s}{m} \cdot \binom{n+s}{s}\binom{n}{m + r}\\
&\qquad = \binom{m}{s}\binom{n - r}{m} \cdot \binom{n+s}{s}\binom{n}{m - s} \cdot \binom{n-m-r+s}{s}\binom{n + s}{m + r}.
\end{aligned}
$$</span>
In the previous section, the counting problem had the parameters,
<span class="math-container">$$
\begin{array}{l|ccc}
& \text{number} & \text{number on} & \text{number on}\\
& \text{in pool} & \text{committee} & \text{subcommittee}\\
\hline
\text{students} & n & m+r-s & r\\
\text{teachers} & n & m+r & r\\
\text{administrators} & n+s & m+r & r\\
\end{array}
$$</span>
while in this section, the parameters are
<span class="math-container">$$
\begin{array}{l|ccc}
& \text{number} & \text{number on} & \text{number on}\\
& \text{in pool} & \text{committee} & \text{subcommittee}\\
\hline
\text{students} & n-r & m & s\\
\text{teachers} & n+s & m & s\\
\text{administrators} & n+s & m+r+s & s\\
\end{array}
$$</span></p>
<p>The two proofs both relate to the hexagon with side-lengths alternating between <span class="math-container">$r$</span> and <span class="math-container">$s$</span>. The proof in the previous section is obtained by relating the binomial coefficients corresponding to endpoints of the sides of length <span class="math-container">$r,$</span> while the proof in this section is obtained by relating the binomial coefficients corresponding to endpoints of the sides of length <span class="math-container">$s.$</span></p>
<p><strong>Added 10 October 2018:</strong> I missed a third proof, which is similar to the previous two in that all three involve converting the binomial coefficients to trinomial coefficients. After permuting factors again to get
<span class="math-container">$$
\binom{n}{m + r} \binom{n - r}{m - s} \binom{n + s}{m} =
\binom{n}{m - s} \binom{n + s}{m + r} \binom{n - r}{m},
$$</span>
we multiply both sides by
<span class="math-container">$$
\binom{m+r}{r+s}\binom{n+s}{r+s}\binom{n-m+s}{r+s}
$$</span>
and obtain
<span class="math-container">$$
\begin{aligned}
&\binom{n}{m + r}\binom{m+r}{r+s} \cdot \binom{n+s}{r+s}\binom{n - r}{m - s} \cdot \binom{n + s}{m}\binom{n-m+s}{r+s}\\
&\qquad=
\binom{n}{m - s}\binom{n-m+s}{r+s} \cdot \binom{n + s}{m + r}\binom{m+r}{r+s} \cdot \binom{n+s}{r+s}\binom{n - r}{m},
\end{aligned}
$$</span>
In this proof, the parameters of the counting problem are
<span class="math-container">$$
\begin{array}{l|ccc}
& \text{number} & \text{number on} & \text{number on}\\
& \text{in pool} & \text{committee} & \text{subcommittee}\\
\hline
\text{students} & n & m+r & r+s\\
\text{teachers} & n+s & m+r & r+s\\
\text{administrators} & n+s & m+r+s & r+s\\
\end{array}
$$</span>
In this version, the two hexagon vertices associated with a given trinomial coefficient are diametrically opposite, rather than adjacent along sides of length <span class="math-container">$r$</span> or <span class="math-container">$s$</span>.</p>
<p><strong>Added 4 December 2018:</strong> I can't resist adding a generalization of the identity for the multinomial coefficients that may shed some light on the structure of the identity in the binomial case and on its three different proofs.</p>
<blockquote>
<p>Let <span class="math-container">$\ell\ge2$</span>, let <span class="math-container">$k_1$</span>, <span class="math-container">$k_2$</span>, ..., <span class="math-container">$k_\ell$</span>, <span class="math-container">$r_0<r_1<\ldots<r_\ell$</span> be integers such that <span class="math-container">$k_i+r_0\ge0$</span> for all <span class="math-container">$i\in\{1,2,\ldots,\ell\}$</span>. Set <span class="math-container">$n=\sum_{i=1}^\ell k_i+\sum_{i=0}^\ell r_i$</span>. Use <span class="math-container">$\pi$</span> to denote a permutation of <span class="math-container">$(r_0,r_1,\ldots,r_\ell)$</span>. Then we have the identity
<span class="math-container">$$
\prod_{\text{sgn}(\pi)=1}\binom{n-\pi(r_0)}{k_1+\pi(r_1),\ldots,k_\ell+\pi(r_\ell)}=\prod_{\text{sgn}(\pi)=-1}\binom{n-\pi(r_0)}{k_1+\pi(r_1),\ldots,k_\ell+\pi(r_\ell)}.
$$</span></p>
</blockquote>
<p><em>Proof:</em> Observe that the definition of <span class="math-container">$n$</span> and the restrictions on the <span class="math-container">$r_i$</span> guarantee that the multinomial coefficients are well-formed, that is, that the lower numbers in each coefficient sum to the upper number and that the lower numbers are all non-negative. The statement may be proved by observing that if both sides are multiplied by a suitable quantity, then both reduce to the same <span class="math-container">$\left(\frac{1}{2}(\ell+1)!\right)$</span>-fold product of <span class="math-container">$(\ell+1)$</span>-nomial coefficients.</p>
<p>To find a suitable multiplier, choose a pair of indices <span class="math-container">$i<j$</span> from the set <span class="math-container">$\{0,1,\ldots,\ell\}$</span>. The multiplier will be a product of binomial coefficients, one associated to each factor in the identity. For each multinomial coefficient in the identity (on either side), the parameter <span class="math-container">$r_j$</span> will appear in exactly one argument of the coefficient. If it appears in one of the lower arguments, that is, we have <span class="math-container">$k_a+r_j$</span> as a lower argument, introduce the binomial coefficient
<span class="math-container">$$
\binom{k_a+r_j}{k_a+r_i,r_j-r_i}.
$$</span>
If it appears in the upper argument, that is, if the upper argument is <span class="math-container">$n-r_j$</span>, then introduce the binomial coefficient
<span class="math-container">$$
\binom{n-r_i}{n-r_j,r_j-r_i}.
$$</span>
The product of the binomial coefficients so introduced is the same on each of the two sides since each of the binomial coefficients
<span class="math-container">$$
\binom{n-r_i}{n-r_j,r_j-r_i},\ \binom{k_1+r_j}{k_1+r_i,r_j-r_i},\ \ldots,\ \binom{k_\ell+r_j}{k_\ell+r_i,r_j-r_i}
$$</span>
will be introduced exactly <span class="math-container">$\frac{1}{2}\ell!$</span> times on the left and the same number of times on the right. Hence we have found equal multipliers for the two sides.</p>
<p>To show that when the multiplier is included, the two sides reduce to the same quantity, observe that for the multinomial coefficient in the identity associated with the permutation <span class="math-container">$\pi$</span>, there is a corresponding multinomial coefficient of opposite parity, obtained by following the permutation <span class="math-container">$\pi$</span> with the swap <span class="math-container">$(r_ir_j)$</span>. The result now follows from the fact that when the introduced binomial coefficients are included, these <span class="math-container">$\ell$</span>-nomials become equal <span class="math-container">$(\ell+1)$</span>-nomials. Indeed,
<span class="math-container">\begin{align*}
&\binom{\ldots}{\ldots, k_a+r_i, \ldots, k_b+r_j, \ldots} \binom{k_b+r_j}{k_b+r_i,r_j-r_i}\\
&\quad=\binom{\ldots}{\ldots, k_a+r_j, \ldots, k_b+r_i, \ldots} \binom{k_a+r_j}{k_a+r_i,r_j-r_i}\\
&\quad=\binom{\ldots}{r_j-r_i, \ldots, k_a+r_i, \ldots, k_b+r_i, \ldots}
\end{align*}</span>
when <span class="math-container">$r_i$</span> and <span class="math-container">$r_j$</span> both appear in lower arguments, while
<span class="math-container">\begin{align*}
&\binom{n-r_i}{\ldots, k_a+r_j, \ldots} \binom{k_a+r_j}{k_a+r_i,r_j-r_i}\\
&\quad=\binom{n-r_j}{\ldots, k_a+r_i, \ldots} \binom{n-r_i}{n-r_j,r_j-r_i}\\
&\quad=\binom{n-r_i}{r_j-r_i, \ldots, k_a+r_i, \ldots}
\end{align*}</span>
when one of them appears in the upper argument. <span class="math-container">$\square$</span></p>
<p>Note that the original hexagonal identity is the case <span class="math-container">$\ell=2$</span>, <span class="math-container">$r_1=-1$</span>, <span class="math-container">$r_2=0$</span>, <span class="math-container">$r_3=1$</span> and that the generalized hexagonal identity is the case <span class="math-container">$\ell=2$</span>, <span class="math-container">$r_1=-s$</span>, <span class="math-container">$r_2=0$</span>, <span class="math-container">$r_3=r$</span>. That there are three binomial coefficients on each side of the hexagonal identity reflects the fact that there are <span class="math-container">$\frac{1}{2}(\ell+1)!=3$</span> even permutations and the same number of odd permutations. </p>
<p>The proof of the multinomial version of the identity involved the arbitrary choice of the two indices <span class="math-container">$i$</span> and <span class="math-container">$j$</span>. This means that there are, in fact, <span class="math-container">$\binom{\ell+1}{2}$</span> different ways of constructing this proof, each with its own combinatorial interpretation. As before, these are not combinatorial proofs, since they involve the introduction of the multiplier. That there were three similar proofs in the hexagonal case reflects the fact that, when <span class="math-container">$\ell=2$</span>, there are <span class="math-container">$\binom{\ell+1}{2}=3$</span> ways of choosing <span class="math-container">$i$</span> and <span class="math-container">$j$</span>.</p>
<p>It is worth pointing out that the hexagonal formation in the original Pascal's triangle identity is a translation of the <a href="https://en.wikipedia.org/wiki/Permutohedron" rel="noreferrer">permutohedron</a> of <span class="math-container">$\{r_0,r_1,r_2\}$</span>. The higher multinomial identities are associated with formations in Pascal's pyramid or its higher-dimensional generalizations taking the shape of some higher-dimensional polytope. When <span class="math-container">$\ell=3$</span>, for example, the permutations of <span class="math-container">$\{r_0,r_1,r_2,r_3\}$</span> form the vertices of a truncated octahedron.</p>
<p>Finally, observe that there is a redundancy in the parameters that appear in the statement of the general identity. In particular <span class="math-container">$r_0$</span> may be eliminated by defining
<span class="math-container">$$
\begin{aligned}
r'_0&:=0 \\
r'_1&:=r_1-r_0 & k'_1&:=k_1+r_0\\
&\vdots & &\vdots\\
r'_\ell&:=r_\ell-r_0 & k'_\ell&:=k_\ell+r_0
\end{aligned}
$$</span>
so that <span class="math-container">$n':=\sum_{i=1}^\ell k'_i+\sum_{i=0}^\ell r'_i=n-r_0$</span>, and rewriting the identity by replacing original parameters with primed parameters.</p>
|
matrices | <p>What is an intuitive explanation of a positive-semidefinite matrix? Or a simple example which gives more intuition for it rather than the bare definition. Say $x$ is some vector in space and $M$ is some operation on vectors.</p>
<p>The definition is:</p>
<p>A $n$ × $n$ Hermitian matrix M is called <em>positive-semidefinite</em> if</p>
<p>$$x^{*} M x \geq 0$$</p>
<p>for all $x \in \mathbb{C}^n$ (or, all $x \in \mathbb{R}^n$ for the real matrix), where $x^*$ is the conjugate transpose of $x$.</p>
| <p>One intuitive definition is as follows. Multiply any vector with a positive semi-definite matrix. The angle between the original vector and the resultant vector will always be less than or equal to $\frac{\pi}{2}$. The positive semi-definite matrix tries to keep the vector within a certain half space containing the vector. This is analogous to what a non-negative number does to a real variable. Multiply it and it only stretches or contracts the number but never reflects it about the origin.</p>
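<p>A small numerical illustration of this (a sketch with numpy; the matrix is an arbitrary random example, and $B^TB$ is one standard way to manufacture a positive semi-definite matrix):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
M = B.T @ B                        # B^T B is always positive semi-definite

for _ in range(5):
    x = rng.standard_normal(3)
    c = x @ (M @ x) / (np.linalg.norm(x) * np.linalg.norm(M @ x))
    print(np.degrees(np.arccos(np.clip(c, -1, 1))))  # never exceeds 90 degrees
</code></pre>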
| <p>First I'll tell you how I think about Hermitian positive-definite matrices. A Hermitian positive-definite matrix $M$ defines a sesquilinear inner product $\langle Mv, w \rangle = \langle v, Mw \rangle$, and in fact every inner product on a finite-dimensional inner product space $V$ has this form. In other words it is a way of computing angles between vectors, or a way of projecting vectors onto other vectors; over the real numbers it is the key ingredient to doing Euclidean geometry. An inner product can be recovered from the norm $\langle Mv, v \rangle = \langle v, Mv \rangle$ it induces, and a norm in turn can be recovered from its unit sphere $\{ v : \langle Mv, v \rangle = 1 \}$. This unit sphere is a distorted version of the usual unit sphere; the distortions will occur along axes corresponding to the eigenvectors of $M$, and the amount of distortion corresponds to the (inverses of the) corresponding eigenvalues. For example when $\dim V = 2$ it is an ellipse and when $\dim V = 3$ it is an ellipsoid.</p>
<p>A Hermitian positive-semidefinite matrix $M$ no longer describes an inner product because it is not necessarily positive-definite, but it still defines a sesquilinear form. It also defines a function $\langle Mv, v \rangle$ which is no longer a norm because it is not necessarily positive-definite; some people call these "pseudonorms," I think. The corresponding unit sphere $\{ v : \langle Mv, v \rangle = 1 \}$ might now be lower-dimensional than the usual unit sphere, depending on how many eigenvalues are equal to zero; for example if $\dim V = 3$ it might be an ellipsoid, or an ellipse, or two points. </p>
|
linear-algebra | <p><strong>Background:</strong> Many (if not all) of the transformation matrices used in $3D$ computer graphics are $4\times 4$, including the three values for $x$, $y$ and $z$, plus an additional term which usually has a value of $1$.</p>
<p>Given the extra computing effort required to multiply $4\times 4$ matrices instead of $3\times 3$ matrices, there must be a substantial benefit to including that extra fourth term, even though $3\times 3$ matrices <em>should</em> (?) be sufficient to describe points and transformations in 3D space.</p>
<p><strong>Question:</strong> Why is the inclusion of a fourth term beneficial? I can guess that it makes the computations easier in some manner, but I would really like to know <em>why</em> that is the case.</p>
| <p>I'm going to copy <a href="https://stackoverflow.com/questions/2465116/understanding-opengl-matrices/2465290#2465290">my answer from Stack Overflow</a>, which also shows why 4-component vectors (and hence 4×4 matrices) are used instead of 3-component ones.</p>
<hr>
<p>In most 3D graphics a point is represented by a 4-component vector (x, y, z, w), where w = 1. Usual operations applied on a point include translation, scaling, rotation, reflection, skewing and combination of these. </p>
<p>These transformations can be represented by a mathematical object called "matrix". A matrix applies on a vector like this:</p>
<pre><code>[ a b c tx ] [ x ] [ a*x + b*y + c*z + tx*w ]
| d e f ty | | y | = | d*x + e*y + f*z + ty*w |
| g h i tz | | z | | g*x + h*y + i*z + tz*w |
[ p q r s ] [ w ] [ p*x + q*y + r*z + s*w ]
</code></pre>
<p>For example, scaling is represented as</p>
<pre><code>[ 2 . . . ] [ x ] [ 2x ]
| . 2 . . | | y | = | 2y |
| . . 2 . | | z | | 2z |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p>and translation as</p>
<pre><code>[ 1 . . dx ] [ x ] [ x + dx ]
| . 1 . dy | | y | = | y + dy |
| . . 1 dz | | z | | z + dz |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p><strong><em>One of the reasons for the 4th component is to make a translation representable by a matrix.</em></strong></p>
<p>The advantage of using a matrix is that multiple transformations can be combined into one via matrix multiplication.</p>
<p>Now, if the purpose is simply to bring translation to the table, then I'd say (x, y, z, 1) instead of (x, y, z, w) and make the last row of the matrix always <code>[0 0 0 1]</code>, as done usually for 2D graphics. In fact, the 4-component vector will be mapped back to the normal 3-component vector via this formula:</p>
<pre><code>[ x(3D) ] [ x / w ]
| y(3D) | = | y / w |
[ z(3D) ] [ z / w ]
</code></pre>
<p>This is called <a href="http://en.wikipedia.org/wiki/Homogeneous_coordinates#Use_in_computer_graphics" rel="noreferrer">homogeneous coordinates</a>. <strong><em>Allowing this makes the perspective projection expressible with a matrix too,</em></strong> which can again combine with all other transformations.</p>
<p>For example, since objects farther away should be smaller on screen, we transform the 3D coordinates into 2D using formula</p>
<pre><code>x(2D) = x(3D) / (10 * z(3D))
y(2D) = y(3D) / (10 * z(3D))
</code></pre>
<p>Now if we apply the projection matrix</p>
<pre><code>[ 1 . . . ] [ x ] [ x ]
| . 1 . . | | y | = | y |
| . . 1 . | | z | | z |
[ . . 10 . ] [ 1 ] [ 10*z ]
</code></pre>
<p>then the real 3D coordinates would become</p>
<pre><code>x(3D) := x/w = x/10z
y(3D) := y/w = y/10z
z(3D) := z/w = 0.1
</code></pre>
<p>so we just need to chop the z-coordinate out to project to 2D.</p>
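<p>A compact numerical version of the example above (a sketch in Python with numpy; the translation vector is an arbitrary illustration, the projection is the one from the text):</p>
<pre><code>import numpy as np

translate = np.array([[1., 0, 0, 2],   # move by (2, 3, 4)
                      [0, 1, 0, 3],
                      [0, 0, 1, 4],
                      [0, 0, 0, 1]])

project = np.eye(4)
project[3] = [0, 0, 10, 0]             # the example's projection: w <- 10*z

p = np.array([1., 2, 5, 1])            # the point (1, 2, 5) in homogeneous form
q = project @ translate @ p            # one matrix per transformation, composed
print(q[:3] / q[3])                    # divide by w: [x/10z, y/10z, 0.1]
</code></pre>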
| <blockquote>
<p>Even though 3x3 matrices should (?) be sufficient to describe points and transformations in 3D space.</p>
</blockquote>
<p>No, they aren't enough! Suppose you represent points in space using 3D vectors. You can transform these using 3x3 matrices. But if you examine the definition of matrix multiplication you should see immediately that multiplying a zero 3D vector by a 3x3 matrix gives you another zero vector. So simply multiplying by a 3x3 matrix can never move the origin. But translations (and rotations about points other than the origin) do need to move the origin. So 3x3 matrices are not enough.</p>
<p>I haven't tried to explain exactly how 4x4 matrices are used. But I hope I've convinced you that 3x3 matrices aren't up to the task and that something more is needed.</p>
|
probability | <p>You play a game using a standard six-sided die. You start with 0 points. Before every roll, you decide whether you want to continue the game or end it and keep your points. After each roll, if you rolled 6, then you lose everything and the game ends. Otherwise, add the score from the die to your total points and continue/stop the game.</p>
<p>When should one stop playing this game? Obviously, one wants to maximize total score.</p>
<p>As I was asked to show my preliminary results on this one, here they are:</p>
<p>If we simplify the game to getting 0 on 6 and 3 otherwise, we get the following: </p>
<p>$$\begin{align}
EV &= \frac{5}{6}3+\frac{25}{36}6+\frac{125}{216}9+\ldots\\[5pt]
&= \sum_{n=1}^{\infty}\left(\frac{5}{6}\right)^n3n
\end{align}$$ </p>
<p>which is divergent, so it would make sense to play forever, which makes this similar to the St. Petersburg paradox. Yet I can sense that I'm wrong somewhere!</p>
| <p>Before deciding whether to stop or roll, suppose you have a non-negative integer number of points $n$.</p>
<blockquote>
<p>How many more rolls should you make to maximise the expected gain over stopping (zero)?</p>
</blockquote>
<p>Suppose that further number of rolls is another non-negative integer $k$. Now consider the $6^k$ possible sequences of $k$ rolls:</p>
<ul>
<li>In $5^k$ of those sequences there is no six and you win some points. The sum $D_k$ over all such sequences of the sum of dice rolls within each sequence satisfies the recurrence relation
$$D_0=0,\qquad D_{k+1}=5D_k+15\cdot5^k$$
It turns out that this has a closed form:
$$D_k=15k\cdot5^{k-1}=3k\cdot5^k$$</li>
<li>In the remaining $6^k-5^k$ sequences there is at least one six and you lose the $n$ points you had beforehand.</li>
</ul>
<p>So the expected gain when you have $n$ points and try to roll $k$ more times before stopping is
$$G(n,k)=\frac{D_k-n(6^k-5^k)}{6^k}=\frac{3k\cdot5^k-n(6^k-5^k)}{6^k}$$
For a fixed $n$, the $k$ that maximises $G(n,k)$ is $m(n)=\max(5-\lfloor n/3\rfloor,0)$; if $3\mid n$ then $k=m(n)+1$ also forms a maximum.</p>
<hr>
<p>Suppose we fix the maximum number of rolls before starting the game. At $n=0$, $k=5$ and $k=6$ maximise $G(n,k)$ and the expected score with this strategy is
$$G(0,5)=\frac{15625}{2592}=6.028163\dots$$
But what if we roll once and <em>then</em> fix the maximum rolls afterwards? If we roll 1 or 2, we roll at most 5 more times; if 3, 4 or 5, 4 more times. The expected score here is <em>higher</em>:
$$\frac16(1+G(1,5)+2+G(2,5)+3+G(3,4)+4+G(4,4)+5+G(5,4))=6.068351\dots$$
We will get an even higher expected score if we roll twice and then set the roll limit. This implies that the <em>greedy strategy</em>, outlined below, is optimal:</p>
<blockquote>
<p>Before the start of each new roll, calculate $m(n)$. Roll if this is positive and stop if this is zero.</p>
</blockquote>
<p>When $n\ge15$, $m(n)=0$. A naïve calculation that formed the previous version of this answer says that rolling once has zero expected gain when $n=15$ and negative expected gain when $n>15$. Together, these suggest that <em>we should stop if and when we have 15 or more points</em>.</p>
<p>Finding a way to calculate the expected score under this "stop-at-15" strategy took quite a while for me to conceptualise and then program, but I managed it in the end; the program is <a href="https://gist.github.com/Parclytaxel/af8c1e48aecb36f3b466c92ef252e62f">here</a>. The expected score works out to be
$$\frac{2893395172951}{470184984576}=6.1537379284\dots$$
So this is the maximum expected score you can achieve.</p>
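<p>A quick Monte Carlo check of the stop-at-15 strategy (a sketch; the sample mean should land near the exact value $6.1537\ldots$ above):</p>
<pre><code>import random

def play(threshold=15):
    total = 0
    while total < threshold:        # the greedy strategy: roll while m(n) > 0
        roll = random.randint(1, 6)
        if roll == 6:
            return 0                # bust: lose everything
        total += roll
    return total                    # stop at 15 or more

n = 10**6
print(sum(play() for _ in range(n)) / n)   # typically about 6.15
</code></pre>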
| <p>In the last round you can expect to gain $\frac{1+2+3+4+5}{6}=\frac52$ or lose your current total $n$ with probability $\frac 1 6$, an expected loss of $\frac n6$; whenever the expected loss exceeds the expected gain, you should stop. So once you have scored more than 15 you should stop. If you score exactly 15 it doesn't matter whether you continue or stop.</p>
|
logic | <p>Is there a proper and precise definition that goes something like this?</p>
<p><strong>Definition.</strong> <em>A statement $S$ is a</em> vacuous truth <em>if ... ...</em></p>
| <p>No. The phrase "vacuously true" is used informally for statements of the form $\forall a \in X: P(a)$ that happen to be true because $X$ is empty, or even for statements of the form $\forall a \in X: Q(a) \to P(a)$ that happen to be true because no $a \in X$ satisfies $Q(a)$. In both cases, it is irrelevant what statement $P(a)$ is.</p>
<p>I guess you could turn this into a formal definition of a property of statement, but that's not standard.</p>
| <p>We say that an implication $p\to q$ holds vacuously if $p$ is always false. That is to say, it is impossible to have $p$ true and $q$ false. So the implication is a tautology. </p>
<p>Of course tautologies exist in propositional calculus, and not quite in predicate logic (and thus not in first-order logic), but the concept carries over. </p>
<p>So when we say that "the empty set is a subset of $A$" is vacuously true, we say that there is just no counterexample to the contrary. Why is that true? Because the set is empty. </p>
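<p>The same convention is baked into programming languages: a universally quantified statement over an empty range evaluates to true, while an existential one evaluates to false. In Python, for instance:</p>
<pre><code>print(all(x > 5 for x in []))   # True: vacuously, every element of [] exceeds 5
print(any(x > 5 for x in []))   # False: there is no witness in an empty set
</code></pre>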
|
geometry | <blockquote>
<p>Consider a square of side equal to $1$. Prove that we can place inside the square a finite number of disjoint discs, with different radii of the form $1/k$ with $k$ a positive integer, such that the area of the remaining region is at most $0.0001$.</p>
</blockquote>
<p>If we consider all the discs of this form, their total area is $\displaystyle\sum_{k \geq 1} \pi \frac{1}{k^2} - \pi=\frac{\pi^3}{6}-\pi\simeq 2.02$, which is greater than the area of the square. (I subtracted $\pi$ because we cannot place a disc of radius $1$ inside the square.)</p>
<p>So the discs of this form can cover the square very well, but how can I prove that there is a disjoint family which leaves out a small portion of the area?</p>
| <p>I don't think this is possible for general $\epsilon$, and I doubt it's possible for remainder $0.0001$.</p>
<p>Below are some solutions with remainder less than $0.01$. I produced them by randomized search from two different initial configurations. In the first one, I only placed the circle with curvature $2$ in the centre and tried placing the remaining circles randomly, beginning with curvature $12$; in the second one, I prepositioned pairs of circles that fit in the corners and did a deterministic search for the rest.</p>
<p>The data structure I used was a list of interstices, each in turn consisting of a list of circles forming the interstice (where the lines forming the boundary of the square are treated as circles with zero curvature). I went through the circles in order of curvature and for each circle tried placing it snugly in each of the cusps where two circles touch in random order. If a circle didn't fit anywhere, I discarded it; if that decreased the remaining area below what was needed to get up to the target value (in this case $0.99$), I backtracked to the last decision.</p>
<p>I also did this without using the circle with curvature $2$. For that case I did a complete search and found no configurations with remainder less than $0.01$. Thus, if there is a better solution in that case, it must involve placing the circles in a different order. (We can always transform any solution to one where each circle is placed snugly in a cusp formed by two other circles, so testing only such positions is not a restriction; however, circles with lower curvature might sit in the cusps of circles with higher curvature, and I wouldn't have found such solutions.)</p>
<p>For the case including the circle with curvature $2$, the search wasn't complete (I don't think it can be done completely in this manner, without introducing further ideas), so I can't exclude that there's are significantly better configurations (even ones with in-order placement), but I'll try to describe how I came to doubt that there's much room for improvement beyond $0.01$, and particularly that this can be done for arbitrary $\epsilon$.</p>
<p>The reasons are both theoretical and numerical. Numerically, I found that this seems to be a typical combinatorial optimization problem: There are many local minima, and the best ones are quite close to each other. It's easy to get to $0.02$; it's relatively easy to get to $0.011$; it takes quite a bit more optimization to get to $0.01$; and beyond that practically all the solutions I found were within $0.0002$ or so of $0.01$. So a solution with $0.0001$ would have to be of a completely different kind from everything that I found.</p>
<p>Now of course <em>a priori</em> there might be some systematic solution that's hard to find by this sort of search but can be proved to exist. That might conceivably be the case for $0.0001$, but I'm pretty sure it's not the case for general $\epsilon$. To prove that it's possible to leave a remainder less than $\epsilon$ for any $\epsilon\gt0$, one might try to argue that after some initial phase it will always be possible to fit the remaining circles into the remaining space. The problem is that such an argument can't work, because we're trying to fill the rational area $1$ by discarding rational multiples of $\pi$ from the total area $\pi^3/6$, so we can't do it by discarding a finite number of circles, since $\pi$ is transcendental.</p>
<p>Thus we can never reach a stage where we could prove that the remaining circles will exactly fit, and hence every proof that proves we can beat an arbitrary $\epsilon$ would have to somehow show that the remaining circles can be divided into two infinite subsets, with one of them exactly fitting into the remaining gaps. Of course this, too, is possible in principle, but it seems rather unlikely; the problem strikes me as a typical messy combinatorial optimization problem with little regularity.</p>
<p>A related reason not to expect a clean solution is that in an <a href="http://en.wikipedia.org/wiki/Apollonian_gasket" rel="noreferrer">Apollonian gasket</a> with integer curvatures, some integers typically occur more than once. For instance, one might try to make use of the fact that the curvatures $0$, $2$, $18$ and $32$ form a quadruple that would allow us to fill an entire half-corner with a gasket of circles of integer curvature; however, in that gasket, many curvatures, for instance $98$, occur more than once, so we'd have to make exceptions for those since we're not allowed to reuse those circles. Also, if you look at the gaskets produced by $0$, $2$ and the numbers from $12$ to $23$ (which are the candidates to be placed in the corners), you'll find that the fourth number increases more rapidly than the third; that is, $0$, $2$ and $18$ lead to $32$, whereas $0$ $2$ and $19$ already lead to $(\sqrt2+\sqrt{19})^2\approx33.3$; so not only can you not place all the numbers from $12$ to $23$ into the corners (since only two of them fit together and there are only four corners), but then if you start over with $24$ (which is the next number in the gasket started by $12$), you can't even continue with the same progression, since the spacing has increased. The difference would have to be compensated by the remaining space in the corners that's not part of the gaskets with the big $2$-circle, but that's too small to pick up the slack, which makes it hard to avoid dropping several of the circles in the medium range around the thirties. </p>
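<p>The quadruples mentioned above come from Descartes' circle theorem, $k_4 = k_1+k_2+k_3 + 2\sqrt{k_1k_2+k_2k_3+k_3k_1}$ for the inner tangent circle; a short sketch for checking them:</p>
<pre><code>from math import sqrt

def fourth_curvature(k1, k2, k3):
    # Descartes' circle theorem (taking the + sign for the smaller circle)
    return k1 + k2 + k3 + 2 * sqrt(k1*k2 + k2*k3 + k3*k1)

print(fourth_curvature(0, 2, 18))   # 32.0: an integer gasket
print(fourth_curvature(0, 2, 19))   # about 33.33 = (sqrt(2)+sqrt(19))**2
</code></pre>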
<p>My impression from the optimization process is that we're forced to discard too much area quite early on; that is, we can't wait for some initial irregularities to settle down into some regular pattern that we can exploit. For instance, the first solution below uses all curvatures except for the following: 3 4 5 6 7 8 9 10 11 16 17 20 22 25 30 31 33 38 46 48 49 52 53 55 56 57 59 79 81 94 96 101 106 107 108 113 125 132. Already at 49 the remaining area becomes less than would be needed to fill the square. Other solutions I found differed in the details of which circles they managed to squeeze in where, but the total area always dropped below $1$ early on. Thus, it appears that it's the irregular constraints at the beginning that limit what can be achieved, and this can't be made up for by some nifty scheme extending to infinity. It might even be possible to prove by an exhaustive search that some initial set of circles can't be placed without discarding too much area. To be rigorous, this would have to take a lot more possibilities into account than my search did (since the circles could be placed in any order), but I don't see why allowing the bigger circles to be placed later on should make such a huge difference, since there's precious little wiggle room for their placement to begin with if we want to fit in most of the ones between $12$ and $23$.</p>
<p>So here are the solutions I found with remainder less than $0.01$. The configurations shown are both filled up to an area $\gtrsim0.99$ and have a tail of tiny circles left worth about another $0.0002$. For the first one, I checked with integer arithmetic that none of the circles overlap. (In fact I placed the circles with integer arithmetic, using floating-point arithmetic to find an approximation of the position and a single iteration of Newton's method in integer arithmetic to correct it.)</p>
<p>The first configuration has $10783$ circles and was found using repeated randomized search starting with only the circle of curvature $2$ placed; I think I ran something like $100$ separate trials to find this one, and something like $1$ in $50$ of them found a solution with remainder below $0.01$; each trial took a couple of seconds on a MacBook Pro.</p>
<p><img src="https://i.sstatic.net/K1MNc.png" alt="randomized"></p>
<p>The second configuration has $17182$ circles and was found by initially placing pairs of circles with curvatures $(12,23)$, $(13,21)$, $(14,19)$ and $(15,18)$ touching each other in the corners and tweaking their exact positions by hand; the tweaking brought a gain of something like $0.0005$, which brought the remainder down below $0.01$. The search for the remaining circles was carried out deterministically, in that I always tried first to place a circle into the cusps formed by the smaller circles and the boundary lines; this was to keep as much contiguous space as possible available in the cusps between the big circle and the boundary lines.</p>
<p><img src="https://i.sstatic.net/5zUI0.png" alt="pre-placed"></p>
<p>I also tried placing pairs of circles with curvatures $(13,21)$, $(14,19)$, $(15,18)$ and $(16,17)$ in the corners, but only got up to $0.9896$ with that.</p>
<p>Here are high-resolution version of the images; they're scaled down in this column, but you can open them in a new tab/window (where you might have to click on them to toggle the browser's autoscale feature) to get the full resolution.</p>
<p>Randomized search:</p>
<p><img src="https://i.sstatic.net/JWiwd.png" alt="randomized hi-res"></p>
<p>With pre-placed circles:</p>
<p><img src="https://i.sstatic.net/aFFfN.png" alt="enter image description here"></p>
| <p>Let's roll up our sleeves here. Let $C_k$ denote the disk of radius $1/k$. Suppose we can cover an area of $\ge 0.9999$ using a set of non-overlapping disks inside the unit square, and let $S$ denote the set of integers $k$ such that $C_k$ is used in this cover.<br>
Then we require</p>
<p>$$\sum_{k\in S}\frac{1}{k^2} \ge 0.9999/\pi \approx 0.318278$$</p>
<p>As the OP noted, we know that $1 \not\in S$. This leaves</p>
<p>$$\sum_{k\ge2}\frac{1}{k^2} \approx 0.644934$$</p>
<p>which gives us $0.644934 - 0.318278 = 0.326656$ 'spare capacity' to play with.</p>
<p><strong>Case 1</strong> Suppose $2 \in S$. Then the largest disk that will fit into the spaces in the corners left by $C_2$ is $C_{12}$, so we must throw $3,...,11$ out of $S$. This wastes</p>
<p>$$\sum_{k=3}^{11}\frac{1}{k^2}\approx0.308032$$</p>
<p>and we are close to using up our spare capacity: we would be left with $0.326656-0.308032=0.018624$ to play with. </p>
<p><strong>Case 2</strong> Now suppose $2 \not\in S$. Then we can fit $C_3$ and $C_4$ into the unit square, but not $C_5$. So we waste</p>
<p>$$\frac{1}{2^2} + \frac{1}{5^2} = 0.29$$</p>
<p>leaving us with $0.326656-0.29=0.036656$ to play with. </p>
<p>Neither of these cases fills me with confidence that this thing is doable.</p>
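<p>For what it's worth, the bookkeeping above is easy to reproduce (a short sketch):</p>
<pre><code>from math import pi

spare = (pi**2/6 - 1) - 0.9999/pi                  # sum over k>=2 of 1/k^2, minus what's required
case1 = spare - sum(1/k**2 for k in range(3, 12))  # Case 1: discard C_3, ..., C_11
case2 = spare - (1/2**2 + 1/5**2)                  # Case 2: discard C_2 and C_5
print(spare, case1, case2)                         # about 0.3267, 0.0186, 0.0367
</code></pre>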
|
probability | <p><strong>Context:</strong> My friend gave me a problem at breakfast some time ago. It is supposed to have an easy, trick-involving solution. I can't figure it out.</p>
<p><strong>Problem:</strong> Let there be a knight (horse) at a particular corner (0,0) on a 8x8 chessboard. The knight moves according to the usual rules (2 in one direction, 1 in the orthogonal one) and only legal moves are allowed (no wall tunnelling etc). The knight moves randomly (i.e. at a particular position, it generates a set of all possible and legal new positions, and picks one at random). <strong>What is the average number of steps after which the knight returns to its starting corner?</strong></p>
<p>To sum up: A knight starts at (0,0). How many steps on average does it take to return back to (0,0) via a random (but only legal knight moves) walk.</p>
<p><strong>My attempt:</strong> (disclaimer: I don't know much about Markov chains.)</p>
<p>The problem is a Markov chain. There are $8\times8 = 64$ possible states. There exist transition probabilities between the states that are easy to generate. I generated a $64 \times 64$ transition matrix $M_{ij}$ using a simple piece of code, as it seemed too big to do by hand.</p>
<p>The starting position is $v_i = (1,0,0,...) = \delta_{0i}$.</p>
<p>The probability that the knight is in the corner (state 0) after $n$ steps is
$$
P_{there}(n) = (M^n)_{0j} v_j \, .
$$
I also need to find the probability that the knight did not reach the state 0 in any of the previous $n-1$ steps. The probability that the knight is not in the corner after $m$ steps is $1-P_{there}(m)$.</p>
<p>Therefore the total probability that the knight is in the corner for the first time (disregarding the start) after $n$ steps is
$$
P(n) = \left ( \prod_{m=1}^{n-1} \left [ 1 - \sum_{j = 0}^{63} (M^m)_{0j} v_j \right ] \right ) \left ( \sum_{j = 0}^{63} (M^n)_{0j} v_j \right )
$$
To calculate the average number of steps to return, I evaluate
$$
\left < n \right >= \sum_{n = 1}^{\infty} n P(n) \, .
$$
<strong>My issue:</strong>
The approach I described should work. However, I had to use a computer due to the size of the matrices. Also, the $\left < n \right >$ seems to converge quite slowly. I got $\left < n \right > \approx 130.3$ numerically and my friend claims it's wrong. Furthermore, my solution is far from simple. Would you please have a look at it?</p>
<p>Thanks a lot!
-SSF</p>
| <p>Details of the method mentioned in @Batman's comment:</p>
<p>We can view each square on the chessboard as a vertex on a graph consisting of $64$ vertices, and two vertices are connected by an edge if and only if a knight can move from one square to another by a single legal move.</p>
<p>Since the knight can reach any other square starting from any given square, the graph is connected (i.e. every pair of vertices is connected by a path).</p>
<p>Now, given a vertex $i$ of the graph, let $d_i$ denote the degree of the vertex, which is the number of edges connected to it. This is the number of possible moves that a knight can make from that vertex (square on the chessboard). Since the knight moves randomly, the transition probability from $i$ to each of its neighbors is $1/d_i$.</p>
<p>Now, since the chain is irreducible (because the graph is connected), the stationary distribution of the chain is unique. Let's call this distribution $\pi$. We claim the following:</p>
<blockquote>
<p><strong>Claim</strong> Let $\pi_j$ denote $j^\text{th}$ component of $\pi$. Then $\pi_j$ is proportional to $d_j$.</p>
<p><strong>Proof</strong> Let $I$ be the function on vertices of the graph such that $I(i)=1$ if $i$ is a neighbor of $j$, and $I(i)=0$ otherwise. Then</p>
<p>$$
d_j=\sum_i I(i)=\sum_i d_i \cdot \frac{I(i)}{d_i} = \sum_i d_i p_{ij}
$$
where $p_{ij}$ is the transition probability from $i$ to $j$. Hence we have $dP=d$ where $P$ is the transition matrix of the chain, and $d=(d_1,\cdots,d_j,\cdots,d_{64})$. Thus $\pi P=\pi \implies$ <strong>Claim</strong></p>
</blockquote>
<p>Therefore, it follows that after normalising we have</p>
<p>$$
\pi_j=d_j/\sum_i d_i
$$</p>
<p>Finally we recall the following theorem</p>
<blockquote>
<p><strong>Theorem</strong> If the chain is irreducible and positive recurrent, then </p>
<p>$$
m_i=1/\pi_i
$$
Where $m_i$ is the mean return time of state $i$, and $\pi$ is the unique stationary distribution.</p>
</blockquote>
<p>Thus if we call the corner vertex $1$, we have</p>
<p>$$
m_1=1/\pi_1
$$</p>
<p>You can check that $\sum_i d_i = 336$, and we have $d_1=2$ (at a corner the knight can make only $2$ legal moves). Therefore $\pi_1=2/336=1/168$ and</p>
<p>$$
m_1=168
$$</p>
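<p>For readers who want to check the degree count and the answer numerically, here is a short Python sketch of the computation above:</p>

<pre><code>from fractions import Fraction

MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(x, y):
    """Number of legal knight moves from square (x, y) on an 8x8 board."""
    return sum(0 <= x + dx < 8 and 0 <= y + dy < 8 for dx, dy in MOVES)

total = sum(degree(x, y) for x in range(8) for y in range(8))
pi_corner = Fraction(degree(0, 0), total)  # stationary mass of the corner
print(total, 1 / pi_corner)                # 336 and m_1 = 168
</code></pre>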
| <p>The first thing we do is find a stable distribution for the Markov process. We see that the process will be stable if the mass for each square of the chessboard is proportional to the number of knight moves leading away from it; then the process will move a mass of 1 along each possible knight move, so each square with n moves from it will have a mass of n moving in and a mass of n moving out, so everything balances.</p>
<p>Next, we want to find the total mass of the system. This is the total number of possible knight moves; there are 8 possible directions a knight can move, and each direction is available from a $6\times 7$ rectangle of starting squares, so there will be $8\cdot 6\cdot 7 = 336$ possible moves, and that is the total mass of the distribution.</p>
<p>Since a corner square has a mass of 2, that represents 2/336 = 1/168 of the mass of the distribution. Since we have a connected recurrent process, an infinite random walk from any square will be at that particular corner 1/168 of the time. That means the average time between visits to the corner will be 168.</p>
|
matrices | <p>Let $k$ be a field with characteristic different from $2$, and $A$ and $B$ be $2 \times 2$ matrices with entries in $k$. Then we can prove, with a bit of art, that $A^2 - 2AB + B^2 = O$ implies $AB = BA$, hence $(A - B)^2 = O$. It came as a surprise to me when I first succeeded in proving this, for this seemed quite nontrivial to me.</p>
<p>I am curious if there is a similar or more general result for the polynomial equations of matrices that ensures commutativity. (Of course, we do not consider trivial cases such as the polynomial $p(X, Y) = XY - YX$ corresponding to commutator)</p>
<p>p.s. This question is purely out of curiosity. I do not even know whether this kind of problem is worth considering, so you may regard this question as a recreational one.</p>
| <p>Your question is very interesting; unfortunately, this is not a complete answer, and in fact not an answer at all, or rather a negative answer.</p>
<p>You might think, as a generalization of <span class="math-container">$A^2+B^2=2AB$</span>, of the following matrix equation in <span class="math-container">$\mathcal M_n\Bbb C$</span>:
<span class="math-container">$$ (E) :\ \sum_{l=0}^k (-1)^l \binom kl A^{k-l}B^l = 0. $$</span></p>
<p>This equation implies commutativity if and only if <span class="math-container">$n=1$</span> or <span class="math-container">$(n,k)=(2,2)$</span>, which is the case you studied. However, the equation (E) has a remarkable property: <strong>if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> satisfy (E) then their characteristic polynomials are equal</strong>. Isn't it amazing? You can have a look at <a href="https://hal.inria.fr/hal-00780438/document" rel="nofollow noreferrer">this paper for a proof</a>.</p>
| <p>I'm neither an expert on this field nor on the unrelated field of the facts I'm about to cite, so this is more a shot in the dark. But: Given a set of matrices, the problem whether there is some combination in which to multiply them resulting in zero is undecidable, even for relatively small cases (such as two matrices of sufficient size or a low fixed number of $3\times3$ matrices).</p>
<p>A solution to one side of this problem (the "is there a polynomial such that..." side) <em>looks</em> harder (though I have no idea beyond intuition whether it really is!) than the mortality problem mentioned above. If that is actually true, then it would at least suggest that $AB = BA$ does not guarantee the existence of a solution (though it might still happen).</p>
<p>In any case, the fact that the mortality problem is decidable for $2 \times 2$ matrices at least shows that the complexity of such problems increases rapidly with dimension, which could explain why your result for $2$ does not easily extend to higher dimensions.</p>
<p>Apologies for the vagueness of all this, I just figured it might give someone with more experience in the field a different direction to think about the problem. If someone does want to look that way, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.5792&rep=rep1&type=pdf" rel="nofollow">this paper</a> has the mentioned results as well as references to related literature.</p>
|
logic | <p>Provided we have this truth table where "$p\implies q$" means "if $p$ then $q$":</p>
<p>$$\begin{array}{|c|c|c|}
\hline
p&q&p\implies q\\ \hline
T&T&T\\
T&F&F\\
F&T&T\\
F&F&T\\\hline
\end{array}$$</p>
<p>My understanding is that "$p\implies q$" means "when there is $p$, there is q". The second row in the truth table where $p$ is true and $q$ is false would then contradict "$p\implies q$" because there is no $q$ when $p$ is present.</p>
<p>Why then, does the third row of the truth table not contradict "$p\implies q$"? If $q$ is true when $p$ is false, then $p$ is not a condition of $q$.</p>
<p>I have not taken any logic class so please explain it in layman's terms.</p>
<hr>
<blockquote>
<p><strong>Administrative note.</strong> You may experience being directed here even though your question was actually about line 4 of the truth table instead. In that case, see the companion question <a href="https://math.stackexchange.com/questions/48161/in-classical-logic-why-is-p-rightarrow-q-true-if-both-p-and-q-are-false">In classical logic, why is $(p\Rightarrow q)$ True if both $p$ and $q$ are False?</a> And even if your original worry was about line 3, it might be useful to skim the other question anyway; many of the answers to either question attempt to explain <em>both</em> lines.</p>
</blockquote>
| <p>If you don't put any money into the soda-pop machine, and it gives you a bottle of soda anyway, do you have grounds for complaint? Has it violated the principle, "if you put money in, then a soda comes out"? I wouldn't think you have grounds for complaint. If the machine gives a soda to every passerby, then it is still obeying the principle that if one puts money in, one gets a soda out. </p>
<p>Similarly, the only grounds for complaint against $p\to q$ is the situation where $p$ is true, but $q$ is false. This is why the only F entry in the truth table occurs in this row. </p>
<p>If you imagine putting an F on the row to which you refer, the truth table becomes the same as what you would expect for $p\iff q$, but we don't expect that "if p, then q" has the same meaning as "p if and only if q". </p>
| <p>$p\Rightarrow q$ is an assertion that says something about situations where $p$ is true, namely that if we find ourselves in a world where $p$ is true, then $q$ will be true (or otherwise $p\Rightarrow q$ lied to us).</p>
<p>However, if we find ourselves in a world where $p$ is <em>false</em>, then it turns out that $p\Rightarrow q$ did not actually promise us anything. Therefore it can't possibly have lied to us -- you could complain about it being <em>irrelevant</em> in that situation, but that doesn't make it <em>false</em>. It has delivered everything it promised, because it turned out that it actually promised nothing.</p>
<p>As an everyday example, it is true that "If John jumps into a lake, then John will get wet". The truth of this is not affected by the fact that there are other ways to get wet. If, on investigating, we discover that John didn't jump into the lake, but merely stood in the rain and now is wet, that doesn't mean that it is no longer true that people who jump into lakes get wet.</p>
<p><strong>However</strong>, one should note that these arguments are ultimately not the reason why $\Rightarrow$ has the truth table it has. The real reason is because that truth table is the <em>definition</em> of $\Rightarrow$. Expressing $p\Rightarrow q$ as "If $p$, then $q$" is not a definition of $\Rightarrow$, but an explanation of how the words "if" and "then" are used by mathematicians, given that one already knows how $\Rightarrow$ works. The intuitive explanations are supposed to convince you (or not) that it is reasonable to use those two English words to speak about logical implication, not that logical implication ought to work that way in the first place.</p>
|
logic | <p>Okay, now, I really want to solve this on my own, and I believe I have the basic idea, I'm just not sure how to put it as an answer on the homework. The problem in full:</p>
<blockquote>
<p>"Prove that at a party with at least two people, that there are two
people who know the same number of people there (not necessarily the
same people - just the same number) given that every person at the
party knows at least one person. Also, note that nobody can be his or
her own friend. You can solve this with a tricky use of the
Pigeonhole Principle."</p>
</blockquote>
<p>First of all, I'm treating the concept of "knowing" as: A can know B, but B doesn't necessarily know A. E.g., if Tom Cruise walks into a party, I "know" him, but he doesn't know me.</p>
<p>So what I did first was prove it to myself using examples of a party with 2 people, 3 people, 4 people, and so on. Indeed, under any condition, there is always at least a pair of people who know the same number of people.</p>
<p>So if we define $n$ as the number of party-goers, then we can see that this is true under any circumstance if we assume that the first person knows the maximum number of people possible, which is $(n-1)$ (as a person can't be friends with himself). Then since we're not interested in a case where the second person knows the same number of people (otherwise there's nothing to prove), we want the second person to know one less than the first, or $(n-2)$, and so on.</p>
<p>Eventually we reach a contradiction where the last person knows $(n-n)$, or 0. Since 0 is not a possible value as defined by the problem, that last person <em>must</em> know some number of people from 1 to $(n-1)$, which matches the number of people that at least one other person knows.</p>
<p>Now...I hope that this is the "right idea." <strong>But how can I turn this "general understanding" into an answer for a problem that begins with the word "prove?"</strong></p>
<p>Let me note that we only very briefly touched on the concepts of induction and the pigeonhole principle, and did not go into any examples of how to formally "prove" anything with the pigeonhole principle. We did touch on proving the sum of numbers by induction, but that's all as far as induction goes.</p>
<p>Also: <a href="https://math.stackexchange.com/questions/177432">Combinatorics question: Prove 2 people at a party know the same amount of people</a> does not really work for me, because </p>
<p>A) we've not talked about "combinatorics", and </p>
<p>B) that question allows for someone to know 0 people.</p>
| <p>Let $n$ be the number of party-goers. The maximum number of people a person can know is $n-1$ and the minimum number he/she can know is 1 (by assumption), giving us $n-1$ possibilities for the number of people someone can know. Every single person must be assigned one of these $n-1$ possible numbers, but since there are $n$ party-goers, one of these numbers must be used twice by the pigeonhole principle, i.e., two party-goers know the same number of people.</p>
| <p>There are two cases to consider:</p>
<ol>
<li>Assume there is someone at the party, let's say Joe, who knows everyone else at the party. He must know $n-1$ people. In this case, everybody else at the party must at least know Joe, and the minimum number of people a person can know is $1$. This gives us the set $\{1, 2, \ldots, n - 1\}$ which represents the possible numbers of people each person can know.</li>
<li>Assume there is a party crasher, Harry, who doesn't actually know anybody. This means that even a socialite like Joe can't possibly know everyone, so the maximum number of people a person can know is $n - 2$. This gives us the set $\{0, 1, ..., n - 2\}$.</li>
</ol>
<p>Both of these sets have $n-1$ elements. Since there are $n$ people at the party, and $n-1$ possibilities for the number of people each person can know, it follows that there must be at least two people who know the same number of people.</p>
|
linear-algebra | <blockquote>
<p>How can I prove <span class="math-container">$\operatorname{rank}A^TA=\operatorname{rank}A$</span> for any <span class="math-container">$A\in M_{m \times n}$</span>?</p>
</blockquote>
<p>This is an exercise in my textbook associated with orthogonal projections and Gram-Schmidt process, but I am unsure how they are relevant.</p>
| <p>Let $\mathbf{x} \in N(A)$ where $N(A)$ is the null space of $A$. </p>
<p>So, $$\begin{align} A\mathbf{x} &=\mathbf{0} \\\implies A^TA\mathbf{x} &=\mathbf{0} \\\implies \mathbf{x} &\in N(A^TA) \end{align}$$ Hence $N(A) \subseteq N(A^TA)$.</p>
<p>Again let $\mathbf{x} \in N(A^TA)$</p>
<p>So, $$\begin{align} A^TA\mathbf{x} &=\mathbf{0} \\\implies \mathbf{x}^TA^TA\mathbf{x} &=\mathbf{0} \\\implies (A\mathbf{x})^T(A\mathbf{x})&=\mathbf{0} \\\implies A\mathbf{x}&=\mathbf{0}\\\implies \mathbf{x} &\in N(A) \end{align}$$ Hence $N(A^TA) \subseteq N(A)$.</p>
<p>Therefore $$\begin{align} N(A^TA) &= N(A)\\ \implies \dim(N(A^TA)) &= \dim(N(A))\\ \implies \text{rank}(A^TA) &= \text{rank}(A)\end{align}$$</p>
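<p>As a quick numerical sanity check (for real matrices, which is the setting the argument above uses, since it relies on $(A\mathbf{x})^T(A\mathbf{x})=0\implies A\mathbf{x}=\mathbf{0}$), here is a NumPy sketch; the sizes and ranks are arbitrary test values:</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    m, n = 7, 5
    r = int(rng.integers(1, 6))  # target rank, 1 <= r <= min(m, n)
    # build an m x n real matrix of known rank r from full-rank factors
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    assert np.linalg.matrix_rank(A.T @ A) == np.linalg.matrix_rank(A) == r
print("rank(A^T A) == rank(A) in all trials")
</code></pre>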
| <p>Let $r$ be the rank of $A \in \mathbb{R}^{m \times n}$. We then have the SVD of $A$ as
$$A_{m \times n} = U_{m \times r} \Sigma_{r \times r} V^T_{r \times n}$$
This gives $A^TA$ as $$A^TA = V_{n \times r} \Sigma_{r \times r}^2 V^T_{r \times n}$$ which is nothing but the SVD of $A^TA$. From this it is clear that $A^TA$ also has rank $r$. In fact, the singular values of $A^TA$ are nothing but the squares of the singular values of $A$.</p>
|
probability | <p>Not a math student, so forgive me if the question seems trivial or if I pose it "wrong". Here goes...</p>
<p>Say I'm flipping a coin a <em>n</em> times. I am not sure if it's a "fair" coin, meaning I am not sure if it will come up heads and tails each with a propability of exactly 0.5. Now, if after <em>n</em> throws it has come up heads exactly as many times as it has come up tails, then obviously there's nothing to indicate that the coin is not fair. But my intuition tells me that it would be improbable even for a completely fair coin to come up with heads and tails an exact even number of times given a large amount of tosses. My question is this: How "off" should the result be for it to be probable that the coin is not fair? IOW, how many more tosses should come up heads rather than tails in a series of <em>n</em> throws before I should assume the coin is weighted?</p>
<p><strong>Update</strong></p>
<p>Someone mentioned Pearson's chi-square test but then for some reason deleted their answer. Can someone confirm if that is indeed the right place to look for the answer?</p>
| <p>Given your prefatory comment, I'm going to avoid talking about the normal curve and the associated variables and use as much straight probability as possible.</p>
<p>Let's do a side problem first. If on a A-D multiple choice test you guess randomly, what's the probability you get 8 out of 10 questions right?</p>
<p>Each problem you have a 25% (.25) chance of getting right and a 75% (.75) chance of getting wrong.</p>
<p>You want to first choose which eight problems you get right. That can be done in <a href="http://www.wolframalpha.com/input/?i=10+choose+8">10 choose 8</a> ways.</p>
<p>You want .25 to happen eight times [$(.25)^8$] and .75 to happen twice [$(.75)^2$]. This needs to be multiplied by the possible number of ways to arrange the eight correct problems, hence your odds of getting 8 out of 10 right is</p>
<p>${10 \choose{8}}(.25)^8(.75)^2$</p>
<p>Ok, so let's say you throw a coin 3000 times. What's the probability that it comes up heads exactly 300 times? By the same logic as the above problem, that would be</p>
<p>${3000 \choose{300}}(.5)^{300}(.5)^{2700}$</p>
<p>or a rather unlikely 6.92379... x 10^-482.</p>
<p>Given throwing the coin n times, the probability it comes up heads x times is</p>
<p>${n \choose{x}}(.5)^n$</p>
<p>or if you want to ask the probability it comes up heads x times or less</p>
<p>$\sum_{i=0}^{x}{{n \choose{i}}(.5)^n}$</p>
<p>so all you have to do is decide now how unlikely are you willing to accept?</p>
<p>(This was a <a href="http://en.wikipedia.org/wiki/Binomial_probability">Binomial Probability</a> if you want to read more and all the fancier methods involving an integral under the normal curve and whatnot start with this concept.)</p>
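<p>Here is a short Python sketch of these formulas. Exact integer arithmetic is used where needed, since $(.5)^{3000}$ underflows ordinary floating point:</p>

<pre><code>from math import comb, log10

def prob_at_most(n, x):
    """Exact P(X <= x) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, i) for i in range(x + 1)) / 2**n

# log10 of the chance of exactly 300 heads in 3000 tosses: about -481.16,
# matching the 6.92... x 10^-482 quoted above
print(log10(comb(3000, 300)) - 3000 * log10(2))

# largest head-count in 100 tosses whose lower tail is still below 5%
print(max(x for x in range(50) if prob_at_most(100, x) < 0.05))  # 41
</code></pre>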
| <p>I am surprised that no one has mentioned <a href="http://en.wikipedia.org/wiki/Statistical_hypothesis_testing">Hypothesis Testing</a> so far. Hypothesis testing lets you decide, with a certain level of significance, whether you have sufficient evidence to reject the underlying (Null) hypothesis, or whether you do not have sufficient evidence against the Null Hypothesis and hence accept it.</p>
<p>I am explaining the Hypothesis testing below assuming that you want to determine if a coin comes up heads more often than tails. If you want to determine, if the coin is biased or unbiased, the same procedure holds good. Just that you need to do a two-sided hypothesis testing as opposed to one-sided hypothesis testing.</p>
<p>In this question, your Null hypothesis is $p \leq 0.5$ while your Alternate hypothesis is $p > 0.5$, where $p$ is the probability that the coin shows up heads. Say you want to perform your hypothesis test at the $10\%$ level of significance. You then proceed as follows:</p>
<p>Let $n_H$ be the number of heads observed out of a total of $n$ tosses of the coin.</p>
<p>Take $p=0.5$ (the extreme case of the Null Hypothesis). Let $x \sim B(n,0.5)$.</p>
<p>Compute $n_H^c$ as follows.</p>
<p>$$P(x \geq n_H^c) = 0.1$$</p>
<p>$n_H^c$ gives you the critical value beyond which you have sufficient evidence to reject the Null Hypothesis at $10\%$ level of significance.</p>
<p>i.e. if you find $n_H \geq n_H^c$, then you have sufficient evidence to reject the Null Hypothesis at $10\%$ level of significance and conclude that the coin comes up heads more often than tails.</p>
<p>If you want to determine if the coin is unbiased, you need to do a two-sided hypothesis testing as follows.</p>
<p>Your Null hypothesis is $p = 0.5$ while your Alternate hypothesis is $p \neq 0.5$, where $p$ is the probability that the coin shows up heads. Say you want to perform your hypothesis test at the $10\%$ level of significance. You then proceed as follows:</p>
<p>Let $n_H$ be the number of heads observed out of a total of $n$ tosses of the coin.</p>
<p>Let $x \sim B(n,0.5)$.</p>
<p>Compute $n_H^{c_1}$ and $n_H^{c_2}$ as follows.</p>
<p>$$P(x \leq n_H^{c_1}) + P(x \geq n_H^{c_2}) = 0.1$$</p>
<p>($n_H^{c_1}$ and $n_H^{c_2}$ are symmetric about $\frac{n}{2}$ i.e. $n_H^{c_1}$+$n_H^{c_2} = n$)</p>
<p>$n_H^{c_1}$ gives you the left critical value and $n_H^{c_2}$ gives you the right critical value.</p>
<p>If you find $n_H \in (n_H^{c_1},n_H^{c_2})$, then you do not have sufficient evidence against the Null Hypothesis, and hence you accept the Null Hypothesis at the $10\%$ level of significance. That is, you accept that the coin is fair at the $10\%$ level of significance.</p>
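<p>As a sketch of how the critical values can be computed in practice (Python; because the binomial is discrete, we take the largest symmetric rejection region whose total probability does not exceed the significance level, rather than hitting $0.1$ exactly):</p>

<pre><code>from math import comb

def binom_cdf(n, x):
    """Exact P(X <= x) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, i) for i in range(x + 1)) / 2**n

def critical_values(n, alpha=0.10):
    # by symmetry P(X >= n - c) = P(X <= c), so the two tails sum to 2*cdf
    c1 = max(x for x in range(n // 2) if 2 * binom_cdf(n, x) <= alpha)
    return c1, n - c1

print(critical_values(100))  # (41, 59): accept fairness iff 41 < n_H < 59
</code></pre>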
|
game-theory | <p>Everyone knows rock, paper, scissors. Now a long time ago, when I was a child, someone claimed to me that there was not only those three, but also as fourth option the well. The well wins against rock and scissors (because both fall into it) but loses against paper (because the paper covers it).</p>
<p>Now I wonder: What would be the ideal playing strategy for rock, paper, scissors, well?</p>
<p>It's obvious that now the different options are no longer on equal footing. The well wins against two of the three other options, and also the paper now wins over two options, namely rock and well. On the other hand, rock and scissors only win on one of their three possible opponents.</p>
<p>Moreover, the scissors seem to have an advantage to the rock, as it wins against a "strong" symbol, namely paper, while the rock only wins against the "weak" symbol scissors.</p>
<p>Only playing "strong" symbols is obviously not a good idea because of those two, the paper always wins, so if both players only played strong symbols, the clear winning strategy would be to play paper each time; however if you play paper each time, you're predictable, and your opponent can beat you by selecting scissors.</p>
<p>So what if you play only well, paper and scissors, but all with the same probability? If your opponent knows or guesses it, it's obviously undesirable to choose rock, because in two of three cases he'd lose, while with any other symbol, he'd lose only in one of three cases. But if nobody plays rock, we are effectively at the original three-symbol game, except that the rock is now replaced by the well.</p>
<p>Therefore my hypothesis is: The ideal strategy for this game is to never play rock, and play each other symbol with equal probability.</p>
<p>Is my hypothesis right? If not, what is the ideal strategy?</p>
| <p>You are quite right. Rock is dominated by Well: no matter what the opponent plays, you do at least as well playing Well as you would playing Rock, and against Rock or Well you do better. Good players will never play Rock, so the game reduces to Well, Paper, Scissors, which is isomorphic to Rock, Paper, Scissors.</p>
| <p>Mixing evenly between paper, scissors, and well is indeed an equilibrium. </p>
<p>Starting with Vadim's condition:</p>
<p>$$ p-s+(1-r-p-s)=\\-r+s-(1-r-p-s)=\\r-p+(1-r-p-s)=\\-r+p-s$$</p>
<p>If Rock receives no weight, we have:</p>
<p>$$s-(1-p-s)=\\-p+(1-p-s)=\\p-s$$</p>
<p>Which gives $p=s=(1-p-s)=\frac{1}{3}$</p>
<p>Further, against this mixture Rock earns strictly less than each of the other three strategies. Thus, any mixture of the three is indeed a best response to the equal mixture, and so the equal mixture is a Nash equilibrium. </p>
<p>To see there is no other equilibrium, we can use the fact that in a symmetric zero-sum game, any strategy optimal for one player is optimal for the other. Note that when rock receives weight in the opponent's strategy, rock is strictly dominated by well. Thus, rock cannot be part of an equilibrium, since that would imply rock is part of an optimal strategy against a strategy that puts positive weight on rock. </p>
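<p>A quick numerical check of this equilibrium (NumPy; the payoff matrix encodes the rules from the question, with $+1$ for a win, $-1$ for a loss and $0$ for a tie):</p>

<pre><code>import numpy as np

# rows/columns ordered Rock, Paper, Scissors, Well; entries are the row
# player's payoffs
A = np.array([
    [ 0, -1,  1, -1],  # Rock: loses to Paper and Well, beats Scissors
    [ 1,  0, -1,  1],  # Paper: beats Rock and Well, loses to Scissors
    [-1,  1,  0, -1],  # Scissors: beats Paper, loses to Rock and Well
    [ 1, -1,  1,  0],  # Well: beats Rock and Scissors, loses to Paper
])

sigma = np.array([0, 1/3, 1/3, 1/3])
print(A @ sigma)  # [-1/3, 0, 0, 0]: every supported pure strategy earns the
                  # value 0 and Rock earns strictly less, so there is no
                  # profitable deviation
</code></pre>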
|
probability | <p>Is it possible to revert the softmax function in order to obtain the original values <span class="math-container">$x_i$</span>?</p>
<p><span class="math-container">$$S_i=\frac{e^{x_i}}{\sum_j e^{x_j}} $$</span></p>
<p>In case of 3 input variables this problem boils down to finding <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> given <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$z$</span>:</p>
<p><span class="math-container">\begin{cases}
\frac{a}{a+b+c} &= x \\
\frac{b}{a+b+c} &= y \\
\frac{c}{a+b+c} &= z
\end{cases}</span></p>
<p>Is this problem solvable?</p>
| <p>Note that in your three equations you must have $x+y+z=1$.
The general solution to your three equations is $a=kx$, $b=ky$, and $c=kz$ where $k$ is any scalar.</p>
<p>So if you want to recover $x_i$ from $S_i$, you would note $\sum_i S_i = 1$ which gives the solution $x_i = \log (S_i) + c$ for all $i$, for some constant $c$.</p>
| <p>The softmax function is defined as:</p>
<p><span class="math-container">$$S_i = \frac{\exp(x_i)}{\sum_{j} \exp(x_j)}$$</span></p>
<p>Taking the natural logarithm of both sides:</p>
<p><span class="math-container">$$\ln(S_i) = x_i - \ln(\sum_{j} \exp(x_j))$$</span></p>
<p>Rearranging the equation:</p>
<p><span class="math-container">$$x_i = \ln(S_i) + \ln(\sum_{j} \exp(x_j))$$</span></p>
<p>The second term on the right-hand side is a constant over all <span class="math-container">$i$</span> and can be written as <span class="math-container">$C$</span>. Therefore, we can write:</p>
<p><span class="math-container">$$x_i = \ln(S_i) + C$$</span></p>
<p>This answer is adapted from <a href="https://www.reddit.com/r/MachineLearning/comments/3uqgzj/question_regarding_inversion_of_softmax_function/" rel="nofollow noreferrer">this</a> post on Reddit.</p>
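<p>A small NumPy sketch of this inversion up to a constant:</p>

<pre><code>import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

x = np.array([0.5, 1.7, -2.0])
S = softmax(x)
print(np.log(S) - x)  # all entries equal: log(S) recovers x up to a constant
</code></pre>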
|
differentiation | <p>I was wondering on the following and I probably know the answer already: <strong>NO</strong>.</p>
<p>Is there another number with similar properties to <span class="math-container">$e$</span>, so that the derivative of <span class="math-container">$a^x$</span> is the same as the function itself?</p>
<p>I can guess that it's probably not, because otherwise <span class="math-container">$e$</span> wouldn't be that special, but is there any proof of it?</p>
| <p>Of course $C e^x$ has the same property for any $C$ (including $C = 0$). But these are the only ones.</p>
<p><strong>Proposition:</strong> Let $f : \mathbb{R} \to \mathbb{R}$ be a differentiable function such that $f(0) = 1$ and $f'(x) = f(x)$. Then it must be the case that $f = e^x$.</p>
<p><em>Proof.</em> Let $g(x) = f(x) e^{-x}$. Then </p>
<p>$$g'(x) = -f(x) e^{-x} + f'(x) e^{-x} = (f'(x) - f(x)) e^{-x} = 0$$</p>
<p>by assumption, so $g$ is constant. But $g(0) = 1$, so $g(x) = 1$ identically. </p>
<p><strong>N.B.</strong> Note that it is also true that $e^{x+c}$ has the same property for any $c$. Thus there exists a function $g(c)$ such that $e^{x+c} = g(c) e^x = e^c g(x)$, and setting $c = 0$, then $x = 0$, we conclude that $g(c) = e^c$, hence $e^{x+c} = e^x e^c$. </p>
<p>This observation generalizes to any differential equation with translation symmetry. Apply it to the differential equation $f''(x) + f(x) = 0$ and you get the angle addition formulas for sine and cosine. </p>
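<p>For what it's worth, a computer algebra system reaches the same conclusion; a one-line SymPy check:</p>

<pre><code>import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
# solve the ODE f'(x) = f(x); the general solution is C1 * exp(x)
print(sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x)))  # Eq(f(x), C1*exp(x))
</code></pre>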
| <p>Let $f(x)$ be a differentiable function such that $f'(x)=f(x)$. This implies that the $k$-th derivative, $f^{(k)}(x)$, is also equal to $f(x)$. In particular, $f(x)$ is $C^\infty$ and we can write a Taylor expansion for $f$:</p>
<p>$$T_f(x) = \sum_{k=0}^\infty c_k x^k.$$</p>
<p>Notice that the fact that $f(x)=f^{(k)}(x)$, for all $k\geq 0$, implies that the Taylor series $T_f(x_0)$ converges to $f(x_0)$ for every $x_0\in \mathbb{R}$ (more on this later), so we may write $f(x)=T_f(x)$. Since $f'(x) = \sum_{k=0}^\infty (k+1)c_{k+1}x^k = f(x)$, we conclude that $c_{k+1} = c_k/(k+1)$. The value of $c_0$ is $f(0)$, and therefore $c_k = f(0)/k!$ for all $k\geq 0$. Hence:</p>
<p>$$f(x) = f(0) \sum_{k=0}^\infty \frac{x^k}{k!} = f(0) e^x,$$</p>
<p>as desired.</p>
<p><strong>Addendum: About the convergence of the Taylor series</strong>. Let us use Taylor's remainder theorem to show that the Taylor series for $f(x)$ centered at $x=0$, denoted by $T_f(x)$, converges to $f(x)$ for all $x\in\mathbb{R}$. Let $T_{f,n}(x)$ be the $n$th Taylor polynomial for $f(x)$, also centered at $x=0$. By Taylor's theorem, we know that
$$|R_n(x_0)|\leq |f^{(n+1)}(\xi)|\frac{ |x_0 - 0|^{n+1}}{(n+1)!},$$
where $R_n(x_0)=f(x_0) - T_{f,n}(x_0)$ and $\xi$ is a number between $0$ and $x_0$. Let $M=M(x_0)$ be the maximum value of $|f(x)|$ in the interval $I=[-|x_0|,|x_0|]$, which exists because $f$ is differentiable (therefore, continuous) in $I$. Since $f(x)=f^{(n+1)}(x)$, for all $n\geq 0$, we have:
$$|R_n(x_0)|\leq |f^{(n+1)}(\xi)|\frac{ |x_0|^{n+1}}{(n+1)!}\leq |f(\xi)|\frac{ |x_0|^{n+1}}{(n+1)!}\leq M \frac{|x_0|^{n+1}}{(n+1)!} \longrightarrow 0 \ \text{ as } \ n\to \infty.$$
The limit goes to $0$ because $M$ is a constant (once $x_0$ is fixed) and $A^n/n! \to 0$ for all $A\geq 0$. Therefore, $T_{f,n}(x_0) \to f(x_0)$ as $n\to \infty$ and, by definition, this means that $T_f(x_0)$ converges to $f(x_0)$. </p>
|
number-theory | <p>If $$n=p_1^{a_1}\cdots p_k^{a_k},$$ then define</p>
<p>$$f(n):=p_1^2+\cdots+p_k^2$$</p>
<p>So, $f(n)$ is the sum of the squares of the prime divisors of $n$.</p>
<p>For which natural numbers $n\ge 2$ do we have $f(n)=n$ ?</p>
<p>It is clear that $f(n)=n$ is true for the square of any prime, but false for
the other prime powers.</p>
<p>If $p$ and $q$ are the only prime divisors of $n$, we would get $p^2+q^2\equiv
0\pmod p$, which implies $p=q$, so for numbers with exact two prime
divisors, $f(n)=n$ cannot hold.</p>
<p>If $p,q,r$ are primes with $p<q<r$, then we have two possibilities.</p>
<p>If $p,q,r\ne 3$, we have $p^2+q^2+r^2\equiv 0\pmod3$, so $f(n)=n$
cannot hold. If $p=3$ or $q=3$, then $p^2+q^2+r^2 \equiv 2\pmod3$,
so $p^2+q^2+r^2$ is not divisible by $3$, so $f(n)=n$ cannot hold.</p>
<p>Finally, if $p<q<r<s$, then if $p>2$, then $p^2+q^2+r^2+s^2\equiv 0\pmod4$, so $f(n)=n$ cannot hold. And if $p=2$, then $p^2+q^2+r^2+s^2\equiv 3\pmod4$, so $p^2+q^2+r^2+s^2$ is odd and $f(n)=n$ again cannot hold.</p>
<p>So, apart from the squares of the primes, the number must have at least $5$
prime factors. I searched to about $6\times 10^7$ and did not find a "non-trivial" example.</p>
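<p>For concreteness, here is a small Python sketch of such a search (using SymPy, and a much smaller bound than $6\times 10^7$):</p>

<pre><code>from sympy import primefactors

def f(n):
    """Sum of the squares of the distinct prime divisors of n."""
    return sum(p * p for p in primefactors(n))

# only squares of primes show up: 4, 9, 25, 49, 121, ...
print([n for n in range(2, 10**5) if f(n) == n])
</code></pre>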
<ul>
<li>Is there a number $n$ with at least two prime factors and $f(n)=n$ ?</li>
</ul>
| <p>If <span class="math-container">$f(n)=n$</span> then <span class="math-container">$p_1^{a_1} \cdot ... \cdot p_k^{a_k}=p_1^2+...+p_k^2$</span>.</p>
<p>From this it follows that <span class="math-container">$p_1|p_2^2+...+p_k^2$</span> and that <span class="math-container">$p_k|p_1^2+...+p_{k-1}^2$</span>, that is, it is true that <span class="math-container">$p_2^2+...+p_k^2=ap_1$</span> and <span class="math-container">$p_1^2+...+p_{k-1}^2=bp_k$</span> for naturals <span class="math-container">$a,b$</span>.</p>
<p>Subtracting these two equalities, we obtain <span class="math-container">$p_1^2-p_k^2=bp_k-ap_1$</span>, which is equivalent to <span class="math-container">$p_1(p_1+a)=p_k(p_k+b)$</span>.</p>
<p>If <span class="math-container">$p_1$</span> and <span class="math-container">$p_k$</span> are two different primes it follows that <span class="math-container">$p_1 |(p_k+b)$</span> and that <span class="math-container">$p_k|(p_1+a)$</span>, so, there are integers <span class="math-container">$c$</span> and <span class="math-container">$d$</span> such that <span class="math-container">$p_k+b=cp_1$</span> and <span class="math-container">$p_1+a=dp_k$</span> and this implies <span class="math-container">$p_1dp_k=p_kcp_1$</span>, that is <span class="math-container">$c=d$</span> and <span class="math-container">$c>1$</span>.</p>
<p>Adding the equalities <span class="math-container">$p_k+b=cp_1$</span> and <span class="math-container">$p_1+a=cp_k$</span>, we obtain <span class="math-container">$a+b=(p_1+p_k)(c-1)$</span>.</p>
<p>Now, from <span class="math-container">$(c-1)(bp_k-ap_1)=(p_1-p_k)(a+b)$</span> it follows <span class="math-container">$c=\dfrac{bp_1-ap_k}{bp_k-ap_1}$</span> and because of <span class="math-container">$bp_1<bp_k$</span> and <span class="math-container">$-ap_k<-ap_1$</span> it follows <span class="math-container">$c=\dfrac{bp_1-ap_k}{bp_k-ap_1}<\dfrac{bp_k-ap_1}{bp_k-ap_1}=1$</span>, but this is not possible since <span class="math-container">$c>1$</span> so the assumption that <span class="math-container">$p_1$</span> and <span class="math-container">$p_k$</span> are different primes is false!</p>
<p>That means that necessarily <span class="math-container">$k=1$</span> and this settles the question. </p>
| <p><strong>Sorry</strong>, <strong>this isn't a full answer</strong>, but I believe it contains substantial statements that someone could build an improvement on.</p>
<p>Let $d|n$; thus $n=d\cdot n/d$, and by symmetry $\prod_{d|n}d=\prod_{d|n}n/d$. Multiplying the first identity over all $d|n$ gives $$\left( \prod_{d|n}d\right)^{2}=n^{\sigma_{0}(n)}$$ (this is Exercise 10, page 47 from Apostol, Introduction to Analytic Number Theory), where $\sigma_{0}(n)$ is the number-of-divisors function. My attempt is to extract arithmetic information from this and the Euler-Fermat Theorem. The following cases are disjoint and exhaustive for a collection of primes.</p>
<p><strong>Case 1.</strong> We assume without loss of generality that the first prime is $2$; we obtain ($n>1$) $$\left( p_{1}^{2}+p_{2}^{2}+\cdots +p_{\omega (n)}^{2}\right)^{\sigma_{0}(n)}=n^{\sigma_{0}(n)}=\prod_{d|n}d^{2},$$ thus $(0+\omega (n)-1)^{\sigma_{0}(n)}\equiv 0\mod 4$, where $\omega (n)$ is the number of distinct primes, since if $m$ is odd, $m^{2}\equiv 1\mod 4$. These computations remove the cases where $\omega(n)$ equals $4\lambda$ or $4\lambda +2$; there are infinitely many subcases. </p>
<p><strong>Case 2.</strong> All primes are odd; with the same idea we obtain $\omega (n)^{\sigma_{0}(n)}\equiv 1\mod 4$ and discard the same subcases (caution: this removes subcases in this case).</p>
<p>Perhaps by bounding, or by using other identities, someone can sweep out more cases. I understand that this isn't a full answer, so I accept the response of the community, yours or the moderators'.</p>
|
geometry | <p>We <a href="http://www.maa.org/programs/maa-awards/writing-awards/the-circle-square-problem-decomposed" rel="noreferrer">know</a> that there is no paper-and-scissors solution to <a href="http://en.wikipedia.org/wiki/Tarski%27s_circle-squaring_problem" rel="noreferrer">Tarski's circle-squaring problem</a> (my six-year-old daughter told me this while eating lunch one day) but what are the closest approximations, if we don't allow overlapping?</p>
<p>More precisely: For N pieces that together will fit inside a circle of unit area and a square of unit area without overlapping, what is the maximum area that can be covered?</p>
<p>N=1 seems obvious: (90.9454%)</p>
<p><img src="https://i.sstatic.net/CwI52.png" width="200" ></p>
<p>A possible winner for N=3: (95%)</p>
<p><img src="https://i.sstatic.net/m8rv5.png" width="200" ></p>
<p>It seems likely that with, say, N=10 we could get very close indeed but I've never seen any example, and I doubt that my N=3 example above is even the optimum. (<strong>Edit:</strong> It's not!) And I've no idea what the solution for N=2 would look like.</p>
<p><a href="http://www.mathteacherctk.com/blog/2011/10/curvy-dissections/" rel="noreferrer">This page</a> discusses some curved shapes that <em>can</em> be cut up into squares. There's a nice simple proof <a href="https://math.stackexchange.com/questions/111522/whats-wrong-with-this-solution-of-tarskis-circle-squaring-problem">here</a> that there's no paper-and-scissors solution for the circle and the square.</p>
| <p>Not really an answer but there are some fantastic dissections on <a href="http://mathworld.wolfram.com/Dissection.html" rel="noreferrer">this page</a>, including these two:</p>
<p>Dissecting an octagon into a square with five pieces:</p>
<p><img src="https://i.sstatic.net/wv24h.png" alt="enter image description here"></p>
<p>Dissecting a dodecagon into a square with six pieces:</p>
<p><img src="https://i.sstatic.net/zLVdz.png" alt="enter image description here"></p>
<p>It looks likely that these could be made into pretty good approximations of a square-circle dissection for N=5 and N=6.</p>
<p><strong>Edit:</strong> Indeed, with N=6 we can get coverage of 97.18% like this:</p>
<p><img src="https://i.sstatic.net/GG53O.png" alt="Six pieces can cover 97.18% of a circle and a square of equal area"><br>
(an inscribed dodecagon would have an area of 95.49%)</p>
<p><strong>Later edit:</strong> It turns out that with N=6 we can do much better. 98.80%:</p>
<p><img src="https://i.sstatic.net/gaZzl.png" alt="Six pieces can cover 98.6% of a circle and a square of equal area"> </p>
<p>These solutions were found with a web app I've made:<br>
<a href="https://github.com/timhutton/circle-squaring" rel="noreferrer">https://github.com/timhutton/circle-squaring</a></p>
<p>Please give it a go, and submit the best solutions you find! The leaderboard on the right shows the current best known solutions for N=1 to N=10.</p>
| <p>Well, here's a specific infinite family of scissors congruences between large portions of the circle and large portions of the square. I make no claim that these are close to optimal.</p>
<p>We begin by inscribing a regular $n$-gon in the circle. We then cut the $n$-gon into $2n$ triangles, and rearrange these as follows:</p>
<p><img src="https://i.sstatic.net/11Mg5.png" alt="enter image description here"></p>
<p>The $2n$ triangles always fit into a rectangle whose width is half the circumference of the circle (namely $\sqrt{\pi}$), and whose height is the radius of the circle (namely $1/\sqrt{\pi}$). This rectangle can be cut into three pieces that can be rearranged to form a unit square:</p>
<p><img src="https://i.sstatic.net/5yytL.png" alt="enter image description here"></p>
<p>Composing these two scissors congruences gives the desired infinite family. Note that we may need to cut each of the $2n$ triangles into as many as $3$ pieces to compose with the second scissors congruence, so this uses at most $6n$ pieces. (Actually, it uses slightly fewer pieces than this, since the triangles on the left part of the rectangle won't need to be cut.)</p>
<p>The portion of the area used is the area of the $n$-gon:
$$
\text{Area} \;=\; \frac{n}{\pi} \cos(\pi/n) \sin(\pi/n) \;\approx\; 1 - \frac{2\pi^2}{3n^2}
$$
Thus the leftover area can be made to decrease quadratically with the number of pieces.</p>
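<p>A quick Python check of the covered fraction against this asymptotic estimate:</p>

<pre><code>from math import pi, sin

def covered_fraction(n):
    """Area of a regular n-gon inscribed in a circle of unit area."""
    return (n / (2 * pi)) * sin(2 * pi / n)

for n in (6, 12, 24, 96):
    print(n, round(covered_fraction(n), 6),
          round(1 - 2 * pi**2 / (3 * n**2), 6))  # exact vs. asymptotic
</code></pre>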
|
game-theory | <p>In the Prisoner's Dilemma example, we know that there is only one Nash Equilibrium.</p>
<p>That is both of them confess. </p>
<p>Is it possible that there are two Nash equilibriums in one example?</p>
<p>Can you roughly give me such an example? </p>
| <p>Yes! In a Nash equilibrium, no player gains by unilaterally deviating from the equilibrium point. For example, in the following two-player reward table there exist "many" equilibria:</p>
<p>$$\left[ \begin{array}{ccc}
1/1 & 0/0 & 0/0& 0/0& 0/0& 0/0 \\
0/0& 0/0 & 0/0& 0/0& 0/0& 0/0 \\
0/0 & 0/0 & 0/0& 0/0& 0/0& 0/0 \\
0/0 & 0/0& 0/0&1/1 & 0/0& 0/0 \\
0/0 & 0/0 & 0/0& 0/0& 0/0& 0/0 \\
0/0 & 0/0& 0/0& 0/0& 0/0 &1/1 \end{array} \right]$$</p>
| <p>Most games have an odd number of Nash equilibria. For example, in the coordination game below: $$ \begin{array}{|c|c|c|} \hline
P1\backslash P2 & PC & MAC \\ \hline
PC & 2,2 & 0,0 \\ \hline
MAC & 0,0 & 3,3 \\ \hline
\end{array} $$ You have 3 Nash equilibria: (PC,PC), (MAC,MAC) and also one in mixed strategies where each player chooses PC with probability 3/5 and MAC with prob. 2/5.</p>
|
linear-algebra | <p>I'm having a difficult time understanding this statement. Can someone please explain with a concrete example? </p>
| <p>The reason why this can happen is that all vector spaces, and hence subspaces too, must be closed under addition (and scalar multiplication). The union of two subspaces takes all the elements already in those spaces, and nothing more. In the union of subspaces $W_1$ and $W_2$, there are new combinations of vectors we can add together that we couldn't before, like $v_1 + w_2$ where $v_1 \in W_1$ and $w_2 \in W_2$. </p>
<p>For example, take $W_1$ to be the $x$-axis and $W_2$ the $y$-axis, both subspaces of $\mathbb{R}^2$.<br>
Their union includes both $(3,0)$ and $(0,5)$, whose sum, $(3,5)$, is not in the union. Hence, the union is not a vector space.</p>
| <p>The union of two subspaces is a subspace if and only if one of the subspaces is contained in the other. </p>
<p>The "if" part should be clear: if one of the subspaces is contained in the other, then their union is just the one doing the containing, so it's a subspace. </p>
<p>Now suppose neither subspace is contained in the other. Then there are vectors $x$ and $y$ such that $x$ is in the first subspace but not the second, and $y$ is in the second subspace but not the first. Then I claim that $x+y$ can't be in either subspace; hence it can't be in their union; hence the union is not closed under addition, so it's not a subspace. </p>
<p>So, let's prove the claim. If $x+y$ is in the first subspace, well, so is $x$, so $-x$ is also there, so $(x+y)+(-x)$ is there, but that's just $y$, which we know is not there. We've reached a contradiction on the assumption that $x+y$ was in the first subspace, so it can't be. Very similar reasoning shows it can't be in the second subspace, either, and we're done. </p>
|
combinatorics | <p>I understand how combinations and permutations work (without replacement). I also see why a permutation of $n$ elements ordered $k$ at a time (with replacement) is equal to $n^{k}$. Through some browsing I've found that the number of combinations with replacement of $n$ items taken $k$ at a time can be expressed as $(\binom{n}{k})$ [this "double" set of parentheses is the notation developed by Richard Stanley to convey the idea of combinations with replacement]. </p>
<p>Alternatively, $(\binom{n}{k})$ = $\binom{n+k-1}{k}$. This is more familiar notation. Unfortunately, I have not found a clear explanation as to why the above formula applies to the combinations with replacement. Could anyone be so kind to explain how this formula was developed?</p>
| <p>A useful way to think about it:</p>
<p>Imagine you have <span class="math-container">$n$</span> different cells from left to right. A combination <strong>without</strong> replacement of <span class="math-container">$k$</span> objects from <span class="math-container">$n$</span> objects would be equivalent to the number of ways in which these <span class="math-container">$k$</span> objects can be distributed among the cells with at most one object per cell. The key here is that, since there is no replacement, each cell contains either one object or none. It is easy to see that this corresponds to a combination without replacement because, if we represent the occupied cells with black circles and the empty cells with white ones, there would be <span class="math-container">$k$</span> black circles and <span class="math-container">$(n-k)$</span> white ones in the row, so the number of permutations of this row is precisely:</p>
<p><span class="math-container">$$\frac{n!}{(n-k)!k!}=\binom{n}{k}$$</span></p>
<p><span class="math-container">$$[][x][x][][x](...)[x][]= \bigcirc \bullet \bullet \bigcirc \bullet (...)\bullet \bigcirc $$</span></p>
<p>Note here that necessarily <span class="math-container">$k\leq n$</span>.</p>
<p>Using the same analogy for combinations <strong>with</strong> replacement, we have <span class="math-container">$k$</span> objects that we want to distribute into these <span class="math-container">$n$</span> cells, but now we can put more than one object per cell (hence with replacement); also note that there is no bound on <span class="math-container">$k$</span>, because if <span class="math-container">$k>n$</span> we can just put more than one object in a cell. Now here comes the tricky part: we can count the permutations of this set by cleverly assigning circles. Let the divisions between the cells be white circles and the objects black circles; then there would be <span class="math-container">$(n-1)$</span> white circles and <span class="math-container">$k$</span> black ones. It's not so hard to see that each permutation of these circles corresponds to a different way of putting each of these <span class="math-container">$k$</span> objects into the <span class="math-container">$n$</span> cells. We have a total of <span class="math-container">$(n-1)+k$</span> circles, <span class="math-container">$(n-1)$</span> white and <span class="math-container">$k$</span> black, so the number of permutations of this row of circles is precisely:</p>
<p><span class="math-container">$$\frac{(n-1+k)!}{(n-1)!k!}=\binom{n+k-1}{k}$$</span></p>
<p><span class="math-container">$$[][xxxx][xx][][x][(...)][x][]=$$</span></p>
<p><span class="math-container">$$\bigcirc \bullet \bullet \bullet \bullet \bigcirc \bullet \bullet \bigcirc \bigcirc \bullet \bigcirc(...)\bigcirc \bullet \bigcirc $$</span></p>
<p>This is not really a rigorous proof, but I think it illustrates the concept well. If you want a slightly more detailed explanation and exercises, I recommend the book <em>Introduction to Combinatorics</em> published by the United Kingdom Mathematics Trust (UKMT), available at their webpage. It covers many interesting topics with a problem-solving approach.</p>
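<p>Not a proof, but the count is easy to confirm by brute force in Python:</p>

<pre><code>from itertools import combinations_with_replacement
from math import comb

n, k = 4, 6  # choose k objects from n types, repetition allowed
count = sum(1 for _ in combinations_with_replacement(range(n), k))
print(count, comb(n + k - 1, k))  # both 84
</code></pre>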
| <p>Assume the question is about buying 6 cans of soda pop from 4 brands of soda. Of course, there are more than 6 cans of soda of each brand. The number of different combinations is $\binom{4+6-1}{6} = 84. $</p>
<p>Think of it this way: If you wanted 2 cans of soda pop from the 4 brands, the second can of pop can be the same as the first one. Therefore, the reason it is $\binom{5}{2}$ is that one of the 5 options is a "<em>duplicate</em>" pop. If it were $\binom{4}{2}$, it would not be a combination with replacement.</p>
<p>Therefore, in $\binom{4+6-1}{6} $, the $6-1$ (or $k-1$) extra options are the "<em>duplicate</em>" pops, meaning each of them can repeat a pop that has already been picked.</p>
|
probability | <p>There is a cube and an ant is performing a random walk on the edges where it can select any of the 3 adjoining vertices with equal probability. What is the expected number of steps it needs till it reaches the diagonally opposite vertex?</p>
| <p><strong>Extending to $d$-dimensions</strong></p>
<p>So I had enough time to procrastinate today and I extended this problem to $d$ dimensions. I would appreciate it if someone could read through the answer and suggest any simplification to the final answer if possible. Do the final numbers $u_n$ constitute a nice well-known sequence? Further, what other interesting problems arise out of this ant-and-cube problem? I think another nice problem is the expected cover time. Thanks</p>
<p>Working out explicitly for $d=3$ gave some nice hints and a good picture of what is going on. We use these to extend to arbitrary dimension $d$.</p>
<p>The first thing is to define a cube in $d$-dimensions. A $\mathbf{d}$-<strong>dimensional cube</strong> is a set of vertices of the form $(i_1,i_2,\ldots,i_d)$ where $i_k \in \{0,1\}$. There is an edge between vertex $(i_1,i_2,\ldots,i_d)$ and vertex $(j_1,j_2,\ldots,j_d)$ if $\exists l$ such that $\left | i_l - j_l \right| = 1$ and $i_k = j_k$, $\forall k \neq l$.</p>
<p>Two vertices are said to be adjacent if they share an edge. It is easy to note that every vertex shares an edge with $d$ vertices. (Consider a vertex $(i_1,i_2,\ldots,i_d)$. Choose any $i_k$ and replace $i_k$ by $1-i_k$. This gives an adjacent vertex, and hence there are $d$ adjacent vertices, as $k$ can be any number from $1$ to $d$.) Hence, the probability that an ant at a vertex will choose any particular edge is $\frac1{d}$.</p>
<p>For every vertex, $(i_1,i_2,\ldots,i_d)$ let $s((i_1,i_2,\ldots,i_d)) = \displaystyle \sum_{k=1}^d i_k$. Note that $s$ can take values from $0$ to $d$. There are $\binom{d}{r}$ vertices such that $s((i_1,i_2,\ldots,i_d)) = r$.</p>
<p>Let $v_{(i_1,i_2,\ldots,i_d)}$ denote the expected number of steps taken from $(i_1,i_2,\ldots,i_d)$ to reach $(1,1,\ldots,1)$</p>
<p>Let $S_r = \{ (i_1,i_2,\ldots,i_d) \in \{0,1\}^d: s((i_1,i_2,\ldots,i_d)) = r\}$</p>
<p><strong>Claim</strong>: Consider two vertices say $a,b \in S_r$. Then $v_a = v_b$. The argument follows easily from symmetry. It can also be seen from writing down the equations and noting that the equations for $a$ and $b$ are symmetrical.</p>
<p>Further note that if $a \in S_r$, with $0 < r < d$, then any adjacent vertex of $a$ must be in $S_{r-1}$ or $S_{r+1}$. Any adjacent vertex of $(0,0,\ldots,0)$ belongs to $S_1$ and any adjacent vertex of $(1,1,\ldots,1)$ belongs to $S_{d-1}$. In fact, for any $a \in S_r$, $r$ adjacent vertices $\in S_{r-1}$ and $d-r$ adjacent vertices $\in S_{r+1}$.</p>
<p>Let $u_r$ denote the expected number of steps from any vertex $\in S_r$ to reach $(1,1,\ldots,1)$. For $r \in \{1,2,\ldots,d-1\}$, we have
\begin{align}
u_r & = 1 + \frac{r}{d} u_{r-1} + \frac{d-r}{d} u_{r+1} & r \in \{1,2,\ldots,d-1\}\\\
u_0 & = 1 + u_1 & r = 0\\\
u_d & = 0 & r = d
\end{align}
Setting up a matrix gives us a tri-diagonal system to be solved. Instead, we go about solving this as follows.</p>
<p>Let $p_r = \frac{r}{d}$, $\forall r \in \{1,2,\ldots,d-1\}$. Then the equations become
\begin{align}
u_r & = 1 + p_r u_{r-1} + (1-p_r) u_{r+1} & r \in \{1,2,\ldots,d-1\}\\\
u_0 & = 1 + (1-p_0) u_1 & r = 0\\\
u_d & = 0 & r = d
\end{align}
Let $a_{r} = u_{r+1} - u_r$. Then we get
\begin{align}
p_r a_{r-1} & = 1 + (1-p_r)a_r & r \in \{1,2,\ldots,d-1\}\\\
a_0 & = -1 & r = 0
\end{align}
Note that $u_m = - \displaystyle \sum_{k=m}^{d-1} a_k$ and $u_d = 0$
\begin{align}
a_0 & = -1 & r = 0\\\
a_{r} & = \frac{p_r}{1-p_r} a_{r-1} - \frac1{1-p_r} & r \in \{1,2,\ldots,d-1\}
\end{align}
Let $l_r = \frac{p_r}{1-p_r} = \frac{r}{d-r}$
\begin{align}
a_0 & = -1 & r = 0\\\
a_{r} & = l_r a_{r-1} - (1+l_r) & r \in \{1,2,\ldots,d-1\}
\end{align}
\begin{align}
a_1 &= l_1 a_0 - (1+l_1)\\\
a_2 & = l_2 l_1 a_0 - l_2(1+l_1) - (1+l_2)\\\
a_3 & = l_3 l_2 l_1 a_0 - l_3 l_2 (1+l_1) - l_3 (1+l_2) - (1+l_3)\\\
a_m & = \left( \prod_{k=1}^{m} l_k \right) a_0 - \displaystyle \sum_{k=1}^{m} \left((1+l_k) \left( \prod_{j=k+1}^m l_j \right) \right)
\end{align}
Since $a_0 = -1$ and $l_0 = 0$, we get
\begin{align}
a_m & = - \displaystyle \sum_{k=0}^{m} \left((1+l_k) \left( \prod_{j=k+1}^m l_j \right) \right)
\end{align}
Hence,
\begin{align}
u_n & = - \displaystyle \sum_{m=n}^{d-1} a_m\\\
u_n & = \displaystyle \sum_{m=n}^{d-1} \left( \displaystyle \sum_{k=0}^{m} \left((1+l_k) \left( \prod_{j=k+1}^m l_j \right) \right) \right)\\\
u_n & = \displaystyle \sum_{m=n}^{d-1} \left( \displaystyle \sum_{k=0}^{m} \left(\frac{d}{d-k} \left( \prod_{j=k+1}^m \frac{j}{d-j} \right) \right) \right)\\\
u_n & = \displaystyle \sum_{m=n}^{d-1} \frac{\displaystyle \sum_{k=0}^{m} \binom{d}{k}}{\binom{d-1}{m}}
\end{align}
Note that
\begin{align}
u_n & = \frac{\displaystyle \sum_{k=0}^{n} \binom{d}{k}}{\binom{d-1}{n}} + u_{n+1} & \forall n \in \{0,1,2,\ldots,d-2 \}
\end{align}
The expected number of steps from one vertex away is when $n = d-1$ and hence $u_{d-1} = 2^d-1$</p>
<p>The expected number of steps from two vertices away is when $n = d-2$ and hence $u_{d-2} = \frac{2d(2^{d-1} - 1)}{d-1}$</p>
<p>The answers for the expected number of steps from a vertex and two vertices away
coincide with Douglas Zhare's comment</p>
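<p>A small Python check, in exact arithmetic, that the closed form agrees with the recurrence it was derived from:</p>

<pre><code>from fractions import Fraction
from math import comb

def u_closed(n, d):
    """u_n = sum_{m=n}^{d-1} (sum_{k=0}^{m} C(d,k)) / C(d-1,m)."""
    return sum(Fraction(sum(comb(d, k) for k in range(m + 1)), comb(d - 1, m))
               for m in range(n, d))

def u_recurrence(d):
    """Solve via a_r = u_{r+1} - u_r, a_0 = -1, a_r = l_r a_{r-1} - (1+l_r)."""
    a = [Fraction(-1)]
    for r in range(1, d):
        l = Fraction(r, d - r)
        a.append(l * a[-1] - (1 + l))
    return [-sum(a[m:]) for m in range(d)] + [Fraction(0)]

for d in (3, 5, 8):
    assert [u_closed(n, d) for n in range(d + 1)] == u_recurrence(d)
print(u_closed(0, 3), u_closed(2, 3))  # 10 and 7, matching the 3-cube
</code></pre>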
<hr>
<p><strong>Initial Solution</strong></p>
<p>Problems such as these fall in the category of Markov chains and one way to solve this is through first step analysis.</p>
<p>We shall denote the vertices of the cube by numbers from $1$ to $8$ with $1$ and $8$ being the opposite ends of the body diagonal.</p>
<p><img src="https://i.sstatic.net/knyXR.png" alt="enter image description here"></p>
<p>Let $v_i$ denote the expected number of steps to reach the vertex numbered $8$ starting at vertex numbered $i$.</p>
<p>$v_1 = 1 + \frac{1}{3}(v_2 + v_4 + v_6)$;
$v_2 = 1 + \frac{1}{3}(v_1 + v_3 + v_7)$;
$v_3 = 1 + \frac{1}{3}(v_2 + v_4 + v_8)$;
$v_4 = 1 + \frac{1}{3}(v_1 + v_3 + v_5)$;
$v_5 = 1 + \frac{1}{3}(v_4 + v_6 + v_8)$;
$v_6 = 1 + \frac{1}{3}(v_1 + v_5 + v_7)$;
$v_7 = 1 + \frac{1}{3}(v_6 + v_2 + v_8)$;
$v_8 = 0$;</p>
<p>Note that by symmetry you have $v_2 = v_4 = v_6$ and $v_3 = v_5 = v_7$.</p>
<p>Hence, $v_1 = 1 + v_2$ and $v_2 = 1 + \frac{1}{3}(v_1 + 2v_3)$ and $v_3 = 1 + \frac{2}{3} v_2$.</p>
<p>Solving we get $$\begin{align}
v_1 & = 10\\
v_2 = v_4 = v_6 & = 9\\
v_3 = v_5 = v_7 & = 7
\end{align}$$</p>
<p>Hence, the expected number of steps to reach the diagonally opposite vertex is $10$.</p>
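<p>A quick Monte Carlo sanity check in Python (vertices encoded as 3-bit masks, so each move flips one coordinate):</p>

<pre><code>import random

NEIGHBORS = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}

def steps_to_opposite(start=0b000, target=0b111):
    v, steps = start, 0
    while v != target:
        v = random.choice(NEIGHBORS[v])
        steps += 1
    return steps

random.seed(1)
trials = 200_000
print(sum(steps_to_opposite() for _ in range(trials)) / trials)  # close to 10
</code></pre>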
| <p>Call the set containing only the starting vertex $A$. You can move to any of three vertices next. Call the set of them $B$. For the next step, you can go back to $A$, or you can move on to any of three new vertices. Call the set of those vertices $C$. Finally, call the set containing the goal vertex $D$.</p>
<p>Call the expected number of steps from $A$ to $D$ $E(AD)$ etc.</p>
<p>Consider $E(BD)$. We can write an equation for $E(BD)$ by considering what happens if you start at $B$ and take two steps. You could go to $C$ and then to $D$. The probability of this is $2/3$ for the first step and $1/3$ for the second, or $2/9$ overall. </p>
<p>You could also go to $C$ and back, or to $A$ and back. Either way, your new expected number of steps to $D$ is the same as it was before, because you're back where you started. The probability of this is $7/9$ because probabilities add to one.</p>
<p>This gives</p>
<p>$E(BD) = 2/9(2) + 7/9(2 + E(BD))$</p>
<p>which means</p>
<p>$E(BD) = 9$</p>
<p>It takes one step to go from $A$ to $B$, so</p>
<p>$E(AD) = 10$</p>
|
linear-algebra | <p>It seems, at times, that physicists and mathematicians mean different things when they say the word "tensor." From my perspective, when I say tensor, I mean "an element of a tensor product of vector spaces." </p>
<p>For instance, here is a segment about tensors from Zee's book <em>Einstein Gravity in a Nutshell</em>:</p>
<blockquote>
<p>We already saw in the preceding chapter
that a vector is defined by how it transforms: $V^{'i}
= R^{ij}V^j$ . Consider a collection of “mathematical
entities” $T^{ij}$ with $i , j = 1, 2, . . . , D$ in $D$-dimensional space. If they transform
under rotations according to
$T^{ij} \to T^{'ij} = R^{ik}R^{jl}T^{kl}$ then we say that $T$ transforms like a tensor.</p>
</blockquote>
<p>This does not really make any sense to me. Even for "vectors," and before we get to "tensors," it seems like we'd have to be given a sense of what it means for an object to "transform." How do they divine these transformation rules?</p>
<p>I am not completely formalism bound, but I have no idea how they would infer these transformation rules without a notion of what the object is <em>first</em>. For me, if I am given, say, $v \in \mathbb{R}^3$ endowed with whatever basis, I can <em>derive</em> that any linear map is given by matrix multiplication as it seems the physicists mean. But, I am having trouble even interpreting their statement. </p>
<p>How do you derive how something "transforms" without having a notion of what it is? If you want to convince me that the moon is made of green cheese, I need to at least have a notion of what the moon is first. The same is true of tensors. </p>
<p>My questions are:</p>
<ul>
<li>What exactly are the physicists saying, and can someone translate what they're saying into something more intelligible? How can they get these "transformation rules" without having a notion of what the thing is that they are transforming?</li>
<li>What is the relationship between what physicists are expressing versus mathematicians? </li>
<li>How can I talk about this with physicists without being accused of being a stickler for formalism and some kind of plague? </li>
</ul>
| <p>What a physicist probably means when they say "tensor" is "a global section of a tensor bundle." I'll try and break it down to show the connection to what mathematicians mean by tensor.</p>
<p>Physicists always have a manifold <span class="math-container">$M$</span> lying around. In classical mechanics or quantum mechanics, this manifold <span class="math-container">$M$</span> is usually flat spacetime, mathematically <span class="math-container">$\mathbb{R}^4$</span>. In general relativity, <span class="math-container">$M$</span> is the spacetime manifold whose geometry is governed by Einstein's equations.</p>
<p>Now, with this underlying manifold <span class="math-container">$M$</span> we can discuss what it means to have a vector field on <span class="math-container">$M$</span>. Manifolds are locally euclidean, so we know what tangent vector means locally on <span class="math-container">$M$</span>. The question is, how do you make sense of a vector field globally? The answer is, you specify an open cover of <span class="math-container">$M$</span> by coordinate patches, say <span class="math-container">$\{U_\alpha\}$</span>, and you specify vector fields <span class="math-container">$V_\alpha=(V_\alpha)^i\frac{\partial}{\partial x^i}$</span> defined locally on each <span class="math-container">$U_\alpha$</span>. Finally, you need to ensure that on the overlaps <span class="math-container">$U_\alpha \cap U_\beta$</span> that <span class="math-container">$V_\alpha$</span> "agrees" with <span class="math-container">$V_\beta$</span>. When you take a course in differential geometry, you study vector fields and you show that the proper way to patch them together is via the following relation on their components:
<span class="math-container">$$
(V_\alpha)^i = \frac{\partial x^i}{\partial y^j} (V_\beta)^j
$$</span>
(here, Einstein summation notation is used, and <span class="math-container">$y^j$</span> are coordinates on <span class="math-container">$U_\beta$</span>). With this definition, one can define a vector bundle <span class="math-container">$TM$</span> over <span class="math-container">$M$</span>, which should be thought of as the union of tangent spaces at each point. The compatibility relation above translates to saying that there is a well-defined global section <span class="math-container">$V$</span> of <span class="math-container">$TM$</span>. So, when a physicist says "this transforms like this" they're implicitly saying "there is some well-defined global section of this bundle, and I'm making use of its compatibility with respect to different choices of coordinate charts for the manifold."</p>
<p>So what does this have to do with mathematical tensors? Well, given vector bundles <span class="math-container">$E$</span> and <span class="math-container">$F$</span> over <span class="math-container">$M$</span>, one can form their tensor product bundle <span class="math-container">$E\otimes F$</span>, which essentially is defined by
<span class="math-container">$$
E\otimes F = \bigcup_{p\in M} E_p\otimes F_p
$$</span>
where the subscript <span class="math-container">$p$</span> indicates "take the fiber at <span class="math-container">$p$</span>." Physicists in particular are interested in iterated tensor powers of <span class="math-container">$TM$</span> and its dual, <span class="math-container">$T^*M$</span>. Whenever they write "the tensor <span class="math-container">$T^{ij...}_{k\ell...}$</span> transforms like so and so" they are talking about a global section <span class="math-container">$T$</span> of a tensor bundle <span class="math-container">$(TM)^{\otimes n} \otimes (T^*M)^{\otimes m}$</span> (where <span class="math-container">$n$</span> is the number of upper indices and <span class="math-container">$m$</span> is the number of lower indices) and they're making use of the well-definedness of the global section, just like for the vector field.</p>
<p>Edit: to directly answer your question about how they get their transformation rules, when studying differential geometry one learns how to take compatibility conditions from <span class="math-container">$TM$</span> and <span class="math-container">$T^*M$</span> and turn them into compatibility relations for tensor powers of these bundles, thus eliminating any guesswork as to how some tensor should "transform."</p>
<p>For more on this point of view, Lee's book on Smooth Manifolds would be a good place to start.</p>
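<p>To see the transformation rule in action, here is a minimal numpy sketch (the rotation angle and the components of $T$ are arbitrary choices for illustration): the physicist's rule $T^{ij} \to R^{ik}R^{jl}T^{kl}$ is, in matrix language, just the conjugation $T \mapsto RTR^T$, and the index contraction can be checked mechanically:</p>

<pre><code>import numpy as np

theta = 0.3                                  # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = np.array([[1.0, 2.0],
              [0.5, 3.0]])                   # rank-2 tensor components, old basis

# Index form: T'^{ij} = R^{ik} R^{jl} T^{kl}
T_rule = np.einsum('ik,jl,kl->ij', R, R, T)

# Matrix form of the same contraction:
print(np.allclose(T_rule, R @ T @ R.T))      # True
</code></pre>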
| <p>Being a physicist by training maybe I can help.</p>
<p>The "physicist" definition of a vector you allude to, in more mathematicians-friendly terms would become something like</p>
<blockquote>
<p>Let $V$ be a vector space and fix a reference frame $\mathcal{F}$ (mathematicians lingo: a basis.) A collection $\{v^1, \ldots, v^n\}$ of real numbers is called a vector if upon a change of reference frame $\mathcal{F}^\prime = R ^{-1} \mathcal{F}$ it becomes the collection $\{ v^{\prime 1}, \dots, v^{\prime n}\}$ where $v^{\prime i} =R^i_{\ j} v^j$.</p>
</blockquote>
<p>If you like, you are defining a vector as an equivalence class of $n$-tuples of real numbers.</p>
<p>Yes, in many physics books most of what I wrote is tacitly implied/shrugged off as non-important. Anyway, the definition of tensors as collections of numbers transforming according to certain rules is not so esoteric/rare as far as I am aware, and as others have pointed out it's also how mathematicians thought about them back in the past.</p>
<p>Physicists often prefer to describe objects in what they find to be more intuitive and less abstract terms, and one of their strengths is the ability to work with vaguely defined objects! (Yes, I consider it a strength and yes, it has its drawbacks and pitfalls, no need to start arguing about that.)</p>
<p>The case of tensors is similar, just think of the collection of numbers with indices as the components with respect to some basis. Be warned that sometimes what a physicist calls a tensor is actually a tensor field.</p>
<p>As to why one would use the definition in terms of components rather than more elegant invariant ones: it takes less technology and is more down-to-earth than introducing a free module over a set and quotienting by an ideal.</p>
<p>Finally, regarding how to communicate with physicists: this has always been a struggle on both sides but</p>
<ol>
<li><p>Many physicists, at least in the general relativity area, are familiar with the definition of a tensor in terms of multilinear maps. In fact, that is how they are defined in all GR books I have looked at (Carroll, Misner-Thorne-Wheeler, Hawking-Ellis, Wald).</p></li>
<li><p>It wouldn't hurt for you to get acquainted, if not proficient, with the index notation. It has its own strengths <em>and</em> is still intrinsic. See Wald or the first volume of Penrose-Rindler "Spinors and space-time" under abstract index notation for more on that.</p></li>
</ol>
|
matrices | <p>Matrices are part of my syllabus, but I don't know where they are actually used. I even asked my teacher, but she had no answer either. Can anyone please tell me where matrices are used, and also give me an example of how they are used?</p>
| <p>I work in the field of applied math, so I will give you the point of view of an applied mathematician.</p>
<p>I do numerical PDEs. Basically, I take a differential equation (an equation whose solution is not a number, but a function, and that involves the function and its derivatives) and, instead of finding an analytical solution, I try to find an approximation of the value of the solution at some points (think of a grid of points). It's a bit deeper than this, but that's not the point here. The point is that eventually I find myself having to solve a linear system of equations which is usually of huge size (on the order of millions of unknowns). That is a pretty huge number of equations to solve, I would say.</p>
<p>Where do matrices come into play? Well, as you know (or maybe not, I don't know) a linear system can be seen in matrix-vector form as</p>
<p>$$\text{A}\underline{x}=\underline{b}$$
where $\underline{x}$ contains the unknowns, A the coefficients of the equations and $\underline{b}$ contains the values of the right hand sides of the equations.
For instance for the system</p>
<p>$$\begin{cases}2x_1+x_2=3\\4x_1-x_2=1\end{cases}$$
we have</p>
<p>$$\text{A}=\left[
\begin{array}{cc}
2 & 1\\
4 & -1
\end{array}
\right],\qquad \underline{x}=
\left[\begin{array}{c}
x_1\\
x_2
\end{array}
\right]\qquad \underline{b}=
\left[\begin{array}{c}
3\\
1
\end{array}
\right]$$</p>
<p>For what I said so far, in this context matrices look just like a fancy and compact way to write down a system of equations, mere tables of numbers.</p>
<p>However, in order to solve this system fast, it is not enough to use a computer with a lot of RAM and/or a high CPU clock rate. Of course, the more powerful the computer is, the faster you will get the solution. But sometimes, faster might still mean <strong>days</strong> (or more) if you tackle the problem in the wrong way, even if you are on a Blue Gene.</p>
<p>So, to reduce the computational costs, you have to come up with a good algorithm, a smart idea. But in order to do so, you need to exploit some property or some structure of your linear system. These properties are encoded somehow in the coefficients of the matrix A. Therefore, studying matrices and their properties is of crucial importance in trying to improve the efficiency of linear solvers. Recognizing that the matrix enjoys a particular property might be crucial to developing a fast algorithm, or even to proving that a solution exists, or that the solution has some nice property.</p>
<p>For instance, consider the linear system</p>
<p>$$\left[\begin{array}{cccc}
2 & -1 & 0 & 0\\
-1 & 2 & -1 & 0\\
0 & -1 & 2 & -1\\
0 & 0 & -1 & 2
\end{array}
\right]
\left[
\begin{array}{c}
x_1\\
x_2\\
x_3\\
x_4
\end{array}
\right]=
\left[
\begin{array}{c}
1\\
1\\
1\\
1
\end{array}
\right]$$
which corresponds (in equation form) to</p>
<p>$$\begin{cases}
2x_1-x_2=1\\
-x_1+2x_2-x_3=1\\
-x_2+2x_3-x_4=1\\
-x_3+2x_4=1
\end{cases}$$</p>
<p>Just by glancing at the matrix, I can claim that this system has a solution and, moreover, that the solution is non-negative (meaning that all the components of the solution are non-negative). I'm pretty sure you wouldn't be able to draw this conclusion just by looking at the system without trying to solve it. I can also claim that solving this system requires only 25 operations (one operation being a single addition/subtraction/division/multiplication). If you construct a larger system with the same pattern (2 on the diagonal, -1 on the upper and lower diagonals) and a right-hand side with only positive entries, I can still claim that the solution exists and is positive, and that the number of operations needed to solve it is only $8n-7$, where $n$ is the size of the system.</p>
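<p>To make the operation count concrete, here is a sketch (in Python) of the specialized elimination that exploits the tridiagonal structure: the classic Thomas algorithm, which runs in $O(n)$ operations instead of the $O(n^3)$ of general Gaussian elimination:</p>

<pre><code>def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system: a = subdiagonal, b = diagonal,
    c = superdiagonal (a[0] and c[-1] are unused), d = right-hand side."""
    n = len(d)
    b, d = b[:], d[:]                      # work on copies
    for i in range(1, n):                  # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# The 4x4 example above: the solution is positive, as claimed.
print(solve_tridiagonal([0, -1, -1, -1], [2, 2, 2, 2],
                        [-1, -1, -1, 0], [1, 1, 1, 1]))
# [2.0, 3.0, 3.0, 2.0]
</code></pre>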
<p>Moreover, people have already pointed out other fields where matrices are important building blocks and play an important role. I hope this thread gave you an idea of why it is worth studying matrices. =)</p>
| <p>Matrices are a useful way to represent, manipulate and study linear maps between finite-dimensional vector spaces (once you have chosen bases). <br />
Matrices can also represent quadratic forms (this is useful, for example, in analysis to study Hessian matrices, which help us understand the behavior of critical points).<br /></p>
<p>So, it's a useful tool of linear algebra.</p>
<p>Moreover, linear algebra is a crucial tool in math.<br />
To convince yourself, note that there are a lot of linear problems you can study with little mathematical background. For example: systems of linear equations, some error-correcting codes (linear codes), linear differential equations, linear recurrence sequences...<br />
I also think that linear algebra is a natural framework for quantum mechanics.</p>
|
probability | <p>If you were to flip a coin 150 times, what is the probability that it would land tails 7 times in a row? How about 6 times in a row? Is there some formula that can calculate this probability?</p>
| <p>Here are some details; I will only work out the case where you want $7$ tails in a row, and the general case is similar. I am interpreting your question to mean "what is the probability that, at least once, you flip at least 7 tails in a row?" </p>
<p>Let $a_n$ denote the number of ways to flip $n$ coins such that at no point do you flip more than $6$ consecutive tails. Then the number you want to compute is $1 - \frac{a_{150}}{2^{150}}$. The last few coin flips in such a sequence of $n$ coin flips must be one of $H, HT, HTT, HTTT, HTTTT, HTTTTT$, or $HTTTTTT$. After deleting this last bit, what remains is another sequence of coin flips with no more than $6$ consecutive tails. So it follows that</p>
<p>$$a_{n+7} = a_{n+6} + a_{n+5} + a_{n+4} + a_{n+3} + a_{n+2} + a_{n+1} + a_n$$</p>
<p>with initial conditions $a_k = 2^k, 0 \le k \le 6$. Using a computer it would not be very hard to compute $a_{150}$ from here, especially if you use the matrix method that David Speyer suggests.</p>
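<p>(For concreteness, here is a short Python sketch of that computation, using exact integer arithmetic:)</p>

<pre><code>from fractions import Fraction

a = [2**k for k in range(7)]          # a_0, ..., a_6
for n in range(7, 151):
    a.append(sum(a[-7:]))             # a_n = a_{n-1} + ... + a_{n-7}

print(float(1 - Fraction(a[150], 2**150)))   # ~ 0.44
</code></pre>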
<p>In any case, let's see what we can say approximately. The asymptotic growth of $a_n$ is controlled by the largest positive root of the characteristic polynomial $x^7 = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1$, which is a little less than $2$. Rearranging this identity gives $2 - x = \frac{1}{x^7}$, so to a first approximation the largest root is $r \approx 2 - \frac{1}{128}$. This means that $a_n$ is approximately $\lambda \left( 2 - \frac{1}{128} \right)^n$ for some constant $\lambda$, which means that $\frac{a_{150}}{2^{150}}$ is roughly</p>
<p>$$\lambda \left( 1 - \frac{1}{256} \right)^{150} \approx \lambda e^{ - \frac{150}{256} } \approx 0.56 \lambda$$</p>
<p>although $\lambda$ still needs to be determined.</p>
<p><strong>Edit:</strong> So let's approximate $\lambda$. I claim that the generating function for $a_n$ is</p>
<p>$$A(x) = 1 + \sum_{n \ge 1} a_{n-1} x^n = \frac{1}{1 - x - x^2 - x^3 - x^4 - x^5 - x^6 - x^7}.$$</p>
<p>This is because, by iterating the argument in the second paragraph, we can decompose any valid sequence of coin flips into a sequence of one of seven blocks $H, HT, ...$ uniquely, except that the initial segment does not necessarily start with $H$. To simplify the above expression, write $A(x) = \frac{1 - x}{1 - 2x + x^8}$. Now, the partial fraction decomposition of $A(x)$ has the form</p>
<p>$$A(x) = \frac{\lambda}{r(1 - rx)} + \text{other terms}$$</p>
<p>where $\lambda, r$ are as above, and it is this first term which determines the asymptotic behavior of $a_n$ as above. To compute $\lambda$ we can use l'Hopital's rule; we find that $\lambda$ is equal to</p>
<p>$$\lim_{x \to \frac{1}{r}} \frac{r(1 - rx)(1 - x)}{1 - 2x + x^8} = \lim_{x \to \frac{1}{r}} \frac{-r(r+1) + 2r^2x}{-2 + 8x^7} = \frac{r^2-r}{2 - \frac{8}{r^7}} \approx 1.$$</p>
<p>So my official guess at the actual value of the answer is $1 - 0.56 = 0.44$. Anyone care to validate it?</p>
<hr>
<p>Sequences like $a_n$ count the number of words in objects called <a href="http://en.wikipedia.org/wiki/Regular_language">regular languages</a>, whose enumerative behavior is described by <a href="http://en.wikipedia.org/wiki/Recurrence_relation#Linear_homogeneous_recurrence_relations_with_constant_coefficients">linear recurrences</a> and which can also be analyzed using <a href="http://en.wikipedia.org/wiki/Finite-state_machine">finite state machines</a>. Those are all good keywords to look up if you are interested in generalizations of this method. I discuss some of these issues in my <a href="http://web.mit.edu/~qchu/Public/TopicsInGF.pdf">notes on generating functions</a>, but you can find a more thorough introduction in the relevant section of Stanley's <a href="http://www-math.mit.edu/~rstan/ec/">Enumerative Combinatorics</a>.</p>
| <p>I'll sketch a solution; details are left to you.</p>
<p>As you flip your coin, think about what data you would want to keep track of to see whether $7$ heads in a row have come up yet. You'd want to know whether you have already won, and how many consecutive heads your current sequence ends with. In other words, there are $8$ states:</p>
<p>$A$: We have not flipped $7$ heads in a row yet, and the last flip was $T$.</p>
<p>$B$: We have not flipped $7$ heads in a row yet, and the last two flips were $TH$.</p>
<p>$C$: We have not flipped $7$ heads in a row yet, and the last three flips were $THH$. </p>
<p>$\ldots$</p>
<p>$G$: We have not flipped $7$ heads in a row yet, and the last seven flips were $THHHHHH$.</p>
<p>$H$: We've flipped $7$ heads in a row!</p>
<p>If we are in state $A$ then, with probability $1/2$ we move to state $B$ and with probability $1/2$ we stay in state $A$. If we are in state $B$ then, with probability $1/2$ we move to state $C$ and with probability $1/2$ we move back to state $A$. $\ldots$ If we are in state $G$, with probability $1/2$ we move forward to state $H$ and with probability $1/2$ we move back to state $A$. Once we are in state $H$ we stay there.</p>
<p>In short, define $M$ to be the matrix
$$\begin{pmatrix}
1/2 & 1/2 & 1/2 & 1/2 & 1/2 & 1/2 & 1/2 & 0 \\
1/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 1
\end{pmatrix}$$</p>
<p>Then the entries of $M^n$ give the probability of transitioning from one given state to another in $n$ coin flips. (Please, please, please, do not go on until you understand why this works! This is one of the most standard uses of matrix multiplication.) You are interested in the lower left entry of $M^{150}$.</p>
<p>Of course, a computer algebra system can compute this number for you quite rapidly. Rather than do this, I will discuss some interesting math which comes out of this.</p>
<hr>
<p>(1) The <a href="http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem">Perron-Frobenius theorem</a> tells us that $1$ is an eigenvalue of $M$ (with corresponding eigenvector $(0,0,0,0,0,0,0,1)^T$, in this case) and all the other eigenvalues are less then $1$. Let $\lambda$ be the largest eigenvalue less than $1$, then probability of getting $7$ heads in a row, when we flip $n$ times, is approximately $1-c \lambda^n$ for some constant $c$. </p>
<p>(2) You might wonder what sort of configurations of coin flips can be answered by this method. For example, could we understand the case where we flip $3$ heads in a row before we flip $2$ tails in a row? (Answer: Yes.) Could we understand the question of whether the first $2k$ flips are a palindrome for some $k$? (Answer: No, not by this method.) In general, the question is which properties can be recognized by <a href="http://en.wikipedia.org/wiki/Finite-state_machine">finite state automata</a>, also called the regular languages. There is a lot of study of this subject. </p>
<p>(3) See chapter $8.4$ of <em>Concrete Mathematics</em>, by Graham, Knuth and Patashnik, for many more coin flipping problems.</p>
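<p>And if you do want the number itself, here is a minimal numpy sketch of the matrix-power computation described above; it agrees with the $\approx 0.44$ estimate from the other answer:</p>

<pre><code>import numpy as np

M = np.zeros((8, 8))
M[0, :7] = 0.5                 # from states A..G, tails sends you to A
for i in range(7):
    M[i + 1, i] = 0.5          # heads advances one state
M[7, 7] = 1.0                  # H is absorbing

P = np.linalg.matrix_power(M, 150)
print(P[7, 0])                 # lower left entry of M^150, ~ 0.44
</code></pre>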
|
logic | <p>Suppose we want to define a first-order language to do set theory (so we can formalize mathematics).
One such construction can be found <a href="http://books.google.com.bn/books?id=u927rHHmylAC&lpg=PP1&pg=PA5#v=onepage&q&f=false" rel="noreferrer">here</a>.
What makes me uneasy about this definition is that words such as "set", "countable", "function", and "number" are used in somewhat non-trivial manners.
For instance, behind the word "countable" rests an immense amount of mathematical knowledge: one needs the notion of a bijection, which requires functions and sets.
One also needs the set of natural numbers (or something with equal cardinality), in order to say that countable sets have a bijection with the set of natural numbers.</p>
<p>Also, in set theory one uses the relation of belonging "<span class="math-container">$\in$</span>".
But a relation seems to require the notion of an ordered pair, which requires sets, whose properties are described using belonging...</p>
<p>I found the following in Kevin Klement's, <a href="https://people.umass.edu/klement/514/ln.pdf" rel="noreferrer">lecture notes on mathematical logic</a> (pages 2-3).</p>
<p>"You have to use logic to study logic. There’s no getting away from it.
However, I’m not going to bother stating all the logical rules that are valid in the metalanguage, since I’d need to do that in the metametalanguage, and that would just get me started on an infinite regress.
The rule of thumb is: if it’s OK in the object language, it’s OK in the metalanguage too."</p>
<p>So it seems that, if one proves a fact about the object language, then one can also use it in the metalanguage.
In the case of set theory, one may not start out knowing what sets really are, but after one proves some fact about them (e.g., that there are uncountable sets) then one implicitly "adds" this fact also to the metalanguage.</p>
<p>This seems like cheating: one is using the object language to conduct proofs regarding the metalanguage, when it should strictly be the other way round.</p>
<p>To give an example of avoiding circularity, consider the definition of the integers.
We can define a binary relation <span class="math-container">$R\subseteq(\mathbf{N}\times\mathbf{N})\times(\mathbf{N}\times\mathbf{N})$</span>, where for any <span class="math-container">$a,b,c,d\in\mathbf{N}$</span>, <span class="math-container">$((a,b),(c,d))\in R$</span> iff <span class="math-container">$a+d=b+c$</span>, and then defining <span class="math-container">$\mathbf{Z}:= \{[(a,b)]:a,b\in\mathbf{N}\}$</span>, where <span class="math-container">$[(a,b)]=\{x\in \mathbf{N}\times\mathbf{N}: x\mathrel{R}(a,b)\}$</span>, as in <a href="https://math.stackexchange.com/questions/156264/building-the-integers-from-scratch-and-multiplying-negative-numbers">this</a> question or <a href="https://en.wikipedia.org/wiki/Integer#Construction" rel="noreferrer">here</a> on Wikipedia. In this definition, if set theory and the natural numbers are assumed, then there is no circularity, because one did not depend on the notion of "subtraction" in defining the integers.</p>
<p>So my question is:</p>
<blockquote>
<p><strong>Question</strong> Is the definition of first-order logic circular?
If not, please explain why.
If the definitions <em>are</em> circular, is there an alternative definition which avoids the circularity?</p>
</blockquote>
<p>Some thoughts:</p>
<ul>
<li><p>Perhaps there is the distinction between what sets are (anything that obeys the axioms) and how sets are expressed (using a formal language).
In other words, the notion of a <em>set</em> may not be circular, but to talk of sets using a formal language requires the notion of a set in a metalanguage.</p></li>
<li><p>In foundational mathematics there also seems to be the idea of first <em>defining</em> something, and then coming back with better machinery to <em>analyse</em> that thing.
For instance, one can define the natural numbers using the Peano axioms, then later come back to say that all structures satisfying the axioms are isomorphic. (I don't know any algebra, but that seems right.)</p></li>
<li><p>Maybe sets, functions, etc., are too basic? Is it possible to avoid these terms when defining a formal language?</p></li>
</ul>
| <p>I think an important answer is still not present so I am going to type it. This is somewhat standard knowledge in the field of foundations but is not always adequately described in lower level texts.</p>
<p>When we formalize the syntax of formal systems, we often talk about the <em>set</em> of formulas. But this is just a way of speaking; there is no ontological commitment to "sets" as in ZFC. What is really going on is an "inductive definition". To understand this you have to temporarily forget about ZFC and just think about strings that are written on paper. </p>
<p>The inductive definition of a "propositional formula" might say that the set of formulas is the smallest class of strings such that:</p>
<ul>
<li><p>Every variable letter is a formula (presumably we have already defined a set of variable letters). </p></li>
<li><p>If $A$ is a formula, so is $\lnot (A)$. Note: this is a string with 3 more symbols than $A$. </p></li>
<li><p>If $A$ and $B$ are formulas, so is $(A \land B)$. Note this adds 3 more symbols to the ones in $A$ and $B$. </p></li>
</ul>
<p>This definition <em>can</em> certainly be read as a definition in ZFC. But it can also be read in a different way. The definition can be used to generate a completely effective procedure that a human can carry out to tell whether an arbitrary string is a formula (a proof along these lines, which constructs a parsing procedure and proves its validity, is in Enderton's logic textbook). </p>
<p>In this way, we can understand inductive definitions in a completely effective way without any recourse to set theory. When someone says "Let $A$ be a formula" they mean to consider the situation in which I have in front of me a string written on a piece of paper, which my parsing algorithm says is a correct formula. I can perform that algorithm without any knowledge of "sets" or ZFC.</p>
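<p>To make "completely effective procedure" concrete, here is a sketch of such a recognizer for the little propositional language above, written in Python purely as notation for a finite symbolic algorithm (the choice of $p,q,r$ as variable letters and of ~, & as the connective symbols is an arbitrary choice for the sketch):</p>

<pre><code>def is_formula(s):
    """Decide whether the string s is a formula: a variable letter,
    or ~(A), or (A&B), for formulas A and B."""
    if len(s) == 1 and s in "pqr":              # variable letters
        return True
    if s.startswith("~(") and s.endswith(")"):
        return is_formula(s[2:-1])
    if s.startswith("(") and s.endswith(")"):
        depth = 0
        for i, ch in enumerate(s[1:-1], start=1):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == "&" and depth == 0:      # the top-level conjunction
                return is_formula(s[1:i]) and is_formula(s[i + 1:-1])
    return False

print(is_formula("(p&~(q))"))   # True
print(is_formula("p&q"))        # False: missing outer parentheses
</code></pre>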
<p>Another important example is "formal proofs". Again, I can treat these simply as strings to be manipulated, and I have a parsing algorithm that can tell whether a given string is a formal proof. The various syntactic metatheorems of first-order logic are also effective. For example the deduction theorem gives a direct algorithm to convert one sort of proof into another sort of proof. The algorithmic nature of these metatheorems is not always emphasized in lower-level texts - but for example it is very important in contexts like automated theorem proving. </p>
<p>So if you examine a logic textbook, you will see that all the syntactic aspects of basic first order logic are given by inductive definitions, and the algorithms given to manipulate them are completely effective. Authors usually do not dwell on this, both because it is completely standard and because they do not want to overwhelm the reader at first. So the convention is to write definitions "as if" they are definitions in set theory, and allow the readers who know what's going on to read the definitions as formal inductive definitions instead. When read as inductive definitions, these definitions would make sense even to the fringe of mathematicians who don't think that any infinite sets exist but who are willing to study algorithms that manipulate individual finite strings. </p>
<p>Here are two more examples of the syntactic algorithms implicit in certain theorems: </p>
<ul>
<li><p>Gödel's incompleteness theorem actually gives an effective algorithm that can convert any PA-proof of Con(PA) into a PA-proof of $0=1$. So, under the assumption there is no proof of the latter kind, there is no proof of the former kind. </p></li>
<li><p>The method of forcing in ZFC actually gives an effective algorithm that can turn any proof of $0=1$ from the assumptions of ZFC and the continuum hypothesis into a proof of $0=1$ from ZFC alone. Again, this gives a relative consistency result. </p></li>
</ul>
<p>Results like the previous two bullets are often called "finitary relative consistency proofs". Here "finitary" should be read to mean "providing an effective algorithm to manipulate strings of symbols". </p>
<p>This viewpoint helps explain where weak theories of arithmetic such as PRA enter into the study of foundations. Suppose we want to ask "what axioms are required to prove that the algorithms we have constructed will do what they are supposed to do?". It turns out that very weak theories of arithmetic are able to prove that these symbolic manipulations work correctly. PRA is a particular theory of arithmetic that is on one hand very weak (from the point of view of stronger theories like PA or ZFC) but at the same time is able to prove that (formalized versions of) the syntactic algorithms work correctly, and which is often used for this purpose. </p>
| <p>It's only circular if you think we need a formalization of logic in order to reason mathematically at all. However, mathematicians reasoned mathematically for many centuries <em>before</em> formal logic was invented, so this assumption is obviously not true.</p>
<p>It's an empirical fact that mathematical reasoning existed independently of formal logic back then. I think it is reasonably self-evident, then, that it <em>still</em> exists without needing formal logic to prop it up. Formal logic is a <em>mathematical model</em> of the kind of reasoning mathematicians accept -- but the model is not the thing itself.</p>
<p>A small bit of circularity does creep in, because many modern mathematicians look to their knowledge of formal logic when they need to decide whether to accept an argument or not. But that's not enough to make the whole thing circular; there are enough non-equivalent formal logics (and possible foundations of mathematics) to choose between that the choice of which one to use to analyze arguments is still largely informed by which arguments one <em>intuitively</em> wants to accept in the first place, not the other way around.</p>
|
number-theory | <p>I have a friend who turned <span class="math-container">$32$</span> recently. She has an obsessive compulsive disdain for odd numbers, so I pointed out that being <span class="math-container">$32$</span> was pretty good since not only is it even, it also has no odd factors. That made me realize that <span class="math-container">$64$</span> would be an even better age for her, because it's even, has no odd factors, and has no odd <em>digits</em>. I then wondered how many other powers of <span class="math-container">$2$</span> have this property. The only higher power of <span class="math-container">$2$</span> with all even digits that I could find was <span class="math-container">$2048.$</span> </p>
<p>So is there a larger power of <span class="math-container">$2$</span> with all even digits? If not, how would you go about proving it?</p>
<p>I tried examining the last <span class="math-container">$N$</span> digits of powers of <span class="math-container">$2$</span> to look for a cycle in which there was always at least one odd digit in the last <span class="math-container">$N$</span> digits of the consecutive powers. Unfortunately, there were always a very small percentage of powers of <span class="math-container">$2$</span> whose last <span class="math-container">$N$</span> digits were even.</p>
<p><strong>Edit:</strong> Here's a little more info on some things I found while investigating the <span class="math-container">$N$</span> digit cycles.</p>
<p><span class="math-container">$N$</span>: <span class="math-container">$2,3,4,5,6,7,8,9$</span></p>
<p>Cycle length: <span class="math-container">$20,100,500,2500,12500,62520,312500,1562500,\dotsc, 4\cdot 5^{N-1}$</span></p>
<p>Number of suffixes with all even digits in cycle: <span class="math-container">$10, 25, 60, 150, 370, 925, 2310,5780,\sim4\cdot2.5^{N-1}$</span> </p>
<p>It seems there are some interesting regularities there. Unfortunately, one of the regularities is those occurrences of all even numbers! In fact, I was able to find a power of <span class="math-container">$2$</span> in which the last <span class="math-container">$33$</span> digits were even <span class="math-container">$(2^{3789535319} = \dots 468088628828226888000862880268288)$</span>. </p>
<p>Yes it's true that it took a power of <span class="math-container">$2$</span> with over a billion digits to even get the last <span class="math-container">$33$</span> to be even, so it would seem any further powers of <span class="math-container">$2$</span> with all even digits are extremely unlikely. But I'm still curious as to how you might prove it.</p>
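<p>(For reference, suffix claims like this can be verified almost instantly with modular exponentiation, without writing out the billion-digit number; a quick Python sketch:)</p>

<pre><code># Last 33 digits of 2^3789535319, via modular exponentiation:
s = str(pow(2, 3789535319, 10**33)).zfill(33)
print(s)                                  # 468088628828226888000862880268288
print(all(d in "02468" for d in s))       # True
</code></pre>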
<p><strong>Edit 2:</strong> Here's another interesting property I noticed. The next digit to the left of the last <span class="math-container">$N$</span> digits will take on every value of its parity as the <span class="math-container">$N$</span> digits cycle each time. Let me illustrate.</p>
<p>The last <span class="math-container">$2$</span> digits cycle every <span class="math-container">$20$</span> powers. Now examine the following:</p>
<p><span class="math-container">$2^7 = 128$</span><br>
<span class="math-container">$2^{27} = \dots 728$</span><br>
<span class="math-container">$2^{47} = \dots 328$</span><br>
<span class="math-container">$2^{67} = \dots 928$</span><br>
<span class="math-container">$2^{87} = \dots 528$</span><br>
<span class="math-container">$2^{107} = \dots 128$</span> </p>
<p>Notice that the hundreds place starts out odd and then proceeds to take on every odd digit as the final 2 digits cycle.</p>
<p>As another example, let's look at the fourth digit (knowing that the last 3 digits cycle every 100 powers.)</p>
<p><span class="math-container">$2^{18} = 262144$</span>,
<span class="math-container">$2^{118} = \dots 6144$</span>,
<span class="math-container">$2^{218} = \dots 0144$</span>,
<span class="math-container">$2^{318} = \dots 4144$</span>,
<span class="math-container">$2^{418} = \dots 8144$</span>,
<span class="math-container">$2^{518} = \dots 2144$</span> </p>
<p>This explains the power of 5 in the cycle length as each digit must take on all five digits of its parity.</p>
<p><strong>EDIT 3:</strong> It looks like the <span class="math-container">$(N+1)$</span><sup>st</sup> digit takes on all the values <span class="math-container">$0-9$</span> as the last <span class="math-container">$N$</span> digits complete half a cycle. For instance, the last <span class="math-container">$2$</span> digits cycle every <span class="math-container">$20$</span> powers, so look at the third digit every <span class="math-container">$10$</span> powers:</p>
<p><span class="math-container">$2^{8} = 256$</span>,
<span class="math-container">$2^{18} = \dots 144$</span>,
<span class="math-container">$2^{28} = \dots 456$</span>,
<span class="math-container">$2^{38} = \dots 944$</span>,
<span class="math-container">$2^{48} = \dots 656$</span>,
<span class="math-container">$2^{58} = \dots 744$</span>,
<span class="math-container">$2^{68} = \dots 856$</span>,
<span class="math-container">$2^{78} = \dots 544$</span>,
<span class="math-container">$2^{88} = \dots 056$</span>,
<span class="math-container">$2^{98} = \dots 344$</span> </p>
<p>Not only does the third digit take on every value 0-9, but it also alternates between odd and even every time (as the Edit 2 note would require.) Also, the N digits cycle between two values, and each of the N digits besides the last one alternates between odd and even. I'll make this more clear with one more example which looks at the fifth digit:</p>
<p><span class="math-container">$2^{20} = \dots 48576$</span>,
<span class="math-container">$2^{270} = \dots 11424$</span>,
<span class="math-container">$2^{520} = \dots 28576$</span>,
<span class="math-container">$2^{770} = \dots 31424$</span>,
<span class="math-container">$2^{1020} = \dots 08576$</span>,
<span class="math-container">$2^{1270} = \dots 51424$</span>,
<span class="math-container">$2^{1520} = \dots 88576$</span>,
<span class="math-container">$2^{1770} = \dots 71424$</span>,
<span class="math-container">$2^{2020} = \dots 68576$</span>,
<span class="math-container">$2^{2270} = \dots 91424$</span></p>
<p><strong>EDIT 4:</strong> Here's my next non-rigorous observation. It appears that as the final N digits cycle 5 times, the <span class="math-container">$(N+2)$</span><sup>th</sup> digit is either odd twice and even three times, or it's odd three times and even twice. This gives a method for extending an all even suffix. </p>
<p>If you have an all even N digit suffix of <span class="math-container">$2^a$</span>, and the (N+1)<sup>th</sup> digit is odd, then one of the following will have the (N+1)<sup>th</sup> digit even:</p>
<p><span class="math-container">$2^{(a+1*4*5^{N-2})}$</span>,
<span class="math-container">$2^{(a+2*4*5^{N-2})}$</span>,
<span class="math-container">$2^{(a+3*4*5^{N-2})}$</span></p>
<p><strong>Edit 5:</strong> It's looking like there's no way to prove this conjecture solely by examining the last N digits since we can always find an arbitrarily long, all even, N digit sequence. However, all of the digits are distributed so uniformly through each power of 2 that I would wager that not only does every power of 2 over 2048 have an odd digit, but also, every power of 2 larger than <span class="math-container">$2^{168}$</span> has <em>every digit</em> represented in it somewhere.</p>
<p>But for now, let's just focus on the parity of each digit. Consider the value of the <span class="math-container">$k^{th}$</span> digit of <span class="math-container">$2^n$</span> (with <span class="math-container">$a_0$</span> representing the 1's place.) </p>
<p><span class="math-container">$$
a_k = \left\lfloor\frac{2^n}{10^k}\right\rfloor \text{ mod 10}\Rightarrow a_k = \left\lfloor\frac{2^{n-k}}{5^k}\right\rfloor \text{ mod 10}
$$</span></p>
<p>We can write
<span class="math-container">$$2^{n-k} = d\cdot5^k + r$$</span>
where <span class="math-container">$d$</span> is the divisor and <span class="math-container">$r$</span> is the remainder of <span class="math-container">$2^{n-k}/5^k$</span>. So
<span class="math-container">$$
a_k \equiv \frac{2^{n-k}-r}{5^k} \equiv d \pmod{10}
$$</span>
<span class="math-container">$$\Rightarrow a_k \equiv d \pmod{2}$$</span>
And
<span class="math-container">$$d\cdot5^k = 2^{n-k} - r \Rightarrow d \equiv r \pmod{2}$$</span>
Remember that <span class="math-container">$r$</span> is the remainder of <span class="math-container">$2^{n-k} \text{ div } {5^k}$</span> so </p>
<p><span class="math-container">$$\text{The parity of $a_k$ is the same as the parity of $2^{n-k}$ mod $5^k$.}$$</span></p>
<p>Now we just want to show that for any <span class="math-container">$2^n > 2048$</span> we can always find a <span class="math-container">$k$</span> such that <span class="math-container">$2^{n-k} \text{ mod }5^k$</span> is odd.</p>
<p>I'm not sure if this actually helps or if I've just sort of paraphrased the problem.</p>
<p><strong>EDIT 6:</strong> Thinking about <span class="math-container">$2^{n-k}$</span> mod <span class="math-container">$5^k$</span>, I realized there's a way to predict some odd digits. </p>
<p><span class="math-container">$$2^a \pmod{5^k} \text{ is even for } 1\le a< log_2 5^k$$</span></p>
<p>The period of <span class="math-container">$2^a \pmod{5^k}$</span> is <span class="math-container">$4\cdot5^{k-1}$</span> since 2 is a primitive root mod <span class="math-container">$5^k$</span>. Also </p>
<p><span class="math-container">$$2^{2\cdot5^{k-1}} \equiv -1 \pmod{5^k}$$</span></p>
<p>So multiplying any <span class="math-container">$2^a$</span> by <span class="math-container">$2^{2\cdot5^{k-1}}$</span> flips its parity mod <span class="math-container">$5^k$</span>. Therefore <span class="math-container">$2^a \pmod{5^k}\text{ }$</span> is odd for</p>
<p><span class="math-container">$$1 + 2\cdot5^{k-1} \le a< 2\cdot5^{k-1} + log_2 5^k$$</span></p>
<p>Or taking the period into account, <span class="math-container">$2^a \pmod{5^k} \text{ }$</span> is odd for any integer <span class="math-container">$b\ge0$</span> such that</p>
<p><span class="math-container">$$1 + 2\cdot5^{k-1} (1 + 2b) \le a< 2\cdot5^{k-1} (1 + 2b) + log_2 5^k$$</span></p>
<p>Now for the <span class="math-container">$k^{th}$</span> digit of <span class="math-container">$2^n$</span> (<span class="math-container">$ k=0 \text{ } $</span> being the 1's digit), we're interested in the parity of <span class="math-container">$2^{n-k}$</span> mod <span class="math-container">$5^k$</span>. Setting <span class="math-container">$ a =n-k \text{ } $</span> we see that the <span class="math-container">$k^{th}$</span> digit of <span class="math-container">$2^n$</span> is odd for integer <span class="math-container">$b\ge0$</span> such that</p>
<p><span class="math-container">$$1 + 2\cdot5^{k-1} (1 + 2b) \le n - k < 2\cdot5^{k-1} (1 + 2b) + log_2 5^k$$</span></p>
<p>To illustrate, here are some guaranteed odd digits for different <span class="math-container">$2^n$</span>: </p>
<p>(k=1 digit): <span class="math-container">$ 2\cdot5^0 + 2 = 4 \le n \le 5 $</span><br>
(k=2 digit): <span class="math-container">$ 2\cdot5^1 + 3 = 13 \le n \le 16 $</span><br>
(k=3 digit): <span class="math-container">$ 2\cdot5^2 + 4 = 54 \le n \le 59 $</span><br>
(k=4 digit): <span class="math-container">$ 2\cdot5^3 + 5 = 255 \le n \le 263 $</span> </p>
<p>Also note that these would repeat every <span class="math-container">$4\cdot5^{k-1}$</span> powers.</p>
<p>These guaranteed odd digits are not dense enough to cover all of the powers, but might this approach be extended somehow to find more odd digits?</p>
<p><strong>Edit 7:</strong> The two papers that Zander mentions below make me think that this is probably a pretty hard problem.</p>
| <p>This seems to be similar to (I'd venture to say as hard as) a problem of Erdős open since 1979, that the base-3 representation of $2^n$ contains a 2 for all $n>8$.</p>
<p><a href="http://arxiv.org/abs/math/0512006">Here is a paper by Lagarias</a> that addresses the ternary problem, and for the most part I think would generalize to the question at hand (we're also looking for the intersection of iterates of $x\rightarrow 2x$ with a Cantor set). Unfortunately it does not resolve the problem.</p>
<p>But Conjecture 2' (from Furstenberg 1970) in the linked paper suggests a stronger result, that every $2^n$ for $n$ large enough will have a 1 in the decimal representation. Though it doesn't quantify "large enough" (so even if proved wouldn't promise that 2048 is the largest all-even decimal), it looks like it might be true for all $n>91$ (I checked up to $n=10^6$).</p>
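<p>For what it's worth, both the all-even-digits question and the observation about the digit 1 are easy to probe by brute force (a Python sketch; the bound of $20000$ is arbitrary):</p>

<pre><code># Powers of 2 with all even digits, and powers of 2 with no digit 1:
all_even = [n for n in range(1, 20001) if set(str(2**n)) <= set("02468")]
no_one   = [n for n in range(1, 20001) if "1" not in str(2**n)]
print(all_even)      # [1, 2, 3, 6, 11], i.e. 2, 4, 8, 64, 2048
print(max(no_one))   # 91
</code></pre>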
| <p>This sequence is known to the <a href="http://oeis.org/A068994">OEIS</a>.</p>
<p>Here are the notes, which give no explicit answer but suppose that your conjecture is correct:</p>
<blockquote>
<p>Are there any more terms in this sequence?</p>
<p>Evidence that the sequence may be finite, from Rick L. Shepherd
(rshepherd2(AT)hotmail.com), Jun 23 2002:</p>
<p>1) The sequence of last two digits of $2^n$, A000855 of period $20$, makes clear that $2^n > 4$ must have $n = 3, 6, 10, 11,$ or $19 (\text{mod }20)$ for $2^n$ to be a member of this sequence. Otherwise, either the tens digit (in $10$ cases), as seen directly, or the hundreds digit, in the $5$ cases receiving a carry from the previous power's tens digit $\geq 5$, must be odd.</p>
<p>2) No additional term has been found for n up to $50000$.</p>
<p>3) Furthermore, again for each n up to $50000$, examining $2^n$'s digits
leftward from the rightmost but only until an odd digit was found, it
was only once necessary to search even to the 18th digit. This
occurred for $2^{12106}$ whose last digits are
$\ldots 3833483966860466862424064$. Note that $2^{12106}$ has $3645$ digits. (The
clear runner-up, $2^{34966}$, a $10526$-digit number, required searching
only to the $15$th digit. Exponents for which only the $14$th digit was
reached were only $590, 3490, 8426, 16223, 27771, 48966$ and $49519$ -
representing each congruence above.)</p>
</blockquote>
|
differentiation | <p>It is often quoted in physics textbooks for finding the electric potential using Green's function that </p>
<p>$$\nabla ^2 \left(\frac{1}{r}\right)=-4\pi\delta^3({\bf r}),$$ </p>
<p>or more generally </p>
<p>$$\nabla ^2 \left(\frac{1}{|| \vec x - \vec x'||}\right)=-4\pi\delta^3(\vec x - \vec x'),$$</p>
<p>where $\delta^3$ is the 3-dimensional <a href="http://mathworld.wolfram.com/DeltaFunction.html">Dirac delta distribution</a>.
However I don't understand how/where this comes from. Would anyone mind explaining?</p>
| <p>The gradient of $\frac1r$ (noting that $r=\sqrt{x^2+y^2+z^2}$) is</p>
<p>$$
\nabla \frac1r = -\frac{\mathbf{r}}{r^3}
$$
when $r\neq 0$, where $\mathbf{r}=x\mathbf{i}+y\mathbf{j}+z\mathbf{k}$. Now, the divergence of this is</p>
<p>$$
\nabla\cdot \left(-\frac{\mathbf{r}}{r^3}\right) = 0
$$
when $r\neq 0$. Therefore, for all points for which $r\neq 0$,</p>
<p>$$
\nabla^2\frac1r = 0
$$
However, if we integrate this function over a sphere, $S$, of radius $a$, then, applying Gauss's Theorem, we get</p>
<p>$$
\iiint_S \nabla^2\frac1r\,dV = \iint_{\Delta S} -\frac{\mathbf{r}}{r^3}\cdot d\mathbf{S}
$$
where $\Delta S$ is the surface of the sphere, and is outward-facing. Now, $d\mathbf{S}=\mathbf{\hat r}dA$, where $dA=r^2\sin\theta d\phi d\theta$. Therefore, we may write our surface integral as
$$\begin{align}
\iint_{\Delta S} -\frac{\mathbf{r}}{r^3}\cdot d\mathbf{S}&=-\int_0^\pi\int_0^{2\pi}\frac{r}{r^3}r^2\sin\theta \,d\phi \,d\theta\\
&=-\int_0^\pi\sin\theta d\theta\int_0^{2\pi}d\phi\\
&= -2\cdot 2\pi = -4\pi
\end{align}$$
Therefore, the value of the laplacian is zero everywhere except zero, and the integral over any volume containing the origin is equal to $-4\pi$. Therefore, the laplacian is equal to $-4\pi \delta(\mathbf{r})$.</p>
<p>EDIT: The general case is then obtained by replacing $r=|\mathbf{r}|$ with $s=|\mathbf{r}-\mathbf{r_0}|$, in which case the function shifts to $-4\pi \delta(\mathbf{r}-\mathbf{r_0})$</p>
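<p>(If you want to double-check the pointwise computation symbolically, a quick sympy sketch:)</p>

<pre><code>import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = 1 / sp.sqrt(x**2 + y**2 + z**2)
laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)
print(sp.simplify(laplacian))    # 0, valid wherever r != 0
</code></pre>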
| <p>I'm new around here (so suggestions about posting are welcome!) and want to give my contribution to this question, even though it is a bit old. I feel I need to because using the divergence theorem in this context is not quite rigorous. Strictly speaking, $1/r$ is not even differentiable at the origin. So here's a proof using limits of distributions.</p>
<p>Let $\mathbf{x}\in\mathbb{R}^{3}$ and $r=|\mathbf{x}|=\sqrt{x^2+y^2+z^2}$. It is evident from direct calculation that $\nabla^{2}\left(\frac{1}{r}\right)=0$ everywhere except at $\mathbf{x}=0$, where it is in fact not defined. Thus, the integral of $\nabla^{2}\left(\frac{1}{r}\right)$ over any volume not containing the origin is zero.</p>
<p>So let $\eta>0$ and $r_{\eta}=\sqrt{x^2+y^2+z^2+\eta^2}$. Obviously $\lim\limits_{\eta\rightarrow 0}r_{\eta}=r$. Direct calculation gives
\begin{equation}
\nabla^{2}\left(\frac{1}{r_{\eta}}\right) = \frac{-3\eta^{2}}{r_{\eta}^5}
\end{equation}
Now let us consider the distribution represented by $\nabla^{2}\left(\frac{1}{r_{\eta}}\right)$ and let $\rho$ be a test function (for example in the Schwartz space). I use Dirac's bra-ket notation to express the action of a distribution over a test function. Let $S^{2}$ be the unit sphere. Thus we calculate
\begin{align}
\lim\limits_{\eta\rightarrow 0}\left.\left\langle \nabla^{2}\left(\frac{1}{r_{\eta}}\right)\right|\rho\right\rangle &= \lim\limits_{\eta\rightarrow 0}\iiint\limits_{\mathbb{R}^3}\mathrm{d}^{3}x\, \nabla^{2}\left(\frac{1}{r_{\eta}}\right)\rho(\mathbf{x})\\
&=\lim\limits_{\eta\rightarrow 0}\left\{\iiint\limits_{\mathbb{R}^3\setminus S^{2}}\mathrm{d}^{3}x\, \frac{-3\eta^{2}}{r_{\eta}^5}\rho(\mathbf{x})+ \iiint\limits_{S^{2}}\mathrm{d}^{3}x\, \frac{-3\eta^{2}}{r_{\eta}^5}\rho(\mathbf{x})\right\}\\
&=\lim\limits_{\eta\rightarrow 0}\iiint\limits_{S^{2}}\mathrm{d}^{3}x\, \frac{-3\eta^{2}}{r_{\eta}^5}\rho(\mathbf{x})
\end{align}
Where the limit of the first of the integrals in the curly braces is zero (easy to show, referring to the laplacian of $1/r$, no need for $\eta$ in sets not containing the origin).
Now Taylor expand $\rho$ at $\mathbf{x}=0$ and integrate using spherical polar coordinates:
\begin{equation}
\lim\limits_{\eta\rightarrow 0}\int_{0}^{\pi}\mathrm{d}\theta\,\sin\theta\int_{0}^{2\pi}\mathrm{d}\varphi\,\int_{0}^{1}\mathrm{d}t\,\frac{-3\eta^{2}t^{2}}{(t^{2}+\eta^{2})^{5/2}}(\rho(0)+O(t^{2}))
\end{equation}
Integrating you will get that all the terms contained in $O(t^2)$ vanish as $\eta\rightarrow 0$, while the term with $\rho(0)$ remains. In fact you get
\begin{align}
\lim\limits_{\eta\rightarrow 0}\frac{-4\pi\rho(0)}{(1+\eta^{2})^{3/2}}&=-4\pi\rho(0)\\
&=\langle -4\pi\delta_{0}^{(3)}|\rho\rangle
\end{align}</p>
<p>From this argument one defines the limit distribution
\begin{equation}
\nabla^{2}\left(\frac{1}{r}\right):=\lim\limits_{\eta\rightarrow 0}\nabla^{2}\left(\frac{1}{r_{\eta}}\right)=-4\pi\delta_{0}^{(3)}
\end{equation}
The generalization to $r=|\mathbf{x}-\mathbf{x}_{0}|$ is obvious.</p>
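<p>(And a sympy sketch confirming the two computations used above, namely the regularized Laplacian and the radial integral over the unit ball:)</p>

<pre><code>import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
t, eta = sp.symbols('t eta', positive=True)

r_eta = sp.sqrt(x**2 + y**2 + z**2 + eta**2)
lap = sum(sp.diff(1 / r_eta, v, 2) for v in (x, y, z))
print(sp.simplify(lap + 3 * eta**2 / r_eta**5))    # 0

# Radial integral; the angular integration contributes a factor 4*pi.
I = sp.integrate(-3 * eta**2 * t**2 / (t**2 + eta**2)**sp.Rational(5, 2),
                 (t, 0, 1))
print(sp.simplify(4 * sp.pi * I))    # -4*pi/(1 + eta**2)**(3/2), -> -4*pi
</code></pre>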
|
logic | <p>I saw a sentence like,</p>
<pre><code>I am fine but he has flu.
</code></pre>
<p>Now I have to convert it into a <code>logical sentence</code> using <code>logical operators</code>. I have no idea what "but" should be translated to. Please help me out.</p>
<p>Thanks</p>
| <p>An alternative way of conveying the same information would be to say <em>"I am fine and he has flu."</em>.</p>
<p>Often, the word <strong>but</strong> is used in English to mean <strong>and</strong>, especially when there is some contrast or conflict between the statements being combined. To determine the logical form of a statement you must think about what the statement means, rather than just translating word by word into symbols.</p>
| <p>This seems like an exercise in semantics. I cannot think of a logical operator which fits other than $\land$.</p>
<p>However, if we define the predicate $\operatorname{Fine}(x)$ which holds if and only if $x$ is fine, then we can assume "has the flu" is $\lnot\operatorname{Fine}(x)$.</p>
<p>In which case we can write the sentence:</p>
<p>$$\operatorname{Fine}(\textbf{me})\land\lnot\operatorname{Fine}(\textbf{him})$$</p>
<p>If you want to distinguish $\operatorname{Flu}(x)$ from simply $\lnot\operatorname{Fine}(x)$, then we are reduced to:
$$\operatorname{Fine}(\textbf{me})\land\operatorname{Flu}(\textbf{him})$$</p>
|
linear-algebra | <p>I see on Wikipedia that the product of two commuting symmetric positive definite matrices is also positive definite. Does the same result hold for the product of two positive semidefinite matrices?</p>
<p>My proof of the positive definite case falls apart for the semidefinite case because of the possibility of division by zero...</p>
| <p>You have to be careful about what you mean by "positive (semi-)definite" in the case of non-Hermitian matrices. In this case I think what you mean is that all eigenvalues are
positive (or nonnegative). Your statement isn't true if "$A$ is positive definite" means $x^T A x > 0$ for all nonzero real vectors $x$ (or equivalently $A + A^T$ is positive definite). For example, consider
$$ A = \pmatrix{ 1 & 2\cr 2 & 5\cr},\ B = \pmatrix{1 & -1\cr -1 & 2\cr},\
AB = \pmatrix{-1 & 3\cr -3 & 8\cr},\ (1\ 0) A B \pmatrix{1\cr 0\cr} = -1$$</p>
<p>Let $A$ and $B$ be positive semidefinite real symmetric matrices. Then $A$ has a positive semidefinite square root, which I'll write as $A^{1/2}$. Now $A^{1/2} B A^{1/2}$ is symmetric and positive semidefinite, and $AB = A^{1/2} (A^{1/2} B)$ and $A^{1/2} B A^{1/2}$ have the same nonzero eigenvalues.</p>
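<p>A quick numerical illustration of both halves of this answer (a numpy sketch using the matrices above):</p>

<pre><code>import numpy as np

A = np.array([[1.0, 2.0], [2.0, 5.0]])
B = np.array([[1.0, -1.0], [-1.0, 2.0]])
AB = A @ B

print(np.linalg.eigvals(AB))        # both eigenvalues positive
x = np.array([1.0, 0.0])
print(x @ AB @ x)                   # -1.0: x^T (AB) x > 0 fails

# Positive semidefinite square root of A via eigendecomposition:
w, V = np.linalg.eigh(A)
A_half = V @ np.diag(np.sqrt(w)) @ V.T
C = A_half @ B @ A_half             # symmetric PSD
print(np.sort(np.linalg.eigvalsh(C)))   # same eigenvalues as AB
</code></pre>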
| <p>The product of two symmetric PSD matrices is PSD, iff the product is also symmetric.
More generally, if $A$ and $B$ are PSD, $AB$ is PSD iff $AB$ is normal, ie, $(AB)^T AB = AB(AB)^T$.</p>
<p>Reference:
On a product of positive semidefinite matrices, A.R. Meenakshi, C. Rajian, Linear Algebra and its Applications, Volume 295, Issues 1–3, 1 July 1999, Pages 3–6.</p>
|
probability | <p>A colleague popped into my office this afternoon
and asked me the following question. He told me there is a
clever proof when $n=2$. I couldn't do
anything with it, so I thought I'd post it here and see what happens.</p>
<p><em>Prove or find a counterexample</em></p>
<p>For positive, i.i.d. random variables $Z_1,\dots, Z_n$
with finite mean, and positive constants $a_1,\dots, a_n$,
we have
$$\mathbb{E}\left({\sum_{i=1}^n a_i^2 Z_i\over\sum_{i=1}^n a_i Z_i}\right)
\leq {\sum_{i=1}^n a_i^2\over\sum_{i=1}^n a_i}.$$</p>
<hr>
<p><strong>Added:</strong> This problem originates from the thesis of a student in Computer and Electrical Engineering at the University of Alberta. Here is the response from his supervisor: "Many thanks for this! It is a nice result in addition to being useful in a practical problem of antenna placement."</p>
| <p>Yes, the inequality always holds for i.i.d. random variables $Z_1,\ldots,Z_n$. In fact, as suggested by Yuval and joriki, it is enough to suppose that the joint distribution is invariant under permuting the $Z_i$. Rearranging the inequality slightly, we just need to show that the following is nonnegative (here, I am using $\bar a\equiv\sum_ia_i^2/\sum_ia_i$)
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\sum_ia_i(\bar a-a_i)\mathbb{E}\left[\frac{Z_i}{\sum_ja_jZ_j}\right].
$$
I'll write $c_i\equiv\mathbb{E}[Z_i/\sum_ja_jZ_j]$ for brevity. Then, noting that $\sum_ia_i(\bar a-a_i)=0$, choosing any constant $\bar c$ that we like,
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\sum_ia_i(\bar a-a_i)(c_i-\bar c).
$$
To show that this is nonnegative, it is enough to show that $c_i$ is a decreasing function of $a_i$ (that is, $c_i\le c_j$ whenever $a_i\ge a_j$). In that case, we can choose $\bar c$ so that $\bar c\ge c_i$ whenever $a_i\ge\bar a$ and $\bar c\le c_i$ whenever $a_i\le\bar a$. This makes each term in the final summation above nonnegative and completes the proof.</p>
<p>Choosing $i\not=j$ such that $a_i \ge a_j$,
$$
c_i-c_j=\mathbb{E}\left[\frac{Z_i-Z_j}{\sum_ka_kZ_k}\right]
$$
Let $\pi$ be the permutation of $\{1,\ldots,n\}$ which exchanges $i,j$ and leaves everything else fixed. Using invariance under permuting $Z_i,Z_j$,
$$
\begin{align}
2(c_i-c_j)&=\mathbb{E}\left[\frac{Z_i-Z_j}{\sum_ka_kZ_k}\right]-\mathbb{E}\left[\frac{Z_i-Z_j}{\sum_ka_kZ_{\pi(k)}}\right]\cr
&=\mathbb{E}\left[\frac{(a_j-a_i)(Z_i-Z_j)^2}{\sum_ka_kZ_k\sum_ka_kZ_{\pi(k)}}\right]\cr
&\le0.
\end{align}
$$
So $c_i$ is decreasing in $a_i$ as claimed.</p>
<hr>
<p><strong>Note:</strong> In the special case of $n=2$, we can always make the choice $\bar c=(c_1+c_2)/2$. Then, both terms of the summation on the right hand side of the second displayed equation above are the same, giving
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\frac{a_1a_2(a_2-a_1)(c_1-c_2)}{a_1+a_2}.
$$
Plugging in my expression above for $2(c_1-c_2)$ gives the identity
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\frac{a_1a_2(a_1-a_2)^2}{2(a_1+a_2)}\mathbb{E}\left[\frac{(Z_1-Z_2)^2}{(a_1Z_1+a_2Z_2)(a_1Z_2+a_2Z_1)}\right],
$$
which is manifestly nonnegative. I'm guessing this could be the "clever" proof mentioned in the question.</p>
<p><strong>Note 2:</strong> The proof above relies on $\mathbb{E}[Z_i/\sum_ja_jZ_j]$ being a decreasing function of $a_i$. More generally, for any decreasing $f\colon\mathbb{R}^+\to\mathbb{R}^+$, then $\mathbb{E}[Z_if(\sum_ja_jZ_j)]$ is a decreasing function of $a_i$. Choosing positive $b_i$ and setting $\bar a=\sum_ia_ib_i/\sum_ib_i$ then $\sum_ib_i(\bar a-a_i)=0$. Applying the argument above gives the inequality
$$
\mathbb{E}\left[f\left(\sum_ia_iZ_i\right)\sum_ia_ib_iZ_i\right]
\le
\mathbb{E}\left[f\left(\sum_ia_iZ_i\right)\sum_ib_iZ_i\right]\frac{\sum_ia_ib_i}{\sum_ib_i}
$$
The inequality in the question is the special case with $b_i=a_i$ and $f(x)=1/x$.</p>
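<p><strong>Note 3:</strong> the inequality is also easy to test numerically; here is a Monte Carlo sketch with exponential $Z_i$ and arbitrary positive $a_i$ (any positive i.i.d. law with finite mean should behave the same way):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
a = rng.uniform(0.1, 3.0, size=n)            # arbitrary positive constants
Z = rng.exponential(1.0, size=(trials, n))   # positive i.i.d., finite mean

lhs = np.mean((Z @ a**2) / (Z @ a))
rhs = np.sum(a**2) / np.sum(a)
print(lhs, "<=", rhs)                        # lhs stays below rhs
</code></pre>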
| <p>Since the $Z_i$ are i.i.d., the expectation is the same if we rename the variables. Taking all permutations, your inequality is equivalent to
$$ \mathbb{E} \left[ \frac{1}{n!} \sum_{\pi \in S_n} \frac{\sum_{i=1}^n a_i^2 Z_{\pi(i)}}{\sum_{i=1}^n a_i Z_{\pi(i)}} \right] \leq \frac{\sum_{i=1}^n a_i^2}{\sum_{i=1}^n a_i}. $$
Going over all possible values of $Z_1,\ldots,Z_n$, this is the same as the following inequality for positive real numbers:
$$ \frac{1}{n!} \sum_{\pi \in S_n} \frac{\sum_{i=1}^n a_i^2 z_{\pi(i)}}{\sum_{i=1}^n a_i z_{\pi(i)}} \leq \frac{\sum_{i=1}^n a_i^2}{\sum_{i=1}^n a_i}. $$
Intuitively, the maximum is attained at $z_i = \text{const}$, which is why we get the right-hand side. This is indeed true for $n = 2$, which is easy to check directly.</p>
|
combinatorics | <p>Is there any way of determining if $\binom{n}{k} \equiv 0\pmod{n}$. Note that I am aware of the case when $n =p$ a prime. Other than that there does not seem to be any sort of pattern (I checked up to $n=50$). Are there any known special cases where the problem becomes easier? As a place to start I was thinking of using $e_p(n!)$ defined as:</p>
<p>$$e_p(n!) = \sum_{k=1}^{\infty}\left \lfloor\frac{n}{p^k}\right \rfloor$$</p>
<p>Which counts the exponent of $p$ in $n!$ (Legendre's theorem I believe?)</p>
<p>Then knowing the prime factorization of $n$ perhaps we can determine if these primes appear more times in the numerator of $\binom{n}{k}$ than the denominator.</p>
<p>Essentially I am looking to see if this method has any traction to it and what other types of research have been done on this problem (along with any proofs of results) before. Thanks!</p>
| <p>Below is a picture of the situation. Red dots are the points of the first 256 rows of Pascal's triangle where $n\mid {n \choose k}$.</p>
<p><img src="https://i.sstatic.net/5oKgH.png" alt="enter image description here"></p>
<p>It appears that "most" values fit the bill.</p>
<p>One can prove the following:</p>
<blockquote>
<p><strong>Proposition</strong>: Whenever $(k, n)=1$, we have $n \mid {n\choose k}$.</p>
</blockquote>
<p>This follows from <a href="https://math.stackexchange.com/questions/51469/prime-dividing-the-binomial-coefficients/51475#51475">the case where $n$ is a prime power</a> (considering the various prime powers dividing $n$).</p>
<p>However, it happens quite often that $(k,n) \neq 1$ but $n \mid {n\choose k}$ still. For instance, $10 \mid {10\choose 4}=210$, but $(10,4) \neq 1$ (this is the smallest example). I do not think that there is a simple criterion.</p>
<p>In fact, it is interesting to consider separately the solutions $(n,k)$ into those which are relatively prime (which I'll call the <em>trivial</em> solutions) and those which are not. It appears that the non-trivial solutions are completely responsible for the Sierpinski pattern in the triangle above. Indeed, here are only the trivial solutions:</p>
<p><img src="https://i.sstatic.net/yWdFt.png" alt="enter image description here"></p>
<p>and here are the non-trivial solutions:</p>
<p><img src="https://i.sstatic.net/PZa0o.png" alt="enter image description here"></p>
<p>Let $f(n)$ be the number of $k$'s between $0$ and $n$ where $n \mid {n\choose k}$. By the proposition we have $\varphi(n)\le f(n) < n$.</p>
<p><strong>Question</strong>: is $$\text{lim sup } \frac{f(n)-\varphi(n)}{n}=1?$$</p>
<p>Here is a list plot of $\frac{f(n)-\varphi(n)}{n}$. The max value reached for $1<n<2000$ is about $0.64980544$. The blue dots at the bottom are the $n$'s such that $f(n) = \varphi(n)$.</p>
<p><img src="https://i.sstatic.net/RY8Jk.png" alt="enter image description here"></p>
| <p>Well, it holds for $n = p^2$ when $k$ is not divisible by $p$. Also for $n=2p$ when $k$ is divisible by neither $2$ nor $p$, and for $n=3p$ when $k$ is divisible by neither $3$ nor $p$.</p>
<hr>
<p><img src="https://i.sstatic.net/XhMVi.jpg" alt="enter image description here"></p>
|
probability | <p>Two events are mutually exclusive if they can't both happen.</p>
<p>Independent events are events where knowledge of whether one occurred doesn't change the probability of the other.</p>
<p>Are these definitions correct? If possible, please give more than one example and counterexample. </p>
| <p>Yes, that's fine.</p>
<p>Events are mutually exclusive if the occurrence of one event excludes the occurrence of the other(s). Mutually exclusive events cannot happen at the same time. For example: when tossing a coin, the result can either be <code>heads</code> or <code>tails</code> but cannot be both.</p>
<p>$$\left.\begin{align}P(A\cap B) &= 0 \\ P(A\cup B) &= P(A)+P(B)\\ P(A\mid B)&=0 \\ P(A\mid \neg B) &= \frac{P(A)}{1-P(B)}\end{align}\right\}\text{ mutually exclusive }A,B$$</p>
<p>Events are independent if the occurrence of one event does not influence (and is not influenced by) the occurrence of the other(s). For example: when tossing two coins, the result of one flip does not affect the result of the other.</p>
<p>$$\left.\begin{align}P(A\cap B) &= P(A)P(B) \\ P(A\cup B) &= P(A)+P(B)-P(A)P(B)\\ P(A\mid B)&=P(A) \\ P(A\mid \neg B) &= P(A)\end{align}\right\}\text{ independent }A,B$$</p>
<p>This of course means mutually exclusive events are not independent, and independent events cannot be mutually exclusive. (Events of measure zero excepted.)</p>
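<p>A quick simulation (illustrative only) makes the contrast concrete, using a coin toss and a die roll for independence, and the odd-face/six events on a single die for mutual exclusivity:</p>
<pre><code>import random

random.seed(1)
N = 10**6
coin = [random.randrange(2) for _ in range(N)]   # 1 = heads
die  = [random.randrange(1, 7) for _ in range(N)]

# independent: P(heads and six) should be near P(heads)P(six) = 1/12
both = sum(1 for c, d in zip(coin, die) if c == 1 and d == 6) / N
print(both, 1 / 12)

# mutually exclusive: a face that is both odd and a six never occurs
print(sum(1 for d in die if d % 2 == 1 and d == 6))  # exactly 0
</code></pre>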
| <p>After reading the answers above I still could not understand clearly the difference between mutually exclusive AND independent events. I found a nice answer from Dr. Pete posted on <a href="https://web.archive.org/web/20180128173051/http://mathforum.org/library/drmath/view/69825.html" rel="nofollow noreferrer">math forum</a>. So I attach it here so that op and many other confused guys like me could save some of their time.</p>
<blockquote>
<p><strong>If two events A and B are independent</strong>, a real-life example is the following. Consider a fair coin and a fair
six-sided die. Let event A be obtaining heads, and event B be rolling
a 6. Then we can reasonably assume that events A and B are
independent, because the outcome of one does not affect the outcome of
the other. The probability that both A and B occur is</p>
<p>P(A and B) = P(A)P(B) = (1/2)(1/6) = 1/12.</p>
<p><strong>An example of a mutually exclusive event</strong> is the following. Consider a
fair six-sided die as before, only in addition to the numbers 1
through 6 on each face, we have the property that the even-numbered
faces are colored red, and the odd-numbered faces are colored green.
Let event A be rolling a green face, and event B be rolling a 6. Then</p>
<p>P(B) = 1/6</p>
<p>P(A) = 1/2</p>
<p>as in our previous example. But it is obvious that events A and B
cannot simultaneously occur, since rolling a 6 means the face is red,
and rolling a green face means the number showing is odd. Therefore</p>
<p>P(A and B) = 0.</p>
<p>Therefore, we see that a mutually exclusive pair of nontrivial events
are also necessarily dependent events. This makes sense because if A
and B are mutually exclusive, then if A occurs, then B cannot also
occur; and vice versa. This stands in contrast to saying the outcome
of A does not affect the outcome of B, which is independence of
events.</p>
</blockquote>
|
probability | <p>Suppose the winning combination consists of $7$ digits, each digit randomly ranging from $0$ to $9$. So the probability of $1111111$, $3141592$ and $8174249$ are the same. But $1111111$ seems (to me) far less likely to be the lucky number than $8174249$. Is my intuition simply wrong or is it correct in some sense?</p>
| <p>You should never bet on that kind of sequence.
Now, every poster will agree that every sequence from 0000000 through 9999999 is equally probable. And if the prize is the same for all winners, it's fine. But, for <em>shared prizes</em>, you will find that you just beat 10 million to 1 odds only to split the pot with dozens of people.
To be clear, the odds are the same, no argument. But people's bets will not be 100% random. They will bet your number as well as a pattern of 2's or other single digits. They will bet 1234567. I can't comment whether pi's digits are a common pattern, but the bottom line is to avoid obvious patterns for shared prizes.</p>
<p>When numbers run 1-50 or so, the chance of shared prizes increases when all numbers are below 31, as many people bet dates and stick to 1-31. Not every bettor does this of course, but enough do that shared prizes show a skew due to this effect. </p>
<p>Again - odds are the same, but human nature skews the chance of split payout. I hope this answer is clear. </p>
| <p>Your intuition is wrong. Compare the two statements</p>
<p>A. The event "the lucky number has all its digits repeated" is much less probable than the event "the lucky number has a few repeated digits"</p>
<p>B. The number 1111111 (which has all its digits repeated) is much less probable than the number 8174249 (which has a few repeated digits).</p>
<p>A is true, B is false.</p>
<p>BTW, this can be related to the "<a href="https://physics.stackexchange.com/questions/66651/clear-up-confusion-about-the-meaning-of-entropy">entropy</a>" concept, and the distinction of microstates-vs-macrostates.</p>
|
combinatorics | <p>How many ways seven people can sit around a circular table?</p>
<p>At first, I thought it was $7!$ (the number of ways of sitting in seven chairs), but the answer is $(7-1)!$.</p>
<p>I don't understand how sitting around a circular table and sitting in seven chairs are different. Could somebody explain it please?</p>
| <p>In a <strong>circular arrangement</strong> we first have to fix the position of the <strong>first person</strong>, which can be done in only <strong>one way</strong> (since every position is considered the <strong>same</strong> if no one is already sitting in any of the seats, and because <strong>there are no marks on the positions</strong>).</p>
<p>Now, we can also assume that the remaining persons are to be seated in a <strong>line</strong>, because there is a <strong>fixed</strong> starting and ending <strong>point</strong>, i.e. to the left or right of the <strong>first person</strong>.</p>
<p>Once we have fixed the position for the <strong>first person</strong> we can now arrange the <strong>remaining</strong> $(7-1)$ <strong>persons</strong> in $(7-1)!= 6!$ ways.</p>
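<p>The count can also be confirmed by brute force (a quick check, not part of the argument): canonicalize each of the $7!$ linear seatings by rotating person $0$ to the front, and count the distinct results.</p>
<pre><code>from itertools import permutations
from math import factorial

n = 7
def canonical(seating):
    i = seating.index(0)                  # rotate so person 0 comes first
    return seating[i:] + seating[:i]

distinct = {canonical(p) for p in permutations(range(n))}
print(len(distinct), factorial(n - 1))    # 720 720
</code></pre>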
| <p>It depends on what you mean by "how many ways". </p>
<p>It's not unreasonable to count two seatings around the table which only differ by a rotation as "the same".</p>
<p>On the other hand, if the chairs and the view from the chairs are different, it might make more sense to count those seatings as different.</p>
|
linear-algebra | <p>The largest eigenvalue of a <a href="https://en.wikipedia.org/wiki/Stochastic_matrix" rel="noreferrer">stochastic matrix</a> (i.e. a matrix whose entries are nonnegative and whose rows add up to $1$) is $1$.</p>
<p>Wikipedia marks this as a special case of the <a href="https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem" rel="noreferrer">Perron-Frobenius theorem</a>, but I wonder if there is a simpler (more direct) way to demonstrate this result.</p>
| <p>Here's a really elementary proof (which is a slight modification of <a href="https://math.stackexchange.com/questions/8695/no-solutions-to-a-matrix-inequality/8702#8702">Fanfan's answer to a question of mine</a>). As Calle shows, it is easy to see that the eigenvalue $1$ is obtained. Now, suppose $Ax = \lambda x$ for some $\lambda > 1$. Since the rows of $A$ are nonnegative and sum to $1$, each element of vector $Ax$ is a convex combination of the components of $x$, which can be no greater than $x_{max}$, the largest component of $x$. On the other hand, at least one element of $\lambda x$ is greater than $x_{max}$, which proves that $\lambda > 1$ is impossible.</p>
| <p>Say <span class="math-container">$A$</span> is a <span class="math-container">$n \times n$</span> row stochastic matrix. Now:
<span class="math-container">$$A \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} =
\begin{pmatrix}
\sum_{i=1}^n a_{1i} \\ \sum_{i=1}^n a_{2i} \\ \vdots \\ \sum_{i=1}^n a_{ni}
\end{pmatrix}
=
\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}
$$</span>
Thus the eigenvalue <span class="math-container">$1$</span> is attained.</p>
<p>To show that this is the largest eigenvalue you can use the <a href="http://en.wikipedia.org/wiki/Gershgorin_circle_theorem" rel="noreferrer">Gershgorin circle theorem</a>. Take row <span class="math-container">$k$</span> in <span class="math-container">$A$</span>. The diagonal element will be <span class="math-container">$a_{kk}$</span> and the radius will be <span class="math-container">$\sum_{i\neq k} |a_{ki}| = \sum_{i \neq k} a_{ki}$</span> since all <span class="math-container">$a_{ki} \geq 0$</span>. This will be a circle with its center at <span class="math-container">$a_{kk} \in [0,1]$</span>, and a radius of <span class="math-container">$\sum_{i \neq k} a_{ki} = 1-a_{kk}$</span>. So this circle will have <span class="math-container">$1$</span> on its perimeter. This is true for all Gershgorin circles of this matrix (since <span class="math-container">$k$</span> was taken arbitrarily). Thus, since all eigenvalues lie in the union of the Gershgorin circles, all eigenvalues <span class="math-container">$\lambda_i$</span> satisfy <span class="math-container">$|\lambda_i| \leq 1$</span>.</p>
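<p>Both facts (that the eigenvalue $1$ is attained, and that it bounds the spectrum) are easy to corroborate numerically; here is a small check, an illustration rather than a proof, with a random row-stochastic matrix:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 6))
A /= A.sum(axis=1, keepdims=True)   # normalize each row to sum to 1

print(np.allclose(A @ np.ones(6), np.ones(6)))   # True: 1 is an eigenvalue
print(np.max(np.abs(np.linalg.eigvals(A))))      # 1.0 up to roundoff
</code></pre>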
|
number-theory | <p>In the comments to the question: <a href="https://math.stackexchange.com/questions/3978/if-ann-mid-bnn-for-all-n-then-a-b">If $(a^{n}+n ) \mid (b^{n}+n)$ for all $n$, then $ a=b$</a>, there was a claim that $5^n+n$ is never prime (for integer $n>0$).</p>
<p>It does not look obvious to prove, nor have I found a counterexample.</p>
<p>Is this really true?</p>
<p><strong>Update</strong>: $5^{7954} + 7954$ has been found to be prime by a computer: <a href="http://www.mersenneforum.org/showpost.php?p=233370&postcount=46" rel="noreferrer">http://www.mersenneforum.org/showpost.php?p=233370&postcount=46</a></p>
<p>Thanks to Douglas (and lavalamp)!</p>
| <p>A general rule-of-thumb for "is there a prime of the form f(n)?" questions is: unless there exists a set of small divisors D, called a <em>covering set</em>, such that every number of the form f(n) is divisible by some element of D, there will eventually be a prime. See, e.g. <a href="http://en.wikipedia.org/wiki/Sierpinski_number">Sierpinski numbers</a>.</p>
<p>Running WinPFGW (it should be available from the primeform yahoo group <a href="http://tech.groups.yahoo.com/group/primeform/">http://tech.groups.yahoo.com/group/primeform/</a>), it found that $5^n+n$ is a <a href="http://en.wikipedia.org/wiki/Probable_prime">3-probable prime</a> when $n=7954$. Moreover, for every $n$ less than $7954$, $5^n+n$ is composite.</p>
<p>To actually certify that $5^{7954}+7954$ is a prime, you could use Primo (available from <a href="http://www.ellipsa.eu/public/misc/downloads.html">http://www.ellipsa.eu/public/misc/downloads.html</a>). I've begun running it (so it's passed a few more pseudo-primality tests), but I doubt I will continue until it's completed -- it could take a long time (e.g. a few months).</p>
<p>EDIT: $5^{7954}+7954$ is officially prime. A proof certificate was given by lavalamp at <a href="http://www.mersenneforum.org/showpost.php?p=233370&postcount=46">mersenneforum.org</a>.</p>
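<p>For anyone who wants to reproduce the search, here is a minimal sketch using sympy's probabilistic <code>isprime</code>; the range is deliberately kept small, since (per the above) the first hit only appears at $n=7954$ and testing numbers of that size takes much longer:</p>
<pre><code>from sympy import isprime

# scan a small initial range; 5**n + n is even for odd n, so only even n can work
hits = [n for n in range(2, 500, 2) if isprime(5**n + n)]
print(hits)   # [] -- consistent with the first probable prime being n = 7954
</code></pre>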
| <p>If $n$ is odd, then $5^n + n$ is always even: the last digit of $5^n$ is $5$ for $n \gt 0$, so $5^n$ is odd, and odd plus odd is even. Hence, for odd $n \gt 0$, $5^n + n$ is composite.</p>
|
linear-algebra | <p>First, let me admit that I suffer from a fundamental confusion here, and so I will likely say something wrong. No pretenses here, just looking for a good explanation. </p>
<p>There is a theorem from linear algebra that two vector spaces are isomorphic if and only if they have the same dimension. It is also well-known that two sets have the same cardinality if and only if there exists a bijection between them. Herein lies the issue...</p>
<p>Obviously $|\mathbb{R}| = |\mathbb{R^2}| = \mathfrak c.$ This is often stated as "there are as many points in the plane as there are on a line." Why, then, are $\mathbb{R}$ and $\mathbb{R^2}$ not isomorphic?</p>
<p>It makes intuitive sense that they <em>shouldn't</em> be. After all, I can only "match up" each point in $\mathbb{R}$ with the first coordinate of $\mathbb{R^2}.$ I cannot trust this intuition, however, because it fails when considering the possibility of a bijection $f : \mathbb{N} \rightarrow \mathbb{Q}!$</p>
<p>Even more confusing: As real vector spaces, $\mathbb{C}$ is isomorphic to $\mathbb{R^2}.$ However there is a bijection between $\mathbb{C}$ and $\mathbb{R}$ (just consider the line $\rightarrow$ plane example as above).</p>
<p>If you can explain the error in my thinking, please help!</p>
| <p>To elaborate a bit on Tobias answer. The notion of isomorphism depends on which structure (category actually) you are studying. </p>
<p>Edit: Pete L. Clark pointed out that I was too sloppy with my original answer.</p>
<p>The idea of an isomorphism is that isomorphisms preserve all the structure one is studying. This means that if $X,Y$ are objects in some category, then there exist morphisms $f:X\rightarrow Y$, $g:Y\rightarrow X$ such that $f\circ g$ is the identity on $Y$, and $g\circ f$ is the identity on $X$.</p>
<p>To be a bit more explicit, if $X$ and $Y$ are sets, and there is a bijective function $X\rightarrow Y$, then we can construct the inverse $f^{-1}:Y\rightarrow X$. This inverse function is defined by $f^{-1}(y)=x$ iff $f(x)=y$. We have that $f\circ f^{-1}=id_Y$ and $f^{-1}\circ f=id_X$. </p>
<p>But if we are talking of vector spaces, we demand more. We want two vector spaces to be isomorphic iff we can realize the above situation by linear maps. This is not always possible, even though there exists a bijection (you cannot construct an invertible linear map $\mathbb{R}\rightarrow \mathbb{R}^2$). In the linear case, if a function is invertible and linear, its inverse is also linear.</p>
<p>In general, however, it need not be the case that the inverse of a structure-preserving map preserves the structure. Pete pointed out that the function $x\mapsto x^3$ is an invertible function. It is also differentiable, but its inverse is not differentiable at zero. Thus $x\mapsto x^3$ is not an isomorphism in the category of differentiable manifolds and differentiable maps.</p>
<p>I would like to conclude with the following. We cannot blatantly say that two things are isomorphic. It depends on the context. The isomorphism is always in a category. In the category of sets, isomorphisms are bijections; in the category of vector spaces, isomorphisms are invertible linear maps; in the category of groups, isomorphisms are group isomorphisms. This can be confusing. For example $\mathbb{R}$ can be seen as a lot of things. It is a set. It is a one dimensional vector space over $\mathbb{R}$. It is a group under addition. It is a ring. It is a differentiable manifold. It is a Riemannian manifold. In all these, $\mathbb{R}$ can be isomorphic (bijective, linearly isomorphic, group isomorphic, ring isomorphic, diffeomorphic, isometric) to different things. This all depends on the context.</p>
| <p>The difference is that an isomorphism is not just any bijective map. It must be a bijective linear map (ie, it must preserve the addition and scalar multiplication of the vector space).</p>
<p>So even though you have a bijection between $\mathbb{R}$ and $\mathbb{R}^2$, there is no way to make this bijection linear (as follows from looking at their dimensions)</p>
|
number-theory | <p>About a month ago, I got the following : </p>
<blockquote>
<p>For <strong>every positive rational number</strong> $r$, there exists a set of four <strong>positive integers</strong> $(a,b,c,d)$ such that
$$r=\frac{a^\color{red}{3}+b^\color{red}{3}}{c^\color{red}{3}+d^\color{red}{3}}.$$</p>
<p>For $r=p/q$ where $p,q$ are positive integers, we can take
$$(a,b,c,d)=(3ps^3t+9qt^4,\ 3ps^3t-9qt^4,\ 9qst^3+ps^4,\ 9qst^3-ps^4)$$
where $s,t$ are positive integers such that $3\lt r\cdot(s/t)^3\lt 9$.</p>
<p>For $r=2014/89$, for example, since we have $(2014/89)\cdot(2/3)^3\approx 6.7$, taking $(p,q,s,t)=(2014,89,2,3)$ gives us $$\frac{2014}{89}=\frac{209889^3+80127^3}{75478^3+11030^3}.$$</p>
</blockquote>
<p>Then, I began to try to find <strong>every positive integer</strong> $n$ such that the following proposition is true : </p>
<p><strong>Proposition</strong> : For every positive rational number $r$, there exists a set of four positive integers $(a,b,c,d)$ such that $$r=\frac{a^\color{red}{n}+b^\color{red}{n}}{c^\color{red}{n}+d^\color{red}{n}}.$$</p>
<p>The following is what I've got. Let $r=p/q$ where $p,q$ are positive integers.</p>
<ul>
<li><p>For $n=1$, the proposition is true. We can take $(a,b,c,d)=(p,p,q,q)$.</p></li>
<li><p>For $n=2$, the proposition is <strong>false</strong>. For example, no such sets exist for $r=7/3$.</p></li>
<li><p>For even $n$, the proposition is <strong>false</strong> because the proposition is false for $n=2$.</p></li>
</ul>
<p>However, I've been facing difficulty in the case of odd $n\ge 5$. I've tried to find a set of four positive integers $(a,b,c,d)$ similar to the one for $n=3$, but I have not been able to find any such set. So, here is my question.</p>
<blockquote>
<p><strong>Question</strong> : How can we find <strong>every odd number</strong> $n\color{red}{\ge 5}$ such that the following proposition is true?</p>
<p><strong>Proposition</strong> : For every positive rational number $r$, there exists a set of four positive integers $(a,b,c,d)$ such that $$r=\frac{a^n+b^n}{c^n+d^n}.$$</p>
</blockquote>
<p><em>Update</em> : I posted this question on <a href="https://mathoverflow.net/questions/200605/representing-every-positive-rational-number-in-the-form-of-anbn-cndn">MO</a>.</p>
<p><strong>Added</strong> : <a href="https://mks.mff.cuni.cz/kalva/short/soln/sh99n2.html" rel="noreferrer">Problem N2 of IMO 1999 Shortlist</a> asks the case $n=3$.</p>
| <p>For the above problem there are four sets of solutions (this is intuitive: for a, b, c, & d). In the case of positive rational r and any odd number n we can eliminate all but one of the solutions:</p>
<p>$d^n = 5 \wedge c^n = 1 \wedge a^n + b^n = 30 \wedge r = 5 \wedge a^n \in Z$</p>
<p>In the case of any odd number n≥3 we refer to the generating function:</p>
<p>$a^{2 n + 1} + b^{2 n + 1} = 30 \wedge c^{2 n + 1} = 1 \wedge d^{2 n + 1} = 5 \wedge r = 5 \wedge a^{2 n + 1} \in Z$</p>
<p>As well as the case of every odd number n≥5 (et. al):</p>
<p>$a^{2 n + 3} + b^{2 n + 3} = 30 \wedge c^{2 n + 3} = 1 \wedge d^{2 n + 3} = 5 \wedge r = 5 \wedge a^{2 n + 3} \in Z$</p>
<p>Quickly we discover that the value of n doesn't matter, as long as it's odd and positive, leading to the generalization:</p>
<p>$r = -c_5-1 \wedge a^{2n+1} + b^{2n+1} = (c_1+c_4+1)(c_5+1) \wedge c^{2n+1}+c_3 = c_1+c_2+1 \wedge c_2+d^{2n+1} = c_3+c_4 \wedge (c_5 | c_4 | c_3 | c_2 | c_1 | a^{2n+1}) \in Z$</p>
<p>For all n:</p>
<p>$r = -c_5-1 \wedge$</p>
<p>$a^n + b^n = (c_1+c_4+1)(c_5+1) \wedge$</p>
<p>$c^n+c_3 = c_1+c_2+1 \wedge$</p>
<p>$c_2+d^n = c_3+c_4 \wedge$</p>
<p>$(c_5 | c_4 | c_3 | c_2 | c_1 | a^n) \in Z$</p>
<p><em>Note: this isn't a complete answer so it might be more appropriate as a comment, but pending reputation I may as well take a naive crack at it. Excuse any abuse of notation or lack of comprehension--it's been over a decade since I've had any formal mathematics. Lastly, I welcome criticism, especially if it's informative and friendly!</em></p>
| <p>Your solution for n=3 includes an implicit change of variables:
$$ \left(a,b,c,d\right)=\left(x+y,x-y,u+v,u-v\right) $$
$$ r = \left(2x/2u\right)\left(x^2+3y^2\right)/\left(u^2+3v^2\right)$$
at which point the substitution
$$ \left(x,y,u,v\right)=\left(3ps^3t,9qt^4,9qst^3,ps^4\right)$$
yields the desired result of $$r=p/q$$</p>
<p>A similar two-step substitution for $n\ge 5$ may simplify the search.</p>
<p>For $n=5$, the substitution</p>
<p>$$ \left(a,b,c,d\right)=\left(x+y,x-y,u+v,u-v\right) $$</p>
<p>yields</p>
<p>$$ r = \left(2x/2u\right)\left(x^4+10x^2y^2+5y^4\right)/\left(u^4+10u^2v^2+5v^4\right)$$</p>
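<p>The $n=3$ parametrization is easy to verify with exact rational arithmetic; here is a quick sanity check on the $r=2014/89$ example from the question:</p>
<pre><code>from fractions import Fraction

p, q, s, t = 2014, 89, 2, 3
a = 3*p*s**3*t + 9*q*t**4
b = 3*p*s**3*t - 9*q*t**4
c = 9*q*s*t**3 + p*s**4
d = 9*q*s*t**3 - p*s**4
print(a, b, c, d)                          # 209889 80127 75478 11030
print(Fraction(a**3 + b**3, c**3 + d**3))  # 2014/89
</code></pre>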
|
probability | <p><span class="math-container">$\newcommand{\F}{\mathcal{F}} \newcommand{\powset}[1]{\mathcal{P}(#1)}$</span>
I am reading lecture notes which contradict my understanding of random variables. Suppose we have a probability space <span class="math-container">$(\Omega, \mathcal{F}, Pr)$</span>, where </p>
<ul>
<li><p><span class="math-container">$\Omega$</span> is the set of outcomes</p></li>
<li><p><span class="math-container">$\F \subseteq \powset{\Omega}$</span> is the collection of events, a <span class="math-container">$\sigma$</span>-algebra</p></li>
<li><p><span class="math-container">$\Pr:\Omega\to[0,1]$</span> is the mapping outcomes to their probabilities.</p></li>
</ul>
<p>If we take the standard definition of a random variable <span class="math-container">$X$</span>, it is actually a function from the sample space to real values, i.e. <span class="math-container">$X:\Omega \to \mathbb{R}$</span>.</p>
<p>What now confuses me is the precise definition of the term <em>support</em>. </p>
<p><a href="https://en.wikipedia.org/wiki/Support_%28mathematics%29" rel="noreferrer">According to Wikipedia</a>:</p>
<blockquote>
<p>the support of a function is the set of points where the function is
not zero valued.</p>
</blockquote>
<p>Now, applying this definition to our random variable <span class="math-container">$X$</span>, these <a href="http://www.math.fsu.edu/~paris/Pexam/3-Random%20Variables.pdf" rel="noreferrer">lectures notes</a> say:</p>
<blockquote>
<p>Random Variables – A random variable is a real valued function defined
on the sample space of an experiment. Associated with each random
variable is a probability density function (pdf) for the random
variable. The sample space is also called the support of a random
variable.</p>
</blockquote>
<p>I am not entirely convinced with the line <em>the sample space is also callled the support of a random variable</em>. </p>
<p>Why would <span class="math-container">$\Omega$</span> be the support of <span class="math-container">$X$</span>? What if the random variable <span class="math-container">$X$</span> so happened to map some element <span class="math-container">$\omega \in \Omega$</span> to the real number <span class="math-container">$0$</span>, then that element would not be in the support?</p>
<p>What is even more confusing is, when we talk about support, do we mean that of <span class="math-container">$X$</span> or that of the distribution function <span class="math-container">$\Pr$</span>? </p>
<p><a href="https://math.stackexchange.com/questions/416035/support-vs-range-of-a-random-variable">This answer says</a> that:</p>
<blockquote>
<p>It is more accurate to speak of the support of the distribution than
that of the support of the random variable.</p>
</blockquote>
<p>Do we interpret the <em>support</em> to be</p>
<ul>
<li>the set of outcomes in <span class="math-container">$\Omega$</span> which have a non-zero probability, </li>
<li>the set of values that <span class="math-container">$X$</span> can take with non-zero probability?</li>
</ul>
<p>I think being precise is important, although my literature does not seem very rigorous.</p>
| <blockquote>
<p>I am not entirely convinced with the line <em>the sample space is also called the support of a random variable</em> </p>
</blockquote>
<p>That looks quite wrong to me.</p>
<blockquote>
<p>What is even more confusing is, when we talk about support, do we mean that of $X$ or that of the distribution function $Pr$?</p>
</blockquote>
<p>In rather informal terms, the "support" of a random variable $X$ is defined as the support (in the function sense) of the density function $f_X(x)$. </p>
<p>I say, in rather informal terms, because the density function is a quite intuitive and practical concept for dealing with probabilities, but no so much when speaking of probability in general and formal terms. For one thing, it's not a proper function for "discrete distributions" (again, a practical but loose concept). </p>
<p>In more formal/strict terms, the comment of Stefan fits the bill.</p>
<blockquote>
<p>Do we interpret the support to be</p>
<ul>
<li>the set of outcomes in $\Omega$ which have a non-zero probability,</li>
<li>the set of values that $X$ can take with non-zero probability?</li>
</ul>
</blockquote>
<p>Neither, actually. Consider a random variable that has a uniform density in $[0,1]$, with $\Omega = \mathbb{R}$.
Then the support is the full interval $[0,1]$, which is a subset of $\Omega$. The point $x=1/2$, say, belongs to the support, yet the probability that $X$ takes this exact value is zero.</p>
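<p>To make this concrete, here is a tiny simulation (illustrative only) of the uniform example: every small ball around a point inside the support carries positive probability, even though each individual point has probability zero, while balls around points outside $[0,1]$ carry none.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
x = rng.random(10**6)        # samples from Uniform[0, 1]

r = 0.01                     # ball radius
for point in (0.5, 1.5):
    print(point, np.mean(np.abs(x - point) < r))   # ~0.02 inside, 0.0 outside
</code></pre>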
| <h1>TL;DR</h1>
<p>The support of a random variable <span class="math-container">$X$</span> can be defined as the smallest closed set <span class="math-container">$R_X \in \mathcal{B}$</span> such that its probability is 1, as Did pointed out in their comment. An alternative definition is the one given by Stefan Hansen in his comment: the set of points in <span class="math-container">$\mathbb{R}$</span> around which any ball (i.e. open interval in 1-D) with nonzero radius has a nonzero probability. (See the section "Support of a random variable" below for a proof of the equivalence of these definitions.)</p>
<p>Intuitively, if any neighbourhood around a point, no matter how small, has a nonzero probability, then that point is in the support, and vice-versa.</p>
<hr />
<br>
<p>I'll start from the beginning to make sure we're using the same definitions.</p>
<h1>Preliminary definitions</h1>
<h2>Probability space</h2>
<p><span class="math-container">$\newcommand{\A}{\mathcal{A}} \newcommand{\powset}[1]{\mathcal{P}(#1)} \newcommand{\R}{\mathbb{R}} \newcommand{\deq}{\stackrel{\scriptsize def}{=}} \newcommand{\N}{\mathbb{N}}$</span>
Let <span class="math-container">$(\Omega, \A, \Pr)$</span> be a probability space, defined as follows:</p>
<ul>
<li><p><span class="math-container">$\Omega$</span> is the set of <strong>outcomes</strong></p>
</li>
<li><p><span class="math-container">$\A \subseteq \powset{\Omega} $</span> is the collection of <strong>events</strong>, a <a href="https://en.wikipedia.org/wiki/Sigma-algebra#Definition" rel="nofollow noreferrer"><span class="math-container">$\sigma$</span>-algebra</a></p>
</li>
<li><p><span class="math-container">$\Pr\colon\ \mathbf{\A}\to[0,1]$</span> is the <strong>mapping of events to their probabilities</strong>.
It has to satisfy some properties:</p>
<ul>
<li><span class="math-container">$\Pr(\Omega) = 1$</span> (we know <span class="math-container">$\Omega \in \A$</span> since <span class="math-container">$\A$</span> is a <span class="math-container">$\sigma$</span>-algebra of <span class="math-container">$\Omega$</span>)</li>
<li>has to be <a href="https://en.wikipedia.org/wiki/Sigma_additivity#%CF%83-additive_set_functions" rel="nofollow noreferrer">countably additive</a></li>
</ul>
</li>
</ul>
<br>
<h2>Random variable</h2>
<p>A random variable <span class="math-container">$X$</span> is defined as a map <span class="math-container">$X\colon\; \Omega \to \R$</span> such that, for any <span class="math-container">$x\in\R$</span>, the set <span class="math-container">$\{\omega \in \Omega \mid X(\omega) \le x\}$</span> is an element of <span class="math-container">$\A$</span>, ergo, an element of <span class="math-container">$\Pr$</span>'s domain to which a probability can be assigned.</p>
<p>We can think of <span class="math-container">$X$</span> as a "realisation" of <span class="math-container">$\Omega$</span>, in that it assigns a real number to each outcome in <span class="math-container">$\Omega$</span>. Intuitively, this condition means that we are assigning numbers to outcomes in an order such that the set of outcomes whose assigned number is less than a certain threshold (think of cutting the real number line at the threshold and forming the set of outcomes whose number falls
on or to the left of that) is always one of the events in <span class="math-container">$\A$</span>, meaning we can assign it a probability.</p>
<p>This is necessary in order to define the following concepts.</p>
<h3>Cumulative Distribution Function of a random variable</h3>
<p>The probability distribution function (or <strong>cumulative distribution function</strong>) of a random variable <span class="math-container">$X$</span> is defined as the map
<span class="math-container">$$
\begin{align}
F_X \colon \quad \R \ &\to\ [0, 1] \\
x\ &\mapsto\ \Pr(X \le x) \deq \Pr(X^{-1}(I_x))
\end{align}
$$</span></p>
<p>where <span class="math-container">$I_x \deq (-\infty, x]$</span>. (NB: <span class="math-container">$X^{-1}$</span> denotes preimage, not inverse; <span class="math-container">$X$</span> might well be non-injective.)</p>
<p>For notational clarity, define the following:</p>
<ul>
<li><span class="math-container">$\Omega_{\le x} \deq X^{-1}((-\infty, x]) = X^{-1}(I_x)$</span></li>
<li><span class="math-container">$\Omega_{> x} \deq X^{-1}((x, +\infty)) = X^{-1}(\overline{I_x}) = \overline{\Omega_{\le x}}$</span> where <span class="math-container">$\overline{\phantom{\Omega}}$</span> denotes set complement (in <span class="math-container">$\R$</span> or <span class="math-container">$\Omega$</span>, depending on the context)</li>
<li><span class="math-container">$\Omega_{< x} \deq X^{-1}((-\infty, x)) = \displaystyle\bigcup_{n\in\N} X^{-1} \left(I_{x-\frac{1}{n}}\right)$</span></li>
<li><span class="math-container">$\Omega_{=x} \deq X^{-1}(x) = \Omega_{\le x} \setminus \Omega_{< x}$</span></li>
</ul>
<p>we know all of these are still in <span class="math-container">$\A$</span> since <span class="math-container">$\A$</span> is a <span class="math-container">$\sigma$</span>-algebra.</p>
<p>We can see that</p>
<ul>
<li><span class="math-container">$\Pr(X > x) \deq \Pr(\Omega_{>x}) = \Pr(\overline{\Omega_{\le x}}) = 1 - \Pr(\Omega_{\le x}) = 1 - F_X(x)$</span></li>
</ul>
<br>
<ul>
<li><span class="math-container">$\Pr(X < x) \deq \Pr(\Omega_{<x}) = \Pr\left(\displaystyle\bigcup_{n\in\N} X^{-1} \left(I_{x-\frac{1}{n}}\right)\right)$</span> <span class="math-container">$= \lim\limits_{n \to \infty} \Pr(X \le x - \frac{1}{n}) = \lim\limits_{n \to \infty} F_X(x - \frac{1}{n}) = \lim\limits_{t \to x^-} F_X(t) \deq F_X(x^-)$</span>
<br><br>
since <span class="math-container">$X^{-1} \left(I_{x-\frac{1}{n}}\right) \subseteq X^{-1} \left(I_{x-\frac{1}{n+1}}\right)$</span> for all <span class="math-container">$n\in\N$</span>.</li>
</ul>
<br>
<ul>
<li><span class="math-container">$\Pr(X = x) \deq \Pr(\Omega_{=x}) = \Pr(\Omega_{\le x} \setminus \Omega_{<x})= \Pr(\Omega_{\le x}) - \Pr(\Omega_{<x}) = F_X(x) - F_X(x^-)$</span></li>
</ul>
<p>and so forth.</p>
<p>Note that the limit that defines <span class="math-container">$F(x^-)$</span> always exists because <span class="math-container">$F_X$</span> is nondecreasing (since if <span class="math-container">$x< y$</span>, then <span class="math-container">$\Omega_{\le x} \subseteq \Omega_{\le y}$</span> and <span class="math-container">$\Pr$</span> is <span class="math-container">$\sigma$</span>-additive) and bounded above (by <span class="math-container">$1$</span>), so the monotone convergence theorem guarantees that the images under <span class="math-container">$F_X$</span> of <em>any</em> nondecreasing sequence approaching <span class="math-container">$x$</span> from the left will also converge, and thus the continuous limit <span class="math-container">$\lim_{t \to x^-} F_X(t)$</span> exists.</p>
<br>
<h2>Probability measure on <span class="math-container">$\R$</span> by <span class="math-container">$X$</span></h2>
<p>The mapping defined by <span class="math-container">$X$</span> is sufficient to uniquely define a probability measure on <span class="math-container">$\R$</span>; that is, a map
<span class="math-container">$$
\begin{align}
P_X \colon \quad \mathcal{B} \subset \powset{\R} \ &\to \ [0, 1]\\
A \ &\mapsto \ \Pr(X \in A) \deq \Pr(X^{-1}(A))
\end{align}
$$</span>
that assigns to any set <span class="math-container">$A \in \mathcal{B}$</span> the probability of the corresponding event in <span class="math-container">$\A$</span>.</p>
<p>Here <span class="math-container">$\mathcal{B}$</span> is the <a href="https://en.wikipedia.org/wiki/Borel_set" rel="nofollow noreferrer">Borel <span class="math-container">$\sigma$</span>-algebra</a> in <span class="math-container">$\R$</span>, which is, loosely speaking, the smallest <span class="math-container">$\sigma$</span>-algebra containing all of the semi-intervals <span class="math-container">$(-\infty, x]$</span>. The reason why <span class="math-container">$P_X$</span> is defined only on those sets is because in our definition we only required <span class="math-container">$X^{-1}(A) \in \A$</span> to be true for the semi-intervals of the form <span class="math-container">$A = (-\infty, x]$</span>; thus <span class="math-container">$X^{-1}(A)$</span> is an element of <span class="math-container">$\A$</span> only when <span class="math-container">$A$</span> is "generated" by those semi-intervals, their complements, and countable unions/intersections thereof (according to the rules of a <span class="math-container">$\sigma$</span>-algebra).</p>
<br>
<hr />
<br>
<h1>Support of a random variable</h1>
<h2>Formal definition</h2>
<p>Formally, the <strong>support</strong> of <span class="math-container">$X$</span> can be defined as the smallest closed set <span class="math-container">$R_X \in \mathcal{B}$</span> such that <span class="math-container">$P_X(R_X) = 1$</span>, as Did pointed out in their comment.</p>
<p>An alternative but equivalent definition is the one given by Stefan Hansen in his comment:</p>
<blockquote>
<p>The support of a random variable <span class="math-container">$X$</span> is the set <span class="math-container">$\{x\in \R \mid P_X(B(x,r))>0, \text{ for all } r>0\}$</span> where <span class="math-container">$B(x,r)$</span> denotes the ball with center at <span class="math-container">$x$</span> and radius <span class="math-container">$r$</span>. In particular, the support is a subset of <span class="math-container">$\R$</span>.</p>
</blockquote>
<p>(This can be generalized as-is to random variables with values in <span class="math-container">$\R^n$</span>, but I'll stick to <span class="math-container">$\R$</span> as that's how I defined random variables.)
The equivalence can be proven as follows:</p>
<blockquote>
<p><strong>Proof</strong> <br>
Let <span class="math-container">$R_X$</span> be the smallest closed set in <span class="math-container">$\mathcal{B}$</span> such that <span class="math-container">$P_X(R_X) = 1$</span>.
Since <span class="math-container">$\R \setminus {R_X}$</span> is open, for every <span class="math-container">$x$</span> outside <span class="math-container">$R_X$</span> there exists a radius <span class="math-container">$r\in\R_{>0}$</span> such that the open ball <span class="math-container">$B(x, r)$</span> lies completely outside of <span class="math-container">$R_X$</span>.
That, in turn, implies that <span class="math-container">$P_X(B(x, r)) = 0$</span>—otherwise, if this were strictly positive, <span class="math-container">$P_X(R_X \cup B(x, r)) = P_X(R_X) + P_X(B(x, r)) > P_X(R_X) = 1$</span>, a contradiction.</p>
<p>Conversely, let <span class="math-container">$x \in \R$</span> and suppose <span class="math-container">$P_X(B(x, r)) = 0$</span> for some <span class="math-container">$r\in\R_{>0}$</span>. Then <span class="math-container">$B(x, r)$</span> lies completely outside <span class="math-container">$R_X$</span>, and, in particular, <span class="math-container">$x$</span> is not in <span class="math-container">$R_X$</span>. Otherwise <span class="math-container">$R_X \setminus B(x, r)$</span> would be a closed set smaller than <span class="math-container">$R_X$</span> satisfying <span class="math-container">$P_X(R_X \setminus B(x, r)) = 1$</span>.</p>
<p>This proves <span class="math-container">$\R \setminus R_X = \{x\in\R \mid \exists r \in \R_{>0}\quad P_X(B(x, r)) = 0\}$</span>.
Negating the predicate, one gets <span class="math-container">$R_X = \{x\in\R \mid \forall r \in \R_{>0}\quad P_X(B(x, r)) > 0\}$</span>.</p>
</blockquote>
<p>But more often, different definitions are given.</p>
<br>
<h2>Alternative definition for discrete random variables</h2>
<p>A discrete random variable can be defined as a random variable <span class="math-container">$X$</span> such that <span class="math-container">$X(\Omega)$</span> is countable (either finite or countably infinite). Then, for a discrete random variable the support can be defined as</p>
<p><span class="math-container">$$R_X \deq \{x\in\R \mid \Pr(X = x) > 0\}\,.$$</span></p>
<p>Note that <span class="math-container">$R_X \subseteq X(\Omega)$</span> and thus <span class="math-container">$R_X$</span> is countable. We can prove this by proving its contrapositive:</p>
<blockquote>
<p>Suppose <span class="math-container">$x \in \R$</span> and <span class="math-container">$x \notin X(\Omega)$</span>. We can distinguish two cases: either <span class="math-container">$x < y$</span> <span class="math-container">$\forall y \in R_X$</span>, or <span class="math-container">$x > y$</span> <span class="math-container">$\forall y \in R_X$</span>, or neither.</p>
<p>Suppose <span class="math-container">$x < y$</span> <span class="math-container">$\forall y \in R_X$</span>. Then <span class="math-container">$\Pr(X = x) \le \Pr(X \le x) = \Pr(X^{-1}(I_x)) = \Pr(\emptyset) = 0$</span>, since <span class="math-container">$\forall \omega\in\Omega\ X(\omega) > x$</span>. Ergo, <span class="math-container">$x\notin R_X$</span>.</p>
<p>The case in which <span class="math-container">$x > y$</span> <span class="math-container">$\forall y \in X(\Omega)$</span> is analogous.</p>
<p>Suppose now <span class="math-container">$\exists y_1, y_2 \in X(\Omega)$</span> such that <span class="math-container">$y_1 < x < y_2$</span>. Let <span class="math-container">$L = \{y\in X(\Omega) \mid y < x\}$</span>, which is nonempty (it contains <span class="math-container">$y_1$</span>) and bounded above by <span class="math-container">$x$</span>, so <span class="math-container">$\sup L$</span> exists and <span class="math-container">$\sup L \le x$</span>. Since <span class="math-container">$X$</span> takes no value in <span class="math-container">$(\sup L, x]$</span> (a value below <span class="math-container">$x$</span> would belong to <span class="math-container">$L$</span> and could not exceed <span class="math-container">$\sup L$</span>, and <span class="math-container">$x$</span> itself is not attained), we get <span class="math-container">$F_X(x) = \Pr(X \le x) = \Pr(X < x) = F_X(x^-)$</span>, and therefore <span class="math-container">$\Pr(X=x) = F_X(x) - F_X(x^-) = 0$</span>.</p>
</blockquote>
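<p>As a concrete toy illustration of this definition (my own example, not from the text above): the support of the sum of two fair dice is exactly the set of values taken with strictly positive probability.</p>
<pre><code>from collections import Counter
from itertools import product

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
support = sorted(v for v, c in counts.items() if c > 0)
print(support)   # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
</code></pre>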
<br>
<h2>Alternative definition for continuous random variables</h2>
<p>Notice that for absolutely continuous random variables (that is, random variables whose distribution function is the integral of a density <span class="math-container">$f_X$</span>, and hence continuous on all of <span class="math-container">$\R$</span>), <span class="math-container">$\Pr(X = x) = 0$</span> for all <span class="math-container">$x\in \R$</span>, since, by continuity, <span class="math-container">$F_X(x^-) = F_X(x)$</span>. But that doesn't mean that the outcomes in <span class="math-container">$X^{-1}(\{x\})$</span> are "impossible", informally speaking. Thus, in this case, the support is defined as</p>
<p><span class="math-container">$$ R_X = \mathrm{closure}{\{x \in \R \mid f_X(x) > 0\}}\,,$$</span></p>
<p>which intuitively can be justified as being the set of points around which we can make an arbitrarily small interval on which the integral of the PDF is strictly positive.</p>
|
logic | <p>I just started to learn mathematical logic. I'm a graduate student. I need a book with relatively more examples. Any recommendation?</p>
| <p>For my work in this area, I refer to:</p>
<ul>
<li>Richard Epstein "Classical Mathematical Logic"</li>
<li>Wolfgang Rautenberg "A Concise Introduction to Mathematical Logic"</li>
<li>Jon Barwise "Handbook of Mathematical Logic"</li>
<li>Jean Heijenoort "From Frege to Gödel"</li>
<li>Wei Li "Mathematical Logic: Foundations for Information Science"</li>
</ul>
<p>Rautenberg has a lot of examples, exercise, but is very heavy going (at least for me). Epstein is fairly recent and very well laid out. While, Barwise is the most comprehensive for when you need to deep dive.</p>
| <p>A book that should be read by everyone in mathematics regardless of level is Wolfe's <em>A Tour Through Mathematical Logic</em>. </p>
<p>It's simply a compulsory read, I couldn't put it down. It gives a broad overview of mathematical logic and set theory along with its history, and it is absolutely beautifully written. That's the best place for anyone to begin. </p>
|
combinatorics | <p>I'm teaching an intro programming course and came up with a recursion problem for my students to solve that's inspired by the game <a href="https://en.wikipedia.org/wiki/Chomp" rel="noreferrer">Chomp</a>. Here's the problem statement:</p>
<blockquote>
<p>You have a chocolate bar that’s subdivided into individual squares.
You decide to eat the bar according to the following rule: <strong><em>if you
choose to eat one of the chocolate squares, you have to also eat every
square below and/or to the right of that square.</em></strong></p>
<p>For example, here’s one of the many ways you could eat a 3 × 5
chocolate bar while obeying the rule. The star at each step indicates
the square chosen out of the chocolate bar, and the gray squares
indicate which squares must also be eaten in order to comply with the
above rule.</p>
<p><img src="https://i.sstatic.net/5gjUe.png" alt="enter image description here"></p>
<p>The particular choice of the starred square at each step was
completely arbitrary, but once a starred square is picked the choice
of grayed-out squares is forced. You have to eat the starred square,
plus each square that’s to the right of that square, below that
square, or both. The above route is only one way to eat the chocolate
bar. Here’s another:</p>
<p><img src="https://i.sstatic.net/8pp1M.png" alt="enter image description here"></p>
<p>As before, there’s no particular pattern to how the starred squares
were chosen, but once we know which square is starred the choice of
gray squares is forced.</p>
<p>Now, given an <span class="math-container">$m \times n$</span> candy bar, determine the number of different ways you can eat the candy bar while obeying the above rule.</p>
</blockquote>
<p>When I gave this to my students, I asked them to solve it by writing a recursive function that explores all the different routes by which the chocolate bar could be eaten. But as I was writing this problem, I started wondering - is there a closed-form solution?</p>
<p>I used my own solution to this problem to compute the number of different sequences that exist for different values of <span class="math-container">$m$</span> and <span class="math-container">$n$</span>, and here's what I found:</p>
<p><span class="math-container">$$\left(\begin{matrix}
1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 2 & 4 & 8 & 16 & 32\\
1 & 2 & 10 & 58 & 370 & 2514 & 17850\\
1 & 4 & 58 & 1232 & 33096 & 1036972 & 36191226\\
1 & 8 & 370 & 33096 & 4418360 & 768194656 & 161014977260\\
1 & 16 & 2514 & 1036972 & 768194656 & 840254670736 & 1213757769879808\\
1 & 32 & 17850 & 36191226 & 161014977260 & 1213757769879808 & 13367266491668337972
\end{matrix}\right)$$</span></p>
<p>Some of these rows show nice patterns. The second row looks like it's all the powers of two, and that makes sense because if you have a <span class="math-container">$1 \times n$</span> chocolate bar then any subsequence of the squares that includes the first square, taken in sorted order, is a way to eat the candy bar. The third row shows up as <a href="http://oeis.org/A086871" rel="noreferrer">A086871</a> on the OEIS, but none of the rows after that appear to be known sequences. The diagonal sequence isn't on the OEIS either.</p>
<p>I believe that this problem is equivalent to a different one:</p>
<blockquote>
<p>Consider the partial order defined as the Cartesian product of the less-than relation over the sets <span class="math-container">$[m] = \{0, 1, 2, ..., m - 1\}$</span> and <span class="math-container">$[n]$</span>. How many distinct sequences of elements of this partial order exist so that no term in the sequence is dominated by any previous element and the final element is the maximum element of the order?</p>
</blockquote>
<p>I'm completely at a loss for how to determine the answer to that question.</p>
<p>Is there a nice closed-form solution to this problem?</p>
| <p>This is a starter providing some ideas which can be used to iteratively determine the number of ways to eat an <span class="math-container">$(m\times n)$</span> chocolate bar. We consider an <span class="math-container">$(m\times n)$</span> rectangle and start eating from bottom left to top right. The graphic below shows a valid configuration of a <span class="math-container">$(7\times 4)$</span> chocolate bar after three bites, indicated by <span class="math-container">$X$</span>. </p>
<p> <a href="https://i.sstatic.net/OOsm0.png" rel="noreferrer"><img src="https://i.sstatic.net/OOsm0.png" alt="enter image description here"></a></p>
<p><strong>Valid paths:</strong></p>
<p>We characterize a valid path by an <span class="math-container">$n$</span>-tuple giving for each <span class="math-container">$y$</span>, <span class="math-container">$1\leq y\leq n$</span>, the corresponding <span class="math-container">$x$</span>-value, <span class="math-container">$0\leq x\leq m$</span>. The valid path in the graphic is encoded this way as <span class="math-container">${(1,2,2,5)}$</span>. We have a total of <span class="math-container">$\binom{m+n}{n}$</span> valid paths and consider these paths as building blocks to determine the number of ways to eat the chocolate bar. A valid path is encoded as <span class="math-container">$(x_1,x_2,\ldots,x_n)$</span> with <span class="math-container">$0\leq x_1\leq \cdots \leq x_n\leq m$</span>. The final path is <span class="math-container">$(m,m,\ldots,m)$</span>.</p>
<p>In order to determine the number of ways to obtain <span class="math-container">$(1,2,2,5)$</span> we consider all possible predecessors from which we can get <span class="math-container">$(1,2,2,5)$</span> in one step. We add up the number of ways to obtain all predecessors and get so the number of ways for <span class="math-container">$(1,2,2,5)$</span>. The predecessors of <span class="math-container">$(1,2,2,5)$</span> are indicated by the grey shaded regions and are
<span class="math-container">\begin{align*}
(\color{blue}{0},2,2,5)\qquad (1,2,2,\color{blue}{2})\\
(1,\color{blue}{1},2,5)\qquad (1,2,2,\color{blue}{3})\\
(1,\color{blue}{1},\color{blue}{1},5)\qquad (1,2,2,\color{blue}{4})\\
\end{align*}</span>
The blue marked coordinates are to bite off to come to <span class="math-container">$(1,2,2,5)$</span>. </p>
<blockquote>
<p><strong>Example: <span class="math-container">$m=n=3$</span></strong></p>
<p>In this way we determine the number <span class="math-container">$p_{(3,3,3)}$</span> of possible ways to eat a <span class="math-container">$(3\times 3)$</span> chocolate bar, which according to OP's table is
<span class="math-container">\begin{align*}
\color{blue}{p_{(3,3,3)}=1\,232}
\end{align*}</span> We start by determining the <span class="math-container">$\binom{6}{3}=20$</span> valid paths. These are:</p>
<p><span class="math-container">\begin{align*}
&(0,0,0)\\
&(0,0,1)\,(0,1,1)\quad\quad\quad\quad\quad\quad\,\,\,\, (1,1,1)\\
&(0,0,2)\,(0,1,2)\,(0,2,2)\qquad\quad\,\,\,(1,1,2)\,(1,2,2)\qquad\quad\,\,\,(2,2,2)\\
&(0,0,3)\,(0,1,3)\,(0,2,3)\,(0,3,3)\,(1,1,3)\,(1,2,3)\,(1,3,3)\,(2,2,3)\,(2,3,3)\,(3,3,3)
\end{align*}</span></p>
<p>We calculate iteratively <span class="math-container">$p_{(3,3,3)}$</span> by starting with <span class="math-container">$p_{(0,0,0)}=1$</span>.
We obtain
<span class="math-container">\begin{align*}
p_{(0,0,0)}&=1\\
\color{blue}{p_{(0,0,1)}}&=p_{(0,0,0)}\color{blue}{=1}\\
\color{blue}{p_{(0,0,2)}}&=p_{(0,0,1)}+p_{(0,0,0)}=1+1\color{blue}{=2}\\
\color{blue}{p_{(0,0,3)}}&=p_{(0,0,2)}+p_{(0,0,1)}+p_{(0,0,0)}=2+1+1\color{blue}{=4}\\
\\
\color{blue}{p_{(0,1,1)}}&=p_{(0,0,1)}+p_{(0,0,0)}=1+1\color{blue}{=2}\\
p_{(0,1,2)}&=p_{(0,1,1)}+p_{(0,0,1)}+p_{(0,0,0)}=2+1+1=4\\
p_{(0,1,3)}&=p_{(0,1,2)}+p_{(0,1,1)}+p_{(0,0,3)}=4+2+4=10\\
\color{blue}{p_{(0,2,2)}}&=p_{(0,1,2)}+p_{(0,1,1)}+p_{(0,0,2)}\\
&\quad+p_{(0,0,1)}+p_{(0,0,0)}=4+2+2+1+1\color{blue}{=10}\\
p_{(0,2,3)}&=p_{(0,2,2)}+p_{(0,1,3)}+p_{(0,0,3)}=10+10+4=24\\
\color{blue}{p_{(0,3,3)}}&=p_{(0,2,3)}+p_{(0,2,2)}+p_{(0,1,3)}+p_{(0,1,2)}\\
&\quad+p_{(0,1,1)}+p_{(0,0,3)}+p_{(0,0,2)}+p_{(0,0,1)}+p_{(0,0,0)}\\
&=24+10+10+4+2+4+2+1+1\color{blue}{=58}\\
\\
\color{blue}{p_{(1,1,1)}}&=p_{(0,1,1)}+p_{(0,0,1)}+p_{(0,0,0)}=2+1+1\color{blue}{=4}\\
p_{(1,1,2)}&=p_{(1,1,1)}+p_{(0,1,2)}+p_{(0,0,2)}=4+4+2=10\\
p_{(1,2,2)}&=p_{(1,1,2)}+p_{(1,1,1)}+p_{(0,2,2)}=10+4+10=24\\
p_{(1,1,3)}&=p_{(1,1,2)}+p_{(1,1,1)}+p_{(0,1,3)}+p_{(0,0,3)}=10+4+10+4=28\\
p_{(1,2,3)}&=p_{(1,2,2)}+p_{(1,1,3)}+p_{(0,2,3)}=24+28+24=76\\
p_{(1,3,3)}&=p_{(1,2,3)}+p_{(1,2,2)}+p_{(1,1,3)}+p_{(1,1,2)}+p_{(1,1,1)}+p_{(0,3,3)}\\
&=76+24+28+10+4+58=200\\
\\
\color{blue}{p_{(2,2,2)}}&=p_{(1,2,2)}+p_{(1,1,2)}+p_{(0,2,2)}+p_{(0,1,2)}+p_{(0,0,2)}\\
&\quad+p_{(1,1,1)}+p_{(0,1,1)}+p_{(0,0,1)}+p_{(0,0,0)}\\
&=24+10+10+4+2+4+2+1+1\color{blue}{=58}\\
p_{(2,2,3)}&=p_{(2,2,2)}+p_{(1,2,3)}+p_{(1,1,3)}\\
&\quad+p_{(0,2,3)}+p_{(0,1,3)}+p_{(0,0,3)}\\
&=58+76+28+24+10+4=200\\
p_{(2,3,3)}&=p_{(2,2,3)}+p_{(2,2,2)}+p_{(1,3,3)}+p_{(0,3,3)}\\
&=200+58+200+58=516\\
\\
\color{blue}{p_{(3,3,3)}}&=p_{(2,3,3)}+p_{(2,2,3)}+p_{(2,2,2)}+p_{(1,3,3)}+p_{(1,2,3)}\\
&\quad+p_{(1,2,2)}+p_{(1,1,3)}+p_{(1,1,2)}+p_{(1,1,1)}+p_{(0,3,3)}+p_{(0,2,3)}\\
&\quad+p_{(0,2,2)}+p_{(0,1,3)}+p_{(0,1,2)}+p_{(0,1,1)}+p_{(0,0,3)}+p_{(0,0,2)}\\
&\quad+p_{(0,0,1)}+p_{(0,0,0)}\\
&=516+200+58+200+76+28+24+10+4+58\\
&\quad+24+10+10+4+2+4+2+1+1\\
&\,\,\color{blue}{=1\,232}
\end{align*}</span>
and we obtain <span class="math-container">$p_{(3,3,3)}=1\,232$</span> according to OP's table. Entries with a rectangular shape are marked in blue. They are also given in OP's list.</p>
</blockquote>
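<p>The predecessor recursion above is straightforward to implement with memoization. Here is a minimal sketch (my own encoding; it tracks the remaining row lengths rather than the path tuples, which is equivalent) that reproduces OP's table entries:</p>
<pre><code>from functools import lru_cache

@lru_cache(maxsize=None)
def ways(rows):
    # rows[i] = number of uneaten squares left in row i (top to bottom);
    # eating square (i, j) also eats everything below and/or to the right,
    # i.e. it truncates row i and every row below it to at most j squares
    if not any(rows):
        return 1                      # bar fully eaten: one completed sequence
    total = 0
    for i in range(len(rows)):        # row of the chosen square...
        for j in range(rows[i]):      # ...and its column among uneaten squares
            new = list(rows)
            for k in range(i, len(rows)):
                new[k] = min(new[k], j)
            total += ways(tuple(new))
    return total

print(ways((2, 2)), ways((3, 3, 3)))  # 10 1232, matching OP's table
</code></pre>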
| <p>I would be pretty surprised if there were a nice answer. The related question of finding the number of linear extensions of a hypercube has no known nice formula, and there's no reason to expect one will ever be found; see for instance <a href="https://arxiv.org/pdf/1702.03018.pdf" rel="nofollow noreferrer">this paper</a> discussing both Chomp and the linear extension problem.</p>
<p>Good asymptotic estimates are known in this case, though. For the boolean lattice linear extension problem, the "naive" graded linear extensions end up being a good estimate for all of them, and those have a simple product formula--the linked paper writes it out. Finding a good asymptotic estimate for your counts would likely be interesting. As a completely naive question, is the number of ordered antichains on the underlying rectangular poset a good estimate in a logarithmic sense, or is it woefully small?</p>
<p>For linear extensions, the trouble is the general problem is #P-complete, a classic result of Brightwell--Winkler. Even restricting to quite mild posets remains #P-complete; see this <a href="https://www.math.ucla.edu/~pak/papers/BruhatPaper2.pdf" rel="nofollow noreferrer">more recent paper</a> of Dittmer--Pak. So, the only possible hope whatsoever of an efficient, explicit formula is for very particular posets. (Granted, the rectangular poset is very particular.)</p>
<p>My knowledge of this research area is relatively shallow, but I don't know of published results concerning the #P-completeness for Chomp. It would likely make a good paper. Igor Pak would probably be the person to ask. Who knows, you might even interest him in writing a paper on it?</p>
|
number-theory | <p>The number $128$ can be written as $2^n$ with integer $n$, and so can its every individual digit. Is this the only number with this property, apart from the one-digit numbers $1$, $2$, $4$ and $8$? </p>
<p>I have checked a lot, but I don't know how to prove or disprove it. </p>
| <p>This seems to be an open question. See <a href="http://oeis.org/A130693">OEIS sequence A130693</a> and references there.</p>
| <p>Here's some empirical evidence (not a proof!).</p>
<p>Here are the first powers which increase the length of 1-2-4-8 runs in the least significant digits (last digits):</p>
<pre>
Power 0: 1 digits. ...0:1
Power 7: 3 digits. ...0:128
Power 18: 4 digits. ...6:2144
Power 19: 5 digits. ...5:24288
Power 90: 6 digits. ...9:124224
Power 91: 7 digits. ...9:8248448
Power 271: 8 digits. ...0:41422848
Power 1751: 9 digits. ...3:242421248
Power 18807: 10 digits. ...9:8228144128
Power 56589: 14 digits. ...3:21142244442112
Power 899791: 16 digits. ...9:8112118821224448
Power 2814790: 17 digits. ...6:42441488812212224
Power 7635171: 19 digits. ...5:2288212218148814848
Power 39727671: 20 digits. ...6:48844421142411214848
Power 99530619: 21 digits. ...6:142118882828812812288
Power 233093807: 25 digits: ...0:2821144412811214484144128
Power 22587288091 : 26 digits: ...9:81282288882824141181288448
</pre>
<p>It is easy to see that this run length grows very slowly: only $25$ of the $233093807\cdot\log_{10}2 \approx 70168227$ decimal digits are powers of 2. It is hard to see $25$ ever catching up with $70168227$.</p>
<p>Let's look at it deeper. Consider $2^k$ having $n$ decimal digits (obviously $n \le k$). Let's say $2^k \mod 5^n = a$. Then by CRT we can recover $2^k \mod 10^n$ (note that $2^k \equiv 0 \pmod {2^n}$):</p>
<p>$$f(a) \equiv 2^k \equiv a \cdot 2^n \cdot (2^{-n})_{\mod 5^n} \pmod{10^n}$$</p>
<p>Now <strong>let's assume</strong> that $2^k$ goes randomly over the residues $\mod 5^n$. How many elements $x \in \{0,\dots,5^n-1\}$ are there such that $f(x) \mod 10^n$ consists only of the digits 1-2-4-8? There are at most $4^n$ such values (all possible combinations of 1-2-4-8 in $n$ positions) out of $5^n$ (remark: we should consider only numbers coprime with 5, of which there are $5^{n-1}\cdot 4$). So if $2^k$ <em>acts randomly</em>, then the probability of getting a 1-2-4-8 value is bounded by $(4/5)^{n-1}$, which decreases exponentially with $n$ (and $k$).</p>
<p>Now how many powers of 2 do we have for each $n$? A constant number, 3 or 4! So, if for some small $n$ there are no such values, then very probably none exist for larger $n$ either.</p>
<p>Remark: this may look like a mouthful; nearly the same explanation could be given modulo $10^n$. But in my opinion it's easier to believe that $2^k$ ranges randomly over the residues $\bmod\ 5^n$ than $\bmod\ 10^n$. <strong>EDIT</strong>: Also, $2$ is a primitive root modulo $5^n$ for any $n$, so it indeed ranges over all $4 \cdot 5^{n-1}$ accounted values.</p>
<p>Remark 2: the exact number of $x$ such that $f(x) \bmod 10^n$ consists of the digits 1-2-4-8, from experiments:</p>
<pre>
...
n=15: 54411 / 30517578125 ~= 2^-19.0973107004
n=16: 108655 / 152587890625 ~= 2^-20.4214544789
n=17: 216803 / 762939453125 ~= 2^-21.7467524186
n=18: 433285 / 3814697265625 ~= 2^-23.0697489411
n=19: 866677 / 19073486328125 ~= 2^-24.3914989097
n=20: 1731421 / 95367431640625 ~= 2^-25.7150367656
...
</pre>
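<p>These counts can be reproduced for small $n$ by brute force. Below is a minimal sketch (my own Python, not the code behind the table above; it assumes Python 3.8+ for the modular inverse <code>pow(b, -1, m)</code>, and only small $n$ are feasible this way):</p>
<pre><code>def count_1248(n):
    # Count x coprime to 5 for which f(x) = x * 2^n * (2^-n mod 5^n) mod 10^n
    # has all n digits in {1, 2, 4, 8}.
    m5, m10 = 5 ** n, 10 ** n
    scale = (2 ** n * pow(2 ** n, -1, m5)) % m10
    good = set("1248")
    total = 0
    for x in range(1, m5):
        if x % 5 == 0:
            continue
        if set(str((x * scale) % m10).zfill(n)) <= good:
            total += 1
    return total

for n in range(1, 8):
    print(n, count_1248(n))
</code></pre>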
<p><strong>UPD:</strong></p>
<p>The fact that $2$ is a primitive root modulo $5^n$ is quite important.</p>
<ol>
<li><p>We can use it to optimize the search for the first powers of 2 which increase the 1-2-4-8 run length (the first data in this post). For example, for $n=3$ only $13$ of the $5^2 \cdot 4=100$ values correspond to 1-2-4-8 3-digit endings. Modulo $1000$, the period of the powers of 2 equals the order of the group, namely $100$. It means we need to check only 13 of each 100 values. I managed to build the table for $n=20$ which speeds up the computation roughly by a factor of $2^{25}$. Sadly each next $n$ for the table is much harder to compute, so this approach does not scale efficiently.</p></li>
<li><p>For arbitrary $n$ we can quite efficiently find some $k$ such that the last $n$ digits of $2^k$ are all powers of two (see the sketch after this list).<br>
Let's assume that for some $n$ we know such a $k_0$. Consider $a = 2^{k_0} \bmod 10^n$. We want to construct $k'$ for $n+1$. Let's look at $a \bmod 2^{n+1}$. It's either $0$ or $2^{n}$. If it's $0$, we can set the $(n+1)$-st digit to any of $0,2,4,6,8$; in particular $2,4,8$ fit our goal. If it's $2^{n}$ then we need to set the $(n+1)$-st digit to one of $1,3,5,7,9$; in particular $1$ will be fine for us.<br>
After setting the new digit we have some value $a' < 10^{n+1}$. We now find $k'$ by calculating discrete logarithm of $a'$ base $2$ in the group $(\mathbb{Z}/5^{n+1}\mathbb{Z})^*$. It must exist, because $2$ is primitive root modulo $5^{n+1}$ and $a'$ is not divisible by five (meaning that it falls into the multiplicative group).</p></li>
</ol>
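<p>Here is a minimal sketch of the extension step (my own Python reconstruction, not the answerer's code). It sidesteps an explicit discrete logarithm: since $\operatorname{ord}(2 \bmod 5^n)=4\cdot 5^{n-1}$, stepping the exponent $k$ by that order fixes the last $n$ digits of $2^k$ and cycles the $(n+1)$-st digit through the five digits of one parity, so a suitable digit always exists among five candidates:</p>
<pre><code>k, n = 7, 3                      # 2^7 = 128 ends in three digits from {1,2,4,8}
while n < 30:
    order = 4 * 5 ** (n - 1)     # multiplicative order of 2 modulo 5^n
    for t in range(5):
        cand = k + t * order
        digit = pow(2, cand, 10 ** (n + 1)) // 10 ** n
        if digit in (1, 2, 4, 8):
            k, n = cand, n + 1
            break

tail = pow(2, k, 10 ** n)
assert set(str(tail)) <= set("1248")
print(k, tail)                   # a k whose last 30 digits are all 1, 2, 4 or 8
</code></pre>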
<p>Summing up, it's possible to extend any existing 1-2-4-8 tail either with $1$, or with any of $2,4,8$. For example, tail of length 1000, consisting only of 1 and 2:</p>
<pre><code>sage: k
615518781133550927510466866560333743905276469414384071264775976996005261582333424319169262744569107203933909361398767112744041716987032548389396865721149211989700277761744700088281526981978660685104481509534097152083798849174934153949598921020219299137483196605381824600377210207048355959118105502326334547495384673753323864726827644650703466356156319492521379682428275201262134907960967634887658195264018797348236155773958687977059474419550906257366056229915615067527218040720408353328787880060032847746927391316869927283585312014157952623949696812057481086276896651244409107902992111507870787820359137244857060839675634572294938878098506151681269336043213294287160464665102314138635395739226878089
sage: print pow(2, k, 10**1000)
1112221212111222212111211212211112121221111221112211221212222212222111211212111122221111222222211112222211112122111222122222212222111221111112211122121122221212111122212112211122121121212211211221122111111111111121211111211212222212112121222221221122111222221222222122212221212111121111112111211222111111211222222222112222212112211212121122212122222211111121112122122112112122222212121121222221112121221222221121122221121222121112111121221221212211121221122121122122122112112112111222212111111221121211211122222122211122211211222122122211112121121111211222211211212211112111212121212111222221212221211212222121122221211112222211221121221211211221222211112121221222122112122221221221221221222211122222222222222222222111121122221121121212111222211112122112112222112221212111112121221221121211221111121212111111121212222212211222122122212112211221221112222121221212121121112111222221122221221121111212121211211211221121211211121122122211212221112122111122212112212121112121121122111112111211111212122112
</code></pre>
|
combinatorics | <p>I have recently played the game <a href="http://gabrielecirulli.github.io/2048/">2048</a>, created by Gabriele Cirulli, which is fun. I suggest trying it if you have not. But my brother posed this question to me about the game: </p>
<p>If he were to write a script that made random moves in the game 2048, what is the probability that it would win the game? </p>
<p>Combinatorics is not my area, so I did not even attempt to answer this, knowing this seems like a difficult question to answer. But I thought someone here might have a good idea. </p>
<p>Also, since we are not concerned with time, just with winning, we can assume that every random move actually results in a tile moving. </p>
<p><strong>Addendum</strong></p>
<p>While the answers below shed light on the problem, only BoZenKhaa came close to providing a probability, even if it was an upper bound. So I would like to modify the question to: </p>
<p>Can we find decent upper and lower bounds for this probability?</p>
| <p>I implemented a simulation of 2048, because I wanted to analyze different strategies.</p>
<p>Unsurprisingly, the result is that moving at random is a really bad strategy.
<img src="https://i.sstatic.net/iGHwk.png" alt="enter image description here"> Above you can see the scores of $1000000$ random games (edit: <em>updated after bugfix, thanks to misof</em>). The score is defined as the sum of all numbers generated by merge. It can be viewed as a measure of how far you make it in the game. For a win you need a score of at least $16384$. You can see that most games end in a region below $2000$, that is they generate at most a 128-tile and loose subsequently. The heap on the right at $2500$ represents those games that manage to generate a 256 tile - those games are rather rare. No game made it to the 1024-tile.</p>
<p>Upon request, here is the plot for highest number on a tile:
<img src="https://i.sstatic.net/YXZgk.png" alt="enter image description here"> When it comes to "dumb strategies", you get better results cycling moves deterministically: move up, right, up, left and repeat. This improves the expected highest number by one tile.</p>
<p>You can do your own experiments using the code <a href="https://github.com/bheuer/MSE/blob/master/minimalRandomGame.py" rel="noreferrer">here</a> and <a href="https://github.com/bheuer/MSE/blob/master/RandomGame.py" rel="noreferrer">here</a>.</p>
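<p>If you want a dependency-free starting point instead, here is a minimal random-player sketch (my own Python, not the linked code). The score is the sum of all merged values, as above, and moves that change nothing are filtered out, matching the assumption in the question:</p>
<pre><code>import random

def merge_row_left(row):
    # Slide one row to the left, merging equal neighbours once; return (row, merge score).
    tiles = [t for t in row if t]
    out, score, i = [], 0, 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)
            score += tiles[i] * 2
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (4 - len(out)), score

def move(board, d):
    # d in 0..3 indexes the four directions, realized by rotating the board.
    b = [row[:] for row in board]
    for _ in range(d):
        b = [list(r) for r in zip(*b[::-1])]
    rows, score = [], 0
    for row in b:
        merged, s = merge_row_left(row)
        rows.append(merged)
        score += s
    for _ in range((4 - d) % 4):
        rows = [list(r) for r in zip(*rows[::-1])]
    return rows, score, rows != board

def spawn(board):
    empties = [(i, j) for i in range(4) for j in range(4) if board[i][j] == 0]
    i, j = random.choice(empties)
    board[i][j] = 2 if random.random() < 0.9 else 4

def random_game():
    board = [[0] * 4 for _ in range(4)]
    spawn(board); spawn(board)
    score = 0
    while True:
        legal = [m for m in (move(board, d) for d in range(4)) if m[2]]
        if not legal:
            return score, max(max(r) for r in board)
        board, s, _ = random.choice(legal)
        score += s
        spawn(board)

results = [random_game() for _ in range(1000)]
print(max(r[1] for r in results), sum(r[0] for r in results) / len(results))
</code></pre>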
| <p>Instead of trying to get an exact answer, let me give you a very rough intuition based estimate based on few observations about the game and a related question on SO:</p>
<ul>
<li>Before you make it to the 2048 tile, you will need to have at least 10 tiles of different values on the board: $2,4,8,16,32,64,128,256,512$ and $1024$.</li>
<li>You will have to make at least $520$ moves to get to the 2048 tile (each time you make a move, the sum of tiles on the board increases by at most 4).</li>
<li>This is the awesome post on SO concerning the same game: <a href="https://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048">https://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048</a> . It is noteworthy that the best algorithm mentioned in one of the answers has a claimed success rate of around 90%, i.e. it does not get to win every time.</li>
<li>In the abovementioned post, it is suggested that a good winning strategy is to select one side, say the top one, and then try not to ever move your highest number away from that side. It is also suggested that if you have to move the high numbers away from this side of choice, it can be hard to save the day and still win.</li>
</ul>
<p>Now for the sake of giving a pseudo-rudimentary estimate, let us entertain the idea that the last bullet point is right about the winning strategy and that this strategy covers most of the winning strategies. </p>
<p>Next, imagine that our Random AlgoriThm (RAT) made it to the stage where half the board is covered with different numbers, meaning there are 8 different numbers on the board: $2,4,8,16,32,64,128,256$. This means we are at most around move number $\frac{1}{2}\sum_{k=1}^{8} 2^k \approx 256$. </p>
<p>Also, our RAT miraculously made it this far and managed to keep its high numbers by the top side of the board, as in the last bullet point. For the final assumption, assume that if the RAT presses the bottom arrow, it will always lose the game (because it is so random, it will not be able to salvage the situation).</p>
<p>Now, the chance of our RAT winning after move 256 is for sure smaller than the chance of the RAT not pressing the bottom arrow. There is a $3/4$ chance of the RAT not pressing the down arrow on every move, and there are at least 256 moves to be made before the RAT can get the 2048 tile. Thus the chance of the RAT winning in our simplified scenario is smaller than $\left(\frac{3}{4}\right)^{256} \leq \frac{1}{2^{32}}$. </p>
<p>$P=\frac{1}{2^{32}}$ makes for a rather rare occurrence. As per N.Owad's comment, this chance is MUCH smaller than picking one specific second since the beginning of the universe. This should give you some intuition as to how unlikely a random win in this game actually is. </p>
<p><strong>Disclaimer</strong>: I do not pretend that P is a bound of any sort for the actual probability of a random win, due to the nature of the simplifications made. It just tries to illustrate a number which is likely larger than the chance of randomly winning. </p>
|
linear-algebra | <p>Let $V$ be a vector space with infinite dimensions. A Hamel basis for $V$ is an ordered set of linearly independent vectors $\{ v_i \ | \ i \in I\}$ such that any $v \in V$ can be expressed as a finite linear combination of the $v_i$'s; so $\{ v_i \ | \ i \in I\}$ spans $V$ algebraically: this is the obvious extension of the finite-dimensional notion. Moreover, by Zorn Lemma, such a basis always exists.</p>
<p>If we endow $V$ with a topology, then we say that an ordered set of linearly independent vectors $\{ v_i \ | \ i \in I\}$ is a Schauder basis if its span is dense in $V$ with respect to the chosen topology. This amounts to saying that any $v \in V$ can be expressed as an infinite linear combination of the $v_i$'s, i.e. as a series.</p>
<p>As far as I understand, if a vector $v$ can be expressed as a finite linear combination of some set $\{ v_i \ | \ i \in I\}$, then it lies in its span; in other words, if $\{ v_i \ | \ i \in I\}$ is a Hamel basis, then it spans the whole $V$, and so it is a Schauder basis with respect to any topology on $V$.</p>
<p>However Per Enflo has constructed a Banach space without a Schauder basis (ref. <a href="http://en.wikipedia.org/wiki/Schauder_basis#Properties">wiki</a>). So I guess I should conclude that my reasoning is wrong, but I can't see what the problem is.</p>
<p>Any help appreciated, thanks in advance!</p>
<hr>
<p>UPDATE: (coming from the huge amount of answers and comments)
Forgetting for a moment the concerns about cardinality and sticking to span-properties, it has turned out that we have two different notions of linear independence: one involving finite linear combinations (Hamel-span, Hamel-independence, in the terminology introduced by rschwieb below), and one allowing infinite linear combinations (Schauder-stuff). So the point is that the vectors in a Hamel basis are Hamel independent (by def) but need not be Schauder-independent in general. As far as I understand, this is the fundamental reason why a Hamel basis is not automatically a Schauder basis.</p>
| <p>People keep mentioning the restriction on the size of a Schauder basis, but I think it's more important to emphasize that these bases are bases with respect to <em>different spans</em>.</p>
<p>For an ordinary vector space, only finite linear combinations are defined, and you can't hope for anything more. (Let's call these Hamel combinations.) In this context, you can talk about minimal sets whose Hamel combinations generate a vector space.</p>
<p>When your vector space has a good enough topology, you can define countable linear combinations (which we'll call Schauder combinations) and talk about sets whose Schauder combinations generate the vector space.</p>
<p>If you take a Schauder basis, you can still use it as a Hamel basis and look at its collection of Hamel combinations, and you should see its Schauder-span will normally be strictly larger than its Hamel-span.</p>
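<p>For a concrete illustration (my own example, not part of the original answer): take the standard coordinate vectors $e_n$ in $\ell^2$. Their Hamel span is the set of finitely supported sequences, while
$$v=\sum_{n=1}^{\infty}\frac{1}{2^n}\,e_n=\left(\tfrac12,\tfrac14,\tfrac18,\dots\right)\in\ell^2$$
lies in their Schauder span but not in their Hamel span, since it has infinitely many nonzero coordinates.</p>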
<p>This also raises the question of linear independence: when there are two types of span, you now have two types of linear independence conditions. In principle, Schauder-independence is stronger because it implies Hamel-independence of a set of basis elements.</p>
<p>Finally, let me swing back around to the question of the cardinality of the basis.
I don't actually think (/know) that it's absolutely necessary to have infinitely many elements in a Schauder basis. In the case where you allow finite Schauder bases, you don't actually need infinite linear combinations, and the Schauder and Hamel bases coincide. But definitely there is a difference in the infinite dimensional cases. In that sense, using the modifier "Schauder" actually becomes useful, so maybe that is why some people are convinced Schauder bases might be infinite.</p>
<p>And now about the limit on Schauder bases only being countable. Certainly given any space where countable sums converge, you can take a set of whatever cardinality and still consider its Schauder span (just like you could also consider its Hamel span). I know that the case of a separable space is especially useful and popular, and necessitates a countable basis, so that is probably why people tend to think of Schauder bases as countable. But I had thought uncountable Schauder bases were also used for non-separable Hilbert spaces.</p>
| <p>The problem is that an element of a Hamel basis might be an <em>infinite</em> linear combination of the other basis elements. Essentially, linear dependence changes definition.</p>
|
matrices | <p><a href="http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors">Wikipedia</a> defines an eigenvector like this:</p>
<p><em>An eigenvector of a square matrix is a non-zero vector that, when multiplied by the matrix, yields a vector that differs from the original vector at most by a multiplicative scalar.</em></p>
<p>So basically, in layman's terms: An eigenvector is a vector that, when you multiply it by a square matrix, gives you back the same vector or the same vector multiplied by a scalar.</p>
<p>There are a lot of terms which are related to this like eigenspaces and eigenvalues and eigenbases and such, which I don't quite understand, in fact, I don't understand at all.</p>
<p>Can someone give an explanation connecting these terms? So that it is clear what they are and why they are related.</p>
| <p>Eigenvectors are those vectors that exhibit especially simple behaviour under a linear transformation: Loosely speaking, they don't bend and rotate, they simply grow (or shrink) in length (though a different interpretation of growth/shrinkage may apply if the ground field is not $\mathbb R$). If it is possible to express any other vector as a linear combination of eigenvectors (preferably if you can in fact find a whole basis made of eigenvectors) then applying the - otherwise complicated - linear transformation suddenly becomes easy because with respect to a basis of eigenvectors the linear transformation is given simply by a diagonal matrix.</p>
<p>Especially when one wants to investigate higher powers of a linear transformation, this is practically only possible for eigenvectors: If $Av=\lambda v$, then $A^nv=\lambda^nv$, and even exponentials become easy for eigenvectors: $\exp(A)v:=\sum\frac1{n!}A^n v=e^\lambda v$.
By the way, the exponential functions $x\mapsto e^{cx}$ are eigenvectors of a famous linear transformation: differentiation, i.e. mapping a function $f$ to its derivative $f'$. That's precisely why exponentials play an important role as base solutions for linear differential equations (or even their discrete counterpart, linear recurrences like the Fibonacci numbers).</p>
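<p>A quick numerical illustration of the $A^n v=\lambda^n v$ point (my own sketch using numpy; the symmetric matrix is just an arbitrary example):</p>
<pre><code>import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eigh(A)      # eigenvalues and orthonormal eigenvectors
v = V[:, 0]                     # one eigenvector
print(np.allclose(np.linalg.matrix_power(A, 5) @ v, lam[0] ** 5 * v))  # True
</code></pre>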
<p>All other terminology is based on this notion:
A (nonzero) <strong>eigenvector</strong> $v$ such that $Av$ is a multiple of $v$ determines its <strong>eigenvalue</strong> $\lambda$ as the scalar factor such that $Av=\lambda v$.
Given an eigenvalue $\lambda$, the set of eigenvectors with that eigenvalue is in fact a subspace (i.e. sums and multiples of eigenvectors with the same(!) eigenvalue are again eigen), called the <strong>eigenspace</strong> for $\lambda$.
If we find a basis consisting of eigenvectors, then we may obviously call it <strong>eigenbasis</strong>. If the vectors of our vector space are not mere number tuples (such as in $\mathbb R^3$) but are also functions and our linear transformation is an operator (such as differentiation), it is often convenient to call the eigenvectors <strong>eigenfunctions</strong> instead; for example, $x\mapsto e^{3x}$ is an eigenfunction of the differentiation operator with eigenvalue $3$ (because the derivative of it is $x\mapsto 3e^{3x}$).</p>
| <p>As far as I understand it, the 'eigen' in words like eigenvalue, eigenvector etc. means something like 'own', or a better translation in English would perhaps be 'characteristic'.</p>
<p>Each square matrix has some special scalars and vectors associated with it. The eigenvectors are the vectors which the matrix preserves (up to scalar multiplication). As you probably know, an $n\times n$ matrix acts as a linear transformation on an $n$-dimensional space, say $F^n$. A vector and its scalar multiples form a line through the origin in $F^n$, and so you can think of the eigenvectors as indicating lines through the origin preserved by the linear transformation corresponding to the matrix.</p>
<p><strong>Defn</strong> Let $A$ be an $n\times n$ matrix over a field $F$. A vector $v\in F^n$ is an <em>eigenvector</em> of $A$ if $Av = \lambda v$ for some $\lambda$ in $F$. A scalar $\lambda\in F$ is an <em>eigenvalue</em> of $A$ if $Av = \lambda v$ for some $v\in F^n$.</p>
<p>The eigenvalues are then the factors by which these special lines through the origin are either stretched or contracted.</p>
|
matrices | <p>Is it possible to multiply A[m,n,k] by B[p,q,r]? Does the regular matrix product have generalized form?</p>
<p>I would appreciate it if you could help me find some tutorials online, or the mathematical 'word' for an N-dimensional matrix product.</p>
<p>Upd.
I'm writing a program that can perform matrix calculations. I created a class called matrix
and made it independent of the storage using the object-oriented features of C++. But when I started to write this program I thought there was some general operation for multiplying all kinds of arrays (matrices). My plan was to implement this multiplication (and other operators) and get a generalized class of objects. Since this site is not concerned with programming I didn't post too many technical details earlier. Now I'm not quite sure that such a single general procedure exists. Thanks for all the comments.</p>
| <p>The general procedure is called <a href="http://en.wikipedia.org/wiki/Tensor_contraction">tensor contraction</a>. Concretely it's given by summing over various indices. For example, just as ordinary matrix multiplication $C = AB$ is given by</p>
<p>$$c_{ij} = \sum_k a_{ik} b_{kj}$$</p>
<p>we can contract by summing across any index. For example, we can write</p>
<p>$$c_{ijlm} = \sum_k a_{ijk} b_{klm}$$</p>
<p>which gives a $4$-tensor ("$4$-dimensional matrix") rather than a $3$-tensor. One can also contract twice, for example</p>
<p>$$c_{il} = \sum_{j,k} a_{ijk} b_{kjl}$$</p>
<p>which gives a $2$-tensor.</p>
<p>The abstract details shouldn't matter terribly unless you explicitly want to implement <a href="http://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors#Use_in_tensor_analysis">mixed variance</a>, which as far as I know nobody who writes algorithms for manipulating matrices does. </p>
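<p>For what it's worth, these contractions are one-liners in numpy (my addition, not part of the original answer; <code>einsum</code> implements exactly the index-summation formulas above):</p>
<pre><code>import numpy as np

a = np.random.rand(2, 3, 4)                    # a_{ijk}
b = np.random.rand(4, 5, 6)                    # b_{klm}
c4 = np.einsum('ijk,klm->ijlm', a, b)          # c_{ijlm} = sum_k a_{ijk} b_{klm}
print(c4.shape)                                # (2, 3, 5, 6)

b2 = np.random.rand(4, 3, 7)                   # b_{kjl}
c2 = np.einsum('ijk,kjl->il', a, b2)           # c_{il} = sum_{j,k} a_{ijk} b_{kjl}
print(c2.shape)                                # (2, 7)
</code></pre>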
| <p>Sorry to revive the thread, but what I found might answer the original question and help others who might stumble into this in the future. This came up for me when I wanted to avoid using for-loops and instead do one big multiplication on 3D matrices. </p>
<p>So first, let's look at how matrix multiplication works. Say you have <code>A[m,n]</code> and <code>B[n,p]</code>. One requirement is that the number of columns of A must match the number of rows of B. Then, all you do is iterate over rows of A (i) and columns of B (j) and the common dimension of both (k) (matlab/octave example):</p>
<pre><code>m=2;n=3;p=4;A=randn(m,n);B=randn(n,p);
C=zeros(m,p);
for i = 1:m
for j = 1:p
for k = 1:n
C(i,j) = C(i,j) + A(i,k)*B(k,j);
end
end
end
C-A*B %to check the code, should output zeros
</code></pre>
<p>So the common dimension <code>n</code> got "contracted" I believe (Qiaochu Yuan's answer made so much sense once I started coding it).</p>
<p>Now, assuming you want something similar to happen in the 3D case, i.e. one common dimension to contract, what would you do? Assume you have <code>A[l,m,n]</code> and <code>B[n,p,q]</code>. The requirement of the common dimension is still there - the last one of A must equal the first one of B. Then theoretically (this is just one way to do it and it just makes sense to me, no other foundation for this), n just cancels in <code>LxMxNxNxPxQ</code> and what you get is <code>LxMxPxQ</code>. The result is not even the same kind of creature, it is not 3-dimensional, instead it grew to 4D (just like Qiaochu Yuan pointed out btw). But oh well, how would you compute it? Well, just append 2 more for loops to iterate over the new dimensions:</p>
<pre><code>l=5;m=2;n=3;p=4;q=6;A=randn(l,m,n);B=randn(n,p,q);
C=zeros(l,m,p,q);
for h = 1:l
for i = 1:m
for j = 1:p
for g = 1:q
for k = 1:n
C(h,i,j,g) = C(h,i,j,g) + A(h,i,k)*B(k,j,g);
end
end
end
end
end
</code></pre>
<p>At the heart of it, it is still a row-by-column kind of operation (hence only one dimension "contracts"), just over more data. </p>
<p>Now, my real problem was actually <code>A[m,n]</code> and <code>B[n,p,q]</code>, where the creatures came from different dimensions (2D times 3D), but it seems doable nonetheless (after all, matrix times vector is 2D times 1D). So for me the result is <code>C[m,p,q]</code>:</p>
<pre><code>m=2;n=3;p=4;q=5;A=randn(m,n);B=randn(n,p,q);
C=zeros(m,p,q);Ct=C;
for i = 1:m
for j = 1:p
for g = 1:q
for k = 1:n
C(i,j,g) = C(i,j,g) + A(i,k)*B(k,j,g);
end
end
end
end
</code></pre>
<p>which checks out against using the full for-loops:</p>
<pre><code>for j = 1:p
for g = 1:q
Ct(:,j,g) = A*B(:,j,g); %"true", but still uses for-loops
end
end
C-Ct
</code></pre>
<p>but doesn't achieve my initial goal of just calling some built-in matlab function to do the work for me. Still, it was fun to play with this.</p>
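<p>(For comparison with other systems -- my addition, not part of the original answer -- numpy does offer exactly this as a built-in: the 2D-times-3D contraction above is a single call, with no explicit loops.)</p>
<pre><code>import numpy as np

m, n, p, q = 2, 3, 4, 5
A, B = np.random.randn(m, n), np.random.randn(n, p, q)
C = np.tensordot(A, B, axes=([1], [0]))                 # shape (m, p, q)
print(np.allclose(C, np.einsum('ik,kjg->ijg', A, B)))   # True
</code></pre>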
|
geometry | <p>This is an idea I have had in my head for years and years, and I would like to know the answer; I would also like to know whether it's somehow relevant to anything or useless.
I describe my thoughts with the following image:<br>
<img src="https://i.sstatic.net/UnAyt.png" alt="enter image description here"><br>
What would the area of the "red almost half circle" on top of the third square be, assuming you rotate the hypotenuse of a square around it's center limiting its movement so it cannot pass through the bottom of the square.<br>
My guess would be: </p>
<p>$$\ \frac{\left(\pi*(h/2)^2 - a^2\right)}{2}$$</p>
<p>And also, does this have any meaning? Have I been wandering around thinking about complete nonsense for so many years?</p>
| <p>I found this problem interesting enough to make a little animation along the line of @Blue's diagram (but I didn't want to edit their answer without permission):</p>
<p><img src="https://i.sstatic.net/5le9i.gif" alt="enter image description here"></p>
<p><em>Mathematica</em> syntax for those who are interested:</p>
<pre><code>G[d_, t_] := {t - (d t)/Sqrt[1 + t^2], d /Sqrt[1 + t^2]}
P[c_, m_] := Show[ParametricPlot[G[# Sqrt[8], t], {t, -4, 4},
PlotStyle -> {Dashed, Hue[#]}, PlotRange -> {{-1.025, 1.025}, {-.025,
2 Sqrt[2] + 0.025}}] & /@ (Range[m]/m),
ParametricPlot[G[Sqrt[8], t], {t, -1, 1}, PlotStyle -> {Red, Thick}],
Graphics[{Black, Disk[{0, 1}, .025], Opacity[0.1], Rectangle[{-1, 0}, {1, 2}],
Opacity[1], Line[{{c, 0}, G[Sqrt[8], c]}], Disk[{c, 0}, .025],
{Hue[#], Disk[G[# Sqrt[8], c], .025]} & /@ (Range[m]/m)}],
Axes -> False]
Manipulate[P[c, m], {c, -1, 1}, {m, 1, 20, 1}]
</code></pre>
| <p><img src="https://i.sstatic.net/0Z1P6.jpg" alt=""></p>
<p>Let $O$ be the center of the square, and let $\ell(\theta)$ be the line through $O$ that makes an angle $\theta$ with the horizontal line.
The line $\ell(\theta)$ intersects with the lower side of the square at a point $M_\theta$, with
$OM_\theta=\dfrac{a}{2\sin \theta }$. So, if $N_\theta$ is the other end of our 'rotating' diagonal then we have
$$ON_\theta=\rho(\theta)=h-OM_\theta=a\sqrt{2}-\dfrac{a}{2\sin \theta }.$$
Now, the area traced by $ON_\theta$ as $\theta$ varies between $\pi/4$ and $3\pi/4$ is our desired area augmented by the area of the quarter of the square. So, the desired area is
$$\eqalign{
\mathcal{A}&=\frac{1}{2}\int_{\pi/4}^{3\pi/4}\rho^2(\theta)\,d\theta-\frac{a^2}{4}\cr
&=a^2\int_{\pi/4}^{\pi/2}\left(\sqrt{2}-\frac{1}{2\sin\theta}\right)^2\,d\theta-\frac{a^2}{4}\cr
&=a^2\left(\frac{\pi}{2}-\sqrt{2}\ln(1+\sqrt{2})\right)
}
$$
Therefore, the correct answer is about $13.6\%$ larger than the conjectured answer.</p>
|
logic | <p>Consider A $\Rightarrow$ B, A $\models$ B, and A $\vdash$ B.</p>
<p>What are some examples contrasting their proper use? For example, give A and B such that A $\models$ B is true but A $\Rightarrow$ B is false. I'd appreciate pointers to any tutorial-level discussion that contrasts these operators.</p>
<p>Edit: What I took away from this discussion and the others linked is that logicians make a distinction between $\vdash$ and $\models$, but non-logicians tend to use $\Rightarrow$ for both relations plus a few others. Points go to Trevor for being the first to explain the relevance of <em>completeness</em> and <em>soundness</em>.</p>
| <p>First let's compare $A \implies B$ with $A \vdash B$. The former is a statement in the object language and the latter is a statement in the meta-language, so it would make more sense to compare $\vdash A \implies B$ with $A \vdash B$.
The rule of <em>modus ponens</em> allows us to conclude $A \vdash B$ from $\vdash A \implies B$, and the deduction theorem allows to conclude $\vdash A \implies B$ from $A \vdash B$. Probably there are exotic logics where <em>modus ponens</em> fails or the deduction theorem fails, but I'm not sure that's what you're looking for.
I think the short answer is that if you put a turnstile ($\vdash$) in front of $A \implies B$ to make it a statement in the meta-language (asserting that the implication is <em>provable</em>) then you get something equivalent to $A \vdash B$.</p>
<p>Next let's compare $A \vdash B$ with $A \models B$. These are both statements in the meta-language. The former asserts the existence of a <em>proof</em> of $B$ from $A$ (syntactic consequence) whereas the latter asserts that $B$ <em>holds</em> in every <em>model</em> of $A$ (semantic consequence). Whether these are equivalent depends on what class of models we allow in our logical system, and what deduction rules we allow. If the logical system is <em>sound</em> then we can conclude $A \models B$ from $A \vdash B$, and if the logical system is <em>complete</em> then we can conclude $A \vdash B$ from $A \models B$. These are desirable properties for logical systems to have, but there are logical systems that are not sound or complete. For example, if you remove some essential rule of inference from a system it will cease to be complete, and if you add some invalid rule of inference to the system it will cease to be sound.</p>
| <p>@Trevor's answer makes the crucial distinctions which need to be made: there's no disagreement at all about that. Symbolically, I'd put things just a bit differently. Consider first <em>these</em> three:</p>
<p>$$\to,\quad \vdash,\quad \vDash$$</p>
<ol>
<li>'$\to$' (or '$\supset$') is a symbol belonging to various formal languages (e.g. the language of propositional logic or the language of the first-order predicate calculus) to express [usually, but not always] the truth-functional conditional. $A \to B$ is a single conditional proposition in the object language under consideration.</li>
<li>'$\vdash$' is an expression added as useful shorthand to logician's English (or Spanish or whatever) -- it belongs to the metalanguage in which we talk about consequence relations between formal sentences. Unpacked, $A, A \to B \vdash B$ says in augmented English that in some relevant deductive system, there is a proof from the premisses $A$ and $A \to B$ to the conclusion $B$. (If we are being really pernickety we would write '$A$', '$A \to B$' $\vdash$ '$B$' but it is always understood that $\vdash$ comes with invisible quotes.)</li>
<li>'$\vDash$' is another expression added to logician's English (or Spanish or whatever) -- it again belongs to the metalanguage in which we talk about consequence relations between formal sentences. And e.g. $A, A \to B \vDash B$ says that in the relevant semantics, there is no valuation which makes the premisses $A$ and $A \to B$ true and the conclusion $B$ false.</li>
</ol>
<p>As for '$\Rightarrow$', this -- like the informal use of 'implies' -- seems to be used (especially by non-logicians), in different contexts for any of these three. It is also used, differently again, for the relation of so-called strict implication, or as punctuation in a sequent. So I'm afraid you do just have to be careful to let context disambiguate. The use of the two kinds of turnstile is absolutely standardised. The use of the double arrow isn't.</p>
|
probability | <blockquote>
<p>Consider the following experiment. I roll a die repeatedly until the die returns 6, and I let the random variable $X$ count the number of times 3 appeared along the way. What is $E[X]$?</p>
</blockquote>
<p><strong>Thoughts:</strong> I expect to roll the die 6 times before 6 appears (this part is geometric), and on the preceding 5 rolls each roll has a $1/5$ chance of returning a 3. Treating this as binomial, I therefore expect to count 3 once, so $E[X]=1$.</p>
<p><strong>Problem:</strong> Don't know how to model this problem mathematically. Hints would be appreciated.</p>
| <p>We can restrict ourselves to dice throws with outcomes $3$ and $6$. Among these throws, both outcomes are equally likely. This means that the index $Y$ of the first $6$ is geometrically distributed with parameter $\frac12$, hence $\mathbb{E}(Y)=2$. The number of $3$s occuring before the first $6$ equals $Y-1$ and has expected value $1$.</p>
| <p>There are infinitely many ways to solve this problem; here is another solution I like. </p>
<p>Let $A = \{\text{first roll is }6\}$, $B = \{\text{first roll is }3\}$, $C = \{\text{first roll is neither }3\text{ nor }6\}$. Then
$$
E[X] = E[X|A]P(A) + E[X|B] P(B) + E[X|C] P(C) = 0 + (E[X] + 1) \frac16 + E[X]\frac46,
$$
whence $E[X] = 1$. </p>
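<p>A quick Monte Carlo check (my own sketch, not part of either answer) agrees with $E[X]=1$:</p>
<pre><code>import random

def trial():
    threes = 0
    while True:
        roll = random.randint(1, 6)
        if roll == 6:
            return threes
        if roll == 3:
            threes += 1

N = 10 ** 5
print(sum(trial() for _ in range(N)) / N)   # ~1.0
</code></pre>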
|
probability | <p>Let <span class="math-container">$X_0$</span> be the unit disc, and consider the process of "cutting out circles", where to construct <span class="math-container">$X_n$</span> you select a uniform random point <span class="math-container">$x \in X_{n-1}$</span>, and cut out the largest circle with center <span class="math-container">$x$</span>. To illustrate this process, we have the following graphic:</p>
<p><a href="https://i.sstatic.net/D1mbZ.png" rel="noreferrer"><img src="https://i.sstatic.net/D1mbZ.png" alt="cutting out circles" /></a></p>
<p>where the graphs are respectively showing one sample of <span class="math-container">$X_1,X_2,X_3,X_{100}$</span> (the orange parts have been cut out).</p>
<p>Can we prove we eventually cut everything out? Formally, is the following true
<span class="math-container">$$\text{lim}_{n \to \infty} \mathbb{E}[\text{Area}(X_n)] = 0$$</span></p>
<p>where <span class="math-container">$\mathbb{E}$</span> denotes we are taking the expectation value. Doing simulations, this seems true, in fact <span class="math-container">$\mathbb{E}[\text{Area}($</span>X_n<span class="math-container">$)]$</span> seems to decay with some power law, but after 4 years I still don't really know how to prove this :(. The main thing you need to rule out is that <span class="math-container">$X_n$</span> doesn't get too skinny too quickly, it seems.</p>
| <p><strong>This proof is incomplete, as noted in the comments and at the end of this answer</strong></p>
<p>Apologies for the length. I tried to break it up into sections so it's easier to follow, and I tried to make all implications really clear. Happy to revise as needed.</p>
<p>I'll start with some definitions to keep things clear.</p>
<p>Let</p>
<ul>
<li>The area of a set <span class="math-container">$S \subset \mathbb{R^2}$</span> be the 2-Lebesgue measure <span class="math-container">$\lambda^*_2(S):= A(S)$</span></li>
<li><span class="math-container">$p_n$</span> be the point selected from <span class="math-container">$X_{n-1}$</span> such that <span class="math-container">$P(p_n \in Q) = \frac{A(Q)}{A({X_{n-1}})} \; \forall Q\in \mathcal{B}(X_{n-1})$</span></li>
<li><span class="math-container">$C_n(p)$</span> is the maximal circle drawn around <span class="math-container">$p \in X_{n-1}$</span> that fits in <span class="math-container">$X_{n-1}$</span>: <span class="math-container">$C_n(p) = \max_r \textrm{Circle}(p,r):\textrm{Circle}(p,r) \subseteq X_{n-1})$</span></li>
<li><span class="math-container">$A_n = A(C_n(p_n)) $</span> be the area of the circle drawn around <span class="math-container">$p_n$</span> <span class="math-container">$($</span>i.e., <span class="math-container">$X_n = X_{n-1}\setminus C_n(p_n))$</span></li>
</ul>
<p>Throughout, normalize the measure so that <span class="math-container">$A(X_0)=1$</span>; then we know that <span class="math-container">$0 \leq A_n \leq 1$</span>. By your definition of the generating process we can also make a stronger statement: since you're using a uniform probability measure over (well-behaved) subsets of <span class="math-container">$X_{n-1}$</span> as the distribution of <span class="math-container">$p_n$</span>, we have <span class="math-container">$P(p_n \in B) := \frac{A(B)}{A(X_{n-1})}\;\;\forall B\in \sigma\left(X_{n-1}\right) \implies P(p_1 \in S) = A(S) \;\;\forall S \in \sigma(X_0)$</span>.</p>
<p><strong>Lemma 1</strong>: <span class="math-container">$P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$</span></p>
<p><em>Proof</em>: We'll show this by proving</p>
<ol>
<li><span class="math-container">$P(A_n>0)=1\;\forall n$</span></li>
<li><span class="math-container">$(1) \implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span></li>
<li><span class="math-container">$(2) \implies P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$</span></li>
</ol>
<p><span class="math-container">$A_n = 0$</span> can only happen if <span class="math-container">$p_n$</span> falls directly on the boundary of <span class="math-container">$X_n$</span> (i.e., <span class="math-container">$p_n \in \partial_{X_{n-1}} \subset \mathbb{R^2})$</span>. However, since each <span class="math-container">$\partial_{X_{n-1}}$</span> is the union of a finite number of smooth curves (circular arcs) in <span class="math-container">$\mathbb{R^2}$</span> we have <span class="math-container">${A}(\partial_{X_{n-1}})=0 \;\forall n \implies P(p_n \in \partial_{X_{n-1}})=0\;\;\forall n \implies P(A_n>0)=1\;\forall n$</span></p>
<p>If <span class="math-container">$P(A_n>0)=1\;\forall n$</span> then since <span class="math-container">$A(X_i) = A(X_{i-1}) - A_n\;\forall i$</span> we have that <span class="math-container">$A(X_{i-1}) - A(X_i) = A_n\;\forall i$</span></p>
<p>Therefore, <span class="math-container">$P(A(X_{i-1}) - A(X_i) > 0\;\forall i) = P(A_n>0\;\forall i)=1\implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span></p>
<p>If <span class="math-container">$P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span> then <span class="math-container">$(A(X_{i}))_{i\in \mathbb{N}}$</span> is a monotonic decreasing sequence almost surely.</p>
<p>Since <span class="math-container">$A(X_i)\geq 0\;\;\forall i\;\;(A(X_{i}))_{i\in \mathbb{N}}$</span> is bounded from below, the monotone convergence theorem implies <span class="math-container">$P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1\;\;\square$</span></p>
<p>As you've stated, what we want to show is that eventually we've cut away all the area. There are two senses in which this can be true:</p>
<ol>
<li>Almost all sequences <span class="math-container">$\left(A(X_i)\right)_1^{\infty}$</span> converge to <span class="math-container">$0$</span>: <span class="math-container">$P\left(\lim \limits_{n\to\infty}A(X_n) = 0\right) = 1$</span></li>
<li><span class="math-container">$\left(A(X_i)\right)_1^{\infty}$</span> converges in mean to <span class="math-container">$0$</span>: <span class="math-container">$\lim \limits_{n\to \infty} \mathbb{E}[A(X_n)] = 0$</span></li>
</ol>
<p>In general, these two senses of convergence do not imply each other. However, with a couple additional conditions we can show almost sure convergence implies convergence in mean. Your question is about (2), and we will get there via proving (1) <em>plus</em> a sufficient condition for <span class="math-container">$(1)\implies (2)$</span>.</p>
<p>I'll proceed as follows:</p>
<ol>
<li>Show <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span> using Borel-Cantelli Lemma</li>
<li>Use the fact that <span class="math-container">$0<A(X_n)\leq 1$</span> to apply the Dominated Convergence Theorem to show <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></li>
</ol>
<hr />
<h2>Step 1: <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span></h2>
<p>If <span class="math-container">$\lim_{n\to \infty} A(X_n) = A_R > 0$</span> then there is some set <span class="math-container">$R$</span> with positive area <span class="math-container">$A(R)=A_R >0$</span> that is a subset of <em>all</em> <span class="math-container">$X_n$</span> (i.e.,<span class="math-container">$\exists R \subset X_0: A(R)>0\;\textrm{and}\;R \subset X_i\;\;\forall i> 0)$</span></p>
<p>Let's call a set <span class="math-container">$S\subset X_0:A(S)>0,\;S \subset X_i\;\;\forall i> 0$</span> a <em>reserved set</em> <span class="math-container">$(R)$</span> since we are "setting it aside". In the rest of this proof, the letter <span class="math-container">$R$</span> will refer to a reserved set.</p>
<p>Let's define the set <span class="math-container">$Y_n = X_n \setminus R$</span>, and the event <span class="math-container">$T_n:=p_n \in Y_{n-1}$</span> then</p>
<p><strong>Lemma 2</strong>: <span class="math-container">$P\left(\bigcap_1^n T_i \right) \leq A(Y_0)^n = (1 - A_R)^n\;\;\forall n>0$</span></p>
<p><em>Proof</em>: We'll prove this by induction. Note that <span class="math-container">$P(T_1) = A(Y_0)$</span> and <span class="math-container">$P(T_1\cap T_2) = P(T_2|T_1)P(T_1)$</span>. We know that if <span class="math-container">$T_1$</span> has happened, then <strong>Lemma 1</strong> implies that <span class="math-container">$A(Y_{1}) < A(Y_0)$</span>. Therefore</p>
<p><span class="math-container">$$P(T_2|T_1)<P(T_1)=A(Y_0)\implies P\left(T_1 \bigcap T_2\right)\leq A(Y_0)^2$$</span></p>
<p>If <span class="math-container">$P(\bigcap_{i=1}^n T_i) \leq A(Y_0)^n$</span> then by a similar argument we have</p>
<p><span class="math-container">$$P\left(\bigcap_{i=1}^{n+1} T_i\right) = P\left( T_{n+1} \left| \;\bigcap_{i=1}^n T_i\right. \right)P\left(\bigcap_{i=1}^n T_i\right)\leq A(Y_0)A(Y_0)^n = A(Y_0)^{n+1}\;\;\square$$</span></p>
<p>However, to allow <span class="math-container">$R$</span> to persist, we must ensure that <em>not only</em> does <span class="math-container">$T_n$</span> occur for all <span class="math-container">$n>0$</span> but that each <span class="math-container">$p_n$</span> doesn't fall in some neighborhood <span class="math-container">$\mathcal{N}_n(R)$</span> around <span class="math-container">$R$</span>:</p>
<p><span class="math-container">$$\mathcal{N}_n(R):= \mathcal{R}_n\setminus R$$</span>
<span class="math-container">$$\textrm{where}\; \mathcal{R}_n:=\{p \in X_{n-1}: A(C_n(p)\cap R)>0\}\supseteq R$$</span></p>
<p>Let's define the event <span class="math-container">$T'_n:=p_n \in X_{n-1}\setminus \mathcal{R}_n$</span> to capture the above requirement for a particular point <span class="math-container">$p_n$</span>. We then have the following.</p>
<p><strong>Lemma 3</strong>: <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R \implies P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$</span></p>
<p><em>Proof</em>: Assume <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span>. If <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$</span> then <span class="math-container">$P\left(\exists k>0:p_k \in \mathcal{R}_k\right)>0$</span>. By the definition of <span class="math-container">$ \mathcal{R}_k$</span>, <span class="math-container">$A(C_k(p_k)\cap R) > 0$</span> which means that <span class="math-container">$X_{k}\cap R \subset R \implies A(X_{k}\cap R) < A_R$</span>. By <strong>Lemma 1</strong>, <span class="math-container">$(X_i)_{i \in \mathbb{N}}$</span> is a strictly decreasing sequence of sets so <span class="math-container">$A(X_{j}\cap R) < A_R \;\;\forall j>i$</span>; therefore, <span class="math-container">$\exists \epsilon > 0: P\left(A(X_n) \overset{a.s.}{\to} A_R - \epsilon\right)>0$</span>. However, this contradicts our assumption <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span>. Therefore, <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$</span> is false which implies <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1\;\square$</span></p>
<p><strong>Corollary 1</strong>: <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$</span> is a necessary condition for <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span></p>
<p><em>Proof</em>: This follows immediately from <strong>Lemma 3</strong> by the logic of material implication: <span class="math-container">$X \implies Y \iff \neg Y \implies \neg X$</span> -- an implication is logically equivalent to its contrapositive.</p>
<p>We can express <strong>Corollary 1</strong> as an event <span class="math-container">$\mathcal{T}$</span> in a probability space <span class="math-container">$\left(X_0^{\mathbb{N}},\mathcal{F},\mathbb{P}\right)$</span> constructed from the sample space of infinite sequences of points <span class="math-container">$p_n \in X_0$</span> where:</p>
<ul>
<li><p><span class="math-container">$X_0^{\mathbb{N}}:=\prod_{i\in\mathbb{N}}X_0$</span> is the set of all sequences of points in the unit disk <span class="math-container">$X_0 \subset \mathbb{R^2}$</span></p>
</li>
<li><p><span class="math-container">$\mathcal{F}$</span> is the product Borel <span class="math-container">$\sigma$</span>-algebra generated by the product topology of all open sets in <span class="math-container">$X_0^{\mathbb{N}}$</span></p>
</li>
<li><p><span class="math-container">$\mathbb{P}$</span> is a probability measure defined on <span class="math-container">$\mathcal{F}$</span></p>
</li>
</ul>
<p>With this space defined, we can define our event <span class="math-container">$\mathcal{T}$</span> as as the intersection of a non-increasing sequence of cylinder sets in <span class="math-container">$\mathcal{F}$</span>:</p>
<p><span class="math-container">$$\mathcal{T}:=\bigcap_{i=1}^{\infty}\mathcal{T}_i \;\;\;\textrm{where } \mathcal{T}_i:=\bigcap_{j=1}^{i} T'_j = \text{Cyl}_{\mathcal{F}}(T'_1,..,T'_i)$$</span></p>
<p><strong>Lemma 4</strong>: <span class="math-container">$\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)\leq (1-A_R)^n$</span></p>
<p><em>Proof</em>: <span class="math-container">$\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)$</span> follows from the definition of <span class="math-container">$\mathcal{T}_n$</span>. <span class="math-container">$\mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)$</span> follows immediately from <span class="math-container">$R\subseteq \mathcal{R}_n\;\;\forall n\;\square$</span></p>
<p><strong>Lemma 5</strong>: <span class="math-container">$\mathcal{T} \subseteq \limsup \limits_{n\to \infty} \mathcal{T}_n$</span></p>
<p><em>Proof</em>: By definition <span class="math-container">$\mathcal{T} \subset \mathcal{T}_i \;\forall i>0$</span>. Since <span class="math-container">$\left(\mathcal{T}_i\right)_{i \in \mathbb{N}}$</span> is nonincreasing, we have <span class="math-container">$\limsup \limits_{i\to \infty} \mathcal{T}_i = \liminf \limits_{i\to \infty}\mathcal{T}_i = \lim \limits_{i\to \infty}\mathcal{T}_i = \mathcal{T}\;\;\square$</span></p>
<p><strong>Lemma 6</strong>: <span class="math-container">$\mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0\;\;\forall A_R \in (0,1]$</span></p>
<p><em>Proof</em>: From <strong>Lemma 4</strong>
<span class="math-container">$$\sum \limits_{i=1}^{\infty} \mathbb{P}\left(\mathcal{T}_i\right) \leq \sum \limits_{i=1}^{\infty} (1-A_R)^i = \sum \limits_{i=0}^{\infty} \left[(1-A_R) \cdot (1-A_R)^i\right] =$$</span>
<span class="math-container">$$ \frac{1-A_R}{1-(1-A_R)} = \frac{1-A_R}{A_R}=\frac{1}{A_R}-1 < \infty \;\; \forall A_R \in (0,1]\implies$$</span>
<span class="math-container">$$ \mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0 \;\; \forall A_R \in (0,1]\textrm{ (Borel-Cantelli) }\;\square$$</span></p>
<p><strong>Lemma 6</strong> implies that only <em>finitely many</em> <span class="math-container">$\mathcal{T}_i$</span> will occur with probability 1. Specifically, for almost every sequence <span class="math-container">$\omega \in X_0^{\mathbb{N}}$</span> there exists <span class="math-container">$n_{\omega}<\infty$</span> such that <span class="math-container">$p_{n_{\omega}} \in \mathcal{R}_{n_{\omega}}$</span>.</p>
<p>We can define this as a stopping time for each sequence <span class="math-container">$\omega \in X_0^{\infty}$</span> as follows:</p>
<p><span class="math-container">$$\tau(\omega) := \max \limits_{n \in \mathbb{N}} \{n:\omega \in \mathcal{T}_n\}$$</span></p>
<p><strong>Corollary 2</strong>: <span class="math-container">$\mathbb{P}(\tau < \infty) = 1$</span></p>
<p><em>Proof</em>: This follows immediately from <strong>Lemma 6</strong> and the definition of <span class="math-container">$\tau$</span></p>
<p><strong>Lemma 7</strong>: <span class="math-container">$P(\mathcal{T}) = 0\;\;\forall R:A(R)>0$</span></p>
<p><em>Proof</em>: This follows from <strong>Lemma 5</strong> and <strong>Lemma 6</strong></p>
<hr />
<p><strong>This is where I'm missing a step:</strong> for Theorem 1 below to work, Lemma 7 + Corollary 1 are not sufficient.</p>
<p>Just because every individual subset <span class="math-container">$R$</span> of positive area has probability zero of persisting doesn't imply that the set of all possible such subsets has zero probability. The situation is analogous to continuous random variables -- there are uncountably many points, each of probability zero, yet when we draw from the distribution we nonetheless get a point.</p>
<p>What I don't know are the sufficient conditions for the following:</p>
<p><span class="math-container">$P(\omega)=0 \;\forall \omega\in \Omega: A(\omega)=R \implies P(\{\omega: A(\omega)=R\})=0$</span></p>
<hr />
<p><strong>Theorem 1</strong>: <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span></p>
<p><em>Proof</em>: <strong>Lemma 7</strong> and <strong>Corollary 1</strong> imply <span class="math-container">$A(X_n)$</span> does <em>not</em> converge to <span class="math-container">$A_R$</span> almost surely, which implies <span class="math-container">$P(A(X_n) \to A_R) < 1 \;\forall A_R > 0$</span>. <strong>Corollary 2</strong> makes the stronger statement that <span class="math-container">$P(A(X_n) \to A_R)=0\;\forall A_R>0$</span> (i.e., almost never), since we know that the sequences of centers of each circle <span class="math-container">$p_n$</span> viewed as a stochastic process will almost surely hit <span class="math-container">$R$</span> (again, since we've defined <span class="math-container">$R$</span> such that <span class="math-container">$A(R)>0)$</span>. <span class="math-container">$P(A(X_n) \to A_R) = 0 \;\forall A_R>0$</span> with <strong>Lemma 1</strong> implies that <span class="math-container">$P(A(X_n) \to 0) = 1$</span>. Therefore, <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0\;\square$</span></p>
<hr />
<h2>Step 2: <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></h2>
<p>We will appeal to the <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables#Properties_4" rel="nofollow noreferrer">Dominated Convergence Theorem</a> to prove this result.</p>
<p><strong>Theorem 2</strong>: <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></p>
<p><em>Proof</em>: From <strong>Theorem 1</strong> we've shown that <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span>. Given an almost surely constant random variable <span class="math-container">$Z\overset{a.s.}{=}c$</span>, we have <span class="math-container">$c>1 \implies |A(X_n)| < Z\;\forall n$</span>. In addition, <span class="math-container">$\mathbb{E}[Z]=c<\infty$</span>, hence <span class="math-container">$Z$</span> is <span class="math-container">$\mathbb{P}$</span>-integrable. Therefore, <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span> by the Dominated Convergence Theorem. <span class="math-container">$\square$</span></p>
| <p>New to this, so not sure about the rigor, but here goes.</p>
<p>Let $A_k$ be the $k$th circle. Assume the area of $\bigcup_{k=1}^n A_k$ does not approach the total area of the circle $A_T$ as $n$ tends towards infinity. Then there must be some area $K$ which is not covered yet cannot harbor a new circle. Let $C = \bigcup_{k=1}^\infty A_k$. Consider a point $P$ such that $d(P,K)=0$ and $d(P,C)>0$. If no such point exists, then $K \subset C$, as $C$ is clearly a closed set of points. If such a point does exist, then another circle with center $P$ and nonzero area can be made to cover part of $K$, and the same logic applies to all possible $K$. Therefore there is no area $K$ which cannot contain a new circle, and by consequence $$\lim_{n\to\infty}\Bigg[\bigcup_{k=1}^n A_k\Bigg] = \big[A_T\big]$$
Since the size of circles is continuous, there must be a set of circles $\{A_k\}_{k=1}^\infty$ such that $\big[A_k\big]=E(\big[A_k\big])$ for each $k \in \mathbb{N}$, and therefore $$\lim_{n\to\infty} E(\big[A_k\big]) = \big[A_k\big] $$</p>
<p><strong>EDIT</strong>: This proof is wrong because I'm bad at probability; working on a new one.</p>
|
number-theory | <p>I know that $1$ is not a prime number because $1\cdot\mathbb Z=\mathbb Z$ is, by convention, not a prime ideal in the ring $\mathbb Z$.</p>
<p>However, since $\mathbb Z$ is a domain, $0\cdot\mathbb Z=0$ <em>is</em> a prime ideal in $\mathbb Z$. Isn't $(p)$ being a prime ideal the very definition of $p$ being a prime element?</p>
<p>(I know that this would violate the <a href="http://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic">Fundamental Theorem of Arithmetic</a>.)</p>
<hr>
<p><strong>Edit:</strong>
Apparently the answer is that a prime element in a ring is, by convention, a non-zero non-unit (see <a href="http://en.wikipedia.org/wiki/Prime_element#Divisibility.2C_prime_and_irreducible_elements">wikipedia</a>).</p>
<p>This is strange because a prime ideal of a ring is, by convention, a proper ideal but not necessarily non-zero (see <a href="http://en.wikipedia.org/wiki/Prime_ideal">wikipedia</a>).</p>
<p>So, my question is now: Why do we make this awkward convention?</p>
| <p>You have a point here: absolutely we want to count <span class="math-container">$(0)$</span> as a prime ideal in <span class="math-container">$\mathbb{Z}$</span> -- because <span class="math-container">$\mathbb{Z}$</span> is an integral domain -- whereas we do <em>not</em> want to count <span class="math-container">$(1)$</span> as being a prime ideal -- because the zero ring is not an integral domain (which, to me, is much more a true fact than a convention: e.g., every integral domain has a field of fractions, and the zero ring does not).</p>
<p>I think we do not want to call <span class="math-container">$0$</span> a prime element because, in practice, we never want to include <span class="math-container">$0$</span> in divisibility arguments. Another way to say this is that we generally want to study factorization in integral domains, but once we have specified that a commutative ring <span class="math-container">$R$</span> is a domain, we know all there is to know about factoring <span class="math-container">$0$</span>: <span class="math-container">$0 = x_1 \cdots x_n$</span> iff at least one <span class="math-container">$x_i = 0$</span>.</p>
<p>Here is one way to make this "ignoring <span class="math-container">$0$</span>" convention look more natural: the notions of factorization, prime element, irreducible element, and so forth in an integral domain <span class="math-container">$R$</span> depend entirely on the multiplicative structure of <span class="math-container">$R$</span>. Thus we can think of factorization questions as taking place in the <strong>cancellative monoid</strong> <span class="math-container">$(R \setminus 0,\cdot)$</span>. (Cancellative means: if <span class="math-container">$x \cdot y = x \cdot z$</span>, then <span class="math-container">$y = z$</span>.) In this context it is natural to exclude zero, because otherwise the monoid would not be cancellative. Contemporary algebraists often think about factorization as a property of monoids rather than integral domains <em>per se</em>. For a little more information about this, see e.g. Section 4.1 of <a href="http://alpha.math.uga.edu/%7Epete/factorization2010.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/factorization2010.pdf</a>.</p>
| <p>There are good reasons behind the convention of including <span class="math-container">$(0)$</span> as a prime ideal
but excluding <span class="math-container">$(1).\ $</span> First, we include zero as a prime ideal because it facilitates many useful reductions. For example, in many ring theoretic problems involving an ideal <span class="math-container">$\, I\,$</span>, one can reduce to the case <span class="math-container">$\,I = P\,$</span> prime, then reduce to <span class="math-container">$\,R/P,\,$</span> thus reducing to the case when the ring is a domain. In this case one simply says that we can <strong>factor out by the prime</strong> <span class="math-container">$ P\,$</span>, so w.l.o.g. assume <span class="math-container">$\, P = 0\,$</span> is prime, so <span class="math-container">$\,R\,$</span> is a domain. For example, I've appended to the end of this post an excerpt from Kaplansky's classic textbook <em>Commutative Rings</em>, section <span class="math-container">$1\!\!-\!\!3\!:\,G$</span>-Ideals, Hilbert Rings, and the Nullstellensatz.</p>
<p>Thus we have solid evidence for the utility of the convention that the zero ideal is prime. So why don't we adopt the same convention for the unit ideal <span class="math-container">$(1)$</span> or, equivalently, why don't we permit the zero ring as a domain? There are a number of reasons. First, in domains and fields it often proves very convenient to assume that one has a nonzero element available. This permits proofs by contradiction to conclude by deducing <span class="math-container">$\,1 = 0.\ $</span> More importantly, it implies that the unit group is nonempty, so unit groups always exist. It'd be very inconvenient to have to always add the proviso (except if <span class="math-container">$\, R = 0)\,$</span> to the many arguments involving units and unit groups. For a more general perspective it's worth emphasizing that the usual rules for equational logic are not complete for <em>empty structures</em>, which is why groups and other algebraic structures are always axiomatized so as to exclude empty structures (see this <a href="http://groups.google.com/group/sci.math/browse_frm/thread/3e8fac43081fcbf8" rel="nofollow noreferrer">thread</a> for details).</p>
<p>Below is the promised Kaplansky excerpt on reduction to domains by factoring out prime ideals. I've explicitly emphasized the reductions e.g. <strong>reduce to...</strong>.</p>
<hr />
<p>Let <span class="math-container">$\, I\,$</span> be any ideal in a ring <span class="math-container">$\, R.\,$</span> We write <span class="math-container">$\, R^{*}\,$</span> for the quotient ring <span class="math-container">$\, R/I.\,$</span> In the polynomial ring <span class="math-container">$\, R[x]\,$</span> there is a smallest extension <span class="math-container">$\, IR[x]\,$</span> of <span class="math-container">$\, I.\,$</span> The quotient ring <span class="math-container">$\, R[x]/IR[x]\,$</span> is in a natural way isomorphic to <span class="math-container">$\, R^*[x].\,$</span> In treating many problems, we can in this way <strong>reduce to the case</strong> <span class="math-container">$\, I = 0,\,$</span>
and we shall often do so.</p>
<p><strong>THEOREM <span class="math-container">$28$</span>.</strong> <span class="math-container">$\,$</span> Let <span class="math-container">$\, M\,$</span> be a maximal ideal in <span class="math-container">$\, R[x]\,$</span> and suppose that the contraction <span class="math-container">$\, M \cap R = N\,$</span> is maximal in <span class="math-container">$\, R.\ $</span> Then <span class="math-container">$\, M\,$</span> can be generated by <span class="math-container">$\, N\,$</span> and one more element <span class="math-container">$\, f.\ $</span> We can select <span class="math-container">$\, f\,$</span> to be a monic polynomial which maps <span class="math-container">$\!\bmod N\,$</span> into an irreducible polynomial over the field <span class="math-container">$\, R/N.\ $</span></p>
<p><strong>Proof.</strong> <span class="math-container">$\,$</span> We can <strong>reduce to the case</strong> <span class="math-container">$\, N = 0,\,$</span> i. e., <span class="math-container">$\, R\,$</span> a field, and then
the statement is immediate.</p>
<p><strong>THEOREM <span class="math-container">$31$</span>.</strong> <span class="math-container">$\,$</span> A commutative ring <span class="math-container">$\, R\,\,$</span> is a Hilbert ring if and only if the polynomial ring <span class="math-container">$\, R[x] \,\,$</span> is a Hilbert ring.</p>
<p><strong>Proof.</strong> <span class="math-container">$\,$</span> If <span class="math-container">$\, R[x]\,$</span> is a Hilbert ring, so is its homomorphic image <span class="math-container">$\, R\,$</span>.
Conversely, assume that <span class="math-container">$\, R\,$</span> is a Hilbert ring. Take a G-ideal <span class="math-container">$\, Q\,$</span> in
<span class="math-container">$\, R[x]\,$</span>; we must prove that <span class="math-container">$\, Q\,$</span> is maximal. Let <span class="math-container">$\, P = Q \cap R\,$</span>; we can <strong>reduce the problem to the case</strong> <span class="math-container">$\, P = 0,\,$</span> which, incidentally, makes <span class="math-container">$\, R\,$</span> a domain.
Let <span class="math-container">$\, u\,$</span> be the image of <span class="math-container">$\, x\,$</span> in the natural homomorphism <span class="math-container">$\, R[x] \to R[x]/Q.\,$</span>
Then <span class="math-container">$\, R[u]\,$</span> is a G-domain. By Theorem <span class="math-container">$23$</span>, <span class="math-container">$\,u\,$</span> is algebraic over <span class="math-container">$\,R\,$</span> and <span class="math-container">$\,R\,$</span> is a G-domain. Since <span class="math-container">$\,R\,$</span> is both a G-domain and a Hilbert ring, <span class="math-container">$\,R\,$</span> is a field. But this makes <span class="math-container">$\, R[u] = R[x]/Q\,$</span> a field, proving <span class="math-container">$\, Q\,$</span> to be maximal.</p>
|
geometry | <p>The first <a href="http://faculty.uml.edu/cbyrne/cov.pdf">application</a> I was shown of the calculus of variations was proving that the shortest distance between two points is a straight line. Define a functional measuring the length of a curve between two points:
$$
I(y) = \int_{x_1}^{x_2} \sqrt{1 + (y')^2}\, dx,
$$
apply the Euler-Lagrange equation, and Bob's your uncle.</p>
<p>So far so good, but then I started thinking: That functional was derived by splitting the curve into (infinitesimal) - wait for it - straight lines, and summing up their lengths, and each length was <em>defined</em> as being the Euclidean distance between its endpoints*. </p>
<p>As such, it seems to me that the proof, while correct, is rather meaningless. It's an obvious consequence of the facts that (a) the Euclidean norm satisfies the triangle inequality and (b) the length of a curve was defined as a sum of Euclidean norms.</p>
<p>Getting slightly philosophical, I would conjecture that proving that the shortest distance between two points is a straight line is looking at things the wrong way round. Perhaps a better way would be to say that Euclidean geometry was <strong>designed</strong> to conform to our sensory experience of the physical world: the length of string joining two points is minimized by stretching the string, and at that point, it happens to look/feel straight.</p>
<p>I'm just wondering whether people would agree with this, and hoping that I may get some additional or deeper insights. Perhaps an interesting question to ask to try to go deeper would be: <strong>why does a stretched string look and feel straight</strong>?</p>
<hr>
<p>*: To illustrate my point further, imagine we had chosen to define the length of a line as the <a href="http://en.wikipedia.org/wiki/Taxicab_geometry">Manhattan distance</a> between its endpoints. We could integrate again, and this time it would turn out that the length of <em>any</em> curve between two points is the Manhattan distance between those points.</p>
| <p>I think a more fundamental way to approach the problem is by discussing geodesic curves on the surface you call home. Remember that the geodesic equation, while equivalent to the Euler-Lagrange equation, can be derived simply by considering differentials, not extrema of integrals. The geodesic equation emerges exactly by finding the acceleration, and hence force by Newton's laws, in generalized coordinates.</p>
<p>See the Schaum's guide <em>Lagrangian Dynamics</em> by Dare A. Wells, Ch. 3, or <em>Vector and Tensor Analysis</em> by Borisenko and Tarapov, problem 10 on p. 181.</p>
<p>So, by setting the force equal to zero, one finds that the path is the solution to the geodesic equation. If we define a straight line to be the path that a particle takes when no forces act on it, or better yet, the quickest (and hence shortest) route an object with no forces on it takes between two points, then voilà: the shortest distance between two points is the geodesic; in Euclidean space, a straight line as we know it.</p>
<p>In fact, on P. 51 Borisenko and Tarapov show that if the force is everywhere tangent to the curve of travel, then the particle will travel in a straight line as well. Again, even if there is a force on it, as long as the force does not have a component perpendicular to the path, a particle will travel in a straight line between two points.</p>
<p>Also, as far as intuition goes, this is also the path of least work.</p>
<p>So, if you agree with the definition of a derivative in a given metric, then you can find the geodesic curves between points. If you define derivatives differently, and hence coordinate transformations differently, then it's a whole other story.</p>
| <p>Let me start by saying that on a gut level, I agree with everything you said. But I feel like I should make this argument anyway, since it might help you (and me!) sort out ideas on the matter.</p>
<hr>
<p>It doesn't seem inconsistent to argue that the model of Euclidean space (defined by, say, the Hilbert axioms) as $\Bbb R^n$ really gets around all the philosophical questions. We can ask why $\Bbb R$ and such, but taken as an object in its own right, the standard inner product defines everything from the geometry to the topology to the notion of size.</p>
<p>In this view, the integral you mentioned can be taken as the definition of "length" of a curve (in $\Bbb R^2$, I think), observing that it matches with the Lebesgue measure when the curve under consideration is given by an affine transformation (although this is formally irrelevant). The definition is motivated not as being broken down into straight lines, but rather into vectors, which have a different definition of length (this does not trouble me much: it is only wishful thinking that we use the same term for each). The notion of a "line" per se arises from a fairly natural question: what is the infimum of the lengths of curves between two points, and is there actually a curve that achieves it? Once you see that not only is the answer "yes" but also "and it's unique", it's not much of a stretch to think these objects are worth adding to our basic understanding of the space.</p>
<p>As for the remark choosing the Manhattan distance: nothing prevents you from doing this, but if you prefer this to be your norm (which you well might, for the reasons you described above), then you lose all aspects of geometry relating to angles. You also lose the uniqueness of minimal-length curves, and perhaps you then become less interested in the question. From the omniscient perspective, we might see this as a tragedy, an acceptable loss, or even as a gain. This objection, as well as Will Jagy's comment, only appear to highlight the flexibility we have in terms of which formalisms to use.</p>
<p>Your other question is of course much harder to answer, but I think a nice reduction of the question is "What makes $\Bbb R^3$ the most physical-feeling model?" The question is particularly interesting in light of the fact that $\Bbb R^3$ is certainly <em>not</em> a complete model of space for actual physics! But I do not think you would be taken seriously if you tried to argue that the universe is not a manifold. For some reason, (open subsets of) $\Bbb R^n$ is locally "almost right".</p>
<hr>
<p>This is just me talking out of you-know-where: It could be that the reason we have such strong intuitions about straightness and distance is because of evolutionary pressures. People who could intuit how to get from place to place efficiently would not burn the unnecessary calories, and in a less sheltered world this could help them reach the age of sexual viability. Once we began thinking inductively, we would be allowed to think of the notion of straightness as going on forever, and as a general construct rather than a situational feature. But by then it would be too late to straighten out the conflation of straightness and linearity, and we would need to wait a long time before we could do so with any rigor.</p>
|
logic | <p>I know, I know, there are tons of questions on this -- I've read them all, it feels like. I don't understand why $(F \implies F) \equiv T$ and $(F \implies T) \equiv T$.</p>
<p>One of the best examples I saw was showing how if you start out with a false premise like $3=5$ then you can derive all sorts of statements that are true like $8=8$ but also false like $6=10$, hence $F \implies T$ is true but so is $F \implies F$.</p>
<p>But examples don't always do it for me, because how do I know whether the relationship always holds outside the example? Sometimes examples aren't sufficiently generalized.</p>
<p>Sometimes people say "Well ($p \implies q$) is equivalent to $\lnot p \lor q$ so you can prove it that way!", except we arrived at that representation from the truth table in the first place, via disjunctive normal form, so the argument is circular and I don't find it convincing.</p>
<p>Sometimes people will use analogies like "Well assume we relabeled those two "vacuous cases" three other ways, $F/F, F/T, T/F$ -- see how the end results make no sense?" Sure but T/T makes no sense to me either so I don't see why this is a good argument. Just because the other three are silly doesn't tell me why T/T is not silly. </p>
<p>Other times I see "Well it's just defined that way because it's useful"... with no examples of how it's indeed useful and why we couldn't make do with some other definition. Then this leads to the inevitable counter-responders who insist it's not mere definition of convenience but a consequence of other rules in the system and so on, adding to the confusion.</p>
<p>So I'm hoping to skip all that: Is there some other way to show without a doubt that $(F \implies q) \equiv T$?</p>
| <p>I've never been satisfied with the definition of the material implication in the context of propositional logic alone. The only really important things in the context of propositional logic are that <span class="math-container">$T \Rightarrow T$</span> is true and <span class="math-container">$T \Rightarrow F$</span> is false. It feels like the truth values of <span class="math-container">$F \Rightarrow T$</span> and <span class="math-container">$F \Rightarrow F$</span> are just not specified by our intuition about implication. After all, why should "if the sky is green, then clouds are red" be true?</p>
<p>But in predicate logic, things are different. In predicate logic, we'd like to be able to say <span class="math-container">$\forall x (P(x) \Rightarrow Q(x))$</span> and have the <span class="math-container">$x$</span>'s for which <span class="math-container">$P(x)$</span> is false not interfere with the truth of the statement.</p>
<p>For example, consider "among all integers, all multiples of <span class="math-container">$4$</span> are even". That statement is true even though <span class="math-container">$1$</span> is not even (an instance of <span class="math-container">$F \Rightarrow F$</span>). It's also true even though <span class="math-container">$2$</span> is even despite not being a multiple of <span class="math-container">$4$</span> (an instance of <span class="math-container">$F \Rightarrow T$</span>).</p>
<p>But now in classical logic, every proposition has a single truth value. Thus the only way to define <span class="math-container">$\forall x R(x)$</span> is "for every <span class="math-container">$x$</span>, <span class="math-container">$R(x)$</span> is true". We can't define it in some other way, like "for every <span class="math-container">$x$</span>, either <span class="math-container">$R(x)$</span> is true or <span class="math-container">$R(x)$</span> is too nonsensical to have a truth value". Thus we are stuck defining <span class="math-container">$F \Rightarrow T$</span> and <span class="math-container">$F \Rightarrow F$</span> to both be true, if <span class="math-container">$\forall x (P(x) \Rightarrow Q(x))$</span> is going to behave the way we want.</p>
<p>In a different system of logic, we might do things differently. But in classical logic, "every proposition has a truth value" is basically an axiom.</p>
| <p>Given that we want the $\rightarrow$ to capture the idea of an 'if .. then ..' statement, it seems reasonable to insist that $P \rightarrow P$ is a True statement, no matter what $P$ is, and thus no matter what truth-value $P$ has. </p>
<p>So, if $P$ is False, then we get $\boxed{F \rightarrow F = T}$</p>
<p>It is likewise reasonable to insist that $(P \land Q) \rightarrow P = T$, again no matter what $P$ and $Q$ are.</p>
<p>So, if $P$ is True, and $Q$ is False, we get: $(T \land F) \rightarrow T = \boxed{F \rightarrow T = T}$</p>
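<p>To make this concrete, here is a small brute-force sketch in Python (purely illustrative, not part of either answer's argument): among all $16$ possible truth tables for a binary connective, only the material implication survives the two tautology constraints above together with the uncontroversial rows $T \rightarrow T = T$ and $T \rightarrow F = F$.</p>
<pre><code>from itertools import product

rows = [(True, True), (True, False), (False, True), (False, False)]

def ok(imp):
    # the two uncontroversial rows: T -> T is true, T -> F is false
    if imp[(True, True)] is not True or imp[(True, False)] is not False:
        return False
    # P -> P must always be true
    if not all(imp[(p, p)] for p in (True, False)):
        return False
    # (P and Q) -> P must always be true
    return all(imp[(p and q, p)] for p, q in rows)

survivors = [imp for vals in product([True, False], repeat=4)
             if ok(imp := dict(zip(rows, vals)))]
print(survivors)
# exactly one table survives: {(T,T): T, (T,F): F, (F,T): T, (F,F): T}
</code></pre>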
|
combinatorics | <p>One of my friends found this riddle.</p>
<blockquote>
<p>There are 100 soldiers. 85 lose a left leg, 80 lose a right leg, 75
lose a left arm, 70 lose a right arm. What is the minimum number of
soldiers losing all 4 limbs?</p>
</blockquote>
<p>We can't seem to agree on a way to approach this.</p>
<p>Right off the bat I said that:</p>
<pre><code>85 lost a left leg, 80 lost a right leg, 75 lost a left arm, 70 lost a right arm.
100 - 85 = 15
100 - 80 = 20
100 - 75 = 25
100 - 70 = 30
15 + 20 + 25 + 30 = 90
100 - 90 = 10
</code></pre>
<p>My friend doesn't agree with my answer as he says not all subsets were taken into consideration. I am unable to defend my answer as this was just the first, and most logical, answer that sprang to mind.</p>
| <p>Here is a way of rewriting your original argument that should convince your friend:</p>
<blockquote>
<p>Let $A,B,C,D\subset\{1,2,\dots,100\}$ be the four sets, with $|A|=85$,$|B|=80$,$|C|=75$,$|D|=70$. Then we want the minimum size of $A\cap B\cap C\cap D$. Combining the fact that $$|A\cap B\cap C\cap D|=100-|A^c\cup B^c\cup C^c\cup D^c|$$ where $A^c$ refers to $A$ complement, along with the fact that for any sets $|X\cup Y|\leq |Y|+|X|$ we see that $$|A\cap B\cap C\cap D|\geq 100-|A^c|-|B^c|-|C^c|-|D^c|=10.$$ </p>
</blockquote>
<p>You can then show this is optimal by taking any choice of $A^c$, $B^c$, $C^c$ and $D^c$ such that any two are disjoint. (This is possible since the sum of their sizes is $90$, which is strictly less than $100$.) </p>
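<p>To make the optimal construction concrete, here is a short Python sketch (the soldier labels $0$–$99$ are my own illustrative choice): assign the $90$ "spared" slots to distinct soldiers, so the four complements are pairwise disjoint, and the intersection comes out to exactly the $10$ soldiers predicted.</p>
<pre><code>soldiers = set(range(100))
# pairwise disjoint complements A^c, B^c, C^c, D^c of sizes 15, 20, 25, 30
spared = [set(range(0, 15)), set(range(15, 35)),
          set(range(35, 60)), set(range(60, 90))]
# A, B, C, D are the soldiers who DID lose the corresponding limb
lost_all = set.intersection(*(soldiers - s for s in spared))
print(len(lost_all), sorted(lost_all))   # 10 soldiers: 90..99
</code></pre>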
| <p>If you add up all the injuries, a total of 310 were sustained. Spreading these as evenly as possible, each of the 100 soldiers lost 3 limbs, which accounts for only 300 injuries and leaves 10 over. Therefore, at least 10 soldiers must have sustained an additional injury, thus losing all 4 limbs.</p>
<p>The manner in which you've argued your answer seems to me, logical, and correct.</p>
|
differentiation | <p>Here is one more numerically discovered conjecture that I was not able to prove, and I am asking for your help:
$${\large\int}_0^1\frac{\ln(1+8x)}{x^{\small2/3}\,(1-x)^{\small2/3}\,(1+8x)^{\small1/3}}dx\stackrel{\color{#A0A0A0}{\small?}}=\frac{\ln3}{\pi\sqrt3}\Gamma^3\!\left(\tfrac13\right),\tag1$$
or, equivalently,
$$\frac{d}{da}{_2F_1}\!\left(a,\frac13;\,\frac23;\,-8\right)\Bigg|_{a=\frac13}\stackrel{\color{#A0A0A0}{\small?}}=-\frac{2\ln3}3.\tag2$$</p>
<hr>
<p><em>Update:</em> I can suggest a generalization of this conjecture that might be easier to prove:
$$_2F_1\!\left(-a,\frac16-\frac{a}2;\,\frac23;\,-8\right)=2\times3^{\frac{3a-1}2}\cos\left(\frac\pi6(3a+1)\right).\tag3$$
This is actually the identity <a href="http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/03/14/10/0006/">07.23.03.0658.01</a> from the <em>Wolfram Functions Site</em> generalized to non-integer values of $a$.</p>
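<p><em>Note:</em> identity $(3)$ is easy to test numerically, e.g. with Python's <code>mpmath</code> (the check below is only a sanity test of the conjecture, not a proof):</p>
<pre><code>from mpmath import mp, mpf, hyp2f1, cos, pi

mp.dps = 25
for a in [mpf('0.1'), mpf('0.37'), mpf('1.5')]:
    lhs = hyp2f1(-a, mpf(1)/6 - a/2, mpf(2)/3, -8)
    rhs = 2 * 3**((3*a - 1)/2) * cos(pi*(3*a + 1)/6)
    print(lhs - rhs)   # ~ 0 to working precision if the conjecture holds
</code></pre>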
| <p>Define $\mathcal{I}$ to be the value of the definite integral,</p>
<p>$$\mathcal{I}:=\int_{0}^{1}\frac{\ln{\left(1+8x\right)}}{x^{2/3}(1-x)^{2/3}(1+8x)^{1/3}}\,\mathrm{d}x\approx3.8817.$$</p>
<blockquote>
<p><strong>Problem.</strong> Prove that the following conjectured value for the definite integral $\mathcal{I}$ is correct:
$$\mathcal{I}=\int_{0}^{1}\frac{\ln{\left(1+8x\right)}}{x^{2/3}(1-x)^{2/3}(1+8x)^{1/3}}\,\mathrm{d}x\stackrel{?}{=}\frac{\ln{(3)}}{\sqrt{3}\,\pi}\left[\Gamma{\left(\small{\frac13}\right)}\right]^3.\tag{1}$$</p>
</blockquote>
<hr>
<p><strong>Elimination of logarithmic factor from the integrand:</strong></p>
<p>Suppose we have a substitution relation of the form $1+8x=\frac{k}{1+8t}$, with $k$ being some positive real constant greater than $1$. First of all, the symmetry of the relation with respect to the variables $x$ and $t$ implies that $t$ solved for as a function of $x$ will have the same functional form as $x$ solved for as a function of $t$:</p>
<p>$$1+8x=\frac{k}{1+8t}\implies t=\frac{k-(1+8x)}{8(1+8x)},~~x=\frac{k-(1+8t)}{8(1+8t)}.$$</p>
<p>Transforming the integral $\mathcal{I}$ via this substitution, we find:</p>
<p>$$\begin{align}
\mathcal{I}
&=\int_{0}^{1}x^{-2/3}\,(1-x)^{-2/3}\,(1+8x)^{-1/3}\,\ln{(1+8x)}\,\mathrm{d}x\\
&=\int_{0}^{1}\frac{\ln{(1+8x)}}{\sqrt[3]{x^{2}\,(1-x)^{2}\,(1+8x)}}\,\mathrm{d}x\\
&=\int_{\frac{k-1}{8}}^{\frac{k-9}{72}}\sqrt[3]{\frac{2^{12}(1+8t)^{5}}{k(9-k+72t)^{2}(k-1-8t)^{2}}}\,\ln{\left(\frac{k}{1+8t}\right)}\cdot\frac{(-k)}{(1+8t)^2}\,\mathrm{d}t\\
&=\left(\frac{k}{9}\right)^{2/3}\int_{\frac{k-9}{72}}^{\frac{k-1}{8}}\frac{\ln{\left(\frac{k}{1+8t}\right)}}{\left(\frac{9-k}{72}+t\right)^{2/3}\left(\frac{k-1}{8}-t\right)^{2/3}\left(1+8t\right)^{1/3}}\,\mathrm{d}t,\\
\end{align}$$</p>
<p>which clearly suggests the choice $k=9$ as being the simplest, in which case:</p>
<p>$$\begin{align}
\mathcal{I}
&=\int_{0}^{1}\frac{\ln{\left(\frac{9}{1+8t}\right)}}{t^{2/3}\left(1-t\right)^{2/3}\left(1+8t\right)^{1/3}}\,\mathrm{d}t\\
&=\int_{0}^{1}\frac{\ln{\left(9\right)}}{t^{2/3}\left(1-t\right)^{2/3}\left(1+8t\right)^{1/3}}\,\mathrm{d}t-\int_{0}^{1}\frac{\ln{\left(1+8t\right)}}{t^{2/3}\left(1-t\right)^{2/3}\left(1+8t\right)^{1/3}}\,\mathrm{d}t\\
&=2\ln{(3)}\,\int_{0}^{1}\frac{\mathrm{d}t}{t^{2/3}\left(1-t\right)^{2/3}\left(1+8t\right)^{1/3}}-\mathcal{I}\\
\implies 2\mathcal{I}&=2\ln{(3)}\,\int_{0}^{1}\frac{\mathrm{d}t}{t^{2/3}\left(1-t\right)^{2/3}\left(1+8t\right)^{1/3}}\\
\implies \mathcal{I}&=\ln{(3)}\,\int_{0}^{1}\frac{\mathrm{d}t}{t^{2/3}\left(1-t\right)^{2/3}\left(1+8t\right)^{1/3}}\\
&=:\ln{(3)}\,\mathcal{J},
\end{align}$$</p>
<p>where in the last line we've simply introduced the symbol $\mathcal{J}$ to denote the last integral for convenience. Its approximate value is $\mathcal{J}\approx3.53328$.</p>
<p>Thus, to prove that the conjectured value $(1)$ is indeed correct, it suffices to prove the following equivalent conjecture:</p>
<blockquote>
<p>$$\mathcal{J}:=\int_{0}^{1}\frac{\mathrm{d}x}{x^{2/3}(1-x)^{2/3}(1+8x)^{1/3}}\stackrel{?}{=}\frac{1}{\sqrt{3}\,\pi}\left[\Gamma{\left(\small{\frac13}\right)}\right]^3.\tag{2}$$</p>
</blockquote>
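<p>Before proceeding, it is reassuring to confirm $(2)$ numerically; a quick check with Python's <code>mpmath</code> (its tanh-sinh quadrature handles the algebraic endpoint singularities) agrees to high precision:</p>
<pre><code>from mpmath import mp, mpf, quad, gamma, sqrt, pi

mp.dps = 30
f = lambda x: x**(-mpf(2)/3) * (1 - x)**(-mpf(2)/3) * (1 + 8*x)**(-mpf(1)/3)
J = quad(f, [0, 1])
closed_form = gamma(mpf(1)/3)**3 / (sqrt(3) * pi)
print(J, closed_form)   # both ~ 3.53328...
</code></pre>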
<hr>
<p><strong>Representation and manipulation of integral as a hypergeometric function:</strong></p>
<p>Euler's <a href="http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/07/01/01/0001/" rel="noreferrer">integral representation</a> for the Gauss hypergeometric
function states that, for
$\Re{\left(c\right)}>\Re{\left(b\right)}>0\land
|\arg{\left(1-z\right)}|<\pi$, we have:</p>
<p>$$\int_{0}^{1}x^{b-1}(1-x)^{c-b-1}(1-zx)^{-a}\mathrm{d}x=\operatorname{B}{\left(b,c-b\right)}\,{_2F_1}{\left(a,b;c;z\right)}.$$</p>
<p>In particular, if we choose $z=-8$, $a=\frac13$, $c=\frac23$, and
$b=\frac13$, then the conditions
$\Re{\left(\frac23\right)}>\Re{\left(\frac13\right)}>0\land
|\arg{\left(1-(-8)\right)}|=0<\pi$ are satisfied, and the integral on
the left-hand-side of Euler's representation reduces to the integral
$\mathcal{J}$. That is:</p>
<p>$$\begin{align}
\mathcal{J}
&=\int_{0}^{1}x^{-\frac23}(1-x)^{-\frac23}(1+8x)^{-\frac13}\mathrm{d}x\\
&=\operatorname{B}{\left(\frac13,\frac13\right)}\,{_2F_1}{\left(\frac13,\frac13;\frac23;-8\right)}.
\end{align}$$</p>
<p>Using the <a href="http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/17/02/10/0026/" rel="noreferrer">quadratic transformation</a>,</p>
<p>$${_2F_1}{\left(a,b;2b;z\right)} =
\left(\frac{1+\sqrt{1-z}}{2}\right)^{-2a}
{_2F_1}{\left(a,a-b+\frac12;b+\frac12;\left(\frac{1-\sqrt{1-z}}{1+\sqrt{1-z}}\right)^2\right)},$$</p>
<p>with particular values $a=b=\frac13,z=-8$, we have the hypergeometric identity,</p>
<p>$${_2F_1}{\left(\frac13,\frac13;\frac23;-8\right)} = 2^{-\frac23}
{_2F_1}{\left(\frac13,\frac12;\frac56;\frac14\right)}.$$</p>
<p>Then, applying <a href="http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/17/02/07/0001/" rel="noreferrer">Euler's transformation</a>,</p>
<p>$${_2F_1}{\left(a,b;c;z\right)}=(1-z)^{c-a-b}{_2F_1}{\left(c-a,c-b;c;z\right)},$$</p>
<p>we have,</p>
<p>$${_2F_1}{\left(\frac13,\frac12;\frac56;\frac14\right)}={_2F_1}{\left(\frac12,\frac13;\frac56;\frac14\right)}.$$</p>
<p>Now, Euler's integral representation for this hypergeometric function implies:</p>
<p>$${_2F_1}{\left(\frac12,\frac13;\frac56;\frac14\right)}=\frac{1}{\operatorname{B}{\left(\frac13,\frac12\right)}}\int_{0}^{1}x^{-\frac23}(1-x)^{-\frac12}\left(1-\small{\frac14}x\right)^{-\frac12}\mathrm{d}x.$$</p>
<p>Hence,</p>
<p>$$\begin{align}
\mathcal{J}
&=\operatorname{B}{\left(\frac13,\frac13\right)}\,{_2F_1}{\left(\frac13,\frac13;\frac23;-8\right)}\\
&=\frac{\operatorname{B}{\left(\frac13,\frac13\right)}}{2^{2/3}}\,{_2F_1}{\left(\frac13,\frac12;\frac56;\frac14\right)}\\
&=\frac{\operatorname{B}{\left(\frac13,\frac13\right)}}{2^{2/3}}\,{_2F_1}{\left(\frac12,\frac13;\frac56;\frac14\right)}\\
&=\frac{\operatorname{B}{\left(\frac13,\frac13\right)}}{2^{2/3}\,\operatorname{B}{\left(\frac13,\frac12\right)}}\,\int_{0}^{1}x^{-\frac23}(1-x)^{-\frac12}\left(1-\small{\frac14}x\right)^{-\frac12}\mathrm{d}x.\\
\end{align}$$</p>
<p>The ratio of beta functions in the last line above simplifies considerably. The Legendre duplication formula for the gamma function states:</p>
<p>$$\Gamma{\left(2z\right)}=\frac{2^{2z-1}}{\sqrt{\pi}}\Gamma{\left(z\right)}\Gamma{\left(z+\frac12\right)}.$$</p>
<p>Letting $z=\frac13$ yields:</p>
<p>$$\Gamma{\left(\frac23\right)}=\frac{2^{-1/3}}{\sqrt{\pi}}\Gamma{\left(\frac13\right)}\Gamma{\left(\frac56\right)}.$$</p>
<p>Then, using the facts that $\operatorname{B}{\left(a,b\right)}=\frac{\Gamma{\left(a\right)}\,\Gamma{\left(b\right)}}{\Gamma{\left(a+b\right)}}$ and $\Gamma{\left(\frac12\right)}=\sqrt{\pi}$, we can simplify the ratio of beta functions above considerably:</p>
<p>$$\begin{align}
\frac{\operatorname{B}{\left(\frac13,\frac13\right)}}{\operatorname{B}{\left(\frac13,\frac12\right)}}
&=\frac{\left[\Gamma{\left(\frac13\right)}\right]^2\,\Gamma{\left(\frac56\right)}}{\Gamma{\left(\frac23\right)}\,\Gamma{\left(\frac12\right)}\,\Gamma{\left(\frac13\right)}}\\
&=\frac{\Gamma{\left(\frac13\right)}\,\Gamma{\left(\frac56\right)}}{\sqrt{\pi}\,\Gamma{\left(\frac23\right)}}\\
&=\frac{\sqrt{\pi}\,\sqrt[3]{2}}{\sqrt{\pi}}\\
&=\sqrt[3]{2}.
\end{align}$$</p>
<p>Thus,</p>
<p>$$\begin{align}
\mathcal{J}
&=\frac{\operatorname{B}{\left(\frac13,\frac13\right)}}{2^{2/3}\,\operatorname{B}{\left(\frac13,\frac12\right)}}\,\int_{0}^{1}x^{-\frac23}(1-x)^{-\frac12}\left(1-\small{\frac14}x\right)^{-\frac12}\mathrm{d}x\\
&=\frac{\sqrt[3]{2}}{2^{2/3}}\,\int_{0}^{1}x^{-\frac23}(1-x)^{-\frac12}\left(1-\small{\frac14}x\right)^{-\frac12}\mathrm{d}x\\
&=\frac{1}{\sqrt[3]{2}}\,\int_{0}^{1}\frac{x^{-\frac23}}{\sqrt{\left(1-x\right)\left(1-\small{\frac14}x\right)}}\,\mathrm{d}x.\\
\end{align}$$</p>
<hr>
<p><strong>Reduction of integral to pseudo-elliptic integrals:</strong></p>
<p>Substituting $x=t^3$ into the integral representation for $\mathcal{J}$, we reduce the problem to solving an integral whose integrand is the reciprocal square-root of a sixth-degree polynomial:</p>
<p>$$\begin{align}
\mathcal{J}
&=\frac{1}{\sqrt[3]{2}}\,\int_{0}^{1}\frac{x^{-\frac23}}{\sqrt{\left(1-x\right)\left(1-\small{\frac14}x\right)}}\,\mathrm{d}x\\
&=\frac{2}{\sqrt[3]{2}}\,\int_{0}^{1}\frac{x^{-\frac23}}{\sqrt{\left(1-x\right)\left(4-x\right)}}\,\mathrm{d}x\\
&=\frac{6}{\sqrt[3]{2}}\,\int_{0}^{1}\frac{\frac13\,x^{-\frac23}\,\mathrm{d}x}{\sqrt{\left(1-x\right)\left(4-x\right)}}\\
&=\frac{6}{\sqrt[3]{2}}\,\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{\left(1-t^3\right)\left(4-t^3\right)}}\\
&=\frac{6}{\sqrt[3]{2}}\,\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{4-5t^3+t^6}}.\\
\end{align}$$</p>
<p>Then, substituting $t=\sqrt[3]{2}\,u$ gives us a similar integrand except with a sixth-degree polynomial with symmetric coefficients:</p>
<p>$$\begin{align}
\mathcal{J}
&=\frac{6}{\sqrt[3]{2}}\,\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{4-5t^3+t^6}}\\
&=\frac{6}{\sqrt[3]{2}}\,\int_{0}^{\frac{1}{\sqrt[3]{2}}}\frac{\sqrt[3]{2}\,\mathrm{d}u}{\sqrt{4-10u^3+4u^6}}\\
&=3\,\int_{0}^{\frac{1}{\sqrt[3]{2}}}\frac{\mathrm{d}u}{\sqrt{1-\small{\frac52}u^3+u^6}}.\\
\end{align}$$</p>
<p>The form of the integral $\mathcal{J}$ found in the last line above is significant because it may be transformed into a pair of <a href="https://www.encyclopediaofmath.org/index.php/Pseudo-elliptic_integral" rel="noreferrer">pseudo-elliptic integrals</a>, which can subsequently be evaluated in terms of elementary functions and elliptic integrals. For more information on these types of integrals, see my question <a href="https://math.stackexchange.com/questions/922957/indefinite-integral-typo-in-gradshte%C4%ADn-reciprocal-square-root-of-sixth-degree-p">here</a>.</p>
<p>Substituting $u=z-\sqrt{z^2-1}$ transforms the integral $\mathcal{J}$ into a sum of two pseudo-elliptic integrals:</p>
<p>$$\begin{align}
\mathcal{J}
&=3\,\int_{0}^{\frac{1}{\sqrt[3]{2}}}\frac{\mathrm{d}u}{\sqrt{1-\small{\frac52}u^3+u^6}}\\
&=\frac{3}{\sqrt{2}}\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}z}{\sqrt{(z+1)(8z^3-6z-\frac52)}}+\frac{3}{\sqrt{2}}\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}z}{\sqrt{(z-1)(8z^3-6z-\frac52)}}\\
&=3\,\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}z}{\sqrt{(z+1)(16z^3-12z-5)}}+3\,\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}z}{\sqrt{(z-1)(16z^3-12z-5)}}\\
&=\frac34\,\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}z}{\sqrt{(z+1)(z^3-\small{\frac34}z-\small{\frac{5}{16}})}}+\frac34\,\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}z}{\sqrt{(z-1)(z^3-\small{\frac34}z-\small{\frac{5}{16}})}}\\
&=:\frac34\,P_{+}+\frac34\,P_{-},\\
\end{align}$$</p>
<p>where in the last line we've introduced the auxiliary notation $P_{\pm}$ to denote the pair of integrals,</p>
<p>$$P_{\pm}:=\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}z}{\sqrt{(z\pm1)(z^3-\small{\frac34}z-\small{\frac{5}{16}})}}.$$</p>
<hr>
<p><strong>Evaluation of pseudo-elliptic integrals:</strong></p>
<p>Again for convenience, we shall denote the lower integration limit in the integrals $P_{\pm}$ defined above by $2^{-2/3}+2^{-4/3}=:\alpha\approx1.026811$. In particular, this eases the factorization of the cubic polynomial in the denominators above:</p>
<p>$$\begin{align}
16x^3-12x-5
&=16\left(x-\alpha\right)\left(x^2+\alpha x+\alpha^2-\small{\frac34}\right)\\
&=16\left(x-\alpha\right)\left[\left(x+\frac{\alpha}{2}\right)^2+\frac34\left(\alpha^2-1\right)\right].
\end{align}$$</p>
<p>Then,</p>
<p>$$\begin{align}
P_{\pm}
&=\int_{2^{-2/3}+2^{-4/3}}^{\infty}\frac{\mathrm{d}x}{\sqrt{(x\pm1)(x^3-\small{\frac34}x-\small{\frac{5}{16}})}}\\
&=\int_{\alpha}^{\infty}\frac{\mathrm{d}x}{\sqrt{(x\pm1)(x^3-\small{\frac34}x-\small{\frac{5}{16}})}}\\
&=\int_{\alpha}^{\infty}\frac{\mathrm{d}x}{\sqrt{(x\pm1)(x-\alpha)(x^2+\alpha x+\alpha^2-\small{\frac34})}}\\
&=\int_{\alpha}^{\infty}\frac{\mathrm{d}x}{\sqrt{(x\pm1)\left(x-\alpha\right)\left[\left(x+\frac{\alpha}{2}\right)^2+\small{\frac34}\left(\alpha^2-1\right)\right]}}.\\
\end{align}$$</p>
<hr>
<p>EDIT: I'm beginning to fear this strategy of derivation may be ultimately fruitless. The good news is that there is a relatively compact way of expressing the two integrals above as incomplete elliptic integrals:</p>
<blockquote>
<p>Assume $\alpha,\beta,u,m,n\in\mathbb{R}$ such that $\beta<\alpha<u$. Then proposition <strong>3.145</strong>(1) of Gradshteyn's <em>Table of Integrals, Series, and Products</em> states:
$$\begin{align}
\small{\int_{\alpha}^{u}\frac{\mathrm{d}x}{\sqrt{\left(x-\alpha\right)\left(x-\beta\right)\left[\left(x-m\right)^2+n^2\right]}}}
&=\small{\frac{1}{pq}F{\left(2\arctan{\sqrt{\frac{q\left(u-\alpha\right)}{p\left(u-\beta\right)}}},\frac12\sqrt{\frac{\left(p+q\right)^2+\left(\alpha-\beta\right)^2}{pq}}\right)},}
\end{align}$$
where $(m-\alpha)^2+n^2=p^2$, and $(m-\beta)^2+n^2=q^2$.</p>
</blockquote>
<p>The bad news is that after plugging in all the appropriate values, the resulting arguments of the elliptic integrals are significantly more complex than I expected.</p>
| <p>I think I have found something for you.</p>
<p>See <a href="http://authors.library.caltech.edu/43489/" rel="noreferrer">http://authors.library.caltech.edu/43489/</a> for the Integral Tables of the Bateman Project.</p>
<p>See Formula (23) in Volume 1, p. 310 (PDF p. 322), which I hope is applicable to your case.</p>
<p>The last formula of david-h's computation is an example of a Mellin integral transformation.</p>
<p>$\int_0^\infty (1+x)^\nu (1+\alpha x)^\mu x^{s-1} dx = B(s,-\mu-\nu-s)
\ {_2F_1}(-\mu,s;-\mu-\nu;1-\alpha)$,</p>
<p>with B the Beta-function, $|\text{arg}\alpha| \le \pi$ and $-\mathcal{Re}(\mu+\nu) > \mathcal{Re}(s) > 0$.</p>
|
matrices | <p>In two days, I am giving a presentation about a search engine I have been making the past summer. My research involved the use of singular value decompositions, i.e., <span class="math-container">$A = U \Sigma V^T$</span>. I took a high school course on Linear Algebra last year, but the course was not very thorough, and though I know how to find the SVD of a matrix, I don't know how to explain what I have in my hands after the matrix has been decomposed.</p>
<p>To someone who has taken linear algebra, I can say that I can decompose a matrix <span class="math-container">$A$</span> into matrix <span class="math-container">$\Sigma$</span>, whose diagonal holds the singular values, and matrices <span class="math-container">$U$</span> and <span class="math-container">$V$</span> whose columns represent the left and right singular vectors of matrix <span class="math-container">$A$</span>. I am not sure how to explain what a singular value is or what left/right singular vectors are. I can still be satisfied if there is no easy way to explain what this decomposition means, but I always prefer keeping the audience as informed as possible.</p>
| <p>Much of linear algebra is about linear <em>operators</em>, that is, linear transformations of one space to itself. A typical result is that by choosing a suitable basis for the space, the operator can be expressed in a simple matrix form, for instance, diagonal. However, this does not apply to all operators.</p>
<p>The singular value decomposition is the only main result about linear transformations between two different spaces. It says that by choosing suitable bases for the spaces, the transformation can be expressed in a simple matrix form, a diagonal matrix. And this works for <em>all</em> linear transformations. Moreover, the bases are very nice: orthogonal bases.</p>
<p>Geometrically, the SVD means that spheres of the proper dimension in the domain are transformed into ellipsoids in the codomain. Since the transformation may not be injective, the dimension of the ellipsoid is at most the dimension of the sphere. So you get some distortion along some axes and some collapsing along other axes. And that is all the transformation does. <em>Every</em> linear transformation.</p>
| <p>This answer attempts to give a simple algebraic intuition. Suppose $A$ is an $m \times n$ real matrix. Let $A=U\Sigma V^T$ be the SVD of $A$. Suppose that the rank of $A$ is equal to $r.$ Then the first $r$ singular values will be non-zero, while the remaining singular values will be zero. </p>
<p>If we write $U=[u_1 | \cdots | u_m]$ and $V=[v_1| \cdots | v_n]$, where $u_i$ is the $i^{th}$ column of $U$ (and similarly for $v_j$), then $A= \sum_{i=1}^r \sigma_i u_i v_i^T$, where $\sigma_i$ is the $i^{th}$ singular value. This shows that the linear transformation $A$ can be decomposed into the weighted sum of the linear transformations $u_i v_i^T$, each of which has rank $1$. </p>
<p>A large singular value $\sigma_k$ will indicate that the contribution of the corresponding transformation $u_k v_k^T$ is large and a small singular value will indicate that the corresponding contribution to the action of $A$ is small. As an application of this intuition, there are cases where e.g. $A$ is a full rank square matrix, hence it has no zero singular values, however a threshold is chosen and all terms in the sum $A= \sum_{i=1}^r \sigma_i u_i v_i^T$ corresponding to singular values less than this threshold are discarded. In that way, $A$ is approximated by a simpler matrix $\tilde{A}$, whose behavior is, for practical purposes, essentially the same as that of the original matrix.</p>
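<p>For the presentation, a tiny numerical illustration of this truncation may help; here is a sketch with numpy (the threshold value is an arbitrary choice for demonstration):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A as a weighted sum of rank-1 maps sigma_i * u_i v_i^T
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s)))
print(np.allclose(A, A_sum))            # True

# keep only the terms whose singular value exceeds a threshold
k = int((s > 1.0).sum())
A_tilde = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
print(np.linalg.norm(A - A_tilde, 2))   # = largest discarded sigma (0 if none)
</code></pre>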
<p>It might also help to visualize the action of $A$ on a vector $x$ by means of the above formula: $Ax = \sum_{i=1}^r (\sigma_i\langle v_i,x\rangle) u_i $. Observe that the image of $x$ is a linear combination of the vectors $u_i$ and the coefficients depend on both the magnitude of the corresponding singular values as well as the directions of the vectors $v_i$ with respect to $x$. For example, if $x$ is orthogonal to all the $v_i$ for $i$ such that $\sigma_i \neq 0$, then $Ax=0$. On the other hand, if $x=v_k$ for some $k$ such that $\sigma_k \neq 0$, then $Av_k = \sigma_k u_k$.</p>
|
linear-algebra | <p>In the <strong>few</strong> linear algebra texts I have read, the determinant is introduced in the following manner;</p>
<p>“Here is a formula for what we call <span class="math-container">$\det A$</span>. Here are some other formulas. And finally, here are some nice properties of the determinant.”</p>
<p>For example, in very elementary textbooks it is introduced by giving the co-factor expansion formula. In Axler’s “Linear Algebra Done Right” it is defined, for <span class="math-container">$T\in L(V)$</span> to be <span class="math-container">${(-1)}^{\dim V}$</span> times the constant term in the characteristic polynomial of <span class="math-container">$T$</span>.</p>
<p>However I find this somewhat unsatisfactory. It’s like the real definition of the determinant is hidden. Ideally, wouldn’t the determinant be defined in the following manner:</p>
<p>“Given a matrix <span class="math-container">$A$</span>, let <span class="math-container">$\det A$</span> be an element of <span class="math-container">$\mathbb{F}$</span> such that <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$z$</span>.”</p>
<p>Then one would proceed to prove that this element is unique, and derive the familiar formulae.</p>
<p>So my question is: Does a definition of the latter type exist, is there some minimal set of properties sufficient to define what a determinant is? If not, can you explain why?</p>
| <p>Let $V$ be a vector space of dimension $n$. For any $p$, the construction of the <a href="http://en.wikipedia.org/wiki/Exterior_algebra">exterior power</a> $\Lambda^p(V)$ is <a href="http://en.wikipedia.org/wiki/Functor">functorial</a> in $V$: it is the universal object for alternating multilinear functions out of $V^p$, that is, functions</p>
<p>$$\phi : V^p \to W$$</p>
<p>where $W$ is any other vector space satisfying $\phi(v_1, ... v_i + v, ... v_p) = \phi(v_1, ... v_i, ... v_p) + \phi(v_1, ... v_{i-1}, v, v_{i+1}, ... v_p)$ and $\phi(v_1, ... v_i, ... v_j, ... v_p) = - \phi(v_1, ... v_j, ... v_i, ... v_p)$. What this means is that there is a map $\psi : V^p \to \Lambda^p(V)$ (the exterior product) which is alternating and multilinear which is universal with respect to this property; that is, given any other map $\phi$ as above with the same properties, $\phi$ factors uniquely as $\phi = f \circ \psi$ where $f : \Lambda^p(V) \to W$ is linear.</p>
<p>Intuitively, the universal map $\psi : V^p \to \Lambda^p(V)$ is the universal way to measure the oriented $p$-dimensional volumes of <a href="http://en.wikipedia.org/wiki/Parallelepiped#Parallelotope">paralleletopes</a> defined by $p$-tuples of vectors in $V$, the point being that for geometric reasons oriented $p$-dimensional volume is alternating and multilinear. (It is instructive to work out how this works when $n = 2, 3$ by explicitly drawing some diagrams.)</p>
<p>Functoriality means the following: if $T : V \to W$ is any map between two vector spaces, then there is a natural map $\Lambda^p T : \Lambda^p V \to \Lambda^p W$ between their $p^{th}$ exterior powers satisfying certain natural conditions. This natural map comes in turn from the natural action $T(v_1, ... v_p) = (Tv_1, ... Tv_p)$ defining a map $T : V^p \to W^p$ which is compatible with the passing to the exterior powers.</p>
<p>The top exterior power $\Lambda^n(V)$ turns out to be one-dimensional. We then define the <strong>determinant</strong> of $T : V \to V$ to be the scalar $\Lambda^n T : \Lambda^n(V) \to \Lambda^n(V)$ by which $T$ acts on the top exterior power. This is equivalent to the intuitive definition that $\det T$ is the constant by which $T$ multiplies oriented $n$-dimensional volumes. But it requires <em>no arbitrary choices</em>, and the standard properties of the determinant (for example that it is multiplicative, that it is equal to the product of the eigenvalues) are extremely easy to verify.</p>
<p>In this definition of the determinant, all the work that would normally go into showing that the determinant is the unique function with such-and-such properties goes into showing that $\Lambda^n(V)$ is one-dimensional. If $e_1, ... e_n$ is a basis, then $\Lambda^n(V)$ is in fact spanned by $e_1 \wedge e_2 \wedge ... \wedge e_n$. This is not so hard to prove; it is essentially an exercise in row reduction.</p>
<p>Note that this definition does not even require a definition of oriented $n$-dimensional volume as a number. Abstractly such a notion of volume is given by a choice of isomorphism $\Lambda^n(V) \to k$ where $k$ is the underlying field, but since $\Lambda^n(V)$ is one-dimensional its space of endomorphisms is already <em>canonically</em> isomorphic to $k$. </p>
<p>Note also that just as the determinant describes the action of $T$ on the top exterior power $\Lambda^n(V)$, the $p \times p$ minors of $T$ describe the action of $T$ on the $p^{th}$ exterior power $\Lambda^p(V)$. In particular, the $(n-1) \times (n-1)$ minors (which form the matrix of cofactors) describe the action of $T$ on the second-to-top exterior power $\Lambda^{n-1}(V)$. This exterior power has the same dimension as $V$, and with the right extra data can be identified with $V$, and this leads to a quick and natural proof of the explicit formula for the inverse of a matrix.</p>
<hr>
<p>As an advance warning, the determinant is sometimes defined as an alternating multilinear function on $n$-tuples of vectors $v_1, ... v_n$ satisfying certain properties; this properly defines a linear transformation $\Lambda^n(V) \to k$, not a determinant of a linear transformation $T : V \to V$. If we fix a basis $e_1, ... e_n$, then this function can be thought of as the determinant of the linear transformation sending $e_i$ to $v_i$, but this definition is basis-dependent. </p>
| <p>Let $B$ be a basis of a vector space $E$ of dimension $n$ over $\Bbbk$. Then $\det_B$ is the only $n$-alternating multilinear form with $\det_B(B) = 1$.</p>
<p>An $n$-multilinear form is a map from $E^n$ to $\Bbbk$ which is linear in each variable.</p>
<p>An $n$-alternating multilinear form is a multilinear form which satisfies, for all $i,j$,
$$ f(x_1,x_2,\dots,x_i,\dots, x_j, \dots, x_n) = -f(x_1,x_2,\dots,x_j,\dots, x_i, \dots, x_n) $$
In plain English, the sign of the form changes when you swap two arguments. This explains why the closed formula for the determinant is a signed sum over permutations. </p>
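<p>For completeness, here is that closed formula as a short Python sketch (an illustration only; <code>det</code> below implements the Leibniz sum over permutations):</p>
<pre><code>from itertools import permutations
from math import prod

def sign(p):
    # parity of a permutation, via counting inversions
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def det(A):
    """The unique n-alternating multilinear form on the rows of A,
    normalized to 1 on the identity matrix (Leibniz formula)."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))   # -2
</code></pre>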
|
linear-algebra | <p>For example we have the vector $8i + 4j - 6k$, how can we find a unit vector perpendicular to this vector?</p>
| <p>Let <span class="math-container">$\vec{v}=x\vec{i}+y\vec{j}+z\vec{k}$</span>, a perpendicular vector to yours. Their inner product (the dot product <span class="math-container">$\vec{u}\cdot\vec{v}$</span>) should be equal to 0, therefore: <span class="math-container">$$8x+4y-6z=0 \tag{1}$$</span> Choose, for example, x,y and find z from equation (1). In order to make its length equal to 1, calculate <span class="math-container">$\|\vec{v}\|=\sqrt{x^2+y^2+z^2}$</span> and divide <span class="math-container">$\vec{v}$</span> by it. Your unit vector would be: <span class="math-container">$$\vec{u}=\frac{\vec{v}}{\|\vec{v}\|}$$</span></p>
| <p>Congrats on 10'000+ views! I'd like to combine the above fine answers into an algorithm. </p>
<p>Given a vector $\vec x$ not identically zero, one way to find $\vec y$ such that $\vec x^T \vec y = 0$ is:</p>
<ol>
<li>start with $\vec y' = \vec 0$ (all zeros);</li>
<li>find $m$ such that $x_m \neq 0$, and pick any other index $n \neq m$;</li>
<li>set $y'_n = x_m$ and $y'_m = -x_n$, setting potentially two elements of $\vec y'$ non-zero (maybe one if $x_n=0$, doesn't matter);</li>
<li>and finally normalize your vector to unit length: $\vec y = \frac{\vec y'}{\|\vec y'\|}.$</li>
</ol>
<p>(I'm referring to the $n$th element of a vector $\vec v$ as $v_n$.)</p>
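<p>As a sketch in Python/numpy (the function name is my own, just for illustration):</p>
<pre><code>import numpy as np

def perpendicular_unit(x):
    """Unit vector orthogonal to the nonzero vector x, via the
    swap-and-negate steps above."""
    y = np.zeros_like(x, dtype=float)
    m = np.flatnonzero(x)[0]        # an index with x_m != 0
    n = (m + 1) % len(x)            # any other index
    y[n], y[m] = x[m], -x[n]
    return y / np.linalg.norm(y)

u = perpendicular_unit(np.array([8.0, 4.0, -6.0]))
print(u, np.dot([8, 4, -6], u))     # dot product ~ 0
</code></pre>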
|
linear-algebra | <p>I am auditing a Linear Algebra class, and today we were taught about the rank of a matrix. The definition was given from the row point of view: </p>
<blockquote>
<p>"The rank of a matrix A is the number
of non-zero rows in the reduced
row-echelon form of A".</p>
</blockquote>
<p>The lecturer then explained that if the matrix <span class="math-container">$A$</span> has size <span class="math-container">$m
\times n$</span>, then <span class="math-container">$rank(A) \leq m$</span> and <span class="math-container">$rank(A) \leq n$</span>. </p>
<p>The way I had been taught about rank was that it was the smallest of </p>
<ul>
<li>the number of rows bringing new information</li>
<li>the number of columns bringing new information. </li>
</ul>
<p>I don't see how that would change if we transposed the matrix, so I said in the lecture:</p>
<p>"then the rank of a matrix is the same of its transpose, right?" </p>
<p>And the lecturer said: </p>
<p>"oh, not so fast! Hang on, I have to think about it". </p>
<p>As the class has about 100 students and the lecturer was just substituting for the "normal" lecturer, he was probably a bit nervous, so he just went on with the lecture.</p>
<p>I have tested "my theory" with one matrix and it works, but even if I tried with 100 matrices and it worked, I wouldn't have proven that it always works because there might be a case where it doesn't.</p>
<p>So my question is first whether I am right, that is, whether the rank of a matrix is the same as the rank of its transpose, and second, if that is true, how can I prove it?</p>
<p>Thanks :)</p>
| <p>The answer is yes. This statement often goes under the name "row rank equals column rank". Knowing that, it is easy to search the internet for proofs.</p>
<p>Also any reputable linear algebra text should prove this: it is indeed a rather important result.</p>
<p>Finally, since you said that you had only a substitute lecturer, I won't castigate him, but this would be a distressing lacuna of knowledge for someone who is a regular linear algebra lecturer. </p>
| <p>There are several simple proofs of this result. Unfortunately, most textbooks use a rather complicated approach using row reduced echelon forms. Please see some elegant proofs in the Wikipedia page (contributed by myself):</p>
<p><a href="http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29" rel="noreferrer">http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29</a></p>
<p>or the page on rank factorization:</p>
<p><a href="http://en.wikipedia.org/wiki/Rank_factorization" rel="noreferrer">http://en.wikipedia.org/wiki/Rank_factorization</a></p>
<p>Another of my favorites is the following:</p>
<p>Define $\operatorname{rank}(A)$ to mean the column rank of A: $\operatorname{col rank}(A) = \dim \{Ax: x \in
\mathbb{R}^n\}$. Let $A^{t}$ denote the transpose of A. First show that $A^{t}Ax = 0$
if and only if $Ax = 0$. This is standard linear algebra: one direction is
trivial, the other follows from:</p>
<p>$$A^{t}Ax=0 \implies x^{t}A^{t}Ax=0 \implies (Ax)^{t}(Ax) = 0 \implies Ax = 0$$</p>
<p>Therefore, the columns of $A^{t}A$ satisfy the same linear relationships
as the columns of $A$. It doesn't matter that they have different number
of rows. They have the same number of columns and they have the same
column rank. (This also follows from the rank+nullity theorem, if you
have proved that independently (i.e. without assuming row rank = column
rank).)</p>
<p>Therefore, $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t}A) \leq \operatorname{col rank}(A^{t})$. (This
last inequality follows because each column of $A^{t}A$ is a linear
combination of the columns of $A^{t}$. So, $\operatorname{col sp}(A^{t}A)$ is a subset
of $\operatorname{col sp}(A^{t})$.) Now simply apply the argument to $A^{t}$ to get the
reverse inequality, proving $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t})$. Since $\operatorname{col rank}(A^{t})$ is the row rank of A, we are done.</p>
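<p>If a quick empirical sanity check helps alongside the proof, here is one in Python/numpy (illustrative only):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(5, 3)).astype(float)

r = np.linalg.matrix_rank
print(r(A), r(A.T), r(A.T @ A))   # the three ranks agree, as the proof shows
</code></pre>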
|
logic | <p>As a physicist trying to understand the foundations of modern mathematics (in particular Model Theory) $-$ I have a hard time coping with the border between syntax and semantics. I believe a lot would become clearer for me, if I stated what I think the Gödel's Completeness Theorem is about (after studying various materials including Wikipedia it seems redundant for me) and someone knowledgeable would clarify my misconceptions. So here it goes:</p>
<p>As I understand, if we have a set $U$ with a particular structure (functions, relations etc.) we can interpret it (through a particular signature, e.g. group signature $\{ e,\cdot \}$ ), as a model $\mathfrak{A}$ for a certain mathematical theory $\mathcal{T}$ (a theory being a set of axioms and its consequences). The theory is satisfied by $\mathfrak{A}$ only if $U$'s structure satisfies the axioms.</p>
<p>Enter Gödel's theorem: For every first order theory $\mathcal{T}$ :</p>
<p>$$\left( \exists \textrm{model } \mathfrak{A}:
\mathfrak{A} \models \mathcal{T} \right) \iff \mathcal{T} \textrm{ is consistent}$$
So I'm confused. Isn't $\mathcal{T}$ being consistent a natural requirement which implies that a set $U$ with a corresponding structure always exists (because of ZFC set theory's freedom in constructing sets as we please, without any concerns regarding what constitutes the set)? And that in turn always allows us to create a model $\mathfrak{A}$ with an interpretation of the signature of the theory $\mathcal{T}$ in terms of $U$'s structure?</p>
<p>Where am I making mistakes? What concepts do I need to understand better in order to be able to properly comprehend this theorem and what model theory is and is not about? Please help!</p>
| <p>It may help to look at things from a more general perspective. Presentations that focus on just first-order logic may obscure the fact that specific choices are implicit in the definitions of first-order logic; the general perspective highlights these choices. I want to write this up in detail, as a reference.</p>
<h3>General "logics"</h3>
<p>We define a particular type of general "logic" with negation. This definition is intended to be very general. In particular, it accommodates much broader types of "syntax" and "semantics" than first-order logic. </p>
<p>A general "logic" will consist of:</p>
<ul>
<li><p>A set of "sentences" $L$. These do not have to be sentences in the sense of first-order logic, they can be any set of objects.</p></li>
<li><p>A function $N: L \to L$ that assigns to each $x \in L$ a "negation" or "denial" $N(x)$.</p></li>
<li><p>A set of "deductive rules", which are given as a closure operation on the powerset of $L$. So we have a function $c: 2^L \to 2^L$ such that</p>
<ol>
<li><p>$S \subseteq c(S)$ for each $S \subseteq L$</p></li>
<li><p>$c(c(S)) = c(S)$ for each $S \subseteq L$</p></li>
<li><p>If $S \subseteq S'$ then $c(S) \subseteq c(S')$. </p></li>
</ol></li>
<li><p>A set of "models" $M$. These do not have to be structures in the sense of first-order logic. The only assumption is that each $m \in M$ comes with a set $v_m \subseteq L$ of sentences that are "satisfied" (in some sense) by $M$: </p>
<ol>
<li><p>If $S \subseteq L$ and $x \in v_m$ for each $x \in S$ then $y \in v_m $ for each $y \in c(S)$</p></li>
<li><p>There is no $m \in M$ and $x \in L$ with $x \in v_m$ and $N(x) \in v_m$</p></li>
</ol></li>
</ul>
<p>The exact nature of the "sentences", "deductive rules", and "models", and the definition of a model "satisfying" a sentence are irrelevant, as long as they satisfy the axioms listed above. These axioms are compatible with both classical and intuitionistic logic. They are also compatible with infinitary logics such as $L_{\omega_1, \omega}$, with modal logics, and other logical systems.</p>
<p>The main restriction in a general "logic" is that we have included a notion of negation or denial in the definition of a general "logic" so that we can talk about consistency.</p>
<ul>
<li><p>We say that a set $S \subseteq L$ is <strong>syntactically consistent</strong> if there is no $x \in L$ such that $x$ and $N(x)$ are both in $c(S)$.</p></li>
<li><p>We say $S$ is <strong>semantically consistent</strong> if there is an $m \in M$ such that $x \in v_m$ for all $x \in S$. </p></li>
</ul>
<p>The definition of a general "logic" is designed to imply that each semantically consistent theory is syntactically consistent. </p>
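<p>A toy instantiation may make the axioms concrete. Here is a Python sketch (entirely illustrative: sentences are restricted to propositional literals, and the closure operator is the trivial one) in which both soundness and the corresponding "completeness theorem" can be checked by brute force:</p>
<pre><code>from itertools import chain, combinations

atoms = ['p', 'q']
L = atoms + ['~' + a for a in atoms]          # sentences: literals only

def N(x):                                     # negation map
    return x[1:] if x.startswith('~') else '~' + x

def c(S):                                     # trivial closure; axioms 1-3 hold
    return set(S)

models = [dict(zip(atoms, bits))
          for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]]

def v(m):                                     # literals satisfied by model m
    return {a if m[a] else '~' + a for a in atoms}

def syn_consistent(S):
    return not any(N(x) in c(S) for x in c(S))

def sem_consistent(S):
    return any(set(S) <= v(m) for m in models)

for S in chain.from_iterable(combinations(L, k) for k in range(len(L) + 1)):
    assert (not sem_consistent(S)) or syn_consistent(S)   # soundness
    assert (not syn_consistent(S)) or sem_consistent(S)   # completeness
print("soundness and completeness hold for this toy logic")
</code></pre>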
<h3>First-order logic as a general logic</h3>
<p>To see how the definition of a general "logic" works, here is how to view first-order logic in any fixed signature as a general "logic". Fix a signature $\sigma$.</p>
<ul>
<li><p>$L$ will be the set of all $\sigma$-sentences. </p></li>
<li><p>$N$ will take a sentence $x$ and return $\lnot x$, the canonical negation of $x$. </p></li>
<li><p>$c$ will take $S \subseteq L$ and return the set of all $\sigma$-sentences provable from $S$. </p></li>
<li><p>$M$ will be the set of <em>all</em> $\sigma$-structures. For each $m \in M$, $v_m$ is given by the usual Tarski definition of truth.</p></li>
</ul>
<p>With these definitions, syntactic consistency and semantic consistency in the general sense match up with syntactic consistency and semantic consistency as usually defined for first-order logic.</p>
<h3>The completeness theorem</h3>
<p>Gödel's completeness theorem simply says that, if we treat first-order logic in a fixed signature as a general "logic" (as above) then syntactic consistency is equivalent to semantic consistency. </p>
<p>The benefit of the general perspective is that we can see how things could go wrong if we change just one part of the interpretation of first-order logic with signature $\sigma$ as a general "logic":</p>
<ol>
<li><p>If we were to replace $c$ with a weaker operator, syntactic consistency may not imply semantic consistency. For example, we could take $c(S) = S$ for all $S$. Then there would be syntactically consistent theories that have no model. In practical terms, making $c$ weaker means removing deduction rules.</p></li>
<li><p>If we were to replace $M$ with a smaller class of models, syntactic consistency may not imply semantic consistency. For example, if we we take $M$ to be just the set of <em>finite</em> $\sigma$-structures, there are syntactically consistent theories that have no model. In practical terms, making $M$ smaller means excluding some structures from consideration.</p></li>
<li><p>If we were to replace $c$ with a stronger closure operator, semantic consistency may not imply syntactic consistency. For example, we could take $c(S) = L$ for all $S$. Then there would be semantically consistent theories that are syntactically inconsistent. In practical terms, making $c$ stronger means adding new deduction rules.</p></li>
</ol>
<p>On the other hand, some changes would preserve the equivalence of syntactic and semantic consistency. For example, if we take $M$ to be just the set of <em>finite or countable</em> $\sigma$-structures, we can still prove the corresponding completeness theorem for first-order logic. In this sense, the choice of $M$ to be the set of <em>all</em> $\sigma$-structures is arbitrary.</p>
<h3>Other completeness theorems</h3>
<p>We say that the "completeness theorem" for a general "logic" is the theorem that syntactic consistency is equivalent to semantic consistency in that logic.</p>
<ul>
<li><p>There is a natural completeness theorem for intuitionistic first-order logic. Here we let $c$ be the closure operator derived from any of the usual deductive systems for intuitionistic logic, and let $M$ be the set of Kripke models. </p></li>
<li><p>There is a completeness theorem for second-order logic (in a fixed signature) with Henkin semantics. Here we let $c$ be the closure operator derived from the usual deductive system for second-order logic, and let $M$ be the set of Henkin models. On the other hand, if we let $M$ be the set of all "full" models, the corresponding completeness theorem fails, because this class of models is too small.</p></li>
<li><p>There are similar completeness theorems for propositional and first-order modal logics using Kripke frames.</p></li>
</ul>
<p>In each of those three cases, the historical development began with a deductive system, and the corresponding set of models was identified later. But, in other cases, we may begin with a set of models and look for a deductive system (including, in this sense, a set of axioms) that leads to a generalized completeness theorem. This is related to a common problem in model theory, which is to determine whether a given class of structures is "axiomatizable".</p>
| <p>The usual form of the completeness theorem is this: $ T \models \phi \implies T \vdash\phi$, or that if $\phi$ is true in all models $\mathcal{M} \models T$, then there is a proof of $\phi$ from $T$. This is a non-trivial statement, structures and models are about sets with operations and relations that satisfy sentences. Proofs ignore the sets with structure and just gives rules for deriving new sentences from old. </p>
<p>If you go to second order logic, this is no longer true. We can have a theory $PA$, which only has one model $\mathbb N \models PA$, but there are sentences $PA \models \phi$ with $PA \not\vdash \phi$ ("true but not provable" sentences). This follows from the incompleteness theorem, which says that truth in the particular model $\mathbb N$ cannot be pinned down by proofs. The way first order logic avoids this is by the fact that a first order theory can't pin down only one model $\mathbb N \models PA$. It has to also admit non-standard models (this follows from Löwenheim–Skolem).</p>
<p>This theorem, along with the soundness theorem $T\vdash \phi \implies T\models \phi$ give a strong correspondence between syntax and semantics of first order logic.</p>
<p>Your main confusion is that consistency here is a syntactic notion, so it doesn't directly have anything to do with models. The correspondence between the usual form of the completeness theorem, and your form is by using a contradiction in place of $\phi$, and taking the contrapositive. So if $T \not \vdash \bot$ ($T$ is consistent), then $T \not \models \bot$. That is, if $T$ is consistent, then there exists a model $\mathcal M \models T$ such that $\mathcal M \not \models \bot \iff \mathcal M \models \top$, but that's a tautology, so we just get that there exists a model of $T$. </p>
|
geometry | <p>I saw this picture of a cube cut out of a tree stump.</p>
<p><a href="https://i.sstatic.net/BQapm.jpg" rel="noreferrer"><img src="https://i.sstatic.net/BQapm.jpg" alt="enter image description here" /></a></p>
<p>I've been trying to craft the same thing out of a tree stump, but I found it hard to figure out how to do it.</p>
<p>One of the opposing vertices pair is on the center of the tree stump:</p>
<p><a href="https://i.sstatic.net/nMvU3.jpg" rel="noreferrer"><img src="https://i.sstatic.net/nMvU3.jpg" alt="enter image description here" /></a></p>
<p>I've been struggling to find the numbers and angle needed to make the cuts.</p>
<p>Any kind of advice would be greatly appreciated. Thanks for taking the time to read this, and I'm sorry for my bad English.</p>
| <p>First, form your stump into a cylinder with a height greater than <span class="math-container">$\frac{3}{\sqrt{2}}\approx 2.121$</span> times its radius so that the cube will fit inside the cylinder. Here, we assume you can mark given points and lines on the surface of the cylinder as well as cut a plane through <span class="math-container">$3$</span> points.</p>
<p>For the first vertex, mark the center of one of the cylinder's circular faces. Next, from this point mark <span class="math-container">$3$</span> radii of the circular face spaced <span class="math-container">$120$</span> degrees from each other (so that their endpoints create an equilateral triangle).</p>
<p><a href="https://i.sstatic.net/pN4kM.png" rel="noreferrer"><img src="https://i.sstatic.net/pN4kM.png" alt="enter image description here" /></a></p>
<p>Now, go down from each of the <span class="math-container">$3$</span> vertices of the triangle a distance equal to <span class="math-container">$\frac{1}{\sqrt{2}}\approx 0.707$</span> the radius of the cylinder, and mark the points there. If your cylinder has the minimal height-to-radius ratio, these points will be exactly <span class="math-container">$\frac{1}{3}$</span> of the way down.</p>
<p><a href="https://i.sstatic.net/NfXLR.png" rel="noreferrer"><img src="https://i.sstatic.net/NfXLR.png" alt="enter image description here" /></a></p>
<p>Next, cut <span class="math-container">$3$</span> planes through the first vertex and each pair of the next <span class="math-container">$3$</span> vertices, forming the first corner of the cube.</p>
<p><a href="https://i.sstatic.net/4lOSZ.png" rel="noreferrer"><img src="https://i.sstatic.net/4lOSZ.png" alt="enter image description here" /></a></p>
<p>Now, mark the <span class="math-container">$3$</span> midpoints of the arcs formed by your cuts. These are the next vertices of the cube. Once again, if your cylinder has the minimal height-to-radius ratio, these will be <span class="math-container">$\frac{2}{3}$</span> of the way down.</p>
<p><a href="https://i.sstatic.net/eYD3m.png" rel="noreferrer"><img src="https://i.sstatic.net/eYD3m.png" alt="enter image description here" /></a></p>
<p>Finally, cut <span class="math-container">$3$</span> more planes to finish the shape of your cube. You do not need to mark the last vertex as it is predefined by the intersection of the planes.</p>
<p><a href="https://i.sstatic.net/3uuSt.png" rel="noreferrer"><img src="https://i.sstatic.net/3uuSt.png" alt="enter image description here" /></a></p>
<p>When you are all done, the side length of the cube will be <span class="math-container">$\sqrt{\frac{3}{2}}\approx 1.225$</span> times the radius of the starting cylinder, so take this into account if you want a specific side length. The angle formed by each face of the cube to the original end of the cylinder is <span class="math-container">$\arctan(\sqrt{2}) \approx 54.734^{\circ}$</span>.</p>
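<p>As a sanity check on these numbers, here is a minimal Python sketch (assuming numpy) that recomputes the marking measurements from the radius alone:</p>
<pre><code>import numpy as np

r = 1.0                                  # cylinder radius (any unit)
e = np.sqrt(3 / 2) * r                   # cube side length
h = 3 / np.sqrt(2) * r                   # minimal cylinder height (the cube's space diagonal)
drop = r / np.sqrt(2)                    # vertical drop to the three upper-ring vertices

print(h)                                 # 2.1213...
print(drop / h)                          # 0.3333... -- exactly 1/3 of the way down
print(e)                                 # 1.2247... times the radius
print(np.degrees(np.arctan(np.sqrt(2)))) # 54.7356... degrees, the face angle
</code></pre>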
| <p><strong>The Method</strong></p>
<p>Here's an approach with which you are only ever dealing with planar surfaces.</p>
<ol>
<li>Cut from the stump a (regular) hexagonal prism of radius <span class="math-container">$r$</span> and height <span class="math-container">$\frac32r\sqrt2\approx2.1213 r$</span>. <br/><br/><strong>Note:</strong> The face-to-opposite-face "width" of the prism should be <span class="math-container">$r\sqrt3\approx1.732r$</span>. <br/><br/>(Sanity check for regularity: The edges of the hexagonal top and bottom should also be <span class="math-container">$r$</span>.)</li>
</ol>
<p>The figure shows the top view and the six "unrolled" lateral faces.</p>
<p><a href="https://i.sstatic.net/sursO.png" rel="noreferrer"><img src="https://i.sstatic.net/sursO.png" alt="enter image description here" /></a></p>
<ol start="2">
<li><p>Divide each lateral edge of the prism into thirds, and draw a zig-zag across the lateral faces, joining alternating <span class="math-container">$\frac13$</span> and <span class="math-container">$\frac23$</span> marks on those edges. <br/><br/>Each segment of the zig-zag will be an edge of the final cube. (Its length should be <span class="math-container">$e=\sqrt{3/2}\,r\approx 1.2247r$</span>.)</p>
</li>
<li><p>Each consecutive triad of points on the zig-zag (along with either the top-center or bottom-center point of the prism) defines a face of the cube, so "all you have to do" is remove excess wood until those faces are planar.</p>
</li>
</ol>
<p>Done!</p>
<hr />
<p><strong>The Math</strong></p>
<p>For a prism with radius <span class="math-container">$r=2$</span> (to avoid fractions), let the bottom-center of the prism be the origin <span class="math-container">$O=(0,0,0\sqrt2)$</span>, and the top-center be <span class="math-container">$T=(0,0,3\sqrt2)$</span>.</p>
<p>The coordinates of the zig-zag of points (<span class="math-container">$ABCDEFA$</span> in the figure) have the form
<span class="math-container">$$\left(2\cos k\,60^\circ, 2\sin k\,60^\circ, (1\;\text{or}\;2)\sqrt{2} \right)$$</span>
Specifically,
<span class="math-container">$$A = (\phantom{-}2,0,2\sqrt{2}) \qquad B = (\phantom{-}1,\phantom{-}\sqrt{3},1\sqrt2) \qquad C = (-1,\phantom{-}\sqrt3,2\sqrt2)$$</span>
<span class="math-container">$$D = (-2,0,1\sqrt2) \qquad E = (-1,-\sqrt3,2\sqrt2) \qquad F = (\phantom{-}1,-\sqrt3,1\sqrt2)$$</span>
(where <span class="math-container">$A$</span>, <span class="math-container">$C$</span>, <span class="math-container">$E$</span> are closer to the top of the prism, and <span class="math-container">$B$</span>, <span class="math-container">$D$</span>, <span class="math-container">$F$</span> are closer to the bottom). From here, we can verify that
<span class="math-container">$$\begin{align}
T &= B+(A-B)+(C-B) &&\quad\to\quad T,A,B,C\;\text{coplanar} \\
&= D+(C-D)+(E-D) &&\quad\to\quad T,C,D,E\;\text{coplanar}\\
&= F+(E-F)+(A-F) &&\quad\to\quad T,E,F,A\;\text{coplanar}\\[4pt]
O &= A+(B-A)+(F-A) &&\quad\to\quad O,F,A,B\;\text{coplanar}\\
&=C+(B-C)+(D-C) &&\quad\to\quad O,B,C,D\;\text{coplanar}\\
&=E+(D-E)+(F-E) &&\quad\to\quad O,D,E,F\;\text{coplanar}
\end{align}$$</span>
and
<span class="math-container">$$\begin{align}2\,\sqrt{\frac32}=\sqrt{6}\;&=\;|AB|=|BC|=|CD|=|DE|=|EF|=|FA| \\
&=\;|TA|=|TC|=|TE| \\
&=\;|OB|=|OD|=|OF|\end{align}$$</span>
so that the faces are all quadrilaterals with equal edge-lengths, making them <em>at least</em> rhombuses. Also,
<span class="math-container">$$\begin{align}
0\;&=\;(A-T)\cdot(C-T)=(C-T)\cdot(E-T)=(E-T)\cdot(A-T) \\
&=\;(B-O)\cdot(D-O)=(D-O)\cdot(F-O)=(F-O)\cdot(B-O)
\end{align}$$</span>
so that
<span class="math-container">$$90^\circ = \angle ATC = \angle CTE = \angle ETA =\angle BOD = \angle DOF = \angle FOB$$</span>
That is, each rhombus face has at least one right angle, making it a <em>square</em>. <span class="math-container">$\square$</span></p>
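<p>For anyone who prefers a numerical double-check of the computations above, here is a short Python sketch (assuming numpy); the vertex names follow the figure:</p>
<pre><code>import numpy as np

s, q = np.sqrt(2), np.sqrt(3)
O, T = np.array([0, 0, 0]), np.array([0, 0, 3 * s])
A, B, C = np.array([2, 0, 2 * s]), np.array([1, q, s]), np.array([-1, q, 2 * s])
D, E, F = np.array([-2, 0, s]), np.array([-1, -q, 2 * s]), np.array([1, -q, s])

# all twelve cube edges have length 2*sqrt(3/2) = sqrt(6)
edges = [(A, B), (B, C), (C, D), (D, E), (E, F), (F, A),
         (T, A), (T, C), (T, E), (O, B), (O, D), (O, F)]
print({round(float(np.linalg.norm(u - v)), 10) for u, v in edges})  # one value: sqrt(6)

# right angles at T and O turn the rhombus faces into squares
print(np.isclose(np.dot(A - T, C - T), 0), np.isclose(np.dot(B - O, D - O), 0))  # True True

# sample coplanarity check: T = B + (A - B) + (C - B)
print(np.allclose(T, B + (A - B) + (C - B)))  # True
</code></pre>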
|
logic | <p>Intuitionistic logic contains the rule $\bot \rightarrow \phi$ for every $\phi$. In the formulations I have seen this is a separate axiom, and the logic without this axiom(?) is termed "minimal logic".</p>
<p>Is this rule required for practical proof problems in intuitionism? Is there a good example of a practical proof which goes through in intuitionism which doesn't go through without this axiom, or do all/most important practical results also go through in minimal logic? And can you please illustrate this with a practical example?</p>
<p>Context: We are faced with a decision theory problem in which it might be very useful to have a powerful reasoning logic which can nonetheless notice and filter consequences which are passing through a principle of explosion. So if the important proofs use a rule like $(A \vee B), \neg A \vdash B$ and we can replace that with $(A \vee B), \neg A \vdash \neg \neg B$ to distinguish the proofs going through the 'explosive' reasoning, that would also be useful.</p>
<p>ADDED clarification: I'm not looking for a generic propositional formula which can't be proven, I'm looking for a theorem in topology or computability theory or something which can be proven in intuitionism but not in minimal logic, along with a highlighting of which step requires explosion. Could be a very simple theorem but I'd still want it to be a useful statement in some concrete domain.</p>
| <p>In minimal logic you can't prove a formula of the form $\forall x \;\neg A(x) \rightarrow B(x)$, where $B(x)$ doesn't have $\bot$ as a subformula, unless you can already prove $\forall x\;B(x)$. To see this, note that if we can prove $\forall x\;(A(x) \rightarrow \bot) \rightarrow B(x)$ in minimal logic, then we could prove the same formula with $\bot$ replaced by $\top$, ie $\forall x\;(A'(x) \rightarrow \top) \rightarrow B(x)$ (where $A'$ has any instances of $\bot$ in $A$ replaced by $\top$). Since $\forall x \;A'(x) \rightarrow \top$ is provable, we deduce that $\forall x\;B(x)$ is also provable.</p>
<p>To give an explicit example, we can easily prove in intuitionistic logic that if a natural number $n$ is not prime, then there are $a, b$ such that $1 < a, b < n$ and $n = ab$. But this won't work in minimal logic.</p>
<p>I think the key part of the proof that works for intuitionistic logic but not minimal logic is the following principle. On the basis of this principle it shouldn't be surprising that the above example holds in intuitionistic logic. If $\phi$ is a quantifier free formula, then we can prove
$$
\neg(\forall x < y \;\neg\phi(x)) \rightarrow \exists x < y\; \phi(x)
$$
(This is a bounded version of the classical principle $\neg \forall x \neg \phi \rightarrow \exists x \phi$).</p>
<p>We prove this by induction on $y$. Note that if we have ex falso, then the case $y = 0$ is easy. Suppose now that we have shown this for $y$ and want to show it for $y + 1$. Suppose further $\neg(\forall x < y + 1\; \neg \phi(x))$. Since $\phi(x)$ is quantifier free, we know that either $\phi(y)$ or $\neg \phi(y)$ holds (by another inductive argument). Suppose that $\phi(y)$ holds. Then we can trivially show $\exists x < y + 1\;\phi(x)$. Now suppose that $\neg \phi(y)$ holds. Note that we can't have $\forall x < y\; \neg \phi(x)$ because this would imply $\forall x < y + 1 \; \neg \phi(x)$ (because for every $x < y + 1$ either $x = y$ or $x < y$ by yet another inductive argument!). Hence $\neg (\forall x < y \; \neg\phi(x))$ holds and so by the inductive hypothesis we can show $\exists x < y\; \phi(x)$, and deduce $\exists x < y + 1\;\phi(x)$.</p>
<p>So ex falso was explicitly used for the case $y = 0$. I suspect that it is also important for the inductive argument that for every $x < y + 1$ either $x = y$ or $x < y$ that I didn't show.</p>
| <p>The disjunctive syllogism $( (A \vee B), \neg A \vdash B )$ is about the most important deduction that is valid in (Heyting's) intuitionistic logic but invalid in (Johansson's) minimal logic.
Johansson's article (see the link on <a href="http://en.wikipedia.org/wiki/Minimal_logic" rel="nofollow">http://en.wikipedia.org/wiki/Minimal_logic</a> ) also mentions the following formulas that are theorems in intuitionistic logic but are NOT theorems in minimal logic.</p>
<p>$ ( A \land \lnot A) \to B $ </p>
<p>$ ( ( A \land \lnot A) \lor B ) \to B $</p>
<p>$ ( B \lor \lnot B) \to( \lnot \lnot B \to B ) $</p>
<p>$ ( \lnot A \lor B) \to ( A \to B) $ </p>
<p>$ ( A \lor B) \to ( \lnot A \to B ) $</p>
<p>$ ( A \to ( B \lor\lnot C) \to ((A \land C) \to B)) $ </p>
<p>$ \lnot \lnot ( \lnot \lnot A \to A) $ </p>
<p>But maybe Johansson only mentions these because they are mentioned in Heyting's article. </p>
<p>I am doubtful whether $ ( A \lor B) , \lnot A \vdash \lnot \lnot B $ is a theorem of minimal logic; have you proven it, or just assumed it?<br>
[CORRECTION: $ ( A \lor B) , \lnot A \vdash \lnot \lnot B $ is a theorem of minimal logic, because $ ( A \land \lnot A) \to \lnot \lnot B $ and $ B \to \lnot \lnot B $ are both theorems of minimal logic] </p>
<p>In the 1920s there were discussions about what the axioms of constructive logic were; it is just that the founder of intuitionism (Brouwer) liked Heyting and gave him the honour of naming his system THE system of intuitionistic logic (Brouwer found logic bloodless, and not really important). </p>
<p>What I think is a more serious problem is that
$ ( A \land \lnot A) \to \lnot B $ is a theorem of minimal logic, so you could conclude that minimal logic still allows a lot of explosion.
I think you would be better off using a paraconsistent or relevant logic;
see:</p>
<p><a href="http://en.wikipedia.org/wiki/Paraconsistent_logic" rel="nofollow">http://en.wikipedia.org/wiki/Paraconsistent_logic</a><br>
and<br>
<a href="http://en.wikipedia.org/wiki/Relevance_logic" rel="nofollow">http://en.wikipedia.org/wiki/Relevance_logic</a></p>
<p>Good luck</p>
|
linear-algebra | <p>A natural vector space is the set of continuous functions on $\mathbb{R}$. Is there a nice basis for this vector space? Or is this one of those situations where we're guaranteed a basis by invoking the Axiom of Choice, but are left rather unsatisfied?</p>
| <p>There is, in a fairly strong sense, no reasonable basis of this space. Zoom in on a neighborhood at any point and note that a finite linear combination of functions which have various kinds of nice behavior in that neighborhood also has that nice behavior in that neighborhood (differentiable, $C^k$, smooth, etc.). So any basis necessarily contains, for <em>every such neighborhood</em>, a function which does not behave nicely in that neighborhood. More generally, but roughly speaking, a basis needs to have functions which are at least as pathological as the most pathological continuous functions. </p>
<p>(Hamel / algebraic) bases of most infinite-dimensional vector spaces simply are not useful. In applications, the various topologies you could put on such a thing matter a lot and the notion of a <a href="http://en.wikipedia.org/wiki/Schauder_basis">Schauder basis</a> becomes more useful.</p>
| <p>Using Nate Eldredge's <a href="https://math.stackexchange.com/questions/136637/what-is-a-basis-for-the-vector-space-of-continuous-functions#comment347999_136637">comment</a> we have that $C(\mathbb R)$ is a Polish vector space.</p>
<p>Consider a Solovay model, that is, ZF+DC+"All sets have the Baire property". In such a model all linear maps into separable vector spaces are continuous; this is a consequence of [1, Th. 9.10]. </p>
<p>It is important to remark that the fact that a continuous function (from $\mathbb R$ to $\mathbb R$) on a compact set is uniformly continuous does not require <em>any</em> form of choice, and I believe that Dependent Choice (DC) ensures that uniform convergence on compact sets is well behaved.</p>
<p>Suppose there were a Hamel basis $B$; it would have to be of cardinality $\frak c$. So it has $2^\frak c$ many permutations, which induce $2^\frak c$ <strong>different</strong> linear automorphisms.</p>
<p>However, every linear automorphism is automatically continuous, so it is determined completely by its values on a countable dense set; therefore there can only be $\frak c$ many linear automorphisms, which contradicts Cantor's theorem since $\mathfrak c\neq 2^\frak c$.</p>
<p>This is essentially the same argument as I used in <a href="https://math.stackexchange.com/a/122723/622">this answer</a>. </p>
<hr>
<p>Bibliography:</p>
<ol>
<li>Kechris, A. <strong><a href="http://books.google.com/books?id=pPv9KCEkklsC" rel="noreferrer">Classical Descriptive Set Theory.</a></strong> <em>Springer-Verlag</em>, 1994.</li>
</ol>
|
logic | <p>In plain language, what's the difference between two things that are 'equivalent', 'equal', 'identical', and isomorphic?</p>
<p>If the answer depends on the area of mathematics, then please take the question in the context of logical systems and statements.</p>
| <p>Convention may vary, but the following is, I guess, how most mathematicians would use these notions. Identical and equal are very often used synonymously. However, sometimes identical is meant to say that the two things are not just equal, but actually are syntactically equal. For instance, take $x=2$. The claim that $x^2=4$ is saying that $x^2$ and $4$ are equal. The claim that $x^2=x^2$ is saying that $x^2$ is equal to $x^2$, but we also say that the left hand side and the right hand side are identical. </p>
<p>Equivalence is a strictly weaker notion than equality. It can be formalized in many different ways. For instance, as an equivalence relation. The identity relation is always an equivalence relation, but not the other way around. A typical way to obtain an equivalence is to suppress some properties of the objects you study, and only look at particular aspects of them. A classical example is modular arithmetic. We say that $10$ and $20$ are equivalent modulo $5$, basically saying that while $10$ and $20$ are not equal, if the only thing we care about is their divisibility by $5$, then they are the same. </p>
<p>Isomorphism is a specific term from category theory. Two objects are isomorphic if there exists an invertible morphism between them. Informally, two isomorphic objects are identical for the purposes of answering any question about them in their category. </p>
| <p>They have different <a href="http://qchu.wordpress.com/2013/05/28/the-type-system-of-mathematics/">types</a>.</p>
<p>"Equal" and "identical" take as input two elements of a <em>set</em> and return a truth value. They both mean the same thing, which is what you think they'd mean. For example, we can consider the set $\{ 1, 2, 3, \dots \}$ of natural numbers, and then $1 = 1$ is true, $1 = 2$ is false, and so forth.</p>
<p>"Equivalent" takes as input two elements of a <em>set together with an <a href="http://en.wikipedia.org/wiki/Equivalence_relation">equivalence relation</a></em> and returns a truth value corresponding to the equivalence relation. For example, we can consider the set $\{ 1, 2, 3, \dots \}$ of natural numbers together with the equivalence relation "has the same remainder upon division by $2$," and then $1 \equiv 3$ is true, $1 \equiv 4$ is false, and so forth. The crucial point here is that an equivalence relation is extra structure on a set. It doesn't make sense to ask whether $1$ is equivalent to $3$ without specifying what equivalence relation you're talking about. </p>
<p>"Isomorphic" takes as input two objects in a <em><a href="http://en.wikipedia.org/wiki/Category_%28mathematics%29">category</a></em> and returns a truth value corresponding to whether an <a href="http://en.wikipedia.org/wiki/Isomorphism">isomorphism</a> between the two objects exists. For example, we can consider the category of sets and functions, and then the set $\{ 1, 2 \}$ and the set $\{ 3, 4 \}$ are isomorphic because the map $1 \to 3, 2 \to 4$ is an isomorphism between them. The crucial point here is, again, a category structure is extra structure on a set (of objects). It doesn't make sense to ask whether two objects are isomorphic without specifying what category structure you're talking about.</p>
<p>Here is a terrible place where this distinction matters. In <a href="http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZF set theory</a>, in addition to being able to ask whether two sets are isomorphic (which means that they are in bijection with each other), it is also a meaningful question to ask whether two sets are <em>equal</em>. The former involves the structure of the category of sets while the latter involves the "set" of sets (not really a set, but that isn't the problem here). For example, $\{ 1, 2 \}$ and $\{ 3, 4 \}$ are particular sets in ZFC which are not the same set (because they don't contain the same elements; that's what it means for two sets in ZFC to be <em>equal</em>) even though they are in bijection with each other. This distinction can trip up the unwary if they aren't careful.</p>
<p>(My personal conviction is that you should never be allowed to ask the question of whether two bare sets are equal in the first place. It is basically never the question you actually want to ask.) </p>
|
linear-algebra | <p>I'm in the process of writing an application which identifies the closest matrix from a set of square matrices $M$ to a given square matrix $A$. The closest can be defined as the most similar.</p>
<p>I think finding the distance between two given matrices is a fair approach since the smallest Euclidean distance is used to identify the closeness of vectors. </p>
<p>I found that the distance between two matrices ($A,B$) could be calculated using the <a href="http://mathworld.wolfram.com/FrobeniusNorm.html">Frobenius distance</a> $F$:</p>
<p>$$F_{A,B} = \sqrt{trace((A-B)*(A-B)')} $$</p>
<p>where $B'$ represents the conjugate transpose of B.</p>
<p>I have the following points I need to clarify</p>
<ul>
<li>Is the distance between matrices a fair measure of similarity?</li>
<li>If distance is used, is Frobenius distance a fair measure for this problem? any other suggestions?</li>
</ul>
| <p>Some suggestions. Too long for a comment:</p>
<p>As I said, there are many ways to measure the "distance" between two matrices. If the matrices are $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$, then some examples are:
$$
d_1(\mathbf{A}, \mathbf{B}) = \sum_{i=1}^n \sum_{j=1}^n |a_{ij} - b_{ij}|
$$
$$
d_2(\mathbf{A}, \mathbf{B}) = \sqrt{\sum_{i=1}^n \sum_{j=1}^n (a_{ij} - b_{ij})^2}
$$
$$
d_\infty(\mathbf{A}, \mathbf{B}) = \max_{1 \le i \le n}\max_{1 \le j \le n} |a_{ij} - b_{ij}|
$$
$$
d_m(\mathbf{A}, \mathbf{B}) = \max\{ \|(\mathbf{A} - \mathbf{B})\mathbf{x}\| : \mathbf{x} \in \mathbb{R}^n, \|\mathbf{x}\| = 1 \}
$$
I'm sure there are many others. If you look up "matrix norms", you'll find lots of material. And if $\|\;\|$ is any matrix norm, then $\| \mathbf{A} - \mathbf{B}\|$ gives you a measure of the "distance" between two matrices $\mathbf{A}$ and $\mathbf{B}$.</p>
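<p>As a concrete illustration (a minimal Python sketch assuming numpy; the function names are mine, not standard library ones):</p>
<pre><code>import numpy as np

def d1(A, B):     # entrywise sum of absolute differences
    return np.abs(A - B).sum()

def d2(A, B):     # Frobenius distance
    return np.sqrt(((A - B) ** 2).sum())

def d_inf(A, B):  # largest entrywise difference
    return np.abs(A - B).max()

def d_m(A, B):    # operator norm distance: largest singular value of A - B
    return np.linalg.norm(A - B, 2)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.0], [4.0, 6.0]])
print(d1(A, B), d2(A, B), d_inf(A, B), d_m(A, B))  # 3.0 2.236... 2.0 2.236...
</code></pre>
<p>Here $d_2$ and $d_m$ happen to agree only because $\mathbf{A}-\mathbf{B}$ has rank one; in general they differ.</p>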
<p>Or, you could simply count the number of positions where $|a_{ij} - b_{ij}|$ is larger than some threshold number. This doesn't have all the nice properties of a distance derived from a norm, but it still might be suitable for your needs.</p>
<p>These distance measures all have somewhat different properties. For example, the third one shown above will tell you that two matrices are far apart even if all their entries are the same except for a large difference in one position.</p>
| <p>If we have two matrices $A,B$, the distance between $A$ and $B$ can be calculated using singular values or $2$-norms.</p>
<p>You may use Distance $= \vert\text{fnorm}(A)-\text{fnorm}(B)\vert$,
where $\text{fnorm}$ is the square root of the sum of the squares of all singular values (i.e., the Frobenius norm). </p>
|
matrices | <p><a href="http://www.agnesscott.edu/lriddle/women/todd.htm" rel="noreferrer">Olga Tausky-Todd</a> had once said that</p>
<blockquote>
<p><strong>"If an assertion about matrices is false, there is usually a 2x2 matrix that reveals this."</strong></p>
</blockquote>
<p>There are, however, assertions about matrices that are true for $2\times2$ matrices but not for the larger ones. I came across one nice little <a href="https://math.stackexchange.com/questions/577163/how-prove-this-matrix-inequality-detb0">example</a> yesterday. Actually, every student who has studied first-year linear algebra should know that there are even assertions that are true for $3\times3$ matrices, but false for larger ones --- <a href="http://en.wikipedia.org/wiki/Rule_of_Sarrus" rel="noreferrer">the rule of Sarrus</a> is one obvious example; a <a href="https://math.stackexchange.com/questions/254731/schurs-complement-of-a-matrix-with-no-zero-entries">question</a> I answered last year provides another.</p>
<p>So, here is my question. <em>What is your favourite assertion that is true for small matrices but not for larger ones?</em> Here, $1\times1$ matrices are ignored because they form special cases too easily (otherwise, Tausky-Todd would have not made the above comment). The assertions are preferrably simple enough to understand, but their disproofs for larger matrices can be advanced or difficult.</p>
| <p>Any two rotation matrices commute.</p>
| <p>I like this one: two matrices are similar (conjugate) if and only if they have the same minimal and characteristic polynomials and the same dimensions of eigenspaces corresponding to the same eigenvalue. This statement is true for all $n\times n$ matrices with $n\leq6$, but is false for $n\geq7$. </p>
|
linear-algebra | <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n \times n$</span> matrix and let <span class="math-container">$\Lambda$</span> be an <span class="math-container">$n \times n$</span> diagonal matrix. Is it always the case that <span class="math-container">$A\Lambda = \Lambda A$</span>? If not, when is it the case that <span class="math-container">$A \Lambda = \Lambda A$</span>?</p>
<p>If we restrict the diagonal entries of <span class="math-container">$\Lambda$</span> to being equal (i.e. <span class="math-container">$\Lambda = \text{diag}(a, a, \dots, a)$</span>), then it is clear that <span class="math-container">$A\Lambda = AaI = aIA = \Lambda A$</span>. However, I can't seem to come up with an argument for the general case.</p>
| <p>A diagonal matrix $\Lambda$ commutes with a matrix $A$ whenever $A$ is symmetric and $A \Lambda$ is also symmetric. We have</p>
<p>$$
\Lambda A = (A^{\top}\Lambda^\top)^{\top} = (A\Lambda)^\top = A\Lambda
$$</p>
<p>The above trivially holds when $A$ and $\Lambda$ are both diagonal.</p>
| <p>A diagonal matrix will not commute with every matrix.</p>
<p>$$
\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}*\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$</p>
<p>But:</p>
<p>$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} * \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}.$$</p>
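<p>More generally, comparing entries shows $(A\Lambda)_{ij}=a_{ij}\lambda_j$ and $(\Lambda A)_{ij}=\lambda_i a_{ij}$, so $A\Lambda=\Lambda A$ exactly when $a_{ij}(\lambda_i-\lambda_j)=0$ for all $i,j$. A small Python sketch (assuming numpy) checking this on the example above:</p>
<pre><code>import numpy as np

lam = np.array([1.0, 2.0])
L = np.diag(lam)
A = np.array([[0.0, 1.0], [0.0, 0.0]])

print(np.allclose(A @ L, L @ A))   # False, as in the example above

# entrywise criterion: A and L commute iff a_ij * (lam_i - lam_j) = 0 for all i, j
gap = lam[:, None] - lam[None, :]  # gap[i, j] = lam_i - lam_j
print(np.allclose(A * gap, 0))     # False, for the same reason: a_01 != 0 and lam_0 != lam_1
</code></pre>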
|
matrices | <p>In the <a href="https://archive.org/details/springer_10.1007-978-1-4684-9446-4" rel="noreferrer">book</a> of <em>Linear Algebra</em> by Werner Greub, whenever we choose a field for our vector spaces, we always choose an arbitrary field $F$ of <strong>characteristic zero</strong>, but to understand the importance of this property, I am wondering what we would lose if the field weren't of characteristic zero.</p>
<p>I mean, right now I'm in the middle of Chapter 4, and up to now we have used the fact that the field is characteristic zero only once, in a single proof; so in terms of the main theorems and properties, if the field weren't of characteristic zero, what would we lose?</p>
<p>Note, I'm asking this particular question to understand the importance and the place of this fact in the subject, so if you have any other idea to convey this, I'm also OK with that.</p>
<p>Note: Since this is a broad question, it is unlikely that one person will cover all the cases, so I will not accept any answer, so that you can always post new answers.</p>
| <p>The equivalence between symmetric bilinear forms and quadratic forms given by the <a href="https://en.wikipedia.org/wiki/Polarization_identity" rel="noreferrer">polarization identity</a> breaks down in characteristic $2$.</p>
| <p>Many arguments using the trace of a matrix will no longer be true in general. For example, a matrix $A\in M_n(K)$ over a field of characteristic zero is <em>nilpotent</em>, i.e., satisfies $A^n=0$, if and only if $\operatorname{tr}(A^k)=0$ for all $1\le k\le n$. For fields of prime characteristic $p$ with $p\mid n$ however, this fails. For example, the identity matrix $A=I_n$ then satisfies $\operatorname{tr}(A^k)=0$ for all $1\le k\le n$, but is not nilpotent. <br>
The pathology of linear algebra over fields of characteristic $2$ has been discussed already <a href="https://math.stackexchange.com/questions/250411/why-are-fields-with-characteristic-2-so-pathological">here</a>.</p>
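<p>As a tiny illustration of the trace criterion failing (a Python sketch of my own, with the matrix viewed over the field with $3$ elements by reducing mod $3$):</p>
<pre><code>import numpy as np

p = n = 3
A = np.eye(n, dtype=int)  # identity matrix, viewed over the field with p = 3 elements

traces = [int(np.trace(np.linalg.matrix_power(A, k))) % p for k in range(1, n + 1)]
print(traces)             # [0, 0, 0] -- every power-trace vanishes mod 3...

print((np.linalg.matrix_power(A, n) % p).any())  # True: ...yet A^n != 0, so A is not nilpotent
</code></pre>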
|
linear-algebra | <p>What is the importance of eigenvalues/eigenvectors? </p>
| <h3>Short Answer</h3>
<p><em>Eigenvectors make understanding linear transformations easy</em>. They are the "axes" (directions) along which a linear transformation acts simply by "stretching/compressing" and/or "flipping"; eigenvalues give you the factors by which this compression occurs. </p>
<p>The more directions you have along which you understand the behavior of a linear transformation, the easier it is to understand the linear transformation; so you want to have as many linearly independent eigenvectors as possible associated to a single linear transformation.</p>
<hr>
<h3>Slightly Longer Answer</h3>
<p>There are a lot of problems that can be modeled with linear transformations, and the eigenvectors give very simple solutions. For example, consider the system of linear differential equations
\begin{align*}
\frac{dx}{dt} &= ax + by\\\
\frac{dy}{dt} &= cx + dy.
\end{align*}
This kind of system arises when you describe, for example, the growth of population of two species that affect one another. For example, you might have that species $x$ is a predator on species $y$; the more $x$ you have, the fewer $y$ will be around to reproduce; but the fewer $y$ that are around, the less food there is for $x$, so fewer $x$s will reproduce; but then fewer $x$s are around so that takes pressure off $y$, which increases; but then there is more food for $x$, so $x$ increases; and so on and so forth. It also arises when you have certain physical phenomena, such as a particle in a moving fluid, where the velocity vector depends on the position along the fluid.</p>
<p>Solving this system directly is complicated. But suppose that you could do a change of variable so that instead of working with $x$ and $y$, you could work with $z$ and $w$ (which depend linearly on $x$ and also $y$; that is, $z=\alpha x+\beta y$ for some constants $\alpha$ and $\beta$, and $w=\gamma x + \delta y$, for some constants $\gamma$ and $\delta$) and the system transformed into something like
\begin{align*}
\frac{dz}{dt} &= \kappa z\\\
\frac{dw}{dt} &= \lambda w
\end{align*}
that is, you can "decouple" the system, so that now you are dealing with two <em>independent</em> functions. Then solving this problem becomes rather easy: $z=Ae^{\kappa t}$, and $w=Be^{\lambda t}$. Then you can use the formulas for $z$ and $w$ to find expressions for $x$ and $y$..</p>
<p>Can this be done? Well, it amounts <em>precisely</em> to finding two linearly independent eigenvectors for the matrix $\left(\begin{array}{cc}a & b\\c & d\end{array}\right)$! $z$ and $w$ correspond to the eigenvectors, and $\kappa$ and $\lambda$ to the eigenvalues. By taking an expression that "mixes" $x$ and $y$, and "decoupling it" into one that acts independently on two different functions, the problem becomes a lot easier. </p>
<p>That is the essence of what one hopes to do with the eigenvectors and eigenvalues: "decouple" the ways in which the linear transformation acts into a number of independent actions along separate "directions", that can be dealt with independently. A lot of problems come down to figuring out these "lines of independent action", and understanding them can really help you figure out what the matrix/linear transformation is "really" doing. </p>
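<p>To make the decoupling concrete, here is a short Python sketch (assuming numpy and scipy); the particular coefficients are an illustrative choice of mine, not part of the discussion above:</p>
<pre><code>import numpy as np
from scipy.linalg import expm

M = np.array([[0.0, 1.0], [-2.0, -3.0]])  # one choice of (a, b; c, d); eigenvalues -1 and -2
evals, V = np.linalg.eig(M)               # columns of V are the eigenvectors
x0 = np.array([1.0, 0.0])                 # initial values (x(0), y(0))

def solution(t):
    # decouple: each eigen-coordinate evolves independently as z0 * exp(lambda * t)
    z0 = np.linalg.solve(V, x0)           # coordinates of x0 in the eigenbasis
    return V @ (z0 * np.exp(evals * t))

t = 0.5
print(solution(t))                        # eigen-decoupled solution ...
print(expm(M * t) @ x0)                   # ... agrees with the matrix exponential
</code></pre>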
| <h3>A short explanation</h3>
<p>Consider a matrix <span class="math-container">$A$</span>, for example one representing a physical transformation (e.g. a rotation). When this matrix is used to transform a given vector <span class="math-container">$x$</span> the result is <span class="math-container">$y = A x$</span>.</p>
<p>Now an interesting question is</p>
<blockquote>
<p>Are there any vectors <span class="math-container">$x$</span> which do not change their direction under this transformation, but allow the vector magnitude to vary by scalar <span class="math-container">$ \lambda $</span>?</p>
</blockquote>
<p>Such a question is of the form <span class="math-container">$$A x = \lambda x $$</span></p>
<p>So, such special <span class="math-container">$x$</span> are called <em>eigenvector</em>(s) and the change in magnitude depends on the <em>eigenvalue</em> <span class="math-container">$ \lambda $</span>.</p>
|
linear-algebra | <p>Can <span class="math-container">$\det(A + B)$</span> be expressed in terms of <span class="math-container">$\det(A), \det(B), n$</span>
where <span class="math-container">$A,B$</span> are <span class="math-container">$n\times n$</span> matrices?</p>
<p>I made the edit to allow <span class="math-container">$n$</span> to be factored in.</p>
| <p>When <span class="math-container">$n=2$</span>, and suppose <span class="math-container">$A$</span> has inverse, you can easily show that</p>
<p><span class="math-container">$\det(A+B)=\det A+\det B+\det A\,\cdot \mathrm{Tr}(A^{-1}B)$</span>.</p>
<hr />
<p>Let me give a general method to find the determinant of the sum of two matrices <span class="math-container">$A,B$</span> with <span class="math-container">$A$</span> invertible and symmetric (The following result might also apply to the non-symmetric case. I might verify that later...).
I am a physicist, so I will use the index notation, <span class="math-container">$A_{ij}$</span> and <span class="math-container">$B_{ij}$</span>, with <span class="math-container">$i,j=1,2,\cdots,n$</span>.
Let <span class="math-container">$A^{ij}$</span> donate the inverse of <span class="math-container">$A_{ij}$</span> such that <span class="math-container">$A^{il}A_{lj}=\delta^i_j=A_{jl}A^{li}$</span>.
We can use <span class="math-container">$A_{ij}$</span> to lower the indices, and its inverse to raise.
For example <span class="math-container">$A^{il}B_{lj}=B^i{}_j$</span>.
Here and in the following, the Einstein summation rule is assumed.</p>
<p>Let <span class="math-container">$\epsilon^{i_1\cdots i_n}$</span> be the totally antisymmetric tensor, with <span class="math-container">$\epsilon^{1\cdots n}=1$</span>.
Define a new tensor <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}=\epsilon^{i_1\cdots i_n}/\sqrt{|\det A|}$</span>.
We can use <span class="math-container">$A_{ij}$</span> to lower the indices of <span class="math-container">$\tilde\epsilon^{i_1\cdots i_n}$</span>, and define
<span class="math-container">$\tilde\epsilon_{i_1\cdots i_n}=A_{i_1j_1}\cdots A_{i_nj_n}\tilde\epsilon^{j_1\cdots j_n}$</span>.
Then there is a useful property:
<span class="math-container">$$
\tilde\epsilon_{i_1\cdots i_kl_{k+1}\cdots l_n}\tilde\epsilon^{j_1\cdots j_kl_{k+1}\cdots l_n}=(-1)^sk!(n-k)!\delta^{[j_1}_{i_1}\cdots\delta^{j_k]}_{i_k},
$$</span>
where the square brackets <span class="math-container">$[]$</span> imply the antisymmetrization of the indices enclosed by them.
<span class="math-container">$s$</span> is the number of negative elements of <span class="math-container">$A_{ij}$</span> after it has been diagonalized.</p>
<p>So now the determinant of <span class="math-container">$A+B$</span> can be obtained in the following way
<span class="math-container">$$
\det(A+B)=\frac{1}{n!}\epsilon^{i_1\cdots i_n}\epsilon^{j_1\cdots j_n}(A+B)_{i_1j_1}\cdots(A+B)_{i_nj_n}
$$</span>
<span class="math-container">$$
=\frac{(-1)^s\det A}{n!}\tilde\epsilon^{i_1\cdots i_n}\tilde\epsilon^{j_1\cdots j_n}\sum_{k=0}^n C_n^kA_{i_1j_1}\cdots A_{i_kj_k}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n}
$$</span>
<span class="math-container">$$
=\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon^{j_1\cdots j_k}{}_{i_{k+1}\cdots i_n}B_{i_{k+1}j_{k+1}}\cdots B_{i_nj_n}
$$</span>
<span class="math-container">$$
=\frac{(-1)^s\det A}{n!}\sum_{k=0}^nC_n^k\tilde\epsilon^{i_1\cdots i_ki_{k+1}\cdots i_n}\tilde\epsilon_{j_1\cdots j_ki_{k+1}\cdots i_n}B_{i_{k+1}}{}^{j_{k+1}}\cdots B_{i_n}{}^{j_n}
$$</span>
<span class="math-container">$$
=\frac{\det A}{n!}\sum_{k=0}^nC_n^kk!(n-k)!B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]}
$$</span>
<span class="math-container">$$
=\det A\sum_{k=0}^nB_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]}
$$</span>
<span class="math-container">$$
=\det A+\det A\sum_{k=1}^{n-1}B_{i_{k+1}}{}^{[i_{k+1}}\cdots B_{i_n}{}^{i_n]}+\det B.
$$</span></p>
<p>This reproduces the result for <span class="math-container">$n=2$</span>.
An interesting result for physicists is when <span class="math-container">$n=4$</span>,</p>
<p><span class="math-container">\begin{split}
\det(A+B)=&\det A+\det A\cdot\text{Tr}(A^{-1}B)+\frac{\det A}{2}\{[\text{Tr}(A^{-1}B)]^2-\text{Tr}(BA^{-1}BA^{-1})\}\\
&+\frac{\det A}{6}\{[\text{Tr}(BA^{-1})]^3-3\text{Tr}(BA^{-1})\text{Tr}(BA^{-1}BA^{-1})+2\text{Tr}(BA^{-1}BA^{-1}BA^{-1})\}\\
&+\det B.
\end{split}</span></p>
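<p>The <span class="math-container">$n=2$</span> identity above is easy to spot-check numerically (a hedged sketch assuming numpy; the <span class="math-container">$n=4$</span> formula can be tested the same way):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))  # generic, hence invertible with probability 1
B = rng.standard_normal((2, 2))

lhs = np.linalg.det(A + B)
rhs = (np.linalg.det(A) + np.linalg.det(B)
       + np.linalg.det(A) * np.trace(np.linalg.inv(A) @ B))
print(np.isclose(lhs, rhs))      # True
</code></pre>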
| <p>When $n\ge2$, the answer is no. To illustrate, consider
$$
A=I_n,\quad B_1=\pmatrix{1&1\\ 0&0}\oplus0,\quad B_2=\pmatrix{1&1\\ 1&1}\oplus0.
$$
If $\det(A+B)=f\left(\det(A),\det(B),n\right)$ for some function $f$, you should get $\det(A+B_1)=f(1,0,n)=\det(A+B_2)$. But in fact, $\det(A+B_1)=2\ne3=\det(A+B_2)$ over any field.</p>
|
probability | <p>Do men or women have more brothers?</p>
<p>I think women have more, as no man can be his own brother. But how can one prove it rigorously?</p>
<hr>
<p>I am going to suggest some reasonable background assumptions:</p>
<ol>
<li>There are a large number of individuals, of whom half are men and half are women.</li>
<li>The individuals are partitioned into nonempty families.</li>
<li>The distribution of the sizes of the families is deliberately not specified.</li>
<li>However, in each family, the sex of each member is independent of the sexes of the other members.</li>
</ol>
<p>I believe these assumptions are roughly correct for the world we actually live in.</p>
<p>Even in the absence of any information about point 3, what can one say about relative expectation of the random variables “Number of brothers of individual $I$, given that $I$ is female” and “Number of brothers of individual $I$, given that $I$ is male”?</p>
<p>And how can one directly refute the argument that claims that the second expectation should almost certainly be smaller than the first, based on the observation that in any single family, say with two girls and one boy, the girls have at least as many brothers as do the boys, and usually more.</p>
| <p>So many long answers! But really it's quite simple. </p>
<ul>
<li>Mathematically, the expected number of brothers is the same for men and women.</li>
<li>In real life, we can expect men to have slightly <em>more</em> brothers than women. </li>
</ul>
<p><strong>Mathematically:</strong></p>
<p>Assume, as the question puts it, that "in each family, the sex of each member is independent of the sexes of the other members". This is all we assume: we don't get to pick a particular set of families. (This is essential: If we were to choose the collection of families we consider, we can find collections where the men have more brothers, collections where the women have more brothers, or where the numbers are equal: we can get the answer to come out any way at all.) </p>
<p>I'll write $p$ for the gender ratio, i.e. the proportion of all people who are men. In real life $p$ is close to 0.5, but this doesn't make any difference. In any random set of $n$ persons, the expected (average) number of men is $n\cdot p$.</p>
<ol>
<li>Take an arbitrary child $x$, and let $n$ be the number of children in $x$'s family. </li>
<li>Let $S(x)$ be the set of $x$'s siblings. Note that there are <em>no</em> gender-related restrictions on $S(x)$: It's just the set of children other than $x$.</li>
<li>Obviously, <strong>the expected number of $x$'s brothers is the expected number of men in $S(x)$.</strong> </li>
<li>So what is the expected number of men in this set? Since $x$ has $n-1$ siblings, it's just $(n-1)\cdot p$, or approximately $(n-1)\div 2$, regardless of $x$'s gender. That's all there is to it.</li>
</ol>
<p>Note that the gender of $x$ didn't figure in this calculation at all. If we were to choose an arbitrary boy or an arbitrary girl in step 1, the calculation would be exactly the same, since $S(x)$ is not dependent on $x$'s gender.</p>
<p><strong>In real life:</strong> </p>
<p>In reality, the gender distribution of children does depend on the parents a little bit (for biological reasons that are beyond the scope of math.se). I.e., the distribution of genders in families <a href="http://www.genetics.org/content/genetics/15/5/445.full.pdf" rel="noreferrer">is not completely random.</a> Suppose some couples cannot have boys, some might be unable to have girls, etc. In such a case, being male is evidence that your parents <em>can</em> have a boy, which (very) slightly raises the odds that you can have a brother. </p>
<p>In other words: <strong>If the likelihood of having boys does depend on the family, men on average have <em>more</em> brothers, not fewer.</strong> (I am expressly putting aside the "family planning" scenario where people choose to have more children depending on the gender of the ones they have. If you allow this, <strong>anything could happen.</strong>)</p>
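<p>Both bullet points can be checked by simulation. Here is a minimal Monte Carlo sketch under assumptions 1-4 of the question (Python; the family-size distribution is an arbitrary choice of mine):</p>
<pre><code>import random

random.seed(1)
bro_m = men = bro_w = women = 0
for _ in range(200_000):
    size = random.choice([1, 2, 3, 4, 5])                 # arbitrary family-size distribution
    sexes = [random.random() < 0.5 for _ in range(size)]  # True = male, independent per child
    boys = sum(sexes)
    for is_male in sexes:
        if is_male:
            men += 1; bro_m += boys - 1                   # a man is not his own brother
        else:
            women += 1; bro_w += boys
print(bro_m / men, bro_w / women)  # the two averages agree (both near 1.33 here)
</code></pre>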
| <p><strong>Edit, 5/24/16:</strong> After some thought I don't particularly like this answer anymore; please take a look at my second answer below instead. </p>
<hr>
<p>Here's a simple version of the question. Suppose there is exactly one family which has $n$ children, of which $k$ are male with some probability $p_k$. When this happens, the men each have $k-1$ brothers, while the women have $k$ brothers. So it would seem that no matter what the probabilities $p_k$ are, the women will always have more brothers on average. </p>
<p>However, this is not true, and the reason is that sometimes we might have $k = 0$ (no males) or $k = n$ (no females). In the first case the women have no brothers and the men don't exist, and in the second case the men have $n-1$ brothers and the women don't exist. In these cases it's unclear whether the question even makes sense.</p>
<hr>
<p>Another simple version of the question, which avoids the previous problem and which I think is more realistic, is to suppose that there are two families with a total of $2n$ children between them, $n$ of which are male and $n$ of which are female, but now the children are split between the families in some random way. If there are $m$ male children in the first family and $f$ female children, then the average number of brothers a man has is</p>
<p>$$\frac{m(m-1) + (n-m)(n-m-1)}{n}$$</p>
<p>while the average number of brothers a woman has is</p>
<p>$$\frac{mf + (n-m)(n-f)}{n}.$$</p>
<p>The first quantity is big when $m$ is either big or small (in other words, when the distribution of male children is lopsided between the two families) while the second quantity is big when $m$ and $f$ are either both big or both small (in other words, when the distribution of male and female children are similar in the two families). If we suppose that "big" and "small" are disjoint and both occur with some probability $p \le \frac{1}{2}$ (say $p = \frac{1}{3}$ to be concrete), then the first case occurs with probability $2p$ (say $2 \frac{1}{3} = \frac{2}{3}$) while the second case occurs with probability $2p^2$ (say $2 \frac{1}{9} = \frac{2}{9}$). So heuristically, in this version of the question:</p>
<blockquote>
<p>If it's easy for there to be many or few men in a family, men could have more brothers than women because it's easier for men to correlate with themselves than for women to correlate with men.</p>
</blockquote>
<p>But you don't have to take my word for it: we can actually do the computation. Let me write $M$ for the random variable describing the number of men in the first family and $F$ for the random variable describing the number of women in the first family, and let's assume that they are 1) independent and 2) symmetric about $\frac{n}{2}$, so that in particular</p>
<p>$$\mathbb{E}(M) = \mathbb{E}(F) = \frac{n}{2}.$$ </p>
<p>$M$ and $F$ are independent, so</p>
<p>$$\mathbb{E}(MF) = \mathbb{E}(M) \mathbb{E}(F) = \frac{n^2}{4}.$$</p>
<p>and similarly for $n-M$ and $n-F$. This is already enough to compute the expected number of brothers a woman has, which is (because $MF$ and $(n-M)(n-F)$ have the same distribution by assumption)</p>
<p>$$\frac{2}{n} \left( \mathbb{E}(MF) \right) = \frac{n}{2}.$$</p>
<p>In other words, the expected number of brothers a woman has is precisely the expected number of men in one family. This also follows from linearity of expectation.</p>
<p>Next we'll compute the expected number of brothers a man has. This is (again because $M(M-1)$ and $(n-M)(n-M-1)$ have the same distribution by assumption)</p>
<p>$$\frac{2}{n} \left( \mathbb{E}(M(M-1)) \right) = \frac{2}{n} \left( \mathbb{E}(M^2) - \frac{n}{2} \right) = \frac{2}{n} \left( \text{Var}(M) + \frac{n^2}{4} - \frac{n}{2} \right) = \frac{n}{2} - 1 + \frac{2 \text{Var}(M)}{n}$$</p>
<p>where we used $\text{Var}(M) = \mathbb{E}(M^2) - \mathbb{E}(M)^2$. As in Donkey_2009's answer, this computation reveals that the answer depends delicately on the variance of the number of men in one family (although be careful comparing these two answers: in Donkey_2009's answer he's choosing a random family to inspect while I'm choosing a random distribution of males and females among two families). More precisely,</p>
<blockquote>
<p>Men have more brothers than women on average if and only if $\text{Var}(M)$ is strictly larger than $\frac{n}{2}$.</p>
</blockquote>
<p>For example, if the men are distributed by independent coin flips, then we can compute that $\text{Var}(M) = \frac{n}{4}$, so in fact in this case women have more brothers than men (and this doesn't depend on the distribution of $F$ at all, as long as it's independent of $M$). Here the heuristic argument about bigness and smallness doesn't apply because the probability of $M$ deviating from its mean is quite small. </p>
<p>But if, for example, $m$ is instead chosen uniformly at random among the possible values $0, 1, 2, \dots n$, then $\mathbb{E}(M^2) = \frac{n(2n+1)}{6}$, so $\text{Var}(M) = \frac{n(2n+1)}{6} - \frac{n^2}{4} = \frac{n^2}{12} + \frac{n}{6}$, which is quite a bit larger than in the previous case, and this gives about $\frac{2n}{3}$ expected brothers for men. </p>
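<p>A quick simulation of this two-family model (a Python sketch of mine; $M$ and $F$ are drawn independently, as assumed) reproduces both regimes:</p>
<pre><code>import random

def avg_brothers(n, draw, trials=100_000):
    tot_men = tot_women = 0.0
    for _ in range(trials):
        m, f = draw(n), draw(n)  # males / females landing in the first family
        tot_men += (m * (m - 1) + (n - m) * (n - m - 1)) / n
        tot_women += (m * f + (n - m) * (n - f)) / n
    return tot_men / trials, tot_women / trials

random.seed(2)
n = 10
coin = lambda n: sum(random.random() < 0.5 for _ in range(n))  # Var(M) = n/4
unif = lambda n: random.randint(0, n)                          # Var(M) = n^2/12 + n/6

print(avg_brothers(n, coin))  # ~ (4.5, 5.0): Var(M) below n/2, women have more brothers
print(avg_brothers(n, unif))  # ~ (6.0, 5.0): Var(M) above n/2, men have more brothers
</code></pre>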
<p>One quibble you might have with the above model is that you might not think it's reasonable for $M$ and $F$ to be independent. On the one hand, some families just like having lots of children, so you might expect $M$ and $F$ to be correlated. On the other hand, some families don't like having lots of children, so you might expect $M$ and $F$ to be anticorrelated. Without the independence assumption the computation for women acquires an extra term, namely $\frac{2 \text{Cov}(M, F)}{n}$ (as in Donkey_2009's answer), and now the answer also depends on how large this is relative to $\text{Var}(M)$. </p>
<p>Note that the argument in the OP that "no man can be his own brother" (basically, the $-1$ in $m(m-1)$) ought to imply, if it worked, that the difference between expected number of brothers for men and women is exactly $1$: this happens iff we are allowed to write $\mathbb{E}(M(M-1)) = \mathbb{E}(M) \mathbb{E}(M-1)$ iff $M$ is independent of itself iff it's constant iff $\text{Var}(M) = 0$. </p>
<hr>
<p><strong>Edit:</strong> Perhaps the biggest objection you might have to the model above is that a given person's gender is not independent of the gender of their siblings; that is, as Greg Martin points out in the comments below, requirement 4 in the OP is not satisfied. This is easiest to see in the extreme case that $n = 1$: in that case we're only distributing one male and one female child, and so any siblings you have must have opposite gender from you. In general the fact that the number of male and female children is fixed here means that your siblings are slightly more likely to be a different gender from you. </p>
<p>A more realistic model would be to both distribute the children randomly and to assign their genders randomly. Beyond that we should think more about how to model family sizes. </p>
|
differentiation | <p>L'Hôpital's rule can be stated as follows:</p>
<blockquote>
<p>Let <span class="math-container">$f, g$</span> be differentiable real functions defined on a deleted one-sided neighbourhood<span class="math-container">$^{(1)}$</span> of <span class="math-container">$a$</span>, where <span class="math-container">$a$</span> can be any real number or <span class="math-container">$\pm \infty$</span>. Suppose that both <span class="math-container">$f,g$</span> converge to <span class="math-container">$0$</span> or that both <span class="math-container">$f,g$</span> converge to <span class="math-container">$+\infty$</span> as <span class="math-container">$x \to a^{\pm}$</span> (<span class="math-container">$\pm$</span> depending on the side of the deleted neighbourhood). If
<span class="math-container">$$\frac{f'(x)}{g'(x)} \to L,$$</span>
then
<span class="math-container">$$\frac{f(x)}{g(x)} \to L,$$</span>
where <span class="math-container">$L$</span> can be any real number or <span class="math-container">$\pm \infty$</span>. </p>
</blockquote>
<p>This is an ubiquitous tool for computations of limits, and some books avoid proving it or just prove it in some special cases. Since we don't seem to have a consistent reference for its statement and proof in MathSE and it is a theorem which is often misapplied (see <a href="https://math.stackexchange.com/questions/1710786/">here</a> for an example), it seems valuable to have a question which could serve as such a reference. This is an attempt at that.</p>
<p><span class="math-container">$^{(1)}$</span>E.g., if <span class="math-container">$a=1$</span>, then <span class="math-container">$(1,3)$</span> is such a neighbourhood. </p>
| <p>For the sake of avoiding clutter, we suppose without loss of generality that <span class="math-container">$x \to a^+$</span> and establish the notation that a "neighbourhood" refers to a deleted one-sided neighbourhood.</p>
<p>Pick <span class="math-container">$\epsilon>0$</span>. By hypothesis, there exists a neighbourhood <span class="math-container">$U$</span> of <span class="math-container">$a$</span> such that <span class="math-container">$g'(x) \neq 0$</span> for every <span class="math-container">$x \in U$</span> and
<span class="math-container">$$L-\epsilon<\frac{f'(x)}{g'(x)}<L+\epsilon \tag{1}$$</span>
for every <span class="math-container">$x \in U$</span>. By the <a href="http://mathworld.wolfram.com/CauchysMean-ValueTheorem.html" rel="nofollow noreferrer">Cauchy mean value theorem</a>, it follows that for every <span class="math-container">$\alpha, \beta \in U$</span> with <span class="math-container">$\beta>\alpha$</span>,
<span class="math-container">$$L-\epsilon<\frac{f(\beta)-f(\alpha)}{g(\beta)-g(\alpha)}<L+\epsilon. \tag{2}$$</span></p>
<p>We divide the proof now in order to address our two main cases. </p>
<p><strong><em>Case 1:</strong> <span class="math-container">$f,g \to 0$</span> as <span class="math-container">$x \to a^+$</span>.</em></p>
<p>By letting <span class="math-container">$\alpha \to a$</span> in <span class="math-container">$(2)$</span>, we have that
<span class="math-container">$$L-\epsilon \leq \frac{f(\beta)}{g(\beta)}\leq L+\epsilon.$$</span>
Since this holds for every <span class="math-container">$\beta \in U$</span>, we have the result in this case. </p>
<p>OBS: Note that <span class="math-container">$g(\beta)\neq 0$</span> due to the mean value theorem applied to <span class="math-container">$g$</span> extended continuously to <span class="math-container">$a$</span>. In fact, one could also apply the Cauchy mean value theorem to the extensions of <span class="math-container">$f$</span> and <span class="math-container">$g$</span> in order to prove this case by considering <span class="math-container">$\frac{f(\beta)-f(a)}{g(\beta)-g(a)}$</span> directly. Of course, this would not work in the case where <span class="math-container">$f,g \to +\infty$</span>.</p>
<p><strong><em>Case 2:</strong> <span class="math-container">$f,g \to +\infty$</span> as <span class="math-container">$x \to a^+$</span>.</em></p>
<p>We can rewrite <span class="math-container">$(2)$</span> as
<span class="math-container">$$L-\epsilon<\frac{f(\beta)}{g(\beta)-g(\alpha)} + \frac{g(\alpha)}{g(\alpha)-g(\beta)} \cdot\frac{f(\alpha)}{g(\alpha)}<L+\epsilon.$$</span>
Taking the <span class="math-container">$\limsup$</span> and <span class="math-container">$\liminf$</span> as <span class="math-container">$\alpha \to a$</span> together with the fact that <span class="math-container">$g \to +\infty$</span> yields
<span class="math-container">$$L-\epsilon \leq \liminf_{\alpha \to a}\frac{f(\alpha)}{g(\alpha)} \leq \limsup_{\alpha \to a}\frac{f(\alpha)}{g(\alpha)} \leq L+\epsilon.$$</span>
Since this holds for every <span class="math-container">$\epsilon>0$</span>, we have that <span class="math-container">$\liminf_{\alpha \to a}\frac{f(\alpha)}{g(\alpha)} = \limsup_{\alpha \to a}\frac{f(\alpha)}{g(\alpha)}=L$</span> and the result follows.</p>
<hr>
<p>Some observations:</p>
<ul>
<li>It should also be clear that if <span class="math-container">$L = +\infty$</span> (resp. <span class="math-container">$-\infty$</span>), then these proofs can be easily adapted by changing "pick <span class="math-container">$\epsilon>0$</span>" to "pick <span class="math-container">$K \in \mathbb{R}$</span>" and changing inequality <span class="math-container">$(1)$</span> to <span class="math-container">$\frac{f'(x)}{g'(x)} >K$</span> (resp. <span class="math-container">$\frac{f'(x)}{g'(x)} < K$</span>), while also making the obvious following changes.</li>
<li>As a mild curiosity (which is not so deep after some inspection), note that in the case of <span class="math-container">$f,g \to +\infty$</span>, the assumption that <span class="math-container">$f \to +\infty$</span> is actually not necessary. It suffices to assume that <span class="math-container">$g \to +\infty$</span>. But stating the theorem without assuming <span class="math-container">$f \to +\infty$</span> may be confusing to students that see this much more frequently in the context of the so-called "indeterminate forms".</li>
<li><p>The passage involving the <span class="math-container">$\limsup$</span> and <span class="math-container">$\liminf$</span> may be somewhat obscure. First of all, we are adopting the following definitions:
<span class="math-container">$$\limsup_{x \to a} = \inf_{\substack{\text{$U$ del.} \\ \text{nbhd. of $x$}}} \sup_{x \in U} f(x), \quad \liminf_{x \to a} = \sup_{\substack{\text{$U$ del.} \\ \text{nbhd. of $x$}}} \inf_{x \in U} f(x).$$</span>
We could also solve that part sequentially by taking <span class="math-container">$x_n \to a$</span> and using the <a href="http://mathworld.wolfram.com/SupremumLimit.html" rel="nofollow noreferrer"><span class="math-container">$\limsup$</span></a> and <a href="http://mathworld.wolfram.com/InfimumLimit.html" rel="nofollow noreferrer"><span class="math-container">$\liminf$</span></a> of sequences, establishing that the limit is the same for every sequence <span class="math-container">$x_n \to a$</span>. It is a matter of preference.</p>
<p>Then, we are using the following facts:</p>
<p>1) If <span class="math-container">$\lim_{x \to a} h(x) = M$</span>, then
<span class="math-container">$$\limsup (h(x)+j(x)) = M +\limsup j(x) $$</span>
and
<span class="math-container">$$\liminf (h(x)+j(x)) = M +\liminf j(x) .$$</span>
2) If <span class="math-container">$\lim_{x \to a} h(x) = c >0$</span>, then
<span class="math-container">$$\limsup (h(x) j(x)) = c \cdot\limsup j(x)$$</span>
and
<span class="math-container">$$\liminf (h(x) j(x)) = c \cdot\liminf j(x).$$</span>
3) If <span class="math-container">$h(x) \leq j(x)$</span>, then
<span class="math-container">$$\limsup h(x) \leq \limsup j(x) $$</span>
and
<span class="math-container">$$\liminf h(x) \leq \liminf j(x) .$$</span>
4) <span class="math-container">$\liminf h(x) \leq \limsup h(x)$</span> and, if both coincide, then <span class="math-container">$\lim h(x)$</span> exists and is equal to both.</p></li>
</ul>
| <p>By definition, <span class="math-container">$f'(a) = \lim_\limits{x\to a} \frac {f(x) - f(a)}{x-a}$</span></p>
<p>if <span class="math-container">$f'(a), g'(a)$</span> exist and <span class="math-container">$g'(a) \ne 0$</span></p>
<p><span class="math-container">$\frac {f'(a)}{g'(a)} = \lim_\limits{x\to a} \dfrac {\frac {f(x) - f(a)}{x-a}}{\frac {g(x) - g(a)}{x-a}} = \lim_\limits{x\to a}\frac {f(x) - f(a)}{g(x) - g(a)}$</span></p>
<p>if <span class="math-container">$f(a), g(a) = 0$</span></p>
<p><span class="math-container">$\frac {f'(a)}{g'(a)} = \lim_\limits{x\to a}\frac {f(x) }{g(x)}$</span></p>
|
probability | <p>I gave the following problem to students:</p>
<blockquote>
<p>Two $n\times n$ matrices $A$ and $B$ are <em>similar</em> if there exists a nonsingular matrix $P$ such that $A=P^{-1}BP$.</p>
<ol>
<li><p>Prove that if $A$ and $B$ are two similar $n\times n$ matrices, then they have the same determinant and the same trace.</p></li>
<li><p>Give an example of two $2\times 2$ matrices $A$ and $B$ with same determinant, same trace but that are not similar.</p></li>
</ol>
</blockquote>
<p>Most of the ~20 students got the first question right. However, almost none of them found a correct example to the second question. Most of them gave examples of matrices that have same determinant and same trace. </p>
<p>But computations show that their examples are similar matrices. They didn't bother to check that though, so they just tried <em>random</em> matrices with same trace and same determinant, hoping it would be a correct example.</p>
<p><strong>Question</strong>: how to explain that none of the random trials gave non-similar matrices?</p>
<p>Any answer based on density or measure theory is fine. In particular, you can assume any reasonable distribution on the entries of the matrix. If it matters, the course is about matrices with real coefficients, but you can assume integer coefficients, since when choosing numbers <em>at random</em>, most people will choose integers.</p>
| <p>If <span class="math-container">$A$</span> is a <span class="math-container">$2\times 2$</span> matrix with determinant <span class="math-container">$d$</span> and trace <span class="math-container">$t$</span>, then the characteristic polynomial of <span class="math-container">$A$</span> is <span class="math-container">$x^2-tx+d$</span>. If this polynomial has distinct roots (over <span class="math-container">$\mathbb{C}$</span>), then <span class="math-container">$A$</span> has distinct eigenvalues and hence is diagonalizable (over <span class="math-container">$\mathbb{C}$</span>). In particular, if <span class="math-container">$d$</span> and <span class="math-container">$t$</span> are such that the characteristic polynomial has distinct roots, then any other <span class="math-container">$B$</span> with the same determinant and trace is similar to <span class="math-container">$A$</span>, since they are diagonalizable with the same eigenvalues.</p>
<p>So to give a correct example in part (2), you need <span class="math-container">$x^2-tx+d$</span> to have a double root, which happens only when the discriminant <span class="math-container">$t^2-4d$</span> is <span class="math-container">$0$</span>. If you choose the matrix <span class="math-container">$A$</span> (or the values of <span class="math-container">$t$</span> and <span class="math-container">$d$</span>) "at random" in any reasonable way, then <span class="math-container">$t^2-4d$</span> will usually not be <span class="math-container">$0$</span>. (For instance, if you choose <span class="math-container">$A$</span>'s entries uniformly from some interval, then <span class="math-container">$t^2-4d$</span> will be nonzero with probability <span class="math-container">$1$</span>, since the vanishing set in <span class="math-container">$\mathbb{R}^n$</span> of any nonzero polynomial in <span class="math-container">$n$</span> variables has Lebesgue measure <span class="math-container">$0$</span>.) Assuming that students did something like pick <span class="math-container">$A$</span> "at random" and then built <span class="math-container">$B$</span> to have the same trace and determinant, this would explain why none of them found a correct example.</p>
<p>Note that this is very much special to <span class="math-container">$2\times 2$</span> matrices. In higher dimensions, the determinant and trace do not determine the characteristic polynomial (they just give two of the coefficients), and so if you pick two matrices with the same determinant and trace they will typically have different characteristic polynomials and not be similar.</p>
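<p>As a quick empirical illustration of this, here is a minimal Python sketch (the entry range and sample size are arbitrary choices of mine) that samples random <span class="math-container">$2\times 2$</span> integer matrices and counts how often the discriminant <span class="math-container">$t^2-4d$</span> vanishes; only those samples could ever yield a non-similar partner with the same trace and determinant:</p>
<pre><code>import random

# Sample random 2x2 integer matrices and count how often the
# characteristic polynomial x^2 - t*x + d has a double root,
# i.e. how often t^2 - 4d == 0.
random.seed(0)
trials, degenerate = 100_000, 0
for _ in range(trials):
    a, b, c, d = (random.randint(-9, 9) for _ in range(4))
    t, det = a + d, a * d - b * c
    if t * t - 4 * det == 0:
        degenerate += 1

print(f"{degenerate} of {trials} samples have a double eigenvalue")
# Only a small fraction of samples do, so a "random" pair with equal
# trace and determinant is almost always a pair of similar matrices.
</code></pre>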
| <p>As Eric points out, such $2\times2$ matrices are special.
In fact, there are only two such pairs of matrices.
The number depends on how you count, but the point is that such matrices have a <em>very</em> special form.</p>
<p>Eric proved that the two matrices must have a double eigenvalue.
Let the eigenvalue be $\lambda$.
It is a little exercise<sup>1</sup> to show that $2\times2$ matrices with double eigenvalue $\lambda$ are similar to a matrix of the form
$$
C_{\lambda,\mu}
=
\begin{pmatrix}
\lambda&\mu\\
0&\lambda
\end{pmatrix}.
$$
Using suitable diagonal matrices shows that $C_{\lambda,\mu}$ is similar to $C_{\lambda,1}$ if $\mu\neq0$.
On the other hand, $C_{\lambda,0}$ and $C_{\lambda,1}$ are not similar; one is a scaling and the other one is not.</p>
<p>Therefore, up to similarity transformations, the only possible example is $A=C_{\lambda,0}$ and $B=C_{\lambda,1}$ (or vice versa).
Since scaling doesn't really change anything, <strong>the only examples</strong> (up to similarity, scaling, and swapping the two matrices) are
$$
A
=
\begin{pmatrix}
1&0\\
0&1
\end{pmatrix},
\quad
B
=
\begin{pmatrix}
1&1\\
0&1
\end{pmatrix}
$$
and
$$
A
=
\begin{pmatrix}
0&0\\
0&0
\end{pmatrix},
\quad
B
=
\begin{pmatrix}
0&1\\
0&0
\end{pmatrix}.
$$
If adding multiples of the identity is added to the list of symmetries (then scaling can be removed), then there is only one matrix pair up to the symmetries.</p>
<p>If you are familiar with the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="noreferrer">Jordan normal form</a>, it gives a different way to see it.
Once the eigenvalues are fixed to be equal, the only free property (up to similarity) is whether there are one or two blocks in the normal form.
The Jordan normal form is invariant under similarity transformations, so it gives a very quick way to solve problems like this.</p>
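<p>If you want to see this computationally, the following sympy sketch (my own addition; sympy's <code>jordan_form</code> is the only nontrivial call) confirms that the first pair above has equal trace and determinant but distinct Jordan forms, hence the matrices are not similar:</p>
<pre><code>from sympy import Matrix, eye

A = eye(2)                    # the identity
B = Matrix([[1, 1], [0, 1]])  # the nontrivial Jordan block

print(A.trace() == B.trace(), A.det() == B.det())  # True True

# The Jordan normal form is a complete similarity invariant.
_, JA = A.jordan_form()
_, JB = B.jordan_form()
print(JA == JB)  # False, so A and B are not similar
</code></pre>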
<hr>
<p><sup>1</sup>
You only need to show that any matrix is similar to an upper triangular matrix.
The eigenvalues (which now coincide) are on the diagonal.
You can skip this exercise if you have Jordan normal forms at your disposal.</p>
|
geometry | <p>The fugitive is at the origin. They move at a speed of 1. There's a guard at every integer coordinate except the origin. A guard's speed is 1/100. The fugitive and the guards move simultaneously and continuously. <strong>At any moment, the guards only move towards the current position of the fugitive</strong>, i.e. a guard's trajectory is a <a href="https://en.wikipedia.org/wiki/Pursuit_curve" rel="nofollow noreferrer">pursuit curve</a>. If they're within 1/100 distance from a guard, the fugitive is caught. The game is played on <span class="math-container">$\mathbb{R}^2$</span>.</p>
<p><a href="https://i.sstatic.net/YyV0Hm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YyV0Hm.jpg" alt="enter image description here" /></a></p>
<p>Can the fugitive avoid capture forever?</p>
<hr />
<p>What I know:</p>
<ol>
<li><p>The distance between two guards is always non-increasing, but the farther away from the fugitive, the slower that distance decreases.</p>
</li>
<li><p>If the fugitive runs along a straight line, they will always be caught by some far away guard. For example, if the fugitive starts at <span class="math-container">$(0.5, 0)$</span> and runs due north, they will be caught by a guard at about <span class="math-container">$(0, \frac{50^{100}}{4})$</span>. Consult <a href="https://en.wikipedia.org/wiki/Radiodrome" rel="nofollow noreferrer">radiodrome</a> for calculation.</p>
</li>
<li><p>If there're just 2 guards at distance 1 from each other, then the fugitive can always find a path to safely pass <strong>between</strong> them. This is true regardless of the pair's distance and relative positions to the fugitive.</p>
</li>
<li><p>The fugitive can escape if they're just "enclosed" by guards who're at distance 1 from each other:</p>
</li>
</ol>
<p><a href="https://i.sstatic.net/Xpef0m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xpef0m.png" alt="enter image description here" /></a></p>
<p>The shape of the fence doesn't have to be rectangular. Circles or other shapes don't prevent escape either, regardless of the size.</p>
<hr />
<p>3 and 4 are nontrivial, but can be proved by geometric arguments alone without calculus. To avoid unnecessarily cluttering the question, further details are given as an answer below for those who're interested, hopefully instrumental in solving the original problem.</p>
| <p>As requested in the comments above, here is the <a href="https://raw.githubusercontent.com/martinq321/escape/main/mma" rel="noreferrer">mathematica code</a> that generated this:</p>
<img src="https://i.sstatic.net/zGc1q.gif" width="300" height="80">
<p>Might provide some insight. I used a discrete analog to the pursuit curve whereby for every quarter of a unit step the pursuers make (or whatever ratio you choose), the escapee makes a unit step to a point on the circumference of the unit circle around the escapee that maximises the gap between itself and its pursuers. Code modified from <a href="https://math.stackexchange.com/questions/2427050/staff-icebreaker-is-stasis-ever-attained">here</a>.</p>
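<p>For readers without Mathematica, a rough Python transcription of the same idea might look as follows. This is my own sketch, not the linked code: the guard lattice is truncated to a finite patch, the speed ratio is fixed at 1/100 as in the problem statement rather than the quarter-step ratio above, and the escapee's search is a coarse scan over 36 directions.</p>
<pre><code>import math

# Guards on a finite lattice patch; the fugitive starts at the origin.
guards = [complex(i, j) for i in range(-10, 11) for j in range(-10, 11)
          if (i, j) != (0, 0)]
fugitive = 0j

for step in range(50):
    # Each guard moves 1/100 of a unit straight toward the fugitive.
    guards = [g + 0.01 * (fugitive - g) / abs(fugitive - g) for g in guards]
    # The fugitive tries 36 unit steps and keeps the one that maximises
    # the distance to the nearest guard (a crude local evasion rule).
    candidates = (fugitive + complex(math.cos(t), math.sin(t))
                  for t in (k * math.pi / 18 for k in range(36)))
    fugitive = max(candidates, key=lambda p: min(abs(p - g) for g in guards))
    if min(abs(fugitive - g) for g in guards) <= 0.01:
        print(f"caught at step {step}")
        break
else:
    print("not caught within 50 steps")
</code></pre>
<p>Because the patch is finite, the fugitive eventually just runs off it; that is an artifact of the truncation, not evidence about the infinite problem.</p>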
<p>Obviously the pursuer density increases with time. Whereas a plot of the distance between the escapee and its pursuers for a straight-line escape route is strictly decreasing,</p>
<img src="https://i.sstatic.net/hwkfo.png" width="300" height="200">
<p>a plot of the distance between the escapee and its pursuers for the local evasion strategy is, at first glance, not:</p>
<img src="https://i.sstatic.net/zOxmg.png" width="300" height="200">
<p>However, this is most likely misleading, since this discrete analog hides the continuous path that will, at some point, no doubt pass within the restricted parameters:</p>
<img src="https://i.sstatic.net/PUlTW.png" width="300" height="350">
| <p>Proofs for points 3 and 4 in the question.</p>
<p><strong>Proof for point 3</strong></p>
<p>Let's denote the two guards and the fugitive by <span class="math-container">$G_1$</span>, <span class="math-container">$G_2$</span> and <span class="math-container">$F$</span>, respectively. If <span class="math-container">$\angle FG_1G_2\leq\pi/4$</span> at time <span class="math-container">$0$</span>, then slipping through is possible because:</p>
<p><a href="https://i.sstatic.net/HqM7nm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HqM7nm.png" alt="enter image description here" /></a></p>
<p>If <span class="math-container">$F$</span> travels to <span class="math-container">$G_1$</span> along the y axis, <span class="math-container">$G_2$</span> will always be within the eye-shaped intersection of the two circles before <span class="math-container">$F$</span> reaches <span class="math-container">$G_3$</span>, because distance between any pair of guards is non-increasing. So the distance between <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> will be greater than <span class="math-container">$\sqrt2-1$</span> when <span class="math-container">$F$</span> reaches <span class="math-container">$G_3$</span>. For a safety radius of 1/100 and speed ratio of 100, the fugitive can easily slip through between <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span>.</p>
<p>What about <span class="math-container">$\angle FG_1G_2\gt\pi/4$</span>? Let's consider the extreme case where <span class="math-container">$\angle FG_1G_2=\pi/2$</span> at time <span class="math-container">$0$</span>. Suppose <span class="math-container">$G_1(0)=(0,n)$</span> and <span class="math-container">$G_2(0)=(1,n)$</span>. Notice if the fugitive runs leftward, the segment <span class="math-container">$G_1G_2$</span> will tilt counterclockwise, as illustrated below:</p>
<p><a href="https://i.sstatic.net/QFz1nm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QFz1nm.png" alt="enter image description here" /></a></p>
<p>So if the fugitive runs clockwise along the arc of the circle of radius <span class="math-container">$n$</span> centered at <span class="math-container">$G_1(0)$</span>, <span class="math-container">$\angle FG_2G_1$</span> will be less than <span class="math-container">$\pi/4$</span> when the fugitive has traveled a distance of <span class="math-container">$n\pi/4\approx n$</span>. Since the speed ratio is 100, the pair have traveled at most about <span class="math-container">$n/100$</span> and are still about <span class="math-container">$99n/100$</span> away, which means the distance between <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> has shrunk at most about 1% at this time. Now the fugitive can run straight for <span class="math-container">$G_2$</span> and slip through the pair as explained in the first paragraph.</p>
<p>In fact, there's an even better strategy! Notice <span class="math-container">$F$</span>, <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> will be <strong>collinear</strong> sometime before the fugitive has traveled a distance of <span class="math-container">$n\pi/2\approx 2n$</span>. By this time the pair of guards have traveled at most <span class="math-container">$n/50$</span> and are still about <span class="math-container">$49n/50$</span> away, which means the distance between <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> has shrunk at most 2%. Since the triple are collinear, the gap between guards will remain constant at about 0.98 if the fugitive runs straight towards them. That's more than enough space for the fugitive to slip through at a leisurely pace. <span class="math-container">$\blacksquare$</span></p>
<p><strong>Proof for point 4</strong></p>
<p>Let's assume guards surround the fugitive with a circle of radius <span class="math-container">$R$</span>. The fugitive moves straight toward some guard. When they're close to the guard, say at distance 1 away, the fugitive swerves to the right and passes every guard tangentially while keeping at about distance 1 away, as schematically illustrated below</p>
<p><a href="https://i.sstatic.net/rVDlrm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rVDlrm.png" alt="enter image description here" /></a></p>
<p>Now, as the fugitive makes their clockwise movement, the distance between the pair of guards closest to them has to be no more than about 0.04, lest the target escape. Since the distance between guards is non-increasing, when the fugitive finishes a full round, the distance between any two adjacent guards is no more than 0.04. But this is impossible given that the fugitive has moved no more than <span class="math-container">$R+2\pi R \approx 7R$</span>: given the speed ratio of 1:100, each guard has moved no more than <span class="math-container">$\frac{7}{100}R$</span>. This means each guard is still at least <span class="math-container">$\frac{93}{100}R$</span> away from the origin. Hence the average distance between two adjacent guards is at least <span class="math-container">$\frac{93}{100}$</span>. A contradiction. <span class="math-container">$\blacksquare$</span></p>
|
logic | <p>The competition has ended: 6 June 2014, 22:00 GMT.
The winner is Bryan. </p>
<p>Well done! </p>
<hr>
<p>When I was rereading the proof of the drinkers paradox (see <a href="https://math.stackexchange.com/questions/807092/proof-of-drinker-paradox">Proof of Drinker paradox</a> I realised that $\exists x \forall y (D(x) \to D(y)) $ is also a theorem.</p>
<p>I managed to prove it in 21 lines (see below), but I am not sure if this is the shortest possible proof.
And I would like to know are there shorter versions?</p>
<p>That is the reason for this <strong>competition</strong> (see also below)</p>
<p><strong>Competition rules</strong></p>
<ul>
<li><p><strong>This is a small competition, I will give the person who comes up with the shortest proof (shorter than the proof below) a reward of 200 (unmarked) reputation points.</strong> (I can only add a bounty in a couple of days from now, but you may already start to think about how to prove it, and you may already post your answer.) </p></li>
<li><p>The length of a proof is measured by the number of numbered lines in the proof (see the proof below as an example: the first formula is at line number 2, and end-of-subproof lines are not counted) </p></li>
<li><p>If there is more than one person with the shortest answer, the reward goes to the first poster (measured by the last substantial edit of the proof). </p></li>
<li><p>The answer must be in the form of a complete Fitch-style natural deduction proof<br>
typed in an answer box as below. Proofs formatted in $\LaTeX$ are even better, but I just don't know how to do that. </p></li>
<li><p>The proof system is the Fitch-style natural deduction system as described in "Language proof and Logic" by Barwise cs, ( LPL, <a href="http://ggweb.stanford.edu/lpl/" rel="noreferrer">http://ggweb.stanford.edu/lpl/</a> ) except that the General Conditional proof rule is <strong>not</strong> allowed. (I just don't like the rule, not sure if using this rule would shorten the proof either) </p></li>
<li><p>Maybe I will also give an extra prize for the most beautiful answer, or the shortest proof in another style/method of natural deduction. It depends a bit on the answers I get; others may also set and give bounties as they see fit.</p></li>
<li><p>you may give more than one answer, answers in more than one proof method etc., but do post them as separate answers, and be aware that only the proof that uses the Fitch method described above is competing, and other participants can peek at your answers.</p></li>
<li>GOOD LUCK, and may the best one win.</li>
</ul>
<p>Proof system:</p>
<p>For participants that don't have the LPL book the only allowed inference rules are the rules I used in the proof:</p>
<ul>
<li>the $\bot$ (falsum , contradiction) introduction and elimination rules</li>
<li>the $\lnot \lnot$ elimination rules</li>
<li>the $\lnot $ , $\to$ , $\exists$ and $\forall$ introduction rules</li>
</ul>
<p>(also see the proof below for examples of how to use them.)</p>
<p>Also the following rules are allowable : </p>
<ul>
<li>the $\land$ , $\lor$ and $\leftrightarrow$ introduction and elimination rules. </li>
<li>the $\to$ , $\exists$ and $\forall$ elimination rules</li>
<li>the reiteration rule.</li>
</ul>
<p>(This is just to be complete, I don't think they are useful in this proof) </p>
<p>Notice:</p>
<ul>
<li><p>line 1 is empty (in the Fitch software that accompanies the book line 1 is for premisses only and there are no premisses in this proof)</p></li>
<li><p>the end subproof lines have no line number. (and they don't count in the all-important number of lines)</p></li>
<li><p>the General Conditional proof rule is <strong>not</strong> allowed</p></li>
<li>there is no $\lnot$ elimination rule (only a double negation elimination rule)</li>
</ul>
<p>My proof: (and example on how to use the rules, and how your answer should be formatted) </p>
<pre><code>1 |
. |-----------------
2 | |____________ ~Ex Vy (D(x) -> D(y)) New Subproof for ~ Introduction
3 | | |_________a variable for Universal Introduction
4 | | | |________ D(b) New Subproof for -> Introduction
5 | | | | |______ ~D(a) New Subproof for ~ Introduction
6 | | | | | |___c variable for Universal Introduction
7 | | | | | | |__ D(a) New Subproof for -> Introduction
8 | | | | | | | _|_ 5,7 _|_ introduction
9 | | | | | | | D(c) 8 _|_ Elimination
.. | | | | | | <------------------------- end subproof
10 | | | | | | D(a) -> D(c) 7-9 -> Introduction
.. | | | | | <--------------------------- end subproof
11 | | | | | Vy(D(a) -> D(y)) 6-10 Universal Introduction
12 | | | | | Ex Vy (D(x) -> D(y)) 11 Existential Introduction
13 | | | | | _|_ 2,12 _|_ introduction
.. | | | | <----------------------------- end subproof
14 | | | | ~~D(a) 5-13 ~ introduction
15 | | | | D(a) 14 ~~ Elimination
.. | | | <------------------------------- end subproof
16 | | | D(b) -> D(a) 4-15 -> Introduction
.. | | <--------------------------------- end subproof
17 | | Vy(D(b) -> D(y)) 3-16 Universal Introduction
18 | | ExVy(D(x) -> D(y)) 17 Existential Introduction
19 | | _|_ 2,18 _|_ introduction
.. | <----------------------------------- end subproof
20 | ~~Ex Vy (D(x) -> D(y)) 2-19 ~ introduction
21 | Ex Vy (D(x) -> D(y)) 20 ~~ Elimination
</code></pre>
<p><strong>Allowable Question and other meta stuff</strong></p>
<p>I did ask on <a href="http://meta.math.stackexchange.com/questions/13855/are-small-competitions-allowed">http://meta.math.stackexchange.com/questions/13855/are-small-competitions-allowed</a> if this is an allowable question. I have not yet (28 May 2014) been given an answer saying that such questions are not allowable or outside the scope of this forum, nor any other negative remark; I did not get any comment that this was an improper question, or about the circumstances under which such competitions are allowed. </p>
<p>(the most negative comment was that it should be unmarked reputation points. :) and that comment was later removed...</p>
<p>If you disagree with this please add that as answer to the question on at the meta math stackexchange site. (or vote such an answer up)</p>
<p>If on the other hand you do like competitions, also show your support on the meta site, ( by adding it as answer, or voting such an answer up)</p>
<p>PS don't think all logic is simple and that "I will make a shorter proof in no time"; the question is rather hard and difficult, but prove that I am wrong :) </p>
| <p>Well, here's my solution even though I've used 4 'unallowed' rules. </p>
<ol>
<li>$\neg\exists x\forall y(Dx\implies Dy)\quad$ assumption for $\neg$ introduction</li>
<li>$\forall x\neg\forall y(Dx\implies Dy)\quad$ DeMorgan's for quantifiers</li>
<li>$\forall x\exists y \neg(Dx\implies Dy)\quad$ DeMorgan's for quantifiers</li>
<li>$\forall x\exists y\neg(\neg Dx\vee Dy)\quad$ Conditional exchange</li>
<li>$\forall x\exists y(\neg\neg Dx\wedge \neg Dy)\quad$ DeMorgan's</li>
<li>$\forall x \exists y(Dx\wedge\neg Dy)\quad$ Double negation</li>
<li>$\exists y(Da\wedge\neg Dy)\quad$ Universal instantiation</li>
<li>$Da\wedge \neg Db\quad$ Existential instantiation (flag $b$)</li>
<li>$\neg Db\quad$ Simplification</li>
<li>$\exists y(Db\wedge \neg Dy)\quad$ Universal instantiation</li>
<li>$Db\wedge \neg Dc\quad$ Existential instantiation (flag $c$)</li>
<li>$Db\quad$ Simplification</li>
<li>$\bot\quad$ Contradiction 9, 12</li>
<li>$\neg\neg\exists x\forall y(Dx\implies Dy)\quad$ Proof by contradiction 1-13</li>
<li>$\exists x\forall y(Dx\implies Dy)\quad$ Double negation</li>
</ol>
<p>I guess there are 16 lines in all if we include the empty premise line. I do have a couple comments though.</p>
<p>First, I highly doubt this proof can be attained by the last applied rule being Existential Generalization. This statement is a logical truth precisely because we can change our $x$ depending on whether or not the domain has a member such that $Dx$ fails. If $D$ holds for all members of the domain, any choice of $x$ will make the statement true. If $D$ does not hold for some member of the domain, choosing that as our $x$ will make the statement true. Saying that we can reach the conclusion by one final use of E.G. means that the member $b$ which appears in the precedent somehow handled both cases while the <em>only</em> thing we are supposed to know about $b$ is that it is a member of the domain. We still don't know anything of the domain.</p>
<p>With that said, after I got a copy of your book and read the rules, it appears the only way we can get to the conclusion is by an application of double negation. And the only way to get there is a proof by contradiction. Thus I believe your lines 1, 2, 19, 20, and 21 belong to the minimal solution. So far, I haven't found anything simpler for the middle.</p>
| <p>Although outside the scope of what is asked, it is perhaps amusing to note that a proof in a tableau system is markedly shorter, and writes itself (without even having to split the tree and consider cases):</p>
<p>$$\neg\exists x\forall y(Dx\implies Dy)$$
$$\forall x\neg\forall y(Dx\implies Dy)$$
$$\neg\forall y(Da\implies Dy)$$
$$\exists y\neg(Da\implies Dy)$$
$$\neg(Da\implies Db)$$
$$\neg Db$$
$$\neg\forall y(Db\implies Dy)$$
$$\exists y\neg(Db\implies Dy)$$
$$\neg(Db\implies Dc)$$
$$Db$$
$$\bigstar$$</p>
|
differentiation | <p>I was wondering how we could use the limit definition </p>
<p>$$ \lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h}$$</p>
<p>to find the derivative of $e^x$, I get to a point where I do not know how to simplify the indeterminate $\frac{0}{0}$. Below is what I have already done</p>
<p>$$\begin{align}
&\lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h} \\
&\lim_{h \rightarrow 0} \frac{e^{x+h}-e^x}{h} \\
&\lim_{h \rightarrow 0} \frac{e^x (e^h-1)}{h} \\
&e^x \cdot \lim_{h \rightarrow 0} \frac{e^h-1}{h}
\end{align}$$</p>
<p>Where can I go from here? The $\lim$ portion reduces to an indeterminate form when $0$ is substituted for $h$. </p>
| <p>Sometimes one <em>defines</em> $e$ as the (unique) number for which $$\tag 1 \lim_{h\to 0}\frac{e^h-1}{h}=1$$</p>
<p>In fact, there are two possible directions. </p>
<p>$(i)$ Start with the logarithm. You'll find out it is continuous monotone increasing on $\Bbb R_{>0}$, and its range is $\Bbb R$. It follows $\log x=1$ for some $x$. We define this (unique) $x$ to be $e$. Some elementary properties will pop up, and one will be $$\tag 2 \lim\limits_{x\to 0}\frac{\log(1+x)}{x}=1$$</p>
<p>Upon defining $\exp x$ as the inverse of the logarithm, and after some rules, we will get to defining exponentiation of $a>0\in \Bbb R$ as $$a^x:=\exp(x\log a)$$</p>
<p>In said case, $e^x=\exp(x)$, as we expected. $(1)$ will then be an immediate consequence of $(2)$.</p>
<p>$(ii)$ We might define $$e=\sum_{k=0}^\infty \frac 1 {k!}$$ (or the equivalent Bernoulli limit). Then, we may define $$\exp x=\sum_{k=0}^\infty \frac{x^k}{k!}$$ Note $$\tag 3 \exp 1=e$$</p>
<p>We define the $\log$ as the inverse of the exponential function. We may derive certain properties of $\exp x$. The most important ones would be $$\exp(x+y)=\exp x\exp y$$ $$\exp'=\exp$$ $$\exp 0 =1$$</p>
<p>In particular, we have that $\log e=1$ by $(3)$. We might then define general exponentiation yet again by $$a^x:=\exp(x\log a)$$</p>
<p>Note then that again $e^x=\exp x$. We can prove $(1)$ easily by appealing to the series expansion we used.</p>
<hr>
<p><strong>ADD</strong> As for the definition of the logarithm, there are a few ones. One is $$\log x=\int_1^x \frac{dt}{t}$$</p>
<p>Having defined exponentiation of real numbers using rationals by $$a^x=\sup\{a^r:r\in\Bbb Q\wedge r<x\}$$</p>
<p>we might also define $$\log x=\lim_{k\to 0}\frac{x^k-1}{k}$$</p>
<p>In any case, you should be able to prove that </p>
<p>$$\tag 1 \log xy = \log x +\log y $$
$$\tag 2 \log x^a = a\log x $$
$$\tag 3 1-\dfrac 1 x\leq\log x \leq x-1 $$
$$\tag 4\lim\limits_{x\to 0}\dfrac{\log(1+x)}{x}=1 $$
$$\tag 5\dfrac{d}{dx}\log x = \dfrac 1 x$$</p>
<p>What you want is a direct consequence of either $(4)$ or $(5)$, or of the first sentence in my post.</p>
<hr>
<p><strong>ADD</strong> We can prove that for $x \geq 0$ $$\lim\left(1+\frac xn\right)^n=\exp x$$ from definition $(ii)$. </p>
<p>First, note that $${n\choose k}\frac 1{n^k}=\frac{1}{{k!}}\frac{{n\left( {n - 1} \right) \cdots \left( {n - k + 1} \right)}}{{{n^k}}} = \frac{1}{{k!}}\left( {1 - \frac{1}{n}} \right)\left( {1 - \frac{2}{n}} \right) \cdots \left( {1 - \frac{{k - 1}}{n}} \right)$$</p>
<p>Since all the factors in the rightmost product are $\leq 1$, we can claim $${n\choose k}\frac{1}{{{n^k}}} \leqslant \frac{1}{{k!}}$$</p>
<p>It follows that $${\left( {1 + \frac{x}{n}} \right)^n}=\sum\limits_{k = 0}^n {{n\choose k}\frac{{{x^k}}}{{{n^k}}}} \leqslant \sum\limits_{k = 0}^n {\frac{{{x^k}}}{{k!}}} $$</p>
<p>It follows that <strong>if</strong> the limit on the left exists, $$\lim {\left( {1 + \frac{x}{n}} \right)^n} \leqslant \lim \sum\limits_{k = 0}^n {\frac{{{x^k}}}{{k!}}} = \exp x$$</p>
<p>Note that the sums in $$\sum\limits_{k = 0}^n {{n\choose k}\frac{{{x^k}}}{{{n^k}}}} $$</p>
<p>are always increasing, which means that for $m\leq n$</p>
<p>$$\sum\limits_{k = 0}^m {{n\choose k}\frac{{{x^k}}}{{{n^k}}}}\leq \sum\limits_{k = 0}^n {{n\choose k}\frac{{{x^k}}}{{{n^k}}}}$$</p>
<p>By letting $n\to\infty$, since $m$ is fixed on the left side, and $$\mathop {\lim }\limits_{n \to \infty } \frac{1}{{k!}}\left( {1 - \frac{1}{n}} \right)\left( {1 - \frac{2}{n}} \right) \cdots \left( {1 - \frac{{k - 1}}{n}} \right) = \frac{1}{{k!}}$$</p>
<p>we see that <strong>if</strong> the limit exists, then for each $m$, we have $$\sum\limits_{k = 0}^m {\frac{{{x^k}}}{{k!}}} \leqslant \lim {\left( {1 + \frac{x}{n}} \right)^n}$$</p>
<p>But then, taking $m\to\infty$ $$\exp x = \mathop {\lim }\limits_{m \to \infty } \sum\limits_{k = 0}^m {\frac{{{x^k}}}{{k!}}} \leqslant \lim {\left( {1 + \frac{x}{n}} \right)^n}$$</p>
<p>It follows that <strong>if</strong> the limit exists $$\eqalign{
& \exp x \leqslant \lim_{n\to\infty} {\left( {1 + \frac{x}{n}} \right)^n} \cr
& \exp x \geqslant \lim_{n\to\infty} {\left( {1 + \frac{x}{n}} \right)^n} \cr}$$ which means $$\exp x = \lim_{n\to\infty} {\left( {1 + \frac{x}{n}} \right)^n}$$ Can you show the limit exists?</p>
<p>The case $x<0$ follows now from $$\displaylines{
{\left( {1 - \frac{x}{n}} \right)^{ - n}} = {\left( {\frac{n}{{n - x}}} \right)^n} \cr
= {\left( {\frac{{n - x + x}}{{n - x}}} \right)^n} \cr
= {\left( {1 + \frac{x}{{n - x}}} \right)^n} \cr} $$</p>
<p>using the squeeze theorem with $\lfloor n-x\rfloor$, $\lceil n-x\rceil$, and the fact $x\to x^{-1}$ is continuous. We care only for terms $n>\lfloor x\rfloor$ to make the above meaningful.</p>
<p><strong>NOTE</strong> If you're acquainted with $\limsup$ and $\liminf$, the above can be put differently as $$\eqalign{
& \exp x \leqslant \lim \inf {\left( {1 + \frac{x}{n}} \right)^n} \cr
& \exp x \geqslant \lim \sup {\left( {1 + \frac{x}{n}} \right)^n} \cr} $$ which means $$\lim \inf {\left( {1 + \frac{x}{n}} \right)^n} = \lim \sup {\left( {1 + \frac{x}{n}} \right)^n}$$ and proves the limit exists and is equal to $\exp x$.</p>
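<p>As a numerical aside (my own addition), the monotone convergence of $\left(1+\frac xn\right)^n$ to $\exp x$ is easy to watch, e.g. in Python:</p>
<pre><code>import math

x = 2.0
for n in (10, 1_000, 100_000):
    print(n, (1 + x / n)**n, math.exp(x))
# For x > 0 the sequence increases toward e^2 = 7.389056...
</code></pre>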
| <p>First prove that $\lim_{h\to 0}\frac{\ln(h+1)}{h}=1$. The switch of $\ln$, $\lim$ is possible because $f(x)=\ln x$, $x>0$ is continuous.</p>
<p>$$\lim_{h\to 0}\ln\left( (1+h)^{\frac{1}{h}} \right)=\ln\lim_{h\to 0}\left( (1+h)^{\frac{1}{h}} \right)=\ln e = 1$$</p>
<p>Now let $u=e^h-1$. We know $h\to 0\iff u\to 0$.</p>
<p>$$\lim_{h\to 0}\frac{e^h-1}{h}=\lim_{u\to 0}\frac{u}{\ln(u+1)}=1$$</p>
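<p>Numerically, both limits used above are easy to inspect; a quick Python check of my own, using <code>log1p</code>/<code>expm1</code> to avoid cancellation for tiny $h$:</p>
<pre><code>import math

# ln(1+h)/h -> 1 and (e^h - 1)/h -> 1 as h -> 0.
for h in (1e-1, 1e-4, 1e-7):
    print(f"h = {h:.0e}   ln(1+h)/h = {math.log1p(h) / h:.10f}   "
          f"(e^h - 1)/h = {math.expm1(h) / h:.10f}")
</code></pre>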
|
logic | <p>Suppose we want to define a first-order language to do set theory (so we can formalize mathematics).
One such construction can be found <a href="http://books.google.com.bn/books?id=u927rHHmylAC&lpg=PP1&pg=PA5#v=onepage&q&f=false" rel="noreferrer">here</a>.
What makes me uneasy about this definition is that words such as "set", "countable", "function", and "number" are used in somewhat non-trivial manners.
For instance, behind the word "countable" rests an immense amount of mathematical knowledge: one needs the notion of a bijection, which requires functions and sets.
One also needs the set of natural numbers (or something with equal cardinality), in order to say that countable sets have a bijection with the set of natural numbers.</p>
<p>Also, in set theory one uses the relation of belonging "<span class="math-container">$\in$</span>".
But relation seems to require the notion an ordered pair, which requires sets, whose properties are described using belonging...</p>
<p>I found the following in Kevin Klement's, <a href="https://people.umass.edu/klement/514/ln.pdf" rel="noreferrer">lecture notes on mathematical logic</a> (pages 2-3).</p>
<p>"You have to use logic to study logic. There’s no getting away from it.
However, I’m not going to bother stating all the logical rules that are valid in the metalanguage, since I’d need to do that in the metametalanguage, and that would just get me started on an infinite regress.
The rule of thumb is: if it’s OK in the object language, it’s OK in the metalanguage too."</p>
<p>So it seems that, if one proves a fact about the object language, then one can also use it in the metalanguage.
In the case of set theory, one may not start out knowing what sets really are, but after one proves some fact about them (e.g., that there are uncountable sets) then one implicitly "adds" this fact also to the metalanguage.</p>
<p>This seems like cheating: one is using the object language to conduct proofs regarding the metalanguage, when it should strictly be the other way round.</p>
<p>To give an example of avoiding circularity, consider the definition of the integers.
We can define a binary relation <span class="math-container">$R\subseteq(\mathbf{N}\times\mathbf{N})\times(\mathbf{N}\times\mathbf{N})$</span>, where for any <span class="math-container">$a,b,c,d\in\mathbf{N}$</span>, <span class="math-container">$((a,b),(c,d))\in R$</span> iff <span class="math-container">$a+d=b+c$</span>, and then defining <span class="math-container">$\mathbf{Z}:= \{[(a,b)]:a,b\in\mathbf{N}\}$</span>, where <span class="math-container">$[a,b]=\{x\in \mathbf{N}\times\mathbf{N}: xR(a,b)\}$</span>, as in <a href="https://math.stackexchange.com/questions/156264/building-the-integers-from-scratch-and-multiplying-negative-numbers">this</a> question or <a href="https://en.wikipedia.org/wiki/Integer#Construction" rel="noreferrer">here</a> on Wikipedia. In this definition if set theory and natural numbers are assumed, then there is no circularity because one did not depend on the notion of "subtraction" in defining the integers.</p>
<p>So my question is:</p>
<blockquote>
<p><strong>Question</strong> Is the definition of first-order logic circular?
If not, please explain why.
If the definitions <em>are</em> circular, is there an alternative definition which avoids the circularity?</p>
</blockquote>
<p>Some thoughts:</p>
<ul>
<li><p>Perhaps there is the distinction between what sets are (anything that obeys the axioms) and how sets are expressed (using a formal language).
In other words, the notion of a <em>set</em> may not be circular, but to talk of sets using a formal language requires the notion of a set in a metalanguage.</p></li>
<li><p>In foundational mathematics there also seems to be the idea of first <em>defining</em> something, and then coming back with better machinery to <em>analyse</em> that thing.
For instance, one can define the natural numbers using the Peano axioms, then later come back to say that all structures satisfying the axioms are isomorphic. (I don't know any algebra, but that seems right.)</p></li>
<li><p>Maybe sets, functions, etc., are too basic? Is it possible to avoid these terms when defining a formal language?</p></li>
</ul>
| <p>I think an important answer is still not present so I am going to type it. This is somewhat standard knowledge in the field of foundations but is not always adequately described in lower level texts.</p>
<p>When we formalize the syntax of formal systems, we often talk about the <em>set</em> of formulas. But this is just a way of speaking; there is no ontological commitment to "sets" as in ZFC. What is really going on is an "inductive definition". To understand this you have to temporarily forget about ZFC and just think about strings that are written on paper. </p>
<p>The inductive definition of a "propositional formula" might say that the set of formulas is the smallest class of strings such that:</p>
<ul>
<li><p>Every variable letter is a formula (presumably we have already defined a set of variable letters). </p></li>
<li><p>If $A$ is a formula, so is $\lnot (A)$. Note: this is a string with 3 more symbols than $A$. </p></li>
<li><p>If $A$ and $B$ are formulas, so is $(A \land B)$. Note this adds 3 more symbols to the ones in $A$ and $B$. </p></li>
</ul>
<p>This definition <em>can</em> certainly be read as a definition in ZFC. But it can also be read in a different way. The definition can be used to generate a completely effective procedure that a human can carry out to tell whether an arbitrary string is a formula (a proof along these lines, which constructs a parsing procedure and proves its validity, is in Enderton's logic textbook). </p>
<p>In this way, we can understand inductive definitions in a completely effective way without any recourse to set theory. When someone says "Let $A$ be a formula" they mean to consider the situation in which I have in front of me a string written on a piece of paper, which my parsing algorithm says is a correct formula. I can perform that algorithm without any knowledge of "sets" or ZFC.</p>
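<p>To make the effectiveness concrete, here is a small Python sketch of such a parsing procedure for the toy definition above (my own illustration, not Enderton's algorithm verbatim; I use <code>~(A)</code> and <code>(A&amp;B)</code> as the concrete syntax). It decides formula-hood by manipulating the string alone:</p>
<pre><code>def is_formula(s):
    """Decide, by string manipulation alone, whether s is a formula
    over the variable letters p, q, r (toy version of the definition)."""
    if s in ("p", "q", "r"):              # a variable letter is a formula
        return True
    if s.startswith("~(") and s.endswith(")"):
        return is_formula(s[2:-1])        # ~(A) for some formula A
    if s.startswith("(") and s.endswith(")"):
        depth = 0                         # find the '&' at nesting depth 1
        for i, ch in enumerate(s):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == "&" and depth == 1:
                return is_formula(s[1:i]) and is_formula(s[i+1:-1])
    return False

print(is_formula("(p&~(q))"))   # True
print(is_formula("(p&&q)"))     # False
</code></pre>
<p>The fact that the depth-1 <code>&amp;</code> split is unambiguous is essentially the unique readability property that the parsing proof establishes.</p>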
<p>Another important example is "formal proofs". Again, I can treat these simply as strings to be manipulated, and I have a parsing algorithm that can tell whether a given string is a formal proof. The various syntactic metatheorems of first-order logic are also effective. For example the deduction theorem gives a direct algorithm to convert one sort of proof into another sort of proof. The algorithmic nature of these metatheorems is not always emphasized in lower-level texts - but for example it is very important in contexts like automated theorem proving. </p>
<p>So if you examine a logic textbook, you will see that all the syntactic aspects of basic first order logic are given by inductive definitions, and the algorithms given to manipulate them are completely effective. Authors usually do not dwell on this, both because it is completely standard and because they do not want to overwhelm the reader at first. So the convention is to write definitions "as if" they are definitions in set theory, and allow the readers who know what's going on to read the definitions as formal inductive definitions instead. When read as inductive definitions, these definitions would make sense even to the fringe of mathematicians who don't think that any infinite sets exist but who are willing to study algorithms that manipulate individual finite strings. </p>
<p>Here are two more examples of the syntactic algorithms implicit in certain theorems: </p>
<ul>
<li><p>Gödel's incompleteness theorem actually gives an effective algorithm that can convert any PA-proof of Con(PA) into a PA-proof of $0=1$. So, under the assumption there is no proof of the latter kind, there is no proof of the former kind. </p></li>
<li><p>The method of forcing in ZFC actually gives an effective algorithm that can turn any proof of $0=1$ from the assumptions of ZFC and the continuum hypothesis into a proof of $0=1$ from ZFC alone. Again, this gives a relative consistency result. </p></li>
</ul>
<p>Results like the previous two bullets are often called "finitary relative consistency proofs". Here "finitary" should be read to mean "providing an effective algorithm to manipulate strings of symbols". </p>
<p>This viewpoint helps explain where weak theories of arithmetic such as PRA enter into the study of foundations. Suppose we want to ask "what axioms are required to prove that the algorithms we have constructed will do what they are supposed to do?". It turns out that very weak theories of arithmetic are able to prove that these symbolic manipulations work correctly. PRA is a particular theory of arithmetic that is on one hand very weak (from the point of view of stronger theories like PA or ZFC) but at the same time is able to prove that (formalized versions of) the syntactic algorithms work correctly, and which is often used for this purpose. </p>
| <p>It's only circular if you think we need a formalization of logic in order to reason mathematically at all. However, mathematicians reasoned mathematically for many centuries <em>before</em> formal logic was invented, so this assumption is obviously not true.</p>
<p>It's an empirical fact that mathematical reasoning existed independently of formal logic back then. I think it is reasonably self-evident, then, that it <em>still</em> exists without needing formal logic to prop it up. Formal logic is a <em>mathematical model</em> of the kind of reasoning mathematicians accept -- but the model is not the thing itself.</p>
<p>A small bit of circularity does creep in, because many modern mathematicians look to their knowledge of formal logic when they need to decide whether to accept an argument or not. But that's not enough to make the whole thing circular; there are enough non-equivalent formal logics (and possible foundations of mathematics) to choose between that the choice of which one to use to analyze arguments is still largely informed by which arguments one <em>intuitively</em> wants to accept in the first place, not the other way around.</p>
|
matrices | <p>For the Quadratic Form $X^TAX; X\in\mathbb{R}^n, A\in\mathbb{R}^{n \times n}$ (which simplifies to $\Sigma_{i=0}^n\Sigma_{j=0}^nA_{ij}x_ix_j$), I tried to take the derivative wrt. X ($\Delta_X X^TAX$) and ended up with the following:</p>
<p>The $k^{th}$ element of the derivative represented as</p>
<p>$\Delta_{X_k}X^TAX=[\Sigma_{i=1}^n(A_{ik}x_k+A_{ki})x_i] + A_{kk}x_k(1-x_k)$</p>
<p>Does this result look right? Is there an alternative form?</p>
<p>I'm trying to get to the $\mu_0$ of Gaussian Discriminant Analysis by maximizing the log likelihood and I need to take the derivative of a Quadratic form. Either the result I mentioned above is wrong (it shouldn't be, because I went over my arithmetic several times) or the form I arrived at above is not terribly useful for my problem (because I'm unable to proceed).</p>
<p>I can give more details about the problem or the steps I put down to arrive at the above result, but I didn't want to clutter the question to start off. Please let me know if more details are necessary.</p>
<p>Any link to related material is also much appreciated.</p>
| <p>Let $Q(x) = x^T A x$. Then expanding $Q(x+h)-Q(x)$ and dropping the higher order term, we get $DQ(x)(h) = x^TAh+h^TAx = x^TAh+x^TA^Th = x^T(A+A^T)h$, or more typically, $\frac{\partial Q(x)}{\partial x} = x^T(A+A^T)$.</p>
<p>Notice that the derivative with respect to a <strong>column</strong> vector is a <strong>row</strong> vector!</p>
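<p>A quick finite-difference check of this formula (my own numpy sketch; the dimension and random seed are arbitrary):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # deliberately non-symmetric
x = rng.standard_normal(n)

Q = lambda v: v @ A @ v           # the quadratic form x^T A x
analytic = (A + A.T) @ x          # the gradient derived above

# Central differences, one coordinate at a time.
h = 1e-6
numeric = np.array([(Q(x + h * e) - Q(x - h * e)) / (2 * h)
                    for e in np.eye(n)])

print(np.max(np.abs(numeric - analytic)))   # tiny, e.g. ~1e-8
</code></pre>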
| <p>You could also take the derivative of the scalar sum.
<span class="math-container">\begin{equation}
\begin{aligned}
{\bf x^TAx} = \sum\limits_{j=1}^{n}x_j\sum\limits_{i=1}^{n}x_iA_{ji}
\end{aligned}
\end{equation}</span>
The derivative with respect to the <span class="math-container">$k$</span>-th variable is then (product rule):
<span class="math-container">\begin{equation}
\begin{aligned}
\frac{d {\bf x^TAx}}{d x_k} & = \sum\limits_{j=1}^{n}\frac{dx_j}{dx_k}\sum\limits_{i=1}^{n}x_iA_{ji} + \sum\limits_{j=1}^{n}x_j\sum\limits_{i=1}^{n} \frac{dx_i}{dx_k}A_{ji} \\
& = \sum\limits_{i=1}^{n}x_iA_{ki} + \sum\limits_{j=1}^{n}x_jA_{jk}
\end{aligned}
\end{equation}</span></p>
<p>If then you arrange these derivatives into a column vector, you get:
<span class="math-container">\begin{equation}
\begin{aligned}
\begin{bmatrix}
\sum\limits_{i=1}^{n}x_iA_{1i} + \sum\limits_{j=1}^{n}x_jA_{j1} \\
\sum\limits_{i=1}^{n}x_iA_{2i} + \sum\limits_{j=1}^{n}x_jA_{j2} \\
\vdots \\
\sum\limits_{i=1}^{n}x_iA_{ni} + \sum\limits_{j=1}^{n}x_jA_{jn} \\
\end{bmatrix} = {\bf Ax} + ({\bf x}^T{\bf A})^T = ({\bf A} + {\bf A}^T){\bf x}
\end{aligned}
\end{equation}</span></p>
<p>or if you choose to arrange them in a row, then you get:
<span class="math-container">\begin{equation}
\begin{aligned}
\begin{bmatrix}
\sum\limits_{i=1}^{n}x_iA_{1i} + \sum\limits_{j=1}^{n}x_jA_{j1} &
\sum\limits_{i=1}^{n}x_iA_{2i} + \sum\limits_{j=1}^{n}x_jA_{j2} &
\dots &
\sum\limits_{i=1}^{n}x_iA_{ni} + \sum\limits_{j=1}^{n}x_jA_{jn}
\end{bmatrix} \\ = ({\bf Ax} + ({\bf x}^T{\bf A})^T)^T = (({\bf A} + {\bf A}^T){\bf x})^T = {\bf x}^T({\bf A} + {\bf A}^T)
\end{aligned}
\end{equation}</span></p>
<hr />
|
number-theory | <p>A well-known overkill proof of the irrationality of <span class="math-container">$2^{1/n}$</span> (<span class="math-container">$n \geqslant 3$</span> an integer) using Fermat's Last Theorem goes as follows: If <span class="math-container">$2^{1/n} = a/b$</span>, then <span class="math-container">$2b^n = b^n + b^n = a^n$</span>, which contradicts FLT. (See <a href="https://math.stackexchange.com/questions/1551309/proof-that-the-number-sqrt32-is-irrational-using-fermats-last-theorem">this</a>, and see <a href="https://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts/42519#comment100811_42519">this comment</a> for the reason this is a circular argument when using Wiles' FLT proof)</p>
<p>The same method of course can't be applied to prove the irrationality of <span class="math-container">$\sqrt{2}$</span>, since FLT doesn't say anything about the solutions of <span class="math-container">$x^2 + y^2 = z^2$</span>. Often this fact is stated humorously as, "FLT is not strong enough to prove that <span class="math-container">$\sqrt{2} \not \in \mathbb{Q}$</span>." But clearly, the failure of one specific method that works for <span class="math-container">$n \geqslant 3$</span> does not rule out that some other argument could work in the case <span class="math-container">$n = 2$</span> in which the irrationality of <span class="math-container">$\sqrt{2}$</span> is related to a Fermat-type equation.</p>
<p>(<em>For example</em>, if we knew that there are integers <span class="math-container">$x,y,z$</span> such that <span class="math-container">$4x^4 + 4y^4 = z^4$</span>, then with <span class="math-container">$\sqrt{2} = a/b$</span>, we would have <span class="math-container">$a^4 x^4 / b^4 + a^4 y^4 / b^4 = z^4$</span> and hence</p>
<p><span class="math-container">\begin{align}
X^4 + Y^4 = Z^4, \quad \quad (X, Y, Z) = (ax, ay, bz) \in \mathbb{Z}^3,
\end{align}</span></p>
<p>a contradiction to FLT.)</p>
<p>Is there a proof along these lines that <span class="math-container">$\sqrt{2} \not \in \mathbb{Q}$</span> using Fermat's Last Theorem?</p>
| <p><span class="math-container">$$
\left(18+17\sqrt{2}\right)^3 + (18-17\sqrt{2})^3 = 42^3,
$$</span>
so <span class="math-container">$\sqrt{2}\in \mathbb{Q}$</span> would contradict FLT (once you know that <span class="math-container">$\sqrt{2}\not\in\{\pm 18/17\}$</span> of course).</p>
<p>Source: <a href="http://www.afjarvis.staff.shef.ac.uk/maths/jarvismeekin08-fermat-jnt3079.pdf" rel="noreferrer">this article</a>, which also show that this is 'the only way' to show <span class="math-container">$\sqrt{2}$</span> is irrational using FLT, because FLT is almost true in <span class="math-container">$\mathbb{Q}(\sqrt{2})$</span> -- only in exponent <span class="math-container">$3$</span> do we get counterexamples and all of them are 'generated' (see Lemma <span class="math-container">$2.1$</span> and the discussion immediately following its proof at the bottom half of page <span class="math-container">$4$</span>) by the counterexample given above.</p>
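<p>The identity (and the side condition <span class="math-container">$\sqrt{2}\neq \pm 18/17$</span>) can be verified exactly, e.g. with this small sympy check of mine:</p>
<pre><code>from sympy import sqrt, expand, Rational

lhs = expand((18 + 17 * sqrt(2))**3 + (18 - 17 * sqrt(2))**3)
print(lhs, lhs == 42**3)            # 74088 True

print(Rational(18, 17)**2 == 2)     # False, so sqrt(2) != +-18/17
</code></pre>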
| <p>One can generalize this beyond <span class="math-container">$\sqrt{2}$</span>, showing that <span class="math-container">$2$</span> is not special at all. For example, for rational <span class="math-container">$k$</span> other than <span class="math-container">$0$</span> and <span class="math-container">$-1$</span>, we have the identity
<span class="math-container">$$\left(3+\sqrt{-3(1+4k^3)}\right)^3+\left(3-\sqrt{-3(1+4k^3)}\right)^3+(6k)^3=0$$</span>
Obviously it is not trivial to know that FLT is not valid in quadratic fields the way it is in the reals (since for all <span class="math-container">$a,b\in\mathbb{R}$</span> and <span class="math-container">$n\in\mathbb{N}^+$</span> we have <span class="math-container">$a^n+b^n=c^n$</span> for <span class="math-container">$c=\sqrt[n]{a^n+b^n}$</span>), but as the above identity shows, it is not hard either and essentially the same for all quadratic fields.</p>
|
logic | <p><em>A more focused version of this question has now been <a href="https://mathoverflow.net/questions/359958/positive-set-theory-and-the-co-russell-set">asked at MO</a>.</em></p>
<p>Tl;dr version: are there "reasonable" theories which prove/disprove "the set of all sets containing themselves, contains itself"?</p>
<hr>
<p>Inspired by <a href="https://math.stackexchange.com/questions/2431073/significance-of-non-decidable-statement">this question</a>, I'd like to ask a question which has been vaguely on my mind for a while but which I've never looked into.</p>
<p>Working naively for a moment, let <span class="math-container">$S=\{x: x\in x\}$</span> be the "dual" to <a href="https://en.wikipedia.org/wiki/Russell%27s_paradox" rel="noreferrer">Russell's paradoxical set</a> <span class="math-container">$R$</span>. There does not seem to be an immediate argument showing that <span class="math-container">$S$</span> is or is not an element of itself, nicely paralleling the fact that there <em>are</em> of course arguments for <span class="math-container">$R$</span> <em>both</em> containing and not containing itself (that's exactly what the paradox is, of course).</p>
<p>However, it's a bit premature to leap to the conclusion that there actually <strong>are</strong> no such arguments. Namely, if we look at the Godel situation, we see something quite different: while the Godel sentence "I am unprovable (in <span class="math-container">$T$</span>)" is neither provable nor disprovable (in <span class="math-container">$T$</span>), the sentence "I am provable (in <span class="math-container">$T$</span>)" <a href="https://en.wikipedia.org/wiki/L%C3%B6b%27s_theorem" rel="noreferrer"><strong>is provable</strong> (in <span class="math-container">$T$</span>)</a> (as long as we express "is provable" <a href="https://math.stackexchange.com/questions/2135587/variant-of-g%C3%B6del-sentence">in a reasonable way</a>)! So a certain intuitive symmetry is broken. So this raises the possibility that the question</p>
<p><span class="math-container">$$\mbox{Does $S$ contain itself?}$$</span></p>
<p>could actually be answered, at least from "reasonable" axioms.</p>
<p>Now ZFC does answer it, albeit in a trivial way: in ZFC we have <span class="math-container">$S=\emptyset$</span>. So ideally we're looking for a set theory which allows sets containing themselves, so that <span class="math-container">$S$</span> is nontrivial. Also, to keep the parallel with Russell's paradox, a set theory more closely resembling naive comprehension is a reasonable thing to desire.</p>
<p>All of this suggests looking at some <a href="https://en.wikipedia.org/wiki/Positive_set_theory" rel="noreferrer">positive set theory</a> - which proves that <span class="math-container">$S$</span> exists, since "<span class="math-container">$x\in x$</span>" is a positive formula, but is not susceptible to Russell's paradox since "<span class="math-container">$x\not\in x$</span>" is not a positive formula - possibly augmented by some kind of <a href="https://en.wikipedia.org/wiki/Non-well-founded_set_theory" rel="noreferrer">antifoundation axiom</a>.</p>
<p>To be specific:</p>
<blockquote>
<p>Is there a "natural" positive set theory (e.g. <span class="math-container">$GPK_\infty^+$</span>), or extension of such by a "natural" antifoundation axiom (e.g. Boffa's), which decides whether <span class="math-container">$S\in S$</span>?</p>
</blockquote>
<p>In general, I'm interested in the status of "<span class="math-container">$S\in S$</span>" in positive set theories. I'm especially excited by those which prove <span class="math-container">$S\in S$</span>; note that these would have to prove the existence of sets containing themselves, since otherwise <span class="math-container">$S=\emptyset\not\in S$</span>.</p>
| <p>I found <a href="https://doi.org/10.1023/A:1025159016268" rel="nofollow noreferrer">a paper of Cantini's</a> that contains an argument that can be used to establish that <span class="math-container">$S \in S$</span> under fairly weak assumptions (on both the amount of comprehension and the underlying logic). Ultimately the proof is a fixed-point argument in the vein of Löb's theorem. This argument is strong enough to establish that <span class="math-container">$S \in S$</span> in <span class="math-container">$\mathsf{GPK}$</span>. While Cantini is concerned with a contractionless logic, I would like to avoid writing out sequent calculus in this answer, so I will state and prove a weaker result in classical first-order logic.</p>
<p>EDIT: I recently found out that adding an abstraction operator (i.e., set builder notation) is far less innocuous than I had realized. (This is discussed by Forti and Hinnion in the introduction of <a href="https://doi.org/10.2307/2274822" rel="nofollow noreferrer">this paper</a>. My understanding of the issue is that it allows you to code negation with equality.) I suspect that the old version of my answer was only vacuously correct in that the resulting theory is inconsistent, so I have fixed it, although I have specialized the argument to the particular case at hand. I've also cleaned up the argument a bit, mostly to make sure I actually understood it.</p>
<p>We need to assume that our theory <span class="math-container">$T$</span> has enough machinery for the following:</p>
<ul>
<li><span class="math-container">$T$</span> entails extensionality.</li>
<li>There is a definable pairing function <span class="math-container">$(x,y) \mapsto \langle x,y\rangle$</span>.</li>
<li>For any <span class="math-container">$a$</span> and <span class="math-container">$f$</span>, there is a set <span class="math-container">$f[a]$</span> such that <span class="math-container">$x \in f[a]$</span> if and only if <span class="math-container">$\langle a,x\rangle \in f$</span>. Note that <span class="math-container">$(f,a) \mapsto f[a]$</span> is a definable function by extensionality.</li>
<li>There is a set <span class="math-container">$D$</span> such that every element of <span class="math-container">$D$</span> is an ordered pair <span class="math-container">$\langle x,y\rangle$</span> and <span class="math-container">$\langle x,y\rangle \in D$</span> if and only if either <span class="math-container">$y \in y$</span> or <span class="math-container">$y = x[x]$</span>.</li>
</ul>
<p>(It is easy to check that <span class="math-container">$\mathsf{GPK}$</span> satisfies all of these.)</p>
<p>Now let <span class="math-container">$I = D[D]$</span>. Unpacking, we have that <span class="math-container">$x \in I$</span> if and only if <span class="math-container">$\langle D,x\rangle \in D$</span> if and only if either <span class="math-container">$x \in x$</span> or <span class="math-container">$x = D[D] = I$</span>. Therefore <span class="math-container">$I$</span> contains precisely the elements of the co-Russell class <span class="math-container">$S$</span> and <span class="math-container">$I$</span> itself, but since <span class="math-container">$I \in I$</span>, <span class="math-container">$I \in S$</span> and so <span class="math-container">$I = S$</span>, whence <span class="math-container">$S \in S$</span>.</p>
<p>(Incidentally, a similar argument also resolves a question in <a href="https://mathoverflow.net/a/379632/83901">my earlier answer to your related question</a>. In particular, <span class="math-container">$\mathsf{GPK}$</span> does entail the existence of a Quine atom by the above argument if we just say that <span class="math-container">$\langle x,y\rangle \in D$</span> if and only if <span class="math-container">$y = x[x]$</span>.)</p>
<p>In light of this, I wonder whether there even is a 'reasonable' set theory in which <span class="math-container">$S$</span> is non-trivial and <span class="math-container">$S \notin S$</span> is consistent.</p>
| <p>This is not a direct answer to your specific question, but it might shed an idea on a possible solution within the arena of <span class="math-container">$\mathsf{GPK}_\infty^+$</span> in which your question is decidable and to the positive!</p>
<p>Around three months ago I asked Olivier Esser whether adding the following condition is consistent with <span class="math-container">$\mathsf{GPK}_\infty^+$</span>:</p>
<p>"If <span class="math-container">$\phi$</span> is <em>purely</em> positive, has no free variables other than <span class="math-container">$y,A$</span>, and does not use the false formula, then <span class="math-container">$$\exists A \forall y (y \in A \iff \phi)."$$</span> By this principle we can construct Quine atoms and similar sets, which are not constructible in <span class="math-container">$\mathsf{GPK}_\infty^+$</span> alone.</p>
<p>However, Olivier Esser found it <em>unclear</em> whether such an addition is consistent, so this principle is itself debatable.</p>
<p>The idea is that everything depends on what counts as "reasonable". If the above principle is considered more or less reasonable, and if it is found consistent, then an answer is there! However, we are not there yet!</p>
|
differentiation | <p>A few days ago, I was wondering about a series expansion for $$f(x) = \ x^x$$ so I tried taking a couple of derivatives. From these I was able to guess at some formulas for the general coefficients of the first few terms of the $n$th derivative, as well as the last term; I put the terms of the derivative in the order $$f^{(n)}(x) = c_1x^x + c_2x^{x-1} + c_3x^{x-2}\ + \ ... \ +\ c_nx^{x-(n-1)}$$</p>
<p>Please let me know if there is a general formula for the coefficients as a function of $n$. In my own fooling around, I found that $$\ln(x)+1$$ appears quite a bit, so I called this $a$ in my work. Since I would like to center my expansion at $1$, and this expression evaluates to $1$ at $x=1$, I was interested in writing my formulas as functions of $n$ and $a$. I found that you get something of the form (this could very well be completely incorrect): $$f^{(n)}(x) = a^nx^x + a^{n-2}{\binom n2}x^{x-1} + a^{n-4}(3{\binom n4}-a{\binom n3})x^{x-2}\\ + a^{n-6}(15{\binom n6}-10{\binom n5}a+2{\binom n4}a^2)x^{x-3}\ + \ ... \ +\ (-1)^n(n-2)! \ x^{x-(n-1)}$$</p>
<p>An interesting property that seemed to recur for the various coefficients was that once I found a recursion formula for the coefficient in question and plugged in my initial value, the result always seemed to be the same as taking the integral of the previous coefficient with respect to $a$. That is to say, plugging the $n$th coefficient (a polynomial in $a$) into my recursion formula for the $(n+1)$th coefficient gave me the same result as integrating. In general, as reflected by the few coefficient formulas above, the formula for the coefficient of $$x^{x-c}$$ where $c$ is an integer such that $$0\le c<n$$ "stabilized" to a polynomial in $a$ with $c$ terms after the first few applications of the recursion formula. This stabilization occurred when $$n=2c$$ Also, as a direct result of the tentative and incomplete formula I have written above, the first nonzero value for the coefficient of $$x^{x-c}$$ is $$(-1)^{c+1}(c-1)!$$ and appears in $$f^{(c+1)}(x)$$ If these various properties are indeed true, can anyone explain why this would be the case? I am aware that this function can be expanded using the expansion for $$e^x$$ with $$x\ln(x)$$ in the exponent, but this does not strike me as particularly enlightening (let me know if I'm missing something). I am most interested in the coefficients on the binomial coefficients, since these are the only things that stand between me and a general formula (the overall form is quite standard). These are: $$1$$ then $$3,-1$$ then $$15, -10,2$$ The sum of these coefficients should also give the $n$th derivative evaluated at $x=1$, since $$f(1) = 1$$ and $$a(1) = 1$$</p>
<p>Anyway, please let me know what you think! I am currently stuck due to an inability to brute force my way any further and a complete lack of cleverness. My knowledge of mathematics is quite elementary, so please explain any higher level concepts if they are necessary to explain this problem.</p>
| <p><strong>Hint:</strong> We can find an elaboration of the $n$-th derivative of $x^x$ in an example (p. 139) of <em><a href="http://rads.stackoverflow.com/amzn/click/9027703809" rel="noreferrer">Advanced Combinatorics</a></em> by L. Comtet. The idea is based upon a clever Taylor series expansion. Using the differential operator $D_x^j:=\frac{d^j}{dx^j}$ the following holds:</p>
<blockquote>
<p>The $n$-th derivative of $x^x$ is</p>
<p>\begin{align*}
D_x^n x^x=x^x\sum_{i=0}^n\binom{n}{i}(\ln(x))^i\sum_{j=0}^{n-i}b_{n-i,n-i-j}x^{-j}\tag{1}
\end{align*}
with $b_{n,j}$ the <em>Lehmer-Comtet</em> numbers.</p>
<p>These numbers follow the recurrence relation
\begin{align*}
b_{n+1,j}=(j-n)b_{n,j}+b_{n,j-1}+nb_{n-1,j-1}\qquad\qquad n,j\geq 1
\end{align*}
and the first values, together with the initial values, are listed below.</p>
<p>\begin{array}{c|cccccc}
n\setminus k&1&2&3&4&5&6\\
\hline
1&1\\
2&1&1\\
3&-1&3&1\\
4&2&-1&6&1\\
5&-6&0&5&10&1\\
6&24&4&-15&25&15&1\\
\end{array}</p>
<p>The values can be found in OEIS as <em><a href="http://oeis.org/A008296" rel="noreferrer">A008296</a></em>. They are called <em>Lehmer-Comtet</em> numbers and were entered in the archive by N. J. A. Sloane with a reference precisely to the example we see here.</p>
</blockquote>
<p><strong>Example: $n=2$</strong></p>
<p>Let's look at a small example. Letting $n=2$ we obtain from (1) and the table with $b_{n,j}$:
\begin{align*}
D_x^2x^x&=x^x\sum_{i=0}^2\binom{2}{i}(\ln(x))^i\sum_{j=0}^{2-i}b_{2-i,2-i-j}x^{-j}\\
&=x^x\left(\binom{2}{0}\sum_{j=0}^2b_{2,2-j}x^{-j}+\binom{2}{1}\ln(x)\sum_{j=0}^1b_{1,1-j}x^{-j}\right.\\
&\qquad\qquad\left.+\binom{2}{2}\left(\ln(x)\right)^2\sum_{j=0}^0b_{0,0-j}x^{-j}\right)\\
&=x^x\left(\left(b_{2,2}+b_{2,1}\frac{1}{x}+b_{2,0}\frac{1}{x^2}\right)+2\ln(x)\left(b_{1,1}+b_{1,0}\frac{1}{x}\right)
+(\ln(x))^2b_{0,0}\right)\\
&=x^x\left(1+\frac{1}{x}+2\ln(x)+\left(\ln(x)\right)^2\right)
\end{align*}
in accordance with the result of Wolfram Alpha.</p>
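<p>For readers who want a machine check, here is a short <code>sympy</code> sketch (my own code, not Comtet's; the variable names are ad hoc) that builds the triangle <code>b[n][j]</code> from the recurrence and verifies formula (1) against direct differentiation for small $n$:</p>
<pre><code>from sympy import symbols, ln, diff, simplify, binomial

# Lehmer-Comtet numbers from the recurrence
# b_{n+1,j} = (j-n) b_{n,j} + b_{n,j-1} + n b_{n-1,j-1}, with b_{0,0} = b_{1,1} = 1
N = 6
b = [[0] * (N + 2) for _ in range(N + 2)]
b[0][0] = 1
b[1][1] = 1
for n in range(1, N):
    for j in range(1, n + 2):
        b[n + 1][j] = (j - n) * b[n][j] + b[n][j - 1] + n * b[n - 1][j - 1]

x = symbols('x', positive=True)
for n in range(1, 5):
    rhs = x**x * sum(binomial(n, i) * ln(x)**i
                     * sum(b[n - i][n - i - j] * x**(-j) for j in range(n - i + 1))
                     for i in range(n + 1))
    assert simplify(rhs - diff(x**x, x, n)) == 0   # formula (1) holds
</code></pre>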
<blockquote>
<p><strong>Note:</strong> A detailed answer is provided in <em><a href="https://math.stackexchange.com/questions/1607520/how-should-i-calculate-the-nth-derivative-of-fx-xx/1961167#1961167">this MSE post</a></em>.</p>
</blockquote>
<hr>
<blockquote>
<p>Another variation of the $n$-th derivative of $x^x$ is stated as <em>Remark 7.4</em> in section 7.8 of <em><a href="http://www.math.wvu.edu/~gould/Vol.5.PDF" rel="noreferrer">Combinatorial Identities, Vol. 5</a></em> by H.W. Gould.</p>
<p>Let $\{F_k(x)\}_{k=0}^\infty$ be a sequence with $F_0(x)=1$, $F_1(x)=0$, and
\begin{align*}
F_k(x)=-D_xF_{k-1}(x)+\frac{k-1}{x}F_{k-2}(x)\qquad k\geq 2.
\end{align*}
Assuming $x\neq 0$ the following holds
\begin{align*}
D_x^n(x^x)=x^x\sum_{k=0}^n(-1)^k\binom{n}{k}(1+\ln x)^{n-k}F_k(x)
\end{align*}</p>
</blockquote>
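<p>The same kind of <code>sympy</code> sketch (again my own) confirms this variant for small $n$; note that the check pins down the initial value $F_1(x)=0$ stated above:</p>
<pre><code>from sympy import symbols, ln, diff, simplify, binomial, Rational

x = symbols('x', positive=True)
F = [1, 0]                                    # F_0 = 1, F_1 = 0
for k in range(2, 6):
    F.append(simplify(-diff(F[k - 1], x) + Rational(k - 1) / x * F[k - 2]))

for n in range(1, 5):
    rhs = x**x * sum((-1)**k * binomial(n, k) * (1 + ln(x))**(n - k) * F[k]
                     for k in range(n + 1))
    assert simplify(rhs - diff(x**x, x, n)) == 0
</code></pre>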
| <p>Using logarithmic differentiation seems like a nice route:
$$
y = x^x \implies
\ln(y) = x \ln(x) \implies\\
\frac{y'}{y} = (\ln(x) + 1) \implies \\
y' = (\ln(x) + 1)y
$$
Differentiating further, we get:
$$
y'' = \frac 1x y + (\ln(x) + 1)y' = \frac 1x y + (\ln(x) + 1)^2 y = \\
\left[ \frac 1x + (\ln(x) + 1)^2\right]y\\
y''' = \left[ -\frac 1{x^2} + 2\frac 1x (\ln(x) + 1)\right]y + \left[ \frac 1x + (\ln(x) + 1)^2\right]y' =\\
\left[ -\frac 1{x^2} + 2\frac 1x (\ln(x) + 1)\right]y + \left[ \frac 1x(\ln(x) + 1) + (\ln(x) + 1)^3\right]y = \\
\left[ -\frac 1{x^2} + 3\frac 1x (\ln(x) + 1) + (\ln(x) + 1)^3\right]y
$$
Perhaps this could explain your recurrence.</p>
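<p>A small <code>sympy</code> check of the last bracket (my own sketch):</p>
<pre><code>from sympy import symbols, ln, diff, simplify

x = symbols('x', positive=True)
y = x**x
a = ln(x) + 1
claimed = (-1/x**2 + 3*a/x + a**3) * y    # the bracket computed above for y'''
assert simplify(claimed - diff(y, x, 3)) == 0
</code></pre>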
|
probability | <blockquote>
<p>Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show that
$$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$
when $X$ has: a) a discrete distribution, b) a continuous distribution.</p>
</blockquote>
<p>I noted that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, we have $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$, although I really have no idea how useful it is to integrate that.</p>
| <p>For <strong>every</strong> nonnegative random variable <span class="math-container">$X$</span>, whether discrete or continuous or a mix of these,
<span class="math-container">$$
X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt,
$$</span>
hence, by applying <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem#Tonelli%27s_theorem_for_non-negative_measurable_functions" rel="noreferrer">Tonelli's Theorem</a>,</p>
<blockquote>
<p><span class="math-container">$$
\mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt.
$$</span></p>
</blockquote>
<hr />
<p>Likewise, for every <span class="math-container">$p>0$</span>, <span class="math-container">$$
X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt,
$$</span>
hence</p>
<blockquote>
<p><span class="math-container">$$
\mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt.
$$</span></p>
</blockquote>
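<p>A quick numerical illustration of both identities (a <code>numpy</code> sketch of my own, using an exponential $X$ with mean $2$, so $\mathrm E(X)=2$ and $\mathrm E(X^2)=8$; the integrals are truncated where the survival function is negligible):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.exponential(scale=2.0, size=10**6))   # nonnegative sample

t = np.linspace(0.0, 60.0, 6001)
survival = 1.0 - np.searchsorted(x, t, side='right') / x.size   # ~ P(X > t)

print(x.mean(), np.trapz(survival, t))               # E(X) vs integral of P(X>t), both ~ 2
print((x**2).mean(), np.trapz(2 * t * survival, t))  # E(X^2) vs integral of 2t P(X>t), both ~ 8
</code></pre>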
| <p>Copied from <a href="https://stats.stackexchange.com/questions/18438/does-a-univariate-random-variables-mean-always-equal-the-integral-of-its-quanti">Cross Validated / stats.stackexchange</a>:</p>
<p><img src="https://i.sstatic.net/mSb6a.png" alt="enter image description here"></p>
<p>where $S(t)$ is the <a href="http://en.wikipedia.org/wiki/Survival_analysis#Quantities_derived_from_the_survival_distribution" rel="noreferrer">survival function</a> equal to $1- F(t)$. The two areas are clearly identical.</p>
|
linear-algebra | <p>First of all, I am very comfortable with the tensor product of vector spaces. I am also very familiar with the well-known generalizations, in particular the theory of monoidal categories. I have gained quite some intuition for tensor products and can work with them. Therefore, my question is not about the definition of tensor products, nor is it about its properties. It is rather about the mental images. My intuition for tensor products was never really <strong>geometric</strong>. Well, except for the tensor product of commutative algebras, which corresponds to the fiber product of the corresponding affine schemes. But let's just stick to real vector spaces here, for which I have some geometric intuition, for example from classical analytic geometry. </p>
<p>The direct product of two (or more) vector spaces is quite easy to imagine: There are two (or more) "directions" or "dimensions" in which we "insert" the vectors of the individual vector spaces. For example, the direct product of a line with a plane is a three-dimensional space.</p>
<p>The exterior algebra of a vector space consists of "blades", as is nicely explained in the <a href="http://en.wikipedia.org/wiki/Exterior_algebra" rel="noreferrer">Wikipedia article</a>.</p>
<p>Now what about the tensor product of two finite-dimensional real vector spaces $V,W$? Of course $V \otimes W$ is a direct product of $\dim(V)$ copies of $W$, but this description is not intrinsic, and also it doesn't really incorporate the symmetry $V \otimes W \cong W \otimes V$. How can we describe $V \otimes W$ geometrically in terms of $V$ and $W$? This description should be intrinsic and symmetric.</p>
<p>Note that <a href="https://math.stackexchange.com/questions/115630">SE/115630</a> basically asked the same, but received no actual answer. The answer given at <a href="https://math.stackexchange.com/questions/309838">SE/309838</a> discusses where tensor products are used in differential geometry for more abstract notions such as tensor fields and tensor bundles, but this doesn't answer the question either. (Even if my question gets closed as a duplicate, then I hope that the other questions receive more attention and answers.)</p>
<p>More generally, I would like to ask for a geometric picture of the tensor product of two vector bundles on nice topological spaces. For example, tensoring with a line bundle is some kind of twisting, but this is still somewhat vague. For example, consider the Möbius strip on the circle $S^1$, and pull it back to the torus $S^1 \times S^1$ along the first projection. Do the same with the second projection, and then tensor both. We get a line bundle on the torus, okay, but what does it look like geometrically?</p>
<p>Perhaps the following related question is easier to answer: Assume we have a geometric understanding of two linear maps $f : \mathbb{R}^n \to \mathbb{R}^m$, $g : \mathbb{R}^{n'} \to \mathbb{R}^{m'}$. Then, how can we imagine their tensor product $f \otimes g : \mathbb{R}^n \otimes \mathbb{R}^{n'} \to \mathbb{R}^m \otimes \mathbb{R}^{m'}$ or the corresponding linear map $\mathbb{R}^{n n'} \to \mathbb{R}^{m m'}$ geometrically? This is connected to the question about vector bundles via their cocycle description.</p>
| <p>Well, this may not qualify as "geometric intuition for the tensor product", but I can offer some insight into the tensor product of line bundles.</p>
<p>A line bundle is a very simple thing -- all that you can "do" with a line is flip it over, which means that in some basic sense, the Möbius strip is the only really nontrivial line bundle. If you want to understand a line bundle, all you need to understand is where the Möbius strips are.</p>
<p>More precisely, if $X$ is a line bundle over a base space $B$, and $C$ is a closed curve in $B$, then the preimage of $C$ in $X$ is a line bundle over a circle, and is therefore either a cylinder or a Möbius strip. Thus, a line bundle defines a function
$$
\varphi\colon \;\pi_1(B)\; \to \;\{-1,+1\}
$$
where $\varphi$ maps a loop to $-1$ if its preimage is a Möbius strip, and maps a loop to $+1$ if its preimage is a cylinder.</p>
<p>It's not too hard to see that $\varphi$ is actually a homomorphism, where $\{-1,+1\}$ forms a group under multiplication. This homomorphism completely determines the line bundle, and there are no restrictions on the function $\varphi$ beyond the fact that it must be a homomorphism. This makes it easy to classify line bundles on a given space.</p>
<p>Now, if $\varphi$ and $\psi$ are the homomorphisms corresponding to two line bundles, then the tensor product of the bundles corresponds to the <em>algebraic product of $\varphi$ and $\psi$</em>, i.e. the homomorphism $\varphi\psi$ defined by
$$
(\varphi\psi)(\alpha) \;=\; \varphi(\alpha)\,\psi(\alpha).
$$
Thus, the tensor product of two bundles only "flips" the line along the curve $C$ if exactly one of $\varphi$ and $\psi$ flips the line (since $-1\times+1 = -1$).</p>
<p>In the example you give involving the torus, one of the pullbacks flips the line as you go around in the longitudinal direction, and the other flips the line as you around in the meridional direction:</p>
<p><img src="https://i.sstatic.net/iEOgb.png" alt="enter image description here"> <img src="https://i.sstatic.net/SKGy1.png" alt="enter image description here"></p>
<p>Therefore, the tensor product will flip the line when you go around in <em>either</em> direction:</p>
<p><img src="https://i.sstatic.net/tmKVQ.png" alt="enter image description here"></p>
<p>So this gives a geometric picture of the tensor product in this case.</p>
<p>Incidentally, it turns out that the following things are all really the same:</p>
<ol>
<li><p>Line bundles over a space $B$</p></li>
<li><p>Homomorphisms from $\pi_1(B)$ to $\mathbb{Z}/2$.</p></li>
<li><p>Elements of $H^1(B,\mathbb{Z}/2)$.</p></li>
</ol>
<p>In particular, every line bundle corresponds to an element of $H^1(B,\mathbb{Z}/2)$. This is called the <a href="https://en.wikipedia.org/wiki/Stiefel%E2%80%93Whitney_class" rel="noreferrer">Stiefel-Whitney class</a> for the line bundle, and is a simple example of a <a href="https://en.wikipedia.org/wiki/Characteristic_class" rel="noreferrer">characteristic class</a>.</p>
<p><strong>Edit:</strong> As Martin Brandenburg points out, the above classification of line bundles does not work for arbitrary spaces $B$, but does work in the case where $B$ is a CW complex.</p>
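<p>If it helps to see the bookkeeping, here is a tiny Python sketch (purely illustrative, my own encoding): a line bundle on the torus is recorded as the pair of signs its homomorphism assigns to the two generating loops of $\pi_1(T^2)\cong\mathbb{Z}^2$, and the tensor product multiplies the signs loop by loop.</p>
<pre><code>def tensor(phi, psi):
    # pointwise product of two homomorphisms pi_1 -> {+1, -1},
    # recorded by their values on the generating loops
    return tuple(a * b for a, b in zip(phi, psi))

pullback_1 = (-1, +1)   # flips along the longitudinal loop only
pullback_2 = (+1, -1)   # flips along the meridional loop only
print(tensor(pullback_1, pullback_2))   # (-1, -1): flips in both directions
</code></pre>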
| <p>Good question. My personal feeling is that we gain true geometric intuition of vector spaces only once norms/inner products/metrics are introduced. Thus, it probably makes sense to consider tensor products in the category of, say, Hilbert spaces (maybe finite-dimensional ones at first). My geometric intuition is still mute at this point, but I know that (for completed tensor products) we have an isometric isomorphism
$$
L^2(Z_1) \otimes L^2(Z_2) \cong L^2(Z_1 \times Z_2)
$$<br>
where $Z_i$'s are measure spaces. In the finite-dimensional setting one, of course, just uses counting measures on finite sets. From this point, one can at least rely upon analytic intuition for the tensor product (Fubini theorem and computation of double integrals as iterated integrals, etc.).</p>
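<p>For finite $Z_i$ with counting measures, the isomorphism sends an elementary tensor to a Kronecker product, and one can check numerically that inner products behave as they should, $\langle u\otimes v, w\otimes z\rangle = \langle u,w\rangle\langle v,z\rangle$; a small <code>numpy</code> sketch (my own illustration):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
u, w = rng.normal(size=4), rng.normal(size=4)   # elements of L^2(Z_1), |Z_1| = 4
v, z = rng.normal(size=3), rng.normal(size=3)   # elements of L^2(Z_2), |Z_2| = 3

lhs = np.dot(np.kron(u, v), np.kron(w, z))      # inner product in L^2(Z_1 x Z_2)
rhs = np.dot(u, w) * np.dot(v, z)               # product of the two inner products
assert np.isclose(lhs, rhs)
</code></pre>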
|
combinatorics | <p>Suppose a biased coin (probability of head being $p$) was flipped $n$ times. I would like to find the probability that the length of the longest run of heads, say $\ell_n$, exceeds a given number $m$, i.e. $\mathbb{P}(\ell_n > m)$. </p>
<p>It suffices to find the probability that the length of some run of heads exceeds $m$. I was trying to approach the problem by fixing a run of $m+1$ heads and counting the number of such configurations, but did not get anywhere.</p>
<p>It is easy to simulate it:</p>
<p><img src="https://i.sstatic.net/jPJxb.png" alt="Distribution of the length of the longest run of heads in a sequence of 1000 Bernoulli trials with 60% chance of getting a head"></p>
<p>I would appreciate any advice on how to analytically solve this problem, i.e. express an answer in terms of a sum or an integral.</p>
<p>Thank you.</p>
| <p>This problem was solved using generating functions by de Moivre in 1738.
The formula you want is
$$\mathbb{P}(\ell_n \geq m)=\sum_{j=1}^{\lfloor n/m\rfloor} (-1)^{j+1}\left(p+\left({n-jm+1\over j}\right)(1-p)\right){n-jm\choose j-1}p^{jm}(1-p)^{j-1}.$$</p>
<p><strong>References</strong></p>
<ol>
<li><p>Section 14.1 <em>Problems and Snapshots from the World of Probability</em> by Blom, Holst, and Sandell</p></li>
<li><p>Chapter V, Section 3 <em>Introduction to Mathematical Probability</em> by Uspensky</p></li>
<li><p>Section 22.6 <em>A History of Probability and Statistics and Their Applications before 1750</em> by Hald gives solutions by de Moivre (1738), Simpson (1740), Laplace (1812), and Todhunter (1865) </p></li>
</ol>
<hr>
<p><strong>Added:</strong>
The combinatorial class of all coin toss sequences without a run of $ m $ heads
in a row is
$$\sum_{k\geq 0}(\mbox{seq}_{< m }(H)\,T)^k \,\mbox{seq}_{< m }(H), $$
with corresponding counting generating function
$$H(h,t)={\sum_{0\leq j< m }h^j\over 1-(\sum_{0\leq j< m }h^j)t}={1-h^ m \over 1-h-(1-h^ m )t}.$$
We introduce probability by replacing $h$ with $ps$ and $t$ by $qs$,
where $q=1-p$:
$$G(s)={1-p^ m s^ m \over1-s+p^ m s^{ m +1}q}.$$
The coefficient of $s^n$ in $G(s)$ is $\mathbb{P}(\ell_n<m).$</p>
<p>The function $1/(1-s(1-p^ m s^ m q ))$ can be rewritten as
\begin{eqnarray*}
\sum_{k\geq 0}s^k(1-p^ m s^ m q )^k
&=&\sum_{k\geq 0}\sum_{j\geq 0} {k\choose j} (-p^ m q)^js^{k+j m }\\
%&=&\sum_{j\geq 0}\sum_{k\geq 0} {k\choose j} (-p^ m q )^js^{k+j m }.
\end{eqnarray*}
The coefficient of $s^n$ in this function is $c(n)=\sum_{j\geq 0}{n-j m \choose j}(-p^ m q)^j$. Therefore the coefficient of $s^n$ in $G(s)$ is $c(n)-p^ m c(n- m ).$
Finally,
\begin{eqnarray*}
\mathbb{P}(\ell_n\geq m)&=&1-\mathbb{P}(\ell_n<m)\\[8pt]
&=&p^ m c(n- m )+1-c(n)\\[8pt]
&=&p^ m \sum_{j\geq 0}(-1)^j{n-(j+1) m \choose j}(p^ m q)^j+\sum_{j\geq 1}(-1)^{j+1}{n-j m \choose j}(p^ m q)^j\\[8pt]
&=&p^ m \sum_{j\geq 1}(-1)^{j-1}{n-j m \choose j-1}(p^m q)^{j-1}+\sum_{j\geq 1}(-1)^{j+1}{n-j m \choose j}(p^mq )^j\\[8pt]
&=&\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}+{n-j m \choose j}q\right]p^{ jm } q^{j-1}\\[8pt]
&=&\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}p+{n-j m \choose j-1}q+{n-j m \choose j}q\right]p^{ jm } q^{j-1}\\[8pt]
&=&\sum_{j\geq 1}(-1)^{j+1} \left[{n-j m \choose j-1}p+{n-j m +1\choose j}q \right]p^{ jm} q^{j-1}\\[8pt]
&=&\sum_{j\geq 1}(-1)^{j+1} \left[p+{n-j m +1\over j}\, q\right] {n-j m \choose j-1}\,p^{ jm} q^{j-1}.
\end{eqnarray*}</p>
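<p>For the record, a short Python sketch (my own; the function names are ad hoc) implementing the closed form and comparing it with a Monte Carlo estimate:</p>
<pre><code>import numpy as np
from math import comb

def p_longest_run_ge(n, m, p):
    """de Moivre's closed form for P(l_n >= m)."""
    q = 1.0 - p
    return sum((-1)**(j + 1) * (p + (n - j*m + 1) / j * q)
               * comb(n - j*m, j - 1) * p**(j*m) * q**(j - 1)
               for j in range(1, n // m + 1))

def longest_run(row):
    best = run = 0
    for h in row:
        run = run + 1 if h else 0
        best = max(best, run)
    return best

rng = np.random.default_rng(0)
n, m, p = 50, 5, 0.6
heads = rng.random((20000, n)) < p
estimate = np.mean([longest_run(row) >= m for row in heads])
print(p_longest_run_ge(n, m, p), estimate)   # should agree to Monte Carlo accuracy
</code></pre>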
| <p>Define a Markov chain with states $0, 1, \ldots, m$ so that with probability $1$ the chain moves from $m$ to $m$, and for $i<m$ with probability $p$ the chain moves from $i$ to $i+1$ and with probability $1-p$ the chain moves from $i$ to $0$. If you look at the $n$th power of the transition matrix for this chain, you can read off the probability that in $n$ flips you have a sequence of at least $m$ consecutive heads.</p>
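<p>A minimal <code>numpy</code> sketch of this idea (my own illustration; state $i<m$ records the length of the current run of heads, and state $m$ is absorbing):</p>
<pre><code>import numpy as np

def p_run_markov(n, m, p):
    """P(a run of at least m heads in n flips) via the n-th power of the chain."""
    P = np.zeros((m + 1, m + 1))
    for i in range(m):
        P[i, i + 1] = p       # head: the current run grows by one
        P[i, 0] = 1.0 - p     # tail: the current run resets
    P[m, m] = 1.0             # a run of m heads has occurred; stay there
    return np.linalg.matrix_power(P, n)[0, m]

print(p_run_markov(50, 5, 0.6))   # matches de Moivre's formula above
</code></pre>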
|
probability | <p>Assume I have an always-positive random variable $X$, $X \in \mathbb{R}$, $X > 0$. I am interested in the difference between the following two expectations:</p>
<ol>
<li>$E \left[ \ln X \right]$</li>
<li>$\ln E \left[ X \right]$</li>
</ol>
<p>Is one maybe always a lower/upper bound of the other?</p>
<p>Many thanks in advance...</p>
| <p>Since the function logarithm is concave, <a href="http://en.wikipedia.org/wiki/Jensen%27s_inequality" rel="noreferrer">Jensen's inequality</a> shows that $E(\ln(X))\leqslant \ln E(X)$ and that equality occurs iff $X$ is almost surely constant.</p>
<hr>
<p><strong>Edit</strong> (This is to expand on a remark made by Shai.)</p>
<p>Shai's answer explains how to prove $E(\ln(X))\leqslant \ln E(X)$ using only AM-GM inequality and the strong law of large numbers. These very tools yield the following refinement (adapted from the paper <a href="http://jmi.ele-math.com/03-21" rel="noreferrer">Self-improvement of the inequality between arithmetic and geometric means</a> by J. M. Aldaz). </p>
<p>Apply the AM-GM inequality to the <em>square roots</em> of an i.i.d. sequence of positive random variables $(X_i)$, that is,
$$
\sqrt[n]{\sqrt{X_1}\cdots\sqrt{X_n}}\leqslant\frac1n(\sqrt{X_1}+\cdots+\sqrt{X_n}).
$$
In the limit $n\to\infty$, the strong law of large numbers yields
$$
\exp(E(\ln\sqrt{X}))\leqslant E(\sqrt{X}),
$$
that is,
$$
E(\ln X)\leqslant 2\ln E(\sqrt{X})=\ln (E(X)-\mbox{var}(\sqrt{X})).
$$
Finally:</p>
<blockquote>
<p>For every positive integrable $X$,
$$
E(\ln X)\leqslant [\ln E(X)]-\delta(X)\quad\mbox{where}\ \delta(X)=\ln[E(X)/E(\sqrt{X})^2].
$$</p>
</blockquote>
<p>The correction term $\delta(X)$ is nonnegative for every $X$, and $\delta(X)=0$ iff $X$ is almost surely constant.</p>
<p>Naturally, this also obtains directly through Jensen's inequality applied to $\sqrt{X}$. </p>
<p>And this result is just a special case of the fact that, for every $s$ in $(0,1)$,
$$
E(\ln X)\leqslant [\ln E(X)]-\delta_s(X)\quad\mbox{where}\ \delta_s(X)=\ln[E(X)/E(X^s)^{1/s}].
$$
The quantity $\delta_s(X)$ is a nonincreasing function of $s$ hence the upper bound $[\ln E(X)]-\delta_s(X)$ is better and better when $s$ decreases to $0$. For every $X$, $\delta_1(X)=0$, $\delta_{1/2}(X)=\delta(X)$ and $\delta_0(X)=[\ln E(X)]-E(\ln X)$.</p>
<p>The interesting point in all this, if any, is that one has <em>quantified</em> the discrepancy between $E(\ln X)$ and $\ln E(X)$ and, simultaneously, recovered the fact that $E(\ln X)=\ln E(X)$ iff $X$ is almost surely constant.</p>
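<p>A quick numerical illustration (my own <code>numpy</code> sketch) with a lognormal $X$, for which $E(\ln X)=0$, $\ln E(X)=\tfrac12$ and $\delta(X)=\tfrac14$:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10**6)   # a positive X

e_ln  = np.log(x).mean()                          # E(ln X)   ~ 0
ln_e  = np.log(x.mean())                          # ln E(X)   ~ 0.5
delta = np.log(x.mean() / np.sqrt(x).mean()**2)   # delta(X)  ~ 0.25

print(e_ln, ln_e - delta, ln_e)   # increasing, as the refinement predicts
</code></pre>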
| <p>To add on Didier's answer, it is instructive to note that the inequality ${\rm E}(\ln X) \le \ln {\rm E}(X)$
can be seen as a consequence of the <a href="http://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means" rel="noreferrer">AM-GM inequality</a> combined with the <a href="http://en.wikipedia.org/wiki/Law_of_large_numbers" rel="noreferrer">strong law of large numbers</a>, upon writing
the AM-GM inequality
$$
\sqrt[n]{{X_1 \cdots X_n }} \le \frac{{X_1 + \cdots + X_n }}{n}
$$
as
$$
\exp \bigg(\frac{{\ln X_1 + \cdots + \ln X_n }}{n}\bigg) \le \frac{{X_1 + \cdots + X_n }}{n},
$$
and letting $n \to \infty$.</p>
<p>EDIT: For completeness, let me note that ${\rm E}[\ln X]$ might be equal to $-\infty$. For example, if $X$ has density function
$$
f(x) = \frac{{\ln a}}{{x\ln ^2 x}},\;\;0 < x < \frac{1}{a},
$$
where $a>1$ (note that $\int f = 1$), then
$$
{\rm E}[\ln X] =
\int_0^{1/a} {\frac{{\ln a}}{{x\ln x}}} \,{\rm d}x = -\infty.
$$</p>
|
game-theory | <p>Have any studies been done that demonstrate people (not game theorists) actually using mixed Nash equilibrium as their strategy in a game? </p>
| <p>According to <a href="http://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf">this article about mixed equilibrium strategies</a>, I think penalty kicks between two soccer teams use mixed Nash equilibrium strategies.</p>
| <p>There have been lots of studies on this sort of thing, with different results; it depends a lot on cultural context. You might look at <a href="http://books.google.ca/books?id=0NKfbdyzvJAC&printsec=frontcover&dq=%22A%20beautiful%20math%22%20siegfried&hl=en&ei=AOylTcDFPJLmsQOF9bn6DA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC8Q6AEwAA#v=onepage&q&f=false" rel="nofollow">"A Beautiful Math"</a> by Tom Siegfried.</p>
|