tag | question_body | accepted_answer | second_answer
---|---|---|---|
probability | <p>The question is quite straightforward... I'm not very good at this subject, but I need to understand it at a good level.</p>
| <p>For probability theory as probability theory (rather than normed measure theory à la Kolmogorov) I'm quite partial to <a href="http://rads.stackoverflow.com/amzn/click/0521592712">Jaynes's Probability Theory: The Logic of Science</a>. It's fantastic at building intuition behind the rules and operations. That said, this has the downside of creating fanatics who think they know all there is to know about probability theory.</p>
| <p><em><a href="http://rads.stackoverflow.com/amzn/click/0131856626">A First Course in Probability</a></em> by Sheldon Ross is good.</p>
|
linear-algebra | <p>Having picked up a rudimentary understanding of tensors from reading mechanics papers and Wikipedia, I tend to think of rank-2 tensors simply as square matrices (along with appropriate transformation rules). Certainly, if the distinction between vectors and dual vectors is ignored, a rank 2 tensor $T$ seems to be simply a multilinear map $V \times V \rightarrow \mathbb{R}$, and (I think) any such map can be represented by a matrix $\mathbf{A}$ using the mapping $(\mathbf{v},\mathbf{w}) \mapsto \mathbf{v}^T\mathbf{Aw}$.</p>
<p>My question is this: Is this a reasonable way of thinking about things, at least as long as you're working in $\mathbb{R}^n$? Are there any obvious problems or subtle misunderstandings that this naive approach can cause? Does it break down when you deal with something other than $\mathbb{R}^n$? In short, is it "morally wrong"?</p>
| <p>It's not misleading as long as you change your notion of equivalence. When a matrix represents a linear transformation $V \to V$, the correct notion of equivalence is similarity: $M \simeq B^{-1} MB$ where $B$ is invertible. When a matrix represents a bilinear form $V \times V \to \mathbb{R}$, the correct notion of equivalence is <em>congruence</em>: $M \simeq B^TMB$ where $B$ is invertible. As long as you keep this distinction in mind, you're fine.</p>
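<p>A quick numerical sanity check of the congruence rule, sketched in Python with NumPy (the seed, the dimension, and the test vectors are arbitrary choices):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))   # matrix of a bilinear form T(v, w) = v^T M w
B = rng.standard_normal((3, 3))   # change of basis; invertible with probability 1
v, w = rng.standard_normal((2, 3))

# as a bilinear form, M transforms by congruence: T(Bv, Bw) = v^T (B^T M B) w
print(np.isclose((B @ v) @ M @ (B @ w), v @ (B.T @ M @ B) @ w))  # True
</code></pre>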
| <p>You're absolutely right.</p>
<p>Maybe someone will find useful a couple of remarks telling the same story in a coordinate-free way:</p>
<ul>
<li><p>What happens here is indeed an identification of the space with its dual: so a bilinear map $T\colon V\times V\to\mathbb{R}$ is rewritten as $V\times V^*\to\mathbb{R}$ — which is exactly the same thing as a linear operator $A\colon V\to V$;</p></li>
<li><p>An identification of $V$ and $V^*$ is exactly the same thing as a scalar product on $V$, and using this scalar product one can write $T(v,w)=(v,Aw)$;</p></li>
<li><p>So an orthogonal change of basis preserves this identification — in terms of Qiaochu Yuan's answer one can see this from the fact that for an orthogonal matrix $B$ one has $B^T=B^{-1}$ (moral of the story: if you have a canonical scalar product, there is no difference between $T$ and $A$ whatsoever; and if you don't have one — see Qiaochu Yuan's answer.)</p></li>
</ul>
|
differentiation | <p>I happened to stumble upon the following matrix:
$$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$</p>
<p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then:
$$ P(A)=\begin{bmatrix}
P(a) & P'(a) \\
0 & P(a)
\end{bmatrix}$$</p>
<p>Where $P'(a)$ is the derivative evaluated at $a$.</p>
<p>Furthermore, I tried extending this to other matrix functions, for example the matrix exponential, and Wolfram Alpha tells me:
$$ \exp(A)=\begin{bmatrix}
e^a & e^a \\
0 & e^a
\end{bmatrix}$$
and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p>
<p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get:
$$ P(A)=\begin{bmatrix}
\frac{1}{a} & -\frac{1}{a^2} \\
0 & \frac{1}{a}
\end{bmatrix}$$
And since $P'(a)=-\frac{1}{a^2}$, the pattern still holds!</p>
<p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p>
<p>I have two questions:</p>
<ol>
<li><p>Why is this happening?</p></li>
<li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li>
</ol>
| <p>If $$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$
then by induction you can prove that
$$ A^n = \begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} \tag 1
$$
for $n \ge 1 $. If $f$ can be developed into a power series
$$
f(z) = \sum_{n=0}^\infty c_n z^n
$$
then
$$
f'(z) = \sum_{n=1}^\infty n c_n z^{n-1}
$$
and it follows that
$$
f(A) = \sum_{n=0}^\infty c_n A^n = c_0 I + \sum_{n=1}^\infty c_n
\begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} = \begin{bmatrix}
f(a) & f'(a) \\
0 & f(a)
\end{bmatrix} \tag 2
$$
From $(1)$ and
$$
A^{-1} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}
$$
one gets
$$
A^{-n} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}^n =
(-a^{-2})^{n} \begin{bmatrix}
-a & 1 \\
0 & -a
\end{bmatrix}^n \\ =
(-1)^n a^{-2n} \begin{bmatrix}
(-a)^n & n (-a)^{n-1} \\
0 & (-a)^n
\end{bmatrix} =
\begin{bmatrix}
a^{-n} & -n a^{-n-1} \\
0 & a^{-n}
\end{bmatrix}
$$
which means that $(1)$ holds for negative exponents as well.
As a consequence, $(2)$ can be generalized to functions
admitting a Laurent series representation:
$$
f(z) = \sum_{n=-\infty}^\infty c_n z^n
$$</p>
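<p>A quick numerical sanity check of $(2)$ in Python (assuming SciPy's <code>expm</code> for the matrix exponential; the value of $a$ is an arbitrary choice):</p>
<pre><code>import numpy as np
from scipy.linalg import expm

a = 1.7
A = np.array([[a, 1.0], [0.0, a]])

# f = exp: f(A) should be [[e^a, e^a], [0, e^a]], since (e^x)' = e^x
print(np.allclose(expm(A), np.exp(a) * np.array([[1.0, 1.0], [0.0, 1.0]])))

# f = 1/x (a Laurent series): A^{-1} should be [[1/a, -1/a^2], [0, 1/a]]
print(np.allclose(np.linalg.inv(A), np.array([[1/a, -1/a**2], [0.0, 1/a]])))
</code></pre>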
| <p>It's a general statement: if <span class="math-container">$J$</span> is a Jordan block and <span class="math-container">$f$</span> is a sufficiently differentiable function, then
<span class="math-container">\begin{equation}
f(J)=\left(\begin{array}{ccccc}
f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & \frac{f''(\lambda_{0})}{2!} & \ldots & \frac{f^{(n-1)}(\lambda_{0})}{(n-1)!}\\
0 & f(\lambda_{0}) & \frac{f'(\lambda_{0})}{1!} & & \vdots\\
0 & 0 & f(\lambda_{0}) & \ddots & \frac{f''(\lambda_{0})}{2!}\\
\vdots & \vdots & \vdots & \ddots & \frac{f'(\lambda_{0})}{1!}\\
0 & 0 & 0 & \ldots & f(\lambda_{0})
\end{array}\right)
\end{equation}</span>
where
<span class="math-container">\begin{equation}
J=\left(\begin{array}{ccccc}
\lambda_{0} & 1 & 0 & 0\\
0 & \lambda_{0} & 1& 0\\
0 & 0 & \ddots & 1\\
0 & 0 & 0 & \lambda_{0}
\end{array}\right)
\end{equation}</span>
This statement can be demonstrated in various ways (none of them short), but it's a fairly well-known formula. I think you can find it in various books, such as Horn and Johnson's <em>Matrix Analysis</em>.</p>
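<p>A small SymPy check of the Toeplitz pattern on a <span class="math-container">$3\times 3$</span> Jordan block (the polynomial and the point <span class="math-container">$a=2$</span> are arbitrary choices):</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
P = x**5 - 3*x**2 + 7
a = sp.Integer(2)

J = sp.Matrix([[a, 1, 0],
               [0, a, 1],
               [0, 0, a]])

# evaluate P(J) directly from the coefficients of P
PJ = sp.zeros(3, 3)
for k, c in enumerate(reversed(sp.Poly(P, x).all_coeffs())):
    PJ += c * J**k

print(PJ)  # Matrix([[27, 68, 77], [0, 27, 68], [0, 0, 27]])
print(P.subs(x, a), P.diff(x).subs(x, a), P.diff(x, 2).subs(x, a) / 2)  # 27 68 77
</code></pre>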
|
game-theory | <p>Assume a tic-tac-toe board's state is stored in a matrix.
$$
S=\begin{bmatrix}
-1 & 0 & 1 \\
1 & -1 & 0 \\
1 & 0 & -1 \\
\end{bmatrix}
$$</p>
<p>Here, $X$ is mapped to $1$, $O$ is mapped to $-1$ and an empty state is mapped to zero, but <strong>any other numeric mapping will do</strong> if there is one more suitable for solving the problem. Is it possible to create some single expression involving the matrix $S$ which will indicate whether the board is in a winning state? For the above matrix, the expression should show a win for $O$. </p>
<p>I recognize that there are more direct programmatic approaches to this, so this is more of an academic question.</p>
<p>Edit: I have been asked what to do if the board shows two winners. You could either:</p>
<ol>
<li>Assume only valid board states. Since gameplay would stop once one side wins, it is not possible to have a board with two winners. </li>
<li>Alternatively (or equivalently?), your expression could arbitrarily pick a winner in a board that has two.</li>
</ol>
| <p>Sure, here's one way to do it with linear-algebra primitives. Define column vectors $e_1=(1,0,0)^T$, $e_2=(0,1,0)^T$, $e_3=(0,0,1)^T$, and a row vector $a=(1,1,1)$.</p>
<ul>
<li>You can detect if a row or column of $S$ is a winner by testing $a S e_i$ and $a S^T e_i$ for $\pm 3$. (Exercise: Which expression detects winning rows and which detects winning columns?)</li>
<li>You can detect if the main diagonal is a winner by testing the trace of $S$.</li>
<li>Finally, let $R$ be the matrix that permutes rows $1$ and $3$; then you can detect if the other diagonal is a winner by testing the trace of $RS$.</li>
</ul>
<p>In order to combine all eight tests into a single expression, you'll have to specify what you want to do in case of an over-determined matrix. For example, if one row shows a win for $+$ and another row shows a win for $-$, what's the desired behavior?</p>
<p><strong>Edit:</strong> Okay, assuming only valid board states, it's not too hard. We're just going to have to introduce some unusual notation. Define a slightly arbitrary function
$$
\max^*(a,b)=\begin{cases}
a& \text{if } |a| \geq |b|\\
b& \text{otherwise }
\end{cases}
$$
Then $\max^*$ is an associative operation, so we can extend it iteratively to any number of arguments, and the winner of the game is
$$w(S)=\left\lfloor\frac13\max^*\left(\max^*_i(a S e_i), \max^*_i(a S^T e_i), \mathrm{tr}(S),\mathrm{tr}(RS)\right)\right\rceil$$
where $\lfloor x\rceil$ is the round-towards-zero function, so that $w(S)=0$ means nobody wins.</p>
<p>(If $S$ is an invalid state in which both players "win", $w(S)$ will pick the first winner according to the order of expressions tested.)</p>
<p><strong>Edit 2:</strong> Here's a theoretical approach that technically uses only matrix multiplication on $S$... but then shifts all the work to a scalar.</p>
<p>Let $a=(1,3,3^2)$ and $b=(1,3^3, 3^6)^T=(1,27,729)^T$. Then $aSb$ is an integer which encodes $S$, and it has absolute value $\leq (3^9-1)/2=9841$. So there exists a polynomial $p$ of degree $\leq 3^9=19683$ such that $w(S)=p(aSb)$.</p>
<p>In fact, we can choose the even coefficients of $p$ to be zero. The odd coefficients are slightly harder to compute. :)</p>
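<p>For concreteness, here's a minimal Python sketch of the eight tests and the iterated $\max^*$ (one possible translation, not the only one):</p>
<pre><code>import numpy as np

def maxstar(a, b):
    # keep whichever argument is larger in absolute value (ties keep the first)
    return a if abs(a) >= abs(b) else b

def winner(S):
    ones = np.ones(3)
    R = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])  # permutes rows 1 and 3
    tests = [*(ones @ S), *(ones @ S.T), np.trace(S), np.trace(R @ S)]
    best = 0.0
    for t in tests:
        best = maxstar(best, t)
    return int(best / 3)  # int() rounds toward zero, like the bracket above

S = np.array([[-1, 0, 1], [1, -1, 0], [1, 0, -1]])
print(winner(S))  # -1: a win for O down the main diagonal
</code></pre>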
| <p>Just a comment on the fact that more obscure solutions may exist that are easier to compute. I was able to construct one for the $2\times 2$ tic-tac-toe. Let
\begin{align}
\mathbf{Z}_1 = \left[\begin{array}{cc}2.3049 & -2.2506 \\ -2.2310 & 2.2420 \end{array}\right]
\end{align}
and
\begin{align}
\mathbf{Z}_2 =\left[\begin{array}{cc} -0.2072 & 0.2190 \\ 0.3336 & -0.0792\end{array}\right]
\end{align}
Let the tic-tac-toe matrix be $\mathbf{Z}$ with entries $0$, $1$ or $-1$ in the same manner as defined in the question. Let
\begin{align}
\chi \triangleq \det(\mathbf{Z}_1+\mathbf{Z})|\det(\mathbf{Z}_2+\mathbf{Z})|
\end{align}
Then, unless $\mathbf{Z}$ specifies an illegal board where both sides have won, $\chi \geq 1.5 \iff $ user $1$ has won, $-1 < \chi < 1.5\iff$ the game has not ended yet, and $\chi \leq -1$ if user $-1$ has won. For example, for
\begin{align}
\mathbf{Z} = \left[\begin{array}{cc} 1 & 1 \\ -1 & 0\end{array}\right]
\end{align}
obviously user $1$ has won, and indeed we have $\chi = 2.5252$.</p>
<p>The above was found using a genetic algorithm and experimenting with different $\chi$ forms.</p>
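<p>A quick NumPy check of this construction, reproducing the value of $\chi$ quoted above:</p>
<pre><code>import numpy as np

Z1 = np.array([[2.3049, -2.2506], [-2.2310, 2.2420]])
Z2 = np.array([[-0.2072, 0.2190], [0.3336, -0.0792]])

def chi(Z):
    return np.linalg.det(Z1 + Z) * abs(np.linalg.det(Z2 + Z))

print(chi(np.array([[1, 1], [-1, 0]])))  # ~2.5252, a win for user 1
</code></pre>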
|
linear-algebra | <blockquote>
<p>Consider the basis <span class="math-container">$B=\left\{\begin{pmatrix} -1 \\ 1 \\0 \end{pmatrix}\begin{pmatrix} -1 \\ 0 \\1 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\1 \end{pmatrix} \right\}$</span> for <span class="math-container">$\mathbb{R}^3$</span>.</p>
<p>A) Find the change of basis matrix for converting from the standard basis to the basis B.</p>
</blockquote>
<p>I have never done anything like this and the only examples I can find online basically tell me how to do the change of basis for "change-of-coordinates matrix from B to C".</p>
<blockquote>
<p>B) Write the vector <span class="math-container">$\begin{pmatrix} 1 \\ 0 \\0 \end{pmatrix}$</span> in B-coordinates.</p>
</blockquote>
<p>Obviously I can't do this if I can't complete part A.</p>
<p>Can someone either give me a hint, or preferably guide me towards an example of this type of problem?</p>
<hr />
<p>The absolute only thing I can think to do is take an augmented matrix <span class="math-container">$[B E]$</span> (note - E in this case is the standard basis, because I don't know the correct notation) and row reduce until B is now the standard matrix. This is basically finding the inverse, so I doubt this is correct.</p>
| <p>Denote $E$ the canonical basis of $\mathbb{R}^3$. </p>
<p>A) These three column vectors define a $3\times 3$ matrix
$$P=\left(\matrix{-1&-1&1\\1&0&1\\0&1&1}\right)$$
which is the matrix of the linear map
$$
Id:(\mathbb{R}^3,B)\longrightarrow (\mathbb{R}^3,E).
$$
This means in particular that whenever you right multiply it by a column vector $(x_1,x_2,x_3)$ where $x_j$ are the coordinates of a vector $x=x_1B_1+x_2B_2+x_3B_3$ with the respect to the basis $B$, you obtain the coordinates of $x$ in the canonical basis $E$.</p>
<p>What you want is the matrix of
$$
Id:(\mathbb{R}^3,E)\longrightarrow (\mathbb{R}^3,B).
$$
That is $P^{-1}$, <strong>the inverse of the matrix above</strong>. This will transform, by right multiplication, the coordinates of a vector with respect to $E$ into its coordinates with respect to $B$. That's the change of basis matrix you need.</p>
<p>B) As explained above, you just have to <strong>right multiply</strong> the change of basis matrix $P^{-1}$ by this column vector.</p>
<p><strong>Check your answer:</strong> you should find</p>
<blockquote class="spoiler">
<p> $$P^{-1}=\left(\matrix{-1/3&2/3&-1/3\\-1/3&-1/3&2/3\\1/3&1/3&1/3} \right)$$
$$\left(\matrix{-1/3&2/3&-1/3\\-1/3&-1/3&2/3\\1/3&1/3&1/3} \right)\left(\matrix{1\\0\\0}\right)=\left(\matrix{-1/3\\-1/3\\1/3}\right).$$</p>
</blockquote>
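<p>A short numerical check with NumPy (solving <span class="math-container">$Px=e_1$</span> rather than forming <span class="math-container">$P^{-1}$</span> explicitly):</p>
<pre><code>import numpy as np

# columns of P are the B-basis vectors written in the standard basis
P = np.array([[-1, -1, 1],
              [ 1,  0, 1],
              [ 0,  1, 1]], dtype=float)

# B-coordinates of (1,0,0): solve P x = e1
x = np.linalg.solve(P, np.array([1.0, 0.0, 0.0]))
print(x)  # [-1/3, -1/3, 1/3], matching the spoiler above
</code></pre>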
| <p>By definition, the change of basis matrix contains the coordinates of the new basis vectors with respect to the old basis as its columns. So by definition <span class="math-container">$B$</span> is the change of basis matrix.
The key to the solution is the equation <span class="math-container">$v = Bv'$</span>, where <span class="math-container">$v$</span> has coordinates in the old basis and <span class="math-container">$v'$</span> has coordinates in the new basis (the new basis vectors are the columns of <span class="math-container">$B$</span>).
Suppose we know that in the old basis <span class="math-container">$v$</span> has coordinates <span class="math-container">$(1,0,0)$</span> (as a column), which is in fact just an old basis vector, and we want to know <span class="math-container">$v'$</span>, the coordinates of that old basis vector in terms of the new basis. Then from the above equation we get
<span class="math-container">$$B^{-1}v = B^{-1}Bv' \Rightarrow B^{-1}v = v'$$</span></p>
<p>As a side note, sometimes we want to ask how the change of basis matrix <span class="math-container">$B$</span> acts if we view it as a linear transformation: given a vector <span class="math-container">$v=(v_1,\ldots,v_n)$</span> in the old basis, what is the vector <span class="math-container">$Bv$</span>? In general it is the vector whose <span class="math-container">$i$</span>-th coordinate is <span class="math-container">$b_{i1}v_1+\cdots+b_{in}v_n$</span> (the dot product of the <span class="math-container">$i$</span>-th row of <span class="math-container">$B$</span> with <span class="math-container">$v$</span>). But in particular, if we take <span class="math-container">$v$</span> to be an old basis vector with coordinates <span class="math-container">$(0,\ldots,1,\ldots,0)$</span> (with respect to the old basis), where the <span class="math-container">$1$</span> is in the <span class="math-container">$j$</span>-th position, then <span class="math-container">$Bv = (b_{1j},\ldots,b_{nj})$</span>, which is the <span class="math-container">$j$</span>-th column of <span class="math-container">$B$</span>, i.e. the <span class="math-container">$j$</span>-th vector of the new basis. Thus we may say that <span class="math-container">$B$</span>, viewed as a linear transformation, takes the old basis to the new basis.</p>
|
probability | <p>In a probability course, a game was introduced which a logical approach won't yield a strategy for winning, but a probabilistic one will. My problem is that I don't remember the details (the rules of the game)! I would be thankful if anyone can complete the description of the game. I give the outline of the game, below.</p>
<p>Some person (A) hides a 100 or 200 dollar bill, and asks another one (B) to guess which one is hidden. If B's guess is correct, something happens and if not, something else (this is what I don't remember). The strange point is, B can think of a strategy so that always ends to a positive amount, but now A can deduce that B will use this strategy, and finds a strategy to overcome B. Now B knows A's strategy, and will uses another strategy, and so on. So, before even playing the game for once, there is an infinite chain of strategies which A and B choose successively!</p>
<p>Can you complete the story? I mean, what happens when B's guess correct and incorrect?</p>
<p>Thanks.</p>
| <p>In the <a href="http://blog.plover.com/math/envelope.html" rel="noreferrer">Envelope Paradox</a> player 1 writes any two different numbers $a< b$ on two slips of paper. Then player 2 draws one of the two slips each with probability $\frac 12$, looks at its number $x$, and predicts whether $x$ is the larger or the smaller of the two numbers.</p>
<p>It appears at first that no strategy by player 2 can achieve a success rate greater than $\frac 12$. But there is in fact a strategy that will do this.</p>
<p>The strategy is as follows: Player 2 should first select some probability distribution $D$ which is positive everywhere on the real line. (A normal distribution will suffice.) She should then select a number $y$ at random according to distribution $D$. That is, her selection $y$ should lie in the interval $I$ with probability exactly $$\int_I D(x)\; dx.$$ General methods for doing this are straightforward; <a href="https://en.wikipedia.org/wiki/Box-Muller_transform" rel="noreferrer">methods for doing this</a> when $D$ is a normal distribution are well-studied.</p>
<p>Player 2 now draws a slip at random; let the number on it be $x$.
If $x>y$, player 2 should predict that $x$ is the larger number $b$; if $x<y$ she should predict that $x$ is the smaller number $a$. ($y=x$ occurs with probability 0 and can be disregarded, but if you insist, then player 2 can flip a coin in this case without affecting the expectation of the strategy.)</p>
<p>There are six possible situations, depending on whether the selected slip $x$ is actually the smaller number $a$ or the larger number $b$, and whether the random number $y$ selected by player 2 is less than both $a$ and $b$, greater than both $a$ and $b$, or in between $a$ and $b$.</p>
<p>The table below shows the prediction made by player 2 in each of the six cases; this prediction does not depend on whether $x=a$ or $x=b$, only on the result of her comparison of $x$ and $y$: </p>
<p>$$\begin{array}{r|cc}
& x=a & x=b \\ \hline
y < a & x=b & \color{blue}{x=b} \\
a<y<b & \color{blue}{x=a} & \color{blue}{x=b} \\
b<y & \color{blue}{x=a} & x=a
\end{array}
$$</p>
<p>For example, the upper-left entry says that when player 2 draws the smaller of the two numbers, so that $x=a$, and selects a random number $y<a$, she compares $y$ with $x$, sees that $y$ is smaller than $x$, and so predicts that $x$ is the larger of the two numbers, that $x=b$. In this case she is mistaken. Items in blue text are <em>correct</em> predictions.</p>
<p>In the first and third rows, player 2 achieves a success with probability $\frac 12$. In the middle row, player 2's prediction is always correct. Player 2's total probability of a successful prediction is therefore
$$
\frac12 \Pr(y < a) + \Pr(a < y < b) + \frac12\Pr(b<y) = \\
\frac12(\color{maroon}{\Pr(y<a) + \Pr(a < y < b) + \Pr(b<y)}) + \frac12\Pr(a<y<b) = \\
\frac12\cdot \color{maroon}{1}+ \frac12\Pr(a<y<b)
$$</p>
<p>Since $D$ was chosen to be everywhere positive, player 2's probability $$\Pr(a < y< b) = \int_a^b D(x)\;dx$$ of selecting $y$ between $a$ and $b$ is <em>strictly</em> greater than $0$ and her probability of making a correct prediction is <em>strictly</em> greater than $\frac12$ by half this strictly positive amount.</p>
<p>This analysis points toward player 1's strategy, if he wants to minimize player 2's chance of success. If player 2 uses a distribution $D$ which is identically zero on some interval $I$, and player 1 knows this, then player 1 can reduce player 2's success rate to exactly $\frac12$ by always choosing $a$ and $b$ in this interval. If player 2's distribution is everywhere positive, player 1 cannot do this, even if he knows $D$. But player 2's distribution $D(x)$ must necessarily approach zero as $x$ becomes very large. Since Player 2's edge over $\frac12$ is $\frac12\Pr(a<y<b)$ for $y$ chosen from distribution $D$, player 1 can bound player 2's chance of success to less than $\frac12 + \epsilon$ for any given positive $\epsilon$, by choosing $a$ and $b$ sufficiently large and close together. And even if player 1 doesn't know $D$, he should <em>still</em> choose $a$ and $b$ very large and close together. </p>
<p>I have heard this paradox attributed to Feller, but I'm afraid I don't have a reference.</p>
<p>[ Addendum 2014-06-04: <a href="https://math.stackexchange.com/q/709984/25554">I asked here for a reference</a>, and was answered: the source is <a href="https://en.wikipedia.org/wiki/Thomas_M._Cover" rel="noreferrer">Thomas M. Cover</a> “<a href="http://www-isl.stanford.edu/~cover/papers/paper73.pdf" rel="noreferrer">Pick the largest number</a>”<em>Open Problems in Communication and Computation</em> Springer-Verlag, 1987, p152. ] </p>
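<p>A Monte Carlo sketch of the strategy in Python (a standard normal for $D$ and the particular values of $a$ and $b$ are arbitrary choices):</p>
<pre><code>import random

def success_rate(a, b, trials=100_000):
    """Player 2's threshold strategy, with y drawn from a standard normal D."""
    wins = 0
    for _ in range(trials):
        x = random.choice([a, b])  # the slip drawn
        y = random.gauss(0, 1)     # random threshold from D
        predict_larger = x > y
        wins += predict_larger == (x == b)
    return wins / trials

print(success_rate(0.1, 0.9))  # ~0.64: D puts real mass in (a, b)
print(success_rate(100, 101))  # ~0.50: Pr(a < y < b) is negligible
</code></pre>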
| <p>I know this is a late answer, but I'm pretty sure I know what game OP is thinking of (and none of the other answers have it right).</p>
<p>The way it works is person A chooses to hide either $100$ or $200$ dollars in an envelope, and person B has to guess the amount that person A hid. If person B guesses correctly they win the money in the envelope, but if they guess incorrectly they win nothing.</p>
<p>If person A uses the predictable strategy of putting $100$ dollars in the envelope every time, then person B can win $100$ dollars every time by guessing $100$ correctly.</p>
<p>If person A instead chooses to randomly put either $100$ or $200$, then person B can guess $200$ every time--he'll win half the time, so again will win $100$ dollars per game on average.</p>
<p>But a third, better option for A is to randomly put $100$ in the envelope with probability $2/3$, and $200$ dollars in with probability $1/3$. If person B guesses $100$, he has a $2/3$ chance of being right so his expected winnings are $\$66.67$. If person B guesses $200$, he has a $1/3$ chance of being right so his expected winnings are again $\$66.67$. No matter what B does, this strategy guarantees that he will win only $\$66.67$ on average.</p>
<p>Looking at person B's strategy, he can do something similar. If he guesses $100$ with probability $2/3$ and $200$ with probability $1/3$, then no matter what strategy person A uses he wins an average of $\$66.67$. These strategies for A and B are the <a href="https://en.wikipedia.org/wiki/Nash_equilibrium" rel="noreferrer">Nash Equilibrium</a> for this game, the set of strategies where neither person can improve their expected winnings by changing their strategy.</p>
<p>The "infinite chain of strategies" you mention comes in if either A or B start with a strategy that isn't in the Nash equilibrium. Suppose A decides to put $100$ dollars in the envelope every time. The B's best strategy is of course to guess $100$ every time. But given that, A's best strategy is clearly to put $200$ dollars in the envelope every time, at which point B should change to guessing $200$ every time, and so on. In a Nash equilibrium, though, neither player gains any advantage by modifying their strategy so such an infinite progression doesn't occur.</p>
<p>The interesting thing about this game is that although the game is entirely deterministic, the best strategies involve randomness. This actually turns out to be true in most deterministic games where the goal is to predict what your opponent will do. Another common example is <a href="https://en.wikipedia.org/wiki/Rock-paper-scissors" rel="noreferrer">rock-paper-scissors</a>, where the equilibrium strategy is unsurprisingly to choose between rock, paper, and scissors with equal probability.</p>
|
probability | <p>In <em>Finite Mathematics</em> by Lial et al. (10th ed.), problem 8.3.34 says:</p>
<blockquote>
<p>On National Public Radio, the <em>Weekend Edition</em> program posed the
following probability problem: Given a certain number of balls, of
which some are blue, pick 5 at random. The probability that all 5 are
blue is 1/2. Determine the original number of balls and decide how
many were blue.</p>
</blockquote>
<p>If there are $n$ balls, of which $m$ are blue, then the probability that 5 randomly chosen balls are all blue is $\binom{m}{5} / \binom{n}{5}$. We want this to be $1/2$,
so $\binom{n}{5} = 2\binom{m}{5}$; equivalently,
$n(n-1)(n-2)(n-3)(n-4) = 2 m(m-1)(m-2)(m-3)(m-4)$.
I'll denote these quantities as $[n]_5$ and $2 [m]_5$ (this is a notation for the so-called "falling factorial.")</p>
<p>A little fooling around will show that $[m+1]_5 = \frac{m+1}{m-4}[m]_5$.
Solving $\frac{m+1}{m-4} = 2$ shows that the only solution with $n = m + 1$ has $m = 9$, $n = 10$.</p>
<p><strong>Is this the only solution?</strong></p>
<p>You can check that $n = m + 2$ doesn't yield any integer solutions, by using the quadratic formula to solve $(m + 2)(m +1) = 2(m - 3)(m - 4)$. I have ruled out $n = m + 3$ or $n = m + 4$ with similar checks. For $n \geq m + 5$, solutions would satisfy a quintic equation, which of course has no general formula to find solutions.</p>
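<p>A brute-force scan in Python supports this (a sketch: since both $\binom{n}{5}$ and $2\binom{m}{5}$ are increasing, a single two-pointer pass suffices):</p>
<pre><code>from math import comb

# scan n, advancing m in lockstep
m = 5
for n in range(6, 100_000):
    target = comb(n, 5)
    while 2 * comb(m, 5) < target:
        m += 1
    if 2 * comb(m, 5) == target:
        print(m, n)  # only (9, 10) appears in this range
</code></pre>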
<p>Note that, as $n$ gets bigger, the ratio of successive values of $\binom{n}{5}$ gets smaller; $\binom{n+1}{5} = \frac{n+1}{n-4}\binom{n}{5}$
and $\frac{n+1}{n-4}$ is less than 2—in fact, it approaches 1. So it seems possible that, for some $k$, $\binom{n+k}{5}$ could be $2 \binom{n}{5}$.</p>
<p>This is now <a href="https://mathoverflow.net/questions/128036/solutions-to-binomn5-2-binomm5">a question at MathOverflow</a>.</p>
| <p>Many Diophantine equations are solved using modern algebraic geometry. For an informal survey how this works, see</p>
<p>M. Stoll, <em>How to solve a Diophantine equation</em>, <a href="http://arxiv.org/pdf/1002.4344.pdf">arXiv</a>.</p>
<p>The most prominent example is Fermat's equation. But there are also interesting binomial equations. It has been shown very recently that $\binom{x}{5}=\binom{y}{2}$ has exactly $20$ integer solutions:</p>
<p>Y. Bugeaud, M. Mignotte, S. Siksek, M. Stoll, Sz. Tengely, <em>Integral Points on Hyperelliptic Curves</em>, <a href="http://arxiv.org/pdf/0801.4459v4.pdf">arXiv</a></p>
<p>I don't know if this can be proven by elementary means. And I don't know the situation for $\binom{x}{5}=2 \binom{y}{5}$. I just want to warn you that it <em>might be</em> a waste of time to look for elementary solutions, and that instead more sophisticated methods are necessary. On the other hand, this equation arises as a problem from a book, so I am not sure ...</p>
| <p>I am putting some results that may or may not be part of an answer here, as a community wiki post, rather than cluttering the question with them. Perhaps they will help lead someone to a complete answer. If you have similar potentially useful information or partial answers, please feel free to add it here.</p>
<hr>
<p>Let's see what can be gleaned by looking at $\binom{n}{r} = 2 \binom{n-k}{r}$ for other values of $r$. There is an interesting duality:
$\binom{n}{r} = 2 \binom{n-k}{r} \iff \binom{n}{k} = 2 \binom{n-r}{k}$.
So we can find solutions to the original problem by finding solutions to $\binom{n}{k} = 2\binom{n-5}{k}$.
For any $r$, there is a "standard" solution, $\binom{2r}{r} = 2\binom{2r-1}{r}$, and several "trivial" solutions, $\binom{i}{r} = 2\binom{j}{r}$ whenever $0 \leq i,j \leq r - 1$.</p>
<p>For $r = 1$, there are infinitely many solutions;
for any $k$, we have $\binom{2k}{1} = 2 \binom{k}{1}$.
Under the duality, these correspond to the "standard" solutions $\binom{2k}{k} = 2 \binom{2k - 1}{k}$.</p>
<p>For $r = 2$, there are also infinitely many solutions. It's fun to see why. A solution satisfies the equation
$n(n-1) = 2 (n-k)(n-k-1)$
or $$\begin{equation}\tag{1}\label{eq:qud}n^2 - (4k+1)n + (2k^2 + 2k) = 0.\end{equation}$$
Thus $n = \frac{4k + 1 \pm \sqrt{8k^2 + 1}}{2}$.
Since $8k^2 + 1$ is odd,
this either has two integer solutions if $8k^2 + 1$ is a perfect square,
or none at all.
Thus, whenever there is one solution $n$ for a given difference $k$,
<em>there must be a second.</em>
The second key ingredient is that since we are multiplying an even number of terms, we can change the signs of all the terms without changing the result.</p>
<p>So, start with the standard solution:
$$
4 \cdot 3 = 2 (3 \cdot 2).
$$
Then it's also true that
$$
4 \cdot 3 = 2 (-2 \cdot -3),
$$
in other words, $[4]_2 = 2[-2]_2 = 2[4-6]_2$,
a solution with $k = 6$.
So there must be another.
When $k = 6$, equation $\eqref{eq:qud}$ is $n^2 - 25n + 84 = 0$,
and we know we can factor out $(n - 4)$,
which leaves $(n - 21)$.
And indeed, $\binom{21}{2} = 2 \binom{15}{2}$.
The duality gives $\binom{21}{6} = 2 \binom{19}{6}$.
Repeating the process, we get $[21]_2 = 2 [-14]_2$
with a difference $k = 35$;
factoring $(n - 21)$ from equation $\eqref{eq:qud}$ leaves
$(n - 120)$, and indeed $\binom{120}{2} = 2 \binom{85}{2}$.
Dually, $\binom{120}{35} = 2 \binom{118}{35}$.
We can keep going up this "staircase" forever;
if $k$ gives a solution, then so does $3k + \sqrt{8k^2 + 1}$.
This is OEIS <a href="http://oeis.org/A001109">A001109</a>.</p>
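<p>A short Python check of this staircase (exact, using integer square roots):</p>
<pre><code>import math

# climb the staircase: if k gives a solution, so does 3k + sqrt(8k^2 + 1)
k = 1
for _ in range(5):
    n = (4 * k + 1 + math.isqrt(8 * k * k + 1)) // 2
    print(k, n, math.comb(n, 2) == 2 * math.comb(n - k, 2))  # ... True
    k = 3 * k + math.isqrt(8 * k * k + 1)
</code></pre>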
<p>Of course, this doesn't help for our case because $5$ doesn't appear.
But it does show that solutions exist, besides the standard and trivial ones, for $r > 2$.</p>
<p>Now consider $r = 3$.
We need to solve $[n]_3 = 2[n-k]_3$ or
$$\begin{equation}\tag{2}\label{eq:cub}n^3 - (6k + 3)n^2 + (6k^2 + 12k + 2)n - (2k^3 + 6k^2 + 4k) = 0.\end{equation}$$
With $k = 0$ this factors as expected as $n(n-1)(n-2)$.
With $k = 1$ it again gives us the known trivial and standard solutions,
$(n-1)(n-2)(n-6)$.
With $k = 2$,
$\eqref{eq:cub}$ is $n^3 - 15n^2 + 50n - 48 = 0$,
from which we can factor the known trivial solution:
$(n-2)(n^2 - 13n + 24) = 0$.
But the other factor has no integer solutions.
For general $k$, we can apply the <a href="http://en.wikipedia.org/wiki/Cubic_formula">Cubic formula</a>.
We find that the discriminant $\Delta$ is $4(-27k^6 + 108k^4 + 18k^2 + 1)$,
which is positive for $k = 0,1,2$ and negative for $k \geq 3$,
so that it has three real roots for $k \leq 2$ but only one real root for $k \geq 3$.
This root turns out to be
$$
2k + 1 - \sqrt[3]{-3k^3 + \sqrt{k^6 - 4k^4 - \frac{2}{3}k^2 - \frac{1}{27}}} - \sqrt[3]{-3k^3 - \sqrt{k^6 - 4k^4 - \frac{2}{3}k^2 - \frac{1}{27}}}.
$$
I don't think that this can ever be an integer, but I don't know how to prove this. Unfortunately it's not sufficient to show that the contents of the cube roots cannot be perfect cubes, since even if neither is an integer, their sum still could be. However, no solutions exist for $n$ up to 10,000 (checked in the <a href="http://oeis.org/A000292/b000292.txt">table</a> provided at <a href="http://oeis.org/A000292">OEIS A000292</a>.)</p>
<p>We can at least check if any answers to our original question have the difference $n - m = 3$
by looking at the above root, with $k = 5$. It's about 25.268, not an integer,
so any extra solutions have the difference at least 4.</p>
<p>For $r = 4$, a similar "negation" trick should work as for $r = 2$.
Unfortunately, it doesn't seem to yield extra positive solutions.
The equation $[n]_4 = 2[n-k]_4$ expands to
$$\begin{equation}\tag{3}\label{eq:qrt}n^4 - (8k + 6)n^3 + (12k^2 + 36k + 11)n^2 - (8k^3 + 36k^2 + 44k + 6)n + (2k^4 + 12k^3 + 22k^2 + 12k) = 0.\end{equation}$$
Starting from the standard solution,
$[8]_4 = 2[7]_4$,
we have another solution, $[8]_4 = 2[-4]_4$,
with difference $k = 12$.
And indeed, we can factor $(n - 8)$ out of equation $\eqref{eq:qrt}$ with $k = 12$,
$$
n^4 - 102n^3 + 2171n^2 - 19542n + 65520 =
(n-8)(n^3 - 94n^2 + 1419n - 8190).
$$
But the other factor is a cubic with one real root, which isn't an integer.
There are no solutions (besides the trivial and standard) up to $n=1002$ (checked in the <a href="http://oeis.org/A000332/b000332.txt">table</a> provided at <a href="http://oeis.org/A000332">OEIS A000332</a>.)</p>
<p>When $k=5$, $\eqref{eq:qrt}$ becomes $n^4 - 46n^3 + 491n^2 - 2126n + 3360$,
which (<a href="http://www.wolframalpha.com/input/?i=n%5E4+-+46n%5E3+%2B+491n%5E2+-+2126n+%2B+3360+%3D+0">according to Wolfram Alpha</a>) has no integer solutions. So for the original problem, $n - m > 4$.</p>
<hr>
<p>Related OEIS sequences: </p>
<ul>
<li><a href="http://oeis.org/A000389">A000389: $\binom{n}{5}$</a> and <a href="http://oeis.org/A000389/b000389.txt">table up to $n=1000$</a></li>
<li><a href="http://oeis.org/A001109">A001109</a> are values of $k$ such that $\binom{n}{2} = 2\binom{n-k}{2}$, or equivalently $\binom{n}{k} = 2\binom{n-2}{k}$, has a solution
$n = \bigl(4k + 1 + \sqrt{8k^2 + 1}\bigr)/2$.</li>
<li><a href="http://oeis.org/A082291">A082291</a> are the values of $n-2$ in the above, offset by one.
That is, A001109 begins 0,1,6 and A082291 begins 2, 19, 118. $\binom{2+2}{1} = 2\binom{2}{1}$, and $\binom{19+2}{6} = 2\binom{19}{6}$.</li>
</ul>
|
linear-algebra | <blockquote>
<p>Let <span class="math-container">$K$</span> be nonsingular symmetric matrix, prove that if <span class="math-container">$K$</span> is positive definite so is <span class="math-container">$K^{-1}$</span> .</p>
</blockquote>
<hr />
<p>My attempt:</p>
<p>I have that <span class="math-container">$K = K^T$</span> so <span class="math-container">$x^TKx = x^TK^Tx = (xK)^Tx = (xIK)^Tx$</span> and then I don't know what to do next.</p>
| <p>If <span class="math-container">$K$</span> is positive definite then <span class="math-container">$K$</span> is invertible, so any given vector <span class="math-container">$y$</span> can be written as
<span class="math-container">$y = K x$</span> with <span class="math-container">$x = K^{-1}y$</span>; note that <span class="math-container">$y \neq 0$</span> forces <span class="math-container">$x \neq 0$</span>. Then <span class="math-container">$y^T K^{-1} y = x^T K^{T} K^{-1} K x = x^T K^{T} x >0$</span>.</p>
<p>Since the transpose of a positive definite matrix is also positive definite, cf. <a href="https://www.quora.com/Is-the-transpose-of-a-positive-definite-matrix-positive-definite?share=1" rel="noreferrer">here</a>, this proves that <span class="math-container">$K^{-1}$</span> is positive definite.</p>
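<p>A quick numerical illustration with NumPy (the construction of <span class="math-container">$K$</span> below is just one arbitrary way to get a positive definite matrix):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
K = M @ M.T + 4 * np.eye(4)  # symmetric positive definite by construction

# both K and K^{-1} should have strictly positive eigenvalues
print(np.linalg.eigvalsh(K).min() > 0)                 # True
print(np.linalg.eigvalsh(np.linalg.inv(K)).min() > 0)  # True
</code></pre>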
| <p>Here's one way: $K$ is positive definite if and only if all of its eigenvalues are positive. What do you know about the eigenvalues of $K^{-1}$?</p>
|
linear-algebra | <p>How can I understand that $A^TA$ is invertible if $A$ has independent columns? I found a similar <a href="https://math.stackexchange.com/questions/1181271/if-ata-is-invertible-then-a-has-linearly-independent-column-vectors">question</a>, phrased the other way around, so I tried to use the theorem</p>
<p>$$
rank(A^TA) \le min(rank(A^T),rank(A))
$$</p>
<p>Given $rank(A) = rank(A^T) = n$ and $A^TA$ produces an $n\times n$ matrix, I can't seem to prove that $rank(A^TA)$ is actually $n$.</p>
<p>I also tried to look at the question another way with the matrices</p>
<p>$$
A^TA
= \begin{bmatrix}a_1^T \\ a_2^T \\ \ldots \\ a_n^T \end{bmatrix}
\begin{bmatrix}a_1 & a_2 & \ldots & a_n \end{bmatrix}
= \begin{bmatrix}A^Ta_1 & A^Ta_2 & \ldots & A^Ta_n\end{bmatrix}
$$</p>
<p>But I still can't seem to show that $A^TA$ is invertible. So, how should I get a better understanding of why $A^TA$ is invertible if $A$ has independent columns?</p>
| <p>Consider the following:
$$A^TAx=\mathbf 0$$
Here, $Ax$, an element in the range of $A$, is in the null space of $A^T$. However, the null space of $A^T$ and the range of $A$ are orthogonal complements, so $Ax=\mathbf 0$.</p>
<p>If $A$ has linearly independent columns, then $Ax=\mathbf 0 \implies x=\mathbf 0$, so the null space of $A^TA$ is $\{\mathbf 0\}$. Since $A^TA$ is a square matrix, this means $A^TA$ is invertible.</p>
| <p>If $A $ is a real $m \times n $ matrix then $A $ and $A^T A $ have the same null space. Proof: $A^TA x =0\implies x^T A^T Ax =0 \implies (Ax)^TAx=0 \implies \|Ax\|^2 = 0 \implies Ax = 0 $. </p>
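<p>A small NumPy illustration of both directions (a random tall matrix has independent columns almost surely; forcing a dependent column drops the rank of $A^TA$ as well):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))  # tall matrix; columns independent a.s.

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T @ A))  # 3 3

A[:, 2] = A[:, 0] + A[:, 1]      # make the columns dependent
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T @ A))  # 2 2
</code></pre>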
|
game-theory | <p>A mathematician and a computer are playing a game: First, the mathematician chooses an integer from the range $2,...,1000$. Then, the computer chooses an integer <em>uniformly at random</em> from the same range. If the numbers chosen share a prime factor, the larger number wins. If they do not, the smaller number wins. (If the two numbers are the same, the game is a draw.)</p>
<p>Which number should the mathematician choose in order to maximize his chances of winning?</p>
| <p>For fixed range:</p>
<pre><code>range = 16;
a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}];
b = Table[Sort@DeleteDuplicates@ Flatten@Table[
Table[Position[a, a[[y, m]]][[n, 1]],
{n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}];
c = Table[Complement[Range[range], b[[n]]], {n, 1, range}];
d = Table[Range[n, range], {n, 1, range}];
e = Table[Range[1, n], {n, 1, range}];
w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]],
Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}];
l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1],
n], {n, 1, range}];
results = Table[Length@l[[n]], {n, 1, range}];
cf = Grid[{{Join[{"n"}, Rest@(r = Range[range])] // ColumnForm,
Join[{"win against n"}, Rest@w] // ColumnForm,
Join[{"lose against n"}, Rest@l] // ColumnForm,
Join[{"probability win for n"}, (p = Drop[Table[
results[[n]]/Total@Drop[results, 1] // N,{n, 1, range}], 1])] // ColumnForm}}]
Flatten[Position[p, Max@p] + 1]
</code></pre>
<p>isn't great code, but fun to play with for small ranges, gives</p>
<p><img src="https://i.sstatic.net/6oqyM.png" alt="enter image description here">
<img src="https://i.sstatic.net/hxDGU.png" alt="enter image description here"></p>
<p>and perhaps more illuminating</p>
<pre><code>rr = 20; Grid[{{Join[{"range"}, Rest@(r = Range[rr])] // ColumnForm,
Join[{"best n"}, (t = Rest@Table[
a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}];
b = Table[Sort@DeleteDuplicates@Flatten@Table[Table[
Position[a, a[[y, m]]][[n, 1]], {n, 1,Length@Position[a, a[[y, m]]]}],
{m, 1,PrimeNu[y]}], {y, 1, range}];
c = Table[Complement[Range[range], b[[n]]], {n, 1, range}];
d = Table[Range[n, range], {n, 1, range}];
e = Table[Range[1, n], {n, 1, range}];
w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]],
Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}];
l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n],
{n,1, range}];
results = Table[Length@l[[n]], {n, 1, range}];
p = Drop[Table[results[[n]]/Total@Drop[results, 1] // N,
{n, 1, range}], 1];
{Flatten[Position[p, Max@p] + 1], Max@p}, {range, 1, rr}]/.Indeterminate-> draw);
Table[t[[n, 1]], {n, 1, rr - 1}]] // ColumnForm,
Join[{"probability for win"}, Table[t[[n, 2]], {n, 1, rr - 1}]] // ColumnForm}}]
</code></pre>
<p>compares ranges:</p>
<p><img src="https://i.sstatic.net/P2aA8.png" alt="enter image description here"></p>
<p>Plotting mean "best $n$" against $\sqrt{\text{range}}$ gives</p>
<p><img src="https://i.sstatic.net/LMkAz.png" alt="enter image description here"></p>
<p>For range=$1000,$ "best $n$" are $29$ and $31$, which can be seen as maxima in this plot:</p>
<p><img src="https://i.sstatic.net/06gXJ.png" alt="enter image description here"></p>
<h1>Update</h1>
<p>In light of DanielV's comment that a "primes vs winchance" graph would probably be enlightening, I did a little bit of digging, and it turns out that it is. Looking at the "winchance" (just a weighting for $n$) of the primes in the range only, it is possible to give a fairly accurate prediction using</p>
<pre><code>range = 1000;
a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}];
b = Table[Sort@DeleteDuplicates@Flatten@Table[
Table[Position[a, a[[y, m]]][[n, 1]], {n, 1,
Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}];
c = Table[Complement[Range[range], b[[n]]], {n, 1, range}];
d = Table[Range[n, range], {n, 1, range}];
e = Table[Range[1, n], {n, 1, range}];
w = Table[ DeleteCases[ DeleteCases[
Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]],
1], n], {n, 1, range}];
l = Table[
DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1],
n], {n, 1, range}];
results = Table[Length@l[[n]], {n, 1, range}];
p = Drop[Table[
results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1];
{Flatten[Position[p, Max@p] + 1], Max@p};
qq = Prime[Range[PrimePi[2], PrimePi[range]]] - 1;
Show[ListLinePlot[Table[p[[t]] range, {t, qq}],
DataRange -> {1, Length@qq}],
ListLinePlot[
Table[2 - 2/Prime[x] - 2/range (-E + Prime[x]), {x, 1, Length@qq + 0}],
PlotStyle -> Red], PlotRange -> All]
</code></pre>
<p><img src="https://i.sstatic.net/BwAmp.png" alt="enter image description here"></p>
<p>The plots above (there are $2$ of them) show the values of "winchance" for primes against a plot of $$2+\frac{2 (e-p_n)}{\text{range}}-\frac{2}{p_n}$$</p>
<p>where $p_n$ is the $n$th prime, and "winchance" is the number of possible wins for $n$ divided by the total number of possible wins in the range, i.e. $$\dfrac{\text{range}}{2}\left(\text{range}-1\right)$$ e.g. $499500$ for range $1000$.</p>
<p><img src="https://i.sstatic.net/zFegO.png" alt="enter image description here"></p>
<pre><code>Show[p // ListLinePlot, ListPlot[N[
Transpose@{Prime[Range[PrimePi[2], PrimePi[range]]],
Table[(2 + (2*(E - Prime[x]))/range - 2/Prime[x])/range, {x, 1,
Length@qq}]}], PlotStyle -> {Thick, Red, PointSize[Medium]},
DataRange -> {1, range}]]
</code></pre>
<h1>Added</h1>
<p>Bit of fun with game simulation:</p>
<pre><code>games = 100; range = 30;
table = Prime[Range[PrimePi[range]]];
choice = Nearest[table, Round[Sqrt[range]]][[1]];
y = RandomChoice[Range[2, range], games]; z = Table[
Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m, 1, games}];
Count[Table[If[y[[m]] < choice \[Or]
    (MemberQ[z[[m]], choice] && y[[m]] > choice), "lose", "win"],
{m, 1, games}], "win"]
</code></pre>
<p>& simulated wins against computer over variety of ranges </p>
<p><img src="https://i.sstatic.net/82BQG.png" alt="enter image description here"></p>
<p>with</p>
<pre><code>Clear[range]
highestRange = 1000;
ListLinePlot[Table[games = 100;
table = Prime[Range[PrimePi[range]]];
choice = Nearest[table, Round[Sqrt[range]]][[1]];
y = RandomChoice[Range[2, range], games];
z = Table[Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m,
1, games}];
Count[Table[If[y[[m]] < choice \[Or]
    (MemberQ[z[[m]], choice] && y[[m]] > choice), "lose", "win"], {m, 1,
games}], "win"], {range,2, highestRange}], Filling -> Axis, PlotRange-> All]
</code></pre>
<h1>Added 2</h1>
<p>Plot of mean "best $n$" up to range$=1000$ with tentative conjectured error bound of $\pm\dfrac{\sqrt{\text{range}}}{\log(\text{range})}$ for range$>30$.</p>
<p><img src="https://i.sstatic.net/gjQ7E.png" alt="enter image description here"></p>
<p>I could well be wrong here though. - In fact, on reflection, I think I am (<a href="https://math.stackexchange.com/questions/865820/very-tight-prime-bounds">related</a>).</p>
| <p>First consider choosing a prime $p$ in the range $[2,N]$. You lose only if the computer chooses a multiple of $p$ or a number smaller than $p$, which occurs with probability
$$
\frac{(\lfloor{N/p}\rfloor-1)+(p-2)}{N-1}=\frac{\lfloor{p+N/p}\rfloor-3}{N-1}.
$$
The term inside the floor function has derivative
$$
1-\frac{N}{p^2},
$$
so it increases for $p\le \sqrt{N}$ and decreases thereafter. The floor function does not change this behavior. So the best prime to choose is always one of the two closest primes to $\sqrt{N}$ (the one on its left and one its right, unless $N$ is the square of a prime). Your chance of losing with this strategy will be $\sim 2/\sqrt{N}$.</p>
<p>On the other hand, consider choosing a composite $q$ whose prime factors are $$p_1 \le p_2 \le \ldots \le p_k.$$ Then the computer certainly wins if it chooses a prime number less than $q$ (other than any of the $p$'s); there are about $q / \log q$ of these by the prime number theorem. It also wins if it chooses a multiple of $p_1$ larger than $q$; there are about $(N-q)/p_1$ of these. Since $p_1 \le \sqrt{q}$ (because $q$ is composite), the computer's chance of winning here is at least about
$$
\frac{q}{N\log q}+\frac{N-q}{N\sqrt{q}}.
$$
The first term increases with $q$, and the second term decreases. The second term is larger than $(1/3)/\sqrt{N}$ until $q \ge (19-\sqrt{37})N/18 \approx 0.72 N$, at which point the first is already $0.72 / \log{N}$, which is itself larger than $(5/3)/\sqrt{N}$ as long as $N > 124$. So the sum of these terms will always be larger than $2/\sqrt{N}$ for $N > 124$ or so, meaning that the computer has a better chance of winning than if you'd chosen the best prime.</p>
<p>This rough calculation shows that choosing the prime closest to $\sqrt{N}$ is the best strategy for sufficiently large $N$, where "sufficiently large" means larger than about $100$. (Other answers have listed the exceptions, the largest of which appears to be $N=30$, consistent with this calculation.)</p>
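<p>A brute-force check in Python agrees for $N = 1000$; it simply scores every choice against every computer pick:</p>
<pre><code>from math import gcd

N = 1000

def wins(n):
    # count the computer picks that n beats under the shared-factor rule
    total = 0
    for c in range(2, N + 1):
        if c == n:
            continue                # equal numbers draw
        if gcd(n, c) > 1:
            total += n > c          # shared factor: larger number wins
        else:
            total += n < c          # coprime: smaller number wins
    return total

best = max(range(2, N + 1), key=wins)
print(best, wins(best) / (N - 1))   # 29 (31 ties it), win fraction ~0.94
</code></pre>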
|
differentiation | <p>I don't understand something about L'Hôpital's rule. In this case: </p>
<p>$$
\begin{align}
& {{}\phantom{=}}\lim_{x\to0}\frac{e^x-1-x^2}{x^4+x^3+x^2} \\[8pt]
& =\lim_{x\to0}\frac{(e^x-1-x^2)'}{(x^4+x^3+x^2)'} \\[8pt]
& =\lim_{x\to0}\frac{(e^x-2x)'}{(4x^3+3x^2+2x)'} \\[8pt]
& =\lim_{x\to0}\frac{(e^x-2)'}{(12x^2+6x+2)'} \\[8pt]
& = \lim_{x\to0}\frac{(e^x)'}{(24x+6)'} \\[8pt]
& = \lim_{x\to0}\frac{e^x}{24} \\[8pt]
& = \frac{e^0}{24} \\[8pt]
& = \frac{1}{24}
\end{align}
$$</p>
<p>Why do we have to keep on solving after this step:</p>
<p>$$\displaystyle\lim_{x\to0}\dfrac{(e^x-2)'}{(12x^2+6x+2)'}$$</p>
<p>Can't I just plug in $x=0$ and compute the limit at this step giving me:</p>
<p>$$\dfrac{1-2}{0+0+2}=-\dfrac{1}{2}$$</p>
<p>I'm very confused, because I get different probable answers for the limit, depending on when do I stop to differentiate, as clearly $-\frac1{2}\neq \frac 1{24}$.</p>
| <p>Once your answer is no longer in the form 0/0 or $\frac{\infty}{\infty}$ you must stop applying the rule. You only apply the rule to attempt to get rid of the indeterminate forms. If you apply L'Hopital's rule when it is not applicable (i.e., when your function no longer yields an indeterminate value of 0/0 or $\frac{\infty}{\infty}$) you will most likely get the wrong answer. </p>
<p>You should have stopped differentiating the top and bottom once you got to this:</p>
<p>$\dfrac{e^x-2x}{4x^3+3x^2+2x}$. Taking the limit there gives you the form $1/0$. The limit is nonexistent. </p>
<p>Also, don't be tempted to say "infinity" when you see a 0 in the denominator and a non-zero number on top. It may not be the case. For example, the function $\frac{1}{x}$ approaches infinity from one side and negative infinity from the other as x approaches 0. It's not necessarily infinite; it's best just to leave it as "nonexistent". </p>
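<p>A quick check with SymPy, computing the one-sided limits (they differ, so the two-sided limit indeed does not exist):</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
f = (sp.exp(x) - 1 - x**2) / (x**4 + x**3 + x**2)

print(sp.limit(f, x, 0, dir='+'))  # oo
print(sp.limit(f, x, 0, dir='-'))  # -oo
</code></pre>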
| <p>After differentiating just once, you get $$\lim_{x \to 0} \dfrac{e^x-2x}{4x^3+3x^2+2x}$$ which "evaluates" to $\dfrac 10$, i.e., the numerator approaches $1$, and the denominator approaches $0$. Hence, L'Hopital <strong>no longer applies</strong> and we have $$\lim_{x \to 0} \dfrac{e^x-2x}{4x^3+3x^2+2x}\quad\text{does not exist}.$$ </p>
<p><a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule">L'Hopital's rule</a> applies <em>provided and only while</em> a limit evaluates to an "indeterminate" form: e.g., $\dfrac 00, \;\text{or}\;\dfrac {\pm\infty}{\pm\infty}$.</p>
|
matrices | <p>Given two square matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, how do you show that <span class="math-container">$$\det(AB) = \det(A) \det(B)$$</span> where <span class="math-container">$\det(\cdot)$</span> is the determinant of the matrix?</p>
| <p>Let's consider the function <span class="math-container">$B\mapsto \det(AB)$</span> as a function of the columns of <span class="math-container">$B=\left(v_1|\cdots |v_i| \cdots | v_n\right)$</span>. It is straightforward to verify that this map is multilinear, in the sense that
<span class="math-container">$$\det\left(A\left(v_1|\cdots |v_i+av_i'| \cdots | v_n\right)\right)=\det\left(A\left(v_1|\cdots |v_i| \cdots | v_n\right)\right)+a\det\left(A\left(v_1|\cdots |v_i'| \cdots | v_n\right)\right).$$</span> It is also alternating, in the sense that if you swap two columns of <span class="math-container">$B$</span>, you multiply your overall result by <span class="math-container">$-1$</span>. These properties both follow directly from the corresponding properties for the function <span class="math-container">$A\mapsto \det(A)$</span>.</p>
<p>The determinant is completely characterized by these two properties, and the fact that <span class="math-container">$\det(I)=1$</span>. Moreover, any function that satisfies these two properties must be a multiple of the determinant. If you have not seen this fact, you should try to prove it. I don't know of a reference online, but I know it is contained in Bretscher's linear algebra book.</p>
<p>In any case, because of this fact, we must have that <span class="math-container">$\det(AB)=c\det(B)$</span> for some constant <span class="math-container">$c$</span>, and setting <span class="math-container">$B=I$</span>, we see that <span class="math-container">$c=\det(A)$</span>.</p>
<hr />
<p>For completeness, here is a proof of the necessary lemma that any a multilinear, alternating function is a multiple of determinant.</p>
<p>We will let <span class="math-container">$f:\mathbb (F^n)^n\to \mathbb F$</span> be a multilinear, alternating function, where, to allow for this proof to work in characteristic 2, we will say that a multilinear function is alternating if it is zero when two of its inputs are equal (this is equivalent to getting a sign when you swap two inputs everywhere except characteristic 2). Let <span class="math-container">$e_1, \ldots, e_n$</span> be the standard basis vectors. Then <span class="math-container">$f(e_{i_1},e_{i_2}, \ldots, e_{i_n})=0$</span> if any index occurs twice, and otherwise, if <span class="math-container">$\sigma\in S_n$</span> is a permutation, then <span class="math-container">$f(e_{\sigma(1)}, e_{\sigma(2)},\ldots, e_{\sigma(n)})=(-1)^\sigma$</span>, the sign of the permutation <span class="math-container">$\sigma$</span>.</p>
<p>Using multilinearity, one can expand out evaluating <span class="math-container">$f$</span> on a collection of vectors written in terms of the basis:</p>
<p><span class="math-container">$$f\left(\sum_{j_1=1}^n a_{1j_1}e_{j_1}, \sum_{j_2=1}^n a_{2j_2}e_{j_2},\ldots, \sum_{j_n=1}^n a_{nj_n}e_{j_n}\right) = \sum_{j_1=1}^n\sum_{j_2=1}^n\cdots \sum_{j_n=1}^n \left(\prod_{k=1}^n a_{kj_k}\right)f(e_{j_1},e_{j_2},\ldots, e_{j_n}).$$</span></p>
<p>All the terms with <span class="math-container">$j_{\ell}=j_{\ell'}$</span> for some <span class="math-container">$\ell\neq \ell'$</span> will vanish because the <span class="math-container">$f$</span> factor is zero, and the other terms can be written in terms of permutations. If <span class="math-container">$j_{\ell}\neq j_{\ell'}$</span> whenever <span class="math-container">$\ell\neq \ell'$</span>, then there is a unique permutation <span class="math-container">$\sigma$</span> with <span class="math-container">$j_k=\sigma(k)$</span> for every <span class="math-container">$k$</span>. This yields:</p>
<p><span class="math-container">$$\begin{align}\sum_{j_1=1}^n\sum_{j_2=1}^n\cdots \sum_{j_n=1}^n \left(\prod_{k=1}^n a_{kj_k}\right)f(e_{j_1},e_{j_2},\ldots, e_{j_n}) &= \sum_{\sigma\in S_n} \left(\prod_{k=1}^n a_{k\sigma(k)}\right)f(e_{\sigma(1)},e_{\sigma(2)},\ldots, e_{\sigma(n)}) \\ &= \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{k=1}^n a_{k\sigma(k)}\right)f(e_{1},e_{2},\ldots, e_{n}) \\ &= f(e_{1},e_{2},\ldots, e_{n}) \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{k=1}^n a_{k\sigma(k)}\right). \end{align}
$$</span></p>
<p>In the last line, the thing still in the sum is the determinant, although one does not need to realize this fact, as we have shown that <span class="math-container">$f$</span> is completely determined by <span class="math-container">$f(e_1,\ldots, e_n)$</span>, and we simply define <span class="math-container">$\det$</span> to be such a function with <span class="math-container">$\det(e_1,\ldots, e_n)=1$</span>.</p>
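<p>A numerical sanity check of the identity, and of the multilinearity used above, sketched in Python with NumPy (sizes, seed, and the scalar are arbitrary):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((2, 4, 4))

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True

# multilinearity of B |-> det(AB) in a single column of B
v, vp = rng.standard_normal((2, 4))
a = 2.5
B1, B2, B3 = B.copy(), B.copy(), B.copy()
B1[:, 1], B2[:, 1], B3[:, 1] = v + a * vp, v, vp
print(np.isclose(np.linalg.det(A @ B1),
                 np.linalg.det(A @ B2) + a * np.linalg.det(A @ B3)))  # True
</code></pre>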
| <p>The proof using elementary matrices can be found e.g. on <a href="http://www.proofwiki.org/wiki/Determinant_of_Matrix_Product" rel="noreferrer">proofwiki</a>. It's basically the same proof as given in Jyrki Lahtonen 's comment and Chandrasekhar's link.</p>
<p>There is also a proof using block matrices, I googled a bit and I was only able to find it in <a href="http://books.google.com/books?id=N871f_bp810C&pg=PA112&dq=determinant+product+%22alternative+proof%22&hl=en&ei=EPlZToKHOo6aOuyapJIM&sa=X&oi=book_result&ct=result&resnum=2&ved=0CC8Q6AEwAQ#v=onepage&q=determinant%20product%20%22alternative%20proof%22&f=false" rel="noreferrer">this book</a> and <a href="http://www.mth.kcl.ac.uk/%7Ejrs/gazette/blocks.pdf" rel="noreferrer">this paper</a>.</p>
<hr />
<p>I like the approach which I learned from Sheldon Axler's Linear Algebra Done Right, <a href="http://books.google.com/books?id=ovIYVIlithQC&printsec=frontcover&dq=linear+algebra+done+right&hl=en&ei=H-xZTuutJoLrOdvrwaMM&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCkQ6AEwAA#v=onepage&q=%2210.31%22&f=false" rel="noreferrer">Theorem 10.31</a>.
Let me try to reproduce the proof here.</p>
<p>We will use several results in the proof, one of which is - as far as I can say - a little less known. It is the <a href="http://www.proofwiki.org/wiki/Determinant_as_Sum_of_Determinants" rel="noreferrer">theorem</a> which says that if two matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> differ only in their <span class="math-container">$k$</span>-th row, and the matrix <span class="math-container">$C$</span> has as its <span class="math-container">$k$</span>-th row the sum of the <span class="math-container">$k$</span>-th rows of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and the same remaining rows as <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, then <span class="math-container">$|C|=|A|+|B|$</span>.</p>
<p><a href="https://math.stackexchange.com/questions/668/whats-an-intuitive-way-to-think-about-the-determinant">Geometrically</a>, this corresponds to adding two parallelepipeds with the same base.</p>
<hr />
<p><strong>Proof.</strong>
Let us denote the rows of <span class="math-container">$A$</span> by <span class="math-container">$\vec\alpha_1,\ldots,\vec\alpha_n$</span>. Thus
<span class="math-container">$$A=
\begin{pmatrix}
a_{11} & a_{12}& \ldots & a_{1n}\\
a_{21} & a_{22}& \ldots & a_{2n}\\
\vdots & \vdots& \ddots & \vdots \\
a_{n1} & a_{n2}& \ldots & a_{nn}
\end{pmatrix}=
\begin{pmatrix}
\vec\alpha_1 \\ \vec\alpha_2 \\ \vdots \\ \vec\alpha_n
\end{pmatrix}$$</span></p>
<p>Directly from the definition of matrix product we can see
that the rows of <span class="math-container">$A\cdot B$</span> are of the form <span class="math-container">$\vec\alpha_kB$</span>, i.e.,
<span class="math-container">$$A\cdot B=\begin{pmatrix}
\vec\alpha_1B \\ \vec\alpha_2B \\ \vdots \\ \vec\alpha_nB
\end{pmatrix}$$</span>
Since <span class="math-container">$\vec\alpha_k=\sum_{i=1}^n a_{ki}\vec e_i$</span>, we can
rewrite this equality as
<span class="math-container">$$A\cdot B=\begin{pmatrix}
\sum_{i_1=1}^n a_{1i_1}\vec e_{i_1} B\\
\vdots\\
\sum_{i_n=1}^n a_{ni_n}\vec e_{i_n} B
\end{pmatrix}$$</span>
Using the theorem on the sum of determinants multiple times we get
<span class="math-container">$$
|{A\cdot B}|= \sum_{i_1=1}^n a_{1i_1}
\begin{vmatrix}
\vec e_{i_1}B\\
\sum_{i_2=1}^n a_{2i_2}\vec e_{i_2} B\\
\vdots\\
\sum_{i_n=1}^n a_{ni_n}\vec e_{i_n} B
\end{vmatrix}= \ldots =
\sum_{i_1=1}^n \ldots \sum_{i_n=1}^n a_{1i_1} a_{2i_2} \dots
a_{ni_n}
\begin{vmatrix}
\vec e_{i_1} B \\ \vec e_{i_2} B \\ \vdots \\ \vec e_{i_n} B
\end{vmatrix}
$$</span></p>
<p>Now notice that if <span class="math-container">$i_j=i_k$</span> for some <span class="math-container">$j\ne k$</span>, then
the corresponding determinant in the above sum is zero (it has two <a href="http://www.proofwiki.org/wiki/Square_Matrix_with_Duplicate_Rows_has_Zero_Determinant" rel="noreferrer">identical rows</a>).
Thus the only nonzero summands are those for which the <span class="math-container">$n$</span>-tuple
<span class="math-container">$(i_1,i_2,\dots,i_n)$</span> represents a permutation of the numbers <span class="math-container">$1,\ldots,n$</span>.
Thus we get
<span class="math-container">$$|{A\cdot B}|=\sum_{\varphi\in S_n} a_{1\varphi(1)} a_{2\varphi(2)} \dots a_{n\varphi(n)}
\begin{vmatrix}
\vec e_{\varphi(1)} B \\ \vec e_{\varphi(2)} B \\ \vdots \\ \vec
e_{\varphi(n)} B
\end{vmatrix}$$</span>
(Here <span class="math-container">$S_n$</span> denotes the set of all permutations of <span class="math-container">$\{1,2,\dots,n\}$</span>.)
The determinant on the RHS of the above equality is that of the matrix
<span class="math-container">$B$</span> with permuted rows. Using several transpositions of rows we can recover the matrix
<span class="math-container">$B$</span>. We will show that this can be done using
<span class="math-container">$i(\varphi)$</span> transpositions, where <span class="math-container">$i(\varphi)$</span> denotes the number of <a href="http://en.wikipedia.org/wiki/Inversion_%28discrete_mathematics%29" rel="noreferrer">inversions</a> of <span class="math-container">$\varphi$</span>.
Using this fact we get
<span class="math-container">$$|{A\cdot B}|=\sum_{\varphi\in S_n} a_{1\varphi(1)} a_{2\varphi(2)} \dots a_{n\varphi(n)} (-1)^{i(\varphi)} |{B}| =|A|\cdot |B|.$$</span></p>
<p>It remains to show that we need <span class="math-container">$i(\varphi)$</span> transpositions.
We can transform the "permuted matrix" into the matrix <span class="math-container">$B$</span>
as follows: we first move the first row of <span class="math-container">$B$</span> to the first place
by repeatedly exchanging it with the preceding row until it is in the correct position.
(If it is already in the first position, we make no exchanges at all.)
The number of transpositions we have used is exactly the number
of inversions of <span class="math-container">$\varphi$</span> that contains the number 1.
Now we can move the second row to the second place in the same way.
We will use the same number of transpositions as the number of inversions of <span class="math-container">$\varphi$</span>
containing 2 but not containing 1. (Since the first row is already in place.)
We continue in the same way.
We see that by using this procedure we obtain the matrix
<span class="math-container">$B$</span> after <span class="math-container">$i(\varphi)$</span> row transpositions.</p>
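<p>As a quick numerical sanity check of both the row-additivity lemma and the product rule (my own illustration, not part of the proof), here is a short Python sketch:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Product rule: |AB| = |A| |B|
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Row-additivity lemma: A2 agrees with A off row k; C takes the sum of their k-th rows.
A2 = A.copy()
A2[k] = rng.standard_normal(n)
C = A.copy()
C[k] = A[k] + A2[k]
assert np.isclose(np.linalg.det(C), np.linalg.det(A) + np.linalg.det(A2))
print("both identities hold numerically")
</code></pre>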
|
probability | <p>Just for fun, I am trying to find a good method to generate a random number between 1 and 10 (uniformly) with an unbiased six-sided die.</p>
<p>I found a way, but it may require a lot of steps before getting the number, so I was wondering if there are more efficient methods.</p>
<p><strong>My method:</strong></p>
<ol>
<li>Throw the die and call the result $n$. If $1\leq n\leq 3$ your number will be between $1$ and $5$ and if $4\leq n\leq 6$ your number will be between $6$ and $10$. Hence, we reduced to the problem of generating a random number between $1$ and $5$.</li>
<li>Now, to get a number between $1$ and $5$, throw the die five times. If the $i$th throw gave the strictly largest result, take your number to be $i$. If the largest result is not unique, start over until it is.</li>
</ol>
<p>The problem is that although the probability that there will eventually be a largest result is $1$, it might take a while before getting it.</p>
<p>Is there a more efficient way that requires only some <em>fixed</em> number of steps? <strong>Edit:</strong> Or if not possible, a method with a smaller expected number of rolls?</p>
| <p>You may throw the die many ($N$) times, take the sum of the outcomes and consider the residue class $\pmod{10}$. The distribution on $[1,10]$ gets closer and closer to a uniform distribution as $N$ increases.</p>
<hr>
<p>You may throw the die once to decide if the final outcome will be even or odd, then throw it again until it gives an outcome different from six; that fixes the residue class $\pmod{5}$. In such a way you generate a uniform distribution over $[1,10]$ with $\frac{11}{5}=\color{red}{2.2}$ tosses on average. </p>
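<p>For a quick check that this really is uniform, here is a short Python simulation (the helper name <code>rand10</code> is just an illustrative choice):</p>
<pre><code>import random
from collections import Counter

def rand10():
    want_even = random.randint(1, 6) % 2 == 0  # first toss fixes the parity
    r = random.randint(1, 6)
    while r == 6:                              # reroll sixes: r is uniform on 1..5
        r = random.randint(1, 6)
    # r and r + 5 are the two numbers in [1, 10] in the same class mod 5,
    # and they have opposite parity, so exactly one of them matches.
    return r if (r % 2 == 0) == want_even else r + 5

counts = Counter(rand10() for _ in range(100_000))
print(sorted(counts.items()))  # each of 1..10 appears about 10,000 times
</code></pre>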
<hr>
<p>If you are allowed to throw the die in a wedge, you may label the edges of the die with the numbers in $[1,10]$ and mark two opposite edges as "reroll". In such a way you save exactly one toss, and need just $\color{red}{1.2}$ tosses on average.</p>
<hr>
<p>Obviously, if you are allowed to throw the die in decagonal glass you don't even need the die, but in such a case the lateral thinking spree ends with just $\color{red}{1}$ toss. Not much different from buying a D10, as Travis suggested.</p>
<hr>
<p>At last, just for fun: look at the die, without throwing it. Then look at your clock, the last digit of the seconds. Add one. $\color{red}{0}$ tosses.</p>
| <p>Write out the base-$6$ expansions of $\frac{0}{10}$ through $\frac{10}{10}$.</p>
<p>$$\begin{array}{cc}
\frac{0}{10} & = 0.00000000\dots\\
\frac{1}{10} & = 0.03333333\dots\\
\frac{2}{10} & = 0.11111111\dots\\
\frac{3}{10} & = 0.14444444\dots\\
\frac{4}{10} & = 0.22222222\dots\\
\frac{5}{10} & = 0.30000000\dots\\
\frac{6}{10} & = 0.33333333\dots\\
\frac{7}{10} & = 0.41111111\dots\\
\frac{8}{10} & = 0.44444444\dots\\
\frac{9}{10} & = 0.52222222\dots\\
\frac{10}{10} & = 1.00000000\dots\\
\end{array}$$</p>
<p>Treat rolls of a $6$ as a $0$. As you roll your $6$-sided die, you are generating digits of the base-$6$ expansion of a number, uniformly distributed between $0$ and $1$. There are $10$ gaps in between the fractions for $\frac{x}{10}$, corresponding to the $10$ uniformly random outcomes you are looking for. You know which outcome you are generating as soon as you know which gap the random number will be in. </p>
<p>This is kind of annoying to do. Here's an equivalent algorithm:</p>
<blockquote>
<p>Roll a die
$$\begin{array}{c|c}
1 & A(0,1)\\
2 & B(1,2,3)\\
3 & A(4,3)\\
4 & A(5,6)\\
5 & B(6,7,8)\\
6 & A(9,8)\\
\end{array}$$
$A(x,y)$: Roll a die
$$\begin{array}{c|c}
1,2,3 & x\\
4 & A(x,y)\\
5,6 & y\\
\end{array}$$
$B(x,y,z)$: Roll a die
$$\begin{array}{c|c}
1 & x\\
2 & C(x,y)\\
3,4 & y\\
5 & C(z,y)\\
6 & z\\
\end{array}$$
$C(x,y)$: Roll a die
$$\begin{array}{c|c}
1 & x\\
2 & C(x,y)\\
3,4,5,6 & y\\
\end{array}$$</p>
</blockquote>
<p>One sees that:</p>
<ul>
<li>$A(x,y)$ returns $x$ with probability $\frac35$ and $y$ with probability $\frac25$.</li>
<li>$B(x,y,z)$ returns $x$ with probability $\frac15$, $y$ with probability $\frac35$, and $z$ with probability $\frac15$.</li>
<li>$C(x,y)$ returns $x$ with probability $\frac15$ and $y$ with probability $\frac45$.</li>
</ul>
<p>Overall, it produces the $10$ outcomes each with $\frac1{10}$ probability. </p>
<p>Procedures $A$ and $C$ are expected to require $\frac65$ rolls. Procedure $B$ is expected to require $\frac23 \cdot 1 + \frac13 (1 + E(C)) = \frac75$ rolls. So the main procedure is expected to require $\frac23 (1 + E(A)) + \frac13(1 + E(B)) = \frac{34}{15} = 2.2\overline{6}$ rolls.</p>
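<p>The algorithm is easy to test empirically; the following Python sketch (a direct transcription of the tables above, with my own function names) confirms both the uniformity and the expected $\frac{34}{15} \approx 2.27$ rolls:</p>
<pre><code>import random
from collections import Counter

rolls = 0
def die():
    global rolls
    rolls += 1
    return random.randint(1, 6)

def A(x, y):
    r = die()
    if r <= 3: return x
    if r == 4: return A(x, y)
    return y

def C(x, y):
    r = die()
    if r == 1: return x
    if r == 2: return C(x, y)
    return y

def B(x, y, z):
    r = die()
    if r == 1: return x
    if r == 2: return C(x, y)
    if r <= 4: return y
    if r == 5: return C(z, y)
    return z

def rand10():
    r = die()
    if r == 1: return A(0, 1)
    if r == 2: return B(1, 2, 3)
    if r == 3: return A(4, 3)
    if r == 4: return A(5, 6)
    if r == 5: return B(6, 7, 8)
    return A(9, 8)

N = 200_000
counts = Counter(rand10() for _ in range(N))
print(sorted(counts.items()))       # each of 0..9 occurs about N/10 times
print("average rolls:", rolls / N)  # tends to 34/15 = 2.2666...
</code></pre>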
|
probability | <p>Do men or women have more brothers?</p>
<p>I think women have more as no man can be his own brother. But how one can prove it rigorously?</p>
<hr>
<p>I am going to suggest some reasonable background assumptions:</p>
<ol>
<li>There are a large number of individuals, of whom half are men and half are women.</li>
<li>The individuals are partitioned into nonempty families.</li>
<li>The distribution of the sizes of the families is deliberately not specified.</li>
<li>However, in each family, the sex of each member is independent of the sexes of the other members.</li>
</ol>
<p>I believe these assumptions are roughly correct for the world we actually live in.</p>
<p>Even in the absence of any information about point 3, what can one say about relative expectation of the random variables “Number of brothers of individual $I$, given that $I$ is female” and “Number of brothers of individual $I$, given that $I$ is male”?</p>
<p>And how can one directly refute the argument that claims that the second expectation should almost certainly be smaller than the first, based on the observation that in any single family, say with two girls and one boy, the girls have at least as many brothers as do the boys, and usually more.</p>
| <p>So many long answers! But really it's quite simple. </p>
<ul>
<li>Mathematically, the expected number of brothers is the same for men and women.</li>
<li>In real life, we can expect men to have slightly <em>more</em> brothers than women. </li>
</ul>
<p><strong>Mathematically:</strong></p>
<p>Assume, as the question puts it, that "in each family, the sex of each member is independent of the sexes of the other members". This is all we assume: we don't get to pick a particular set of families. (This is essential: If we were to choose the collection of families we consider, we can find collections where the men have more brothers, collections where the women have more brothers, or where the numbers are equal: we can get the answer to come out any way at all.) </p>
<p>I'll write $p$ for the gender ratio, i.e. the proportion of all people who are men. In real life $p$ is close to 0.5, but this doesn't make any difference. In any random set of $n$ persons, the expected (average) number of men is $n\cdot p$.</p>
<ol>
<li>Take an arbitrary child $x$, and let $n$ be the number of children in $x$'s family. </li>
<li>Let $S(x)$ be the set of $x$'s siblings. Note that there are <em>no</em> gender-related restrictions on $S(x)$: It's just the set of children other than $x$.</li>
<li>Obviously, <strong>the expected number of $x$'s brothers is the expected number of men in $S(x)$.</strong> </li>
<li>So what is the expected number of men in this set? Since $x$ has $n-1$ siblings, it's just $(n-1)\cdot p$, or approximately $(n-1)\div 2$, regardless of $x$'s gender. That's all there is to it.</li>
</ol>
<p>Note that the gender of $x$ didn't figure in this calculation at all. If we were to choose an arbitrary boy or an arbitrary girl in step 1, the calculation would be exactly the same, since $S(x)$ is not dependent on $x$'s gender.</p>
<p><strong>In real life:</strong> </p>
<p>In reality, the gender distribution of children does depend on the parents a little bit (for biological reasons that are beyond the scope of math.se). I.e., the distribution of genders in families <a href="http://www.genetics.org/content/genetics/15/5/445.full.pdf" rel="noreferrer">is not completely random.</a> Suppose some couples cannot have boys, some might be unable to have girls, etc. In such a case, being male is evidence that your parents <em>can</em> have a boy, which (very) slightly raises the odds that you can have a brother. </p>
<p>In other words: <strong>If the likelihood of having boys does depend on the family, men on average have <em>more</em> brothers, not fewer.</strong> (I am expressly putting aside the "family planning" scenario where people choose to have more children depending on the gender of the ones they have. If you allow this, <strong>anything could happen.</strong>)</p>
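<p>The "mathematical" case is easy to check by simulation. Here is a short Python sketch of my own (the family-size law below is an arbitrary choice; under assumption 4 of the question, any law gives the same conclusion):</p>
<pre><code>import random

men, women = [], []
for _ in range(200_000):
    size = random.randint(1, 6)                           # arbitrary family-size distribution
    sexes = [random.random() < 0.5 for _ in range(size)]  # True = male, iid per child
    boys = sum(sexes)
    for is_male in sexes:
        (men if is_male else women).append(boys - is_male)  # a man's own self is excluded

print("average brothers, men:  ", sum(men) / len(men))
print("average brothers, women:", sum(women) / len(women))
# The two averages agree (about 1.25 for this particular size law).
</code></pre>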
| <p><strong>Edit, 5/24/16:</strong> After some thought I don't particularly like this answer anymore; please take a look at my second answer below instead. </p>
<hr>
<p>Here's a simple version of the question. Suppose there is exactly one family which has $n$ children, of which $k$ are male with some probability $p_k$. When this happens, the men each have $k-1$ brothers, while the women have $k$ brothers. So it would seem that no matter what the probabilities $p_k$ are, the women will always have more brothers on average. </p>
<p>However, this is not true, and the reason is that sometimes we might have $k = 0$ (no males) or $k = n$ (no females). In the first case the women have no brothers and the men don't exist, and in the second case the men have $n-1$ brothers and the women don't exist. In these cases it's unclear whether the question even makes sense.</p>
<hr>
<p>Another simple version of the question, which avoids the previous problem and which I think is more realistic, is to suppose that there are two families with a total of $2n$ children between them, $n$ of which are male and $n$ of which are female, but now the children are split between the families in some random way. If there are $m$ male children in the first family and $f$ female children, then the average number of brothers a man has is</p>
<p>$$\frac{m(m-1) + (n-m)(n-m-1)}{n}$$</p>
<p>while the average number of brothers a woman has is</p>
<p>$$\frac{mf + (n-m)(n-f)}{n}.$$</p>
<p>The first quantity is big when $m$ is either big or small (in other words, when the distribution of male children is lopsided between the two families) while the second quantity is big when $m$ and $f$ are either both big or both small (in other words, when the distribution of male and female children are similar in the two families). If we suppose that "big" and "small" are disjoint and both occur with some probability $p \le \frac{1}{2}$ (say $p = \frac{1}{3}$ to be concrete), then the first case occurs with probability $2p$ (say $2 \cdot \frac{1}{3} = \frac{2}{3}$) while the second case occurs with probability $2p^2$ (say $2 \cdot \frac{1}{9} = \frac{2}{9}$). So heuristically, in this version of the question:</p>
<blockquote>
<p>If it's easy for there to be many or few men in a family, men could have more brothers than women because it's easier for men to correlate with themselves than for women to correlate with men.</p>
</blockquote>
<p>But you don't have to take my word for it: we can actually do the computation. Let me write $M$ for the random variable describing the number of men in the first family and $F$ for the random variable describing the number of women in the first family, and let's assume that they are 1) independent and 2) symmetric about $\frac{n}{2}$, so that in particular</p>
<p>$$\mathbb{E}(M) = \mathbb{E}(F) = \frac{n}{2}.$$ </p>
<p>$M$ and $F$ are independent, so</p>
<p>$$\mathbb{E}(MF) = \mathbb{E}(M) \mathbb{E}(F) = \frac{n^2}{4}.$$</p>
<p>and similarly for $n-M$ and $n-F$. This is already enough to compute the expected number of brothers a woman has, which is (because $MF$ and $(n-M)(n-F)$ have the same distribution by assumption)</p>
<p>$$\frac{2}{n} \left( \mathbb{E}(MF) \right) = \frac{n}{2}.$$</p>
<p>In other words, the expected number of brothers a woman has is precisely the expected number of men in one family. This also follows from linearity of expectation.</p>
<p>Next we'll compute the expected number of brothers a man has. This is (again because $M(M-1)$ and $(n-M)(n-M-1)$ have the same distribution by assumption)</p>
<p>$$\frac{2}{n} \left( \mathbb{E}(M(M-1)) \right) = \frac{2}{n} \left( \mathbb{E}(M^2) - \frac{n}{2} \right) = \frac{2}{n} \left( \text{Var}(M) + \frac{n^2}{4} - \frac{n}{2} \right) = \frac{n}{2} - 1 + \frac{2 \text{Var}(M)}{n}$$</p>
<p>where we used $\text{Var}(M) = \mathbb{E}(M^2) - \mathbb{E}(M)^2$. As in Donkey_2009's answer, this computation reveals that the answer depends delicately on the variance of the number of men in one family (although be careful comparing these two answers: in Donkey_2009's answer he's choosing a random family to inspect while I'm choosing a random distribution of males and females among two families). More precisely,</p>
<blockquote>
<p>Men have more brothers than women on average if and only if $\text{Var}(M)$ is strictly larger than $\frac{n}{2}$.</p>
</blockquote>
<p>For example, if the men are distributed by independent coin flips, then we can compute that $\text{Var}(M) = \frac{n}{4}$, so in fact in this case women have more brothers than men (and this doesn't depend on the distribution of $F$ at all, as long as it's independent of $M$). Here the heuristic argument about bigness and smallness doesn't apply because the probability of $M$ deviating from its mean is quite small. </p>
<p>But if, for example, $m$ is instead chosen uniformly at random among the possible values $0, 1, 2, \dots n$, then $\mathbb{E}(M^2) = \frac{n(2n+1)}{6}$, so $\text{Var}(M) = \frac{n(2n+1)}{6} - \frac{n^2}{4} = \frac{n^2}{12} + \frac{n}{6}$, which is quite a bit larger than in the previous case, and this gives about $\frac{2n}{3}$ expected brothers for men. </p>
<p>One quibble you might have with the above model is that you might not think it's reasonable for $M$ and $F$ to be independent. On the one hand, some families just like having lots of children, so you might expect $M$ and $F$ to be correlated. On the other hand, some families don't like having lots of children, so you might expect $M$ and $F$ to be anticorrelated. Without the independence assumption the computation for women acquires an extra term, namely $\frac{2 \text{Cov}(M, F)}{n}$ (as in Donkey_2009's answer), and now the answer also depends on how large this is relative to $\text{Var}(M)$. </p>
<p>Note that the argument in the OP that "no man can be his own brother" (basically, the $-1$ in $m(m-1)$) ought to imply, if it worked, that the difference between expected number of brothers for men and women is exactly $1$: this happens iff we are allowed to write $\mathbb{E}(M(M-1)) = \mathbb{E}(M) \mathbb{E}(M-1)$ iff $M$ is independent of itself iff it's constant iff $\text{Var}(M) = 0$. </p>
<hr>
<p><strong>Edit:</strong> Perhaps the biggest objection you might have to the model above is that a given person's gender is not independent of the gender of their siblings; that is, as Greg Martin points out in the comments below, requirement 4 in the OP is not satisfied. This is easiest to see in the extreme case that $n = 1$: in that case we're only distributing one male and one female child, and so any siblings you have must have opposite gender from you. In general the fact that the number of male and female children is fixed here means that your siblings are slightly more likely to be a different gender from you. </p>
<p>A more realistic model would be to both distribute the children randomly and to assign their genders randomly. Beyond that we should think more about how to model family sizes. </p>
|
probability | <blockquote>
<p>Consider the following experiment. I roll a die repeatedly until the die returns 6, then I count the number of times 3 appeared in the random variable $X$. What is $E[X]$?</p>
</blockquote>
<p><strong>Thoughts:</strong> I expect to roll the die 6 times before 6 appears (this part is geometric), and on the preceding 5 rolls each roll has a $1/5$ chance of returning a 3. Treating this as binomial, I therefore expect to count 3 once, so $E[X]=1$.</p>
<p><strong>Problem:</strong> Don't know how to model this problem mathematically. Hints would be appreciated.</p>
| <p>We can restrict ourselves to dice throws with outcomes $3$ and $6$. Among these throws, both outcomes are equally likely. This means that the index $Y$ of the first $6$ is geometrically distributed with parameter $\frac12$, hence $\mathbb{E}(Y)=2$. The number of $3$s occuring before the first $6$ equals $Y-1$ and has expected value $1$.</p>
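<p>A short simulation confirms this (my own sanity check, not part of the argument):</p>
<pre><code>import random

def threes_before_six():
    count = 0
    while True:
        r = random.randint(1, 6)
        if r == 6:
            return count
        count += (r == 3)

N = 200_000
print(sum(threes_before_six() for _ in range(N)) / N)  # close to 1.0
</code></pre>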
| <p>There are infinitely many ways to solve this problem; here is another solution I like. </p>
<p>Let $A = \{\text{first roll is }6\}$, $B = \{\text{first roll is }3\}$, $C = \{\text{first roll is neither }3\text{ nor }6\}$. Then
$$
E[X] = E[X|A]P(A) + E[X|B] P(B) + E[X|C] P(C) = 0 + (E[X] + 1) \frac16 + E[X]\frac46,
$$
whence $E[X] = 1$. </p>
|
matrices | <p>I'm reading my linear algebra textbook and there are two sentences that confuse me.</p>
<p>(1) Symmetric matrix $A$ can be factored into $A=Q\lambda Q^{T}$ where $Q$ is orthogonal matrix : Diagonalizable<br>
($Q$ has the eigenvectors of $A$ in its columns, and $\lambda$ is a diagonal matrix whose entries are the eigenvalues of $A$) </p>
<p>(2) Any symmetric matrix has a complete set of orthonormal eigenvectors<br>
whether its eigenvalues are distinct or not. </p>
<p>It's a contradiction, right?<br>
Diagonalizable means the matrix has n distinct eigenvectors (for $n$ by $n$ matrix).<br>
If symmetric matrix can be factored into $A=Q\lambda Q^{T}$, it means that<br>
symmetric matrix has n distinct eigenvalues.<br>
Then why the phrase "whether its eigenvalues are distinct or not" is added in (2)?</p>
<p>After reading eigenvalue and eigenvector part of textbook, I conclude that every symmetric matrix is diagonalizable. Is that true?</p>
| <p>Diagonalizable doesn't mean it has distinct eigenvalues. Think about the identity matrix: it is diagonalizable (it is already diagonal), yet all its eigenvalues are equal. But the converse is true: every matrix with distinct eigenvalues can be diagonalized. </p>
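<p>A quick numerical illustration with NumPy: a symmetric matrix with a repeated eigenvalue still factors as $A=Q\lambda Q^{T}$ with orthonormal $Q$ (the spectrum $\{2,2,5\}$ below is an arbitrary choice):</p>
<pre><code>import numpy as np

Q0, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))
A = Q0 @ np.diag([2.0, 2.0, 5.0]) @ Q0.T     # symmetric, eigenvalue 2 repeated

w, Q = np.linalg.eigh(A)                     # eigh is made for symmetric matrices
print(np.round(w, 6))                        # [2. 2. 5.]
assert np.allclose(Q @ np.diag(w) @ Q.T, A)  # A = Q diag(w) Q^T
assert np.allclose(Q.T @ Q, np.eye(3))       # columns are orthonormal
</code></pre>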
| <p>It is definitely NOT true that a diagonalizable matrix has all distinct eigenvalues--take the identity matrix. Having distinct eigenvalues is sufficient for diagonalizability, but not necessary. There is no contradiction here.</p>
|
logic | <p>Text below copied from <a href="https://math.stackexchange.com/questions/566/your-favourite-maths-puzzles">here</a> </p>
<blockquote>
<p>The Blue-Eyed Islander problem is one of my favorites. You can read
about it <a href="http://terrytao.wordpress.com/2008/02/05/the-blue-eyed-islanders-puzzle/" rel="noreferrer">here</a> on Terry Tao's website, along with some discussion.
I'll copy the problem here as well.</p>
<p>There is an island upon which a tribe resides. The tribe consists of
1000 people, with various eye colours. Yet, their religion forbids
them to know their own eye color, or even to discuss the topic; thus,
each resident can (and does) see the eye colors of all other
residents, but has no way of discovering his or her own (there are no
reflective surfaces). If a tribesperson does discover his or her own
eye color, then their religion compels them to commit ritual suicide
at noon the following day in the village square for all to witness.
All the tribespeople are highly logical and devout, and they all know
that each other is also highly logical and devout (and they all know
that they all know that each other is highly logical and devout, and
so forth).</p>
<p>[For the purposes of this logic puzzle, "highly logical" means that
any conclusion that can logically deduced from the information and
observations available to an islander, will automatically be known to
that islander.]</p>
<p>Of the 1000 islanders, it turns out that 100 of them have blue eyes
and 900 of them have brown eyes, although the islanders are not
initially aware of these statistics (each of them can of course only
see 999 of the 1000 tribespeople).</p>
<p>One day, a blue-eyed foreigner visits to the island and wins the
complete trust of the tribe.</p>
<p>One evening, he addresses the entire tribe to thank them for their
hospitality.</p>
<p>However, not knowing the customs, the foreigner makes the mistake of
mentioning eye color in his address, remarking “how unusual it is to
see another blue-eyed person like myself in this region of the world”.</p>
<p>What effect, if anything, does this faux pas have on the tribe?</p>
</blockquote>
<p>The possible options are </p>
<p>Argument 1. The foreigner has no effect, because his comments do not tell the tribe anything that they do not already know (everyone in the tribe can already see that there are several blue-eyed people in their tribe). </p>
<p>Argument 2. 100 days after the address, all the blue eyed people commit suicide. This is proven as a special case of</p>
<p>Proposition. Suppose that the tribe had $n$ blue-eyed people for some positive integer $n$. Then $n$ days after the traveller’s address, all $n$ blue-eyed people commit suicide.</p>
<p>Proof: We induct on $n$. When $n=1$, the single blue-eyed person realizes that the traveler is referring to him or her, and thus commits suicide on the next day. Now suppose inductively that $n$ is larger than $1$. Each blue-eyed person will reason as follows: “If I am not blue-eyed, then there will only be $n-1$ blue-eyed people on this island, and so they will all commit suicide $n-1$ days after the traveler’s address”. But when $n-1$ days pass, none of the blue-eyed people do so (because at that stage they have no evidence that they themselves are blue-eyed). After nobody commits suicide on the $(n-1)^{st}$ day, each of the blue eyed people then realizes that they themselves must have blue eyes, and will then commit suicide on the $n^{th}$ day. </p>
<p>It seems that no one has found a suitable answer to this puzzle, which amounts to the question: which argument is valid? </p>
<p>My question is...
Is there no solution to this puzzle? </p>
| <p>Argument 1 is clearly wrong.</p>
<p>Consider the island with only <em>two</em> blue-eyed people. The foreigner arrives and announces "how unusual it is to see another blue-eyed person like myself in this region of the world." The induction argument is now simple, and proceeds for only two steps; on the second day both islanders commit suicide. (I leave this as a crucial exercise for the reader.)</p>
<p>Now, what did the foreigner tell the islanders that they did not already know? Say that the blue-eyed islanders are $A$ and $B$. Each already knows that there are blue-eyed islanders, so this is <em>not</em> what they have learned from the foreigner. Each knows that there are blue-eyed islanders, but <em>neither</em> one knows that the other knows this. But when $A$ hears the foreigner announce the existence of blue-eyed islanders, he gains new knowledge: he now knows <em>that $B$ knows that there are blue-eyed islanders</em>. This is new; $A$ did not know this before the announcement. The information learned by $B$ is the same, but mutatis mutandis.</p>
<p>Analogously, in the case that there are three blue-eyed islanders, none learns from the foreigner that there are blue-eyed islanders; all three already knew this. And none learns from the foreigner that other islanders knew there were blue-eyed islanders; all three knew this as well. But each of the three does learn something new, namely that all the islanders now know that (all the islanders know that there are blue-eyed islanders). They did not know this before, and this new information makes the difference.</p>
<p>Apply this process 100 times and you will understand what new knowledge was gained by the hundred blue-eyed islanders in the puzzle.</p>
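<p>The role of iterated knowledge can even be checked mechanically for tiny tribes. The Python sketch below (my own illustration, only feasible for small $n$ since it enumerates all $2^n$ colourings) models common knowledge by a shrinking set of candidate "worlds" and shows that the $b$ blue-eyed islanders deduce their colour exactly on day $b$:</p>
<pre><code>from itertools import product

def day_of_deduction(actual):
    """actual: tuple of booleans, True = blue-eyed; assumes at least one blue."""
    n = len(actual)
    # The announcement makes "at least one blue" common knowledge:
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    def knowers(w):
        # Islander i knows their colour in world w iff every candidate world
        # agreeing with w away from coordinate i also agrees at coordinate i.
        return [i for i in range(n)
                if all(v[i] == w[i] for v in worlds
                       if all(v[j] == w[j] for j in range(n) if j != i))]
    day = 0
    while True:
        day += 1
        if knowers(actual):
            return day
        # Publicly, nobody acted today: drop every world where someone would have.
        worlds = [w for w in worlds if not knowers(w)]

for b in range(1, 5):
    print(b, "blue-eyed:", day_of_deduction((True,)*b + (False,)*(4 - b)), "days")
# prints 1, 2, 3, 4 days respectively
</code></pre>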
| <p>This isn't a solution to the puzzle, but it's too long to post as a comment. If one reads further in the post (second link), for clarification:</p>
<p>In response to a request for the solution shortly after the puzzle was posted, Terence Tao replied: </p>
<blockquote>
<p>I don’t want to spoil the puzzle for others, but the key to resolving the apparent contradiction is to understand the concept of “common knowledge”; see
<a href="http://en.wikipedia.org/wiki/Common_knowledge_%28logic%29" rel="noreferrer">http://en.wikipedia.org/wiki/Common_knowledge_%28logic%29</a></p>
</blockquote>
<p>Added much later, Terence Tao poses <em>this question</em>:</p>
<blockquote>
<p>[An interesting moral dilemma: the traveler can save 99 lives after his faux pas, by naming a specific blue-eyed person as the one he referred to, causing that unlucky soul to commit suicide the next day and sparing everyone else. Would it be ethical to do so?]</p>
</blockquote>
<p><em>Now that is truly a dilemma!</em></p>
<hr>
<p>Added: See also this <a href="http://www.math.dartmouth.edu/~pw/solutions.pdf" rel="noreferrer">alternate version of same problem</a> - and its solution, by Peter Winkler of Dartmouth (dedicated to Martin Gardner). See problem/solution $(10)$.</p>
|
linear-algebra | <p>When I was studying linear algebra in the first year, from what I remember, vector spaces were always defined over a field, which was in every single concrete example equal to either $\mathbb{R}$ or $\mathbb{C}$.</p>
<p>In Associative Algebra course, we sometimes mentioned (when talking about $R$-modules) that if $R$ is a division ring, everything becomes trivial and known from linear algebra.</p>
<p>During the summer, I'm planning to revisit my notes of linear algebra, write them in tex, and try to prove as much as possible in a general setting. </p>
<p><strong>Are there any theorems in linear algebra that hold for vector spaces over a field but not over a division ring?</strong> How much linear algebra can be done over a division ring?</p>
<p>Also, <strong>what are some examples of division rings, that aren't fields?</strong> $\mathbb{H}$ is the only one that comes to mind. I know there aren't any finite ones (Wedderburn). Of course, I'm looking for reasonably nice ones...</p>
| <p>In my experience, when working over a division ring $D$, the main thing you have
to be careful of is the distinction between $D$ and $D^{op}$.</p>
<p>E.g. if $F$ is a field, then $End_F(F) = F$ ($F$ is the ring of $F$-linear endomorphisms of itself, just via multiplication), and hence $End(F^n) = M_n(F)$;
and this latter isomorphism is what links matrices and the theory of linear transformations.</p>
<p>But, for a general division ring $D$, the action of $D$ by left multiplication on itself is not $D$-linear, if $D$ is not commutative. Instead, the action of $D^{op}$ on $D$ via right multiplication is $D$-linear, and so we find that
$End_D(D) = D^{op}$, and hence that $End_D(D^n) = M_n(D^{op}).$</p>
<hr>
<p>As for examples of division algebras, they come from fields with non-trivial <a href="http://en.wikipedia.org/wiki/Brauer_group" rel="noreferrer">Brauer groups</a>, although this may not help particularly with concrete examples. </p>
<p>A standard way to construct examples of central simple algebras over a field $F$ is via a <em>crossed product</em>. (Unfortunately, there does not seem to be a wikipedia entry on this topic.)</p>
<p>What you do is you take an element $a\in F^{\times}/(F^{\times})^n$, and
a cyclic extension $K/F$, with Galois group generated by an element $\sigma$
of order $n$, and then define a degree $n^2$ central simple algebra $A$ over $F$
as follows:</p>
<p>$A$ is obtained from $K$ by adjoining a non-commuting, non-zero element $x$,
which satisfies the conditions</p>
<ol>
<li>$x k x^{-1} = \sigma(k)$ for all $k \in K$, and</li>
<li>$x^n = a$.</li>
</ol>
<p>This will sometimes produce division algebras. </p>
<p>E.g. if we take $F = \mathbb R$, $K = \mathbb C$, $a = -1$, and $\sigma =$ complex conjugation, then $A$ will be $\mathbb H$, the Hamilton quaternions.</p>
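<p>This $n=2$ case can be made very concrete: inside $M_2(\mathbb C)$, embed $K=\mathbb C$ as diagonal matrices and let $x$ be the matrix $J$ below. The relations $xkx^{-1}=\sigma(k)$ and $x^2=a=-1$ are then a one-line check (a small NumPy sketch of my own):</p>
<pre><code>import numpy as np

def embed(z):                 # K = C sitting inside M_2(C) as diag(z, conj(z))
    return np.array([[z, 0], [0, np.conjugate(z)]])

J = np.array([[0, 1], [-1, 0]], dtype=complex)   # the element x

z = 2 + 3j
assert np.allclose(J @ embed(z), embed(np.conjugate(z)) @ J)  # x k = sigma(k) x
assert np.allclose(J @ J, -np.eye(2))                         # x^2 = -1
# embed(a) + embed(b) @ J, for a, b in C, gives the usual 2x2 matrix
# model [[a, b], [-conj(b), conj(a)]] of the Hamilton quaternions.
</code></pre>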
<p>E.g. if we take $F = \mathbb Q_p$ (the $p$-adic numbers for some prime $p$),
we take $K =$ the unique unramified extension of $\mathbb Q_p$ of degree $n$,
take $\sigma$ to be the Frobenius automorphism of $K$,
and take $a = p^i$ for some $i \in \{1,\ldots,n-1\}$ coprime to $n$,
then we get a central simple division algebra over $\mathbb Q_p$, which is called <em>the division algebra over $\mathbb Q_p$ of invariant $i/n$</em> (or perhaps $-i/n$, depending on your conventions).</p>
<p>E.g. if we take $F = \mathbb Q$, $K =$ the unique cubic subextension of $\mathbb Q$ contained in $\mathbb Q(\zeta_7)$, and $a = 2$, then we will get
a central simple division algebra of degree $9$ over $\mathbb Q$.
(To see that it is really a division algebra, one can extend scalars to $\mathbb Q_2$, where it becomes a special case of the preceding construction.)</p>
<p>See Jyrki Lahtonen's answer to this question, as well as Jyrki's answer <a href="https://math.stackexchange.com/questions/45085/an-example-of-a-division-ring-d-that-is-not-isomorphic-to-its-opposite-ring/45086#45086">here</a>, for some more detailed examples of this construction. (Note that a key condition for getting a division algebra is that the element $a$ not be norm from the extension $K$.)</p>
<hr>
<p>Added: As the OP remarks in a comment below, it doesn't seem to be so easy to find non-commutative division rings. Firstly, perhaps this shouldn't be so surprising, since there was quite a gap (centuries!) between the discovery of complex numbers and Hamilton's discovery of quaternions, suggesting that the latter are not so easily found.</p>
<p>Secondly, one easy way to make interesting but tractable non-commutative rings is to form group rings of non-commutative finite groups, and if you do this over e.g. $\mathbb Q$, you can find interesting division rings inside them. The one problem with this is that a group ring of a non-trivial group is never itself a division ring; you need to use Artin--Wedderburn theory to break it up into a product of matrix rings over division rings, and so the interesting division rings that arise in this way lie a little below the surface. </p>
| <p>Let me quote a relevant paragraph in the Wikipedia article on "Division ring":</p>
<blockquote>
<p>Much of linear algebra may be
formulated, and remains correct, for
(left) modules over division rings
instead of vector spaces over fields.
Every module over a division ring has
a basis; linear maps between
finite-dimensional modules over a
division ring can be described by
matrices, and the Gaussian elimination
algorithm remains applicable.
Differences between linear algebra
over fields and skew fields occur
whenever the order of the factors in a
product matters. For example, the
proof that the column rank of a matrix
over a field equals its row rank
yields for matrices over division
rings only that the left column rank
equals its right row rank: it does not
make sense to speak about the rank of
a matrix over a division ring.</p>
</blockquote>
<p>I hope this helps!</p>
|
linear-algebra | <p>I know similar questions have been asked and I have looked at them, but none of them seems to have a satisfactory answer. I am reading the book <a href="http://amzn.com/0521406498">a course in mathematics for student of physics vol. 1</a> by Paul Bamberg and Shlomo Sternberg. In Chapter 1 the authors define affine space and write:</p>
<blockquote>
<p>The space $\Bbb{R}^2$ is an example of a <em>vector space</em>. The distinction between <em>vector space</em> $\Bbb{R}^2$ and <em>affine space</em> $A\Bbb{R}^2$ lies in the fact that in $\Bbb{R}^2$ the point (0,0) has a special significance ( it is the additive identity) and the addition of two vectors in $\Bbb{R}^2$ makes sense. These do not hold for $A\Bbb{R}^2$.</p>
</blockquote>
<p>Please explain. </p>
<p>Edit:</p>
<p>How come $A\Bbb{R}^2$ has a point (0,0) without special significance? And why does the addition of two vectors in $A\Bbb{R}^2$ not make sense? Please give concrete examples instead of abstract answers. I am a physics major and have taken courses in Calculus, Linear Algebra and Complex Analysis.</p>
| <p>Consider the vector space <span class="math-container">$\mathbb{R}^3$</span>. Inside <span class="math-container">$\mathbb{R}^3$</span> we can choose two planes, as in the picture below. We'll call the green one <span class="math-container">$P_1$</span> and the blue one <span class="math-container">$P_2$</span>. The plane <span class="math-container">$P_1$</span> passes through the origin but the plane <span class="math-container">$P_2$</span> does not. It is a standard homework exercise in linear algebra to show that the <span class="math-container">$P_1$</span> is a <strong>sub-vector space</strong> of <span class="math-container">$\mathbb{R}^3$</span> but the plane <span class="math-container">$P_2$</span> is <strong>not</strong>. However, the plane <span class="math-container">$P_2$</span> <strong>looks</strong> almost exactly the same as <span class="math-container">$P_1$</span>, having the exact same, flat geometry, and in fact <span class="math-container">$P_2$</span> and <span class="math-container">$P_1$</span> are simply translates of one another. This plane <span class="math-container">$P_2$</span> is a classical example of an affine space.</p>
<p><span class="math-container">$\,\,\,\,\,\,\,\,\,$</span><img src="https://i.sstatic.net/LwSZZ.gif" alt="enter image description here" /></p>
<p>Suppose we wanted to turn <span class="math-container">$P_2$</span> into a vector space, would it be possible? Sure. What we would need to do is align <span class="math-container">$P_2$</span> with <span class="math-container">$P_1$</span> using some translation, and then use this alignment to re-define the algebraic operations on <span class="math-container">$P_2$</span>. Let's make this precise. If <span class="math-container">$T: P_2 \to P_1$</span> is the alignment, for <span class="math-container">$p,q \in P_2$</span> we'll define <span class="math-container">$p \oplus q = T^{-1}(T(p) + T(q))$</span>. In words, we shift <span class="math-container">$p$</span> and <span class="math-container">$q$</span> down to <span class="math-container">$P_1$</span>, add them, and then shift them back. Note that this is different than simply adding <span class="math-container">$p+q$</span>, as this vector need not lie on <span class="math-container">$P_2$</span> at all (one of the reasons <span class="math-container">$P_2$</span> is not a vector space, it is not closed under addition).</p>
<p>There are, however, many ways of aligning <span class="math-container">$P_2$</span> with <span class="math-container">$P_1$</span>, and so many different ways of turning <span class="math-container">$P_2$</span> into a vector space, and none of them are <strong>canonical</strong>. Here is one way to make these alignments: pick a vector <span class="math-container">$v \in P_2$</span>, and translate <span class="math-container">$P_2$</span> by <span class="math-container">$-v$</span>, so that <span class="math-container">$T(p) = p-v$</span>. This translates <span class="math-container">$P_2$</span> on to <span class="math-container">$P_1$</span>, and sends <span class="math-container">$v$</span> to <span class="math-container">$0$</span>. Conceptually, this translation "sends <span class="math-container">$v$</span> to zero", and this approach of "redefining some chosen vector to be the zero vector" always works to turn an affine space into a vector space.</p>
<p>If you want to do algebra on <span class="math-container">$P_2$</span> without picking a "zero vector", you can use the following trick: instead of trying to trying to add together vectors in <span class="math-container">$P_2$</span> (which, as we've seen, need not stay in <span class="math-container">$P_2$</span>), you can add vectors in <span class="math-container">$P_1$</span> to vectors in <span class="math-container">$P_2$</span>. Note that if <span class="math-container">$v_1 \in P_1$</span> and <span class="math-container">$v_2 \in P_2$</span> then <span class="math-container">$v_1 + v_2 \in P_2$</span>. What we obtain is a funny situation where the addition takes place between two sets: a vector space <span class="math-container">$P_1$</span> on the one hand, and the non-vector-space <span class="math-container">$P_2$</span> on the other. This lets us work with <span class="math-container">$P_2$</span> without having to force it to be a vector space.</p>
<p>Affine spaces are an abstraction and generalization of this situation.</p>
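<p>Here is the picture in coordinates, as a small Python sketch (taking <span class="math-container">$P_1$</span> to be the plane <span class="math-container">$z=0$</span> and <span class="math-container">$P_2$</span> its translate <span class="math-container">$z=1$</span>, which is simpler than the tilted planes in the figure):</p>
<pre><code>import numpy as np

in_P2 = lambda w: np.isclose(w[2], 1.0)     # P2 = { (x, y, 1) }

p = np.array([1.0, 2.0, 1.0])
q = np.array([-3.0, 0.5, 1.0])
print(in_P2(p + q))        # False: P2 is not closed under ordinary addition

v = np.array([0.0, 0.0, 1.0])               # the vector we declare to be "zero"
oplus = lambda p, q: (p - v) + (q - v) + v  # T(p) = p - v, then add, then undo T
print(in_P2(oplus(p, q)))  # True

u = np.array([4.0, -1.0, 0.0])              # a vector of P1
print(in_P2(u + p))        # True: adding P1-vectors to P2-points stays in P2
</code></pre>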
| <p>Consider an infinite sheet (of idealised paper, if you like). If it is blank, then there is absolutely no way to distinguish between any two points on the sheet. Nonetheless, if you do have two points on the sheet, you can measure the distance between them. And if there is a uniform magnetic field parallel to the sheet, then you can even measure the bearing from one point to another. Thus, given any point $P$ on the sheet, you can uniquely describe every other point on the sheet by its distance and bearing from $P$; and conversely, given any distance and bearing, there is a point with that distance and bearing from $P$. <em>This</em> is the situation that the notion of a 2-dimensional affine space is an abstraction of.</p>
<p>Now suppose we have marked a point $O$ on the sheet. Then we can "add" points $P$ and $Q$ on the sheet by drawing the usual parallelogram diagram. The result $P + Q$ of the "addition" depends on the choice of $O$ (and, of course, $P$ and $Q$), but nothing else. <em>This</em> is what the notion of a 2-dimensional vector space is an abstraction of.</p>
|
linear-algebra | <p>In one of my exams I'm asked to prove the following </p>
<blockquote>
<p>Suppose <span class="math-container">$A,B\in \mathbb R^{n\times n}$</span>, and <span class="math-container">$AB=BA$</span>, then <span class="math-container">$A,B$</span> share the same eigenvectors. </p>
</blockquote>
<p>My attempt is let <span class="math-container">$\xi$</span> be an eigenvector corresponding to <span class="math-container">$\lambda$</span> of <span class="math-container">$A$</span>, then <span class="math-container">$A\xi=\lambda\xi$</span>, then I want to show <span class="math-container">$\xi$</span> is also some eigenvector of <span class="math-container">$B$</span> but I get stuck.</p>
| <p>The answer is in the book <em>Linear Algebra and its Application</em> by Gilbert Strang. I'll just write down what he said in the book.</p>
<blockquote>
<p>Starting from <span class="math-container">$Ax=\lambda x$</span>, we have</p>
<p><span class="math-container">$$ABx = BAx = B \lambda x = \lambda Bx$$</span></p>
<p>Thus <span class="math-container">$x$</span> and <span class="math-container">$Bx$</span> are both eigenvectors of <span class="math-container">$A$</span>, sharing the same <span class="math-container">$\lambda$</span> (or else <span class="math-container">$Bx = 0$</span>). If we assume for convenience that the eigenvalues of <span class="math-container">$A$</span> are distinct – the eigenspaces are one dimensional – then <span class="math-container">$Bx$</span> must be a multiple of <span class="math-container">$x$</span>. In other words <span class="math-container">$x$</span> is an eigenvector of <span class="math-container">$B$</span> as well as <span class="math-container">$A$</span>.</p>
</blockquote>
<p>There's another proof using diagonalization in the book.</p>
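<p>For instance (a sketch of my own, with <span class="math-container">$B$</span> chosen as a polynomial in <span class="math-container">$A$</span> to guarantee <span class="math-container">$AB=BA$</span>, and <span class="math-container">$A$</span> built with distinct eigenvalues):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))
A = S @ np.diag([1.0, 2.0, 3.0]) @ np.linalg.inv(S)  # distinct eigenvalues 1, 2, 3
B = 2 * A @ A - A + 4 * np.eye(3)                    # a polynomial in A: commutes

assert np.allclose(A @ B, B @ A)
w, V = np.linalg.eig(A)
for k in range(3):
    x = V[:, k]
    mu = np.vdot(x, B @ x) / np.vdot(x, x)   # Rayleigh quotient recovers mu
    assert np.allclose(B @ x, mu * x)        # x is an eigenvector of B too
print("every eigenvector of A is an eigenvector of B")
</code></pre>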
| <p>Commuting matrices do not necessarily share <em>all</em> eigenvectors, but they always share <em>a</em> common eigenvector.</p>
<p>Let <span class="math-container">$A,B\in\mathbb{C}^{n\times n}$</span> such that <span class="math-container">$AB=BA$</span>. There is always a nonzero subspace of <span class="math-container">$\mathbb{C}^n$</span> which is both <span class="math-container">$A$</span>-invariant and <span class="math-container">$B$</span>-invariant (namely <span class="math-container">$\mathbb{C}^n$</span> itself). Among all these subspaces, there exists hence an invariant subspace <span class="math-container">$\mathcal{S}$</span> of the minimal (nonzero) dimension.</p>
<p>We show that <span class="math-container">$\mathcal{S}$</span> is spanned by some common eigenvectors of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>.
Assume that, say, for <span class="math-container">$A$</span>, there is a nonzero <span class="math-container">$y\in \mathcal{S}$</span> such that <span class="math-container">$y$</span> is not an eigenvector of <span class="math-container">$A$</span>. Since <span class="math-container">$\mathcal{S}$</span> is <span class="math-container">$A$</span>-invariant, it contains some eigenvector <span class="math-container">$x$</span> of <span class="math-container">$A$</span>; say, <span class="math-container">$Ax=\lambda x$</span> for some <span class="math-container">$\lambda\in\mathbb{C}$</span>. Let <span class="math-container">$\mathcal{S}_{A,\lambda}:=\{z\in \mathcal{S}:Az=\lambda z\}$</span>. By the assumption, <span class="math-container">$\mathcal{S}_{A,\lambda}$</span> is a proper (but nonzero) subspace of <span class="math-container">$\mathcal{S}$</span> (since <span class="math-container">$y\not\in\mathcal{S}_{A,\lambda}$</span>).</p>
<p>We know that for any <span class="math-container">$z\in \mathcal{S}_{A,\lambda}$</span>, <span class="math-container">$Bz\in \mathcal{S}$</span> since <span class="math-container">$\mathcal{S}_{A,\lambda}\subset\mathcal{S}$</span> and <span class="math-container">$\mathcal{S}$</span> is <span class="math-container">$B$</span>-invariant. However, <span class="math-container">$A$</span> and <span class="math-container">$B$</span> commute so
<span class="math-container">$$
ABz=BAz=\lambda Bz \quad \Rightarrow\quad Bz\in \mathcal{S}_{A,\lambda}.
$$</span>
This means that <span class="math-container">$\mathcal{S}_{A,\lambda}$</span> is <span class="math-container">$B$</span>-invariant. Since <span class="math-container">$\mathcal{S}_{A,\lambda}$</span> is both <span class="math-container">$A$</span>- and <span class="math-container">$B$</span>-invariant and is a proper (nonzero) subspace of <span class="math-container">$\mathcal{S}$</span>, we have a contradiction. Hence every nonzero vector in <span class="math-container">$\mathcal{S}$</span> is an eigenvector of both <span class="math-container">$A$</span> and <span class="math-container">$B$</span>.</p>
<hr>
<p><strong>EDIT:</strong> A nonzero <span class="math-container">$A$</span>-invariant subspace <span class="math-container">$\mathcal{S}$</span> of <span class="math-container">$\mathbb{C}^n$</span> contains an eigenvector of <span class="math-container">$A$</span>.</p>
<p>Let <span class="math-container">$S=[s_1,\ldots,s_k]\in\mathbb{C}^{n\times k}$</span> be such that <span class="math-container">$s_1,\ldots,s_k$</span> form a basis of <span class="math-container">$\mathcal{S}$</span>. Since <span class="math-container">$A\mathcal{S}\subset\mathcal{S}$</span>, we have <span class="math-container">$AS=SG$</span> for some <span class="math-container">$G\in\mathbb{C}^{k\times k}$</span>. Since <span class="math-container">$k\geq 1$</span>, <span class="math-container">$G$</span> has at least one eigenpair <span class="math-container">$(\lambda,x)$</span>. From <span class="math-container">$Gx=\lambda x$</span>, we get <span class="math-container">$A(Sx)=SGx=\lambda(Sx)$</span> (<span class="math-container">$Sx\neq 0$</span> because <span class="math-container">$x\neq 0$</span> and <span class="math-container">$S$</span> has full column rank). The vector <span class="math-container">$Sx\in\mathcal{S}$</span> is an eigenvector of <span class="math-container">$A$</span> and, consequently, <span class="math-container">$\mathcal{S}$</span> contains at least one eigenvector of <span class="math-container">$A$</span>.</p>
<hr>
<p><strong>EDIT:</strong> There is a nonzero <span class="math-container">$A$</span>- and <span class="math-container">$B$</span>-invariant subspace of <span class="math-container">$\mathbb{C}^n$</span> of the least dimension.</p>
<p>Let <span class="math-container">$\mathcal{I}$</span> be the set of all nonzero <span class="math-container">$A$</span>- and <span class="math-container">$B$</span>-invariant subspaces of <span class="math-container">$\mathbb{C}^n$</span>. The set is nonempty since <span class="math-container">$\mathbb{C}^n$</span> is its own (nonzero) subspace which is both <span class="math-container">$A$</span>- and <span class="math-container">$B$</span>-invariant (<span class="math-container">$A\mathbb{C}^n\subset\mathbb{C}^n$</span> and <span class="math-container">$B\mathbb{C}^n\subset\mathbb{C}^n$</span>). Hence the set <span class="math-container">$\mathcal{D}:=\{\dim \mathcal{S}:\mathcal{S}\in\mathcal I\}$</span> is a nonempty subset of <span class="math-container">$\{1,\ldots,n\}$</span>. By the <a href="http://en.wikipedia.org/wiki/Well-ordering_principle" rel="noreferrer">well-ordering principle</a>, <span class="math-container">$\mathcal{D}$</span> has the least element and hence there is a nonzero <span class="math-container">$\mathcal{S}\in\mathcal{I}$</span> of the least dimension.</p>
|
linear-algebra | <p>I am a bit confused. What is the difference between a linear and affine function? Any suggestions will be appreciated.</p>
| <p>A linear function fixes the origin, whereas an affine function need not do so. An affine function is the composition of a linear function with a translation, so while the linear part fixes the origin, the translation can map it somewhere else.</p>
<p>Linear functions between vector spaces preserve the vector space structure (so in particular they must fix the origin). While affine functions don't preserve the origin, they do preserve some of the other geometry of the space, such as the collection of straight lines.</p>
<p>If you choose bases for vector spaces $V$ and $W$ of dimensions $m$ and $n$ respectively, and consider functions $f\colon V\to W$, then $f$ is linear if $f(v)=Av$ for some $n\times m$ matrix $A$ and $f$ is affine if $f(v)=Av+b$ for some matrix $A$ and vector $b$, where coordinate representations are used with respect to the bases chosen.</p>
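<p>In coordinates this is a two-line check (a small sketch of my own; the particular $A$ and $b$ are arbitrary):</p>
<pre><code>import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([1.0, -1.0])
linear = lambda v: A @ v
affine = lambda v: A @ v + b

print(linear(np.zeros(2)))   # [0. 0.]   a linear map fixes the origin
print(affine(np.zeros(2)))   # [ 1. -1.] an affine map sends 0 to b

u, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.allclose(linear(u + w), linear(u) + linear(w)))  # True
print(np.allclose(affine(u + w), affine(u) + affine(w)))  # False: off by b
</code></pre>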
| <p>An affine function is the composition of a linear function with a translation:
$ax$ is linear; $(x+b)\circ(ax)$, i.e. $x \mapsto ax+b$, is affine.
See <em>Modern basic Pure mathematics</em> by C. Sidney.</p>
|
combinatorics | <p>Kent Haines describes the game of <a href="http://www.kenthaines.com/blog/2016/2/19/integer-solitaire" rel="noreferrer">Integer Solitaire</a>, which I find to be excellent for young kids learning arithmetic. I'm sure they will be motivated by this game to get a lot of practice.</p>
<p>Kent asks a question about his game, which I find very interesting, and so I am asking here, in the hopes that Math.SE might be able to answer.</p>
<p>The child draws 18 cards from an ordinary deck of cards, and then regards the cards to have values Ace = 1, 2, 3, ..., Jack = 11, Queen = 12, King = 13, except that Black means a positive value and Red means a negative value.</p>
<p>Using 14 of the 18, the child seeks to find solutions of four equations:</p>
<p><a href="https://i.sstatic.net/TtELUm.jpg" rel="noreferrer"><img src="https://i.sstatic.net/TtELUm.jpg" alt="Target equations" /></a></p>
<p>For example, a successful solution would look like:</p>
<p><a href="https://i.sstatic.net/Qezb9m.jpg" rel="noreferrer"><img src="https://i.sstatic.net/Qezb9m.jpg" alt="Successful play of Integer Solitaire" /></a></p>
<p><strong>Question.</strong> Does every set of 18 cards admit a solution?</p>
<p>Kent Haines says, "I have no idea whether all combinations of 18 cards are solvable in this game. But I have played this game for five years with dozens of students, and I have yet to see a combination of 18 cards that is unsolvable."</p>
<p><strong>Follow up Question.</strong> In the event that the answer is negative, what is the probability of having a winning set?</p>
<p>For the follow up question, it may be that an exact answer is out of reach, but bounds on the probability would be welcome.</p>
| <p>Unsatisfyingly, a counterexample is (all black):</p>
<p><span class="math-container">$$(5,5,6,6,7,7,8,8,9,9,10,10,J,J,Q,Q,K,K)$$</span></p>
<p>which does not satisfy the last two equations, since</p>
<p><span class="math-container">$$\_+\_+\_ \ge 5+5+6 =16>13 = K$$</span></p>
<p>Extending this result, we need at least <span class="math-container">$22$</span> cards to guarantee a solvable <span class="math-container">$14$</span>-tuple since we have the <span class="math-container">$21$</span>-card counterexample</p>
<p><span class="math-container">$$(3,4,4,5,5, \dots , K, K)$$</span></p>
<p>where <span class="math-container">$3+4+4+5+5+6 = 27 > 26 = 2K$</span>, so the last two equations cannot both be satisfied. I do not know at this moment whether a 22-card counterexample exists.</p>
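<p>Both bounds are easy to verify mechanically; here is a tiny Python check (reading the board as above, so that the last two equations are both of the form <span class="math-container">$\_+\_+\_=\_$</span>):</p>
<pre><code># 18-card counterexample: two black copies each of 5..K (values 5..13)
deck18 = sorted([v for v in range(5, 14)] * 2)
print(sum(deck18[:3]))   # 16 > 13: even the three smallest cards overshoot a king

# 21-card counterexample: a black 3 plus two copies each of 4..K
deck21 = sorted([3] + [v for v in range(4, 14)] * 2)
print(sum(deck21[:6]))   # 27 > 26 = 2K: the two three-card sums can't both fit
</code></pre>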
| <p>28 card counterexample:</p>
<p><span class="math-container">$$ black: K, K, J, J, 9, 9, 7, 7, 5, 5, 3, 3, A, A $$</span>
<span class="math-container">$$ red: K, K, J, J, 9, 9, 7, 7, 5, 5, 3, 3, A, A $$</span></p>
<p>cannot satisfy __ + __ = __ because 2 odds make an even (whether added or subtracted), and there are no evens in the set.</p>
<p>Edit: it's 28 cards, not 26.</p>
<p>A 29-card counterexample is just as easy: a single additional even card cannot satisfy both of the top two equations, since each of them needs at least one even card. So at least two even cards must be added.</p>
|
logic | <p>I have read somewhere that there are some theorems that have been shown to be "unprovable". It was a while ago and I don't remember the details, and I suspect that this question might be the result of a total misunderstanding. By the way, I assume that <em>unprovable theorems</em> do exist. Please correct me if I am wrong and skip reading the rest.</p>
<p>As far as I know, the mathematical statements are categorized into: undefined concepts, definitions, axioms, conjectures, lemmas and theorems. There might be some other types that I am not aware of as an amateur math learner. In this categorization, an axiom is something that cannot be built upon other things and it is too obvious to be proved (is it?). So axioms are unprovable. A theorem or lemma is actually a conjecture that has been proved. So "a theorem that cannot be proved" sounds like a paradox.</p>
<p>I know that there are some statements that cannot be proved simply because they are wrong. I am not addressing them because they are not <em>theorems</em>. So what does it mean that a theorem is unprovable? Does it mean that it cannot be proved by current mathematical tools and it may be proved in the future by more advanced tools that are not discovered yet? So why don't we call it a conjecture? If it cannot be proved at all, then it is better to call it an axiom.</p>
<p>Another question is, <em>how can we be sure that a theorem cannot be proved</em>? I am assuming the explanation might involve some high-level logic that is way above my understanding. So I would appreciate it if you put it into simple words.</p>
<p><strong>Edit-</strong> Thanks to a comment by @user21820 I just read two other interesting posts, <a href="https://math.stackexchange.com/a/1643073/301977">this</a> and <a href="https://math.stackexchange.com/a/1808558/301977">this</a> that are relevant to this question. I recommend everyone to take a look at them as well.</p>
| <p>When we say that a statement is 'unprovable', we mean that it is unprovable from the axioms of a particular theory. </p>
<p>Here's a nice concrete example. Euclid's <em>Elements</em>, the prototypical example of axiomatic mathematics, begins by stating the following five axioms:</p>
<blockquote>
<p>Any two points can be joined by a straight line</p>
<p>Any finite straight line segment can be extended to form an infinite
straight line.</p>
<p>For any point <span class="math-container">$P$</span> and choice of radius <span class="math-container">$r$</span> we can form a circle
centred at <span class="math-container">$P$</span> of radius <span class="math-container">$r$</span></p>
<p>All right angles are equal to one another.</p>
<p>[The parallel postulate:] If <span class="math-container">$L$</span> is a straight line and <span class="math-container">$P$</span> is a point not on the line <span class="math-container">$L$</span> then there is at most one line <span class="math-container">$L'$</span> that passes through <span class="math-container">$P$</span> and is parallel to <span class="math-container">$L$</span>.</p>
</blockquote>
<p>Euclid proceeds to derive much of classical plane geometry from these five axioms. This is an important point. After these axioms have been stated, Euclid makes no further appeal to our natural intuition for the concepts of 'line', 'point' and 'angle', but only gives proofs that can be deduced from the five axioms alone. </p>
<p>It is conceivable that you could come up with your own theory with 'points' and 'lines' that do not resemble points and lines at all. But if you could show that your 'points' and 'lines' obey the five axioms of Euclid, then you could interpret all of his theorems in your new theory. </p>
<p>In the two thousand years following the publication of the <em>Elements</em>, one major question that arose was: do we need the fifth axiom? The fifth axiom - known as the parallel postulate - seems less intuitively obvious than the other four: if we could find a way of deducing the fifth axiom from the first four then it would become superfluous and we could leave it out. </p>
<p>Mathematicians tried for millennia to find a way of deducing the parallel postulate from the first four axioms (and I'm sure there are cranks who are still trying to do so now), but were unable to. Gradually, they started to get the feeling that it might be impossible to prove the parallel postulate from the first four axioms. But how do you prove that something is unprovable?</p>
<p>The right approach was found independently by Lobachevsky and Bolyai (and possibly Gauss) in the nineteenth century. They took the first four axioms and replaced the fifth with the following:</p>
<blockquote>
<p>[Hyperbolic parallel postulate:] If <span class="math-container">$L$</span> is a straight line and <span class="math-container">$P$</span> is a point not on the line <span class="math-container">$L$</span> then <strong>there are at least two</strong> lines that pass through <span class="math-container">$P$</span> and are parallel to <span class="math-container">$L$</span>.</p>
</blockquote>
<p>This axiom is clearly incompatible with the original parallel postulate. The remarkable thing is that there is a geometrical theory in which the first four axioms and the modified parallel postulate are true. </p>
<p>The theory is called <em>hyperbolic geometry</em> and it deals with points and lines inscribed on the surface of a <em>hyperboloid</em>:</p>
<p><a href="https://i.sstatic.net/LmlxP.png" rel="noreferrer"><img src="https://i.sstatic.net/LmlxP.png" alt="Wikimedia image: a triangle and a pair of diverging parallel lines inscribed on a hyperboloid"></a></p>
<p><em>In the bottom right of the image above, you can see a pair of hyperbolic parallel lines. Notice that they diverge from one another.</em></p>
<p>The first four axioms hold (and you can check this), but now if <span class="math-container">$L$</span> is a line and <span class="math-container">$P$</span> is a point not on <span class="math-container">$L$</span> then there are <em>infinitely many</em> lines parallel to <span class="math-container">$L$</span> passing through <span class="math-container">$P$</span>. So the original parallel postulate does not hold.</p>
<p>This now allows us to prove very quickly that it is impossible to prove the parallel postulate from the other four axioms: indeed, suppose there were such a proof. Since the first four axioms are true in hyperbolic geometry, our proof would induce a proof of the parallel postulate in the setting of hyperbolic geometry. But the parallel postulate is not true in hyperbolic geometry, so this is absurd. </p>
<hr>
<p>This is a major method for showing that statements are unprovable in various theories. Indeed, a theorem of Gödel (Gödel's completeness theorem) tells us that if a statement <span class="math-container">$s$</span> in the language of some axiomatic theory <span class="math-container">$\mathbb T$</span> is unprovable then there is <em>always</em> some structure that satisfies the axioms of <span class="math-container">$\mathbb T$</span> in which <span class="math-container">$s$</span> is false. So showing that <span class="math-container">$s$</span> is unprovable often amounts to finding such a structure.</p>
<p>It is also possible to show that things are unprovable using a direct combinatorial argument on the axioms and deduction rules you are allowed in your logic. I won't go into that here.</p>
<p>You're probably interested in things like Gödel's incompleteness theorem, that say that there are statements that are unprovable in a particular theory called ZFC set theory, which is often used as the foundation of <em>all mathematics</em> (note: there is in fact plenty of mathematics that cannot be expressed in ZFC, so <em>all</em> isn't really correct here). This situation is not at all different from the geometrical example I gave above: </p>
<p>If a particular statement is neither provable nor disprovable from the axioms of <em>all mathematics</em> it means that there are two structures out there, both of which interpret the axioms of <em>all mathematics</em>, in one of which the statement is true and in the other of which the statement is false. </p>
<p>Sometimes we have explicit examples: one important problem at the turn of the century was the <em>Continuum Hypothesis</em>. The problem was solved in two steps:</p>
<ul>
<li>Gödel gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was true.</li>
<li>Later, Cohen gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was false.</li>
</ul>
<p>Between them, these results show that the Continuum Hypothesis is in fact neither provable nor disprovable in ZFC set theory. </p>
| <p>First of all in the following answer I allowed myself (contrary to my general nature) to focus my efforts on simplicity, rather than formal correctness.</p>
<p>In general, I think that the way we teach the concept of <em>axioms</em> is rather unfortunate. While traditionally axioms were thought of as statements that are - in some philosophical way - <em>obviously true</em> and <em>don't need further justifications</em>, this view has shifted a lot in the last century or so. Rather than thinking of axioms as <em>obvious truths</em>, think of them as statements that we <em>declare to be true</em>. Let $\mathcal A$ be a set of axioms. We can now ask a bunch of questions about $\mathcal A$.</p>
<ul>
<li>Is $\mathcal A$ self-contradictory? I.e. does there exist a proof (<- this needs to be formalized, but for the sake of simplicity just think of your informal notion of proofs) - starting from formulas in $\mathcal A$ that leads to a contradiction? If that's the case, then $\mathcal A$ was poorly chosen. If all the statements in $\mathcal A$ should be true (in a philosophical sense), then they cannot lead to a contradiction. So our first requirement is that $\mathcal A$ - should it represent a collection of true statements - is not self-contradictory.</li>
<li>Does $\mathcal A$ prove interesting statements? Take for example $\mathcal A$ as the axioms of set theory (e.g. $\mathcal A = \operatorname{ZFC}$). In this case we can prove all sorts of interesting mathematical statements. In fact, it seems reasonable that every mathematical theorem that can be proved by the usual style of informal proofs can be formally proved from $\mathcal A$. This is one of the reasons the axioms of set theory have been so successful.</li>
<li>Is $\mathcal A$ a <em>natural</em> set of axioms? ...</li>
<li>...</li>
<li>Is there a statement $\phi$ which $\mathcal A$ does not decide? I.e. is there a statement $\phi$ such that there is no proof of $\phi$ or $\neg \phi$ starting from $\mathcal A$?</li>
</ul>
<p>The last point is what we mean when we say that <em>$\phi$ is unprovable from $\mathcal A$</em>. And if $\mathcal A$ is our background theory, say $\mathcal A = \operatorname{ZFC}$, we just say that <em>$\phi$ is unprovable</em>. </p>
<p>By a very general theorem of Kurt Gödel, any <em>natural</em> set of axioms $\mathcal A$ has statements that are unprovable from it. In fact, the statement "$\mathcal A$ is not self-contradictory" is not provable from $\mathcal A$. So, while natural sets of axioms $\mathcal A$ are not self-contradictory - they themselves cannot prove this fact. This is rather unfortunate and demonstrates that David Hilbert's program on the foundation of mathematics - in its original form - is impossible. The natural workaround is something contrary to the general nature of mathematics - a leap of faith: If $\mathcal A$ is a sufficiently natural set of axioms (or otherwise <em>certified</em>), we <em>believe</em> that it is consistent (or - if you're more like me - you <em>assume</em> it is consistent until you see a reason not to). </p>
<p>This is - for example - the case for $\mathcal A = \operatorname{ZFC}$ and for the remainder of my answer, I will restrict myself to this scenario. Now that we know that $\mathcal A$ does not decide all statements (and arguably does not prove some true statements - like its consistency), a new question arises:</p>
<ul>
<li>Does $\operatorname{ZFC}$ decide all <em>mathematical</em> statements? In other words: Is there a question about typical mathematical objects that $\operatorname{ZFC}$ does not answer?</li>
</ul>
<p>The - to some people unfortunate - answer is yes and the most famous example is</p>
<blockquote>
<p>$\operatorname{ZFC}$ does not decide how many real numbers there are.</p>
</blockquote>
<p>Actually proving this fact, took mathematicians (logicians) many decades. At the end of this effort, however, we not only had a way to prove this single statement, but we actually obtained a very general method to prove the independence of many statements (the so-called <strong>method of forcing</strong>, introduced by Paul Cohen in 1963).</p>
<p>The idea - roughly speaking - is as follows: Let $\phi$ be a statement, say </p>
<blockquote>
<p>$\phi \equiv$ "there is no infinity strictly between the infinity of $\mathbb N$ and of $\mathbb R$" </p>
</blockquote>
<p>Let $\mathcal M$ be a model of $\operatorname{ZFC}$. Starting from $\mathcal M$ we would like to construct new models $\mathcal M_{\phi}$ and $\mathcal M_{\neg \phi}$ of $\operatorname{ZFC}$ such that $\mathcal M_{\phi} \models \phi$ and $\mathcal M_{\neg \phi} \models \neg \phi$ (i.e. $\phi$ is true in $\mathcal M_{\phi}$ and $\phi$ is false in $\mathcal M_{\neg \phi}$). If this is possible, then this proves that $\phi$ is not decided by $\operatorname{ZFC}$. Why is that?</p>
<p>Well, if it were decided by $\operatorname{ZFC}$, then there would be a proof of $\phi$ or a proof of $\neg \phi$. Let us say that $\phi$ has a proof (the other case is the same). Then, by <em>soundness</em> of our proofs, any model that satisfies $\operatorname{ZFC}$ must satisfy $\phi$, so there cannot be a model $\mathcal M_{\neg \phi}$ as above.</p>
|
matrices | <p>What is an <em>intuitive</em> meaning of the null space of a matrix? Why is it useful?</p>
<p>I'm not looking for textbook definitions. My textbook gives me the definition, but I just don't "get" it.</p>
<p>E.g.: I think of the <em>rank</em> $r$ of a matrix as the minimum number of dimensions that a linear combination of its columns would have; it tells me that, if I combined the vectors in its columns in some order, I'd get a set of coordinates for an $r$-dimensional space, where $r$ is minimum (please correct me if I'm wrong). So that means I can relate <em>rank</em> (and also dimension) to actual coordinate systems, and so it makes sense to me. But I can't think of any physical meaning for a null space... could someone explain what its meaning would be, for example, in a coordinate system?</p>
<p>Thanks!</p>
| <p>If $A$ is your matrix, the null-space is, simply put, the set of all vectors $v$ such that $A \cdot v = 0$. It's good to think of the matrix as a linear transformation; if you let $h(v) = A \cdot v$, then the null-space is again the set of all vectors that are sent to the zero vector by $h$. Think of this as the set of vectors that <em>lose their identity</em> as $h$ is applied to them.</p>
<p>Note that the null-space is equivalently the set of solutions to the homogeneous equation $A \cdot v = 0$.</p>
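<p>To make this concrete, here is a small sketch in Python with NumPy (the matrix $A$ is just an illustrative example of mine): the rows of $V^T$ from the SVD whose singular values vanish span the null space.</p>
<pre><code>import numpy as np

# A maps R^3 to R^2; its null space is the set of all v with A @ v = 0.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # rank 1, so the null space is 2-dimensional

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int((s > tol).sum())
null_basis = Vt[rank:].T            # columns form a basis of the null space

print(np.allclose(A @ null_basis, 0))   # True: these vectors lose their identity
</code></pre>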
<p>Nullity is the complement to the rank of a matrix. They are both really important; here is a <a href="https://math.stackexchange.com/questions/21100/importance-of-rank-of-a-matrix">similar question</a> on the rank of a matrix; you can find some nice answers there explaining why it matters.</p>
| <p>This is <a href="https://math.stackexchange.com/a/987657">an answer</a> I got from <a href="https://math.stackexchange.com/q/987146">my own question</a>, it's pretty awesome!</p>
<blockquote>
<p>Let's suppose that the matrix A represents a physical system. As an example, let's assume our system is a rocket, and A is a matrix representing the directions we can go based on our thrusters. So what do the null space and the column space represent?</p>
<p>Well let's suppose we have a direction that we're interested in. Is it in our column space? If so, then we can move in that direction. The column space is the set of directions that we can achieve based on our thrusters. Let's suppose that we have three thrusters equally spaced around our rocket. If they're all perfectly functional then we can move in any direction. In this case our column space is the entire range. But what happens when a thruster breaks? Now we've only got two thrusters. Our linear system will have changed (the matrix A will be different), and our column space will be reduced.</p>
<p>What's the null space? The null space is the set of thruster instructions that completely waste fuel. They're the set of instructions where our thrusters will thrust, but the direction will not be changed at all.</p>
<p>Another example: Perhaps A can represent a rate of return on investments. The range is all the rates of return that are achievable. The null space is all the investments that can be made that wouldn't change the rate of return at all.</p>
<p>Another example: room illumination. The range of A represents the area of the room that can be illuminated. The null space of A represents the power we can apply to lamps that don't change the illumination in the room at all.</p>
</blockquote>
<p>-- <a href="https://math.stackexchange.com/a/987657">NicNic8</a></p>
|
linear-algebra | <p>Why do we care about eigenvalues of graphs?</p>
<p>Of course, any novel question in mathematics is interesting, but there is an entire discipline of mathematics devoted to studying these eigenvalues, so they must be important.</p>
<p>I always assumed that spectral graph theory extends graph theory by providing tools to prove things we couldn't otherwise, somewhat like how representation theory extends finite group theory. But most results I see in spectral graph theory seem to concern eigenvalues not as means to an end, but as objects of interest in their own right.</p>
<p>I also considered practical value as motivation, e.g. using a given set of eigenvalues to put bounds on essential properties of graphs, such as maximum vertex degree. But I can't imagine a situation in which I would have access to a graph's eigenvalues before I would know much more elementary information like maximum vertex degree.</p>
<p>(<em>EDIT:</em> for example, dtldarek points out that $\lambda_2$ is related to diameter, but then why would we need $\lambda_2$ when we already have diameter? Is this somehow conceptually beneficial?)</p>
<blockquote>
<p>So, what is the meaning of graph spectra intuitively? And for what practical purposes are they used? Why is finding the eigenvalues of a graph's adjacency/Laplacian matrices more than just a novel problem?</p>
</blockquote>
| <p>This question already has a number of nice answers; I want to emphasize the breadth of
this topic.</p>
<p>Graphs can be represented by matrices - adjacency matrices and various flavours of
Laplacian matrices. This almost immediately raises the question as to what are
the connections between the spectra of these matrices and the properties of the
graphs. Let's call the study of these connections "the theory of graph spectra".
(But I am not entirely happy with this definition, see below.) It is tempting to view the map
from graphs to eigenvalues as a kind of Fourier theory, but there are difficulties
with this analogy. First, graphs in general are not determined by their eigenvalues.
Second, which of the many adjacency matrices should we use?</p>
<p>The earliest work on graph spectra was carried out in the context of the Hueckel
molecular orbital theory in Quantum Chemistry. This led, among other things, to work
on the matching polynomial; this gives us eigenvalues without adjacency matrices
(which is why I feel the above definition of the topic is unsatisfactory). A more recent
manifestation of this stream of ideas is the work on the spectra of fullerenes.</p>
<p>The second source of the topic arises in Seidel's work on regular two-graphs,
which started with questions about regular simplices in real projective space
and led to extraordinarily interesting questions about sets of equiangular lines
in real space. The complex analogs of these questions are now of interest to quantum
physicists - see SIC-POVMs. (It is not clear what role graph theory can play here.)
In parallel with Seidel's work was the fundamental paper by Hoffman and
Singleton on Moore graphs of diameter two. In both cases, the key observation was
that certain extremal classes of graphs could be characterized very naturally
by conditions on their spectra. This work gained momentum because a number of sporadic
simple groups were first constructed as automorphism groups of graphs. For graph
theorists it flowered into the theory of distance-regular graphs, starting with
the work of Biggs and his students, and still very active. </p>
<p>One feature of the paper of Hoffman and Singleton is that its conclusion makes no reference
to spectra. So it offers an important graph theoretical result for which the "book proof"
uses eigenvalues. Many of the results on distance-regular graphs preserve this feature.</p>
<p>Hoffman is also famous for his eigenvalue bounds on chromatic numbers, and related
bounds on the maximum size of independent sets and cliques. This is closely related
to Lovász's work on Shannon capacity. Both the Erdős-Ko-Rado
theorem and many of its analogs can now be obtained using extensions of these techniques.</p>
<p>Physicists have proposed algorithms for graph isomorphism based on the spectra of
matrices associated to discrete and continuous walks. The connections between
continuous quantum walks and graph spectra are very strong.</p>
| <p>I can't speak much to what traditional Spectral Graph Theory is about, but my personal research has included the study of what I call "Spectral Realizations" of graphs. A spectral realization is a special geometric realization (vertices are not-necessarily-distinct points, edges are not-necessarily-non-degenerate line segments, in some $\mathbb{R}^n$) derived from the eigenvectors of a graph's adjacency matrix.</p>
<blockquote>
<p>In particular, if the <em>rows</em> of a matrix constitute a basis for some eigenspace of the adjacency matrix a graph $G$, then the <em>columns</em> of that matrix are coordinate vectors of (a projection of) a spectral realization.</p>
</blockquote>
<p>A spectral realization of a graph has two nice properties:</p>
<ul>
<li>It's <em>harmonious</em>: Every graph automorphism induces a rigid isometry of the realization; you can <em>see</em> the graph's automorphic structure!</li>
<li>It's <em>eigenic</em>: Moving each vertex to the vector-sum of its immediate neighbors is equivalent to <em>scaling</em> the figure; the scale factor is the corresponding eigenvalue.</li>
</ul>
<p>Well, the properties are nice <em>in theory</em>. Usually, a spectral realization is a jumble of collapsed segments, or is embedded in high-dimensional space; such circumstances make a realization difficult to "see". Nevertheless, a spectral realization can be a helpful first pass at visualizing a graph. Moreover, a graph with a high degree of symmetry can admit some visually-interesting low-dimensional spectral realizations; for example, the skeleton of the truncated octahedron has this modestly-elaborate collection:</p>
<p><img src="https://i.sstatic.net/ffIyp.png" alt="Spectral Realizations of the Truncated Octahedron"></p>
<p>For a gallery of hundreds of these things, see the PDF linked at my Bloog post, <a href="http://daylateanddollarshort.com/bloog/spectral-realizations-of-graphs/" rel="noreferrer">"Spectral Realizations of Graphs"</a>.</p>
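<p>Here is a small sketch of the idea in Python with NumPy (my own example, using the cycle graph $C_6$): an orthonormal basis of the eigenspace for the eigenvalue $1$ gives planar coordinates of a spectral realization, namely a regular hexagon, and the eigenic property can be checked directly.</p>
<pre><code>import numpy as np

n = 6
A = np.zeros((n, n))               # adjacency matrix of the cycle graph C_6
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

w, V = np.linalg.eigh(A)           # eigenvalues 2cos(2 pi k/6): -2,-1,-1,1,1,2
coords = V[:, np.isclose(w, 1.0)]  # 6 x 2: one planar point per vertex

print(np.round(coords, 3))                    # vertices of a regular hexagon
print(np.allclose(A @ coords, 1.0 * coords))  # eigenic: neighbour-sums scale by 1
</code></pre>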
<p>Since many mathematical objects decompose into eigen-objects, it probably comes as no surprise that <em>any geometric realization of a graph is the sum of spectral realizations of that graph</em>. (Simply decomposing the realization's coordinate matrix into eigen-matrices gets most of the way to that result, although the eigen-matrices themselves usually represent "affine images" of properly-spectral realizations. The fact that affine images decompose into a sum of <em>similar</em> images takes <a href="http://daylateanddollarshort.com/bloog/extending-a-theorem-of-barlotti/" rel="noreferrer">an extension of a theorem of Barlotti</a>.) There's likely something interesting to be said about how each spectral component influences the properties of the combined figure.</p>
<p>Anyway ... That's why <em>I</em> care about the eigenvalues of graphs.</p>
|
combinatorics | <p>I'd like to know if it's possible to calculate the odds of winning a game of Minesweeper (on easy difficulty) in a single click. <a href="http://www.minesweeper.info/wiki/One_Click_Bug">This page</a> documents a bug that occurs if you do so, and they calculate the odds to around 1 in 800,000. However, this is based on the older version of Minesweeper, which had a fixed number of preset boards, so not every arrangement of mines was possible. (Also the board size in the current version is 9x9, while the old one was 8x8. Let's ignore the intermediate and expert levels for now - I assume those odds are nearly impossible, though a generalized solution that could solve for any W×H and mine-count would be cool too, but a lot more work I'd think.) In general, the increased board size (with the same number of mines), as well as the removal of the preset boards would both probably make such an event far more common.</p>
<p>So, assuming a 9x9 board with 10 mines, and assuming every possible arrangement of mines is equally likely (not true given the pseudo-random nature of computer random number generators, but let's pretend), and knowing that the first click is always safe (assume the described behavior on that site still holds - if you click on a mine in the first click, it's moved to the first available square in the upper-left corner), we'd need to first calculate the number of boards that are 1-click solvable. That is, boards with only one opening, and no numbered squares that are not adjacent to that opening. The total number of boards is easy enough: $\frac{(W×H)!}{((W×H)-M)! ×M!}$ or $\frac{81!}{71!×10!} \approx 1.878×10^{12}$. (Trickier is figuring out which boards are not one-click solvable unless you click on a mine and move it. We can maybe ignore the first-click-safe rule if it over-complicates things.) Valid arrangements would have all 10 mines either on the edges or far enough away from each other to avoid creating numbers which don't touch the opening. Then it's a simple matter of counting how many un-numbered spaces exist on each board and dividing by 81.</p>
<p>Is this a calculation that can reasonably be represented in a mathematical formula? Or would it make more sense to write a program to test every possible board configuration? (Unfortunately, the numbers we're dealing with get pretty close to the maximum value storable in a 64-bit integer, so overflow is very likely here. For example, the default Windows calculator completely borks the number unless you multiply by hand from 81 down to 72.)</p>
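<p>(For reference, the total itself is easy to compute exactly in a language with arbitrary-precision integers, e.g. Python, so at least the overflow part is avoidable:)</p>
<pre><code>from math import comb

# Number of ways to place 10 mines on a 9x9 board; Python ints never overflow.
print(comb(81, 10))   # 1878392407320, i.e. about 1.878e12
</code></pre>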
| <p>We must ignore the "cannot lose on first click" rule as it severely complicates things.</p>
<p>In this answer, I will be using a notation similar to chess's FEN (<a href="https://en.wikipedia.org/wiki/Forsyth-Edwards_Notation" rel="nofollow">Forsyth-Edwards Notation</a>) to describe minesweeper boards. <em>m</em> is a mine and empty spaces are denoted by numbers. We start at the top of the board and move from left to right, returning to the left at the end of each row. To describe a specific square, the columns are numbered from <em>a</em> to <em>h</em>, left to right, and the rows are numbered from 8 to 1, top to bottom.</p>
<p>On a minesweeper board, all mines are adjacent to numbered squares that say how many mines are next to them (including diagonally). If there is ever a numbered square surrounded only by mines and other numbered squares, new squares will stop being revealed at that square. Therefore, the question is actually:</p>
<blockquote>
<p>How many 9 × 9 minesweeper boards with 10 mines exist such that every blank square adjacent to a mine touches a square that is neither a mine nor adjacent to one?</p>
</blockquote>
<p>I like to approach problems like these by placing mines down one by one. There are 81 squares to place the first mine. If we place it in a corner, say a1, then the three diagonal squares adjacent to the corner (in this case a3, b2, and c1) are no longer valid (either a2 or b1 is now "trapped"). If we place it on any edge square except the eight squares adjacent to the corners, the squares two horizontal or vertical spaces away become invalid. On edge squares adjacent to the corners (say b1) three squares also become unavailable. On centre squares, either 4 or 3 squares become unavailable.</p>
<p>The problem is that invalid squares can be fixed at any time. For example, placing mines first on a1 and then c1 may be initially invalid, but a mine on b1 solves that.</p>
<p>This is my preliminary analysis. I conclude that there is no way to calculate this number of boards without brute force. However, anyone with sufficient karma is welcome to improve this answer.</p>
| <p>First, I apologise for my English.</p>
<p>A simple rule for detecting a one-click-winnable grid is:
"if every numbered cell has a 0 cell (i.e. an empty cell) adjacent to it, then the grid can be won in one click."
This rule is easy to see once you understand how the automatic opening works: if an opened cell is a 0, all adjacent cells are opened as well.</p>
<p>This rule is well suited to a brute-force algorithm for counting the favourable cases.</p>
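<p>For illustration, here is a sketch of such a check in Python (the code and the helper name are my own; strictly speaking the rule is necessary but a full one-click check would also require the empty cells to form a single connected region):</p>
<pre><code>def one_click_winnable(board):
    """board: list of strings, '*' = mine. Checks the rule above: every
    numbered cell must touch an empty (0) cell."""
    h, w = len(board), len(board[0])

    def neighbours(r, c):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and 0 <= r + dr < h and 0 <= c + dc < w:
                    yield r + dr, c + dc

    def number(r, c):   # the number shown on a non-mine cell
        return sum(board[rr][cc] == '*' for rr, cc in neighbours(r, c))

    for r in range(h):
        for c in range(w):
            if board[r][c] == '*' or number(r, c) == 0:
                continue
            if not any(board[rr][cc] != '*' and number(rr, cc) == 0
                       for rr, cc in neighbours(r, c)):
                return False   # a trapped numbered cell, e.g. the B N B pattern
    return True

print(one_click_winnable(["**", "..", ".."]))   # True
print(one_click_winnable(["*.*"]))              # False: the middle N is trapped
</code></pre>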
<p>Besides that, I tried to find the patterns that prevent a one-click win, in an attempt to count the number of grids that cannot be won with one click. If you ignore the walls this is simple: there are just two patterns that subsume all the others, B N B and B N N B (B for bomb, N for not bomb). These N cells are trapped, because everything adjacent to them is a bomb or a number, and by the rule above such grids cannot be won in one click.</p>
<p>There are also cases where bombs form clusters of non-openable cells without necessarily using these exact patterns.</p>
<p>But with walls, things like non-bomb cells trapped in corners and lines of bombs crossing the board make it a lot more difficult. These cases don't necessarily involve the B N B or B N N B patterns, because a wall itself blocks the domino-like chain of openings started by an empty cell. So I stopped there.</p>
<p>Even if we could work out all the patterns including the wall factor, we'd still have the problem of counting the possible combinations of patterns. So I think it is very hard, virtually impossible without a computer, to count these grids.</p>
<p>That's my contribution. I hope it can be useful.</p>
|
probability | <p>A teenage acquaintance of mine lamented:</p>
<blockquote>
<p>Every one of my friends is better friends with somebody else.</p>
</blockquote>
<p>Thanks to my knowledge of mathematics I could inform her that she's not alone and $e^{-1}\approx 37\%$ of all people could be expected to be in the same situation, which I'm sure cheered her up immensely.</p>
<p>This number assumes that friendships are distributed randomly, such that each person in a population of $n$ chooses a best friend at random. Then the probability that any given person is not anyone's best friend is $(1-\frac{1}{n-1})^{n-1}$, which tends to $e^{-1}$ for large $n$.</p>
<p>Afterwards I'm not sure this is actually the best way to analyze the claim. Perhaps instead we should imagine assigning a random "friendship strength" to each edge in the complete graph on $n$ vertices, in which case my friend's lament would be "every vertex I'm connected to has an edge with higher weight than my edge to it". This is not the same as "everyone choses a best friend at random", because it guarantees that there's at least one pair of people who're mutually best friends, namely the two ends of the edge with the highest weight.</p>
<p>(Of course, some people are not friends at all; we can handle that by assigning low weights to their mutual edges. As long as everyone has at least one actual friend, this won't change who are whose best friends).</p>
<p>(It doesn't matter which distribution the friendship weights are chosen by, as long as it's continuous -- because all that matters is the <em>relative</em> order between the weights. Equivalently, one may simply choose a random total order on the $n(n-1)/2$ edges in the complete graph).</p>
<p><strong>In this model, what is the probability that a given person is not anyone's best friend?</strong></p>
<p>By linearity of expectations, the probability of being <em>mutually</em> best friends with anyone is $\frac{n-1}{2n-3}\approx\frac 12$ (much better than in the earlier model), but that doesn't take into account the possibility that some poor soul has me as <em>their</em> best friend whereas I myself have other better friends. Linearity of expectation doesn't seem to help here -- it tells me that the <em>expected</em> number of people whose best friend I am is $1$, but not the probability of this number being $0$.</p>
<hr>
<p><em>(Edit: Several paragraphs of numerical results now moved to a significantly expanded answer)</em></p>
| <p>The probability for large $n$ is $e^{-1}$ in the friendship-strength model too. I can't even begin to <em>explain</em> why this is, but I have strong numerical evidence for it. More precisely, if $p(n)$ is the probability that someone in a population of $n$ isn't anyone's best friend, then it looks strongly like</p>
<p>$$ p(n) = \Bigl(1-\frac{1}{2n-7/3}\Bigr)^{2n-7/3} + O(n^{-3})$$
as $n$ tends to infinity.</p>
<p>The factor of $2$ may hint at some asymptotic connection between the two models, but it has to be subtle, because the best-friend relation certainly doesn't look the same in the two models -- as noted in the question, in the friendship-strength model we expect half of all people to be <em>mutually</em> best friends with someone, whereas in the model where everyone chooses a best friend independently, the <em>total</em> expected number of mutual friendships is only $\frac12$.</p>
<p>The offset $7/3$ was found experimentally, but there's good evidence that it is exact. If it means anything, it's a mystery to me what.</p>
<p><strong>How to compute the probability.</strong> Consider the complete graph on $n$ vertices, and assign random friendship weights to each edge. Imagine processing the edges in order from the strongest friendship towards the weakest. For each vertex/person, the <em>first time</em> we see an edge ending there will tell that person who their best friend is.</p>
<p>The graphs we build can become very complex, but for the purposes of counting we only need to distinguish three kinds of nodes:</p>
<ul>
<li><strong>W</strong> - Waiting people who don't yet know any of their friends. (That is, vertices that are not an endpoint of any edge processed yet).</li>
<li><strong>F</strong> - Friends, people who are friends with someone, but are not anyone's best friend <em>yet</em>. Perhaps one of the Waiting people will turn out to have them as their best friend.</li>
<li><strong>B</strong> - Best friends, who know they are someone's best friend.</li>
</ul>
<p>At each step in the processing of the graph, it can be described as a triple $(w,f,b)$ stating the number of each kind of node. We have $w+f+b=n$, and the starting state is $(n,0,0)$ with everyone still waiting.</p>
<ul>
<li>If we see a <strong>WW</strong> edge, two waiting people become mutually best friends, and we move to state $(w-2,f,b+2)$. There are $w(w-1)/2$ such edges.</li>
<li>If we see a <strong>WF</strong> edge, the <strong>F</strong> node is now someone's best friend and becomes a <strong>B</strong>, and the <strong>W</strong> node becomes <strong>F</strong>. The net effect is to move us to state $(w-1,f,b+1)$. There are $wf$ such edges.</li>
<li>If we see a <strong>WB</strong> edge, the <strong>W</strong> node becomes <strong>F</strong>, but the <strong>B</strong> stays a <strong>B</strong> -- we don't care how <em>many</em> people's best friends one is, as long as there is someone. We move to $(w-1,f+1,b)$, and there are $wb$ edges of this kind.</li>
<li>If we see a <strong>FF</strong> or <strong>FB</strong> or <strong>BB</strong> edge, it represents a friendship where both people already have better friends, so the state doesn't change.</li>
</ul>
<p>Thus, for each state, the next <strong>WW</strong> or <strong>WF</strong> or <strong>WB</strong> edge we see determines which state we move to, and since all edges are equally likely, the probabilities to move to the different successor states are $\frac{w-1}{2n-w-1}$ and $\frac{2f}{2n-w-1}$ and $\frac{2b}{2n-w-1}$, respectively.</p>
<p>Since $w$ decreases at every move between states, we can fill out a table of the probabilities that each state is ever visited simply by considering all possible states in order of decreasing $w$. When all edges have been seen we're in some state $(0,f,n-f)$, and summing over all these we can find the <em>expected</em> $f$ for a random weight assignment.</p>
<p>By linearity of expectation, the probability that any <em>given</em> node is <strong>F</strong> at the end must then be $\langle f\rangle/n$.</p>
<p>Since there are $O(n^2)$ states with $w+f+b=n$ and a constant amount of work for each state, this algorithm runs in time $O(n^2)$.</p>
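<p>Here is the algorithm as a short Python sketch (exact rationals via <code>fractions</code>; the function name and variable layout are my own):</p>
<pre><code>from fractions import Fraction

def p_not_best_friend(n):
    # prob[(w, f)] = probability of ever visiting state (w, f, b = n - w - f)
    prob = {(n, 0): Fraction(1)}
    for w in range(n, 0, -1):
        for key in [k for k in prob if k[0] == w]:
            p = prob.pop(key)
            f = key[1]
            b = n - w - f
            denom = 2 * n - w - 1
            if w >= 2:   # WW edge: go to (w-2, f, b+2)
                prob[(w - 2, f)] = prob.get((w - 2, f), 0) + p * Fraction(w - 1, denom)
            if f >= 1:   # WF edge: go to (w-1, f, b+1)
                prob[(w - 1, f)] = prob.get((w - 1, f), 0) + p * Fraction(2 * f, denom)
            if b >= 1:   # WB edge: go to (w-1, f+1, b)
                prob[(w - 1, f + 1)] = prob.get((w - 1, f + 1), 0) + p * Fraction(2 * b, denom)
    # all remaining states have w = 0; average f over them
    return sum(p * f for (w, f), p in prob.items()) / n

print(p_not_best_friend(5))   # 12/35, as in the table below
</code></pre>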
<p><strong>Numerical results.</strong> Here are exact results for $n$ up to 18:</p>
<pre><code> n approx exact
--------------------------------------------------
1 100% 1/1
2 0.00% 0/1
3 33.33% 1/3
4 33.33% 1/3
5 34.29% 12/35
6 34.81% 47/135
7 35.16% 731/2079
8 35.40% 1772/5005
9 35.58% 20609/57915
10 35.72% 1119109/3132675
11 35.83% 511144/1426425
12 35.92% 75988111/211527855
13 36.00% 1478400533/4106936925
14 36.06% 63352450072/175685635125
15 36.11% 5929774129117/16419849744375
16 36.16% 18809879890171/52019187845625
17 36.20% 514568399840884/1421472473796375
18 36.24% 120770557736740451/333297887934886875
</code></pre>
<p>After this point, exact rational arithmetic with 64-bit denominators starts overflowing. It does look like $p(n)$ tends towards $e^{-1}$. (As an aside, the sequences of numerators and denominators are both unknown to OEIS).</p>
<p>To get further, I switched to native machine floating point (Intel 80-bit) and got the $p(n)$ column in this table:</p>
<pre><code> n p(n) A B C D E F G H
---------------------------------------------------------------------
10 .3572375 1.97+ 4.65- 4.65- 4.65- 2.84+ 3.74+ 3.64+ 4.82-
20 .3629434 2.31+ 5.68- 5.68- 5.68- 3.47+ 4.67+ 4.28+ 5.87-
40 .3654985 2.62+ 6.65- 6.64- 6.64- 4.09+ 5.59+ 4.90+ 6.84-
80 .3667097 2.93+ 7.59- 7.57- 7.57- 4.70+ 6.49+ 5.51+ 7.77-
100 .3669469 3.03+ 7.89- 7.87- 7.86- 4.89+ 6.79+ 5.71+ 8.07-
200 .3674164 3.33+ 8.83- 8.79- 8.77- 5.50+ 7.69+ 6.32+ 8.99-
400 .3676487 3.64+ 9.79- 9.69- 9.65- 6.10+ 8.60+ 6.92+ 9.90-
800 .3677642 3.94+ 10.81- 10.60- 10.52- 6.70+ 9.50+ 7.52+ 10.80-
1000 .3677873 4.04+ 11.17- 10.89- 10.80- 6.90+ 9.79+ 7.72+ 11.10-
2000 .3678334 4.34+ 13.18- 11.80- 11.63- 7.50+ 10.69+ 8.32+ 12.00-
4000 .3678564 4.64+ 12.74+ 12.70- 12.41- 8.10+ 11.60+ 8.92+ 12.90-
8000 .3678679 4.94+ 13.15+ 13.60- 13.14- 8.70+ 12.50+ 9.52+ 13.81-
10000 .3678702 5.04+ 13.31+ 13.89- 13.36- 8.90+ 12.79+ 9.72+ 14.10-
20000 .3678748 5.34+ 13.86+ 14.80- 14.03- 9.50+ 13.69+ 10.32+ 15.00-
40000 .3678771 5.64+ 14.44+ 15.70- 14.67- 10.10+ 14.60+ 10.92+ 15.91-
</code></pre>
<p>The 8 other columns show how well $p(n)$ matches various attempts to model it. In each column I show $-\log_{10}|p(n)-f_i(n)|$ for some test function $f_i$ (that is, how many digits of agreement there are between $p(n)$ and $f_i(n)$), and the sign of the difference between $p$ and $f_i$.</p>
<ul>
<li>$f_{\tt A}(n)=e^{-1}$</li>
</ul>
<p>In the first column we compare to the constant $e^{-1}$. It is mainly there as evidence that $p(n)\to e^{-1}$. More precisely it looks like $p(n) = e^{-1} + O(n^{-1})$ -- whenever $n$ gets 10 times larger, another digit of $e^{-1}$ is produced.</p>
<ul>
<li>$f_{\tt C}(n)=\Bigl(1-\frac{1}{2n-7/3}\Bigr)^{2n-7/3}$</li>
</ul>
<p>I came across this function by comparing $p(n)$ to $(1-\frac{1}{n-1})^{n-1}$ (the probability in the choose-best-friends-independently model) and noticing that they almost matched between $n$ and $2n$. The offset $7/3$ was found by trial and error. With this value it looks like $f_{\tt C}(n)$ approximates $p(n)$ to about $O(n^{-3})$, since making $n$ 10 times larger gives <em>three</em> additional digits of agreement.</p>
<ul>
<li>$ f_{\tt B}=\Bigl(1-\frac{1}{2n-2.332}\Bigr)^{2n-2.332}$ and $f_{\tt D}=\Bigl(1-\frac{1}{2n-2.334}\Bigr)^{2n-2.334} $</li>
</ul>
<p>These columns provide evidence that $7/3$ in $f_{\tt C}$ is likely to be exact, since varying it just slightly in each direction gives clearly worse approximation to $p(n)$. These columns don't quite achieve three more digits of precision for each decade of $n$.</p>
<ul>
<li>$ f_{\tt E}(n)=e^{-1}\bigl(1-\frac14 n^{-1}\bigr)$ and $f_{\tt F}(n)=e^{-1}\bigl(1-\frac14 n^{-1} - \frac{11}{32} n^{-2}\bigr)$</li>
</ul>
<p>Two and three terms of the asymptotic expansion of $f_{\tt C}$ in $n$. $f_{\tt F}$ also improves cubically, but with a much larger error than $f_{\tt C}$. This seems to indicate that the specific structure of $f_{\tt C}$ is important for the fit, rather than just the first terms in its expansion.</p>
<ul>
<li>$ f_{\tt G}(n)=e^{-1}\bigl(1-\frac12 (2n-7/3)^{-1}\bigr)$ and $f_{\tt H}(n)=e^{-1}\bigl(1-\frac12 (2n-7/3)^{-1}-\frac{5}{24} (2n-7/3)^{-2}\bigr) $</li>
</ul>
<p>Here's a surprise! Expanding $f_{\tt C}$ in powers of $2n-7/3$ instead of powers of $n$ not only gives better approximations than $f_{\tt E}$ and $f_{\tt F}$, but also approximates $p(n)$ better than $f_{\tt C}$ itself does, by a factor of about $10^{0.2}\approx 1.6$. This seems to be even more mysterious than the fact that $f_{\tt C}$ matches.</p>
<p>At $n=40000$ the computation of $p(n)$ takes about a minute, and the comparisons begin to push the limit of computer floating point. The 15-16 digits of precision in some of the columns are barely even representable in double precision. Funnily enough, the calculation of $p(n)$ itself seems to be fairly robust compared to the approximations.</p>
| <p>Here's another take at this interesting problem.
Consider a group of $n+1$ persons $x_0,\dots,x_n$ with $x_0$ being myself.
Define the probability
$$
P_{n+1}(i)=P(\textrm{each of $x_1,\dots,x_i$ has me as his best friend}).
$$
Then we can compute the wanted probability using the <em>inclusion-exclusion formula</em> as follows:
\begin{eqnarray}
P_{n+1} &=& P(\textrm{I am nobody's best friend}) \\
&=& 1-P(\textrm{I am somebody's best friend}) \\
&=& \sum_{i=0}^n (-1)^i\binom{n}{i}P_{n+1}(i). \tag{$*$}
\end{eqnarray}</p>
<p>To compute $P_{n+1}(i)$, note that for the condition to be true,
it is necessary that of all friendships between one of $x_1,\dots,x_i$ and anybody,
the one with the highest weight is a friendship with me.
The probability of that being the case is
$$
\frac{i}{in-i(i-1)/2}=\frac{2}{2n-i+1}.
$$
Suppose, then, that that is the case and let this friendship be $(x_0, x_i)$.
Then I am certainly the best friend of $x_i$.
The probability that I am also the best friend of each of $x_1,\dots,x_{i-1}$ is unchanged.
So we can repeat the argument and get
$$
P_{n+1}(i)
=\frac{2}{2n}\cdot\frac{2}{2n-1}\cdots\frac{2}{2n-i+1}
=\frac{2^i(2n-i)!}{(2n)!}.
$$
Plugging this into $(*)$ gives a formula for $P_{n+1}$ that agrees with Henning's results.</p>
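<p>A quick sanity check of $(*)$ in Python (exact arithmetic; the function name is mine):</p>
<pre><code>from fractions import Fraction
from math import comb, factorial

def P(n_plus_1):                       # formula (*) for a group of n+1 people
    n = n_plus_1 - 1
    return sum(Fraction((-1)**i * comb(n, i) * 2**i * factorial(2*n - i),
                        factorial(2*n))
               for i in range(n + 1))

print(P(5), P(7))                      # 12/35 and 731/2079, matching Henning's table
</code></pre>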
<p>To prove $P_{n+1}\to e^{-1}$ for $n\to\infty$, use that the $i$-th term of $(*)$ converges to, but is numerically smaller than, the $i$-th term of
$$
e^{-1}=\sum_{i=0}^\infty\frac{(-1)^i}{i!}.
$$</p>
<p>By the way, in the alternative model where each person chooses a best friend at random, we would instead have $P_{n+1}(i)=1/n^i$ and $P_{n+1}=(1-1/n)^n$.</p>
|
matrices | <p>I'm TAing linear algebra next quarter, and it strikes me that I only know one example of an application I can present to my students. I'm looking for applications of elementary linear algebra outside of mathematics that I might talk about in discussion section.</p>
<p>In our class, we cover the basics (linear transformations; matrices; subspaces of $\Bbb R^n$; rank-nullity), orthogonal matrices and the dot product (incl. least squares!), diagonalization, quadratic forms, and singular-value decomposition.</p>
<p>Showing my ignorance, the only application of these I know is the one that was presented in the linear algebra class I took: representing dynamical systems as Markov processes, and diagonalizing the matrix involved to get a nice formula for the $n$th state of the system. But surely there are more than these.</p>
<p>What are some applications of the linear algebra covered in a first course that can motivate the subject for students? </p>
| <p>I was a teaching assistant in Linear Algebra previous semester and I collected a few applications to present to my students. This is one of them:</p>
<p><strong>Google's PageRank algorithm</strong></p>
<p>This algorithm is the "heart" of the search engine and sorts documents of the world-wide-web by their "importance" in decreasing order. For the sake of simplicity, let us look at a system only containing of four different websites. We draw an arrow from $i$ to $j$ if there is a link from $i$ to $j$.</p>
<p><img src="https://i.sstatic.net/mHgB7.png" alt=""></p>
<p>The goal is to compute a vector $\underline{x} \in \mathbb{R}^4$, where each entry $x_i$ represents the website's importance. A bigger value means the website is more important. There are three criteria contributing to the $x_i$:</p>
<ol>
<li>The more websites contain a link to $i$, the bigger $x_i$ gets.</li>
<li>Links from more important websites have a more relevant weight than those of less important websites.</li>
<li>Links from a website which contains many links to other websites (outlinks) have less weight.</li>
</ol>
<p>Each website has exactly one "vote". This vote is distributed uniformly to each of the website's outlinks. This is known as <em>Web-Democracy</em>. It leads to a system of linear equations for $\underline{x}$. In our case, for</p>
<p>$$P = \begin{pmatrix} 0&0&1&1/2\\ 1/3&0&0&0\\ 1/3& 1/2&0&1/2\\ 1/3&1/2&0&0 \end{pmatrix}$$</p>
<p>the system of linear equations reads $\underline{x} = P \underline{x}$. The matrix $P$ is a stochastic matrix, hence $1$ is an eigenvalue of $P$. One of the corresponding eigenvectors is</p>
<p>$$\underline{x} = \begin{pmatrix} 12\\4\\9\\6 \end{pmatrix},$$</p>
<p>hence $x_1 > x_3 > x_4 > x_2$. Let</p>
<p>$$G = \alpha P + (1-\alpha)S,$$</p>
<p>where $S$ is a matrix corresponding to purely randomised browsing without links, i.e. all entries are $\frac{1}{N}$ if there are $N$ websites. The matrix $G$ is called the <em>Google-matrix</em>. The inventors of the PageRank algorithm, Sergey Brin and Larry Page, chose $\alpha = 0.85$. Note that $G$ is still a stochastic matrix. An eigenvector for the eigenvalue $1$ of $\underline{x} = G \underline{x}$ in our example would be (rounded)</p>
<p>$$\underline{x} = \begin{pmatrix} 18\\7\\14\\10 \end{pmatrix},$$</p>
<p>leading to the same ranking.</p>
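<p>If you want students to check this numerically, here is a short Python/NumPy sketch of mine using power iteration (which converges here because $G$ is a positive column-stochastic matrix):</p>
<pre><code>import numpy as np

P = np.array([[0,   0,   1, 1/2],
              [1/3, 0,   0, 0  ],
              [1/3, 1/2, 0, 1/2],
              [1/3, 1/2, 0, 0  ]])
N, alpha = 4, 0.85
G = alpha * P + (1 - alpha) * np.full((N, N), 1 / N)   # the Google matrix

x = np.full(N, 1 / N)         # power iteration: x converges to the
for _ in range(100):          # eigenvector of G for the eigenvalue 1
    x = G @ x
print(49 * x)                 # approx. (18, 7, 14, 10), as claimed above
</code></pre>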
| <p>Another very useful application of Linear algebra is</p>
<p><strong>Image Compression (Using the SVD)</strong></p>
<p>Any real matrix $A$ can be written as</p>
<p>$$A = U \Sigma V^T = \sum_{i=1}^{\operatorname{rank}(A)} u_i \sigma_i v_i^T,$$</p>
<p>where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. Every greyscale image can be represented as a matrix of the intensity values of its pixels, where each element of the matrix is a number between zero and one. For images of higher resolution, we have to store more numbers in the intensity matrix; e.g. for a 720p greyscale photo (1280 x 720) we have 921'600 elements in its intensity matrix. Instead of using up storage by saving all those elements, the singular value decomposition of this matrix leads to an approximation that requires much less storage.</p>
<p>You can create a <em>rank $J$ approximation</em> of the original image by using the first $J$ singular values of its intensity matrix, i.e. only looking at</p>
<p>$$\sum_{i=1}^J u_i \sigma_i v_i^T .$$</p>
<p>This saves a large amount of disk space, but also causes the image to lose some of its visual clarity. Therefore, you must choose a number $J$ such that the loss of visual quality is minimal but there are significant memory savings. Example:</p>
<p><img src="https://i.sstatic.net/JkUsU.png" alt=""></p>
<p>The image on the RHS is an approximation of the image on the LHS by keeping $\approx 10\%$ of the singular values. It takes up $\approx 18\%$ of the original image's storage. (<a href="http://www.math.uci.edu/icamp/courses/math77c/demos/SVD_compress.pdf" rel="noreferrer">Source</a>)</p>
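<p>The mechanics are easy to demonstrate in Python with NumPy (a sketch of mine with a random stand-in matrix; for a real demo you would load an actual image):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
img = rng.random((720, 1280))               # stand-in for an intensity matrix

U, s, Vt = np.linalg.svd(img, full_matrices=False)

J = 72                                      # keep ~10% of the singular values
approx = U[:, :J] @ np.diag(s[:J]) @ Vt[:J]

# Rank-J storage: J*(720 + 1280 + 1) numbers instead of 720*1280.
print(J * (720 + 1280 + 1) / (720 * 1280))  # about 0.16 of the original size
</code></pre>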
|
game-theory | <p>This is from Mark Joshi's classic book. The full question is:</p>
<p>"I pick a number n from 1 to 100. If you guess correctly, I pay you $n and zero otherwise. How much would you pay to play this game?"</p>
<p>Joshi offers a solution, but I am struggling with it. From what I understand, the person picking the number has incentive to pick lower numbers as this will result in lower payoffs. However, low numbers will likely be selected by the player so the number should not be too low. Joshi suggests the following as the expectation of the game:</p>
<p><span class="math-container">$$\Big(\sum_{i=1}^{100}\frac{1}{i}\Big)^{-1}$$</span></p>
<p>Not sure if anyone could with the intuition on how the solution was obtained. I guess it arises as the initial person picking the number should pick with decaying probability going from 1 to 100?</p>
<p>Thanks</p>
| <p>The intuition is that in an optimal strategy, the picker should be indifferent to what the guesser chooses. </p>
<p>Suppose we just take <span class="math-container">$n=3$</span> for simplicity. Suppose the picker chooses <span class="math-container">$1$</span> with probability <span class="math-container">$p_1$</span>, chooses <span class="math-container">$2$</span> with probability <span class="math-container">$p_2$</span>, and <span class="math-container">$3$</span> with probability <span class="math-container">$p_3$</span>. The selection of <span class="math-container">$p_1, p_2, p_3$</span> constitutes the picker's strategy.</p>
<p>The indifference criterion means that <span class="math-container">$1p_1=2p_2=3p_3$</span>. However, also <span class="math-container">$p_1+p_2+p_3=1$</span>. To solve, plug in and get <span class="math-container">$$p_1+\frac{1}{2}p_1+\frac{1}{3}p_1=1$$</span>
Hence, <span class="math-container">$p_1=(1+\frac{1}{2}+\frac{1}{3})^{-1}$</span>. This is also the average amount that the guesser wins, regardless of which number guessed. This is also the expected value of the game.</p>
| <p>Here is a rigorous justification for the answer. First, note that this is a zero-sum game - my gain is the negative of your loss. It's clear that I should never play a deterministic strategy - you can simply play adversarially by always avoiding my deterministic guess. Should you play deterministically? It's not clear yet, but if you are ever going to play a deterministic strategy, that strategy has to always play 1, since that minimizes my gain/your loss. Then let's think about randomized strategies - is there a randomized strategy that is better for you than always playing 1? A randomized strategy for a player is just a probability distribution according to which the player either picks or guesses the number. Let my strategy <span class="math-container">$P$</span> be <span class="math-container">$<p_1,...,p_{100}>$</span>, where <span class="math-container">$p_i$</span> denotes the probability that I guess <span class="math-container">$i$</span>. Now there is a result in game theory that says that in a 2-player zero-sum game, to calculate player A's optimal strategy, we can examine each of player B's deterministic strategies, calculate the payoff of A, and maximize the minimum of those payoffs. (another result is that in a 2-player zero-sum game, if both players play optimally, then they can publish their strategies and it would not affect the expected payoff, which is why I assume that both players know the other player's strategy). You have 100 deterministic strategies. If you always play 1, then my exp. payoff is <span class="math-container">$p_1$</span>; if you always play 2, then my exp. payoff is <span class="math-container">$2p_2$</span>; ...; if you always play <span class="math-container">$i$</span>, then my exp. payoff is <span class="math-container">$ip_i$</span>. Thus, I want to maximize the minimum among <span class="math-container">$p_1,2p_2,...,100p_{100}$</span>, subject to <span class="math-container">$p_1+p_2+...+p_{100}=1$</span>, and all <span class="math-container">$p_i\geq 0$</span>. Clearly we should set them equal, and that gives <span class="math-container">$$p_k=\frac{1}{k}\frac{1}{\sum_{i=1}^{100}\frac{1}{i}}$$</span>
The payoff is then
<span class="math-container">$$\frac{1}{\sum_{i=1}^{100}\frac{1}{i}}$$</span>
By the way, this is lower than 1, so you should not play deterministically.</p>
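<p>A few lines of Python make the indifference explicit (exact arithmetic; this is my own sketch):</p>
<pre><code>from fractions import Fraction

H = sum(Fraction(1, i) for i in range(1, 101))    # harmonic number H_100
value = 1 / H                                     # expected payoff of the game
p = [Fraction(1, k) / H for k in range(1, 101)]   # picker's optimal mixture

print(float(value))                               # about 0.193
print(all(k * p[k - 1] == value for k in range(1, 101)))   # True: indifference
</code></pre>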
|
probability | <p>There's a question in my Olympiad questions book which I can't seem to solve:</p>
<blockquote>
<p>You have the option to throw a die up to three times. You will earn
the face value of the die. You have the option to stop after each
throw and walk away with the money earned. The earnings are not additive. What is the expected payoff of this game?</p>
</blockquote>
<p>I found a solution <a href="https://web.archive.org/web/20180120152240/http://mathforum.org/library/drmath/view/68228.html" rel="nofollow noreferrer">here</a> but I don't understand it.</p>
| <p>Let's suppose we have only 1 roll. What is the expected payoff? Each roll is equally likely, so it will show $1,2,3,4,5,6$ with equal probability. Thus their average of $3.5$ is the expected payoff.</p>
<p>Now let's suppose we have 2 rolls. If on the first roll, I roll a $6$, I would not continue. The next throw would only maintain my winnings of $6$ (with $1/6$ chance) or make me lose. Similarly, if I threw a $5$ or a $4$ on the first roll, I would not continue, because my expected payoff on the last throw would be a $3.5$. However, if I threw a $1,2$ of $3$, I would take that second round. This is again because I expect to win $3.5$.</p>
<p>So in the 2 roll game, if I roll a $4,5,6$, I keep those rolls, but if I throw a $1,2,3$, I reroll. Thus I have a $1/2$ chance of keeping a $4,5,6$, or a $1/2$ chance of rerolling. Rerolling has an expected return of $3.5$. As the $4,5,6$ are equally likely, rolling a $4,5$ or $6$ has expected return $5$. Thus my expected payout on 2 rolls is $.5(5) + .5(3.5) = 4.25$.</p>
<p>Now we go to the 3 roll game. If I roll a $5$ or $6$, I keep my roll. But now, even a $4$ is undesirable, because by rerolling, I'd be playing the 2 roll game, which has expected payout of $4.25$. So now the expected payout is $\frac{1}{3}(5.5) + \frac{2}{3}(4.25) = 4.\overline{66}$.</p>
<p>Does that make sense?</p>
| <p>This problem is solved using the theory of optimal stopping
for Markov chains. I will explain some of the theory, then turn to your specific question. You can learn more about this fascinating topic in Chapter 4 of <em>Introduction to Stochastic Processes</em> by Gregory F. Lawler.</p>
<hr>
<p>Think of a Markov chain with state space $\cal S$ as a game.</p>
<p>A <em>payoff function</em>
$f:{\cal S}\to[0,\infty)$
assigns a monetary "payoff'' to each state of the Markov chain.
This is the amount you would collect if you stop playing with the
chain in that state. </p>
<p>In contrast, the <em>value function</em>
$v:{\cal S}\to[0,\infty)$
is defined as the greatest expected payoff possible from each starting point;
$$v(x)=\sup_T \mathbb{E}_x(f(X_T)).$$ There is a single optimal strategy $T_{\rm opt}$
so that $v(x)=\mathbb{E}_x(f(X_{T_{\rm opt}})).$
It can be described as $T_{\rm opt}:=\inf(n\geq 0: X_n\in{\cal E})$,
where ${\cal E}=\{x\in {\cal S}\mid f(x)=v(x)\}$. That is, you should stop playing as soon as you hit the set $\cal E$.</p>
<hr>
<p><strong>Example:</strong> </p>
<p>You roll an ordinary die with outcomes $1,2,3,4,5,6$.
You can keep the value or roll again.
If you roll, you can keep the new value or roll a third time.
After the third roll you must stop. You win the amount showing on the die.
What is the value of this game?</p>
<p><strong>Solution:</strong>
The state space for the Markov chain is
$${\cal S}=\{\mbox{Start}\}\cup\left\{(n,d): n=2,1,0; d=1,2,3,4,5,6\right\}.$$
The variable $n$ tells you how many rolls you have left, and this decreases by one every
time you roll. Note that the states with $n=0$ are absorbing.</p>
<p>You can think of the state space as a tree,
the chain moves forward along the tree until it reaches the end. </p>
<p><img src="https://i.sstatic.net/pNXRN.jpg" alt="enter image description here"></p>
<p>The function $v$ is given above in green, while $f$ is in red.</p>
<p>The payoff function $f$ is zero at the start, and otherwise equals the number of spots on $d$.</p>
<p>To find the value function $v$, let's start at the right hand side of the tree.
At $n=0$, we have $v(0,d)=d$, and we calculate $v$ elsewhere by working backwards,
averaging over the next roll and taking the maximum of that and the current payoff.
Mathematically, we use the property $v(x)=\max(f(x),(Pv)(x))$ where $Pv(x)=\sum_{y\in {\cal S}} p(x,y)v(y).$</p>
<p>The value of the game at the start is \$4.66. The optimal strategy is to keep playing
until you reach a state where the red number and green number are the same. </p>
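<p>The backward induction is a few lines of Python (exact arithmetic; my own sketch):</p>
<pre><code>from fractions import Fraction

v = [Fraction(d) for d in range(1, 7)]   # n = 0: you must keep the face value
for _ in range(2):                       # two more levels: n = 1, then n = 2
    cont = sum(v) / 6                    # expected value of rolling again
    v = [max(Fraction(d), cont) for d in range(1, 7)]

print(sum(v) / 6)                        # 14/3, i.e. the $4.66 at the start
</code></pre>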
|
logic | <p>We know there are statements that are undecidable/independent of ZFC. Can there be a statement S, such that (ZFC $\not\vdash$ S and ZFC $\not\vdash$ ~S) is undecidable?</p>
| <p>At least one version of this question has a nearly-trivial answer; if you want to know if there's some statement G such that ZFC (or your PA-compatible logic system of choice) doesn't prove G, ZFC doesn't prove ~G, and we can't prove that ZFC doesn't prove G or not G, then any undecidable statement works!</p>
<p>Specifically, it's impossible for ZFC to prove that it can't prove something, because such a statement is tantamount to the consistency of ZFC; if ZFC were inconsistent then it could prove everything, so proving that there's something that can't be proved is equivalent to proving the consistency of ZFC, and of course this is forbidden by the second incompleteness theorem.</p>
| <p>(Zhen: Independence is with respect to ZFC.)</p>
<p>Given any sentence S, either (1) ZFC proves S, in which case it is a theorem of ZFC that ZFC proves S, or (2) ZFC does not prove S. In this case, ZFC is consistent, and ZFC does not know that it does not prove S (or else it would know that it is consistent, and therefore it wouldn't be, but then it would prove S). So, in this case ZFC does not prove that it does not prove S, and does not prove that it proves S. It follows easily that "ZFC proves S" is independent of ZFC, for any S for which ZFC does not prove S. </p>
<p>To ask whether the statement "ZFC proves S" or a variant of this such as (+): "ZFC does not prove S and does not prove not-S" is decidable, on the other hand, is silly, unless you are using the term in a strange fashion. For any S there is a Turing machine M with no input that outputs the truth value of the statement (+). You do not ask for the decidability of a single statement, but of a family of statements, say with S as a parameter. You probably need to clarify what you mean. </p>
<p>The only sensible way of understanding what you are asking is to take the set whose sole element is a sentence formalizing the statement (+), and asking whether that set is decidable, but of course it is as <em>any finite set (of natural numbers)</em> is trivially decidable. Perhaps more interesting is whether, calling the (formalization of the) statement in quotes $(+)_S$, the set $A=${$S\mid(+)_S$} is decidable. </p>
<p>Now: Suppose first that ZFC is inconsistent. Then ZFC proves anything, so all the statements $(+)_S$ are false. Hence, the set $A$ is obviously decidable. Suppose now that ZFC is consistent. Let S be undecidable. Then if a machine solves $A$, it would tell us, upon inputting S, that S is independent. Since the set of statements independent of ZFC is undecidable, we are done: $A$ is undecidable.</p>
<p>Perhaps you want to know whether ZFC proves that A is decidable, or if it proves that A is undecidable. Note that if ZFC proves that A is undecidable, then the argument above (formalized within ZFC) tells us that ZFC knows that ZFC is consistent. In this case, ZFC <em>is</em> inconsistent, and it proves anything.</p>
<p>Suppose, then, that ZFC proves that A is decidable. This is possible if ZFC is inconsistent. So, suppose that ZFC is consistent. Then the argument above tells us that ZFC proves that ZFC is inconsistent. </p>
<p>This is not expected to be the case, as it tells us that ZFC is wrong about arithmetic statements. If ZFC is a "true" theory, meaning, if the arithmetic consequences of ZFC are true about the natural numbers, then ZFC cannot prove that A is decidable, and "A is decidable" is independent. Of course, if ZFC is not a true theory, I do not think it is too interesting whether it proves something or other about A.</p>
<p>We can further complicate things by considering models of ZFC, and checking whether the model thinks that ZFC proves that A is decidable, or not, or any of the possible variants suggested above. We can in fact, assuming mild consistency requirements on ZFC, check that there are models of ZFC that disagree on whether ZFC proves that A is decidable, proves that it is undecidable, or does not prove either. </p>
|
logic | <p>Say I am explaining to a kid, <span class="math-container">$A +B$</span> is the same as <span class="math-container">$B+A$</span> for natural numbers.</p>
<p>The kid asks: why?</p>
<p>Well, it's an axiom. It's called commutativity (which is not even true for most groups).</p>
<p>How do I "prove" the axioms?</p>
<p>I can say, look, there are <span class="math-container">$3$</span> pebbles in my right hand and <span class="math-container">$4$</span> pebbles in my left hand. It's pretty intuitive that the total is <span class="math-container">$7$</span> whether I added the left hand first or the right hand first.</p>
<p>Well, if I answer that on any exam, I'll get an F for sure.</p>
<p>There is something about axioms. They can't be proven and yet they are more true than conjectures or even theorems.</p>
<p>In what sense are axioms true then?</p>
<p>Is this just intuition? We simply define natural numbers as things that fit these axioms. If it's not true then well, they're not natural numbers. That may make sense. What do mathematicians think? Is the fact that the number of pebbles in my hand follows the rules of natural numbers "science" instead of "math"? Looks like it's more obvious than that.</p>
<p>It looks to me as if truth for axioms, for theorems, and for science is truth in a different sense each time. Yet we use just one word, true, to describe them all. I feel like I am missing something here.</p>
| <p><strong>You only need to "prove" an axiom when using it to model a real-world problem.</strong> In general, mathematicians just say <em>"these are my assumptions (axioms), this is what I can prove with them"</em> - they often don't care whether it models a real-world problem or not.</p>
<hr>
<p>When using math to model real-world problems, it's up to you to show that the axioms actually hold. The idea is that, if the axioms are true for the real-world problem, and all the logical steps taken are sound, then the conclusions (theorems etc.) should also be true in your real-world problem.</p>
<p>In your case, I think your example is actually a convincing "proof" that your axiom <em>(commutativity of addition over natural numbers)</em> holds for your real-world problem of counting stones: if I pick up any number of stones in my left- and right-hands, it doesn't matter whether I count the left or right first, I'll get the same result either way. You can verify this experimentally, or use your intuition. As long as you agree that the axioms of the model fit your problem, you should agree with the conclusions as well <em>(assuming you agree with the proofs, of course)</em>.</p>
<p>Of course, this is not a <em>proof</em> of the axioms, and it's entirely possible for someone to disagree. In that case, they don't believe that the natural numbers are valid model for counting stones, and they'll have to look for a different model instead.</p>
| <p>The problem of what it means for something to be "true" is a general problem in philosophy which has received a lot of attention, so it is impossible for anyone to give a short and complete answer here. The Stanford Encyclopedia has a nice <a href="http://plato.stanford.edu/entries/truth/">article on truth</a>. Mathematics is a useful test case for philosophers so a lot has been written on mathematical truth. </p>
<p>There is a separate problem that the word "true" is used to mean several things in mathematics: it can mean just "true", or it can mean "true in a particular structure". For example, the latter meaning is intended when we say that the axiom of commutativity is true in some groups and not in others. The notion of truth in a structure is well studied in mathematical logic. But I think the question above is about plain "truth" not about "truth in a structure". </p>
<p>The easiest way to define what plain "truth" means is to believe that there is some "real" mathematical structure, consisting of mathematical objects that actually exist. This viewpoint is called mathematical Platonism or mathematical realism. Then a statement is "true" if it is true when interpreted as a statement about these real mathematical objects. For example, from this viewpoint the statement "Addition of natural numbers is commutative" is true because the actual addition operation on the actual set of natural numbers is commutative. </p>
<p>There are other "anti-realist" theories of truth that do not presuppose that there are independently existing mathematical objects that can be used to test the truth of a statement. (One problem with the realist versions is that it is far from clear how we would be able to tell whether mathematical objects have various properties using our five senses.) Some go as far as replacing truth with provability; for example, this is one way to understand the motivations for intuitionism. But most mathematicians maintain that there is a difference between truth and provability. </p>
<p>There is a separate issue that most of the time the word "true" is used in mathematical proofs, it is just a turn of phrase that could be omitted. For example, instead of saying "We know that $A \to B$ is true, and we have proved $A$, so $B$ is true", we can say "We have assumed $A \to B$, and we have proved $A$, so we may conclude $B$". This shows up when the proofs are formalized: formal proofs in most theories (e.g. the theory of groups, ZFC set theory) do not have any way to refer to "truth", they simply manipulate formulas. The idea, of course, is that if the assumptions are true then the conclusion is true. But the formal proof itself will not make reference to plain "truth". </p>
<hr>
<p>The question goes on to ask how we would know (in a realist theory, for example) that addition of natural numbers is commutative. Someone could say "you prove it from other postulates" but then the problem would be how to know that those postulates are true. In the end, the question is how to know that any postulate about the actual natural numbers is true. This is a major issue for mathematical realism, as I mentioned above. The most common answer is that humans have some form of insight which allows us to determine the truth of some (but not all) mathematical propositions directly, without having a formal proof of those propositions. The commutativity of natural number addition is one of those propositions: by thinking about addition and natural numbers, we are drawn to conclude that the addition is commutative. In the end this is how we justify all postulates in geometry, set theory, arithmetic, etc. The realist position is that although we cannot prove them formally, we can come to believe they are true by thinking about the objects they describe. </p>
|
probability | <p>I recently had an interview question that posed the following... Suppose you are shooting free throws and each shot has a 60% chance of going in (there are no "learning" or "depreciation" effects, all have the some probability no matter how many shots you take).</p>
<p>Now there are three scenarios where you can win $1000</p>
<ol>
<li>Make at least 2 out of 3</li>
<li>Make at least 4 out of 6</li>
<li>Make at least 20 out of 30</li>
</ol>
<p>My initial thought is that each is equally appealing, as they all require the same percentage of free throws made. However, when using a binomial calculator (which this process seems to follow), the probability $P(X \geq x)$ of making at least the required number seems to be highest for scenario 1. Is this due to the number of combinations?</p>
| <p>The result is linked to the <a href="http://en.wikipedia.org/wiki/Law_of_large_numbers">Law of large numbers</a>, which roughly states that the more trials you run, the closer the observed proportion of successes gets to the underlying probability. So after 10,000 trials, I would expect the success count to be proportionally closer to 6,000 than the count after 100 trials is to 60.</p>
<p>The point here is that in all these cases the proportion is $\frac23$ - i.e. greater than the expected probability of 60%. Since for larger numbers we will be closer to the expected probability of 60%, that means that we are less likely to be above $\frac23$.</p>
<p>If you were to change the parameters slightly - and ask what is the probability of success in more than 55% of cases, for example, then you'd see the complete opposite happening.</p>
| <p>$P(\#1) = \binom{3}{2}\cdot{(\frac{60}{100})}^2\cdot{(\frac{40}{100})}^1+\binom{3}{3}\cdot{(\frac{60}{100})}^3\cdot{(\frac{40}{100})}^0 = 0.648$</p>
<hr>
<p>$P(\#2) = \binom{6}{4}\cdot{(\frac{60}{100})}^4\cdot{(\frac{40}{100})}^2+\binom{6}{5}\cdot{(\frac{60}{100})}^5\cdot{(\frac{40}{100})}^1+\binom{6}{6}\cdot{(\frac{60}{100})}^6\cdot{(\frac{40}{100})}^0 = 0.54432$</p>
<hr>
<p>$P(\#3) = \sum\limits_{n=20}^{30}\binom{30}{n}\cdot{(\frac{60}{100})}^n\cdot{(\frac{40}{100})}^{30-n} = $ feel free to do the math yourself...</p>
<hr>
<p>In general, you need to prove that the following sequence is monotonically decreasing:</p>
<p>$A_k = \sum\limits_{n=2k}^{3k}\binom{3k}{n}\cdot{(\frac{60}{100})}^n\cdot{(\frac{40}{100})}^{3k-n}$</p>
<p>I think it should be pretty easy to do so by induction...</p>
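<p>For those who don't want to grind it out by hand, here is a short Python sketch (the helper name is an arbitrary choice) that evaluates $A_k$ exactly using the standard library:</p>
<pre><code>from math import comb

def A(k, p=0.6):
    # P(make at least 2k of the 3k shots) -- the A_k defined above
    return sum(comb(3*k, n) * p**n * (1 - p)**(3*k - n)
               for n in range(2*k, 3*k + 1))

for k in (1, 2, 10):
    print(k, A(k))   # about 0.648, 0.544, and roughly 0.29
</code></pre>
<p>The values decrease with $k$, consistent with the law-of-large-numbers intuition in the other answer.</p>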
|
logic | <p>Why isn't a conditional statement said to "not apply" instead of being "vacuously true" if the hypothesis is not satisfied? That would seem more appropriate. </p>
| <p>It's not appropriate because it doesn't reflect the fact that the statement is <em>actually</em> vacuously true. More useful might be understanding what <em>vacuous</em> actually means: without contents, empty, lacking in ideas or intelligence, meaningless, etc. </p>
<p>Thus, something like the following is vacuously true:</p>
<blockquote>
<p>If the moon is made of barbecue and spare ribs, then I'm smarter than Ramanujan. </p>
</blockquote>
<p>This is <strong>vacuously</strong> true; that is, logically it is true, but it really doesn't mean anything. The moon is obviously not made of barbecue and spare ribs; thus, whatever statement I have is true in no meaningful way (i.e., the statement is <em>vacuously true</em>). </p>
<p>Saying the proposition "does not apply"...what would be the use in that? Doesn't apply to what? Saying something is vacuously true communicates what ultimately needs to be communicated--that a statement is true, but its truth is meaningless. </p>
| <p>We would <em>often</em> say that a statement of the form $P\rightarrow Q$ doesn't apply if $P$ cannot be assumed. This is not a definition or anything formal though - it is merely a statement that knowing that $P\rightarrow Q$ holds gives us no information about whether $Q$ holds if $P$ does not. This is, in fact, an interpretation of what vacuous truths mean. That is to say, we have two cases. If we're lucky, we get:</p>
<blockquote>
<p>Suppose $P\rightarrow Q$ and $P$. Then, if we look at a truth table, the only way this could be would be if $Q$ were true. Thus, we can apply the statement $P\rightarrow Q$ to the situation.</p>
</blockquote>
<p>If our truth is vacuous, however, we end up with:</p>
<blockquote>
<p>Suppose $P\rightarrow Q$ and $\neg P$. If we look at a truth table, we see that this could be true if $Q$ is false <em>or</em> if $Q$ is true. So, knowing these two facts tells us nothing about $Q$ - so, our statement $P\rightarrow Q$ cannot be advantageously applied.</p>
</blockquote>
<p>The point here is that the truth-table definition of $P\rightarrow Q$ gives rise to the fact that vacuously true statements aren't very helpful. So, while the definition is counterintuitive, it behaves exactly as we expect when we apply it.</p>
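<p>For reference, the truth table that both cases appeal to is the standard one for material implication:</p>
<pre><code> P | Q | P -> Q
---+---+-------
 T | T |   T
 T | F |   F
 F | T |   T
 F | F |   T
</code></pre>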
<p>In an intuitive sense, if we wanted to think about a statement like "if a bird is a crow, it is black," we would find that consistent with the following observations</p>
<ul>
<li><p>We see a crow that is black.</p></li>
<li><p>We see a <em>raven</em>, which is also black.</p></li>
<li><p>We see a blue jay, which is <em>not</em> black.</p></li>
</ul>
<p>which correspond to the three "true" cases of $P\rightarrow Q$ on a truth table. We would be surprised, however, if we saw a crow that was not black - which is the only "false" case. So, the way we interpret $P\rightarrow Q$ is only about "Is it consistent to believe this?" not "Is it <em>useful</em>?"</p>
|
matrices | <p>I know that matrix multiplication in general is not commutative. So, in general:</p>
<p>$A, B \in \mathbb{R}^{n \times n}: A \cdot B \neq B \cdot A$</p>
<p>But for some matrices, this equations holds, e.g. A = Identity or A = Null-matrix $\forall B \in \mathbb{R}^{n \times n}$.</p>
<p>I think I remember that a group of special matrices (was it $O(n)$, the <a href="http://en.wikipedia.org/wiki/Orthogonal_group">group of orthogonal matrices</a>?) exist, for which matrix multiplication is commutative.</p>
<p><strong>For which matrices $A, B \in \mathbb{R}^{n \times n}$ is $A\cdot B = B \cdot A$?</strong></p>
| <p>Two matrices that are simultaneously diagonalizable always commute.</p>
<p>Proof: Let $A$, $B$ be two such $n \times n$ matrices over a base field $\mathbb K$, and $v_1, \ldots, v_n$ a basis of eigenvectors for $A$. Since $A$ and $B$ are simultaneously diagonalizable, such a basis exists and is also a basis of eigenvectors for $B$. Denote the corresponding eigenvalues of $A$ by $\lambda_1,\ldots,\lambda_n$ and those of $B$ by $\mu_1,\ldots,\mu_n$. </p>
<p>Then it is known that there is a matrix $T$ whose columns are $v_1,\ldots,v_n$ such that $T^{-1} A T =: D_A$ and $T^{-1} B T =: D_B$ are diagonal matrices. Since $D_A$ and $D_B$ trivially commute (explicit calculation shows this), we have $$AB = T D_A T^{-1} T D_B T^{-1} = T D_A D_B T^{-1} =T D_B D_A T^{-1}= T D_B T^{-1} T D_A T^{-1} = BA.$$</p>
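<p>A quick numerical illustration of this, as a sketch assuming NumPy (the basis $T$ and the eigenvalues below are arbitrary choices):</p>
<pre><code>import numpy as np

T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # an invertible change-of-basis matrix
Tinv = np.linalg.inv(T)

A = T @ np.diag([1.0, 2.0, 3.0]) @ Tinv   # simultaneously diagonalizable pair:
B = T @ np.diag([4.0, 5.0, 6.0]) @ Tinv   # same eigenvectors, different eigenvalues

print(np.allclose(A @ B, B @ A))   # True
</code></pre>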
| <p>The only matrices that commute with <em>all</em> other matrices are the multiples of the identity.</p>
|
differentiation | <p>Would it be wrong to think of differentiation as a function whose domain is the set of differentiable functions and co-domain is the set of all functions whose domain and range are some subsets of real numbers?</p>
| <p>Yes, differentiation can be thought of as a function from the set of differentiable functions to the set of functions which are derivatives of a differentiable function (which is, as Dr. MV points out in a comment, not quite the set of integrable functions).</p>
<p>Such things, which map functions to functions, are typically called <a href="http://mathworld.wolfram.com/Operator.html" rel="noreferrer">operators</a>, but this is just a convention, you can think of them as functions just fine.</p>
| <p>Differentiation of a function is a linear operator, a function on a set of functions.</p>
<p>Differentiation <em>at a point</em> is a linear functional, a function which maps elements of a vector space to it's underlying field, in this case functions to real numbers.</p>
|
linear-algebra | <p>I'm doing a raytracing exercise. I have a vector representing the normal of a surface at an intersection point, and a vector of the ray to the surface. How can I determine what the reflection will be?</p>
<p>In the below image, I have <code>d</code> and <code>n</code>. How can I get <code>r</code>?</p>
<p><img src="https://i.sstatic.net/IQa15.png" alt="Vector d is the ray; n is the normal; t is the refraction; r is the reflection"></p>
<p>Thanks.</p>
| <p>$$r = d - 2 (d \cdot n) n$$
</p>
<p>where $d \cdot n$ is the dot product, and
$n$ must be normalized.</p>
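<p>A minimal implementation sketch in Python/NumPy (the function name and test vectors are illustrative):</p>
<pre><code>import numpy as np

def reflect(d, n):
    # reflect the incoming direction d about the surface normal n;
    # n need not be unit length, since it is normalized here first
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

d = np.array([1.0, -1.0])   # ray heading down and to the right
n = np.array([0.0, 1.0])    # normal of a horizontal surface
print(reflect(d, n))        # [1. 1.] -- the ray bounces up and to the right
</code></pre>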
| <p>Let $\hat{n} = {n \over \|n\|}$. Then $\hat{n}$ is the vector of magnitude one in the same direction as $n$. The projection of $d$ in the $n$ direction is given by $\mathrm{proj}_{n}d = (d \cdot \hat{n})\hat{n}$, and the projection of $d$ in the orthogonal direction is therefore given by $d - (d \cdot \hat{n})\hat{n}$. Thus we have
$$d = (d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$
Note that the projection of $r$ onto $n$ is $-1$ times the projection of $d$ onto $n$, while the component of $r$ orthogonal to $n$ is equal to the component of $d$ orthogonal to $n$; therefore
$$r = -(d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$
Alternatively, you may view it as saying that $-r$ has the same projection onto $n$ that $d$ has onto $n$, with its orthogonal component given by $-1$ times that of $d$.
$$-r = (d \cdot \hat{n})\hat{n} - [d - (d \cdot \hat{n})\hat{n}]$$
The latter equation is exactly
$$r = -(d \cdot \hat{n})\hat{n} + [d - (d \cdot \hat{n})\hat{n}]$$</p>
<p>Hence one can get $r$ from $d$ via
$$r = d - 2(d \cdot \hat{n})\hat{n}$$
Stated in terms of $n$ itself, this becomes
$$r = d - {2 d \cdot n\over \|n\|^2}n$$</p>
|
probability | <p>If $f(x)$ is a density function and $F(x)$ is a distribution function of a random variable $X$ then I understand that the expectation of $X$ is often written as: </p>
<p>$$E(X) = \int x f(x) dx$$</p>
<p>where the bounds of integration are implicitly $-\infty$ and $\infty$. The idea of multiplying x by the probability of x and summing makes sense in the discrete case, and it's easy to see how it generalises to the continuous case. However, in Larry Wasserman's book <em>All of Statistics</em> he writes the expectation as follows:</p>
<p>$$E(X) = \int x dF(x)$$</p>
<p>I guess my calculus is a bit rusty, in that I'm not that familiar with the idea of integrating over functions of $x$ rather than just $x$.</p>
<ul>
<li>What does it mean to integrate over the distribution function?</li>
<li>Is there an analogous process to repeated summing in the discrete case?</li>
<li>Is there a visual analogy?</li>
</ul>
<p><strong>UPDATE:</strong>
I just found the following extract from Wasserman's book (p.47):</p>
<blockquote>
<p>The notation $\int x d F(x)$ deserves some comment. We use it merely
as a convenient unifying notation so that we don't have to write
$\sum_x x f(x)$ for discrete random variables and $\int x f(x) dx$ for
continuous random variables, but you should be aware that $\int x d F(x)$ has a precise meaning that is discussed in a real analysis
course.</p>
</blockquote>
<p>Thus, I would be interested in any insights that could be shared about <strong>what is the precise meaning that would be discussed in a real analysis course?</strong></p>
| <p>There are many definitions of the integral, including the Riemann integral, the Riemann-Stieltjes integral (which generalizes and expands upon the Riemann integral), and the Lebesgue integral (which is even more general.) If you're using the Riemann integral, then you can only integrate with respect to a variable (e.g. <span class="math-container">$x$</span>), and the notation <span class="math-container">$dF(x)$</span> isn't defined. </p>
<p>The Riemann-Stieltjes integral generalizes the concept of the Riemann integral and allows for integration with respect to a cumulative distribution function that isn't continuous. </p>
<p>The notation <span class="math-container">$\int_{a}^{b} g(x)dF(x)$</span> is roughly equivalent to <span class="math-container">$\int_{a}^{b} g(x) f(x) dx$</span> when <span class="math-container">$f(x)=F'(x)$</span>. However, if <span class="math-container">$F(x)$</span> is a function that isn't differentiable at all points, then you simply can't evaluate <span class="math-container">$\int_{a}^{b} g(x) f(x) dx$</span>, since <span class="math-container">$f(x)=F'(x)$</span> isn't defined everywhere. </p>
<p>In probability theory, this situation occurs whenever you have a random variable with a discontinuous cumulative distribution function. For example, suppose <span class="math-container">$X$</span> is <span class="math-container">$0$</span> with probability <span class="math-container">$\frac{1}{2}$</span> and <span class="math-container">$1$</span> with probability <span class="math-container">$\frac{1}{2}$</span>. Then </p>
<p><span class="math-container">$$
\begin{align}
F(x) &= 0 & x &< 0 \\
F(x) &= 1/2 & 0 &\leq x < 1 \\
F(x) &= 1 & x &\geq 1 \\
\end{align}
$$</span></p>
<p>Clearly, <span class="math-container">$F(x)$</span> doesn't have a derivative at <span class="math-container">$x=0$</span> or <span class="math-container">$x=1$</span>, so there isn't a probability density function <span class="math-container">$f(x)$</span> at those points.</p>
<p>Now, suppose that we want to evaluate <span class="math-container">$E[X^3]$</span>. This can be written, using the Riemann-Stieltjes integral, as </p>
<p><span class="math-container">$$E[X^3]=\int_{-\infty}^{\infty} x^3 dF(x).$$</span></p>
<p>Note that because there isn't a probability density function <span class="math-container">$f(x)$</span>, we can't write this as </p>
<p><span class="math-container">$$E[X^{3}]=\int_{-\infty}^{\infty} x^3 f(x) dx.$$</span></p>
<p>However, we can use the fact that this random variable is discrete to evaluate the expected value as:</p>
<p><span class="math-container">$$E[X^{3}]=(0)^{3}(1/2)+(1)^{3}(1/2)=1/2$$</span></p>
<p>So, the short answer to your question is that you need to study alternative definitions of the integral, including the Riemann and Riemann-Stieltjes integrals.</p>
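<p>As a quick empirical sanity check of the $E[X^{3}]=1/2$ computation above, here is a tiny Monte Carlo sketch in plain Python:</p>
<pre><code>import random

# X is 0 with probability 1/2 and 1 with probability 1/2
N = 10**6
total = sum(random.choice((0, 1)) ** 3 for _ in range(N))
print(total / N)   # close to 0.5, matching the Riemann-Stieltjes value
</code></pre>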
| <p>Another way to understand integration with respect to a distribution function is via the Lebesgue-Stieltjes measure. Let $F\!:\mathbb R\to\mathbb R$ be a distribution function (i.e. non-decreasing and right-continuous). Then there exists a unique measure $\mu_F$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ that satisfies
$$
\mu_F((a,b])=F(b)-F(a)
$$
for any choice of $a,b\in\mathbb R$ with $a<b$. Actually there is a one-to-one correspondence between probability measures on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ and non-decreasing, right-continuous functions $F\!:\mathbb R\to\mathbb R$ satisfying $F(x)\to 1$ for $x\to\infty$ and $F(x)\to 0$ for $x\to-\infty$.</p>
<p>Now, the integral
$$
\int x\,\mathrm dF(x)
$$
can be viewed as simply the integral
$$
\int x\,\mu_F(\mathrm dx)\quad\text{or}\quad \int x \,\mathrm d\mu_F(x).
$$</p>
<p>Now if $X$ is a random variable having distribution function $F$, then the Lebesgue-Stieltjes measure is nothing but the distribution $P_X$ of $X$:
$$
P_X((a,b])=P(X\in (a,b])=P(X\leq b)-P(X\leq a)=F(b)-F(a)=\mu_F((a,b]),\quad a<b,
$$
showing that $P_X=\mu_F$. In particular we see that
$$
{\rm E}[X]=\int_\Omega X\,\mathrm dP=\int_\mathbb{R}x\,P_X(\mathrm dx)=\int_\mathbb{R}x\,\mu_F(\mathrm dx)=\int_\mathbb{R}x\,\mathrm dF(x).
$$</p>
|
logic | <p>It is said that our current basis for mathematics is the set of ZFC axioms. </p>
<p><strong>Question:</strong> Where are these axioms in our mathematics? When do we use them? I have now studied math for a year, and have yet to run into a single one of these ZFC axioms. How can this be if they are supposed to be the basis for everything I have done so far? </p>
| <p>I think this is a very good question. I don't have the time right now to write a complete answer, but let me make a few quick points (and I'll add more when I get the chance):</p>
<ul>
<li><p><strong>Most mathematics is not done in ZFC</strong>. Most mathematics, in fact, isn't done axiomatically at all: rather, we simply use propositions which seem "intuitively obvious" without comment. This is true even when it looks like we're being rigorous: for example, when we formally define the real numbers in a real analysis class (by Cauchy sequences, Dedekind cuts, or however), we (usually) <em>don't</em> set forth a list of axioms of set theory which we're using to do this. The reason is, that the facts about sets which we need seem to be utterly tame: for example, that the intersection of two sets is again a set. </p></li>
<li><p><strong>ZFC arose in response to a historical need.</strong> The history of modern logic is fascinating, and I don't want to do it injustice; let me just say (wildly oversimplifying) that you only really need to axiomatize mathematics if there's real danger of different people using different axioms implicitly, without realizing it. One standard example here is the axiom of choice, which very reasonable people alternately find perfectly intuitive and clearly false. So ZFC, very roughly speaking, won the job of being the "default" axiom system for mathematics: you're perfectly free to prove theorems using (say) NF instead, but it's considered gauche if you don't explicitly say that's what you're doing. Are there reasons to prefer some other system to ZFC? I'm a very pro-ZFC person, but even I'd have to say yes. The point isn't that ZFC is perfect, though; it's that it's strong enough to address the vast majority of our mathematical needs, while also being reasonable enough that it doesn't cause huge problems most of the time. This strength, by the way, is crucial: we don't want to have to keep updating our axiomatic framework to allow us to do basic mathematics, so overshooting in terms of strength is (I would argue) preferable (the counterargument is that overshooting runs a greater risk of inconsistency; but that's an issue for a different question, or at least a bit later when I have more time to write).</p></li>
<li><p><strong>Even in the ZFC context, ZFC is usually overkill.</strong> OK, let's say you buy the ZFC sales pitch (I certainly did - and I just love the complimentary toaster!). Then you have some class of theorems you want to prove, and - after expressing them in the language of ZFC (which is frankly a tedious process, and one of the practical objections to ZFC emerges from this) - proceed to prove them from the ZFC axioms. But then you notice that you didn't use most of the ZFC axioms at all! This is in fact the norm - <em>replacement</em> especially is overkill in most situations. This isn't a problem, though: ZFC doesn't claim to be minimal, in any sense. And in fact the study of what axioms are <em>needed</em> to prove a given theorem is quite rich: see e.g. <em>reverse mathematics</em>.</p></li>
</ul>
<p>Tl;dr: I would say that the sentence "ZFC is the foundation for modern mathematics," while broadly correct, is hiding a <em>lot</em> of context. In particular:</p>
<ul>
<li><p>Most of the time you're not going to actually be using axioms at all.</p></li>
<li><p>ZFC's claim to prominence is primarily sociological/historical; we could equally well have gone with NF, or something completely different.</p></li>
<li><p>The ZFC axioms are wildly overpowered for most of mathematics; in particular, you probably won't really need the whole of ZFC for almost anything you do.</p></li>
<li><p>Most of all: <strong>ZFC doesn't come first</strong>. <em>Mathematics</em> comes first; ZFC is a <em>mathematical</em> theory that, among other things, "absorbs" the vast majority of mathematics in a certain way. But you can do math without ZFC. (It's just that you run the risk of accidentally invoking an "obvious" set-theoretic principle which isn't so obvious, and so conflicting with other results which invoke the "obvious" negation of your "obvious" axiom. ZFC provides a general language for us to do math in, so we don't have to worry about things like this. But in practice, this almost never occurs.)</p></li>
</ul>
<hr>
<p>Note that you could also ask this question with regard to formal logic - specifically, classical first-order logic - in general; and there has been a lot written about this (I'll add some citations when I have more time). But that's going <em>very</em> far afield.</p>
<p>Really tl;dr (and, I should add, in conflict with a number of people - this is my opinion): foundations do not <em>enable</em>, but rather <em>serve</em>, mathematics.</p>
| <p>"My house is supposedly built on concrete pad foundations, but I've been fixing the pipes upstairs, and I haven't seen any foundation yet."</p>
<p>This is analogous - you don't see them because they're so deep beneath the surface of where you're working. If you lifted the floorboards and poked around, you'd find the foundations.</p>
<p>Though they might actually be wooden piles or columns-on-ball-bearings, in the same way you might actually be using a system besides ZFC. Though until you go and check, you probably won't know the difference.</p>
<p>As going down to the foundation level is way beyond scope for most house repairs, so is going down to axioms beyond scope for most mathematics.</p>
|
combinatorics | <p>In my discrete mathematics class our notes say that between set <span class="math-container">$A$</span> (having <span class="math-container">$6$</span> elements) and set <span class="math-container">$B$</span> (having <span class="math-container">$8$</span> elements), there are <span class="math-container">$8^6$</span> distinct functions that can be formed, in other words: <span class="math-container">$|B|^{|A|}$</span> distinct functions. But no explanation is offered and I can't seem to figure out why this is true. Can anyone elaborate?</p>
| <p>A function from A to B must assign to <em>every element</em> of the set A some result in the set B. The question becomes: how many different such assignments, each covering <em>every element</em> of A, can we come up with? Take this example, mapping a 2-element set A to a 3-element set B. There are 9 different ways, each assigning images to both 1 <em>and</em> 2, that result in a different combination of mappings over to B.</p>
<p><img src="https://i.sstatic.net/zYKzS.png" alt="enter image description here"></p>
<p>The number of functions from A to B is |B|^|A|, or $3^2$ = 9.</p>
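<p>One can also simply enumerate the mappings — a brute-force Python sketch (the sets here are illustrative):</p>
<pre><code>from itertools import product

A = (1, 2)            # |A| = 2
B = ('a', 'b', 'c')   # |B| = 3

# a function A -> B is exactly a choice of an image in B for each element of A
functions = list(product(B, repeat=len(A)))
print(len(functions), len(B) ** len(A))   # 9 9
</code></pre>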
| <p>Let set $A$ have $a$ elements and set $B$ have $b$ elements. Each element in $A$ has $b$ choices to be mapped to. Each such choice gives you a unique function. Since each element has $b$ choices, the total number of functions from $A$ to $B$ is
$$\underbrace{b \times b \times b \times \cdots b}_{a \text{ times}} = b^a$$</p>
|
probability | <p>If someone asked me what it meant for $X$ to be standard normally distributed, I would tell them it means $X$ has probability density function $f(x) = \frac{1}{\sqrt{2\pi}}\mathrm e^{-x^2/2}$ for all $x \in \mathbb{R}$.</p>
<p>More rigorously, I could alternatively say that $f$ is the Radon-Nikodym derivative of the distribution measure of $X$ w.r.t. the Lebesgue measure on $\mathbb{R}$, or $f = \frac{\mathrm d \mu_X}{\mathrm d\lambda}$. As I understand it, $f$ re-weights the values $x \in \mathbb{R}$ in such a way that
$$
\int_B \mathrm d\mu_X = \int_B f\, \mathrm d\lambda
$$
for all Borel sets $B$. In particular, the graph of $f$ lies below one everywhere: <a href="https://i.sstatic.net/2Uit5.jpg" rel="noreferrer"><img src="https://i.sstatic.net/2Uit5.jpg" alt="normal pdf"></a> </p>
<p>so it seems like $f$ is re-weighting each $x \in \mathbb{R}$ to a smaller value, but I don't really have any intuition for this. I'm seeking more insight into viewing $f$ as a change of measure, rather than a sort of distribution describing how likely $X$ is. </p>
<p>In addition, does it make sense to ask "which came first?" The definition for the standard normal pdf as just a function used to compute probabilities, or the pdf as a change of measure?</p>
| <p>Your understanding of the basic math itself seems pretty solid, so I'll just try to provide some extra intuition.</p>
<p>When we integrate a function $g$ with respect to the Lebesgue measure $\lambda$, we find its "area under the curve" or "volume under the surface", etc... This is obvious since the Lebesgue measure assigns the ordinary notion of length (area, etc) to all possible integration regions over the domain of $g$. Therefore, I say that integrating with respect to the Lebesgue measure (which is equivalent in value to Riemannian integration) is a <em>calculation to find the "volume" of some function.</em></p>
<p>Let's pretend for a moment that when performing integration, we are always forced to do it over the <em>entire</em> domain of the integrand. Meaning we are only allowed to compute
$$\int_B g \,d\lambda\ \ \ \ \text{if}\ \ \ \ B=\mathbb{R}^n$$
where $\mathbb{R}^n$ is assumed to be the entire domain of $g$.</p>
<p>With that restriction, what could we do if we only cared about the volume of $g$ over the region $B$? Well, we could define an <a href="https://en.wikipedia.org/wiki/Indicator_function" rel="noreferrer">indicator function</a> for the set $B$ and integrate its product with $g$,
$$\int_{\mathbb{R}^n} \mathbf{1}_B g \,d\lambda$$</p>
<p>When we do something like this, we are taking the mindset that our goal is to nullify $g$ wherever we don't care about it... but that isn't the only way to think about it. We can instead try to nullify $\mathbb{R}^n$ itself wherever we don't care about it. We would compute the integral then as,
$$\int_{\mathbb{R}^n} g \,d\mu$$
where $\mu$ is a measure that behaves just like $\lambda$ for Borel sets that are subsets of $B$, but returns zero for Borel sets that have no intersection with $B$. Using this measure, it doesn't matter that $g$ has value outside of $B$, because $\mu$ will give that support no consideration.</p>
<p>Obviously, these integrals are just different ways to think about the same thing,
$$\int_{\mathbb{R}^n} g \,d\mu = \int_{\mathbb{R}^n} \mathbf{1}_B g \,d\lambda$$
The function $\mathbf{1}_B$ is clearly the density of $\mu$, its Radon–Nikodym derivative with respect to the Lebesgue measure, or by directly matching up symbols in the equation,
$$d\mu = f\,d\lambda$$
where here $f = \mathbf{1}_B$. The reason for showing you all this was to show how we can <em>think of changing measure as a way to tell an integral how to only compute the volume we <strong>care</strong> about</em>. Changing measure allows us to discount parts of the support of $g$ instead of discounting parts of $g$ itself, and the Radon–Nikodym chainrule formalizes their equivalence.</p>
<p>The cool thing about this, is that our measures don't have to be as bipolar as the $\mu$ I constructed above. They don't have to completely not care about support outside $B$, but instead can just care about support outside $B$ <em>less</em> than inside $B$.</p>
<p>Think about how we might find the total mass of some physical object. We integrate over all of space (the <em>entire</em> domain where particles can exist) but use a measure $m$ that returns larger values for regions in space where there is "more mass" and smaller values (down to zero) for regions in space where there is "less mass". It doesn't have to be just mass vs no-mass, it can be everything in between too, and the Radon–Nikodym derivative of this measure is indeed the literal "density" of the object.</p>
<p>So what about probability? Just like with the mass example, we are encroaching on the world of physical modeling and leaving abstract mathematics. Formally, a measure is a probability measure if it returns 1 for the Borel set that is the union of all the other Borel sets. When we consider these Borel sets to model physical "events", this notion makes intuitive modeling sense... we are just defining the probability (measure) of <em>anything</em> happening to be 1.</p>
<p>But why 1? <strong>Arbitrary convenience.</strong> In fact, some people don't use 1! Some people use 100. Those people are said to use the "percent" convention. What is the probability that if I flip this coin, it lands on heads or tails? 100... percent. We could have used literally any positive real number, but 1 is just a nice choice. Note that the Lebesgue measure is not a probability measure because $\lambda(\mathbb{R}^n) = \infty$.</p>
<p>Anyway, what people are doing with probability is designing a measure that models how much significance they give to various events - which are Borel sets, which are regions in the domain; they are just defining how much they value parts of the domain itself. As we saw before with the measure $\mu$ I constructed, the easiest way to write down your measure is by writing its density.</p>
<p>Fun to note: "expected value" of $g$ is just its volume with respect to the given probability measure $P$, and "covariance" of $g$ with $h$ is just their inner product with respect to $P$. Letting $\Omega$ be the entire domain of both $g$ and $h$ (also known as the sample space), if $g$ and $h$ have zero mean,
$$\operatorname{cov}(g, h) = \int_{x \in \Omega}g(x)h(x)f(x)\ dx = \int_{\Omega}gh\ dP = \langle g, h \rangle_P$$</p>
<p>I'll let you show that the <a href="https://en.wikipedia.org/wiki/Pearson_correlation_coefficient#Definition" rel="noreferrer">correlation coefficient</a> for $g$ and $h$ is just the "cosine of the angle between them".</p>
<p>Hope this helps! Measure theory is definitely the modern way of viewing things, and people began to understand "weighted Riemannian integrals" well before they realized the other viewpoint: "weighting" the domain instead of the integrand. Many people attribute this viewpoint's birth to Lebesgue integration, where the operation of integration was first (notably) restated in terms of an arbitrary measure, as opposed to Riemann integration which tacitly <em>always</em> assumed the Lebesgue measure.</p>
<p>I noticed you brought up the normal distribution specifically. The normal distribution is special for a lot of reasons, but it is by no means some de-facto probability density. There are an infinite number of equally valid probability measures (with their associated densities). The normal distribution is really only so important because of the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem" rel="noreferrer">central limit theorem</a>.</p>
| <p>The case you are referring to is valid. In your example, Radon-Nikodym serves as a reweighting of the Lebesgue measure and it turns out that the Radon-Nikodym is the pdf of the given distribution. </p>
<p>However, Radon-Nikodym is a more general concept. Your example converts Lebesgue measure to a normal probability measure whereas Radon-Nikodym can be used to convert any measure to another measure as long as they meet certain technical conditions. </p>
<p>A quick recap of the intuition behind measure. A measure is a set function that takes a set as an input and returns a non-negative number as output.
For example length, volume, weight, and probability are all examples of measures. </p>
<p>So what if I have one measure that returns length in meters and another measure that returns length in kilometers? A Radon-Nikodym derivative converts between these two measures. What is the Radon-Nikodym derivative in this case? It is the constant 1000. </p>
<p>Similarly, another Radon-Nikodym derivative can be used to convert a measure that returns the weight in kg to another measure that returns the weight in lbs. </p>
<p>Back to your example: the pdf is used to convert the Lebesgue measure to a normal probability measure, but this is just one example of a change of measure. </p>
<p>Starting from a Lebesgue measure, you can define Radon-Nikodym that generates other useful measures (not necessarily probability measure). </p>
<p>Hope this clarifies it. </p>
|
number-theory | <p>One observes that
\begin{equation*}
4!+1 =25=5^{2},~5!+1=121=11^{2}
\end{equation*}
is a perfect square. Similarly for $n=7$ also we see that $n!+1$ is a perfect square. So one can ask the truth of this question:</p>
<ul>
<li>Is $n!+1$ a perfect square for infinitely many $n$? If yes, then how to prove.</li>
</ul>
| <p>This is Brocard's problem, and it is still open.</p>
<p><a href="http://en.wikipedia.org/wiki/Brocard%27s_problem">http://en.wikipedia.org/wiki/Brocard%27s_problem</a></p>
| <p>The sequence of factorials $n!+1$ which are also perfect squares is <a href="https://oeis.org/A085692" rel="nofollow">here in Sloane</a>. It contains three terms, and notes that there are no more terms below $(10^9)!+1$, but as far as I know there's no proof.</p>
|
logic | <p>It seems that given a statement <span class="math-container">$a = b$</span>, that <span class="math-container">$a + c = b + c$</span> is assumed also to be true.</p>
<p>Why isn't this an axiom of arithmetic, like the commutative law or associative law?</p>
<p>Or is it a consequence of some other axiom of arithmetic?</p>
<p>Thanks!</p>
<p>Edit: I understand the intuitive meaning of equality. Answers that stated that <span class="math-container">$a = b$</span> means they are the same number or object make sense but what I'm asking is if there is an explicit law of replacement that allows us to make this intuitive truth a valid mathematical deduction. For example is there an axiom of Peano's Axioms or some other axiomatic system that allows for adding or multiplying both sides of an equation by the same number? </p>
<p>In all the texts I've come across I've never seen an axiom that states if <span class="math-container">$a = b$</span> then <span class="math-container">$a + c = b + c$</span>. I have however seen if <span class="math-container">$a < b$</span> then <span class="math-container">$a + c < b + c$</span>. In my view <span class="math-container">$<$</span> and <span class="math-container">$=$</span> are similar so the absence of a definition for equality is strange.</p>
| <p>If you are given that
$$a = b$$
then you can always infer that
$$f(a) = f(b)$$
for a function $f$. That's what it means to be a function. However, if you are given
$$f(a) = f(b)$$
then you can't infer
$$a = b$$
unless the function is injective (invertible) over a domain containing $a$ and $b$.</p>
<p>For your problem, $f(x) = x + c$.</p>
| <p>This is a basic property of equality. An equation like $$a=b$$ means that $a$ and $b$ are different names for <em>the same number</em>. If you do something to $a$, and you do the same thing to $b$, you must get the same result because $a$ and $b$ were the same to begin with.</p>
<p>For example, how do we know that <a href="http://en.wikipedia.org/wiki/Samuel_Clemens">Samuel Clemens</a> and <a href="http://en.wikipedia.org/wiki/Mark_Twain">Mark Twain</a> were equal in height? Simple: Because they were the same person.</p>
<p>How do we know that $a+c$ and $b+c$ are equal numbers? Because $a$ and $b$ are the same number.</p>
|
differentiation | <blockquote>
<p>Construct a function which is continuous in $[1,5]$ but not differentiable at $2, 3, 4$.</p>
</blockquote>
<p>This question comes just after the definition of differentiation and the theorem that if $f$ has a finite derivative at $c$, then $f$ is also continuous at $c$. Please help, my textbook does not have the answer. </p>
| <p>$$\ \ \ \ \mathsf{W}\ \ \ \ $$</p>
| <p>$|x|$ is continuous, and differentiable everywhere except at 0. Can you see why?</p>
<p>From this we can build up the functions you need: $|x-2| + |x-3| + |x-4|$ is continuous (why?) and differentiable everywhere except at 2, 3, and 4.</p>
|
combinatorics | <p>I'm studying graphs in algorithms and complexity
(but I'm not very good at math). As in the title:</p>
<blockquote>
<p>Why a complete graph has $\frac{n(n-1)}{2}$ edges?</p>
</blockquote>
<p>And how this is related with combinatorics?</p>
| <p>A simpler answer without binomials: A complete graph means that every vertex is connected with every other vertex. If you take one vertex of your graph, you therefore have $n-1$ outgoing edges from that particular vertex. </p>
<p>Now, you have $n$ vertices in total, so you might be tempted to say that there are $n(n-1)$ edges in total, $n-1$ for every vertex in your graph. But this method counts every edge twice, because every edge going out from one vertex is an edge going into another vertex. Hence, you have to divide your result by 2. This leaves you with $n(n-1)/2$.</p>
| <p>A complete graph has an edge between any two vertices. You can get an edge by picking any two vertices.</p>
<p>So if there are $n$ vertices, there are $n$ choose $2$ = ${n \choose 2} = n(n-1)/2$ edges.</p>
<p>Does that help?</p>
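<p>If seeing it computationally helps, here is a tiny Python sketch (picking $n = 5$ as an example):</p>
<pre><code>from itertools import combinations

n = 5
edges = list(combinations(range(n), 2))   # one edge per unordered pair of vertices
print(len(edges), n * (n - 1) // 2)       # 10 10
</code></pre>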
|
differentiation | <p>I'm looking for a function $f$, whose third derivative is $f$ itself, while the first derivative isn't.</p>
<p>Is there any such function? Which one(s)? If not, how can we prove that there is none?</p>
<p>Notes:</p>
<ul>
<li><p>$x\longmapsto c\cdot e^x, c \in R$ are the functions whose derivative is itself.</p></li>
<li><p>$x\longmapsto \cosh(x)={e^x+e^{-x}\over 2}$ and $x\longmapsto \sinh(x)={e^x-e^{-x}\over 2}$ have their second derivatives equal to themselves.</p></li>
<li><p>$x\longmapsto f(x)$, has its third derivative equal to itself.</p></li>
<li><p>$x\longmapsto \cos(x)$ and $x\longmapsto \sin(x)$ have their fourth derivatives equal to themselves.</p></li>
</ul>
| <p>$$f(x)=e^{\omega x}$$
where $\omega$ is a <a href="http://en.wikipedia.org/wiki/Root_of_unity">primitive third root of unity</a>. We have $$f'(x)=\omega e^{\omega x}, ~~ f''(x)=\omega^2 e^{\omega x}, ~~ f'''(x)=\omega^3e^{\omega x}=e^{\omega x}$$</p>
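<p>For the skeptical, a quick symbolic check of both claims with SymPy:</p>
<pre><code>import sympy as sp

x = sp.symbols('x')
w = sp.exp(2 * sp.pi * sp.I / 3)   # a primitive third root of unity
f = sp.exp(w * x)

print(sp.simplify(sp.diff(f, x, 3) - f))   # 0: the third derivative equals f
print(sp.simplify(sp.diff(f, x, 1) - f))   # nonzero: the first derivative does not
</code></pre>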
| <p>Try this: $f(x) = \sum_{m=0}^\infty \frac{x^{3m}}{(3m)!}$. This function is like the exponential function, which can be defined as $e^x=\sum_{m=0}^\infty \frac{x^m}{m!}$, but only taking every third term of the sum. Differentiating term-by-term verifies the desired property.</p>
<p>Generally, the functions $\Lambda_m^k(x) := \sum_{n=0}^\infty \frac{x^{nm+k}}{(nm+k)!}$ for $0\leq k < m$ define the set of $m$ linearly independent solutions to $\frac{d^mf}{dx^m} = f$ with $\frac{d^jf}{dx^j} \neq f$ for $j < m$. Again, it is easy to see this using term-by-term differentiation, but actually computing the functions in this form is infeasible. The theory of ODE's ensures that all solutions are linear combinations of these lambda-functions, but not all linear combinations satisfy our second condition.</p>
<p>Note that all the functions you mention can be realized as linear combinations of these lambda-functions: </p>
<p>$e^x = \Lambda_1^0(x)$; $\cosh(x) = \Lambda_2^0(x)$; $\sinh(x) = \Lambda_2^1(x)$; </p>
<p>$\sin(x) = \Lambda_4^1(x) - \Lambda_4^3(x); \cos(x) = \Lambda_4^0(x) - \Lambda_4^2(x)$</p>
<p>PS: If there are any concerns about convergence of these sums, just note that they are essentially the sum for the exponential function but with terms missing. Since the exponential sum is absolutely convergent for all $x$, the sub-sums are also absolutely convergent and may be differentiated term-by-term.</p>
|
geometry | <p>My friend gave me this puzzle:</p>
<blockquote>
<p>What is the probability that a point chosen at random from the interior of an equilateral triangle is closer to the center than to any of its edges? </p>
</blockquote>
<hr>
<p>I tried to draw the picture and I drew a smaller (concentric) equilateral triangle with half the side length. Since area is proportional to the square of side length, this would mean that the smaller triangle had $1/4$ the area of the bigger one. My friend tells me this is wrong. He says I am allowed to use calculus but I don't understand how geometry would need calculus. Thanks for help.</p>
| <p>You are right to think of the probabilities as areas, but the set of points closer to the center is not a triangle. It's actually a weird shape with three curved edges, and the curves are parabolas. </p>
<hr>
<p>The set of points equidistant from a line $D$ and a fixed point $F$ is a parabola. The point $F$ is called the focus of the parabola, and the line $D$ is called the directrix. You can read more about that <a href="https://en.wikipedia.org/wiki/Conic_section#Eccentricity.2C_focus_and_directrix">here</a>.</p>
<p>In your problem, if we think of the center of the triangle $T$ as the focus, then we can extend each of the three edges to give three lines that correspond to the directrices of three parabolas. </p>
<p>Any point inside the area enclosed by the three parabolas will be closer to the center of $T$ than to any of the edges of $T$. The answer to your question is therefore the area enclosed by the three parabolas, divided by the area of the triangle. </p>
<hr>
<p>Let's call $F$ the center of $T$. Let $A$, $B$, $C$, $D$, $G$, and $H$ be points as labeled in this diagram:</p>
<p><a href="https://i.sstatic.net/MTg9U.png"><img src="https://i.sstatic.net/MTg9U.png" alt="Voronoi diagram for a triangle and its center"></a></p>
<p>The probability you're looking for is the same as the probability that a point chosen at random from $\triangle CFD$ is closer to $F$ than to edge $CD$. The green parabola is the set of points that are the same distance to $F$ as to edge $CD$.</p>
<p>Without loss of generality, we may assume that point $C$ is the origin $(0,0)$ and that the triangle has side length $1$. Let $f(x)$ be equation describing the parabola in green. </p>
<hr>
<p>By similarity, we see that $$\overline{CG}=\overline{GH}=\overline{HD}=1/3$$</p>
<p>An equilateral triangle with side length $1$ has area $\sqrt{3}/4$, so that means $\triangle CFD$ has area $\sqrt{3}/12$. The sum of the areas of $\triangle CAG$ and $\triangle DBH$ must be four ninths of that, or $\sqrt{3}/27$: each of these triangles has base $1/3$ of $CD$, and its apex ($A$ resp. $B$) sits two thirds of the way from $C$ resp. $D$ to $F$, hence has height $2/3$ that of $\triangle CFD$ and area $\frac13\cdot\frac23=\frac29$ of it.</p>
<p>$$P\left(\text{point is closer to center}\right) = \displaystyle\frac{\frac{\sqrt{3}}{12} - \frac{\sqrt{3}}{27} - \displaystyle\int_{1/3}^{2/3} f(x) \,\mathrm{d}x}{\sqrt{3}/12}$$</p>
<p>We know three points that the parabola $f(x)$ passes through. This lets us create a system of equations with three variables (the coefficients of $f(x)$) and three equations. This gives</p>
<p>$$f(x) = \sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}$$</p>
<p>The <a href="http://goo.gl/kSMPmv">integral of this function from $1/3$ to $2/3$</a> is $$\int_{1/3}^{2/3} \left(\sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}\right) \,\mathrm{d}x = \frac{5}{54\sqrt{3}}$$ </p>
<hr>
<p>This <a href="http://goo.gl/xEFB0s">gives our final answer</a> of $$P\left(\text{point is closer to center}\right) = \boxed{\frac{5}{27}}$$</p>
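<p>For anyone who would like to double-check this numerically, here is a Monte Carlo sketch in Python (the vertex names are local to the snippet, and $10^6$ samples is an arbitrary choice):</p>
<pre><code>import math, random

# a unit equilateral triangle and its center
V1, V2, V3 = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)
F = ((V1[0] + V2[0] + V3[0]) / 3, (V1[1] + V2[1] + V3[1]) / 3)

def dist_to_line(P, U, W):
    # |cross(W - U, P - U)| / |W - U|; every edge here has length 1
    return abs((W[0] - U[0]) * (P[1] - U[1]) - (W[1] - U[1]) * (P[0] - U[0]))

hits, N = 0, 10**6
for _ in range(N):
    u, v = random.random(), random.random()
    if u + v > 1:
        u, v = 1 - u, 1 - v   # fold the unit square onto the triangle
    P = (V1[0] + u * (V2[0] - V1[0]) + v * (V3[0] - V1[0]),
         V1[1] + u * (V2[1] - V1[1]) + v * (V3[1] - V1[1]))
    d_center = math.hypot(P[0] - F[0], P[1] - F[1])
    d_edge = min(dist_to_line(P, V1, V2), dist_to_line(P, V2, V3),
                 dist_to_line(P, V3, V1))
    if d_center < d_edge:
        hits += 1
print(hits / N, 5 / 27)   # both about 0.185
</code></pre>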
| <p>In response to Benjamin Dickman's request for a solution without calculus, referring to dtldarek's nice diagram in Zubin Mukerjee's answer (with all areas relative to that of the triangle $FCD$):</p>
<p>The points $A$ and $B$ are one third along the bisectors from $F$, so the triangle $FAB$ has area $\frac19$. The vertex $V$ of the parabola is half-way between $F$ and the side $CD$, so the triangle $VAB$ has width $\frac13$ of $FCD$ and height $\frac16$ of $FCD$ and thus area $\frac1{18}$. By <a href="https://en.wikipedia.org/wiki/The_Quadrature_of_the_Parabola">Archimedes' quadrature of the parabola</a> (long predating the advent of calculus), the area between $AB$ and the parabola is $\frac43$ of the area of $VAB$. Thus the total area in $FCD$ closer to $F$ than to $CD$ is</p>
<p>$$
\frac19+\frac43\cdot\frac1{18}=\frac5{27}\;.
$$</p>
<p>P.S.: Like Dominic108's solution, this is readily generalized to a regular $n$-gon. Let $\phi=\frac\pi n$. Then the condition $FB=BH$, expressed in terms of the height $h$ of triangle $FAB$ relative to that of $FCD$, is</p>
<p>$$
\frac h{\cos\phi}=1-h\;,\\
h=\frac{\cos\phi}{1+\cos\phi}\;.
$$</p>
<p>This is also the width of $FAB$ relative to that of $FCD$. The height of the arc of the parabola between $A$ and $B$ is $\frac12-h$. Thus, the proportion of the area of triangle $FCD$ that's closer to $F$ than to $CD$ is</p>
<p>$$
h^2+\frac43h\left(\frac12-h\right)=\frac23h-\frac13h^2=\frac{2\cos\phi(1+\cos\phi)-\cos^2\phi}{3(1+\cos\phi)^2}=\frac13-\frac1{12\cos^4\frac\phi2}\;.
$$</p>
<p>This doesn't seem to take rational values except for $n=3$ and for $n\to\infty$, where the limit is $\frac13-\frac1{12}=\frac14$, the value for the circle.</p>
|
geometry | <p><img src="https://i.sstatic.net/d6azj.jpg" alt="enter image description here" /></p>
<p>I discovered this elegant theorem in my facebook feed. Does anyone have any idea how to prove?</p>
<p>Formulations of this theorem can be found in the answers and the comments. You are welcome to join in the discussion.</p>
<p>Edit:
Current progress: The theorem is proven. There is a bound on the curvature that must be satisfied for the theorem to hold.</p>
<p>Some unresolved issues and some food for thought:</p>
<p>(1) Formalise the definition (construction) of the parallel curve in the case of concave curves. Thereafter, consider whether this theorem is true under this definition, where the closed smooth curve is formed by convex and concave arcs joined alternately.</p>
<p>(2) Reaffirm the upper bound on <span class="math-container">$r$</span> proposed, i.e. <span class="math-container">$r=\rho$</span>, where <span class="math-container">$\rho$</span> is the radius of curvature, to avoid self-intersection.</p>
<p>(3) What is the minimal regularity required of the curves in order for this theorem to be true?
This is similar to the first part of question (1).</p>
<p>(4) Can this proof be more generalised? For example, what if this theorem is extended into higher dimensions? (Is there any analogy in higher dimensions?)</p>
<p>Also, I would like to bring your attention towards some of the newly posted answers.</p>
<p>Meanwhile, any alternative approach or proof is encouraged. Please do not hesitate in providing any insights or comments in further progressing this question.</p>
| <p><img src="https://i.sstatic.net/gi52Y.jpg" alt="Example for irregular pentagon"></p>
<p>Consider the irregular pentagon above. I have drawn it as well as its "extended version," with lines connecting the two. Notice that the perimeter of the extended version is the same as the original, save for several arcs — arcs which fit together, when translated, to form a perfect circle! Thus, the difference in length between the two is the circumference of that circle, $2\pi r$.</p>
<p>Any convex shape can be approximated as well as one wishes by convex polygons. Thus, by taking limits (you have to be careful here but it works out), it works for any convex shape.</p>
<p>I'll get back to you on the concave case, but it should still work. EDIT: I'm not sure it does… EDIT EDIT: Ah, the problem with my supposed counterexample — the triomino — is that the concave vertex was more than $r$ away from any point on its extended version. If you round out the triomino at that vertex, it works again. TL;DR, it works for concave shapes provided there are no "creases."</p>
| <p>Let $\beta: I \rightarrow \mathbb{R^2}, I \subset \mathbb{R}$ be a <em>positively oriented</em> plane curve <strong><em>parametrised by arc length</em></strong>. </p>
<p>Now note that $\alpha$ is constructed by moving each point on $\beta$ a distance of $r$ along its <em>outward unit</em> normal vector. We can articulate this more precisely:</p>
<blockquote class="spoiler">
<p> Since $\beta$ is param. by arc length, by definition, its (<em>unit</em>) tangent vector $\beta' =t_\beta$ has a norm of $1$ - that is, $t_\beta . t_\beta = 1$. Differentiate this inner product using the product rule to see that $2t_\beta . t'_\beta = 0$. One deduces that $t'_\beta \perp t_\beta$, so $\beta''=t'_\beta$ is normal to the curve. By the Serret-Frenet relation in the plane (i.e. 'torsion' $\tau$ vanishes), $\beta''=\kappa n_\beta$, where $\kappa$ is the signed plane curvature and $n_\beta$ is the unit normal obtained by rotating $t_\beta$ through $+\pi/2$. For a <em>positively oriented</em> curve this Frenet normal points <em>into</em> the enclosed region, so the <em>outward</em> unit normal is $\nu_\beta = -n_\beta$, i.e. $t_\beta$ rotated through $-\pi/2$.</p>
</blockquote>
<p>So, </p>
<p>$$
\alpha = \beta + r\nu_\beta \ \ \ (*)
$$</p>
<p>The arc length of a space curve $\gamma:(a,b)\rightarrow \mathbb{R^n}$, under any regular parametrisation, is given by,</p>
<p>$$
\int^b_a ||\gamma'(s)||\,ds
$$</p>
<p>(where $s$ is the parametrisation variable)</p>
<p>Let $l_\alpha$ and $l_\beta$ denote the respective lengths of $\alpha$ and $\beta$. We wish to show that $l_\alpha - l_\beta=2\pi r$</p>
<p>Computing the relevant integral using ($*$) (I needn't bother explicitly writing the bounds since these aren't important to us here) and writing $\beta:= \beta(s)$ (which in turn, induces $\alpha=\alpha(s))$,</p>
<p>$$
l_\alpha - l_\beta= \int ||\alpha'||\,ds - \int ||\beta'||\,ds = \int \left(||\beta' + r \nu'_\beta|| - ||\beta'||\right)\,ds\ \ \ \ (**)
$$</p>
<p>Recall that $\beta'=t_\beta$. We must determine the nature of $\nu'_\beta$ in order to proceed:</p>
<p>Define the scalar function $\theta(s)$ as the inclination of $t_\beta(s)$ to the horizontal. Then we may write $t_\beta(s)=(\cos\theta(s),\sin\theta(s))$, and so $\nu_\beta=(\sin\theta(s),-\cos\theta(s))$ by application of the rotation matrix through $-\pi/2$. This gives us,</p>
<p>$$
\nu'_\beta(s)=\theta'(s)t_\beta(s)
$$</p>
<p>We can stop sweating now since we can see that $\nu'_\beta$ is parallel to $\beta'$ - and that makes everything pretty neat. Plugging all of this into ($**$) and recalling that $||t_\beta||=1$,</p>
<p>$$
l_\alpha - l_\beta= \int \left(||t_\beta + r\theta't_\beta|| - ||t_\beta||\right)\,ds = \int \left(|1+r\theta'| - 1\right)\,ds = \int r\theta'\,ds
$$</p>
<p>where the last step uses $1+r\theta' > 0$: this is automatic for a convex curve (where $\theta' = \kappa \geq 0$), and holds in general provided $r$ is smaller than the radius of curvature at every concave point.</p>
<p>And finally,</p>
<p>$$
l_\alpha-l_\beta = r \int \theta'(s)\,ds=2\pi r,
$$</p>
<p>since the total turning $\int \theta'\,ds$ of a simple closed, positively oriented curve traversed once is exactly $2\pi$.</p>
<p><strong>Q.E.D</strong></p>
<p><strong><em>Fun fact</em></strong>: $\theta'(s)$ is exactly $\kappa(s)$, the signed plane curvature (from Serret-Frenet relation $t'=\kappa n$ for space curves given torsion $\tau$ vanishes) and so the final integral is often seen as $\int \kappa(s)ds$ which is a quantity called the <em>total curvature</em>!</p>
<p><strong>Caveat</strong> One must assume differentiability on the inner curve - note that the result fails for polygons or when given vertices. Furthermore, the special case of concave curves (where the initial result $l_\alpha=l_\beta+2\pi r$ does not always hold) is discussed in the comments below - this is not too difficult to deal with given a restriction on $r$ (though we don't have to apply this restriction if self-intersections <em>are permitted</em>; if they are indeed permitted, the proof will still work!) and modified computation of the total curvature.</p>
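<p><strong>Numerical sanity check</strong>: a short sketch assuming NumPy; the ellipse and the value of $r$ are arbitrary choices, with $r$ kept below the minimum radius of curvature $b^2/a$ so that no self-intersection occurs.</p>
<pre><code>import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 400001)
a, b, r = 3.0, 2.0, 0.5

bx, by = a * np.cos(t), b * np.sin(t)    # beta, positively oriented
tx, ty = -a * np.sin(t), b * np.cos(t)   # (unnormalized) tangent
speed = np.hypot(tx, ty)
nx, ny = ty / speed, -tx / speed         # outward unit normal: tangent rotated by -pi/2

ax_, ay_ = bx + r * nx, by + r * ny      # alpha = beta + r * nu

length = lambda x, y: np.hypot(np.diff(x), np.diff(y)).sum()
print(length(ax_, ay_) - length(bx, by), 2.0 * np.pi * r)   # both about 3.1416
</code></pre>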
|
number-theory | <p>The standard way (other than generating up to $N$) is to check if $(5N^2 + 4)$ or $(5N^2 - 4)$ is a perfect square. What is the mathematical logic behind this? Also, is there any other way for checking the same? </p>
| <p>Binet's formula tells you that $F_n$ is asymptotic to $\frac{1}{\sqrt{5}} \phi^n$, where $\phi = \frac{1 + \sqrt{5}}{2}$ is the golden ratio. So to check if a number $x$ is a Fibonacci number you should compute $n = \frac{\log (\sqrt{5} x)}{\log \phi}$. If this number is very close to an integer then $x$ should be very close to the Fibonacci number $F_{[n]}$ where $[n]$ denotes the closest integer to $n$. In fact I think $x$ is a Fibonacci number if and only if the error is less than about $\frac{\log \phi}{5x^2}$.</p>
<p>But if you don't trust that computation, you can compute $F_{[n]}$ in $O(\log n)$ time, for example by using <a href="http://en.wikipedia.org/wiki/Exponentiation_by_squaring">binary exponentiation</a> to compute powers of the matrix $\left[ \begin{array}{cc} 1 & 1 \\\ 1 & 0 \end{array} \right]$, and then you can check whether this number equals $x$.</p>
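<p>For reference, the perfect-square test from the question is exact in integer arithmetic and easy to implement (a minimal sketch; <code>math.isqrt</code> needs Python 3.8+):</p>
<pre><code>import math

def is_fibonacci(n: int) -> bool:
    # n >= 0 is a Fibonacci number iff 5n^2 + 4 or 5n^2 - 4 is a perfect square
    def is_square(m: int) -> bool:
        return m >= 0 and math.isqrt(m) ** 2 == m
    return is_square(5 * n * n + 4) or is_square(5 * n * n - 4)

# sanity check against the sequence itself
fibs, a, b = set(), 0, 1
while a < 100_000:
    fibs.add(a)
    a, b = b, a + b
assert all(is_fibonacci(k) == (k in fibs) for k in range(100_000))
</code></pre>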
<p>Your first question can be answered using the theory of (generalized) <a href="http://en.wikipedia.org/wiki/Pells_equation">Pell equations</a>. </p>
| <p><strong>HINT</strong> $\rm\ n\:$ is a Fibonacci number iff the interval $\rm\ [\phi\ n - 1/n,\ \phi\ n + 1/n]\ $ contains a positive integer (the next Fibonacci number for $\rm\ n > 1\:$). This follows from basic properties of continued fractions. For one proof see e.g. $\ $ T. Komatsu: <a href="http://www.fq.math.ca/Scanned/41-1/komatsu.pdf">The interval associated with a fibonacci number</a>. $\ $ This is a reasonably efficient test since it requires only a few multiplications and additions. $\ $ For example, $\rm\ 2 \to [2.7,\ 3.7],\ \ 3\to [4.5,\ 5.2],\ \ 5 \to [7.9,\ 8.3]\ \ldots$</p>
|
logic | <p>I wanted to give an easy example of a non-constructive proof, or, more precisely, of a proof which states that an object exists, but gives no obvious recipe to create/find it.</p>
<p>Euclid's proof of the infinitude of primes came to mind, however there is an obvious way to "fix" it: just try all the numbers between the biggest prime and the constructed number, and you'll find a prime in a finite number of steps.</p>
<p>Are there good examples of <strong>simple</strong> non-constructive proofs which would require a substantial change to be made constructive? (Or better yet, can't be made constructive at all).</p>
| <p>Some digit occurs infinitely often in the decimal expansion of $\pi$.</p>
| <p>Claim: There exist irrational $x,y$ such that $x^y$ is rational.</p>
<p>Proof: If $\sqrt2^{\sqrt2}$ is rational, take $x=y=\sqrt 2$. Otherwise take $x=\sqrt2^{\sqrt2}, y=\sqrt2$, so that $x^y=2$.</p>
|
geometry | <p>Geometry is one of the oldest branches of mathematics, and many famous problems have been proposed and solved in its long history.</p>
<p>What I would like to know is: <strong>What is the oldest open problem in geometry?</strong></p>
<p>Also <em>(soft questions)</em>: Why is it so hard? Which existing tools may be helpful to handle it? If twenty great geometers of today gathered to work together in the problem, would they (probably) be able to solve it?</p>
<p>P.S. The problem can be of any area of geometry (discrete, differential, etc...)</p>
| <p><strong>Problem</strong>: </p>
<blockquote>
<p>Does every triangular billiard have a periodic orbit?</p>
</blockquote>
<p>For acute triangles, the question has been answered affirmatively by Fagnano in 1775: one can simply take the length $3$ orbit joining the basepoints of the heights of the triangle. </p>
<p><img src="https://i.sstatic.net/4q57g.png" alt="enter image description here"></p>
<p>For (generic) obtuse triangles, the answer is not known in spite of very considerable efforts of many mathematicians. <a href="http://arxiv.org/pdf/math/0609392v2.pdf" rel="noreferrer">Apparently</a>, A. Katok has offered a \$10,000 prize for a solution of this problem.</p>
| <p>One of my favourites because it's just so simple to state. Discussed in <a href="https://mathoverflow.net/questions/17313/">this MO question</a> and related to an also very interesting, though solved question/puzzle <a href="https://math.stackexchange.com/questions/481527/">which I posted here a while ago</a>. I'm not sure when the problem was first stated but it could certainly have been understood by even the earliest mathematicians.</p>
<blockquote>
<p>Can a disk be tiled by a finite number of congruent pieces (You can rotate or flip pieces onto each other) such that the center of the disk is contained in the interior of one of the pieces?</p>
</blockquote>
<p>So far, the only kinds of tilings of the disk by congruent pieces, which we know of, all have the center of the disk lying on the boundary of more than one piece. Here are some examples which fail to have the center in the interior of one of the pieces:</p>
<p><img src="https://i.sstatic.net/wA0xf.png" alt="enter image description here">
<em>From <a href="https://math.stackexchange.com/a/481546/29059">Robert Israel's</a> answer</em></p>
<p><img src="https://i.sstatic.net/OlRmN.png" alt="enter image description here">
<em>From <a href="https://math.stackexchange.com/a/481702/29059">robjohn's</a> answer</em></p>
<p>I should add that by 'piece' we mean some nice subset of the disk, say one homeomorphic to a disk itself and equal to the closure of its interior.</p>
<p>By 'tiling' we mean the union of the pieces should be the entire disk, and the intersection of any two pieces should be contained within the union of the boundary of the pieces.</p>
|
geometry | <p>Is there a proof that the ratio of a circle's diameter and the circumference is the same for all circles, that doesn't involve some kind of limiting process, e.g. a direct geometrical proof?</p>
| <p>Limits are not involved in the problem of proving that $\pi(C)$ is independent of the circle $C$.</p>
<p>In geometrical definitions of $\pi$, to a circle $C$ is associated a sequence of finite polygonal objects and thus a sequence of numbers (or lengths, or areas, or ratios of those) $\pi_k(C)$. This sequence is thought of as a set of approximations converging to $\pi$, but that doesn't concern us here; what is important is that the sequence is <em>independent of the circle C</em>. Any further aspects of the sequence such as its limit or the rate of convergence will also be the same for any two circles.</p>
<p>(edit: an example of a "geometrical definition" of a sequence of approximants $\pi_k(C)$ is: perimeter of a regular $k$-sided polygon inscribed in circle C, divided by the diameter of C. Also, the use of words like <em>limit</em> and <em>approximation</em> above does not reflect any assumption that the sequences have limits or that an environment involving limits has been set up. We are demonstrating that if $\pi(C)$ is defined using some construction on the sequence, then whether that construction involves limits or not, it must produce the same answer for any two circles.)</p>
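<p>(To make the independence concrete for that particular choice - an illustrative computation only, since the argument below does not rely on trigonometry: a regular $k$-gon inscribed in a circle of radius $r$ has perimeter $2kr\sin(\pi/k)$, so $$\pi_k(C) = \frac{2kr\sin(\pi/k)}{2r} = k\sin\frac{\pi}{k},$$ which visibly does not depend on $C$.)</p>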
<p>The proof that $\pi_k(C_1) = \pi_k(C_2)$ of course would just apply the similarity of polygons and the behavior of length and area with respect to changes of scale. This argument does not assume a limit-based theory of length and area, because the theory of length and area for polygons in Euclidean geometry only requires dissections and rigid motions ("cut-and-paste equivalence" or <em>equidecomposability</em>). Any polygonal arc or region can be standardized to an interval or square by a finite number of (area and length preserving) cut-and-paste dissections. Numerical calculations involving the $\pi_k$, such as ratios of particular lengths or areas, can be understood either as applying to equidecomposability classes of polygons, or to the standardizations. In both interpretations, due to the similitude, the results will be the same for $C_1$ and $C_2$.</p>
<p>(You might think that this is proving a different conclusion, that the equidecomposability version of $\pi$ for the two circles is equal, and not the numerical equality of $\pi$ within a theory that has real numbers as lengths and areas for arbitrary curved figures. However, any real number-based theory, including elementary calculus, Jordan measure, and Lebesgue measure, is set up with a minimum requirement of compatibility with the geometric operations of dissection and rigid motion, so once equidecomposability is known, numerical equality will also follow.) </p>
| <p>Intuitively, all circles are similar and therefore doubling the diameter also doubles the circumference. The same applies to ratios other than 2.</p>
<p>To make this rigorous, we have to consider what we mean by “the length of the circumference.” The usual rigorous definition uses integration and therefore relies on the notion of limits. I guess that any rigorous definition of the length of a curve ultimately requires the notion of limits.</p>
<p>Edit: Rephrased a little to make the connection between the two paragraphs clearer.</p>
|
probability | <p>There is a famous citation that says "It is evident that the primes are randomly distributed but, unfortunately, we don't know what 'random' means." <em>R. C. Vaughan (February 1990)</em></p>
<p>I have this very clear but rather broad question that might be answered from different opinions and viewpoints. However, my question is really not targeting an intuitive or philosophical answer; I am asking for viewpoints with a solid mathematical foundation.</p>
<p><em>Are primes randomly distributed?</em> And if so, <em>what is 'random' in this context?</em></p>
<hr>
<p><em>A posteriori</em> I</p>
<p>A possible hint comes perhaps from the theory of complex dynamical systems.</p>
<p><em>It can be difficult to tell from data whether a physical or other observed process is random or chaotic, because in practice no time series consists of pure 'signal.' There will always be some form of corrupting noise, even if it is present as round-off or truncation error. Thus any real time series, even if mostly deterministic, will contain some randomness. All methods for distinguishing deterministic and stochastic processes rely on the fact that a deterministic system always evolves in the same way from a given starting point.</em> (ref <a href="http://dx.doi.org/10.1016/0167-2789%2892%2990100-2" rel="noreferrer">1</a>, <a href="http://dx.doi.org/10.1016/0022-0531%2886%2990014-1" rel="noreferrer">2</a>, <a href="http://deepeco.ucsd.edu/~george/publications/90_nonlinear_forecasting.pdf" rel="noreferrer">3</a>, and "<a href="http://en.wikipedia.org/wiki/Chaos_theory" rel="noreferrer">Distinguishing random from chaotic data</a>") - in line with the latter, recall that every prime $p$ can be trivially identified by a sieve that applies the prior primes $q<p$, so it is possible to determine that the system somehow <em>evolves in the same way from a given starting point</em>. Of course, one must take into account that <em>time</em> is substituted by a <em>walking index</em> as well.</p>
<hr>
<p><em>A posteriori</em> II</p>
<p>Thank you for the many comprehensive answers and discussions. This is quite a classic question on MSE, and we have since moved much further. You are right that the primes are not random in the sense of the question above. Indeed, one can show that their sequence is some type of "deterministic chaos". We do not need the Riemann function for this purpose. The prime sequence is a so-called "ordered iterative sequence". This has since been further elaborated in the source "The Secret Harmony of Primes" (ISBN 978-9176370001) <a href="http://a.co/iIHQqR8" rel="noreferrer">http://a.co/iIHQqR8</a>.
Some of you correctly referred to sieving. It is crucial, however, to regard sieving procedures as a subset of "interference" (including frequencies and amplitudes). We can iteratively apply interference rules to obtain, starting from the first prime 2, the next ordered sequence. This can be continued iteratively in an "ordered" way and within the exact boundaries of prime squares (for 100% certainty). Indeed, to construct an ordered sequence of primes you need only begin with 2. The Riemann approach is charming but raises difficulties, since we do not yet have a proof of the hypothesis that connects the order of the non-trivial zeros with the primes. So if you apply Riemann, as some colleagues here suggest, you would need to preface your argument at every turn with something like "provided the Riemann hypothesis is true...". Bearing in mind that the one rule the primes do follow is that, in an interference scheme, all odd prime frequencies dance on the base frequency of 2 (an ordered iterative sequence), one may even entertain a parallel in the Riemann-transformed world: that all non-trivial zeros dance on 1/2. But the latter remains no more than a tempting speculation for now.</p>
| <p>The primes are not randomly distributed. They are completely deterministic in the sense that the $n$th prime can be found via sieving. We speak loosely of the probability that a given number $n$ is prime $({\bf P}(n\in {\mathbb P}) \approx 1/\log n)$ based on the prime number theorem but this does not change matters and is largely a convenience. </p>
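<p>A quick numerical sketch of that convenience (it assumes nothing beyond the prime number theorem heuristic ${\bf P}(n\in\mathbb{P})\approx 1/\log n$ quoted above): the exact count of primes up to $n$ is tracked reasonably well by $n/\log n$.</p>
<pre><code>import math

def count_primes(n):
    # sieve of Eratosthenes; returns the number of primes <= n
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for n in (10**4, 10**5, 10**6):
    print(n, count_primes(n), round(n / math.log(n)))
# e.g. n = 10**6 gives 78498 exact vs 72382 from the heuristic
</code></pre>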
<p>Some of the confusion is perhaps due to the use of probabilistic methods to prove interesting things about primes, and because, once we put the sieve aside, the primes are pretty inscrutable. They seem random in the sense that we cannot predict their appearance in some formulaic way.</p>
<p>On the other hand the primes have properties associated more or less directly with random numbers. It has been shown that the form of the "explicit formulas" (such as that of von Mangoldt) obeyed by zeros of the $\zeta$ function implies what is known as the GUE hypothesis: roughly speaking, the zeros of the $\zeta$ function are spaced in a non-random way. The eigenvalues of certain types of random matrices share this property with the zeros. There is a proof of this.$^1$</p>
<p>So it can be said that the primes are a deterministic sequence that via the $\zeta$ function share a salient feature with putatively random sequences. </p>
<p>In response to the particular question, "random" here is the "random" of random matrix theory. The paper trail is pretty clear from the work below and it's not a subject that fits into an answer box. </p>
<p>$^1$ Rudnick and Sarnak, Zeros of Principal L-Functions and Random Matrix Theory, Duke Math. J., vol. 81 no. 2 (1996). </p>
| <p>Terence Tao wrote about it, I've found this <a href="http://www.youtube.com/watch?v=e2V5U8Gwebc" rel="noreferrer">video</a> and there's also one article called: <em>Structure and randomness in the prime numbers</em>, I've read it in the book: <em><a href="http://rads.stackoverflow.com/amzn/click/3642195326" rel="noreferrer">An Invitation to Mathematics: From Competitions to Research</a>, by Dierk Schleicher and Malte Lackmann.</em></p>
<p>The article I mentioned can be found <a href="http://www.google.com/url?sa=t&source=web&cd=4&ved=0CDgQFjAD&url=http%3A%2F%2Fterrytao.files.wordpress.com%2F2009%2F09%2Fprimes_paper.pdf&ei=a-W8UdKGLom88wTB34EY&usg=AFQjCNG_Kl023Lg2oX6hN8HsdLEw_tu8Nw" rel="noreferrer">here</a>.</p>
|
number-theory | <blockquote>
<p>I conjecture that there exist infinitely many integers $n$ such
that $$(n^{2015}+1)\mid n!.$$</p>
</blockquote>
<p>I have seen a simpler problem that there exist infinitely many integers $n$ such that $(n^2+1)\mid n!$.</p>
<p>Alternatively, I considered the Pell equation
$n^2+1=5m^2$, $2m<n$, but for $2015$ I can't figure it out.</p>
| <p>Modest progress. There are infinitely many integers $n$ such that $n^3+1\mid n!$.</p>
<p>We always have $n^3+1=(n+1)(n^2-n+1)$. Let $n=k^2+1$. Then
$$
n^2-n+1=(1+k+k^2)(1-k+k^2).
$$
Assume further that $k\equiv1\pmod3$. In that case $1+k+k^2$ and $n+1=2+k^2$ are both divisible by $3$. For all sufficiently large $k\equiv1\pmod3$ we thus have
$$
(k^2+1)^3+1=3^2\cdot\frac{k^2+2}3\cdot\frac{k^2+k+1}3(k^2-k+1)
$$
that is clearly a factor of $(k^2+1)!$.</p>
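<p>(A quick computational spot-check of the $n^3+1$ case - a sketch that tests small $k$ only, of course not a proof:)</p>
<pre><code>from math import factorial

# n = k^2 + 1 with k % 3 == 1, i.e. k = 4, 7, ..., 31
for k in range(4, 32, 3):
    n = k * k + 1
    assert factorial(n) % (n**3 + 1) == 0, (k, n)
print("n^3 + 1 divides n! for all tested k")
</code></pre>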
| <p>It suffices to show that for infinitely many $n$, the largest prime factor of $n^{2015}+1$ is at most $\sqrt{n}$. Indeed, if $n$ is such a large integer and $p$ is a prime, then the largest value of $a$ for which $p^a\mid n^{2015}+1$ is $\leq c \log n$ for some constant $c$, while $n!$ is divisible by $p^a$ with $a\geq \frac{n}{p}-1\geq \sqrt{n}-1>c \log n$. It was shown by Schinzel (Theorem 13 in <a href="https://eudml.org/doc/204826">https://eudml.org/doc/204826</a>) that for any nonzero integers $A$ and $B$, any integer $k\geq 2$ and any $\varepsilon>0$ there exist infinitely many integers $n$ such that the largest prime factor of $An^k+B$ is less than $n^{\varepsilon}$. In particular, the claim of the problem holds with $2015$ replaced by any positive integer.</p>
|
matrices | <blockquote>
<p>If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$. </p>
</blockquote>
<p>I do not understand anything more than the following.</p>
<ol>
<li>Elementary row operations.</li>
<li>Linear dependence.</li>
<li>Row reduced forms and their relations with the original matrix.</li>
</ol>
<p>If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?</p>
<p><strong>P.S.</strong>: Please avoid using the transpose and/or inverse of a matrix.</p>
| <p>Dilawar says in 2. that he knows linear dependence! So I will give a proof, similar to that of TheMachineCharmer, which uses linear independence.</p>
<p>Suppose each matrix is $n$ by $n$. We consider our matrices to all be acting on some $n$-dimensional vector space with a chosen basis (hence isomorphism between linear transformations and $n$ by $n$ matrices).</p>
<p>Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.</p>
<p>Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Thus, since $B$ has full range, the matrix $I-BA$ gives $0$ on all vectors. But this means that it must be the $0$ matrix, so $I=BA$.</p>
| <p>We have the following general assertion:</p>
<p><strong>Lemma.</strong> <em>Let <span class="math-container">$A$</span> be a finite-dimensional <a href="https://en.wikipedia.org/wiki/Algebra_over_a_field" rel="noreferrer"><span class="math-container">$K$</span>-algebra</a>, and <span class="math-container">$a,b \in A$</span>. If <span class="math-container">$ab=1$</span>, then <span class="math-container">$ba=1$</span>.</em></p>
<p>For example, <span class="math-container">$A$</span> could be the algebra of <span class="math-container">$n \times n$</span> matrices over <span class="math-container">$K$</span>.</p>
<p><strong>Proof.</strong> The sequence of subspaces <span class="math-container">$\cdots \subseteq b^{k+1} A \subseteq b^k A \subseteq \cdots \subseteq A$</span> must be stationary, since <span class="math-container">$A$</span> is finite-dimensional. Thus there is some <span class="math-container">$k$</span> with <span class="math-container">$b^{k+1} A = b^k A$</span>. So there is some <span class="math-container">$c \in A$</span> such that <span class="math-container">$b^k = b^{k+1} c$</span>. Now multiply with <span class="math-container">$a^k$</span> on the left to get <span class="math-container">$1=bc$</span>. Then <span class="math-container">$ba=ba1 = babc=b1c=bc=1$</span>. <span class="math-container">$\square$</span></p>
<p>The proof also works in every left- or right-<a href="https://en.wikipedia.org/wiki/Artinian_ring" rel="noreferrer">Artinian ring</a> <span class="math-container">$A$</span>. In particular, the statement is true in every finite ring.</p>
<p>Remark that we need in an essential way some <strong>finiteness condition</strong>. There is no purely algebraic manipulation with <span class="math-container">$a,b$</span> that shows <span class="math-container">$ab = 1 \Rightarrow ba=1$</span>.</p>
<p>In fact, there is a <span class="math-container">$K$</span>-algebra with two elements <span class="math-container">$a,b$</span> such that <span class="math-container">$ab=1$</span>, but <span class="math-container">$ba \neq 1$</span>. Consider the left shift <span class="math-container">$a : K^{\mathbb{N}} \to K^{\mathbb{N}}$</span>, <span class="math-container">$a(x_0,x_1,\dotsc) := (x_1,x_2,\dotsc)$</span> and the right shift <span class="math-container">$b(x_0,x_1,\dotsc) := (0,x_0,x_1,\dotsc)$</span>. Then <span class="math-container">$a \circ b = \mathrm{id} \neq b \circ a$</span> holds in the <span class="math-container">$K$</span>-algebra <span class="math-container">$\mathrm{End}_K(K^{\mathbb{N}})$</span>.</p>
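<p>The shift counterexample is easy to play with concretely (a sketch, representing an element of $K^{\mathbb{N}}$ as a function from indices to values):</p>
<pre><code>def a(f):
    # left shift: (x0, x1, x2, ...) -> (x1, x2, x3, ...)
    return lambda n: f(n + 1)

def b(f):
    # right shift: (x0, x1, x2, ...) -> (0, x0, x1, ...)
    return lambda n: 0 if n == 0 else f(n - 1)

f = lambda n: n + 1                      # the sequence 1, 2, 3, ...
print([a(b(f))(n) for n in range(5)])    # [1, 2, 3, 4, 5] -- a after b is the identity
print([b(a(f))(n) for n in range(5)])    # [0, 2, 3, 4, 5] -- b after a is not
</code></pre>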
<p>See <a href="https://math.stackexchange.com/questions/298791">SE/298791</a> for a proof of <span class="math-container">$AB=1 \Rightarrow BA=1$</span> for square matrices over a commutative ring.</p>
|
probability | <p>It's a standard exercise to find the Fourier transform of the Gaussian $e^{-x^2}$ and show that it is equal to itself. Although it is computationally straightforward, this has always somewhat surprised me. My intuition for the Gaussian is as the integrand of normal distributions, and my intuition for Fourier transforms is as a means to extract frequencies from a function. They seem unrelated, save for their use of the exponential function.</p>
<p>How should I understand this property of the Gaussian, or in general, eigenfunctions of the Fourier transform? The Hermite polynomials are eigenfunctions of the Fourier transform and play a central role in probability. Is this an instance of a deeper connection between probability and harmonic analysis?</p>
| <p>The generalization of this phenomenon, from a probabilistic standpoint, is the Wiener-Askey Polynomial Chaos.</p>
<p>In general, there is a connection between orthogonal polynomial families in the Askey scheme and probability distribution/mass functions.</p>
<p>Orthogonality of these polynomials can be shown in an inner product space using a weighting function -- a weight function that typically happens to be, within a scale factor, the pdf/pmf of some distribution.</p>
<p>In other words, we can use these orthogonal polynomials as a basis for a series expansion of a random variable:</p>
<p>$$z = \sum_{i=0}^\infty z_i \Phi_i(\zeta).$$</p>
<p>The random variable $\zeta$ belongs to a distribution we choose, and the orthogonal polynomial family to which $\Phi$ belongs follows from this choice.</p>
<p>The <em>deterministic</em> coefficients $z_i$ can be computed easily by using Galerkin's method.</p>
<p>So, yes. There is a very deep connection in this regard, and it is extremely powerful, particularly in engineering applications. Strangely, many mathematicians do not know this relationship!</p>
<hr>
<p>See also: <a href="http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA460654" rel="noreferrer">http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA460654</a> and the Cameron-Martin Theorem.</p>
| <p>There's a simple reason why taking a Fourier-like transform of a Gaussian-like function yields another Gaussian-like function. Consider the property $$\mathcal{T}[f^\prime](\xi) \propto \xi \hat{f}(\xi)$$
of a transform $\mathcal{T}$. We will call an invertible transform $\mathcal{F}$ "Fourier-like" if both it and its inverse have this property.</p>
<p>Define a "Gaussian-like" function as one with the form $$f(x) = A e^{a x^2}.$$ Functions with this form satisfy
$$f^\prime(x) \propto x f(x).$$
Taking a Fourier-like transform of each each side yields
$$\xi \hat{f}(\xi) \propto \hat{f}^\prime(\xi).$$
This is has the same form as the previous equation, so it is not surprising that its solutions have the Gaussian-like form
$$\hat{f}(\xi) = B e^{b \xi^2}.$$</p>
|
logic | <p>I am currently reading <a href="https://leanprover.github.io/theorem_proving_in_lean/axioms_and_computation.html" rel="noreferrer">Theorem Proving in Lean</a>, a document dealing with how to use <a href="https://leanprover.github.io" rel="noreferrer">Lean</a> (an open source theorem prover and programming language). My question stems from chapter 11.</p>
<p>The particular section of the chapter that I'm interested in discusses the history and philosophical context of an initially "essentially computational" style of mathematics, until the 19th century, when a "more "conceptual"" understanding of mathematics was required.</p>
<p>The following quote is taken from the second paragraph of section 11.1 of the linked document.</p>
<blockquote>
<p>The goal was to obtain a powerful “conceptual” understanding without getting bogged down in computational details, but this had the effect of admitting mathematical theorems that are simply <em>false</em> on a direct computational reading.</p>
</blockquote>
<p>This is by no means essential to the understanding of the contents of the document itself, but I'm curious to ask what examples of theorems are false in a "direct computational reading".</p>
<p>In other words, <strong>are there any examples of theorems that are true conceptually, but false computationally?</strong></p>
| <p>Here is a longer quote from the <a href="https://leanprover.github.io/theorem_proving_in_lean/axioms_and_computation.html" rel="noreferrer">article linked in the question</a>:</p>
<blockquote>
<p>For most of its history, mathematics was essentially computational: geometry dealt with constructions of geometric objects, algebra was concerned with algorithmic solutions to systems of equations, and analysis provided means to compute the future behavior of systems evolving over time. From the proof of a theorem to the effect that “for every x, there is a y such that …”, it was generally straightforward to extract an algorithm to compute such a y given x.</p>
<p>In the nineteenth century, however, increases in the complexity of mathematical arguments pushed mathematicians to develop new styles of reasoning that suppress algorithmic information and invoke descriptions of mathematical objects that abstract away the details of how those objects are represented. The goal was to obtain a powerful “conceptual” understanding without getting bogged down in computational details, but this had the effect of admitting mathematical theorems that are simply false on a direct computational reading.</p>
</blockquote>
<p>With that, we see that the authors are talking about theorems of the form "for every <span class="math-container">$x$</span> there is a <span class="math-container">$y$</span> ...". There are many such theorems that are true classically, and where objects of the type of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> could be represented by a computer, but where there is no program to produce <span class="math-container">$y$</span> from <span class="math-container">$x$</span>. This area of study overlaps constructive mathematics and computability theory. In computability theory, rather than just proving that particular theorems are not computably true, we instead try to classify <em>how uncomputable</em> the theorems are. There is also a research field of <em>proof mining</em> which is able to extract algorithms from a number of classical proofs (of course, not all). This program has led to new concrete bounds for theorems in analysis, among other things.</p>
<p>The phenomenon of uncomputability in classical mathematical theorems is very widespread. I will give just a few examples, trying to include several areas of mathematics.</p>
<p><strong>Hilbert's 10th Problem</strong> One example comes from Hilbert's 10th problem. Given a multivariable polynomial with integer coefficients, there is a natural number <span class="math-container">$n$</span> so that <span class="math-container">$n = 1$</span> if the polynomial has an integer root, and <span class="math-container">$n = 0$</span> otherwise. This is a trivial classical theorem, but the MRDP theorem shows exactly that there is no program that can produce <span class="math-container">$n$</span> from the polynomial in every case.</p>
<p>That is, given a multivariable polynomial with integer coefficients, where we can substitute integers for the variables, there is no effective way to decide whether <span class="math-container">$0$</span> is in the range. The proof uses classical computability theory, and shows that this decision problem is equivalent to the halting problem, a benchmark example of an uncomputable decision problem.</p>
<p><strong>Jordan forms</strong> Anther example comes when we work with <em>infinite precision</em> real numbers, so that a real number is represented as a Cauchy sequence of rationals that converges at a fixed rate. These sequences can be manipulated by programs, and this is a standard part of computable analysis.</p>
<p>We know from linear algebra that every square matrix over the reals has a Jordan canonical form. However, there is no program that, given a square matrix of infinite precision reals, produces the Jordan canonical form matrix.</p>
<p>The underlying reason for this is continuity: fixing a dimension <span class="math-container">$N$</span>, the map that takes an <span class="math-container">$N \times N$</span> matrix to its Jordan form is not continuous. However, if this function were computable then it would be continuous, giving a contradiction.</p>
<p><strong>Countable graph theory</strong> It is easy to represent a countable graph for a computer: we let natural numbers represent the nodes and we provide a function that tells whether there is an edge between any two given nodes. It is a classical theorem that "for each countable graph there is a set of nodes so that the set has exactly one node from each connected component". This is false computationally: there are countable graphs for which the edge relation is computable, but there is no computable set that selects one node from each connected component.</p>
<p><strong>König's Lemma</strong> Every infinite, finitely branching tree has an infinite path. However, even if the tree itself is computable, there does not need to be an infinite computable path.</p>
<p><strong>Intermediate value theorem</strong> Returning to analysis, we can also represent continuous functions from the reals to the reals in a way that, given an infinite precision real number, a program can compute the value of the function at that number, producing another infinite precision real number. The representation is called a code of the function.</p>
<p>The intermediate value theorem is interesting because it is computably true in a weak sense but not in a stronger sense.</p>
<ul>
<li><p>It is true that, for any computable continuous function <span class="math-container">$[0,1] \to \mathbb{R}$</span> with <span class="math-container">$f(0)\cdot f(1) < 0$</span>, there is a computable real <span class="math-container">$\xi \in [0,1]$</span> with <span class="math-container">$f(\xi) = 0$</span>. So, if we lived in a world where everything was computable, the intermediate value theorem would be true.</p>
</li>
<li><p>At the same time, there is not a computable functional <span class="math-container">$G$</span> that takes as input any code for a continuous function <span class="math-container">$f$</span> satisfying the hypotheses above, and produces a <span class="math-container">$\xi = G(f)$</span> with <span class="math-container">$f(\xi) = 0$</span>. So, although the intermediate value theorem might be true, there is no way to effectively <em>find</em> or <em>construct</em> the root just given a code for the continuous function. (A concrete bisection sketch follows this list.)</p>
</li>
</ul>
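<p>Here is that bisection sketch (plain floating point, nothing from the article; the point is that the comparison in the loop asks for the sign of a real number at a point - a query that floats always answer, but that is not computable for infinite-precision reals when the value may be $0$):</p>
<pre><code>def bisect(f, lo=0.0, hi=1.0, steps=60):
    # the classical proof turned algorithm: keep halving a sign-changing interval
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:    # the sign decision that reals need not support
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(bisect(lambda x: x**3 - 0.25))   # ~0.62996, the real cube root of 1/4
</code></pre>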
| <p><a href="https://arxiv.org/pdf/0804.3199.pdf" rel="nofollow noreferrer" title="Petrus H Potgieter">This paper by Potgieter</a> has examples to show that, in a suitable computational sense, Brouwer's Fixed point theorem is false. Quoting from the abstract:</p>
<blockquote>
<p>The main results, the counter-examples of Orevkov and
Baigger, imply that there is no procedure for finding the fixed
point in general by giving an example of a computable function which
does not fix any computable point. </p>
</blockquote>
|
differentiation | <p>I calculated the derivative of $\arctan\left(\frac{1+x}{1-x}\right)$ to be $\frac{1}{1+x^2}$. This is the same as $(\arctan)'$. Why is there no $c$ that satisfies $\arctan\left(\frac{1+x}{1-x}\right) = \arctan(x) +c$? </p>
| <p>The problem is that $\arctan \frac{1+x}{1-x}$ isn't defined at $x = 1$, and in particular isn't differentiable there. In fact, we have
$$
\arctan \frac{1+x}{1-x} - \arctan x = \begin{cases} \frac{\pi}{4} & x < 1, \\ -\frac{3\pi}{4} & x > 1. \end{cases}
$$
So the difference is <em>piecewise</em> constant.</p>
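<p>A two-line numerical check of the two constants (a sketch):</p>
<pre><code>import math

g = lambda x: math.atan((1 + x) / (1 - x)) - math.atan(x)
print(g(0.0), g(0.5), math.pi / 4)          # 0.7854  0.7854  0.7854   (x < 1)
print(g(2.0), g(5.0), -3 * math.pi / 4)     # -2.3562 -2.3562 -2.3562  (x > 1)
</code></pre>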
| <p><strong>Hint:</strong> $$\tan(x+y)=\frac{\tan x + \tan y}{1-\tan x\tan y}$$</p>
<p>What happens when $\tan y=1$?</p>
|
linear-algebra | <p>I've recently started reading about Quaternions, and I keep reading that for example they're used in computer graphics and mechanics calculations to calculate movement and rotation, but without real explanations of the benefits of using them.</p>
<p>I'm wondering what exactly can be done with Quaternions that can't be done as easily (or easier) using more tradition approaches, such as with Vectors?</p>
<p>I believe they are used in quantum physics as well, because rotation with quaternions models spinors extremely well (due to the lovely property that you need to rotate a point in quaternionic space through 2 full revolutions to get back to the 'original', which is exactly what happens with spin-1/2 particles).</p>
<p>They are also, as you said, used in computer graphics a lot for several reasons:</p>
<ol>
<li>they are much more space efficient to store than rotation matrices (4 floats rather than 16)</li>
<li>They are much easier to interpolate than euler angle rotations (spherical interpolation or normalised linear interpolation)</li>
<li>They avoid gimbal lock</li>
<li>It's much cooler to say that your rotation is described as 'a great circle on the surface of a unit 4 dimensional hypersphere' :)</li>
</ol>
<p>I think there are other uses, but a lot of them have been superseded by more general Vectors.</p>
| <p>To understand the benefits of using quaternions you have to consider different ways to represent rotations. </p>
<p>Here are a few ways, with a summary of the pros and cons:</p>
<ul>
<li>Euler angles</li>
<li>Rotation matrices</li>
<li>Axis angle</li>
<li>Quaternions </li>
<li>Rotors (normalized Spinors)</li>
</ul>
<p>Euler angles are the best choice if you want a user to specify an orientation in an intuitive way. They are also space efficient (three numbers). However, it is more difficult to linearly interpolate values. Consider the case where you want to interpolate between 359 and 0 degrees. Linearly interpolating would cause a large rotation, even though the two orientations are almost the same. Writing shortest-path interpolation is easy for one axis, but non-trivial when considering the three Euler angles (for instance, the shortest route between (240, 57, 145) and (35, -233, -270) is not immediately clear).</p>
<p>Rotation matrices specify a new frame of reference using three normalized and orthogonal vectors (Right, Up, Out, which when multiplied become the new x, y, z). Rotation matrices are useful for operations like strafing (sideways movement), which only requires translating along the Right vector of the camera's rotation matrix. However, there is no clear method of interpolating between them. They are also expensive to normalize, which is necessary to prevent scaling from being introduced.</p>
<p>Axis angle, as the name suggests, is a way of specifying a rotation axis and an angle to rotate around that axis. You can think of Euler angles as three axis-angle rotations, where the axes are the x, y, z axes respectively. Linearly interpolating the angle in an axis-angle representation is pretty straightforward (if you remember to take the shortest path), however linearly interpolating between different axes is not.</p>
<p>Quaternions are a way of specifying a rotation through an axis and the cosine of half the angle. Their main advantage is that I can pick any two quaternions and smoothly interpolate between them.</p>
<p>Rotors are another way to perform rotations. Rotors are basically quaternions, but instead of thinking of them as 4D complex numbers, rotors are thought of as real 3D multivectors. This makes their visualization much more understandable (compared to quaternions), but requires fluency in geometric algebra to grasp their significance.</p>
<p>Okay with that as the background I can discuss a real world example. </p>
<blockquote>
<p>Say you are writing a computer game
where the characters are animated in
3ds Max. You need to export a
animation of the character to play in
your game, but cannot faithfully
represent the interpolation used by
the animation program, and thus have
to sample. The animation is going to
be represented as a list of rotations
for each joint. How should we store
the rotations? </p>
<p>If I am going to sample every frame,
not interpolate, and space is not an
issue, I would probably store the
rotations as rotation matrices. If
space was issue, then Euler angles.
That would also let me do things like
only store one angle for joints like
the knee that have only one degree of
freedom.</p>
<p>If I only sampled every 4 frames, and
need to interpolate it depends on
whether I am sure the frame-rate will
hold. If I am positive that the
frame-rate will hold I can use axis
angle relative rotations to perform
the interpolation. This is atypical.
In most games the frame rate can drop
past my sampling interval, which would
require skipping an element in the
list to maintain the correct playback
speed. If I am unsure of what two
orientations I need to interpolate
between, then I would use quaternions
or rotors.</p>
</blockquote>
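<p>Since smooth interpolation is the point that usually sells quaternions, here is a minimal slerp sketch (assumptions of mine, not from the answer above: quaternions as plain <code>(w, x, y, z)</code> tuples, already normalised; the <code>0.9995</code> cutoff is an arbitrary small-angle threshold):</p>
<pre><code>import math

def slerp(q1, q2, t):
    # spherical linear interpolation between unit quaternions (w, x, y, z)
    dot = sum(p * q for p, q in zip(q1, q2))
    if dot < 0.0:                        # take the shorter of the two arcs
        q2, dot = tuple(-c for c in q2), -dot
    if dot > 0.9995:                     # nearly parallel: fall back to nlerp
        out = tuple(p + t * (q - p) for p, q in zip(q1, q2))
        norm = math.sqrt(sum(c * c for c in out))
        return tuple(c / norm for c in out)
    theta = math.acos(dot)               # angle between the two quaternions
    s1 = math.sin((1 - t) * theta) / math.sin(theta)
    s2 = math.sin(t * theta) / math.sin(theta)
    return tuple(s1 * p + s2 * q for p, q in zip(q1, q2))

# identity -> 90 degree rotation about z; halfway should be 45 degrees about z
q_id = (1.0, 0.0, 0.0, 0.0)
q_z90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(slerp(q_id, q_z90, 0.5))           # ~ (cos(pi/8), 0, 0, sin(pi/8))
</code></pre>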
|
combinatorics | <p>The <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="noreferrer"><strong>Traveling Salesperson Problem</strong></a> is originally a mathematics/computer science optimization problem in which the goal is to determine a path to take between a group of cities such that you return to the starting city after visiting each city exactly once and the total distance (longitude/latitude) traveled is minimized. For <span class="math-container">$n$</span> cities, there are <span class="math-container">$(n-1)!/2$</span> unique paths - and we can see that as <span class="math-container">$n$</span> increases, the number of paths to consider becomes enormous in size. For even a small number of cities (e.g. 15 cities), modern computers are unable to solve this problem using "brute force" (i.e. calculate all possible routes and return the shortest route) - as a result, sophisticated optimization algorithms and approximate methods are used to tackle this problem in real life.</p>
<p><strong>I was trying to explain this problem to my friend, and I couldn't think of an example which shows why the Travelling Salesperson Problem is difficult!</strong> Off the top of my head, I tried to give an example where someone is required to find the shortest route between Boston, Chicago and Los Angeles - but then I realized that the shortest path in this case is pretty obvious! (i.e. Move in the general East to West direction).</p>
<p>Real world applications of the Travelling Salesperson Problem tend to have an additional layer of complexity as they generally have a "cost" associated between pairs of cities - and this cost doesn't have to be symmetric. For example, buses might be scheduled more frequently to go from a small city to a big city, but scheduled less frequently to return from the big city to the small city - thus, we might be able to associate a "cost" with each direction. Or even a simpler example, you might have to drive "uphill" to go from City A to City B, but drive "downhill" to go from City B to City A - thus there is likely a greater cost to go from City A to City B. Many times, these "costs" are not fully known and have to be approximated with some statistical model. However, all this can become a bit complicated to explain to someone who isn't familiar with all these terms.</p>
<p>But I am still looking for an example to explain to my friend - <strong>can someone please help me think of an obvious and simple example of the Travelling Salesperson Problem where it becomes evidently clear that the choice of the shortest path is not obvious?</strong> Every simple example I try to think of tends to be very obvious (e.g. Manhattan, Newark, Nashville) - I don't want to overwhelm my friend with an example of 1000 cities across the USA : just something simple with 4-5 cities in which it is not immediately clear (and perhaps even counterintuitive) which path should be taken?</p>
<p>I tried to show an example using the R programming language in which there are 10 (random) points on a grid - starting from the lowest point, the path taken involves choosing the nearest point from each current point:</p>
<pre><code>library(ggplot2)
set.seed(123)
x_cor = rnorm(5,100,100)
y_cor = rnorm(5,100,100)
my_data = data.frame(x_cor,y_cor)
my_data
#>       x_cor     y_cor
#> 1  43.95244 271.50650
#> 2  76.98225 146.09162
#> 3 255.87083 -26.50612
#> 4 107.05084  31.31471
#> 5 112.92877  55.43380
ggplot(my_data, aes(x=x_cor, y=y_cor)) + geom_point() + ggtitle("Travelling Salesperson Example")
</code></pre>
<p><a href="https://i.sstatic.net/2qXbe.png" rel="noreferrer"><img src="https://i.sstatic.net/2qXbe.png" alt="enter image description here" /></a></p>
<p>But even in this example, the shortest path looks "obvious" (imagine you are required to start this problem from the bottom most right point):</p>
<p><a href="https://i.sstatic.net/5mRbe.png" rel="noreferrer"><img src="https://i.sstatic.net/5mRbe.png" alt="enter image description here" /></a></p>
<p>I tried with more points:</p>
<pre><code>set.seed(123)
x_cor = rnorm(20,100,100)
y_cor = rnorm(20,100,100)
my_data = data.frame(x_cor,y_cor)
ggplot(my_data, aes(x = x_cor, y = y_cor)) +
geom_path() +
geom_point(size = 2)
</code></pre>
<p><a href="https://i.sstatic.net/sypnl.png" rel="noreferrer"><img src="https://i.sstatic.net/sypnl.png" alt="enter image description here" /></a></p>
<p>But my friend still argues for the "find the nearest point from the current point and repeat" approach (imagine you are required to start this problem from the bottom-most right point):</p>
<p><a href="https://i.sstatic.net/crZ6B.png" rel="noreferrer"><img src="https://i.sstatic.net/crZ6B.png" alt="enter image description here" /></a></p>
<p><strong>How do I convince my friend that what he is doing corresponds to a "Greedy Search" that is only returning a "local minimum" and it's very likely that a shorter path exists?</strong> (not even the "shortest path" - just a "shorter path" than the "Greedy Search")</p>
<p>I tried to illustrate this example by linking him to the Wikipedia Page on Greedy Search that shows why Greedy Search can often miss the true minimum : <a href="https://en.wikipedia.org/wiki/Greedy_algorithm#/media/File:Greedy-search-path-example.gif" rel="noreferrer">https://en.wikipedia.org/wiki/Greedy_algorithm#/media/File:Greedy-search-path-example.gif</a></p>
<ul>
<li><p>Could someone help me think of an example to show my friend in which choosing the immediate nearest point from where you are, does not result in the total shortest path? (e.g. some example that appears counterintuitive, i.e. if you choose a path always based on the nearest point from your current position, you can clearly see that this is not the optimal path)</p>
</li>
<li><p>Is there a mathematical proof that shows that the "Greedy Search" algorithm in Travelling Salesperson has the possibility of sometimes missing the true optimal path?</p>
</li>
</ul>
<p>Thanks!</p>
| <p>Here's a simple explicit example in which the greedy algorithm always fails: this arrangement of cities (with Euclidean distances):</p>
<p><a href="https://i.sstatic.net/OS8mv.png" rel="noreferrer"><img src="https://i.sstatic.net/OS8mv.png" alt="(0,1), (10, 0), (0,-1), (-10,0)" /></a></p>
<p>If you apply the greedy algorithm on this graph, it'll look like the following (or a flipped version):</p>
<p><a href="https://i.sstatic.net/Jqvpg.png" rel="noreferrer"><img src="https://i.sstatic.net/Jqvpg.png" alt="" /></a></p>
<p>This is true regardless of the starting point. This means the greedy algorithm gives us a path with a total distance traveled of <span class="math-container">$20 + 2 + 2\sqrt{101} \approx 42.1$</span></p>
<p>Clearly, this isn't the optimal solution though. Just by eyeballing it, you can see that this is the best path:</p>
<p><a href="https://i.sstatic.net/QvFFs.png" rel="noreferrer"><img src="https://i.sstatic.net/QvFFs.png" alt="" /></a></p>
<p>It has a total length of <span class="math-container">$4\sqrt{101} \approx 40.2$</span>, which is better than the greedy algorithm.</p>
<p>You can explain to your friend that the reason the greedy algorithm fails is that it doesn't look ahead. It sees the shortest edge (in this case, the vertical one) and takes it. However, doing so may later force it to take a much longer path, leaving it worse off in the long run. While it's simple to see in this example, detecting every case where this happens is a lot harder.</p>
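<p>If your friend wants to run it, here is a short script reproducing both numbers on this example (a sketch; the greedy tie-break is arbitrary, which by symmetry doesn't matter here):</p>
<pre><code>import math
from itertools import permutations

cities = [(0, 1), (10, 0), (0, -1), (-10, 0)]
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # closed tour: return to the starting city at the end
    return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

def greedy(start):
    # nearest-neighbour heuristic from a fixed starting city
    tour, left = [start], set(cities) - {start}
    while left:
        nxt = min(left, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        left.remove(nxt)
    return tour

best = min(permutations(cities), key=tour_length)
print(tour_length(greedy(cities[0])))   # 42.0997... (greedy: 22 + 2*sqrt(101))
print(tour_length(best))                # 40.1995... (optimal: 4*sqrt(101))
</code></pre>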
| <p>It seems to me that you are looking for an intuitive insight, and not a counterexample or a formal proof. I was wondering the same thing many years ago and the following allowed me to achieve an intuitive insight:</p>
<p>I experimented with a TSP solver program called Concorde. That program allows you to place points and it can also drop points randomly. It will then show the solving process as it happens.</p>
<p>You can then see live how the currently known best path is evolving. Very different paths will be shown, each one a little bit better than before.</p>
<p>This showed me that vastly different paths can lead to incremental tiny improvements. And this showed me how "non-convex" the solution space is. You can't just use a hill-climbing algorithm to zero in on the best solution. The problem contains decision points leading to vastly different parts of the search space.</p>
<p>This is an entirely non-formal way of looking at this, but it helped me a lot.</p>
|
geometry | <p>Circular manholes are great because the cover can not fall down the hole. If the hole were square, the heavy metal cover could fall down the hole and kill some man working down there.</p>
<p>Circular manhole: <img src="https://i.sstatic.net/JQweE.png" alt="Circle" /></p>
<p><em>Can manholes be made in shapes other than circles that prevent the cover from being able to fall down its own hole?</em></p>
<p><strong>Semi rigid math formulation:</strong></p>
<p>Let us say that we have an infinite mathematical 2D plane in 3D space. In this plane is a hole of some shape. Furthermore we have a flat rigid 2D figure positioned on one side of the plane. This figure has the same shape as the hole in the plane, but is infinitesimally larger.
Is it possible to find a shape for which there is no path, twisting and turning the figure, that brings the figure through the hole?</p>
<p>Here is one such shape (<em>only the black is the shape</em>):<img src="https://i.sstatic.net/CVMTi.png" alt="Five side" /></p>
<p>But if one puts the restriction on the shape that it needs to be without holes (topologically equivalent to a circle in 2D), then I cannot answer the question!?</p>
<p><em>Edit:</em></p>
<p>Because of the huge number of comments and answers not about math, I feel the need to specify that:</p>
<p><strong>I am not interested in designing manholes. I am interested in the math inspired by the manhole problem.</strong></p>
| <p>Any manhole cover bounded by a <a href="http://en.wikipedia.org/wiki/Curve_of_constant_width">curve of constant width</a> will not fall through. The circle is the simplest such curve.</p>
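<p>A quick numerical illustration for the simplest non-circular example, the Reuleaux triangle (a sketch; the standard construction is three $60°$ arcs, each centred at a vertex of an equilateral triangle and spanning the opposite side):</p>
<pre><code>import numpy as np

# Reuleaux triangle of width 1, built from three circular arcs
A = np.array([(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)])
arcs = []
for centre, start_deg in zip(A, (0, 120, 240)):
    th = np.radians(np.linspace(start_deg, start_deg + 60, 500))
    arcs.append(centre + np.column_stack([np.cos(th), np.sin(th)]))
pts = np.vstack(arcs)                     # dense sample of the boundary

for deg in range(0, 180, 30):
    u = np.array([np.cos(np.radians(deg)), np.sin(np.radians(deg))])
    proj = pts @ u
    print(deg, round(proj.max() - proj.min(), 5))   # width ~ 1.0 in every direction
</code></pre>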
| <p>A manhole cover can't fall into the hole if the minimum width of the cover is greater than the maximum width of the hole.</p>
<p>For example, consider a one-meter square cover over a square hole slightly smaller than $1\over\sqrt 2$ meter on a side. The diagonal of the hole is slightly less than 1 meter, so the cover won't fit into it.</p>
<p>The point is that manhole covers <em>aren't</em> the same size as the manholes they cover; they have flanged edges.</p>
<p><strong>EDIT :</strong></p>
<p>Oops, I missed this sentence in the question:</p>
<blockquote>
<p>This figure has the same shape as the hole in the plane, but infinitesimal larger.</p>
</blockquote>
<p>so my answer, though it does have real-world applications, doesn't really answer the question as stated.</p>
|
combinatorics | <p>Consider a population of nodes arranged in a triangular configuration as shown in the figure below, where each level $k$ has $k$ nodes. Each node, except the ones in the last level, is a parent node to two child nodes. Each node in levels $2$ and below has $1$ parent node if it is at the edge, and $2$ parent nodes otherwise.</p>
<p>The single node in level $1$ is infected (red). With some probability $p_0$, it does not infect either of its child nodes in level $2$. With some probability $p_1$, it infects exactly one of its child nodes, with equal probability. With the remaining probability $p_2=1-p_0-p_1$, it infects both of its child nodes.</p>
<p>Each infected node in level $2$ then acts in a similar manner on its two child nodes in level $3$, and so on down the levels. It makes <em>no</em> difference whether a node is infected by one or two parent nodes - it's still just infected.</p>
<p>The figure below shows one possibility of how the disease may spread up to level $6$.</p>
<p><a href="https://i.sstatic.net/NMxL0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NMxL0.png" alt="One possibile spread of the disease up to level $6$."></a></p>
<p>The question is: <strong>what is the expected number of infected nodes at level $k$?</strong></p>
<p>Simulations suggest that this is (at least asymptotically) linear in $k$, i.e.,</p>
<p>$$
\mathbb{E}(\text{number of infected nodes in level } k) = \alpha k
$$</p>
<p>where $\alpha = f(p_0, p_1,p_2)$.</p>
<hr>
<p>This question arises out of a practical scenario in some research I'm doing. Unfortunately, the mathematics involved is beyond my current knowledge, so I'm kindly asking for your help. Pointers to relevant references are also appreciated. </p>
<p>I asked a <a href="https://math.stackexchange.com/questions/1500829/a-disease-spreading-through-a-triangular-population">different version</a> of this question some time ago, which did not have the possibility of a node not infecting either of its child nodes. It now turns out that in the system I'm looking at, the probability of this happening is not negligble.</p>
| <p><strong>Note:</strong> Of course, a most interesting approach would be to derive a generating function describing the probabilities of infected nodes at level $k$ with respect to the probabilities $p_0,p_1$ and $p_2$ and to derive an asymptotic estimation from it.</p>
<p>This seems to be a rather tough job, currently out of reach for me and so this is a much more humble approach trying to obtain at least for small values of $k$ the expectation value $E(k)$ of the number of infected nodes at this level.</p>
<p>Here I give the expectation value $E(k)$ for the number of infected nodes at level $k=1,2$ and propose an algorithm to derive the expectation values for greater values of $k$. Maybe someone with computer based assistance could use it to provide $E(k)$ for some larger values of $k$.</p>
<blockquote>
<p><strong>The family of graphs of infected nodes:</strong></p>
<p>We focus at graphs containing infected nodes only which can be derived from one infected root node. The idea is to iteratively find at step $k$ a manageable representation of <em>all</em> graphs of this kind with diameter equal to the level $k$ based upon the graphs from step $k-1$.</p>
<p><strong>Expectation value $E(k=1)$</strong></p>
<p>We encode a left branch by $x$, a right branch by $y$ and a paired branch by $tx+ty$, which is marked with $t$ in order to differentiate it from single branching. No branching at all is encoded by $1$. So, starting from a root node denoted with $1$ we obtain four different graphs:
\begin{array}{cccc}
1\qquad&x\qquad&y\qquad&tx+ty\\
p_0\qquad&\frac{1}{2}p_1\qquad&\frac{1}{2}p_1\qquad& p_2
\end{array}</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>Three of the graphs $x,y,tx+ty$ have diameter equal to $1$ which means they have <em>leaves</em> at level $k=1$.</p></li>
<li><p>These three graphs are of interest for further generations of graphs with higher level. </p></li>
<li><p>Let the polynomial $G(x,y,t)$ represent a graph. The number of distinct monomials $x^my^n$ with $m+n=k$ occurring in $G(x,y,t)$ (each counted once, regardless of the accompanying power of $t$) gives the number of nodes at level $k$.</p></li>
<li><p>A node without successor nodes is weighted with $p_0$. We associate the weight $\frac{1}{2}p_1$ to $x$ resp. $y$ if they are not marked and the weight $p_2$ to $tx+ty$. </p></li>
</ul>
<blockquote>
<p><strong>Description:</strong> The first generation of graphs corresponding to $k=1$ is obtained from a root node $1$ by <em>multiplication</em> with $(1|x|y|tx+ty)$ whereby the bar $|$ denotes alternatives.
\begin{align*}
1(1|x|y|tx+ty)\qquad\rightarrow\qquad 1,x,y,tx+ty
\end{align*}</p>
<p>We obtain
\begin{array}{c|c|c}
&&\text{nodes at level }\\
\text{graph}&\text{prob}&k=1\\
\hline
1&p_0&0\\
x&\frac{1}{2}p_1&1\\
y&\frac{1}{2}p_1&1\\
tx+ty&p_2&2\\
\end{array}</p>
<p>We conclude
\begin{align*}
E(1)&=0\cdot p_0 +1\cdot\frac{1}{2}p_1 + 1\cdot\frac{1}{2}p_1+2\cdot p_2\\
&=p_1+2p_2
\end{align*}</p>
</blockquote>
<p>$$ $$</p>
<blockquote>
<p><strong>Expectation value $E(k=2)$</strong></p>
<p>For the next step $k=2$ we consider all graphs from the step before having diameter equal to $k-1=1$.</p>
<p>These are $x,y,tx+ty$. Each of them generates graphs for the next generation by appending at nodes at level $k-1$ the subgraphs $1|x|y|tx+ty$. If a graph has $n$ nodes at level $k-1$ we get $4^n$ graphs to analyze for the next generation.</p>
<p>But, we will see that due to symmetry we can identify graphs and be able to reduce roughly by a factor two the variety of different graphs.</p>
</blockquote>
<p><strong>Intermezzo: Symmetry</strong></p>
<p>Note that the contribution of graphs which are symmetrical with respect to $x$ and $y$ is the same; they occur with the same probability.</p>
<p>Instead of considering the three graphs
\begin{align*}
x,y,tx+ty
\end{align*}
we can identify $x$ and $y$. We arrange the family $\mathcal{F}_1=\{x,y,tx+ty\}$ of graphs of level $k=1$ in two equivalence classes
\begin{align*}
[x],[tx+ty]
\end{align*}
and describe the family more compactly by their equivalence classes together with a multiplication factor giving the number of elements in that class. In order to uniquely describe the different equivalence classes it is convenient to also add the probability of occurrence of a representative in the description. We use a semicolon to separate the probability weight from the polynomial representation of the graph.
\begin{align*}
[\mathcal{F}_1]=\{2[x;\frac{1}{2}p_1],[tx+ty;p_2]\}
\end{align*}</p>
<blockquote>
<p>The second generation of graphs corresponding to $k=2$ is obtained from $[\mathcal{F}_1]$ via selecting a representative from each equivalence class; each node at level $k-1=1$ is multiplied by $(1|x|y|tx+ty)$. The probability of this graph has to be multiplied accordingly with
$(p_0|\frac{1}{2}p_1|\frac{1}{2}p_1|p_2)$. We obtain this way $4+4^2=20$ graphs</p>
<p>We calculate from the representative $x$ of $2[x;\frac{1}{2}p_1]$</p>
<p>\begin{align*}
x(1|x|y|tx+ty) &\rightarrow (x|x^2|xy|tx^2+txy)\\
\frac{1}{2}p_1\left(p_0\left|\frac{1}{2}p_1\right.\left|\frac{1}{2}p_1\right.\left|\phantom{\frac{1}{2}}p_2\right.\right)
&\rightarrow \left(\frac{1}{2}p_0p_1\left|\frac{1}{4}p_1^2\right.\left|\frac{1}{4}p_1^2\right.\left|\frac{1}{2}p_1p_2\right.\right)\\
\end{align*}</p>
<p>We obtain from $2[x;\frac{1}{2}p_1]\in[\mathcal{F}_1]$ the first part of equivalence classes of $[\mathcal{F}_2]$ with multiplicity denoting the number of graphs within an equivalence class. We list the representative and the graphs for each class.
\begin{array}{c|l|c|c|l}
&&&\text{nodes at}\\
\text{mult}&\text{repr}&\text{prob}&\text{level }2&graphs\\
\hline
2&x&\frac{1}{2}p_0p_1&0&x,y\\
2&x^2&\frac{1}{4}p_1^2&1&x^2,y^2\\
2&xy&\frac{1}{4}p_1^2&1&xy,yx\\
2&tx^2+txy&\frac{1}{2}p_1p_2&2&tx^2+txy,txy+ty^2\tag{1}\\
\end{array}</p>
<p>We calculate from the representative $tx+ty$ of $[tx+ty;p_2]$ using a somewhat informal notation</p>
<p>\begin{align*}
tx&(1|x|y|tx+ty)+ty(1|x|y|tx+ty)\\
&\rightarrow (tx|tx^2|txy|t^2x^2+t^2xy)+(ty|tyx|ty^2|t^2xy+t^2y^2)\tag{2}\\
\end{align*}</p>
</blockquote>
<p>We arrange the resulting graphs in groups and associate the probabilities accordingly. The graphs are created by adding a left alternative from (2) to a right alternative from (2). The probabilities are the product of $p_2$ from $[tx+ty;p_2]$ and the corresponding probabilities of the left and right selected alternatives.</p>
<blockquote>
<p>\begin{array}{ll}
tx+ty\qquad&\qquad p_2p_0p_0\\
tx+tyx\qquad&\qquad p_2p_0\frac{1}{2}p_1\\
tx+ty^2\qquad&\qquad p_2p_0\frac{1}{2}p_1\\
tx+t^2xy+t^2y^2\qquad&\qquad p_2p_0p_2\\
\\
tx^2+ty\qquad&\qquad p_2\frac{1}{2}p_1p_0\\
tx^2+tyx\qquad&\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\
tx^2+ty^2\qquad&\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\
tx^2+t^2xy+t^2y^2\qquad&\qquad p_2\frac{1}{2}p_1p_2\\
\\
txy+ty\qquad&\qquad p_2\frac{1}{2}p_1p_0\\
txy+tyx\qquad&\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\
txy+ty^2\qquad&\qquad p_2\frac{1}{2}p_1\frac{1}{2}p_1\\
txy+t^2xy+t^2y^2\qquad&\qquad p_2\frac{1}{2}p_1p_2\\
\\
t^2x^2+t^2xy+ty\qquad&\qquad p_2p_2p_0\\
t^2x^2+t^2xy+tyx\qquad&\qquad p_2p_2\frac{1}{2}p_1\\
t^2x^2+t^2xy+ty^2\qquad&\qquad p_2p_2\frac{1}{2}p_1\\
t^2x^2+t^2xy+t^2xy+t^2y^2\qquad&\qquad p_2p_2p_2\tag{3}\\
\end{array}</p>
</blockquote>
<p>A few words about terms like $txxytyxy$: this term can be replaced with $t^2x^3y^3$. In fact <em>any</em> walk in a graph containing $m$ $x$'s, $n$ $y$'s and $r$ $t$'s (with $r\leq m+n$) can be normalised to
\begin{align*}
t^rx^my^n
\end{align*}
If we map this walk to a lattice path in $\mathbb{Z}^2$ with the root at $(0,0)$ and with $x$ moving one step horizontally and $y$ moving one step vertically we always describe a path from $(0,0)$ to $(m,n)$.
We could represent each graph as union of lattice paths.</p>
<blockquote>
<p>We now create a table as we did in (1). We identify graphs in (3) which belong to the same equivalence class.</p>
<p>\begin{array}{c|l|c|c|l}
&&&\text{nodes at}\\
\text{mult}&\text{repr}&\text{prob}&\text{level }2&graphs\\
\hline
1&tx+ty&p_0^2p_2&0&tx+ty\\
2&tx+txy&\frac{1}{2}p_0p_1p_2&1&tx+txy,txy+ty\\
2&tx+ty^2&\frac{1}{2}p_0p_1p_2&1&tx+ty^2,tx^2+ty\\
2&tx+t^2xy+t^2y^2&p_0p_2^2&2&tx+t^2xy+t^2y^2,\\
&&&&t^2x^2+t^2xy+ty\\
2&tx^2+txy&\frac{1}{4}p_1^2p_2&2&tx^2+txy,txy+ty^2\\
1&tx^2+ty^2&\frac{1}{4}p_1^2p_2&2&tx^2+ty^2\\
2&tx^2+t^2xy+t^2y^2&\frac{1}{2}p_1p_2^2&2&tx^2+t^2xy+t^2y^2,\\
&&&&t^2x^2+t^2xy+ty^2\\
1&2txy&\frac{1}{4}p_1^2p_2&1&txy+txy\\
2&txy+t^2xy+t^2y^2&\frac{1}{2}p_1p_2^2&3&txy+t^2xy+t^2y^2,\\
&&&&t^2x^2+t^2xy+txy\\
1&t^2x^2+2t^2xy+t^2y^2&p_2^3&3&t^2x^2+2t^2xy+t^2y^2\tag{4}
\end{array}</p>
<p>Combining the classes from (1) and (4) gives $[\mathcal{F}_2]$.</p>
<p>We calculate the expectation value $E(2)$ from the tables in (1) and (4)
\begin{align*}
E(2)&=0\cdot p_0p_1 +1\cdot\frac{1}{2}p_1^2 + 1\cdot\frac{1}{2}p_1^2+2\cdot p_1p_2\\
&\qquad+0\cdot p_0^2p_2+1\cdot p_0p_1p_2+1\cdot p_0p_1p_2+2\cdot 2p_0p_2^2\\
&\qquad+2\cdot\frac{1}{2}p_1^2p_2+2\cdot\frac{1}{4}p_1^2p_2+2\cdot p_1p_2^2+1\cdot\frac{1}{4}p_1^2p_2\\
&\qquad+3\cdot p_1p_2^2+3\cdot p_2^3\\
&=p_1^2+2p_1p_2+2p_0p_1p_2+4p_0p_2^2+\frac{7}{4}p_1^2p_2+5p_1p_2^2+3p_2^3
\end{align*}</p>
</blockquote>
<p><strong>Algorithm for $E(k)$</strong></p>
<p>Here is a short summary of how $E(k)$ can be derived when $[\mathcal{F}_{k-1}]$, the family of equivalence classes of graphs with diameter $k-1$, is already known (a numerical cross-check follows the list). </p>
<ul>
<li><p>Take a representative $G(x,y,t)$ from each equivalence class of $[\mathcal{F}_{k-1}]$</p></li>
<li><p>Multiply each leaf which is at level $k-1$ with $(1|x|y|tx+ty)$</p></li>
<li><p>Multiply the probability of the representative with
$(p_0|\frac{1}{2}p_1|\frac{1}{2}p_1|p_2)$ accordingly</p></li>
<li><p>Use $xy$-symmetry of graphs and normalization $t^rx^my^n$ to find new equivalence classes as we did for $k=2$ above. <em>Attention</em>: There may be equivalence classes with <strong>equal</strong> polynomial representatives but with <strong>different</strong> probabilities.</p></li>
<li><p>The number of nodes at level $k$ of a graph $G(x,y,t)$ is the number of terms
\begin{align*}
t^rx^my^n\qquad\qquad m,n\geq 0, 0\leq r\leq m+n=k
\end{align*}</p></li>
<li><p>Build $[\mathcal{F}_k]$ by collecting all equivalence classes respecting the multiplicity (number of graphs) in an equivalence class</p></li>
<li><p>Calculate $E(k)$</p></li>
</ul>
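<p>The polynomials above can be cross-checked numerically. Here is a minimal Monte Carlo sketch in R; it assumes the model as described above: level $k$ has $k+1$ nodes, each infected node independently passes the infection to its left child, right child, both, or neither with probabilities $\frac{1}{2}p_1,\frac{1}{2}p_1,p_2,p_0$, and a child is infected when at least one of its parents passes the infection on.</p>
<pre><code># Monte Carlo sketch (assumed model: see the paragraph above).
# Estimates E(k), the expected number of infected nodes at level k.
next_level <- function(infected, p1, p2) {
  k <- length(infected)
  child <- logical(k + 1)
  for (i in seq_len(k)) {
    if (infected[i]) {
      u <- runif(1)
      if (u < p2) {                     # both children
        child[i] <- TRUE; child[i + 1] <- TRUE
      } else if (u < p2 + p1 / 2) {     # left child only
        child[i] <- TRUE
      } else if (u < p2 + p1) {         # right child only
        child[i + 1] <- TRUE
      }                                 # else: neither child (prob p0)
    }
  }
  child
}
E_hat <- function(k, p1, p2, runs = 20000) {
  mean(replicate(runs, {
    inf <- TRUE                         # the infected root
    for (lvl in 1:k) inf <- next_level(inf, p1, p2)
    sum(inf)
  }))
}
# E_hat(1, 1/2, 1/2) should be near p_1 + 2p_2 = 3/2, and
# E_hat(2, 1/2, 1/2) near 63/32 = 1.96875 (the E(2) polynomial with p_0 = 0).
</code></pre>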
| <p>According to the discussion in the comments of the question, we can conclude that infections of the children of the same node are not independent. But we don't know whether that dependence is horizontal or vertical. Meaning, maybe the infection of one node has influence on its sibling, or maybe the parent statistically has a higher or lower influence on its children, so sometimes it infects both, one or none of them</p>
<p>Since disease spreads vertically only, I will consider the second case</p>
<p>Let parents be named B,L,R or N if they infect both, left, right or none of the children, where $p(L)=p(R)=\frac{p_1}2$. We can draw all possible trees and count their probabilities. For instance
$$B$$$$B\_\_R$$$$N\_\_L\_\_R$$$$0\_\_L\_\_0\_\_B$$
$$0\_\_\_1\_\_0\_\_1\_\_\_1$$
has probability $B^3R^2L^2N$ (0-not infected, 1-infected)</p>
<p>Let's discuss all the graphs that end with $01011$; the previous row has to be one of $\{0L0B,R00B,...\}=$
$(\{R\}\times\{0,L,N\}\bigcup \{0,N\}\times\{L\})\times(\{R\}\times\{R\}\bigcup\{0,R,N\}\times\{B\})$</p>
<p>The probability that $01011$ is preceded by $0L0B$ is equal to the probability that $0101$ happens, times $p(L)p(B)$</p>
<p>So $p(01011)=p(1001)p(R)p(B)+p(0101)p(L)p(B)
+p(1011)p(R)(p(R)^2+p(R)p(B)+p(N)p(B))+p(0111)p(L)(p(R)^2+p(R)p(B)+p(N)p(B))
+p(1101)(p(R)p(L)+p(N)p(R)+p(N)p(L))p(B)
+p(1111)(p(R)p(L)+p(N)p(R)+p(N)p(L))(p(R)^2+p(R)p(B)+p(N)p(B))$</p>
<p>Also $p(1011)=p(1101)$ as a mirror pair</p>
<p>So, it remains to find a way of determining the preceding combinations for a pattern like 011011110... It is made of several groups of consecutive $1$s which are independent, so they can be attached to each other by a Cartesian product, also multiplied by $\{0,N\}$ for each pair of consecutive $0$s in between. </p>
<p>Let $f(n)$ be the set of all combinations that precede $n$ consecutive $1$s, $f(1)=\{RN,R0,RL,0L,NL\}$, and let's define $g(n)$ as the set of all elements from $f(n)$ that start with $R$ or $0$, but with the leading $R$ substituted with $B$ and the leading $0$ substituted with $L$, $g(1)=\{BN,B0,BL,LL\}$. Then $f(n+1)=R\times f(n)\bigcup \{R,0,N\}\times g(n)$</p>
<p>Exceptions are the groups at the end or start of the level, where you pick only combinations with a leading or ending zero and remove that zero. Let's define the sets for leading groups as $lf(n)$, the sets for ending groups as $ef(n)$, and the groups that are both leading and ending as $lef(n)$</p>
<p>$f(1)=\{RN,R0,RL,0L,NL\}$<br>
$g(1)=\{BN,B0,BL,LL\}$<br>
$lf(1)=\{L\}$, $ef(1)=\{R\}$<br>
$f(2)=\{RRN,RR0,R0L,RNL,RBN,RB0,RBL,RLL,NBN,NB0,NBL,NLL,0BN,0B0,0BL,0LL\}$<br>
$g(2)=\{BRN,BR0,B0L,BNL,BBN,BB0,BBL,BLL,LBN,LB0,LBL,LLL\}$<br>
$lf(2)=\{BN,B0,BL,LL\}$, $ef(2)=\{RR,RB,NB,0B\}$, $lef(2)=\{B\}$<br>
$f(3)=\{RRRN,RRR0,RR0L,RRNL,RRBN,RRB0,RRBL,RRLL,RNBN,RNB0,RNBL,RNLL,R0BN,R0B0,R0BL,R0LL,RBRN,RBR0,RB0L,RBNL,RBBN,RBB0,RBBL,RBLL,RLBN,RLB0,RLBL,RLLL,NBRN,NBR0,NB0L,NBNL,NBBN,NBB0,NBBL,NBLL,NLBN,NLB0,NLBL,NLLL,0BRN,0BR0,0B0L,0BNL,0BBN,0BB0,0BBL,0BLL,0LBN,0LB0,0LBL,0LLL\}$<br>
$lf(3)=\{BRN,BR0,B0L,BNL,BBN,BB0,BBL,BLL,LBN,LB0,LBL,LLL\}$, $ef(3)=\{RRR,RRB,RNB,R0B,RBR,RBB,RLB,NBR,NBB,NLB,0BR,0BB,0LB\}$, $lef(3)=\{BR,BB,LB\}$<br></p>
<p>Let's calculate p(111),p(110),p(101),p(100),p(010),p(000), noting that $p(L)=p(10)=\frac {p_1} 2=p(01)=p(R)$, $p(N)=p(00)=p_0$, $p(B)=p(11)=p_2$</p>
<p>$prec(111)=lef(3)=\{BR,BB,LB\}$ is the set of rows that can precede $111$, so $p(111)=p(11)p(B)(p(B)+p(R)+p(L))$; from here on I will write just $B$ instead of $p(B)$<br>
$prec(110)=lf(2)$,<br>
$p(110)=p(11)(BN+BL+LL)+p(10)B=B(BN+BL+LL+L)$<br>
$prec(101)=lf(1)\times ef(1)=\{LR\}$,<br>
$p(101)=p(11)LR=BLR$<br>
$prec(100)=lf(1)\times \{0,N\}=\{LN,L0\}$,<br>
$p(100)=p(11)LN+p(10)L=BLN+LL$<br>
$prec(010)=f(1)$,<br>
$p(010)=p(11)(RN+RL+NL)+p(10)R+p(01)L=B(RN+RL+NL)+2RL$</p>
<p>$p(000)$ isn't needed and $p(100)=p(001)$, $p(110)=p(011)$</p>
<p>\begin{align*}
E(2)&=3p(111)+2\big(p(101)+2p(110)\big)+2p(100)+p(010)\\
&=3p(111)+2p(101)+4p(110)+2p(100)+p(010)\\
&=3p_2^2(p_2+p_1)+\tfrac{1}{2}p_2p_1^2+4p_2\left(p_2p_0+\tfrac{1}{2}p_2p_1+\tfrac{1}{4}p_1^2+\tfrac{1}{2}p_1\right)\\
&\qquad+p_0p_1p_2+\tfrac{1}{2}p_1^2+p_0p_1p_2+\tfrac{1}{4}p_2p_1^2+\tfrac{1}{2}p_1^2\\
&=3p_2^3+5p_2^2p_1+\frac{7}{4}p_1^2p_2+4p_0p_2^2+2p_1p_2+2p_0p_1p_2+p_1^2
\end{align*}</p>
<p>Now that you have the probabilities on the 3rd level, you can "easily" calculate them on the 4th level</p>
<p>$prec(1111)=lef(4)$<br>
$prec(1110)=lf(3)$,<br>
$prec(1101)=lf(2)\times ef(1)$,<br>
$prec(1100)=lf(2)\times \{0,N\}$,<br>
$prec(1010)=lf(1)\times f(1)$,<br>
$prec(1001)=lf(1)\times \{0,N\}\times ef(1)$,<br>
$prec(0110)=f(2)$,<br>
$prec(1000)=lf(1)\times \{0,N\}\times \{0,N\}$,<br>
$prec(0100)=f(1)\times \{0,N\}$,<br></p>
|
matrices | <p>I am having a difficult time with the following question. Any help will be much appreciated.</p>
<blockquote>
<p>Let $A$ be an $n×n$ real matrix such that $A^T = A$. We call such matrices “symmetric.” Prove that the eigenvalues of a real symmetric matrix are real (i.e. if $\lambda$ is an eigenvalue of $A$, show that $\lambda = \overline{\lambda}$ )</p>
</blockquote>
| <p>Let <span class="math-container">$(\lambda,v)$</span> be any eigenpair of <span class="math-container">$A$</span>. Since <span class="math-container">$A$</span> is real and symmetric, <span class="math-container">$A=A^T=A^\ast$</span>, and hence
<span class="math-container">$$\langle Av,Av\rangle=v^*A^*Av=v^\ast A^2v=v^*(A^2v)=\lambda^2||v||^2.$$</span></p>
<p>Therefore <span class="math-container">$\lambda^2=\frac{\langle Av,Av\rangle}{||v||^2}$</span> is a real nonnegative number. Hence <span class="math-container">$\lambda$</span> must be real.</p>
| <p>Let <span class="math-container">$Ax=\lambda x$</span> with <span class="math-container">$x\ne 0$</span>, with <span class="math-container">$\lambda\in\mathbb{C}$</span>, then
<span class="math-container">\begin{align}
\lambda \bar x^T x &= \bar x^T(\lambda x)\\
&=\bar x^T A x \\
&=(A^T \bar{x})^T x \\
&=(A \bar x)^T x \\
&=(\bar A \bar x)^T x \\
&=(\bar\lambda\bar x)^T x\\
&=\bar \lambda \bar x^T x.\\
\end{align}</span>
Because <span class="math-container">$x\ne 0$</span>, then <span class="math-container">$\bar{x}^T x\ne 0$</span> and <span class="math-container">$\lambda=\bar \lambda$</span>.</p>
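<p>As a concrete illustration, take <span class="math-container">$A=\begin{pmatrix}1&2\\2&1\end{pmatrix}$</span>: the characteristic polynomial is <span class="math-container">$(1-\lambda)^2-4$</span>, so <span class="math-container">$\lambda=3$</span> or <span class="math-container">$\lambda=-1$</span>, both real, as the arguments above guarantee.</p>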
|
linear-algebra | <p>I was wondering if "dot product" is technically the term used when discussing whether the product of $2$ vectors is equal to $0$. And would anyone agree that "inner product" is the term used when discussing whether the integral of the product of $2$ functions is equal to $0$? Or is there no difference at all between a dot product and an inner product?</p>
| <p>In my experience, <em>the dot product</em> refers to the product $\sum a_ib_i$ for two vectors $a,b\in \Bbb R^n$, and that "inner product" refers to a more general class of things. (I should also note that the real dot product is extended to a complex dot product using the complex conjugate: $\sum a_i\overline{b}_i)$.</p>
<p>The definition of "inner product" that I'm used to is a type of biadditive form from $V\times V\to F$ where $V$ is an $F$ vector space.</p>
<p>In the context of $\Bbb R$ vector spaces, the biadditive form is usually taken to be symmetric and $\Bbb R$ linear in both coordinates, and in the context of $\Bbb C$ vector spaces, it is taken to be Hermitian-symmetric (that is, reversing the order of the product results in the complex conjugate) and $\Bbb C$ linear in the first coordinate. </p>
<p>Inner products in general can be defined even on infinite dimensional vector spaces. The integral example is a good example of that.</p>
<p>The real dot product is just a special case of an inner product. In fact it's even positive definite, but general inner products need not be so. The modified dot product for complex spaces also has this positive definite property, and has the Hermitian symmetry I mentioned above.</p>
<p>Inner products are generalized by <a href="http://en.wikipedia.org/wiki/Inner_product#Generalizations">bilinear forms</a>. I think I've seen some authors use "inner product" to apply to these as well, but a lot of the time I know authors stick to $\Bbb R$ and $\Bbb C$ and require positive definiteness as an axiom. General bilinear forms allow for indefinite forms and even degenerate vectors (ones with "length zero"). The naive version of the dot product $\sum a_ib_i$ still works over any field $\Bbb F$. Another thing to keep in mind is that in a lot of fields the notion of "positive definite" doesn't make any sense, so that may disappear.</p>
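<p>A concrete example of an inner product on $\Bbb R^2$ other than the dot product: $\langle u,v\rangle = u^\mathrm{T}Mv$ with $M = \begin{pmatrix}2&1\\1&2\end{pmatrix}$. This is symmetric, bilinear and positive definite (the eigenvalues of $M$ are $1$ and $3$), but $\langle e_1,e_2\rangle = 1 \neq 0$, so the standard basis vectors are no longer orthogonal with respect to it.</p>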
| <p>A dot product is a very specific inner product that works on $\Bbb{R}^n$ (or more generally $\Bbb{F}^n$, where $\Bbb{F}$ is a field) and refers to the inner product given by</p>
<p>$$(v_1, ..., v_n) \cdot (u_1, ..., u_n) = v_1 u_1 + ... + v_n u_n$$</p>
<p>More generally, an inner product is a function that takes in two vectors and gives a complex number, subject to some conditions.</p>
|
matrices | <p>For <span class="math-container">$n = 2$</span>, I can visualize that the determinant <span class="math-container">$n \times n$</span> matrix is the area of the parallelograms by actually calculating the area by coordinates. But how can one easily realize that it is true for any dimensions?</p>
| <p>If the column vectors are linearly dependent, both the determinant and the volume are zero.
So assume linear independence.
The determinant remains unchanged when adding multiples of one column to another. This corresponds to a skew translation of the parallelepiped, which does not affect its volume.
By a finite sequence of such operations, you can transform your matrix to diagonal form, where the relation between determinant (=product of diagonal entries) and volume of a "rectangle" (=product of side lengths) is apparent.</p>
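<p>In the plane, for instance, adding a multiple of one column to the other gives
$$\det\begin{pmatrix}a&b+ta\\c&d+tc\end{pmatrix}=a(d+tc)-c(b+ta)=ad-bc,$$
which is Cavalieri's principle in miniature: the skewed parallelogram keeps the same base and height, hence the same area.</p>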
| <p>Here is the same argument as Muphrid's, perhaps written in an elementary way.<br /><br /></p>
<p>Apply Gram-Schmidt orthogonalization to $\{v_{1},\ldots,v_{n}\}$, so that
\begin{eqnarray*}
v_{1} & = & v_{1}\\
v_{2} & = & c_{12}v_{1}+v_{2}^{\perp}\\
v_{3} & = & c_{13}v_{1}+c_{23}v_{2}+v_{3}^{\perp}\\
& \vdots
\end{eqnarray*}
where $v_{2}^{\perp}$ is orthogonal to $v_{1}$; and $v_{3}^{\perp}$
is orthogonal to $span\left\{ v_{1},v_{2}\right\} $, etc. </p>
<p>Since determinant is multilinear, anti-symmetric, then
\begin{eqnarray*}
\det\left(v_{1},v_{2},v_{3},\ldots,v_{n}\right) & = & \det\left(v_{1},c_{12}v_{1}+v_{2}^{\perp},c_{13}v_{1}+c_{23}v_{2}+v_{3}^{\perp},\ldots\right)\\
& = & \det\left(v_{1},v_{2}^{\perp},v_{3}^{\perp},\ldots,v_{n}^{\perp}\right)\\
& = & \mbox{signed volume}\left(v_{1},\ldots,v_{n}\right)
\end{eqnarray*}</p>
|
game-theory | <p>There are many games that even though they include some random component (for example dice rolls or dealing of cards) they are skill games. In other words, there is definite skill in how one can play the game taking into account the random component. Think of backgammon or poker as good examples of such games. </p>
<p>Moreover, novice players or outsiders might fail to recognise the skill involved and attribute wins purely to luck. As someone gains experience, they usually start to appreciate the skill involved, and concede that there is more to the game than just luck. They might even concede that there is "more" skill than luck. How do we quantify this? <strong>How "much" luck vs skill</strong>? People can have very subjective feelings about this balance. Recently, I was reading someone arguing that backgammon is $9/10$ luck, while another one was saying it's $6/10$. These numbers mean very little other than expressing gut feelings. </p>
<p><strong>Can we do better?</strong> Can we have an objective metric to give us a good sense of the skill vs luck component of a game.</p>
<p>I was thinking along these lines: Given a game and a lot of empirical data on matches between players with different skills, a metric could be:</p>
<blockquote>
<p>How many games on the average do we need to have an overall win (positive win-loss balance) with probability $P$ (let's use $P=0.95$) between a top level player and a novice?</p>
</blockquote>
<p>For the game of chess this metric would be $1$ (or very close to $1$). For the game of scissors-paper-rock it would be $\infty$.</p>
<p>This is an objective measure (we can calculate it based on empirical data) and it is intuitive. There is however an ambiguity in what top-level and novice players mean. Empirical data alone does not suffice to classify the players as novices or experts. For example, imagine that we have the results from 10,000 chess games between 20 chess grandmasters. Some will be better, some will be worse, but analysing the data with the criterion I defined, we will conclude that chess has a certain (significant) element of luck. Can we make this more robust? Also, given a set of empirical data (match outcomes) how do we know we have enough data?</p>
<p>What other properties do we want to include? Maybe a rating between $[0, 1]$, zero meaning no luck, and one meaning all luck, would be easier to talk about. </p>
<p>I am happy to hear completely different approaches too. </p>
| <p>A natural place to start, depending upon your mathematical background might be the modern cardinal generalizations of the voting literature. Consider the following toy model, and feel free to adapt any parts which don't seem appropriate to the problem at hand.</p>
<p><strong>Data</strong>: Say we have a fixed, finite population of players $N$ whom we observe play. A single observation consists of the result of a single match between two players $\{i \succ j\}$ for some $i,j \in N$. Let the space of datasets $\mathcal{D}$ consist, for fixed $N$, of all finite collections of such observations. </p>
<p><strong>Solution Concept</strong>: Our goal here is twofold: mathematically we're going to try to get a vector in $\mathbb{R}^N$ whose $i^\textrm{th}$ component is the 'measure of player $i$'s caliber relative to all the others.' A player $i$ is better than player $j$ by our rule if the $i^\textrm{th}$ component of this score vector is higher than the $j^\textrm{th}$. We might also like the magnitude of the differences in these components to be increasing in some measure of pairwise skill difference (the score vector is <em>cardinal</em>, to use economics lingo). </p>
<blockquote>
<p><strong>Edit</strong>: This score vector should not be seen as a measure of luck versus skill. But what we will be able to do is use this to 'clean' our data of the component of it generated by differences in skill, leaving behind a residual which we may then interpret as 'the component of the results attributable to luck.' In particular, this 'luck component' lives in a normed space, and hence comes with a natural means of measuring its magnitude, which seems to be what you are after.</p>
</blockquote>
<p>Our approach is going to use a cardinal generalization of something known as the Borda count from voting theory.</p>
<p><strong>Aggregation</strong>: Our first step is to aggregate our dataset. Given a dataset, consider the following $N\times N$ matrix 'encoding' it. For all $i,j \in N$ define: </p>
<p>$$
D_{ij} = \frac{\textrm{Number of times $i$ has won over $j$}- \textrm{Number of times $j$ has won over $i$}}{\textrm{Number of times $i$ and $j$ have played}}
$$</p>
<p>if the denominator is non-zero, and $0$ else. The ${ij}^\textrm{th}$ element of this matrix encodes the relative frequency with which $i$ has beaten $j$. Moreover, $D_{ij} = -D_{ij}$. Thus we have identified this dataset with a skew-symmetric, real-valued $N\times N$ matrix.</p>
<p>An equivalent way of viewing this data is as a <em>flow</em> on a graph whose vertices correspond to the $N$ players (every flow on a graph has a representation as a skew-symmetric matrix and vice-versa). In this language, a natural candidate for a score vector is a <em>potential function</em>: a function from the vertices to $\mathbb{R}$ (i.e. a vector in $\mathbb{R}^N$) such that the value of the flow on any edge is given by its gradient. In other words, we ask if there exists some vector $s \in \mathbb{R}^N$ such that, for all $i,j \in N$:</p>
<p>$$
D_{ij} = s_j - s_i.
$$</p>
<p>This would very naturally be able to be perceived as a metric of 'talent' given the dataset. If such a vector existed, it would denote that differences in skill could 'explain away all variation in the data.' Generally, however, for a given aggregate data matrix, such a potential function generally does not exist (as we would hope, in line with our interpretation). </p>
<blockquote>
<p><strong>Edit</strong>: It should be noted that the way we are aggregating the data (counting relative wins) will generally preclude such a function from existing, even when the game is 'totally determined by skill.' In such cases our $D$ matrix will take values exclusively in $\{-1, 0 ,1\}$. Following the approach outlined below, one will get out a score function which rationalizes this ordering but the residual will not necessarily be zero (but will generally take a specific form, of a $N$-cycle conservative flow that goes through each vertex). If one were to, say, make use of scores of games, the aggregated data would have a cardinal interpretation.</p>
</blockquote>
<p><strong>Construction of a score</strong>: The good news is that the mathematical tools exist to construct a scoring function that is, in a rigorous sense, the 'best fit for the data,' even if it is not a perfect match (think about how a data point cloud rarely falls perfectly on a line, but we nonetheless can find it instructive to find the line that is the best fit for it). Since this has been tagged a soft question, I'll just give a sketch of how and why such a score can be constructed. The space of all such aggregate data matrices actually admits a decomposition into the linear subspace of flows that admit a potential function, and its orthogonal complement. Formally, this is a combinatorial version of the Hodge decomposition from de Rham cohomology (but if those words mean nothing, don't worry about looking them up). Then, loosely speaking, in the spirit of classical regression, we can solve a least-squares minimization problem to orthogonally project our aggregate data matrix onto the linear subspace of flows that admit a potential function:</p>
<p>$$
\min_{s \in \mathbb{R}^N} \|D - (-\textrm{grad}(s))\|^2
$$
where $-\textrm{grad}(s)$ is an $N\times N$ matrix whose $ij^\textrm{th}$ element is $s_i - s_j$.</p>
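<p>For a complete dataset where every pair has actually been compared (so no zero-by-convention entries in $D$), this least-squares problem even has a closed form: up to an additive constant, the optimal $s$ is the vector of row means of $D$. A minimal sketch in R, with made-up data for three players; the relative size of the residual can then be read as the "luck component" discussed in the edit above:</p>
<pre><code># Least-squares score vector for a complete comparison matrix D
# (skew-symmetric, made-up illustrative data).
D <- matrix(c( 0.0,  0.5,  0.9,
              -0.5,  0.0,  0.1,
              -0.9, -0.1,  0.0), nrow = 3, byrow = TRUE)
s <- rowMeans(D)                  # minimizer, normalized to mean zero
residual <- D - outer(s, s, "-")  # outer(s, s, "-")[i, j] = s[i] - s[j]
sum(residual^2) / sum(D^2)        # share of variation not explained by skill
</code></pre>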
<p>If you're interested in seeing this approach used to construct a ranking for college football teams, see:</p>
<p><a href="http://www.ams.org/samplings/feature-column/fc-2012-12" rel="nofollow noreferrer">http://www.ams.org/samplings/feature-column/fc-2012-12</a></p>
<p>If you'd like to read a bit more about the machinery underlying this including some brief reading about connections to the mathematical voting literature, see:</p>
<p><a href="https://www.stat.uchicago.edu/~lekheng/meetings/mathofranking/ref/jiang-lim-yao-ye.pdf" rel="nofollow noreferrer">https://www.stat.uchicago.edu/~lekheng/meetings/mathofranking/ref/jiang-lim-yao-ye.pdf</a></p>
| <p>Found this: <a href="https://boardgames.stackexchange.com/questions/9697/how-to-measure-luck-vs-skill-in-games">How to Measure Luck vs Skill in Games?</a> at<br>
boardgames.stackexchange.com</p>
<p>The following links might also be helpful and/or of general interest:</p>
<p><a href="https://en.wikipedia.org/wiki/Kelly_criterion" rel="nofollow noreferrer">Kelly criterion</a><br>
<a href="https://en.wikipedia.org/wiki/Edward_O._Thorp" rel="nofollow noreferrer">Edward O. Thorp</a><br>
<a href="https://en.wikipedia.org/wiki/Daniel_Kahneman" rel="nofollow noreferrer">Daniel Kahneman</a><br></p>
<hr />
<p><a href="http://www.cfapubs.org/doi/pdf/10.2469/cp.v30.n3.1" rel="nofollow noreferrer">The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing</a><br>
<span class="math-container">$\;$</span> by Michael J. Mauboussin<br>
<a href="http://studentofvalue.com/notes-on-the-success-equation/" rel="nofollow noreferrer">NOTES ON THE SUCCESS EQUATION</a><br></p>
<blockquote>
<p>Estimated true skill = Grand average + shrinkage factor (observed
average − grand average)</p>
</blockquote>
<p><a href="https://www.farnamstreetblog.com/2012/12/three-things-to-consider-in-order-to-make-an-effective-prediction/" rel="nofollow noreferrer">Michael Mauboussin: Three Things to Consider in Order To Make an Effective Prediction</a><br></p>
<hr />
<p><a href="https://en.wikipedia.org/wiki/Bayesian_inference" rel="nofollow noreferrer">Bayesian inference</a></p>
<hr />
<p>I hope the OP can forgive me for not focusing on his questions. Ten years ago I might have been more enthusiastic, but now all I could muster up was to provide links that anyone interested in games should find interesting.</p>
<p>So what is my problem? Two letters:</p>
<h1>AI</h1>
<p>Ten years ago it was thought that it would take AI 'bots' several decades to become #1 over humanity in two games:</p>
<p>Go (a game of perfect information)<br></p>
<p>OR</p>
<p>Poker (a game of imperfect information/randomness)<br></p>
<p>Today, it is a fait accompli.</p>
<p>The researchers matching their bots against human players in poker can claim victory whenever the out-performance is statistically significant.</p>
<p>GO is a different animal. Any win against a 10 dan professional player is 'off the charts' significant (throw that statistics book out the window).</p>
|
probability | <p>Suppose $X$ is a random variable which follows the standard normal distribution; then how is $KX$ ($K$ a constant) distributed? Why does it follow a normal distribution with mean $0$ and variance $K^2$?
Thank You.</p>
| <p>For a random variable $X$ with finite first and second moments (i.e. expectation and variance exist) it holds that $\forall c \in \mathbb{R}: E[c \cdot X ] = c \cdot E[X]$ and $ \mathrm{Var}[c\cdot X] = c^2 \cdot \mathrm{Var} [ X]$</p>
<p>However the fact that $c\cdot X$ follows the same family of distributions as does $X$ is not trivial and has to be shown separately. (Not true e.g. for the Beta distribution, which is also in the exponential family.) You can see it if you look at the characteristic function of the product $c\cdot X$: since $\varphi_{cX}(t)=E[e^{itcX}]=\varphi_X(ct)$, it equals $\exp\{i\mu c t - \frac{1}{2} \sigma^2 c^2 t^2\}$, which is the characteristic function of a normal distribution with $\mu'= \mu\cdot c$ and $\sigma' = \sigma \cdot c$.</p>
| <p>Another way of characterizing a random variable's distribution is by its distribution function; that is, if two random variables have the same distribution function, then they are equal in distribution.</p>
<p>In our case, let <span class="math-container">$X \sim N(\mu,\sigma^2)$</span> then set <span class="math-container">$Y = c X$</span> with <span class="math-container">$c > 0$</span> and call <span class="math-container">$F$</span> the distribution function of <span class="math-container">$X$</span> and <span class="math-container">$G$</span> the distribution function of <span class="math-container">$Y$</span>. Then:</p>
<p><span class="math-container">$G(y) = P[Y \le y] = P[cX \le y] = P\Big[X \le \frac yc\Big] = F\Big(\frac yc\Big)$</span></p>
<p>Now we differentiate and we get:</p>
<p><span class="math-container">$g(y) = f(\frac yc) \frac1c$</span></p>
<p>where <span class="math-container">$g$</span> is the density function for <span class="math-container">$Y$</span> and <span class="math-container">$f$</span> is the density function for <span class="math-container">$X$</span>. Then we just try to express this as a normal density:</p>
<p><span class="math-container">$g(y) = f(\frac yc ) \frac1c = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(y/c-\mu)^2}{2\sigma^2}}\cdot\frac1c = \frac{1}{\sqrt{2\pi}(c\sigma)} e^{-\frac{(y-c\mu)^2}{2(c\sigma)^2}}$</span></p>
<p>But this last is the density of a <span class="math-container">$N(c\mu,(c\sigma)^2)$</span></p>
<p>This is a more detailed formulation of what Dilip Sarwate pointed out in the comments before.</p>
<p><strong>The case c < 0</strong></p>
<p><span class="math-container">$G(y) = P[Y \le y] = P[cX \le y] = P\Big[X \ge \frac yc\Big] = 1 - F\Big(\frac yc\Big)$</span></p>
<p>differentiating:</p>
<p><span class="math-container">$g(y) = -f(\frac yc) \frac1c = f(\frac yc) \frac{1}{|c|} = \frac{1}{\sqrt{2\pi}(|c|\sigma)} e^{\frac{-(y-c\mu)^2}{2(c\sigma)^2}}$</span></p>
<p>Note that this does not pose difficulties since <span class="math-container">$\sqrt{(c \sigma)^2} = |c| \sigma$</span>.</p>
|
combinatorics | <p>In competitive Pokémon-play, two players pick a team of six Pokémon out of the 718 available. These are picked independently, that is, player $A$ is unaware of player $B$'s choice of Pokémon. Some online servers let the players see the opponent's team before the match, allowing the player to change the order of its Pokémon. (Only the first matters, as this is the one that will be sent into the match first. After that, the players may switch between the chosen six freely, as explained below.) Each Pokémon is assigned four moves out of a list of moves that may or may not be unique for that Pokémon. There are currently 609 moves in the move-pool. Each move is assigned a certain type, and may be more or less effective against Pokémon of certain types. However, a Pokémon may have more than one type. In general, move effectiveness is given by $0.5\times$, $1\times$ and $2\times$. However, there are exceptions to this rule. Ferrothorn, a dual-type Pokémon of steel and grass, will take $4\times$ damage against fire moves, since both of its types are weak against fire. Each move has a certain probability that it will work.</p>
<p>In addition, there are moves with other effects than direct damage. For instance, a move may increase one's attack, decrease your opponent's attack, or add a status deficiency on your opponent's Pokémon, such as making it fall asleep. This will make the Pokémon unable to move with a relatively high probability. If it is able to move, the status of "asleep" is lifted. Furthermore, each Pokémon has a "Nature" which increases one stat (out of Attack, Defense, Special Attack, Special Defense, Speed), while decreases another. While no longer necessary for my argument, one could go even deeper with things such as IV's and EV's for each Pokémon, which also affects its stats.</p>
<p>A player has won when all of its opponents Pokémon are out of play. A player may change the active Pokémon freely. (That is, the "battles" are 1v1, but the Pokémon may be changed freely.)</p>
<p>Has there been any serious mathematical research towards competitive Pokémon play? In particular, has there been proved that there is always a best strategy? What about the number of possible "positions"? If there always is a best strategy, can one evaluate the likelihood of one team beating the other, given best play from both sides? (As is done with chess engines today, given a certain position.)</p>
<p>EDIT: For the sake of simplicity, I think it is a good idea to consider two positions equal when </p>
<p>1) Both positions have identical teams in terms of Pokémon. (Natures, IVs, EVs and stats are omitted.) As such, one can create a one-to-one correspondence between the set of $12$ Pokémon in position $A$ and the $12$ in position $B$ by mapping $a_A \mapsto a_B$, where $a_A$ is Pokémon $a$ in position $A$.</p>
<p>2) $a_A$ and $a_B$ have the same moves for all $a\in A, B$.</p>
| <p>Has there been serious <em>research</em>? Probably not. Have there been modeling efforts? Almost certainly, and they probably range anywhere from completely ham-handed to somewhat sophisticated.</p>
<p>At its core, the game is finite; there are two players and a large but finite set of strategies. As such, existence of a mixed-strategy equilibrium is guaranteed. This is the result of Nash, and is actually an interesting application of the Brouwer Fixed-Point theorem.</p>
<p>That said, the challenge isn't really in the math; if you could set up the game, it's pretty likely that you could solve it using some linear programming approach. The challenge is in modeling the payoff, capturing the dynamics and, to some degree (probably small), handling the few sources of intrinsic uncertainty (i.e. uncertainty generated by randomness, such as hit chance).</p>
<p>Really, though, this is immaterial since the size of the action space is so large as to be basically untenable. LP solutions suffer a curse of dimensionality -- the more actions in your discrete action space, the more things the algorithm has to look at, and hence the longer it takes to solve.</p>
<p>Because of this, most tools that people use are inherently Monte Carlo-based -- simulations are run over and over, with new random seeds, and the likelihood of winning is measured statistically.</p>
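<p>As a generic illustration of that approach (not of Pokemon itself), the statistical skeleton is tiny; here <code>simulate_match</code> is a hypothetical stand-in for a full randomized battle simulator returning <code>TRUE</code> when the first player wins:</p>
<pre><code># Generic Monte Carlo win-rate estimation with a 95% confidence interval.
# simulate_match() is a hypothetical placeholder for a battle simulator.
estimate_win_rate <- function(simulate_match, runs = 10000) {
  wins <- replicate(runs, simulate_match())
  p <- mean(wins)
  se <- sqrt(p * (1 - p) / runs)   # standard error of the estimate
  c(estimate = p, lower = p - 1.96 * se, upper = p + 1.96 * se)
}
# e.g. estimate_win_rate(function() runif(1) < 0.55)
</code></pre>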
<p>These Monte Carlo methods have their down-sides, too. Certain player actions, such as switching your Blastoise for your Pikachu, are deterministic decisions. But we've already seen that the action space is too large to prescribe determinism in many cases. Handling this in practice becomes difficult. You could treat this as a random action with some probability (even though in the real world it is not random at all), and increase your number of Monte Carlo runs, or you could apply some heuristic, such as "swap to Blastoise if the enemy type is fire and my current pokemon is under half-health." However, writing these heuristics relies on an assumption that your breakpoints are nearly-optimal, and it's rarely actually clear that such is the case.</p>
<p>As a result, games like Pokemon are interesting because optimal solutions are difficult to find. If there were 10 pokemon and 20 abilities, it would not be so fun. The mathematical complexity, if I were to wager, is probably greater than chess, owing simply to the size of the action space and the richer dynamics of the measurable outcomes. This is one of the reasons the game and the community continue to be active: people find new ideas and new concepts to explore.</p>
<p>Also, the company making the game keeps making new versions. That helps.</p>
<hr>
<p>A final note: one of the challenges in the mathematical modeling of the game dynamics is that the combat rules are very easy to implement programmatically, but somewhat more difficult to cleanly describe mathematically. For example, one attack might do 10 damage out front, and then 5 damage per round for 4 rounds. Other attacks might have cooldowns, and so forth. This is easy to implement in code, but more difficult to write down a happy little equation for. As such, it's a bit more challenging to do things like try to identify gradients etc. analytically, although it could be done programmatically as well. It would be an interesting application for automatic differentiation, as well.</p>
| <p>I think it's worth pointing out that even stripping away most of the complexity of the game still leaves a pretty hard problem. </p>
<p>The very simplest game that can be said to bear any resemblance to Pokemon is rock-paper-scissors (RPS). (Imagine, for example, that there are only three Pokemon - let's arbitrarily call them Squirtle, Charmander, and Bulbasaur - and that Squirtle always beats Charmander, Charmander always beats Bulbasaur, and Bulbasaur always beats Squirtle.)</p>
<p>Already it's unclear what "best strategy" means here. There is a unique <a href="http://en.wikipedia.org/wiki/Nash_equilibrium">Nash equilibrium</a> given by randomly playing Squirtle, Charmander, or Bulbasaur with probability exactly $\frac{1}{3}$ each, but in general just because there's a Nash equilibrium, even a unique Nash equilibrium, doesn't mean that it's the strategy people will actually gravitate to in practice. </p>
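<p>(To see why uniform play is an equilibrium, with the usual $+1/0/-1$ scoring: against an opponent playing each option with probability $\frac{1}{3}$, every strategy of yours wins, ties, and loses with probability $\frac{1}{3}$ each, for an expected payoff of $\frac{1}{3}(+1)+\frac{1}{3}(0)+\frac{1}{3}(-1)=0$; no deviation does better, while any non-uniform strategy can in principle be exploited.)</p>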
<p>There is in fact a <a href="http://www.chessandpoker.com/rps_rock_paper_scissors_strategy.html">professional RPS tournament scene</a>, and in those tournaments nobody is playing the Nash equilibrium because nobody can actually generate random choices with probability $\frac{1}{3}$; instead, everyone is playing some non-Nash equilibrium strategy, and if you want to play to win (not just win $\frac{1}{3}$ of the time, which is the best you can hope for playing the Nash equilibrium) you'll instead play strategies that fare well against typical strategies you'll encounter. Two examples:</p>
<ul>
<li>Novice male players tend to open with rock, and to fall back on it when they're angry or losing, so against such players you should play paper.</li>
<li>Novice RPS players tend to avoid repeating their plays too often, so if a novice player's played rock twice you should expect that they're likely to play scissors or paper next.</li>
</ul>
<p>There is even an <a href="http://www.rpscontest.com/">RPS programming competition scene</a> where people design algorithms to play repeated RPS games against other algorithms, and nobody's playing the Nash equilibrium in these games unless they absolutely have to. Instead the idea is to try to predict what the opposing algorithm is going to do next while trying to prevent your opponent from predicting what you'll do next. <a href="http://ofb.net/~egnor/iocaine.html">Iocaine Powder</a> is a good example of the kind of things these algorithms get up to. </p>
<p>So, even the very simple-sounding question of figuring out the "best strategy" in RPS is in some sense open, and people can dedicate a lot of time both to figuring out good strategies to play against other people and good algorithms to play against other algorithms. I think it's safe to say that Pokemon is strictly more complicated than RPS, enough so even this level of analysis is probably impossible. </p>
<p><strong>Edit:</strong> It's also worth pointing out that another way that Pokemon differs from a game like chess is that it is <em>imperfect information</em>: you don't know everything about your opponent's Pokemon (movesets, EV distribution, hold items, etc.) even if you happen to know what they are. That means both players should be trying to predict these hidden variables about each other's Pokemon while trying to trick the other player into making incorrect predictions. My understanding is that a common strategy for doing this is to use Pokemon that have very different viable movesets and try to trick your opponent into thinking you're playing one moveset when you're actually playing another. So in this respect Pokemon resembles poker more than it does chess. </p>
|
probability | <p>Consider a two-sided coin. If I flip it $1000$ times and it lands heads up for each flip, what is the probability that the coin is unfair, and how do we quantify that if it is unfair? </p>
<p>Furthermore, would it still be considered unfair for $50$ straight heads? $20$? $7$?</p>
| <p>First of all, you must understand that there is no such thing as a perfectly fair coin, because there is nothing in the real world that conforms perfectly to some theoretical model. So a useful definition of "fair coin" is one that, for practical purposes, behaves like a fair one. In other words, no human flipping it for even a very long time would be able to tell the difference. That means one can assume that the probability of heads or tails on that coin is $1/2$. </p>
<p>Whether your particular coin is fair (according to the above definition) or not cannot be assigned a "probability". Instead, statistical methods must be used. </p>
<p>Here, you make a so-called "null hypothesis": "the coin is fair". You then proceed to calculate the probability of the event you observed (to be precise: the event, or something at least as "strange"), assuming the null hypothesis is true. In your case, the probability of your event, 1000 heads, or something at least as strange, is $2\times1/2^{1000}$ (that is because you also count 1000 tails).</p>
<p>Now, with statistics, you can never say anything for sure. You need to define what you consider your "confidence level". It's like saying in court "beyond a reasonable doubt". Let's say you are willing to assume a confidence level of 0.999. That means, if something that supposedly had less than a 0.001 chance of happening actually happened, then you are going to say, "I am confident enough that my assumptions must be wrong". </p>
<p>In your case, if you assume the confidence level of 0.999, and you have 1000 heads in 1000 throws, then you can say, "the assumption of the null hypothesis must be wrong, and the coin must be unfair".
Same with 50 heads in 50 throws, or 20 heads in 20 throws. But not with 7, not at this confidence level. With 7 heads (or tails), the probability is $2 \times 1/2 ^ {7}$ , which is more than 0.001.</p>
<p>But if you assume confidence level at 95% (which is commonly done in less strict disciplines of science), then even 7 heads means "unfair". </p>
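<p>For what it's worth, the computation behind these thresholds is a one-liner; a quick sketch in R:</p>
<pre><code># two-sided "probability of something at least as strange": n heads or n tails
p_value <- function(n) 2 * 0.5^n
p_value(7)    # 0.015625 -> rejected at the 0.95 level, not at 0.999
p_value(20)   # ~1.9e-06 -> rejected at both levels
</code></pre>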
<p>Notice that you can never actually "prove" the null hypothesis. You can only reject it, based on what you observe is happening, and your "standard of confidence". This is in fact what most scientists do - they reject hypotheses based on evidence and the accepted standards of confidence. </p>
<p>If your events do not disprove your hypothesis, that does not necessarily mean it must be true! It just means it withstood the scrutiny so far. You can also say "the results are consistent with the hypothesis being true" (scientists frequently use this phrase). If a hypothesis stands for a long time without anybody being able to produce results that disprove it, it becomes generally accepted. However, sometimes even after hundreds of years, some new results might come up which disprove it. Such was the case of General Relativity "disproving" Newton's classical theory. </p>
| <p>If you take a coin you have modified so that it always lands on heads and you get $1000$ heads then the probability of it being unfair is $100\%$.</p>
<p>If you take a coin you have crafted yourself and carefully made sure that it is a fair coin and then you get $1000$ heads then the probability of it being unfair is $0\%$.</p>
<p>Next, you fill a box with coins of both types, then take a random coin.</p>
<ul>
<li>$NF$ : fair coins in the box.</li>
<li>$NU$ : unfair coins in the box</li>
<li><p>$P(U)$ : probability of having taken an unfair coin
$$P(U) = \frac{NU}{NF + NU}$$</p></li>
<li><p>$P(F)$ : probability of having taken a fair coin
$$ P(F) = \frac{NF}{NF + NU} = 1 - P(U) $$</p></li>
<li>$P(H \mid{U})$ : Probability of having 1000 heads conditioned to having take
an unfair coin
$$P(H\mid{U}) = 1 $$</li>
<li>$P(H\mid{F})$ : Probability of having 1000 heads conditioned to having taken
a fair coin
$$P(H\mid{F}) = \left( \tfrac{1}{2} \right)^{1000}$$</li>
<li>$P(H)$ : Probability of having 1000 heads</li>
</ul>
<p>\begin{align}
P(H) &= P(U \cap H) + P(F \cap H)\\
&= P(H \mid{U})P(U) + P(H \mid{F})P(F)\\
&= P(U) + P(H \mid{F})P(F)
\end{align}</p>
<p>By applying <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem" rel="noreferrer">Bayes theorem</a> :</p>
<p>$P(U \mid{H})$ : probability of the coin being unfair conditioned to getting 1000 heads
$$P(U\mid{H}) = \frac{P(H \mid{U})P(U)}{P(H)} = \frac{P(U)}{P(U) + P(H\mid{F})P(F)}$$</p>
<p>And that is your answer.</p>
<hr>
<h2>In example</h2>
<p>If $P(U)=1/(6 \cdot 10^{27})$ ($1$ out of every $6 \cdot 10^{27}$ coins are unfair) and you get 1000 heads then the probability of the coin being unfair is
\begin{align}
\mathbf{99}.&999999999999999999999999999999999999999999999999999999999999999999999\\
&999999999999999999999999999999999999999999999999999999999999999999999\\
&999999999999999999999999999999999999999999999999999999999999999999999\\
&999999999999999999999999999999999999999999999999999999999999999944\%
\end{align}</p>
<p>Very small coins like the USA cent have a weight of $2.5g$. We can safely assume that there are no coins with a weight less than 1 gram.</p>
<p>Earth has a weight of less than $6 \cdot 10^{27}$ grams. Thus we know that there are less than $6 \cdot 10^{27}$ coins. We know that there is at least one unfair coin ( I have seen coins with two heads and zero tails) thus we know that $P(U) \ge 1/(6 \cdot 10^{27})$.</p>
<p>And thus we can conclude that if you get 1000 heads then the probability of the coin being unfair is at least
\begin{align}
\mathbf{99}.&999999999999999999999999999999999999999999999999999999999999999999999\\
&999999999999999999999999999999999999999999999999999999999999999999999\\
&999999999999999999999999999999999999999999999999999999999999999999999\\
&999999999999999999999999999999999999999999999999999999999999999944\%
\end{align}</p>
<p>This analysis is only valid if you take a random coin and only if coins are either $100\%$ fair or $100\%$ unfair.
It is still a good indication that yes, with $1000$ heads you can be certain beyond any reasonable doubt that the coin is unfair.</p>
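<p>A numerical sketch of the example above in R; note that the complement has to be computed directly, since <code>1 - posterior</code> would just round to zero in double precision:</p>
<pre><code>pU  <- 1 / 6e27   # prior probability of picking an unfair coin
pHF <- 2^-1000    # P(1000 heads | fair coin), about 9.33e-302
posterior_unfair <- pU / (pU + pHF * (1 - pU))              # numerically 1
posterior_fair   <- pHF * (1 - pU) / (pU + pHF * (1 - pU))  # about 5.6e-274
</code></pre>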
|
linear-algebra | <p>I am looking for an intuitive explanation as to why/how row rank of a matrix = column rank. I've read the <a href="https://en.wikipedia.org/wiki/Rank_(linear_algebra)#Proofs_that_column_rank_=_row_rank" rel="noreferrer">proof on Wikipedia</a> and I understand the proof, but I don't "get it". Can someone help me out with this ?</p>
<p>I find it hard to wrap my head around the idea of how the column space and the row space is related at a fundamental level.</p>
| <p>You can apply elementary row operations and elementary column operations to bring a matrix <span class="math-container">$A$</span> to a matrix that is in <strong>both</strong> row reduced echelon form and column reduced echelon form. In other words, there exist invertible matrices <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> (which are products of elementary matrices) such that
<span class="math-container">$$PAQ=E:=\begin{pmatrix}I_k\\&0_{(n-k)\times(n-k)}\end{pmatrix}.$$</span>
As <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are invertible, the maximum number of linearly independent rows in <span class="math-container">$A$</span> is equal to the maximum number of linearly independent rows in <span class="math-container">$E$</span>. That is, the row rank of <span class="math-container">$A$</span> is equal to the row rank of <span class="math-container">$E$</span>. Similarly for the column ranks. Now it is evident that the row rank and column rank of <span class="math-container">$E$</span> are identical (to <span class="math-container">$k$</span>). Hence the same holds for <span class="math-container">$A$</span>.</p>
| <p>This post is quite old, so my answer might come a bit late.
If you are looking for an intuition (you want to "get it") rather than a demonstration (of which there are several), then here is my 5c.</p>
<p>If you think of a matrix A in the context of solving a system of simultaneous equations, then the row-rank of the matrix is the number of independent equations, and the column-rank of the matrix is the number of independent parameters that you can estimate from the equation. That I think makes it a bit easier to see why they should be equal.</p>
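<p>A tiny example of that reading: for
$$A = \begin{pmatrix}1&2&3\\2&4&6\end{pmatrix}$$
the second equation is just twice the first, so there is only one independent equation (row rank $1$); and since every column is a multiple of $(1,2)^T$, the system pins down only one independent combination of the parameters (column rank $1$).</p>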
<p>Saad.</p>
|
geometry | <p>When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle.</p>
<p>Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$.</p>
<p>Is this just a coincidence, or is there some deep explanation for why we should expect this?</p>
| <p>Consider increasing the radius of a circle by an infinitesimally small amount, $dr$. This increases the area by an <a href="http://en.wikipedia.org/wiki/Annulus_%28mathematics%29" rel="noreferrer">annulus</a> (or ring) with inner circumference $2 \pi r$ and outer circumference $2\pi(r+dr)$. As this ring is extremely thin, we can imagine cutting the ring and then flattening it out to form a rectangle with width $2\pi r$ and height $dr$ (the side of length $2\pi(r+dr)$ is close enough to $2\pi r$ that we can ignore that). So the area gain is $2\pi r\cdot dr$ and to determine the rate of change with respect to $r$, we divide by $dr$ and so we get $2\pi r$. Please note that this is just an informative, intuitive explanation as opposed to a formal proof. The same reasoning works with a sphere; we just flatten it out to a rectangular prism instead.</p>
| <p>$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Bd}{\partial}\DeclareMathOperator{\vol}{vol}$The formulas are no accident, but not especially deep. The explanation comes down to a couple of geometric observations.</p>
<ol>
<li><p>If $X$ is the closure of a bounded open set in the Euclidean space $\Reals^{n}$ (such as a solid ball, or a bounded polytope, or an ellipsoid) and if $a > 0$ is real, then the image $aX$ of $X$ under the mapping $x \mapsto ax$ (uniform scaling by a factor of $a$ about the origin) satisfies
$$
\vol_{n}(aX) = a^{n} \vol_{n}(X).
$$
More generally, if $X$ is a closed, bounded, piecewise-smooth $k$-dimensional manifold in $\Reals^{n}$, then scaling $X$ by a factor of $a$ multiplies the volume by $a^{k}$.</p></li>
<li><p>If $X \subset \Reals^{n}$ is a bounded, $n$-dimensional intersection of closed half-spaces whose boundaries lie at unit distance from the origin, then scaling $X$ by $a = (1 + h)$ "adds a shell of uniform thickness $h$ to $X$ (modulo behavior along intersections of hyperplanes)". The volume of this shell is equal to $h$ times the $(n - 1)$-dimensional measure of the boundary of $X$, up to added terms of higher order in $h$ (i.e., terms whose total contribution to the $n$-dimensional volume of the shell is negligible as $h \to 0$).</p></li>
</ol>
<p><img src="https://i.sstatic.net/Goo5G.png" alt="The change in area of a triangle under scaling about its center"></p>
<p>If $X$ satisfies Property 2. (e.g., $X$ is a ball or cube or simplex of "unit radius" centered at the origin), then
$$
h \vol_{n-1}(\Bd X) \approx \vol_{n}\bigl[(1 + h)X \setminus X\bigr],
$$
or
$$
\vol_{n-1}(\Bd X) \approx \frac{(1 + h)^{n} - 1}{h}\, \vol_{n}(X).
\tag{1}
$$
The approximation becomes exact in the limit as $h \to 0$:
$$
\vol_{n-1}(\Bd X)
= \lim_{h \to 0} \frac{(1 + h)^{n} - 1}{h}\, \vol_{n}(X)
= \frac{d}{dt}\bigg|_{t = 1} \vol_{n}(tX).
\tag{2}
$$
By Property 1., if $r > 0$, then
$$
\vol_{n-1}\bigl(\Bd (rX)\bigr)
= r^{n-1}\vol_{n-1}(\Bd X)
= \lim_{h \to 0} \frac{(1 + h)^{n}r^{n} - r^{n}}{rh}\, \vol_{n}(X)
= \frac{d}{dt}\bigg|_{t = r} \vol_{n}(tX).
\tag{3}
$$
In words, the $(n - 1)$-dimensional volume of $\Bd(rX)$ is the derivative with respect to $r$ of the $n$-dimensional volume of $rX$.</p>
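<p>As a quick sanity check, take the cube $X = [-1, 1]^{n}$, whose facets lie at unit distance from the origin so that Property 2. applies: $\vol_{n}(rX) = (2r)^{n}$, whose derivative $2n(2r)^{n-1}$ is exactly $\vol_{n-1}\bigl(\Bd(rX)\bigr)$, the total volume of the $2n$ facets, each an $(n-1)$-dimensional cube of side length $2r$.</p>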
<p>This argument fails for non-cubical boxes and ellipsoids (to name two) because for these objects, uniform scaling about an arbitrary point does not add a shell of uniform thickness (i.e., Property 2. fails). Equivalently, adding a shell of uniform thickness does not yield a new region similar to (i.e., obtained by uniform scaling from) the original.</p>
<p>(The argument also fails for cubes (etc.) not centered at the origin, again because "off-center" scaling does not add a shell of uniform thickness.)</p>
<p>In more detail:</p>
<ul>
<li><p>Scaling a non-square rectangle adds "thicker area" to the pair of short sides than to the long pair. Equivalently, adding a shell of uniform thickness around a non-square rectangle yields a rectangle <em>having different proportions</em> than the original rectangle.</p></li>
<li><p>Scaling a non-circular ellipse adds thicker area near the ends of the major axis. Equivalently, adding a uniform shell around a non-circular ellipse yields a non-elliptical region. (The principle that "the derivative of area is length" fails drastically for ellipses: The area of an ellipse is proportional to the product of the axes, while the arc length is a <em>non-elementary function</em> of the axes.)</p></li>
</ul>
|
probability | <p>Recently I was asked the following in an interview:</p>
<blockquote>
<p>If you are a pretty good basketball player, and were betting on whether you could make $2$ out of $4$ or $3$ out of $6$ baskets, which would you take?</p>
</blockquote>
<p>I said either one, since the ratio is the same. Any insights?</p>
| <h1>Depends on how good you are</h1>
<p><img src="https://i.sstatic.net/Naze3.png" alt="enter image description here"></p>
<p>The explanation is intuitive:</p>
<ul>
<li><p>If you are not very good (probability that you make a single shot - p < 0.6), then your overall probability is not very high, but it is better to bet that you'll make 2 out of 4, because you may do it just by chance and your clumsiness has less chance to prove in 4 than in 6 attempts.</p></li>
<li><p>If you are really good (p > 0.6), then it is better to bet on 3 out of 6, because if you miss just by chance, you have better chance to correct yourself in 6 attempts.</p></li>
</ul>
<p>The curves meet exactly at p = 0.6.</p>
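<p>The crossing point can be checked algebraically. Writing $q = 1 - p$ and expanding both "at least half" probabilities,
$$P(\text{at least 3 of 6}) - P(\text{at least 2 of 4}) = \left(p^6 + 6p^5q + 15p^4q^2 + 20p^3q^3\right) - \left(p^4 + 4p^3q + 6p^2q^2\right) = 2p^2q^3(5p - 3),$$
which is negative for $0 < p < \frac{3}{5}$, zero at $p = \frac{3}{5}$, and positive for $\frac{3}{5} < p < 1$.</p>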
<h2>In general, the more attempts, the more of your real skill is revealed</h2>
<p>This is best illustrated on the extreme case:</p>
<p><img src="https://i.sstatic.net/W6WZV.png" alt="enter image description here"></p>
<p>With more attempts, it is almost binary case - you either succeed or not, based on your skill. With high N, the result will be close to your original expectation.</p>
<p>Note that with high N and p = 0.5, the binomial distribution gets narrower and converges to normal distribution.</p>
<h2>Everything here just revolves around <a href="http://en.wikipedia.org/wiki/Binomial_distribution" rel="noreferrer">binomial distribution</a>,</h2>
<p>which tells you that the probability that you will score <em>exactly</em> <code>k</code> shots out of <code>n</code> is</p>
<p>$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$</p>
<p>The probability that you will score at least k = n/2 shots (and win the bet) is then </p>
<p>$$P(X \ge k) = \sum^{n}_{i=k} \binom{n}{i} p^i (1-p)^{n-i}$$</p>
<h1>Why the curves don't meet at p = 0.5?</h1>
<p>Look at the following plots:</p>
<p><img src="https://i.sstatic.net/vsdVi.png" alt="enter image description here"></p>
<p>These plots are for p = 0.5. The binomial distribution is symmetric for this value. Intuitively, you <em>expect</em> 2 of 4 or 3 of 6 to take half of the distribution. But if you look especially at the left plot, it is clear that the middle column (2 successful shots) goes far beyond the half of the distribution (dashed line), which is denoted by the red arrow. In the right plot (3/6), this proportion is much smaller.</p>
<p>If you sum the gold bars, you will get:</p>
<pre><code>P(make at least 2 out of 4) = 0.6875
P(make at least 3 out of 6) = 0.65625
P(make at least 500 out of 1000) = 0.5126125
</code></pre>
<p>From these figures, as well as from the plots, it is apparent that with high N, the proportion of the distribution "beyond the half" converges to zero, and the total probability converges to 0.5.</p>
<p>So, for the curves to meet for low Ns, <code>p</code> must be higher to compensate for this:</p>
<p><img src="https://i.sstatic.net/gDZBh.png" alt="enter image description here"></p>
<pre><code>P(make at least 2 out of 4) = 0.8208
P(make at least 3 out of 6) = 0.8208
</code></pre>
<p>Full code in R:</p>
<pre><code># P(at least 3 of 6 shots), as a function of the single-shot probability p
f6 <- function(p) {
dbinom(3, 6, p) +
dbinom(4, 6, p) +
dbinom(5, 6, p) +
dbinom(6, 6, p)
}
# P(at least 2 of 4 shots)
f4 <- function(p) {
dbinom(2, 4, p) +
dbinom(3, 4, p) +
dbinom(4, 4, p)
}
# P(at least `from` of `max` shots)
fN <- function(p, from, max) {
#sum(sapply(from:max, function (x) dbinom(x, max, p)))
s <- 0
for (i in from:max) {
s <- s + dbinom(i, max, p)
}
s
}
f1000 <- function (p) fN(p, 500, 1000)
plot(f6, xlim = c(0,1), col = "red", lwd = 2, ylab = "", main = "Probability that you will make ...", xlab = "p (probability you make a single shot)")
curve(f4, col = "green", add = TRUE, lwd = 2)
curve(f1000, add = TRUE, lwd = 2, col = "blue")
legend("topleft", c("2 out of 4", "3 out of 6", "500 out of 1000"), lwd = 2, col = c("green", "red", "blue"), bty = "n")
# histogram of Bin(n, p), with the winning outcomes (at least n/2) in gold
plotHist <- function (n, p) {
plot(x=c(-0.5,n+0.5),y=c(0,0.41),type="n", xaxt="n", xlab = "successful shots", ylab = "probability",
main = paste0(n/2, "/", n, ", p = ", p))
axis(1, at=0:n, labels=0:n)
x <- 0:n
y <- dbinom(0:n, n, p)
w <- 0.9
#lines(0:4, dbinom(0:4, 4, 0.5), lwd = 50, type = "h", lend = "butt")
rect(x-0.5*w, 0, x+0.5*w, y, col = "lightgrey")
uind <- (n/2+1):(n+1)
rect(x[uind]-0.5*w, 0, x[uind]+0.5*w, y[uind], col = "gold")
}
par(mfrow = c(1, 2))
plotHist(4, 0.5)
abline(v = 2, lty = 2)
arrows(2-0.5*0.9, 0.17, 2, 0.17, col = "red", code = 3, length = 0.1, lwd = 2)
plotHist(6, 0.5)
f4(0.5)
f6(0.5)
f1000(0.5)
par(mfrow = c(1, 2))
plotHist(4, 0.6)
plotHist(6, 0.6)
f4(0.6)
f6(0.6)
</code></pre>
| <p>The probability of you getting at least half increases with the number of shots. E.g. with a probability of 2/3 per shot the probability of getting at least half the baskets increases as below.</p>
<p><em>Edit</em> it is important to point out that this only holds if by a "pretty good basketball player" you mean your chance of making a basket is somewhat better than evens (in the range 0.6 to 1 exclusive). This is shown very clearly in <a href="https://math.stackexchange.com/questions/678515/probability-2-4-vs-3-6/678537#678537">Hagen von Eitzen's answer</a>.</p>
<p><img src="https://i.sstatic.net/hcPVU.jpg" alt="enter image description here"></p>
<p>An intuitive way of looking at this is that it's like a <strong>diversification effect</strong>. With only a few baskets, you could get unlucky, just as you might if you tried to pick only a couple of stocks for an investment portfolio, even if you were a good stock picker. You increase the number of baskets -- or stocks -- and the role of chance is reduced and your skill shines through.</p>
<p>Formally, assuming that</p>
<ul>
<li><p>each throw is independent, and</p></li>
<li><p>you have the same probability $p$ of scoring on each throw</p></li>
</ul>
<p>you can model the chance of scoring $b$ baskets out of $n$ using the <a href="http://en.wikipedia.org/wiki/Binomial_distribution" rel="nofollow noreferrer">binomial distribution</a></p>
<p>$$ \mathbb{P}(b \text{ from } n) = \binom{n}{b} p^{b}(1-p)^{n-b} $$</p>
<p>To get the probability of scoring at least half of the $n$ baskets, you have to add up these probabilities. E.g. for at least 2 out of 4 you want $\mathbb{P}(2 \text{ from } 4) + \mathbb{P}(3 \text{ from } 4) + \mathbb{P}(4 \text{ from } 4)$.</p>
|
linear-algebra | <p>Why is the determinant as a function from <span class="math-container">$M_n(\mathbb{R})$</span> to <span class="math-container">$\mathbb{R}$</span> continuous? Can anyone explain precisely and rigorously? So far, I know the explanation which comes from the facts that</p>
<ul>
<li>polynomials are continuous,</li>
<li>sum and product of continuous functions are continuous.</li>
</ul>
<p>Also I have the confusion regarding the metric on <span class="math-container">$M_n(\mathbb{R})$</span>.</p>
| <p><span class="math-container">$M_n(\mathbb R)$</span> is just <span class="math-container">$\mathbb R^{n^2}$</span> with the Euclidean metric.</p>
<p>The determinant is continuous because it is a polynomial in the coordinates:
<span class="math-container">$$
\det(X) = \det ([x_{i,j}])= \sum_\sigma \text{sgn}(\sigma) \prod_{i=1}^{n} x_{\sigma(i),i},
$$</span>
where the sum runs over all permutations <span class="math-container">$\sigma:\{1,\dots,n\}\to\{1,\dots,n\}$</span>.</p>
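<p>As a quick numerical illustration (a minimal Python/NumPy sketch; the test matrix is an arbitrary choice): shrinking a random perturbation of the entries shrinks the change in the determinant, exactly as continuity demands.</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# as the perturbation size eps shrinks, det(A + E) approaches det(A)
for eps in (1e-1, 1e-3, 1e-5):
    E = eps * rng.standard_normal((4, 4))
    print(eps, abs(np.linalg.det(A + E) - np.linalg.det(A)))
</code></pre>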
| <p>Recall that the determinant can be computed by a sum of determinants of minors, that is "sub"-matrices of smaller dimension.</p>
<p>Now we can prove by induction that $\det$ is continuous:</p>
<ul>
<li>For $n=1$, $A\in M_1(\mathbb R)$ is simply a scalar we have that $\det A=A$, and surely the identity function is continuous.</li>
<li><p>Suppose that $\det$ is continuous on $M_n(\mathbb R)$, and let $A\in M_{n+1}(\mathbb R)$. We know that $\det A$ can be calculated as an alternating sum along the first row, where each term is an entry of $A$ times the $\det$ of the appropriate minor. </p>

<p>So $\det A$ is written in terms of sums and scalar multiples of determinants of smaller dimension. By the induction hypothesis these are continuous, and therefore $\det$ is continuous on $(n+1)\times(n+1)$ matrices.</p></li>
</ul>
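<p>This induction is precisely the recursion behind Laplace expansion along the first row. For illustration only (it is not an efficient algorithm), here is a minimal Python sketch of that recursion:</p>

<pre><code># det of an n x n matrix as an alternating sum of dets of (n-1) x (n-1) minors,
# i.e. sums and scalar multiples of functions that are continuous by induction
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

print(det([[1, 2], [3, 4]]))  # -2
</code></pre>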
|
probability | <p>Is there an exact or good approximate expression for the expectation, variance or other moments of the maximum of $n$ independent, identically distributed gaussian random variables where $n$ is large?</p>
<p>If $F$ is the cumulative distribution function for a standard gaussian and $f$ is the probability density function, then the CDF for the maximum is (from the study of order statistics) given by</p>
<p>$$F_{\rm max}(x) = F(x)^n$$</p>
<p>and the PDF is</p>
<p>$$f_{\rm max}(x) = n F(x)^{n-1} f(x)$$</p>
<p>so it's certainly possible to write down integrals which evaluate to the expectation and other moments, but it's not pretty. My intuition tells me that the expectation of the maximum would be proportional to $\log n$, although I don't see how to go about proving this.</p>
| <p>How precise an answer are you looking for? Giving (upper) bounds on the maximum of i.i.d Gaussians is easier than precisely characterizing its moments. Here is one way to go about this (another would be to combine a tail bound on Gaussian RVs with a union bound).</p>
<p>Let $X_i$ for $i = 1,\ldots,n$ be i.i.d $\mathcal{N}(0,\sigma^2)$.</p>
<p>Defining $$ Z = \max_{i} X_i, $$</p>
<p>By Jensen's inequality,</p>
<p>$$\exp \{t\mathbb{E}[ Z] \} \leq \mathbb{E} \exp \{tZ\} = \mathbb{E} \max_i \exp \{tX_i\} \leq \sum_{i = 1}^n \mathbb{E} [\exp \{tX_i\}] = n \exp \{t^2 \sigma^2/2 \}$$</p>
<p>where the last equality follows from the definition of the Gaussian moment generating function (a bound for sub-Gaussian random variables also follows by this same argument).</p>
<p>Rewriting this,</p>
<p>$$\mathbb{E}[Z] \leq \frac{\log n}{t} + \frac{t \sigma^2}{2} $$</p>
<p>Now, set $t = \frac{\sqrt{2 \log n}}{\sigma}$ to get</p>
<p>$$\mathbb{E}[Z] \leq \sigma \sqrt{ 2 \log n} $$ </p>
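<p>As a rough numerical check, here is a small Monte Carlo sketch in Python (the sample sizes are arbitrary) comparing the empirical mean of the maximum with the bound $\sigma \sqrt{2 \log n}$:</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
sigma, reps = 1.0, 10_000
for n in (10, 100, 1000):
    mean_max = (sigma * rng.standard_normal((reps, n))).max(axis=1).mean()
    bound = sigma * np.sqrt(2 * np.log(n))      # the bound derived above
    print(n, round(mean_max, 3), "<=", round(bound, 3))
</code></pre>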
| <p>The <span class="math-container">$\max$</span>-central limit theorem (<a href="http://en.wikipedia.org/wiki/Fisher%E2%80%93Tippett%E2%80%93Gnedenko_theorem" rel="noreferrer">Fisher-Tippett-Gnedenko theorem</a>) can be used to provide a decent approximation when <span class="math-container">$n$</span> is large. See <a href="http://reference.wolfram.com/mathematica/ref/ExtremeValueDistribution.html#6764486" rel="noreferrer">this example</a> at the reference page for the extreme value distribution in <em>Mathematica</em>.</p>
<p>The <span class="math-container">$\max$</span>-central limit theorem states that <span class="math-container">$F_\max(x) = \left(\Phi(x)\right)^n \approx F_{\text{EV}}\left(\frac{x-\mu_n}{\sigma_n}\right)$</span>, where <span class="math-container">$F_{EV} = \exp(-\exp(-x))$</span> is the cumulative distribution function for the extreme value distribution, and
<span class="math-container">$$
\mu_n = \Phi^{-1}\left(1-\frac{1}{n} \right) \qquad \qquad
\sigma_n = \Phi^{-1}\left(1-\frac{1}{n} \cdot \mathrm{e}^{-1}\right)- \Phi^{-1}\left(1-\frac{1}{n} \right)
$$</span>
Here <span class="math-container">$\Phi^{-1}(q)$</span> denotes the inverse cdf of the standard normal distribution.</p>
<p>The mean of the maximum of the size <span class="math-container">$n$</span> normal sample, for large <span class="math-container">$n$</span>, is well approximated by
<span class="math-container">$$ \begin{eqnarray}
m_n &=& \sqrt{2} \left((\gamma -1) \operatorname{erfc}^{-1}\left(2-\frac{2}{n}\right)-\gamma \operatorname{erfc}^{-1}\left(2-\frac{2}{e n}\right)\right) \\ &=& \sqrt{\log \left(\frac{n^2}{2 \pi \log \left(\frac{n^2}{2\pi} \right)}\right)} \cdot \left(1 + \frac{\gamma}{\log (n)} + \mathcal{o} \left(\frac{1}{\log (n)} \right) \right)
\end{eqnarray}$$</span>
where <span class="math-container">$\gamma$</span> is the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant" rel="noreferrer">Euler-Mascheroni constant</a> and <span class="math-container">$\operatorname{erfc}^{-1}$</span> is the inverse of the complementary error function; this expression is just <span class="math-container">$\mu_n + \gamma\,\sigma_n$</span>, rewritten via <span class="math-container">$\Phi^{-1}(q) = -\sqrt{2}\,\operatorname{erfc}^{-1}(2q)$</span>.</p>
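<p>For a quick numerical check of these formulas, here is a Python sketch (assuming SciPy is available for the inverse normal cdf and for <span class="math-container">$\operatorname{erfc}^{-1}$</span>; the sample sizes are arbitrary):</p>

<pre><code>import numpy as np
from scipy.special import erfcinv
from scipy.stats import norm

n, gamma = 1000, np.euler_gamma
mu_n = norm.ppf(1 - 1/n)
sigma_n = norm.ppf(1 - 1/(np.e * n)) - mu_n
m_n = np.sqrt(2) * ((gamma - 1) * erfcinv(2 - 2/n)
                    - gamma * erfcinv(2 - 2/(np.e * n)))

rng = np.random.default_rng(0)
sim = rng.standard_normal((10_000, n)).max(axis=1).mean()
print(mu_n + gamma * sigma_n, m_n, sim)  # all close to 3.24 for n = 1000
</code></pre>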
|
probability | <p>I might just be slow (or too drunk), but I'm seeing a conflict in the equations for adding two normals and scaling a normal. According to <a href="http://www.cs.toronto.edu/~yuvalf/CLT.pdf" rel="noreferrer">page 2 of this</a>, if $X_1 \sim N(\mu_1,\sigma_1^2)$ and $X_2 \sim N(\mu_2,\sigma_2^2)$, then $X_1 + X_2 \sim N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$, and for some $c \in \mathbb{R}$, $cX_1 = N(c\mu_1, c^2\sigma_1^2)$.</p>
<p>Then for $X \sim N(\mu,\sigma^2)$, we have $X + X = N(\mu + \mu,\sigma^2 + \sigma^2) = N(2\mu,2\sigma^2)$, but also $X + X = 2X = N(2\mu,2^2\sigma^2) = N(2\mu,4\sigma^2)$ ? Ie, the variances disagree.</p>
<p>edit: Oh, am I mistaken in saying that $2X = X + X$? Is the former "rolling" $X$ just once and doubling it while the latter "rolls" twice and adds them?</p>
| <p>On the first page of the cited document, $X_1$ and $X_2$ were previously defined to be two (distinct) <em>independent</em>, identically distributed random variables. For your purposes, the "identically distributed" part is not important, but the "independent" part <em>is</em>.</p>
<p>On the second page, where $X_1$ and $X_2$ are considered to be normal variables, there's still the assumption that they're independent. Possibly this could have been stated more clearly, but in context this assumption makes sense.</p>
<p>When you consider $2X = X + X$, you are not dealing with two independent variables.
The two "copies" of $X$ are correlated (in fact, as correlated as any two variables can be).
The formula for the sum of two independent normal variables therefore does not apply.</p>
| <p>Expectation is always linear. So for any two variables <span class="math-container">$X,Y$</span>, we have <span class="math-container">$E[X+Y]=E[X]+E[Y]$</span>. And <span class="math-container">$E[\underbrace{X+X+\ldots+X}_{k \text{ times}}]=E[kX]=kE[X]$</span></p>
<p>Variance is additive when the variables are <em>independent</em>: in this case, <span class="math-container">$V[X+Y]=V[X]+V[Y]$</span>. However when the variables are the same, i.e. when we scale, we have <span class="math-container">$V[kX]=k^2 V[X]$</span>.</p>
<p>These are true no matter what the distribution. Determining the distribution of the sum of random variables is, in general, <a href="https://stats.stackexchange.com/q/331973/89612">difficult</a>. However when <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent normal variables, then the sum of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is also normally distributed (and the means and variances add as above).</p>
<p>In my opinion your question is a good one and it is very easy to become confused about what is adding and what is scaling. A very important example which involves adding independent distributions <em>and</em> scaling is when you compute the variance of the mean <span class="math-container">$\bar{X}$</span> of independent identically distributed (iid) samples from the same distribution.</p>
<p>To keep things simple we have <span class="math-container">$n$</span> samples and let's say each sample is normally distributed with variance <span class="math-container">$\sigma^2$</span>. So each sample is drawn from <span class="math-container">$X_i \sim N(\mu,\sigma^2)$</span>. Then <em>adding</em> independent random variables the variance of the sum is <span class="math-container">$V(S) = V\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n V\left(X_i\right) = n\sigma^2$</span>. But the mean <span class="math-container">$\bar{X}=S/n$</span> and so by <em>scaling</em> <span class="math-container">$V(\bar{X}) = V(S/n) = n\sigma^2 / n^2 = \sigma^2/n$</span>. It is this combination of adding and scaling which leads to the famous relationship that standard deviation of the sum increases according to the square root of <span class="math-container">$n$</span>, and of the mean as <span class="math-container">$1/\sqrt{n}$</span>.</p>
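<p>A tiny simulation (a Python sketch with arbitrary parameters) makes the adding-versus-scaling distinction concrete:</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, 1_000_000)
Y = rng.normal(0, 1, 1_000_000)  # an independent draw from the same distribution

print(np.var(X + Y))  # about 2: variances add for independent variables
print(np.var(2 * X))  # about 4: scaling, i.e. X + X reuses the same draw
</code></pre>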
|
probability | <p>I got a problem of calculating $E[e^X]$, where X follows a normal distribution $N(\mu, \sigma^2)$ of mean $\mu$ and standard deviation $\sigma$.</p>
<p>I still have no clue how to solve it. Assume $Y=e^X$. Trying to calculate this value directly by substituting $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{\frac{-(x-\mu)^2}{2\sigma^2}}$ and then finding $g(y)$, the PDF of $Y$, is a nightmare (and I don't know how to calculate this integral, to be honest).</p>
<p>Another way is to find the inverse function. Assume $Y=\phi(X)$, if $\phi$ is differentiable, monotonic, and have inverse function $X=\psi(Y)$ then $g(y)$ (PDF of random variable $Y$) is as follows: $g(y)=f[\psi(y)]|\psi'(y)|$.</p>
<p>I think we don't need to find PDF of $Y$ explicitly to find $E[Y]$. This seems to be a classic problem. Anyone can help?</p>
| <p><span class="math-container">$\newcommand{\E}{\operatorname{E}}$</span></p>
<p>Look at this: <a href="http://en.wikipedia.org/wiki/Law_of_the_unconscious_statistician" rel="noreferrer">Law of the unconscious statistician</a>.</p>
<p>If <span class="math-container">$f$</span> is the density function of the distribution of a random variable <span class="math-container">$X$</span>, then
<span class="math-container">$$
\E(g(X)) = \int_{-\infty}^\infty g(x)f(x)\,dx,
$$</span>
and there's no need to find the probability distribution, including the density, of the random variable <span class="math-container">$g(X)$</span>.</p>
<p>Now let <span class="math-container">$X=\mu+\sigma Z$</span> where <span class="math-container">$Z$</span> is a <em>standard</em> normal, i.e. <span class="math-container">$\E(Z)=0$</span> and <span class="math-container">$\operatorname{var}(Z)=1$</span>.</p>
<p>Then you get
<span class="math-container">$$
\begin{align}
\E(e^X) & =\E(e^{\mu+\sigma Z}) = \int_{-\infty}^\infty e^{\mu+\sigma z} \varphi(z)\,dz \\[10pt]
& = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{\mu+\sigma z} e^{-z^2/2}\,dz = \frac{1}{\sqrt{2\pi}} e^\mu \int_{-\infty}^\infty e^{\sigma z} e^{-z^2/2}\,dz.
\end{align}
$$</span></p>
<p>We have <span class="math-container">$\sigma z-\dfrac{z^2}{2}$</span> so of course we complete the square:
<span class="math-container">$$
\frac 1 2 (z^2 - 2\sigma z) = \frac 1 2 ( z^2 - 2\sigma z + \sigma^2) - \frac 1 2 \sigma^2 = \frac 1 2 (z-\sigma)^2 - \frac 1 2 \sigma^2.
$$</span>
Then the integral is
<span class="math-container">$$
\frac{1}{\sqrt{2\pi}} e^{\mu+ \sigma^2/2} \int_{-\infty}^\infty e^{-(z-\sigma)^2/2}\,dz
$$</span>
This whole thing is
<span class="math-container">$$
e^{\mu + \sigma^2/2}.
$$</span>
In other words, the integral with <span class="math-container">$z-\sigma$</span> is the same as that with just <span class="math-container">$z$</span> in that place, because the function is merely moved over by a distance <span class="math-container">$\sigma$</span>. If you like, you can say <span class="math-container">$w=z-\sigma$</span> and <span class="math-container">$dw=dz$</span>, and as <span class="math-container">$z$</span> goes from <span class="math-container">$-\infty$</span> to <span class="math-container">$+\infty$</span>, so does <span class="math-container">$w$</span>, so you get the <em>same</em> integral after this substitution.</p>
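<p>If you want to convince yourself numerically, here is a minimal Python sketch (the parameter values are arbitrary) comparing a Monte Carlo estimate with the closed form:</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.2
X = rng.normal(mu, sigma, 2_000_000)

print(np.exp(X).mean())           # Monte Carlo estimate of E(e^X)
print(np.exp(mu + sigma**2 / 2))  # closed form, about 3.387
</code></pre>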
| <p>Let $X$ be an $\mathbb{R}$-valued random variable with the probability density function $p(x)$, and $f(x)$ be a nice function. Then
$$\mathbb{E}f(X) = \int_{-\infty}^{\infty} f(x) p(x) \; dx.$$
In this case, we have
$$ p(x) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$
hence
$$ \mathbb{E}e^X = \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^{\infty} e^x e^{-\frac{(x-\mu)^2}{2\sigma^2}} \; dx. $$
Now the rest is clear.</p>
|
probability | <p>Let us assume that a number is selected at random from $1, 2, 3$.
We define </p>
<p>$$A = \{1, 2\},\quad B = \{2, 3\},\quad C = \{1, 3\}$$</p>
<p>Then are $A$, $B$ and $C$ <strong>mutually independent</strong> or <strong>pairwise independent</strong> or both?</p>
<p>I am confused between <strong>mutually</strong> vs <strong>pairwise</strong> independent.</p>
| <p><strong>Mutual independence</strong>: Every event is independent of any intersection of the other events.</p>
<p><strong>Pairwise independence</strong>: Any two events are independent.</p>
<p>$A, B, C$ are mutually independent if $$P(A\cap B\cap C)=P(A)P(B)P(C)$$ $$P(A\cap B)=P(A)P(B)$$ $$P(A\cap C)=P(A)P(C)$$ $$P(B\cap C)=P(B)P(C)$$</p>
<p>On the other hand, $A, B, C$ are pairwise independent if $$P(A\cap B)=P(A)P(B)$$ $$P(A\cap C)=P(A)P(C)$$ $$P(B\cap C)=P(B)P(C)$$</p>
<p>I'm sure you can solve your problem now.</p>
| <p>Pairwise Independent Conditions:</p>
<ol>
<li>P(AB) = P(A)P(B)</li>
<li>P(BC) = P(B)P(C)</li>
<li>P(AC) = P(A)P(C)</li>
</ol>
<p>Mutual Independent Conditions:</p>
<ol>
<li>P(AB) = P(A)P(B)</li>
<li>P(BC) = P(B)P(C)</li>
<li>P(AC) = P(A)P(C)</li>
<li>P(ABC) = P(A)P(B)P(C)</li>
</ol>
<p>Let us take an example to understand the situation clearly.</p>
<p>A lot contains 50 defective and 50 non-defective pens.
Two pens are drawn at random, one at a time, with
replacement. The events A, B, C are defined as :</p>
<ul>
<li>A = ( the first pen is defective)</li>
<li>B = (the second pen is non-defective)</li>
<li>C = (the two pens are both defective or both non-defective).</li>
</ul>
<p>Determine whether:</p>
<ul>
<li>(i) A, B, C are pairwise independent.</li>
<li>(ii) A, B, C are independent. [IIT 1992]</li>
</ul>
<p>Solution :</p>
<ul>
<li>D := Defective , N := Not Defective</li>
<li>P(A) = P{the first pen is defective} = P(<span class="math-container">$\{D_{1}D_{2},D_{1}N_{2}\}$</span>) <span class="math-container">$= (\frac{1}{2})^2 + (\frac{1}{2})^2 = \frac{1}{2}$</span></li>
<li>P(B) = P{the second pen is non-defective} = P(<span class="math-container">$D_{1}N_{2},N_{1}N_{2}$</span>) = <span class="math-container">$\frac{1}{2}$</span></li>
<li>P(C) = P{the two pens are both defective OR not defective} = P(<span class="math-container">$D_{1}D_{2},N_{1}N_{2}$</span>) = <span class="math-container">$\frac{1}{2}$</span></li>
</ul>
<p>Then P(<span class="math-container">$A\bigcap B$</span>) = P(AB) = P(<span class="math-container">$D_{1}N_{2}$</span>) = <span class="math-container">$\frac{1}{2} \cdot \frac{1}{2}$</span>=<span class="math-container">$\frac{1}{4}$</span>, P(AC)=P(<span class="math-container">$D_{1}D_{2}$</span>) = <span class="math-container">$\frac{1}{4}$</span> , P(BC)= P(<span class="math-container">$N_{1}N_{2}$</span>) = <span class="math-container">$\frac{1}{4}$</span>.</p>
<p>But <span class="math-container">$A\bigcap B\bigcap C = \varnothing$</span> (no outcome lies in all three events), so <span class="math-container">$P(ABC) = 0$</span>.</p>
<p>Therefore, <span class="math-container">$P(AB) = P(BC) = P(AC) = \frac{1}{4}$</span>. But <span class="math-container">$P(ABC) = 0 \neq P(A) \cdot P(B) \cdot P(C)$</span>.</p>
<p>Hence, A, B, C are pairwise independent BUT Not Mutually Independent.</p>
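<p>Since the four outcomes of the two draws are equally likely, the whole example can be checked by brute-force enumeration; here is a short Python sketch:</p>

<pre><code>from itertools import product

outcomes = list(product("DN", repeat=2))  # D = defective, N = non-defective
p = lambda event: sum(map(event, outcomes)) / len(outcomes)

A = lambda o: o[0] == "D"   # first pen defective
B = lambda o: o[1] == "N"   # second pen non-defective
C = lambda o: o[0] == o[1]  # both pens the same

print(p(lambda o: A(o) and B(o)), p(A) * p(B))  # 0.25 0.25: pairwise holds
print(p(lambda o: A(o) and B(o) and C(o)),      # 0.0 ...
      p(A) * p(B) * p(C))                       # ... vs 0.125: mutual fails
</code></pre>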
|
differentiation | <p>I thought I had a good idea on why/how implicit differentiation works until I read the following passage in my Calculus book:</p>
<blockquote>
<p>Furthermore, implicit differentiation works just as easily for equations such as
$$x^5+5x^4y^2+3xy^3+y^5=1$$
which are actually <i>impossible</i> to solve for $y$ in terms of $x$</p>
</blockquote>
<p>My problem with it is the following:</p>
<p>The way we go about differentiating, for instance, $xy=1$ is by differentiating the whole equation through with respect to $x$ and treating $y$ as a function of $x$. But (and at least that's how I see it) we can only treat $y$ as $f(x)$ because the equation <i>determines</i> $y$ as a function of $x$ in a relation that <i>can</i> ben written expliclity (in this case, $y=\frac{1}{x}$). If we have an equation such as the quoted one, in which we just can't solve for $y$, doesn't that mean that $y$ is <i>not</i> a function of $x$? In such a case, wouldn't treating it as such be an invalid move?</p>
<p>I hope I have made myself understood. Any clarification will be appreciated. Thanks</p>
| <p>At a basic level, I think this question is really about the difference between saying that something <em>exists</em>, on the one hand, and being able to <em>write a formula for it</em> on the other. It's important to distinguish between three different (but closely related) ideas:</p>
<ol>
<li>It may be that the relationship between $x$ and $y$ does not define $y$ as a function of $x$ because there is more than one $y$-value associated to a given $x$-value. To take a simple example, the equation of a unit circle ($x^2+y^2=1$) does not define $y$ as a function of $x$.</li>
<li>However, even if $y$ is not <em>globally</em> a function of $x$, it is nevertheless possible that <em>locally</em> (i.e. in the vicinity of some point) there may be a function of $x$ that "matches" the graph of the relationship, in a precise sense. For example, the point $(0.6, -0.8)$ lies on the bottom half of the unit circle, and the function $f(x) = -\sqrt{1-x^2}$ is a local solution for $y$ in terms of $x$ that includes that point. The <em><a href="https://en.wikipedia.org/wiki/Implicit_function_theorem" rel="noreferrer">Implicit Function Theorem</a></em> provides conditions under which such a function exists.</li>
<li>On the other hand, even when such a local function <em>exists</em>, it may be impossible to write down an <em>explicit formula</em> for it. That, I think, is what your textbook means by "impossible to solve". It's not that the function doesn't exist, but rather that there is no way to write down a formula for the function.</li>
</ol>
<p>To elaborate on this last point, consider the implicitly defined relation
$$ y^3 + 2^y = \cos(2\pi x^2) + x $$
This relationship implicitly defines $y$ as a function of $x$: choose any specific value of $x$, say $x=2$. Then the right-hand side of the equation is $\cos(2\pi\cdot4) + 2 = 3$. The equation then asks us to find a value of $y$ such that $y^3 + 2^y = 3$. Such a $y$ is guaranteed to exist, and is in fact unique, as you can convince yourself of by looking at the graph of the function $h(t) = t^3 + 2^t$ (it's strictly increasing because its derivative is always positive, and its range is $(-\infty ,\infty)$ . And there's nothing special about the choice $x=2$ in this example; choose <em>any</em> value of $x$, and there is a unique $y$ value associated to that value of $x$ by the relation $ y^3 + 2^y = \cos(2\pi x^2) + x $. In fact you can see the graph of this implicitly-defined function below.
<a href="https://i.sstatic.net/VzVgi.png" rel="noreferrer"><img src="https://i.sstatic.net/VzVgi.png" alt="enter image description here"></a></p>
<p>But go ahead, try to find a <em>formula</em> for explicitly calculating $y$ in terms of $x$. I'll wait.</p>
<p>(Okay, this is the point where someone jumps into the comments and says "Well, actually..." and goes on to explain that you <em>can</em> explicitly calculate $y$ in terms of $x$ by introducing a Lambert function or something. Let me try to pre-empt that by arguing that such a"solution" just sweeps the implicitness under the rug. In any case it misses the point of the example, which is that a relationship may implicitly define a function even if you lack an explicit formula for computing one variable in terms of the other.)</p>
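<p>Even without a formula, nothing stops us from <em>computing</em> the implicitly defined $y$ for any given $x$; here is a minimal Python sketch using bisection (the bracketing interval $[-10, 10]$ is an arbitrary but safe choice, since the $y^3$ term dominates):</p>

<pre><code>from math import cos, pi

def y_of_x(x, lo=-10.0, hi=10.0, tol=1e-12):
    # solve y^3 + 2^y = cos(2*pi*x^2) + x; the left side is strictly
    # increasing in y, so the root is unique and bisection finds it
    c = cos(2 * pi * x ** 2) + x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** 3 + 2 ** mid < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(y_of_x(2.0))  # about 1.0, since 1^3 + 2^1 = 3 = cos(8*pi) + 2
</code></pre>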
<p>On the other hand, consider this closely-related example:
$$ y^2 + 2^y = \cos(2\pi x^2) + x $$
(The only change is the exponent on the $y$ on the left-hand side.) This relationship most definitely does <em>not</em> define $y$ as a function of $x$, as can be seen in the graph below:</p>
<p><a href="https://i.sstatic.net/YiT37.png" rel="noreferrer"><img src="https://i.sstatic.net/YiT37.png" alt="enter image description here"></a></p>
<p>We can see that for many values of $x$ there are two different $y$ values that both satisfy $ y^2 + 2^y = \cos(2\pi x^2) + x $, so this is not a function. Nevertheless if we choose a point on the graph — $(2,1)$ is a convenient one — and zoom in on a neighborhood of that point, it <em>locally</em> looks like a function:</p>
<p><a href="https://i.sstatic.net/ZNxoD.png" rel="noreferrer"><img src="https://i.sstatic.net/ZNxoD.png" alt="enter image description here"></a></p>
<p>The power of implicit differentiation as a technique is precisely that it allows us to find an <em>explicit</em> formula for the slope of the tangent line at $(x,y)$, even when we can't find an explicit formula for $y$ in terms of $x$, <em>and even when the "function" isn't really a function at all</em>. (The trade-off is that the "explicit" formula for the slope is expressed in terms of both variables, so there is still some lurking implicitness in the problem.)</p>
| <p>That's an excellent question. </p>
<p>Part of the <em>conclusion</em> of the Implicit Differentiation Theorem (perhaps better known as the Implicit Function Theorem) is that the equation <strong>locally</strong> defines $y$ as a function of $x$.</p>
<p>The theorem says, roughly, this. Suppose that $(x,y)=(a,b)$ lies in the solution set, and suppose furthermore that the partial derivative $\frac{\partial F}{\partial y}$ of the left hand side is nonzero at $(a,b)$. Then there exists an open ball $B \subset \mathbb{R}^2$ of some positive radius $\epsilon>0$ centered on the point $(a,b)$, such that the intersection of the solution set with the ball $B$ is, indeed, the graph of a differentiable function $y=f(x)$, and furthermore its derivative $\frac{dy}{dx}$ can be calculated using the method you learned in calculus.</p>
<p>Now you may ask: how do we find a formula for $y=f(x)$? </p>
<p>There's not really an answer there. In general you cannot find the formula, although if you follow through the proof of the theorem then you can find numerical approximation methods. That's the real power of the Implicit Differentiation Theorem, proving the existence of a differentiable function without even being able to write down a formula for it.</p>
|
combinatorics | <p>A large portion of combinatorics cases have probabilities of $\frac1e$.</p>
<p>The secretary problem is one such example. Excluding trivial cases (say, a variable with a uniform distribution over $(0,\pi)$: what is the probability that its value is below $1$?), I can't recall any example where $\frac1\pi$ would be the solution to a probability problem.</p>

<p>Are there "meaningful" probability theory cases with probability approaching $\frac1\pi$?</p>
| <p><strong>Yes</strong> there is!</p>
<p>Here is an example, called Buffon's needle.</p>
<p>Launch a match of length $1$ on a floor with parallel lines spaced $2$ units apart; then the probability that the match crosses a line is </p>
<p>$$\frac 1\pi.$$</p>
<p><em>You can have all the details of the proof <a href="https://en.wikipedia.org/wiki/Buffon%27s_needle">here</a> if you like.</em></p>
<p>$\qquad\qquad\qquad\qquad\quad $<a href="https://i.sstatic.net/dRcEr.jpg"><img src="https://i.sstatic.net/dRcEr.jpg" alt=""></a></p>
<hr>
<p>More generally, if your match (or needle, it's all the same) has length $a$, and the lines are spaced $\ell$ units apart, then the probability that the match crosses a line is </p>
<p>$$\frac {2a}{\pi \ell}.$$</p>
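<p>The result is easy to confirm by simulation; here is a short Python sketch (by symmetry it suffices to track the distance from the needle's midpoint to the nearest line and the acute angle it makes with the lines):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
n, a, ell = 1_000_000, 1.0, 2.0        # needle of length 1, lines 2 units apart
d = rng.uniform(0, ell / 2, n)         # midpoint distance to the nearest line
theta = rng.uniform(0, np.pi / 2, n)   # acute angle with the lines
print((d <= (a / 2) * np.sin(theta)).mean(), 1 / np.pi)  # both about 0.3183
</code></pre>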
| <p>This doesn't address the question as asked, but I think it is an interesting example in the spirit of what you asked:</p>
<p>Pick $N$ to be an integer. Now, calculate $p_N$ to be the probability that two random numbers $1 \leq m,n \leq N$ are relatively prime.</p>
<p>Then,
$$\lim_{N \to \infty} p_N =\frac{6}{\pi^2}$$</p>
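<p>This, too, is easy to check numerically; a quick Python sketch (the cutoff $N$ and the number of trials are arbitrary choices):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
N, trials = 10 ** 6, 1_000_000
m = rng.integers(1, N + 1, trials)
n = rng.integers(1, N + 1, trials)
print((np.gcd(m, n) == 1).mean(), 6 / np.pi ** 2)  # both about 0.6079
</code></pre>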
|
geometry | <p><strong>Edit (June. 2015)</strong> This question has been moved to MathOverflow, where a recent write-up finds a similar approximation as leonbloy's post below; see <a href="https://mathoverflow.net/questions/142983/probability-that-a-stick-randomly-broken-in-five-places-can-form-a-tetrahedron"><strong>here</strong></a>.</p>
<hr>
<p>Randomly break a stick in five places. </p>
<p><strong>Question:</strong> What is the probability that the resulting six pieces can form a tetrahedron?</p>
<p>Clearly satisfying the triangle inequality on each face is a necessary but <em>not</em> sufficient condition; an example is provided below.</p>
<p>Furthermore, another commenter kindly points to a <a href="http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDIQFjAA&url=http://www.ems-ph.org/journals/show_pdf.php?issn=0013-6018&vol=64&iss=4&rank=4&ei=c0BhUfa7OKnk2wXFzIHIAw&usg=AFQjCNG3gcB97ojaqR62PfWT_rjqowOkAg&sig2=vP8K-IBTc8jO1cpY7wM-pA&bvm=bv.44770516,d.b2I" rel="noreferrer">reference</a> that may be of help in resolving this problem. In particular, it relates the question of when six numbers can be edges of a tetrahedron to a certain $5 \times 5$ determinant.</p>
<p>Finally, a third commenter points out that since one such construction is possible, there is an admissible neighborhood around this arrangement, so that the probability is in fact positive.</p>
<p>In any event, this problem is far harder than the classic $2D$ "form a triangle" one. </p>
<p>Several numerical attacks can be found below; I will be grateful if anyone can provide an exact solution.</p>
| <p><s>Not an answer, but it might help to progress further.</s> [The derivation that follows provides a strict -but practically useless- bound. The second part of the answer has some results that might be of interest, but they are merely empirical]</p>
<hr>
<p>Let's consider the (much more restricted) event that the six lengths form a tetrahedron in whatever order. In the <a href="http://www.ems-ph.org/journals/show_pdf.php?issn=0013-6018&vol=64&iss=4&rank=4" rel="noreferrer">linked pdf</a> this set of lengths is called "completely tetrahedral", and a necessary-sufficient condition is given (Theorem 4.2) which is equivalent to the following: $ u \le \sqrt{2}v $, where $u,v$ are the maximum and minimum lengths. This, informally, would correspond to "almost regular" tetrahedra.
Let's then compute the probability that the lengths are completely tetrahedral. Because the points are chosen at random, uniformly, the lengths are probabilistically equivalent to a set of iid exponential variables with arbitrary parameter, conditioned to a constant sum. Because we are only interested in ratios, we can even ignore this conditioning.
Now, the joint probability of the maximum and minimum of a set of $n$ iid variables is given by</p>
<p>$$ f_{u,v}= n(n-1) f(u) f(v) [F(u) -F(v)]^{n-2}, \hskip{1cm} u\ge v$$</p>
<p>In our case: $n=6$, $f(u)=e^{-u}$, and the probability that $u<a \, v$ is a straightforward integral, which gives:</p>
<p>$$P(u<a \, v)= 5 (a -1)\left( \frac{1}{1+5\,a}-\frac{4}{2+4\,a}+\frac{6}{3+3\,a}-\frac{4}{4+2\,a}+\frac{1}{5+1\,a} \right)$$</p>
<p>And $P(u<\sqrt{2} v) \approx 7.46 \times 10^{-5}$. This should be a strict bound on the desired probability, but surely far from tight.</p>
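<p>For reference, here is a direct Python transcription of this bound, evaluated at $a=\sqrt{2}$:</p>

<pre><code>from math import sqrt

def P(a):
    # probability that the maximum is below a times the minimum, as derived above
    return 5 * (a - 1) * (1 / (1 + 5 * a) - 4 / (2 + 4 * a)
                          + 6 / (3 + 3 * a) - 4 / (4 + 2 * a) + 1 / (5 + a))

print(P(sqrt(2)))  # about 7.46e-05
</code></pre>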
<p>[<em>Update</em>: indeed, the bound is practically useless; it corresponds to an irrelevant tail. The probability, as per my simulations, is around $p=0.06528$ ($N=10^9$ tries, $3 \, \sigma \approx 2.3 \times 10^{-5}$), which agrees with other results.]</p>
<hr>
<p>The only empirical result that might be of interest: It's easy to see that, from the $6!$ possible permutations of the lengths, we can restrict ourselves to $30$, from symmetry considerations; now, from my simulations, I've found that it's sufficient to consider 7 permutations, the first two being already enough for more than $90\%$ of the successes;
and the need to consider the seventh one arises only extremely rarely. These permutations are:</p>
<p>$$p_1 = [0 , 1 , 4 , 5 , 3 , 2] \hskip{1 cm} (0.75837)\\
p_2 = [0 , 1 , 4 , 3 , 5 , 2] \hskip{1 cm} (0.15231)\\
p_3 = [0 , 2 , 4 , 1 , 5 , 3] \hskip{1 cm} (0.08165)\\
p_4 = [0 , 1 , 4 , 5 , 2 , 3] \hskip{1 cm} (0.00404)\\
p_5 = [0 , 1 , 4 , 2 , 5 , 3] \hskip{1 cm} (0.00245)\\
p_6 = [0 , 1 , 3 , 5 , 4 , 2] \hskip{1 cm} (0.00116)\\
p_7 = [0 , 1 , 3 , 4 , 5 , 2] \hskip{1 cm} (0.00002)\\
$$</p>
<p>The length indexes correspond to a sorted array (say, ascending), and following the convention of the linked paper: the first three sides have a common vertex, the following three are the corresponding opposite sides (so, for example, in the first permutation, and by far the most favorable one, the longest and shortest sides are opposite). The numbers on the right are the probability that this permutation (when one tries in the above order) is the successful one (given that they form a tetrahedron). I cannot be totally sure if there is some rare case that requires other permutation (very improbable, I'd say), but I'm quite sure (unless I've made some mistake) that the set cannot be further reduced.</p>
| <p>This is not an answer but a long comment in response to another person's request for details.</p>
<p>Following are some details of my simulation. I hope this will be useful for those who are interested in numerics.</p>
<p>On a tetrahedron, I will use 6 variables $a,b,c,A,B,C$ to represent the lengths of 6 edges.
$a,b,c$ correspond to the edges connected to an arbitrary vertex, and $A,B,C$ are the lengths of the corresponding opposite edges. </p>
<p><strong>Step 1</strong>
Pre-generate the set of 120 permutations of 5 symbols and filter away equivalent ones down to 30.
$$\begin{align}
&S\; = \operatorname{Perm}(\{\,1,2,3,4,5\,\})\\
\xrightarrow{\text{filter}} &
S' = \{\, \pi \in S : \pi(1) = \min(\pi(1),\pi(2),\pi(4),\pi(5))\, \}
\end{align}$$</p>
<p>The filtering condition corresponds to the fact that once a pair of opposite edges $(a,A)$
is chosen, there is a 4-fold symmetry in assigning the remaining 2 pairs of opposite
edges. The following 4 assignments of lengths lead to equivalent tetrahedra.</p>
<p>$$(a,b,c,A,B,C) \equiv (a,c,b,A,C,B) \equiv (a,B,C,A,b,c) \equiv (a,C,B,A,c,b)$$</p>
<p><strong>Step 2</strong>
Draw 5 uniform random numbers from $[0,1]$, sort them and turn them into 6 lengths:<br>
$$\begin{align}& X_i = \operatorname{Rand}(0,1), i = 1,\ldots 5\\
\xrightarrow{\text{sort} X_i} & 0 \le X_1 \le \ldots \le X_5 \le 1\\
\xrightarrow{Y_i = X_{i+1}-X_i} &Y_0 = X_1,\,Y_1 = X_2-X_1,\, \ldots,\, Y_5 = 1-X_5
\end{align}$$</p>
<p><strong>Step 3</strong>
Loop through the list of permutations in $S'$; for each permutation $\pi$, assign the 6 lengths to the 6 edges:</p>
<p>$$(Y_0, Y_{\pi(1)}, Y_{\pi(2)}, \ldots, Y_{\pi(5)}) \longrightarrow (a, b, c, A, B, C )$$</p>
<p>Verify whether this assignment generate a valid teterhedron by checking:</p>
<ul>
<li>All faces satisfy the triangle inequality. This can be compactly represented as:
$$\min(a+b+c,a+B+C,A+b+C,A+B+c) > \max(a+A,b+B,c+C)$$</li>
<li>The corresponding Cayley-Menger determinant is positive. Up to a scaling factor, this is:</li>
</ul>
<p>$$\left|\begin{matrix}0 & 1 & 1 & 1 & 1\cr 1 & 0 & {a}^{2} & {b}^{2} & {c}^{2}\cr 1 & {a}^{2} & 0 & {C}^{2} & {B}^{2}\cr 1 & {b}^{2} & {C}^{2} & 0 & {A}^{2}\cr 1 & {c}^{2} & {B}^{2} & {A}^{2} & 0\end{matrix}\right| > 0$$</p>
<p>If this configuration is admissible, record it and go to <strong>Step 2</strong>. If not, try other permutations from $S'$.</p>
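<p>For illustration, here is a minimal Python version of this admissibility test (a direct transcription of the two conditions above; NumPy is used only for the determinant):</p>

<pre><code>import numpy as np

def is_tetrahedron(a, b, c, A, B, C):
    # a, b, c meet at a vertex; A, B, C are the respective opposite edges
    if min(a+b+c, a+B+C, A+b+C, A+B+c) <= max(a+A, b+B, c+C):
        return False
    M = np.array([[0, 1,   1,   1,   1  ],
                  [1, 0,   a*a, b*b, c*c],
                  [1, a*a, 0,   C*C, B*B],
                  [1, b*b, C*C, 0,   A*A],
                  [1, c*c, B*B, A*A, 0  ]], dtype=float)
    return np.linalg.det(M) > 0  # the Cayley-Menger determinant above

print(is_tetrahedron(1, 1, 1, 1, 1, 1))  # regular tetrahedron: True
</code></pre>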
<p><em><strong>Some comments on whether this is useful for an exact answer</strong></em>.</p>
<p>An $N = 10^9$ simulation is definitely not enough. The probability of forming a tetrahedron
is about $p = 0.065$. Such a simulation will give us a number accurate to about $\sqrt{\frac{p(1-p)}{N}} \sim 1.5 \times 10^{-5}$, i.e. 5-digit accuracy.</p>
<p>From what I have heard, we need about 7 digits of accuracy before we have a chance to
feed this number into the existing <a href="http://pi.lacim.uqam.ca/eng/">Plouffe's Inverter</a> and
find whether it looks like a combination of simple mathematical constants. </p>
<p>Until one can speed up the algorithm to run an $N > 10^{13}$ simulation, or gain better control of the error terms, simulation remains useful only for cross-checking purposes.</p>
|
geometry | <p>Having a circle with the centre $(x_c, y_c)$ with the radius $r$ how to know whether a point $(x_p, y_p)$ is inside the circle?</p>
| <p>The distance between $\langle x_c,y_c\rangle$ and $\langle x_p,y_p\rangle$ is given by the Pythagorean theorem as $$d=\sqrt{(x_p-x_c)^2+(y_p-y_c)^2}\;.$$ The point $\langle x_p,y_p\rangle$ is inside the circle if $d<r$, on the circle if $d=r$, and outside the circle if $d>r$. You can save yourself a little work by comparing $d^2$ with $r^2$ instead: the point is inside the circle if $d^2<r^2$, on the circle if $d^2=r^2$, and outside the circle if $d^2>r^2$. Thus, you want to compare the number $(x_p-x_c)^2+(y_p-y_c)^2$ with $r^2$.</p>
| <p>The point is inside the circle if the distance from it to the center is less than <span class="math-container">$r$</span>. Symbolically, this is
<span class="math-container">$$\sqrt{|x_p-x_c|^2+|y_p-y_c|^2}< r.$$</span></p>
|
combinatorics | <p>Let $\varphi(n)$ be Euler's totient function, the number of positive integers less than or equal to $n$ and relatively prime to $n$. </p>
<p>Challenge: Prove</p>
<p>$$\sum_{k=1}^n \left\lfloor \frac{n}{k} \right\rfloor \varphi(k) = \frac{n(n+1)}{2}.$$
</p>
<p>I have two proofs, one of which is partially combinatorial. </p>
<p>I'm posing this problem partly because I think some folks on this site would be interested in working on it and partly because I would like to see a purely combinatorial proof. (But please post any proofs; I would be interested in noncombinatorial ones, too. I've learned a lot on this site by reading alternative proofs of results I already know.)</p>
<p>I'll wait a few days to give others a chance to respond before posting my proofs.</p>
<p>EDIT: The two proofs in full are now given among the answers.</p>
| <p>One approach is to use the formula $\displaystyle \sum_{d \mid k} \varphi(d) = k$ </p>
<p>So we have that $\displaystyle \sum_{k=1}^{n} \sum_{d \mid k} \varphi(d) = n(n+1)/2$</p>
<p>Exchanging the order of summation we see that the $\displaystyle \varphi(d)$ term appears $\displaystyle \left\lfloor \frac{n}{d} \right\rfloor$ times</p>
<p>and thus</p>
<p>$\displaystyle \sum_{d=1}^{n} \left\lfloor \frac{n}{d} \right\rfloor \varphi(d) = n(n+1)/2$</p>
<p>Or in other words, if we have the $n \times n$ matrix $A$ such that</p>
<p>$\displaystyle A[i,j] = \varphi(j)$ if $j \mid i$ and $0$ otherwise.</p>
<p>The sum of elements in row $i$ is $i$.</p>
<p>The sum of elements in column $j$ is $\displaystyle \left\lfloor \frac{n}{j} \right\rfloor \varphi(j)$ and the identity just says the total sum by summing the rows is same as the total sum by summing the columns.</p>
| <p>In case anyone is interested, here are the full versions of my two proofs. (I constructed the combinatorial one from my original partially combinatorial one after I posted the question.)</p>
<p><HR></p>
<p><strong>The non-combinatorial proof</strong> </p>
<p>As Derek Jennings observes, $\lfloor \frac{n+1}{k} \rfloor - \lfloor \frac{n}{k} \rfloor$ is $1$ if $k|(n+1)$ and $0$ otherwise. Thus, if $$f(n) = \sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k),$$
then $$\Delta f(n) = f(n+1) - f(n) = \sum_{k|(n+1)} \phi(k) = n+1,$$
where the last equality follows from the well-known formula Aryabhata cites.</p>
<p>Then
$$\sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k) = f(n) = \sum_{k=0}^{n-1} \Delta f(k) = \sum_{k=0}^{n-1} (k+1) = \frac{n(n+1)}{2}.$$</p>
<p><HR></p>
<p><strong>The combinatorial proof</strong></p>
<p>Both sides count the number of fractions (reducible or irreducible) in the interval (0,1] with denominator $n$ or smaller. </p>
<p>For the right side, the number of ways to pick a numerator and a denominator is the number of ways to choose two numbers with replacement from the set $\{1, 2, \ldots, n\}$. This is known to be
$$\binom{n+2-1}{2} = \frac{n(n+1)}{2}.$$</p>
<p>Now for the left side. The number of irreducible fractions in $(0,1]$ with denominator $k$ is equal to the number of positive integers less than or equal to $k$ and relatively prime to $k$; i.e., $\varphi(k)$. Then, for a given irreducible fraction $\frac{a}{k}$, there are $\left\lfloor \frac{n}{k} \right\rfloor$ total fractions with denominators $n$ or smaller in its equivalence class. (For example, if $n = 20$ and $\frac{a}{k} = \frac{1}{6}$, then the fractions $\frac{1}{6}, \frac{2}{12}$, and $\frac{3}{18}$ are those in its equivalence class.) Thus the sum
$$\sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k)$$
also gives the desired quantity.</p>
|
logic | <p>I want to know the difference between ⊢ and ⊨.</p>
<p><a href="http://en.wikipedia.org/wiki/List_of_logic_symbols">http://en.wikipedia.org/wiki/List_of_logic_symbols</a></p>
<p>⊢ means ”provable”
But ⊨ is used exactly the same:</p>
<pre><code>A → B ⊢ ¬B → ¬A
A → B ⊨ ¬B → ¬A
</code></pre>
<p>Can you present a good example where they are different? Is it like the incompleteness theorem of recursive sets that there are sentences that are true i.e. ⊨ but do not have the property ⊢ i.e. provable?</p>
<p>Thanks for any insight</p>
| <p>$A \models B$ means that $B$ is true in every structure in which $A$ is true. $A\vdash B$ means $B$ can be proved using $A$ as the premises. (In both cases, $A$ is a not necessarily finite set of formulas and $B$ is a formula.)</p>
<p>First-order logic simultaneously enjoys the following properties: There is a system of proof for which</p>
<ul>
<li>If $A\vdash B$ then $A\models B$ (soundness)</li>
<li>If $A\models B$ then $A\vdash B$ (completeness)</li>
<li>There is a proof-checking algorithm (effectiveness). <br>(And it's fortunately quite a fast-running algorithm.)</li>
</ul>
<p>That last point is in stark contrast to this fact: There is no provability-checking algorithm. You can search for a proof of a first-order formula in such a systematic way that you'll find it if it exists, and you'll search forever if it doesn't. But if you've been searching for a million years and it hasn't turned up yet, you don't know whether the search will go on forever or end next week. These are results proved in the 1930s. The non-existence of an algorithm for deciding whether a formula is provable is called Church's theorem, after Alonzo Church.</p>
| <p>I learned that $\models$ stands for semantic entailment, while $\vdash$ stands for provability in a certain proof system.</p>
<p>More concretely: Given a set of formulas $\Gamma$ and a formula $\varphi$ in some logic (e.g., first-order logic), $\Gamma \models \varphi$ means that every model of $\Gamma$ is also a model of $\varphi$. On the other hand, fix a proof system (e.g., sequent calculus) for that logic. Then $\Gamma \vdash \varphi$ means that there is a proof using that proof system of $\varphi$ assuming the formulas in $\Gamma$.</p>
<p>The relationship between $\models$ and $\vdash$ actually describes two important properties of a proof calculus: If $\Gamma \vdash \varphi$ implies $\Gamma \models \varphi$, the proof system is <em>sound</em>, and if $\Gamma \models \varphi$ implies $\Gamma \vdash \varphi$, it is <em>complete</em>. For propositional and first-order logic, there are proof systems that are both sound and complete; this is not the case for some other logics. For example, second-order logic does not admit an effective sound and complete proof system (e.g., the set of rules for a sound and complete proof system would not be decidable).</p>
<p><strong>Edit:</strong> Not every proof calculus for propositional or first-order logic is both complete and sound. For example, consider the system with the following rule: $\vdash \varphi \vee \neg \varphi$ where $\varphi$ is an arbitrary formula. This is undoubtably correct, but it's also incomplete, since one cannot derive trivially true statements like $\varphi \vdash \varphi$. On the other hand, the system $\Gamma \vdash \varphi$ for arbitrary formulae $\varphi$ and formula sets $\Gamma$ is complete, but obviously incorrect.</p>
|
differentiation | <p>I am watching the following video lecture:</p>
<p><a href="https://www.youtube.com/watch?v=G_p4QJrjdOw" rel="noreferrer">https://www.youtube.com/watch?v=G_p4QJrjdOw</a></p>
<p>In there, he talks about calculating gradient of $ x^{T}Ax $ and he does that using the concept of exterior derivative. The proof goes as follows:</p>
<ol>
<li>$ y = x^{T}Ax$</li>
<li>$ dy = dx^{T}Ax + x^{T}Adx = x^{T}(A+A^{T})dx$ (using trace property of matrices)</li>
<li>$ dy = (\nabla y)^{T} dx $ and because the rule is true for all $dx$</li>
<li>$ \nabla y = x^{T}(A+A^{T})$</li>
</ol>
<p>It seems that in step 2, some form of product rule for differentials is applied. I am familiar with product rule for single variable calculus, but I am not understanding how product rule was applied to a multi-variate function expressed in matrix form.</p>
<p>It would be great if somebody could point me to a mathematical theorem that allows Step 2 in the above proof.</p>
<p>Thanks!
Ajay</p>
| <p><span class="math-container">\begin{align*}
dy
& = d(x^{T}Ax)
= d(Ax\cdot x)
= d\left(\sum_{i=1}^{n}(Ax)_{i}x_{i}\right) \\
& = d \left(\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i,j}x_{j}x_{i}\right)
=\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i,j}x_{i}dx_{j}+\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i,j}x_{j}dx_{i} \\
& =\sum_{i=1}^{n}(Ax)dx_{i}+\sum_{i=1}^{n}(Adx)x_{i}
=(dx)^{T}Ax+x^{T}Adx \\
& =(dx)^{T}Ax+(dx)^{T}A^{T}x
=(dx)^{T}(A+A^{T})x.
\end{align*}</span></p>
| <p>Step 2 might be the result of a simple computation. Consider $u(x)=x^TAx$, then
$$
u(x+h)=(x+h)^TA(x+h)=x^TAx+h^TAx+x^TAh+h^TAh,
$$
that is, $u(x+h)=u(x)+x^T(A+A^T)h+r_x(h)$ where $r_x(h)=h^TAh$ (this uses the fact that $h^TAx=x^TA^Th$, which holds because $m=h^TAx$ is a $1\times1$ matrix hence $m^T=m$). </p>
<p>One sees that $r_x(h)=o(\|h\|)$ when $h\to0$.
This proves that the differential of $u$ at $x$ is the linear function $\nabla u(x):\mathbb R^n\to\mathbb R$, $h\mapsto x^T(A+A^T)h$, which can be identified with the unique vector $z$ such that $\nabla u(x)(h)=z^Th$ for every $h$ in $\mathbb R^n$, that is, $z=(A+A^T)x$.</p>
|
matrices | <p>Can someone point me to a paper, or show here, why symmetric matrices have orthogonal eigenvectors? In particular, I'd like to see proof that for a symmetric matrix $A$ there exists decomposition $A = Q\Lambda Q^{-1} = Q\Lambda Q^{T}$ where $\Lambda$ is diagonal.</p>
| <p>For any real matrix $A$ and any vectors $\mathbf{x}$ and $\mathbf{y}$, we have
$$\langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle.$$
Now assume that $A$ is symmetric, and $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda$ and $\mu$. Then
$$\lambda\langle\mathbf{x},\mathbf{y}\rangle = \langle\lambda\mathbf{x},\mathbf{y}\rangle = \langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle = \langle\mathbf{x},A\mathbf{y}\rangle = \langle\mathbf{x},\mu\mathbf{y}\rangle = \mu\langle\mathbf{x},\mathbf{y}\rangle.$$
Therefore, $(\lambda-\mu)\langle\mathbf{x},\mathbf{y}\rangle = 0$. Since $\lambda-\mu\neq 0$, then $\langle\mathbf{x},\mathbf{y}\rangle = 0$, i.e., $\mathbf{x}\perp\mathbf{y}$.</p>
<p>Now find an orthonormal basis for each eigenspace; since the eigenspaces are mutually orthogonal, these vectors together give an orthonormal subset of $\mathbb{R}^n$. Finally, since symmetric matrices are diagonalizable, this set will be a basis (just count dimensions). The result you want now follows.</p>
| <p>Since being symmetric is the property of an operator, not just its associated matrix, let me use <span class="math-container">$\mathcal{A}$</span> for the linear operator whose associated matrix in the standard basis is <span class="math-container">$A$</span>. Arturo and Will proved that a real symmetric operator <span class="math-container">$\mathcal{A}$</span> has real eigenvalues (thus real eigenvectors) and that eigenvectors corresponding to different eigenvalues are orthogonal. <em>One question still stands: how do we know that there are no generalized eigenvectors of rank more than 1, that is, all Jordan blocks are one-dimensional?</em> Indeed, by referencing the theorem that any symmetric matrix is diagonalizable, Arturo effectively threw the baby out with the bathwater: showing that a matrix is diagonalizable is tautologically equivalent to showing that it has a full set of eigenvectors. Assuming this as a given dismisses half of the question: we were asked to show that <span class="math-container">$\Lambda$</span> is diagonal, and not just a generic Jordan form. Here I will untangle this bit of circular logic.</p>
<p>We prove by induction in the number of eigenvectors, namely it turns out that finding an eigenvector (and at least one exists for any matrix) of a symmetric matrix always allows us to generate another eigenvector. So we will run out of dimensions before we run out of eigenvectors, making the matrix diagonalizable.</p>
<p>Suppose <span class="math-container">$\lambda_1$</span> is an eigenvalue of <span class="math-container">$A$</span> and there exists at least one eigenvector <span class="math-container">$\boldsymbol{v}_1$</span> such that <span class="math-container">$A\boldsymbol{v}_1=\lambda_1 \boldsymbol{v}_1$</span>. Choose an orthonormal basis <span class="math-container">$\boldsymbol{e}_i$</span> so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span>. The change of basis is represented by an orthogonal matrix <span class="math-container">$V$</span>. In this new basis the matrix associated with <span class="math-container">$\mathcal{A}$</span> is <span class="math-container">$$A_1=V^TAV.$$</span>
It is easy to check that <span class="math-container">$\left(A_1\right)_{11}=\lambda_1$</span> and all the rest of the numbers <span class="math-container">$\left(A_1\right)_{1i}$</span> and <span class="math-container">$\left(A_1\right)_{i1}$</span> are zero. In other words, <span class="math-container">$A_1$</span> looks like this:
<span class="math-container">$$\left(
\begin{array}{c|ccc}
\lambda_1 & \\
\hline & & \\
& & B_1 & \\
& &
\end{array}
\right)$$</span>
Thus the operator <span class="math-container">$\mathcal{A}$</span> breaks down into a direct sum of two operators: <span class="math-container">$\lambda_1$</span> in the subspace <span class="math-container">$\mathcal{L}\left(\boldsymbol{v}_1\right)$</span> (<span class="math-container">$\mathcal{L}$</span> stands for linear span) and a symmetric operator <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> whose associated <span class="math-container">$(n-1)\times (n-1)$</span> matrix is <span class="math-container">$B_1=\left(A_1\right)_{i > 1,j > 1}$</span>. <span class="math-container">$B_1$</span> is symmetric thus it has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which has to be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span> and the same procedure applies: change the basis again so that <span class="math-container">$\boldsymbol{e}_1=\boldsymbol{v}_1$</span> and <span class="math-container">$\boldsymbol{e}_2=\boldsymbol{v}_2$</span> and consider <span class="math-container">$\mathcal{A}_2=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1,\boldsymbol{v}_2\right)^{\bot}}$</span>, etc. After <span class="math-container">$n$</span> steps we will get a diagonal matrix <span class="math-container">$A_n$</span>.</p>
<p>There is a slightly more elegant proof that does not involve the associated matrices: let <span class="math-container">$\boldsymbol{v}_1$</span> be an eigenvector of <span class="math-container">$\mathcal{A}$</span> and <span class="math-container">$\boldsymbol{v}$</span> be any vector such that <span class="math-container">$\boldsymbol{v}_1\bot \boldsymbol{v}$</span>. Then
<span class="math-container">$$\left(\mathcal{A}\boldsymbol{v},\boldsymbol{v}_1\right)=\left(\boldsymbol{v},\mathcal{A}\boldsymbol{v}_1\right)=\lambda_1\left(\boldsymbol{v},\boldsymbol{v}_1\right)=0.$$</span> This means that the restriction <span class="math-container">$\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> is an operator of rank <span class="math-container">$n-1$</span> which maps <span class="math-container">${\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$</span> into itself. <span class="math-container">$\mathcal{A}_1$</span> is symmetric for obvious reasons and thus has an eigenvector <span class="math-container">$\boldsymbol{v}_2$</span> which will be orthogonal to <span class="math-container">$\boldsymbol{v}_1$</span>.</p>
|
linear-algebra | <p>After looking in my book for a couple of hours, I'm still confused about what it means for a $(n\times n)$-matrix $A$ to have a determinant equal to zero, $\det(A)=0$.</p>
<p>I hope someone can explain this to me in plain English.</p>
| <p>For an $n\times n$ matrix, each of the following is equivalent to the condition of the matrix having determinant $0$:</p>
<ul>
<li><p>The columns of the matrix are dependent vectors in $\mathbb R^n$</p></li>
<li><p>The rows of the matrix are dependent vectors in $\mathbb R^n$</p></li>
<li><p>The matrix is not invertible. </p></li>
<li><p>The volume of the parallelepiped determined by the column vectors of the matrix is $0$.</p></li>
<li><p>The volume of the parallelepiped determined by the row vectors of the matrix is $0$.</p></li>
<li><p>The system of homogenous linear equations represented by the matrix has a non-trivial solution. </p></li>
<li><p>The determinant of the linear transformation determined by the matrix is $0$. </p></li>
<li><p>The free coefficient in the characteristic polynomial of the matrix is $0$. </p></li>
</ul>
<p>Depending on the definition of the determinant you saw, proving each equivalence can be more or less hard.</p>
| <p>For me, this is the most intuitive video on the web that explains determinants, and everyone who wants a deep and visual understanding of this topic should watch it:</p>
<p><a href="https://www.youtube.com/watch?v=Ip3X9LOh2dk" rel="noreferrer">The determinant by 3Blue1Brown</a></p>
<p>The whole playlist is available at this link:</p>
<p><a href="https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="noreferrer">Essence of linear algebra by 3Blue1Brown</a></p>
<p>The crucial part of the series is "Linear transformations and matrices". If you understand that well, everything else will be like a piece of cake. Literally: plain English + visual.</p>
|
combinatorics | <p>Like all combinatoric problems, this one is probably equivalent to another, well-known one, but I haven't managed to find such an equivalent problem (and OEIS didn't help), so I offer this one as being possibly new and possibly interesting.</p>
<p><strong>Problem statement</strong></p>
<p>I have <span class="math-container">$2N$</span> socks in a laundry basket, and I am hanging them on the hot pipes to dry. To make life easier later, I want to hang them in pairs. Since it is dark where the pipes are, I adopt the following algorithm:</p>
<ol>
<li>Take a sock at random from the basket.</li>
<li>If it matches one that is already on my arm, hang them both on the pipes: the one in my hand and the matching one taken from my arm.</li>
<li>If it does not match one that is already on my arm, hang it on my arm with the others.</li>
<li>Do this <span class="math-container">$2N$</span> times.</li>
</ol>
<p>The question is: <em>How long does my arm have to be?</em></p>
<p>Clearly, the minimum length is <span class="math-container">$1$</span>, for instance if the socks come out in the order <span class="math-container">$AABBCC$</span>. Equally clearly, the maximum length is <span class="math-container">$N$</span>, for instance if the socks come out as <span class="math-container">$ABCABC$</span>. But what is the likeliest length? Or the average length? Or what sort of distribution do the required lengths have?</p>
<p>It turns out to be easiest to parameterise the results not by <span class="math-container">$2N$</span>, the number of socks, but by <span class="math-container">$2N-1$</span>, which I will call <span class="math-container">$M$</span>.</p>
<p><strong>The first few results</strong></p>
<p>(Notation: <span class="math-container">$n!!$</span> is the semifactorial, the factorial including only odd numbers; thus <span class="math-container">$7!!=7\times 5\times 3\times 1$</span>).</p>
<p>In each case I provide the frequency for each possible arm length, starting with a length of 1. I use frequencies rather than probabilities because they are easier to type, but you can get the probabilities by dividing by <span class="math-container">$M!!$</span>.</p>
<p><span class="math-container">$$
\begin{array}{c|rrrrr}
M \\
\hline
1 & 1 \\
3 & 1 & 2 \\
5 & 1 & 8 & 6 \\
7 & 1 & 30 & 50 & 24 \\
9 & 1 & 148 & 340 & 336 & 120 \\
\end{array}
$$</span>
It would be good to know (for example) if these frequencies tend to some sort of known distribution as <span class="math-container">$M\to\infty$</span>, just as the binomial coefficients do.</p>
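<p>For what it's worth, the small cases are easy to verify by brute force; here is a minimal Python sketch that reproduces the $M=5$ row of the table:</p>

<pre><code>from itertools import permutations
from collections import Counter

def max_arm(order):
    arm, worst = set(), 0
    for sock in order:
        if sock in arm:
            arm.remove(sock)           # matched: both socks go on the pipes
        else:
            arm.add(sock)              # unmatched: sock goes on the arm
            worst = max(worst, len(arm))
    return worst

N = 3                                  # pairs, so M = 2N - 1 = 5
socks = [i for i in range(N) for _ in range(2)]
tally = Counter(max_arm(p) for p in permutations(socks))
scale = sum(tally.values()) // 15      # rescale counts to M!! = 15
print({k: v // scale for k, v in sorted(tally.items())})  # {1: 1, 2: 8, 3: 6}
</code></pre>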
<p>But, as I said at the beginning, this may just be a re-encoding of a known combinatorial problem, carrying a lot of previously worked out results along with it. I thought, for instance, of the lengths of random walks in <span class="math-container">$N$</span> dimensions with only one step forward and one step back being allowed in each dimension – but that looked too complicated to give any straightforward direction to follow. </p>
<p><strong>Background: methods</strong></p>
<p>In case it is interesting or helpful, I obtained the results above by means of a two-dimensional generating function, in which the coefficient of <span class="math-container">$y^n$</span> identified the arm length needed and the coefficient of <span class="math-container">$x^n$</span> identified how many socks had been retrieved at the [first] time that this length was reached. Calling the resulting generating function <span class="math-container">$A_M(x,y)$</span>, the recurrence I used was:</p>
<p><span class="math-container">$$A_M=MxyA_{M-2}+x^2(x-y)\frac\partial{\partial x}A_{M-2}+(1-x^2)xy$$</span></p>
<p>which is based on sound first principles and matches the results of manual calculation up to <span class="math-container">$M=5$</span>. Having found a polynomial, I substitute <span class="math-container">$x=1$</span> and the numbers in the table above are then the coefficients of the powers of <span class="math-container">$y$</span>.</p>
<p>But, mathematics being close to comedy, all this elaboration may be an unnecessarily complicated way to get to a result too trivial to be found even in OEIS. Is it?</p>
| <p>I did some Monte Carlo experiments with this interesting problem and came to some interesting conclusions. If you have <span class="math-container">$N$</span> pairs of socks, the expected maximum arm length is slightly above <span class="math-container">$N/2$</span>. </p>
<p>First, I ran 1,000,000 experiments with 100 pairs of socks and recorded the maximum arm length reached in each one. For example, a maximum arm length of 54 was reached about 90,000 times, and the whole thing looks like a normal distribution to me. The average value of the maximum arm length was 53.91, confirmed several times in a row.</p>
<p><a href="https://i.sstatic.net/zOW4e.png" rel="noreferrer"><img src="https://i.sstatic.net/zOW4e.png" alt="enter image description here"></a></p>
<p>Nothing changed with 100 pairs of socks and 10,000,000 experiments: the average value remained the same. So it looks like about a million runs are enough to draw a meaningful conclusion.</p>
<p><a href="https://i.sstatic.net/E7Ojm.png" rel="noreferrer"><img src="https://i.sstatic.net/E7Ojm.png" alt="enter image description here"></a></p>
<p>Here is what I got when I doubled the number of socks to 200 pairs. The average maximum arm length was 105.12, still above half the number of pairs. I got the same value in several repeated experiments (<span class="math-container">$\pm0.01$</span>).</p>
<p><a href="https://i.sstatic.net/I8oRd.png" rel="noreferrer"><img src="https://i.sstatic.net/I8oRd.png" alt="enter image description here"></a></p>
<p>Finally, I decided to check expected maximum arm length for different number of sock pairs, from 10 to 250. Each number of pairs was tested 2,000,000 times before the average value was calculated. Here are the results:</p>
<p><span class="math-container">$$
\begin{array}{c|rr}
\textbf{Pairs} & \textbf{Arm Length} & \textbf{Increment} \\
\hline
10 & 6.49 & \\
20 & 12.03 & 5.54 \\
30 & 17.41 & 5.38 \\
40 & 22.71 & 5.30 \\
50 & 27.97 & 5.26 \\
60 & 33.20 & 5.23 \\
70 & 38.40 & 5.20 \\
80 & 43.59 & 5.19 \\
90 & 48.75 & 5.16 \\
100 & 53.91 & 5.16 \\
110 & 59.07 & 5.16 \\
120 & 64.20 & 5.13 \\
130 & 69.33 & 5.13 \\
140 & 74.46 & 5.13 \\
150 & 79.58 & 5.12 \\
160 & 84.69 & 5.11 \\
170 & 89.80 & 5.11 \\
180 & 94.91 & 5.11 \\
190 & 100.02 & 5.11 \\
200 & 105.11 & 5.09 \\
210 & 110.20 & 5.09 \\
220 & 115.29 & 5.09 \\
230 & 120.38 & 5.09 \\
240 & 125.47 & 5.09 \\
250 & 130.56 & 5.09
\end{array}
$$</span></p>
<p><a href="https://i.sstatic.net/lzAgk.png" rel="noreferrer"><img src="https://i.sstatic.net/lzAgk.png" alt="enter image description here"></a></p>
<p>It looks like a straight line, but it's actually an arc, bent slightly downwards (take a look at the increment column).</p>
<p>Finally, here is the Java code that I used for my experiments.</p>
<pre><code>import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Basket {

    public static final int PAIRS = 250;
    public static final int NUM_EXPERIMENTS = 2_000_000;

    int n;
    List<Integer> basket;
    Set<Integer> arm;

    public Basket(int n) {
        // basket size
        this.n = n;
        // socks are here
        this.basket = new ArrayList<Integer>();
        // arm is just a set of different socks
        this.arm = new HashSet<Integer>();
        // add a pair of same socks to the basket
        for(int i = 0; i < n; i++) {
            basket.add(i);
            basket.add(i);
        }
        // shuffle the basket
        Collections.shuffle(basket);
    }

    // returns maximum arm length
    int hangSocks() {
        // maximum arm length
        int maxArmLength = 0;
        // we have to hang all socks
        for(int i = 0; i < 2 * n; i++) {
            // take one sock from the basket
            int sock = basket.get(i);
            // if the sock of the same color is already on your arm...
            if(arm.contains(sock)) {
                // ...remove sock from your arm and put the pair over the hot pipe
                arm.remove(sock);
            }
            else {
                // put the sock on your arm
                arm.add(sock);
                // update maximum arm length
                maxArmLength = Math.max(maxArmLength, arm.size());
            }
        }
        return maxArmLength;
    }

    public static void main(String[] args) {
        // results of our experiments will be stored here
        int[] results = new int[PAIRS + 1];
        // run millions of experiments
        for(int i = 0; i < NUM_EXPERIMENTS; i++) {
            Basket b = new Basket(PAIRS);
            // arm length in a single experiment
            int length = b.hangSocks();
            // remember how often this result appeared
            results[length]++;
        }
        // print results in CSV format so that we can plot them in Excel
        for(int i = 0; i < results.length; i++) {
            System.out.println(i + "," + results[i]);
        }
        // find average arm length
        int sum = 0;
        for(int i = 0; i < results.length; i++) {
            sum += i * results[i];
        }
        double average = (double) sum / (double) NUM_EXPERIMENTS;
        System.out.println(String.format("Average arm length is %.2f", average));
    }
}
</code></pre>
<p><strong>EDIT</strong>: For N=500, the average value of maximum arm length after 2,000,000 tests is 257.19. For N=1000, the result is 509.23.</p>
<p>It seems that for <span class="math-container">$N\to\infty$</span>, the result goes down to <span class="math-container">$N/2$</span>. I don't know how to prove this.</p>
| <p>The expected number of single socks is maximized when you are halfway through. When you have drawn <span class="math-container">$N$</span> socks, the chance that a given pair has exactly one sock on your arm is <span class="math-container">$\frac {2N^2}{2N^2+2N(N-1)}=\frac{N^2}{2N^2-N}=\frac{N}{2N-1}\approx \frac 12+\frac 1{4N}$</span>. To count: if we make the socks distinguishable, then for a pair to have one sock on your arm, sock <span class="math-container">$1$</span> of the pair has <span class="math-container">$2N$</span> positions it can be in, and sock <span class="math-container">$2$</span> then has <span class="math-container">$N$</span> choices: it must be in the other half of the run. For the pair not to have a sock on your arm, sock <span class="math-container">$1$</span> again has <span class="math-container">$2N$</span> choices, but sock <span class="math-container">$2$</span> has only <span class="math-container">$N-1$</span>, as it must be in the same half of the run. Summing over the <span class="math-container">$N$</span> pairs, the expected number on your arm is <span class="math-container">$\frac {N^2}{2N-1}\approx \frac N2+\frac14$</span>. </p>
<p>The expected value being below the mode of Oldboy's distributions says that the distribution is not symmetric around the mode. </p>
<p>Note that this addresses the expected arm length at a given point in the process; the expected maximum over the whole process can be higher, as Empy2 explains. </p>
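<p>As a quick sanity check of the halfway formula <span class="math-container">$\frac{N^2}{2N-1}$</span>, take <span class="math-container">$N=2$</span> (socks <span class="math-container">$AABB$</span>) and look at the arm after two draws: the second sock matches the first with probability <span class="math-container">$\frac13$</span>, leaving arm length <span class="math-container">$0$</span>, and otherwise two single socks are on the arm, so the expected count is <span class="math-container">$\frac23\cdot 2=\frac43$</span>, agreeing with <span class="math-container">$\frac{N^2}{2N-1}=\frac43$</span>.</p>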
|
linear-algebra | <p>Simply as the title says. I've done some research, but still haven't arrived at an answer I am satisfied with. I know the answer varies in different fields, but in general, why would someone study linear algebra?</p>
| <p>Linear algebra is vital in multiple areas of <strong>science</strong> in general. Because linear equations are so easy to solve, practically every area of modern science contains models where equations are approximated by linear equations (using Taylor expansion arguments), and solving the resulting system helps the theory develop. It wouldn't even make sense to begin listing examples; you and I have no idea how heavily people lean on the power of linear algebra to approximate solutions to equations. Since in most cases solving equations is synonymous with solving a practical problem, this can be VERY useful. For this reason alone, linear algebra has a reason to exist, and it is reason enough for any scientist to know linear algebra.</p>
<p>More specifically, in mathematics, linear algebra has, of course, its use in abstract algebra; vector spaces arise in many different areas of algebra such as group theory, ring theory, module theory, representation theory, Galois theory, and much more. Understanding the tools of linear algebra gives one the ability to understand those theories better, and some theorems of linear algebra also require an understanding of those theories; they are linked in many intrinsic ways.</p>
<p>Outside of algebra, a big part of analysis, called <em>functional analysis</em>, is actually the infinite-dimensional version of linear algebra. In infinite dimensions, most of the finite-dimensional theorems break down in a very interesting way; some of our intuition is preserved, but most of it breaks down. Of course, none of the algebraic intuition goes away, but most of the analytic part does; closed balls are never compact, norms are not always equivalent, and the structure of the space changes a lot depending on the norm you use. Hence even for someone studying analysis, understanding linear algebra is vital.</p>
<p>In other words, if you wanna start thinking, learn how to think straight (linear) first. =)</p>
<p>Hope that helps,</p>
| <p>Having studied Engineering, I can tell you that Linear Algebra is fundamental and an extremely powerful tool in <strong>every single</strong> discipline of Engineering.</p>
<p>If you are reading this and considering learning linear algebra then I will first issue you with a warning: Linear algebra is mighty stuff. You should be both manically excited and scared by the awesome power it will give you!!!!!!</p>
<p>In the abstract, it allows you to manipulate and understand whole systems of equations with huge numbers of dimensions/variables on paper without any fuss, and solve them computationally. Here are some of the real-world relationships that are governed by linear equations, and some of linear algebra's applications:</p>
<ul>
<li>Load and displacements in structures</li>
<li>Compatibility in structures</li>
<li>Finite element analysis (has Mechanical, Electrical, and Thermodynamic applications)</li>
<li>Stress and strain in more than 1-D</li>
<li>Mechanical vibrations</li>
<li>Current and voltage in LCR circuits</li>
<li>Small signals in nonlinear circuits = amplifiers</li>
<li>Flow in a network of pipes</li>
<li>Control theory (governs how state space systems evolve over time, discrete and continuous)</li>
<li>Control theory (Optimal controller can be found using simple linear algebra)</li>
<li>Control theory (Model Predictive control is heavily reliant on linear algebra)</li>
<li>Computer vision (Used to calibrate camera, stitch together stereo images)</li>
<li>Machine learning (Support Vector Machine)</li>
<li>Machine learning (Principal component analysis)</li>
<li>Lots of optimization techniques rely on linear algebra as soon as the dimensionality starts to increase.</li>
<li>Fit an arbitrary polynomial to some data.</li>
</ul>
<p>Arbitrarily large problems of the types listed above can be converted into simple matrix equations, and most of those equations are of the form <code>A x = b</code>. Nearly all other problems are of the form <code>A x = λ x</code>. Yep, you read that right! Nearly all engineering problems, no matter how huge, can be reduced to one of two equations!</p>
<p>Linear algebra is so powerful that it also deals with small deviations in lots of non-linear systems! A typical engineering way to deal with a non-linear system might be to linearize it, then use Linear Algebra to understand it!</p>
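<p>As a concrete illustration of the "<code>A x = b</code>" point above, here is a minimal, self-contained sketch of Gaussian elimination with partial pivoting; the 3×3 system in <code>main</code> is just a made-up example:</p>
<pre><code>public class LinearSolve {
    /** Solves A x = b by Gaussian elimination with partial pivoting.
        Both A and b are overwritten; the solution vector x is returned. */
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            // pick the row with the largest pivot, for numerical stability
            int pivot = col;
            for (int row = col + 1; row < n; row++) {
                if (Math.abs(a[row][col]) > Math.abs(a[pivot][col])) pivot = row;
            }
            double[] tmpRow = a[col]; a[col] = a[pivot]; a[pivot] = tmpRow;
            double tmp = b[col]; b[col] = b[pivot]; b[pivot] = tmp;
            // eliminate the entries below the pivot
            for (int row = col + 1; row < n; row++) {
                double factor = a[row][col] / a[col][col];
                b[row] -= factor * b[col];
                for (int k = col; k < n; k++) a[row][k] -= factor * a[col][k];
            }
        }
        // back substitution
        double[] x = new double[n];
        for (int row = n - 1; row >= 0; row--) {
            double sum = b[row];
            for (int k = row + 1; k < n; k++) sum -= a[row][k] * x[k];
            x[row] = sum / a[row][row];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = {{2, 1, -1}, {-3, -1, 2}, {-2, 1, 2}};
        double[] b = {8, -11, -3};
        // expected solution: x = 2, y = 3, z = -1
        for (double v : solve(a, b)) System.out.println(v);
    }
}
</code></pre>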
|
logic | <ol>
<li>Does $(\mathfrak p<\mathfrak q)\land(\mathfrak r<\mathfrak s)$ imply $\mathfrak p^{\mathfrak r}<\mathfrak q^{\mathfrak s}$, where $\mathfrak p,\mathfrak q,\mathfrak r,\mathfrak s$ are cardinal numbers?</li>
<li>Is it possible to prove in $\mathsf{ZFC}$ that there is a counterexample?</li>
</ol>
| <p>The inequality does not hold in general. First, note that $2^\kappa=\kappa^\kappa$ for any infinite $\kappa$. Second, it is consistent (using the technique of forcing) that $2^{\aleph_0}=2^{\aleph_1}=\aleph_2$, even though $\aleph_0<\aleph_1$. This gives us that it is consistent that $\aleph_0<\aleph_1$ and yet $\aleph_0^{\aleph_0}=\aleph_1^{\aleph_1}$.</p>
<p>For a (combinatorially more involved) $\mathsf{ZFC}$ example, $\beth_\omega^{\aleph_0}=\beth_\omega^{\aleph_1}$, so $\beth_{\omega}^{\aleph_0}=(\beth_\omega^+)^{\aleph_1}$. (For a similar computation, see Jech's exercise 5.18, <a href="https://math.stackexchange.com/q/216817/462">here</a> or <a href="https://math.stackexchange.com/q/21036/462">here</a>.)</p>
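<p>For the reader's convenience, here is a sketch of that computation: on the one hand $\beth_\omega^{\aleph_0}\geq\prod_{n<\omega}\beth_{n+1}=\prod_{n<\omega}2^{\beth_n}=2^{\sum_{n<\omega}\beth_n}=2^{\beth_\omega}$, and on the other hand $\beth_\omega^{\aleph_1}\leq(2^{\beth_\omega})^{\aleph_1}=2^{\beth_\omega}$, so $\beth_\omega^{\aleph_0}=\beth_\omega^{\aleph_1}=2^{\beth_\omega}$.</p>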
| <p>Andres Caicedo asked whether or not we can prove in $\sf ZF$ that there exists a counterexample (i.e. a situation where inequality holds in the assumption, but the exponents are equal).</p>
<p>In $\sf ZFC$ we know that there exists such a counterexample, so let us assume $\sf ZF+\lnot AC$. Let $P$ be a set which cannot be well-ordered, and let $\newcommand{\fp}{\mathfrak p}\fp$ denote its cardinal. We make the following assumptions:</p>
<ol>
<li>$\fp^\omega=\fp$, otherwise replace $P$ by $P^\omega$. From this assumption we can conclude that: $\fp=\fp+\fp=\fp\cdot\fp$, and from those we can deduce that $\fp^\fp=2^\fp$.</li>
<li><p>If $\kappa$ is the least ordinal not less than or equal to $\fp$, then $\kappa<2^\fp$. If this is not true we can replace $\fp$ by $2^\fp$ (because $\kappa$ is the least ordinal with that property for $2^\fp$ as well) or by $2^{2^{\fp}}$ if needed. Since we know that $\kappa<2^{2^{2^\fp}}$, one of the three options must have the desired property.</p>
<p>Note that the above properties are preserved by taking powers, so replacing $\fp$ by its power set (once or twice) would not change the first assumption.</p></li>
</ol>
<p>Other important properties following from the first property are: $$2^\fp=(2^\fp)^\fp=2^\fp+2^\fp=2^\fp\cdot2^\fp.$$</p>
<p>We know by a lemma of Tarski that if $\lambda$ is an $\aleph$ and $\mathfrak m$ is a cardinal such that $\frak\lambda+m=\lambda\cdot m$, then $\lambda$ and $\frak m$ are comparable. Because we took $\kappa$ to be incomparable with $\fp$ we know that $\fp+\kappa<\fp\cdot\kappa$.</p>
<p>Note that $\fp(\fp+\kappa)=\kappa(\fp+\kappa)=(\fp+\kappa)(\fp\cdot\kappa)=\fp\cdot\kappa$ by the properties of $\fp$ and $\kappa$.</p>
<ul>
<li><p><strong>Case I: $2^\kappa\nleq2^\fp$.</strong></p>
<p>We observe that $2^\fp<2^{\fp+\kappa}$, and now we calculate:
$$\begin{align}
&(2^\fp)^{\fp+\kappa}=2^{\fp(\fp+\kappa)}=2^{\fp\cdot\kappa}&\tag{1}\\
&(2^{\fp+\kappa})^{\fp\cdot\kappa}=2^{(\fp+\kappa)\fp\cdot\kappa}=2^{\fp\cdot\kappa}&\tag{2}
\end{align}$$
And it is not hard to see that the inequalities are satisfied, but the exponentiation ends up equal.</p></li>
<li><p><strong>Case II: $2^\kappa\leq2^\fp$.</strong></p>
<p>We note that $2^\fp=2^{\fp+\kappa}$, and we make the following calculations:
$$\begin{align}
& 2^\fp=2^{\fp+\kappa}\leq\fp^{\fp+\kappa}\leq(2^\fp)^{\fp+\kappa}=2^{\fp\cdot\kappa}=(2^\kappa)^\fp\leq(2^\fp)^\fp=2^\fp&\tag{3}\\
& 2^\fp\leq\kappa^\fp\leq(2^\fp)^\fp=2^\fp &\tag{4}
\end{align}$$
And in this case we see that $\kappa<2^\fp$ and that $\fp<\fp+\kappa$, but the exponentiation is again equal.</p></li>
</ul>
|
logic | <p>In logic, a semantics is said to be compact iff the following holds: if every finite subset of a set of sentences has a model, then so too does the entire set. </p>
<p>Most logic texts either don't explain the terminology, or allude to the topological property of compactness. I see an analogy: given a topological space $X$ and a subset $S$ of it, $S$ is compact iff every open cover of $S$ has a finite subcover. But that alone doesn't seem strong enough to justify the terminology. </p>
<p>Is there more to the choice of the terminology in logic than this analogy?</p>
| <p>The Compactness Theorem is equivalent to the compactness of the <a href="http://en.wikipedia.org/wiki/Stone%27s_representation_theorem_for_Boolean_algebras" rel="noreferrer">Stone space</a> of the <a href="http://en.wikipedia.org/wiki/Lindenbaum%E2%80%93Tarski_algebra" rel="noreferrer">Lindenbaum–Tarski algebra</a> of the first-order language <span class="math-container">$L$</span>. (This is also the space of <a href="http://en.wikipedia.org/wiki/Type_%28model_theory%29" rel="noreferrer"><span class="math-container">$0$</span>-types</a> over the empty theory.)</p>
<p>A point in the Stone space <span class="math-container">$S_L$</span> is a complete theory <span class="math-container">$T$</span> in the language <span class="math-container">$L$</span>. That is, <span class="math-container">$T$</span> is a set of sentences of <span class="math-container">$L$</span> which is closed under logical deduction and contains exactly one of <span class="math-container">$\sigma$</span> or <span class="math-container">$\lnot\sigma$</span> for every sentence <span class="math-container">$\sigma$</span> of the language. The topology on the set of types has for basis the open sets <span class="math-container">$U(\sigma) = \{T:\sigma\in T\}$</span> for every sentence <span class="math-container">$\sigma$</span> of <span class="math-container">$L$</span>. Note that these are all clopen sets since <span class="math-container">$U(\lnot\sigma)$</span> is complementary to <span class="math-container">$U(\sigma)$</span>.</p>
<p>To see how the Compactness Theorem implies the compactness of <span class="math-container">$S_L$</span>, suppose the basic open sets <span class="math-container">$U(\sigma_i)$</span>, <span class="math-container">$i\in I$</span>, form a cover of <span class="math-container">$S_L$</span>. This means that every complete theory <span class="math-container">$T$</span> contains at least one of the sentences <span class="math-container">$\sigma_i$</span>. I claim that this cover has a finite subcover. If not, then the set <span class="math-container">$\{\lnot\sigma_i:i\in I\}$</span> is finitely consistent. By the Compactness Theorem, the set is consistent and hence (by Zorn's Lemma) is contained in a maximally consistent set <span class="math-container">$T$</span>. This theory <span class="math-container">$T$</span> is a point of the Stone space which is not contained in any <span class="math-container">$U(\sigma_i)$</span>, which contradicts our hypothesis that the <span class="math-container">$U(\sigma_i)$</span>, <span class="math-container">$i\in I$</span>, form a cover of the space.</p>
<p>To see how the compactness of <span class="math-container">$S_L$</span> implies the Compactness Theorem, suppose that <span class="math-container">$\{\sigma_i:i\in I\}$</span> is an inconsistent set of sentences in <span class="math-container">$L$</span>. Then <span class="math-container">$U(\lnot\sigma_i),i\in I$</span> forms a cover of <span class="math-container">$S_L$</span>. This cover has a finite subcover, which corresponds to a finite inconsistent subset of <span class="math-container">$\{\sigma_i:i\in I\}$</span>. Therefore, every inconsistent set has a finite inconsistent subset, which is the contrapositive of the Compactness Theorem.</p>
| <p>The analogy for the compactness theorem for propositional calculus is as follows. Let $p_i $ be propositional variables; together, they take values in the product space $2^{\mathbb{N}}$. Suppose we have a collection of statements $S_t$ in these boolean variables such that every finite subset is satisfiable. Then I claim that we can prove that they are all simultaneously satisfiable by using a compactness argument.</p>
<p>Let $F$ be a finite set of indices. Then the set of all truth assignments (a subset of $2^{\mathbb{N}}$) which satisfy $S_t$ for all $t \in F$ is a closed set $V_F$; indeed, each $V_F$ is closed because each statement involves only finitely many variables, so membership in $V_F$ is determined by finitely many coordinates. The intersection of any finitely many of the $V_F$ is nonempty, so by the finite intersection property, the intersection of all of them is nonempty (since the product space is compact, by Tychonoff's theorem), whence any truth assignment in this intersection satisfies all the statements.</p>
<p>I don't know how this works in predicate logic.</p>
|
geometry | <p><a href="https://i.sstatic.net/qBw4u.png" rel="noreferrer"><img src="https://i.sstatic.net/qBw4u.png" alt="enter image description here"></a></p>
<p>The diagram shows 12 small circles of radius 1 and a large circle, inside a square.</p>
<p>Each side of the square is a tangent to the large circle and four of the small circles.</p>
<p>Each small circle touches two other circles.</p>
<p>What is the length of each side of the square?</p>
<p>The answer is 18</p>
<p><strong>CONTEXT:</strong> </p>
<p>This question came up in a Team Maths Challenge I did back in November. No one on our team knew how to do it and we ended up guessing the answer (please understand that time was scarce and we did several other questions without guessing!). I just remembered this question and thought I'd have a go, but I am still struggling with it. </p>
<p>There are no worked solutions online (only the answer), so I am reaching out to this website as a last resort. Any help would be greatly appreciated. Thank you!</p>
| <p><a href="https://i.sstatic.net/RX88K.jpg" rel="noreferrer"><img src="https://i.sstatic.net/RX88K.jpg" alt="enter image description here"></a></p>
<p>Join the center of the bigger circle (radius assumed to be <span class="math-container">$r$</span>) to the mid-points of the square. It’s easy to see that <span class="math-container">$ABCD$</span> is a square as well. Now, join the center of the big circle to the center of one of the smaller circles (<span class="math-container">$P$</span>). Then <span class="math-container">$BP=r+1$</span>. Further, if we draw a vertical line through <span class="math-container">$P$</span>, it intersects <span class="math-container">$AB$</span> at a point distant <span class="math-container">$r-1$</span> from <span class="math-container">$B$</span>. Lastly, the perpendicular distance from <span class="math-container">$E$</span> to the bottom side of the square is equal to <span class="math-container">$AD=r$</span>. Take away three radii to obtain <span class="math-container">$EP=r-3$</span>. Using Pythagoras’ Theorem, <span class="math-container">$$(r-1)^2 +(r-3)^2 =(r+1)^2 \\ r^2-10r+9=0 \implies r=9,1$$</span>, but clearly <span class="math-container">$r\ne 1$</span>, and so the side of the square is <span class="math-container">$2r=18$</span>.</p>
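<p>Expanding, for completeness: <span class="math-container">$(r-1)^2+(r-3)^2=2r^2-8r+10$</span>, while <span class="math-container">$(r+1)^2=r^2+2r+1$</span>; subtracting gives <span class="math-container">$r^2-10r+9=0$</span>, which factors as <span class="math-container">$(r-1)(r-9)=0$</span>.</p>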
| <p>It is instructive to consider the general case. Suppose we have a circle of radius <span class="math-container">$r$</span> that is inscribed in a square of side length <span class="math-container">$2r$</span>. Suppose <span class="math-container">$n$</span> tangent circles of unit radius can be drawn along the inside "corner" of the square. What is the relationship between <span class="math-container">$r$</span> and <span class="math-container">$n$</span>? Your question is the case <span class="math-container">$n = 2$</span>, the third circle drawn in the corner being redundant. The figure below illustrates the case <span class="math-container">$n = 5$</span>:</p>
<p><a href="https://i.sstatic.net/ZC985.gif" rel="noreferrer"><img src="https://i.sstatic.net/ZC985.gif" alt="enter image description here"></a></p>
<p>The solution is straightforward. The right triangle shown in the diagram has legs <span class="math-container">$r-1$</span> and <span class="math-container">$r-(2n-1)$</span>, and hypotenuse <span class="math-container">$r+1$</span>. Therefore, <span class="math-container">$$(r-1)^2 + (r-2n+1)^2 = (r+1)^2,$$</span> from which it follows that <span class="math-container">$$r = (1 + \sqrt{2n})^2.$$</span> For <span class="math-container">$n = 2$</span>, this gives <span class="math-container">$r = 9$</span> and the side length of the square is <span class="math-container">$18$</span>. For <span class="math-container">$n = 5$</span>, we have <span class="math-container">$r = 11 + 2 \sqrt{10}$</span>. Whenever <span class="math-container">$n$</span> is twice a square, i.e. <span class="math-container">$n = 2m^2$</span> for a positive integer <span class="math-container">$m$</span>, then <span class="math-container">$r = (1 + 2m)^2$</span> is also an integer and the circumscribing square has integer sides.</p>
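<p>To spell out the algebra: the equation rearranges to <span class="math-container">$(r-(2n-1))^2=(r+1)^2-(r-1)^2=4r$</span>, and taking the square root that matches the configuration shown gives <span class="math-container">$r-(2n-1)=2\sqrt r$</span>, i.e. <span class="math-container">$(\sqrt r-1)^2=2n$</span>, whence <span class="math-container">$\sqrt r=1+\sqrt{2n}$</span> and <span class="math-container">$r=(1+\sqrt{2n})^2$</span>.</p>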
<hr>
<p>As a related but different question, given <span class="math-container">$n$</span> such circles, what is the total number of externally tangent unit circles that can be placed in the corner such that their centers form a square lattice and do not intersect the large circle? So for <span class="math-container">$n = 2$</span>, this number is <span class="math-container">$f(n) = 3$</span> as shown in your figure. For <span class="math-container">$n = 5$</span>, it is <span class="math-container">$f(5) = 12$</span>.</p>
|
logic | <p>Is there such a logical thing as proof by example?</p>
<p>I know many times when I am working with algebraic manipulations, I do quick tests to see if I remembered the formula right.</p>
<p>This works and is completely logical for counter examples. One specific counter example disproves the general rule. One example might be whether $(a+b)^2 = a^2+b^2$. This is quickly disproven with most choices of a counter example. </p>
<p>However, say I want to test something that is true like $\log_a(b) = \log_x(b)/\log_x(a)$. I can pick some points a and b and quickly prove it for one example. If I test a sufficient number of points, I can then rest assured that it does work in the general case. <strong>Not that it probably works, but that it does work assuming I pick sufficiently good points</strong>. (Although in practice, I have a vague idea of what makes a set of sufficiently good points and rely on that intuition/observation that it it should work)</p>
<p>Why is this thinking "it probably works" <strong>correct</strong>?</p>
<p>I've thought about it, and here's the best I can come up with, but I'd like to hear a better answer:</p>
<blockquote>
<p>If the equation is false (the two sides aren't equal), then there is
going to be constraints on what a and b can be. In this example it is
one equation and two unknowns. If I can test one point, see it fits
the equation, then test another point, see it fits the equation, and
test one more that doesn't "lie on the path formed by the other two
tested points", then I have proven it.</p>
</blockquote>
<p>I remember being told in school that this is not the same as proving the general case as I've only proved it for specific examples, but thinking about it some more now, I am almost sure it is a rigorous method to prove the general case provided you pick the right points and satisfy some sort of "not on the same path" requirement for the chosen points.</p>
<p>edit: Thank you for the great comments and answers. I was a little hesitant on posting this because of "how stupid a question it is" and getting a bunch of advice on why this won't work instead of a good discussion. I found the polynomial answer the most helpful to my original question of whether or not this method could be rigorous, but I found the link to the small numbers intuition quiz pretty awesome as well.</p>
<p>edit2: Oh, I also originally tagged this as linear-algebra because of the degrees-of-freedom nature of the problem when the hypothesis is not true. But I neglected to talk about that, so I can see why that was taken out. When a hypothesis is not true (i.e. polynomial LHS does not equal polynomial RHS), the variables can't be just anything, and there exists a counterexample to show this. By choosing points that slice these possibilities in the right way, testing them amounts to a proof that the hypothesis is true, at least for polynomials. The points have to be chosen so that there is no possible way a differing polynomial can meet all of them. If it still meets these points, the only possibility is that the polynomials are the same, proving the hypothesis by example. I would imagine there is a more general version of this, but it's probably harder than writing proofs the more straightforward way in a lot of cases. Maybe "by example" is asking to be stoned and fired. I think "brute force" was closer to what I was asking, but I didn't realize it initially.</p>
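<p>(For the polynomial case, this intuition can be made precise: if $P$ and $Q$ are polynomials of degree at most $d$ in one variable and $P(x_i)=Q(x_i)$ at $d+1$ distinct points, then $P-Q$ has degree at most $d$ but $d+1$ roots, hence is identically zero, so $P=Q$ everywhere. Applying this one variable at a time extends the test to polynomials in several variables evaluated on a suitable grid of points.)</p>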
| <p>In mathematics, "it probably works" is never a good reason to think something has been proven. There are certain patterns that hold for a great many small numbers (most of the numbers one would test) and then break after some obscenely large $M$ (see <a href="http://arxiv.org/pdf/1105.3943.pdf">here</a> for an example). If some equation or statement doesn't hold in general but holds for certain values, then yes, there will be constraints, but those constraints might be very hard or even impossible to quantify: say an equation holds for all composite numbers, but fails for all primes. Since we don't know a formula for the $n$th prime number, it would be very hard to test your "path" to see where this failed.</p>
<p>However, there is such a thing as a proof by example. We often want to show two structures, say $G$ and $H$, to be the same in some mathematical sense: for example, we might want to show $G$ and $H$ are <a href="http://en.wikipedia.org/wiki/Group_isomorphism">isomorphic as groups</a>. Then it would suffice to find an isomorphism between them! In general, if you want to show something exists, you can prove it by <em>finding it</em>!</p>
<p>But again, if you want to show something is true for all elements of a given set (say, you want to show $f(x) = g(x)$ for all $x\in\Bbb{R}$), then you have to employ a more general argument: no amount of case testing will prove your claim (unless you can actually test all the elements of the set explicitly: for example when the set is finite, or when you can apply mathematical induction).</p>
| <p>Yes. As pointed out in the comments by CEdgar:</p>
<p>Theorem: There exists an odd prime number.
Proof: 17 is an odd prime number.</p>
<p>Incidentally, this is also a proof by example that there are proofs by example.</p>
|
geometry | <p>First of all, I am very comfortable with the tensor product of vector spaces. I am also very familiar with the well-known generalizations, in particular the theory of monoidal categories. I have gained quite some intuition for tensor products and can work with them. Therefore, my question is not about the definition of tensor products, nor is it about its properties. It is rather about the mental images. My intuition for tensor products was never really <strong>geometric</strong>. Well, except for the tensor product of commutative algebras, which corresponds to the fiber product of the corresponding affine schemes. But let's just stick to real vector spaces here, for which I have some geometric intuition, for example from classical analytic geometry. </p>
<p>The direct product of two (or more) vector spaces is quite easy to imagine: There are two (or more) "directions" or "dimensions" in which we "insert" the vectors of the individual vector spaces. For example, the direct product of a line with a plane is a three-dimensional space.</p>
<p>The exterior algebra of a vector space consists of "blades", as is nicely explained in the <a href="http://en.wikipedia.org/wiki/Exterior_algebra" rel="noreferrer">Wikipedia article</a>.</p>
<p>Now what about the tensor product of two finite-dimensional real vector spaces $V,W$? Of course $V \otimes W$ is a direct product of $\dim(V)$ copies of $W$, but this description is not intrinsic, and also it doesn't really incorporate the symmetry $V \otimes W \cong W \otimes V$. How can we describe $V \otimes W$ geometrically in terms of $V$ and $W$? This description should be intrinsic and symmetric.</p>
<p>Note that <a href="https://math.stackexchange.com/questions/115630">SE/115630</a> basically asked the same, but received no actual answer. The answer given at <a href="https://math.stackexchange.com/questions/309838">SE/309838</a> discusses where tensor products are used in differential geometry for more abstract notions such as tensor fields and tensor bundles, but this doesn't answer the question either. (Even if my question gets closed as a duplicate, then I hope that the other questions receive more attention and answers.)</p>
<p>More generally, I would like to ask for a geometric picture of the tensor product of two vector bundles on nice topological spaces. For example, tensoring with a line bundle is some kind of twisting. But this is still somewhat vague. For example, consider the Möbius strip on the circle $S^1$, and pull it back to the torus $S^1 \times S^1$ along the first projection. Do the same with the second projection, and then tensor both. We get a line bundle on the torus, okay, but what does it look like geometrically?</p>
<p>Perhaps the following related question is easier to answer: Assume we have a geometric understanding of two linear maps $f : \mathbb{R}^n \to \mathbb{R}^m$, $g : \mathbb{R}^{n'} \to \mathbb{R}^{m'}$. Then, how can we imagine their tensor product $f \otimes g : \mathbb{R}^n \otimes \mathbb{R}^{n'} \to \mathbb{R}^m \otimes \mathbb{R}^{m'}$ or the corresponding linear map $\mathbb{R}^{n n'} \to \mathbb{R}^{m m'}$ geometrically? This is connected to the question about vector bundles via their cocycle description.</p>
| <p>Well, this may not qualify as "geometric intuition for the tensor product", but I can offer some insight into the tensor product of line bundles.</p>
<p>A line bundle is a very simple thing -- all that you can "do" with a line is flip it over, which means that in some basic sense, the Möbius strip is the only really nontrivial line bundle. If you want to understand a line bundle, all you need to understand is where the Möbius strips are.</p>
<p>More precisely, if $X$ is a line bundle over a base space $B$, and $C$ is a closed curve in $B$, then the preimage of $C$ in $X$ is a line bundle over a circle, and is therefore either a cylinder or a Möbius strip. Thus, a line bundle defines a function
$$
\varphi\colon \;\pi_1(B)\; \to \;\{-1,+1\}
$$
where $\varphi$ maps a loop to $-1$ if its preimage is a Möbius strip, and maps a loop to $+1$ if its preimage is a cylinder.</p>
<p>It's not too hard to see that $\varphi$ is actually a homomorphism, where $\{-1,+1\}$ forms a group under multiplication. This homomorphism completely determines the line bundle, and there are no restrictions on the function $\varphi$ beyond the fact that it must be a homomorphism. This makes it easy to classify line bundles on a given space.</p>
<p>Now, if $\varphi$ and $\psi$ are the homomorphisms corresponding to two line bundles, then the tensor product of the bundles corresponds to the <em>algebraic product of $\varphi$ and $\psi$</em>, i.e. the homomorphism $\varphi\psi$ defined by
$$
(\varphi\psi)(\alpha) \;=\; \varphi(\alpha)\,\psi(\alpha).
$$
Thus, the tensor product of two bundles only "flips" the line along the curve $C$ if exactly one of $\varphi$ and $\psi$ flips the line (since $-1\times+1 = -1$).</p>
<p>In the example you give involving the torus, one of the pullbacks flips the line as you go around in the longitudinal direction, and the other flips the line as you around in the meridional direction:</p>
<p><img src="https://i.sstatic.net/iEOgb.png" alt="enter image description here"> <img src="https://i.sstatic.net/SKGy1.png" alt="enter image description here"></p>
<p>Therefore, the tensor product will flip the line when you go around in <em>either</em> direction:</p>
<p><img src="https://i.sstatic.net/tmKVQ.png" alt="enter image description here"></p>
<p>So this gives a geometric picture of the tensor product in this case.</p>
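<p>In the notation above, if $a$ and $b$ denote the longitudinal and meridional generators of $\pi_1(S^1\times S^1)$, then the two pullback bundles have $\varphi(a)=-1$, $\varphi(b)=+1$ and $\psi(a)=+1$, $\psi(b)=-1$, so the tensor product satisfies $(\varphi\psi)(a)=(\varphi\psi)(b)=-1$, matching the picture.</p>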
<p>Incidentally, it turns out that the following things are all really the same:</p>
<ol>
<li><p>Line bundles over a space $B$</p></li>
<li><p>Homomorphisms from $\pi_1(B)$ to $\mathbb{Z}/2$.</p></li>
<li><p>Elements of $H^1(B,\mathbb{Z}/2)$.</p></li>
</ol>
<p>In particular, every line bundle corresponds to an element of $H^1(B,\mathbb{Z}/2)$. This is called the <a href="https://en.wikipedia.org/wiki/Stiefel%E2%80%93Whitney_class" rel="noreferrer">Stiefel-Whitney class</a> for the line bundle, and is a simple example of a <a href="https://en.wikipedia.org/wiki/Characteristic_class" rel="noreferrer">characteristic class</a>.</p>
<p><strong>Edit:</strong> As Martin Brandenburg points out, the above classification of line bundles does not work for arbitrary spaces $B$, but does work in the case where $B$ is a CW complex.</p>
| <p>Good question. My personal feeling is that we gain true geometric intuition of vector spaces only once norms/inner products/metrics are introduced. Thus, it probably makes sense to consider tensor products in the category of, say, Hilbert spaces (maybe finite-dimensional ones at first). My geometric intuition is still mute at this point, but I know that (for completed tensor products) we have an isometric isomorphism
$$
L^2(Z_1) \otimes L^2(Z_2) \cong L^2(Z_1 \times Z_2)
$$<br>
where $Z_i$'s are measure spaces. In the finite-dimensional setting one, of course, just uses counting measures on finite sets. From this point, one can at least rely upon analytic intuition for the tensor product (Fubini theorem and computation of double integrals as iterated integrals, etc.).</p>
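<p>In the finite-dimensional case this also addresses the question about linear maps: if $f$ and $g$ are represented by matrices $A=(a_{ij})$ and $B$ with respect to chosen bases, then $f\otimes g$ is represented, in the basis of elementary tensors ordered suitably, by the Kronecker product
$$
A\otimes B=\begin{pmatrix}a_{11}B & \cdots & a_{1n}B\\ \vdots & & \vdots \\ a_{m1}B & \cdots & a_{mn}B\end{pmatrix},
$$
the block matrix obtained by scaling a copy of $B$ by each entry of $A$.</p>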
|
combinatorics | <p>Popular mathematics folklore provides some simple tools
enabling us compactly to describe some truly enormous
numbers. For example, the number $10^{100}$ is commonly
known as a <a href="http://en.wikipedia.org/wiki/Googol">googol</a>,
and a <a href="http://en.wikipedia.org/wiki/Googolplex">googol
plex</a> is
$10^{10^{100}}$. For any number $x$, we have the common
vernacular:</p>
<ul>
<li>$x$ <em>bang</em> is the factorial number $x!$</li>
<li>$x$ <em>plex</em> is the exponential number $10^x$</li>
<li>$x$ <em>stack</em> is the number obtained by iterated exponentiation
(associated upwards) in a tower of height $x$, also denoted $10\uparrow\uparrow x$,
$$10\uparrow\uparrow x = 10^{10^{10^{\cdot^{\cdot^{10}}}}}{\large\rbrace} x\text{ times}.$$</li>
</ul>
<p>Thus, a googol bang is $(10^{100})!$, and a googol stack is
$10\uparrow\uparrow 10^{100}$. The vocabulary enables us to
name larger numbers with ease:</p>
<ul>
<li>googol bang plex stack. (This is the exponential tower $10^{10^{\cdot^{\cdot^{10}}}}$ of height $10^{(10^{100})!}$)</li>
<li>googol stack bang stack bang</li>
<li>googol bang bang stack plex stack</li>
<li>and so on…</li>
</ul>
<p>Consider the collection of all numbers that can be named in
this scheme, by a term starting with googol and having
finitely many adjectival operands: bang, stack, plex, in
any finite pattern, repetitions allowed. (For the purposes
of this question, let us limit ourselves to these three
operations and please accept the base 10 presumption of the
stack and plex terminology simply as an artifact of its
origin in popular mathematics.)</p>
<p>My goal is to sort all such numbers nameable in this
vocabulary by size.</p>
<p>A few simple observations get us started. Once $x$ is large
enough (about 20), then the factors of $x!$ above $10$
compensate for the few below $10$, and so we see that
$10^x\lt x!$, or in other words, $x$ plex is less than $x$
bang. Similarly, $10^{10^{:^{10}}}x$ times is much larger
than $x!$, since $10^y\gt (y+1)y$ for large $y$, and so for
large values we have</p>
<ul>
<li>$x$ plex $\lt$ $x$ bang $\lt$ $x$ stack.</li>
</ul>
<p>In particular, the order for names having at most one
adjective is:</p>
<pre><code> googol
googol plex
googol bang
googol stack
</code></pre>
<p>And more generally, replacing plex with bang or bang with
stack in any of our names results in a strictly (and much)
larger number.</p>
<p>Continuing, since $x$ stack plex $= (x+1)$ stack, it
follows that</p>
<ul>
<li>$x$ stack plex $\lt x$ plex stack.</li>
</ul>
<p>Similarly, for large values,</p>
<ul>
<li>$x$ plex bang $\lt x$ bang plex,</li>
</ul>
<p>because $(10^x)!\lt (10^x)^{10^x}=10^{x10^x}\lt 10^{x!}$.
Also,</p>
<ul>
<li>$x$ stack bang $\lt x$ plex stack $\lt x$ bang stack,</li>
</ul>
<p>because $(10\uparrow\uparrow x)!\lt (10\uparrow\uparrow
x)^{10\uparrow\uparrow x}\lt 10\uparrow\uparrow 2x\lt
10\uparrow\uparrow 10^x\lt 10\uparrow\uparrow x!$. It also
appears to be true for large values that</p>
<ul>
<li>$x$ bang bang $\lt x$ stack.</li>
</ul>
<p>Indeed, one may subsume many more iterations of plex and
bang into a single stack. Note also for large values that</p>
<ul>
<li>$x$ bang $\lt x$ plex plex</li>
</ul>
<p>since $x!\lt x^x$, and this is seen to be less than
$10^{10^x}$ by taking logarithms.</p>
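<p>Explicitly: $\log_{10}(x^x)=x\log_{10}x$, which is far smaller than $10^x$ for large $x$, so $x!\lt x^x\lt 10^{10^x}$.</p>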
<p>The observations above enable us to form the following
order of all names using at most two adjectives.</p>
<pre><code> googol
googol plex
googol bang
googol plex plex
googol plex bang
googol bang plex
googol bang bang
googol stack
googol stack plex
googol stack bang
googol plex stack
googol bang stack
googol stack stack
</code></pre>
<p>My request is for any or all of the following:</p>
<ol>
<li><p>Expand the list above to include numbers named using
more than two adjectives. (This will not be an
end-extension of the current list, since googol plex plex
plex and googol bang bang bang will still appear before
googol stack.) If people post partial progress, we can
assemble them into a master list later.</p></li>
<li><p>Provide general comparison criteria that will assist
such an on-going effort.</p></li>
<li><p>Provide a complete comparison algorithm that works for
any two expressions having the same number of adjectives.</p></li>
<li><p>Provide a complete comparison algorithm that compares
any two expressions.</p></li>
</ol>
<p>Of course, there is in principle a computable comparison
procedure, since we may program a Turing machine to
actually compute the two values and compare their size.
What is desired, however, is a simple, feasible algorithm.
For example, it would seem that we could hope for an
algorithm that would compare any two names in polynomial
time of the length of the names.</p>
| <p>OK, let's attempt a sorting of the names having at most
three operands. I'll make several observations, and then
use them to assemble the order section by section,
beginning with the part below googol stack.</p>
<ul>
<li><p>googol bang bang bang $\lt$ googol
stack. It seems clear that we shall be able to iterate bangs many
times before exceeding googol stack. Since googol bang bang
bang is the largest three-operand name
using only plex and bang, this means that all such names will interact
only with each other below googol stack.</p></li>
<li><p>plex $\lt$ bang. This was established in the question.</p></li>
<li><p>plex bang $\lt$ bang plex. This was established in the
question, and it allows us to make many comparisons in
terms involving only plex and bang, but not quite all of
them.</p></li>
<li><p>googol bang bang $\lt$ googol plex plex plex. This is
because $g!!\lt (g^g)^{g^g}=g^{gg^g}=10^{100\cdot gg^g}$, which is less than
$10^{10^{10^g}}$, since $100\cdot gg^g\lt 10^{102\cdot
10^{100}}\lt 10^{10^g}$. Since googol bang bang is the largest two-operand name using only
plex and bang and googol plex plex plex is the smallest three-operand name, this means that
the two-operand names using only plex and bang will all come
before all the three-operand names.</p></li>
<li><p>googol plex bang bang $\lt$ googol bang plex plex. This
is because $(10^g)!!\lt
((10^g)^{10^g})!=(10^{g10^g})!=(10^{10^{g+100}})!\lt
(10^{10^{g+100}})^{10^{10^{g+100}}}=10^{10^{g+100}10^{10^{g+100}}}\lt
10^{10^{(g+100)10^{g+100}}}\lt
10^{10^{g!}}$.</p></li>
</ul>
<p>Combining the previous observations leads to the following
order of the three-operand names below googol stack:</p>
<pre><code> googol
googol plex
googol bang
googol plex plex
googol plex bang
googol bang plex
googol bang bang
googol plex plex plex
googol plex plex bang
googol plex bang plex
googol plex bang bang
googol bang plex plex
googol bang plex bang
googol bang bang plex
googol bang bang bang
googol stack
</code></pre>
<p>Perhaps someone can generalize the methods into a general
comparison algorithm for larger smallish terms using only
plex and bang? This is related to the topic of the Velleman
article linked to by J. M. in the comments.</p>
<p>Meanwhile, let us now turn to the interaction with stack.
Using the observations of the two-operand case in the
question, we may continue as follows:</p>
<pre><code> googol stack plex
googol stack bang
googol stack plex plex
googol stack plex bang
googol stack bang plex
googol stack bang bang
</code></pre>
<p>Now we use the following fact:</p>
<ul>
<li>stack bang bang $\lt$ plex stack. This is established as
in the question, since $(10\uparrow\uparrow x)!!\lt
((10\uparrow\uparrow x)^{10\uparrow\uparrow x})!\lt$
$(10\uparrow\uparrow x)^{(10\uparrow\uparrow
x)(10\uparrow\uparrow x)^{10\uparrow\uparrow x}}=$
$(10\uparrow\uparrow x)^{(10\uparrow\uparrow
x)^{1+10\uparrow\uparrow x}}\lt 10\uparrow\uparrow 4x\lt
10\uparrow\uparrow 10^x$. In fact, it seems that we will be
able to absorb many more iterated bangs after stack into
plex stack.</li>
</ul>
<p>The order therefore continues with:</p>
<pre><code> googol plex stack
googol plex stack plex
googol plex stack bang
</code></pre>
<ul>
<li>plex stack bang $\lt$ bang stack. To see this, observe
that $(10\uparrow\uparrow 10^x)!\lt (10\uparrow\uparrow
10^x)^{10\uparrow\uparrow 10^x}\lt 10\uparrow\uparrow
2\cdot10^x$, since associating upwards is greater, and this
is less than $10\uparrow\uparrow x!$. Again, we will be
able to absorb many operands after plex stack into bang
stack.</li>
</ul>
<p>The order therefore continues with:</p>
<pre><code> googol bang stack
googol bang stack plex
googol bang stack bang
</code></pre>
<ul>
<li>bang stack bang $\lt$ plex plex stack.
This is because $(10\uparrow\uparrow x!)!\lt
(10\uparrow\uparrow x!)^{10\uparrow\uparrow x!}\lt
10\uparrow\uparrow 2x!\lt 10\uparrow\uparrow 10^{10^x}$.</li>
</ul>
<p>Thus, the order continues with:</p>
<pre><code> googol plex plex stack
googol plex bang stack
googol bang plex stack
googol bang bang stack
</code></pre>
<p>This last item is clearly less than googol stack stack, and
so, using all the pairwise operations we already know, we
continue with:</p>
<pre><code> googol stack stack
googol stack stack plex
googol stack stack bang
googol stack plex stack
googol stack bang stack
googol plex stack stack
googol bang stack stack
googol stack stack stack
</code></pre>
<p>Which seems to complete the list for three-operand names.
If I have made any mistakes, please comment below.</p>
<p>Meanwhile, this answer is just partial progress, since we
have the four-operand names, which will fit into the
hierarchy, and I don't think the observations above are
fully sufficient for the four-operand comparisons, although
many of them will now be settled by these criteria. And of course, I am nowhere near a general comparison algorithm.</p>
<p>Sorry for the length of this answer. Please post comments if I've made any errors.</p>
| <p>The following describes a comparison algorithm that will work for expressions where the number of terms is less than googol - 2.</p>
<p>First, consider the situation with only bangs and plexes. To compare two numbers, first count the total number of bangs and plexes in each. If one has more than the other, that number is bigger. If the two numbers have the same number of bangs and plexes, compare the terms lexicographically, setting bang > plex. So googol plex bang plex plex > googol plex plex bang bang, since the first terms are equal, and the second term favors the first.</p>
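<p>For illustration, here is a minimal sketch of that counting-then-lexicographic rule as code (the class and method names are my own):</p>
<pre><code>import java.util.List;

public class NameCompare {
    // Compares two names built from "plex" and "bang" only:
    // more operands wins; on a tie, compare term by term with bang > plex.
    // Returns a negative, zero, or positive number as a < b, a = b, a > b.
    static int compare(List<String> a, List<String> b) {
        if (a.size() != b.size()) {
            return Integer.compare(a.size(), b.size());
        }
        for (int i = 0; i < a.size(); i++) {
            boolean aBang = a.get(i).equals("bang");
            boolean bBang = b.get(i).equals("bang");
            if (aBang != bBang) {
                return aBang ? 1 : -1;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        // googol plex bang plex plex > googol plex plex bang bang
        System.out.println(compare(
                List.of("plex", "bang", "plex", "plex"),
                List.of("plex", "plex", "bang", "bang")) > 0); // prints true
    }
}
</code></pre>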
<p>To prove this, first note that $x$ bang $> x$ plex for $x \ge 25$. To show that the higher number of terms always wins, it suffices to show that googol plex$^{k+1} >$ googol bang$^k$. We will instead show that $x$ plex$^{k+1} > x$ bang$^k$ for $x \ge 100$. Set $x = 10^y$.</p>
<p>$10^y$ bang $< (10^y)^{10^y} = 10^{y*10^y} < 10^{10^{10^y}} = 10^y$ plex plex</p>
<p>$10^y$ bang bang $< (10^{y*10^y})^{10^{y*10^y}} $</p>
<p>$= 10^{10^{y*10^y + y + \log_{10} y}}$ </p>
<p>$= 10^{10^{10^{y + \log_{10} y} (1 + \frac{y + \log_{10} y}{10^{y + \log_{10} y}})}}$ </p>
<p>$< 10^{10^{10^{y + \log_{10} y} (1 + \frac{y}{10^{y}})}}$</p>
<p>(We use the fact that $x/10^x$ is decreasing for large $x$.) </p>
<p>$= 10^{10^{10^{y + \log_{10} y + \log_{10}(1 + \frac{y}{10^{y}})}}}$ </p>
<p>$< 10^{10^{10^{y + \log_{10} y + \frac{y}{10^{y}}}}}$ </p>
<p>(We use the fact that $\ln(1+x) < x$, so $\log_{10}(1+x) < x$.)</p>
<p>$< 10^{10^{10^{2y}}} < 10^{10^{10^{10^y}}} = 10^y$ plex plex plex</p>
<p>$10^y$ bang bang bang < $(10^{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}})^{10^{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}}} $</p>
<p>$= 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}} + 10^{y + \log_{10} y + \frac{y}{10^y}})}}$ </p>
<p>$= 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}}(1 + \frac{10^{y + \log_{10} y + \frac{y}{10^y}}}{10^{10^{y + \log_{10} y + \frac{y}{10^y}}}})}}$ </p>
<p>$< 10^{10^{(10^{10^{y + \log_{10} y + \frac{y}{10^y}}}(1 + \frac{10^{y }}{10^{10^{y}}})}}$ </p>
<p>$= 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} + \log_{10}(1+\frac{10^{y }}{10^{10^{y}}}))}}}$ </p>
<p>$< 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} + \frac{10^{y }}{10^{10^{y}}})}}}$ </p>
<p>$= 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} (1 + \frac{10^{y }}{10^{10^{y}} * (10^{y + \log_{10} y + \frac{y}{10^y}})}))}}}$ </p>
<p>$< 10^{10^{10^{(10^{y + \log_{10} y + \frac{y}{10^y}} (1 + \frac{1}{10^{10^{y}} }))}}}$</p>
<p>$= 10^{10^{10^{10^{y + \log_{10} y + \frac{y}{10^y} + \frac{1}{10^{10^{y}} }} }}}$ </p>
<p>$< 10^{10^{10^{10^{2y}}}} < 10^{10^{10^{10^{10^y}}}} = 10^y$ plex plex plex plex</p>
<p>We can see that the third bang added less than $\frac{1}{10^{10^y}}$ to the top exponent. Similarly, adding a fourth bang will add less than $\frac{1}{10^{10^{10^y}}}$, adding a fifth bang will add less than $\frac{1}{10^{10^{10^{10^y}}}}$, and so on. It's clear that all the fractions will add up to less than 1, so in general,</p>
<p>$10^y$ bang$^{k} < 10^{10^{10^{\cdot^{\cdot^{10^{y + \log_{10} y + 1}}}}}}{\large\rbrace} k+1\text{ 10's} < 10^{10^{10^{\cdot^{\cdot^{10^{10^y}}}}}}{\large\rbrace} k+2\text{ 10's} = 10^y$ plex$^{k+1}$.</p>
<p>Next, we have to show that the lexicographic order works. We will show that it works for all $x \ge 100$. Suppose our procedure failed; take two numbers with the fewest number of terms for which it fails, e.g. $x s_1 ... s_n$ and $x t_1 ... t_n$. It cannot be that $s_1$ and $t_1$ are both plex or both bang, since then $(x s_1) s_2 ... s_n$ and $(x s_1) t_2 ... t_n$ would be a failure of the procedure with one fewer term. So set $s_1 =$ bang and $t_1 =$ plex. Since our procedure tells us that $x s_1 ... s_n$ > $x t_1 ... t_n$, and our procedure fails, it must be that $x s_1 ... s_n$ < $x t_1 ... t_n$. Then</p>
<p>$x$ bang plex ... plex $< x$ bang $s_2 ... s_n < x$ plex $t_2 ... t_n < x$ plex bang ... bang.</p>
<p>So to show our procedure works, it suffices to show that $x$ bang plex$^k$ $>$ $x$ plex bang$^k$. Set $x = 10^y$.</p>
<p>$10^y$ bang > $(\frac{10^y}{e})^{10^y} > (10^{y - \frac{1}{2}})^{10^y} = 10^{(y-\frac{1}{2})10^y}$</p>
<p>$10^y$ bang plex$^k > 10^{10^{10^{\cdot^{\cdot^{10^{(y-\frac{1}{2})10^y}}}}}}{\large\rbrace} k+1\text{ 10's}$</p>
<p>To determine $10^y$ plex bang$^k$, we can use our previous inequality for $10^y$ bang$^k$ and set $x = 10^y$ plex $= 10^{10^y}$, i.e. substitute $10^y$ for $y$. We get</p>
<p>$10^y$ plex bang$^k < 10^{10^{10^{\cdot^{\cdot^{10^{(10^y + \log_{10}(10^y) + 1}}}}}}{\large\rbrace} k+1\text{ 10's} = 10^{10^{10^{\cdot^{\cdot^{10^{10^y + y + 1}}}}}}{\large\rbrace} k+1\text{ 10's}$</p>
<p>$< 10^{10^{10^{\cdot^{\cdot^{10^{(y-\frac{1}{2})10^y}}}}}}{\large\rbrace} k+1\text{ 10's} < 10^y$ bang plex$^k$.</p>
<p>Okay, now for terms with stack. Given two expressions, first compare the number of times stack appears; the number in which stack appears more often is the winner. If stack appears n times for both expressions, then in each expression consider the n+1 groups of plexes and bangs separated by the n stacks. Compare the n+1 groups lexicographically, using the ordering we defined above for plexes and bangs. Whichever expression is greater denotes the larger number.</p>
<p>Now, this procedure clearly does not work all the time, since a googol followed by googol-2 plexes is greater than googol stack. However, I believe that if the number of terms in the expressions is less than googol-2, then the procedure is correct.</p>
<p>First, observe that $x$ plex stack > $x$ stack plex and $x$ bang stack > $x$ stack bang, since </p>
<p>$x$ stack plex $< x$ stack bang $< (10\uparrow\uparrow x)^{10\uparrow\uparrow x} < 10\uparrow\uparrow (2x) < x$ plex stack $< x$ bang stack.</p>
<p>Thus if googol $s_1 ... s_n$ is some expression with fewer stacks than googol $t_1 ... t_m$, we can move all the plexes and bangs in $s_1 ... s_n$ to the beginning. Let $s_1 ... s_i$ and $t_1 ... t_j$ be the initial bangs and plexes before the first stack. There will be less than googol-2 bangs and plexes, and </p>
<p>googol bang$^{\text{googol}-3} < 10^{10^{10^{\cdot^{\cdot^{10^{100 + \log_{10} 100 + 1}}}}}}{\large\rbrace} \text{googol-2 10's} < 10^{10^{10^{\cdot^{\cdot^{10^{103}}}}}}{\large\rbrace} \text{googol-2 10's}$</p>
<p>$ < 10 \uparrow\uparrow $googol = googol stack</p>
<p>and so googol $s_1 ... s_i$ will be less than googol $t_1 ... t_{j+1}$ ($t_{j+1}$ is a stack). $s_{i+1} ... s_n$ consists of $k$ stacks, and $t_{j+2} ... t_m$ consists of at least $k$ stacks and possibly some plexes and bangs. Thus googol $s_1 ... s_n$ will be less than googol $t_1 ... t_m$.</p>
<p>Now consider $x S_1$ stack $S_2$ stack ... stack $S_n$ versus $x T_1$ stack $T_2$ stack ... stack $T_n$, where the $S_i$ and $T_i$ are sequences of plexes and bangs. Without loss of generality, we can assume that $S_1 > T_1$ in our order. (If $S_1 = T_1$, we can consider ($x S_1$ stack) $S_2$ stack ... stack $S_n$ versus ($x T_1$ stack) $T_2$ stack ... stack $T_n$, and compare $S_2$ versus $T_2$, etc., until we get to an $S_i$ and $T_i$ that are different.) $x S_1$ stack $S_2$ stack ... stack $S_n$ is, at the minimum, $x S_1$ stack ... stack, while $x T_1$ stack $T_2$ stack ... stack $T_n$, is, at the maximum, $x T_1$ stack bang$^{\text{googol}-3}$ stack .... stack. So it is enough to show</p>
<p>$x S_1$ stack $> x T_1$ stack bang$^{\text{googol}-3}$.</p>
<p>We have seen that $x$ bang$^k < 10^{10^{10^{\cdot^{\cdot^{10^x}}}}}{\large\rbrace} k+1\text{ times}$ so $x$ bang$^{\text{googol}-3} < 10^{10^{10^{\cdot^{\cdot^{10^x}}}}}{\large\rbrace} \text{googol-2 times}$, and $x T_1$ stack bang$^{\text{googol}-3} < ((x T_1) +$ googol) stack. Thus we must show $x S_1 > (x T_1) +$ googol.</p>
<p>We can assume without loss of generality that the first term of $S_1$ and the first term of $T_1$ are different. (Otherwise set $x = x s_1 ... s_{i-1}$ where i is the smallest number such that s_i and t_i are different.) We have seen above that it is enough to consider </p>
<p>x (plex)^(k+1) versus x (bang)^k
x bang (plex)^k versus x plex (bang)^k</p>
<p>We have previously examined these two cases. In both cases, adding a googol to the smaller leads to the same inequality.</p>
<p>And with that, we are done.</p>
<hr>
<p>What are the prospects for a general comparison algorithm, when the number of terms exceeds googol-3? The difficulty can be illustrated by considering the following two expressions:</p>
<p>$x$ $S_1$ stack plex$^k$</p>
<p>$x$ $T_1$ stack</p>
<p>The two expressions are equal precisely when k = $x$ $T_1$ - $x$ $S_1$. So a general comparison algorthm must allow for the calculation of arbitrary expressions, which perhaps makes our endeavor pointless.</p>
<p>In light of this, I believe the following general comparison algorithm is the best that can be done.</p>
<p>We already have a general comparison algorithm for expressions with no appearances of stack. If stack appears in both expressions, let them be $x$ $S_1$ stack $S_2$ and $x$ $T_1$ stack $T_2$, where $S_2$ and $T_2$ have no appearances of stack. Replace $x$ $S_1$ stack with plex$^{(x S_1)}$, and $x$ $T_1$ stack with plex$^{(x T_1)}$, and do our previous comparison algorithm on the two new expressions. This clearly works because $x$ $S_1$ stack = $10^{10}$ plex$^{(x S_1 -2)}$ and
$x$ $T_1$ stack = $10^{10}$ plex$^{(x T_1-2)}$.</p>
<p>The remaining case is where one expression has stack and the other does not, i.e. googol $S_1$ stack $S_2$ versus googol $T$, where $S_2$ and $T$ have no appearances of stack. Let $s$ and $t$ be the number of terms in $S_2$ and $T$ respectively. Then googol $T$ is greater than googol $S_1$ stack $S_2$ iff $t \ge $ googol $S_1 + s - 2$.</p>
<p>Indeed, if $t \ge $ googol $S_1 + s - 2$,</p>
<p>googol $T \ge$ googol plex$^{\text{googol} S_1 + s - 2} = 10^{10^{10^{\cdot^{\cdot^{10^{100}}}}}}{\large\rbrace} $ googol $S_1$ $+s-1$ 10's $ > 10^{10^{10^{\cdot^{\cdot^{10^{10} + 10 + 1}}}}}{\large\rbrace} $ googol $S_1 +s-1$ 10's > googol $S_1$ stack bang$^s$ </p>
<p>$\ge$ googol $S_1$ stack $S_2$.</p>
<p>If $t \le $ googol $S_1 + s - 3$,</p>
<p>googol $T \le$ googol bang$^{\text{googol} S_1 + s - 3} < 10^{10^{10^{\cdot^{\cdot^{10^{103}}}}}}{\large\rbrace} $ googol $S_1$ $+s-2$ 10's $ < 10^{10^{10^{\cdot^{\cdot^{10^{10^{10}}}}}}}{\large\rbrace} $ googol $S_1 +s$ 10's = googol $S_1$ stack plex$^s$ </p>
<p>$\le$ googol $S_1$ stack $S_2$.</p>
<p>So the comparison algorithm, while not particularly clever, works. </p>
<p>In one of the comments, someone raised the question of a polynomial algorithm (presumably as a function of the maximum number of terms). We can implement one as follows. Let n be the maximum number of terms. We use the following lemma.</p>
<p>Lemma. For any two expressions googol $S$ and googol $T$, if googol $S$ > googol $T$, then googol $S$ > 2 googol $T$.</p>
<p>This lemma is not too hard to verify, but for reasons of space I will not do so here.</p>
<p>As before, we have a simple algorithm in O(n) time when neither expression contains stack. If exactly one of the expressions has a stack, we compute $x$ $S_1$ as above, but we stop if the calculation exceeds n. If the calculation finishes, then we can do the previous comparison in O(log n) time; if the calculation stops, then we know that $x$ $S_1$ stack $S_2$ is larger, again in O(log n) time.</p>
<p>If both expressions have stack, then from our previous procedure we calculate both $x$ $S_1$ and $x$ $T_1$. Now we stop if either calculation exceeds $2m$, where $m$ is the maximum of the lengths of $S_2$ and $T_2$ (clearly, $2m < 2n$). If the calculation finishes, then we can do our previous procedure in O(n) time. If the calculation stops prematurely, then the larger of $x$ $S_1$ or $x$ $T_1$ will determine the larger original expression. Indeed, if $y = x$ $S_1$ and $z = x$ $T_1$, and $y > z$, then by the Lemma $y > 2z$, so since $y > 2m$, we have $y > z+m$. In our procedure we replace $x$ $S_1$ stack by plex$^y$ and $x$ $T_1$ stack by plex$^z$; since $y$ exceeds $z$ by more than $m$, plex$^y$ $S_2$ will be longer than plex$^z$ $T_2$, so the first expression will be larger. So we apply our procedure to $x$ $S_1$ and $x$ $T_1$; this will reduce the sum of the lengths by at least $m+2$, having used O(log m) operations.</p>
<p>So we wind up with an algorithm that takes O(n) operations.</p>
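<p>To make the capped-evaluation step concrete, here is a minimal Python sketch, assuming plex$(n) = 10^n$ and bang$(n) = n!$; the representation of an expression as a base value plus a list of suffix names, and the function name, are my own and not part of the original notation.</p>

<pre><code>from math import factorial

def capped_value(base, suffixes, cap):
    # Evaluate `base` followed by 'plex'/'bang' suffixes, returning None as
    # soon as the running value is known to exceed `cap` -- the "stop if the
    # calculation exceeds n" step of the comparison procedure above.
    v = base
    for s in suffixes:
        if s == 'plex':
            if v > len(str(cap)):  # 10**v would certainly exceed cap
                return None
            v = 10 ** v
        elif s == 'bang':
            v = factorial(v)       # fine for the moderate caps used here
        if v > cap:
            return None
    return v
</code></pre>

<p>With this, the procedure replaces $x$ $S_1$ stack by plex$^y$ whenever the capped evaluation returns some value $y$, and otherwise knows that the corresponding expression is the larger one.</p>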
<hr>
<p>We could extend the notation to have suffixes that apply $k \to 10\uparrow\uparrow\uparrow k$ (pent), $k \to 10\uparrow\uparrow\uparrow\uparrow k$ (hex), etc. I believe the obvious extension of the above procedure will work, e.g. for expressions with plex, bang, stack, and pent, first count the number of pents; the expression with more pents will be the larger. Otherwise, compare the $n+1$ groups of plex, bang, and stack lexicographically by our previously defined procedure (where $n$ is the number of pents). So long as the number of terms is less than googol-2, this procedure should work.</p>
|
logic | <p>Why is establishing the absolute consistency of ZFC impossible? What are the fundamental limitations that prohibit us with coming up with a proof?</p>
<p><strong>EDIT:</strong> <a href="https://mathoverflow.net/q/24919">This</a> post seems to make the most sense. In short: if we were to come up with a mathematical proof of the consistency of ZFC, we would be able to mimic that proof inside ZFC. Ergo, if ZFC is consistent, there can be no proof that it is.</p>
| <p>This kind of question often includes several common points of confusion.</p>
<h3>Confusing point 1: we cannot talk rigorously about a statement being "unprovable" without reference to the formal system for doing the proof.</h3>
<p>There is a mathematical, formally defined relation of provability between a formal system and a sentence, which defines what it means for the sentence to be "provable" from the formal system. This relation depends on both the statement and the formal system. On the other hand, there is no rigorous notion of "provable" without reference to a formal system.</p>
<p>So it does not really make any sense to talk about a statement being "unprovable" without reference to what system is doing the proving.</p>
<p>In particular, every statement is provable from a formal system that already includes that statement as an axiom. That is somewhat trivial - but even if the statement is not an axiom, it is still the case that the only way for a statement to be provable in a particular system is for the statement to already be a consequence of the axioms of the system. This is true for Con(ZFC) and for every other mathematical statement. Some statements are provable from no axioms at all - these are called logically valid. The incompleteness theorems show in a very strong way that Con(ZFC) is not logically valid.</p>
<h3>Confusing point 2: Con(ZFC) is not different, in the end, than many other statements that we accept as "provable" without reference to consistency.</h3>
<p>There is a particular polynomial <span class="math-container">$P$</span>, with integer coefficients, integer exponents, and many variables, so that the statement Con(ZFC) can be expressed as "there are no positive integer inputs which cause the value of <span class="math-container">$P$</span> to equal 0". This statement is not particularly different in form, for example, than the special case of Fermat's last theorem: "there are not any positive integer inputs <span class="math-container">$x,y,z$</span> such that <span class="math-container">$x^{7}+y^{7}-z^{7} = 0$</span>."</p>
<p>Nobody seriously claims that the proof of that special case of Fermat's last theorem is merely a conditional claim that can never be "absolutely proved" without assuming the consistency of some theory. But Con(ZFC) is not significantly different in form - just longer - than that special case of Fermat's last theorem. Both of these are just statements that some multivariable integer polynomial is never equal to zero on positive integer inputs.</p>
<p>The real situation is that <em>almost nothing</em> can be proved "absolutely". Unless a statement is logically valid, additional axioms will be required to prove it. There is nothing special about Con(ZFC) in that respect. Any proof of a non-trivial theorem will always rely on extra "assumptions", and in a trivial sense those assumptions always have to be at least as strong as the theorem we are trying to prove.</p>
<h3>Confusing point 3: ZFC is not the strongest natural system for set theory</h3>
<p>There are many, many axiom systems for set theory. ZFC is of interest because its axioms are very natural to motivate and because it is strong enough to formalize the vast majority of mathematical theorems. But that does not mean that ZFC is somehow a stopping point. There are natural systems of set theory stronger than ZFC.</p>
<p>One particular example is <a href="https://en.wikipedia.org/wiki/Morse%E2%80%93Kelley_set_theory" rel="noreferrer">Morse--Kelley set theory</a>, MK, which was exposited in the appendix of Kelley's book <em>General Topology</em>. This is a perfectly reasonable system for set theory, which happens to prove Con(ZFC). We should pay attention to the formal meaning of this: there is a finite derivation that has no assumptions besides the axioms of MK, and which ends with Con(ZFC). And note that MK was designed by a mathematician and published in a mathematics textbook for mathematical purposes.</p>
<p>The argument given in <a href="https://mathoverflow.net/a/24919/5442">this MathOverflow answer</a> includes a particular premise: that all "mathematical" techniques are already included in ZFC. That is the "Key Assumption" in that argument, which was alluded to in the question above.</p>
<p>There <em>is</em> some merit in that heuristic argument: many of the standard techniques of mathematics can be formalized in ZFC. The situation is not as clear as we might suppose, though, because of examples like MK set theory, which must be defined as "nonmathematical" in order for the Key Assumption to hold. At the worst, we could find ourselves making a circular argument, where we <em>define</em> "mathematical" to be "formalizable in ZFC" and then argue that ZFC is able to formalize all mathematical arguments.</p>
<p>A deeper question, which cannot be answered because it is ahistorical, is whether MK would be considered a more "natural" system if it had been exposited before ZFC. Just like ZFC, MK is based on a natural intuition about the nature of sets and classes, which leads to the next point.</p>
<h3>An informal, "mathematical" proof of the consistency of ZFC</h3>
<p>What if we don't look at formal theories, and we just look for a "mathematical" but informal proof of the consistency of ZFC? Actually, we have one: there is a well known argument that our intuitive picture of the cumulative hierarchy shows that ZFC is satisfied by the cumulative hierarchy, and thus ZFC is unable to prove a contradiction. Of course, this argument relies on a pre-existing, informal understanding of the cumulative hierarchy. So not all mathematicians will accept it - but it is of interest exactly because it is very compelling as an informal argument.</p>
<p>If we want to separate "mathematical proof" from "formal proof", then it is not at all clear why this kind of proof should be out of the question. Unlike formal proofs, mathematicians may differ on whether they accept this informal proof of Con(ZFC). But it is certainly some kind of informal "mathematical proof" of Con(ZFC), if we are willing to consider informal proofs as mathematical.</p>
<h3>So what is the deal with Con(ZFC)?</h3>
<p>We should really ask <em>why</em> someone would be interested in an "absolute" consistency proof of ZFC. Presumably, it is because they doubt that ZFC is consistent, and they want to shore up their belief (Hilbert's program can be caricatured in this way).</p>
<p>In that case, as soon as we see from the incompleteness theorems that the consistency of ZFC is not a logical validity, we are naturally led to an alternate question: "Which theories are strong enough to prove the consistency of ZFC?" There has been a lot of work on that question, in mathematical logic and foundations.</p>
<p>The incompleteness theorems give part of the answer: no theory that can be interpreted in ZFC can prove Con(ZFC). Examples like MK give another part: there are natural theories strong enough to prove Con(ZFC). In the end, we can choose any formal system we like for each mathematical theorem we want to prove. Some of those theorems are provable in systems that are unable prove Con(ZFC), and some of those theorems require systems that do prove Con(ZFC). Separating those two groups of theorems leads to interesting research in set theory and logic.</p>
| <p>To add to Carl Mummert's answer...</p>
<h2>Confusing point 0: No formal system can be shown to be absolutely consistent.</h2>
<p>Yes you read that right. Perhaps you might say, how about the formal system consisting of just first-order logic and not a single axiom, namely the pure identity theory, or maybe even without equality? Sorry, but that doesn't work. How do you state the claim that this system is absolutely consistent? You would need to work in a meta-system that is powerful enough to reason about sentences that are provable over the formal system. To do so, your meta-system already needs string manipulation capability, which turns out to be more or less equivalent to first-order PA. Hence before you can even <strong>assert</strong> that any formal system is consistent, you need to believe the consistency of the meta-system, which is likely going to be essentially PA, which then means that you <strong>do not ever have</strong> absolute consistency.</p>
<p>Of course, if you <strong>assume</strong> that PA is consistent, then you have some other options, but even then it's not so simple. For example, you may try defining "absolutely consistent" as "provably consistent within PA as the meta-system". This flops on its face because then you cannot even claim that PA itself is absolutely consistent; otherwise it contradicts Gödel's second incompleteness theorem. So you try again, this time defining "T is absolutely consistent" to mean "PA proves that Con(PA) implies Con(T)". Alright, this is better; at least we can now show that PA is absolutely consistent, though only trivially so!</p>
<p>However, note that even with the assumption of consistency of some formal system such as PA, as above, you are still working in some meta-system, and by doing so you are already accepting the consistency of that meta-system without being able to affirm it <a href="https://math.stackexchange.com/a/1334753/21820">non-circularly</a>. Therefore any attempt to define any truly absolute notion of consistency is doomed from the start.</p>
|
probability | <p>$\newcommand{\erf}{\operatorname{erf}}$
This may be a very naïve question, but here goes.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Error_function">error function</a> $\erf$ is defined by
$$\erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}dt.$$
Of course, it is closely related to the normal cdf
$$\Phi(x) = P(N < x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2}dt$$
(where $N \sim N(0,1)$ is a standard normal) by the expression $\erf(x) = 2\Phi(x \sqrt{2})-1$.</p>
<p>My question is:</p>
<blockquote>
<p>Why is it natural or useful to define $\erf$ normalized in this way?</p>
</blockquote>
<p>I may be biased: as a probabilist, I think much more naturally in terms of $\Phi$. However, anytime I want to compute something, I find that my calculator or math library only provides $\erf$, and I have to go check a textbook or Wikipedia to remember where all the $1$s and $2$s go. Being charitable, I have to assume that $\erf$ was invented for some reason other than to cause me annoyance, so I would like to know what it is. If nothing else, it might help me remember the definition.</p>
<p>Wikipedia says:</p>
<blockquote>
<p><em>The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.</em></p>
</blockquote>
<p>So perhaps a practitioner of one of these mysterious "other branches of mathematics" would care to enlighten me. </p>
<p>The most reasonable expression I've found is that
$$P(|N| < x) = \erf(x/\sqrt{2}).$$
This at least gets rid of all but one of the apparently spurious constants, but still has a peculiar $\sqrt{2}$ floating around.</p>
| <p>Some paper chasing netted <a href="http://www.jstatsoft.org/v11/a05/paper">this short article</a> by George Marsaglia, in which he also quotes <a href="http://www.informaworld.com/smpp/content~db=all~content=a911145858~frm=abslink">the article by James Glaisher</a> where the error function was given a name and notation (but with a different normalization). Here's the relevant section of the paper:</p>
<blockquote>
<p>In 1871, J.W. Glaisher published an article on definite integrals in which
he comments that while there is scarcely a function that cannot be put
in the form of a definite integral, for the evaluation of those that
cannot be put in the form of a tolerable series we are limited to
combinations of algebraic, circular, logarithmic and exponential—the
elementary or primary functions. ... He writes:</p>
<blockquote>
<p>The chief point of importance, therefore, is the choice of the
elementary functions; and this is a work of some difficulty. One function
however, viz. the integral $\int_x^\infty e^{-x^2}\mathrm dx$,
well known for its use in physics, is so obviously suitable for the purpose,
that, with the exception of receiving a name and a fixed notation, it may
almost be said to have already become primary... As it is necessary that
the function should have a name, and as I do not know that any has been
suggested, I propose to call it the <em>Error-function</em>, on account of its
earliest and still most important use being in connexion with the theory of
Probability, and notably with the theory of Errors, and to write</p>
<p>$$\int_x^\infty e^{-x^2}\mathrm dx=\mathrm{Erf}(x)$$</p>
</blockquote>
<p>Glaisher goes on to demonstrate use of $\mathrm{Erf}$ in the evaluation of a
variety of definite integrals. We still use "error function" and
$\mathrm{Erf}$, but $\mathrm{Erf}$ has become $\mathrm{erf}$, with a change
of limits and a normalizing factor:
$\mathrm{erf}(x)=\frac2{\sqrt{\pi}}\int_0^x e^{-t^2}\mathrm dt$ while Glaisher’s
original $\mathrm{Erf}$ has become
$\mathrm{erfc}(x)=\frac2{\sqrt{\pi}}\int_x^\infty e^{-t^2}\mathrm dt$. The normalizing
factor $\frac2{\sqrt{\pi}}$ that makes $\mathrm{erfc}(0)=1$ was not used in
early editions of the famous “A Course in Modern Analysis” by Whittaker and
Watson. Both were students and later colleagues of Glaisher, as were other
eminences from Cambridge mathematics/physics: Maxwell, Thomson (Lord Kelvin)
Rayleigh, Littlewood, Jeans, Whitehead and Russell. Glaisher had a long and
distinguished career at Cambridge and was editor of <em>The Quarterly Journal of
Mathematics</em> for fifty years, from 1878 until his death in 1928.</p>
<p>It is unfortunate that changes from Glaisher’s original $\mathrm{Erf}$:
the switch of limits, names and the standardizing factor, did not apply to
what Glaisher acknowledged was its most important application: the normal
distribution function, and thus
$\frac1{\sqrt{2\pi}}\int e^{-\frac12t^2}\mathrm dt$ did not become the
basic integral form. So those of us interested in its most important
application are stuck with conversions...</p>
<p>...A search of the Internet will show many applications of what we now call
$\mathrm{erf}$ or $\mathrm{erfc}$ to problems of the type that seemed of
more interest to Glaisher and his famous colleagues: integral solutions
of differential equations. These include the telegrapher’s equation,
studied by Lord Kelvin in connection with the Atlantic cable, and Kelvin’s
estimate of the age of the earth (25 million years), based on the solution
of a heat equation for a molten sphere (it was far off because of then
unknown contributions from radioactive decay). More recent Internet mentions
of the use of $\mathrm{erf}$ or $\mathrm{erfc}$ for solving
differential equations include short-circuit power dissipation in
electrical engineering, current as a function of time in a switching diode,
thermal spreading of impedance in electrical components, diffusion of a
unidirectional magnetic field, recovery times of junction diodes and
the Mars Orbiter Laser Altimeter.</p>
</blockquote>
<p>On the other hand, for the applications where the error function is to be evaluated at <em>complex</em> values (spectroscopy, for instance), probably the more "natural" function to consider is <a href="http://dx.doi.org/10.1016/0022-4073%2867%2990057-X">Faddeeva's (or Voigt's) function:</a></p>
<p>$$w(z)=\exp\left(-z^2\right)\mathrm{erfc}(-iz)$$</p>
<p>there, the normalization factor simplifies most of the formulae in which it is used. In short, I suppose the choice of whether you use the error function or the normal distribution CDF $\Phi$ or the Faddeeva function in your applications is a matter of convenience.</p>
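<p>For what it's worth, the conversions between $\mathrm{erf}$ and $\Phi$ are easy to check numerically. Here is a small Python sketch using the standard library and SciPy (<code>math.erf</code> and <code>scipy.stats.norm.cdf</code> are existing functions; the test point $x$ is arbitrary):</p>

<pre><code>import math
from scipy.stats import norm  # norm.cdf is the standard normal CDF, Phi

x = 1.3
# erf(x) = 2*Phi(x*sqrt(2)) - 1
print(math.erf(x), 2 * norm.cdf(x * math.sqrt(2)) - 1)
# P(|N| < x) = erf(x/sqrt(2))
print(norm.cdf(x) - norm.cdf(-x), math.erf(x / math.sqrt(2)))
</code></pre>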
| <p>I think the normalization in $x$ is easy to account for: it's natural to write down the integral $\int_0^x e^{-t^2} \, dt$ as an integral even if it's not actually the most natural probabilistic quantity. So it remains to explain the normalization in $y$, and as far as I can tell this is so $\lim_{x \to \infty} \text{erf}(x) = 1$. </p>
<p>Beyond that, the normalization's probably stuck more for historical reasons than anything else. </p>
|
matrices | <p><span class="math-container">$$\det(A^T) = \det(A)$$</span></p>
<p>Using the geometric definition of the determinant as the area spanned by the <em>columns</em>, could someone give a geometric interpretation of the property?</p>
| <p><em>A geometric interpretation in four intuitive steps....</em></p>
<p><strong>The Determinant is the Volume Change Factor</strong></p>
<p>Think of the matrix as a geometric transformation, mapping points (column vectors) to points: $x \mapsto Mx$.
The determinant $\mbox{det}(M)$ gives the factor by which volumes change under this mapping.</p>
<p>For example, in the question you define the determinant as the volume of the parallelepiped whose edges are given by the matrix columns. This is exactly what the unit cube maps to, so again, the determinant is the factor by which the volume changes.</p>
<p><strong>A Matrix Maps a Sphere to an Ellipsoid</strong></p>
<p>Being a linear transformation, a matrix maps a sphere to an ellipsoid.
The singular value decomposition makes this especially clear.</p>
<p>If you consider the principal axes of the ellipsoid (and their preimage in the sphere), the singular value decomposition expresses the matrix as a product of (1) a rotation that aligns the principal axes with the coordinate axes, (2) scalings in the coordinate axis directions to obtain the ellipsoidal shape, and (3) another rotation into the final position.</p>
<p><strong>The Transpose Inverts the Rotation but Keeps the Scaling</strong></p>
<p>The transpose of the matrix is very closely related, since the transpose of a product is the reversed product of the transposes, and the transpose of a rotation is its inverse. In this case, we see that the transpose is given by the inverse of rotation (3), the <em>same</em> scaling (2), and finally the inverse of rotation (1).</p>
<p>(This is almost the same as the inverse of the matrix, except the inverse naturally uses the <em>inverse</em> of the original scaling (2).)</p>
<p><strong>The Transpose has the Same Determinant</strong></p>
<p>Anyway, the rotations don't change the volume -- only the scaling step (2) changes the volume. Since this step is exactly the same for $M$ and $M^\top$, the determinants are the same.</p>
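<p>This account is easy to verify numerically as well; a short NumPy sketch (a random $3\times 3$ matrix stands in for $M$), showing that the scaling factors — the singular values — of $M$ and $M^\top$ coincide, and hence so do the volume change factors:</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

# The singular values (the scalings in step 2) agree for M and M.T ...
print(np.linalg.svd(M)[1], np.linalg.svd(M.T)[1])

# ... so the volume change factors agree as well:
print(np.linalg.det(M), np.linalg.det(M.T))
</code></pre>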
| <p>This is more-or-less a reformulation of Matt's answer.
He relies on the existence of the SVD decomposition; I show that <span class="math-container">$\det(A)=\det(A^T)$</span> can be obtained in a slightly different way.</p>
<p>Every square matrix can be represented as the product of an orthogonal matrix (representing an isometry) and an upper triangular matrix (<a href="http://en.wikipedia.org/wiki/QR_decomposition" rel="noreferrer">QR decomposition</a>), where the determinant of an upper (or lower) triangular matrix is just the product of the elements along the diagonal (which stay in their place under transposition), so, by the Binet formula, <span class="math-container">$A=QR$</span> gives:</p>
<span class="math-container">$$\det(A^T)=\det(R^T Q^T)=\det(R)\det(Q^T)=\det(R)\det(Q^{-1}),$$</span>
<span class="math-container">$$\det(A^T)=\frac{\det{R}}{\det{Q}}=\det(Q)\det(R)=\det(QR)=\det(A),$$</span>
where we used that the transpose of an orthogonal matrix is its inverse, and the determinant of an orthogonal matrix belongs to <span class="math-container">$\{-1,1\}$</span> - since an orthogonal matrix represents an isometry.</p>
<hr />
<p>You can also consider that <span class="math-container">$(*)$</span> the determinant of a matrix is preserved under Gauss-row-moves (replacing a row with the sum of that row with a linear combination of the others) and Gauss-column-moves, too, since the volume spanned by <span class="math-container">$(v_1,\ldots,v_n)$</span> is the same as the volume spanned by <span class="math-container">$(v_1+\alpha_2 v_2+\ldots,v_2,\ldots,v_n)$</span>. By Gauss-row-moves you can put <span class="math-container">$A$</span> in upper triangular form <span class="math-container">$R$</span>, then have <span class="math-container">$\det A=\prod R_{ii}.$</span> If you apply the same moves as column moves on <span class="math-container">$A^T$</span>, you end with <span class="math-container">$R^T$</span>, which is lower triangular and obviously has the same determinant as <span class="math-container">$R$</span>. So, in order to provide a "really geometric" proof that <span class="math-container">$\det(A)=\det(A^T)$</span>, we only need to provide a "really geometric" interpretation of <span class="math-container">$(*)$</span>. An intuition is that the volume of the parallelepiped originally spanned by the columns of <span class="math-container">$A$</span> is the same if we change, for instance, the basis of our vector space by sending <span class="math-container">$(e_1,\ldots,e_n)$</span> into <span class="math-container">$(e_1,\ldots,e_{i-1},e_i+\alpha\, e_j,e_{i+1},\ldots,e_n)\,$</span> with <span class="math-container">$i\neq j$</span>, since the geometric object is the same, and we are only changing its "description".</p>
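<p>As a quick numerical illustration of the QR argument (NumPy's <code>qr</code> returns $Q$ orthogonal and $R$ upper triangular; the random matrix is just a stand-in):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Q, R = np.linalg.qr(A)

# det(A) = det(Q) * prod(diag(R)), with det(Q) in {-1, +1}
print(np.linalg.det(A), np.linalg.det(Q) * np.prod(np.diag(R)))
print(np.linalg.det(A), np.linalg.det(A.T))
</code></pre>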
|
logic | <p>I am trying to understand what “$p$ implies $q$” means. I read that $p$ is a sufficient condition for $q$, and $q$ is a necessary condition for $p$.
Further from <a href="http://en.wikipedia.org/wiki/Necessary_and_sufficient_condition#Definitions">Wikipedia</a>,</p>
<blockquote>
<p>A necessary condition of a statement must be satisfied for the
statement to be true. Formally, a statement $P$ is a necessary condition
of a statement $Q$ if $Q$ implies $P,\quad (Q \Rightarrow P)$.</p>
<p>A sufficient condition is one that, if satisfied, assures the
statement's truth. Formally, a statement $P$ is a sufficient condition
of a statement $Q$ if $P$ implies $Q,\quad (P \Rightarrow Q)$.</p>
</blockquote>
<p>Now what I am stuck with is that if $P$ is not satisfied will the condition still always be true?</p>
| <p>This is a simple matter answered by the truth table of $\Rightarrow$:</p>
<p>$$\begin{array}{ c | c || c | }
P & Q & P\Rightarrow Q \\ \hline
\text T & \text T & \text T \\
\text T & \text F & \text F \\
\text F & \text T & \text T \\
\text F & \text F & \text T
\end{array}$$</p>
<p>This shows that when $P$ is false, the implication is true. Note that this is the <em>definition</em> of the table, there is no need to prove it. This is how $\Rightarrow$ is defined to work.</p>
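<p>In programming terms, $P \Rightarrow Q$ is simply <code>(not P) or Q</code>, and a few lines of Python reproduce the table above:</p>

<pre><code>for P in (True, False):
    for Q in (True, False):
        print(P, Q, (not P) or Q)  # last column is P => Q
</code></pre>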
<p>As an example, here is one:</p>
<p>$$\textbf{If it is raining then there are clouds in the sky}$$</p>
<p>In this case $P=$It is raining, and $Q=$There are clouds in the sky. Note that $P$ is sufficient to conclude $Q$, and $Q$ is necessary for $P$. There is no rain without clouds, and if there are no clouds then there cannot be any rain.</p>
<p>However, note that $P$ is not necessary for $Q$. There could be light clouds without any rain, and there could be clouds of snow and blizzard (which is technically not rain).</p>
| <p>Once I read a classic example which I would like to share with you.
A politician says: "If I win ($p$), then I will cut taxes by half ($q$)."
People will feel cheated only in the case where the politician actually won but taxes remained the same.
In this case, $q$ is false but $p$ is true, so $p \Rightarrow q$ becomes false.
We are fine with all the other situations, so for every other combination of $p$ and $q$ the statement is true.</p>
<p>Let's examine "$p$ is not necessary but sufficient for $q$":</p>
<p>There can be a case where the politician loses but taxes still get reduced, so for the reduction of taxes $p$ is not necessary, but it is sufficient.</p>
<p>Regarding "$q$ is a necessary condition for $p$":
we can say the reduction in taxes is necessary after the politician's victory, or else the entire statement of the politician will be false, i.e. $p \Rightarrow q$ will be false.</p>
|
logic | <p>I'm trying to wrap my head around the relationship between truth in formal logic, as the value a formal expression can take on, as opposed to commonplace notions of truth.</p>
<p>Personal background: When I was taking a class in formal logic last semester, I found that the most efficient way to do my homework was to forget my typical notions of truth and falsehood, and simply treat truth and falsehood as formal, abstract values that formal expressions may be assigned. In other words, I treated everything formally, which from the name of the course is presumably what I was supposed to do while solving problems.</p>
<p>On the other hand, I came out of the course more confused than enlightened about what truth refers to in mathematics. For example, on what levels are each of the following two statements "true"?</p>
<blockquote>
<ol>
<li>There are infinitely many prime numbers.</li>
<li>The empty function $f\colon \emptyset \to \mathbb R $ is injective.</li>
</ol>
</blockquote>
<p>For the first one, I can obviously see that there would be a contradiction if there were only finitely many prime numbers. To me, the classic proof by contradiction is not a "formal proof" or anything; it is merely a natural language argument that proves (in the everyday sense of the word "proves") why the statement must be true (in the everyday sense of the word "true").</p>
<p>On the other hand, I run into trouble when I try to think about the second one. The very concept of the "empty function" doesn't even feel like it makes sense, but if I think about it as the relation between $\emptyset$ and $\mathbb R$ containing no elements, and then try to write out the statement formally, I get (if I did it correctly)</p>
<blockquote>
<p>$
\forall x \forall y ( ((x\in \emptyset) \land (y\in \emptyset) \land (f (x) = f (y))) \implies (x=y))
$</p>
</blockquote>
<p>which I think has to be true in a formal sense (since the antecedent is always false?). But to be honest, I don't really know how to think about "truth" here; the situation feels much more confusing than with the first statement.</p>
<p>So, in conclusion, my questions are:</p>
<blockquote>
<p>In what sense is each of the above statements "true"? And, more generally,</p>
<p>Is the notion of truth in (mathematical) logic <em>just</em> a formal value assigned to expressions? Or should I think of it as encompassing, but also generalizing, the intuitive notion of a true statement?</p>
</blockquote>
<p>(Any insightful comments or answers are appreciated, even if they don't address all of my questions directly.)</p>
| <p>I like to think that mathematical truth is a mathematical model of "real world truth", similar in my mind to the way in which the mathematical real number line $\mathbb{R}$ is a mathematical model of a "real world line", and similarly for many other mathematical models. </p>
<p>In order to achieve the level of rigor needed to do mathematics, sometimes the description of the mathematical model has formal details that perhaps do not reflect anything in particular that one sees in the real world. Oh well! That's just the way things go. </p>
<p>So yes, the empty function is injective. It's a formal consequence of how we axiomatize mathematical truth.</p>
<p>And, by the way, yes, there are infinitely many primes. The classical proof by contradiction that you feel is a natural language proof and not really a "formal proof" is actually not very hard to formalize at all. Part of the training of a mathematician is (a) to use our natural intuition, experience, or whatever, in order to come up with natural language proofs and then (b) to turn those natural language proofs into formal proofs.</p>
| <p>Your main issue here seems to be that you are wondering how all the following statements:</p>
<blockquote>
<p>If the Earth is flat, then the Earth exists.</p>
<p>If the Earth is flat, then the Earth does not exist.</p>
<p>If there is life on Europa, then the Earth exists.</p>
</blockquote>
<p>could possibly be meaningfully assigned the same truth value in the real world. These are called vacuous truths, the first two because the falsity of the condition means that the consequent is irrelevant, and the third because the truth of the consequent means that the condition is irrelevant. One could interpret logic as a game of some sort, where the prover tries to convince the refuter of his claim. If the prover makes a claim of the form:</p>
<blockquote>
<p>If A then B.</p>
</blockquote>
<p>then the refuter must try to refute it. How? She must convince the prover that A is true but yet B is false. Back to our vacuous examples, the refuter must convince the prover that the Earth is flat. Nah... That's not going to happen, which is why the refuter can't refute the prover. In the third case, the refuter must convince the prover that the Earth doesn't exist. Again, no way...</p>
<p>On the other hand, the prover can prove the first two claims by showing that the Earth is not flat! (Come, follow me around the globe in eighty hours.) After doing this he can convince the refuter that he can always keep his promise because it can't be broken; the Earth is not flat, so the condition of his promise will never come to pass. The consequent part of his promise is irrelevant. In the third case, the condition part is immaterial because the prover can convince the refuter that no matter whether she can show that there is life on Europa, he can convince her that the Earth exists.</p>
<p>This is exactly the same as when you talk about an empty function being injective:</p>
<blockquote>
<p>Any function with empty domain is injective.</p>
</blockquote>
<p>which expands to:
<span class="math-container">$\def\none{\varnothing}$</span></p>
<blockquote>
<p>Given any function <span class="math-container">$f$</span> such that <span class="math-container">$Dom(f) = \none$</span>, and any <span class="math-container">$a,b \in Dom(f)$</span>, if <span class="math-container">$f(a) = f(b)$</span> then <span class="math-container">$a = b$</span>.</p>
</blockquote>
<p>Well, what does the prover have to do to convince the refuter? He says, give me any function <span class="math-container">$f$</span> such that <span class="math-container">$Dom(f) = \none$</span>, and give me any <span class="math-container">$a,b \in Dom(f)$</span>! The refuter simply can't! There isn't any object in <span class="math-container">$Dom(f)$</span>!</p>
<p>But wait, you say, how about the also true statement:</p>
<blockquote>
<p>Any function with singleton domain is injective.</p>
</blockquote>
<p>which expands to:</p>
<blockquote>
<p>Given any function <span class="math-container">$f$</span> such that <span class="math-container">$Dom(f) = \{x\}$</span> for some <span class="math-container">$x$</span>, and any <span class="math-container">$a,b \in Dom(f)$</span>, if <span class="math-container">$f(a) = f(b)$</span> then <span class="math-container">$a = b$</span>.</p>
</blockquote>
<p>This time the refuter can continue the game. She gives the prover a function <span class="math-container">$f$</span> and provides an <span class="math-container">$x$</span> such that <span class="math-container">$Dom(f) = \{x\}$</span>, and also gives him <span class="math-container">$a,b \in Dom(f)$</span>. But then the prover now tells her: See? You assured me that every object in <span class="math-container">$Dom(f)$</span> is equal to <span class="math-container">$x$</span>, so you've to accept that <span class="math-container">$a = x$</span> and <span class="math-container">$b = x$</span>, and hence by meaning of equality <span class="math-container">$a = b$</span>. Now I can convince you that if <span class="math-container">$f(a) = f(b)$</span> then <span class="math-container">$a = b$</span>. (This is exactly the third kind of vacuous statement that we discussed at the beginning.) Indeed, haven't I already convinced you that <span class="math-container">$a = b$</span>, so you don't need to even bother to show me that <span class="math-container">$f(a) = f(b)$</span>?</p>
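<p>Incidentally, this convention is built into programming languages as well: a universal statement over an empty domain holds vacuously, because there is no element for the refuter to offer. A tiny Python illustration (the dictionary standing in for $f$ is my own device):</p>

<pre><code>dom = []  # the empty domain
f = {}    # a function with empty domain, modelled as a dict

# "for all a, b in dom: f(a) = f(b) implies a = b" -- vacuously true,
# since the generator below produces no cases at all:
print(all(a == b for a in dom for b in dom if f[a] == f[b]))  # True
</code></pre>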
|
probability | <p>If we have a probability space <span class="math-container">$(\Omega,\mathcal{F},P)$</span> and <span class="math-container">$\Omega$</span> is partitioned into pairwise disjoint subsets <span class="math-container">$A_{i}$</span>, with <span class="math-container">$i\in\mathbb{N}$</span>, then the <a href="https://en.wikipedia.org/wiki/Law_of_total_probability" rel="noreferrer">law of total probability</a> says that <span class="math-container">$P(B)=\sum_{i\in\mathbb{N}}P(B|A_{i})P(A_{i})$</span>. This law can be proved using the following two facts:
<span class="math-container">\begin{align*}
P(B|A_{i})&=\frac{P(B\cap A_{i})}{P(A_{i})}\\
P\left(\bigcup_{i\in \mathbb{N}} S_{i}\right)&=\sum_{i\in\mathbb{N}}P(S_{i})
\end{align*}</span>
where the <span class="math-container">$S_{i}$</span>'s are a pairwise disjoint, <span class="math-container">$\textit{countable}$</span> family of events in <span class="math-container">$\mathcal{F}$</span>.</p>
<p>However, if we want to apply the law of total probability on a continuous random variable <span class="math-container">$X$</span> with density <span class="math-container">$f$</span>, we have (<a href="https://en.wikipedia.org/wiki/Law_of_total_probability#Continuous_case" rel="noreferrer">like here</a>):
<span class="math-container">$$P(A)=\int_{-\infty}^{\infty}P(A|X=x)f(x)dx$$</span>
which is the law of total probability but with the summation replaced with an integral, and <span class="math-container">$P(A_{i})$</span> replaced with <span class="math-container">$f(x)dx$</span>. The problem is that we are conditioning on an <span class="math-container">$\textit{uncountable}$</span> family. Is there any proof of this statement (if true)?</p>
| <p>Excellent question. The issue here is that you first have to define what <span class="math-container">$\mathbb{P}(A|X=x)$</span> means, as you're conditioning on the event <span class="math-container">$[X=x]$</span>, which has probability zero if <span class="math-container">$X$</span> is a continuous random variable. Can we still give <span class="math-container">$\mathbb{P}(A|X=x)$</span> a meaning? In the words of Kolmogorov,</p>
<blockquote>
<p>"The concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible."</p>
</blockquote>
<p>The problem with conditioning on a single event of probability zero is that it can lead to paradoxes, such as the <a href="https://en.wikipedia.org/wiki/Borel%E2%80%93Kolmogorov_paradox" rel="noreferrer">Borel-Kolmogorov paradox</a>. However, if we don't just have an isolated hypothesis such as <span class="math-container">$[X=x]$</span>, but a whole partition of hypotheses <span class="math-container">$\{[X=x] ~|~ x \in \mathbb{R}\}$</span> with respect to which our notion of conditional probability is supposed to make sense, we can give a meaning to <span class="math-container">$\mathbb{P}(A|X=x)$</span> for almost every <span class="math-container">$x$</span>. Let's look at an important special case.</p>
<hr />
<h2>Continuous random variables in Euclidean space</h2>
<p>In many instances where we might want to apply the law of total probability for continuous random variables, we are actually interested in events of the form <span class="math-container">$A = [(X,Y) \in B]$</span> where <span class="math-container">$B$</span> is a Borel set and <span class="math-container">$X,Y$</span> are random variables taking values in <span class="math-container">$\mathbb{R}^d$</span> which are absolutely continuous with respect to Lebesgue measure. For simplicity, I will assume here that <span class="math-container">$X,Y$</span> take values in <span class="math-container">$\mathbb{R}$</span>, although the multivariate case is completely analogous. Choose a representative of <span class="math-container">$f_{X,Y}$</span>, the density of <span class="math-container">$(X,Y)$</span>, and a representative of <span class="math-container">$f_X$</span>, the density of <span class="math-container">$X$</span>, then the conditional density of <span class="math-container">$Y$</span> given <span class="math-container">$X$</span> is defined as <span class="math-container">$$ f_{Y|X}(x,y) = \frac{f_{X,Y}(x,y)}{f_{X}(x)}$$</span> at all points <span class="math-container">$(x,y)$</span> where <span class="math-container">$f_X(x) > 0$</span>. We may then define for <span class="math-container">$A = [(X,Y) \in B]$</span> and <span class="math-container">$B_x := \{ y \in \mathbb{R} : (x,y) \in B\}$</span></p>
<p><span class="math-container">$$\mathbb{P}(A | X = x) := \int_{B_x} f_{Y|X}(x,y)~\mathrm{d}y, $$</span>
at least at all points <span class="math-container">$x$</span> where <span class="math-container">$f_X(x) > 0$</span>. Note that this definition depends on the choice of representatives we made for the densities <span class="math-container">$f_{X,Y}$</span> and <span class="math-container">$f_{X}$</span>, and we should keep this in mind when trying to interpret <span class="math-container">$P(A|X=x)$</span> pointwise. Whichever choice we made, the law of total probability holds as can be seen as follows:</p>
<p><span class="math-container">\begin{align*} \mathbb{P}(A) &= \mathbb{E}[1_{B}(X,Y)] = \int_{B} f_{X,Y}(x,y)~\mathrm{d}y~\mathrm{d}x = \int_{-\infty}^{\infty}\int_{B_x} f_{X,Y}(x,y)~\mathrm{d}y~\mathrm{d}x \\
&= \int_{-\infty}^{\infty}f_{X}(x)\int_{B_x} f_{Y|X}(x,y)~\mathrm{d}y~\mathrm{d}x = \int_{-\infty}^{\infty}\mathbb{P}(A|X=x)~ f_X(x)~\mathrm{d}x. \end{align*}</span></p>
<p>One can convince themselves that this construction gives us the properties we would expect if, for example, <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent, which should give us some confidence that this notion of conditional probability makes sense.</p>
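<p>As a sanity check of this construction, here is a small numerical example with SciPy (the specific choice of <span class="math-container">$X$</span>, <span class="math-container">$Y$</span> and <span class="math-container">$A$</span> is mine). Take <span class="math-container">$X, Y$</span> independent standard normals and <span class="math-container">$A = [X + Y > 0]$</span>, so that <span class="math-container">$\mathbb{P}(A|X=x) = 1 - \Phi(-x) = \Phi(x)$</span>, and the integral should come out to <span class="math-container">$\mathbb{P}(A) = 1/2$</span>:</p>

<pre><code>import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# P(A | X = x) * f_X(x), with A = [X + Y > 0] and Y ~ N(0,1) independent of X
def integrand(x):
    return norm.cdf(x) * norm.pdf(x)

value, err = quad(integrand, -np.inf, np.inf)
print(value)  # ~ 0.5 = P(A)
</code></pre>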
<hr />
<h2>Disintegrations</h2>
<p>The more general name for the concept we dealt with in the previous paragraph is <a href="https://en.wikipedia.org/wiki/Disintegration_theorem#Statement_of_the_theorem" rel="noreferrer">disintegration</a>. In complete generality, disintegrations need not exist, however if the probability space <span class="math-container">$\Omega$</span> is a Radon space equipped with its Borel <span class="math-container">$\sigma$</span>-field, they do. It might seem off-putting that the topology of the probability space now comes into play, but I believe for most purposes it will not be a severe restriction to assume that the probability space is a (possibly infinite) product of the space <span class="math-container">$([0,1],\mathcal{B},\lambda)$</span>, that is, <span class="math-container">$[0,1]$</span> equipped with the Euclidean topology, Borel <span class="math-container">$\sigma$</span>-field and Lebesgue measure. A one-dimensional variable <span class="math-container">$X$</span> can then be understood as <span class="math-container">$X(\omega) = F^{-1}(\omega)$</span>, where <span class="math-container">$F^{-1}$</span> is the generalized inverse of the cumulative distribution function of <span class="math-container">$X$</span>. The <a href="https://en.wikipedia.org/wiki/Disintegration_theorem#Statement_of_the_theorem" rel="noreferrer">disintegration theorem</a> then gives us the existence of a family of measures <span class="math-container">$(\mu_x)_{x \in \mathbb{R}}$</span>, where <span class="math-container">$\mu_x$</span> is supported on the event <span class="math-container">$[X=x]$</span>, and the family <span class="math-container">$(\mu_x)_{x\in \mathbb{R}}$</span> is unique up to <span class="math-container">$\text{law}(X)$</span>-almost everywhere equivalence. Writing <span class="math-container">$\mu_x$</span> as <span class="math-container">$\mathbb{P}(\cdot|X=x)$</span>, in particular, for any Borel set <span class="math-container">$A \in \mathcal{B}$</span> we then again have</p>
<p><span class="math-container">$$\mathbb{P}(A) = \int_{-\infty}^{\infty} \mathbb{P}(A|X=x)~f_X(x)~\mathrm{d}x.$$</span></p>
<hr />
<p>Reference for Kolmogorov quote:</p>
<p><em>Kolmogoroff, A.</em>, Grundbegriffe der Wahrscheinlichkeitsrechnung., Ergebnisse der Mathematik und ihrer Grenzgebiete 2, Nr. 3. Berlin: Julius Springer. IV + 62 S. (1933). <a href="https://zbmath.org/?q=an:59.1152.03" rel="noreferrer">ZBL59.1152.03</a>.></p>
| <p>Think of it like this:
Suppose you have a continuous random variable <span class="math-container">$X$</span> with pdf <span class="math-container">$f(x)$</span>. Then <span class="math-container">$P(A)=E(1_{A})=E[E(1_{A}|X)]=\int E(1_{A}|X=x)f(x)dx=\int P(A|X=x)f(x)dx$</span>.</p>
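<p>A quick Monte Carlo check of both ends of this chain of equalities, for instance with <span class="math-container">$A = [X + Y > 0]$</span> and <span class="math-container">$X, Y$</span> independent standard normals (my choice of example):</p>

<pre><code>import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
X, Y = rng.standard_normal((2, 10**6))

print((X + Y > 0).mean())  # estimates P(A) = E[1_A], about 0.5
print(norm.cdf(X).mean())  # estimates E[E(1_A | X)] = E[Phi(X)]
</code></pre>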
|