tag | question_body | accepted_answer | second_answer
---|---|---|---|
combinatorics | <p>If $\mathrm P(X=k)=\binom nkp^k(1-p)^{n-k}$ for a binomial distribution, then from the definition of the expected value
$$\mathrm E(X) = \sum^n_{k=0}k\mathrm P(X=k)=\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}$$ but the expected value of a Binomial distribution is $np$,
so how is </p>
<blockquote>
<p>$$\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}=np$$</p>
</blockquote>
| <p>The main idea is to factor out $np$. I believe we can rewrite:</p>
<p>$$\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}= \sum^n_{k=1} k\binom nkp^k(1-p)^{n-k}$$</p>
<p>Factoring out $np$ and cancelling the $k$'s, this gives:</p>
<p>$$\sum^n_{k=1} k\binom nkp^k(1-p)^{n-k} = np \sum^n_{k=1} \dfrac{(n-1)!}{(n-k)!(k-1)!}p^{k-1}(1-p)^{n-k}$$</p>
<p>Notice that the RHS is:</p>
<p>$$np \sum^n_{k=1} \dfrac{(n-1)!}{(n-k)!(k-1)!}p^{k-1}(1-p)^{n-k} = np \sum^n_{k=1} \binom {n-1}{k-1}p^{k-1}(1-p)^{n-k},$$</p>
<p>and since $\displaystyle \sum^n_{k=1} \binom {n-1}{k-1}p^{k-1}(1-p)^{n-k} = (p + (1-p))^{n-1} = 1$, we therefore indeed have </p>
<p>$$\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k} = np$$.</p>
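<p>As a quick numerical sanity check of this identity, here is a minimal Python sketch (the function name and the parameter values are illustrative assumptions):</p>
<pre><code>from math import comb

def binomial_mean(n, p):
    """Evaluate sum_{k=0}^n k * C(n,k) * p^k * (1-p)^(n-k) directly."""
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

print(binomial_mean(10, 0.3), 10 * 0.3)   # both print 3.0 (up to rounding)
</code></pre>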
| <p>Let $B_i=1$ if we have a success on the $i$-th trial, and $0$ otherwise. Then the number $X$ of successes is $B_1+B_2+\cdots +B_n$. But then by the linearity of expectation, we have
$$E(X)=E(B_1+B_2+\cdots+B_n)=E(B_1)+E(B_2)+\cdots +E(B_n).$$
It is easy to verify that $E(B_i)=p$, so $E(X)=np$.</p>
<p>You wrote down <em>another</em> expression for the mean. So the above argument shows that the combinatorial identity of your problem is correct. You can think of it as a mean proof of a combinatorial identity.</p>
<p><strong>Remark:</strong> A very similar argument to the one above can be used to compute the variance of the binomial.</p>
<p>The linearity of expectation holds even when the random variables are not independent. Suppose we take a sample of size $n$, <strong>without replacement,</strong> from a box that has $N$ objects, of which $G$ are good. The <em>same</em> argument shows that the expected number of good objects in the sample is $n\dfrac{G}{N}$. This is somewhat unpleasant to prove using combinatorial manipulation.</p>
|
matrices | <p>After looking in my book for a couple of hours, I'm still confused about what it means for a $(n\times n)$-matrix $A$ to have a determinant equal to zero, $\det(A)=0$.</p>
<p>I hope someone can explain this to me in plain English.</p>
| <p>For an $n\times n$ matrix, each of the following is equivalent to the condition of the matrix having determinant $0$:</p>
<ul>
<li><p>The columns of the matrix are dependent vectors in $\mathbb R^n$</p></li>
<li><p>The rows of the matrix are dependent vectors in $\mathbb R^n$</p></li>
<li><p>The matrix is not invertible. </p></li>
<li><p>The volume of the parallelepiped determined by the column vectors of the matrix is $0$.</p></li>
<li><p>The volume of the parallelepiped determined by the row vectors of the matrix is $0$.</p></li>
<li><p>The system of homogeneous linear equations represented by the matrix has a non-trivial solution. </p></li>
<li><p>The determinant of the linear transformation determined by the matrix is $0$. </p></li>
<li><p>The constant term of the characteristic polynomial of the matrix is $0$. </p></li>
</ul>
<p>Depending on the definition of the determinant you saw, proving each equivalence can be more or less hard.</p>
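<p>For a concrete illustration of several of these equivalences at once, here is a minimal Python sketch (the matrix is an arbitrary example of my choosing): its third column is the sum of the first two, so the columns are dependent, the determinant vanishes, and a non-trivial null vector exists.</p>
<pre><code>import numpy as np

A = np.array([[1.0, 2.0,  3.0],
              [4.0, 5.0,  9.0],
              [7.0, 8.0, 15.0]])     # col3 = col1 + col2

print(np.linalg.det(A))              # 0 (up to floating-point error)
print(np.linalg.matrix_rank(A))      # 2, so A is not invertible

# a non-trivial solution of Ax = 0, namely x = (1, 1, -1)
x = np.array([1.0, 1.0, -1.0])
print(A @ x)                         # the zero vector
</code></pre>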
| <p>For me, this is the most intuitive video on the web that explains determinants, and everyone who wants a deep and visual understanding of this topic should watch it:</p>
<p><a href="https://www.youtube.com/watch?v=Ip3X9LOh2dk" rel="noreferrer">The determinant by 3Blue1Brown</a></p>
<p>The whole playlist is available at this link:</p>
<p><a href="https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="noreferrer">Essence of linear algebra by 3Blue1Brown</a></p>
<p>The crucial part of the series is "Linear transformations and matrices". If you understand that well, everything else will be like a piece of cake. Literally: plain English + visual.</p>
|
combinatorics | <p>This was asked on sci.math ages ago, and never got a satisfactory answer.</p>
<blockquote>
<p>Given a number of sticks of integral length $ \ge n$ whose lengths
add to $n(n+1)/2$. Can these always be broken (by cuts) into sticks of
lengths $1,2,3, \ldots ,n$?</p>
</blockquote>
<p>You are not allowed to glue sticks back together. Assume you have an accurate measuring device.</p>
<p>More formally, is the following conjecture true? (Taken from iwriteiam link below).</p>
<blockquote>
<p><strong>Cutting Sticks Conjecture</strong>: For all natural numbers $n$, and any given sequence $a_1, .., a_k$ of
natural numbers greater than or equal to $n$ whose sum equals
$n(n+1)/2$, there exists a partitioning $(P_1, .., P_k)$ of $\{1, .., n\}$
such that the sum of the numbers in $P_i$ equals $a_i$, for all $1 \leq i \leq k$.</p>
</blockquote>
<p>Some links which discuss this problem:</p>
<ul>
<li><a href="http://www.iwriteiam.nl/cutsticks.html">http://www.iwriteiam.nl/cutsticks.html</a></li>
</ul>
| <p>This is not a solution, just something I found that might be relevant.</p>
<p>On the page linked to in the question, a reduction and various strategies are considered. I'll briefly reproduce the reduction, both because I think it's the most useful part of that page and perhaps not everyone will want to read that entire page, and also because I need it to say what I found.</p>
<p>Let a counterexample with minimal $n$ be given. If one of the sticks were of length $n$, we could use that stick as the target stick of length $n$ and cut the remaining sticks into lengths $1$ through $n-1$, since otherwise they would form a smaller counterexample. Likewise, if one of the sticks had length greater than $2n-2$, we could cut off a stick of length $n$ and the remaining sticks would all be of length $\ge n-1$, so again we could cut them into lengths $1$ through $n-1$ because otherwise they would form a smaller counterexample. Thus,</p>
<blockquote>
<p>the lengths of the sticks in a counterexample with minimal $n$ must be $\gt n$ and $\lt 2n-1$.</p>
</blockquote>
<p>Problem instances that satisfy these conditions for a potential minimal counterexample are called "hard" on that page; I suggest we adopt that terminology here.</p>
<p>The strategies discussed on that page include various ways of forming the target sticks in order of decreasing length. It was found that there are counterexamples both for the strategy of always cutting the next-longest target stick from the shortest possible remaining stick (counterexample $\langle11,12,16,16\rangle$) and for the strategy of always cutting the next-longest target stick from the longest remaining stick unless it already exists (counterexample $\langle10,10,12,13\rangle$), whereas if the stick to cut from was randomized, it was always possible to form the desired sticks up to $n=23$.</p>
<p>I've checked that all hard problem instances up to $n=30$ are solvable, and I found that they remain solvable independent of which stick we cut the target stick of length $n$ from. This is equivalent to saying that a problem instance for $n-1$ can always be solved if all stick lengths except one are $\gt n$ and $\lt 2n-1$ and one is $\lt n-1$, since all of these
instances can result from cutting a stick of length $n$ from a hard problem instance for $n$.</p>
<p>I thought that this might be generalized to the solvability of an instance being entirely determined by whether the sticks of length $\le n$ can be cut to form distinct integers, but that's not the case, since it's possible to leave only a few holes below $n$ such that the few remaining sticks above $n$ can't fill them.</p>
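<p>Since checking solvability of a given instance comes up repeatedly above, here is a minimal backtracking sketch of such a checker (all names are my own assumptions; this is not the program used for the $n=30$ search). It tries to cut the pieces $n, n-1, \ldots, 1$ from the sticks, largest first, skipping sticks of equal remaining length to prune symmetric branches.</p>
<pre><code>def solvable(sticks, n):
    """Can sticks (summing to n(n+1)/2) be cut into pieces 1..n?"""
    assert sum(sticks) == n * (n + 1) // 2
    caps = sorted(sticks, reverse=True)

    def place(k):
        if k == 0:
            return True   # all pieces placed; the leftovers sum to zero
        tried = set()
        for i, c in enumerate(caps):
            if c >= k and c not in tried:
                tried.add(c)
                caps[i] -= k
                if place(k - 1):
                    return True
                caps[i] += k
        return False

    return place(n)

# the "hard" instance for n = 10 mentioned above
print(solvable([11, 12, 16, 16], 10))   # True
</code></pre>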
| <p>I have implemented the suggestion I made in a comment to joriki's answer. For $3 \le n \le 18$, I have generated a list of subsets $S \subset \{1,2,...,n-1\}$ with the property that if a set of sticks with total length $n(n+1)/2$ takes all the lengths in $S$, together with any other lengths ≥n, then the sticks can always be cut into sticks of length $1,2,...,n$. It is available at <a href="http://www.megafileupload.com/en/file/326981/Sticks-txt.html">this link</a> (it's about 900K). </p>
<p>I stared at it for a while, but nothing jumped out at me.</p>
<p><strong>Edited to add:</strong> I have changed the program to output the sets in a more human-friendly order: <a href="http://pastebin.com/0yL3rvnJ">part 1 (n = 1 to 17)</a> and <a href="http://pastebin.com/LzwUSUDS">part 2 (n = 18)</a>.</p>
|
game-theory | <p>I thought a lot about this question — and initially, I intended to ask this on <a href="https://gamedev.stackexchange.com/">gamedev.stackexchange.com</a> — but due to its rather theoretical aspects, I think it might be more appropriate to address a rather mathematical spectrum of readers.</p>
<p>Imagine you had to design a dungeon in a game that comprises many puzzles that are mutually dependent on one another. If you lack an example, just think back to the times you've played Zelda or any other video game of this sort:</p>
<p><img src="https://i.sstatic.net/FzVLu.png" alt="A Zelda dungeon"></p>
<p><strong>Rules</strong>:</p>
<p>The player may freely wander around the dungeon; some rooms are locked, some aren't. The main goal is to solve puzzles in rooms $\{R_0, ..., R_n\}$, either to obtain keys that let the player enter a locked room, or to obtain an item that allows the player to solve a puzzle in some room $R_k$ that wasn't solvable before without that item's ability.</p>
<p>Every key works with every door. Once a key is used, the door is unlocked forever and the player loses the key. In the end, the player needs to reach a specific room $R_b$ where he can fight the dungeon's end boss in order to finish the dungeon. The dungeon is <strong>solvable</strong> iff the player:</p>
<ul>
<li>can reach rooms $R_0$ to $R_n$, regardless of the order chosen </li>
<li>has collected all items $I_0$ to $I_m$ that he needs to fight the dungeon's end boss once the player enters $R_b$</li>
</ul>
<p><strong>The goal</strong>:</p>
<p>Your goal is to design the dungeon in a way that guarantees maximum <strong><a href="http://en.wikipedia.org/wiki/Nonlinear_gameplay" rel="nofollow noreferrer">nonlinearity</a></strong>, i.e., the player should, at any point of time in the dungeon, be able to try to solve as many puzzles as possible and thus be able to explore as many regions as possible without harming the solvability of the dungeon.</p>
<p><strong>The problem</strong>:</p>
<p>For example, imagine the player has two keys, but then utilizes both of them in order to get through two consecutive doors. The player then finds himself in a puzzle room whose solution requires some item $I_\alpha$ that he should actually have gotten using the two keys that he now does not possess anymore. Unfortunately, the player finds himself in a gridlock — the dungeon is now unsolvable.</p>
<p>Can you guarantee that your dungeon design is free of gridlocks?</p>
<p><strong>Disclaimer</strong></p>
<p>I am not asking for any <em>implementation</em>, <em>code</em> or any of the like. This is a <strong>strictly theoretical question</strong> that I want to consider from a <strong>strictly mathematical</strong> point of view. Any suggestions that exceed these limitations (by providing any material such as the previously mentioned) are — albeit highly appreciated — explicitly not asked for.</p>
| <p>This might not be exactly what you're looking for, but I think the safest (and probably easiest) way to accomplish this might be to design the whole dungeon to be (at least mostly) linear at first, then adding in some choices for the player, so it doesn't seem as linear.</p>
<p>Say you come up with some plan: Visiting the rooms in the order 1,4,5,3,2 will allow the player to succeed, so make sure the necessary items are placed to allow the player to do that. Then you could conceivably move some of the necessary items around - maybe they would find the item that allows them to open room 3 in the first room as well, but they wouldn't know if they should head to room 4 or 3 first, for a little non-linearity.</p>
<p>Alternatively, if the dungeon is fairly large, you could break it up into a few underlying linear pieces: Let's say it's got 4 groups of 5 rooms, call the groups $A,B,C$, and $D$. From $A$, you could complete either $B$ or $C$ first, while both $B$ and $C$ would need to be explored in order to get everything to work on the $D$ group of rooms. And from within $A$, perhaps you could start exploring the $C$ group, but couldn't finish until you'd exhausted $A$, just to layer in some additional complexity.</p>
<p>A mathematical object that would help seems fairly tough. I imagine either <a href="http://en.wikipedia.org/wiki/Partially_ordered_set" rel="nofollow">partially ordered sets</a> or <a href="http://en.wikipedia.org/wiki/Directed_graph" rel="nofollow">directed graphs</a> could help you map out the dependencies. For directed graphs, the issue that seems difficult is that you've got two different things to consider - rooms and items. Either way, if the dungeon is complicated enough, then either of the above might grow too complicated to be of much help. I think partially ordered sets would be the most promising, and I can draw an example of how I picture it helping, if you're interested.</p>
<p>I wish I knew of a great mathematical tool that could help you, so my most promising idea is chunking the dungeon up into semi-linear groups. That way you can focus on a collection of smaller problems individually, make sure the dependencies all work out, and add in some complexity safely.</p>
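<p>To make the directed-graph idea slightly more concrete, here is a minimal Python sketch of a gridlock check by brute-force state-space search; the dungeon encoding and all names are my own assumptions, and the toy dungeon reproduces the two-keys gridlock from the question. A design is gridlock-free iff the boss room is reachable from every reachable state.</p>
<pre><code>from collections import deque

# Toy dungeon: rooms 0..3, two keys in room 0, locked doors
# 0-1, 0-2 and 2-3, boss in room 3 (reachable only through door 2-3).
KEYS_IN = {0: 2}                 # room -&gt; number of keys found there
DOORS = (frozenset({0, 1}), frozenset({0, 2}), frozenset({2, 3}))
BOSS = 3
START = (0, frozenset({0}), frozenset())   # (room, visited, opened)

def keys_held(visited, opened):
    return sum(KEYS_IN.get(r, 0) for r in visited) - len(opened)

def successors(state):
    room, visited, opened = state
    for d in DOORS:
        if room in d:
            other = next(iter(d - {room}))
            if d in opened:
                yield (other, visited | {other}, opened)
            elif keys_held(visited, opened) &gt; 0:
                yield (other, visited | {other}, opened | {d})

def explore(start):
    seen, queue = {start}, deque([start])
    while queue:
        for t in successors(queue.popleft()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def gridlocked(state):
    return all(s[0] != BOSS for s in explore(state))

stuck = [s for s in explore(START) if gridlocked(s)]
print(stuck)   # non-empty: opening doors 0-1 and 0-2 strands the player
</code></pre>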
| <p>The maximum non-linearity is achieved by allowing the player to tackle all of the puzzles in any order. This means that all of the puzzles can be accessed without using any keys, and that the keys must all be used in a linear corridor which separates the area containing the puzzles from $R_b$.</p>
<p>However, I suspect this isn't what you're looking for, so you may need to refine the question.</p>
|
matrices | <p>If the column vectors of a matrix $A$ are all orthogonal and $A$ is a square matrix, can I say that the row vectors of matrix $A$ are also orthogonal to each other?</p>
<p>From the equation $Q \cdot Q^{T}=I$, if $Q$ is an orthogonal, square matrix, it seems that this is true, but I still find it hard to believe. I have a feeling that I may still be wrong, because those column vectors that are perpendicular are vectors within the column space. Taking the row vectors gives totally different directions from the column vectors in the row space, so how could they always happen to be perpendicular?</p>
<p>Thanks for any help.</p>
| <p>Recall that two vectors are orthogonal if and only if their inner product is zero. You are incorrect in asserting that if the columns of $Q$ are orthogonal to each other then $QQ^T = I$; this follows if the columns of $Q$ form an <em>orthonormal</em> set (basis for $\mathbb{R}^n$); orthogonality is not sufficient. Note that "$Q$ is an orthogonal matrix" is <strong>not</strong> equivalent to "the columns of $Q$ are pairwise orthogonal".</p>
<p>With that clarification, the answer is that if you only ask that the columns be pairwise orthogonal, then the rows need not be pairwise orthogonal. For example, take
$$A = \left(\begin{array}{ccc}1& 0 & 0\\0& 0 & 1\\1 & 0 & 0\end{array}\right).$$
The columns are orthogonal to each other: the middle column is orthogonal to everything (being the zero vector), and the first and third columns are orthogonal. However, the rows are not orthogonal, since the first and third rows are equal and nonzero.</p>
<p>On the other hand, if you require that the columns of $Q$ be an <strong>orthonormal</strong> set (pairwise orthogonal, and the inner product of each column with itself equals $1$), then it <em>does</em> follow: precisely as you argue. That condition <em>is</em> equivalent to "the matrix is orthogonal", and since $I = Q^TQ = QQ^T$ and $(Q^T)^T = Q$, it follows that if $Q$ is orthogonal then so is $Q^T$, hence the columns of $Q^T$ (i.e., the <em>rows</em> of $Q$) form an orthonormal set as well. </p>
| <p>Even if $A$ is non-singular, orthogonality of columns by itself does not guarantee orthogonality of rows. Here is a 3x3 example:
$$ A = \left( \begin{matrix}
1 & \;2 & \;\;5 \\ 2 & \;2 & -4 \\ 3 & -2 & \;\;1
\end{matrix} \right) $$
Column vectors are orthogonal, but row vectors are not orthogonal.</p>
<p>On the other hand, orthonormality of columns guarantees orthonormality of rows, and vice versa.</p>
<p>As a footnote, one of the forms of Hadamard's inequality bounds the absolute value of the determinant of a matrix by the product of the norms of its column vectors, with equality exactly when those vectors are orthogonal. In the case of the above matrix, as the columns are orthogonal, $84$ is the maximum possible absolute value of the determinant ($\det(A) = -84$) for column vectors with the given norms ($\sqrt {14}, 2\sqrt 3$ and $ \sqrt {42}$ respectively).</p>
<p>Although $\det(A)=\det(A^T)$, Hadamard's inequality implies neither orthogonality of the rows of <em>A</em> nor that the absolute value of the determinant is maximal for the given norms of the row vectors ($ \sqrt{30}, 2\sqrt 6$ and $ \sqrt{14}$ respectively; their product is $ 12 \sqrt{70} \cong 100.4 $).</p>
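<p>These numerical claims are easy to check; here is a minimal sketch (using numpy purely for illustration):</p>
<pre><code>import numpy as np

A = np.array([[1.0,  2.0,  5.0],
              [2.0,  2.0, -4.0],
              [3.0, -2.0,  1.0]])

print(A.T @ A)                   # diagonal: the columns are orthogonal
print(A @ A.T)                   # not diagonal: the rows are not
print(np.linalg.det(A))          # -84.0
print(np.prod(np.linalg.norm(A, axis=0)))   # 84.0, the Hadamard bound
print(np.prod(np.linalg.norm(A, axis=1)))   # about 100.4 for the rows
</code></pre>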
|
probability | <p>How can a positive random variable $X$ which never takes on the value $+\infty$, have expected value $\mathbb{E}[X] = +\infty$? </p>
| <p>Let $X$ be a random variable that is equal to $2^n$ with probability $2^{-n}$ (for positive integer $n$). Then
$${\mathbb E} X = \sum_{n=1}^\infty 2^{-n} \cdot 2^n = \sum_{n=1}^\infty 1 = \infty.$$</p>
<p><a href="http://en.wikipedia.org/wiki/Cauchy_distribution">Cauchy Distribution</a> is an example of a continuous distribution that doesn't have an expectation. </p>
| <p>Once you consider probabilistic experiments with infinite outcomes, it is easy to find random variables with an infinite expected value. Consider the following example (which is just a game that yields an example similar to the one Yuri provided):</p>
<ul>
<li>You throw a coin until it lands tails.</li>
<li>You then get paid $2^{n}$ dollars, where $n$ is the number of heads you got.</li>
</ul>
<p>It is easy to compute the expected value of your payment (let's name the payment $X$):</p>
<p>$$E[X] = \frac{1}{2} \times 2^0 + \frac{1}{4} \times 2^1 + \dots = \sum_{n=1}^{\infty} 2^{-n}\times 2^{n-1} = \sum_{n=1}^{\infty} \frac{1}{2} = \infty $$</p>
<p>This game is also known as <a href="http://en.wikipedia.org/wiki/St._Petersburg_paradox">St. Petersburg paradox</a>.
Why does this occur and how can we interpret it?</p>
<p>From a construction point of view, it is easier to understand. In this particular case, the probability of each outcome decreases exponentially. Since the number of outcomes is infinite, the payout scheme only has to grow at the same rate as the probability of the outcome decreases in order for the series to diverge.</p>
<p>What this means in practice is that, although the payout is always finite, if you average the payouts from $k$ consecutive games, this average will (with high probability) be higher the greater $k$ is. As $k$ approaches infinity, so does the average of the $k$ payouts. Behind this boundless growth is the fact that every time an unlikely outcome happens, the payout is so large that, when averaged with the payout of more likely outcomes, the average is skewed up.</p>
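<p>A short simulation illustrates this skewing; the sketch below (the sample sizes are arbitrary choices of mine) estimates the average payout over $k$ games for growing $k$, and the averages keep drifting upward rather than settling down:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
for k in (10**3, 10**5, 10**7):
    tosses = rng.geometric(0.5, size=k)   # tosses until the first tail
    payouts = 2.0 ** (tosses - 1)         # 2^(number of heads) dollars
    print(k, payouts.mean())
</code></pre>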
|
probability | <p>If <span class="math-container">$X\sim \Gamma(a_1,b)$</span> and <span class="math-container">$Y \sim \Gamma(a_2,b)$</span>, I need to prove <span class="math-container">$X+Y\sim\Gamma(a_1+a_2,b)$</span> if <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent.</p>
<p>I am trying to apply the convolution formula for independent random variables and to multiply the gamma densities, but I am stuck.</p>
| <p>Now that the homework deadline is presumably long past,
here is a proof for the case of $b=1$, adapted
from an <a href="https://stats.stackexchange.com/a/51623/6633">answer</a>
of mine on stats.SE, which fleshes out the details of
what I said in a comment on the question.</p>
<p>If $X$ and $Y$ are independent continuous random variables,
then the probability density function of $Z=X+Y$ is given by the
convolution of the probability density functions $f_X(x)$ and $f_Y(y)$
of $X$ and $Y$ respectively. Thus,
$$f_{X+Y}(z) = \int_{-\infty}^{\infty} f_X(x)f_Y(z-x)\,\mathrm dx.
$$
But when $X$ and $Y$ are nonnegative random variables, $f_X(x) = 0$ when $x < 0$,
and for positive number $z$, $f_Y(z-x) = 0$ when $x > z$. Consequently,
for $z > 0$, the above integral can be simplified to
$$\begin{align}
f_{X+Y}(z) &= \int_0^z f_X(x)f_Y(z-x)\,\mathrm dx\\
&=\int_0^z \frac{x^{a_1-1}e^{-x}}{\Gamma(a_1)}\frac{(z-x)^{a_2-1}e^{-(z-x)}}{\Gamma(a_2)}\,\mathrm dx\\
&= e^{-z}\int_0^z \frac{x^{a_1-1}(z-x)^{a_2-1}}{\Gamma(a_1)\Gamma(a_2)}\,\mathrm dx
&\scriptstyle{\text{now substitute}}~ x = zt~ \text{and think}\\
&= e^{-z}z^{a_1+a_2-1}\int_0^1 \frac{t^{a_1-1}(1-t)^{a_2-1}}{\Gamma(a_1)\Gamma(a_2)}\,\mathrm dt & \scriptstyle{\text{of Beta}}(a_1,a_2)~\text{random variables}\\
&= \frac{e^{-z}z^{a_1+a_2-1}}{\Gamma(a_1+a_2)}
\end{align}$$</p>
| <p>It's easier to use Moment Generating Functions to prove that.
<span class="math-container">$$
M(t;\alpha,\beta ) = Ee^{tX} = \int_{0}^{+\infty} e^{tx} f(x;\alpha,\beta)dx
= \int_{0}^{+\infty} e^{tx} \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1}e^{-\beta x}dx \\
= \frac{\beta^\alpha}{\Gamma(\alpha)} \int_{0}^{+\infty} x^{\alpha-1}e^{-(\beta - t) x}dx = \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\Gamma(\alpha)}{(\beta - t)^\alpha} = \frac{1}{(1- \frac{t}{\beta})^\alpha}, \qquad t \lt \beta
$$</span>
By using the property of independent random variables, we know
<span class="math-container">$$M_{X + Y}(t) = M_{X}(t)M_{Y}(t) $$</span>
So
if <span class="math-container">$X \sim Gamma(\alpha_1,\beta), Y \sim Gamma(\alpha_2,\beta), $</span>
<span class="math-container">$$M_{X + Y}(t) = \frac{1}{(1- \frac{t}{\beta})^{\alpha_1}}
\frac{1}{(1- \frac{t}{\beta})^{\alpha_2}} = \frac{1}{(1- \frac{t}{\beta})^{\alpha_1 + \alpha_2}}$$</span>
You can see that this product is still the MGF of a Gamma distribution, and since an MGF (where it exists in a neighborhood of $0$) determines the distribution uniquely, we get <span class="math-container">$X + Y \sim Gamma(\alpha_1 + \alpha_2, \beta)$</span>.</p>
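<p>As a quick check of this result, here is a small simulation sketch (the shape/rate values are arbitrary choices of mine; note that numpy's gamma sampler takes a <em>scale</em> $1/\beta$ rather than a rate $\beta$):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
a1, a2, b, n = 2.0, 3.5, 1.7, 10**6
x = rng.gamma(a1, 1.0 / b, n)
y = rng.gamma(a2, 1.0 / b, n)
z = x + y
print(z.mean(), (a1 + a2) / b)        # both about 3.24
print(z.var(),  (a1 + a2) / b**2)     # both about 1.90
</code></pre>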
|
logic | <p>It's well known that vacuous truths are a concept, i.e. an implication being true even if the premise is false.</p>
<p>What would be the problem with simply redefining this to be evaluated to false? Would we still be able to make systems work with this definition or would it lead to a problem somewhere? Why must it be the case that false -> false is true and false -> true is true?</p>
| <p>Notice that $3=5$ is false, but if $3=5$ we can prove $8=8$, which is true.</p>
<p>$$ 3=5$$ </p>
<p>therefore $$ 5=3$$</p>
<p>Add both sides, $$8=8$$</p>
<p>We can also prove that $$ 8=10$$ which is false.</p>
<p>$$ 3=5$$</p>
<p>Add $5$ to both sides, we get $$8=10$$</p>
<p>The point is that if we assume a false assumption, then we can claim whatever we like.</p>
<p>That means " False $\implies$ False " is true. </p>
<p>And " False $\implies$ True " is true. </p>
| <p>Clearly we want $P\rightarrow P$ to be true, wouldn't you agree? </p>
<p>I mean, if i say:</p>
<blockquote>
<p>If Pat is a bachelor, then Pat is a bachelor</p>
</blockquote>
<p>do you really dispute the truth of that claim, or claim that it depends on whether or not Pat really is a bachelor? The whole point of conditionals is that we can say '<em>if</em>', and thereby imagine a situation where something would be the case, whether it is actually the case or not. And guess what: <em>if</em> Pat were a bachelor, then Pat would be a bachelor, even if Pat is not actually a bachelor.</p>
<p>So, if $P$ is false, it better be the case that $false \rightarrow false = true$, for otherwise $P \rightarrow P$ would be false, which is just weird.</p>
<p>Of course, we also want $true \rightarrow true = true$ by this same argument, for otherwise again we would have $P \rightarrow P$ being false.</p>
<p>As far as $false \rightarrow true$ is concerned: given that we have $true \rightarrow true = true$, $false \rightarrow false = true$, and (I think you would certainly agree) $true \rightarrow false = false$, we had better set $false \rightarrow true = true$, because otherwise the $\rightarrow$ would become commutative, i.e. we would have that $P \rightarrow Q$ is equivalent to $Q \rightarrow P$ ... which is highly undesired, since conditionals have a 'direction' to them that cannot be reversed automatically. Indeed, while I think you would agree with the truth of:</p>
<blockquote>
<p>'if Pat is a bachelor, then Pat is male'</p>
</blockquote>
<p>I doubt you would agree with:</p>
<blockquote>
<p>'if Pat is male, then Pat is a bachelor'</p>
</blockquote>
<p>EDIT</p>
<p>Re-reading your question, and considering some of the ensuing discussions and comments, I wonder if the following might help:</p>
<p>Suppose that we <em>know</em> some statement $P$ is false, i.e. we know that:</p>
<p>$1. \neg P \quad Given$</p>
<p>Then we can show that $P$ implies any $Q$, given the standard definition of logical implication:</p>
<p>$2. P \quad Assumption$</p>
<p>$3. P \lor Q \quad \lor \ Intro \ 2$</p>
<p>$4. Q \quad Disjunctive \ Syllogism \ 1,3$</p>
<p>And, using our typical rule for $\rightarrow \ Intro$, we can then also get:</p>
<p>$5. P \rightarrow Q \quad \rightarrow \ Intro \ 2-4$</p>
<p>And this of course works whether $Q$ is true or false.</p>
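<p>One can even let a machine replay this argument: the sketch below (a small script of my own) enumerates all $16$ candidate truth tables for a binary connective and keeps those where $P \rightarrow P$ is a tautology, $true \rightarrow false$ is false, and the connective is not commutative; only the standard material conditional survives.</p>
<pre><code>from itertools import product

bools = (False, True)
for tt in product(bools, repeat=4):
    imp = {(False, False): tt[0], (False, True): tt[1],
           (True, False): tt[2], (True, True): tt[3]}
    tautology   = all(imp[(p, p)] for p in bools)       # P -&gt; P holds
    modus       = not imp[(True, False)]                # T -&gt; F fails
    directional = any(imp[(p, q)] != imp[(q, p)]
                      for p in bools for q in bools)    # not commutative
    if tautology and modus and directional:
        print(imp)   # exactly the standard truth table for implication
</code></pre>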
|
matrices | <p>How do I prove that the norm of a matrix equals the absolutely largest eigenvalue of the matrix? This is the precise question:</p>
<blockquote>
<p>Let $A$ be a symmetric $n \times n$ matrix. Consider $A$ as an operator in $\mathbb{R}^n$ given by $x \mapsto Ax$. Prove that $\|A\| = \max_j |\lambda_j|$, where $\lambda_j$ are the eigenvalues of $A$.</p>
</blockquote>
<p>I've read the relevant sections in my literature over and over but can't find any clue on how to begin. A solution is suggested <a href="https://math.stackexchange.com/questions/405235/normal-operator-matrix-norm">here</a> but the notion of diagonal operator is not in my literature so it doesn't tell me very much. So, any other hints on how to solve the question? Thanks.</p>
| <p>The norm of a matrix is defined as
<span class="math-container">\begin{equation}
\|A\| = \sup_{\|u\| = 1} \|Au\|
\end{equation}</span>
Taking the singular value decomposition of the matrix <span class="math-container">$A$</span>, we have
<span class="math-container">\begin{equation}
A = VD W^T
\end{equation}</span>
where <span class="math-container">$V$</span> and <span class="math-container">$W$</span> are orthonormal and <span class="math-container">$D$</span> is a diagonal matrix. Since <span class="math-container">$V$</span> and <span class="math-container">$W$</span> are orthonormal, we have <span class="math-container">$\|V\| = 1$</span> and <span class="math-container">$\|W\| = 1$</span>. Then <span class="math-container">$\|Av\| = \|D v\|$</span> for any vector <span class="math-container">$v$</span>. Then we can maximize the norm of <span class="math-container">$Av$</span> by maximizing the norm of <span class="math-container">$Dv$</span>. </p>
<p>By the definition of singular value decomposition, <span class="math-container">$D$</span> will have the singular values of <span class="math-container">$A$</span> on its main diagonal and will have zeros everywhere else; since <span class="math-container">$A$</span> is symmetric, these singular values are the absolute values of the eigenvalues of <span class="math-container">$A$</span>. Let <span class="math-container">$\lambda_1, \ldots, \lambda_n$</span> denote these diagonal entries so that</p>
<p><span class="math-container">\begin{equation}
D = \left(\begin{array}{cccc}
\lambda_1 & 0 & \ldots & 0 \\
0 & \lambda_2 & \ldots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \ldots & \lambda_n
\end{array}\right)
\end{equation}</span></p>
<p>Taking some <span class="math-container">$v = (v_1, v_2, \ldots, v_n)^T$</span>, the product <span class="math-container">$Dv$</span> takes the form
<span class="math-container">\begin{equation}
Dv = \left(\begin{array}{c}
\lambda_1v_1 \\
\vdots \\
\lambda_nv_n
\end{array}\right)
\end{equation}</span>
Maximizing the norm of this is the same as maximizing the norm squared. Then we are trying to maximize the sum
<span class="math-container">\begin{equation}
S = \sum_{i=1}^{n} \lambda_i^2v_i^2
\end{equation}</span>
under the constraint that <span class="math-container">$v$</span> is a unit vector (i.e., <span class="math-container">$\sum_i v_i^2 = 1$</span>). The maximum is attained by finding the largest <span class="math-container">$\lambda_i^2$</span> and setting its corresponding <span class="math-container">$v_i$</span> to <span class="math-container">$1$</span> and then setting each other <span class="math-container">$v_j$</span> to <span class="math-container">$0$</span>. Then the maximum of <span class="math-container">$S$</span> (which is the norm squared) is the square of the absolutely largest eigenvalue of <span class="math-container">$A$</span>. Taking the square root, we get the absolutely largest eigenvalue of <span class="math-container">$A$</span>. </p>
| <p>But the key point is exactly that your matrix is diagonalizable; more specifically, you can find an orthonormal basis of eigenvectors $e_i$ for which $A e_i=\lambda_i e_i$. </p>
<p>Then write $x=\sum x_ie_i$, so that $Ax=\sum x_i\lambda_i e_i$ and hence $\|Ax\|^2=\sum\lambda_i^2x_i^2$. Since the basis is orthonormal, $\|x\|^2=\sum x_i^2$, so the definition of the norm of a matrix gives $$\frac{\|Ax\|}{\|x\|}\leq|\lambda_{i_0}|,$$ and therefore $\|A\|\leq|\lambda_{i_0}|$, where $\lambda_{i_0}$ is an eigenvalue of greatest absolute value. Finally the identity $\|Ae_{i_0}\|=\|\lambda_{i_0}e_{i_0}\|$ gives you the reverse inequality $\|A\|\geq|\lambda_{i_0}|$.</p>
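<p>A numerical illustration of the theorem (with a random symmetric matrix of my choosing):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                             # a symmetric matrix

print(np.linalg.norm(A, 2))                   # operator norm sup ||Ax||/||x||
print(np.abs(np.linalg.eigvalsh(A)).max())    # largest |eigenvalue|
# the two printed values coincide
</code></pre>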
|
combinatorics | <p>Have you ever seen this interface? </p>
<p><img src="https://i.sstatic.net/5mhg4.jpg" alt="Pattern lock"></p>
<p>Nowadays, it is used for locking smartphones.</p>
<p>If you haven't, <a href="http://youtu.be/3tnnsxcienQ?t=1m15s" rel="noreferrer">here</a> is a short video on it.</p>
<hr>
<p>The rules for creating a pattern is as follows.</p>
<ul>
<li>We must use four nodes or more to make a pattern at least. </li>
<li>Once a node is visited, then the node can't be visited anymore.</li>
<li>You can start at any node.</li>
<li>A pattern has to be connected.</li>
<li>Cycles are not allowed.</li>
</ul>
<p>How many distinct patterns are possible?</p>
| <p>I believe the answer can be found in <a href="http://oeis.org/A188147" rel="nofollow noreferrer">OEIS</a>. You have to add the paths of length $4$ through $9$ on a $3\times3$ grid, so $80+104+128+112+112+40=576$</p>
<p>I have validated the $80$, $4$ number paths. If we number the grid $$\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9 \end{array}$$ </p>
<p>The paths starting $12$ are
$1236, 1254, 1258, 1256$,
and there were $8$ choices of corner/direction, so $32$ paths start at a corner.
Starting at $2$, there are
$2145,2147,2369,2365,2541,2547,2587,2589,2563,2569$, for $10$ paths, and there are $4$ edge cells, so $40$ start at an edge.
Starting at $5$, there are $8$ paths: four choices of first direction and two choices of which way to turn.</p>
<p>Added per user3123's comment that cycles are allowed: unfortunately, in OEIS there are a huge number of series titled "Number of n-step walks on square lattice" and "Number of walks on square lattice", and there is no specific definition to tell one from another. For $4$ steps, allowing cycles adds $32$ more paths: four squares to go around, four places to start in each square, and two directions to cycle. So the $4$-step count goes up to $112$. For longer paths, the increase will be larger. But there still will not be too many.</p>
| <p>I don't have the answer as "how to mathematically demonstrate the number of combinations". Still, if that helps, I brute-forced it, and here are the results.</p>
<ul>
<li>$1$ dot: $9$</li>
<li>$2$ dots: $56$</li>
<li>$3$ dots: $320$</li>
<li>$4$ dots: $1624$</li>
<li>$5$ dots: $7152$</li>
<li>$6$ dots: $26016$</li>
<li>$7$ dots: $72912$</li>
<li>$8$ dots: $140704$</li>
<li>$9$ dots: $140704$</li>
</ul>
<p>Total for $4$ to $9$ dots: $389,112$ combinations.</p>
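<p>For reference, here is a minimal brute-force sketch that reproduces these counts under the usual Android rule (a move may jump over a node only if that node was already visited); the encoding is my own assumption.</p>
<pre><code># Node lying between a and b on the 3x3 grid, when the segment a-b
# passes through a third node.
SKIP = {}
for a, b, m in [(1, 3, 2), (4, 6, 5), (7, 9, 8), (1, 7, 4), (2, 8, 5),
                (3, 9, 6), (1, 9, 5), (3, 7, 5)]:
    SKIP[(a, b)] = SKIP[(b, a)] = m

def count(last, visited, remaining):
    if remaining == 0:
        return 1
    total = 0
    for nxt in range(1, 10):
        if nxt in visited:
            continue
        mid = SKIP.get((last, nxt))
        if mid is not None and mid not in visited:
            continue   # would jump over an unvisited node
        total += count(nxt, visited | {nxt}, remaining - 1)
    return total

for length in range(4, 10):
    print(length, sum(count(s, {s}, length - 1) for s in range(1, 10)))
# prints 1624, 7152, 26016, 72912, 140704, 140704 for lengths 4..9
</code></pre>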
|
linear-algebra | <blockquote>
<p>How can I prove <span class="math-container">$\operatorname{rank}A^TA=\operatorname{rank}A$</span> for any <span class="math-container">$A\in M_{m \times n}$</span>?</p>
</blockquote>
<p>This is an exercise in my textbook associated with orthogonal projections and Gram-Schmidt process, but I am unsure how they are relevant.</p>
| <p>Let $\mathbf{x} \in N(A)$ where $N(A)$ is the null space of $A$. </p>
<p>So, $$\begin{align} A\mathbf{x} &=\mathbf{0} \\\implies A^TA\mathbf{x} &=\mathbf{0} \\\implies \mathbf{x} &\in N(A^TA) \end{align}$$ Hence $N(A) \subseteq N(A^TA)$.</p>
<p>Again let $\mathbf{x} \in N(A^TA)$</p>
<p>So, $$\begin{align} A^TA\mathbf{x} &=\mathbf{0} \\\implies \mathbf{x}^TA^TA\mathbf{x} &=\mathbf{0} \\\implies (A\mathbf{x})^T(A\mathbf{x})&=\mathbf{0} \\\implies A\mathbf{x}&=\mathbf{0}\\\implies \mathbf{x} &\in N(A) \end{align}$$ Hence $N(A^TA) \subseteq N(A)$.</p>
<p>Therefore $$\begin{align} N(A^TA) &= N(A)\\ \implies \dim(N(A^TA)) &= \dim(N(A))\\ \implies \text{rank}(A^TA) &= \text{rank}(A)\end{align}$$</p>
| <p>Let $r$ be the rank of $A \in \mathbb{R}^{m \times n}$. We then have the SVD of $A$ as
$$A_{m \times n} = U_{m \times r} \Sigma_{r \times r} V^T_{r \times n}$$
This gives $A^TA$ as $$A^TA = V_{n \times r} \Sigma_{r \times r}^2 V^T_{r \times n}$$ which is nothing but the SVD of $A^TA$. From this it is clear that $A^TA$ also has rank $r$. In fact the singular values of $A^TA$ are nothing but the square of the singular values of $A$.</p>
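<p>A quick numerical sanity check, with a rank-deficient matrix built for the purpose (the sizes are arbitrary choices of mine):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # rank 2
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T @ A))  # 2 2
</code></pre>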
|
logic | <p>In <a href="https://math.stackexchange.com/a/121131/111520">Henning Makholm's answer</a> to the question, <a href="https://math.stackexchange.com/q/121128/111520">When does the set enter set theory?</a>, he states:</p>
<blockquote>
<p>In axiomatic set theory, the axioms themselves <em>are</em> the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p>
</blockquote>
<p>This assertion clashes with my (admittedly limited) understanding of how first-order logic, model theory, and axiomatic set theories work.
From what I understand, the axioms of a set theory are properties we would like the objects we call "sets" to have, and then each possible model of the theory is a different definition of the notion of a set. But the axioms themselves do not constitute a definition of set, unless we can show that any model of the axioms is isomorphic (in some meaningful way) to a given model.</p>
<p>Am I misunderstanding something? Is the definition of a set specified by the axioms, or by a model of the axioms? I would appreciate any clarification/direction on this.</p>
<hr>
<p><strong>Update:</strong> <em>In addition to all the answers below, I have written up my own answer (marked as community wiki) gathering the excerpts from other answers (to this question as well as some others) which I feel are most pertinent to the question I originally posed.
Since it's currently buried at the bottom (and accepting it won't change its position), I'm linking to it <a href="https://math.stackexchange.com/a/1792236/111520">here</a>. Cheers!</em></p>
| <p>This is the commonplace clash between the semi-Platonic view of the lay mathematician and the foundational approach to mathematics through set theory.</p>
<p>It is often convenient, when working in "concrete" mathematics, to assume that there is a single, fixed universe of mathematics. And everyone who took a course or two in logic and set theory should be able to tell you that we can assume this universe is in fact a universe of $\sf ZFC$.</p>
<p>Then we do everything there, and we take the notion of "set" as somewhat primitive. Sets are not defined, they are just the objects of the universe.</p>
<p>But the word "set" is just a word in English. We use it to name this fickle, abstract, primitive object. But how can you ensure that my intuitive understanding of "set" is the same as yours?</p>
<p>This is where "axioms as definitions" come into play. Axioms define the basic ground rules for what it means to be a set. For example, if you don't have a power set, you're not a set: because every set has a power set. The axioms of set theory define what are the basic properties of sets. And once we agree on a list of axioms, we really agree on a list of definitions for what it means to be a set. And even if we grasp sets differently, we can still agree on some common properties and do some math.</p>
<p>You can see this in set theorists who disagree about philosophical outlooks, and whether or not some conjecture should be "true" or "false", or if the question is meaningless due to independence. Is the HOD Conjecture "true", "false" or is it simply "provable" or "independent"? That's a very different take on what are sets, and different set theorists will take different sides of the issue. But all of these set theorists have agreed, at least, that $\sf ZFC$ is a good way to define the basic properties of sets.</p>
<hr>
<p>As we've thrown Plato's name into the mix, let's talk a bit about "essence". How do you define a chair? or a desk? You can't <em>really</em> define a chair, because either you've defined it along the lines of "something you can sit on", in which case there will be things you can sit on which are certainly not chairs (the ground, for example, or a tree); or you've run into circularity "a chair is a chair is a chair"; or you're just being meaninglessly annoying "dude... like... everything is a chair... woah..."</p>
<p>But all three options are not good ways to define a chair. And yet, you're not baffled by this on a daily basis. This is because there is a difference between the "essence" of being a chair, and physical chairs.</p>
<p>Mathematical objects are not so lucky. Mathematical objects are abstract to begin with. We cannot perceive them in a tangible way, like we perceive chairs. So we are only left with some ideal definition. And this definition should capture the basic properties of sets. And that is exactly what the axioms of $\sf ZFC$, and any other set theory, are trying to do.</p>
| <blockquote>
<p>In axiomatic set theory, the axioms themselves are the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p>
</blockquote>
<p>I half-agree with this. But recall that the axioms of group theory don't axiomatize the concept "element of a group." Rather, they axiomatize the concept "group." In a similar way, the axioms of ZFC don't axiomatize the concept "set." They axiomatize the concept "universe of sets" (or "von Neumann universe" or "cumulative hierarchy", if you prefer).</p>
|
probability | <p>How should I understand the difference or relationship between binomial and Bernoulli distribution?</p>
| <p>A Bernoulli random variable has two possible outcomes: $0$ or $1$. A binomial distribution is the sum of <strong>independent</strong> and <strong>identically</strong> distributed Bernoulli random variables.</p>
<p>So, for example, say I have a coin, and, when tossed, the probability it lands heads is $p$. So the probability that it lands tails is $1-p$ (there are no other possible outcomes for the coin toss). If the coin lands heads, you win one dollar. If the coin lands tails, you win nothing.</p>
<p>For a <em>single</em> coin toss, the probability you win one dollar is $p$. The random variable that represents your winnings after one coin toss is a Bernoulli random variable.</p>
<p>Now, if you toss the coin $5$ times, your winnings could be any whole number of dollars from zero dollars to five dollars, inclusive. The probability that you win five dollars is $p^5$, because each coin toss is independent of the others, and for each coin toss the probability of heads is $p$.</p>
<p>What is the probability that you win <em>exactly</em> three dollars in five tosses? That would require you to toss the coin five times, getting exactly three heads and two tails. This can be achieved with probability $\binom{5}{3} p^3 (1-p)^2$. And, in general, if there are $n$ Bernoulli trials, then the sum of those trials is binomially distributed with parameters $n$ and $p$.</p>
<p>Note that a binomial random variable with parameter $n = 1$ is equivalent to a Bernoulli random variable, i.e. there is only one trial.</p>
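<p>A small simulation sketch makes the relationship concrete (the parameters are arbitrary choices of mine): summing $n$ independent Bernoulli($p$) draws yields the same empirical distribution as sampling Binomial($n$, $p$) directly.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(4)
n, p, trials = 5, 0.3, 10**6
bern = np.less(rng.random((trials, n)), p)    # n Bernoulli(p) per row
sums = bern.sum(axis=1)                       # their sum, one per trial
direct = rng.binomial(n, p, trials)           # binomial samples

print(np.bincount(sums, minlength=n + 1) / trials)
print(np.bincount(direct, minlength=n + 1) / trials)
# the two frequency tables agree to simulation accuracy
</code></pre>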
| <p>All Bernoulli distributions are binomial distributions, but most binomial distributions are not Bernoulli distributions.</p>
<p>If
$$
X=\begin{cases} 1 & \text{with probability }p, \\ 0 & \text{with probability }1-p, \end{cases}
$$
then the probability distribution of the random variable $X$ is a Bernoulli distribution.</p>
<p>If $X=X_1+\cdots+X_n$ and each of $X_1,\ldots,X_n$ has a Bernoulli distribution with the same value of $p$ and they are independent, then $X$ has a binomial distribution, and the possible values of $X$ are $\{0,1,2,3,\ldots,n\}$. If $n=1$ then that binomial distribution is a Bernoulli distribution.</p>
|
logic | <p>Asaf's answer <a href="https://math.stackexchange.com/questions/45145/why-are-metric-spaces-non-empty">here</a> reminded me of something that should have been bothering me ever since I learned about it, but which I had more or less forgotten about. In first-order logic, there is a convention to only work with non-empty models of a theory $T$. The reason usually given is that the sentences $(\forall x)(x = x)$ and $(\forall x)(x \neq x)$ both hold in the "empty model" of $T$, so if we want the set of sentences satisfied by a model to be consistent, we need to disallow the empty model.</p>
<p>This smells fishy to me. I can't imagine that a sufficiently categorical setup of first-order logic (in terms of functors $C_T \to \text{Set}$ preserving some structure, where $C_T$ is the "free model of $T$" in an appropriate sense) would have this defect, or if it did it would have it for a reason. So something is incomplete about the standard setup of first-order logic, but I don't know what it could be. </p>
<p>The above looks like an example of <a href="http://ncatlab.org/nlab/show/too+simple+to+be+simple" rel="noreferrer">too simple to be simple</a>, except that I can't explain it to myself in the same way that I can explain other examples. </p>
| <p>Both $(\forall x)(x = x)$ and $(\forall x)(x \not = x)$ do hold in the empty model, and it's perfectly consistent. What we lose when we move to empty models, as Qiaochu Yuan points out, are certain inference rules that we're used to. </p>
<p>For first-order languages that include equality, the set $S$ of statements that are true in all models (empty or not) is a proper subset of the set $S_N$ of statements that are true in all nonempty models. Because the vast majority of models we are interested in are nonempty, in logic we typically look at sets of inference rules that generate $S_N$ rather than rules that generate $S$. </p>
<p>One particular example where this is useful is the algorithm to put a formula into prenex normal form, which is only correct when we limit to nonempty models. For example, the formula $(\forall x)(x \not = x) \land \bot$ is false in every model, but its prenex normal form $(\forall x)(x \not = x \land \bot)$ is true in the empty model. The marginal benefit of considering the empty model doesn't outweigh the loss of the beautiful algorithm for prenex normal form that works for every other model. In the rare cases when we do need to consider empty models, we realize we have to work with alternative inference rules; it just isn't usually worth the trouble.</p>
<p>From a different point of view, only considering nonempty models is analogous to only considering Hausdorff manifolds. But with the empty model there is only one object being ignored, which we can always treat as a special case if we need to think about it. </p>
| <p>Isn't this a non-issue? </p>
<p>Many of the most common set-ups for the logical axioms were developed long ago, in a time when mathematicians (not just logicians) thought that they wanted to care only about non-empty structures, and so they made sure that $\exists x\, x=x$ was derivable in their logical system. They had to do this in order to have the completeness theorem, that every statement true in every intended model was derivable. And so those systems continue to have that property today.</p>
<p>Meanwhile, many mathematicians developed a fancy to consider the empty structure seriously. So logicians developed logical systems that handle this, in which $\exists x\, x=x$ is not derivable. For example, this is always how I teach first order logic, and it is no problem at all. But as you point out in your answer, one does need to use a different logical set-up. </p>
<p>So if you care about it, then be sure to use the right logical axioms, since definitely you will not want to give up on the completeness theorem.</p>
|
probability | <p>Is there an exact or good approximate expression for the expectation, variance or other moments of the maximum of $n$ independent, identically distributed gaussian random variables where $n$ is large?</p>
<p>If $F$ is the cumulative distribution function for a standard gaussian and $f$ is the probability density function, then the CDF for the maximum is (from the study of order statistics) given by</p>
<p>$$F_{\rm max}(x) = F(x)^n$$</p>
<p>and the PDF is</p>
<p>$$f_{\rm max}(x) = n F(x)^{n-1} f(x)$$</p>
<p>so it's certainly possible to write down integrals which evaluate to the expectation and other moments, but it's not pretty. My intuition tells me that the expectation of the maximum would be proportional to $\log n$, although I don't see how to go about proving this.</p>
| <p>How precise an answer are you looking for? Giving (upper) bounds on the maximum of i.i.d Gaussians is easier than precisely characterizing its moments. Here is one way to go about this (another would be to combine a tail bound on Gaussian RVs with a union bound).</p>
<p>Let $X_i$ for $i = 1,\ldots,n$ be i.i.d $\mathcal{N}(0,\sigma^2)$.</p>
<p>Define $$ Z = \max_{i} X_i. $$</p>
<p>By Jensen's inequality,</p>
<p>$$\exp \{t\mathbb{E}[ Z] \} \leq \mathbb{E} \exp \{tZ\} = \mathbb{E} \max_i \exp \{tX_i\} \leq \sum_{i = 1}^n \mathbb{E} [\exp \{tX_i\}] = n \exp \{t^2 \sigma^2/2 \}$$</p>
<p>where the last equality follows from the definition of the Gaussian moment generating function (a bound for sub-Gaussian random variables also follows by this same argument).</p>
<p>Rewriting this,</p>
<p>$$\mathbb{E}[Z] \leq \frac{\log n}{t} + \frac{t \sigma^2}{2} $$</p>
<p>Now, set $t = \frac{\sqrt{2 \log n}}{\sigma}$ to get</p>
<p>$$\mathbb{E}[Z] \leq \sigma \sqrt{ 2 \log n} $$ </p>
| <p>The <span class="math-container">$\max$</span>-central limit theorem (<a href="http://en.wikipedia.org/wiki/Fisher%E2%80%93Tippett%E2%80%93Gnedenko_theorem" rel="noreferrer">Fisher-Tippet-Gnedenko theorem</a>) can be used to provide a decent approximation when <span class="math-container">$n$</span> is large. See <a href="http://reference.wolfram.com/mathematica/ref/ExtremeValueDistribution.html#6764486" rel="noreferrer">this example</a> at reference page for extreme value distribution in <em>Mathematica</em>.</p>
<p>The <span class="math-container">$\max$</span>-central limit theorem states that <span class="math-container">$F_\max(x) = \left(\Phi(x)\right)^n \approx F_{\text{EV}}\left(\frac{x-\mu_n}{\sigma_n}\right)$</span>, where <span class="math-container">$F_{EV} = \exp(-\exp(-x))$</span> is the cumulative distribution function for the extreme value distribution, and
<span class="math-container">$$
\mu_n = \Phi^{-1}\left(1-\frac{1}{n} \right) \qquad \qquad
\sigma_n = \Phi^{-1}\left(1-\frac{1}{n} \cdot \mathrm{e}^{-1}\right)- \Phi^{-1}\left(1-\frac{1}{n} \right)
$$</span>
Here <span class="math-container">$\Phi^{-1}(q)$</span> denotes the inverse cdf of the standard normal distribution.</p>
<p>The mean of the maximum of the size <span class="math-container">$n$</span> normal sample, for large <span class="math-container">$n$</span>, is well approximated by
<span class="math-container">$$ \begin{eqnarray}
m_n &=& \mu_n + \gamma\, \sigma_n \;=\; (1-\gamma)\, \Phi^{-1}\left(1-\frac{1}{n}\right)+\gamma\, \Phi^{-1}\left(1-\frac{1}{e n}\right) \\ &=& \sqrt{\log \left(\frac{n^2}{2 \pi \log \left(\frac{n^2}{2\pi} \right)}\right)} \cdot \left(1 + \frac{\gamma}{\log (n)} + o \left(\frac{1}{\log (n)} \right) \right)
\end{eqnarray}$$</span>
where <span class="math-container">$\gamma$</span> is the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant" rel="noreferrer">Euler-Mascheroni constant</a>.</p>
|
number-theory | <blockquote>
<p>I need to find a way of proving that the square roots of a finite set
of different primes are linearly independent over the field of
rationals. </p>
</blockquote>
<p>I've tried to solve the problem using elementary algebra
and also using the theory of field extensions, without success. To
prove the linear independence of the square roots of two primes is easy, but then my problems
arise. I would be very thankful for an answer to this question.</p>
| <p>Below is a simple proof from one of my old sci.math posts, followed by reviews of related papers.</p>
<p><strong>Theorem</strong> <span class="math-container">$ $</span> Let <span class="math-container">$\rm\,Q\,$</span> be a field with <span class="math-container">$2 \ne 0,\,$</span> and <span class="math-container">$\rm\, L = Q(S)\,$</span> be an extension of <span class="math-container">$\rm\,Q\,$</span> generated by <span class="math-container">$\rm\,n\,$</span> square roots <span class="math-container">$\rm\,S \!=\! \{ \sqrt{a}, \sqrt{b},\ldots \}$</span> of <span class="math-container">$\rm\ a,b,\,\ldots \in Q.\,$</span> If all nonempty subsets of <span class="math-container">$\rm S $</span> have product <span class="math-container">$\rm\not\in\! Q\,$</span> then each successive
adjunction <span class="math-container">$\rm\ Q(\sqrt{a}),\ Q(\sqrt{a},\sqrt{b}),\,\ldots$</span> doubles degree over <span class="math-container">$\rm Q,\,$</span> so, in total, <span class="math-container">$\rm\, [L:Q] = 2^n.\,$</span> Thus the <span class="math-container">$\rm 2^n$</span> subproducts of the product of <span class="math-container">$\rm\,S\, $</span> are a basis of <span class="math-container">$\rm\,L\,$</span> over <span class="math-container">$\rm\,Q.$</span></p>
<p><strong>Proof</strong> <span class="math-container">$\ $</span> By induction on the tower height <span class="math-container">$\rm\,n =$</span> number of root adjunctions. The Lemma below implies <span class="math-container">$\rm\ [1, \sqrt{a}\,]\ [1, \sqrt{b}\,] = [1, \sqrt{a}, \sqrt{b}, \sqrt{ab}\,]\ $</span> is a <span class="math-container">$\rm\,Q$</span>-vector space basis of <span class="math-container">$\rm\, Q(\sqrt{a}, \sqrt{b})\,$</span> iff <span class="math-container">$\:\!1\:\!$</span> is the only basis element in <span class="math-container">$\rm\,Q.\,$</span>
We lift it to <span class="math-container">$\rm\, n > 2\,$</span> i.e. <span class="math-container">$\, [1, \sqrt{a_1}\,]\ [1, \sqrt{a_2}\,]\cdots [1, \sqrt{a_n}\,]\,$</span> with <span class="math-container">$2^n$</span> elts.</p>
<p><span class="math-container">$\rm n = 1\!:\ L = Q(\sqrt{a})\ $</span> so <span class="math-container">$\rm\,[L:Q] = 2,\,$</span> since <span class="math-container">$\rm\,\sqrt{a}\not\in Q\,$</span> by hypothesis.</p>
<p><span class="math-container">$\rm n > 1\!:\ L = K(\sqrt{a},\sqrt{b}),\,\ K\ $</span> of height <span class="math-container">$\rm\,n\!-\!2.\,$</span> By induction <span class="math-container">$\rm\,[K:Q] = 2^{n-2} $</span> so we need only show: <span class="math-container">$\rm\ [L:K] = 4,\,$</span> since then <span class="math-container">$\rm\,[L:Q] = [L:K]\ [K:Q] = 4\cdot 2^{n-2}\! = 2^n.\,$</span> The lemma below shows <span class="math-container">$\rm\,[L:K] = 4\,$</span> if <span class="math-container">$\rm\ r = \sqrt{a},\ \sqrt{b},\ \sqrt{a\,b}\ $</span> all <span class="math-container">$\rm\not\in K,\,$</span>
which is true because induction applied to the height <span class="math-container">$\rm\,n\!-\!1\,$</span> tower <span class="math-container">$\rm\,K(r)\,$</span> shows <span class="math-container">$\rm\,[K(r):K] = 2\,$</span> <span class="math-container">$\Rightarrow$</span> <span class="math-container">$\rm\,r\not\in K.\ \ $</span> <span class="math-container">$\bf\small QED$</span></p>
<hr />
<p><strong>Lemma</strong> <span class="math-container">$\rm\ \ [K(\sqrt{a},\sqrt{b}) : K] = 4\ $</span> if <span class="math-container">$\rm\ \sqrt{a},\ \sqrt{b},\ \sqrt{a\,b}\ $</span> all <span class="math-container">$\rm\not\in K\,$</span> and <span class="math-container">$\rm\, 2 \ne 0\,$</span> in <span class="math-container">$\rm\,K.$</span></p>
<p><strong>Proof</strong> <span class="math-container">$\ \ $</span> Let <span class="math-container">$\rm\ L = K(\sqrt{b}).\,$</span> <span class="math-container">$\rm\, [L:K] = 2\,$</span> by <span class="math-container">$\rm\,\sqrt{b} \not\in K,\,$</span> so it suffices to show <span class="math-container">$\rm\, [L(\sqrt{a}):L] = 2.\,$</span> This fails only if <span class="math-container">$\rm\,\sqrt{a} \in L = K(\sqrt{b})$</span> <span class="math-container">$\,\Rightarrow\,$</span> <span class="math-container">$\rm \sqrt{a}\ =\ r + s\ \sqrt{b}\ $</span> for <span class="math-container">$\rm\ r,s\in K,\,$</span> which is false, because squaring yields <span class="math-container">$\rm\,\color{#c00}{(1)}:\ \ a\ =\ r^2 + b\ s^2 + 2\,r\,s\ \sqrt{b},\, $</span> which is contra to hypotheses as follows:</p>
<p><span class="math-container">$\rm\qquad\qquad rs \ne 0\ \ \Rightarrow\ \ \sqrt{b}\ \in\ K\ \ $</span> by solving <span class="math-container">$\color{#c00}{(1)}$</span> for <span class="math-container">$\rm\sqrt{b},\,$</span> using <span class="math-container">$\rm\,2 \ne 0$</span></p>
<p><span class="math-container">$\rm\qquad\qquad\ s = 0\ \ \Rightarrow\ \ \ \sqrt{a}\ \in\ K\ \ $</span> via <span class="math-container">$\rm\ \sqrt{a}\ =\ r + s\ \sqrt{b}\ =\ r \in K$</span></p>
<p><span class="math-container">$\rm\qquad\qquad\ r = 0\ \ \Rightarrow\ \ \sqrt{a\,b}\in K\ \ $</span> via <span class="math-container">$\rm\ \sqrt{a}\ =\ s\ \sqrt{b},\, \ $</span>times <span class="math-container">$\rm\,\sqrt{b}.\qquad$</span> <span class="math-container">$\bf\small QED$</span></p>
<p>In the classical case <span class="math-container">$\rm\:Q\:$</span> is the field of rationals and the square roots have radicands being distinct primes. Here it is quite familiar that a product of any nonempty subset of them is irrational <a href="https://math.stackexchange.com/a/1104334/242">since,</a> over a UFD,
a product of coprime elements is a square iff each factor is a square
(mod unit multiples). Hence the classical case satisfies the theorem's hypotheses.</p>
<p>Elementary proofs like that above are often credited to Besicovitch
(see below). But I have not seen his paper so I cannot say for sure
whether or not Besicovitch's proof is essentially the same as above.
Finally, see the papers reviewed below for some stronger results.</p>
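<p>For a concrete instance of the theorem, here is a small sympy sketch (my own check, not part of any of the cited papers): the minimal polynomial of <span class="math-container">$\sqrt2+\sqrt3+\sqrt5$</span> over <span class="math-container">$\rm\,Q\,$</span> has degree <span class="math-container">$8 = 2^3$</span>, and since <span class="math-container">$\rm\,Q(\sqrt2+\sqrt3+\sqrt5) \subseteq Q(\sqrt2,\sqrt3,\sqrt5)\,$</span> has degree at most <span class="math-container">$8$</span>, the two fields coincide and the degree is exactly <span class="math-container">$2^n$</span> for the <span class="math-container">$n = 3$</span> primes.</p>
<pre><code>from sympy import sqrt, Symbol, minimal_polynomial

x = Symbol('x')
p = minimal_polynomial(sqrt(2) + sqrt(3) + sqrt(5), x, polys=True)
print(p)            # x**8 - 40*x**6 + 352*x**4 - 960*x**2 + 576
print(p.degree())   # 8
</code></pre>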
<hr />
<p>2,33f 10.0X<br />
Besicovitch, A. S.<br />
On the linear independence of fractional powers of integers.<br />
J. London Math. Soc. 15 (1940). 3-6.</p>
<p>Let <span class="math-container">$\ a_i = b_i\ p_i,\ i=1,\ldots s\:,\:$</span> where the <span class="math-container">$p_i$</span> are <span class="math-container">$s$</span> different primes and
the <span class="math-container">$b_i$</span> positive integers not divisible by any of them. The author proves
by an inductive argument that, if <span class="math-container">$x_j$</span> are positive real roots of
<span class="math-container">$x^{n_j} - a_j = 0,\ j=1,...,s ,$</span> and <span class="math-container">$P(x_1,...,x_s)$</span> is a polynomial with
rational coefficients and of degree not greater than <span class="math-container">$n_j - 1$</span> with respect
to <span class="math-container">$x_j,$</span> then <span class="math-container">$P(x_1,...,x_s)$</span> can vanish only if all its coefficients vanish. <span class="math-container">$\quad$</span> Reviewed by W. Feller.</p>
<hr />
<p>15,404e 10.0X<br />
Mordell, L. J.<br />
On the linear independence of algebraic numbers.<br />
Pacific J. Math. 3 (1953). 625-630.</p>
<p>Let <span class="math-container">$K$</span> be an algebraic number field and <span class="math-container">$x_1,\ldots,x_s$</span> roots of the equations
<span class="math-container">$\ x_i^{n_i} = a_i\ (i=1,2,...,s)$</span> and suppose that (1) <span class="math-container">$K$</span> and all <span class="math-container">$x_i$</span> are real, or
(2) <span class="math-container">$K$</span> includes all the <span class="math-container">$n_i$</span> th roots of unity, i.e. <span class="math-container">$ K(x_i)$</span> is a Kummer field.
The following theorem is proved. A polynomial <span class="math-container">$P(x_1,...,x_s)$</span> with coefficients
in <span class="math-container">$K$</span> and of degrees in <span class="math-container">$x_i$</span>, less than <span class="math-container">$n_i$</span> for <span class="math-container">$i=1,2,\ldots s$</span>, can vanish only if
all its coefficients vanish, provided that the algebraic number field <span class="math-container">$K$</span> is such
that there exists no relation of the form <span class="math-container">$\ x_1^{m_1}\ x_2^{m_2}\:\cdots\: x_s^{m_s} = a$</span>, where <span class="math-container">$a$</span> is a number in <span class="math-container">$K$</span> unless <span class="math-container">$\ m_i \equiv 0 \mod n_i\ (i=1,2,...,s)$</span>. When <span class="math-container">$K$</span> is of the second type, the theorem was proved earlier by Hasse [Klassenkorpertheorie,
Marburg, 1933, pp. 187--195] by help of Galois groups. When <span class="math-container">$K$</span> is of the first
type and <span class="math-container">$K$</span> also the rational number field and the <span class="math-container">$a_i$</span> integers, the theorem was proved by Besicovitch in an elementary way. The author here uses a proof analogous to that used by Besicovitch [J. London Math. Soc. 15b, 3--6 (1940) these Rev. 2, 33]. <span class="math-container">$\quad$</span> Reviewed by H. Bergstrom.</p>
<hr />
<p>46 #1760 12A99<br />
Siegel, Carl Ludwig<br />
Algebraische Abhaengigkeit von Wurzeln [Algebraic dependence of roots]. (German)<br />
Acta Arith. 21 (1972), 59-64.</p>
<p>Two nonzero real numbers are said to be equivalent with respect to a real
field <span class="math-container">$R$</span> if their ratio belongs to <span class="math-container">$R$</span>. Each real number <span class="math-container">$r \ne 0$</span> determines
a class <span class="math-container">$[r]$</span> under this equivalence relation, and these classes form a
multiplicative abelian group <span class="math-container">$G$</span> with identity element <span class="math-container">$[1]$</span>. If <span class="math-container">$r_1,\dots,r_h$</span>
are nonzero real numbers such that <span class="math-container">$r_i^{n_i}\in R$</span> for some positive integers <span class="math-container">$n_i\
(i=1,...,h)$</span>, denote by <span class="math-container">$G(r_1,...,r_h) = G_h$</span> the subgroup of <span class="math-container">$G$</span> generated by
<span class="math-container">$[r_1],\dots,[r_h]$</span> and by <span class="math-container">$R(r_1,...,r_h) = R_h$</span> the algebraic extension field of
<span class="math-container">$R = R_0$</span> obtained by the adjunction of <span class="math-container">$r_1,...,r_h$</span>. The central problem
considered in this paper is to determine the degree and find a basis of <span class="math-container">$R_h$</span>
over <span class="math-container">$R$</span>. Special cases of this problem have been considered earlier by A. S.
Besicovitch [J. London Math. Soc. 15 (1940), 3-6; MR 2, 33] and by L. J.
Mordell [Pacific J. Math. 3 (1953), 625-630; MR 15, 404]. The principal
result of this paper is the following theorem: the degree of <span class="math-container">$R_h$</span> with respect
to <span class="math-container">$R_{h-1}$</span> is equal to the index <span class="math-container">$j$</span> of <span class="math-container">$G_{h-1}$</span> in <span class="math-container">$G_h$</span>, and the powers <span class="math-container">$r_h^t\
(t=0,1,...,j-1)$</span> form a basis of <span class="math-container">$R_h$</span> over <span class="math-container">$R_{h-1}$</span>. Several interesting
applications and examples of this result are discussed. <span class="math-container">$\quad$</span> Reviewed by H. S. Butts</p>
| <p>Assume that there was some linear dependence relation of the form</p>
<p>$$ \sum_{k=1}^n c_k \sqrt{p_k} + c_0 = 0 $$</p>
<p>where $ c_k \in \mathbb{Q} $ and the $ p_k $ are distinct prime numbers. Let $ L $ be the smallest extension of $ \mathbb{Q} $ containing all of the $ \sqrt{p_k} $. We argue using the field trace $ T = T_{L/\mathbb{Q}} $. First, note that if $ d \in \mathbb{N} $ is not a perfect square, we have that $ T(\sqrt{d}) = 0 $. This is because $ L/\mathbb{Q} $ is Galois, and $ \sqrt{d} $ cannot be a fixed point of the action of the Galois group as it is not rational. This means that half of the Galois group maps it to its other conjugate $ -\sqrt{d} $, and therefore the sum of all conjugates cancels out. Furthermore, note that we have $ T(q) = 0 $ iff $ q = 0 $ for rational $ q $.</p>
<p>Taking traces on both sides we immediately find that $ c_0 = 0 $. Let $ 1 \leq j \leq n $ and multiply both sides by $ \sqrt{p_j} $ to get</p>
<p>$$ c_j p_j + \sum_{1 \leq k \leq n, k\neq j} c_k \sqrt{p_k p_j} = 0$$</p>
<p>Now, taking traces annihilates the second term entirely and we are left with $ T(c_j p_j) = 0 $, which implies $ c_j = 0 $. Since $ j $ was arbitrary, we conclude that all coefficients are zero, proving linear independence.</p>
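<p>For readers who like to double-check such statements with a computer algebra system, here is a quick sanity check in Python/sympy (a sketch; the choice of primes and of library is mine, not part of the proof). If $ \sqrt{2}, \sqrt{3}, \sqrt{5} $ satisfied a rational linear relation, the field $ \mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5}) $ would have degree less than $ 8 $ over $ \mathbb{Q} $, but the minimal polynomial of the primitive element $ \sqrt{2}+\sqrt{3}+\sqrt{5} $ has degree $ 8 $:</p>

<pre><code># Sanity check (a sketch): [Q(sqrt2, sqrt3, sqrt5) : Q] = 8, so in
# particular 1, sqrt2, sqrt3, sqrt5 are linearly independent over Q.
from sympy import sqrt, Symbol, degree, minimal_polynomial

x = Symbol('x')
p = minimal_polynomial(sqrt(2) + sqrt(3) + sqrt(5), x)
print(p)             # a degree-8 polynomial with integer coefficients
print(degree(p, x))  # 8
</code></pre>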
|
probability | <p><strong>Question:</strong> Suppose we have one hundred seats, numbered 1 through 100. We randomly select 25 of these seats. What is the expected number of selected pairs of seats that are consecutive? (To clarify: we would count two consecutive selected seats as a single pair.)</p>
<p>For example, if the selected seats are all consecutive (eg 1-25), then we have 24 consecutive pairs (eg 1&2, 2&3, 3&4, ..., 24&25). There are 76 such all-consecutive selections (starting at seat 1 through seat 76), so the probability of this happening is $76/{}_{100}C_{25}$. So this contributes $24\cdot 76/{}_{100}C_{25}$ to the expected number of consecutive pairs. </p>
<p><strong>Motivation</strong>: I teach. Near the end of an exam, when most of the students have left, I notice that there are still many pairs of students next to each other. I want to know if the number that remain should be expected or not. </p>
| <p>If you're just interested in the <em>expectation</em>, you can use the fact that expectation is additive to compute</p>
<ul>
<li>The expected number of consecutive integers among $\{1,2\}$, plus</li>
<li>The expected number of consecutive integers among $\{2,3\}$, plus</li>
<li>....</li>
<li>plus the expected number of consecutive integers among $\{99,100\}$.</li>
</ul>
<p>Each of these 99 expectations is simply the probability that $n$ and $n+1$ are both chosen, which is $\frac{25}{100}\frac{24}{99}$.</p>
<p>So the expected number of pairs is $99\frac{25}{100}\frac{24}{99} = 6$.</p>
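<p>A quick Monte Carlo sketch in Python (the trial count is arbitrary) agrees with this:</p>

<pre><code>import random

def consecutive_pairs(chosen):
    s = set(chosen)
    # count values n such that both n and n+1 were selected
    return sum(1 for n in s if n + 1 in s)

trials = 100_000
total = sum(consecutive_pairs(random.sample(range(1, 101), 25))
            for _ in range(trials))
print(total / trials)  # close to 6
</code></pre>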
| <p>Let me present the approach proposed by Henning in another way. </p>
<p>We have $99$ possible pairs: $\{(1,2),(2,3),\ldots,(99,100)\}$. Let's define $X_i$ as</p>
<p>$$
X_i = \left \{
\begin{array}{ll}
1 & i\text{-th pair is chosen}\\
0 & \text{otherwise}\\
\end{array}
\right .
$$</p>
<p>The $i$-th pair is denoted as $(i, i+1)$. That pair is chosen when the integer $i$ is chosen, with probability $25/100$, <em>and</em> the integer $i+1$ is also chosen, with probability $24/99$. Then,</p>
<p>$$E[X_i] = P(X_i = 1) = \frac{25}{100}\frac{24}{99},$$</p>
<p>and this holds for $i = 1,2,\ldots,99$. The total number of chosen pairs is then given as</p>
<p>$$X = X_1 + X_2 + \ldots + X_{99},$$</p>
<p>and using the linearity of the expectation, we get</p>
<p>\begin{align}
E[X] &= E[X_1] + E[X_2] + \cdots +E[X_{99}]\\
&= 99E[X_1]\\
&= 99\frac{25}{100}\frac{24}{99} = 6
\end{align}</p>
|
number-theory | <p>In <a href="https://math.stackexchange.com/a/373935/752">this recent answer</a> to <a href="https://math.stackexchange.com/q/373918/752">this question</a> by Eesu, Vladimir
Reshetnikov proved that
$$
\begin{equation}
\left( 26+15\sqrt{3}\right) ^{1/3}+\left( 26-15\sqrt{3}\right) ^{1/3}=4.\tag{1}
\end{equation}
$$</p>
<p>I would like to know if this result can be <em>generalized</em> to other triples of
natural numbers. </p>
<blockquote>
<p><strong>Question</strong>. What are the solutions of the following equation? $$ \begin{equation} \left( p+q\sqrt{3}\right) ^{1/3}+\left(
p-q\sqrt{3}\right) ^{1/3}=n,\qquad \text{where }\left( p,q,n\right)
\in \mathbb{N} ^{3}.\tag{2} \end{equation} $$</p>
</blockquote>
<p>For $(1)$ we could write $26+15\sqrt{3}$ in the form $(a+b\sqrt{3})^{3}$ </p>
<p>$$
26+15\sqrt{3}=(a+b\sqrt{3})^{3}=a^{3}+9ab^{2}+3( a^{2}b+b^{3})
\sqrt{3}
$$</p>
<p>and solve the system
$$
\left\{
\begin{array}{c}
a^{3}+9ab^{2}=26 \\
a^{2}b+b^{3}=5.
\end{array}
\right.
$$</p>
<p>A solution is $(a,b)=(2,1)$. Hence $26+15\sqrt{3}=(2+\sqrt{3})^3 $. Applying the same method to $26-15\sqrt{3}$, we find $26-15\sqrt{3}=(2-\sqrt{3})^3 $, thus proving $(1)$.</p>
<p>For $(2)$ the very same idea yields</p>
<p>$$
p+q\sqrt{3}=(a+b\sqrt{3})^{3}=a^{3}+9ab^{2}+3( a^{2}b+b^{3})
\sqrt{3}\tag{3}
$$</p>
<p>and</p>
<p>$$
\left\{
\begin{array}{c}
a^{3}+9ab^{2}=p \\
3( a^{2}b+b^{3}) =q.
\end{array}
\right. \tag{4}
$$</p>
<p>I tried to solve this system for $a,b$ but since the solution is of the form</p>
<p>$$
(a,b)=\Big(x^{3},\frac{3qx^{3}}{8x^{9}+p}\Big),\tag{5}
$$</p>
<p>where $x$ satisfies the <em>cubic</em> equation
$$
64x^{3}-48x^{2}p+( -15p^{2}+81q^{2}) x-p^{3}=0,\tag{6}
$$
would be very difficult to succeed, using this naive approach. </p>
<blockquote>
<p>Is this problem solvable, at least partially?</p>
</blockquote>
| <p>The solutions are of the form $\displaystyle(p, q)= \left(\frac{3t^2nr+n^3}{8},\,\frac{3n^2t+t^3r}{8}\right)$, for any rational parameter $t$. To prove it, we start with $$\left(p+q\sqrt{r}\right)^{1/3}+\left(p-q\sqrt{r}\right)^{1/3}=n\tag{$\left(p,q,n,r\right)\in\mathbb{N}^{4}$}$$
and cube both sides using the identity $(a+b)^3=a^3+3ab(a+b)+b^3$ to, then, get $$\left(\frac{n^3-2p}{3n}\right)^3=p^2-rq^2,$$ which is a nicer form to work with. Keeping $n$ and $r$ fixed, we see that for every $p={1,2,3,\ldots}$ there is a solution $(p,q)$, where $\displaystyle q^2=\frac{1}{r}\left(p^2-\left(\frac{n^3-2p}{3n}\right)^3\right)$. When is this number a perfect square? <a href="http://www.wolframalpha.com/input/?i=%5Cfrac%7Bp%5E2-%5Cleft%28%5Cfrac%7Bn%5E3-2p%7D%7B3n%7D%5Cright%29%5E3%7D%7Br%7D">Wolfram</a> says it equals $$q^2 =\frac{(8p-n^3) (n^3+p)^2}{(3n)^2\cdot 3nr},$$ which reduces the question to when $\displaystyle \frac{8p-n^3}{3nr}$ is a perfect square, and you get solutions of the form $\displaystyle (p,q)=\left(p,\frac{n^3+p}{3n}\sqrt{\frac{8p-n^3}{3nr}}\right).$ Note that when $r=3$, this simplifies further to when $\displaystyle \frac{8p}{n}-n^2$ is a perfect square.</p>
<hr>
<p>Now, we note that if $\displaystyle (p,q)=\left(p,\frac{n^3+p}{3n}\sqrt{\frac{8p-n^3}{3nr}}\right) \in\mathbb{Q}^2$, $\displaystyle\sqrt{\frac{8p-n^3}{3nr}}$ must be rational as well. Call this rational number $t$, our parameter. Then $8p=3t^2nr+n^3$. Substitute back to get $$(p,q)=\left(\frac{3t^2nr+n^3}{8},\,\frac{3n^2t+t^3r}{8}\right).$$ This generates expressions like <a href="http://www.wolframalpha.com/input/?i=%5Cleft%28p%2bq%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%2b%5Cleft%28p-q%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%20where%20r=11,%20p=2589437/8%20and%20q=56351/4&a=%5E_Real">$$\left(\frac{2589437}{8}+\frac{56351}{4}\sqrt{11}\right)^{1/3}+\left(\frac{2589437}{8}-\frac{56351}{4}\sqrt{11}\right)^{1/3}=137$$</a></p>
<p><a href="http://www.wolframalpha.com/input/?i=%5Cleft%28p%2bq%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%2b%5Cleft%28p-q%5Csqrt%7Br%7D%5Cright%29%5E%7B1/3%7D%20where%20r=3,%20p=11155/4%20and%20q=6069/4&a=%5E_Real">$$\left(\frac{11155}{4}+\frac{6069}{4}\sqrt{3}\right)^{1/3}+\left(\frac{11155}{4}-\frac{6069}{4}\sqrt{3}\right)^{1/3}=23$$</a></p>
<p>for whichever $r$ you want, the first using $(r,t,n)=(11,2,137)$ and the second $(r,t,n)=(3,7,23)$.</p>
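<p>Here is a numerical sanity check of this parametrization (a sketch; <code>cbrt</code> is a helper of mine that takes real cube roots, valid for negative inputs too):</p>

<pre><code>from fractions import Fraction
import math

def cbrt(x):
    # real cube root, valid for negative x as well
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def check(r, t, n):
    p = Fraction(3 * t * t * n * r + n ** 3, 8)
    q = Fraction(3 * n * n * t + t ** 3 * r, 8)
    s = cbrt(float(p) + float(q) * math.sqrt(r)) \
      + cbrt(float(p) - float(q) * math.sqrt(r))
    print(p, q, s)  # s should equal n up to floating-point error

check(11, 2, 137)   # 2589437/8, 56351/4, approximately 137.0
check(3, 7, 23)     # 11155/4,   6069/4,  approximately 23.0
</code></pre>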
| <p>Here's a way of finding, at the very least, a large class of rational solutions. It seems plausible to me that these are all the rational solutions, but I don't actually have a proof yet...</p>
<p>Say we want to solve $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}=n$ for some fixed $n$. The left-hand side looks an awful lot like the root of a depressed cubic (as it would be given by <a href="http://en.wikipedia.org/wiki/Cubic_function#Cardano.27s_method">Cardano's formula</a>). So let's try to build some specific depressed cubic having $n$ as a root, where the cubic formula realizes $n$ as $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$.</p>
<p>The depressed cubics having $n$ as a root all take the following form:
$$(x-n)(x^2+nx+b) = x^3 + (b-n^2)x-nb$$
where $b$ is arbitrary. If we want to apply the cubic formula to such a polynomial and come up with the root $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$, we must have:
\begin{eqnarray}
p&=&\frac{nb}{2}\\
3q^2&=& \frac{(nb)^2}{4}+\frac{(-n^2+b)^3}{27}\\
&=&\frac{b^3}{27}+\frac{5b^2n^2}{36}+\frac{bn^4}{9}-\frac{n^6}{27}\\
&=&\frac{1}{108}(4b-n^2)(b+2n^2)^2
\end{eqnarray}
(where I cheated and used Wolfram Alpha to do the last factorization :)).</p>
<p>So the $p$ that arises here will be rational iff $b$ is; the $q$ that arises will be rational iff $4b-n^2$ is a perfect rational square (since $3 * 108=324$ is a perfect square). That is, we can choose rational $n$ and $m$ and set $m^2=4b-n^2$, and then we will be able to find rational $p,q$ via the above formulae, where
$(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$ is a root of the cubic
$$
(x-n)\left(x^2+nx+\frac{m^2+n^2}{4}\right)=(x-n)\left(\left(x+\frac{n}{2}\right)^2+\left(\frac{m}{2}\right)^2\right) \, .
$$</p>
<p>The quadratic factor of this cubic manifestly does not have real roots unless $m=0$; since $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}$ is real, it must therefore be equal to $n$ whenever $m \neq 0$.</p>
<p>To summarize, we have found a two-parameter family of rational solutions to the general equation $(p+q\sqrt{3})^{1/3}+(p-q\sqrt{3})^{1/3}=n$. One of those parameters is $n$ itself; if the other is $m$, we can substitute $b=\frac{m^2+n^2}{4}$ into the above relations to get
\begin{eqnarray}
p&=&n\left(\frac{m^2+n^2}{8}\right)\\
q&=&m\left(\frac{m^2+9n^2}{72}\right) \, .
\end{eqnarray}</p>
<p>To make sure I didn't make any algebra errors, I randomly picked $n=5$, $m=27$ to try out. These give $(p,q)=\left(\frac{1885}{4},\frac{1431}{4}\right)$, and indeed Wolfram Alpha <a href="http://www.wolframalpha.com/input/?i=%281885/4%2b1431/4%20%2a%20sqrt%283%29%29%5E%281/3%29%2b%281885/4-1431/4%20%2a%20sqrt%283%29%29%5E%281/3%29&a=%5E_Real">confirms</a> that
$$
\left(\frac{1885}{4}+\frac{1431}{4} \sqrt{3}\right)^{1/3}+\left(\frac{1885}{4}-\frac{1431}{4} \sqrt{3}\right)^{1/3}=5 \, .
$$</p>
|
matrices | <p>I have this problem from my Graphics course. Given this transformation matrix:</p>
<p>$$\begin{pmatrix}
-2 &-1& 2\\
-2 &1& -1\\
0 &0& 1\\
\end{pmatrix}$$</p>
<p>I need to extract translation, rotation and scale matrices.
I also have the answer (which is $TRS$):
$$T=\begin{pmatrix}
1&0&2\\
0&1&-1\\
0&0&1\end{pmatrix}\\
R=\begin{pmatrix}
1/\sqrt2 & -1/\sqrt2 &0 \\
1/\sqrt2 & 1/\sqrt2 &0 \\
0&0&1
\end{pmatrix}\\
S=\begin{pmatrix}
-2\sqrt2 & 0 & 0 \\
0 & \sqrt2 & 0 \\
0& 0& 1
\end{pmatrix}
$$</p>
<p>I just have no idea (except for the Translation matrix) how I would get to this solution.</p>
| <p>I am a person from the future, and I had the same problem. For future reference, here's the algorithm for 4x4. You can solve your 3x3 problem by padding out your problem to the larger dimensions.</p>
<p><em><sup>Caveat: the following only works for a matrix containing rotation, translation, and nonnegative scalings. This is the overwhelmingly commonest case, and doubtless what OP was expected to assume. A more-general solution is significantly more complicated and was not provided because OP didn't ask for it; anyone interested in supporting e.g. negative scalings and shear should look at Graphics Gems II §VII.1.</sup></em></p>
<p>Start with a transformation matrix:<span class="math-container">$$
\begin{bmatrix}
a & b & c & d\\
e & f & g & h\\
i & j & k & l\\
0 & 0 & 0 & 1
\end{bmatrix}
$$</span></p>
<ol>
<li><p><strong>Extract Translation</strong><br/>
This is basically the last column of the matrix:<span class="math-container">$$
\vec{t} = \langle ~d,~h,~l~ \rangle
$$</span>While you're at it, zero them in the matrix.</p>
</li>
<li><p><strong>Extract Scale</strong><br/>
For this, take the length of the first three column vectors:<span class="math-container">$$
s_x = \|\langle ~a,~e,~i~ \rangle\|\\
s_y = \|\langle ~b,~f,~j~ \rangle\|\\
s_z = \|\langle ~c,~g,~k~ \rangle\|\\
\vec{s} = \langle s_x,s_y,s_z \rangle
$$</span></p>
</li>
<li><p><strong>Extract Rotation</strong><br/>
Divide the first three column vectors by the scaling factors you just found. Your matrix should now look like this (remember we zeroed the translation):<span class="math-container">$$
\begin{bmatrix}
a/s_x & b/s_y & c/s_z & 0\\
e/s_x & f/s_y & g/s_z & 0\\
i/s_x & j/s_y & k/s_z & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
$$</span>This is the rotation matrix. There are methods to convert it to quaternions, and from there to axis-angle, if you want either of those instead.</p>
</li>
</ol>
<p><sub><a href="http://www.gamedev.net/topic/467665-decomposing-rotationtranslationscale-from-matrix/#entry4076502" rel="noreferrer">resource</a></sub></p>
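<p>For concreteness, here is a sketch of the above recipe in Python/NumPy (the function name and the test matrix are mine; it assumes, as stated, no shear and nonnegative scale):</p>

<pre><code>import numpy as np

def decompose(M):
    M = np.asarray(M, dtype=float)
    t = M[:3, 3].copy()                     # step 1: translation column
    s = np.linalg.norm(M[:3, :3], axis=0)   # step 2: column lengths
    R = np.eye(4)
    R[:3, :3] = M[:3, :3] / s               # step 3: normalized columns
    return t, s, R

# scale (2,3,3), rotate 90 degrees about x, translate (5,6,7)
M = np.array([[2, 0,  0, 5],
              [0, 0, -3, 6],
              [0, 3,  0, 7],
              [0, 0,  0, 1]], dtype=float)
t, s, R = decompose(M)
print(t)   # [5. 6. 7.]
print(s)   # [2. 3. 3.]
print(R)   # the pure 90-degree x-rotation, padded to 4x4
</code></pre>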
| <p>It appears you are working with <a href="http://en.wikipedia.org/wiki/Affine_transformation#Augmented_matrix" rel="noreferrer">Affine Transformation Matrices</a>, which is also the case in the <a href="https://math.stackexchange.com/a/13165/61427">other answer you referenced</a>, which is standard for working with 2D computer graphics. The only difference between the matrices here and those in the other answer is that yours use the square form, rather than a rectangular augmented form.</p>
<p>So, using the labels from the other answer, you would have</p>
<p>$$
\left[
\begin{array}{ccc}
a & b & t_x\\
c & d & t_y\\
0 & 0 & 1\end{array}\right]=\left[\begin{array}{ccc}
s_{x}\cos\psi & -s_{x}\sin\psi & t_x\\
s_{y}\sin\psi & s_{y}\cos\psi & t_y\\
0 & 0 & 1\end{array}\right]
$$</p>
<p>The matrices you seek then take the form:</p>
<p>$$
T=\begin{pmatrix}
1 & 0 & t_x \\
0 & 1 & t_y \\
0 & 0 & 1 \end{pmatrix}\\
R=\begin{pmatrix}
\cos{\psi} & -\sin{\psi} &0 \\
\sin{\psi} & \cos{\psi} &0 \\
0 & 0 & 1 \end{pmatrix}\\
S=\begin{pmatrix}
s_x & 0 & 0 \\
0 & s_y & 0 \\
0 & 0 & 1 \end{pmatrix}
$$</p>
<p>If you need help with extracting those values, the other answer has explicit formulae.</p>
|
probability | <p>On a disk, choose <span class="math-container">$n$</span> uniformly random points. Then draw the smallest circle enclosing those points. (<a href="https://www.personal.kent.edu/%7Ermuhamma/Compgeometry/MyCG/CG-Applets/Center/centercli.htm" rel="noreferrer">Here</a> are some algorithms for doing so.)</p>
<p>The circle may or may not lie completely on the disk. For example, with <span class="math-container">$n=7$</span>, here are examples of both cases.</p>
<p><a href="https://i.sstatic.net/PReoa.png" rel="noreferrer"><img src="https://i.sstatic.net/PReoa.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xSoOX.png" rel="noreferrer"><img src="https://i.sstatic.net/xSoOX.png" alt="enter image description here" /></a></p>
<blockquote>
<p>What is <span class="math-container">$\lim\limits_{n\to\infty}\{\text{Probability that the circle lies completely on the disk}\}$</span>?</p>
</blockquote>
<p>Is the limiting probability <span class="math-container">$0$</span>? Or <span class="math-container">$1$</span>? Or something in between? My geometrical intuition fails to tell me anything.</p>
<h2>The case <span class="math-container">$n=2$</span></h2>
<p>I have only been able to find that, when <span class="math-container">$n=2$</span>, the probability that the smallest enclosing circle lies completely on the disk, is <span class="math-container">$2/3$</span>.</p>
<p>Without loss of generality, assume that the perimeter of the disk is <span class="math-container">$x^2+y^2=1$</span>, and the two points are <span class="math-container">$(x,y)$</span> and <span class="math-container">$(0,\sqrt t)$</span> where <span class="math-container">$t$</span> is <a href="https://mathworld.wolfram.com/DiskPointPicking.html" rel="noreferrer">uniformly distributed</a> in <span class="math-container">$[0,1]$</span>.</p>
<p>The smallest enclosing circle has centre <span class="math-container">$C\left(\frac{x}{2}, \frac{y+\sqrt t}{2}\right)$</span> and radius <span class="math-container">$r=\frac12\sqrt{x^2+(y-\sqrt t)^2}$</span>. If the smallest enclosing circle lies completely on the disk, then <span class="math-container">$C$</span> lies within <span class="math-container">$1-r$</span> of the origin. That is,</p>
<p><span class="math-container">$$\sqrt{\left(\frac{x}{2}\right)^2+\left(\frac{y+\sqrt t}{2}\right)^2}\le 1-\frac12\sqrt{x^2+(y-\sqrt t)^2}$$</span></p>
<p>which is equivalent to</p>
<p><span class="math-container">$$\frac{x^2}{1-t}+y^2\le1$$</span></p>
<p>The <a href="https://byjus.com/maths/area-of-ellipse/" rel="noreferrer">area</a> of this region is <span class="math-container">$\pi\sqrt{1-t}$</span>, and the area of the disk is <span class="math-container">$\pi$</span>, so the probability that the smallest enclosing circle lies completely on the disk is <span class="math-container">$\sqrt{1-t}$</span>.</p>
<p>Integrating from <span class="math-container">$t=0$</span> to <span class="math-container">$t=1$</span>, the probability is <span class="math-container">$\int_0^1 \sqrt{1-t}dt=2/3$</span>.</p>
<h2>Edit</h2>
<p>From the comments, @Varun Vejalla has run trials that suggest that, for small values of <span class="math-container">$n$</span>, the probability (that the enclosing circle lies completely on the disk) is <span class="math-container">$\frac{n}{2n-1}$</span>, and that the limiting probability is <span class="math-container">$\frac12$</span>. There should be a way to prove these results.</p>
<h2>Edit2</h2>
<p>I seek to generalize this question <a href="https://mathoverflow.net/q/458571/494920">here</a>.</p>
| <p>First, let me state two lemmas that demand tedious computations. Let <span class="math-container">$B(x, r)$</span> denote the circle centered at <span class="math-container">$x$</span> with radius <span class="math-container">$r$</span>.</p>
<p><strong>Lemma 1</strong>: Let <span class="math-container">$B(x,r)$</span> be a circle contained in <span class="math-container">$B(0, 1)$</span>. Suppose we sample two points <span class="math-container">$p_1, p_2$</span> inside <span class="math-container">$B(0, 1)$</span>, and <span class="math-container">$B(x', r')$</span> is the circle with <span class="math-container">$p_1p_2$</span> as diameter. Then we have
<span class="math-container">$$\mathbb{P}(x' \in x + dx, r' \in r + dr) = \frac{8}{\pi} r dxdr.$$</span></p>
<p><strong>Lemma 2</strong>: Let <span class="math-container">$B(x,r)$</span> be a circle contained in <span class="math-container">$B(0, 1)$</span>. Suppose we sample three points <span class="math-container">$p_1, p_2, p_3$</span> inside <span class="math-container">$B(0, 1)$</span>, and <span class="math-container">$B(x', r')$</span> is the circumcircle of <span class="math-container">$p_1p_2p_3$</span>. Then we have
<span class="math-container">$$\mathbb{P}(x' \in x + dx, r' \in r + dr) = \frac{24}{\pi} r^3 dxdr.$$</span>
Furthermore, conditioned on this happening, the probability that <span class="math-container">$p_1p_2p_3$</span> is acute is exactly <span class="math-container">$1/2$</span>.</p>
<p>Given these two lemmas, let's see how to compute the probability in question. Let <span class="math-container">$p_1, \cdots, p_n$</span> be the <span class="math-container">$n$</span> points we selected, and <span class="math-container">$C$</span> is the smallest circle containing them. For each <span class="math-container">$i < j < k$</span>, let <span class="math-container">$C_{ijk}$</span> denote the circumcircle of three points <span class="math-container">$p_i, p_j, p_k$</span>. For each <span class="math-container">$i < j$</span>, let <span class="math-container">$D_{ij}$</span> denote the circle with diameter <span class="math-container">$p_i, p_j$</span>. Let <span class="math-container">$E$</span> denote the event that <span class="math-container">$C$</span> is contained in <span class="math-container">$B(0, 1)$</span>.</p>
<p>First, a geometric statement.</p>
<p><strong>Claim</strong>: Suppose no four of <span class="math-container">$p_i$</span> are concyclic, which happens with probability <span class="math-container">$1$</span>. Then exactly one of the following scenarios happen.</p>
<ol>
<li><p>There exists unique <span class="math-container">$1 \leq i < j < k \leq n$</span> such that <span class="math-container">$p_i, p_j, p_k$</span> form an acute triangle and <span class="math-container">$C_{ijk}$</span> contains all the points <span class="math-container">$p_1, \cdots, p_n$</span>. In this case, <span class="math-container">$C = C_{ijk}$</span>.</p>
</li>
<li><p>There exists unique <span class="math-container">$1 \leq i < j \leq n$</span> such that <span class="math-container">$D_{ij}$</span> contains all the points <span class="math-container">$p_1, \cdots, p_n$</span>. In this case, <span class="math-container">$C = D_{ij}$</span>.</p>
</li>
</ol>
<p><strong>Proof</strong>: This is not hard to show, and is listed <a href="https://en.wikipedia.org/wiki/Smallest-circle_problem" rel="noreferrer">on wikipedia</a>.</p>
<p>Let <span class="math-container">$E_1$</span> be the event that <span class="math-container">$E$</span> happens and we are in scenario <span class="math-container">$1$</span>. Let <span class="math-container">$E_2$</span> be the event that <span class="math-container">$E$</span> happens and we are in scenario <span class="math-container">$2$</span>.</p>
<p>We first compute the probability that <span class="math-container">$E_1$</span> happens. It is
<span class="math-container">$$\mathbb{P}(E_1) = \sum_{1 \leq i < j < k \leq n} \mathbb{P}(\forall \ell \neq i,j,k, p_\ell \in C_{ijk} , C_{ijk} \subset B(0, 1), p_ip_jp_k \text{ acute}).$$</span>
Conditioned on <span class="math-container">$C_{ijk} = B(x, r)$</span>, Lemma 2 shows that this happens with probability <span class="math-container">$\frac{1}{2}r^{2(n - 3)} \mathbb{1}_{|x| + r \leq 1}$</span>. Lemma 2 also tells us the distribution of <span class="math-container">$(x, r)$</span>. Integrating over <span class="math-container">$(x, r)$</span>, we conclude that
<span class="math-container">$$\mathbb{P}(\forall \ell \neq i,j,k, p_\ell \in C_{ijk} , C_{ijk} \subset B(0, 1), p_ip_jp_k \text{ acute}) = \int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3 dr dx.$$</span>
Thus we have
<span class="math-container">$$\mathbb{P}(E_1) = \binom{n}{3}\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3 dr dx.$$</span>
We can first integrate the <span class="math-container">$x$</span>-variable to get
<span class="math-container">$$\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3dr dx = 12 \int_0^1 r^{2n - 3}(1 - r)^2 dr.$$</span>
Note that
<span class="math-container">$$\int_0^1 r^{2n - 3}(1 - r)^2 dr = \frac{(2n - 3)! * 2!}{(2n)!} = \frac{2}{2n * (2n - 1) * (2n - 2)}.$$</span>
So we conclude that
<span class="math-container">$$\mathbb{P}(E_1) = \frac{n - 2}{2n - 1}.$$</span>
We next compute the probability that <span class="math-container">$E_2$</span> happens. It is
<span class="math-container">$$\mathbb{P}(E_2) = \sum_{1 \leq i < j \leq n} \mathbb{P}(\forall \ell \neq i,j, p_\ell \in D_{ij} , D_{ij} \subset B(0, 1)).$$</span>
Conditioned on <span class="math-container">$D_{ij} = B(x, r)$</span>, this happens with probability <span class="math-container">$r^{2(n - 2)} \mathbb{1}_{|x| + r \leq 1}$</span>. Lemma 1 tells us the distribution of <span class="math-container">$(x, r)$</span>. So we conclude that
<span class="math-container">$$\mathbb{P}(\forall \ell \neq i,j, p_\ell \in D_{ij} , D_{ij} \subset B(0, 1)) = \int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx.$$</span>
So
<span class="math-container">$$\mathbb{P}(E_2) = \binom{n}{2}\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx.$$</span>
We compute that
<span class="math-container">$$\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx = 8\int_0^1 r^{2n - 3} (1 - r)^2 dr = 8 \frac{(2n - 3)! 2!}{(2n)!}.$$</span>
So we conclude that
<span class="math-container">$$\mathbb{P}(E_2) = 8 \binom{n}{2} \frac{(2n - 3)! 2!}{(2n)!} = \frac{2}{2n - 1}.$$</span>
Finally, we get
<span class="math-container">$$\mathbb{P}(E) = \mathbb{P}(E_1) + \mathbb{P}(E_2) = \boxed{\frac{n}{2n - 1}}.$$</span>
The proofs of the two lemmas are really not very interesting. The main tricks are some coordinate changes. Let's look at Lemma 1 for example. The trick is to make the coordinate change <span class="math-container">$p_1 = x + r (\cos \theta, \sin \theta), p_2 = x + r (-\cos \theta, -\sin \theta)$</span>. One can compute the Jacobian of the coordinate change as something like
<span class="math-container">$$J = \begin{bmatrix} 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
\cos \theta & \sin \theta & -\cos \theta & -\sin \theta \\
-r\sin \theta & r\cos \theta & r\sin \theta & -r\cos \theta \\
\end{bmatrix}.$$</span>
And we can compute that <span class="math-container">$|\det J| = 4r$</span>. As <span class="math-container">$p_1, p_2$</span> have density function <span class="math-container">$\frac{1}{\pi^2} \mathbf{1}_{p_1, p_2 \in B(0, 1)}$</span>, the new coordinate system <span class="math-container">$(x, r, \theta)$</span> has density function
<span class="math-container">$$\frac{4r}{\pi^2} \mathbf{1}_{p_1, p_2 \in B(0, 1)}$$</span>
The second term can be dropped as it is always <span class="math-container">$1$</span> in a neighborhood of <span class="math-container">$(x, r)$</span>. To get the density of <span class="math-container">$(x, r)$</span> you can integrate in the <span class="math-container">$\theta$</span> variable to conclude that the density of <span class="math-container">$(x, r)$</span> is
<span class="math-container">$$\frac{8r}{\pi}$$</span>
as desired.</p>
<p>The proof of Lemma 2 is analogous, except you can use the more complicated coordinate change from <span class="math-container">$(p_1, p_2, p_3)$</span> to <span class="math-container">$(x, r, \theta_1, \theta_2, \theta_3)$</span>
<span class="math-container">$$p_1 = x + r (\cos \theta_1, \sin \theta_1), p_2 = x + r (\cos \theta_2, \sin \theta_2), p_3 = x + r (\cos \theta_3, \sin \theta_3).$$</span>
The Jacobian <span class="math-container">$J$</span> is now <span class="math-container">$6$</span> dimensional, and Mathematica tells me that its determinant is
<span class="math-container">$$|\det J| = r^3|\sin(\theta_1 - \theta_2) + \sin(\theta_2 - \theta_3) + \sin(\theta_3 - \theta_1)|.$$</span>
So we just need to integrate this in <span class="math-container">$\theta_{1,2,3}$</span>! Unfortunately, Mathematica failed to do this integration, but I imagine you can do this by hand and get the desired Lemma.</p>
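<p>Since it is easy to slip up in computations like these, a brute-force Monte Carlo cross-check is reassuring (a sketch; the minimal enclosing circle is found by trying all pair and triple circles, which is fine for small $n$):</p>

<pre><code>import math, random
from itertools import combinations

def circumcircle(a, b, c):
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    if abs(d) < 1e-12:
        return None                      # (nearly) collinear points
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy), math.dist((ux, uy), a)

def contains(c, r, pts, eps=1e-9):
    return all(math.dist(c, p) <= r + eps for p in pts)

def mec(pts):
    # smallest circle, over pair-diameter and triple circumcircles,
    # that contains all of pts
    best = None
    for p, q in combinations(pts, 2):
        c = ((p[0]+q[0])/2, (p[1]+q[1])/2)
        r = math.dist(p, q) / 2
        if contains(c, r, pts) and (best is None or r < best[1]):
            best = (c, r)
    for tri in combinations(pts, 3):
        cc = circumcircle(*tri)
        if cc and contains(*cc, pts) and (best is None or cc[1] < best[1]):
            best = cc
    return best

def disk_point():
    r, th = math.sqrt(random.random()), 2 * math.pi * random.random()
    return (r * math.cos(th), r * math.sin(th))

n, trials, inside = 7, 10_000, 0
for _ in range(trials):
    c, r = mec([disk_point() for _ in range(n)])
    inside += (math.hypot(*c) + r <= 1)
print(inside / trials, n / (2 * n - 1))  # both close to 0.538
</code></pre>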
| <p>Intuitive answer:</p>
<p>Let <span class="math-container">$R_d$</span> be the radius of the disk and let <span class="math-container">$R_e$</span> be the radius of the enclosing circle of the random points on the disk and <span class="math-container">$R_p$</span> be the radius of a circle that passes through the outermost points such that all selected random points on the disk either lie on or within <span class="math-container">$R_p$</span>.</p>
<p>The answer to this question is highly dependent on the exact precise definitions of the terms and phrases used in the question.</p>
<p>Assumption 1:
Does the definition of "enclosing circle ...(of the points)... lies completely on the disk" include the case of the enclosing circle lying exactly on the perimeter of the disk? i.e. does it mean <span class="math-container">$R_e < R_d$</span> or <span class="math-container">$R_e \leq R_d$</span>? I will assume the latter.</p>
<p>Assumption 2:
Does the smallest enclosing circle of the points include the case of some of the enclosed points lying on the enclosing disk? i.e. does it mean <span class="math-container">$R_e > R_p$</span> or <span class="math-container">$R_e = R_p$</span>? I will assume the latter.</p>
<p>It is well known that a circle can be defined by a minimum of 3 non-collinear points. The question can now be boiled down to "If there are infinitely many points on the disk, what is the probability of at least 3 of the points being on the perimeter of the disk?"</p>
<p>Intuition says that if there are an infinite number of points that are either on or within the perimeter of the disk, then the probability of there being 3 points exactly on the perimeter of the disk is exactly unity. If there are at least 3 points exactly on the perimeter of the disk then the enclosing circle lies completely on the disk, so the answer to the OP question is:</p>
<p>"The probability that the smallest circle enclosing <span class="math-container">$n$</span> random points on a disk lies completely on the disk, as <span class="math-container">$n\to\infty$</span>, is 1.</p>
<p>If we define the meaning of "enclosing circle lies completely on the disk" to mean strictly <span class="math-container">$R_e < R_d$</span> then things get more complicated. Now the question boils down to "What is the probability of an infinite number of random points on the disk not having any points exactly on the perimeter of the disk?"</p>
<p>If any of the random points lie exactly on the perimeter of the disk, then the enclosing circle touches the perimeter of the disk and by the definitions of this alternative interpretation of the question, the enclosing circle does not lie entirely within the perimeter of the disk. The intuitive probability of placing an infinite number of random points on the disk without any of the points landing exactly on the perimeter of the disk is zero, so the answer to this alternative interpretation of the question is:</p>
<p>"The probability that the smallest circle enclosing <span class="math-container">$n$</span> random points on a disk lies completely on the disk, as <span class="math-container">$n\to\infty$</span> = 0.</p>
|
logic | <p>By the fundamental theorem of calculus I mean the following.</p>
<p><strong>Theorem:</strong> Let $B$ be a Banach space and $f : [a, b] \to B$ be a continuously differentiable function (this means that we can write $f(x + h) = f(x) + h f'(x) + o(|h|)$ for some continuous function $f' : [a, b] \to B$). Then</p>
<p>$$\int_a^b f'(t) \, dt = f(b) - f(a).$$</p>
<p>(This integral can be defined in any reasonable way, e.g. one can use the Bochner integral or a Riemann sum.)</p>
<p>This theorem can be proven from Hahn-Banach, which allows you to reduce to the case $B = \mathbb{R}$. However, Hahn-Banach is independent of ZF.</p>
<p>Recently I tried to prove this theorem without Hahn-Banach and found that I couldn't do it. The standard proof in the case $B = \mathbb{R}$ relies on the mean value theorem, which is not applicable here. I can only prove it (I think) under stronger hypotheses, e.g. $f'$ continuously differentiable or Lipschitz.</p>
<p>So I am curious whether this theorem is even true in the absence of Hahn-Banach. It is likely that I am just missing some nice argument involving uniform continuity, but if I'm not, that would be good to know.</p>
| <p>I believe that one of the standard proofs works.</p>
<ol>
<li><p>Let $F(x) := \intop_a^x f^\prime (t) dt$. Then $F$ is differentiable and its derivative is $f^\prime$, due to a standard estimate (spelled out after this list) that has nothing to do with AC.</p></li>
<li><p>$(F-f)^\prime = 0$, hence it is constant. This boils down to the one-dimensional case: just consider $g := \Vert F-f-F(a)+f(a) \Vert$. It is a real-valued function with zero derivative, and $g(a)=0$, so we can use the usual "one-dimensional" mean value theorem.</p></li>
</ol>
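<p>For completeness, the standard estimate in step 1 uses only the triangle inequality for the integral and continuity of $f^\prime$, and involves no choice:</p>

<p>$$\|F(x+h)-F(x)-hf^\prime(x)\| = \left\|\int_x^{x+h} \big(f^\prime(t)-f^\prime(x)\big)\, dt\right\| \le |h| \sup_{|t-x|\le|h|}\|f^\prime(t)-f^\prime(x)\| = o(|h|).$$</p>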
| <p>Claim: Let $g:[a,b]\to B$ be differentiable, with $g'(t)=0$ for all $t\in[a,b]$. Then $g(t)$ is a constant.</p>
<p>Proof: Fix $\epsilon>0$. For each $t\in[a,b]$, we can find $\delta_t>0$ such that $0<|h|<\delta_t\Rightarrow\|g(t+h)-g(t)\|<\epsilon|h|$. The open intervals $(t-\delta_t,t+\delta_t)$ cover $[a,b]$, so there is a finite subcover $\{(t_i-\delta_{t_i},t_i+\delta_{t_i}):1\leq i\leq N\}$. We may choose our labeling so that $t_1<t_2<\ldots<t_N$. Now we can find points $x_0,\ldots,x_N$ with $x_0=a<t_1<x_1<t_2<\ldots<t_N<x_N=b$, satisfying $|x_i-t_i|<\delta_{t_i}$ and $|x_{i-1}-t_i|<\delta_{t_i}$ for $1\leq i\leq N$. Now,
\begin{eqnarray*}
\|g(b)-g(a)\|&=&\left\|\sum_{i=1}^N(g(x_i)-g(t_{i}))+(g(t_i)-g(x_{i-1}))\right\|\\
&<&\sum_{i=1}^N \epsilon (x_i-t_i)+\epsilon(t_i-x_{i-1})\\
&=&\epsilon(b-a)
\end{eqnarray*}
Since $\epsilon$ is arbitrary, we have $g(b)=g(a)$. This argument works on any subinterval, so $g(t)$ is constant.</p>
<p>We can apply the above with $g(x):=\int_a^xf'(t)\,dt-(f(x)-f(a))$. I believe it can be checked directly that $g'(x)=0$ for all $x$, and that $g(a)=0$, so that $g(x)$ must be identically 0.</p>
|
probability | <p>I just learned about the Monty Hall problem and found it quite amazing. So I thought about extending the problem a bit to understand more about it.<hr>
In this modification of the Monty Hall Problem, instead of three doors, we have four (or maybe $n$) doors, one with a car and the other three (or $n-1$) with a goat each (I want the car).</p>
<p>We need to choose any one of the doors. After we have chosen the door, Monty deliberately reveals one of the doors that has a goat and asks us if we wish to change our choice.</p>
<p>So should we switch the door we have chosen, or does it not matter if we switch or stay with our choice?</p>
<p>It would be even better if we knew the probability of winning upon switching given that Monty opens $k$ doors.</p>
| <p><em>I decided to make an answer out of <a href="https://math.stackexchange.com/questions/608957/monty-hall-problem-extended/609552#comment1283624_608977">my comment</a>, just for the heck of it.</em></p>
<hr>
<h2>$n$ doors, $k$ revealed</h2>
<p>Suppose we have $n$ doors, with a car behind $1$ of them. The probability of choosing the door with the car behind it on your first pick, is $\frac{1}{n}$. </p>
<p>Monty then opens $k$ doors, where $0\leq k\leq n-2$ (he has to leave your original door and at least one other door closed).</p>
<p>The probability of picking the car if you choose a different door, is the chance of not having picked the car in the first place, which is $\frac{n-1}{n}$, times the probability of picking it <em>now</em>, which is $\frac{1}{n-k-1}$. This gives us a total probability of $$ \frac{n-1}{n}\cdot \frac{1}{n-k-1} = \frac{1}{n} \cdot \frac{n-1}{n-k-1} \geq \frac{1}{n} $$</p>
<p><strong>No doors revealed</strong><br>
If Monty opens no doors, $k = 0$ and that reduces to $\frac{1}{n}$, which means your odds remain the same.</p>
<p><strong>At least one door revealed</strong><br>
For all $k > 0$, $\frac{n-1}{n-k-1} > 1$ and so the probabilty of picking the car on your second guess is greater than $\frac{1}{n}$.</p>
<p><strong>Maximum number of doors revealed</strong><br>
If $k$ is at its maximum value of $n-2$, the probability of picking a car after switching becomes $$\frac{1}{n}\cdot \frac{n-1}{n-(n-2)-1} = \frac{1}{n}\cdot \frac{n-1}{1} = \frac{n-1}{n}$$
For $n=3$, this is the solution to the original Monty Hall problem.</p>
<p>Switch.</p>
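<p>A quick simulation sketch of the general formula (the door-game encoding is mine):</p>

<pre><code>import random

def play(n, k, switch):
    car, pick = random.randrange(n), random.randrange(n)
    # Monty opens k goat doors, never the contestant's pick or the car
    openable = [d for d in range(n) if d != pick and d != car]
    opened = set(random.sample(openable, k))
    if switch:
        pick = random.choice([d for d in range(n)
                              if d != pick and d not in opened])
    return pick == car

n, k, trials = 4, 1, 100_000
wins = sum(play(n, k, True) for _ in range(trials))
print(wins / trials, (n - 1) / (n * (n - k - 1)))  # both close to 0.375
</code></pre>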
| <p>By not switching, you win a car if and only if you chose correctly initially. This happens with probability $\frac{1}{4}$. If you switch, you win a car if and only if you chose <em>incorrectly</em> initially, and then of the remaining two doors, you choose correctly. This happens with probability $\frac{3}{4}\times\frac{1}{2}=\frac{3}{8}$. So if you choose to switch, you are more likely to win a car than if you do not switch.</p>
<p>You never told me whether you'd prefer to win a car or a goat though, so I can't tell you what to do. </p>
<p><a href="http://xkcd.com/1282" rel="noreferrer"><img src="https://imgs.xkcd.com/comics/monty_hall.png" alt="xkcd #1282" title="A few minutes later, the goat from behind door C drives away in the car."></a></p>
|
probability | <p>The following probability question appeared in an <a href="https://math.stackexchange.com/questions/250/a-challenge-by-r-p-feynman-give-counter-intuitive-theorems-that-can-be-transl/346#346">earlier thread</a>:</p>
<blockquote>
<p>I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?</p>
</blockquote>
<p>The claim was that it is not actually a mathematical problem and it is only a language problem.</p>
<hr>
<p>If one wanted to restate this problem formally the obvious way would be like so:</p>
<p><strong>Definition</strong>: <em>Sex</em> is defined as an element of the set $\{\text{boy},\text{girl}\}$.</p>
<p><strong>Definition</strong>: <em>Birthday</em> is defined as an element of the set $\{\text{Monday},\text{Tuesday},\text{Wednesday},\text{Thursday},\text{Friday},\text{Saturday},\text{Sunday}\}$.</p>
<p><strong>Definition</strong>: A <em>Child</em> is defined to be an ordered pair: (sex $\times$ birthday).</p>
<p>Let $(x,y)$ be a pair of children,</p>
<p>Define an auxiliary predicate on children: for a child $x = (s,b)$, let $H(x) :\!\!\iff s = \text{boy} \text{ and } b = \text{Tuesday}$.</p>
<p>Calculate $P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y))$</p>
<p><em>I don't see any other sensible way to formalize this question.</em></p>
<hr>
<p>To actually solve this problem now requires no thought (in fact, it is thinking which leads us to guess incorrect answers); we just compute</p>
<p>$$
\begin{align*}
& P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y)) \\\\
=& \frac{P(x\text{ is a boy and }y\text{ is a boy and }(H(x)\text{ or }H(y)))}
{P(H(x)\text{ or }H(y))} \\\\
=& \frac{P((x\text{ is a boy and }y\text{ is a boy and }H(x))\text{ or }(x\text{ is a boy and }y\text{ is a boy and }H(y)))}
{P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\\\
=& \frac{\begin{align*} &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday}) \\\\
+ &P(x\text{ is a boy and }y\text{ is a boy and }y\text{ born on Tuesday}) \\\\
- &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday and }y\text{ born on Tuesday}) \\\\
\end{align*}}
{P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\\\
=& \frac{1/2 \cdot 1/2 \cdot 1/7 + 1/2 \cdot 1/2 \cdot 1/7 - 1/2 \cdot 1/2 \cdot 1/7 \cdot 1/7}
{1/2 \cdot 1/7 + 1/2 \cdot 1/7 - 1/2 \cdot 1/7 \cdot 1/2 \cdot 1/7} \\\\
=& 13/27
\end{align*}
$$</p>
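<p>A direct Monte Carlo simulation of this formalization (a sketch; the encoding of days by $0,\ldots,6$ is arbitrary, with $1$ standing for Tuesday) agrees:</p>

<pre><code>import random

sexes, days = ('boy', 'girl'), range(7)

hits = two_boys = 0
for _ in range(1_000_000):
    x = (random.choice(sexes), random.choice(days))
    y = (random.choice(sexes), random.choice(days))
    if x == ('boy', 1) or y == ('boy', 1):   # H(x) or H(y)
        hits += 1
        two_boys += (x[0] == 'boy' and y[0] == 'boy')
print(two_boys / hits, 13 / 27)  # both close to 0.481
</code></pre>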
<hr>
<p>Now what I am wondering is, does this refute the claim that this puzzle is just a language problem or add to it? Was there a lot of room for misinterpreting the questions which I just missed? </p>
| <p>There are even trickier aspects to this question. For example, what is the strategy of the guy telling you about his family? If he always mentions a boy when he has one, rather than a girl, we get one probability; if he talks about the sex of the firstborn child, we get a different probability. Your calculation makes a choice in this issue - you choose the version of "if the father has a boy and a girl, he'll mention the boy".</p>
<p>What I'm aiming at is this: the question is not well-defined mathematically. It has several possible interpretations, and as such the "problem" here is indeed one of language; or more correctly, the fact that a simple statement in English does not convey enough information to specify the precise model for the problem.</p>
<p>Let's look at a simplified version without days. The probability space for the make-up of the family is {BB, GB, BG, GG} (GB means "an older girl and a younger boy", etc). We want to know what is $P(BB|A)$ where A is determined by the way we interpret the statement about the boys. Now let's look at different possible interpretations.</p>
<p>1) If there is a boy in the family, the statement will mention him. In this case A={BB,BG,GB} and so the probability is $1/3$.</p>
<p>2) If there is a girl in the family, the statement will mention her. In this case, since the statement talked about a boy, there are NO girls in the family. So A={BB} and so the probability is 1.</p>
<p>3) The statement talks about the sex of the firstborn. In this case A={BB,BG} and so the probability is $1/2$.</p>
<p>The bottom line: The statement about the family looks "constant" to us, but it must be looked as a function from the random state of the family - and there are several different possible functions, from which you must choose one otherwise no probabilistic analysis of the situation will make sense.</p>
| <p>It is actually <em>impossible</em> to have a unique and unambiguous answer to the puzzle without explicitly articulating a probability model for how the information on gender and birthday is generated. The reason is that (1) for the problem to have a unique answer some random process is required, and (2) the answer is a function of which random model is used.</p>
<ol>
<li><p>The problem assumes that a unique probability can be deduced as the answer. This requires that the set of children described is chosen by a random process, otherwise the number of boys is a deterministic quantity and the probability would be 0 or 1 but with no ability to determine which is the case. More generally one can consider random processes that produce the complete set of information referenced in the problem: choose a parent, then choose what to reveal about the number, gender, and birth days of its children.</p></li>
<li><p>The answer depends on which random process is used. If the Tuesday birth is disclosed only when there are two boys, the probability of two boys is 1. If Tuesday birth is disclosed only when there is a sister, the probability of two boys is 0. The answer could be any number between 0 or 1 depending on what process is assumed to produce the data. </p></li>
</ol>
<p>There is also a linguistic question of how to interpret "one is a boy born on Tuesday". It could mean that the number of Tuesday-born males is exactly one, or that it is at least one.</p>
|
differentiation | <p>In maths and sciences, I see the phrases "function of" and "with respect to" used quite a lot. For example, one might say that <span class="math-container">$f$</span> is a function of <span class="math-container">$x$</span>, and then differentiate <span class="math-container">$f$</span> "with respect to <span class="math-container">$x$</span>". I am familiar with the definition of a function and of the derivative, but it's really not clear to me what a <em>function of</em> something is, or why we need to say "with respect to". I find all this a bit confusing, and it makes it hard for me to follow arguments sometimes.</p>
<p>In my research, I've found <a href="https://math.stackexchange.com/questions/264745/rigorous-definition-of-function-of">this</a>, but the answers here aren't quite what I'm looking for. The answers seemed to discuss what a function is, but I know what a function is. I am also unsatisfied with the suggestion that <span class="math-container">$f$</span> is a function of <span class="math-container">$x$</span> if we just label its argument as <span class="math-container">$x$</span>, since labels are arbitrary. I could write <span class="math-container">$f(x)$</span> for some value in the domain of <span class="math-container">$f$</span>, but couldn't I equally well write <span class="math-container">$f(t)$</span> or <span class="math-container">$f(w)$</span> instead?</p>
<p>To illustrate my confusion with a concrete example: consider the cumulative amount of wax burnt, <span class="math-container">$w$</span> as a candle burns. In a simple picture, we could say that <span class="math-container">$w$</span> depends on the amount of time for which the candle has been burning, and so we might say something like "<span class="math-container">$w$</span> is a function of time". In this simple picture, <span class="math-container">$w$</span> is a function of a single real variable.</p>
<p>My confusion is, why do we actually <em>say</em> that <span class="math-container">$w$</span> is a function of time? Surely <span class="math-container">$w$</span> is just a function on some subset of the real numbers (depending specifically on how we chose to define <span class="math-container">$w$</span>), rather than a function of time? Sure, <span class="math-container">$w$</span> only has the interpretation we think it does (cumulative amount of wax burnt) when we provide a time as its argument, but why does that mean it is a function <em>of time</em>? There's nothing stopping me from putting any old argument (provided <span class="math-container">$w$</span> is defined at that point) in to <span class="math-container">$w$</span>, like the distance I have walked since the candle was lit. Sure, we can't really interpret <span class="math-container">$w$</span> in the same way if I did this, but there is nothing in the definition of <span class="math-container">$w$</span> which stops me from doing this.</p>
<p>Also, what happens when I do some differentiation on <span class="math-container">$w$</span>. If I differentiate <span class="math-container">$w$</span> "with respect to time", then I'd get the time rate at which the candle is burning. If I differentiate <span class="math-container">$w$</span> "with respect to" the distance I have walked since the candle was lit, I'd expect to get either zero (since <span class="math-container">$w$</span> is not a function of this), or something more complicated (since the distance I have walked is related to time). I just can't see mathematically what is happening here: ultimately, no matter what we're calling our variables, <span class="math-container">$w$</span> is a function of a single variable, not of multiple, and so shouldn't there be absolutely no ambiguity in how to differentiate <span class="math-container">$w$</span>? Shouldn't there just be "the derivative of w", found by differentiating <span class="math-container">$w$</span> with respect to <em>its argument</em> (writing "with respect to its argument" is redundant!).</p>
<p>Can anyone help clarify what we mean by "function of" as opposed to function, and how this is important when we differentiate functions "with respect to" something? Thanks!</p>
| <p>As a student of math and physics, this has been one of the biggest annoyances for me; I'll give my two cents on the matter. Throughout my entire answer, whenever I use the term "function", it will always mean in the usual math sense (a rule with a certain domain and codomain blablabla).</p>
<p>I generally find two ways in which people use the phrase "... is a function of ..." The first is as you say: "<span class="math-container">$f$</span> is a function of <span class="math-container">$x$</span>" simply means that for the remainder of the discussion, we shall agree to denote the input of the function <span class="math-container">$f$</span> by the letter <span class="math-container">$x$</span>. This is just a notational choice as you say, so there's no real math going on. We just make this choice of notation to in a sense "standardize everything". Of course, we usually allow for variants on the letter <span class="math-container">$x$</span>. So, we may write things like <span class="math-container">$f(x), f(x_0), f(x_1), f(x'), f(\tilde{x}), f(\bar{x})$</span> etc. The way to interpret this is as usual: this is just the result obtained by evaluating the function <span class="math-container">$f$</span> on a specific element of its domain.</p>
<p>Also, you're right that the input label is completely arbitrary, so we can say <span class="math-container">$f(t), f(y), f(\ddot{\smile})$</span> whatever else we like. But again, often times it might just be convenient to use certain letters for certain purposes (this can allow for easier reading, and also reduce notational conflicts); and as much as possible it is a good idea to conform to the widely used notation, because at the end of the day, math is about communicating ideas, and one must find a balance between absolute precision and rigour and clarity/flow of thought. </p>
<hr>
<p>btw as a side remark, I think I am a very very very nitpicky individual regarding issues like: <span class="math-container">$f$</span> vs <span class="math-container">$f(x)$</span> for a function, I'm also always careful to use my quantifiers properly etc. However, there have been a few textbooks I glossed over, which are also extremely picky and explicit and precise about everything; but while what they wrote was <span class="math-container">$100 \%$</span> correct, it was difficult to read (I had to pause often etc). This is as opposed to some other books/papers which leave certain issues implicit, but convey ideas more clearly. This is what I meant above regarding balance between precision and flow of thought.</p>
<hr>
<p>Now, back to the issue at hand. In your third and fourth paragraphs, I think you have made a couple of true statements, but you're missing the point. (one of) the job(s) of any scientist is to quantitatively describe and explain observations made in real life. For example, you introduced the example of the amount of wax burnt, <span class="math-container">$w$</span>. If all you wish to do is study properties of functions which map <span class="math-container">$\Bbb{R} \to \Bbb{R}$</span> (or subsets thereof), then there is clearly no point in calling <span class="math-container">$w$</span> the wax burnt or whatever.</p>
<p>But given that you have <span class="math-container">$w$</span> as the amount of wax burnt, the most naive model for describing how this changes is to assume that the flame which is burning the wax is kept constant and all other variables are kept constant etc. Then, clearly the amount of wax burnt will only depend on the time elapsed. From the moment you start your measurement/experiment process, at each time <span class="math-container">$t$</span>, there will be a certain amount of wax burnt off, <span class="math-container">$w(t)$</span>. In other words, we have a function <span class="math-container">$w: [0, \tau] \to \Bbb{R}$</span>, where the physical interpretation is that for each <span class="math-container">$t \in [0, \tau]$</span>, <span class="math-container">$w(t)$</span> is the amount of wax burnt off <span class="math-container">$t$</span> units of time after starting the process. Let's for the sake of definiteness say that <span class="math-container">$w(t) = t^3$</span> (with the above domain and codomain).</p>
<hr>
<blockquote>
<p>"Sure, <span class="math-container">$w$</span> only has the interpretation we think it does (cumulative amount of wax burnt) when we provide a (real number in the domain of definition, which we interpret as) time as its argument"</p>
</blockquote>
<p>True.</p>
<blockquote>
<p>"...Sure, we can't really interpret <span class="math-container">$w$</span> in the same way if I did this, but there is nothing in the definition of w which stops me from doing this."</p>
</blockquote>
<p>Also true.</p>
<p>But here's where you're missing the point. If you didn't want to give a physical interpretation of what elements in the domain and target space of <span class="math-container">$w$</span> mean, why would you even talk about the example of burning wax? Why not just tell me the following:</p>
<blockquote>
<p>Fix a number <span class="math-container">$\tau > 0$</span>, and define <span class="math-container">$w: [0, \tau] \to \Bbb{R}$</span> by <span class="math-container">$w(t) = t^3$</span>.</p>
</blockquote>
<p>This is a perfectly self-contained mathematical statement. And now, I can tell you a bunch of properties of <span class="math-container">$w$</span>. Such as:</p>
<ul>
<li><span class="math-container">$w$</span> is an increasing function</li>
<li>For all <span class="math-container">$t \in [0, \tau]$</span>, <span class="math-container">$w'(t) = 3t^2$</span> (derivatives at end points of course are interpreted as one-sided limits)</li>
<li><span class="math-container">$w$</span> has exactly one root (of multiplicity <span class="math-container">$3$</span>) on this interval of definition.</li>
</ul>
<p>(and many more other properties). So, if you want to completely forget about the physical context, and just focus on the function and its properties, then of course you can do so. Sometimes, such an abstraction is very useful as it removes any "clutter".</p>
<p>However, I really don't think it is (always) a good idea to completely disconnect mathematical ideas from their physical origins/interpretations. And the reason that in the sciences people often assign such interpretations is because their purpose is to use the powerful tool of mathematics to quantitatively model an actual physical observation.</p>
<p>So, while you have made a few technically true statements in your third and fourth paragraphs, I believe you've missed the point of why people assign physical meaning to certain quantities.</p>
<hr>
<p>For your fifth paragraph however, I agree with the sentiment you're describing, and questions like this have tortured me. You're right that <span class="math-container">$w$</span> is a function of a single variable (where in this physical context, we interpret the arguments as time). If you now ask me how does <span class="math-container">$w$</span> change in relation to the distance I have started to walk, then I completely agree that there is no relation whatsoever. </p>
<p>But what is really going on is a terrible, annoying, confusing abuse of notation, where we use the same letter <span class="math-container">$w$</span> to have two different meanings. Physicists love such abuse of notation, and this has confused me for so long (and it still does from time to time). Of course, the intuitive idea of why the amount of wax burnt should depend on distance is clear: the further I walk, the more time has passed, and hence the more wax has burnt. So, this is really a two-step process.</p>
<p>To formalize this, we need to introduce a second function <span class="math-container">$\gamma$</span> (between certain subsets of <span class="math-container">$\Bbb{R}$</span>), where the interpretation is that <span class="math-container">$\gamma(x)$</span> is the time taken to walk a distance <span class="math-container">$x$</span>. Then when we (by abuse of language) say <span class="math-container">$w$</span> is a function of distance, what we really mean is that</p>
<blockquote>
<p>The composite function <span class="math-container">$w \circ \gamma$</span> has the physical interpretation that for each <span class="math-container">$x \in \text{domain}(\gamma)$</span>, <span class="math-container">$(w \circ \gamma)(x)$</span> is the amount of wax burnt when I walk a distance <span class="math-container">$x$</span>.</p>
</blockquote>
<p>Very often, this composition is not made explicit. In the Leibniz chain rule notation
<span class="math-container">\begin{align}
\dfrac{dw}{dx} &= \dfrac{dw}{dt} \dfrac{dt}{dx}
\end{align}</span>
where on the LHS <span class="math-container">$w$</span> is miraculously a function of distance, even though on the RHS (and initially) <span class="math-container">$w$</span> was a function of time, what is really going on is that the <span class="math-container">$w$</span> on the LHS is a complete abuse of notation. And of course, the precise way of writing it is <span class="math-container">$(w \circ \gamma)'(x) = w'(\gamma(x)) \cdot \gamma'(x)$</span>.</p>
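<p>If it helps, the hidden composition can be made explicit with a computer algebra system. The following is a sketch of my own, not from the original discussion; the walking model <span class="math-container">$\gamma(x) = x/5$</span> is an arbitrary assumption made purely for illustration:</p>
<pre><code># Sketch: making the composition w(gamma(x)) explicit with sympy.
import sympy as sp

t, x = sp.symbols('t x', positive=True)
w = t**3                   # wax burnt as a function of time
gamma = x / 5              # hypothetical model: time taken to walk distance x
w_comp = w.subs(t, gamma)  # the composite function (w o gamma)(x)

lhs = sp.diff(w_comp, x)                                # d(w o gamma)/dx
rhs = sp.diff(w, t).subs(t, gamma) * sp.diff(gamma, x)  # w'(gamma(x)) * gamma'(x)
assert sp.simplify(lhs - rhs) == 0                      # the chain rule, explicitly
</code></pre>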
<p>In general, whenever you initially have a function <span class="math-container">$f$</span> "as a function of <span class="math-container">$x$</span>" and then suddenly it becomes a "function of <span class="math-container">$t$</span>", what is really meant is that we are given two functions <span class="math-container">$f$</span> and <span class="math-container">$\gamma$</span>; and when we say "consider <span class="math-container">$f$</span> as a function of <span class="math-container">$x$</span>", we really mean to just consider the function <span class="math-container">$f$</span>, but when we say "consider <span class="math-container">$f$</span> as a function of time", we really mean to consider the (completely different) function <span class="math-container">$f \circ \gamma$</span>. </p>
<p>Summary: if the arguments of a function suddenly change interpretations (e.g. from time to distance or really anything else) then you immediately know that the author is being sloppy/lazy in explicitly mentioning that there is a hidden composition.</p>
| <p>Excellent question. There are already good answers, I'll try to make a few, concise points.</p>
<h1>Be nice to your readers</h1>
<p>You should try to be nice to people reading and using your definitions, including your future self. It means that you should stick to conventions when possible.</p>
<h1>Variable names imply domain and codomain</h1>
<p>If you write that "<span class="math-container">$f$</span> is a function of <span class="math-container">$x$</span>", readers will assume that it means that <span class="math-container">$f:\mathbb{R}\rightarrow\mathbb{R}$</span>.</p>
<p>Similarly, if you write <span class="math-container">$f(z)$</span> it will imply that <span class="math-container">$f:\mathbb{C}\rightarrow\mathbb{C}$</span>, and <span class="math-container">$f(n)$</span> might be for <span class="math-container">$f:\mathbb{N}\rightarrow\mathbb{Z}$</span>.</p>
<p>It wouldn't be wrong to define <span class="math-container">$f:\mathbb{C}\rightarrow\mathbb{C}$</span> as <span class="math-container">$f(n)= \frac{in+1}{\overline{n}-i}$</span> but it would be surprising and might lead to incorrect assumptions (e.g. <span class="math-container">$\overline{n} = n$</span>).</p>
<h1>Free and bound variables</h1>
<p>You might be interested in knowing the distinction between <a href="https://en.wikipedia.org/wiki/Free_variables_and_bound_variables#Examples" rel="noreferrer">free and bound variables</a>.</p>
<p><span class="math-container">$$\sum_{k=1}^{10} f(k, n)$$</span></p>
<blockquote>
<p><span class="math-container">$n$</span> is a free variable and <span class="math-container">$k$</span> is a bound variable; consequently the value
of this expression depends on the value of n, but there is nothing
called <span class="math-container">$k$</span> on which it could depend.</p>
</blockquote>
<p>Here's a related <a href="https://stackoverflow.com/a/21856306/6419007">answer</a> on StackOverflow.</p>
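<p>The same distinction shows up directly in code (a small sketch of my own, not from the linked answer): a summation index becomes a loop variable, which is bound, while <span class="math-container">$n$</span> stays free.</p>
<pre><code># k is bound by the generator expression; n is a free parameter of the function.
def partial_sum(f, n):
    return sum(f(k, n) for k in range(1, 11))  # nothing named k escapes this line
</code></pre>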
<h1>"All models are wrong, some are useful", <a href="https://en.wikipedia.org/wiki/All_models_are_wrong" rel="noreferrer">George Box</a></h1>
<p>Your simplified model of the amount of wax burnt as a function of time is probably wrong (it cannot perfectly describe the state of every atom), but it might at least be useful.</p>
<p>The amount of wax burnt as a function of "the distance you have walked since the candle was lit" will be even less correct and much less useful.</p>
<h1>Physical variable names have meaning</h1>
<p><a href="https://en.wikipedia.org/wiki/List_of_common_physics_notations" rel="noreferrer">Physical variable names</a> are not just placeholders. They are linked to <a href="https://en.wikipedia.org/wiki/Physical_quantity" rel="noreferrer">physical quantities</a> and <a href="https://en.wikipedia.org/wiki/Unit_of_measurement" rel="noreferrer">units</a>. Replacing <span class="math-container">$l$</span> by <span class="math-container">$t$</span> as a variable name for a function will not just be surprising to readers, it will break <a href="https://en.wikipedia.org/wiki/Dimensional_analysis" rel="noreferrer">dimensional homogeneity</a>.</p>
|
combinatorics | <p>I'm trying to prove that</p>
<p>$$\frac{100!}{50!\cdot2^{50}}$$</p>
<p>is an integer.</p>
<p>For the moment I did the following:</p>
<p>$$\frac{100!}{50!\cdot2^{50}} = \frac{51 \cdot 52 \cdots 99 \cdot 100}{2^{50}}$$</p>
<p>But it still doesn't quite work out.</p>
<p>Hints, anyone?</p>
<p>Thanks</p>
| <p>$$ \frac{(2n)!}{n! 2^{n}} = \frac{\prod\limits_{k=1}^{2n} k}{\prod\limits_{k=1}^{n} (2k)} = \prod_{k=1}^{n} (2k-1) \in \Bbb{Z}. $$</p>
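<p>A quick numerical sanity check of this identity for $n=50$ (a throwaway sketch, not part of the proof):</p>
<pre><code># Verify (2n)!/(n! * 2^n) equals the product of the first n odd numbers.
from math import factorial, prod

n = 50
lhs = factorial(2 * n) // (factorial(n) * 2**n)
rhs = prod(range(1, 2 * n, 2))  # 1 * 3 * 5 * ... * 99
assert lhs == rhs               # an integer, as claimed
</code></pre>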
| <p>We have $100$ people at a dance class. How many ways are there to divide them into $50$ dance pairs of $2$ people each? (Of course we will pay no attention to gender.)</p>
<p>Clearly there is an <strong>integer</strong> number of ways. Let us count the ways.</p>
<p>We solve first a <em>different</em> problem. This is a tango class. How many ways are there to divide $100$ people into dance pairs, one person to be called the leader and the other the follower? </p>
<p>Line up the people. There are $100!$ ways to do this. Now go down the line, pairing $1$ and $2$ and calling $1$ the leader, pairing $3$ and $4$ and calling $3$ the leader, and so on.</p>
<p>We obtain each leader-follower division in $50!$ ways, since the groups of $2$ can be permuted. So there are $\dfrac{100!}{50!}$ ways to divide the people into $50$ leader-follower pairs to dance the tango.</p>
<p>Now solve the original problem. To just count the number of democratic pairs, note that interchanging the leader/follower tags produces the same pair division. So each democratic pairing gives rise to $2^{50}$ leader/follower pairings. It follows that there are $\dfrac{100!}{2^{50}\cdot 50!}$ democratic pairings.</p>
|
matrices | <p>Let $M$ be a symmetric $n \times n$ matrix. </p>
<p>Is there any equality or inequality that relates the trace and determinant of $M$?</p>
| <p>Not exactly what you're looking for but I would be remiss not to mention that for any complex square matrix $A$ the following identity holds:</p>
<p>$$\det(e^A)=e^{\mbox{tr}(A)} $$</p>
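<p>A quick numerical check of this identity (my own sketch, using SciPy's matrix exponential; the matrix $A$ is an arbitrary choice):</p>
<pre><code># det(exp(A)) should equal exp(tr(A)) for any square complex matrix A.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [-1.0, 0.5]])
print(np.linalg.det(expm(A)), np.exp(np.trace(A)))  # agree up to roundoff
</code></pre>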
| <p>The determinant and the trace are two quite different beasts; little relation can be found between them.</p>
<p>If the matrix is not only symmetric (Hermitian) but also <em>positive semi-definite</em>, then its eigenvalues are real and non-negative.
Hence, <a href="https://en.wikipedia.org/wiki/Trace_(linear_algebra)#Eigenvalue_relationships" rel="noreferrer">given the properties</a> <span class="math-container">${\rm tr}(M)=\sum \lambda_i$</span> and <span class="math-container">${\rm det}(M)=\prod \lambda_i$</span>, and recalling the <a href="https://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means" rel="noreferrer">AM GM inequality</a>, we get the following (probably not very useful) inequality:</p>
<p><span class="math-container">$$\frac{{\rm tr}(M)}{n} \ge {\rm det}(M)^{1/n}$$</span></p>
<p>(equality holds iff <span class="math-container">$M = \lambda I$</span> for some <span class="math-container">$\lambda \ge 0$</span>)</p>
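<p>A quick numerical sanity check of this bound (my own sketch; the matrix is a random positive definite example):</p>
<pre><code># AM-GM bound: tr(M)/n is at least det(M)^(1/n) for symmetric PSD M.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B @ B.T + np.eye(4)  # symmetric positive definite
n = M.shape[0]
print(np.trace(M) / n, np.linalg.det(M) ** (1 / n))  # first value is larger
</code></pre>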
<p>Much more interesting/insightful/useful are the answers by <a href="https://math.stackexchange.com/a/2083444/312">Owen Sizemore</a> and <a href="https://math.stackexchange.com/a/2083801/312">Rodrigo de Azevedo</a>.</p>
|
combinatorics | <p>I'm having a hard time finding the pattern. Let's say we have a set</p>
<p>$$S = \{1, 2, 3\}$$</p>
<p>The subsets are:</p>
<p>$$P = \{ \{\}, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\} \}$$</p>
<p>And the value I'm looking for, is the sum of the cardinalities of all of these subsets. That is, for this example, $$0+1+1+1+2+2+2+3=12$$</p>
<p><strong>What's the formula for this value?</strong></p>
<p>I can sort of see a pattern, but I can't generalize it.</p>
| <p>Here is a bijective argument. Fix a finite set $S$. Let us count the number of pairs $(X,x)$ where $X$ is a subset of $S$ and $x \in X$. We have two ways of doing this, depending which coordinate we fix first.</p>
<p><strong>First way</strong>: For each set $X$, there are $|X|$ elements $x \in X$, so the count is $\sum_{X \subseteq S} |X|$. </p>
<p><strong>Second way:</strong> For each element $x \in S$, there are $2^{|S|-1}$ sets $X$ with $x \in X$. We get them all by taking the union of $\{x\}$ with an arbitrary subset of $S\setminus\{x\}$. Thus, the count is $\sum_{x \in S} 2^{|S|-1} = |S| 2^{|S|-1}$.</p>
<p>Since both methods count the same thing, we get
$$\sum_{X \subseteq S} |X| = |S| 2^{|S|-1},$$
as in the other answers.</p>
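<p>A brute-force check of the identity for a small set (a throwaway sketch):</p>
<pre><code># Sum of |X| over all subsets X of {1,...,5} should be 5 * 2^4 = 80.
from itertools import combinations

S = range(1, 6)
total = sum(len(X) for r in range(len(S) + 1) for X in combinations(S, r))
assert total == 5 * 2**4
</code></pre>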
| <p>Each time an element appears in a set, it contributes $1$ to the value you are looking for. For a given element, it appears in exactly half of the subsets, i.e. $2^{n-1}$ sets. As there are $n$ total elements, you have $$n2^{n-1}$$ as others have pointed out.</p>
|
probability | <p>I am writing a computer program that involves generating 4 random numbers, a, b, c, and d, the sum of which should equal 100. </p>
<p>Here is the method I first came up with to achieve that goal, in pseudocode:</p>
<pre><code># The same method, as runnable Python; I'm assuming "a random number
# out of 100" means an integer from 0 to 100 inclusive (randint bounds
# are inclusive).
import random

a = random.randint(0, 100)        # let's say it generates 16
remaining = 100 - a               # 100 - 16 = 84
b = random.randint(0, remaining)  # let's say it generates 21
remaining -= b                    # 84 - 21 = 63
c = random.randint(0, remaining)  # let's say it generates 40
d = remaining - c                 # assign the remainder: 63 - 40 = 23
</code></pre>
<p>However, for some reason I have a funny feeling about this method. Am I truly generating four random numbers that sum to 100 here? Would this be equivalent to me generating four random numbers out of 100 over and over again, and only accepting when the sum is 100? Or am I creating some sort of bias by picking a random number out of 100, and then a random number out of the remainder, and so on? Thanks.</p>
| <p>No, this is not a good approach - half the time, the first element will be $50$ or more, which is way too often. Essentially, the odds that the first element is $100$ should not be the same as the odds that the first element is $10$. There is only one way for $a=100$, but there are loads of ways for $a=10$.</p>
<p>The number of such sums $100=a+b+c+d$ with $a,b,c,d\geq 0$ integers, is: $\binom{100+3}{3}$. If your algorithm doesn't randomly choose from $1$ to some multiple of $103$, you can't get an even probability.</p>
<p>An ideal approach: pick a number $x_1$ from $1$ to $103$. Then pick a different number $x_2\neq x_1$ from $1$ to $103$, then pick a third number $x_3\neq x_1,x_2$ from $1$ to $103$.</p>
<p>Then sort these values, so that $x_1<x_2<x_3$. Then set $$a=x_1-1, b=x_2-x_1-1, c=x_3-x_2-1, d=103-x_3.$$</p>
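<p>This recipe translates directly into code (a short sketch of the method just described):</p>
<pre><code># Uniform sample over all (a, b, c, d) with a + b + c + d = 100, parts >= 0.
import random

x1, x2, x3 = sorted(random.sample(range(1, 104), 3))  # 3 distinct picks from 1..103
a, b, c, d = x1 - 1, x2 - x1 - 1, x3 - x2 - 1, 103 - x3
assert a + b + c + d == 100 and min(a, b, c, d) >= 0
</code></pre>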
| <p>Generate four random numbers between $0$ and $1$</p>
<p>Add these four numbers; then divide each of the four numbers by the sum, multiply by $100$, and round to the nearest integer.</p>
<p>Check that the four integers add to $100$ (they will, two thirds of the time). If they don't (rounding errors), try again...</p>
|
differentiation | <p>I am watching the following video lecture:</p>
<p><a href="https://www.youtube.com/watch?v=G_p4QJrjdOw" rel="noreferrer">https://www.youtube.com/watch?v=G_p4QJrjdOw</a></p>
<p>In there, he talks about calculating gradient of $ x^{T}Ax $ and he does that using the concept of exterior derivative. The proof goes as follows:</p>
<ol>
<li>$ y = x^{T}Ax$</li>
<li>$ dy = dx^{T}Ax + x^{T}Adx = x^{T}(A+A^{T})dx$ (using trace property of matrices)</li>
<li>$ dy = (\nabla y)^{T} dx $ and because the rule is true for all $dx$</li>
<li>$ \nabla y = x^{T}(A+A^{T})$</li>
</ol>
<p>It seems that in step 2, some form of product rule for differentials is applied. I am familiar with product rule for single variable calculus, but I am not understanding how product rule was applied to a multi-variate function expressed in matrix form.</p>
<p>It would be great if somebody could point me to a mathematical theorem that allows Step 2 in the above proof.</p>
<p>Thanks!
Ajay</p>
| <p><span class="math-container">\begin{align*}
dy
& = d(x^{T}Ax)
= d(Ax\cdot x)
= d\left(\sum_{i=1}^{n}(Ax)_{i}x_{i}\right) \\
& = d \left(\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i,j}x_{j}x_{i}\right)
=\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i,j}x_{i}dx_{j}+\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i,j}x_{j}dx_{i} \\
& =\sum_{i=1}^{n}(Ax)_{i}\,dx_{i}+\sum_{i=1}^{n}(A\,dx)_{i}\,x_{i}
=(dx)^{T}Ax+x^{T}Adx \\
& =(dx)^{T}Ax+(dx)^{T}A^{T}x
=(dx)^{T}(A+A^{T})x.
\end{align*}</span></p>
| <p>Step 2 might be the result of a simple computation. Consider $u(x)=x^TAx$, then
$$
u(x+h)=(x+h)^TA(x+h)=x^TAx+h^TAx+x^TAh+h^TAh,
$$
that is, $u(x+h)=u(x)+x^T(A+A^T)h+r_x(h)$ where $r_x(h)=h^TAh$ (this uses the fact that $h^TAx=x^TA^Th$, which holds because $m=h^TAx$ is a $1\times1$ matrix hence $m^T=m$). </p>
<p>One sees that $r_x(h)=o(\|h\|)$ when $h\to0$.
This proves that the differential of $u$ at $x$ is the linear function $\nabla u(x):\mathbb R^n\to\mathbb R$, $h\mapsto x^T(A+A^T)h$, which can be identified with the unique vector $z$ such that $\nabla u(x)(h)=z^Th$ for every $h$ in $\mathbb R^n$, that is, $z=(A+A^T)x$.</p>
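<p>For anyone who wants to double-check the result numerically, here is a small finite-difference sketch of my own (random $A$ and $x$):</p>
<pre><code># Central differences should reproduce grad(x^T A x) = (A + A^T) x.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = lambda v: v @ A @ v
h = 1e-6
num_grad = np.array([(y(x + h * e) - y(x - h * e)) / (2 * h) for e in np.eye(3)])
print(np.allclose(num_grad, (A + A.T) @ x, atol=1e-5))  # True
</code></pre>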
|
linear-algebra | <p>It’s from the book “Linear Algebra and its Applications” by Gilbert Strang, page 260.</p>
<p><span class="math-container">$$(I-A)^{-1}=I+A+A^2+A^3+\ldots$$</span></p>
<p>Nonnegative matrix <span class="math-container">$A$</span> has the largest eigenvalue <span class="math-container">$\lambda_1<1$</span>.</p>
<p>Then, the book says <span class="math-container">$(I-A)^{-1}$</span> has the same eigenvector, with eigenvalue <span class="math-container">$1/(1-\lambda_1)$</span>.</p>
<p>Why? Is there any other formulas between inverse matrix and eigenvalue that I don’t know?</p>
| <p>A matrix $A$ has an eigenvalue $\lambda$ if and only if $A^{-1}$ has eigenvalue $\lambda^{-1}$. To see this, note that
$$A\mathbf{v} = \lambda\mathbf{v} \implies A^{-1}A\mathbf{v} = \lambda A^{-1}\mathbf{v}\implies A^{-1}\mathbf{v} = \frac{1}{\lambda}\mathbf{v}$$</p>
<p>If your matrix $A$ has eigenvalue $\lambda$, then $I-A$ has eigenvalue $1 - \lambda$ and therefore $(I-A)^{-1}$ has eigenvalue $\frac{1}{1-\lambda}$.</p>
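<p>A quick numerical illustration (my own sketch; the matrix is a random nonnegative example with spectral radius below $1$):</p>
<pre><code># (I - A)^{-1} keeps A's eigenvectors, with eigenvalues 1/(1 - lambda).
import numpy as np

rng = np.random.default_rng(2)
A = 0.2 * rng.random((4, 4))  # nonnegative; every row sum stays below 1
lam, V = np.linalg.eig(A)
k = np.argmax(lam.real)       # index of the largest eigenvalue lambda_1
v = V[:, k]
B = np.linalg.inv(np.eye(4) - A)
print(np.allclose(B @ v, v / (1 - lam[k])))  # True: same eigenvector
</code></pre>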
| <p>If you are looking at a single eigenvector $v$ only, with eigenvalue $\lambda$, then $A$ just acts as the scalar $\lambda$, and <em>any</em> reasonable expression in $A$ acts on $v$ as the same expression in $\lambda$. This works for expressions $I-A$ (really $1-A$, so it acts as $1-\lambda$), its inverse $(I-A)^{-1}$, in fact for any rational function of $A$ (if well defined; this is where you need $\lambda_1<1$) and even for $\exp A$.</p>
|
logic | <p>A friend of mine told me the following fact:</p>
<blockquote>
<p>If $k$ is any algebraically closed field, then a polynomial map $f\colon k^n\to k^n$ of affine space $k^n$ is surjective if it is injective.</p>
</blockquote>
<p>The proof he told me was actually a <em>logic</em> argument, which I didn't really understand, so I can't fully reproduce it here. The idea was that since it is a first-order statement, it is enough to prove it for fields of characteristic $p>0$ for infinitely many $p$. Then for some reason it was enough to prove it for locally finite fields, and it somehow reduces to the same statement about <em>finite</em> fields $k$, where it is already obvious.</p>
<p>I am very far from logic, so I would like to see a nice algebraic (or geometric) proof of this fact. Do you know how to do that?</p>
<p>Thank you very much!</p>
| <p>I would like to comment, that the model theoretic proof really is not that complicated, and this is coming from someone who is very algebro geometrically minded.</p>
<p>While there are many places you can probably find this on the internet, let me rephrase the argument here. Perhaps I'll say it in some way that is, for some reason, more clear to you. I will be semi-informal about the intuitive stuff, in an effort to make it more comprehensible.</p>
<p>I will include many of the proofs of the important theorems, although they can be easily disregarded. It maybe worth reading them though just to convince yourself how trivial their proofs really are.</p>
<p>Also, I am no model theorist, by any standard of the term, so there is a possibility I'll make a mistake below. I hope, in that case, one of our resident model theorists could correct me :)</p>
<hr />
<p>Recall that a <em>language</em> <span class="math-container">$\mathcal{L}$</span> is nothing more than a collection of strings of symbols over some "signature/alphabet" of base symbols. For example, consider the language of rings <span class="math-container">$\mathcal{L}_\text{ring}$</span>, which consists of sentences formed from the alphabet <span class="math-container">$\{+,\cdot,0,1\}$</span>. Another example might be the language of groups <span class="math-container">$\mathcal{L}_\text{group}$</span>, which is formed from the alphabet <span class="math-container">$\{\cdot,e,i\}$</span> (where <span class="math-container">$\cdot$</span> is supposed to be a binary operation, <span class="math-container">$e$</span> the identity element of the group, and <span class="math-container">$i$</span> the inverse function).</p>
<p>Now, all of the elements of this alphabet are supposed to represent either a function (with some arity), a constant (just some element), or a relation. For example, in <span class="math-container">$\mathcal{L}_\text{group}$</span> the symbol <span class="math-container">$e$</span> is a constant, and <span class="math-container">$\cdot$</span> and <span class="math-container">$i$</span> are function symbols with arities <span class="math-container">$2$</span> and <span class="math-container">$1$</span> respectively.</p>
<p>Now, why I am distinguishing between constant symbols and function symbols (since they are at that point just meaningless symbols) seems silly, until one defines an <span class="math-container">$\mathcal{L}$</span>-structure. Namely, given a language <span class="math-container">$\mathcal{L}$</span>, an <em><span class="math-container">$\mathcal{L}$</span>-structure</em> <span class="math-container">$S$</span> of that language is nothing more than a set <span class="math-container">$S$</span> with an interpretation of each of the symbols of <span class="math-container">$\mathcal{L}$</span> in terms of <span class="math-container">$S$</span>. For example, an <span class="math-container">$\mathcal{L}_\text{group}$</span> structure is nothing but an interpretation of <span class="math-container">$e$</span>, <span class="math-container">$\cdot$</span>, and <span class="math-container">$i$</span> in <span class="math-container">$S$</span>. What does this mean? Well, since <span class="math-container">$e$</span> is a constant symbol, its interpretation should just be an element <span class="math-container">$e\in S$</span> that I've picked. Since <span class="math-container">$\cdot$</span> is a function symbol with arity <span class="math-container">$2$</span>, an interpretation is just some function <span class="math-container">$\cdot:S^2\to S$</span>. Finally, since <span class="math-container">$i$</span> is a function symbol with arity <span class="math-container">$1$</span>, an interpretation is just a function <span class="math-container">$i:S\to S$</span>.</p>
<p>Now, note that there was absolutely no restriction on how I interpreted my symbols in an <span class="math-container">$\mathcal{L}$</span>-structure, and moreover there are many such interpretations. As an example, I can make <span class="math-container">$\{0,1\}$</span> an <span class="math-container">$\mathcal{L}_\text{group}$</span>-structure by saying <span class="math-container">$e=0$</span>, <span class="math-container">$\cdot:\{0,1\}^2\to\{0,1\}$</span> just sends everything to <span class="math-container">$1$</span>, and <span class="math-container">$i:\{0,1\}\to\{0,1\}$</span> just sends everything to <span class="math-container">$0$</span>. Indeed, at this point I have not dictated that the interpretations satisfy any properties. But, of course, this seems silly. We'd expect that the language of groups should have something to do with groups. This is precisely where a theory comes in.</p>
<p>A <em>theory</em> <span class="math-container">$\mathcal{T}$</span> in a language <span class="math-container">$\mathcal{L}$</span> is nothing more than a collection of sentences using only the alphabet, the standard logical quantifiers (e.g. <span class="math-container">$\forall,\exists$</span>), and the standard logical connectives (e.g. <span class="math-container">$\vee,\wedge$</span>,etc.). For example, the theory <span class="math-container">$\mathcal{T}_\text{group}$</span> in <span class="math-container">$\mathcal{L}_\text{group}$</span> consists of the following three sentences</p>
<p><span class="math-container">$$1.\ (\forall x,y,z)(x\cdot(y\cdot z)=(x\cdot y)\cdot z)$$</span>
<span class="math-container">$$2.\ (\forall x)(\cdot(x,i(x))=e=\cdot(i(x),x))$$</span>
<span class="math-container">$$3.\ (\forall x)(\cdot(x,e)=x=\cdot(e,x))$$</span></p>
<p>A <em>model</em> of a theory <span class="math-container">$\mathcal{T}$</span> is then nothing but an <span class="math-container">$\mathcal{L}$</span>-structure for which the interpretations of the symbols satisfy the sentences in <span class="math-container">$\mathcal{T}$</span> (where we are quantifying over the elements of the set). For example, my interpretation of <span class="math-container">$\mathcal{L}_\text{group}$</span> in the set <span class="math-container">$\{0,1\}$</span> is NOT a model of the theory <span class="math-container">$\mathcal{T}_\text{group}$</span>. That said, the <span class="math-container">$\mathcal{L}_\text{group}$</span>-structure on the set <span class="math-container">$\{0,1\}$</span> where <span class="math-container">$e=0$</span>, <span class="math-container">$\cdot(0,1)=\cdot(1,0)=1$</span>, <span class="math-container">$\cdot(1,1)=\cdot(0,0)=0$</span>, and <span class="math-container">$i(0)=0$</span>, and <span class="math-container">$i(1)=1$</span> is a model of <span class="math-container">$\mathcal{T}_\text{group}$</span>. In fact, I think you can see that a model of <span class="math-container">$\mathcal{T}_\text{group}$</span> is nothing more than a group!</p>
<p>Now that all of the basic terminology is established, which really was quite intuitive, we can finally discuss something substantive. I've often been told that there are only two theorems in model theory. The first is the following:</p>
<blockquote>
<p><strong>Theorem (Gödel, Maltsev):</strong> A theory of first order sentences has a model if and only if every finite subset has a model.</p>
</blockquote>
<p>This, from what I understand (I've never seen the proof), isn't really that complicated. In fact, if you interpret it correctly in terms of Stone spaces, it apparently comes directly from Tychonoff's theorem. It also has a very, intuitively obvious, formulation in terms of ultraproducts, in which case it follows from Łoś's theorem. The intuition for the correctness of the compactness theorem follows much like the intuition in algebraic geometry that one can often descend problems about schemes <span class="math-container">$X/\mathbb{C}$</span> to a statement about some scheme <span class="math-container">$X'/R$</span> for some finitely generated subring <span class="math-container">$R$</span> of <span class="math-container">$\mathbb{C}$</span> (this is Grothendieck's trick of "passing to the limit").</p>
<p>The second foundational theorem of model theory is the following:</p>
<blockquote>
<p><strong>Theorem(Lowenheim-Skolem):</strong> If <span class="math-container">$\mathcal{T}$</span> is a theory which admits an infinite model, then it admits a model of size <span class="math-container">$\kappa$</span> for every <span class="math-container">$\kappa\geqslant \max\{\# \mathcal{L},\aleph_0\}$</span>.</p>
</blockquote>
<p>The above theorem actually is two theorems rolled into one: the "upward" Lowenheim-Skolem theorem, and the "downard" Lowenheim-Skolem theorem. The upward part says that if you have an infinite model of size <span class="math-container">$\kappa$</span>, you have a model of size <span class="math-container">$\lambda$</span> for every <span class="math-container">$\lambda\geqslant\kappa$</span>. The downward part should now have obvious meaning.</p>
<p>The downward part of Lowenheim-Skolem is annoying and elementary--you explicitly construct the models. This is also where the size of the language comes into play. Intuitively, the size of the language comes into play since your language could contain nothing but constant symbols, all of which are dictated by your theory to be distinct. The upward Lowenheim-Skolem theorem is easy to prove from compactness and the downward part of Lowenheim-Skolem.</p>
<p><em>Proof(Upward Lowenheim-Skolem)</em> Let <span class="math-container">$M$</span> be your model of size <span class="math-container">$\kappa$</span>, and let <span class="math-container">$\lambda\geqslant\kappa$</span> be a cardinal. Consider the language <span class="math-container">$\mathcal{L}'$</span> which is obtained from the language of <span class="math-container">$\mathcal{T}$</span> by appending the set of constants <span class="math-container">$\{c_i\}_{i\leqslant\lambda}$</span>. Let <span class="math-container">$\mathcal{T}'$</span> be the theory obtained by appending the sentences <span class="math-container">$\varphi_{i,j}:c_i\ne c_j$</span> for all <span class="math-container">$i,j$</span>.</p>
<p>Note then that for all finite subsets <span class="math-container">$\Delta$</span> of <span class="math-container">$\mathcal{T}'$</span>, there is a model of <span class="math-container">$\Delta$</span>. Indeed, consider the finitely many sentences <span class="math-container">$\varphi_{i,j}\in \Delta$</span>, with <span class="math-container">$(i,j)\in I$</span> where <span class="math-container">$I$</span> some finite set. Let us define a <span class="math-container">$\mathcal{L}'$</span>-structure <span class="math-container">$M_\Delta$</span>, whose underlying set is the same as <span class="math-container">$M$</span>. We interpret <span class="math-container">$\mathcal{L}$</span> in <span class="math-container">$M_\Delta$</span> the same way we interpretted it in <span class="math-container">$M$</span>, and for each <span class="math-container">$\{c_i\}_{i\leqslant\lambda}$</span> we choose some element <span class="math-container">$c_i\in M$</span>. We choose the <span class="math-container">$c_i$</span> though such that <span class="math-container">$c_i\ne c_j$</span> if <span class="math-container">$(i,j)\in I$</span>. Note then that by the very construction of <span class="math-container">$M_\Delta$</span>, that <span class="math-container">$M_\Delta$</span> is a model of <span class="math-container">$\Delta$</span> as desired.</p>
<p>Thus, by compactness, <span class="math-container">$\mathcal{T}'$</span> admits some model <span class="math-container">$N$</span>. Necessarily, <span class="math-container">$\# N\geqslant\lambda$</span> since <span class="math-container">$N$</span> interprets <span class="math-container">$\lambda$</span> many symbols <span class="math-container">$c_i$</span> which, by virtue of modeling <span class="math-container">$\mathcal{T}'$</span>, are all distinct. We then get a model of size exactly <span class="math-container">$\lambda$</span> by applying downward Lowenheim-Skolem. <span class="math-container">$\blacksquare$</span></p>
<p>The amazing thing is that given the Compactness Theorem everything else becomes very easy at least as far as the proof of Ax-Grothendieck is concerned.</p>
<p>Before we get to the Ax-Grothendieck theorem though, we first need to discuss some technically powerful, but surprisingly simple theorems.</p>
<p>Let us say that a theory <span class="math-container">$\mathcal{T}$</span> is <em>complete</em> if for every sentence <span class="math-container">$\varphi$</span> in the language, either <span class="math-container">$\varphi$</span> is in <span class="math-container">$\mathcal{T}$</span> or <span class="math-container">$\neg\varphi$</span> (<span class="math-container">$\neg$</span> is negation) is in <span class="math-container">$\mathcal{T}$</span>. This says that every model <span class="math-container">$M$</span> of <span class="math-container">$\mathcal{T}$</span> must either satisfy <span class="math-container">$\varphi$</span>, or every model <span class="math-container">$M$</span> of <span class="math-container">$\mathcal{T}$</span> must satisfy <span class="math-container">$\neg\varphi$</span>. For example, the theory <span class="math-container">$\mathcal{T}_\text{group}$</span> is NOT complete. Consider the sentence <span class="math-container">$\varphi$</span> given by <span class="math-container">$(\forall x,y)(x\cdot y=y\cdot x)$</span>. Then, there are models of <span class="math-container">$\mathcal{T}$</span> for which <span class="math-container">$\varphi$</span> holds (e.g. abelian groups) and those for which <span class="math-container">$\varphi$</span> does not hold (non-abelian groups). Thus, being a complete theory is a very, very strong thing, yet an obviously desirable one. Why is it so desirable? Because, to check that some sentence is true for all models of the theory, one needs only check it is true for ONE model of the theory.</p>
<p>Now, while completeness seems like a very attractive property, it has some obvious immediate downsides. First, one would expect that most theories aren't complete (think of three theories at random--probably none of them is complete). Second, even if something is complete, it seems very difficult to prove it is such--think about how one might attack such a problem. That said, there is a very important model theoretic property of a theory which guarantees completeness and is secretly familiar to us from our everyday mathematical lives. This property is <span class="math-container">$\kappa$</span>-categoricity.</p>
<p>Let us say that a theory <span class="math-container">$\mathcal{T}$</span> is <em><span class="math-container">$\kappa$</span>-categorical</em>, if <span class="math-container">$\mathcal{T}$</span> admits, up to isomorphism, only one model of cardinality <span class="math-container">$\kappa$</span>. Isomorphism between two models <span class="math-container">$M,M'$</span> of <span class="math-container">$\mathcal{T}$</span> is what you'd expect--it's a bijection, which preserves the structure dictated by the language (in all of the theories you'd care about, rings, groups, etc. isomorphisms are what you'd expect). Why is <span class="math-container">$\kappa$</span>-categoricity so nice? Well, because of the following:</p>
<blockquote>
<p><strong>Theorem(Vaught's Test):</strong> Suppose that <span class="math-container">$\mathcal{T}$</span> is <span class="math-container">$\kappa$</span> categorical where <span class="math-container">$\kappa\geqslant \max(\aleph_0,|\mathcal{L}|)$</span>, and every model of <span class="math-container">$\mathcal{T}$</span> is infinite. Then, <span class="math-container">$\mathcal{T}$</span> is complete.</p>
</blockquote>
<p><em>Proof:</em> If <span class="math-container">$\mathcal{T}$</span> were not complete, there would exist some sentence <span class="math-container">$\varphi$</span> in the language of <span class="math-container">$\mathcal{T}$</span> such that there are models <span class="math-container">$M$</span> and <span class="math-container">$M'$</span> of <span class="math-container">$\mathcal{T}$</span> for which <span class="math-container">$\varphi$</span> holds in <span class="math-container">$M$</span> and <span class="math-container">$\neg\varphi$</span> holds in <span class="math-container">$M'$</span>. Consider then the theories <span class="math-container">$\mathcal{S}=\mathcal{T}\cup\{\varphi\}$</span> and <span class="math-container">$\mathcal{S}'=\mathcal{T}\cup\{\neg\varphi\}$</span>. Note then that <span class="math-container">$M$</span> is a model of <span class="math-container">$\mathcal{S}$</span> and <span class="math-container">$M'$</span> is a model of <span class="math-container">$\mathcal{S}'$</span>. Since <span class="math-container">$M$</span> and <span class="math-container">$M'$</span> are infinite (since all models of <span class="math-container">$\mathcal{T}$</span> are) we have by Lowenheim-Skolem that there is a model <span class="math-container">$N$</span> of <span class="math-container">$\mathcal{S}$</span> and <span class="math-container">$N'$</span> of <span class="math-container">$\mathcal{S}'$</span>, each of size <span class="math-container">$\kappa$</span>. Evidently <span class="math-container">$N$</span> and <span class="math-container">$N'$</span> are models of <span class="math-container">$\mathcal{T}$</span> of size <span class="math-container">$\kappa$</span>, and so by assumption of <span class="math-container">$\kappa$</span>-categoricity, isomorphic. But, this is a contradiction since <span class="math-container">$\varphi$</span> holds in <span class="math-container">$N$</span> but not in <span class="math-container">$N'$</span>. <span class="math-container">$\blacksquare$</span></p>
<p>As an example of a <span class="math-container">$2^{\aleph_0}$</span>-categorical theory, consider the theory of <span class="math-container">$\mathbb{F}_2$</span> vector spaces (defined in the way you'd expect). Since, by cardinality considerations, any <span class="math-container">$\mathbb{F}_2$</span> vector space of cardinality <span class="math-container">$2^{\aleph_0}$</span> must have dimension <span class="math-container">$2^{\aleph_0}$</span>, any two are isomorphic.</p>
<p>It's interesting to note that, in fact, <span class="math-container">$\mathbb{F}_2$</span> vector spaces are <span class="math-container">$\kappa$</span>-categorical for every uncountable cardinal <span class="math-container">$\kappa$</span>. This is no mistake, since Morley's categoricity theorem (a very deep result) says that if a theory in a countable language is <span class="math-container">$\kappa$</span>-categorical for one uncountable cardinal, it's <span class="math-container">$\lambda$</span>-categorical for all uncountable cardinals <span class="math-container">$\lambda$</span>.</p>
<hr />
<p>Ok, so now onto what you actually care about.</p>
<p>Let us define the theory <span class="math-container">$\mathsf{ACF}$</span> of algebraically closed fields. This is a theory in the language of rings <span class="math-container">$\mathcal{L}_\text{ring}$</span>, with the obvious extra sentences dictating commutativity and existence of inverses, as well as the existence of roots of all polynomials. This last condition is a little annoying to encode with just <span class="math-container">$\mathcal{L}_\text{ring}$</span> and logical quantifiers/connectives. It goes something like: for each <span class="math-container">$n\geqslant 1$</span>, add in the sentence <span class="math-container">$(\forall a_0,\ldots,a_{n-1})(\exists x)(x^n+a_{n-1}x^{n-1}+\cdots+a_0=0)$</span> (restricting to monic polynomials sidesteps the degenerate case of a vanishing leading coefficient, which would make the naive version false in every field).</p>
<p>Now, the theory <span class="math-container">$\mathsf{ACF}$</span> is NOT complete. Indeed, consider the sentence <span class="math-container">$(\forall x)(x+x=0)$</span>. Then, this sentence is modeled by <span class="math-container">$\overline{\mathbb{F}_2}$</span> but the negation is modeled by <span class="math-container">$\mathbb{C}$</span>. Somewhat amazingly, this type of sentence (specifying characteristic) is the ONLY obstruction to completeness.</p>
<p>To clarify this statement, let us define the following theories. For each prime <span class="math-container">$p$</span>, let <span class="math-container">$\mathsf{ACF}_p$</span> denote the theory <span class="math-container">$\mathcal{T}_\mathsf{ACF}$</span> with the extra sentence <span class="math-container">$(\forall x)(\underbrace{x+\cdots+x}_{p\text{ times}}=0)$</span> thrown in. Evidently the models of <span class="math-container">$\mathcal{T}_{\mathsf{ACF}_p}$</span> are just the algebraically closed fields of characteristic <span class="math-container">$p$</span>. Note though that we also want to define the theory <span class="math-container">$\mathsf{ACF}_0$</span> of algebraically closed fields of characteristic <span class="math-container">$0$</span>. Now, how we do this is VERY important. Instead of throwing in the sentence <span class="math-container">$(\forall x)(\underbrace{x+\cdots+x}_{p\text{ times}}=0)$</span>, we throw in its negation FOR ALL <span class="math-container">$p$</span>. In particular, we specify characteristic <span class="math-container">$0$</span> by specifying not characteristic <span class="math-container">$p$</span> for all <span class="math-container">$p$</span>; it takes infinitely many sentences to specify characteristic <span class="math-container">$0$</span>.</p>
<p>Now, my statement that the only thing stopping completeness for <span class="math-container">$\mathsf{ACF}$</span> is justified by the following:</p>
<blockquote>
<p><strong>Theorem:</strong> For every <span class="math-container">$p$</span> prime or <span class="math-container">$p=0$</span>, <span class="math-container">$\mathsf{ACF}_p$</span> is <span class="math-container">$\kappa$</span>-categorical for every uncountable cardinal <span class="math-container">$\kappa$</span>.</p>
</blockquote>
<p>This will imply by Vaught's test (since every algebraically closed field is infinite and the language of rings is countable) that each <span class="math-container">$\mathsf{ACF}_p$</span> is complete!</p>
<p>The proof is actually very simple.</p>
<p><em>Proof:</em> Suppose that <span class="math-container">$K,K'$</span> are two uncountable algebraically closed fields of the same characteristic. Suppose that <span class="math-container">$k$</span> is the prime subfield of <span class="math-container">$K,K'$</span> (i.e. <span class="math-container">$k=\mathbb{F}_p$</span> or <span class="math-container">$k=\mathbb{Q}$</span>). Then, by cardinality considerations we have that</p>
<p><span class="math-container">$$\text{tr.deg}_k K=\# K=\kappa=\# K'=\text{tr.deg}_k K'$$</span></p>
<p>Thus, we know that there are embeddings <span class="math-container">$K\hookleftarrow k(\{x_i\}_{i\leqslant\kappa})\hookrightarrow K'$</span> such that <span class="math-container">$K/k(\{x_i\})$</span> and <span class="math-container">$K'/k(\{x_i\})$</span> are both algebraic. But, since <span class="math-container">$K,K'$</span> are algebraically closed, we can conclude that <span class="math-container">$K$</span> and <span class="math-container">$K'$</span> must both be algebraic closures of <span class="math-container">$k(\{x_i\})$</span> and thus isomorphic. <span class="math-container">$\blacksquare$</span></p>
<p>Just to reiterate, this tells us, for example, that to check whether or not a sentence is true for all algebraically closed fields of characteristic <span class="math-container">$0$</span>, it suffices to prove it's true for <span class="math-container">$\mathbb{C}$</span> (this, in its various forms, is known as the <em>Lefschetz principle</em>).</p>
<p>Ok, with this, we can finally state the "big theorem" you were alluding to</p>
<blockquote>
<p><strong>Theorem:</strong> Let <span class="math-container">$\varphi$</span> be a sentence in <span class="math-container">$\mathcal{L}_\text{ring}$</span> then the following are equivalent</p>
<ol>
<li>The sentence is true for some algebraically closed field <span class="math-container">$K$</span> of characteristic <span class="math-container">$0$</span>.</li>
<li>The sentence is true for all algebraically closed fields of characteristic <span class="math-container">$0$</span>.</li>
<li>The sentence holds true for some algebraically closed field <span class="math-container">$K$</span> of characteristic <span class="math-container">$p$</span>, for arbitrarily large <span class="math-container">$p$</span>.</li>
<li>The sentence holds true for all algebraically closed fields of characteristic <span class="math-container">$p$</span> for arbitrarily large <span class="math-container">$p$</span>.</li>
</ol>
</blockquote>
<p><em>Proof:</em> The equivalence of 1 and 2, and the equivalence of 3 and 4, follow from the completeness of <span class="math-container">$\mathsf{ACF}_0$</span> and <span class="math-container">$\mathsf{ACF}_p$</span> respectively. To prove that 3 implies 1, we merely use compactness. Indeed, we want to show that <span class="math-container">$\mathcal{T}_{\mathsf{ACF}_0}\cup\{\varphi\}$</span> (where <span class="math-container">$\varphi$</span> is our sentence) has a model. But, by compactness, <span class="math-container">$\mathcal{T}_{\mathsf{ACF}_0}\cup\{\varphi\}$</span> will have a model if and only if every FINITE subset of <span class="math-container">$\mathcal{T}_{\mathsf{ACF}_0}\cup\{\varphi\}$</span> has a model. But, any finite subset <span class="math-container">$\Delta$</span> cannot contain the statements <span class="math-container">$\neg(\forall x)(\underbrace{x+\cdots+x}_{p\text{ times}}=0)$</span> for all <span class="math-container">$p$</span>; it contains them for only finitely many <span class="math-container">$p$</span>. In particular, choosing a prime <span class="math-container">$p_0$</span> larger than all of these and such that <span class="math-container">$\varphi$</span> has a model <span class="math-container">$M$</span> in <span class="math-container">$\mathsf{ACF}_{p_0}$</span> (which we can do by assumption 3), we can see that <span class="math-container">$M$</span> is a model of <span class="math-container">$\Delta$</span>. Thus (by compactness once again), we obtain a model of <span class="math-container">$\mathcal{T}_{\mathsf{ACF}_0}\cup\{\varphi\}$</span>, as desired. The implication from 1 to 3 follows by running the same argument with <span class="math-container">$\neg\varphi$</span>: if 3 failed, then <span class="math-container">$\neg\varphi$</span> would hold in algebraically closed fields of characteristic <span class="math-container">$p$</span> for all sufficiently large <span class="math-container">$p$</span>, and the argument above would then produce a model of <span class="math-container">$\mathcal{T}_{\mathsf{ACF}_0}\cup\{\neg\varphi\}$</span>, contradicting 2. <span class="math-container">$\blacksquare$</span></p>
<p>So, with this, we can make rigorous the proof of Ax-Grothendieck:</p>
<blockquote>
<p><strong>Theorem(Ax-Grothendieck):</strong> If <span class="math-container">$K$</span> is an algebraically closed field, every injective polynomial map <span class="math-container">$f:K^n\to K^n$</span> is surjective.</p>
</blockquote>
<p><em>Proof:</em> I leave it to you to show that the statement "every injective polynomial map <span class="math-container">$K^n\to K^n$</span> is surjective" can be phrased in the language of <span class="math-container">$\mathsf{ACF}$</span> (it's long and arduous, but elementary: one sentence for each dimension <span class="math-container">$n$</span> and degree bound <span class="math-container">$d$</span>, quantifying over the coefficients). Thus, by the above theorem, to prove this statement is true for all algebraically closed fields, it suffices to show that for each prime <span class="math-container">$p$</span>, there is SOME algebraically closed field <span class="math-container">$K_p$</span> such that the statement is true. Let's choose <span class="math-container">$K_p=\overline{\mathbb{F}_p}$</span>.</p>
<p>To prove the statement for <span class="math-container">$\overline{\mathbb{F}_p}$</span> we proceed as follows. Let <span class="math-container">$f:\overline{\mathbb{F}_p}^n\to\overline{\mathbb{F}_p}^n$</span> be injective. Choose any <span class="math-container">$\overline{b}\in \overline{\mathbb{F}_p}^n$</span>. Let <span class="math-container">$\ell$</span> be the field obtained by adjoining the coordinates of <span class="math-container">$\overline{b}$</span> and the coefficients of the polynomials defining <span class="math-container">$f$</span> to <span class="math-container">$\mathbb{F}_p$</span>. Observe then that <span class="math-container">$f$</span> restricts to an injective polynomial map <span class="math-container">$f:\ell^n\to\ell^n$</span>. But <span class="math-container">$\ell$</span> is a finitely generated algebraic extension of <span class="math-container">$\mathbb{F}_p$</span>, hence a finite field, so the injective self-map <span class="math-container">$f:\ell^n\to\ell^n$</span> of a finite set must be surjective, and thus <span class="math-container">$\overline{b}$</span> is in the image of <span class="math-container">$f$</span> as desired. The Ax-Grothendieck theorem follows. <span class="math-container">$\blacksquare$</span></p>
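<p>The pigeonhole step at the heart of the finite case is easy to see computationally. Here is a small sketch of my own (the particular polynomial map over <span class="math-container">$\mathbb{F}_5$</span> is an arbitrary choice):</p>
<pre><code># On a finite set, a self-map is injective iff it is surjective (pigeonhole).
p = 5
pts = [(x, y) for x in range(p) for y in range(p)]
f = lambda x, y: ((x**2 + 3 * y) % p, (x + y**3) % p)  # arbitrary polynomial map
image = {f(x, y) for x, y in pts}
injective = len(image) == p**2
surjective = image == set(pts)
assert injective == surjective  # equivalent for maps of a finite set to itself
</code></pre>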
<p>Let me point out something somewhat miraculous about the above proof. The most obvious "Wow!" factor came from our ability to prove a statement in characteristic <span class="math-container">$0$</span> by working solely in characteristic <span class="math-container">$p$</span>, but to me there is something equally amazing happening. Our proof in the case of <span class="math-container">$K=\overline{\mathbb{F}_p}$</span> COMPLETELY relied on the fact that <span class="math-container">$K/\mathbb{F}_p$</span> was algebraic. It would have failed for any other algebraically closed field of characteristic <span class="math-container">$p$</span> (because it would no longer be algebraic over <span class="math-container">$\mathbb{F}_p$</span>). We were only able to conclude the result for the other algebraically closed fields of characteristic <span class="math-container">$p$</span> by compactness. This, to me, seems like magic.</p>
<hr />
<p>The thing to notice, for me, about the above is how everything even Lowenheim-Skolem (although, only the upward part) followed formally from the compactness theorem. It really is quite astounding. I mean, it's so formal, and dare I say trivial, that even someone like me (who knows almost no model theory) can deduce it.</p>
<p>Something else to keep in mind about the above is the limited range of its powers. Upon first glance, one might expect the techniques above to revolutionize mathematics. I mean, it seems like such a profitable course of action to prove things about fields/algebro geometric things, by reducing it to the finite field case. The problem with this is simple--many of the statements we care about as algebraic geometers are outside the purview of first order logic (or at least phrasing them would be nightmarishly difficult). Try stating Riemann-Roch in the language of <span class="math-container">$\mathsf{ACF}$</span>. So, while the above is powerful, and there are some deep philosophical consequences, it is not the end-all-be-all of mathematical theorems.</p>
<p>It is worth noting that if there were a subject over which model theory ought to have strong power, it's the algebraic geometry of algebraically closed fields. There, not only do we have a well-behaved theory (e.g. <span class="math-container">$\mathsf{ACF}_p$</span> is complete, it has quantifier elimination, etc.) but the structure morphisms are polynomial maps--they are definable.</p>
<p>This, by the way, is how I feel about much of model theory (from my very uneducated point of view). It seems like a philosophically powerful, but practically ineffective theory. Besides the work of those like Hrushovski, I haven't heard of model theory being a sledgehammer in more "mainstream" mathematics. Just a thought.</p>
| <p>Please see the Wikipedia article on the <a href="http://en.wikipedia.org/wiki/Ax%E2%80%93Grothendieck_theorem">Ax-Grothendieck Theorem.</a></p>
<p><strong>Remark:</strong> This is really a comment, but I would not like to see the result disappear from MSE for lack of an answer. </p>
|
differentiation | <p>I've been thinking about this problem: Let $f: (a, +\infty) \to \mathbb{R}$ be a differentiable function such that $\lim\limits_{x \to +\infty} f(x) = L < \infty$. Then must it be the case that $\lim\limits_{x\to +\infty}f'(x) = 0$?</p>
<p>It looks like it's true, but I haven't managed to work out a proof. I came up with this, but it's pretty sketchy:</p>
<p>$$
\begin{align}
\lim_{x \to +\infty} f'(x) &= \lim_{x \to +\infty} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\
&= \lim_{h \to 0} \lim_{x \to +\infty} \frac{f(x+h)-f(x)}{h} \\
&= \lim_{h \to 0} \frac1{h} \lim_{x \to +\infty}[f(x+h)-f(x)] \\
&= \lim_{h \to 0} \frac1{h}(L-L) \\
&= \lim_{h \to 0} \frac{0}{h} \\
&= 0
\end{align}
$$</p>
<p>In particular, I don't think I can swap the order of the limits just like that. Is this correct, and if it isn't, how can we prove the statement? I know there is <a href="https://math.stackexchange.com/questions/42277/limit-of-the-derivative-of-a-function-as-x-goes-to-infinity">a similar question</a> already, but I think this is different in two aspects. First, that question assumes that $\lim\limits_{x \to +\infty}f'(x)$ exists, which I don't. Second, I also wanted to know if interchanging limits is a valid operation in this case.</p>
| <p>The answer is: No. Consider $f(x)=x^{-1}\sin(x^3)$ on $x\gt0$. The derivative $f'(x)$ oscillates between roughly $+3x$ and $-3x$ hence $\liminf\limits_{x\to+\infty}\,f'(x)=-\infty$ and $\limsup\limits_{x\to+\infty}\,f'(x)=+\infty$.</p>
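<p>A few sample points make the failure concrete (a throwaway sketch of my own): at $x$ with $x^3=2\pi k$ we have $f(x)\approx 0$ while $f'(x)\approx 3x$, which grows without bound.</p>
<pre><code># f(x) = sin(x^3)/x tends to 0, yet f'(x) = 3x cos(x^3) - sin(x^3)/x^2 blows up.
import math

f = lambda x: math.sin(x**3) / x
fp = lambda x: 3 * x * math.cos(x**3) - math.sin(x**3) / x**2
for k in (10, 1000, 100000):
    x = (2 * math.pi * k) ** (1 / 3)  # here cos(x^3) = 1 and sin(x^3) = 0
    print(x, f(x), fp(x))             # f shrinks toward 0, fp is about 3x
</code></pre>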
| <p>There is a famous theorem known as Barbalat's lemma, which gives a sufficient additional condition for $\lim_{x \to \infty} f'(x) = 0$: the lemma requires $f'(x)$ to be uniformly continuous on $[a, \infty)$. In many applications, the uniform continuity of $f'(x)$ is shown by proving that $f''(x)$ exists and is bounded on $[a, \infty)$.</p>
<p>(See Wikipedia <a href="https://en.wikipedia.org/wiki/Lyapunov_stability#Barbalat.27s_lemma_and_stability_of_time-varying_systems" rel="noreferrer">https://en.wikipedia.org/wiki/Lyapunov_stability#Barbalat.27s_lemma_and_stability_of_time-varying_systems</a> for the statement of Barbalat's lemma and its applications in stability analysis).</p>
|
combinatorics | <p>How many <strong>positive integers</strong> less than $1{,}000{,}000$ have the digit $2$ in them? </p>
<p>I could determine it by summing it in terms of the number of decimal places, i.e. between $999{,}999$ and $100{,}000$, etc.</p>
<p>Then I figured that the number of numbers between $100{,}000$ and $999{,}999$ that have the digit $2$ in them would be $9^5$.</p>
<p>Is this correct, or am I miscounting?</p>
| <p>Though it is not always the smartest way, such questions can be answered mechanically as follows. (In this case the "smart" way to do it is Cameron's answer. It is instructive to see that this mechanical procedure basically recovers Cameron's method.) Let $a_n$ and $b_n$ be the numbers of $n$-digit strings (leading zeros allowed) that do not and do have a $2$ in them. So $a_0=1$ and $b_0=0$. These numbers satisfy the recurrence
$$
\begin{pmatrix}a_{n+1}\\b_{n+1}\end{pmatrix}=
\begin{pmatrix}9&0\\1&10\end{pmatrix}
\begin{pmatrix}a_n\\b_n\end{pmatrix}
$$</p>
<p>(Take a moment to understand what this recurrence expresses.) Now
$$
\begin{pmatrix}9&0\\1&10\end{pmatrix}^6 \begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}531441\\468559\end{pmatrix}
$$</p>
<p>so the answer is $468559=10^6-9^6$.</p>
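<p>The recurrence is also trivial to iterate directly (a quick sketch, no matrix library needed):</p>
<pre><code># a = strings with no '2', b = strings containing at least one '2'.
a, b = 1, 0                   # one empty string, containing no '2'
for _ in range(6):
    # a': append any of 9 non-'2' digits; b': append '2' to a no-'2'
    # string, or append any of 10 digits to a string that has a '2'.
    a, b = 9 * a, a + 10 * b
print(a, b)                   # 531441 468559, i.e. 9**6 and 10**6 - 9**6
</code></pre>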
| <p>The number of numbers from $1$ to $10^6$ that do <strong>not</strong> have the digit $2$ is clearly the same as the number of numbers that do not have the digit $9$. Now read each of these in base $9$ and you get <strong>all</strong> the numbers from 1 to $10^6$ (base 9) $=9^6$ (base 10). Therefore, there are $10^6-9^6$ numbers between $1$ and $10^6$ that use the digit $2$.</p>
|
game-theory | <p>I'm playing this game with children and I'm ready to stab my eyes with an ice pick.
It seems like it never ends, but I know its expected length must be finite. What is my expected number of spins to remove all the fruit from the tree?</p>
<p>Goal: To remove 14 cherries from tree by executing one of following seven directions at random per turn.</p>
<pre><code> 1. Remove 1 cherry.
2. Remove 2 cherries.
3. Remove 3 cherries.
4. Remove 4 cherries.
5. Return 1 cherry to tree.
6. Return 2 cherries to tree.
7. Return all your cherries to tree.
</code></pre>
<p>Once I realized I have a 1/7 chance each turn of playing this game in perpetuity, I started reaching for the kitchen drawer.</p>
| <p>I actually spent some time about a year ago doing some computations for a variant of this game, sold as <a href="http://en.wikipedia.org/wiki/Hi_Ho!_Cherry-O" rel="noreferrer">Hi-Ho Cherry-O</a>. It's identical to your game, except with 10 cherries instead of 14. (I learned about it from a colleague with a 4-year-old daughter.)</p>
<p>The computation is a nice example of some simple Markov chain techniques, which produce linear equations of the sort in Brett Frankel's answer. I considered the cases of 1 to 4 players, which are amenable to computer solution.</p>
<p>Another interesting feature is that since the players take turns, the first player has a slight advantage. </p>
<p>Here are the results I got for 10 cherries. If you are really interested, I can try and reconstruct my code and run the 14 cherry case.</p>
<pre>
1 player game:
Expected length: 15.8019792994073 rounds
2 player game:
Expected number of rounds: 9.58554137805221
P(player 1 wins) = 0.518720469382215
P(player 2 wins) = 0.481279530617784
Expected number of turns = 18.6523622867222
3 player game:
Expected number of rounds: 7.49668096168849
P(player 1 wins) = 0.357756582790784
P(player 2 wins) = 0.332728455615310
P(player 3 wins) = 0.309514961593905
Expected number of turns: 21.4418012638686
4 player game:
Expected number of rounds: 6.44149249272987
P(player 1 wins) = 0.276928283784381
P(player 2 wins) = 0.258099951775544
P(player 3 wins) = 0.240610168544412
P(player 4 wins) = 0.224361595895655
Expected number of turns: 24.1783750474708
</pre>
<p><strong>Edit</strong>: I should also mention some <a href="http://www.math.byu.edu/~jeffh/mathematics/games/cherry/cherry.html" rel="noreferrer">previous work</a> by Jeffrey Humpherys.</p>
| <p>You can solve via a system of 14 linear equations:
Let $E_n$ be the expected number of turns remaining until the game is over when there are currently $n$ cherries on the tree.
For example, $$E_1=\frac{4}{7}\cdot 1+\frac{1}{7}(1+E_2)+\frac{1}{7}(1+E_3)+\frac{1}{7}(1+E_{14})$$</p>
<p>$$E_{14}=\frac{1}{7}(1+E_{13})+\frac{1}{7}(1+E_{12})+\frac{1}{7}(1+E_{11})+\frac{1}{7}(1+E_{10})+\frac{3}{7}(1+E_{14})$$</p>
<p>By the time you finish writing down all 14 equations and solving, the game may well be over. (Then again, I expect the answer will be quite large).</p>
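<p>Actually, a computer makes short work of the system. Here is a minimal Python sketch for the single-player chain, assuming the same conventions as the equations above (a spin that would remove more cherries than remain ends the game, and returns are capped at a full tree):</p>

<pre><code>import numpy as np

C = 14                        # cherries on a full tree
P = np.zeros((C, C))          # transitions among the transient states 1..C
for n in range(1, C + 1):     # n = cherries currently on the tree
    nxt = [max(n - k, 0) for k in (1, 2, 3, 4)]   # remove 1..4; reaching 0 ends the game
    nxt += [min(n + 1, C), min(n + 2, C), C]      # return 1, return 2, return all
    for m in nxt:
        if m > 0:
            P[n - 1, m - 1] += 1 / 7
# E_n = 1 + sum_m P[n, m] E_m, i.e. (I - P) E = 1
E = np.linalg.solve(np.eye(C) - P, np.ones(C))
print(E[C - 1])               # expected number of spins from a full tree
</code></pre>

<p>Setting <code>C = 10</code> should reproduce the single-player figure in the other answer, under the same conventions.</p>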
|
combinatorics | <p>It occurred to me that when you perform division in some algebraic system, such as $\frac a b = c$ in $\mathbb R$, the division itself represents a relation of sorts between $a$ and $b$, and once you calculate this relation, the resulting element $c$, being 'merely' the relation or some kind of representation of it, has lost the information about what either $a$ or $b$ may have been.</p>
<p>So division destroys or weakens information. Other operations have similar peculiarities. Multiplication such as $a b = c$ is very 'lossy' in $\mathbb R$, but not as lossy in $\mathbb N$ since the set of possible divisors of $c$ is finite.</p>
<p>So my question is, are there any formalizations which account for (or may be able to account for) this particular aspect of mathematical operations (or functions/relations in general)?</p>
| <p>You are asking, if I understand right, how much information is lost during the mapping from two elements to another symbol indicating the relation. In your example, the division function maps $a,b$ to $\frac ab$, and you choose $c$ to represent the relation of $a,b$.</p>
<p>Unfortunately, we can say nothing about the loss of information for individual elements, because by themselves they contain no information. In the area of information theory, we discuss information only for possibilities. Possibility means uncertainty. Uncertainty implies information. That is how Boltzmann defines entropy as a measure for information.
$$S=k_B\ln\Omega$$
where $\Omega$ is the total number of possibilities, $k_B$ is a constant. The point is, rather than measuring the loss of information of elements, which is the only one of possibilities, we can measure that for the sets in which elements belong to. The <strong>Sets</strong> here are, of course, domain and range of the function.</p>
<hr>
<p>Following the idea above, we introduce some notation. The mapping $\Phi$ has domain $\mathcal D$ and range $\mathcal R$. The size of a set $A$ is denoted by $|A|$ and the information it contains by $S_A$. Further assume both $\mathcal D$ and $\mathcal R$ are finite. We define the <strong>information</strong> contained in $\mathcal D$ as the logarithm of the total number of possibilities of $\mathcal D$, namely $$S_{\mathcal D}:=\ln|\mathcal D|$$
When $\Phi=\Phi(x_1,\ldots,x_n)$ is an $n$-variable function with $x_i\in X_i$, the domain $\mathcal D=X_1\times\cdots\times X_n$ is the cartesian product of the individual domains, in which case $|\mathcal D|=|X_1|\cdots|X_n|$. On the other hand, we do not want to define $S_\mathcal R=\ln|\mathcal R|$, because $\mathcal R$ is not independent of $\mathcal D$: the two are connected by the function $\Phi$. Thus we denote $n_y=|\Phi^{-1}(y)|$ and $\displaystyle p_y=\frac{n_y}{\sum_{y\in\mathcal R}n_y}=\frac{n_y}{|\mathcal D|}$ for each $y\in\mathcal R$, where $\Phi^{-1}(y)\subseteq\mathcal D$ is the preimage of $y$. Now the <strong>information</strong> of $\mathcal R$ with respect to $\Phi$ is defined as
$$S_\mathcal R:=\sum_{y\in\mathcal R}p_y\ln\frac{1}{p_y}$$
Hence the loss of information
$$\Delta S=S_\mathcal D-S_\mathcal R\geq0$$
The inequality follows because the maximum of the concave function $S_\mathcal R$ is $S_\mathcal D$, attained when $n_y=1$ for each $y$. In other words, the information is preserved if and only if the function is one-to-one. This definition is consistent with both the definition of entropy in information theory and, most importantly, our intuition of loss of information.</p>
<p>We can easily generalize this method to cases where both domain and range are compact, simply by replacing summation with integration and size with measure.</p>
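<p>To make the definitions concrete, here is a toy computation (my own illustration, not part of the argument) of $\Delta S$ for multiplication $\Phi(a,b)=ab$ on the finite grid $\{0,\ldots,9\}^2$:</p>

<pre><code>import math
from collections import Counter

D = [(a, b) for a in range(10) for b in range(10)]   # the domain
n = Counter(a * b for a, b in D)                     # n_y = size of the preimage of y
S_D = math.log(len(D))                               # information of the domain
S_R = sum(k / len(D) * math.log(len(D) / k) for k in n.values())
print(S_D - S_R)                                     # Delta S >= 0, the lost information
</code></pre>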
<hr>
<p>As for other cases, this method doesn't work any more but we still have some ways to roughly approximate the loss.</p>
<p>Note that the function $\Phi$ itself defines an equivalence relation $\sim$ and we can further define the quotient space
$$\mathcal Q=\mathcal D/\sim$$
and
$$\Delta S=\dim\mathcal D-\dim\mathcal Q$$
This also has counterparts in many areas such as linear algebra, group theory, topology, etc. Sometimes we call the space associated with $\sim$ the kernel. When the kernel is the zero space, we say the two spaces are isomorphic.</p>
<hr>
<p>Finally, to give an explanation for your example $c=\Phi(a,b)=ab$: if $a,b\in\mathbb Q$, we have $\Delta S^\mathbb Q=\dim_\mathbb Q(\mathbb Q\times\mathbb Q)-\dim_\mathbb Q(\mathbb Q)=2-1=1$, while if $a,b\in\mathbb R$, we have $\Delta S^\mathbb R=1$. Although both values equal $1$, since the cardinality of the one-dimensional rational line is smaller than that of the real line, the information loss from rational multiplication is less than that from real multiplication.</p>
<p>I hope this will help even if this may not be what you want.</p>
| <p>Your particular example of information being lost in multiplication or division is really the reason one studies ideals in ring theory, normal subgroups in group theory, and kernels in general. But I think prime ideals explain it best.</p>
<p>The prime factorization of a number represents how much information was lost in creating it. A prime number only has one factorization, so if it is a product, we know the factors. The more prime factors an element has, the more information is lost in creating it.</p>
<p>Mathematicians generalized this to 'prime ideals' where a class of products (called an ideal) has information loss measured by the number of of prime ideals in its decomposition.</p>
<p>Fields have the greatest information loss, as every element can occur from a product with any given factors.</p>
<p>Ideals can also be used in another way to destroy information. Quotienting by an ideal causes you to lose information in proportion to the size of the ideal. Quotienting the integers by the ideal of multiples of 10 tells you the last digit of a number, while quotienting by multiples of 2 only tells you if a number is odd or even.</p>
|
game-theory | <p>Two players alternate placing kings on a <span class="math-container">$6\times6$</span> chessboard, such that no two kings are allowed to attack each other (not even two kings placed by the same player). The last person who can place a king wins. Which player has a winning strategy?</p>
<p>Recall that, in chess, a king attacks the eight neighboring squares. (Incidentally, this problem is the same as placing <span class="math-container">$2\times2$</span> nonoverlapping boxes on a <span class="math-container">$7\times7$</span> grid.)</p>
<p>For an odd-sized board, the first player has a winning strategy: go in the middle and then mirror the other person's moves. For a <span class="math-container">$2\times2$</span> the first player wins trivially; for a <span class="math-container">$4\times 4$</span> the second player wins regardless of strategy (it will always end after four kings are placed). <span class="math-container">$6\times6$</span> is the first nontrivial case: while there can be at most nine kings on the board at once, the game could theoretically end after as few as four moves.</p>
<p>Ideally I'd like to know the answer for a <em>general</em> <span class="math-container">$n\times n$</span> board, but I figured I'd start small and work my way up. <span class="math-container">$6\times6$</span> proved trickier than I had anticipated.</p>
| <p>I wrote <a href="https://replit.com/@harrypotter5771/WiltedOnerlookedStruct#main.py" rel="noreferrer">some code</a> to bash the nim values; a <span class="math-container">$6\times 6$</span> grid is a win for the first player with nim value <span class="math-container">$1$</span>, by playing in any of the purple squares below:</p>
<p> <a href="https://i.sstatic.net/zhWWU.png" rel="noreferrer"><img src="https://i.sstatic.net/zhWWU.png" alt="enter image description here" /></a></p>
<p>If the first player plays in the middle of a <span class="math-container">$3\times 3$</span> quadrant, the following diagram shows their winning responses to any of the second player's replies:</p>
<p><a href="https://i.sstatic.net/4Lh60.png" rel="noreferrer"><img src="https://i.sstatic.net/4Lh60.png" alt="enter image description here" /></a></p>
| <p><a href="https://gist.github.com/joriki/82df06ead3f464c76138d2faa5abe7e2" rel="noreferrer">Here’s Java code</a> that computes the value of the game on the <span class="math-container">$n\times n$</span> board up to <span class="math-container">$n=8$</span>. The results for <span class="math-container">$n=6$</span> coincide with those in RavenclawPrefect’s answer. For <span class="math-container">$n=8$</span>, the first player loses whatever they play.</p>
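<p>For readers who would rather not dig through the linked programs, the win/loss values (though not the nim values) can be brute-forced in a few lines of Python (a sketch of mine, not the code linked above):</p>

<pre><code>from functools import lru_cache

N = 4   # board size; N = 6 also works, but takes noticeably longer

def attack(p, q):
    # kings on squares p and q (row-major indices) attack each other
    return abs(p // N - q // N) <= 1 and abs(p % N - q % N) <= 1

@lru_cache(maxsize=None)
def win(placed):
    moves = [s for s in range(N * N)
             if all(not attack(s, p) for p in placed)]
    # a position is a win iff some move leaves the opponent in a losing position
    return any(not win(tuple(sorted(placed + (s,)))) for s in moves)

print(win(()))   # False for N = 4; the answers above report True for N = 6
</code></pre>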
|
linear-algebra | <p>When I first took linear algebra, we never learned about dual spaces. Today in lecture we discussed them and I understand what they are, but I don't really understand why we want to study them within linear algebra.</p>
<p>I was wondering if anyone knew a nice intuitive motivation for the study of dual spaces and whether or not they "show up" as often as other concepts in linear algebra? Is their usefulness something that just becomes more apparent as you learn more math and see them arise in different settings?</p>
<hr>
<h3>Edit</h3>
<p>I understand that dual spaces show up in functional analysis and multilinear algebra, but I still don't really understand the intuition/motivation behind their definition in the standard topics covered in a linear algebra course. (Hopefully, this clarifies my question)</p>
| <p>Let <span class="math-container">$V$</span> be a vector space (over any field, but we can take it to be <span class="math-container">$\mathbb R$</span> if you like,
and for concreteness I will take the field to be <span class="math-container">$\mathbb R$</span> from now on;
everything is just as interesting in that case). Certainly one of the interesting concepts
in linear algebra is that of a <em>hyperplane</em> in <span class="math-container">$V$</span>.</p>
<p>For example, if <span class="math-container">$V = \mathbb R^n$</span>, then a hyperplane is just the solution set to an equation
of the form
<span class="math-container">$$a_1 x_1 + \cdots + a_n x_n = b,$$</span>
for some <span class="math-container">$a_i$</span> not all zero and some <span class="math-container">$b$</span>.
Recall that solving such equations (or simultaneous sets of such equations) is one
of the basic motivations for developing linear algebra.</p>
<p>Now remember that when a vector space is not given to you as <span class="math-container">$\mathbb R^n$</span>,
it doesn't normally have a canonical basis, so we don't have a canonical way
to write its elements down via coordinates, and so we can't describe hyperplanes
by explicit equations like above. (Or better, we can, but only after choosing
coordinates, and this is not canonical.)</p>
<p>How can we canonically describe hyperplanes in <span class="math-container">$V$</span>?</p>
<p>For this we need a conceptual interpretation of the above equation. And here linear
functionals come to the rescue. More precisely, the map</p>
<p><span class="math-container">$$\begin{align*}
\ell: \mathbb{R}^n &\rightarrow \mathbb{R} \\
(x_1,\ldots,x_n) &\mapsto a_1 x_1 + \cdots + a_n x_n
\end{align*}$$</span></p>
<p>is a linear functional on <span class="math-container">$\mathbb R^n$</span>, and so the above equation for the
hyperplane can be written as
<span class="math-container">$$\ell(v) = b,$$</span>
where <span class="math-container">$v = (x_1,\ldots,x_n).$</span></p>
<p>More generally, if <span class="math-container">$V$</span> is any vector space, and <span class="math-container">$\ell: V \to \mathbb R$</span> is any
non-zero linear functional (i.e. non-zero element of the dual space), then
for any <span class="math-container">$b \in \mathbb R,$</span> the set</p>
<p><span class="math-container">$$\{v \, | \, \ell(v) = b\}$$</span></p>
<p>is a hyperplane in <span class="math-container">$V$</span>, and all hyperplanes in <span class="math-container">$V$</span> arise this way.</p>
<p>So this gives a reasonable justification for introducing the elements of the dual
space to <span class="math-container">$V$</span>; they generalize the notion of linear equation in several variables
from the case of <span class="math-container">$\mathbb R^n$</span> to the case of an arbitrary vector space.</p>
<p>Now you might ask: why do we make them a vector space themselves? Why do we want
to add them to one another, or multiply them by scalars?</p>
<p>There are lots of reasons for this; here is one: Remember how important it is,
when you solve systems of linear equations, to add equations together, or
to multiply them by scalars (here I am referring to all the steps you typically
make when performing Gaussian elimination on a collection of simultaneous linear
equations)? Well, under the dictionary above between linear equations
and linear functionals, these processes correspond precisely to adding together
linear functionals, or multiplying them by scalars. If you ponder this for a bit,
you can hopefully convince yourself that making the set of linear
functionals a vector space is a pretty natural thing to do.</p>
<p>Summary: just as concrete vectors <span class="math-container">$(x_1,\ldots,x_n) \in \mathbb R^n$</span> are naturally
generalized to elements of vector spaces, concrete linear expressions
<span class="math-container">$a_1 x_1 + \ldots + a_n x_n$</span> in <span class="math-container">$x_1,\ldots, x_n$</span> are naturally generalized to linear functionals.</p>
| <p>Since there is no answer giving the following point of view, I'll allow myself to resuscitate the post.</p>
<p>The dual is intuitively the space of "rulers" (or measurement-instruments) of our vector space. Its elements measure vectors. This is what makes the dual space and its relatives so important in Differential Geometry, for instance. This immediately motivates the study of the dual space. For motivations in other areas, the other answers are quite well-versed.</p>
<p>This also happens to explain intuitively some facts. For instance, the fact that there is no canonical isomorphism between a vector space and its dual can then be seen as a consequence of the fact that rulers need scaling, and there is no canonical way to provide one scaling for space. However, if we were to <strong>measure the measure-instruments</strong>, how could we proceed? Is there a canonical way to do so? Well, if we want to measure our measures, why not measure them by how they act on what they are supposed to measure? We need no bases for that. This justifies intuitively why there is a natural embedding of the space on its bidual. (Note, however, that this fails to justify why it is an isomorphism in the finite-dimensional case).</p>
|
logic | <p>You are a student, assigned to work in the cafeteria today, and it is your duty to divide the available food between all students. The food today is a sausage of 1m length, and you need to cut it into as many pieces as students come for lunch, including yourself.</p>
<p>The problem is, the knife is operated by the rotating door through which the students enter, so every time a student comes in, the knife comes down and you place the cut. There is no way for you to know if more students will come or not, so after each cut, the sausage should be cut into pieces of approximately equal length. </p>
<p>So here the question - is it possible to place the cuts in a manner to ensure the ratio of the largest and the smallest piece is always below 2?</p>
<p>And if so, what is the smallest possible ratio?</p>
<p>Example 1 (unit is cm):</p>
<ul>
<li>1st cut: 50 : 50 ratio: 1 </li>
<li>2nd cut: 50 : 25 : 25 ratio: 2 - bad</li>
</ul>
<p>Example 2</p>
<ul>
<li>1st cut: 40 : 60 ratio: 1.5</li>
<li>2nd cut: 40 : 30 : 30 ratio: 1.33</li>
<li>3rd cut: 20 : 20 : 30 : 30 ratio: 1.5</li>
<li>4th cut: 20 : 20 : 30 : 15 : 15 ratio: 2 - bad</li>
</ul>
<p>Sorry for the awful analogy, I think this is a math problem but I have no real idea how to formulate this in a proper mathematical way.</p>
| <p>TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution.</p>
<p>This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.</p>
<p>Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints:</p>
<p>$$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n<2a_{2n-1}$$</p>
<p>I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well).</p>
<p>First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_n\in (1/2,1)$ then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$.</p>
<p>Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence</p>
<p>$$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does <em>not</em> tend to a limit - it varies between $1$ and $2$.</p>
<p>If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.</p>
<p>There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2 (3/2)]$ and scaled down on $[\log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$.</p>
<p>Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced$\bmod 1$), so we must set them to
$$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$</p>
<p>When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable:
\begin{align}
a_1&=\log_2(1+1/1)=1\\
a_{2n}+a_{2n+1}&=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\
&=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\
&=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\
&=\log_2\left[1+\frac1n\right]=a_n.
\end{align}</p>
<p>As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
\begin{align}
2a_m&=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\
&\ge\log_2\Big(1+\frac2m\Big)>\log_2\Big(1+\frac2{2n}\Big)=a_n,
\end{align}</p>
<p>so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $\frac 1{\log 2}=\log_2e$. It is also worth noting that the difference between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.</p>
<p>It looks like this (rounded to the nearest hundredth, so the numbers may not add to 100 exactly; a short script reproducing these rows follows the list):</p>
<ul>
<li>$58:42$, ratio = $1.41$</li>
<li>$42:32:26$, ratio = $1.58$</li>
<li>$32:26:22:19$, ratio = $1.67$</li>
<li>$26:22:19:17:15$, ratio = $1.73$</li>
<li>$22:19:17:15:14:13$, ratio = $1.77$</li>
</ul>
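<p>Here is the promised script; it reproduces the rows above from $a_k=\log_2(1+1/k)$:</p>

<pre><code>import math

for m in range(2, 7):         # m pieces on the table: a_m, ..., a_{2m-1}
    a = [math.log2(1 + 1 / k) for k in range(m, 2 * m)]
    print([round(100 * x) for x in a], "ratio =", round(a[0] / a[-1], 2))
</code></pre>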
<p>If you are working with a sequence of points treated$\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:</p>
<p> <a href="https://i.sstatic.net/SCaaE.gif" rel="noreferrer"><img src="https://i.sstatic.net/SCaaE.gif" alt="sausages"></a></p>
<p>A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.</p>
| <p>YES, it is possible!</p>
<p>You mustn't cut a piece in half, because eventually you have to cut one of them, and then you violate the requirement.
So in fact, you must never have two equal parts.
Make the first cut so that the condition is not violated, say $60:40$. </p>
<p>From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.)
We construct a good cut that maintains this property.</p>
<p>So at the next turn, pick the biggest piece, and cut it in two non-equal pieces in an $a:b$ ratio, but very close to equal (so $a/b\approx 1$). All you have to make sure of is that </p>
<ul>
<li>$a/b$ is so close to $1$ that the two new pieces are both smaller that the smallest piece in the last round. </li>
<li>$a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round). </li>
</ul>
<p>Then the condition is preserved.
For example, from $60:40$ you can move to $25:35:40$, then cut the forty to obtain $19:21:25:35$, etc.</p>
|
geometry | <p>Given</p>
<ol>
<li>A straight line of arbitrary length</li>
<li>The ability to construct a straight line in any direction from any starting point with the "unit length", i.e. the length whose magnitude equals its own square root.</li>
</ol>
<p>Is there a way to geometrically construct (using only a compass and straightedge) a line whose length is the square root of the arbitrary-lengthed line? What is the mathematical basis?</p>
<p>Also, why can't this be done without the unit line length?</p>
| <p>If you have a segment $AB$, place the unit length segment on the line where $AB$ lies, starting at $A$ and in the direction opposite to $B$; let $C$ be the other endpoint of that segment. Now draw a semicircle with diameter $BC$ and the perpendicular to $BC$ through $A$; this line crosses the semicircle in a point $D$. Now $AD$ is the square root of $AB$. </p>
<p>$\triangle BCD$ is a right triangle, like $\triangle ACD$ and $\triangle ABD$; all of these are similar, so you find out that $AC/AD = AD/AB$. But $AC=1$, so $AD = \sqrt{AB}$.</p>
<p>See the drawing below:</p>
<p><img src="https://i.sstatic.net/bAuQy.png" alt="constructing square root of a line segment"></p>
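<p>If you want a quick numeric sanity check of the construction (coordinates as in the drawing, with $A$ at the origin and $C$ at $-1$):</p>

<pre><code>import math

x = 5.0                           # the given length AB
m, r = (x - 1) / 2, (x + 1) / 2   # center and radius of the semicircle on BC
AD = math.sqrt(r ** 2 - m ** 2)   # height of the circle directly above A
print(AD, math.sqrt(x))           # both equal sqrt(x)
</code></pre>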
| <p>Without the unit-length segment--that is, without something to compare the first segment to--its length is entirely arbitrary, so can't be valued, so there's no value of which to take the square root.</p>
<p>Let the given segment (with length $x$) be $AB$ and let point $C$ be on ray $AB$ such that $BC = 1$. Construct the midpoint $M$ of segment $AC$, construct the circle with center $M$ passing through $A$, construct the line perpendicular to $AB$ through $B$, and let $D$ be one of the intersections of that line with the circle centered at $M$ (call the other intersection $E$). Then $BD = \sqrt{x}$.</p>

<p>$AC$ and $DE$ are chords of the circle intersecting at $B$, so by the power of a point theorem, $AB \cdot BC = DB \cdot BE$, so $x \cdot 1 = x = DB \cdot BE$. Since $DE$ is perpendicular to $AC$ and $AC$ is a diameter of the circle, $AC$ bisects $DE$ and $DB = BE$, so $x = DB^2$ or $DB = \sqrt{x}$.</p>

<p><em><strong>edit</strong></em>: this is a special case of the more general geometric-mean construction. Given two lengths $AB$ and $BC$ (arranged as above), the above construction produces the length $BD = \sqrt{AB \cdot BC}$, which is the geometric mean of $AB$ and $BC$.</p>
|
probability | <p>What are some good books on probability and measure theory?
I already know basic probability, but I'm interested in sigma-algebras, filtrations, stopping times etc., with possibly examples of "real life" situations where they would be used</p>
<p>thanks</p>
| <p>I'd recommend Klenke's <a href="http://books.google.ch/books?id=tcm3y5UJxDsC&printsec=frontcover&hl=de#v=onepage&q&f=false"><em>Probability Theory</em></a>.</p>
<p>It gives a good overview of the basic ideas in probability theory. In the beginning it builds up the basics of measure theory and set functions.</p>
<p>There are also some examples of applications of probability theory.</p>
| <p>I think Chung's <a href="http://rads.stackoverflow.com/amzn/click/0121741516" rel="noreferrer">A Course in Probability Theory</a> is a good one that is rigorous. Also Sid Resnick's <a href="http://rads.stackoverflow.com/amzn/click/081764055X" rel="noreferrer">A Probability Path</a> is advanced but easy to read.</p>
|
combinatorics | <p>Doing a little reading over the break (<em>The Probabilistic Method</em> by Alon and Spencer); can't come up with the solution for this seemingly simple (and perhaps even a little surprising?) result:</p>
<blockquote>
<p>(A-S 1.6.3) Prove that for every two independent identically distributed real random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>,
<span class="math-container">$$Pr[|X-Y| \leq 2] \leq 3 Pr[|X-Y| \leq 1].$$</span></p>
</blockquote>
| <p>You may read the paper "<a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.8591" rel="nofollow noreferrer">The 123 theorem and its extensions</a>" by Noga Alon and Raphael Yuster.</p>
| <p>I can prove it for the case when $Z = |X-Y|$ takes only integer values. </p>
<p>Let $q_i = P(Z=i)$ for $i=0,1,\dots$. Then, we need to show that $\frac{q_0+q_1}{q_0+q_1+q_2} \geq \frac{1}{3}$. This follows from the observation that $2q_0 \geq q_i$ for all $i$, which in turn follows from the Cauchy–Schwarz inequality. Then,</p>
<p>$$3(q_0+q_1) \geq q_0+q_1+q_2 \iff 2(q_0+q_1) \geq q_2,$$</p>
<p>which is true since $2q_0 \geq q_2$. Thanks to iMath for this last observation.</p>
<p>In the case of $Z$ being real, I tried mimicking the proof above but the details don't quite work out. In this case, Cauchy-Schwarz still implies that $f_Z(z) \leq 2f_Z(0)$ for all $z$. However, the proof seems to need one more estimation along the lines of $\int_0^1 f_Z(z) dz \geq f_Z(0)$.</p>
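<p>The general real case is settled in the paper from the other answer; in the meantime, the inequality is easy to stress-test exactly on random finitely supported distributions:</p>

<pre><code>import random

random.seed(1)
for _ in range(1000):
    xs = [random.uniform(0, 10) for _ in range(6)]   # support points
    w = [random.random() for _ in xs]
    p = [v / sum(w) for v in w]                      # a random distribution on xs
    def pr(t):                                       # P(|X - Y| <= t) for X, Y iid
        return sum(pi * pj for xi, pi in zip(xs, p)
                           for xj, pj in zip(xs, p) if abs(xi - xj) <= t)
    assert pr(2) <= 3 * pr(1)
print("no counterexample found")
</code></pre>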
|
geometry | <p><a href="https://i.sstatic.net/EUaPe.png" rel="noreferrer"><img src="https://i.sstatic.net/EUaPe.png" alt="enter image description here"></a></p>
<p>Don't let the simplicity of this diagram <strong>fool</strong> you. I have been wondering about this for quite some time, but I can't think of an <strong>easy</strong>/smart way of finding it.</p>
<p><strong>Any ideas?</strong></p>
<hr>
<p><em>For <strong>reference</strong>, the <strong>Area</strong> is:</em></p>
<p>$$\bbox[10pt, border:2pt solid grey]{90−18.75\pi−25\cdot \arctan\left(\frac 12\right)}$$</p>
| <p><a href="https://i.sstatic.net/DNr5O.png"><img src="https://i.sstatic.net/DNr5O.png" alt="enter image description here"></a></p>
<p>We observe that $\triangle PRT$ can be partitioned into five congruent sub-triangles. Therefore, the entire shaded region has area given by ...
$$\begin{align}
3 u + |\text{region}\; PAT| &= 3u + |\square OAPT| - |\text{sector}\;OAT| \\[6pt]
&= 3u + \frac{3}{5}\,|\triangle PRT| - |\text{sector}\;OAT| \\[6pt]
&= 3\cdot\frac{1}{4} r^2 \left( 4 - \pi \right) \;+\; \frac{3}{5}\cdot r^2 \;-\; \frac{1}{2}r^2\cdot 2\theta
\end{align}$$
Since $\theta = \operatorname{atan}\frac{1}{2}$, this becomes</p>
<blockquote>
<p>$$r^2\left(\; \frac{18}{5} - \frac{3}{4}\pi - \operatorname{atan}\frac{1}{2} \;\right) \qquad\stackrel{r=5}{\to}\qquad
90 - \frac{75}{4}\pi - 25\;\operatorname{atan}\frac{1}{2}$$</p>
</blockquote>
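<p>As a numeric sanity check, this agrees with the boxed value quoted in the question:</p>

<pre><code>import math

r = 5
print(r ** 2 * (18 / 5 - 3 * math.pi / 4 - math.atan(1 / 2)))   # 19.5039...
print(90 - 18.75 * math.pi - 25 * math.atan(1 / 2))             # the same number
</code></pre>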
| <p>[<strong>Note:</strong> <a href="https://math.stackexchange.com/a/1875751/409">My second answer</a> is much better.]</p>
<p>I'll focus on the <em>unshaded</em> region at the bottom-left.</p>
<p><a href="https://i.sstatic.net/o8jYN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o8jYN.png" alt="enter image description here"></a></p>
<p>By an aspect of the <a href="https://en.wikipedia.org/wiki/Inscribed_angle#Theorem" rel="nofollow noreferrer">Inscribed Angle Theorem</a>, we know that $\angle AOB = 2\;\angle ABP$ (justifying marking these $\theta$ and $2\theta$). By a related result, we have that
$$\phi = \frac{1}{2}\left(\angle BOC - \angle AOB\right) = 45^\circ - \theta$$
Moreover, we know that
$$\phi = \operatorname{atan}\frac{1}{2} \approx 26.56^\circ \qquad\to\qquad \theta = 45^\circ - \operatorname{atan}\frac{1}{2} \approx 18.43^\circ$$</p>
<p>From here, knowing the circle's radius, one may calculate the lower-left area as ...
$$\begin{align}
&|\triangle PAB| + |\triangle OAB| - |\text{sector } OAB| \\
\end{align}$$
... from which we readily derive the area in the original question. For now, I'll leave these details to the reader.</p>
|
differentiation | <p>These days, the standard way to present differential calculus is by introducing the Cauchy-Weierstrass definition of the limit. One then defines the derivative as a limit, proves results like the Leibniz and chain rules, and uses this machinery to differentiate some simple functions such as polynomials. The purpose of my question is to see what creative alternatives people can describe to this approach. The nature of the question is that there is not going to be a single best answer. I have several methods that I've collected which I'll put in as answers to my own question.</p>
<p>It's not reasonable to expect answers to include an entire introductory textbook treatment of differentiation, nor would anyone want to read answers that were that lengthy. A sketch is fine. Lack of rigor is fine. Well known notation and terminology can be assumed. It would be nice to develop things to the point where one can differentiate a polynomial, since that would help to illustrate how your method works and demonstrate that it's usable. For this purpose, it suffices to prove that if $n>0$ is an integer, the derivative of $x^n$ equals $0$ at $0$ and equals $n$ at $1$; the result at other nonzero values of $x$ follows by scaling. Doing this for $n=2$ is fine if the generalization to $n>2$ is obvious.</p>
| <p>When I taught Number Theory I needed to speak of the derivative for a polynomial
(over $\mathbb Z$ and $\mathbb Z/p \mathbb Z$). Instead of taking the derivative from $\mathbb R$ and restricting it to $\mathbb Z$, I used the following approach, which works for polynomials only (but it would work in any polynomial ring).</p>
<p>Let $P(X)$ be a polynomial, and $a$ a point. Then by the division Theorem we have</p>
<p>$$P(X)=(X-a)Q(X) +R \,.$$</p>
<p>where $R$ is a constant. We define </p>
<p>$$P'(a):= Q(a) \,. \quad (*)$$</p>
<p>It is important, though, to point out that $Q(X) \neq P'(X)$ in general, since at different points we get different $Q$'s.</p>
<p>The following Lemma is an immediate consequence of $(*)$:</p>
<p><strong>Lemma</strong></p>
<p>1) $(P_1 \pm P_2)' =P_1'+P_2'$,</p>
<p>2) $(aP)'=aP'$</p>
<p>3) $(a)'=0$</p>
<p>4) $(X^n)'=n X^{n-1}$.</p>
<p>Thus, one gets the general formula for the derivative of a polynomial.</p>
<p>The product rule can also be proven relatively easy, and then one can actually prove that</p>
<p>$$P(X)=P(a) + P'(a)(X-a)+ \frac{P''(a)}{2!}(X-a)^2+...+ \frac{P^{(n)}(a)}{n!}(X-a)^n \,,$$</p>
<p>where $n$ is the degree of the polynomial.</p>
<p>It also follows from here that $a$ is a multiple root of $P(X)$ with multiplicity $k$ if and only if $P(a)=P'(a)=...=P^{(k-1)}(a)=0$ and $P^{(k)}(a) \neq 0$.</p>
<p>This is a purely algebraic approach; it works nicely for polynomials over any ring, and can probably be easily extended to rational functions, but not much more generally.</p>
<p><strong>Note</strong> that $R=P(a)$, thus for all $X \neq a$ we have $Q(X)=\frac{P(X)-P(a)}{X-a}$, so this definition is equivalent to the standard definition in $\mathbb R$.</p>
<p>Also, note that $P''(a) \neq Q'(a)$ in $(*)$. Actually, from the product rule one gets $P''(a) = 2Q'(a)$.</p>
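<p>The definition $(*)$ is also pleasantly computable: one synthetic division by $(X-a)$ produces $Q$ and $R=P(a)$, and a Horner evaluation of $Q$ at $a$ gives $P'(a)$. A short Python sketch:</p>

<pre><code>def derivative_at(coeffs, a):
    """P'(a) via P(X) = (X - a) Q(X) + R; coeffs = [c_n, ..., c_0]."""
    q, acc = [], 0
    for c in coeffs:              # synthetic division by (X - a)
        acc = acc * a + c
        q.append(acc)
    q, R = q[:-1], q[-1]          # quotient coefficients and R = P(a)
    val = 0
    for c in q:                   # Horner evaluation of Q at a
        val = val * a + c
    return val

print(derivative_at([1, 0, 0, 0], 2))   # (X^3)' at X = 2 gives 12
</code></pre>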
| <p>The following is meant to be an axiomatization of differential calculus of a single variable. To avoid complications, let's say that $f$, $g$, $f'$, and $g'$ are smooth functions from $\mathbb{R}$ to $\mathbb{R}$ ("smooth" being defined by the usual Cauchy-Weierstrass definition of the derivative, not by these axioms, i.e., I don't want to worry about nondifferentiable points right now). In all of these, assume the obvious quantifiers such as $\forall f \forall g$.</p>
<p>Axiom Z: $\exists f : f'\ne 0$</p>
<p>Axiom A: $(f+g)'=f'+g'$</p>
<p>Axiom C: $(g \circ f)'=(g'\circ f)f'$</p>
<p>A bunch of the following is my presentation of reasoning given in a post by Tom Goodwillie:
<a href="https://mathoverflow.net/questions/108773/independence-of-leibniz-rule-and-locality-from-other-properties-of-the-derivative/108804#108804">https://mathoverflow.net/questions/108773/independence-of-leibniz-rule-and-locality-from-other-properties-of-the-derivative/108804#108804</a> This whole answer is a shortened and cleaned up presentation of what was worked out in that MO question.</p>
<p>Theorems:</p>
<p>(1) The derivative of the identity function $I$ is 1. -- Applying axiom C to $I=I\circ I=I\circ I\circ I$ shows that $I'$ is equal to either 0 or 1 everywhere. Since continuity is assumed, $I'$ has the same value everywhere. By Z and C, that value can't be 0.</p>
<p>(2) The derivative of a constant function is 0. -- From A and (1) we can show that the derivative of $-I$ is $-1$. Composition of the constant function with $-I$ then shows that the derivative of the constant is 0, evaluated at 0.</p>
<p>(3) The derivative of $cx$, where $c$ is a constant, is $c$. -- By pre- or post-composing with a translation, we see that the derivative must be a constant $h(c)$. The function $h$ is a homomorphism of the reals with $h(1)=1$, so $h(c)=c$.</p>
<p>(4) The derivative of $cf$, where $c$ is a constant, is $cf'$. -- This follows from (3) and C.</p>
<p>(5) The derivative of an even function at 0 is 0. -- Axiom C.</p>
<p>(6) The derivative of $s(x)=x^2$ is $2x$. -- Let $u(x)=s(x+1)-s(x)=2x+1$. Then $u'=2$. By (5), $s'(0)=0$. Therefore $s'(1)=2$. Precomposition with a scaling function then establishes the result for all $x$.</p>
<p>(7) For any functions $f$ and $g$, $(fg)'=f'g+g'f$. -- Write $2fg=(f+g)^2-f^2-g^2$ and apply (6).</p>
|
combinatorics | <p>How many 6-letter permutations can be formed using only the letters of the word, <strong>MISSISSIPPI</strong>? I understand the trivial case where there are no repeating letters in the word (for arranging smaller words) but I stumble when this isn't satisfied. I'm wondering if there's a simpler way to compute all the possible permutations, as an alternative to manually calculating all the possible 6-letter combinations and summing the corresponding permutations of each. In addition, is there a method to generalize the result based on any <strong>P</strong> number of letters in a set with repeating letters, to calculate the number of <em>n</em>-letter permutations?</p>
<p>Sorry if this question (or similar) has been asked before, but if it has, please link me to the relevant page. I also apologize if the explanation is unclear, or if I've incorrectly used any terminology (I'm only a high-school student.) If so, please comment. :-) </p>
| <p>I can think of a generating function type of approach. You have 1 M, 2 P's, 4 I's and 4 S's in the word MISSISSIPPI. Suppose you picked the two P's and four I's; the number of permutations would be $\frac{6!}{4! 2!}$. However, we need to sum over all possible selections of 6 letters from this group.</p>
<p>The answer will be the coefficient of $x^6$ in</p>
<p>\begin{equation}
6!\left(1 + \frac{x}{1!}\right)\left(1 + \frac{x}{1!} + \frac{x^2}{2!}\right)\left(1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}\right)^2
\end{equation}</p>
<p>Each polynomial term corresponds to the ways in which you could make a selection of a given letter. So you have $1 + x$ for the letter M and $1 + x + x^2/2$ for the 2 letters P and so on.</p>
<p>The coefficient of $x^6$ comes out to 1610 in this case. </p>
<p><strong>EDIT</strong>: (I'm elaborating a bit in response to George's comment). </p>
<p>This is a pretty standard approach to such counting problems. The value of $x$ is not relevant to the problem at all. The benefit of using such polynomials is that it gives you a nice tool to "mechanically" solve the problem. The idea is that by looking at the coefficient of a particular term in the expanded polynomial, you get the answer.</p>
<p>When I wrote a term $(1+x)$ corresponding to the letter M, it captures two possibilities</p>
<p>1) I could either leave out M (which corresponds to the coefficient of $x^0$, which is 1)</p>
<p>2) I could include M, which corresponds to the coefficient of $x^1$, which is one.</p>
<p>Suppose you select 1M, 2I's 2P's and 1 S. This is encoded by the term $x^1\cdot x^2 \cdot x^2 \cdot x^1$. The first $x^1$ term corresponds to selecting the single M. The first $x^2$ term corresponds to selecting 2 I's (which are identical). Using similar logic, the next $x^2$ is for 2P's and the last $x^1$ is for 1S. Since you are interested in permutations with repetition, the number of permutations for this case will be $\frac{6!}{2!2!}$, which should be the coefficient of this term.</p>
<p>How would you encode 0 M, 3I's, 2P's and 1S? The term would then be $x^0 \cdot x^3 \cdot x^2 \cdot x^1$. However, this term would have to be multiplied by $\frac{6!}{3!2!}$ to keep the count correct. The $6!$ in the numerator will be common to all such terms. The denominator depends on your selection of letters.</p>
<p>You need to add all such possibilities. Instead of listing them all out, which will be laborious, you represent the possibility of choosing each letter by a polynomial. As an example, for 4 S's. you have $1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$. You divide $x^n$ by $n!$ to keep the count correct.</p>
<p>You then multiply the polynomials and look at the appropriate coefficient.</p>
<p>\begin{equation}
6!\left(1 + \frac{x}{1!}\right)\left(1 + \frac{x}{1!} + \frac{x^2}{2!}\right)\left(1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}\right)^2
\end{equation}</p>
<p>You expand out this polynomial and look at the coefficient of $x^6$ which gives you the answer.</p>
<p>For more powerful uses of this kind of approach, please read the book at <a href="http://www.math.upenn.edu/~wilf/gfologyLinked2.pdf">http://www.math.upenn.edu/~wilf/gfologyLinked2.pdf</a> (Warning - it's a big PDF file).</p>
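<p>If you would like to check the arithmetic, here is the same computation done exactly in Python (a sketch using rational coefficients to avoid rounding):</p>

<pre><code>from fractions import Fraction
from math import factorial

def factor(m):                    # 1 + x/1! + ... + x^m/m!
    return [Fraction(1, factorial(k)) for k in range(m + 1)]

def mul(p, q):                    # polynomial multiplication
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

poly = [Fraction(1)]
for m in (1, 2, 4, 4):            # multiplicities of M, P, I, S
    poly = mul(poly, factor(m))
print(factorial(6) * poly[6])     # 1610
</code></pre>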
| <p>I'll add a "basic" approach to complement the excellent <a href="https://math.stackexchange.com/a/20240/206402">generating function solution</a> given by <a href="https://math.stackexchange.com/users/2641">@svenkatr</a> above.</p>
<p>The difficulty attaches to the repeated letters; getting the number of 6-letter permutations from <span class="math-container">$ABCDEFGHIJK$</span> is simply the <a href="https://en.wikipedia.org/wiki/Falling_and_rising_factorials" rel="nofollow noreferrer">falling factorial</a> <span class="math-container">$(11)_6 = 332640$</span>. However, taking <span class="math-container">$6$</span> letters from the multiset <span class="math-container">$\{M, I^4, S^4,P^2\}$</span> means the combinations are far fewer, since repeats are inevitable (there are only <span class="math-container">$4$</span> different letters) and may have quite high multiplicity.</p>
<p>For a given unordered <em>repetition pattern</em> in the chosen 6 letters, say <span class="math-container">$aaaabc$</span>, we can fill this from the choice of letters based on which letters occur at a suitable multiplicity. For the example <span class="math-container">$aaaabc$</span>, <span class="math-container">$a$</span> can be either <span class="math-container">$I$</span> or <span class="math-container">$S$</span>, while <span class="math-container">$b$</span> and <span class="math-container">$c$</span> are then a free choice from the remaining letters, <span class="math-container">$\binom 32 = 3$</span> ways, giving <span class="math-container">$6$</span> options to fill this pattern. Then the arrangements of this pattern are the <a href="https://en.wikipedia.org/wiki/Multinomial_theorem#Number_of_unique_permutations_of_words" rel="nofollow noreferrer">multinomial</a> <span class="math-container">$\binom {6}{4,1,1} = 30$</span>.</p>
<p>So once we identify all patterns we can assess each in turn to get a total answer:
<span class="math-container">$$\begin{array}{|c|c|c|c|} \hline
\text{For this pattern:} & \text{options to fill} & \text{arrangements} & \text{total options} \\[1ex] \hline
aaaabb & \binom 21 \binom 21 = 4 & \binom{6}{4,2} = 15 & 60 \\[1ex]
aaaabc & \binom 21 \binom 32 = 6 & \binom{6}{4,1,1} = 30 & 180 \\[1ex]
aaabbb & \binom 22 = 1 & \binom{6}{3,3} = 20 & 20 \\[1ex]
aaabbc & \binom 21 \binom 21 \binom 21 = 8 & \binom{6}{3,2,1} = 60 & 480 \\[1ex]
aaabcd & \binom 21 \cdot \binom 33 = 2 & \binom{6}{3,1,1,1} = 120 & 240 \\[1ex]
aabbcc & \binom 33 = 1 & \binom{6}{2,2,2} = 90 & 90 \\[1ex]
aabbcd & \binom 32\cdot \binom 22 = 3 & \binom{6}{2,2,1,1} = 180 & 540 \\[1ex]
\hline
\end{array}$$</span></p>
<p>summing to <span class="math-container">$\fbox{1610}$</span> overall options.</p>
|
linear-algebra | <p>According to Wikipedia:</p>
<blockquote>
<p>A common convention is to list the singular values in descending order. In this case, the diagonal matrix <span class="math-container">$\Sigma$</span> is uniquely determined by <span class="math-container">$M$</span> (though the matrices <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are not).</p>
</blockquote>
<p>My question is, are <span class="math-container">$U$</span> and <span class="math-container">$V$</span> uniquely determined up to some equivalence relation (and what equivalence relation)?</p>
| <p>Let $A = U_1 \Sigma V_1^* = U_2 \Sigma V_2^*$. Let us assume that $\Sigma$ has distinct diagonal elements and that $A$ is tall. Then</p>
<p>$$A^* A = V_1 \Sigma^* \Sigma V_1^* = V_2 \Sigma^* \Sigma V_2^*.$$</p>
<p>From this, we get</p>
<p>$$\Sigma^* \Sigma V_1^* V_2 = V_1^* V_2 \Sigma^* \Sigma.$$</p>
<p>Notice that $\Sigma^* \Sigma$ is diagonal with all different diagonal elements (that's why we needed $A$ to be tall) and $V_1^* V_2$ is unitary. Defining $V := V_1^* V_2$ and $D := \Sigma^* \Sigma$, we have</p>
<p>$$D V = V D.$$</p>
<p>Now, since $V$ and $D$ commute, they have the same eigenvectors. But $D$ is a diagonal matrix with distinct diagonal elements (i.e., distinct eigenvalues), so its eigenvectors are the elements of the canonical basis. That means that $V$ is diagonal too, which means that</p>
<p>$$V = \operatorname{diag}(e^{{\rm i}\varphi_1}, e^{{\rm i}\varphi_2}, \dots, e^{{\rm i}\varphi_n}),$$</p>
<p>for some $\varphi_i$, $i=1,\dots,n$.</p>
<p>In other words, $V_2 = V_1 V$. Plug that back in the formula for $A$ and you get</p>
<p>$$A = U_1 \Sigma V_1^* = U_2 \Sigma V_2^* = U_2 \Sigma V^* V_1^* = U_2 V^* \Sigma V_1^*.$$</p>
<p>So, $U_2 = U_1 V$ if $\Sigma$ (and, by extension, $A$) is square nonsingular. Other options, somewhat similar to this, are possible if $\Sigma$ has zeroes on the diagonal and/or is rectangular.</p>
<p>If $\Sigma$ has repeating diagonal elements, much more can be done to change $U$ and $V$ (for example, one or both can permute corresponding columns).</p>
<p>If $A$ is not thin, but wide, you can do the same thing by starting with $AA^*$.</p>
<p>So, to answer your question: for a square, nonsingular $A$, there is a nice relation between different pairs of $U$ and $V$ (multiplication by a unitary diagonal matrix, applied in the same way to the both of them). Otherwise, you get quite a bit more freedom, which I believe is hard to formalize.</p>
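<p>The square nonsingular case is easy to confirm numerically; a brief numpy sketch (mine, not part of the proof):</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # square, almost surely nonsingular
U, s, Vh = np.linalg.svd(A)
V = Vh.conj().T
Lam = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4)))   # diagonal unitary
U2, V2 = U @ Lam, V @ Lam            # rotate both by the same phases
print(np.allclose(U2 @ np.diag(s) @ V2.conj().T, A))       # True
</code></pre>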
| <p>I'm going to provide a full characterisation of the set of SVDs for a given matrix <span class="math-container">$A$</span>, using two different (but of course ultimately equivalent) kinds of formalisms. First a standard matrix formalism, and then using dyadic notation.</p>
<p>The TL;DR is that if <span class="math-container">$A=UDV^\dagger=\tilde U \tilde D\tilde V^\dagger$</span> for <span class="math-container">$U,\tilde U,V,\tilde V$</span> isometries and <span class="math-container">$D,\tilde D>0$</span> square, strictly positive diagonal matrices, then we can safely assume <span class="math-container">$D=\tilde D$</span> by trivially rearranging the basis for the underlying space, and furthermore that <span class="math-container">$\tilde V=V W$</span> with <span class="math-container">$W$</span> a unitary block-diagonal matrix that leaves invariant certain subspaces directly determined by <span class="math-container">$A$</span> (further details below). This characterises all and only the possible SVDs. So given any SVD, the freedom in the choice of other SVDs corresponds to the freedom in choosing these unitaries <span class="math-container">$W$</span> (how much freedom that is depends in turn on the degeneracy of the singular values of <span class="math-container">$A$</span>).</p>
<h2>Regular notation</h2>
<p>Consider the SVD of a given matrix <span class="math-container">$A$</span>, in the form <span class="math-container">$A=UDV^\dagger$</span> with <span class="math-container">$D>0$</span> a square diagonal matrix with strictly positive entries, and <span class="math-container">$U,V$</span> isometries (this writing is general: the SVD is often written in a way that makes <span class="math-container">$U,V$</span> unitaries and <span class="math-container">$D$</span> not necessarily square, but you can have instead <span class="math-container">$D>0$</span> square if you allow <span class="math-container">$U,V$</span> to be isometries).</p>
<p>The question is, if you have
<span class="math-container">$$A = UDV^\dagger = \tilde U \tilde D \tilde V^\dagger,$$</span>
with <span class="math-container">$U,\tilde U,V,\tilde V$</span> isometries, and <span class="math-container">$D,\tilde D>0$</span> square diagonal matrices, what does this imply for <span class="math-container">$\tilde U,\tilde D,\tilde V$</span>? And more specifically, can we somehow find an explicit relation between them?</p>
<ol>
<li><p>The first easy observation is that you must have <span class="math-container">$D=\tilde D$</span>, modulo trivial rearrangements of the basis elements. This follows from <span class="math-container">$AA^\dagger=UD^2 U^\dagger = \tilde U \tilde D^2 \tilde U^\dagger$</span>, which means that <span class="math-container">$D^2$</span> and <span class="math-container">$\tilde D^2$</span> both contain the eigenvalues of <span class="math-container">$AA^\dagger$</span>. Because the spectrum is determined algebraically from <span class="math-container">$AA^\dagger$</span>, the set of eigenvalues must be identical, and the singular values are by definition positive reals, we must have <span class="math-container">$D=\tilde D$</span>.</p>
</li>
<li><p>The above reduces the question to: if <span class="math-container">$UDV^\dagger=\tilde U D\tilde V^\dagger$</span>, with <span class="math-container">$U,\tilde U,V,\tilde V$</span> isometries, what can we say about <span class="math-container">$\tilde U,\tilde V$</span>? To this end, we observe that the freedom in the choice of <span class="math-container">$U,V$</span> amounts to the different possible ways to decompose the subspaces associated to each distinct singular value.</p>
<p>More precisely, consider the subspace <span class="math-container">$V^{(d)}\equiv \{v: \, \|Av\|=d\|v\|\}$</span> corresponding to a singular value <span class="math-container">$d$</span>. We can then uniquely decompose the matrix as <span class="math-container">$A=\sum_d A_d$</span> where <span class="math-container">$A_d\equiv A \Pi_d$</span> and <span class="math-container">$\Pi_d$</span> is the projection onto <span class="math-container">$V^{(d)}$</span>, and the sum is over all nonzero singular values <span class="math-container">$d$</span> of <span class="math-container">$A$</span>. We can now observe that any and all SVDs of <span class="math-container">$A$</span> correspond to a choice of orthonormal basis for each <span class="math-container">$V^{(d)}$</span>. Namely, for any such basis <span class="math-container">$\{\mathbf v_k\}$</span> we associate the partial isometry <span class="math-container">$V_d\equiv \sum_k \mathbf v_k \mathbf e_k^\dagger$</span>. The corresponding basis for the image of <span class="math-container">$A_d$</span> is then determined as <span class="math-container">$\mathbf u_k= A \mathbf v_k/d$</span>, and we then define the partial isometry <span class="math-container">$U_d\equiv \sum_k \mathbf u_k \mathbf e_k^\dagger$</span>.
Here, the <span class="math-container">$\mathbf e_k$</span> are the standard basis vectors corresponding to the diagonal entries of <span class="math-container">$D$</span> equal to <span class="math-container">$d$</span>.
This procedure provides a decomposition <span class="math-container">$A_d= U_d D V_d^\dagger$</span>, and therefore an SVD for <span class="math-container">$A$</span> itself by summing these.
Any SVD can be constructed this way.</p>
<p>In conclusion, the freedom in choosing an SVD is entirely in the
choice of bases <span class="math-container">$\{\mathbf v_k\}$</span> above. We can summarise this freedom concisely by saying that given any SVD <span class="math-container">$A=UDV^\dagger$</span>, any other SVD can be written as <span class="math-container">$A=UW D (VW)^\dagger$</span> for some unitary <span class="math-container">$W$</span> such that <span class="math-container">$[W,D]=0$</span>. This commutation property is a concise way to state that <span class="math-container">$W$</span> is only allowed to mix vectors corresponding to the same singular value, that is, to the same eigenspace of <span class="math-container">$D$</span>.</p>
</li>
</ol>
<h4>Toy example #1</h4>
<p>Let's work out a simple toy example to illustrate the above results.</p>
<p>Let
<span class="math-container">$$H \equiv \begin{pmatrix}1&1\\1&-1\end{pmatrix}.$$</span>
This is a somewhat trivial example because <span class="math-container">$H$</span> is Hermitian, but still illustrates some aspects. A standard SVD reads
<span class="math-container">$$ H =
\underbrace{\frac{1}{\sqrt2}\begin{pmatrix}1&1\\-1&1\end{pmatrix} }_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt2 & 0\\0&\sqrt2\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}0&1\\1&0\end{pmatrix}}_{\equiv V^\dagger}.$$</span>
In this case, we have two identical singular values. According to our discussion above, this means that we can apply a(ny) unitary transformation to the columns of <span class="math-container">$V$</span> and still obtain another SVD. That is, given any unitary <span class="math-container">$W$</span>, <span class="math-container">$\tilde V\equiv V W$</span> gives another SVD for <span class="math-container">$H$</span>. In this simple case, you can also observe this directly, as <span class="math-container">$D=\sqrt2 I$</span>, and therefore
<span class="math-container">$$H = UDV^\dagger= UD W \tilde V^\dagger
= (UW) D \tilde V^\dagger,$$
hence <span class="math-container">$\tilde U\equiv UW$</span>, <span class="math-container">$\tilde V\equiv VW$</span> give the alternative SVD <span class="math-container">$H=\tilde U D\tilde V^\dagger$</span>, and all SVDs have this form.</p>
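<p>This degeneracy is easy to check numerically: <em>any</em> $2\times 2$ unitary $W$ yields another SVD of $H$. A brief sketch:</p>

<pre><code>import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex)
U = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)
D = np.sqrt(2) * np.eye(2)
V = np.array([[0, 1], [1, 0]], dtype=complex)    # so H = U @ D @ V.conj().T
# a random unitary W, built from the QR decomposition of a random complex matrix
W, _ = np.linalg.qr(np.random.randn(2, 2) + 1j * np.random.randn(2, 2))
print(np.allclose((U @ W) @ D @ (V @ W).conj().T, H))   # True
</code></pre>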
<h4>Toy example #2</h4>
<p>Consider a simple non-square case. Let
<span class="math-container">$$A \equiv \begin{pmatrix}1&1\\1 & \omega \\ 1 & \omega^2\end{pmatrix},
\qquad \omega\equiv e^{2\pi i/3}.$$</span>
This is again almost trivial because <span class="math-container">$A$</span> is an isometry, up to a constant. Still, we can write its SVD as
<span class="math-container">$$A =
\underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\1&\omega\\1&\omega^2\end{pmatrix}}_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}0&1\\1&0\end{pmatrix}}_{\equiv V^\dagger}.
$$</span>
Notice that now <span class="math-container">$U,V$</span> are isometries, but not unitaries, and that <span class="math-container">$D>0$</span> is square. As per our results above, <em>any</em> SVD will have the form <span class="math-container">$A=\tilde U D \tilde V^\dagger$</span> with <span class="math-container">$\tilde V=V W$</span> for some unitary <span class="math-container">$W$</span>.</p>
<p>For example, taking <span class="math-container">$W=V$</span> (we can do this, because here <span class="math-container">$V$</span> is also unitary), we get the alternative SVD <span class="math-container">$A=\tilde U D \tilde V^\dagger$</span> with <span class="math-container">$\tilde V=VW=VV=I$</span> and <span class="math-container">$\tilde U= U W=UV=\frac1{\sqrt3}\begin{pmatrix}1&\omega&\omega^2\\1&1&1\end{pmatrix}^T$</span>, that is,
<span class="math-container">$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\\omega&1\\\omega^2&1\end{pmatrix}}_{\equiv \tilde U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{\equiv \tilde V^\dagger}.$$</span></p>
<h4>Toy example #3</h4>
<p>Let's do an example with non-degenerate singular values. Let
<span class="math-container">$$A = \begin{pmatrix}1& 2 \\ 1 & 2\omega \\ 1 & 2\omega^2\end{pmatrix},
\qquad \omega\equiv e^{2\pi i/3}.$$</span>
This time the singular values are <span class="math-container">$\sqrt3$</span> and <span class="math-container">$2\sqrt3$</span>.
One SVD is easily derived as
<span class="math-container">$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\1&\omega\\1&\omega^2\end{pmatrix}}_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&2\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{\equiv V^\dagger}.$$</span>
However, in this case there is much less freedom in choosing other SVDs, because these must correspond to <span class="math-container">$\tilde V=VW$</span> where <span class="math-container">$W$</span> only mixes columns of <span class="math-container">$V$</span> corresponding to the same values of <span class="math-container">$D$</span>. In this case <span class="math-container">$D$</span> is non-degenerate, thus <span class="math-container">$W$</span> must be diagonal, and therefore the full set of SVDs must correspond to
<span class="math-container">$W=\begin{pmatrix}e^{i\alpha}&0\\0&e^{i\beta}\end{pmatrix}$</span>, that is,
<span class="math-container">$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}e^{-i\alpha}&e^{-i\beta}\\e^{-i\alpha}&\omega e^{-i\beta}\\e^{-i\alpha}&\omega^2 e^{-i\beta}\end{pmatrix}}_{\equiv \tilde U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&2\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}e^{i\alpha}&0\\0&e^{i\beta}\end{pmatrix}}_{\equiv \tilde V^\dagger}.$$</span>
All SVDs will look like this, for some <span class="math-container">$\alpha,\beta\in\mathbb{R}$</span>.
Note also that by permuting the elements of <span class="math-container">$D$</span>, we obtain SVDs which look different but are ultimately equivalent to the above.</p>
<h2>In dyadic notation</h2>
<h4>SVD in dyadic notation removes "trivial" redundancies</h4>
<p>The SVD of an arbitrary matrix <span class="math-container">$A$</span> can be written in <a href="https://en.wikipedia.org/wiki/Dyadics" rel="nofollow noreferrer">dyadic notation</a> as
<span class="math-container">$$A=\sum_k s_k u_k v_k^*,\tag A$$</span>
where <span class="math-container">$s_k\ge0$</span> are the singular values, and <span class="math-container">$\{u_k\}_k$</span> and <span class="math-container">$\{v_k\}_k$</span> are orthonormal sets of vectors spanning <span class="math-container">$\mathrm{im}(A)$</span> and <span class="math-container">$\ker(A)^\perp$</span>, respectively.
The connection between this and the more standard way of writing the SVD of <span class="math-container">$A$</span> as <span class="math-container">$A=UDV^\dagger$</span> is that <span class="math-container">$u_k$</span> is the <span class="math-container">$k$</span>-th column of <span class="math-container">$U$</span>, and <span class="math-container">$v_k$</span> is the <span class="math-container">$k$</span>-th column of <span class="math-container">$V$</span>.</p>
<h4>Global phase redundancies are always present</h4>
<p>If <span class="math-container">$A$</span> is nondegenerate, the only freedom in the choice of vectors <span class="math-container">$u_k,v_k$</span>
is their global phase: replacing <span class="math-container">$u_k\mapsto e^{i\phi}u_k$</span> and <span class="math-container">$v_k\mapsto e^{i\phi}v_k$</span> does not affect <span class="math-container">$A$</span>.</p>
<h4>Degeneracy gives more freedom</h4>
<p>On the other hand, when there are repeated singular values, there is additional freedom in the choice of <span class="math-container">$u_k,v_k$</span>, similarly to how there is more freedom in the choice of eigenvectors corresponding to degenerate eigenvalues.
More precisely, note that (A) implies
<span class="math-container">$$AA^\dagger=\sum_k s_k^2 \underbrace{u_k u_k^*}_{\equiv\mathbb P_{u_k}},
\qquad
A^\dagger A=\sum_k s_k^2 \mathbb P_{v_k}.$$</span>
This tells us that, whenever there are degenerate singular values, the corresponding set of principal components is defined up to a unitary rotation in the corresponding degenerate eigenspace.
In other words, the set of vectors <span class="math-container">$\{u_k\}$</span> in (A) can be chosen as any orthonormal basis of the eigenspace <span class="math-container">$\ker(AA^\dagger-s_k^2)$</span>, and similarly <span class="math-container">$\{v_k\}_k$</span> can be any basis of <span class="math-container">$\ker(A^\dagger A-s_k^2)$</span>.</p>
<p>However, note that a choice of <span class="math-container">$\{v_k\}_k$</span> determines <span class="math-container">$\{u_k\}$</span>, and vice-versa (otherwise <span class="math-container">$A$</span> wouldn't be well-defined, or injective outside its kernel).</p>
<h4>Summary</h4>
<p>A choice of <span class="math-container">$U$</span> uniquely determines <span class="math-container">$V$</span>, so we can restrict ourselves to reason about the freedom in the choice of <span class="math-container">$U$</span>. There are two main sources of redundancy:</p>
<ol>
<li>The vectors can always be scaled by a phase factor: <span class="math-container">$u_k\mapsto e^{i\phi_k}u_k$</span> and <span class="math-container">$v_k\mapsto e^{i\phi_k}v_k$</span>. In matrix notation, this corresponds to changing <span class="math-container">$U\mapsto U \Lambda$</span> and <span class="math-container">$V\mapsto V\Lambda$</span> for an arbitrary diagonal unitary matrix <span class="math-container">$\Lambda$</span>.</li>
<li>When there are "degenerate singular values" <span class="math-container">$s_k$</span> (that is, singular values corresponding to degenerate eigenvalues of <span class="math-container">$A^\dagger A$</span>), there is additional freedom in the choice of <span class="math-container">$U$</span>, which can be chosen as any matrix whose columns form an orthonormal basis for the eigenspace <span class="math-container">$\ker(AA^\dagger-s_k^2)$</span>.</li>
</ol>
<p>Finally, we should note that the former point is included in the latter, which therefore encodes all of the freedom allowed in choosing the vectors <span class="math-container">$\{v_k\}$</span>. This is because multiplying the elements of an orthonormal basis by phases does not affect its being an orthonormal basis.</p>
|
combinatorics | <blockquote>
<p>Can <span class="math-container">$18$</span> consecutive positive integers be separated into two groups such that their product is equal? We cannot leave out any number and neither we can take any number more than once.</p>
</blockquote>
<p>My work:<br />
When the smallest number is not <span class="math-container">$17$</span> or a multiple of it, no such arrangement can exist, as <span class="math-container">$17$</span> is prime.</p>
<p>When the smallest number is a multiple of <span class="math-container">$17$</span> but not of <span class="math-container">$13$</span> or <span class="math-container">$11$</span>, then no such arrangement exists.</p>
<p>But what happens, when the smallest number is a multiple of <span class="math-container">$ 17 $</span> and <span class="math-container">$13$</span> or <span class="math-container">$11$</span> or both?<br />
Please help!</p>
| <p>This is impossible. </p>
<p>At most one of the integers can be divisible by $19$. If there is such an integer, then one group will contain it and the other one will not. The first product is then divisible by $19$ whereas the second is not (since $19$ is prime) --- a contradiction.</p>
<p>So if this is possible, the remainders of the numbers after division by $19$ must be precisely $1,2,3,\cdots,18$. </p>
<p>Now let $x$ be the product of the numbers in one of the groups. Then </p>
<p>$x^2 \equiv 18! \equiv -1 \pmod{19}$ </p>
<p>by <a href="http://en.wikipedia.org/wiki/Wilson%27s_theorem">Wilson's Theorem</a>. However $-1$ is not a quadratic residue mod $19$, because the only possible squares mod $19$ are $1,4,9,16,6,17,11,7,5$.</p>
| <p>If $18$ consecutive positive integers could be separated into two groups with equal products, then the product of all $18$ integers would be a perfect square. However, the product of two or more consecutive positive integers can never be a perfect square, according to a famous theorem of P. Erdős, <a href="http://www.renyi.hu/~p_erdos/1939-03.pdf">Note on products of consecutive integers</a>, J. London Math. Soc. 14 (1939), 194-198.</p>
|
matrices | <p>In which cases is the inverse of a matrix equal to its transpose, that is, when do we have <span class="math-container">$A^{-1} = A^{T}$</span>? Is it when <span class="math-container">$A$</span> is orthogonal? </p>
| <p>If $A^{-1}=A^T$, then $A^TA=I$. This means that each column has unit length and is perpendicular to every other column. That means it is an orthogonal matrix, i.e. one whose columns form an orthonormal set.</p>
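<p>A small NumPy illustration (added as a sketch; the rotation matrix is just a convenient test case):</p>
<pre><code>import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation: columns are orthonormal

print(np.allclose(Q.T @ Q, np.eye(2)))       # True: Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))    # True: Q^{-1} = Q^T
</code></pre>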
| <p>You're right. This is the definition of orthogonal matrix.</p>
|
logic | <p>In <a href="https://math.stackexchange.com/a/121131/111520">Henning Makholm's answer</a> to the question, <a href="https://math.stackexchange.com/q/121128/111520">When does the set enter set theory?</a>, he states:</p>
<blockquote>
<p>In axiomatic set theory, the axioms themselves <em>are</em> the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p>
</blockquote>
<p>This assertion clashes with my (admittedly limited) understanding of how first-order logic, model theory, and axiomatic set theories work.
From what I understand, the axioms of a set theory are properties we would like the objects we call "sets" to have, and then each possible model of the theory is a different definition of the notion of a set. But the axioms themselves do not constitute a definition of set, unless we can show that any model of the axioms is isomorphic (in some meaningful way) to a given model.</p>
<p>Am I misunderstanding something? Is the definition of a set specified by the axioms, or by a model of the axioms? I would appreciate any clarification/direction on this.</p>
<hr>
<p><strong>Update:</strong> <em>In addition to all the answers below, I have written up my own answer (marked as community wiki) gathering the excerpts from other answers (to this question as well as some others) which I feel are most pertinent to the question I originally posed.
Since it's currently buried at the bottom (and accepting it won't change its position), I'm linking to it <a href="https://math.stackexchange.com/a/1792236/111520">here</a>. Cheers!</em></p>
| <p>This is the commonplace clash between the semi-Platonic view of the lay mathematician and the foundational approach to mathematics through set theory.</p>
<p>It is often convenient, when working in "concrete" mathematics, to assume that there is a single, fixed universe of mathematics. And everyone who took a course or two in logic and set theory should be able to tell you that we can assume this universe is in fact a universe of $\sf ZFC$.</p>
<p>Then we do everything there, and we take the notion of "set" as somewhat primitive. Sets are not defined, they are just the objects of the universe.</p>
<p>But the word "set" is just a word in English. We use it to name this fickle, abstract, primitive object. But how can you ensure that my intuitive understanding of "set" is the same as yours?</p>
<p>This is where "axioms as definitions" come into play. Axioms define the basic ground rules for what it means to be a set. For example, if you don't have a power set, you're not a set: because every set has a power set. The axioms of set theory define what are the basic properties of sets. And once we agree on a list of axioms, we really agree on a list of definitions for what it means to be a set. And even if we grasp sets differently, we can still agree on some common properties and do some math.</p>
<p>You can see this in set theorists who disagree about philosophical outlooks, and whether or not some conjecture should be "true" or "false", or if the question is meaningless due to independence. Is the HOD Conjecture "true", "false" or is it simply "provable" or "independent"? That's a very different take on what are sets, and different set theorists will take different sides of the issue. But all of these set theorists have agreed, at least, that $\sf ZFC$ is a good way to define the basic properties of sets.</p>
<hr>
<p>As we've thrown Plato's name into the mix, let's talk a bit about "essence". How do you define a chair? or a desk? You can't <em>really</em> define a chair, because either you've defined it along the lines of "something you can sit on", in which case there will be things you can sit on which are certainly not chairs (the ground, for example, or a tree); or you've run into circularity "a chair is a chair is a chair"; or you're just being meaninglessly annoying "dude... like... everything is a chair... woah..."</p>
<p>But all three options are not good ways to define a chair. And yet, you're not baffled by this on a daily basis. This is because there is a difference between the "essence" of being a chair, and physical chairs.</p>
<p>Mathematical objects shouldn't be so lucky. Mathematical objects are abstract to begin with. We cannot perceive them in a tangible way, like we perceive chairs. So we are only left with some ideal definition. And this definition should capture the basic properties of sets. And that is exactly what the axioms of $\sf ZFC$, and any other set theory, are trying to do.</p>
| <blockquote>
<p>In axiomatic set theory, the axioms themselves are the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p>
</blockquote>
<p>I half-agree with this. But recall that the axioms of group theory don't axiomatize the concept "element of a group." Rather, they axiomatize the concept "group." In a similar way, the axioms of ZFC don't axiomatize the concept "set." They axiomatize the concept "universe of sets" (or "von Neumann universe" or "cumulative hierarchy", if you prefer).</p>
|
combinatorics | <p>Consider $n$ distinct points $x_1,\dots,x_n$ on $\mathbb{R}$. Associated to these points is the <strong>multiset</strong> of all distances $d(x_i,x_j)$ between two points. Suppose one is only handed this multiset (you do not know the corresponding indices). Does this allow one to uniquely recover the original points up to reflection and translation? </p>
| <p>This is a really nice question!</p>
<p><strong>Counterexample for <span class="math-container">$n=6$</span></strong></p>
<p>The sets <span class="math-container">$$\{0,1,4,5,11,13\}\\\{0,1,2,6,10,13\}$$</span>
are affinely inequivalent, but the multiset of differences is in both cases <span class="math-container">$$1^2\cdot 2\cdot 3\cdot 4^2 \cdot 5 \cdot 6\cdot 7 \cdot 8\cdot 9\cdot 10\cdot 11\cdot 12\cdot 13.$$</span></p>
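<p>A short brute-force check in Python (my addition) confirms that the two sets share the same distance multiset:</p>
<pre><code>from itertools import combinations
from collections import Counter

def distance_multiset(points):
    """Multiset of pairwise distances of a finite point set."""
    return Counter(abs(a - b) for a, b in combinations(points, 2))

A = [0, 1, 4, 5, 11, 13]
B = [0, 1, 2, 6, 10, 13]
print(distance_multiset(A) == distance_multiset(B))   # True
print(sorted(distance_multiset(A).elements()))
# [1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
</code></pre>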
<p><strong>Counterexamples for <span class="math-container">$n \geq 7$</span></strong>
According to Lemma 2.1 in the first linked article in the answer of Steve Kass, for all <span class="math-container">$n\geq 6$</span>, a counterexample is given by
<span class="math-container">$$X\cup \{n+1,n+3\}\quad\text{and}\quad X\cup\{2,n+1\}$$</span>
where <span class="math-container">$X = \{5,6,7,\ldots,n-1,n-2\} \cup \{0,1,n,n+5\}$</span>.</p>
<p><strong>Uniqueness for <span class="math-container">$n\leq 5$</span></strong></p>
<p>For <span class="math-container">$n\in\{1,2\}$</span>, the uniqueness is clear.</p>
<p>Let <span class="math-container">$X = \{x_1,x_2,\ldots,x_n\}$</span> be a point set. We assume <span class="math-container">$x_1\leq x_2\leq\ldots \leq x_n$</span>. Up to affine equivalence, we may assume <span class="math-container">$x_1 = 0$</span>. We denote the distance of two points <span class="math-container">$x,y$</span> by <span class="math-container">$\delta(x,y) = \left|x-y\right|$</span> and furthermore define the abbreviation <span class="math-container">$\delta_{i,j} = \delta(x_i,x_j)$</span>.
In the multiset of distances, let <span class="math-container">$a \geq b\geq c\geq d$</span> be the four largest elements.
It is clear that <span class="math-container">$a = \delta_{1,n}$</span> and thus <span class="math-container">$x_n = a$</span>.
Up to affine equivalence, we may assume <span class="math-container">$b = \delta_{1,n-1}$</span> (the other possibility is <span class="math-container">$\delta_{2,n}$</span>), so <span class="math-container">$x_{n-1} = b$</span>. This shows the uniqueness for <span class="math-container">$n = 3$</span>.</p>
<p>For <span class="math-container">$n=4$</span>, <span class="math-container">$\{\delta_{1,2},\delta_{2,n}\} = \{c,x_4-c\}$</span>. This distinguishes the last remaining distance <span class="math-container">$\delta_{2,3}$</span>, which in turn fixes <span class="math-container">$x_2$</span>.</p>
<p>It remains to consider the case <span class="math-container">$n=5$</span>.</p>
<p>First we see that if one further point <span class="math-container">$x\in\{x_2,x_3\}$</span> is fixed, the set <span class="math-container">$X$</span> is completely determined: Let <span class="math-container">$y$</span> be the missing point. Among the four remaining distances <span class="math-container">$\delta(x_1,y)$</span>, <span class="math-container">$\delta(x,y)$</span>, <span class="math-container">$\delta(y,x_4)$</span>, <span class="math-container">$\delta(y,x_5)$</span>, the maximum <span class="math-container">$d_m$</span> is contained in the set <span class="math-container">$\{\delta(x_1,y),\delta(x_5,y)\} = \{d_m, x_5-d_m\}$</span>. So we also know the set <span class="math-container">$\{\delta(x,y),\delta(y,x_4)\}$</span>. Because of <span class="math-container">$x,y\in(x_1,x_4)$</span>, <span class="math-container">$\delta(y,x_4)$</span> is the larger of those two distances, which fixes the point <span class="math-container">$y$</span>.</p>
<p>By the choice of <span class="math-container">$c$</span>, we have <span class="math-container">$c = \delta_{1,3}$</span> (Case A) or <span class="math-container">$c = \delta_{2,5}$</span> (Case B).
If the multiset of distances admits only one of those two cases, then by the above reasoning, <span class="math-container">$X$</span> is uniquely determined. So we have to see that if both cases are possible, then the point sets are necessarily identical.</p>
<p>Case A) If <span class="math-container">$c = \delta_{1,3}$</span>, then <span class="math-container">$x_3 = c$</span>, and <span class="math-container">$d\in\{\delta_{1,2},\delta_{2,5}\}$</span>.</p>
<p>Case A1) <span class="math-container">$d = \delta_{1,2}$</span>. Now <span class="math-container">$X = \{0,d,c,b,a\}$</span>, and the <span class="math-container">$10$</span> distances are
<span class="math-container">$$\delta_{1,2} = d,\quad \delta_{1,3} = c,\quad \delta_{1,4}= b,\quad \delta_{1,5} = a,\\ \delta_{2,3} = c-d,\quad \delta_{2,4} = b-d,\quad \delta_{2,5} = a-d,\\ \delta_{3,4} = b-c,\quad \delta_{3,5} = a-c, \\\delta_{4,5} = a-b$$</span></p>
<p>Case A2) <span class="math-container">$d = \delta_{2,5}$</span>. Now <span class="math-container">$X = \{0,a-d,c,b,a\}$</span>, and the <span class="math-container">$10$</span> distances are
<span class="math-container">$$\delta_{1,2} = a-d,\quad \delta_{1,3} = c,\quad \delta_{1,4}= b,\quad \delta_{1,5} = a,\\ \delta_{2,3} = -a+c+d,\quad \delta_{2,4} = -a+b+d,\quad \delta_{2,5} = d,\\ \delta_{3,4} = b-c,\quad \delta_{3,5} = a-c, \\\delta_{4,5} = a-b$$</span></p>
<p>Case B) If <span class="math-container">$c = \delta_{2,5}$</span>, then <span class="math-container">$x_2 = a-c$</span>, and <span class="math-container">$d\in\{\delta_{1,3},\delta_{3,5},\delta_{2,4}\}$</span>.</p>
<p>Case B1) <span class="math-container">$d = \delta_{1,3}$</span>.
Now <span class="math-container">$X = \{0,a-c,d,b,a\}$</span>, and the <span class="math-container">$10$</span> distances are
<span class="math-container">$$\delta_{1,2} = a-c,\quad \delta_{1,3} = d,\quad \delta_{1,4}= b,\quad \delta_{1,5} = a,\\ \delta_{2,3} = -a+c+d,\quad \delta_{2,4} = -a+b+c,\quad \delta_{2,5} = c,\\ \delta_{3,4} = b-d,\quad \delta_{3,5} = a-d, \\\delta_{4,5} = a-b.$$</span></p>
<p>By the above consideration, B1) and A1) cannot both occur (since they have <span class="math-container">$4$</span> points in common).</p>
<p>Assume that both B1) and A2) are possible. Then by comparing the distances, the two sets <span class="math-container">$\{-a+b+d, b-c\}$</span> and <span class="math-container">$\{-a+b+c, b-d\}$</span> must be the same. In both possibilities to match the elements, we end up with <span class="math-container">$c = d$</span>, which shows that the point set is the same in both cases.</p>
<p>Case B2) <span class="math-container">$d = \delta_{3,5}$</span>.
Now <span class="math-container">$X = \{0,a-c,a-d,b,a\}$</span>, and the <span class="math-container">$10$</span> distances are
<span class="math-container">$$\delta_{1,2} = a-c,\quad \delta_{1,3} = a-d,\quad \delta_{1,4}= b,\quad \delta_{1,5} = a,\\ \delta_{2,3} = c-d,\quad \delta_{2,4} = -a+b+c,\quad \delta_{2,5} = c,\\ \delta_{3,4} = -a+b+d,\quad \delta_{3,5} = d, \\\delta_{4,5} = a-b.$$</span>
We go on similarly as in B1).</p>
<p>Case B3) <span class="math-container">$d = \delta_{2,4}$</span>. Here necessarily <span class="math-container">$a+d = b+c$</span>, and the point <span class="math-container">$x_3$</span> is not yet fixed. The point set is <span class="math-container">$X = \{0,a-c,x_3,b,a\}$</span>. We go on similarly as in B1), B2) to see that if B3) occurs together with A1) or A2), then <span class="math-container">$X$</span> is uniquely determined.</p>
<p>EDIT:
The uniqueness for <span class="math-container">$n\leq 5$</span> is also stated in Lemma 2.1 in the first linked article of Steve Kass. However, the proof doesn't give too many details, and I do not understand the part "since <span class="math-container">$a + b = c$</span>, if <span class="math-container">$a + c = 1$</span> then <span class="math-container">$b$</span> uniquely determines <span class="math-container">$T$</span>.".</p>
| <p>This problem is called the "turnpike problem" or the "partial digest problem." Sets like the two @azimut gave are called "homometric" or "homeometric," and there can be many for a given set of distances (but the number of them is always a power of two). Here are a couple of references:</p>
<p><a href="http://www.cs.sunysb.edu/~skiena/papers/turnpike.ps">Reconstructing Sets From Interpoint Distances</a></p>
<p><a href="http://dustwell.com/PastWork/PartialDigestProblem.pdf">The Partial Digest Problem</a></p>
<p><a href="http://www.collectionscanada.gc.ca/obj/s4/f2/dsk1/tape3/PQDD_0014/NQ61635.pdf">On the Turnpike Problem</a></p>
|
game-theory | <p>I want to self study game theory. Which math-related qualifications should I have? And can you recommend any books? Where do I have to begin?</p>
| <p>I've decided to flesh out my small comment into a (hopefully respectable) answer.</p>
<p>The book I read to learn Game Theory is called "<a href="http://www.rand.org/pubs/commercial_books/CB113-1.html" rel="noreferrer">The Compleat Strategyst</a>", thanks to J.M. for pointing out that it is now a free download. This was one of the first books on Game Theory, and at this point is probably very dated, but it is a nice easy introduction and, since it is free, you may as well go through it. I read the whole book and did all the examples in a couple of weeks. I said before that Linear Algebra was a prerequisite, however after flipping through it again I see that they explain all the mechanics necessary within the book itself, so unless you are also interested in the theory behind it, you will be fine without any linear algebra background.</p>
<p>Since it sounds like you do want the theory (and almost any aspect of Game Theory beyond the introduction provided by that book will still require Linear Algebra) you may want to grab a Linear Algebra book. I'm partial to <a href="http://linear.axler.net/" rel="noreferrer">Axler's Linear Algebra Done Right</a>, which is (in my opinion) sufficient for self-study.</p>
<p>The Wikipedia page on <a href="http://en.wikipedia.org/wiki/Game_theory" rel="noreferrer">Game Theory</a> lists many types of games. Aspects of the first five are covered at various lengths in "The Compleat Strategyst", these include:</p>
<ul>
<li>Cooperative or non-cooperative</li>
<li>Symmetric and asymmetric</li>
<li>Zero-sum and non-zero-sum</li>
<li>Simultaneous and sequential</li>
<li>Perfect information and imperfect information</li>
</ul>
<p>The rest of the math you will need to know depends on what sort of games you're interested in exploring after that, and the math required is given away largely by the name:</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/Combinatorial_game_theory" rel="noreferrer">Combinatorial Game Theory</a> will likely require combinatorics.</li>
<li><a href="http://en.wikipedia.org/wiki/Game_theory#Infinitely_long_games" rel="noreferrer">Infinitely long games</a> seem to be related to set theory.</li>
<li>Both <a href="http://en.wikipedia.org/wiki/Game_theory#Discrete_and_continuous_games" rel="noreferrer">discrete and continuous games</a> and <a href="http://en.wikipedia.org/wiki/Game_theory#Many-player_and_population_games" rel="noreferrer">many-player/population games</a> would seem to require calculus (and perhaps differential equations).</li>
<li><a href="http://en.wikipedia.org/wiki/Game_theory#Stochastic_outcomes_.28and_relation_to_other_fields.29" rel="noreferrer">Stochastic outcomes</a> are related to statistics.</li>
<li><a href="http://en.wikipedia.org/wiki/Game_theory#Metagames" rel="noreferrer">Metagames</a> (also sometimes referred to as "reverse Game Theory") use some fairly sophisticated mathematics, so you'll probably need a good understanding of analysis and abstract algebra.</li>
</ul>
<p>Also see this (somewhat duplicate) <a href="https://math.stackexchange.com/questions/43632/good-non-mathematician-book-on-game-theory">question</a> for video lectures which will give you a better understanding of what game theory is before you shell out any money to buy anything.</p>
| <blockquote>
<p>Shameless Advertisement: <a href="http://area51.stackexchange.com/proposals/47845/game-theory?referrer=MGta8a5T8SlJlb1swuXPYQ2">Game Theory
proposal</a> on Area 51 that you should totally follow!</p>
</blockquote>
<p>It definitely depends on the flavor of game theory you're interested in, but in my experience, no truly introductory text requires anything beyond simple algebra and logical reasoning. Once you get into something advanced, it requires in-depth knowledge of a specific subfield (say research on permutations), but that isn't something I would prepare for, assuming you can learn quickly. Rather, it seems mathematicians and social scientists often collaborate to solve these problems.</p>
<blockquote>
<p>My Personal Recommendation: <a href="http://en.wikipedia.org/wiki/Game_theory">Wikipedia's entry on Game Theory</a> and then also read about each game in the <a href="http://en.wikipedia.org/wiki/List_of_games_in_game_theory">list of games</a> and probably the entry on <a href="http://en.wikipedia.org/wiki/Solution_concept">solution concepts</a>.</p>
</blockquote>
<p>No Math:<br>
- <a href="http://rads.stackoverflow.com/amzn/click/0393310353">Thinking Strategically</a><br>
- <a href="http://en.wikipedia.org/wiki/Evolution_of_cooperation">The Evolution of Cooperation</a><br>
- <a href="http://en.wikipedia.org/wiki/The_Complexity_of_Cooperation">The Complexity of Cooperation</a><br>
Minimal Notation<br>
- <a href="http://rads.stackoverflow.com/amzn/click/0691090394">Behavioral Game Theory</a><br>
- <a href="http://rads.stackoverflow.com/amzn/click/0262061945">The Theory of Learning in Games</a><br>
(Technical) Textbooks<br>
- <a href="http://rads.stackoverflow.com/amzn/click/0195128958">An Introduction to Game Theory</a><br>
- <a href="http://www.cambridge.org/journals/nisan/downloads/Nisan_Non-printable.pdf">Algorithmic Game Theory</a><br>
- <a href="http://rads.stackoverflow.com/amzn/click/0123745071">Auction Theory</a><br>
Reference Books:<br>
- <a href="http://rads.stackoverflow.com/amzn/click/0262514133">Combinatorial Auctions</a><br>
- <a href="http://rads.stackoverflow.com/amzn/click/0444826424">Handbook of Experimental Economics Results</a><br>
- <a href="http://rads.stackoverflow.com/amzn/click/0691058970">The Handbook of Experimental Economics</a><br>
Journals<br>
- <a href="http://www.journals.elsevier.com/games-and-economic-behavior/">Games and Economic Behavior</a> </p>
<p>Reading recently-published papers is quite fun; they're usually sufficiently contained such that if you can read a paper (that is, read only its abstract, introduction, and conclusion), you can get a better idea of why concepts found in an undergraduate text are considered central.</p>
|
number-theory | <p>I've solved for it making a computer program, but was wondering there was a mathematical equation that you could use to solve for the nth prime?</p>
| <p>No, there is no known formula that gives the nth prime, except <a href="https://www.youtube.com/watch?v=j5s0h42GfvM" rel="nofollow noreferrer">artificial ones</a> you can write that are basically equivalent to "the <span class="math-container">$n$</span>th prime". But if you only want an approximation, the <span class="math-container">$n$</span>th prime is roughly around <span class="math-container">$n \ln n$</span> (or more precisely, near the number <span class="math-container">$m$</span> such that <span class="math-container">$m/\ln m = n$</span>) by the <a href="http://en.wikipedia.org/wiki/Prime_number_theorem" rel="nofollow noreferrer">prime number theorem</a>. In fact, we have the following asymptotic bound on the <span class="math-container">$n$</span>th prime <span class="math-container">$p_n$</span>:</p>
<blockquote>
<p><span class="math-container">$n \ln n + n(\ln\ln n - 1) < p_n < n \ln n + n \ln \ln n$</span> for <span class="math-container">$n\ge{}6$</span></p>
</blockquote>
<p>You can <a href="http://en.wikipedia.org/wiki/Generating_primes#Prime_sieves" rel="nofollow noreferrer">sieve</a> within this range if you want the <span class="math-container">$n$</span>th prime. [Edit: Using more accurate estimates you'll have a much smaller range to sieve; see the answer by Charles.]</p>
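<p>As a rough illustration (a naive Python sketch added here, not an efficient method; see the other answer for a serious approach), one can sieve up to the upper bound above and read off the $n$th prime:</p>
<pre><code>from math import log

def nth_prime(n):
    """n-th prime for n >= 6, by sieving up to the bound n ln n + n ln ln n."""
    limit = int(n * (log(n) + log(log(n)))) + 1
    sieve = bytearray([1]) * limit            # sieve[i] == 1 means "i looks prime"
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit, p)))
    count = 0
    for i, is_prime in enumerate(sieve):
        count += is_prime
        if count == n:
            return i

print(nth_prime(6))       # 13
print(nth_prime(1000))    # 7919
</code></pre>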
<p>Entirely unrelated: if you want to see formulae that generate a lot of primes (not the <span class="math-container">$n$</span>th prime) up to some extent, like the famous <span class="math-container">$f(n)=n^2-n+41$</span>, look at the Wikipedia article <a href="http://en.wikipedia.org/wiki/Formula_for_primes" rel="nofollow noreferrer">formula for primes</a>, or Mathworld for <a href="http://mathworld.wolfram.com/PrimeFormulas.html" rel="nofollow noreferrer">Prime Formulas</a>.</p>
| <p>Far better than sieving in the large range ShreevatsaR suggested (which, for the 10¹⁵th prime, has 10¹⁵ members and takes about 33 TB to store in compact form), take a good first guess like Riemann's R and use one of the advanced methods of computing pi(x) for that first guess. (If this is far off for some reason—it shouldn't be—estimate the distance to the proper point and calculate a new guess from there.) At this point, you can sieve the small distance, perhaps just 10⁸ or 10⁹, to the desired number.</p>
<p>This is about 100,000 times faster for numbers around the size I indicated. Even for numbers as small as 10 to 12 digits, this is faster if you don't have a precomputed table large enough to contain your answer.</p>
|
logic | <p>An extreme form of constructivism is called <em>finitism</em>. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need to have definitions for the irrational numbers that one is likely to encounter in practice, such as <span class="math-container">$e$</span> or <span class="math-container">$\sqrt{2}$</span>. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is, how would real numbers like these be defined in a finitist axiom system (Of course we have no hope to construct the entire set of real numbers, since that set is uncountably infinite).</p>
<p>After doing a little research I found a constructivist definition in Wikipedia <a href="http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis" rel="noreferrer">http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis</a> , but we need a finitist definition of a function for this definition to work (Because in the standard system, a function over the set of natural numbers is actually an infinite set).</p>
<p>So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system? </p>
<p><em>Original version of this question, <a href="http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite">which had been closed during private beta</a>, is as follows:</em></p>
<blockquote>
<p><strong>If all sets were finite, what would mathematics be like?</strong></p>
<p>If we replace the axiom that 'there exists an infinite set' with 'all sets are finite', what would mathematics be like? My guess is that all the theory that has practical importance would still show up, but everything would be very, very unreadable for humans. Is that true?</p>
<p>We would have the natural numbers, although the class of all natural numbers would not be a set. In the same sense, we could have the rational numbers. But could we have the real numbers? Can the standard constructions be adapted to this setting?</p>
</blockquote>
| <p>Set theory with all sets finite has been studied, is a familiar theory in disguise, and is enough for most/all concrete real analysis.</p>
<p>Specifically, Zermelo-Fraenkel set theory with the Axiom of Infinity replaced by its negation (informally, "there is no infinite set") is equivalent to first-order Peano Arithmetic. Call this system <em>finite ZF</em>, the theory of hereditarily finite sets. Then under the Goedel arithmetic encoding of finite sets, Peano Arithmetic can prove all the theorems of Finite ZF, and under any of the standard constructions of integers from finite sets, Finite ZF proves all the theorems of Peano Arithmetic. </p>
<p>The implication is that theorems unprovable in PA involve intrinsically infinitary reasoning. Notably, finite ZF was used as an equivalent of PA in the Paris-Harrington paper "A Mathematical Incompleteness in Peano Arithmetic" which proved that their modification of the finite Ramsey theorem can't be proved in PA.</p>
<p>Real numbers and infinite sequences are not directly objects of the finite ZF universe, but there is a clear sense in which real (and complex, and functional) analysis can be performed in finite ZF or in PA. One can make statements about $\pi$ or any other explicitly defined real number, as theorems about a specific sequence of rational approximations ($\forall n P(n)$) and these can be formulated and proved using a theory of finite sets. PA can perform very complicated induction proofs, i.e., transfinite induction below $\epsilon_0$. In practice this means any concrete real number calculation in ordinary mathematics. For the example of the prime number theorem, using complex analysis and the Riemann zeta function, see Gaisi Takeuti's <em>Two Applications of Logic to Mathematics</em>. More discussion of this in a MO thread and my posting there:</p>
<p><a href="https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence">https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence</a></p>
<p><a href="https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence/31942#31942">https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence/31942#31942</a></p>
<p>Proof theory in general and reverse mathematics in particular contain analyses of the logical strength of various theorems in mathematics (when made suitably concrete as statements about sequences of integers), and from this point of view PA, and its avatar finite set theory, are very powerful systems. </p>
| <p>Disclaimer: I am not a finitist --- but as a theoretical computer scientist, I have a certain sympathy for finitism. The following is the result of me openly speculating what an "official" finitist response would be, based on grounds of computability.</p>
<p>The short version is this: <strong>(a)</strong> It depends on what you mean by a 'number', but there's a reasonable approach which makes it reasonable to talk about finitistic approaches to real numbers; <strong>(b)</strong> What you can do finitisitically with numbers, real, rational, or otherwise, depends on how you represent those numbers.</p>
<ol>
<li><p><strong>What is a number?</strong> Is −1 a number? Is sqrt(2) a number? Is <em>i</em> = sqrt(−1) a number? What about quaternions? --- I'm going to completely ignore this question and suggest a pragmatic, formalist approach: a "number" is an element of a "number system"; and a "number system" is a collection of expressions which you can transform or describe properties of in some given ways (<em>i.e.</em> certain given arithmetic operations) and test certain properties (<em>e.g.</em> tests for equality, ordering, <em>etc.</em>) These expressions don't have to have a meaningful interpretation in terms of quantities or magnitudes as far as I'm concerned; <em>you</em> get to choose which operations/tests you care about.<br><br> A finitist would demand that any operation or property be described by an algorithm which provably terminates. That is, it isn't sufficient to prove existence or universality <em>a la</em> classical logic; existence proofs must be finite constructions --- of a "number", that is a representation in some "number system" --- and univserality must be shown by a computable test. </p></li>
<li><p><strong>Representation of numbers:</strong> How we represent the numbers matters. A finitist should have no qualms about rational numbers: ratios which ultimately boil down to ordered pairs. Despite this, the decimal expansions of these numbers may be infinitely long: 1/3 = 0.33333... what's going on here?<br><br> Well, the issue is that we have two representations for the same number, one of which is finite in length (and allows us to perform computations) and another which is not finite in length. However, the decimal expansion can be easily expressed as a function: for all <em>k</em>, the <em>k</em><sup>th</sup> decimal place after the point is '3'; so you can still characterize it precisely in terms of a finite rule.<br><br> What's important is that there exists <strong>some</strong> finite way to express the number. But the way in which we choose to <em>define</em> the number (as a part of system or numbers, using some way of expressing numbers) will affect what we can do with it...
there is now a question about what operations we can perform.<br><br>--- For rationals-as-ratios, we can add/subtract, multiply/divide, and test order/equality. So this representation is a very good one for rationals. <br><br>--- For rationals-as-decimal-expansions, we can still add/subtract and multiply/divide, by defining a new digit-function which describes how to compute the result from the decimal expansions; these will be messier than the representations as ratios. Order comparisons are still possible for <em>distinct</em> rationals; but you cannot test equality for arbitrary decimal-expansion representations, because you cannot necessarily verify that all decimal places of the difference |<em>a</em>−<em>b</em>| are 0. The best you can do in general is testing "equality up to precision ε", wherein you show that |<em>a</em>−<em>b</em>| < ε, for some desired precision ε. This is a number system which informally we may say has a certain amount of "vagueness"; but it is in principle completely specified --- there's nothing wrong with this in principle. It's just a matter of how you wish to define your system of arithmetic. (A small computational sketch of such a digit-function representation appears right after this list.)</p></li>
<li><p><strong>What representation of reals?</strong> Obviously, because there are uncountably many real numbers, you cannot represent all real numbers even if you <em>aren't</em> a finitist. But we can still express some of them. The same is true if you're a finitist: you just don't have access to as many, and/or you're restricted in what you can do with them, according to what your representation can handle.<br><br>
--- Algebraic irrational numbers such as sqrt(2) can be expressed simply like that: "sqrt(2)". There's nothing wrong with the expressions "sqrt(2) − 1" or "[1 + sqrt(5)]/2" --- they express quantities perfectly well. You can perform arithmetic operations on them perfectly well; and you can also perform ordering/equality tests by transforming them into a normal form of the type "[sum of integers and roots of integers]/[positive integer]"; if the difference of two quantities is zero, the normal form of the difference will just end up being '0'. For order comparisons, we can compute enough decimal places of each term in the sum to determine whether the result is positive or negative, a process which is guaranteed to terminate.<br><br>
--- Numbers such as π and e can be represented by decimal expansions, and computed with in this form, as with the rational numbers. The decimal expansions can be gotten from classical equalities (<em>e.g.</em> "infinite" series, except computing only <em>partial</em> sums; a number such as e may be expressed by some finite representation of such an 'exact' formula, together with a computable function which describes how many terms of the series are required to get a correct evaluation of the first <em>k</em> decimal places.) Of course, what you can do finitistically with these representations is limited in the same way as described above with the rationals; specifically, you cannot always test equality.</p></li>
</ol>
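<p>To make the "finite rule" idea in point 2 concrete, here is a small Python sketch (my own illustration, not part of the original answer): a non-negative rational is represented by a pair of integers together with a terminating procedure that returns any requested decimal digit.</p>
<pre><code>def decimal_digit(p, q, k):
    """The k-th digit (k >= 1) after the decimal point of the rational p/q >= 0.

    A finite rule: for any k this terminates after k steps of integer
    arithmetic, so the whole infinite expansion is specified by the
    finite object (p, q) together with this procedure.
    """
    r = p % q
    for _ in range(k - 1):
        r = (r * 10) % q
    return (r * 10) // q

print([decimal_digit(1, 3, k) for k in range(1, 8)])     # [3, 3, 3, 3, 3, 3, 3]
print([decimal_digit(22, 7, k) for k in range(1, 13)])   # 142857 repeating
</code></pre>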
|
probability | <p>I have come across both $P(\dots)$ and $\Pr(\dots)$ being used to represent probabilities. Is there any difference in the meaning of these notations, or are they just different shorthands?</p>
<p>I seem to come by $\Pr(\dots)$ more often in Bayesian probability contexts, though I wouldn't say that's a rule. </p>
| <p>They are just different conventions. They don't signify any different meaning.</p>
<p>I personally find the $\Pr$ notation most useful when the discussion involves combinatorics. It distinguishes probability <em>somewhat</em> from permutation. (Unless you use ${^n{\rm P}_r}$ ...)</p>
<p>It also has that convenient LaTeX command <code>\Pr</code> which renders it in times roman font, and with some space padding, which helps it stand out in a line of multiplied probabilities using just a few keystrokes.</p>
| <p>They are just different notation. Some authors even use the blackboard bold font: $\mathbb{P}$. What matters is what's inside of the subsequent parentheses (or sometimes brackets, [].)</p>
<p>Several notation species exist for expectation ($E, \text{E},\mathbb{E}$) and variance ($V, \text{V},Var, \mathbb{V}$) too, but they all have the same definition.</p>
|
probability | <p>Everybody knows the famous <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem">Monty Hall problem</a>; way too much ink has been spilled over it already. Let's take it as a given and consider the following variant of the problem that I thought up this morning.</p>
<p>Suppose Monty has three apples. Two of them have worms in them, and one doesn't. (For the purposes of this problem, let's assume that finding a worm in your apple is an <em>undesirable</em> outcome). He gives three "contestants" one apple each, then he picks one that he knows has a worm in his apple and instructs him to bite into it. The poor contestant does so, finds (half of) a worm in it, and runs off-stage in disgust.</p>
<p>Now consider the situations of the two remaining contestants. Each one has a classical Monty Hall problem facing him. From player A's perspective, one "door" has been "opened" and revealed to have a "goat"; using the same logic as before, he should choose to switch apples with player B.</p>
<p>The paradox is that player B can use the same logic to conclude that he should switch apples with player A. Therefore, each of the two remaining contestants agree that they should switch apples, and they'll both be better off! Of course, this can't be the case. Exactly one of them gets a worm no matter what.</p>
<p>Where is the flaw in the logic? Where does the analogy between this variant of the problem and the classical version break down?</p>
| <p>I completely agree with Henning Makholm: the important difference between this problem and the classic Monty Hall problem is not whether the apples are chosen by the players or assigned to them — in fact, that makes <em>absolutely no difference</em>, since they have no information to base any meaningful choice on at the point where the apples are given to them.</p>
<p>Rather, the key difference is that, in the classic Monty Hall problem, the player knows that Monty will never open the door they choose. Similarly, if one of the players in this problem knew that they wouldn't be asked to bite into the first apple, they'd be better off switching apples with the other remaining player. But of course, if the apples are assigned randomly, it's impossible for more than one of the players to (correctly) possess such knowledge: if two of the players knew they'd never be chosen to go first, and the third one got the wormless apple, Monty would have no way to pick a player with a wormy apple to go first.</p>
<p>Anyway, you don't really have to believe my reasoning above; as with the classic Monty Hall problem, we can simply enumerate the possible outcomes.
Of course, I'm making here a few assumptions which weren't <em>quite</em> explicitly stated by the OP, but which seem like reasonable interpretations of the problem statement and match the classic Monty Hall problem:</p>
<ul>
<li>Each of the players is equally likely to get the wormless apple.</li>
<li>Monty will <em>always</em> choose a player with a wormy apple to go first.</li>
<li>Of the two players with wormy apples, both are equally likely to be chosen to go first.</li>
<li>All the players know all of the above things in advance.</li>
</ul>
<p>Given these assumptions, there are six possible situations the might occur, with equal probability, at the point where the two remaining players are asked to switch:</p>
<ol>
<li>$A$ has the wormless apple, $B$ went first $\to$ $A$ and $C$ remain.</li>
<li>$A$ has the wormless apple, $C$ went first $\to$ $A$ and $B$ remain.</li>
<li>$B$ has the wormless apple, $C$ went first $\to$ $B$ and $A$ remain.</li>
<li>$B$ has the wormless apple, $A$ went first $\to$ $B$ and $C$ remain.</li>
<li>$C$ has the wormless apple, $A$ went first $\to$ $C$ and $B$ remain.</li>
<li>$C$ has the wormless apple, $B$ went first $\to$ $C$ and $A$ remain.</li>
</ol>
<p>From the list above, you can easily count that, for each player, there are four scenarios where they remain, and in two of those they have the wormless apple. So it makes no difference whether they switch or not.</p>
<p>But what if one player, say $A$, knew that they'd never be chosen to go first? Then, if $B$ or $C$ got the wormless apple, Monty would have to choose the other one of them to go first. Thus, scenarios 4 and 5 above become impossible, while 3 and 6 <em>become twice as likely</em>. Thus, if, say, $A$ and $B$ remain, they know that they have to be in scenarios 2 or 3 — and of those, the one in which $B$ has the wormless apple (3) is now twice as likely as the one in which $A$ has it (2), so $A$ should want to switch but $B$ should not.</p>
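<p>For the skeptical reader, the enumeration above is easy to confirm with a short Monte Carlo simulation (my addition; the counting argument of course stands on its own):</p>
<pre><code>import random

def trial(switch):
    apples = ["good", "worm", "worm"]
    random.shuffle(apples)                        # assign apples to players 0, 1, 2
    wormy = [i for i in range(3) if apples[i] == "worm"]
    first = random.choice(wormy)                  # Monty sends a wormy player first
    a, b = [i for i in range(3) if i != first]    # the two remaining players
    if switch:
        apples[a], apples[b] = apples[b], apples[a]
    return apples[a] == "good"                    # did player a end up worm-free?

n = 10**5
for switch in (False, True):
    wins = sum(trial(switch) for _ in range(n))
    print(f"switch={switch}: P(good apple) is about {wins / n:.3f}")   # both about 0.5
</code></pre>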
| <p>I'm not sure Arturo's solution (in comments) is entirely right. It doesn't really matter who's making the random choice, as long as it is indeed random (which we're assuming it to be here).</p>
<p>I would point to the fact that in the game you describe, every player has -- at the time the apples where chosen -- a risk of getting to run from the stage in disgust rather than getting a chance to switch. Since this risk is dependent on whether the apple he was originally assigned contained a worm or not, once a player knows he's <em>not</em> running in disgust, he has information that the player in a classic Monty Hall game doesn't have, and that changes the odds for him.</p>
<p>(From the beginning, the chance that a player has an apple with a worm is 2/3. However, once a player finds himself being given the chance to switch, half of that probability has disappeared into the possibility that he would have been eliminated initially. So the remaining chance of having a worm is now only 1/2 <em>of the initial probability mass</em>, which is the same as the chance of having a good apple.)</p>
<p>This also points to the sometimes underappreciated fact that what is important in Monty Hall-type problems is <em>not</em> simply what has actually happened to you before you make your choice, but also your assumptions about what else <em>might</em> have happened to you but <em>didn't</em>. If the original game had been such that Monty had an option not to offer you a switch at all, the entire standard argument for switching collapses. (Imagine, for example, that Monty were a miser and only offered a switch to contestants that had picked the prize door initially).</p>
|
differentiation | <p>I have been wondering whether the following limit is being used somehow, as a variation of the derivative:</p>
<p>$$\lim_{h\to 0} \frac{f(x+h)-f(x-h)}{2h} .$$</p>
<p><strong>Edit:</strong>
I know that this limit is defined in some places where the derivative is not defined, but it gives us some useful information.</p>
<p>The question <strong>is not</strong> whether this limit is similar to the derivative, but whether it is useful somehow.</p>
<p>Thanks.</p>
| <p>The "symmetric difference" form of the derivative is quite convenient for the purposes of <em>numerical</em> computation; to wit, note that the symmetric difference can be expanded in this way:</p>
<p>$$D_h f(x)=\frac{f(x+h)-f(x-h)}{2h}=f^\prime(x)+\frac{f^{\prime\prime\prime}(x)}{3!}h^2+\frac{f^{(5)}(x)}{5!}h^4+\dots$$</p>
<p>and one thing that should be noted here is that in this series expansion, only <em>even</em> powers of $h$ show up.</p>
<p>Consider the corresponding expansion when $h$ is halved:</p>
<p>$$D_{h/2} f(x)=\frac{f(x+h/2)-f(x-h/2)}{h}=f^\prime(x)+\frac{f^{\prime\prime\prime}(x)}{3!}\left(\frac{h}{2}\right)^2+\frac{f^{(5)}(x)}{5!}\left(\frac{h}{2}\right)^4+\dots$$</p>
<p>One could take a particular linear combination of this half-$h$ expansion and the previous expansion in $h$ such that the term with $h^2$ zeroes out:</p>
<p>$$4D_{h/2} f(x)-D_h f(x)=3f^\prime(x)-\frac{f^{(5)}(x)}{160}h^4+\dots$$</p>
<p>and we have after a division by $3$:</p>
<p>$$\frac{4D_{h/2} f(x)-D_h f(x)}{3}=f^\prime(x)-\frac{f^{(5)}(x)}{480}h^4+\dots$$</p>
<p>Note that the surviving terms after $f^\prime(x)$ are (supposed to be) much smaller than either of the terms after $f^\prime(x)$ in the expansions for $D_h f(x)$ and $D_{h/2} f(x)$. Numerically speaking, one could obtain a slightly more accurate estimate of the derivative by evaluating the symmetric difference at a certain (well-chosen) step size $h$ and at half of the given $h$, and computing the linear combination $\dfrac{4D_{h/2} f(x)-D_h f(x)}{3}$. (This is akin to deriving Simpson's rule from the trapezoidal rule). The procedure generalizes, as one keeps taking appropriate linear combinations of a symmetric difference for some $h$ and the symmetric difference at half $h$ to zero out successive powers of $h^2$; this is the famous <em>Richardson extrapolation</em>.</p>
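<p>Here is a brief numerical illustration in Python (a sketch added here, using $f=\exp$ at $x=1$ as the test case) of the error reduction from one Richardson step applied to the symmetric difference:</p>
<pre><code>import math

f, x, h = math.exp, 1.0, 1e-2
exact = math.exp(1.0)            # f'(1) = e for f = exp

def D(h):
    """Symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

richardson = (4 * D(h / 2) - D(h)) / 3     # one extrapolation step

print(abs(D(h) - exact))         # error of order (e/6) h^2,   about 4.5e-05
print(abs(richardson - exact))   # error of order (e/480) h^4, about 5.7e-11
</code></pre>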
| <p><strong>Lemma</strong>: Let $f$ be a convex function on an open interval $I$. For all $x \in I$,
$$
g(x) = \lim_{h \to 0} \frac{f(x+h) - f(x-h)}{2h}
$$
exists and $f(y) \geq f(x) + g(x) (y-x)$ for all $y \in I$.</p>
<p>In particular, $g$ is a <a href="http://en.wikipedia.org/wiki/Subderivative">subderivative</a> of $f$. </p>
|
logic | <p>The hypothetical relation is <span class="math-container">$z = \mathrm{xor}\left(x,y\right)$</span>, where <span class="math-container">$\mathrm{xor}$</span> may be replaced by any bitwise operator such as AND, OR, NAND, etc. I see that these operations may be defined for integers <a href="https://math.stackexchange.com/questions/37877/is-there-any-mathematical-operation-on-integers-that-yields-the-same-result-as-d">trivially using binary-decimal conversion</a>.</p>
<p>In the same way, can't we perform bitwise arithmetic on real numbers? For example, the following is $\mathrm{xor}\left(1.5, 2.75\right)$:</p>
<pre><code> 01.01
xor 10.11
---------
= 11.10
</code></pre>
<p>The answer is 3.5.</p>
<p>What do the 3D plots of the binary bitwise operators look like, and what are some interesting mathematical properties? (e.g. gradient)</p>
<p>By the way, if you plot any of these using Sage, can I see the code? I couldn't get bitwise operations to work this way.</p>
| <p>Basically, it looks like this:</p>
<p><img src="https://i.sstatic.net/0QYnq.png" alt="3D plot of z = x xor y, producing a Sierpinski tetrahedron"><br>
<sup>(Image rendered in <a href="http://www.povray.org/" rel="noreferrer">POV-Ray</a> by the author, using a recursively constructed mesh, some area lights and lots of anti-aliasing.)</sup></p>
<p>In the picture, the blue square on the $x$-$y$ plane represents the unit square $[0,1]^2$, and the yellow shape is the graph $z = x \oplus y$ over this square, where $\oplus$ denotes bitwise $\rm xor$.</p>
<p>Note that this graph is discontinuous at a dense subset of the plane. In the 3D rendering above, no attempt has been made to accurately portray the precise value of $x \oplus y$ at the points of discontinuity, and indeed, it is not generally uniquely defined. That is because the discontinuities occur at points where $x$ or $y$ is a <a href="//en.wikipedia.org/wiki/Dyadic_rational" rel="noreferrer">dyadic fraction</a>, and therefore has two possible binary expansions (e.g. $\frac12 = 0.100000\dots_2 = 0.011111\dots_2$).</p>
<p>As can be seen from the picture, the graph is self-similar, in the sense that the full graph over $[0,1]^2$ consists of four scaled-down and translated copies of itself. Indeed, this self-similarity is evident from the properties of the $\oplus$ operation, namely that:</p>
<ul>
<li>$\displaystyle \frac x2 \oplus \frac y2 = \frac{x \oplus y}2$, and</li>
<li>$\displaystyle x \oplus \left(y \oplus \frac12\right) = \left(x \oplus \frac12\right) \oplus y = (x \oplus y) \oplus \frac12$.</li>
</ul>
<p>The first property implies that the graph of $x \oplus y$ over the bottom left quarter $[0,1/2]^2$ of the square $[0,1]^2$ is a scaled-down copy of the full graph, while the second property implies that the graphs of $x \oplus y$ in the other quarters are identical to the first quarter, except that the lower right and upper left ones are translated up by $\frac12$.</p>
<p>The resulting fractal shape is also known as the <a href="http://mathworld.wolfram.com/Tetrix.html" rel="noreferrer">Tetrix</a> or the Sierpinski tetrahedron, and is a 3D analogue of the 2-dimensional <a href="//en.wikipedia.org/wiki/Sierpinski_triangle" rel="noreferrer">Sierpinski triangle</a>, which is also closely linked with the $\rm xor$ operation — one way to construct approximations of the Sierpinski triangle is to compute $2^n$ rows of <a href="//en.wikipedia.org/wiki/Pascal%27s_triangle" rel="noreferrer">Pascal's triangle</a> using integer addition modulo $2$, which is equivalent to logical $\rm xor$.</p>
<p>It may be surprising to observe that this fully 3-dimensional fractal shape is indeed (at least approximately, ignoring the pesky multivaluedness issues at the discontinuities) the graph of a function in the $x$-$y$ plane. Yet, when viewed from above, each of the four sub-tetrahedra indeed precisely covers one quarter of the full unit square (and each of the 16 sub-sub-tetrahedra covers one quarter of a quarter, and so on...).</p>
| <p>Here's what XOR looks like. Black is $z=0$; White is $z=1$. The x and y axes run from $[0,1]$.</p>
<p><img src="https://i.sstatic.net/MbmDz.png" alt="XOR"></p>
<p>I used the (naive) formula</p>
<p>$ x \oplus' y = \left(\lfloor 2^{20}x\rfloor \oplus \lfloor 2^{20}y\rfloor\right)2^{-20} $</p>
<p>At least with this naive interpretation, the function does not seem continuous.</p>
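<p>For anyone who wants to reproduce these plots, here is a Python transcription of the naive formula (my addition; <code>bitwise_real</code> is a name I made up, and it assumes non-negative inputs):</p>
<pre><code>from operator import and_, or_, xor

def bitwise_real(x, y, op, bits=20):
    """Apply an integer bitwise operator to non-negative reals,
    truncated to `bits` fractional binary digits, as in the formula above."""
    scale = 2 ** bits
    return op(int(x * scale), int(y * scale)) / scale

print(bitwise_real(1.25, 2.75, xor))    # 3.5   (01.01 xor 10.11 = 11.10)
print(bitwise_real(1.25, 2.75, and_))   # 0.25  (01.01 and 10.11 = 00.01)
print(bitwise_real(1.25, 2.75, or_))    # 3.75  (01.01 or  10.11 = 11.11)
</code></pre>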
<p>EDIT: using analogous formulas, here are </p>
<p>AND:
<img src="https://i.sstatic.net/sSyN2.png" alt="enter image description here"></p>
<p>OR:
<img src="https://i.sstatic.net/88W3i.png" alt="enter image description here"></p>
<p>They all look pretty similar, but the 0:1 ratio difference is noticable.</p>
<p>The XOR looks like a fractal, rather than an addition plot.</p>
|
linear-algebra | <p>I've come across a paper that mentions the fact that matrices commute if and only if they share a common basis of eigenvectors. Where can I find a proof of this statement?</p>
| <p>Suppose that $A$ and $B$ are $n\times n$ matrices, with complex entries say, that commute.<br>
Then we decompose $\mathbb C^n$ as a direct sum of eigenspaces of $A$, say
$\mathbb C^n = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_m}$, where $\lambda_1,\ldots, \lambda_m$ are the eigenvalues of $A$, and $E_{\lambda_i}$ is the eigenspace for $\lambda_i$.
(Here $m \leq n$, but some eigenspaces could be of dimension bigger than one, so we need not have $m = n$.)</p>
<p>Now one sees that since $B$ commutes with $A$, $B$ preserves each of the $E_{\lambda_i}$:
If $A v = \lambda_i v, $ then $A (B v) = (AB)v = (BA)v = B(Av) = B(\lambda_i v) = \lambda_i Bv.$ </p>
<p>Now we consider $B$ restricted to each $E_{\lambda_i}$ separately, and decompose
each $E_{\lambda_i}$ into a sum of eigenspaces for $B$. Putting all these decompositions together, we get a decomposition of $\mathbb C^n$ into a direct sum of spaces, each of which is a simultaneous eigenspace for $A$ and $B$.</p>
<p>NB: I am cheating here, in that $A$ and $B$ may not be diagonalizable (and then the statement of your question is not literally true), but in this case, if you replace "eigenspace" by "generalized eigenspace", the above argument goes through just as well.</p>
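<p>A small NumPy experiment (a sketch added here, with a hand-picked degenerate example) illustrating the key step, namely that $B$ preserves each eigenspace of $A$:</p>
<pre><code>import numpy as np

# A has eigenvalue 1 with a 2-dimensional eigenspace, and eigenvalue 2.
A = np.diag([1.0, 1.0, 2.0])
# This B commutes with A and mixes the 1-eigenspace nontrivially.
B = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
assert np.allclose(A @ B, B @ A)

# B maps the eigenspace span(e1, e2) of A into itself:
for v in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
    w = B @ v
    assert np.allclose(A @ w, 1.0 * w)    # B v is again a 1-eigenvector of A

# Diagonalizing B inside that eigenspace gives simultaneous eigenvectors:
vals, vecs = np.linalg.eigh(B[:2, :2])
print(vals)    # [1. 3.]: eigenvalues of the restricted block
</code></pre>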
| <p>This is false in a sort of trivial way. The identity matrix $I$ commutes with every matrix, and every nonzero vector of the underlying vector space $V$ is an eigenvector of $I$; but no non-central matrix has this property.</p>
<p>What is true is that two matrices which commute and are also <em>diagonalizable</em> are <a href="http://en.wikipedia.org/wiki/Diagonalizable_matrix#Simultaneous_diagonalization">simultaneously diagonalizable</a>. The proof is particularly simple if at least one of the two matrices has distinct eigenvalues.</p>
|
combinatorics | <p>In an exam with <span class="math-container">$12$</span> yes/no questions with <span class="math-container">$8$</span> correct needed to pass, is it better to answer randomly or answer exactly <span class="math-container">$6$</span> times yes and 6 times no, given that the answer 'yes' is correct for exactly <span class="math-container">$6$</span> questions?</p>
<p>I have calculated the probability of passing by guessing randomly and it is</p>
<p><span class="math-container">$$\sum_{k=8}^{12} {{12}\choose{k}}0.5^k0.5^{n-k}=0.194$$</span></p>
<p>Now given that the answer 'yes' is right exactly <span class="math-container">$6$</span> times, is it better to guess 'yes' and 'no' <span class="math-container">$6$</span> times each? </p>
<p>My idea is that it can be modelled by drawing balls without replacement. The balls we draw are the correct answers to the questions.</p>
<p>Looking at the first question, we still know that there are <span class="math-container">$6$</span> yes and no's that are correct. The chance that a yes is right is <span class="math-container">$\frac{6}{12}$</span> and the chance that a no is right is also <span class="math-container">$\frac{6}{12}$</span>. </p>
<p>Of course the probability in the next question depends on what the first right answer was. If yes was right, yes will be right with a probability of <span class="math-container">$5/11$</span> and a no is right with the chance <span class="math-container">$6/11$</span>. If no was right, the probabilities would change places.</p>
<p>Now that we have to make the choice <span class="math-container">$12$</span> times and make the distinction which one was right, we get <span class="math-container">$2^{12}$</span> paths total. We cannot know what the correct answers to the previous questions were. So we are drawing <span class="math-container">$12$</span> balls at once, but from what urn? It cannot contain <span class="math-container">$24$</span> balls with <span class="math-container">$12$</span> yes and <span class="math-container">$12$</span> no's. Is this model even correct?</p>
<p>Is there a more elegant way to approach that?</p>
<p>I am asking for hints, not solutions, as I'm feeling stuck. Thank you.</p>
<hr>
<p><strong>Edit</strong>: After giving @David K's answer more thought, I noticed that the question can be described by the <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="noreferrer">hypergeometric distribution</a>, which yields the desired result.</p>
| <p>We are given the fact that there are $12$ questions, that $6$ have the correct answer "yes" and $6$ have the correct answer "no."</p>
<p>There are $\binom{12}{6} = 924$ different sequences of $6$ "yes" answers and $6$ "no" answers.
If we know nothing that will give us a better chance of answering any question
correctly than sheer luck, the most reasonable assumption is that every possible sequence of answers is equally likely, that is, each one has
$\frac{1}{924}$ chance to occur.</p>
<p>So guess "yes" $6$ times and "no" $6$ times. I do not care how you do that:
you may guess "yes" for the first $6$, or flip a coin and answer "yes" for heads and "no" for tails until you have used up either the $6$ "yeses" or the $6$ "noes" and the rest of your answers are forced, or you can put $6$ balls labeled "yes" and $6$ labeled "no" in an urn, draw them one at a time, and answer the questions in that sequence.</p>
<p>No matter <em>what</em> you do, you end up with some sequence of "yes" $6$ times and "no" $6$ times. You get $12$ correct if and only if the sequence of correct answers is exactly the same as your sequence.
That probability is $\frac{1}{924}.$</p>
<p>There is no way for you to get $11$ correct. You get $10$ correct if and only if the correct answers are "yes" on $5$ of your "yes" answers and "no" on your remaining "yes" answer.
The number of ways this can happen is the number of ways to choose $5$ correct answers from your $6$ "yes" answers, times the number of ways to choose $5$ correct answers from your $6$ "no" answers:
$\binom 65 \times \binom 65 = 36.$</p>
<p>There is no way for you to get $9$ correct. You get $8$ correct if and only if the correct answers are "yes" on $4$ of your "yes" answers and "no" on your other "yes" answers.
The number of ways this can happen is the number of ways to choose $4$ correct answers from your $6$ "yes" answers, times the number of ways to choose $4$ correct answers from your $6$ "no" answers:
$\binom 64 \times \binom 64 = 225.$</p>
<p>In any other case you fail. So the chance to pass is
$$
\frac{1 + 36 + 225}{924} = \frac{131}{462} \approx 0.283550,
$$
which is much better than the chance of passing if you simply toss a coin for each individual question
but not nearly as good as getting $4$ or more heads in $6$ coin tosses.</p>
<hr>
<p>Just to check, we can compute the chance of failing in the same way:
$6$ answers correct ($3$ "yes" and $3$ "no"), $4$ answers correct,
$2$ correct, $0$ correct. This probability comes to
$$
\frac{\binom 63^2 + \binom 62^2 + \binom 61^2 + 1}{924}
= \frac{400 + 225 + 36 + 1}{924}
= \frac{331}{462} \approx 0.716450,
$$
which is the value needed to confirm the answer above.</p>
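<p>(A quick machine check of these fractions — a sketch using only Python's standard library; as noted in the question's edit, <code>scipy.stats.hypergeom</code> gives the same numbers.)</p>

<pre><code>from fractions import Fraction
from math import comb

total = comb(12, 6)                                     # 924 equally likely answer keys
p_pass = Fraction(sum(comb(6, k)**2 for k in (4, 5, 6)), total)
p_fail = Fraction(sum(comb(6, k)**2 for k in (0, 1, 2, 3)), total)
print(p_pass, float(p_pass))    # 131/462 ≈ 0.283550
print(p_pass + p_fail)          # 1
</code></pre>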
| <p>In terms of balls and urns: Maybe it helps to think about it as follows:</p>
<p>You have a red urn and a blue urn, and you have $6$ red balls and $6$ blue balls. You randomly put $6$ of the twelve balls in the red urn, and the other $6$ in the blue urn. Now: what it the chance that at least $8$ balls are in the 'right' (i.e. same colored) urn? </p>
<p>Well, to get $8$ correct, you either need to get all $6$ red balls in the red urn ($1$ possibility), or $5$ red ones and $1$ blue in the red urn (${6 \choose 5} \cdot {6 \choose 1} = 6 \cdot 6 = 36$ possibilities), or $4$ red ones and $2$ blue ones (${6 \choose 4} \cdot {6 \choose 2} = 15 \cdot 15 = 225$ possibilities). This is out of a total of ${12 \choose 6} = 924$ possibilities, and so the probability is $\frac{1+36+225}{924}$</p>
<p>NOTE: Thanks to @DavidK for pointing out my initial answer was wrong! Everyone please upvote his answer!</p>
|
geometry | <p>This is a claim one of my students made without justification on his exam. It definitely wasn't the right way to approach the problem, but now I've been nerdsniped into trying to figure out if it is true.</p>
<blockquote>
<p>Let $a_i$ be a sequence of positive reals such that $\sum a_i^2 = \infty$. Then $[0,1]^2$ can be covered by translates of the squares $[0,a_i]^2$. </p>
</blockquote>
<p>It is definitely not enough that $\sum a_i^2>1$: You can't cover a unit square with $5$ axis-aligned squares of sidelength $1/\sqrt{5}$. </p>
| <p>We actually only need $\sum a_i^2 >4$ for this to hold*. In particular, note that this condition implies that there is some <em>finite</em> set $I$ of the indexing subset such that $\sum_{i\in I}a_i^2>4$.</p>
<p>Let $b_i$ be the largest power of two less than or equal to $a_i$ for each $i\in I$. Clearly, $2b_i>a_i$. In particular, $\sum_{i\in I}b_i^2>1$. Now, we can inductively construct a cover of $[0,1]^2$ by squares of side length $b_i$ for $i\in I$. To be specific, for each $k$, let $c_k$ be the number of $i\in I$ such that $b_i=2^{-k}$. We are equivalently trying to cover $[0,1]^2$ by a collection of squares whose side lengths are powers of two, with $c_k$ copies of $[0,2^{-k}]^2$.</p>
<p>However, this is easy: Let $G_k$ be the partition** of $[0,1]^2$ into translates of $[0,2^{-k}]^2$ in the obvious way (i.e. by slicing into $2^k$ rows and columns). Note that $G_{k+1}$ is always a refinement** of $G_k$. Then, we can place the squares from biggest to smallest; we greedily place as many of the biggest ($c_1$) squares into cells of $G_1$ as possible. Then, if the square is not full, we put as many of the $c_2$ squares into the uncovered cells of $G_2$ as possible - and so on. Since each partition is a refinement of the last, we can always place a new square without overlapping a previous square - unless everything is already full. This process must terminate eventually, since we only have finitely many squares. However, since their areas sum to more than one, and they are being placed without overlap, they must actually cover the square. (A small sketch of this greedy placement appears after the footnotes below.)</p>
<p>(*This inequality actually doesn't need to be strict; if you trace through the argument, the fact that the inequality $2b_i>a_i$ is strict lets us weaken the inequality for the sum)</p>
<p>(**This is all up to the boundaries of the cells, which might intersect. But since we're dealing with a finite set of squares, the union is closed, so measure zero sets don't matter anyways)</p>
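<p>(If it helps to see the greedy placement concretely, here is a small sketch in Python — the function name and the bookkeeping are mine, not part of the proof. It only tracks <em>how many</em> grid cells of each level remain free, which is all the argument needs.)</p>

<pre><code>def covers_unit_square(c):
    """c[k] = number of squares of side 2^{-(k+1)}; greedy dyadic placement."""
    free = 1                  # free cells at the current level (start: the whole square)
    for count in c:
        free *= 4             # refine the grid: each free cell splits into 4 smaller cells
        placed = min(count, free)
        free -= placed
        if free == 0:
            return True       # every cell is filled, so [0,1]^2 is covered
    return False

# The example from the pictures: total area 2/4 + 5/16 + 9/64 + 12/256 = 1,
# and the squares exactly tile the unit square.
print(covers_unit_square([2, 5, 9, 12]))   # True
</code></pre>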
<hr>
<p>Here's an illustration of the process, where the squares are assumed to have powers of two as side lengths, $c_1=2$ and $c_2=5$ and $c_3=9$ and $c_4=12$. Observe that there's really nothing that could possibly go wrong:</p>
<p><a href="https://i.sstatic.net/b2Ni9.png" rel="noreferrer"><img src="https://i.sstatic.net/b2Ni9.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/YRG94.png" rel="noreferrer"><img src="https://i.sstatic.net/YRG94.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/LYG1X.png" rel="noreferrer"><img src="https://i.sstatic.net/LYG1X.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/Jco0y.png" rel="noreferrer"><img src="https://i.sstatic.net/Jco0y.png" alt="enter image description here"></a></p>
| <p>An answer exploiting the idea of <a href="https://math.stackexchange.com/users/10400/will-jagy">Will Jagy</a> outlined in the comments.</p>
<p>If there is some $a_n\geq 1$ there is nothing to prove, so we may assume $0< a_n <1$. There is nothing to prove also if $\limsup a_n>0$, so we may assume $\lim_{n\to +\infty}a_n=0$.</p>
<p>Assuming $\sum_{n\geq 1}a_n<+\infty$ we would have $\sum_{n\geq 1} a_n^2<+\infty$, so $\sum_{n\geq 1}a_n$ is divergent. We may say that some finite subset $S\subseteq \mathbb{N}^+$ is a <em>good subset</em> if $\sum_{s\in S}a_s\geq 1$ and decide that the <em>rank</em> of a good subset is $\min_{s\in S}a_s$. We may pick the good subset with the highest rank (there is a highest rank since $\lim_{n\to +\infty}a_n=0$) for covering a strip as tall as the rank of such subset, remove these elements from the original sequence, re-index it and repeat.</p>
<p>It is not difficult to show that the set of strips built by this $\min\max$ greedy algorithm is a cover of the whole square, i.e. that the sum of the ranks exceeds one at some point. Indeed, assuming that infinitely many strips are produced without ever covering the whole square, the divergence of $\sum_{n\geq 1}a_n^2$ leads to a contradiction.</p>
<hr>
<p>This introduces a horribly difficult problem:</p>
<blockquote>
<p>What is the infimum of $r\in\mathbb{R}^+$ such that for every sequence $\{a_n\}_{n\geq 1}$ with $a_n\in(0,1)$ and $\sum_{n\geq 1}a_n^2\geq r$, it is possible to cover $[0,1]^2$ with translates of $[0,a_1]^2,[0,a_2]^2,[0,a_3]^2,\ldots$?</p>
</blockquote>
|
logic | <p>If "This is a lie" were a true statement, its fulfilled claim of being a lie implies it can't be true, leading to a contradiction. If it were false, it could not be a lie and thus had to be true, again leading to a contradiction. So with this argumentation the statement "This is a lie" can be neither true nor false and therefore binary logic is not enough to treat logic.</p>
<p>Is this argumentation correct?</p>
| <p>The problem lies in your interpretation of the sentence. If you want to apply logic to it, you first need to reformulate it in the language of logic. There are many different ways to do that, but in the majority of "usual" methods a curious thing happens: for a sentence to talk about itself you need a set that contains the sentence, which in turn uses the set again -- you are unable to find a formal representation of it. This is closely related to Russell's paradox and his proof that there's no set of all sets, e.g. there is no set of all sentences (if they can refer to this set). </p>
<p>Of course, there are some "unusual" structures, e.g. you could take sets defined using the bisimilarity relation, where $x = \{1, x\}$ would be a proper definition, although this is far beyond binary logic.</p>
<p>Concluding: the simple binary logic is not contradictory, but it is too weak to express this sentence.</p>
<p>On a happier note: this kind of thing happens all the time, and even innocent fairy-tales fall for it, e.g. consider what would happen if Pinocchio had said "my nose will grow"... (Has anyone ever tried this on their native language teacher?)</p>
| <p>No, this just shows that logic should not allow arbitrary sentences and ascribe a truth value to all of them. Completely nonsensical utterances clearly fall under the forbidden category. But sentences like this classical liar paradox which mix levels cannot be allowed either. If you want to know more interesting forms of the liar paradox and applications of them, I advise you to read Hofstadter's "Gödel, Escher, Bach", where such matters are discussed at great length. </p>
|
probability | <p>In a probability course, a game was introduced which a logical approach won't yield a strategy for winning, but a probabilistic one will. My problem is that I don't remember the details (the rules of the game)! I would be thankful if anyone can complete the description of the game. I give the outline of the game, below.</p>
<p>Some person (A) hides a 100 or 200 dollar bill, and asks another one (B) to guess which one is hidden. If B's guess is correct, something happens and if not, something else (this is what I don't remember). The strange point is, B can think of a strategy so that always ends to a positive amount, but now A can deduce that B will use this strategy, and finds a strategy to overcome B. Now B knows A's strategy, and will uses another strategy, and so on. So, before even playing the game for once, there is an infinite chain of strategies which A and B choose successively!</p>
<p>Can you complete the story? I mean, what happens when B's guess correct and incorrect?</p>
<p>Thanks.</p>
| <p>In the <a href="http://blog.plover.com/math/envelope.html" rel="noreferrer">Envelope Paradox</a> player 1 writes any two different numbers $a< b$ on two slips of paper. Then player 2 draws one of the two slips each with probability $\frac 12$, looks at its number $x$, and predicts whether $x$ is the larger or the smaller of the two numbers.</p>
<p>It appears at first that no strategy by player 2 can achieve a success rate greater than $\frac 12$. But there is in fact a strategy that will do this.</p>
<p>The strategy is as follows: Player 2 should first select some probability distribution $D$ which is positive everywhere on the real line. (A normal distribution will suffice.) She should then select a number $y$ at random according to distribution $D$. That is, her selection $y$ should lie in the interval $I$ with probability exactly $$\int_I D(x)\; dx.$$ General methods for doing this are straightforward; <a href="https://en.wikipedia.org/wiki/Box-Muller_transform" rel="noreferrer">methods for doing this</a> when $D$ is a normal distribution are well-studied.</p>
<p>Player 2 now draws a slip at random; let the number on it be $x$.
If $x>y$, player 2 should predict that $x$ is the larger number $b$; if $x<y$ she should predict that $x$ is the smaller number $a$. ($y=x$ occurs with probability 0 and can be disregarded, but if you insist, then player 2 can flip a coin in this case without affecting the expectation of the strategy.)</p>
<p>There are six possible situations, depending on whether the selected slip $x$ is actually the smaller number $a$ or the larger number $b$, and whether the random number $y$ selected by player 2 is less than both $a$ and $b$, greater than both $a$ and $b$, or in between $a$ and $b$.</p>
<p>The table below shows the prediction made by player 2 in each of the six cases; this prediction does not depend on whether $x=a$ or $x=b$, only on the result of her comparison of $x$ and $y$: </p>
<p>$$\begin{array}{r|cc}
& x=a & x=b \\ \hline
y < a & x=b & \color{blue}{x=b} \\
a<y<b & \color{blue}{x=a} & \color{blue}{x=b} \\
b<y & \color{blue}{x=a} & x=a
\end{array}
$$</p>
<p>For example, the upper-left entry says that when player 2 draws the smaller of the two numbers, so that $x=a$, and selects a random number $y<a$, she compares $y$ with $x$, sees that $y$ is smaller than $x$, and so predicts that $x$ is the larger of the two numbers, that $x=b$. In this case she is mistaken. Items in blue text are <em>correct</em> predictions.</p>
<p>In the first and third rows, player 2 achieves a success with probability $\frac 12$. In the middle row, player 2's prediction is always correct. Player 2's total probability of a successful prediction is therefore
$$
\frac12 \Pr(y < a) + \Pr(a < y < b) + \frac12\Pr(b<y) = \\
\frac12(\color{maroon}{\Pr(y<a) + \Pr(a < y < b) + \Pr(b<y)}) + \frac12\Pr(a<y<b) = \\
\frac12\cdot \color{maroon}{1}+ \frac12\Pr(a<y<b)
$$</p>
<p>Since $D$ was chosen to be everywhere positive, player 2's probability $$\Pr(a < y< b) = \int_a^b D(x)\;dx$$ of selecting $y$ between $a$ and $b$ is <em>strictly</em> greater than $0$ and her probability of making a correct prediction is <em>strictly</em> greater than $\frac12$ by half this strictly positive amount.</p>
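<p>(Before turning to player 1's side, here is a quick Monte Carlo sketch of the strategy in Python. The standard normal below is just one admissible choice of $D$, and the particular numbers $a=0.3$, $b=0.7$ are arbitrary.)</p>

<pre><code>import random

def trial(a, b):
    x = random.choice((a, b))       # player 2 draws one of the two slips
    y = random.gauss(0.0, 1.0)      # her random threshold, y ~ D = N(0, 1)
    predict_larger = x > y          # predict "x = b" exactly when x > y
    return predict_larger == (x == b)

n = 10**6
print(sum(trial(0.3, 0.7) for _ in range(n)) / n)   # ≈ 0.57, strictly above 1/2
</code></pre>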
<p>This analysis points toward player 1's strategy, if he wants to minimize player 2's chance of success. If player 2 uses a distribution $D$ which is identically zero on some interval $I$, and player 1 knows this, then player 1 can reduce player 2's success rate to exactly $\frac12$ by always choosing $a$ and $b$ in this interval. If player 2's distribution is everywhere positive, player 1 cannot do this, even if he knows $D$. But player 2's distribution $D(x)$ must necessarily approach zero as $x$ becomes very large. Since Player 2's edge over $\frac12$ is $\frac12\Pr(a<y<b)$ for $y$ chosen from distribution $D$, player 1 can bound player 2's chance of success to less than $\frac12 + \epsilon$ for any given positive $\epsilon$, by choosing $a$ and $b$ sufficiently large and close together. And even if player 1 doesn't know $D$, he should <em>still</em> choose $a$ and $b$ very large and close together. </p>
<p>I have heard this paradox attributed to Feller, but I'm afraid I don't have a reference.</p>
<p>[ Addendum 2014-06-04: <a href="https://math.stackexchange.com/q/709984/25554">I asked here for a reference</a>, and was answered: the source is <a href="https://en.wikipedia.org/wiki/Thomas_M._Cover" rel="noreferrer">Thomas M. Cover</a> “<a href="http://www-isl.stanford.edu/~cover/papers/paper73.pdf" rel="noreferrer">Pick the largest number</a>”<em>Open Problems in Communication and Computation</em> Springer-Verlag, 1987, p152. ] </p>
| <p>I know this is a late answer, but I'm pretty sure I know what game OP is thinking of (and none of the other answers have it right).</p>
<p>The way it works is person A chooses to hide either $100$ or $200$ dollars in an envelope, and person B has to guess the amount that person A hid. If person B guesses correctly they win the money in the envelope, but if they guess incorrectly they win nothing.</p>
<p>If person A uses the predictable strategy of putting $100$ dollars in the envelope every time, then person B can win $100$ dollars every time by guessing $100$ correctly.</p>
<p>If person A instead chooses to randomly put either $100$ or $200$, then person B can guess $200$ every time--he'll win half the time, so again will win $100$ dollars per game on average.</p>
<p>But a third, better option for A is to randomly put $100$ in the envelope with probability $2/3$, and $200$ dollars in with probability $1/3$. If person B guesses $100$, he has a $2/3$ chance of being right so his expected winnings are $\$66.67$. If person B guesses $200$, he has a $1/3$ chance of being right so his expected winnings are again $\$66.67$. No matter what B does, this strategy guarantees that he will win only $\$66.67$ on average.</p>
<p>Looking at person B's strategy, he can do something similar. If he guesses $100$ with probability $2/3$ and $200$ with probability $1/3$, then no matter what strategy person A uses he wins an average of $\$66.67$. These strategies for A and B are the <a href="https://en.wikipedia.org/wiki/Nash_equilibrium" rel="noreferrer">Nash Equilibrium</a> for this game, the set of strategies where neither person can improve their expected winnings by changing their strategy.</p>
<p>The "infinite chain of strategies" you mention comes in if either A or B start with a strategy that isn't in the Nash equilibrium. Suppose A decides to put $100$ dollars in the envelope every time. The B's best strategy is of course to guess $100$ every time. But given that, A's best strategy is clearly to put $200$ dollars in the envelope every time, at which point B should change to guessing $200$ every time, and so on. In a Nash equilibrium, though, neither player gains any advantage by modifying their strategy so such an infinite progression doesn't occur.</p>
<p>The interesting thing about this game is that although the game is entirely deterministic, the best strategies involve randomness. This actually turns out to be true in most deterministic games where the goal is to predict what your opponent will do. Another common example is <a href="https://en.wikipedia.org/wiki/Rock-paper-scissors" rel="noreferrer">rock-paper-scissors</a>, where the equilibrium strategy is unsurprisingly to choose between rock, paper, and scissors with equal probability.</p>
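<p>(For the skeptical, the equilibrium value is easy to tabulate — a sketch with exact fractions; the function name is mine.)</p>

<pre><code>from fractions import Fraction

def b_expected_winnings(p_hide_100, p_guess_100):
    # B wins the envelope exactly when the guess matches the hidden amount.
    return (p_hide_100 * p_guess_100 * 100
            + (1 - p_hide_100) * (1 - p_guess_100) * 200)

p = Fraction(2, 3)                            # A hides $100 with probability 2/3
for q in (Fraction(0), Fraction(1, 2), Fraction(1)):
    print(q, b_expected_winnings(p, q))       # always 200/3 ≈ $66.67
</code></pre>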
|
game-theory | <p>Two persons have 2 uniform sticks with equal length which can be cut at any point. Each person will cut the stick into $n$ parts ($n$ is an odd number). And each person's $n$ parts will be permuted randomly, and be compared with the other person's sticks one by one. When one's stick is longer than the other person's, he will get one point. The person with more points will win the game. How to maximize the probability of winning the game for one of the person. What is the best strategy to cut the stick.</p>
| <p>When intuition doesn't help, try brute force. </p>
<pre><code>(* 50 random cuttings of the stick into 7 pieces, each normalized to total length 1 *)
trials = Table[With[{k = RandomReal[{0, 1}, {7}]}, k/Total[k]], {50}];
(* for each cutting, sum over all opponents the net count of longer pieces, then sort by score *)
Column[Sort[Transpose[{Total /@ Table[Total[Sign[trials[[a]] - trials[[b]]]], {a, 1, 50}, {b, 1, 50}], trials}]]]</code></pre>
<p>Now we can look at some best/worst performers.</p>
<p>{-55, {0.018611, 0.0405574, 0.032157, 0.333219, 0.311017, 0.17885, 0.0855879}},<br>
{-39, {0.313092, 0.178326, 0.0454452, 0.064321, 0.228231, 0.0907115,0.0798742}}, </p>
<p>{29, {0.0360088, 0.220396, 0.13208, 0.145903, 0.180813, 0.240151, 0.044648}},<br>
{29, {0.0799122, 0.0547817, 0.234127, 0.119589, 0.0290167, 0.255561,0.227013}},<br>
{33, {0.0541814, 0.216338, 0.0619272, 0.204252, 0.0254828, 0.225743,0.212075}}</p>
<p>So, in the case of 7 pieces, the best strategy seems to be to give 3 tiny pieces and 4 larger pieces. With some bigger trials, 2 tiny pieces and 5 larger pieces seemed best. But this is a competitive game ... so I selected the top 10% of a larger trial, and then had those guys compete. This time the winner had (approximately) three values near 2/7 and one near 1/7:
{0.0263105, 0.0848388, 0.228165, 0.249932, 0.0788568, 0.215831, 0.116066}</p>
<p>Your iterations may behave differently.</p>
| <p>Here's a (partial) answer to the setting where the number of sticks is 3 (i.e. $n = 3$). With some effort, one can show the following claims: </p>
<p>Given the strategy $(a,a,b)$ where $a \ge b$ (i.e. you break your stick into two equal parts of size $a$ and one smaller part of size $b = 1-2a$), the optimal value for $a$ is $\frac13$. That is, breaking your stick into three equal parts maximizes the number of strategies you beat. </p>
<p>Another similar claim: given the strategy $(a,b,0)$ (i.e. break your stick into three parts - one of length $a$, one of length $b = 1-a\le a$ and one of length 0), the optimal value of $a$ is $a = \frac12$.</p>
<p>Generally speaking, this is proven by drawing the space of strategies as an equilateral triangle in barycentric coordinates, dividing that triangle into spaces of strategies (triangles, trapezoids and such), and seeing what's the likelihood of your given strategy beating another strategy in that space. </p>
<p>For example, the strategy $(\frac13,\frac13,\frac13)$ beats any strategy that has two sticks of length $<\frac13$, and loses to all other strategies (there are some strategies with one stick of length $\frac13$, but these have measure 0 so are ignored) at all times. Two thirds of the strategies have two sticks of length $< \frac13$, so it beats strategies $2/3$ of the time.</p>
<p>Alternatively, for $(\frac12,\frac12,0)$, it surely beats any strategy that has two sticks of length $<\frac12$, and beats any strategy with one stick of length $>\frac12$ and two with length $< \frac12$ w.p. $\frac13$. So in total its expected winning probability is $\frac14\cdot 1 + 3\cdot\frac14\cdot \frac13 = \frac12$. (Take an equilateral triangle and divide it into 4 equal equilateral triangles.)</p>
<p>I'll be happy to write a formal proof of this, but now I'm a bit busy :)</p>
<p>My thought is that in general it would be hard to prove anything about this setting for $n \ge 5$, but I may be wrong...</p>
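<p>(In the meantime, a Monte Carlo sketch in Python backs up the two claims above. Sampling a uniformly random cutting via two order statistics is standard; the rest of the scaffolding here is mine.)</p>

<pre><code>import random

def random_cut():
    u, v = sorted((random.random(), random.random()))
    return [u, v - u, 1 - v]          # uniform random point on the simplex

def win_rate(mine, n=200_000):
    wins = 0
    for _ in range(n):
        theirs = random_cut()
        random.shuffle(theirs)        # random one-to-one matching of pieces
        points = sum(a > b for a, b in zip(mine, theirs))
        wins += points >= 2
    return wins / n

print(win_rate([1/3, 1/3, 1/3]))      # ≈ 2/3
print(win_rate([1/2, 1/2, 0]))        # ≈ 1/2
</code></pre>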
|
game-theory | <p>Let's say you have an arbitrary length of time. You are playing a game in which you want to push a button during this time span after a light comes on. If you do so, you win; if not, you lose. You can't see the light; you can only guess a random time to push the button. If you are playing all by yourself, the obvious way to guarantee winning is to push the button at the last possible instant, so that you will be after the light coming on no matter what.</p>
<p>Now let's say you are playing against another player. You both have the same goal: push your button after the light comes on but before the other player pushes their button. I'm sure the best solution in this case would be to push the button in the exact middle of the time range if the other player is pushing at a random time, but it could be that there is no optimal solution if both players are using a coherent strategy. This is complicated by the fact that one can lose either being first or third in the sequence of three events, and in some cases by being second, if the light is third. Also, both players can lose; you don't win by being closer if you're too early.</p>
<p>If there <strong>is</strong> an optimal solution for the second case, can that be generalized in any way? For a length of time <em>t</em> and a random number of players <em>p</em> is there a moment <em>x</em> that would give the best chance of being the winner? This is somewhat like the way bidding works on <em>The Price is Right</em>, except there is no real way to judge the "value", or correct time, to bid.</p>
<p><strong>EDIT:</strong> Let's also take this one level higher. Consider playing multiple rounds of the game. You are allowed one button push per game, so the total number of rounds and pushes permitted are the same, but you can distribute your pushes however you like, anywhere from one per round to all in the same round. Is there an optimum mix at that level? Remember again that you wouldn't know when or if the other player was playing in any particular round, you would only know if you had won or lost that round, and it is possible to both win and lose whether or not the other player plays.</p>
| <p>The big assumptions I'll make in my answer are (1) that all distributions are absolutely continuous (with respect to Lebesgue) everywhere, so we can employ probability densities, and (2) that the supports of both players' strategies are the same and of the form $[a,b]$. Maybe someone can extend this answer by relaxing these assumptions.</p>
<p>Let $F$, $G_1$ and $G_2$ be the cdfs of the light and of players 1 and 2's arrival times. Let $f$, $g_1$, and $g_2$ be the corresponding densities.</p>
<p>Player 1's expectation of winning is
$$
P = \int_0^{\infty} I(t) \ d G_1(t)
$$
where
$$
I(t) = F(t)(1-G_2(t)) + \int_0^t G_2(t') dF(t').
$$
The first term in the integrand corresponds to $\mathbb{P}(light \le 1 < 2)$. The second is $\mathbb{P}(2 \le light \le 1)$.</p>
<p>Player 1 chooses $G_1$ to maximize his payoff. That means he chooses $G_1$ with its support where the integrand $I(t)$ of $P$ above is maximized. If $G_1$'s support is $[a,b]$, that means that
\begin{eqnarray}
(1)\ \ I(t) \ \mbox{ is a constant on $[a,b]$} \\
(2)\ \ I([a,b])\ge I(t) \mbox{ for any $t\ge 0$}.
\end{eqnarray}
Differentiate, and we end up with the following boundary value problem:
\begin{eqnarray}
0 &=& f(t) (1-G_2(t)) - F(t) G_2'(t) + G_2(t) f(t) \\
G_2(a) &=& 0 \\
G_2(b) &=& 1.
\end{eqnarray}
This simplifies to
$$
f(t) = F(t) G_2'(t).
$$
The BVP is easily solved:
$$
(3)\ \ \ G_2(t) = \log\left(\frac{F(t)}{F(a)}\right) \ \ \mbox{ for $t\in[a,b]$},
$$
but we require $G_2(b) = 1$, so we must have $F(b) = e F(a)$.</p>
<p>Let $T$ be the end of the support of $F$, i.e., (informally) the last time the light might come on. We might have $T=+\infty$. Note that
\begin{eqnarray}
I(b) &=& \int_0^b G_2(t') dF(t') \\
I(T) &=& \int_0^T G_2(t') dF(t')
\end{eqnarray}
If $b<T$, then the second integral strictly exceeds the first, violating $(2)$ above. Thus, $b\ge T$. But by $(3)$, $G_2(t)$ is flat for $t>T$, so in fact, $b=T$.</p>
<p>In summary: choose $a$ such that $F(a) = e^{-1}$ and choose $G_2$ according to $(3)$. $G_1$ is the same.</p>
<p>Note in passing that I did <em>not</em> assume the players used the same strategy, just that the supports are the same.</p>
<p>Now I've done the easy part. How to deal with the possibility of different supports, supports that aren't intervals, non-absolutely continuous distributions, etc., I'll leave to someone else!</p>
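<p>(A Monte Carlo sketch, assuming $T=1$ and a uniform light: then $F(t)=t$, so $a=1/e$ and $G_2(t)=1+\log t$ on $[1/e,1]$, which can be sampled by inverting the CDF as $t=e^{u-1}$ with $u$ uniform. The constant value of $I$ on the support is $F(a)=1/e$, and the simulation agrees.)</p>

<pre><code>import math, random

def player1_wins():
    lam = random.random()                 # light flash ~ U(0, 1)
    t1 = math.exp(random.random() - 1)    # player 1's time: CDF 1 + log(t) on [1/e, 1]
    t2 = math.exp(random.random() - 1)    # player 2's time, same distribution
    # player 1 wins if he pushes after the light and either pushes strictly
    # before player 2, or player 2 has already pushed too early (before the light)
    return lam <= t1 and (t1 < t2 or t2 <= lam)

n = 10**6
print(sum(player1_wins() for _ in range(n)) / n)   # ≈ 1/e ≈ 0.368
</code></pre>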
| <p>We assume that the set of <em>common knowledge</em> of the game (each player knows it, and each player knows that the other player knows etc), is (skipping formalities):</p>
<p>$1)$ The rules of the game. These rules include that if the button is pushed after $T$ the player loses (only implicitly assumed in the OP's question).<br>
$2)$ That both players are "rational" meaning in our case that they prefer winning to losing, and that they won't adopt strictly dominated strategies.<br>
$3)$ That the "length of time" is finite, $[0,T]$.<br>
$4)$ That the distribution of the timing of the light flash $\lambda$, is $G(\lambda)$, be it Uniform or other.<br>
$5)$ That both players follow the principle of insufficient reason, whenever the need arises. This means that when no relevant information is available, chance is modelled as a uniform random variable. (Note: we need to <em>assume</em> this because, although the principle of insufficient reason is intuitively appealing, philosophical and epistemological battles still rage over it, and so "rationality" alone is <em>not</em> sufficient to argue that the PIR will be followed.) </p>
<p>Denote $t_1$ the time-choice of player $1$ and $t_2$ the time-choice of player 2. Both $t_1$ and $t_2$ range in $[0,T]$. They cannot range below $0$ because it is impossible, and they don't range above $T$ because this is a strictly dominated strategy. </p>
<p>If we are player $1$, then $t_1$ is a decision variable, while, $\lambda$ and $t_2$ are random variables. The probability of us winnig is</p>
<p>$$P(\text {player 1 wins}) = P(\lambda \le t_1, t_2 > t_1) $$</p>
<p>Now, from the point of view of player 1, $\lambda$ and $t_2$ are independent random variables: if player $1$ "knows" that $\lambda = \bar \lambda$, this won't affect how he views the distribution of $t_2$. So</p>
<p>$$P(\text {player 1 wins}) = P(\lambda \le t_1) \cdot P(t_2 > t_1) = G(t_1)\cdot [1-F_2(t_1)]$$</p>
<p>where $F_2()$ is the distribution function of $t_2$. Player $1$ wants to maximize this probability over the choice of $t_1$:</p>
<p>$$\max_{t_1} P(\text {Player 1 wins})= G(t_1)\cdot [1-F_2(t_1)] $$</p>
<p>First order condition is </p>
<p>$$\frac {\partial}{\partial t_1} P(\text {Player 1 wins}) =0 \Rightarrow g(t_1^*)\cdot [1-F_2(t_1^*)] - G(t_1^*)f_2(t_1^*) =0 \qquad [1]$$</p>
<p>where lower case letters denote the corresponding density functions (which we assume they exist).</p>
<p>The <em>second-order</em> condition (because we have to make sure that this is a maximum), is </p>
<p>$$\frac {\partial^2}{\partial t_1^2} P(\text {Player 1 wins})|_{t^*_1} <0 \Rightarrow \\ g'(t^*_1)\cdot [1-F_2(t^*_1)] - 2g(t^*_1)f_2(t^*_1) - G(t^*_1)f'_2(t^*_1) <0 \qquad [2]$$</p>
<p>Now, since we have no other information on the timing of the light-flash, except its range, then by our assumptions regarding the common knowledge set, $\lambda \sim U(0,T)$. Then </p>
<p>$$[1] \rightarrow \frac 1T [1-F_2(t_1^*)] - \frac {t_1^*}{T}f_2(t_1^*) =0
\Rightarrow t_1^* = \frac {1-F_2(t_1^*)}{f_2(t_1^*)} \qquad [1a]$$</p>
<p>while
$$[2] \rightarrow - \frac 2Tf_2(t^*_1) - \frac {t_1^*}{T}f'_2(t^*_1) =-\frac 1{T}\Big(2f_2(t^*_1)+t^*_1f'_2(t^*_1)\Big) \qquad [2a]$$</p>
<p><strong>To cover a conjecture of the OP, that the button will be pushed in the exact middle of the time-length</strong>, this will happen if player $1$ models $t_2$ as being a uniform random variable, $t_2 \sim U(0,T)$. Then </p>
<p>$$[1a] \rightarrow t_1^* = \frac {1-(t_1^*/T)}{1/T} = T-t_1^* \Rightarrow t_1^* =T/2 \qquad [1b]$$
and
$$[2a] \rightarrow -\frac 1{T}\frac 2T <0 \qquad [2b]$$
so it will indeed be a maximum (likewise for player 2).</p>
<p>Player $1$ will model $t_2$ as a Uniform, if he has no other information on it except its range. Well, does he know something more? By the set of common knowledge, he knows that player $2$ will also try to maximize from her part, and that she will model the timing of the light-flash as a uniform. So player $1$ knows that player $2$ will end up looking at the conditions</p>
<p>$$t_2^* = \frac {1-F_1(t_2^*)}{f_1(t_2^*)},\;\; [3] \qquad -\frac 1{T}\Big(2f_1(t^*_2)+t^*_2f'_1(t^*_2)\Big) <0 \qquad [4]$$</p>
<p>Does this knowledge permit player $1$ to infer something about the distribution of $t_2$? No, because $[3]$ and $[4]$ contain abstract information about how $t_2$ will be determined as a function of <em>what, according to player $2$</em>, is the distribution of $t_1$. They do not help player 1 in any way in relation to the distribution of $t_2$. </p>
<p>So we conclude, that <em>given the assumed set of common knowledge</em>, both will model each other distributions as Uniforms. Hmmm... does this tell us that indeed the solution of the game will be $(t_1^*,t_2^*) =? (T/2,\,T/2)$?<br>
It appears that since both can essentially predict the choice of the other, they will then have an incentive to push the button earlier. It does not take much thinking to realize that this line of thinking would lead us to conclude that they would both hit the button at time $0$, thus a.s. "guaranteeing" that <em>they will both lose</em>, which they also know, because they both treat the light-flash as a continuous rv, and so the probability of the light flash occurring exactly at time zero, is zero. But this is a strictly dominated strategy and the players won't select it. </p>
<p>Does it pay to randomize over the interval $[0,T/2]$? Well, no, because the probability of winning won't be at a maximum. So we conclude that indeed the solution to this game is </p>
<p>$$(t_1^*,t_2^*) = (T/2,\,T/2)$$</p>
<p>even though the players know a priori what each will play. It is not difficult to calculate that in this case the expected payoffs will be
$$ (v_1,v_2) = (1/4,\; 1/4)$$
This pure strategy profile will be a rationalizable equilibrium <em>if</em> it is not strictly dominated by a mixed strategy. </p>
|
combinatorics | <p>In <a href="https://math.stackexchange.com/questions/80092/how-can-i-express-sum-k-0n-binom-1-2k-1k-binom-1-2n-k-without-u/80101#80101">my answer here</a> I prove, using generating functions, a statement equivalent to
$$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} (-1)^k = 2^n \binom{n}{n/2}$$
when $n$ is even. (Clearly the sum is $0$ when $n$ is odd.) The nice expression on the right-hand side indicates that there should be a pretty combinatorial proof of this statement. The proof should start by associating objects with even parity and objects with odd parity counted by the left-hand side. The number of leftover (unassociated) objects should have even parity and should "obviously" be $2^n \binom{n}{n/2}$. I'm having trouble finding such a proof, though. So, my question is</p>
<blockquote>
<p>Can someone produce a combinatorial proof that, for even $n$, $$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} (-1)^k = 2^n \binom{n}{n/2}?$$</p>
</blockquote>
<p><em>Some thoughts so far:</em> </p>
<p>Combinatorial proofs for $\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} = 4^n$ are given by <a href="https://math.stackexchange.com/questions/37971/identity-involving-binomial-coefficients/37984#37984">Phira here</a> and by <a href="https://math.stackexchange.com/questions/72367/proof-of-a-combinatorial-identity-sum-i-0n-2i-choose-i2n-i-choose-n/72661#72661">Brian M. Scott here</a>. The proofs are basically equivalent. In Phira's argument, both sides count the number of paths of length $2n$ starting from $(0,0)$ using steps of $(1,1)$ and $(1,-1)$. By conditioning on the largest value of $2k$ for which a particular path returns to the horizontal axis at $(2k,0)$ and using the facts that there are $\binom{2k}{k}$ paths from $(0,0)$ to $(2k,0)$ and $\binom{2n-2k}{n-k}$ paths of length $2n-2k$ that start at the horizontal axis but never return to the axis we obtain the left-hand side.</p>
<p>With these interpretations of the central binomial coefficients $2^n \binom{n}{n/2}$ could count (1) paths that do not return to the horizontal axis by the path's halfway point of $(n,0)$, or (2) paths that touch the point $(n,0)$. But I haven't been able to construct the association that makes these the leftover paths (nor do all of these paths have even parity anyway). So perhaps there's some other interpretation of $2^n \binom{n}{n/2}$ as the number of leftover paths.</p>
<p><HR></p>
<p><strong>Update.</strong> <em>Some more thoughts</em>:</p>
<p>There's another way to view the identity $\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} = 4^n$. Both sides count the number of lattice paths of length $n$ when north, south, east, and west steps are allowed. The right side is obvious. The left side has a similar interpretation as before: $\binom{2k}{k}$ counts the number of NSEW lattice paths of length $k$ that end on the line $y=0$, and $\binom{2n-2k}{n-k}$ counts the number of NSEW lattice paths of length $n-k$ that never return to the line $y =0$. So far, this isn't much different as before. However, $2^n \binom{n}{n/2}$ has an intriguing interpretation: It counts the number of NSEW lattice paths that end on the diagonal $y = x$ (or, equivalently, $y = -x$). So maybe there's an involution that leaves these as the leftover paths. (Proofs of all of these claims can be found on <a href="http://mikespivey.wordpress.com/2012/01/04/morelatticestat/" rel="noreferrer">this blog post</a>, for those who are interested.)</p>
| <p>Divide by $4^n$ so that the identity reads (again, for $n$ even)</p>
<p>$$ \sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n} (-1)^k = \frac{1}{2^n} \binom{n}{n/2}. \tag{1}$$</p>
<p><strong>Claim 1</strong>: Select a permutation $\sigma$ of $[n]$ uniformly at random. For each cycle $w$ of $\sigma$, color $w$ red with probability $1/2$; otherwise, color it blue. This creates a colored permutation $\sigma_C$. Then $$\binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n}$$ is the probability that exactly $k$ of the $n$ elements of a randomly-chosen permutation $\sigma$ are colored red. (See proof of Claim 1 below.)</p>
<p><strong>Claim 2</strong>: Select a permutation $\sigma$ of $[n]$ uniformly at random. Then, if $n$ is even, $$\frac{1}{2^n} \binom{n}{n/2}$$ is the probability that $\sigma$ contains only cycles of even length. (See proof of Claim 2 below.)</p>
<p><strong>Combinatorial proof of $(1)$, given Claims 1 and 2</strong>: For any colored permutation $\sigma_C$, find the smallest element of $[n]$ contained in an odd-length cycle $w$ of $\sigma_C$. Let $f(\sigma_C)$ be the colored permutation for which the color of $w$ is flipped. Then $f(f(\sigma_C)) = \sigma_C$, and $\sigma_C$ and $f(\sigma_C)$ have different parities for the number of red elements but the same probability of occurring. Thus $f$ is a sign-reversing involution on the colored permutations for which $f$ is defined. The only colored permutations $\sigma_C$ for which $f$ is not defined are those that have only even-length cycles. However, any permutation with an odd number of red elements must have at least one odd-length cycle, so the only colored permutations for which $f$ is not defined have an even number of red elements. Thus the left-hand side of $(1)$ must equal the probability of choosing a colored permutation that contains only even-length cycles. The probability of selecting one of the several colored variants of a given uncolored permutation $\sigma$, though, is that of choosing an uncolored permutation uniformly at random and obtaining $\sigma$, so the left-hand side of $(1)$ must equal the probability of selecting a permutation of $[n]$ uniformly at random and obtaining one containing only cycles of even length. Therefore,
$$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n} (-1)^k = \frac{1}{2^n} \binom{n}{n/2}.$$</p>
<p>(Clearly, $$\sum_{k=0}^n \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n} = 1,$$
which gives another combinatorial proof of the unsigned version of $(1)$ mentioned in the question.)</p>
<p><HR></p>
<p><strong>Proof of Claim 1</strong>: There are $\binom{n}{k}$ ways to choose which $k$ elements of a given permutation will be red and which $n-k$ elements will be blue. Given $k$ particular elements of $[n]$, the number of ways those $k$ elements can be expressed as the product of $i$ disjoint cycles is $\left[ {k \atop i} \right]$, an unsigned <a href="http://en.wikipedia.org/wiki/Stirling_numbers_of_the_first_kind">Stirling number of the first kind</a>. Thus the probability of choosing a permutation $\sigma$ that has those $k$ elements as the product of $i$ disjoint cycles and the remaining $n-k$ elements as the product of $j$ disjoint cycles is $\left[ {k \atop i} \right] \left[ {n-k \atop j}\right] /n!$, and the probability that the $i$ cycles are colored red and the $j$ cycles are colored blue as well is $\left[ {k \atop i} \right] \left[ {n-k \atop j}\right]/(2^i 2^j n!).$ Summing up, the probability that exactly $k$ of the $n$ elements in a randomly chosen permutation are colored red is
\begin{align}
\frac{\binom{n}{k}}{n!} \sum_{i=1}^k \sum_{j=1}^{n-k} \frac{\left[ {k \atop i} \right] \left[ n-k \atop j \right]}{2^i 2^j} = \frac{\binom{n}{k}}{n!} \sum_{i=1}^k \frac{\left[ {k \atop i} \right]}{2^i} \sum_{j=1}^{n-k} \frac{\left[ {n-k \atop j} \right]}{2^j}.
\end{align}
The two sums are basically the same, so we'll just do the first one.
$$\sum_{i=1}^k \frac{\left[ {k \atop i} \right]}{2^i} = \left( \frac{1}{2} \right)^{\overline{k}} = \prod_{i=0}^{k-1} \left(\frac{1}{2} + i\right) = \frac{1 (3) (5) \cdots (2k-1)}{2^k} = \frac{1 (2) (3) \cdots (2k-1)(2k)}{2^k 2^k k!} = \frac{(2k)!}{4^k k!}.$$
(The first equality is the well-known property that Stirling numbers of the first kind are used to convert rising factorial powers to ordinary powers. This property can be proved combinatorially. For example, Vol. 1 of Richard Stanley's <em><a href="http://www-math.mit.edu/~rstan/ec/ec1/">Enumerative Combinatorics</a></em>, 2nd ed., pp. 34-35 contains two such combinatorial proofs.)</p>
<p>Thus the probability that exactly $k$ of the $n$ elements of a randomly chosen permutation are colored red is $$\frac{\binom{n}{k}}{n!} \frac{(2k)!}{4^k k! } \frac{(2n-2k)!}{4^{n-k} (n-k)!} = \binom{2k}{k} \binom{2n-2k}{n-k} \frac{1}{4^n}.$$</p>
<p><HR></p>
<p><strong>Proof of Claim 2</strong>: Since there can be no odd cycles, $\sigma(1) \neq 1$. Thus there are $n-1$ choices for $\sigma(1)$. We have already chosen the element that maps to $\sigma(1)$, but otherwise there are no restrictions on the value of $\sigma(\sigma(1))$, and so we have $n-1$ choices for $\sigma(\sigma(1))$ as well. </p>
<p>Now $n-2$ elements are unassigned. If $\sigma(\sigma(1)) \neq 1$, then we have an open cycle. We can't assign $\sigma^3(1) = 1$, as that would close the current cycle at an odd number of elements. Also, $\sigma(1)$ and $\sigma^2(1)$ are already taken. Thus there are $n-3$ choices for the value of $\sigma^3(1)$. If $\sigma(\sigma(1)) = 1$, then we have just closed an even cycle. Selecting any unassigned element in $[n]$, say $j$, we cannot have $\sigma(j) = j$, as that would create an odd cycle, and $1$ and $\sigma(1)$ are already taken. Thus we have $n-3$ choices for $\sigma(j)$ as well.</p>
<p>In general, if there are $i$ elements unassigned and $i$ is even, there is either one even-length open cycle or no open cycles. If there is an open cycle, we cannot close it, and so we have $i-1$ choices for the next element in the cycle. If there is not an open cycle, we select the smallest unassigned element $j$. Since we cannot have $\sigma(j) = j$, there are $i-1$ choices for $\sigma(j)$. Either way, we have $i-1$ choices. If there are $i$ elements unassigned and $i$ is odd, though, there must always be an odd-length open cycle. Since we can close it, there are $i$ choices for the next element in the cycle. </p>
<p>All together, then, if $n$ is even then the number of permutations of $[n]$ that contain only cycles of even length is $$(n-1)^2 (n-3)^2 \cdots (1)^2 = \left(\frac{n!}{2^{n/2} (n/2)!}\right)^2 = \frac{n!}{2^n} \binom{n}{n/2}.$$ Thus the probability of choosing a permutation uniformly at random and obtaining one that contains only cycles of even length is $$\frac{1}{2^n} \binom{n}{n/2}.$$ </p>
<p><HR></p>
<p>(I've been thinking about this problem off and on for the two months since I first posted it. What finally broke it open for me was discovering the interpretation of the unsigned version of the identity mentioned as #60 on Richard Stanley's "<a href="http://www-math.mit.edu/~rstan/bij.pdf">Bijective Proof Problems</a>" document.)</p>
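<p>(A machine check of $(1)$ for small even $n$, using exact rational arithmetic — just a sketch, of course, not a substitute for the bijection.)</p>

<pre><code>from fractions import Fraction
from math import comb

def lhs(n):
    return sum(Fraction((-1)**k * comb(2*k, k) * comb(2*n - 2*k, n - k), 4**n)
               for k in range(n + 1))

for n in (2, 4, 6, 8, 10):
    assert lhs(n) == Fraction(comb(n, n // 2), 2**n)
print("identity (1) checked for n = 2, 4, 6, 8, 10")
</code></pre>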
| <p>My alternative solution can be found here (in Section 4), by counting paths: <a href="http://arxiv.org/abs/1204.5923">http://arxiv.org/abs/1204.5923</a></p>
|
logic | <p>According to <a href="http://en.wikipedia.org/wiki/Logical_connective#Order_of_precedence">the precedence of logical connectives</a>, operator $\rightarrow$ gets higher precedence than $\leftrightarrow$ operator. But what about associativity of $\rightarrow$ operator?</p>
<p>The implies operator ($\rightarrow$) does not have the associative property. That means that $(p \rightarrow q) \rightarrow r$ is not equivalent to $p \rightarrow (q \rightarrow r)$. Because of that, the question comes op how $p \rightarrow q \rightarrow r$ should be interpreted.</p>
<p>The proposition $p \rightarrow q \rightarrow r$ can be defined in multiple ways that make sense:</p>
<ul>
<li>$(p \rightarrow q) \rightarrow r$ (left associativity)</li>
<li>$p \rightarrow (q \rightarrow r)$ (right associativity)</li>
<li>$(p \rightarrow q) \land (q \rightarrow r)$</li>
</ul>
<p>Which one of these definitions is used?</p>
<p>I could not locate any book/webpage that mentions about associativity of logical operators in discrete mathematics.</p>
<p>Please also cite the reference (book/reliable webpage) that you use to answer my question (as I'm planning to add this to wikipedia page about 'logical connectives').</p>
<p>Thanks.</p>
<p>PS: I got this question when I saw this problem:
Check if the following compound proposition is a tautology or not:</p>
<p>$$ \mathrm{p} \leftrightarrow (\mathrm{q} \wedge \mathrm{r}) \rightarrow \neg\mathrm{r} \rightarrow \neg\mathrm{p}$$</p>
| <p>When you enter $p \Rightarrow q \Rightarrow r$ in <a href="http://en.wikipedia.org/wiki/Mathematica">Mathematica</a> (with <code>p \[Implies] q \[Implies] r</code>), it displays $p \Rightarrow (q \Rightarrow r)$.</p>
<p>That makes it plausible that the $\rightarrow$ operator is generally accepted as right-associative.</p>
| <p>Some logical operators are associative: both <span class="math-container">$\wedge$</span> and <span class="math-container">$\vee$</span> are associative, as a simple check of truth tables verifies. Likewise, the biconditional <span class="math-container">$\leftrightarrow$</span> is associative.</p>
<p>However, the implication <span class="math-container">$\rightarrow$</span> is <em>not</em> associative. Compare <span class="math-container">$(p\rightarrow q)\rightarrow r$</span> and <span class="math-container">$p\rightarrow(q\rightarrow r)$</span>. If all of <span class="math-container">$p$</span>, <span class="math-container">$q$</span>, and <span class="math-container">$r$</span> are false, then <span class="math-container">$p\rightarrow (q\rightarrow r)$</span> is true, because the antecedent is false; but <span class="math-container">$(p\rightarrow q)\rightarrow r$</span> is false, because <span class="math-container">$r$</span> is false, but <span class="math-container">$p\rightarrow q$</span> is true. They also disagree when <span class="math-container">$p$</span> and <span class="math-container">$r$</span> are false but <span class="math-container">$q$</span> is true: then <span class="math-container">$p\rightarrow(q\rightarrow r)$</span> is true because the antecedent is false, but <span class="math-container">$(p\rightarrow q)$</span> is true and <span class="math-container">$r$</span> is false, so <span class="math-container">$(p\rightarrow q)\rightarrow r$</span> is false.</p>
<p>Since they take different values at some truth assignments, the two propositions are not equivalent, so <span class="math-container">$\to$</span> is not associative.</p>
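<p>(Enumerating all eight truth assignments mechanically — a minimal Python sketch — recovers exactly the two disagreeing assignments described above.)</p>

<pre><code>from itertools import product

def implies(a, b):
    return (not a) or b

for p, q, r in product((False, True), repeat=3):
    left  = implies(implies(p, q), r)    # (p -> q) -> r
    right = implies(p, implies(q, r))    # p -> (q -> r)
    if left != right:
        print(p, q, r)    # prints (False, False, False) and (False, True, False)
</code></pre>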
|
logic | <blockquote>
<p><em><strong>Theorem 1</strong> [ZFC, classical logic]:</em> If $A,B$ are sets such that $\textbf{2}\times A\cong \textbf{2}\times B$, then $A\cong B$.</p>
</blockquote>
<p>That's because the axiom of choice allows for the definition of cardinality $|A|$ of any set $A$, and for $|A|\geq\aleph_0$ we have $|\textbf{2}\times A|=|A|$.</p>
<blockquote>
<p><em><strong>Theorem 2</strong>:</em> Theorem 1 still holds in ZF with classical logic.</p>
</blockquote>
<p>This is less trivial and explained in Section 5 of <a href="https://math.dartmouth.edu/~doyle/docs/three/three.pdf" rel="noreferrer">Division by Three</a> - however, though the construction does not involve any choices, it <em>does</em> involve the law of excluded middle.</p>
<blockquote>
<p><strong><em>Question:</em></strong> Are there <em>intuitionistic</em> set theories in which one can prove $$\textbf{2}\times A\cong \textbf{2}\times B\quad\Rightarrow\quad A\cong B\quad\text{?}$$ </p>
</blockquote>
<p>For example, is this statement true in elementary topoi or can it be proved in some intuitionistic type theory?</p>
<blockquote>
<p>In his comment below Kyle indicated that the statement is unprovable in some type theory - does somebody know the argument or a reference for that?</p>
</blockquote>
<p><em>Edit</em> See also the related question <a href="https://math.stackexchange.com/questions/1114752/does-a-times-a-cong-b-times-b-imply-a-cong-b">Does $A\times A\cong B\times B$ imply $A\cong B$?</a> about 'square roots'</p>
| <p>There was a paper recently posted to arXiv about this question: Swan, <a href="https://arxiv.org/abs/1804.04490" rel="noreferrer"><em>On Dividing by Two in Constructive Mathematics</em></a>.</p>
<p>It turns out that there are examples of toposes where you can't divide by two.</p>
| <p>(The following is not really an answer, or just a very partial one, but it's definitely relevant and too long for a comment.)</p>
<p>There is a theorem of Richard Friedberg ("<a href="https://eudml.org/doc/169898" rel="nofollow noreferrer">The uniqueness of finite division for recursive equivalence types</a>", <em>Math. Z.</em> <strong>75</strong> (1961), 3–7) which goes as follows (all of this is in classical logic):</p>
<p>For $A$ and $B$ subsets of $\mathbb{N}$, define $A \sim B$ when there exists a partial computable function $f:\mathbb{N}\rightharpoonup\mathbb{N}$ that is one-to-one on its domain and defined at least on all of $A$ such that $f(A) = B$. (One also says that $A$ and $B$ are <em>computably equivalent</em> or <em>recursively equivalent</em>, and it is indeed an equivalence relation, not to be confused with "computably/recursively isomorphic", <a href="https://mathoverflow.net/questions/228599/partial-computably-isomorphic-sets">see here</a>.) Then [Friedberg's theorem states]: if $n$ is a positive integer then $(n\times A) \sim (n \times B)$ implies $A\sim B$ (here, $n\times A$ is the set of natural numbers coding pairs $(i,k)$ where $0\leq i<n$ and $k\in A$ for some standard coding of pairs of natural numbers by natural numbers).</p>
<p>To make this assertion closer to the question asked here, subsets of $\mathbb{N}$ can be considered as objects, indeed subobjects of $\mathcal{N}$, in the <a href="https://en.wikipedia.org/wiki/Effective_topos" rel="nofollow noreferrer">effective topos</a> (an elementary topos with n.n.o. $\mathcal{N}$ such that all functions $\mathcal{N}\to\mathcal{N}$ are computable), in fact, these subobjects are exactly those classified by maps $\mathcal{N} \to \Omega_{\neg\neg}$ where $\Omega_{\neg\neg} = \nabla 2$ is the subobject of the truth values $p\in\Omega$ such that $\neg\neg p = p$; moreover, to say that two such objects are isomorphic, or internally isomorphic, in the effective topos, is equivalent to saying that $A$ and $B$ are computably isomorphic as above. So Friedberg's result can be reinterpreted by saying that if $A$ and $B$ are such objects of the effective topos and if $n\times A$ and $n\times B$ are isomorphic then $A$ and $B$ are.</p>
<p>I'm not sure how much this can be internalized (e.g., does the effective topos validate "if $A$ and $B$ are $\neg\neg$-stable sets of natural numbers and $n\times A$ is isomorphic to $n\times B$ then $A$ is isomorphic to $B$" for explicit $n$? and how about for $n$ quantified inside the topos?) or generalized (do we really need $\neg\neg$-stability?). But this may be worth looking into, and provides at least a positive kind-of-answer to the original question.</p>
|
probability | <p>If you are given a die and asked to roll it twice. What is the probability that the value of the second roll will be less than the value of the first roll?</p>
| <p>There are various ways to answer this. Here is one:</p>
<p>There is clearly a $1$ out of $6$ chance that the two rolls will be the same, hence a $5$ out of $6$ chance that they will be different. Further, the chance that the first roll is greater than the second must be equal to the chance that the second roll is greater than the first (e.g. switch the two dice!), so both chances must be $2.5$ out of $6$ or $5$ out of $12$.</p>
| <p>Here is another way to solve the problem:
$$
\text{Pr }[\textrm{second} > \textrm{first}] + \text{Pr }[\textrm{second} < \textrm{first}] + \text{Pr }[\textrm{second} = \textrm{first}] = 1
$$
Because of symmetry $\text{Pr }[\text{second} > \text{first}] = \text{Pr }[\text{second} < \text{first}]$, so
$$
\text{Pr }[\text{second} > \text{first}] = \frac{1 - \text{Pr }[\text{second} = \text{first}]}{2}
= \frac{1 - \frac{1}{6}}{2} = \frac{5}{12}
$$</p>
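<p>(Or, checking exhaustively over the $36$ equally likely ordered pairs — a quick sketch in Python:)</p>

<pre><code>from fractions import Fraction

pairs = [(a, b) for a in range(1, 7) for b in range(1, 7)]
print(Fraction(sum(b < a for a, b in pairs), len(pairs)))   # 5/12
</code></pre>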
|
matrices | <p>I am sure the answer to this is (kind of) well known. I've searched the web and the site for a proof and found nothing, and if this is a duplicate, I'm sorry.</p>
<p>The following question was given in a contest I took part. I had an approach but it didn't solve the problem.</p>
<blockquote>
<p>Consider $V$ <strong>a</strong> linear subspace of the real vector space $\mathcal{M}_n(\Bbb{R})$ ($n\times n$ real entries matrices) such that $V$ contains only singular matrices (i.e matrices with determinant equal to $0$). What is the maximal dimension of $V$?</p>
</blockquote>
<p>A quick guess would be $n^2-n$ since if we consider $W$ the set of $n\times n$ real matrices with last line equal to $0$ then this space has dimension $n^2-n$ and it is a linear space of singular matrices.</p>
<p>Now the only thing there is to prove is that if $V$ is a subspace of $\mathcal{M}_n(\Bbb{R})$ of dimension $k > n^2-n$ then $V$ contains a non-singular matrix. The official proof was unsatisfactory for me, because it was a combinatorial one, and seemed to have few things in common with linear algebra. I was hoping for a pure linear algebra proof.</p>
<p>My approach was to search for a permutation matrix in $V$, but I used some 'false theorem' in between, which I am ashamed to post here. </p>
| <p>We can show more generally that if $\mathcal M$ is a linear subspace of $\mathcal M_n(\mathbb R)$ such that all its elements have rank less than or equal to $p$, where $1\leq p<n$, then the dimension of $\mathcal M$ is less than or equal to $np$. To see that, consider the subspace $\mathcal E:=\left\{\begin{pmatrix}0&B\\^tB&A\end{pmatrix}, A\in\mathcal M_{n-p}(\mathbb R),B\in\mathcal M_{p,n-p}(\mathbb R) \right\}$. Its dimension is $p(n-p)+(n-p)^2=(n-p)(p+n-p)=n(n-p)$. Let $\mathcal M$ be a linear subspace of $\mathcal M_n(\mathbb R)$ such that $\displaystyle\max_{M\in\mathcal M}\operatorname{rank}(M)=p$. We can assume that this space contains the matrix $J:=\begin{pmatrix}I_p&0\\0&0\end{pmatrix}\in\mathcal M_n(\mathbb R)$. Indeed, if $M_0\in\mathcal M$ is such that $\operatorname{rank}M_0=p$, we can find invertible matrices $P,Q\in\mathcal M_n(\mathbb R)$ such that $J=PM_0Q$, and the map $\varphi\colon \mathcal M\to\varphi(\mathcal M)$ defined by $\varphi(M)=PMQ$ is a rank-preserving bijective linear map.</p>
<p>If we take $M\in\mathcal M\cap \mathcal E$, then we can show, considering $M+\lambda J\in\mathcal M$, that $M=0$. Therefore, since
$$\dim (\mathcal M+\mathcal E)=\dim(\mathcal M)+\dim(\mathcal E)\leq \dim(\mathcal M_n(\mathbb R))=n^2, $$<br>
we have
$$\dim (\mathcal M)\leq n^2-n(n-p)=np.$$</p>
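<p>As a concrete numerical illustration of the $n^2-n$ lower bound mentioned in the question (a numpy sketch only, not part of the proof above): matrices with last row zero form an $(n^2-n)$-dimensional space consisting entirely of singular matrices.</p>
<pre><code>import numpy as np

n = 4
rng = np.random.default_rng(0)

# Random elements of W = {matrices with last row zero},
# a subspace of dimension n^2 - n made of singular matrices.
for _ in range(5):
    M = rng.standard_normal((n, n))
    M[-1, :] = 0.0           # force the last row to zero
    print(np.linalg.det(M))  # 0.0 (up to floating-point rounding)
</code></pre>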
| <p>If you consider a subspace <span class="math-container">$\mathcal{M}$</span> of <strong>symmetric</strong> singular matrices of rank <span class="math-container">$p$</span>, <a href="https://math.stackexchange.com/a/66936/445105">Davide's argument</a> can be reused to prove <span class="math-container">$\text{dim}(\mathcal{M})\le \frac{p(p+1)}2$</span>.</p>
<h3>Proof</h3>
<p>Consider</p>
<p><span class="math-container">$$
\mathcal E:=\left\{\begin{pmatrix}0&B\\B^T&A\end{pmatrix}, A\in \text{Sym}_{n-p}(\mathbb R),B\in\mathcal M_{p,n-p}(\mathbb R) \right\}.
$$</span>
which is a subspace of the symmetric <span class="math-container">$n\times n$</span> matrices <span class="math-container">$\text{Sym}_n(\mathbb{R})$</span> with dimension
<span class="math-container">$$
\text{dim}(\mathcal E) = p(n-p) + \frac{(n-p)(n-p+1)}2
= \frac{n-p}2\bigl(2p + (n-p+1)\bigr)
= \frac{(n-p)(n+p+1)}2
$$</span>
With similar arguments, <span class="math-container">$\mathcal{M}\cap \mathcal{E} = \{0\}$</span>, which results in
<span class="math-container">$$
\text{dim}(\mathcal E) + \text{dim}(\mathcal{M})
\le \text{dim}\bigl(\text{Sym}_n(\mathbb R)\bigr)
= \frac{n(n+1)}2
$$</span>
And thus
<span class="math-container">$$
\text{dim}(\mathcal{M})
\le \frac{n(n+1)}2 - \frac{(n-p)(n+p+1)}2
= \frac{n^2 + n}2 - \frac{(n^2 - p^2) + (n-p)}2
= \frac{p^2+p}2
$$</span></p>
|
differentiation | <p>I'm trying to prove that <span class="math-container">$\frac{\mathrm{d} }{\mathrm{d} x}\ln x = \frac{1}{x}$</span>.</p>
<p>Here's what I've got so far:
<span class="math-container">$$
\begin{align}
\frac{\mathrm{d}}{\mathrm{d} x}\ln x &= \lim_{h\to0} \frac{\ln(x + h) - \ln(x)}{h} \\
&= \lim_{h\to0} \frac{\ln(\frac{x + h}{x})}{h} \\
&= \lim_{h\to0} \frac{\ln(1 + \frac{h}{x})}{h} \\
\end{align}
$$</span>
To simplify the logarithm:
<span class="math-container">$$
\lim_{h\to0}\left (1 + \frac{h}{x}\right )^{\frac{1}{h}} = e^{\frac{1}{x}}
$$</span>
This is the line I have trouble with. I can see that it is true by putting numbers in, but I can't prove it. I know that <span class="math-container">$e^{\frac{1}{x}} = \lim_{h\to0}\left (1 + h \right )^{\frac{h}{x}}$</span>, but I can't work out how to get from the above line to that.
<span class="math-container">$$
\lim_{h\to0}\left ( \left (1 + \frac{h}{x}\right )^{\frac{1}{h}}\right )^{h} = e^{\frac{h}{x}}
$$</span>
Going back to the derivative:
<span class="math-container">$$
\begin{align}
\frac{\mathrm{d}}{\mathrm{d} x}\ln x &= \lim_{h\to0} \frac{\ln(e^{\frac{h}{x}})}{h} \\
&= \lim_{h\to0} \frac{\frac{h}{x}\ln(e)}{h} \\
&= \lim_{h\to0} \frac{h}{x} \div h\\
&= \frac{1}{x} \\
\end{align}
$$</span></p>
<p>This proof seems fine, apart from the middle step to get <span class="math-container">$e^{\frac{1}{x}}$</span>. How could I prove that part?</p>
| <p>If you can use the chain rule and the fact that the derivative of $e^x$ is $e^x$ and the fact that $\ln(x)$ is differentiable, then we have:</p>
<p>$$\frac{\mathrm{d} }{\mathrm{d} x} x = 1$$</p>
<p>$$\frac{\mathrm{d} }{\mathrm{d} x} e^{\ln(x)} = e^{\ln(x)} \frac{\mathrm{d} }{\mathrm{d} x} \ln(x) = 1$$</p>
<p>$$e^{\ln(x)} \frac{\mathrm{d} }{\mathrm{d} x} \ln(x) = 1$$</p>
<p>$$x \frac{\mathrm{d} }{\mathrm{d} x} \ln(x) = 1$$</p>
<p>$$\frac{\mathrm{d} }{\mathrm{d} x} \ln(x) = \frac{1}{x}$$</p>
| <p>The simplest way is to use the inverse function theorem for derivatives:</p>
<p>If <span class="math-container">$f$</span> is a bijection from an interval <span class="math-container">$I$</span> onto an interval <span class="math-container">$J=f(I)$</span>, which has a derivative at <span class="math-container">$x\in I$</span>, and if <span class="math-container">$f'(x)\neq 0$</span>, then <span class="math-container">$f^{-1}\colon J\to I$</span> has a derivative at <span class="math-container">$y=f(x)$</span>, and
<span class="math-container">$$\bigl(f^{-1}\bigr)'(y)=\frac1{f'(x)}=\frac1{f'\bigl(f^{-1}(y)\bigr)}.$$</span></p>
<p>As <span class="math-container">$(\mathrm e^x)'=\mathrm e^x\neq 0\,$</span> for all <span class="math-container">$x$</span>, we know that <span class="math-container">$\,\ln\,$</span> has a derivative at each point of its domain, and
<span class="math-container">$$(\ln)'(y)=\frac1{\mathrm e^{\,\ln y}}=\frac1y.$$</span></p>
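<p>As a numerical footnote to both answers, the step the OP found troublesome, <span class="math-container">$\lim_{h\to0}(1+h/x)^{1/h}=e^{1/x}$</span>, is easy to check directly (a Python sketch; the sample point <span class="math-container">$x=3$</span> is an arbitrary choice):</p>
<pre><code>import math

x = 3.0
for h in (0.1, 0.01, 0.001, 1e-6):
    approx = (1 + h / x) ** (1 / h)  # should approach e^(1/x)
    print(h, approx, math.exp(1 / x))
</code></pre>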
|
differentiation | <p>I am trying to get the derivative of $|x|$, and I want that derivative function, say $g(x)$, to be a function of x. </p>
<p>So it really needs the |x| to be smooth (ex. $x^2$); I am wondering what is the best way to approximate |x| with something smooth?</p>
<p>I propose $\sqrt{x^2 + \epsilon}$, where $\epsilon = 10^{-10}$, but there should be something better? Perhaps Taylor expansion?</p>
<p><strong>Sorry for any confusion. I should add some additional information here:</strong></p>
<p>I want to use $|x|$ as part of an object function $J(x)$ which I want to minimize. So it would be nice to approximate $|x|$ with some smooth function so that I can get the analytic form of the first-order derivative of $J(x)$.</p>
<p>Thanks a lot. </p>
| <p>A bit late to the party, but you can smoothly approximate $f(x) = |x|$ by observing $\partial f/\partial x = \mbox{sgn}(x)$ for $x \neq 0$. Therefore approximating the $\mbox{sgn}(x)$ function by </p>
<p>$$ f(x) = 2\mbox{ sigmoid}(kx)-1$$ </p>
<p>(the $k$ being a parameter that allows you to control the smoothness), we get</p>
<p>$$ \partial f/\partial x = 2\left (\frac{e^{kx}}{1+e^{kx}} \right ) - 1 $$
$$ \Rightarrow f(x) = \frac{2}{k}\log(1+e^{kx})-x-\frac{2}{k}\log(2)$$</p>
<p>where the constant term was chosen to ensure $f(0) = 0 $.</p>
<p>Included are plots of the function for $x \in [-5,5]$, where $k=1$ is red, $k=10$ is blue and $k=100$ is black.</p>
<p>Note that if you wanted to use the more smooth $k=1$ in the interval $[-5,5]$ it may be worth applying a further linear transformation i.e. ~$5f(x)/3.6$ to ensure the values at the edge of the interval are correct.</p>
<p><img src="https://i.sstatic.net/0hBiS.png" alt="enter image description here"></p>
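<p>For anyone who wants to reproduce these curves, here is a short matplotlib sketch assuming the same parametrization as above; <code>np.logaddexp</code> evaluates $\log(1+e^{kx})$ without overflow for large $k$:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def smooth_abs(x, k):
    # (2/k) log(1 + e^{kx}) - x - (2/k) log 2, chosen so f(0) = 0
    return 2.0 / k * np.logaddexp(0.0, k * x) - x - 2.0 / k * np.log(2.0)

x = np.linspace(-5, 5, 1000)
for k, color in [(1, "red"), (10, "blue"), (100, "black")]:
    plt.plot(x, smooth_abs(x, k), color=color, label=f"k={k}")
plt.plot(x, np.abs(x), linestyle="dashed", label="|x|")
plt.legend()
plt.show()
</code></pre>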
| <p>I have used your $\sqrt{x^2+\epsilon}$ function once before, when an application I was working on called for a curve with a tight radius of curvature near $x=0$. Whether it is best for your purpose might depend on your purpose.</p>
<p>(Side note: The derivative of $|x|$ does not exist at $x=0$. In physics, we sometimes cheat and write ${d\over dx}|x| = \rm{sgn}(x)$, the "sign" function that gives $-1$ when $x<0$, $0$ when $x=0$, and $+1$ when $x > 0$. But only when we know that the value at $x=0$ will not get us into trouble!)</p>
|
combinatorics | <p>Suppose you want to open a lock with three digits code, the lock is a little special because it can be opened if you have two digits guessed right. To clarify, if the correct three digit is <code>123</code>, then guess <code>124</code> or <code>153</code> can open the box. The lock looks like this:</p>
<p><a href="https://i.sstatic.net/YGl7R.png" rel="noreferrer"><img src="https://i.sstatic.net/YGl7Rt.png" alt="a rendered picture of a three decimal digit lock" /></a></p>
<p><strong>Question is: what is the best strategy to open the box?</strong> The best strategy is the one that requires the fewest attempts on average. You should also find that average.</p>
<p>The strategies I came up with are:</p>
<p>First strategy: hold first digit, and constantly change second and third while keep them equal when changing them. For example, I will try: <code>111</code>, <code>122</code>...<code>199</code>,<code>211</code>,<code>222</code>,...,<code>299</code>,....</p>
<p>Second strategy: hold second and third equal, and constantly change first one. For example: <code>111</code>, <code>211</code>,...,<code>911</code>,<code>122</code>,<code>222</code>,...</p>
<p><strong>I don't know if these two are best, nor do I know if they are equivalent efficient.</strong></p>
<h2>Edit</h2>
<p><a href="https://jsfiddle.net/31ds8ux4/3/" rel="noreferrer">Here is a program</a> (contributed in a comment) to calculate the average number of trials for a strategy. To use it, replace the line '// Input your list here' with your test list and press run.</p>
| <p>I have a strategy with at most $50$ tries and $22449/1000=22.449$ expected tries.</p>
<p>$$
\begin{align}
443, 796, 869, 101, 230, 577, 314, 022, 965, 656, \\
588, 757, 875, 213, 140, 331, 689, 998, 404, 410, \\
134, 303, 241, 886, 555, 667, 779, 421, 599, 000, \\
432, 202, 897, 768, 044, 033, 695, 342, 011, 976, \\
678, 959, 112, 858, 987, 224, 123, 320, 566, 785\phantom{,}
\end{align}
$$</p>
<p>This was obtained by starting from the unordered set of these words (given by a covering code) and then ordering them using a greedy computer search; more details below.</p>
<hr>
<p>First, I'll get some ideas by considering another problem, <em>optimizing the number of tries needed for the worst case</em>, which has a known solution for this case. A <a href="https://en.wikipedia.org/wiki/Covering_code">covering code</a> $C$ with alphabet size $q=10$, length $n=3$, and covering radius $R=1$ is a set of $3$-tuples (called <em>words</em>) of length $3$ over $\{0,1,\dots,9\}$ such that every possible $3$-tuple differs from one in $C$ in at most one position. This is exactly what we need. </p>
<p>The minimal size with these parameters is $50$ [1]. It contains these words:
$$
\begin{array}{|ccccc|}
\hline
000 & 011 & 022 & 033 & 044 \\
101 & 112 & 123 & 134 & 140 \\
202 & 213 & 224 & 230 & 241 \\
303 & 314 & 320 & 331 & 342 \\
404 & 410 & 421 & 432 & 443 \\
\hline
555 & 566 & 577 & 588 & 599 \\
656 & 667 & 678 & 689 & 695 \\
757 & 768 & 779 & 785 & 796 \\
858 & 869 & 875 & 886 & 897 \\
959 & 965 & 976 & 987 & 998 \\
\hline
\end{array}
$$</p>
<p>For any two columns, the upper half contains a word for all $25$ pairs of symbols in $\{0,1,2,3,4\}$ that can occur there, and the lower half contains all $25$ pairs of symbols in $\{5,6,7,8,9\}$ that can occur there. The correct combination has to contain at least two symbols from either set, so it is opened by entering one of these words.</p>
<p>[1] Some sources refer to "J. G. Kalbfleisch and R. G. Stanton, A combinatorial theorem of matching, J. London Math. Soc. (1), 44 (1969), 60–64; and (2), 1 (1969), 398", but I can't find the paper. However, the value is listed in the fourth PDF listed at the top of <a href="http://www.sztaki.hu/~keri/codes/">this page by Gerzson Kéri</a>.</p>
<hr>
<p>Now back to the original problem, <em>optimizing the expected number of tries required</em>. My idea is to try to take these words and optimize the order somehow. This of course doesn't guarantee that we have an optimal solution.</p>
<p>I tried a greedy random approach: Choose words one by one. At each step, for each possible new word $c$, find the number of previously uncovered words that would be covered by $c$. Then among the ones that cover the most, choose one by random. After some time, the best that I could find was the order given at the top of this answer, with $22449/1000$ as the expected number.</p>
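<p>For readers who prefer Python to the JavaScript checker linked in the question, here is a sketch that scores an ordered guess list under the "two digits right" rule; running it on the $50$-word list above should reproduce the $22449/1000$ figure:</p>
<pre><code>from itertools import product

guesses = [
    "443", "796", "869", "101", "230", "577", "314", "022", "965", "656",
    "588", "757", "875", "213", "140", "331", "689", "998", "404", "410",
    "134", "303", "241", "886", "555", "667", "779", "421", "599", "000",
    "432", "202", "897", "768", "044", "033", "695", "342", "011", "976",
    "678", "959", "112", "858", "987", "224", "123", "320", "566", "785",
]

def opens(guess, code):
    # The lock opens when at least two positions match.
    return sum(g == c for g, c in zip(guess, code)) >= 2

total = 0
for code in map("".join, product("0123456789", repeat=3)):
    # Number of tries until this code is opened by the ordered list.
    total += next(i for i, g in enumerate(guesses, 1) if opens(g, code))

print(total / 1000)  # expected number of tries; should match 22.449
</code></pre>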
| <p>OK, so the basic idea I have is as follows:</p>
<p>We want to cover as many possible combinations as possible for each try. Thus, for example, the OP's strategy of doing 111,122,..., followed by 211,222, ... would seem less than optimal, since the first 10 tries already cover any combinations with the last two digits being the same. So, it would be better to do 111,122, ... followed by something like 223,234,245, so that not only do you try to get a hit between the first digit and any of the other two, but you are also trying other options to get a hit between the 2nd and 3rd digit. </p>
<p>In fact, even 111,122,... is less than optimal, since they both cover 121, so you get overlap. So, it's probably best to start with something like 111,222,333,... since each of these covers 28 combinations without overlap.</p>
<p>After that, I figure we should just keep making sure that every new try we make has as few digits in common with whatever combinations we already tried up to that point... basically try and make each combination differ in 2 digits from any of the ones tried before. The following sequence will do exactly that (each sequence of 10 has a unique difference between 1st and 2nd digit, between 2nd and 3rd, and between 1st and 3rd):</p>
<p>000, 111, 222, 333, ..., 999, (at this point we have covered 10*28=280 combinations)</p>
<p>012, 123, 234, 345, ..., 901, (another 10 * 22 (e.g. 012 covers itself and 3*7 more) = 220)</p>
<p>(so with a mere 20 tries we cover half of all possible combinations!)</p>
<p>024, 135, 246, 357, ..., 913, (another 10 * 18 (this takes some writing out) = 180)</p>
<p>036, 147, 258, 369, ..., 925, (another 10 * 12 = 120)</p>
<p>048, 159, 260, 371, ..., 937, (another 10 * 8 = 80)</p>
<p>(at this point, don't proceed with 050,... since that has two digits in common with earlier 000)</p>
<p>051, 162, 273, 384, ..., 940, (another 10 * 5 = 50)</p>
<p>063, 174, 285, 396, ..., 952, (another 10 * 4 = 40)</p>
<p>075, 186, 297, 308, ..., 964, (another 10 * 2 = 20)</p>
<p>087, 198, 209, 310, ... ,976 (the last 10)</p>
<p>In an earlier version of my answer I said to do another 10 after these:</p>
<p>099, 100, 211, 322, ... , 988 </p>
<p>the idea being that this would cover all possible pairs of 1st and 2nd digit, and thus covering all possible combinations (and thus we would have a worst case of 100). However, it turns out that whichever ones these additional 10 would cover have been covered already by the previous 90. So, worst case of this method is 90 tries, not 100.</p>
<p>The above method gives an average of 0.28*5.5 (the first 10 tries each cover 28 cases, so that is 280 cases out of 1000 is 28%, which on average take (1 + 10)/2 = 5.5 tries) + 0.22*15.5 + 0.18*25.5 + 0.12*35.5 + 0.08*45.5 + 0.05*55.5 + 0.04*65.5 + 0.02*75.5 + 0.01*85.5 = 25.2 tries to open the box.</p>
<p>I really think this method cannot be improved in terms of average number of tries, but I have no proof. </p>
<p><strong>Edit</strong> ok, so this is <em>not</em> the most efficient strategy: see JiK's answer! Time to eat humble pie! ... Well, I hope at least I was able to express some of the ideas why some strategies might be better than others.</p>
|
probability | <p>For independent events, the probability of <em>both</em> occurring is the <strong>product</strong> of the probabilities of the individual events: </p>
<p>$Pr(A\; \text{and}\;B) = Pr(A \cap B)= Pr(A)\times Pr(B)$.</p>
<p>Example: if you flip a coin twice, the probability of heads both times is: $1/2 \times 1/2 =1/4.$</p>
<p>I don't understand why we multiply. I mean, I've memorized the operation by now, that we multiply for independent events; but <strong>why</strong>, I don't get it. </p>
<p>If I have $4$ bags with $3$ balls, then I have $3\times 4=12$ balls: This I understand. Multiplication is (the act of) <strong>scaling</strong>. </p>
<p>But what does scaling have to do with independent events? I don't understand why we scale one event by the other to calculate $Pr(A \cap B)$, if A, B are independent. </p>
<p>Explain it to me as if I'm really dense, because I am. Thanks. </p>
| <p>I like this answer taken from <a href="https://web.archive.org/web/20180128170657/http://mathforum.org/library/drmath/view/74065.html" rel="nofollow noreferrer">Link</a>:</p>
<p>"
It may be clearer to you if you think of probability as the fraction
of the time that something will happen. If event A happens 1/2 of the
time, and event B happens 1/3 of the time, and events A and B are
independent, then event B will happen 1/3 of the times that event A
happens, right? And to find 1/3 of 1/2, we multiply. The probability
that events A and B both happen is 1/6.</p>
<p>Note also that adding two probabilities will give a larger number than
either of them; but the probability that two events BOTH happen can't
be greater than either of the individual events. So it would make no
sense to add probabilities in this situation.
"</p>
| <p>If you randomly pick one from $n$ objects, each object has the probability $\frac{1}{n}$ of being picked. Now imagine you pick randomly <em>twice</em> - one object from a set of $n$ objects, and a second object from a <em>different</em> set of $m$ objects. There are $n\cdot m$ possible pairts of objects, and thus the probability of each individual pair is $\frac{1}{n\cdot m} = \frac{1}{n}\cdot \frac{1}{m}$.</p>
<hr>
<p>More generally, let $A$ be some event with probability $\Pr(A) = a$, and $B$ some other event with probability $\Pr(B) = b$. Assume you already know that $A$ happened, meaning that instead of looking at the <em>whole</em> probability space (i.e. at the whole set of possible outcomes), we're now looking at only $A$. What can we say about the probability that $B$ happens also, i.e. about the probability $\Pr(B\mid A)$ (to be read as "the probability of $B$ under the condition $A$")?</p>
<p>In general, not much! But, if $A$ and $B$ are <em>independent</em>, then by the definition of <em>independence</em>, knowing that $A$ has happened <em>doesn't</em> provide us with any information about $B$. In other words, knowing that $A$ has happened doesn't make the likelyhood of $B$ happening also any smaller or larger, so $$
\Pr(B\mid A) = \Pr(B) \text{ if $A,B$ are independent.}
$$</p>
<p>Now look at $\Pr(A \cap B)$, i.e. the probability that <em>both</em> $A$ and $B$ happen. We know that if $A$ has happened, then $A \cap B$ happens with probability $\Pr(B\mid A)$. If we <em>don't</em> know that $A$ has happened, we have to <em>scale</em> this probability with the probability of $A$. Thus, $$
\Pr(A \cap B)= \Pr(B\mid A)\Pr(A) \text{.}
$$
[ You can imagine $A$ and $B$ to be some shapes, both inside some larger shape $\Omega$. $\Pr(A\cap B)$ is then the <em>percentage</em> of the area of $\Omega$ that is covered by both $A$ and $B$, $\Pr(A)$ the percentage of the area of $\Omega$ covered by $A$, and $\Pr(B\mid A)$ is the percentage of the area of $A$ covered by $B$. ]</p>
<p>If $A,B$ are independent, we can combine these two results to get $$
\Pr(A\cap B) = \Pr(A)\Pr(B) \text{.}
$$</p>
|
probability | <p>Two events are mutually exclusive if they can't both happen.</p>
<p>Independent events are events where knowledge of the probability of one doesn't change the probability of the other.</p>
<p>Are these definitions correct? If possible, please give more than one example and counterexample. </p>
| <p>Yes, that's fine.</p>
<p>Events are mutually exclusive if the occurrence of one event excludes the occurrence of the other(s). Mutually exclusive events cannot happen at the same time. For example: when tossing a coin, the result can either be <code>heads</code> or <code>tails</code> but cannot be both.</p>
<p>$$\left.\begin{align}P(A\cap B) &= 0 \\ P(A\cup B) &= P(A)+P(B)\\ P(A\mid B)&=0 \\ P(A\mid \neg B) &= \frac{P(A)}{1-P(B)}\end{align}\right\}\text{ mutually exclusive }A,B$$</p>
<p>Events are independent if the occurrence of one event does not influence (and is not influenced by) the occurrence of the other(s). For example: when tossing two coins, the result of one flip does not affect the result of the other.</p>
<p>$$\left.\begin{align}P(A\cap B) &= P(A)P(B) \\ P(A\cup B) &= P(A)+P(B)-P(A)P(B)\\ P(A\mid B)&=P(A) \\ P(A\mid \neg B) &= P(A)\end{align}\right\}\text{ independent }A,B$$</p>
<p>This of course means mutually exclusive events are not independent, and independent events cannot be mutually exclusive. (Events of measure zero excepted.)</p>
| <p>After reading the answers above I still could not understand clearly the difference between mutually exclusive AND independent events. I found a nice answer from Dr. Pete posted on <a href="https://web.archive.org/web/20180128173051/http://mathforum.org/library/drmath/view/69825.html" rel="nofollow noreferrer">math forum</a>. So I attach it here so that op and many other confused guys like me could save some of their time.</p>
<blockquote>
<p><strong>If two events A and B are independent</strong> a real-life example is the following. Consider a fair coin and a fair
six-sided die. Let event A be obtaining heads, and event B be rolling
a 6. Then we can reasonably assume that events A and B are
independent, because the outcome of one does not affect the outcome of
the other. The probability that both A and B occur is</p>
<p>P(A and B) = P(A)P(B) = (1/2)(1/6) = 1/12.</p>
<p><strong>An example of a mutually exclusive event</strong> is the following. Consider a
fair six-sided die as before, only in addition to the numbers 1
through 6 on each face, we have the property that the even-numbered
faces are colored red, and the odd-numbered faces are colored green.
Let event A be rolling a green face, and event B be rolling a 6. Then</p>
<p>P(B) = 1/6</p>
<p>P(A) = 1/2</p>
<p>as in our previous example. But it is obvious that events A and B
cannot simultaneously occur, since rolling a 6 means the face is red,
and rolling a green face means the number showing is odd. Therefore</p>
<p>P(A and B) = 0.</p>
<p>Therefore, we see that a mutually exclusive pair of nontrivial events
are also necessarily dependent events. This makes sense because if A
and B are mutually exclusive, then if A occurs, then B cannot also
occur; and vice versa. This stands in contrast to saying the outcome
of A does not affect the outcome of B, which is independence of
events.</p>
</blockquote>
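<p>Both computations in the quoted example can be replayed with a short simulation (a sketch; the green faces are the odd numbers, as above):</p>
<pre><code>import random

random.seed(0)
trials = 10**6

# Independent pair: a coin shows heads AND a die shows 6.
indep = sum(random.random() &lt; 0.5 and random.randint(1, 6) == 6
            for _ in range(trials))

# Mutually exclusive pair: one die shows a green (odd) face AND a 6.
rolls = [random.randint(1, 6) for _ in range(trials)]
excl = sum(r % 2 == 1 and r == 6 for r in rolls)

print(indep / trials)  # close to 1/12
print(excl / trials)   # exactly 0
</code></pre>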
|
logic | <p>Stephen Hawking believes that Gödel's Incompleteness Theorem makes the search for a 'Theory of Everything' impossible. He reasons that because there exist mathematical results that cannot be proven, there exist physical results that cannot be proven as well. Exactly how valid is his reasoning? How can he apply a mathematical theorem to an empirical science?</p>
| <p><a href="http://www.damtp.cam.ac.uk/events/strings02/dirac/hawking/">Hawking's argument</a> relies on several assumptions about a "Theory of Everything". For example, Hawking states that a Theory of Everything would have to not only predict what we think of as "physical" results, it would also have to predict mathematical results. Going further, he states that a Theory of Everything would be a finite set of rules which can be used effectively in order to provide the answer to any physical question including many purely mathematical questions such as the Goldbach conjecture. If we accept that characterization of a Theory of Everything, then we don't need to worry about the incompleteness theorem, because Church's and Turing's solutions to the <a href="http://en.wikipedia.org/wiki/Entscheidungsproblem">Entscheidungsproblem</a> also show that there is no such effective system. </p>
<p>But it is far from clear to me that a Theory of Everything would be able to provide answers to arbitrary mathematical questions. And it is not clear to me that a Theory of Everything would be effective. However, if we make the definition of what we mean by "Theory of Everything" strong enough then we will indeed set the goal so high that it is unattainable. </p>
<p>To his credit, Hawking does not talk about results being "unprovable" in some abstract sense. He assumes that a Theory of Everything would be a particular finite set of rules, and he presents an argument that no such set of rules would be sufficient. </p>
| <p>@Sperners Lemma's comment should really be promoted to an answer. For indeed, it is a fairly gross misunderstanding of what Gödel's theorem says to summarize it as asserting that "there exist mathematical results that cannot be proven", for the reason he briefly indicates.</p>
<p>And incidentally, though it is a quite different issue, a Theory of Everything in the standard sense surely doesn't have to entail that every physical truth could be "proven". Let's bow to the wisdom of Wikipedia which asserts </p>
<blockquote>
<p>A theory of everything (ToE) or final theory is a putative theory of
theoretical physics that fully explains and links together all known
physical phenomena, and predicts the outcome of any experiment that
could be carried out in principle.</p>
</blockquote>
<p>So NB a ToE is a body of <em>laws</em> which (<em>if</em> we assume that they are deterministic) will imply lots of <em>conditionals</em> of the form "if this happens, then that happens". But a ToE which wraps up all the <em>laws</em> into one neat package needn't tell us the contingent <em>initial conditions</em>, so (even if it is deterministic) need not tell us all the physical facts. </p>
|
game-theory | <p>In <span class="math-container">$\mathbb{R}^2$</span>, a wolf is trying to catch <strong>two</strong> sheep. At time <span class="math-container">$0$</span> the wolf's at <span class="math-container">$(0,0)$</span> and the sheep are at <span class="math-container">$(1,0)$</span>. The animals are moving continuously and react instantaneously according to each other's positions. Wolf speed is <span class="math-container">$1$</span> and sheep speed is <span class="math-container">$1/2$</span>. The wolf catches a sheep if their distance is <span class="math-container">$0$</span>.</p>
<p>The wolf wants to catch the pair of sheep in minimum time. The sheep want to maximize that time.</p>
<p><strong>Question:</strong> How does everyone move?</p>
<hr />
<p><strong>Technical note</strong></p>
<p>To those who find the descriptions above somewhat ambiguous and subject to possible misinterpretations, we can reframe some terms in a more rigorous manner:</p>
<ul>
<li><p><strong>Continuous movement:</strong> we use <span class="math-container">$w(t)$</span>, <span class="math-container">$s_1(t)$</span> and <span class="math-container">$s_2(t)$</span> to denote the animals' positions at time <span class="math-container">$t$</span>. Call the functions <span class="math-container">$w, s_1, s_2$</span> the wolf path or the sheep path. They satisfy <span class="math-container">$$\vert w(t)-w(s)\vert \leq \vert t-s\vert, \vert s_i(t)-s_i(s)\vert \leq \frac{1}{2}\vert t-s\vert, i=1,2, \forall t,s\geq0,$$</span> with initial conditions: <span class="math-container">$w(0)=(0,0)$</span>, <span class="math-container">$s_1(0)=s_2(0)=(1,0)$</span></p>
</li>
<li><p><strong>Instantaneous reaction:</strong> intuitively we want each animal's <strong>choice of path (strategy)</strong> be as free as possible <strong>influenced only by the other players paths up to this moment</strong>. Let <span class="math-container">$W$</span> be the set of all wolf paths and <span class="math-container">$S_i$</span> the set of all sheep <span class="math-container">$i$</span> paths. Then</p>
<ul>
<li><strong>The wolf's strategy</strong> is a function <span class="math-container">$f_w$</span> from <span class="math-container">$S_1\times S_2$</span> to <span class="math-container">$W$</span> such that if <span class="math-container">$s,s'\in S_1\times S_2$</span> agree on <span class="math-container">$[0,t]$</span>, then <span class="math-container">$f_w(s)$</span> and <span class="math-container">$f_w(s')$</span> also agree on <span class="math-container">$[0,t]$</span>, <span class="math-container">$\forall t$</span>.</li>
<li><strong>The sheep's strategy</strong> is a function <span class="math-container">$f_{s}$</span> from <span class="math-container">$W$</span> to <span class="math-container">$S_1\times S_2$</span> such that if <span class="math-container">$w, w'\in W$</span> agree on <span class="math-container">$[0,t]$</span>, then <span class="math-container">$f_{s}(w)$</span> and <span class="math-container">$f_{s}(w')$</span> also agree on <span class="math-container">$[0,t]$</span>, <span class="math-container">$\forall t$</span>.</li>
</ul>
</li>
</ul>
| <p>(Too long for a comment.) As it is discussed in comments, the upper bound is <span class="math-container">$6$</span>. The wolf, being faster than the sheep by <span class="math-container">$1/2$</span>, will catch the nearer sheep within <span class="math-container">$2$</span> time units. Each sheep travels no more than <span class="math-container">$1$</span> during this time, therefore the distance to the other sheep is at most <span class="math-container">$2$</span>, and the wolf will catch it within <span class="math-container">$4$</span> more time units.</p>
<p>My observation is that this upper bound doesn't depend on metrics. Also for Manhattan <span class="math-container">$L_1$</span> metrics this bound is reachable. If two sheep go in two different constant directions other than directly to the wolf (e. g., along the line <span class="math-container">$x = 1$</span> in the opposite directions), then the wolf will catch one of them exactly after <span class="math-container">$2$</span> time units (at the point <span class="math-container">$(1, \pm 1)$</span>). The second one will be at distance <span class="math-container">$2$</span> at that moment and will not need to change anything to be alive for <span class="math-container">$4$</span> more time units (will be caught at the point <span class="math-container">$(1, \mp 3)$</span>). Also both sheep may change direction with a simple restriction: not getting closer to the wolf and to each other.</p>
<p>For Chebyshov <span class="math-container">$L_{\infty}$</span> metrics situation is rather similar, however the strategy is a bit different. Now both sheep should move away from the wolf and at the same time away from each other. This is possible only when moving along lines <span class="math-container">$y = x - 1$</span> and <span class="math-container">$y = 1 - x$</span> with increasing <span class="math-container">$x$</span>. Then the wolf will catch one of them at the point <span class="math-container">$(2, \pm 1)$</span>. The second sheep may vary its <span class="math-container">$x$</span>-speed keeping the same <span class="math-container">$y$</span>-speed to be caught at <span class="math-container">$(x_2, \mp 3)$</span> for some <span class="math-container">$x_2 \in [0, 4]$</span>. Also the second sheep could start changing <span class="math-container">$x$</span>-speed before the wolf catches the first sheep. However it changes only the range of <span class="math-container">$x_2$</span> to <span class="math-container">$[-1, 4]$</span>, not the total time.</p>
<p>If we have <span class="math-container">$3$</span> sheep instead of <span class="math-container">$2$</span> then situation will be rather similar for Manhattan <span class="math-container">$L_1$</span> metrics. Simple upper bound is <span class="math-container">$18$</span> and it is easily reachable. However <span class="math-container">$3$</span> sheep for Chebyshov <span class="math-container">$L_{\infty}$</span> metrics and <span class="math-container">$4$</span> sheep for Manhattan <span class="math-container">$L_1$</span> metrics are not so easy cases.</p>
<p>In <span class="math-container">$k$</span>-dimensional space <span class="math-container">$s \le 2k-1$</span> sheep have a strategy to reach simple upper bound of <span class="math-container">$2 \cdot 3^{s - 1}$</span> time units for Manhattan <span class="math-container">$L_1$</span> metrics and <span class="math-container">$s \le 2^{k - 1}$</span> sheep have such strategy for Chebyshov <span class="math-container">$L_{\infty}$</span> metrics.</p>
| <p>Inspired by @10010100102ohno from a <a href="https://puzzling.stackexchange.com/a/120730/2722">puzzling stack exchange thread</a>, I tried to code it up (though I'm not nearly as good a programmer; note I started with some GPT4 code). Here is what I came up with:</p>
<p><a href="https://jsfiddle.net/pbzn1Lva/" rel="nofollow noreferrer">https://jsfiddle.net/pbzn1Lva/</a></p>
<p>The wolf strategy is simply to follow the closest sheep. The sheep strategy is to initially head in opposite directions, and then the sheep closer to the wolf slowly turns in a direction heading directly away from the wolf, and the farther sheep slowly turns in a direction away from the other sheep. This leads to a time of about 4.6. Neither the sheep nor the wolf movement is optimal, but it was the best I could do and should be in the ballpark.</p>
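<p>For those who would rather not read the JavaScript, here is a stripped-down Python version of the same idea. It is a sketch only: the turning rate, step size and capture threshold are ad-hoc choices, and it simplifies the sheep strategy so that both sheep just turn away from the wolf, so the printed time is only in the ballpark of the figure above.</p>
<pre><code>import numpy as np

dt = 0.001
wolf = np.array([0.0, 0.0])
sheep = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
dirs = [np.array([0.0, 1.0]), np.array([0.0, -1.0])]  # sheep split apart

def unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

t, caught = 0.0, [False, False]
while not all(caught):
    alive = [i for i in (0, 1) if not caught[i]]
    # Wolf strategy: head toward the closest surviving sheep at speed 1.
    target = min(alive, key=lambda i: np.linalg.norm(sheep[i] - wolf))
    wolf = wolf + dt * unit(sheep[target] - wolf)
    for i in alive:
        # Sheep strategy: slowly blend the heading toward "away from the wolf".
        dirs[i] = unit(0.99 * dirs[i] + 0.01 * unit(sheep[i] - wolf))
        sheep[i] = sheep[i] + 0.5 * dt * dirs[i]  # sheep speed 1/2
        if np.linalg.norm(sheep[i] - wolf) &lt; 1e-3:
            caught[i] = True
    t += dt

print(t)  # total capture time with these ad-hoc parameters
</code></pre>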
|
probability | <p>I give you a hat which has <span class="math-container">$10$</span> coins inside of it. <span class="math-container">$1$</span> out of the <span class="math-container">$10$</span> have two heads on it, and the rest of them are fair. You draw a coin at random from the jar and flip it <span class="math-container">$5$</span> times. If you flip heads <span class="math-container">$5$</span> times in a row, what is the probability that you get heads on your next flip?</p>
<p>I tried to approach this question by using Bayes: Let <span class="math-container">$R$</span> be the event that the coin with both heads is drawn and <span class="math-container">$F$</span> be the event that <span class="math-container">$5$</span> heads are flipped in a row. Then
<span class="math-container">$$\begin{align*}
P(R|F) &= \frac{P(F|R)P(R)}{P(F)} \\ &= \frac{1\cdot 1/10}{1\cdot 1/10 + 1/2^5\cdot 9/10} \\ &= 32/41
\end{align*}$$</span></p>
<p>Thus the probability that you get heads on the next flip is</p>
<p><span class="math-container">$$\begin{align*}
P(H|R)P(R) + P(H|R')P(R') &= 1\cdot 32/41 + 1/2\cdot (1 - 32/41) \\ &= 73/82
\end{align*}$$</span></p>
<p>However, according to my friend, this is a trick question because the flip after the first <span class="math-container">$5$</span> flips is independent of the first <span class="math-container">$5$</span> flips, and therefore the correct probability is
<span class="math-container">$$1\cdot 1/10+1/2\cdot 9/10 = 11/20$$</span></p>
<p>Is this true or not?</p>
| <p>To convince your friend that he is wrong, you could modify the question:</p>
<blockquote>
<p>A hat contains ten 6-sided dice. Nine dice have scores 1, 2, 3, 4, 5, 6, and the other die has 6 on every face. Randomly choose one die, toss it <span class="math-container">$1000$</span> times, and write down the results. Repeat this procedure many times. Now look at the trials in which the first <span class="math-container">$999$</span> tosses were all 6's: what proportion of those trials have the <span class="math-container">$1000^\text{th}$</span> toss also being a 6?</p>
</blockquote>
<p>Common sense tells us that the proportion is extremely close to <span class="math-container">$1$</span>. <span class="math-container">$\left(\text{The theoretical proportion is}\dfrac{6^{1000}+9}{6^{1000}+54}.\right)$</span></p>
<p>But according to your friend's method, the theoretical proportion would be <span class="math-container">$\dfrac{1}{10}(1)+\dfrac{9}{10}\left(\dfrac{1}{6}\right)=\dfrac{1}{4}$</span>. I think your friend would see that this is clearly wrong.</p>
| <p>The main idea behind this problem is a topic known as <em>predictive posterior probability</em>.</p>
<p>Let <span class="math-container">$P$</span> denote the probability of the coin you randomly selected landing on heads.</p>
<p>Then <span class="math-container">$P$</span> is a random variable supported on <span class="math-container">$\{0.5,1\}$</span> and has pdf <span class="math-container">$$\mathbb{P}(P=0.5)=0.9\\ \mathbb{P}(P=1)=0.1$$</span></p>
<p>Let <span class="math-container">$E=\{HHHHH\}$</span> denote the "evidence" you witness. The <em>posterior</em> probability that <span class="math-container">$P=0.5$</span> given this evidence <span class="math-container">$E$</span> can be evaluated using Bayes' rule:</p>
<p><span class="math-container">$$\begin{eqnarray*}\mathbb{P}(P=0.5|E) &=& \frac{\mathbb{P}(E|P=0.5)\mathbb{P}(P=0.5)}{\mathbb{P}(E|P=0.5)\mathbb{P}(P=0.5)+\mathbb{P}(E|P=1)\mathbb{P}(P=1)} \\ &=& \frac{(0.5)^5\times 0.9}{(0.5)^5\times 0.9+1^5\times 0.1} \\ &=& \frac{9}{41}\end{eqnarray*}$$</span> The <em>posterior</em> pdf of <span class="math-container">$P$</span> given <span class="math-container">$E$</span> is supported on <span class="math-container">$\{0.5,1\}$</span> and has pdf <span class="math-container">$$\mathbb{P}\left(P=0.5|E\right)=\frac{9}{41} \\ \mathbb{P}\left(P=1|E\right)=\frac{32}{41}$$</span> Finally, the <em>posterior predictive probability</em> that we flip heads again given this evidence <span class="math-container">$E$</span> is <span class="math-container">$$\begin{eqnarray*}\mathbb{P}(\text{Next flip heads}|E) &=& \mathbb{P}(\text{Next flip heads}|E,P=0.5)\mathbb{P}(P=0.5|E)+\mathbb{P}(\text{Next flip heads}|E,P=1)\mathbb{P}(P=1|E) \\ &=& \frac{1}{2}\times \frac{9}{41}+1\times \frac{32}{41} \\ &=& \frac{73}{82}\end{eqnarray*}$$</span></p>
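<p>As a final sanity check, both the posterior <span class="math-container">$32/41$</span> and the predictive probability <span class="math-container">$73/82 \approx 0.89$</span> agree with a quick Monte Carlo sketch:</p>
<pre><code>import random

random.seed(42)
hits = next_heads = 0
for _ in range(10**6):
    p = 1.0 if random.randrange(10) == 0 else 0.5    # 1-in-10 two-headed coin
    if all(random.random() &lt; p for _ in range(5)):   # evidence: HHHHH
        hits += 1
        next_heads += random.random() &lt; p            # the sixth flip
print(next_heads / hits)  # close to 73/82, i.e. about 0.8902
</code></pre>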
|
combinatorics | <p>Let $H_n$ denote the $n$th harmonic number; i.e., $H_n = \sum\limits_{i=1}^n \frac{1}{i}$. I've got a couple of proofs of the following limiting expression, which I don't think is that well-known: $$\lim_{n \to \infty} \left(H_n - \frac{1}{2^n} \sum_{k=1}^n \binom{n}{k} H_k \right) = \log 2.$$
I'm curious about other ways to prove this expression, and so I thought I would ask here to see if anybody knows any or can think of any. I would particularly like to see a combinatorial proof, but that might be difficult given that we're taking a limit and we have a transcendental number on one side. I'd like to see any proofs, though. I'll hold off from posting my own for a day or two to give others a chance to respond first.</p>
<p>(The probability tag is included because the expression whose limit is being taken can also be interpreted probabilistically.)</p>
<p><HR></p>
<p>(<strong>Added</strong>: I've accepted Srivatsan's first answer, and I've posted my two proofs for those who are interested in seeing them. </p>
<p>Also, the sort of inverse question may be of interest. Suppose we have a function $f(n)$ such that $$\lim_{n \to \infty} \left(f(n) - \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} f(k) \right) = L,$$ where $L$ is finite and nonzero. What can we say about $f(n)$? <a href="https://math.stackexchange.com/questions/8415/asymptotic-difference-between-a-function-and-its-binomial-average">This question was asked</a> and <a href="https://math.stackexchange.com/questions/8415/asymptotic-difference-between-a-function-and-its-binomial-average/22582#22582">answered a while back</a>; it turns out that $f(n)$ must be $\Theta (\log n)$. More specifically, we must have $\frac{f(n)}{\log_2 n} \to L$ as $n \to \infty$.) </p>
| <p>I made an quick estimate in my comment. The basic idea is that the binomial distribution $2^{−n} \binom{n}{k}$ is concentrated around $k= \frac{n}{2}$. Simply plugging this value in the limit expression, we get $H_n−H_{n/2} \sim \ln 2$ for large $n$. Fortunately, formalizing the intuition isn't that hard. </p>
<p>Call the giant sum $S$. Notice that $S$ can be written as $\newcommand{\E}{\mathbf{E}}$
$$
\sum_{k=0}^{\infty} \frac{1}{2^{n}} \binom{n}{k} (H(n) - H(k)) = \sum_{k=0}^{\infty} \Pr[X = k](H(n) - H(k)) = \E \left[ H(n) - H(X) \right],
$$
where $X$ is distributed according to the binomial distribution $\mathrm{Bin}(n, \frac12)$. We need the following two facts about $X$: </p>
<ul>
<li>With probability $1$, $0 \leqslant H(n) - H(X) \leqslant H(n) = O(\ln n)$.</li>
<li>From the <a href="http://en.wikipedia.org/wiki/Bernstein_inequalities_%28probability_theory%29" rel="noreferrer">Bernstein inequality</a>, for any $\varepsilon \gt 0$, we know that $X$ lies in the range $\frac{1}{2}n (1\pm \varepsilon)$, except with probability at most $e^{- \Omega(n \varepsilon^2) }$. </li>
</ul>
<p>Since the function $x \mapsto H(n) - H(x)$ is monotone decreasing, we have
$$
S \leqslant \color{Red}{H(n)} \color{Blue}{-H\left( \frac{n(1-\varepsilon)}{2} \right)} + \color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}.
$$
Plugging in the standard estimate $H(n) = \ln n + \gamma + O\Big(\frac1n \Big)$ for the harmonic sum, we get:
$$
\begin{align*}
S
&\leqslant \color{Red}{\ln n + \gamma + O \Big(\frac1n \Big)} \color{Blue}{- \ln \left(\frac{n(1-\varepsilon)}{2} \right) - \gamma + O \Big(\frac1n \Big)} +\color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}
\\ &\leqslant \ln 2 - \ln (1- \varepsilon) + o_{n \to \infty}(1)
\leqslant \ln 2 + O(\varepsilon) + o_{n \to \infty}(1). \tag{1}
\end{align*}
$$</p>
<p>An analogous argument gets the lower bound
$$
S \geqslant \ln 2 - \ln (1+\varepsilon) - o_{n \to \infty}(1) \geqslant \ln 2 - O(\varepsilon) - o_{n \to \infty}(1). \tag{2}
$$
Since the estimates $(1)$ and $(2)$ hold for all $\varepsilon > 0$, it follows that $S \to \ln 2$ as $n \to \infty$. </p>
| <p>Here's a different proof. We will simplify the second term as follows:
$$
\begin{eqnarray*}
\frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \frac{1}{t} \right]
&=&
\frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \int_{0}^1 x^{t-1} dx \right]
\\ &=&
\frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} x^{t-1} \right] dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \cdot \frac{x^k-1}{x-1} \right] dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \frac{\sum\limits_{k=0}^n \binom{n}{k} x^k- \sum\limits_{k=0}^n \binom{n}{k}}{x-1} dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \frac{(x+1)^n- 2^n}{x-1} dx.
\end{eqnarray*}
$$</p>
<p>Make the substitution $y = \frac{x+1}{2}$, so the new limits are now $1/2$ and $1$. The integral then changes to:
$$
\begin{eqnarray*}
\int_{1/2}^1 \frac{y^n- 1}{y-1} dy
&=&
\int_{1/2}^1 (1+y+y^2+\ldots+y^{n-1}) dy
\\ &=&
\left. y + \frac{y^2}{2} + \frac{y^3}{3} + \ldots + \frac{y^n}{n} \right|_{1/2}^1
\\ &=&
H_n - \sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i.
\end{eqnarray*}
$$
Notice that conveniently $H_n$ is the first term in our function. Rearranging, the expression under the limit is equal to:
$$
\sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i.
$$
The final step is to note that this is just the $n$th partial sum of the Taylor series expansion of $f(y) = -\ln(1-y)$ at $y=1/2$. Therefore, as $n \to \infty$, this sequence approaches the value $$-\ln \left(1-\frac{1}{2} \right) = \ln 2.$$</p>
<p><em>ADDED:</em> As Didier's comments hint, this proof also shows that the given sequence, call it $u_n$, is monotonic and is hence always smaller than $\ln 2$. Moreover, we also have a tight error estimate:
$$
\frac{1}{n2^n} < \ln 2 - u_n < \frac{2}{n2^n}, \ \ \ \ (n \geq 1).
$$</p>
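<p>Both the limit and this error estimate are easy to check numerically; here is a sketch using exact rational arithmetic (to avoid cancellation between the two nearly equal terms):</p>
<pre><code>from fractions import Fraction
from math import comb, log

def H(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

for n in (10, 20, 40, 80):
    # u_n = H_n - 2^{-n} * sum_k C(n,k) H_k, computed exactly
    u = H(n) - sum(comb(n, k) * H(k) for k in range(1, n + 1)) / 2**n
    print(n, float(u), log(2) - float(u))  # gap shrinks roughly like n/2^n
</code></pre>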
|
linear-algebra | <p>I use linear algebra quite a lot in applications, but I do not have a very strong abstract algebra background (i.e. around the level of an intro course, just knowing the basics of rings, groups, ideals, the first isomorphism theorem).</p>
<p>Of course, the latter is far more general, so I was interested in how it can given insight into the former.
For instance, I thought it was interesting to look at the set of rotations as the group $SO(3)$.</p>
<p>So, essentially I'm curious as to what insights one can glean from "viewing linear algebra though an abstract algebra lens".
Not necessarily practical tools, but rather more important are ones that aid in understanding or intuition (i.e. provide something new).</p>
<p>As a particular example, what are the relations of vector spaces to these abstract structures, and is viewing them from that point of view helpful?</p>
| <p>The "Rank-Nullity" Theorem from Linear Algebra can be viewed as a corollary of the First Isomorphism Theorem, which may be more intuitive.</p>
<p>Suppose $T:V\to V$ is a linear transformation. Then by First Isomorphism Theorem, $V/\ker T\cong T(V)$.</p>
<p>So $\dim V-\rm{Null}(T)=\rm{Rank}(T)$, which is the Rank-Nullity Theorem.</p>
<p>This may be more intuitive than the traditional Linear Algebra proof of Rank-Nullity Theorem (see <a href="https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem" rel="noreferrer">https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem</a>).</p>
| <p>There are tons of ways that abstract algebra informs linear algebra; here is just one example. Suppose you have a vector space $V$ over a field $k$ with a linear map $T:V\to V$. Given a polynomial $p(x)$ with coefficients in $k$, you get a linear map $p(T):V\to V$. This makes $V$ a module over the ring $k[x]$ of polynomials with coefficients in $k$: given $p(x)\in k[x]$ and $v\in V$, the scalar multiplication $p(x)\cdot v$ is just $p(T)v$. In particular, multiplication by $x$ corresponds to the linear map $T$.</p>
<p>Conversely, given a $k[x]$-module $V$, it is a $k$-vector space by considering multiplication by constant polynomials, and multiplication by $x$ gives a $k$-linear map $T:V\to V$. So any $k[x]$-module $V$ can be thought of as a vector space together with a linear map $V\to V$, and this is inverse to the construction described in the previous paragraph.</p>
<p>So vector spaces $V$ together with a chosen linear map $V\to V$ are essentially the same thing as $k[x]$-modules. This is really powerful because $k[x]$ is a very nice ring: it is a principal ideal domain, and there is a very nice <a href="https://en.wikipedia.org/wiki/Structure_theorem_for_finitely_generated_modules_over_a_principal_ideal_domain" rel="noreferrer">classification of all finitely generated modules over any principal ideal domain</a>. This gives us a classification of all linear maps from a finite-dimensional vector space to itself, up to isomorphism. When you represent linear maps by matrices, "up to isomorphism" ends up meaning "up to conjugation". So this gives a classification of all $n\times n$ matrices over a field $k$, up to conjugation by invertible $n\times n$ matrices, called the <a href="https://en.wikipedia.org/wiki/Frobenius_normal_form" rel="noreferrer">rational canonical form</a>. In the case that $k$ is algebraically closed (for instance, $k=\mathbb{C}$), you can go further and get the very powerful <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="noreferrer">Jordan normal form</a> from this classification.</p>
<p>Now of course, these canonical forms for matrices can be obtained without all this language of abstract algebra: you can formulate the arguments in this particular case purely in the language of matrices if you really want to. But the general framework provided by abstract algebra provides a lot of context that can make these ideas easier to understand (for instance, you can think of this classification of matrices as being very closely analogous to the classification of finite abelian groups, since that is just the same result applied to the ring $\mathbb{Z}$ instead of $k[x]$). It also provides a framework to generalize these results to more difficult situations. For instance, if you want to consider a vector space together with two linear maps which commute with each other, that is now equivalent to a $k[x,y]$-module. There is not such a nice classification in this case, but the language of rings and modules lets you formulate and think about this question using the same tools as when you had just one linear map.</p>
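<p>To see this classification at work computationally: sympy can put a concrete matrix into Jordan normal form (a small sketch; the matrix below is just an arbitrary example, and <code>jordan_form</code> returns $P$ and $J$ with $A = PJP^{-1}$):</p>
<pre><code>import sympy as sp

# An arbitrary 4x4 example; over an algebraically closed field its
# conjugacy class is completely described by the Jordan form J.
A = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])

P, J = A.jordan_form()
sp.pprint(J)                             # block-diagonal Jordan blocks
print(sp.simplify(P * J * P.inv() - A))  # zero matrix: A = P J P^{-1}
</code></pre>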
|
differentiation | <p>As referred <a href="https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="noreferrer">in Wikipedia</a> (see the specified criteria there), L'Hôpital's rule says,</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}
$$</span></p>
<p>As</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f'(x)}{g'(x)}=
\lim_{x\to c}\frac{\int f'(x)\ dx}{\int g'(x)\ dx}
$$</span></p>
<p>Just out of curiosity, can you integrate instead of taking a derivative?
Does</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int f(x)\ dx}{\int g(x)\ dx}
$$</span></p>
<p>work? (given the specifications in Wikipedia only the other way around: the function must be integrable by some method, etc.) When? Would it have any practical use? I hope this doesn't sound stupid, it just occurred to me, and I can't find the answer myself.</p>
<h2>Edit</h2>
<p>(In response to the comments and answers.)</p>
<p>Take 2 functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span>. When is</p>
<p><span class="math-container">$$
\lim_{x\to c}\frac{f(x)}{g(x)}=
\lim_{x\to c}\frac{\int_x^c f(a)\ da}{\int_x^c g(a)\ da}
$$</span></p>
<p>true?</p>
<p>Not saying that it always works; however, it may sometimes help. Sometimes one can apply l'Hôpital's rule even when an indeterminate form isn't reached. Maybe this only works in exceptional cases.</p>
<p>Most functions are simplified by taking their derivative, but it may happen with integration as well (say <span class="math-container">$\int \frac1{x^2}\ dx=-\frac1x+C$</span>, which is simpler). In a few of those cases, integrating the functions in both numerator and denominator may simplify things.</p>
<p>What properties do those (hypothetical) functions need to make it work? And even in those cases, is it ever useful? How? Why/why not?</p>
| <p>With L'Hôpital's rule your limit must be of the form <span class="math-container">$\dfrac 00$</span>, so your antiderivatives must take the value <span class="math-container">$0$</span> at <span class="math-container">$c$</span>. In this case you have <span class="math-container">$$\lim_{x \to c} \frac{ \int_c^x f(t) \, dt}{\int_c^x g(t) \, dt} = \lim_{x \to c} \frac{f(x)}{g(x)}$$</span> provided <span class="math-container">$g$</span> satisfies the usual hypothesis that <span class="math-container">$g(x) \not= 0$</span> in a deleted neighborhood of <span class="math-container">$c$</span>.</p>
| <p>I recently came across a situation where it was useful to go through exactly this process, so (although I'm certainly late to the party) here's an application of L'Hôpital's rule in reverse:</p>
<p>We have a list of distinct real numbers $\{x_0,\dots, x_n\}$.
We define the $(n+1)$th <em>nodal polynomial</em> as
$$
\omega_{n+1}(x) = (x-x_0)(x-x_1)\cdots(x-x_n)
$$
Similarly, the $n$th nodal polynomial is
$$
\omega_n(x) = (x-x_0)\cdots (x-x_{n-1})
$$
Now, suppose we wanted to calculate $\omega_{n+1}'(x_i)/\omega_{n}'(x_i)$ when $0 \leq i \leq n-1$. Now, we could calculate $\omega_{n}'(x_i)$ and $\omega_{n+1}'(x_i)$ explicitly and go through some tedious algebra, or we could note that because these derivatives are non-zero, we have
$$
\frac{\omega_{n+1}'(x_i)}{\omega_{n}'(x_i)} =
\lim_{x\to x_i} \frac{\omega_{n+1}'(x)}{\omega_{n}'(x)} =
\lim_{x\to x_i} \frac{\omega_{n+1}(x)}{\omega_{n}(x)} =
\lim_{x\to x_i} (x-x_{n}) = x_i-x_{n}
$$
It is important that both $\omega_{n+1}$ and $\omega_n$ are zero at $x_i$, so that in applying L'Hôpital's rule, we intentionally produce an indeterminate form. It should be clear though that doing so allowed us to cancel factors and thus (perhaps surprisingly) saved us some work in the end. </p>
<p>So would this method have practical use? It certainly did for me!</p>
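<p>The shortcut can also be double-checked symbolically (a sympy sketch with arbitrary sample nodes; here $n=4$, so the predicted value at $x_1$ is $x_1 - x_4$):</p>
<pre><code>import sympy as sp

x = sp.symbols("x")
nodes = [0, 1, 3, 7, 12]                        # distinct nodes x_0, ..., x_4
w_np1 = sp.Mul(*[x - a for a in nodes])         # omega_{n+1}
w_n = sp.Mul(*[x - a for a in nodes[:-1]])      # omega_n

i = 1
ratio = sp.diff(w_np1, x).subs(x, nodes[i]) / sp.diff(w_n, x).subs(x, nodes[i])
print(ratio, nodes[i] - nodes[-1])              # both print -11
</code></pre>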
<hr>
<p><strong>PS:</strong> If anyone is wondering, this was a handy step in proving a recursive formula involving Newton's divided differences.</p>
|
probability | <h2>Does this reflect the real world and what is the empirical evidence behind this?</h2>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/f/f9/Largenumbers.svg" alt="Wikipedia illustration"></p>
<p><strong><em>Layman here so please avoid abstract math in your response.</em></strong></p>
<p>The Law of Large Numbers states that the <em>average</em> of the results from multiple trials will tend to converge to its expected value <em>(e.g. 0.5 in a coin toss experiment)</em> as the sample size increases. The way I understand it, while the first 10 coin tosses may result in an average closer to 0 or 1 rather than 0.5, after 1000 tosses a statistician would expect the average to be very close to 0.5 and definitely 0.5 with an infinite number of trials.</p>
<p>Given that a coin has no memory and each coin toss is independent, what physical laws would determine that the average of all trials will eventually reach 0.5? More specifically, why does a statistician believe that a random event with 2 possible outcomes will have a close to equal amount of both outcomes over, say, 10,000 trials? What prevents the coin from falling on heads 9900 times instead of 5200?</p>
<p>Finally, since gambling and insurance institutions rely on such expectations, are there any experiments that have conclusively shown the validity of the LLN in the real world?</p>
<p><strong>EDIT:</strong> I do differentiate between the LLN and the Gambler's fallacy. My question is NOT if or why any specific outcome or series of outcomes become more likely with more trials--that's obviously false--but why <strong><em>the mean of all outcomes</em></strong> tends toward the expected value?</p>
<p><strong>FURTHER EDIT:</strong> LLN seems to rely on two assumptions in order to work:</p>
<ol>
<li>The universe is <strong>indifferent</strong> towards the result of any one trial, because each outcome is equally likely</li>
<li>The universe is <strong>NOT indifferent</strong> towards any one particular outcome coming up too frequently and dominating the rest.</li>
</ol>
<p>Obviously, we as humans would label 50/50 or a similar distribution of a coin toss experiment <strong><em>"random"</em></strong>, but if heads or tails turns out to be say 60-70% after thousands of trials, we would suspect there is something wrong with the coin and it isn't fair. Thus, if the universe is truly <em>indifferent</em> towards the average of large samples, there is no way we can have true randomness and consistent predictions--there will always be a suspicion of bias unless the total distribution is not somehow kept in check by something that preserves the relative frequencies.</p>
<p><strong>Why is the universe NOT indifferent towards big samples of coin tosses? What is the objective reason for this phenomenon?</strong></p>
<p><strong>NOTE:</strong> <em>A good explanation would not be circular: justifying probability with probabilistic assumptions (e.g. "it's just more likely"). Please check your answers, as most of them fall into this trap.</em></p>
| <p>Reading between the lines, it sounds like you are committing the fallacy of the layman interpretation of the "law of averages": that if a coin comes up heads 10 times in a row, then it needs to come up tails <em>more</em> often from then on, in order to balance out that initial asymmetry.</p>
<p>The real point is that no divine presence needs to take corrective action in order for the <em>average</em> to stabilize. The simple reason is attenuation: once you've tossed the coin another 1000 times, the effect of those initial 10 heads has been diluted to mean almost nothing. What used to look like 100% heads is now a small blip only strong enough to move the needle from 50% to 51%.</p>
<p>Now combine this observation with the easily verified fact that 9900 out of 10000 heads is simply a less common combination than 5000 out of 10000. The reason for that is combinatorial: there is simply less freedom in hitting an extreme target than a moderate one.</p>
<p>To take a tractable example, suppose I ask you to flip a coin 4 times and get 4 heads. If you've flip tails even once, you've failed. But if instead I ask you to aim for 2 heads, you still have options (albeit slimmer) no matter how the first two flips turn out. Numerically we can see that 2 out of 4 can be achieved in 6 ways:
HHTT, HTHT, HTTH, THHT, THTH, TTHH. But the 4 out of 4 goal can be achieved in only one way: HHHH. If you work out the numbers for 9900 out of 10000 versus 5000 out of 10000 (or any specific number in that neighbourhood), that disparity becomes truly immense.</p>
<p>To summarize: it takes no conscious effort to get an empirical average to tend towards its expected value. In fact it would be fair to think in the exact opposite terms: the effect that requires conscious effort is forcing the empirical average to <em>stray</em> from its expectation.</p>
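<p>The attenuation effect is easy to watch in a simulation (a Python sketch: pretend the first ten flips were all heads, then keep flipping a fair coin and print the running average):</p>
<pre><code>import random

random.seed(7)
heads, flips = 10, 10  # start with ten artificial heads in a row
for total in (100, 1000, 10_000, 100_000):
    while flips &lt; total:
        heads += random.random() &lt; 0.5
        flips += 1
    print(flips, heads / flips)  # drifts toward 0.5 with no "correction"
</code></pre>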
| <p>Nice question! In the real world, we don't get to let $n \to \infty$, so the question of why the LLN should be of any comfort is important. </p>
<p>The short answer to your question is that we <em>cannot</em> empirically verify the LLN, since we can never perform an infinite number of experiments. It's a theoretical idea that is very well founded, but, like all applied mathematics, the question of whether or not a particular model or theory holds is a perennial concern.</p>
<p>A more useful law from a statistical standpoint is the Central Limit Theorem and the various probability inequalities (Chebyshev, Markov, Chernoff, etc). These allow us to place bounds on or approximate the <em>probability</em> of our sample average being far from the true value for a <em>finite</em> sample.</p>
<p>As for an actual experiment to test the LLN, one can hardly do better than <a href="http://en.wikipedia.org/wiki/John_Edmund_Kerrich">John Kerrich's 10,000 coin flip experiment</a> -- he got 50.67% heads!</p>
<p>So, in general, I would say LLN is empirically well supported by the fact that scientists from all fields rely upon sample averages to estimate models, and this approach has been largely successful, so the sample averages appear to be converging nicely for finite, and <em>feasible</em>, sample sizes.</p>
<p>There are "pathological" cases that one can construct (I'll spare you the details) where one needs astronomical sample sizes to get a reasonable probability of being close to the true mean. This is apparent if you are using the Central Limit Theorem, but the LLN is simply not informative enough to give me much comfort in day-to-day practice.</p>
<p><strong>The physical basis for probability</strong></p>
<p>It seems you still have an issue with <em>why</em> long-run averages exist in the real world, apart from the theory of probability regarding the behavior of these averages <em>assuming</em> long-run averages exist. Let me state a fact that may help you:</p>
<p><strong>Fact</strong> Neither probability theory nor the existence of long-run averages <strong>requires randomness</strong>!</p>
<p>The determinism vs. indeterminism debate is for philosophers, not mathematicians. The notion of probability as a physical observable comes from <em>ignorance</em> <strong>or</strong> <em>absence</em> of the detailed dynamics of what you are observing. You could just as easily apply probability theory to a boring ol' pendulum as to the stock market or coin flips...it's just that with pendulums we have a nice, detailed theory that allows us to make precise estimates of future observations. I have no doubt that a full physical analysis of a coin flip would allow us to predict which face would come up...but in reality, we will never know this! </p>
<p>This isn't an issue though. We don't need to assume a guiding hand or true indeterminism to apply probability theory. Let's say that coin flips are truly deterministic; then we can still apply probability theory meaningfully if we assume a couple of basic things:</p>
<ol>
<li>The underlying process is $ergodic$...okay, this is a bit technical, but it basically means that the process dynamics are stable over the long term (e.g., we are not flipping coins in a hurricane or where tornados pop in and out of the vicinity!). Note that I said nothing about randomness...this could be a totally deterministic, albeit very complex, process...all we need is that the dynamics are stable (i.e., we could write down a series of equations with specific parameters for the coin flips and they wouldn't change from flip to flip).</li>
<li>The values the process can take on at any time are "well behaved". Basically, like I said earlier wrt the Cauchy...the system should not produce values that consistently exceed $\approx n$ times the sum of all previous observations. It may happen once in a while, but it should become very rare, very fast (precise definition is somewhat technical).</li>
</ol>
<p>With these two assumptions, we now have the physical basis for the existence of a long-run average of a physical process. Now, if it's complicated, then instead of using physics to model it exactly, we can apply probability theory to describe the statistical properties of this process (i.e., aggregated over many observations). </p>
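<p>To make the "no randomness required" point concrete, here is a small sketch (my example, not part of the original answer) using the logistic map <span class="math-container">$x_{k+1} = 4x_k(1-x_k)$</span>, a completely deterministic but ergodic system whose time average converges to <span class="math-container">$1/2$</span> (the mean of its invariant arcsine density):</p>
<pre><code># Deterministic chaos with a stable long-run average: no randomness anywhere
x, total, n = 0.3, 0.0, 1_000_000
for _ in range(n):
    x = 4.0 * x * (1.0 - x)   # logistic map at parameter 4
    total += x
print(total / n)              # ~0.5, the mean of the invariant density
</code></pre>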
<p>Note that the above is independent from whether or not we have selected the correct probability model. Models are made to match reality...reality does not conform itself to our models. Therefore, it is the job of the <em>modeler</em>, not nature or divine providence, to ensure that the results of the model match the observed outcomes.</p>
<p>Hope this helps clarify when and how probability applies to the real world.</p>
|
matrices | <p>Given a square matrix,
is the transpose of the inverse equal to the inverse of the transpose?</p>
<p><span class="math-container">$$
(A^{-1})^T = (A^T)^{-1}
$$</span></p>
| <p>Is $(A^{-1})^T = (A^T)^{-1}$ you ask.</p>
<p>Well
$$\begin{align}
A^T(A^{-1})^T = (A^{-1}A)^{T} = I^T = I \\
(A^{-1})^TA^T = (AA^{-1})^{T} = I^T = I
\end{align}
$$</p>
<p>This proves that the inverse of $A^T$ is $(A^{-1})^T$. So the answer to your question is yes.</p>
<p>Here I have used that
$$
A^TB^T = (BA)^T.
$$
And we have used that the inverse of a matrix $A$ is exactly (by definition) the matrix $B$ such that $AB = BA = I$.</p>
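<p>If you want a quick numerical sanity check of the identity (just an illustration, assuming numpy; the proof above is the real argument):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))        # a random matrix is almost surely invertible

lhs = np.linalg.inv(A).T      # (A^{-1})^T
rhs = np.linalg.inv(A.T)      # (A^T)^{-1}

print(np.allclose(lhs, rhs))  # True, up to floating-point error
</code></pre>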
| <p>Given that $A\in \mathbb{R}^{n\times n}$ is invertible, $(A^{-1})^T = (A^T)^{-1}$ holds.</p>
<p>Proof:</p>
<p>$A$ is invertible and $\textrm{rank }A = n = \textrm{rank }A^T,$
so $A^T$ is invertible as well. Conclusion:
$$(A^{-1})^T = (A^{-1})^TA^T(A^T)^{-1} = (AA^{-1})^T(A^T)^{-1} = \textrm{id}^T(A^T)^{-1} = (A^T)^{-1}.$$</p>
|
linear-algebra | <p>If I have a covariance matrix for a data set and I multiply it by one of its eigenvectors. Let's say the eigenvector with the highest eigenvalue. The result is the eigenvector or a scaled version of the eigenvector. </p>
<p>What does this really tell me? Why is this the principal component? What property makes it a principal component? Geometrically, I understand that the principal component (eigenvector) will be sloped at the general slope of the data (loosely speaking). Again, can someone help me understand why this happens? </p>
| <p><strong>Short answer:</strong> The eigenvector with the largest eigenvalue is the direction along which the data set has the maximum variance. Meditate upon this.</p>
<p><strong>Long answer:</strong> Let's say you want to reduce the dimensionality of your data set, say down to just one dimension. In general, this means picking a unit vector <span class="math-container">$u$</span>, and replacing each data point, <span class="math-container">$x_i$</span>, with its projection along this vector, <span class="math-container">$u^T x_i$</span>. Of course, you should choose <span class="math-container">$u$</span> so that you retain as much of the variation of the data points as possible: if your data points lay along a line and you picked <span class="math-container">$u$</span> orthogonal to that line, all the data points would project onto the same value, and you would lose almost all the information in the data set! So you would like to maximize the <em>variance</em> of the new data values <span class="math-container">$u^T x_i$</span>. It's not hard to show that if the covariance matrix of the original data points <span class="math-container">$x_i$</span> was <span class="math-container">$\Sigma$</span>, the variance of the new data points is just <span class="math-container">$u^T \Sigma u$</span>. As <span class="math-container">$\Sigma$</span> is symmetric, the unit vector <span class="math-container">$u$</span> which maximizes <span class="math-container">$u^T \Sigma u$</span> is nothing but the eigenvector with the largest eigenvalue.</p>
<p>If you want to retain more than one dimension of your data set, in principle what you can do is first find the largest principal component, call it <span class="math-container">$u_1$</span>, then subtract that out from all the data points to get a "flattened" data set that has <em>no</em> variance along <span class="math-container">$u_1$</span>. Find the principal component of this flattened data set, call it <span class="math-container">$u_2$</span>. If you stopped here, <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span> would be a basis of the two-dimensional subspace which retains the most variance of the original data; or, you can repeat the process and get as many dimensions as you want. As it turns out, all the vectors <span class="math-container">$u_1, u_2, \ldots$</span> you get from this process are just the eigenvectors of <span class="math-container">$\Sigma$</span> in decreasing order of eigenvalue. That's why these are the principal components of the data set.</p>
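<p>A short numerical sketch of this recipe (my addition, assuming numpy; the data and names are made up) makes the "direction of maximum variance" concrete:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data, stretched along the direction (1, 1)
X = rng.normal(size=(500, 2)) * [3.0, 0.5]          # long and short axes
X = X @ (np.array([[1, 1], [-1, 1]]) / np.sqrt(2))  # rotate by 45 degrees

Sigma = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order

u = eigvecs[:, -1]          # eigenvector with the largest eigenvalue
proj = X @ u                # 1-D projection u^T x_i

print(u)                            # ~ ±(0.707, 0.707): the data's long axis
print(proj.var(ddof=1), eigvals[-1])  # projection variance = top eigenvalue
</code></pre>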
| <p>Some informal explanation:</p>
<p>Covariance matrix $C_y$ (it is symmetric) encodes the correlations between variables of a vector. In general a covariance matrix is non-diagonal (i.e., it has nonzero correlations between different variables).</p>
<p><strong>But it's interesting to ask, is it possible to diagonalize the covariance matrix by changing basis of the vector?</strong>. In this case there will be no (i.e. zero) correlations between different variables of the vector. </p>
<p>Diagonalization of this symmetric matrix is possible with eigenvalue decomposition.
You may read <em><a href="https://arxiv.org/pdf/1404.1100.pdf" rel="noreferrer">A Tutorial on Principal Component Analysis</a></em> (pages 6-7), by Jonathon Shlens, to get a good understanding. </p>
|
probability | <p>I wish to use the <a href="http://en.wikipedia.org/wiki/Computational_formula_for_the_variance">Computational formula of the variance</a> to calculate the variance of a normal-distributed function. For this, I need the expected value of $X$ as well as the one of $X^2$. Intuitively, I would have assumed that $E(X^2)$ is always equal to $E(X)^2$. In fact, I cannot imagine how they could be different.</p>
<p>Could you explain how this is possible, e.g. with an example?</p>
| <p>Assume $X$ is a random variable that is 0 half the time and 1 half the time. Then
$$EX = 0.5 \times 0 + 0.5 \times 1 = 0.5$$
so that
$$(EX)^2 = 0.25,$$
whereas on the other hand
$$E(X^2) = 0.5 \times 0^2 + 0.5 \times 1^2 = 0.5.$$
By the way, since $Var(X) = E[(X - \mu)^2] = \sum_x (x - \mu)^2 P(x)$, the only way the variance could ever be 0 in the discrete case is when $X$ is constant.</p>
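<p>A quick simulation (mine, just for intuition) shows the same gap for this 0/1 variable:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=1_000_000)  # 0 or 1, each about half the time

print(np.mean(x) ** 2)   # (EX)^2  ~ 0.25
print(np.mean(x ** 2))   # E(X^2)  ~ 0.5
</code></pre>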
| <p>Let <span class="math-container">$EX=\mu$</span> and <span class="math-container">$E(X-\mu)^2=\sigma^2$</span>, then</p>
<p><span class="math-container">$$
\begin{aligned}
EX^2 &= E[(X-\mu)+\mu]^2\\
&=E(X-\mu)^2+2E[(X-\mu)\mu]+E(\mu^2)\\
&=\sigma^2+2\mu E(X-\mu)+\mu^2\\
&=\sigma^2+\mu^2
\end{aligned}
$$</span></p>
<p>So <span class="math-container">$EX^2 =\sigma^2+\mu^2$</span>, no matter the distribution, and <span class="math-container">$EX^2\ne(EX)^2$</span> unless the variance equals zero.</p>
|
probability | <p>I have been using Sebastian Thrun's course on AI and I have encountered a slightly difficult problem with probability theory.</p>
<p>He poses the following statement:</p>
<p>$$
P(R \mid H,S) = \frac{P(H \mid R,S) \; P(R \mid S)}{P(H \mid S)}
$$</p>
<p>I understand he used Bayes' Rule to get the RHS equation, but fail to see how he did this. If somebody could provide a breakdown of the application of the rule in this problem that would be great.</p>
| <p>Taking it one step at a time:
$$\begin{align}
\mathsf P(R\mid H, S) & = \frac{\mathsf P(R,H,S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R, S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R\mid S)\,\mathsf P(S)}{\mathsf P(H, S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\,\mathsf P(R\mid S)}{\mathsf P(H\mid S)}\frac{\mathsf P(S)}{\mathsf P(S)}
\\[1ex] & =\frac{\mathsf P(H\mid R,S)\;\mathsf P(R\mid S)}{\mathsf P(H\mid S)}
\end{align}$$</p>
| <p>You don't really need Bayes' Theorem. Just apply the definition of conditional probability in two ways. Firstly,</p>
<p>\begin{eqnarray*}
P(R\mid H,S) &=& \dfrac{P(R,H\mid S)}{P(H\mid S)} \\
&& \\
\therefore\quad P(R,H\mid S) &=& P(R\mid H,S)P(H\mid S).
\end{eqnarray*}</p>
<p>Secondly,</p>
<p>\begin{eqnarray*}
P(H\mid R,S) &=& \dfrac{P(R,H\mid S)}{P(R\mid S)} \\
&& \\
\therefore\quad P(R,H\mid S) &=& P(H\mid R,S)P(R\mid S).
\end{eqnarray*}</p>
<p>Combine these two to get the result.</p>
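<p>To watch the algebra hold numerically, here is a small sketch of my own with an arbitrary made-up joint table over three binary variables <span class="math-container">$R, H, S$</span>:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))    # joint table P(R, H, S), indexed [r, h, s]
p /= p.sum()

r, h, s = 1, 1, 1            # check the identity at one outcome

lhs = p[r, h, s] / p[:, h, s].sum()                 # P(R | H, S)
p_h_given_rs = p[r, h, s] / p[r, :, s].sum()        # P(H | R, S)
p_r_given_s  = p[r, :, s].sum() / p[:, :, s].sum()  # P(R | S)
p_h_given_s  = p[:, h, s].sum() / p[:, :, s].sum()  # P(H | S)

print(np.isclose(lhs, p_h_given_rs * p_r_given_s / p_h_given_s))  # True
</code></pre>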
|
combinatorics | <p>This is from the lecture notes in this course of discrete mathematics I am following. </p>
<p>The professor is writing about how fast binomial coefficients grow.</p>
<blockquote>
<p>So, suppose you had 2 minutes to save your life and had to estimate, up to a factor of $100$, the value of, say, $\binom{63}{19}$. How would you do it? I will leave this (hopefully intriguing!) question hanging and maybe come back to the topic of efficiently estimating binomial coefficients later.</p>
</blockquote>
<p>Any ideas/hints on how to do it?</p>
| <p>Two minutes is a lot of calculation: I'd write the 19 numbers in the numerator and the 19 numbers in the denominator, and cancel whatever can be cancelled in under a minute.</p>
<p>You get:</p>
<p>$$ 3^3\times5^2\times7^2\times23\times29\times31\times47\times53\times59\times61$$</p>
<p>We approximate this as:</p>
<p>$$20\times 20 \times 50 \times 20 \times 20 \times 20 \times 50 \times 50\times 50 \times 50=10^{15}$$</p>
<p>The actual value is $6.131164307078475\times 10^{15}$</p>
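<p>With a computer rather than two minutes of panic, the estimate is easy to audit (a quick check of my own):</p>
<pre><code>from math import comb

exact = comb(63, 19)
print(exact)          # 6131164307078475
print(exact / 1e15)   # ~6.13: the 10**15 estimate is well within a factor of 100
</code></pre>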
| <p>In the numerator we have $63!/(63-19)!\approx(63-9)^{19}=54^{19}\approx50^{20}=100^{20}/2^{20}\approx10^{34}$.</p>
<p>In the denominator we have $19!\approx\left(\frac{1+19}2\right)^{19}=10^{19}$.</p>
<p>So the quotient is roughly $10^{15}$.</p>
<p>I'm not sure I could have done that in two minutes under the threat of death, though.</p>
|
probability | <p>Let <span class="math-container">$X_0$</span> be the unit disc, and consider the process of "cutting out circles", where to construct <span class="math-container">$X_n$</span> you select a uniform random point <span class="math-container">$x \in X_{n-1}$</span>, and cut out the largest circle with center <span class="math-container">$x$</span>. To illustrate this process, we have the following graphic:</p>
<p><a href="https://i.sstatic.net/D1mbZ.png" rel="noreferrer"><img src="https://i.sstatic.net/D1mbZ.png" alt="cutting out circles" /></a></p>
<p>where the graphs are respectively showing one sample of <span class="math-container">$X_1,X_2,X_3,X_{100}$</span> (the orange parts have been cut out).</p>
<p>Can we prove we eventually cut everything out? Formally, is the following true
<span class="math-container">$$\lim_{n \to \infty} \mathbb{E}[\text{Area}(X_n)] = 0$$</span></p>
<p>where <span class="math-container">$\mathbb{E}$</span> denotes we are taking the expectation value. Doing simulations, this seems true; in fact <span class="math-container">$\mathbb{E}[\text{Area}(X_n)]$</span> seems to decay with some power law, but after 4 years I still don't really know how to prove this :(. The main thing you need to rule out is that <span class="math-container">$X_n$</span> doesn't get too skinny too quickly, it seems.</p>
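<p>(A minimal sketch of the kind of simulation I mean, using the fact that the largest circle centered at <span class="math-container">$x$</span> inside <span class="math-container">$X_{n-1}$</span> has radius <span class="math-container">$\min(1-|x|,\ \min_i(|x-c_i|-r_i))$</span>, where the <span class="math-container">$(c_i, r_i)$</span> are the previously removed discs:)</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

def sample_point(discs):
    """Rejection-sample a uniform point of the remaining region."""
    while True:
        p = rng.uniform(-1, 1, 2)
        if p @ p <= 1 and all((p - c) @ (p - c) > r * r for c, r in discs):
            return p

def area_estimate(discs, m=20_000):
    """Monte Carlo estimate of Area(X_n): 4 * (fraction of the square kept)."""
    pts = rng.uniform(-1, 1, (m, 2))
    keep = (pts ** 2).sum(axis=1) <= 1
    for c, r in discs:
        keep &= ((pts - c) ** 2).sum(axis=1) > r * r
    return 4 * keep.mean()

discs = []
for n in range(100):
    p = sample_point(discs)
    # Largest circle centered at p avoiding the boundary and all earlier cuts
    r = min(1 - np.linalg.norm(p),
            min((np.linalg.norm(p - c) - r0 for c, r0 in discs),
                default=np.inf))
    discs.append((p, r))

print(area_estimate(discs))  # one sample of Area(X_100); average many runs
                             # to estimate E[Area(X_100)]
</code></pre>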
| <p><strong>This proof is incomplete, as noted in the comments and at the end of this answer</strong></p>
<p>Apologies for the length. I tried to break it up into sections so it's easier to follow and I tried to make all implications really clear. Happy to revise as needed.</p>
<p>I'll start with some definitions to keep things clear.</p>
<p>Let</p>
<ul>
<li>The area of a set <span class="math-container">$S \subset \mathbb{R^2}$</span> be the 2-Lebesgue measure <span class="math-container">$\lambda^*_2(S):= A(S)$</span></li>
<li><span class="math-container">$p_n$</span> be the point selected from <span class="math-container">$X_{n-1}$</span> such that <span class="math-container">$P(p_n \in Q) = \frac{A(Q)}{A({X_{n-1}})} \; \forall Q\in \mathcal{B}(X_{n-1})$</span></li>
<li><span class="math-container">$C_n(p)$</span> is the maximal circle drawn around <span class="math-container">$p \in X_{n-1}$</span> that fits in <span class="math-container">$X_{n-1}$</span>: <span class="math-container">$C_n(p) = \max_r \textrm{Circle}(p,r):\textrm{Circle}(p,r) \subseteq X_{n-1}$</span></li>
<li><span class="math-container">$A_n = A(C_n(p_n)) $</span> be the area of the circle drawn around <span class="math-container">$p_n$</span> <span class="math-container">$($</span>i.e., <span class="math-container">$X_n = X_{n-1}\setminus C_n(p_n))$</span></li>
</ul>
<p>We know that <span class="math-container">$0 \leq A_n \leq 1$</span> (normalizing the area of the unit disc to <span class="math-container">$1$</span>). By the definition of the generating process we can also make the stronger statement that <span class="math-container">$A_n > 0$</span> almost surely, which is part of Lemma 1 below.</p>
<p>Also, since you're using a uniform probability measure over (well-behaved) subsets of <span class="math-container">$X_{n-1}$</span> as the distribution of <span class="math-container">$p_n$</span> we have <span class="math-container">$P(p_n \in B) := \frac{A(B)}{A(X_{n-1})}\;\;\forall B\in \sigma\left(X_{n-1}\right) \implies P(p_1 \in S) = P(S) \;\;\forall S \in \sigma(X_0)$</span>.</p>
<p><strong>Lemma 1</strong>: <span class="math-container">$P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$</span></p>
<p><em>Proof</em>: We'll show this by proving</p>
<ol>
<li><span class="math-container">$P(A_n>0)=1\;\forall n$</span></li>
<li><span class="math-container">$(1) \implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span></li>
<li><span class="math-container">$(2) \implies P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1$</span></li>
</ol>
<p><span class="math-container">$A_n = 0$</span> can only happen if <span class="math-container">$p_n$</span> falls directly on the boundary of <span class="math-container">$X_n$</span> (i.e., <span class="math-container">$p_n \in \partial_{X_{n-1}} \subset \mathbb{R^2})$</span>. However, since each <span class="math-container">$\partial_{X_{n-1}}$</span> is the union of a finite number of smooth curves (circular arcs) in <span class="math-container">$\mathbb{R^2}$</span> we have <span class="math-container">${A}(\partial_{X_{n-1}})=0 \;\forall n \implies P(p_n \in \partial_{X_{n-1}})=0\;\;\forall n \implies P(A_n>0)=1\;\forall n$</span></p>
<p>If <span class="math-container">$P(A_n>0)=1\;\forall n$</span> then since <span class="math-container">$A(X_i) = A(X_{i-1}) - A_n\;\forall i$</span> we have that <span class="math-container">$A(X_{i-1}) - A(X_i) = A_n\;\forall i$</span></p>
<p>Therefore, <span class="math-container">$P(A(X_{i-1}) - A(X_i) > 0\;\forall i) = P(A_n>0\;\forall i)=1\implies P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span></p>
<p>If <span class="math-container">$P\left(A(X_{i})\leq A(X_{i-1}) \;\;\forall i \right)=1$</span> then <span class="math-container">$(A(X_{i}))_{i\in \mathbb{N}}$</span> is a monotonic decreasing sequence almost surely.</p>
<p>Since <span class="math-container">$A(X_i)\geq 0\;\;\forall i$</span>, the sequence <span class="math-container">$(A(X_{i}))_{i\in \mathbb{N}}$</span> is bounded from below, so the monotone convergence theorem implies <span class="math-container">$P\left(\exists L \in [0,\infty): \lim \limits_{n \to \infty} A(X_{n}) = L\right)=1\;\;\square$</span></p>
<p>As you've stated, what we want to show is that eventually we've cut away all the area. There are two senses in which this can be true:</p>
<ol>
<li>Almost all sequences <span class="math-container">$\left(A(X_i)\right)_1^{\infty}$</span> converge to <span class="math-container">$0$</span>: <span class="math-container">$P\left(\lim \limits_{n\to\infty}A(X_n) = 0\right) = 1$</span></li>
<li><span class="math-container">$\left(A(X_i)\right)_1^{\infty}$</span> converges in mean to <span class="math-container">$0$</span>: <span class="math-container">$\lim \limits_{n\to \infty} \mathbb{E}[A(X_n)] = 0$</span></li>
</ol>
<p>In general, these two senses of convergence do not imply each other. However, with a couple additional conditions we can show almost sure convergence implies convergence in mean. Your question is about (2), and we will get there via proving (1) <em>plus</em> a sufficient condition for <span class="math-container">$(1)\implies (2)$</span>.</p>
<p>I'll proceed as follows:</p>
<ol>
<li>Show <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span> using Borel-Cantelli Lemma</li>
<li>Use the fact that <span class="math-container">$0<A(X_n)\leq 1$</span> to apply the Dominated Convergence Theorem to show <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></li>
</ol>
<hr />
<h2>Step 1: <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span></h2>
<p>If <span class="math-container">$\lim_{n\to \infty} A(X_n) = A_R > 0$</span> then there is some set <span class="math-container">$R$</span> with positive area <span class="math-container">$A(R)=A_R >0$</span> that is a subset of <em>all</em> <span class="math-container">$X_n$</span> (i.e.,<span class="math-container">$\exists R \subset X_0: A(R)>0\;\textrm{and}\;R \subset X_i\;\;\forall i> 0)$</span></p>
<p>Let's call a set <span class="math-container">$S\subset X_0:A(S)>0,\;S \subset X_i\;\;\forall i> 0$</span> a <em>reserved set</em> <span class="math-container">$(R)$</span> since we are "setting it aside". In the rest of this proof, the letter <span class="math-container">$R$</span> will refer to a reserved set.</p>
<p>Let's define the set <span class="math-container">$Y_n = X_n \setminus R$</span>, and the event <span class="math-container">$T_n:=p_n \in Y_{n-1}$</span> then</p>
<p><strong>Lemma 2</strong>: <span class="math-container">$P\left(\bigcap_1^n T_i \right) \leq A(Y_0)^n = (1 - A_R)^n\;\;\forall n>0$</span></p>
<p><em>Proof</em>: We'll prove this by induction. Note that <span class="math-container">$P(T_1) = A(Y_0)$</span> and <span class="math-container">$P(T_1\cap T_2) = P(T_2|T_1)P(T_1)$</span>. We know that if <span class="math-container">$T_1$</span> has happened, then <strong>Lemma 1</strong> implies that <span class="math-container">$A(Y_{1}) < A(Y_0)$</span>. Therefore</p>
<p><span class="math-container">$$P(T_2|T_1)<P(T_1)=A(Y_0)\implies P\left(T_1 \bigcap T_2\right)\leq A(Y_0)^2$$</span></p>
<p>If <span class="math-container">$P(\bigcap_{i=1}^n T_i) \leq A(Y_0)^n$</span> then by a similar argument we have</p>
<p><span class="math-container">$$P\left(\bigcap_{i=1}^{n+1} T_i\right) = P\left( T_{n+1} \left| \;\bigcap_{i=1}^n T_i\right. \right)P\left(\bigcap_{i=1}^n T_i\right)\leq A(Y_0)A(Y_0)^n = A(Y_0)^{n+1}\;\;\square$$</span></p>
<p>However, to allow <span class="math-container">$R$</span> to persist, we must ensure not only that <span class="math-container">$T_n$</span> occurs for all <span class="math-container">$n>0$</span> but also that each <span class="math-container">$p_n$</span> doesn't fall in some neighborhood <span class="math-container">$\mathcal{N}_n(R)$</span> around <span class="math-container">$R$</span>:</p>
<p><span class="math-container">$$\mathcal{N}_n(R):= \mathcal{R}_n\setminus R$$</span>
<span class="math-container">$$\textrm{where}\; \mathcal{R}_n:=\{p \in X_{n-1}: A(C_n(p)\cap R)>0\}\supseteq R$$</span></p>
<p>Let's define the event <span class="math-container">$T'_n:=p_n \in X_{n-1}\setminus \mathcal{R}_n$</span> to capture the above requirement for a particular point <span class="math-container">$p_n$</span>. We then have the following.</p>
<p><strong>Lemma 3</strong>: <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R \implies P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$</span></p>
<p><em>Proof</em>: Assume <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span>. If <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$</span> then <span class="math-container">$P\left(\exists k>0:p_k \in \mathcal{R}_k\right)>0$</span>. By the definition of <span class="math-container">$ \mathcal{R}_k$</span>, <span class="math-container">$A(C_k(p_k)\cap R) > 0$</span>, which means that <span class="math-container">$X_{k}\cap R \subset R \implies A(X_{k}\cap R) < A_R$</span>. By <strong>Lemma 1</strong>, <span class="math-container">$(X_i)_{i \in \mathbb{N}}$</span> is a strictly decreasing sequence of sets, so <span class="math-container">$A(X_{j}\cap R) < A_R \;\;\forall j>k$</span>; therefore, <span class="math-container">$\exists \epsilon > 0: P\left(\lim \limits_{n\to\infty} A(X_n) \le A_R - \epsilon\right)>0$</span>. However, this contradicts our assumption <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span>. Therefore, <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)<1$</span> is false, which implies <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1\;\square$</span></p>
<p><strong>Corollary 1</strong>: <span class="math-container">$P\left(\bigcap \limits_{i \in \mathbb{N}} T_i'\right)=1$</span> is a necessary condition for <span class="math-container">$A(X_n) \overset{a.s.}{\to} A_R$</span></p>
<p><em>Proof</em>: This follows immediately from <strong>Lemma 3</strong> by the logic of material implication: <span class="math-container">$X \implies Y \iff \neg Y \implies \neg X$</span> -- an implication is logically equivalent to its contrapositive.</p>
<p>We can express <strong>Corollary 1</strong> as an event <span class="math-container">$\mathcal{T}$</span> in a probability space <span class="math-container">$\left(X_0^{\mathbb{N}},\mathcal{F},\mathbb{P}\right)$</span> constructed from the sample space of infinite sequences of points <span class="math-container">$p_n \in X_0$</span> where:</p>
<ul>
<li><p><span class="math-container">$X_0^{\mathbb{N}}:=\prod_{i\in\mathbb{N}}X_0$</span> is the set of all sequences of points in the unit disk <span class="math-container">$X_0 \subset \mathbb{R^2}$</span></p>
</li>
<li><p><span class="math-container">$\mathcal{F}$</span> is the product Borel <span class="math-container">$\sigma$</span>-algebra generated by the product topology of all open sets in <span class="math-container">$X_0^{\mathbb{N}}$</span></p>
</li>
<li><p><span class="math-container">$\mathbb{P}$</span> is a probability measure defined on <span class="math-container">$\mathcal{F}$</span></p>
</li>
</ul>
<p>With this space defined, we can define our event <span class="math-container">$\mathcal{T}$</span> as as the intersection of a non-increasing sequence of cylinder sets in <span class="math-container">$\mathcal{F}$</span>:</p>
<p><span class="math-container">$$\mathcal{T}:=\bigcap_{i=1}^{\infty}\mathcal{T}_i \;\;\;\textrm{where } \mathcal{T}_i:=\bigcap_{j=1}^{i} T'_j = \text{Cyl}_{\mathcal{F}}(T'_1,\ldots,T'_i)$$</span></p>
<p><strong>Lemma 4</strong>: <span class="math-container">$\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)\leq (1-A_R)^n$</span></p>
<p><em>Proof</em>: <span class="math-container">$\mathbb{P}(\mathcal{T}_n) = \mathbb{P}(\bigcap_1^n T'_i)$</span> follows from the definition of <span class="math-container">$\mathcal{T}_n$</span>. <span class="math-container">$\mathbb{P}(\bigcap_1^n T'_i)\leq \mathbb{P}\left(\bigcap_1^n T_i\right)$</span> follows immediately from <span class="math-container">$R\subseteq \mathcal{R}_n\;\;\forall n\;\square$</span></p>
<p><strong>Lemma 5</strong>: <span class="math-container">$\mathcal{T} \subseteq \limsup \limits_{n\to \infty} \mathcal{T}_n$</span></p>
<p><em>Proof</em>: By definition <span class="math-container">$\mathcal{T} \subset \mathcal{T}_i \;\forall i>0$</span>. Since <span class="math-container">$\left(\mathcal{T}_i\right)_{i \in \mathbb{N}}$</span> is nonincreasing, we have <span class="math-container">$\liminf \limits_{i\to \infty} \mathcal{T}_i = \limsup \limits_{i\to \infty}\mathcal{T}_i = \lim \limits_{i\to \infty}\mathcal{T}_i = \mathcal{T}\;\;\square$</span></p>
<p><strong>Lemma 6</strong>: <span class="math-container">$\mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0\;\;\forall A_R \in (0,1]$</span></p>
<p><em>Proof</em>: From <strong>Lemma 4</strong>
<span class="math-container">$$\sum \limits_{i=1}^{\infty} \mathbb{P}\left(\mathcal{T}_i\right) \leq \sum \limits_{i=1}^{\infty} (1-A_R)^i = \sum \limits_{i=0}^{\infty} \left[(1-A_R) \cdot (1-A_R)^i\right] =$$</span>
<span class="math-container">$$ \frac{1-A_R}{1-(1-A_R)} = \frac{1-A_R}{A_R}=\frac{1}{A_R}-1 < \infty \;\; \forall A_R \in (0,1]\implies$$</span>
<span class="math-container">$$ \mathbb{P}\left(\limsup \limits_{i\to \infty} \mathcal{T}_i\right) = 0 \;\; \forall A_R \in (0,1]\textrm{ (Borel-Cantelli) }\;\square$$</span></p>
<p><strong>Lemma 6</strong> implies that only <em>finitely many</em> <span class="math-container">$\mathcal{T}_i$</span> will occur with probability 1. Specifically, for almost every sequence <span class="math-container">$\omega \in X_0^{\infty}$</span> there exists <span class="math-container">$n_{\omega}<\infty$</span> such that <span class="math-container">$p_{n_{\omega}} \in \mathcal{R}_{n_{\omega}}$</span>.</p>
<p>We can define this as a stopping time for each sequence <span class="math-container">$\omega \in X_0^{\infty}$</span> as follows:</p>
<p><span class="math-container">$$\tau(\omega) := \max \limits_{n \in \mathbb{N}} \{n:\omega \in \mathcal{T}_n\}$$</span></p>
<p><strong>Corollary 2</strong>: <span class="math-container">$\mathbb{P}(\tau < \infty) = 1$</span></p>
<p><em>Proof</em>: This follows immediately from <strong>Lemma 6</strong> and the definition of <span class="math-container">$\tau$</span></p>
<p><strong>Lemma 7</strong>: <span class="math-container">$P(\mathcal{T}) = 0\;\;\forall R:A(R)>0$</span></p>
<p><em>Proof</em>: This follows from <strong>Lemma 5</strong> and <strong>Lemma 6</strong></p>
<hr />
<p><strong>This is where I'm missing a step</strong>
For Theorem 1 below to work, Lemma 7 + Corollary 1 are not sufficient.</p>
<p>Just because each <em>fixed</em> subset <span class="math-container">$R$</span> of positive area has probability zero of persisting doesn't imply that the event that <em>some</em> positive-area subset persists has probability zero: there are uncountably many candidate sets. An analogous situation occurs with continuous random variables -- each individual point has probability zero, yet when we draw from the distribution we nonetheless get a point.</p>
<p>What I don't know are the sufficient conditions for the following:</p>
<p><span class="math-container">$P(\omega)=0 \;\forall \omega\in \Omega: A(\omega)=R \implies P(\{\omega: A(\omega)=R\})=0$</span></p>
<hr />
<hr />
<p><strong>Theorem 1</strong>: <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span></p>
<p><em>Proof</em>: <strong>Lemma 7</strong> and <strong>Corollary 1</strong> imply <span class="math-container">$A(X_n)$</span> does <em>not</em> converge to <span class="math-container">$A_R$</span> almost surely, which implies <span class="math-container">$P(A(X_n) \to A_R) < 1 \;\forall A_R > 0$</span>. <strong>Corollary 2</strong> makes the stronger statement that <span class="math-container">$P(A(X_n) \to A_R)=0\;\forall A_R>0$</span> (i.e., almost never), since we know that the sequence of circle centers <span class="math-container">$(p_n)$</span>, viewed as a stochastic process, will almost surely hit <span class="math-container">$R$</span> (again, since we've defined <span class="math-container">$R$</span> such that <span class="math-container">$A(R)>0)$</span>. <span class="math-container">$P(A(X_n) \to A_R) = 0 \;\forall A_R>0$</span> with <strong>Lemma 1</strong> implies that <span class="math-container">$P(A(X_n) \to 0) = 1$</span>. Therefore, <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0\;\square$</span></p>
<hr />
<h2>Step 2: <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></h2>
<p>We will appeal to the <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables#Properties_4" rel="nofollow noreferrer">Dominated Convergence Theorem</a> to prove this result.</p>
<p><strong>Theorem 2</strong>: <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span></p>
<p><em>Proof</em>: From <strong>Theorem 1</strong> we've shown that <span class="math-container">$A(X_n) \overset{a.s.}{\to} 0$</span>. Given an almost surely constant random variable <span class="math-container">$Z\overset{a.s.}{=}c$</span>, we have <span class="math-container">$c>1 \implies |A(X_n)| < Z\;\forall n$</span>. In addition, <span class="math-container">$\mathbb{E}[Z]=c<\infty$</span>, hence <span class="math-container">$Z$</span> is <span class="math-container">$\mathbb{P}$</span>-integrable. Therefore, <span class="math-container">$\mathbb{E}[A(X_n)] \to 0$</span> by the Dominated Convergence Theorem. <span class="math-container">$\square$</span></p>
| <p>New to this, so not sure about the rigor, but here goes.</p>
<p>Let $A_k$ be the $k$th circle. Assume the area of $\bigcup_{k=1}^n A_k$ does not approach the total area of the circle $A_T$ as $n$ tends towards infinity. Then there must be some area $K$ which is not covered yet cannot harbor a new circle. Let $C = \bigcup_{k=1}^\infty A_k$. Consider a point $P$ such that $d(P,K)=0$ and $d(P,C)>0$. If no such point exists, then $K \subset C$, as $C$ is clearly a closed set of points. If such a point does exist, then another circle with center $P$ and nonzero area can be made to cover part of $K$, and the same logic applies to all possible $K$. Therefore there is no area $K$ which cannot contain a new circle, and by consequence $$\lim_{n\to\infty}\Bigg[\bigcup_{k=1}^n A_k\Bigg] = \big[A_T\big]$$
Since the size of circles is continuous, there must be a set of circles $\{A_k\}_{k=1}^\infty$ such that $\big[A_k\big]=E(\big[A_k\big])$ for each $k \in \mathbb{N}$, and therefore $$\lim_{n\to\infty} E(\big[A_k\big]) = \big[A_k\big] $$</p>
<p><strong>EDIT</strong>: This proof is wrong because I'm bad at probability; working on a new one.</p>
|
logic | <p>I am still an undergraduate student, and so perhaps I just haven't seen enough of the mathematical world. </p>
<p><strong>Question:</strong> What are some examples of mathematical logic solving open problems outside of mathematical logic?</p>
<p>Note that the <a href="//en.wikipedia.org/wiki/Ax%E2%80%93Grothendieck_theorem" rel="nofollow noreferrer">Ax-Grothendieck Theorem</a> <em>would have been</em> a perfect example of this (namely, If $P$ is a polynomial function from $\mathbb{C}^n$ to $\mathbb{C}^n$ and $P$ is injective then $P$ is bijective). However, even though there is a beautiful logical proof of this result, it was first proven not specifically using mathematical logic. I'm curious as to whether there are any results where "the logicians got there first".</p>
<p><strong>Edit 1:Bonus</strong>: I am quite curious if one can post an example from Reverse Mathematics. </p>
<p><strong>Edit 2:</strong><a href="https://math.stackexchange.com/questions/886848/why-exactly-is-whiteheads-problem-undecidable">This post</a> reminded me that the solution to <a href="http://en.wikipedia.org/wiki/Whitehead_problem" rel="nofollow noreferrer">Whitehead's Problem</a> came from logic (a problem in group theory). According to the wikipedia article, the proof by Shelah was 'completely unexpected'. It utilizes the fact that <strong>ZFC+(V=L)</strong> implies every Whitehead group is free while <strong>ZFC+$\neg$CH+MA</strong> implies there exists a Whitehead group which is not free. Since these two separate axiom systems are equiconsistent, Whitehead's problem is undecidable. </p>
<p><strong>Edit 3:</strong> A year later, we have some more examples: </p>
<p>1) Hrushovski's proof of the Mordell-Lang Conjecture for function fields in all characteristics. </p>
<p>2) The André-Oort conjecture by Pila and Tsimerman.</p>
<p>3) Various results in O-minimality including work by Wilkie and van den Dries (as well as others). </p>
<p>4) Zilber's Pseudo-Exponential Field as work towards Schanuel's conjecture. </p>
| <p>I was impressed by Bernstein and Robinson's 1966 proof that if some polynomial of an operator on a Hilbert space is compact then the operator has an invariant subspace. This solved a particular instance of the <a href="http://en.wikipedia.org/wiki/Invariant_subspace_problem" rel="nofollow noreferrer">invariant subspace problem</a>, one of pure operator theory without any hint of logic.</p>
<p>Bernstein and Robinson used a hyperfinite-dimensional Hilbert space, a nonstandard model, and some very metamathematical things like the transfer principle and saturation. Halmos was very unhappy with their proof and eliminated non-standard analysis from it the same year. But the fact remains that the proof was originally found through a non-trivial application of model theory.</p>
<p>Another example is the solution to the <a href="http://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem" rel="nofollow noreferrer">Hilbert's tenth problem</a> by Matiyasevich. Hilbert asked for a procedure to determine whether a given polynomial Diophantine equation is solvable. This was a number theoretic problem, and he did not expect that such procedure can not exist. Proving non-existence though required developing a branch of mathematical logic now called computability theory (by Gödel, Church, Turing and others) that formalizes the notion of algorithm. Matiyasevich showed that any recursively enumerable set can be the solution set for a Diophantine equation, and since not all recursively enumerable sets are computable there can be no solvability algorithm. </p>
<p>This example is typical of how logic serves all parts of mathematics by saving effort on doomed searches for impossible constructions, proofs or counterexamples. For instance, an analyst might ask if the plane can be decomposed into a union of two sets, one at most countable along every vertical line, and the other along every horizontal line. It seems unlikely and people could spend a lot of time trying to disprove it. In vain, because Sierpinski <a href="http://www.ams.org/journals/bull/1936-42-05/S0002-9904-1936-06291-9/S0002-9904-1936-06291-9.pdf" rel="nofollow noreferrer">proved</a> that existence of such a decomposition is equivalent to the continuum hypothesis, and Gödel showed that disproving it is impossible by an elaborate logical construction now called <a href="http://en.wikipedia.org/wiki/Inner_model#Related_ideas" rel="nofollow noreferrer">inner model</a>. As is proving it as Cohen showed by an even more elaborate logical construction called forcing.</p>
<p>A more recent example is the <a href="http://www.ams.org/journals/jams/1996-9-03/S0894-0347-96-00202-0/" rel="nofollow noreferrer">proof of the Mordell-Lang conjecture for function fields by Hrushovski (1996)</a>. The conjecture "<em>is essentially a finiteness statement on the intersection of a subvariety of a semi-Abelian variety with a subgroup of finite rank</em>". In characteristic $0$, and for Abelian varieties or
finitely generated groups the conjecture was proved more traditionally by Raynaud, Faltings and Vojta. They inferred the result for function fields from the one for number fields using a specialization argument, another proof was found by Buium. Abramovich and Voloch proved many cases in characteristic $p$. Hrushovski gave a uniform proof for general semi-Abelian varieties in arbitrary characteristic using "<em>model-theoretic analysis of the kernel of Manin's homomorphism</em>", which involves definable subsets, Morley dimension, $\lambda$-saturated structures, model-theoretic algebraic closure, compactness theorem for first-order theories, etc. </p>
| <p>The <a href="http://en.wikipedia.org/wiki/Tarski-Seidenberg_theorem">Tarski–Seidenberg theorem</a> says that the set of semialgebraic sets is closed under projection. It's a pure real-algebraic statement that was originally proved with logic.</p>
<p>Jacobson says this in chapter 5 of his <em>Basic Algebra I</em>:</p>
<blockquote>
<p>More generally, Tarski's theorem implies his metamathematical principle that any "elementary" sentence of algebra which is true in one real closed field (e.g., the field of real numbers) is true in every real closed field.</p>
</blockquote>
|
geometry | <p>In <a href="https://math.stackexchange.com/questions/38350/n-lines-cannot-divide-a-plane-region-into-x-regions-finding-x-for-n">this</a> post it is mentioned that $n$ straight lines can divide the plane into a maximum number of $(n^{2}+n+2)/2$ different regions. </p>
<p>What happens if we use circles instead of lines? That is, what is the maximum number of regions into which $n$ circles can divide the plane?</p>
<p>After some exploration it seems to me that in order to get maximum division the circles must intersect pairwise, with no two of them tangent, none of them being inside another and no three of them concurrent (that is, no three intersecting at a point).</p>
<p>The answer seems to me to be affirmative, as the number I obtain is $n^{2}-n+2$ different regions. Is that correct?</p>
| <p>For the question stated in the title, the answer is yes, if more is interpreted as "more than or equal to".</p>
<p>Proof: let <span class="math-container">$\Lambda$</span> be a collection of lines, and let <span class="math-container">$P$</span> be the extended two plane (the Riemann sphere). Let <span class="math-container">$P_1$</span> be a connected component of <span class="math-container">$P\setminus \Lambda$</span>. Let <span class="math-container">$C$</span> be a small circle entirely contained in <span class="math-container">$P_1$</span>. Let <span class="math-container">$\Phi$</span> be the <a href="http://en.wikipedia.org/wiki/M%C3%B6bius_transformation" rel="nofollow noreferrer">conformal inversion</a> of <span class="math-container">$P$</span> about <span class="math-container">$C$</span>. Then by elementary properties of conformal inversion, <span class="math-container">$\Phi(\Lambda)$</span> is now a collection of circles in <span class="math-container">$P$</span>. The number of connected components of <span class="math-container">$P\setminus \Phi(\Lambda)$</span> is the same as the number of connected components of <span class="math-container">$P\setminus \Lambda$</span> since <span class="math-container">$\Phi$</span> is continuous. So this shows that <strong>for any collection of lines, one can find a collection of circles that divides the plane into at least the same number of regions</strong>.</p>
<p>Remark: via the conformal inversion, all the circles in <span class="math-container">$\Phi(\Lambda)$</span> thus constructed pass through the center of the circle <span class="math-container">$C$</span>. One can imagine that by perturbing one of the circles somewhat to reduce concurrency, one can increase the number of regions.</p>
<hr />
<p>Another way to think about it is that lines can be approximated by really, really large circles. So starting with a configuration of lines, you can replace the lines with really really large circles. Then in the finite region "close" to where all the intersections are, the number of regions formed is already the same as that coming from lines. But when the circles "curve back", additional intersections can happen and that can only introduce "new" regions.</p>
<hr />
<p>Lastly, <a href="http://mathworld.wolfram.com/PlaneDivisionbyCircles.html" rel="nofollow noreferrer">yes, the number you derived is correct</a>. See <a href="http://oeis.org/A014206" rel="nofollow noreferrer">also this OEIS entry</a>.</p>
| <p>One may deduce the formula $n^{2}-n+2$ as follows: Start with $m$ circles already drawn on the plane with no two of them tangent, none of them being inside another and no three of them concurrent. Then draw the $(m+1)$-th circle $C$ so that it does not violate the properties stated before and see how it helps increase the number of regions. Indeed, we can see that $C$ intersects each of the remaining $m$ circles at two points. Therefore, $C$ is divided into $2m$ arcs, each of which divides in two a region formed previously by the first $m$ circles. But a circle divides the plane into two regions, and so we can count step by step ($m=1,2,\cdots, n$) the total number of regions obtained after drawing the $n$-th circle. That is,
$$
2+2(2-1)+2(3-1)+2(4-1)+\cdots+2(n-1)=n^{2}-n+2
$$</p>
<p>Since $n^{2}-n+2\ge (n^{2}+n+2)/2$ for $n\ge 1$ the answer is affirmative. </p>
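<p>The incremental count is easy to sanity-check with a few lines of Python (a small sketch of mine):</p>
<pre><code>def circle_regions(n):
    """Add circles one at a time: the (m+1)-th circle meets the previous
    m circles in 2m points, so it adds 2m new regions."""
    total = 2                # one circle splits the plane into two regions
    for m in range(1, n):
        total += 2 * m
    return total

for n in range(1, 7):
    lines = (n * n + n + 2) // 2          # maximum regions from n lines
    print(n, circle_regions(n), n * n - n + 2, lines)
</code></pre>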
<p>ADDENDUM: An easy way to see that the answer to my question is affirmative without finding a formula may be as follows: Suppose that $l_{n}$ is the maximum number of regions into which the plane $\mathbb{R}^{2}$ can be divided by $n$ lines, and that $c_{n}$ is the maximum number of regions into which the plane can be divided by $n$ circles. </p>
<p>Now, in the one-point compactification $\mathbb{R}^{2}\cup\{\infty\}$ of the plane, denoted by $S$ (a sphere), the $n$ lines become circles all intersecting at the point $\infty$. Therefore, these circles divide $S$ into at least $l_{n}$ regions. Now, if we pick a point $p$ in the complement in $S$ of the circles and take the stereographic projection through $p$ mapping onto the plane tangent to $S$ at the antipode of $p$ we obtain a plane which is divided by $n$ circles into at least $l_{n}$ regions. Therefore, $l_{n}\le c_{n}$.</p>
<p>Moreover, from this we can see that the plane and the sphere have equal maximum number of regions into which they can be divided by circles. </p>
|
geometry | <p><strong>For the sake of context:</strong> I've just finished a master degree in <em>Mathematics</em> and my goal is to get a Ph.D. in <em>Complex Analysis</em>. In the years I spent studying my undergraduate mathematics degree I had always avoided the <em>Geometry</em> courses because the subject (or maybe the way it was taught to me) seemed... tedious. I remember to think <em>"Euclidean Geometry, manifolds, Riemann metrics, curvature, geodesics... Ok, so what?"</em>. That thought, that inability to appreciate the <em>beauty</em> of Geometry ---which I can tell it exists by the many people in the mathematical community who know way more than me about it and claim so--- in the same way I distinctly see it (specially) in <em>Complex Analysis</em> has always bothered me.</p>
<p>A few days ago I was reading Rudin's <em>Real and Complex Analysis</em> and reached the following theorem:</p>
<blockquote>
<p><strong>Theorem:</strong> If <span class="math-container">$\varphi$</span> is convex on <span class="math-container">$(a,b)$</span>, then <span class="math-container">$\varphi$</span> is continuous on <span class="math-container">$(a,b)$</span>.</p>
<p><strong>PROOF</strong> The idea of the proof is most easily conveyed in geometric language. Those who may worry that this is not "rigorous" are invited to transcribe it in terms of epsilons and deltas.</p>
<p>Suppose <span class="math-container">$a<s<x<y<t<b$</span>. Write <span class="math-container">$S$</span> for the point <span class="math-container">$(s,\varphi(s))$</span> in the plane, and deal similarly with <span class="math-container">$x,y,$</span> and <span class="math-container">$t$</span>. Then <span class="math-container">$X$</span> is on or below the line <span class="math-container">$SY$</span>, hence <span class="math-container">$Y$</span> is on or above the line through <span class="math-container">$S$</span> and <span class="math-container">$X$</span>; also, <span class="math-container">$Y$</span> is on or below <span class="math-container">$XT$</span>. As <span class="math-container">$y\to x$</span>, it follows that <span class="math-container">$Y\to X$</span>, i.e., <span class="math-container">$\varphi(y)\to\varphi(x)$</span>. Left-hand limits are handled in the same manner, and the continuity of <span class="math-container">$\varphi$</span> follows.</p>
</blockquote>
<p>The instant I read "in geometric language" I started frowning, but I continued reading. After drawing a picture and giving it some thought I realized, much to my surprise, that there couldn't exist a more beautiful proof of this theorem. I couldn't like that clean "three-lines-Lipschitz" argument more:<a href="https://i.sstatic.net/REJ73.png" rel="noreferrer"><img src="https://i.sstatic.net/REJ73.png" alt="enter image description here" /></a></p>
<p>(!!) Today I had a similar "incident" working out the proof that the (angle preserving) set of isometries of the <em>Poincaré disk model</em> coincides exactly with the Möbius transformations of the unit disk, so I have decided to give <em>Geometry</em> a serious try.</p>
<p>I want to begin with <em>Differential Geometry</em> which, being closer to my comfort zone, I expect to be a good choice. I have already chosen from which books I will study (Spivak's <em>A Comprehensive Introduction to Differential Geometry</em>), so my question is</p>
<p><strong>Question:</strong> Could you please share some <strong>example</strong> of a <em>geometric argument</em>, <em>geometric result</em>, <em>geometric idea</em>, or even a <em>geometric calculation</em> <strong>in the realm of Differential Geometry</strong> which ---in the same spirit as the two examples I gave--- you find <em>exceptionally</em> beautiful or enlightening and explain why?</p>
<p>It would be a good source of motivation to keep studying and learning <em>Geometry</em> to me (and maybe others with a similar problem), so I would be sincerely grateful to hear from you. =)</p>
<p>[Sorry for the extension, I couldn't find a shorter way to accurately explain what I am looking for.]</p>
| <p>$\newcommand{\Reals}{\mathbf{R}}$<strong>Theorem</strong>: Let $S^{2} \subset \Reals^{3}$ be the unit sphere centered at the origin, and $n = (0, 0, 1)$ the north pole. The stereographic projection mapping $\Pi:S^{2} \setminus\{n\} \to \Reals^{2}$ is conformal.</p>
<p><em>Proof</em>: Fix a point $p \neq n$ arbitrarily, and let $v$ and $w$ be arbitrary tangent vectors at $p$. The plane containing $n$ and $p$ and parallel to $v$ cuts the sphere in a circle $V$ tangent to $v$. Similarly, the plane containing $n$ and $p$ and parallel to $w$ cuts the sphere in a circle $W$ tangent to $w$.</p>
<p><a href="https://i.sstatic.net/gqe0O.png" rel="noreferrer"><img src="https://i.sstatic.net/gqe0O.png" alt="Conformality of stereographic projection"></a></p>
<p>The circles $V$ and $W$ form a digon with vertices $p$ and $n$. By symmetry, the angle $\theta$ they make at $p$ is equal to the angle they make at $n$. Let $v'$ and $w'$ be the tangent vectors at $n$ obtained from $v$ and $w$ by reflection across the plane of symmetry that exchanges $p$ and $n$.</p>
<p>Because the tangent plane to the sphere at the north pole is parallel to the $(x, y)$-plane, the image $\Pi_{*}v$ of $v$ under $\Pi$ is parallel to the vector obtained by translating $v'$ along the ray from $n$ through $p$ to the $(x, y)$-plane. Similarly, $\Pi_{*}w$ is parallel to the vector obtained by translating $w'$.</p>
<p>It follows at once that the angle between $\Pi_{*}v$ and $\Pi_{*}w$ is equal to the angle between $v'$ and $w'$, which is equal to the angle between $v$ and $w$.</p>
<hr>
<p>Conformality of stereographic projection allows the holomorphic structure on the Riemann sphere to be visualized in three-dimensional Euclidean geometry, providing a basic link between complex analysis and differential geometry.</p>
<p>In coordinates, stereographic projection and its inverse are given by
$$
\Pi(x, y, z) = \frac{(x, y)}{1 - z},\qquad
\Pi^{-1}(u, v) = \frac{(2u, 2v, u^{2} + v^{2} - 1)}{u^{2} + v^{2} + 1}.
$$
It's possible to calculate the induced Riemannian metric on the plane:
$$
g = \frac{4(du^{2} + dv^{2})}{(u^{2} + v^{2} + 1)^{2}}.
$$
Conformality is encoded in $g$ being a scalar function times the Euclidean metric. While this analytic argument has its own elegance, the geometric argument above is essentially obvious.</p>
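<p>For those who would rather see the coordinate computation checked by machine, here is a sketch (assuming sympy) that pulls back the Euclidean metric of <span class="math-container">$\mathbf{R}^3$</span> along <span class="math-container">$\Pi^{-1}$</span>:</p>
<pre><code>import sympy as sp

u, v = sp.symbols('u v', real=True)
d = u**2 + v**2 + 1

# Inverse stereographic projection onto the unit sphere
x, y, z = 2*u/d, 2*v/d, (u**2 + v**2 - 1)/d

# Pull back the Euclidean metric: g = J^T J for the Jacobian J of (x, y, z)
J = sp.Matrix([x, y, z]).jacobian(sp.Matrix([u, v]))
g = sp.simplify(J.T * J)

print(g)  # 4/(u**2 + v**2 + 1)**2 times the identity matrix: conformal
</code></pre>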
| <p>Here is an interesting result on the study of curves in $\mathbb R^3$. The proof is analytic, and will be paraphrased from Do Carmo's book "Differential Geometry of Curves and Surfaces," which is a standard reference. While much of differential geometry has to do with manifolds of higher dimension than just curves, there are many notions that I don't think I would be comfortable trying to distill down into a rather short argument or explanation of some theorem without really losing the beauty of the statement. There are many experts on differential geometry here, and I will let them have the pleasure of explaining some of these results. It's worth noting that much of differential geometry is real-analytic, rather than complex. Hopefully you still find this result interesting. First, some terminology:</p>
<p>Let $\alpha(s)$ be a curve parameterized by arc length. The derivative of $\alpha$ is a tangent vector $t$, and since $\alpha$ is parameterized by arc length, it is unit length. This means that $|\alpha''(s)|$ measures the rate of change of the angle which neighboring tangents make with the tangent at $s$. This value is denoted by $\kappa$ and is the 'curvature' of $\alpha$. If the curvature at a point of a curve is non-zero, then there is a unit vector $n$ in the direction of $\alpha''$ defined by $\alpha''(s) = \kappa (s) n(s)$, which is perpendicular to the tangent line (this follows from differentiating $\alpha' \cdot \alpha' = 1$). This vector $n$ is the normal vector, and the plane spanned by the tangent and normal is called the 'osculating plane'. </p>
<p>In geometry, we oftentimes want coordinates adapted to our particular situation. In particular, when dealing with curves, it is desirable to have a set of linearly-independent vectors that live at each point of a curve, called a 'moving frame.' The third and final vector of this frame is found by taking the cross product of the tangent with the normal, to produce the 'binormal.' This frame is very special and is called the 'Frenet frame.' </p>
<p>There is a final notion we need - the derivative of the binormal. This is called the 'torsion.' Similar to curvature, this measures how much the curve pulls and twists away from the osculating plane. It is denoted by $\tau$.</p>
<p>Now we are ready to state the claim:</p>
<p>THEOREM: Given differentiable functions on an interval $k(s) >0$ and $\tau(s)$, there exists a regular parameterized curve $\alpha$ such that $s$ is the arc length, $k(s)$ is the curvature of $\alpha$ and $\tau$ is the torsion of $\alpha$. Moreover, any other curve with the same conditions is isometric to $\alpha$, in the sense that there is some rigid motion (i.e. element of $O(3)$ and/or a translation) which maps the other curve onto $\alpha$.</p>
<p>We will sketch the proof, which uses the theory of ordinary differential equations.</p>
<p>From the definitions, it is not hard to show that the Frenet frame can be given by the following equations.</p>
<p>$$\frac{dt}{ds} = \kappa n$$</p>
<p>$$\frac{dn}{ds} = -\kappa t - \tau b$$</p>
<p>$$\frac{db}{ds} = \tau n$$</p>
<p>Now, really since these are all vector quantities, they are each actually a linear function of three variables - the coordinates of the vectors at each point. And so the Frenet frame gives a linear differential system on $I \times \mathbb R^n$.</p>
<p>This system might sound like a hot mess, but we do have an existence theorem that can handle it. In particular, since the system is linear, the usual local existence result from analysis can handle this system on the whole interval. It follows that given these functions, and the initial conditions we use to create the orthonormal frame at one point, we can solve the system of ODEs without any issues. However, we don't know that the solutions remain orthonormal, and this is key if we want our curve to be essentially defined by the Frenet frame we are trying to prescribe.</p>
<p>Now to check orthonormality, we will use the Frenet equations to check that the quantities $\langle t,n \rangle, \langle t, b \rangle, \langle n, b \rangle, \langle t ,t \rangle, \langle n, n \rangle, \langle b, b\rangle$ are all either $0$ or $1$, respectively.</p>
<p>We can simply differentiate each of these quantities using the Frenet equations, and then a relatively easy computation shows that the desired result is true.</p>
<p>Now we must actually obtain the curve. This is straightforward enough. Set $\alpha(s) = \int t(s) ds$. This ensures that $\alpha'(s) = t(s)$ and that $\alpha''(s) = \kappa n$, so that $\kappa(s)$ actually is the curvature at each point. That we have succeeded in prescribing the torsion is a little harder. We can see that $\alpha '''(s) = \kappa ' n - \kappa^2t - \kappa \tau b$ from the product rule and the definition. Then we have to use another (computational) fact, which is also not super hard to prove, that $\frac{-\langle \alpha' \times \alpha'', \alpha''' \rangle}{\kappa^2} = \tau$, and this shows that $\alpha$ is the claimed curve.</p>
<p>The uniqueness part is thankfully easier. The Frenet frames being linearly independent sets and in fact orthonormal means that we can just rotate one frame and translate so that origins coincide, and then use the uniqueness part of our existence/uniqueness theorem for ordinary differential equations, to show that after this change, the resulting solutions, and hence curves, coincide.</p>
<p>In hindsight, I feel like this argument might be rather terse. The basic theory of curves and surfaces has many computations in it, most of which are very simple, but sometimes a little long, but are always satisfying to work out, in my opinion. I encourage you to hunt down a copy of Do Carmo's book, and take a pass at it. The book has a good number of illustrations, but spares few if any details on mathematical rigor. The analyst in you will be pleased that proofs are not distilled into pictures, with the actual details left to the reader - you will find a complete telling of all the details of elementary differential geometry in this text.</p>
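<p>The existence half of the theorem is pleasant to watch numerically. Here is a sketch (assuming scipy; all names are mine) that integrates the Frenet system for constant <span class="math-container">$\kappa$</span> and <span class="math-container">$\tau$</span>, which should reproduce a helix:</p>
<pre><code>import numpy as np
from scipy.integrate import solve_ivp

kappa, tau = 1.0, 0.5      # constant curvature and torsion

def frenet(s, y):
    t, n, b = y[0:3], y[3:6], y[6:9]
    return np.concatenate([kappa * n,             # t' =  kappa n
                           -kappa * t - tau * b,  # n' = -kappa t - tau b
                           tau * n])              # b' =  tau n

y0 = np.array([1.0, 0, 0,  0, 1.0, 0,  0, 0, 1.0])  # orthonormal frame at s=0
s = np.linspace(0, 20, 2001)
sol = solve_ivp(frenet, (0, 20), y0, t_eval=s, rtol=1e-9, atol=1e-9)

# alpha(s) = integral of the tangent t(s), by trapezoidal quadrature
t_vals = sol.y[0:3].T
steps = (t_vals[1:] + t_vals[:-1]) / 2 * np.diff(s)[:, None]
alpha = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

print(alpha[-1])                       # a point on the resulting helix
print(np.linalg.norm(sol.y[0:3, -1]))  # tangent stays unit length: ~1.0
</code></pre>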
|
logic | <p>Completeness is defined as: if $\Sigma\models\Phi$ then $\Sigma\vdash\Phi$.
Meaning, if every truth assignment $Z$ that makes everything in $\Sigma$ true also makes $\Phi$ true, then we can prove $\Phi$ from $\Sigma$ using the rules of the proof system.</p>
<p>Soundness is defined as: given that $\Sigma\vdash\Phi$ then $\Sigma\models\Phi$, which is the converse. </p>
<p>Can you please explain the basic difference between the two of them ? </p>
<p>Thanks, Ron</p>
| <p>In brief:</p>
<p>Soundness means that you <em>cannot</em> prove anything that's wrong.</p>
<p>Completeness means that you <em>can</em> prove anything that's right.</p>
<p>In both cases, we are talking about some fixed system of rules for proof (the one used to define the relation $\vdash$ ).</p>
<p>In more detail: Think of $\Sigma$ as a set of hypotheses, and $\Phi$ as a statement we are trying to prove. When we say $\Sigma \models \Phi$, we are saying that $\Sigma$ <em>logically implies</em> $\Phi$, i.e., in every circumstance in which $\Sigma$ is true, then $\Phi$ is true. Informally, $\Phi$ is "right" given $\Sigma$.</p>
<p>When we say $\Sigma \vdash \Phi$, on the other hand, we must have some set of rules of proof (sometimes called "inference rules") in mind. Usually these rules have the form, "if you start with some particular statements, then you can derive these other statements". If you can derive $\Phi$ starting from $\Sigma$, then we say that $\Sigma \vdash \Phi$, or that $\Phi$ is provable from $\Sigma$. </p>
<p>We are thinking of a proof as something used to convince others, so it's important that the rules for $\vdash$ are mechanical enough so that another person or a computer can <em>check</em> a purported proof (this is different from saying that the other person/computer could <em>create</em> the proof, which we do <em>not</em> require).</p>
<p>Soundness states: $\Sigma \vdash \Phi$ implies $\Sigma \models \Phi$. If you can prove $\Phi$ from $\Sigma$, then $\Phi$ is true given $\Sigma$. Put differently, if $\Phi$ is not true (given $\Sigma$), then you can't prove $\Phi$ from $\Sigma$. Informally: "You can't prove anything that's wrong."</p>
<p>Completeness states: $\Sigma \models \Phi$ implies $\Sigma \vdash \Phi$. If $\Phi$ is true given $\Sigma$, then you can prove $\Phi$ from $\Sigma$. Informally: "You can prove anything that's right."</p>
<p>Ideally, a proof system is both sound and complete.</p>
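<p>To make $\models$ concrete in the simplest (propositional) setting, here is a minimal Python sketch with $\Sigma = \{P,\ P \to Q\}$ and $\Phi = Q$; the function name is mine:</p>
<pre><code>from itertools import product

# Sigma |= Phi: every truth assignment satisfying Sigma satisfies Phi.
def entails():
    for P, Q in product([False, True], repeat=2):
        sigma_holds = P and ((not P) or Q)   # P together with P -> Q
        if sigma_holds and not Q:            # a countermodel, if one existed
            return False
    return True

print(entails())  # True; by completeness, Q is then also provable from Sigma
</code></pre>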
| <p>From the perspective of trying to write down axioms for first-order logic that satisfy both completeness and soundness, soundness is the easy direction: all you have to do is make sure that all of your axioms are true and that all of your inference rules preserve truth. Completeness is the hard direction: you need to write down strong enough axioms to capture semantic truth, and it's not obvious from the outset that this is even possible in a non-trivial way. </p>
<p>(A trivial way would be to admit all truths as your axioms, but the problem with this logical system is that you can't recognize what counts as a valid proof.) </p>
|
game-theory | <p>Assume a tic-tac-toe board's state is stored in a matrix.
$$
S=\begin{bmatrix}
-1 & 0 & 1 \\
1 & -1 & 0 \\
1 & 0 & -1 \\
\end{bmatrix}
$$</p>
<p>Here, $X$ is mapped to $1$, $O$ is mapped to $-1$ and an empty state is mapped to zero, but <strong>any other numeric mapping will do</strong> if there is one more suitable for solving the problem. Is it possible to create some single expression involving the matrix $S$ which will indicate whether the board is in a winning state? For the above matrix, the expression should show a win for $O$. </p>
<p>I recognize that there are more direct programmatic approaches to this, so this is more of an academic question.</p>
<p>Edit: I have been asked what to do if the board shows two winners. You could either:</p>
<ol>
<li>Assume only valid board states. Since gameplay would stop after once side wins, it is not possible to have a board with two winners. </li>
<li>Alternatively (or equivalently?), your expression could arbitrarily pick a winner in a board that has two.</li>
</ol>
| <p>Sure, here's one way to do it with linear-algebra primitives. Define column vectors $e_1=(1,0,0)^T$, $e_2=(0,1,0)^T$, $e_3=(0,0,1)^T$, and a row vector $a=(1,1,1)$.</p>
<ul>
<li>You can detect if a row or column of $S$ is a winner by testing $a S e_i$ and $a S^T e_i$ for $\pm 3$. (Exercise: Which expression detects winning rows and which detects winning columns?)</li>
<li>You can detect if the main diagonal is a winner by testing the trace of $S$.</li>
<li>Finally, let $R$ be the matrix that permutes rows $1$ and $3$; then you can detect if the other diagonal is a winner by testing the trace of $RS$.</li>
</ul>
<p>In order to combine all eight tests into a single expression, you'll have to specify what you want to do in case of an over-determined board state. For example, if one row shows a win for $+$ and another row shows a win for $-$, what's the desired behavior?</p>
<p><strong>Edit:</strong> Okay, assuming only valid board states, it's not too hard. We're just going to have to introduce some unusual notation. Define a slightly arbitrary function
$$
\max^*(a,b)=\begin{cases}
a& \text{if } |a| \geq |b|\\
b& \text{otherwise }
\end{cases}
$$
Then $\max^*$ is an associative operation, so we can extend it iteratively to any number of arguments, and the winner of the game is
$$w(S)=\left\lfloor\frac13\max^*\left(\max^*_i(a S e_i), \max^*_i(a S^T e_i), \mathrm{tr}(S),\mathrm{tr}(RS)\right)\right\rceil$$
where $\lfloor x\rceil$ is the round-towards-zero function, so that $w(S)=0$ means nobody wins.</p>
<p>(If $S$ is an invalid state in which both players "win", $w(S)$ will pick the first winner according to the order of expressions tested.)</p>
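<p>As a sanity check, here is a minimal Python/NumPy sketch of this recipe (the names <code>max_star</code> and <code>w</code> are mine):</p>
<pre><code>import numpy as np

def max_star(values):
    best = 0
    for v in values:
        if abs(v) > abs(best):   # ties keep the earlier value, as defined
            best = v
    return best

def w(S):
    a = np.ones(3)
    R = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])  # swaps rows 1 and 3
    tests = list(a @ S) + list(a @ S.T) + [np.trace(S), np.trace(R @ S)]
    return int(max_star(tests) / 3)  # int() rounds towards zero

S = np.array([[-1, 0, 1], [1, -1, 0], [1, 0, -1]])
print(w(S))  # -1: a win for O, via the main diagonal
</code></pre>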
<p><strong>Edit 2:</strong> Here's a theoretical approach that technically uses only matrix multiplication on $S$... but then shifts all the work to a scalar.</p>
<p>Let $a=(1,3,3^2)$ and $b=(1,3^3, 3^6)^T=(1,27,729)^T$. Then $aSb$ is an integer which encodes $S$, and it has absolute value $\leq (3^9-1)/2=9841$. So there exists a polynomial $p$ of degree $\leq 3^9=19683$ such that $w(S)=p(aSb)$.</p>
<p>In fact, we can choose the even coefficients of $p$ to be zero. The odd coefficients are slightly harder to compute. :)</p>
| <p>Just a comment on the fact that more obscure solutions may exist that are easier to compute. I was able to construct one for the $2\times 2$ tic-tac-toe. Let
\begin{align}
\mathbf{Z}_1 = \left[\begin{array}{cc}2.3049 & -2.2506 \\ -2.2310 & 2.2420 \end{array}\right]
\end{align}
and
\begin{align}
\mathbf{Z}_2 =\left[\begin{array}{cc} -0.2072 & 0.2190 \\ 0.3336 & -0.0792\end{array}\right]
\end{align}
Let the tic-tac-toe matrix be $\mathbf{Z}$ with entries $0$, $1$ or $-1$ in the same manner as defined in the question. Let
\begin{align}
\chi \triangleq \det(\mathbf{Z}_1+\mathbf{Z})|\det(\mathbf{Z}_2+\mathbf{Z})|
\end{align}
Then, unless $\mathbf{Z}$ specifies an illegal board where both sides have won, $\chi \geq 1.5 \iff $ user $1$ has won, $-1 < \chi < 1.5\iff$ the game has not ended yet, and $\chi \leq -1$ if user $-1$ has won. For example, for
\begin{align}
\mathbf{Z} = \left[\begin{array}{cc} 1 & 1 \\ -1 & 0\end{array}\right]
\end{align}
obviously user $1$ has won, and indeed we have $\chi = 2.5252$.</p>
<p>The above was found using a genetic algorithm and experimenting with different $\chi$ forms.</p>
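<p>This is easy to check numerically; a minimal NumPy sketch using the matrices above:</p>
<pre><code>import numpy as np

Z1 = np.array([[2.3049, -2.2506], [-2.2310, 2.2420]])
Z2 = np.array([[-0.2072, 0.2190], [0.3336, -0.0792]])

def chi(Z):
    return np.linalg.det(Z1 + Z) * abs(np.linalg.det(Z2 + Z))

Z = np.array([[1, 1], [-1, 0]])
print(chi(Z))  # about 2.525, i.e. user 1 has won
</code></pre>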
|
linear-algebra | <blockquote>
<p><strong>Possible Duplicate:</strong><br />
<a href="https://math.stackexchange.com/questions/51292/relation-of-this-antisymmetric-matrix-r-left-beginsmallmatrix0-1-10-e">Relation of this antisymmetric matrix <span class="math-container">$r = \!\left(\begin{smallmatrix}0 &1\\-1 & 0\end{smallmatrix}\right)$</span> to <span class="math-container">$i$</span></a></p>
</blockquote>
<p>On Wikipedia, it says that:</p>
<blockquote>
<p><strong>Matrix representation of complex numbers</strong><br />
Complex numbers <span class="math-container">$z=a+ib$</span> can also be represented by <span class="math-container">$2\times2$</span> matrices that have the following form: <span class="math-container">$$\pmatrix{a&-b\\b&a}$$</span></p>
</blockquote>
<p>I don't understand why they can be represented by these matrices or where these matrices come from.</p>
| <p>No one seems to have mentioned it explicitly, so I will. The matrix <span class="math-container">$J = \left( \begin{smallmatrix} 0 & -1\\1 & 0 \end{smallmatrix} \right)$</span> satisfies <span class="math-container">$J^{2} = -I,$</span> where <span class="math-container">$I$</span> is the <span class="math-container">$2 \times 2$</span> identity matrix (in fact, this is because <span class="math-container">$J$</span> has eigenvalues <span class="math-container">$i$</span> and <span class="math-container">$-i$</span>, but let us put that aside for one moment). Hence there really is no difference between the matrix <span class="math-container">$aI + bJ$</span> and the complex number <span class="math-container">$a +bi.$</span></p>
| <p>Look at the arithmetic operations and their actions. With + and *, these matrices form a field. And we have the isomorphism
$$a + ib \mapsto \left[\matrix{a&-b\cr b &a}\right].$$</p>
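<p>A quick numerical sketch of this isomorphism (the helper name <code>to_matrix</code> is mine):</p>
<pre><code>import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 2 + 3j, -1 + 4j
# The matrix of a product is the product of the matrices:
print(np.allclose(to_matrix(z * w), to_matrix(z) @ to_matrix(w)))  # True
# And the matrix of i squares to -I, matching i**2 == -1:
J = to_matrix(1j)
print(np.allclose(J @ J, -np.eye(2)))  # True
</code></pre>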
|
probability | <p>What is the average number of times it would it take to roll a fair <span class="math-container">$6$</span>-sided die and get all numbers on the die? The order in which the numbers appear does not matter.</p>
<p>I had this questions explained to me by a professor (not math professor), but it was not clear in the explanation. We were given the answer <span class="math-container">$(1-(\frac56)^n)^6 = .5$</span> or <span class="math-container">$n = 12.152$</span></p>
<p>Can someone please explain this to me, possibly with a link to a general topic?</p>
| <p>The time until the first result appears is $1$. After that, the random time until a second (different) result appears is geometrically distributed with parameter of success $5/6$, hence with mean $6/5$ (recall that the mean of a geometrically distributed random variable is the inverse of its parameter). After that, the random time until a third (different) result appears is geometrically distributed with parameter of success $4/6$, hence with mean $6/4$. And so on, until the random time of appearance of the last and sixth result, which is geometrically distributed with parameter of success $1/6$, hence with mean $6/1$. This shows that the mean total time to get all six results is
$$\sum_{k=1}^6\frac6k=\frac{147}{10}=14.7.$$</p>
<hr>
<p><em>Edit:</em> This is called the <a href="http://en.wikipedia.org/wiki/Coupon_collector%27s_problem">coupon collector problem</a>. For a fair $n$-sided die, the expected number of attempts needed to get all $n$ values is
$$n\sum_{k=1}^n\frac1k,$$ which, for large $n$, is approximately $n\log n$. This stands in contrast with the mean time needed to complete only some proportion $cn$ of the whole collection, for some fixed $c$ in $(0,1)$, which, for large $n$, is approximately $-\log(1-c)n$. One sees that most of the $n\log n$ steps needed to complete the full collection are actually spent completing the last one per cent, or the last one per thousand, or the last whatever percentage of the collection.</p>
| <p>Here's the logic:</p>
<p>The chance of rolling a number you haven't yet rolled when you start off is $1$, as any number would work. Once you've rolled this number, your chance of rolling a number you haven't yet rolled is $5/6$. Continuing in this manner, after you've rolled $n$ different numbers the chance of rolling one you haven't yet rolled is $(6-n)/6$.</p>
<p>You can figure out the mean time it takes for a result of probability $p$ to appear with a simple formula: $1/p$. Furthermore, the mean time it takes for multiple results to appear is the sum of the mean times for each individual result to occur.</p>
<p>This allows us to calculate the mean time required to roll every number:
$t = 1/1 + 6/5 + 6/4 + 6/3 + 6/2 + 6/1 = 1 + 12/10 + 15/10 + 2 + 3 + 6 = 12 + 27/10 = 14.7$</p>
|
number-theory | <p>A few days ago I was recalling some facts about the $p$-adic numbers, for example the fact that the $p$-adic metric is an ultrametric implies very strongly that there is no order on $\mathbb{Q}_p$, as any number in the interior of an open ball is in fact its center.</p>
<p>I know that if you take the completion of the algebraic closure of the $p$-adic completion you get something which is isomorphic to $\mathbb{C}$ (this result was very surprising until I studied model theory, then it became obvious).</p>
<p>Furthermore, if the algebraic closure of a field is an extension of degree $2$, then the field is orderable, in fact real closed. Either way, this implies that the $p$-adic numbers don't have this property.</p>
<p>So I was thinking, is there a $p$-adic number whose square equals $2$? $3$? $2011$? For which prime numbers $p$? How far down the rabbit hole of algebraic numbers can you go inside the $p$-adic numbers? Are there general results connecting the choice (or rather properties) of $p$ to the "amount" of algebraic closure it gives?</p>
| <blockquote>
<p>A few days ago I was recalling some facts about the p-adic numbers, for example the fact that the p-adic metric is an ultrametric implies very strongly that there is no order on <span class="math-container">$\mathbb{Q}_p$</span>, as any number in the interior of an open ball is in fact its center.</p>
</blockquote>
<p>This argument is not correct. For instance, why does it not apply to <span class="math-container">$\mathbb{Q}$</span> with the <span class="math-container">$p$</span>-adic metric? In fact, any field which admits an ordering also admits a nontrivial non-Archimedean metric.</p>
<p>It is true though that <span class="math-container">$\mathbb{Q}_p$</span> cannot be ordered. By the Artin-Schreier theorem, this is equivalent to the fact that <span class="math-container">$-1$</span> is a sum of squares. Using Hensel's Lemma and a little quadratic form theory it is not hard to show that <span class="math-container">$-1$</span> is a sum of four squares in <span class="math-container">$\mathbb{Q}_p$</span>.</p>
<blockquote>
<p>I know that if you take the completion of the algebraic closure of the p-adic completion you get something which is isomorphic to <span class="math-container">$\mathbb{C}$</span> (this result was very surprising until I studied model theory, then it became obvious).</p>
</blockquote>
<p>I don't mean to pick, but I am familiar with basic model theory and I don't see how it helps to establish this result. Rather it is basic field theory: any two algebraically closed fields of equal characteristic and absolute transcendence degree are isomorphic. (From this the completeness of the theory of algebraically closed fields of any given characteristic follows easily, by Vaught's test.)</p>
<blockquote>
<p>So I was thinking, is there a <span class="math-container">$p$</span>-adic number whose square equals 2? 3? 2011? For which prime numbers <span class="math-container">$p$</span>?</p>
</blockquote>
<p>All of these answers depend on <span class="math-container">$p$</span>. The general situation is as follows: for any odd <span class="math-container">$p$</span>, the group of <strong>square classes</strong> <span class="math-container">$\mathbb{Q}_p^{\times}/\mathbb{Q}_p^{\times 2}$</span> -- which parameterizes quadratic extensions -- has order <span class="math-container">$4$</span>, meaning there are exactly three quadratic extensions of <span class="math-container">$\mathbb{Q}_p$</span> inside any algebraic closure. If <span class="math-container">$u$</span> is any integer which is not a square modulo <span class="math-container">$p$</span>, then these three extensions are given by adjoining <span class="math-container">$\sqrt{p}$</span>, <span class="math-container">$\sqrt{u}$</span> and <span class="math-container">$\sqrt{up}$</span>. When <span class="math-container">$p = 2$</span> the group of square classes has cardinality <span class="math-container">$8$</span>, meaning there are <span class="math-container">$7$</span> quadratic extensions.</p>
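<p>For example, whether <span class="math-container">$\sqrt{2}$</span> exists in <span class="math-container">$\mathbb{Q}_7$</span> comes down to whether <span class="math-container">$2$</span> is a square modulo <span class="math-container">$7$</span>; since <span class="math-container">$3^2 = 9 \equiv 2 \pmod 7$</span>, Hensel's Lemma lifts this root to arbitrary <span class="math-container">$7$</span>-adic precision. A minimal Python sketch of the Newton/Hensel iteration:</p>
<pre><code># Each Newton step at least doubles the 7-adic precision, so raising the
# modulus by one factor of 7 per step is safe.
# (The modular inverse via pow(..., -1, mod) needs Python 3.8+.)
p, a = 7, 2
x, mod = 3, p
for _ in range(5):
    mod *= p
    x = (x - (x * x - a) * pow(2 * x, -1, mod)) % mod
print(x, pow(x, 2, mod))  # x**2 == 2 (mod 7**6)
</code></pre>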
<blockquote>
<p>How far down the rabbit hole of algebraic numbers can you go inside the p-adic numbers? Are there general results connecting the choice (or rather properties) of <span class="math-container">$p$</span> to the "amount" of algebraic closure it gives?</p>
</blockquote>
<p>I don't know exactly what you are looking for as an answer here. The absolute Galois group of <span class="math-container">$\mathbb{Q}_p$</span> is in some sense rather well understood: it is an infinite profinite group but it is "small" in the technical sense that there are only finitely many open subgroups of any given index. Also every finite extension of <span class="math-container">$\mathbb{Q}_p$</span> is solvable. All in all it is vague -- but fair -- to say that the fields <span class="math-container">$\mathbb{Q}_p$</span> are "much closer to being algebraically closed" than the field <span class="math-container">$\mathbb{Q}$</span> but "not as close to being algebraically closed" as the finite field <span class="math-container">$\mathbb{F}_p$</span>. This can be made precise in various ways.</p>
<p>If you are interested in the <span class="math-container">$p$</span>-adic numbers you should read intermediate level number theory texts on local fields. For instance <a href="http://alpha.math.uga.edu/%7Epete/MATH8410.html" rel="nofollow noreferrer">this page</a> collects notes from a course on (in part) local fields that I taught last spring. I also highly recommend books called <em>Local Fields</em>: one by Cassels and one by Serre.</p>
<p><b>Added</b>: see in particular Sections 5.4 and 5.5 <a href="http://alpha.math.uga.edu/%7Epete/8410Chapter5.pdf" rel="nofollow noreferrer">of this set of notes</a> for information about the number of <span class="math-container">$n$</span>th power classes and the number of field extensions of a given degree.</p>
| <p>Suppose that $K$ is an algebraic number field, i.e. a finite extension of $\mathbb Q$. It has a ring of integers $\mathcal O_K$ (the integral closure of $\mathbb Z$ in $K$). Suppose that there is a prime ideal $\wp \subset \mathcal O_K$ such that:</p>
<ol>
<li><p>$p \in \wp,$ but $p \not\in \wp^2$.</p></li>
<li><p>The order of $\mathcal O_K/\wp = p.$ (Note that (1) implies in particular that
$\wp \cap \mathbb Z = p \mathbb Z$, so that $\mathcal O_K/\wp$ is an extension of $\mathbb Z/p\mathbb Z$. We are now requiring that it in fact be the trivial extension.)</p></li>
</ol>
<p>Then the number field $K$ embeds into $\mathbb Q_p$. The converse also holds.</p>
<p>So if you want to know whether you can solve the equation $f(x) = 0$ in $\mathbb Q_p$ (where $f(x)$ is some irreducible polynomial in $\mathbb Q[x]$), then set
$K = \mathbb Q[x]/f(x)$ and apply this criterion.
This is easiest to do when
$f(x)$ has integral coefficients, and remains separable when reduced mod $p$
(something that you can check by computing the discriminant and seeing whether
or not it is divisible by $p$),
because in this case the criterion is equivalent to asking that $f(x)$ have a root mod $p$.</p>
<p>Incidentally, there are many $f(x)$ that satisfy this criterion
(because, among other
things, the algebraic closure of $\mathbb Q$ in $\mathbb Q_p$ has infinite degree over $\mathbb Q$), but there are also many $f(x)$ that don't.</p>
|
differentiation | <p>I'm still struggling to understand why the derivative of sine only works for radians. I had always thought that radians and degrees were both arbitrary units of measurement, and just now I'm discovering that I've been wrong all along! I'm guessing that when you differentiate sine, the step that only works for radians is when you replace $\sin(dx)$ with just $dx$, because as $dx$ approaches $0$, $\sin(dx)$ approaches $dx$. But isn't the same true for degrees? As $dx$ approaches $0$ degrees, $\sin(dx \,\text{degrees})$ still approaches $0$. But I've come to the understanding that $\sin(dx \,\text{degrees})$ approaches $0$ almost $60$ times slower, so if $\sin(dx \,\text{radians})$ can be replaced with $dx$ then $\sin(dx \,\text{degrees})$ would have to be replaced with $(\pi/180)$ times $dx$ degrees.</p>
<p>But the question remains of why it works perfectly for radians. How do we know that we can replace $\sin(dx)$ with just $dx$ without any kind of conversion applied like we need for degrees? It's not good enough to just say that we can see that $\sin(dx)$ approaches $dx$ as $dx$ gets very small. Mathematically we can see that $\sin(.00001)$ is pretty darn close to $0.00001$ when we're using radians. But let's say we had a unit of measurement "sixths" where there are $6$ of them in a full circle, pretty close to radians. It would also look like $\sin(dx \,\text{sixths})$ approaches $dx$ when it gets very small, but we know we'd have to replace $\sin(dx \,\text{sixths})$ with $(\pi/3) \,dx$ sixths when differentiating. So how do we know that radians work out so magically, and why do they?</p>
<p>I've read the answers to <a href="https://math.stackexchange.com/questions/466299/why-is-the-derivative-of-sine-the-cosine-in-radians-but-not-in-degrees">this question</a> and followed the links, and no, they don't answer my question.</p>
| <p>Radians, unlike degrees, are not arbitrary in an important sense. </p>
<p>The circumference of a unit circle is $2\pi$; an arc of the unit circle subtended by an angle of $\theta$ radians has arc length of $\theta$. </p>
<p>With these 'natural' units, the trigonometric functions behave in a certain way. Particularly important is
$$\lim_{x\to 0} \frac{\sin x}{x} = 1 \quad\quad - (*)$$</p>
<p>Now study the derivative of $\sin$ at $x = a$:</p>
<p>$$\lim_{x \to a} \frac{\sin x - \sin a}{x-a} = \lim_{x \to a}\left( \frac{\sin\left(\frac{x-a}{2}\right)}{(x-a)/2}\cdot \cos\left(\frac{x+a}{2}\right)\right)$$</p>
<p>This limit is equal to $$\cos a$$ precisely because of the limit $(*)$. And $(*)$ is quite different in degrees.</p>
| <p>It seems to me that the best answer thus far is <strong>Simon S</strong>'s. Others have hinted at the important property:</p>
<p><span class="math-container">$$
\lim_{x\rightarrow 0} \frac{\sin(x)}{x} = 1
$$</span></p>
<p>Some have simply stated it's important with little reason as to <em>why</em> it's important (specifically in regards to your question about the derivative of <span class="math-container">$\sin(x)$</span> equaling the <span class="math-container">$\cos(x)$</span>). <strong>Simon S</strong>'s answer explained <em>why</em> that limit is important for the derivative. However, what I find lacking is <em>why</em> it is that the limit equals what it equals <em>and</em> what <em>would</em> it equal if we decided to use degrees instead of radians.</p>
<p><em>At this point, I want to acknowledge that my answer is essentially the same as <strong>Simon S</strong>'s except that I am going to go into gruesome detail.</em></p>
<p>Before I go into this, there is absolutely <em>nothing</em> wrong with using degrees over radians. It <em>will</em> change what the definition of the derivative of the trigonometric functions are, but it won't change <em>any</em> of our math--it just introduces a tedious factor we always have to carry around.</p>
<p>I am going to use <a href="https://proofwiki.org/wiki/Limit_of_Sine_of_X_over_X/Geometric_Proof" rel="nofollow noreferrer">this geometric proof</a> as a way to make sense of the limit above:</p>
<p><a href="https://i.sstatic.net/JvPjH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JvPjH.png" alt="geometric proof link" /></a></p>
<p>There is only <em>one</em> part of the proof that will change if we decide to use degrees as opposed to radians and that is when we find the area of the sector subtended by <span class="math-container">$\theta$</span>. When we use radians we get: <span class="math-container">$A_{AB} = \pi 1^2 * \frac{\theta}{2\pi} = \frac{\theta}{2}$</span>--just as they found in the given proof. If <em>however</em>, we use degrees then we will get: <span class="math-container">$A_{AB} = \theta * \frac{\pi}{360}$</span>. Now this changes their initial inequality which the rest of the proof relies on:</p>
<p><span class="math-container">$$
\frac{1}{2}\sin(\theta) \leq \frac{\pi \theta}{360} \leq \frac{1}{2}\tan(\theta)
$$</span></p>
<p><em>(the others don't change because the sine and tangents equal the same thing regardless of whether or not we use radians or degrees--with a proper, trigonometric definition of each, of course).</em></p>
<p>We still proceed in the same way (I'm going to be less formal and not worry about the absolute values--although we should technically). We divide everything by <span class="math-container">$\sin(\theta)$</span> which since I'm only worrying about the first quadrant won't change the directions of the inequalities:</p>
<p><span class="math-container">$$
\frac{1}{2} \leq \frac{\pi \theta}{360 \sin(\theta)} \leq \frac{1}{2\cos(\theta)}\\
\frac{360}{2\pi} \leq \frac{\theta}{\sin(\theta)} \leq \frac{360}{2\pi \cos(\theta)} \\
\frac{\pi}{180} \geq \frac{\sin(\theta)}{\theta} \geq \frac{\pi}{180}\cos(\theta)
$$</span></p>
<p>As <span class="math-container">$\theta \to 0$</span> (whether in radians or degrees) we have <span class="math-container">$\cos(\theta) \to 1$</span>, and thus the squeeze theorem shows that:</p>
<p><span class="math-container">$$
\frac{\pi}{180} \leq \lim_{\theta \rightarrow 0} \frac{\sin(\theta)}{\theta} \leq \frac{\pi}{180}
$$</span></p>
<p>Therefore, if we use degrees, then:</p>
<p><span class="math-container">$$
\lim_{\theta \rightarrow 0} \frac{\sin(\theta)}{\theta} = \frac{\pi}{180}
$$</span></p>
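<p>This limit is easy to probe numerically; a minimal Python sketch:</p>
<pre><code>import math

# sin(h degrees) / h tends to pi/180, not to 1:
for h in (1.0, 0.1, 0.001):
    print(h, math.sin(math.radians(h)) / h)
print(math.pi / 180)  # 0.01745329...
</code></pre>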
<p>Going back to <strong>Simon S</strong>'s answer, this gives, as the definition of the derivative for <span class="math-container">$\sin(x)$</span>:</p>
<p><span class="math-container">$$
\lim_{h \rightarrow 0} \frac{\sin(x + h) - \sin(x)}{h}\\
\lim_{h \rightarrow 0} \frac{\sin(x)\cos(h) + \sin(h)\cos(x) - \sin(x)}{h} \\
\lim_{h \rightarrow 0} \frac{\sin(h)\cos(x) + \sin(x)(\cos(h) - 1)}{h}
$$</span></p>
<p>This may be a little sloppy, but when <span class="math-container">$h = 0$</span> <span class="math-container">$\cos(h) - 1 = 1 - 1 = 0$</span>, so we can drop the second part and are left with:</p>
<p>*Actually this is <em>extremely</em> sloppy, at this point I would refer back to <strong>Simon S</strong>'s Answer</p>
<p><span class="math-container">$$
\lim_{h \rightarrow 0} \frac{\sin(h)}{h}\cos(x) = \cos(x)\lim_{h \rightarrow 0} \frac{\sin(h)}{h}
$$</span></p>
<p>Using our above result we find the following:</p>
<p><span class="math-container">$$
\frac{d}{dx}\sin(x) = \frac{\pi}{180}\cos(x)
$$</span></p>
<p>This is what the derivative of <span class="math-container">$\sin(x)$</span> is <em>when we use degrees</em>! And yes, this will work fine in a Taylor series where we plug in degrees for the polynomial as opposed to radians (although the Taylor series <em>will</em> look different!).</p>
<p>And hopefully you already realize that this is what the derivative of <span class="math-container">$\sin(x)$</span> is when we use degrees, because if we accept that we <em>must</em> use radians, then we must convert our degrees to radians:</p>
<p><span class="math-container">$$
\sin(x^\circ) = \sin\left(\frac{\pi}{180}x\right)
$$</span></p>
<p>Now using the chain rule we get:</p>
<p><span class="math-container">$$
\frac{d}{dx}\sin(x^\circ) = \frac{\pi}{180}\cos(x^\circ)
$$</span></p>
<p>So the question isn't really why does it only work with radians--it works just as well with degrees except that we get a different definition of the derivative. The reason we prefer radians to degrees is that radians doesn't require this extra factor of <span class="math-container">$\frac{\pi}{180}$</span> every single time we differentiate a trigonometric function.</p>
|
logic | <p>There are <em>consistent</em> first-order theories that prove their own inconsistency. For example, construct one like this:</p>
<blockquote>
<p>Assuming there is a consistent and sufficiently expressive first-order theory at all, call it <span class="math-container">$T'$</span>. The incompleteness theorem gives us that <span class="math-container">$\mathrm{Con}(T')$</span> (the consistency of <span class="math-container">$T'$</span>) is not provable in <span class="math-container">$T'$</span>. Hence <span class="math-container">$T=T'+\neg\mathrm{Con}(T')$</span> is consistent. Since <span class="math-container">$T$</span> proves that we can derive a contradiction from <span class="math-container">$T'$</span> alone, it also proves that we can derive it from <span class="math-container">$T$</span> (because <span class="math-container">$T'\subset T$</span>). So <span class="math-container">$T$</span> is consistent but proves <span class="math-container">$\neg\mathrm{Con}(T)$</span>.</p>
</blockquote>
<p>How to think about such a strange theory? Obviously the theory <span class="math-container">$T$</span> is lying about itself. But what does this lying mean mathematically? Are the formulas and deductive rules interpreted in the language of <span class="math-container">$T$</span> different from the ones in my meta theory? Can I trust <span class="math-container">$T$</span>'s ability to express logic, deduction and arithmetic at all?</p>
<p>Note that a theory <span class="math-container">$T$</span> as above is just an example to <em>demonstrate</em> that such strange theories might exist. It might be hard to argue about the usefulness of a theory with such a complicated and highly dubious axiom as <span class="math-container">$\neg\mathrm{Con}(T')$</span>. But not all such <strong>self-falsifying theories</strong> must be so obvious and artificial. For example, it could be that ZFC can prove <span class="math-container">$\neg\mathrm{Con(ZFC)}$</span> while still being consistent. But how can we trust a theory that fails to mirror our logic even when we try to implement it carefully? How can we be sure that all other theorems on logic derived in ZFC are trustworthy despite the fact that ZFC proves at least one wrong statement (wrong in the sense that our meta logic gives us a different result than the internal proof logic of ZFC)?</p>
| <p>When we think about theories like ZFC or PA, we often view them <em>foundationally</em>: in particular, we often suppose that they are <em>true</em>. Truth is very strong. Although it's difficult to say exactly what it means for ZFC to be "true" (on the face of it we have to commit to the actual existence of a universe of sets!), some consequences of being true are easy to figure out: true things are consistent, and - since their consistency is true - don't prove that they are inconsistent.</p>
<p>However, this makes things like PA + $\neg$Con(PA) seem mysterious. So how are we to understand these?</p>
<p>The key is to remember that - assuming we work in some appropriate meta-theory - a theory is to be thought of as its <em>class of models</em>. A theory is consistent iff it has a model. So when we say PA + $\neg$Con(PA) is consistent, what we mean is that there are ordered semirings (= models of PA without induction) with some very strong properties. </p>
<p>One of these strong properties is the induction scheme, which can be rephrased model-theoretically as saying that these ordered semirings have no <em>definable proper cuts</em>. </p>
<p><em>It's very useful down the road to get a good feel for nonstandard models of PA as structures in their own right as opposed to "incorrect" interpretations of the theory; <a href="https://global.oup.com/academic/product/models-of-peano-arithmetic-9780198532132?lang=en&cc=de" rel="noreferrer">Kaye's book</a> is a very good source here.</em></p>
<p>The other is that they satisfy $\neg$Con(PA). This one seems mysterious since we think of $\neg$Con(PA) as asserting a fact on the meta-level. However, remember that the whole point of Goedel's incompleteness theorem in this context is that we can write down a sentence in the language of arithmetic which we <em>externally prove</em> is true iff PA is inconsistent. Post-Goedel, the MRDP theorem showed that we may take this sentence to be of the form "$\mathcal{E}$ has a solution" where $\mathcal{E}$ is a specific Diophantine equation. So $\neg$Con(PA) just means that a certain algebraic behavior occurs.</p>
<p>So models of PA+$\neg$Con(PA) are just ordered semirings with some interesting properties - they have no proper definable cuts, and they have solutions to some Diophantine equations which don't have solutions in $\mathbb{N}$. This demystifies them a lot! </p>
<hr>
<p>So now let's return to the meaning of the arithmetic sentence we call "$\neg$Con(PA)." In the metatheory, we have some object we call "$\mathbb{N}$" and we prove:</p>
<blockquote>
<p>If $T$ is a recursively axiomatizable theory, then $T$ is consistent iff $\mathbb{N}\models$ "$\mathcal{E}_T$ has no solutions."</p>
</blockquote>
<p>(Here $\mathcal{E}_T$ is the analogue of $\mathcal{E}$ for $T$; remember that by the MRDP theorem, we're expressing "$\neg$Con(T)" as "$\mathcal{E}_T$ has no solutions" for simplicity.) Note that this claim is specific to $\mathbb{N}$: other ordered semirings, even nice ones!, need not work in place of $\mathbb{N}$. In particular, there will be lots of ordered semirings which our metatheory proves satisfy PA, but for which the claim analogous to the one above fails.</p>
<p>It's worth thinking of an analogous situation in non-foundationally-flavored mathematics. Take a topological space $T$, and let $\pi_1(T)$ and $H_1(T)$ be the fundamental group and the first homology group (with coefficients in $\mathbb{Z}$, say) respectively. <strong>Don't pay attention too much to what these are</strong>, the point is just that they're both groups coding the behavior of $T$ which are closely related in many ways. I'm thinking of $\pi_1(T)$ as the analogue of $\mathbb{N}$ and $H_1(T)$ as the analogue of a nonstandard model satisfying $\neg$Con(PA), respectively.</p>
<p>Now, the statement "$\pi_1(T)$ is abelian" (here, my analogue of $\neg$Con(PA)) tells us a lot about $T$ (take my word for it). But the statement "$H_1(T)$ is abelian" <strong>does not</strong> tell us the same things (actually it tells us nothing: $H_1(T)$ is always abelian :P). </p>
<p>We have a group $G$, and some other group $H$ similar to $G$ in lots of ways, and a property $p$; and if $G$ has $p$, we learn something, but if $H$ has $p$ we don't learn that thing. This is exactly what's going on here. It's not the property by itself that carries any meaning, it's the statement that the property holds <em>of a specific object</em> that carries meaning useful to us. We often conflate these two, since there's a clear notion of "truth" for arithmetic sentences, but thinking about it in these terms should demystify theories like PA+$\neg$Con(PA) a bit.</p>
| <p>If I understand your problem correctly, the key to solving it is to think carefully about the concept of encoding.</p>
<p>For simplicity allow me to consider the case where $T'$ is PA (Peano Arithmetic). </p>
<p>The internalization of the syntactic properties of PA in itself uses an encoding, which is roughly a mapping that associates to formulas and proofs constant terms (their encodings), and to meta-theoretical properties ("<em>$x$ is a proof of $y$ in PA</em>", "$x$ is provable in PA", etc.) formulas in the language of arithmetic, in such a way that the following holds:</p>
<blockquote>
<p>if $RS$ is a syntactic (meta-theoretic) property and $O_1,\dots,O_n$ are syntactic objects (formulas or proofs) then $RS(O_1,\dots,O_n)$ holds if and only if $PA \vdash Enc(RS)(Enc(O_1),\dots,Enc(O_n))$, where $Enc$ is the mapping that associates to syntactic objects their encodings in $PA$'s language.</p>
</blockquote>
<p>The important thing to keep in mind is that this <em>encoding-condition</em> is required to hold only for <strong>encodings</strong>.</p>
<p>Now let us consider the theory $T=PA+\neg Enc(Con(PA))$ in the language of arithmetic. </p>
<p>Clearly $T \vdash \neg Enc(Con(PA))$, but what does this mean? By soundness and completeness this is equivalent to saying that in every <em>arithmetic structure</em> $M$ which is a model of $T$ it must hold that $M \models \neg Enc(Con(PA))$.
We have that $$Enc(Con(PA))\equiv \neg \exists x\ Enc(\text{*is a proof*})(x,Enc(\bot))$$
hence
$$\neg Enc(Con(PA)) \equiv \exists x\ Enc(\text{* is a proof *})(x,Enc(\bot))$$
so in each model $M$ of $T$ there is an element $m \in M$ such that
$$M \models Enc(\text{* is a proof *})(m,Enc(\bot))$$
the problem is that this $m$ is not an encoding, it is not even required to be the interpretation of a constant term, hence there is no way that we could decode this term to a proof (in PA) of $\bot$.</p>
<p>The point is that the formula $Enc(\text{* is proof of*})$ defines a relation in each arithmetic structure, but it has its intended meaning only when applied to encodings: $Enc(\text{*is a proof of*})(m,n)$ expresses that $m$ is the encoding of a proof of the formula encoded by $n$ only when $m$ and $n$ are encodings.</p>
<p>The argument shown here should be easy to adapt to other kinds of theories such as the ones you described. </p>
<p>I hope this helps.</p>
|
logic | <p>Speaking informally and working, for example, in <a href="http://en.wikipedia.org/wiki/Peano_arithmetic">Peano Arithmetic (PA)</a>, we know that the essence of Gödel's first incompleteness theorem is that there are statements true in the standard model which are not provable in the theory. However, looking for "concrete" examples of such theorems, I found, for example, the Paris–Harrington theorem - which is true but unprovable in PA. </p>
<p>Thus, I have a few questions: do we know that this theorem is true because it can be proven in second-order arithmetic? And if so, are there other ways to show that "concrete" statements are true but unprovable, other than exhibiting a proof in a bigger system?</p>
<p>In fact, is the separation between such "concrete" theorems and self-referential ones (such as the ones appearing in Gödel's proof) a justified one, in the philosophical sense? That is - both are true but unprovable...</p>
<p>Thanks for everyone.</p>
| <p>Before discussing any of the technical issues associated with your question: Yes, in order to ensure that a statement (about numbers) is true (in the standard model), we currently do not have any procedure other than exhibiting a proof. (In fact, most mathematicians would argue that this is the only way to meaningfully discuss truth.) If the statement is unprovable in <span class="math-container">$\mathsf{PA}$</span>, such a proof by necessity is in a stronger system, typically a fragment of set theory, <span class="math-container">$\mathsf{ZFC}$</span>, sometimes a fragment of second order arithmetic, <span class="math-container">$\mathsf{Z}_2$</span>. But sometimes we go beyond <span class="math-container">$\mathsf{ZFC}$</span>. The typical extensions we consider usually assume <a href="http://ww16.cantorsattic.info/Cantor%27s_Attic?sub1=20230116-0401-42ac-8a67-eb5e4e6152fe" rel="nofollow noreferrer">large cardinals</a>. There is a well justified body of evidence indicating that it makes sense to say that proofs of arithmetic statements in these systems are indeed true (I link to a small discussion on these matters near the end).</p>
<p>(Since proof is at the heart of this discussion, you may be interested is a series of essays by computer scientists, mathematicians, and philosophers, on <em>The nature of mathematical proof</em> that was published in the journal Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci, vol 363, No. 1835, Oct 15, 2005. Sadly, the only <a href="https://www.jstor.org/stable/i30039728" rel="nofollow noreferrer">link</a> I have is behind a paywall.)</p>
<hr />
<p>There is a technical difference between the sentences produced by Gödel's techniques (or <a href="https://en.wikipedia.org/wiki/Rosser%27s_trick" rel="nofollow noreferrer">Rosser</a>'s) and sentences such as the strong Ramsey theorem that the Paris-Harrington theorem discusses. (For additional examples, see <a href="https://en.wikipedia.org/wiki/Goodstein%27s_theorem" rel="nofollow noreferrer">here</a> and the references listed there.)</p>
<p>Gödel's statement is <span class="math-container">$\Pi^0_1$</span>, that is, it has the form <span class="math-container">$\forall n\,\psi(n)$</span> where <span class="math-container">$\psi$</span> is a recursive statement (in the language of arithmetic, all its quantifiers are bounded). The strong Ramsey theorem is <span class="math-container">$\Pi^0_2$</span>, that is, it has the form <span class="math-container">$\forall n\,\exists m\,\psi(n,m)$</span>, where <span class="math-container">$\psi$</span> is recursive.</p>
<p><span class="math-container">$\Pi^0_1$</span> are meaningful statements even by strict finitistic standards (see for example Richard Zach's <a href="https://web.archive.org/web/20180703181242/http://www.ucalgary.ca:80/rzach/papers/hilbert.html" rel="nofollow noreferrer">dissertation</a> and his discussion of Tait's analysis of Hilbert's program). They are true if they are unprovable (in fact, <span class="math-container">$\mathsf{PA}$</span> proves every sentence <span class="math-container">$\phi$</span> that is true of the natural numbers and such that <span class="math-container">$\lnot\phi$</span> is <span class="math-container">$\Pi^0_1$</span>): If <span class="math-container">$\exists n\,\psi(n)$</span> is true, then for some number <span class="math-container">$m$</span>, we can prove this statement simply by verifying <span class="math-container">$\psi(m)$</span> which, being a recursive statement, has a straightforward verification -- informally, we run a computer program that is guaranteed to halt, and check that the program outputs <span class="math-container">$\mathtt{Yes}$</span>.</p>
<p>On the other hand, <span class="math-container">$\Pi^0_2$</span> statements cannot be called finitistic. If unprovable, this does not automatically ensure their validity (or falsehood). However, they admit a natural interpretation: They state that certain recursive function is total (if the sentence is <span class="math-container">$\forall n\,\exists m\,\psi(n,m)$</span>, the function <span class="math-container">$f(n)=m$</span> assigns to each <span class="math-container">$n$</span> the least <span class="math-container">$m$</span> such that <span class="math-container">$\psi(n,m)$</span>), that is, they state that certain computer program will halt, no matter its input. Typical independent <span class="math-container">$\Pi^0_2$</span> statements have the peculiarity that the corresponding recursive function is fast growing (see for example <a href="https://en.wikipedia.org/wiki/Fast-growing_hierarchy" rel="nofollow noreferrer">this discussion</a>, or <a href="https://andrescaicedo.files.wordpress.com/2010/04/goodsteins.pdf" rel="nofollow noreferrer">my article</a> on Goodstein sequences). By a theorem of Wainer, there is a standard mathematical approach to verifying their unprovability, namely, one checks that the function <span class="math-container">$f$</span> under discussion eventually dominates the first <span class="math-container">$\varepsilon_0$</span> functions in a fast growing hierarchy.</p>
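<p>To see such a fast-growing statement "in the wild", here is a minimal Python sketch (the helper name is mine) of the first terms of the Goodstein sequence starting at <span class="math-container">$4$</span>: one writes <span class="math-container">$n$</span> in hereditary base-<span class="math-container">$b$</span> notation, replaces every <span class="math-container">$b$</span> by <span class="math-container">$b+1$</span>, and subtracts <span class="math-container">$1$</span>. By the Kirby-Paris theorem, <span class="math-container">$\mathsf{PA}$</span> cannot prove that <em>every</em> such sequence eventually reaches <span class="math-container">$0$</span>, even though each one does.</p>
<pre><code>def bump(n, b):
    """Rewrite n from hereditary base b to hereditary base b + 1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n:
        n, digit = divmod(n, b)
        result += digit * (b + 1) ** bump(power, b)
        power += 1
    return result

n, b = 4, 2
for step in range(6):
    print(f"G({step}) = {n}")
    n, b = bump(n, b) - 1, b + 1
# Prints 4, 26, 41, 60, 83, 109, ...
</code></pre>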
<p>One could consider Gödel statements and the like as first generation independence statements, and the more "combinatorial" <span class="math-container">$\Pi^0_2$</span> statements that followed as second generation. The mathematical arguments involved in either case are really very different. A third generation family of independence statements would be <span class="math-container">$\Pi^0_1$</span> combinatorial statements, so they are finitistic in the sense of Hilbert's program, but they have immediate, apparent mathematical (as opposed to meta-mathematical) meaning. Such a family of statements is considered of great interest. Remarkably, recent work of Harvey Friedman has produced such examples, see for example his book on Boolean relation theory, available at his <a href="https://u.osu.edu/friedman.8/foundational-adventures/boolean-relation-theory-book/" rel="nofollow noreferrer">page</a>.</p>
<hr />
<p>This is by no means the end of the story. For example, we now have a whole family of independent meta-mathematical statements of increased complexity of which <span class="math-container">$\mathrm{Con}(\mathsf{PA})$</span> (the sentence in the second incompleteness theorem) is but the weakest possible. See for example <strong><a href="https://projecteuclid.org/ebooks/lecture-notes-in-logic/Aspects-of-Incompleteness/toc/lnl/1235416274" rel="nofollow noreferrer">Aspects of Incompleteness</a></strong> by Per Lindström, or <strong><a href="https://projecteuclid.org/ebooks/perspectives-in-logic/Metamathematics-of-First-Order-Arithmetic/toc/pl/1235421926" rel="nofollow noreferrer">Metamathematics of first-order arithmetic</a></strong> by Petr Hájek and Pavel Pudlák. In the context of set theory, we can go further, as the completeness theorem tells us that set theory is consistent if and only if <span class="math-container">$\mathrm{Con}(\mathsf{ZFC})$</span> holds. We can go beyond this and require the existence of an <span class="math-container">$\omega$</span>-model (one whose set of natural numbers is isomorphic to the standard model) or even stronger, the existence of a <span class="math-container">$\beta$</span>-model (that is, a well-founded model). Woodin has explained how his <a href="https://en.wikipedia.org/wiki/%CE%A9-logic" rel="nofollow noreferrer"><span class="math-container">$\Omega$</span>-logic</a> provides in a sense an ultimate extension to this process.</p>
<p>We also have families of <span class="math-container">$\Pi^0_2$</span> statements whose truth is harder and harder to verify. For example, the Paris-Harrington theorem gives a result that is not only independent of <span class="math-container">$\mathsf{PA}$</span> but also independent of the theory obtained by adding to <span class="math-container">$\mathsf{PA}$</span> all true <span class="math-container">$\Pi^0_1$</span> statements. But it is easy to verify once we go beyond this theory. On the other hand, Friedman identified a finite form of <a href="https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem" rel="nofollow noreferrer">Kruskal's tree theorem</a> that cannot be verified by so-called predicative means, which means that adding it to <span class="math-container">$\mathsf{PA}$</span> gives us a system with consistency strength significantly higher than that of <span class="math-container">$\mathsf{PA}$</span> with, say, Goodstein's theorem.</p>
<p>And we can go beyond, as Friedman has examples of <span class="math-container">$\Pi^0_2$</span> statements whose proof requires assumptions of a large cardinal character (typically, the existence of Mahlo cardinals of all finite orders, but recent examples go much further).</p>
<p>In set theory we study the consistency strength hierarchy, and have identified remarkable structure, starting with the identification of the large cardinal hierarchy. For a brief and somewhat technical discussion of these issues, see <a href="https://mathoverflow.net/a/44185/6085">here</a>. The point is that we can see these results as extending significantly beyond arithmetic the realm of what is true but unprovable (in specific formal systems).</p>
| <p>First we need to define what do we mean when we say "true". In the context of $\sf PA$ this is somewhat of a Platonistic approach where we take $\Bbb N$ to be <strong>the</strong> standard model of $\sf PA$, so everything is measured against that model. Therefore when we say "true" about an arithmetical statement we mean that it is true in $\Bbb N$.</p>
<p>So in order to show that a statement is true we need to prove that it holds for $\Bbb N$. One way is indeed to prove it from a stronger system, like second-order arithmetic, which is categorical (i.e. it only has one model, up to isomorphism). Another would be using an even stronger system like set theory (e.g. $\sf ZFC$) to prove more about the natural numbers.</p>
<p>In order to prove that a statement is unprovable we need to show that there exists at least one model of $\sf PA$ in which it is false. We can do that either directly, or by showing that the statement implies something we already know is unprovable. For example, if we show that $\varphi$ implies $\operatorname{Con}(\sf PA)$ then we know that $\varphi$ is unprovable because $\operatorname{Con}(\sf PA)$ is unprovable.</p>
<p>Finally, the distinction between "concrete" and "contrived" statements which a true but unprovable is indeed a philosophical one, and I would accept that it is justified. We often want to know that a mathematical concept helps solving problems which are not created just for its own sake, and "contrived" statements often look like they were cooked specifically for the purpose of incompleteness (and they are, in most cases). However the line between these can be sometimes fuzzy and subjective -- much like any other philosophical line.</p>
|
logic | <p>I'm reading a book on axiomatic set theory, classic Set Theory: For Guided Independent Study, and at the beginning of chapter 4 it says:</p>
<blockquote>
<p>So far in this book we have given the impression that sets are needed to help explain the important number systems on which so much of mathematics (and the science that exploits mathematics) is based. Dedekind's construction of the real numbers, along with the associated axioms for the reals, completes the process of putting the calculus (and much more) on a rigorous footing.</p>
</blockquote>
<p>and then it says:</p>
<blockquote>
<p>It is important to realize that there are schools of mathematics that would reject 'standard' real analysis and, along with it, Dedekind's work.</p>
</blockquote>
<p>How is it possible that "schools of mathematics" reject standard real analysis and Dedekind's work? I don't know if I'm misinterpreting things but, how can people reject a whole branch of mathematics if everything has to be proved to be called a theorem and cannot be disproved unless a logical mistake is found?</p>
<p>I've even watched this video in the past: <a href="https://www.youtube.com/watch?reload=9&v=jlnBo3APRlU" rel="noreferrer">https://www.youtube.com/watch?reload=9&v=jlnBo3APRlU</a> and this guy, who's supposed to be a teacher, says that real numbers don't exist and that there are only rational numbers. I don't know if this is a related problem but how is this possible?</p>
| <p>Although the possibility of different axioms is a concern, I think the major objection the author is speaking of is largely about <em>constructivism</em> (i.e. intuitionistic logic). There really is a big gap between rational numbers and real ones: with enough memory and time, a computer can represent any rational number and can do arithmetic on these numbers and compare them. This is not true for real numbers.</p>
<p>To be specific, but not too technical: let's start by agreeing that the rational numbers <span class="math-container">$\mathbb Q$</span> are a sensible concept - the only controversial bit of that involving infinite sets. A Dedekind cut is really just a function <span class="math-container">$f:\mathbb Q\rightarrow \{0,1\}$</span> such that (a) <span class="math-container">$f$</span> is surjective, (b) if <span class="math-container">$x<y$</span> and <span class="math-container">$f(y)=0$</span> then <span class="math-container">$f(x)=0$</span>, and (c) for all <span class="math-container">$x$</span> such that <span class="math-container">$f(x)=0$</span> there exists a <span class="math-container">$y$</span> such that <span class="math-container">$x<y$</span> and <span class="math-container">$f(y)=0$</span>.</p>
<p>Immediately we're into trouble with this definition - it is common that constructivists view a function <span class="math-container">$f:\mathbb Q\rightarrow\{0,1\}$</span> as some object or oracle that, given a rational number, yields either <span class="math-container">$0$</span> or <span class="math-container">$1$</span>. So, I can ask about <span class="math-container">$f(0)$</span> or <span class="math-container">$f(1)$</span> or <span class="math-container">$f(1/2)$</span> and get answers - and maybe from these queries I could conclude <span class="math-container">$f$</span> was not a Dedekind cut (for instance, if <span class="math-container">$f(0)=1$</span> and <span class="math-container">$f(1)=0$</span>). However, no matter how long I spend inquiring about <span class="math-container">$f$</span>, I'm never going to even be able to verify that <span class="math-container">$f$</span> is a Dedekind cut. Even if I had two <span class="math-container">$f$</span> and <span class="math-container">$g$</span> that I knew to be Dedekind cuts, it would not be possible for me to, by asking for finitely many values, determine whether <span class="math-container">$f=g$</span> or not - and, in constructivism, there is no recourse to the law of the excluded middle, so we cannot say "either <span class="math-container">$f=g$</span> or it doesn't" and then have no path to discussing equality in the terms of "given two values, are they equal?"*.</p>
<p>The same trouble comes up when I try to add two cuts - if I had the Dedekind cut for <span class="math-container">$\sqrt{2}$</span> and the cut for <span class="math-container">$2-\sqrt{2}$</span> and wanted <span class="math-container">$g$</span> to be the Dedekind cut of the sum, I would never, by querying the given cuts, be able to determine <span class="math-container">$g(2)$</span> - I would never find two elements of the lower cut of the summands that added to at least <span class="math-container">$2$</span> nor two elements of the upper cut of the summands that added to no more than <span class="math-container">$2$</span>.</p>
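<p>To make the oracle picture concrete, a minimal Python sketch (conventions as above: <span class="math-container">$f = 0$</span> on the lower set; the function names are mine):</p>
<pre><code>from fractions import Fraction

def cut_sqrt2(q):           # the cut of sqrt(2)
    return 0 if (q < 0 or q * q < 2) else 1

def cut_2_minus_sqrt2(q):   # the cut of 2 - sqrt(2)
    return 0 if (2 - q > 0 and (2 - q) ** 2 > 2) else 1

# Any single query is answerable:
print(cut_sqrt2(Fraction(7, 5)), cut_sqrt2(Fraction(3, 2)))  # 0 1
# But no finite set of queries to these two oracles ever settles whether
# 2 lies in the lower set of their sum -- the difficulty described above.
</code></pre>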
<p>There are some constructive ways around this obstacle - you can certainly say "real numbers are these functions alongside proofs that they are Dedekind cuts" and then you can define what a proof that <span class="math-container">$x<y$</span> or <span class="math-container">$x=y$</span> or <span class="math-container">$x=y+z$</span> looks like - and even then prove some theorems, but you never get to the typical axiomatizations where you get to say "an ordered ring is a set <span class="math-container">$S$</span> alongside functions <span class="math-container">$+,\times :S\times S \rightarrow S$</span> and <span class="math-container">$<:S\times S \rightarrow \{0,1\}$</span> such that..." because you can't define these functions constructively on <span class="math-container">$\mathbb R$</span>.</p>
<p>(*To be more concrete - type theory discusses equality in the sense of "a proof that two functions <span class="math-container">$f,g$</span> are equal is a function that, for each input <span class="math-container">$x$</span>, gives a proof that <span class="math-container">$f(x)=g(x)$</span>" - and the fact that we can't figure this out by querying doesn't mean that we can't show specific functions to be equal by other means. However, it's a <em>huge</em> leap to go from "I can compare two rational numbers" - which is to say, I can always produce, from two rational numbers, a proof of equality or inequality - to "a proof that two real numbers is equal consists of..." understanding that the latter definition does not let us always produce a proof of equality or inequality for any pair of real numbers)</p>
| <p>This is somewhat surprising if you're not used to it. But <strong>of course</strong> you're free to reject whatever mathematical statements you dislike. The real question is what else you are forced to reject with it, and what would remain of the mathematics that you know and love otherwise.</p>
<p>The onus is on you, as someone who decided that "everyone else is wrong", to convince people that your idea is better, and to get people to take interest in what and how to transfer mathematics from "the realm of error" into "the world of truth". That is, until someone will come in and reject your ideas, etc.</p>
<hr />
<p>For example, Lebesgue is well-known as someone who rejected the axiom of choice. For him, the existence of non-measurable sets was unthinkable, so he was forced to reject the axiom of choice, and many other theorems that would contradict that.</p>
<p>Another example is Kronecker, who rejected the idea that infinite sets <em>exist</em>; this means that for Kronecker the axiom of infinity would be false. That implies that we <em>want to</em> work, in some sense, with a second-order theory over the natural numbers; we can get some analysis done, and everything beyond that would be "a fiction".</p>
<p>Many people would reject large cardinal axioms, which are easily misunderstood and mistrusted outside of set theory (though just as often simply ignored). But without inaccessible cardinals there are no Grothendieck universes; without measurable cardinals there are some accessible categories which are not well-copowered. Even some set theorists reject large cardinal axioms such as Reinhardt and Berkeley cardinals, since they imply the negation of the axiom of choice, which (unlike Lebesgue) most set theorists readily accept as "obvious truth".</p>
<p>What <em>is</em> true is that there is an implicit theory underlying mathematics which lets us develop "most of working mathematics" without worrying about foundations. But this theory is not without its controversies: it includes infinite sets, the axiom of choice, the law of excluded middle, and more. Sometimes it is simply interesting to see <em>what part</em> of mathematics actually depends on these axioms, and sometimes people outright feel that something is wrong with them.</p>
<p>If you use computer assistance in your work (e.g. proof verification software), you might prefer a foundation that is easier for your proof assistant to handle. This may be one that rejects the LEM, for example, or is otherwise incongruous with what "most people" would call "everyday mathematics".</p>
|
linear-algebra | <blockquote>
<p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be two matrices which can be multiplied. Then <span class="math-container">$$\operatorname{rank}(AB) \leq \operatorname{min}(\operatorname{rank}(A), \operatorname{rank}(B)).$$</span></p>
</blockquote>
<p>I proved <span class="math-container">$\operatorname{rank}(AB) \leq \operatorname{rank}(B)$</span> by interpreting <span class="math-container">$AB$</span> as a composition of linear maps, observing that <span class="math-container">$\operatorname{ker}(B) \subseteq \operatorname{ker}(AB)$</span> and using the kernel-image dimension formula. This also provides, in my opinion, a nice interpretation: under repeated composition the kernel can only grow and the image can only shrink - a sort of <em>loss of information</em>.</p>
<p>How do you manage <span class="math-container">$\operatorname{rank}(AB) \leq \operatorname{rank}(A)$</span>? Is there a nice interpretation like the previous one?</p>
| <p>Yes. If you think of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> as linear maps, the image of <span class="math-container">$AB$</span> is obtained by applying <span class="math-container">$A$</span> to the image of <span class="math-container">$B$</span>, which is a subspace of the domain of <span class="math-container">$A$</span>. Applying <span class="math-container">$A$</span> to a subspace can never produce more than applying <span class="math-container">$A$</span> to its whole domain, so <span class="math-container">$\operatorname{im}(AB) \subseteq \operatorname{im}(A)$</span> and hence <span class="math-container">$\operatorname{rank}(AB) \leq \operatorname{rank}(A)$</span>.</p>
| <p>Once you have proved <span class="math-container">$\operatorname{rank}(AB) \le \operatorname{rank}(A)$</span>, you can obtain the other inequality by using transposition and the fact that it doesn't change the rank (see e.g. this <a href="https://math.stackexchange.com/questions/2315/is-the-rank-of-a-matrix-the-same-of-its-transpose-if-yes-how-can-i-prove-it">question</a>). </p>
<p>Specifically, letting <span class="math-container">$C=A^T$</span> and <span class="math-container">$D=B^T$</span>, we have that <span class="math-container">$\operatorname{rank}(DC) \le \operatorname{rank}(D) \implies \operatorname{rank}(C^TD^T)\le \operatorname{rank} (D^T)$</span>, which is <span class="math-container">$\operatorname{rank}(AB) \le \operatorname{rank}(B)$</span>.</p>
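<p>(As a quick numerical sanity check - my addition, not part of the original answer - the following sketch builds random low-rank factors with numpy and confirms both bounds, as well as the invariance of rank under transposition used above. The seed and shapes are arbitrary.)</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

# Build A (5x7) of rank 2 and B (7x4) of rank 3 from random factors.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 7))
B = rng.standard_normal((7, 3)) @ rng.standard_normal((3, 4))

rA = np.linalg.matrix_rank(A)      # 2 (with probability 1)
rB = np.linalg.matrix_rank(B)      # 3
rAB = np.linalg.matrix_rank(A @ B)

assert rAB <= min(rA, rB)
# Transposition leaves the rank unchanged, which is what lets us
# derive one inequality from the other:
assert np.linalg.matrix_rank(A.T) == rA
print(rA, rB, rAB)                 # e.g. 2 3 2
</code></pre>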
|
logic | <p>Recently I have started reviewing mathematical notions that I have always just accepted. Today it is one of the fundamental ones used in equations: </p>
<blockquote>
<p>If we have an equation, then the equation holds if we do the same to both sides. </p>
</blockquote>
<p>This seems perfectly obvious, but it must be stated as an axiom somewhere, presumably in formal logic(?). Only, I don't know what it would be called, or indeed how to search for it - does anybody know?</p>
| <p>This axiom is known as the <em>substitution property of equality</em>. It states that if $f$ is a function, and $x = y$, then $f(x) = f(y)$. See, for example, <a href="https://en.wikipedia.org/wiki/First-order_logic#Equality_and_its_axioms" rel="nofollow noreferrer">Wikipedia</a>.</p>
<p>For example, if your equation is $4x = 2$, then you can apply the function $f(x) = x/2$ to both sides, and the axiom tells you that $f(4x) = f(2)$, or in other words, that $2x = 1$. You could then apply the axiom again (with the same function, even) to conclude that $x = 1/2$.</p>
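<p>(A tiny sympy sketch of the same idea - my addition, purely illustrative: an equation is a pair of sides, and applying one function to both sides yields a new equation that the substitution property guarantees.)</p>
<pre><code>from sympy import Eq, symbols

x = symbols('x')
eq = Eq(4*x, 2)

# Apply f(t) = t/2 to both sides; the substitution property of equality
# guarantees the new equation follows from the old one.
f = lambda t: t / 2
eq2 = Eq(f(eq.lhs), f(eq.rhs))    # Eq(2*x, 1)
eq3 = Eq(f(eq2.lhs), f(eq2.rhs))  # Eq(x, 1/2)
print(eq2, eq3)
</code></pre>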
| <p>"Do the same to both sides" is rather vague. What we can say is that if $f:A \rightarrow B$ is a <em>bijection</em> between sets $A$ and $B$ then, by definition</p>
<p>$\forall \space x,y \in A \space x=y \iff f(x)=f(y)$</p>
<p>The operation of adding $c$ (and its inverse subtracting $c$) is a bijection in groups, rings and fields, so we can conclude that</p>
<p>$x=y \iff x+c=y+c$</p>
<p>However, multiplication by $c$ is only a bijection for certain values of $c$ ($c \ne 0$ in fields, $\gcd(c,n)=1$ in $\mathbb{Z}_n$ etc.), so although we can conclude</p>
<p>$x=y \Rightarrow xc = yc$</p>
<p>it is not safe to assume the converse i.e. in general</p>
<p>$xc=yc \nRightarrow x = y$</p>
<p>and we have to take care about which values of $c$ we can "cancel" from both sides of the equation.</p>
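<p>(The <span class="math-container">$\gcd(c,n)=1$</span> condition is easy to check exhaustively for small moduli; here is a short sketch - my addition - verifying that multiplication by <span class="math-container">$c$</span> on <span class="math-container">$\mathbb{Z}_n$</span> is a bijection exactly when <span class="math-container">$\gcd(c,n)=1$</span>.)</p>
<pre><code>from math import gcd

def mul_is_bijection(c: int, n: int) -> bool:
    # x -> c*x (mod n) is a bijection on {0, ..., n-1}
    # iff it takes n distinct values.
    return len({(c * x) % n for x in range(n)}) == n

for n in range(2, 30):
    for c in range(1, n):
        assert mul_is_bijection(c, n) == (gcd(c, n) == 1)
print("c is cancellable mod n exactly when gcd(c, n) == 1")
</code></pre>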
<p>Some polynomial functions are bijections in $\mathbb{R}$ e.g.</p>
<p>$x=y \iff x^3=y^3$</p>
<p>but others are not e.g.</p>
<p>$x^2=y^2 \nRightarrow x = y$</p>
<p><em>unless</em> we restrict the domain of $f(x)=x^2$ to, for example, non-negative reals. Similarly</p>
<p>$\sin(x) = \sin(y) \nRightarrow x = y$</p>
<p><em>unless</em> we restrict the domain of $\sin(x)$.</p>
<p>So in general we can only "cancel" a function from both sides of an equation if we are sure it is a bijection, or if we have restricted its domain or range to create a bijection.</p>
|