tag | question_body | accepted_answer | second_answer |
---|---|---|---|
probability | <p>In game programming (my profession) it is often necessary to generate all kinds of random things, including random curves (for example, to make a procedural island or for an agent to follow some path). </p>
<p>For one dimensional things, we usually use some random generator that generates say floats (which are really rational numbers) in some range and that is good enough as an approximation for most purposes. (Even though we cannot generate actual real numbers within a range uniformly, we can get a good sense of it with the rationals that we DO generate.)</p>
<p>When it comes to 2D things, though, things are very murky. For example, suppose I want to generate curves uniformly between two points (say, all curves bounded in a box, perhaps with additional requirements such as that the curves be differentiable).</p>
<p>The way we usually do it is to generate random parameters for some specific type of curve - say a Bezier curve. But this is not (as far as I can see) uniform with respect to the original requirements - i.e. some curves that fit the bill will be more likely than others.</p>
<p>Is this even a sensible question to ask?</p>
<p>And if so, is there a way to generate curves (to a decent approximation) so that they are uniform within the parameters? (bounded and smooth)?</p>
<p>It "feels" like there are too many curves; strictly speaking the same is true of the real numbers... but there the rationals can be close enough for practical purposes; with 2D things it seems less clear that every possible real curve is "close enough" to some "rational curve".</p>
<p>So I guess the main question is: if we have the set of "all curves", can we find a way to generate another set of approximations so that each "real" curve is close enough to one of our approximations?</p>
<p><em>Or:</em> is there a mapping from "approximations to reals" to "approximations of continuous, differentiable, bounded curves between two points".... (that preserves uniformity, at least intuitively)?</p>
<p><em>Or:</em> is there a notion of a distribution over (bounded, differentiable) curves? (And a way to sample from it?)</p>
<p><strong>Edit:</strong> I am more interested in the theoretical possibilities. I know LOTS of ways to generate curves... I am particularly interested in generating curves without some kind of bias, and whether this even makes sense to want.</p>
<p><strong>Edit:</strong> @pjs36 pointed out that the curve may be arbitrarily long. I don't mind additional restrictions to prevent pathological curves, such as "not longer than $x$" or "not self-crossing".</p>
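The "random parameters for a specific curve type" approach mentioned above can be sketched as follows; the function name, the choice of a cubic Bezier, and the unit box for the control points are illustrative assumptions, not anything canonical. Note how easy it is, and also how the resulting distribution over curves depends entirely on the parameterization:

```python
import random

def random_bezier(p0, p3, n_samples=50, seed=None):
    """Sample points along a cubic Bezier curve between fixed endpoints,
    with the two interior control points drawn uniformly from the unit box.
    Easy to implement, but not 'uniform over all curves' in any obvious sense."""
    rng = random.Random(seed)
    c1 = (rng.random(), rng.random())
    c2 = (rng.random(), rng.random())

    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    pts = []
    for i in range(n_samples + 1):
        t = i / n_samples
        # de Casteljau evaluation of the cubic
        q0, q1, q2 = lerp(p0, c1, t), lerp(c1, c2, t), lerp(c2, p3, t)
        r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
        pts.append(lerp(r0, r1, t))
    return pts

curve = random_bezier((0.0, 0.0), (1.0, 0.0), seed=42)
```

The bias the question asks about is visible here: curves whose control points happen to fall in high-density regions of the parameter box are favored, and non-Bezier curves are never produced at all.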
| <p>(1) A version of your question was explored here: "<a href="https://mathoverflow.net/q/268106/6094">Generating Random Curves with Fixed Length and Endpoint Distance</a>."
I'd say not definitively answered.
<hr />
<img src="https://i.sstatic.net/Xv5be.jpg" width="400" /></p>
<hr />
<p>(2) You might look at this recent paper:
Igor Rivin, "Random space and plane curves," <a href="https://arxiv.org/abs/1607.05239" rel="nofollow noreferrer">arXiv:1607.05239</a>, 2016. It relies on random trigonometric series:
"our class is precisely the class of random tame knots."</p>
<p>(3) A recent theoretical paper:
Kemppainen, Antti, and Stanislav Smirnov. "Random curves, scaling limits and Loewner evolutions." <em>The Annals of Probability</em> 45.2 (2017): 698-779.
"[W]e show that a weak estimate
on the probability of an annulus crossing implies that a random curve arising
from a statistical mechanics model will have scaling limits and those will be
well-described by Loewner evolutions with random driving forces."
<hr />
<a href="https://i.sstatic.net/QsVmz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsVmz.png" alt="ScalingLimits"></a>
<br />
<sup>
Fig.1: <a href="https://arxiv.org/abs/1212.6215" rel="nofollow noreferrer">arXiv version abstract</a>.
</sup></p>
<hr />
| <p>The answer clearly depends on how you define 'uniformity'.</p>
<p>One way is to understand them as an appropriate limit of their discrete uniform counterparts. For example, the uniform distribution on an interval arises as the limiting object of discrete uniform distributions on finer and finer grids.</p>
<p>A similar idea applies to paths. Let $\delta > 0$ and discretize the $d$-dimensional Euclidean space $\mathbb{R}^d$ by the lattice $(\delta \mathbb{Z})^d = \{(\delta k_1, \cdots, \delta k_d) : k_1, \cdots, k_d \in \mathbb{Z} \}$ of mesh size $\delta > 0$. Then consider all the paths of length $n$ starting at the origin, allowing backtracking. Since we have $2d$ choices for each step, there are $(2d)^n$ such paths. Regard them as polygonal paths in $\mathbb{R}^d$ by linear interpolation and consider a uniform distribution over them. Under a proper scaling limit, this uniform distribution converges to what we call a <em>Brownian motion</em> (BM). Precisely in this sense, BM can be thought of as a uniformly drawn continuous path starting at $0$. (It can also be proved that the discretization scheme does not matter too much, in the sense that this limiting procedure is a path-analogue of the central limit theorem. BM describes random curves whose speeds at different times are almost uncorrelated.)</p>
<p>BM is a basic building block for more complicated objects following a similar scheme. For instance, now fix an $x$ in $\mathbb{R}^d$, discretize it to obtain points $x_\delta \in (\delta \mathbb{Z})^d$, and count all paths of length $n$ in $(\delta \mathbb{Z})^d$ from $0$ to $x_\delta$. Following the same idea as before, the uniform distribution over such paths converges (under appropriate scaling) to what is called a <em>Brownian bridge</em>. Equivalently, the Brownian bridge can be understood as BM conditioned to visit the point $x$ at a given time $t$. Bounding such curves within a region can be done in a similar way.</p>
<p>But here is some bad news: since the 'velocity' at different times is not deterministically related, it is not hard to imagine that Brownian paths look quite irregular. Indeed, it can be shown that Brownian paths are nowhere differentiable with probability $1$. That being said, you can only obtain paths with better regularity by imposing some bias.</p>
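The discretization described above is easy to simulate. A minimal one-dimensional sketch (the step size $1/\sqrt{n}$ is the diffusive scaling that makes the limit nondegenerate; function names are mine):

```python
import random

def scaled_walk(n, seed=None):
    """One sample of a simple random walk with n steps of size 1/sqrt(n):
    the diffusive scaling under which the uniform measure on length-n walks
    converges to Brownian motion on [0, 1]."""
    rng = random.Random(seed)
    step = n ** -0.5
    pos, path = 0.0, [0.0]
    for _ in range(n):
        pos += step if rng.random() < 0.5 else -step
        path.append(pos)
    return path

# Sanity check of the CLT-style scaling: the endpoint W(1) has variance ~ 1.
endpoints = [scaled_walk(100, seed=s)[-1] for s in range(2000)]
var = sum(x * x for x in endpoints) / len(endpoints)
```

Plotting a few such paths for large $n$ gives a good visual sense of the nowhere-differentiability mentioned above: refining the mesh never smooths the path out.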
|
linear-algebra | <p>I am helping my brother with Linear Algebra. I am not able to motivate him to understand what the double dual space is. Is there a nice way of explaining the concept? Thanks for your advice, examples, and ideas.</p>
| <p>If $V$ is a finite dimensional vector space over, say, $\mathbb{R}$, the dual of $V$ is the set of linear maps to $\mathbb{R}$. This is a vector space because it makes sense to add functions $(\phi + \psi)(v) = \phi(v) + \psi(v)$ and multiply them by scalars $(\lambda\phi)(v) = \lambda(\phi(v))$ and these two operations satisfy all the usual axioms.</p>
<p>If $V$ has dimension $n$, then the dual of $V$, which is often written $V^\vee$ or $V^*$, also has dimension $n$. Proof: pick a basis for $V$, say $e_1, \ldots, e_n$. Then for each $i$ there is a unique linear function $\phi_i$ such that $\phi_i(e_i) = 1$ and $\phi_i(e_j) = 0$ whenever $i \neq j$. It's a good exercise to see that these maps $\phi_i$ are linearly independent and span $V^*$.</p>
<p>So given a basis for $V$ we have a way to get a basis for $V^*$. It's true that $V$ and $V^*$ are isomorphic, but the isomorphism depends on the choice of basis (check this by seeing what happens if you change the basis).</p>
<p>Now let's talk about the double dual, $V^{**}$. First, what does it mean? Well, it means what it says. After all, $V^*$ is a vector space, so it makes sense to take its dual. An element of $V^{**}$ is a function that eats elements of $V^*$, i.e. a function that eats functions that eat elements of $V$. This can be a little hard to grasp the first few times you see it. I will use capital Greek letters for elements of $V^{**}$. </p>
<p>Now, here is the trippy thing. Let $v \in V$. I am going to build an element $\Phi_v$ of $V^{**}$. An element of $V^{**}$ should be a function that eats functions that eat vectors in $V$ and returns a number. So we are going to set
$$
\Phi_v(f) = f(v).
$$</p>
<p>You should check that the association $v \mapsto \Phi_v$ is linear (so $\Phi_{\lambda v} = \lambda\Phi_v$ and $\Phi_{v + w} = \Phi_v + \Phi_w$) and is an isomorphism (one-to-one and onto)! This isomorphism didn't depend on choosing a basis, so there's a sense in which $V$ and $V^{**}$ have more in common than $V$ and $V^*$ do.</p>
<p>In fancier language, $V$ and $V^*$ are isomorphic, but not naturally isomorphic (you have to make a choice of basis); $V$ and $V^{**}$ are naturally isomorphic.</p>
<p>Final remark: someone will surely have already said this by the time I've edited and submitted this post, but when $V$ is infinite dimensional, it's not always true anymore that $V = V^{**}$. The map $v \mapsto \Phi_v$ is injective, but not necessarily surjective, in this case.</p>
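The dual-basis construction in the proof above can be checked concretely with numpy. This is a sketch under an arbitrary choice of basis: if the columns of $B$ are a basis $e_1, \ldots, e_n$ of $\mathbb{R}^n$, the rows of $B^{-1}$ represent the functionals $\phi_1, \ldots, \phi_n$, because $B^{-1}B = I$ is exactly the statement $\phi_i(e_j) = \delta_{ij}$:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # basis e1 = (1, 0), e2 = (1, 1), as columns
phi = np.linalg.inv(B)          # row i represents the functional phi_i

e1, e2 = B[:, 0], B[:, 1]
print(phi[0] @ e1, phi[0] @ e2)   # approximately 1, 0
print(phi[1] @ e1, phi[1] @ e2)   # approximately 0, 1
```

Changing $B$ changes `phi`, which is the basis-dependence of the isomorphism $V \cong V^*$ made explicit.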
| <p>Actually it's quite simple: If you have a vector space, <em>any</em> vector space, you can define linear functions on that space. The set of <em>all</em> those functions is the dual space of the vector space. The important point here is that it doesn't matter what this original vector space is. You have a vector space $V$, you have a corresponding dual $V^*$.</p>
<p>OK, now you have linear functions. Now if you add two linear functions, you get again a linear function. Also if you multiply a linear function with a factor, you get again a linear function. Indeed, you can check that linear functions fulfill all the vector space axioms this way. Or in short, the dual space is a vector space in its own right.</p>
<p>But if $V^*$ is a vector space, then it comes with everything a vector space comes with. But as we have seen in the beginning, one thing every vector space comes with is a dual space, the space of all linear functions on it. Therefore also the dual space $V^*$ has a corresponding dual space, $V^{**}$, which is called double dual space (because "dual space of the dual space" is a bit long).</p>
<p>So we have the double dual space, but we also want to know what sort of functions are in it. Well, such a function takes a vector from $V^*$, that is, a linear function on $V$, and maps that to a scalar (that is, to a member of the field the vector space is based on). Now, if you have a linear function on $V$, you already know a way to get a scalar from that: Just apply it to a vector from $V$. Indeed, it is not hard to show that if you just choose an arbitrary fixed element $v\in V$, then the function $F_v\colon\phi\mapsto\phi(v)$ indeed is a linear function on $V^*$, and thus a member of the double dual $V^{**}$. That way we have not only identified certain members of $V^{**}$ but in addition a natural mapping from $V$ to $V^{**}$, namely $F\colon v\mapsto F_v$. It is not hard to prove that this mapping is linear and injective, so that the functions in $V^{**}$ corresponding to vectors in $V$ form a subspace of $V^{**}$. Indeed, if $V$ is finite dimensional, it's even <em>all</em> of $V^{**}$. That's easy to see if you know that $\dim(V^*)=\dim(V)$ and therefore $\dim(V^{**})=\dim(V^*)=\dim(V)$. On the other hand, since $F$ is injective, $\dim(F(V))=\dim(V)$. For finite dimensional vector spaces, the only subspace of the same dimension as the full space is the full space itself. But if $V$ is infinite dimensional, $V^{**}$ is larger than $V$. In other words, there are functions in $V^{**}$ which are not of the form $F_v$ with $v\in V$.</p>
<p>Note that since $V^{**}$ again is a vector space, it <em>also</em> has a dual space, which again has a dual space, and so on. So in principle you have an infinite series of duals (although only for infinite vector spaces they are all different).</p>
|
combinatorics | <p>Can rotations and translations of this shape</p>
<p><a href="https://i.sstatic.net/uZWU5.png" rel="noreferrer"><img src="https://i.sstatic.net/uZWU5.png" alt="enter image description here"></a></p>
<p>perfectly tile some equilateral triangle?</p>
<hr>
<p>I've now also asked this question on <a href="https://mathoverflow.net/questions/267095/can-a-row-of-five-equilateral-triangles-tile-a-big-equilateral-triangle">mathoverflow</a>.</p>
<hr>
<p>Notes:</p>
<ul>
<li>Obviously I'm ignoring the triangle of side <span class="math-container">$0$</span>.</li>
<li>Because the area of the triangle has to be a multiple of the area of the tile, the triangle must have side length divisible by <span class="math-container">$5$</span> (where <span class="math-container">$1$</span> is the length of the short edges of the tile).</li>
<li>The analogous tile made of <em>three</em> equilateral triangles <em>can</em> tile any equilateral triangle with side length divisible by three.</li>
<li>There is a computer program, <a href="http://burrtools.sourceforge.net/" rel="noreferrer">Burr Tools</a>, which was designed to solve this kind of problem. <a href="https://math.stackexchange.com/users/4308/josh-b">Josh B.</a> has used it to prove by exhaustive search that there is no solution when the side length of the triangle is <span class="math-container">$5$</span>, <span class="math-container">$10$</span>, <span class="math-container">$15$</span>, <span class="math-container">$20$</span> or <span class="math-container">$25$</span>. Lengths of <span class="math-container">$30$</span> or more will take a very long time to check.</li>
<li>This kind of problem can often be solved by a <a href="http://yufeizhao.com/olympiad/tiling.pdf" rel="noreferrer">colouring argument</a> but I've failed to find a suitable colouring. (See below.)</li>
<li><a href="https://math.stackexchange.com/users/26501/lee-mosher">Lee Mosher</a> pointed me in the direction of Conway's theory of <a href="http://www.cimat.mx/ciencia_para_jovenes/pensamiento_matematico/thurston.pdf" rel="noreferrer">tiling groups</a>. This theory can be used to show that if the tile can cover an equilateral triangle of side length <span class="math-container">$n$</span> then <span class="math-container">$a^nb^nc^n=e$</span> in the group <span class="math-container">$\left<a,b,c\;\middle|\;a^3ba^{-2}c=a^{-3}b^{-1}a^2c^{-1}=b^3cb^{-2}a=b^{-3}c^{-1}b^2a^{-1}=c^3ac^{-2}b=c^{-3}a^{-1}c^2b^{-1}=e\right>$</span>. But sadly it turns out that we <em>do</em> have that <span class="math-container">$a^nb^nc^n=e$</span> in this group whenever <span class="math-container">$n$</span> divides by <span class="math-container">$5$</span>.</li>
<li>In fact one can use the methods in <a href="http://www.cflmath.com/Research/Tilehomotopy/tilehomotopy.pdf" rel="noreferrer">this paper</a> of Michael Reid to prove that this tile's homotopy group is the cyclic group with <span class="math-container">$5$</span> elements. I think this means that the <em>only</em> thing these group theoretic methods can tell us is a fact we already knew: that the side length must be divisible by <span class="math-container">$5$</span>.</li>
<li>These group theoretic methods are also supposed to subsume all possible colouring arguments, which means that any proof based purely on colouring is probably futile.</li>
<li>The smallest area that can be left uncovered when trying to cover a triangle of side length <span class="math-container">$(1,\dots,20)$</span> is <span class="math-container">$($</span><a href="https://i.sstatic.net/9iIJT.png" rel="noreferrer"><span class="math-container">$1$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/RjPaa.png" rel="noreferrer"><span class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/lC3fl.png" rel="noreferrer"><span class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/kR3Gz.png" rel="noreferrer"><span class="math-container">$1$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/5FlgV.png" rel="noreferrer"><span class="math-container">$5$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/euPiN.png" rel="noreferrer"><span class="math-container">$6$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/ZGWPR.png" rel="noreferrer"><span class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/CDHCC.png" rel="noreferrer"><span class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/zv5AL.png" rel="noreferrer"><span class="math-container">$6$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/71Y9n.png" rel="noreferrer"><span class="math-container">$5$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/K9Jc1.png" rel="noreferrer"><span class="math-container">$6$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/2UeYX.png" rel="noreferrer"><span class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/78ujz.png" rel="noreferrer"><span 
class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/aFLjk.png" rel="noreferrer"><span class="math-container">$6$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/ycu54.png" rel="noreferrer"><span class="math-container">$5$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/Wzrak.png" rel="noreferrer"><span class="math-container">$6$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/DXbsB.png" rel="noreferrer"><span class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/VGBjo.png" rel="noreferrer"><span class="math-container">$4$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/NT0mE.png" rel="noreferrer"><span class="math-container">$6$</span></a><span class="math-container">$,\,$</span><a href="https://i.sstatic.net/m1N3D.png" rel="noreferrer"><span class="math-container">$5$</span></a><span class="math-container">$)$</span> small triangles. In particular it's surprising that when the area is <span class="math-container">$1\;\mathrm{mod}\;5$</span> one must sometimes leave six triangles uncovered rather than just one.</li>
<li>We can look for "near misses" in which all but <span class="math-container">$5$</span> of the small triangles are covered and in which <span class="math-container">$4$</span> of the missing small triangles could be covered by the same tile. There's essentially only <a href="https://i.sstatic.net/fKa4U.png" rel="noreferrer">one</a> near miss for the triangle of side <span class="math-container">$5$</span>, none for the triangle of side <span class="math-container">$10$</span> and six (<a href="https://i.sstatic.net/sqVLe.png" rel="noreferrer">1</a>,<a href="https://i.sstatic.net/wvC9j.png" rel="noreferrer">2</a>,<a href="https://i.sstatic.net/BWPxQ.png" rel="noreferrer">3</a>,<a href="https://i.sstatic.net/d7TD3.png" rel="noreferrer">4</a>,<a href="https://i.sstatic.net/cbu6T.png" rel="noreferrer">5</a>,<a href="https://i.sstatic.net/O0nZN.png" rel="noreferrer">6</a>) for the triangle of side <span class="math-container">$15$</span>. (All other near misses can be generated from these by rotation, reflection, and by reorienting the three tiles that go around the lonesome missing triangle.) This set of six near misses are very interesting since the positions of the single triangle and the place where it "should" go are very constrained.</li>
</ul>
| <p>I suppose I should post: I solved this <a href="https://mathoverflow.net/questions/267095/can-a-row-of-five-equilateral-triangles-tile-a-big-equilateral-triangle/267401#267401">on MathOverflow</a>. The answer is YES: a size-45 triangle can be tiled.</p>
<p>I owe two insights to Josh B here: first that a rhombus with side length 15 can be tiled, and second the strategy to "select a different shape which does tile a triangle, then tile that shape with our $5$ triangle trapezoid."</p>
<p>This $15-15-15-30$ trapezoid can be tiled, and three such trapezoids can tile a triangle with side length $45$.</p>
<p><a href="https://i.sstatic.net/vAh1v.png" rel="noreferrer"><img src="https://i.sstatic.net/vAh1v.png" alt="enter image description here"></a></p>
| <p>Here's the minimal solution, a side-30 triangle. Also posted to <a href="https://mathoverflow.net/questions/267095/can-a-row-of-five-equilateral-triangles-tile-a-big-equilateral-triangle">MathOverflow</a>.<a href="https://i.sstatic.net/deciC.png" rel="noreferrer"><img src="https://i.sstatic.net/deciC.png" alt="enter image description here"></a></p>
|
matrices | <p>Is there any intuition for why rotation matrices are not commutative? I assume the final rotation is the combination of all rotations. Then how does it matter in which order the rotations are applied?</p>
| <p>Here is a picture of a die:</p>
<p><a href="https://i.sstatic.net/Ij8xC.jpg"><img src="https://i.sstatic.net/Ij8xC.jpg" alt="enter image description here"></a></p>
<p>Now let's spin it $90^\circ$ clockwise. The die now shows</p>
<p><a href="https://i.sstatic.net/YNRK3.jpg"><img src="https://i.sstatic.net/YNRK3.jpg" alt="enter image description here"></a></p>
<p>After that, if we flip the left face up, the die lands at</p>
<p><a href="https://i.sstatic.net/JJRKw.jpg"><img src="https://i.sstatic.net/JJRKw.jpg" alt="enter image description here"></a></p>
<hr>
<p>Now, let's do it the other way around: We start with the die in the same position:</p>
<p><a href="https://i.sstatic.net/Ij8xC.jpg"><img src="https://i.sstatic.net/Ij8xC.jpg" alt="enter image description here"></a></p>
<p>Flip the left face up:</p>
<p><a href="https://i.sstatic.net/HWIwe.jpg"><img src="https://i.sstatic.net/HWIwe.jpg" alt="enter image description here"></a></p>
<p>and then $90^\circ$ clockwise</p>
<p><a href="https://i.sstatic.net/pnofv.jpg"><img src="https://i.sstatic.net/pnofv.jpg" alt="enter image description here"></a></p>
<p>If we do it one way, we end up with $3$ on the top and $5, 6$ facing us, while if we do it the other way we end up with $2$ on the top and $1, 3$ facing us. This demonstrates that the two rotations do not commute.</p>
<hr>
<p>Since so many in the comments have come to the conclusion that this is not a complete answer, here are a few more thoughts:</p>
<ul>
<li>Note what happens to the top number of the die: In the first case we change what number is on the left face, then flip the new left face to the top. In the second case we first flip the old left face to the top, and <em>then</em> change what is on the left face. This makes two different numbers face up.</li>
<li>As leftaroundabout said in a comment to the question itself, rotations not commuting is not really anything noteworthy. The fact that they <em>do</em> commute in two dimensions <em>is</em> notable, but asking why they do not commute in general is not very fruitful apart from a concrete demonstration.</li>
</ul>
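The die demonstration can also be carried out with explicit rotation matrices. A sketch (the axis conventions are an arbitrary choice on my part, not tied to the photos above):

```python
import numpy as np

def rot_z(deg):
    """Rotation about the vertical axis (the 'spin 90 degrees' above)."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_x(deg):
    """Rotation about a horizontal axis (the 'flip the left face up' above)."""
    t = np.radians(deg)
    return np.array([[1.0, 0.0,        0.0      ],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t),  np.cos(t)]])

A, B = rot_z(90), rot_x(90)
print(np.allclose(A @ B, B @ A))    # False: the two orders differ in 3D

# In two dimensions, by contrast, rotations always commute:
R = lambda t: np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
print(np.allclose(R(0.3) @ R(1.1), R(1.1) @ R(0.3)))   # True
```

Applying `A @ B` and `B @ A` to the same starting orientation reproduces exactly the two different die positions shown in the photos.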
| <p>Matrices commute if they <em>preserve each others' eigenspaces</em>: there is a set of eigenvectors that, taken together, describe all the eigenspaces of both matrices, in possibly varying partitions.</p>
<p>This makes intuitive sense: this constraint means that a vector in one matrix's eigenspace won't leave that eigenspace when the other is applied, and so the original matrix's transformation still works fine on it. </p>
<p>In two dimensions, every rotation matrix, no matter the angle, has eigenvectors $[i,1]$ and $[-i,1]$. Since all such matrices share the same eigenvectors, they commute.</p>
<p>But in <em>three</em> dimensions, a real matrix such as a rotation matrix always has at least one real eigenvalue, and that eigenvalue has a real eigenvector associated with it: the axis of rotation. This eigenvector is distinct from the other eigenvectors of the rotation matrix (which are necessarily complex for a non-trivial rotation)! So the axis spans an eigenspace of dimension 1, so <strong>rotations with different axes can't possibly share all their eigenvectors</strong>, so they cannot commute.</p>
|
logic | <p>I'd heard of propositional logic for years, but until I came across <a href="https://math.stackexchange.com/questions/4043/what-are-good-resources-for-learning-predicate-logic-predicate-calculus">this question</a>, I'd never heard of predicate logic. Moreover, the fact that <em>Introduction to Logic: Predicate Logic</em> and <em>Introduction to Logic: Propositional Logic</em> (both by Howard Pospesel) are distinct books leads me to believe there are significant differences between the two fields. What distinguishes predicate logic from propositional logic?</p>
| <p>Propositional logic (also called sentential logic) is logic that includes sentence letters (A,B,C) and logical connectives, but not quantifiers. The semantics of propositional logic uses truth assignments to the letters to determine whether a compound propositional sentence is true.</p>
<p>Predicate logic is usually used as a synonym for first-order logic, but sometimes it is used to refer to other logics that have similar syntax. Syntactically, first-order logic has the same connectives as propositional logic, but it also has variables for individual objects, quantifiers, symbols for functions, and symbols for relations. The semantics include a domain of discourse for the variables and quantifiers to range over, along with interpretations of the relation and function symbols.</p>
<p>Many undergrad logic books will present both propositional and predicate logic, so if you find one it will have much more info. A couple of well-regarded options that focus directly on this sort of thing are Mendelson's book and Enderton's book.</p>
<p>This set of <a href="https://sgslogic.net/t20/notes/logic.pdf" rel="nofollow noreferrer">lecture notes</a> by Stephen Simpson is free online and has a nice introduction to the area.</p>
| <p>Propositional logic is an axiomatization of Boolean logic, and predicate logic includes propositional logic. Both systems are known to be consistent, e.g. by exhibiting models in which the axioms are satisfied.</p>
<p>Propositional logic is decidable, for example by the method of truth tables:</p>
<p><a href="http://en.wikipedia.org/wiki/Truth_table" rel="noreferrer"> [Truth table -- Wikipedia]</a></p>
<p>and "complete" in that every tautology in the sentential calculus (basically a Boolean expression on variables that represent "sentences", i.e. that are either True or False) can be proven in propositional logic (and conversely).</p>
<p>Predicate logic (also called predicate calculus and first-order logic) is an extension of propositional logic to formulas involving terms and predicates. The full predicate logic is undecidable:</p>
<p><a href="http://en.wikipedia.org/wiki/First-order_logic" rel="noreferrer"> [First-order logic -- Wikipedia]</a></p>
<p>It is "complete" in the sense that all statements of the predicate calculus which are satisfied in every model can be proven in the "predicate logic" and conversely. This is a famous theorem by Gödel (dissertation,1929):</p>
<p><a href="http://en.wikipedia.org/wiki/G%C3%B6del_completeness_theorem" rel="noreferrer"> [Gödel's completeness theorem -- Wikipedia]</a></p>
<p>Note: As Doug Spoonwood commented, there are formalizations of both propositional logic and predicate logic that dispense with <em>axioms</em> per se and rely entirely on <em>rules of inference</em>. A common presentation would invoke only <em>modus ponens</em> as the single rule of inference and multiple <em>axiom schemas</em>. The important point for a formal logic is that it should be possible to recognize (with finite steps) whether a claim in a proof is logically justified, either as an instance of axiom schemas or by a rule of inference from previously established claims.</p>
|
probability | <p>Let's define a sequence of numbers between 0 and 1. The first term, $r_1$ will be chosen <strong>uniformly randomly</strong> from $(0, 1)$, but now we iterate this process, choosing $r_2$ from $(0, r_1)$, and so on, so $r_3\in(0, r_2)$, $r_4\in(0, r_3)$... The set of all possible sequences generated this way contains the sequence of the reciprocals of all natural numbers, whose sum diverges; but it also contains all geometric sequences in which all terms are less than 1, and they all have convergent sums. The question is: does $\sum_{n=1}^{\infty} r_n$ converge in general? (I think this is called <em>almost sure convergence</em>?) If so, what is the distribution of the limits of all convergent series from this family?</p>
| <p>Let $(u_i)$ be a sequence of i.i.d. uniform(0,1) random variables. Then the sum you are interested in can be expressed as
$$S_n=u_1+u_1u_2+u_1u_2u_3+\cdots +u_1u_2u_3\cdots u_n.$$
The sequence $(S_n)$ is non-decreasing and certainly converges, possibly to $+\infty$.</p>
<p>On the other hand, taking expectations gives
$$E(S_n)={1\over 2}+{1\over 2^2}+{1\over 2^3}+\cdots +{1\over 2^n},$$
so $\lim_n E(S_n)=1.$ Now by Fatou's lemma,
$$E(S_\infty)\leq \liminf_n E(S_n)=1,$$
so that $S_\infty$ has finite expectation and so is finite almost surely.</p>
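A quick Monte Carlo check of the expectation computed above. This truncates the product series at 60 terms, which is an assumption on my part, but a safe one since the partial products shrink geometrically:

```python
import random

def sample_S(rng, n_terms=60):
    """One draw of S_n = u1 + u1*u2 + ... + u1*...*un for i.i.d.
    uniform(0,1) draws; 60 terms is effectively S_infinity."""
    total, prod = 0.0, 1.0
    for _ in range(n_terms):
        prod *= rng.random()
        total += prod
    return total

rng = random.Random(0)
samples = [sample_S(rng) for _ in range(50_000)]
print(sum(samples) / len(samples))   # close to E(S_infinity) = 1
```

The sample mean lands near $1$, matching $\lim_n E(S_n)=1$, and the same samples can be histogrammed to preview the limiting distribution discussed in the other answer.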
| <blockquote>
<p>The probability $f(x)$ that the result is $\in(x,x+dx)$ is given by $$f(x) = \exp(-\gamma)\rho(x)$$ where $\rho$ is the <a href="https://en.wikipedia.org/wiki/Dickman_function" rel="noreferrer">Dickman function</a> as @Hurkyl <a href="https://math.stackexchange.com/questions/2130264/sum-of-random-decreasing-numbers-between-0-and-1-does-it-converge#comment4383202_2130701">pointed out below</a>. This follows from the delay differential equation for $f$, $$f^\prime(x) = -\frac{f(x-1)}{x}$$ with the conditions $$f(x) = f(1) \;\rm{for}\; 0\le x \le1 \;\rm{and}$$ $$\int\limits_0^\infty f(x) = 1.$$ Derivation follows</p>
</blockquote>
<p>From the other answers, it looks like the probability is flat for the results less than 1. Let us prove this first.</p>
<p>Define $P(x,y)$ to be the probability that the final result lies in $(x,x+dx)$ if the first random number is chosen from the range $[0,y]$. What we want to find is $f(x) = P(x,1)$.</p>
<p>Note that if the random range is changed to $[0,ay]$ the probability distribution gets stretched horizontally by $a$ (which means it has to compress vertically by $a$ as well). Hence $$P(x,y) = aP(ax,ay).$$</p>
<p>We will use this to find $f(x)$ for $x<1$.</p>
<p>Note that if the first number chosen is greater than x we can never get a sum less than or equal to x. Hence $f(x)$ is equal to the probability that the first number chosen is less than or equal to $x$ multiplied by the probability for the random range $[0,x]$. That is, $$f(x) = P(x,1) = p(r_1<x)P(x,x)$$</p>
<p>But $p(r_1<x)$ is just $x$ and $P(x,x) = \frac{1}{x}P(1,1)$ as found above. Hence $$f(x) = f(1).$$</p>
<p>The probability density of the result is constant for $x<1$.</p>
<p>Using this, we can now iteratively build up the probabilities for $x>1$ in terms of $f(1)$.</p>
<p>First, note that when $x>1$ we have $$f(x) = P(x,1) = \int\limits_0^1 P(x-z,z) dz$$
We apply the compression again to obtain $$f(x) = \int\limits_0^1 \frac{1}{z} f(\frac{x}{z}-1) dz$$
Setting $\frac{x}{z}-1=t$, we get $$f(x) = \int\limits_{x-1}^\infty \frac{f(t)}{t+1} dt$$
This gives us the differential equation $$\frac{df(x)}{dx} = -\frac{f(x-1)}{x}$$
Since we know that $f(x)$ is a constant for $x<1$, this is enough to solve the differential equation numerically for $x>1$, modulo the constant (which can be retrieved by integration in the end). Unfortunately, the solution is essentially piecewise from $n$ to $n+1$ and it is impossible to find a single function that works everywhere.</p>
<p>For example when $x\in[1,2]$, $$f(x) = f(1) \left[1-\log(x)\right]$$</p>
<p>But the expression gets really ugly even for $x \in[2,3]$, requiring the logarithmic integral function $\rm{Li}$.</p>
<p>Finally, as a sanity check, let us compare the random simulation results with $f(x)$ found using numerical integration. The probabilities have been normalised so that $f(0) = 1$.</p>
<p><a href="https://i.sstatic.net/C86kr.png" rel="noreferrer"><img src="https://i.sstatic.net/C86kr.png" alt="Comparison of simulation with numerical integral and exact formula for $x\in[1,2]$"></a></p>
<p>The match is near perfect. In particular, note how the analytical formula matches the numerical one exactly in the range $[1,2]$.</p>
<p>Though we don't have a general analytic expression for $f(x)$, the differential equation can be used to show that the expected value of the result is $1$.</p>
<p>Finally, note that the delay differential equation above is the same as that of the <a href="https://en.wikipedia.org/wiki/Dickman_function" rel="noreferrer">Dickman function</a> $\rho(x)$ and hence $f(x) = c \rho(x)$. Its properties have been studied. <a href="https://www.encyclopediaofmath.org/index.php/Dickman_function" rel="noreferrer">For example</a> the Laplace transform of the Dickman function is given by $$\mathcal L \rho(s) = \exp\left[\gamma-\rm{Ein}(s)\right].$$
This gives $$\int_0^\infty \rho(x) dx = \exp(\gamma).$$ Since we want $\int_0^\infty f(x) dx = 1,$ we obtain $$f(1) = \exp(-\gamma) \rho(1) = \exp(-\gamma) \approx 0.56145\ldots$$ That is, $$f(x) = \exp(-\gamma) \rho(x).$$
This completes the description of $f$.</p>
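<p>As a numerical sanity check on this result (my own sketch, not part of the original answer), one can simulate the underlying process directly: draw $x_1$ uniformly from $[0,1]$, then $x_2$ uniformly from $[0,x_1]$, and so on, and sum the series. The empirical mean should be $1$, and since $f$ is constant on $[0,1]$, the empirical $P(S\le 1)$ should be $\int_0^1 f = \exp(-\gamma)\approx 0.5615$. The function name and the truncation tolerance below are my choices.</p>

```python
import math
import random

def sample_sum(rng, tol=1e-12):
    """One draw of S = x1 + x2 + ..., where x1 ~ U(0,1) and x_{k+1} ~ U(0, x_k).
    The tail is truncated once the current bound drops below tol."""
    total, cap = 0.0, 1.0
    while cap > tol:
        cap = rng.uniform(0.0, cap)
        total += cap
    return total

rng = random.Random(0)
samples = [sample_sum(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
p_below_1 = sum(s <= 1.0 for s in samples) / len(samples)

print(f"mean ~ {mean:.3f} (theory: 1)")
print(f"P(S <= 1) ~ {p_below_1:.4f} (theory: exp(-gamma) ~ {math.exp(-0.5772156649):.4f})")
```

With $10^5$ samples the standard errors are a few parts in a thousand, so both estimates should land well within $0.01$ of the theoretical values.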
|
probability | <p>I roll a fair die and sequentially sum the numbers the die shows. What are the odds the summation will hit exactly 100?</p>
<p>More generally, what are the odds of hitting an exact target number <em>t</em> while summing the results of numbers sequentially drawn uniformly from a set <em>S</em>.</p>
<p>I have experimentally tested this and the result is (warning - spoilers ahead):</p>
<blockquote class="spoiler">
<p> 1 over the expectation of the drawn numbers. In the case of the fair die 1/((1+2+3+4+5+6)/6) = 6/21 = 2/7</p>
</blockquote>
<p>However I do not have a strong intuition, let alone a formal proof, why this is the case.</p>
<p>I'll be happy to get your thoughts!</p>
| <p>The probability of hitting $n$ on the nose is a function of $n$, call it $f(n)$. It satisfies the recurrence
$$f(n)=\frac{f(n-1)+f(n-2)+f(n-3)+f(n-4)+f(n-5)+f(n-6)}6$$
with boundary values $f(0)=1$ and $f(n)=0$ for $n\lt0$.
I didn't try to solve the recurrence, but my numerical calculations agree with yours, it looks like $f(n)\to2/7$.</p>
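<p>The recurrence is easy to iterate numerically; the sketch below (mine, not part of the answer) tabulates $f(n)$ and shows it settling at $2/7$ well before $n=100$.</p>

```python
# f(n) = average of the previous six values, with f(0) = 1 and f(n) = 0 for n < 0.
f = [1.0]
for n in range(1, 101):
    f.append(sum(f[n - k] for k in range(1, 7) if n - k >= 0) / 6)

print(f"f(100) = {f[100]:.12f}")
print(f"2/7    = {2 / 7:.12f}")
```

The convergence is geometric, so by $n=100$ the difference from $2/7$ is far below double precision rounding noise of interest here.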
| <p>The generating function $$H(x)=\sum\limits_{n=0}^\infty h_nx^n,$$ where $h_0=1$ and, for every $n\geqslant1$, $h_n$ is the probability to hit exactly $n$, solves the identity $$H(x)=1+P(x)H(x),\qquad P(x)=\frac16(x+x^2+\cdots+x^6),$$ hence $$H(x)=\frac1{1-P(x)}.$$
The limit of $h_n$ when $n\to\infty$ is $$\ell=\lim_{x\to1}(1-x)H(x)=\frac1{P'(1)},$$ that is, $$\ell=\frac6{1+2+\cdots+6}=\frac27.$$ This extends to every "die" producing any collection of numbers in any biased or unbiased way. If the "die" produces a random positive integer number $\xi$, the limit of $h_n$ becomes $$\ell=\frac1{E(\xi)}.$$ Assuming the limit $\ell$ exists, this can be understood intuitively as follows: by the law of large numbers, the sum of $n$ draws is about $k=nE(\xi)$ hence, after $n$ draws, $n$ large, one has hit $n$ values from roughly $k$. If each value has a chance roughly $\ell$ to be hit, one can expect that $\ell\approx n/k$, QED. Obvious counterexamples are when $\xi$ is, say, always a multiple of $3$, and these are essentially the only cases since $\ell$ exists if and only if the greatest common divisor of the support of $\xi$ is $1$.</p>
<p><strong>Edit:</strong> To estimate the difference $h_n-\ell$ in the usual case, note that $$1-P(x)=(1-x)(1+x/a)(1-x/u)(1-x/\bar u)(1-x/v)(1-x/\bar v),$$ for some $a$ real positive and some complex numbers $u$ and $v$ with nonzero imaginary parts, hence $$H(x)=\frac{\ell}{1-x}+\frac{b}{1+x/a}+\frac{r}{1-x/u}+\frac{\bar r}{1-x/\bar u}+\frac{s}{1-x/v}+\frac{\bar s}{1-x/\bar v},$$ for some real number $b$ and some complex numbers $r$ and $s$ defined as $$b=-\frac1{aP'(-a)},\qquad r=\frac1{uP'(u)},\qquad s=\frac1{vP'(v)}.$$ Thus, for every $n$, $$h_n=\ell+b(-1)^na^{-n}+2\Re\left(r u^{-n}+s v^{-n}\right).$$ Numerically, $a\approx1.49$, and $|u|$ and $|v|$ are approximately $1.46$ and $1.37$, hence all this yields finally $$|h_n-\ell|=O(\kappa^{-n}),\qquad\kappa\approx1.37.$$ For $n=100$, $\kappa^{-n}\approx2\cdot10^{-14}$ hence one expects that $h_n$ coincides with $\ell$ at up to $13$ or $14$ decimal places.</p>
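<p>The generalisation $\ell = 1/E(\xi)$ is easy to spot-check. The sketch below uses a hypothetical two-sided "die" of my choosing (not from the answer) that shows $1$ or $2$ with probability $\tfrac12$ each, so $E(\xi)=\tfrac32$ and the predicted limit is $\tfrac23$; for this die the recurrence even solves exactly as $h_n = \tfrac23 + \tfrac13\left(-\tfrac12\right)^n$.</p>

```python
# h(n) = (h(n-1) + h(n-2)) / 2 for a "die" showing 1 or 2 with equal odds,
# with h(0) = 1 and h(1) = 1/2.
h = [1.0, 0.5]
for n in range(2, 41):
    h.append((h[n - 1] + h[n - 2]) / 2)

# Exact closed form for this particular die.
exact = [2 / 3 + (1 / 3) * (-0.5) ** n for n in range(41)]
print(f"h(40) = {h[40]:.12f}, predicted limit 2/3 = {2 / 3:.12f}")
```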
|
probability | <p>A large portion of combinatorics cases have probabilities of $\frac1e$.</p>
<p>The secretary problem is one such example. I am excluding trivial cases (e.g., a variable is uniformly distributed over $(0,\pi)$; what is the probability that its value is below $1$?). I can't recall any example where $\frac1\pi$ is the answer to a probability problem.</p>
<p>Are there "meaningful" probability theory case with probability approaching $\frac1\pi$?</p>
| <p><strong>Yes</strong> there is!</p>
<p>Here is an example, called the Buffon's needle.</p>
<p>Drop a match of length $1$ on a floor with parallel lines spaced $2$ units apart; then the probability that the match crosses a line is </p>
<p>$$\frac 1\pi.$$</p>
<p><em>You can have all the details of the proof <a href="https://en.wikipedia.org/wiki/Buffon%27s_needle">here</a> if you like.</em></p>
<p>$\qquad\qquad\qquad\qquad\quad $<a href="https://i.sstatic.net/dRcEr.jpg"><img src="https://i.sstatic.net/dRcEr.jpg" alt=""></a></p>
<hr>
<p>More generally, if your match (or needle, it's all the same) has length $a$, and the lines are spaced $\ell$ units apart, then the probability that the match crosses a line is </p>
<p>$$\frac {2a}{\pi \ell}.$$</p>
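<p>A quick Monte Carlo check of the $\frac{2a}{\pi\ell}$ formula (my sketch, not part of the answer): drop the needle by choosing the distance $d$ from its centre to the nearest line and its angle $\theta$ uniformly; it crosses a line exactly when $d \le \frac a2 \sin\theta$. The function name and parameters below are mine.</p>

```python
import math
import random

def buffon_estimate(n_drops, a=1.0, ell=2.0, seed=0):
    """Monte Carlo estimate of the crossing probability for needle length a
    and line spacing ell (the 2a/(pi*ell) formula assumes a <= ell)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_drops):
        d = rng.uniform(0.0, ell / 2)          # centre-to-nearest-line distance
        theta = rng.uniform(0.0, math.pi / 2)  # needle angle vs. the lines
        if d <= (a / 2) * math.sin(theta):
            crossings += 1
    return crossings / n_drops

est = buffon_estimate(200_000)
print(f"estimate: {est:.4f}, 1/pi: {1 / math.pi:.4f}")
```

With these defaults ($a=1$, $\ell=2$) the true probability is $1/\pi\approx0.3183$, and $2\times10^5$ drops put the estimate within about $0.003$ of it with high probability.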
| <p>This doesn't address the question as asked, but I think it is an interesting example in the spirit of what you asked:</p>
<p>Pick $N$ to be an integer. Now, calculate $p_N$ to be the probability that two random numbers $1 \leq m,n \leq N$ are relatively prime.</p>
<p>Then,
$$\lim_{N \to \infty} p_N =\frac{6}{\pi^2}$$</p>
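<p>This limit is pleasant to check by brute force (my sketch, not part of the answer): count coprime pairs up to $N$ and compare with $6/\pi^2 \approx 0.6079$.</p>

```python
import math

def coprime_fraction(N):
    """Exact fraction of pairs 1 <= m, n <= N with gcd(m, n) = 1."""
    hits = sum(1 for m in range(1, N + 1)
                 for n in range(1, N + 1)
                 if math.gcd(m, n) == 1)
    return hits / (N * N)

p = coprime_fraction(500)
print(f"p_500 = {p:.4f}, 6/pi^2 = {6 / math.pi ** 2:.4f}")
```

Already at $N=500$ the count agrees with $6/\pi^2$ to about three decimal places; the error shrinks like $O(\log N / N)$.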
|
logic | <p>I just read this whole article:
<a href="http://web.maths.unsw.edu.au/~norman/papers/SetTheory.pdf" rel="noreferrer">http://web.maths.unsw.edu.au/~norman/papers/SetTheory.pdf</a><br>
which is also discussed over here:
<a href="https://math.stackexchange.com/questions/356264/infinite-sets-dont-exist/">Infinite sets don't exist!?</a></p>
<p>However, the paragraph which I found most interesting is not really discussed there. I think this paragraph illustrates where most (read: close to all) mathematicians fundementally disagree with Professor NJ Wildberger. I must admit that I'm a first year student mathematics, and I really don't know close to enough to take sides here. Could somebody explain me here why his arguments are/aren't correct?</p>
<p><strong>These edits are made after the answer from Asaf Karagila.</strong><br>
<strong>Edit</strong> $\;$ I've shortened the quote a bit, I hope this question can be reopened ! The full paragraph can be read at the link above.<br>
<strong>Edit</strong> $\;$ I've listed the quotes from his article, I find most intresting: </p>
<ul>
<li><em>The job [of a pure mathematician] is to investigate the mathematical reality of the world in which we live.</em></li>
<li><em>To Euclid, an Axiom was a fact that was sufficiently obvious to not require a proof.</em></li>
</ul>
<p>And from a discussion with the author on the internet:</p>
<p><em>You are sharing with us the common modern assumption that mathematics is built up from
"axioms". It is not a position that Newton, Euler or Gauss would have had a lot of sympathy with, in my opinion. In this course we will slowly come to appreciate that clear and careful definitions are a much preferable beginning to the study of mathematics.</em></p>
<p>Which leads me to the following question: Is it true that with modern mathematics it is becoming less important for an axiom to be self-evident? It sounds to me that ancient mathematics was much more closely related to physics then it is today. Is this true ?</p>
<blockquote>
<h1>Does mathematics require axioms?</h1>
<p>Mathematics does not require "Axioms". The job of a pure mathematician
is not to build some elaborate castle in the sky, and to proclaim that
it stands up on the strength of some arbitrarily chosen assumptions.
The job is to <em>investigate the mathematical reality of the world in
which we live</em>. For this, no assumptions are necessary. Careful
observation is necessary, clear definitions are necessary, and correct
use of language and logic are necessary. But at no point does one need
to start invoking the existence of objects or procedures that we
cannot see, specify, or implement.</p>
<p>People use the term "Axiom" when
often they really mean <em>definition</em>. Thus the "axioms" of group theory
are in fact just definitions. We say exactly what we mean by a group,
that's all. There are no assumptions anywhere. At no point do we or
should we say, "Now that we have defined an abstract group, let's
assume they exist".</p>
<p>Euclid may have called certain of his initial statements Axioms, but
he had something else in mind. Euclid had a lot of geometrical facts
which he wanted to organize as best as he could into a logical
framework. Many decisions had to be made as to a convenient order of
presentation. He rightfully decided that simpler and more basic facts
should appear before complicated and difficult ones. So he contrived
to organize things in a linear way, with most Propositions following
from previous ones by logical reasoning alone, with the exception of
<em>certain initial statements</em> that were taken to be self-evident. To
Euclid, <em>an Axiom was a fact that was sufficiently obvious to not
require a proof</em>. This is a quite different meaning to the use of the
term today. Those formalists who claim that they are following in
Euclid's illustrious footsteps by casting mathematics as a game played
with symbols which are not given meaning are misrepresenting the
situation.</p>
<p>And yes, all right, the Continuum hypothesis doesn't really need to be
true or false, but is allowed to hover in some no-man's land, falling
one way or the other depending on <em>what you believe</em>. Cohen's proof of
the independence of the Continuum hypothesis from the "Axioms" should
have been the long overdue wake-up call. </p>
<p>Whenever discussions about the foundations of mathematics arise, we
pay lip service to the "Axioms" of Zermelo-Fraenkel, but do we ever
use them? Hardly ever. With the notable exception of the "Axiom of
Choice", I bet that fewer than 5% of mathematicians have ever employed
even one of these "Axioms" explicitly in their published work. The
average mathematician probably can't even remember the "Axioms". I
think I am typical-in two weeks time I'll have retired them to their
usual spot in some distant ballpark of my memory, mostly beyond
recall.</p>
<p>In practise, working mathematicians are quite aware of the lurking
contradictions with "infinite set theory". We have learnt to keep the
demons at bay, not by relying on "Axioms" but rather by developing
conventions and intuition that allow us to seemingly avoid the most
obvious traps. Whenever it smells like there may be an "infinite set"
around that is problematic, we quickly use the term "class". For
example: A topology is an "equivalence class of atlases". Of course
most of us could not spell out exactly what does and what does not
constitute a "class", and we learn to not bring up such questions in
company.</p>
</blockquote>
| <blockquote>
<p>Is it true that with modern mathematics it is becoming less important for an axiom to be self-evident?</p>
</blockquote>
<p>Yes and no.</p>
<h2>Yes</h2>
<p>in the sense that we now realize that all proofs, in the end, come down to the axioms and logical deduction rules that were assumed in writing the proof. For every statement, there are systems in which the statement is provable, including specifically the systems that assume the statement as an axiom. Thus no statement is "unprovable" in the broadest sense - it can only be unprovable relative to a specific set of axioms.</p>
<p>When we look at things in complete generality, in this way, there is no reason to think that the "axioms" for every system will be self-evident. There has been a parallel shift in the study of logic away from the traditional viewpoint that there should be a single "correct" logic, towards the modern viewpoint that there are multiple logics which, though incompatible, are each of interest in certain situations.</p>
<h2>No</h2>
<p>in the sense that mathematicians spend their time where it interests them, and few people are interested in studying systems which they feel have implausible or meaningless axioms. Thus some motivation is needed to interest others. The fact that an axiom seems self-evident is one form that motivation can take.</p>
<p>In the case of ZFC, there is a well-known argument that purports to show how the axioms are, in fact, self evident (with the exception of the axiom of replacement), by showing that the axioms all hold in a pre-formal conception of the cumulative hierarchy. This argument is presented, for example, in the article by Shoenfield in the <em>Handbook of Mathematical Logic</em>.</p>
<p>Another in-depth analysis of the state of axiomatics in contemporary foundations of mathematics is "<a href="http://www.jstor.org/stable/420965" rel="noreferrer">Does Mathematics Need New Axioms?</a>" by Solomon Feferman, Harvey M. Friedman, Penelope Maddy and John R. Steel, <em>Bulletin of Symbolic Logic</em>, 2000.</p>
| <p><em>Disclaimer: I didn't read the entire <strong>original</strong> quote in details, the question had since been edited and the quote was shortened. My answer is based on the title, the introduction, and a few paragraphs from the [original] quote.</em></p>
<p>Mathematics, <em>modern mathematics</em>, focuses a lot of resources on rigor. After several millennia in which mathematics was based on intuition, and that got <em>some</em> results, we reached a point where rigor was needed.</p>
<p>Once rigor is needed one cannot just "do things". One has to obey a particular set of rules which define what constitutes as a legitimate proof. True, we don't write all proof in a fully rigorous way, and we do make mistakes from time to time due to neglecting the details.</p>
<p>However we need a rigid framework which tells us what is rigor. Axioms are the direct result of this framework, because axioms are really just assumptions that we are not going to argue with (for the time being anyway). It's a word which we use to distinguish some assumptions from other assumptions, and thus giving them some status of "assumptions we do not wish to change very often".</p>
<p>I should add two points, as well.</p>
<ol>
<li><p>I am not living in a mathematical world. The last I checked I had arms and legs, and not mathematical objects. I ate dinner and not some derived functor. And I am using a computer to write this answer. All these things are not mathematical objects, these are physical objects. </p>
<p>Seeing how I am not living in the mathematical world, but rather in the physical world, I see no need whatsoever to insist that mathematics will describe the world I am in. I prefer to talk about mathematics in a framework where I have rules which help me decide whether or not something is a reasonable deduction or not.</p>
<p>Of course, if I were to discuss how many keyboards I have on my desk, or how many speakers are attached to my computer right now -- then of course I wouldn't have any problem in dropping rigor. But unfortunately a lot of the things in modern mathematics deal with infinite and very general objects. These objects defy all intuition and when not working rigorously mistakes pop up more often then they should, as history taught us.</p>
<p>So one has to decide: either do mathematics about the objects on my desk, or in my kitchen cabinets; or stick to rigor and axioms. I think that the latter is a better choice.</p></li>
<li><p>I spoke with more than one Ph.D. student in computer science that did their M.Sc. in mathematics (and some folks that only study a part of their undergrad in mathematics, and the rest in computer science), and everyone agreed on one thing: computer science lacks the definition of proof and rigor, and it gets really difficult to follow some results.</p>
<p>For example, one of them told me he listened to a series of lectures by someone who has a world renowned expertise in a particular topic, and that person made a horrible mistake in the proof of a most trivial lemma. Of course the lemma was correct (and that friend of mine sat to write a proof down), but can we really allow negligence like that? In computer science a lot of the results are later applied into code and put into tests. Of course that doesn't prove their correctness, but it gives a "good enough" feel to it.</p>
<p>How are we, in mathematics, supposed to test our proofs about intangible objects? When we write an inductive argument. How are we even supposed to begin testing it? Here is an example: <strong>all the decimal expansions of integers are shorter than $2000^{1000}$ decimal digits.</strong> I defy someone to write an integer which is larger than $10^{2000^{1000}}$ explicitly. It can't be done in the physical world! Does that mean this preposterous claim is correct? No, it does not. Why? Because our intuition about integers tells us that they are infinite, and that all of them have decimal expansions. It would be absurd to assume otherwise.</p></li>
</ol>
<p>It is important to realize that axioms are not just the axioms of logic and $\sf ZFC$. Axioms are all around us. These are the definitions of mathematical objects. We have axioms of a topological space, and axioms for a category and axioms of groups, semigroups and cohomologies.</p>
<p>To ignore that fact is to bury your head in the sand and insist that axioms are only for logicians and set theorists.</p>
|
game-theory | <p>I have read about this problem:</p>
<p><a href="http://en.wikipedia.org/wiki/Secretary_problem">http://en.wikipedia.org/wiki/Secretary_problem</a></p>
<p>But I want to see how it is proven that the "optimal" solution is indeed optimal. I understand how to prove that <strong>if</strong> the optimal solution is of the form "wait for $t$ candidates and then choose the next best one" then $t=n/e$ is optimal; but why is the best strategy of that form in the first place?</p>
<p>A complete proof is not required - a reference to a good text discussing this is good as well.</p>
| <p>Since this is all or nothing, there is no point in selecting a candidate who is not best so far. </p>
<p>If there are $n$ candidates in total and the $k$th candidate is the best so far then the probability that the $k$th candidate is best overall is $\frac{k}{n}$, which is an increasing function of $k$. </p>
<p>So if you have a decision method which selects the $k$th candidate when best so far but not the $m$th candidate when best so far, with $k \lt m$, then a better (or, in an extreme case, not worse) decision method would be not to select the $k$th candidate when best so far but to select the $m$th candidate when best so far. </p>
<p>So in an optimal method, if at any stage you are willing to select a best-so-far candidate, you should also be willing to select any subsequent best-so-far candidate. That gives the strategy in your question: do not select up to a point, and then select the first best-so-far candidate after that point.</p>
<p>There is an extreme case: for example if there are two candidates, it does not make any difference whether you accept the first candidate or wait to see the second candidate.</p>
| <p>This is merely a rephrasing of Henry's answer. It's probably utterly useless to give the same proof again in different words, so I'm making it community-wiki.</p>
| <p>Recall the secretary problem: At each time $k$, you want to decide whether to pick candidate $k$ or not. You win if the candidate you pick is the best of all $n$ candidates, and you want to maximize the probability of winning.</p>
<p>Now note that at any time $k$, if candidate $k$ is not the best among candidates $1, \dots, k$, then if you pick the candidate you lose immediately. (This candidate is not even best among those seen at that point; so cannot be best among all candidates.) So you will only pick a candidate if she's best among those seen until then. Thus, any optimal strategy will be of the form that does, at each time $k$:</p>
<blockquote>
<p>If candidate $k$ is better than everyone else seen previously and [some additional conditions], pick candidate $k$ and stop.</p>
</blockquote>
<p>The "[some additional conditions]" must depend only on information you have upto time $k$, and in particular only on $k$ and the relative order of candidates $1$ to $k$. But since you only care about whether you pick the best candidate or not, the only relevant factor of the relative order is who the best is among those seen at that point, and that you already know is candidate $k$. So the "[some additional conditions]" is some predicate $P(k)$ that <em>depends only on $k$</em>.</p>
<p>Now note that if you pick candidate $k$ (who is best among $1$ to $k$), then the probability of winning is $k/n$ (the probability that the best was among the first $k$). This is an increasing function of $k$, which means that if $k$ is good so is $k+1$ (i.e., if $P(k)$ is true then so should $P(k+1)$ be). So $P(k)$ is of the form $[k \ge t]$ i.e., the range of good $k$ is some interval $\{t, \dots, n\}$ for some $t$.</p>
<p>This is what you wanted proved. (And as Henry observed, for the special case $n=2$, both $t=1$ and $t=2$ work, but for larger $n$, you can prove that $t$ should be $n/e$.)</p>
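<p>Once the strategy is known to be of cutoff form, the win probability can be computed exactly for each threshold. The sketch below (my code and notation, not from either answer) evaluates the standard formula $P(r)=\frac rn\sum_{k=r}^{n-1}\frac1k$ implied by the argument above, and locates its maximiser for $n=100$.</p>

```python
import math

def win_probability(r, n):
    """P(win) when we reject the first r candidates, then take the first
    best-so-far candidate afterwards (r = 0 means take the very first)."""
    if r == 0:
        return 1.0 / n
    return (r / n) * sum(1.0 / k for k in range(r, n))

n = 100
best_r = max(range(n), key=lambda r: win_probability(r, n))
print(f"best cutoff: {best_r} (n/e ~ {n / math.e:.1f})")
print(f"win probability: {win_probability(best_r, n):.5f}")
```

For $n=100$ the maximiser is $r=37\approx n/e$, with win probability about $0.37104$, in line with the $t=n/e$ result discussed in the question.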
|
differentiation | <p>In Leibniz notation, the 2nd derivative is written as
$$\dfrac{\mathrm d^2y}{\mathrm dx^2}\ ?$$</p>
<p>Why is the location of the $2$ in different places in the $\mathrm dy/\mathrm dx$ terms?</p>
| <p>Purely symbolically, if we accept that $dy = f'(x)\,dx$, and treat $dx$ as a constant, then $$d^2y = d(dy) = d(f'(x)\,dx) = dx\,d(f'(x)) = dx\,f''(x)\,dx = f''(x)\,(dx)^2,$$
so dividing yields:
$$\frac{d^2y}{(dx)^2} = \frac{d^2y}{dx^2} = f''(x).$$</p>
<p>As to where this notation actually comes from, though: My guess is that it comes from a time when mathematicians primarily thought of $dx$ and $dy$ as "infinitesimal quantities." There are ways of doing so rigorously (via non-standard analysis), and perhaps there is a way of making this notation rigorous that way.</p>
<hr>
<p>However, we can still give rigorous meaning to these calculations without appealing to non-standard analysis by using the language of bilinear forms.</p>
<p>If $f$ is differentiable, we can define a map
\begin{align*}
df\colon \mathbb{R} & \to L(\mathbb{R}; \mathbb{R}) \\
df(x)(dx) & = f'(x)\,dx.
\end{align*}
Here, $L(\mathbb{R};\mathbb{R})$ denotes the set of linear maps from $\mathbb{R} \to \mathbb{R}$, and $dx$ is simply a real number. Going one step further, we can consider the map $$d^2f = d(df)\colon \mathbb{R} \to L(\mathbb{R};L(\mathbb{R};\mathbb{R})).$$
By identifying $L(\mathbb{R}; L(\mathbb{R}; \mathbb{R}))$ with the set of bilinear maps $B(\mathbb{R} \times \mathbb{R};\mathbb{R})$, we have the bilinear map
$$d^2f(x)(dx^1, dx^2) = dx^1\, f''(x) \,dx^2$$ whose associated quadratic form is $$d^2f(x)(dx) = f''(x)\,(dx)^2.$$
It is now perfectly legal to divide on both sides by $(dx)^2$, obtaining $$\frac{d^2f}{dx^2} = f''(x).$$</p>
| <p>Somewhat mundanely,</p>
<p>$$ \frac{d}{dx}\left(\frac{d}{dx}(y)\right) = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d\,dy}{dx\,dx} = \frac{d^2 y}{dx^2} $$</p>
|
probability | <p>Suppose we have two independent random variables <span class="math-container">$Y$</span> and <span class="math-container">$X$</span>, both being exponentially distributed with respective parameters <span class="math-container">$\mu$</span> and <span class="math-container">$\lambda$</span>.</p>
<p>How can we calculate the pdf of <span class="math-container">$Y-X$</span>?</p>
| <p>You can think of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> as waiting times for two independent things (say <span class="math-container">$A$</span> and <span class="math-container">$B$</span> respectively) to happen. Suppose we wait until the first of these happens. If it is <span class="math-container">$A$</span>, then (by the lack-of-memory property of the exponential distribution) the further waiting time until <span class="math-container">$B$</span> happens still has the same
exponential distribution as <span class="math-container">$Y$</span>; if it is <span class="math-container">$B$</span>, the further waiting time until <span class="math-container">$A$</span> happens still has the same exponential distribution as <span class="math-container">$X$</span>. That says that the conditional distribution of <span class="math-container">$Y-X$</span> given <span class="math-container">$Y > X$</span> is the distribution of <span class="math-container">$Y$</span>, and the conditional distribution of <span class="math-container">$Y-X$</span> given <span class="math-container">$Y < X$</span> is the distribution of <span class="math-container">$-X$</span>.</p>
<p>Let <span class="math-container">$Z=Y-X$</span>. We use a formula related to the law of total probability:
<span class="math-container">$$f_Z(x) = f_{Z|Z<0}(x)P(Z<0) + f_{Z|Z\geq 0}(x)P(Z\geq 0)\,.$$</span></p>
<p>Given that we know <span class="math-container">$P(Z<0)= P(Y<X) = \frac{\mu}{\mu+\lambda}$</span>, and correspondingly <span class="math-container">$P(Y>X) = \frac{\lambda}{\mu+\lambda}$</span>, the above implies that the pdf for <span class="math-container">$Y-X$</span> is
<span class="math-container">$$ f(x) = \frac{\lambda \mu}{\lambda+\mu}
\cases{e^{\lambda x} & if $x < 0 $\cr
e^{-\mu x} & if $x \geq 0 \,.$\cr}$$</span></p>
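<p>A direct numerical check of this density (my sketch, not part of the answer, with $\lambda=2$, $\mu=3$ chosen arbitrarily): it should integrate to $1$ and have mean $E[Y]-E[X] = \frac1\mu-\frac1\lambda$.</p>

```python
import math

lam, mu = 2.0, 3.0
c = lam * mu / (lam + mu)

def f(x):
    # Piecewise density of Z = Y - X from the formula above.
    return c * (math.exp(lam * x) if x < 0 else math.exp(-mu * x))

# Trapezoidal rule on [-20, 20]; the tails beyond that are negligible.
N = 200_000
h = 40 / N
xs = [-20 + i * h for i in range(N + 1)]
ys = [f(x) for x in xs]
total = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
mean = h * sum(x * y for x, y in zip(xs, ys))

print(f"integral ~ {total:.6f} (should be 1)")
print(f"mean ~ {mean:+.6f} (1/mu - 1/lam = {1 / mu - 1 / lam:+.6f})")
```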
| <p>The right answer depends very much on what your mathematical background is. I will assume that you have seen some calculus of several variables, and not much beyond that. Instead of using your $u$ and $v$, I will use $X$ and $Y$.</p>
<p>The density function of $X$ is $\lambda e^{-\lambda x}$ (for $x \ge 0$), and $0$ elsewhere. There is a similar expression for the density function of $Y$. By independence, the <strong>joint</strong> density function of $X$ and $Y$ is
$$\lambda\mu e^{-\lambda x}e^{-\mu y}$$
in the first quadrant, and $0$ elsewhere.</p>
<p>Let $Z=Y-X$. We want to find the density function of $Z$. First we will find the cumulative distribution function $F_Z(z)$ of $Z$, that is, the probability that $Z\le z$.</p>
<p>So we want the probability that $Y-X \le z$. The geometry is a little different when $z$ is positive than when $z$ is negative. I will do $z$ positive, and you can take care of negative $z$. </p>
<p>Consider $z$ fixed and positive, and <strong>draw</strong> the line $y-x=z$. We want to find the probability that the ordered pair $(X,Y)$ ends up below that line or on it. The only relevant region is in the first quadrant. So let $D$ be the part of the first quadrant that lies below or on the line $y=x+z$. Then
$$P(Z \le z)=\iint_D \lambda\mu e^{-\lambda x}e^{-\mu y}\,dx\,dy.$$</p>
<p>We will evaluate this integral, by using an iterated integral. First we will integrate with respect to $y$, and then with respect to $x$. Note that $y$ travels from $0$ to $x+z$, and then $x$ travels from $0$ to infinity. Thus
$$P(Z\le z)=\int_0^\infty \lambda e^{-\lambda x}\left(\int_{y=0}^{x+z} \mu e^{-\mu y}\,dy\right)dx.$$</p>
<p>The inner integral turns out to be $1-e^{-\mu(x+z)}$. So now we need to find
$$\int_0^\infty \left(\lambda e^{-\lambda x}-\lambda e^{-\mu z} e^{-(\lambda+\mu)x}\right)dx.$$
The first part is easy, it is $1$. The second part is fairly routine. We end up with
$$P(Z \le z)=1-\frac{\lambda}{\lambda+\mu}e^{-\mu z}.$$
For the density function $f_Z(z)$ of $Z$, differentiate the cumulative distribution function. We get
$$f_Z(z)=\frac{\lambda\mu}{\lambda+\mu} e^{-\mu z} \quad\text{for $z \ge 0$.}$$
Please note that we only dealt with positive $z$. A very similar argument will get you $f_Z(z)$ at negative values of $z$. The main difference is that the final integration is from $x=-z$ on. </p>
|
combinatorics | <p>Let $S$ be a set of size $n$. There is an easy way to count the number of subsets with an even number of elements. Algebraically, it comes from the fact that</p>
<p>$\displaystyle \sum_{k=0}^{n} {n \choose k} = (1 + 1)^n$</p>
<p>while</p>
<p>$\displaystyle \sum_{k=0}^{n} (-1)^k {n \choose k} = (1 - 1)^n$.</p>
<p>It follows that </p>
<p>$\displaystyle \sum_{k=0}^{n/2} {n \choose 2k} = 2^{n-1}$. </p>
<p>A direct combinatorial proof is as follows: fix an element $s \in S$. If a given subset has $s$ in it, add it in; otherwise, take it out. This defines a bijection between the number of subsets with an even number of elements and the number of subsets with an odd number of elements.</p>
<p>The analogous formulas for the subsets with a number of elements divisible by $3$ or $4$ are more complicated, and divide into cases depending on the residue of $n \bmod 6$ and $n \bmod 8$, respectively. The algebraic derivations of these formulas are as follows (with $\omega$ a primitive third root of unity): observe that</p>
<p>$\displaystyle \sum_{k=0}^{n} \omega^k {n \choose k} = (1 + \omega)^n = (-\omega^2)^n$</p>
<p>while</p>
<p>$\displaystyle \sum_{k=0}^{n} \omega^{2k} {n \choose k} = (1 + \omega^2)^n = (-\omega)^n$</p>
<p>and that $1 + \omega^k + \omega^{2k} = 0$ if $k$ is not divisible by $3$ and equals $3$ otherwise. (This is a special case of the discrete Fourier transform.) It follows that</p>
<p>$\displaystyle \sum_{k=0}^{n/3} {n \choose 3k} = \frac{2^n + (-\omega)^n + (-\omega)^{2n}}{3}.$</p>
<p>$-\omega$ and $-\omega^2$ are sixth roots of unity, so this formula splits into six cases (or maybe three). Similar observations about fourth roots of unity show that</p>
<p>$\displaystyle \sum_{k=0}^{n/4} {n \choose 4k} = \frac{2^n + (1+i)^n + (1-i)^n}{4}$</p>
<p>where $1+i = \sqrt{2} e^{ \frac{\pi i}{4} }$ is a scalar multiple of an eighth root of unity, so this formula splits into eight cases (or maybe four). </p>
<p><strong>Question:</strong> Does anyone know a direct combinatorial proof of these identities? </p>
| <p>There's a very pretty combinatorial proof of the general identity <span class="math-container">$$\sum_{k \geq 0} \binom{n}{rk} = \frac{1}{r} \sum_{j=0}^{r-1} (1+\omega^j)^n,$$</span>
for <span class="math-container">$\omega$</span> a primitive <span class="math-container">$r$</span>th root of unity, in Benjamin, Chen, and Kindred, "<a href="https://scholarship.claremont.edu/hmc_fac_pub/535/" rel="nofollow noreferrer">Sums of Evenly Spaced Binomial Coefficients</a>," <em>Mathematics Magazine</em> 83 (5), pp. 370-373, December 2010.</p>
<p>They show that both sides count the number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span> beginning at vertex 0, where <span class="math-container">$C_r$</span> is the directed cycle on <span class="math-container">$r$</span> elements with the addition of a loop at each vertex, and a walk is <em>closed</em> if it ends where it starts. </p>
<p><em>Left-hand side</em>: In order for an <span class="math-container">$n$</span>-walk to be closed, it has to take <span class="math-container">$kr$</span> forward moves and <span class="math-container">$n-kr$</span> stationary moves for some <span class="math-container">$k$</span>.</p>
<p><em>Right-hand side</em>: The number of closed walks starting at vertex <span class="math-container">$j$</span> is the same regardless of the choice of <span class="math-container">$j$</span>, and so it suffices to prove that the total number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span> is <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n.$</span> For each <span class="math-container">$n$</span>-walk with initial vertex <span class="math-container">$j$</span>, assign each forward step a weight of <span class="math-container">$\omega^j$</span> and each stationary step a weight of <span class="math-container">$1$</span>. Define the weight of an <span class="math-container">$n$</span>-walk itself to be the product of the weights of the steps in the walk. Thus the sum of the weights of all <span class="math-container">$n$</span>-walks starting at <span class="math-container">$j$</span> is <span class="math-container">$(1+\omega^j)^n$</span>, and <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span> gives the total weight of all <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span>. The open <span class="math-container">$n$</span>-walks can then be partitioned into orbits such that the sum of the weights of the walks in each orbit is <span class="math-container">$0$</span>. Thus the open <span class="math-container">$n$</span>-walks contribute a total of <span class="math-container">$0$</span> to the sum <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span>. Since a closed <span class="math-container">$n$</span>-walk has weight <span class="math-container">$1$</span>, <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span> must therefore give the number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span>.</p>
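<p>As a quick numerical sanity check (mine, not from the paper), the identity above and its shifted version with general residue <span class="math-container">$a$</span> (given below) can be verified in a few lines; the <code>lhs</code>/<code>rhs</code> names are my own:</p>

```python
import cmath
from math import comb

def lhs(n, r, a=0):
    # sum of C(n, a + r*k) over k >= 0
    return sum(comb(n, j) for j in range(a, n + 1, r))

def rhs(n, r, a=0):
    # (1/r) * sum_j omega^(-j*a) * (1 + omega^j)^n, omega a primitive r-th root of unity
    omega = cmath.exp(2j * cmath.pi / r)
    return sum(omega ** (-j * a) * (1 + omega ** j) ** n for j in range(r)) / r

for r in range(2, 7):
    for n in range(25):
        for a in range(r):
            assert abs(lhs(n, r, a) - rhs(n, r, a)) < 1e-6
```

For instance, <code>lhs(6, 3)</code> recovers the count <span class="math-container">$1+20+1=22$</span> mentioned in the other answer's errata.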
<p><HR></p>
<p>They then make a slight modification of the argument above to give a combinatorial proof of
<span class="math-container">$$\sum_{k \geq 0} \binom{n}{a+rk} = \frac{1}{r} \sum_{j=0}^{r-1} \omega^{-ja}(1+\omega^j)^n,$$</span>
where <span class="math-container">$0 \leq a < r$</span>.</p>
<p><HR></p>
<p>Benjamin and Scott, in "<a href="https://scholarship.claremont.edu/hmc_fac_pub/124/" rel="nofollow noreferrer">Third and Fourth Binomial Coefficients</a>" (<em>Fibonacci Quarterly</em>, 49 (2), pp. 99-101, May 2011) give different combinatorial arguments for the specific cases you're asking about, <span class="math-container">$\sum_{k \geq 0} \binom{n}{3k}$</span> and <span class="math-container">$\sum_{k \geq 0} \binom{n}{4k}$</span>. I prefer the more general argument above, though, so I'll just leave this one as a link and not summarize it. </p>
<p>Fix two elements s<sub>1</sub>,s<sub>2</sub>∈S and divide the subsets of S into two parts: (subsets of S containing only s<sub>2</sub>)∪(subsets of S which contain s<sub>1</sub> if they contain s<sub>2</sub>). The second part contains an equal number of sets for each remainder mod 3 (because Z/3 acts there by adding s<sub>1</sub>, then s<sub>2</sub>, then removing both of them) -- namely, 2<sup>n-2</sup>. And for the first part we have a bijection with subsets <em>(edit: with 2 mod 3 elements)</em> of a set with (n-2) elements.</p>
<p>So we get a recurrence relation that gives the answer 2<sup>n-2</sup>+2<sup>n-4</sup>+... -- i.e. (2<sup>n</sup>-1)/3 for even and (2<sup>n</sup>-2)/3 for odd n.</p>
<hr>
<p><strong>Errata.</strong> For n=0,1,5 mod 6 one should add "+1" to the answer from the previous paragraph (e.g. for n=6 the correct answer is 1+20+1=22 and not 21).</p>
<p>Let me try to rephrase the solution to make it obvious. For n=2k divide S into k pairs and consider the action of the group Z/3Z on each pair described in the first paragraph. We get an action of (Z/3Z)<sup>k</sup> on subsets of S, and after removing its only fixed point (the k-element set consisting of the second point from each pair) we get a bijection between subsets which have 0, 1 or 2 elements mod 3. So there are (2<sup>n</sup>-1)/3 sets with i mod 3 elements excluding the fixed point, <em>and</em> to count that point one should add "+1" for i=k mod 3.</p>
<p>And for n=2k+1 there are 2 fixed points (the sets that include or exclude the (2k+1)-th element of S) with k+1 and k elements respectively.</p>
|
combinatorics | <blockquote>
<p>Let <span class="math-container">$k$</span> be a positive integer. A spectator is given <span class="math-container">$n=k!+k−1$</span> balls numbered <span class="math-container">$1,2,\dotsc,n$</span>. Unseen by the illusionist, the spectator arranges the balls into a sequence as they see fit. The assistant studies the sequence, chooses some block of <span class="math-container">$k$</span> consecutive balls, and covers them under their scarf. Then the illusionist looks at the newly obscured sequence and guesses the precise order of the <span class="math-container">$k$</span> balls they do not see.</p>
<p>Devise a strategy for the illusionist and the assistant to follow so that the trick always works.</p>
</blockquote>
<blockquote>
<p>(The strategy needs to be constructed explicitly. For instance, it should be possible to implement the strategy, as described by the solver, in the form of a computer program that takes <span class="math-container">$k$</span> and the obscured sequence as input and then runs in time polynomial in <span class="math-container">$n$</span>. A mere proof that an appropriate strategy exists does not qualify as a complete solution.)</p>
</blockquote>
<blockquote>
<p>Source: Komal, October 2019, problem A <span class="math-container">$760$</span>.<br />
Proposed by Nikolai Beluhov, Bulgaria, and Palmer Mebane, USA</p>
</blockquote>
<hr />
<p><em>I can prove that such a strategy must exist:</em></p>
<p>We have a set <span class="math-container">$A$</span> of all permutations (what the assistant sees) and a set <span class="math-container">$B$</span> of all possible positions of a scarf (mark it <span class="math-container">$0$</span>) and remaining numbers (what the illusionist sees).</p>
<p>We connect each <span class="math-container">$a$</span> in <span class="math-container">$A$</span> with <span class="math-container">$b$</span> in <span class="math-container">$B$</span> if a sequence <span class="math-container">$b$</span> without <span class="math-container">$0$</span> matches with some consecutive subsequence in <span class="math-container">$a$</span>. Then each <span class="math-container">$a$</span> has degree <span class="math-container">$n-k+1$</span> and each <span class="math-container">$b$</span> has degree <span class="math-container">$k!$</span>. Now take an arbitrary subset <span class="math-container">$X$</span> in <span class="math-container">$A$</span> and let <span class="math-container">$E$</span> be a set of all edges from <span class="math-container">$X$</span>, and <span class="math-container">$E'$</span> set of all edges from <span class="math-container">$N(X)$</span> (the set of all neighbours of vertices in <span class="math-container">$X$</span>). Then we have <span class="math-container">$E\subseteq E'$</span> and so <span class="math-container">$|E|\leq |E'|$</span>. Now <span class="math-container">$|E|= (n-k+1)|X|$</span> and <span class="math-container">$|E'| = k!|N(X)|$</span>, so we have <span class="math-container">$$ (n-k+1)|X| \leq k!|N(X)|\implies |X|\leq |N(X)|.$$</span>
By Hall's marriage theorem there exists a perfect matching between <span class="math-container">$A$</span> and <span class="math-container">$B$</span>...</p>
<p><em>...but I can not find one explicitly. Any idea?</em></p>
<hr />
<p><strong>Update: 2020. 12. 20.</strong></p>
<ul>
<li><a href="https://artofproblemsolving.com/community/c6t309f6h2338577_the_magic_trick" rel="nofollow noreferrer">https://artofproblemsolving.com/community/c6t309f6h2338577_the_magic_trick</a></li>
<li><a href="https://dgrozev.wordpress.com/2020/11/14/magic-recovery-a-komal-problem-about-magic/" rel="nofollow noreferrer">https://dgrozev.wordpress.com/2020/11/14/magic-recovery-a-komal-problem-about-magic/</a></li>
</ul>
| <p>This is a strategy that <strong>works in every case I checked, but its validity for all k needs to be proved.</strong> By his choice of where to place the scarf, the assistant conveys information to the illusionist. There are <span class="math-container">$k!$</span> possible placements of the scarf available to the assistant.</p>
<p>Consider a random permutation (<span class="math-container">$a_1,a_2,...,a_n$</span>) of {<span class="math-container">$1,2,...,n$</span>}, that the audience chooses. The assistant looks at this arrangement and assigns a <em>parity</em> to it as follows:</p>
<ol>
<li><p>He takes the <span class="math-container">$k$</span>-tuple (<span class="math-container">$a_1,a_2,...,a_{k}$</span>). He constructs a <span class="math-container">$k$</span>-digit number (concatenation) from it as
<span class="math-container">$$a_1a_2...a_{k}$$</span>
He then calculates the rank of this number among all of its <span class="math-container">$k!$</span> permutations (lowest number gets rank 0 and highest gets rank <span class="math-container">$(k!-1)$</span>). He assigns it a variable <span class="math-container">$$\boxed{x_1=(rank)}$$</span></p>
</li>
<li><p>Assign <span class="math-container">$x_2$</span> to {<span class="math-container">$a_2,a_3,...,a_{k+1}$</span>}. Similarly assign <span class="math-container">$x_3,x_4,...,$</span> and <span class="math-container">$x_{k!}$</span>, where indices are read cyclically: <span class="math-container">$a_{i+n}=a_i$</span></p>
</li>
</ol>
<p>Compute the parity:
<span class="math-container">$$p=\left(\sum_{i=1}^{k!} x_i\right) \bmod k!$$</span></p>
<p>There are <span class="math-container">$k!$</span> possible values of <span class="math-container">$p$</span>: {<span class="math-container">$0,1,2,...,k!-1$</span>}, matching the <span class="math-container">$n-k+1=k!$</span> possible scarf positions. <strong>Depending on the value of <span class="math-container">$p$</span> the assistant chooses where to place the scarf</strong>. (When <span class="math-container">$p=0$</span> he places it over the first <span class="math-container">$k$</span> balls; when <span class="math-container">$p=1$</span> over balls <span class="math-container">$2$</span> through <span class="math-container">$k+1$</span>, and so on.)</p>
<p>When the illusionist sees the obscured arrangement, <strong>he knows the value of <span class="math-container">$p$</span> by the position of the scarf. Although I haven't proved it yet, only one arrangement of the obscured balls gives this value of <span class="math-container">$p$</span></strong>. This strategy works for all cases of <span class="math-container">$k=2$</span> and many cases of <span class="math-container">$k=3$</span>. But it is a behemoth to prove for general <span class="math-container">$(k,n)$</span>.</p>
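<p>A small sketch of the assistant's computation (my own reading: since the answer gives the cyclic convention <span class="math-container">$a_{i+n}=a_i$</span>, I let the windows run over all <span class="math-container">$n$</span> cyclic starting positions; under that reading the <span class="math-container">$k=2$</span> case can be checked exhaustively):</p>

```python
from itertools import permutations
from math import factorial

def rank(t):
    # lexicographic rank of tuple t among all permutations of its entries
    return sorted(permutations(t)).index(t)

def scarf_position(seq, k):
    # assistant: p from all n cyclic windows of length k; scarf starts at index p
    n = len(seq)
    return sum(rank(tuple(seq[(i + j) % n] for j in range(k)))
               for i in range(n)) % factorial(k)

def obscure(seq, k):
    p = scarf_position(seq, k)
    visible = tuple(x if not (p <= i < p + k) else None for i, x in enumerate(seq))
    return p, visible

# exhaustive check for k = 2 (n = 2! + 1 = 3): all obscured views are distinct
k = 2
n = factorial(k) + k - 1
views = {obscure(seq, k) for seq in permutations(range(1, n + 1))}
print(len(views))  # 6 distinct views for the 6 permutations
```

Since all <span class="math-container">$3!=6$</span> permutations produce distinct obscured views, the illusionist can invert the map for <span class="math-container">$k=2$</span>.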
| <p><strong>NOTE</strong>: I found counterexamples to the explicit <span class="math-container">$f$</span> I initially posted. I removed it but am leaving the rest of the answer up as a partial solution.</p>
<hr/>
<h2>Notation and Remarks</h2>
<p>Let <span class="math-container">$S_n$</span> denote the set of permutations of length <span class="math-container">$n$</span> and let <span class="math-container">$C_{n,k}$</span> be the set of covered permutations. For example, the permutation <span class="math-container">$12345678 \in S_8$</span> and <span class="math-container">$123\cdot\cdot\cdot78 \in C_{8,3}$</span>. For brevity I often drop the subscripts.</p>
<p>The act of covering gives us a relation <span class="math-container">$\sim$</span> between the two sets, i.e. we say that <span class="math-container">$\pi \in S$</span> and <span class="math-container">$c \in C$</span> are <em>compatible</em> if <span class="math-container">$c$</span> is a covering of <span class="math-container">$\pi$</span>. We can visualize this compatibility relation as a bipartite graph. For <span class="math-container">$k=2$</span> and <span class="math-container">$n=3$</span> we have:</p>
<p><span class="math-container">$ \qquad \qquad \qquad \qquad \qquad \qquad $</span>
<img src="https://i.sstatic.net/ukeZZm.png" width="192" height="240"></p>
<p>We need to find an injective function <span class="math-container">$f : S_n \rightarrow C_{n,k}$</span> such that <span class="math-container">$\pi$</span> and <span class="math-container">$f(\pi)$</span> are always compatible. The assistant performs the function <span class="math-container">$f$</span> and the illusionist performs the inverse <span class="math-container">$f^{-1}$</span>. As per the problem requirements, both <span class="math-container">$f$</span> and <span class="math-container">$f^{-1}$</span> need to be computable in poly(n) time for a given input. We note that given such an <span class="math-container">$f$</span>, computing <span class="math-container">$f^{-1}$</span> is straightforward: given a covering <span class="math-container">$c$</span>, we consider the compatible permutations. Among these, we choose the <span class="math-container">$\pi$</span> such that <span class="math-container">$f(\pi) = c$</span>.</p>
<p>For <span class="math-container">$n = k! + k - 1$</span> we note that <span class="math-container">$|S_n| = |C_{n,k}| = n!\,$</span>. We also note that each permutation is compatible with exactly <span class="math-container">$k!$</span> coverings and each covering is compatible with exactly <span class="math-container">$k!$</span> permutations. As OP mentioned, the Hall marriage theorem thus implies that a solution exists. We can find such a solution using a maximum matching algorithm on the bipartite graph. This is how @orlp found a solution for <span class="math-container">$k=3$</span> in the comments. However, the maximum matching algorithm computes <span class="math-container">$f$</span> for every permutation in poly(n!) time, where we instead need to compute <span class="math-container">$f$</span> for a single permutation in poly(n) time.</p>
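<p>For tiny <span class="math-container">$k$</span> the maximum-matching computation mentioned above can be carried out directly; here is a sketch of mine (Kuhn's augmenting-path algorithm standing in for a generic matching routine) that also confirms the degree counts <span class="math-container">$n-k+1$</span> and <span class="math-container">$k!$</span>:</p>

```python
from collections import Counter
from itertools import permutations
from math import factorial

k = 2
n = factorial(k) + k - 1  # n = 3

perms = list(permutations(range(1, n + 1)))

def coverings(pi):
    # all ways of hiding k consecutive entries of pi (None marks the scarf)
    return [tuple(None if s <= i < s + k else x for i, x in enumerate(pi))
            for s in range(n - k + 1)]

adj = {pi: coverings(pi) for pi in perms}

# degree counts from the Hall argument: n-k+1 per permutation, k! per covering
deg = Counter(c for cs in adj.values() for c in cs)
assert all(len(cs) == n - k + 1 for cs in adj.values())
assert all(d == factorial(k) for d in deg.values())

# Kuhn's augmenting-path algorithm; match maps covering -> permutation
match = {}
def try_augment(pi, seen):
    for c in adj[pi]:
        if c not in seen:
            seen.add(c)
            if c not in match or try_augment(match[c], seen):
                match[c] = pi
                return True
    return False

for pi in perms:
    assert try_augment(pi, set())
print(len(match))  # a perfect matching: one covering per permutation
```

This is exactly the poly(<span class="math-container">$n!$</span>) computation described above; it is feasible for <span class="math-container">$k=2$</span> or <span class="math-container">$3$</span> but says nothing about a poly(<span class="math-container">$n$</span>) rule.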
|
matrices | <p>I'm studying for my exam of linear algebra.. I want to prove the following corollary:</p>
<blockquote>
<p>If $A$ is a symmetric positive definite matrix then each entry $a_{ii}> 0$, ie all the elements of the diagonal of the matrix are positive.</p>
</blockquote>
<p>My teacher suggested considering the unit vector "$e_i$", but I don't see how to use it. </p>
<p>Claim: $a_{ii} >0$ for each $i = 1, 2, \ldots, n$. For any $i$, define $x = (x_j)$ by $x_i =1$ and $x_j =0$ for $j\neq i$. Since $x \neq 0$, we get:</p>
<p>$0< x^TAx = a_{ii}$</p>
<p>But my teacher says my proof is ambiguous. How can I use the unit vector $e_1$ for the demonstration?</p>
| <p>Let <span class="math-container">$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$</span>, and so on, where <span class="math-container">$e_i$</span> is a vector of all zeros, except for a <span class="math-container">$1$</span> in the <span class="math-container">$i^{\mathrm{th}}$</span> place. Since <span class="math-container">$A$</span> is positive definite, then <span class="math-container">$x^T A x > 0$</span> for any non-zero vector <span class="math-container">$x \in \Bbb R^n$</span>. Then, <span class="math-container">$e_1^T A e_1 > 0$</span>, and likewise for <span class="math-container">$e_2, e_3$</span> and so on.</p>
<p>If the <span class="math-container">$i^{\mathrm{th}}$</span> diagonal entry of <span class="math-container">$A$</span> were not positive, i.e. <span class="math-container">$a_{ii} \leq 0$</span>, note that <span class="math-container">$e_i^T A e_i = \sum_{j,l} (e_i)_j \, a_{jl} \, (e_i)_l = a_{ii}$</span>, since <span class="math-container">$e_i$</span> has zeros everywhere but in the <span class="math-container">$i^{\rm th}$</span> spot.</p>
<p>Thus, what would happen if <span class="math-container">$a_{ii}$</span> was negative?</p>
| <p>I guess your teacher wanted a more structured answer.</p>
<p>Since <span class="math-container">$A$</span> is a positive definite <span class="math-container">$n \times n$</span> matrix,
<span class="math-container">$x^TAx > 0$</span> for all nonzero <span class="math-container">$x \in \mathbb{R}^n$</span>.</p>
<p>Now, the <span class="math-container">$j$</span>-th component of the unit vector <span class="math-container">$e_i$</span> is <span class="math-container">$(e_i)_j = \begin{cases} 1 \text{ if } i=j \\ 0 \text{ if } i \neq j \end{cases}$</span></p>
<p>Since the above condition for positive definiteness applies for all <span class="math-container">$x$</span>, <span class="math-container">$e_i^T A e_i >0 $</span>
<span class="math-container">$\implies a_{ii} > 0 $</span> after matrix multiplication.</p>
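<p>A minimal numerical illustration of the argument (the matrix below is just some example positive definite matrix):</p>

```python
def quad_form(A, x):
    # x^T A x for a matrix given as a list of rows
    n = len(A)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

def unit_vector(n, i):
    return [1 if j == i else 0 for j in range(n)]

# an example symmetric positive definite matrix (diagonally dominant)
A = [[5.0, 2.0, 1.0],
     [2.0, 4.0, 0.5],
     [1.0, 0.5, 3.0]]

for i in range(len(A)):
    e = unit_vector(len(A), i)
    assert quad_form(A, e) == A[i][i]  # e_i^T A e_i picks out a_ii
    assert quad_form(A, e) > 0        # hence each diagonal entry is positive
```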
|
probability | <p>The problem: I'm trying to find the probability of a random undirected graph being connected. I'm using the model $G(n,p)$, where there are at most $n(n-1) \over 2$ edges (no self-loops or duplicate edges) and each edge has a probability $p$ of existing. I found a simple formula online where $f(n)$ is the probability of $G(n,p)$ being connected. But apparently it's too trivial for the writer to explain the formula (it was just stated briefly).</p>
<p>The desired formula: $f(n) = 1-\sum\limits_{i=1}^{n-1}f(i){n-1 \choose i-1}(1-p)^{i(n-i)}$</p>
<p>My method is: Consider any vertex $v$. Then there's a probability ${n-1 \choose i}p^i(1-p)^{n-1-i}$ that it will have $i$ neighbours. After connecting $v$ to these $i$ neighbours, we contract them ($v$ and its neighbours) into a single connected component, so we are left with the problem of $n-i$ vertices (the connected component plus $n-i-1$ other "normal" vertices). </p>
<p>Except that now the probability of the vertex representing connected component being connected to any other vertex is $1-(1-p)^i$. So I introduced another parameter $s$ into the formula, giving us:</p>
<p>$g(n,s)=\sum\limits_{i=1}^{n-1}g(n-i,i){n-1 \choose i}q^i(1-q)^{n-1-i}$,</p>
<p>where $q=1-(1-p)^s$. Then $f(n)=g(n,1)$. But this is nowhere as simple as the mentioned formula, as it has an additional parameter...</p>
<p>Can someone explain how the formula $f(n)$ is obtained? Thanks!</p>
| <p>Here is one way to view the formula. First, note that it suffices to show that
$$
\sum_{i=1}^n {n-1 \choose i-1} f(i) (1-p)^{i(n-i)} = 1,
$$
since ${n-1 \choose n-1}f(n)(1-p)^{n(n-n)}=f(n).$ Let $v_1, v_2, \ldots, v_n$ be $n$ vertices. Trivially,
$$
1= \sum_{i=1}^n P(v_1 \text{ is in a component of size }i).
$$
Now let's consider why $$P(v_1 \text{ is in a component of size }i)={n-1\choose i-1}P(G(i,p) \text{ is connected}) (1-p)^{i(n-i)}.$$
If $\mathcal{A}$ is the set of all $i-1$ subsets of $\{v_2, v_3, \ldots, v_n\}$, then
\begin{align*}
P(v_1 \text{ in comp of size }i)&=\sum_{A \in \mathcal{A}}P(\text{the comp of }v_1\text{ is }\{v_1\}\cup A)\\&=\sum_{A \in \mathcal{A}}P(\{v_1\}\cup A \text{ is conn'd})P(\text{no edge btwn }A\cup\{v_1\} \& \ (A\cup \{v_1\})^c).
\end{align*}
This last equality is due to the fact that the edges within $\{v_1\}\cup A$ are independent of the edges from $A$ to $A^c$. There are precisely ${n-1 \choose i-1}$ elements in $\mathcal{A}$. The probability that $\{v_1\}\cup A$ is connected is equal to the probability that $G(1+|A|,p)$ is connected, which is $f(i)$. There are $i(n-i)$ missing edges from $\{v_1\}\cup A$ to $(\{v_1\}\cup A)^c$. </p>
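<p>The recurrence can also be sanity-checked numerically; here is a short sketch (names are mine), compared against the directly enumerated connectivity probability for <span class="math-container">$n=3$</span>:</p>

```python
from functools import lru_cache
from math import comb

def f(n, p):
    # probability that G(n, p) is connected, via the recurrence
    # f(n) = 1 - sum_{i=1}^{n-1} f(i) * C(n-1, i-1) * (1-p)^(i(n-i))
    @lru_cache(maxsize=None)
    def g(m):
        if m == 1:
            return 1.0
        return 1.0 - sum(g(i) * comb(m - 1, i - 1) * (1 - p) ** (i * (m - i))
                         for i in range(1, m))
    return g(n)

p = 0.5
assert abs(f(2, p) - p) < 1e-12                      # single possible edge
assert abs(f(3, p) - (3 * p**2 - 2 * p**3)) < 1e-12  # 3p^2(1-p) + p^3 directly
```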
| <p>What happens is that he/she sums over all cuts of the graph.
To avoid counting anything twice, you focus specifically on some specific vertex, calling it <span class="math-container">$1$</span>.</p>
<p>If the graph is disconnected, there is going to be a connected component of some size, which contains <span class="math-container">$1$</span>.</p>
<p>So you sum over the size this component might have, and with multiplicity <span class="math-container">$n-1 \choose i-1$</span> which is the number of ways to choose <span class="math-container">$1$</span>'s fellow set members. You want the component containing <span class="math-container">$1$</span> to be connected, so you multiply by <span class="math-container">$f(i)$</span>, but nothing in the component can be connected to something not there, so you multiply by <span class="math-container">$(1-p)^{i(n-i)}$</span>. The <span class="math-container">$(n-i)$</span> part of the graph can be connected or not, it doesn't matter. </p>
<p>A slight problem with the formula is that it is not numerically stable for small <span class="math-container">$p$</span> and large <span class="math-container">$n$</span>.</p>
|
number-theory | <p>The continued fraction of this series exhibits a truly crazy pattern and I found no reference for it so far. We have:</p>
<p><span class="math-container">$$\sum_{k=1}^\infty \frac{1}{(2^k)!}=0.5416914682540160487415778421$$</span></p>
<p>But the continued fraction is just beautiful:</p>
<p><code>[1, 1, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 1, 1, 601080389, 2, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 1, 1, 1832624140942590533, 2, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 2, 601080389, 1, 1, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 1, 1, 23951146041928082866135587776380551749, 2, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 1, 1, 601080389, 2, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 2, 1832624140942590533, 1, 1, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 2, 601080389, 1, 1, 5, 2, 69, 1, 1, 5, 2,...]</code></p>
<p>All of these large numbers are not just random - they have a simple closed form:</p>
<p><span class="math-container">$$A_n= \binom{2^n}{2^{n-1}} -1$$</span></p>
<p><span class="math-container">$$A_1=1$$</span></p>
<p><span class="math-container">$$A_2=5$$</span></p>
<p><span class="math-container">$$A_3=69$$</span></p>
<p><span class="math-container">$$A_4=12869$$</span></p>
<p><span class="math-container">$$A_5=601080389$$</span></p>
<p>And so on. This sequence is not in OEIS, only the larger sequence is, which contains this one as a subsequence <a href="https://oeis.org/A014495" rel="noreferrer">https://oeis.org/A014495</a></p>
<blockquote>
<p>What is the explanation for this?</p>
<p>Is there a regular pattern in this continued fraction (in the positions of numbers)?</p>
</blockquote>
<p>Is there generalizations for other sums of the form <span class="math-container">$\sum_{k=1}^\infty \frac{1}{(a^k)!}$</span>?</p>
<hr />
<p><strong>Edit</strong></p>
<p>I think a good move will be to rename the strings of small numbers:</p>
<p><span class="math-container">$$a=1, 1, 5, 2,\qquad b=1, 1, 5, 1, 1,\qquad c=2,5,1,1,\qquad d=2, 5, 2$$</span></p>
<p>As a side note if we could set <span class="math-container">$1,1=2$</span> then all these strings will be the same.</p>
<p>Now we rewrite the sequence. I will denote <span class="math-container">$A_n$</span> by just their indices <span class="math-container">$n$</span>:</p>
<blockquote>
<p><span class="math-container">$$[a, 3, b, 4, c, 3, c, 5, d, 3, a, 4, b, 3, c, 6, d, 3, b, 4, c, 3, d, 5, a, 3, a, 4, b, 3, c, 7, \\ d, 3, b, 4, c, 3, c, 5, d, 3, a, 4, b, 3, d, 6, a, 3, b, 4, c, 3, d, 5, a, 3, a,...]$$</span></p>
</blockquote>
<p><span class="math-container">$$[a3b4c3c5d3a4b3c6d3b4c3d5a3a4b3c7d3b4c3c5d3a4b3d6a3b4c3d5a3a,...]$$</span></p>
<p>Now the new large numbers <span class="math-container">$A_n$</span> appear at positions <span class="math-container">$2^n$</span>. And the positions of the same numbers are in a simple arithmetic progression with a difference of <span class="math-container">$2^n$</span> as well.</p>
<p>Now we only have to figure out the pattern (if any exists) for <span class="math-container">$a,b,c,d$</span>.</p>
<blockquote>
<p>The <span class="math-container">$10~000$</span> terms of the continued fraction are uploaded at github <a href="http://gist.github.com/anonymous/20d6f0e773a2391dc01815e239eacc79" rel="noreferrer">here</a>.</p>
</blockquote>
<hr />
<p>I also link my <a href="https://math.stackexchange.com/q/1856226/269624">related question</a>; from the information there we can conclude that the series above provides the greedy-algorithm Egyptian fraction expansion of the number, and that the number is irrational by the theorem stated in <a href="http://www.jstor.org/stable/2305906" rel="noreferrer">this paper</a>.</p>
| <p>Actually your pattern is true and it is relatively easy to prove. Most of it can be found in an article by Henry Cohn in Acta Arithmetica (1996) ("Symmetry and specializability in continued fractions"), where he finds similar patterns for other kinds of continued fractions such as $\sum \frac{1}{10^{n!}}$. Curiously he doesn't mention your particular series although his method applies directly to it.</p>
<p>Let $[a_0,a_1,a_2,\dots,a_m]$ a continued fraction and as usual let
$$ \frac{p_m}{q_m} = [a_0,a_1,a_2,\dots,a_m] $$
we use this lemma from the mentioned article (the proof is not difficult and only uses elementary facts about continued fractions): </p>
<p><strong>[Folding Lemma]</strong>
$$ \frac{p_m}{q_m} + \frac{(-1)^m}{xq_m^2} = [a_0,a_1,a_2,\dots,a_m,x,-a_m,-a_{m-1},\dots,-a_2,-a_1] $$</p>
<p>This involves negative integers but it can be easily transformed into a similar expression involving only positive numbers using the fact that for any continued fraction:
$$ [\dots, a, -\beta] = [\dots, a-1, 1, \beta-1] $$
So
$$ \frac{p_m}{q_m} + \frac{(-1)^m}{xq_m^2} = [a_0,a_1,a_2,\dots,a_m,x-1,1,a_m-1,a_{m-1},\dots,a_2,a_1] $$</p>
<p>With all this in hand, write the $n$th partial sum of your series as
$$ S_n = \sum_{k=1}^n \frac{1}{2^k!} = [0,a_1,a_2,\dots,a_m] = \frac{p_m}{q_m}$$</p>
<p>Where we can always take $m$ even (i.e. if $m$ is odd and $a_m>1$ then we can consider instead the continued fraction $[0,a_1,a_2,\dots,a_m-1,1]$ and so on). </p>
<p>Now $q_m = 2^n!$; we see this by induction on $n$: it is obvious for $n=1$, and if $S_{n-1} = P/2^{n-1}!$ then
$$ S_n = \frac{P}{2^{n-1}!} + \frac{1}{2^n!} = \frac{P (2^n!/2^{n-1}!) + 1}{2^n!} $$
now any common factor of the numerator and the denominator has to be a factor of $2^{n-1}!$ that also divides $P$, but this is impossible as the two are coprime, so $q_m = 2^n!$ and we are done. </p>
<p>Using the "positive" form of the folding lemma with $x = \binom{2^{n+1}}{2^n}$ we get:</p>
<p>$$ \frac{p_m}{q_m} + \frac{(-1)^m}{\binom{2^{n+1}}{2^n}(2^n!)^2} =
\frac{p_m}{q_m} + \frac{1}{2^n!} =
[0,a_1,a_2,\dots,a_m,\binom{2^{n+1}}{2^n}-1,1,a_m-1,a_{m-1},\dots,a_1] $$</p>
<p>And we get the "shape" of the continued fraction and your $A_n$. Let's see several steps: </p>
<p>We start with the first term, which is
$$ \frac{1}{2} = [0,2] $$
as $m$ is odd we change and use instead the continued fraction
$$ \frac{1}{2} = [0,1,1] $$
and apply the last formula getting
$$ \frac{1}{2}+\frac{1}{2^2!} = [0,1,1,5,1,0,1] $$
We can dispose of the zeros using the fact that for any continued fraction:
$$ [\dots, a, 0, b, \dots] = [\dots, a+b, \dots ] $$
so
$$ \frac{1}{2}+\frac{1}{2^2!} = [0,1,1,5,2] $$
this time $m$ is even so we apply again the formula getting
$$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!} = [0,1,1,5,2,69,1,1,5,1,1] $$
again $m$ is even (and it will remain even from here on, as is easy to infer) so we apply the formula again, getting as the next term
$$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!}+\frac{1}{2^4!} = [0,1,1,5,2,69,1,1,5,1,1,12869,1,0,1,5,1,1,69,2,5,1,1] $$
and we reduce it using the zero trick:
$$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!}+\frac{1}{2^4!} = [0,1,1,5,2,69,1,1,5,1,1,12869,2,5,1,1,69,2,5,1,1] $$</p>
<p>From here on it is easy to see that the obtained continued fraction always has an even number of terms and we always have to remove the zero, leaving a continued fraction ending again in 1,1. So the rule to obtain the continued fraction out to an arbitrary number of terms is to repeat the following step: let the shape of the last continued fraction be $[0,1,1,b_1,\dots,b_k,1,1]$; then the next continued fraction will be
$$[0,1,1,b_1,\dots,b_k,1,1,A_n,2,b_k,\dots,b_1,1,1 ]$$
from this you can easily derive the patterns you have found for the positions of appearance of the different integers. </p>
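<p>The derivation is easy to confirm with exact rational arithmetic; here is a sketch of mine. Note that the Euclidean algorithm returns the canonical expansion, in which the trailing $1,1$ is merged into a $2$:</p>

```python
from fractions import Fraction
from math import comb, factorial

def cf(x):
    # canonical continued fraction of a rational, via the Euclidean algorithm
    terms = []
    while True:
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            return terms
        x = 1 / x

# S_4 = 1/2! + 1/4! + 1/8! + 1/16!
S4 = sum(Fraction(1, factorial(2 ** k)) for k in range(1, 5))

expected = [0, 1, 1, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 2]
assert cf(S4) == expected

# the large partial quotients are A_n = C(2^n, 2^(n-1)) - 1
assert [comb(2 ** n, 2 ** (n - 1)) - 1 for n in (2, 3, 4)] == [5, 69, 12869]
```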
| <p>I have computed $10~000$ entries of the continued fraction, using Mathematica (the number itself was evaluated with $100~000$ digit precision).</p>
<p>The results show that the pattern is very simple for the most part.</p>
<p>First, we denote again:</p>
<p>$$A_n= \binom{2^n}{2^{n-1}} -1$$</p>
<p>The computed CF contains $A_n$ up to $n=13$, and the formula is numerically confirmed.</p>
<p>Now the positions of $A_n$ go like this ($P_n$ is the position of $A_n$ in the list of all CF entries):</p>
<p>$$P_2=P(5)=[3,8,13,18,23,28,\dots]$$</p>
<p>$$P_3=P(69)=[5,16,25,36,45,56,\dots]$$</p>
<p>$$P_4=[11,30,51,70,91,110,\dots]$$</p>
<p>$$P_5=[21,60,101,140,181,220,\dots]$$</p>
<p>$$P_6=[41,120,201,280,361,440,\dots]$$</p>
<p>But all these are just a combination of two <strong>arithmetic progressions</strong> for even and odd terms!</p>
<p>The first two terms are a little off, but the rest goes exactly like $P_4,P_5$, meaning for $n \geq 4$ we can write the general position for $A_n$ ($k=0,1,2,\dots$):</p>
<blockquote>
<p>$$p_k(A_n)= \begin{cases} 5 \cdot 2^{n-3}(1+2 k)+1,\qquad k \text{ even} \\ 5 \cdot 2^{n-3}(1+2 k),\qquad \qquad k \text{ odd} \end{cases}$$</p>
</blockquote>
<p>As special, different cases, we have:</p>
<blockquote>
<p>$$p_k(5)=3+5 k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_k(69)= \begin{cases} 2(3+5 k)-1,\qquad k \text{ even} \\ 2(3+5 k),\qquad \qquad k \text{ odd} \end{cases}$$</p>
</blockquote>
<hr>
<p>For $P_1=1$ and for $2$ I can't see any definite pattern so far.</p>
<hr>
<blockquote>
<p>Basically, we now know the explicit expression for every CF entry and all of its positions in the list, except for entries $1$ and $2$.</p>
</blockquote>
<p>It's enough now to consider the positions of $2$, then we just fill the rest of the list with $1$. The positions of $2$ start like this:</p>
<p><code>[4, 12, 17, 22, 24, 29, 37, 42, 44, 52, 57, 59, 64, 69, 77, 82, 84, 92, 97, 102, 104, 109, 117, 119, 124, 132, 137, 139, 144, 149, 157, 162, 164, 172, 177, 182, 184, 189, 197, 202, 204, 212, 217, 219, 224, 229, 237, 239, 244, 252, 257, 262, 264, 269, 277, 279, 284, 292, 297, 299, 304, 309, 317, 322, 324, 332, 337, 342, 344, 349, 357, 362, 364, 372, 377, 379, 384, 389, 397, 402, 404, 412, 417, 422, 424, 429, 437, 439, 444, 452, 457, 459, 464, 469, 477, 479, 484, 492, 497, 502, 504, 509, 517, 522, 524, 532, 537, 539, 544, 549, 557, 559, 564, 572, 577, 582, 584, 589, 597,...]</code></p>
<p>So far I have found four uninterrupted patterns for $2$:</p>
<p>$$p_{1k}(2)=4+20k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_{2k}(2)=17+20k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_{3k}(2)=29+40k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_{4k}(2)=12+40k,\qquad k=0,1,2,\dots$$</p>
<p><strong>Edit</strong></p>
<p>Discounting these four progressions, the rest of the sequence is very close to $20k$, but some numbers are $20k+2$ while others are $20k-1$, with no apparent pattern:</p>
<p><code>[22,42,59,82,102,119,139,162,182,202,219,239,262,279,299,322,342,362,379,402,422,439,459,479,502,522,539,559,582,599,619,642,662,682,699,722,742,759,779,802,822,842,859,879,902,919,939,959,982,1002,1019,1042,1062,1079,1099,1119,1142,1162,1179,1199,1222,1239,1259,1282,1302,1322,1339,1362,1382,1399,1419,1442,1462,1482,1499,1519,1542,1559,1579,1602,1622,1642,1659,1682,1702,1719,1739,1759,1782,1802,1819,1839,1862,1879,1899,1919,1942,1962,1979,2002,2022,2039,2059,2082,2102,2122,2139,2159,2182,2199,2219,2239,2262,2282,2299,2322,2342,2359,2379,2399,2422,2442,2459,2479,2502,2519,2539,2562,2582,2602,2619,2642,2662,2679,2699,2722,2742,2762,2779,2799,2822,2839,2859,2882,2902,2922,2939,2962,2982,2999,3019,3039,3062,3082,3099,3119,3142,3159,3179,3202,3222,3242,3259,3282,3302,3319,3339,3362,3382,3402,3419,3439,3462,3479,3499,3519,3542,3562,3579,3602,3622,3639,3659,3679,3702,3722,3739,3759,3782,3799,3819,3839,3862,3882,3899,3922,3942,3959,3979,4002,4022,4042,4059,4079,4102,4119,4139,4162,4182,4202,4219,4242,4262,4279,4299,4319,4342,4362,4379,4399,4422,4439,4459,4479,4502,4522,4539,4562,4582,4599,4619,4642,4662,4682,4699,4719,4742,4759,4779,4799,4822,4842,4859,4882,4902,4919,4939,4959,4982,5002,...]</code></p>
<hr>
<blockquote>
<p>The $10~000$ terms of the continued fraction are uploaded at github <a href="http://gist.github.com/anonymous/20d6f0e773a2391dc01815e239eacc79">here</a>. You can check my conjectures or try to obtain the full pattern.</p>
</blockquote>
<hr>
<blockquote>
<p>I would also like some hints and outlines for a proof of the above conjectures.</p>
</blockquote>
<p>I understand that it's quite likely that any of the patterns I found breaks down at some large $k$.</p>
|
combinatorics | <blockquote>
<p>All numbers <span class="math-container">$1$</span> to <span class="math-container">$155$</span> are written on a blackboard, one time each. We randomly choose two numbers and delete them, by replacing one of them with their product plus their sum. We repeat the process until there is only one number left. What is the average value of this number?</p>
</blockquote>
<p>I don't know how to approach it. For two numbers, <span class="math-container">$1$</span> and <span class="math-container">$2$</span>, the only possible result is <span class="math-container">$1\cdot 2+1+2=5$</span>.
For three numbers, <span class="math-container">$1, 2$</span> and <span class="math-container">$3$</span>, we can replace <span class="math-container">$1$</span> and <span class="math-container">$2$</span> with <span class="math-container">$5$</span> and then <span class="math-container">$3$</span> and <span class="math-container">$5$</span> with <span class="math-container">$23$</span>; or
<span class="math-container">$1$</span> and <span class="math-container">$3$</span> with <span class="math-container">$7$</span> and then <span class="math-container">$2$</span> and <span class="math-container">$7$</span> with <span class="math-container">$23$</span>; or
<span class="math-container">$2$</span> and <span class="math-container">$3$</span> with <span class="math-container">$11$</span> and then <span class="math-container">$1$</span> and <span class="math-container">$11$</span> with <span class="math-container">$23$</span>.
So no matter which two numbers we choose, the final number is the same. Does this lead us anywhere?</p>
| <p>Claim: if <span class="math-container">$a_1,...,a_n$</span> are the <span class="math-container">$n$</span> numbers on the board then after n steps we shall be left with <span class="math-container">$(1+a_1)...(1+a_n)-1$</span>.</p>
<p>Proof: <em>induct on <span class="math-container">$n$</span></em>. Case <span class="math-container">$n=1$</span> is true, so assume the proposition holds for a fixed <span class="math-container">$n$</span> and any <span class="math-container">$a_1$</span>,...<span class="math-container">$a_n$</span>. Consider now <span class="math-container">$n+1$</span> numbers <span class="math-container">$a_1$</span>,...,<span class="math-container">$a_{n+1}$</span>. Suppose that at the first step we choose <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span>. We will be left with <span class="math-container">$ n$</span> numbers <span class="math-container">$b_1=a_1+a_2+a_1a_2$</span>, <span class="math-container">$b_2=a_3$</span>,...,<span class="math-container">$b_n=a_{n+1}$</span>, so by the induction hypothesis at the end we will be left with <span class="math-container">$(b_1+1)...(b_n+1)-1=(a_1+1)...(a_{n+1}+1)-1$</span> as needed, because <span class="math-container">$b_1+1=a_1+a_2+a_1a_2+1=(a_1+1)(a_2+1)$</span></p>
<p>Where did I get the idea for the proof? From the <span class="math-container">$n=2$</span> case: for <span class="math-container">$a_1,a_2$</span> you are left with <span class="math-container">$a_1+a_2+a_1a_2=(1+a_1)(1+a_2)-1$</span>, and I also noted that this formula generalises to <span class="math-container">$n=3$</span>.</p>
<p>So in your case we will be left with <span class="math-container">$156!-1=1\times 2\times...\times 156 -1$</span></p>
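<p>For anyone who wants to see the invariance numerically: a short Python simulation (my sketch, not part of the proof) plays the game in random orders and always lands on <span class="math-container">$(1+a_1)\cdots(1+a_n)-1$</span>.</p>

```python
import random
from math import prod

def play(nums):
    """Repeatedly replace two randomly chosen numbers x, y by x + y + x*y."""
    nums = list(nums)
    while len(nums) > 1:
        i, j = random.sample(range(len(nums)), 2)
        x, y = nums[i], nums[j]
        for idx in sorted((i, j), reverse=True):
            nums.pop(idx)
        nums.append(x + y + x * y)
    return nums[0]

start = list(range(1, 8))                  # a small instance: numbers 1..7
expected = prod(n + 1 for n in start) - 1  # claim: (1+a_1)...(1+a_n) - 1
assert all(play(start) == expected for _ in range(100))
print(expected)  # 8! - 1 = 40319
```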
| <p>Another way to think of Sorin's observation, without appealing to induction explicitly:</p>
<p>Suppose your original numbers (both the original 155 numbers and later results) are written in <em>white</em> chalk. Now above each <em>white</em> number write that number plus one, in <em>red</em> chalk. Write new red companions to each new white number, and erase the red numbers when their white partners go away.</p>
<p>When we erase <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and write <span class="math-container">$x+y+xy$</span>, the new red number is <span class="math-container">$x+y+xy+1=(x+1)(y+1)$</span>, exactly the product of the two red companions we're erasing.</p>
<p>So we can reformulate the entire game as:</p>
<blockquote>
<p>Write in red the numbers from <span class="math-container">$2$</span> to <span class="math-container">$156$</span>. Keep erasing two numbers and writing their product instead. At the end when you have <em>one</em> red number left, subtract one and write the result in white.</p>
</blockquote>
<p>Since the order of factors is immaterial, the result must be <span class="math-container">$2\cdot 3\cdots 156-1$</span>.</p>
|
matrices | <p>If I multiply two numbers, say $3$ and $5$, I know it means add $3$ to itself $5$ times or add $5$ to itself $3$ times. </p>
<p>But if I multiply two matrices, what does it mean? I can't think of it in terms of repeated addition. </p>
<blockquote>
<p>What is the intuitive way of thinking about multiplication of matrices?</p>
</blockquote>
| <p>Matrix "multiplication" is the composition of two linear functions. The composition of two linear functions is again a linear function.</p>
<p>If one linear function is represented by the matrix A and another by B, then AB represents their composition, and BA their composition in the reverse order.</p>
<p>That's one way of thinking of it, and it explains why matrix multiplication is defined the way it is instead of componentwise.</p>
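<p>A small NumPy illustration of this point (my sketch, not part of the original answer): applying the function represented by B and then the one represented by A is the same as applying the single matrix AB.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, (3, 3))   # matrix of the linear function f
B = rng.integers(-5, 6, (3, 3))   # matrix of the linear function g
x = rng.integers(-5, 6, 3)        # an arbitrary vector

f = lambda v: A @ v
g = lambda v: B @ v

# f after g is the linear function with matrix AB; g after f has matrix BA.
assert np.array_equal(f(g(x)), (A @ B) @ x)
assert np.array_equal(g(f(x)), (B @ A) @ x)
```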
| <p>Asking why matrix multiplication isn't just componentwise multiplication is an excellent question: in fact, componentwise multiplication is in some sense the most "natural" generalization of real multiplication to matrices: it satisfies all of the axioms you would expect (associativity, commutativity, existence of identity and inverses (for matrices with no 0 entries), distributivity over addition).</p>
<p>The usual matrix multiplication in fact "gives up" commutativity; we all know that in general <span class="math-container">$AB \neq BA$</span> while for real numbers <span class="math-container">$ab = ba$</span>. What do we gain? Invariance with respect to change of basis. If <span class="math-container">$P$</span> is an invertible matrix,</p>
<p><span class="math-container">$$P^{-1}AP + P^{-1}BP = P^{-1}(A+B)P$$</span>
<span class="math-container">$$(P^{-1}AP) (P^{-1}BP) = P^{-1}(AB)P$$</span>
In other words, it doesn't matter what basis you use to represent the matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, no matter what choice you make their sum and product is the same.</p>
<p>It is easy to see by trying an example that the second property does not hold for multiplication defined component-wise. This is because the inverse of a change of basis <span class="math-container">$P^{-1}$</span> no longer corresponds to the multiplicative inverse of <span class="math-container">$P$</span>.</p>
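<p>Both claims are easy to check numerically. A NumPy sketch (mine, not from the original answer) with a random change of basis <span class="math-container">$P$</span>:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))      # a random P is almost surely invertible
Pinv = np.linalg.inv(P)

conj = lambda M: Pinv @ M @ P        # representation of M in the new basis

# The matrix product respects the change of basis ...
assert np.allclose(conj(A) @ conj(B), conj(A @ B))
# ... but the componentwise (Hadamard) product does not:
assert not np.allclose(conj(A) * conj(B), conj(A * B))
```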
|
game-theory | <p>Let's say you have two sequences of non-negative integers, each of length <span class="math-container">$n$</span>:</p>
<p>i.e. <span class="math-container">$(a_1,a_2,...,a_n)$</span> and <span class="math-container">$(b_1,b_2,...,b_n)$</span>, such that
<span class="math-container">$\max(a_i) < k$</span>
and
<span class="math-container">$\max(b_i) < k$</span>.</p>
<p><strong>Game rule:</strong></p>
<p>You can edit the sequences by applying <span class="math-container">$\mathrm{swap}(a_i, b_i)$</span>, i.e. exchanging the <span class="math-container">$i$</span>-th entries of the two sequences, for any <span class="math-container">$1 ≤ i ≤ n$</span>.</p>
<p><strong>Goal:</strong></p>
<p><span class="math-container">$a_i ≤ a_j$</span> for all <span class="math-container">$i ≤ j$</span>
and
<span class="math-container">$b_i ≤b_j$</span> for all <span class="math-container">$i ≤ j$</span></p>
<p>But not every initial pair of sequences <span class="math-container">$a$</span> and <span class="math-container">$b$</span> can be solved. For example, <span class="math-container">$(2,0)$</span> and <span class="math-container">$(0,1)$</span> form a pair that can't be solved.</p>
<p>Now, given <span class="math-container">$n$</span> and <span class="math-container">$k$</span>, count the number of different pairs of initial sequences <span class="math-container">$(a,b)$</span> that can be solved with the game described above.</p>
<p><strong>Example:</strong></p>
<p>For <span class="math-container">$n=1$</span>, <span class="math-container">$k=2$</span>,
the solvable cases are <span class="math-container">$[(0),(0)],[(0),(1)],[(1),(0)],[(1),(1)]$</span>.
Hence the answer is <span class="math-container">$4$</span>.</p>
| <p><em>Disclaimer:</em> This is just a computational answer but it gives a formula for at least for the case <span class="math-container">$k=2$</span> (and other small cases too). And it's too long for a comment.</p>
<p>I get these values for <span class="math-container">$f(k, n)$</span>, the number of solvable sequence pairs:</p>
<p><span class="math-container">$$
\displaystyle \left(\begin{array}{rrrrrrrrr}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
4 & 11 & 26 & 57 & 120 & 247 & 502 & 1013 & 2036 \\
9 & 46 & 180 & 603 & 1827 & 5164 & 13878 & 35905 & 90189 \\
16 & 130 & 750 & 3507 & 14224 & 52068 & 176430 & 562925 & 1711776 \\
25 & 295 & 2345 & 14518 & 75558 & 346050 & 1436820 & 5520295 & 19916039 \\
36 & 581 & 6076 & 48006 & 311136 & 1739166 & 8665866 & 39387491 & 166049884 \\
49 & 1036 & 13776 & 135114 & 1065834 & 7132620 & 41957058 & 222437215 & 1082436355 \\
64 & 1716 & 28260 & 336666 & 3173808 & 25034724 & 171535650 & 1048436675 & 5829137600 \\
81 & 2685 & 53625 & 762399 & 8461167 & 77655435 & 612837225 & 4275850150 & 26924807910
\end{array}\right)
$$</span></p>
<p>Calculation done with a Markov chain (<a href="https://pastebin.com/SyY3UZij" rel="nofollow noreferrer">code here</a>).</p>
<p><strong>The idea:</strong> We start building the two sequences by adding a term to each at a time. We keep as a state <span class="math-container">$(m, M)$</span> where <span class="math-container">$m$</span> is the maximum of the minimums and <span class="math-container">$M$</span> is the maximum of the maximums in the sequences so far. An example: let the sequences be</p>
<p><span class="math-container">$$
(2,1,3,7) \\
(0,2,2,4)
$$</span></p>
<p>Then the path of the <span class="math-container">$(m, M)$</span>'s is <span class="math-container">$(0,0), (0,2), (1,2), (2,3), (4,7)$</span>. It always starts from <span class="math-container">$(0,0)$</span> (when the sequences are empty). Then we append <span class="math-container">$x_1=2, x_2=0$</span>, and we get <span class="math-container">$m=\min(x_1, x_2) = 0$</span> and <span class="math-container">$M=\max(x_1, x_2) = 2$</span>. Then append <span class="math-container">$x_1=1, x_2=2$</span> (note that these need to satisfy <span class="math-container">$\min(x_1, x_2) \geq m$</span> and <span class="math-container">$\max(x_1, x_2) \geq M$</span>) and get <span class="math-container">$(m,M) = (1,2)$</span>. And so on.</p>
<p>This leads to the following: there is a transition from <span class="math-container">$(m_1, M_1)$</span> to <span class="math-container">$(m_2, M_2)$</span> if <span class="math-container">$m_1\leq m_2$</span> and <span class="math-container">$M_1\leq M_2$</span>. And it is of weight <span class="math-container">$1$</span> if <span class="math-container">$m_2=M_2$</span>, because in that case only <span class="math-container">$x_1=x_2$</span> is possible, otherwise we can flip them and get weight <span class="math-container">$2$</span> transition (weight meaning the number of ways that lead to that transition). Also the "transition" -matrix <span class="math-container">$A$</span> has a nice block structure with respect to the blocks determined by value of <span class="math-container">$m$</span> in the state. And I wonder if that can be used for faster calculation of <span class="math-container">$f(k, n) = e_1^T A^n \bf 1$</span>.</p>
<p>For example for <span class="math-container">$k=3$</span> we get the following directed graph (with edge labels indicating how many pairs of numbers lead to that transition)</p>
<p><a href="https://i.sstatic.net/7KGEL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7KGEL.png" alt="enter image description here" /></a></p>
<p>To get the value <span class="math-container">$f(k, n)$</span> we count the number of <span class="math-container">$n$</span>-walks on the graph starting from <span class="math-container">$(0,0)$</span>. (Regard as there being multiple edges between the vertices when edge label <span class="math-container">$>1$</span>)</p>
<p>For <span class="math-container">$k=2$</span> the graph is particularly simple and we get that <span class="math-container">$f(2, n)$</span> is the sum of the first row of</p>
<p><span class="math-container">$$
\displaystyle \left(\begin{array}{rrr}
1 & 2 & 1 \\
0 & 2 & 1 \\
0 & 0 & 1
\end{array}\right)^n
$$</span></p>
<p>and that equals <span class="math-container">$2^{n+2}-n-3$</span>.</p>
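<p>The closed form is easy to verify numerically; a NumPy sketch (mine, not part of the original computation):</p>

```python
import numpy as np

# Transition matrix for k = 2, as above.
M = np.array([[1, 2, 1],
              [0, 2, 1],
              [0, 0, 1]])

def f2(n):
    """f(2, n): the sum of the first row of M^n."""
    return int(np.linalg.matrix_power(M, n)[0].sum())

for n in range(1, 10):
    assert f2(n) == 2 ** (n + 2) - n - 3
print([f2(n) for n in range(1, 6)])  # [4, 11, 26, 57, 120], the k = 2 row of the table
```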
<p>Finding the Jordan normal form for the involved matrix (<a href="https://pastebin.com/p649U2Sr" rel="nofollow noreferrer">code here</a>), I was able to find the formulas</p>
<p><span class="math-container">$$\begin{align}
f(3, n) &= \frac{1}{2}(n+2)(2^{n+2}(n-1)+n+5) \\
f(4, n) &= \frac{1}{6}(n+2)(n+3)(2^{n}(2n^2-2n+8) -n -7 ) \\
f(5, n) &= \frac{1}{72}(n+2)(n+3)(n+4)(2^n(2n^3+22n-24)+3n+27) \\
f(6,n) &= \frac{1}{720} (n + 2) \dots (n + 5) (2^n(n^{4} + 2n^{3} + 23 n^{2} - 26 n +72) - 6 n - 66)
\end{align}$$</span></p>
<p>These seem to indicate some sort of pattern.</p>
<p>For <span class="math-container">$k=7,8,\dots, 12$</span> we have that <span class="math-container">$\frac{f(k, n)}{n+k-1\choose k-2}$</span> is</p>
<p><span class="math-container">$$ \begin{aligned}
&\frac{1}{180} (2^n(n^{5} + 5 n^{4} + 45 n^{3} - 5 n^{2} + 314 n -360) + 30 n + 390) \\
&\frac{1}{1260} (2^n(n^{6} + 9 n^{5} + 85 n^{4} + 135 n^{3} + 994 n^{2} - 1224 n + 2880) - 180 n - 2700) \\
&\frac{1}{10080} (2^n(n^{7} + 14 n^{6} + 154 n^{5} + 560 n^{4} + 2989 n^{3} - 574 n^{2} + 17016 n - 20160 ) + 1260 n + 21420) \\
&\frac{1}{90720} (2^n(n^{8} + 20 n^{7} + 266 n^{6} + 1568 n^{5} + 8729 n^{4} + 11900 n^{3} + 71644 n^{2} - 94128 n + 201600) - 10080 n - 191520) \\
&\frac{1}{907200} (2^n(n^{9} + 27 n^{8} + 438 n^{7} + 3654 n^{6} + 23961 n^{5} + 71883 n^{4} + 294272 n^{3} - 75564 n^{2} + 1495728 n - 1814400) + 90720 n + 1905120) \\
&\frac{1}{9979200} (2^n(n^{10} + 35 n^{9} + 690 n^{8} + 7590 n^{7} + 60753 n^{6} + 281715 n^{5} + 1193660 n^{4} + 1453060 n^{3} + 7816896 n^{2} - 10814400 n + 21772800) - 907200 n - 20865600)
\end{aligned}
$$</span></p>
<p><strong>EDIT:</strong></p>
<p>Looking at the block structure of the transition matrix, here's for example <span class="math-container">$k=4$</span>:</p>
<p><span class="math-container">$$A_4 = \left(\begin{array}{rrrr|rrr|rr|r}
1 & 2 & 2 & 2 & 1 & 2 & 2 & 1 & 2 & 1 \\
0 & 2 & 2 & 2 & 1 & 2 & 2 & 1 & 2 & 1 \\
0 & 0 & 2 & 2 & 0 & 2 & 2 & 1 & 2 & 1 \\
0 & 0 & 0 & 2 & 0 & 0 & 2 & 0 & 2 & 1 \\
\hline
0 & 0 & 0 & 0 & 1 & 2 & 2 & 1 & 2 & 1 \\
0 & 0 & 0 & 0 & 0 & 2 & 2 & 1 & 2 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 2 & 1 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 1 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}\right)
$$</span></p>
<p>I was able to come up with <a href="https://pastebin.com/ydLFEnAX" rel="nofollow noreferrer">this <span class="math-container">$O(nk^2)$</span> algorithm</a>.</p>
<p>We need to find <span class="math-container">$A^n \bf 1$</span> (its first component is the solution). Do this by initializing the vector <span class="math-container">$v_0 = \bf 1$</span> and iteratively computing <span class="math-container">$v_{j+1} = Av_j$</span>. Start the computation of <span class="math-container">$v_{j+1}$</span> from the last component upwards and notice that the row <span class="math-container">$i$</span> of the matrix is mostly equal (in the blocks to the left of the diagonal one) to the corresponding row one block-level below. (To see why this is true look at the states <span class="math-container">$(m, M)$</span> and <span class="math-container">$(m, M+1)$</span>). Now to do the computation, keep a running total for the current diagonal block and add the rest from the corresponding (already calculated) value from <span class="math-container">$v$</span>.</p>
| <p>Not an answer but an elaboration on P. Quinton's remark.</p>
<p>A pair of sequences <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is solvable if and only if the pair of sequences
<span class="math-container">$$(\min (a_1,b_1),\ldots, \min(a_n,b_n))\quad\mbox{and}\quad (\max (a_1,b_1),\ldots,\max(a_n,b_n)) $$</span>
is itself solved, i.e. both the min-sequence and the max-sequence are nondecreasing.</p>
<p>Proof sketch. <span class="math-container">$\Rightarrow$</span> Say <span class="math-container">$\min (a_1,b_1) > \min (a_2,b_2)$</span>, then <span class="math-container">$a_1,b_1 > \min (a_2,b_2)$</span> contradicting solvability. Similar argument for the maximums. <span class="math-container">$\Leftarrow$</span> Winning strategy is clear.</p>
<p>Call a pair of sequences a <em>pair of min/max sequences</em> if it is invariant with respect to the min/max strategy. A pair of min/max sequences can be scrambled in <span class="math-container">$2^n$</span> different ways (for each index either swap or no swap). So the number of all solvable pairs of sequences is bounded above by <span class="math-container">$2^nP(n,k)$</span>, where <span class="math-container">$P(n,k)$</span> is the number of pairs of min/max sequences of length <span class="math-container">$n$</span> and bound <span class="math-container">$k$</span>.</p>
<p>There is a caveat. A pair of constant and equal sequences, for instance, is invariant with respect to scrambling. I don't know how to efficiently account for the <span class="math-container">$a_i=b_i$</span> cases that will otherwise lead to considerable overestimation.</p>
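<p>The criterion makes a brute-force count cheap for small <span class="math-container">$n$</span> and <span class="math-container">$k$</span>; this Python sketch (mine) reproduces the values in the other answer's table:</p>

```python
from itertools import product

def solvable(a, b):
    """The pair is solvable iff both the pointwise min-sequence and the
    pointwise max-sequence are nondecreasing."""
    lo = [min(x, y) for x, y in zip(a, b)]
    hi = [max(x, y) for x, y in zip(a, b)]
    return lo == sorted(lo) and hi == sorted(hi)

def f(k, n):
    seqs = list(product(range(k), repeat=n))
    return sum(solvable(a, b) for a in seqs for b in seqs)

assert [f(2, n) for n in (1, 2, 3)] == [4, 11, 26]   # k = 2 row of the table
assert f(3, 2) == 46 and f(4, 2) == 130              # n = 2 column
assert all(f(2, n) == 2 ** (n + 2) - n - 3 for n in range(1, 7))
```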
|
probability | <p>A mathematician and a computer are playing a game: First, the mathematician chooses an integer from the range $2,...,1000$. Then, the computer chooses an integer <em>uniformly at random</em> from the same range. If the numbers chosen share a prime factor, the larger number wins. If they do not, the smaller number wins. (If the two numbers are the same, the game is a draw.)</p>
<p>Which number should the mathematician choose in order to maximize his chances of winning?</p>
| <p>For fixed range:</p>
<pre><code>range = 16;
a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}];
b = Table[Sort@DeleteDuplicates@ Flatten@Table[
Table[Position[a, a[[y, m]]][[n, 1]],
{n, 1, Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}];
c = Table[Complement[Range[range], b[[n]]], {n, 1, range}];
d = Table[Range[n, range], {n, 1, range}];
e = Table[Range[1, n], {n, 1, range}];
w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]],
Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}];
l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1],
n], {n, 1, range}];
results = Table[Length@l[[n]], {n, 1, range}];
cf = Grid[{{Join[{"n"}, Rest@(r = Range[range])] // ColumnForm,
Join[{"win against n"}, Rest@w] // ColumnForm,
Join[{"lose against n"}, Rest@l] // ColumnForm,
Join[{"probability win for n"}, (p = Drop[Table[
results[[n]]/Total@Drop[results, 1] // N,{n, 1, range}], 1])] // ColumnForm}}]
Flatten[Position[p, Max@p] + 1]
</code></pre>
<p>isn't great code, but fun to play with for small ranges, gives</p>
<p><img src="https://i.sstatic.net/6oqyM.png" alt="enter image description here">
<img src="https://i.sstatic.net/hxDGU.png" alt="enter image description here"></p>
<p>and perhaps more illuminating</p>
<pre><code>rr = 20; Grid[{{Join[{"range"}, Rest@(r = Range[rr])] // ColumnForm,
Join[{"best n"}, (t = Rest@Table[
a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}];
b = Table[Sort@DeleteDuplicates@Flatten@Table[Table[
Position[a, a[[y, m]]][[n, 1]], {n, 1,Length@Position[a, a[[y, m]]]}],
{m, 1,PrimeNu[y]}], {y, 1, range}];
c = Table[Complement[Range[range], b[[n]]], {n, 1, range}];
d = Table[Range[n, range], {n, 1, range}];
e = Table[Range[1, n], {n, 1, range}];
w = Table[DeleteCases[DeleteCases[Join[Intersection[c[[n]], e[[n]]],
Intersection[b[[n]], d[[n]]]], 1], n], {n, 1, range}];
l = Table[DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1], n],
{n,1, range}];
results = Table[Length@l[[n]], {n, 1, range}];
p = Drop[Table[results[[n]]/Total@Drop[results, 1] // N,
{n, 1, range}], 1];
{Flatten[Position[p, Max@p] + 1], Max@p}, {range, 1, rr}]/.Indeterminate-> draw);
Table[t[[n, 1]], {n, 1, rr - 1}]] // ColumnForm,
Join[{"probability for win"}, Table[t[[n, 2]], {n, 1, rr - 1}]] // ColumnForm}}]
</code></pre>
<p>compares ranges:</p>
<p><img src="https://i.sstatic.net/P2aA8.png" alt="enter image description here"></p>
<p>Plotting mean "best $n$" against $\sqrt{\text{range}}$ gives</p>
<p><img src="https://i.sstatic.net/LMkAz.png" alt="enter image description here"></p>
<p>For range=$1000,$ "best $n$" are $29$ and $31$, which can be seen as maxima in this plot:</p>
<p><img src="https://i.sstatic.net/06gXJ.png" alt="enter image description here"></p>
<h1>Update</h1>
<p>In light of DanielV's comment that a "primes vs winchance" graph would probably be enlightening, I did a little bit of digging, and it turns out that it is. Looking at the "winchance" (just a weighting for $n$) of the primes in the range only, it is possible to give a fairly accurate prediction using</p>
<pre><code>range = 1000;
a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}];
b = Table[Sort@DeleteDuplicates@Flatten@Table[
Table[Position[a, a[[y, m]]][[n, 1]], {n, 1,
Length@Position[a, a[[y, m]]]}], {m, 1, PrimeNu[y]}], {y, 1, range}];
c = Table[Complement[Range[range], b[[n]]], {n, 1, range}];
d = Table[Range[n, range], {n, 1, range}];
e = Table[Range[1, n], {n, 1, range}];
w = Table[ DeleteCases[ DeleteCases[
Join[Intersection[c[[n]], e[[n]]], Intersection[b[[n]], d[[n]]]],
1], n], {n, 1, range}];
l = Table[
DeleteCases[DeleteCases[Complement[Range[range], w[[n]]], 1],
n], {n, 1, range}];
results = Table[Length@l[[n]], {n, 1, range}];
p = Drop[Table[
results[[n]]/Total@Drop[results, 1] // N, {n, 1, range}], 1];
{Flatten[Position[p, Max@p] + 1], Max@p};
qq = Prime[Range[PrimePi[2], PrimePi[range]]] - 1;
Show[ListLinePlot[Table[p[[t]] range, {t, qq}],
DataRange -> {1, Length@qq}],
ListLinePlot[
Table[2 - 2/Prime[x] - 2/range (-E + Prime[x]), {x, 1, Length@qq + 0}],
PlotStyle -> Red], PlotRange -> All]
</code></pre>
<p><img src="https://i.sstatic.net/BwAmp.png" alt="enter image description here"></p>
<p>The plot above (there are $2$ plots here) show the values of "winchance" for primes against a plot of $$2+\frac{2 (e-p_n)}{\text{range}}-\frac{2}{p_n}$$</p>
<p>where $p_n$ is the $n$th prime, and "winchance" is the number of possible wins for $n$ divided by total number of possible wins in range ie $$\dfrac{\text{range}}{2}\left(\text{range}-1\right)$$ eg $499500$ for range $1000$.</p>
<p><img src="https://i.sstatic.net/zFegO.png" alt="enter image description here"></p>
<pre><code>Show[p // ListLinePlot, ListPlot[N[
Transpose@{Prime[Range[PrimePi[2] PrimePi[range]]],
Table[(2 + (2*(E - Prime[x]))/range - 2/Prime[x])/range, {x, 1,
Length@qq}]}], PlotStyle -> {Thick, Red, PointSize[Medium]},
DataRange -> {1, range}]]
</code></pre>
<h1>Added</h1>
<p>Bit of fun with game simulation:</p>
<pre><code>games = 100; range = 30;
table = Prime[Range[PrimePi[range]]];
choice = Nearest[table, Round[Sqrt[range]]][[1]];
y = RandomChoice[Range[2, range], games]; z = Table[
Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m, 1, games}];
Count[Table[If[Count[z, choice] == 0 && y[[m]] < choice \[Or]
Count[z, choice] > 0 && y[[m]] < choice, "lose", "win"],
{m, 1, games}], "win"]
</code></pre>
<p>& simulated wins against computer over variety of ranges </p>
<p><img src="https://i.sstatic.net/82BQG.png" alt="enter image description here"></p>
<p>with</p>
<pre><code>Clear[range]
highestRange = 1000;
ListLinePlot[Table[games = 100;
table = Prime[Range[PrimePi[range]]];
choice = Nearest[table, Round[Sqrt[range]]][[1]];
y = RandomChoice[Range[2, range], games];
z = Table[Table[FactorInteger[y[[m]]][[n, 1]], {n, 1, PrimeNu[y[[m]]]}], {m,
1, games}];
Count[Table[ If[Count[z, choice] == 0 && y[[m]] < choice \[Or]
Count[z, choice] > 0 && y[[m]] < choice, "lose", "win"], {m, 1,
games}], "win"], {range,2, highestRange}], Filling -> Axis, PlotRange-> All]
</code></pre>
<h1>Added 2</h1>
<p>Plot of mean "best $n$" up to range$=1000$ with tentative conjectured error bound of $\pm\dfrac{\sqrt{\text{range}}}{\log(\text{range})}$ for range$>30$.</p>
<p><img src="https://i.sstatic.net/gjQ7E.png" alt="enter image description here"></p>
<p>I could well be wrong here though. - In fact, on reflection, I think I am (<a href="https://math.stackexchange.com/questions/865820/very-tight-prime-bounds">related</a>).</p>
| <p>First consider choosing a prime $p$ in the range $[2,N]$. You lose only if the computer chooses a multiple of $p$ or a number smaller than $p$, which occurs with probability
$$
\frac{(\lfloor{N/p}\rfloor-1)+(p-2)}{N-1}=\frac{\lfloor{p+N/p}\rfloor-3}{N-1}.
$$
The term inside the floor function has derivative
$$
1-\frac{N}{p^2},
$$
so it increases for $p\le \sqrt{N}$ and decreases thereafter. The floor function does not change this behavior. So the best prime to choose is always one of the two closest primes to $\sqrt{N}$ (the one on its left and one its right, unless $N$ is the square of a prime). Your chance of losing with this strategy will be $\sim 2/\sqrt{N}$.</p>
<p>On the other hand, consider choosing a composite $q$ whose prime factors are $$p_1 \le p_2 \le \ldots \le p_k.$$ Then the computer certainly wins if it chooses a prime number less than $q$ (other than any of the $p$'s); there are about $q / \log q$ of these by the prime number theorem. It also wins if it chooses a multiple of $p_1$ larger than $q$; there are about $(N-q)/p_1$ of these. Since $p_1 \le \sqrt{q}$ (because $q$ is composite), the computer's chance of winning here is at least about
$$
\frac{q}{N\log q}+\frac{N-q}{N\sqrt{q}}.
$$
The first term increases with $q$, and the second term decreases. The second term is larger than $(1/3)/\sqrt{N}$ until $q \ge (19-\sqrt{37})N/18 \approx 0.72 N$, at which point the first is already $0.72 / \log{N}$, which is itself larger than $(5/3)/\sqrt{N}$ as long as $N > 124$. So the sum of these terms will always be larger than $2/\sqrt{N}$ for $N > 124$ or so, meaning that the computer has a better chance of winning than if you'd chosen the best prime.</p>
<p>This rough calculation shows that choosing the prime closest to $\sqrt{N}$ is the best strategy for sufficiently large $N$, where "sufficiently large" means larger than about $100$. (Other answers have listed the exceptions, the largest of which appears to be $N=30$, consistent with this calculation.)</p>
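<p>A brute-force check in Python (my addition) confirms this for $N=1000$: the optimal choices are the primes $29$ and $31$, matching the other answer.</p>

```python
from math import gcd

N = 1000
nums = range(2, N + 1)

def wins(m):
    """Number of computer picks that m beats."""
    total = 0
    for c in nums:
        if c == m:
            continue                  # a draw
        if gcd(m, c) > 1:
            total += m > c            # shared prime factor: larger number wins
        else:
            total += m < c            # coprime: smaller number wins
    return total

w = {m: wins(m) for m in nums}
best = max(w.values())
winners = sorted(m for m in nums if w[m] == best)
assert winners == [29, 31]            # the maxima, near sqrt(1000) ~ 31.6
```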
|
combinatorics | <p><a href="http://math.gmu.edu/~eobrien/Venn4.html" rel="noreferrer">This page</a> gives a few examples of Venn diagrams for <span class="math-container">$4$</span> sets. Some examples:<br>
<img src="https://i.sstatic.net/fHbmV.gif" alt="alt text">
<img src="https://i.sstatic.net/030jM.gif" alt="alt text"><br>
Thinking about it for a little, it is impossible to partition the plane into the <span class="math-container">$16$</span> segments required for a complete <span class="math-container">$4$</span>-set Venn diagram using only circles as we could do for <span class="math-container">$<4$</span> sets. Yet it is doable with ellipses or rectangles, so we don't require non-convex shapes as <a href="http://en.wikipedia.org/wiki/Venn_diagram#Edwards.27_Venn_diagrams" rel="noreferrer">Edwards</a> uses. </p>
<p>So what properties of a shape determine its suitability for <span class="math-container">$n$</span>-set Venn diagrams? Specifically, why are circles not good enough for the case <span class="math-container">$n=4$</span>?</p>
| <p>The short answer, from a <a href="http://www.ams.org/notices/200611/ea-wagon.pdf" rel="noreferrer">paper</a> by Frank Ruskey, Carla D. Savage, and Stan Wagon is as follows:</p>
<blockquote>
<p>... it is impossible to draw a Venn diagram with circles that will represent all the possible intersections of four (or more) sets. This is a simple consequence of the fact that circles can finitely intersect in at most two points and <a href="http://en.wikipedia.org/wiki/Planar_graph#Euler.27s_formula" rel="noreferrer">Euler’s relation</a> F − E + V = 2 for the number of faces, edges, and vertices in a plane graph.</p>
</blockquote>
<p>The same paper goes on in quite some detail about the process of creating Venn diagrams for higher values of <em>n</em>, especially for simple diagrams with rotational symmetry.</p>
<p>For a simple summary, the best answer I could find was on <a href="http://wiki.answers.com/Q/How_do_you_solve_a_four_circle_venn_diagram" rel="noreferrer">WikiAnswers</a>:</p>
<blockquote>
<p>Two circles intersect in at most two points, and each intersection creates one new region. (Going clockwise around the circle, the curve from each intersection to the next divides an existing region into two.)</p>
<p>Since the fourth circle intersects the first three in at most 6 places, it creates at most 6 new regions; that's 14 total, but you need 2^4 = 16 regions to represent all possible relationships between four sets.</p>
<p>But you can create a Venn diagram for four sets with four ellipses, because two ellipses can intersect in more than two points.</p>
</blockquote>
<p>Both of these sources indicate that the critical property of a shape that would make it suitable or unsuitable for higher-order Venn diagrams is the number of possible intersections (and therefore, sub-regions) that can be made using two of the same shape.</p>
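<p>The counting argument can be made concrete in a few lines of Python (my sketch): for <em>n</em> circles in general position, Euler's relation bounds the number of regions by n^2 − n + 2.</p>

```python
def max_regions_with_circles(n):
    """Upper bound on plane regions formed by n circles (general position,
    n >= 2): V <= n(n-1) intersection points, E <= 2n(n-1) arcs, and
    Euler's relation F = E - V + 2 bounds the faces."""
    if n == 1:
        return 2
    V = n * (n - 1)
    E = 2 * n * (n - 1)
    return E - V + 2              # = n^2 - n + 2

for n in range(1, 5):
    print(n, max_regions_with_circles(n), 2 ** n)
# n = 1, 2, 3 yield 2, 4, 8 regions, exactly the 2^n needed;
# n = 4 yields at most 14 < 16 = 2^4, so four circles cannot suffice.
assert max_regions_with_circles(4) == 14 < 2 ** 4
```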
<p>To illustrate further, consider some of the complex shapes used for <em>n</em>=5, <em>n</em>=7 and <em>n</em>=11 (from <a href="http://mathworld.wolfram.com/VennDiagram.html" rel="noreferrer">Wolfram Mathworld</a>):</p>
<p><img src="https://i.sstatic.net/JnpeY.jpg" alt="Venn diagrams for n=5, 7 and 11" /></p>
<p>The structure of these shapes is chosen such that they can intersect with each-other in as many different ways as required to produce the number of unique regions required for a given <em>n</em>.</p>
<p>See also: <a href="http://www.brynmawr.edu/math/people/anmyers/PAPERS/Venn.pdf" rel="noreferrer">Are Venn Diagrams Limited to Three or Fewer Sets?</a></p>
| <p>To our surprise, we found that the standard proof that a rotationally symmetric <span class="math-container">$n$</span>-Venn diagram is impossible when <span class="math-container">$n$</span> is not prime is incorrect. So Peter Webb and I found and published a correct proof that addresses the error. The details are all discussed at the paper</p>
<p>Stan Wagon and Peter Webb, <a href="https://www-users.cse.umn.edu/%7Ewebb/Publications/WebbWagonVennNote8.pdf" rel="nofollow noreferrer">Venn symmetry and prime numbers: A seductive proof revisited</a>, American Mathematical Monthly, August 2008, pp 645-648.</p>
<p>We discovered all this after the long paper with Savage et al. cited in another answer.</p>
|
linear-algebra | <p>Given a matrix $A$ over a field $F$, does there always exist a matrix $B$ such that $AB = BA$, apart from the trivial cases and the matrices in the polynomial ring $F[A]$?</p>
| <p>A square matrix $A$ over a field $F$ commutes with every $F$-linear combination of non-negative powers of $A$. That is, for every $a_0,\dots,a_n\in F$,</p>
<p>$$A\left(\sum_{k=0}^na_kA^k\right)=\sum_{k=0}^na_kA^{k+1}=\left(\sum_{k=0}^na_kA^k\right)A\;.$$</p>
<p>This includes as special cases the identity and zero matrices of the same dimensions as $A$ and of course $A$ itself.</p>
<p><strong>Added:</strong> As was noted in the comments, this amounts to saying that $A$ commutes with $p(A)$ for every polynomial over $F$. As was also noted, there are matrices that commute <em>only</em> with these. A simple example is the matrix $$A=\pmatrix{1&1\\0&1}:$$ it’s easily verified that the matrices that commute with $A$ are precisely those of the form $$\pmatrix{a&b\\0&a}=bA+(a-b)I=bA^1+(a-b)A^0\;.$$ At the other extreme, a scalar multiple of an identity matrix commutes with all matrices of the same size.</p>
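<p>This can be checked mechanically. Here is a small Python sketch (the particular matrices $B$ and $C$ below are arbitrary choices used only for illustration):</p>

```python
# Check which 2x2 matrices commute with A = [[1, 1], [0, 1]]:
# the claim is that they are exactly those of the form
# [[a, b], [0, a]] = b*A + (a-b)*I.

def matmul(X, Y):
    """2x2 matrix product, written out by hand."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]

# A matrix of the claimed form commutes with A ...
B = [[5, 3], [0, 5]]          # = 3*A + 2*I
assert matmul(A, B) == matmul(B, A)

# ... while a generic matrix does not.
C = [[1, 0], [1, 1]]
assert matmul(A, C) != matmul(C, A)
```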
| <p>Given a square <span class="math-container">$n$</span> by <span class="math-container">$n$</span> matrix <span class="math-container">$A$</span> over a field <span class="math-container">$k,$</span> it is always true that <span class="math-container">$A$</span> commutes with any <span class="math-container">$p(A),$</span> where <span class="math-container">$p(x)$</span> is a polynomial with coefficients in <span class="math-container">$k.$</span> Note that in the polynomial we take the constant <span class="math-container">$p_0$</span> to refer to <span class="math-container">$p_0 I$</span> here, where <span class="math-container">$I$</span> is the identity matrix. Also, by Cayley-Hamilton, any such polynomial may be rewritten as one of degree no larger than <span class="math-container">$(n-1),$</span> and this applies also to power series such as <span class="math-container">$e^A,$</span> although in this case it is better to find <span class="math-container">$e^A$</span> first and then figure out how to write it as a finite polynomial.</p>
<p><strong>THEOREM:</strong> The following are equivalent:</p>
<p>(I) <span class="math-container">$A$</span> commutes only with matrices <span class="math-container">$B = p(A)$</span> for some <span class="math-container">$p(x) \in k[x]$</span></p>
<p>(II) The minimal polynomial and characteristic polynomial of <span class="math-container">$A$</span> coincide; note that, if we enlarge the field to more than the field containing the matrix entries, neither the characteristic nor the minimal polynomial change. Nice proofs for the minimal polynomial at <a href="https://math.stackexchange.com/questions/136804/can-a-matrix-in-mathbbr-have-a-minimal-polynomial-that-has-coefficients-in">Can a matrix in $\mathbb{R}$ have a minimal polynomial that has coefficients in $\mathbb{C}$?</a></p>
<p>(III) <span class="math-container">$A$</span> is similar to a companion matrix.</p>
<p>(IV) if necessary, taking a field extension so that the characteristic polynomial factors completely, each characteristic value occurs in only one Jordan block.</p>
<p>(V) <span class="math-container">$A$</span> has a cyclic vector, that is some <span class="math-container">$v$</span> such that
<span class="math-container">$ \{ v,Av,A^2v, \ldots, A^{n-1}v \} $</span>
is a basis for the vector space.</p>
<p>See <a href="https://mathoverflow.net/questions/65796/when-animals-attack">GAILLARD</a> <a href="http://en.wikipedia.org/wiki/Minimal_polynomial_%28linear_algebra%29" rel="noreferrer">MINIMAL</a> <a href="http://en.wikipedia.org/wiki/Similar_matrix" rel="noreferrer">SIMILAR</a> <a href="http://en.wikipedia.org/wiki/Companion_matrix" rel="noreferrer">COMPANION</a></p>
<p>The equivalence of (II) and (III) is Corollary 9.43 on page 674 of <a href="https://rads.stackoverflow.com/amzn/click/com/0130878685" rel="nofollow noreferrer">ROTMAN</a></p>
<p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p>
<p>Theorem: If <span class="math-container">$A$</span> has a cyclic vector, that is some <span class="math-container">$v$</span> such that
<span class="math-container">$$ \{ v,Av,A^2v, \ldots, A^{n-1}v \} $$</span>
is a basis for the vector space, then
<span class="math-container">$A$</span> commutes only with matrices <span class="math-container">$B = p(A)$</span> for some <span class="math-container">$p(x) \in k[x].$</span></p>
<p>Nice short proof by Gerry at <a href="https://math.stackexchange.com/questions/178604/complex-matrix-that-commutes-with-another-complex-matrix/178633#178633">Complex matrix that commutes with another complex matrix.</a></p>
<p>This is actually if and only if, see <a href="https://planetmath.org/cyclicvectortheorem" rel="noreferrer">Statement</a> and <a href="https://planetmath.org/proofofcyclicvectortheorem" rel="noreferrer">Proof</a></p>
<p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p>
<p>Note that, as in the complex numbers, if the field <span class="math-container">$k$</span> is algebraically closed we may then ask about the Jordan Normal Form of <span class="math-container">$A.$</span> In this case, the condition is that each eigenvalue belong to only a single Jordan block. This includes the easiest case, when all eigenvalues are distinct, as then the Jordan form is just a diagonal matrix with a bunch of different numbers on the diagonal, no repeats.</p>
<p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p>
<p>For example, let
<span class="math-container">$$ A \; = \;
\left( \begin{array}{ccc}
\lambda & 0 & 0 \\
0 & \lambda & 1 \\
0 & 0 & \lambda
\end{array}
\right).
$$</span></p>
<p>Next, with <span class="math-container">$r \neq s,$</span> take
<span class="math-container">$$ B \; = \;
\left( \begin{array}{ccc}
r & 0 & 0 \\
0 & s & t \\
0 & 0 & s
\end{array}
\right).
$$</span></p>
<p>We do get</p>
<p><span class="math-container">$$ AB \; = \; BA \; = \;
\left( \begin{array}{ccc}
\lambda r & 0 & 0 \\
0 & \lambda s & \lambda t + s \\
0 & 0 & \lambda s
\end{array}
\right).
$$</span>
However, since <span class="math-container">$r \neq s,$</span> we know that <span class="math-container">$B$</span> cannot be written as a polynomial in <span class="math-container">$A.$</span></p>
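<p>The commutation in this example can be confirmed mechanically; a Python sketch, with arbitrary sample values for $\lambda, r, s, t$:</p>

```python
# Verify the worked example: with the eigenvalue lambda split across two
# Jordan blocks, A commutes with a B (r != s) that is not a polynomial in A.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

lam, r, s, t = 2, 1, 4, 7    # arbitrary, with r != s
A = [[lam, 0, 0], [0, lam, 1], [0, 0, lam]]
B = [[r, 0, 0], [0, s, t], [0, 0, s]]

assert matmul(A, B) == matmul(B, A)

# Any polynomial p(A) has equal (1,1) and (2,2) entries (both p(lam)),
# so B with r != s cannot be written as p(A).
```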
<p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p>
<p>A side note, <a href="https://math.stackexchange.com/questions/1476010/computing-the-dimension-of-a-vector-space-of-matrices-that-commute-with-a-given/1476643#1476643">Computing the dimension of a vector space of matrices that commute with a given matrix B,</a> answer by Pedro. The matrices that commute only with polynomials in themselves are an extreme case, dimension of that vector space of matrices is just <span class="math-container">$n.$</span> The other extreme is the identity matrix, commutes with everything, dimension <span class="math-container">$n^2.$</span> For some in-between matrix, what is the dimension of the vector space of matrices with which it commutes?</p>
<p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p>
<p>Showing what must be the form of a square block if it commutes with a Jordan block. There is no loss in taking the eigenvalue for the Jordan block as zero, which makes it all cleaner:</p>
<pre><code>parisize = 4000000, primelimit = 500000
? jordan = [ 0,1;0,0]
%1 =
[0 1]
[0 0]
? symbols = [a,b;c,d]
%2 =
[a b]
[c d]
? jordan * symbols - symbols * jordan
%3 =
[c -a + d]
[0 -c]
?
?
? symbols = [ a,b,c;d,e,f;g,h,i]
%4 =
[a b c]
[d e f]
[g h i]
? jordan = [ 0,1,0; 0,0,1; 0,0,0]
%5 =
[0 1 0]
[0 0 1]
[0 0 0]
? jordan * symbols - symbols * jordan
%6 =
[d -a + e -b + f]
[g -d + h -e + i]
[0 -g -h]
?
?
? jordan = [ 0,1,0,0; 0,0,1,0; 0,0,0,1;0,0,0,0]
%7 =
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
[0 0 0 0]
? symbols = [ a,b,c,d; e,f,g,h; i,j,k,l; m,n,o,p]
%8 =
[a b c d]
[e f g h]
[i j k l]
[m n o p]
? jordan * symbols - symbols * jordan
%9 =
[e -a + f -b + g -c + h]
[i -e + j -f + k -g + l]
[m -i + n -j + o -k + p]
[0 -m -n -o]
</code></pre>
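<p>The pattern in the transcript above can be confirmed in any language; a Python sketch for the $3\times 3$ nilpotent Jordan block (coefficients below are arbitrary):</p>

```python
# For the 3x3 nilpotent Jordan block J, the matrices M with J*M = M*J
# are exactly the upper-triangular Toeplitz ones, i.e. a*I + b*J + c*J^2.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

J = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
a, b, c = 5, -2, 7                      # arbitrary coefficients
M = [[a, b, c], [0, a, b], [0, 0, a]]   # = a*I + b*J + c*J^2

assert matmul(J, M) == matmul(M, J)
```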
<p>=================================</p>
|
probability | <p>I understand that the variance of the sum of two independent normally distributed random variables is the sum of the variances, but how does this change when the two random variables are correlated? </p>
| <p>For any two random variables:
$$\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y)+2\text{Cov}(X,Y).$$
If the variables are uncorrelated (that is, $\text{Cov}(X,Y)=0$), then</p>
<p>$$\tag{1}\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y).$$
In particular, if $X$ and $Y$ are independent, then equation $(1)$ holds.</p>
<p>In general
$$
\text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i)+
2\sum_{i< j} \text{Cov}(X_i,X_j).
$$
If for each $i\ne j$, $X_i$ and $X_j$ are uncorrelated, in particular if the $X_i$ are pairwise independent (that is, $X_i$ and $X_j$ are independent whenever $i\ne j$), then
$$
\text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i) .
$$</p>
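<p>As a concrete check of the first identity, here is a small exact computation in Python. It realizes $X$ and $Y$ as uniform draws from an explicit finite joint sample (one way to get correlated variables) and uses population variances with rational arithmetic:</p>

```python
from fractions import Fraction

# Joint sample of (X, Y): correlated by construction.
pairs = [(1, 2), (2, 1), (3, 5), (4, 4)]
n = Fraction(len(pairs))

def mean(v):
    return sum(v, Fraction(0)) / n

xs = [Fraction(x) for x, _ in pairs]
ys = [Fraction(y) for _, y in pairs]
mx, my = mean(xs), mean(ys)

var_x = mean([(x - mx) ** 2 for x in xs])
var_y = mean([(y - my) ** 2 for y in ys])
cov   = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

sums = [x + y for x, y in zip(xs, ys)]
ms = mean(sums)
var_sum = mean([(s - ms) ** 2 for s in sums])

# Var(X+Y) = Var(X) + Var(Y) + 2 Cov(X, Y), exactly.
assert var_sum == var_x + var_y + 2 * cov
```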
| <p>Let's work this out from the definitions. Let's say we have 2 random variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span> with means <span class="math-container">$\mu_x$</span> and <span class="math-container">$\mu_y$</span>. Then variances of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> would be:</p>
<p><span class="math-container">$${\sigma_x}^2 = \frac{\sum_i(\mu_x-x_i)(\mu_x-x_i)}{N}$$</span>
<span class="math-container">$${\sigma_y}^2 = \frac{\sum_i(\mu_y-y_i)(\mu_y-y_i)}{N}$$</span></p>
<p>Covariance of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> is:</p>
<p><span class="math-container">$${\sigma_{xy}} = \frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N}$$</span></p>
<p>Now, let us consider the weighted sum <span class="math-container">$p$</span> of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>:</p>
<p><span class="math-container">$$\mu_p = w_x\mu_x + w_y\mu_y$$</span></p>
<p><span class="math-container">$${\sigma_p}^2 = \frac{\sum_i(\mu_p-p_i)^2}{N} = \frac{\sum_i(w_x\mu_x + w_y\mu_y - w_xx_i - w_yy_i)^2}{N} = \frac{\sum_i(w_x(\mu_x - x_i) + w_y(\mu_y - y_i))^2}{N} = \frac{\sum_i(w^2_x(\mu_x - x_i)^2 + w^2_y(\mu_y - y_i)^2 + 2w_xw_y(\mu_x - x_i)(\mu_y - y_i))}{N} \\ = w^2_x\frac{\sum_i(\mu_x-x_i)^2}{N} + w^2_y\frac{\sum_i(\mu_y-y_i)^2}{N} + 2w_xw_y\frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N} \\ = w^2_x\sigma^2_x + w^2_y\sigma^2_y + 2w_xw_y\sigma_{xy}$$</span></p>
|
differentiation | <p>I've been trying to solve the following problem:</p>
<p>Suppose that $f$ and $f'$ are continuous functions on $\mathbb{R}$, and that $\displaystyle\lim_{x\to\infty}f(x)$ and $\displaystyle\lim_{x\to\infty}f'(x)$ exist. Show that $\displaystyle\lim_{x\to\infty}f'(x) = 0$.</p>
<p>I'm not entirely sure what to do. Since there's not a lot of information given, I guess there isn't very much one can do. I tried using the definition of the derivative and showing that it went to $0$ as $x$ went to $\infty$ but that didn't really work out. Now I'm thinking I should assume $\displaystyle\lim_{x\to\infty}f'(x) = L \neq 0$ and try to get a contradiction, but I'm not sure where the contradiction would come from. </p>
<p>Could somebody point me in the right direction (e.g. a certain theorem or property I have to use?) Thanks</p>
| <p>Apply a slick L'Hôpital trick: <span class="math-container">$\, $</span> if <span class="math-container">$\rm\ f + f\,'\!\to L\ $</span> as <span class="math-container">$\rm\ x\to\infty\ $</span> then <span class="math-container">$\rm\ f\to L,\ f\,'\!\to 0,\, $</span> since</p>
<p><span class="math-container">$$\rm \lim_{x\to\infty} f(x)\, =\, \lim_{x\to\infty}\frac{e^x\ f(x)}{e^x}\, =\, \lim_{x\to\infty}\frac{e^x\ (f(x)+f\,'(x))}{e^x}\, =\, \lim_{x\to\infty}\, (f(x)+f'(x))\qquad $$</span></p>
<p>This application of L'Hôpital's rule achieved some notoriety because the problem appeared in Hardy's classic calculus textbook <em>A Course of Pure Mathematics</em>, but with a less elegant solution. For example, see Landau; Jones: <a href="http://www.jstor.org/pss/2689813" rel="nofollow noreferrer">A Hardy Old Problem,</a> Math. Magazine 56 (1983) 230-232.</p>
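<p>A quick numerical sanity check of the statement (a Python sketch; the function $f(x) = 2 + x e^{-x}$ is just one arbitrary example with $f + f' \to 2$):</p>

```python
# Take f(x) = 2 + x*exp(-x).  Then f'(x) = exp(-x) - x*exp(-x), so
# f + f' = 2 + exp(-x) -> 2, and indeed f -> 2 and f' -> 0.
import math

def f(x):
    return 2 + x * math.exp(-x)

def fprime(x):
    return math.exp(-x) - x * math.exp(-x)

x = 50.0
assert abs((f(x) + fprime(x)) - 2) < 1e-12   # f + f' is close to L = 2
assert abs(f(x) - 2) < 1e-12                 # f is close to L
assert abs(fprime(x)) < 1e-12                # f' is close to 0
```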
| <p>Hint: If you assume $\lim _{x \to \infty } f'(x) = L \ne 0$, the contradiction would come from the mean value theorem (consider $f(x)-f(M)$ for a fixed but arbitrary large $M$, and let $x \to \infty$). </p>
<p>Explained: If the limit of $f(x)$ exists, the graph of $f$ has a horizontal asymptote. So as $x$ approaches infinity the graph flattens out, and if the derivative has a limit at all, that limit must be zero. </p>
|
linear-algebra | <p><a href="http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors">Wikipedia</a> defines an eigenvector like this:</p>
<p><em>An eigenvector of a square matrix is a non-zero vector that, when multiplied by the matrix, yields a vector that differs from the original vector at most by a multiplicative scalar.</em></p>
<p>So basically in layman language: An eigenvector is a vector that when you multiply it by a square matrix, you get the same vector or the same vector multiplied by a scalar.</p>
<p>There are a lot of terms which are related to this like eigenspaces and eigenvalues and eigenbases and such, which I don't quite understand, in fact, I don't understand at all.</p>
<p>Can someone give an explanation connecting these terms? So that it is clear what they are and why they are related.</p>
| <p>Eigenvectors are those vectors that exhibit especially simple behaviour under a linear transformation: Loosely speaking, they don't bend and rotate, they simply grow (or shrink) in length (though a different interpretation of growth/shrinkage may apply if the ground field is not $\mathbb R$). If it is possible to express any other vector as a linear combination of eigenvectors (preferably if you can in fact find a whole basis made of eigenvectors) then applying the - otherwise complicated - linear transformation suddenly becomes easy because with respect to a basis of eigenvectors the linear transformation is given simply by a diagonal matrix.</p>
<p>Especially when one wants to investigate higher powers of a linear transformation, this is practically only possible for eigenvectors: If $Av=\lambda v$, then $A^nv=\lambda^nv$, and even exponentials become easy for eigenvectors: $\exp(A)v:=\sum\frac1{n!}A^n v=e^\lambda v$.
By the way, the exponential functions $x\mapsto e^{cx}$ are eigenvectors of a famous linear transformation: differentiation, i.e. mapping a function $f$ to its derivative $f'$. That's precisely why exponentials play an important role as base solutions for linear differential equations (or even their discrete counterpart, linear recurrences like the Fibonacci numbers).</p>
<p>All other terminology is based on this notion:
A (nonzero) <strong>eigenvector</strong> $v$ such that $Av$ is a multiple of $v$ determines its <strong>eigenvalue</strong> $\lambda$ as the scalar factor such that $Av=\lambda v$.
Given an eigenvalue $\lambda$, the set of eigenvectors with that eigenvalue is in fact a subspace (i.e. sums and multiples of eigenvectors with the same(!) eigenvalue are again eigen), called the <strong>eigenspace</strong> for $\lambda$.
If we find a basis consisting of eigenvectors, then we may obviously call it <strong>eigenbasis</strong>. If the vectors of our vector space are not mere number tuples (such as in $\mathbb R^3$) but are also functions and our linear transformation is an operator (such as differentiation), it is often convenient to call the eigenvectors <strong>eigenfunctions</strong> instead; for example, $x\mapsto e^{3x}$ is an eigenfunction of the differentiation operator with eigenvalue $3$ (because the derivative of it is $x\mapsto 3e^{3x}$).</p>
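<p>To see the "higher powers become easy" point concretely, here is a small Python sketch (the matrix $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ is an arbitrary example, with eigenvector $(1,1)$ for eigenvalue $3$):</p>

```python
# Illustrating A^n v = lambda^n v for an eigenvector v.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2, 1], [1, 2]]
v, lam = [1, 1], 3

assert matvec(A, v) == [lam * t for t in v]       # A v = 3 v

# Apply A five times: the result is 3^5 v; no matrix powers are needed.
w = v
for _ in range(5):
    w = matvec(A, w)
assert w == [lam**5 * t for t in v]               # A^5 v = 243 v
```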
| <p>As far as I understand it, the 'eigen' in words like eigenvalue, eigenvector etc. means something like 'own', or a better translation in English would perhaps be 'characteristic'.</p>
<p>Each square matrix has some special scalars and vectors associated with it. The eigenvectors are the vectors which the matrix preserves (up to scalar multiplication). As you probably know, an $n\times n$ matrix acts as a linear transformation on an $n$-dimensional space, say $F^n$. A vector and its scalar multiples form a line through the origin in $F^n$, and so you can think of the eigenvectors as indicating lines through the origin preserved by the linear transformation corresponding to the matrix.</p>
<p><strong>Defn</strong> Let $A$ be an $n\times n$ matrix over a field $F$. A vector $v\in F^n$ is an <em>eigenvector</em> of $A$ if $Av = \lambda v$ for some $\lambda$ in $F$. A scalar $\lambda\in F$ is an <em>eigenvalue</em> of $A$ if $Av = \lambda v$ for some $v\in F^n$.</p>
<p>The eigenvalues are then the factors by which these special lines through the origin are either stretched or contracted.</p>
|
probability | <blockquote>
<p>Fix an integer $n\geq 2$. Suppose we start at the origin in the complex plane, and on each step we choose an $n^{th}$ root of unity at random, and go $1$ unit distance in that direction. Let $X_N$ be distance from the origin after the $N^{th}$ step. How well can we bound $E(X_N)$ from above?</p>
</blockquote>
<p>In my attempt to calculate this, I found the bound $\sqrt{N}$, but I have a feeling this could be wrong because the problem I am applying this to has instead $\sqrt{N\log N}$. (This reasoning is based on the belief that the problem is likely optimal) What I did was apply Cauchy Schwarz to get a sum with the norm squared, and then try to do some manipulations from there, relying on the fact that the sum of the vectors (not the distance) is zero by symmetry.</p>
| <p>I get the same $\sqrt{N}$ bound.</p>
<p>Let the $k$-th step be $z_k$ (an $n$-th root of unity), so that the position at time $N$ is $z_1 + \cdots + z_N$. The square of the distance from the origin at time $N$ is
$$
X_N^2 = \left( \sum_{k=1}^{N} \overline{z_k} \right) \cdot \left( \sum_{j=1}^{N} z_j \right) = \sum_{k=1}^{N} | z_{k} |^2 + \sum_{k \ne j} \overline{z_k} \cdot z_j = N + \sum_{k \ne j} \overline{z_k} \cdot z_j.
$$
Since each summand $\overline {z_k} \cdot z_j$ (for $k \ne j$) vanishes in expectation, we get $\mathbf E[X_N^2] = N$. Finally $$\mathbf EX_N \leqslant \sqrt{ \mathbf E[X_N^2] } = \sqrt{N} .$$ </p>
<p>This settles that $\mathbf EX_N$ is asymptotically $O(\sqrt{N})$, but the resulting constant is most likely not tight since the approximation in the final step is rather crude (for any finite $n$). </p>
| <p>Srivatsan has given a very fine answer, with a simple, elegant analysis.
With a little more work, we can sharpen the result.</p>
<p><strong>Claim</strong>: For $n \geq 3$, $\mathbb E X_N \sim \sqrt{\frac{\pi}{4} N}$ .</p>
<p>We can analyze this by means of the central limit theorem and the continuous mapping theorem. Below is just a sketch. We have restricted ourselves to the case $n \geq 3$ since the case $n = 2$ corresponds to the standard simple random walk, which has slightly different behavior (cf. Henning's comment). Intuitively, since for $n \geq 3$, the random walk can wiggle in an extra dimension, we should anticipate that its expected norm might grow slightly faster.</p>
<p><em>Proof (sketch)</em>:</p>
<p>Let $Z_i = (R_i,I_i)$, $i=1,2,\ldots$, be an iid (uniform) sample of the roots of unity where $R_i$ indicates the "real" component and $I_i$ the "imaginary" component of the $i$th element of the sample. Then, it is a simple exercise to verify that $\mathbb E R_i = \mathbb E I_i = 0$, and, also, $\mathbb E R_i I_i = 0$. Furthermore,
$$
\mathrm{Var}(R_i) = \mathrm{Var}(I_i) = 1/2 \>,
$$
independently of $n$, using simple properties of the roots of unity.</p>
<p>Hence, by the multivariate central limit theorem, we have that
$$
\sqrt{2N} (\bar{R}_N, \bar{I}_N) \xrightarrow{d} \,\mathcal \,N(0,\mathbf I_2 ) \> ,
$$
where $\bar{R}_N = N^{-1} \sum_{i=1}^N R_i$ and likewise for $\bar{I}_N$. Here $\mathbf I_2$ denotes the $2 \times 2$ identity matrix.</p>
<p>An application of the <a href="http://en.wikipedia.org/wiki/Continuous_mapping_theorem">continuous mapping theorem</a> using $g(x,y) = x^2 + y^2$ yields
$$
2 N (\bar{R}_N^2 + \bar{I}_N^2) = \frac{2}{N} X_N^2 = g( \sqrt{2N} \bar{R}_N, \sqrt{2N} \bar{I}_N ) \,\xrightarrow{d}\, \chi_2^2 \> .
$$</p>
<p>That is, the rescaled squared norm has a limit distribution which is chi-squared with two degrees of freedom.</p>
<p>The square-root of a $\chi_2^2$ distribution is known as a <a href="http://en.wikipedia.org/wiki/Rayleigh_distribution">Rayleigh distribution</a> and has mean $\sqrt{\pi/2}$.</p>
<p>Hence, by a second application of the continuous mapping theorem, $\sqrt{\frac{2}{N}} X_N$ converges to a Rayleigh distribution. </p>
<p>This strongly suggests (but does <em>not</em> prove) that $\mathbb E X_N \sim \sqrt{\frac{\pi}{4} N}$.</p>
<p>To finish the proof, note that $\mathbb E \frac{2}{N} X_N^2 = 2$ for all $N$. By a standard theorem in probability theory, there exists a sequence of random variables $\{Y_N\}$ such that $Y_N \stackrel{d}= \sqrt{\frac{2}{N}} X_N$ and $Y_N$ converges to $Y_\infty$ almost surely, where $Y_\infty$ is a standard Rayleigh. By the uniformity of the second moment above, we know that the set $\{Y_N\}$ is uniformly integrable and so $L_1$ convergent. So,
$$
|\mathbb E Y_N - \mathbb E Y_\infty| \leq \mathbb E |Y_N - Y_\infty| \to 0 \> .
$$</p>
<p>Hence $\mathbb E Y_N = \mathbb E X_N \sim \sqrt{\frac{\pi}{4} N}$ as desired.</p>
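<p>A seeded Monte Carlo run agrees with this asymptotic; a Python sketch (here $n = 4$, and the trial counts and $10\%$ tolerance are arbitrary choices that keep the check fast but safely inside sampling error):</p>

```python
# Monte Carlo check of E[X_N] ~ sqrt(pi*N/4) for n >= 3.
import cmath
import math
import random

random.seed(0)                              # reproducible run
n, N, trials = 4, 400, 2000
roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]

total = 0.0
for _ in range(trials):
    pos = sum(random.choice(roots) for _ in range(N))
    total += abs(pos)

est = total / trials
predicted = math.sqrt(math.pi * N / 4)      # about 17.72 for N = 400
assert abs(est - predicted) / predicted < 0.1
```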
|
combinatorics | <p>Consider the following two generating functions:
<span class="math-container">$$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$$</span>
<span class="math-container">$$\log\left(\frac{1}{1-x}\right)=\sum_{n=1}^{\infty}\frac{x^n}{n}.$$</span>
If we live in function-land, it's clear enough that there is an inverse relationship between these two things. In particular,
<span class="math-container">$$e^{\log\left(\frac{1}{1-x}\right)}=1+x+x^2+x^3+\ldots$$</span>
If we live in generating-function-land, this identity is really not so obvious. We can figure out that the coefficient of <span class="math-container">$x^n$</span> in <span class="math-container">$e^{\log\left(\frac{1}{1-x}\right)}$</span> is given as
<span class="math-container">$$\sum_{a_1+\ldots+a_k=n}\frac{1}{a_1\cdot \cdots \cdot a_k}\cdot \frac{1}{k!}$$</span>
where the sum runs over <em>all</em> ways to write <span class="math-container">$n$</span> as an ordered sum of positive integers. Supposedly, for each choice of <span class="math-container">$n$</span>, this thing sums to <span class="math-container">$1$</span>. I really don't see why. Is there a combinatorial argument that establishes this? </p>
| <p>In your sum, you are distinguishing between the same collection of numbers
when it occurs in different orders. So you'll have separate summands for
<span class="math-container">$(a_1,a_2,a_3,a_4)=(3,1,2,1)$</span>, <span class="math-container">$(2,3,1,1)$</span>, <span class="math-container">$(1,1,3,2)$</span> etc.</p>
<p>Given a multiset of <span class="math-container">$k$</span> numbers adding to <span class="math-container">$n$</span> consisting of <span class="math-container">$t_1$</span> instances
of <span class="math-container">$b_1$</span> up to <span class="math-container">$t_j$</span> instances of <span class="math-container">$b_j$</span>, that contributes
<span class="math-container">$$\frac{k!}{t_1!\cdot\cdots\cdot t_j!}$$</span>
(a multinomial coefficient) summands to the sum, and so an overall
contribution
of
<span class="math-container">$$\frac{1}{t_1!b_1^{t_1}\cdot\cdots\cdot t_j!b_j^{t_j}}$$</span>
to the sum. But that is <span class="math-container">$1/n!$</span> times the number of permutations with cycle structure
<span class="math-container">$b_1^{t_1}\cdots b_j^{t_j}$</span>. So this identity states that
the total number of permutations of <span class="math-container">$n$</span> objects is <span class="math-container">$n!$</span>.</p>
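<p>The identity itself is easy to confirm by brute force with exact rational arithmetic; a Python sketch checking it for $n$ up to $8$:</p>

```python
# For each n, sum 1/(a_1 * ... * a_k * k!) over all ordered compositions
# a_1 + ... + a_k = n, and check that the total is exactly 1.
from fractions import Fraction
from math import factorial, prod

def compositions(n):
    """All ordered tuples of positive integers summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

for n in range(1, 9):
    total = sum(Fraction(1, prod(c) * factorial(len(c)))
                for c in compositions(n))
    assert total == 1
```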
| <p>In brief, <span class="math-container">$n!$</span> times the summand in the sum you write down is equal to the number of permutations on <span class="math-container">$n$</span> symbols that decompose into the product of disjoint cycles of lengths <span class="math-container">$a_1,\dots,a_k$</span>. More precisely, this is true if you combine all of the terms in the sum corresponding to the same multiset <span class="math-container">$\{a_1,\dots,a_k\}$</span>.</p>
<p>See exercises 10.2 and 10.3 of <a href="http://jlmartin.faculty.ku.edu/~jlmartin/LectureNotes.pdf" rel="noreferrer">these notes</a> for related material.</p>
|
differentiation | <p><strong>Here's how I see it</strong> (please read the following if you can, because I address a lot of arguments people have already made):</p>
<p>Let's take instantaneous speed, for example. If it's truly instantaneous, then there is no change in $x$ (time), since there's no time <em>interval</em>. </p>
<p>Thus, in $\frac{f(x+h) - f(x)}{h}$, $h$ should actually be zero (not arbitrarily close to zero, since that would still be an interval) and therefore instantaneous speed is undefined.</p>
<p>If "instantaneous" is just a figure of speech for "very very very small", then I have two problems with it:</p>
<p>Firstly, well it's not instantaneous at all in the sense of "at a single moment".</p>
<p>Secondly, how is "very very very small" conceptually different from "small"? What's really the difference between considering $1$ second and $10^{-200}$ of a second?</p>
<p>I've heard some people talk about "infinitely small" quantities. This doesn't make any sense to me. In this case, what's the process by which a number goes from "not infinitely small" to "ok, now you're infinitely small"?
Where's the dividing line in degree of smallness beyond which a number is infinitely small?</p>
<p>I understand $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ as the limit of an infinite sequence of ratios, I have no problem with that.</p>
<p>But I thought the point of a limit and infinity in general, is that you never <em>get</em> there.
For example, when people say "the sum of an infinite geometric series", they really mean "the limit", since you can't possibly add infinitely many terms in the arithmetic sense of the word.</p>
<p>So again in this case, since you never get to the limit, $h$ is always some interval, and therefore the rate is not "instantaneous".
Same problem with integrals actually; how do you add up infinitely many terms? Saying you can add up an infinity of terms implies that infinity is a fixed number.</p>
| <p>In math, there's intuition and there's rigor. Saying
<span class="math-container">$$f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}h$$</span>
is a rigorous statement. It's very formal. Saying "the derivative is the instantaneous rate of change" is intuitive. It has no formal meaning whatsoever. Many people find it helpful for informing their gut feelings about derivatives.</p>
<p>(<strong>Edit</strong> I should not understate the importance of gut feelings. You'll need to trust your gut if you ever want to prove hard things.)</p>
<p>That being said, there's no reason why you should find it helpful. If it's too fluffy to be useful for you that's fine. But you'll need some intuition on what derivatives are supposed to be describing. I like to think of it as "if I squinted my eyes so hard that <span class="math-container">$f$</span> became linear near some point, then <span class="math-container">$f$</span> would look like <span class="math-container">$f'$</span> near that point." Find something that works for you.</p>
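<p>One way to see the formal definition at work is to watch the difference quotient settle down as $h$ shrinks; a Python sketch with the arbitrary example $f(x)=x^2$ at $x=3$, where the derivative is $6$:</p>

```python
# For f(x) = x^2, the difference quotient (f(x+h) - f(x))/h equals
# 2x + h exactly, so the error is exactly h; each halving of h brings
# the quotient closer to the derivative 6.

def f(x):
    return x * x

x = 3.0
h = 1.0
quotients = []
for _ in range(20):
    quotients.append((f(x + h) - f(x)) / h)
    h /= 2

assert abs(quotients[-1] - 6.0) < 1e-4    # limit is 6, never reached exactly
```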
| <p>The idea behind $$\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$ is the slope of the graph $y=f(x)$.</p>
<p>Now for a moment forget about instantaneous velocity and think about average velocity. What is average velocity? It is $$\frac{x_f-x_i}{t_f-t_i}.$$ If you look at this carefully, it is exactly a slope, but taken over a given interval of time.</p>
<p>So you might ask: what is the difference between instantaneous velocity and average velocity, since both of them seem to involve intervals?</p>
<p>If the graph were linear, there would be no difference, and the velocity would be easy to read off. But in general the particle (or object) changes its velocity at every moment of time, so no single interval gives the velocity at a point. To remove this element of doubt, we take the limit, shrinking the time interval around the moment in question, and call the resulting value the instantaneous velocity.</p>
|
logic | <p>What is the difference between a "proof by contradiction" and "proving the contrapositive"? Intuitive, it feels like doing the exact same thing. And when I compare an exercise, one person proves by contradiction, and the other proves the contrapositive, the proofs look almost exactly the same.</p>
<p>For example, say I want to prove: $P \implies Q$
When I want to prove by contradiction, I would say assume this is not true.
Assume $Q$ is not true, and $P$ is true. Blabla, but this implies $P$ is not true, which is a contradiction.</p>
<p>When I want to prove the contrapositive, I say: assume $Q$ is not true. Blabla, this implies $P$ is not true.</p>
<p>The only difference in the proof is that I assume $P$ is true in the beginning, when I want to prove by contradiction. But this feels almost redundant, as in the end I always get that this is not true. The only other way that I could get a contradiction is by proving that $Q$ is true. But this would be the exact same thing as a direct proof. </p>
<p>Can somebody enlighten me a little bit here ? For example: Are there proofs that can be proven by contradiction but not proven by proving the contrapositve?</p>
| <p>To prove $P \rightarrow Q$, you can do the following:</p>
<ol>
<li>Prove directly, that is assume $P$ and show $Q$;</li>
<li>Prove by contradiction, that is assume $P$ and $\lnot Q$ and derive a contradiction; or</li>
<li>Prove the contrapositive, that is assume $\lnot Q$ and show $\lnot P$.</li>
</ol>
<p>Sometimes the contradiction one arrives at in $(2)$ is merely contradicting the assumed premise $P$, and hence, as you note, is essentially a proof by contrapositive $(3)$. However, note that $(3)$ allows us to assume <em>only</em> $\lnot Q$; if we can then derive $\lnot P$, we have a <em>clean</em> proof by contrapositive.</p>
<p>However, in $(2)$, the aim is to derive a <em>contradiction</em>: the contradiction might <em>not</em> be arriving at $\lnot P$, if one has assumed ($P$ <em>and</em> $\lnot Q$). Arriving at <em>any contradiction</em> counts in a proof by contradiction: say we assume $P$ <em>and</em> $\lnot Q$ and derive, say, $Q$. Since $Q \land \lnot Q$ is a contradiction (can never be true), we are then forced to conclude that <em>it <strong>cannot</strong> be that <strong>both</strong></em> $P$ and $\lnot Q$ hold, i.e. $\lnot(P \land \lnot Q)$. </p>
<p>But note that $\lnot (P \land \lnot Q) \equiv \lnot P \lor Q\equiv P\rightarrow Q.$</p>
<p>So a proof by contradiction usually looks something like this ($R$ is often $Q$, or $\lnot P$ or any other contradiction):</p>
<ul>
<li>$P \land \lnot Q$ Premise
<ul>
<li>$P$</li>
<li>$\lnot Q$</li>
<li>$\vdots$</li>
<li>$R$</li>
<li>$\vdots$</li>
<li>$\lnot R$</li>
<li>$\lnot R \land R$ Contradiction</li>
</ul></li>
</ul>
<p>$\therefore \lnot (P \land \lnot Q) \equiv P \rightarrow Q$</p>
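<p>The final equivalence can be checked exhaustively; a small Python truth-table sketch:</p>

```python
# Exhaustive check that not(P and not Q), (not P) or Q, and the
# material implication P -> Q are the same truth function.
from itertools import product

for P, Q in product([False, True], repeat=2):
    implies = (not P) or Q          # standard encoding of P -> Q
    assert (not (P and not Q)) == implies
```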
<hr>
| <p>There is a useful rule of thumb, when you have a proof by contradiction, to see whether it is "really" a proof by contrapositive.</p>
<p>In a proof by contrapositive, you prove $P \to Q$ by assuming $\lnot Q$ and reasoning until you obtain $\lnot P$.</p>
<p>In a "genuine" proof by contradiction, you assume <em>both</em> $P$ and $\lnot Q$, and deduce some other contradiction $R \land \lnot R$.</p>
<p>So, at the end of your proof, ask yourself: Is the "contradiction" just that I have deduced $\lnot P$, when the implication was $P \to Q$? Did I never use $P$ as an assumption? If both answers are "yes" then your proof is a proof by contraposition, and you can rephrase it in that way. </p>
<p>For example, here is a proof by "contradiction":</p>
<blockquote>
<p>Proposition: Assume $A \subseteq B$. If $x \not \in B$ then $x \not \in A$.</p>
<p>Proof. We proceed by contradiction. Assume $x \not \in B$ and $x \in A$. Then, since $A \subseteq B$, we have $x \in B$. This is a contradiction, so the proof is complete.</p>
</blockquote>
<p>That proof can be directly rephrased into a proof by contrapositive:</p>
<blockquote>
<p>Proposition: Assume $A \subseteq B$. If $x \not \in B$ then $x \not \in A$.</p>
<p>Proof. We proceed by contraposition. Assume $x \in A$. Then, since $A \subseteq B$, we have $x \in B$. This is what we wanted to prove, so the proof is complete.</p>
</blockquote>
<p>Proof by contradiction can be applied to a much broader class of statements than proof by contraposition, which only works for implications. But there are proofs of implications by contradiction that cannot be directly rephrased into proofs by contraposition.</p>
<blockquote>
<p>Proposition: If $x$ is a multiple of $6$ then $x$ is a multiple of $2$.</p>
<p>Proof. We proceed by contradiction. Let $x$ be a number that is a multiple of $6$ but not a multiple of $2$. Then $x = 6y$ for some $y$. We can rewrite this equation as $1\cdot x = 2\cdot (3y)$. Because the right hand side is a multiple of $2$, so is the left hand side. Then, because $2$ is prime, and $1\cdot x $ is a multiple of $2$, either $x$ is a multiple of $2$ or $1$ is a multiple of $2$. Since we have assumed that $x$ is not a multiple of $2$, we see that $1$ must be a multiple of $2$. But that is impossible: we know $1$ is not a multiple of $2$. So we have a contradiction: $1$ is a multiple of $2$ and $1$ is not a multiple of $2$. The proof is complete.</p>
</blockquote>
<p>Of course that proposition can be proved directly as well: the point is just that the proof given is genuinely a proof by contradiction, rather than a proof by contraposition. The key benefit of proof by contradiction is that you can stop when you find <em>any</em> contradiction, not only a contradiction directly involving the hypotheses.</p>
|
probability | <p>If nine coins are tossed, what is the probability that the number of heads is even?</p>
<p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p>
<p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p>
<p><span class="math-container">$n = 9, k = 0$</span></p>
<p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p>
<p><span class="math-container">$n = 9, k = 2$</span></p>
<p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p>
<p><span class="math-container">$n = 9, k = 4$</span>
<span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p>
<p><span class="math-container">$n = 9, k = 6$</span></p>
<p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p>
<p><span class="math-container">$n = 9, k = 8$</span></p>
<p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p>
<p>Add all of these up: </p>
<p><span class="math-container">$$=.64$$</span> so there's a 64% probability?</p>
| <p>The probability is <span class="math-container">$\frac{1}{2}$</span> because the last flip determines it: whatever the parity of the heads among the first eight flips, the ninth flip makes the total even with probability <span class="math-container">$\frac{1}{2}$</span>.</p>
| <p>If there are an even number of heads then there must be an odd number of tails. But heads and tails are symmetrical, so the probability must be <span class="math-container">$1/2$</span>.</p>
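<p>Both arguments are easy to confirm exactly: the five binomial terms from the question sum to <span class="math-container">$256/512 = 1/2$</span> (a quick Python check):</p>

```python
from math import comb

# P(even number of heads in 9 fair flips): sum the k = 0, 2, 4, 6, 8 terms.
# C(9,0) + C(9,2) + C(9,4) + C(9,6) + C(9,8) = 256, out of 2^9 = 512 outcomes.
p_even = sum(comb(9, k) for k in range(0, 9, 2)) / 2**9
```

The value is exactly <span class="math-container">$0.5$</span>, which shows the <span class="math-container">$.64$</span> in the question is an arithmetic slip rather than a setup error.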
|
linear-algebra | <p>I've learned that the dot product is just one of many possible <strong>inner product spaces</strong>. Can someone explain this concept? When is it useful to define it as something other than the <strong>dot product</strong>?</p>
| <p>As for the utility of inner product spaces: They're vector spaces where notions like the length of a vector and the angle between two vectors are available. In this way, they generalize $\mathbb R^n$ but preserve some of its additional structure that comes on top of it being a vector space. Familiar friends like Cauchy-Schwarz, the parallelogram rule, and orthogonality all work in inner product spaces.</p>
<p>(Note that there is a more general class of spaces, normed spaces, where notions of length make sense always, but an inner product cannot necessarily be defined.)</p>
<p>The dot product is the standard inner product on $\mathbb R^n$. In general, any symmetric, positive definite matrix will give you an inner product on $\mathbb R^n$ (and any Hermitian positive definite matrix gives one on $\mathbb C^n$). And you can have inner products on infinite dimensional vector spaces, like </p>
<p>$$ \langle \, f, \, g \, \rangle = \int_a^b \ f(x)\overline{g(x)} \, dx$$</p>
<p>for $f, g$ square-integrable functions on $[a,b]$.</p>
<p>This becomes useful, for example, in applications like Fourier series where you want a basis of orthonormal functions for some function space (it's not just the trigonometric functions that work).</p>
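<p>To make the matrix case concrete, here is a small sketch (Python; the matrix and vectors are just illustrative) of a non-standard inner product on $\mathbb R^3$ induced by a symmetric positive definite matrix. Cauchy-Schwarz holds for it exactly as it does for the dot product:</p>

```python
from math import sqrt

# A symmetric positive-definite matrix M induces the inner product <u, v> = u^T M v.
M = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]

def inner(u, v):
    return sum(u[i] * M[i][j] * v[j] for i in range(3) for j in range(3))

u, v = [1.0, 0.0, 2.0], [0.0, 1.0, -1.0]

norm_u = sqrt(inner(u, u))   # lengths make sense for any inner product ...
norm_v = sqrt(inner(v, v))
cs_holds = abs(inner(u, v)) <= norm_u * norm_v   # ... and so does Cauchy-Schwarz
```

With the identity matrix in place of <code>M</code> this reduces to the ordinary dot product.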
| <p>An inner product space is a vector space for which the inner product is defined. The inner product is also known as the 'dot product' for 2D or 3D Euclidean space. An arbitrary number of inner products can be defined according to <a href="http://en.wikipedia.org/wiki/Inner_product_space#Definition">three rules</a>, though most are a lot less intuitive/practical than the Euclidean (dot) product.</p>
<p>Side note:</p>
<p>It may seem slightly esoteric, but as a physicist the obvious application of inner product spaces is the Hilbert spaces used in quantum mechanics. The inner product of an eigenfunction with a wavefunction in Hilbert space gives the probability amplitude for measuring the corresponding eigenvalue.</p>
|
probability | <p>If you would put a rabbit randomly on a circular table with radius <span class="math-container">$r= 1$</span> meter and it moves <span class="math-container">$1$</span> meter in a random direction, what is the chance it won't fall off?</p>
<p>I tried to do this using integrals, but then I noticed you need a double integral or something and since I'm in the 5th form I don't know how that works.</p>
<p><a href="https://i.sstatic.net/dvzvE.png" rel="noreferrer"><img src="https://i.sstatic.net/dvzvE.png" alt="enter image description here" /></a></p>
| <p>If the rabbit is placed at distance $x>0$ from the center, and moves straight ahead in a uniformly distributed direction, his chances of not falling off are
$$p(x)={2\arccos{x\over2}\over 2\pi}\ .$$
Assuming that his starting point is uniformly distributed with respect to area, the probability $P$ that he will not fall off is therefore given by
$$P={1\over\pi}\int_0^1 p(x)\cdot 2\pi x\>dx={2\over\pi}\int_0^1\arccos{x\over2}\>x\>dx\ .$$
Substituting $x:=2\cos t$ and integrating by parts then produces
$$P={2\over3}-{\sqrt{3}\over 2\pi}\doteq0.391\ .$$</p>
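<p>A Monte Carlo check of this value (a Python sketch): since the table is convex, the rabbit stays on exactly when the endpoint of its 1-metre move does.</p>

```python
import math
import random

def rabbit_survival_prob(trials=300_000, seed=3):
    """Drop the rabbit uniformly on the unit disk, move 1 m in a uniformly
    random direction, and count how often the endpoint is still on the table."""
    rng = random.Random(seed)
    safe = 0
    for _ in range(trials):
        while True:  # uniform point in the unit disk via rejection sampling
            x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
            if x * x + y * y <= 1.0:
                break
        t = rng.uniform(0.0, 2.0 * math.pi)
        if (x + math.cos(t)) ** 2 + (y + math.sin(t)) ** 2 <= 1.0:
            safe += 1
    return safe / trials
```

The estimate lands close to the exact answer $2/3-\sqrt{3}/(2\pi)\approx 0.391$.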
| <p>Hint: this problem is angularly symmetric. Therefore, only the starting distance from the center matters.</p>
<p>Consider a disk of radius $1$ centered at the origin. For a rabbit at $(x,0)$, for which angles does it fall off the table (do a little trig)? Now, use the symmetry and integrate.</p>
<p>To avoid the double integral, you can integrate along the positive radius and multiply each value you get by $2\pi r$ (the circumference of the circle with that radius). (This $2\pi r$ also comes out of the double integral because it's angularly symmetric.)</p>
|
matrices | <p>Let <span class="math-container">$A$</span> be a positive-definite real matrix in the sense that <span class="math-container">$x^T A x > 0$</span> for every nonzero real vector <span class="math-container">$x$</span>. I don't require <span class="math-container">$A$</span> to be symmetric.</p>
<p>Does it follow that <span class="math-container">$\mathrm{det}(A) > 0$</span>?</p>
| <p>Here is en eigenvalue-less proof that if <span class="math-container">$x^T A x > 0$</span> for each nonzero real vector <span class="math-container">$x$</span>, then <span class="math-container">$\det A > 0$</span>.</p>
<p>Consider the function <span class="math-container">$f(t) = \det \left(t \cdot I + (1-t) \cdot A\right)$</span> defined on the segment <span class="math-container">$[0, 1]$</span>. Clearly, <span class="math-container">$f(0) = \det A$</span> and <span class="math-container">$f(1) = 1$</span>. Note that <span class="math-container">$f$</span> is continuous. If we manage to prove that <span class="math-container">$f(t) \neq 0$</span> for every <span class="math-container">$t \in [0, 1]$</span>, then it will imply that <span class="math-container">$f(0)$</span> and <span class="math-container">$f(1)$</span> have the same sign (by the intermediate value theorem), and the proof will be complete.</p>
<p>So, it remains to show that <span class="math-container">$f(t) \neq 0$</span> whenever <span class="math-container">$t \in [0, 1]$</span>. But this is easy. If <span class="math-container">$t \in [0, 1]$</span> and <span class="math-container">$x$</span> is a nonzero real vector, then
<span class="math-container">$$
x^T (tI + (1-t)A) x = t \cdot x^T x + (1-t) \cdot x^T A x > 0,
$$</span>
which implies that <span class="math-container">$tI + (1-t)A$</span> is not singular, which means that its determinant is nonzero, hence <span class="math-container">$f(t) \neq 0$</span>. Done.</p>
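<p>The path argument is easy to check numerically. A sketch (Python), using the non-symmetric positive-definite example <span class="math-container">$A$</span> with rows <span class="math-container">$(1, 1)$</span> and <span class="math-container">$(-1, 1)$</span>, for which <span class="math-container">$x^TAx = x_1^2 + x_2^2 > 0$</span>:</p>

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Non-symmetric A with x^T A x = x1^2 + x2^2 > 0 for every nonzero real x.
A = [[1.0, 1.0],
     [-1.0, 1.0]]
I = [[1.0, 0.0],
     [0.0, 1.0]]

def f(t):
    """det(t*I + (1-t)*A): the determinant along the path from A (t=0) to I (t=1)."""
    m = [[t * I[i][j] + (1.0 - t) * A[i][j] for j in range(2)] for i in range(2)]
    return det2(m)

# Sample the path: f never vanishes, so det A and det I = 1 share a sign.
path_dets = [f(k / 100.0) for k in range(101)]
```

Here <span class="math-container">$f(t) = 1 + (1-t)^2$</span> in closed form, so the path of determinants stays strictly positive, as the proof predicts.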
<p><em>PS:</em> The proof is essentially topological. We have shown that there is a path from <span class="math-container">$A$</span> to <span class="math-container">$I$</span> in the space of all invertible matrices, which implies that <span class="math-container">$\det A$</span> and <span class="math-container">$\det I$</span> can be connected by a path in <span class="math-container">$\mathbb{R} \setminus 0$</span>, which means that <span class="math-container">$\det A > 0$</span>. One could use the same techniqe to prove other similar facts. For instance, this comes to mind: if <span class="math-container">$S^2 = \{(x, y, z) \mid x^2 + y^2 + z^2 = 1\}$</span> is the unit sphere, and <span class="math-container">$f: S^2 \to S^2$</span> is a continuous map such that <span class="math-container">$(v, f(v)) > 0$</span> for every <span class="math-container">$v \in S^2$</span>, then <span class="math-container">$f$</span> has <a href="http://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping#From_Sn_to_Sn" rel="noreferrer">degree</a> <span class="math-container">$1$</span>.</p>
| <p>Let $A\in\mathbb{R}^{n\times n}$ such that $x^TAx\geq 0$ for any $x\in\mathbb{R}^n$. </p>
<ul>
<li><p><em>The real eigenvalues of $A$ are non-negative.</em><br>
This follows simply from the fact that to a real eigenvalue $\lambda$ of $A$, you can choose a real eigenvector $x\neq 0$. Hence, $0\leq x^TAx=\lambda x^Tx$ implies that $\lambda\geq 0$.</p></li>
<li><p><em>The complex eigenvalues of $A$ have non-negative real parts.</em><br>
Assume that $\lambda\in\mathbb{C}$, $\lambda=\xi+i\eta$, $\xi,\eta\in\mathbb{R}$ is an eigenvalue of $A$ with an associated eigenvector $x=u+iv$, $u,v\in\mathbb{R}^n$. From $Ax=\lambda x$, we obtain by equating real and imaginary part of the equality the following relations:
$$
Au=\xi u-\eta v, \quad Av=\eta u+\xi v.
$$
Premultiplying the first by $u^T$ and the second by $v^T$ gives
$$
0\leq u^TAu=\xi u^Tu-\eta u^Tv, \quad 0\leq v^TAv =\eta v^Tu+\xi v^Tv.
$$
Since $u^Tv=v^Tu$, we get by summing the two together that
$$
0\leq u^TAu+v^TAv=\xi(u^Tu+v^Tv).
$$
As before, this implies that $\xi\geq 0$ and hence $\mathrm{Re}(\lambda)\geq 0$.</p></li>
</ul>
<p>To summarize, we have the following statement (including the case with a strict inequality, which follows easily from the proofs above):</p>
<blockquote>
<p>Let $A\in\mathbb{R}^{n\times n}$ such that $x^TAx\geq 0$ for all $x\in\mathbb{R}^{n}$ ($x^TAx>0$ for all non-zero $x\in\mathbb{R}^n$). Then the eigenvalues of $A$ have non-negative (positive) real parts.</p>
</blockquote>
<p>Now since the determinant of $A$ is the product of the eigenvalues of $A$, it is:</p>
<ul>
<li>non-negative if $x^TAx\geq 0$ for all $x\in\mathbb{R}^{n}$,</li>
<li>positive if $x^TAx>0$ for all non-zero $x\in\mathbb{R}^{n}$.</li>
</ul>
<p><strong>SLIGHTLY LONGER NOTE:</strong> </p>
<p>When determining the sign of the determinant, we do not need to care much about the complex eigenvalues and avoid thus the second item above about the real parts of the complex spectrum. Assume that $\lambda_1,\ldots,\lambda_k\in\mathbb{R}$ and $\lambda_{k+1},\bar{\lambda}_{k+1},\ldots,\lambda_{p},\bar{\lambda}_{p}\in\mathbb{C}\setminus\mathbb{R}$ are the $n$ eigenvalues of $A$ (such that $k+2(p-k)=n$). The determinant of $A$ is given by
$$\tag{1}
\det(A)=\left(\prod_{i=1}^k\lambda_i\right)\left(\prod_{i=k+1}^p\lambda_i\bar{\lambda_i}\right)=\left(\prod_{i=1}^k\lambda_i\right)\underbrace{\left(\prod_{i=k+1}^p|\lambda_i|^2\right)}_{\geq 0}
$$
The determinant is hence equal to the product of the real eigenvalues times something non-negative. </p>
<p>Hence for the case $x^TAx\geq 0$ for all real $x$, one just needs to show that the <em>real</em> eigenvalues of $A$ are non-negative (the first item above) in order to arrive at the conclusion that $\det(A)\geq 0$. </p>
<p>For the case $x^TAx>0$ for all nonzero real $x$, the analog of the first item above shows that the real eigenvalues are positive and we just need to show that the non-negative term in (1) is actually positive. This can be done simply by showing that $A$ is non-singular, which implies that there is no zero eigenvalue (real or complex). Hence assume that $x^TAx>0$ for all nonzero real $x$ and $A$ is singular. Therefore, there exists $y\in\mathbb{R}^n$, $y\neq 0$, such that $Ay=0$. But $0<y^TAy=y^T0=0$ (contradiction). Consequently, $\det(A)>0$.</p>
|
number-theory | <p>I am searching for the least $n$ such that </p>
<p>$$38^n+31$$ </p>
<p>is prime. </p>
<p>I checked all $n$ up to $3000$ and found none, so the least prime of that form must have more than $4000$ digits. I am content with a probable prime; it need not be a proven prime.</p>
| <p>This is not a proof, but does not conveniently fit into a comment.</p>
<p>I'll take into account that $n=4k$ is required; otherwise $38^n+31$ is divisible by $3$ or $5$, as pointed out in the comments.</p>
<p>Now, if we treat the primes as "pseudorandom" in the sense that any large number $n$ has a likelihood $1/\ln(n)$ of being prime (which is the prime number density for large $n$), the expected number of primes for $n=4,8,\ldots,4N$ will increase with $N$ as
$$
\sum_{k=1}^N\frac{1}{\ln(38^{4k}+31)}
\approx\frac{\ln N+\gamma}{4\ln 38}
\text{ where }\gamma=0.57721566\ldots
$$
and for the expected number of primes to exceed 1, you'll need $N$ on the order of 1,200,000.</p>
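<p>Both ingredients of this estimate are easy to check computationally. A sketch (Python; function names are just for illustration): the first part confirms that every $n\not\equiv 0 \pmod 4$ yields a factor of $3$ or $5$, and the second compares the heuristic sum against its closed-form approximation for a moderate $N$:</p>

```python
import math

# n odd  =>  3 | 38^n + 31;  n = 2 (mod 4)  =>  5 | 38^n + 31.
# So only n = 4k can give a prime, matching the comments.
def trivially_composite(n):
    return (pow(38, n, 3) + 31) % 3 == 0 or (pow(38, n, 5) + 31) % 5 == 0

restriction_ok = all(trivially_composite(n) == (n % 4 != 0) for n in range(1, 201))

# Heuristic expected number of primes among n = 4, 8, ..., 4N,
# versus the (ln N + gamma) / (4 ln 38) approximation.
N = 200
gamma = 0.57721566
s = sum(1.0 / math.log(38 ** (4 * k) + 31) for k in range(1, N + 1))
approx = (math.log(N) + gamma) / (4.0 * math.log(38))
```

The two quantities agree to within the error of the harmonic-sum approximation, supporting the 1,200,000 figure.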
<p>Of course, you could get lucky and find it at much lower $n$, but a priori I don't see any particular reason why it should be...or shouldn't.</p>
<p>Basically, in general for numbers $a^n+b$, the first prime will usually come fairly early, otherwise often very late (or not at all if $a$ and $b$ have a common factor).</p>
<p>Of course, this argument depends on assuming "pseudorandom" behaviour of the primes, and so cannot be turned into a formal proof. However, it might perhaps be possible to say something about the distribution of $n$ values giving the first prime over different pairs $(a,b)$.</p>
| <p>Primality of numbers of the form $a^n+b$ is a very hard problem in general. For instance, existence of primes of the form $4294967296^n+1=(2^{32})^n+1$ is an old open problem in number theory (<a href="https://en.wikipedia.org/wiki/Fermat_number#Primality_of_Fermat_numbers">wiki</a>), although it is also easy to show that this can be a prime only for $n$ of a special form (powers of $2$). Your problem $2085136^n+31=(38^4)^n+31$ does not seem much easier.</p>
<p>In other words, a theory-based answer to your problem is very unlikely in the near future. For a practice-based answer you will probably need to use some distributed computing project for searching for prime numbers like PrimeGrid, which has found most of the known large primes of the form $ab^n+c$.</p>
|
logic | <p>I am familiar with the mechanism of proof by contradiction: we want to prove $P$, so we assume $¬P$ and prove that this is false; hence $P$ must be true.</p>
<p>I have the following devil's advocate question, which might seem to be more philosophy than mathematics, but I would prefer answers from a mathematician's point of view:</p>
<p>When we prove that $¬P$ is "false", what we are really showing is that it is inconsistent with our underlying set of axioms. Could there ever be a case were, for some $P$ and some set of axioms, $P$ and $¬P$ are <em>both</em> inconsistent with those axioms (or both consistent, for that matter)?</p>
| <p>The situation you ask about, where $P$ is inconsistent with our axioms and $\neg P$ is also inconsistent with our axioms, would mean that the axioms themselves are inconsistent. Specifically, the inconsistency of $P$ with the axioms would mean that $\neg P$ is provable from those axioms. If, in addition, $\neg P$ is inconsistent with the axioms, then the axioms themselves are inconsistent --- they imply $\neg P$ and then they contradict that. (I have phrased this answer so that it remains correct even if the underlying logic of the axiom system is intuitionistic rather than classical.)</p>
| <p>It is possible for both $P$ and $ \neg P $ to be consistent with a set of axioms. If this is the case, then $P$ is called <em>independent</em>. There are a few things known to be independent, such as the Continuum Hypothesis being independent of ZFC.</p>
<p>It is also possible for both $P$ and $ \neg P $ to be inconsistent with a set of axioms. In this case the axioms are considered inconsistent. Inconsistent axioms result in systems which don't work in a way that is useful for engaging in mathematics.</p>
<p>Proof by contradiction depends on the law of the excluded middle. Constructivist mathematics, which uses intuitionistic logic, rejects the use of the law of the excluded middle, and this results in a different type of mathematics. However, this doesn't protect them from the problems resulting from inconsistent axioms.</p>
<p>There are logical systems called <em>paraconsistent logic</em> which can withstand inconsistent axioms. However, they are more difficult to work with than standard logic and are not as widely studied.</p>
|
probability | <p>I was doing some software engineering and wanted to have a thread do something in the background to basically just waste CPU time for a certain test.</p>
<p>While I could have done something really boring like <code>for(i < 10000000) { j = 2 * i }</code>, I ended up having the program start with <span class="math-container">$1$</span>, and then for a million steps choose a random real number <span class="math-container">$r$</span> in the interval <span class="math-container">$[0,R]$</span> (uniformly distributed) and multiply the result by <span class="math-container">$r$</span> at each step.</p>
<ul>
<li>When <span class="math-container">$R = 2$</span>, it converged to <span class="math-container">$0$</span>.</li>
<li>When <span class="math-container">$R = 3$</span>, it exploded to infinity. </li>
</ul>
<p>So of course, the question anyone with a modicum of curiosity would ask: for what <span class="math-container">$R$</span> do we have the transition. And then, I tried the first number between <span class="math-container">$2$</span> and <span class="math-container">$3$</span> that we would all think of, Euler's number <span class="math-container">$e$</span>, and sure enough, this conjecture was right. Would love to see a proof of this. </p>
<p>Now when I should be working, I'm instead wondering about the behavior of this script. </p>
<p>Ironically, rather than wasting my CPUs time, I'm wasting my own time. But it's a beautiful phenomenon. I don't regret it. <span class="math-container">$\ddot\smile$</span></p>
| <p><strong>EDIT:</strong> I saw that you solved it yourself. Congrats! I'm posting this anyway because I was most of the way through typing it when your answer hit. </p>
<p>Infinite products are hard, in general; infinite sums are better, because we have lots of tools at our disposal for handling them. Fortunately, we can always turn a product into a sum via a logarithm.</p>
<p>Let <span class="math-container">$X_i \sim \operatorname{Uniform}(0, r)$</span>, and let <span class="math-container">$Y_n = \prod_{i=1}^{n} X_i$</span>. Note that <span class="math-container">$\log(Y_n) = \sum_{i=1}^n \log(X_i)$</span>. The eventual emergence of <span class="math-container">$e$</span> as important is already somewhat clear, even though we haven't really done anything yet.</p>
<p>The more useful formulation here is that <span class="math-container">$\frac{\log(Y_n)}{n} = \frac 1 n \sum \log(X_i)$</span>, because we know from the Strong Law of Large Numbers that the right side converges almost surely to <span class="math-container">$\mathbb E[\log(X_i)]$</span>. We have
<span class="math-container">$$\mathbb E \log(X_i) = \int_0^r \log(x) \cdot \frac 1 r \, \textrm d x = \frac 1 r [x \log(x) - x] \bigg|_0^r = \log(r) - 1.$$</span></p>
<p>If <span class="math-container">$r < e$</span>, then <span class="math-container">$\log(Y_n) / n \to c < 0$</span>, which implies that <span class="math-container">$\log(Y_n) \to -\infty$</span>, hence <span class="math-container">$Y_n \to 0$</span>. Similarly, if <span class="math-container">$r > e$</span>, then <span class="math-container">$\log(Y_n) / n \to c > 0$</span>, whence <span class="math-container">$Y_n \to \infty$</span>. The fun case is: what happens when <span class="math-container">$r = e$</span>?</p>
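<p>A quick empirical check (a Python sketch): by the strong law, <span class="math-container">$\log(Y_n)/n$</span> should settle near <span class="math-container">$\log(r) - 1$</span>, which is negative for <span class="math-container">$r < e$</span>, zero at <span class="math-container">$r = e$</span>, and positive for <span class="math-container">$r > e$</span>.</p>

```python
import math
import random

def mean_log_growth(r, n=200_000, seed=0):
    """Estimate E[log X] for X ~ Uniform(0, r), the a.s. limit of log(Y_n)/n."""
    rng = random.Random(seed)
    # r * (1 - random()) lies in (0, r], which avoids log(0).
    return sum(math.log(r * (1.0 - rng.random())) for _ in range(n)) / n
```

For <span class="math-container">$r = 2$</span> the estimate sits near <span class="math-container">$\log 2 - 1 \approx -0.31$</span>, for <span class="math-container">$r = 3$</span> near <span class="math-container">$\log 3 - 1 \approx 0.10$</span>, and for <span class="math-container">$r = e$</span> it hovers around zero.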
| <p>I found the answer! One starts with the uniform distribution on <span class="math-container">$ [0,R] $</span>. The natural logarithm pushes this distribution forward to a distribution on <span class="math-container">$ (-\infty, \ln(R) ] $</span> with density function given by <span class="math-container">$ p(y) = e^y / R, y \in (-\infty, \ln(R)] $</span>. The expected value of this distribution is <span class="math-container">$$ \int_{-\infty}^{\ln(R)}\frac{y e^y}{R} \,\mathrm dy = \ln(R) - 1 .$$</span> Setting <span class="math-container">$ \ln(R) - 1 = 0 $</span> gives <span class="math-container">$ R = e $</span>, the answer to the riddle! Love it! </p>
|
logic | <p>We have the well-known Peano axioms for the natural numbers and the real numbers can be characterized by demanding them to be a Dedekind-complete, totally ordered field (or some variation of this).</p>
<p>But I never saw any axiomatic characterization of the rational numbers! Either they are constructed out of the natural numbers or are found as a subset of the reals.</p>
<p>I know that the rational numbers are unique in the sense that they are the <em>smallest</em> totally ordered field. But it is somewhat unsatisfactory for me to define them to be a totally ordered field which has an order-preserving embedding into any other totally ordered field. This would be like defining the real numbers to be an Archimedean totally ordered field such that every other Archimedean totally ordered field can be order-preservingly embedded into it - this is some ugly definition (for me) and I find the usual one much better.</p>
<blockquote>
<p>So what is a <em>nice</em> axiomatic characterization of the rational numbers?</p>
</blockquote>
| <p><strong>Preamble</strong></p>
<p>I believe that the OP is seeking a characterization of $ \mathbb{Q} $ using only the first-order language of fields, $ \mathcal{L}_{\text{Field}} $. Restricting ourselves to this language, we can try to uncover new axioms, in addition to the usual field axioms (i.e., those that relate to the associativity and commutativity of addition and multiplication, the distributivity of multiplication over addition, the behavior of the zero and identity elements, and the existence of a multiplicative inverse for each non-zero element), that describe $ \mathbb{Q} $ uniquely.</p>
<p>Any attempt to describe the <em>smallest field</em> satisfying a given property must prescribe a method of comparing one field with another (namely using field homomorphisms, which are injective if not trivial), but such a method clearly cannot be formalized using $ \mathcal{L}_{\text{Field}} $.</p>
<hr>
<p><strong>1. There Exists No First-Order Characterization of $ \mathbb{Q} $</strong></p>
<p>The answer is ‘no’, if one is seeking a first-order characterization of $ \mathbb{Q} $. This follows from the Upward Löwenheim-Skolem Theorem, which is a classical tool in logic and model theory.</p>
<p>Observe that $ \mathbb{Q} $ is an infinite $ \mathcal{L}_{\text{Field}} $-structure of cardinality $ \aleph_{0} $. The Upward Löwenheim-Skolem Theorem then says that there exists an $ \mathcal{L}_{\text{Field}} $-structure (i.e., a field) $ \mathbb{F} $ of cardinality $ \aleph_{1} $ that is an elementary extension of $ \mathbb{Q} $. By definition, this means that $ \mathbb{Q} $ and $ \mathbb{F} $ satisfy the same set of $ \mathcal{L}_{\text{Field}} $-sentences, so we cannot use first-order logic to distinguish $ \mathbb{Q} $ and $ \mathbb{F} $. In other words, as far as first-order logic can tell, these two fields are identical (an analogy may be found in point-set topology, where two distinct points of a non-$ T_{0} $ topological space can be topologically indistinguishable). However, $ \mathbb{Q} $ and $ \mathbb{F} $ have different cardinalities, so they are not isomorphic. This phenomenon is ultimately due to the fact that the notion of <em>cardinality</em> cannot be formalized using $ \mathcal{L}_{\text{Field}} $. Therefore, any difference between the two fields can only be seen externally, outside of first-order logic.</p>
<hr>
<p><strong>2. Finding a Second-Order Characterization of $ \mathbb{Q} $</strong></p>
<p>This part is inspired by lhf's answer below, which I believe deserves more credit. We start by formalizing the notion of <em>proper subfield</em> using second-order logic.</p>
<p>Let $ P $ be a variable for unary predicates. Consider the following six formulas:
\begin{align}
\Phi^{P}_{1} &\stackrel{\text{def}}{\equiv} (\exists x) \neg P(x); \\
\Phi^{P}_{2} &\stackrel{\text{def}}{\equiv} P(0); \\
\Phi^{P}_{3} &\stackrel{\text{def}}{\equiv} P(1); \\
\Phi^{P}_{4} &\stackrel{\text{def}}{\equiv} (\forall x)(\forall y)((P(x) \land P(y)) \rightarrow P(x + y)); \\
\Phi^{P}_{5} &\stackrel{\text{def}}{\equiv} (\forall x)(\forall y)((P(x) \land P(y)) \rightarrow P(x \cdot y)); \\
\Phi^{P}_{6} &\stackrel{\text{def}}{\equiv} (\forall x)((P(x) \land \neg (x = 0)) \rightarrow (\exists y)(P(y) \land (x \cdot y = 1))).
\end{align}
What $ \Phi^{P}_{1},\ldots,\Phi^{P}_{6} $ are saying is that the set of all elements of the domain of discourse that satisfy the predicate $ P $ forms a proper subfield of the domain. The domain itself will be a field if we impose upon it the first-order field axioms. Hence,
$$
\{ \text{First-order field axioms} \} \cup \{ \text{First-order axioms defining characteristic $ 0 $} \} \cup \{ \neg (\exists P)(\Phi^{P}_{1} ~ \land ~ \Phi^{P}_{2} ~ \land ~ \Phi^{P}_{3} ~ \land ~ \Phi^{P}_{4} ~ \land ~ \Phi^{P}_{5} ~ \land ~ \Phi^{P}_{6}) \}
$$
is a set of first- and second-order axioms that characterizes $ \mathbb{Q} $ uniquely because of the following two reasons:</p>
<ol>
<li><p>Up to isomorphism, $ \mathbb{Q} $ is the only field with characteristic $ 0 $ that contains no proper subfield.</p></li>
<li><p>If $ \mathbb{F} \ncong \mathbb{Q} $ is a field with characteristic $ 0 $, then $ \mathbb{F} $ does not model this set of axioms. Otherwise, interpreting “$ P(x) $” as “$ x \in \mathbb{Q}_{\mathbb{F}} $” yields a contradiction, where $ \mathbb{Q}_{\mathbb{F}} $ is the copy of $ \mathbb{Q} $ sitting inside $ \mathbb{F} $.</p></li>
</ol>
| <p>Among all fields the rationals can be characterized as follows. The field of rational numbers is, up to isomorphism, the smallest field of characteristic $0$. </p>
<p>As you say, the rationals can be built up from the integers via a construction which is a special case of the construction known as the field of fractions of an integral domain. In this way the rationals can be characterized, among all integral domains, as follows: The field of rational numbers is, up to isomorphism, the field of fractions of the ring of integers. </p>
<p>Whether or not these characterizations fit your expectations of being nice is a matter of taste. In any case, the second characterization is a special case of the very common phenomenon of a universal property. </p>
|
probability | <p>Linearity of expectation is a very simple and "obvious" statement, but has many non-trivial applications, e.g., to analyze randomized algorithms (for instance, the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem#Calculating_the_expectation" rel="noreferrer">coupon collector's problem</a>), or in some proofs where dealing with non-independent random variables would otherwise make any calculation daunting.</p>
<p>What are the cleanest, most elegant, or striking applications of the linearity of expectation you've encountered?</p>
| <p><strong>Buffon's needle:</strong> rule a surface with parallel lines a distance <span class="math-container">$d$</span> apart. What is the probability that a randomly dropped needle of length <span class="math-container">$\ell\leq d$</span> crosses a line?</p>
<p>Consider dropping <em>any</em> (continuous) curve of length <span class="math-container">$\ell$</span> onto the surface. Imagine dividing up the curve into <span class="math-container">$N$</span> straight line segments, each of length <span class="math-container">$\ell/N$</span>. Let <span class="math-container">$X_i$</span> be the indicator for the <span class="math-container">$i$</span>-th segment crossing a line. Then if <span class="math-container">$X$</span> is the total number of times the curve crosses a line,
<span class="math-container">$$\mathbb E[X]=\mathbb E\left[\sum X_i\right]=\sum\mathbb E[X_i]=N\cdot\mathbb E[X_1].$$</span>
That is to say, the expected number of crossings is proportional to the length of the curve (and independent of the shape).</p>
<p>Now we need to fix the constant of proportionality. Take the curve to be a circle of diameter <span class="math-container">$d$</span>. Almost surely, this curve will cross a line twice. The length of the circle is <span class="math-container">$\pi d$</span>, so a curve of length <span class="math-container">$\ell$</span> crosses a line <span class="math-container">$\frac{2\ell}{\pi d}$</span> times.</p>
<p>Now observe that a straight needle of length <span class="math-container">$\ell\leq d$</span> can cross a line either <span class="math-container">$0$</span> or <span class="math-container">$1$</span> times. So the probability it crosses a line is precisely this expectation value <span class="math-container">$\frac{2\ell}{\pi d}$</span>.</p>
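<p>This prediction is easy to test by simulation. A minimal sketch (Python), parametrizing a drop by the centre's distance to the nearest line and the needle's angle; the hit fraction should approach <span class="math-container">$\frac{2\ell}{\pi d}$</span>:</p>

```python
import math
import random

def buffon_crossing_prob(ell, d, trials=200_000, seed=1):
    """Fraction of random needle drops (length ell <= d) that cross a line."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0.0, d / 2.0)        # centre's distance to nearest line
        theta = rng.uniform(0.0, math.pi)    # needle's angle against the lines
        if (ell / 2.0) * math.sin(theta) >= y:
            hits += 1
    return hits / trials
```

With <span class="math-container">$\ell = d = 1$</span> the estimate converges to <span class="math-container">$2/\pi \approx 0.637$</span>, which is the basis of the classical needle-dropping estimate of <span class="math-container">$\pi$</span>.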
| <p>As <a href="https://math.stackexchange.com/users/252071/lulu">lulu</a> mentioned in a comment, the fact that a uniformly random permutation <span class="math-container">$\pi\colon\{1,2,\dots,n\}\to\{1,2,\dots,n\}$</span> has in expectation <em>one</em> fixed point is a quite surprising statement, with a one-line proof.</p>
<p>Let <span class="math-container">$X$</span> be the number of fixed points of such a uniformly random <span class="math-container">$\pi$</span>. Then <span class="math-container">$X=\sum_{k=1}^n \mathbf{1}_{\pi(k)=k}$</span>, and thus
<span class="math-container">$$
\mathbb{E}[X] = \mathbb{E}\left[\sum_{k=1}^n \mathbf{1}_{\pi(k)=k}\right]
= \sum_{k=1}^n \mathbb{E}[\mathbf{1}_{\pi(k)=k}]
= \sum_{k=1}^n \mathbb{P}\{\pi(k)=k\}
= \sum_{k=1}^n \frac{1}{n}
= 1\,.
$$</span></p>
|
geometry | <p>This morning, I had eggs for breakfast, and I was looking at the pieces of broken shells and thought "What is the surface area of this egg?" The problem is that I have no real idea about how to find the surface area. </p>
<p>I have learned formulas for circles, and I know the equation for an ellipse; however, I don't know how to apply that. </p>
<p>The only idea I can think of is to put an egg on a sheet of paper and trace it, and then measure the outline drawn, and then try to find an equation for that ellipse and rotate that about the $x$-axis. Now, my problem is how I can find the equation of the ellipse from the graph, and will my tracing method really be the edge of the egg? Also, can I use the <a href="http://tutorial.math.lamar.edu/Classes/CalcII/SurfaceArea.aspx">standard surface area integral</a>? Will I have to use some techniques to solve the integral that are not covered in the <a href="https://apstudent.collegeboard.org/apcourse/ap-calculus-bc/course-details?calcbc">AP BC Calculus</a>?</p>
<p>There has to be a better method for finding the surface area. Please, help me understand how to find the surface area of an egg; i.e., how to use my mathematical knowledge for something other than passing exams. </p>
| <p>There is a nice equation describing an egg curve, credited to <a href="http://www16.ocn.ne.jp/~akiko-y/Egg/index_egg_E.html">Nobuo Yamamoto</a>:
$$
(x^2+y^2)^2 = ax^3 + \frac{3a}{10}xy^2, \tag{1}
$$
where $0\leq x\leq a$ and $a$ is the length of the egg's major axis of symmetry. </p>
<p>In other words, we could measure $a$ by cutting a boiled egg in half and measuring the distance from tip to bottom. I drew the curve in MATLAB; it looks like the following for $a=1$:
<img src="https://i.sstatic.net/ZXi8A.png" alt="egg"></p>
<p>I must say this curve fits an egg pretty well. Now that we have a curve, the method of <a href="http://en.wikipedia.org/wiki/Surface_of_revolution">computing surface area by revolution</a>, which I believe is taught in Calculus II, can be used to compute the surface. We just revolve the curve above around the egg's major axis of symmetry to get a surface; here is what it looks like when we revolve it around the $x$-axis by an angle of $\pi$ (we can get the lower half by revolving another $\pi$):
<img src="https://i.sstatic.net/GFZhB.png" alt="eggsurf"></p>
<p>First we solve for $y$ in (1) when $y>0$:
$$
y = \sqrt{\frac{3ax}{20} - x^2 + x\sqrt{\frac{7ax}{10} + \frac{9a^2}{400}} },
$$
The formula for the area of a surface of revolution is:
$$
S = 2\pi\int_0^a y\sqrt{1+\left(\frac{dy}{dx}\right)^2} \,dx,\tag{2}
$$
The derivative is:
\begin{align}
\frac{dy}{dx} = \frac{1}{2\sqrt{\frac{3ax}{20} - x^2 + x\sqrt{\frac{7ax}{10} + \frac{9a^2}{400}} }} \left( \frac{3a}{20}- 2x+ \sqrt{\frac{7ax}{10} + \frac{9a^2}{400}} + \frac{7ax}{20\sqrt{\frac{7ax}{10} + \frac{9a^2}{400}}}\right),
\end{align}
Plugging $dy/dx$ and $y$ into (2), you can use your favorite numerical integration tool (Octave, MATLAB, Mathematica, etc.) to perform the computation.</p>
<hr>
<p><strong>A more tweakable/numerical/experimental approach</strong>: </p>
<p>As J. M. suggests in the comments, the shape <em>looks like</em> an egg, but is a real egg being approximated nicely by this curve? I guess the answer is that "it really depends on that specific egg"!</p>
<p>Let's say we still want to use surface of revolution to compute the surface area.</p>
<p>But this time, we handle it more numerically from the very start, instead of looking for a curve to fit one thing for all.</p>
<p>Two assumptions: </p>
<ul>
<li>Every egg is axially symmetric with respect to its major axis, i.e., if the $x$-axis is its major axis, its surface can be obtained by revolving a curve $y= f(x)$, for $0\leq x\leq a$.</li>
<li>That curve $y = f(x)$ has certain smoothness: $f$ and $f'$ are continuous for $x\in (0,a)$ .</li>
</ul>
<p>Now we want to compute the integral (2) using <a href="http://en.wikipedia.org/wiki/Simpson%27s_rule">Simpson's rule</a> or the <a href="http://en.wikipedia.org/wiki/Trapezoidal_rule">trapezoidal rule</a>, which are also taught in Calculus II in most colleges I believe. A remark: since $|f'|\to \infty$ as $x\to 0$ and $x\to a$, it would be much better to use <a href="http://en.wikipedia.org/wiki/Adaptive_quadrature">adaptive quadrature</a>, putting more sample points near $0$ and $a$.</p>
<p><img src="https://i.sstatic.net/rosjH.png" alt="sample"></p>
<p>The steps are:</p>
<ol>
<li><p>Boil an egg, cut it in half, hold it against a sheet of paper, and use a pencil to outline its boundary (the upper half is enough). </p></li>
<li><p>Draw the major axis, set it to be the $x$-axis, and measure its length $a$.</p></li>
<li><p>Choose $(n+1)$ sample points (including the endpoints), equally spaced in $x$ so that the composite rules below apply; $n$ is chosen to be even. Measure each point's distance to the major axis (its $y$-coordinate) as in the above figure.</p></li>
<li><p>Denote the sample point as $(x_i,y_i)$, $x_0=0$, $x_n = a$.</p></li>
<li><p>Approximate
$$\frac{dy}{dx}\Big|_{x_i} \approx s_i = \frac{1}{2}\left( \frac{y_{i+1} - y_i}{x_{i+1} - x_i} + \frac{y_{i} - y_{i-1}}{x_{i} - x_{i-1}} \right).$$
For two end points:
$$
s_0 = \frac{y_1 - y_0}{x_1 - x_0},\quad \text{ and }\quad s_n = \frac{y_n- y_{n-1}}{x_n - x_{n-1}}.
$$
This step may be problematic; we could instead approximate $dy/dx$ by other methods, for example fitting a cubic spline through the sample points $(x_i,y_i)$, but that is beyond the content of college calculus.</p></li>
<li><p>Let $h = x_{i+1} - x_i = a/n$, approximate (2) by computing:
$$
\frac{2\pi h}{3}\bigg[g(x_0)+2\sum_{j=1}^{n/2-1}g(x_{2j})+
4\sum_{j=1}^{n/2}g(x_{2j-1})+g(x_n)
\bigg], \quad \text{ where } g(x_i) = y_i \sqrt{1+s_i^2}.
$$</p></li>
</ol>
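As an alternative to steps 5–6 that sidesteps the derivative entirely, one can approximate the surface by stacking thin conical frusta along a fine polyline of the curve. A sketch (function names illustrative), using the closed-form $y$ from (1) rather than measured sample points:

```python
import math

def egg_y(x, a=1.0):
    """Upper half of the egg curve (1), solved for y >= 0."""
    val = (3.0 * a * x / 20.0 - x * x
           + x * math.sqrt(7.0 * a * x / 10.0 + 9.0 * a * a / 400.0))
    return math.sqrt(max(val, 0.0))  # clamp tiny negatives from roundoff

def egg_surface_area(a=1.0, n=200_000):
    """Approximate the surface of revolution about the x-axis by summing
    lateral areas of conical frusta over a fine polyline; this avoids
    dy/dx, which is unbounded at x = 0 and x = a."""
    total = 0.0
    x0, y0 = 0.0, 0.0
    for i in range(1, n + 1):
        x1 = a * i / n
        y1 = egg_y(x1, a)
        seg = math.hypot(x1 - x0, y1 - y0)  # chord length ~ arc length ds
        total += math.pi * (y0 + y1) * seg  # frustum: 2*pi*y_avg*ds
        x0, y0 = x1, y1
    return total
```

For $a=1$ this returns approximately $2.04$, consistent with the quadrature value $S \approx 2.042087$ obtained from (2).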
<hr>
<p><strong>Some results comparison</strong>:</p>
<p>Amzoti gave <a href="http://www.poultryscience.org/ps/paperpdfs/05/p0530482.pdf">a link</a> in his comments above that has two semi-empirical formulas:
$$
S_1 = (3.155 − 0.0136a + 0.0115b)ab, \;\text{ and }\;S_2 = \left(0.9658\frac{b}{a}+2.1378\right)ab
$$
where $a$ and $b$ are the lengths of the major and minor axes of a real egg. For an egg shaped like (1) with $a = 1$ and $b\approx 0.7242629$, the surface areas computed by the above formulas are:
$$
S_1 \approx 2.278215 \;\text{ and }\;S_2 \approx 2.054946.
$$
Using the surface-of-revolution formula (2), we have:
$$
S \approx 2.042087.
$$ </p>
| <p>Let's make the problem a little more interesting by generalizing it. <em>I have an arbitrary convex object, and I want to find its surface area.</em> Of the answers posted so far, only the triangulation strategy of Zach L. and Cong Xu works in this case without breaking the object into little pieces. Here's another approach.</p>
<p>Suppose you project the object onto a randomly oriented plane, <em>i.e.</em> a plane whose normal is chosen uniformly from the unit sphere. Given that the object is convex, the expected value of the projected area is exactly $1/4$ times the surface area of the object, for essentially the same <a href="https://math.stackexchange.com/a/93743/856">reasons given by Christian Blatter for the 2D case</a>. (Short version: each differential surface element $dA$ contributes on average $dA/4$ to the projected area; its orientation doesn't matter because we're averaging over all possible directions of projection; there is no double-counting because the object is convex). This immediately suggests a Monte Carlo algorithm:</p>
<ol>
<li>Rotate the object into a random orientation.</li>
<li>Shine collimated light at the object (<em>e.g.</em> from the sun, or from a point-light/parabolic-mirror combination) and observe its shadow on a plane perpendicular to the light direction.</li>
<li>Record the area of the shadow. Maybe you have graph paper pasted on the plane, or maybe you take a picture with a calibrated camera, binarize the image, and count the number of black pixels.</li>
<li>Repeat lots and lots and lots of times.</li>
</ol>
<p>The average area of the shadow, times $4$, is the surface area of the object.</p>
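The same Monte Carlo idea can be run in software instead of with shadows. As an illustrative sketch (not part of the answer's physical setup), take the unit cube as the convex body: its shadow along direction $d$ has area $|d_x|+|d_y|+|d_z|$, and its true surface area is $6$:

```python
import math
import random

def random_direction():
    """Uniform random direction: normalize a 3D Gaussian sample."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm > 1e-12:
            return [c / norm for c in v]

def cube_surface_estimate(trials=100_000):
    """Cauchy's formula: the mean shadow area of a convex body is 1/4 of
    its surface area.  The unit cube's shadow on a plane with normal d
    has area |d_x| + |d_y| + |d_z|."""
    total = 0.0
    for _ in range(trials):
        total += sum(abs(c) for c in random_direction())
    return 4.0 * total / trials
```

With 100,000 samples the estimate lands close to $6$, as the formula predicts.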
|
game-theory | <p>I am looking for good books/lecture notes/etc. to learn game theory. I do not fear the math, so I'm not looking for a "non-mathematical intro" or something like that. Any suggestions are welcome. Just put here any references you've seen and some brief description and/or review. Thanks.</p>
<p>Edit: I'm not constrained to any particular subject. Just want to get a feeling of the type of books out there. Then I can decide what to read. I would like to see here a long list of books on the subject and its applications, together with reviews or opinions of those books.</p>
| <p>Game theory is a very broad subject, you should also know what you intend to use this knowledge for. Anyway, the link below contains video lectures from Yale professor Benjamin Polak. It is given as an introduction to game theory course and contains very good material. Hope this helps, cheers. <a href="http://academicearth.org/speakers/benjamin-polak">http://academicearth.org/speakers/benjamin-polak</a></p>
| <p>If you are not afraid of math, then I recommend:
Osborne, Rubinstein: A Course in Game Theory</p>
<p>If you need more examples and a more introductory style, then:
Osborne: An Introduction to Game Theory</p>
<p>These books do not cover combinatorial game theory and differential games.</p>
|
geometry | <p><strong>I would like to generate a random axis or unit vector in 3D</strong>. In 2D it would be easy, I could just pick an angle between 0 and 2*Pi and use the unit vector pointing in that direction. </p>
<p>But in 3D <strong>I don't know how can I pick a random point on a surface of a sphere</strong>.</p>
<p>If I pick two angles, the distribution won't be uniform on the surface of the sphere: there would be more points at the poles and fewer points at the equator.</p>
<p>If I pick a random point in the (-1,-1,-1):(1,1,1) cube and normalise it, then there would be more chance that a point gets chosen along the diagonals than from the centers of the sides. So that's not good either.</p>
<p><strong>But then what's the good solution?</strong> </p>
| <p>You need to use an <a href="http://en.wikipedia.org/wiki/Equal-area_projection#Equal-area">equal-area projection</a> of the sphere onto a rectangle. Such projections are widely used in cartography to draw maps of the earth that represent areas accurately.</p>
<p>One of the simplest such projections is the axial projection of a sphere onto the lateral surface of a cylinder, as illustrated in the following figure:</p>
<p><img src="https://i.sstatic.net/GDe76.png" alt="Cylindrical Projection"></p>
<p>This projection is area-preserving, and was used by Archimedes to compute the surface area of a sphere.</p>
<p>The result is that you can pick a random point on the surface of a unit sphere using the following algorithm:</p>
<ol>
<li><p>Choose a random value of $\theta$ between $0$ and $2\pi$.</p></li>
<li><p>Choose a random value of $z$ between $-1$ and $1$.</p></li>
<li><p>Compute the resulting point:
$$
(x,y,z) \;=\; \left(\sqrt{1-z^2}\cos \theta,\; \sqrt{1-z^2}\sin \theta,\; z\right)
$$</p></li>
</ol>
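The three steps above translate directly into code (a sketch; the function name is illustrative):

```python
import math
import random

def random_unit_vector():
    """Uniform random point on the unit sphere via the area-preserving
    cylindrical projection: theta uniform in [0, 2*pi), z uniform in [-1, 1]."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    z = random.uniform(-1.0, 1.0)
    r = math.sqrt(1.0 - z * z)  # radius of the horizontal circle at height z
    return (r * math.cos(theta), r * math.sin(theta), z)
```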
| <p>Another commonly used convenient method of generating a uniform random point on the sphere in $\mathbb{R}^3$ is this: Generate a standard multivariate normal random vector $(X_1, X_2, X_3)$, and then normalize it to have length 1. That is, $X_1, X_2, X_3$ are three independent standard normal random numbers. There are many well-known ways to generate normal random numbers; one of the simplest is the <a href="http://en.wikipedia.org/wiki/Box-Muller">Box-Muller algorithm</a> which produces two at a time.</p>
<p>This works because the standard multivariate normal distribution is invariant under rotation (i.e. orthogonal transformations).</p>
<p>This has the nice property of generalizing immediately to any number of dimensions without requiring any more thought.</p>
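A sketch of this method (names illustrative); note that it works unchanged in any dimension:

```python
import math
import random

def random_unit_vector_nd(n=3):
    """Uniform random direction in n dimensions: draw n independent
    standard normals and normalize.  This works because the multivariate
    normal density depends only on the vector's length."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-12:  # reject the (vanishingly unlikely) near-zero draw
            return [x / norm for x in v]
```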
|
geometry | <p>It seems to me that perpendicular, orthogonal and normal are all equivalent in two and three dimensions. I'm curious as to which situations you would want to use one term over the other in two and three dimensions. </p>
<p>Also... what about higher dimensions? It seems like perpendicular and normal would not have a nice meaning whereas orthogonal would as it is defined in terms of the dot product. </p>
<p>Can someone give me a detailed breakdown as to the differences in their meanings, their uses and the situations for which each should be used?</p>
| <p>In two or three dimensions, I agree, perpendicular is more natural than orthogonal.</p>
<p>In higher dimensions, or if the dimension is represented by an unknown, both are correct, but I think orthogonal is preferable.</p>
<p>Here's an excerpt from Wikipedia (<a href="https://en.wikipedia.org/wiki/Orthogonality" rel="noreferrer">https://en.wikipedia.org/wiki/Orthogonality</a>):</p>
<blockquote>
<p>In mathematics, orthogonality is the generalization of the notion of perpendicularity to the linear algebra of bilinear forms.</p>
</blockquote>
<p>Normal can be used in any dimension, but it usually means perpendicular to a curve or surface (of some dimension).</p>
| <p>Excellent question! The best way to understand the three terms is through the history of mathematics, together with the notion of logical equivalence. Think of the various ways in which the parallel-lines axiom can be replaced by other statements that turn out to be logically equivalent. So:</p>
<p>(1) Perpendicular is usually associated with "dropping a perpendicular from a point," and it presupposes only two dimensions in the form of a single plane; no angles need be assumed. For example, given a horizontal line with two congruent circles whose centers lie on it and which intersect at two points, the line drawn from the top intersection point to the bottom one forms a perpendicular. Notice that nothing is said about angles.</p>
<p>(2) An orthogonal line is defined in terms of a right angle, and since all right angles are equal (one of Euclid's axioms), every orthogonal intersection determines four rays from the point of intersection. Note that one can prove (depending on which axioms you choose) that every orthogonal is a perpendicular, and every perpendicular is an orthogonal.</p>
<p>(3) The normal is a perpendicular to a plane tangent to a surface, so it presupposes at least three dimensions. Generalizing, every pair of dimensions produces a distinct plane, and for each such plane a normal may be defined lying in the "next" dimension; the restriction is that the normal has only one point in common with the plane it intersects. Any ray from that point together with the normal determines a plane, and in that plane the normal forms a perpendicular, which is also an orthogonal. Adding dimensions merely adds new planes in which new perpendiculars, orthogonals, and normals are introduced.</p>
<p>Hope this answers your excellent question.</p>
|
geometry | <p>An old (rather easy) contest problem reads as follows:</p>
<blockquote>
<p>Each point in a plane is painted one of two colors. Prove that there exist two points exactly one unit apart that are the same color.</p>
</blockquote>
<p>This proof can be easily written by constructing an equilateral triangle of side length $1$ unit and asserting that it is impossible for the colors of all three vertices to be pairwise unequal.</p>
<p>However, I was curious about the trickier problem</p>
<blockquote>
<p>Each point in a plane is painted one of <strong>three</strong> colors. Do there exist two points exactly one unit apart that are the same color?</p>
</blockquote>
<p>...now, if this happened in $3$-space, I could construct a tetrahedron... but I can't do this in $2$-space. Does this not work with three colors, or is the proof just more complicated? If it doesn't work, how can I construct a counterexample?</p>
| <p>This is the subject of the well-known open <a href="https://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson_problem" rel="nofollow noreferrer">Hadwiger–Nelson problem</a>, which asks for the exact minimum number of colors needed to color the plane so that no two points at distance one receive the same color.</p>
<p>You have observed that we cannot do it with only 2 colors, and asked: can we do it with 3 colors? The answer to that is no as well: on the Wikipedia page, we see that this is proven with the <a href="https://en.wikipedia.org/wiki/Moser_spindle" rel="nofollow noreferrer">Moser spindle</a>, a collection of 7 points in the plane with 11 unit-length edges between them, such that these points cannot be colored with only 3 colors.</p>
<p>Therefore it is known that 2 or 3 colors are impossible. It is not known, however, if it can be done with 4 colors. It is known that it can be done in 7 colors, so the minimum number of colors is either 4, 5, 6, or 7 -- but we don't know which!</p>
<p>In fact, it is expected by some that the exact minimum depends on the infamous <em>Axiom of Choice</em>.</p>
<p><strong>UPDATE:</strong> Remarkable news earlier this year in April 2018: amateur mathematician Aubrey de Grey showed that 4 colors is impossible, thus narrowing the minimum down to 5, 6, or 7. To do this, he constructed a set of about 1500 points in the plane and proved that it is impossible to color them with 4 colors. This is the first progress on the problem in 60 years. You can read about it in <a href="https://arxiv.org/abs/1804.02385" rel="nofollow noreferrer">de Grey's paper</a> (which is quite readable). You can also read about it in <a href="https://gilkalai.wordpress.com/2018/04/10/aubrey-de-grey-the-chromatic-number-of-the-plane-is-at-least-5/" rel="nofollow noreferrer">this blog post</a> and <a href="https://www.quantamagazine.org/decades-old-graph-problem-yields-to-amateur-mathematician-20180417/" rel="nofollow noreferrer">this Quanta magazine article</a>.</p>
| <p>We can prove this by finding a more complicated set of points that cannot be colored. One example is known as the <a href="https://en.wikipedia.org/wiki/Moser_spindle" rel="noreferrer">Moser spindle</a>:</p>
<p><a href="https://i.sstatic.net/UZUvw.png" rel="noreferrer"><img src="https://i.sstatic.net/UZUvw.png" alt="Moser spindle"></a></p>
<p>(The lines mark points that are one unit apart.)</p>
<p>Suppose we try to color these seven points with three colors. Color the point $4$ at the top of the diagram one of them - say, red. Then $3$ and $6$ have to be different both from $4$ and from each other. If we color them blue and green, then $2$ cannot be blue or green, so $2$ has to be red again: same as $4$.</p>
<p>Similarly, we can show that $1$ has to be the same color as $4$. But $1$ and $2$ are also exactly one unit apart, so they must be different colors! Therefore we can't color these seven points (and definitely can't color $\mathbb R^2$) with three colors, without giving two points one unit apart the same color.</p>
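For the skeptical, this case analysis can be checked exhaustively by computer (a sketch; the edge list encodes exactly the adjacencies used in the argument above: two rhombi $4$-$3$-$2$-$6$ and $4$-$5$-$1$-$7$ sharing the apex $4$, with tips $1$ and $2$ joined):

```python
from itertools import product

# Unit-distance adjacencies of the Moser spindle, labeled as in the argument.
EDGES = [(4, 3), (4, 6), (3, 6), (3, 2), (6, 2),
         (4, 5), (4, 7), (5, 7), (5, 1), (7, 1),
         (1, 2)]
VERTICES = range(1, 8)

def colorable(num_colors):
    """Brute-force: is there any proper coloring with num_colors colors?"""
    for colors in product(range(num_colors), repeat=7):
        c = dict(zip(VERTICES, colors))
        if all(c[u] != c[v] for u, v in EDGES):
            return True
    return False
```

Running this confirms the argument: no proper 3-coloring exists, while a 4-coloring does.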
|
probability | <p>In one of his interviews, <a href="https://www.youtube.com/shorts/-qvC0ISkp1k" rel="noreferrer">Clip Link</a>, Neil DeGrasse Tyson discusses a coin toss experiment. It goes something like this:</p>
<ol>
<li>Line up 1000 people, each given a coin, to be flipped simultaneously</li>
<li>Ask each one to flip; if <strong>heads</strong>, the person can continue</li>
<li>If the person gets <strong>tails</strong>, they are out</li>
<li>The game continues until <strong>1*</strong> person remains</li>
</ol>
<p>He says the "winner" should not feel too surprised or lucky because there would be another winner if we re-run the experiment! This leads him to talk about our place in the Universe.</p>
<p>I realised, however, that there need <strong>not be a winner at all</strong>, and that the winner should feel lucky and be surprised! (Because the last, say, three people can all flip tails)</p>
<p>Then, I ran an experiment by writing a program with the following parameters:</p>
<ol>
<li><strong>Bias of the coin</strong>: 0.0001 - 0.8999 (8999 values)</li>
<li><strong>Number of people</strong>: 10000</li>
<li><strong>Number of times experiment run per Bias</strong>: 1000</li>
</ol>
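A minimal reconstruction of the core of such a program (hypothetical; the actual code is not shown) might look like:

```python
import random

def single_winner(n, p):
    """Play one game: everyone still in flips a coin with P(heads) = p;
    heads stay, tails are out.  Return True iff exactly one remains."""
    alive = n
    while alive > 1:
        alive = sum(random.random() < p for _ in range(alive))
    return alive == 1  # alive == 0 means no winner at all

def estimate(n, p, runs=1_000):
    """Fraction of runs that end with exactly one winner."""
    return sum(single_winner(n, p) for _ in range(runs)) / runs
```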
<p>I plotted the <strong>Probability of 1 Winner</strong> vs <strong>Bias</strong></p>
<p><a href="https://i.sstatic.net/iVUuXvZj.png" rel="noreferrer"><img src="https://i.sstatic.net/iVUuXvZj.png" alt="enter image description here" /></a></p>
<p>The plot was interesting, with a <strong>zig-zag</strong> for low bias (for heads) and a smooth curve after <strong>p = 0.2</strong>. (Also, there is a 73% chance of a single winner for a fair coin.)</p>
<p><strong>Is there an analytic expression for the function <span class="math-container">$$f(p) = (\textrm{probability of $1$ winner with a coin of bias $p$}) \textbf{?}$$</span></strong></p>
<p>I tried doing something and got here:
<span class="math-container">$$
f(p)=p\left(\sum_{i=0}^{e n d} X_i=N-1\right)
$$</span>
where <span class="math-container">$X_i=\operatorname{Binomial}\left(N-\sum_{j=0}^{i-1} X_j, p\right)$</span> and <span class="math-container">$X_0=\operatorname{Binomial}(N, p)$</span></p>
| <p>It is known (to a nonempty set of humans) that when <span class="math-container">$p=\frac12$</span>, there is no limiting probability. Presumably the analysis can be (might have been) extended to other values of <span class="math-container">$p$</span>.
Even more surprisingly, the reason I know this is because it ends up having an application in number theory! In any case, a reference for this limit's nonexistence is <a href="https://www.kurims.kyoto-u.ac.jp/%7Ekyodo/kokyuroku/contents/pdf/1274-9.pdf" rel="noreferrer">Primitive roots: a survey</a> by Li and Pomerance (see the section "The source of the oscillation" starting on page 79). As the number of coins increases to infinity, the probability of winning (when <span class="math-container">$p=\frac12$</span>) oscillates between about <span class="math-container">$0.72134039$</span> and about <span class="math-container">$0.72135465$</span>, a difference of about <span class="math-container">$1.4\times10^{-5}$</span>.</p>
| <p>There is a pretty simple formula for the probability of a unique winner, although it involves an infinite sum. To derive the formula, suppose that there are <span class="math-container">$n$</span> people, and that you continue tossing until everyone is out, since they all got tails. Then you want the probability that at the last time before everyone was out, there was just one person left. If <span class="math-container">$p$</span> is the probability that the coin-flip is heads, to find the probability that this happens after <span class="math-container">$k+1$</span> steps with just person <span class="math-container">$i$</span> left, you can multiply the probability that person <span class="math-container">$i$</span> survives for <span class="math-container">$k$</span> steps, which is <span class="math-container">$p^k$</span>, by the probability that he is out on the <span class="math-container">$k+1^{\rm st}$</span> step, which is <span class="math-container">$1-p$</span>. You also have to multiply this by the probability that none of the other <span class="math-container">$n-1$</span> people survive for <span class="math-container">$k$</span> steps, which is <span class="math-container">$1-p^k$</span> for each of them. Multiplying these probabilities together gives
<span class="math-container">$$
p^k (1-p) (1-p^k)^{n-1}
$$</span>
and summing over the <span class="math-container">$n$</span> possible choices for <span class="math-container">$i$</span> and all <span class="math-container">$k$</span> gives
<span class="math-container">$$
f(p)= (1-p)\sum_{k\ge 1} n p^k (1 - p^k)^{n-1}.\qquad\qquad(*)
$$</span>
(I am assuming that <span class="math-container">$n>1$</span>, so <span class="math-container">$k=0$</span> is impossible.)</p>
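Formula (*) is cheap to evaluate numerically (a sketch; the series is truncated once the terms fall below a tolerance):

```python
def unique_winner_prob(n, p, tol=1e-16):
    """f(p) = (1-p) * sum_{k>=1} n p^k (1 - p^k)^(n-1): the probability
    that the last round before extinction leaves exactly one survivor."""
    total = 0.0
    pk = p                 # p**k, starting at k = 1
    while n * pk > tol:    # each remaining term is at most n * p**k
        total += n * pk * (1.0 - pk) ** (n - 1)
        pk *= p
    return (1.0 - p) * total
```

As a sanity check, for $n=2$ and $p=\frac12$ the sum telescopes to $\frac23$, and for $n=1000$, $p=\frac12$ the value is close to $0.7213$.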
<p>Now, the summand in (*) can be approximated by <span class="math-container">$n p^k \exp(-n p^k)$</span>, so if <span class="math-container">$n=p^{-L-\epsilon}$</span>, <span class="math-container">$L\ge 0$</span> large and integral, <span class="math-container">$0\le \epsilon \le1$</span>, <span class="math-container">$f(p)$</span> will be about
<span class="math-container">$$
(1-p) \sum_{j\ge 1-L} p^{j-\epsilon} \exp(-p^{j-\epsilon})
$$</span>
and we can further approximate this by summing over all integers: if <span class="math-container">$L$</span> becomes large and <span class="math-container">$\epsilon$</span> approaches some <span class="math-container">$0\le \delta \le 1$</span>, <span class="math-container">$f(p)$</span> will approach
<span class="math-container">$$
g(\delta):=(1-p)\sum_{j\in\Bbb Z} p^{j-\delta} \exp(-p^{j-\delta}).
$$</span>
The average of this over <span class="math-container">$\delta$</span> has the simple formula
<span class="math-container">$$
\int_0^1 g(\delta) d\delta = (1-p)\int_{\Bbb R} p^x \exp(-p^x) dx = -\frac{1-p}{\log p},
$$</span>
which is <span class="math-container">$1/(2 \log 2)\approx 0.72134752$</span> if <span class="math-container">$p=\frac 1 2$</span>, but as others have pointed out, <span class="math-container">$g(\delta)$</span> oscillates, so the large-<span class="math-container">$n$</span> limit for <span class="math-container">$f(p)$</span> will not exist. You can expand <span class="math-container">$g$</span> in Fourier series to get
<span class="math-container">$$
g(\delta)=-\frac{1-p}{\log p}\left(1+2\sum_{n\ge 1} \Re\left(e^{2\pi i n \delta} \,\,\Gamma(1 + \frac{2\pi i n}{\log p})\right)
\right).
$$</span>
Since <span class="math-container">$\Gamma(1+ri)$</span> falls off exponentially as <span class="math-container">$|r|$</span> becomes large, the peak-to-peak amplitude of the largest oscillation will be
<span class="math-container">$$
h(p):=-\frac{4(1-p)}{\log p} \left|\Gamma(1+\frac{2\pi i}{\log p})\right|,
$$</span>
which, as has already been pointed out, is <span class="math-container">$\approx 1.426\cdot 10^{-5}$</span> for <span class="math-container">$p=1/2$</span>. For some smaller <span class="math-container">$p$</span> it will be larger, although for very small <span class="math-container">$p$</span>, it will decrease as the value of the gamma function approaches 1 and <span class="math-container">$|\log p|$</span> increases. This doesn't mean that the overall oscillation disappears, though, since other terms in the Fourier series will become significant. To illustrate this, here are some graphs of <span class="math-container">$g(\delta)$</span>. From top to bottom, the <span class="math-container">$p$</span> values are <span class="math-container">$0.9$</span>, <span class="math-container">$0.5$</span>, <span class="math-container">$0.2$</span>, <span class="math-container">$0.1$</span>, <span class="math-container">$10^{-3}$</span>, <span class="math-container">$10^{-6}$</span>, <span class="math-container">$10^{-12}$</span>, and <span class="math-container">$10^{-24}$</span>. As <span class="math-container">$p$</span> becomes small,
<span class="math-container">$$
g(0)=(1-p)\sum_{j\in\Bbb Z} p^{j} \exp(-p^{j}) \ \ \qquad {\rm approaches} \ \ \qquad p^0 \exp(-p^0) = \frac 1 e.
$$</span></p>
<p><a href="https://i.sstatic.net/KX5RNjGy.png" rel="noreferrer"><img src="https://i.sstatic.net/KX5RNjGy.png" alt="Graphs of g(delta)" /></a></p>
<p><strong>References</strong></p>
<ul>
<li><a href="https://research.tue.nl/en/publications/on-the-number-of-maxima-in-a-discrete-sample" rel="noreferrer">"On the number of maxima in a discrete sample", J. J. A. M. Brands, F. W. Steutel, and R. J. G. Wilms, Memorandum COSOR 92-16, 1992, Technische Universiteit Eindhoven.</a></li>
<li><a href="https://doi.org/10.1214/aoap/1177005360" rel="noreferrer">"The Asymptotic Probability of a Tie for First Place", Bennett Eisenberg, Gilbert Stengle, and Gilbert Strang, <em>The Annals of Applied Probability</em> <strong>3</strong>, #3 (August 1993), pp. 731 - 745.</a></li>
<li><a href="https://www.jstor.org/stable/2325134" rel="noreferrer">Problem E3436, "Tossing Coins Until All Show Heads", solution, Lennart Råde, Peter Griffin, O. P. Lossers, <em>The American Mathematical Monthly</em>, <strong>101</strong>, #1 (January 1994), pp. 78-80.</a></li>
</ul>
|
logic | <p>I can rather easily imagine that some mathematician/logician had the idea to symbolize "it <strong>E</strong> xists" by $\exists$ - a reversed E - and after that some other (imitative) mathematician/logician had the idea to symbolize "for <strong>A</strong> ll" by $\forall$ - a reversed A. Or vice versa. (Maybe it was one and the same person.)</p>
<p>What is hard (for me) to imagine is, how the one who invented $\forall$ could fail to consider the notations $\vee$ and $\wedge$ such that today $(\forall x \in X) P(x)$ must be spelled out $\bigwedge_{x\in X} P(x)$ instead of $\bigvee_{x\in X}P(x)$? (Or vice versa.)</p>
<p>Since I know that this is not a real question, let me ask it like this: Where can I find more about this observation?</p>
| <p>See <a href="http://jeff560.tripod.com/set.html">Earliest Uses of Symbols of Set Theory and Logic</a> for this and much more.</p>
| <p>The four types of propositions used in the classical Greek syllogisms were called A, E, I, O. Statements of type A were "All p are q", and statements of type I were "Some p are q". So of course a millennium later, mathematicians (who had a classical education) used the letters A (for "all") and E (for "exists"), then later turned them upside down to avoid confusion with letters used for other things. </p>
<p>By the way: E and O were the negative forms, "All p are not q" = "No p are q" and "Some p are not q" = "Not all p are q"; the vowels come from the Latin affIrmo and nEgO.</p>
|
game-theory | <p>I heard a riddle once, which goes like this:</p>
<p>There are N lions and 1 sheep in a field. All the lions really want to eat the sheep, but the problem is that if a lion eats a sheep, it becomes a sheep. A lion would rather stay a lion than be eaten by another lion. (There is no other way for a lion to die than to become a sheep and then be eaten). </p>
<p>I was presented with this solution:</p>
<p>If there were 1 lion and 1 sheep, then the lion would simply eat the sheep. </p>
<p>If there were 2 lions and 1 sheep, then no lion would eat the sheep, because if one of them did, it would surely be eaten by the other lion afterwards. </p>
<p>If there were 3 lions, then one of the lions could safely eat the sheep, because it would turn in to the scenario with 2 lions, where no one can eat.</p>
<p>Continuing this argument, the conclusion is as follows:</p>
<ul>
<li><p>If there is an even number of lions, then nothing happens.</p></li>
<li><p>If there is an odd number of lions, then any lion could safely eat the sheep. </p></li>
</ul>
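The induction above is literally a one-line recursion (an illustrative sketch, valid only under the argument's implicit assumption that the lions reason perfectly):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def sheep_eaten(n):
    """With n perfectly rational lions and one sheep: a lion eats only
    if, after becoming the sheep among n-1 lions, it will not be eaten."""
    if n == 0:
        return False  # no lions left, the sheep is safe
    return not sheep_eaten(n - 1)
```

This reproduces the parity conclusion: the sheep is eaten exactly when the number of lions is odd.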
<p>But to me this seems utterly absurd. I think this is similar to the Unexpected Hanging Paradox (Link: <a href="http://en.wikipedia.org/wiki/Unexpected_hanging_paradox">http://en.wikipedia.org/wiki/Unexpected_hanging_paradox</a>). I might have forgotten some assumptions, and those assumptions might actually solve this problem. </p>
<p>Is there a fault in the argument which I haven't discovered? Does anyone have any insights? Is the argument sound?</p>
| <p>Maybe you have doubts whether lions can count up to 101 or have the notion of odd and even. So here is another version of the story:</p>
<p>A certain university has just one math chair, which is inhabited right now. There are $N$ (male) mathematicians aspiring for that chair, and the guy who kills the prof becomes his successor.</p>
| <p>None of the answers posted here are actually fully correct.</p>
<p>Let us use the formulation of the problem as posted on the braingle website: <a href="http://www.braingle.com/brainteasers/teaser.php?id=9026&comm=0" rel="nofollow noreferrer">http://www.braingle.com/brainteasers/teaser.php?id=9026&comm=0</a> </p>
<p>The backward induction solution to the problem (where the sheep survives if the number of lions is even, and gets eaten if the number of lions is odd) can only be applied if lions have <em>common knowledge of lion rationality</em>. It is <strong>not enough</strong> for lions to just be "infinitely logical, smart, and completely aware of their surroundings". It is not even enough for lions to know that all other lions are rational! Common knowledge is much stronger than that, and means that everyone knows that everyone knows that everyone knows ... etc. etc.</p>
<p>Only then can we start applying backward induction! And the original problem formulation makes no such statement about common knowledge of rationality - which is a <strong>very</strong> strong statement to make!</p>
<p>Note that when I used the word "knowledge" above, I used a very strict definition of knowledge - knowledge is a true belief, which holds regardless of any new information becoming available.</p>
<p>Merely having <em>common belief in rationality</em> is not enough for a backward induction solution here! Imagine a situation with 4 lions: what happens if the 4th lion decides to eat the sheep, contrary to what the backward induction solution says he should do? This will invalidate the other lions' common belief in rationality, which was the prerequisite for the backward induction solution! So lion 3 can no longer assume that lion 2 will never eat the sheep - after all, lion 4 just did! And since above all, lions do not want to be eaten, lion 3 will no longer eat the sheep. Turns out the behavior of lion 4 was rational after all, <strong>assuming</strong> lions did not have common knowledge in rationality at the start!</p>
<p>This explains why a lot of people feel unsatisfied with the backward induction solution to this problem, and why they feel that this problem is a paradox. They are completely right to be unsatisfied! Common knowledge in rationality is an <strong>extremely</strong> strong condition, and is unrealistic in most practical scenarios. And in fact in this problem, even common belief in rationality is not stated, merely the rationality of each individual lion, which completely invalidates the backward induction solution - even with only 2 lions, lion 2 cannot reason about what 1 lion would do if he doesn't hold a belief or knowledge of that lion's rationality (which cannot automatically be assumed)!</p>
|
matrices | <p>Mariano mentioned somewhere that everyone should prove once in their life that every matrix is conjugate to its transpose.</p>
<p>I spent quite a bit of time on it now, and still could not prove it. At the risk of devaluing myself, might I ask someone else to show me a proof?</p>
| <p>This question has a nice answer using the theory of <a href="https://en.wikipedia.org/wiki/Structure_theorem_for_finitely_generated_modules_over_a_principal_ideal_domain">modules over a PID</a>. Clearly the <a href="https://en.wikipedia.org/wiki/Smith_normal_form">Smith normal forms</a> (over $K[X]$) of $XI_n-A$ and of $XI_n-A^T$ are the same (by symmetry). Therefore $A$ and $A^T$ have the same invariant factors, thus the same <a href="https://en.wikipedia.org/wiki/Rational_canonical_form">rational canonical form</a>*, and hence they are similar over$~K$.</p>
<p>*The Wikipedia article at the link badly needs rewriting.</p>
| <p>I had in mind an argument using the Jordan form, which reduces the question to single Jordan blocks, which can then be handled using Ted's method ---in the comments.</p>
<p>There is one subtle point: the matrix which conjugates a matrix $A\in M_n(k)$ to its transpose can be taken with coefficients in $k$, no matter what the field is. On the other hand, the Jordan canonical form exists only for algebraically closed fields (or, rather, fields which split the characteristic polynomial).</p>
<p>If $K$ is an algebraic closure of $k$, then we can use the above argument to find an invertible matrix $C\in M_n(K)$ such that $CA=A^tC$. Now, consider the equation $$XA=A^tX$$ in a matrix $X=(x_{ij})$ of unknowns; this is a linear equation, and <em>over $K$</em> it has non-zero solutions. Since the equation has coefficients in $k$, it follows that there are also non-zero solutions with coefficients in $k$. These solutions show $A$ and $A^t$ are conjugate, except for a detail: can you see how to ensure that one of these non-zero solutions has non-zero determinant?</p>
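<p>For a single Jordan block, Ted's method from the comments can be made concrete: the exchange matrix $R$ (ones on the anti-diagonal, with $R^{-1}=R$) satisfies $RJR^{-1}=J^t$. A small sketch (helper names are mine):</p>

```python
def jordan_block(lam, n):
    """n x n Jordan block: lam on the diagonal, ones on the superdiagonal."""
    return [[lam if i == j else 1 if j == i + 1 else 0
             for j in range(n)] for i in range(n)]

def exchange(n):
    """Anti-diagonal identity matrix; it is its own inverse."""
    return [[1 if i + j == n - 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

n = 5
J, R = jordan_block(7, n), exchange(n)
assert matmul(matmul(R, J), R) == transpose(J)  # R J R^{-1} = J^t
```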
|
probability | <p>If I have two variables $X$ and $Y$ which randomly take on values uniformly from the range $[a,b]$ (all values equally probable), what is the expected value for $\max(X,Y)$?</p>
| <p>Here are some useful tools:</p>
<ol>
<li>For every nonnegative random variable $Z$, $$\mathrm E(Z)=\int_0^{+\infty}\mathrm P(Z\geqslant z)\,\mathrm dz=\int_0^{+\infty}(1-\mathrm P(Z\leqslant z))\,\mathrm dz.$$</li>
<li>As soon as $X$ and $Y$ are independent, $$\mathrm P(\max(X,Y)\leqslant z)=\mathrm P(X\leqslant z)\,\mathrm P(Y\leqslant z).$$</li>
<li>If $U$ is uniform on $(0,1)$, then $a+(b-a)U$ is uniform on $(a,b)$.</li>
</ol>
<p>If $(a,b)=(0,1)$, items 1. and 2. together yield $$\mathrm E(\max(X,Y))=\int_0^1(1-z^2)\,\mathrm dz=\frac23.$$ Then item 3. yields the general case, that is, $$\mathrm E(\max(X,Y))=a+\frac23(b-a)=\frac13(2b+a).$$</p>
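<p>A quick seeded Monte Carlo sanity check of the final formula (a sketch, not part of the argument; the helper name is mine):</p>

```python
import random

def mc_expected_max(a, b, trials=200_000, seed=0):
    """Estimate E(max(X, Y)) for X, Y independent uniform on (a, b)."""
    rng = random.Random(seed)
    return sum(max(rng.uniform(a, b), rng.uniform(a, b))
               for _ in range(trials)) / trials

# Compare against E(max(X, Y)) = (a + 2b)/3.
assert abs(mc_expected_max(0, 1) - 2 / 3) < 0.01
assert abs(mc_expected_max(2, 5) - (2 + 2 * 5) / 3) < 0.03
```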
<p>I very much liked Martin's approach but there's an error with his integration. The key is on line three. The intuition here should be that when y is the maximum, then x can vary from 0 to y whereas y can be anything and vice-versa for when x is the maximum. So the order of integration should be flipped: </p>
<p><img src="https://i.sstatic.net/4AR8n.png" alt="enter image description here"></p>
|
matrices | <p>Recently, I answered <a href="https://math.stackexchange.com/q/1378132/80762">this question about matrix invertibility</a> using a solution technique I called a "<strong>miracle method</strong>." The question and answer are reproduced below:</p>
<blockquote>
<p><strong>Problem:</strong> Let <span class="math-container">$A$</span> be a matrix satisfying <span class="math-container">$A^3 = 2I$</span>. Show that <span class="math-container">$B = A^2 - 2A + 2I$</span> is invertible.</p>
<p><strong>Solution:</strong> Suspend your disbelief for a moment and suppose <span class="math-container">$A$</span> and <span class="math-container">$B$</span> were scalars, not matrices. Then, by power series expansion, we would simply be looking for
<span class="math-container">$$ \frac{1}{B} = \frac{1}{A^2 - 2A + 2} = \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A^4}{8}-\frac{A^5}{8} + \cdots$$</span>
where the coefficient of <span class="math-container">$A^n$</span> is
<span class="math-container">$$ c_n = \frac{1+i}{2^{n+2}} \left((1-i)^n-i (1+i)^n\right). $$</span>
But we know that <span class="math-container">$A^3 = 2$</span>, so
<span class="math-container">$$ \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A^4}{8}-\frac{A^5}{8} + \cdots = \frac{1}{2}+\frac{A}{2}+\frac{A^2}{4}-\frac{A}{4}-\frac{A^2}{4} + \cdots $$</span>
and by summing the resulting coefficients on <span class="math-container">$1$</span>, <span class="math-container">$A$</span>, and <span class="math-container">$A^2$</span>, we find that
<span class="math-container">$$ \frac{1}{B} = \frac{2}{5} + \frac{3}{10}A + \frac{1}{10}A^2. $$</span>
Now, what we've just done should be total nonsense if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are really matrices, not scalars. But try setting <span class="math-container">$B^{-1} = \frac{2}{5}I + \frac{3}{10}A + \frac{1}{10}A^2$</span>, compute the product <span class="math-container">$BB^{-1}$</span>, and you'll find that, <strong>miraculously</strong>, this answer works!</p>
</blockquote>
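<p>As a quick check (helper names are mine), the cancellation can be verified exactly by computing in the commutative ring $\mathbb{Q}[A]/(A^3 - 2)$, where every expression in $A$ reduces to $c_0 + c_1 A + c_2 A^2$:</p>

```python
from fractions import Fraction as F

def mul_mod(p, q):
    """Multiply coefficient triples (c0, c1, c2) in Q[A]/(A^3 - 2),
    reducing via A^3 = 2 and A^4 = 2A."""
    full = [F(0)] * 5
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            full[i + j] += F(a) * F(b)
    return (full[0] + 2 * full[3], full[1] + 2 * full[4], full[2])

B = (2, -2, 1)                          # B = A^2 - 2A + 2I
B_inv = (F(2, 5), F(3, 10), F(1, 10))   # the claimed inverse
assert mul_mod(B, B_inv) == (1, 0, 0)   # i.e. B * B_inv = I
assert mul_mod(B_inv, B) == (1, 0, 0)
```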
<p>I discovered this solution technique some time ago while exploring a similar problem in Wolfram <em>Mathematica</em>. However, I have no idea why any of these manipulations should produce a meaningful answer when scalar and matrix inversion are such different operations. <strong>Why does this method work?</strong> Is there something deeper going on here than a serendipitous coincidence in series expansion coefficients?</p>
| <p>The real answer is the set of $n\times n$ matrices forms a Banach algebra - that is, a Banach space with a multiplication that distributes the right way. In the reals, the multiplication is the same as scaling, so the distinction doesn't matter and we don't think about it. But with matrices, scaling and multiplying matrices is different. The point is that there is no miracle. Rather, the argument you gave only uses tools from Banach algebras (notably, you didn't use commutativity). So it generalizes nicely. </p>
<p>This kind of trick is used all the time to great effect. One classic example is proving that when $\|A\|<1$ there is an inverse of $1-A$. One takes the argument about geometric series from real analysis, checks that everything works in a Banach algebra, and then you're done. </p>
| <p>Think about how you derive the finite version of the geometric series formula for scalars. You write:</p>
<p>$$x \sum_{n=0}^N x^n = \sum_{n=1}^{N+1} x^n = \sum_{n=0}^N x^n + x^{N+1} - 1.$$</p>
<p>This can be written as $xS=S+x^{N+1}-1$. So you move the $S$ over, and you get $(x-1)S=x^{N+1}-1$. Thus $S=(x-1)^{-1}(x^{N+1}-1)$.</p>
<p>There is only one point in this calculation where you needed to be careful about commutativity of multiplication, and that is in the step where you multiply both sides by $(x-1)^{-1}$. In the above I was careful to write this on the <em>left</em>, because $xS$ originally multiplied $x$ and $S$ with $x$ on the left. Thus, provided we do this one multiplication step on the left, everything we did works when $x$ is a member of any ring with identity such that $x-1$ has a multiplicative inverse. </p>
<p>As a result, if $A-I$ is invertible, then </p>
<p>$$\sum_{n=0}^N A^n = (A-I)^{-1}(A^{N+1}-I).$$</p>
<p>Moreover, if $\| A \| < 1$ (in any operator norm), then the $A^{N+1}$ term decays as $N \to \infty$. As a result, the partial sums are Cauchy, and so if the ring in question is also complete with respect to this norm, you obtain</p>
<p>$$\sum_{n=0}^\infty A^n = (I-A)^{-1}.$$</p>
<p>In particular, in this situation we recover the converse: if $\| A \| < 1$ then $I-A$ is invertible.</p>
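<p>The finite identity is exact and can be checked mechanically for a concrete matrix, in the form $(I-A)S_N = I - A^{N+1}$, with no commutativity or norm assumptions used (a sketch with exact rational entries; helper names are mine):</p>

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matsub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[F(1), F(0)], [F(0), F(1)]]
A = [[F(1, 3), F(1, 4)], [F(0), F(1, 2)]]   # any small matrix will do

N = 6
S, P = I, I                  # S = partial sum, P = current power of A
for _ in range(N):
    P = matmul(P, A)
    S = matadd(S, P)         # now S = sum_{n=0}^{N} A^n and P = A^N

# The telescoping identity (I - A) S_N = I - A^{N+1}, exactly.
assert matmul(matsub(I, A), S) == matsub(I, matmul(P, A))
```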
|
combinatorics | <p>Let $m,n\ge 0$ be two integers. Prove that</p>
<p>$$\sum_{k=m}^n (-1)^{k-m} \binom{k}{m} \binom{n}{k} = \delta_{mn}$$</p>
<p>where $\delta_{mn}$ stands for the <em>Kronecker's delta</em> (defined by $\delta_{mn} = \begin{cases} 1, & \text{if } m=n; \\ 0, & \text{if } m\neq n \end{cases}$).</p>
<p>Note: I put the tag "linear algebra" because I think there is an elegant way to attack the problem using a certain type of matrix.</p>
<p>I hope you will enjoy. :)</p>
| <p>This follows easily from the <a href="http://en.wikipedia.org/wiki/Multinomial_theorem">Multinomial Theorem</a>, I believe.</p>
<p>$$ 1 = 1^n = (1 - x + x)^n$$
$$ = \sum_{a+b+c=n} {n \choose a,b,c} 1^a \cdot (-x)^b \cdot x^c$$
$$ = \sum_{m=0}^{n} \sum_{k=m}^{n} {n \choose m,k-m,n-k} 1^{m} \cdot (-x)^{k-m} \cdot x^{n-k} $$
$$ = \sum_{m=0}^{n} \left[ \sum_{k=m}^{n} (-1)^{k-m} {k \choose m}{n \choose k} \right] x^{n-m}$$</p>
<p>Comparing coefficients now gives the result immediately.</p>
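<p>The identity is also easy to spot-check by brute force for small $m,n$ (a quick sketch; note that for $m>n$ the sum is empty, giving $0=\delta_{mn}$):</p>

```python
from math import comb

def alternating_sum(m, n):
    """The left-hand side: sum_{k=m}^{n} (-1)^(k-m) C(k,m) C(n,k)."""
    return sum((-1) ** (k - m) * comb(k, m) * comb(n, k)
               for k in range(m, n + 1))

for m in range(8):
    for n in range(8):
        assert alternating_sum(m, n) == (1 if m == n else 0)
```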
| <p>The vector space of polynomials in one variable has two bases $\{1, x, x^2, ... \}$ and $\{1, (x+1), (x+1)^2, ... \}$ and I believe what you've written down is equivalent to the statement that the change-of-basis matrices between these two bases multiply to the identity.</p>
<p>I am still thinking about an inclusion-exclusion argument.</p>
|
logic | <p>In mathematics the existence of a mathematical object is often proved by contradiction without showing how to construct the object.</p>
<p>Does the existence of the object imply that it is at least possible to construct the object?</p>
<p>Or are there mathematical objects that do exist but are impossible to construct? </p>
<p>Really the answer to this question will come down to the way we define the terms "existence" (and "construct"). Going philosophical for a moment, one may argue that constructibility is a priori required for existence, so that nothing can exist without being constructible; this, broadly speaking, is part of the impetus for <strong>intuitionism</strong> and <strong>constructivism</strong>, and related to the impetus for <strong>(ultra)finitism</strong>.$^1$ Incidentally, at least to some degree we can produce formal systems which capture this point of view <em>(although the philosophical stance should really be understood as</em> preceding <em>the formal systems which try to reflect them; I believe this was a point Brouwer and others made strenuously in the early history of intuitionism)</em>.</p>
<p>A less philosophical take would be to interpret "existence" as simply "provable existence relative to some fixed theory" (say, ZFC, or ZFC + large cardinals). In this case it's clear what "exists" means, and the remaining weasel word is "construct." <strong>Computability theory</strong> can give us some results which may be relevant, depending on how we interpret this word: there are lots of objects we can define in a complicated way but provably have no "concrete" definitions:</p>
<ul>
<li><p>The halting problem is not computable.</p></li>
<li><p>Kleene's $\mathcal{O}$ - or, the set of indices for computable well-orderings - is not hyperarithmetic.</p></li>
<li><p>A much deeper example: while we know that for all Turing degrees ${\bf a}$ there is a degree strictly between ${\bf a}$ and ${\bf a'}$ which is c.e. in $\bf a$, we can also show that there is no "uniform" way to produce such a degree in a precise sense.</p></li>
</ul>
<p>Going further up the ladder, ideas from <strong>inner model theory</strong> and <strong>descriptive set theory</strong> become relevant. For example:</p>
<ul>
<li><p>We can show in ZFC that there is a (Hamel) basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$; however, we can also show that no such basis is "nicely definable," in various precise senses (and we get stronger results along these lines as we add large cardinal axioms to ZFC). For example, no such basis can be Borel. </p></li>
<li><p>Other examples of the same flavor: a nontrivial ultrafilter on $\mathbb{N}$; a well-ordering of $\mathbb{R}$; a Vitali (or Bernstein or Luzin) set, or indeed any non-measurable set (or set without the property of Baire, or without the perfect set property); ...</p></li>
<li><p>On the other side of the aisle, the theory ZFC + a measurable cardinal proves that there is a set of natural numbers which is not "constructible" in a <a href="https://en.wikipedia.org/wiki/Constructible_universe" rel="noreferrer">precise set-theoretic sense</a> (basically, can be built just from "definable transfinite recursion" starting with the emptyset). Now the connection between $L$-flavored constructibility and the informal notion of a mathematical construction is tenuous at best, but this does in my opinion say that a measurable cardinal yields a hard-to-construct set of naturals in a precise sense.</p></li>
</ul>
<hr>
<p>$^1$I don't actually hold these stances <a href="https://math.stackexchange.com/a/2757816/28111">except very rarely</a>, and so I'm really not the best person to comment on their motivations; please take this sentence with a grain of salt.</p>
| <p>There exists a Hamel basis for the vector space $\Bbb{R}$ over $\Bbb{Q}$, but nobody has seen one so far. It is the axiom of choice which ensures existence.</p>
|
logic | <p>How would exponentiation be defined in Peano arithmetic? Unless $n$ is fixed natural number, $x^n$ seems to be hard to define. </p>
<p>Edit 2: So, what would be the way to define $x^n+y^n = z^n$ using $\Sigma_1^0$ formula?</p>
<p>Edit: OK, I say Peano arithmetic has addition and multiplication stuffs. This allows additions to be expressed without quantifiers but as for exponentiation Peano arithmetic is silent. That's why I asked this question. Just for clarification.</p>
| <p>The bare bones answer is something like what <a href="https://math.stackexchange.com/users/39174/hagen-von-eitzen">Hagen</a> has said. The idea is this: Exponentiation is understood to be a function defined recursively: $y=2^x$ iff there is a sequence $t_0,t_1,\dots,t_x$ such that </p>
<ul>
<li>$t_0=1$,</li>
<li>$t_x=y$, and</li>
<li>For all $n<x$, $t_{n+1}=2\times t_n$.</li>
</ul>
<p>In this respect, exponentiation is hardly unique: $y=x!$ is defined similarly, for example. Now you'd say that there is a sequence $z_0,z_1,\dots,z_x$, such that </p>
<ul>
<li>$z_0=1$,</li>
<li>$z_x=y$, and</li>
<li>For all $n<x$, $z_{n+1}=(n+1)\times z_n$.</li>
</ul>
<p>(That $t_0=z_0=1$ is coincidence. In one case it is because $2^0=1$; in the other, because $0!=1$.)</p>
<p>So, to define a formula saying that $y=2^x$, you'd like to say that there is a sequence $t_0,\dots,t_x$ as above. </p>
<p>The problem, of course, is that in Peano Arithmetic one talks about numbers rather than sequences. Gödel solved this problem when working on his proof of the incompleteness theorem: He explained how to code finite sequences by numbers, by using the <a href="http://en.wikipedia.org/wiki/Chinese_remainder_theorem" rel="noreferrer">Chinese remainder theorem</a>. Recall that this result states that, given any numbers $n_1,\dots,n_k$, pairwise relatively prime, and any numbers $m_1,\dots,m_k$, there is a number $x$ that simultaneously satisfies all congruences
$$ x\equiv m_i\pmod {n_i} $$
for $1\le i\le k$. </p>
<p>In particular, given $m_1,\dots,m_k$, let $n=t!$ where $t=\max(m_1,\dots,m_k,k)$. Letting $n_1=n+1$, $n_2=2n+1,\dots$, $n_k=kn+1$, we see that the $n_i$ are relatively prime, and we can find an $x$ that satisfies $x\equiv m_i\pmod{n_i}$ for all $i$.
We can then say that the pair $\langle x,n\rangle$ <em>codes</em> the sequence $(m_1,\dots,m_k)$. In fact, given $x,n$, it is rather easy to "decode" the $m_i$: Just note that $m_i$ is the <em>remainder</em> of dividing $x$ by $in+1$. </p>
<p>Accordingly, we can <em>define</em> $y=2^x$ by saying that there is a pair $\langle a,b\rangle$ that, in the sense just described, codes a sequence $(t_0,t_1,\dots,t_x)$ such that $t_0=1$, $t_x=y$, and $t_{n+1}=2t_n$ for all $n<x$. (Again, "in the sense just described" ends up meaning simply that "the remainder of dividing $a$ by $ib+1$ is $t_i$ for all $i\le x$". Note that we are not requiring $b$ to be the particular number we exhibited above using factorials.)</p>
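<p>This coding is entirely effective, and can be sketched in a few lines of code (helper names are mine; decoding is literally "the remainder of dividing $x$ by $in+1$"):</p>

```python
from math import factorial, gcd

def encode(ms):
    """Code a sequence (m_1, ..., m_k) as a pair (x, n) such that
    m_i = x mod (i*n + 1), following the construction in the text."""
    k = len(ms)
    n = factorial(max(list(ms) + [k]))        # n = t! with t = max(m_i, k)
    moduli = [i * n + 1 for i in range(1, k + 1)]
    assert all(gcd(p, q) == 1 for i, p in enumerate(moduli)
               for q in moduli[i + 1:])       # pairwise coprime, so CRT applies
    x, M = 0, 1
    for m, mod in zip(ms, moduli):
        # adjust x by a multiple of M so that x = m (mod mod),
        # preserving the congruences already arranged
        x += M * ((m - x) * pow(M, -1, mod) % mod)
        M *= mod
    return x, n

def decode(x, n, i):
    return x % (i * n + 1)

ts = [1, 2, 4, 8, 16]                         # e.g. t_i = 2^i
x, n = encode(ts)
assert [decode(x, n, i) for i in range(1, len(ts) + 1)] == ts
```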
<p>Of course, one needs to prove that any two pairs coding such a sequence agree on the value of $t_x$, but this is easy to establish. And we can code a pair by a number using, for example, Cantor's enumeration of $\mathbb N\times\mathbb N$, so that $\langle a,b\rangle$ is coded as the number
$$ c=\frac{(a+b)(a+b+1)}2+b. $$
This is a bijection, and has the additional advantages that it is definable and satisfies $a,b\le c$ (so it is given by a <em>bounded</em> formula). </p>
<p>An issue that appears now is that we need to formalize the discussion of the Chinese remainder theorem and the subsequent coding <em>within</em> Peano arithmetic. This presents new difficulties, as again, we cannot (in the language of arithmetic) talk about sequences, and cannot talk about factorials, until we do all the above, so it is not clear how to prove or even how to <em>formulate</em> these results. </p>
<p>This problem can be solved by noting that we can use induction within Peano arithmetic. One then proceeds to show, essentially, that given any finite sequence, there is a pair that codes it, and that if a pair codes a sequence $\vec s$, and a number $t$ is given, then there is a pair that codes the sequence $\vec s{}^\frown(t)$. That is, one can write down a formula $\phi(x,y,z)$, "$y$ codes a sequence, the $z$-th member of which is $x$", such that PA proves:</p>
<ul>
<li>For all $z$ and $y$ there is a unique $x$ such that $\phi(x,y,z)$.</li>
<li>For all $x$ there is a $y$ such that $\phi(x,y,0)$.</li>
<li>For all $x,y,z$ there is a $w$ such that the first $z$ terms of the sequences coded by $y$ and $w$ coincide, and the next term coded by $w$ is $x$.</li>
</ul>
<p>In fact, we can let $\phi$ be a bounded formula: Take $\phi(x,y,z)$ to be</p>
<blockquote>
<p>There are $a,b\le y$ such that $y=\langle a,b\rangle$ (Cantor's pairing) and $x<zb+1$ and there is a $d\le a$ such that $a=d(zb+1)+x$.</p>
</blockquote>
<p>Once we have in PA the existence of coded sequences like this, implementing recursive definitions such as exponential functions is straightforward.</p>
<p>There are two excellent references for these coding issues and the subtleties surrounding them:</p>
<ol>
<li>Richard Kaye. <strong>Models of Peano arithmetic</strong>. Oxford Logic Guides, 15. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1991. <a href="http://www.ams.org/mathscinet-getitem?mr=1098499" rel="noreferrer">MR1098499 (92k:03034)</a>. (See chapter 5.)</li>
<li>Petr Hájek, and Pavel Pudlák. <strong>Metamathematics of first-order arithmetic</strong>. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1993. <a href="http://www.ams.org/mathscinet-getitem?mr=1219738" rel="noreferrer">MR1219738 (94d:03001)</a>. (See chapter 1.)</li>
</ol>
| <p>Using the following abbreviations</p>
<p>$$\begin{align}a\le b&\equiv\exists n\colon a+n=b\\
a< b&\equiv Sa\le b\\
\operatorname{mod}(a,b,c)&\equiv \exists n\colon a=b\cdot n+c\land c<b\\
\operatorname{seq}(a,b,k,x)&\equiv \operatorname{mod}(a,S(b\cdot Sk),x)\\
\operatorname{pow}(a,b,c)&\equiv\exists x\exists y\colon\operatorname{seq}(x,y,0,S0)\land\operatorname{seq}(x,y,b,c)\land \\&\quad\forall k\forall z\colon((k<b\land \operatorname{seq}(x,y,k,z))\to \operatorname{seq}(x,y,Sk,a\cdot z))\end{align}$$
we have $\operatorname{pow}(a,b,c)$ if and only if $c=a^b$.
Intriguingly, you need some elementary number theory (such as the Chinese remainder theorem) to meta-appreciate this.</p>
<p><em>Example:</em> In order to show that $\operatorname{pow}(10,3,1000)$ is true (i.e., that $10^3=1000$), you may want to try $x=41366298973$ and $y=250$ in the formula above, i.e., verify that $\operatorname{seq}(x,y,0,1)$, $\operatorname{seq}(x,y,1,10)$, $\operatorname{seq}(x,y,2,100)$, and $\operatorname{seq}(x,y,3,1000)$. (It does not matter that $\operatorname{seq}(x,y,4,1138)$ instead of $\operatorname{seq}(x,y,4,10000)$).</p>
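<p>Unwinding the abbreviations, $\operatorname{seq}(x,y,k,v)$ just says that $v$ is the remainder of $x$ modulo $y(k+1)+1$, so the suggested witness can be checked mechanically (a sketch; the helper name is mine):</p>

```python
x, y = 41366298973, 250

def seq_value(a, b, k):
    """The unique v with seq(a, b, k, v): the remainder of a mod b*(k+1) + 1."""
    return a % (b * (k + 1) + 1)

# The coded sequence starts 10^0, 10^1, 10^2, 10^3 ...
assert [seq_value(x, y, k) for k in range(4)] == [1, 10, 100, 1000]
# ... and indeed position 4 holds 1138 rather than 10^4, which is harmless.
assert seq_value(x, y, 4) == 1138
```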
|
linear-algebra | <p>A classical exercise in group theory is "Show that if a group has a trivial automorphism group, then it is of order $1$ or $2$." I think that the straightforward solution uses that a exponent two group is a vector space over $\operatorname{GF}(2)$, and therefore has nontrivial automorphisms as soon as its dimension is at least $2$ (simply transposing two basis vectors).</p>
<p>My question is now natural:</p>
<blockquote>
<p>Is it possible, without the axiom of choice, to construct a vector space $E$ over $\operatorname{GF}(2)$, different from $\{0\}$ or $\operatorname{GF}(2)$, whose automorphism group $\operatorname{GL}(E)$ is trivial?</p>
</blockquote>
| <p><strong>Nov. 6th, 2011</strong> <em>After several long months a post on MathOverflow pushed me to reconsider this math, and I have found a mistake. The claim was still true, as shown by Läuchli $\small[1]$, however despite trying to do my best to understand the argument for this specific claim, it eluded me for several days. I then proceeded to construct my own proof, this time errors free - or so I hope. While at it, I am revising the writing style.</em></p>
<p><strong>Jul. 21st, 2012</strong> <em>While reviewing this proof again it was apparent that its most prominent use in generating such space over the field of two elements fails, as the third lemma implicitly assumed $x+x\neq x$. Now this has been corrected and the proof is truly complete.</em></p>
<p>$\newcommand{\sym}{\operatorname{sym}}
\newcommand{\fix}{\operatorname{fix}}
\newcommand{\span}{\operatorname{span}}
\newcommand{\im}{\operatorname{Im}}
\newcommand{\Id}{\operatorname{Id}}
$</p>
<hr>
<p>I got it! The answer is that you can construct such vector space.</p>
<p>I will assume that you are familiar with ZFA and the construction of permutation models, references can be found in Jech's <em>Set Theory</em> $\small[2, \text{Ch}. 15]$ as well <em>The Axiom of Choice</em> $\small{[3]}$. Any questions are welcomed.</p>
<p>Some notations, if $x\in V$ which is assumed to be a model of ZFC+Atoms then:</p>
<ul>
<li>$\sym(x) =\{\pi\in\mathscr{G} \mid \pi x = x\}$, and</li>
<li>$\fix(x) = \{\pi\in\mathscr{G} \mid \forall y\in x:\ \pi y = y\}$</li>
</ul>
<p><strong>Definition:</strong> Suppose $G$ is a group, $\mathcal{F}\subseteq\mathcal{P}(G)$ is <em>a normal subgroups filter</em> if:</p>
<ol>
<li>$G\in\mathcal{F}$;</li>
<li>$H,K$ are subgroups of $G$ such that $H\subseteq K$, then $H\in\mathcal{F}$ implies $K\in\mathcal{F}$;</li>
<li>$H,K$ are subgroups of $G$ such that $H,K\in\mathcal{F}$ then $H\cap K\in\mathcal{F}$; </li>
<li>$\{1\}\notin\mathcal{F}$ (<em>non-triviality</em>);</li>
<li>For every $H\in\mathcal{F}$ and $g\in G$ then $g^{-1}Hg\in\mathcal{F}$ (<em>normality</em>).</li>
</ol>
<p>Now consider the normal subgroups-filter $\mathcal{F}$ to be generated by the subgroups $\fix(E)$ for $E\in I$, where $I$ is an ideal of sets of atoms (closed under finite unions, intersections and subsets).</p>
<p>Basics of permutation models: </p>
<p>A permutation model is a transitive subclass of the universe $V$ that for every ordinal $\alpha$, we have $x\in\mathfrak{U}\cap V_{\alpha+1}$ if and only if $x\subseteq\mathfrak{U}\cap V_\alpha$ and $\sym(x)\in\mathcal{F}$. </p>
<p>The latter property is known as <strong>being symmetric (with respect to $\mathcal{F}$)</strong> and $x$ being in the permutation model means that $x$ is hereditarily symmetric. (Of course at limit stages take limits, and start with the empty set)</p>
<p>If $\mathcal{F}$ was generated by some ideal of sets $I$, then if $x$ is symmetric with respect to $\mathcal{F}$ it means that for some $E\in I$ we have $\fix(E)\subseteq\sym(x)$. In this case we say that $E$ is a <strong>support</strong> of $x$.</p>
<p>Note that if $E$ is a support of $x$ and $E\subseteq E'$ then $E'$ is also a support of $x$, since $\fix(E')\subseteq\fix(E)$.</p>
<p>Lastly if $f$ is a function in $\mathfrak{U}$ and $\pi$ is a permutation in $G$ then $\pi(f(x)) = (\pi f)(\pi x)$.</p>
<hr>
<p>Start with $V$ a model of ZFC+Atoms, assuming there are infinitely (countably should be enough) many atoms. $A$ is the set of atoms, endow it with operations that make it a vector space over a field $\mathbb{F}$ (If we only assume countably many atoms, we should assume the field is countable too. Since we are interested in $\mathbb F_2$ this assertion is not a big hassle). Now consider $\mathscr{G}$ the group of all linear automorphisms of $A$, each can be extended uniquely to an automorphism of $V$.</p>
<p>Now consider the normal subgroups-filter $\mathcal{F}$ to be generated by the subgroups $\fix(E)$ for $E\in I$, where $E$ is a finite set of atoms. Note that since all the permutations are linear they extend uniquely to $\span(E)$. If our field $\mathbb F$ is finite, then so is this span.</p>
<p>Let $\mathfrak{U}$ be the permutation model generated by $\mathscr{G}$ and $\mathcal{F}$. </p>
<p><strong>Lemma I:</strong> Suppose $E$ is a finite set, and $u,v$ are two vectors such that $v\notin\span(E\cup\{u\})$ and $u\notin\span(E\cup\{v\})$ (in which case we say that $u$ and $v$ are <em>linearly independent over $E$</em>), then there is a permutation which fixes $E$ and permutes $u$ with $v$.</p>
<p><em>Proof:</em> Without loss of generality we can assume that $E$ is linearly independent, otherwise take a subset of $E$ which is. Since $E\cup\{u,v\}$ is linearly independent we can (in $V$) extend it to a basis of $A$, and define a permutation of this basis which fixes $E$ and permutes $u$ and $v$. This extends uniquely to a linear permutation $\pi\in\fix(E)$ as needed. $\square$</p>
<p><strong>Lemma II:</strong> In $\mathfrak{U}$, $A$ is a vector space over $\mathbb F$, and if $W\in\mathfrak{U}$ is a linear proper subspace then $W$ has a finite dimension.</p>
<p><em>Proof:</em> Suppose $W$ is as above, let $E$ be a support of $W$. If $W\subseteq\span(E)$ then we are done. Otherwise take $u\notin W\cup \span(E)$ and $v\in W\setminus \span(E)$ and permute $u$ and $v$ while fixing $E$, denote the linear permutation with $\pi$. It is clear that $\pi\in\fix(E)$ but $\pi(W)\neq W$, in contradiction. $\square$</p>
<p><strong>Lemma III:</strong> If $T\in\mathfrak{U}$ is a linear endomorphism of $A$, and $E$ is a support of $T$ then $x\in\span(E)\Leftrightarrow Tx\in\span(E)$, or $Tx=0$.</p>
<p><em>Proof:</em> First for $x\in \span(E)$, if $Tx\notin\span(E)$ for some $Tx\neq u\notin\span(E)$ let $\pi$ be a linear automorphism of $A$ which fixes $E$ and $\pi(Tx)=u$. We have, if so:</p>
<p>$$u=\pi(Tx)=(\pi T)(\pi x) = Tx\neq u$$</p>
<p>On the other hand, suppose $x\notin\span(E)$ and $Tx\in\span(E)$. If $Tx=Tu$ for some $u\neq x$ with $u\notin\span(E)$ (in which case $x+u\neq x$), set $\pi$ to be an automorphism which fixes $E$ with $\pi(x)=x+u$; now we have: $$Tx = \pi(Tx) = (\pi T)(\pi x) = T(x+u) = Tx+Tu$$ Therefore $Tx=0$. </p>
<p>Otherwise for all $u\neq x$ we have $Tu\neq Tx$. Let $\pi$ be an automorphism fixing $E$ such that $\pi(x)=u$ for some $u\notin\span(E)$, and we have: $$Tx=\pi(Tx)=(\pi T)(\pi x) = Tu$$ this is a contradiction, so this case is impossible. $\square$</p>
<p><strong>Theorem:</strong> if $T\in\mathfrak{U}$ is an endomorphism of $A$ then for some $\lambda\in\mathbb F$ we have $Tx=\lambda x$ for all $x\in A$.</p>
<p><em>Proof:</em></p>
<p>Assume that $T\neq 0$, so it has a nontrivial image. Let $E$ be a support of $T$. If $\ker(T)$ is nontrivial then it is a proper subspace, thus for a finite set of atoms $B$ we have $\span(B)=\ker(T)$. Without loss of generality, $B\subseteq E$, otherwise $E\cup B$ is also a support of $T$.</p>
<p>For every $v\notin\span(E)$ we have $Tv\notin\span(E)$. However, $E_v = E\cup\{v\}$ is also a support of $T$. Therefore restricting $T$ to $E_v$ yields that $Tv=\lambda v$ for some $\lambda\in\mathbb F$.</p>
<p>Let $v,u\notin\span(E)$ be linearly independent over $\span(E)$. We have that $Tu=\alpha u$, $Tv=\mu v$, and $v+u\notin\span(E)$, so $T(v+u)=\lambda(v+u)$, for some $\alpha,\mu,\lambda\in\mathbb F$.
$$\begin{align}
0&=T(0) \\ &= T(u+v-u-v)\\
&=T(u+v)-Tu-Tv \\ &=\lambda(u+v)-\alpha u-\mu v=(\lambda-\alpha)u+(\lambda-\mu)v
\end{align}$$ Since $u,v$ are linearly independent we have $\alpha=\lambda=\mu$. Due to the fact that for every $u,v\notin\span(E)$ we can find $x$ which is linearly independent over $\span(E)$ both with $u$ and $v$, we can conclude that for $x\notin \span(E)$ we have $Tx=\lambda x$. </p>
<p>For $v\in\span(E)$ let $x\notin\span(E)$; we have that $v+x\notin\span(E)$ and therefore:
$$\begin{align}
Tv &= T(v+x - x)\\
&=T(v+x)-T(x)\\
&=\lambda(v+x)-\lambda x = \lambda v
\end{align}$$</p>
<p>We have concluded, if so, that $Tx=\lambda x$ for all $x\in A$, for some $\lambda\in\mathbb F$; that is, $T=\lambda\,\Id$. $\square$</p>
<hr>
<p>Set $\mathbb F=\mathbb F_2$ the field with two elements and we have created ourselves a vector space without any nontrivial automorphisms. However, one last problem remains. This construction was carried out in ZF+Atoms, while we want to have it without atoms. For this simply use the Jech-Sochor embedding theorem $\small[3, \text{Th}. 6.1, \text p. 85]$, and by setting $\alpha>4$ it should be that any endomorphism is transferred to the model of ZF created by this theorem.</p>
<p>(Many thanks to <a href="https://math.stackexchange.com/users/5363/t-b">t.b.</a> which helped me translating parts of the original paper of Läuchli.<br>
Additional thanks to Uri Abraham for noting that an operator need not be injective in order to be surjective, resulting a shorter proof.)</p>
<hr>
<p><strong>Bibliography</strong></p>
<ol>
<li><p>Läuchli, H. <strong><a href="http://dx.doi.org/10.5169/seals-28602" rel="noreferrer">Auswahlaxiom in der Algebra.</a></strong> <em>Commentarii Mathematici Helvetici</em>, vol 37, pp. 1-19.</p></li>
<li><p>Jech, T. <strong>Set Theory, 3rd millennium ed.,</strong> Springer (2003).</p></li>
<li><p>Jech, T. <strong>The Axiom of Choice</strong>. North-Holland (1973).</p></li>
</ol>
| <p>This is not really an answer, more like a set of comments that's too long to put in the comments.</p>
<p>Taking the question literally, whether in the absence of choice such a vector space can be <em>constructed</em>, <a href="http://groups.google.com/group/sci.math/browse_thread/thread/d6fecbe12086896e/b54bc4d0b9f2e3ae" rel="noreferrer">this thread</a>, in particular the post near the end by Randall Dougherty, answers in the affirmative. (I don't know enough about forcing to check whether the construction works.)</p>
<p>Then the question becomes how much choice is required and whether the non-existence of non-trivial vector spaces over $\mathbb{F}_2$ with trivial automorphism group is equivalent to some more well-known choice-like axiom. In the last post in the thread, Herman Rubin claims that this follows from the Boolean prime ideal theorem and conjectures that it's equivalent to it. I tried for a while to find a proof in either direction but didn't get much further than equipping the set of all ideals of a Boolean algebra with the structure of a vector space over $\mathbb{F}_2$.</p>
<p>The <a href="http://consequences.emich.edu/CONSEQ.HTM" rel="noreferrer">Consequences of the Axiom of Choice</a> website has a number of related results that don't settle the question but at least give a non-trivial upper bound on the amount of choice required: We don't need full choice, since Form 333, the axiom of odd choice, which is weaker than full choice in ZF, is equivalent to [333 A], "In every vector space $B$ over the two element field, every subspace of $B$ has a complementary subspace", which is at least as strong as the result in question (since we just need a complementary subspace for a one-dimensional subspace). [333 B], "Every quotient group of an Abelian group each of whose non-unit elements has order $2$ has a set of representatives", is another equivalent. <a href="http://www.ams.org/proc/1996-124-08/S0002-9939-96-03305-9/S0002-9939-96-03305-9.pdf" rel="noreferrer">Here</a> is the original paper by Keremedis from which these results are taken.</p>
<p>Some other related choice-like principles:</p>
<p>[1 BJ] "In every vector space over the two element field, every generating set contains a basis." : This is equivalent to full choice; the proof is in the Keremedis paper linked to above and is rather neat.</p>
<p>Form 28(p): "Every vector space $V$ over $\mathbb{Z}_p$ has the property that every linearly independent subset can be extended to a basis. ($\mathbb{Z}_p$ is the $p$ element field.)"</p>
<p>Form 66: "Every vector space over a field has a basis." (Apparently it's unknown whether this is equivalent to full choice.)</p>
<p>Form 95(F): "Existence of Complementary Subspaces over a Field $F$: If $F$ is a field, then every vector space $V$ over $F$ has the property that if $S \subseteq V$ is a subspace of $V$, then there is a subspace $S' \subseteq V$ such that $S\cap S' = \{0\}$ and $S \cup S'$ generates $V$." (This implies odd choice.)</p>
<p>Form 109: "Every field $F$ and every vector space $V$ over $F$ has the property that each linearly independent set $A \subseteq V$ can be extended to a basis."</p>
<p>Form 429(p): "Every vector space over $\mathbb{Z}_p$ has a basis. ($\mathbb{Z}_p$ is the $p$ element field.)"</p>
<p>(The full table of implications can be viewed at the website by entering the form numbers in the form.)</p>
<hr>
<p>[<strong>Edit:</strong>]</p>
<p>Out of my failed attempts at equipping the (non-distributive, non-complemented) lattice of subspaces of $E$ with some structure that would guarantee the existence of a maximal ideal (which left me doubting whether there really is a connection to the Boolean prime ideal theorem), this consideration emerged: We have the following chain of implications and equivalences:</p>
<p>$E$ has a basis $\Rightarrow$ There is an inner product on $E$ $\Rightarrow$ Every subspace of $E$ has a complement $\Rightarrow$ Every finite-dimensional subspace of $E$ has a complement $\Leftrightarrow$ Every one-dimensional subspace of $E$ has a complement $\Rightarrow$ There is a one-dimensional subspace of $E$ that has a complement $\Leftrightarrow$ There is a non-trivial linear functional on $E$ $\Rightarrow$ There is a non-trivial automorphism of $E$.</p>
<p>(Here $E$ in each case stands for "every vector space over $\mathbb{F}_2$ with more than two elements", not just a particular one; else the last implication wouldn't hold.)</p>
<p>I don't see any obvious implications in the other direction except for the ones indicated. It seems likely that different steps along this chain require different levels of choice; perhaps progress towards the overall solution might be made by proving further implications in the other direction or by proving that one of the steps in the chain is equivalent to or implies or is implied by a well-known choice-like principle. The only such connections I know of so far are that "Every subspace of $E$ has a complement" is equivalent to the axiom of odd choice (see above) and of course that the axiom of choice implies "$E$ has a basis", and hence the entire chain.</p>
|
number-theory | <p>Let $S$ be a set of size $n$. There is an easy way to count the number of subsets with an even number of elements. Algebraically, it comes from the fact that</p>
<p>$\displaystyle \sum_{k=0}^{n} {n \choose k} = (1 + 1)^n$</p>
<p>while</p>
<p>$\displaystyle \sum_{k=0}^{n} (-1)^k {n \choose k} = (1 - 1)^n$.</p>
<p>It follows that </p>
<p>$\displaystyle \sum_{k=0}^{n/2} {n \choose 2k} = 2^{n-1}$. </p>
<p>A direct combinatorial proof is as follows: fix an element $s \in S$. If a given subset has $s$ in it, take it out; otherwise, add it in. This defines a bijection between the subsets with an even number of elements and the subsets with an odd number of elements.</p>
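This toggle (add $s$ if absent, remove it if present) is easy to check by brute force; a quick Python sketch, where the set $S$ and the fixed element $s$ are arbitrary illustrative choices:

```python
from itertools import combinations

S = {1, 2, 3, 4, 5}   # an arbitrary example set
s = 1                 # the fixed element

def subsets(X):
    return [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def toggle(A):
    return A ^ {s}    # symmetric difference: remove s if present, add it otherwise

evens = [A for A in subsets(S) if len(A) % 2 == 0]
odds  = [A for A in subsets(S) if len(A) % 2 == 1]

# the toggle is an involution that flips parity, so it pairs evens with odds
assert sorted(map(sorted, map(toggle, evens))) == sorted(map(sorted, odds))
assert len(evens) == len(odds) == 2 ** (len(S) - 1)
```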
<p>The analogous formulas for the subsets with a number of elements divisible by $3$ or $4$ are more complicated, and divide into cases depending on the residue of $n \bmod 6$ and $n \bmod 8$, respectively. The algebraic derivations of these formulas are as follows (with $\omega$ a primitive third root of unity): observe that</p>
<p>$\displaystyle \sum_{k=0}^{n} \omega^k {n \choose k} = (1 + \omega)^n = (-\omega^2)^n$</p>
<p>while</p>
<p>$\displaystyle \sum_{k=0}^{n} \omega^{2k} {n \choose k} = (1 + \omega^2)^n = (-\omega)^n$</p>
<p>and that $1 + \omega^k + \omega^{2k} = 0$ if $k$ is not divisible by $3$ and equals $3$ otherwise. (This is a special case of the discrete Fourier transform.) It follows that</p>
<p>$\displaystyle \sum_{k=0}^{n/3} {n \choose 3k} = \frac{2^n + (-\omega^2)^n + (-\omega)^n}{3}.$</p>
<p>$-\omega$ and $-\omega^2$ are sixth roots of unity, so this formula splits into six cases (or maybe three). Similar observations about fourth roots of unity show that</p>
<p>$\displaystyle \sum_{k=0}^{n/4} {n \choose 4k} = \frac{2^n + (1+i)^n + (1-i)^n}{4}$</p>
<p>where $1+i = \sqrt{2} e^{ \frac{\pi i}{4} }$ is a scalar multiple of an eighth root of unity, so this formula splits into eight cases (or maybe four). </p>
<p><strong>Question:</strong> Does anyone know a direct combinatorial proof of these identities? </p>
| <p>There's a very pretty combinatorial proof of the general identity <span class="math-container">$$\sum_{k \geq 0} \binom{n}{rk} = \frac{1}{r} \sum_{j=0}^{r-1} (1+\omega^j)^n,$$</span>
for <span class="math-container">$\omega$</span> a primitive <span class="math-container">$r$</span>th root of unity, in Benjamin, Chen, and Kindred, "<a href="https://scholarship.claremont.edu/hmc_fac_pub/535/" rel="nofollow noreferrer">Sums of Evenly Spaced Binomial Coefficients</a>," <em>Mathematics Magazine</em> 83 (5), pp. 370-373, December 2010.</p>
<p>They show that both sides count the number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span> beginning at vertex 0, where <span class="math-container">$C_r$</span> is the directed cycle on <span class="math-container">$r$</span> elements with the addition of a loop at each vertex, and a walk is <em>closed</em> if it ends where it starts. </p>
<p><em>Left-hand side</em>: In order for an <span class="math-container">$n$</span>-walk to be closed, it has to take <span class="math-container">$kr$</span> forward moves and <span class="math-container">$n-kr$</span> stationary moves for some <span class="math-container">$k$</span>.</p>
<p><em>Right-hand side</em>: The number of closed walks starting at vertex <span class="math-container">$j$</span> is the same regardless of the choice of <span class="math-container">$j$</span>, and so it suffices to prove that the total number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span> is <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n.$</span> For each <span class="math-container">$n$</span>-walk with initial vertex <span class="math-container">$j$</span>, assign each forward step a weight of <span class="math-container">$\omega^j$</span> and each stationary step a weight of <span class="math-container">$1$</span>. Define the weight of an <span class="math-container">$n$</span>-walk itself to be the product of the weights of the steps in the walk. Thus the sum of the weights of all <span class="math-container">$n$</span>-walks starting at <span class="math-container">$j$</span> is <span class="math-container">$(1+\omega^j)^n$</span>, and <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span> gives the total weight of all <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span>. The open <span class="math-container">$n$</span>-walks can then be partitioned into orbits such that the sum of the weights of the walks in each orbit is <span class="math-container">$0$</span>. Thus the open <span class="math-container">$n$</span>-walks contribute a total of <span class="math-container">$0$</span> to the sum <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span>. Since a closed <span class="math-container">$n$</span>-walk has weight <span class="math-container">$1$</span>, <span class="math-container">$\sum_{j=0}^{r-1} (1+\omega^j)^n$</span> must therefore give the number of closed <span class="math-container">$n$</span>-walks on <span class="math-container">$C_r$</span>.</p>
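The walk-counting interpretation can also be verified computationally. A small sketch of mine: closed $n$-walks from vertex $0$ are counted by $(A^n)_{00}$, where $A=I+S$ is the adjacency matrix of $C_r$ with loops ($S$ the cyclic shift), and this matches the left-hand side:

```python
from math import comb

def closed_walks(n, r):
    """Closed n-walks on the directed r-cycle with a loop at each vertex,
    starting and ending at vertex 0, via powers of A = I + S."""
    A = [[1 if j == i or j == (i + 1) % r else 0 for j in range(r)]
         for i in range(r)]
    M = [[int(i == j) for j in range(r)] for i in range(r)]  # identity
    for _ in range(n):
        M = [[sum(M[i][k] * A[k][j] for k in range(r)) for j in range(r)]
             for i in range(r)]
    return M[0][0]

# a closed walk takes kr forward steps, so the count is sum of C(n, rk)
for r in (2, 3, 4, 5):
    for n in range(12):
        assert closed_walks(n, r) == sum(comb(n, k) for k in range(0, n + 1, r))
```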
<hr>
<p>They then make a slight modification of the argument above to give a combinatorial proof of
<span class="math-container">$$\sum_{k \geq 0} \binom{n}{a+rk} = \frac{1}{r} \sum_{j=0}^{r-1} \omega^{-ja}(1+\omega^j)^n,$$</span>
where <span class="math-container">$0 \leq a < r$</span>.</p>
<hr>
<p>Benjamin and Scott, in "<a href="https://scholarship.claremont.edu/hmc_fac_pub/124/" rel="nofollow noreferrer">Third and Fourth Binomial Coefficients</a>" (<em>Fibonacci Quarterly</em>, 49 (2), pp. 99-101, May 2011) give different combinatorial arguments for the specific cases you're asking about, <span class="math-container">$\sum_{k \geq 0} \binom{n}{3k}$</span> and <span class="math-container">$\sum_{k \geq 0} \binom{n}{4k}$</span>. I prefer the more general argument above, though, so I'll just leave this one as a link and not summarize it. </p>
<p>Fix two elements s<sub>1</sub>,s<sub>2</sub>∈S and divide the subsets of S into two parts: (subsets of S containing only s<sub>2</sub>)∪(subsets of S which contain s<sub>1</sub> if they contain s<sub>2</sub>). The second part contains an equal number of sets for all remainders mod 3 (because Z/3 acts there adding s<sub>1</sub>, then s<sub>2</sub>, then removing both of them) -- namely, 2<sup>n-2</sup>. And for the first part we have a bijection with subsets <em>(edit: with 2 mod 3 elements)</em> of a set with (n-2) elements.</p>
<p>So we get a recurrence relation that gives an answer 2<sup>n-2</sup>+2<sup>n-4</sup>+... -- i.e. (2<sup>n</sup>-1):3 for even and (2<sup>n</sup>-2):3 for odd n.</p>
<hr>
<p><strong>Errata.</strong> For n=0,1,5 mod 6 one should add "+1" to the answer from the previous paragraph (e.g. for n=6 the correct answer is 1+20+1=22 and not 21).</p>
<p>Let me try to rephrase the solution to make it obvious. For n=2k divide S into k pairs and consider the action of a group Z/3Z on each pair described in the first paragraph. We get an action of (Z/3Z)<sup>k</sup> on subsets of S, and after removal of its only fixed point (the k-element set consisting of the second points from each pair) we get a bijection between subsets which have 0, 1 or 2 elements mod 3. So there are (2<sup>n</sup>-1):3 sets with i mod 3 elements excluding the fixed point <em>and</em> to count that point one should add "+1" for i=k mod 3.</p>
<p>And for n=2k+1 there are 2 fixed points — including or not (2k+1)-th element of S — with k+1 and k elements respectively.</p>
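(For what it's worth, the corrected rule -- $(2^n-1)/3$ for even $n$, $(2^n-2)/3$ for odd $n$, plus $1$ when $n \equiv 0, 1, 5 \pmod 6$ -- checks out against an exhaustive count; a brute-force verification of mine:)

```python
from itertools import combinations
from math import comb

def count_div3(n):
    """Subsets of an n-set with size divisible by 3, counted by brute force."""
    return sum(1 for r in range(0, n + 1, 3)
                 for _ in combinations(range(n), r))

def formula(n):
    base = (2 ** n - 1) // 3 if n % 2 == 0 else (2 ** n - 2) // 3
    return base + (1 if n % 6 in (0, 1, 5) else 0)

for n in range(1, 15):
    assert count_div3(n) == formula(n) == sum(comb(n, k)
                                              for k in range(0, n + 1, 3))
```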
|
game-theory | <p>This question is a result of having too much free time years ago during military service.
One of the many pastimes was playing tic-tac-toe in varying grid sizes and dimensions, and it lead me to a conjecture.
Now, after several years of mathematical training at a university, I am still unable to settle the conjecture, so I present it to you.</p>
<p>The classical tic-tac-toe game is played on a $3\times3$ grid and two players take turns to put their mark somewhere in the grid.
The first one to get three collinear marks wins.
Collinear includes horizontal, vertical and diagonal lines.
Experience shows that the game always ends in a draw if both players play wisely.</p>
<p>Let us write the grid size $3\times3$ as $3^2$.
We can change the edge length by playing on any $a^2$ grid (where each player tries to get $a$ marks in a row on the $a\times a$ grid).
We can also change dimension by playing on any $a^d$ grid, for example $3^3=3\times3\times3$.
I want to understand something about this game for general $a$ and $d$.
Let me repeat: The goal is to make $a$ collinear marks.</p>
<p>I assume both players play in an optimal way.
It is quite easy to see that the first player wins on a $2^d$ grid for any $d\geq2$ but the game is a tie on $2^1$.
The game is a tie also on $3^1$ and $3^2$, but my experience suggests that the first player wins on $3^3$ but the game ties on $4^d$ for $d\leq3$.
It seems quite credible that if there is a winning strategy on $a^d$, there is one also on $a^{d'}$ for any $d'\geq d$, since more dimensions to move in gives more room for winning rows.
<a href="https://math.stackexchange.com/a/417190/166535">This answer</a> to a related question shows that for any $a$ there is a $d$ such that there is a winning strategy on $a^d$.</p>
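The small cases above can be checked by brute force. Here is a minimal memoised minimax sketch of my own (only feasible for tiny boards); `value(a, d)` returns $1$ for a first-player win and $0$ for a draw, and by strategy stealing it is never $-1$:

```python
from functools import lru_cache
from itertools import product

def lines(a, d):
    """All winning lines (sets of a collinear cells) of the a^d board."""
    dirs = [v for v in product((-1, 0, 1), repeat=d) if any(v)]
    found = set()
    for start in product(range(a), repeat=d):
        for v in dirs:
            pts = [tuple(s + k * e for s, e in zip(start, v)) for k in range(a)]
            if all(0 <= c < a for p in pts for c in p):
                found.add(frozenset(pts))
    return found

def value(a, d):
    """Game value under optimal play: 1 = first-player win, 0 = draw."""
    cells = list(product(range(a), repeat=d))
    idx = {c: i for i, c in enumerate(cells)}
    L = [tuple(idx[p] for p in ln) for ln in lines(a, d)]

    @lru_cache(maxsize=None)
    def val(board, player):
        scores = []
        for i, occ in enumerate(board):
            if occ:
                continue
            nb = board[:i] + (player,) + board[i + 1:]
            if any(all(nb[j] == player for j in ln) for ln in L):
                scores.append(1 if player == 1 else -1)  # move completes a line
            elif 0 not in nb:
                scores.append(0)                         # full board: draw
            else:
                scores.append(val(nb, 3 - player))
        return max(scores) if player == 1 else min(scores)

    return val((0,) * len(cells), 1)
```

For example, `value(2, 2)` and `value(2, 3)` come out as $1$ while `value(2, 1)` and `value(3, 2)` come out as $0$, matching the observations above.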
<p>This brings me to the conjecture:</p>
<blockquote>
<p><s>There is a winning strategy for tic-tac-toe on an $a^d$ grid if and only if $d\geq a$.</s> (Refuted by TonyK's answer below.)</p>
</blockquote>
<p>Is there a characterization of the cases where a winning strategy exists?
It turns out not to be as simple as I thought.</p>
<p>To fix notation, let
$$
\delta(a)=\min\{d;\text{first player wins on }a^d\}
$$
and
$$
\alpha(d)=\max\{a;\text{first player wins on }a^d\}.
$$
The main question is:</p>
<blockquote>
<p>Is there an explicit expression for either of these functions?
Or decent bounds?
Partial answers are also welcome.</p>
</blockquote>
<p>Note that the second player never wins, as was discussed in <a href="https://math.stackexchange.com/questions/366077/why-does-the-strategy-stealing-argument-for-tic-tac-toe-work">this earlier post</a>.</p>
<hr>
<p>A remark for the algebraically-minded:
We can also allow the lines of marks to continue at the opposite face when they exit the grid; this amounts to giving the grid a torus-like structure.
Now there are no special points, unlike in the usual case with boundaries.
Collinear points on a toric grid of size $a^d$ corresponds to a line (maximal collinear set) in the module $(\mathbb Z/a\mathbb Z)^d$.
(If $a$ is odd, then $a$ collinear points in the mentioned module add up to zero, but the converse does not always hold: the nine points in $(\mathbb Z/9\mathbb Z)^3$ with multiples of three as all coordinates add up to zero but are not collinear.)
This approach might be more useful when $a$ is a prime and the module becomes a vector space.
Anyway, if this version of the game seems more manageable, I'm happy with answers about it as well (although the conjecture as stated is not true in this setting; the first player wins on $3^2$).</p>
| <p>I will quote some results and problems from the book <a href="http://www.cambridge.org/us/academic/subjects/mathematics/discrete-mathematics-information-theory-and-coding/combinatorial-games-tic-tac-toe-theory" rel="noreferrer"><em>Combinatorial Games: Tic-Tac-Toe Theory</em></a> by <a href="https://en.wikipedia.org/wiki/J%C3%B3zsef_Beck" rel="noreferrer">József Beck</a>, some of which were also quoted in <a href="https://math.stackexchange.com/questions/994408/converting-a-gomoku-winning-strategy-from-a-small-board-to-a-winning-strategy-on/994604#994604">this answer</a>.</p>
<p>The terms "<strong>win</strong>" and "<strong>draw</strong>" refer to the game as ordinarily played, i.e., the <em>first</em> player to complete a line wins. The term "<strong>Weak Win</strong>" refers to the corresponding Maker-Breaker game, where the first player ("Maker") wins if he completes a line, <em>regardless</em> of whether the second player has previously completed a line; in other words, the second player ("Breaker") can only defend by blocking the first player, he cannot "counterattack" by threatening to make his own line. (Note that ordinary $3\times3$ tic-tac-toe is a Weak Win.) A game is a "<strong>Strong Draw</strong>" if it is not a Weak Win, i.e., if the second player ("Breaker") can prevent the first player from completing a line.</p>
<blockquote>
<p><strong>Theorem 3.1</strong> <em>Ordinary $3^2$ Tic-Tac-Toe is a draw but not a Strong Draw.</em><br>
<br><strong>Theorem 3.2</strong> <em>The $4^2$ game is a Strong Draw, but not a Pairing Strategy Draw (because the second player cannot force a draw by a single pairing strategy).</em><br>
<br><strong>Theorem 3.3</strong> <em>The $n\times n$ Tic-Tac-Toe is a Pairing Strategy Draw for every $n\ge5.$</em></p>
</blockquote>
<p>For a discussion of <a href="https://en.wikipedia.org/wiki/Oren_Patashnik" rel="noreferrer">Oren Patashnik</a>'s computer-assisted result that $4^3$ tic-tac-toe is a first player win, Beck refers to Patashnik's paper:</p>
<blockquote>
<p>Oren Patashnik, Qubic: $4\times4\times4$ Tic-Tac-Toe, <em>Mathematics Magazine</em> <strong>53</strong> (1980), 202-216.</p>
</blockquote>
<p>Not much more is known about multidimensional tic-tac-toe, as can be seen from the open problems:</p>
<blockquote>
<p><strong>Open Problem 3.2</strong> <em>Is it true that $5^3$ Tic-Tac-Toe is a draw game? Is it true that $5^4$ Tic-Tac-Toe is a first player win?</em></p>
</blockquote>
<p>The conjecture that "if there is a winning strategy on $a^d$, there is one also on $a^{d'}$ for any $d'\geq d$" is given as an open problem:</p>
<blockquote>
<p><strong>Open Problem 5.2</strong> <em>Is it true that, if the $n^d$ Tic-Tac-Toe is a first player win, then the $n^D$ game, where $D\gt d$, is also a win?</em><br>
<br><strong>Open Problem 5.3.</strong> <em>Is it true that, if the $n^d$ game is a draw, then the $(n+1)^d$ game is also a draw?</em></p>
</blockquote>
<p>To see that the intuition "adding more ways to win can't turn a winnable game into a draw game" is wrong, consider the following example of a tic-tac-toe-like game, attributed to Sujith Vijay: The board is the set $V=\{1,2,3,4,5,6,7,8,9\};\ $ the winning sets are $\{1,2,3\},$ $\{1,2,4\},$ $\{1,2,5\},$ $\{1,3,4\},$ $\{1,5,6\},$ $\{3,5,7\},$ $\{2,4,8\},$ $\{2,6,9\}$. As in tic-tac-toe, the two players take turns choosing (previously unchosen) elements of $V;$ the game is won by the first player to succeed in choosing all the elements of a winning set. It can be verified that this is a draw game, but the restriction to the board $\{1,2,3,4,5,6,7\}$ (with winning sets $\{1,2,3\},$ $\{1,2,4\},$ $\{1,2,5\},$ $\{1,3,4\},$ $\{1,5,6\},$ $\{3,5,7\}$) is a first-player win.</p>
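Vijay's example can be confirmed mechanically. A brute-force solver for such strong positional games (my own sketch; `value` returns $1$ for a first-player win, $0$ for a draw, $-1$ for a second-player win):

```python
from functools import lru_cache

def value(win_sets, n):
    """Strong positional game on vertices 1..n: players alternately claim
    free vertices; the first to own an entire winning set wins.
    Returns 1 (first-player win), 0 (draw), -1 (second-player win)."""
    W = [frozenset(w) for w in win_sets]

    @lru_cache(maxsize=None)
    def val(p1, p2):
        turn = 1 if len(p1) == len(p2) else 2      # whose move it is
        mine = p1 if turn == 1 else p2
        free = [v for v in range(1, n + 1) if v not in p1 and v not in p2]
        if not free:
            return 0                               # board exhausted: draw
        scores = []
        for v in free:
            new = mine | {v}
            if any(w <= new for w in W):           # this move completes a set
                scores.append(1 if turn == 1 else -1)
            else:
                scores.append(val(new, p2) if turn == 1 else val(p1, new))
        return max(scores) if turn == 1 else min(scores)

    return val(frozenset(), frozenset())

W9 = [{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,5,6}, {3,5,7}, {2,4,8}, {2,6,9}]
W7 = W9[:6]   # restriction to the board {1,...,7}
```

Running it, `value(W9, 9)` is $0$ (a draw) while `value(W7, 7)` is $1$ (a first-player win), as claimed.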
| <p>$4^3$ ("Qubic") is a win for the first player. According to <a href="http://en.wikipedia.org/wiki/Oren_Patashnik" rel="nofollow">this link</a>, it was first proved by Oren Patashnik in 1980. The proof is complicated. It took 12 years for this proof to be converted into a practical computer algorithm; I was present at the 1992 Computer Olympiad where the program of Victor Allis and Patrick Schoo romped to victory.</p>
|
probability | <p>I attempted to answer <a href="https://www.quora.com/Two-distinct-real-numbers-between-0-and-1-are-written-on-two-sheets-of-paper-You-have-to-select-one-of-the-sheets-randomly-and-declare-whether-the-number-you-see-is-the-biggest-or-smallest-of-the-two-How-can-one-expect-to-be-correct-more-than-half-the-times-you-play-this-game/answer/Ephraim-Rothschild">this question</a> on Quora, and was told that I am thinking about the problem incorrectly. The question was:</p>
<blockquote>
<p>Two distinct real numbers between 0 and 1 are written on two sheets of
paper. You have to select one of the sheets randomly and declare
whether the number you see is the biggest or smallest of the two. How
can one expect to be correct more than half the times you play this
game?</p>
</blockquote>
<p>My answer was that it was impossible, as the probability should always be 50% for the following reason:</p>
<blockquote>
<p><strong>You can't!</strong> Here's why: </p>
<p>The set of real numbers between (0, 1) is known as an Uncountably Infinite Set
(<a href="https://en.wikipedia.org/wiki/Uncountable_set">https://en.wikipedia.org/wiki/Uncountable_set</a>). A set that is
uncountable has the following interesting property: </p>
<p>Let $\mathbb{S}$ be an uncountably infinite set. Let $a, b, c, d \in \mathbb{S}$ $(a \neq b, c \neq d)$. If $x$ is an uncountably infinite subset of
$\mathbb{S}$, containing all elements in $\mathbb{S}$ on the interval $(a, b)$; and $y$
is another uncountably infinite subset of $\mathbb{S}$, which contains all
elements of $\mathbb{S}$ on the interval $(c, d),$ <strong>$x$ and $y$ have the same
cardinality (size)!</strong></p>
<p>So for example, the set of all real numbers between (0, 1) is actually
the <em>exact same size</em> as the set of all real numbers between (0, 2)!
It is also the same size as the set of all real numbers between (0,
0.00001). In fact, if you have an uncountably infinite set on the interval $(a, b)$, and $a<n<b$, then exactly 50% of the numbers
in the set are greater than $n$, and 50% are less than $n$, <em>no matter
what you choose for</em> $n$. This is important because it tells us
something unintuitive about our probability in this case. Let's say
the first number you picked is 0.03. You might think "Well, 97% of the
other possible numbers are larger than this, so the other number is
probably larger." <strong>You would be wrong!</strong> There are actually <em>exactly</em>
as many numbers between (0, 0.03) as there are between (0.03, 1). Even
if you picked 0.03, half of the other possible numbers are smaller
than it, and half of the other possible numbers are larger than it.
<strong>This means there is still a 50% probability that the other number is larger, and a 50% probability that it is smaller!</strong> </p>
<p>"<em>But how can that be?</em>" you ask, "<em>why isn't $\frac{a-b}{2}$ the
midpoint?</em>"</p>
<p>The real question is, why is it that we believe that
$\frac{a+b}{2}$ is the midpoint to begin with? The reason is probably
the following: it seems to make the most sense for discrete
(finite/countably infinite) sets. For example, if instead of the real
numbers, we took the set of all multiples of $0.001$ on the interval
$[0, 1]$. Now it makes sense to say that 0.5 is the midpoint, as we
know that the number of numbers below 0.5 is equal to the number of
numbers above 0.5. If we were to try to say that the midpoint is 0.4,
we would find that there are now more numbers above 0.4 then there are
below 0.4. <strong>This no longer applies when talking about the set of all
real numbers</strong> $\mathbb{R}$. Strangely enough, we can no longer talk
about having a midpoint in $\mathbb{R}$, because every number in
$\mathbb{R}$ could be considered a midpoint. For any point in
$\mathbb{R}$, the numbers above it and the numbers below it always
have the same cardinality. </p>
<p>See the Wikipedia article on Cardinality of the continuum
(<a href="https://en.wikipedia.org/wiki/Cardinality_of_the_continuum">https://en.wikipedia.org/wiki/Cardinality_of_the_continuum</a>).</p>
</blockquote>
<p>My question is, from a mathematical point of view, is this correct? The person who told me that this is wrong is fairly well known, and not someone who I would assume to often be wrong, especially for these types of problems.</p>
<p>The reasoning given for my answer being wrong was as follows:</p>
<blockquote>
<p>Your conclusion is not correct.<br>
You're right that the set of real
numbers between 0 and 1 is uncountable infinite, and most of what you
said here is correct. But that last part is incorrect. If you picked a
random real number between 0 and 1, the number does have a 97% chance
of being above 0.03. Let's look at this another way. Let K = {all
integers divisible by 125423423}. Let M = {all integers not divisible
by 125423423}. K and M are the same size, right? Does this mean, if
you picked a random integer, it has a 50% chance of being in K and a
50% chance or not? A random integer has a 50% chance of being
divisible by 125423423?</p>
</blockquote>
<p>The reason I disagreed with this response was that the last sentence should actually be true. If the set of all numbers that are divisible by 125423423 is the same size as the set of numbers that aren't, there should be a 50% probability of picking a random number from the first set, and a 50% chance that a number would be picked from the second. This is certainly the case with finite sets. If there are two disjoint finite sets with equal cardinality, and you choose a random number from the union of the two sets, there should be a 50% chance that the number came from the first set, and a 50% chance that the number came from the second set. Can this idea be generalized to infinite sets of equal cardinality?</p>
<p><strong>Is my answer wrong? If so, am I missing something about how cardinalities of two set relate to the probability of choosing a number from one of them? Where did I go wrong in my logic?</strong></p>
| <p>Yes, your answer is fundamentally wrong. Let me point out that it is not even right in the finite case. In particular, you are using the following false axiom:</p>
<blockquote>
<p>If two sets of outcomes are equally large, they are equally probable.</p>
</blockquote>
<p>However, this is wrong even if we have just two events. For a somewhat real life example, consider some random variable $X$ which is $1$ if I will get married exactly a year from today and which is $0$ otherwise. Now, clearly the sets $\{1\}$ and $\{0\}$ are equally large, each having one element. However, $0$ is far more likely than $1$, although they are both possible outcomes.</p>
<p>The point here is <em>probability</em> is not defined from <em>cardinality</em>. It is, in fact, a separate definition. The mathematical definition for probability goes something like this:</p>
<blockquote>
<p>To discuss probability, we start with a set of possible outcomes. Then, we give a function $\mu$ which takes in a subset of the outcomes and tells us how likely they are.</p>
</blockquote>
<p>One puts various conditions on $\mu$ to make sure it makes sense, but nowhere do we link it to cardinality. As an example, in the previous example with outcomes $0$ and $1$ which are not equally likely, one might have $\mu$ defined something like:
$$\mu(\{\})=0$$
$$\mu(\{0\})=\frac{9999}{10000}$$
$$\mu(\{1\})=\frac{1}{10000}$$
$$\mu(\{0,1\})=1$$
which has nothing to do with the portion of the set of outcomes, which would be represented by the function $\mu'(S)=\frac{|S|}2$.</p>
<p>In general, your discussion of cardinality is correct, but it is irrelevant. Moreover, the conclusions you draw are inconsistent. The sets $(0,1)$ and $(0,\frac{1}2]$ and $(\frac{1}2,1)$ are pairwise equally large, so your reasoning says they are equally probable. However, the number was defined to be in $(0,1)$ so we're saying all the probabilities are $1$ - so we're saying that we're certain that the result will be in two disjoint intervals. This <em>never</em> happens, yet your method predicts that it always happens.</p>
<p>On a somewhat different note, but related in the big picture, you talk about "uncountably infinite sets" having the property that any non-trivial interval is also uncountable. This is true of $\mathbb R$, but not of all uncountable sets - for example, the set $(-\infty,-1]\cup \{0\} \cup [1,\infty)$ is uncountable, yet its intersection with the interval $(-1,1)$ is $\{0\}$, which is not uncountably infinite. Worse, not all uncountable sets have an intrinsic notion of ordering - how, for instance, do you order the set of subsets of natural numbers? The problem is not that there's no answer, but that there are many conflicting answers to that.</p>
<p>I think, maybe, the big thing to think about here is that sets really don't have a lot of structure. Mathematicians add more structure to sets, like probability measures $\mu$ or orders, and these fundamentally change their nature. Though bare sets have counterintuitive results with sets containing equally large copies of themselves, these don't necessarily translate when more structure is added.</p>
| <p>The OP's answer is incorrect. The numbers are not chosen based on <a href="https://en.wikipedia.org/wiki/Cardinality" rel="nofollow noreferrer">cardinality</a>, but based on <a href="https://en.wikipedia.org/wiki/Measure_(mathematics)" rel="nofollow noreferrer">measure</a>. It is not possible to define a <a href="https://en.wikipedia.org/wiki/Probability_distribution" rel="nofollow noreferrer">probability distribution</a> using cardinality (on an infinite set). However it is possible using measure. </p>
<p>Although the problem doesn't specify, if we assume the <a href="https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)" rel="nofollow noreferrer">uniform distribution</a> on $[0,1]$, then if $x=0.03$ then $y$ will be greater than $x$ 97% of the time. Of course, if a different probability distribution is used to select $x,y$, then a different answer will arise. It turns out that it is possible to win more than half the time even NOT KNOWING the distribution used, see this amazing result <a href="https://math.stackexchange.com/questions/655972/help-rules-of-a-game-whose-details-i-dont-remember/656426#656426">here</a>.</p>
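One such strategy -- pick a random threshold $t\in(0,1)$ and declare the revealed number the larger one iff it exceeds $t$ -- is easy to simulate. A quick Monte Carlo sketch of mine (both hidden numbers drawn uniformly here, which is an assumption; the linked result is stronger, beating $1/2$ for any fixed pair of distinct numbers):

```python
import random

def trial(rng):
    """One round of the game with both hidden numbers uniform on (0,1)."""
    x, y = rng.random(), rng.random()
    seen, other = (x, y) if rng.random() < 0.5 else (y, x)  # reveal one at random
    t = rng.random()               # the random threshold
    say_larger = seen > t          # declare "largest" iff seen exceeds t
    return say_larger == (seen > other)

rng = random.Random(0)
N = 200_000
rate = sum(trial(rng) for _ in range(N)) / N
assert rate > 0.55                 # strictly better than coin-flipping
```

With uniform inputs the success probability is $\frac12 + \frac12\,\Pr(t \text{ falls between the two numbers}) = \frac12 + \frac16 \approx 0.67$.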
|
logic | <p>Like most of us, I struggled a lot when I first heard about the Axiom of Choice (AC) and its consequences. Some things that can be derived from AC don't agree with my intuition. Consider for instance Zermelo's well-ordering theorem applied to $\mathbb{R}.$ </p>
<p>For many years I wished to abandon this evil axiom, but I was powerless. I mistakenly thought that assuming $\neg AC$ implies that all classical results using AC are then lost -- the equivalence of Cauchy and Heine continuity, for example. I wasn't aware that there are alternatives to AC.</p>
<p>Finally when dealing with the characterization of Noetherian rings I came across Axiom of Dependent Choice (DC). It sounded so right to me. I loved it since I read it for the very first time.</p>
<p>Since that day I have tried to re-examine all results that use AC to find out whether DC is enough.</p>
<blockquote>
<p><strong>Question.</strong> What results follows from DC and what results require full AC?</p>
</blockquote>
<p>I am interested in the results form all mathematics. Set theory, topology, algebra, logic, etc.</p>
<p>Obviously all the results that are equivalent to AC fall into the latter group.</p>
| <p>Of course, a complete answer is impossible to give here, since it would have to cover so many details about $\sf DC$, other choice principles, and modern mathematics.</p>
<p>Let me give, in a nutshell, a few examples from every category of interest.</p>
<h1>$\bf 1.$ What is equivalent to $\sf DC$</h1>
<ul>
<li><p>Every tree of height $\omega$ without maximal nodes has a branch; or in a more Zorn-like manner, every partial order where every finite chain has an upper bound, has a maximal element or a countable chain. (Since all finite chains have upper bounds, this translates to "Every partial order has a maximal element or a countable chain.")</p></li>
<li><p>Baire's Category Theorem. The intersection of a countable family of dense open sets in a complete metric space is dense.</p></li>
<li><p>The downwards Löwenheim–Skolem theorem for countable languages: if $\cal L$ is a countable language and $M$ is a structure for $\cal L$, then there is an elementary submodel $N\subseteq M$ which is countable.</p></li>
<li><p>A partial order without infinite descending chains is well-founded, i.e. every non-empty set has a minimal element.</p></li>
</ul>
<h1>$\bf 2.$ What is weaker than $\sf DC$</h1>
<ul>
<li><p>The axiom of choice for countable families.</p></li>
<li><p>Every infinite set has a countably infinite subset.</p></li>
<li><p>The countable union of countable sets is countable.</p></li>
<li><p>The real numbers are not a countable union of countable sets.</p></li>
<li><p>There is a nontrivial measure on Borel sets which is $\sigma$-additive. </p></li>
<li><p>There is no $\alpha$ such that $\aleph_{\alpha+1}$ has countable cofinality.</p></li>
</ul>
<h1>$\bf 3.$ What does not follow from $\sf DC$</h1>
<ul>
<li><p>The Hahn–Banach theorem.</p></li>
<li><p>The existence of irregular sets of reals (e.g. sets which are not measurable, sets which do not have the Baire property, sets which do not have a perfect subset).</p></li>
<li><p>Every set can be linearly ordered.</p></li>
<li><p>The existence of free ultrafilters on $\Bbb N$.</p></li>
<li><p>The Krein–Milman theorem.</p></li>
<li><p>The existence of a discontinuous linear functional on $\ell^1$; or even the existence of a non-zero linear functional on $\ell^\infty/c_0$.</p></li>
<li><p>The compactness theorem (for first-order logic), which itself is equivalent to Tychonoff's theorem restricted to Hausdorff spaces.</p></li>
<li><p>The axiom of choice for arbitrary families of finite sets (or really, anything which requires more than countably many choices).</p></li>
</ul>
<p>These lists can be extended ad infinitum. Since $\sf DC$ is one of the most useful choice principles out there, its uses can be implicit (or explicit) in many works of modern mathematics. Even those things which do not follow from $\sf DC$ may have weak instances that do, which turn out to be as good and as useful for things like analysis and number theory as the full axiom of choice.</p>
| <p>Fallen, there is one result whose validity in ZF+ACC (countable choice) I can reassure you of, namely the $\sigma$-additivity of the Lebesgue measure. On the other hand, it is consistent with ZF alone that there exists a strictly positive real function with zero Lebesgue integral; see <a href="https://arxiv.org/abs/1705.00493" rel="nofollow noreferrer">this recent article</a>.</p>
<p>Another interesting pair of facts is that </p>
<p>(1) the <a href="https://en.wikipedia.org/wiki/Transfer_principle" rel="nofollow noreferrer">transfer principle</a> in Robinson's framework for analysis with infinitesimals for a <em>definable extension</em> $\mathbb{R}\hookrightarrow{}^\ast\mathbb{R}$ can be proved in ZF+ACC. The catch is that </p>
<p>(2) the properness of that particular extension requires stronger foundational material such as the existence of maximal ideals.</p>
<p>So in your announced scheme of things (2) would classify as "a horrible lie" as you put it, but perhaps (1) can assuage your concerns.</p>
|
logic | <p>You can see how mathematical notation evolved during the last centuries <a href="http://www.prandiano.com.br/html/fr_arq.htm">here</a>.</p>
<p>I think everyone here knows that a bad notation can turn an otherwise elementary problem into a difficult one. Just try to do basic arithmetic with Roman numerals, for example.</p>
<p>As a computer programmer I know that in some situations programming-language notation plays a critical role, because some algorithms are better expressed in a particular language than in others, even though they all share the same basis: Lambda Calculus, Turing machines, etc.</p>
<p>Linguists have their so-called <a href="http://en.wikipedia.org/wiki/Sapir%E2%80%93Whorf_hypothesis">Sapir–Whorf hypothesis</a>, which "...holds that the structure of a language affects the ways in which its respective speakers conceptualize their world, i.e. their world view, or otherwise influences their cognitive processes."</p>
<p>Then, I ask: is there any field in Math that studies Math's notation and its influence, for good or for bad, on Math itself?</p>
<p>Modifying the fragment in the paragraph above:
is it possible that the notation, the symbols and the language used in Math affects the ways in which Mathematicians conceptualize their world and influences their cognitive processes?</p>
| <p>On a quite different tack, you might well be interested in Mohan Ganesalingham's <em>The Language of Mathematics: A Linguistic and Philosophical Investigation</em> (Springer 2013). </p>
<p>The author is an outstanding mathematician (Senior Wrangler, no less), and has a degree in linguistics, and now works in computer science. The book is based on a prize-winning thesis. I mention those facts in case the word "philosophical" in the sub-title puts you off! Mohan seriously knows his stuff.</p>
| <p>In my opinion, this is one of the exciting promises of intuitionistic mathematics and topos theory.</p>
<p>By discarding the law of the excluded middle $\neg\neg P\implies P$ (of course, we can add it back later if we want), much more of the structure of our axioms becomes evident in our theorems, because we can no longer label arbitrary statements as true-or-false.</p>
<p>For example, in topos theory, one no longer speaks of "the" real numbers, but of "a" real numbers object in a topos. When the topos is $Set$, nothing special happens. But in a different setting, we may have a countable real numbers object, or nontrivial subobjects without points. My understanding is that there are also topoi for which every function $\mathbb{R}\to\mathbb{R}$ is differentiable (which should be a relief to any physicists who pretend this all the time).</p>
<p>I like this view because it makes certain properties of "the" real numbers—its cardinality, for example—appear to be artifacts of the language of sets, which relies upon a firm notion of membership. But since the vast majority of real numbers cannot be pinned down in any meaningful sense, it is not hard to argue that treating $\mathbb{R}$ as a set is, at the very least, a choice of perspective.</p>
<p>And this can actually be useful—the now-classic quantum physics paper <a href="http://arxiv.org/abs/0803.0417" rel="nofollow">What is a Thing?</a> has argued that the standard $\mathbb{R}$ is inadequate for non-classical theories of physics. After all, if the number of particles in a system is dependent on how we measure the system, then why should we expect <em>anything</em> in the universe to behave like a set?</p>
|
logic | <p>This question is mostly from pure curiosity.</p>
<p>We know that any formal system cannot completely pin down the natural numbers. So regardless of whether we're reasoning in PA or ZFC or something else, there will be nonstandard models of the natural numbers, which admit the existence of additional integers, larger than all the finite ones.</p>
<p>Suppose that for some particular Turing machine $Z$, I have proven that $Z$ halts, but that it does so only after some ridiculously huge number of steps $N$, such as $A(A(A(10)))$, where $A$ is the Ackermann sequence. My question is, in a case like this, how can I know for sure that $N$ is a standard natural number and not a nonstandard one?</p>
<p>Of course, in principle I could just simulate the Turing machine until it halts, at which point I would know the value of $N$ and could be sure it is a standard natural number. But in practice I can't do that, because the universe would come to an end long before I was finished. (Let's suppose, unless this is impossible, that there is no way around this for this particular Turing machine; that is, any proof of the exact value of $N$ has a length comparable to $N$.)</p>
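<p>(For a sense of scale, here is a minimal Python sketch of the two-argument Ackermann–Péter function, assuming the one-argument $A(n)$ above is essentially $A(n,n)$; the exact variant does not matter for the point being made.)</p>

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10_000)

@lru_cache(maxsize=None)
def ack(m, n):
    """Two-argument Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(1, 5), ack(2, 5), ack(3, 5))  # 7 13 253
# Already ack(4, 2) = 2**65536 - 3 has 19729 decimal digits;
# ack(4, 3), let alone A(A(A(10))), is far beyond physical computation.
```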
<p>If $N$ does turn out to be a nonstandard number then the Turing machine doesn't halt after all, since when simulating it we would have to count up through every single standard natural number before reaching $N$. This would seem to put us in a tricky situation, because we've proven that some $N$ exists with a particular property, but unless we can say for sure that $N$ is a standard natural number, we haven't actually proven the Turing machine halts at all!</p>
<p>My question is simply whether it's possible for this situation to occur, or if it's not, why not?</p>
<p>I appreciate that the answer to this might depend on the nature of the proof that $Z$ halts, which I haven't specified. If this is the case, which kinds of proof are susceptible to this issue, and which are not?</p>
| <p>[I will take for granted in this answer that the standard integers "exist" in some Platonic sense, since otherwise it's not clear to me that your question is even meaningful.]</p>
<p>You're thinking about this all wrong. Do you believe the axioms of PA are true for the standard integers? Then you should also believe anything you prove from PA is also true for the standard integers. In particular, if you prove that there exists some integer with some property, that existence statement is true in the standard integers.</p>
<p>To put it another way, anything you prove from your axioms is true in <em>any</em> model of the axioms, standard or nonstandard. So the existence of nonstandard models is totally irrelevant. All that is relevant is whether the standard model exists (in other words, whether your axioms are true for the standard integers).</p>
<p>Now, I should point out that this notion is a lot slipperier for something like ZFC than for something like PA. From a philosophical standpoint, the idea that there actually exists a Platonic "standard set-theoretic universe" that ZFC is correctly describing is a lot less coherent than the corresponding statement for integers. For all we know, ZFC may actually be inconsistent and so it proves all sorts of false statements about the integers. Or maybe it is consistent, but it still proves false statements about the integers (because it only has nonstandard models). But if you do believe that the ZFC axioms are true in their intended interpretation, then you should believe that any consequences of them are also true (including consequences about the integers).</p>
| <blockquote>
<p>We know that any formal system cannot completely pin down the natural numbers.</p>
</blockquote>
<p>Incidentally, I said exactly this <a href="https://math.stackexchange.com/a/2251401/21820">here</a>. Besides what I said in that post, I wish to elaborate on the following points:</p>
<ul>
<li><p>A generalized version of the Godel-Rosser incompleteness theorem shows convincingly that there is no practical formal system that can pin down the natural numbers. Specifically, we can easily write a program that, given any proof verifier program for any formal system that interprets arithmetic, will produce an explicit arithmetical sentence that can neither be proven nor disproven by that system. How convincingly? If we phrase the incompleteness theorem in a certain way, it can be proven even in intuitionistic logic. But we still need to work in some meta-system that 'has access to' a model of PA or equivalent, otherwise we cannot even talk about finite strings, which are the <a href="http://math.stackexchange.com/a/1808558/21820">basic building blocks</a> of any practical formal system.</p></li>
<li><p>The philosophical issue is that, as far as the real world is concerned, the empirical evidence suggests that there is no real-world model of PA, due partly to the finite size of the observable universe, but also to the fact that a physical storage device with extremely large capacity (on the order of the size of the observable universe) will degrade faster than you can use it! So there is a weird philosophical problem with the preceding point, because if one does not believe that the collection of finite strings embeds into the real world, then the incompleteness theorems do not actually apply...</p></li>
<li><p>On the other hand, there is undeniably huge empirical evidence that the theorems of PA when translated into statements about real-world programs are correct at human scales. Just for example, there is no known counter-example to the theorems underlying RSA decryption, which depend on Fermat's little theorem among other basic number-theoretic theorems applied to natural numbers on the order of <span class="math-container">$2^{2048}$</span>. So one still has to explain the incredible accuracy of PA at small scales even if it cannot have a real-world model.</p></li>
</ul>
<hr>
<p>But suspending philosophical disbelief, and working in a weak formal system called ACA that practically every logician believes is sound (with respect to the real world), there are many things we can in fact say for sure (besides the incompleteness theorem) that answer your question (if ACA is sound).</p>
<blockquote>
<p>Suppose that for some particular Turing machine <span class="math-container">$Z$</span>, I have proven that <span class="math-container">$Z$</span> halts [after some number <span class="math-container">$N$</span> of steps. H]ow can I know for sure that <span class="math-container">$N$</span> is a standard natural number and not a nonstandard one?</p>
</blockquote>
<p>Your proof is done within some formal system <span class="math-container">$S$</span>. If <span class="math-container">$S$</span> is <span class="math-container">$Σ_1$</span>-sound (with respect to the real-world) then you can know for sure that <span class="math-container">$Z$</span> really halts. It is entirely possible that <span class="math-container">$S$</span> is not <span class="math-container">$Σ_1$</span>-sound, and that you can never figure it out. For example, given any practical formal system <span class="math-container">$S$</span> that interprets arithmetic, let <span class="math-container">$S' = S + \neg \text{Con}(S)$</span>. If <span class="math-container">$S$</span> is consistent, then <span class="math-container">$S'$</span> is also consistent but <span class="math-container">$Σ_1$</span>-unsound. In particular, it proves that the proof verifier for <span class="math-container">$S$</span> halts on some purported proof of contradiction over <span class="math-container">$S$</span>, which is exactly the type of question you are concerned about!</p>
<p>Worse still, the arithmetical unsoundness of a formal system can lie at any level of the arithmetical hierarchy, as constructively shown <a href="https://math.stackexchange.com/a/2582086/21820">in this post</a>. Precisely, if <span class="math-container">$S$</span> is <span class="math-container">$Σ_n$</span>-sound then there is a <span class="math-container">$Σ_n$</span>-sound extension of <span class="math-container">$S$</span> that is <span class="math-container">$Σ_{n+1}$</span>-unsound.</p>
<p>These imply that it may be difficult to have confidence in the soundness of a formal system without some philosophical justification. Firstly, unsoundness cannot be detected by checking for a proof of inconsistency. Now, if <span class="math-container">$S$</span> is sufficiently expressive, we may be able to state "<span class="math-container">$S$</span> is arithmetically sound" over <span class="math-container">$S$</span>, in which case we can check for a proof of its negation over <span class="math-container">$S$</span>, and if so we know something is really wrong. But even for mere consistency, if we enumerate (endlessly) all possible proofs and never find a contradiction, we still have only enumerated an 'infinitesimal' fraction of all possible proofs, far too little to be sure that there really is no contradiction.</p>
<p>It gets worse. Consider the following:</p>
<blockquote>
<p>Let <span class="math-container">$Q$</span> be some <span class="math-container">$Π_1$</span>-sentence such that <span class="math-container">$S$</span> proves ( <span class="math-container">$Q$</span> is true iff there is no proof of <span class="math-container">$Q$</span> over <span class="math-container">$S$</span> with less than <span class="math-container">$2^{10000}$</span> symbols ).</p>
</blockquote>
<p>It turns out that we can indeed easily construct such a sentence <span class="math-container">$Q$</span>, using the standard Godel-coding tricks and the fixed-point theorem. What may be shocking to those unfamiliar with this is that <span class="math-container">$Q$</span> is actually quite short (less than a billion symbols if <span class="math-container">$S$</span> is something like ZFC), and if <span class="math-container">$S$</span> is <span class="math-container">$Σ_1$</span>-complete, then <span class="math-container">$Q$</span> is provable over <span class="math-container">$S$</span> (because <span class="math-container">$S$</span> can check every possible proof with less than <span class="math-container">$2^{10000}$</span> symbols) but its shortest proof has at least <span class="math-container">$2^{10000}$</span> symbols!</p>
<p>Now let <span class="math-container">$T = S + \neg Q$</span>, where <span class="math-container">$S$</span> has any reasonable deductive system. Firstly, <span class="math-container">$T$</span> is inconsistent. Secondly, the shortest proof of its inconsistency is on the order of <span class="math-container">$2^{10000}/len(Q)$</span>, because it can be converted into a proof of ( <span class="math-container">$\neg Q \to \bot$</span> ) over <span class="math-container">$S$</span>, which after a finite number of extra steps would give a proof of <span class="math-container">$Q$</span> over <span class="math-container">$S$</span>.</p>
<p>In conclusion, a formal system could have a rather small description, but have an inconsistency whose proof is so long that we can never ever store it in the physical world...</p>
<hr>
<p>Finally:</p>
<blockquote>
<p>I appreciate that the answer to this might depend on the nature of the proof that <span class="math-container">$Z$</span> halts, which I haven't specified. If this is the case, which kinds of proof are susceptible to this issue, and which are not?</p>
</blockquote>
<p>It should be clear from all the above that it is indeed the case. To repeat, you need the proof that <span class="math-container">$Z$</span> halts to be done within a formal system that is <span class="math-container">$Σ_1$</span>-sound. How could you know that? Well we cannot know any such thing for sure. Almost all logicians believe that ACA is arithmetically sound, but different logicians start doubting soundness at different points as you climb up the hierarchy of formal systems. Some doubt full second-order arithmetic, called Z2, because of its impredicative comprehension axiom. Others think it still is fine, but doubt ZFC. Some think that ZFC is fine, but doubt some large cardinal axioms.</p>
|
geometry | <p>I would like to find the apothem of a regular pentagon. It follows from </p>
<p>$$\cos \dfrac{2\pi }{5}=\dfrac{-1+\sqrt{5}}{4}.$$</p>
<p>But how can this be proved (geometrically or trigonometrically)? </p>
| <p>Since $x := \cos \frac{2 \pi}{5} = \frac{z + z^{-1}}{2}$ where $z:=e^{\frac{2 i \pi}{5}}$, and $1+z+z^2+z^3+z^4=0$ (for $z^5=1$ and $z \neq 1$), we get $4x^2+2x-1 = (z^2+2+z^{-2}) + (z+z^{-1}) - 1 = 1+z+z^2+z^3+z^4 = 0$ (using $z^{-1}=z^4$ and $z^{-2}=z^3$), i.e. $x^2+\frac{x}{2}-\frac{1}{4}=0$, and voilà.</p>
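<p>A quick numerical check of the quadratic and its positive root (Python, not part of the original argument):</p>

```python
import math

x = math.cos(2 * math.pi / 5)
# x is a root of x^2 + x/2 - 1/4 = 0, whose positive root is (sqrt(5)-1)/4
print(abs(x**2 + x / 2 - 0.25) < 1e-12)         # True
print(abs(x - (math.sqrt(5) - 1) / 4) < 1e-12)  # True
```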
| <p><img src="https://i.sstatic.net/IvfAH.png" alt="diagram"></p>
<p>Consider a $\triangle ABC$ with $AB=1$, $\mathrm{m}\angle A=\frac{\pi}{5}$ and $\mathrm{m}\angle B=\mathrm{m}\angle C=\frac{2\pi}{5}$, and point $D$ on $\overline{AC}$ such that $\overline{BD}$ bisects $\angle ABC$. Now, $\mathrm{m}\angle CBD=\frac{\pi}{5}$ and $\mathrm{m}\angle BDC=\frac{2\pi}{5}$, so $\triangle ABC\sim\triangle BCD$. Also note that $\triangle ABD$ is isosceles (so $AD=BD$) and $\triangle BCD$ is isosceles (so $BC=BD$), hence $BC=BD=AD$.</p>
<p>Let $x=BC=BD=AD$. From the similar triangles, $\frac{AB}{BC}=\frac{BC}{CD}$ or $\frac{1}{x}=\frac{x}{1-x}$, so $1-x=x^2$ and $x=\frac{\sqrt{5}-1}{2}$ (the other solution is negative and lengths cannot be negative).</p>
<p>Now, apply the Law of Cosines to $\triangle ABC$:
$$\begin{align}
\cos\frac{2\pi}{5}=\cos C&=\frac{a^2+b^2-c^2}{2ab}
\\\\
&=\frac{\left(\frac{\sqrt{5}-1}{2}\right)^2+1^2-1^2}{2\cdot\frac{\sqrt{5}-1}{2}\cdot 1}
\\\\
&=\frac{\frac{\sqrt{5}-1}{2} \cdot \frac{\sqrt{5}-1}{2}}{2\cdot\frac{\sqrt{5}-1}{2}}
\\\\
&=\frac{\sqrt{5}-1}{4}.
\end{align}$$</p>
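<p>The two key facts of the argument, $x^2=1-x$ and the Law of Cosines value, can be checked numerically (a Python sketch, not part of the original answer):</p>

```python
import math

x = (math.sqrt(5) - 1) / 2           # BC = BD = AD from the similar triangles
print(abs(x**2 - (1 - x)) < 1e-12)   # True: x solves x^2 = 1 - x

# Law of Cosines at angle C with a = x, b = c = 1
cosC = (x**2 + 1 - 1) / (2 * x)
print(abs(cosC - math.cos(2 * math.pi / 5)) < 1e-12)  # True
```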
|
combinatorics | <p>The continued fraction of this series exhibits a truly crazy pattern, and I have found no reference for it so far. We have:</p>
<p><span class="math-container">$$\sum_{k=1}^\infty \frac{1}{(2^k)!}=0.5416914682540160487415778421$$</span></p>
<p>But the continued fraction is just beautiful:</p>
<p><code>[1, 1, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 1, 1, 601080389, 2, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 1, 1, 1832624140942590533, 2, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 2, 601080389, 1, 1, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 1, 1, 23951146041928082866135587776380551749, 2, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 1, 1, 601080389, 2, 5, 2, 69, 1, 1, 5, 2, 12869, 1, 1, 5, 1, 1, 69, 2, 5, 2, 1832624140942590533, 1, 1, 5, 2, 69, 1, 1, 5, 1, 1, 12869, 2, 5, 1, 1, 69, 2, 5, 2, 601080389, 1, 1, 5, 2, 69, 1, 1, 5, 2,...]</code></p>
<p>All of these large numbers are not just random - they have a simple closed form:</p>
<p><span class="math-container">$$A_n= \binom{2^n}{2^{n-1}} -1$$</span></p>
<p><span class="math-container">$$A_1=1$$</span></p>
<p><span class="math-container">$$A_2=5$$</span></p>
<p><span class="math-container">$$A_3=69$$</span></p>
<p><span class="math-container">$$A_4=12869$$</span></p>
<p><span class="math-container">$$A_5=601080389$$</span></p>
<p>And so on. This sequence itself is not in OEIS; only a larger sequence is, which contains this one as a subsequence: <a href="https://oeis.org/A014495" rel="noreferrer">https://oeis.org/A014495</a></p>
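<p>A quick check of the closed form against the values listed above (Python; <code>math.comb</code> needs Python 3.8+):</p>

```python
from math import comb

def A(n):
    # A_n = C(2^n, 2^(n-1)) - 1
    return comb(2**n, 2**(n - 1)) - 1

print([A(n) for n in range(1, 6)])
# [1, 5, 69, 12869, 601080389]
```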
<blockquote>
<p>What is the explanation for this?</p>
<p>Is there a regular pattern in this continued fraction (in the positions of numbers)?</p>
</blockquote>
<p>Is there generalizations for other sums of the form <span class="math-container">$\sum_{k=1}^\infty \frac{1}{(a^k)!}$</span>?</p>
<hr />
<p><strong>Edit</strong></p>
<p>I think a good move will be to rename the strings of small numbers:</p>
<p><span class="math-container">$$a=1, 1, 5, 2,\qquad b=1, 1, 5, 1, 1,\qquad c=2,5,1,1,\qquad d=2, 5, 2$$</span></p>
<p>As a side note, if we could set <span class="math-container">$1,1=2$</span> then all these strings would be the same.</p>
<p>Now we rewrite the sequence. I will denote each <span class="math-container">$A_n$</span> by just its index <span class="math-container">$n$</span>:</p>
<blockquote>
<p><span class="math-container">$$[a, 3, b, 4, c, 3, c, 5, d, 3, a, 4, b, 3, c, 6, d, 3, b, 4, c, 3, d, 5, a, 3, a, 4, b, 3, c, 7, \\ d, 3, b, 4, c, 3, c, 5, d, 3, a, 4, b, 3, d, 6, a, 3, b, 4, c, 3, d, 5, a, 3, a,...]$$</span></p>
</blockquote>
<p><span class="math-container">$$[a3b4c3c5d3a4b3c6d3b4c3d5a3a4b3c7d3b4c3c5d3a4b3d6a3b4c3d5a3a,...]$$</span></p>
<p>Now we have new large numbers <span class="math-container">$A_n$</span> appear at positions <span class="math-container">$2^n$</span>. And postitions of the same numbers are in a simple arithmetic progression with a difference <span class="math-container">$2^n$</span> as well.</p>
<p>Now we only have to figure out the pattern (if any exists) for <span class="math-container">$a,b,c,d$</span>.</p>
<blockquote>
<p>The <span class="math-container">$10~000$</span> terms of the continued fraction are uploaded at github <a href="http://gist.github.com/anonymous/20d6f0e773a2391dc01815e239eacc79" rel="noreferrer">here</a>.</p>
</blockquote>
<hr />
<p>I also link my <a href="https://math.stackexchange.com/q/1856226/269624">related question</a>; from the information there we can conclude that the series above provides the greedy-algorithm Egyptian fraction expansion of the number, and that the number is irrational by the theorem stated in <a href="http://www.jstor.org/stable/2305906" rel="noreferrer">this paper</a>.</p>
| <p>Actually, your pattern is true and it is relatively easy to prove. Most of it can be found in an article by Henry Cohn in Acta Arithmetica (1996) ("Symmetry and specializability in continued fractions") where he finds similar patterns for other kinds of continued fractions, such as $\sum \frac{1}{10^{n!}}$. Curiously, he doesn't mention your particular series, although his method applies directly to it.</p>
<p>Let $[a_0,a_1,a_2,\dots,a_m]$ be a continued fraction and, as usual, let
$$ \frac{p_m}{q_m} = [a_0,a_1,a_2,\dots,a_m] $$
We use the following lemma from the mentioned article (the proof is not difficult and only uses elementary facts about continued fractions): </p>
<p><strong>[Folding Lemma]</strong>
$$ \frac{p_m}{q_m} + \frac{(-1)^m}{xq_m^2} = [a_0,a_1,a_2,\dots,a_m,x,-a_m,-a_{m-1},\dots,-a_2,-a_1] $$</p>
<p>This involves negative integers, but it can be easily transformed into a similar expression involving only positive numbers, using the fact that for any continued fraction:
$$ [\dots, a, -\beta] = [\dots, a-1, 1, \beta-1] $$
So
$$ \frac{p_m}{q_m} + \frac{(-1)^m}{xq_m^2} = [a_0,a_1,a_2,\dots,a_m,x-1,1,a_m-1,a_{m-1},\dots,a_2,a_1] $$</p>
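<p>The positive form of the folding lemma is easy to check with exact rational arithmetic; here is a quick Python sketch (not from the article) verifying it on the step from $S_2$ to $S_3$ that is worked out below:</p>

```python
from fractions import Fraction

def cf_value(a):
    """Evaluate a continued fraction [a0; a1, ..., am] exactly."""
    v = Fraction(a[-1])
    for ai in reversed(a[:-1]):
        v = ai + 1 / v
    return v

a = [0, 1, 1, 5, 2]                    # S_2 = 1/2 + 1/4!, here m = 4 (even)
pq = cf_value(a)                       # p_m / q_m = 13/24
qm = pq.denominator
x = 70                                 # binom(8, 4), so 1/(x*qm^2) = 1/8!

lhs = pq + Fraction(1, x * qm**2)
folded = a + [x - 1, 1, a[-1] - 1] + a[1:-1][::-1]
print(folded)                          # [0, 1, 1, 5, 2, 69, 1, 1, 5, 1, 1]
print(lhs == cf_value(folded))         # True
```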
<p>With all this in hand, write the $n$th partial sum of your series as
$$ S_n = \sum_{k=1}^n \frac{1}{2^k!} = [0,a_1,a_2,\dots,a_m] = \frac{p_m}{q_m}$$</p>
<p>where we can always take $m$ even (i.e. if $m$ is odd and $a_m>1$, then we can consider instead the continued fraction $[0,a_1,a_2,\dots,a_m-1,1]$, and so on). </p>
<p>Now $q_m = 2^n!$; we see this by induction on $n$: it is obvious for $n=1$, and if $S_{n-1} = P/2^{n-1}!$ then
$$ S_n = \frac{P}{2^{n-1}!} + \frac{1}{2^n!} = \frac{P (2^n!/2^{n-1}!) + 1}{2^n!} $$
Now any common factor of the numerator and the denominator has to be a factor of $2^{n-1}!$ dividing also $P$, but this is impossible as both are coprime, so $q_m = 2^n!$ and we are done. </p>
<p>Using the "positive" form of the folding lemma with $x = \binom{2^{n+1}}{2^n}$ we get:</p>
<p>$$ \frac{p_m}{q_m} + \frac{(-1)^m}{\binom{2^{n+1}}{2^n}(2^n!)^2} =
\frac{p_m}{q_m} + \frac{1}{2^n!} =
[0,a_1,a_2,\dots,a_m,\binom{2^{n+1}}{2^n}-1,1,a_m-1,a_{m-1},\dots,a_1] $$</p>
<p>And we get the "shape" of the continued fraction and the your $A_m$. Let's see several steps: </p>
<p>We start with the first term, which is
$$ \frac{1}{2} = [0,2] $$
As $m$ is odd, we use instead the continued fraction
$$ \frac{1}{2} = [0,1,1] $$
and apply the last formula getting
$$ \frac{1}{2}+\frac{1}{2^2!} = [0,1,1,5,1,0,1] $$
We can dispose of the zeros using the fact that for any continued fraction:
$$ [\dots, a, 0, b, \dots] = [\dots, a+b, \dots ] $$
so
$$ \frac{1}{2}+\frac{1}{2^2!} = [0,1,1,5,2] $$
this time $m$ is even so we apply again the formula getting
$$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!} = [0,1,1,5,2,69,1,1,5,1,1] $$
Again $m$ is even (and it will remain even from here on, as is easy to infer), so we apply the formula again, getting as the next term
$$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!}+\frac{1}{2^4!} = [0,1,1,5,2,69,1,1,5,1,1,12869,1,0,1,5,1,1,69,2,5,1,1] $$
and we reduce it using the zero trick:
$$ \frac{1}{2}+\frac{1}{2^2!}+\frac{1}{2^3!}+\frac{1}{2^4!} = [0,1,1,5,2,69,1,1,5,1,1,12869,2,5,1,1,69,2,5,1,1] $$</p>
<p>From here it is easy to see that the obtained continued fraction always has an even number of terms, and we always have to remove the zero, leaving a continued fraction ending again in $1,1$. So the rule to obtain the continued fraction out to an arbitrary number of terms is to repeat the following step: if the last continued fraction has the shape $[0,1,1,b_1,\dots,b_k,1,1]$, then the next continued fraction will be
$$[0,1,1,b_1,\dots,b_k,1,1,A_n,2,b_k,\dots,b_1,1,1 ]$$
From this you can easily derive the patterns you have found for the positions of appearance of the different integers. </p>
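<p>This doubling rule can be checked directly with exact rational arithmetic. The following Python sketch (not from the original answer, using the closed form for $A_n$ from the question) starts from the continued fraction of $S_3$ computed above, applies the rule up to $n=7$, and compares against the continued fractions of the partial sums; the canonical expansion algorithm merges the trailing $1,1$ into a final $2$:</p>

```python
from fractions import Fraction
from math import comb, factorial

def cf(x):
    """Canonical continued fraction of a positive Fraction (last term >= 2)."""
    out = []
    while True:
        a, r = divmod(x.numerator, x.denominator)
        out.append(a)
        if r == 0:
            return out
        x = Fraction(x.denominator, r)

def A(n):
    return comb(2**n, 2**(n - 1)) - 1

expansion = [0, 1, 1, 5, 2, 69, 1, 1, 5, 1, 1]   # CF of S_3, from the worked example
S = sum(Fraction(1, factorial(2**k)) for k in range(1, 4))
assert cf(S) == expansion[:-2] + [2]             # trailing 1,1 merges into a 2

for n in range(4, 8):
    b = expansion[3:-2]                          # b_1, ..., b_k
    expansion = expansion + [A(n), 2] + b[::-1] + [1, 1]   # the doubling rule
    S += Fraction(1, factorial(2**n))
    assert cf(S) == expansion[:-2] + [2]
print(len(expansion))  # 161: each step sends the length len to 2*len - 1
```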
| <p>I have computed $10~000$ entries of the continued fraction, using Mathematica (the number itself was evaluated with $100~000$ digit precision).</p>
<p>The results show that the pattern is very simple for the most part.</p>
<p>First, we denote again:</p>
<p>$$A_n= \binom{2^n}{2^{n-1}} -1$$</p>
<p>The computed CF contains $A_n$ up to $n=13$, and the formula is numerically confirmed.</p>
<p>Now the positions of $A_n$ go like this ($P_n$ is the list of positions of $A_n$ among all CF entries):</p>
<p>$$P_2=P(5)=[3,8,13,18,23,28,\dots]$$</p>
<p>$$P_3=P(69)=[5,16,25,36,45,56,\dots]$$</p>
<p>$$P_4=[11,30,51,70,91,110,\dots]$$</p>
<p>$$P_5=[21,60,101,140,181,220,\dots]$$</p>
<p>$$P_6=[41,120,201,280,361,440,\dots]$$</p>
<p>But all these are just a combination of two <strong>arithmetic progressions</strong> for even and odd terms!</p>
<p>The first two terms are a little off, but the rest goes exactly like $P_4,P_5$, meaning for $n \geq 4$ we can write the general position for $A_n$ ($k=0,1,2,\dots$):</p>
<blockquote>
<p>$$p_k(A_n)= \begin{cases} 5 \cdot 2^{n-3}(1+2 k)+1,\qquad k \text{ even} \\ 5 \cdot 2^{n-3}(1+2 k),\qquad \qquad k \text{ odd} \end{cases}$$</p>
</blockquote>
<p>As special, different cases, we have:</p>
<blockquote>
<p>$$p_k(5)=3+5 k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_k(69)= \begin{cases} 2(3+5 k)-1,\qquad k \text{ even} \\ 2(3+5 k),\qquad \qquad k \text{ odd} \end{cases}$$</p>
</blockquote>
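<p>These two formulas can be spot-checked against an exactly computed prefix of the continued fraction (a Python sketch; the CF of the partial sum $S_7$ agrees with the infinite expansion everywhere except at its very tail):</p>

```python
from fractions import Fraction
from math import factorial

def cf(x):
    """Canonical continued fraction expansion of a positive Fraction."""
    out = []
    while True:
        a, r = divmod(x.numerator, x.denominator)
        out.append(a)
        if r == 0:
            return out
        x = Fraction(x.denominator, r)

S = sum(Fraction(1, factorial(2**k)) for k in range(1, 8))
terms = cf(S)[1:]            # drop the leading 0 so terms[0] is entry number 1

# p_k(5) = 3 + 5k
ok5 = all(terms[(3 + 5*k) - 1] == 5 for k in range(29))
# p_k(69) = 2(3+5k)-1 for even k, 2(3+5k) for odd k
pos69 = [2*(3 + 5*k) - (1 if k % 2 == 0 else 0) for k in range(14)]
ok69 = all(terms[p - 1] == 69 for p in pos69)
print(ok5, ok69)
```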
<hr>
<p>For $P_1=1$ and for $2$ I can't see any definite pattern so far.</p>
<hr>
<blockquote>
<p>Basically, we now know the explicit expression for every CF entry and all of its positions in the list, except for entries $1$ and $2$.</p>
</blockquote>
<p>It's enough now to consider the positions of $2$, then we just fill the rest of the list with $1$. The positions of $2$ start like this:</p>
<p><code>[4, 12, 17, 22, 24, 29, 37, 42, 44, 52, 57, 59, 64, 69, 77, 82, 84, 92, 97, 102, 104, 109, 117, 119, 124, 132, 137, 139, 144, 149, 157, 162, 164, 172, 177, 182, 184, 189, 197, 202, 204, 212, 217, 219, 224, 229, 237, 239, 244, 252, 257, 262, 264, 269, 277, 279, 284, 292, 297, 299, 304, 309, 317, 322, 324, 332, 337, 342, 344, 349, 357, 362, 364, 372, 377, 379, 384, 389, 397, 402, 404, 412, 417, 422, 424, 429, 437, 439, 444, 452, 457, 459, 464, 469, 477, 479, 484, 492, 497, 502, 504, 509, 517, 522, 524, 532, 537, 539, 544, 549, 557, 559, 564, 572, 577, 582, 584, 589, 597,...]</code></p>
<p>So far I have found four uninterrupted patterns for $2$:</p>
<p>$$p_{1k}(2)=4+20k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_{2k}(2)=17+20k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_{3k}(2)=29+40k,\qquad k=0,1,2,\dots$$</p>
<p>$$p_{4k}(2)=12+40k,\qquad k=0,1,2,\dots$$</p>
<p><strong>Edit</strong></p>
<p>Discounting these four progressions, the rest of the sequence is very close to $20k$, but some numbers are $20k+2$ while some are $20k-1$, with no apparent pattern:</p>
<p><code>[22,42,59,82,102,119,139,162,182,202,219,239,262,279,299,322,342,362,379,402,422,439,459,479,502,522,539,559,582,599,619,642,662,682,699,722,742,759,779,802,822,842,859,879,902,919,939,959,982,1002,1019,1042,1062,1079,1099,1119,1142,1162,1179,1199,1222,1239,1259,1282,1302,1322,1339,1362,1382,1399,1419,1442,1462,1482,1499,1519,1542,1559,1579,1602,1622,1642,1659,1682,1702,1719,1739,1759,1782,1802,1819,1839,1862,1879,1899,1919,1942,1962,1979,2002,2022,2039,2059,2082,2102,2122,2139,2159,2182,2199,2219,2239,2262,2282,2299,2322,2342,2359,2379,2399,2422,2442,2459,2479,2502,2519,2539,2562,2582,2602,2619,2642,2662,2679,2699,2722,2742,2762,2779,2799,2822,2839,2859,2882,2902,2922,2939,2962,2982,2999,3019,3039,3062,3082,3099,3119,3142,3159,3179,3202,3222,3242,3259,3282,3302,3319,3339,3362,3382,3402,3419,3439,3462,3479,3499,3519,3542,3562,3579,3602,3622,3639,3659,3679,3702,3722,3739,3759,3782,3799,3819,3839,3862,3882,3899,3922,3942,3959,3979,4002,4022,4042,4059,4079,4102,4119,4139,4162,4182,4202,4219,4242,4262,4279,4299,4319,4342,4362,4379,4399,4422,4439,4459,4479,4502,4522,4539,4562,4582,4599,4619,4642,4662,4682,4699,4719,4742,4759,4779,4799,4822,4842,4859,4882,4902,4919,4939,4959,4982,5002,...]</code></p>
<hr>
<blockquote>
<p>The $10~000$ terms of the continued fraction are uploaded at github <a href="http://gist.github.com/anonymous/20d6f0e773a2391dc01815e239eacc79">here</a>. You can check my conjectures or try to obtain the full pattern.</p>
</blockquote>
<hr>
<blockquote>
<p>I would also like some hints and outlines for a proof of the above conjectures.</p>
</blockquote>
<p>I understand that it's quite likely that any of the patterns I found breaks down for some large $k$.</p>
|
differentiation | <p>How to calculate the gradient with respect to $X$ of:
$$
\log \mathrm{det}\, X^{-1}
$$
here $X$ is a positive definite matrix, and det is the determinant of a matrix.</p>
<p>How to calculate this? Or what's the result? Thanks!</p>
| <p>I assume that you are asking for the derivative with respect to the elements of the matrix. In that case, first notice that</p>
<p>$$\log \det X^{-1} = \log (\det X)^{-1} = -\log \det X$$</p>
<p>and thus</p>
<p>$$\frac{\partial}{\partial X_{ij}} \log \det X^{-1} = -\frac{\partial}{\partial X_{ij}} \log \det X = - \frac{1}{\det X} \frac{\partial \det X}{\partial X_{ij}} = - \frac{1}{\det X} \mathrm{adj}(X)_{ji} = - (X^{-1})_{ji}$$</p>
<p>since $\mathrm{adj}(X) = \det(X) X^{-1}$ for invertible matrices (where $\mathrm{adj}(X)$ is the adjugate of $X$, see <a href="http://en.wikipedia.org/wiki/Adjugate">http://en.wikipedia.org/wiki/Adjugate</a>).</p>
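<p>The resulting identity $\frac{\partial}{\partial X_{ij}} \log \det X^{-1} = -(X^{-1})_{ji}$ is easy to confirm numerically. A minimal finite-difference sketch on a $2\times 2$ positive definite matrix, treating all four entries as independent (the particular matrix and step size are arbitrary choices):</p>

```python
import math

def det2(X):
    # Determinant of a 2x2 matrix.
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv2(X):
    # Inverse of a 2x2 matrix.
    d = det2(X)
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def f(X):
    # log det X^{-1} = -log det X
    return -math.log(det2(X))

X = [[3.0, 1.0], [1.0, 2.0]]  # positive definite
Xinv = inv2(X)
eps = 1e-6

max_err = 0.0
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]
        Xp[i][j] += eps                 # perturb entry (i, j) independently
        fd = (f(Xp) - f(X)) / eps       # finite-difference derivative
        exact = -Xinv[j][i]             # the formula: -(X^{-1})_{ji}
        max_err = max(max_err, abs(fd - exact))
```

The finite-difference values match $-(X^{-1})_{ji}$ entry by entry up to $O(\varepsilon)$.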
| <p>The simplest is probably to observe that
$$-\log\det (X+tH) = -\log\det X -\log\det(I+tX^{-1}H)
\\= -\log\det X - t \textrm{Tr}(X^{-1}H) + o(t),$$</p>
<p>where we use the "obvious" fact that $\det(I+A) = 1+\textrm{Tr}(A)+o(|A|)$ (all the other terms are of degree at least two in the coefficients of $A$).</p>
<p>Notice that $\textrm{Tr}(X^{-1}H)=(X^{-T},H)$ in the Frobenius scalar product, hence $\nabla [-\log\det(X)] = -X^{-T}$ in this scalar product. (This gives another proof that $\nabla\det (X) = cof(X)$.)</p>
<p>Of course if $X$ is symmetric positive definite then $-X^{-1}$ is also a valid expression. Moreover, one has in this case, for $X,Y$ positive definite, $(-X^{-1}+Y^{-1},X-Y)\ge 0$.</p>
|
probability | <p><strong>Note:</strong> <a href="https://stackoverflow.com/q/3924602/33686">This question has been posted on StackOverflow</a>. I have moved it here because:</p>
<ol>
<li>I am curious about the answer</li>
<li>The OP has not shown any interest in moving it himself</li>
</ol>
<hr>
<p>In the Communications of the ACM, <a href="http://delivery.acm.org/10.1145/1790000/1787260/p128-winkler.html?key1=1787260&key2=3588796821&coll=portal&dl=ACM&CFID=108548737&CFTOKEN=59330921" rel="noreferrer">August 2008 "Puzzled" column</a>, Peter Winkler asked the following question:</p>
<blockquote>
<p>On the table before us are 10 dots,
and in our pocket are 10 $1 coins.
Prove the coins can be placed on the
table (no two overlapping) in such a
way that all dots are covered. Figure
2 shows a valid placement of the coins
for this particular set of dots; they
are transparent so we can see them.
The three coins at the bottom are not
needed.</p>
</blockquote>
<p>In the <a href="http://delivery.acm.org/10.1145/1820000/1810917/p110-winkler.html?key1=1810917&key2=2298796821&coll=portal&dl=ACM&CFID=108548737&CFTOKEN=59330921" rel="noreferrer">following issue</a>, he presented his proof:</p>
<blockquote>
<p>We had to show that any 10 dots on a
table can be covered by
non-overlapping $1 coins, in a problem
devised by Naoki Inaba and sent to me
by his friend, Hirokazu Iwasawa, both
puzzle mavens in Japan.</p>
<p>The key is to note that packing disks
arranged in a honeycomb pattern cover
more than 90% of the plane. But how do
we know they do? A disk of radius one
fits inside a regular hexagon made up
of six equilateral triangles of
altitude one. Since each such triangle
has area $\frac{\sqrt{3}}{3}$, the hexagon
itself has area $2 \sqrt{3}$; since the
hexagons tile the plane in a honeycomb
pattern, the disks, each with area $\pi$,
cover $\frac{\pi}{2\sqrt{3}}\approx .9069$ of the
plane's surface.</p>
<p>It follows that if the disks are
placed randomly on the plane, the
probability that any particular point
is covered is .9069. Therefore, if we
randomly place lots of $1 coins
(borrowed) on the table in a hexagonal
pattern, on average, 9.069 of our 10
points will be covered, meaning at
least some of the time all 10 will be
covered. (We need at most only 10
coins so give back the rest.)</p>
<p>What does it mean that the disks cover
90.69% of the infinite plane? The easiest way to answer is to say,
perhaps, that the percentage of any
large square covered by the disks
approaches this value as the square
expands. What is "random" about the
placement of the disks? One way to
think it through is to fix any packing
and any disk within it, then pick a
point uniformly at random from the
honeycomb hexagon containing the disk
and move the disk so its center is at
the chosen point.</p>
</blockquote>
<p>I don't understand. Doesn't the probabilistic nature of this proof simply mean that in the <strong>majority</strong> of configurations, all 10 dots can be covered. Can't we still come up with a configuration involving 10 (or less) dots where one of the dots can't be covered?</p>
| <p>Nice! The above proof proves that <em>any</em> configuration of 10 dots can be covered. What you have here is an example of the <a href="http://en.wikipedia.org/wiki/Probabilistic_method">probabilistic method</a>, which uses probability but gives a certain (not a probabilistic) conclusion (an example of <a href="http://en.wikipedia.org/wiki/Probabilistic_proofs_of_non-probabilistic_theorems">probabilistic proofs of non-probabilistic theorems</a>). This proof also implicitly uses the linearity of expectation, a fact that seem counter-intuitive in some cases until you get used to it.</p>
<p>To clarify the proof: given <em>any</em> configuration of 10 dots, <strong>fix</strong> the configuration, and consider placing honeycomb-pattern disks randomly. Now, what is the expected number $X$ of dots covered? Let $X_i$ be 1 if dot $i$ is covered, and $0$ otherwise. We know that $E[X] = E[X_1] + \dots + E[X_{10}]$, and also that $E[X_i] = \Pr(X_i = 1) \approx 0.9069$ as explained above, for all $i$. So $E[X] = 9.069$. (Note that we have obtained this result using linearity of expectation, even though it would be hard to argue about the events of covering the dots being independent.)</p>
<p>Now, since the average over placements of the disks (for the fixed configuration of points!) is 9.069, not <em>all</em> placements can cover ≤9 dots — at least one placement must cover all 10 dots.</p>
| <p>The key point is that the 90.69% probability is with respect to "the <em>disks</em> [being] placed randomly on the plane", not the <em>points</em> being placed randomly on the plane. That is, the set of points on the plane is <em>fixed</em>, but the honeycomb arrangement of the disks is placed over it at a random displacement. Since the probability that any such placement covers a given point is 0.9069, a random placement of the honeycomb will cover, on average, 9.069 points (this follows from linearity of expectation; I can expand on this if you like). Now the only way random placements can cover 9.069 points <em>on average</em> is if <em>some</em> of these placements cover 10 points -- if all placements covered 9 points or less, the average number of points covered would be at most 9. Therefore, <em>there exists</em> a placement of the honeycomb arrangement that covers 10 points (though this proof doesn't tell you what it is, or how to find it).</p>
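<p>The $90.69\%$ figure itself is easy to reproduce by simulation. A sketch that samples uniformly from one fundamental cell of the hexagonal packing of unit-radius disks (the cell is a $2 \times 2\sqrt{3}$ rectangle with disk centers at its four corners and its midpoint; sample size and seed are arbitrary):</p>

```python
import math, random

random.seed(0)
r3 = math.sqrt(3)

# Disk centers relevant to the fundamental cell [0, 2] x [0, 2*sqrt(3)].
centers = [(0.0, 0.0), (2.0, 0.0), (0.0, 2 * r3), (2.0, 2 * r3), (1.0, r3)]

n = 200_000
covered = 0
for _ in range(n):
    x, y = 2 * random.random(), 2 * r3 * random.random()
    # The point is covered if it lies within distance 1 of some center.
    if any((x - cx) ** 2 + (y - cy) ** 2 <= 1.0 for cx, cy in centers):
        covered += 1

estimate = covered / n
exact = math.pi / (2 * r3)  # about 0.9069
```

The Monte Carlo estimate lands within sampling error of $\pi/(2\sqrt{3}) \approx 0.9069$.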
|
matrices | <p>Suppose $A=uv^T$ where $u$ and $v$ are non-zero column vectors in ${\mathbb R}^n$, $n\geq 3$. $\lambda=0$ is an eigenvalue of $A$ since $A$ is not of full rank. $\lambda=v^Tu$ is also an eigenvalue of $A$ since
$$Au = (uv^T)u=u(v^Tu)=(v^Tu)u.$$
Here is my question:</p>
<blockquote>
<p>Are there any other eigenvalues of $A$?</p>
</blockquote>
<p>Added:</p>
<p>Thanks to Didier's comment and anon's answer, $A$ can not have other eigenvalues than $0$ and $v^Tu$. I would like to update the question:</p>
<blockquote>
<p>Can $A$ be diagonalizable?</p>
</blockquote>
| <p>We're assuming $v\ne 0$. The orthogonal complement of the linear subspace generated by $v$ (i.e. the set of all vectors orthogonal to $v$) is therefore $(n-1)$-dimensional. Let $\phi_1,\dots,\phi_{n-1}$ be a basis for this space. Then they are linearly independent and $uv^T \phi_i = (v\cdot\phi_i)u=0$. Thus the eigenvalue $0$ has geometric multiplicity $n-1$, and there are no other eigenvalues besides it and $v\cdot u$.</p>
| <p>As to your last question, when is $A$ diagonalizable?</p>
<p>If $v^Tu\neq 0$, then from anon's answer you know the algebraic multiplicity of $\lambda$ is at least $n-1$, and from your previous work you know $\lambda=v^Tu\neq 0$ is an eigenvalue; together, that gives you at least $n$ eigenvalues (counting multiplicity); since the geometric and algebraic multiplicities of $\lambda=0$ are equal, and the other eigenvalue has algebraic multiplicity $1$, it follows that $A$ is diagonalizable in this case.</p>
<p>If $v^Tu=0$, on the other hand, then the above argument does not hold. But if $\mathbf{x}$ is nonzero, then you have $A\mathbf{x} = (uv^T)\mathbf{x} = u(v^T\mathbf{x}) = (v\cdot \mathbf{x})u$; if this is a multiple of $\mathbf{x}$, $(v\cdot\mathbf{x})u = \mu\mathbf{x}$, then either $\mu=0$, in which case $v\cdot\mathbf{x}=0$, so $\mathbf{x}$ is in the orthogonal complement of $v$; or else $\mu\neq 0$, in which case $v\cdot \mathbf{x} = v\cdot\left(\frac{v\cdot\mathbf{x}}{\mu}\right)u = \left(\frac{v\cdot\mathbf{x}}{\mu}\right)(v\cdot u) = 0$, and again $\mathbf{x}$ lies in the orthogonal complement of $v$; that is, the only eigenvectors lie in the orthogonal complement of $v$, and the only eigenvalue is $0$. This means the eigenspace is of dimension $n-1$, and therefore the geometric multiplicity of $0$ is strictly smaller than its algebraic multiplicity, so $A$ is not diagonalizable.</p>
<p>In summary, $A$ is diagonalizable if and only if $v^Tu\neq 0$, if and only if $u$ is not orthogonal to $v$. </p>
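<p>Both answers can be cross-checked through the identity $A^2 = (v^Tu)A$, which holds for any $A = uv^T$: the minimal polynomial of $A$ divides $x^2 - (v^Tu)x$, so the only possible eigenvalues are $0$ and $v^Tu$, and the polynomial has distinct roots (giving diagonalizability) exactly when $v^Tu \neq 0$. A quick numerical check in $\mathbb{R}^3$ with arbitrarily chosen vectors:</p>

```python
def outer(u, v):
    # The rank-one matrix u v^T.
    return [[ui * vj for vj in v] for ui in u]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

u = [1.0, 2.0, -1.0]
v = [3.0, 0.0, 1.0]
A = outer(u, v)
lam = sum(ui * vi for ui, vi in zip(u, v))  # v^T u = 3 + 0 - 1 = 2

A2 = matmul(A, A)
lamA = [[lam * A[i][j] for j in range(3)] for i in range(3)]

# A^2 should equal (v^T u) A entry by entry.
ok = all(abs(A2[i][j] - lamA[i][j]) < 1e-12 for i in range(3) for j in range(3))
```
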
|
game-theory | <p>Define a game with $n$ players to be symmetric if all players have the same set of options and a player's payoff depends only on that player's own choice and on the multiset of choices made by everyone.
Equivalently, a game is symmetric if applying a permutation to the options chosen by the players induces the same permutation on the payoffs. For example, if the options chosen were 1,2,1,3 with payoffs 6,0,6,100 respectively, then in a symmetric game the options 2,1,1,3 would have to yield the payoffs 0,6,6,100.</p>
<p>Suppose a symmetric game $S$ has at least one Nash equilibrium. Must $S$ then have a symmetric Nash equilibrium, i.e. a Nash equilibrium in which all players use the same strategy? If not, under what conditions does one exist? If so, is there a simple proof, or a simple idea behind the proof?</p>
<p>Clearly this doesn't hold if we restrict to pure strategies: the game with the following payoff matrix, in which all pure equilibria are asymmetric, serves as a counterexample. But I've yet to find a counterexample for mixed strategies.</p>
<p>0/0 1/1<br>
1/1 0/0 </p>
| <p>The answer is yes for finite games and mixed strategies and this was already shown in the <a href="http://www.princeton.edu/mudd/news/faq/topics/Non-Cooperative_Games_Nash.pdf">Ph.D thesis</a> of John Nash, where it occurs as Theorem 4. Nash considered actually slightly more invariances in his theorem.</p>
<p>The proof amounts to the verification that one can do the usual fixed-point argument used for the proof that every finite game has a Nash equilibrium in mixed strategies, restricted to the set of symmetric strategy profiles and to symmetric best responses.</p>
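<p>For the $2\times 2$ anti-coordination game in the question, the symmetric equilibrium guaranteed by this theorem is the mixed profile in which each player plays each option with probability $1/2$. A brute-force sketch of the indifference condition (a symmetric mixed profile $(q,q)$ with $0 < q < 1$ is an equilibrium only if both pure actions earn the same payoff against $q$):</p>

```python
def payoff(a, b):
    # Row player's payoff in the question's game: 1 off-diagonal, 0 on-diagonal.
    return 0.0 if a == b else 1.0

def expected(action, q):
    # Expected payoff of a pure action against an opponent who plays
    # option 0 with probability q.
    return q * payoff(action, 0) + (1 - q) * payoff(action, 1)

# Scan a grid of candidate symmetric profiles for the indifference condition;
# where the pure actions earn different payoffs, the best response is pure
# and differs from q, so (q, q) cannot be an equilibrium.
symmetric_eq = [q / 100 for q in range(101)
                if abs(expected(0, q / 100) - expected(1, q / 100)) < 1e-12]
```

The scan finds exactly one candidate, $q = 1/2$, matching the fact that this game's only symmetric equilibrium is mixed.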
| <p>The answer is yes for finite games and for zero-sum games. In general, however, the answer is no: <a href="http://www.rochester.edu/college/faculty/markfey/papers/SymmGame3.pdf">http://www.rochester.edu/college/faculty/markfey/papers/SymmGame3.pdf</a></p>
|
differentiation | <p>Let the function $f:\mathbb{R}\rightarrow\mathbb{R}$ be differentiable at $x=0$. Prove that $\lim_{x\rightarrow 0}\frac{f(x^2)-f(0)}{x}=0$.</p>
<p>The result is pretty obvious to me but I am having a difficult time arguing it precise enough for a proof. What I have so far is of course that since $f$ is differentiable;
$$f'(0)=\lim_{x\rightarrow 0}\frac{f(x)-f(0)}{x}$$
exists.</p>
<p>Any help would be greatly appreciated.</p>
| <p>HINT:</p>
<p>$$\frac{f(x^2)-f(0)}{x}=\left(\frac{f(x^2)-f(0)}{x^2}\right)x$$</p>
| <p>Recall the chain rule,
$$
\left(f(g(x))\right)'=f'(g(x))g'(x)
$$
here $g(x)=x^2$, and
$$
\lim_{x\to 0}\frac{f(g(x))-f(g(0))}{x}=f'(g(0))(g'(0))=f'(0)\cdot 0=0
$$</p>
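<p>A quick numeric sanity check of the limit, taking for instance $f = \sin$ (so $f'(0) = 1$, yet the quotient still tends to $0$ because it behaves like $f'(0)\,x$):</p>

```python
import math

f = math.sin
# (f(x^2) - f(0)) / x for decreasing x; shrinks roughly linearly to 0.
quotients = [(f(x * x) - f(0.0)) / x for x in (0.1, 0.01, 0.001, 0.0001)]
```
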
|
differentiation | <p>While I do know that $\frac{dy}{dx}$ isn't a fraction and shouldn't be treated as such, in many situations, doing things like multiplying both sides by $dx$ and integrating, cancelling terms, doing things like $\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$ works out just fine.</p>
<p>So I wanted to know: Are there any particular cases (in single-variable calculus) we have to look out for, where treating $\frac{dy}{dx}$ as a fraction gives incorrect answers, in particular, at an introductory level?</p>
<p><strong>Note: Please provide specific instances and examples where treating $\frac{dy}{dx}$ as a fraction fails</strong></p>
| <p>It is because of the extraordinary power of Leibniz's differential notation, which allows you to treat them as fractions while solving problems. The justification for this mechanical process is apparent from the following general result:</p>
<blockquote>
<p>Let $ y=h(x)$ be any solution of the separated differential equation
$A(y)\dfrac{dy}{dx} = B(x)$... (i) such that $h'(x)$ is continuous on an open interval $I$, where $B(x)$ and $A(h(x))$ are assumed to be continuous on $I$. If $g$ is any primitive of $A$ (i.e. $g'=A$) on $I$, then $h$ satisfies the equation $g(y)=\int {B(x)dx} + c$...(ii) for some constant $c$. Conversely, if $y$ satisfies (ii) then $y$ is a solution of (i).</p>
</blockquote>
<p>Also, it would be advisable to say $\dfrac{dy}{dx}=\dfrac{1}{\dfrac{dx}{dy}}$ only when the function $y(x)$ is invertible.</p>
<p>Say you are asked to find the equation of normal to a curve $y(x)$ at a particular point $(x_1,y_1)$. In general you should write the slope of the equation as $-\dfrac{1}{\dfrac{dy}{dx}}\big|_{(x_1,y_1)}$ instead of simply writing it as $-\dfrac{dx}{dy}\big|_{(x_1,y_1)}$ without checking for the invertibility of the function (which would be redundant here). However, the numerical calculations will remain the same in any case.</p>
<p><strong>EDIT.</strong> </p>
<p>The Leibniz notation ensures that no problem will arise if one treats the differentials as fractions because it beautifully works out in single-variable calculus. But explicitly stating them as 'fractions' in any exam/test could cost one the all important marks. One could be criticised in this case to be not formal enough in his/her approach.</p>
<p>Also have a look at <a href="https://math.stackexchange.com/a/1784701/321264">this answer</a> which explains the likely pitfalls of the fraction treatment.</p>
| <p>In calculus we have this relationship between differentials: $dy = f^{\prime}(x) dx$ which could be written $dy = \frac{dy}{dx} dx$. If you have $\frac{dy}{dx} = \sin x$, then it's legal to multiply both sides by $dx$. On the left you have $\frac{dy}{dx} dx$. When you replace it with $dy$ using the above relationship, it looks just like you've cancelled the $dx$'s. Such a replacement is so much like division we can hardly tell the difference. </p>
<p>However if you have an implicitly defined function $f(x,y) = 0$, the total differential is $f_x \;dx + f_y \;dy = 0$. "Solving" for $\frac{dy}{dx}$ gives $$\frac{dy}{dx} = -\frac{f_x}{f_y} = -\frac{\partial f / \partial x}{\partial f /\partial y}.$$ This is the correct formula for implicit differentiation, which we arrived at by treating $\frac{dy}{dx}$ as a ratio, but then look at the last fraction. If you simplify it, it makes the equation $$\frac{dy}{dx} = -\frac{dy}{dx}.$$ That pesky minus sign sneaks in because we reversed the roles of $x$ and $y$ between the two partial derivatives. Maddening.</p>
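<p>For a concrete instance of the implicit-differentiation formula, take the circle $f(x,y) = x^2 + y^2 - 1 = 0$ at the point $(0.6, 0.8)$: there $-f_x/f_y = -x/y = -0.75$, which matches the slope of the explicit branch $y = \sqrt{1-x^2}$. A sketch (the point and step size are arbitrary):</p>

```python
import math

x0, y0 = 0.6, 0.8  # a point on x^2 + y^2 = 1

# dy/dx = -f_x / f_y with f(x, y) = x^2 + y^2 - 1.
implicit_slope = -(2 * x0) / (2 * y0)

# Slope of the explicit branch y = sqrt(1 - x^2), via central differences.
h = 1e-7
explicit_slope = (math.sqrt(1 - (x0 + h) ** 2)
                  - math.sqrt(1 - (x0 - h) ** 2)) / (2 * h)
```
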
|
matrices | <blockquote>
<p>How can I prove <span class="math-container">$\operatorname{rank}A^TA=\operatorname{rank}A$</span> for any <span class="math-container">$A\in M_{m \times n}$</span>?</p>
</blockquote>
<p>This is an exercise in my textbook associated with orthogonal projections and Gram-Schmidt process, but I am unsure how they are relevant.</p>
| <p>Let $\mathbf{x} \in N(A)$ where $N(A)$ is the null space of $A$. </p>
<p>So, $$\begin{align} A\mathbf{x} &=\mathbf{0} \\\implies A^TA\mathbf{x} &=\mathbf{0} \\\implies \mathbf{x} &\in N(A^TA) \end{align}$$ Hence $N(A) \subseteq N(A^TA)$.</p>
<p>Again let $\mathbf{x} \in N(A^TA)$</p>
<p>So, $$\begin{align} A^TA\mathbf{x} &=\mathbf{0} \\\implies \mathbf{x}^TA^TA\mathbf{x} &=\mathbf{0} \\\implies (A\mathbf{x})^T(A\mathbf{x})&=\mathbf{0} \\\implies A\mathbf{x}&=\mathbf{0}\\\implies \mathbf{x} &\in N(A) \end{align}$$ Hence $N(A^TA) \subseteq N(A)$.</p>
<p>Therefore $$\begin{align} N(A^TA) &= N(A)\\ \implies \dim(N(A^TA)) &= \dim(N(A))\\ \implies \text{rank}(A^TA) &= \text{rank}(A)\end{align}$$</p>
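<p>An empirical illustration of the equality (not a proof): compute both ranks by Gaussian elimination for a deliberately rank-deficient matrix. The helper functions and the example matrix below are my own choices:</p>

```python
def rank(M, tol=1e-9):
    # Rank via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            factor = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= factor * M[r][j]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# A 4x3 matrix of rank 2: the second row is twice the first, the last is zero.
A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 0.0, 1.0],
     [0.0, 0.0, 0.0]]

AtA = matmul(transpose(A), A)
rA, rAtA = rank(A), rank(AtA)
```

Both ranks come out equal to $2$, as the argument above guarantees.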
| <p>Let $r$ be the rank of $A \in \mathbb{R}^{m \times n}$. We then have the SVD of $A$ as
$$A_{m \times n} = U_{m \times r} \Sigma_{r \times r} V^T_{r \times n}$$
This gives $A^TA$ as $$A^TA = V_{n \times r} \Sigma_{r \times r}^2 V^T_{r \times n}$$ which is nothing but the SVD of $A^TA$. From this it is clear that $A^TA$ also has rank $r$. In fact the singular values of $A^TA$ are nothing but the square of the singular values of $A$.</p>
|
matrices | <p>In which cases is the inverse of a matrix equal to its transpose, that is, when do we have <span class="math-container">$A^{-1} = A^{T}$</span>? Is it when <span class="math-container">$A$</span> is orthogonal? </p>
| <p>If $A^{-1}=A^T$, then $A^TA=I$. This means that each column has unit length and is perpendicular to every other column. That means it is an orthonormal matrix.</p>
| <p>You're right. This is the definition of orthogonal matrix.</p>
|
number-theory | <p>I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?</p>
| <p>The prime number theorem states that the number of primes less than or equal to
$x$ is approximately equal to $\int_2^x \dfrac{dt}{\log t}.$ The Riemann hypothesis gives a precise answer to how good this approximation is; namely, it states that the difference between the exact number of primes below $x$, and the given integral, is (essentially) $\sqrt{x} \log x$. </p>
<p>(Here "essentially" means that one should actually take the absolute value of the difference, and also that one might have to multiply $\sqrt{x} \log x$ by some positive constant. Also, I should note that the Riemann hypothesis is more usually stated in terms of the location of the zeroes
of the Riemann zeta function; the previous paragraph is giving an equivalent form, which may be easier to understand, and also may help to explain the interest of the statement. See <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis#Distribution_of_prime_numbers">the wikipedia entry</a> for the formulation in terms of counting primes, as well as various other formulations.)</p>
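<p>To make the first paragraph concrete, one can count the primes below $x = 10^5$ with a sieve and compare the count against $\int_2^x \frac{dt}{\log t}$; the discrepancy is tiny compared with the $\sqrt{x}\log x$ error scale described above. This is an illustration at one value of $x$, of course, not evidence for the hypothesis:</p>

```python
import math

x = 100_000

# Count primes <= x with a sieve of Eratosthenes.
sieve = [True] * (x + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(x ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
pi_x = sum(sieve)

# Trapezoid-rule approximation of the integral of dt / log t from 2 to x.
steps = 200_000
h = (x - 2) / steps
li_x = sum((1 / math.log(2 + i * h) + 1 / math.log(2 + (i + 1) * h)) * h / 2
           for i in range(steps))

gap = abs(pi_x - li_x)                 # roughly 37 here
bound = math.sqrt(x) * math.log(x)     # roughly 3640: the RH error scale
```
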
<p>The difficulty of the problem is (it seems to me) as follows: there is no approach currently known to understanding the distribution of prime numbers well enough to establish the desired approximation, other than by studying
the Riemann zeta function and its zeroes. (The information about the primes
comes from information about the zeta function via a kind of Fourier transform.) On the other hand, the zeta function is not easy to understand; there is no straightforward formula for it that allows one to study its zeroes, and because of this any such study ends up being somewhat indirect.
So far, among the various possible such indirect approaches, no-one has found
one that is powerful enough to control all the zeroes. </p>
<p>A very naive comment, that nevertheless might give some flavour of the problem, is that there are an infinite number of zeroes that one must contend with, so there is no obvious finite computation that one can make to solve
the problem; ingenuity of some kind is necessarily required.</p>
<p>Finally, one can remark that the Riemann hypothesis, when phrased in terms of the location of the zeroes, is very simple (to state!) and very beautiful: it says that all the non-trivial zeros have real part $1/2$. This suggests that perhaps there is some secret symmetry underlying the Riemann zeta function that would "explain" the Riemann hypothesis. Mathematicians have had, and continue to have, various ideas about what this secret symmetry might be (in this they are inspired by an analogy with what is called "the function field case" and the
deep and beautiful theory of <a href="http://en.wikipedia.org/wiki/Weil_conjectures">the Weil conjectures</a>), but so far they haven't managed to establish any underlying phenomenon which implies the Riemann hypothesis.</p>
| <p>A direct translation of RH (Riemann Hypothesis) would be very baffling in layman's terms. But, there are many problems that are equivalent to RH and hence, defining them would be actually indirectly stating RH. Some of the equivalent forms of RH are much easier to understand than RH itself. I give what I think is the most easy equivalent form that I have encountered:</p>
<blockquote>
<p>The Riemann hypothesis is equivalent
to the statement that an integer has
an equal probability of having an odd
number or an even number of distinct
prime factors. (Borwein page. 46)</p>
</blockquote>
|
logic | <p>I'm learning math.</p>
<p>I've recently thought more about the proof by contradiction technique, and I have a question that I would like cleared up. Let me set the stage.</p>
<p>Suppose I am trying to prove a theorem.</p>
<p>Theorem: If A and $\neg$B, then $\neg$C.</p>
<p>Proof (contradiction): Let us suppose that A is true and $\neg$B is true. Let us assume that C is true ($\neg$C is false).</p>
<p>[blah blah blah]</p>
<p>From this, we arrive at a contradiction because we see that B is true ($\neg$B is false), but we know that $\neg$B is true (because we assumed it to be true). Thus, since assuming that C is true led us to a contradiction, it must be the case that C is false ($\neg$C is true). QED.</p>
<p><strong>My issue with this</strong>: why is it that C leading to a contradiction must mean that $\neg$C is true? What if $\neg$C also leads to a contradiction? In that case, doesn't a proof by contradiction not prove anything? <em>Why can we be sure that C leading to a contradiction must mean that $\neg$C doesn't lead to a contradiction?</em></p>
<p>I'm sorry if this question has already been asked. I searched for a bit before asking to see if anyone had this same specific question, but most results just asked why a proof by contradiction works in general without any clear question.</p>
| <p>If both $C$ and $\neg C$ lead to a contradiction, then you must be working with an inconsistent set of assumptions ... from which anything can be inferred ... including $\neg C$. As such, $\neg C$ can still be concluded given that assumption $C$ leads to a contradiction.</p>
<p>So, regardless of whether $\neg C$ also leads to a contradiction or whether it does not, we can conclude $\neg C$ once assumption $C$ leads to a contradiction.</p>
<p>The thing to remember is that when in logic we say that we can 'conclude' something, we mean that that something <em>follows from</em> the assumptions ... not that that something is in fact true. I think that's the source of your confusion. You seem to be saying: "OK, if $C$ leads to a contradiction, then we want to say that $\neg C$ is true ... But wait! What if $\neg C$ leads to a contradiction as well .. wouldn't that mean that $\neg C$ cannot be true either? So, how can we say $\neg C$ is true?!". But it's not that $\neg C$ is true .. it's just that it logically follows from the assumptions. That is: <em>if</em> the assumptions are all true, then $\neg C$ will be true as well. Well, are they? .. and is it? Funny thing is, as logicians, we don't really care :)</p>
| <blockquote>
<p>why is it that $C$ leading to a contradiction must mean that $\neg C$ is true? </p>
</blockquote>
<p>Rather: why does $\def\false{\mathsf{contradiction}} A,\neg B, C\vdash \false$ let us infer that $A,\neg B\vdash \neg C$</p>
<p>Well, $A,\neg B,C\vdash \false$ means that $\false$ is true assuming that $\{A,\neg B, C\}$ all are true. However, a $\false$ is, by definition, false, so that informs us that at least one from $\{A,\neg B, C\}$ must be false. Thus when we assume that $\{A,\neg B\}$ are both true, we are implicating that $C$ is false. That is $A,\neg B\vdash \neg C$</p>
<p>Notice: We are not <em>unconditionally</em> declaring that $\neg C$ is true; we are asserting that it is so <em>under assumption</em> that $A$ and $\neg B$ are true.</p>
<blockquote>
<p>What if $\neg C$ also leads to a contradiction? </p>
</blockquote>
<p>Why, if we can prove $A,\neg B,C\vdash \false$ and also that $A,\neg B,\neg C\vdash \false$ then we have shown that: at least one from $\{A,\neg B, C\}$ and at least one from $\{A,\neg B,\neg C\}$ are false; simultaneously even. Since we usually accept that $C$ cannot have two different truth assignments at once (the law of noncontradiction: $\neg(\neg C\wedge C)$) we must conclude that at least one from $\{A,\neg B\}$ are false. Therefore we infer from those two proofs that that: $A,\neg B\vdash\false$.$$\begin{split}A,\neg B,C &\vdash \false\\ A,\neg B,\neg C&\vdash\false\\\hline A,\neg B&\vdash \false\end{split}\text{ because } \begin{split}A,\neg B &\vdash \neg C\\ A,\neg B&\vdash\neg\neg C\\\hline A,\neg B&\vdash \false\end{split}$$</p>
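<p>The semantic core of the first step above is that $\neg(A\wedge\neg B\wedge C)$ and $(A\wedge\neg B)\to\neg C$ are the same truth function, which an exhaustive truth table confirms:</p>

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q.
    return (not p) or q

# Check the two formulas agree on all eight assignments to A, B, C.
agree = all(
    (not (a and (not b) and c)) == implies(a and (not b), not c)
    for a, b, c in product([True, False], repeat=3)
)
```
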
|
game-theory | <p>This is a problem that has haunted me for more than a decade. Not all the time - but from time to time, and always on windy or rainy days, it suddenly reappears in my mind, stares at me for half an hour to an hour, and then just grins at me, and whispers whole day: "You will never solve me..."</p>
<p>Please save me from this torturer.</p>
<p>Here it is:</p>
<p><em>Let's say there are two people and a sandwich. They want to share the sandwich, but they don't trust each other. However, they found the way how both of them will have a lunch without feeling deceived: One of them will cut the sandwich in two halves, and another will choose which half will be his. Fair, right?</em></p>
<p><img src="https://i.sstatic.net/n7S6i.jpg" alt="Split sandwich"></p>
<p>The problem is:</p>
<p><em>Is there such mechanism for three people and a sandwich?</em></p>
<hr>
<p>EDIT: This was roller-coaster for me. Now, it turns out that there are at least two <strong>books</strong> devoted exclusively on this problem and its variations:</p>
<p><em><a href="https://rads.stackoverflow.com/amzn/click/com/0521556449" rel="noreferrer" rel="nofollow noreferrer">Fair Division</a></em></p>
<p><em><a href="https://rads.stackoverflow.com/amzn/click/com/1568810768" rel="noreferrer" rel="nofollow noreferrer">Cake Cutting Algorithms</a></em></p>
<p><img src="https://i.sstatic.net/vSX3i.jpg" alt="Books on fair division"></p>
<hr>
<p>Yesterday, I was in a coffee shop with a small group of friends. We ordered coffee and some chocolate cakes. As I was cutting my cake for my first bite, I felt sweat on my forehead. I thought, 'What if one of my buddies interrupts me and says: Stop! You are not cutting the cake in a fair manner!' My hands started shaking at the thought. But, no, nothing happened, fortunately.</p>
| <p>For more than two, the moving knife is a nice solution. Somebody takes a knife and moves it slowly across the sandwich. Any player may say "cut". At that moment the sandwich is cut and the piece is given to the one who said "cut". Since he has declared that piece acceptable, he believes he has at least $\frac 1n$ of the sandwich. The rest have asserted (by not saying "cut") that it is at most $\frac 1n$ of the sandwich, so the average available is now at least their share. Recurse.</p>
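<p>This moving-knife procedure (the Dubins–Spanier protocol) is easy to simulate. The sketch below assumes, for simplicity, that each player's valuation is piecewise constant on ten equal segments of a $[0,1]$ cake; the densities, the player count, and the seed are all arbitrary choices:</p>

```python
import random

random.seed(1)
SEGMENTS = 10
N = 4  # players

def make_density():
    # Random piecewise-constant valuation density, normalized so the
    # whole cake is worth exactly 1 to the player.
    w = [random.random() + 0.1 for _ in range(SEGMENTS)]
    total = sum(w)
    return [wi / total for wi in w]

def value(density, a, b):
    # A player's value of the sub-interval [a, b] of the cake [0, 1].
    v = 0.0
    for i, wi in enumerate(density):
        lo, hi = i / SEGMENTS, (i + 1) / SEGMENTS
        overlap = max(0.0, min(b, hi) - max(a, lo))
        v += wi * overlap * SEGMENTS
    return v

def call_point(density, left, share):
    # Leftmost knife position x with value(density, left, x) >= share,
    # found by binary search (value is increasing in x).
    lo, hi = left, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if value(density, left, mid) < share:
            lo = mid
        else:
            hi = mid
    return hi

densities = {i: make_density() for i in range(N)}
active = set(range(N))
pieces, left = {}, 0.0
while len(active) > 1:
    # The knife moves right; whoever would shout "cut" first takes [left, x].
    caller, x = min(((i, call_point(densities[i], left, 1.0 / N))
                     for i in active), key=lambda t: t[1])
    pieces[caller] = (left, x)
    left, active = x, active - {caller}
pieces[active.pop()] = (left, 1.0)  # the last player takes the remainder

# Each player's valuation of their own piece; all should be >= 1/N.
shares = {i: value(densities[i], a, b) for i, (a, b) in pieces.items()}
```

Each caller receives a piece worth exactly $\frac 1n$ to them, and the last player, having judged every removed piece to be at most $\frac 1n$, values the remainder at $\frac 1n$ or more, exactly as the argument above predicts.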
| <p>Just for the record, here's the <a href="https://en.wikipedia.org/w/index.php?title=Selfridge-Conway_discrete_procedure">Selfridge–Conway discrete procedure</a> mentioned in the comments. The Wikipedia article also contains some commentary on its origin and why it works.</p>
<p>This procedure was the first envy-free discrete procedure devised for three players. The maximal number of cuts in the procedure is five. The pieces are not always contiguous. Solutions for n players were also found later.</p>
<blockquote>
<p>Suppose we have three players P1, P2 and P3. Where the procedure gives a criterion for a decision it means that criterion gives an
optimum choice for the player.</p>
<p>Step 1. P1 divides the cake into three pieces he considers of equal size.</p>
<p>Step 2. Let's call A the largest piece according to P2.</p>
<p>Step 3. P2 cuts off a bit of A to make it the same size as the second largest. Now A is divided into: </p>
<ul>
<li>the trimmed piece A1 </li>
<li>the trimmings A2. </li>
</ul>
<p>Leave the trimmings A2 to one side. If P2 thinks that the two largest parts are equal, then each player chooses a part in this
order: P3, P2 and finally P1.</p>
<p>Step 4. P3 chooses a piece among A1 and the two other pieces.</p>
<p>Step 5. P2 chooses a piece with the limitation that if P3 didn't choose A1, P2 must choose it.</p>
<p>Step 6. P1 chooses the last piece leaving just the trimmings A2 to be divided.</p>
<p>Now, the cake minus the trimmings A2 has been divided in an envy free manner. The trimmed piece A1 has been chosen by either P2 or P3.
Let's call the player who chose it PA and the other one Player PB. </p>
<p>Step 7. PB cuts A2 into three equal pieces.</p>
<p>Step 8. PA chooses a piece of A2 - we name it A2<sub>1</sub>.</p>
<p>Step 9. P1 chooses a piece of A2 - we name it A2<sub>2</sub>.</p>
<p>Step 10. PB chooses the last remaining piece of A2 - we name it A2<sub>3</sub>.</p>
</blockquote>
<p>Wikipedia on the origins of this procedure:</p>
<blockquote>
<p>This procedure is named after <a href="https://en.wikipedia.org/wiki/John_Selfridge">John Selfridge</a> and <a href="https://en.wikipedia.org/wiki/John_Horton_Conway">John Horton Conway</a>.
Selfridge discovered it in 1960, and told it to Richard Guy, who told
it to many people, but Selfridge did not publish it. John Conway
discovered it independently in 1993, and also never published it, but
the result is attributed to them in a number of books.</p>
</blockquote>
|
geometry | <p><a href="https://i.sstatic.net/EUaPe.png" rel="noreferrer"><img src="https://i.sstatic.net/EUaPe.png" alt="enter image description here"></a></p>
<p>Don't let the simplicity of this diagram <strong>fool</strong> you. I have been wondering about this for quite some time, but I can't think of an <strong>easy</strong>/smart way of finding it.</p>
<p><strong>Any ideas?</strong></p>
<hr>
<p><em>For <strong>reference</strong>, the <strong>Area</strong> is:</em></p>
<p>$$\bbox[10pt, border:2pt solid grey]{90−18.75\pi−25\cdot \arctan\left(\frac 12\right)}$$</p>
| <p><a href="https://i.sstatic.net/DNr5O.png"><img src="https://i.sstatic.net/DNr5O.png" alt="enter image description here"></a></p>
<p>We observe that $\triangle PRT$ can be partitioned into five congruent sub-triangles. Therefore, the entire shaded region has area given by ...
$$\begin{align}
3 u + |\text{region}\; PAT| &= 3u + |\square OAPT| - |\text{sector}\;OAT| \\[6pt]
&= 3u + \frac{3}{5}\,|\triangle PRT| - |\text{sector}\;OAT| \\[6pt]
&= 3\cdot\frac{1}{4} r^2 \left( 4 - \pi \right) \;+\; \frac{3}{5}\cdot r^2 \;-\; \frac{1}{2}r^2\cdot 2\theta
\end{align}$$
Since $\theta = \operatorname{atan}\frac{1}{2}$, this becomes</p>
<blockquote>
<p>$$r^2\left(\; \frac{18}{5} - \frac{3}{4}\pi - \operatorname{atan}\frac{1}{2} \;\right) \qquad\stackrel{r=5}{\to}\qquad
90 - \frac{75}{4}\pi - 25\;\operatorname{atan}\frac{1}{2}$$</p>
</blockquote>
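<p>As a quick numeric sanity check (a Python snippet, my addition), the derived expression with $r=5$ agrees with the reference value quoted in the question, and evaluates to about $19.5$ square units:</p>

```python
import math

# evaluate both closed forms with r = 5 and confirm they coincide
r = 5
derived = r**2 * (18 / 5 - 3 * math.pi / 4 - math.atan(1 / 2))
stated = 90 - 75 * math.pi / 4 - 25 * math.atan(1 / 2)
print(derived)  # ≈ 19.504
```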
| <p>[<strong>Note:</strong> <a href="https://math.stackexchange.com/a/1875751/409">My second answer</a> is much better.]</p>
<p>I'll focus on the <em>unshaded</em> region at the bottom-left.</p>
<p><a href="https://i.sstatic.net/o8jYN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o8jYN.png" alt="enter image description here"></a></p>
<p>By an aspect of the <a href="https://en.wikipedia.org/wiki/Inscribed_angle#Theorem" rel="nofollow noreferrer">Inscribed Angle Theorem</a>, we know that $\angle AOB = 2\;\angle ABP$ (justifying marking these $\theta$ and $2\theta$). By a related result, we have that
$$\phi = \frac{1}{2}\left(\angle BOC - \angle AOB\right) = 45^\circ - \theta$$
Moreover, we know that
$$\phi = \operatorname{atan}\frac{1}{2} \approx 26.56^\circ \qquad\to\qquad \theta = 45^\circ - \operatorname{atan}\frac{1}{2} \approx 18.43^\circ$$</p>
<p>From here, knowing the circle's radius, one may calculate the lower-left area as ...
$$\begin{align}
&|\triangle PAB| + |\triangle OAB| - |\text{sector } OAB| \\
\end{align}$$
... from which we readily derive the area in the original question. For now, I'll leave these details to the reader.</p>
|
game-theory | <p>I heard a riddle once, which goes like this:</p>
<p>There are N lions and 1 sheep in a field. All the lions really want to eat the sheep, but the problem is that if a lion eats a sheep, it becomes a sheep. A lion would rather stay a lion than be eaten by another lion. (There is no other way for a lion to die than to become a sheep and then be eaten.) The question is: will any lion ever eat the sheep?</p>
<p>I was presented with this solution:</p>
<p>If there were 1 lion and 1 sheep, then the lion would simply eat the sheep. </p>
<p>If there were 2 lions and 1 sheep, then no lion would eat the sheep, because if one of them would, it would surely be eaten by the other lion afterwards. </p>
<p>If there were 3 lions, then one of the lions could safely eat the sheep, because it would turn into the scenario with 2 lions, where no one can eat.</p>
<p>Continuing this argument, the conclusion is as follows:</p>
<ul>
<li><p>If there is an even number of lions, then nothing happens.</p></li>
<li><p>If there is an odd number of lions, then any lion could safely eat the sheep. </p></li>
</ul>
<p>But to me this seems utterly absurd. I think this is similar to the Unexpected Hanging Paradox (Link: <a href="http://en.wikipedia.org/wiki/Unexpected_hanging_paradox">http://en.wikipedia.org/wiki/Unexpected_hanging_paradox</a>). I might have forgotten some assumptions, and those assumptions might actually solve this problem. </p>
<p>Is there a fault in the argument which I haven't discovered? Does anyone have any insights? Is the argument sound?</p>
| <p>Maybe you have doubts whether lions can count up to 101 or have the notion of odd and even. So here is another version of the story:</p>
<p>A certain university has just one math chair, which is inhabited right now. There are $N$ (male) mathematicians aspiring for that chair, and the guy who kills the prof becomes his successor.</p>
| <p>None of the answers posted here are actually fully correct.</p>
<p>Let us use the formulation of the problem as posted on the braingle website: <a href="http://www.braingle.com/brainteasers/teaser.php?id=9026&comm=0" rel="nofollow noreferrer">http://www.braingle.com/brainteasers/teaser.php?id=9026&comm=0</a> </p>
<p>The backward induction solution to the problem (where the sheep survives if the number of lions is even, and gets eaten if the number of lions is odd) can only be applied if lions have <em>common knowledge of lion rationality</em>. It is <strong>not enough</strong> for lions to just be "infinitely logical, smart, and completely aware of their surroundings". It is even not enough for lions to know that all other lions are rational! Common knowledge is much stronger than that, and means that everyone knows that everyone knows that everyone knows ... etc. etc.</p>
<p>Only then can we start applying backward induction! And the original problem formulation makes no such statement about common knowledge of rationality - which is a <strong>very</strong> strong statement to make!</p>
<p>Note that when I used the word "knowledge" above, I used a very strict definition of knowledge - knowledge is a true belief, which holds regardless of any new information becoming available.</p>
<p>Merely having <em>common belief in rationality</em> is not enough for a backward induction solution here! Imagine a situation with 4 lions: what happens if the 4th lion decides to eat the sheep, contrary to what the backward induction solution says he should do? This will invalidate the common belief of all the other lions in common rationality, which was the prerequisite for the backward induction solution! So lion 3 can no longer assume that lion 2 will never eat the sheep - after all, lion 4 just did! And since above all, lions do not want to be eaten, lion 3 will no longer eat the sheep. Turns out the behavior of lion 4 was rational after all, <strong>assuming</strong> lions did not have common knowledge in rationality at the start!</p>
<p>This explains why a lot of people feel unsatisfied with the backward induction solution to this problem, and why they feel that this problem is a paradox. They are completely right to be unsatisfied! Common knowledge in rationality is an <strong>extremely</strong> strong condition, and is unrealistic in most practical scenarios. And in fact in this problem, even common belief in rationality is not stated, merely the rationality of each individual lion, which completely invalidates the backward induction solution - even with only 2 lions, lion 2 cannot reason about what 1 lion would do if he doesn't hold a belief or knowledge of that lion's rationality (which cannot automatically be assumed)!</p>
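For what it's worth, the backward induction being debated collapses to a one-line recursion (a Python sketch, my addition; the function name is mine, and the code encodes the induction itself, so it inherits the common-knowledge-of-rationality premise criticized in this answer):

```python
def lion_eats(n):
    """Backward induction from the question: with n lions present, a lion eats
    the sheep iff doing so is safe, i.e. iff no lion would eat in the resulting
    (n-1)-lion state. Only valid under common knowledge of rationality."""
    if n == 1:
        return True            # a lone lion eats with impunity
    return not lion_eats(n - 1)
```

So `lion_eats(n)` is true exactly when `n` is odd, which is the induction's conclusion, not a resolution of the common-knowledge objection.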
|
probability | <p>Show $$\lim_{n \to\infty} \int_0^1 \cdots \int_0^1 \int_0^1 \frac{ x_1^2 + \cdots + x_n^2}{x_1 + \cdots + x_n} \, dx_1 \cdots dx_n = \frac 2 3.$$</p>
<p>Not sure how to start off this iterated integral question, any help would be appreciated. </p>
| <p>Let <span class="math-container">$X_1,\ldots,X_n$</span> be independent random variables, each distributed uniformly on the interval <span class="math-container">$[0,1]$</span>. Your question is then equivalent to
<span class="math-container">$$
\lim_{n\to\infty}\mathbb E\frac{X_1^2+\cdots+X_n^2}{X_1+\cdots+X_n}=\frac{2}{3}.
$$</span></p>
<p>We will deduce this from the Strong Law of Large Numbers and the Bounded Convergence Theorem. Consider an infinite iid sequence <span class="math-container">$(X_i)_{i=1}^{\infty}$</span> of uniform <span class="math-container">$[0,1]$</span> random variables on a probability space <span class="math-container">$\Omega$</span>. By the Strong Law of Large Numbers, each of the events
<span class="math-container">$$
\left\{\omega\in\Omega\colon\lim_{n\to\infty}\frac{X_1(\omega)+\cdots+X_n(\omega)}{n}=\frac{1}{2}\right\}
$$</span>
and
<span class="math-container">$$\left\{\omega\in\Omega\colon\lim_{n\to\infty}\frac{X_1(\omega)^2+\cdots+X_n(\omega)^2}{n}=\frac{1}{3}\right\}
$$</span>
occurs with probability 1. Therefore, it holds with probability 1 that
<span class="math-container">$$
\lim_{n\to\infty}\frac{X_1^2+\cdots+X_n^2}{X_1+\cdots+X_n}=\frac{2}{3}.
$$</span>
Taking expectations gives that
<span class="math-container">$$
\mathbb E\lim_{n\to\infty}\frac{X_1^2+\cdots+X_n^2}{X_1+\cdots+X_n}=\frac{2}{3}.
$$</span>
Since <span class="math-container">$X_i^2\leq X_i$</span> for all <span class="math-container">$i$</span>, the quantity inside the limit is bounded above by <span class="math-container">$1$</span>. Thus by the Bounded Convergence Theorem we may interchange the expectation with the limit, and therefore
<span class="math-container">$$
\lim_{n\to\infty}\mathbb E\frac{X_1^2+\cdots+X_n^2}{X_1+\cdots+X_n}=\frac{2}{3}.
$$</span></p>
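<p>A quick Monte Carlo check of the limit (a Python sketch, my addition; the function name is mine):</p>

```python
import random

def mc_estimate(n, trials=20_000, seed=42):
    # estimate E[(X1^2 + ... + Xn^2)/(X1 + ... + Xn)] for i.i.d. Xi ~ Uniform(0,1)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        total += sum(x * x for x in xs) / sum(xs)
    return total / trials
```

<p>Already for $n=100$ the estimate lands within about $0.01$ of $2/3 \approx 0.6667$.</p>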
| <p>Suppose $X_1,X_2,X_3,\ldots$ are independent random variables, each uniformly distributed on the interval $[0,1].$ Then for each value of $n$ we have $\operatorname{E}(X_n^2) = 1/3.$ The weak law of large numbers says
$$
\operatorname*{l.i.p.}_{n\to\infty} \frac{X_1^2 + \cdots + X_n^2} n = \frac 1 3
$$
where $\operatorname{l.i.p.}$ means "limit in probability", and that is defined by saying
$$
\text{for every } \varepsilon>0\ \lim_{n\to\infty} \Pr\left( \left| \frac{X_1^2+\cdots + X_n^2} n - \frac 1 3 \right| < \varepsilon \right) = 1.
$$
Similarly
$$
\operatorname*{l.i.p.}_{n\to\infty} \frac{X_1+\cdots + X_n} n = \frac 1 2.
$$
In general, $\Pr(A\cap B) \ge \Pr(A) + \Pr(B) - 1.$ Thus
\begin{align}
& \Pr\left( \left| \frac{X_1^2+\cdots + X_n^2} n - \frac 1 3 \right| < \varepsilon \text{ and } \left| \frac{X_1+\cdots + X_n} n - \frac 1 2 \right| < \varepsilon \right) \\[10pt]
\ge {} & \Pr\left( \left| \frac{X_1^2+\cdots + X_n^2} n - \frac 1 3 \right| < \varepsilon\right) + \Pr\left( \left| \frac{X_1+\cdots + X_n} n - \frac 1 2 \right| < \varepsilon \right) - 1.
\end{align}
Next you need to say that if one number is near $1/3$ and another near $1/2$, then the quotient is near $2/3.$</p>
<p>This sketch of an argument leaves a lot of details to be filled in.</p>
|
number-theory | <p>How would you find a primitive root of a prime number such as 761? How do you pick the primitive roots to test? Randomly?</p>
<p>Thanks</p>
| <p>There is no general formula to find a primitive root. Typically, what you do is you pick a number and test. Once you find one primitive root, you find all the others.</p>
<p><strong>How you test</strong></p>
<p>To test that $a$ is a primitive root of $p$ you need to do the following. First, let $s=\phi(p)$ where $\phi$ is <a href="http://en.wikipedia.org/wiki/Euler%27s_totient_function">Euler's totient function</a>. If $p$ is prime, then $s=p-1$. Then you need to determine all the prime factors of $s$: $p_1,\ldots,p_k$. Finally, calculate $a^{s/p_i}\mod p$ for all $i=1\ldots k$, and if you find $1$ among the residues then $a$ is NOT a primitive root; otherwise it is.</p>
<p>So, basically you need to calculate and check $k$ numbers where $k$ is the number of different prime factors in $\phi(p)$.</p>
<p>Let us find the lowest primitive root of $761$:</p>
<ul>
<li>$s=\phi(761)=760=2^3\times5\times19$</li>
<li>the powers to test are: $760/2=380$, $760/5=152$ and $760/19=40$ (just 3 instead of testing all of them)</li>
<li>test 2:
<ul>
<li>$2^{380}\equiv 1\mod 761$ oops</li>
</ul></li>
<li>test 3:
<ul>
<li>$3^{380}\equiv -1\mod 761$ OK</li>
<li>$3^{152}\equiv 1\mod 761$ oops</li>
</ul></li>
<li>test 5 (skip 4 because it is $2^2$):
<ul>
<li>$5^{380}\equiv 1\mod 761$ oops</li>
</ul></li>
<li>test 6:
<ul>
<li>$6^{380}\equiv -1\mod 761$ OK</li>
<li>$6^{152}\equiv 67\mod 761$ OK</li>
<li>$6^{40}\equiv -263\mod 761$ hooray!</li>
</ul></li>
</ul>
<p>So, the least primitive root of 761 is 6.</p>
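<p>The test described above is easy to mechanize (a Python sketch, my addition; function names are mine), using the built-in three-argument <code>pow</code> for modular exponentiation:</p>

```python
def prime_factors(n):
    """Return the set of distinct prime factors of n (trial division)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(a, p):
    """Test whether a is a primitive root of the prime p:
    a^((p-1)/q) mod p must differ from 1 for every prime factor q of p - 1."""
    s = p - 1
    return all(pow(a, s // q, p) != 1 for q in prime_factors(s))

def least_primitive_root(p):
    return next(a for a in range(2, p) if is_primitive_root(a, p))
```

<p><code>least_primitive_root(761)</code> returns <code>6</code>, matching the hand computation above.</p>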
<p><strong>How you pick</strong></p>
<p>Typically, you either pick at random, or starting from 2 and going up (when looking for the least primitive root, for example), or in any other sequence depending on your needs.</p>
<p>Note that when you choose at random, the more prime factors are there in $\phi(p)$, the less, in general, is the probability of finding one at random. Also, there will be more powers to test.</p>
<p>For example, if you pick a random number to test for being a primitive root of $761$, then the probability of finding one is roughly $\frac{1}{2}\times\frac{4}{5}\times\frac{18}{19}$ or 38%, and there are 3 powers to test. But if you are looking for primitive roots of, say, $2311$ then the probability of finding one at random is about 20% and there are 5 powers to test.</p>
<p><strong>How you find all the other primitive roots</strong></p>
<p>Once you have found one primitive root, you can easily find all the others. Indeed, if $a$ is a primitive root mod $p$, and $p$ is prime (for simplicity), then $a$ can generate all other remainders $1\ldots(p-1)$ as powers: $a^1\equiv a,a^2,\ldots,a^{p-1}\equiv 1$. And $a^m \mod p$ is another primitive root if and only if $m$ and $p-1$ are coprime (if $\gcd(m,p-1)=d$ then $(a^m)^{(p-1)/d}\equiv (a^{p-1})^{m/d}\equiv 1\mod p$, so we need $d=1$). By the way, this is exactly why you have $\phi(p-1)$ primitive roots when $p$ is prime.</p>
<p>For example, $6^2=36$ or $6^{15}\equiv 686$ are not primitive roots of $761$ because $\gcd(2,760)=2>1$ and $\gcd(15,760)=5>1$, but, for example, $6^3=216$ is another primitive root of 761.</p>
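<p>This step also mechanizes nicely (a Python sketch, my addition, continuing from one known primitive root; the function name is mine):</p>

```python
from math import gcd

def all_primitive_roots(p, g):
    """Given one primitive root g of the prime p, the full set of primitive
    roots is { g^m mod p : 1 <= m < p, gcd(m, p - 1) == 1 }."""
    return sorted(pow(g, m, p) for m in range(1, p) if gcd(m, p - 1) == 1)
```

<p>For $p=761$ and $g=6$ this yields $\phi(760)=288$ primitive roots; $216=6^3$ is among them while $36=6^2$ is not, as noted above.</p>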
| <p>Let $p$ be an odd prime number. If $p-1$ is divisible by $4$ and $a$ is a primitive element then $p-a$ is also primitive. For example $761-6=755$ is primitive because $760$ is divisible by $4$. </p>
|
differentiation | <p>I am a senior in high school so I know I am simply misunderstanding something but I don't know what, please have patience.</p>
<p>I was tasked to find the derivative for the following function:</p>
<p><span class="math-container">$$ y = \frac{ (4x)^{1/5} }{5} + { \left( \frac{1}{x^3} \right) } ^ {1/4} $$</span></p>
<p>Simplifying:</p>
<p><span class="math-container">$$ y = \frac{ 4^{1/5} }{5} x^{1/5} + { \frac{1 ^ {1/4}}{x ^ {3/4}} } $$</span></p>
<p><span class="math-container">$$ y = \frac{ 4^{1/5} }{5} x^{1/5} + { \frac{\pm 1}{x ^ {3/4}} } $$</span></p>
<p>Because <span class="math-container">$ 1 ^ {1/n} = \pm 1 $</span>, given <span class="math-container">$n$</span> is even</p>
<p><span class="math-container">$$ y = \frac{ 4^{1/5} }{5} x^{1/5} \pm { x ^ {-3/4} } $$</span></p>
<p>Taking the derivative using power rule:</p>
<p><span class="math-container">$$ \frac{dy}{dx} = \frac{ 4^{1/5} }{25} x^{-4/5} \pm \frac{-3}{4} { x ^ {-7/4} } $$</span></p>
<p>which is the same as</p>
<p><span class="math-container">$$ \frac{dy}{dx} = \frac{ 4^{1/5} }{25} x^{-4/5} \pm \frac{3}{4} { x ^ {-7/4} } $$</span></p>
<p>And that is the part that I find difficult to understand. I know that I should be adding the second term (I graphed it multiple times to make sure), but I cannot catch my error, and my teacher didn't want to discuss it.</p>
<p>So I know I am doing something wrong because one function cannot have more than one derivative.</p>
| <p>The <span class="math-container">$(\cdot)^{\frac{1}{4}}$</span> operation has to be understood as a function. A function can only have one image for any argument. Depending upon how you interpret the fourth root, the image could be positive or negative. But once you set how you interpret your function (positive or negative valued), you have to stick with that interpretation throughout. </p>
<p>When you write <span class="math-container">$y = \frac{ 4^{1/5} }{5} x^{1/5} \pm { \frac{1}{x ^ {3/4}} }$</span>, you are working with both interpretations simultaneously. In other words, when you differentiate, you don't get two derivatives for one function, rather two derivatives corresponding to two different functions, one <span class="math-container">$y = \frac{ 4^{1/5} }{5} x^{1/5} + { \frac{1}{x ^ {3/4}} }$</span>, and the other, <span class="math-container">$y = \frac{ 4^{1/5} }{5} x^{1/5} - { \frac{1}{x ^ {3/4}} }$</span>.</p>
| <p>You are confused about what <span class="math-container">$y^{1/4}$</span> actually means.</p>
<p>Suppose that <span class="math-container">$x^4=1$</span>. We could raise both sides to the <span class="math-container">$1/4$</span> power: <span class="math-container">$$\left(x^4\right)^{1/4}=1^{1/4}$$</span></p>
<p>The right side is <em>unambiguously</em> <span class="math-container">$1$</span>. It is not <span class="math-container">$\pm1$</span>. But read on. <span class="math-container">$$\left(x^4\right)^{1/4}=1$$</span> The left side does <em>not</em> simplify to <span class="math-container">$x$</span> unless you somehow know ahead of time that <span class="math-container">$x$</span> is positive. Otherwise, all you can say is the left side simplifies to <span class="math-container">$\lvert x\rvert$</span>. So you have <span class="math-container">$$\lvert x\rvert = 1$$</span> That implies that "either <span class="math-container">$x=1$</span> or <span class="math-container">$x=-1$</span>". Out of laziness (or a minor efficiency boost) people write <span class="math-container">$x=\pm1$</span>.</p>
<p>Now we started with <span class="math-container">$x^4=1$</span> and ended with <span class="math-container">$x=\pm1$</span>. And because of this and applying the <span class="math-container">$1/4$</span> power in the middle of that process, you have inferred that <span class="math-container">$1^{1/4}=\pm1$</span>. But that is a misunderstanding of the process in its entirety. <span class="math-container">$1^{1/4}$</span> is unambiguously equal to <span class="math-container">$1$</span> when working with arithmetic and real numbers.</p>
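One way to convince yourself numerically (a Python sketch, my addition): fix the principal-root reading $1^{1/4}=1$, so that $y = \frac{4^{1/5}}{5}x^{1/5} + x^{-3/4}$ for $x>0$, and compare the power-rule derivative (with the minus sign on the second term) against a central finite difference:

```python
import math

def f(x):
    # principal roots throughout: (1/x^3)**(1/4) = x**(-3/4) for x > 0
    return (4 * x) ** 0.2 / 5 + (1 / x ** 3) ** 0.25

def fprime(x):
    # power rule, with the minus sign on the second term
    return 4 ** 0.2 / 25 * x ** -0.8 - 0.75 * x ** -1.75

h = 1e-6
numeric = (f(2 + h) - f(2 - h)) / (2 * h)  # central difference at x = 2
```

With the branch of the root fixed, exactly one sign is correct; the other sign belongs to the other branch, which is a different function.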
|
geometry | <blockquote>
<p>What is the height of the red bar?</p>
</blockquote>
<p><img src="https://i.sstatic.net/qLf9ll.jpg" alt="the problem"></p>
<p>My try: with respect to the picture, it seems for the green bar $\frac{h}{H}=\frac{2}{3}$. So, I think that ratio is the same for the red bar, and the height of the red bar is
$$\frac{h}{6+4}=\frac 23\qquad\to\qquad h_{red}=\frac{20}{3}$$</p>
<p>Is this correct?</p>
| <p>Here is a different visualization.
<a href="https://i.sstatic.net/wUgrN.png" rel="noreferrer"><img src="https://i.sstatic.net/wUgrN.png" alt="Shadow visualization"></a></p>
| <p>3D histograms are evil according to <a href="https://en.wikipedia.org/wiki/Edward_Tufte" rel="noreferrer">Edward Tufte</a>. Here, they are used to obfuscate information and make this geometry problem harder than it is. Also, as mentioned by @CandiedOrange and @LamarLatrell, the original drawing isn't to scale.</p>
<p>Here's a 3D render with correct heights:</p>
<p><a href="https://i.sstatic.net/GsJeK.png" rel="noreferrer"><img src="https://i.sstatic.net/GsJeK.png" alt="enter image description here"></a></p>
<p>By playing with perspective and point of view, you can seamlessly merge lengths that appear on distinct axes. It might give you the wrong impression that you could simply add those lengths.</p>
<p><a href="https://i.sstatic.net/vs6GP.png" rel="noreferrer"><img src="https://i.sstatic.net/vs6GP.png" alt="from above"></a></p>
<p>But if you select the correct perspective, the problem becomes much clearer.</p>
<p><a href="https://i.sstatic.net/Qlsw0.gif" rel="noreferrer"><img src="https://i.sstatic.net/Qlsw0.gif" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/MFcsE.png" rel="noreferrer"><img src="https://i.sstatic.net/MFcsE.png" alt="enter image description here"></a></p>
|
probability | <p>Let $H_n$ denote the $n$th harmonic number; i.e., $H_n = \sum\limits_{i=1}^n \frac{1}{i}$. I've got a couple of proofs of the following limiting expression, which I don't think is that well-known: $$\lim_{n \to \infty} \left(H_n - \frac{1}{2^n} \sum_{k=1}^n \binom{n}{k} H_k \right) = \log 2.$$
I'm curious about other ways to prove this expression, and so I thought I would ask here to see if anybody knows any or can think of any. I would particularly like to see a combinatorial proof, but that might be difficult given that we're taking a limit and we have a transcendental number on one side. I'd like to see any proofs, though. I'll hold off from posting my own for a day or two to give others a chance to respond first.</p>
<p>(The probability tag is included because the expression whose limit is being taken can also be interpreted probabilistically.)</p>
<p><HR></p>
<p>(<strong>Added</strong>: I've accepted Srivatsan's first answer, and I've posted my two proofs for those who are interested in seeing them. </p>
<p>Also, the sort of inverse question may be of interest. Suppose we have a function $f(n)$ such that $$\lim_{n \to \infty} \left(f(n) - \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} f(k) \right) = L,$$ where $L$ is finite and nonzero. What can we say about $f(n)$? <a href="https://math.stackexchange.com/questions/8415/asymptotic-difference-between-a-function-and-its-binomial-average">This question was asked</a> and <a href="https://math.stackexchange.com/questions/8415/asymptotic-difference-between-a-function-and-its-binomial-average/22582#22582">answered a while back</a>; it turns out that $f(n)$ must be $\Theta (\log n)$. More specifically, we must have $\frac{f(n)}{\log_2 n} \to L$ as $n \to \infty$.) </p>
| <p>I made a quick estimate in my comment. The basic idea is that the binomial distribution $2^{-n} \binom{n}{k}$ is concentrated around $k= \frac{n}{2}$. Simply plugging this value into the limit expression, we get $H_n-H_{n/2} \sim \ln 2$ for large $n$. Fortunately, formalizing the intuition isn't that hard. </p>
<p>Call the giant sum $S$. Notice that $S$ can be written as $\newcommand{\E}{\mathbf{E}}$
$$
\sum_{k=0}^{\infty} \frac{1}{2^{n}} \binom{n}{k} (H(n) - H(k)) = \sum_{k=0}^{\infty} \Pr[X = k](H(n) - H(k)) = \E \left[ H(n) - H(X) \right],
$$
where $X$ is distributed according to the binomial distribution $\mathrm{Bin}(n, \frac12)$. We need the following two facts about $X$: </p>
<ul>
<li>With probability $1$, $0 \leqslant H(n) - H(X) \leqslant H(n) = O(\ln n)$.</li>
<li>From the <a href="http://en.wikipedia.org/wiki/Bernstein_inequalities_%28probability_theory%29" rel="noreferrer">Bernstein inequality</a>, for any $\varepsilon \gt 0$, we know that $X$ lies in the range $\frac{1}{2}n (1\pm \varepsilon)$, except with probability at most $e^{- \Omega(n \varepsilon^2) }$. </li>
</ul>
<p>Since the function $x \mapsto H(n) - H(x)$ is monotone decreasing, we have
$$
S \leqslant \color{Red}{H(n)} \color{Blue}{-H\left( \frac{n(1-\varepsilon)}{2} \right)} + \color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}.
$$
Plugging in the standard estimate $H(n) = \ln n + \gamma + O\Big(\frac1n \Big)$ for the harmonic sum, we get:
$$
\begin{align*}
S
&\leqslant \color{Red}{\ln n + \gamma + O \Big(\frac1n \Big)} \color{Blue}{- \ln \left(\frac{n(1-\varepsilon)}{2} \right) - \gamma + O \Big(\frac1n \Big)} +\color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}
\\ &\leqslant \ln 2 - \ln (1- \varepsilon) + o_{n \to \infty}(1)
\leqslant \ln 2 + O(\varepsilon) + o_{n \to \infty}(1). \tag{1}
\end{align*}
$$</p>
<p>An analogous argument gets the lower bound
$$
S \geqslant \ln 2 - \ln (1+\varepsilon) - o_{n \to \infty}(1) \geqslant \ln 2 - O(\varepsilon) - o_{n \to \infty}(1). \tag{2}
$$
Since the estimates $(1)$ and $(2)$ hold for all $\varepsilon > 0$, it follows that $S \to \ln 2$ as $n \to \infty$. </p>
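<p>As a direct numerical check (a Python sketch, my addition; the function name is mine), computing the expression under the limit for moderate $n$ shows convergence to $\ln 2 \approx 0.6931$:</p>

```python
import math
from math import comb

def binomial_average_gap(n):
    """H(n) - 2^{-n} * sum_k C(n,k) H(k), computed directly."""
    H = [0.0] * (n + 1)
    for i in range(1, n + 1):
        H[i] = H[i - 1] + 1 / i
    avg = sum(comb(n, k) * H[k] for k in range(n + 1)) / 2 ** n
    return H[n] - avg
```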
| <p>Here's a different proof. We will simplify the second term as follows:
$$
\begin{eqnarray*}
\frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \frac{1}{t} \right]
&=&
\frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \int_{0}^1 x^{t-1} dx \right]
\\ &=&
\frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} x^{t-1} \right] dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \cdot \frac{x^k-1}{x-1} \right] dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \frac{\sum\limits_{k=0}^n \binom{n}{k} x^k- \sum\limits_{k=0}^n \binom{n}{k}}{x-1} dx
\\ &=&
\frac{1}{2^n} \int_{0}^1 \frac{(x+1)^n- 2^n}{x-1} dx.
\end{eqnarray*}
$$</p>
<p>Make the substitution $y = \frac{x+1}{2}$, so that $dx = 2\,dy$ and the new limits are $1/2$ and $1$; the factor of $2$ from $dx$ cancels against $x-1=2(y-1)$, while $(x+1)^n-2^n=2^n(y^n-1)$ absorbs the $\frac{1}{2^n}$ out front. The integral then changes to:
$$
\begin{eqnarray*}
\int_{1/2}^1 \frac{y^n- 1}{y-1} dy
&=&
\int_{1/2}^1 (1+y+y^2+\ldots+y^{n-1}) dy
\\ &=&
\left. y + \frac{y^2}{2} + \frac{y^3}{3} + \ldots + \frac{y^n}{n} \right|_{1/2}^1
\\ &=&
H_n - \sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i.
\end{eqnarray*}
$$
Notice that conveniently $H_n$ is the first term in our function. Rearranging, the expression under the limit is equal to:
$$
\sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i.
$$
The final step is to note that this is just the $n$th partial sum of the Taylor series expansion of $f(y) = -\ln(1-y)$ at $y=1/2$. Therefore, as $n \to \infty$, this sequence approaches the value $$-\ln \left(1-\frac{1}{2} \right) = \ln 2.$$</p>
<p><em>ADDED:</em> As Didier's comments hint, this proof also shows that the given sequence, call it $u_n$, is monotonically increasing and is hence always smaller than $\ln 2$. Moreover, since $\ln 2 - u_n = \sum_{i=n+1}^\infty \frac{1}{i2^i}$ is at least its first term and at most $\frac{1}{n+1}\sum_{i=n+1}^\infty 2^{-i}$, we also have a tight error estimate:
$$
\frac{1}{(n+1)2^{n+1}} < \ln 2 - u_n < \frac{1}{(n+1)2^{n}}, \ \ \ \ (n \geq 1).
$$</p>
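<p>The identity behind this proof — that the expression under the limit equals the partial sum $\sum_{i=1}^n \frac{1}{i2^i}$ — can be verified exactly in rational arithmetic (a Python sketch, my addition; function names are mine):</p>

```python
from fractions import Fraction
from math import comb

def u_exact(n):
    """H_n - 2^{-n} * sum_k C(n,k) H_k, as an exact rational."""
    H = [Fraction(0)] * (n + 1)
    for i in range(1, n + 1):
        H[i] = H[i - 1] + Fraction(1, i)
    avg = sum(comb(n, k) * H[k] for k in range(n + 1)) / Fraction(2 ** n)
    return H[n] - avg

def taylor_partial(n):
    """n-th partial sum of the Taylor series of -ln(1 - y) at y = 1/2."""
    return sum(Fraction(1, i * 2 ** i) for i in range(1, n + 1))
```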
|
probability | <p>If I have two variables $X$ and $Y$ which randomly take on values uniformly from the range $[a,b]$ (all values equally probable), what is the expected value for $\max(X,Y)$?</p>
| <p>Here are some useful tools:</p>
<ol>
<li>For every nonnegative random variable $Z$, $$\mathrm E(Z)=\int_0^{+\infty}\mathrm P(Z\geqslant z)\,\mathrm dz=\int_0^{+\infty}(1-\mathrm P(Z\leqslant z))\,\mathrm dz.$$</li>
<li>As soon as $X$ and $Y$ are independent, $$\mathrm P(\max(X,Y)\leqslant z)=\mathrm P(X\leqslant z)\,\mathrm P(Y\leqslant z).$$</li>
<li>If $U$ is uniform on $(0,1)$, then $a+(b-a)U$ is uniform on $(a,b)$.</li>
</ol>
<p>If $(a,b)=(0,1)$, items 1. and 2. together yield $$\mathrm E(\max(X,Y))=\int_0^1(1-z^2)\,\mathrm dz=\frac23.$$ Then item 3. yields the general case, that is, $$\mathrm E(\max(X,Y))=a+\frac23(b-a)=\frac13(2b+a).$$</p>
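<p>A quick simulation check of $\mathrm E(\max(X,Y))=\frac13(2b+a)$ (a Python sketch, my addition; the function name is mine):</p>

```python
import random

def mc_expected_max(a, b, trials=200_000, seed=0):
    # Monte Carlo estimate of E[max(X, Y)] for X, Y i.i.d. Uniform(a, b)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.uniform(a, b), rng.uniform(a, b))
    return total / trials
```

<p>For $(a,b)=(0,1)$ the estimate is close to $2/3$, and for $(a,b)=(2,5)$ it is close to $\frac13(2\cdot 5+2)=4$.</p>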
| <p>I very much liked Martin's approach but there's an error with his integration. The key is on line three. The intuition here should be that when y is the maximum, then x can vary from 0 to y, whereas y can be anything, and vice versa when x is the maximum. So the order of integration should be flipped: </p>
<p><img src="https://i.sstatic.net/4AR8n.png" alt="enter image description here"></p>
|
combinatorics | <p>In <a href="https://math.stackexchange.com/questions/38350/n-lines-cannot-divide-a-plane-region-into-x-regions-finding-x-for-n">this</a> post it is mentioned that $n$ straight lines can divide the plane into a maximum number of $(n^{2}+n+2)/2$ different regions. </p>
<p>What happens if we use circles instead of lines? That is, what is the maximum number of regions into which n circles can divide the plane?</p>
<p>After some exploration it seems to me that in order to get maximum division the circles must intersect pairwise, with no two of them tangent, none of them being inside another and no three of them concurrent (That is no three intersecting at a point).</p>
<p>The answer seems to me to be affirmative, as the number I obtain is $n^{2}-n+2$ different regions. Is that correct?</p>
| <p>For the question stated in the title, the answer is yes, if more is interpreted as "more than or equal to".</p>
<p>Proof: let <span class="math-container">$\Lambda$</span> be a collection of lines, and let <span class="math-container">$P$</span> be the extended two plane (the Riemann sphere). Let <span class="math-container">$P_1$</span> be a connected component of <span class="math-container">$P\setminus \Lambda$</span>. Let <span class="math-container">$C$</span> be a small circle entirely contained in <span class="math-container">$P_1$</span>. Let <span class="math-container">$\Phi$</span> be the <a href="http://en.wikipedia.org/wiki/M%C3%B6bius_transformation" rel="nofollow noreferrer">conformal inversion</a> of <span class="math-container">$P$</span> about <span class="math-container">$C$</span>. Then by elementary properties of conformal inversion, <span class="math-container">$\Phi(\Lambda)$</span> is now a collection of circles in <span class="math-container">$P$</span>. The number of connected components of <span class="math-container">$P\setminus \Phi(\Lambda)$</span> is the same as the number of connected components of <span class="math-container">$P\setminus \Lambda$</span> since <span class="math-container">$\Phi$</span> is continuous. So this shows that <strong>for any collection of lines, one can find a collection of circles that divides the plane into at least the same number of regions</strong>.</p>
<p>Remark: via the conformal inversion, all the circles in <span class="math-container">$\Phi(\Lambda)$</span> thus constructed pass through the center of the circle <span class="math-container">$C$</span>. One can imagine that by perturbing one of the circles somewhat to reduce concurrency, one can increase the number of regions.</p>
<hr />
<p>Another way to think about it is that lines can be approximated by really, really large circles. So starting with a configuration of lines, you can replace the lines with really really large circles. Then in the finite region "close" to where all the intersections are, the number of regions formed is already the same as that coming from lines. But when the circles "curve back", additional intersections can happen and that can only introduce "new" regions.</p>
<hr />
<p>Lastly, <a href="http://mathworld.wolfram.com/PlaneDivisionbyCircles.html" rel="nofollow noreferrer">yes, the number you derived is correct</a>. See <a href="http://oeis.org/A014206" rel="nofollow noreferrer">also this OEIS entry</a>.</p>
| <p>One may deduce the formula $n^{2}-n+2$ as follows: Start with $m$ circles already drawn on the plane with no two of them tangent, none of them being inside another and no three of them concurrent. Then draw the $m+1$ circle $C$ so that is does not violate the propeties stated before and see how it helps increase the number of regions. Indeed, we can see that that $C$ intersects each of the remaining $m$ circles at two points. Therefore, $C$ is divided into $2m$ arcs, each of which divides in two a region formed previously by the first $m$ circles. But a circle divides the plane into two regions, and so we can count step by step ($m=1,2,\cdots, n$) the total number of regions obatined after drawing the $n$-th circle. That is,
$$
2+2(2-1)+2(3-1)+2(4-1)+\cdots+2(n-1)=n^{2}-n+2
$$</p>
<p>Since $n^{2}-n+2\ge (n^{2}+n+2)/2$ for $n\ge 1$ the answer is affirmative. </p>
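<p>Both counting formulas are easy to compare programmatically (a Python sketch, my addition; the incremental count mirrors the step-by-step argument above):</p>

```python
def circle_regions(n):
    # closed form: maximum regions from n circles in general position
    return n * n - n + 2

def circle_regions_incremental(n):
    # the first circle adds 1 region to the empty plane; the (m+1)-th circle
    # is cut into 2m arcs, each of which splits an existing region in two
    regions = 1
    for m in range(n):
        regions += 2 * m if m > 0 else 1
    return regions

def line_regions(n):
    # maximum regions from n lines in general position
    return (n * n + n + 2) // 2
```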
<p>ADDENDUM: An easy way to see that the answer to my question is affirmative without finding a formula may be as follows: Suppose that $l_{n}$ is the maximum number of regions into which the plane $\mathbb{R}^{2}$ can be divided by $n$ lines, and that $c_{n}$ is the maximum number of regions into which the plane can be divided by $n$ circles. </p>
<p>Now, in the one-point compactification $\mathbb{R}^{2}\cup\{\infty\}$ of the plane, denoted by $S$ (a sphere), the $n$ lines become circles, all intersecting at the point $\infty$. Therefore, these circles divide $S$ into at least $l_{n}$ regions. Now, if we pick a point $p$ in the complement in $S$ of the circles and take the stereographic projection through $p$ onto the plane tangent to $S$ at the antipode of $p$, we obtain a plane which is divided by $n$ circles into at least $l_{n}$ regions. Therefore, $l_{n}\le c_{n}$.</p>
<p>Moreover, from this we can see that the plane and the sphere have equal maximum number of regions into which they can be divided by circles. </p>
|
probability | <p>My friend was asked the following problem in an interview a while back, and it has a nice answer, leading me to believe that there is an equally nice solution.</p>
<blockquote>
<p>Suppose that there are 42 bags, labeled $0$ though $41$. Bag $i$ contains $i$ red balls and $42-(i+1)$ blue balls. Suppose that you pick a bag, then pull out three balls without replacement. What is the probability that all 3 balls are the same color?</p>
</blockquote>
<p>The problem can be solved easily by using some basic identities with binomial coefficients, and the answer is $1/2$. Moreover, if $42$ is replaced by $n$, the answer does not change, assuming $n>3$. However, this computational approach obscures any hidden structure there might be. Ideally, I would like a simple and direct proof that the probability of getting RRR is the same as the probability of getting RBB.</p>
<p>So, is there a nice solution to the problem, one that could be explained fully to someone without the use of paper? Or is there no good way to explain this beyond computational coincidence?</p>
| <p>The game can be reformulated in the following way: There is one urn with 42 balls numbered 0 through 41. You start by drawing and keeping one ball (this corresponds to picking a bag in your version). The urn is then taken away while an assistant paints all balls with a <em>lower</em> number than your first draw red and the rest of them blue. Then you draw three more balls and see whether they have the same color.</p>
<p>Now, this is equivalent to first drawing <em>four</em> of the numbered balls, then among those four picking a random one to be the "bag" ball and coloring the other three according to that choice. But doing it that way, we can see that the initial drawing of four balls is entirely superfluous -- only the order relation between them matters, and any set of four drawn balls has the same order structure. So we might as well forgo the initial draw and just start out with four balls numbered 1, 2, 3, and 4. Then you win exactly when the ball picked to be the "bag" ball is either 1 or 4, and the probability of that is, naturally, 1/2.</p>
| <p>While I really like Henning Makholm's solution, it also might be interesting to see a bijection from the RRR outcomes to the RRB outcomes to the RBB outcomes to the BBB outcomes. The bijection doesn't depend on the number of balls chosen (e.g., the process gives RRRR to RRRB to RRBB to RBBB to BBBB in the case of four balls), and so it shows that if we choose $k$ balls rather than $3$ from each bag, the probability of all $k$ balls being the same color is $\frac{2}{k+1}$.</p>
<p>For bag $i$, number the red balls $1$ through $i$ and the blue balls $1$ through $n-(i+1)$. Then, for any outcome in any bag, draw an X on the three balls chosen. For an RRR outcome, take the highest-numbered red ball with an X, paint it (retain the X) and all higher-numbered red balls blue, and renumber them so that they are now the highest-numbered blue balls, preserving the internal ordering of the balls switched. Finally, change the label on the bag to account for the new number of red balls. This gives a mapping from the set of RRR outcomes to the set of RRB outcomes. Moreover, the mapping is reversible, since for any RRB outcome you can take the blue ball with an X, paint it and all higher-numbered blue balls red, renumber them so that they are now the highest-numbered red balls, and change the label on the bag to account for the new number of red balls. So we have a bijection between the set of RRR outcomes and the set of RRB outcomes. </p>
<p>You can then apply this process again for any RRB outcome, painting the highest-numbered red ball with an X and all higher-numbered red balls blue. This gives a bijection between the RRB outcomes and the RBB outcomes. Applied again, it gives a bijection between the RBB outcomes and the BBB outcomes. Thus these four sets are the same size, and so the probability that we obtain an RRR or BBB outcome is $\frac{1}{2}$.</p>
|
geometry | <p>I learned that the volume of a sphere is <span class="math-container">$\frac{4}{3}\pi r^3$</span>, but why? The <span class="math-container">$\pi$</span> kind of makes sense because it's round like a circle, and the <span class="math-container">$r^3$</span> because it's 3-D, but <span class="math-container">$\frac{4}{3}$</span> is so random! How could somebody guess something like this for the formula?</p>
| <p>In addition to the methods of calculus, Pappus, and Archimedes already mentioned, <a href="https://en.wikipedia.org/wiki/Cavalieri%27s_principle" rel="nofollow noreferrer">Cavalieri's Principle</a> can be useful for these kinds of problems.</p>
<p>Suppose you have two solid figures lined up next to each other, each fitting between the same two parallel planes. (E.g., two stacks of pennies lying on the table, of the same height). Then, consider cutting the two solids by a plane parallel to the given two and in between them. If the cross-sectional area thus formed is the same for each of the solids for any such plane, the volumes of the solids are the same.</p>
<p>If you're willing to accept that you know the volume of a cone is 1/3 that of the cylinder with the same base and height, you can use Cavalieri, comparing a hemisphere to a cylinder with an inscribed cone, to get the volume of the sphere. This diagram (from Wikipedia) illustrates the construction: <a href="https://en.wikipedia.org/wiki/File:Sphere_cavalieri.svg" rel="nofollow noreferrer">look here</a></p>
<p>Consider a cylinder of radius <span class="math-container">$R$</span> and height <span class="math-container">$R$</span>, with, inside it, an inverted cone, with base of radius <span class="math-container">$R$</span> coinciding with the top of the cylinder, and again height <span class="math-container">$R$</span>. Put next to it a hemisphere of radius <span class="math-container">$R$</span>. Now consider the cross section of each at height <span class="math-container">$y$</span> above the base. For the cylinder/cone system, the area of the cross-section is <span class="math-container">$\pi (R^2-y^2)$</span>. It's the same for the hemisphere cross-section, as you can see by applying the Pythagorean theorem to a vector from the sphere's center to a point on the sphere at height <span class="math-container">$y$</span>, which gives the radius of the (circular) cross section.</p>
<p>Since the cylinder/cone and hemisphere have the same height, by Cavalieri's Principle the volumes of the two are equal. The cylinder volume is <span class="math-container">$\pi R^3$</span>, the cone is a third that, so the hemisphere volume is <span class="math-container">$\frac{2}{3} \pi R^3$</span>. Thus the sphere of radius <span class="math-container">$R$</span> has volume <span class="math-container">$\frac{4}{3} \pi R^3$</span>.</p>
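<p>(A quick numerical sanity check of this argument, with $R=1$ chosen only for illustration: the two cross-sectional areas agree at every height, and summing the slices reproduces the hemisphere volume.)</p>

```python
import math

R = 1.0
N = 100_000                    # number of horizontal slices
dy = R / N

vol_hemi = 0.0
vol_cyl_minus_cone = 0.0
for k in range(N):
    y = (k + 0.5) * dy         # midpoint of the k-th slice
    # hemisphere cross-section: circle of radius sqrt(R^2 - y^2)
    vol_hemi += math.pi * (R * R - y * y) * dy
    # cylinder-minus-cone cross-section: area pi R^2 minus area pi y^2
    vol_cyl_minus_cone += (math.pi * R * R - math.pi * y * y) * dy

print(vol_hemi, vol_cyl_minus_cone, 2 / 3 * math.pi)
```

<p>Both Riemann sums approach $\frac{2}{3}\pi R^3 \approx 2.0944$, as Cavalieri's Principle predicts.</p>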
| <p>The volume of a sphere with radius <span class="math-container">$a$</span> may be found by evaluating the triple integral<span class="math-container">$$V=\iiint \limits _S\,dx\,dy\,dz,$$</span>where <span class="math-container">$S$</span> is the volume enclosed by the sphere <span class="math-container">$x^2+y^2+z^2=a^2$</span>. Changing variables to spherical polar coordinates, we obtain<span class="math-container">$$V=\int \limits _0^{2\pi}d\phi \int \limits _0^\pi d\theta \int \limits _0^ar^2\sin \theta \,dr=\int \limits _0^{2\pi}d\phi \int \limits _0^\pi \sin \theta \,d\theta \int \limits _0^ar^2\,dr=\frac{4\pi a^3}{3},$$</span>as expected.</p>
|
linear-algebra | <p>I am relatively new to the world of academical mathematics, but I have noticed that most, if not all, mathematical textbooks that I've had the chance to come across, seem completely oblivious to the existence of lambda notation. </p>
<p>More specifically, in a linear algebra course I'm taking, I found it a lot easier to understand "higher order functionals" from the second dual space, by putting them in lambda expressions. It makes a lot more sense to me to put them in the neat, clear notation of lambda expressions, rather than in multiple variable functions where not all the arguments are of the same "class" as some are linear functionals and others are vectors. For example, consider the canonical isomorphism -
$$A:V \rightarrow V^{**}$$</p>
<p>It would usually be expressed by $$Av(f) = f(v)$$
This was a notation I found particularly difficult to understand at first, as there are several processes taking place "under the hood" that can, in my opinion, be expressed a lot more clearly this way:</p>
<p>$$A = \lambda v \in V. \lambda f \in V^{*}. f(v)$$</p>
<p>I agree that this notation may become tedious and over-explanatory over time, but as a first introduction of the concept I find it a lot easier as it makes it very clear what goes where.</p>
<p>My question is, basically, why isn't this widespread, super popular notation in the world of computer science, not as popular in the field of mathematics? Or is it, and I'm just not aware? </p>
| <p>As Derek already said, there is no essential difference between functions $A\times B \to C$ and functions $A\to (B \to C)$ via Currying (this is also more abstractly expressed by the universal property of an <a href="https://en.wikipedia.org/wiki/Exponential_object" rel="noreferrer">exponential</a> which unifies the set-theoretical currying and currying in a typed lambda calculus).</p>
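<p>(Aside: the currying just described is exactly what the lambda notation in the question expresses, and it is easy to play with in any language with first-class functions; a small Python sketch, with numbers standing in for vectors and functionals:)</p>

```python
# Curried form, mirroring  A = lambda v . lambda f . f(v)
A = lambda v: (lambda f: f(v))

# Uncurried form, mirroring  A(v, f) = f(v)
A_uncurried = lambda v, f: f(v)

v = 3                    # a stand-in "vector"
f = lambda x: 2 * x      # a stand-in "functional" on it

# The two forms are interchangeable -- this is currying:
assert A(v)(f) == A_uncurried(v, f) == f(v) == 6
```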
<p>On the notational side of things, I <em>personally</em> prefer $x\mapsto f(x)$ to $\lambda x. f(x)$ and I suspect many other mathematicians feel the same (especially since $\lambda$ is such a commonly used letter).</p>
<hr>
<p>EDIT: (now that my answer stopped being one, let me add some rambling that the 29 people so far have <em>not</em> upvoted for):</p>
<p>I'm guessing many mathematicians are less "comfortable" with nested expressions like $v\mapsto (f \mapsto f(v))$. That would be nothing extraordinary, since there are various concepts that some mathematicians feel less comfortable about. Here are two (unrelated) things that I have encountered:</p>
<ul>
<li>empty metric spaces: Some people deliberately require metric spaces to be non-empty which is a nuisance: given a metric space $(X,d)$ and $Y\subseteq X$, $(Y,d|_{Y^2})$ is a metric space again... unless of course $Y=\emptyset$; apparently it doesn't feel "right" for metric spaces to be empty</li>
<li>$f(x)$ instead of $f$: Some people refer to a function $f$ as $f(x)$; this is (unfortunately) what I learned in high school and is (rein)forced by notation like $\frac{d f(x)}{d x}$ and $\int f(x) \,dx$</li>
</ul>
<p>Although, your example:</p>
<blockquote>
<p>Let $A : V\to V^{**}$ such that $Av(f) = f(v)$ for all $v\in V$ and $f\in V^*$</p>
</blockquote>
<p>is fine and not hard to understand, in my opinion. For every $v\in V$ we have $Av\in V^{**}$, i.e. $Av : V^* \to \mathbb K$. Hence we can plug in an $f\in V^*$ to get $f(v) \in \mathbb{K}$. If the author thinks it is easy to understand <em>and</em> is more used to it than $v\mapsto (f \mapsto f(v))$ then they would obviously have no reason to change the notation.</p>
<p>So the reason why $v\mapsto (f\mapsto f(v))$ (or a variant thereof) is not used as much is probably: "I'm not used to this notation and I'm perfectly happy with mine."</p>
<p>By the way, <em>my personal</em> favourite is <em>also</em> not:
$$A : V \to V^{**}, v \mapsto (f\mapsto f(v))$$ but
$$A : V\to V^{**}, v\mapsto \_(v)$$ where it is implied that $\_$ is a placeholder, i.e. $\_(v) : V^* \to \mathbb{K}, f\mapsto f(v)$.</p>
| <p>Lambda calculus is related to computer science through and through. To quote Wikipedia:</p>
<blockquote>
<p>Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing <em>computation</em> based on function abstraction and <em>application</em> using variable binding and <em>substitution</em>.</p>
</blockquote>
<p>Highlights mine. Here, "computation", "application" and "substitution" are very well defined operations on symbols as understood in CS. That is literally what lambda calculus is all about, to start out with: to reason about substituting symbols in formal languages.</p>
<p>Processes like currying are there because they have relatively practical applications - for example, they make abstract reasoning easier (by reducing all lambdas with multiple arguments to ones with single arguments). "Meta" topics like lazy evaluation, typing, strictness etc. can all be explored in the context of lambda calculus and have little impact on general mathematical formulae. For CS, it is important to be super exact with these things, as computers, basically, are machines for manipulating symbols.</p>
<p>So, lambdas have use for the theoretical computer linguist / computer scientist / logician; on the surface you could probably use the notation for general mathematics, but many of the advanced "benefits" do not transfer (or at least not in a helpful manner). In most parts of mathematics, especially applied mathematics (physics...), the question of how exactly to "apply" and "substitute" variables is crystal clear and of little interest to anybody - it is often quite usual to skip writing bound variables completely. </p>
<p>Oh, and the other answer: people are just used to the usual representation. Plenty of mathematical areas tend to have their own notations for quite similar things. It's just how it is.</p>
|
linear-algebra | <p>I know similar questions have been asked and I have looked at them, but none of them seems to have a satisfactory answer. I am reading the book <a href="http://amzn.com/0521406498">a course in mathematics for student of physics vol. 1</a> by Paul Bamberg and Shlomo Sternberg. In Chapter 1 the authors define affine space and write:</p>
<blockquote>
<p>The space $\Bbb{R}^2$ is an example of a <em>vector space</em>. The distinction between <em>vector space</em> $\Bbb{R}^2$ and <em>affine space</em> $A\Bbb{R}^2$ lies in the fact that in $\Bbb{R}^2$ the point (0,0) has a special significance ( it is the additive identity) and the addition of two vectors in $\Bbb{R}^2$ makes sense. These do not hold for $A\Bbb{R}^2$.</p>
</blockquote>
<p>Please explain. </p>
<p>Edit:</p>
<p>How come the point (0,0) has no special significance in $A\Bbb{R}^2$? And why does the addition of two vectors in $A\Bbb{R}^2$ not make sense? Please give concrete examples instead of abstract answers. I am a physics major and have done courses in Calculus, Linear Algebra and Complex Analysis.</p>
| <p>Consider the vector space <span class="math-container">$\mathbb{R}^3$</span>. Inside <span class="math-container">$\mathbb{R}^3$</span> we can choose two planes, as in the picture below. We'll call the green one <span class="math-container">$P_1$</span> and the blue one <span class="math-container">$P_2$</span>. The plane <span class="math-container">$P_1$</span> passes through the origin but the plane <span class="math-container">$P_2$</span> does not. It is a standard homework exercise in linear algebra to show that <span class="math-container">$P_1$</span> is a <strong>sub-vector space</strong> of <span class="math-container">$\mathbb{R}^3$</span> but the plane <span class="math-container">$P_2$</span> is <strong>not</strong>. However, the plane <span class="math-container">$P_2$</span> <strong>looks</strong> almost exactly the same as <span class="math-container">$P_1$</span>, having the exact same, flat geometry, and in fact <span class="math-container">$P_2$</span> and <span class="math-container">$P_1$</span> are simply translates of one another. This plane <span class="math-container">$P_2$</span> is a classical example of an affine space.</p>
<p><span class="math-container">$\,\,\,\,\,\,\,\,\,$</span><img src="https://i.sstatic.net/LwSZZ.gif" alt="enter image description here" /></p>
<p>Suppose we wanted to turn <span class="math-container">$P_2$</span> into a vector space, would it be possible? Sure. What we would need to do is align <span class="math-container">$P_2$</span> with <span class="math-container">$P_1$</span> using some translation, and then use this alignment to re-define the algebraic operations on <span class="math-container">$P_2$</span>. Let's make this precise. If <span class="math-container">$T: P_2 \to P_1$</span> is the alignment, for <span class="math-container">$p,q \in P_2$</span> we'll define <span class="math-container">$p \oplus q = T^{-1}(T(p) + T(q))$</span>. In words, we shift <span class="math-container">$p$</span> and <span class="math-container">$q$</span> down to <span class="math-container">$P_1$</span>, add them, and then shift them back. Note that this is different than simply adding <span class="math-container">$p+q$</span>, as this vector need not lie on <span class="math-container">$P_2$</span> at all (one of the reasons <span class="math-container">$P_2$</span> is not a vector space, it is not closed under addition).</p>
<p>There are, however, many ways of aligning <span class="math-container">$P_2$</span> with <span class="math-container">$P_1$</span>, and so many different ways of turning <span class="math-container">$P_2$</span> into a vector space, and none of them are <strong>canonical</strong>. Here is one way to make these alignments: pick a vector <span class="math-container">$v \in P_2$</span>, and translate <span class="math-container">$P_2$</span> by <span class="math-container">$-v$</span>, so that <span class="math-container">$T(p) = p-v$</span>. This translates <span class="math-container">$P_2$</span> on to <span class="math-container">$P_1$</span>, and sends <span class="math-container">$v$</span> to <span class="math-container">$0$</span>. Conceptually, this translation "sends <span class="math-container">$v$</span> to zero", and this approach of "redefining some chosen vector to be the zero vector" always works to turn an affine space into a vector space.</p>
<p>If you want to do algebra on <span class="math-container">$P_2$</span> without picking a "zero vector", you can use the following trick: instead of trying to trying to add together vectors in <span class="math-container">$P_2$</span> (which, as we've seen, need not stay in <span class="math-container">$P_2$</span>), you can add vectors in <span class="math-container">$P_1$</span> to vectors in <span class="math-container">$P_2$</span>. Note that if <span class="math-container">$v_1 \in P_1$</span> and <span class="math-container">$v_2 \in P_2$</span> then <span class="math-container">$v_1 + v_2 \in P_2$</span>. What we obtain is a funny situation where the addition takes place between two sets: a vector space <span class="math-container">$P_1$</span> on the one hand, and the non-vector-space <span class="math-container">$P_2$</span> on the other. This lets us work with <span class="math-container">$P_2$</span> without having to force it to be a vector space.</p>
<p>Affine spaces are an abstraction and generalization of this situation.</p>
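<p>(A concrete instance of the construction above, taking $P_1$ to be the plane $z=0$ and $P_2$ the parallel plane $z=1$ -- chosen purely for illustration:)</p>

```python
def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

in_P2 = lambda p: p[2] == 1        # P2 = { (x, y, z) : z = 1 }

p, q = (1, 0, 1), (0, 2, 1)        # two points of P2

# Naive vector addition falls out of P2 (its z-coordinate is 2) ...
assert not in_P2(add(p, q))

# ... but translating to P1, adding there, and translating back stays in P2.
v = (0, 0, 1)                      # the point of P2 chosen as its "zero"
T = lambda x: sub(x, v)            # alignment T : P2 -> P1
T_inv = lambda x: add(x, v)
oplus = lambda a, b: T_inv(add(T(a), T(b)))
assert oplus(p, q) == (1, 2, 1) and in_P2(oplus(p, q))

# Adding a vector of P1 to a point of P2 also stays in P2:
assert in_P2(add((5, -3, 0), p))
```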
| <p>Consider an infinite sheet (of idealised paper, if you like). If it is blank, then there is absolutely no way to distinguish between any two points on the sheet. Nonetheless, if you do have two points on the sheet, you can measure the distance between them. And if there is a uniform magnetic field parallel to the sheet, then you can even measure the bearing from one point to another. Thus, given any point $P$ on the sheet, you can uniquely describe every other point on the sheet by its distance and bearing from $P$; and conversely, given any distance and bearing, there is a point with that distance and bearing from $P$. <em>This</em> is the situation that the notion of a 2-dimensional affine space is an abstraction of.</p>
<p>Now suppose we have marked a point $O$ on the sheet. Then we can "add" points $P$ and $Q$ on the sheet by drawing the usual parallelogram diagram. The result $P + Q$ of the "addition" depends on the choice of $O$ (and, of course, $P$ and $Q$), but nothing else. <em>This</em> is what the notion of a 2-dimensional vector space is an abstraction of.</p>
|
matrices | <p>I'm in the process of writing an application which identifies the closest matrix from a set of square matrices $M$ to a given square matrix $A$. The closest can be defined as the most similar.</p>
<p>I think finding the distance between two given matrices is a fair approach, since the smallest Euclidean distance is what identifies the closest of a set of vectors. </p>
<p>I found that the distance between two matrices ($A,B$) could be calculated using the <a href="http://mathworld.wolfram.com/FrobeniusNorm.html">Frobenius distance</a> $F$:</p>
<p>$$F_{A,B} = \sqrt{\operatorname{trace}\big((A-B)(A-B)'\big)}$$</p>
<p>where $B'$ represents the conjugate transpose of B.</p>
<p>I have the following points I need to clarify</p>
<ul>
<li>Is the distance between matrices a fair measure of similarity?</li>
<li>If distance is used, is Frobenius distance a fair measure for this problem? any other suggestions?</li>
</ul>
| <p>Some suggestions. Too long for a comment:</p>
<p>As I said, there are many ways to measure the "distance" between two matrices. If the matrices are $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$, then some examples are:
$$
d_1(\mathbf{A}, \mathbf{B}) = \sum_{i=1}^n \sum_{j=1}^n |a_{ij} - b_{ij}|
$$
$$
d_2(\mathbf{A}, \mathbf{B}) = \sqrt{\sum_{i=1}^n \sum_{j=1}^n (a_{ij} - b_{ij})^2}
$$
$$
d_\infty(\mathbf{A}, \mathbf{B}) = \max_{1 \le i \le n}\max_{1 \le j \le n} |a_{ij} - b_{ij}|
$$
$$
d_m(\mathbf{A}, \mathbf{B}) = \max\{ \|(\mathbf{A} - \mathbf{B})\mathbf{x}\| : \mathbf{x} \in \mathbb{R}^n, \|\mathbf{x}\| = 1 \}
$$
I'm sure there are many others. If you look up "matrix norms", you'll find lots of material. And if $\|\;\|$ is any matrix norm, then $\| \mathbf{A} - \mathbf{B}\|$ gives you a measure of the "distance" between two matrices $\mathbf{A}$ and $\mathbf{B}$.</p>
<p>Or, you could simply count the number of positions where $|a_{ij} - b_{ij}|$ is larger than some threshold number. This doesn't have all the nice properties of a distance derived from a norm, but it still might be suitable for your needs.</p>
<p>These distance measures all have somewhat different properties. For example, the third one shown above will tell you that two matrices are far apart even if all their entries are the same except for a large difference in one position.</p>
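<p>(For concreteness, the three entrywise distances above are one-liners; a sketch in plain Python, on two small matrices that are identical except for one large entry -- the situation the last remark describes:)</p>

```python
import math

def d1(A, B):
    return sum(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def d2(A, B):   # the Frobenius distance from the question
    return math.sqrt(sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb)))

def d_inf(A, B):
    return max(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[1, 2], [3, 4]]
B = [[1, 2], [3, 9]]    # identical except for one large entry

print(d1(A, B), d2(A, B), d_inf(A, B))   # -> 5 5.0 5
```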
| <p>If we have two matrices $A,B$, the distance between them can also be estimated using singular values or $2$-norms.</p>
<p>You may use Distance $= \vert\,\text{fnorm}(A)-\text{fnorm}(B)\,\vert$,
where fnorm is the square root of the sum of the squares of all singular values, i.e. the Frobenius norm. (Note that this compares only the overall sizes of $A$ and $B$: two very different matrices with equal Frobenius norm are at distance $0$, so unlike the norms above this is not a true metric on matrices.)</p>
|
matrices | <blockquote>
<p>Find the largest eigenvalue of the following matrix
$$\begin{bmatrix}
1 & 4 & 16\\
4 & 16 & 1\\
16 & 1 & 4
\end{bmatrix}$$</p>
</blockquote>
<p>This matrix is symmetric and, thus, the eigenvalues are real. I solved for the possible eigenvalues and, fortunately, I found that the answer is $21$.</p>
<p>My approach:</p>
<p>The determinant, on simplification, leads to the following third-degree polynomial (negated here so that the leading coefficient is $+1$; this does not change the roots):
$$-\begin{vmatrix}
1-\lambda & 4 &16\\
4 &16-\lambda&1\\
16&1&4-\lambda
\end{vmatrix}
= \lambda^3-21\lambda^2-189\lambda+3969.$$</p>
<p>At first glance it is hard to see how anyone could find the roots of this polynomial with pen and paper using elementary algebra. I managed to find them by factoring, $\lambda^3-21\lambda^2-189\lambda+3969 = (\lambda-21)(\lambda^2-189)$, so the roots are $21$, $\sqrt{189}$, and $-\sqrt{189}$, and the largest is $21$.</p>
<p>Now the problem is that my professor stared at this matrix for a few seconds and said that the largest eigenvalue is $21$. Obviously, he hadn't gone through all these steps to find that answer. So what enabled him to answer this in a few seconds? Please don't say that he already knew the answer.</p>
<p>Is there any easy way to find the answer in a few seconds? What property of this matrix makes it easy to compute that answer?</p>
<p>Thanks in advance.</p>
| <p>Requested by @Federico Poloni:</p>
<p>Let $A$ be a matrix with positive entries, then from the Perron-Frobenius theorem it follows that the dominant eigenvalue (i.e. the largest one) is bounded between the lowest sum of a row and the biggest sum of a row. Since in this case both are equal to $21$, so must the eigenvalue.</p>
<p>In short: since the matrix has positive entries and all rows sum to $21$, the largest eigenvalue must be $21$ too.</p>
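<p>(Both the row-sum observation and its consequence are easy to verify numerically; a small power-iteration sketch in plain Python:)</p>

```python
import random

M = [[1, 4, 16],
     [4, 16, 1],
     [16, 1, 4]]

# Every row sums to 21, so (1, 1, 1) is an eigenvector with eigenvalue 21:
assert all(sum(row) == 21 for row in M)
assert [sum(m * x for m, x in zip(row, [1, 1, 1])) for row in M] == [21, 21, 21]

# Power iteration confirms that 21 is in fact the dominant eigenvalue.
v = [random.random() for _ in range(3)]        # random positive start vector
for _ in range(100):
    w = [sum(m * vi for m, vi in zip(row, v)) for row in M]
    norm = max(abs(c) for c in w)              # max-norm estimate of the eigenvalue
    v = [c / norm for c in w]

print(norm)   # converges to 21 (the other eigenvalues are +-sqrt(189) ~ +-13.75)
```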
| <p>The trick is that $\frac1{21}$ of your matrix is a <a href="https://en.wikipedia.org/wiki/Doubly_stochastic_matrix" rel="noreferrer">doubly stochastic matrix</a> with positive entries, hence the bound of 21 for the largest eigenvalue is a straightforward consequence of the <a href="https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem" rel="noreferrer">Perron-Frobenius theorem</a>.</p>
|
logic | <p>Recently I have started reviewing mathematical notions, that I have always just accepted. Today it is one of the fundamental ones used in equations: </p>
<blockquote>
<p>If we have an equation, then the equation holds if we do the same to both sides. </p>
</blockquote>
<p>This seems perfectly obvious, but it must be stated as an axiom somewhere, presumably in formal logic(?). Only, I don't know what it would be called, or indeed how to search for it - does anybody knw?</p>
| <p>This axiom is known as the <em>substitution property of equality</em>. It states that if $f$ is a function, and $x = y$, then $f(x) = f(y)$. See, for example, <a href="https://en.wikipedia.org/wiki/First-order_logic#Equality_and_its_axioms" rel="nofollow noreferrer">Wikipedia</a>.</p>
<p>For example, if your equation is $4x = 2$, then you can apply the function $f(x) = x/2$ to both sides, and the axiom tells you that $f(4x) = f(2)$, or in other words, that $2x = 1$. You could then apply the axiom again (with the same function, even) to conclude that $x = 1/2$.</p>
| <p>"Do the same to both sides" is rather vague. What we can say is that if $f:A \rightarrow B$ is a <em>bijection</em> between sets $A$ and $B$ then, by definition</p>
<p>$\forall \space x,y \in A \space x=y \iff f(x)=f(y)$</p>
<p>The operation of adding $c$ (and its inverse subtracting $c$) is a bijection in groups, rings and fields, so we can conclude that</p>
<p>$x=y \iff x+c=y+c$</p>
<p>However, multiplication by $c$ is only a bijection for certain values of $c$ ($c \ne 0$ in fields, $\gcd(c,n)=1$ in $\mathbb{Z}_n$ etc.), so although we can conclude</p>
<p>$x=y \Rightarrow xc = yc$</p>
<p>it is not safe to assume the converse i.e. in general</p>
<p>$xc=yc \nRightarrow x = y$</p>
<p>and we have to take care about which values of $c$ we can "cancel" from both sides of the equation.</p>
<p>Some polynomial functions are bijections in $\mathbb{R}$ e.g.</p>
<p>$x=y \iff x^3=y^3$</p>
<p>but others are not e.g.</p>
<p>$x^2=y^2 \nRightarrow x = y$</p>
<p><em>unless</em> we restrict the domain of $f(x)=x^2$ to, for example, non-negative reals. Similarly</p>
<p>$\sin(x) = \sin(y) \nRightarrow x = y$</p>
<p><em>unless</em> we restrict the domain of $\sin(x)$.</p>
<p>So in general we can only "cancel" a function from both sides of an equation if we are sure it is a bijection, or if we have restricted its domain or range to create a bijection.</p>
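<p>(These cautionary cases are easy to exhibit concretely; a minimal Python sketch:)</p>

```python
import math

# Cubing is injective on the reals: equal cubes force equal arguments
# (checked here on a sample of integers, which illustrates but does not prove it).
xs = range(-10, 11)
assert all((x ** 3 == y ** 3) == (x == y) for x in xs for y in xs)

# Squaring is not injective: a counterexample to x^2 = y^2  =>  x = y.
x, y = -2, 2
assert x ** 2 == y ** 2 and x != y

# Nor is sin injective on all of R: sin(0) = sin(pi) but 0 != pi.
assert math.isclose(math.sin(0.0), math.sin(math.pi), abs_tol=1e-12)
```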
|
number-theory | <p>In <a href="https://math.stackexchange.com/questions/3533451/will-there-at-some-point-be-more-numbers-with-n-factors-than-prime-numbers-for?noredirect=1&lq=1">this</a> question I plotted the number of numbers with <span class="math-container">$n$</span> prime factors. It appears that the further out on the number line you go, the further ahead the count of numbers with <span class="math-container">$3$</span> prime factors gets.</p>
<p>The charts show the number of numbers with exactly <span class="math-container">$n$</span> prime factors, counted with multiplicity:
<a href="https://i.sstatic.net/DcHmw.png" rel="noreferrer"><img src="https://i.sstatic.net/DcHmw.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/UTIUA.png" rel="noreferrer"><img src="https://i.sstatic.net/UTIUA.png" alt="enter image description here"></a>
(Please ignore the 'Divisors' in the chart legend, it should read 'Factors')</p>
<p><strong>My question is</strong>: will the line for numbers with <span class="math-container">$3$</span> prime factors be overtaken by another line or do 'most numbers have <span class="math-container">$3$</span> prime factors'? It it is indeed that case that most numbers have <span class="math-container">$3$</span> prime factors, what is the explanation for this?</p>
| <p>Yes, the line for numbers with <span class="math-container">$3$</span> prime factors will be overtaken by another line. As shown & explained in <a href="https://hermetic.ch/prf/plotting_prime_number_freq.htm" rel="nofollow noreferrer">Prime Factors: Plotting the Prime Factor Frequencies</a>, even up to <span class="math-container">$10$</span> million, the most frequent count is <span class="math-container">$3$</span>, with the mean being close to it. However, it later says</p>
<blockquote>
<p>For <span class="math-container">$n = 10^9$</span> the mean is close to <span class="math-container">$3$</span>, and for <span class="math-container">$n = 10^{24}$</span> the mean is close to <span class="math-container">$4$</span>.</p>
</blockquote>
<p>The most common # of prime factors increases, but only very slowly, and with the mean having "no upper limit".</p>
<p>The closely related OEIS sequence A001221 (which does not count multiplicities), <a href="http://oeis.org/A001221" rel="nofollow noreferrer">Number of distinct primes dividing n (also called omega(n))</a>, says</p>
<blockquote>
<p>The average order of <span class="math-container">$a(n): \sum_{k=1}^n a(k) \sim \sum_{k=1}^n \log \log k.$</span> - <a href="http://oeis.org/wiki/User:Daniel_Forgues" rel="nofollow noreferrer">Daniel Forgues</a>, Aug 13-16 2015</p>
</blockquote>
<p>Since this involves the log of a log, it helps explain why the average order increases only very slowly.</p>
<p>In addition, the <a href="https://en.wikipedia.org/wiki/Hardy%E2%80%93Ramanujan_theorem" rel="nofollow noreferrer">Hardy–Ramanujan theorem</a> says</p>
<blockquote>
<p>... the normal order of the number <span class="math-container">$\omega(n)$</span> of distinct prime factors of a number <span class="math-container">$n$</span> is <span class="math-container">$\log(\log(n))$</span>.</p>
<p>Roughly speaking, this means that most numbers have about this number of distinct prime factors.</p>
</blockquote>
<p>Also, regarding the statistical distribution, you have the <a href="https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Kac_theorem" rel="nofollow noreferrer">Erdős–Kac theorem</a> which states</p>
<blockquote>
<p>... if <span class="math-container">$ω(n)$</span> is the number of distinct prime factors of <span class="math-container">$n$</span> (sequence <a href="https://oeis.org/A001221" rel="nofollow noreferrer">A001221</a> in the <a href="https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences" rel="nofollow noreferrer">OEIS</a>, then, loosely speaking, the probability distribution of</p>
</blockquote>
<p><span class="math-container">$$\frac {\omega (n)-\log \log n}{\sqrt {\log \log n}}$$</span></p>
<blockquote>
<p>is the standard <a href="https://en.wikipedia.org/wiki/Normal_distribution" rel="nofollow noreferrer">normal distribution</a>.</p>
</blockquote>
<p>To see graphs related to this distribution, the first linked page of <a href="https://hermetic.ch/prf/plotting_prime_number_freq.htm" rel="nofollow noreferrer">Prime Factors: Plotting the Prime Factor Frequencies</a> has one which shows the values up to <span class="math-container">$10$</span> million.</p>
<p>Just another plot, out to about <span class="math-container">$250\times10^9$</span>, showing the relative proportion of numbers below each bound with <span class="math-container">$x$</span> prime factors (with multiplicity):
<a href="https://i.sstatic.net/KE3Fa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KE3Fa.png" alt="enter image description here" /></a></p>
<p>Somewhere between $151,100,000,000$ and $151,200,000,000$, the curve for $4$ factors overtakes the curve for $3$.</p>
|
combinatorics | <p>This is an offshoot of <a href="https://math.stackexchange.com/questions/2313084/how-to-prove-if-an-n-size-jigsaw-puzzle-is-solvable">this question</a>. </p>
<p>Suppose we have jigsaw puzzle pieces which are basically squares but where each side can be either straight, concave or convex. An example of three such pieces is shown below:
<a href="https://i.sstatic.net/Nvk7T.jpg" rel="noreferrer"><img src="https://i.sstatic.net/Nvk7T.jpg" alt="enter image description here"></a></p>
<p>It is clear that there can be $3^4=81$ different types of pieces. I was wondering, given one of each type (and with no rotations or flips allowed), whether it was possible to create a standard jigsaw puzzle, using just those $81$ pieces. By "standard jigsaw puzzle" I mean one of dimensions $m \times n$ where all perimeter sides are straight. </p>
<p>A $1\times 81$ puzzle is clearly not possible as it would require all $81$ pieces to be straight on both the left and the right side. A similar argument holds for a $81\times 1$ puzzle. </p>
<p>A $3\times 27$ puzzle would require $27$ pieces with a left straight side and $27$ pieces with a right straight edge, and while it is true that such two sets exist, they have an overlap of $9$ pieces. This is therefore not possible. As before, a similar argument holds for a $27\times 3$ puzzle.</p>
<p>This leaves the $9\times 9$ possibility. A priori, I can see no reason this shouldn't be possible. And I think I have a proof that it is possible, which is what I would like your opinion on.</p>
<p>Given a $9\times 9$ puzzle, we have the situation below:
<a href="https://i.sstatic.net/DjvNX.jpg" rel="noreferrer"><img src="https://i.sstatic.net/DjvNX.jpg" alt="enter image description here"></a></p>
<p>Each black side of a piece is fixed, but each green side of a piece represents a possible connection type. A connection type could be a "straight - straight", "concave - convex" or "convex - concave" type. It seems to me that if we run through every possible connection type for each piece, one of those scenarios must give a puzzle where each of the $81$ piece types is used exactly once. </p>
<p>Am I right?</p>
| <p>I wouldn't call what you say a proof ... It is not immediately clear from what you say that you will indeed get all possible pieces in there.</p>
<p>However, your hunch is correct: this puzzle is indeed solvable. Here is a proof:</p>
<p>Let's call the straight edge $1$, the concave edge $2$, and the convex edge $3$. Now, label the $10$ vertical edges in each row of your $9 \times 9$ board with $1,1,2,3,1,3,3,2,2,1$ in that order. Note that if you look at two successive edges (so these would be the left and right edges of a piece), you get all possible pairings: $11,12,23,31,13,33,32,22,21$. Hence, pieces in the same column will have the same left and right edge, but pieces in different columns will have different left/right pairings. </p>
<p>Ok, do the same with the horizontal edges from top to bottom. This will give you all possible pairings for top and bottom edges. So, since different rows have different top/bottom pairings, and since different columns will have different left/right pairings, you do indeed get all $81$ possible combinations, and thus all the $81$ different pieces in there.</p>
<p>Here is the resulting solved puzzle (thanks @Frxstrem !)</p>
<p><a href="https://i.sstatic.net/wvzpK.jpg" rel="noreferrer"><img src="https://i.sstatic.net/wvzpK.jpg" alt="enter image description here"></a></p>
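<p>The construction above is easy to verify mechanically; here is a short Python check (my own sketch, identifying each edge shape with its label and leaving the concave/convex mating convention implicit):</p>

```python
# Edge labels from the answer: 1 = straight, 2 = concave, 3 = convex.
seq = [1, 1, 2, 3, 1, 3, 3, 2, 2, 1]

# Consecutive label pairs give all 9 ordered pairs, each exactly once.
pairs = {(seq[k], seq[k + 1]) for k in range(9)}
assert len(pairs) == 9

# Piece in row r, column c: (left, right, top, bottom) edge labels.
pieces = {(seq[c], seq[c + 1], seq[r], seq[r + 1])
          for r in range(9) for c in range(9)}
assert len(pieces) == 81      # all 3^4 piece types occur exactly once
```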
| <p>This answer, complementing the excellent answer of @Bram28, explains the idea with line coloring. It is possible (see figure 1) to place 10 horizontal and 10 vertical lines using (R,G,B) colors in such a way that the <span class="math-container">$3^4=81$</span> different squares are present (for example, there is a unique square with North, West, South, East colored (G,B,R,R) resp.: this square is found in the South East corner). Check that the <span class="math-container">$3^4=81$</span> different squares are present once and only once.</p>
<p>Now, attribute colors to each side according to its shape: </p>
<ul>
<li>R<span class="math-container">$\to$</span>"flat side", </li>
<li>G<span class="math-container">$\to$</span>"convex side", </li>
<li>B<span class="math-container">$\to$</span>"concave side".</li>
</ul>
<p>This gives figure 2.</p>
<p><strong>Remarks:</strong></p>
<p>1) The color sequencing of the lines and of the columns has been deliberately chosen to be different. In fact there are 24 different color sequencings. Why is that? Identify the East and West sides of the board; similarly, identify the North and South sides; said otherwise, consider our puzzle as drawn on a torus, so <span class="math-container">$9 \times 9$</span> is the number of pieces as well as the number of lines. This number <span class="math-container">$9$</span> is, I would say by chance, a power <span class="math-container">$9=3^2$</span>, allowing us to use the framework of <strong>de Bruijn sequences</strong>, here of type <span class="math-container">$B(3,2)$</span>. Such a sequence is made of <span class="math-container">$3^2$</span> "letters" (imagined arranged in a ring) such that, by sliding a window of width 2 along the sequence, one gets all the words of length 2 on an alphabet of 3 items: R, G, B. An example: RRGGBBRBG, where sliding the window provides RR, RG, GG, GB, BB, BR, RB, BG and GR, all the words with 2 letters on an alphabet of 3 (note that the last word, GR, has been obtained by wrapping the sequence). For more, see <a href="https://en.wikipedia.org/wiki/De_Bruijn_sequence" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/De_Bruijn_sequence</a>, where you will find a formula explaining that there are <span class="math-container">$6^3/3^2=24$</span> of them.</p>
<p>2) From one solution, one can generally build a large number of other solutions. For example, in the case of the given figure, consider the two identical <span class="math-container">$1 \times 4$</span> (or <span class="math-container">$1 \times 5$</span>, or <span class="math-container">$3 \times 3$</span>... !) rectangles in the South West and North East corners of the puzzle: another solution is obtained by swapping them.</p>
<p><a href="https://i.sstatic.net/dMTW2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dMTW2.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/US3ZS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/US3ZS.jpg" alt="enter image description here"></a></p>
<hr>
<p><strong>Edit :</strong> (Feb 8 '18) </p>
<p>Related :</p>
<p><a href="http://erikdemaine.org/papers/Jigsaw_GC/paper.pdf" rel="nofollow noreferrer">http://erikdemaine.org/papers/Jigsaw_GC/paper.pdf</a></p>
<p><a href="https://math.stackexchange.com/q/619053">What is known about this jigsaw combinatorics problem?</a></p>
<p>Somewhat related : </p>
<p><a href="https://puzzling.stackexchange.com/a/48864">https://puzzling.stackexchange.com/a/48864</a></p>
<p>I found an old reference mentioning an analogous type of pieces in a delightful old booklet entitled "New Mathematical Pastimes" by Major MacMahon, Cambridge, 1921
(<a href="https://archive.org/details/newmathematicalp00macmuoft" rel="nofollow noreferrer">https://archive.org/details/newmathematicalp00macmuoft</a>), page 69. </p>
|
logic | <p>Other ways to put it: Is there any faith required in the adoption of a system of axioms? How is a given system of axioms accepted or rejected if not based on blind faith?</p>
| <p>To paraphrase Robert Mastragostino's comment, a system of axioms doesn't make any assertions that you can accept or reject as true or false; it only specifies the rules of a certain kind of game to play. </p>
<p>It's worth clarifying that a modern mathematician's attitude towards mathematical words is very different from that of a non-mathematician's attitude towards ordinary words (and possibly also very different from a classical mathematician's attitude towards mathematical words). A mathematical word with a precise definition means precisely what it was defined to mean. It's not possible to claim that such a definition is wrong; at best, you can only claim that a definition doesn't capture what it was intended to capture. </p>
<p>Thus a modern interpretation of, say, Euclid's axioms is that they describe the rules of a certain kind of game. Some of the pieces that we play with are called points, some of the pieces are called lines, and so forth, and the pieces obey certain rules. Euclid's axioms are not, from this point of view, asserting anything about the geometry of the world in which we actually live, so one can't accept or reject them on that basis. One can, at best, claim that they don't capture the geometry of the world in which we actually live. But people play unrealistic games all the time. </p>
<p>I think this is an important point which is not communicated well to non-mathematicians about how mathematics works. For a non-mathematician it is easy to say things like "but $i$ can't possibly be a number" or "but $\infty$ can't possibly be a number," and to a mathematician what those statements actually mean is that $i$ and $\infty$ aren't parts of the game <a href="http://en.wikipedia.org/wiki/Real_number">Real Numbers</a>, but there are all sorts of other wonderful games we can play using these new pieces, like <a href="http://en.wikipedia.org/wiki/Complex_number">Complex Numbers</a> and <a href="http://en.wikipedia.org/wiki/Projective_geometry">Projective Geometry</a>... </p>
<p>I want to emphasize that I am not using the word "game" in support of a purely <a href="http://en.wikipedia.org/wiki/Formalism_(mathematics)">formalist</a> viewpoint on mathematics, but I think some formalism is an appropriate answer to this question as a way of clarifying what exactly it is that a mathematical axiom is asserting. Some people use the word "game" in this context to emphasize that mathematics is <a href="http://www.brainyquote.com/quotes/quotes/d/davidhilbe181560.html">"meaningless"</a>. The word "meaningless" here has to be interpreted carefully; it is not meant in the colloquial sense (or at least I would not mean it this way). It means that the syntax of mathematics can be separated from its semantics, and that it is often less confusing to do so. But anyone who believes that games are meaningless in the colloquial sense has clearly never played a game... </p>
| <p>The role of axioms is to describe a mathematical universe. Some settings, some objects. Axioms are there only to tell us what we know on such universe.</p>
<p>Indeed we have to believe that ZFC is consistent (or assume an even stronger theory, and believe <em>that</em> one is consistent, or assume... wait, I'm getting recursive here). But the role of ZFC is just to tell us how sets behave in a certain mathematical setting.</p>
<p>The grand beauty of mathematics is that we are able to extract <strong>so much</strong> merely from these rules which describe what properties sets should have.</p>
<p>Whether or not <em>you</em> should accept an axiomatic theory or not is up to you. The usual test is to see whether or not the properties described by the axioms make sense and seem to describe the idea behind the object in a reasonable manner.</p>
<p>We want to know that if a set exists, then its power set exists. Therefore the axiom of power set is reasonable. We want to know that two sets are equal if and only if they have the same elements, which is a very very reasonable requirement from sets, membership and equality. Therefore the axiom of extensionality makes sense.</p>
<p>What you should do when you attempt to decide whether or not you accept some axioms is to try and understand the idea these axioms try to formalize. If they convince you that the formalization is "good enough" then you should believe that the axioms are consistent and use them. Otherwise you should look for an alternative.</p>
<hr>
<p><strong>Related:</strong></p>
<ol>
<li>I talked about the difference between an idea and its mathematical implementation <a href="https://math.stackexchange.com/a/173002">in this answer of mine</a> which might be relevant to this discussion as well (to some extent).</li>
</ol>
|
linear-algebra | <p>I found out that there exist positive definite matrices that are non-symmetric, and I know that symmetric positive definite matrices have positive eigenvalues.</p>
<p>Does this hold for non-symmetric matrices as well?</p>
| <p>Let <span class="math-container">$A \in M_{n}(\mathbb{R})$</span> be any non-symmetric <span class="math-container">$n\times n$</span> matrix but "positive definite" in the sense that: </p>
<p><span class="math-container">$$\forall x \in \mathbb{R}^n, x \ne 0 \implies x^T A x > 0$$</span>
The eigenvalues of <span class="math-container">$A$</span> need not be positive. For example, the matrix in David's comment:</p>
<p><span class="math-container">$$\begin{pmatrix}1&1\\-1&1\end{pmatrix}$$</span></p>
<p>has eigenvalues <span class="math-container">$1 \pm i$</span>. However, the real part of any eigenvalue <span class="math-container">$\lambda$</span> of <span class="math-container">$A$</span> is always positive.</p>
<p>Let <span class="math-container">$\lambda = \mu + i\nu\in\mathbb C $</span> where <span class="math-container">$\mu, \nu \in \mathbb{R}$</span> be an eigenvalue of <span class="math-container">$A$</span>. Let <span class="math-container">$z \in \mathbb{C}^n$</span> be a right eigenvector associated with <span class="math-container">$\lambda$</span>. Decompose <span class="math-container">$z$</span> as <span class="math-container">$x + iy$</span> where <span class="math-container">$x, y \in \mathbb{R}^n$</span>.</p>
<p><span class="math-container">$$(A - \lambda) z = 0 \implies \left((A - \mu) - i\nu\right)(x + iy) = 0
\implies \begin{cases}(A-\mu) x + \nu y = 0\\(A - \mu) y - \nu x = 0\end{cases}$$</span>
This implies</p>
<p><span class="math-container">$$x^T(A-\mu)x + y^T(A-\mu)y = \nu (y^T x - x^T y) = 0$$</span></p>
<p>and hence
<span class="math-container">$$\mu = \frac{x^TA x + y^TAy}{x^Tx + y^Ty} > 0$$</span></p>
<p>In particular, this means any real eigenvalue <span class="math-container">$\lambda$</span> of <span class="math-container">$A$</span> is positive.</p>
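<p>A quick sanity check of the example above (a pure-Python sketch of my own):</p>

```python
import cmath

# David's example: A = [[1, 1], [-1, 1]] satisfies x^T A x > 0
# for every nonzero real x, yet is not symmetric.
a, b, c, d = 1, 1, -1, 1

# Eigenvalues from the characteristic polynomial
# lambda^2 - (a + d) lambda + (ad - bc) = 0.
tr, det = a + d, a * d - b * c
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
# Both eigenvalues are 1 +/- i: complex, but with positive real part.
assert abs(lam1 - (1 + 1j)) < 1e-12 and abs(lam2 - (1 - 1j)) < 1e-12

# For this A the quadratic form collapses to x1^2 + x2^2 > 0:
def quad_form(x1, x2):
    return a * x1 * x1 + (b + c) * x1 * x2 + d * x2 * x2

assert all(quad_form(x1, x2) > 0
           for x1 in range(-3, 4) for x2 in range(-3, 4)
           if (x1, x2) != (0, 0))
```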
| <p>I am answering the first part of @nukeguy's comment, who asked:</p>
<blockquote>
<p>Is the converse true? If all of the eigenvalues of a matrix <span class="math-container">$A$</span> have positive real parts, does this mean that <span class="math-container">$x^T A x>0$</span> for any <span class="math-container">$x \neq 0 \in \mathbb{R}^n$</span>? What if we assume <span class="math-container">$A$</span> is diagonalizable?</p>
</blockquote>
<p>I have a counterexample, where <span class="math-container">$A$</span> has positive eigenvalues, but it is not positive definite:
<span class="math-container">$ A=
\begin{bmatrix}
7 & -2 & -4 \\
-17 & 40 & -19 \\
-21 & -9 & 31
\end{bmatrix}
$</span>. Eigenvalues of this matrix are <span class="math-container">$1.2253$</span>, <span class="math-container">$27.4483$</span>, and <span class="math-container">$49.3263$</span>, but it is indefinite, because if <span class="math-container">$x_1 = \begin{bmatrix}-48 & -10& -37\end{bmatrix}$</span> and <span class="math-container">$x_2 = \begin{bmatrix}-48 &10 &-37\end{bmatrix}$</span>, then we have <span class="math-container">$x_1 A x_1^T = -1313$</span> and <span class="math-container">$x_2 A x_2^T = 37647.$</span></p>
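<p>These numbers are straightforward to verify; a small pure-Python check (my own sketch; the characteristic-polynomial coefficients below are the trace, the sum of the principal <span class="math-container">$2\times 2$</span> minors, and the determinant of <span class="math-container">$A$</span>):</p>

```python
A = [[7, -2, -4],
     [-17, 40, -19],
     [-21, -9, 31]]

def quad(x):
    """Quadratic form x A x^T for a row vector x."""
    return sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3))

# The two quadratic-form values quoted above:
assert quad([-48, -10, -37]) == -1313
assert quad([-48, 10, -37]) == 37647

# Characteristic polynomial of A: p(t) = t^3 - 78 t^2 + 1448 t - 1659.
def p(t):
    return t**3 - 78 * t**2 + 1448 * t - 1659

# Sign changes at t = 0, 2, 30, 50 bracket three positive real roots,
# consistent with the eigenvalues ~1.2253, ~27.4483, ~49.3263.
assert p(0) < 0 < p(2) and p(30) < 0 < p(50)
```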
|
linear-algebra | <p>I am having a difficulty setting up the proof of the fact that two bases of a vector space have the same cardinality for the infinite-dimensional case. In particular, let <span class="math-container">$V$</span> be a vector space over a field <span class="math-container">$K$</span> and let <span class="math-container">$\left\{v_i\right\}_{i \in I}$</span> be a basis where <span class="math-container">$I$</span> is infinite countable. Let <span class="math-container">$\left\{u_j\right\}_{j \in J}$</span> be another basis. Then <span class="math-container">$J$</span> must be infinite countable as well. Any ideas on how to approach the proof?</p>
| <p>In <em>spirit</em>, the proof is very similar to the proof that two <em>finite</em> bases must have the same cardinality: express each vector in one basis in terms of the vectors in the other basis, and leverage that to show the cardinalities must be equal, by using the fact that the "other" basis must span and be lineraly independent. </p>
<p>Suppose that $\{v_i\}_{i\in I}$ and $\{u_j\}_{j\in J}$ are two infinite bases for $V$.</p>
<p>For each $i\in I$, $v_i$ is in the linear span of $\{u_j\}_{j\in J}$. Therefore, there exists a <strong>finite</strong> subset $J_i\subseteq J$ such that $v_i$ is a linear combination of the vectors $\{u_j\}_{j\in J_i}$ (since a linear combination involves only finitely many vectors with nonzero coefficient). </p>
<p>Therefore, $V=\mathrm{span}(\{v_i\}_{i\in I}) \subseteq \mathrm{span}\{u_j\}_{j\in \cup J_i}$. Since no proper subset of $\{u_j\}_{j\in J}$ can span $V$, it follows that $J = \mathop{\cup}\limits_{i\in I}J_i$. </p>
<p>Now use this to show that $|J|\leq |I|$, and a symmetric argument to show that $|I|\leq |J|$. </p>
<p><em>Note.</em> The argument I have in mind in the last line involves some (simple) cardinal arithmetic; note that, in full generality, at least some form of the Axiom of Choice may be needed.</p>
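<p>For completeness, the cardinal-arithmetic step alluded to can be sketched as follows (using the Axiom of Choice, and the facts that each $J_i$ is finite while $I$ is infinite):</p>

```latex
|J| \;=\; \Bigl|\bigcup_{i\in I} J_i\Bigr| \;\le\; \sum_{i\in I} |J_i|
\;\le\; \sum_{i\in I} \aleph_0 \;=\; |I|\cdot\aleph_0 \;=\; |I|,
```

<p>and then $|I|\le |J|$ follows by the symmetric argument.</p>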
| <p>Once you have the necessary facts about infinite sets, the argument is very much like that used in the finite-dimensional case. The two crucial pieces of information are (1) that if <span class="math-container">$I$</span> is an infinite set of cardinality <span class="math-container">$\kappa$</span>, say, then <span class="math-container">$I$</span> has <span class="math-container">$\kappa$</span> finite subsets, and (2) that if <span class="math-container">$|J|>\kappa$</span>, and <span class="math-container">$J$</span> is expressed as the union of <span class="math-container">$\kappa$</span> subsets, then at least one of those subsets must be infinite.</p>
<p>Let <span class="math-container">$B_1 = \{v_i:i\in I \}$</span> and <span class="math-container">$B_2 = \{u_j:j \in J \}$</span>, and suppose that <span class="math-container">$|J|>|I| = \kappa$</span>. Each <span class="math-container">$u_j \in B_2$</span> can be written as a linear combination of some finite subset of <span class="math-container">$B_1$</span>, say <span class="math-container">$u_j = \sum\limits_{i \in F_j}k_{ji}v_i$</span>, where <span class="math-container">$F_j$</span> is a finite subset of <span class="math-container">$I$</span>. For each finite <span class="math-container">$F \subseteq I$</span> let <span class="math-container">$J_F = \{j \in J:F_j = F\}$</span>; clearly <span class="math-container">$J$</span> is the union of these sets <span class="math-container">$J_F$</span>. But by (1) above <span class="math-container">$I$</span> has only <span class="math-container">$\kappa$</span> finite subsets, and <span class="math-container">$|J|>\kappa$</span>, so by (2) above there must be some finite <span class="math-container">$F \subseteq I$</span> such that <span class="math-container">$J_F$</span> is infinite.</p>
<p>To simplify the notation, let <span class="math-container">$F = \{i_1,i_2,\dots,i_n\}$</span>, and for <span class="math-container">$\ell=1,2,\dots,n$</span> let <span class="math-container">$v_\ell = v_{i_\ell}$</span>; then every vector <span class="math-container">$u_j$</span> with <span class="math-container">$j \in J_F$</span> is a linear combination of the vectors <span class="math-container">$v_1,v_2,\dots,v_n$</span>. In other words, <span class="math-container">$\{u_j:j \in J_F\} \subseteq \operatorname{span}\{v_1,v_2,\dots,v_n\}$</span>, and of course <span class="math-container">$\{u_j:j \in J_F\}$</span>, being a subset of the basis <span class="math-container">$B_2$</span>, is linearly independent. But <span class="math-container">$\operatorname{span}\{v_1,v_2,\dots,v_n\}$</span> is of dimension <span class="math-container">$n$</span> over <span class="math-container">$K$</span>, so any set of more than <span class="math-container">$n$</span> vectors in <span class="math-container">$\operatorname{span}\{v_1,v_2,\dots,v_n\}$</span> must be linearly <em>dependent</em>, and we have a contradiction. It follows that we must have <span class="math-container">$|J| \le |I|$</span>. By symmetry (or by the same argument with the rôles of <span class="math-container">$I$</span> and <span class="math-container">$J$</span> interchanged), <span class="math-container">$|I| \le |J|$</span>, and hence <span class="math-container">$|I|=|J|$</span>.</p>
|
game-theory | <p>Alice and Bob play the following game. There is one pile of $N$ stones. Alice and Bob take turns to pick stones from the pile. Alice always begins by picking at least one, but less than $N$ stones. Thereafter, in each turn a player must pick at least one stone, but no more stones than were picked in the immediately preceding turn. The player who takes the last stone wins. With what property of $N$, will Alice win? When will Bob win?</p>
<p>For odd $N$ the outcome is quite clear, as Alice will start by picking one stone and will enforce the win. But what then?</p>
| <p><em>Alice wins when $N$ is not a power of 2</em>. </p>
<p>Let $i(N)$ be the maximal $i$ such that $N$ is divisble by $2^i$.</p>
<p><strong>Alice's strategy</strong>: If there are $m$ stones left, Alice picks $2^{i(m)}$ stones. </p>
<p>Let me explain why this strategy is winning for Alice. Assume that Bob won by picking $b > 0$ remaining stones. Assume that before that Alice picked $2^j$ stones. By definition $j = i(2^j + b)$. Hence $2^j + b$ is divisible by $2^j$. Therefore $b$ is also divisible by $2^j$. This means that $b\ge 2^j$. On the other hand by the rules of the game $b\le 2^j$. Therefore $b = 2^j$. </p>
<p>But then $i(2^j + b) = i(2^j + 2^j) = j + 1$, contradiction.</p>
<p>This strategy is correct because (a) since $N$ is not a power of 2, Alice picks less than $N$ stones in the first turn; (b) Alice never picks more stones than Bob in the previous turn. Indeed, assume that Bob picked $b$ stones, before Alice picked $2^j$ stones, and we are left with $m > 0$ stones. </p>
<p>Let us verify that $2^{i(m)} \le b$. We will do it by showing that $b$ is divisible by $2^{i(m)}$. </p>
<p>Indeed, by definition of Alice's strategy $j = i(2^j + b + m)$, and by the rules of the game $b\le 2^j$. Let us show that $i(m) \le j$. Indeed, if $i(m) > j$, then $b$ is divisible by $2^j$, since both $2^j + b + m$ and $m$ are divisible by $2^j$. But since $b\le 2^j$, this means that $b = 2^j$. This contradicts the fact that $j = i(2^j + b + m)$. Indeed, since $i(m) > j$, we have that $i(2^j + b + m) = i(2^{j + 1} + m) \ge j + 1$. </p>
<p>Thus we have proved that $i(m) \le j$. This means $2^j + b + m$ is divisible by $2^{i(m)}$, as is $m$. Hence $b$ is also divisible by $2^{i(m)}$, as required.</p>
<p><em>Bob wins when $N$ is a power of 2</em>. Assume that $N = 2^i$ and Alice picks $a$ stones. Then Bob can use Alice's strategy described above. We only have to check that Bob then picks at most $a$ stones. Indeed, assume that $j$ is such that $2^i - a$ is divisible by $2^j$. Then $a$ is also divisible by $2^j$ hence $2^j \le a$. </p>
| <p><strong>First, we establish that whenever $N = 2^m$, Bob wins</strong>. We'll do this by induction. Suppose Bob is guaranteed victory for $2^k$ stones. Consider a game starting with $2^{k+1}$ stones. These can be grouped into two groups of $2^k$ stones each. Since Bob has a winning strategy for $2^k$ stones, he has a strategy which will allow him to play a move which finishes the first group of $2^k$ stones, forcing Alice to play with $2^k$ stones remaining. This is a situation where Bob wins, by our hypothesis. The base case is easy: if there are only $2$ stones, Bob wins.</p>
<p><strong>If $N \neq 2^m$ then Alice wins</strong>. In such a case, write $N = 2^k +r$, where $2^k$ is the largest power of $2$ which is still smaller than $N$. Alice can then start by taking off $r$ stones. Now, since $r < 2^k$, Bob can't take away all the stones that are left. So Bob is forced to play with $2^k$ stones remaining, which Alice now wins (Bob is the one playing the move when $2^k$ stones are left).</p>
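<p>For small $N$ the claimed pattern is easy to confirm by brute force (a sketch of my own, not part of the proof):</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones, max_take):
    """True if the player to move (allowed to take 1..max_take stones,
    never more than the previous move) can force a win."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    return any(not current_player_wins(stones - t, t)
               for t in range(1, min(max_take, stones) + 1))

def alice_wins(n):
    # Alice must take at least 1 but fewer than n stones on her first move.
    return any(not current_player_wins(n - a, a) for a in range(1, n))

# Bob wins exactly when n is a power of two.
for n in range(2, 65):
    assert alice_wins(n) == (n & (n - 1) != 0)
```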
|
matrices | <p>According to Wikipedia:</p>
<blockquote>
<p>A common convention is to list the singular values in descending order. In this case, the diagonal matrix <span class="math-container">$\Sigma$</span> is uniquely determined by <span class="math-container">$M$</span> (though the matrices <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are not).</p>
</blockquote>
<p>My question is, are <span class="math-container">$U$</span> and <span class="math-container">$V$</span> uniquely determined up to some equivalence relation (and what equivalence relation)?</p>
| <p>Let $A = U_1 \Sigma V_1^* = U_2 \Sigma V_2^*$. Let us assume that $\Sigma$ has distinct diagonal elements and that $A$ is tall. Then</p>
<p>$$A^* A = V_1 \Sigma^* \Sigma V_1^* = V_2 \Sigma^* \Sigma V_2^*.$$</p>
<p>From this, we get</p>
<p>$$\Sigma^* \Sigma V_1^* V_2 = V_1^* V_2 \Sigma^* \Sigma.$$</p>
<p>Notice that $\Sigma^* \Sigma$ is diagonal with all different diagonal elements (that's why we needed $A$ to be tall) and $V_1^* V_2$ is unitary. Defining $V := V_1^* V_2$ and $D := \Sigma^* \Sigma$, we have</p>
<p>$$D V = V D.$$</p>
<p>Now, since $V$ and $D$ commute, $V$ preserves the eigenspaces of $D$. But $D$ is a diagonal matrix with distinct diagonal elements (i.e., distinct eigenvalues), so its eigenvectors are the elements of the canonical basis. That means that $V$ is diagonal too, which means that</p>
<p>$$V = \operatorname{diag}(e^{{\rm i}\varphi_1}, e^{{\rm i}\varphi_2}, \dots, e^{{\rm i}\varphi_n}),$$</p>
<p>for some $\varphi_i$, $i=1,\dots,n$.</p>
<p>In other words, $V_2 = V_1 V$. Plug that back in the formula for $A$ and you get</p>
<p>$$A = U_1 \Sigma V_1^* = U_2 \Sigma V_2^* = U_2 \Sigma V^* V_1^* = U_2 V^* \Sigma V_1^*.$$</p>
<p>So, $U_2 = U_1 V$ if $\Sigma$ (and, in extension, $A$) is square nonsingular. Other options, somewhat similar to this, are possible if $\Sigma$ has zeroes on the diagonal and/or is rectangular.</p>
<p>If $\Sigma$ has repeating diagonal elements, much more can be done to change $U$ and $V$ (for example, one or both can permute corresponding columns).</p>
<p>If $A$ is not thin, but wide, you can do the same thing by starting with $AA^*$.</p>
<p>So, to answer your question: for a square, nonsingular $A$, there is a nice relation between different pairs of $U$ and $V$ (multiplication by a unitary diagonal matrix, applied in the same way to the both of them). Otherwise, you get quite a bit more freedom, which I believe is hard to formalize.</p>
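<p>For a concrete illustration of the square nonsingular case (a numerical sketch of my own, assuming NumPy is available; the size and seed are arbitrary):</p>

```python
import numpy as np

# A random square matrix is almost surely nonsingular with distinct
# singular values, so its SVD is unique up to multiplying the columns
# of U and V by the same diagonal unitary (just signs, in the real case).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

U, s, Vt = np.linalg.svd(A)

# Multiply U and V by the same diagonal orthogonal matrix S:
S = np.diag([1.0, -1.0, 1.0])
U2, Vt2 = U @ S, S @ Vt          # i.e. U2 = U S and V2 = V S

# Both factorizations reconstruct A with the same Sigma...
assert np.allclose(U @ np.diag(s) @ Vt, A)
assert np.allclose(U2 @ np.diag(s) @ Vt2, A)
# ...and U2 is still orthogonal.
assert np.allclose(U2 @ U2.T, np.eye(3))
```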
| <p>I'm going to provide a full characterisation of the set of SVDs for a given matrix <span class="math-container">$A$</span>, using two different (but of course ultimately equivalent) kinds of formalisms. First a standard matrix formalism, and then using dyadic notation.</p>
<p>The TL;DR is that if <span class="math-container">$A=UDV^\dagger=\tilde U \tilde D\tilde V^\dagger$</span> for <span class="math-container">$U,\tilde U,V,\tilde V$</span> isometries and <span class="math-container">$D,\tilde D>0$</span> square, strictly positive diagonal matrices, then we can safely assume <span class="math-container">$D=\tilde D$</span> by trivially rearranging the basis for the underlying space, and furthermore that <span class="math-container">$\tilde V=V W$</span> with <span class="math-container">$W$</span> a unitary block-diagonal matrix that leaves invariant certain subspaces directly determined by <span class="math-container">$A$</span> (further details below). This characterises all and only the possible SVDs. So given any SVD, the freedom in the choice of other SVDs corresponds to the freedom in choosing these unitaries <span class="math-container">$W$</span> (how much freedom that is depends in turn on the degeneracy of the singular values of <span class="math-container">$A$</span>).</p>
<h2>Regular notation</h2>
<p>Consider the SVD of a given matrix <span class="math-container">$A$</span>, in the form <span class="math-container">$A=UDV^\dagger$</span> with <span class="math-container">$D>0$</span> a square diagonal matrix with strictly positive entries, and <span class="math-container">$U,V$</span> isometries (this writing is general: the SVD is often written in a way that makes <span class="math-container">$U,V$</span> unitaries and <span class="math-container">$D$</span> not necessarily square, but you can instead have <span class="math-container">$D>0$</span> square if you allow <span class="math-container">$U,V$</span> to be isometries).</p>
<p>The question is, if you have
<span class="math-container">$$A = UDV^\dagger = \tilde U \tilde D \tilde V^\dagger,$$</span>
with <span class="math-container">$U,\tilde U,V,\tilde V$</span> isometries, and <span class="math-container">$D,\tilde D>0$</span> square diagonal matrices, what does this imply for <span class="math-container">$\tilde U,\tilde D,\tilde V$</span>? And more specifically, can we somehow find an explicit relation between them?</p>
<ol>
<li><p>The first easy observation is that you must have <span class="math-container">$D=\tilde D$</span>, modulo trivial rearrangements of the basis elements. This follows from <span class="math-container">$AA^\dagger=UD^2 U^\dagger = \tilde U \tilde D^2 \tilde U^\dagger$</span>, which means that <span class="math-container">$D^2$</span> and <span class="math-container">$\tilde D^2$</span> both contain the eigenvalues of <span class="math-container">$AA^\dagger$</span>. Because the spectrum is determined algebraically by <span class="math-container">$AA^\dagger$</span>, the two sets of eigenvalues must be identical, and since singular values are by definition positive reals, we must have <span class="math-container">$D=\tilde D$</span>.</p>
</li>
<li><p>The above reduces the question to: if <span class="math-container">$UDV^\dagger=\tilde U D\tilde V^\dagger$</span>, with <span class="math-container">$U,\tilde U,V,\tilde V$</span> isometries, what can we say about <span class="math-container">$\tilde U,\tilde V$</span>? To this end, we observe that the freedom in the choice of <span class="math-container">$U,V$</span> amounts to the different possible ways to decompose the subspaces associated to each distinct singular value.</p>
<p>More precisely, consider the subspace <span class="math-container">$V^{(d)}\equiv \{v: \, \|Av\|=d\}$</span> corresponding to a singular value <span class="math-container">$d$</span>. We can then uniquely decompose the matrix as <span class="math-container">$A=\sum_d A_d$</span> where <span class="math-container">$A_d\equiv A \Pi_d$</span> and <span class="math-container">$\Pi_d$</span> is the projection onto <span class="math-container">$V^{(d)}$</span>, and the sum is over all nonzero singular values <span class="math-container">$d$</span> of <span class="math-container">$A$</span>. We can now observe that any and all SVDs of <span class="math-container">$A$</span> correspond to a choice of orthonormal basis for each <span class="math-container">$V^{(d)}$</span>. Namely, for any such basis <span class="math-container">$\{\mathbf v_k\}$</span> we associate the partial isometry <span class="math-container">$V_d\equiv \sum_k \mathbf v_k \mathbf e_k^\dagger$</span>. The corresponding orthonormal basis for the image of <span class="math-container">$A_d$</span> is then determined as <span class="math-container">$\mathbf u_k= A \mathbf v_k/d$</span>, and we then define the partial isometry <span class="math-container">$U_d\equiv \sum_k \mathbf u_k \mathbf e_k^\dagger$</span>.
Here, <span class="math-container">$\mathbf e_k$</span> denotes an orthonormal basis spanning the elements of <span class="math-container">$D$</span> corresponding to the singular value <span class="math-container">$d$</span>.
This procedure provides a decomposition <span class="math-container">$A_d= U_d D V_d^\dagger$</span>, and therefore an SVD for <span class="math-container">$A$</span> itself by summing these.
Any SVD can be constructed this way.</p>
<p>In conclusion, the freedom in choosing an SVD is entirely in the
choice of bases <span class="math-container">$\{\mathbf v_k\}$</span> above. We can summarise this freedom concisely by saying that given any SVD <span class="math-container">$A=UDV^\dagger$</span>, any other SVD can be written as <span class="math-container">$A=UW D (VW)^\dagger$</span> for some unitary <span class="math-container">$W$</span> such that <span class="math-container">$[W,D]=0$</span>. This commutation property is a concise way to state that <span class="math-container">$W$</span> is only allowed to mix vectors corresponding to the same singular value, that is, to the same eigenspace of <span class="math-container">$D$</span>.</p>
</li>
</ol>
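<p>Before turning to the examples, here is a quick numerical sketch (my own illustration, not part of the derivation above) of the claim that, given one SVD <span class="math-container">$A=UDV^\dagger$</span>, any unitary <span class="math-container">$W$</span> with <span class="math-container">$[W,D]=0$</span> yields another SVD <span class="math-container">$A=(UW)D(VW)^\dagger$</span>. The construction of a matrix with a degenerate singular value is an arbitrary choice here:</p>

```python
import numpy as np

# Build a matrix with singular values (2, 2, 1): the first two are degenerate.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q1 @ np.diag([2.0, 2.0, 1.0]) @ Q2.T

U, s, Vh = np.linalg.svd(A)
V = Vh.conj().T

# W acts unitarily inside the degenerate 2x2 block and trivially elsewhere,
# so it commutes with D = diag(s).
theta = 0.7
W = np.eye(3)
W[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
assert np.allclose(W @ np.diag(s), np.diag(s) @ W)   # [W, D] = 0

A2 = (U @ W) @ np.diag(s) @ (V @ W).conj().T
print(np.allclose(A2, A))  # True: the rotated pair is another valid SVD
```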
<h4>Toy example #1</h4>
<p>Let's work out a simple toy example to illustrate the above results.</p>
<p>Let
<span class="math-container">$$H \equiv \begin{pmatrix}1&1\\1&-1\end{pmatrix}.$$</span>
This is a somewhat trivial example because <span class="math-container">$H$</span> is Hermitian, but still illustrates some aspects. A standard SVD reads
<span class="math-container">$$ H =
\underbrace{\frac{1}{\sqrt2}\begin{pmatrix}1&1\\-1&1\end{pmatrix} }_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt2 & 0\\0&\sqrt2\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}0&1\\1&0\end{pmatrix}}_{\equiv V^\dagger}.$$</span>
In this case, we have two identical singular values. According to our discussion above, this means that we can apply a(ny) unitary transformation to the columns of <span class="math-container">$V$</span> and still obtain another SVD. That is, given any unitary <span class="math-container">$W$</span>, <span class="math-container">$\tilde V\equiv V W$</span> gives another SVD for <span class="math-container">$H$</span>. In this simple case, you can also observe this directly, as <span class="math-container">$D=\sqrt2 I$</span>, and therefore
<span class="math-container">$$H = UDV^\dagger= UD W \tilde V^\dagger
= (UW) D \tilde V^\dagger,$$</span>
hence <span class="math-container">$\tilde U\equiv UW$</span>, <span class="math-container">$\tilde V\equiv VW$</span> give the alternative SVD <span class="math-container">$H=\tilde U D\tilde V^\dagger$</span>, and all SVDs have this form.</p>
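<p>For concreteness, this toy example can be verified numerically; the following snippet (my own check, with an arbitrary rotation as <span class="math-container">$W$</span>) confirms both the SVD above and one rotated alternative:</p>

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]])
U = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
D = np.sqrt(2) * np.eye(2)
Vh = np.array([[0.0, 1.0], [1.0, 0.0]])        # V^dagger
print(np.allclose(U @ D @ Vh, H))               # True: the SVD above checks out

# Since D is proportional to the identity, any unitary W gives another SVD.
theta = 0.3
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
V = Vh.conj().T
print(np.allclose((U @ W) @ D @ (V @ W).conj().T, H))  # True
```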
<h4>Toy example #2</h4>
<p>Consider a simple non-square case. Let
<span class="math-container">$$A \equiv \begin{pmatrix}1&1\\1 & \omega \\ 1 & \omega^2\end{pmatrix},
\qquad \omega\equiv e^{2\pi i/3}.$$</span>
This is again almost trivial because <span class="math-container">$A$</span> is an isometry, up to a constant. Still, we can write its SVD as
<span class="math-container">$$A =
\underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\1&\omega\\1&\omega^2\end{pmatrix}}_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{\equiv V^\dagger}.
$$</span>
Notice that now <span class="math-container">$U$</span> is an isometry, but not a unitary, while <span class="math-container">$D>0$</span> is still square. As per our results above, <em>any</em> SVD will have the form <span class="math-container">$A=\tilde U D \tilde V^\dagger$</span> with <span class="math-container">$\tilde V=V W$</span> for some unitary <span class="math-container">$W$</span>.</p>
<p>For example, taking <span class="math-container">$W=\begin{pmatrix}0&1\\1&0\end{pmatrix}$</span> (any <span class="math-container">$2\times2$</span> unitary works here, since <span class="math-container">$D\propto I$</span>), we get the alternative SVD <span class="math-container">$A=\tilde U D \tilde V^\dagger$</span> with <span class="math-container">$\tilde V=VW=W$</span> and <span class="math-container">$\tilde U= UW=\frac1{\sqrt3}\begin{pmatrix}1&\omega&\omega^2\\1&1&1\end{pmatrix}^T$</span>, that is,
<span class="math-container">$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\\omega&1\\\omega^2&1\end{pmatrix}}_{\equiv \tilde U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}0&1\\1&0\end{pmatrix}}_{\equiv \tilde V^\dagger}.$$</span></p>
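<p>A short numerical check of this example (my own sketch) that <span class="math-container">$A/\sqrt3$</span> is an isometry and that, with <span class="math-container">$D=\sqrt3\, I$</span>, any unitary rotation of the columns gives an equally valid SVD:</p>

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.array([[1, 1], [1, w], [1, w**2]])

# A / sqrt(3) has orthonormal columns, so U = A/sqrt(3), D = sqrt(3) I, V = I
# is one valid SVD of A.
U = A / np.sqrt(3)
D = np.sqrt(3) * np.eye(2)
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is an isometry
print(np.allclose(U @ D, A))                    # True: U D V^dagger = A with V = I

# Because D is proportional to the identity, any 2x2 unitary W gives another SVD.
W = np.array([[0, 1], [1, 0]])                  # e.g. swap the two columns
print(np.allclose((U @ W) @ D @ W.conj().T, A)) # True
```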
<h4>Toy example #3</h4>
<p>Let's do an example with non-degenerate singular values. Let
<span class="math-container">$$A = \begin{pmatrix}1& 2 \\ 1 & 2\omega \\ 1 & 2\omega^2\end{pmatrix},
\qquad \omega\equiv e^{2\pi i/3}.$$</span>
This time the singular values are <span class="math-container">$\sqrt3$</span> and <span class="math-container">$2\sqrt3$</span>.
One SVD is easily derived as
<span class="math-container">$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}1&1\\1&\omega\\1&\omega^2\end{pmatrix}}_{\equiv U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&2\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{\equiv V^\dagger}.$$</span>
However, in this case there is much less freedom in choosing other SVDs, because these must correspond to <span class="math-container">$\tilde V=VW$</span> where <span class="math-container">$W$</span> only mixes columns of <span class="math-container">$V$</span> corresponding to the same values of <span class="math-container">$D$</span>. In this case <span class="math-container">$D$</span> is non-degenerate, thus <span class="math-container">$W$</span> must be diagonal, and therefore the full set of SVDs must correspond to
<span class="math-container">$W=\begin{pmatrix}e^{i\alpha}&0\\0&e^{i\beta}\end{pmatrix}$</span>, that is,
<span class="math-container">$$A = \underbrace{\frac{1}{\sqrt3}\begin{pmatrix}e^{-i\alpha}&e^{-i\beta}\\e^{-i\alpha}&\omega e^{-i\beta}\\e^{-i\alpha}&\omega^2 e^{-i\beta}\end{pmatrix}}_{\equiv \tilde U}
\underbrace{\begin{pmatrix}\sqrt3 & 0 \\0&2\sqrt3\end{pmatrix}}_{\equiv D}
\underbrace{\begin{pmatrix}e^{i\alpha}&0\\0&e^{i\beta}\end{pmatrix}}_{\equiv \tilde V^\dagger}.$$</span>
All SVDs will look like this, for some <span class="math-container">$\alpha,\beta\in\mathbb{R}$</span>.
Note, however, that by permuting the elements of <span class="math-container">$D$</span> we obtain SVDs which look different, although they are ultimately equivalent to the above.</p>
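<p>Again, a quick numerical confirmation (an illustration of mine; the phases below are arbitrary) that exactly this phase freedom survives when the singular values are distinct:</p>

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A = np.array([[1, 2], [1, 2 * w], [1, 2 * w**2]])

U = np.array([[1, 1], [1, w], [1, w**2]]) / np.sqrt(3)
D = np.diag([np.sqrt(3), 2 * np.sqrt(3)])
print(np.allclose(U @ D, A))          # True: the SVD above, with V = I

# Rotate by arbitrary phases: W = diag(e^{i alpha}, e^{i beta}) commutes with D.
alpha, beta = 0.4, -1.1
W = np.diag([np.exp(1j * alpha), np.exp(1j * beta)])
U2 = U @ W.conj()                     # columns pick up e^{-i alpha}, e^{-i beta}
print(np.allclose(U2 @ D @ W, A))     # True: matches tilde-U, D, tilde-V^dagger above
```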
<h2>In dyadic notation</h2>
<h4>SVD in dyadic notation removes "trivial" redundancies</h4>
<p>The SVD of an arbitrary matrix <span class="math-container">$A$</span> can be written in <a href="https://en.wikipedia.org/wiki/Dyadics" rel="nofollow noreferrer">dyadic notation</a> as
<span class="math-container">$$A=\sum_k s_k u_k v_k^*,\tag A$$</span>
where <span class="math-container">$s_k\ge0$</span> are the singular values, and <span class="math-container">$\{u_k\}_k$</span> and <span class="math-container">$\{v_k\}_k$</span> are orthonormal sets of vectors spanning <span class="math-container">$\mathrm{im}(A)$</span> and <span class="math-container">$\ker(A)^\perp$</span>, respectively.
The connection between this and the more standard way of writing the SVD of <span class="math-container">$A$</span> as <span class="math-container">$A=UDV^\dagger$</span> is that <span class="math-container">$u_k$</span> is the <span class="math-container">$k$</span>-th column of <span class="math-container">$U$</span>, and <span class="math-container">$v_k$</span> is the <span class="math-container">$k$</span>-th column of <span class="math-container">$V$</span>.</p>
<h4>Global phase redundancies are always present</h4>
<p>If <span class="math-container">$A$</span> is nondegenerate, the only freedom in the choice of vectors <span class="math-container">$u_k,v_k$</span>
is their global phase: replacing <span class="math-container">$u_k\mapsto e^{i\phi_k}u_k$</span> and <span class="math-container">$v_k\mapsto e^{i\phi_k}v_k$</span> does not affect <span class="math-container">$A$</span>.</p>
<h4>Degeneracy gives more freedom</h4>
<p>On the other hand, when there are repeated singular values, there is additional freedom in the choice of <span class="math-container">$u_k,v_k$</span>, similarly to how there is more freedom in the choice of eigenvectors corresponding to degenerate eigenvalues.
More precisely, note that (A) implies
<span class="math-container">$$AA^\dagger=\sum_k s_k^2 \underbrace{u_k u_k^*}_{\equiv\mathbb P_{u_k}},
\qquad
A^\dagger A=\sum_k s_k^2 \mathbb P_{v_k}.$$</span>
This tells us that, whenever there are degenerate singular values, the corresponding set of principal components is defined up to a unitary rotation in the corresponding degenerate eigenspace.
In other words, the set of vectors <span class="math-container">$\{u_k\}$</span> in (A) can be chosen as any orthonormal basis of the eigenspace <span class="math-container">$\ker(AA^\dagger-s_k^2)$</span>, and similarly <span class="math-container">$\{v_k\}_k$</span> can be any basis of <span class="math-container">$\ker(A^\dagger A-s_k^2)$</span>.</p>
<p>However, note that a choice of <span class="math-container">$\{v_k\}_k$</span> determines <span class="math-container">$\{u_k\}$</span>, and vice-versa (otherwise <span class="math-container">$A$</span> wouldn't be well-defined, or injective outside its kernel).</p>
<h4>Summary</h4>
<p>A choice of <span class="math-container">$U$</span> uniquely determines <span class="math-container">$V$</span>, so we can restrict ourselves to reason about the freedom in the choice of <span class="math-container">$U$</span>. There are two main sources of redundancy:</p>
<ol>
<li>The vectors can be always scaled by a phase factor: <span class="math-container">$u_k\mapsto e^{i\phi_k}u_k$</span> and <span class="math-container">$v_k\mapsto e^{i\phi_k}v_k$</span>. In matrix notation, this corresponds to changing <span class="math-container">$U\mapsto U \Lambda$</span> and <span class="math-container">$V\mapsto V\Lambda$</span> for an arbitrary diagonal unitary matrix <span class="math-container">$\Lambda$</span>.</li>
<li>When there are "degenerate singular values" <span class="math-container">$s_k$</span> (that is, singular values corresponding to degenerate eigenvalues of <span class="math-container">$A^\dagger A$</span>), there is additional freedom in the choice of <span class="math-container">$U$</span>, which can be chosen as any matrix whose columns form a basis for the eigenspace <span class="math-container">$\ker(AA^\dagger-s_k^2)$</span>.</li>
</ol>
<p>Finally, we should note that the former point is included in the latter, which therefore encodes all of the freedom allowed in choosing the vectors <span class="math-container">$\{v_k\}$</span>. This is because multiplying the elements of an orthonormal basis by phases does not affect its being an orthonormal basis.</p>
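<p>The dyadic form (A) and the eigenvector statements above are easy to confirm numerically; the snippet below (my own illustration, on a random complex matrix) rebuilds <span class="math-container">$A$</span> from its dyads and checks the eigenvalue equations:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

U, s, Vh = np.linalg.svd(A, full_matrices=False)

# A = sum_k s_k u_k v_k^*  (note: row k of Vh is already v_k^*)
A_rebuilt = sum(s[k] * np.outer(U[:, k], Vh[k, :]) for k in range(len(s)))
print(np.allclose(A_rebuilt, A))  # True

# u_k, v_k are eigenvectors of A A^dagger and A^dagger A with eigenvalue s_k^2.
print(np.allclose(A @ A.conj().T @ U, U * s**2))                      # True
print(np.allclose(A.conj().T @ A @ Vh.conj().T, Vh.conj().T * s**2))  # True
```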
|
number-theory | <p>The Riemann $\zeta$ function plays a significant role in number theory and is defined by $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} \qquad \text{ for } \sigma > 1 \text{ and } s= \sigma + it$$</p>
<blockquote>
<p>The <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis" rel="noreferrer">Riemann hypothesis</a> asserts that all the non-trivial zeroes of the $\zeta$ function lie on the line $\text{Re}(s) = \frac{1}{2}$. </p>
</blockquote>
<p>My question is:</p>
<blockquote>
<p>Why are we interested in the zeroes of the $\zeta$ function? Does it give any information about something?</p>
</blockquote>
<p>What is the use of writing $$\zeta(s) = \prod_{p} \biggl(1-\frac{1}{p^s}\biggr)^{-1}$$</p>
| <p><strong>Short answer:</strong> Understanding the distribution of the prime numbers is directly related to understanding the zeros of the Riemann Zeta Function.**</p>
<p><strong>Long Answer:</strong> The prime counting function is defined as $\pi(x)=\sum_{p\leq x} 1,$ which counts the number of primes less than $x$. Usually we consider its weighted modification $$\psi(x)=\sum_{p^{m}\leq x}\log p$$ where we are also counting the prime powers. It is not hard to show that $$\pi(x)=\frac{\psi(x)}{\log x}\left(1+O\left(\frac{1}{\log x}\right)\right),$$ which means that these two functions differ by about a factor of $\log x$.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Prime_number_theorem" rel="noreferrer">prime number theorem</a> states that $\psi(x)\sim x$, but this is quite hard to show. It was first conjectured by Legendre in 1797, but took almost 100 years to prove, finally being resolved in 1896 by Hadamard and de la Vallée Poussin. In 1859 Riemann outlined a proof, and gave a remarkable identity which changed how people thought about counting primes. He showed that (more or less) $$\psi(x)=x-\sum_{\rho:\zeta(\rho)=0}\frac{x^{\rho}}{\rho}-\frac{\zeta^{'}(0)}{\zeta(0)},$$ where the sum is taken over all the zeros of the zeta function. ${}^{++}$</p>
<p>Notice that this is <em>an equality.</em> The left hand side is a step function, and on the right hand side, somehow, the zeros of the zeta function <em>conspire at exactly the prime numbers</em> to make that sum jump. (It is an infinite series whose convergence is not uniform) If you remember only 1 thing from this answer, make it the above explicit formula.</p>
<p><strong>An equivalence to RH:</strong> Current methods allow us to prove that $$\psi(x)=x+O\left(xe^{-c\sqrt{\log x}}\right).$$ This error term decreases faster then $\frac{x}{(\log x)^A}$ for any $A$, but increases faster then $x^{1-\delta}$ for any small $\delta>0$. In particular, proving that the error term was of the form $O\left(x^{1-\delta}\right)$ for some $\delta>0$ would be an enormous breakthrough. The Riemann Hypothesis is equivalent to showing the error term is like square root $x$, that is proving the statement $$\psi(x)=x+O\left(x^{\frac{1}{2}}\log^{2}x\right).$$ In other words, the Riemann Hypothesis is equivalent to improving the error term when counting the prime numbers.</p>
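<p>To get a feel for <span class="math-container">$\psi(x)\sim x$</span>, one can compute the Chebyshev function by brute force; the following is an illustrative script of mine, not part of the answer:</p>

```python
import math

def psi(x: int) -> float:
    """Chebyshev's psi(x): sum of log p over all prime powers p^m <= x."""
    total = 0.0
    for p in range(2, x + 1):
        if all(p % d for d in range(2, math.isqrt(p) + 1)):  # p is prime
            q = p
            while q <= x:                # contribute log p for p, p^2, p^3, ...
                total += math.log(p)
                q *= p
    return total

for x in (10**2, 10**3, 10**4):
    print(x, round(psi(x), 2), round(psi(x) / x, 4))  # the ratio tends to 1
```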
<p><strong>Remark:</strong> In your question you incorrectly state the Riemann Hypothesis, which says that <strong>all</strong> zeros have real part $\frac{1}{2}$. The fact that infinitely many zeros lie on the line was shown by Hardy in 1917, and in 1942 Selberg showed that a positive proportion lie on the line. In 1974 Levinson showed that this proportion was at least $\frac{1}{3}$, and Conrey 1989 improved this to $\frac{2}{5}$.</p>
<p>** Of course, there may be some people who are interested in the zeros of the zeta function for other reasons. Historically the prime numbers are what first motivated the study of the zeros.</p>
<p>${}^{++}$: Usually the trivial zeros will be separated out of the sum, but I do not make this distinction here. Also, Riemann's original paper states things in terms of $\Pi(x)$ and $\text{li}(x)$, the Riemann pi function and logarithmic integral, rather then $\psi(x)$. This is a very slight difference, and I use $\psi(x)$ above because it is easier and cleaner to do so.</p>
<p><strong>See also:</strong> <a href="https://math.stackexchange.com/questions/1379583/why-is-zeta1it-neq-0-equivalent-to-the-prime-number-theorem/1379690#1379690">Why is $\zeta(1+it) \neq 0$ equivalent to the prime number theorem?</a></p>
| <p>The initial interest in the zeros is their connection with the distribution of primes, which is often done via asymptotic statements about the <a href="http://en.wikipedia.org/wiki/Prime-counting_function" rel="noreferrer">prime counting function</a>. In analytic number theory, it is standard fare to have an arithmetic function defined by a summation formula, and then modify it into a form that is easier to manipulate and obtain results for, in such a way that the asymptotic results about the modified function can be translated into results about the original function very easily. This is certainly the case with <span class="math-container">$\pi(x)$</span>, which is why I bring this up. Most of the information that is relevant here can be found on the <a href="https://en.wikipedia.org/wiki/Explicit_formulae_(L-function)" rel="noreferrer">Explicit formula</a> article at Wikipedia, for explicit formulas for the <span class="math-container">$\pi(x)$</span> function using the zeros of the Riemann zeta function. Two key highlights:</p>
<blockquote>
<p><span class="math-container">$(1)$</span> <em>"This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their 'expected' positions."</em></p>
<p><span class="math-container">$(2)$</span> <em>"Roughly speaking, the explicit formula says the Fourier transform of the zeros of the zeta function is the set of prime powers plus some elementary factors."</em></p>
</blockquote>
<p>With the very basics of complex numbers we see that <span class="math-container">$x^\rho$</span>, as a function of <span class="math-container">$x$</span>, has a magnitude given by <span class="math-container">$x^{\Re (\rho)}$</span> and its argument by <span class="math-container">$\Im(\rho)\cdot\log x$</span>. The imaginary parts thus contribute oscillatory behavior to the explicit formulas, while the real parts say which imaginary parts dominate over others and by how much - this is some meaning behind the 'Fourier transform' description. Indeed, given the dominant term in an asymptote for <span class="math-container">$\pi$</span> we have roughly the primes' "expected positions" (we are taking some license in referring to positioning when we are actually speaking of distribution in the limit), and the outside terms will speak to how much <span class="math-container">$\pi$</span> <em>deviates</em> from the expected dominant term as we take <span class="math-container">$x$</span> higher and higher in value. If one of the real parts differed from the others, it would privilege some deviation over others, changing our view of the regularity in the primes' distribution.</p>
<p>Eventually it also became clear that more and more results in number theory - even very accessible results that belie how deep the Riemann Hypothesis really has come to be - were equivalent to or could only be proven on the assumption of RH. See for example <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis#Consequences_of_the_Riemann_hypothesis" rel="noreferrer">here</a> or <a href="https://mathoverflow.net/questions/17209/consequences-of-the-riemann-hypothesis">here</a> or <a href="http://aimath.org/pl/rhequivalences" rel="noreferrer">here</a>. I'm not sure if any truly comprehensive list of the consequences or equivalences actually exists!</p>
<p>Moreover it is clear now that RH is not an isolated phenomenon, and instead exists as a piece in a much bigger puzzle (at least as I see it). The <span class="math-container">$\zeta$</span> function is a trivial case of a <a href="http://en.wikipedia.org/wiki/Dirichlet_L-function" rel="noreferrer">Dirichlet <span class="math-container">$L$</span>-function</a> as well as a trivial case of a <a href="http://en.wikipedia.org/wiki/Dedekind_zeta_function" rel="noreferrer">Dedekind <span class="math-container">$\zeta$</span> function</a>, and there is respectively a Generalized Riemann Hypothesis (GRH) and Extended Riemann Hypothesis for these two more general classes of functions. There are <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis#Generalizations_and_analogs_of_the_Riemann_hypothesis" rel="noreferrer">numerous analogues</a> to the zeta function and RH too - many of these have already gained more ground or already had the analogous RH proven!</p>
<p>It is now wondered what the appropriate definition of an <span class="math-container">$L$</span>-function "should" be, that is, morally speaking - specifically it must have some analytic features and of course a functional equation involving a reflection, gamma function, weight, conductor etc. but the precise recipe we need to create a slick theory is not yet known. (Disclaimer: this paragraph comes from memory of reading something a long time ago that I cannot figure out how to find again to check. Derp.)</p>
<p>Finally, there is the spectral interpretation of the zeta zeros that has arisen. There is the <a href="http://en.wikipedia.org/wiki/Hilbert%E2%80%93P%C3%B3lya_conjecture" rel="noreferrer">Hilbert-Pólya conjecture</a>. As the Wikipedia entry describes it,</p>
<blockquote>
<p>In a letter to Andrew Odlyzko, dated January 3, 1982, George Pólya said that while he was in Göttingen around 1912 to 1914 he was asked by Edmund Landau for a physical reason that the Riemann hypothesis should be true, and suggested that this would be the case if the imaginary parts of the zeros of the Riemann zeta function corresponded to eigenvalues of an unbounded self adjoint operator.</p>
</blockquote>
<p>This has spurred quantum-mechanical approaches to the Riemann Hypothesis. Moreover, we now have serious empirical evidence of a connection between the zeta zeros and <a href="http://en.wikipedia.org/wiki/Random_matrix" rel="noreferrer">random matrix theory</a>, specifically that their pair-correlation matches that of Gaussian Unitary Ensembles (GUEs)...</p>
<blockquote>
<p>The year: 1972. The scene: Afternoon tea in Fuld Hall at the Institute for Advanced Study. The camera pans around the Common Room, passing by several Princetonians in tweeds and corduroys, then zooms in on Hugh Montgomery, boyish Midwestern number theorist with sideburns. He has just been introduced to Freeman Dyson, dapper British physicist.</p>
<p><strong>Dyson</strong>: So tell me, Montgomery, what have you been up to?<br />
<strong>Montgomery</strong>: Well, lately I've been looking into the distribution of the zeros of the Riemann zeta function.<br />
<strong>Dyson</strong>: Yes? And?<br />
<strong>Montgomery</strong>: It seems the two-point correlations go as... (turning to write on a nearby blackboard): <span class="math-container">$$1-\left(\frac{\sin\pi x}{\pi x}\right)^2$$</span><br />
<strong>Dyson</strong>: Extraordinary! Do you realize that's the pair-correlation function for the eigenvalues of a random Hermitian matrix?</p>
</blockquote>
<p>(Source: <a href="http://www.americanscientist.org/issues/pub/the-spectrum-of-riemannium" rel="noreferrer"><em>The Spectrum of Riemannium</em></a>.)</p>
<p>If so inclined one can see the empirical evidence in pretty pictures e.g. <a href="http://www.ams.org/journals/bull/1999-36-01/S0273-0979-99-00766-1/S0273-0979-99-00766-1.pdf" rel="noreferrer">here</a>.</p>
|
matrices | <blockquote>
<p>Show that the determinant of a matrix $A$ is equal to the product of its eigenvalues $\lambda_i$.</p>
</blockquote>
<p>So I'm having a tough time figuring this one out. I know that I have to work with the characteristic polynomial of the matrix $\det(A-\lambda I)$. But, when considering an $n \times n$ matrix, I do not know how to work out the proof. Should I just use the determinant formula for any $n \times n$ matrix? I'm guessing not, because that is quite complicated. Any insights would be great.</p>
| <p>Suppose that <span class="math-container">$\lambda_1, \ldots, \lambda_n$</span> are the eigenvalues of <span class="math-container">$A$</span>. Then the <span class="math-container">$\lambda$</span>s are also the roots of the characteristic polynomial, i.e.</p>
<p><span class="math-container">$$\begin{array}{rcl} \det (A-\lambda I)=p(\lambda)&=&(-1)^n (\lambda - \lambda_1 )(\lambda - \lambda_2)\cdots (\lambda - \lambda_n) \\ &=&(-1) (\lambda - \lambda_1 )(-1)(\lambda - \lambda_2)\cdots (-1)(\lambda - \lambda_n) \\ &=&(\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots (\lambda_n - \lambda)
\end{array}$$</span></p>
<p>The first equality follows from the factorization of a polynomial given its roots; the leading (highest degree) coefficient <span class="math-container">$(-1)^n$</span> can be obtained by expanding the determinant along the diagonal.</p>
<p>Now, by setting <span class="math-container">$\lambda$</span> to zero (simply because it is a variable) we get on the left side <span class="math-container">$\det(A)$</span>, and on the right side <span class="math-container">$\lambda_1 \lambda_2\cdots\lambda_n$</span>, that is, we indeed obtain the desired result</p>
<p><span class="math-container">$$ \det(A) = \lambda_1 \lambda_2\cdots\lambda_n$$</span></p>
<p>So the determinant of the matrix is equal to the product of its eigenvalues.</p>
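<p>A quick numerical sanity check of this identity (an added illustration, using an arbitrary random matrix):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 5))

# Eigenvalues are complex in general; their product has a negligible
# imaginary part and matches det(A).
eigvals = np.linalg.eigvals(A)
print(np.allclose(np.prod(eigvals), np.linalg.det(A)))  # True
```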
| <p>I am a beginning Linear Algebra learner and this is just my humble opinion. </p>
<p>One idea presented above is that </p>
<p>Suppose that $\lambda_1,\ldots, \lambda_n$ are eigenvalues of $A$. </p>
<p>Then the $\lambda$s are also the roots of the characteristic polynomial, i.e.</p>
<p>$$\det(A−\lambda I)=(\lambda_1-\lambda)(\lambda_2−\lambda)\cdots(\lambda_n−\lambda)$$.</p>
<p>Now, by setting $\lambda$ to zero (simply because it is a variable) we get on the left side $\det(A)$, and on the right side $\lambda_1\lambda_2\ldots \lambda_n$, that is, we indeed obtain the desired result</p>
<p>$$\det(A)=\lambda_1\lambda_2\ldots \lambda_n$$.</p>
<p>I don't think that this works generally, but only for the case when $\det(A) = 0$. </p>
<p>Because, when we write down the characteristic equation, we use the relation $\det(A - \lambda I) = 0$. Following the same logic, the only case where $\det(A - \lambda I) = \det(A) = 0$ is $\lambda = 0$.
The relationship $\det(A - \lambda I) = 0$ must be obeyed even for the special case $\lambda = 0$, which implies $\det(A) = 0$.</p>
<p><strong>UPDATED POST</strong></p>
<p>Here I propose a way to prove the theorem for the 2 by 2 case.
Let $A$ be a 2 by 2 matrix. </p>
<p>$$ A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{pmatrix}$$ </p>
<p>The idea is to use a certain property of determinants, </p>
<p>$$ \begin{vmatrix} a_{11} + b_{11} & a_{12} \\ a_{21} + b_{21} & a_{22}\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\\end{vmatrix} + \begin{vmatrix} b_{11} & a_{12}\\b_{21} & a_{22}\\\end{vmatrix}$$</p>
<p>Let $ \lambda_1$ and $\lambda_2$ be the 2 eigenvalues of the matrix $A$. (The eigenvalues can be distinct, or repeated, real or complex it doesn't matter.)</p>
<p>The two eigenvalues $\lambda_1$ and $\lambda_2$ must satisfy the following condition :</p>
<p>$$\det (A -I\lambda) = 0 $$
Where $\lambda$ is the eigenvalue of $A$.</p>
<p>Therefore,
$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = 0 $$</p>
<p>Therefore, using the property of determinants provided above, I will try to <em>decompose</em> the determinant into parts. </p>
<p>$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}= \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix}-\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix}$$</p>
<p>The final determinant can be further reduced. </p>
<p>$$
\begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} - \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix}
$$</p>
<p>Substituting the final determinant, we will have </p>
<p>$$
\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\\\end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\\\end{vmatrix} - \begin{vmatrix} a_{11} & a_{12} \\ 0 & \lambda \\\end{vmatrix} - \begin{vmatrix} \lambda & 0 \\ a_{21} & a_{22} \\\end{vmatrix} + \begin{vmatrix} \lambda & 0\\ 0 & \lambda\\\end{vmatrix} = 0
$$</p>
<p>In a polynomial
$$ a_{n}\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_{1}\lambda + a_{0} = 0$$
the product of the roots is, up to a sign, the constant coefficient $a_{0}$ divided by the leading coefficient $a_{n}$.</p>
<p>From the decomposed determinant, the only term which doesn't involve $\lambda$ would be the first term </p>
<p>$$
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\\end{vmatrix} = \det (A)
$$</p>
<p>Therefore, the product of the roots, i.e. the product of the eigenvalues of $A$, is equal to the determinant of $A$. </p>
<p>I am having difficulty generalizing this idea of proof to the $n$ by $n$ case though, as it is complex and time consuming for me. </p>
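<p>For what it's worth, the constant-term observation in this answer is easy to check numerically for a concrete 2 by 2 matrix; the script below is my own addition (the matrix is arbitrary):</p>

```python
import numpy as np

A = np.array([[3.0, 1.0], [2.0, 5.0]])

# np.poly(A) returns the coefficients of the monic characteristic polynomial
# det(lambda I - A) = lambda^2 - tr(A) lambda + det(A);
# for a 2x2 matrix its constant term is det(A) itself.
coeffs = np.poly(A)
print(np.isclose(coeffs[-1], np.linalg.det(A)))                      # True

# And det(A) equals the product of the eigenvalues (roots of that polynomial).
print(np.isclose(np.prod(np.linalg.eigvals(A)), np.linalg.det(A)))   # True
```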
|
combinatorics | <p>Let <span class="math-container">$n$</span> be greater than or equal to <span class="math-container">$1$</span>, and let <span class="math-container">$S$</span> be an <span class="math-container">$(n+1)$</span>-subset of <span class="math-container">$[2n]$</span>. Prove that there exist two numbers in <span class="math-container">$S$</span> such that one divides the other.</p>
| <p>HINT: Create a pigeonhole for each odd positive integer $2k+1<2n$, and put into it all numbers in $[2n]$ of the form $(2k+1)2^r$ for some $r\ge 0$.</p>
| <p>I thought it would be worthwhile to write out Brian's proof with more detail. </p>
<p>Let $A=\{1,2,\dots,2n\}$. Write $A=E\cup O$ where $E$ are the evens and $O$ are the odds. Then $|O|$, the size of $O$, is $n$. Now let $x\in A$. Then by the unique factorization of integers we can write $x=2^ab$ where $b$ is odd. The association $f:x\mapsto b$ therefore gives a well defined mapping $A\rightarrow O$. Since $|O|=n$, $|f(C)|\leq n$ for any subset $C\subseteq A$. Therefore, if $C\subseteq A$ has $n+1$ elements, there must be two elements $c_1,c_2\in C$ such that $f(c_1)=f(c_2)$. In other words $c_1=2^{a_1}b$ and $c_2=2^{a_2}b$. So if $a_1<a_2$ then $c_1$ divides $c_2$. Otherwise $a_1>a_2$ and $c_2$ divides $c_1$. Q.E.D.</p>
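<p>The pigeonhole argument can also be confirmed by brute force for small $n$; this is an illustrative script of mine (the function names are my own):</p>

```python
from itertools import combinations

def odd_part(x: int) -> int:
    """The map f: strip all factors of 2, leaving the odd part of x."""
    while x % 2 == 0:
        x //= 2
    return x

def has_dividing_pair(S) -> bool:
    return any(b % a == 0 for a, b in combinations(sorted(S), 2))

for n in range(1, 8):
    A = range(1, 2 * n + 1)
    for S in combinations(A, n + 1):
        # pigeonhole: n+1 elements, only n possible odd parts ...
        assert len({odd_part(x) for x in S}) <= n
        # ... and a shared odd part forces a dividing pair
        assert has_dividing_pair(S)
print("verified for n = 1..7")
```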
|
logic | <p>Me and my friend were arguing over this "fact" that we all know and hold dear. However, I do know that <span class="math-container">$1+1=2$</span> is an axiom. That is why I beg to differ. Neither of us have the required mathematical knowledge to convince each other.</p>
<p>And that is why, we decided to turn to Math Stackexchange for help.</p>
<p>What would be stack's opinion?</p>
| <p>It seems that you and your friend lack the mathematical knowledge to handle this delicate point. What is a proof? What is an axiom? What are $1,+,2,=$?</p>
<p>Well, let me try and be concise about things.</p>
<ul>
<li><p>A proof is a short sequence of deductions from axioms and assumptions, where at every step we deduce information from our axioms, our assumptions and previously deduced sentences.</p></li>
<li><p>An axiom is simply an assumption.</p></li>
<li><p>$1,+,2,=$ are just letters and symbols. We usually associate $=$ with equality; that is, two things are equal if and only if they are the same thing. As for $1,2,+$ we have a natural understanding of what they are, but it is important to remember those are just letters which can be used elsewhere (and they are used elsewhere, often).</p></li>
</ul>
<p>You want to prove to your friend that $1+1=2$, where those symbols are interpreted as they are naturally perceived. $1$ is the number of hands attached to a healthy arm of a human being; $2$ is the number of arms attached to a healthy human being; and $+$ is the natural sense of addition. </p>
<p>From the above, what you want to show, mathematically, is that if you are a healthy human being then you have exactly two hands.</p>
<p>But in mathematics we don't talk about hands and arms. We talk about mathematical objects. We need a suitable framework, and we need axioms to define the properties of these objects. For the sake of the natural numbers which include $1,2,+$ and so on, we can use the <strong>Peano Axioms</strong> (PA). These axioms are commonly accepted as the definition of the natural numbers in mathematics, so it makes sense to choose them.</p>
<p>I don't want to give a full exposition of PA, so I will only use the part I need from the axioms, the one discussing addition. We have three primary symbols in the language: $0, S, +$. And our axioms are:</p>
<ol>
<li>For every $x$ and for every $y$, $S(x)=S(y)$ if and only if $x=y$.</li>
<li>For every $x$ either $x=0$ or there is some $y$ such that $x=S(y)$.</li>
<li>There is no $x$ such that $S(x)=0$.</li>
<li>For every $x$ and for every $y$, $x+y=y+x$.</li>
<li>For every $x$, $x+0=x$.</li>
<li>For every $x$ and for every $y$, $x+S(y)=S(x+y)$.</li>
</ol>
<p>These axioms tell us that $S(x)$ is to be thought of as $x+1$ (the successor of $x$), and they tell us that addition is commutative and what relations it bears with the successor function.</p>
<p>Now we need to define what are $1$ and $2$. Well, $1$ is a shorthand for $S(0)$ and $2$ is a shorthand for $S(1)$, or $S(S(0))$.</p>
<p><strong>Finally!</strong> We can write a proof that $1+1=2$:</p>
<blockquote>
<ol>
<li>$S(0)+S(0)=S(S(0)+0)$ (by axiom 6).</li>
<li>$S(0)+0 = S(0)$ (by axiom 5).</li>
<li>$S(S(0)+0) = S(S(0))$ (by the second deduction and axiom 1).</li>
<li>$S(0)+S(0) = S(S(0))$ (from the first and third deductions).</li>
</ol>
</blockquote>
<p>And that is what we wanted to prove.</p>
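<p>For fun, this little proof can be mechanized; below is a toy encoding of mine (numbers as nested tuples, with $0=()$ and $S(x)=(x,)$), using only axioms 5 and 6:</p>

```python
# Natural numbers as nested tuples: 0 is the empty tuple, S(x) wraps x.
ZERO = ()

def S(x):
    return (x,)

def add(x, y):
    if y == ZERO:           # axiom 5: x + 0 = x
        return x
    return S(add(x, y[0]))  # axiom 6: x + S(y') = S(x + y')

ONE = S(ZERO)
TWO = S(ONE)
print(add(ONE, ONE) == TWO)  # True: 1 + 1 = 2 in this encoding
```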
<hr>
<p>Note that the context is quite important. We are free to define the symbols to mean whatever it is we want them to mean. We can easily define a new context, and a new framework in which $1+1\neq 2$. Much like we can invent a whole new language in which <em>Bye</em> is a word for greeting people when you meet them, and <em>Hi</em> is a word for greeting people as they leave.</p>
<p>To see that $1+1\neq2$ in <em>some</em> context, simply define the following axioms:</p>
<ol>
<li>$1\neq 2$</li>
<li>For every $x$ and for every $y$, $x+y=x$.</li>
</ol>
<p>Now we can write a proof that $1+1\neq 2$:</p>
<ol>
<li>$1+1=1$ (axiom 2 applied for $x=1$).</li>
<li>$1\neq 2$ (axiom 1).</li>
<li>$1+1\neq 2$ (from the first and second deductions).</li>
</ol>
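The same encoding makes the alternative context concrete (again my own sketch, not from the original answer): with the degenerate axiom $x+y=x$ in place of the usual recursion, the computation yields $1+1=1\neq 2$.

```python
def S(x):
    """Successor constructor, as before."""
    return (x,)

def add(x, y):
    """Degenerate addition from axiom 2 of the alternative system: x + y = x."""
    return x

ONE = S(None)   # 1 := S(0)
TWO = S(ONE)    # 2 := S(1)

assert add(ONE, ONE) == ONE   # 1 + 1 = 1
assert add(ONE, ONE) != TWO   # hence 1 + 1 != 2, since 1 != 2
```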
<hr>
<p><strong>If you read this far, you might also be interested to read these:</strong></p>
<ol>
<li><a href="https://math.stackexchange.com/questions/95069/how-would-one-be-able-to-prove-mathematically-that-11-2/">How would one be able to prove mathematically that $1+1 = 2$?</a></li>
<li><a href="https://math.stackexchange.com/questions/190690/what-is-the-basis-for-a-proof/">What is the basis for a proof?</a></li>
<li><a href="https://math.stackexchange.com/questions/182303/how-is-a-system-of-axioms-different-from-a-system-of-beliefs">How is a system of axioms different from a system of beliefs?</a></li>
</ol>
<hr>
<p>Those interested in pushing this question back further than Asaf Karagila did (well past logic and into the morass of philosophy) may be interested in the following comments that were written in 1860 (full reference below). Also, although Asaf's treatment here avoids this, there are certain issues when defining addition of natural numbers in terms of the successor operation that are often overlooked. See my <a href="http://mathforum.org/kb/message.jspa?messageID=7614303" rel="noreferrer">22 November 2011</a> and <a href="http://mathforum.org/kb/message.jspa?messageID=7617938" rel="noreferrer">28 November 2011</a> posts in the Math Forum group math-teach.</p>
<blockquote>
<p><span class="math-container">$[\ldots]$</span> consider this case. There is a world in which, whenever two pairs of things are either placed in proximity or are contemplated together, a fifth thing is immediately created and brought within the contemplation of the mind engaged in putting two and two together. This is surely neither inconceivable, for we can readily conceive the result by thinking of common puzzle tricks, nor can it be said to be beyond the power of Omnipotence, yet in such a world surely two and two would make five. That is, the result to the mind of contemplating two two’s would be to count five. This shows that it is not inconceivable that two and two might make five; but, on the other hand, it is perfectly easy to see why in this world we are absolutely certain that two and two make four. There is probably not an instant of our lives in which we are not experiencing the fact. We see it whenever we count four books, four tables or chairs, four men in the street, or the four corners of a paving stone, and we feel more sure of it than of the rising of the sun to-morrow, because our experience upon the subject is so much wider and applies to such an infinitely greater number of cases.</p>
</blockquote>
<p>The above passage comes from:</p>
<p><a href="http://en.wikipedia.org/wiki/James_Fitzjames_Stephen" rel="noreferrer">James Fitzjames Stephen</a> (1829-1894), Review of <a href="http://en.wikipedia.org/wiki/Henry_Longueville_Mansel" rel="noreferrer">Henry Longueville Mansel</a> (1820-1871), <a href="http://books.google.com/books?id=59UNAAAAYAAJ" rel="noreferrer"><strong>Metaphysics; or, the Philosophy of Consciousness, Phenomenal and Real</strong></a> (1860), <strong>The Saturday Review</strong> 9 #244 (30 June 1860), pp. 840-842. [see <a href="http://books.google.com/books?id=yVwwAQAAMAAJ&pg=PA842" rel="noreferrer">page 842</a>]</p>
<p>Stephen’s review of Mansel's book is reprinted on pp. 320-335 of Stephen's 1862 book <a href="http://books.google.com/books?id=RssBAAAAQAAJ" rel="noreferrer"><strong>Essays</strong></a>, where the quote above can be found on <a href="http://books.google.com/books?id=RssBAAAAQAAJ&pg=PA333" rel="noreferrer">page 333</a>.</p>
<p><em>(ADDED 2 YEARS LATER)</em> Because my answer continues to receive sporadic interest and because I came across something this weekend related to it, I thought I would extend my answer by adding a couple of items.</p>
<p>The first new item, <strong>[A]</strong>, is an excerpt from a 1945 paper by Charles Edward Whitmore. I came across Whitmore's paper several years ago when I was looking through all the volumes of the journal <strong>Journal of the History of Ideas</strong> at a nearby university library. Incidentally, Whitmore's paper is where I learned about speculations of James Fitzjames Stephen that are given above. The second new item, <strong>[B]</strong>, is an excerpt from an essay by Augustus De Morgan that I read this last weekend. De Morgan's essay is item <strong>[15]</strong> in my answer to the History of Science and Math StackExchange question <a href="https://hsm.stackexchange.com/questions/451/did-galileos-writings-on-infinity-influence-cantor">Did Galileo's writings on infinity influence Cantor?</a>, and his essay is also mentioned in item <strong>[8]</strong>. I've come across references to De Morgan's essay from time to time over the years, but I've never read it because I never bothered trying to look it up in a university library. However, when I found to my surprise (but I really shouldn't have been surprised) that a digital copy of the essay was freely available on the internet when I searched for it about a week ago, I made a print copy, which I then read through when I had some time (this last weekend).</p>
<p><strong>[A]</strong> Charles Edward Whitmore (1887-1970), <a href="http://www.jstor.org/stable/2707061" rel="noreferrer"><em>Mill and mathematics: An historical note</em></a>, <strong>Journal of the History of Ideas</strong> 6 #1 (January 1945), 109-112. MR 6,141n; Zbl 60.01622</p>
<blockquote>
<p><strong>(first paragraph of the paper, on p. 109)</strong> In various philosophical works one encounters the statement that J. S. Mill somewhere asserted that two and two might conceivably make five. Thus, Professor Lewis says<span class="math-container">$^1$</span> that Mill "asked us to suppose a demon sufficiently powerful and maleficent so that every time two things were brought together with two other things, this demon should always introduce a fifth"; but he gives no specific reference. <strong>{{footnote:</strong> <span class="math-container">$^1$</span>C. I. Lewis, <em>Mind and the World Order</em> (1929), 250.<strong>}}</strong> C. S. Peirce<span class="math-container">$^2$</span> puts it in the form, "when two things were put together a third should spring up," calling it a doctrine usually attributed to Mill. <strong>{{footnote:</strong> <span class="math-container">$^2$</span><em>Collected Papers</em>, IV, 91 (dated 1893). The editors supply a reference to <em>Logic</em>, II, vi, 3.<strong>}}</strong> Albert Thibaudet<span class="math-container">$^3$</span> ascribes to "a Scottish philosopher cited by Mill" the doctrine that the addition of two quantities might lead to the production of a third. <strong>{{footnote:</strong> <span class="math-container">$^3$</span>Introduction to <em>Les Idées de Charles Maurras</em> (1920), 7.<strong>}}</strong> Again, Professor Laird remarks<span class="math-container">$^4$</span> that "Mill suggested, we remember, that two and two might not make four in some remote part of the stellar universe," referring to <em>Logic</em> III, xxi, 4 and II, vi, 2. <strong>{{footnote:</strong> <span class="math-container">$^4$</span>John Laird, Knowledge, Belief, and Opinion (1930), 238.<strong>}}</strong> These instances, somewhat casually collected, suggest that there is some confusion in the situation.</p>
<p><strong>(from pp. 109-111)</strong> Moreover, the notion that two and two should ["could" intended?] make five is entirely opposed to the general doctrine of the <em>Logic</em>. <span class="math-container">$[\cdots]$</span> Nevertheless, though these views stand in the final edition of the <em>Logic</em>, it is true that Mill did, in the interval, contrive to disallow them. After reading through the works of Sir William Hamilton three times, he delivered himself of a massive Examination of that philosopher, in the course of which he reverses his position--but at the suggestion of another thinker. In chapter VI he falls back on the inseparable associations generated by uniform experience as compelling us to conceive two and two as four, so that "we should probably have no difficulty in putting together the two ideas supposed to be incompatible, if our experience had not first inseparably associated one of them with the contradictory of the other." To this he adds, "That the reverse of the most familiar principles of arithmetic and geometry might have been made conceivable even to our present mental faculties, if those faculties had coexisted with a totally different constitution of external nature, is ingeniously shown in the concluding paper of a recent volume, anonymous, but of known authorship, Essays, by a Barrister." The author of the work in question was James Fitzjames Stephen, who in 1862 had brought together various papers which had appeared in <em>Saturday Review</em> during some three previous years. Some of them dealt with philosophy, and it is from a review of Mansel's <em>Metaphysics</em> that Mill proceeds to quote in support of his new doctrine <span class="math-container">$[\cdots]$</span></p>
<p><strong>Note:</strong> On p. 111 Whitmore argues against Mill's and Stephen's empirical viewpoint of "two plus two equals four". Whitmore's arguments are not very convincing to me.</p>
<p><strong>(from p. 112)</strong> Mill, then, did not originate the idea, but adopted it from Stephen, in the form that two and two might make five to our present faculties, if external nature were differently constituted. He did not assign it to some remote part of the universe, nor did he call in the activity of some maleficent demon; neither did he say that one and one might make three. He did not explore its implications, or inquire how it might be reconciled with what he had said in other places; but at least he is entitled to a definite statement of what he did say. I confess that I am somewhat puzzled at the different forms in which it has been quoted, and at the irrelevant details which have been added.</p>
</blockquote>
<p><strong>[B]</strong> Augustus De Morgan (1806-1871), <em>On infinity; and on the sign of equality</em>, <strong>Transactions of the Cambridge Philosophical Society</strong> 11 Part I (1871), 145-189.</p>
<blockquote>
<p><a href="http://catalog.hathitrust.org/Record/000125921" rel="noreferrer">Published separately as a booklet</a> by Cambridge University Press in 1865 (same title; i + 45 pages). The following excerpt is from the version published in 1865.</p>
<p><strong>(footnote 1 on p. 14)</strong> We are apt to pronounce that the admirable <em>pre-established harmony</em> which exists between the subjective and objective is a necessary property of mind. It may, or may not, be so. Can we not grant to omnipotence the power to fashion a mind of which the primary counting is by twos, <span class="math-container">$0,$</span> <span class="math-container">$2,$</span> <span class="math-container">$4,$</span> <span class="math-container">$6,$</span> &c.; a mind which always finds its first indicative notion in <em>this and that</em>, and only with effort separates <em>this</em> from <em>that</em>. I cannot invent the fundamental forms of language for this mind, and so am obliged to make it contradict its own nature by using our terms. The attempt to think of such things helps towards the habit of distinguishing the subjective and objective.</p>
<p><strong>Note:</strong> Those interested in such speculations will also want to look at De Morgan's lengthy footnote on p. 20.</p>
</blockquote>
<p><em>(ADDED 6 YEARS LATER)</em> I recently read Ian Stewart's 2006 book <a href="https://rads.stackoverflow.com/amzn/click/com/0465082319" rel="nofollow noreferrer"><strong>Letters to a Young Mathematician</strong></a> and in this book there is a passage (see below) that I think is worth including here.</p>
<blockquote>
<p><strong>(from pp. 30-31)</strong> I think human math is more closely linked to our particular physiology, experiences, and psychological preferences than we imagine. It is parochial, not universal. Geometry's points and lines may seem the natural basis for a theory of shape, but they are also the features into which our visual system happens to dissect the world. An alien visual system might find light and shade primary, or motion and stasis, or frequency of vibration. An alien brain might find smell, or embarrassment, but not shape, to be fundamental to its perception of the world. And while discrete numbers like <span class="math-container">$1,$</span> <span class="math-container">$2,$</span> <span class="math-container">$3,$</span> seem universal to us, they trace back to our tendency to assemble similar things, such as sheep, and consider them property: has one of <em>my</em> sheep been stolen? Arithmetic seems to have originated through two things: the timing of the seasons and commerce. But what of the blimp creatures of distant Poseidon, a hypothetical gas giant like Jupiter, whose world is a constant flux of turbulent winds, and who have no sense of individual ownership? Before they could count up to three, whatever they were counting would have blown away on the ammonia breeze. They would, however, have a far better understanding than we do of the math of turbulent fluid flow.</p>
</blockquote>