tag | question_body | accepted_answer | second_answer
---|---|---|---|
game-theory | <p><strong>Some basic background</strong></p>
<p>The Kalman filter is a (linear) state estimation algorithm that presumes some sort of uncertainty (ideally Gaussian) in the state observations of the dynamical system. The Kalman filter has been extended to nonlinear systems, to constrained settings, to cases where direct state observation is impossible, etc. As one should know, the algorithm essentially performs a prediction step based on prior state knowledge, compares that prediction to measurements, and updates the knowledge of the state based on a state covariance estimate that is also updated at every step.</p>
<p>In short, we have a state-space vector usually represented by a vector in $\mathbb{R}^n$. This vector is operated on by a plant matrix, added to a control term (which is operated on by a control matrix), operated on by a covariance matrix, etc. There are usually not intrinsic constraints on the state-space vector beyond those which may be embedded in the description of the dynamical system.</p>
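<p>(For concreteness, here is how one predict/update cycle might look in code. This is only a sketch of the standard linear filter; the function name is mine, and the constant-scalar usage below picks arbitrary 1-D matrices and noise levels purely for illustration.)</p>

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: state covariance, z: new measurement,
    F: plant matrix, H: observation matrix,
    Q: process noise covariance, R: measurement noise covariance."""
    # Predict: propagate the state and its covariance through the plant.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weigh the prediction against the measurement.
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For example, filtering noisy measurements of a constant scalar ($F = H = 1$) drives the covariance steadily toward zero while the estimate settles on the true value.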
<p><strong>My question</strong></p>
<p>Can we describe the Kalman filter using the language of category theory? In other words, can we generalize the algebra of what is happening with the Kalman filter for a dynamical system to develop Kalman-like estimators for other processes with vector-like components derived from some known (or partially known) model?</p>
<p><strong>A motivating example</strong></p>
<p>A repeated matrix game involves two or more players choosing strategies according to some payoff function -- which is a function of the game's state space and each player's action -- typically with some notion of equilibrium (e.g. Nash equilibrium). A repeated game with a finite action space implies that strategies are discrete probability distributions representable by a vector in $\mathbb{R}^N$ such that every element of the strategy vector is non-negative and the vector sums to unity.</p>
<p>This is a more restrictive case than a general dynamical system because we have constraints on the vector representation of the strategy. But, given input (i.e. actions taken), output (state measurement of the game's state space), and a payoff structure, it might make sense to attempt a Kalman-like estimator for the strategy vector in the game.</p>
<p>The challenge, of course, is developing a compatible notion of covariance. It's not clear what covariance would mean in relation to probability distributions, since we don't treat the quantity $P(\textrm{action} = \textrm{index})$ as a random variable. But if we had a category-theoretic view of the Kalman filter, could we derive a similar structure based on its algebraic properties, in such a way that we can guarantee that our estimate of the opponent's strategy vector converges in some way?</p>
<p><strong>Does what I'm asking even make sense?</strong></p>
| <p>The most direct approach, already mentioned by Chris Culter,
is to use the fact that Kalman filters are a kind of graphical model and look at categorical treatments of graphical models.
This is a rather active area, particularly among people interested in quantum computation, quantum information and quantum foundations in general.
Graphical models are usually treated as <a href="http://en.wikipedia.org/wiki/String_diagram" rel="noreferrer">string</a>/<a href="http://research.microsoft.com/apps/pubs/default.aspx?id=79791" rel="noreferrer">tensor</a>/<a href="http://en.wikipedia.org/wiki/Trace_diagram" rel="noreferrer">trace</a> diagrams, a.k.a. <a href="http://en.wikipedia.org/wiki/Penrose_graphical_notation" rel="noreferrer">Penrose notation</a> or graphical languages, rather than directly as categories.
The main reference on the abstract categorical treatment of these kinds of diagrams is Peter Selinger, <a href="http://arxiv.org/abs/0908.3347" rel="noreferrer"><em>A survey of graphical languages for monoidal categories</em></a>.
There is also a very nice interdisciplinary <a href="http://arxiv.org/abs/0903.0340" rel="noreferrer">survey</a> by John Baez that covers this and a number of related areas.
People with publications in this area include Samson Abramsky, Bob Coecke, Jared Culbertson, Kirk Sturtz, Robert Spekkens, John Baez, Jacob Biamonte, Mike Stay, Bart Jacobs, David Spivak and Brendan Fong, whose thesis was already provided by Chris Culter in his answer.</p>
<p>Categorical treatments of graphical models require dealing with probability, which is usually done with monads, often using <a href="http://ncatlab.org/nlab/show/Giry%27s+monad" rel="noreferrer">Giry's monad</a>.</p>
<p>You may also want to look at the work of the group at ETH Zurich that includes Patrick Vontobel & Hans-Andrea Loeliger. They work on a kind of graphical model they call Forney factor graphs. They do not use category theory themselves but they have papers on how to translate many kinds of computations, including Kalman filters, electrical circuits and EM algorithms to & from their graphical models.
This also provides a means to translate between Kalman filters and electrical circuits indirectly, so their work could be useful for transforming KFs into other things that already have categorical representations.</p>
<p>Here are links to some of the papers:</p>
<ul>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.173.113" rel="noreferrer">Kalman Filters, Factor Graphs, and Electrical Networks</a></li>
<li><a href="http://golem.ph.utexas.edu/category/2007/09/category_theory_in_machine_lea.html" rel="noreferrer">Category theory in Machine learning on nCategory-cafe</a></li>
<li><a href="http://arxiv.org/abs/1102.2368" rel="noreferrer">Picturing classical and quantum Bayesian inference</a></li>
<li><a href="http://arxiv.org/abs/1312.1445" rel="noreferrer">Bayesian machine learning via category theory</a></li>
<li><a href="http://arxiv.org/abs/1205.1488" rel="noreferrer">A categorical foundation for Bayesian probability</a></li>
<li><a href="http://archive.is/o/Q9yUZ/http://qubit.org/images/files/book.pdf" rel="noreferrer">Lectures on Penrose graphical notation for tensor network states</a></li>
<li><a href="http://arxiv.org/abs/1306.0831" rel="noreferrer">Towards a Categorical Account of Conditional Probability</a></li>
<li><a href="http://arxiv.org/abs/1307.6894" rel="noreferrer">The operad of temporal wiring diagrams: formalizing a graphical language for discrete-time processes</a></li>
</ul>
| <p>A Kalman filter performs inference on a model which is a simple Bayesian network, and a Bayesian network is a kind of graphical model, and graphical models kind of look like categories, so there may be a connection there. Unfortunately, I can't personally go beyond that vague description. Searching the web for "graphical model" + "category theory" turned up an interesting thesis: Brendan Fong (2012) "Causal Theories: A Categorical Perspective on Bayesian Networks". There's no attempted connection to game theory, but it might be worth a look!</p>
|
geometry | <p>I have a camera looking at a computer monitor from varying angles. Since the camera is a grid of pixels, I can define the bounds of the monitor in the camera image as:
<img src="https://i.sstatic.net/MGwUh.png" alt="alt text"></p>
<p>I hope that makes sense. What I want to do is come up with an algorithm to translate points within this shape to this:
<img src="https://i.sstatic.net/niJYY.png" alt="alt text"></p>
<p>I have points within the same domain as ABCD, as determined from the camera, but I need to draw these points in the domain of the monitor's resolution.</p>
<p>Does that makes sense? Any ideas?</p>
| <p>In general there is no affine transformation that maps an arbitrary quadrangle onto a rectangle. But there is (exactly one) projective transformation $T$ that maps a given quadrangle $(A, B, C, D)$ in the projective plane onto a given quadrangle $(A', B', C', D')$ in the same or another projective plane. This $T$ is ${\it collinear}$, i.e., it maps lines to lines. To do the calculations you have to introduce homogeneous coordinates $(x,y,z)$ such that $D=(0,0,1)$, $C=(1,0,1)$, $A=(0,1,1)$, $B=(1,1,1)$ and similarly for $A'$, $B'$, $C'$, $D'$. With respect to these coordinates the map $T$ is linear and its matrix is the identity matrix.</p>
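<p>Numerically, a standard way to find such a $T$ is to solve for the null vector of the linear system built from the four corner correspondences (the "direct linear transform"). The sketch below is mine, not the coordinate normalization described above, but it produces the same unique projective map.</p>

```python
import numpy as np

def homography(src, dst):
    """Projective transformation mapping the four 2-D points in `src`
    to the four points in `dst`, as a 3x3 matrix acting on
    homogeneous coordinates (x, y, 1)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence gives two linear constraints on the 9 entries
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the homography is the null vector of A (smallest singular vector)
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, p):
    """Apply H to a 2-D point and dehomogenize."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

For the question's setup, `src` would be the four monitor corners as seen by the camera and `dst` the monitor's resolution rectangle; every point inside the camera quadrilateral can then be pushed through `apply_homography`.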
| <p>The best solution I've found so far, on a forum lost in the sea of forums, is to decompose your problem like this:</p>
<p><img src="https://i.sstatic.net/a0CYT.png" alt="enter image description here"></p>
<p>Here, U and V represent coordinates within the quadrilateral (scaled between 0 and 1).</p>
<p>From $P0$, $P1$, $P2$ & $P3$ we can easily compute the normalized normal vectors $N0$, $N1$, $N2$ & $N3$.
Then, it's easy to see that:
$$u = \frac{dU0}{dU0 + dU1} = \frac{(P-P0) \cdot N0}{(P-P0) \cdot N0 + (P-P2) \cdot N2} \\
v = \frac{dV0}{dV0 + dV1} = \frac{(P-P0) \cdot N1}{(P-P0) \cdot N1 + (P-P3) \cdot N3}.$$</p>
<p>This parametrization works like a charm and is really easy to compute within a shader for example.
What's tricky is the reverse: finding $P(x,y)$ from $(u,v)$ so here is the result:</p>
<p>$$x = \frac{vKH \cdot uFC - vLI \cdot uEB}{vJG \cdot uEB - vKH \cdot uDA}, \\
y = \frac{vLI \cdot uDA - uFC \cdot vJG}{vJG \cdot uEB - vKH \cdot uDA},$$</p>
<p>where:
$$uDA = u \cdot (D-A), \quad uEB = u \cdot (E-B), \quad uFC = u \cdot (F-C), \\
vJG = v \cdot (J-G), \quad vKH = v \cdot (K-H), \quad vLI = v \cdot (L-I),$$</p>
<p>and finally:
$$A = N0_x, \qquad \qquad B = N0_y, \quad C = -P0 \cdot N0, \qquad \\
D = N0_x + N2_x, \quad E = N0_y + N2_y, \quad F = -P0 \cdot N0 - P2 \cdot N2, \\
G = N1_x, \qquad \qquad H = N1_y, \quad I = -P0 \cdot N1, \qquad \\
J = N1_x + N3_x, \quad K = N1_y + N3_y, \quad L = -P0 \cdot N1 - P3 \cdot N3.$$</p>
<p>I've been using this successfully for shadow mapping of a deformed camera frustum mapped into a regular square texture and I can assure you it's working great! :D</p>
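<p>A small sketch of the forward map for anyone who wants to try it. The answer above doesn't spell out which edge each normal belongs to, so I am assuming $N0$, $N2$ are the inward unit normals of the $u=0$ and $u=1$ edges and $N1$, $N3$ those of the $v=0$ and $v=1$ edges, with corners $P0\,P1\,P2\,P3$ listed counter-clockwise; all function names are mine.</p>

```python
import numpy as np

def unit_normal(a, b, inward):
    """Unit normal of segment a->b, oriented to point toward `inward`."""
    t = b - a
    n = np.array([-t[1], t[0]], dtype=float)
    n /= np.linalg.norm(n)
    if np.dot(inward - a, n) < 0:
        n = -n
    return n

def quad_uv(P, P0, P1, P2, P3):
    """(u, v) of point P inside the convex quadrilateral P0 P1 P2 P3
    (counter-clockwise), via the edge-distance ratios above."""
    c = (P0 + P1 + P2 + P3) / 4.0        # interior point used to orient normals
    N0 = unit_normal(P0, P3, c)          # edge where u = 0
    N2 = unit_normal(P1, P2, c)          # edge where u = 1
    N1 = unit_normal(P0, P1, c)          # edge where v = 0
    N3 = unit_normal(P3, P2, c)          # edge where v = 1
    u = np.dot(P - P0, N0) / (np.dot(P - P0, N0) + np.dot(P - P2, N2))
    v = np.dot(P - P0, N1) / (np.dot(P - P0, N1) + np.dot(P - P3, N3))
    return u, v
```

On the unit square this reduces to the identity parametrization, and on any convex quadrilateral the corners come out at $(0,0)$, $(1,0)$, $(1,1)$, $(0,1)$ as expected.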
|
probability | <p>Linearity of expectation is a very simple and "obvious" statement, but has many non-trivial applications, e.g., to analyze randomized algorithms (for instance, the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem#Calculating_the_expectation" rel="noreferrer">coupon collector's problem</a>), or in some proofs where dealing with non-independent random variables would otherwise make any calculation daunting.</p>
<p>What are the cleanest, most elegant, or striking applications of the linearity of expectation you've encountered?</p>
| <p><strong>Buffon's needle:</strong> rule a surface with parallel lines a distance <span class="math-container">$d$</span> apart. What is the probability that a randomly dropped needle of length <span class="math-container">$\ell\leq d$</span> crosses a line?</p>
<p>Consider dropping <em>any</em> (continuous) curve of length <span class="math-container">$\ell$</span> onto the surface. Imagine dividing up the curve into <span class="math-container">$N$</span> straight line segments, each of length <span class="math-container">$\ell/N$</span>. Let <span class="math-container">$X_i$</span> be the indicator for the <span class="math-container">$i$</span>-th segment crossing a line. Then if <span class="math-container">$X$</span> is the total number of times the curve crosses a line,
<span class="math-container">$$\mathbb E[X]=\mathbb E\left[\sum X_i\right]=\sum\mathbb E[X_i]=N\cdot\mathbb E[X_1].$$</span>
That is to say, the expected number of crossings is proportional to the length of the curve (and independent of the shape).</p>
<p>Now we need to fix the constant of proportionality. Take the curve to be a circle of diameter <span class="math-container">$d$</span>. Almost surely, this curve will cross a line twice. The length of the circle is <span class="math-container">$\pi d$</span>, so a curve of length <span class="math-container">$\ell$</span> crosses a line <span class="math-container">$\frac{2\ell}{\pi d}$</span> times.</p>
<p>Now observe that a straight needle of length <span class="math-container">$\ell\leq d$</span> can cross a line either <span class="math-container">$0$</span> or <span class="math-container">$1$</span> times. So the probability it crosses a line is precisely this expectation value <span class="math-container">$\frac{2\ell}{\pi d}$</span>.</p>
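<p>A quick Monte Carlo check of this value (the sampling scheme and names are mine): drop the needle by the distance of its center to the nearest line and its acute angle to the lines.</p>

```python
import math
import random

def buffon_cross_fraction(ell, d, trials, rng):
    """Fraction of randomly dropped needles of length ell (<= d) that
    cross one of the parallel lines spaced d apart."""
    crossings = 0
    for _ in range(trials):
        y = rng.uniform(0, d / 2)             # center's distance to nearest line
        theta = rng.uniform(0, math.pi / 2)   # acute angle with the lines
        if y <= (ell / 2) * math.sin(theta):  # half-extent reaches the line
            crossings += 1
    return crossings / trials
```

With $\ell = 1$ and $d = 2$ the fraction should hover around $\frac{2\ell}{\pi d} = \frac{1}{\pi} \approx 0.318$.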
| <p>As <a href="https://math.stackexchange.com/users/252071/lulu">lulu</a> mentioned in a comment, the fact that a uniformly random permutation <span class="math-container">$\pi\colon\{1,2,\dots,n\}\to\{1,2,\dots,n\}$</span> has in expectation <em>one</em> fixed point is a quite surprising statement, with a one-line proof.</p>
<p>Let <span class="math-container">$X$</span> be the number of fixed points of such a uniformly random <span class="math-container">$\pi$</span>. Then <span class="math-container">$X=\sum_{k=1}^n \mathbf{1}_{\pi(k)=k}$</span>, and thus
<span class="math-container">$$
\mathbb{E}[X] = \mathbb{E}\left[\sum_{k=1}^n \mathbf{1}_{\pi(k)=k}\right]
= \sum_{k=1}^n \mathbb{E}[\mathbf{1}_{\pi(k)=k}]
= \sum_{k=1}^n \mathbb{P}\{\pi(k)=k\}
= \sum_{k=1}^n \frac{1}{n}
= 1\,.
$$</span></p>
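<p>For small <span class="math-container">$n$</span> this can even be checked exactly by enumerating all <span class="math-container">$n!$</span> permutations (a brute-force sketch, my naming):</p>

```python
from itertools import permutations

def expected_fixed_points(n):
    """Exact average number of fixed points over all n! permutations of n."""
    perms = list(permutations(range(n)))
    total = sum(sum(1 for k in range(n) if p[k] == k) for p in perms)
    return total / len(perms)
```

The average is exactly <span class="math-container">$1$</span> for every <span class="math-container">$n \ge 1$</span>, just as the linearity argument predicts.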
|
probability | <p><strong>Context:</strong> I'm a high school student, who has only ever had an introductory treatment, if that, on combinatorics. As such, the extent to which I have seen combinatoric applications is limited to situations such as "If you need a group of 2 men and 3 women and you have 8 men and 9 women, how many possible ways can you pick the group" (They do get slightly more complicated, but are usually similar).</p>
<p><strong>Question:</strong> I apologise in advance for the naive question, but at an elementary level it seems as though combinatorics (and the ensuing probability that can make use of it), seems not overly rigorous. It doesn't seem as though you can "prove" that the number of arrangements you deemed is the correct number. What if you forget a case?</p>
<p>I know that you could argue that you've considered all cases, by asking if there is another case other than the ones you've considered. But that doesn't seem to be the way other areas of mathematics are done. If I wish to prove something, I couldn't just say "can you find a situation where the statement is incorrect," as we don't just assume it is correct by nature.</p>
<p><strong>Is combinatorics rigorous?</strong></p>
<p>Thanks</p>
| <p>Combinatorics certainly <em>can</em> be rigorous but is not usually presented that way because doing it that way is:
<ul>
<li>longer (obviously)</li>
<li>less clear because the rigour can obscure the key ideas</li>
<li>boring because once you know intuitively that something works you lose interest in a rigorous argument</li>
</ul>
<p>For example, compare the following two proofs that the binomial coefficient is $n!/k!(n - k)!$ where I will define the binomial coefficient as the number of $k$-element subsets of $\{1,\dots,n\}$.</p>
<hr>
<p><strong>Proof 1:</strong></p>
<p>Take a permutation $a_1,\dots, a_n$ of $n$. Separate this into $a_1,\dots,a_k$ and $a_{k + 1}, \dots, a_n$. We can permute $1,\dots, n$ in $n!$ ways and since we don't care about the order of $a_1,\dots,a_k$ or $a_{k + 1},\dots,a_n$ we divide by $k!(n - k)!$ for a total of $n!/k!(n - k)!$.</p>
<hr>
<p><strong>Proof 2:</strong></p>
<p>Let $B(n, k)$ denote the set of $k$-element subsets of $\{1,\dots,n\}$. We will show that there is a bijection</p>
<p>$$ S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}. $$</p>
<p>The map $\to$ is defined as follows. Let $\pi \in S_n$. Let $A = \{\pi(1),\pi(2),\dots,\pi(k)\}$ and let $B = \{\pi(k + 1),\dots, \pi(n)\}$. For each finite subset $C$ of $\{1,\dots,n\}$ with $m$ elements, fix a bijection $g_C : C \longleftrightarrow \{1,\dots,m\}$ by writing the elements of $C$ in increasing order $c_1 \le \dots \le c_m$ and mapping $c_i \longleftrightarrow i$.</p>
<p>Define maps $\pi_A$ and $\pi_B$ on $\{1,\dots,k\}$ and $\{1,\dots,n-k\}$ respectively by defining
$$ \pi_A(i) = g_A(\pi(i)) \text{ and } \pi_B(j) = g_B(\pi(k + j)). $$</p>
<p>We map the element $\pi \in S_n$ to the triple $(A, \pi_A, \pi_B) \in B(n, k) \times S_k \times S_{n - k}$.</p>
<p>Conversely, given a triple $(A, \sigma, \rho) \in B(n, k) \times S_k \times S_{n - k}$ we define $\pi \in S_n$ by
$$
\pi(i) =
\begin{cases}
g_A^{-1}(\sigma(i)) & \text{if } i \in \{1,\dots,k\} \\
g_B^{-1}(\rho(i-k)) & \text{if } i \in \{k + 1,\dots,n \}
\end{cases}
$$
where $B = \{1,\dots,n\} \setminus A$.</p>
<p>This defines a bijection $S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}$ and hence</p>
<p>$$ n! = {n \choose k} k!(n - k)! $$</p>
<p>as required.</p>
<hr>
<p><strong>Analysis:</strong></p>
<p>The first proof was two sentences whereas the second was some complicated mess. People with experience in combinatorics will understand the second argument is happening behind the scenes when reading the first argument. To them, the first argument is all the rigour necessary. For students it is useful to teach the second method a few times to build a level of comfort with bijective proofs. But if we tried to do all of combinatorics the second way it would take too long and there would be rioting.</p>
<hr>
<p><strong><em>Post Scriptum</em></strong></p>
<p>I will say that a lot of combinatorics textbooks and papers do tend to be written more in the line of the second argument (i.e. rigorously). Talks and lectures tend to be more in line with the first argument. However, higher level books and papers only prove "higher level results" in this way and will simply state results that are found in lower level sources. They will also move a lot faster and not explain each step exactly.</p>
<p>For example, I didn't show that the map above was a bijection, merely stated it. In a lower level book there will be a proof that the two maps compose to the identity in both ways. In a higher level book, you might just see an example of the bijection and a statement that there is a bijection in general with the assumption that the person reading through the example could construct a proof on their own.</p>
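<p>For what it's worth, the bijection of Proof 2 is easy to machine-check for small $n$: compute the forward map for every permutation and count distinct images (a sketch with 0-based indices and my own naming).</p>

```python
from itertools import permutations
from math import comb, factorial

def split(pi, k):
    """The forward map of Proof 2, 0-based: send a permutation pi of
    {0,...,n-1} to the triple (A, pi_A, pi_B)."""
    A = tuple(sorted(pi[:k]))
    B = tuple(sorted(pi[k:]))
    g_A = {c: i for i, c in enumerate(A)}   # g_A: A -> {0,...,k-1}, order-preserving
    g_B = {c: i for i, c in enumerate(B)}   # g_B: B -> {0,...,n-k-1}
    pi_A = tuple(g_A[c] for c in pi[:k])
    pi_B = tuple(g_B[c] for c in pi[k:])
    return A, pi_A, pi_B
```

If the image of $S_n$ has $n!$ distinct elements, the map is injective, and since the codomain has exactly $\binom{n}{k}\,k!\,(n-k)!$ elements, equality of the counts makes it a bijection.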
| <p>Essentially, all (nearly all?) of combinatorics comes down to two things, the multiplication rule and the addition rule.</p>
<blockquote>
<p>If <span class="math-container">$A,B$</span> are finite sets then <span class="math-container">$|A\times B|=|A|\,|B|$</span>.</p>
<p>If <span class="math-container">$A,B$</span> are finite sets and <span class="math-container">$A\cap B=\emptyset$</span>, then <span class="math-container">$|A\cup B|=|A|+|B|$</span>.</p>
</blockquote>
<p>These can be rigorously proved, and more sophisticated techniques can be rigorously derived from them, for example, the fact that the number of different <span class="math-container">$r$</span>-element subsets of an <span class="math-container">$n$</span>-element set is <span class="math-container">$C(n,r)$</span>.</p>
<p>So, this far, combinatorics is perfectly rigorous. IMHO, the point at which it may become (or may appear to become) less rigorous is when it moves from pure to applied mathematics. So, with your specific example, you have to assume (or justify if you can) that counting the number of choices of <span class="math-container">$2$</span> men and <span class="math-container">$3$</span> women from <span class="math-container">$8$</span> men and <span class="math-container">$9$</span> women is the same as evaluating
<span class="math-container">$$|\{A\subseteq M: |A|=2\}\times \{B\subseteq W: |B|=3\}|\ ,$$</span>
where <span class="math-container">$|M|=8$</span> and <span class="math-container">$|W|=9$</span>.</p>
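<p>The identification above can be checked by brute force in this small instance; the sketch below (names are mine) enumerates the product set directly, so the answer is literally the size of the set being counted.</p>

```python
from itertools import combinations

def committee_count(n_men, n_women, k_men, k_women):
    """Count committees by enumerating the product set directly."""
    men_choices = list(combinations(range(n_men), k_men))
    women_choices = list(combinations(range(n_women), k_women))
    # multiplication rule: |A x B| = |A| * |B|
    return len(men_choices) * len(women_choices)
```

For the question's example, choosing <span class="math-container">$2$</span> of <span class="math-container">$8$</span> men and <span class="math-container">$3$</span> of <span class="math-container">$9$</span> women gives <span class="math-container">$\binom{8}{2}\binom{9}{3} = 28 \cdot 84 = 2352$</span>.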
<p>It should not be surprising that the applied aspect of the topic requires some assumptions that may not be mathematically rigorous. The same is true in many other cases: for example, modelling a physical system by means of differential equations. Solving the equations once you have them can be done (more or less) rigorously, but deriving the equations in the first place usually cannot.</p>
<p>Hope this helps!</p>
|
game-theory | <p>Let's play a game of NIM, but with a catch!</p>
<p>We have exactly three piles of stones with sizes $a$, $b$ and $c$, all of which are different.</p>
<p>We move in turns. In every move, we can select a pile and remove any number of stones from it. But there's a restriction: At no point during the game can we have two equally high piles of non-zero size. The player who cannot make a move loses.</p>
<ul>
<li>the game $(1, 3, 5)$ can be transformed into $(0, 3, 5)$, $(1, 2, 5)$, $(1, 0, 5)$, $(1, 3, 4)$, $(1, 3, 2)$ or $(1, 3, 0)$ in one move</li>
<li>$(0, 1, 2)$ can be transformed into $(0, 0, 2)$ or $(0, 1, 0)$</li>
<li>the game $(0, 0, 0)$ is an immediate loss</li>
</ul>
<p>I happen to know a way to compute the outcome of such a game: Unless $a = b = c = 0$, the first player loses exactly if $(a+1) \oplus (b+1) \oplus (c+1) = 0$. $(0,0,0)$ is a loss too. I think it can be proven via induction on $a + b + c$.</p>
<p>What I don't know is how one would derive that result. I have puzzled over this quite some time and figured out a formula for the case with two piles, but could not generalize it. Then I looked up the solution. How would you approach this problem to get an intuition on what the winning or losing positions are? Or even better, is there some general method that often works for these types of games?</p>
<p>I know about the Sprague-Grundy theorem and P and N positions in general games on DAGs, so I can just use "brute force" to solve the problem, but unfortunately the numbers were too large to solve the problem that way, and the results for smallish $a,b,c$ didn't really help me derive the formula. One important observation I can draw from this in hindsight, however, is that the value $(a+1) \oplus (b+1) \oplus (c+1)$ does not seem to be the Grundy number of the game; the two just happen to be zero for the same assignments of $a$, $b$ and $c$.</p>
<p>The source of the problem is the <a href="http://codeforces.com/gym/100338" rel="nofollow">Andrew Stankevich Programming Contest 22</a>, task D.</p>
<p><strong>UPDATE:</strong> For the two pile case, exactly the positions $(2k, 2k - 1)$ are the losses. We can get from every other position to one of these or to $(0, 0)$, but we can't get from one of these into another of these. The base case is that $(1, 2)$ is a loss and $(0, a)$ is a win for $a > 0$.</p>
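<p>For reference, a memoized brute force along these lines can at least confirm the claimed characterisation on all small positions (sketch, my naming):</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(piles):
    """piles: sorted tuple of three pile sizes.  A move shrinks one pile;
    positions with two equal non-zero piles are forbidden.  True iff the
    player to move can force a win."""
    for i, pile in enumerate(piles):
        for new in range(pile):
            nxt = tuple(sorted(piles[:i] + (new,) + piles[i + 1:]))
            if any(x == y != 0 for x, y in zip(nxt, nxt[1:])):
                continue  # illegal: would create two equal non-zero piles
            if not first_player_wins(nxt):
                return True
    return False  # no legal move, i.e. (0, 0, 0): a loss

def formula_says_loss(a, b, c):
    """The claimed characterisation of losing positions."""
    return (a, b, c) == (0, 0, 0) or (a + 1) ^ (b + 1) ^ (c + 1) == 0
```

Running the recursion over all legal positions with piles up to 8 and comparing against the formula is a quick consistency check before attempting the induction.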
| <p>This doesn't answer your question about how you may proceed with coming up with the conditions, but there is a cute way of proving it by comparing it to a game of regular Nim without induction as such.</p>
<p>The winning strategy for $a,b,c>0$ is to pretend you are playing a game of regular Nim with piles of size $a+1,b+1,c+1$ (where a 'move' removes any positive number of stones from a single pile), and play to win.</p>
<p>If $(a+1)\oplus (b+1)\oplus (c+1) \neq 0$, then consider the winning move in Nim. This cannot be to remove an entire pile, since the two remaining piles are unequal, so their Nim-sum is nonzero and that is not a losing position. Nor does the winning move ever leave two equal-sized piles together with a nonzero third pile: such a position has Nim-sum equal to the third pile, hence nonzero, while the winning move always leaves Nim-sum zero. Therefore the winning move is also a valid move in the restricted game. </p>
<p>Conversely, any move which is valid in the first game is always valid in the Nim game.</p>
<p>So the first player can ensure the Nim game is always valid, and thus by winning the Nim game cannot lose the original game.</p>
| <p>Not an answer, but too long for a comment: Even the two pile game is interesting. From $(0,a)$ the first player wins by moving to $(0,0)$. So a pile of $1$ is dead, and if there is one, the first player can move to $(1,2)$ and win unless you are already there. Then $(2,n)$ is a first player win. It is looking like we should catalog the $P$ positions, where the second player wins. Again a pile of $3$ is dead because the other player can move to $(1,2)$. $(3,4)$ is $P$ because you must make a $1$ or $2$. I agree with your comment that $(n,n+1)$ with $n$ odd are the $P$ positions. If the difference is two or more, the first player can attain that, so they are $N$ positions.<br>
This drives the three pile game. If there are two piles of the form $(n,n+1)$ with $n$ odd the first player wins by taking the other pile to zero. </p>
|
linear-algebra | <p>I'm getting started learning engineering math. I'm really interested in physics especially quantum mechanics, and I'm coming from a strong CS background.</p>
<p>One question is haunting me. </p>
<p>Why do I need to learn to do complex math operations on paper when most can be done automatically in software like Maple? For instance, as long as I learn the concept and application for how aspects of linear algebra and differential equations work, won't I be able to enter the appropriate info into such a software program and not have to manually do the calculations?</p>
<p>Is the point of math and math classes to learn the big-picture concepts of how to apply mathematical tools or is the point to learn the details to the ground level?</p>
<p>Just to clarify, I'm not trying to offend any mathematicians or to belittle the importance of math. From CS I recognize that knowing the deep details of an algorithm can be useful, but that it is equally important to be able to work abstractly. Just trying to get some perspective on how to approach the next few years of study.</p>
| <blockquote>
<p>Is the point of math and math classes to learn the big-picture concepts of how to apply mathematical tools or is the point to learn the details to the ground level?</p>
</blockquote>
<p>Both. One is difficult without the other. How are you going to solve equations that Maple can't solve? How are you going to solve it, exactly or numerically? What's the best way to solve something numerically? How can you simplify the problem to get an approximate answer? How are you going to interpret Maple's output, and any issues you have with its solution? How can you simplify the answer it gives you? What if you are only interested in the problem for a particular set of values/parameters/in a particular range? What happens if a parameter is small? How many solutions are there? Does a solution even exist? </p>
<p>Using a CAS without knowing the background maths behind the problems you're trying to solve is like punching the buttons on a calculator without knowing what numbers are, what the operations mean or what the order of operations might be. </p>
| <blockquote>
<p>Is the point of math and math classes to learn the big-picture concepts of how to apply mathematical tools or is the point to learn the details to the ground level?</p>
</blockquote>
<p>I will second Bennett: the point is both. Consider the analogy that learning mathematics and physics is much like constructing maps. First, you will see maps others have created, how details are crafted, norms, what the usual rules are, what the great maps for certain regions are. This is the bird's-eye view.</p>
<p>However, you must be sure these maps are correct. Therefore, you'll go to the places they give you directions to and check if it matches. This is the ground level. You have to make sure you are following instructions correctly, arriving at the same results, be able to walk yourself through the path.</p>
<p>It's the only way you have a firm, solid, sharp knowledge of anything you study. Learning how to switch between the bird's-eye view and sniffing the ground is part of the apprenticeship of anyone in science.</p>
<p>I will end this answer with a quote from <a href="http://en.wikipedia.org/wiki/Richard_Hamming">Richard Hamming</a>:</p>
<blockquote>
<p><em>The purpose of computing is insight, not numbers.</em></p>
</blockquote>
|
logic | <p>I'm near the end of Velleman's <em>How to Prove It</em>, self-studying and learning a lot about proofs. This book teaches you how to express ideas rigorously in logic notation, prove the theorem logically, and then "translate" it back to English for the written proof. I've noticed that because of the way it was taught I have a really hard time even approaching a proof without first expressing everything rigorously in logic statements. Is that a problem? I feel like I should be able to manipulate the concepts correctly enough without having to literally encode everything. Is logic a crutch? Or is it normal to have to do that?</p>
| <p>It's perfectly normal. In fact, I think that's how a mathematician's mind grows.</p>
<ul>
<li>First, you are naive and "intuistic", and you do a lot of "well, of course this is so!" like statements that are not well founded.</li>
<li>After you are repeatedly hit over the head with examples where your intuition fails, you take a huge step back. You realize that even simple statements may be wrong, and that you need a rigorous way of proving them. You use strict logic notation to avoid any and all confusion.</li>
<li>When you are more and more practiced, you begin to transition back a little. Yes, you still know that every statement has a strict logical form, but you don't write it down anymore. You begin, again, to rely on intuition.</li>
</ul>
<hr>
<p>At least, that's how it's worked for me. Mind you, the intuition in the final step is very different than the original intuition. The original one is the brash "d'ah, how can you ever doubt that?!" kind of thing that is <em>embarrassingly</em> bad at doing math.</p>
<p>The more developed intuition in step 3 is much different. It has a lot more thought put into it. It's more "yeah, this is so, and I know approximately how I can prove this using strict logic, but since it would take 3 pages, I won't use strict logic."</p>
<p>Sure, the second intuition can also be wrong. And every once in a while, it is. But its performance is way way <em>way</em> <strong>way</strong> <strong><em>way</em></strong> better than in the beginning, and it also has a fall-back. If all else fails, your final answer is no longer "well, but, but how could it <em>not</em> be true?", the final answer is "oh alright fine, I'll spell it out rigorously!"</p>
| <p>Logic is not a crutch. It is a way to check that you haven't made a mistake.
It might not be the best <em>first</em> step to make a proof, but it is definitely a necessary step. </p>
<p>As you get more experienced, you tend to start by sketching the proof in very informal terms. This is like navigating by large landmarks: "I am headed towards that mountain over there, hm, that ridge seems like a good approach. OK, lets go." </p>
<p>However, to actually get to that mountain you still put one foot in front of another, one step at a time for hours. Logic is like walking one step at a time, while looking at your feet and the nearby ground. It is necessary, but sometimes you need to look up to see where you should be going.</p>
<p>Now, if you are going to write the proof in English eventually, you don't have to actually write down all the logic in symbols. I find that I can write English (or Norwegian) while keeping the formal symbols in my mind.</p>
<p>Some published proofs aren't very formal. The reason for this is that formal proofs can be very long. Instead they give arguments that convince the reader that it is <em>possible</em> to work out the proof in all its details, even if neither the writer or the reader have actually done so.</p>
<p>Professional mathematicians become good at telling whether such an informal argument is valid or not, but they occasionally slip up, even peer-reviewed articles can be wrong. So, take care.</p>
|
linear-algebra | <p>In linear algebra and differential geometry, there are various structures which we calculate with in a basis or local coordinates, but which we would like to have a meaning which is basis independent or coordinate independent, or at least, changes in some covariant way under changes of basis or coordinates. One way to ensure that our structures adhere to this principle is to give their definitions without reference to a basis. Often we employ universal properties, functors, and natural transformations to encode these natural, coordinate/basis free structures. But the Riemannian volume form does not appear to admit such a description, nor does its pointwise analogue in linear algebra.</p>
<p>Let me list several examples.</p>
<ul>
<li><p>In linear algebra, an inner product on $V$ is an element of $\operatorname{Sym}^2{V^*}$. The symmetric power is a space which may be defined by a universal property, and constructed via a quotient of a tensor product. No choice of basis necessary. Alternatively an inner product can be given by an $n\times n$ symmetric matrix. The correspondence between the two alternatives is given by $g_{ij}=g(e_i,e_j)$. Calculations are easy with this formulation, but one should check (or require) that the matrix transforms appropriately under changes of basis.</p></li>
<li><p>In linear algebra, a volume form is an element of $\Lambda^n(V^*)$. Alternatively one may define a volume form operator as the determinant of the matrix of the components of $n$ vectors, relative to some basis.</p></li>
<li><p>In linear algebra, an orientation is an element of $\Lambda^n(V^*)/\mathbb{R}^{>0}$.</p></li>
<li><p>In linear algebra, a symplectic form is an element of $\Lambda^2(V^*)$. Alternatively it may be given as some $\omega_{ij}\,dx^i\wedge dx^j$.</p></li>
<li><p>In linear algebra, given a symplectic form, a canonical volume form may be chosen as $\operatorname{vol}=\omega^n$. This operation can be described as a natural transformation $\Lambda^2\to\Lambda^n$. That is, to each vector space $V$, we have a map $\Lambda^2(V)\to\Lambda^n(V)$ taking $\omega\mapsto \omega^n$ and this map commutes with linear maps between spaces.</p></li>
<li><p>In differential geometry, all the above linear algebra concepts may be specified pointwise. Any smooth functor of vector spaces may be applied to the tangent bundle to give a smooth vector bundle. Thus a Riemannian metric is a section of the bundle $\operatorname{Sym}^2{T^*M}$, etc. A symplectic form is a section of the bundle $\Lambda^2(M)$, and the wedge product extends to an operation on sections, and gives a symplectic manifold a volume form. This is a global operation; this definition of a Riemannian metric gives a smoothly varying inner product on every tangent space of the manifold, <em>even if the manifold is not covered by a single coordinate patch</em>.</p></li>
<li><p>In differential geometry, sometimes vectors are defined as $n$-tuples which transform as $v^i\to \tilde{v}^j\frac{\partial x^i}{\partial \tilde{x}^j}$ under a change of coordinates $x \to \tilde{x}$. But a more invariant definition is to say a vector is a derivation of the algebra of smooth functions. Cotangent vectors can be defined with a slightly different transformation rule, or else invariantly as the dual space to the tangent vectors. Similar remarks hold for higher rank tensors.</p></li>
<li><p>In differential geometry, one defines a connection on a bundle. The local coordinates definition makes it appear to be a tensor, but it does not obey the transformation rules set forth above. It only becomes clear why when one sees the invariant definition.</p></li>
<li><p>In differential geometry, there is a derivation on the exterior algebra called the exterior derivative. It may be defined as $d\sigma = \partial_j\sigma_I\,dx^j\wedge dx^I$ in local coordinates, or better via an invariant formula $d\sigma(v_1,\dotsc,v_n) = \sum_i(-1)^iv_i(\sigma(v_1,\dotsc,\hat{v_i},\dotsc,v_n)) + \sum_{i<j}(-1)^{i+j}\sigma([v_i,v_j],v_1,\dotsc,\hat{v_i},\dotsc,\hat{v_j},\dotsc,v_n)$</p></li>
<li><p>Finally, the volume form on an oriented inner product space (or volume density on an inner product space) in linear algebra, and its counterpart the Riemannian volume form on an oriented Riemannian manifold (or volume density form on a Riemannian manifold) in differential geometry. Unlike the above examples which all admit global basis-free/coordinate-free definitions, we can define it only in a single coordinate patch or basis at a time, and glue together to obtain a globally defined structure. There are two definitions seen in the literature:</p>
<ol>
<li>Choose an (oriented) coordinate neighborhood of a point, so we have a basis for each tangent space. Write the metric tensor in terms of that basis. Pretend that the bilinear form is actually a linear transformation (this can always be done because once a basis is chosen, we have an isomorphism to $\mathbb{R}^n$ which is isomorphic to its dual (via a different isomorphism than that provided by the inner product)). Then take the determinant of the resulting mutated matrix, take the square root, multiply by the wedge of the basis one-forms (the positive root may be chosen in the oriented case; in the unoriented case, take the absolute value to obtain a density).</li>
<li>Choose an oriented orthonormal coframe in a neighborhood. Wedge it together. (Finally take the absolute value in the unoriented case).</li>
</ol></li>
</ul>
<p>Does anyone else think that one of these definitions sticks out like a sore thumb? Does it bother anyone else that in linear algebra, the volume form on an oriented inner product space doesn't exist as natural transformation $\operatorname{Sym}^2 \to \Lambda^n$? Do the instructions to "take the determinant of a bilinear form" scream out to anyone else that we're doing it wrong? Does it bother anyone else that in Riemannian geometry, in stark contrast to the superficially similar symplectic case, the volume form cannot be defined using invariant terminology for the whole manifold, but rather requires one to break the manifold into patches, and choose a basis for each? Is there any other structure in linear algebra or differential geometry which suffers from this defect?</p>
<p><strong>Answer:</strong> I've accepted Willie Wong's answer below, but let me also sum it up, since it's spread across several different places. There is a canonical construction of the Riemannian volume form on an oriented vector space, or pseudoform on a vector space. At the level of vector spaces, we may define an inner product on the dual space $V^*$ by $\tilde{g}(\sigma,\tau)=g(u,v)$ where $u,v$ are the dual vectors to $\sigma,\tau$ under the isomorphism between $V,V^*$ induced by $g$ (which is nondegenerate). Then extend $\tilde{g}$ to $\bigotimes^k V^*$ by defining $\hat{g}(a\otimes b\otimes c\otimes\dotsb,\; x\otimes y\otimes z\otimes\dotsb)=\tilde{g}(a,x)\tilde{g}(b,y)\tilde{g}(c,z)\dotsb$. Then the space of alternating forms may be viewed as a subspace of $\bigotimes^k V^*$, and so inherits an inner product as well (note, however, that while the alternating map may be defined canonically, there are varying normalization conventions which do not affect the kernel; i.e. $v\wedge w = k!\operatorname{Alt}(v\otimes w)$ or $v\wedge w = \operatorname{Alt}(v\otimes w)$). Then $\hat{g}(a\wedge b\dotsb,x\wedge y\dotsb)=\det[\tilde{g}(a,x)\dotsc]$ (with perhaps a normalization factor required here, depending on how $\operatorname{Alt}$ was defined).</p>
<p>Thus $g$ extends to an inner product on $\Lambda^n(V^*)$, which is a 1 dimensional space, so there are only two unit vectors, and if $V$ is oriented, there is a canonical choice of volume form. And in any event, there is a canonical pseudoform.</p>
| <p>A few points:</p>
<ul>
<li>It is necessary to define "Riemannian volume forms" a patch at a time: you can have non-orientable Riemannian manifolds. (Symplectic manifolds are however <em>necessarily</em> orientable.) So you cannot just have a <strong>global</strong> construction mapping Riemannian metric to Riemannian volume form. (Consider the Möbius strip with the standard metric.)</li>
<li>It is however possible to give a definition of the Riemannian volume form locally in a way that does not depend on choosing a coordinate basis. This also showcases why there <strong>cannot</strong> be a natural map from <span class="math-container">$\mathrm{Sym}^2\to \Lambda^n$</span> sending inner-products to volume forms. We start from the case of the vector space. Given a vector space <span class="math-container">$V$</span>, we know that <span class="math-container">$V$</span> and <span class="math-container">$V^*$</span> are isomorphic as vector spaces, but not canonically so. However if we also take a positive definite symmetric bilinear form <span class="math-container">$g\in \mathrm{Sym}_+^2(V^*)$</span>, we can pick out a unique compatible isomorphism <span class="math-container">$\flat: V\to V^*$</span> and its inverse <span class="math-container">$\sharp: V^*\to V$</span>. A corollary is that <span class="math-container">$g$</span> extends to (by abuse of notation) an element of <span class="math-container">$\mathrm{Sym}_+^2(V)$</span>. Then by taking wedges of <span class="math-container">$g$</span> you get that the metric <span class="math-container">$g$</span> (now defined on <span class="math-container">$V^*$</span>) extends uniquely to a metric<sup>1</sup> on <span class="math-container">$\Lambda^k(V^*)$</span>. Therefore, <strong>up to sign</strong> there is a unique (using that <span class="math-container">$\Lambda^n(V^*)$</span> is one-dimensional) volume form <span class="math-container">$\omega\in \Lambda^n(V^*)$</span> satisfying <span class="math-container">$g(\omega,\omega) = 1$</span>. <em>But be very careful that this definition is only up to sign.</em></li>
<li>The same construction extends directly to the Riemannian case. Given a differentiable manifold <span class="math-container">$M$</span>, there is a natural map from sections of positive definite symmetric bilinear forms on the tangent space <span class="math-container">$\Gamma\mathrm{Sym}_+^2(T^*M) \to \Gamma\left(\Lambda^n(M)\setminus\{0\} / \pm\right)$</span> to the non-vanishing top forms <em>defined up to sign</em>. From this the usual topological arguments show that if you fix an orientation (either directly in the case where <span class="math-container">$M$</span> is orientable or lifting to the orientable double cover if not) you get a map whose image now is a positively oriented volume form.</li>
</ul>
<p>Let me just summarise by giving the punch line again:</p>
<p>For every inner product <span class="math-container">$g$</span> on a vector space <span class="math-container">$V$</span> there are <strong>two</strong> compatible volume forms in <span class="math-container">$\Lambda^n V$</span>: they differ by sign. Therefore the natural mapping from inner products takes image in <span class="math-container">$\Lambda^n V / \pm$</span>!</p>
<p>Therefore if you want to construct a map based on fibre-wise operations on <span class="math-container">$TM$</span> sending Riemannian metrics to volume forms, you run the very real risk that, due to the above ambiguity, what you construct is not even continuous anywhere. The "coordinate patch" definition has the advantage that it sweeps this problem under the rug by implicitly choosing one of the two admissible local (in the sense of open charts) orientation. You can do without the coordinate patch if you start, instead, with an orientable Riemannian manifold <span class="math-container">$(M,g,\omega)$</span> and use <span class="math-container">$\omega$</span> to continuously choose one of the two admissible pointwise forms.</p>
<hr />
<p><sup>1</sup>: this used to be linked to a post on MathOverflow, which has since been deleted. So for completeness: the space of <span class="math-container">$k$</span>-tensors is the span of tensors of the form <span class="math-container">$v_1 \otimes \cdots \otimes v_k$</span>, and you can extend <span class="math-container">$g$</span> to the space of <span class="math-container">$k$</span>-tensors by setting
<span class="math-container">$$ g(v_1\otimes\cdots v_k, w_1\otimes\cdots\otimes w_k) := g(v_1, w_1) g(v_2, w_2) \cdots g(v_k, w_k) $$</span>
and extending using bilinearity. The space <span class="math-container">$\Lambda^k(V^*)$</span> embeds into <span class="math-container">$\otimes^k V^*$</span> in the usual way and hence inherits an inner product.</p>
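<p>This fibre-wise construction can be made concrete in a small numerical sketch (assuming Python; the specific metric below is an illustrative choice, not part of the answer). For $V=\mathbb{R}^2$, the induced inner product on the one-dimensional space $\Lambda^2(V^*)$ gives $\hat g(e^1\wedge e^2,\,e^1\wedge e^2)=\det(G^{-1})$, so the two unit volume forms are $\pm\sqrt{\det G}\;e^1\wedge e^2$:</p>

```python
import math

# Metric on V = R^2 in the basis e1, e2 (an illustrative choice).
G = [[2.0, 1.0],
     [1.0, 3.0]]
det_G = G[0][0] * G[1][1] - G[0][1] * G[1][0]

# The induced metric g~ on V* has matrix G^{-1} in the dual basis e^1, e^2.
G_inv = [[ G[1][1] / det_G, -G[0][1] / det_G],
         [-G[1][0] / det_G,  G[0][0] / det_G]]

# Extend g~ to Lambda^2(V*) via the Gram determinant:
#   g^(a /\ b, c /\ d) = det [[g~(a,c), g~(a,d)], [g~(b,c), g~(b,d)]].
# For the basis 2-form e^1 /\ e^2 this is just det(G^{-1}).
norm_sq = G_inv[0][0] * G_inv[1][1] - G_inv[0][1] * G_inv[1][0]

# The two unit vectors of the 1-dimensional space Lambda^2(V*) are
# +/- sqrt(det G) * e^1 /\ e^2 -- defined only up to sign, as stressed above.
vol_coefficient = 1.0 / math.sqrt(norm_sq)
print(vol_coefficient, math.sqrt(det_G))  # the two coefficients agree
```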
| <p>A coordinate-free definition of <a href="http://en.wikipedia.org/wiki/Volume_form">volume form</a> is in fact well-known and frequently used, e.g. the cited Wikipedia article. I will try to reproduce it the nutshell to the best of my understanding.</p>
<p>Let $V$ be a (real, for certainty) vector space of finite dimension $\dim V = n$. The space of $n$-forms $\Lambda^n (V)$ has dimension 1. Thus $\Lambda^n (V)$ isomorphic to $\mathbb{R}$, however this isomorphism is <em>not canonical</em>: any choice of non-trivial $n$-form $\omega$ can be mapped to $1 \in \mathbb{R}$.</p>
<p><strong>A volume form</strong> on a finite-dimensional vector space $V$ is <em>a choice</em> of a top-rank non-trivial exterior form (skew-symmetric $n$-linear functional) $\omega \in \Lambda^n (V)$. I think that this definition is quite coordinate-free.</p>
<p>Once such a form has been chosen, it can be used to divide the space of bases in $V$ into two classes that are called <em>orientations</em>. There are two of them, <em>positive</em> ($\omega > 0$) and <em>negative</em> ($\omega < 0$). Having a volume form chosen, one can speak about oriented volumes of parallelotopes, for instance.</p>
<p>If for any reason we have an <em>inner product</em> $g$ in $V$ we can make this choice canonical. One needs to consider orthonormal frames (with respect to $g$). The canonical volume form will take value 1 on positively oriented orthonormal frames.</p>
<p><strong>The volume form</strong> of an inner-product space $(V, g)$ is that canonical choice of a volume form. It can be denoted by $Vol_{g}$ provided one also keeps in mind that there is a choice of orientation involved.</p>
<p>Along these lines one can obtain an understanding of the volume form as the <a href="http://en.wikipedia.org/wiki/Hodge_dual">Hodge dual</a> of 1 in a pretty coordinate-free manner.</p>
|
probability | <p>I gave my friend <a href="https://math.stackexchange.com/questions/42231/obtaining-irrational-probabilities-from-fair-coins">this problem</a> as a brainteaser; while her attempted solution didn't work, it raised an interesting question.</p>
<p>I flip a fair coin repeatedly and record the results. I stop as soon as the number of heads is equal to twice the number of tails (for example, I will stop after seeing HHT or THTHHH or TTTHHHHHH). What's the probability that I never stop?</p>
<p>I've tried to just compute the answer directly, but the terms got ugly pretty quickly. I'm hoping for a hint towards a slick solution, but I will keep trying to brute force an answer in the meantime.</p>
| <blockquote>
<p>The game stops with probability $u=\frac34(3-\sqrt5)\approx0.572949017$.</p>
</blockquote>
<p>See the end of the post for generalizations of this result, first to asymmetric heads-or-tails games (Edit 1), and then to every integer ratio (Edit 2).</p>
<hr>
<p><strong>To prove this</strong>, consider the random walk which goes two steps to the right each time a tail occurs and one step to the left each time a head occurs. Then the number of tails is the double of the number of heads each time the walk is back at its starting point (and only then). In other words, the probability that the game never stops is $1-u$ where $u=P_0(\text{hits}\ 0)$ for the random walk with equiprobable steps $+2$ and $-1$.</p>
<p>The classical one-step analysis of hitting times for Markov chains yields $2u=v_1+w_2$ where, for every positive $k$, $v_k=P_{-k}(\text{hits}\ 0)$ and $w_k=P_{k}(\text{hits}\ 0)$. We first evaluate $w_2$ then $v_1$.</p>
<p>The $(w_k)$ part is easy: the only steps to the left are $-1$ steps hence to hit $0$ starting from $k\ge1$, the walk must first hit $k-1$ starting from $k$, then hit $k-2$ starting from $k-1$, and so on. These $k$ events are equiprobable hence $w_k=(w_1)^k$. Another one-step analysis, this time for the walk starting from $1$, yields
$$
2w_1=1+w_3=1+(w_1)^3
$$
hence $w_k=w^k$ where $w$ solves $w^3-2w+1=0$. Since $w\ne1$, $w^2+w=1$ and since $w<1$, $w=\frac12(\sqrt5-1)$.</p>
<p>Let us consider the $(v_k)$ part. The random walk has a drift to the right hence its position converges to $+\infty$ almost surely. Let $k+R$ denote the first position visited on the right of the starting point $k$. Then $R\in\{1,2\}$ almost surely, the distribution of $R$ does not depend on $k$ because the dynamics is invariant by translations, and
$$
v_1=r+(1-r)w_1\quad\text{where}\ r=P_{-1}(R=1).
$$
Now, starting from $0$, $R=1$ implies that the first step is $-1$ hence $2r=P_{-1}(A)$ with $A=[\text{hits}\ 1 \text{before}\ 2]$. Consider $R'$ for the random walk starting at $-1$. If $R'=2$, $A$ occurs. If $R'=1$, the walk is back at position $0$ hence $A$ occurs with probability $r$. In other words, $2r=(1-r)+r^2$, that is, $r^2-3r+1=0$. Since $r<1$, $r=\frac12(3-\sqrt5)$ (hence $r=1-w$).</p>
<p>Plugging these values of $w$ and $r$ into $v_1$ and $w_2$ yields the value of $u$.</p>
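<p>The value $u=\frac34(3-\sqrt5)\approx0.5729$ is easy to sanity-check by simulation (a sketch, assuming Python; the seed, trial count, and cap on the number of flips are arbitrary choices):</p>

```python
import random

rng = random.Random(12345)  # fixed seed for reproducibility

def game_stops(max_flips=1000):
    """Flip a fair coin until #heads == 2 * #tails, or give up."""
    heads = tails = 0
    for _ in range(max_flips):
        if rng.random() < 0.5:
            heads += 1
        else:
            tails += 1
        if heads == 2 * tails:
            return True
    return False  # treated as "never stops" (the walk is transient)

trials = 50_000
estimate = sum(game_stops() for _ in range(trials)) / trials
exact = 0.75 * (3 - 5 ** 0.5)
print(estimate, exact)  # the estimate should be close to 0.572949...
```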
<hr>
<p><strong>Edit 1</strong> Every asymmetric random walk which performs elementary steps $+2$ with probability $p$ and $-1$ with probability $1-p$ is transient to $+\infty$ as long as $p>\frac13$ (and naturally, for every $p\le\frac13$ the walk hits $0$ with full probability). In this regime, one can compute the probability $u(p)$ to hit $0$. The result is the following.</p>
<blockquote>
<p>For every $p$ in $(\frac13,1)$, $u(p)=\frac32\left(2-p-\sqrt{p(4-3p)}\right).$</p>
</blockquote>
<p>Note that $u(p)\to1$ when $p\to\frac13$ and $u(p)\to0$ when $p\to1$, as was to be expected.</p>
<hr>
<p><strong>Edit 2</strong>
Coming back to symmetric heads-or-tails games, note that, for any fixed integer $N\ge2$, the same techniques apply to compute the probability $u_N$ to reach $N$ times more tails than heads. </p>
<p>One gets $2u_N=E(w^{R_N-1})+w^N$ where $w$ is the unique solution in $(0,1)$ of the polynomial equation $2w=1+w^{1+N}$, and the random variable $R_N$ is almost surely in $\{1,2,\ldots,N\}$. The distribution of $R_N$ is characterized by its generating function, which solves
$$
(1-(2-r_N)s)E(s^{R_N})=r_Ns-s^{N+1}\quad\text{with}\quad r_N=P(R_N=1).
$$
This is equivalent to a system of $N$ equations with unknowns the probabilities $P(R_N=k)$ for $k$ in $\{1,2,\ldots,N\}$. One can deduce from this system that $r_N$ is the unique root $r<1$ of the polynomial $(2-r)^Nr=1$. One can then note that $r_N=w^N$ and that $E(w^{R_N})=\dfrac{Nr_N}{2-r_N}$ hence some further simplifications yield finally the following general result.</p>
<blockquote>
<p>For every $N\ge2$, $u_N=\frac12(N+1)r_N$ where $r_N<1$ solves the equation $(2-r)^Nr=1$.</p>
</blockquote>
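<p>Numerically, $r_N$ is easy to extract by bisection, and for $N=2$ the formula reproduces $u=\frac34(3-\sqrt5)$ (a sketch, assuming Python; the bracketing interval and iteration count are arbitrary choices):</p>

```python
def r_root(N, iters=200):
    """Root r < 1 of (2 - r)^N * r = 1 (r = 1 is the other root), by bisection."""
    f = lambda r: (2.0 - r) ** N * r - 1.0
    lo, hi = 0.0, 0.9  # f(lo) = -1 < 0 and f(hi) > 0 for every N >= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def u(N):
    """Probability of ever reaching N times more tails than heads."""
    return 0.5 * (N + 1) * r_root(N)

print(u(2), 0.75 * (3 - 5 ** 0.5))  # the two values should agree
print(u(3), u(4), u(5))             # the probability decreases with N
```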
| <p>(Update: The answer to the original question is that the probability of stopping is $\frac{3}{4} \left(3 - \sqrt{5}\right)$. See end of post for an infinite series expression in the general case.)
<HR></p>
<p>Let $S(n)$ denote the number of ways to stop after seeing $n$ tails. Seeing $n$ tails means seeing $2n$ heads, so this would be stopping after $3n$ flips. Since there are $2^{3n}$ possible sequences in $3n$ flips, the probability of stopping is $\sum_{n=1}^{\infty} S(n)/8^n$.</p>
<p>To determine $S(n)$, we see that there are $\binom{3n}{n}$ ways to choose which $n$ of $3n$ flips will be tails. However, this overcounts for $n > 1$, as we could have seen twice as many heads as tails for some $k < n$. Of these $\binom{3n}{n}$ sequences, there are $S(k) \binom{3n-3k}{n-k}$ sequences of $3n$ flips in which there are $k$ tails the first time we would see twice as many heads as tails, as any of the $S(k)$ sequences of $3k$ flips could be completed by choosing $n-k$ of the remaining $3n-3k$ flips to be tails. Thus $S(n)$ satisfies the recurrence $S(n) = \binom{3n}{n} - \sum_{k=1}^{n-1} \binom{3n-3k}{n-k}S(k)$, with $S(1) = 3$.</p>
<p>The solution to this recurrence is $S(n) = \frac{2}{3n-1} \binom{3n}{n}.$ This can be verified easily, as substituting this expression into the recurrence yields a slight variation on Identity 5.62 in <em>Concrete Mathematics</em> (p. 202, 2nd ed.), namely,
$$\sum_k \binom{tk+r}{k} \binom{tn-tk+s}{n-k} \frac{r}{tk+r} = \binom{tn+r+s}{n},$$
with $t = 3$, $r = -1$, $s=0$.</p>
<p>So the probability of stopping is $$\sum_{n=1}^{\infty} \binom{3n}{n} \frac{2}{3n-1} \frac{1}{8^n}.$$</p>
<p>Mathematica gives the closed form for this probability of stopping to be $$2 \left(1 - \cos\left(\frac{2}{3} \arcsin \frac{3 \sqrt{3/2}}{4}\right) \right) \approx 0.572949.$$</p>
<p><em>Added</em>: The sum is hypergeometric and has a simpler representation. See Sasha's comments for why the sum yields this closed form solution and also why the answer is $$\frac{3}{4} \left(3 - \sqrt{5}\right) \approx 0.572949.$$</p>
<p><HR>
<em>Added 2</em>: This answer is generalizable to other ratios $r$ up to the infinite series expression. For the general $r \geq 2$ case, the argument above is easily adapted to produce the recurrence
$S(n) = \binom{(r+1)n}{n} - \sum_{k=1}^{n-1} \binom{(r+1)n-(r+1)k}{n-k}S(k)$, with $S(1) = r+1$. The solution to the recurrence is $S(n) = \frac{r}{(r+1) n - 1} \binom{(r+1) n}{n}$ and can be verified easily by using the binomial convolution formula given above. Thus, for the ratio $r$, the probability of stopping has the infinite series expression $$\sum_{n=1}^{\infty} \binom{(r+1)n}{n} \frac{r}{(r+1)n-1} \frac{1}{2^{(r+1)n}}.$$
This can be expressed as a hypergeometric function, but I am not sure how to simplify it any further for general $r$ (and neither does Mathematica). It can also be expressed using the generalized binomial series discussed in <em>Concrete Mathematics</em> (p. 200, 2nd ed.), but I don't see how to simplify it further in that direction, either.</p>
<p><HR>
<em>Added 3</em>: In case anyone is interested, I found a <a href="https://math.stackexchange.com/questions/60991/combinatorial-proof-of-binom3nn-frac23n-1-as-the-answer-to-a-coin-fli/66146#66146">combinatorial proof of the formula for $S(n)$</a>. It works in the general $r$ case, too.</p>
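<p>The recurrence, its closed-form solution, and the value of the infinite series can all be confirmed numerically (a sketch, assuming Python; the cutoffs are arbitrary):</p>

```python
from math import comb, sqrt

def S_closed(n):
    """Closed form S(n) = 2 / (3n - 1) * C(3n, n); the division is exact."""
    num = 2 * comb(3 * n, n)
    assert num % (3 * n - 1) == 0
    return num // (3 * n - 1)

def S_table(n_max):
    """S(n) via the recurrence S(n) = C(3n,n) - sum_{k<n} C(3n-3k, n-k) S(k)."""
    S = [0, 3]  # S[0] unused, S(1) = 3
    for n in range(2, n_max + 1):
        S.append(comb(3 * n, n)
                 - sum(comb(3 * n - 3 * k, n - k) * S[k] for k in range(1, n)))
    return S

table = S_table(12)
print(all(S_closed(n) == table[n] for n in range(1, 13)))  # closed form checks out

# Partial sums of S(n) / 8^n approach 3/4 * (3 - sqrt(5)):
total = sum(S_closed(n) / 8 ** n for n in range(1, 300))
print(total, 0.75 * (3 - sqrt(5)))
```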
|
probability | <p>I found this problem in a contest of years ago, but I'm not very good at probability, so I prefer to see how you do it:</p>
<blockquote>
<p>A man gets drunk half of the days of a month. To open his house, he has a set of keys with <span class="math-container">$5$</span> keys that are all very similar, and only one key lets him enter his house. Even when he arrives sober he doesn't know which key is the correct one, so he tries them one by one until he chooses the correct key. When he's drunk, he also tries the keys one by one, but he can't distinguish which keys he has tried before, so he may repeat the same key.</p>
<p>One day we saw that he opened the door on his third try.</p>
<p>What is the probability that he was drunk that day?</p>
</blockquote>
| <p>The key thing here is this: let $T$ be the number of tries it takes him to open the door. Let $D$ be the event that the man is drunk. Then
$$
P(D\mid T=3)=\frac{P(T=3, D)}{P(T=3)}.
$$
Now, the event that it takes three tries to open the door can be decomposed as
$$
P(T=3)=P(T=3\mid D)\cdot P(D)+P(T=3\mid \neg D)\cdot P(\neg D).
$$
By assumption, $P(D)=P(\neg D)=\frac{1}{2}$. So, we just need to compute the probability of requiring three attempts when drunk and when sober.</p>
<p>When he's sober, it takes three tries precisely when he chooses a wrong key, followed by a different wrong key, followed by the right key; the probability of doing this is
$$
P(T=3\mid \neg D)=\frac{4}{5}\cdot\frac{3}{4}\cdot\frac{1}{3}=\frac{1}{5}.
$$</p>
<p>When he's drunk, it is
$$
P(T=3\mid D)=\frac{4}{5}\cdot\frac{4}{5}\cdot\frac{1}{5}=\frac{16}{125}.
$$</p>
<p>So, all told,
$$
P(T=3)=\frac{16}{125}\cdot\frac{1}{2}+\frac{1}{5}\cdot\frac{1}{2}=\frac{41}{250}.
$$
Finally,
$$
P(T=3, D)=P(T=3\mid D)\cdot P(D)=\frac{16}{125}\cdot\frac{1}{2}=\frac{16}{250}
$$
(intentionally left unsimplified). So, we get
$$
P(D\mid T=3)=\frac{\frac{16}{250}}{\frac{41}{250}}=\frac{16}{41}.
$$</p>
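<p>The whole computation goes through exactly in rational arithmetic (a sketch, assuming Python):</p>

```python
from fractions import Fraction

p_drunk_prior = Fraction(1, 2)  # drunk half the days

# P(T = 3 | sober): two distinct wrong keys, then the right one
p3_sober = Fraction(4, 5) * Fraction(3, 4) * Fraction(1, 3)

# P(T = 3 | drunk): keys tried with replacement
p3_drunk = Fraction(4, 5) * Fraction(4, 5) * Fraction(1, 5)

# Total probability, then Bayes' rule
p3 = p3_drunk * p_drunk_prior + p3_sober * (1 - p_drunk_prior)
posterior = p3_drunk * p_drunk_prior / p3

print(p3_sober, p3_drunk, p3, posterior)  # 1/5  16/125  41/250  16/41
```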
| <p>Let's first compute the probability that he wins on the third try in each of the two cases:</p>
<p>Sober: The key has to be one of the (ordered) five, with equal probability for each, so $p_{sober}=p_s=\frac 15$.</p>
<p>Drunk: Success on any trial has probability $\frac 15$. To win on the third means he fails twice then succeeds, so $p_{drunk}=p_d=\frac 45\times \frac 45 \times \frac 15 = \frac {16}{125}$</p>
<p>Since our prior was $\frac 12$ the new estimate for the probability is $$\frac {.5\times p_d}{.5p_d+.5p_s}=\frac {16}{41}=.\overline {39024}$$</p>
|
number-theory | <p>$$
\begin{align}
1100 & = 2\times2\times5\times5\times11 \\
1101 & =3\times 367 \\
1102 & =2\times19\times29 \\
1103 & =1103 \\
1104 & = 2\times2\times2\times2\times 3\times23 \\
1105 & = 5\times13\times17 \\
1106 & = 2\times7\times79
\end{align}
$$
In looking at this list of prime factorizations, I see that <b>all</b> of the first 10 prime numbers, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, appear within the factorizations of only seven consecutive integers. (The next prime number, 31, has its multiples as far from 1100 as it could hope to get them (1085 and 1116).) So no nearby number could hope to be divisible by 29 or 23, nor even by 7 for suitably adjusted values of "nearby". Consequently when you're factoring nearby numbers, you're deprived of those small primes as potential factors by which they might be divisible. So nearby numbers, for lack of small primes that could divide them, must be divisible by large primes. And accordingly, not far away, we find $1099=7\times157$ (157 doesn't show up so often---only once every 157 steps---that you'd usually expect to find it so close by) and likewise 1098 is divisible by 61, 1108 by 277, 1096 by 137, 1095 by 73, 1094 by 547, etc.; and 1097 and 1109 are themselves prime.</p>
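<p>These claims are easy to verify mechanically (a quick sketch, assuming Python): each of the first ten primes divides one of the seven integers $1100,\dotsc,1106$, while the neighbors must fall back on large prime factors.</p>

```python
def prime_factors(n):
    """Trial-division factorization (fine at this scale)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

first_ten_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
window = range(1100, 1107)

# Each of the first ten primes divides one of the seven integers 1100..1106:
print(all(any(n % p == 0 for n in window) for p in first_ten_primes))

# ...so the neighbors are forced onto large prime factors:
for n in (1094, 1095, 1096, 1098, 1099, 1108):
    print(n, prime_factors(n))
```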
<p>So if an unusually large number of small primes occur unusually close together as factors, then an unusually large number of large primes must also be in the neighborhood.</p>
<p>Are there known precise results quantifying this phenomenon?</p>
| <p>The phenomenon you describe seems to be the concept behind the <a href="http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes" rel="noreferrer">Sieve of Eratosthenes</a>.</p>
<p>Every number below $ \sqrt{N} $ where $ N $ is the number you want to factorize will appear at least once in the list of possible factors. Looking at the list generated by the Sieve, it will become obvious that only primes remain. A factorization using the Sieve then implies trial division by the primes up to $ \sqrt{N}$. This concept lets us see that obviously, if "many" small primes were used to remove elements in the Sieve, the remaining elements will have larger prime factors, possibly being primes themselves.</p>
<p>However: no generalization of any kind is possible, since every number's occurrence is cyclic. Think of a <a href="http://en.wikipedia.org/wiki/Fourier_transform" rel="noreferrer">Fourier Transform</a>, using prime factors as frequencies. 2 appears every second number, 3 every third and so on. At any point $N$, there is no way to determine if there are primes nearby or if the numbers nearby will have "large primes" as factors without knowing its value.</p>
<p>I could also relate what you're saying to the concept of <a href="http://en.wikipedia.org/wiki/Mersenne_prime" rel="noreferrer">Mersenne Primes</a>. Essentially, primes in the form $2^{n}-1$, the largest known being $2^{43,112,609} - 1$, which also happens to be the largest prime known. In this case, they are looking for primes in the vicinity of powers of 2, which is also saying that the largest factor is a large prime, right?</p>
<p>So yes, it stands to reason that if $N$ isn't a prime and has small factors, numbers nearby have a chance to have greater factors. No quantification of that is useful, however.</p>
<p>If you take $7$ consecutive numbers, the chance that one of them is divisible by a prime $p$ is $\dfrac{7}{p}$ (for primes $p\ge 7$; smaller primes are guaranteed to appear). If $x$ of those $7$ numbers are prime, then the chance that one of the remaining $7-x$ numbers is divisible by a prime $p$ is still $\dfrac{7}{p}$. This is why the numbers around primes have relatively more prime factors than numbers which are farther away from primes.</p>
|
number-theory | <p>Today, as I was flipping through my copy of <em>Higher Algebra</em> by Barnard and Child, I came across a theorem which said,</p>
<blockquote>
<p>The series $$ 1+\frac{1}{2^p} +\frac{1}{3^p}+...$$ diverges for $p\leq 1$ and converges for $p>1$.</p>
</blockquote>
<p>But later I found out that the zeta function is defined for all complex values other than 1. Now I know that Riemann analytically continued this function to fit all complex values, but how do I explain, to a layman, that $\zeta(0)=1+1+1+...=-\frac{1}{2}$?</p>
<p>The Wiki articles on these topics go way over my head. I'd appreciate it if someone could explain to me what analytic continuation actually is, and which functions can be analytically continued.</p>
<hr>
<h3>Edit</h3>
<p>If the function diverges for $p\leq1$, how is WolframAlpha able to compute $\zeta(1/5)$? Shouldn't it give out infinity as the answer?</p>
| <p>I'll give you the world's simplest example. $1+x+x^2+\dots$ converges for $|x|\lt1$ only. The function $1/(1-x)$ is analytic everywhere except for a pole at $x=1$, and agrees with $1+x+x^2+\dots$ everywhere the latter is defined, so $1/(1-x)$ is the analytic continuation of $1+x+x^2+\dots$. In that sense, $1+2+4+\dots=-1$. </p>
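<p>Numerically (a sketch, assuming Python): inside the disc of convergence the partial sums of $1+x+x^2+\dots$ approach $1/(1-x)$, while at $x=2$ the partial sums blow up even though the continuation $1/(1-x)$ still assigns the finite value $-1$.</p>

```python
def partial_sum(x, terms):
    """Partial sum 1 + x + x^2 + ... + x^(terms-1) of the geometric series."""
    return sum(x ** n for n in range(terms))

def continuation(x):
    """The analytic continuation 1 / (1 - x), defined for all x != 1."""
    return 1 / (1 - x)

# Inside |x| < 1 the series and the continuation agree:
x = 0.5
print(partial_sum(x, 60), continuation(x))  # both close to 2

# At x = 2 the series diverges, yet the continuation assigns the value -1:
print(partial_sum(2, 20))   # 2^20 - 1, growing without bound
print(continuation(2))      # -1.0
```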
| <p>If all you're interested in is an explanation to a layman of why a thing like analytic continuation makes any sense to begin with (in particular, why it spits out a unique answer), the answer is the <a href="http://en.wikipedia.org/wiki/Identity_theorem">identity theorem</a> for <a href="http://en.wikipedia.org/wiki/Holomorphic_function">holomorphic functions</a>, which in its stronger form says that if two holomorphic functions $f, g$ defined on a connected open subset $U$ of $\mathbb{C}$ are equal on a set of points in $U$ with an accumulation point (in particular any open subset of $U$), then in fact $f = g$. This is an extremely strong rigidity theorem generalizing the corresponding fact for polynomials, and shows that if you want to extend a holomorphic function $f$ defined on some domain $U$ to a function $\tilde{f}$ defined on some larger connected domain $V$, then there is at most one way to do it (since any two extensions agree on $U$ and hence agree on $V$). </p>
<p>One sense of "analytic continuation" is that it refers to any function $\tilde{f}$ with the above property, and another sense of "analytic continuation" is that it refers to methods for defining $\tilde{f}$ given $f$. </p>
<p>Sometimes it is the case that a function $f$ can be analytically continued to a very large domain $U$ but that a series or integral defining $f$ can't be made to converge on all of $U$. The philosophical point to take away from this is that holomorphic functions have a sort of "Platonic reality" that isn't necessarily perfectly captured by any particular series or integral definition of them, which are just imperfect "shadows" of this Platonic reality. The slogan I use in this situation and <a href="http://en.wikipedia.org/wiki/Divergent_series">others like it</a> is that</p>
<blockquote>
<p>convergence is overrated. </p>
</blockquote>
<p>In response to the edit, WolframAlpha is using "shadows" other than the standard series definition of the Riemann zeta function. </p>
|
probability | <blockquote>
<p>You play a game where a fair coin is flipped. You win 1 if it shows heads and lose 1 if it shows tails. You start with 5 and decide to play until you either have 20 or go broke. What is the probability that you will go broke?</p>
</blockquote>
| <p>You can use symmetry here - Starting at $5$, it is equally likely to get to $0$ first or to $10$ first. Now, if you get to $10$ first, then it is equally likely to get to $0$ first or to $20$ first.</p>
<p>What does that mean for the probability of getting to $0$ before getting to $20$?</p>
| <p>It is a fair game, so your expected value at the end has to be $5$ like you started. You must have $\frac 34$ chance to go broke and $\frac 14$ chance to end with $20$.</p>
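<p>Both arguments give a ruin probability of $\frac34$, which a quick simulation confirms (a sketch, assuming Python; the seed and trial count are arbitrary choices):</p>

```python
import random

rng = random.Random(0)  # fixed seed for reproducibility

def goes_broke(start=5, target=20):
    """Play the fair +/- 1 game until hitting 0 or the target."""
    money = start
    while 0 < money < target:
        money += 1 if rng.random() < 0.5 else -1
    return money == 0

trials = 100_000
estimate = sum(goes_broke() for _ in range(trials)) / trials
print(estimate)  # should be close to (20 - 5) / 20 = 0.75
```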
|
logic | <p>Why is this true?</p>
<p><span class="math-container">$\exists x\,\big(P(x) \rightarrow \forall y\:P(y)\big)$</span></p>
| <p>Since this may be homework, I do not want to provide the full formal proof, but I will share the informal justification. Classical first-order logic typically makes the assumption of existential import (i.e., that the domain of discourse is non-empty). In classical logic, the principle of excluded middle holds, i.e., that for any $\phi$, either $\phi$ or $\lnot\phi$ holds. Since I first encountered this kind of sentence where $P(x)$ was interpreted as "$x$ is a bird," I will use that in the following argument. Finally, recall that a material conditional $\phi \to \psi$ is true if and only if either $\phi$ is false or $\psi$ is true.</p>
<p>By excluded middle, it is either true that everything is a bird, or that not everything is a bird. Let us consider these cases:</p>
<ul>
<li>If everything is a bird, then pick an arbitrary individual $x$, and note that the conditional “if $x$ is a bird, then everything is a bird,” is true, since the consequent is true. Therefore, if everything is a bird, then there is something such that if it is a bird, then everything is a bird.</li>
<li>If it is not the case that everything is a bird, then there must be some $x$ which is not a bird. Then consider the conditional “if $x$ is a bird, then everything is a bird.” It is true because its antecedent, “$x$ is a bird,” is false. Therefore, if it is not the case that everything is a bird, then there is something (a non-bird, in fact) such that if it is a bird, then everything is a bird.</li>
</ul>
<p>Since it holds in each of the exhaustive cases that there is something such that if it is a bird, then everything is a bird, we conclude that there is, in fact, something such that if it is a bird, then everything is a bird.</p>
<h2>Alternatives</h2>
<p>Since questions about the domain came up in the comments, it seems worthwhile to consider the three preconditions to this argument: existential import (the domain is non-empty); excluded middle ($\phi \lor \lnot\phi$); and the material conditional ($(\phi \to \psi) \equiv (\lnot\phi \lor \psi)$). Each of these can be changed in a way that can affect the argument. This might not be the place to examine <em>how</em> each of these affects the argument, but we can at least give pointers to resources about the alternatives.</p>
<ul>
<li>Existential import asserts that the universe of discourse is non-empty. <a href="http://plato.stanford.edu/entries/logic-free/">Free logics</a> relax this constraint. If the universe of discourse were empty, it would seem that $\exists x.(P(x) \to \forall y.P(y))$ should be vacuously false.</li>
<li><a href="http://en.wikipedia.org/wiki/Intuitionistic_logic">Intuitionistic logics</a> do not presume the excluded middle, in general. The argument above started with a claim of the form “either $\phi$ or $\lnot\phi$.”</li>
<li>There are plenty of <a href="http://en.wikipedia.org/wiki/Material_conditional#Philosophical_problems_with_material_conditional">philosophical difficulties with the material conditional</a>, especially as used to represent “if … then …” sentences in natural language. If we took the conditional to be a counterfactual, for instance, and so were considering the sentence “there is something such that if it were a bird (even if it is not <em>actually</em> a bird), then everything would be a bird,” it seems like it should no longer be provable.</li>
</ul>
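<p>The case analysis above can also be checked mechanically over small finite domains. Here is a minimal brute-force sketch (the encoding is my own, not the formal proof the answer withholds):</p>

```python
# Over every non-empty domain of size up to 3 and every interpretation of the
# predicate P, the sentence "exists x: (P(x) -> forall y: P(y))" comes out
# true, while over the empty domain it is false, matching the existential
# import caveat discussed above.
from itertools import product

def sentence_holds(domain, P):
    """Evaluate exists x (P(x) -> forall y P(y)) with the material conditional."""
    return any((not P[x]) or all(P[y] for y in domain) for x in domain)

for size in range(1, 4):                 # non-empty domains
    domain = range(size)
    for values in product([False, True], repeat=size):
        P = dict(zip(domain, values))    # one interpretation of P
        assert sentence_holds(domain, P)

assert not sentence_holds([], {})        # empty domain: no witness exists
print("verified for all interpretations on domains of size 1..3")
```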
| <p>Hint: The only way for $A\implies B$ to be false is for $A$ to be true and $B$ to be false.</p>
<p>I don't think this is actually true unless you know your domain isn't empty. If your domain is empty, then $\forall y: P(y)$ is true "vacuously," but $\exists x: Q$ is not true for any $Q$.</p>
|
logic | <p>Suppose we have a line of people that starts with person #1 and goes for a (finite or infinite) number of people behind him/her, and this property holds for every person in the line: </p>
<blockquote>
<p><em>If everyone in front of you is bald, then you are bald</em>.</p>
</blockquote>
<p>Without further assumptions, does this mean that the first person is necessarily bald? Does it say <em>anything</em> about the first person at all? </p>
<p>In my opinion, it means: </p>
<blockquote>
<p><em>If there exist anyone in front of you and they're all bald, then you're bald.</em> </p>
</blockquote>
<p>Generally, for a statement that consists of a subject and a predicate, if the subject doesn't exist, then does the statement have a truth value? </p>
<p>I think there's a convention in math that if the subject doesn't exist, then the statement is right.</p>
<p>I don't have a problem with this convention (in the same way that I don't have a problem with the meaning of '<em>or</em>' in math). My question is whether it's a clear logical implication of the facts, or we have to <strong>define the truth value</strong> for these subject-less statements.</p>
<hr>
<p><strong>Addendum:</strong> </p>
<p>You can read up on this matter <a href="https://en.wikipedia.org/wiki/Syllogism#Existential_import" rel="noreferrer">here</a> (too).</p>
| <p>You can see what's going on by reformulating the assumption in its equivalent contrapositive form: </p>
<blockquote>
<p><em>If I'm not bald, then there is someone in front of me who is not bald.</em></p>
</blockquote>
<p>Now the first person in line finds himself thinking, "There is <em>no one</em> in front of me. So it's not true that there is someone in front of me who is not bald. So it's not true that I'm not bald. So I must be bald!"</p>
| <p>Mathematical logic <em>defines</em> a statement about <a href="https://en.wikipedia.org/wiki/Universal_quantification#The_empty_set" rel="nofollow noreferrer">all elements of an empty set</a> to be true. This is called <a href="https://en.wikipedia.org/wiki/Vacuous_truth" rel="nofollow noreferrer">vacuous truth</a>. It may be somewhat confusing since it doesn't agree with common everyday usage, where making a statement tends to suggest that there is some object for which the statement actually holds (like the person in front of you in your example).</p>
<p>But it is <em>exactly</em> the right thing to do in a formal setup, for several reasons. One reason is that logical statements don't <em>suggest</em> anything: you must not assume any meaning in excess of what's stated explicitly. Another reason is that it makes several conversions possible without special cases. For example,</p>
<p>$$\forall x\in(A\cup B):P(x)\;\Leftrightarrow
\forall x\in A:P(x)\;\wedge\;\forall x\in B:P(x)$$</p>
<p>holds even if $A$ (or $B$) happens to be the empty set. Another example is the conversion between universal and existential quantification <a href="https://math.stackexchange.com/users/86747/barry-cipra">Barry Cipra</a> <a href="https://math.stackexchange.com/a/1669020/35416">used</a>:</p>
<p>$$\forall x\in A:\neg P(x)\;\Leftrightarrow \neg\exists x\in A:P(x)$$</p>
<p>If you are into programming, then the following pseudocode snippet may also help explaining this:</p>
<pre><code>bool universal(set, property) {
for (element in set)
if (not property(element))
return false
return true
}
</code></pre>
<p>As you can see, the universally quantified statement is <em>only</em> false if there <em>exists</em> an element of the set for which it does not hold. Conversely, you could define</p>
<pre><code>bool existential(set, property) {
for (element in set)
if (property(element))
return true
return false
}
</code></pre>
<p>This is also similar to other empty-set definitions like</p>
<p>$$\sum_{x\in\emptyset}f(x)=0\qquad\prod_{x\in\emptyset}f(x)=1$$</p>
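<p>Python's built-ins happen to follow exactly these empty-set conventions, which makes a handy mnemonic (a small aside of mine, not part of the original answer):</p>

```python
# A universal claim over nothing is vacuously true, an existential one is
# false, empty sums are 0 and empty products are 1, mirroring the
# mathematical conventions above.
import math

assert all(x > 10 for x in []) is True    # "everything in [] is > 10": vacuous
assert any(x > 10 for x in []) is False   # "something in [] is > 10": no witness
assert sum([]) == 0                       # empty sum
assert math.prod([]) == 1                 # empty product
print("empty-set conventions confirmed")
```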
<blockquote>
<p>If everyone in front of you is bald, then you are bald.</p>
</blockquote>
<p>Applying the above to the statement from your question: from</p>
<p>$$\bigl(\forall y\in\text{People in front of }x: \operatorname{bald}(y)
\bigr)\implies\operatorname{bald}(x)$$</p>
<p>one can derive</p>
<p>$$\emptyset=\text{People in front of }x\implies\operatorname{bald}(x)$$</p>
<p>so <strong>yes, the first person must be bald</strong> because there is noone in front of him.</p>
<p>Some formalisms prefer to write the “People in front of” as a pair of predicates instead of a set. In such a setup, you'd see fewer sets and more implications:</p>
<p>$$\Bigl(\forall y: \bigl(\operatorname{person}(y)\wedge(y\operatorname{infrontof}x)\bigr)\implies\operatorname{bald}(y)
\Bigr)\implies\operatorname{bald}(x)$$</p>
<p>If there is no $y$ satisfying both predicates, then the left hand side of the first implication is always false, rendering the implication as a whole always true, thus allowing us to conclude the baldness of the first person. The fact that an implication with a false antecedent is always true is another form of vacuous truth.</p>
<p>Note to self: <a href="https://math.stackexchange.com/questions/556117/good-math-bed-time-stories-for-children#comment1182925_556133">this comment</a> indicates that <em>Alice in Wonderland</em> was dealing with vacuous truth at some point. I should re-read that book and quote any interesting examples when I find the time.</p>
|
linear-algebra | <p>Suppose $A=uv^T$ where $u$ and $v$ are non-zero column vectors in ${\mathbb R}^n$, $n\geq 3$. $\lambda=0$ is an eigenvalue of $A$ since $A$ is not of full rank. $\lambda=v^Tu$ is also an eigenvalue of $A$ since
$$Au = (uv^T)u=u(v^Tu)=(v^Tu)u.$$
Here is my question:</p>
<blockquote>
<p>Are there any other eigenvalues of $A$?</p>
</blockquote>
<p>Added:</p>
<p>Thanks to Didier's comment and anon's answer, $A$ can not have other eigenvalues than $0$ and $v^Tu$. I would like to update the question:</p>
<blockquote>
<p>Can $A$ be diagonalizable?</p>
</blockquote>
| <p>We're assuming $v\ne 0$. The orthogonal complement of the linear subspace generated by $v$ (i.e. the set of all vectors orthogonal to $v$) is therefore $(n-1)$-dimensional. Let $\phi_1,\dots,\phi_{n-1}$ be a basis for this space. Then they are linearly independent and $uv^T \phi_i = (v\cdot\phi_i)u=0$. Thus the eigenvalue $0$ has multiplicity $n-1$, and there are no other eigenvalues besides it and $v\cdot u$.</p>
| <p>As to your last question, when is $A$ diagonalizable?</p>
<p>If $v^Tu\neq 0$, then from anon's answer you know the algebraic multiplicity of $\lambda=0$ is at least $n-1$, and from your previous work you know $\lambda=v^Tu\neq 0$ is an eigenvalue; together, that gives you at least $n$ eigenvalues (counting multiplicity); since the geometric and algebraic multiplicities of $\lambda=0$ are equal, and the other eigenvalue has algebraic multiplicity $1$, it follows that $A$ is diagonalizable in this case.</p>
<p>If $v^Tu=0$, on the other hand, then the above argument does not hold. But if $\mathbf{x}$ is nonzero, then you have $A\mathbf{x} = (uv^T)\mathbf{x} = u(v^T\mathbf{x}) = (v\cdot \mathbf{x})u$; if this is a multiple of $\mathbf{x}$, $(v\cdot\mathbf{x})u = \mu\mathbf{x}$, then either $\mu=0$, in which case $v\cdot\mathbf{x}=0$, so $\mathbf{x}$ is in the orthogonal complement of $v$; or else $\mu\neq 0$, in which case $v\cdot \mathbf{x} = v\cdot\left(\frac{v\cdot\mathbf{x}}{\mu}\right)u = \left(\frac{v\cdot\mathbf{x}}{\mu}\right)(v\cdot u) = 0$, and again $\mathbf{x}$ lies in the orthogonal complement of $v$; that is, the only eigenvectors lie in the orthogonal complement of $v$, and the only eigenvalue is $0$. This means the eigenspace is of dimension $n-1$, and therefore the geometric multiplicity of $0$ is strictly smaller than its algebraic multiplicity, so $A$ is not diagonalizable.</p>
<p>In summary, $A$ is diagonalizable if and only if $v^Tu\neq 0$, if and only if $u$ is not orthogonal to $v$. </p>
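<p>A concrete numerical check of both answers in $\mathbb{R}^3$ (my own sanity check in plain Python, no linear-algebra library; the vectors are arbitrary choices):</p>

```python
# A = u v^T has eigenvalue v^T u with eigenvector u, and eigenvalue 0 on the
# whole plane orthogonal to v.
u = [1.0, 2.0, 3.0]
v = [4.0, 5.0, 6.0]

A = [[ui * vj for vj in v] for ui in u]  # the outer product u v^T

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(M, x):
    return [dot(row, x) for row in M]

lam = dot(v, u)                                # v^T u = 32, nonzero here
assert matvec(A, u) == [lam * ui for ui in u]  # A u = (v^T u) u

# Two independent vectors orthogonal to v span the eigenspace for 0:
for phi in ([5.0, -4.0, 0.0], [6.0, 0.0, -4.0]):
    assert dot(v, phi) == 0.0
    assert matvec(A, phi) == [0.0, 0.0, 0.0]

# Since v^T u != 0, we found n = 3 independent eigenvectors: diagonalizable.
print("eigenvalues:", lam, "and 0 with multiplicity 2")
```

Swapping in orthogonal vectors, say $v=(1,0,0)$ and $u=(0,1,0)$, collapses $v^Tu$ to $0$ and reproduces the non-diagonalizable case of the second answer.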
|
logic | <p>Completeness is defined as: if $\Sigma\models\Phi$ then $\Sigma\vdash\Phi$.
Meaning: if every truth assignment that makes all of $\Sigma$ true also makes $\Phi$ true, then we can derive $\Phi$ from $\Sigma$ using the proof rules.</p>
<p>Soundness is defined as the converse: if $\Sigma\vdash\Phi$ then $\Sigma\models\Phi$. </p>
<p>Can you please explain the basic difference between the two of them ? </p>
<p>Thanks, Ron</p>
| <p>In brief:</p>
<p>Soundness means that you <em>cannot</em> prove anything that's wrong.</p>
<p>Completeness means that you <em>can</em> prove anything that's right.</p>
<p>In both cases, we are talking about a some fixed system of rules for proof (the one used to define the relation $\vdash$ ).</p>
<p>In more detail: Think of $\Sigma$ as a set of hypotheses, and $\Phi$ as a statement we are trying to prove. When we say $\Sigma \models \Phi$, we are saying that $\Sigma$ <em>logically implies</em> $\Phi$, i.e., in every circumstance in which $\Sigma$ is true, then $\Phi$ is true. Informally, $\Phi$ is "right" given $\Sigma$.</p>
<p>When we say $\Sigma \vdash \Phi$, on the other hand, we must have some set of rules of proof (sometimes called "inference rules") in mind. Usually these rules have the form, "if you start with some particular statements, then you can derive these other statements". If you can derive $\Phi$ starting from $\Sigma$, then we say that $\Sigma \vdash \Phi$, or that $\Phi$ is provable from $\Sigma$. </p>
<p>We are thinking of a proof as something used to convince others, so it's important that the rules for $\vdash$ are mechanical enough so that another person or a computer can <em>check</em> a purported proof (this is different from saying that the other person/computer could <em>create</em> the proof, which we do <em>not</em> require).</p>
<p>Soundness states: $\Sigma \vdash \Phi$ implies $\Sigma \models \Phi$. If you can prove $\Phi$ from $\Sigma$, then $\Phi$ is true given $\Sigma$. Put differently, if $\Phi$ is not true (given $\Sigma$), then you can't prove $\Phi$ from $\Sigma$. Informally: "You can't prove anything that's wrong."</p>
<p>Completeness states: $\Sigma \models \Phi$ implies $\Sigma \vdash \Phi$. If $\Phi$ is true given $\Sigma$, then you can prove $\Phi$ from $\Sigma$. Informally: "You can prove anything that's right."</p>
<p>Ideally, a proof system is both sound and complete.</p>
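<p>For propositional logic, the semantic side $\Sigma \models \Phi$ can be made completely concrete by brute-forcing truth tables. Here is a minimal sketch (the tuple encoding of formulas is my own convention, not standard notation):</p>

```python
# Formulas are variables ('p') or nested tuples like ('->', 'p', 'q').
# Sigma |= phi holds iff every truth assignment satisfying all of Sigma
# also satisfies phi.
from itertools import product

def ev(f, m):
    """Evaluate a formula under a truth assignment m (a dict of variables)."""
    if isinstance(f, str):
        return m[f]
    op = f[0]
    if op == 'not': return not ev(f[1], m)
    if op == 'and': return ev(f[1], m) and ev(f[2], m)
    if op == 'or':  return ev(f[1], m) or ev(f[2], m)
    if op == '->':  return (not ev(f[1], m)) or ev(f[2], m)

def entails(sigma, phi, variables):
    """Sigma |= phi, checked over all 2^n truth assignments."""
    return all(ev(phi, dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables))
               if all(ev(s, dict(zip(variables, vals))) for s in sigma))

assert entails(['p', ('->', 'p', 'q')], 'q', ['p', 'q'])  # modus ponens
assert not entails([('or', 'p', 'q')], 'p', ['p', 'q'])   # not entailed
print("semantic entailment checks pass")
```

A sound and complete proof system would then derive exactly the $\Phi$ for which `entails` returns `True`.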
| <p>From the perspective of trying to write down axioms for first-order logic that satisfy both completeness and soundness, soundness is the easy direction: all you have to do is make sure that all of your axioms are true and that all of your inference rules preserve truth. Completeness is the hard direction: you need to write down strong enough axioms to capture semantic truth, and it's not obvious from the outset that this is even possible in a non-trivial way. </p>
<p>(A trivial way would be to admit all truths as your axioms, but the problem with this logical system is that you can't recognize what counts as a valid proof.) </p>
|
logic | <p>I don't know if this is appropriate for math.stackexchange, or whether philosophy.stackexchange would have been a better bet, but I'll post it here because the content is somewhat technical.</p>
<p>In ZFC, we can prove that the (second-order) Peano axioms have a model, call it $\mathbb{N} = (N,0,S)$. Furthermore, $\mathbb{N}$ is unique up to isomorphism. Thus, it would seem that we've pinned down $\mathbb{N}$.</p>
<p>However, if ZFC is consistent, then it has some very peculiar models. In particular, it has a model $(V_0,\in_0)$ whose "native" $\mathbb{N}$, let's call it $\mathbb{N}_0$, is a model for the sentence "ZFC is consistent," and another model $(V_1,\in_1)$ whose "native" $\mathbb{N}$, let's call it $\mathbb{N}_1$, is a model for the sentence "ZFC is inconsistent."</p>
<p>But in reality, in other words, for the "real" $\mathbb{N}$, only one of these sentences can be true. Furthermore, the objects $\mathbb{N}_0$ and $\mathbb{N}_1$ aren't isomorphic, since the sets of sentences they satisfy are different.</p>
<p>So it seems to me that $\mathbb{N}$ cannot <em>truly</em> be pinned down. Furthermore, its categoricity is, in some sense, illusory. Is this correct, or am I missing something?</p>
| <p>If you are a Platonist, or in any other sense believe that there is a "true" set of natural numbers, then you have pinned them down. If you believe that there is one concrete universe of sets, and suppose it even satisfies the axioms of $\sf ZF$, then that universe's $\omega$ can be thought of as the one true set of natural numbers.</p>
<p>But suppose you are not a Platonist. Whether you are a formalist, or support a multiverse approach, or maybe just don't care enough to believe that one thing is true or another, there is indeed a slight problem, because we can switch between models of $\sf PA$ and models of $\sf ZF$, and thus get different "true" natural numbers.</p>
<p>But the point is this, in my opinion, that when we are set to work and we take $\sf ZF$ to be our foundational theory, then we fix one universe of set theory that we work in. And being foundational this universe cannot disagree with the notion of natural numbers coming from the logic outside of it (so in particular it is going to have other models of $\sf ZF$ within it). Then the natural numbers are the $\omega$ of that universe.</p>
<p>When you are done working with this universe you throw it in the bin, and get another when you need to. Or you can save that universe in a scrapbook if you like.</p>
<p>The reason we can do this, or better yet, the reason that people don't care about foundational problems in many places throughout mathematics, is that we use properties which are internal to that set, and therefore are true in any set with those properties, regardless of whether it is the "true" or "false" set of natural numbers. If you write the sentence $\lnot\rm Con(\sf PA)$, and show it to be independent of $\sf PA$, then you have shown that first-order logic is insufficient to determine all the true properties of the natural numbers. But those that it does determine are enough for a lot of things, and most people don't mind switching to second-order in some of the cases. In which case the above argument fails to hold.</p>
| <p>There are three closely related concepts:</p>
<ul>
<li><p>The collection of natural numbers</p></li>
<li><p>The collection of (correct) formal proofs</p></li>
<li><p>The collection of finite sequences from a fixed finite alphabet (for example, the collection of finite sequences of 0s and 1s). </p></li>
</ul>
<p>Forgetting about formal systems for a moment, if we can pin down just <em>one</em> of these three concepts, the other two are also going to be pinned down - but it is very difficult to define any of these three concepts without referring to the others. That difficulty shows up in particular when we try to define the natural numbers in formal systems, or try to formalize proofs within a formal system such as ZFC.</p>
<p>There is no reason that we must interpret this as saying that the categoricity of $\mathbb{N}$ is "illusory" - we could also view it as saying that effective formal systems are simply never strong enough to prove all number-theoretic truths. This deficiency of formal systems only affects $\mathbb{N}$ in settings where $\mathbb{N}$ is defined using a formal system. Many mathematicians feel they already understand what $\mathbb{N}$ is before they learn anything about formal systems, however. </p>
|
linear-algebra | <p>It is well known that for invertible matrices $A,B$ of the same size we have $$(AB)^{-1}=B^{-1}A^{-1} $$
and a nice way for me to remember this is the following sentence:</p>
<blockquote>
<p>The opposite of putting on socks and shoes is taking the shoes off, followed by taking the socks off.</p>
</blockquote>
<p>Now, a similar law holds for the transpose, namely:</p>
<p>$$(AB)^T=B^TA^T $$</p>
<p>for matrices $A,B$ such that the product $AB$ is defined. My question is: is there any intuitive reason as to why the order of the factors is reversed in this case? </p>
<p>[Note that I'm aware of several proofs of this equality, and a proof is not what I'm after]</p>
<p>Thank you!</p>
| <p>One of my best college math professor always said:</p>
<blockquote>
<p>Make a drawing first.</p>
</blockquote>
<p><img src="https://i.sstatic.net/uGxff.gif" alt="enter image description here"></p>
<p>Although, he couldn't have made this one on the blackboard.</p>
| <p>By dualizing $AB: V_1\stackrel{B}{\longrightarrow} V_2\stackrel{A}{\longrightarrow}V_3$, we have $(AB)^T: V_3^*\stackrel{A^T}{\longrightarrow}V_2^*\stackrel{B^T}{\longrightarrow}V_1^*$. </p>
<p>Edit: $V^*$ is the dual space $\text{Hom}(V, \mathbb{F})$, the vector space of linear transformations from $V$ to its ground field, and if $A: V_1\to V_2$ is a linear transformation, then $A^T: V_2^*\to V_1^*$ is its dual defined by $A^T(f)=f\circ A$. By abuse of notation, if $A$ is the matrix representation with respect to bases $\mathcal{B}_1$ of $V_1$ and $\mathcal{B}_2$ of $V_2$, then $A^T$ is the matrix representation of the dual map with respect to the dual bases $\mathcal{B}_1^*$ and $\mathcal{B}_2^*$.</p>
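<p>The reversal can also be seen purely from shapes, which a tiny sketch with non-square matrices makes vivid (my illustration in plain Python; the matrices are arbitrary):</p>

```python
# Verify (AB)^T = B^T A^T for a 2x3 matrix A and a 3x2 matrix B.
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[1, 2, 3],
     [4, 5, 6]]         # 2x3
B = [[7, 8],
     [9, 10],
     [11, 12]]          # 3x2

assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))

# The shapes alone force the reversal: A^T is 3x2 and B^T is 2x3, so the
# un-reversed product A^T B^T would be 3x3, not the 2x2 of (AB)^T.
print("verified:", transpose(matmul(A, B)))
```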
|
combinatorics | <p><strong>Context:</strong> I'm a high school student, who has only ever had an introductory treatment, if that, on combinatorics. As such, the extent to which I have seen combinatoric applications is limited to situations such as "If you need a group of 2 men and 3 women and you have 8 men and 9 women, how many possible ways can you pick the group" (They do get slightly more complicated, but are usually similar).</p>
<p><strong>Question:</strong> I apologise in advance for the naive question, but at an elementary level it seems as though combinatorics (and the ensuing probability that can make use of it) is not overly rigorous. It doesn't seem as though you can "prove" that the number of arrangements you arrived at is the correct number. What if you forget a case?</p>
<p>I know that you could argue that you've considered all cases, by asking if there is another case other than the ones you've considered. But that doesn't seem to be the way other areas of mathematics are done. If I wish to prove something, I couldn't just say "can you find a situation where the statement is incorrect" as we don't just assume it is correct by nature.</p>
<p><strong>Is combinatorics rigorous?</strong></p>
<p>Thanks</p>
| <p>Combinatorics certainly <em>can</em> be rigorous but is not usually presented that way because doing it that way is:</p>
<ul>
<li>longer (obviously)</li>
<li>less clear because the rigour can obscure the key ideas</li>
<li>boring because once you know intuitively that something works you lose interest in a rigorous argument</li>
</ul>
<p>For example, compare the following two proofs that the binomial coefficient is $n!/k!(n - k)!$ where I will define the binomial coefficient as the number of $k$-element subsets of $\{1,\dots,n\}$.</p>
<hr>
<p><strong>Proof 1:</strong></p>
<p>Take a permutation $a_1,\dots, a_n$ of $1,\dots,n$. Separate this into $a_1,\dots,a_k$ and $a_{k + 1}, \dots, a_n$. We can permute $1,\dots, n$ in $n!$ ways and since we don't care about the order of $a_1,\dots,a_k$ or $a_{k + 1},\dots,a_n$ we divide by $k!(n - k)!$ for a total of $n!/k!(n - k)!$.</p>
<hr>
<p><strong>Proof 2:</strong></p>
<p>Let $B(n, k)$ denote the set of $k$-element subsets of $\{1,\dots,n\}$. We will show that there is a bijection</p>
<p>$$ S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}. $$</p>
<p>The map $\to$ is defined as follows. Let $\pi \in S_n$. Let $A = \{\pi(1),\pi(2),\dots,\pi(k)\}$ and let $B = \{\pi(k + 1),\dots, \pi(n)\}$. For each finite subset $C$ of $\{1,\dots,n\}$ with $m$ elements, fix a bijection $g_C : C \longleftrightarrow \{1,\dots,m\}$ by writing the elements of $C$ in increasing order $c_1 < \dots < c_m$ and mapping $c_i \longleftrightarrow i$.</p>
<p>Define maps $\pi_A$ and $\pi_B$ on $\{1,\dots,k\}$ and $\{1,\dots,n-k\}$ respectively by defining
$$ \pi_A(i) = g_A(\pi(i)) \text{ and } \pi_B(j) = g_B(\pi(j)). $$</p>
<p>We map the element $\pi \in S_n$ to the triple $(A, \pi_A, \pi_B) \in B(n, k) \times S_k \times S_{n - k}$.</p>
<p>Conversely, given a triple $(A, \sigma, \rho) \in B(n, k) \times S_k \times S_{n - k}$ we define $\pi \in S_n$ by
$$
\pi(i) =
\begin{cases}
g_A^{-1}(\sigma(i)) & \text{if } i \in \{1,\dots,k\} \\
g_B^{-1}(\rho(i-k)) & \text{if } i \in \{k + 1,\dots,n \}
\end{cases}
$$
where $B = \{1,\dots,n\} \setminus A$.</p>
<p>This defines a bijection $S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}$ and hence</p>
<p>$$ n! = {n \choose k} k!(n - k)! $$</p>
<p>as required.</p>
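<p>Proof 2's bijection is concrete enough to run. Here is a sketch for $n=4$, $k=2$ (helper names are mine): the forward map $\pi \mapsto (A, \pi_A, \pi_B)$ hits every element of $B(n,k) \times S_k \times S_{n-k}$ exactly once.</p>

```python
# Check computationally that the forward map is injective on S_n, so that
# n! = C(n,k) * k! * (n-k)!.
from itertools import permutations
from math import comb, factorial

n, k = 4, 2

def forward(pi):
    A = tuple(sorted(pi[:k]))                    # the k-element subset
    B = tuple(sorted(pi[k:]))
    rank = lambda C, c: C.index(c) + 1           # the bijection g_C
    pi_A = tuple(rank(A, x) for x in pi[:k])     # permutation of {1..k}
    pi_B = tuple(rank(B, x) for x in pi[k:])     # permutation of {1..n-k}
    return (A, pi_A, pi_B)

images = {forward(pi) for pi in permutations(range(1, n + 1))}
assert len(images) == factorial(n)               # the map is injective
assert factorial(n) == comb(n, k) * factorial(k) * factorial(n - k)
print(f"{factorial(n)} = C({n},{k}) * {k}! * {n - k}!")
```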
<hr>
<p><strong>Analysis:</strong></p>
<p>The first proof was two sentences whereas the second was some complicated mess. People with experience in combinatorics will understand the second argument is happening behind the scenes when reading the first argument. To them, the first argument is all the rigour necessary. For students it is useful to teach the second method a few times to build a level of comfort with bijective proofs. But if we tried to do all of combinatorics the second way it would take too long and there would be rioting.</p>
<hr>
<p><strong><em>Post Scriptum</em></strong></p>
<p>I will say that a lot of combinatorics textbooks and papers do tend to be written more along the lines of the second argument (i.e. rigorously). Talks and lectures tend to be more in line with the first argument. However, higher level books and papers only prove "higher level results" in this way and will simply state results that are found in lower level sources. They will also move a lot faster and not explain each step exactly.</p>
<p>For example, I didn't show that the map above was a bijection, merely stated it. In a lower level book there will be a proof that the two maps compose to the identity in both ways. In a higher level book, you might just see an example of the bijection and a statement that there is a bijection in general with the assumption that the person reading through the example could construct a proof on their own.</p>
| <p>Essentially, all (nearly all?) of combinatorics comes down to two things, the multiplication rule and the addition rule.</p>
<blockquote>
<p>If <span class="math-container">$A,B$</span> are finite sets then <span class="math-container">$|A\times B|=|A|\,|B|$</span>.</p>
<p>If <span class="math-container">$A,B$</span> are finite sets and <span class="math-container">$A\cap B=\emptyset$</span>, then <span class="math-container">$|A\cup B|=|A|+|B|$</span>.</p>
</blockquote>
<p>These can be rigorously proved, and more sophisticated techniques can be rigorously derived from them, for example, the fact that the number of different <span class="math-container">$r$</span>-element subsets of an <span class="math-container">$n$</span>-element set is <span class="math-container">$C(n,r)$</span>.</p>
<p>So, this far, combinatorics is perfectly rigorous. IMHO, the point at which it may become (or may appear to become) less rigorous is when it moves from pure to applied mathematics. So, with your specific example, you have to assume (or justify if you can) that counting the number of choices of <span class="math-container">$2$</span> men and <span class="math-container">$3$</span> women from <span class="math-container">$8$</span> men and <span class="math-container">$9$</span> women is the same as evaluating
<span class="math-container">$$|\{A\subseteq M: |A|=2\}\times \{B\subseteq W: |B|=3\}|\ ,$$</span>
where <span class="math-container">$|M|=8$</span> and <span class="math-container">$|W|=9$</span>.</p>
<p>It should not be surprising that the applied aspect of the topic requires some assumptions that may not be mathematically rigorous. The same is true in many other cases: for example, modelling a physical system by means of differential equations. Solving the equations once you have them can be done (more or less) rigorously, but deriving the equations in the first place usually cannot.</p>
<p>Hope this helps!</p>
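<p>Applying this to the question's own example, a short sketch can count the committees both ways, by brute-force enumeration via the multiplication rule and by the binomial formula (the element names are illustrative):</p>

```python
# Count groups of 2 men and 3 women from 8 men and 9 women, twice over.
from itertools import combinations
from math import comb

men = [f"m{i}" for i in range(8)]
women = [f"w{i}" for i in range(9)]

# Multiplication rule: a group is a pair (2-subset of men, 3-subset of women).
groups = [(a, b) for a in combinations(men, 2) for b in combinations(women, 3)]

assert len(groups) == comb(8, 2) * comb(9, 3) == 2352
print(len(groups))  # 2352
```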
|
geometry | <p>Intuitively, it seems like the area of a square should always be greater than the length of one of its sides because you can "fit" one of its sides in the space of its area, and still have room left over. </p>
<p>However when the length of a side, <span class="math-container">$s$</span>, is less than <span class="math-container">$1$</span>, then the area <span class="math-container">$s^2 < s$</span>, which doesn't make sense to me for the reason above.</p>
| <p>I think your intuition is failing you because you are trying to compare a 1-dimensional object (the length of a side) with a 2-dimensional object (the area of the interior of the square). You can fit <em>loads</em> of segments into a square of <em>any</em> size -- infinitely many, in fact! That comparison doesn't really mean anything.</p>
<p>On the other hand, here's a comparison that does make sense: Set a square of side length $s$ side-by-side with a rectangle whose sides are $s \times 1$. Now you are comparing area to area. The rectangle's area will fit inside the square if and only if $s>1$.</p>
| <p>Because you're misunderstanding units. The first assumption you make is that a square with a side of $1$ has an area of $1$ - that assumption is incorrect.</p>
<p>A square with a side of $1000$ $m$ / $1$ $km$ / $0.001$ $Mm$ has an area of $1000000$ $m^2$, $1$ $km^2$, or $0.000001$ $Mm^2$ (square-megameters), depending on how you choose to <em>present</em> it. It's all about presentation, not mathematical properties.</p>
<p>What you need to intuitively understand is that by doubling the length of the side of a square, you get 4 times the area. And by shrinking the side by half you shrink the area to a quarter, regardless of units. </p>
<p>Once you have that intuitive understanding, it will overrule your current understanding. Knowing that areas shrink "faster" than side lengths, it will be obvious that on a square with a side length of $1$ <em>grok</em> and an area of $1$ <em>grikk</em>, when you reduce the side length the area has to shrink faster than the side length - same is true for a square with a side length of $42$ <em>gruk</em> and an area of $42$ <em>grakk</em>: the area will shrink faster than the side length.</p>
|
logic | <p>In <a href="https://math.stackexchange.com/a/121131/111520">Henning Makholm's answer</a> to the question, <a href="https://math.stackexchange.com/q/121128/111520">When does the set enter set theory?</a>, he states:</p>
<blockquote>
<p>In axiomatic set theory, the axioms themselves <em>are</em> the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p>
</blockquote>
<p>This assertion clashes with my (admittedly limited) understanding of how first-order logic, model theory, and axiomatic set theories work.
From what I understand, the axioms of a set theory are properties we would like the objects we call "sets" to have, and then each possible model of the theory is a different definition of the notion of a set. But the axioms themselves do not constitute a definition of set, unless we can show that any model of the axioms is isomorphic (in some meaningful way) to a given model.</p>
<p>Am I misunderstanding something? Is the definition of a set specified by the axioms, or by a model of the axioms? I would appreciate any clarification/direction on this.</p>
<hr>
<p><strong>Update:</strong> <em>In addition to all the answers below, I have written up my own answer (marked as community wiki) gathering the excerpts from other answers (to this question as well as some others) which I feel are most pertinent to the question I originally posed.
Since it's currently buried at the bottom (and accepting it won't change its position), I'm linking to it <a href="https://math.stackexchange.com/a/1792236/111520">here</a>. Cheers!</em></p>
| <p>This is the commonplace clash between the semi-Platonic view of the lay mathematician and the foundational approach to mathematics through set theory.</p>
<p>It is often convenient, when working in "concrete" mathematics, to assume that there is a single, fixed universe of mathematics. And everyone who took a course or two in logic and set theory should be able to tell you that we can assume this universe is in fact a universe of $\sf ZFC$.</p>
<p>Then we do everything there, and we take the notion of "set" as somewhat primitive. Sets are not defined, they are just the objects of the universe.</p>
<p>But the word "set" is just a word in English. We use it to name this fickle, abstract, primitive object. But how can you ensure that my intuitive understanding of "set" is the same as yours?</p>
<p>This is where "axioms as definitions" come into play. Axioms define the basic ground rules for what it means to be a set. For example, if you don't have a power set, you're not a set: because every set has a power set. The axioms of set theory define what are the basic properties of sets. And once we agree on a list of axioms, we really agree on a list of definitions for what it means to be a set. And even if we grasp sets differently, we can still agree on some common properties and do some math.</p>
<p>You can see this in set theorists who disagree about philosophical outlooks, and whether or not some conjecture should be "true" or "false", or if the question is meaningless due to independence. Is the HOD Conjecture "true", "false" or is it simply "provable" or "independent"? That's a very different take on what are sets, and different set theorists will take different sides of the issue. But all of these set theorists have agreed, at least, that $\sf ZFC$ is a good way to define the basic properties of sets.</p>
<hr>
<p>As we've thrown Plato's name into the mix, let's talk a bit about "essence". How do you define a chair? or a desk? You can't <em>really</em> define a chair, because either you've defined it along the lines of "something you can sit on", in which case there will be things you can sit on which are certainly not chairs (the ground, for example, or a tree); or you've run into circularity "a chair is a chair is a chair"; or you're just being meaninglessly annoying "dude... like... everything is a chair... woah..."</p>
<p>But all three options are not good ways to define a chair. And yet, you're not baffled by this on a daily basis. This is because there is a difference between the "essence" of being a chair, and physical chairs.</p>
<p>Mathematical objects shouldn't be so lucky. Mathematical objects are abstract to begin with. We cannot perceive them in a tangible way, like we perceive chairs. So we are only left with some ideal definition. And this definition should capture the basic properties of sets. And that is exactly what the axioms of $\sf ZFC$, and any other set theory, are trying to do.</p>
| <blockquote>
<p>In axiomatic set theory, the axioms themselves are the definition of the notion of a set: A set is whatever behaves like the axioms say sets behave.</p>
</blockquote>
<p>I half-agree with this. But recall that the axioms of group theory don't axiomatize the concept "element of a group." Rather, they axiomatize the concept "group." In a similar way, the axioms of ZFC don't axiomatize the concept "set." They axiomatize the concept "universe of sets" (or "von Neumann universe" or "cumulative hierarchy", if you prefer).</p>
|
linear-algebra | <p>I'm in the process of writing an application which identifies the closest matrix from a set of square matrices $M$ to a given square matrix $A$. The closest can be defined as the most similar.</p>
<p>I think finding the distance between two given matrices is a fair approach since the smallest Euclidean distance is used to identify the closeness of vectors. </p>
<p>I found that the distance between two matrices ($A,B$) could be calculated using the <a href="http://mathworld.wolfram.com/FrobeniusNorm.html">Frobenius distance</a> $F$:</p>
<p>$$F_{A,B} = \sqrt{\operatorname{trace}\left((A-B)(A-B)'\right)} $$</p>
<p>where $B'$ represents the conjugate transpose of $B$.</p>
<p>I have the following points I need to clarify</p>
<ul>
<li>Is the distance between matrices a fair measure of similarity?</li>
<li>If distance is used, is Frobenius distance a fair measure for this problem? any other suggestions?</li>
</ul>
| <p>Some suggestions. Too long for a comment:</p>
<p>As I said, there are many ways to measure the "distance" between two matrices. If the matrices are $\mathbf{A} = (a_{ij})$ and $\mathbf{B} = (b_{ij})$, then some examples are:
$$
d_1(\mathbf{A}, \mathbf{B}) = \sum_{i=1}^n \sum_{j=1}^n |a_{ij} - b_{ij}|
$$
$$
d_2(\mathbf{A}, \mathbf{B}) = \sqrt{\sum_{i=1}^n \sum_{j=1}^n (a_{ij} - b_{ij})^2}
$$
$$
d_\infty(\mathbf{A}, \mathbf{B}) = \max_{1 \le i \le n}\max_{1 \le j \le n} |a_{ij} - b_{ij}|
$$
$$
d_m(\mathbf{A}, \mathbf{B}) = \max\{ \|(\mathbf{A} - \mathbf{B})\mathbf{x}\| : \mathbf{x} \in \mathbb{R}^n, \|\mathbf{x}\| = 1 \}
$$
I'm sure there are many others. If you look up "matrix norms", you'll find lots of material. And if $\|\;\|$ is any matrix norm, then $\| \mathbf{A} - \mathbf{B}\|$ gives you a measure of the "distance" between two matrices $\mathbf{A}$ and $\mathbf{B}$.</p>
<p>Or, you could simply count the number of positions where $|a_{ij} - b_{ij}|$ is larger than some threshold number. This doesn't have all the nice properties of a distance derived from a norm, but it still might be suitable for your needs.</p>
<p>These distance measures all have somewhat different properties. For example, the third one shown above will tell you that two matrices are far apart even if all their entries are the same except for a large difference in one position.</p>
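<p>For concreteness, here is a small sketch (not part of the original answer; the example matrices are made up) computing all four of the distances above with NumPy:</p>

```python
import numpy as np

# Two arbitrary matrices to compare.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.5], [2.0, 4.0]])
D = A - B

d1 = np.abs(D).sum()            # entrywise L1 distance
d2 = np.sqrt((D ** 2).sum())    # Frobenius distance (same as norm(D, 'fro'))
dinf = np.abs(D).max()          # entrywise max distance
dm = np.linalg.norm(D, 2)       # spectral norm: max ||(A-B)x|| over unit x
```

<p>Any of these can serve as the "closeness" score when searching a set of candidate matrices for the one nearest to $A$.</p>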
| <p>If we have two matrices $A,B$, the distance between $A$ and $B$ can be calculated using singular values or $2$-norms.</p>
<p>You may use Distance $= \vert\text{fnorm}(A)-\text{fnorm}(B)\vert$,
where fnorm is the square root of the sum of the squares of all singular values (i.e., the Frobenius norm). </p>
|
logic | <p>I strive to find a statement $S(n)$ with $n \in \mathbb{N}$ that can be proven to be not generally true despite the fact that no one knows a counterexample, i.e., it holds true for all $n$ ever tested so far. Any help?</p>
| <p>Statement: "There are no primes greater than $2^{60,000,000}$".
No counterexample is known.
A counterexample must exist, since the set of primes is infinite (Euclid).</p>
| <p>According the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Skewes%27_number">Skewes' number</a>, there is no explicit value $x$ known (yet) for which $\pi(x)\gt\text{li}(x)$. (There are, however, candidate values, and there are <em>ranges</em> within which counterexamples are known to lie, so this may not be what the OP is after.)</p>
<p>Another example along the same lines is the <a href="http://en.wikipedia.org/wiki/Mertens_conjecture">Mertens conjecture</a>.</p>
<p>A somewhat silly example would be the statement "$(100!)!+n+2$ is composite." It's clear that $S(n)$ is <em>true</em> for all "small" values of $n\in\mathbb{N}$, and it's clear that it's <em>false</em> in general, but I'd be willing to bet a small sum of money that no counterexample will be found in the next $100!$ years....</p>
<p>(Note: I edited in a "$+2$" to make sure that my silly $S(n)$ is clearly true for $n=0$ and $1$ as well as other "small" values of $n$.)</p>
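<p>As an aside, the reason this silly $S(n)$ is clearly true for "small" $n$ is the classical factorial trick: for $2 \le i \le m$, $i$ divides $m! + i$, so $m!+2,\dots,m!+m$ are all composite. A small sketch with $m = 10$ standing in for $(100!)!$ (which is far too large to compute with directly):</p>

```python
from math import factorial

# For 2 <= i <= m, i divides m! + i (since i divides m!), and the cofactor
# is bigger than 1, so m!+2, ..., m!+m are all composite.
m = 10
f = factorial(m)
all_composite = all(
    (f + i) % i == 0 and (f + i) // i > 1
    for i in range(2, m + 1)
)
```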
|
probability | <p>Suppose that we have two different discrete signal vectors of dimension $N$, namely $\mathbf{x}[i]$ and $\mathbf{y}[i]$, each one having a total of $M$ samples/vectors.</p>
<p>$\mathbf{x}[m] = [x_{m,1} \,\,\,\,\, x_{m,2} \,\,\,\,\, x_{m,3} \,\,\,\,\, ... \,\,\,\,\, x_{m,N}]^\text{T}; \,\,\,\,\,\,\, 1 \leq m \leq M$<br>
$\mathbf{y}[m] = [y_{m,1} \,\,\,\,\, y_{m,2} \,\,\,\,\, y_{m,3} \,\,\,\,\, ... \,\,\,\,\, y_{m,N}]^\text{T}; \,\,\,\,\,\,\,\,\, 1 \leq m \leq M$</p>
<p>And I build up a covariance matrix between these signals.</p>
<p>$\{C\}_{ij} = E\left\{(\mathbf{x}[i] - \bar{\mathbf{x}}[i])^\text{T}(\mathbf{y}[j] - \bar{\mathbf{y}}[j])\right\}; \,\,\,\,\,\,\,\,\,\,\,\, 1 \leq i,j \leq M $</p>
<p>Where, $E\{\}$ is the "expected value" operator.</p>
<p>What is the proof that, for all arbitrary values of the $\mathbf{x}$ and $\mathbf{y}$ vector sets, the covariance matrix $C$ is always positive semi-definite ($C \succeq 0$), i.e., not negative definite: all of its eigenvalues are non-negative?</p>
| <p>A symmetric matrix $C$ of size $n\times n$ is semi-definite if and only if $u^tCu\geqslant0$ for every $n\times1$ (column) vector $u$, where $u^t$ is the $1\times n$ transposed (line) vector. If $C$ is a covariance matrix in the sense that $C=\mathrm E(XX^t)$ for some $n\times 1$ random vector $X$, then the linearity of the expectation yields that $u^tCu=\mathrm E(Z_u^2)$, where $Z_u=u^tX$ is a real valued random variable, in particular $u^tCu\geqslant0$ for every $u$. </p>
<p>If $C=\mathrm E(XY^t)$ for two centered random vectors $X$ and $Y$, then $u^tCu=\mathrm E(Z_uT_u)$ where $Z_u=u^tX$ and $T_u=u^tY$ are two real valued centered random variables. Thus, there is no reason to expect that $u^tCu\geqslant0$ for every $u$ (and, indeed, $Y=-X$ provides a counterexample).</p>
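<p>A quick numerical sketch of this counterexample (the dimension, sample size and seed here are arbitrary choices):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 3))
X -= X.mean(axis=0)            # center the samples
Y = -X                         # the counterexample Y = -X

C = X.T @ Y / len(X)           # sample estimate of E(X Y^t) = -E(X X^t)
u = np.ones(3)
quad = u @ C @ u               # equals -E(Z_u^2), strictly negative here
```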
| <p>The covariance matrix <strong>C</strong> is calculated by the formula,
<span class="math-container">$$
\mathbf{C} \triangleq E\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}.
$$</span>
We are going to use <a href="https://en.wikipedia.org/wiki/Definite_matrix" rel="nofollow noreferrer">the definition of a positive semi-definite matrix</a>, which says:</p>
<blockquote>
<p>A real square matrix <span class="math-container">$\mathbf{A}$</span> is positive semi-definite if and only if<br />
<span class="math-container">$\mathbf{b}^T\mathbf{A}\mathbf{b} \ge 0$</span><br />
holds for every real column vector <span class="math-container">$\mathbf{b}$</span> of appropriate size.</p>
</blockquote>
<p>For an arbitrary real vector <strong>u</strong>, we can write,
<span class="math-container">$$
\begin{array}{rcl}
\mathbf{u}^T\mathbf{C}\mathbf{u} & = & \mathbf{u}^TE\{(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\}\mathbf{u} \\
& = & E\{\mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}\} \\
& = & E\{s^2\} \\
& = & \sigma_s^2. \\
\end{array}
$$</span>
where <span class="math-container">$\sigma_s^2$</span> is the variance of the zero-mean scalar random variable <span class="math-container">$s$</span>, that is,
<span class="math-container">$$
s = \mathbf{u}^T(\mathbf{x}-\bar{\mathbf{x}}) = (\mathbf{x}-\bar{\mathbf{x}})^T\mathbf{u}.
$$</span>
The square of any real number is greater than or equal to zero, so
<span class="math-container">$$
\sigma_s^2 \ge 0
$$</span>
Thus,
<span class="math-container">$$
\mathbf{u}^T\mathbf{C}\mathbf{u} = \sigma_s^2 \ge 0.
$$</span>
Which implies that covariance matrix of any real random vector is always positive semi-definite.</p>
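<p>The derivation can also be checked numerically; a sketch (the data, dimension and seed are arbitrary) building a sample covariance matrix and verifying both <span class="math-container">$\mathbf{u}^T\mathbf{C}\mathbf{u} = \sigma_s^2 \ge 0$</span> and the non-negativity of the eigenvalues:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 4)) @ rng.normal(size=(4, 4))  # correlated data
xc = x - x.mean(axis=0)                                   # x - xbar
C = xc.T @ xc / len(x)                                    # E{(x-xbar)(x-xbar)^T}

u = rng.normal(size=4)
quad = u @ C @ u               # equals the variance of s = u^T (x - xbar)
eigs = np.linalg.eigvalsh(C)   # all eigenvalues should be non-negative
```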
|
number-theory | <p>That is, is it the case that for every natural number $n$, there is a prime number of $n$ digits? Or, is there some $n$ such that no primes of $n$-digits exist?</p>
<p>I am wondering this because of this Project Euler problem: <a href="https://projecteuler.net/problem=37">https://projecteuler.net/problem=37</a>. I find it very surprising that there are only a finite number of truncatable primes (and even more surprising that there are only 11)! However, I was thinking that result would make total sense if there is an $n$ such that there are no $n$-digit primes, since any $k$-digit truncatable prime implies the existence of at least one $n$-digit prime for every $n\leq k$.</p>
<p>If not, does anyone have insight into an intuitive reason why there are finitely many trunctable primes (and such a small number at that)? Thanks! </p>
| <p>Yes, there is always such a prime. Bertrand's postulate states that for any $k>3$, there is a prime between $k$ and $2k-2$. In particular, there is a prime between $10^n$ and $2\cdot 10^n$, and hence between $10^n$ and $10\cdot 10^n$, so there is always a prime with exactly $n+1$ digits.</p>
<p>To commemorate $50$ upvotes, here are some additional details: Bertrand's postulate has been proven, so what I've written here is not just conjecture. Also, the result can be strengthened in the following sense (by the prime number theorem): For any $\epsilon > 0$, there is a $K$ such that for any $k > K$, there is a prime between $k$ and $(1+\epsilon)k$. For instance, for $\epsilon = 1/5$, we have $K = 24$ and for $\epsilon = \frac{1}{16597}$ the value of $K$ is $2010759$ (numbers gotten from <a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate#Better_results">Wikipedia</a>).</p>
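<p>The claim is easy to confirm directly for small digit counts; a sketch using plain trial division (only feasible for small $n$, of course):</p>

```python
def is_prime(n: int) -> bool:
    # Simple trial division; fine for the small numbers used here.
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def smallest_prime_with_digits(d: int) -> int:
    # Bertrand's postulate guarantees a prime in (10^(d-1), 2 * 10^(d-1)),
    # so this search terminates well before reaching d+1 digits.
    n = 10 ** (d - 1)
    while not is_prime(n):
        n += 1
    return n

first_primes = [smallest_prime_with_digits(d) for d in range(1, 8)]
```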
| <p>While the answer using Bertrand's postulate is correct, it may be misleading. Since it only guarantees one prime between $N$ and $2N$, you might expect only three or four primes with a particular number of digits. This is very far from the truth.</p>
<p>The primes do become scarcer among larger numbers, but only very gradually. An important result dignified with the name of the "Prime Number Theorem" says (roughly) that the probability of a random number of around the size of $N$ being prime is approximately $1/\ln(N)$.</p>
<p>To take a concrete example, for $N = 10^{22}$, $1/\ln(N)$ is about $0.02$, so one would expect only about $2\%$ of $22$-digit numbers to be prime. In some sense, $2\%$ is small, but since there are $9\cdot 10^{21}$ numbers with $22$ digits, that means about $1.8\cdot 10^{20}$ of them are prime; not just three or four! (In fact, there are exactly $180,340,017,203,297,174,362$ primes with $22$ digits.)</p>
<p>In short, the number of $n$-digit numbers increases with $n$ much faster than the density of primes decreases, so the number of $n$-digit primes increases rapidly as $n$ increases.</p>
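<p>The $1/\ln(N)$ density is easy to observe numerically; a sketch (the window around $N = 10^6$ is an arbitrary illustrative choice):</p>

```python
import math

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for f in range(2, math.isqrt(n) + 1):
        if n % f == 0:
            return False
    return True

# Fraction of primes in a window near N, versus the PNT estimate 1/ln(N).
N = 10 ** 6
window = range(N, N + 10_000)
density = sum(is_prime(n) for n in window) / len(window)
estimate = 1 / math.log(N)     # about 0.072
```

<p>The observed density sits close to the estimate even over a fairly short window.</p>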
|
probability | <p>In competitive Pokémon-play, two players pick a team of six Pokémon out of the 718 available. These are picked independently, that is, player $A$ is unaware of player $B$'s choice of Pokémon. Some online servers let the players see the opponent's team before the match, allowing the player to change the order of its Pokémon. (Only the first matters, as this is the one that will be sent into the match first. After that, the players may switch between the chosen six freely, as explained below.) Each Pokémon is assigned four moves out of a list of moves that may or may not be unique for that Pokémon. There are currently 609 moves in the move-pool. Each move is assigned a certain type, and may be more or less effective against Pokémon of certain types. However, a Pokémon may have more than one type. In general, move effectiveness is given by $0.5\times$, $1\times$ and $2\times$. However, there are exceptions to this rule. Ferrothorn, a dual-type Pokémon of steel and grass, will take $4\times$ damage against fire moves, since both of its types are weak against fire. Each move has a certain probability of working.</p>
<p>In addition, there are moves with other effects than direct damage. For instance, a move may increase one's attack, decrease your opponent's attack, or add a status deficiency on your opponent's Pokémon, such as making it fall asleep. This will make the Pokémon unable to move with a relatively high probability. If it is able to move, the status of "asleep" is lifted. Furthermore, each Pokémon has a "Nature" which increases one stat (out of Attack, Defense, Special Attack, Special Defense, Speed), while decreases another. While no longer necessary for my argument, one could go even deeper with things such as IV's and EV's for each Pokémon, which also affects its stats.</p>
<p>A player has won when all of its opponents Pokémon are out of play. A player may change the active Pokémon freely. (That is, the "battles" are 1v1, but the Pokémon may be changed freely.)</p>
<p>Has there been any serious mathematical research towards competitive Pokémon play? In particular, has there been proved that there is always a best strategy? What about the number of possible "positions"? If there always is a best strategy, can one evaluate the likelihood of one team beating the other, given best play from both sides? (As is done with chess engines today, given a certain position.)</p>
<p>EDIT: For the sake of simplicity, I think it is a good idea to consider two positions equal when </p>
<p>1) Both positions have identical teams in terms of Pokémon. (Natures, IVs, EVs and stats are omitted.) As such, one can create a one-to-one correspondence between the set of $12$ Pokémon in position $A$ and the $12$ in position $B$ by mapping $a_A \mapsto a_B$, where $a_A$ is Pokémon $a$ in position $A$.</p>
<p>2) $a_A$ and $a_B$ have the same moves for all $a\in A, B$.</p>
| <p>Has there been serious <em>research</em>? Probably not. Have there been modeling efforts? Almost certainly, and they probably range anywhere from completely ham-handed to somewhat sophisticated.</p>
<p>At its core, the game is finite; there are two players and a large but finite set of strategies. As such, existence of a mixed-strategy equilibrium is guaranteed. This is the result of Nash, and is actually an interesting application of the Brouwer Fixed-Point theorem.</p>
<p>That said, the challenge isn't really in the math; if you could set up the game, it's pretty likely that you could solve it using some linear programming approach. The challenge is in modeling the payoff, capturing the dynamics and, to some degree (probably small), handling the few sources of intrinsic uncertainty (i.e. uncertainty generated by randomness, such as hit chance).</p>
<p>Really, though, this is immaterial since the size of the action space is so large as to be basically untenable. LP solutions suffer a curse of dimensionality -- the more actions in your discrete action space, the more things the algorithm has to look at, and hence the longer it takes to solve.</p>
<p>Because of this, most tools that people used are inherently Monte Carlo-based -- simulations are run over and over, with new random seeds, and the likelihood of winning is measured statistically.</p>
<p>These Monte Carlo methods have their down-sides, too. Certain player actions, such as switching your Blastoise for your Pikachu, are deterministic decisions. But we've already seen that the action space is too large to prescribe determinism in many cases. Handling this in practice becomes difficult. You could treat this as a random action with some probability (even though in the real world it is not random at all), and increase your number of Monte Carlo runs, or you could apply some heuristic, such as "swap to Blastoise if the enemy type is fire and my current pokemon is under half-health." However, writing these heuristics relies on an assumption that your breakpoints are nearly-optimal, and it's rarely actually clear that such is the case.</p>
<p>As a result, games like Pokemon are interesting because optimal solutions are difficult to find. If there were 10 pokemon and 20 abilities, it would not be so fun. The mathematical complexity, if I were to wager, is probably greater than chess, owing simply to the size of the action space and the richer dynamics of the measurable outcomes. This is one of the reasons the game and the community continue to be active: people find new ideas and new concepts to explore.</p>
<p>Also, the company making the game keeps making new versions. That helps.</p>
<hr>
<p>A final note: one of the challenges in the mathematical modeling of the game dynamics is that the combat rules are very easy to implement programmatically, but somewhat more difficult to cleanly describe mathematically. For example, one attack might do 10 damage out front, and then 5 damage per round for 4 rounds. Other attacks might have cooldowns, and so forth. This is easy to implement in code, but more difficult to write down a happy little equation for. As such, it's a bit more challenging to do things like try to identify gradients etc. analytically, although it could be done programmatically as well. It would be an interesting application for automatic differentiation, as well.</p>
| <p>I think it's worth pointing out that even stripping away most of the complexity of the game still leaves a pretty hard problem. </p>
<p>The very simplest game that can be said to bear any resemblance to Pokemon is rock-paper-scissors (RPS). (Imagine, for example, that there are only three Pokemon - let's arbitrarily call them Squirtle, Charmander, and Bulbasaur - and that Squirtle always beats Charmander, Charmander always beats Bulbasaur, and Bulbasaur always beats Squirtle.)</p>
<p>Already it's unclear what "best strategy" means here. There is a unique <a href="http://en.wikipedia.org/wiki/Nash_equilibrium">Nash equilibrium</a> given by randomly playing Squirtle, Charmander, or Bulbasaur with probability exactly $\frac{1}{3}$ each, but in general just because there's a Nash equilibrium, even a unique Nash equilibrium, doesn't mean that it's the strategy people will actually gravitate to in practice. </p>
<p>There is in fact a <a href="http://www.chessandpoker.com/rps_rock_paper_scissors_strategy.html">professional RPS tournament scene</a>, and in those tournaments nobody is playing the Nash equilibrium because nobody can actually generate random choices with probability $\frac{1}{3}$; instead, everyone is playing some non-Nash equilibrium strategy, and if you want to play to win (not just win $\frac{1}{3}$ of the time, which is the best you can hope for playing the Nash equilibrium) you'll instead play strategies that fare well against typical strategies you'll encounter. Two examples:</p>
<ul>
<li>Novice male players tend to open with rock, and to fall back on it when they're angry or losing, so against such players you should play paper.</li>
<li>Novice RPS players tend to avoid repeating their plays too often, so if a novice player's played rock twice you should expect that they're likely to play scissors or paper next.</li>
</ul>
<p>There is even an <a href="http://www.rpscontest.com/">RPS programming competition scene</a> where people design algorithms to play repeated RPS games against other algorithms, and nobody's playing the Nash equilibrium in these games unless they absolutely have to. Instead the idea is to try to predict what the opposing algorithm is going to do next while trying to prevent your opponent from predicting what you'll do next. <a href="http://ofb.net/~egnor/iocaine.html">Iocaine Powder</a> is a good example of the kind of things these algorithms get up to. </p>
<p>So, even the very simple-sounding question of figuring out the "best strategy" in RPS is in some sense open, and people can dedicate a lot of time both to figuring out good strategies to play against other people and good algorithms to play against other algorithms. I think it's safe to say that Pokemon is strictly more complicated than RPS, enough so even this level of analysis is probably impossible. </p>
<p><strong>Edit:</strong> It's also worth pointing out that another way that Pokemon differs from a game like chess is that it is <em>imperfect information</em>: you don't know everything about your opponent's Pokemon (movesets, EV distribution, hold items, etc.) even if you happen to know what they are. That means both players should be trying to predict these hidden variables about each other's Pokemon while trying to trick the other player into making incorrect predictions. My understanding is that a common strategy for doing this is to use Pokemon that have very different viable movesets and try to trick your opponent into thinking you're playing one moveset when you're actually playing another. So in this respect Pokemon resembles poker more than it does chess. </p>
|
matrices | <p>This is a question from the free Harvard online abstract algebra <a href="http://www.extension.harvard.edu/open-learning-initiative/abstract-algebra" rel="noreferrer">lectures</a>. I'm posting my solutions here to get some feedback on them. For a fuller explanation, see <a href="https://math.stackexchange.com/questions/98291/autv-is-isomorphic-to-s-3">this</a> post.</p>
<p>This problem is from assignment 4.</p>
<blockquote>
<p>Prove that the transpose of a permutation matrix $P$ is its inverse.</p>
</blockquote>
<p>A permutation matrix $P$ has a single 1 in each row and a single 1 in each column, all other entries being 0. So column $j$ has a single 1 at position $e_{i_jj}$. $P$ acts by moving row $j$ to row $i_j$ for each column $j$. Taking the transpose of $P$ moves each 1 entry from $e_{i_jj}$ to $e_{ji_j}$. Then $P^t$ acts by moving row $i_j$ to row $j$ for each row $i_j$. Since this is the inverse operation, $P^t=P^{-1}$.</p>
<p>Again, I welcome any critique of my reasoning and/or my style as well as alternative solutions to the problem.</p>
<p>Thanks.</p>
| <p>A direct computation is also fine:
$$(PP^T)_{ij} = \sum_{k=1}^n P_{ik} P^T_{kj} = \sum_{k=1}^n P_{ik} P_{jk}$$
but $P_{ik}$ is usually 0, and so $P_{ik} P_{jk}$ is usually 0. The only time $P_{ik}$ is nonzero is when it is 1, but then there are no other $i' \neq i$ such that $P_{i'k}$ is nonzero ($i$ is the only row with a 1 in column $k$). In other words,
$$\sum_{k=1}^n P_{ik} P_{jk} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}$$
and this is exactly the formula for the entries of the identity matrix, so
$$PP^T = I$$</p>
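<p>This computation is easy to check numerically as well; a small sketch (the permutation and its size are arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
perm = rng.permutation(6)
P = np.eye(6)[perm]       # row j of P is the standard basis vector e_{perm[j]}

identity_check = P @ P.T  # should be exactly the identity matrix
```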
| <p>Another way to prove it is to realize that any permutation matrix is the product of <em>elementary</em> permutations, where by <em>elementary</em> I mean a permutation that swaps two entries. Since in an identity matrix swapping $i$ with $j$ in a row is the same as swapping $j$ with $i$ in a column, such matrix is symmetric and it coincides with its inverse. Then, assuming $P=P_1\cdots P_k$, with $P_1,\ldots,P_k$ <em>elementary</em>, we have</p>
<p>$$
P^{-1} = (P_1\cdots P_k)^{-1}=P_k^{-1}\cdots P_1^{-1}=P_k\cdots P_1=P_k^t\cdots P_1^t = (P_1\cdots P_k)^t=P^t
$$</p>
|
logic | <p>In fact I don't understand the meaning of the word "metamathematics". I just want to know, for example, why can we use mathematical induction in the proof of logical theorems, like The Deduction Theorem, or even some more fundamental proposition like "every formula has equal numbers of left and right brackets"?</p>
<p>What exactly can we use when talking about metamathematics? If induction is OK, then how about the axiom of choice/determinacy? Can I use the axiom of choice on a collection of sets of formulas? (Of course it may be meaningless. By the way, I don't understand why we can talk about a "set" of formulas either.)</p>
<p>I have asked one of my classmates about these, and he told me he had stopped thinking about this kind of stuff. I feel like giving up too......</p>
| <p>This is not an uncommon confusion for students that are introduced to formal logic for the first time. It shows that you have a slightly wrong expectations about what metamathematics is for and what you'll get out of it.</p>
<p>You're probably expecting that it <em>ought to</em> go more or less like in first-year real analysis, which started with the lecturer saying something like</p>
<blockquote>
<p>In high school, your teacher demanded that you take a lot of facts about the real numbers on faith. Here is where we stop taking those facts on faith and instead prove from first principles that they're true.</p>
</blockquote>
<p>This led to a lot of talk about axioms and painstaking quasi-formal proofs of things you already knew, and at the end of the month you were able to reduce everything to a small set of axioms including something like the supremum principle. Then, if you were lucky, Dedekind cuts or Cauchy sequences were invoked to convince you that if you believe in the counting numbers and a bit of set theory, you should also believe that there is <em>something</em> out there that satisfies the axioms of the real line.</p>
<p>This makes it natural to expect that formal logic will work in the same way:</p>
<blockquote>
<p>As undergraduates, your teachers demanded that you take a lot of proof techniques (such as induction) on faith. Here is where we stop taking them on faith and instead prove from first principles that they're valid.</p>
</blockquote>
<p>But <em><strong>that is not how it goes</strong></em>. You're <em>still</em> expected to believe in ordinary mathematical reasoning for whichever reason you already did -- whether that's because they make intuitive sense to you, or because you find that the conclusions they lead to usually work in practice when you have a chance to verify them, or simply because authority says so.</p>
<p>Instead, metamathematics is a quest to be precise about <em>what it is</em> you already believe in, such that we can use <em>ordinary mathematical reasoning</em> <strong>about</strong> those principles to get to know interesting things about the limits of what one can hope to prove and how different choices of what to take on faith lead to different things you can prove.</p>
<p>Or, in other words, the task is to use ordinary mathematical reasoning to build a <strong>mathematical model</strong> of ordinary mathematical reasoning itself, which we can use to study it.</p>
<p>Since metamathematicians are interested in knowing <em>how much</em> taken-on-faith foundation is necessary for this-or-that ordinary-mathematical argument to be made, they also tend to apply this interest to <em>their own</em> reasoning about the mathematical model. This means they are more likely to try to avoid high-powered reasoning techniques (such as general set theory) when they can -- not because such methods are <em>forbidden</em>, but because it is an interesting fact that they <em>can</em> be avoided for such-and-such purpose.</p>
<p>Ultimately though, it is recognized that there are <em>some</em> principles that are so fundamental that we can't really do anything without them. Induction of the natural numbers is one of these. That's not a <em>problem</em>: it is just an interesting (empirical) fact, and after we note down that fact, we go on to use it when building our model of ordinary-mathematical-reasoning.</p>
<p>After all, ordinary mathematical reasoning <em>already exists</em> -- and did so for thousands of years before formal logic was invented. We're not trying to <em>build</em> it here (the model is not the thing itself), just to better understand the thing we already have.</p>
<hr />
<p>To answer your concrete question: Yes, you can ("are allowed to") use the axiom of choice if you need to. It is good form to keep track of the fact that you <em>have</em> used it, such that you have an answer if you're later asked, "the metamathematical argument you have just carried out, can that itself be formalized in such-and-such system?" Formalizing metamathematical arguments within your model has proved to be a very powerful (though also confusing) way of establishing certain kinds of results.</p>
<p>You can use the axiom of determinacy too, if that floats your boat -- so long as you're aware that doing so is not really "ordinary mathematical reasoning", so it becomes doubly important to disclose faithfully that you've done so when you present your result (lest someone tries to combine it with something <em>they</em> found using AC instead, and get nonsense out of the combination).</p>
| <p>This is not at all intended to be an answer to your question. (I like Henning Makholm's answer above.) But I thought you might be interested to hear <a href="https://en.wikipedia.org/wiki/Thoralf_Skolem" rel="noreferrer">Thoralf Skolem's</a> remarks on this issue, because they are quite pertinent—in particular one of his points goes exactly to your question about proving that every formula has equal numbers of left and right brackets—but they are much too long to put in a comment.</p>
<blockquote>
<p>Set-theoreticians are usually of the opinion that the notion of integer should be defined and that the principle of mathematical induction should be proved. But it is clear that we cannot define or prove ad infinitum; sooner or later we come to something that is not definable or provable. Our only concern, then, should be that the initial foundations be something immediately clear, natural, and not open to question. This condition is satisfied by the notion of integer and by inductive inferences, but it is decidedly not satisfied by <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory" rel="noreferrer">set-theoretic axioms of the type of Zermelo's</a> or anything else of that kind; if we were to accept the reduction of the former notions to the latter, the set-theoretic notions would have to be simpler than mathematical induction, and reasoning with them less open to question, but this runs entirely counter to the actual state of affairs.</p>
<p>In a paper (1922) <a href="https://en.wikipedia.org/wiki/David_Hilbert" rel="noreferrer">Hilbert</a> makes the following remark about <a href="https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9" rel="noreferrer">Poincaré's</a> assertion that the principle of mathematical induction is not provable: “His objection that this principle could not be proved in any way other than by mathematical induction itself is unjustified and is refuted by my theory.” But then the big question is whether we can prove this principle by means of simpler principles and <em>without using any property of finite expressions or formulas that in turn rests upon mathematical induction or is equivalent to it</em>. It seems to me that this latter point was not sufficiently taken into consideration by Hilbert. For example, there is in his paper (bottom of page 170), for a lemma, a proof in which he makes use of the fact that in any arithmetic proof in which a certain sign occurs that sign must necessarily occur for a first time. Evident though this property may be on the basis of our perceptual intuition of finite expressions, a formal proof of it can surely be given only by means of mathematical induction. In set theory, at any rate, we go to the trouble of proving that every ordered finite set is well-ordered, that is, that every subset has a first element. Now why should we carefully prove this last proposition, but not the one above, which asserts that the corresponding property holds of finite arithmetic expressions occurring in proofs? Or is the use of this property not equivalent to an inductive inference?</p>
<p>I do not go into Hilbert's paper in more detail, especially since I have seen only his first communication. I just want to add the following remark: It is odd to see that, since the attempt to find a foundation for arithmetic in set theory has not been very successful because of logical difficulties inherent in the latter, attempts, and indeed very contrived ones, are now being made to find a different foundation for it—as if arithmetic had not already an adequate foundation in inductive inferences and recursive definitions.</p>
</blockquote>
<p>(Source: Thoralf Skolem, “Some remarks on axiomatized set theory”, address to the Fifth Congress of Scandinavian Mathematicians, August 1922. English translation in <em>From Frege to Gödel</em>, p299–300. Jean van Heijenoort (ed.), Harvard University Press, 1967.)</p>
<p>I think it is interesting to read this in the light of Henning Makholm's excellent answer, which I think is in harmony with Skolem's concerns. I don't know if Hilbert replied to Skolem.</p>
|
linear-algebra | <p>A polynomial $p$ over a field $k$ is called irreducible if $p=fg$ for polynomials $f,g$ implies $f$ or $g$ are constant. One can consider the determinant of an $n\times n$ matrix to be a polynomial in $n^2$ variables. Does anyone know of a slick way to prove this polynomial is irreducible?</p>
<p>It feels like this should follow quite easily from basic properties of the determinant or an induction argument, but I cannot think of a nice proof. One consequence of this fact is that $GL_n$ is the complement of a hypersurface in $M_{n}$. Thanks.</p>
| <p>Denote by <span class="math-container">$p$</span> the determinant polynomial. Observe that <span class="math-container">$p$</span> is of degree one in <span class="math-container">$x_{ij}$</span> for every <span class="math-container">$(i,j)$</span>.</p>
<p>Now we can prove that <span class="math-container">$p$</span> is irreducible. Suppose <span class="math-container">$p=fg$</span>. Consider <span class="math-container">$x_{11}$</span>, and suppose <span class="math-container">$x_{11}$</span> appears in <span class="math-container">$f$</span>; then <span class="math-container">$f$</span> is of degree one in <span class="math-container">$x_{11}$</span> and <span class="math-container">$g$</span> is of degree zero in <span class="math-container">$x_{11}$</span>. Now consider <span class="math-container">$x_{1j}$</span>: it must also appear in <span class="math-container">$f$</span>, for otherwise <span class="math-container">$g$</span> would be of degree one in <span class="math-container">$x_{1j}$</span> and <span class="math-container">$f$</span> of degree zero in <span class="math-container">$x_{1j}$</span>, and then the equality
<span class="math-container">$$fg=(ax_{11}+b)(cx_{1j}+d)=acx_{11}x_{1j}+bcx_{1j}+adx_{11}+bd\in \mathbb{F}[x_{11}, x_{1j}, \dots]$$</span>
leads to a contradiction. So all the <span class="math-container">$x_{1j}$</span>, <span class="math-container">$j=1,\ldots,n$</span>, appear in <span class="math-container">$f$</span>. Similarly, the <span class="math-container">$x_{j1}$</span> all appear in <span class="math-container">$f$</span>. And since <span class="math-container">$x_{j1}$</span> appears in <span class="math-container">$f$</span>, it follows that the <span class="math-container">$x_{jk}$</span> appear in <span class="math-container">$f$</span>. Finally, all <span class="math-container">$x_{ij}$</span> appear in <span class="math-container">$f$</span>, and <span class="math-container">$g$</span> is a constant. We are done!</p>
<hr />
<p><strong>Edit:</strong>
Contradiction: view <span class="math-container">$p$</span> as a polynomial in <span class="math-container">$x_{11},x_{1j}$</span>; then <span class="math-container">$$p=x_{11}h_1+x_{1j}h_2+h_3\in \mathbb{F}[x_{11},x_{1j}, \dots],$$</span> where <span class="math-container">$h_1,h_2,h_3 \in \mathbb{F}[\{x_{ij}\}\mid x_{ij}\neq x_{11},x_{1j}]$</span>, i.e., they are "constant" with respect to <span class="math-container">$x_{11},x_{1j}$</span>. But <span class="math-container">$$fg=acx_{11}x_{1j}+bcx_{1j}+adx_{11}+bd,$$</span> where <span class="math-container">$0\neq ac \in \mathbb{F}[\{x_{ij}\}\mid x_{ij}\neq x_{11},x_{1j}]$</span> and <span class="math-container">$bc,ad,bd$</span> are "constant" with respect to <span class="math-container">$x_{11},x_{1j}$</span> (all of this follows from the assumption that <span class="math-container">$f$</span> is of degree one in <span class="math-container">$x_{11}$</span> and of degree zero in <span class="math-container">$x_{1j}$</span>, while <span class="math-container">$g$</span> is of degree one in <span class="math-container">$x_{1j}$</span> and of degree zero in <span class="math-container">$x_{11}$</span>). Since no monomial of the determinant contains two entries from the same row, <span class="math-container">$p$</span> has no <span class="math-container">$x_{11}x_{1j}$</span> term, so <span class="math-container">$p$</span> cannot equal <span class="math-container">$fg$</span>.</p>
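<p>As a concrete sanity check (not part of the argument above), one can ask a computer algebra system to factor the determinant polynomial for a small <span class="math-container">$n$</span>; a sketch in Python, assuming SymPy is available:</p>

```python
import sympy as sp

# build the generic 3x3 matrix in 9 indeterminates and factor its determinant
n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"x{i + 1}{j + 1}"))
p = sp.expand(X.det())
coeff, factors = sp.factor_list(p)
# a single irreducible factor of multiplicity one means p is irreducible
assert coeff == 1 and len(factors) == 1 and factors[0][1] == 1
```

<p>The same check runs unchanged for other small values of <span class="math-container">$n$</span>, although the expansion grows as <span class="math-container">$n!$</span> terms.</p>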
| <p><a href="https://math.stackexchange.com/a/147085/39797">This answer</a> is basically a proof from M.Bocher "Introduction to higher algebra" (Dover 2004)
on pages <a href="http://books.google.com/books?id=3XBQm4uWkiwC&lpg=PA176&pg=PA176#v=onepage&q&f=false" rel="nofollow noreferrer" title="Google Books">176-7</a>.</p>
<p>A better link is <a href="https://archive.org/details/introductiontoh00bcgoog/page/n190/mode/2up" rel="nofollow noreferrer">here</a></p>
<p>Let me sketch what's there. Argue by contradiction. Let the determinant <span class="math-container">$D=\det(a_{ij})_{i,j}$</span>
be the product <span class="math-container">$D=fg$</span> of polynomials <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, each non-constant in the variables <span class="math-container">$a_{ij}$</span>.</p>
<p>Note that <span class="math-container">$D$</span> involves <span class="math-container">$a_{ij}$</span> for any <span class="math-container">$1\leq i,j\leq n$</span>, and it is linear in <span class="math-container">$a_{ij}$</span>, as can be seen by expanding <span class="math-container">$D$</span> w.r.t. the <span class="math-container">$i$</span>-th row. W.l.o.g. <span class="math-container">$f$</span> is linear in <span class="math-container">$a_{ii}$</span>, and <span class="math-container">$g$</span> does not depend on <span class="math-container">$a_{ii}$</span>. Now, <span class="math-container">$g$</span> cannot depend on <span class="math-container">$a_{ij}$</span> for any <span class="math-container">$j\neq i$</span>, as this would imply that <span class="math-container">$D$</span> contains a term divisible by <span class="math-container">$a_{ii}a_{ij}$</span> - impossible by definition of the determinant. Thus <span class="math-container">$g$</span> does not depend on any <span class="math-container">$a_{ij}$</span>, and, by symmetry, <span class="math-container">$a_{ji}$</span>, <span class="math-container">$1\leq j\leq n$</span>.
Thus <span class="math-container">$f$</span> depends linearly on all the <span class="math-container">$a_{ij}$</span>, and <span class="math-container">$a_{ji}$</span>, <span class="math-container">$1\leq j\leq n$</span>.</p>
<p>Therefore if <span class="math-container">$g$</span> is not constant it must depend linearly on <span class="math-container">$a_{jj}$</span>, for some <span class="math-container">$j\neq i$</span>. But then <span class="math-container">$a_{ij}$</span> must occur in both <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, the final contradiction.</p>
|
probability | <p>I am going to give a presentation about the <em>indicator functions</em>, and I am looking for some interesting examples to include. The examples can be even an overkill solution since I am mainly interested in demonstrating the creative ways of using it.</p>
<p>I would be grateful if you share your examples. The diversity of answers is appreciated.</p>
<p>To give you an idea, here are my examples. Most of my examples are in probability and combinatorics so examples from other fields would be even better.</p>
<ol>
<li><p>Calculating the expected value of a random variable using linearity of expectations. Most famously the number of fixed points in a random permutation.</p>
</li>
<li><p>Showing how <span class="math-container">$|A \Delta B| = |A|+|B|-2|A \cap B|$</span> and <span class="math-container">$(A-B)^2 = A^2+B^2-2AB$</span> are related.</p>
</li>
<li><p>An overkill proof for <span class="math-container">$\sum \deg(v) = 2|E|$</span>.</p>
</li>
</ol>
| <p>Whether it's overkill is open to debate, but I feel that the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="noreferrer">inclusion-exclusion principle</a> is best seen through the prism of indicator functions.</p>
<p>Basically, the classical formula is just what you get numerically from the (clear) identity:
<span class="math-container">$$ 1 - 1_{\bigcup_{i=1}^n A_i} = 1_{\bigcap_{i=1}^n \overline A_i} = \prod_{i=1}^n (1-1_{A_i}) = \sum_{J \subseteq [\![ 1,n ]\!]} (-1)^{|J|} 1_{\bigcap_{j\in J} A_j}.$$</span></p>
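<p>As a quick numerical illustration (a sketch, not part of the proof), the signed sum that falls out of the indicator identity can be compared with the size of the union on random finite sets:</p>

```python
import itertools
import random

random.seed(0)
universe = range(30)
sets = [set(random.sample(universe, 10)) for _ in range(3)]

# |A1 ∪ A2 ∪ A3| via the signed sum over nonempty index sets J
union_size = len(set().union(*sets))
signed = 0
for r in range(1, len(sets) + 1):
    for J in itertools.combinations(range(len(sets)), r):
        inter = set(universe)
        for j in J:
            inter &= sets[j]
        signed += (-1) ** (r + 1) * len(inter)
assert union_size == signed
```

<p>The seed and set sizes here are illustrative choices; the identity holds for any finite family.</p>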
| <p>Indicator functions are often very useful in conjunction with Fubini’s theorem.</p>
<p>Suppose you want to show:
<span class="math-container">$$\newcommand\dif{\mathop{}\!\mathrm{d}}
\int_Y \int_{X_y} f(x, y) \dif x \dif y = \int_X \int_{Y_x} f(x,y) \dif y \dif x$$</span>
where the two subsets <span class="math-container">$X_y \subseteq X$</span> and <span class="math-container">$Y_x \subseteq Y$</span> describe the same relation <span class="math-container">$x \in X_y \iff y \in Y_x$</span>.</p>
<p>Because of the variable in the inner integral’s domain, you cannot use Fubini right away to permutate the two sums directly.</p>
<p>But you can do it if you use an indicator function to describe the set <span class="math-container">$$Z = \left\{ (x,y) \in X \times Y \mid x \in X_y \right\} = \left\{ (x,y) \in X \times Y \mid y \in Y_x \right\}.$$</span></p>
<p>Finally:
<span class="math-container">\begin{align*}
\int_Y \int_{X_y} f(x, y) \dif x \dif y & = \int_Y \int_X 1_Z(x,y) f(x,y) \dif x \dif y \\
& = \int_X \int_Y 1_Z(x,y) f(x,y) \dif y \dif x \\
& = \int_X \int_{Y_x} f(x,y) \dif y \dif x.
\end{align*}</span></p>
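<p>A discrete sketch of the same manoeuvre: the indicator of the region makes the inner summation range independent of the outer variable, so the two iterated sums can be swapped freely:</p>

```python
# discrete analogue: sum f over the region {x <= y} in both orders,
# using an indicator of the region to decouple the inner range from
# the outer variable
X = Y = range(10)
f = lambda x, y: x * y + 1

iterated_1 = sum(sum(f(x, y) for x in X if x <= y) for y in Y)
flat = sum((1 if x <= y else 0) * f(x, y) for x in X for y in Y)
iterated_2 = sum(sum(f(x, y) for y in Y if x <= y) for x in X)
assert iterated_1 == flat == iterated_2
```

<p>The function and region here are illustrative; any region expressible both as <span class="math-container">$x \in X_y$</span> and <span class="math-container">$y \in Y_x$</span> works the same way.</p>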
|
combinatorics | <p>I read about this game as a kid, but my maths was never up to solving it:</p>
<p>The score starts at zero. Take a shuffled pack of cards and keep dealing face up until you reach the first Ace, at which the score becomes 1. Deal on until you reach the next 2, at which the score becomes 2, although you may not reach this if all the 2s came before the first Ace. If you reach 2, deal on until you reach the first 3, at which, if you reach it, the score becomes 3, and so on. What is the most likely final score? And how do you calculate the probability of any particular score? </p>
<p>I once wrote a program that performed this routine millions of times on randomised packs with different numbers of suits up to about 10. To my surprise, the most likely score for any pack seemed empirically to always be the same as the number of suits in the pack. I would love to see this proved, though it is beyond my powers. </p>
| <p>An average score around the number of suits doesn't surprise me. Intuitively, it takes $13$ cards to get the first ace, which uses up one suit's worth of cards. Then $13$ more cards should get you a $2$, etc. Clearly this is a very rough argument, and when the number of suits gets very high it must break down, since the score can never exceed 13.</p>
<p>You can exactly calculate the chance of any given score, but it gets messier as you go up. To get $1,$ you have to have all the $2$'s before the first $1$. You can ignore the rest of the cards. If there are $n$ suits, the probability of this is $\frac {n!^2}{(2n)!}$. To get $2$, you need all the $3$'s before the first $2$ that is after the first $1$. You can enumerate the allowable orders of $1$'s, $2$'s, and $3$'s and calculate the probability.</p>
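<p>The empirical observation in the question is easy to reproduce. Here is a minimal Monte Carlo sketch in Python (the helper name, seed and trial count are illustrative choices, not the original program):</p>

```python
import random
from collections import Counter

def final_score(n_suits, ranks=13):
    # deck with n_suits copies of each rank; deal and chase 1, then 2, ...
    deck = [r for r in range(1, ranks + 1) for _ in range(n_suits)]
    random.shuffle(deck)
    target = 1
    for card in deck:
        if card == target:
            target += 1
            if target > ranks:
                break
    return target - 1

random.seed(1)
counts = Counter(final_score(4) for _ in range(20000))
most_likely = counts.most_common(1)[0][0]
```

<p>With four suits, <code>most_likely</code> comes out at the number of suits, matching the question's empirical finding.</p>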
| <p>For a small number of suits, simulation confirms the hypothesis. Here is the result for $n=4$ suits and $N=10^6$ trials:</p>
<p><img src="https://i.sstatic.net/mI0zI.jpg" alt="$n=4$, $N=10^6$"></p>
<p>But for a greater number of suits it seems to become only a local maximum, the largest value being 13. For $n=8$ and $n=10$ suits, $N=10^6$, it is</p>
<p><img src="https://i.sstatic.net/nb3zY.jpg" alt="enter image description here"></p>
<p><img src="https://i.sstatic.net/qzM5k.jpg" alt="$n=4$, $N=10^6$"></p>
|
differentiation | <p>I have just started learning about differential equations, as a result I started to think about this question but couldn't get anywhere. So I googled and wasn't able to find any particularly helpful results. I am more interested in the reason or method rather than the actual answer. Also I do not know if there even is a solution to this but if there isn't I am just as interested to hear why not.</p>
<p>Is there a solution to the differential equation:</p>
<p>$$f(x)=\sum_{n=1}^\infty f^{(n)}(x)$$</p>
| <p>$f(x)=\exp(\frac{1}{2}x)$ is such a function: since $f^{(n)}(x)=2^{-n} f(x)$, you have</p>
<p>$$\sum_{n=1}^\infty f^{(n)}(x)=\sum_{n=1}^\infty 2^{-n}f(x)=(2-1)f(x)=f(x)$$</p>
<p>This is the only function (up to a constant prefactor) for which $\sum_{n}f^{(n)}$ and its derivatives converge uniformly (on compacta), as $$f'=\sum_{n=1}^\infty f^{(n+1)}=f-f'$$ follows from this assumption. But this is the same as $f-2 f'=0$, of which the only (real) solutions are $f(x)= C \exp{\frac{x}{2}}$ for some $C \in \mathbb R$.</p>
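<p>The partial sums can also be checked symbolically; a short sketch with SymPy, using the fact that the first <span class="math-container">$N$</span> derivatives of $\exp(x/2)$ sum to <span class="math-container">$(1-2^{-N})\exp(x/2)$</span>:</p>

```python
import sympy as sp

x = sp.symbols("x")
f = sp.exp(x / 2)
N = 25
# each derivative halves the coefficient, so the first N derivatives
# sum to (1 - 2**(-N)) * f, which converges to f as N grows
partial = sum(sp.diff(f, x, n) for n in range(1, N + 1))
assert sp.simplify(partial - (1 - sp.Rational(1, 2) ** N) * f) == 0
```

<p>The cutoff <span class="math-container">$N=25$</span> is an arbitrary illustrative choice.</p>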
| <p>Differentiate both sides to get $f'(x)=f''(x)+f^{(3)}(x)+...$ </p>
<p>Substituting this back, the starting equation becomes $f(x)=f'(x)+\left(f''(x)+f^{(3)}(x)+\dots\right)=f'(x)+f'(x)$, so $f(x)=2f'(x)$ </p>
<p>Multiply now both sides by $e^{-\frac{x}{2}}$ and this becomes<br>
$$[e^{-\frac{x}{2}}f(x)]'=0$$<br>
So $f(x)=ce^{\frac{x}{2}}$<br>
Done.</p>
|
matrices | <p>A Magic Square of order <span class="math-container">$n$</span> is an arrangement of <span class="math-container">$n^2$</span> numbers, usually distinct integers, in a square, such that the <span class="math-container">$n$</span> numbers in all rows, all columns, and both diagonals sum to the same constant.</p>
<p><img src="https://i.sstatic.net/iseMy.png" alt="a 3x3 magic square" /></p>
<p>How to prove that a normal <span class="math-container">$3\times 3$</span> magic square where the integers from 1 to 9 are arranged in some way, must have <span class="math-container">$5$</span> in its middle cell?</p>
<p>I have tried taking <span class="math-container">$a,b,c,d,e,f,g,h,i$</span> and solving equations to calculate <span class="math-container">$e$</span> but there are so many equations that I could not manage to solve them.</p>
| <p>The row, column, and diagonal sums must all be $15$, because the three disjoint rows together add up to $1+\ldots +9=45$. The sum of all four lines through the middle is therefore $60$ and is also $1+\ldots +9=45$ plus three times the middle number.</p>
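<p>This can also be confirmed by brute force: enumerating all arrangements of <span class="math-container">$1,\dots,9$</span> finds exactly the eight normal magic squares (one square up to rotation and reflection), all with <span class="math-container">$5$</span> in the middle. A sketch in Python:</p>

```python
from itertools import permutations

centers = set()
count = 0
for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
    # the three rows, three columns, and two diagonals
    lines = (a + b + c, d + e + f, g + h + i,
             a + d + g, b + e + h, c + f + i,
             a + e + i, c + e + g)
    if len(set(lines)) == 1:
        count += 1
        centers.add(e)
assert count == 8 and centers == {5}
```

<p>The common line sum is automatically $15$, since the three rows partition $1+\ldots+9=45$.</p>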
| <p>A <em>normal</em> <strong>magic square</strong> of order $3$ can be represented by a $3 \times 3$ matrix</p>
<p>$$\begin{bmatrix} x_{11} & x_{12} & x_{13}\\ x_{21} & x_{22} & x_{23}\\ x_{31} & x_{32} & x_{33}\end{bmatrix}$$</p>
<p>where $x_{ij} \in \{1, 2,\dots, 9\}$. Since $1 + 2 + \dots + 9 = 45$, the sum of the elements in each column, row and diagonal must be $15$, as mentioned in the other answers. Vectorizing the matrix, these $8$ equality constraints can be written in matrix form as follows</p>
<p>$$\begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1& 1 & 1\\ 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0\end{bmatrix} \begin{bmatrix} x_{11}\\ x_{21}\\ x_{31}\\ x_{12}\\ x_{22}\\ x_{32}\\ x_{13}\\ x_{23}\\ x_{33}\end{bmatrix} = \begin{bmatrix} 15\\ 15\\ 15\\ 15\\ 15\\ 15\\ 15\\ 15\end{bmatrix}$$</p>
<p>Note that each row of the matrix above has $3$ ones and $6$ zeros. We can guess a <strong>particular</strong> solution by visual inspection of the matrix. This particular solution is $(5,5,\dots,5)$, the $9$-dimensional vector whose nine entries are all equal to $5$. To find a <strong>homogeneous</strong> solution, we compute the RREF</p>
<p>$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & -1\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 & -2\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 2\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$$ </p>
<p>Thus, the <strong>general</strong> solution is of the form</p>
<p>$$\begin{bmatrix} 5\\ 5\\ 5\\ 5\\ 5\\ 5\\ 5\\ 5\\ 5\end{bmatrix} +
\begin{bmatrix} 0 & -1\\ -1 & 0\\ 1 & 1\\ 1 & 2\\ 0 & 0\\ -1 & -2\\ -1 & -1\\
1 & 0\\ 0 & 1\end{bmatrix} \eta$$</p>
<p>where $\eta \in \mathbb{R}^2$. Un-vectorizing, we obtain the solution set</p>
<p>$$\left\{ \begin{bmatrix} 5 & 5 & 5\\ 5 & 5 & 5\\ 5 & 5 & 5\end{bmatrix} + \eta_1 \begin{bmatrix} 0 & 1 & -1\\ -1 & 0 & 1\\ 1 & -1 & 0\end{bmatrix} + \eta_2 \begin{bmatrix} -1 & 2 & -1\\ 0 & 0 & 0\\ 1 & -2 & 1\end{bmatrix} : (\eta_1, \eta_2) \in \mathbb{R}^2 \right\}$$</p>
<p>which is a $2$-dimensional <strong>affine matrix space</strong> in $\mathbb{R}^{3 \times 3}$. For example, the magic square illustrated in the question can be decomposed as follows</p>
<p>$$\begin{bmatrix} 2 & 9 & 4\\ 7 & 5 & 3\\ 6 & 1 & 8\end{bmatrix} = \begin{bmatrix} 5 & 5 & 5\\ 5 & 5 & 5\\ 5 & 5 & 5\end{bmatrix} - 2 \begin{bmatrix} 0 & 1 & -1\\ -1 & 0 & 1\\ 1 & -1 & 0\end{bmatrix} + 3 \begin{bmatrix} -1 & 2 & -1\\ 0 & 0 & 0\\ 1 & -2 & 1\end{bmatrix}$$</p>
<p>Note that the sum of the elements in each column, row and diagonal of each of the two basis matrices is equal to $0$. Note also that the $(2,2)$-th entry of each of the two basis matrices is $0$. Hence, no matter what parameters $\eta_1, \eta_2$ we choose, the $(2,2)$-th entry of the normal magic square of order $3$ will be equal to $5$.</p>
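<p>The decomposition above is easy to verify numerically; a sketch with NumPy checking the example square and the two properties of the basis matrices used in the argument:</p>

```python
import numpy as np

J5 = np.full((3, 3), 5)
B1 = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])
B2 = np.array([[-1, 2, -1], [0, 0, 0], [1, -2, 1]])

# the decomposition of the example square given above
M = J5 - 2 * B1 + 3 * B2
expected = np.array([[2, 9, 4], [7, 5, 3], [6, 1, 8]])
assert (M == expected).all()

# every row, column and both diagonals of the basis matrices sum to 0,
# and their centre entries vanish -- hence the centre is always 5
for B in (B1, B2):
    assert B.sum(axis=0).tolist() == [0, 0, 0]
    assert B.sum(axis=1).tolist() == [0, 0, 0]
    assert np.trace(B) == 0 and np.trace(np.fliplr(B)) == 0
    assert B[1, 1] == 0
```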
|
game-theory | <p>Is there a bounded real-valued sequence with divergent Cesaro means (i.e. <em>not</em> Cesaro summable)?</p>
<p>More specifically, is there a bounded sequence $\{w_k\}\in l^\infty$ such that $$\lim_{M\rightarrow\infty} \frac{\sum_{k=1}^M w_k}{M}$$ does <em>not</em> exist?</p>
<p>I encountered this problem while studying the <em>limit of average payoffs</em> criterion for ranking payoff sequences in infinitely repeated games.</p>
| <p>Consider <span class="math-container">$1,-1,-1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1,\cdots$</span> (one <span class="math-container">$1$</span>, two <span class="math-container">$-1$</span>, four <span class="math-container">$1$</span>, eight <span class="math-container">$-1$</span>, ...) Then
<span class="math-container">$$\frac{1-2+2^2-2^3+\cdots+(-2)^n}{1+2+2^2+\cdots+2^n}=\frac{1-(-2)^{n+1}}{3(2^{n+1}-1)}$$</span>
These quotients diverge: they are exactly <span class="math-container">$-1/3$</span> for odd <span class="math-container">$n$</span> and exceed <span class="math-container">$1/3$</span> for even <span class="math-container">$n$</span>. So <span class="math-container">$(\sum_{k\le M}a_k)/M$</span> has a divergent subsequence, which implies that the Cesaro mean of <span class="math-container">$a_n$</span> does not exist.</p>
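<p>A short numerical sketch of this construction, computing the Cesaro averages at the ends of the blocks (the truncation depth is an illustrative choice):</p>

```python
from itertools import islice

def blocks():
    # one +1, two -1, four +1, eight -1, ...
    sign, length = 1, 1
    while True:
        for _ in range(length):
            yield sign
        sign, length = -sign, 2 * length

terms = list(islice(blocks(), 2 ** 15 - 1))
averages, s = [], 0
for m, w in enumerate(terms, start=1):
    s += w
    averages.append(s / m)

# Cesaro averages at the ends of the blocks: exactly -1/3 after the
# odd-indexed blocks, above 1/3 after the even-indexed ones
vals = [averages[2 ** (n + 1) - 2] for n in range(14)]
assert all(v > 0.3 for v in vals[0::2])
assert all(v < -0.3 for v in vals[1::2])
```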
| <p>Let $(a_n)$ be any increasing sequence of positive integers such that
$$
a_{n+1}\ge 2(a_1+\cdots+a_{n})
$$
for all $n$. Now let $(x_n)$ be the sequence
$$
\underbrace{0,\ldots,0}_{a_1 \text{ times }},\underbrace{1,\ldots,1}_{a_2 \text{ times }},\underbrace{0,\ldots,0}_{a_3 \text{ times }},\underbrace{1,\ldots,1}_{a_4 \text{ times }},\ldots
$$
It easily follows that
$$
\frac{x_1+\cdots+x_{a_1+\cdots+a_{2n}}}{a_1+\cdots+a_{2n}}\ge \frac{a_{2n}}{a_1+\cdots+a_{2n}}=1-\frac{a_1+\cdots+a_{2n-1}}{a_1+\cdots+a_{2n}}\ge 1-\frac{1}{1+2}=\frac{2}{3}
$$
for all $n$ and, on the other hand,
$$
\frac{x_1+\cdots+x_{a_1+\cdots+a_{2n-1}}}{a_1+\cdots+a_{2n-1}}\le \frac{a_1+\cdots+a_{2n-1}}{a_1+\cdots+a_{2n}} \le \frac{1}{1+2}=\frac{1}{3},
$$
hence it cannot converge.</p>
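<p>A sketch of this construction with the extremal choice <span class="math-container">$a_{n+1}=2(a_1+\cdots+a_n)$</span>, checking the two bounds numerically (ten blocks, an illustrative truncation):</p>

```python
# block lengths satisfying a_{n+1} = 2*(a_1 + ... + a_n): 1, 2, 6, 18, 54, ...
a = [1]
for _ in range(9):
    a.append(2 * sum(a))

# the sequence: a_1 zeros, a_2 ones, a_3 zeros, a_4 ones, ...
x = [bit for length, bit in zip(a, [0, 1] * 5) for _ in range(length)]
averages, s = [], 0
for n, v in enumerate(x, start=1):
    s += v
    averages.append(s / n)

ends = [sum(a[: k + 1]) for k in range(len(a))]
# averages at the ends of 0-blocks stay <= 1/3, at the ends of 1-blocks >= 2/3
assert all(averages[e - 1] <= 1 / 3 for e in ends[0::2])
assert all(averages[e - 1] >= 2 / 3 for e in ends[1::2])
```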
|
linear-algebra | <p>It would seem that one way of proving this would be to show the existence of non-algebraic numbers. Is there a simpler way to show this?</p>
| <p>The cardinality argument mentioned by Arturo is probably the simplest. Here is an alternative: an explicit example of an infinite <span class="math-container">$\, \mathbb Q$</span>-independent set of reals. Consider the set consisting of the logs of all primes <span class="math-container">$\, p_i.\,$</span> If <span class="math-container">$ \, c_1 \log p_1 +\,\cdots\, + c_n\log p_n =\, 0,\ c_i\in\mathbb Q,\,$</span> multiplying by a common denominator we can assume that all <span class="math-container">$\ c_i \in \mathbb Z\,$</span> so, exponentiating, we obtain <span class="math-container">$\, p_1^{\large c_1}\cdots p_n^{\large c_n}\! = 1\,\Rightarrow\ c_i = 0\,$</span> for all <span class="math-container">$\,i,\,$</span> by the uniqueness of prime factorizations.</p>
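<p>The exponentiated step rests on unique factorization: <span class="math-container">$2^{a}3^{b}5^{c}=1$</span> forces <span class="math-container">$a=b=c=0$</span>. A mechanical sketch checking this for a small range of integer exponents (the range is an illustrative choice; the full statement is the fundamental theorem of arithmetic):</p>

```python
from fractions import Fraction
from itertools import product

# with exact rational arithmetic, 2^a * 3^b * 5^c = 1 exactly when a=b=c=0
violations = [
    (a, b, c)
    for a, b, c in product(range(-5, 6), repeat=3)
    if (Fraction(2) ** a * Fraction(3) ** b * Fraction(5) ** c == 1)
    != (a == b == c == 0)
]
assert violations == []
```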
| <p>As Steve D. noted, a finite dimensional vector space over a countable field is necessarily countable: if $v_1,\ldots,v_n$ is a basis, then every vector in $V$ can be written uniquely as $\alpha_1 v_1+\cdots+\alpha_n v_n$ for some scalars $\alpha_1,\ldots,\alpha_n\in F$, so the cardinality of the set of all vectors is exactly $|F|^n$. If $F$ is countable, then this is countable. Since $\mathbb{R}$ is uncountable and $\mathbb{Q}$ is countable, $\mathbb{R}$ cannot be finite dimensional over $\mathbb{Q}$. (Whether it has a basis or not depends on your set theory).</p>
<p>Your further question in the comments, whether a vector space over $\mathbb{Q}$ is finite dimensional <em>if and only if</em> the set of vectors is countable, has a negative answer. If the vector space is finite dimensional, then it is a countable set; but there are infinite-dimensional vector spaces over $\mathbb{Q}$ that are countable as sets. The simplest example is $\mathbb{Q}[x]$, the vector space of all polynomials with coefficients in $\mathbb{Q}$, which is a countable set, and has dimension $\aleph_0$, with basis $\{1,x,x^2,\ldots,x^n,\ldots\}$. </p>
<p><strong>Added:</strong> Of course, if $V$ is a vector space over $\mathbb{Q}$, then it has <em>countable dimension</em> (finite or denumerable infinite) if and only if $V$ is countable as a set. So the counting argument in fact shows that not only is $\mathbb{R}$ infinite dimensional over $\mathbb{Q}$, but that (if you are working in an appropriate set theory) it is <em>uncountably</em>-dimensional over $\mathbb{Q}$. </p>
|
geometry | <p>In calculus, how we calculate the arc length of a curve is by approximating the curve with a series of line segments, and then we take the limit as the number of line segments goes to infinity. This is a perfectly valid approach to calculating arc length, and obviously it will allow you calculate correctly the length of any (rectifiable) curve. But it's obviously not the way people intuitively think about the length of a curve.</p>
<p>Here is how they introduced arclength to us in elementary school. If you want to measure the length of a straight line segment, use a ruler. If you want to measure the length of a curve, overlay the curve with a piece of string, then straighten the string and measure it with a ruler.</p>
<p>So I was wondering if it's possible to make a definition of arc length that preserves the spirit of that definition. Without using the calculus-based definition of length, is there any way to define what it means for one curve to be a "length-preserving deformation" of another curve? If that's possible, we could construct equivalence classes of curves that are length-preserving deformations of one another, and we can define the length associated with an equivalence class to be the length of the straight line that's in the class.</p>
<p>Is there anything in topology that would allow us to make such a definition? We'd need to account for the Euclidean metric somehow, since, e.g. in Taxicab geometry the <a href="http://taxicabgeometry.net/measures/arc_length.html" rel="noreferrer">circumference of a circle is $8r$</a> rather than $2\pi r$ (which is why your friends keep sending you that dumb $\pi = 4$ picture).</p>
<p>Any help would be greatly appreciated.</p>
<p>Thank You in Advance.</p>
| <p>It's somewhat simpler, I think, to characterize maps that <strong>don't increase</strong> length rather than those that preserve it.</p>
<p>A map $f: X \to Y$ (where $X$ and $Y$ are metric spaces, with metric denoted $d$ in both cases) is said to be contractive if $d(f(x),f(y)) \le d(x,y)$ for all $x, y \in X$. </p>
<p>EDIT (following Jim Belk's remark)
The length of a curve $C$ is the infimum of $L$ such that there exists a contractive map from $[0,L]$ onto $C$.</p>
| <p>One neat way to make this precise is using the language of nonstandard analysis. Very generally, given two compact metric spaces $X$ and $Y$, say a map $f:X\to Y$ is length-preserving if whenever $a,b\in {}^*X$ are infinitely close, $\frac{d(a,b)-d(f(a),f(b))}{d(a,b)}$ is infinitesimal. That is, $f$ preserves infinitesimal distances up to an infinitesimally smaller error. For differentiable parametrizations of curves in $\mathbb{R}^n$, this recovers the usual characterization of length-preserving parametrizations as those such that the derivative has norm $1$ everywhere.</p>
|
logic | <p>In Wikipedia page on <a href="http://en.wikipedia.org/wiki/Intuitionistic_logic">intuitionistic logic</a>, it is stated that excluded middle and double negation elimination are not axioms. Does this mean that De Morgan's laws, stated $$ \lnot (p \land q) \iff \lnot p \lor \lnot q \\ \lnot (p \lor q) \iff \lnot p \land \lnot q,$$
cannot be proven in propositional intuitionistic logic?</p>
| <p>The answer is "three quarters yes, one quarter no."</p>
<p>The one which is valid is the one with the disjunction <em>inside</em> the negation:
$$\lnot p \land \lnot q \dashv \vdash \lnot (p \lor q)$$
For the other law, only one implication is valid:
$$\lnot p \lor \lnot q \vdash \lnot (p \land q)$$</p>
<p>The proofs are left as an exercise to the reader.</p>
<p>To show that the last implication is invalid, we need to know some model theory for intuitionistic propositional logic. Recall that the rules of inference for intuitionistic propositional logic are sound when interpreted in a Heyting algebra: that is, if $p \vdash q$ in intuitionistic logic, and $[p]$ and $[q]$ are the corresponding interpretations in some Heyting algebra $\mathfrak{A}$, then $[p] \le [q]$.</p>
<p>Now, there is a rich and fruitful source of Heyting algebras in mathematics: the frame of open sets of any topological space is automatically a Heyting algebra, with the Heyting implication defined by
$$(U \Rightarrow V) = \bigcup_{W \cap U \le V} W$$
Hence, the negation of $U$ is the interior of the complement of $U$. Now, consider $X = (0, 2)$, and let $U = (0, 1)$ and $V = (1, 2)$. Then, $\lnot U = (1, 2)$ and $\lnot V = (0, 1)$, so $\lnot U \cup \lnot V = X \setminus \{ 1 \}$. On the other hand, $U \cap V = \emptyset$, so $\lnot (U \cap V) = X$. Thus, $\lnot U \cup \lnot V \le \lnot (U \cap V)$, as expected, but $\lnot (U \cap V) \nleq \lnot U \cup \lnot V$. We conclude that
$$\lnot (p \land q) \nvdash \lnot p \lor \lnot q$$</p>
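<p>The topological counterexample can be miniaturized to a finite Heyting algebra and checked mechanically; a sketch in Python using a three-point space whose open sets mimic the intervals above (the encoding is an illustrative choice):</p>

```python
# open sets of a three-point space mimicking X=(0,2), U=(0,1), V=(1,2),
# with the point 1 playing the role of the boundary point
opens = [frozenset(s) for s in [(), (0,), (2,), (0, 2), (0, 1, 2)]]

def neg(u):
    # pseudo-complement in the frame of opens: largest open set disjoint from u
    return max((w for w in opens if not (w & u)), key=len)

U, V = frozenset({0}), frozenset({2})
lhs = neg(U & V)        # corresponds to ¬(U ∧ V): the whole space
rhs = neg(U) | neg(V)   # corresponds to ¬U ∨ ¬V: misses the point 1
assert rhs < lhs        # strict inclusion: this De Morgan direction fails
```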
| <p>It seems I managed to prove three implications using Curry-Howard isomorphism, but the fourth seems to be false.</p>
<p><span class="math-container">$\neg(p \lor q) \Rightarrow \neg p \land \neg q$</span>:
<span class="math-container">$$
f = \lambda g.\ \langle
\lambda x.\ g\ (\mathtt{Left}\ x),
\lambda y.\ g\ (\mathtt{Right}\ y)
\rangle
$$</span>
<span class="math-container">$\neg(p \lor q) \Leftarrow \neg p \land \neg q$</span>:</p>
<p><span class="math-container">\begin{align*}
f &= \lambda (g, h).\ \lambda (\mathtt{Left}\ x).\ g\ x \\\
f &= \lambda (g, h).\ \lambda (\mathtt{Right}\ x).\ h\ x
\end{align*}</span></p>
<p><span class="math-container">$\neg(p \land q) \Leftarrow \neg p \lor \neg q$</span>:</p>
<p><span class="math-container">\begin{align*}
f &= \lambda (\mathtt{Left}\ g).\ \lambda (x, y).\ g\ x \\\
f &= \lambda (\mathtt{Right}\ h).\ \lambda (x, y).\ h\ y
\end{align*}</span></p>
<p>To prove <span class="math-container">$$\neg(p \land q) \Rightarrow \neg p \lor \neg q$$</span> I would need
to transform a function <span class="math-container">$p \times q \to \alpha$</span> into one of <span class="math-container">$p \to \alpha$</span> or <span class="math-container">$q \to \alpha$</span>, but it is impossible to obtain both arguments (a <span class="math-container">$p$</span> and a <span class="math-container">$q$</span>) at once. This is the intuition, but I would need something more for the proof.</p>
<p><strong>Edit 1:</strong> Relevant link: <a href="http://ncatlab.org/nlab/show/de+Morgan+duality" rel="nofollow noreferrer">http://ncatlab.org/nlab/show/de+Morgan+duality</a> .</p>
<p><strong>Edit 2:</strong> Here is a proof attempt (but I am not sure it is correct, if someone can tell, please do):</p>
<p>Let's assume that there exists a function
<span class="math-container">$$F : \forall \alpha, p, q.\ (p \times q \to \alpha) \to (p \to \alpha) + (q \to \alpha).$$</span> Then, by the naturality of <span class="math-container">$F$</span> we have that it always returns <span class="math-container">$\mathtt{Left}$</span> or always returns <span class="math-container">$\mathtt{Right}$</span>. Without loss of generality let's assume that <span class="math-container">$F(f) = \mathtt{Left}\ g$</span> for any <span class="math-container">$f$</span>. Then it follows that there exists
<span class="math-container">$$ F_1 : \forall \alpha, p, q.\ (p \times q \to \alpha) \to (p \to \alpha). $$</span> However, <span class="math-container">$F_1(\lambda x.\ \lambda y.\ y) : \forall \alpha, \beta.\ \beta \to \alpha$</span>, which means <span class="math-container">$\forall \beta.\ \beta \to \bot$</span>, and that concludes the proof.</p>
|
linear-algebra | <p>A Hermitian matrix always has real eigenvalues and real or complex orthogonal eigenvectors. A real symmetric matrix is a special case of Hermitian matrices, so it too has orthogonal eigenvectors and real eigenvalues, but could it ever have complex eigenvectors?</p>
<p>My intuition is that the eigenvectors are always real, but I can't quite nail it down.</p>
| <p>Always try out examples, starting out with the simplest possible examples (it may take some thought as to which examples are the simplest). Does for instance the identity matrix have complex eigenvectors? This is pretty easy to answer, right? </p>
<p>Now for the general case: if $A$ is any real matrix with real eigenvalue $\lambda$, then we have a choice of looking for real eigenvectors or complex eigenvectors. The theorem here is that the $\mathbb{R}$-dimension of the space of real eigenvectors for $\lambda$ is equal to the $\mathbb{C}$-dimension of the space of complex eigenvectors for $\lambda$. It follows that (i) we will always have non-real eigenvectors (this is easy: if $v$ is a real eigenvector, then $iv$ is a non-real eigenvector) and (ii) there will always be a $\mathbb{C}$-basis for the space of complex eigenvectors consisting entirely of real eigenvectors.</p>
<p>As for the proof: the $\lambda$-eigenspace is the kernel of the (linear transformation given by the) matrix $\lambda I_n - A$. By the rank-nullity theorem, the dimension of this kernel is equal to $n$ minus the rank of the matrix. Since the rank of a real matrix doesn't change when we view it as a complex matrix (e.g. the reduced row echelon form is unique so must stay the same upon passage from $\mathbb{R}$ to $\mathbb{C}$), the dimension of the kernel doesn't change either. Moreover, if $v_1,\ldots,v_k$ are a set of real vectors which are linearly independent over $\mathbb{R}$, then they are also linearly independent over $\mathbb{C}$ (to see this, just write out a linear dependence relation over $\mathbb{C}$ and decompose it into real and imaginary parts), so any given $\mathbb{R}$-basis for the eigenspace over $\mathbb{R}$ is also a $\mathbb{C}$-basis for the eigenspace over $\mathbb{C}$.</p>
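<p>As a concrete sketch (the matrix below is a made-up example, not from the answer): the real symmetric matrix $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenvalues $3$ and $1$ with real eigenvectors $(1,1)$ and $(1,-1)$, and multiplying a real eigenvector by $i$ yields a non-real eigenvector for the same eigenvalue:</p>

```python
# A = [[2, 1], [1, 2]] is real symmetric with eigenvalues 3 and 1.
def mat_vec(A, v):
    """Multiply a 2x2 matrix by a length-2 vector (entries may be complex)."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[2, 1], [1, 2]]

v = [1, 1]                       # real eigenvector for eigenvalue 3
assert mat_vec(A, v) == [3 * x for x in v]

w = [1j * x for x in v]          # i*v: a non-real eigenvector, same eigenvalue
assert mat_vec(A, w) == [3 * x for x in w]

u = [1, -1]                      # real eigenvector for eigenvalue 1
assert mat_vec(A, u) == [1 * x for x in u]
```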
| <p>If $A$ is a symmetric $n\times n$ matrix with real entries, then viewed as an element of $M_n(\mathbb{C})$, its eigenvectors always include vectors with non-real entries: if $v$ is any eigenvector then at least one of $v$ and $iv$ has a non-real entry.</p>
<p>On the other hand, if $v$ is any eigenvector then at least one of $\Re v$ and $\Im v$ (take the real or imaginary parts entrywise) is non-zero and will be an eigenvector of $A$ with the same eigenvalue. So you can always pass to eigenvectors with real entries.</p>
|
probability | <p>I am looking at a proof that <span class="math-container">$\text{Var}(X)= E((X - EX)^2) = E(X^2) - (E(X))^2$</span></p>
<p><span class="math-container">$E((X - EX)^2) =$</span></p>
<p><span class="math-container">$E(X^2 - 2XE(X) + (E(X))^2) =$</span></p>
<p><span class="math-container">$E(X^2) - 2E(X)E(X) + (E(X))^2$</span></p>
<p>I can't see how the second line can be equal to the third line. I would have had the following for the third line -</p>
<p><span class="math-container">$E(X^2) - E(2XE(X)) + E((E(X))^2)$</span></p>
<p>Which seems very messy... There must be something I am not understanding about the properties of expected values?</p>
| <p>There are some things you can cancel in yours.</p>
<p>$E((E(X))^{2})=(E(X))^{2}$, since the expected value of an expected value is just that. It stops being random once you take one expected value, so iteration doesn't change it.</p>
<p>Furthermore, $-E(2XE(X))=-2E(XE(X))=-2E(X)E(X)$. The first step here is just factoring out a constant. For the same reason, in the second step, we see that $E(X)$ was actually a constant at this point, not random at all, so it can be factored out as well.</p>
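<p>The identity is easy to confirm exactly on a small made-up discrete distribution (values $1, 2, 6$ with probabilities $\frac12, \frac13, \frac16$), using exact rational arithmetic:</p>

```python
from fractions import Fraction as F

# A made-up distribution: P(X=1) = 1/2, P(X=2) = 1/3, P(X=6) = 1/6.
dist = {1: F(1, 2), 2: F(1, 3), 6: F(1, 6)}
assert sum(dist.values()) == 1

def E(g):
    """Expectation of g(X) under `dist`."""
    return sum(p * g(x) for x, p in dist.items())

mean = E(lambda x: x)                        # E(X)
var_def = E(lambda x: (x - mean) ** 2)       # E((X - EX)^2)
var_alt = E(lambda x: x ** 2) - mean ** 2    # E(X^2) - (E(X))^2
assert var_def == var_alt                    # both equal 113/36 here
```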
| <p>Your intermediate step is correct. All you need to realize is that $E(X)$ is a <em>number</em>, not a random variable, so you can treat it like any other constant, like $2$ or $4$. That is, $E(Y \cdot E(X)) = E(X) E(Y)$, just as you would write $E(Y \cdot 2) = 2 E(Y)$.</p>
|
differentiation | <p>Differentiation rules have been bugging me ever since I took Basic Calculus. I thought I'd develop some intuitive understanding of them eventually, but so far all my other math courses (including Multivariable Calculus) take the rules for granted.</p>
<p>I know how to prove some of the rules. The problem is that algebra manipulation alone isn't quite convincing to me. Is there any possibility of understanding <em>why</em> the algebra happens to work that way? For example, why do the slopes of the tangent lines to the parabola x^2 happen to be determined by 2x? Looking at it graphically, there's no way I could've told that.</p>
<p>Any sources covering this issue (books; internet sites; etc) would be very greatly appreciated. Thanks in advance. </p>
| <p>The key intuition, first of all, is that the product of two tiny differences is negligible. You can intuit this just by doing computations:</p>
<p>$$3.000001 \cdot 2.0001 = 6.0003020001$$</p>
<p>If we are doing any sort of rounding of hand computations, we'd likely round away that $0.0000000001$ part. If you were doing computations to eight significant digits, a value $v$ is really a value in a range roughly of $v\left(1 \pm 10^{-8}\right)$ and the error when you multiply $v_1$ by $v_2$ is almost entirely $10^{-8}|v_1v_2|$. The other part of the error is so tiny you'd probably ignore it.</p>
<p><strong>Case: $f(x)=x^2$</strong></p>
<p>Now, consider a square with corners $(0,0), (0,x), (x,0), (x,x)$. Grow $x$ a little bit, and you see the area grows by proportionally by the size of two of the edges, plus a tiny little square. That tiny square is negligible.</p>
<p>This is a little harder to visualize for $x^n$, but it actually works the same way when $n$ is a positive integer, by considering an $n$-dimensional hypercube.</p>
<p>This geometric reason is also why the circumference of a circle is equal to the derivative of its area – if you increase the radius a little, the area is increased by approximately that "little" times the circumference. So the derivative of $\pi r^2$ is the circumference of the circle, $2\pi r$.</p>
<p>It's also a way to understand the product rule. (Or, indeed, FOIL.)</p>
<p><strong>Case: The chain rule</strong></p>
<p>The chain rule is better seen by considering an odd-shaped tub. Let's say that when the volume of the water in a tube is $v$ then the tub is filled to depth $h(v)$. Then assume that we have a hose that, between time $0$ and time $t$, has sent a volume of $v(t)$ water.</p>
<p>At time $t$, what is the rate that the height of the water is increasing? </p>
<p>Well, we know that when the current volume is $v$, then the rate at which the height is increasing is $h'(v)$ times the rate the volume is increasing. And the rate the volume is increasing is $v'(t)$. So the rate the height is increasing is $h'(v(t)) \cdot v'(t)$.</p>
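<p>The tub picture can be checked numerically with made-up shapes: say the depth at volume $v$ is $h(v)=\sqrt v$ and the hose has delivered $v(t)=t^3$ by time $t$; then a finite-difference estimate of $\frac{d}{dt}h(v(t))$ should match $h'(v(t))\cdot v'(t)$.</p>

```python
import math

h = lambda v: math.sqrt(v)          # depth of water at volume v (made up)
dh = lambda v: 0.5 / math.sqrt(v)   # h'(v)
vol = lambda t: t ** 3              # volume delivered by time t (made up)
dvol = lambda t: 3 * t ** 2         # v'(t)

t, eps = 1.7, 1e-6
# Finite-difference estimate of d/dt h(vol(t)) ...
numeric = (h(vol(t + eps)) - h(vol(t - eps))) / (2 * eps)
# ... versus the chain rule h'(v(t)) * v'(t).
chain = dh(vol(t)) * dvol(t)
assert abs(numeric - chain) < 1e-5
```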
<p><strong>Case: Inverse function</strong></p>
<p>This is the one case where it is obvious from the graph. When you flip the coordinates of a Cartesian plane, a line of slope $m$ gets sent to a line of slope $1/m$. So if $f$ and $g$ are inverse functions, then the slope of $f$ at $(x,f(x))$ is the inverse of the slope of $g$ at $(f(x),x)=(f(x),g(f(x)))$. So $g'(f(x))=1/f'(x)$.</p>
<p><strong>$x^2$ revisited</strong></p>
<p>Another way of dealing with $f(x)=x^2$ is thinking again of area, but thinking of it in terms of units. If we have a square that is $x$ centimeters, and we change that by a small amount, $\Delta x$ centimeters, then the area is $x^2\,\mathrm{cm}^2$ and it changes by approximately $f(x+\Delta x)-f(x)\approx f'(x)\Delta x$.</p>
<p>On the other hand, if we measure the square in meters, it has side length $x/100$ meters and area $(x/100)^2$. The change in the side length is $(\Delta x)/100$ meters. So the expected area change is $f'(x/100)\cdot (\Delta x)/100$ square meters. But this difference should be the same, so
$$f'(x)\Delta x = f'(x/100)\cdot\frac{\Delta x}{100}\cdot \left(100^2 \ \text{cm}^2/\text{m}^2\right) = 100 f'(x/100)\,\Delta x$$</p>
<p>More generally, then, we see that $f'(ax)=af'(x)$ when $f(x)=x^2$ by changing units from centimeters to a unit that is $1/a$ centimeters.</p>
<p>So we see that $f'(x)$ is linear, although it doesn't explain why $f'(1)=2$.</p>
<p>If you do the same for $f(x)=x^n$, with units $\mu$ and another unit $\rho$ where $a\rho = \mu$, then you get that the change in volume when changing by $\Delta x\,\mu$ is $f'(x)\Delta x\,\mu^n$. It is also $f'(ax)\cdot a(\Delta x)\,\rho^n$. Since $\mu/\rho = a$, this means $f'(ax) =a^{n-1}f'(x)$.</p>
<p>Again, we still don't know why $f'(1)=n$, but we know $f'(x)=f'(1)x^{n-1}$.</p>
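<p>A quick finite-difference check of the scaling relation $f'(ax)=a^{n-1}f'(x)$ for $f(x)=x^n$ (a numerical sketch, not a proof):</p>

```python
def deriv(f, x, eps=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

for n in (2, 3, 5):
    f = lambda x, n=n: x ** n
    for a in (0.5, 2.0, 3.0):
        x = 1.3
        lhs = deriv(f, a * x)             # f'(ax)
        rhs = a ** (n - 1) * deriv(f, x)  # a^(n-1) f'(x)
        assert abs(lhs - rhs) < 1e-3 * max(1.0, abs(rhs))
```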
| <p>For the first hundred years or so, before people formalized
differentiation and integration by using limits,
the general intuition behind taking the derivative of $f(x)$ was,
"Let's add a tiny increment to $x$ and see how much $f(x)$ changes."</p>
<p>The "tiny increment" was called $o$ (lower-case letter O), at least by some people.</p>
<p>For $f(x) = x^2$, for example, you could show that
$$f(x + o) = (x + o)^2 = x^2 + 2xo + o^2 = f(x) + 2xo + o^2.$$
So the amount of "change" in $f(x)$ is $2xo + o^2$,
which is $2x + o$ times the amount by which you changed $x$.
And then the mathematicians would say that only the $2x$ part of
$2x + o$ matters, since $o$ is "vanishingly" small.</p>
<p>I think for most of the differentiation rules developed back then
(which may be all you'll see in the table of derivatives in an
elementary calculus book), the intuition <em>was</em> to do the arithmetic.
What they did <em>not</em> do was to encumber that arithmetic with all
the extra mechanisms needed to establish a limit, as the
standard-analysis approach does today.</p>
<p>On the other hand, the arithmetic usually went hand-in-hand with
practical problems (usually in what we would consider physics or engineering)
that people wanted to solve.
People also tended to make a connection between arithmetic and
geometry, so linking the function $f(x) = x^2$ to the area of a square
of side $x$ would have been an obvious thing to do
(and the visualization in Thomas Andrews's answer would have worked
very well, I think).</p>
<p>For example, visualize a particle running along a circular track at a constant speed. In fact, make the circular track be the circle given by $x^2 + y^2 = 1$ in the Cartesian plane. (Putting everything into Cartesian coordinates was all the rage when calculus was young.)
You can then see (by symmetry, or by other arguments)
that the direction the particle is going is always perpendicular to
the direction in which the particle lies from the center of the circle
at that moment. So if the angle to the particle at that instant is
$\theta$, the $x$-coordinate of the particle is $\sin\theta$,
but the velocity vector is pointing in a direction $\frac\pi2$ radians
"ahead" of $\theta$, and if we let $\theta$ increase at the rate of
$1$ radian per unit of time the magnitude of the velocity is $1$,
so its $x$-coordinate is $\sin\left(\theta + \frac\pi2\right) = \cos\theta$,
which is the derivative of $\sin\theta$ when $\theta$ is measured in radians.</p>
|
matrices | <p>This may be a trivial question yet I was unable to find an answer:</p>
<p>$$\left \| A \right \| _2=\sqrt{\lambda_{\text{max}}(A^{^*}A)}=\sigma_{\text{max}}(A)$$</p>
<p>where the spectral norm $\left \| A \right \| _2$ of a complex matrix $A$ is defined as $$\text{max} \left\{ \|Ax\|_2 : \|x\| = 1 \right\}$$</p>
<p>How does one prove the first and the second equality?</p>
<p>Put <span class="math-container">$B=A^*A$</span>, which is a Hermitian matrix. A linear transformation of the Euclidean vector space <span class="math-container">$E$</span> is Hermitian iff there exists an orthonormal basis of <span class="math-container">$E$</span> consisting of its eigenvectors. Let <span class="math-container">$\lambda_1,...,\lambda_n$</span> be the eigenvalues of <span class="math-container">$B$</span> and <span class="math-container">$\left \{ e_1,...,e_n \right \}$</span> be a corresponding orthonormal basis of eigenvectors of <span class="math-container">$E$</span>. Denote by <span class="math-container">$\lambda_{j_{0}}$</span> the largest eigenvalue of <span class="math-container">$B$</span>.</p>
<p>For <span class="math-container">$x=a_1e_1+...+a_ne_n$</span>, we have <span class="math-container">$\left \| x \right \|=\left \langle \sum_{i=1}^{n}a_ie_i,\sum_{i=1}^{n}a_ie_i \right \rangle^{1/2} =\sqrt{\sum_{i=1}^{n}a_i^{2}}$</span> and
<span class="math-container">$Bx=B\left ( \sum_{i=1}^{n}a_ie_i \right )=\sum_{i=1}^{n}a_iB(e_i)=\sum_{i=1}^{n}\lambda_ia_ie_i$</span>. Therefore:</p>
<p><span class="math-container">$\left \| Ax \right \|=\sqrt{\left \langle Ax,Ax \right \rangle}=\sqrt{\left \langle x,A^*Ax \right \rangle}=\sqrt{\left \langle x,Bx \right \rangle}=\sqrt{\left \langle \sum_{i=1}^{n}a_ie_i,\sum_{i=1}^{n}\lambda_ia_ie_i \right \rangle}=\sqrt{\sum_{i=1}^{n}a_i\overline{\lambda_ia_i}} \leq \underset{1\leq j\leq n}{\max}\sqrt{\left |\lambda_j \right |} \times (\left \| x \right \|)$</span></p>
<p>So, if <span class="math-container">$\left \| A \right \|$</span> = <span class="math-container">$\max \left\{ \|Ax\| : \|x\| = 1 \right\}$</span> then <span class="math-container">$\left \| A \right \|\leq \underset{1\leq j\leq n}\max\sqrt{\left |\lambda_j \right |}$</span>. (1)</p>
<p>Consider: <span class="math-container">$x_0=e_{j_{0}}$</span> <span class="math-container">$\Rightarrow \left \| x_0 \right \|=1$</span> so that <span class="math-container">$\left \| A \right \|^2 \geq \left \langle x_0,Bx_0 \right \rangle=\left \langle e_{j_0},B(e_{j_0}) \right \rangle=\left \langle e_{j_0},\lambda_{j_0} e_{j_0} \right \rangle = \lambda_{j_0}$</span>. (2)</p>
<p>Combining (1) and (2) gives us <span class="math-container">$\left \| A \right \|= \underset{1\leq j\leq n}{\max}\sqrt{\left | \lambda_{j} \right |}$</span> where <span class="math-container">$\lambda_j$</span> is the eigenvalue of <span class="math-container">$B=A^*A$</span></p>
<p>Conclusion: <span class="math-container">$$\left \| A \right \| _2=\sqrt{\lambda_{\text{max}}(A^{^*}A)}=\sigma_{\text{max}}(A)$$</span></p>
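<p>For a quick numerical sanity check on a made-up $2\times 2$ real example, take $A=\begin{pmatrix}3&0\\4&5\end{pmatrix}$: here $A^*A=\begin{pmatrix}25&20\\20&25\end{pmatrix}$ has eigenvalues $45$ and $5$, so the spectral norm should be $\sqrt{45}$, and maximizing $\|Ax\|$ over densely sampled unit vectors agrees:</p>

```python
import math

A = [[3, 0], [4, 5]]

# B = A^T A (A is real here, so A^* = A^T); B is symmetric 2x2.
B = [[A[0][0] ** 2 + A[1][0] ** 2, A[0][0] * A[0][1] + A[1][0] * A[1][1]],
     [A[0][0] * A[0][1] + A[1][0] * A[1][1], A[0][1] ** 2 + A[1][1] ** 2]]

# Largest eigenvalue of B via the quadratic formula on its characteristic polynomial.
tr, det = B[0][0] + B[1][1], B[0][0] * B[1][1] - B[0][1] * B[1][0]
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2
sigma_max = math.sqrt(lam_max)          # = sqrt(45) for this example

# Maximize ||Ax|| over unit vectors x = (cos t, sin t) by dense sampling.
best = max(
    math.hypot(A[0][0] * math.cos(t) + A[0][1] * math.sin(t),
               A[1][0] * math.cos(t) + A[1][1] * math.sin(t))
    for t in (k * math.pi / 5000 for k in range(10000)))
assert abs(best - sigma_max) < 1e-4
```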
| <p>First of all,
<span class="math-container">$$\begin{align*}\sup_{\|x\|_2 =1}\|Ax\|_2 & = \sup_{\|x\|_2 =1}\|U\Sigma V^Tx\|_2 = \sup_{\|x\|_2 =1}\|\Sigma V^Tx\|_2\end{align*}$$</span>
since <span class="math-container">$U$</span> is unitary, that is, <span class="math-container">$\|Ux_0\|_2^2 = x_0^TU^TUx_0 = x_0^Tx_0 = \|x_0\|_2^2$</span>, for some vector <span class="math-container">$x_0$</span>.</p>
<p>Then let <span class="math-container">$y = V^Tx$</span>. By the same argument above, <span class="math-container">$\|y\|_2 = \|V^Tx\|_2 = \|x\|_2 = 1$</span> since <span class="math-container">$V$</span> is unitary.
<span class="math-container">$$\sup_{\|x\|_2 =1}\|\Sigma V^Tx\|_2 = \sup_{\|y\|_2 =1}\|\Sigma y\|_2$$</span>
Since <span class="math-container">$\Sigma = \mbox{diag}(\sigma_1, \cdots, \sigma_n)$</span>, where <span class="math-container">$\sigma_1$</span> is the largest singular value. The max for the above, <span class="math-container">$\sigma_1$</span>, is attained when <span class="math-container">$y = (1,\cdots,0)^T$</span>. You can find the max by, for example, solving the above using a Lagrange Multiplier.</p>
|
probability | <blockquote>
<p>You are playing a game in which you have $100$ jellybeans, $10$ of which are poisonous (you eat one, you die). Now you have to pick $10$ at random to eat.<br>
<strong>Question</strong>: What is the probability of dying?</p>
</blockquote>
<p>How I tried to solve it:</p>
<p>Each jellybean has a $\frac{1}{10}$ chance of being poisonous. Since you need to take $10$ of them, I multiply it by $10$ which gave $1$ (Guaranteed death).</p>
<p><em>How other people tried to solve it</em>:</p>
<p>Each jellybean is picked out separately. The first jellybean has a $\frac{10}{100}$ chance of being poisonous, the second -- $\frac{10}{99}$, the third -- $\frac{10}{98}$ and so on.. which gives a sum of roughly $\sim 1.04$ (More than guaranteed death!)</p>
<p>Both these results make no sense since there are obviously multiple possibilities were you survive since there are $90$ jellybeans to pick out of.</p>
<p>Can someone explain this to me?</p>
| <p>Both probablyme and carmichael561 have given a good approach to the problem, but I thought I'd point out why the solutions given by you and your classmates (?) are erroneous.</p>
<p>The problem common to both approaches is that they neglect the probability that you die from earlier jelly beans. You take the first jelly bean, and you have a $1/10$ probability of dying; that is all right. But although your classmates are almost right (and you are not) that the second jelly bean has a $10/99$ chance of killing you, that is only true <em>if the first jelly bean didn't already kill you</em>.</p>
<p>In other words, the probability that you are killed by the first jelly bean is $1/10$, and the probability that you survive to eat a second jelly bean <em>and</em> it kills you is $9/10 \times 10/99 = 1/11$. Each succeeding jelly bean does, in truth, have a higher probability of killing you <em>if you survive to eat it</em>, but the decreasing probability that you do in fact survive conspires to make its <em>overall</em> effect smaller, so the numbers do not add up to anywhere near $1$, complete certainty.</p>
<p>It is possible to continue along in a similar vein: You can add $1/10+1/11 = 21/110$ to obtain the probability that the first two jelly beans kill you; the remainder, $89/110$, is the probability that you survive to eat the third jelly bean, which kills you with probability $10/98$. Your probability of surviving the first two jelly beans only to be killed by the third is then $89/110 \times 10/98 = 89/1078$. You would then have to add up $1/10+1/11+89/1078$ to find the probability that the first three jelly beans kill you, etc.</p>
<p>I think you can see that the solutions provided by the other answerers are much more straightforward; this way of approaching the problem by considering its inverse is a common tactic when there are multiple ways to satisfy the conditions of the problem, but only one way to <em>violate</em> them.</p>
| <p>So you live if you do not choose a deadly jellybean :)<br>
And we die if we select at least one deadly bean, so I think it goes as follows
$$P(\text{Die}) = 1-P(\text{Live}) = 1-\frac{\binom{10}{0}\binom{90}{10}}{\binom{100}{10}}=0.6695237889.$$</p>
<p>In this case, we literally have good beans and bad beans, and we select without replacement. Then the number of bad beans selected follows a hypergeometric distribution. So for completeness, there are $\binom{10}{0}$ ways to choose zero bad beans, $\binom{90}{10}$ ways to choose 10 good beans, and finally $\binom{100}{10}$ ways to choose 10 beans from the total.</p>
<p>Note: $\binom nk = \frac{n!}{k!(n-k)!}$, the <a href="https://en.wikipedia.org/wiki/Binomial_coefficient">binomial coefficient</a>.</p>
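<p>Both computations here, the hypergeometric count and the step-by-step survival product from the other answer, can be checked exactly with standard-library Python:</p>

```python
from fractions import Fraction
from math import comb

# Hypergeometric count: survive iff all 10 chosen beans are good.
p_live = Fraction(comb(90, 10), comb(100, 10))

# Same thing computed as a step-by-step survival product.
p_live_seq = Fraction(1)
for k in range(10):
    p_live_seq *= Fraction(90 - k, 100 - k)  # survive the (k+1)-th pick
assert p_live == p_live_seq

p_die = 1 - p_live
assert round(float(p_die), 4) == 0.6695
```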
|
geometry | <p>In the textbook I am reading, it says a dimension is the number of independent parameters needed to specify a point. In order to make a circle, you need two points to specify the $x$ and $y$ position of a circle, but apparently a circle can be described with only the $x$-coordinate? How is this possible without the $y$-coordinate also?</p>
| <p>Suppose we're talking about a unit circle. We could specify any point on it as:
$$(\sin(\theta),\cos(\theta))$$
which uses only one parameter. We could also notice that there are only $2$ points with a given $x$ coordinate:
$$(x,\pm\sqrt{1-x^2})$$
and we would generally not consider having to specify a sign as being an additional parameter, since it is discrete, whereas we consider only continuous parameters for dimension.</p>
<p>That said, a <a href="http://en.wikipedia.org/wiki/Hilbert_curve">Hilbert curve</a> or <a href="http://en.wikipedia.org/wiki/Z-order_curve">Z-order curve</a> parameterizes a square in just one parameter, but we would certainly not say a square is one dimensional. The definition of dimension that you were given is kind of sloppy - really, the fact that the circle is of dimension one can be taken more to mean "If you zoom in really close to a circle, it looks basically like a line" and this happens, more or less, to mean that it can be paramaterized in one variable.</p>
| <p>Continuing ploosu2, the circle can be parameterized with one parameter (even for those who have not studied trig functions)...
$$
x = \frac{2t}{1+t^2},\qquad y=\frac{1-t^2}{1+t^2}
$$</p>
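<p>One can verify exactly, with rational arithmetic, that this one-parameter map always lands on the unit circle:</p>

```python
from fractions import Fraction

def point(t):
    """The rational parameterization above, evaluated exactly."""
    t = Fraction(t)
    return (2 * t / (1 + t * t), (1 - t * t) / (1 + t * t))

for t in [0, 1, -3, Fraction(-3, 7), Fraction(22, 7), 100]:
    x, y = point(t)
    assert x * x + y * y == 1   # the point lies on the unit circle
```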
|
linear-algebra | <blockquote>
<p>Let $ A, B $ be two square matrices of order $n$. Do $ AB $ and $ BA $ have same minimal and characteristic polynomials?</p>
</blockquote>
<p>I have a proof only if $ A$ or $ B $ is invertible. Is it true for all cases?</p>
| <p>Before proving that $AB$ and $BA$ have the same characteristic polynomial, show that if $A$ is $m\times n$ and $B$ is $n\times m$, then the characteristic polynomials of $AB$ and $BA$ satisfy the following statement: $$x^n|xI_m-AB|=x^m|xI_n-BA|$$ From this one easily concludes that if $m=n$ then $AB$ and $BA$ have the same characteristic polynomial.</p>
<p>Define $$C = \begin{bmatrix} xI_m & A \\B & I_n \end{bmatrix},\ D = \begin{bmatrix} I_m & 0 \\-B & xI_n \end{bmatrix}.$$ We have
$$
\begin{align*}
\det CD &= x^n|xI_m-AB|,\\
\det DC &= x^m|xI_n-BA|.
\end{align*}
$$
and we know $\det CD=\det DC$, so if $m=n$ then $AB$ and $BA$ have the same characteristic polynomial.</p>
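<p>For a small sanity check: in the $2\times 2$ case the characteristic polynomial is $x^2-\operatorname{tr}(M)\,x+\det M$, so it suffices to compare traces and determinants of $AB$ and $BA$ on a made-up integer example:</p>

```python
def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tr = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

A, B = [[1, 2], [3, 4]], [[0, 5], [6, 7]]
AB, BA = mul(A, B), mul(B, A)
# Same trace and determinant => same characteristic polynomial (2x2 case).
assert tr(AB) == tr(BA)
assert det(AB) == det(BA)
```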
| <p>If $A$ is invertible then $A^{-1}(AB)A= BA$, so $AB$ and $BA$ are similar, which implies (but is stronger than) $AB$ and $BA$ have the same minimal polynomial and the same characteristic polynomial.
The same goes if $B$ is invertible.</p>
<p>In general, from the above observation, it is not too difficult to show that $AB$ and $BA$ have the same characteristic polynomial, though the type of proof can depend on the field considered for the coefficients of your matrices.
If the matrices are in $\mathcal{M}_n(\mathbb C)$, you use the fact that $\operatorname{GL}_n(\mathbb C)$ is dense in $\mathcal{M}_n(\mathbb C)$ and the continuity of the function which maps a matrix to its characteristic polynomial. There are at least 5 other ways to proceed (especially for fields other than $\mathbb C$).</p>
<p>In general $AB$ and $BA$ do not have the same minimal polynomial. I'll let you search a bit for a counter example.</p>
|
matrices | <p>Another <a href="https://math.stackexchange.com/questions/4101200/a-3-times3-triangular-matrix-has-a-repeated-eigenvalue-if-it-is-the-square-of#4101200">question</a> brought this up. The only definition I have ever seen for a matrix being upper triangular is, written in component forms, "all the components below the main diagonal are zero." But of course that property is basis dependent. It is not preserved under change of basis.</p>
<p>Yet it doesn't seem as if it would be purely arbitrary because the product of upper triangular matrices is upper triangular, and so forth. It has closure. Is there some other sort of transformation besides a basis transformation that might be relevant here? It seems as if a set of matrices having this property should have some sort of invariants.</p>
<p>Is there some sort of isomorphism between the sets of upper triangular matrices in different bases?</p>
| <p>Many true things can be said about upper-triangular matrices, obviously... :)</p>
<p>In my own experience, a useful more-functional (rather than notational) thing that can be said is that the subgroup of <span class="math-container">$GL_n$</span> consisting of upper-triangular matrices is the <em>stabilizer</em> of the <em>flag</em> (nested sequence) of subspaces consisting of the span of <span class="math-container">$e_1$</span>, the span of <span class="math-container">$e_1$</span> and <span class="math-container">$e_2$</span>, ... with standard basis vectors.</p>
<p>Concretely, this means the following. The matrix multiplication of a triangular matrix <span class="math-container">$A$</span> and <span class="math-container">$e_1$</span>, <span class="math-container">$Ae_1$</span>, is equal to a multiple of <span class="math-container">$e_1$</span>, right? However, <span class="math-container">$Ae_2$</span> is more than a multiple of <span class="math-container">$e_2$</span>: it can be any linear combination of <span class="math-container">$e_1$</span> and <span class="math-container">$e_2$</span>. Generally, if you set <span class="math-container">$V_i= \operatorname{span}(e_1, \ldots, e_i) $</span>, try to show that <span class="math-container">$A$</span> is upper triangular if and only if <span class="math-container">$A(V_i) \subseteq V_i$</span>. The nested sequence of spaces</p>
<p><span class="math-container">$$ 0 = V_0 \subset V_1 \subset \ldots \subset V_n = \mathbb{R}^n$$</span></p>
<p>is called a <em>flag</em> of the total space.</p>
<p>One proves a lemma that <em>any</em> maximal chain of subspaces can be mapped to that "standard" chain by an element of <span class="math-container">$GL_n$</span>. In other words, no matter which basis you are using: being triangular intrinsically means respecting a flag with <span class="math-container">$\dim(V_i) = i$</span> (the last condition translates the <em>maximality</em> of the flag).</p>
<p>As Daniel Schepler aptly commented, while an ordered basis gives a maximal flag, a maximal flag does <em>not</em> quite specify a basis. There are more things that can be said about flags versus bases... unsurprisingly... :)</p>
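<p>The equivalence "upper triangular $\iff A(V_i)\subseteq V_i$ for the standard flag" amounts to a statement about matrix entries, which a small sketch can illustrate (the matrices below are made-up examples):</p>

```python
def preserves_standard_flag(A):
    """True iff A maps V_i = span(e_1, ..., e_i) into itself for every i."""
    n = len(A)
    # A e_j is column j; it lies in V_j exactly when entries below row j vanish.
    return all(A[i][j] == 0 for j in range(n) for i in range(j + 1, n))

upper = [[1, 2, 3],
         [0, 4, 5],
         [0, 0, 6]]
not_upper = [[1, 2, 3],
             [0, 4, 5],
             [7, 0, 6]]
assert preserves_standard_flag(upper)
assert not preserves_standard_flag(not_upper)
```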
| <p>Being upper triangular is not a property of linear transformations unless you have an ordered basis. Even changing the order of a basis can change an upper triangular matrix to a matrix which is not, or vice versa.</p>
<p>The upper triangular <span class="math-container">$n\times n$</span> matrices are the second most simple form of an <a href="https://en.wikipedia.org/wiki/Incidence_algebra" rel="noreferrer">incidence algebra</a>, where we take the partially ordered set to be <span class="math-container">$\{1,2,\dots,n\}$</span> with the usual order.</p>
<p>The most trivial incidence algebra is the algebra of diagonal matrices, where the order is <span class="math-container">$i\preccurlyeq j$</span> iff <span class="math-container">$i=j.$</span></p>
<p>One interesting thing about upper-triangular matrices is that they form an algebra even when infinite-dimensional. This is because, while multiplication requires infinite sums, all but finitely many of them are zero. You can even take upper triangular matrices with rows/columns infinite in both directions by using <span class="math-container">$(\mathbb Z,\leq)$</span> for your Poset.</p>
<p>(In a general infinite poset <span class="math-container">$P$</span> what is required for the algebra’s multiplication to be well-defined is for the poset to be “locally finite.”)</p>
|
linear-algebra | <p>I am looking for the terms to use for particular types of diagonals in two dimensional matrices. I have heard the longest diagonal, from top-left element and in the direction down-right often called the "leading diagonal". </p>
<p>-What about the 'other' diagonal, from the top right going down-left? Does it have a common name? </p>
<p>-Any general name to use for any diagonal going top-left to bottom-right direction, not necessarily just the longest one(s)? I am searching for a term to distinguish between these types of diagonals and the ones in the 'other' direction. Preferably the term should not be restricted to square matrices. </p>
| <p>$\diagdown$ Major, Principal, Primary, Main; diagonal $\diagdown$ </p>
<p>$\diagup$ Minor, Counter, Secondary, Anti-; diagonal $\diagup$ </p>
| <p>The general term for <em>any</em> diagonal going top-left to bottom-right direction is <a href="https://en.wikipedia.org/wiki/Diagonal#Matrices" rel="noreferrer">$k$-diagonal</a> where $k$ is an offset form the main diagonal.</p>
<p>$k=1$ is the <a href="http://mathworld.wolfram.com/Superdiagonal.html" rel="noreferrer">superdiagonal</a>,
$k=0$ is the main diagonal, and $k=-1$ is the <a href="http://mathworld.wolfram.com/Subdiagonal.html" rel="noreferrer">subdiagonal</a>.</p>
<p>According to Mathworld, the general term for the antidiagonals seems to be <a href="http://mathworld.wolfram.com/SkewDiagonal.html" rel="noreferrer">skew-diagonals</a>.</p>
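<p>As a small illustration, extracting the $k$-diagonal of a rectangular matrix takes one line (the helper below is hypothetical, mirroring what <code>numpy.diag(A, k)</code> does):</p>

```python
def kth_diagonal(A, k=0):
    """Entries A[i][i+k]: k = 0 main, k > 0 super-, k < 0 sub-diagonals."""
    rows, cols = len(A), len(A[0])
    return [A[i][i + k] for i in range(rows) if 0 <= i + k < cols]

A = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12]]
assert kth_diagonal(A, 0) == [1, 6, 11]    # main diagonal
assert kth_diagonal(A, 1) == [2, 7, 12]    # superdiagonal
assert kth_diagonal(A, -1) == [5, 10]      # subdiagonal
```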
|
combinatorics | <p>Source: <a href="http://www.math.uci.edu/%7Ekrubin/oldcourses/12.194/ps1.pdf" rel="noreferrer">German Mathematical Olympiad</a></p>
<h3>Problem:</h3>
<blockquote>
<p>On an arbitrarily large chessboard, a generalized knight moves by jumping p squares in one direction and q squares in a perpendicular direction, p, q > 0. Show that such a knight can return to its original position only after an even number of moves.</p>
</blockquote>
<h3>Attempt:</h3>
<p>Assume, wlog, the knight moves <span class="math-container">$q$</span> steps <strong>to the right</strong> after its <span class="math-container">$p$</span> steps. Let the valid moves for the knight be "LU", "UR", "DL", "RD", i.e. when it moves <strong>L</strong>eft, it has to go <strong>U</strong>p ("LU"), or when it goes <strong>U</strong>p, it has to go <strong>R</strong>ight ("UR"), and so on.</p>
<p>Let the knight be stationed at <span class="math-container">$(0,0)$</span>. We note that after any move its coordinates will be integer multiples of <span class="math-container">$p,q$</span>. Let its final position be <span class="math-container">$(pk, qr)$</span> for <span class="math-container">$ k,r\in\mathbb{Z}$</span>. We follow the sign conventions of the coordinate system.</p>
<p>Let the knight move by <span class="math-container">$-pk$</span> horizontally and <span class="math-container">$-qk$</span> vertically by repeated application of one move. So its new position is <span class="math-container">$(0,q(r-k))$</span>. I am thinking that somehow I need to cancel that <span class="math-container">$q(r-k)$</span> to reach <span class="math-container">$(0,0)$</span>, but I am not able to do so.</p>
<p>Any hints please?</p>
| <p>Case I: If $p+q$ is odd, then the knight's square changes colour after each move, so we are done.</p>
<p>Case II: If $p$ and $q$ are both odd, then the $x$-coordinate changes by an odd number after every move, so it is odd after an odd number of moves. So the $x$-coordinate can be zero only after an even number of moves.</p>
<p>Case III: If $p$ and $q$ are both even, we can keep dividing each of them by $2$ until we reach Case I or Case II. (Dividing $p$ and $q$ by the same amount doesn't change the shape of the knight's path, only its size.)</p>
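<p>The parity claim can also be corroborated by brute force for small cases (an illustrative check, not a proof): enumerate all move sequences up to a fixed length and verify that none of odd length returns to the origin.</p>

```python
from itertools import product

def closed_walk_lengths(p, q, max_len):
    """Lengths n <= max_len of move sequences that return to the origin."""
    moves = [(p, q), (q, p), (-p, q), (-q, p),
             (p, -q), (q, -p), (-p, -q), (-q, -p)]
    lengths = set()
    for n in range(1, max_len + 1):
        for seq in product(moves, repeat=n):
            if sum(m[0] for m in seq) == 0 and sum(m[1] for m in seq) == 0:
                lengths.add(n)
    return lengths

# Covers all three cases: p+q odd, both odd, both even.
for p, q in [(1, 2), (1, 1), (3, 1), (2, 2)]:
    assert all(n % 2 == 0 for n in closed_walk_lengths(p, q, 5))
```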
| <p>This uses complex numbers.</p>
<p>Define $z=p+qi$. Say that the knight starts at $0$ on the complex plane. Note that, in one move, the knight may add or subtract $z$, $iz$, $\bar z$, $i\bar z$ to his position.</p>
<p>Thus, at any point, the knight is at a point of the form:
$$(a+bi)z+(c+di)\bar z$$
where $a$, $b$, $c$, and $d$ are integers.</p>
<p>Note that the parity (evenness/oddness) of the quantity $a+b+c+d$ changes after every move. This means it's even after an even number of moves and odd after an odd number of moves. Also note that:
$$a+b+c+d\equiv a^2+b^2-c^2-d^2\pmod2$$
(This is because $x\equiv x^2\pmod2$ and $x\equiv-x\pmod2$ for all $x$.)</p>
<p>Now, let's say that the knight has reached its original position. Then:
\begin{align}
(a+bi)z+(c+di)\bar z&=0\\
(a+bi)z&=-(c+di)\bar z\\
|a+bi||z|&=|c+di||z|\\
|a+bi|&=|c+di|\\
\sqrt{a^2+b^2}&=\sqrt{c^2+d^2}\\
a^2+b^2&=c^2+d^2\\
a^2+b^2-c^2-d^2&=0\\
a^2+b^2-c^2-d^2&\equiv0\pmod2\\
a+b+c+d&\equiv0\pmod2
\end{align}
Thus, the number of moves is even.</p>
<blockquote>
<p>Interestingly, this implies that $p$ and $q$ do not need to be integers. They can each be any real number. The only constraint is that we can't have $p=q=0$.</p>
</blockquote>
|
geometry | <p><a href="http://math.gmu.edu/~eobrien/Venn4.html" rel="noreferrer">This page</a> gives a few examples of Venn diagrams for <span class="math-container">$4$</span> sets. Some examples:<br>
<img src="https://i.sstatic.net/fHbmV.gif" alt="alt text">
<img src="https://i.sstatic.net/030jM.gif" alt="alt text"><br>
Thinking about it for a little, it is impossible to partition the plane into the <span class="math-container">$16$</span> segments required for a complete <span class="math-container">$4$</span>-set Venn diagram using only circles as we could do for <span class="math-container">$<4$</span> sets. Yet it is doable with ellipses or rectangles, so we don't require non-convex shapes as <a href="http://en.wikipedia.org/wiki/Venn_diagram#Edwards.27_Venn_diagrams" rel="noreferrer">Edwards</a> uses. </p>
<p>So what properties of a shape determine its suitability for <span class="math-container">$n$</span>-set Venn diagrams? Specifically, why are circles not good enough for the case <span class="math-container">$n=4$</span>?</p>
| <p>The short answer, from a <a href="http://www.ams.org/notices/200611/ea-wagon.pdf" rel="noreferrer">paper</a> by Frank Ruskey, Carla D. Savage, and Stan Wagon is as follows:</p>
<blockquote>
<p>... it is impossible to draw a Venn diagram with circles that will represent all the possible intersections of four (or more) sets. This is a simple consequence of the fact that circles can finitely intersect in at most two points and <a href="http://en.wikipedia.org/wiki/Planar_graph#Euler.27s_formula" rel="noreferrer">Euler’s relation</a> F − E + V = 2 for the number of faces, edges, and vertices in a plane graph.</p>
</blockquote>
<p>The same paper goes on in quite some detail about the process of creating Venn diagrams for higher values of <em>n</em>, especially for simple diagrams with rotational symmetry.</p>
<p>For a simple summary, the best answer I could find was on <a href="http://wiki.answers.com/Q/How_do_you_solve_a_four_circle_venn_diagram" rel="noreferrer">WikiAnswers</a>:</p>
<blockquote>
<p>Two circles intersect in at most two points, and each intersection creates one new region. (Going clockwise around the circle, the curve from each intersection to the next divides an existing region into two.)</p>
<p>Since the fourth circle intersects the first three in at most 6 places, it creates at most 6 new regions; that's 14 total, but you need 2^4 = 16 regions to represent all possible relationships between four sets.</p>
<p>But you can create a Venn diagram for four sets with four ellipses, because two ellipses can intersect in more than two points.</p>
</blockquote>
<p>Both of these sources indicate that the critical property of a shape that would make it suitable or unsuitable for higher-order Venn diagrams is the number of possible intersections (and therefore, sub-regions) that can be made using two of the same shape.</p>
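<p>The counting in the quote can be turned into a quick check: if the $k$-th circle meets the previous $k-1$ circles in at most $2(k-1)$ points, each new intersection adding one region, then $n$ circles yield at most $n^2-n+2$ regions, which first falls behind the required $2^n$ at $n=4$ (a sketch; the function name is mine):</p>

```python
def max_circle_regions(n):
    """Upper bound on plane regions cut out by n circles: the first
    circle splits the plane into 2 regions, and the k-th circle
    (k >= 2) adds at most 2*(k-1) new regions."""
    regions = 2
    for k in range(2, n + 1):
        regions += 2 * (k - 1)
    return regions          # closed form: n**2 - n + 2

for n in range(1, 6):
    print(n, max_circle_regions(n), 2 ** n)
# n = 4 is the first failure: at most 14 regions, but 16 are needed
```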
<p>To illustrate further, consider some of the complex shapes used for <em>n</em>=5, <em>n</em>=7 and <em>n</em>=11 (from <a href="http://mathworld.wolfram.com/VennDiagram.html" rel="noreferrer">Wolfram Mathworld</a>):</p>
<p><img src="https://i.sstatic.net/JnpeY.jpg" alt="Venn diagrams for n=5, 7 and 11" /></p>
<p>The structure of these shapes is chosen such that they can intersect with each-other in as many different ways as required to produce the number of unique regions required for a given <em>n</em>.</p>
<p>See also: <a href="http://www.brynmawr.edu/math/people/anmyers/PAPERS/Venn.pdf" rel="noreferrer">Are Venn Diagrams Limited to Three or Fewer Sets?</a></p>
| <p>To our surprise, we found that the standard proof that a rotationally symmetric <span class="math-container">$n$</span>-Venn diagram is impossible when <span class="math-container">$n$</span> is not prime is incorrect. So Peter Webb and I found and published a correct proof that addresses the error. The details are all discussed at the paper</p>
<p>Stan Wagon and Peter Webb, <a href="https://www-users.cse.umn.edu/%7Ewebb/Publications/WebbWagonVennNote8.pdf" rel="nofollow noreferrer">Venn symmetry and prime numbers: A seductive proof revisited</a>, American Mathematical Monthly, August 2008, pp 645-648.</p>
<p>We discovered all this after the long paper with Savage et al. cited in another answer.</p>
|
differentiation | <p>I stumbled upon this very peculiar function last summer, namely $f(x)=x^{x^{x^{...^{x}}}}$, where there is a number $n$ of $x$'s in the exponent. I tried to find the derivative of the function and I was successful; it turned out not to be the most elegant formula, but it worked. (Firstly, I invented a new notation: a function such as $f(x)$ can be written as $f(x) =x^{\langle x \vert n\rangle}$, where $x$ is the exponent that is getting "powered" up $n$ times.) The formula I obtained by pattern matching was: $$f^{\prime}(x)=x^{\langle x \vert n\rangle +\langle x \vert n-1\rangle -1}\left[1+\prod_{i=0}^{n-2}x^{\langle x \vert i\rangle}\cdot \ln(x)^n+\sum_{j=1}^{n-1}\prod_{k=n-1-j}^{n-2}x^{\langle x \vert k\rangle}\cdot \ln(x)^j\right]\tag{$n\geqslant 2$}.$$ I know this looks like a mad mess, and I am aware that people <a href="https://math.stackexchange.com/questions/534820/how-can-we-calculate-xx">like this</a> have done it more elegantly, but now for the question: this is only the first derivative of the function; is there a way, or rather, is there a general, i.e. an $n^{th}$, derivative of this function? </p>
<p><strong>Update: December 23rd</strong></p>
<p>I have tried to approach the problem myself since I asked the question and I have not gotten to a stage to say if it is impossible or possible to do, however I think I am on the right track. At first, I thought of distributing the factor $x^{\langle x \vert n \rangle +\langle x \vert n-1 \rangle -1}$ to all the terms in the parentheses, but I quickly realized I had to deal with at least derivatives of triple products. Now I have come to realize that the easiest way is to differentiate the function just as it is and get a normal product and thus I must use the following formula: $$(f \cdot g)^{(n)}=\sum_{k=0}^{n}{n\choose k}f^{(k)}\cdot g^{(n-k)}$$ where $f =x^{\langle x \vert n \rangle +\langle x \vert n-1 \rangle -1}$ and $g=1+\prod_{i=0}^{n-2}x^{\langle x \vert i\rangle}\cdot \ln(x)^n+\sum_{j=1}^{n-1}\prod_{k=n-1-j}^{n-2}x^{\langle x \vert k\rangle}\cdot \ln(x)^j$. Since $k$ and $n-k$ are arbitrary numbers this leads us to find the general derivative for $f$ and $g$, this is where I am right now. (I do realize that I am trying to find the $n^{th}$ derivative of the first derivative but that is easily fixed later). Please come with suggestions on how to tackle this problem. </p>
<p><strong>Update: December 24th</strong></p>
<p>I have made progress with the help of Maple 17, namely, I have found a repeating pattern in at least a part of the general derivative, but there is still a part of it I cannot yet explain. Nonetheless, I present to you the part of the general derivative I have found: $$D_x^{\xi}f(x) = x^{\langle x \vert n\rangle +\langle x \vert n-1\rangle -\xi} \Big[(-1)^{\xi}\cdot\xi! +O(x)\Big]$$</p>
<p>I renamed the degree of the derivative as $\xi$ since $n$ is taken for the number of $x$s. The $O(x)$ is the (perhaps) series which I am currently working on finding; I do think I am on the right track, though. The approach above with the product rule turned out to be less successful.</p>
| <p>I found a method of doing the first derivative but it's relatively messy and makes you go all the way down the "n" chain as it were. Hopefully my ideas can give you some inspiration or spark a secondary idea.</p>
<p>I am using <span class="math-container">$^nx$</span> as the nth tetration of x.</p>
<p>What I did to reach my answer was start taking the derivatives of <span class="math-container">$^nx$</span> with increasing values of <span class="math-container">$n$</span> using <span class="math-container">$e^{\ln x}$</span>. So for n=1 you obviously get
<span class="math-container">$$\frac{d}{dx}(^1x) = \frac{d}{dx}(x) = 1$$</span></p>
<p>For <span class="math-container">$n=2$</span> you get
<span class="math-container">$$\frac{d}{dx}(^2x) = \frac{d}{dx}(x^x)$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{\ln x^x})$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{x\ln x})$$</span>
<span class="math-container">$$=e^{x\ln x}\frac{d}{dx}(x\ln x)$$</span>
<span class="math-container">$$=(^2x)\Bigl(\ln x\frac{d}{dx}(x)+x\frac{d}{dx}(\ln x)\Bigl)$$</span>
We already know the value of <span class="math-container">$\frac{d}{dx}(x)$</span> from the last problem, so we can plug it right in.
<span class="math-container">$$=(^2x)\Bigl(\ln x+\frac{x}{x}\Bigl)$$</span>
<span class="math-container">$$=(^2x)(\ln x+1)$$</span></p>
<p>For n=3 you get
<span class="math-container">$$\frac{d}{dx}(^3x) = \frac{d}{dx}(x^{x^x})$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{\ln(x^{x^x})})$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{(x^x)(\ln x)})$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{(e^{\ln(x^x)})(\ln x)})$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{(e^{x\ln(x)})(\ln x)})$$</span>
<span class="math-container">$$=(^3x)\frac{d}{dx}(e^{x\ln(x)})(\ln x)$$</span>
<span class="math-container">$$=(^3x)\Bigl((\ln x)\frac{d}{dx}(e^{x\ln(x)})+(e^{x\ln(x)})\frac{d}{dx}(\ln x)\Bigl)$$</span>
Note here that <span class="math-container">$e^{x\ln(x)}$</span> equals <span class="math-container">$^2x$</span>. This means that we can substitute in values we already know, just like in the last problem, and a pattern starts to emerge.
<span class="math-container">$$=(^3x)\Bigl((\ln x)\frac{d}{dx}(^2x)+(^2x)\frac{d}{dx}(\ln x)\Bigl)$$</span>
<span class="math-container">$$=(^3x)\Bigl((\ln x)\frac{d}{dx}(^2x)+\frac{^2x}{x}\Bigl)$$</span>
We don't need to plug in the end result of <span class="math-container">$\frac{d}{dx}(e^{x\ln(x)})$</span> here, because the form the equation is in now will end up fitting our generalization later.</p>
<p>For now, let's check by plugging in <span class="math-container">$n=4$</span>
<span class="math-container">$$\frac{d}{dx}(^4x) = \frac{d}{dx}(x^{x^{x^x}})$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{\ln(x^{x^{x^x}})})$$</span>
<span class="math-container">$$.$$</span>
<span class="math-container">$$.$$</span>
<span class="math-container">$$.$$</span>
<span class="math-container">$$=\frac{d}{dx}(e^{(e^{(e^{x\ln(x)})(\ln x)})(\ln x)})$$</span>
<span class="math-container">$$=(^4x)\frac{d}{dx}\Bigl((e^{(e^{x\ln(x)})(\ln x)})(\ln x)\Bigl)$$</span>
<span class="math-container">$$=(^4x)\Bigl((\ln x)\frac{d}{dx}(e^{(e^{x\ln(x)})(\ln x)})+(e^{(e^{x\ln(x)})(\ln x)})\frac{d}{dx}(\ln x)\Bigl)$$</span>
Here again, <span class="math-container">$e^{(e^{x\ln(x)})(\ln x)}$</span> is the same as <span class="math-container">$^3x$</span>. So if we rewrite our equation as
<span class="math-container">$$(^4x)\Bigl((\ln x)\frac{d}{dx}(^3x)+\frac{^3x}{x}\Bigl)$$</span>
we can see that a general form of the first derivative can be written as
<span class="math-container">$$\frac{d}{dx}(^nx) = (^nx)\Bigl((\ln x)\frac{d}{dx}(^{n-1}x)+\frac{^{n-1}x}{x}\Bigl)$$</span></p>
<p>Obviously this has the problem of relying on the derivatives down the power tower, but I think this could have interesting applications. Hope these inane ramblings of a 17 year old help!</p>
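<p>The closing recursion is easy to sanity-check numerically against a finite difference (a quick sketch; the function names are mine):</p>

```python
import math

def tetration(x, n):
    """n-th power tower ^n x (n = 0 gives 1)."""
    t = 1.0
    for _ in range(n):
        t = x ** t
    return t

def d_tetration(x, n):
    """d/dx (^n x) via the recursion derived above:
    d(^n x) = (^n x) * (ln(x) * d(^{n-1} x) + (^{n-1} x) / x)."""
    d = 0.0                          # derivative of ^0 x = 1
    for k in range(1, n + 1):
        d = tetration(x, k) * (math.log(x) * d + tetration(x, k - 1) / x)
    return d

# compare with a central finite difference at x = 1.5, n = 3
x, n, h = 1.5, 3, 1e-6
fd = (tetration(x + h, n) - tetration(x - h, n)) / (2 * h)
print(abs(d_tetration(x, n) - fd) < 1e-4)  # True
```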
| <p>Excellent question, and a good result. I am also impressed that you have developed your own notation. That is often a very effective way of getting to grips with a problem, especially one that has not yet become popular. I think there is a trend, however.</p>
<p>The notation tends to evolve with use. The notation here involves a redundancy which one can ill afford in a subject already pushing against conceptual boundaries. If you study your remarkable formula for the derivative, you will see that all the references to tetration involve the incomplete symbol: <span class="math-container">$x^{<x\mid...}$</span></p>
<p>In this usage, the initial exponent symbol <span class="math-container">$x$</span> is redundant, and complicates the expression. Thus the evolutionary pressure of being concise will force the rejection of this appendage, and one may use the symbol <span class="math-container">$\langle x \vert n \rangle$</span> by itself. This is conveniently defined by (if I have understood correctly):
<span class="math-container">$$ \langle x \vert 0 \rangle = 1 \\
\langle x \vert n+1 \rangle = x^{\langle x \vert n \rangle}
$$</span>
For the purpose of differentiation, the logarithm is useful i.e. since
<span class="math-container">$$ \ln \langle x \vert n+1 \rangle = \langle x \vert n \rangle \ln \;x
$$</span> we obtain :
<span class="math-container">$$\frac{\langle x \vert n+1 \rangle'} {\langle x \vert n+1 \rangle} = \frac{ \langle x \vert n \rangle }{x} \left( \frac{\langle x \vert n \rangle'}{\langle x \vert n \rangle}x\ln \;x + 1 \right)
$$</span>Perhaps as might be expected, the logarithmic derivative <span class="math-container">$\frac{f'}f$</span> looms large here, and it is hardly surprising to see the "entropy" function also make an appearance.</p>
<p>We may abbreviate the form considerably if we define:
<span class="math-container">$$T^n(x) = \frac{\langle x \vert n \rangle'}{\langle x \vert n \rangle}
$$</span> So that we have a form fairly well-suited to recursive evaluation :
<span class="math-container">$$T^{n+1}(x) = \frac{ \langle x \vert n \rangle }{x} \left( x \ln\ x\; T^n(x)+ 1 \right)
$$</span>
Congratulations on your achievement! I hope these casual remarks will be of some use or interest.</p>
|
number-theory | <p>Perhaps, I've been thinking too long about <a href="https://web.archive.org/web/20200224014929/http://www.zyymat.com:80/ramanujans-proof-of-bertrands-postulate.html" rel="nofollow noreferrer">Ramanujan's proof</a>, but it appears to me that his argument can be generalized beyond <span class="math-container">$x$</span> and <span class="math-container">$2x$</span>. My argument below attempts to show that for <span class="math-container">$x \ge 1331$</span>, there is always a prime between <span class="math-container">$4x$</span> and <span class="math-container">$5x$</span>.</p>
<p>I can use a similar argument to establish there is a prime between <span class="math-container">$2x$</span> and <span class="math-container">$3x$</span> and between <span class="math-container">$3x$</span> and <span class="math-container">$4x$</span>. Based on some rough estimates, it looks like it should also work to prove a prime between <span class="math-container">$5x$</span> and <span class="math-container">$6x$</span> as well as a prime between <span class="math-container">$6x$</span> and <span class="math-container">$7x$</span>.</p>
<p>Since I am still getting up to speed on analytic number theory, I will be very glad if someone can point out the mistake that I am making in my reasoning. I am not yet able to find it.</p>
<p>Let <span class="math-container">$$\vartheta(x) = \sum_{p \le x}\ln(p)$$</span></p>
<p>Let <span class="math-container">$$\psi(x) = \sum_{n=1}^{\infty}\vartheta(x^{\frac{1}{n}})$$</span></p>
<p>Following Ramanujan [see (6)]:</p>
<p><span class="math-container">$$\psi(x) - 2\psi(\sqrt{x}) \le \vartheta(x) \le \psi(x)$$</span></p>
<p>Analogous to Ramanujan's statement about:</p>
<p><span class="math-container">$$\ln(\lfloor x\rfloor!) - \ln(\lfloor\frac{x}{2}\rfloor!) - \ln(\lfloor\frac{x}{2}\rfloor!) = \psi(x) - \psi(\frac{x}{2}) + \psi(\frac{x}{3}) - \psi(\frac{x}{4}) + \ldots$$</span></p>
<p>Here's my restatement in terms of <span class="math-container">$4x$</span> and <span class="math-container">$5x$</span>:</p>
<p><span class="math-container">$$\ln(\lfloor\frac{x}{4}\rfloor!) - \ln(\lfloor\frac{x}{5}\rfloor!) - \ln(\lfloor\frac{x}{20}\rfloor!) = \psi(\frac{x}{4}) - \psi(\frac{x}{5}) + \psi(\frac{x}{8}) - \psi(\frac{x}{10}) + \ldots$$</span></p>
<p>where for each successive term we can see:</p>
<p><span class="math-container">$$\psi(\frac{x}{4}) \ge \psi(\frac{x}{5}) \ge \psi(\frac{x}{8}) \ge \psi(\frac{x}{10}) \ge \ldots$$</span></p>
<p>Since, for any integer <span class="math-container">$v \ge 1$</span>, we have:</p>
<p><span class="math-container">$$\psi(\frac{x}{20v+4}) - \psi(\frac{x}{20v+5}) + \psi(\frac{x}{20v+8})-\psi(\frac{x}{20v+10})+\psi(\frac{x}{20v+12}) -\psi(\frac{x}{20v+15}) + \psi(\frac{x}{20v+16}) - \psi(\frac{x}{20v+20}) + \ldots$$</span></p>
<p>That is, the terms form a decreasing sequence of real numbers tending to 0, with alternating signs, so each such tail sum is nonnegative.</p>
<p>So, based on reasoning found <a href="https://math.stackexchange.com/a/354350/48606">here</a>, it follows:</p>
<p><span class="math-container">$$\psi(\frac{x}{4}) - \psi(\frac{x}{5}) + \psi(\frac{x}{8}) - \psi(\frac{x}{10}) + \psi(\frac{x}{12}) \ge \ln(\lfloor\frac{x}{4}\rfloor!) - \ln(\lfloor\frac{x}{5}\rfloor!) -\ln(\lfloor\frac{x}{20}\rfloor!)$$</span></p>
<p>From <span class="math-container">$\psi(x) - 2\psi(\sqrt{x}) \le \vartheta(x) \le \psi(x)$</span>, it follows that:</p>
<p><span class="math-container">$$\psi(\frac{x}{4}) - \psi(\frac{x}{5}) + \psi(\frac{x}{8}) - \psi(\frac{x}{10}) + \psi(\frac{x}{12}) \le \vartheta(\frac{x}{4}) - \vartheta(\frac{x}{5}) + 2\psi(\sqrt{\frac{x}{4}}) + \psi(\frac{x}{8}) - \psi(\frac{x}{10}) + \psi(\frac{x}{12})$$</span></p>
<p>Using the same reasoning as above, it can be noted that:</p>
<p><span class="math-container">$$\psi(\frac{x}{10}) - \psi(\frac{x}{12}) \le \ln(\lfloor\frac{x}{10}\rfloor!) - \ln(\lfloor\frac{x}{12}\rfloor!) - \ln(\lfloor\frac{x}{60}\rfloor!)$$</span></p>
<p>So that we have:</p>
<p><span class="math-container">$$\psi(\frac{x}{4}) - \psi(\frac{x}{5}) + \psi(\frac{x}{8}) - \psi(\frac{x}{10}) + \psi(\frac{x}{12}) \le \vartheta(\frac{x}{4}) - \vartheta(\frac{x}{5}) + 2\psi(\sqrt{\frac{x}{4}}) + \psi(\frac{x}{8}) - [ \ln(\lfloor\frac{x}{10}\rfloor!) - \ln(\lfloor\frac{x}{12}\rfloor!) - \ln(\lfloor\frac{x}{60}\rfloor!) ]$$</span></p>
<p>which implies:</p>
<p><span class="math-container">$$\vartheta(\frac{x}{4}) - \vartheta(\frac{x}{5}) \ge \ln(\lfloor\frac{x}{4}\rfloor!) - \ln(\lfloor\frac{x}{5}\rfloor!) -\ln(\lfloor\frac{x}{20}\rfloor!) - 2\psi(\sqrt{\frac{x}{4}}) - \psi(\frac{x}{8}) + \ln(\lfloor\frac{x}{10}\rfloor!) - \ln(\lfloor\frac{x}{12}\rfloor!) - \ln(\lfloor\frac{x}{60}\rfloor!)$$</span></p>
<p>From <a href="https://projecteuclid.org/journals/illinois-journal-of-mathematics/volume-6/issue-1/Approximate-formulas-for-some-functions-of-prime-numbers/10.1215/ijm/1255631807.full" rel="nofollow noreferrer">Rosser and Schoenfeld (1961)</a>, we know that (see Theorem 12):</p>
<p><span class="math-container">$$\psi(x) < 1.03883x$$</span></p>
<p>So that:</p>
<p><span class="math-container">$$\vartheta(\frac{x}{4}) - \vartheta(\frac{x}{5}) \ge \ln(\lfloor\frac{x}{4}\rfloor!) - \ln(\lfloor\frac{x}{5}\rfloor!) -\ln(\lfloor\frac{x}{20}\rfloor!) - 2(1.03883)(\sqrt{\frac{x}{4}}) - (1.03883)(\frac{x}{8}) + \ln(\lfloor\frac{x}{10}\rfloor!) - \ln(\lfloor\frac{x}{12}\rfloor!) - \ln(\lfloor\frac{x}{60}\rfloor!)$$</span></p>
<p>Based on <a href="https://en.wikipedia.org/wiki/Stirling_approximation" rel="nofollow noreferrer">Stirling's Approximation</a> and my reasoning found <a href="https://math.stackexchange.com/questions/355598/analyzing-the-lower-bound-of-a-logarithm-of-factorials-using-stirlings-approxim">here</a>, it follows that <span class="math-container">$\vartheta(\frac{x}{4}) - \vartheta(\frac{x}{5}) > 0$</span> for <span class="math-container">$x \ge 1331$</span></p>
<p>I have also verified that for <span class="math-container">$1331 > x > 2$</span>, there is always a prime between <span class="math-container">$5x$</span> and <span class="math-container">$4x$</span> so if my argument is valid, this would be enough to establish that there is always a prime between <span class="math-container">$5x$</span> and <span class="math-container">$4x$</span> for <span class="math-container">$x \ge 3$</span>.</p>
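<p>(The finite verification mentioned above is a short computation; a sketch, with helper names of my own choosing:)</p>

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def prime_strictly_between(lo, hi):
    """True if some prime p satisfies lo < p < hi."""
    return any(is_prime(p) for p in range(lo + 1, hi))

# check that a prime lies between 4x and 5x for every integer 2 < x < 1331
print(all(prime_strictly_between(4 * x, 5 * x) for x in range(3, 1331)))  # True
```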
<p>Is this approach valid?</p>
<hr />
<p><strong>Update:</strong> I have found my mistake. The following step is invalid:</p>
<p><span class="math-container">$$\psi(\frac{x}{4}) - \psi(\frac{x}{5}) + \psi(\frac{x}{8}) - \psi(\frac{x}{10}) + \psi(\frac{x}{12}) \le \vartheta(\frac{x}{4}) - \vartheta(\frac{x}{5}) + 2\psi(\sqrt{\frac{x}{4}}) + \psi(\frac{x}{8}) - [ \ln(\lfloor\frac{x}{10}\rfloor!) - \ln(\lfloor\frac{x}{12}\rfloor!) - \ln(\lfloor\frac{x}{60}\rfloor!) ]$$</span></p>
<p><strong>Edit:</strong> I have added a clarification below on what type of answer I am looking for to this question.</p>
<p><strong>Clarification:</strong> I am especially interested in one of these answers to this question:</p>
<ul>
<li>Is this approach already "well-understood" (in which case, I would be interested in a reference)</li>
<li>Does this approach have "a major gap" (if so, which part of the argument is wrong or needs additional detail)</li>
<li>Could it be interesting "if it shows..." (what result is needed for this approach to be interesting to mathematician).</li>
<li>How could it be "improved and made more clear..." (what theorems or analytic techniques would strengthen or clarify the argument)</li>
<li>If the argument looks good, what would be the recommended next step from here?</li>
</ul>
| <p>Ramanujan's proof is actually a simplification of Chebyshev's original [1852] proof of Bertrand's postulate (the article is <em>Memoire sur les nombres premiers</em>, J. Math. Pures Appl. <strong>17</strong>, 1852). Chebyshev uses a stronger approach, proving a lot more than Bertrand's postulate; in particular, your statement follows directly from his bound, while his methods are essentially the same as Ramanujan's.</p>
<p>He starts by deriving the identity
$$ \log [x]! = \psi(x) + \psi\left(\frac x2 \right) + \psi\left(\frac x3 \right) + \dots $$
and then he uses it to derive the following identity (similar to Ramanujan's or yours, but stronger):</p>
<p>$$ \log \frac{ \left\lfloor x\right\rfloor! \left\lfloor \frac x{30} \right\rfloor!}{
\left\lfloor \frac x2 \right\rfloor!\left\lfloor \frac x3 \right\rfloor!\left\lfloor \frac x5 \right\rfloor!
} = \psi\left(x\right)-\psi\left(\frac{x}{6}\right)+\psi\left(\frac{x}{7}\right)
-\psi\left(\frac{x}{10}\right)+\psi\left(\frac{x}{11}\right)
-\psi\left(\frac{x}{12}\right)+\psi\left(\frac{x}{13}\right)
-\psi\left(\frac{x}{15}\right)+\psi\left(\frac{x}{17}\right)
-\psi\left(\frac{x}{18}\right)+\psi\left(\frac{x}{19}\right)
-\psi\left(\frac{x}{20}\right)+\psi\left(\frac{x}{23}\right)
-\psi\left(\frac{x}{24}\right)+\psi\left(\frac{x}{29}\right)
-\psi\left(\frac{x}{30}\right) + \dots - \dots
$$
where the sequence on the right continues with period 30 in the denominator, i.e. the first missing terms are $\psi(x/31)-\psi(x/36)+\psi(x/37)- \dots $.</p>
<p>As you can see this is similar to the argument you give above, as we have an alternating sequence of non-increasing terms, so you can bound it above and below stopping after an odd/even number of terms:
$$ \psi(x) - \psi\left(\frac x6\right) \le \log \frac{ \left\lfloor x\right\rfloor! \left\lfloor \frac x{30} \right\rfloor!}{
\left\lfloor \frac x2 \right\rfloor!\left\lfloor \frac x3 \right\rfloor!\left\lfloor \frac x5 \right\rfloor!
} \le \psi(x) $$</p>
<p>Now he uses Stirling to find the approximation
$$ Ax - \frac{5}{2}\log x - 1 < \log \frac{ \left\lfloor x\right\rfloor! \left\lfloor \frac x{30} \right\rfloor!}{
\left\lfloor \frac x2 \right\rfloor!\left\lfloor \frac x3 \right\rfloor!\left\lfloor \frac x5 \right\rfloor!
} < A x + \frac{5}{2}\log x $$
where
$$ A = \log \frac{2^{1/2}3^{1/3}5^{1/5}}{30^{1/30}} = 0.92129202 $$
which combined with the previous inequalities gives:
$$ \psi(x) > Ax -\frac{5}{2}\log x -1 \quad\text{and}\quad \psi(x)-\psi\left(\frac{x}{6}\right) < Ax + \frac{5}{2}\log x $$
He uses an auxiliary function to telescope this inequality and finds that:
$$ \psi(x) < \frac{6}{5}Ax + \frac{5}{4\log 6} \log^2 x + \frac{5}{4}\log x + 1 $$
And now he uses the inequality
$$ \psi(x) - 2\psi(\sqrt{x}) < \theta(x) < \psi(x) - \psi(\sqrt{x}) $$
to derive (after some technical work) that there are more than $k$ primes between $l$ and $L$ if
$$ l = \frac{5}{6} L - 2 \sqrt{L} - \frac{25 \log^2 L}{16\log 6 A}-\frac{5}{6A}\left(\frac{25}{4}+k\right)\log L - \frac{25}{6A} $$
So with $k = 0$ we find that there is a prime between $4x$ and $5x$ for $x \ge 2034$ and we can check numerically that the same holds for $x \ge 1331$.</p>
<p>(However this falls short of proving that there is always a prime between $5x$ and $6x$.)</p>
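<p>(As a sanity check, the constant $A$ above is easy to reproduce:)</p>

```python
import math

# A = log(2^(1/2) * 3^(1/3) * 5^(1/5) / 30^(1/30))
A = (math.log(2) / 2 + math.log(3) / 3 + math.log(5) / 5
     - math.log(30) / 30)
print(round(A, 8))  # 0.92129202
```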
<p>Some comments about your clarifications:</p>
<ul>
<li><p>As it has been commented there is an elementary proof of the prime number theorem so there is an elementary proof that there is a prime between $x$ and $(1 + \epsilon) x$ for all $\epsilon$. </p></li>
<li><p>This is the first result of its kind, but the bounds obtained by Chebyshev are not easy to improve without using the PNT. First, Sylvester [1881], <em>On Tchebycheff's theory of the totality of the prime numbers comprised within given limits</em>, improved the bounds on $\psi(x)$ to
$$ 0.95695 x \le \psi(x) \le 1.04423 x $$
using (I think) combinations of identities like Chebyshev's and Ramanujan's. I never was able to find the article, so I'm not sure about Sylvester's method or results (so please check them). With these inequalities it should be possible to prove the existence of a prime between $10x$ and $11x$ for large enough $x$. </p></li>
<li><p>In <a href="http://arxiv.org/pdf/0709.1977v1.pdf" rel="noreferrer">http://arxiv.org/pdf/0709.1977v1.pdf</a> you can find a list of all possible identities that can be used almost directly in Chebyshev's method, there are several infinite families and 52 isolated identities. (however I believe Chebyshev's identity is the one that gives the best possible bound using only his method). </p></li>
<li><p>I think using Rosser and Schoenfeld's result $\psi(x)<1.03883x$ is (in some sense) a major <em>gap</em> in your argument. I don't know how it is derived, but, as there are a lot of results in that article that imply the existence of a prime between $kx$ and $(k+1)x$ for every prime, I would avoid it to prevent circular reasoning. </p></li>
</ul>
| <p>See and search for Generalized Ramanujan Primes</p>
<p><a href="https://en.wikipedia.org/wiki/Ramanujan_prime#Generalized_Ramanujan_primes" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Ramanujan_prime#Generalized_Ramanujan_primes</a></p>
|
linear-algebra | <p>Simply as the title says. I've done some research, but still haven't arrived at an answer I am satisfied with. I know the answer varies in different fields, but in general, why would someone study linear algebra?</p>
| <p>Linear algebra is vital in multiple areas of <strong>science</strong> in general. Because linear equations are so easy to solve, practically every area of modern science contains models where equations are approximated by linear equations (using Taylor expansion arguments) and solving for the system helps the theory develop. Beginning to make a list wouldn't even be relevant; you and I have no idea how much people abuse the power of linear algebra to approximate solutions to equations. Since in most cases solving equations is a synonym for solving a practical problem, this can be VERY useful. Just for this reason, linear algebra has a reason to exist, and it is enough reason for any scientist to know linear algebra.</p>
<p>More specifically, in mathematics, linear algebra has, of course, its use in abstract algebra ; vector spaces arise in many different areas of algebra such as group theory, ring theory, module theory, representation theory, Galois theory, and much more. Understanding the tools of linear algebra gives one the ability to understand those theories better, and some theorems of linear algebra require also an understanding of those theories ; they are linked in many different intrinsic ways.</p>
<p>Outside of algebra, a big part of analysis, called <em>functional analysis</em>, is actually the infinite-dimensional version of linear algebra. In infinite dimension, most of the finite-dimension theorems break down in a very interesting way; some of our intuition is preserved, but most of it breaks down. Of course, none of the algebraic intuition goes away, but most of the analytic part does; closed balls are never compact, norms are not always equivalent, and the structure of the space changes a lot depending on the norm you use. Hence even for someone studying analysis, understanding linear algebra is vital.</p>
<p>In other words, if you wanna start thinking, learn how to think straight (linear) first. =)</p>
<p>Hope that helps,</p>
| <p>Having studied Engineering, I can tell you that Linear Algebra is fundamental and an extremely powerful tool in <strong>every single</strong> discipline of Engineering.</p>
<p>If you are reading this and considering learning linear algebra then I will first issue you with a warning: Linear algebra is mighty stuff. You should be both manically excited and scared by the awesome power it will give you!!!!!!</p>
<p>In the abstract, it allows you to manipulate and understand whole systems of equations with huge numbers of dimensions/variables on paper without any fuss, and solve them computationally. Here are some of the real-world relationships that are governed by linear equations and some of its applications:</p>
<ul>
<li>Load and displacements in structures</li>
<li>Compatibility in structures</li>
<li>Finite element analysis (has Mechanical, Electrical, and Thermodynamic applications)</li>
<li>Stress and strain in more than 1-D</li>
<li>Mechanical vibrations</li>
<li>Current and voltage in LCR circuits</li>
<li>Small signals in nonlinear circuits = amplifiers</li>
<li>Flow in a network of pipes</li>
<li>Control theory (governs how state space systems evolve over time, discrete and continuous)</li>
<li>Control theory (Optimal controller can be found using simple linear algebra)</li>
<li>Control theory (Model Predictive control is heavily reliant on linear algebra)</li>
<li>Computer vision (Used to calibrate camera, stitch together stereo images)</li>
<li>Machine learning (Support Vector Machine)</li>
<li>Machine learning (Principal component analysis)</li>
<li>Lots of optimization techniques rely on linear algebra as soon as the dimensionality starts to increase.</li>
<li>Fit an arbitrary polynomial to some data.</li>
</ul>
<p>Arbitrarily large problems of the types listed above can be converted into simple matrix equations, and most of those equations are of the form <code>A x = b</code>. Nearly all other problems are of the form <code>A x = λ x</code>. Yep, you read that right! Nearly all engineering problems, no matter how huge, can be reduced to one of two equations!</p>
<p>Linear algebra is so powerful that it also deals with small deviations in lots of non-linear systems! A typical engineering way to deal with a non-linear system might be to linearize it, then use Linear Algebra to understand it!</p>
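<p>As an illustration of the claim that nearly everything reduces to <code>A x = b</code>, here is the last list item (fitting a polynomial to data) done with a hand-rolled linear solver (a sketch; the function names and data are mine):</p>

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):                # eliminate below pivot
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Fit y = c0 + c1*x + c2*x^2 through three data points:
xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]          # they lie on y = x^2 + 1
A = [[1.0, x, x * x] for x in xs]                  # Vandermonde matrix
print(solve(A, ys))                                # -> [1.0, 0.0, 1.0]
```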
|
linear-algebra | <p>I am looking for an intuitive explanation as to why/how row rank of a matrix = column rank. I've read the <a href="https://en.wikipedia.org/wiki/Rank_(linear_algebra)#Proofs_that_column_rank_=_row_rank" rel="noreferrer">proof on Wikipedia</a> and I understand the proof, but I don't "get it". Can someone help me out with this ?</p>
<p>I find it hard to wrap my head around the idea of how the column space and the row space is related at a fundamental level.</p>
| <p>You can apply elementary row operations and elementary column operations to bring a matrix <span class="math-container">$A$</span> to a matrix that is in <strong>both</strong> row reduced echelon form and column reduced echelon form. In other words, there exist invertible matrices <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> (which are products of elementary matrices) such that
<span class="math-container">$$PAQ=E:=\begin{pmatrix}I_k\\&0_{(n-k)\times(n-k)}\end{pmatrix}.$$</span>
As <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are invertible, the maximum number of linearly independent rows in <span class="math-container">$A$</span> is equal to the maximum number of linearly independent rows in <span class="math-container">$E$</span>. That is, the row rank of <span class="math-container">$A$</span> is equal to the row rank of <span class="math-container">$E$</span>. Similarly for the column ranks. Now it is evident that the row rank and column rank of <span class="math-container">$E$</span> are identical (to <span class="math-container">$k$</span>). Hence the same holds for <span class="math-container">$A$</span>.</p>
| <p>This post is quite old, so my answer might come a bit late.
If you are looking for an intuition (you want to "get it") rather than a demonstration (of which there are several), then here is my 5c.</p>
<p>If you think of a matrix A in the context of solving a system of simultaneous equations, then the row-rank of the matrix is the number of independent equations, and the column-rank of the matrix is the number of independent parameters that you can estimate from the equations. That I think makes it a bit easier to see why they should be equal.</p>
<p>Saad.</p>
|
combinatorics | <p>The question that I saw is as follows:</p>
<blockquote>
<p>In the Parliament of Sikinia, each member has at most three enemies. Prove that the house can be separated into two houses, so that each member has at most one enemy in his own house.</p>
</blockquote>
<p>I built a graph where each person corresponds to a vertex and there is an edge between them if the two are enemies. Then I tried to color the vertices of the graph using two colors and remove edges that were between vertices of different colors. The goal is to arrive at a graph with max degree 1. I tried a couple of examples. It seems to work out fine, but I don't know how to prove it. </p>
| <p>Split the house however you like. Let $E_i$ be the number of enemies person $i$ has in their group, and let $E = \sum E_i.$ For any person having more than $1$ enemy in their group, i.e. at least $2$, move them to the other group, where they will have at most $1$ enemy. This decreases $E.$ Since $E$ is always non-negative, this process must end eventually, at which point the desired configuration is reached.</p>
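<p>This argument is constructive, so it can be sketched directly as code (the random graph construction is my own, for illustration):</p>

```python
import random

def split_house(enemies):
    """Repeatedly move anyone with >= 2 same-group enemies; E strictly decreases."""
    n = len(enemies)
    group = [0] * n
    while True:
        offender = next((i for i in range(n)
                         if sum(group[j] == group[i] for j in enemies[i]) >= 2), None)
        if offender is None:
            return group
        group[offender] ^= 1  # move to the other house

# build a random symmetric enemy graph with max degree 3
random.seed(1)
enemies = [set() for _ in range(30)]
pairs = [(i, j) for i in range(30) for j in range(i + 1, 30)]
random.shuffle(pairs)
for i, j in pairs:
    if len(enemies[i]) < 3 and len(enemies[j]) < 3:
        enemies[i].add(j); enemies[j].add(i)

g = split_house(enemies)
assert all(sum(g[j] == g[i] for j in enemies[i]) <= 1 for i in range(30))
```

Termination is exactly the <code>E</code> argument above: each move turns at least two same-group edges into cross-group edges and creates at most one.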
| <p>The question is ambiguous in two ways: we are not told whether being an enemy is a symmetric or asymmetric relation, nor whether the parliament has a finite or infinite number of members. I will show that the assertion is false for asymmetric relations, even in the finite case; in fact, I give an (asymmetric) example of <span class="math-container">$15$</span> people, each having only <span class="math-container">$2$</span> enemies, who can't be divided into two groups in which each member has at most one enemy. On the other hand, if the relation is symmetric, the assertion is true even in the infinite case.</p>
<p><strong>An asymmetric counterexample.</strong> There are <span class="math-container">$15$</span> people, call them <span class="math-container">$x_i,y_i,y'_i,z_i,z'_i\ (i=0,1,2).$</span> The enemies of <span class="math-container">$x_i$</span> are <span class="math-container">$y_i,y'_i;$</span> the enemies of <span class="math-container">$y_i,y'_i$</span> are <span class="math-container">$z_i,z'_i;$</span> the enemies of <span class="math-container">$z_i,z'_i$</span> are <span class="math-container">$x_i,x_{i+1}$</span> (addition modulo <span class="math-container">$3$</span>). Let each of the <span class="math-container">$15$</span> people be colored red or blue. Then at least two among <span class="math-container">$x_0,x_1,x_2$</span> have the same color; without loss of generality, suppose that <span class="math-container">$x_0,x_1$</span> are red. If either <span class="math-container">$z_0$</span> or <span class="math-container">$z'_0$</span> is red, then we have a red person with two red enemies; so we may assume that <span class="math-container">$z_0,z'_0$</span> are blue. If either <span class="math-container">$y_0$</span> or <span class="math-container">$y'_0$</span> is blue, then we have a blue person with two blue enemies; so we may assume that <span class="math-container">$y_0,y'_0$</span> are red. But then <span class="math-container">$x_0$</span> is a red person with two red enemies.</p>
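<p>The counterexample is small enough to confirm by brute force over all <span class="math-container">$2^{15}$</span> colorings (a sketch; the naming scheme, with <code>Y</code> and <code>Z</code> standing for the primed people, is mine):</p>

```python
from itertools import product

# 15 people: x_i, y_i, y'_i, z_i, z'_i for i = 0, 1, 2
names = [p + str(i) for i in range(3) for p in "xyYzZ"]
idx = {name: k for k, name in enumerate(names)}

def foes(p, i):
    j = (i + 1) % 3
    return {"x": ["y%d" % i, "Y%d" % i],
            "y": ["z%d" % i, "Z%d" % i],
            "Y": ["z%d" % i, "Z%d" % i],
            "z": ["x%d" % i, "x%d" % j],
            "Z": ["x%d" % i, "x%d" % j]}[p]

enemies = [[idx[e] for e in foes(name[0], int(name[1]))] for name in names]

# try every red/blue coloring: is there one where everyone
# has at most one same-color enemy?
ok = any(all(sum(c[j] == c[k] for j in enemies[k]) <= 1 for k in range(15))
         for c in product((0, 1), repeat=15))
print(ok)  # False: no two-house split works for this asymmetric example
```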
<p><strong>The finite symmetric case.</strong> (Already proved in the <a href="https://math.stackexchange.com/questions/2062522/each-person-has-at-most-3-enemies-in-a-group-show-that-we-can-separate-them-int/2062534#2062534">answer</a> by @cats, repeated here for convenience.) If <span class="math-container">$G=(V,E)$</span> is a finite <em>undirected</em> graph with maximum degree at most <span class="math-container">$3,$</span> then the vertex set <span class="math-container">$V$</span> can be partitioned into two sets <span class="math-container">$V_1,V_2$</span> so that, for <span class="math-container">$i=1,2,$</span> each vertex in <span class="math-container">$V_i$</span> has at most one neighbor (i.e. "enemy") in <span class="math-container">$V_i.$</span> This can be done by choosing a partition which maximizes the number of edges with one endpoint in <span class="math-container">$V_1$</span> and the other in <span class="math-container">$V_2.$</span> (This argument does not work if <span class="math-container">$V$</span> is infinite.)</p>
<p><strong>The infinite symmetric case.</strong> If <span class="math-container">$G=(V,E)$</span> is any (not necessarily finite) undirected graph with maximum degree at most <span class="math-container">$3,$</span> then the vertex set <span class="math-container">$V$</span> can be partitioned into two sets <span class="math-container">$V_1,V_2$</span> so that, for <span class="math-container">$i=1,2,$</span> each vertex in <span class="math-container">$V_i$</span> has at most one neighbor in <span class="math-container">$V_i.$</span> This follows from the finite case by the usual sort of compactness argument, i.e., using the fact that the <a href="https://en.wikipedia.org/wiki/Tychonoff%27s_theorem" rel="nofollow noreferrer">Tychonoff product</a> of any family of finite spaces is compact.</p>
<p><strong>More generally,</strong> let <span class="math-container">$k$</span> be a positive integer and let <span class="math-container">$d_1,\dots,d_k$</span> be nonnegative integers. If <span class="math-container">$G=(V,E)$</span> is any (not necessarily finite) undirected graph with maximum degree at most <span class="math-container">$d_1+\cdots+d_k+k-1$</span>, then the vertex set <span class="math-container">$V$</span> can be partitioned into <span class="math-container">$k$</span> sets <span class="math-container">$V_1,\dots,V_k$</span> so that, for each <span class="math-container">$i=1,\dots,k$</span>, each vertex in <span class="math-container">$V_i$</span> has at most <span class="math-container">$d_i$</span> neighbors in <span class="math-container">$V_i$</span>.</p>
<p><strong>Proof.</strong> We may assume that <span class="math-container">$G$</span> is a finite graph and that <span class="math-container">$k=2$</span>; the general case will then follow by induction on <span class="math-container">$k$</span> and a compactness argument. Let <span class="math-container">$d_1,d_2$</span> be nonnegative integers, and let <span class="math-container">$G=(V,E)$</span> be a finite graph with maximum degree at most <span class="math-container">$d_1+d_2+1$</span>. Let <span class="math-container">$\{V_1,V_2\}$</span> be a partition of <span class="math-container">$V$</span> which minimizes the quantity <span class="math-container">$(d_2+1)e_1+(d_1+1)e_2$</span> where <span class="math-container">$e_i$</span> is the number of edges joining two vertices in <span class="math-container">$V_i$</span>. Then each vertex in <span class="math-container">$V_i$</span> has at most <span class="math-container">$d_i$</span> neighbors in <span class="math-container">$V_i$</span>.</p>
|
logic | <p>With <span class="math-container">$Q$</span> the set of rational numbers, I'm wondering:</p>
<blockquote>
<p>Is the predicate "Int(<span class="math-container">$x$</span>) <span class="math-container">$\equiv$</span> <span class="math-container">$x$</span> is an integer" first-order definable in <span class="math-container">$(Q, +, <)$</span> where there is one additional constant symbol for each element of <span class="math-container">$Q$</span>?</p>
</blockquote>
<p>I know this is the case if multiplication is allowed. I guess the fact that <span class="math-container">$<$</span> is dense would imply a negative answer to this question, via an EF game maybe; is there a similar structure with a dense order but with Int(<span class="math-container">$x$</span>) FO-definable?</p>
| <p>This is a very interesting question. </p>
<p>The answer is no. </p>
<p>First, I claim that the theory of the structure $\langle\mathbb{Q},+,\lt\rangle$ admits elimination of quantifiers. That is, every formula $\varphi(\vec x)$ in this language is equivalent over this theory to a quantifier-free formula. This can be proved by a brute-force hands-on induction over formulas. Allow me merely to sketch the argument. It is true already for the atomic formulas, and the property of being equivalent to a quantifier-free formula is preserved by Boolean connectives. So consider a formula of the form $\exists x\varphi(x,\vec z)$, where $\varphi$ is quantifier-free. We may put $\varphi$ in disjunctive normal form and then distribute the quantifier over the disjunction, which reduces to the case where $\varphi$ is a conjunction of atomic and negated atomic formulas. We may assume that $x$ appears freely in each of these conjuncts (since otherwise we may remove it from the scope of the quantifier). If a formula of the form $x+y=z$ appears in $\varphi$, then we may replace all occurrences of $x$ with $z-y$ and thereby eliminate the need to quantify over $x$ (one must also subsequently eliminate the minus sign after the replacing, but this is easy by elementary algebraic operations). We may do this even if $x$ appears multiply, as in $x+x+y=z$, for then we replace $x$ everywhere with $(z-y)/2$, but then clear both the $2$ and the minus sign by elementary algebraic manipulations. Thus, we may assume that equality atomic assertions appear only negatively in $\varphi$. All the other assertions merely concern the order. Note that a negated order relation $\neg(u\lt v)$ is equivalent to $v\lt u\vee v=u$, and we may distribute the quantifier again over this disjunction. So negated order relations do not appear in $\varphi$. The atomic order formulas have the form $x+y\lt u+v$ and so on. We may cancel similar variables on each side, and so $x$ appears on only one side. 
By allowing minus, we see that every conjunct formula in $\varphi$ says either that $a\cdot x\neq t$, or that $b\cdot x\lt s$ or that $u\lt c\cdot x$, for some terms $t,s,u$, in which $+$ and $-$ may both appear, and where $a$, $b$ and $c$ are fixed positive integer constants. By temporarily allowing rational constant coefficients, we may move these coefficients to the other side away from $x$. Thus, the assertion $\exists x\,\varphi(x,\vec t,\vec s,\vec u)$ is equivalent to the assertion that every $\frac{1}{c}u$ is less than every $\frac 1b s$. We may then clear the introduced rational constant multiples by multiplying through (which means adding that many times on the other side). Clearly, if such an $x$ exists, then this will be the case, and if this is the case, then there will be $x$'s in between, and so infinitely many, so at least one of them will be unequal to the $t$'s. This final assertion can be re-expressed without minus, and so the original assertion is equivalent to a quantifier-free assertion. So the theory admits elimination of quantifiers. </p>
<p>It now follows that the definable classes are all defined by quantifier-free formulas. By induction, it is easy to see that any such class will be a finite union of intervals, and so the class of integers is not definable. </p>
| <p>A bit belatedly, here's another approach:</p>
<p>The presence of constant symbols naming each element of the structure in question does <strong>not</strong> entirely prevent an automorphism-based argument from working. We just have to pass to a different structure!</p>
<p>By compactness + downward Lowenheim-Skolem, our original structure <span class="math-container">$\mathcal{Q}$</span> has a countable non-Archimedean elementary extension <span class="math-container">$\mathcal{X}$</span>. By a back-and-forth argument, the substructure of <span class="math-container">$\mathcal{X}$</span> consisting of the positive infinite elements is a single automorphism orbit, and so any <em>(parameter-freely-)</em><span class="math-container">${}$</span>definable-in-<span class="math-container">$\mathcal{X}$</span> set contains either all or none of the infinite elements of <span class="math-container">$\mathcal{X}$</span>.</p>
<p>Now supposing <span class="math-container">$\varphi$</span> defines <span class="math-container">$\mathbb{Z}$</span> in <span class="math-container">$\mathcal{Q}$</span>, consider <span class="math-container">$\varphi^\mathcal{X}$</span> and apply elementarity to the property "is unbounded and co-unbounded."</p>
|
linear-algebra | <p><strong>Background:</strong> Many (if not all) of the transformation matrices used in $3D$ computer graphics are $4\times 4$, including the three values for $x$, $y$ and $z$, plus an additional term which usually has a value of $1$.</p>
<p>Given the extra computing effort required to multiply $4\times 4$ matrices instead of $3\times 3$ matrices, there must be a substantial benefit to including that extra fourth term, even though $3\times 3$ matrices <em>should</em> (?) be sufficient to describe points and transformations in 3D space.</p>
<p><strong>Question:</strong> Why is the inclusion of a fourth term beneficial? I can guess that it makes the computations easier in some manner, but I would really like to know <em>why</em> that is the case.</p>
| <p>I'm going to copy <a href="https://stackoverflow.com/questions/2465116/understanding-opengl-matrices/2465290#2465290">my answer from Stack Overflow</a>, which also shows why 4-component vectors (and hence 4×4 matrices) are used instead of 3-component ones.</p>
<hr>
<p>In most 3D graphics a point is represented by a 4-component vector (x, y, z, w), where w = 1. Usual operations applied on a point include translation, scaling, rotation, reflection, skewing and combinations of these. </p>
<p>These transformations can be represented by a mathematical object called "matrix". A matrix applies on a vector like this:</p>
<pre><code>[ a b c tx ] [ x ] [ a*x + b*y + c*z + tx*w ]
| d e f ty | | y | = | d*x + e*y + f*z + ty*w |
| g h i tz | | z | | g*x + h*y + i*z + tz*w |
[ p q r s ] [ w ] [ p*x + q*y + r*z + s*w ]
</code></pre>
<p>For example, scaling is represented as</p>
<pre><code>[ 2 . . . ] [ x ] [ 2x ]
| . 2 . . | | y | = | 2y |
| . . 2 . | | z | | 2z |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p>and translation as</p>
<pre><code>[ 1 . . dx ] [ x ] [ x + dx ]
| . 1 . dy | | y | = | y + dy |
| . . 1 dz | | z | | z + dz |
[ . . . 1 ] [ 1 ] [ 1 ]
</code></pre>
<p><strong><em>One of the reason for the 4th component is to make a translation representable by a matrix.</em></strong></p>
<p>The advantage of using a matrix is that multiple transformations can be combined into one via matrix multiplication.</p>
<p>Now, if the purpose is simply to bring translation to the table, then I'd say (x, y, z, 1) instead of (x, y, z, w) and make the last row of the matrix always <code>[0 0 0 1]</code>, as done usually for 2D graphics. In fact, the 4-component vector will be mapped back to the normal 3D vector via this formula:</p>
<pre><code>[ x(3D) ] [ x / w ]
| y(3D) | = | y / w |
[ z(3D) ] [ z / w ]
</code></pre>
<p>This is called <a href="http://en.wikipedia.org/wiki/Homogeneous_coordinates#Use_in_computer_graphics" rel="noreferrer">homogeneous coordinates</a>. <strong><em>Allowing this makes the perspective projection expressible with a matrix too,</em></strong> which can again combine with all other transformations.</p>
<p>For example, since objects farther away should be smaller on screen, we transform the 3D coordinates into 2D using formula</p>
<pre><code>x(2D) = x(3D) / (10 * z(3D))
y(2D) = y(3D) / (10 * z(3D))
</code></pre>
<p>Now if we apply the projection matrix</p>
<pre><code>[ 1 . . . ] [ x ] [ x ]
| . 1 . . | | y | = | y |
| . . 1 . | | z | | z |
[ . . 10 . ] [ 1 ] [ 10*z ]
</code></pre>
<p>then the real 3D coordinates would become</p>
<pre><code>x(3D) := x/w = x/10z
y(3D) := y/w = y/10z
z(3D) := z/w = 0.1
</code></pre>
<p>so we just need to chop the z-coordinate out to project to 2D.</p>
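<p>A minimal sketch of the two key matrices above (translation and the toy projection), using plain Python lists as 4×4 matrices:</p>

```python
def mat_vec(M, v):
    """Apply a 4x4 matrix to a 4-component homogeneous vector."""
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

# translation by (dx, dy, dz), exactly the matrix from the answer
dx, dy, dz = 5.0, -2.0, 1.0
T = [[1, 0, 0, dx],
     [0, 1, 0, dy],
     [0, 0, 1, dz],
     [0, 0, 0, 1]]
p = [1.0, 2.0, 3.0, 1.0]     # the point (1, 2, 3) in homogeneous form
print(mat_vec(T, p))          # [6.0, 0.0, 4.0, 1.0]

# the toy projection: the last row turns w into 10*z
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 10, 0]]
x, y, z, w = mat_vec(P, p)
print(x / w, y / w)           # the 2D point x/(10z), y/(10z)
```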
| <blockquote>
<p>Even though 3x3 matrices should (?) be sufficient to describe points and transformations in 3D space.</p>
</blockquote>
<p>No, they aren't enough! Suppose you represent points in space using 3D vectors. You can transform these using 3x3 matrices. But if you examine the definition of matrix multiplication you should see immediately that multiplying a zero 3D vector by a 3x3 matrix gives you another zero vector. So simply multiplying by a 3x3 matrix can never move the origin. But translations and rotations do need to move the origin. So 3x3 matrices are not enough.</p>
<p>I haven't tried to explain exactly how 4x4 matrices are used. But I hope I've convinced you that 3x3 matrices aren't up to the task and that something more is needed.</p>
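<p>Concretely: whatever 3×3 matrix you pick, the origin stays fixed (a tiny sketch):</p>

```python
M = [[2, 7, 1],
     [8, 2, 8],
     [1, 8, 2]]   # any 3x3 matrix at all
origin = [0.0, 0.0, 0.0]
image = [sum(M[r][c] * origin[c] for c in range(3)) for r in range(3)]
print(image)  # [0.0, 0.0, 0.0] -- no 3x3 matrix can move the origin
```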
|
geometry | <p>I am struggling understanding this finding. Can somebody explain intuitively why randomly drawn high-dimensional vectors will tend to be mutually orthogonal? I realize that intuition in high dimensions might be too much to ask for, still, an explanation without having to integrate over several pages of symbols would be preferred. </p>
| <p>A random uniform unit vector is $X/\|X\|$ where $X$ is standard normal, thus the scalar product of two independent unit vectors $U$ and $V$ is $\langle U,V\rangle=\langle X,Y\rangle/(\|X\|\cdot\|Y\|)$ where $X$ and $Y$ are independent and standard normal. When $n\to\infty$, by the law of large numbers, $\|X\|/\sqrt{n}\to1$ almost surely and $\|Y\|/\sqrt{n}\to1$ almost surely, and by the central limit theorem, $\langle X,Y\rangle/\sqrt{n}$ converges in distribution to a standard one-dimensional normal random variable $Z$. </p>
<p>Thus, $\sqrt{n}\cdot\langle U,V\rangle\to Z$ in distribution, in particular, for every $\varepsilon\gt0$, $P(|\langle U,V\rangle|\geqslant\varepsilon)\to0$. In this sense, when $n\to\infty$, the probability that $U$ and $V$ are nearly orthogonal goes to $1$.</p>
<p>Likewise, $k$ independent uniform unit vectors are nearly orthogonal with very high probability when $n\to\infty$, for every fixed $k$.</p>
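<p>This is easy to see in simulation (my sketch, sampling uniform unit vectors exactly as described above, via normalized Gaussians):</p>

```python
import math, random

random.seed(0)

def unit_vector(n):
    """A uniformly random unit vector: normalize a standard Gaussian sample."""
    x = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(t * t for t in x))
    return [t / norm for t in x]

mean_dot = {}
for n in (3, 30, 300, 3000):
    dots = [abs(sum(a * b for a, b in zip(unit_vector(n), unit_vector(n))))
            for _ in range(100)]
    mean_dot[n] = sum(dots) / len(dots)
    print(n, mean_dot[n])   # the mean |<U,V>| shrinks roughly like 1/sqrt(n)
```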
| <p>Here is one way to reason, chosen for simplicity of calculations: Consider the unit vector $e=(1,0,0,\ldots,0)\in\mathbb R^n$. One way to measure how 'orthogonal' $e$ is to other vectors is to calculate the average of $(e\cdot x)^2$ as $x$ ranges over the unit sphere. If $S$ denotes the surface measure on the unit sphere corresponding to (normalized) area, then
$$
\int |e\cdot y|^2 dS(y) =\int |y_1|^2 dS(y)=\frac{1}{n}\int \sum_{j=1}^n |y_j|^2 dS(y)=\frac{1}{n}.
$$
Thus, in this sense, vectors are generally 'more' orthogonal in higher dimensional spaces.</p>
<p>Edit: This line of reasoning follows closely the argument given by JyrkiLahtonen in the comments above, as one sees by considering a random $\mathbb R^n$-valued vector $Y$, uniformly distributed on the unit sphere. If we consider the random variable $e\cdot Y$, then
$$
E \; e\cdot Y=\int e\cdot y \;dS(y)=0,
$$
because $S$ is invariant under the transformation $y\mapsto -y$. On the other hand
$$
V(e\cdot Y)=\int |e\cdot y|^2 dS(y) =\frac{1}{n},
$$
as shown above. Therefore, intuitively, $e\cdot Y$ is small when $n$ is large. Rigorously, we can employ Chebyshev's inequality to obtain
$$
P(|e\cdot Y|\geq \epsilon)\leq \frac{1}{n\epsilon^2}.
$$</p>
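<p>A quick sampling check of the variance computation (the dimension and sample size are my choices):</p>

```python
import math, random

random.seed(1)
n, trials = 50, 5000
total = 0.0
for _ in range(trials):
    y = [random.gauss(0, 1) for _ in range(n)]     # normalize to get a uniform
    norm = math.sqrt(sum(t * t for t in y))        # point on the unit sphere
    total += (y[0] / norm) ** 2                    # (e . Y)^2, e the first basis vector
print(total / trials)                               # should be close to 1/n = 0.02
```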
|
logic | <p>Is there such a logical thing as proof by example?</p>
<p>I know many times when I am working with algebraic manipulations, I do quick tests to see if I remembered the formula right.</p>
<p>This works and is completely logical for counter examples. One specific counter example disproves the general rule. One example might be whether $(a+b)^2 = a^2+b^2$. This is quickly disproven with most choices of a counter example. </p>
<p>However, say I want to test something that is true like $\log_a(b) = \log_x(b)/\log_x(a)$. I can pick some points a and b and quickly prove it for one example. If I test a sufficient number of points, I can then rest assured that it does work in the general case. <strong>Not that it probably works, but that it does work assuming I pick sufficiently good points</strong>. (Although in practice, I have a vague idea of what makes a set of sufficiently good points and rely on that intuition/observation that it it should work)</p>
<p>Why is this thinking "it probably works" <strong>correct</strong>?</p>
<p>I've thought about it, and here's the best I can come up with, but I'd like to hear a better answer:</p>
<blockquote>
<p>If the equation is false (the two sides aren't equal), then there is
going to be constraints on what a and b can be. In this example it is
one equation and two unknowns. If I can test one point, see it fits
the equation, then test another point, see it fits the equation, and
test one more that doesn't "lie on the path formed by the other two
tested points", then I have proven it.</p>
</blockquote>
<p>I remember being told in school that this is not the same as proving the general case as I've only proved it for specific examples, but thinking about it some more now, I am almost sure it is a rigorous method to prove the general case provided you pick the right points and satisfy some sort of "not on the same path" requirement for the chosen points.</p>
<p>edit: Thank you for the great comments and answers. I was a little hesitant on posting this because of "how stupid a question it is" and getting a bunch of advice on why this won't work instead of a good discussion. I found the polynomial answer the most helpful to my original question of whether or not this method could be rigorous, but I found the link to the small numbers intuition quiz pretty awesome as well.</p>
<p>edit2: Oh I also originally tagged this as linear-algebra because the degrees of freedom nature when the hypothesis is not true. But I neglected to talk about that, so I can see why that was taken out. When a hypothesis is not true (ie polynomial LHS does not equal polynomial RHS), the variables can't be anything, and there exists a counter example to show this. By choosing points that slice these possibilities in the right way, it's proof that the hypothesis is true, at least for polynomials. The points have to be chosen so that there is no possible way the polynomial can meet all of them. If it still meets these points, the only possibility is that the polynomials are the same, proving the hypothesis by example. I would imagine there is a more general version of this, but it's probably harder than writing proofs the more straightforward way in a lot of cases. Maybe "by example" is asking to be stoned and fired. I think "brute force" was closer to what I was asking, but I didn't realize it initially.</p>
| <p>In mathematics, "it probably works" is never a good reason to think something has been proven. There are certain patterns that hold for a large amount of small numbers - most of the numbers one would test - and then break after some obscenely large $M$ (see <a href="http://arxiv.org/pdf/1105.3943.pdf">here</a> for an example). If some equation or statement doesn't hold in general but holds for certain values, then yes, there will be constraints, but those constraints might be very hard or even impossible to quantify: say an equation holds for all composite numbers, but fails for all primes. Since we don't know a formula for the $n$th prime number, it would be very hard to test your "path" to see where this failed.</p>
<p>However, there is such a thing as a proof by example. We often want to show two structures, say $G$ and $H$, to be the same in some mathematical sense: for example, we might want to show $G$ and $H$ are <a href="http://en.wikipedia.org/wiki/Group_isomorphism">isomorphic as groups</a>. Then it would suffice to find an isomorphism between them! In general, if you want to show something exists, you can prove it by <em>finding it</em>!</p>
<p>But again, if you want to show something is true for all elements of a given set (say, you want to show $f(x) = g(x)$ for all $x\in\Bbb{R}$), then you have to employ a more general argument: no amount of case testing will prove your claim (unless you can actually test all the elements of the set explicitly: for example when the set is finite, or when you can apply mathematical induction).</p>
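<p>One caveat worth adding: for <em>polynomial</em> identities the asker's instinct can be made rigorous, since two univariate polynomials of degree at most $d$ that agree at $d+1$ distinct points are identical. A sketch (the specific identities checked are my own examples):</p>

```python
def poly_eval(coeffs, a):
    """Evaluate a polynomial given its coefficients, lowest degree first."""
    return sum(c * a**k for k, c in enumerate(coeffs))

# claim: (a + 1)^2 = a^2 + 2a + 1.  Both sides have degree <= 2,
# so agreement at 3 distinct points settles it.
lhs = lambda a: (a + 1) ** 2
rhs = [1, 2, 1]                    # 1 + 2a + a^2
assert all(lhs(a) == poly_eval(rhs, a) for a in (0, 1, 2))

# but 3 points cannot settle a degree-3 claim:
# a^3 - 3a^2 + 2a vanishes at 0, 1, 2 yet is not the zero polynomial
assert all(poly_eval([0, 2, -3, 1], a) == 0 for a in (0, 1, 2))
assert poly_eval([0, 2, -3, 1], 3) != 0
```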
| <p>Yes. As pointed out in the comments by CEdgar:</p>
<p>Theorem: There exists an odd prime number.
Proof: 17 is an odd prime number.</p>
<p>Incidently, this is also a proof by example that there are proofs by example.</p>
|
game-theory | <p>I am really confused about how to think about this question. It was presented as a challenge by a peer. </p>
<p>Two people seek to kill a duck at a location $Y$ meters from their origin. They walk from $x=0$ to $x=Y$ together. At any time, one of the two may pull out their gun and shoot at the duck, however, the probability that person A hits is $P_{A}(x)$ and the probability that person B hits is $P_{B}(x)$. It is also known that $P_A(0)=P_B(0)=0$ and $P_A(Y)=P_B(Y)=1$ and both functions are increasing functions. </p>
<p>What is the optimal strategy for each player?</p>
| <p>I believe both should shoot at $P_A(x)+P_B(x)=1$. If either shoots earlier, the chance of winning is reduced. If either shoots later, the other could wait half as much later and have a better chance of winning. But what happens if they both hit or both miss?</p>
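<p>A numerical sketch of this rule, with illustrative hit probabilities $P_A(x)=x$ and $P_B(x)=x^2$ on $[0,1]$ (my choices, not given in the problem), solved by bisection:</p>

```python
def p_a(x): return x        # illustrative: A's hit chance grows linearly
def p_b(x): return x * x    # illustrative: B is the worse shot early on

# bisect for the x where p_a(x) + p_b(x) = 1 (the sum is increasing in x)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p_a(mid) + p_b(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(lo)  # about 0.618, the positive root of x + x^2 = 1
```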
| <p>Suppose player $A$ takes a shot at distance $x$, before player $B$. He collects the prize with probability $P_1 = p_A(x)$, while player $B$ collects the prize with probability $P_2 = 1-p_A(x)$. </p>
<p>If player $B$ shoots first, then $A$ wins with probability $Q_1 = 1-p_B(x)$ and $B$ wins with $Q_2 = p_B(x)$.</p>
<p>The optimal strategy for $A$ is to shoot at the point minimizing $B$'s win, i.e. $x_A = \operatorname{argmin}_x \max(p_B(x), 1-p_A(x))$, while the optimal strategy of $B$ is to shoot at $x_B = \operatorname{argmin}_x \max(p_A(x), 1-p_B(x))$.</p>
<hr>
<p>Here is a visualization, assuming duck is located at $Y=1$, and $p_A(x)$ and $p_B(x)$ are beta distribution cumulative distribution functions:</p>
<p><img src="https://i.sstatic.net/umhAa.png" alt="enter image description here"></p>
|
combinatorics | <p>A triangular grid has $N$ vertices, labeled from 1 to $N$. Two vertices $i$ and $j$ are adjacent if and only if $|i-j|=1$ or $|i-j|=2$. See the figure below for the case $N = 7$.</p>
<p><img src="https://i.sstatic.net/q7Glg.jpg" alt="Triangular grid with 7 vertices"></p>
<p>How many trails are there from $1$ to $N$ in this graph? A trail is allowed to visit a vertex more than once, but it cannot travel along the same edge twice.</p>
<p>I wrote a program to count the trails, and I obtained the following results for $1 \le N \le 17$.</p>
<p>$$1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044, 34989, 101695, 295642, 859566, 2499277$$</p>
<p>This sequence is not in the <a href="http://oeis.org" rel="nofollow noreferrer">OEIS</a>, but <a href="http://oeis.org/ol.html" rel="nofollow noreferrer">Superseeker</a> reports that the sequence satisfies the fourth-order linear recurrence</p>
<p>$$2 a(N) + 3 a(N + 1) - a(N + 2) - 3 a(N + 3) + a(N + 4) = 0.$$</p>
<p>Question: Can anyone prove that this equation holds for all $N$?</p>
| <p>Regard the same graph, but add an edge from $n-1$ to $n$ with weight $x$ (that is, a path passing through this edge contributes $x$ instead of 1).</p>
<p>The enumeration is clearly a linear polynomial in $x$, call it $a(n,x)=c_nx+d_n$ (and we are interested in $a(n,0)=d_n$).</p>
<p>By regarding the three possible edges for the last step, we find $a(1,x)=1$, $a(2,x)=1+x$ and</p>
<p>$$a(n,x)=a(n-2,1+2x)+a(n-1,x)+x\,a(n-1,1)$$</p>
<p>(If the last step passes through the ordinary edge from $n-1$ to $n$, you want a trail from 1 to $n-1$, but there is the ordinary edge from $n-2$ to $n-1$ and a parallel connection via $n$ that passes through the $x$ edge and is thus equivalent to a single edge of weight $x$, so we get $a(n-1,x)$.</p>
<p>If the last step passes through the $x$-weighted edge this gives a factor $x$, and you want a trail from $1$ to $n-1$ and now the parallel connection has weight 1 which gives $x\,a(n-1,1)$.</p>
<p>If the last step passes through the edge $n-2$ to $n$, then we search a trail to $n-2$ and now the parallel connection has the ordinary possibility $n-3$ to $n-2$ and two $x$-weighted possibilities $n-3$ to $n-1$ to $n$ to $n-1$ to $n-2$, in total this gives weight $2x+1$ and thus $a(n-2,2x+1)$.)</p>
<p>Now, plug in the linear polynomial and compare coefficients to get two linear recurrences for $c_n$ and $d_n$. </p>
<p>\begin{align}
c_n&=2c_{n-2}+2c_{n-1}+d_{n-1}\\
d_n&=c_{n-2}+d_{n-2}+d_{n-1}
\end{align}</p>
<p>Express $c_n$ with the second one, eliminate it from the first and you find the recurrence for $d_n$.</p>
<p>(Note that $c_n$ and $a(n,x)$ are solutions of the same recurrence.)</p>
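<p>The recurrences can be checked directly against the question's computed values (a quick sketch):</p>

```python
# d_n counts the trails from 1 to n; c_n is the coefficient of x in a(n, x)
target = [1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044,
          34989, 101695, 295642, 859566, 2499277]

c = {1: 0, 2: 1}   # from a(1, x) = 1 and a(2, x) = 1 + x
d = {1: 1, 2: 1}
for n in range(3, 18):
    c[n] = 2 * c[n - 2] + 2 * c[n - 1] + d[n - 1]
    d[n] = c[n - 2] + d[n - 2] + d[n - 1]

assert [d[n] for n in range(1, 18)] == target
# and the fourth-order recurrence from Superseeker checks out on this range:
assert all(2 * d[n] + 3 * d[n + 1] - d[n + 2] - 3 * d[n + 3] + d[n + 4] == 0
           for n in range(1, 14))
print("ok")
```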
| <p>This is not a new answer, just an attempt to slightly demystify user9325's very elegant answer to make it easier to understand and apply to other problems. Of course this is based on what I myself find easier to understand; others may prefer user9325's original formulation.</p>
<p>The crucial insight, in my view, is not the use of a variable weight and a polynomial (which serve as convenient bookkeeping devices), but that the problem becomes more tractable if we generalize it. This becomes apparent when we try a similar approach without this generalization: We might try to decompose $a(n)$ into two contributions corresponding to the two edges from $n-2$ and $n-1$ by which we can get to $n$, and in each case account for the new possibilities arising from the new vertices and edges. The contribution from $n-1$ is straightforward, but the contribution from $n-2$ causes a problem: We can now travel between $n-3$ and $n-2$ either directly or via $n-1$, and we can't just add a factor of $2$ to take this into account because there are trails using both of these possibilities. This is where the idea of an edge parallel to the final edge arises: Even though we're only interested in the final result without a parallel edge, the recurrence leads to parallel edges, so we need to include that possibility. We can do this without edge weights or polynomials by just counting the number $b(n)$ of trails that use the parallel edge separately from the number $a(n)$ of trails that don't. (I'm not saying we should; the polynomial, like a generating function, is an elegant and useful way to keep track of things; I'm just trying to emphasize that the polynomial isn't an essential part of the central idea of generalizing the original problem.)</p>
<p>Counting the number $a(n)$ of trails that don't use the parallel edge, we have a contribution $a(n-1)$ from trails ending with the normal edge from $n-1$, and a contribution $a(n-2)+b(n-2)$ from trails ending with the edge from $n-2$, which may ($b$) or may not ($a$) go via $n-1$:</p>
<p>$$a(n)=a(n-1)+a(n-2)+b(n-2)\;.$$</p>
<p>Counting the number $b(n)$ of trails that do use the parallel edge, we have a contribution $a(n-1)+b(n-1)$ from trails ending with the parallel edge, which may ($b$) or may not ($a$) go via $n$, a contribution $b(n-1)$ from trails ending with the normal edge from $n-1$, which have to go via $n$ (hence $b$), and a contribution $2b(n-2)$ from trails ending with the edge from $n-2$, which have to go via $n-1$ (hence $b$) and can use the normal edge from $n-1$ and the parallel edge in either order (hence the factor $2$):</p>
<p>$$b(n)=a(n-1)+b(n-1)+b(n-1)+2b(n-2)\;.$$</p>
<p>This is precisely user9325's result, with $a(n)=d_n$ and $b(n)=c_n$. There was a tad more work in counting the possibilities, but then we didn't have to compare coefficients.</p>
|
probability | <p>This problem arose in a different context at work, but I have translated it to pizza.</p>
<p>Suppose you have a circular pizza of radius $R$. Upon this disc, $n$ pepperoni will be distributed completely randomly. All pepperoni have the same radius $r$. </p>
<p>A pepperoni is "free" if it does not overlap any other pepperoni. </p>
<p>You are free to choose $n$.</p>
<p>Suppose you choose a small $n$. The chance that any given pepperoni is free is very large, but since $n$ is small, the total number of free pepperoni is small. Suppose you choose a large $n$. The chance that any given pepperoni is free is small, but there are a lot of them.</p>
<p>Clearly, for a given $R$ and $r$, there is some optimal $n$ that maximizes the expected number of free pepperoni. How to find this optimum?</p>
<p><strong>Edit: picking the answer</strong></p>
<p>So it looks like leonbloy's answer given the best approximation in the cases I've looked at:</p>
<pre><code>r/R       n* by simulation   n_free (sim)   (R/2r)^2
0.1581    12                 4.5            10
0.1       29                 10.4           25
0.01      2550               929.7          2500
</code></pre>
<p>(There are only a few hundred trials in the r=0.01 sim, so 2550 might not be super accurate.)
So I'm going to pick it for the answer. I'd like to thank everyone for their contributions; this has been a great learning experience.</p>
<p>Here are a few pictures of a simulation for r/R = 0.1581, n=12:
<a href="https://i.sstatic.net/Rk63l.png"><img src="https://i.sstatic.net/Rk63l.png" alt="enter image description here"></a></p>
<p><strong>Edit after three answers posted:</strong></p>
<p>I wrote a little simulation. I'll paste the code below so it can be checked (edit: it's been fixed to correctly pick points randomly on a unit disc). I've looked at <s>two</s> three cases so far. First case, r = 0.1581, R = 1, which is roughly p = 0.1 by mzp's notation. At these parameters I got n* = 12 (free pepperoni = 4.52). Arthur's expression did not appear to be maximized here. leonbloy's answer would give 10. I also did r = 0.1, R = 1. I got n* = 29 (free pepperoni = 10.38) in this case. Arthur's expression was not maximized here and leonbloy's answer would give 25. Finally for r = 0.01 I get roughly n*=2400 as shown here:<a href="https://i.sstatic.net/4BzYE.jpg"><img src="https://i.sstatic.net/4BzYE.jpg" alt="enter image description here"></a></p>
<p>Here's my (ugly) code, now edited to properly pick random points on a disc:</p>
<pre><code>from __future__ import division
import numpy as np
# the radius of the pizza is fixed at 1
r = 0.1 # the radius of the pepperoni
n_to_try = [1,5,10,20,25,27,28,29,30,31,32,33,35] # the number of pepperoni
trials = 10000  # the number of trials (each trial randomly places n pepperoni)
def one_trial():
# place the pepperoni
pepperoni_coords = []
for i in range(n):
theta = np.random.rand()*np.pi*2 # a number between 0 and 2*pi
a = np.random.rand() # a number between 0 and 1
coord_x = np.sqrt(a) * np.cos(theta) # see http://mathworld.wolfram.com/DiskPointPicking.html
coord_y = np.sqrt(a) * np.sin(theta)
pepperoni_coords.append((coord_x, coord_y))
# how many pepperoni are free?
num_free_pepperoni = 0
for i in range(n): # for each pepperoni
pepperoni_coords_copy = pepperoni_coords[:] # copy the list so the orig is not changed
this_pepperoni = pepperoni_coords_copy.pop(i)
coord_x_1 = this_pepperoni[0]
coord_y_1 = this_pepperoni[1]
this_pepperoni_free = True
for pep in pepperoni_coords_copy: # check it against every other pepperoni
coord_x_2 = pep[0]
coord_y_2 = pep[1]
distance = np.sqrt((coord_x_1 - coord_x_2)**2 + (coord_y_1 - coord_y_2)**2)
if distance < 2*r:
this_pepperoni_free = False
break
if this_pepperoni_free:
num_free_pepperoni += 1
return num_free_pepperoni
for n in n_to_try:
results = []
for i in range(trials):
results.append(one_trial())
x = np.average(results)
print "For pizza radius 1, pepperoni radius", r, ", and number of pepperoni", n, ":"
print "Over", trials, "trials, the average number of free pepperoni was", x
print "Arthur's quantity:", x* ((((1-r)/1)**(x-1) - (r/1)) / ((1-r) / 1))
</code></pre>
| <p><em>Updated: see below (Update 3)</em></p>
<hr>
<p>Here's another approximation. Consider the center of the disks (pepperoni) as an homogeneous point process of density $\lambda = n/A$, where $A=\pi R^2$ is the pizza surface. Let $D$ be nearest neighbour distance from a given center. <a href="https://en.wikipedia.org/wiki/Nearest_neighbour_distribution#Poisson_point_process" rel="nofollow noreferrer">Then</a></p>
<p>$$P(D\le d) = 1- \exp(-\lambda \pi d^2)=1- \exp(-n \,d^2/R^2) \tag{1}$$</p>
<p>A pepperoni is free if $D > 2r$. Let $E$ be the expected number of free pepperoni.</p>
<p>Then $$E= n\, P(D>2r) = n \exp (-n \,4 \, r^2/R^2) = n \exp(-n \, p)$$
where $p=(2r)^2/R^2$ (same notation as mzp's answer).</p>
<p>The maximum is attained for (ceil or floor of) $n^*=1/p=(R/2r)^2\tag{2}$ </p>
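<p>As a quick numerical sanity check (an illustrative sketch, not part of the derivation): for $r/R=0.1$ we have $p=0.04$, and the integer maximizer of $J(n)=n e^{-np}$ is indeed $1/p=25$:</p>

```python
import math

def J(n, p):
    # expected number of free pepperoni under the Poisson approximation
    return n * math.exp(-n * p)

p = 0.04  # r/R = 0.1  ->  p = (2r/R)^2 = 0.04
best = max(range(1, 200), key=lambda n: J(n, p))
```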
<p>Update 1: Formula $(1)$ could be corrected for the border effects, the area near the border would be computed as the intersection of two <a href="http://mathworld.wolfram.com/Circle-CircleIntersection.html" rel="nofollow noreferrer">circles</a>. It looks quite cumbersome, though.</p>
<p>Update 2: In the above, I assumed that the center of the pepperoni could be placed anywhere inside the pizza circle. If that's not the case, if the pepperoni must be fully inside the pizza, then $R$ should be replaced by the "effective radius" $R' = R-r$ </p>
<hr>
<p>Update 3: The Poisson approach is really not necessary. Here's an exact solution</p>
<p>Let $$t = \frac{R}{2r}$$</p>
<p>(Equivalently, think of $t$ as the pizza radius, and assume a pepperoni of radius $1/2$). Assume $t>1$. Let $g(x)$ be the area of the intersection of a unit circle whose center lies at distance $x$ from the origin with the circle of radius $t$ centered at the origin. Then</p>
<p>$$g(x)=\begin{cases}\pi & {\rm if}& 0\le x \le t-1\\
h(x) & {\rm if}& t-1<x \le t \\
0 & {\rm elsewhere}
\end{cases}
\tag{3}$$
<a href="http://mathworld.wolfram.com/Circle-CircleIntersection.html" rel="nofollow noreferrer">where</a>
$$h(x)=\cos^{-1}\left(\frac{x^2+1-t^2}{2x}\right)+t^2
\cos^{-1}\left(\frac{x^2+t^2-1}{2xt}\right) -\frac{1}{2}\sqrt{[(x+t)^2-1][1-(t-x)^2]} \tag{4}$$</p>
<p>Here's a graph of $g(x)/\pi$ for $t=5$
<a href="https://i.sstatic.net/8FFCr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8FFCr.png" alt="enter image description here"></a></p>
<p>Let the random variable $e_i$ be $1$ if the $i$-th pepperoni is free, $0$ otherwise.
Then</p>
<p>$$E[e_i \mid x] = \left(1- \frac{g(x)}{\pi \, t^2}\right)^{n-1} \tag{5}$$
(the other $n-1$ centers must fall outside the exclusion area $g(x)$). And</p>
<p>$$E[e_i] =E[E(e_i \mid x)]= \int_{0}^t \frac{2}{t^2} x \left(1- \frac{g(x)}{\pi \, t^2}\right)^{n-1} dx = \\
=\left(1- \frac{1}{t^2}\right)^{n-1} \left(1- \frac{1}{t}\right)^2
+\frac{2}{t^2} \int_{t-1}^t x \left(1- \frac{h(x)}{\pi\, t^2}\right)^{n-1} dx
\tag{6}$$</p>
<p>The objective function (expected number of free pepperoni) is then given by:</p>
<p>$$J(n)=n E[e_i ] \tag{7} $$</p>
<p>This is exact... but (almost?) intractable. However, it can be evaluated numerically [**].</p>
<p>We can also take as approximation
$$g(x)=\pi$$ for $0\le x < t$ (neglect border effects) and then it gets simple:</p>
<p>$$E[e_i ] =E[e_i \mid x]= \left(1- \frac{1}{t^2}\right)^{n-1}$$</p>
<p>$$J(n)= n \left(1- \frac{1}{t^2}\right)^{n-1} \tag{8}$$</p>
<p>To maximize, we can write
$$\frac{J(n+1)}{J(n)}= \frac{n+1}{n} \left(1- \frac{1}{t^2}\right)=1 $$
which gives</p>
<p>$$ n^{*}= t^2-1 = \frac{R^2}{4 r^2}-1 \tag{9}$$
quite similar to $(2)$.</p>
<p>Notice that as $t \to \infty$, $J(n^{*})/n^{*} \to e^{-1}$, i.e. the proportion of free pepperoni (when using the optimal number) is around $36.7\%$. Also, the "total pepperoni area" is 1/4 of the pizza.</p>
<p>[**] Some Maxima code to evaluate (numerically) the exact solution $(7)$:</p>
<pre><code>h(x,t) := acos((x^2+1-t^2)/(2*x))+t^2*acos((x^2-1+t^2)/(2*x*t))
-sqrt(((x+t)^2-1)*(1-(t-x)^2))/2 $
j(n,t) := n * ( (1-1/t)^2*(1-1/t^2)^(n-1)
+ (2/t^2) * quad_qag(x * (1-h(x,t)/(%pi*t^2))^(n-1),x,t-1,t ,3)[1]) $
j0(n,t) := n *(1-1/t^2)^(n-1)$
tt : 1/(2*0.1581) $
j(11,tt);
4.521719308511862
j(12,tt);
4.522913706608645
j(13,tt);
4.494540361913981
tt : 1/(2*0.1) $
j(27,tt);
10.37509984083333
j(28,tt);
10.37692859747294
j(29,tt);
10.36601271146961
fpprintprec: 4$
nn : makelist(n, n, 2, round(tt^2*1.4))$
jnn : makelist(j(n,tt),n,nn) $
j0nn : makelist(j0(n,tt),n,nn) $
plot2d([[discrete,nn,jnn],[discrete,nn,j0nn]],
[legend,"exact","approx"],[style,[linespoints,1,2]],[xlabel,"n"], [ylabel,"j(n)"],
[title,concat("t=",tt, " r/R=",1/(2*tt))],[y,2,tt^2*0.5]);
</code></pre>
<p>The first result agrees with the OP simulation, the second is close: I got $n^{*}=28$ instead of $29$. For the third case, I get $n^{*}=2529$ ($j(n)=929.1865331$)</p>
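<p>For readers without Maxima, here is a rough Python translation of the same evaluation (an illustrative sketch: it uses Simpson's rule in place of <code>quad_qag</code>, with the <code>acos</code> arguments clamped to guard against floating-point overshoot at the endpoints):</p>

```python
import math

def h(x, t):
    # area of the intersection of a unit circle centered at distance x
    # from the origin with the circle of radius t centered at the origin
    c1 = max(-1.0, min(1.0, (x * x + 1 - t * t) / (2 * x)))
    c2 = max(-1.0, min(1.0, (x * x + t * t - 1) / (2 * x * t)))
    s = max(0.0, ((x + t) ** 2 - 1) * (1 - (t - x) ** 2))
    return math.acos(c1) + t * t * math.acos(c2) - 0.5 * math.sqrt(s)

def j_exact(n, t, steps=2000):
    # J(n) = n*E[e_i] as in (6)-(7); Simpson's rule on [t-1, t] (steps even)
    a, b = t - 1.0, t
    dx = (b - a) / steps
    f = lambda x: x * (1 - h(x, t) / (math.pi * t * t)) ** (n - 1)
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * dx)
    integral = total * dx / 3
    return n * ((1 - 1 / t) ** 2 * (1 - 1 / t ** 2) ** (n - 1)
                + (2 / t ** 2) * integral)
```

<p>With $t=1/(2\cdot 0.1581)$ this reproduces the values above, e.g. $j(12)\approx 4.5229$.</p>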
<p><a href="https://i.sstatic.net/IsybQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IsybQ.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/tko9U.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tko9U.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.sstatic.net/6Kmap.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Kmap.jpg" alt="enter image description here"></a></p>
| <p>Let $a \equiv \pi (2r)^2$, $A \equiv \pi R^2$, and $p \equiv \frac{a}{A}$. Denote by $P_i^n$ the probability of having $i$ free pepperoni when $n$ are distributed randomly (according to a uniform distribution) over the pizza. Let $E_n$ denote the expected number of free pepperoni given $n$.</p>
<p>I will assume that the pepperoni can be placed on the pizza as long as their center lies inside it.</p>
<ul>
<li><p>$n=1$: </p>
<ul>
<li>$P_0^1 = 0$;</li>
<li>$P_1^1 = 1$;</li>
<li>$E_1= 0\cdot 0 +1\cdot 1 =1$.</li>
</ul></li>
<li><p>$n=2$: </p>
<ul>
<li>$P_0^2 = p$, that is, the probability of both pepperoni having their centers within a distance of less than $2r$ of each other, in which case they overlap;</li>
<li>$P_1^2 = 0$;</li>
<li>$P_2^2 = 1- p$;</li>
<li>$E_2=p\cdot 0 +0\cdot 1+(1-p)\cdot 2 = 2(1-p) $.</li>
</ul></li>
<li><p>$n=3$: </p>
<ul>
<li>$P_0^3 = p^2$;</li>
<li>$P_1^3 = C^3_2 p$, since there are $C^3_2$ combinations of how $2$ out of $3$ pepperoni could overlap;</li>
<li>$P_2^3 = 0$;</li>
<li>$P_3^3 = 1-p^2-C^3_2p$;</li>
<li>$E_3=p^2\cdot 0 +C^3_2 p\cdot 1+0\cdot 2 +(1-p^2-C^3_2p)\cdot 3 = 3(1-p^2)- 2C^3_2 p $.</li>
</ul></li>
<li><p>$n=4$: </p>
<ul>
<li>$P_0^4 = p^3$;</li>
<li>$P_1^4 = C^4_3 p^2$;</li>
<li>$P_2^4 = C^4_2 p$;</li>
<li>$P_3^4 = 0$;</li>
<li>$P_4^4 = 1-p^3-C^4_3p^2-C^4_2p$;</li>
<li>$E_4=p^3\cdot 0 +C^4_3 p^2\cdot 1+C^4_2 p \cdot 2+0\cdot 3 +(1-p^3-C^4_3p^2-C^4_2p)\cdot 4 \\
\;\;\;\;= 4(1-p^3)- 3C^4_3 p^2- 2C^4_2 p $.</li>
</ul></li>
<li><p>By induction, for $n\ge 2$:</p>
<blockquote>
<ul>
<li>$E_n = n(1-p^{n-1})- \sum_{j=1}^{n-2} (n-j)C^n_{n-j}p^{n-1-j}$.</li>
</ul>
</blockquote></li>
</ul>
<p>Hence the problem becomes that of solving</p>
<p>$$\max_{n \in\mathbb N} E_n$$</p>
<p>I was not able to solve this in general, but, for instance, if $p=0.1$ then </p>
<p>$$E_1 = 1, \; E_2 = 1.8, \; E_3 = 2.37, \; E_4 = 2.676, \; E_5 = 2.6795, \; E_6 = 2.3369, \; E_7 = 1.5991,\dots$$</p>
<p>So that the optimal number of pepperoni is $n^{*}=5$.</p>
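<p>A small numerical check of the closed form above (a sketch; it just evaluates $E_n$ for $p=0.1$ and confirms $n^{*}=5$):</p>

```python
from math import comb

def E(n, p):
    # E_n = n(1 - p^(n-1)) - sum_{j=1}^{n-2} (n-j) C(n, n-j) p^(n-1-j)
    if n == 1:
        return 1.0
    return n * (1 - p ** (n - 1)) - sum(
        (n - j) * comb(n, n - j) * p ** (n - 1 - j) for j in range(1, n - 1))

p = 0.1
values = [E(n, p) for n in range(1, 8)]
best = max(range(1, 8), key=lambda n: E(n, p))
```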
|
combinatorics | <blockquote>
<p>For which <span class="math-container">$n\in \mathbb{N}$</span> can we divide the set <span class="math-container">$\{1,2,3,\ldots,3n\}$</span> into <span class="math-container">$n$</span> subsets each with <span class="math-container">$3$</span> elements such that in each subset <span class="math-container">$\{x,y,z\}$</span> we have <span class="math-container">$x+y=3z$</span>?</p>
</blockquote>
<p>Since <span class="math-container">$x_i+y_i=3z_i$</span> for each subset <span class="math-container">$A_i=\{x_i,y_i,z_i\}$</span>, we have <span class="math-container">$$4\sum _{i=1}^n z_i=\sum _{i=1}^{3n}i = {3n(3n+1)\over 2} \implies 8\mid n(3n+1) $$</span>
so <span class="math-container">$n=8k$</span> or <span class="math-container">$n=8k-3$</span>. Now it is not difficult to see that if <span class="math-container">$k=1$</span> we have such a partition.</p>
<ul>
<li>For <span class="math-container">$n=5$</span> we have:
<span class="math-container">$$A_1= \{9,12,15\}, A_2= \{4,6,14\}, A_3= \{2,5,13\}, \\A_4= \{10,7,11\}, A_5= \{1,3,8\}$$</span></li>
<li>For <span class="math-container">$n=8$</span> we have:
<span class="math-container">$$A_1= \{24,21,15\}, A_2= \{23,19,14\}, A_3= \{22,2,8\}, A_4= \{20,1,7\}, \\A_5= \{17,16,11\}, A_6= \{18,12,10\}, A_7= \{13,5,6\}, A_8= \{9,3,4\}$$</span></li>
</ul>
<p>What about for <span class="math-container">$k\geq 2$</span>? Some clever induction step? Or some ''well'' known configuration?</p>
<p>Source: Serbia 1983, municipal round, 3. grade</p>
| <p>If there is a solution for <span class="math-container">$N$</span>, then there is a solution for <span class="math-container">$7N+5$</span>.<br />
The solution for <span class="math-container">$N$</span> uses up numbers from <span class="math-container">$1$</span> to <span class="math-container">$3N$</span>. Then
<span class="math-container">$$(3N+k, 15N+9+2k, 6N+3+k), k=1..3N+3\\
(12N+8+k,15N+10+2k,9N+6+k), k=1..3N+2$$</span>
uses up exactly the numbers from <span class="math-container">$3N+1$</span> to <span class="math-container">$21N+15$</span> on top of them.</p>
<p>A similar method gives a solution for <span class="math-container">$25N+8Q$</span>, for all <span class="math-container">$-13\le Q\le11$</span>, whenever there is a solution for <span class="math-container">$N\ge 13$</span>. Together with @RobPratt's solution, that covers all <span class="math-container">$N=8M$</span> and all <span class="math-container">$N=8M-3$</span>.</p>
<p>I have started a new question for a different version at <a href="https://math.stackexchange.com/questions/4190163/split-1-2-3n-into-triples-with-xy-4z">Split $\{1,2,...,3n\}$ into triples with $x+y=4z$</a> and also <a href="https://math.stackexchange.com/questions/4195206/split-1-3n-into-triples-with-xy-5z-no-solutions">Split $\{1,...,3n\}$ into triples with $x+y=5z$ - no solutions?</a></p>
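<p>The $N \to 7N+5$ step is easy to verify mechanically. A short Python sketch that applies the two families above to the $n=5$ solution from the question and checks the result:</p>

```python
def extend(triples, N):
    # Given a partition of {1,...,3N} into triples with x+y=3z,
    # append the two families above to obtain one for 7N+5.
    new = list(triples)
    for k in range(1, 3 * N + 4):        # k = 1..3N+3
        new.append((3 * N + k, 15 * N + 9 + 2 * k, 6 * N + 3 + k))
    for k in range(1, 3 * N + 3):        # k = 1..3N+2
        new.append((12 * N + 8 + k, 15 * N + 10 + 2 * k, 9 * N + 6 + k))
    return new

def is_valid(triples, n):
    # Every element of {1,...,3n} used exactly once, and each triple
    # satisfies x+y=3z for some labelling of its elements.
    if sorted(v for t in triples for v in t) != list(range(1, 3 * n + 1)):
        return False
    return all(x + y == 3 * z or x + z == 3 * y or y + z == 3 * x
               for x, y, z in triples)

base5 = [(9, 12, 15), (4, 6, 14), (2, 5, 13), (10, 7, 11), (1, 3, 8)]
sol40 = extend(base5, 5)      # 7*5 + 5 = 40
sol285 = extend(sol40, 40)    # 7*40 + 5 = 285
```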
| <p>Here is the integer linear programming approach I used to find partitions for all such <span class="math-container">$n\le 496$</span> with <span class="math-container">$n \equiv 0,5 \pmod 8$</span>. First enumerate all triples <span class="math-container">$\{x,y,z\}$</span> with <span class="math-container">$x+y=3z$</span> and <span class="math-container">$x,y,z$</span> distinct elements of <span class="math-container">$[3n]:=\{1,\dots,3n\}$</span>. For each such triple <span class="math-container">$T$</span>, let binary decision variable <span class="math-container">$u_T$</span> indicate whether <span class="math-container">$T$</span> appears in the partition. The constraints
<span class="math-container">$$\sum_{T:\ i\in T} u_T = 1 \quad \text{for $i\in[3n]$} \tag1$$</span>
enforce that each element appears exactly once in the partition.</p>
<p>An alternative approach is to introduce nonnegative slack variables <span class="math-container">$s_i$</span>, replace the set partitioning constraints <span class="math-container">$(1)$</span> with (set covering and cardinality) constraints
<span class="math-container">\begin{align}
\sum_{T:\ i\in T} u_T + s_i &\ge 1 &&\text{for $i\in[3n]$} \tag2 \\
\sum_T u_T &= n \tag3
\end{align}</span>
and minimize <span class="math-container">$\sum_{i=1}^{3n} s_i$</span>. A partition of <span class="math-container">$[3n]$</span> into <span class="math-container">$n$</span> triples with <span class="math-container">$x+y=3z$</span> exists if and only if the optimal objective value is <span class="math-container">$0$</span>.</p>
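<p>For small $n$ the same search can be done without an ILP solver; a plain backtracking sketch (not the ILP model above, just a brute-force counterpart to it) that enumerates the triples and looks for an exact cover:</p>

```python
def triples(n):
    # all triples (x, y, z) of distinct elements of [3n] with x + y = 3z, x < y
    out = []
    for z in range(1, 3 * n + 1):
        for x in range(1, 3 * z):
            y = 3 * z - x
            if x < y <= 3 * n and z not in (x, y):
                out.append((x, y, z))
    return out

def partition(n):
    # depth-first search for an exact cover of [3n]; returns a list of
    # triples, or None if no partition exists
    by_min = {}
    for t in triples(n):
        by_min.setdefault(min(t), []).append(t)

    def solve(uncovered, acc):
        if not uncovered:
            return acc
        m = min(uncovered)  # any triple covering m and fitting has min = m
        for t in by_min.get(m, []):
            if set(t) <= uncovered:
                res = solve(uncovered - set(t), acc + [t])
                if res is not None:
                    return res
        return None

    return solve(set(range(1, 3 * n + 1)), [])
```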
|
logic | <p>Let us consider the class $\cal C$ of countable models of ZFC. For ${\mathfrak A}=(A,{\in}_A)$ and ${\mathfrak B}=(B,{\in}_B)$ in $\cal C$ I say that ${\mathfrak A}<{\mathfrak B}$ iff there is a injective map $i: A \to B$ such that $x {\in}_A y \Leftrightarrow i(x) {\in}_B i(y)$ (note that this is a much weaker requirement for $i$ than to be an elementary embedding). My two questions are :</p>
<p>(1) Is there a simple construction of two incomparable models ${\mathfrak A},{\mathfrak B}$ ?
(i.e. neither ${\mathfrak A}<{\mathfrak B}$ nor ${\mathfrak B}<{\mathfrak A}$).</p>
<p>(2) Given two models ${\mathfrak A},{\mathfrak B}$ in $\cal C$, is there always a third model ${\mathfrak C}$ in $\cal C$ such that ${\mathfrak A}<{\mathfrak C}$ and ${\mathfrak B}<{\mathfrak C}$ ?</p>
| <p>Concerning question (1).</p>
<p>I became very interested in this question last year---obsessed
with it, actually---when I found myself unable to prove that any
of the natural-seeming examples were actually instances of
incomparability (for example, none of the approaches suggested in the various comments actually work). After my numerous attacks on it failed, I began
seriously to doubt the strong intuition underlying the question,
that there should be incomparable models. Eventually, I was able to show that indeed, any two countable models are comparable by embeddability. My paper is available at:</p>
<ul>
<li><a href="http://jdh.hamkins.org/every-model-embeds-into-own-constructible-universe/">J. D. Hamkins, "Every countable model of set theory
embeds into its own constructible universe"</a>, also at the <a href="http://arxiv.org/abs/1207.0963">math arχiv</a>. </li>
</ul>
<p>The main theorems are:</p>
<p><em>Theorem 1.</em> Every countable model of set theory $\langle
M,{\in^M}\rangle$ is isomorphic to a submodel of its own
constructible universe $\langle L^M,{\in^M}\rangle$. Thus, there
is an embedding
$$j:\langle M,{\in^M}\rangle\to \langle L^M,{\in^M}\rangle$$
that is elementary for quantifier-free assertions in the language
of set theory.</p>
<p>The proof uses universal digraph combinatorics, including an
acyclic version of the countable random digraph, which I call the
countable random $\mathbb{Q}$-graded digraph, and higher analogues
arising as uncountable Fraisse limits, leading eventually to what
I call the hypnagogic digraph, a set-homogeneous, class-universal,
surreal-numbers-graded acyclic class digraph, which is closely
connected with the surreal numbers. The proof shows that $\langle
L^M,{\in^M}\rangle$ contains a submodel that is a universal
acyclic digraph of rank $\text{Ord}^M$, and so in fact this model
is universal for all countable acyclic binary relations of this
rank. When $M$ is ill-founded, this includes all acyclic binary
relations.</p>
<p>The method of proof also establishes the following, which answers
question (1). Version 2 on the archive, which will become visible
in a few days, cites this question and Ewan Delanoy.</p>
<p><em>Theorem 2.</em> The countable models of set theory are linearly
pre-ordered by embeddability: for any two countable models of set
theory $\langle M,{\in^M}\rangle$ and $\langle N,{\in^N}\rangle$,
either $M$ is isomorphic to a submodel of $N$ or conversely.
Indeed, the countable models of set theory are pre-well-ordered by
embeddability in order type exactly $\omega_1+1$.</p>
<p>The proof shows that the embedability relation on the models of
set theory conforms with their ordinal heights, in that any two
models with the same ordinals are bi-embeddable; any shorter model
embeds into any taller model; and the ill-founded models are all
bi-embeddable and universal.</p>
<p>The proof method arises most easily in finite set theory, showing
that the nonstandard hereditarily finite sets $\text{HF}^M$ coded
in any nonstandard model $M$ of PA or even of $I\Delta_0$ are
similarly universal for all acyclic binary relations. This
strengthens a classical theorem of Ressayre, while simplifying the
proof, replacing a partial saturation and resplendency argument
with a soft appeal to graph universality.</p>
<p><em>Theorem 3.</em> If $M$ is any nonstandard model of PA, then
every countable model of set theory is isomorphic to a submodel of
the hereditarily finite sets $\langle \text{HF}^M,{\in^M}\rangle$
of $M$. Indeed, $\langle\text{HF}^M,{\in^M}\rangle$ is universal
for all countable acyclic binary relations.</p>
<p>In particular, every countable model of ZFC and even of ZFC plus
large cardinals arises as a submodel of
$\langle\text{HF}^M,{\in^M}\rangle$. Thus, inside any nonstandard
model of finite set theory, we may cast out some of the finite
sets and thereby arrive at a copy of any desired model of infinite
set theory, having infinite sets, uncountable sets or even large
cardinals of whatever type we like.</p>
<p>The article closes with a number of questions, which you may find
on <a href="http://jdh.hamkins.org/every-model-embeds-into-own-constructible-universe/">my blog post about the article</a>. I plan to make
some mathoverflow questions about them in the near future.</p>
| <p>(2) seems true. Choose models with universes $\lbrace m_j\vert j<\omega\rbrace$, $\lbrace n_j\vert j<\omega\rbrace$. Consider the language $L=\lbrace \in, a_i,b_i\vert i<\omega\rbrace$, and the theory $T=ZFC\cup\lbrace a_i\neq a_j\vert i,j\in\omega, m_i\neq m_j\rbrace\cup \lbrace a_i\in a_j\vert i,j\in\omega, m_i\in^{M_1} m_j\rbrace \cup \ldots$</p>
<p>It is clear that if $T$ is consistent, then by the downward Löwenheim–Skolem theorem we can obtain a countable model into which $M_1,M_2$ embed monomorphically.</p>
<p>Choose a finite fragment of $T$. The formulas of $T$ don't relate $a$ and $b$ in any way, so it is effectively a fragment of ZFC plus two finite (well-founded and consistent) membership+non-membership graphs. But any such graph can be realized by a finite set in any model of ZFC, so by compactness $T$ is consistent.</p>
<p>For (1) I think you can try to look at models which realize different subtrees of the Cantor tree (as subgraphs of their membership graphs). For example, one of them could have an infinite descending sequence and the other might not. It should be doable by the omitting types theorem.</p>
|
probability | <p>I understand that the variance of the sum of two independent normally distributed random variables is the sum of the variances, but how does this change when the two random variables are correlated? </p>
| <p>For any two random variables:
$$\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y)+2\text{Cov}(X,Y).$$
If the variables are uncorrelated (that is, $\text{Cov}(X,Y)=0$), then</p>
<p>$$\tag{1}\text{Var}(X+Y) =\text{Var}(X)+\text{Var}(Y).$$
In particular, if $X$ and $Y$ are independent, then equation $(1)$ holds.</p>
<p>In general
$$
\text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i)+
2\sum_{i< j} \text{Cov}(X_i,X_j).
$$
If for each $i\ne j$, $X_i$ and $X_j$ are uncorrelated, in particular if the $X_i$ are pairwise independent (that is, $X_i$ and $X_j$ are independent whenever $i\ne j$), then
$$
\text{Var}\Bigl(\,\sum_{i=1}^n X_i\,\Bigr)= \sum_{i=1}^n\text{Var}( X_i) .
$$</p>
| <p>Let's work this out from the definitions. Let's say we have 2 random variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span> with means <span class="math-container">$\mu_x$</span> and <span class="math-container">$\mu_y$</span>. Then variances of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> would be:</p>
<p><span class="math-container">$${\sigma_x}^2 = \frac{\sum_i(\mu_x-x_i)(\mu_x-x_i)}{N}$$</span>
<span class="math-container">$${\sigma_y}^2 = \frac{\sum_i(\mu_y-y_i)(\mu_y-y_i)}{N}$$</span></p>
<p>Covariance of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> is:</p>
<p><span class="math-container">$${\sigma_{xy}} = \frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N}$$</span></p>
<p>Now, let us consider the weighted sum <span class="math-container">$p$</span> of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>:</p>
<p><span class="math-container">$$\mu_p = w_x\mu_x + w_y\mu_y$$</span></p>
<p><span class="math-container">$${\sigma_p}^2 = \frac{\sum_i(\mu_p-p_i)^2}{N} = \frac{\sum_i(w_x\mu_x + w_y\mu_y - w_xx_i - w_yy_i)^2}{N} = \frac{\sum_i(w_x(\mu_x - x_i) + w_y(\mu_y - y_i))^2}{N} = \frac{\sum_i(w^2_x(\mu_x - x_i)^2 + w^2_y(\mu_y - y_i)^2 + 2w_xw_y(\mu_x - x_i)(\mu_y - y_i))}{N} \\ = w^2_x\frac{\sum_i(\mu_x-x_i)^2}{N} + w^2_y\frac{\sum_i(\mu_y-y_i)^2}{N} + 2w_xw_y\frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N} \\ = w^2_x\sigma^2_x + w^2_y\sigma^2_y + 2w_xw_y\sigma_{xy}$$</span></p>
|
differentiation | <p>I don't have a lot of places to turn because i am still in high school. So please bear with me as i had to create some notation. </p>
<p>In order to understand my notation you must observe this <a href="http://en.wikipedia.org/wiki/Bell_polynomials#Convolution_identity">identity for bell polynomials</a></p>
<p>$a = (f'(x),f''(x),\cdots)$ and $b = (g'(x),g''(x),\cdots)$
$$
B_{n,k}(f'(x),f''(x),\cdots,f^{(n-k+1)}(x))_{(f \rightarrow g)^c} = \frac{(a^{(k-c)_\diamond} \diamond b^{c_\diamond})_n}{(k-c)!c!}
$$</p>
<p>Also note that $d_n= \frac{d^n}{dx^n}[f(x)\ln(g(x))]$</p>
<p>I must prove that</p>
<p>$$
\sum_{k=1}^{n}\ln^k(g(x)) B_{n,k}(f'(x),f''(x),\cdots,f^{(n-k+1)}(x))
$$</p>
<p>$$
=\sum_{k=1}^n[ B_{n,k}(d_1,d_2,\cdots,d_{n-k+1})- \sum_{m=0}^{n-k}\sum_{j=0}^{m} {m \choose j} \frac{\ln^{m-j}(g(x))}{g(x)^k} \frac{d^j}{d(f(x))^j}[(f(x))_k] B_{n,m+k}(f'(x),\cdots,f^{(n-m-k+1)}(x))_{(f \rightarrow g)^k}]
$$</p>
<p>Where $(f(x))_k$ is the Pochhammer symbol for falling factorial</p>
<p>I have been trying to prove this for quite a while. Any advice on doing so would be amazing. Perhaps this can be put into a determinant or something of the sort, but I am not sure about that double summation. If you have advice, please share it in a comment.</p>
| <p><em>Note:</em> Here are at least some few hints which may help to solve this nice identity. In fact it's hardly more than a starter. But hopefully some aspects are nevertheless useful for the interested reader.</p>
<blockquote>
<p><strong>Introduction:</strong> The following information is provided:</p>
<ul>
<li><em>Definition of partial Bell polynomials $B_{n,k}$</em>:</li>
</ul>
<p>I state the definition of Bell polynomials according to <em>Comtet's</em> classic <a href="http://rads.stackoverflow.com/amzn/click/9027703809">Advanced Combinatorics</a>, section 3.3, <em>Bell Polynomials</em>, as coefficients of generating functions. The idea is that a proper representation via generating functions could help to find the solution.</p>
<ul>
<li><em>The convolution product $x \diamond y$ demystified</em></li>
</ul>
<p>We will observe, that the convolution product is strongly related with an iterative representation of the generating functions of the Bell numbers. This enables us to transform the stated identity to gain some more insight.</p>
<ul>
<li><em>Representation of the identity with the help of generating functions</em></li>
</ul>
<p>In fact its just another representation regrettably without clever simplifications</p>
<ul>
<li><em>Verification of the identity for $n=2,3$</em></li>
</ul>
<p>In order to better see what's going on, the identity is also verified for small $n=2,3$. An analysis of these examples could provide some hints how to appropriately transform the generating functions in the general case.</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Definition of partial Bell polynomials $B_{n,k}$:</strong></p>
</blockquote>
<p>According to Comtet's <em><a href="http://rads.stackoverflow.com/amzn/click/9027703809">Advanced Combinatorics</a></em> <em>section 3.3 Bell Polynomials</em> we define as follows:</p>
<blockquote>
<p>Let $\Phi(t,u)$ be the generating function of the <em>(exponential) partial Bell polynomials</em> $B_{n,k}=B_{n,k}(x_1,x_2,\ldots,x_{n-k+1})$ in an infinite number of variables $x_1,x_2,\ldots$ defined by the formal double series expansion:</p>
<p>\begin{align*}
\Phi(t,u)&:=exp\left(u\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)=\sum_{n,k\geq 0}B_{n,k}\frac{t^n}{n!}u^k\\
&=1+\sum_{n\geq 1}\frac{t^n}{n!}\left(\sum_{k=1}^{n}u^kB_{n,k}(x_1,x_2,\ldots)\right)
\end{align*}
or what amounts to the same by the series expansion:
\begin{align*}
\frac{1}{k!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^k=\sum_{n\geq k}B_{n,k}\frac{t^n}{n!},\qquad k=0,1,2,\ldots\tag{1}
\end{align*}</p>
</blockquote>
<p>In the following the focus is put on the representation (1).</p>
<p>Let's use the <em><a href="http://arxiv.org/abs/math/9402216">coefficient of</a></em> operator $[t^n]$ to denote the coefficient $a_n=[t^n]A(t)$ of a formal generating series $A(t)=\sum_{k\geq 0}a_kt^k$.</p>
<blockquote>
<p>We observe for $n\geq 0$:</p>
<p>\begin{align*}
B_{n,k}=\frac{n!}{k!}[t^n]\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^k,\qquad k\geq 0\tag{2}
\end{align*}</p>
</blockquote>
<p><em>Note:</em> In the following it is sufficient to consider $B_{n,k}$ for $n,k\geq 1$.</p>
<hr>
<blockquote>
<p><strong>The convolution product $x\diamond y$</strong> (somewhat demystified)</p>
<p>The <em>convolution product</em> $x \diamond y$ for sequences $x=(x_j)_{j\geq 1}$ and $y=(y_j)_{j\geq 1}$ is defined according to <a href="http://en.wikipedia.org/wiki/Bell_polynomials#Convolution_identity">this link</a> as</p>
<p>\begin{align*}
(x \diamond y)_n :=\sum_{j=1}^{n-1}\binom{n}{j}x_jy_{n-j},\qquad n\geq 1\tag{3}
\end{align*}</p>
</blockquote>
<p>The polynomial $B_{n,k}$ can be written using the $k$-fold product $$x^{k_\diamond}=(x_n^{k\diamond})_{n\geq 1}:=\underbrace{x\diamond \ldots \diamond x}_{k \text{ factors}}$$ as</p>
<p>\begin{align*}
B_{n,k}=\frac{x_n^{k\diamond}}{k!}, \qquad n,k\geq 1\tag{4}
\end{align*}</p>
<blockquote>
<p>We obtain according to the definition:
\begin{align*}
x\diamond y&=\left(0,\sum_{j=1}^{1}\binom{2}{j}x_jy_{2-j},\sum_{j=1}^{2}\binom{3}{j}x_jy_{3-j},\sum_{j=1}^{3}\binom{4}{j}x_jy_{4-j},\ldots\right)\\
&=\left(0,2x_1y_1,3x_1y_2+3x_2y_1,4x_1y_3+6x_2y_2+4x_3y_1,\ldots\right)\\
\end{align*}
which implies
\begin{align*}
x^{1\diamond}&=\left(x_1,x_2,x_3,x_4,\ldots\right)
&=&1!(B_{1,1},B_{2,1},B_{3,1},B_{4,1}\ldots)\\
x^{2\diamond}&=\left(0,2x_1^2,6x_1x_2,8x_1x_3+6x_2^2,\ldots\right)
&=&2!(0,B_{2,2},B_{3,2},B_{4,2},\ldots)\\
x^{3\diamond}&=\left(0,0,6x_1^3,36x_1^2x_2,\ldots\right)
&=&3!(0,0,B_{3,3},B_{4,3}\ldots)\\
\end{align*}</p>
</blockquote>
<p>Observe, that the <em>multiplication</em> of exponential generating functions $A(x)=\sum_{k\geq 0}a_k\frac{x^k}{k!}$ and $B(x)=\sum_{l\geq 0}b_l\frac{x^l}{l!}$ gives:
\begin{align*}
A(x)B(x)&=\left(\sum_{k\geq 0}a_k\frac{x^k}{k!}\right)\left(\sum_{l\geq 0}b_l\frac{x^l}{l!}\right)\\
&=\sum_{n\geq 0}\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{a_k}{k!}\frac{b_l}{l!}\right)x^n\\
&=\sum_{n\geq 0}\left(\sum_{k=0}^n\binom{n}{k}a_kb_{n-k}\right)\frac{x^n}{n!}
\end{align*}</p>
<blockquote>
<p>According to the definition of $B_{n,k}$ the sequences $x^{k_\diamond}$ generated via the convolution product are simply the coefficients of the <em>vertical generating functions</em> for the Bell polynomials $B_{n,k}, n\geq 1$:</p>
<p>\begin{align*}
\frac{1}{1!}\sum_{n\geq 1}x^{1_\diamond}_n\frac{t^n}{n!}&=
\frac{1}{1!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)=\sum_{n\geq 1}B_{n,1}\frac{t^n}{n!}\\
\frac{1}{2!}\sum_{n\geq 1}x^{2_\diamond}_n\frac{t^n}{n!}&=
\frac{1}{2!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^2=\sum_{n\geq 2}B_{n,2}\frac{t^n}{n!}\\
&\qquad\qquad\cdots\\
\frac{1}{k!}\sum_{n\geq 1}x^{k_\diamond}_n\frac{t^n}{n!}&=
\frac{1}{k!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^k=\sum_{n\geq k}B_{n,k}\frac{t^n}{n!}\\
\end{align*}</p>
<p>We observe: A <em>convolution</em> with $x^{1\diamond}$ corresponds essentially to multiplication by the generating function $\sum_{m\geq 1}x_m\frac{t^m}{m!}$</p>
</blockquote>
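<p>The relation $B_{n,k}=x^{k_\diamond}_n/k!$ in $(4)$ can be cross-checked mechanically against the standard recurrence $B_{n,k}=\sum_i \binom{n-1}{i-1}x_i B_{n-i,k-1}$ (a known identity for partial Bell polynomials, not taken from the text above). A small Python sketch:</p>

```python
from math import comb, factorial

def conv(a, b, N):
    # (a ⋄ b)_n = sum_{j=1}^{n-1} C(n,j) a_j b_{n-j}; sequences as dicts keyed from 1
    return {n: sum(comb(n, j) * a[j] * b[n - j] for j in range(1, n))
            for n in range(1, N + 1)}

def bell_conv(n, k, x):
    # B_{n,k} = (k-fold diamond power of x)_n / k!
    seq = dict(enumerate(x, start=1))
    power = seq
    for _ in range(k - 1):
        power = conv(power, seq, n)
    return power[n] // factorial(k)

def bell_rec(n, k, x):
    # B_{n,k} = sum_{i=1}^{n-k+1} C(n-1, i-1) x_i B_{n-i, k-1}
    if k == 0:
        return 1 if n == 0 else 0
    return sum(comb(n - 1, i - 1) * x[i - 1] * bell_rec(n - i, k - 1, x)
               for i in range(1, n - k + 2))
```

<p>With $x=(1,2,3,4,5,6)$ this reproduces, e.g., $B_{3,2}=3x_1x_2=6$ and $B_{4,2}=4x_1x_3+3x_2^2=24$.</p>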
<p>In order to keep complex expressions better manageable, we introduce some abbreviations:</p>
<blockquote>
<p>$$B_{n,k}^{f}(x) := B_{n,k}(f^\prime(x),f^{\prime\prime}(x),\ldots,f^{(n-k+1)}(x))$$</p>
</blockquote>
<p>The $n$-th derivatives will be abbreviated as</p>
<p>$$f_n:=\frac{d^n}{dx^n}f(x)\qquad\text{ and }\qquad g_n:= \frac{d^n}{dx^n}g(x),\qquad n \geq 1$$</p>
<p>we also use OPs shorthand $a=(f_1,f_2,\ldots)$ and $b=(g_1,g_2,\ldots)$.</p>
<blockquote>
<p>According to the statements above the expression </p>
<p>$$B_{n,k}\left(f^\prime(x),f^{\prime\prime}(x),\ldots,f^{(n-k+1)}(x)\right)_{(f\rightarrow g)^c}
=\frac{\left(a^{(k-c)_{\diamond}}\diamond b^{c_\diamond}\right)_n}{(k-c)!c!}$$</p>
<p>can now be written as coefficients of the product of the generating functions</p>
<p>\begin{align*}
\sum_{n\geq k}&B^f_{{n,k}_{(f\rightarrow g)^c}}\frac{t^n}{n!}
=\frac{1}{(k-c)!}\left(\sum_{m\geq1}f_m\frac{t^m}{m!}\right)^{k-c}
\frac{1}{c!}\left(\sum_{m\geq1}g_m\frac{t^m}{m!}\right)^{c}
\end{align*}</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Representation of the identity via generating functions</strong></p>
</blockquote>
<p>We are now in a position to represent OPs identity with the help of generating functions based upon (1).</p>
<p>To simplify the notation somewhat I will often omit the argument and write e.g. $(\ln\circ g)^k$ instead of $\left(\ln(g(x))\right)^k$. Now, putting the <em>Complete Bell polynomial</em> in OPs question on the left-hand side and the other terms on the right-hand side, we want to show</p>
<blockquote>
<p>The following identity is valid:</p>
<p>\begin{align*}
\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}&=\sum_{k=1}^{n}(\ln\circ g)^{k}B_{n,k}^f\tag{5}\\
&\qquad+\sum_{k=1}^{n}\sum_{m=0}^{n-k}\sum_{j=0}^{m}\binom{m}{j}
\frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]B_{{n,m+k}_{(f\rightarrow g)^k}}^f\qquad\qquad n\geq 1
\end{align*}</p>
</blockquote>
<p>Please note that the following abbreviations are used in (5) and in the expressions below:
\begin{align*}
&f:= f(x), \qquad g := g(x), \qquad f_k := \frac{d^k}{dx^k}f(x),
\qquad g_k :=\frac{d^k}{dx^k}g(x)\\
&d_k := \frac{d^k}{dx^k}\left(f(x)\ln(g(x))\right),\qquad\frac{d^j}{d(f)^j}:= \frac{d^j}{d(f(x))^j}\\
&(f)_k := f(x)\left(f(x)-1\right)\cdot\ldots\cdot\left(f(x)-k+1\right)
\end{align*}</p>
<p>Using the generating function (1) we observe:</p>
<p>\begin{align*}
\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}&=n!\sum_{k=1}^n\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}d_m\frac{t^m}{m!}\right)^k\\
\\
\sum_{k=1}^{n}(\ln\circ g)^{k}B_{n,k}^f&=n!\sum_{k=1}^{n}(\ln\circ g)^{k}\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}f_m\frac{t^m}{m!}\right)^k
\\
\\
\sum_{k=1}^{n}\sum_{m=0}^{n-k}\sum_{j=0}^{m}\binom{m}{j}&
\frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]B_{{n,m+k}_{(f\rightarrow g)^k}}^f\\
&=n!\sum_{k=1}^n\frac{1}{k!}\frac{1}{g^k}\sum_{m=0}^{n-k}\frac{1}{m!}\sum_{j=0}^m
\binom{m}{j}\left(\ln\circ g\right)^{m-j}\frac{d^j}{d(f)^j}[(f)_k]\\
&\qquad\cdot[t^n]\left(\sum_{j\geq 1}f_{j}\frac{t^{j}}{j!}\right)^m\left(\sum_{j\geq 1}g_{j}\frac{t^{j}}{j!}\right)^k
\end{align*}</p>
<blockquote>
<p>Putting all together gives following reformulation of the identity:</p>
<p>\begin{align*}
\sum_{k=1}^n&\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}d_m\frac{t^m}{m!}\right)^k\\
&=\sum_{k=1}^{n}\left(\ln \circ g\right)^k\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}f_m\frac{t^m}{m!}\right)^k\\
&\qquad+\sum_{k=1}^n\frac{1}{k!}\frac{1}{g^k}\sum_{m=0}^{n-k}\frac{1}{m!}
\sum_{j=0}^m\binom{m}{j}\left(\ln \circ g\right)^{m-j}\frac{d^j}{d(f)^j}[(f)_k]\\
&\qquad\qquad\cdot[t^n]\left(\sum_{j\geq 1}f_{j}\frac{t^{j}}{j!}\right)^m\left(\sum_{j\geq 1}g_{j}\frac{t^{j}}{j!}\right)^k
\end{align*}</p>
<p><strong>Note:</strong> Maybe this alternative representation could help to show OPs identity.</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Verification of the identity for $n=2,3$</strong></p>
</blockquote>
<p>In order to verify the identity for small $n$, we need some polynomials $B_{n,k}$ in the variables $f_j$ and $g_j$ (the $j$-th derivatives of $f$ and $g$). We do so by applying the $\diamond$ operator to $a=(f_1,f_2,\ldots)$ and $b=(g_1,g_2,\ldots)$.</p>
<p>\begin{array}{rlllll}
a^{1\diamond}&=\left(f_1,\right.&f_2,&f_3,&f_4,&\left.\ldots\right)\\
b^{1\diamond}&=\left(g_1,\right.&g_2,&g_3,&g_4,&\left.\ldots\right)\\
\\
a^{2\diamond}&=\left(0,\right.&2f_1^2,&6f_1f_2,&8f_1f_3+6f_2^2,&\left.\ldots\right)\\
a^{1\diamond}\diamond b^{1\diamond}&=\left(0,\right.&2f_1g_1,&3f_1g_2+3f_2g_1,&
4f_1g_3+6f_2g_2+4f_3g_1,&\left.\ldots\right)\\
b^{2\diamond}&=\left(0,\right.&2g_1^2,&6g_1g_2,&8g_1g_3+6g_2^2,&\left.\ldots\right)\\
\\
a^{3\diamond}&=\left(0,\right.&0,&6f_1^3,&36f_1^2f_2,&\left.\ldots\right)\\
a^{2\diamond}\diamond b^{1\diamond}&=\left(0,\right.&0,&6f_1^2g_1,&12f_1^2g_2+24f_1f_2g_1,&\left.\ldots\right)\\
a^{1\diamond}\diamond b^{2\diamond}&=\left(0,\right.&0,&6f_1g_1^2,&24f_1g_1g_2+12f_2g_1^2,&\left.\ldots\right)\\
b^{3\diamond}&=\left(0,\right.&0,&6g_1^3,&36g_1^2g_2,&\left.\ldots\right)\\
\end{array}</p>
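<p>These tables can be reproduced with a few lines of sympy, assuming $(a\diamond b)_n=\sum_{j=1}^{n-1}\binom{n}{j}a_jb_{n-j}$ for sequences indexed from $1$ (a sketch; <code>diamond</code> is my helper, not a sympy function):</p>

```python
from sympy import symbols, binomial, expand

# A tiny implementation of the convolution product "a ⋄ b", assuming
# (a ⋄ b)_n = sum_{j=1}^{n-1} C(n,j) a_j b_{n-j}  (the EGF product of two
# series without constant term).
def diamond(a, b, N):
    return [sum(binomial(n, j) * a[j - 1] * b[n - j - 1]
                for j in range(1, n)) for n in range(1, N + 1)]

N = 4
f = symbols('f1:5')              # f1..f4, the derivatives of f
g = symbols('g1:5')
a2 = diamond(f, f, N)            # a^{2⋄}
ab = diamond(f, g, N)            # a^{1⋄} ⋄ b^{1⋄}
a3 = diamond(a2, f, N)           # a^{3⋄}

# entries of the tables above
assert expand(a2[3] - (8*f[0]*f[2] + 6*f[1]**2)) == 0
assert expand(ab[2] - (3*f[0]*g[1] + 3*f[1]*g[0])) == 0
assert expand(a3[3] - 36*f[0]**2*f[1]) == 0
print("table entries reproduced")
```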
<hr>
<blockquote>
<p><strong>Case $n=2$:</strong></p>
<p>Each of the three sums of the identity is calculated separately.</p>
</blockquote>
<p>\begin{align*}
\sum_{k=1}^{2}&B_{2,k}^{f\cdot(\ln \circ g)}\\
&=B_{2,1}^{f\cdot(\ln \circ g)}+B_{2,2}^{f\cdot(\ln \circ g)}\\
&=\frac{d^2}{{dx}^2}\left(f (\ln \circ g)\right)+\left(\frac{d}{dx}\left(f(\ln \circ g)\right)\right)^2\\
&=\left(f_2(\ln \circ g)+2f_1\frac{g_1}{g}+f\frac{g_2}{g}-f\frac{g_1^2}{g^2}\right)+\left(f_1(\ln \circ g)+f\frac{g_1}{g}\right)^2\\
&=\left(f_2(\ln \circ g)+2f_1\frac{g_1}{g}+f\frac{g_2}{g}-f\frac{g_1^2}{g^2}\right)+\left(f_1^2(\ln \circ g)^2+2ff_1(\ln \circ g)\frac{g_1}{g}+f^2\frac{g_1^2}{g^2}\right)\\
\\
\sum_{k=1}^{2}&(\ln \circ g)^kB_{2,k}^f\\
&=(\ln \circ g)B_{2,1}^f+(\ln \circ g)^2B_{2,2}^f\\
&=(\ln \circ g)f_2+(\ln \circ g)^2f_1^2\\
\\
\sum_{k=1}^{2}&\sum_{m=0}^{2-k}\sum_{j=0}^m\binom{m}{j}
\frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_2}{m!k!}\\
&=\sum_{k=1}^{2}\frac{1}{g^k}\sum_{m=0}^{2-k}\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_2}{m!k!}
\sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}[\left(f\right)_k]\\
&=\frac{1}{g}\sum_{m=0}^1\frac{\left(a^{m_\diamond}\diamond b^{1_{\diamond}}\right)_2}{m!1!}
\sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}(f)\\
&\qquad+\frac{1}{g^2}\sum_{m=0}^0\frac{\left(a^{m_\diamond}\diamond b^{2_{\diamond}}\right)_2}{m!2!}
\sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}\left(f(f-1)\right)\\
&=\frac{1}{g}\left[\frac{\left(b^{1_\diamond}\right)_2}{1!}\binom{0}{0}f+\frac{\left(a^{1_\diamond}\diamond
b^{1_\diamond}\right)_2}{1!1!}\left(\binom{1}{0}(\ln \circ g)f+\binom{1}{1}\frac{d}{d(f)}f\right)\right]\\
&\qquad+\frac{1}{g^2}\left[\frac{\left(b^{2_\diamond}\right)_2}{2!}\binom{0}{0}f\left(f-1\right)\right]\\
&=\frac{1}{g}\left[g_2f+2f_1g_1\left((\ln \circ g) f+1\right)\right]+\frac{1}{g^2}\left[g_1^2f\left(f-1\right)\right]
\end{align*}</p>
<blockquote>
<p>Comparison of the results of these sums shows the validity of the claim:
\begin{align*}
\sum_{k=1}^{2}B_{2,k}^{f\cdot(\ln \circ g)}&=\sum_{k=1}^{2}(\ln \circ g)^kB_{2,k}^f\\
&\qquad+\sum_{k=1}^{2}\sum_{m=0}^{2-k}\sum_{j=0}^m\binom{m}{j}
\frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_2}{m!k!}\\
\end{align*}</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Case $n=3$:</strong></p>
<p>Each of the three sums of the identity is calculated separately.</p>
</blockquote>
<p>\begin{align*}
\sum_{k=1}^{3}&B_{3,k}^{f\cdot(\ln \circ g)}\\
&=B_{3,1}^{f\cdot(\ln \circ g)}+B_{3,2}^{f\cdot(\ln \circ g)}+B_{3,3}^{f\cdot(\ln \circ g)}\\
&=\frac{d^3}{{dx}^3}\left(f(\ln \circ g)\right)+3\frac{d}{dx}\left(f(\ln \circ g)\right)\frac{d^2}{{dx}^2}\left(f(\ln \circ g)\right)
+\left(\frac{d}{dx}\left(f(\ln \circ g)\right)\right)^3\\
&=\left(f_3(\ln \circ g)+3f_2\frac{g_1}{g}+3f_1\frac{g_2}{g}-3f_1\frac{g_1^2}{g^2}+f\frac{g_3}{g}
+2f\frac{g_1^3}{g^3}-3f\frac{g_1g_2}{g^2}\right)\\
&\qquad+3\left(f_1(\ln \circ g)+f\frac{g_1}{g}\right)\left(f_2(\ln \circ g)+2f_1\frac{g_1}{g}+f\frac{g_2}{g}-f\frac{g_1^2}{g^2}\right)\\
&\qquad+\left(f_1(\ln \circ g)+f\frac{g_1}{g}\right)^3\\
&=\left(f_3(\ln \circ g)+3f_2\frac{g_1}{g}+3f_1\frac{g_2}{g}-3f_1\frac{g_1^2}{g^2}+f\frac{g_3}{g}
+2f\frac{g_1^3}{g^3}-3f\frac{g_1g_2}{g^2}\right)\\
&\qquad+\left(3f_1f_2(\ln \circ g)^2+3ff_2(\ln \circ g)\frac{g_1}{g}+6f_1^2(\ln \circ g)\frac{g_1}{g}+6ff_1\frac{g_1^2}{g^2}\right.\\
&\qquad\qquad\left.+3ff_1(\ln \circ g)\frac{g_2}{g}+3f^2\frac{g_1g_2}{g^2}-3ff_1(\ln \circ g)\frac{g_1^2}{g^2}-3f^2\frac{g_1^3}{g^3}\right)\\
&\qquad+\left(f_1^3(\ln \circ g)^3+3ff_1^2(\ln \circ g)^2\frac{g_1}{g}+3f^2f_1(\ln \circ g)\frac{g_1^2}{g^2}+f^3\frac{g_1^3}{g^3}\right)
\\
\\
\sum_{k=1}^{3}&(\ln \circ g)^kB_{3,k}^f\\
&=(\ln \circ g)B_{3,1}^f+(\ln \circ g)^2B_{3,2}^f+(\ln \circ g)^3B_{3,3}^f\\
&=(\ln \circ g)f_3+3(\ln \circ g)^2f_1f_2+(\ln \circ g)^3f_1^3
\end{align*}
\begin{align*}
\sum_{k=1}^{3}&\sum_{m=0}^{3-k}\sum_{j=0}^m\binom{m}{j}
\frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_3}{m!k!}\\
&=\sum_{k=1}^{3}\frac{1}{g^k}\sum_{m=0}^{3-k}\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_3}{m!k!}
\sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}[\left(f\right)_k]\\
&=\frac{1}{g}\sum_{m=0}^2\frac{\left(a^{m_\diamond}\diamond b^{1_{\diamond}}\right)_3}{m!1!}
\sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}(f)\\
&\qquad+\frac{1}{g^2}\sum_{m=0}^1\frac{\left(a^{m_\diamond}\diamond b^{2_{\diamond}}\right)_3}{m!2!}
\sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}\left(f(f-1)\right)\\
&\qquad+\frac{1}{g^3}\sum_{m=0}^0\frac{\left(a^{m_\diamond}\diamond b^{3_{\diamond}}\right)_3}{m!3!}
\sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}\left(f(f-1)(f-2)\right)\\
&=\frac{1}{g}\left[\frac{\left(b^{1_\diamond}\right)_3}{1!}\binom{0}{0}f
+\frac{\left(a^{1_\diamond}\diamond b^{1_\diamond}\right)_3}{1!1!}\left[\binom{1}{0}(\ln \circ g) f
+\binom{1}{1}\frac{d}{d(f)}f\right]\right.\\
&\qquad\qquad+\left.\frac{\left(a^{2_\diamond}\diamond b^{1_\diamond}\right)_3}{2!1!}
\left[\binom{2}{0}(\ln \circ g)^2f+\binom{2}{1}(\ln \circ g)\frac{d}{df}f
+\binom{2}{2}\frac{d^2}{{df}^2}f\right]\right]\\
&\qquad+\frac{1}{g^2}\left[\frac{\left(b^{2_\diamond}\right)_3}{2!}\binom{0}{0}f\left(f-1\right)\right.\\
&\qquad\qquad+\left.\frac{\left(a^{1_\diamond}\diamond b^{2_\diamond}\right)_3}{1!2!}\left[\binom{1}{0}(\ln \circ g) f(f-1)
+\binom{1}{1}\frac{d}{d(f)}f(f-1)\right]\right]\\
&\qquad+\frac{1}{g^3}\left[\frac{\left(b^{3_\diamond}\right)_3}{3!}\binom{0}{0}f\left(f-1\right)(f-2)\right]\\
&=\frac{1}{g}\left[g_3f+\left(3f_1g_2+3f_2g_1\right)\left[(\ln \circ g)f+1\right]
+\left(3f_1^2g_1\right)\left[(\ln \circ g)^2f+2(\ln \circ g)\right]\right]\\
&\qquad+\frac{1}{g^2}\left[3g_1g_2f\left(f-1\right)
+3f_1g_1^2\left[(\ln \circ g) f(f-1)+2f-1\right]\right]\\
&\qquad+\frac{1}{g^3}\left[g_1^3f(f-1)(f-2)\right]\\
\end{align*}</p>
<blockquote>
<p>Comparison of the results of these sums shows the validity of the claim:
\begin{align*}
\sum_{k=1}^{3}B_{3,k}^{f\cdot(\ln \circ g)}&=\sum_{k=1}^{3}(\ln \circ g)^kB_{3,k}^f\\
&\qquad+\sum_{k=1}^{3}\sum_{m=0}^{3-k}\sum_{j=0}^m\binom{m}{j}
\frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_3}{m!k!}
\end{align*}</p>
</blockquote>
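<p>The hand calculations for $n=2,3$ can also be delegated to a CAS. The following sympy sketch checks the identity with generic $f$ and $g$ (the helpers <code>B</code> and <code>mixed</code> are mine; <code>F</code> is an auxiliary symbol standing for $f(x)$, so that $\frac{d^j}{d(f)^j}[(f)_k]$ becomes an ordinary derivative):</p>

```python
from sympy import (symbols, Function, log, diff, factorial, binomial,
                   ff, simplify)

x, t, F = symbols('x t F')
f, g = Function('f')(x), Function('g')(x)

def B(n, k, h):
    # partial Bell polynomial B_{n,k}^h = n!/k! [t^n] (sum_m h^{(m)} t^m/m!)^k
    egf = sum(diff(h, x, m) * t**m / factorial(m) for m in range(1, n + 1))
    return factorial(n) / factorial(k) * (egf**k).expand().coeff(t, n)

def mixed(n, m, k):
    # (a^{m⋄} ⋄ b^{k⋄})_n / (m! k!) via the product of the two EGF powers
    ef = sum(diff(f, x, j) * t**j / factorial(j) for j in range(1, n + 1))
    eg = sum(diff(g, x, j) * t**j / factorial(j) for j in range(1, n + 1))
    return (factorial(n) / (factorial(m) * factorial(k))
            * (ef**m * eg**k).expand().coeff(t, n))

L = log(g)
for n in (2, 3):
    lhs = sum(B(n, k, f * L) for k in range(1, n + 1))
    rhs = sum(L**k * B(n, k, f) for k in range(1, n + 1))
    rhs += sum(binomial(m, j) * L**(m - j) / g**k
               * diff(ff(F, k), F, j).subs(F, f) * mixed(n, m, k)
               for k in range(1, n + 1)
               for m in range(0, n - k + 1)
               for j in range(0, m + 1))
    assert simplify(lhs - rhs) == 0
print("identity verified for n = 2 and n = 3")
```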
<p><strong>Note:</strong> This answer aims to provide a <em>complete solution</em> to this challenging problem. It is based upon a series of papers about <em>compositae of functions</em> by Vladimir Kruchinin and a paper about <em>Bell polynomials</em> by Warren P. Johnson (see references at the end).</p>
<p>If this somewhat extensive elaboration is too cumbersome for a detailed reading, I suggest reading the overview and taking a look instead at <a href="https://math.stackexchange.com/questions/992807">Manipulation of Bell Polynomials</a>. This was my <em>starting point</em> in applying the techniques used here. Another way to go through this answer is to focus on the <em>highlighted regions</em>, thereby skipping less important details.</p>
<blockquote>
<p><strong>Note: [2015-03-08]</strong></p>
<p>It's a great pleasure to now provide a <strong>complete answer</strong> to OPs challenging question. I could close the gap by adding <em>Step 2</em> and <em>Step 3</em> of <em>Part 3</em> of my answer. It was a <em>really tough</em> job to perform all the calculations (and many more verification steps behind the scenes). So, if the interested reader is willing to review parts of this answer, any fruitful hints to reduce the size of this proof are appreciated.</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Overview:</strong></p>
<p>The answer is divided into three parts. </p>
<p><strong>Part 1:</strong> Convenient representation of OPs formula</p>
<p>We transform OPs expression conveniently and replace the <em>convolution product</em> $a\diamond b$ with corresponding <em>partial Bell polynomials</em> to ease further calculations. We show</p>
<p>OPs formula is equivalent to (argument $x$ omitted):</p>
<p>\begin{align*}
\sum_{k=1}^{n}&B_{n,k}^{f\cdot (\ln\circ g)}=
\sum_{k=1}^{n}\left((\ln\circ g)^{k}B_{n,k}^f+\frac{(f)_k}{g^k}B_{n,k}^g\right)\tag{1}\\
&+\sum_{l=1}^{n-1}\sum_{m=1}^{l}\sum_{k=1}^{n-l}\sum_{j=0}^{m}\binom{n}{l}\binom{m}{j}
\frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]B_{l,m}^{f}B_{n-l,k}^{g}\qquad\qquad n\geq 1
\end{align*}</p>
<p><strong>Part 2:</strong> Derivation of $B_{n,k}^{f\cdot (\ln\circ g)}$</p>
<p>The strategy to prove OPs formula (1) is to derive the partial Bell polynomials $B_{n,k}^{f\cdot (\ln\circ g)}$ from scratch and show that the resulting formula is the same as (1). We do so by starting with the partial Bell polynomials of $f$ and $g$ and building <em>stepwise</em> more complex partial Bell polynomials.</p>
<p><strong>2.1 Preliminaries:</strong></p>
<p>We will recognise that deriving Bell polynomials is strongly related to <em>composites of generating functions</em> and their <em>Taylor expansion series</em>. We introduce the necessary concepts and notation.</p>
<p><strong>2.2 From $B_{n,k}^{\ln}$ and $B_{n,k}^g$ to $B_{n,k}^{\ln \circ g}$</strong></p>
<p>We create the partial Bell polynomials $B_{n,k}^{\ln \circ g}$ from $B_{n,k}^{\ln}$ and $B_{n,k}^g$ and show</p>
<p>The following is valid
\begin{align*}
B^{\ln\circ g}_{n,k}&=\sum_{j=k}^{n}\frac{(-1)^{j-k}}{g^j}\begin{bmatrix}j\\k\end{bmatrix}B^{g}_{n,j}\qquad 1\leq k \leq n\tag{2}
\end{align*}</p>
<p>with $\begin{bmatrix}j\\k\end{bmatrix}$ the <em><a href="http://en.wikipedia.org/wiki/Stirling_numbers_of_the_first_kind#Unsigned_Stirling_numbers_of_the_first_kind" rel="nofollow noreferrer">unsigned Stirling numbers of the first kind</a></em>.</p>
<p><strong>2.3 From $B_{n,k}^{f}$ and $B_{n,k}^h$ to $B_{n,k}^{f\cdot h}$</strong></p>
<p>We create the partial Bell polynomials $B_{n,k}^{f \cdot h}$ from $B_{n,k}^{f}$ and $B_{n,k}^h$ and show</p>
<p>The following is valid</p>
<p>\begin{align*}
B_{n,k}^{f\cdot h}&=h^k B_{n,k}^{f} + f^k B_{n,k}^{h}\\
&\qquad+\frac{1}{k!}\sum_{m_1=1}^k\sum_{m_2=1}^{k}\binom{k}{m_1}\binom{m_1}{k-m_2}m_{1}! m_{2}!f^{k-m_1}h^{k-m_2}\tag{3}\\
&\qquad\qquad\cdot \sum_{l=1}^{n-1}\binom{n}{l}B_{l,m_1}^fB_{n-l,m_2}^{h}\qquad\qquad 1\leq k \leq n\\
\end{align*}</p>
<p>We thereby find some nice combinatorial identities and use interesting techniques (by Egorychev) in order to prove them.</p>
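<p>Identity (3) can be sanity-checked for small $n$ with generic $f$ and $h$ before working through the proof (a sympy sketch; <code>B</code> is my helper computing partial Bell polynomials from their generating function):</p>

```python
from sympy import symbols, Function, diff, factorial, binomial, expand

x, t = symbols('x t')
f, h = Function('f')(x), Function('h')(x)

def B(n, k, u):
    # partial Bell polynomial B_{n,k}^u from its generating function
    egf = sum(diff(u, x, m) * t**m / factorial(m) for m in range(1, n + 1))
    return factorial(n) / factorial(k) * (egf**k).expand().coeff(t, n)

n = 4
for k in range(1, n + 1):
    lhs = B(n, k, f * h)
    rhs = h**k * B(n, k, f) + f**k * B(n, k, h)
    rhs += sum(binomial(k, m1) * binomial(m1, k - m2)
               * factorial(m1) * factorial(m2)
               * f**(k - m1) * h**(k - m2)
               * binomial(n, l) * B(l, m1, f) * B(n - l, m2, h)
               for m1 in range(1, k + 1) for m2 in range(1, k + 1)
               for l in range(1, n)) / factorial(k)
    assert expand(lhs - rhs) == 0
print("identity (3) checked for n = 4")
```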
<p><strong>2.4 From $B_{n,k}^{f \cdot h}$ and $B_{n,k}^{\ln \circ g}$ to $B_{n,k}^{f\cdot (\ln \circ g)}$</strong></p>
<p>We combine the partial Bell polynomials $B_{n,k}^{f \cdot h}$ and $B_{n,k}^{\ln \circ g}$ and summing from $1 \leq k \leq n$ results in</p>
<p>\begin{align*}
\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln \circ g)}
&=\sum_{k=1}^{n}\left((\ln \circ g)^k B_{n,k}^{f} + f^k \sum_{j=k}^{n}\frac{(-1)^{j-k}}{g^j}\begin{bmatrix}j\\k\end{bmatrix}B^{g}_{n,j}\right)\\
&\qquad+\sum_{k=1}^{n}\sum_{m_1=1}^k\sum_{m_2=1}^{k}\frac{m_2!}{(k-m_1)!}\binom{m_1}{k-m_2}f^{k-m_1} (\ln \circ g)^{k-m_2}\\
&\qquad\qquad\cdot\sum_{l=1}^{n-1}\sum_{j=m_2}^{n-l}\frac{(-1)^{j-m_2}}{g^j}\begin{bmatrix}j\\m_2\end{bmatrix}\binom{n}{l}B_{l,m_1}^fB^{g}_{n-l,j}\tag{4}
\end{align*}</p>
<p><strong>Part 3:</strong> </p>
<p>We have now derived an identity for $\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln \circ g)}$. We finally show that this identity (4) is equivalent to OPs formula (1) and the proof is completed.</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Solution:</strong></p>
<p><strong>Part 1:</strong></p>
<p>First we take a closer look at the structure of OPs expression. In order to do so we introduce the abbreviation:</p>
<p>$$B_{n,k}^{f}(x) := B_{n,k}(f^\prime(x),f^{\prime\prime}(x),\ldots,f^{(n-k+1)}(x))$$</p>
<p>OPs expression can therefore be written as</p>
<p>\begin{align*}
\sum_{k=1}^{n}&\ln^{k}(g(x))B_{n,k}^f(x)=\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}(x)\\
&-\sum_{k=1}^{n}\sum_{m=0}^{n-k}\sum_{j=0}^{m}\binom{m}{j}
\frac{\ln^{m-j}(g(x))}{g(x)^k}\frac{d^j}{d(f(x))^j}[(f(x))_k]B_{{n,m+k}_{(f\rightarrow g)^k}}^f(x)
\end{align*}</p>
</blockquote>
<p>We see partial Bell polynomials of different complexity. The most complex is $B_{n,k}^{f\cdot (\ln \circ g)}$, which is based upon the composition of $\ln$ with a function $g$ and multiplication by a function $f$. Next we have $B_{{n,m+k}_{(f\rightarrow g)^k}}^f$, which is an expression containing the Bell polynomials $B_{n,k}^f$ and $B_{n,k}^g$ and is therefore based upon $f$ and $g$. The simplest one is $B_{n,k}^{f}$, based upon $f$ only. Now we reorganise the expression, thereby putting $B_{n,k}^{f\cdot (\ln \circ g)}$ on the LHS.</p>
<blockquote>
<p>In order to keep the complicated expressions better manageable, we will also sometimes <em>omit the variable $x$</em>. OPs expression can now be written as:</p>
<p>\begin{align*}
\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}=\sum_{k=1}^{n}&(\ln\circ g)^{k}B_{n,k}^f\tag{5}\\
&+\sum_{k=1}^{n}\sum_{m=0}^{n-k}\sum_{j=0}^{m}\binom{m}{j}
\frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]B_{{n,m+k}_{(f\rightarrow g)^k}}^f\qquad\qquad n\geq 1
\end{align*}</p>
<p>The next step is to take a closer look at the <em><a href="http://en.wikipedia.org/wiki/Bell_polynomials#Convolution_identity" rel="nofollow noreferrer">convolution product</a></em>
\begin{align*}
B_{{n,k}_{(f\rightarrow g)^c}}^f:= \frac{(a^{(k-c)\diamond} \diamond b^{c\diamond})_n}{(k-c)!c!}\qquad 1\leq k\leq n, 0\leq c\leq k
\end{align*}
with $a=(f^{\prime},f^{\prime\prime},\ldots)$ and $b=(g^{\prime},g^{\prime\prime},\ldots)$</p>
</blockquote>
<p><em>Note:</em> You might have a look at my <a href="https://math.stackexchange.com/questions/1045505/any-ideas-on-how-i-can-prove-this-expression/1057934#1057934">first answer</a> to this question. See the section: <em>The convolution product $x \diamond y$ demystified</em> which provides some information about it.</p>
<p>We note that the partial Bell polynomial $B_{n,k}^f$ is defined as:</p>
<p>\begin{align*}
\frac{1}{n!}B_{n,k}^f=[t^n]\frac{1}{k!}\left(\sum_{m\geq 1}f^{(m)}\frac{t^m}{m!}\right)^k\qquad 1 \leq k \leq n
\end{align*}</p>
<p>You might also have a look at the section: <em>Definition of partial Bell polynomials $B_{n,k}$</em> of my <a href="https://math.stackexchange.com/questions/1045505/any-ideas-on-how-i-can-prove-this-expression/1057934#1057934">first answer</a> for more information.</p>
<p>With this information at hand we can show that</p>
<blockquote>
<p>the following is valid for $1\leq k \leq n$:</p>
<p>\begin{equation*}
B_{{n,k}_{(f\rightarrow g)^c}}^f=
\begin{cases}
B_{n,k}^f\qquad& c=0\\
\sum_{l=1}^{n-1}\binom{n}{l}B_{l,k-c}^fB_{n-l,c}^g\qquad& n>1, 0 < c < k\tag{6}\\
B_{n,k}^g\qquad& c=k\\
\end{cases}
\end{equation*}</p>
</blockquote>
<p>We observe for $n>1,0<c<k$</p>
<p>\begin{align*}
\frac{1}{n!}B_{{n,k}_{(f\rightarrow g)^c}}^f&=[t^n]\frac{1}{(k-c)!}\left(\sum_{m\geq 1}f^{(m)}\frac{t^m}{m!}\right)^{k-c}
\frac{1}{c!}\left(\sum_{m\geq 1}g^{(m)}\frac{t^m}{m!}\right)^{c}\\
&=\sum_{l=1}^{n-1}[t^l]\frac{1}{(k-c)!}\left(\sum_{m\geq 1}f^{(m)}\frac{t^m}{m!}\right)^{k-c}
[t^{n-l}]\frac{1}{c!}\left(\sum_{m\geq 1}g^{(m)}\frac{t^m}{m!}\right)^{c}\\
&=\sum_{l=1}^{n-1}\frac{1}{l!}B_{l,k-c}^{f}\frac{1}{(n-l)!}B_{n-l,c}^{g}\\
&=\frac{1}{n!}\sum_{l=1}^{n-1}\binom{n}{l}B_{l,k-c}^{f}B_{n-l,c}^{g}\\
\end{align*}</p>
<p>The cases $n=1$ and $c\in \{0,k\}$ can be easily verified and (6) follows.</p>
<blockquote>
<p>With the help of (6) we can now write OPs expression (5) as:</p>
<p>\begin{align*}
\sum_{k=1}^{n}&B_{n,k}^{f\cdot (\ln\circ g)}=
\sum_{k=1}^{n}\left((\ln\circ g)^{k}B_{n,k}^f+\frac{(f)_k}{g^k}B_{n,k}^g\right)\tag{7}\\
&+\sum_{k=1}^{n}\sum_{m=1}^{n-k}\sum_{j=0}^{m}\binom{m}{j}
\frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]\sum_{l=1}^{n-1}\binom{n}{l}B_{l,m}^{f}B_{n-l,k}^{g}\qquad\qquad n\geq 1
\end{align*}</p>
<p>Observe that in (7) the index range of $m$ starts with $1$ instead of $0$. The case $m=0$ is now treated as $\sum_{k=1}^{n}\frac{(f)_k}{g^k}B_{n,k}^g$ corresponding to the case $c=k$ in (6) whereas the other summand $\sum_{k=1}^{n}(\ln\circ g)^{k}B_{n,k}^f$ corresponds to the case $c=0$.</p>
<p>We note that $B_{l,m}^{f}=0$ for $m>l$ and $B_{n-l,k}^{g}=0$ for $k>n-l$. In order to analyse the sum in (7) conveniently we now restrict the index range precisely to those values, which give a contribution to the sum.</p>
</blockquote>
<p>Relevant for these considerations is the expression </p>
<p>\begin{align*}
\sum_{k=1}^{n-1}&\sum_{l=1}^{n-1}\sum_{m=1}^{n-k}B_{l,m}^fB_{n-l,k}^g\tag{8}\\
&=\sum_{k=1}^{n-1}\sum_{l=1}^{n-k}\sum_{m=1}^{n-k}B_{l,m}^fB_{n-l,k}^g\tag{9}\\
&=\sum_{k=1}^{n-1}\sum_{l=1}^{n-k}\sum_{m=1}^{l}B_{l,m}^fB_{n-l,k}^g\tag{10}\\
&=\sum_{l=1}^{n-1}\sum_{k=1}^{n-l}\sum_{m=1}^{l}B_{l,m}^fB_{n-l,k}^g\tag{11}\\
\end{align*}</p>
<p><em>Comment:</em></p>
<ul>
<li><p>(8) Index $k\leq n-1$ since $(l\geq 1$ and $k=n)\Rightarrow (B_{n-l,k}^g=0)$</p></li>
<li><p>(9) Index $l\leq n-k$ since otherwise $B_{n-l,k}^g=0$.</p></li>
<li><p>(10) Index $m\leq l$ since otherwise $B_{l,m}^f=0$.</p></li>
<li><p>(11) Exchange sums with index $l$ and $k$</p></li>
</ul>
<blockquote>
<p>OPs expression can therefore be written as:</p>
<p>\begin{align*}
\sum_{k=1}^{n}&B_{n,k}^{f\cdot (\ln\circ g)}=
\sum_{k=1}^{n}\left((\ln\circ g)^{k}B_{n,k}^f+\frac{(f)_k}{g^k}B_{n,k}^g\right)\tag{12}\\
&+\sum_{l=1}^{n-1}\sum_{m=1}^{l}\sum_{k=1}^{n-l}\sum_{j=0}^{m}\binom{n}{l}\binom{m}{j}
\frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]B_{l,m}^{f}B_{n-l,k}^{g}\qquad\qquad n\geq 1
\end{align*}</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Summary of part 1:</strong> We have now finalised this part. OPs expression in the form (12) is a convenient representation for further analysis.</p>
<p><em>And now for something completely different:</em> In the following part 2 we develop <em>a formula</em> for
$$\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}$$
from scratch. In the last part 3 we will show that both formulae are equal and the job is done.</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Part 2:</strong></p>
<p>The development of a formula for $\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}$ from scratch is done in several steps. We start with some</p>
<p><strong>Part 2.1: Preliminaries</strong></p>
<p>Let's consider a function $f=f(z)$ and its Taylor series expansion at a point $x$
\begin{align*}
f(z+x)=\sum_{n\geq 0}\frac{f^{(n)}(x)}{n!}z^n
\end{align*}
We will take a look at the $k$-th power of the difference $f(x+z)-f(x)$ since there is a strong relationship with the <em><a href="http://en.wikipedia.org/wiki/Bell_polynomials" rel="nofollow noreferrer">partial Bell Polynomials</a></em> $B_{n,k}\ \ (1\leq k\leq n)$:
\begin{align*}
B_{n,k}(f^{\prime},f^{\prime\prime},\ldots,f^{(n-k+1)})
&=\frac{1}{k!}\sum_{\pi_k \in C_n}\binom{n}{\lambda_1,\lambda_2,\ldots,\lambda_k}f^{(\lambda_1)}f^{(\lambda_2)}\cdot\ldots\cdot f^{(\lambda_k)}\qquad\\
&=\frac{n!}{k!}\sum_{\pi_k \in C_n}\frac{f^{(\lambda_1)}(x)}{\lambda_1!}\frac{f^{(\lambda_2)}(x)}{\lambda_2!}\cdot\ldots\cdot\frac{f^{(\lambda_k)}(x)}{\lambda_k!}
\end{align*}
Here $f^{(n)}$ is the $n$-th derivative of $f=f(x)$, $C_n$ is the set of <a href="http://en.wikipedia.org/wiki/Composition_%28combinatorics%29" rel="nofollow noreferrer">compositions</a>
of $n$, and $\pi_k$ is a composition of $n$ with exactly $k$ parts, $\lambda_1+\lambda_2+\ldots+\lambda_k=n$.</p>
<p>In order to do so it is convenient to define
\begin{align*}
Y_f(x,z) := f(x+z)-f(x) =\sum_{n> 0}\frac{f^{(n)}(x)}{n!}z^n
\end{align*}
and consider the $k$-th power of $Y_f(x,z)$. We get
\begin{align*}
\left(Y_f(x,z)\right)^k=\left(f(x+z)-f(x)\right)^k=\sum_{n\geq k}Y_f^{\triangle}(n,k,x)z^n
\end{align*}
$Y_f^{\triangle}(n,k,x)$ is introduced to denote the coefficient of $z^n$ of $\left(Y_f(x,z)\right)^k$. In the following we use the <em><a href="http://arxiv.org/abs/math/9402216" rel="nofollow noreferrer">coefficient of</a></em> operator $[z^n]$ to denote the coefficient of $z^n$ of a power series. We observe</p>
<p>\begin{align*}
Y_f^\triangle(n,k,x)&=[z^n]\left(f(x+z)-f(x)\right)^k\\
&=\sum_{\pi_{k}\in C_n}\frac{f^{(\lambda_1)}(x)}{\lambda_1!}\frac{f^{(\lambda_2)}(x)}{\lambda_2!}\cdot\ldots\cdot\frac{f^{(\lambda_k)}(x)}{\lambda_k!}\\
&=\frac{k!}{n!}B_{n,k}(f^{\prime},f^{\prime\prime},\ldots,f^{(n-k+1)})\qquad\qquad 1\leq k \leq n
\end{align*}
It follows
\begin{align*}
B_{n,k}(f^{\prime},f^{\prime\prime},\ldots,f^{(n-k+1)})=\frac{n!}{k!}Y_f^\triangle(n,k,x)
\end{align*}</p>
</blockquote>
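<p>As an illustration of this relation, take $f=\exp$: then $Y_f(x,z)=e^x\left(e^z-1\right)$ and $B_{n,k}\left(e^x,\ldots,e^x\right)=e^{kx}S(n,k)$, with $S(n,k)$ the Stirling numbers of the second kind. A sympy sketch (helper names are mine):</p>

```python
from sympy import symbols, exp, factorial
from sympy.functions.combinatorial.numbers import stirling

# Sketch: B_{n,k}^f = n!/k! * Y_f^triangle(n,k,x) for f = exp equals
# e^{kx} S(n,k), the Stirling numbers of the second kind.
x, z = symbols('x z')
N = 6
for k in range(1, 4):
    Yk = ((exp(x + z) - exp(x))**k).series(z, 0, N).removeO().expand()
    for n in range(k, N):
        Bnk = factorial(n) / factorial(k) * Yk.coeff(z, n)
        assert (Bnk - exp(k * x) * stirling(n, k)).simplify() == 0
print("B_{n,k}^exp = e^{kx} S(n,k) checked")
```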
<p>We again use the following abbreviation for the $B_{n,k}$:
\begin{align*}
B^{f}_{n,k}(x):=B_{n,k}(f^{\prime},f^{\prime\prime},\ldots,f^{(n-k+1)})
\end{align*}</p>
<blockquote>
<p><strong>Part 2.2: Calculation of $B^{\ln \circ g}_{n,k}$</strong></p>
<p>Using the notation introduced in part 2.1 we consider the difference of the composita of $(\ln \circ g)$:
$$Y_{\ln \circ g}(x,z)=\ln(g(x+z))-\ln(g(x))=\sum_{n>0}\frac{(\ln \circ g)^{(n)}(x)}{n!}z^n$$
and calculate the coefficient $Y^\triangle_{\ln \circ g}(n,k,x)$ of $z^n$ of the $k$-th powers of $Y_{\ln \circ g}(x,z)$, $k\geq 1$.</p>
<p>We do it in three steps. First we calculate $B^{g}_{n,k}$, then $B^{\ln}_{n,k}$ and finally we get $B^{\ln \circ g}_{n,k}$.</p>
</blockquote>
<p><strong>Step 2.2.1: Calculation of $B^{g}_{n,k}$</strong></p>
<p>\begin{align*}
Y_g(x,z)&=g(x+z)-g(x)=\sum_{n>0}\frac{g^{(n)}(x)}{n!}z^n\\
\left(Y_g(x,z)\right)^k&=\left(g(x+z)-g(x)\right)^k=\sum_{n\geq k}Y_g^{\triangle}(n,k,x)z^n\\
\\
Y_g^{\triangle}(n,k,x)&=[z^n]\left(g(x+z)-g(x)\right)^k\\
&=\frac{k!}{n!}B^g_{n,k}(x)
\end{align*}</p>
<blockquote>
<p>It follows
\begin{align*}
B^g_{n,k}(x)=\frac{n!}{k!}Y_g^{\triangle}(n,k,x)
\end{align*}</p>
</blockquote>
<p><strong>Step 2.2.2: Calculation of $B^{\ln}_{n,k}$</strong></p>
<p>\begin{align*}
Y_{\ln}(x,z)&=\ln(x+z)-\ln(x)=\sum_{n>0}\frac{\ln^{(n)}(x)}{n!}z^n\\
\left(Y_{\ln}(x,z)\right)^k&=\left(\ln(x+z)-\ln(x)\right)^k=\sum_{n\geq k}Y_{\ln}^{\triangle}(n,k,x)z^n\\
\\
Y_{\ln}^{\triangle}(n,k,x)&=[z^n]\left(\ln(x+z)-\ln(x)\right)^k\\
&=[z^n]\left(\ln\left(1+\frac{z}{x}\right)\right)^k\\
&=[z^n]\sum_{l\geq k}k!(-1)^{l-k}
\begin{bmatrix}l\\k\end{bmatrix}
x^{-l}\frac{z^l}{l!}\tag{13}\\
&=\frac{k!}{n!}(-1)^{n-k}\begin{bmatrix}n\\k\end{bmatrix}x^{-n}\\
&=\frac{k!}{n!}B^{\ln}_{n,k}(x)
\end{align*}</p>
<blockquote>
<p>It follows
\begin{align*}
B^{\ln}_{n,k}(x)=
(-1)^{n-k}\begin{bmatrix}n\\k\end{bmatrix}
x^{-n}
\end{align*}
In (13) we use the <a href="http://en.wikipedia.org/wiki/Stirling_numbers_of_the_first_kind" rel="nofollow noreferrer">unsigned Stirling numbers of the first kind</a> $\begin{bmatrix}n\\k\end{bmatrix}$, which are defined as coefficients in the <em>exponential power series</em> for the $k$-th powers of the logarithm $\ln(1+z)$:
$$\sum_{n\geq k}(-1)^{n-k}\begin{bmatrix}n\\k\end{bmatrix}\frac{z^n}{n!}=\frac{1}{k!}\left(\ln (1+z)\right)^k$$</p>
</blockquote>
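<p>The expansion used in (13) can be cross-checked against sympy's Stirling numbers of the first kind (a sketch, truncated at order $N$):</p>

```python
from sympy import symbols, log, factorial
from sympy.functions.combinatorial.numbers import stirling

# Sketch: (1/k!) (log(1+z))^k = sum_{n>=k} (-1)^{n-k} [n,k] z^n / n!
# with [n,k] the unsigned Stirling numbers of the first kind.
z = symbols('z')
N = 7
for k in range(1, 5):
    lhs = (log(1 + z)**k / factorial(k)).series(z, 0, N).removeO()
    for n in range(k, N):
        c = lhs.coeff(z, n) * factorial(n)
        assert c == (-1)**(n - k) * stirling(n, k, kind=1, signed=False)
print("Stirling-number EGF for log powers checked")
```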
<p><strong>Step 2.2.3: Calculation of $B^{\ln\circ g}_{n,k}$</strong>
\begin{align*}
Y_{\ln\circ g}(x,z)&=\ln\left(g(x+z)\right)-\ln\left(g(x)\right)=\sum_{n>0}\frac{(\ln\circ g)^{(n)}(x)}{n!}z^n\\
\left(Y_{\ln\circ g}(x,z)\right)^k&=\left(\ln\left(g(x+z)\right)-\ln\left(g(x)\right)\right)^k=\sum_{n\geq k}Y_{\ln\circ g}^{\triangle}(n,k,x)z^n\\
\end{align*}</p>
<blockquote>
<p>According to <em>Theorem 6</em> in <a href="http://arxiv.org/abs/1104.5065" rel="nofollow noreferrer">Kruch_3</a> the composita of $(\ln \circ g)$ are given as
\begin{align*}
Y_{\ln\circ g}^{\triangle}(n,k,x)=\sum_{j=k}^{n}Y_{g}^{\triangle}(n,j,x)\cdot Y_{\ln}^{\triangle}(j,k,g(x))
\end{align*}
It follows
\begin{align*}
\frac{k!}{n!}B^{\ln\circ g}_{n,k}(x)&=\sum_{j=k}^{n}\frac{j!}{n!}B^{g}_{n,j}(x)
\cdot\frac{k!}{j!}B^{\ln}_{j,k}(g(x))\\
&=\sum_{j=k}^{n}\frac{k!}{n!}B^{g}_{n,j}(x)(-1)^{j-k}\begin{bmatrix}j\\k\end{bmatrix}\left(g(x)\right)^{-j}
\end{align*}
We therefore get
\begin{align*}
B^{\ln\circ g}_{n,k}(x)&=\sum_{j=k}^{n}B^{g}_{n,j}(x)(-1)^{j-k}\begin{bmatrix}j\\k\end{bmatrix}\left(g(x)\right)^{-j}\tag{14}
\end{align*}</p>
</blockquote>
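<p>Formula (14) can be sanity-checked symbolically for small $n$ with a generic $g$ (a sympy sketch; <code>B</code> is my helper computing $B_{n,k}^h$ from its generating function):</p>

```python
from sympy import symbols, Function, log, diff, factorial, simplify
from sympy.functions.combinatorial.numbers import stirling

x, t = symbols('x t')
g = Function('g')(x)

def B(n, k, h):
    # partial Bell polynomial B_{n,k}^h from its generating function
    egf = sum(diff(h, x, m) * t**m / factorial(m) for m in range(1, n + 1))
    return factorial(n) / factorial(k) * (egf**k).expand().coeff(t, n)

n = 4
for k in range(1, n + 1):
    lhs = B(n, k, log(g))
    rhs = sum((-1)**(j - k) * stirling(j, k, kind=1, signed=False) / g**j
              * B(n, j, g) for j in range(k, n + 1))
    assert simplify(lhs - rhs) == 0
print("formula (14) checked for n = 4")
```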
<p>The next step is to calculate $B_{n,k}$ for the product $f\cdot h$ of two functions $f$ and $h$.</p>
<blockquote>
<p><strong>Part 2.3: Calculation of $B^{f\cdot h}_{n,k}$</strong></p>
</blockquote>
<p>We start similarly as in part 2.2:</p>
<p><strong>Step 2.3.1: Derivation of $B^{f\cdot h}_{n,k}$</strong></p>
<p>\begin{align*}
Y_{f\cdot h}(x,z)&=f(x+z)h(x+z)-f(x)h(x)=\sum_{n>0}\frac{(f\cdot h)^{(n)}(x)}{n!}z^n\\
\left(Y_{f\cdot h}(x,z)\right)^k&=\left(f(x+z)h(x+z)-f(x)h(x)\right)^k=\sum_{n\geq k}Y_{f\cdot h}^{\triangle}(n,k,x)z^n\\
\end{align*}</p>
<blockquote>
<p>\begin{align*}
Y_{f\cdot h}^{\triangle}(n,k,x)&=[z^n]\left(f(x+z)h(x+z)-f(x)h(x)\right)^k\\
&=[z^n]\sum_{j=0}^{k}\binom{k}{j}\left(f(x+z)h(x+z)\right)^j\left(-f(x)h(x)\right)^{k-j}\\
&=\sum_{j=0}^{k}\binom{k}{j}\left(-f(x)h(x)\right)^{k-j}[z^n]\left(f(x+z)h(x+z)\right)^j\\
&=\sum_{j=0}^{k}\binom{k}{j}\left(-f(x)h(x)\right)^{k-j}\\
&\qquad\cdot\sum_{i=0}^{n}\left([z^i]\left(f(x+z)\right)^j\right)\left([z^{n-i}]\left(h(x+z)\right)^j\right)\tag{15}
\end{align*}
We want to replace the expressions $[z^n]\left(f(x+z)\right)^j$ in (15) with an expression containing $Y_{f}(x,z)$ instead. Then we can again substitute $Y_f(x,z)$ with the corresponding $B_{n,k}^f$. We observe:
\begin{align*}
[z^n]f(x+z)^k&=[z^n]\left(f(x+z)-f(x)+f(x)\right)^k\\
&=[z^n]\sum_{j=0}^{k}\binom{k}{j}\left(f(x+z)-f(x)\right)^j\left(f(x)\right)^{k-j}\\
&=\left(f(x)\right)^k\delta_{n,0}+\sum_{j=1}^{k}\binom{k}{j}\left(f(x)\right)^{k-j}[z^n]\left(f(x+z)-f(x)\right)^j\\
&=\left(f(x)\right)^k\delta_{n,0}+\sum_{j=1}^{k}\binom{k}{j}\left(f(x)\right)^{k-j}Y_f^\triangle(n,j,x)\tag{16}
\end{align*}
Since $Y_f^\triangle(n,j,x)$ is defined for $j\geq 1$ only, we have to consider the case $j=0$ separately. We thereby use the <em><a href="http://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">Kronecker $\delta$-symbol</a></em>. Now substituting (16) in (15) gives:</p>
</blockquote>
<p>\begin{align*}
Y_{f\cdot h}^{\triangle}(n,k,x)&=\sum_{j=0}^{k}\binom{k}{j}\left(-f(x)h(x)\right)^{k-j}\sum_{l=0}^{n}\\
&\qquad\left(\left(f(x)\right)^j\delta_{l,0}+\sum_{m_1=1}^{j}\binom{j}{m_1}\left(f(x)\right)^{j-m_1}Y_f^\triangle(l,m_1,x)\right)\\
&\qquad\cdot\left(\left(h(x)\right)^j\delta_{n-l,0}+\sum_{m_2=1}^{j}\binom{j}{m_2}\left(h(x)\right)^{j-m_2}Y_h^\triangle(n-l,m_2,x)\right)\\
&=\left(f(x)\right)^k\sum_{j=0}^{k}\binom{k}{j}\left(-1\right)^{k-j}\sum_{m_2=1}^{j}\binom{j}{m_2}\left(h(x)\right)^{k-m_2}Y_h^\triangle(n,m_2,x)\\
&\qquad+\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\sum_{l=1}^{n-1}
\left(\sum_{m_1=0}^{j}\binom{j}{m_1}\left(f(x)\right)^{k-m_1}Y_f^\triangle(l,m_1,x)\right)\\
&\qquad\qquad\cdot\left(\sum_{m_2=0}^{j}\binom{j}{m_2}\left(h(x)\right)^{k-m_2}Y_h^\triangle(n-l,m_2,x)\right)\\
&\qquad+\left(h(x)\right)^k\sum_{j=0}^{k}\binom{k}{j}\left(-1\right)^{k-j}\sum_{m_1=1}^{j}\binom{j}{m_1}\left(f(x)\right)^{k-m_1}Y_f^\triangle(n,m_1,x)\\
\end{align*}</p>
<blockquote>
<p>Substituting $B_{n,k}^{f\cdot h}(x)$ for $Y_{f\cdot h}^{\triangle}(n,k,x)$ (and omitting the argument $x$) gives:</p>
<p>\begin{align*}
k!\frac{B_{n,k}^{f\cdot h}}{(fh)^{k}}&=\sum_{j=1}^{k}\binom{k}{j}(-1)^{k-j}
\left(\sum_{m=1}^j\binom{j}{m}m!\left(\frac{B_{n,m}^{h}}{h^{m}} + \frac{B_{n,m}^{f}}{f^{m}}\right)\right.\\
&\qquad+\sum_{l=1}^{n-1}\binom{n}{l}\left(\sum_{m_1=1}^j\binom{j}{m_1}m_{1}!\frac{B_{l,m_1}^f}{f^{m_1}}\right)
\left.\left(\sum_{m_2=1}^j\binom{j}{m_2}m_{2}!\frac{B_{n-l,m_2}^h}{h^{m_2}}\right)\right)\tag{17}\\
\end{align*}
with the factor $\frac{k!}{(fh)^k}$ on the left-hand side to better show the nice symmetry.</p>
<p><strong>Step 2.3.2: Simplification of $B^{f\cdot h}_{n,k}$</strong></p>
<p>We can <em>simplify (17) considerably</em>. Observe, that the RHS consists of two parts, the first being a double sum with structure</p>
<p>\begin{align*}
\sum_{j=1}^{k}\binom{k}{j}(-1)^{k-j}\sum_{m=1}^j\binom{j}{m}\varphi_m \qquad\qquad k\geq 1
\end{align*}
and $\varphi_m$ equal to
\begin{align*}
\varphi_m=m!\left(\frac{B_{n,m}^{f}}{f^{m}} + \frac{B_{n,m}^{h}}{h^{m}}\right)\tag{18}
\end{align*}
We show that the following is valid</p>
<p>\begin{align*}
\sum_{j=1}^{k}\binom{k}{j}(-1)^{k-j}\sum_{m=1}^j\binom{j}{m}\varphi_m=\varphi_k \qquad\qquad k\geq 1\tag{19}
\end{align*}</p>
</blockquote>
<p>We observe
\begin{align*}
\sum_{j=1}^{k}&\binom{k}{j}(-1)^{k-j}\sum_{m=1}^j\binom{j}{m}\varphi_m\\
&=\sum_{m=1}^k\varphi_m\sum_{j=m}^{k}\binom{k}{j}\binom{j}{m}(-1)^{k-j}\\
&=\sum_{m=1}^k\varphi_m\sum_{j=m}^{k}\binom{k}{m}\binom{k-m}{j-m}(-1)^{k-j}\\
&=\sum_{m=1}^k\binom{k}{m}\varphi_m\sum_{j=0}^{k-m}\binom{k-m}{j}(-1)^{(k-m)-j}\\
&=\sum_{m=1}^k\binom{k}{m}\varphi_m\delta_{{k-m},0}\\
&=\varphi_k
\end{align*}
and (19) follows.</p>
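<p>Identity (19) is a binomial inversion, and it can be spot-checked by machine. The following Python sketch (an informal numerical check for random values $\varphi_m$, not part of the proof) verifies it for small $k$:</p>

```python
from math import comb
import random

random.seed(0)

def lhs(phi, k):
    # sum_{j=1}^{k} C(k,j) (-1)^(k-j) sum_{m=1}^{j} C(j,m) phi[m]
    return sum(comb(k, j) * (-1) ** (k - j)
               * sum(comb(j, m) * phi[m] for m in range(1, j + 1))
               for j in range(1, k + 1))

for k in range(1, 8):
    # phi[0] is unused; identity (19) should return phi[k]
    phi = [random.uniform(-1.0, 1.0) for _ in range(k + 1)]
    assert abs(lhs(phi, k) - phi[k]) < 1e-9
```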
<blockquote>
<p>Now we take a look at the second part of the RHS of (17). We identify the following structure
\begin{align*}
\sum_{j=1}^{k}\binom{k}{j}(-1)^{k-j}\sum_{m_1=1}^j\binom{j}{m_1}\psi_{m_1}\sum_{m_2=1}^j\binom{j}{m_2}\psi_{m_2} \qquad\qquad k\geq 1
\end{align*}</p>
<p>with $\psi_{m_1},\psi_{m_2}$ equal to
\begin{align*}
\psi_{m_1}=m_{1}!\frac{B_{i,m_1}^f}{f^{m_1}}\qquad\qquad\psi_{m_2}=m_{2}!\frac{B_{n-i,m_2}^{h}}{h^{m_2}}\tag{20}
\end{align*}
The sum $\sum_{i=1}^{n-1}\binom{n}{i}$ is not part of our consideration. It depends neither on $\psi_{m_1}$ nor on $\psi_{m_2}$, and we think of it as standing at the front of the RHS (outside of $\sum_{j=1}^{k}$).</p>
<p>We show the following is valid</p>
<p>\begin{align*}
\sum_{j=1}^{k}&\binom{k}{j}(-1)^{k-j}\sum_{m_1=1}^j\binom{j}{m_1}\psi_{m_1}\sum_{m_2=1}^j\binom{j}{m_2}\psi_{m_2}\\
&\quad=\sum_{m_1=1}^k\sum_{m_2=1}^{k}\binom{k}{m_1}\binom{m_1}{k-m_2}\psi_{m_1}\psi_{m_2}\qquad\qquad k\geq 1\tag{21}
\end{align*}
This identity is less simple than identity (19) above, but with the <em>proper tool</em> at hand we can prove it <em>en passant</em>. We will use <em>Egorychev's residue calculus of formal power series.</em>
<p><em>Note:</em> This powerful technique is based upon <a href="http://en.wikipedia.org/wiki/Residue_theorem" rel="nofollow noreferrer">Cauchy's residue theorem</a> and was introduced by <em>G.P. Egorychev</em> (<a href="http://www.amazon.de/Representation-Computation-Combinatorial-Translations-Mathematical/dp/0821845128" rel="nofollow noreferrer">Integral Representation and the Computation of Combinatorial Sums</a>) to compute binomial identities.</p>
<p>We use only two aspects of this theory:</p>
<p>Let $A(z)=\sum_{j=0}^{\infty}a_jz^j$ be a <em>formal power series</em>, then</p>
<ul>
<li>Write the binomial coefficients as <em>residuals</em> of corresponding <em>formal power series</em></li>
</ul>
<p>\begin{align*}
\mathop{res}_{z}\frac{A(z)}{z^{j+1}}=a_j\tag{22}
\end{align*}</p>
<ul>
<li>Apply the <em>substitution rule</em> for formal power series:</li>
</ul>
<p>\begin{align*}
A(z)=\sum_{j=0}^{\infty}a_jz^{j}=\sum_{j=0}^{\infty}z^j\mathop{res}_{w}\frac{A(w)}{w^{j+1}}\tag{23}
\end{align*}</p>
</blockquote>
<p>We start the proof with elementary transformations:</p>
<p>\begin{align*}
\sum_{j=1}^{k}&\binom{k}{j}(-1)^{k-j}\sum_{m_1=1}^j\binom{j}{m_1}\psi_{m_1}\sum_{m_2=1}^j\binom{j}{m_2}\psi_{m_2}\\
&=\sum_{m_1=1}^{k}\psi_{m_1}\sum_{j=m_1}^k\binom{k}{j}\binom{j}{m_1}(-1)^{k-j}\sum_{m_2=1}^j\binom{j}{m_2}\psi_{m_2}\tag{24}\\
&=\sum_{m_1=1}^{k}\psi_{m_1}\left(\sum_{m_2=1}^{m_1}\psi_{m_2}\sum_{j=m_1}^k\binom{k}{j}\binom{j}{m_1}\binom{j}{m_2}(-1)^{k-j}\right.\\
&\qquad\qquad\left.+\sum_{m_2=m_1+1}^{k}\psi_{m_2}\sum_{j=m_2}^k\binom{k}{j}\binom{j}{m_1}\binom{j}{m_2}(-1)^{k-j}\right)\tag{25}\\
&=\sum_{m_1=1}^{k}\psi_{m_1}\left(\sum_{m_2=1}^{m_1}\psi_{m_2}\sum_{j=m_1}^k\binom{k}{m_1}\binom{k-m_1}{j-m_1}\binom{j}{m_2}(-1)^{k-j}\right.\\
&\qquad\qquad\left.+\sum_{m_2=m_1+1}^{k}\psi_{m_2}\sum_{j=m_2}^k\binom{k}{m_2}\binom{k-m_2}{j-m_2}\binom{j}{m_1}(-1)^{k-j}\right)\tag{26}\\
&=\sum_{m_1=1}^{k}\psi_{m_1}\left(\binom{k}{m_1}\sum_{m_2=1}^{m_1}\psi_{m_2}\sum_{j=0}^{k-m_1}\binom{k-m_1}{j}\binom{j+m_1}{m_2}(-1)^{(k-m_1)-j}\right.\\
&\qquad\qquad\left.+\sum_{m_2=m_1+1}^{k}\binom{k}{m_2}\psi_{m_2}\sum_{j=0}^{k-m_2}\binom{k-m_2}{j}\binom{j+m_2}{m_1}(-1)^{(k-m_2)-j}\right)\tag{27}\\
\end{align*}</p>
<p><em>Comment:</em></p>
<ul>
<li><p>(24) Exchange of the sums with indices $j$ and $m_1$.</p></li>
<li><p>(25) Exchange of the sums with indices $j$ and $m_2$.</p></li>
<li><p>(26) Usage of the binomial identity $\binom{k}{j}\binom{j}{m}=\binom{k}{m}\binom{k-m}{j-m}$</p></li>
<li><p>(27) Index shift $j\rightarrow j-m_1,j\rightarrow j-m_2$</p></li>
</ul>
<blockquote>
<p>We now apply <em>Egorychevs</em> technique to show that</p>
<p>The following is valid (and symmetrically with $m_1,m_2$ exchanged):
\begin{align*}
\sum_{j=0}^{k-m_1}\binom{k-m_1}{j}\binom{j+m_1}{m_2}(-1)^{(k-m_1)-j}=\binom{m_1}{k-m_2}\qquad\qquad k\geq 1\tag{28}
\end{align*}
We observe
\begin{align*}
\sum_{j=0}^{k-m_1}&\binom{k-m_1}{j}\binom{j+m_1}{m_2}(-1)^{(k-m_1)-j}\\
&=\sum_{j\geq 0}\binom{k-m_1}{j}\binom{-(m_2+1)}{j+m_1-m_2}(-1)^{k-m_2}\tag{29}\\
&=(-1)^{k-m_2}\sum_{j\geq 0}\mathop{res}_{z}\frac{(1+z)^{k-m_1}}{z^{j+1}}
\mathop{res}_u\frac{(1+u)^{-(m_2+1)}}{u^{j+m_1-m_2+1}}\tag{30}\\
&=(-1)^{k-m_2}\mathop{res}_u\frac{(1+u)^{-(m_2+1)}}{u^{m_1-m_2+1}}\sum_{j\geq 0}u^{-j}\mathop{res}_{z}\frac{(1+z)^{k-m_1}}{z^{j+1}}\tag{31}\\
&=(-1)^{k-m_2}\mathop{res}_u\frac{(1+u)^{-(m_2+1)}}{u^{m_1-m_2+1}}\left(1+\frac{1}{u}\right)^{k-m_1}\tag{32}\\
&=(-1)^{k-m_2}\mathop{res}_u\frac{(1+u)^{k-m_2-m_1-1}}{u^{k-m_2+1}}\tag{33}\\
&=(-1)^{k-m_2}\binom{k-m_2-m_1-1}{k-m_2}\tag{34}\\
&=\binom{m_1}{k-m_2}\tag{35}
\end{align*}</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (29) we use $\binom{-n}{k}=\binom{n+k-1}{k}(-1)^k$ and change the upper limit to $\infty$ without changing the value, since we add only $0$.</p></li>
<li><p>In (30) we use $(1+w)^n$ as <em>formal power series</em> with the coefficients $\binom{n}{k}$ according to (22). Observe that $\binom{k-m_1}{j}$ is the coefficient of $[z^{j}]$ in $(1+z)^{k-m_1}$ and similarly for $u$.</p></li>
<li><p>In (31) we do some rearrangement to prepare the application of the <em>substitution rule</em></p></li>
<li><p>In (32) we apply the substitution rule according to (23)</p></li>
<li><p>In (33) there is a simple rearrangement</p></li>
<li><p>In (34) and (35) we do the same as we did from (29) to (30), but in the reverse direction.</p></li>
</ul>
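<p>Identity (28) can also be double-checked numerically. The following Python sketch (an informal check over a small range, not part of the argument) relies on the convention that $\binom{a}{b}=0$ whenever $b>a\geq 0$, which is exactly what <code>math.comb</code> returns:</p>

```python
from math import comb

def lhs(k, m1, m2):
    # sum_{j=0}^{k-m1} C(k-m1, j) C(j+m1, m2) (-1)^((k-m1)-j)
    return sum(comb(k - m1, j) * comb(j + m1, m2) * (-1) ** ((k - m1) - j)
               for j in range(0, k - m1 + 1))

for k in range(1, 9):
    for m1 in range(1, k + 1):
        for m2 in range(1, k + 1):
            # identity (28): lhs equals C(m1, k - m2)
            assert lhs(k, m1, m2) == comb(m1, k - m2)
```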
<blockquote>
<p>We now proceed substituting the identity (28) in (27) and get:</p>
<p>\begin{align*}
\sum_{j=1}^{k}&\binom{k}{j}(-1)^{k-j}\sum_{m_1=1}^j\binom{j}{m_1}\psi_{m_1}\sum_{m_2=1}^j\binom{j}{m_2}\psi_{m_2}\\
&=\sum_{m_1=1}^{k}\left(\sum_{m_2=1}^{m_1}\binom{k}{m_1}\binom{m_1}{k-m_2}\psi_{m_1}\psi_{m_2}\right.\\
&\qquad\left.+\sum_{m_2=m_1+1}^k\binom{k}{m_2}\binom{m_2}{k-m_1}\psi_{m_1}\psi_{m_2}\right)\\
&=\sum_{m_1=1}^{k}\sum_{m_2=1}^{k}\binom{k}{m_1}\binom{m_1}{k-m_2}\psi_{m_1}\psi_{m_2}\\
\end{align*}</p>
</blockquote>
<p>In the last line we use the identity $\binom{k}{m_1}\binom{m_1}{k-m_2}=\binom{k}{m_2}\binom{m_2}{k-m_1}$
and (21) follows.</p>
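<p>As with (19), identity (21) can be verified numerically for random values $\psi_{m_1},\psi_{m_2}$ — a small Python sketch, meant only as a sanity check of the algebra above:</p>

```python
from math import comb
import random

random.seed(1)

def check(k):
    # random surrogate values for psi_{m_1} and psi_{m_2}; index 0 unused
    pa = [random.uniform(-1.0, 1.0) for _ in range(k + 1)]
    pb = [random.uniform(-1.0, 1.0) for _ in range(k + 1)]
    lhs = sum(comb(k, j) * (-1) ** (k - j)
              * sum(comb(j, m) * pa[m] for m in range(1, j + 1))
              * sum(comb(j, m) * pb[m] for m in range(1, j + 1))
              for j in range(1, k + 1))
    rhs = sum(comb(k, m1) * comb(m1, k - m2) * pa[m1] * pb[m2]
              for m1 in range(1, k + 1) for m2 in range(1, k + 1))
    return abs(lhs - rhs) < 1e-9

assert all(check(k) for k in range(1, 9))
```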
<blockquote>
<p>Let's summarise:</p>
<p>According to (18) and (19) we get</p>
<p>\begin{align*}
\sum_{j=1}^{k}&\binom{k}{j}(-1)^{k-j}\sum_{m=1}^{j}\binom{j}{m}
m!\left(\frac{B_{n,m}^{f}}{f^{m}} + \frac{B_{n,m}^{h}}{h^{m}}\right)\\
&=k!\left(\frac{B_{n,k}^{f}}{f^{k}} + \frac{B_{n,k}^{h}}{h^{k}}\right)\qquad\qquad k \geq 1\tag{34}
\end{align*}</p>
<p>and according to (20) and (21) we get</p>
<p>\begin{align*}
\sum_{j=1}^{k}&\binom{k}{j}(-1)^{k-j}\sum_{m_1=1}^j\binom{j}{m_1}m_{1}!\frac{B_{i,m_1}^f}{f^{m_1}}\sum_{m_2=1}^j\binom{j}{m_2}m_{2}!\frac{B_{n-i,m_2}^{h}}{h^{m_2}}\\
&\quad=\sum_{m_1=1}^k\sum_{m_2=1}^{k}\binom{k}{m_1}\binom{m_1}{k-m_2}m_{1}!m_{2}!\frac{B_{i,m_1}^f}{f^{m_1}}\frac{B_{n-i,m_2}^{h}}{h^{m_2}}\qquad\qquad k\geq 1\tag{35}\\
\end{align*}</p>
</blockquote>
<p>With the help of (34) and (35) the identity (17) simplifies to
\begin{align*}
k!\frac{B_{n,k}^{f\cdot h}}{(fh)^{k}}&=k!\left(\frac{B_{n,k}^{f}}{f^{k}} + \frac{B_{n,k}^{h}}{h^{k}}\right)+\sum_{l=1}^{n-1}\binom{n}{l}\\
&\qquad\cdot\sum_{m_1=1}^k\sum_{m_2=1}^{k}\binom{k}{m_1}\binom{m_1}{k-m_2}m_{1}!m_{2}!\frac{B_{l,m_1}^f}{f^{m_1}}\frac{B_{n-l,m_2}^{h}}{h^{m_2}}\\
\end{align*}</p>
<blockquote>
<p>We finally get</p>
<p>\begin{align*}
B_{n,k}^{f\cdot h}&=h^k B_{n,k}^{f} + f^k B_{n,k}^{h}\\
&\qquad+\frac{1}{k!}\sum_{m_1=1}^k\sum_{m_2=1}^{k}\binom{k}{m_1}\binom{m_1}{k-m_2}m_{1}! m_{2}!f^{k-m_1}h^{k-m_2}\\
&\qquad\qquad\cdot \sum_{l=1}^{n-1}\binom{n}{l}B_{l,m_1}^fB_{n-l,m_2}^{h}\tag{36}\\
\end{align*}</p>
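<p>Formula (36) can be spot-checked with a computer algebra system. The sketch below (a numerical check for one pair of test functions at one point, not a proof) uses SymPy's partial Bell polynomials <code>bell(n, k, seq)</code>, with $B^{f}_{n,k}(x)=B_{n,k}\left(f^{\prime}(x),f^{\prime\prime}(x),\ldots\right)$:</p>

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)        # arbitrary smooth test functions for the check
h = sp.sin(x) + 2

def B(expr, n, k):
    # partial Bell polynomial in the derivatives of expr
    derivs = [sp.diff(expr, x, j) for j in range(1, n - k + 2)]
    return sp.bell(n, k, derivs)

n, k = 4, 2
lhs = B(f * h, n, k)
rhs = (h**k * B(f, n, k) + f**k * B(h, n, k)
       + sum(sp.binomial(k, m1) * sp.binomial(m1, k - m2)
             * sp.factorial(m1) * sp.factorial(m2)
             * f**(k - m1) * h**(k - m2)
             * sum(sp.binomial(n, l) * B(f, l, m1) * B(h, n - l, m2)
                   for l in range(1, n) if m1 <= l and m2 <= n - l)
             for m1 in range(1, k + 1)
             for m2 in range(1, k + 1)) / sp.factorial(k))
# evaluate the difference at a sample point; it should vanish numerically
err = (lhs - rhs).subs(x, sp.Rational(3, 10)).evalf()
assert abs(err) < 1e-9
```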
<p><strong>Step 2.4: Calculation of $B^{f\cdot (\ln \circ g)}_{n,k}$</strong></p>
<p>Now it's <strong>time to harvest</strong> the first time. We can now combine the expression with Bell polynomials of $f\cdot h$ and of $(\ln\circ g)$ to
calculate the Bell polynomials of $f\cdot (\ln\circ g)$. </p>
<p>We conclude:
\begin{align*}
B_{n,k}^{f\cdot (\ln \circ g)}&= (\ln \circ g)^k B_{n,k}^{f} + f^k B_{n,k}^{\ln \circ g}\\
&\qquad+\frac{1}{k!}\sum_{m_1=1}^k\sum_{m_2=1}^{k}\binom{k}{m_1}\binom{m_1}{k-m_2}m_{1}! m_{2}!f^{k-m_1} (\ln \circ g)^{k-m_2}\\
&\qquad\qquad\cdot\sum_{l=1}^{n-1}\binom{n}{l}B_{l,m_1}^fB_{n-l,m_2}^{\ln \circ g}\tag{37}
\end{align*}</p>
<p>According to (14) we know
\begin{align*}
B^{\ln\circ g}_{n,k}&=\sum_{j=k}^{n}\frac{(-1)^{j-k}}{g^j}\begin{bmatrix}j\\k\end{bmatrix}B^{g}_{n,j}
\end{align*}</p>
<p>Substituting this expression in (37) and summing over $k=1,\ldots,n$ gives:</p>
<p>\begin{align*}
\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln \circ g)}
&=\sum_{k=1}^{n}\left((\ln \circ g)^k B_{n,k}^{f} + f^k \sum_{j=k}^{n}\frac{(-1)^{j-k}}{g^j}\begin{bmatrix}j\\k\end{bmatrix}B^{g}_{n,j}\right)\\
&\qquad+\sum_{k=1}^{n}\sum_{m_1=1}^k\sum_{m_2=1}^{k}\frac{m_2!}{(k-m_1)!}\binom{m_1}{k-m_2}f^{k-m_1} (\ln \circ g)^{k-m_2}\\
&\qquad\qquad\cdot\sum_{l=1}^{n-1}\sum_{j=m_2}^{n-l}\frac{(-1)^{j-m_2}}{g^j}\begin{bmatrix}j\\m_2\end{bmatrix}\binom{n}{l}B_{l,m_1}^fB^{g}_{n-l,j}\tag{38}
\end{align*}</p>
</blockquote>
<hr>
<blockquote>
<p><strong>Summary of part 2:</strong> With expression (38) we have developed a formula for $\sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln \circ g)}$ and part 2 is finished. The last challenge is to show that OP's expression is equal to this one. This is the theme of part 3.</p>
</blockquote>
<hr>
<p><strong>Note:</strong> The body of this answer is limited to $30000$ characters. I'll provide part 3 in a follow-up answer.</p>
|
combinatorics | <p>I am going to give a presentation about the <em>indicator functions</em>, and I am looking for some interesting examples to include. The examples can be even an overkill solution since I am mainly interested in demonstrating the creative ways of using it.</p>
<p>I would be grateful if you share your examples. The diversity of answers is appreciated.</p>
<p>To give you an idea, here are my examples. Most of my examples are in probability and combinatorics so examples from other fields would be even better.</p>
<ol>
<li><p>Calculating the expected value of a random variable using linearity of expectations. Most famously the number of fixed points in a random permutation.</p>
</li>
<li><p>Showing how <span class="math-container">$|A \Delta B| = |A|+|B|-2|A \cap B|$</span> and <span class="math-container">$(A-B)^2 = A^2+B^2-2AB$</span> are related.</p>
</li>
<li><p>An overkill proof for <span class="math-container">$\sum \deg(v) = 2|E|$</span>.</p>
</li>
</ol>
| <p>Whether it's overkill is open to debate, but I feel that the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="noreferrer">inclusion-exclusion principle</a> is best seen through the prism of indicator functions.</p>
<p>Basically, the classical formula is just what you get numerically from the (clear) identity:
<span class="math-container">$$ 1 - 1_{\bigcup_{i=1}^n A_i} = 1_{\bigcap_{i=1}^n \overline A_i} = \prod_{i=1}^n (1-1_{A_i}) = \sum_{J \subseteq [\![ 1,n ]\!]} (-1)^{|J|} 1_{\bigcap_{j\in J} A_j}.$$</span></p>
| <p>Indicator functions are often very useful in conjunction with Fubini’s theorem.</p>
<p>Suppose you want to show:
<span class="math-container">$$\newcommand\dif{\mathop{}\!\mathrm{d}}
\int_Y \int_{X_y} f(x, y) \dif x \dif y = \int_X \int_{Y_x} f(x,y) \dif y \dif x$$</span>
where the two subsets <span class="math-container">$X_y \subseteq X$</span> and <span class="math-container">$Y_x \subseteq Y$</span> describe the same relation <span class="math-container">$x \in X_y \iff y \in Y_x$</span>.</p>
<p>Because the inner integral’s domain depends on the outer variable, you cannot apply Fubini right away to interchange the two integrals directly.</p>
<p>But you can do it if you use an indicator function to describe the set <span class="math-container">$$Z = \left\{ (x,y) \in X \times Y \mid x \in X_y \right\} = \left\{ (x,y) \in X \times Y \mid y \in Y_x \right\}.$$</span></p>
<p>Finally:
<span class="math-container">\begin{align*}
\int_Y \int_{X_y} f(x, y) \dif x \dif y & = \int_Y \int_X 1_Z(x,y) f(x,y) \dif x \dif y \\
& = \int_X \int_Y 1_Z(x,y) f(x,y) \dif y \dif x \\
& = \int_X \int_{Y_x} f(x,y) \dif y \dif x.
\end{align*}</span></p>
|
probability | <p>Suppose $X$ and $Y$ are iid random variables taking values in $[0,1]$, and let $\alpha > 0$. What is the maximum possible value of $\mathbb{E}|X-Y|^\alpha$? </p>
<p>I have already asked this question for $\alpha = 1$ <a href="https://math.stackexchange.com/questions/2149565/maximum-mean-absolute-difference-of-two-iid-random-variables">here</a>: one can show that $\mathbb{E}|X-Y| \leq 1/2$ by integrating directly, and using some clever calculations. Basically, one has the useful identity $|X-Y| = \max{X,Y} - \min{X,Y}$, which allows a direct calculation. There is an easier argument to show $\mathbb{E}|X - Y|^2 \leq 1/2$. In both cases, the maximum is attained when the distribution is Bernoulli 1/2, i.e. $\mathbb{P}(X = 0) = \mathbb{P}(X = 1) = 1/2$. I suspect that this solution achieves the maximum for all $\alpha$ (it is always 1/2), but I have no ideas about how to try and prove this. </p>
<p><strong>Edit 1:</strong> @Shalop points out an easy proof for $\alpha > 1$, using the case $\alpha = 1$. Since $|x-y|^\alpha \leq |x-y|$ when $\alpha > 1$ and $x,y \in [0,1]$, </p>
<p>$E|X-Y|^\alpha \leq E|X-Y| \leq 1/2$. </p>
<p>So it only remains to deal with the case when $\alpha \in (0,1)$.</p>
| <p>Throughout this answer, we will fix $\alpha \in (0, 1]$.</p>
<p>Let $\mathcal{M}$ denote the set of all finite signed Borel measures on $[0, 1]$ and $\mathcal{P} \subset \mathcal{M}$ denote the set of all Borel probability measure on $[0, 1]$. Also, define the pairing $\langle \cdot, \cdot \rangle$ on $\mathcal{M}$ by</p>
<p>$$ \forall \mu, \nu \in \mathcal{M}: \qquad \langle \mu, \nu\rangle = \int_{[0,1]^2} |x - y|^{\alpha} \, \mu(dx)\nu(dy). $$</p>
<p>We also write $I(\mu) = \langle \mu, \mu\rangle$. Then we prove the following claim.</p>
<blockquote>
<p><strong>Proposition.</strong> If $\mu \in \mathcal{P}$ satisfies $\langle \mu, \delta_{t} \rangle = \langle \mu, \delta_{s} \rangle$ for all $s, t \in [0, 1]$, then $$I(\mu) = \max\{ I(\nu) : \nu \in \mathcal{P}\}.$$</p>
</blockquote>
<p>We defer the proof of the lemma to the end and first rejoice its consequence.</p>
<blockquote>
<ul>
<li><p>When $\alpha = 1$, it is easy to see that the choice $\mu_1 = \frac{1}{2}(\delta_0 + \delta_1)$ works.</p></li>
<li><p>When $\alpha \in (0, 1)$, we can focus on $\mu_{\alpha}(dx) = f_{\alpha}(x) \, dx$ where $f_{\alpha}$ is given by</p></li>
</ul>
<p>$$ f_{\alpha}(x) = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})} \cdot \frac{1}{(x(1-x))^{\frac{1+\alpha}{2}}}, $$</p>
<p>Indeed, for $y \in [0, 1]$, apply the substitution $x = \cos^2(\theta/2)$ and $k = 2y-1$ to write</p>
<p>$$ \langle \mu_{\alpha}, \delta_y \rangle = \int_{0}^{1} |y - x|^{\alpha} f_{\alpha}(x) \, dx = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \left| \frac{\sin\theta - k}{\cos\theta} \right|^{\alpha} \, d\theta. $$</p>
<p>Then letting $\omega(t) = \operatorname{Leb}\left( \theta \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \ : \ \left| \frac{\sin\theta - k}{\cos\theta} \right| > t \right)$, we can check that this satisfies $\omega(t) = \pi - 2\arctan(t)$, which is independent of $k$ (and hence of $y$). Moreover,</p>
<p>$$ \langle \mu_{\alpha}, \delta_y \rangle = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})} \int_{0}^{\infty} \frac{2t^{\alpha}}{1+t^2} \, dt = \frac{\pi}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})\cos(\frac{\pi\alpha}{2})} $$</p>
<p>Integrating both sides over $\mu_{\alpha}(dy)$, we know that this is also the value of $I(\mu_{\alpha})$.</p>
</blockquote>
<p>So it follows that</p>
<p>\begin{align*}
&\max \{ \mathbb{E} [ |X - Y|^{\alpha}] : X, Y \text{ i.i.d. and } \mathbb{P}(X \in [0, 1]) = 1 \} \\
&\hspace{1.5em}
= \max_{\mu \in \mathcal{P}} I(\mu)
= I(\mu_{\alpha})
= \frac{\pi}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})\cos(\frac{\pi\alpha}{2})}.
\end{align*}</p>
<p>Notice that this also matches the numerical value of Kevin Costello as</p>
<p>$$ I(\mu_{1/2}) = \frac{\sqrt{2}\pi^{3/2}}{\Gamma\left(\frac{1}{4}\right)^2} \approx 0.59907011736779610372\cdots. $$</p>
<p>The following is the graph of $\alpha \mapsto I(\mu_{\alpha})$.</p>
<p>$\hspace{8em} $<a href="https://i.sstatic.net/YuTSF.png" rel="noreferrer"><img src="https://i.sstatic.net/YuTSF.png" alt="enter image description here"></a></p>
<hr>
<p><em>Proof of Proposition.</em> We first prove the following lemma.</p>
<blockquote>
<p><strong>Lemma.</strong> If $\mu \in \mathcal{M}$ satisfies $\mu([0,1]) = 0$, then we have $I(\mu) \leq 0$.</p>
</blockquote>
<p>Indeed, notice that there exists a constant $c > 0$ for which</p>
<p>$$ \forall x \in \mathbb{R}: \qquad |x|^{\alpha} = c\int_{0}^{\infty} \frac{1 - \cos (xt)}{t^{1+\alpha}} \, dt $$</p>
<p>holds. Indeed, this easily follows from the integrability of the integral and the substitution $|x|t \mapsto t$. So by the Tonelli's theorem, for any positive $\mu, \nu \in \mathcal{M}$,</p>
<p>\begin{align*}
\langle \mu, \nu \rangle
&= c\int_{0}^{\infty} \int_{[0,1]^2} \frac{1 - \cos ((x - y)t)}{t^{1+\alpha}} \, \mu(dx)\nu(dy)dt \\
&= c\int_{0}^{\infty} \frac{\hat{\mu}(0)\hat{\nu}(0) - \operatorname{Re}( \hat{\mu}(t)\overline{\hat{\nu}(t)} )}{t^{1+\alpha}} \, dt,
\end{align*}</p>
<p>where $\hat{\mu}(t) = \int_{[0,1]} e^{itx} \, \mu(dx)$ is the Fourier transform of $\mu$. In particular, this shows that the right-hand side is integrable. So by linearity this relation extends to all pairs of $\mu, \nu$ in $\mathcal{M}$. So, if $\mu \in \mathcal{M}$ satisfies $\mu([0,1]) = 0$ then $\hat{\mu}(0) = 0$ and thus</p>
<p>$$ I(\mu) = -c\int_{0}^{\infty} \frac{|\hat{\mu}(t)|^2}{t^{1+\alpha}} \, dt \leq 0, $$</p>
<p>completing the proof of Lemma. ////</p>
<p>Let us return to the original proof. Let $m$ denote the common values of $\langle \mu, \delta_t\rangle$ for $t \in [0, 1]$. Then for any $\nu \in \mathcal{P}$</p>
<p>$$ \langle \mu, \nu \rangle
= \int \left( \int_{[0,1]} |x - y|^{\alpha} \, \mu(dx) \right) \, \nu(dy)
= \int \langle \mu, \delta_y \rangle \, \nu(dy)
= m. $$</p>
<p>So it follows that</p>
<p>$$ \forall \nu \in \mathcal{P} : \qquad I(\nu)
= I(\mu) + 2\underbrace{\langle \mu, \nu - \mu \rangle}_{=m-m = 0} + \underbrace{I(\nu - \mu)}_{\leq 0} \leq I(\mu) $$</p>
<p>as desired.</p>
| <p>This isn't a full solution, but it's too long for a comment. </p>
<p>For fixed $0<\alpha<1$ we can get an approximate solution by considering the problem discretized to distributions that only take on values of the form $\frac{k}{n}$ for some reasonably large $n$. Then the problem becomes equivalent to
$$\max_x x^T A x$$
where $A$ is the $(n+1) \times (n+1)$ matrix whose $(i,j)$ entry is $\left(\frac{|i-j|}{n}\right)^{\alpha}$, and the maximum is taken over all non-negative vectors summing to $1$. </p>
<p>If we further assume that there is a maximum where all entries of $x$ are non-zero, Lagrange multipliers implies that the optimal $x$ in this case is a solution to
$$Ax=\lambda {\mathbb 1_{n+1}}$$
(where $1_{n+1}$ is the all ones vector), so we can just take $A^{-1} \mathbb{1_{n+1}}$ and rescale. </p>
<p>For $n=1000$ and $\alpha=\frac{1}{2}$, this gives a maximum of approximately $0.5990$, with a vector whose first few entries are $(0.07382, 0.02756, 0.01603, 0.01143)$.</p>
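<p>This computation is easy to reproduce. A NumPy sketch (assuming, as above, an interior maximiser so that the Lagrange-multiplier condition applies; a smaller grid than $n=1000$ is used for speed):</p>

```python
import numpy as np

n, alpha = 400, 0.5
i = np.arange(n + 1)
# A[i, j] = (|i - j| / n)^alpha, the discretized kernel
A = (np.abs(i[:, None] - i[None, :]) / n) ** alpha

w = np.linalg.solve(A, np.ones(n + 1))   # solve A x = lambda * 1, up to scaling
x = w / w.sum()                          # rescale to a probability vector
value = x @ A @ x
assert abs(value - 0.599) < 0.01         # close to the 0.5990 reported above
```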
<hr>
<p>If the optimal $x$ has a density $f(x)$ that's positive everywhere,
and we want to maximize
$\int_0^1 \int_0^1 f(x) f(y) |x-y|^{\alpha}$
the density "should" (by analogue to the above, which can probably be made rigorous) satisfy
$$\int_{0}^1 f(y) |x-y|^{\alpha} \, dy= \textrm{ constant independent of } x,$$
but I'm not familiar enough with integral transforms to know if there's a standard way of inverting this. </p>
|
logic | <p>I understand that naive set theory, whose axioms are extensionality and unrestricted comprehension, is inconsistent, due to paradoxes like Russell, Curry, Cantor, and Burali-Forti.</p>
<p>But these all seem to me like pathological, esoteric, ad-hoc examples, that really only matter in foundations, and most non-foundational and applied mathematics wouldn't go anywhere near touching them.</p>
<p>Am I wrong here? If we were to do non-foundational math over naive set theory and just ignore the paradoxes, what problems might we face? Yes, I know that we can technically prove <span class="math-container">$0=1$</span> because logic, but I'm looking for more interesting examples, particularly ones that could arise without having to specifically look for them.</p>
<blockquote>
<p><strong>Question:</strong> Notwithstanding technicalities like explosion, are there any "natural" examples of contradictions arising in non-foundational or applied math due to the paradoxes of naive set theory?</p>
<p>Has anyone ever arrived at a false statement in, say, algebra or number theory, using naive sets?</p>
</blockquote>
<p><em>edit</em>: I'd like to be clear that I'm playing devil's advocate. I'm of course aware that relying on an inconsistent theory is in general a bad idea, but of course not all flawed structures collapse immediately. How far could we go <em>in practice</em> before we ran into problems?</p>
<p><em>edit</em>: By "non-foundational" I basically mean anything outside of set theory or mathematical logic. If the question of a theory's consistency comes up at all (this thought experiment notwithstanding), then it's probably "foundational". But it's of course fuzzy.</p>
| <p>With a strict enough definition of "non-foundational mathematics" I think the answer is probably "no" (although I would be very interested in seeing potential examples.) However, this shouldn't make mathematicians working on such mathematics feel safe about using unrestricted comprehension. The reason is that it's not always clear <em>a priori</em> what mathematics will turn out to be "foundational".</p>
<p>Indeed, people may start working on some mathematics that seems non-foundational but then turns in a foundational direction. For example, Cantor's development of set theory was a natural consequence of his study of <a href="http://en.wikipedia.org/wiki/Set_of_uniqueness">sets of uniqueness</a> in harmonic analysis.</p>
<p>If someone working in a supposedly non-foundational branch of mathematics ended up with a contradiction by using unrestricted comprehension, then with the benefit of hindsight we could say that he or she must have been working in an area related to foundations after all.</p>
<p>It might seem like cheating to make such a declaration after the fact, but perhaps it is not: It seems likely that, from a novel use of unrestricted comprehension to obtain a contradiction, one could obtain a novel use of replacement to obtain a theorem that could not have been obtained without replacement (<em>i.e.</em> using only restricted comprehension). I say this because replacement is a natural intermediate step between restricted and unrestricted comprehension.</p>
<p>Mathematics that uses replacement in an essential way is often considered <em>ipso facto</em> to be foundational. So I think it is likely that mathematics that uses unrestricted comprehension an an essential way (to the extent that it can be salvaged) would be considered foundational as well.</p>
<p>(This answer doesn't address the question of how long, on average, it would take people using unrestricted comprehension in non-foundational-seeming areas of mathematics to run into problems. I think that question is very interesting but probably also very hard to answer.)</p>
| <p>The whole idea of using set theory as a foundational theory is that you want a theory that if you believe is consistent, the rest of mathematics is consistent.</p>
<p>Naive set theory is inconsistent. So you can't really continue: you cannot trust it to give you the rest of mathematics. And it is not important that you "don't seem to appeal to the paradoxes".</p>
<p>Axiomatic set theories, like $\sf ZFC$, come to make an effort to at least fight the problems that come up with naive set theory.</p>
<p>Why should you <em>really</em> care about the axioms of $\sf ZFC$? You shouldn't. You should care about the fact that $\sf ZFC$ is sufficient to develop basic model theory, and create the rest of mathematics inside its universe. And this makes set theory like the back-end architecture of your CPU. Do you really care that your computer is using an Alpha back-end or a SPARC back-end? No. You care that it is able to run Chicken Invaders, or YouTube, or $\LaTeX$.</p>
<p>Of course, if you are interested in mathematics, then you should learn at least a little bit how this processor works. Much like learning programming involves understanding the operating system, or the hardware design. So you should care because you want to know how your mathematics is being modeled, but in general you should care because naive set theory is provably bad for you.</p>
|
logic | <p>Though searching for previous questions returns thousands of results for the query <code>"for any" "for all"</code>, none specifically address the following query:</p>
<p>I'm reading a textbook in which one definition requires that some condition holds <em>for <strong>any</strong></em> <span class="math-container">$x,\ x' \in X,$</span> and right afterwards another definition requires that some other condition holds <em>for <strong>all</strong></em> <span class="math-container">$x,\ x' \in X.$</span></p>
<p>Is there a difference between 'for any' and 'for all'?</p>
<p>This seems to depend on the context: "For all $x \in X \ P(x)$" is the same as "For any $x∈X \ P(x)$". On the other hand, "If for any $x∈X \ P(x)$, then $Q$" means that the existence of at least one $x\in X$ with $P(x)$ implies $Q$, so $P(x)$ doesn't need to hold for all $x \in X$ to imply $Q$. </p>
| <p>"Any" is ambiguous and it depends on the context. It can refer to "there exists", "for all", or to a third case which I will talk about in the end.</p>
<p><a href="https://en.oxforddictionaries.com/definition/any" rel="nofollow noreferrer">https://en.oxforddictionaries.com/definition/any</a></p>
<p>Oxford Dictionary:</p>
<ol>
<li>[usually with negative or in questions] Used to refer to one or some of a thing or number of things, no matter how much or how many.<br>
[as determiner] ‘I don't have any choice’
<br><strong>which means there does not exist a choice</strong></li>
<li>Whichever of a specified class might be chosen.<br>
[as determiner] ‘these constellations are visible at any hour of the night’
<br><strong>which means for all hours</strong></li>
</ol>
<p>Also in Oracle database I remember a query to return the employees where employee.salary > any(10,20,30,40) which means the salary of the returned employee must be bigger than 10 or 20 or 30 or 40 which means "<strong>there exist" one salary</strong> in that tuple such that the employee.salary is bigger than it.</p>
<p>Same ambiguity is in math, since math did not come from nothing, rather it is a notation system for the language.</p>
<p><strong>However:</strong></p>
<p>Some times "any" is used for the meaning of "any" and not "exist" or "all". For example, in the definition of the little o asymptotic notation we have:</p>
<p><span class="math-container">$o(g(n))$</span> = { <span class="math-container">$f(n)$</span>: for any positive constant c>0, there exists a constant <span class="math-container">$n_0>0$</span> such that <span class="math-container">$0 \leq f(n) < c g(n)$</span> for all <span class="math-container">$n \geq n_0$</span> }</p>
<p>Here "any" means "any" which is two things "there exists" and "for all" how??</p>
<ol>
<li><p>If you take any as "for all c" then the meaning is wrong, because <span class="math-container">$n_0>0$</span> is attached to some choice of c, and for each c there may be a different <span class="math-container">$n_0>0$</span>.
And you cannot find a fixed <span class="math-container">$n_0$</span> that works for all c, because c can go arbitrarily close to zero, like c=0.000...001
</li>
<li><p>If you take any as "there exist c" then the meaning is wrong also because for some c the remaining may apply but for another c the remaining may not apply.<br/>
Example: let <span class="math-container">$f(n)=n$</span> and <span class="math-container">$g(n)=2n$</span>:<br/>
If <span class="math-container">$c=1$</span> then <span class="math-container">$n < 1 \times (2n)$</span> for <span class="math-container">$n \geq n0>0$</span><br/>
But if <span class="math-container">$c=0.1$</span> then <span class="math-container">$n > 0.1 \times (2n)$</span> for all <span class="math-container">$n\geq n0>0$</span></p>
</li>
</ol>
<p><strong>So</strong> here "any" means "any", which is "for all" but one at a time; so in the little o asymptotic notation "any" means: for all c>0, pick one at a time, and the remaining condition should be satisfied.</p>
<p><strong>Conclusion:</strong> Either do not use "any" in Math or explain to the reader what it means in your context.</p>
|
logic | <p><a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems">Gödel's first incompleteness theorem</a> states that "...For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system".</p>
<p>What does it mean that a statement is true if it's not provable?</p>
<p>What is the difference between true and provable?</p>
| <p>Consider this claim: <strong>John Smith will never be able to prove this statement is true.</strong></p>
<p>If the statement is false, then John Smith will be able to prove it's true. But clearly that can't be, since it's impossible to prove that a false statement is true. (Assuming John Smith is sensible.)</p>
<p>If it's true, there's no contradiction. It just means John Smith won't be able to prove it's true. So it's true, and John Smith won't be able to prove it's true. This is a limit on what John Smith can do. (So if John Smith is sensible, there are truths he cannot prove.)</p>
<p>What Goedel showed is that for any sensible formal axiom system, there will be formal versions of "this axiom system cannot prove this claim is true". It will be a statement expressible in that formal system but, obviously, not provable within that axiom system.</p>
| <p>Provable means that there is a formal proof using the axioms that you want to use.
The set of axioms (in this case axioms for arithmetic, i.e., natural numbers) is the "system" that your quote mentions. True statements are those that hold for the natural numbers (in this particular situation).</p>
<p>The point of the incompleteness theorems is that for every reasonable system of axioms for the natural numbers there will always be true statements that are unprovable. You can of course cheat and say that your axioms are simply all true statements about the natural numbers, but this is not a reasonable system since there is no algorithm that decides whether or not a given statement is one of your axioms or not.</p>
<p>As a side note, your quote is essentially the first incompleteness theorem, in the form in which it easily follows from the second.</p>
<p>In general (not speaking about natural numbers only) given a structure, i.e., a set together with relations on the set, constants in the set, and operations on the set, there is a natural way to define when a (first order) sentence using the corresponding symbols for your relations, constants, and operations holds in the structure. ($\forall x\exists y(x\cdot y=e)$ is a sentence using symbols corresponding to constants and operations that you find in a group, and the sentence says that every element has an inverse.)</p>
<p>So this defines what is true in a structure.
In order to prove a statement (sentence), you need a set of axioms (like the axioms of group theory) and a notion of formal proof from these axioms. I won't elaborate here, but the important connection between true statements and provable statements is the completeness theorem:</p>
<p>A sentence is provable from a set of axioms iff every structure that satisfies the axioms also satisfies the sentence. </p>
<p>This theorem tells you what the deal with the incompleteness theorems is:
We consider true statements about the natural numbers, but a statement is provable from a set of axioms only if it holds for all structures satisfying the axioms. And there will be structures that are not isomorphic to the natural numbers.</p>
|
probability | <p>If nine coins are tossed, what is the probability that the number of heads is even?</p>
<p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p>
<p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p>
<p><span class="math-container">$n = 9, k = 0$</span></p>
<p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p>
<p><span class="math-container">$n = 9, k = 2$</span></p>
<p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p>
<p><span class="math-container">$n = 9, k = 4$</span>
<span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p>
<p><span class="math-container">$n = 9, k = 6$</span></p>
<p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p>
<p><span class="math-container">$n = 9, k = 8$</span></p>
<p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p>
<p>Add all of these up: </p>
<p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
| <p>The probability is <span class="math-container">$\frac{1}{2}$</span> because the last flip determines it.</p>
| <p>If there are an even number of heads then there must be an odd number of tails. But heads and tails are symmetrical, so the probability must be <span class="math-container">$1/2$</span>.</p>
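<p>Both symmetry arguments can be cross-checked against the asker's own binomial sum: adding the even-<span class="math-container">$k$</span> terms for <span class="math-container">$n=9$</span> gives exactly <span class="math-container">$1/2$</span>, so the <span class="math-container">$0.64$</span> in the question is an arithmetic slip. A quick sketch in Python (mine, not part of either answer):</p>

```python
from math import comb

# P(even number of heads in 9 fair tosses): sum the k = 0, 2, 4, 6, 8 terms.
# The numerator is C(9,0)+C(9,2)+C(9,4)+C(9,6)+C(9,8) = 1+36+126+84+9 = 256.
p = sum(comb(9, k) for k in range(0, 10, 2)) / 2**9
assert p == 0.5  # exactly 256/512, confirming the symmetry arguments
```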
|
logic | <p>We were trying to prove that if $3p^2=q^2$ for nonnegative integers $p$ and $q$, then $3$ divides both $p$ and $q$. I finished writing the solution (using Euclid's lemma) when a student asked me </p>
<blockquote>
<p>"How can you assume $3p^2=q^2$ when that implies $\sqrt 3$ is rational which we know is false?"</p>
</blockquote>
<p>I told him that question at hand is used as a lemma in proving that $\sqrt 3$ is irrational. But he gave another argument using fundamental theorem of arithmetic to independently prove that $\sqrt 3$ is irrational. So I could not convince him that one can give a chain of arguments starting from a hypothesis without actually knowing the truth value of the hypothesis itself. Later I came back and tried to think more about this a bit fundamentally to understand what it means to prove the implication $"A \to B"$ without worrying about truth of $A$ and how different is it from proving $B$ when we know $A$ to be true (or false, as in this case, using some other method). But I am also confused. Another student made this remark, "you are giving me one false statement, and asking me to prove another false statement, I don't understand". Can someone please resolve this?</p>
| <p>Proof by contradiction is simple. It is not so much that if a false statement is true, then we arrive at a contradiction. Rather, a better way to think of it is, "if a given statement is true--a statement whose truth or falsity is <strong>not yet established</strong>--then we arrive at a contradiction; thus the original statement cannot be true."</p>
<p>Take your proof of the irrationality of $\sqrt{3}$ as an example. At the outset, we have not yet established whether or not $\sqrt{3}$ is irrational (or rational). And we don't know until the proof is complete. But you certainly can reason about it by first supposing that <em>if</em> $\sqrt{3}$ were rational, then there exist two positive integers $p, q$ such that $q$ does not divide $p$, for which $p/q = \sqrt{3}$. That follows from the supposition of rationality. Then by continuing the logical chain, you arrive at a contradiction. So if all the consequential steps from the original supposition are valid, but the conclusion is inconsistent, then the original supposition could not be true. Note that in this chain of reasoning, at no point do we actually claim that the original supposition is true, because as we have taken care to mention, we do not know if it is or not until the proof is complete. We are merely saying that <em>if</em> it were the case, we would arrive at a contradiction.</p>
<p>Here is a less trivial example. Some conjectures remain open in the sense that we do not know what the answer is. Consider the Collatz recursion $$x_{n+1} = \begin{cases} 3 x_n + 1, & x_n \text{ is odd} \\ x_n / 2, & x_n \text{ is even}. \end{cases}$$ The conjecture is that for every positive integer $m$, the sequence $\{x_n\}_{n=1}^\infty$ with $x_0 = m$ contains $1$ as an element in the sequence.</p>
<p>If we were to attempt to prove this conjecture is <strong>TRUE</strong> by contradiction, the supposition we would presumably start with is that there exists a positive integer $m^*$ such that if $x_0 = m^*$, the sequence $\{x_n\}_{n=1}^\infty$ <strong>never</strong> contains $1$, and then by using some as-yet-unknown logical deduction from this supposition, if we can arrive at a contradiction, we then have shown that the conjecture is true.</p>
<p>But note what I said above: this is an open question! Nobody knows if it is true or false. So you cannot claim that my logic is invalid because I have <em>a priori</em> assumed something that I already know not to be the case!</p>
<p>Now, if I were to desire to <strong>disprove</strong> the conjecture, it would suffice to find just such an instance of $m^*$ as I have described above. But an extensive computer search has turned up no such counterexample. This leads many to believe the conjecture is true, but no constructive method to substantiate that belief. Mathematics--and number theory in particular--contains many examples of claims that appear true but have their smallest known counterexamples for very large numbers.</p>
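<p>A minimal sketch (my own, not from the answer) of what such a computer search for the counterexample <span class="math-container">$m^*$</span> looks like over small starting values; the step budget is an arbitrary cutoff, so a <code>False</code> result would be inconclusive rather than a disproof:</p>

```python
def collatz_reaches_one(m: int, max_steps: int = 10_000) -> bool:
    """Follow the Collatz recursion from x_0 = m; True if 1 ever appears."""
    x = m
    for _ in range(max_steps):
        if x == 1:
            return True
        x = 3 * x + 1 if x % 2 else x // 2
    return False  # inconclusive within the step budget, not a disproof

# No counterexample m* among small starting values:
assert all(collatz_reaches_one(m) for m in range(1, 10_000))
```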
<p>So, there is my "informal" explanation of proof by contradiction. The essence is that we don't presume to know the truth or falsity of a given claim or statement at the outset of the proof, so we cannot be accused of assuming that which is false. The falsity is established once the contradiction is shown.</p>
| <p><strong>Step 0.</strong> Use truth tables to convince him that $A \rightarrow \bot$ is equivalent to $\neg A$. Deduce that to prove $\neg P$, we can assume $P$ and attempt to derive $\bot$, since this allows us to infer $P \rightarrow \bot$, which is the same as $\neg P$.</p>
<p><strong>Step 1.</strong> Use truth tables to convince him that $A \rightarrow B$ is equivalent to $\neg(A \wedge \neg B).$ Conclude that to prove $A \rightarrow B$, we may assume $A \wedge \neg B$ and attempt to deduce $\bot$.</p>
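<p>Both truth-table checks can be mechanized; a small sketch (mine, not the answerer's), treating <span class="math-container">$\bot$</span> as the constant <code>False</code>:</p>

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b  # the truth-functional conditional

# Step 1: A -> B is equivalent to not(A and not B) on every row.
for a, b in product((False, True), repeat=2):
    assert implies(a, b) == (not (a and not b))

# Step 0: A -> False (i.e. A -> bottom) is equivalent to not A.
for a in (False, True):
    assert implies(a, False) == (not a)
```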
<hr>
<p><strong>Edit.</strong> My original answer (above) focuses on the logic of proof by contradiction, but there is also a pedagogical issue here that deserves to be addressed. The following material is taken from the user Keen and from Steve Jessop's excellent answer.</p>
<blockquote>
<p>Unfortunately, <em>assume</em> is one of those words where the mathematical
definition differs from its popular English definition. In English,
<em>assume S</em> means <em>believe S</em>. In mathematics, it means only <em>imagine if S were true</em>. So when we say, "assume $3p^2=q^2$", we are not asserting that there exists an ordered pair $(p,q)$ with this property. Rather, we are considering the hypothetical properties that such a pair would have, if it did exist.</p>
</blockquote>
|
geometry | <p>Having a bit of a problem calculating the volume of a take-away box:</p>
<p><a href="https://i.sstatic.net/acgR1.jpg" rel="noreferrer"><img src="https://i.sstatic.net/acgR1.jpg" alt="enter image description here"></a></p>
<p>I originally wanted to use integration to measure it by rotating around the x-axis, but realised that when folded the top becomes a square, and the whole thing becomes rather irregular. Since it differs in circumference I won't be able to measure it like I planned.</p>
<p>Is there any method or formula that can be used to measure a shape like this, or do I just have to approximate a cylinder and approximate a box and add those two together? </p>
| <p>A different approximation could be to take the cross-section at each height not as a linear interpolation between top and bottom surface, but as squares with rounded corners. This fits the photograph more closely, as the linear interpolation approach has a discontinuity in the radius of curvature at the bottom: The point that at the top is an edge of the square is modelled a crease along the entire side, while in the photograph, no such crease can be seen.</p>
<p><img src="https://i.sstatic.net/oyoHZ.png" alt="The cross-section of the model, having side length a, corner radius r and straight side length b"></p>
<p>If we take the cross-section to have the above shape of a rounded square, we can calculate its area by using the formulas for circles and rectangles:</p>
<p><span class="math-container">$$A = \pi \cdot r^2 + 2 \cdot a \cdot b - b^2$$</span></p>
<p>Using <span class="math-container">$b=a-2\cdot r$</span>, we can eliminate that variable:</p>
<p><span class="math-container">$$A = a^2 + (\pi -4) \cdot r^2$$</span></p>
<p>Now we assume that those variables vary linearly between top and bottom: <span class="math-container">$r$</span> goes from <span class="math-container">$r_0$</span> at the bottom to <span class="math-container">$0$</span> at the top, and <span class="math-container">$a$</span> goes from <span class="math-container">$2r_0$</span> at the bottom to <span class="math-container">$a_0$</span> at the top, giving us:</p>
<p><span class="math-container">$$r(h) = \left(1-\frac{h}{h_0}\right)\cdot r_0$$</span></p>
<p><span class="math-container">$$a(h) = \left(1-\frac{h}{h_0}\right)\cdot 2r_0 + \frac{h}{h_0} \cdot a_0$$</span></p>
<p>and thus:</p>
<p><span class="math-container">$$A(h) = \left(1-\frac{h}{h_0}\right)^2 \cdot \pi \cdot r_0^2 + \left(1-\frac{h}{h_0}\right) \cdot \frac{h}{h_0} \cdot 4 \cdot r_0 \cdot a_0 + \left(\frac{h}{h_0}\right)^2 \cdot a_0^2$$</span></p>
<p>or, better ordered for integration:</p>
<p><span class="math-container">$$A(h) = \pi \cdot r_0^2 + \frac{h}{h_0} \cdot \left( 4\cdot r_0 \cdot a_0 - 2 \cdot \pi \cdot r_0^2 \right) + \left(\frac{h}{h_0}\right)^2 \cdot \left( a_0^2 - 4\cdot r_0 \cdot a_0 + \pi \cdot r_0^2\right) $$</span></p>
<p>Which gives us then:</p>
<p><span class="math-container">$$ V = \int_0^{h_0} A(h) \mathrm{d}h= h_0 \cdot \pi \cdot r_0^2 + \frac{1}{2} \cdot \frac{h_0^2}{h_0} \cdot \left( 4\cdot r_0 \cdot a_0 - 2 \cdot \pi \cdot r_0^2 \right) + \frac{1}{3}\cdot\frac{h_0^3}{h_0^2} \cdot \left( a_0^2 - 4\cdot r_0 \cdot a_0 + \pi \cdot r_0^2\right)$$</span></p>
<p><span class="math-container">$$=h_0\cdot\left(\frac{1}{3}\pi r_0^2 + \frac{2}{3} r_0 a_0 + \frac{1}{3} a_0^2\right)$$</span></p>
<p>And here is a quick render of what this looks like in 3d:</p>
<p><a href="https://i.sstatic.net/Pv5HJ.png" rel="noreferrer"><img src="https://i.sstatic.net/Pv5HJ.png" alt="e3d render of box model as given above"></a></p>
<p>This result can be obtained classically without any integrals as well, using only the formulas for the volume of conic solid and for cuboids. To do this, we split the solid into nine parts:</p>
<p><a href="https://i.sstatic.net/we9Ah.png" rel="noreferrer"><img src="https://i.sstatic.net/we9Ah.png" alt="The takeaway box split into coloured parts"></a></p>
<p>The central, yellow part is simply a square pyramid, and has a volume of <span class="math-container">$\frac{1}{3}\cdot h_0\cdot a_0^2$</span>. The four magenta pieces are oblique cones with a quarter-circle as base. Again using the formula for conic solids, each has volume <span class="math-container">$\frac{1}{12}\cdot h_0\cdot\pi\cdot r_0^2$</span></p>
<p>And the four cyan pieces are irregularly shaped tetrahedrons, but we can determine their volume by adding a few pieces to see how they fit into a cuboid of the measurements <span class="math-container">$a_0\times r_0 \times h_0$</span>:</p>
<p><a href="https://i.sstatic.net/uo6Gw.png" rel="noreferrer"><img src="https://i.sstatic.net/uo6Gw.png" alt="How adding some pieces to the cyan tetrahedron gives a cuboid"></a></p>
<p>In the more general case of <span class="math-container">$\frac{a_0}{2} \neq r_0$</span>, there will be a fourth green piece needed (which would go in front and block our view of everything). However, the green pieces together form a prism of height <span class="math-container">$a_0$</span> and a top surface area of <span class="math-container">$\frac{1}{2} \cdot r_0 \cdot h_0$</span>, and the two blue-grey pieces are two pyramids with height <span class="math-container">$h_0$</span> and rectangular bases of size <span class="math-container">$\frac{1}{2} a_0 \times r_0$</span>. Therefore, the volume of the tetrahedron is:</p>
<p><span class="math-container">$$a_0 \cdot h_0 \cdot r_0 - a_0 \cdot \frac{1}{2} \cdot r_0 \cdot h_0 - 2 \cdot \frac{1}{3} \cdot h_0 \cdot \frac{1}{2} a_0 \cdot r_0 = \frac{1}{6} h_0 \cdot r_0 \cdot a_0$$</span></p>
<p>Putting it all together, we get the volume as:</p>
<p><span class="math-container">$$V=\frac{1}{3}\cdot h_0\cdot a_0^2 + 4\cdot \frac{1}{12}\cdot h_0\cdot\pi\cdot r_0^2 + 4 \cdot \frac{1}{6} h_0 \cdot r_0 \cdot a_0$$</span>
<span class="math-container">$$=h_0\cdot\left(\frac{1}{3}\pi r_0^2 + \frac{2}{3} r_0 a_0 + \frac{1}{3} a_0^2\right)$$</span></p>
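<p>As a sanity check (my own addition, with made-up measurements), numerically integrating the cross-sectional area <span class="math-container">$A(h)$</span> reproduces the closed form:</p>

```python
from math import pi

# Hypothetical measurements (my choice): bottom corner radius r0,
# top side length a0, height h0.
r0, a0, h0 = 3.0, 10.0, 8.0

def A(h):
    """Cross-sectional area of the rounded-square model at height h."""
    t = h / h0
    return (1 - t)**2 * pi * r0**2 + (1 - t) * t * 4 * r0 * a0 + t**2 * a0**2

# Midpoint-rule integration of A over [0, h0]
n = 10_000
V_numeric = sum(A((i + 0.5) * h0 / n) for i in range(n)) * (h0 / n)

# Closed form: V = h0 * (pi*r0^2 + 2*r0*a0 + a0^2) / 3
V_closed = h0 * (pi * r0**2 + 2 * r0 * a0 + a0**2) / 3
assert abs(V_numeric - V_closed) < 1e-6 * V_closed
```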
<p>A more smooth model along the same lines would be to interpolate <a href="https://en.wikipedia.org/wiki/Superellipse" rel="noreferrer">superellipses</a> between the bottom and top, which similarly would give a creaseless change in the radius of curvature beneath the corners. However, superellipse areas have a formula involving the gamma function, and are therefore not easy to integrate again to get a volume.</p>
| <p>As the shape of the solid is not clearly defined, I'll make the simplest assumption: lateral surface is made of lines, connecting every point <span class="math-container">$P$</span> of square base to the point <span class="math-container">$P'$</span> of circular base with the same longitude <span class="math-container">$\theta$</span> (see figure below).</p>
<p>In that case, if <span class="math-container">$r$</span> is the radius of the circular base, <span class="math-container">$2a$</span> the side of the square base, <span class="math-container">$h$</span> the distance between bases,
a section of the solid at height <span class="math-container">$x$</span> (red line in the figure) is formed by points <span class="math-container">$M$</span> having a radial distance <span class="math-container">$OM=\rho(\theta)$</span> from the axis of the solid given by:
<span class="math-container">$$
\rho(\theta)={a\over\cos\theta}{x\over h}+r\left(1-{x\over h}\right),
\quad\text{for}\quad -{\pi\over4}\le\theta\le{\pi\over4}.
$$</span>
A quarter of the solid has then a volume given by:
<span class="math-container">$$
{V\over4}=\int_0^h\int_{-\pi/4}^{\pi/4}\int_0^{\rho(\theta)}s\,ds\,d\theta\,dx=
\frac{h}{12} \left(4 a^2+\pi r^2 + 2ar \ln{\sqrt2+1\over\sqrt2-1}\right).
$$</span></p>
<p><a href="https://i.sstatic.net/TPky8.png" rel="noreferrer"><img src="https://i.sstatic.net/TPky8.png" alt="enter image description here"></a></p>
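<p>The closed form can be sanity-checked by crude numerical integration of <span class="math-container">$\rho(\theta)^2/2$</span> over the quarter wedge; a sketch of my own, with made-up measurements:</p>

```python
from math import pi, cos, log, sqrt

# Hypothetical measurements (my choice): half-side a of the square base,
# radius r of the circular base, height h.
a, r, h = 5.0, 3.0, 8.0

def rho(theta, x):
    """Radial distance of the lateral surface at longitude theta, height x."""
    return (a / cos(theta)) * (x / h) + r * (1 - x / h)

# Midpoint rule for V/4 = integral of rho^2/2 over
# x in [0, h], theta in [-pi/4, pi/4]
n = 600
dx, dth = h / n, (pi / 2) / n
quarter = sum(
    rho(-pi / 4 + (j + 0.5) * dth, (i + 0.5) * dx) ** 2 / 2 * dth * dx
    for i in range(n) for j in range(n)
)

closed = (h / 12) * (4 * a**2 + pi * r**2
                     + 2 * a * r * log((sqrt(2) + 1) / (sqrt(2) - 1)))
assert abs(quarter - closed) < 1e-3 * closed
```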
|
differentiation | <p>Are the symbols $d$ and $\delta$ equivalent in expressions like $dy/dx$? Or do they mean something different?</p>
<p>Thanks</p>
| <p>As Giuseppe Negro said in a comment, $\delta$ is never used in mathematics in $$\frac{dy}{dx}.$$</p>
<p>(I am a physics ignoramus, so I do not know whether it is used in that context in physics, or what it might mean if it is.)</p>
<p>You do sometimes see $$\frac{\partial y}{\partial x}$$</p>
<p>which means that $y$ is a function of several variables, including $x$, and you are taking the <a href="http://enwp.org/partial_derivative"><em>partial</em> derivative</a> of $y$ with respect to $x$. This is a slightly different meaning than just $\frac{dy}{dx}$. For example, suppose that $f(x,y)$ is a function of both $x$ and $y$, and that each of $x$ and $y$ can in turn be expressed as functions of a third variable, $t$. Then one can write:</p>
<p>$$\frac{df}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} +
\frac{\partial f}{\partial y}\frac{dy}{dt}$$</p>
<p>The $\partial$ symbol is not a Greek delta ($\delta$), but a variant on the Latin letter 'd'. In $\TeX$, you get it by writing <code>\partial</code>.</p>
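<p>The multivariable chain rule above can be verified numerically; a quick sketch with an example function of my own choosing, <span class="math-container">$f(x,y)=x^2y$</span> with <span class="math-container">$x=\cos t$</span>, <span class="math-container">$y=\sin t$</span>:</p>

```python
from math import cos, sin

def f(x, y):
    return x**2 * y  # example function of two variables

def df_dt_chain(t):
    """df/dt via the chain rule: f_x * dx/dt + f_y * dy/dt."""
    x, y = cos(t), sin(t)
    fx, fy = 2 * x * y, x**2        # partial derivatives of f
    return fx * (-sin(t)) + fy * cos(t)

def df_dt_numeric(t, eps=1e-6):
    """df/dt via a central finite difference."""
    return (f(cos(t + eps), sin(t + eps))
            - f(cos(t - eps), sin(t - eps))) / (2 * eps)

t = 0.7
assert abs(df_dt_chain(t) - df_dt_numeric(t)) < 1e-8
```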
| <p>I am so excited that I can help! I just learned about this in Thermodynamics (pg 95 of Fundamentals of Thermodynamics, Borgnakke & Sonntag). ' "d" stands for the exact differential (as often used in mathematics); where the change in volume depends only on the initial and final states. "δ" refers to an inexact differential (which is used in physics when calculating things like work), where the quasi-equilibrium process between the two given states depends on the path followed. The differentials of path functions are inexact differentials and designated by "δ." '</p>
|
logic | <p>I am trying to understand what mathematics is really built up of. I thought mathematical logic was the foundation of everything. But from reading a book in mathematical logic, they use "="(equals-sign), functions and relations.</p>
<p>Now is the "=" taken as undefined? I have seen it been defined in terms of the identity relation.</p>
<p>But in order to talk about functions and relations you need set theory.
However, <a href="https://en.wikipedia.org/wiki/Set_theory">set theory</a> seems to be a part of mathematical logic.</p>
<p>Does this mean that (naive) set theory comes before sentential and predicate logic? Is (naive)set-theory at the absolute bottom, where we can define relations and functions and the eqality relation. And then comes sentential logic, and then predicate logic?</p>
<p>I am a little confused because when I took an introductory course, we had a little logic before set-theory. But now I see in another book on introduction to proofs that set-theory is in a chapter before logic. <strong>So what is at the bottom/start of mathematics, logic or set theory?, or is it circular at the bottom?</strong></p>
<p>Can this be how it is at the bottom?</p>
<p>naive set-theory $\rightarrow$ sentential logic $\rightarrow $ predicate logic $\rightarrow$ axiomatic set-theory(ZFC) $\rightarrow$ mathematics</p>
<p>(But the problem with this explanation is that it seems that some naive-set theory proofs use logic...)</p>
<p>(The arrows are of course not "logical" arrows.)</p>
<p><strong>simple explanation of the problem:</strong></p>
<p><strong>a book on logic uses at the start</strong>: functions, relations, sets, ordered pairs, "="</p>
<p><strong>a book on set theory uses at the start:</strong> logical deductions like this: "$B \subseteq A$", means every element in B is in A, so if $C \subseteq B, B \subseteq A$, a proof can be "since every element in C is in B, and every element in B is in A, every element of C is in A: $C \subseteq A$". But this is first order logic? ($((c \rightarrow b) \wedge (b \rightarrow a))\rightarrow (c\rightarrow a)$).</p>
<p><strong>Hence, both started from each other?</strong></p>
| <p>Most set theories, such as ZFC, require an underlying knowledge of first-order logic formulas (as strings of symbols). This means that they require acceptance of facts of string manipulations (which is essentially equivalent to accepting arithmetic on natural numbers!) First-order logic does not require set theory, but if you want to prove something <strong>about</strong> first-order logic, you need some stronger framework, often called a meta theory/system. Set theory is one such stronger framework, but it is not the only possible one. One could also use a higher-order logic, or some form of type theory, both of which need not have anything to do with sets.</p>
<p>The circularity comes only if you say that you can <strong>justify</strong> the use of first-order logic or set theory or whatever other formal system by proving certain properties about them, because in most cases you would be using a stronger meta system to prove such meta theorems, which <strong>begs the question</strong>. However, if you use a <strong>weaker</strong> meta system to prove some meta theorems about stronger systems, then you might consider that justification more reasonable, and this is indeed done in the field called Reverse Mathematics.</p>
<p>Consistency of a formal system has always been the worry. If a formal system is inconsistent, then anything can be proven in it and so it becomes useless. One might hope that we can use a weaker system to prove that a stronger system is consistent, so that if we are convinced of the consistency of the weaker system, we can be convinced of the consistency of the stronger one. However, as Godel's incompleteness theorems show, this is impossible if we have arithmetic on the naturals.</p>
<p>So the issue dives straight into philosophy, because any proof in any formal system will already be a finite sequence of symbols from a finite alphabet of size at least two, so simply <strong>talking about</strong> a proof requires understanding finite sequences, which (almost) requires natural numbers to model. This means that any meta system powerful enough to talk about proofs and 'useful' enough for us to prove meta theorems in it (If you are a Platonist, you could have a formal system that simply has all truths as axioms. It is completely useless.) will be able to do something equivalent to arithmetic on the naturals and hence suffer from incompleteness.</p>
<p>There are two main parts to the 'circularity' in Mathematics (which is in fact a sociohistorical construct). The first is the understanding of logic, including the conditional and equality. If you do not understand what "if" means, no one can explain it to you because any purported explanation will be circular. Likewise for "same". (There are many types of equality that philosophy talks about.) The second is the understanding of the arithmetic on the natural numbers including induction. This boils down to the understanding of "repeat". If you do not know the meaning of "repeat" or "again" or other forms, no explanation can pin it down.</p>
<p>Now there arises the interesting question of how we could learn these basic undefinable concepts in the first place. We do so because we have an innate ability to recognize similarity in function. When people use words in some ways consistently, we can (unconsciously) learn the functions of those words by seeing how they are used and abstracting out the similarities in the contexts, word order, grammatical structure and so on. So we learn the meaning of "same" and things like that automatically.</p>
<p>I want to add a bit about the term "mathematics" itself. What we today call "mathematics" is a product of not just our observations of the world we live in, but also historical and social factors. If the world were different, we will not develop the same mathematics. But in the world we do live in, we cannot avoid the fact that there is no non-circular way to explain some fundamental aspects of the mathematics that we have developed, including equality and repetition and conditionals as I mentioned above, even though these are based on the real world. We can only explain them to another person via a shared experiential understanding of the real world.</p>
| <p>What you are butting your head against here IMO is the fact that you need a meta-language at the beginning. Essentially at some point you have to agree with other people what your axioms and methods of derivation are and these concepts cannot be intrinsic to your model. </p>
<p>Usually I think we take axioms in propositional logic as understood, with the idea that they apply to purely abstract notions of sentences and symbols. You might sometimes see proofs of the basic axioms such as Modus Ponens in terms of a meta-language, i.e. not inside the system of logic but rather outside of it.</p>
<p>There is a lot of philosophical fodder at this level since really you need some sort of understanding between different people (real language perhaps or possibly just shared brain structures which allow for some sort of inherent meta-deduction) to communicate the basic axioms.</p>
<p>There is some extra confusion in the way these subjects are usually taught since propositional logic will often be explained in terms of, for example, truth tables, which seem to already require having some methods for modeling in place. The actual fact IMO is that at the bottom there is a shared turtle of interhuman understanding which allows you to grasp what the axioms you define are supposed to mean and how to operate with them.</p>
<p>Anyhow that's my take on the matter.</p>
|
logic | <p>The Continuum Hypothesis says that there is no set with cardinality between that of the reals and the natural numbers. Apparently, the Continuum Hypothesis can't be proved or disproved using the standard axioms of set theory.</p>
<p>In order to disprove it, one would only have to construct one counterexample of a set with cardinality between the naturals and the reals. It was proven that the CH can't be disproven. Equivalently, it was proven that one cannot construct a counterexample for the CH. Doesn't this prove it?</p>
<p>Of course, the issue is that it was also proven that it can't be proved. I don't know the details of this unprovability proof, but how can it avoid a contradiction? I understand the idea of something being independent of the axioms, I just don't see how if there is provably no counterexample the hypothesis isn't immediately true, since it basically just says a counterexample doesn't exist.</p>
<p>I'm sure I'm making some horrible logical error here, but I'm not sure what it is.</p>
<p>So my question is this: what is the flaw in my argument? Is it a logical error, or a gross misunderstanding of the unprovability proof in question? Or something else entirely?</p>
| <p>Here's an example axiomatic system:</p>
<ol>
<li>There exist exactly three objects $A, B, C$.</li>
<li>Each of these objects is either a banana, a strawberry or an orange.</li>
<li>There exists at least one strawberry.</li>
</ol>
<p>Let's name the system $X$.</p>
<p><strong>Vincent's Continuum Hypothesis (VCH)</strong>: Every object is either a banana or a strawberry (i.e., there are no oranges).</p>
<p>Now, to disprove this in $X$, you would have to show that one of $A, B, C$ is an orange ("construct a counterexample"). But this does not follow from $X$, because the following model is consistent with $X$: A and B are bananas, C is a strawberry.</p>
<p>On the other hand, VCH does not follow from $X$ either, because the following model is consistent with $X$: A is a banana, B is a strawberry, C is an orange.</p>
<p>As you can see, there is no contradiction, because you have to take into account different models of the axiomatic system.</p>
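<p>The toy system is small enough to make this executable; a sketch of my own that enumerates every model of system $X$ and confirms that VCH holds in some models and fails in others, i.e. $X$ neither proves nor refutes VCH:</p>

```python
from itertools import product

FRUITS = ("banana", "strawberry", "orange")

# A model assigns a fruit to each of the three objects A, B, C.
# Axioms 1 and 2 are built into the enumeration; axiom 3 is the filter.
models_of_X = [m for m in product(FRUITS, repeat=3) if "strawberry" in m]

def vch(model):
    """Vincent's Continuum Hypothesis: the model contains no oranges."""
    return "orange" not in model

# VCH is independent of X: true in some models, false in others.
assert any(vch(m) for m in models_of_X)
assert any(not vch(m) for m in models_of_X)
```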
| <p>I think the basic problem is in your statement that "In order to disprove it, one would only have to construct one counterexample of a set with cardinality between the naturals and the reals." Actually, to disprove CH by this strategy, one would have to produce a counterexample <strong>and prove</strong> that it actually has cardinality between those of $\mathbb N$ and $\mathbb R$. </p>
<p>So, from the fact that CH can't be disproved in ZFC, you can't infer that there is no counterexample but only that no set can be proved in ZFC to be a counterexample.</p>
|
logic | <p>I've seen both symbols used to mean "therefore" or logical implication. It seems like $\therefore$ is more frequently used when reaching the conclusion of an argument, while $\implies$ is for intermediate claims that imply each other. Is there any agreed upon way of using these symbols, or are they more or less interchangeable?</p>
| <blockquote>
<p>"It seems like <span class="math-container">$\therefore$</span> is more frequently used when reaching the conclusion of an argument, while <span class="math-container">$\implies$</span> (alternatively <span class="math-container">$\rightarrow$</span>) is for intermediate claims that imply each other."</p>
</blockquote>
<p>Your supposition is largely correct; my only concern is your description of <span class="math-container">$\implies$</span> being used to denote intermediate claims (in a proof or an argument, for example) that <em><strong>imply each other.</strong></em> The <span class="math-container">$\implies$</span> denotation, as in <span class="math-container">$p \implies q$</span>, merely conveys that the preceding claim (<span class="math-container">$p$</span>, if true) implies the subsequent claim <span class="math-container">$q$</span>; i.e., it does not denote a bi-direction implication <span class="math-container">$\iff$</span> which reads "if and only if".</p>
<p>'<span class="math-container">$\implies$</span>' or '<span class="math-container">$\rightarrow$</span>' is often used in a "modus ponens" style (short in scope) argument: If <span class="math-container">$p\implies q$</span>, and if it's the case that <span class="math-container">$p$</span>, then it follows that <span class="math-container">$q$</span>.</p>
<p>Typically, as you note, <span class="math-container">$\therefore$</span> helps to signify the conclusion of an argument: given what we know (or are assuming as given) to be true and given the intermediate implications which follow, we conclude that...</p>
<p>So, put briefly, <span class="math-container">$\implies$</span> ("which implies that") is typically shorter in scope, usually intended to link, by implication, the preceding statement and what follows from it, whereas '<span class="math-container">$\therefore$</span>' has typically, though not always, greater scope, so to speak, linking the initial assumptions/givens, the intermediate implications, with "that which was to be shown" in, say, a proof or argument.</p>
<p>Added:</p>
<p>I found the following Wikipedia entry on the meaning/use of <a href="http://en.wikipedia.org/wiki/Therefore_sign" rel="nofollow noreferrer">the symbol'<span class="math-container">$\therefore$</span>'</a>, from which I'll quote:</p>
<blockquote>
<p>To denote logical implication or entailment, various signs are used in mathematical logic: <span class="math-container">$\rightarrow, \;\implies, \;\supset$</span> and <span class="math-container">$\vdash$</span>, <span class="math-container">$\models$</span>. These symbols are then part of a mathematical formula, and are not considered to be punctuation. In contrast, the therefore sign <span class="math-container">$[\;\therefore\;]$</span> is traditionally used as a punctuation mark, and does not form part of a formula.</p>
</blockquote>
<p>It also mentions the symbol complementary to the "therefore" symbol <span class="math-container">$\;\therefore\;$</span>, namely <span class="math-container">$\;\because\;$</span>, which denotes "because".</p>
<p>Example:</p>
<p><span class="math-container">$\because$</span> All men are mortal.<br>
<span class="math-container">$\because$</span> Socrates is a man.<br>
<span class="math-container">$\therefore$</span> Socrates is mortal.<br></p>
| <p>There are four logic symbols to get clear about:</p>
<p>$$\to,\quad \vdash,\quad \vDash,\quad \therefore$$</p>
<ol>
<li>'$\to$' (or '$\supset$') is a symbol belonging to various formal languages (e.g. the language of propositional logic or the language of the first-order predicate calculus) to express [usually!] the truth-functional conditional. $A \to B$ is a single conditional proposition, which of course asserts neither $A$ nor $B$.</li>
<li>'$\vdash$' is an expression added to logician's English (or Spanish or whatever) -- it belongs to the metalanguage in which we talk about consequence relations between formal sentences. And e.g. $A, A \to B \vdash B$ says in augmented English that in some relevant deductive system, there is a proof from the premisses $A$ and $A \to B$ to the conclusion $B$. (If we are being really pernickety we would write '$A$', '$A \to B$' $\vdash$ '$B$' but it is always understood that $\vdash$ comes with invisible quotes.)</li>
<li>'$\vDash$' is another expression added to logician's English (or Spanish or whatever) -- it again belongs to the metalanguage in which we talk about consequence relations between formal sentences. And e.g. $A, A \to B \vDash B$ says that in the relevant semantics, there is no valuation which makes the premisses $A$ and $A \to B$ true and the conclusion $B$ false.</li>
<li>$\therefore$ is added as punctuation to some formal languages as an inference marker. Then $A, A \to B \therefore B$ is an <em>object language</em> expression; and (unlike the metalinguistic $A, A \to B \vdash B$), this consists in <em>three</em> separate assertions $A$, $A \to B$ and $B$, with a marker that is appropriately used when the third is a consequence of the first two. (But NB an inference marker should not be thought of as <em>asserting</em> that an inference is being made.) </li>
</ol>
<p>As for '$\Rightarrow$', this -- like the use of 'implies' -- seems to be used informally (especially by non-logicians), in different contexts for any of the first three. So I'm afraid you just have to be careful to let context disambiguate. (And NB in the second and third uses where '$\Rightarrow$' is more appropriately read as 'implies' there's no <em>scope</em> difference with '$\therefore$'. In either case, we can have many wffs before the implication/inference marker.) </p>
|
linear-algebra | <p>This is more a conceptual question than any other kind. As far as I know, one can define matrices over arbitrary fields, and so do linear algebra in different settings than in the typical freshman-year course. </p>
<p>Now, how does the concept of eigenvalues translate when doing so? Of course, a matrix need not have any eigenvalues in a given field, that I know. But do the eigenvalues need to be numbers?</p>
<p>There are examples of fields such as that of the rational functions. If we have a matrix over that field, can we have rational functions as eigenvalues?</p>
| <p>Of course. The definition of an eigenvalue does not require that the field in question is that of the real or complex numbers. In fact, it doesn't even need to be a matrix. All you need is a vector space $V$ over a field $F$, and a linear mapping $$L: V\to V.$$</p>
<p>Then, $\lambda\in F$ is an eigenvalue of $L$ if and only if there exists a nonzero element $v\in V$ such that $L(v)=\lambda v$.</p>
| <p>Eigenvalues need to be elements of the field. The most common examples of fields contain objects that we usually call numbers, but this is not part of the definition of an eigenvalue. As a counterexample, consider the field $\mathbb R(x)$ of rational expressions in a variable $x$ with real coefficients. The $2\times 2$ matrix over that field</p>
<p>$$\left(\begin{matrix}
x & 0 \\
0 & \frac1x
\end{matrix}\right)$$</p>
<p>has eigenvalues $x$ and $1/x$: not unknown numbers, but known elements of the field $\mathbb R(x).$</p>
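<p><em>A quick illustration of this (my addition, not part of the original answer), using the <code>sympy</code> library to work symbolically over the field <span class="math-container">$\mathbb R(x)$</span>:</em></p>

```python
import sympy as sp

x = sp.symbols('x')

# A 2x2 matrix over the field R(x) of rational expressions, as in the answer.
M = sp.Matrix([[x, 0], [0, 1 / x]])

# The defining property L(v) = lambda*v, with the scalars drawn from R(x):
v1, v2 = sp.Matrix([1, 0]), sp.Matrix([0, 1])
assert M * v1 == x * v1          # eigenvalue x
assert M * v2 == (1 / x) * v2    # eigenvalue 1/x

# sympy's eigenvalue routine recovers the same field elements.
eigs = M.eigenvals()             # maps eigenvalue -> multiplicity
assert set(eigs) == {x, 1 / x}
```

<p><em>The eigenvalues are not numbers but elements of <span class="math-container">$\mathbb R(x)$</span>, exactly as the answer states.</em></p>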
|
geometry | <p>My friend gave me this puzzle:</p>
<blockquote>
<p>What is the probability that a point chosen at random from the interior of an equilateral triangle is closer to the center than any of its edges? </p>
</blockquote>
<hr>
<p>I tried to draw the picture and I drew a smaller (concentric) equilateral triangle with half the side length. Since area is proportional to the square of side length, this would mean that the smaller triangle had $1/4$ the area of the bigger one. My friend tells me this is wrong. He says I am allowed to use calculus but I don't understand how geometry would need calculus. Thanks for help.</p>
| <p>You are right to think of the probabilities as areas, but the set of points closer to the center is not a triangle. It's actually a weird shape with three curved edges, and the curves are parabolas. </p>
<hr>
<p>The set of points equidistant from a line $D$ and a fixed point $F$ is a parabola. The point $F$ is called the focus of the parabola, and the line $D$ is called the directrix. You can read more about that <a href="https://en.wikipedia.org/wiki/Conic_section#Eccentricity.2C_focus_and_directrix">here</a>.</p>
<p>In your problem, if we think of the center of the triangle $T$ as the focus, then we can extend each of the three edges to give three lines that correspond to the directrices of three parabolas. </p>
<p>Any point inside the area enclosed by the three parabolas will be closer to the center of $T$ than to any of the edges of $T$. The answer to your question is therefore the area enclosed by the three parabolas, divided by the area of the triangle. </p>
<hr>
<p>Let's call $F$ the center of $T$. Let $A$, $B$, $C$, $D$, $G$, and $H$ be points as labeled in this diagram:</p>
<p><a href="https://i.sstatic.net/MTg9U.png"><img src="https://i.sstatic.net/MTg9U.png" alt="Voronoi diagram for a triangle and its center"></a></p>
<p>The probability you're looking for is the same as the probability that a point chosen at random from $\triangle CFD$ is closer to $F$ than to edge $CD$. The green parabola is the set of points that are the same distance to $F$ as to edge $CD$.</p>
<p>Without loss of generality, we may assume that point $C$ is the origin $(0,0)$ and that the triangle has side length $1$. Let $f(x)$ be the equation describing the parabola in green. </p>
<hr>
<p>By similarity, we see that $$\overline{CG}=\overline{GH}=\overline{HD}=1/3$$</p>
<p>An equilateral triangle with side length $1$ has area $\sqrt{3}/4$, so that means $\triangle CFD$ has area $\sqrt{3}/12$. The sum of the areas of $\triangle CAG$ and $\triangle DBH$ must be four ninths of that, or $\sqrt{3}/27$.</p>
<p>$$P\left(\text{point is closer to center}\right) = \displaystyle\frac{\frac{\sqrt{3}}{12} - \frac{\sqrt{3}}{27} - \displaystyle\int_{1/3}^{2/3} f(x) \,\mathrm{d}x}{\sqrt{3}/12}$$</p>
<p>We know three points that the parabola $f(x)$ passes through. This lets us create a system of equations with three variables (the coefficients of $f(x)$) and three equations. This gives</p>
<p>$$f(x) = \sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}$$</p>
<p>The <a href="http://goo.gl/kSMPmv">integral of this function from $1/3$ to $2/3$</a> is $$\int_{1/3}^{2/3} \left(\sqrt{3}x^2 - \sqrt{3}x + \frac{\sqrt{3}}{3}\right) \,\mathrm{d}x = \frac{5}{54\sqrt{3}}$$ </p>
<hr>
<p>This <a href="http://goo.gl/xEFB0s">gives our final answer</a> of $$P\left(\text{point is closer to center}\right) = \boxed{\frac{5}{27}}$$</p>
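<p><em>As a sanity check (my addition, not part of the original answer), a small Monte Carlo simulation: sample points uniformly from an equilateral triangle and compare the distance to the center with the distance to the nearest edge. The estimate should land near $5/27 \approx 0.185$.</em></p>

```python
import math
import random

random.seed(0)

# Vertices of a unit equilateral triangle and its center (centroid = incenter).
A = (0.0, 0.0)
B = (1.0, 0.0)
C = (0.5, math.sqrt(3) / 2)
center = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

def dist_to_edge(p, q, r):
    """Distance from point r to the line through p and q."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return abs((qx - px) * (py - ry) - (px - rx) * (qy - py)) / math.hypot(qx - px, qy - py)

n, hits = 200_000, 0
for _ in range(n):
    u, v = random.random(), random.random()
    if u + v > 1:                      # fold to stay inside the triangle
        u, v = 1 - u, 1 - v
    p = (A[0] + u * (B[0] - A[0]) + v * (C[0] - A[0]),
         A[1] + u * (B[1] - A[1]) + v * (C[1] - A[1]))
    d_center = math.hypot(p[0] - center[0], p[1] - center[1])
    d_edges = min(dist_to_edge(A, B, p), dist_to_edge(B, C, p), dist_to_edge(C, A, p))
    hits += d_center < d_edges

print(hits / n)   # estimate; the exact answer is 5/27 ≈ 0.1852
```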
| <p>In response to Benjamin Dickman's request for a solution without calculus, referring to dtldarek's nice diagram in Zubin Mukerjee's answer (with all areas relative to that of the triangle $FCD$):</p>
<p>The points $A$ and $B$ are one third along the bisectors from $F$, so the triangle $FAB$ has area $\frac19$. The vertex $V$ of the parabola is half-way between $F$ and the side $CD$, so the triangle $VAB$ has width $\frac13$ of $FCD$ and height $\frac16$ of $FCD$ and thus area $\frac1{18}$. By <a href="https://en.wikipedia.org/wiki/The_Quadrature_of_the_Parabola">Archimedes' quadrature of the parabola</a> (long predating the advent of calculus), the area between $AB$ and the parabola is $\frac43$ of the area of $VAB$. Thus the total area in $FCD$ closer to $F$ than to $CD$ is</p>
<p>$$
\frac19+\frac43\cdot\frac1{18}=\frac5{27}\;.
$$</p>
<p>P.S.: Like Dominic108's solution, this is readily generalized to a regular $n$-gon. Let $\phi=\frac\pi n$. Then the condition $FB=BH$, expressed in terms of the height $h$ of triangle $FAB$ relative to that of $FCD$, is</p>
<p>$$
\frac h{\cos\phi}=1-h\;,\\
h=\frac{\cos\phi}{1+\cos\phi}\;.
$$</p>
<p>This is also the width of $FAB$ relative to that of $FCD$. The height of the arc of the parabola between $A$ and $B$ is $\frac12-h$. Thus, the proportion of the area of triangle $FCD$ that's closer to $F$ than to $CD$ is</p>
<p>$$
h^2+\frac43h\left(\frac12-h\right)=\frac23h-\frac13h^2=\frac{2\cos\phi(1+\cos\phi)-\cos^2\phi}{3(1+\cos\phi)^2}=\frac13-\frac1{12\cos^4\frac\phi2}\;.
$$</p>
<p>This doesn't seem to take rational values except for $n=3$ and for $n\to\infty$, where the limit is $\frac13-\frac1{12}=\frac14$, the value for the circle.</p>
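<p><em>The closed form above is easy to sanity-check symbolically (this check is mine, not part of the original answer):</em></p>

```python
import sympy as sp

n = sp.symbols('n', positive=True)
phi = sp.pi / n
prop = sp.Rational(1, 3) - 1 / (12 * sp.cos(phi / 2) ** 4)

# n = 3 recovers the equilateral-triangle answer 5/27 ...
assert sp.simplify(prop.subs(n, 3)) == sp.Rational(5, 27)
# ... and the n -> infinity limit is 1/4, the value for the circle.
assert sp.limit(prop, n, sp.oo) == sp.Rational(1, 4)
```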
|
probability | <p>A colleague popped into my office this afternoon
and asked me the following question. He told me there is a
clever proof when $n=2$. I couldn't do
anything with it, so I thought I'd post it here and see what happens.</p>
<p><em>Prove or find a counterexample</em></p>
<p>For positive, i.i.d. random variables $Z_1,\dots, Z_n$
with finite mean, and positive constants $a_1,\dots, a_n$,
we have
$$\mathbb{E}\left({\sum_{i=1}^n a_i^2 Z_i\over\sum_{i=1}^n a_i Z_i}\right)
\leq {\sum_{i=1}^n a_i^2\over\sum_{i=1}^n a_i}.$$</p>
<hr>
<p><strong>Added:</strong> This problem originates from the thesis of a student in Computer and Electrical Engineering at the University of Alberta. Here is the response from his supervisor: "Many thanks for this! It is a nice result in addition to being useful in a practical problem of antenna placement."</p>
| <p>Yes, the inequality always holds for i.i.d. random variables $Z_1,\ldots,Z_n$. In fact, as suggested by Yuval and joriki, it is enough to suppose that the joint distribution is invariant under permuting the $Z_i$. Rearranging the inequality slightly, we just need to show that the following is nonnegative (here, I am using $\bar a\equiv\sum_ia_i^2/\sum_ia_i$)
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\sum_ia_i(\bar a-a_i)\mathbb{E}\left[\frac{Z_i}{\sum_ja_jZ_j}\right].
$$
I'll write $c_i\equiv\mathbb{E}[Z_i/\sum_ja_jZ_j]$ for brevity. Then, noting that $\sum_ia_i(\bar a-a_i)=0$, choosing any constant $\bar c$ that we like,
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\sum_ia_i(\bar a-a_i)(c_i-\bar c).
$$
To show that this is nonnegative, it is enough to show that $c_i$ is a decreasing function of $a_i$ (that is, $c_i\le c_j$ whenever $a_i\ge a_j$). In that case, we can choose $\bar c$ so that $\bar c\ge c_i$ whenever $a_i\ge\bar a$ and $\bar c\le c_i$ whenever $a_i\le\bar a$. This makes each term in the final summation above nonnegative and completes the proof.</p>
<p>Choosing $i\not=j$ such that $a_i \ge a_j$,
$$
c_i-c_j=\mathbb{E}\left[\frac{Z_i-Z_j}{\sum_ka_kZ_k}\right]
$$
Let $\pi$ be the permutation of $\{1,\ldots,n\}$ which exchanges $i,j$ and leaves everything else fixed. Using invariance under permuting $Z_i,Z_j$,
$$
\begin{align}
2(c_i-c_j)&=\mathbb{E}\left[\frac{Z_i-Z_j}{\sum_ka_kZ_k}\right]-\mathbb{E}\left[\frac{Z_i-Z_j}{\sum_ka_kZ_{\pi(k)}}\right]\cr
&=\mathbb{E}\left[\frac{(a_j-a_i)(Z_i-Z_j)^2}{\sum_ka_kZ_k\sum_ka_kZ_{\pi(k)}}\right]\cr
&\le0.
\end{align}
$$
So $c_i$ is decreasing in $a_i$ as claimed.</p>
<hr>
<p><strong>Note:</strong> In the special case of $n=2$, we can always make the choice $\bar c=(c_1+c_2)/2$. Then, both terms of the summation on the right hand side of the second displayed equation above are the same, giving
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\frac{a_1a_2(a_2-a_1)(c_1-c_2)}{a_1+a_2}.
$$
Plugging in my expression above for $2(c_1-c_2)$ gives the identity
$$
\bar a-\mathbb{E}\left[\frac{\sum_ia_i^2Z_i}{\sum_ia_iZ_i}\right]=\frac{a_1a_2(a_1-a_2)^2}{2(a_1+a_2)}\mathbb{E}\left[\frac{(Z_1-Z_2)^2}{(a_1Z_1+a_2Z_2)(a_1Z_2+a_2Z_1)}\right],
$$
which is manifestly nonnegative. I'm guessing this could be the "clever" proof mentioned in the question.</p>
<p><strong>Note 2:</strong> The proof above relies on $\mathbb{E}[Z_i/\sum_ja_jZ_j]$ being a decreasing function of $a_i$. More generally, for any decreasing $f\colon\mathbb{R}^+\to\mathbb{R}^+$, then $\mathbb{E}[Z_if(\sum_ja_jZ_j)]$ is a decreasing function of $a_i$. Choosing positive $b_i$ and setting $\bar a=\sum_ia_ib_i/\sum_ib_i$ then $\sum_ib_i(\bar a-a_i)=0$. Applying the argument above gives the inequality
$$
\mathbb{E}\left[f\left(\sum_ia_iZ_i\right)\sum_ia_ib_iZ_i\right]
\le
\mathbb{E}\left[f\left(\sum_ia_iZ_i\right)\sum_ib_iZ_i\right]\frac{\sum_ia_ib_i}{\sum_ib_i}
$$
The inequality in the question is the special case with $b_i=a_i$ and $f(x)=1/x$.</p>
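<p><em>A numerical spot-check of the inequality in the question (my addition, not part of the answer): estimate the left-hand side by Monte Carlo for i.i.d. exponential $Z_i$ and some arbitrary positive constants $a_i$.</em></p>

```python
import random

random.seed(1)

a = [0.5, 1.3, 2.0, 3.7]                 # arbitrary positive constants
rhs = sum(ai**2 for ai in a) / sum(a)    # right-hand side of the inequality

n_trials = 100_000
acc = 0.0
for _ in range(n_trials):
    z = [random.expovariate(1.0) for _ in a]   # i.i.d. positive Z_i
    acc += (sum(ai**2 * zi for ai, zi in zip(a, z))
            / sum(ai * zi for ai, zi in zip(a, z)))
lhs = acc / n_trials

print(lhs, "<=", rhs)   # the Monte Carlo estimate should fall below rhs
```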
| <p>Since the $Z_i$ are i.i.d., the expectation is the same if we rename the variables. Taking all permutations, your inequality is equivalent to
$$ \mathbb{E} \left[ \frac{1}{n!} \sum_{\pi \in S_n} \frac{\sum_{i=1}^n a_i^2 Z_{\pi(i)}}{\sum_{i=1}^n a_i Z_{\pi(i)}} \right] \leq \frac{\sum_{i=1}^n a_i^2}{\sum_{i=1}^n a_i}. $$
Going over all possible values of $Z_1,\ldots,Z_n$, this is the same as the following inequality for positive real numbers:
$$ \frac{1}{n!} \sum_{\pi \in S_n} \frac{\sum_{i=1}^n a_i^2 z_{\pi(i)}}{\sum_{i=1}^n a_i z_{\pi(i)}} \leq \frac{\sum_{i=1}^n a_i^2}{\sum_{i=1}^n a_i}. $$
Intuitively, the maximum is attained at $z_i = \text{const}$, which is why we get the right-hand side. This is indeed true for $n = 2$, which is easy to check directly.</p>
|
probability | <blockquote>
<p>Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show,
$$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$
when $X$ has : a) a discrete distribution, b) a continuous distribution.</p>
</blockquote>
<p>I assumed that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, we have $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. How useful integrating that is, though, I really have no idea.</p>
| <p>For <strong>every</strong> nonnegative random variable <span class="math-container">$X$</span>, whether discrete or continuous or a mix of these,
<span class="math-container">$$
X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt,
$$</span>
hence, by applying <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem#Tonelli%27s_theorem_for_non-negative_measurable_functions" rel="noreferrer">Tonelli's Theorem</a>,</p>
<blockquote>
<p><span class="math-container">$$
\mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt.
$$</span></p>
</blockquote>
<hr />
<p>Likewise, for every <span class="math-container">$p>0$</span>, <span class="math-container">$$
X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt,
$$</span>
hence</p>
<blockquote>
<p><span class="math-container">$$
\mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt.
$$</span></p>
</blockquote>
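<p><em>A small Python illustration (my addition, not part of the original answer): for the empirical distribution of nonnegative samples, the integral $\int_0^{+\infty}(1-F_n(t))\,\mathrm dt$ of the step survival function equals the sample mean exactly, mirroring the identity above.</em></p>

```python
import random

random.seed(2)
xs = sorted(random.expovariate(1.0) for _ in range(1000))   # nonnegative samples
n = len(xs)

# Integrate the empirical survival function S_n(t) = #{x_i > t} / n,
# a step function that drops by 1/n at each sorted sample.
integral, prev = 0.0, 0.0
for k, x in enumerate(xs):
    integral += (n - k) / n * (x - prev)   # S_n(t) = (n - k)/n on (x_(k), x_(k+1)]
    prev = x

mean = sum(xs) / n
assert abs(integral - mean) < 1e-9         # identity holds exactly for the empirical CDF
```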
| <p>Copied from <a href="https://stats.stackexchange.com/questions/18438/does-a-univariate-random-variables-mean-always-equal-the-integral-of-its-quanti">Cross Validated / stats.stackexchange</a>:</p>
<p><img src="https://i.sstatic.net/mSb6a.png" alt="enter image description here"></p>
<p>where $S(t)$ is the <a href="http://en.wikipedia.org/wiki/Survival_analysis#Quantities_derived_from_the_survival_distribution" rel="noreferrer">survival function</a> equal to $1- F(t)$. The two areas are clearly identical.</p>
|
differentiation | <p>I am able to differentiate $x^n$ with respect to $x$ from first principles using the definition of differentiation. Also it seems natural that the gradient of a finite polynomial will be one order lower. However the fact that the coefficient of the derivative of $x^5$ should be $5$, for example, seems less obvious. </p>
<p>Is there a way of showing this result so that it is intuitive?</p>
| <p>Picture a cube in $n$-space bounded by the coordinate hyperplanes and other hyperplanes parallel to them at a distance $x$ from them. In the case $n=2$, it's easy to draw the picture: the boundaries are the two coordinate axes and two lines parallel to those, and you're looking at an $x$-by-$x$ square.</p>
<p>The volume of the cube is $x^n$. As $x$ changes, how fast does the volume change?</p>
<p>Use what I call the <b>"boundary rule"</b>:</p>
<p>\begin{align}
& \text{[size of moving boundary]} \times \text{[rate of motion of boundary]} \\[6pt]
= {} & \text{[rate of change of size of bounded region].}
\end{align}</p>
<p>There are $n$ moving boundaries, each of size $x^{n-1}$. The rate at which each moves is the rate at which $x$ changes. Hence the rate of change of $x^n$ is $nx^{n-1}$ times the rate of change of $x$.</p>
<p><b>PS:</b> The boundary rule can be used to prove the product rule if you use a rectangle rather than a square. The two moving boundaries have lengths $f$ and $g$; the rate of motion of each is the rate of change of the other.</p>
<p><b>PPS:</b> The fundamental theorem also follows from the boundary rule (as noted in comments below).
$$
A(x)=\int_a^x f(t)\,dx.
$$
The size of the boundary is $f(x)$; the rate at which the boundary moves is the rate at which $x$ changes; hence $\dfrac{dA(x)}{dx}=f(x)$.</p>
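<p><em>A quick numerical sketch of the boundary rule's conclusion (my addition, not part of the answer): the growth rate of the hypercube's volume matches $n$ moving faces of size $x^{n-1}$, up to terms of order $h$.</em></p>

```python
# Numeric sanity check of d/dx x^n = n x^(n-1): the n growing "faces"
# of the hypercube each contribute about x^(n-1) * h to the volume change.
x, h = 2.0, 1e-6
for n in range(1, 7):
    growth = ((x + h)**n - x**n) / h      # rate of change of the volume
    faces = n * x**(n - 1)                # n faces, each of size x^(n-1)
    assert abs(growth - faces) / faces < 1e-4
```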
| <p>If you think about $x^n$ as the volume, in $n$ dimensions, of a cube of side $x$, you can ask "how does that grow when $x$ increases?" Answer: count the number of sides of dimension $n-1$. For a square in the plane, with one corner fixed at the origin, you have the change in area being produced by the upper and right edges, each multiplied by a thickness $\Delta x$, when you change $x$ to $x + \Delta x$. For a cube in 3-space, you have three sides, each of whose areas is multiplied by $\Delta x$ to get the change in volume. For a segment in $\mathbb R^1$, you have the right hand point moving through a distance $\Delta x$ to get the change in length, and so on. </p>
<p>In general, there are $n$ "sides" of a hypercube in dimension $n$ (with one corner fixed at the origin), corresponding to incrementing each coordinate individually. (This is also where the $n$ in the binomial expansion of $(x + \Delta x)^n$ comes from, of course.) </p>
|
linear-algebra | <p>During a year and a half of studying Linear Algebra at university, I have never questioned why we use the word "scalar" and not "number". When I started the course, our professor said we would use "scalar", but he never said why.</p>
<p>So, why do we use the word "scalar" and not "number" in Linear Algebra?</p>
| <p>So first of all, "integer" would not be adequate; vector spaces have <em>fields</em> of scalars and the integers are not a field. "Number" would be adequate in the common cases (where the field is $\mathbb{R}$ or $\mathbb{C}$ or some other subfield of $\mathbb{C}$), but even in those cases, "scalar" is better for the following reason. We can identify $c$ in the base field with the function $*_c : V \to V,*_c(v)=cv$. Especially when the field is $\mathbb{R}$, you can see that geometrically, this function acts on the space by "scaling" a vector (stretching or contracting it and possibly reflecting it). Thus the role of the scalars is to scale the vectors, and the word "scalar" hints us toward this way of thinking about it.</p>
| <p>Not all fields are fields of numbers. For instance, it makes sense to talk about vector spaces over the field of rational functions $\mathbb R(X)$ but the scalars in this case are definitely not numbers.</p>
|
differentiation | <p>I am not too grounded in differentiation, but today I was posed with a supposedly easy question: $w = f(x,y) = x^2 + y^2$ where $x = r\sin\theta $ and $y = r\cos\theta$, requiring the solution to $\partial w / \partial r$ and $\partial w / \partial \theta $. I simply solved the former using the trig identity $\sin^2 \theta + \cos^2 \theta = 1$, resulting in $\partial w / \partial r = 2r$.</p>
<p>However I was told that this solution could not be applied to this question because I should be solving for the <strong><em>total derivative</em></strong>. I could not find any good resource online to explain clearly to me the difference between a <em>normal</em> derivative and a <em>total</em> derivative and why my solution here was <em>wrong</em>. Is there anyone who could explain the difference to me using a practical example? Thanks!</p>
| <p>The key difference is that when you take a <em>partial derivative</em>, you operate under a sort of assumption that you hold one variable fixed while the other changes. When computing a <em>total derivative</em>, you allow changes in one variable to affect the other.</p>
<p>So, for instance, if you have $f(x,y) = 2x+3y$, then when you compute the partial derivative $\frac{\partial f}{\partial x}$, you temporarily assume $y$ constant and treat it as such, yielding $\frac{\partial f}{\partial x} = 2 + \frac{\partial (3y)}{\partial x} = 2 + 0 = 2$.</p>
<p>However, if $x=x(r,\theta)$ and $y=y(r,\theta)$, then the assumption that $y$ stays constant when $x$ changes is no longer valid. Since $x = x(r,\theta)$, then if $x$ changes, this implies that at least one of $r$ or $\theta$ change. And if $r$ or $\theta$ change, then $y$ changes. And if $y$ changes, then obviously it has some sort of effect on the derivative and we can no longer assume it to be equal to zero.</p>
<p>In your example, you are given $f(x,y) = x^2+y^2$, but what you really have is the following:</p>
<p>$f(x,y) = f(x(r,\theta),y(r,\theta))$.</p>
<p>So if you compute $\frac{\partial f}{\partial x}$, you cannot assume that the change in $x$ computed in this derivative has no effect on a change in $y$.</p>
<p>What you need to compute instead is $\frac{\rm{d} f}{\rm{d}\theta}$ and $\frac{\rm{d} f}{\rm{d} r}$, the first of which can be computed as:</p>
<p>$\frac{\rm{d} f}{\rm{d}\theta} = \frac{\partial f}{\partial \theta} + \frac{\partial f}{\partial x}\frac{\rm{d} x}{\rm{d} \theta} + \frac{\partial f}{\partial y}\frac{\rm{d} y}{\rm{d} \theta}$</p>
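<p><em>This is easy to check with <code>sympy</code> (a sketch I've added, not part of the original answer): substituting $x(r,\theta)$ and $y(r,\theta)$ before differentiating computes the total derivative, and for $w = x^2 + y^2$ it does come out to $2r$ with respect to $r$ and $0$ with respect to $\theta$.</em></p>

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.sin(theta)
y = r * sp.cos(theta)
w = x**2 + y**2          # x and y already substituted, so diff is total

dw_dr = sp.simplify(sp.diff(w, r))          # total derivative dw/dr
dw_dtheta = sp.simplify(sp.diff(w, theta))  # total derivative dw/dtheta

assert sp.simplify(dw_dr - 2 * r) == 0      # dw/dr = 2r
assert dw_dtheta == 0                       # dw/dtheta = 0
```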
| <p>I know this answer is incredibly delayed, but just to summarise the last post:</p>
<p>If I gave you the function </p>
<p><span class="math-container">$$ f(x,y) = \sin(x)+3y^2$$</span></p>
<p>and asked you for the partial derivative with respect to <span class="math-container">$x$</span>, you should write:</p>
<p><span class="math-container">$$ \frac{\partial f(x,y)}{\partial x} = \cos(x)+0$$</span></p>
<p>since <span class="math-container">$y$</span> is effectively a <strong>constant with respect to <span class="math-container">$x$</span></strong>. In other words, substituting a value for <span class="math-container">$y$</span> has no effect on <span class="math-container">$x$</span>. However, if I asked you for the total derivative with respect to <span class="math-container">$x$</span>, you should write:</p>
<p><span class="math-container">$$\frac{df(x,y)}{dx}=\cos(x)\cdot {dx\over dx} + 6y\cdot {dy\over dx}$$</span></p>
<p>Of course I've utilized the chain rule in the bottom case. You wouldn't write <span class="math-container">$dx\over dx$</span> in practice since it's just <span class="math-container">$1$</span>, but you need to realise that it is there :)</p>
|
probability | <p>What is some intuitive insight regarding the conditional probability definition: $P(A\mid B) = \large \frac{P(A \cap B)}{P(B)}$ ? I am looking for an intuitive motivation. My textbook merely gives a definition, but no true development of that definition. Hopefully that's not too much to ask.</p>
| <p>Consider probabilities as proportions. To say that something has probability one-sixth is to say it occurs one-sixth of the time (this is only one interpretation: it suits our purposes and our intuition, so let's not worry too much about what it means philosophically). Often we calculate probabilities simply by dividing the number of possibilities in which our event of interest occurs, by the number of possibilities total – e.g. to calculate the odds of throwing an even number on a six-sided dice, we calculate <span class="math-container">$3/6$</span>. (This works because each of the possibilities we are counting is equally likely, by assumption).</p>
<p>Now let's say we want to work out how often <span class="math-container">$A$</span> occurs, given that we know <span class="math-container">$B$</span> has occurred. Well, we need to find the occurrences of <span class="math-container">$A$</span> in this scenario, and divide by the total number of possibilities. When we know <span class="math-container">$B$</span> occurred, the occurrences of <span class="math-container">$A$</span> are all and exactly those situations in which <em>both</em> <span class="math-container">$A$</span> and <span class="math-container">$B$</span> occur, and since we're assuming <span class="math-container">$B$</span> occurred, the <strong>total</strong> number of possibilities are reduced to only those where that happened.</p>
<p>Hence \[\mathbb P(A\mid B) = \frac{\text{# occurrences of A and B}}{\text{# occurrences of B}} = \frac{\mathbb P(A \cap B)}{\mathbb P(B)}\]</p>
<p>because the "total number of possibilities" in the expressions for <span class="math-container">$\mathbb P(B)$</span> and <span class="math-container">$\mathbb P(A \cap B)$</span> cancel.</p>
<p>Essentially, what we are doing is focussing on a particular subsection of the potential events, and considering what proportion of that subsection satisfies whatever property you're interested in (think of Venn diagrams). So, for example, given that your roll result was even, on a six-sided die, it is less likely to be less than <span class="math-container">$4$</span>, because half the numbers <span class="math-container">$\{1,2,3,4,5,6\}$</span> are less than <span class="math-container">$4$</span> <em>but</em> only a third of the numbers in our subsection <span class="math-container">$\{2,4,6\}$</span> are.</p>
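<p><em>The die example can be checked with a short simulation (my addition): conditioning on "even" means restricting attention to the even outcomes and taking the proportion within that subsection.</em></p>

```python
import random

random.seed(3)
n = 100_000
both = evens = 0
for _ in range(n):
    roll = random.randint(1, 6)
    if roll % 2 == 0:
        evens += 1          # occurrences of B (even roll)
        if roll < 4:
            both += 1       # occurrences of A and B (even roll below 4)

# P(roll < 4 | even) = P(A and B) / P(B): restrict to even outcomes.
cond = both / evens
print(cond)   # close to 1/3, versus the unconditional P(roll < 4) = 1/2
```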
| <p>Think about this: if $B$ is very unlikely but when it happens $A$ becomes likely then $P(A \text{ and } B)$ is small while $P(A|B)$ is large.</p>
<p>I am extremely unlikely to win the lottery jackpot this weekend ($B$) but if I do I am likely to become a millionaire ($A$), so the probability I win the lottery jackpot and then become a millionaire $P(A \text{ and } B)$ is small, but the probability I become a millionaire if I win the lottery jackpot $P(A|B)$ is high. </p>
|
game-theory | <p>In this question, <span class="math-container">$\mathbb{N}_0$</span> is the set of all nonnegative integers. The notation <span class="math-container">$\mathbb{N}$</span> is reserved for the set of all positive integers.</p>
<blockquote>
<p>Alex and Beth are playing the following game. Initially, there are <span class="math-container">$n\in\mathbb{N}_0$</span> marbles on a table. Alex and Beth alternately remove some marbles from the table, with Alex going first. The number of removed marbles per turn must be a prime number (i.e., <span class="math-container">$2$</span>, <span class="math-container">$3$</span>, <span class="math-container">$5$</span>, <span class="math-container">$7$</span>, <span class="math-container">$11$</span>, etc.). The first person who cannot make a move loses the game. Let <span class="math-container">$W$</span> be the set of all nonnegative integers <span class="math-container">$n$</span> for which Alex has a winning strategy, and let <span class="math-container">$L$</span> be the set of all nonnegative integers <span class="math-container">$n$</span> for which Beth has a winning strategy.</p>
<p><strike>Is it true that <span class="math-container">$L$</span> is an infinite subset of <span class="math-container">$\mathbb{N}_0$</span> (clearly, <span class="math-container">$W$</span> is infinite, since it contains all the primes)?</strike> We now know that both <span class="math-container">$W$</span> and <span class="math-container">$L$</span> are infinite subsets of <span class="math-container">$\mathbb{N}_0$</span>. What are natural densities of <span class="math-container">$W$</span> and <span class="math-container">$L$</span>? What are logarithmic densities of <span class="math-container">$W$</span> and <span class="math-container">$L$</span>? Does <span class="math-container">$L$</span> contain an infinite number of pairs of consecutive integers? Does <span class="math-container">$L$</span> contain only finitely many even integers? For a positive integer <span class="math-container">$k$</span>, does each of <span class="math-container">$W$</span> and <span class="math-container">$L$</span> always have infinitely many subsets of consecutive integers of size <span class="math-container">$k$</span>? Does <span class="math-container">$L$</span> contain an infinitely number of semiprimes? Is the sum <span class="math-container">$\sum_{n\in L\setminus\{0\}}\,\frac{1}{n}$</span> finite (clearly, <span class="math-container">$\sum_{n\in W}\,\frac{1}{n}$</span> is infinite)?</p>
</blockquote>
<p>For a start, <span class="math-container">$W=\{2,3,4,5,6,7,8,11,12,13,14,15,\ldots\}$</span> and <span class="math-container">$L=\{0,1,9,10,25,34,35,\ldots\}$</span>. I conjecture that <span class="math-container">$L$</span> is an infinite subset of <span class="math-container">$\mathbb{N}_0$</span> containing infinitely many semiprimes <strike>such that <span class="math-container">$\sum_{n\in L\setminus\{0\}}\,\frac{1}{n}$</span> is finite</strike>. <strike>Furthermore, I expect that the natural density and the logarithmic density of <span class="math-container">$W$</span> are both <span class="math-container">$1$</span>, whereas the natural density and the logarithmic density of <span class="math-container">$L$</span> are both <span class="math-container">$0$</span>.</strike> References are very welcome.</p>
<p>According to the OEIS link given by Michael Lugo in a comment below, it seems that <span class="math-container">$\{0,1\}$</span>, <span class="math-container">$\{9,10\}$</span>, <span class="math-container">$\{34,35\}$</span>, and <span class="math-container">$\{309,310\}$</span> are the only subsets of <span class="math-container">$L$</span> consisting of consecutive integers (I could be wrong to assume that these are the only ones), and <span class="math-container">$W$</span> has at least three subsets of size <span class="math-container">$29$</span> composed of consecutive integers. Also, this set of data suggests that the natural density and the logarithmic density of <span class="math-container">$L$</span> exist and are equal, with the value somewhere between <span class="math-container">$0.1$</span> and <span class="math-container">$0.2$</span>. In addition, <span class="math-container">$\sum_{n\in L\setminus\{0\}}\,\frac{1}{n}>2.2$</span>, and of course, if the logarithmic density of <span class="math-container">$L$</span> exists and is positive, then this sum is infinite.</p>
<p>An interesting remark by Michael (<span class="math-container">$\neq$</span> Michael Lugo) is that the set <span class="math-container">$L$</span> contains very few even numbers. Among the smallest <span class="math-container">$10000$</span> elements of <span class="math-container">$L$</span>, only <span class="math-container">$5$</span> of them are even. This leads to my speculation that <span class="math-container">$L$</span> has only finitely many even numbers. Michael also verifies that <span class="math-container">$L$</span> has a positive natural density of at least <span class="math-container">$0.1$</span> and at most <span class="math-container">$0.25$</span>, if <span class="math-container">$L$</span> has only <span class="math-container">$5$</span> even elements.</p>
<blockquote>
<p>If, instead, the number of removed marbles per turn must belong in nonempty set <span class="math-container">$S \subseteq \mathbb{N}$</span> such that <span class="math-container">$\mathbb{N}\setminus S$</span> is infinite, let <span class="math-container">$W_S$</span> and <span class="math-container">$L_S$</span> be the sets <span class="math-container">$W$</span> and <span class="math-container">$L$</span>, respectively, for this particular <span class="math-container">$S$</span>. What can be said about the finiteness of <span class="math-container">$W_S$</span> and of <span class="math-container">$L_S$</span>? How about the finiteness of the reciprocal sums <span class="math-container">$\sum_{n\in W_S}\,\frac{1}{n}$</span> and <span class="math-container">$\sum_{n\in L_S\setminus\{0\}}\,\frac{1}{n}$</span>? What do we know about the natural and the logarithmic densities of <span class="math-container">$W_S$</span> and <span class="math-container">$L_S$</span>?</p>
</blockquote>
<p>For example, if <span class="math-container">$S=\{1,2,\ldots,m\}$</span> for some <span class="math-container">$m\in\mathbb{N}$</span>, then we have <span class="math-container">$W_S=\big\{n\in\mathbb{N}_0\,\big|\,(m+1)\nmid n\big\}$</span> and <span class="math-container">$L_S=\big\{n\in\mathbb{N}_0\,\big|\,(m+1)\mid n\big\}$</span>. Hence, <span class="math-container">$W_S$</span> and <span class="math-container">$L_S$</span> are infinite sets with <span class="math-container">$\sum_{n\in W_S}\,\frac{1}{n}$</span> and <span class="math-container">$\sum_{n \in L_S\setminus\{0\}}\frac{1}{n}$</span> being also infinite. Note that the natural and logarithmic densities of <span class="math-container">$W_S$</span> are both <span class="math-container">$1-\frac{1}{m+1}$</span>, and the natural and logarithmic densities of <span class="math-container">$L_S$</span> are both <span class="math-container">$\frac{1}{m+1}$</span>.</p>
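<p>The divisibility claim for $S=\{1,2,\ldots,m\}$ can be checked mechanically with the standard win/loss recursion for subtraction games: a position is losing exactly when every available move leads to a winning position. A small Python sketch (the sets and bound below are illustrative choices, not from the question):</p>

```python
def losing_positions(S, N):
    """Positions 0..N from which the player to move loses, when each
    turn must remove s marbles for some s in S."""
    is_losing = [False] * (N + 1)
    out = []
    for n in range(N + 1):
        # n is losing iff every available move leads to a winning position
        is_losing[n] = all(not is_losing[n - s] for s in S if s <= n)
        if is_losing[n]:
            out.append(n)
    return out

# S = {1, ..., m} gives exactly the multiples of m + 1, as claimed:
print(losing_positions({1, 2}, 20))     # [0, 3, 6, 9, 12, 15, 18]
print(losing_positions({1, 2, 3}, 20))  # [0, 4, 8, 12, 16, 20]
```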
<p><strong>Reference:</strong></p>
<p><a href="https://oeis.org/A025043" rel="nofollow noreferrer">https://oeis.org/A025043</a></p>
| <p>Assume $L$ is finite, say $\max L=m$. Then for any $n>m$ there exists a prime $p$ with $n-p\in L$, hence a prime in the interval $[n-m,n]$. This would mean all prime gaps are bounded by $m$, which is absurd, since prime gaps are unbounded. Hence $L$ is infinite.</p>
| <p>Suppose there are about $f(N)$ losing points in $[1,N]$. There would be around $f(N+1)\approx f(N)+f'(N)$ losing points in $[1,N+1]$, so $f'(N)$ is the chance that $N+1$ is a losing point.<br>
On the other hand, the chance that $N+1$ is a losing point is the chance that $N+1-a$ is composite for every losing point $a$.<br>
Pretend that the $N+1-a$ are all $O(N)$, then this chance would be
$$\left(1-\frac1{\ln N}\right)^{f(N)}=f'(N)$$<br>
Let $f(N)=(\ln N)^2g(N)$.<br>
$(1-1/\ln N)^{\ln N}$ is roughly $1/e$, so the equation is approximately
$$N^{-g(N)}=\frac{2g(N)\ln N}{N}+(\ln N)^2g'(N)$$
If $g(N)\approx 1$, then the left-hand side is too small; if $g(N)\approx1-\epsilon$ then the left-hand side is too big. So I believe $f(N)\approx(\ln N)^2$<br>
EDIT :
It turns out this is wrong because the numbers in the list are mostly odd. That makes it much easier for a new odd number to make the list. If, say, a fifth of the odd numbers are in the list, then the chance of a new even $N$ making the list would be $(1-1/\ln N)^{N/10}$. This goes to zero much faster than $1/N^2$, so it has a finite sum, and I expect only a finite number of even $N$ to make the list.</p>
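<p>For anyone who wants to experiment with these heuristics, $L$ can be computed directly from the win/loss recursion: $n$ is losing iff $n-p$ is winning for every prime $p\le n$. A small Python sketch (the bound $400$ is an arbitrary choice):</p>

```python
def losing_up_to(N):
    """Losing positions of the game in which each turn removes a prime
    number of marbles: n is losing iff n - p is winning for every prime p <= n."""
    sieve = [False, False] + [True] * (N - 1)  # sieve of Eratosthenes
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    primes = [i for i in range(N + 1) if sieve[i]]

    is_losing = [False] * (N + 1)
    L = []
    for n in range(N + 1):
        # n is losing iff no prime move reaches a losing position
        is_losing[n] = all(not is_losing[n - p] for p in primes if p <= n)
        if is_losing[n]:
            L.append(n)
    return L

L = losing_up_to(400)
print(L[:7])                         # [0, 1, 9, 10, 25, 34, 35]
print([n for n in L if n % 2 == 0])  # the even members really are scarce
```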
|
matrices | <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n \times n$</span> matrix and let <span class="math-container">$\Lambda$</span> be an <span class="math-container">$n \times n$</span> diagonal matrix. Is it always the case that <span class="math-container">$A\Lambda = \Lambda A$</span>? If not, when is it the case that <span class="math-container">$A \Lambda = \Lambda A$</span>?</p>
<p>If we restrict the diagonal entries of <span class="math-container">$\Lambda$</span> to being equal (i.e. <span class="math-container">$\Lambda = \text{diag}(a, a, \dots, a)$</span>), then it is clear that <span class="math-container">$A\Lambda = AaI = aIA = \Lambda A$</span>. However, I can't seem to come up with an argument for the general case.</p>
| <p>A diagonal matrix $\Lambda$ commutes with a matrix $A$ whenever $A$ is symmetric and $A \Lambda$ is also symmetric. Indeed, we have</p>
<p>$$
\Lambda A = (A^{\top}\Lambda^\top)^{\top} = (A\Lambda)^\top = A\Lambda
$$</p>
<p>The above trivially holds when $A$ and $\Lambda$ are both diagonal.</p>
| <p>A diagonal matrix will not commute with every matrix.</p>
<p>$$
\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}*\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$</p>
<p>But:</p>
<p>$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} * \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}.$$</p>
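<p>Both the counterexample above and the symmetric criterion from the accepted answer are easy to verify numerically. A small NumPy sketch (the symmetric example is an illustrative choice):</p>

```python
import numpy as np

D = np.diag([1, 2])
N = np.array([[0, 1], [0, 0]])
print(D @ N)  # rows [0 1] and [0 0]
print(N @ D)  # rows [0 2] and [0 0] -- different, so D and N do not commute

# Accepted answer's criterion: A symmetric with A @ D symmetric forces
# commutation. Repeated diagonal entries make this easy to satisfy:
D2 = np.diag([2, 2])
A = np.array([[1, 4], [4, 7]])  # symmetric, and A @ D2 = 2A is symmetric too
assert np.allclose(A @ D2, D2 @ A)
```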
|
combinatorics | <p>I saw this question today; it asks how many triangles are in this picture.</p>
<p><img src="https://i.sstatic.net/c8X1c.jpg" alt="enter image description here"></p>
<p>I don't know how to solve this (without counting directly), though I guess it has something to do with some recurrence.</p>
<p>How can I count the number of triangles in the picture?</p>
| <p>Say that instead of four triangles along each edge we have $n$. First count the triangles that point up. This is easy to do if you count them by top vertex. Each vertex in the picture is the top of one triangle for every horizontal grid line below it. Thus, the topmost vertex, which has $n$ horizontal gridlines below it, is the top vertex of $n$ triangles; each of the two vertices in the next row down is the top vertex of $n-1$ triangles; and so on. This gives us a total of</p>
<p>$$\begin{align*}
\sum_{k=1}^nk(n+1-k)&=\frac12n(n+1)^2-\sum_{k=1}^nk^2\\
&=\frac12n(n+1)^2-\frac16n(n+1)(2n+1)\\
&=\frac16n(n+1)\Big(3(n+1)-(2n+1)\Big)\\
&=\frac16n(n+1)(n+2)\\
&=\binom{n+2}3
\end{align*}$$</p>
<p>upward-pointing triangles.</p>
<p>The downward-pointing triangles can be counted by their bottom vertices, but it’s a bit messier. First, each vertex not on the left or right edge of the figure is the bottom vertex of a triangle of height $1$, and there are $$\sum_{k=1}^{n-1}k=\binom{n}2$$ of them. Each vertex that is not on the left or right edge or on the slant grid lines adjacent to those edges is the bottom vertex of a triangle of height $2$, and there are </p>
<p>$$\sum_{k=1}^{n-3}k=\binom{n-2}2$$ of them. In general each vertex that is not on the left or right edge or on one of the $h-1$ slant grid lines nearest each of those edges is the bottom vertex of a triangle of height $h$, and there are </p>
<p>$$\sum_{k=1}^{n+1-2h}k=\binom{n+2-2h}2$$ of them. </p>
<p><strong>Algebra beyond this point corrected.</strong></p>
<p>The total number of downward-pointing triangles is therefore</p>
<p>$$\begin{align*}
\sum_{h\ge 1}\binom{n+2-2h}2&=\sum_{k=0}^{\lfloor n/2\rfloor-1}\binom{n-2k}2\\
&=\frac12\sum_{k=0}^{\lfloor n/2\rfloor-1}(n-2k)(n-2k-1)\\
&=\frac12\sum_{k=0}^{\lfloor n/2\rfloor-1}\left(n^2-4kn+4k^2-n+2k\right)\\
&=\left\lfloor\frac{n}2\right\rfloor\binom{n}2+2\sum_{k=0}^{\lfloor n/2\rfloor-1}k^2-(2n-1)\sum_{k=0}^{\lfloor n/2\rfloor-1}k\\
&=\left\lfloor\frac{n}2\right\rfloor\binom{n}2+\frac13\left\lfloor\frac{n}2\right\rfloor\left(\left\lfloor\frac{n}2\right\rfloor-1\right)\left(2\left\lfloor\frac{n}2\right\rfloor-1\right)\\
&\qquad\qquad-\frac12(2n-1)\left\lfloor\frac{n}2\right\rfloor\left(\left\lfloor\frac{n}2\right\rfloor-1\right)\;.
\end{align*}$$</p>
<p>Set $\displaystyle m=\left\lfloor\frac{n}2\right\rfloor$, and this becomes</p>
<p>$$\begin{align*}
&m\binom{n}2+\frac13m(m-1)(2m-1)-\frac12(2n-1)m(m-1)\\
&\qquad\qquad=m\binom{n}2+m(m-1)\left(\frac{2m-1}3-n+\frac12\right)\;.
\end{align*}$$</p>
<p>This simplifies to $$\frac1{24}n(n+2)(2n-1)$$ for even $n$ and to</p>
<p>$$\frac1{24}\left(n^2-1\right)(2n+3)$$ for odd $n$.</p>
<p>The final figure, then is</p>
<p>$$\binom{n+2}3+\begin{cases}
\frac1{24}n(n+2)(2n-1),&\text{if }n\text{ is even}\\\\
\frac1{24}\left(n^2-1\right)(2n+3),&\text{if }n\text{ is odd}\;.
\end{cases}$$</p>
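<p>These closed forms are easy to sanity-check against the defining sums. A small Python sketch, brute-forcing the counts by top and bottom vertex exactly as derived above:</p>

```python
from math import comb

def up(n):
    # upward-pointing triangles, counted by their top vertex
    return sum(k * (n + 1 - k) for k in range(1, n + 1))

def down(n):
    # downward-pointing triangles of height h, counted by their bottom vertex
    return sum(comb(n + 2 - 2 * h, 2) for h in range(1, n // 2 + 1))

for n in range(1, 60):
    assert up(n) == comb(n + 2, 3)
    if n % 2 == 0:
        assert down(n) == n * (n + 2) * (2 * n - 1) // 24
    else:
        assert down(n) == (n * n - 1) * (2 * n + 3) // 24

print(up(4) + down(4))  # 27 triangles for the picture in the question (n = 4)
```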
| <h1>Tabulating numbers</h1>
<p>Let $u(n,k)$ denote the number of upwards-pointing triangles of size $k$ included in a triangle of size $n$, where size is a short term for edge length. Let $d(n,k)$ likewise denote the number of down triangles. You can tabulate a few numbers to get a feeling for these. In the following table, row $n$ and column $k$ will contain two numbers separated by a comma, $u(n,k), d(n,k)$.</p>
<p>$$
\begin{array}{c|cccccc|c}
n \backslash k &
1 & 2 & 3 & 4 & 5 & 6 & \Sigma \\\hline
1 & 1, 0 &&&&&& 1 \\
2 & 3, 1 & 1,0 &&&&& 5 \\
3 & 6, 3 & 3,0 & 1,0 &&&& 13 \\
4 & 10, 6 & 6,1 & 3,0 & 1,0 &&& 27 \\
5 & 15,10 & 10,3 & 6,0 & 3,0 & 1,0 && 48 \\
6 & 21,15 & 15,6 & 10,1 & 6,0 & 3,0 & 1,0 & 78
\end{array}
$$</p>
<h1>Finding a pattern</h1>
<p>Now look for patterns:</p>
<ul>
<li>$u(n, 1) = u(n - 1, 1) + n$ as the size change added $n$ upwards-pointing triangles</li>
<li>$d(n, 1) = u(n - 1, 1)$ as the downward-pointing triangles are based on triangle grid of size one smaller</li>
<li>$u(n, n) = 1$ as there is always exactly one triangle of maximal size</li>
<li>$d(2k, k) = 1$ as you need at least twice its edge length to contain a downward triangle.</li>
<li>$u(n, k) = u(n - 1, k - 1)$ by using the small $(k-1)$-sized triangle at the top as a representative of the larger $k$-sized triangle, excluding the bottom-most (i.e. $n$<sup>th</sup>) row.</li>
<li>$d(n, k) = u(n - k, k)$ as the grid continues to expand, adding one row at a time.</li>
</ul>
<p>Using these rules, you can extend the table above arbitrarily.</p>
<p>The important fact to note is that you get the same sequence of $1,3,6,10,15,21,\ldots$ over and over again, in every column. It describes grids of triangles of the same size and orientation, increasing the grid size by one in each step. For this reason, those numbers are also called <a href="http://oeis.org/A000217">triangular numbers</a>. Once you know where the first triangle appears in a given column, the number of triangles in subsequent rows is easy.</p>
<h1>Looking up the sequence</h1>
<p>Now take that sum column to <a href="https://oeis.org/">OEIS</a>, and you'll find this to be <a href="https://oeis.org/A002717">sequence A002717</a> which comes with a nice formula:</p>
<p>$$\left\lfloor\frac{n(n+2)(2n+1)}8\right\rfloor$$</p>
<p>There is also a comment stating that this sequence describes the</p>
<blockquote>
<p>Number of triangles in triangular matchstick arrangement of side $n$.</p>
</blockquote>
<p>Which sounds just like what you're asking.</p>
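<p>The recurrences above also collapse into a direct computation: every column of the table runs through the triangular numbers $T(j)=j(j+1)/2$, giving $u(n,k)=T(n-k+1)$ and $d(n,k)=T(n-2k+1)$ (with $T(j)=0$ for $j<1$). A small Python sketch checking this against the formula from OEIS:</p>

```python
def T(j):
    # triangular numbers, with T(j) = 0 for j < 1
    return j * (j + 1) // 2 if j >= 1 else 0

def total(n):
    up = sum(T(n - k + 1) for k in range(1, n + 1))             # u(n, k) over k
    down = sum(T(n - 2 * k + 1) for k in range(1, n // 2 + 1))  # d(n, k) over k
    return up + down

# Matches the sum column of the table and the floor formula for every n tried:
assert all(total(n) == n * (n + 2) * (2 * n + 1) // 8 for n in range(1, 200))
print([total(n) for n in range(1, 7)])  # [1, 5, 13, 27, 48, 78]
```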
<h1>References</h1>
<p>If you want to know how to obtain that formula without looking it up, or how to check that formula without simply trusting an encyclopedia, then some of the references given at OEIS will likely help you out:</p>
<ul>
<li>J. H. Conway and R. K. Guy, <em>The Book of Numbers,</em> <a href="http://books.google.com/books?id=0--3rcO7dMYC&pg=PA83">p. 83</a>.</li>
<li>F. Gerrish, <em><a href="http://www.jstor.org/stable/3613774">How many triangles</a>,</em> Math. Gaz., 54 (1970), 241-246.</li>
<li>J. Halsall, <em><a href="http://www.jstor.org/stable/3613117">An interesting series</a>,</em> Math. Gaz., 46 (1962), 55-56.</li>
<li>M. E. Larsen, <em><a href="http://www.math.ku.dk/~mel/mel.pdf">The eternal triangle – a history of a counting problem</a>,</em> College Math. J., 20 (1989), 370-392.</li>
<li>C. L. Hamberg and T. M. Green, <em><a href="http://www.jstor.org/stable/27957564">An application of triangular numbers</a>,</em> Mathematics Teacher, 60 (1967), 339-342. <em>(Referenced by Larsen)</em></li>
<li>B. D. Mastrantone, <em><a href="http://www.jstor.org/stable/3612392">Comment</a>,</em> Math. Gaz., 55 (1971), 438-440.</li>
<li><em><a href="http://www.jstor.org/stable/2688056">Problem 889</a>,</em> Math. Mag., 47 (1974), 289-292.</li>
<li>L. Smiley, <em><a href="http://www.jstor.org/stable/2690473">A Quick Solution of Triangle Counting</a>,</em> Mathematics Magazine, 66, #1, Feb '93, p. 40.</li>
</ul>
|
linear-algebra | <p>I was just wondering why we don't ever define multiplication of vectors as individual component multiplication. That is, why doesn't anybody ever define $\langle a_1,b_1 \rangle \cdot \langle a_2,b_2 \rangle$ to be $\langle a_1a_2, b_1b_2 \rangle$? Is the resulting vector just not geometrically interesting?</p>
| <p>Unlike the usual operations of vector calculus, the product $\bullet$ you defined here is not covariant for Cartesian coordinate changes. This means that an equation involving $\bullet$ is not guaranteed to keep holding true if both members undergo an orthogonal coordinate change, such as a rotation of the axes. </p>
<p>For a 2 dimensional example, consider the following equation:
\begin{equation}
\begin{bmatrix} 1 \\ 0 \end{bmatrix} \bullet \begin{bmatrix} 0 \\ 1\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.
\end{equation}
If we rotate the plane 45° counterclockwise then
\begin{align}
\begin{bmatrix} 1 \\ 0 \end{bmatrix} \to \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix},& & \begin{bmatrix} 0 \\ 1 \end{bmatrix} \to \begin{bmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix},&&\begin{bmatrix} 0 \\ 0 \end{bmatrix} \to \begin{bmatrix} 0 \\ 0 \end{bmatrix},
\end{align}
but
\begin{equation}\tag{!!}
\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \bullet \begin{bmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}\end{bmatrix} \ne\begin{bmatrix} 0 \\ 0\end{bmatrix}.
\end{equation}
From the physicist's point of view, then, this operation is ill-posed as it should be independent of the particular coordinate system one chooses to describe physical space. This is not the case of the dot-product and the cross-product, which are independent of such a choice. </p>
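<p>The computation above can be replayed numerically. A small NumPy sketch:</p>

```python
import numpy as np

def hadamard(u, v):
    return u * v  # componentwise product

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(hadamard(e1, e2))          # [0. 0.]

theta = np.pi / 4                # rotate the axes 45 degrees counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(hadamard(R @ e1, R @ e2))  # [-0.5  0.5] -- no longer the zero vector
```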
| <p>This is Hadamard product, which is defined for matrices, and hence for vector columns. See Wikipedia page : <a href="http://en.wikipedia.org/wiki/Hadamard_product_%28matrices%29">Hadamard Product</a></p>
|
probability | <blockquote>
<p>If <span class="math-container">$X$</span> is a discrete random variable that can take the values <span class="math-container">$x_1, x_2, \dots $</span> and with probability mass function <span class="math-container">$f_X$</span>, then we define
its mean by the number <span class="math-container">$$\sum x_i f_X(x_i) $$</span> (1)
when the series above is <em>absolutely convergent</em>.</p>
</blockquote>
<p>That's the definition of mean value of a discrete r.v. I've encountered in my books (<em>Introduction to the Theory of Statistics</em> by Mood A., <em>Probability and Statistics</em> by DeGroot M.).</p>
<p>I know that if a series is absolute convergent then it is convergent, but why do we need to ask for the series (1) to converge absolutely, instead of just asking it to converge? I'm taking my introductory courses of probabilty and so far I haven't found a situation that forces us restrict ourselves this way.</p>
<p>Any comments about the subject are appreciated.</p>
| <p>It's because if the series is convergent but not absolutely convergent, you can rearrange the sum to get any value. Any good notion of "mean" or "expectation" should not depend on the ordering of the <span class="math-container">$x_i$</span>'s.</p>
<p>For a more abstract reason, note that we <em>define</em> the expectation <span class="math-container">$E[X]$</span> of a random variable <span class="math-container">$X$</span> defined on a probability space <span class="math-container">$(\Omega, \mathcal{F}, P)$</span> as the Lebesgue integral <span class="math-container">$\int_{\Omega} X dP$</span>. By definition of the Lebesgue integral, this is only well-defined if the integrand is <em>absolutely</em> integrable. If you learn more about measure theory, you will also learn why this definition makes sense. It is done to avoid strange situations like <span class="math-container">$\infty - \infty$</span> in the theory.</p>
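<p>The rearrangement phenomenon is easy to watch numerically with the alternating harmonic series $1-\frac12+\frac13-\cdots$, which converges conditionally: in its natural order it approaches $\ln 2$, while a greedy reordering of the same terms approaches any target you like. A small Python sketch (the target $1.5$ is an arbitrary choice):</p>

```python
import math

def rearranged_sum(target, n_terms):
    """Greedily reorder the terms of 1 - 1/2 + 1/3 - 1/4 + ... so that
    the partial sums chase `target` (Riemann's rearrangement idea)."""
    s = 0.0
    p, q = 1, 2  # next unused positive term 1/p, negative term -1/q
    for _ in range(n_terms):
        if s < target:
            s += 1.0 / p
            p += 2
        else:
            s -= 1.0 / q
            q += 2
    return s

# Natural order: converges to ln 2 = 0.6931...
natural = sum((-1) ** (n + 1) / n for n in range(1, 200001))
print(abs(natural - math.log(2)) < 1e-5)      # True

# Same terms, different order, entirely different limit:
print(round(rearranged_sum(1.5, 200000), 3))  # ~1.5
```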
| <p>I think any explanation is going to make reference to the fact that without absolute convergence, the value of an infinite sum or an improper Riemann integral depends on the order in which the "pieces" are summed up. That alone may satisfy you, but it didn't fully satisfy me.</p>
<p>The more specific reason that satisfied me is "without absolute convergence the law of large numbers fails". The intuitive reason for this is that when you're taking sample averages, instead of integrating <span class="math-container">$x f(x)$</span> symmetrically, you're integrating it by Monte Carlo integration, literally picking locations randomly. As a consequence, if the integral for the mean only converges conditionally, then there is no guarantee that the sample averages have the same behavior along different sequences of samples, or even that the sample averages converge at all.</p>
<p>To see this on a computer, try running a program like this, which takes successive sample averages from the standard Cauchy distribution (which is symmetric about <span class="math-container">$0$</span>, so its mean "would be zero if it made sense").</p>
<pre><code>n = 1e4;                    % number of samples to draw
x = pi*(rand(1,n) - 1/2);   % uniform draws on (-pi/2, pi/2)
y = cumsum(tan(x))./(1:n);  % running sample means of standard Cauchy variates
plot(1:n, y)                % the running mean keeps jumping; it never settles
</code></pre>
<p>This program as is will run in Matlab or Octave, but very similar programs can be run in other software with support for random numbers and plotting. What you see is quite dramatic jumps in the sample mean that occur when an entry of x gets too close to <span class="math-container">$\pi/2$</span> or <span class="math-container">$-\pi/2$</span>, and which continue to occur even after thousands of samples have been drawn.</p>
|
logic | <p>Say I am explaining to a kid, <span class="math-container">$A +B$</span> is the same as <span class="math-container">$B+A$</span> for natural numbers.</p>
<p>The kid asks: why?</p>
<p>Well, it's an axiom. It's called commutativity (which is not even true for most groups).</p>
<p>How do I "prove" the axioms?</p>
<p>I can say, look, there are <span class="math-container">$3$</span> pebbles in my right hand and <span class="math-container">$4$</span> pebbles in my left hand. It's pretty intuitive that the total is <span class="math-container">$7$</span> whether I added the left hand first or the right hand first.</p>
<p>Well, I answer that on any exam and I'll get an F for sure.</p>
<p>There is something about axioms. They can't be proven and yet they are more true than conjectures or even theorems.</p>
<p>In what sense are axioms true then?</p>
<p>Is this just intuition? We simply define natural numbers as things that fit these axioms. If it's not true then well, they're not natural numbers. That may make sense. What do mathematicians think? Is the fact that the number of pebbles in my hand follows the rules of natural numbers "science" instead of "math"? Looks like it's more obvious than that.</p>
<p>It looks to me, truth for axioms, theorems, and science are all truth in a different sense, isn't it? We use just one word to describe them, true. I feel like I am missing something here.</p>
| <p><strong>You only need to "prove" an axiom when using it to model a real-world problem.</strong> In general, mathematicians just say <em>"these are my assumptions (axioms), this is what I can prove with them"</em> - they often don't care whether it models a real-world problem or not.</p>
<hr>
<p>When using math to model real-world problems, it's up to you to show that the axioms actually hold. The idea is that, if the axioms are true for the real-world problem, and all the logical steps taken are sound, then the conclusions (theorems etc.) should also be true in your real-world problem.</p>
<p>In your case, I think your example is actually a convincing "proof" that your axiom <em>(commutativity of addition over natural numbers)</em> holds for your real-world problem of counting stones: if I pick up any number of stones in my left- and right-hands, it doesn't matter whether I count the left or right first, I'll get the same result either way. You can verify this experimentally, or use your intuition. As long as you agree that the axioms of the model fit your problem, you should agree with the conclusions as well <em>(assuming you agree with the proofs, of course)</em>.</p>
<p>Of course, this is not a <em>proof</em> of the axioms, and it's entirely possible for someone to disagree. In that case, they don't believe that the natural numbers are valid model for counting stones, and they'll have to look for a different model instead.</p>
| <p>The problem of what is means for something to be "true" is a general problem in philosophy which has received a lot of attention, so it is impossible for anyone to give a short and complete answer here. The Stanford Encyclopedia has a nice <a href="http://plato.stanford.edu/entries/truth/">article on truth</a>. Mathematics is a useful test case for philosophers so a lot has been written on mathematical truth. </p>
<p>There is a separate problem that the word "true" is used to mean several things in mathematics: it can mean just "true", or it can mean "true in a particular structure". For example, the latter meaning is intended when we say that the axiom of commutativity is true in some groups and not in others. The notion of truth in a structure is well studied in mathematical logic. But I think the question above is about plain "truth" not about "truth in a structure". </p>
<p>The easiest way to define what plain "truth" means is to believe that there is some "real" mathematical structure, consisting of mathematical objects that actually exist. This viewpoint is called mathematical Platonism or mathematical realism. Then a statement is "true" if it is true when interpreted as a statement about these real mathematical objects. For example, from this viewpoint the statement "Addition of natural numbers is commutative" is true because the actual addition operation on the actual set of natural numbers is commutative. </p>
<p>There are other "anti-realist" theories of truth that do not presuppose that there are independently existing mathematical objects that can be used to test the truth of a statement. (One problem with the realist versions is that it is far from clear how we would be able to tell whether mathematical objects have various properties using our five senses.) Some go as far as replacing truth with provability; for example, this is one way to understand the motivations for intuitionism. But most mathematicians maintain that there is a difference between truth and provability. </p>
<p>There is a separate issue that most of the time the word "true" is used in mathematical proofs, it is just a turn of phrase that could be omitted. For example, instead of saying "We know that $A \to B$ is true, and we have proved $A$, so $B$ is true", we can say "We have assumed $A \to B$, and we have proved $A$, so we may conclude $B$". This shows up when the proofs are formalized: formal proofs in most theories (e.g. the theory of groups, ZFC set theory) do not have any way to refer to "truth", they simply manipulate formulas. The idea, of course, is that if the assumptions are true then the conclusion is true. But the formal proof itself will not make reference to plain "truth". </p>
<hr>
<p>The question goes on to ask how we would know (in a realist theory, for example) that addition of natural numbers is commutative. Someone could say "you prove it from other postulates" but then the problem would be how to know that those postulates are true. In the end, the question is how to know that any postulate about the actual natural numbers is true. This is a major issue for mathematical realism, as I mentioned above. The most common answer is that humans have some form of insight which allows us to determine the truth of some (but not all) mathematical propositions directly, without having a formal proof of those propositions. The commutativity of natural number addition is one of those propositions: by thinking about addition and natural numbers, we are drawn to conclude that the addition is commutative. In the end this is how we justify all postulates in geometry, set theory, arithmetic, etc. The realist positions is that although we cannot prove them formally, we can come to believe they are true by thinking about the objects they describe. </p>
|
probability | <p>I found this problem and I've been stuck on how to solve it.</p>
<blockquote>
<p>A miner is trapped in a mine containing 3 doors. The first door leads to a tunnel that will take him to safety after 3 hours of travel. The second door leads to a tunnel that will return him to the mine after 5 hours of travel. The third door leads to a tunnel that will return him to the mine after 7 hours. If we assume that the miner is at all times equally likely to choose any one of doors, what is the expected length of time until he reaches safety?</p>
</blockquote>
<p>The fact that the miner could be stuck in an infinite loop has confused me.
Any help is greatly appreciated.</p>
| <p>Let $T$ be the time spent in the mine. Conditioning on the first door the miner chooses, we get
$$ \mathbb{E}[T]=\frac{1}{3}\cdot3+\frac{1}{3}(5+\mathbb{E}[T])+\frac{1}{3}(7+\mathbb{E}[T])$$
so
$$ \mathbb{E}[T]=5+\frac{2}{3}\mathbb{E}[T].$$
If $\mathbb{E}[T]$ is finite, then we can conclude that $\mathbb{E}[T]=15$.</p>
<p>To see that $\mathbb{E}[T]$ is finite, let $X$ be the number of times the miner chooses a door. Then $ \mathbb{P}(X\geq n)=(\frac{2}{3})^{n-1}$ for $n=1,2,3,\dots$, hence
$$ \mathbb{E}[X]=\sum_{n=1}^{\infty}\mathbb{P}(X\geq n)=\sum_{n=1}^{\infty}\Big(\frac{2}{3}\Big)^{n-1}<\infty$$
And since $T\leq 7(X-1)+3$, we see that $\mathbb{E}[T]<\infty$ as well.</p>
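<p>A quick Monte Carlo check of $\mathbb{E}[T]=15$ (a small Python sketch; the trial count and seed are arbitrary choices):</p>

```python
import random

def escape_time(rng):
    """Simulate one trapped-miner episode; returns hours until safety."""
    t = 0
    while True:
        door = rng.randrange(3)     # each door equally likely every time
        if door == 0:
            return t + 3            # tunnel to safety
        t += 5 if door == 1 else 7  # back to the mine after 5 or 7 hours

rng = random.Random(0)
trials = 200_000
avg = sum(escape_time(rng) for _ in range(trials)) / trials
print(round(avg, 1))  # should land very close to the theoretical value 15
```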
| <p>Let $t$ be the expected time to get out. If he takes the second or third door he returns to the same position as the start, so the expected time after he returns is $t$. Therefore we have $$t=\frac 13(3) + \frac 13(t+5)+\frac 13(t+7)\\\frac 13t=5
\\t=15$$</p>
|
combinatorics | <p>Suppose you have a combination bicycle lock of this sort:</p>
<p><img src="https://bosker.files.wordpress.com/2014/02/2390282005_1b88533aa6_b.jpg?w=480&h=319" width="480" height="319" alt=""></p>
<p>with $n$ dials and $k$ numbers on each dial. Let $m(n,k)$ denote the minimum number of turns that always suffice to open the lock from any starting position, where a turn consists of rotating <strong>any number of adjacent rings</strong> by one place.</p>
<p>For example $m(2,10)=6$ and $m(3,10)=10$.</p>
<p>I have found an <a href="http://bosker.wordpress.com/2014/02/18/the-bicycle-lock-problem/" rel="nofollow noreferrer">efficient algorithm to compute these numbers</a>, which reveals a symmetry I can’t currently explain:</p>
<p>$m(n, k+1) = m(k, n+1)$</p>
<p>This is such a striking symmetry that I guess it has a simple explanation. Can anyone find one?</p>
<p>Here’s the table of values for small $n$ and $k$, exhibiting the conjectured symmetry:</p>
<pre>n\k| 2 3 4 5 6 7 8 9 10
---+---------------------------------------------
1 | 1 1 2 2 3 3 4 4 5
2 | 1 2 2 3 4 4 5 6 6
3 | 2 2 4 4 6 6 8 8 10
4 | 2 3 4 6 6 8 9 10 12
5 | 3 4 6 6 9 9 12 12 15
6 | 3 4 6 8 9 12 12 15 16
7 | 4 5 8 9 12 12 16 16 20
8 | 4 6 8 10 12 15 16 20 20
9 | 5 6 10 12 15 16 20 20 25
10 | 5 7 10 12 15 18 20 24 25</pre>
| <p>It has taken a few days, but I believe I can at last answer my own question. The strategy is to show that, for every combination of the $(n,k)$ lock, there is a combination of the $(k-1,n+1)$ lock that needs the same number of turns to unlock. The argument begins by grouping lock combinations into equivalence classes in such a way that equivalent combinations require the same number of turns to open.</p>
<p><hr>
Assume throughout that the destination combination, that opens the lock, is $n$ zeros.</p>
<p>The first trick is one I used before: instead of looking at the positions of the dials, look at the differences between the positions of adjacent dials. On its own this doesn’t discard any information – the process is reversible – but it opens the door to a simplification: <i>the order of the differences doesn’t matter</i>. So we’ll say that two combinations are equivalent if they have the same multiset of differences, and we’ll write the differences as a nondecreasing sequence, to give a canonical form.</p>
<p>To motivate the next identification we’re going to make, let’s consider an example combination of the $(n=4, k=10)$-lock: <code>2593</code>, which has differences <code>23447</code>. The differences sum to $2k$, which means – as explained in <a href="http://bosker.wordpress.com/2014/02/18/the-bicycle-lock-problem/" rel="nofollow noreferrer">my original blog post</a> – that we can ignore the two largest differences and add up the others, so this combination takes $2+3+4 = 9$ turns to open. But, since the two largest differences didn’t even enter into the calculation, they could have been <i>any</i> pair of numbers that are both at least $4$ and sum to $11$. In particular they could have been $5$ and $6$ so that the differences were <code>23456</code>. In this sense the combinations <code>2593</code> and <code>2594</code> are equivalent. We shall denote this equivalence class by the sequence (2,3,4), which we’ll call a <i>lock sequence</i> for the $(n, k)$-lock. Notice that the number of turns needed to open the lock is the sum of the terms of the lock sequence.</p>
<p>Now we’re going to characterise the lock sequences. Let $d_1, d_2, \dots, d_m$ be a nondecreasing sequence of natural numbers less than $k$ having length $m\leq n$; this is a lock sequence for the $(n,k)$-lock if these two inequalities hold:</p>
<p>$\sum_{i=1}^{m}d_i+(n+1-m)(k-1)\geq (n+1-m)k$</p>
<p>$\sum_{i=1}^{m}d_i + (n+1-m)d_m\leq(n+1-m)k$</p>
<p>They can be simplified to</p>
<p>$n+1-m\leq\sum_{i=1}^{m}d_i\leq(n+1-m)(k-d_m)$</p>
<p>The first inequality is a bit annoying, so let's get rid of it by making one last identification: we’ll identify lock sequences that differ only by leading zeros, and assume a canonical form that has no leading zeros. If the first inequality fails, we can force it to hold by adding leading zeros, thus increasing $m$. So now we’re left with</p>
<p>$\sum_{i=1}^{m}d_i\leq(n+1-m)(k-d_m)$</p>
<p>I like to imagine this condition as meaning, “Is there room in the attic for all the boxes?”. Maybe that will make more sense if I draw a picture:
<img src="https://bosker.files.wordpress.com/2014/02/boxes-and-attic.png" alt="Can we fit the boxes into the attic?"></p>
<p>This picture depicts the lock sequence $(1,1,2,2,2,2,3,3)$ as an arrangement of 16 boxes, and an “attic” of area $(n+1-m)(k-d_m)$, all within an $(n+1)\times k$ rectangle.</p>
<p>Now let’s flip it over, like conjugating a Young diagram, and move the attic back to the top:</p>
<p><img src="https://bosker.files.wordpress.com/2014/02/boxes-and-attic-flipped.png" alt="Flipped"></p>
<p>We still have the same arrangement of boxes – in particular the value of $\sum_{i=1}^{m}d_i$ remains the same – and the attic is the same size. So the conjugate sequence – $(2,6,8)$ in this example – is a valid sequence for the $(k-1, n+1)$-lock provided only the original was a valid sequence for the $(n, k)$-lock.</p>
<p>So, we’ve shown that every lock sequence for the $(n,k)$-lock can be transformed into a lock sequence for the $(k-1,n+1)$-lock that has the same sum. It follows that it takes at least as many moves, in general, to open a $(k-1,n+1)$-lock as an $(n,k)$-lock. Since conjugation is invertible, this works in both directions, and we may conclude that $m(n,k) = m(k-1, n+1)$.</p>
<hr />
<p>Another way of looking at it is to note that the above implies</p>
<p>$m(n,k) = \max\{\min\{ac,bd\}\ |\ a+b=n+1,c+d=k\mbox{ for }a,b,c,d\in\mathbb{N}\}$</p>
<p>which is symmetrical in $n+1$ and $k$. This expression also suggests another way to compute $m(n,k)$.</p>
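<p>The displayed expression is easy to evaluate by brute force. A quick Python sketch, which also checks the symmetry $m(n,k) = m(k-1,n+1)$ on small cases (allowing $a,b,c,d = 0$ is harmless, since a zero factor can never attain the max):</p>

```python
def m(n, k):
    # max over a + b = n+1, c + d = k (natural numbers) of min(a*c, b*d)
    return max(min(a * c, (n + 1 - a) * (k - c))
               for a in range(n + 2) for c in range(k + 1))

assert m(4, 10) == 12        # value the formula gives for the (4, 10)-lock
for n in range(1, 8):
    for k in range(2, 9):
        assert m(n, k) == m(k - 1, n + 1)
```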
| <p>First let's consider a much simpler situation. If, instead of moving any group of adjacent dials, you can only move <em>one</em> dial, the maximum number of turns is trivial -- it's equal to $kn/2$. Note that this expression displays a sort of symmetry very similar to what you're seeing. To be jargony, the state space has a kind of "volume" $kn$ -- $n$ dials, each contributing up to $k$ squares of travel -- which is a commutative expression, and we can only move through one square of state space at a time.</p>
<p>Now, if we look at the real problem, the state space is still of volume $kn$, but we can move through $k$ squares at a time, with some restrictions. It's easier to analyze if we can only move through one square, so we can look at the space of differences: 3879 becomes 592 and 23856 becomes 1571. In the space of differences we can only move two differences at a time - one up, one down, so 1571 might become 0572 -- but more importantly we have a volume $(k-1) \times n$ and we move a constant amount, so we expect to see a path length of $\Omega((k-1)n)$. </p>
<p>Now, the expression $m(n, k) = \Omega((k-1)n)$ exhibits the observed symmetry! It's just commutation -- if we take $m(k-1, n+1)$ we obviously have the same "volume" of "difference space". There's some intuition here but we still haven't derived the shortest path.</p>
<p>I'll borrow your expression $t = \sup_x(xk - xq - \min(r,x))$ for $xk = q(n+1) + r$ and note that the upper bound and lower bound are achieved precisely where $n+1 | xk$ and $n+1 | x(k-1)$, and that they are both proportional to $x(n+1-x)$, so have a shared maximum $x = (n+1)/2$. This gives us the diophantine equation $q(n+1) = xk$ and if we substitute $x \approx (n+1)/2$ we have $q \approx k/2$. </p>
<p>If these are both integers -- if $n$ is odd and $k$ is even -- we have $t(n, k) = m(n, k) = k(n+1)/4$. So that's a bit simpler already, and it follows the pattern which is nice.</p>
<p>However, the equation does not have a satisfactory solution if $k$ is odd or $n$ is even, so represents a sufficient but not necessary condition for a maximum. We might try to approximately solve the equation by choosing $x = (n+1)/2$, $q = k/2$, and finding a nearby lattice point. For $n = 200$, $k = 9$ we can choose $x = (n+1)/2 - (n+1)/(2k) = 100.5 - 11.2 = 89.3$, which is close to the true maximum $x = 89$ and suggestive of a more general pattern. I've spent an hour on this already though and should get back to work.</p>
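<p>Taking the quoted expression at face value, a brute-force Python check (my own sketch) confirms that for $n=200$, $k=9$ the maximum is attained at $x=89$, and that the $n$ odd, $k$ even case hits $k(n+1)/4$ exactly:</p>

```python
def t_term(n, k, x):
    # one term of sup_x (xk - xq - min(r, x)), where xk = q(n+1) + r
    q, r = divmod(x * k, n + 1)
    return x * k - x * q - min(r, x)

n, k = 200, 9
values = [t_term(n, k, x) for x in range(1, n + 1)]
assert max(values) == t_term(n, k, 89) == 445

# n odd, k even: the maximum should be k(n+1)/4; e.g. (n, k) = (5, 4) gives 6
assert max(t_term(5, 4, x) for x in range(1, 6)) == 6
```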
<p>EDIT: I guess the takeaway is that the Diophantine equation $q(n+1) = xk$ is the same if we take $n \rightarrow k-1$, $k \rightarrow n+1$ to get $qk = x(n+1)$. Approximate solutions to this equation correspond to solutions of the general problem.</p>
|
linear-algebra | <p>Suppose <span class="math-container">$A=(a_{ij})$</span> is an <span class="math-container">$n×n$</span> matrix defined by <span class="math-container">$a_{ij}=\sqrt{i^2+j^2}$</span>. I have tried to check the sign of its determinant with Matlab: I find that the determinant is positive when n is odd and negative when n is even. How can one prove this?</p>
| <p>Recall that <span class="math-container">$e^{-|x|}=\int_0^\infty e^{-sx^2}\,d\mu(s)$</span> for some non-negative probability measure <span class="math-container">$\mu$</span> on <span class="math-container">$[0,+\infty)$</span> (Bernstein, totally monotone functions, etc.). Thus <span class="math-container">$e^{-t|x|}=\int_0^\infty e^{-st^2x^2}\,d\mu(s)$</span> for <span class="math-container">$t>0$</span>. In particular, since <span class="math-container">$(e^{-st^2(i^2+j^2)})_{i,j}$</span> are rank-one non-negative definite matrices, their mixture <span class="math-container">$(e^{-t\sqrt{i^2+j^2}})_{i,j}$</span> is a non-negative definite matrix for all <span class="math-container">$t>0$</span>. Hence, considering the first two terms in the Taylor expansion as <span class="math-container">$t\to 0+$</span>, we conclude that <span class="math-container">$-A$</span> is non-negative definite on the <span class="math-container">$n-1$</span>-dimensional subspace <span class="math-container">$\sum_i x_i=0$</span>, so the signature of <span class="math-container">$A$</span> is <span class="math-container">$1,n-1$</span>. To finish, we just need to exclude the zero eigenvalue, i.e., to show that the columns are linearly independent, which I leave to someone else (i.e., up to this point we have shown that the determinant has the conjectured sign or is <span class="math-container">$0$</span>).</p>
| <p>What follows is just an explication of fedja's beautiful ideas, with the minor gap of non-zero-ness filled in. There seemed like enough interest to warrant sharing it. Along the way I decided to make it very explicit to make it more accessible, since it's so pretty.</p>
<hr />
<p>Let
<span class="math-container">$$A = \left(\sqrt{i^2+j^2}\right)_{1 \leq i, j \leq n}.$$</span></p>
<p><strong>Theorem:</strong> <span class="math-container">$(-1)^{n-1} \det(A) > 0$</span>.</p>
<p><strong>Proof:</strong> Since <span class="math-container">$A$</span> is symmetric, it has an orthogonal basis of eigenvectors, and its determinant is the product of its eigenvalues. We'll show there are <span class="math-container">$n-1$</span> negative eigenvalues and <span class="math-container">$1$</span> positive eigenvalue, i.e. <span class="math-container">$A$</span> is non-degenerate with <a href="https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia" rel="noreferrer">signature</a> <span class="math-container">$(1, n-1)$</span>.</p>
<p>Let <span class="math-container">$x_0 := (1, \ldots, 1)^\top$</span>. Considering the quadratic form corresponding to <span class="math-container">$A$</span>,
<span class="math-container">$$x_0^\top A x_0 = \sum_{1 \leq i, j \leq n} \sqrt{i^2 + j^2} > 0.$$</span>
Thus there must be at least <span class="math-container">$1$</span> positive eigenvalue <span class="math-container">$\lambda_+>0$</span>. On the other hand, we have the following.</p>
<p><strong>Claim:</strong> If <span class="math-container">$x \cdot x_0 = 0$</span> and <span class="math-container">$x \neq 0$</span>, then <span class="math-container">$x^\top A x < 0$</span>.</p>
<p>Assume the claim for the moment. If <span class="math-container">$A$</span> had another eigenvalue <span class="math-container">$\lambda \geq 0$</span>, then the <span class="math-container">$2$</span>-dimensional subspace spanned by eigenvectors for <span class="math-container">$\lambda_+$</span> and <span class="math-container">$\lambda$</span> would intersect the <span class="math-container">$(n-1)$</span>-dimensional subspace <span class="math-container">$\{x : x \cdot x_0 = 0\}$</span> non-trivially, but at a non-zero point <span class="math-container">$y$</span> of that intersection we would have both <span class="math-container">$y^\top A y \geq 0$</span> and <span class="math-container">$y^\top A y < 0$</span>, a contradiction. So, the theorem follows from the claim. For readability, we break the argument into two pieces.</p>
<p><strong>Subclaim:</strong> If <span class="math-container">$x \cdot x_0 = 0$</span> and <span class="math-container">$x \neq 0$</span>, then <span class="math-container">$x^\top A x \leq 0$</span>.</p>
<p><strong>Proof of subclaim:</strong> We first introduce
<span class="math-container">$$B(t) := \left(e^{-t\sqrt{i^2+j^2}}\right)_{1 \leq i,j \leq n}.$$</span>
Intuitively, <span class="math-container">$A$</span> is the linear coefficient of the Taylor expansion of <span class="math-container">$B$</span> around <span class="math-container">$t=0$</span>. More precisely, working coefficient-wise,
<span class="math-container">$$\lim_{t \to 0} \frac{B_0 - B(t)}{t} = A$$</span>
where <span class="math-container">$B_0 := B(0) = (1)_{1 \leq i, j \leq n}$</span> is the matrix of all <span class="math-container">$1$</span>'s.</p>
<p>Since <span class="math-container">$x \cdot x_0 = 0$</span>, we have <span class="math-container">$B_0 x = 0$</span>. Thus
<span class="math-container">$$\tag{1}\label{1}x^\top A x = \lim_{t \to 0} x^\top \frac{B_0 - B(t)}{t} x
= -\lim_{t \to 0} \frac{x^\top B(t)\,x}{t}.$$</span></p>
<p>The key insight is to express the quadratic form <span class="math-container">$x^\top B(t) x$</span> in a manifestly positive way. For that, it is <a href="https://math.stackexchange.com/questions/347933/compute-the-inverse-laplace-transform-of-e-sqrtz">a fact</a><sup>1</sup> that for all <span class="math-container">$z \geq 0$</span>,
<span class="math-container">$$e^{-\sqrt{z}} = \frac{1}{2\sqrt{\pi}} \int_0^\infty e^{-zs} s^{-3/2} \exp\left(-\frac{1}{4s}\right)\,ds.$$</span></p>
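<p>This identity is easy to sanity-check numerically. Here is a small self-contained sketch (trapezoid rule in $u=\log s$, since the integrand decays fast at both ends; the cutoffs, step count, and tolerance are my own choices):</p>

```python
import math

def bernstein_rhs(z, lo=1e-6, hi=80.0, steps=20000):
    # (1 / (2 sqrt(pi))) * int_0^inf e^{-z s} s^{-3/2} e^{-1/(4s)} ds,
    # via the trapezoid rule in u = log s (so ds = s du)
    a, b = math.log(lo), math.log(hi)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        s = math.exp(a + i * h)
        f = math.exp(-z * s - 1.0 / (4.0 * s)) / math.sqrt(s)  # includes ds = s du
        total += f if 0 < i < steps else f / 2.0
    return total * h / (2.0 * math.sqrt(math.pi))

for z in (0.25, 1.0, 4.0):
    assert abs(bernstein_rhs(z) - math.exp(-math.sqrt(z))) < 1e-5
```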
<p>Thus, for all <span class="math-container">$t \geq 0$</span>, we have the following entry-wise equality:
<span class="math-container">$$B(t) = \left(e^{-t\sqrt{i^2+j^2}}\right)_{1 \leq i,j \leq n} = \int_0^\infty \left(e^{-t^2(i^2+j^2)s}\right)_{1 \leq i,j \leq n} \, g(s)\,ds$$</span>
where
<span class="math-container">$$g(s) := \frac{1}{2\sqrt{\pi}} s^{-3/2} \exp\left(-\frac{1}{4s}\right) > 0\quad\text{for all }s > 0.$$</span>
The integrand matrices can be decomposed as an outer product, namely
<span class="math-container">$$\left(e^{-t^2(i^2+j^2)s}\right)_{1 \leq i,j \leq n} = \left(e^{-t^2 i^2 s} e^{-t^2 j^2 s}\right)_{1 \leq i,j \leq n} = u(s, t)u(s, t)^\top$$</span>
where
<span class="math-container">$$u(s, t) := (e^{-t^2 i^2 s})_{1 \leq i \leq n}$$</span>
is a column vector. Thus,
<span class="math-container">$$\begin{align*}
x^\top B(t)\,x
&= x^\top\left(\int_0^\infty u(s, t) u(s, t)^\top \, g(s)\,ds\right)x \\
&= \int_0^\infty (u(s, t)^\top x)^\top (u(s, t)^\top x)\,g(s)\,ds \\
&= \int_0^\infty (u(s, t) \cdot x)^2\,g(s)\,ds.
\end{align*}$$</span>
Hence \eqref{1} becomes
<span class="math-container">$$\tag{2}\label{2}x^\top A\,x = -\lim_{t \to 0^+} \int_0^\infty \frac{(u(s, t) \cdot x)^2}{t} g(s)\,ds.$$</span></p>
<p>It is now clear that <span class="math-container">$x^\top A\,x \leq 0$</span> since <span class="math-container">$g(s) \geq 0$</span>, finishing the subclaim.</p>
<p><strong>Proof of claim:</strong> It will take a little more work to show that <span class="math-container">$x^\top A\,x < 0$</span> is strict. Let <span class="math-container">$t = 1/\alpha$</span> for <span class="math-container">$\alpha > 0$</span>. Apply the substitution <span class="math-container">$s = \alpha^2 r$</span> to the integral in \eqref{2} to get
<span class="math-container">$$\begin{align*}
\int_0^\infty \frac{(u(s, t) \cdot x)^2}{t} g(s)\,ds
&\geq \int_{\alpha^2/2}^{\alpha^2} \frac{(u(s, t) \cdot x)^2}{t} g(s)\,ds \\
&= \int_{1/2}^1 (u(\alpha^2 r, 1/\alpha) \cdot x)^2 \alpha g(\alpha^2 r)\,\alpha^2\,dr \\
&= \int_{1/2}^1 \alpha^3 (u(r, 1) \cdot x)^2 \frac{1}{2\sqrt{\pi}} \alpha^{-3} r^{-3/2} \exp\left(-\frac{1}{4\alpha^2 r}\right)\,dr \\
&= \frac{1}{2\sqrt{\pi}} \int_{1/2}^1 (u(r, 1) \cdot x)^2 r^{-3/2} \exp\left(-\frac{1}{4\alpha^2 r}\right)\,dr.
\end{align*}$$</span></p>
<p>Thus \eqref{2} becomes
<span class="math-container">$$\begin{align*}
x^\top A\,x
&\leq -\lim_{\alpha \to \infty} \frac{1}{2\sqrt{\pi}} \int_{1/2}^1 (u(r, 1) \cdot x)^2 r^{-3/2} \exp\left(-\frac{1}{4\alpha^2 r}\right)\,dr \\
&= -\frac{1}{2\sqrt{\pi}} \int_{1/2}^1 (u(r, 1) \cdot x)^2 r^{-3/2}\,dr.
\end{align*}$$</span></p>
<p>Hence it suffices to show that <span class="math-container">$u(r, 1) \cdot x \neq 0$</span> for some <span class="math-container">$1/2 \leq r \leq 1$</span>. Indeed, <span class="math-container">$\{u(r_j, 1)\}$</span> is a basis for any <span class="math-container">$r_1 < \cdots < r_n$</span>. One way to see this is to note that the matrix is <span class="math-container">$\left(e^{-i^2 r_j}\right)_{1 \leq i, j \leq n} = \left(q_i^{r_j}\right)_{1 \leq i, j \leq n}$</span> where <span class="math-container">$q_i := e^{-i^2}$</span>. Since <span class="math-container">$0 < q_n < \cdots < q_1$</span>, this matrix is an (unsigned) exponential Vandermonde matrix in the terminology of <a href="https://core.ac.uk/download/pdf/82693004.pdf" rel="noreferrer">Robbin--Salamon</a>, and therefore has positive determinant. Hence <span class="math-container">$u(r, 1) \cdot x = 0$</span> for all <span class="math-container">$1/2 \leq r \leq 1$</span> implies <span class="math-container">$x=0$</span>, contrary to our assumption. This completes the claim and proof. <span class="math-container">$\Box$</span></p>
<hr />
<p><sup>1</sup>As fedja points out, existence of an appropriate expression follows easily from <a href="https://djalil.chafai.net/blog/2013/03/23/the-bernstein-theorem-on-completely-monotone-functions/" rel="noreferrer">Bernstein's theorem on completely monotone functions</a>. As you'd expect, this explicit formula can be done by residue calculus.</p>
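<p>For what it's worth, the theorem is easy to corroborate numerically for small $n$ (a sanity check only, not part of the proof; assumes NumPy is available):</p>

```python
import numpy as np

for n in range(2, 7):
    i = np.arange(1, n + 1, dtype=float)
    A = np.sqrt(i[:, None] ** 2 + i[None, :] ** 2)   # A_ij = sqrt(i^2 + j^2)
    eig = np.linalg.eigvalsh(A)
    # signature (1, n-1): one positive eigenvalue, the rest negative
    assert (eig > 0).sum() == 1 and (eig < 0).sum() == n - 1
    # hence (-1)^(n-1) det(A) > 0
    assert (-1) ** (n - 1) * np.linalg.det(A) > 0
```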
|
probability | <p>On a disk, choose <span class="math-container">$n$</span> uniformly random points. Then draw the smallest circle enclosing those points. (<a href="https://www.personal.kent.edu/%7Ermuhamma/Compgeometry/MyCG/CG-Applets/Center/centercli.htm" rel="noreferrer">Here</a> are some algorithms for doing so.)</p>
<p>The circle may or may not lie completely on the disk. For example, with <span class="math-container">$n=7$</span>, here are examples of both cases.</p>
<p><a href="https://i.sstatic.net/PReoa.png" rel="noreferrer"><img src="https://i.sstatic.net/PReoa.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xSoOX.png" rel="noreferrer"><img src="https://i.sstatic.net/xSoOX.png" alt="enter image description here" /></a></p>
<blockquote>
<p>What is <span class="math-container">$\lim\limits_{n\to\infty}\{\text{Probability that the circle lies completely on the disk}\}$</span>?</p>
</blockquote>
<p>Is the limiting probability <span class="math-container">$0$</span>? Or <span class="math-container">$1$</span>? Or something in between? My geometrical intuition fails to tell me anything.</p>
<h2>The case <span class="math-container">$n=2$</span></h2>
<p>I have only been able to find that, when <span class="math-container">$n=2$</span>, the probability that the smallest enclosing circle lies completely on the disk, is <span class="math-container">$2/3$</span>.</p>
<p>Without loss of generality, assume that the perimeter of the disk is <span class="math-container">$x^2+y^2=1$</span>, and the two points are <span class="math-container">$(x,y)$</span> and <span class="math-container">$(0,\sqrt t)$</span> where <span class="math-container">$t$</span> is <a href="https://mathworld.wolfram.com/DiskPointPicking.html" rel="noreferrer">uniformly distributed</a> in <span class="math-container">$[0,1]$</span>.</p>
<p>The smallest enclosing circle has centre <span class="math-container">$C\left(\frac{x}{2}, \frac{y+\sqrt t}{2}\right)$</span> and radius <span class="math-container">$r=\frac12\sqrt{x^2+(y-\sqrt t)^2}$</span>. If the smallest enclosing circle lies completely on the disk, then <span class="math-container">$C$</span> lies within <span class="math-container">$1-r$</span> of the origin. That is,</p>
<p><span class="math-container">$$\sqrt{\left(\frac{x}{2}\right)^2+\left(\frac{y+\sqrt t}{2}\right)^2}\le 1-\frac12\sqrt{x^2+(y-\sqrt t)^2}$$</span></p>
<p>which is equivalent to</p>
<p><span class="math-container">$$\frac{x^2}{1-t}+y^2\le1$$</span></p>
<p>The <a href="https://byjus.com/maths/area-of-ellipse/" rel="noreferrer">area</a> of this region is <span class="math-container">$\pi\sqrt{1-t}$</span>, and the area of the disk is <span class="math-container">$\pi$</span>, so the probability that the smallest enclosing circle lies completely on the disk is <span class="math-container">$\sqrt{1-t}$</span>.</p>
<p>Integrating from <span class="math-container">$t=0$</span> to <span class="math-container">$t=1$</span>, the probability is <span class="math-container">$\int_0^1 \sqrt{1-t}dt=2/3$</span>.</p>
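<p>The reduction to the ellipse can be spot-checked numerically. A quick Python sketch (random sampling; the floating-point boundary guard is my own addition):</p>

```python
import math, random

def circle_fits(x, y, t):
    # smallest circle through (x, y) and (0, sqrt(t)) stays inside the unit disk
    return (math.sqrt((x / 2) ** 2 + ((y + math.sqrt(t)) / 2) ** 2)
            <= 1 - 0.5 * math.sqrt(x ** 2 + (y - math.sqrt(t)) ** 2))

def in_ellipse(x, y, t):
    return x ** 2 / (1 - t) + y ** 2 <= 1

random.seed(1)
for _ in range(10000):
    t = random.random()
    # uniform random point (x, y) on the unit disk
    ang, rad = 2 * math.pi * random.random(), math.sqrt(random.random())
    x, y = rad * math.cos(ang), rad * math.sin(ang)
    # the two conditions agree, except possibly within rounding of the boundary
    assert circle_fits(x, y, t) == in_ellipse(x, y, t) \
        or abs(x ** 2 / (1 - t) + y ** 2 - 1) < 1e-9
```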
<h2>Edit</h2>
<p>From the comments, @Varun Vejalla has run trials that suggest that, for small values of <span class="math-container">$n$</span>, the probability (that the enclosing circle lies completely on the disk) is <span class="math-container">$\frac{n}{2n-1}$</span>, and that the limiting probability is <span class="math-container">$\frac12$</span>. There should be a way to prove these results.</p>
<h2>Edit2</h2>
<p>I seek to generalize this question <a href="https://mathoverflow.net/q/458571/494920">here</a>.</p>
| <p>First, let me state two lemmas that demand tedious computations. Let <span class="math-container">$B(x, r)$</span> denote the circle centered at <span class="math-container">$x$</span> with radius <span class="math-container">$r$</span>.</p>
<p><strong>Lemma 1</strong>: Let <span class="math-container">$B(x,r)$</span> be a circle contained in <span class="math-container">$B(0, 1)$</span>. Suppose we sample two points <span class="math-container">$p_1, p_2$</span> inside <span class="math-container">$B(0, 1)$</span>, and <span class="math-container">$B(x', r')$</span> is the circle with <span class="math-container">$p_1p_2$</span> as diameter. Then we have
<span class="math-container">$$\mathbb{P}(x' \in x + dx, r' \in r + dr) = \frac{8}{\pi} r dxdr.$$</span></p>
<p><strong>Lemma 2</strong>: Let <span class="math-container">$B(x,r)$</span> be a circle contained in <span class="math-container">$B(0, 1)$</span>. Suppose we sample three points <span class="math-container">$p_1, p_2, p_3$</span> inside <span class="math-container">$B(0, 1)$</span>, and <span class="math-container">$B(x', r')$</span> is the circumcircle of <span class="math-container">$p_1p_2p_3$</span>. Then we have
<span class="math-container">$$\mathbb{P}(x' \in x + dx, r' \in r + dr) = \frac{24}{\pi} r^3 dxdr.$$</span>
Furthermore, conditioned on this happening, the probability that <span class="math-container">$p_1p_2p_3$</span> is acute is exactly <span class="math-container">$1/2$</span>.</p>
<p>Given these two lemmas, let's see how to compute the probability in question. Let <span class="math-container">$p_1, \cdots, p_n$</span> be the <span class="math-container">$n$</span> points we selected, and <span class="math-container">$C$</span> is the smallest circle containing them. For each <span class="math-container">$i < j < k$</span>, let <span class="math-container">$C_{ijk}$</span> denote the circumcircle of three points <span class="math-container">$p_i, p_j, p_k$</span>. For each <span class="math-container">$i < j$</span>, let <span class="math-container">$D_{ij}$</span> denote the circle with diameter <span class="math-container">$p_i, p_j$</span>. Let <span class="math-container">$E$</span> denote the event that <span class="math-container">$C$</span> is contained in <span class="math-container">$B(0, 1)$</span>.</p>
<p>First, a geometric statement.</p>
<p><strong>Claim</strong>: Suppose no four of <span class="math-container">$p_i$</span> are concyclic, which happens with probability <span class="math-container">$1$</span>. Then exactly one of the following scenarios happen.</p>
<ol>
<li><p>There exists unique <span class="math-container">$1 \leq i < j < k \leq n$</span> such that <span class="math-container">$p_i, p_j, p_k$</span> form an acute triangle and <span class="math-container">$C_{ijk}$</span> contains all the points <span class="math-container">$p_1, \cdots, p_n$</span>. In this case, <span class="math-container">$C = C_{ijk}$</span>.</p>
</li>
<li><p>There exists unique <span class="math-container">$1 \leq i < j \leq n$</span> such that <span class="math-container">$D_{ij}$</span> contains all the points <span class="math-container">$p_1, \cdots, p_n$</span>. In this case, <span class="math-container">$C = D_{ij}$</span>.</p>
</li>
</ol>
<p><strong>Proof</strong>: This is not hard to show, and is listed <a href="https://en.wikipedia.org/wiki/Smallest-circle_problem" rel="noreferrer">on wikipedia</a>.</p>
<p>Let <span class="math-container">$E_1$</span> be the event that <span class="math-container">$E$</span> happens and we are in scenario <span class="math-container">$1$</span>. Let <span class="math-container">$E_2$</span> be the event that <span class="math-container">$E$</span> happens and we are in scenario <span class="math-container">$2$</span>.</p>
<p>We first compute the probability that <span class="math-container">$E_1$</span> happens. It is
<span class="math-container">$$\mathbb{P}(E_1) = \sum_{1 \leq i < j < k \leq n} \mathbb{P}(\forall \ell \neq i,j,k, p_\ell \in C_{ijk} , C_{ijk} \subset B(0, 1), p_ip_jp_k \text{ is acute}).$$</span>
Conditioned on <span class="math-container">$C_{ijk} = B(x, r)$</span>, Lemma 2 shows that this happens with probability <span class="math-container">$\frac{1}{2}r^{2(n - 3)} \mathbb{1}_{|x| + r \leq 1}$</span>. Lemma 2 also tells us the distribution of <span class="math-container">$(x, r)$</span>. Integrating over <span class="math-container">$(x, r)$</span>, we conclude that
<span class="math-container">$$\mathbb{P}(\forall \ell \neq i,j,k, p_\ell \in C_{ijk} , C_{ijk} \subset B(0, 1), p_ip_jp_k \text{ is acute}) = \int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3 dr dx.$$</span>
Thus we have
<span class="math-container">$$\mathbb{P}(E_1) = \binom{n}{3}\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3 dr dx.$$</span>
We can first integrate the <span class="math-container">$x$</span>-variable to get
<span class="math-container">$$\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 3)} \cdot \frac{12}{\pi} r^3dr dx = 12 \int_0^1 r^{2n - 3}(1 - r)^2 dr.$$</span>
Note that
<span class="math-container">$$\int_0^1 r^{2n - 3}(1 - r)^2 dr = \frac{(2n - 3)! * 2!}{(2n)!} = \frac{2}{2n * (2n - 1) * (2n - 2)}.$$</span>
So we conclude that
<span class="math-container">$$\mathbb{P}(E_1) = \frac{n - 2}{2n - 1}.$$</span>
We next compute the probability that <span class="math-container">$E_2$</span> happens. It is
<span class="math-container">$$\mathbb{P}(E_2) = \sum_{1 \leq i < j \leq n} \mathbb{P}(\forall \ell \neq i,j, p_\ell \in D_{ij} , D_{ij} \subset B(0, 1)).$$</span>
Conditioned on <span class="math-container">$D_{ij} = B(x, r)$</span>, this happens with probability <span class="math-container">$r^{2(n - 2)} \mathbb{1}_{|x| + r \leq 1}$</span>. Lemma 1 tells us the distribution of <span class="math-container">$(x, r)$</span>. So we conclude that
<span class="math-container">$$\mathbb{P}(\forall \ell \neq i,j, p_\ell \in D_{ij} , D_{ij} \subset B(0, 1)) = \int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx.$$</span>
So
<span class="math-container">$$\mathbb{P}(E_2) = \binom{n}{2}\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx.$$</span>
We compute that
<span class="math-container">$$\int_{B(0, 1)}\int_0^{1 - |x|} r^{2(n - 2)} \cdot \frac{8}{\pi} r dr dx = 8\int_0^1 r^{2n - 3} (1 - r)^2 dr = 8 \frac{(2n - 3)! 2!}{(2n)!}.$$</span>
So we conclude that
<span class="math-container">$$\mathbb{P}(E_2) = 8 \binom{n}{2} \frac{(2n - 3)! 2!}{(2n)!} = \frac{2}{2n - 1}.$$</span>
Finally, we get
<span class="math-container">$$\mathbb{P}(E) = \mathbb{P}(E_1) + \mathbb{P}(E_2) = \boxed{\frac{n}{2n - 1}}.$$</span>
The proofs of the two lemmas are really not very interesting. The main tricks are some coordinate changes. Let's look at Lemma 1 for example. The trick is to make the coordinate change <span class="math-container">$p_1 = x + r (\cos \theta, \sin \theta), p_2 = x + r (-\cos \theta, -\sin \theta)$</span>. One can compute the Jacobian of the coordinate change as something like
<span class="math-container">$$J = \begin{bmatrix} 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
\cos \theta & \sin \theta & -\cos \theta & -\sin \theta \\
-r\sin \theta & r\cos \theta & r\sin \theta & -r\cos \theta \\
\end{bmatrix}.$$</span>
And we can compute that <span class="math-container">$|\det J| = 4r$</span>. As <span class="math-container">$p_1, p_2$</span> have density function <span class="math-container">$\frac{1}{\pi^2} \mathbf{1}_{p_1, p_2 \in B(0, 1)}$</span>, the new coordinate system <span class="math-container">$(x, r, \theta)$</span> has density function
<span class="math-container">$$\frac{4r}{\pi^2} \mathbf{1}_{p_1, p_2 \in B(0, 1)}$$</span>
The second term can be dropped as it is always <span class="math-container">$1$</span> in a neighborhood of <span class="math-container">$(x, r)$</span>. To get the density of <span class="math-container">$(x, r)$</span> you can integrate in the <span class="math-container">$\theta$</span> variable to conclude that the density of <span class="math-container">$(x, r)$</span> is
<span class="math-container">$$\frac{8r}{\pi}$$</span>
as desired.</p>
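<p>The determinant claim $|\det J| = 4r$ can also be verified numerically without any CAS, by central finite differences (the test points and step size below are arbitrary choices of mine):</p>

```python
import math

def coords(v):
    # (x1, x2, r, theta) -> (p1, p2) with p1 = x + r e(theta), p2 = x - r e(theta)
    x1, x2, r, th = v
    return [x1 + r * math.cos(th), x2 + r * math.sin(th),
            x1 - r * math.cos(th), x2 - r * math.sin(th)]

def det4(M):
    # 4x4 determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    d = 1.0
    for i in range(4):
        p = max(range(i, 4), key=lambda q: abs(M[q][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for q in range(i + 1, 4):
            f = M[q][i] / M[i][i]
            for j in range(i, 4):
                M[q][j] -= f * M[i][j]
    return d

def jacobian(v, h=1e-6):
    # central-difference Jacobian of coords at v
    J = [[0.0] * 4 for _ in range(4)]
    for j in range(4):
        vp, vm = list(v), list(v)
        vp[j] += h
        vm[j] -= h
        fp, fm = coords(vp), coords(vm)
        for i in range(4):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

v = [0.3, -0.1, 0.7, 1.2]                        # arbitrary point with r = 0.7
assert abs(abs(det4(jacobian(v))) - 4 * 0.7) < 1e-5
```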
<p>The proof of Lemma 2 is analogous, except you can use the more complicated coordinate change from <span class="math-container">$(p_1, p_2, p_3)$</span> to <span class="math-container">$(x, r, \theta_1, \theta_2, \theta_3)$</span>
<span class="math-container">$$p_1 = x + r (\cos \theta_1, \sin \theta_1), p_2 = x + r (\cos \theta_2, \sin \theta_2), p_3 = x + r (\cos \theta_3, \sin \theta_3).$$</span>
The Jacobian <span class="math-container">$J$</span> is now <span class="math-container">$6$</span> dimensional, and Mathematica tells me that its determinant is
<span class="math-container">$$|\det J| = r^3|\sin(\theta_1 - \theta_2) + \sin(\theta_2 - \theta_3) + \sin(\theta_3 - \theta_1)|.$$</span>
So we just need to integrate this in <span class="math-container">$\theta_{1,2,3}$</span>! Unfortunately, Mathematica failed to do this integration, but I imagine you can do this by hand and get the desired Lemma.</p>
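<p>As a sanity check on the final formula $\mathbb{P}(E) = \frac{n}{2n-1}$, here is a self-contained Monte Carlo sketch. It finds the smallest enclosing circle by brute force over pair-diameter circles and triple circumcircles, which is valid for the small $n$ used here; seeds and tolerances are my own choices (requires Python 3.8+ for <code>math.dist</code>):</p>

```python
import itertools, math, random

def smallest_enclosing_circle(pts):
    # for small point sets: the optimum is a pair-diameter circle or a circumcircle
    cands = []
    for p, q in itertools.combinations(pts, 2):
        cands.append((((p[0] + q[0]) / 2, (p[1] + q[1]) / 2), math.dist(p, q) / 2))
    for p, q, r in itertools.combinations(pts, 3):
        d = 2 * (p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1]))
        if abs(d) > 1e-12:          # skip (near-)collinear triples
            ux = ((p[0]**2 + p[1]**2) * (q[1] - r[1]) + (q[0]**2 + q[1]**2) * (r[1] - p[1])
                  + (r[0]**2 + r[1]**2) * (p[1] - q[1])) / d
            uy = ((p[0]**2 + p[1]**2) * (r[0] - q[0]) + (q[0]**2 + q[1]**2) * (p[0] - r[0])
                  + (r[0]**2 + r[1]**2) * (q[0] - p[0])) / d
            cands.append(((ux, uy), math.dist((ux, uy), p)))
    best = None
    for c, rad in cands:
        if all(math.dist(c, pt) <= rad + 1e-9 for pt in pts):
            if best is None or rad < best[1]:
                best = (c, rad)
    return best

def estimate(n, trials, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = []
        for _ in range(n):
            a, r = 2 * math.pi * rng.random(), math.sqrt(rng.random())
            pts.append((r * math.cos(a), r * math.sin(a)))
        (cx, cy), rad = smallest_enclosing_circle(pts)
        hits += math.hypot(cx, cy) + rad <= 1    # circle stays inside the disk
    return hits / trials

assert abs(estimate(2, 20000) - 2 / 3) < 0.02   # exact: 2/3
assert abs(estimate(3, 20000) - 3 / 5) < 0.02   # exact: n/(2n-1) = 3/5
```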
| <p>Intuitive answer:</p>
<p>Let <span class="math-container">$R_d$</span> be the radius of the disk and let <span class="math-container">$R_e$</span> be the radius of the enclosing circle of the random points on the disk and <span class="math-container">$R_p$</span> be the radius of a circle that passes through the outermost points such that all selected random points on the disk either lie on or within <span class="math-container">$R_p$</span>.</p>
<p>The answer to this question is highly dependent on the exact precise definitions of the terms and phrases used in the question.</p>
<p>Assumption 1:
Does the definition of "enclosing circle ...(of the points)... lies completely on the disk" include the case of the enclosing circle lying exactly on the perimeter of the disk? i.e. does it mean <span class="math-container">$R_e < R_d$</span> or <span class="math-container">$R_e\leq R_d$</span>? I will assume the latter.</p>
<p>Assumption 2:
Does the smallest enclosing circle of the points include the case of some of the enclosed points lying on the enclosing circle itself? i.e. does it mean <span class="math-container">$R_e > R_p$</span> or <span class="math-container">$R_e = R_p$</span>? I will assume the latter.</p>
<p>It is well known that a circle can be defined by a minimum of 3 non-collinear points. The question can now be boiled down to "If there are infinitely many points on the disk, what is the probability of at least 3 of the points being on the perimeter of the disk?"</p>
<p>Intuition says that if there are an infinite number of points that are either on or within the perimeter of the disk, then the probability of there being 3 points exactly on the perimeter of the disk is exactly unity. If there are at least 3 points exactly on the perimeter of the disk then the enclosing circle lies completely on the disk, so the answer to the OP question is:</p>
<p>"The probability that the smallest circle enclosing <span class="math-container">$n$</span> random points on a disk lies completely on the disk, as <span class="math-container">$n\to\infty$</span>, is 1."</p>
<p>If we define the meaning of "enclosing circle lies completely on the disk" to mean strictly <span class="math-container">$R_e < R_d$</span> then things get more complicated. Now the question boils down to "What is the probability of an infinite number of random points on the disk not having any points exactly on the perimeter of the disk?"</p>
<p>If any of the random points lie exactly on the perimeter of the disk, then the enclosing circle touches the perimeter of the disk and by the definitions of this alternative interpretation of the question, the enclosing circle does not lie entirely within the perimeter of the disk. The intuitive probability of placing an infinite number of random points on the disk without any of the points landing exactly on the perimeter of the disk is zero, so the answer to this alternative interpretation of the question is:</p>
<p>"The probability that the smallest circle enclosing <span class="math-container">$n$</span> random points on a disk lies completely on the disk, as <span class="math-container">$n\to\infty$</span>, is 0."</p>
|
linear-algebra | <blockquote>
<p>If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$. </p>
</blockquote>
<p>I do not understand anything more than the following.</p>
<ol>
<li>Elementary row operations.</li>
<li>Linear dependence.</li>
<li>Row reduced forms and their relations with the original matrix.</li>
</ol>
<p>If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?</p>
<p><strong>P.S.</strong>: Please avoid using the transpose and/or inverse of a matrix.</p>
| <p>Dilawar says in 2. that he knows linear dependence! So I will give a proof, similar to that of TheMachineCharmer, which uses linear independence.</p>
<p>Suppose each matrix is $n$ by $n$. We consider our matrices to all be acting on some $n$-dimensional vector space with a chosen basis (hence isomorphism between linear transformations and $n$ by $n$ matrices).</p>
<p>Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.</p>
<p>Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Thus, since $B$ has full range, the matrix $I-BA$ gives $0$ on all vectors. But this means that it must be the $0$ matrix, so $I=BA$.</p>
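<p>A tiny numeric illustration (not a proof): for a concrete $3\times 3$ matrix $A$ with a right inverse $B$, the product commutes, as the argument guarantees.</p>

```python
def matmul(A, B):
    # plain matrix product for square integer matrices
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[2, 1, 0], [1, 1, 0], [0, 0, 1]]
B = [[1, -1, 0], [-1, 2, 0], [0, 0, 1]]
assert matmul(A, B) == I3    # AB = I ...
assert matmul(B, A) == I3    # ... forces BA = I in finite dimensions
```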
| <p>We have the following general assertion:</p>
<p><strong>Lemma.</strong> <em>Let <span class="math-container">$A$</span> be a finite-dimensional <a href="https://en.wikipedia.org/wiki/Algebra_over_a_field" rel="noreferrer"><span class="math-container">$K$</span>-algebra</a>, and <span class="math-container">$a,b \in A$</span>. If <span class="math-container">$ab=1$</span>, then <span class="math-container">$ba=1$</span>.</em></p>
<p>For example, <span class="math-container">$A$</span> could be the algebra of <span class="math-container">$n \times n$</span> matrices over <span class="math-container">$K$</span>.</p>
<p><strong>Proof.</strong> The sequence of subspaces <span class="math-container">$\cdots \subseteq b^{k+1} A \subseteq b^k A \subseteq \cdots \subseteq A$</span> must be stationary, since <span class="math-container">$A$</span> is finite-dimensional. Thus there is some <span class="math-container">$k$</span> with <span class="math-container">$b^{k+1} A = b^k A$</span>. So there is some <span class="math-container">$c \in A$</span> such that <span class="math-container">$b^k = b^{k+1} c$</span>. Now multiply with <span class="math-container">$a^k$</span> on the left to get <span class="math-container">$1=bc$</span>. Then <span class="math-container">$ba=ba1 = babc=b1c=bc=1$</span>. <span class="math-container">$\square$</span></p>
<p>The proof also works in every left- or right-<a href="https://en.wikipedia.org/wiki/Artinian_ring" rel="noreferrer">Artinian ring</a> <span class="math-container">$A$</span>. In particular, the statement is true in every finite ring.</p>
<p>Remark that we need in an essential way some <strong>finiteness condition</strong>. There is no purely algebraic manipulation with <span class="math-container">$a,b$</span> that shows <span class="math-container">$ab = 1 \Rightarrow ba=1$</span>.</p>
<p>In fact, there is a <span class="math-container">$K$</span>-algebra with two elements <span class="math-container">$a,b$</span> such that <span class="math-container">$ab=1$</span>, but <span class="math-container">$ba \neq 1$</span>. Consider the left shift <span class="math-container">$a : K^{\mathbb{N}} \to K^{\mathbb{N}}$</span>, <span class="math-container">$a(x_0,x_1,\dotsc) := (x_1,x_2,\dotsc)$</span> and the right shift <span class="math-container">$b(x_0,x_1,\dotsc) := (0,x_0,x_1,\dotsc)$</span>. Then <span class="math-container">$a \circ b = \mathrm{id} \neq b \circ a$</span> holds in the <span class="math-container">$K$</span>-algebra <span class="math-container">$\mathrm{End}_K(K^{\mathbb{N}})$</span>.</p>
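The shift example can be played with concretely. Here is an illustrative sketch (representation and names are my own) that models sequences as functions on indices; the composite $a \circ b$ recovers the original sequence, while $b \circ a$ destroys its first entry.

```python
# Left and right shift on sequences N -> K, modeled as Python callables.
def left_shift(x):
    return lambda n: x(n + 1)

def right_shift(x):
    return lambda n: 0 if n == 0 else x(n - 1)

x = lambda n: n + 1                  # the sequence (1, 2, 3, ...)
ab = left_shift(right_shift(x))      # a(b(x)) recovers x
ba = right_shift(left_shift(x))      # b(a(x)) zeroes out the first entry

print([ab(n) for n in range(5)])     # [1, 2, 3, 4, 5], same as x
print([ba(n) for n in range(5)])     # [0, 2, 3, 4, 5], not x
```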
<p>See <a href="https://math.stackexchange.com/questions/298791">SE/298791</a> for a proof of <span class="math-container">$AB=1 \Rightarrow BA=1$</span> for square matrices over a commutative ring.</p>
|
logic | <p>For a topological space <span class="math-container">$X$</span>, let <span class="math-container">$C(X)$</span> denote the ring of continuous functions <span class="math-container">$X\to\mathbb{R}$</span>, equipped with pointwise addition and multiplication. This question is related to <a href="https://math.stackexchange.com/questions/4211730/lowenheim-skolem-number-for-satisfaction-via-continuous-function-rings/4216349">this</a> one of Noah Schweber's; in particular, the question arises from trying to understand how much topological data about <span class="math-container">$X$</span> is encoded in the first-order theory of the ring <span class="math-container">$C(X)$</span>. For example, <span class="math-container">$C(X)$</span> has a non-trivial idempotent if and only if <span class="math-container">$X$</span> is disconnected. A natural question to ask is when the first-order theory of <span class="math-container">$C(X)$</span> can also detect whether <span class="math-container">$X$</span> is compact, and this is the context in which the question below arises. For further elaboration and additional context, see Noah's post and the answers and comments below it.</p>
<p>Let <span class="math-container">$[0,1]$</span> and <span class="math-container">$(0,1)$</span> be the closed and open unit intervals in <span class="math-container">$\mathbb{R}$</span>. Note that there is an injective ring morphism <span class="math-container">$\iota:C([0,1])\hookrightarrow C((0,1))$</span> that takes a map <span class="math-container">$\alpha:[0,1]\to\mathbb{R}$</span> to its restriction <span class="math-container">$\alpha|_{(0,1)}$</span>.</p>
<blockquote>
<p><strong>Question:</strong> Is the map <span class="math-container">$\iota$</span> elementary? If the answer is no, do we nonetheless have an elementary equivalence <span class="math-container">$C([0,1])\equiv C((0,1))$</span>?</p>
</blockquote>
<p>I suspect a negative answer, but I don't see a path for showing this. A natural first idea is to somehow try to exploit that <span class="math-container">$\operatorname{im}\alpha\subseteq\mathbb{R}$</span> is bounded for every <span class="math-container">$\alpha\in C([0,1])$</span>. So, for example, one can define a formula <span class="math-container">$$u>v\equiv \exists w\big[u-v=w^2\big]\wedge\exists w\big[(u-v)w=1\big].$$</span> Then, for any space <span class="math-container">$X$</span> and any continuous <span class="math-container">$\alpha,\beta:X \to\mathbb{R}$</span>, we have <span class="math-container">$C(X)\models\alpha>\beta$</span> if and only if <span class="math-container">$\alpha(x)>\beta(x)$</span> for each <span class="math-container">$x\in X$</span>. Indeed, <span class="math-container">$C(X)\models \exists w[\alpha-\beta=w^2]$</span> if and only if <span class="math-container">$\alpha(x)\geqslant\beta(x)$</span> for each <span class="math-container">$x\in X$</span>, and <span class="math-container">$C(X)\models \exists w[(\alpha-\beta)w=1]$</span> if and only if <span class="math-container">$\alpha(x)\neq\beta(x)$</span> for each <span class="math-container">$x\in X$</span>.</p>
<p>With this in hand, it is fairly straightforward to come up with a first-order theory that is satisfiable in <span class="math-container">$C((0,1))$</span> but not in <span class="math-container">$C([0,1])$</span>, provided we are willing to add an additional constant symbol to our language. Indeed, let <span class="math-container">$a$</span> be a new constant symbol, and define a theory <span class="math-container">$T$</span> in the language of rings along with <span class="math-container">$a$</span> by taking <span class="math-container">$\neg(a<n)\in T$</span> for each <span class="math-container">$n\in\omega$</span>. Then realizing <span class="math-container">$a$</span> as any homeomorphism <span class="math-container">$(0,1)\to\mathbb{R}$</span> will make <span class="math-container">$C((0,1))$</span> into a model of <span class="math-container">$T$</span>, but no realization of <span class="math-container">$a$</span> can do the same for <span class="math-container">$C([0,1])$</span>, since any continuous image of <span class="math-container">$[0,1]$</span> in <span class="math-container">$\mathbb{R}$</span> is bounded.</p>
<p>However, this is of course not enough to show that <span class="math-container">$C([0,1])\not\equiv C((0,1))$</span> as rings, and I don't see an easy way of extending the argument. If there were some way to uniformly define the subring <span class="math-container">$\mathbb{R}$</span> in <span class="math-container">$C([0,1])$</span> and <span class="math-container">$C((0,1))$</span>, then we would be done: if <span class="math-container">$\theta(w)$</span> were such a definition, then <span class="math-container">$C([0,1])$</span> would be a model of the sentence <span class="math-container">$\forall v\exists w[\theta(w)\wedge (w>v)]$</span> and <span class="math-container">$C((0,1))$</span> would be a model of its negation. But I'm struggling to come up with such a formula <span class="math-container">$\theta$</span>, and I'm not even convinced it can be done. Any insight, either on this approach or on a different one, would be much appreciated.</p>
| <p>Here is an easy answer to your first question. Your map <span class="math-container">$\iota$</span> is not an elementary embedding. Note, for instance, that the function <span class="math-container">$f(x)=x$</span> is not a unit in <span class="math-container">$C([0,1])$</span>, but <span class="math-container">$\iota(f)$</span> is a unit. This is expressible in the first-order language of rings (with <span class="math-container">$f$</span> as a parameter) so <span class="math-container">$\iota$</span> is not an elementary embedding.</p>
<p>The second question also has a negative answer though it is rather more complicated. First, I claim that there is a first-order formula <span class="math-container">$\varphi(f)$</span> in the language of rings which says "<span class="math-container">$f$</span> vanishes at exactly one point" when interpreted in both <span class="math-container">$C([0,1])$</span> and <span class="math-container">$C((0,1))$</span>. Replacing <span class="math-container">$f$</span> with <span class="math-container">$f^2$</span>, we may assume <span class="math-container">$f\geq 0$</span> for this purpose. I claim the vanishing set of a function <span class="math-container">$f\geq 0$</span> is disconnected iff we can write <span class="math-container">$f=gh$</span> where <span class="math-container">$g,h\geq 0$</span>, neither <span class="math-container">$g$</span> nor <span class="math-container">$h$</span> is a unit, and <span class="math-container">$g+h$</span> is a unit. One direction is easy: if such <span class="math-container">$g$</span> and <span class="math-container">$h$</span> exist, then the vanishing set of <span class="math-container">$f$</span> is the disjoint union of two nonempty closed sets, namely the vanishing sets of <span class="math-container">$g$</span> and <span class="math-container">$h$</span>. Conversely, suppose the vanishing set of <span class="math-container">$f$</span> is disconnected, so there is some <span class="math-container">$c\in (0,1)$</span> such that <span class="math-container">$f(c)\neq 0$</span> but <span class="math-container">$f$</span> vanishes at points both below and above <span class="math-container">$c$</span>. 
Then you can take <span class="math-container">$g(t)=f(t)$</span> for <span class="math-container">$t\leq c$</span> and <span class="math-container">$g(t)=f(c)$</span> for <span class="math-container">$t\geq c$</span> and <span class="math-container">$h(t)=1$</span> for <span class="math-container">$t\leq c$</span> and <span class="math-container">$h(t)=f(t)/f(c)$</span> for <span class="math-container">$t\geq c$</span>.</p>
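As a numerical sanity check of this construction (an illustrative sketch; the particular $f$, vanishing at $0.3$ and $0.7$, and the cut point $c=0.5$ are my own choices):

```python
import numpy as np

f = lambda t: (t - 0.3)**2 * (t - 0.7)**2   # f >= 0, zero set {0.3, 0.7}: disconnected
c = 0.5                                      # f(c) != 0, zeros on both sides of c
g = lambda t: np.where(t <= c, f(t), f(c))
h = lambda t: np.where(t <= c, 1.0, f(t) / f(c))

ts = np.linspace(0, 1, 1001)
assert np.allclose(g(ts) * h(ts), f(ts))          # f = gh
assert (g(ts) + h(ts) > 0).all()                  # g + h vanishes nowhere: a unit
assert np.isclose(g(0.3), 0) and np.isclose(h(0.7), 0)  # neither g nor h is a unit
```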
<p>Also note that <span class="math-container">$f$</span> is a zero-divisor iff <span class="math-container">$f$</span> vanishes on some (nondegenerate) interval. So, we can express that <span class="math-container">$f$</span> vanishes at exactly one point by saying that <span class="math-container">$f^2$</span> is not a unit but its vanishing set is neither disconnected nor contains an interval.</p>
<p>Now the idea is that we use functions vanishing at points as representatives of those points, and distinguish <span class="math-container">$C([0,1])$</span> from <span class="math-container">$C((0,1))$</span> since <span class="math-container">$[0,1]$</span> has endpoints and <span class="math-container">$(0,1)$</span> does not. Note first that if <span class="math-container">$f$</span> and <span class="math-container">$g$</span> vanish at exactly one point, they vanish at the same point iff <span class="math-container">$f^2+g^2$</span> is not a unit. Now, <span class="math-container">$C([0,1])$</span> has a function <span class="math-container">$f$</span> which vanishes at exactly one point such that for any other function <span class="math-container">$g$</span> which vanishes exactly at that same point, either <span class="math-container">$g$</span> or <span class="math-container">$-g$</span> is a square (namely, take <span class="math-container">$f$</span> to vanish only at one of the endpoints of <span class="math-container">$[0,1]$</span>). However, <span class="math-container">$C((0,1))$</span> does not have any such function <span class="math-container">$f$</span> (since for any <span class="math-container">$a\in (0,1)$</span> you can take a function that is positive on one side of <span class="math-container">$a$</span> and negative on the other side).</p>
<hr />
<p>Let me say a bit about how this generalizes. If <span class="math-container">$X$</span> is a topological space, let <span class="math-container">$Z(X)$</span> be the poset of zero sets in <span class="math-container">$X$</span> (i.e., vanishing sets of elements of <span class="math-container">$C(X)$</span>) ordered by inclusion. If <span class="math-container">$X$</span> is a nice (e.g., metrizable) space, then <span class="math-container">$Z(X)$</span> is just the poset of closed subsets of <span class="math-container">$X$</span>. I claim that the poset <span class="math-container">$Z(X)$</span> together with the map <span class="math-container">$z:C(X)\to Z(X)$</span> mapping a function to its vanishing set is interpretable in <span class="math-container">$C(X)$</span> (uniformly in <span class="math-container">$X$</span>).</p>
<p>Indeed, if <span class="math-container">$f,g\in C(X)$</span>, note that <span class="math-container">$z(f)$</span> is disjoint from <span class="math-container">$z(g)$</span> iff <span class="math-container">$f^2+g^2$</span> is a unit. Then, we can say <span class="math-container">$z(f)\subseteq z(g)$</span> iff for all <span class="math-container">$h$</span> such that <span class="math-container">$z(h)$</span> and <span class="math-container">$z(g)$</span> are disjoint, <span class="math-container">$z(h)$</span> and <span class="math-container">$z(f)$</span> are also disjoint. So, we can define the equivalence relation <span class="math-container">$f\sim g$</span> iff <span class="math-container">$z(f)=z(g)$</span>, and thus interpret <span class="math-container">$Z(X)$</span> (together with the map <span class="math-container">$z$</span>) as the quotient of <span class="math-container">$C(X)$</span> by this equivalence relation, and we also can define the ordering on <span class="math-container">$Z(X)$</span>.</p>
<p>So in particular, if <span class="math-container">$C(X)$</span> and <span class="math-container">$C(Y)$</span> are elementarily equivalent, then so are the posets <span class="math-container">$Z(X)$</span> and <span class="math-container">$Z(Y)$</span>. For nice spaces, this means the posets of closed (or equivalently, open) sets are elementarily equivalent. You can also identify the points of <span class="math-container">$X$</span> (for nice <span class="math-container">$X$</span>) as the minimal nonzero elements of the lattice <span class="math-container">$Z(X)$</span>, and, for instance, express statements like "<span class="math-container">$X\setminus\{x\}$</span> is connected for all <span class="math-container">$x\in X$</span>". I don't know a way of using this to express compactness-like statements or whether functions are bounded, though.</p>
| <p>Regarding your more general question, the answer is no: if <span class="math-container">$X$</span> is any <a href="https://topology.pi-base.org/spaces?q=Pseudocompact%2B%7Ecompact" rel="nofollow noreferrer">non-compact pseudocompact space</a>, then we have a natural (topological) isomorphism <span class="math-container">$C(X)\cong C(\beta X)$</span>. For instance, <span class="math-container">$C(\omega_1)\cong C(\omega_1+1)$</span> (with the order topology on both <span class="math-container">$\omega_1$</span> and <span class="math-container">$\omega_1+1$</span>).</p>
<p>Thus, even topological isomorphism type of <span class="math-container">$C(X)$</span> is not enough to characterise compactness of <span class="math-container">$X$</span>.</p>
<p>(Kudos to <a href="https://math.stackexchange.com/a/4218930/30222">Henno Brandsma</a> for pointing out the name of this property and the example of <span class="math-container">$\omega_1$</span>.)</p>
<p>In fact, one might even have non-trivial spaces with trivial <span class="math-container">$C(X)$</span>, although in that case, the space cannot be completely regular. For instance, any directed poset with the final order topology is an example (a non-<span class="math-container">$T_1$</span> one), which is non-compact if the poset has an infinite strong antichain. There are also <span class="math-container">$T_3$</span> examples, see <a href="https://math.stackexchange.com/questions/72442/existence-of-non-constant-continuous-functions/83179#83179">this post</a>.</p>
|
linear-algebra | <p>I’m learning multivariate analysis, and I studied linear algebra for two semesters as a freshman.</p>
<p>Eigenvalues and eigenvectors are easy to calculate and the concept is not difficult to understand. I found that there are many applications of eigenvalues and eigenvectors in multivariate analysis. For example:</p>
<blockquote>
<p>In principal components, the proportion of total population variance due to the <span class="math-container">$k$</span>th principal component equals <span class="math-container">$$\frac{\lambda_k}{\lambda_1+\lambda_2+\ldots+\lambda_p},$$</span> where <span class="math-container">$p$</span> is the total number of components.</p>
</blockquote>
<p>I understand that, geometrically, multiplying an eigenvector by its eigenvalue has the same effect as multiplying it by the matrix.</p>
<p>I suspect this understanding is too naive, which is why I cannot see the link between eigenvalues and their applications in principal components and elsewhere.</p>
<p>I know how to derive almost every step from the assumptions to the result mathematically. I’d like to know how to <em><strong>intuitively</strong></em> or <em><strong>geometrically</strong></em> understand eigenvalues and eigenvectors in the context of <strong>multivariate analysis</strong> (or just in linear algebra).</p>
<p>Thank you!</p>
| <p>Personally, I feel that intuition isn't something which is easily explained. Intuition in mathematics is synonymous with experience and you gain intuition by working numerous examples. With my disclaimer out of the way, let me try to present a very informal way of looking at eigenvalues and eigenvectors.</p>
<p>First, let us forget about principal component analysis for a little bit and ask ourselves exactly what eigenvectors and eigenvalues are. A typical introduction to spectral theory presents eigenvectors as vectors which are fixed in direction under a given linear transformation. The scaling factor of these eigenvectors is then called the eigenvalue. Under such a definition, I imagine that many students regard this as a minor curiosity, convince themselves that it must be a useful concept and then move on. It is not immediately clear, at least to me, why this should serve as such a central subject in linear algebra.</p>
<p>Eigenpairs are a lot like the roots of a polynomial. It is difficult to describe why the concept of a root is useful, not because there are few applications but because there are too many. If you tell me all the roots of a polynomial, then mentally I have an image of how the polynomial must look. For example, all monic cubics with three real roots look more or less the same. So one of the most central facts about the roots of a polynomial is that they <em>ground</em> the polynomial. A root literally <em>roots</em> the polynomial, limiting its shape.</p>
<p>Eigenvectors are much the same. If you have a line or plane which is invariant then there is only so much you can do to the surrounding space without breaking the limitations. So in a sense eigenvectors are not important because they themselves are fixed but rather they limit the behavior of the linear transformation. Each eigenvector is like a skewer which helps to hold the linear transformation into place.</p>
<p>Very (very, very) roughly then, the eigenvalues of a linear mapping are a measure of the distortion induced by the transformation, and the eigenvectors tell you about how the distortion is oriented. It is precisely this rough picture which makes PCA very useful.</p>
<p>Suppose you have a set of data which is distributed as an ellipsoid oriented in $3$-space. If this ellipsoid were very flat in some direction, then in a sense we can recover much of the information that we want even if we ignore the thickness of the ellipsoid. This is what PCA aims to do. The eigenvectors tell you about how the ellipsoid is oriented and the eigenvalues tell you where the ellipsoid is distorted (where it's flat). If you choose to ignore the "thickness" of the ellipsoid then you are effectively compressing the eigenvector in that direction; you are projecting the ellipsoid onto the most informative directions to look at. To quote wiki:</p>
<blockquote>
<p>PCA can supply the user with a lower-dimensional picture, a "shadow" of this object when viewed from its (in some sense) most informative viewpoint</p>
</blockquote>
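To make the flat-ellipsoid picture concrete, here is an illustrative sketch (the axis lengths and sample size are arbitrary choices of mine): the covariance eigenvalues recover the long, medium, and flat directions, and their ratios tell us how much variance each principal component carries.

```python
import numpy as np

# Points filling an ellipsoid in R^3: long in one axis, very flat in another.
rng = np.random.default_rng(1)
axes = np.diag([5.0, 2.0, 0.1])            # semi-axis scales: long, medium, flat
data = rng.standard_normal((10_000, 3)) @ axes

cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order

# Proportion of total variance per principal component (descending order).
proportions = eigvals[::-1] / eigvals.sum()
print(proportions)  # roughly [0.86, 0.14, 0.0003]: the first two PCs dominate
```

Dropping the third component (the "thickness") throws away almost no variance, which is exactly the projection PCA performs.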
| <p>First let us think what a square matrix does to a vector. Consider a matrix <span class="math-container">$A \in \mathbb{R}^{n \times n}$</span>. Let us see what the matrix <span class="math-container">$A$</span> acting on a vector <span class="math-container">$x$</span> does to this vector. By action, we mean multiplication i.e. we get a new vector <span class="math-container">$y = Ax$</span>.</p>
<p>The matrix acting on a vector <span class="math-container">$x$</span> does two things to the vector <span class="math-container">$x$</span>.</p>
<ol>
<li>It scales the vector.</li>
<li>It rotates the vector.</li>
</ol>
<p>However, for any matrix <span class="math-container">$A$</span>, there are some <em>favored vectors/directions</em>. When the matrix acts on these favored vectors, the action essentially results in just scaling the vector. There is no rotation. These favored vectors are precisely the eigenvectors, and the factor by which each of these favored vectors is stretched or compressed is the corresponding eigenvalue.</p>
<p>So why are these eigenvectors and eigenvalues important? Consider the eigenvector corresponding to the maximum (absolute) eigenvalue. If we take a vector along this eigenvector, then the action of the matrix is maximum. <strong>No other vector, when acted on by this matrix, gets stretched as much as this eigenvector</strong>.</p>
<p>Hence, if a vector were to lie "close" to this eigen direction, then the "effect" of action by this matrix will be "large" i.e. the action by this matrix results in "large" response for this vector. The effect of the action by this matrix is high for large (absolute) eigenvalues and less for small (absolute) eigenvalues. Hence, the directions/vectors along which this action is high are called the principal directions or principal eigenvectors. The corresponding eigenvalues are called the principal values.</p>
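A small numerical illustration of these claims (the symmetric matrix here is an arbitrary example of mine): the matrix merely scales its eigenvectors, and no unit vector is stretched more than the eigenvector of the largest eigenvalue.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)   # ascending eigenvalues
v_max = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue

# Acting on an eigenvector is pure scaling, with no rotation.
assert np.allclose(A @ v_max, eigvals[-1] * v_max)

# No unit vector is stretched beyond the largest eigenvalue.
rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.standard_normal(2)
    u /= np.linalg.norm(u)
    assert np.linalg.norm(A @ u) <= eigvals[-1] + 1e-9
```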
|
differentiation | <p>I would like to know how to calculate:</p>
<p><span class="math-container">$$\frac{d}{dt}\det \big(A_1(t), A_2(t), \ldots, A_n (t) \big).$$</span></p>
| <p>Think I can provide a proof for Matias' formula.</p>
<p>So, let</p>
<p>$$
A(t) = \mathrm{det}\left( A_1(t), \dots , A_n(t) \right) \ .
$$</p>
<p>By definition,</p>
<p>$$
\frac{dA(t)}{dt} = \mathrm{lim}_{h\rightarrow 0} \frac{A(t+h) - A(t)}{h} = \mathrm{lim}_{h\rightarrow 0} \frac{\det (A_1(t+h), \dots, A_n(t+h)) - \det(A_1(t), \dots , A_n(t))}{h}
$$</p>
<p>Now, we subtract and add </p>
<p>$$
\det(A_1(t), A_2(t+h), \dots , A_n(t+h))
$$</p>
<p>obtaining:</p>
<p>$$
\frac{dA(t)}{dt} = \mathrm{lim}_{h\rightarrow 0} \frac{\det (A_1(t+h), A_2(t+h),\dots, A_n(t+h)) - \det(A_1(t), A_2(t+h), \dots , A_n(t+h))}{h} +
\mathrm{lim}_{h\rightarrow 0}\frac{
\det(A_1(t), A_2(t+h), \dots , A_n(t+h))-\det(A_1(t), \dots , A_n(t))}{h}
$$</p>
<p>Now we focus on the first addend, which is</p>
<p>$$
\det \left( \mathrm{lim}_{h\rightarrow 0} \frac{A_1(t+h) - A_1(t)}{h}, \mathrm{lim}_{h\rightarrow 0} A_2(t+h), \dots,\mathrm{lim}_{h\rightarrow 0} A_n(t+h) \right)
$$</p>
<p>That is,</p>
<p>$$
\det (A_1'(t), A_2(t), \dots , A_n(t)) \ .
$$</p>
<p>Now let's handle the second addend, from which we subtract and add </p>
<p>$$
\det(A_1(t), A_2(t), A_3(t+h), \dots , A_n(t+h)) \ .
$$</p>
<p>From which we will obtain the term</p>
<p>$$
\det (A_1(t), A'_2(t), A_3(t), \dots , A_n(t)) \ .
$$</p>
<p>Keep on doing analogous operations till you get</p>
<p>$$
\det (A_1(t), A_2(t), \dots , A_{n-1}(t), A_n'(t)) \ .
$$</p>
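The resulting formula, a sum of determinants with one column differentiated at a time, can be checked numerically. Here is an illustrative sketch (the particular matrix function is my own choice), comparing a finite difference of $\det A(t)$ against the column-wise sum.

```python
import numpy as np

def A(t):
    return np.array([[np.cos(t), t**2, 1.0],
                     [np.sin(t), t,    2.0],
                     [t,         1.0,  np.exp(t)]])

def dA(t):  # entrywise derivative of A
    return np.array([[-np.sin(t), 2*t, 0.0],
                     [ np.cos(t), 1.0, 0.0],
                     [ 1.0,       0.0, np.exp(t)]])

t, h = 0.7, 1e-6
numeric = (np.linalg.det(A(t + h)) - np.linalg.det(A(t))) / h

# Sum over k of det(A with its k-th column replaced by the derivative column).
total = 0.0
for k in range(3):
    M = A(t).copy()
    M[:, k] = dA(t)[:, k]
    total += np.linalg.det(M)

assert abs(numeric - total) < 1e-4
```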
| <p>The formula is $$d(\det(m))=\det(m)Tr(m^{-1}dm)$$ where $dm$ is the matrix with $dm_{ij}$ in the entries. The derivation is based on Cramer's rule, that $m^{-1}=\frac{Adj(m)}{\det(m)}$. It is useful in old-fashioned differential geometry involving principal bundles. </p>
<p>I noticed Terence Tao posted a nice blog entry on <a href="http://terrytao.wordpress.com/2013/01/13/matrix-identities-as-derivatives-of-determinant-identities/">it</a>. So I probably do not need to explain more here. </p>
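A quick finite-difference check of this identity is straightforward (an illustrative sketch; the matrix and perturbation direction are arbitrary choices of mine):

```python
import numpy as np

# Verify d(det m) = det(m) * tr(m^{-1} dm) along a random direction E.
rng = np.random.default_rng(2)
m = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # safely invertible
E = rng.standard_normal((4, 4))                   # perturbation direction dm
h = 1e-6

numeric = (np.linalg.det(m + h * E) - np.linalg.det(m)) / h
formula = np.linalg.det(m) * np.trace(np.linalg.inv(m) @ E)

assert abs(numeric - formula) < 1e-3 * max(1.0, abs(formula))
```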
|
game-theory | <p><strong>Some basic background</strong></p>
<p>The Kalman filter is a (linear) state estimation algorithm that presumes that there is some sort of uncertainty (optimally Gaussian) in the state observations of the dynamical system. The Kalman filter has been extended to nonlinear systems, constrained for cases where direct state observation is impossible, etc. As one should know, the algorithm essentially performs a prediction step based on prior state knowledge, compares that prediction to measurements, and updates the knowledge of the state based on a state covariance estimate that is also updated at every step.</p>
<p>In short, we have a state-space vector usually represented by a vector in $\mathbb{R}^n$. This vector is operated on by a plant matrix, added to a control term (which is operated on by a control matrix), operated on by a covariance matrix, etc. There are usually not intrinsic constraints on the state-space vector beyond those which may be embedded in the description of the dynamical system.</p>
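For concreteness, the predict/compare/update loop described above can be sketched in a few lines for a scalar system (a minimal illustration only; the symbols and parameter values are generic choices of mine, not tied to any particular plant):

```python
import numpy as np

def kalman_1d(zs, a=1.0, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter: plant a, process noise q, measurement noise r."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: propagate state estimate and covariance through the plant.
        x, p = a * x, a * p * a + q
        # Update: blend the prediction with the measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a constant state 5.0; the estimate converges toward it.
rng = np.random.default_rng(3)
zs = 5.0 + rng.normal(0, 0.7, size=200)
est = kalman_1d(zs)
print(est[-1])  # close to 5.0
```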
<p><strong>My question</strong></p>
<p>Can we describe the Kalman filter using the language of category theory? In other words, can we generalize the algebra of what is happening with the Kalman filter for a dynamical system to develop Kalman-like estimators for other processes with vector-like components derived from some known (or partially known) model?</p>
<p><strong>A motivating example</strong></p>
<p>A repeated matrix game involves two or more players choosing strategies according to some payoff function -- which is a function of the game's state space and each player's action -- typically with some notion of equilibrium (e.g. Nash equilibrium). A repeated game with a finite action space implies that strategies are discrete probability distributions representable by a vector in $\mathbb{R}^N$ such that every element of the strategy vector is non-negative and the vector sums to unity.</p>
<p>This is a more restrictive case than a general dynamical system because we have constraints on the vector representation of the strategy. But, given input (i.e. actions taken), output (state measurement of the game's state space), and a payoff structure, it might make sense to attempt a Kalman-like estimator for the strategy vector in the game.</p>
<p>The challenge, of course, is developing a compatible notion of covariance. It's not clear what covariance would mean in relation to probability distributions, since we don't treat the quantity $P(\textrm{action} = \textrm{index})$ as a random variable. But if we had a category-theoretic view of the Kalman filter, could we derive a similar structure based on its algebraic properties in such a way that we can guarantee that our estimate of the opponent's strategy vector converges in some way?</p>
<p><strong>Does what I'm asking even make sense?</strong></p>
| <p>The most direct approach, already mentioned by Chris Culter,
is to use the fact that Kalman filters are a kind of graphical model and look at categorical treatments of graphical models.
This is a rather active area, particularly among people interested in quantum computation, quantum information and quantum foundations in general.
Graphical models are usually treated as <a href="http://en.wikipedia.org/wiki/String_diagram" rel="noreferrer">string</a>/<a href="http://research.microsoft.com/apps/pubs/default.aspx?id=79791" rel="noreferrer">tensor</a>/<a href="http://en.wikipedia.org/wiki/Trace_diagram" rel="noreferrer">trace</a> diagrams, a.k.a <a href="http://en.wikipedia.org/wiki/Penrose_graphical_notation" rel="noreferrer">Penrose notation</a> or graphical languages, rather than directly as categories.
The main reference on the abstract categorical treatment of these kinds of diagrams is Peter Selinger, <a href="http://arxiv.org/abs/0908.3347" rel="noreferrer"><em>A survey of graphical languages for monoidal categories</em></a>.
There is also a very nice interdisciplinary <a href="http://arxiv.org/abs/0903.0340" rel="noreferrer">survey</a> by John Baez that covers this and a number of related areas.
People with publications in this area include Samson Abramsky, Bob Coecke, Jared Culbertson, Kirk Strutz, Robert Spekkens, John Baez, Jacob Biamonte, Mike Stay, Bart Jacobs, David Spivak and Brendan Fong, whose thesis was already provided by Chris Culter in his answer.</p>
<p>Categorical treatments of graphical models require dealing with probability, which is usually done with monads, often using <a href="http://ncatlab.org/nlab/show/Giry%27s+monad" rel="noreferrer">Giry's monad</a>.</p>
<p>You may also want to look at the work of the group at ETH Zurich that includes Patrick Vontobel & Hans-Andrea Loeliger. They work on a kind of graphical model they call Forney factor graphs. They do not use category theory themselves but they have papers on how to translate many kinds of computations, including Kalman filters, electrical circuits and EM algorithms to & from their graphical models.
This also provides a means to translate between Kalman filters and electrical circuits indirectly, so their work could be useful for transforming KFs into other things that already have categorical representations.</p>
<p>Here are links to some of the papers:</p>
<ul>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.173.113" rel="noreferrer">Kalman Filters, Factor Graphs, and Electrical Networks</a></li>
<li><a href="http://golem.ph.utexas.edu/category/2007/09/category_theory_in_machine_lea.html" rel="noreferrer">Category theory in Machine learning on nCategory-cafe</a></li>
<li><a href="http://arxiv.org/abs/1102.2368" rel="noreferrer">Picturing classical and quantum Bayesian inference</a></li>
<li><a href="http://arxiv.org/abs/1312.1445" rel="noreferrer">Bayesian machine learning via category theory</a></li>
<li><a href="http://arxiv.org/abs/1205.1488" rel="noreferrer">A categorical foundation for Bayesian probability</a></li>
<li><a href="http://archive.is/o/Q9yUZ/http://qubit.org/images/files/book.pdf" rel="noreferrer">Lectures on Penrose graphical notation for tensor network states</a></li>
<li><a href="http://arxiv.org/abs/1306.0831" rel="noreferrer">Towards a Categorical Account of Conditional Probability</a></li>
<li><a href="http://arxiv.org/abs/1307.6894" rel="noreferrer">The operad of temporal wiring diagrams: formalizing a graphical language for discrete-time processes</a></li>
</ul>
| <p>A Kalman filter performs inference on a model which is a simple Bayesian network, and a Bayesian network is a kind of graphical model, and graphical models kind of look like categories, so there may be a connection there. Unfortunately, I can't personally go beyond that vague description. Searching the web for "graphical model" + "category theory" turned up an interesting thesis: Brendan Fong (2012) "Causal Theories: A Categorical Perspective on Bayesian Networks". There's no attempted connection to game theory, but it might be worth a look!</p>
|