382,377
When reviewing for MathSciNet, I routinely find myself just paraphrasing and abbreviating the introduction provided by the author, and occasionally adding a few words about the quality of the research or the cleverness of the argument (which the authors themselves would not be able to write, for obvious reasons). There seems to be very little added value in doing this, since the paper already has the introduction (which has the added benefit of being written by someone with intimate knowledge of the paper) as well as an abstract (which will in most cases be sufficient to decide if the paper is worth reading). Of course, I can imagine edge cases where someone can't quite decide if the paper is worth delving into based on the abstract alone, while the introduction is for some reason difficult to read or the paper is difficult to access. But I can't help feeling that there should be more to it. I would love to hear opinions about what makes a MathSciNet review useful, and how to achieve it. Edit to add: As YCor correctly points out, the same question applies to zbMATH or any other place that hosts public reviews in place of MathSciNet. To avoid creating a question which is a moving target, I will refrain from making edits above.
I'll take a stab at this because in the past I have gotten some feedback from Mathematical Reviews saying that they like my reviews, and they did ask me to write a Featured Review once (back when there were such things as Featured Reviews).

The answer to the question depends to some extent on how long a review you want to write. My default length is probably around two to three times the length of the abstract. I typically try to at least give a precise statement of the main result(s). Often the abstract does not do this because stating the main result requires quite a bit of notation and preliminary definitions, which are too long to put in the abstract, but which usually can fit into a review. I do this because I imagine that some people might have access to MathSciNet but for some reason don't have access to the paper, and a precise theorem statement might help them decide whether to put in the extra effort to obtain the paper itself.

Another thing I do is to put myself in the shoes of a MathSciNet user, who is looking for relevant papers that he or she is not currently aware of. I ask myself, what keywords can I put into the review that will help such people discover this paper via a keyword search? If you look through the paper with this mindset, you will often find remarks about related topics that will provide good keywords. These don't always make it into the abstract or the introduction, but it's useful to have them in the review. If you happen to know that the objects in the paper are sometimes studied under a different name then that's also something useful to mention in the review.

If you're willing to put in the effort, MR will happily accept longer reviews. As I understand it, the Featured Reviews that MR used to have were discontinued for various reasons (e.g., I heard that, contrary to MR's intent, Featured Reviews were being used by the community for hiring and promotion decisions, and MR did not feel qualified to decide which papers were the "best"), but there is nothing to stop you from writing something similar for any paper you feel like. You can search for "featured review" in the review text of reviews from June 2005 or earlier to get a feeling for what these were.

Well-written Featured Reviews were not only longer and more detailed than the typical review, they were written with a wide audience in mind. The idea was that a Featured Review would convey some idea of the context and significance of the paper to non-specialists. I will freely admit that I rarely have the energy to write such reviews, but they are certainly of value. Imagine someone stumbling upon your review in their search results and finding your review more accessible than the paper itself; they could very well make a conceptual connection that they wouldn't have otherwise, or be drawn into an area that is close to their own interests but that they didn't know existed.
{ "source": [ "https://mathoverflow.net/questions/382377", "https://mathoverflow.net", "https://mathoverflow.net/users/14988/" ] }
382,442
Let $\mathbb CP^n$ denote the complex projective space of dimension $n$. We have a standard complex structure on $\mathbb CP^n$, and my question is: is this complex structure unique? Or equivalently: let $X$ be a complex manifold diffeomorphic to $\mathbb CP^n$; is $X$ biholomorphic to $\mathbb CP^n$? What I know is from p. 45 of Morrow & Kodaira's book "Complex Manifolds": $\mathbb CP^n$ is rigid. But this fact only ensures that small deformations don't change the complex structure of $\mathbb CP^n$; we do not even know whether large deformations change the complex structure of $\mathbb CP^n$, or more generally, whether the same diffeomorphism type of $\mathbb CP^n$ admits different complex structures. For dimension 1, I have learnt from some book that the answer is yes. For dimension 2, as cited from Yau's 1977 paper "Calabi's conjecture and some new results in algebraic geometry", as a corollary of Yau's solution of the Calabi conjecture, the complex structure of $\mathbb CP^2$ is unique. But for higher dimensions, is this problem solved? Or has any progress been made?
Let me write this too-long comment as an answer. As abx says, what we do know is:

Theorem 1. If a Kähler manifold $X$ is homeomorphic to $\mathbb{CP}^n$, then $X$ is biholomorphic to it.

This is due to Hirzebruch and Kodaira for $n$ odd (but with the stronger assumption that $X$ be diffeomorphic to $\mathbb{CP}^n$; this was relaxed to homeomorphic after work of Novikov), and to Yau for $n$ even. For $n=2$, a stronger result holds, also proved by Yau, namely:

Theorem 2. If a compact complex surface $S$ is homotopy equivalent to $\mathbb{CP}^2$, then it is biholomorphic to it.

In dimension $n\le 6$, we have a result due to Libgober–Wood which is stronger than Theorem 1 but weaker than Theorem 2:

Theorem 3. A compact Kähler manifold of complex dimension $n\le 6$ which is homotopy equivalent to $\mathbb{CP}^n$ must be biholomorphic to it.

You can find all this (and much more) in the very beautiful survey by V. Tosatti available here.
{ "source": [ "https://mathoverflow.net/questions/382442", "https://mathoverflow.net", "https://mathoverflow.net/users/99826/" ] }
382,795
The convention that $\sin^2 x = (\sin x)^2$ , while in general $f^2(x) = f(f(x))$ , is often called illogical, but it does not lead to conflicts because nobody uses $\sin(\sin x)$ . But is this really true? Or is there a real-world application in which $\sin(\sin x)$ occurs? Or maybe something a bit more general, like $\sin(C \sin x)$ for some constant $C \neq 0$ ?
The intensity of light diffracted at a slit as a function of the angle actually involves a term $\sin\left(\frac{\alpha\beta}{2}\sin(\theta)\right)$ , see https://en.wikipedia.org/wiki/Fraunhofer_diffraction (I'm no physicist at all, but this has been stuck in my head since high school just because it is such an unusual term to encounter naturally)
{ "source": [ "https://mathoverflow.net/questions/382795", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
383,314
Let $C_n=\frac1{n+1}\binom{2n}n$ be the familiar Catalan numbers. QUESTION. Is there a combinatorial or conceptual justification for this identity? $$\sum_{k=1}^n\left[\frac{k}n\binom{2n}{n-k}\right]^2=C_{2n-1}.$$
By the ballot theorem, $\frac{k}{n} \binom{2n}{n+k}$ is the number of Dyck paths, i.e. $(1,1), (1,-1)$ -walks in the quadrant, from the origin to $(2n-1, 2k-1)$ . You need to concatenate a pair of those to get a Dyck path to $(4n-2,0)$ , and $k$ takes values between 1 and $n$ .
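For readers who like to double-check such identities mechanically, here is a minimal Python sketch of my own (not part of the answer) verifying both the identity and the ballot-theorem count by brute force:

```python
from fractions import Fraction
from itertools import product
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# The identity itself.
for n in range(1, 13):
    lhs = sum(Fraction(k, n) ** 2 * comb(2 * n, n - k) ** 2
              for k in range(1, n + 1))
    assert lhs == catalan(2 * n - 1)

# The ballot-theorem count: (k/n) * binom(2n, n-k) nonnegative walks
# with +-1 steps, of length 2n-1, from height 0 to height 2k-1.
def walks(length, end):
    count = 0
    for steps in product((1, -1), repeat=length):
        h, ok = 0, True
        for s in steps:
            h += s
            if h < 0:
                ok = False
                break
        if ok and h == end:
            count += 1
    return count

for n in range(1, 6):
    for k in range(1, n + 1):
        assert walks(2 * n - 1, 2 * k - 1) == Fraction(k, n) * comb(2 * n, n - k)

print("all checks passed: identity for n <= 12, path counts for n <= 5")
```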
{ "source": [ "https://mathoverflow.net/questions/383314", "https://mathoverflow.net", "https://mathoverflow.net/users/66131/" ] }
383,441
On this biography page of André Bloch, it is said that The Académie des Sciences awarded him the Becquerel Prize just before his death. This claim is also repeated in PlanetMath, Wikiversity, and also Hersh & John-Steiner's book Loving and Hating Mathematics: Challenging the Myths of Mathematical Life (+link). There are several other resources backing this claim. But on the other hand, the official website of the Becquerel Prize says (emphasis mine):

The Alexandre Edmond Becquerel Prize was established in 1989 by the European Commission at the occasion of the 150th anniversary of Becquerel's classical experiment in which he discovered the photovoltaic effect. Its purpose is to honour scientific, technical or managerial merit in the development of photovoltaic solar energy, attained over a long period of continuous achievements, or very exceptionally, for some extraordinary invention or discovery.

So clearly, this Becquerel Prize has nothing to do with mathematics. I suspected that there was another Becquerel Prize in the 1930s or 1940s, awarded by the Académie des Sciences. But I found no record of that except the Wikipedia page of Yvette Cauchois, a physicist known for her contributions to x-ray spectroscopy and x-ray optics: Henri Becquerel Prize from the French Academy of Sciences (1935). Therefore this limited sample just strengthens my suspicion that there might be no Becquerel Prize for mathematicians. And this part of André Bloch's biography also adds to the other myths surrounding him. So my questions, as the title says, are: Was there a Becquerel Prize for mathematicians during that era? Did André Bloch receive it?
Indeed, the Henri Becquerel Foundation awarded a prize of 6000 francs to André Bloch for his work on the theory of functions, announced posthumously on 13 December 1948 (source). According to this biography, Bloch was informed of the prize shortly before his death on 11 October 1948.
{ "source": [ "https://mathoverflow.net/questions/383441", "https://mathoverflow.net", "https://mathoverflow.net/users/93602/" ] }
384,218
This is a question about a naming convention. The Barr-Beck theorem (or simply Barr-Beck) is used a lot in descent theory over the past 30 years, almost invariably without a reference, like folklore. To make precise which theorem I am talking about: According to one source the Barr-Beck monadicity theorem gives necessary and sufficient conditions for a category to be equivalent to a category of algebras over a monad . There are occasionally references, e.g., to the Wikipedia page on "Beck's monadicity theorem" which attributes the theorem to Beck's thesis in 1967 under the direction of Eilenberg. This makes me wonder how Barr is related to Barr-Beck. So I did some further research and found out that there is indeed a Barr-Beck paper with a Barr-Beck theorem. But it is about triple cohomology of algebras. This seems to be different. So here are my questions: Is the Barr-Beck theorem different from Beck's theorem? If so, how? If not, how did Barr's name get attached to it? By the way, the second Beck is Jon Beck while the first is Jonathan Mock Beck with a different author number in MathSciNet. So out of curiosity: Are Jon and Jonathan Beck one and the same person?
It is well attested in the category theory literature (e.g. in Mac Lane's Categories for the Working Mathematician, Chapter VI) that the well-known theorem giving necessary and sufficient conditions for monadicity of a functor is due to Jon Beck. Indeed, most category theorists I know call this "Beck's monadicity theorem". So why do others so often call it the "Barr-Beck theorem" nowadays?

As Tim points out in the comments above, there are many variants of the monadicity theorem which give sufficient, but not generally necessary, conditions for a functor to be monadic. Several of these variants may be found in the Exercises to Section VI.7 of Mac Lane (which section is titled "Beck's Theorem"). Exercise 7 reads as follows:

CTT (Crude Tripleability Theorem; Barr-Beck). If $G$ is CTT, prove that the comparison functor $K$ is an equivalence of categories.

(A functor $G \colon A \to B$ is said by Mac Lane to be CTT when it has a left adjoint, it preserves and reflects all coequalizers which exist, and the category $A$ has coequalizers of all parallel pairs of arrows $f,g$ such that the pair $Gf,Gg$ has a coequalizer in $B$. Note that these conditions are much stronger than those appearing in Beck's monadicity theorem. Indeed, the composite of two CTT functors is CTT, whereas the composite of two monadic functors need not be monadic.)

Note the attribution to Barr-Beck. Now, this particular theorem was cited by Deligne in Section 4.1 of his paper Catégories tannakiennes in the Grothendieck Festschrift as "Le théorème de Barr-Beck". My theory is that the name "Barr-Beck theorem" was popularised in certain circles by Deligne's usage here, and that over time (how long?) its usage shifted in these circles to refer to Beck's precise monadicity theorem. I fear that this incorrect usage has now been set in stone by its appearance in the works of modern influential authors such as Lurie.
{ "source": [ "https://mathoverflow.net/questions/384218", "https://mathoverflow.net", "https://mathoverflow.net/users/89948/" ] }
384,230
Some mathematicians claim that their field has nothing to do with political concerns; others are deeply involved in political life. Are there many great mathematicians with great political commitments? I am particularly interested in the possible interplay between their research work and their political involvement. For instance, did their political views influence their research topics or collaborations? Did some issues encountered in their mathematical life (funding opportunities, for example) have an impact on their political commitment? This would mean more than just doing maths and politics independently.
Chandler Davis. He was Professor of Mathematics at the University of Michigan. He was a member of the Communist Party USA. He refused to cooperate with the House Un-American Activities Committee. He was fired from the University of Michigan and sentenced to 6 months in jail. A paper from this era has the following acknowledgement:

Research supported in part by the Federal Prison System. Opinions expressed in this paper are the author's and are not necessarily those of the Bureau of Prisons.

(source)
{ "source": [ "https://mathoverflow.net/questions/384230", "https://mathoverflow.net", "https://mathoverflow.net/users/158328/" ] }
384,292
A semigroup $S$ is defined to be squared if there exists a subset $A\subseteq S$ such that the function $A\times A\to S$ , $(x,y)\mapsto xy$ , is bijective. Problem: Is each squared finite group trivial? Remarks (corrected in an Edit). I learned this problem from my former Ph.D. student Volodymyr Gavrylkiv. It can be shown that a group with two generators $a,b$ and relation $a^2=1$ is squared. So the adjective finite is essential in the above problem. Computer calculations show that no group of order $<64$ is squared. For any set $X$ the rectangular semigroup $S=X\times X$ endowed with the binary operation $(x,y)*(a,b)=(x,b)$ is squared. This follows from the observation that for the diagonal $D=\{(x,x):x\in X\}$ of $X\times X$ , the map $D\times D\to S$ , $(x,y)\mapsto xy$ , is bijective. So restriction to groups in the formulation of the Problem is also essential.
I think, as implicitly suggested by Yemon Choi, it is possible to explain the proof of the answer of user49822 by making more use of idempotents. Suppose that the finite group $G$ is squared via the subset $A$. The element $e = \frac{1}{|G|}\sum_{g \in G} g$ is a primitive idempotent of $\mathbb{C}G.$ Let $f= \frac{1}{|A|}\sum_{a \in A} a.$ Then we have $f^{2} = ef = fe = e = e^{2}$. Thus $(f-e)^{2} = 0 = e(f-e) = (f-e)e.$ Now $f = e +(f-e)$ is the sum of commuting matrices (in the regular representation of $\mathbb{C}G$, say) with the second matrix nilpotent. Thus $f$ has trace $1$ in the regular representation of $\mathbb{C}G.$ This forces $1 \in A$, since all non-identity elements of $G$ have trace zero in the regular representation. But then $A = \{1\}$, since (as in Jeremy Rickard's comment) if $1 \neq a \in A$, then $a = 1a = a1$ gives two different expressions for $a$. Alternatively (using traces), the fact that $1$ appears with coefficient $|G|^{-1}$ in $e$ tells us that $1$ appears with coefficient $|G|^{-1}$ in $f$ as well, so that $\sqrt{|G|} = |A| = |G|$ and $|G| = 1.$
{ "source": [ "https://mathoverflow.net/questions/384292", "https://mathoverflow.net", "https://mathoverflow.net/users/61536/" ] }
385,001
I'm reading Random Circles on a Sphere and the authors did the following to empirically check their results:

To make a partial test of the accuracy of the above approximations an experiment was carried out using table tennis balls. These had a mean diameter of 37.2 mm. with a standard deviation around this mean of 0.02mm. One hundred holes of diameter 29.9mm. were punched in an aluminium sheet forming one side of a flat box. The balls were held firmly against the holes by a foam rubber pad, and sprayed with a duco paint. After drying they were removed and replaced at random by hand. Forty sprayings were done in each of three sets of 100 balls. The number of balls not completely covered after N sprayings are shown in Table 2, and fit the theoretical curve rather more closely than the roughness of the approximations used would lead one to expect. The angle α was about 53.43° as used in the calculations.

What are other examples of mathematicians turning into carpenters to test theories in modern days (post-1960s)?
I am encouraged to give this answer by the comment of the OP: "My interest is in creative, non-digital ways of experimenting with mathematical theories, especially aiming for publication". My colleague Hendrik Lenstra used his expertise with elliptic curves to fill in the empty hole at the center of a lithograph by Escher; see Artful Mathematics, which shows the original lithograph alongside a zoom into the completed hole:

We shall see that the lithograph can be viewed as drawn on a certain elliptic curve over the field of complex numbers and deduce that an idealized version of the picture repeats itself in the middle. More precisely, it contains a copy of itself, rotated clockwise by $157.6255960832\ldots$ degrees and scaled down by a factor of $22.5836845286\ldots$
{ "source": [ "https://mathoverflow.net/questions/385001", "https://mathoverflow.net", "https://mathoverflow.net/users/174926/" ] }
385,167
If $G=(V,E)$ is a finite, simple, undirected graph, and $v\in V$ , we set $N(v) = \{w\in V:\{v,w\}\in E\}$ , and $\text{deg}(v)= |N(v)|$ . We say a vertex $v\in V$ is a king if $\text{deg}(v) > \text{deg}(w)$ for all $w\in N(v)$ . In the graph $G=(\{0,1,2\}, \big\{\{0,1\}, \{1,2\}\big\})$ , one of the $3$ vertices is a king. Let $\text{King}(G)$ be the set of king vertices. Question. Is it true that for any finite connected graph $G=(V,E)$ with $|V|>1$ we have $|\text{King}(G)|/|V|\leq 1/3$ ? If not, how large can this value get?
For this discussion I am assuming we do not consider isolated vertices to be "kings", even though technically your definition considers them to be so in a vacuous sense (I guess this convention goes back to Shakespeare). Otherwise of course one can make every vertex a king by having no edges whatsoever.

For the matching upper bound, observe that no two kings can be adjacent, and if there is at least one king, the set $E'$ of ordered edges $(v,w)$ in $E$ with $v \in \mathrm{King}(G)$ and $w \not \in \mathrm{King}(G)$ is non-empty. Now we do weighted double counting: \begin{align*} \# \mathrm{King}(G) &= \sum_{v \in \mathrm{King}(G)} 1 \\ &= \sum_{(v,w) \in E'} \frac{1}{d(v)}\\ &< \sum_{(v,w) \in E'} \frac{1}{d(w)} \\ &\leq \sum_{w \in V \backslash \mathrm{King}(G)} 1 \\ &= \#V - \# \mathrm{King}(G) \end{align*} hence $$ \# \mathrm{King}(G) < \frac{1}{2} \# V.$$ Of course the same claim holds when there are no kings, as long as the graph is not the empty graph.

So this shows that the lower bound provided by the complete bipartite graph examples (adding an isolated vertex when one wants an even number of vertices) is completely optimal: the maximal number of kings in a graph on $n$ vertices is $\max( \lfloor \frac{n-1}{2} \rfloor, 0)$. This bound can also be viewed as quantifying a variant of the "friendship paradox". (Based on this connection, I propose "influencer" as a more modern and gender-neutral terminology alternative to "king".)
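This optimality claim is easy to sanity-check by brute force on all graphs with up to six vertices; a minimal sketch of my own (following the convention above, isolated vertices are not counted as kings):

```python
from itertools import combinations, product

def max_kings(n):
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in product((0, 1), repeat=len(pairs)):   # all graphs on n vertices
        adj = [set() for _ in range(n)]
        for (u, v), bit in zip(pairs, mask):
            if bit:
                adj[u].add(v)
                adj[v].add(u)
        deg = [len(a) for a in adj]
        kings = sum(1 for v in range(n)
                    if deg[v] > 0 and all(deg[v] > deg[w] for w in adj[v]))
        best = max(best, kings)
    return best

for n in range(1, 7):
    assert max_kings(n) == max((n - 1) // 2, 0)
print("max number of kings equals max((n-1)//2, 0) for n = 1..6")
```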
{ "source": [ "https://mathoverflow.net/questions/385167", "https://mathoverflow.net", "https://mathoverflow.net/users/8628/" ] }
385,202
One of the famous problems in SDP is matrix norm minimization (see S. Boyd, Convex Optimization, p. 170). Consider: \begin{equation} \begin{aligned} &\min_{\mathbf{x}} & & \|A(x)-M\|_2 \\ & & & A(x)=-A(x)^T \end{aligned} \end{equation} Here $x\in \mathbb{R}^n$ and $A(x)=x_1A_1+\cdots+x_nA_n$, with $A_i\in \mathbb{R}^{n\times n}$ and $A_i=-A_i^T$, so the $A_i$ are skew-symmetric. We also assume each column $A_i(x)$ of $A(x)$ is normalized: $\|A_i\|_2=1$. The matrix $M\in \mathbb{R}^{n\times n}$ is given, and here we consider the spectral norm of matrices. So this SDP finds the optimal solution $x$ minimizing a particular metric between $A(x)$ and $M$, quantified by $\|A(x)-M\|_2$. Suppose $x^*$ is the optimal solution. My question is: is $x^*$ also the optimal solution of the following problem? \begin{equation} \begin{aligned} &\max_{\mathbf{x}} & & \langle A(x), M\rangle \\ &\text{ s.t.} & & A(x)=-A(x)^T \end{aligned} \end{equation} The motivation for asking this comes from the fact that in the vector case (assume $\|c\|_2=1$), for \begin{equation} \begin{aligned} &\min_{\mathbf{x}, \|x\|_2=1} & & \|x-c\|_2 \end{aligned} \end{equation} the optimal solution is $x^*=c/\|c\|_2$, and $x^*$ is also the solution of the following problem: \begin{equation} \begin{aligned} &\max_{\mathbf{x}, \|x\|_2=1} & & \langle x, c\rangle. \end{aligned} \end{equation} So I am not sure whether the same holds for matrices. Any references or papers are welcome. Also, if the equivalence fails for the spectral norm, does it hold for the Frobenius norm? Why or why not? Sincerely appreciate your help.
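One remark on the vector case invoked above: since $\|x\|_2=\|c\|_2=1$, expanding gives $\|x-c\|_2^2 = 2 - 2\langle x,c\rangle$, so minimizing $\|x-c\|_2$ and maximizing $\langle x,c\rangle$ on the unit sphere are trivially the same problem. A quick numerical sketch of my own illustrating this (note that the spectral norm of a matrix is not induced by the trace inner product, which is exactly why the matrix analogue is not immediate):

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=5)
c /= np.linalg.norm(c)                       # ||c||_2 = 1, as assumed above

# Sample many unit vectors x and compare the two objectives.
X = rng.normal(size=(100_000, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)

dist = np.linalg.norm(X - c, axis=1)
inner = X @ c

print(np.allclose(dist**2, 2 - 2 * inner))   # the identity holds exactly
print(np.argmin(dist) == np.argmax(inner))   # same optimizer among the samples
```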
{ "source": [ "https://mathoverflow.net/questions/385202", "https://mathoverflow.net", "https://mathoverflow.net/users/93600/" ] }
385,303
The number $f(n)$ of graphs on the vertex set $\{1,\dots,n\}$ , allowing loops but not multiple edges, is $2^{{n+1\choose 2}}$ , with exponential generating function $F(x)=\sum_{n\geq 0} 2^{{n+1\choose 2}}\frac{x^n}{n!}$ . Consider $$ \sqrt{F(x)} = 1+x+3\frac{x^2}{2!}+23\frac{x^3}{3!} +393\frac{x^4}{4!}+13729\frac{x^5}{5!}+\cdots. $$ It's not hard to see that the coefficients 1,1,3,23,393,13729, $\dots$ are positive integers. This is A178315 in OEIS. Do they have a combinatorial interpretation? More generally, we can replace $2^{{n+1\choose 2}}$ with $\sum_G t_1^{c_1(G)} t_2^{c_2(G)}\cdots$ , where $G$ ranges over the same graphs on $\{1,\dots,n\}$ , and where $c_i(G)$ is the number of connected components of $G$ with $i$ vertices. Now we will get polynomials in the $t_i$ 's with positive integer coefficients, the first four being $$ t_1 $$ $$ t_1^2+2t_2 $$ $$ t_1^3+6t_1t_2+16t_3 $$ $$ t_1^4+12t_1^2t_2+64t_1t_3+12t_2^2+304t_4. $$ Again we can ask for a combinatorial interpretation of the coefficients. Note. What happens if we don't allow loops, so we are looking at $\sqrt{\sum_{n\geq 0}2^{{n\choose 2}}\frac{x^n}{n!}}$ ? Now the coefficient of $\frac{x^n}{n!}$ is equal to the coefficient of $\frac{x^n}{n!}$ in $\sqrt{F(x)}$ , divided by $2^n$ , which in general is not an integer. Hence it makes more sense combinatorially to allow loops.
There is a fixed-point-free involution on these graphs which I will call loop-switching , given by adding a loop to every vertex that doesn't have one while simultaneously deleting the loops from all vertices that do. Then $\sqrt{F(x)}$ counts equivalence classes of graphs, where two graphs are in the same class if one can be obtained from the other by loop-switching a subset of its connected components. This follows from $\sqrt{F(x)} = \exp \frac{\log F(x)}{2}$ where $\log F(x)$ counts connected graphs and clearly on those the equivalence classes have size exactly 2. This also nicely explains why allowing loops is important here, and should work the same way in the multivariate version. I guess to turn this into a "proper" combinatorial interpretation of $\sqrt{F(x)}$ as counting some class of structures of which a graph is formed from exactly two, one could somehow choose a canonical representative of each equivalence class on connected graphs. It seems like there is no good way to do this, however, since in some of those classes the two graphs are isomorphic.
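The integrality (and the specific values) are quick to confirm by solving $G^2 = F$ coefficient-wise as exponential generating functions; a minimal sketch of my own, with $a_n = 2^{\binom{n+1}{2}}$:

```python
from math import comb

def a(n):
    # Graphs on {1,...,n} with loops allowed: 2^binom(n+1, 2) of them.
    return 2 ** (n * (n + 1) // 2)

# b_n are the EGF coefficients of sqrt(F): sum_k binom(n,k) b_k b_{n-k} = a_n.
b = [1]
for n in range(1, 9):
    s = sum(comb(n, k) * b[k] * b[n - k] for k in range(1, n))
    q, r = divmod(a(n) - s, 2)
    assert r == 0          # integrality, as expected from the equivalence classes
    b.append(q)

print(b[:6])  # [1, 1, 3, 23, 393, 13729] -- A178315
```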
{ "source": [ "https://mathoverflow.net/questions/385303", "https://mathoverflow.net", "https://mathoverflow.net/users/2807/" ] }
385,546
Inspired by the question here, I have been trying to understand the sheaf-theoretic approach to forcing, as in MacLane–Moerdijk's book "Sheaves in geometry and logic", Chapter VI.

A general comment is that sheaf-theoretic methods do not a priori produce "material set theories". Here "material set theory" refers to set theory axiomatized on the element-of relation $\in$, as usually done in ZFC. Rather, they produce "structural set theories", where "structural set theory" refers to set theory axiomatized on sets and morphisms between them, as in the elementary theory of the category of sets ETCS. I will always add a collection (equivalently, replacement) axiom to ETCS; let's denote it ETCSR for brevity. Then Shulman in Comparing material and structural set theories shows that the theories ZFC and ETCSR are "equivalent" (see Corollary 9.5) in the sense that one can go back and forth between models of these theories. From ZFC to ETCSR, one simply takes the category of sets; in the converse direction, one builds the sets of ZFC in terms of well-founded extensional trees (modeling the "element-of" relation) labeled by (structural) sets. So for this question, I will work in the setting of structural set theory throughout.

There are different ways to formulate the data required to build a forcing extension. One economical way is to start with an extremally disconnected profinite set $S$, and a point $s\in S$. (The partially ordered set is then given by the open and closed subsets of $S$, ordered by inclusion.) One can endow the category of open and closed subsets $U\subset S$ with the "double-negation topology", where a cover is given by a family $\{U_i\subset U\}_i$ such that $\bigcup_i U_i\subset U$ is dense. Let $\mathrm{Sh}_{\neg\neg}(S)$ denote the category of sheaves on the poset of open and closed subsets of $S$ with respect to this topology. Then $\mathrm{Sh}_{\neg\neg}(S)$ is a boolean (Grothendieck) topos satisfying the axiom of choice, but it is not yet a model of ETCSR. But with our choice of $s\in S$, we can form the ($2$-categorical) colimit $$\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$$ called the filter-quotient construction by MacLane–Moerdijk. I'm highly tempted to believe that this is a model of ETCSR — something like this seems to be suggested by the discussions of forcing in terms of sheaf theory — but have not checked it. (See my answer here for a sketch that it is well-pointed. Edit: I see that well-pointedness is also Exercise 7 of Chapter VI in MacLane–Moerdijk.)

Questions:

1. Is it true that $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$ is a model of ETCSR?

2. If the answer to 1) is Yes, how does this relate to forcing? Note that in usual presentations of forcing, if one wants to actually build a new model of ZFC, one has to first choose a countable base model $M$. This does not seem to be necessary here, but maybe this is just a sign that all of this does not really work this way.

3. Here is another confusion, again on the premise that the answer to 1) is Yes (so probably premature). An example of an extremally disconnected profinite set $S$ is the Stone–Čech compactification of a discrete set $S_0$. In that case, forcing is not supposed to produce new models.
On the other hand, $\mathrm{Sh}_{\neg\neg}(S)=\mathrm{Sh}(S_0)=\prod_{S_0} \mathrm{Set}$ , and if $s$ is a non-principal ultrafilter on $S_0$ , then $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$ is exactly an ultraproduct of $\mathrm{Set}$ – which may have very similar properties to $\mathrm{Set}$ , but is not $\mathrm{Set}$ itself. What is going on?
Yes, this is a model of ETCSR. Unfortunately, I don't know of a proof of this in the literature, which is in general sadly lacking as regards replacement/collection axioms in topos theory. But here's a sketch.

As Zhen says, the filterquotient construction preserves finitary properties such as Booleanness and the axiom of choice. Moreover, a maximal filterquotient will be two-valued. But as you point out, a nondegenerate two-valued topos satisfying the (external) axiom of choice is necessarily well-pointed; I wrote out an abstract proof at https://ncatlab.org/nlab/show/well-pointed+topos#boolean_properties . Thus, $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$ is a model of ETCS.

As for replacement, the proof that I know (which is not written out in the literature) goes by way of the notions of "stack semantics" and "autology" in my preprint Stack semantics and the comparison of material and structural set theories (the other half, not the part that became the paper of mine cited in the question). Briefly, stack semantics is an extension of the internal logic of a topos to a logic containing unbounded quantifiers of the form "for all objects" or "there exists an object". (My current perspective, sketched in these slides, is that this is a fragment of the internal dependent type theory of a 2-topos of stacks -- hence the name!) This language allows us to ask whether a topos is "internally" a model of structural set theories such as ETCS or ETCSR. It turns out that every topos is "internally (constructively) well-pointed", and moreover satisfies the internal collection axiom schema. But the internal separation axiom schema is a strong condition on the topos, which I called being "autological". If a topos is autological and also Boolean, then the logic of its stack semantics is classical; thus it is internally a model of ETCSR. Since Grothendieck toposes are autological, your $\mathrm{Sh}_{\neg\neg}(S)$ is internally a model of ETCSR.

Now we can also prove that if $\mathcal{E}$ is Boolean and autological, so is any filterquotient of it. The idea is to prove a categorical version of Łoś's theorem for the stack semantics. (I don't know whether this is true without Booleanness, which annoys me to no end, but you probably don't care. (-: ) Therefore, $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$ is also autological. Finally, another fact about autology is that a well-pointed topos is autological if and only if it satisfies the ordinary structural-set-theory axiom schemas of separation and collection. Therefore, $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$ satisfies these schemas, hence is a model of ETCSR.

However, I doubt that this particular filterquotient is related to forcing at all. The point is the same one that Jacob made in a comment: when set theorists force over a countable base model to make an "actual" new model, they find an actual generic ultrafilter outside that model. A generic ultrafilter in the base model would be a point of the topos $\mathrm{Sh}_{\neg\neg}(S)$, which as Andreas pointed out in a comment, does not exist. Your "points" $s$ are not points of the topos $\mathrm{Sh}_{\neg\neg}(S)$, so it's unclear to me whether filterquotients at them have anything to do with forcing. Let me reiterate my argument that the real content of forcing is the internal logic of the topos $\mathrm{Sh}_{\neg\neg}(S)$. In particular, if you build a model of material set theory in this internal logic, what you get is essentially the Boolean-valued model that set theorists talk about.
Edit: I think the rest of this answer is off-base; see the discussion in the comments. I'm pretty sure this is the best kind of "model" you can get if you don't want to start talking about countable models of ZFC sitting inside larger ambient models. At the moment, my best guess for a topos-theoretic gloss on the countable-transitive-model version of forcing is something like the following. Suppose that $E$ is a countable model of ETCSR, containing an internal poset $P$ , which we can equip with its double-negation topology. Then treating $E$ as the base topos, we can build $\mathrm{Sh}(P,E)$ , a bounded $E$ -indexed elementary topos (i.e. " $E$ thinks it is a Grothendieck topos"), which contains the Boolean-valued model associated to $P$ as described above. It is the classifying topos of $P$ -generic filters, hence has in general no $E$ -points. But we also have the larger topos $\rm Set$ in which $E$ is countable, and we can consider the externalization $|P|$ which is a poset in $\rm Set$ , namely $|P| = E(1,P)$ . Then we can build the topos $\mathrm{Sh}(|P|,\rm Set)$ which "really is" a Grothendieck topos and classifies $|P|$ -generic filters. The "Rasiowa–Sikorski lemma" implies that, since $E$ is countable, in this case such a filter does actually exist in $\rm Set$ , so there is a point $p:\mathrm{Set} \to \mathrm{Sh}(|P|,\rm Set)$ . Now we should also have some kind of "externalization functor" $|-| : \mathrm{Sh}(P,E) \to \mathrm{Sh}(|P|,\rm Set)$ . My guess is that the set-theorists' forcing model is the "image" (whatever that means) of the Boolean-valued model in $\mathrm{Sh}(P,E)$ under the composite of this externalization functor with the inverse image functor $p^* : \mathrm{Sh}(|P|,\rm Set) \to Set$ . However, I have not managed to make this precise.
{ "source": [ "https://mathoverflow.net/questions/385546", "https://mathoverflow.net", "https://mathoverflow.net/users/6074/" ] }
385,732
Recently, I figured out that a colleague of mine had published in recent years a proof of a theorem in which he was actually proving a deeper result which we both thought to be still open. After a closer look at his proof I found that, taking a bit more care and putting some additional emphasis on certain parts of his previous proof, he was actually proving the other still-thought-to-be-open problem: the construction was exactly the same, and therefore the proof of the previously published theorem was a stronger argument than we had first thought. I am curious now about this phenomenon happening more often. Do you know of other recent examples (let's say from 1700 to the current day) of this phenomenon of proofs being stronger than initially stated, or proving more than first thought?
The example given by Wojowu in the comments seems worth posting as an answer. In the NOVA special The Proof , Ken Ribet says the following. I saw Barry Mazur on the campus, and I said, "Let's go for a cup of coffee." And we sat down for cappuccinos at this cafe, and I looked at Barry and I said, "You know, I'm trying to generalize what I've done so that we can prove the full strength of Serre's epsilon conjecture." And Barry looked at me and said, "But you've done it already. All you have to do is add on some extra $\Gamma_0(M)$ structure and run through your argument, and it still works, and that gives everything you need." And this had never occurred to me, as simple as it sounds. I looked at Barry, I looked at my cappuccino, I looked back at Barry, and I said, "My God. You're absolutely right." He also talks about this story in this Numberphile video .
{ "source": [ "https://mathoverflow.net/questions/385732", "https://mathoverflow.net", "https://mathoverflow.net/users/158098/" ] }
386,011
When I was an undergrad student, the first application that was given to me of the construction of the fundamental group was the non-retraction lemma: there is no continuous map from the disk to the circle that induces the identity on the circle. From this lemma, you easily deduce the Brouwer fixed point theorem for the disk. This was (for me) one of those "WOOOOW" moments where you realize that abstract constructions and some seemingly innocuous functorial lemmas may yield striking results (especially as I knew a quite long and complicated proof of Brouwer's theorem in dimension 2 before taking this topology class). I was wondering if there exist (+ references if they do) similarly "cute" applications of the construction of the étale fundamental group in Algebraic Geometry. Of course "cute" is not well-defined and may vary for each one of us, but I would find existence of fixed points for the Frobenius morphism especially cute. Any other relatively elementary result related to algebraic geometry over fields of positive characteristic will be appreciated! Edit: I am obviously curious about any application of the étale fundamental group endowed with the aforementioned "WOOOW feeling". However, I'd be really interested in examples I could explain to smart grad students who are taking a first (but relatively advanced) course in Algebraic Geometry.
Using the étale fundamental group one can construct an injective group homomorphism $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}) \hookrightarrow \operatorname{Out}(\widehat{F_2})$ which is canonical in the sense that there are no choices involved in its construction (once the algebraic closure is fixed); here $\operatorname{Out}$ refers to the outer automorphism group and $\widehat{F_2}$ to the profinite completion of the free group on two letters. As for your example of a "WOOOOW" moment, this statement no longer contains the étale fundamental group, even though it's vital for the construction. The fact that the absolute Galois group of the rationals is canonically a subgroup of the outer automorphism group of a (profinite) free group is completely non-obvious. (Try to prove it from scratch...) One can try to determine the image of this map. This leads to the profinite Grothendieck–Teichmüller group, which is sometimes conjectured to be isomorphic to $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. The étale fundamental group enters because one considers étale coverings of the projective line minus three points (i.e. the affine line minus two points). Over an algebraically closed field of characteristic zero, this has étale fundamental group $\widehat{F_2}$ (think of the loops around two of the removed points as its generators). But now study the projective line minus three points over the rationals instead of their algebraic closure; this makes the Galois group of the rationals enter. The projective line minus three points enters because, by a construction due to Belyi, an algebraic curve can be defined over a finite extension of the rationals if and only if it can be realized as an étale covering of the projective line minus three points. The idea is to modify a map to the projective line so that its ramification gets concentrated in only three points. You will find the rest of the story for example in surveys by Leila Schneps on Grothendieck–Teichmüller theory.
{ "source": [ "https://mathoverflow.net/questions/386011", "https://mathoverflow.net", "https://mathoverflow.net/users/37214/" ] }
386,311
There are lots of structures whose names are suffixed by "oid": off the top of my head, matroid, greedoid, perfectoid, causaloid... Who started this? AFAIK "matroid", by Whitney, was a start, and led the way to several combinatorial -oids. However, the cardioid has had its name for some centuries now, so the use of the suffix is old. Still, it seems a bit different to name a family of specific objects than to name some sort of abstract structure.
The suffix "-oid" means the same as "quasi", so "resembling", "like". A groupoid is a quasi-group, like a group. There are hundreds of words in that category, covering many scientific disciplines. In the "early use of mathematical words" database I find: 250 BC: conchoid 200 BC: cissoid 400: trapezoid 1650: trochoid 1672: ellipsoid 1685: cochleoid 1830: epicycloid 1836: paraboloid 1837: strophoid 1844: centroid 1872: geoid, gyroid 1878: nephroid 1879: deltoid 1881: prismatoid 1891: cuboid 1935: matroid The Woid on-Oid by William Safire comments on the proliferation of -oids: We all know that the use of -oid to create a noun has been growing by leapoids and bounds. Among the earliest were android, or "automaton in human form," created in 1727, and asteroid, "small body like a star," in 1802. Scientists and mathematicians were especially attracted to the ending, juggling their cylindroids, globoids and spheroids.
{ "source": [ "https://mathoverflow.net/questions/386311", "https://mathoverflow.net", "https://mathoverflow.net/users/99236/" ] }
386,758
In Chapter I.9 of Chandler–Magnus, "The History of Combinatorial Group Theory", a number of important mathematicians in the early history of the development of group theory and sources for their obituaries are given. For example, we certainly find an entry Dehn, Max, 1878-1951. For other names, less information is known, such as Pick, G., 1859-1943(?). This latter question mark reflects the fact that Georg Pick died in the Theresienstadt concentration camp in 1942, and finding this information might have been difficult at the time of the writing of the book (1982). All names in this list have a source for where their obituary may be found, and at least one of a birth year or death year is present -- except for one name. This name is listed simply as H. Vogt: ?-?., with no further information. Curiosity piqued, this gives my question:

Who was H. Vogt? What were his mathematical contributions?

Here are the clues I've got so far. The most relevant piece of information is the following paper:

Vogt, H., Sur les invariants fondamentaux des équations différentielles linéaires du second ordre., Ann. Sci. École Norm. Sup. (3) 6 (1889), 3–72. (Thèse, Paris)

This paper is the only paper cited by Chandler and Magnus for Vogt, and is hence the only publication I am certain is by the desired H. Vogt. It also appears to have been his Ph.D. thesis. No result can be found on Mathematics Genealogy matching this.

There are a number of matches on MathSciNet for publications by an H. Vogt; the earliest is from 1879, by a Heinrich Vogt, and this could in principle be the same H. Vogt as above. The latest that could conceivably be by our H. Vogt is from 1923 -- this is again on differential equations, so seems very likely to be by the same author! This would give a (very!) rough idea of (1860-1930) as the lifespan of our dear H. Vogt -- perhaps this helps the search.

One idea is that H. Vogt could possibly be related to (father of?) Wolfgang Vogt, a young German mathematician whose last paper was in 1914, and who may well have perished, as did so many other young German academics at the time, in World War I, such as Werner Boy, of Boy's surface fame, and Hugo Gieseking. The topic of his 1906 Ph.D. thesis seems -- at least on a surface level -- somewhat related to what H. Vogt did, especially if some of the other publications on MathSciNet were by the same H. Vogt.

Note: there is a 1932 paper by someone called H. Vogt, namely Vogt, H., Max Wolf., Astronomische Nachrichten 247, 313-316 (1932). ZBL59.0039.09. However, this seems to be by the Nazi astronomer Heinrich Vogt (1880-1968), who seems unrelated (and likely did not write an article about differential equations at the age of 1).
A short obituary of Henri Gustave Vogt can be found here: https://gallica.bnf.fr/ark:/12148/bpt6k200265z/f5.item His 1924 discourse on Henri Bazin is here: https://gallica.bnf.fr/ark:/12148/bpt6k200262t/f6.item Some comments on his entry in the Académie de Stanislas in 1921 are here: https://gallica.bnf.fr/ark:/12148/bpt6k2002594/f57.item.r=Vogt He received the Légion d'Honneur: http://www2.culture.gouv.fr/public/mistral/leonore_fr?ACTION=RETROUVER&FIELD_1=NOM&VALUE_1=VOGT&NUMBER=17&GRP=0&REQ=%28%28VOGT%29%20%3aNOM%20%29&USRNAME=nobody&USRPWD=4%24%2534P&SPEC=9&SYN=1&IMLY=&MAX1=1&MAX2=1&MAX3=100&DOM=All This says that he was born on 24 January 1864 in Sermaize (Marne). His "acte de naissance" (birth certificate) says that his correct name is "Henry Gustave". He is the son of Jacques Georges Vogt and Charlotte Gabrielle Cavelier. The last document in the Légion d'Honneur file contains a short biography! Here is a photograph of him as a young student in 1881: https://archive.org/details/ENS01_PHOD_1_1_30

New information: I have obtained a scan of the front and back pages of his PhD thesis, where we learn that:

The "Commission d'Examen" (examination committee) was composed of Hermite (président) and Appell and Poincaré (examinateurs).

The thesis is dedicated to Appell: "À Monsieur Appell, Hommage de respectueuse reconnaissance" ("To Monsieur Appell, in respectful gratitude").

I would be tempted to deduce that Appell was his advisor.
{ "source": [ "https://mathoverflow.net/questions/386758", "https://mathoverflow.net", "https://mathoverflow.net/users/120914/" ] }
386,921
I am 16 years old at the time of writing (so I have no supervisors to seek advice from) and I have written a mathematics research paper, which I plan on submitting to a journal for publication. I asked an identical question on Academia.SE and I was advised to ask the question here. For a couple of the assertions that I make, I use proofs by induction. Now, in school we're encouraged to write proofs by induction in the following (rigid) format:

Base case: ...
Assumption(s): ...
Inductive step: ...
Conclusion: ...

I have noticed that no research articles that I have seen have written proofs by induction using this sort of format. The authors usually make it flow much more smoothly, e.g. 'For the base case, the result is trivial. Now assume the result holds for some $n=k$, so that .... Now consider the expression for $n=k+1$ ... and by the inductive hypothesis this equals ... hence the result is true by mathematical induction.' So, is it good practice to write proofs by induction in the pretty rigid structure I first outlined, or is it OK/better to write the proofs more naturally so that they flow better?
Writing a proof for school is very different from writing a proof for a research paper. Perhaps the most important distinction is that the audiences are completely different. In school, your audience is your instructor, whose job is to assess your ability to learn and apply a principle. The audience of a research article is the professional mathematical community, where favorable viewing of your work hinges on novelty of ideas, correctness, readability, and possibly elegance, not rigid adherence to one person's notion of how to organize thoughts. With that in mind, I know I would prefer to read a proof with a nice natural flow instead of one that is written in rigid adherence to one specific instructor's preferences. When you finish writing your paper, I recommend that you send your paper to a professional researcher with whom you have a good working relationship, someone who can give you candid, meaningful, and constructive feedback. As you go about writing your paper, I recommend reading as many papers in professional journals as you can so that you get a sense for what good writing looks like. This second bit of advice is tricky without knowing what area(s) of research interest you. So perhaps the professional researcher with whom you have a good working relationship might direct you to some examples of quality writing.
{ "source": [ "https://mathoverflow.net/questions/386921", "https://mathoverflow.net", "https://mathoverflow.net/users/167114/" ] }
387,533
The title is a bit deceiving, because what I really mean is the parallel transport that corresponds to the Levi–Civita connection. This is in the vein of many other questions on MathOverflow: What is the Levi-Civita connection trying to describe? What is torsion in differential geometry intuitively? Rolling without slipping interpretation of torsion But the focus is different. Let me summarize my understanding given the answers in the questions above.

(Very Rough) Summary of Answers in Previous Questions

There exists an interpretation in terms of $G$-structures, as in the chosen answer (by Chris Schommer-Pries) to What is torsion in differential geometry intuitively?
There exists an interpretation as having some universal property, as described in the top answer (by Robert Bryant) to What is the Levi-Civita connection trying to describe?
There exists a deceiving but appealing interpretation by parallelograms whose sides don't really lie in the same space, as in the answer by Gabe K to What is the Levi-Civita connection trying to describe? (Gabe K made a heroic effort to make sense of the nonsensical diagram, and I thank him dearly.)
There exists an interpretation regarding rolling the shape on a surface (Rolling without slipping interpretation of torsion).

But ultimately, none of that is something that I can intuitively sell to an undergraduate, and by undergraduate I really mean my heart. In the bottom of my heart, I need a better explanation, one that starts with desirable properties, and then proceeds through existence and uniqueness.

Outline of the Type of Intuition I Desire

I want to start with some desirable behaviors, which I allow to be external (i.e., to reference a given embedding of the Riemannian manifold into $\mathbb{R}^n$), and then say that the only notion of parallel transport that satisfies these conditions must be the Levi–Civita connection. (Any reasonable notion of parallel transport will respect the metric, so I'm really thinking of the torsion-free condition.) A base case of a desirable condition is that for the Riemannian manifold $\mathbb{R}^n$, parallel transport is the trivial thing. (If one identifies the tangent bundle with $\mathbb{R}^n\times\mathbb{R}^n$ then for any path $\gamma$ the parallel transport of the tangent vector $(\gamma(0),v)$ at $\gamma(0)$ to $\gamma(1)$ via $\gamma$ is the tangent vector $(\gamma(1),v)$ at $\gamma(1)$.) Next, we would like some way to generalize to a general Riemannian manifold. Let $(M,g)$ be a Riemannian manifold, and let $p\in M$ be a point. Then by the implicit function theorem we can have a chart $f:V\rightarrow U\subset \mathbb{R}^d$ where $0\in V\subset \mathbb{R}^n$, and $p\in U\subset M$, such that $f(0)=p$ and such that $f$ is the identity on the first $n$ coordinates. My next thought is to look at the most intuitive case of torsion-freeness, which is the case of commuting fields $X$ and $Y$. By change of coordinates, we can assume WLOG that on $V$ the vector fields $X$ and $Y$ are defined via the constant functions $X(v)=e_1$ and $Y(v)=e_2$. One can then express $X$ and $Y$ on $M$ via the derivative of $f$. But I'm missing multiple components to proceed. So let me ask this in terms of several more explicit questions.

Questions

1. If a connection satisfies $\nabla_XY=\nabla_YX$ for any commuting pair of vector fields $X$ and $Y$, is it torsion-free? (In other words, if you're torsion-free on commuting vector fields, are you torsion-free for all vector fields?)
2. What intuitive desirable condition (that is allowed to use a given embedding of $M$ into some $\mathbb{R}^d$), combined with, or perhaps generalizing, the desired behavior of parallel transport on Euclidean space, would uniquely determine it as satisfying $\nabla_XY=\nabla_YX$ for commuting vector fields? (Perhaps something about geodesics? Or volumes? I don't really know what the missing component is here.) I feel like Ben McKay's answer to What is the Levi-Civita connection trying to describe? is coming close to what I want, but I did not get to the bottom of it. It appeared at first that he was saying that the Levi–Civita parallel transport is simply parallel transporting in the ambient space and then projecting to the tangent plane. But in retrospect, my interpretation is clearly wrong. (Imagine for example an upward-pointing vector on the equator of a sphere being parallel transported to the top. If you parallel transport in $\mathbb{R}^3$ you'll get a vector pointing up, which projected to the tangent space will be the $0$ vector.)

3. A little more vaguely, in case you have an entirely different notion in mind: how would you explain parallel transport to the undergraduate in your heart?
This may not really be an answer that you like, but I think that maybe you misunderstood what Ben McKay was trying to describe. Here is a more explicit, extrinsic description that may help:

Suppose that $M^m\subset\mathbb{E}^n$ is an isometrically embedded submanifold of Euclidean $n$-space. Let $\gamma:(a,b)\to M^m$ be a smooth curve in $M$ and let $v:(a,b)\to\mathbb{E}^n$ be a curve of vectors along $\gamma$, i.e., $v(t)$ lies in the tangent space $T_{\gamma(t)}M$ for all $t\in (a,b)$. Say that $v$ is parallel (along $\gamma$) if $v':(a,b)\to\mathbb{E}^n$ is normal to $TM$ along $\gamma$, i.e., $v'(t)\perp T_{\gamma(t)}M$ for all $t\in(a,b)$. In other words, the velocity of $v$ is always perpendicular to the tangent vectors to $M$ at the point of tangency.

Then the (easily proved) proposition is that this notion of a tangent vector field along a curve being parallel along $\gamma$ does not depend on the choice of the isometric embedding, i.e., it is intrinsic to the metric induced on $M$ by its embedding.

More generally, if $v:(a,b)\to\mathbb{E}^n$ is tangent along $\gamma$, then letting $D_\gamma v(t)$ be the orthogonal projection of $v'(t)$ onto $T_{\gamma(t)}M$ yields another curve $D_\gamma v:(a,b)\to\mathbb{E}^n$ that is tangent along $\gamma$, and this operation (actually a derivation) on tangent fields along $\gamma$ depends only on the induced metric on $M$. Since it is independent of the choice of isometric embedding, it is the 'covariant part' of the ambient derivative, i.e., the 'covariant derivative'.

For example, it follows from the definition that if $v$ is a parallel tangent vector field along $\gamma$, then the length of $v$ is constant. Then the existence and uniqueness of 'parallel transport' follow by elementary ODE arguments. The Leibniz rule for the 'covariant derivative' and other properties are easily derived from the definition as well.

Once you know that $\nabla_{\gamma'}v$ for a curve of tangent vectors depends only on the metric, it's natural to want to find a formula for it that uses only the metric and not the (superfluous) isometric embedding. That is what leads to the usual characterizations.
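To see this extrinsic recipe in action, here is a minimal numerical sketch of my own (not part of the answer): it transports a tangent vector around a latitude circle on the unit sphere by repeatedly projecting away the normal component, and recovers the classical holonomy angle $2\pi(1-\cos\theta)$.

```python
import numpy as np

def transport_around_latitude(theta, steps=20000):
    # The curve: the latitude circle at polar angle theta on the unit sphere.
    t = np.linspace(0.0, 2 * np.pi, steps + 1)
    gamma = np.column_stack([np.sin(theta) * np.cos(t),
                             np.sin(theta) * np.sin(t),
                             np.cos(theta) * np.ones_like(t)])
    # Initial tangent vector at gamma[0]: the unit vector pointing "due north".
    v = np.array([np.cos(theta), 0.0, -np.sin(theta)])
    v0 = v.copy()
    for p in gamma[1:]:
        v = v - np.dot(v, p) * p   # project onto the tangent plane at p ...
        v /= np.linalg.norm(v)     # ... the projection shrinks v only at O(dt^2)
    # Angle between the initial vector and its transport around the full loop.
    return np.arccos(np.clip(np.dot(v0, v), -1.0, 1.0))

theta = 1.0
print(transport_around_latitude(theta))  # ~ 2.8884
print(2 * np.pi * (1 - np.cos(theta)))   # expected holonomy angle
```

Only the projection step uses the embedding; the limiting answer depends only on the induced metric, in line with the proposition above.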
{ "source": [ "https://mathoverflow.net/questions/387533", "https://mathoverflow.net", "https://mathoverflow.net/users/98901/" ] }
388,652
I suppose there was at least one point in our lifetime where we resorted to Mathematica for help with an integral (unless you chose not to have the pleasure of using the continuum in your mathematical field of research). I am wondering, however, whether behind a computer algebra system like Mathematica that does symbolic computations there is some high-level idea of how it would approach an awkward integral and try to find an antiderivative. I focus here on the integration part, since I think I would understand how to implement symbolic differentiation, at least conceptually, but integration seems rather mysterious to me.
An overview by one of the developers of Mathematica, focusing on definite integrals, is at Symbolic definite integration: methods and open issues. Mathematica knows all the entries in Gradshteyn-Ryzhik, and more generally uses the Marichev-Adamchik Mellin transform to express the integral in terms of Meijer G functions, which are then simplified if possible. An example is worked out here.
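For readers without Mathematica at hand, SymPy exposes an implementation of the same Marichev–Adamchik Meijer $G$ pipeline for definite integrals, which makes the method easy to experiment with. A minimal sketch (the function `meijerint_definite` is SymPy's entry point to its $G$-function integrator, to my understanding):

```python
from sympy import symbols, exp, oo
from sympy.integrals.meijerint import meijerint_definite

x = symbols('x', positive=True)

# Rewrite the integrand as a Meijer G-function, integrate via the Mellin
# convolution theorem, then simplify the resulting G-function if possible.
result, condition = meijerint_definite(exp(-x**2), x, 0, oo)
print(result)  # sqrt(pi)/2
```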
{ "source": [ "https://mathoverflow.net/questions/388652", "https://mathoverflow.net", "https://mathoverflow.net/users/119875/" ] }
389,117
Consider the following diagram of algebraic varieties: $$\mathbb{A}^0 \to \mathbb{A}^1 \rightrightarrows \mathbb{A}^2$$ Here the first arrow is the inclusion of the origin into the line, and the next two are the inclusions of the line into the plane as the $x$- and $y$-axes. Does this diagram have a colimit in the category of schemes? (The first arrow, giving the inclusion of the point, is not relevant; it just so happens that it was there when I met this question.)
We can rewrite the coequalizer as the pushout of the diagram $$ \begin{array}{ccc} X & \to & \mathbb A^2 \\ \downarrow & & \\ \mathbb A^1 & & \\ \end{array} $$ where $X$ is the union of the $x$ - and $y$ -axis, and the vertical map quotients by the involution swapping the two components. The category of affine schemes has all pushouts: they are given by fibered product of coordinate rings. But a pushout in the category of affine schemes is not necessarily a pushout in the category of all schemes. A sufficient condition for a pushout of affine schemes to be a pushout in the category of schemes can be found in a paper of Karl Schwede ("Gluing schemes and a scheme without closed points", Theorem 3.4): it suffices that one leg of the pushout is a closed immersion. So we are fine. As noted by Gro-Tsen in a comment, the pushout can be written explicitly as $\mathrm{Spec}(R)$ where $R = \{ f \in k[x,y] : f(t,0)=f(0,t)\}$ .
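As a quick sanity check on the explicit description of $R$, here is a throwaway SymPy sketch (my own illustration, not from the cited sources); the membership test is just the defining condition $f(t,0)=f(0,t)$:

```python
from sympy import symbols, expand

x, y, t = symbols('x y t')

def in_R(f):
    """Membership in R = {f in k[x,y] : f(t,0) = f(0,t)}, the coordinate
    ring of the pushout gluing the two axes along the swap involution."""
    return expand(f.subs({x: t, y: 0}) - f.subs({x: 0, y: t})) == 0

print(in_R(x + y))     # True: restricts to t on both axes
print(in_R(x * y**2))  # True: vanishes on both axes
print(in_R(x))         # False: t on the x-axis, 0 on the y-axis
```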
{ "source": [ "https://mathoverflow.net/questions/389117", "https://mathoverflow.net", "https://mathoverflow.net/users/4707/" ] }
391,454
For two sets $O$ and $A$, we will call a category structure a collection of functions ${\sf dom}:A\to O,\ {\sf cod}:A\to O,\ {\sf 1}:O\to A,\ \circ:A\times_OA\to A$ satisfying the usual axioms for a category. Can we parametrize the number of category structures (up to iso or equivalence) on two sets $O$ and $A$ in terms of their cardinalities? If a closed-form solution is too much to ask, can we get asymptotics for finite sets? Denote by ${\sf Cat}^\cong(n,m)$ the number of isomorphism classes of category structures on two sets $O$ and $A$ as above with $|O|=n$ and $|A|=m$. We trivially have that $n\leq m$. For $n=m=0$ we have exactly one category, and for $n=m=1$ we have a category that is unique up to iso. In general for $n=m$ we have ${\sf Cat}^\cong(n,m)=1$, but all these observations are trivial. For $n=1$ and $1\leq m$ we are counting the number of monoids on a set with $m$ elements, which I tried to search but was unable to find -- I did find this related question, and the comments by Qiaochu Yuan give a lower bound of $B^\leq_m\leq{\sf Cat}(1,m)$ where $B^\leq_m$ is the $m^{th}$ ordered Bell number. As the comment by Douglas Zare suggests, this indicates that asymptotics are the best we should hope for, since the ordered Bell numbers grow faster than exponentially in $m$. The case for semigroups is cutting edge for a set with $12$ elements by a paper linked in the answer to the linked question, so any asymptotics will have to use the existence of units to hopefully shave things down. The second linked paper gives a closed-form solution for the number of nilpotent semigroups of degree $3$, so adding mild restrictions seems to potentially allow for more tractable counting. Denote by ${\sf Cat}^\simeq(n,m)$ the number of equivalence classes of category structures on two sets $O$ and $A$ as above. We trivially have ${\sf Cat}^\simeq(n,m)\leq{\sf Cat}^\cong(n,m)$ with equality holding for $n=m$, but beyond this I don't see much concrete to say. It may be useful to use the cardinalities of the hom-sets between objects in the category structures instead of the cardinality of the overall set of arrows, but beyond more obvious observations nothing jumps out at me using this approach either. For $1<n$ and $n<m$ it is trivial (and kind of fun) to count some small cases, but I don't know how to search the OEIS to see if the sequence is already catalogued. Any assistance is appreciated. First linked paper: Distler A., Jefferson C., Kelsey T., Kotthoff L. (2012) The Semigroups of Order 10. In: Milano M. (eds) Principles and Practice of Constraint Programming. CP 2012. Lecture Notes in Computer Science, vol 7514. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33558-7_63 Second linked paper: Distler A., Mitchell J. D. (2012) The Number of Nilpotent Semigroups of Degree 3. In: The Electronic Journal of Combinatorics, Volume 19, Issue 2 (2012). https://doi.org/10.37236/2441
The problem of counting semigroups and monoids of order $n$ up to isomorphism and anti-isomorphism (i.e., contravariant equivalence) is a very classical problem whose answer is conjectured, but nobody has made serious progress in proving it over the last 60 years. The analogous situation in group theory is that virtually all finite group theorists agree that the vast majority of groups of order less than or equal to $n$ are $p$-groups and likely $2$-groups, but nobody knows how to prove it. A semigroup is called $3$-nilpotent if it has a zero (aka absorbing element) and the product of any $3$ elements is zero. Notice that if you build a table in which the product of any three elements is zero, then the associative law is automatically satisfied, and so it is very, very easy to build such semigroups. In 1976, Kleitman, Rothschild and Spencer showed that asymptotically all associative multiplication tables on the set $\{1,\ldots, n\}$ are $3$-nilpotent. This is not counting up to isomorphism, but the feeling is that these kinds of semigroups admit few automorphisms and so that is not overcounting by much. So the conjecture that has been around forever is that the proportion of semigroups of order $n$ which are $3$-nilpotent tends to $1$. In a beautiful paper Distler and Mitchell gave a closed formula for the number of $3$-nilpotent semigroups up to isomorphism and also up to isomorphism and anti-isomorphism. So conjecturally these are the asymptotics for the number of finite semigroups. But nobody knows how to prove this conjecture. To the best of my knowledge (though I am a few years out of date), we only know the numbers of semigroups of order up to 10. The semigroups of order 10 have been counted up to isomorphism and anti-isomorphism in Distler et al, and the count is the astronomical number 12,418,001,077,381,302,684. Almost all of these are $3$-nilpotent. So what does this have to do with counting categories? Well, a one-object category is a monoid and natural equivalence is isomorphism (and anti-isomorphism is contravariant equivalence), so this is a special case of the problem. Ah, but you are doing semigroups and we don't like semigroups because they don't even know who they are, lacking an identity (bad pun alert). Well, if you adjoin an identity you get an embedding from the set of isomorphism classes of semigroups of order $n-1$ into the set of isomorphism classes of monoids of order $n$ (same with isomorphism/anti-isomorphism classes). So the number of monoids of order $n$ up to isomorphism is at least as big as the number of semigroups of order $n-1$. Now I would conjecture that almost all monoids of order $n$ arise this way, and so the number of monoids of order $n$ should behave like the number of $3$-nilpotent semigroups of order $n-1$. But again this is conjectural. I don't know if this monoid version of the conjecture has been published anywhere, and since we don't have conferences these days I can't really ask around. So here is some evidence. In Distler et al it is shown that the number of isomorphism/anti-isomorphism classes of semigroups of order $9$ is 52,989,400,714,478. The number of isomorphism/anti-isomorphism classes of monoids of order $10$ is computed in Distler et al to be 52,991,253,973,742. So in summary, we most likely know the asymptotics for monoids of order $n$, but it may be 100 years before anybody proves it. Added.
It was suggested in the comments to count Morita equivalence classes of categories instead, but that doesn't help matters, because two finite monoids are Morita equivalent if and only if they are isomorphic. The indecomposable projective right $M$-sets are of the form $eM$ with $e$ idempotent, and any Morita equivalent monoid must be isomorphic to one of the form $eMe$ for a projective generator $eM$ of $M$-sets. But if $e\neq 1$, then $|eM|<|M|$, and so $eM$ cannot generate $M$.
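To see concretely why naive enumeration stalls almost immediately, here is a throwaway brute-force count of labeled semigroups (associative multiplication tables) on an $n$-element set; it should reproduce the known labeled counts $1, 8, 113$ for $n\le 3$, while $n=4$ already means scanning $4^{16}$ tables:

```python
from itertools import product

def count_labeled_semigroups(n):
    """Count associative binary operations on {0,...,n-1} by checking
    all n^(n*n) multiplication tables; feasible only for tiny n."""
    elems = range(n)
    count = 0
    for table in product(elems, repeat=n * n):
        # op(a, b) is stored at table[a*n + b]
        if all(table[table[a*n + b]*n + c] == table[a*n + table[b*n + c]]
               for a in elems for b in elems for c in elems):
            count += 1
    return count

print([count_labeled_semigroups(n) for n in (1, 2, 3)])  # [1, 8, 113]
```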
{ "source": [ "https://mathoverflow.net/questions/391454", "https://mathoverflow.net", "https://mathoverflow.net/users/92164/" ] }
391,627
Most texts on category theory define a (small) diagram in a category $\mathcal{A}$ as a functor $D : \mathcal{I} \to \mathcal{A}$ on a (small) category $\mathcal{I}$, called the shape of the diagram. A cone from $A \in \mathcal{A}$ to $D$ is a morphism of functors $\Delta(A) \to D$, a limit is a universal cone. Observe, however, that composition in $\mathcal{I}$ is never used to define the limit. One can therefore argue, and this is what I would like to discuss here, that directed multigraphs ("categories without composition") are better suited as the shapes of diagrams: If $\Gamma$ is a directed multigraph, then a diagram of shape $\Gamma$ in $\mathcal{A}$ is a morphism of graphs $D : \Gamma \to U(\mathcal{A})$, where $U$ forgets composition. A cone from $A \in \mathcal{A}$ to $D$ is a morphism of diagrams $\Delta(A) \to D$, a limit is a universal cone. In my category theory textbook (published 2015) I chose this definition, which leads to an equivalent theory, but offers several advantages over the more common definition: As already indicated, the limit of a functor $\mathcal{I} \to \mathcal{A}$ in $\mathcal{A}$ is just the limit of the graph morphism $U(\mathcal{I}) \to U(\mathcal{A})$ in $\mathcal{A}$, so it seems awkward to have a category structure around when we do not use it at all. Conversely, the limit of a graph morphism $\Gamma \to U(\mathcal{A})$ is just the limit of the corresponding functor $\mathrm{Path}(\Gamma) \to \mathcal{A}$, so in the end we end up with the same limits. In particular, the definition cannot be totally wrong, and much of the discussion will be more of a philosophical or pedagogical nature. When we talk about specific types of diagrams and limits, we never really care about composition, and also never write down identities, since they are not relevant at all. For example, binary products are limits of shape $$\bullet ~~ \bullet$$ which is just a graph with two vertices and no edges. We don't need to write down identity morphisms in this approach. Arbitrary products are similar. An equalizer is a limit of the shape $$\bullet \rightrightarrows \bullet$$ which is just a graph with two vertices and two parallel edges between them. A fiber product is a limit of the shape $$\bullet \rightarrow \bullet \leftarrow \bullet.$$ Limits of shape $$\cdots \to \bullet \to \bullet \to \bullet$$ also appear very naturally. Put differently, the typical indexing categories you will find in most texts on category theory are actually already the path categories on directed multigraphs. For me this is the most convincing argument. Barr and Wells argue in their book Toposes, Triples and Theories in a similar way: Limits were originally taken over directed index sets—partially ordered sets in which every pair of elements has a lower bound. They were quickly generalized to arbitrary index categories. We have changed this to graphs to reflect actual mathematical practice: index categories are usually defined ad hoc and the composition of arrows is rarely made explicit. It is in fact totally irrelevant and our replacement of index categories by index graphs reflects this fact. There is no gain—or loss—in generality thereby, only an alignment of theory with practice. Let's talk about interchanging limits. The usual formulation starts with a functor $D : \mathcal{I} \times \mathcal{J} \to \mathcal{A}$. This includes, in particular, all "diagonal" morphisms $D(f,g)$ for morphisms $f$ in $\mathcal{I}$ and $g$ in $\mathcal{J}$.
However, in practice, I only want to define $D(f,j)$ and $D(i,g)$, and I don't want to show that $D$ is a functor. For example, interchanging fiber products should be about commuting diagrams of shape $$\begin{array}{ccccc} \bullet & \rightarrow & \bullet & \leftarrow & \bullet \\ \downarrow && \downarrow && \downarrow \\ \bullet & \rightarrow & \bullet & \leftarrow & \bullet \\ \uparrow && \uparrow && \uparrow \\ \bullet & \rightarrow & \bullet & \leftarrow & \bullet\end{array}$$ which actually appear in practice (see also here). I don't want to bother about all the diagonal morphisms (and the identities) in that diagram, and actually nobody does when applying "interchanging limits" in concrete examples. The theorem for directed multigraphs is as follows: Let $\Gamma,\Lambda$ be directed multigraphs. Consider the tensor product $\Gamma \otimes \Lambda$ (pair the vertices, pair edges in $\Gamma$ with vertices of $\Lambda$, and pair edges in $\Lambda$ with vertices in $\Gamma$) and a diagram $D$ of shape $\Gamma \otimes \Lambda$ in $\mathcal{A}$ such that for all edges $i \to i'$ in $\Gamma$ and edges $j \to j'$ in $\Lambda$ the diagram $$\begin{array}{ccc} D(i,j) & \rightarrow & D(i,j') \\ \downarrow && \downarrow \\ D(i',j) & \rightarrow & D(i',j') \end{array}$$ commutes. Then we have $\lim_{i \in \Gamma} \lim_{j \in \Lambda} D(i,j) \cong \lim_{(i,j) \in \Gamma \otimes \Lambda} D(i,j)$; if the left side exists, then so does the right side, and they are isomorphic. This is a bit vague, but for me it seems awkward and random, almost like a "type error", that categories have two purposes in the usual theory: One purpose is to collect structured objects and their morphisms. The other purpose is to axiomatize diagram shapes. Similarly, functors have two purposes in the usual theory. I find it quite pleasant when the second purpose is fulfilled by a different thing. Also connected to that is the observation that shapes are usually small, but categories tend to be large. Although the theory works out very well, I am meanwhile not so confident anymore about my decision, and I am thinking about changing it in the next edition of the book. So here are some disadvantages: 99% of the category theory literature (textbooks and research papers) defines diagrams as functors, i.e., their shapes are just small categories. It is awkward to do something which nobody else does, and it can also be irritating for readers. I didn't bother about this too much when writing the book, but I am increasingly worried about this issue. Directed diagrams/colimits are indexed by directed partial orders, and here we really want a functor to ensure compatibility between the various morphisms. Barr-Wells offer a workaround in Chapter 1, Section 10, but they admit themselves that it is slightly awkward. The theory of Kan extensions: The left Kan extension of a functor $F : \mathcal{I} \to \mathcal{A}$ along a functor $G : \mathcal{I} \to \mathcal{J}$ at $J \in \mathcal{J}$ can usually be described as the colimit of the functor $G \downarrow J \to \mathcal{I} \to \mathcal{A}$, and it seems artificial to just consider the underlying graph of $G \downarrow J$ here. Hopefully this provides enough background for the following questions. Can you list further mathematical advantages and disadvantages of taking directed multigraphs as the shapes of diagrams and limits/colimits? Can you name pedagogical advantages and disadvantages of this definition?
Can you list other textbooks on category theory which use this definition? The book Toposes, Triples and Theories by Barr and Wells is an example, see Chapter 1, Section 7. They also define sketches in a "composition-free" way in Chapter 4. Not a book, but Grothendieck also defines diagrams this way in his famous Tohoku paper Sur quelques points d'algèbre homologique, Section 1.6. (More general side question) For those of you who have already written a book or monograph: what criteria did you use to decide whether a common definition should be changed? And how did you decide in the end?
I think focusing on graphs is not a good idea. We focus on functors for very good reasons. Here are a few: Many diagrams which are used in practice are functors between categories, and forgetting that they are compatible with composition would seem artificial in many cases. We want to compute colimits. A very fundamental tool for this is the notion of colimit-cofinal functor: those functors $u:A\to B$ such that, for any functor $f:B\to C$, the colimit of $f$ exists if and only if the colimit of $fu$ exists, in which case both are isomorphic in $C$. Those functors are characterized by connectivity properties of comma categories, the formation of which is sensitive to compositions. Furthermore, (co)limits are special cases of pointwise left Kan extensions, the formation of which cannot be determined by underlying graphs. Category theory has been used in homotopical contexts for a long time, and we now have the theory of $\infty$-categories to explain conceptually how this is possible. Many features of ordinary categories (such as the theory of (co)limits and Kan extensions) are robust enough to be promoted to $\infty$-categories. However, only the version with functors can be transposed to higher categories: if you have a functor $f:I\to C$ from a category $I$ to an $\infty$-category $C$, it is not true that the colimit of $f$ in $C$ can be determined by the restriction of $f$ to the underlying graph of $I$ alone (think of the kind of homology you would get if you defined cellular homology of a CW-complex by stopping short at its $1$-skeleton). In ordinary category theory, this holds because there is no ambiguity about the notion of composition of two maps in $C$, whereas in a higher category you may have a space/category of possible compositions of two maps that cannot be ignored. Therefore, if we have in mind possible generalizations of category theory to homotopical or higher categorical contexts, focusing on underlying graphs is very misleading. There are instances where the point of view of graphs is very useful, though. For instance, there is Nori's construction of an abelian category of motives, which relies heavily on singular cohomology seen as a map of graphs; this is documented in this book of Annette Huber and Stefan Müller-Stach, for instance. If you really want to focus on graphs, then I guess that, in a monograph on category theory, you may write a chapter explaining why we naturally speak the language of graphs when we express ourselves in the language of category theory. For instance, the category of small categories is monadic over the category of graphs. In particular, any category is a colimit of free categories. There are many fundamental examples of free categories, and it is true that, when we write a diagram explicitly, we only write the images of generators, because this is what working with free objects is good for. It is also interesting to see how categorical constructions (such as colimits) are compatible with the presentations of categories as colimits of free ones. But you will see then that the colimits of interest in this respect are in fact those which are equivalent to their corresponding 2-colimits (i.e. you will start to do homotopy theory where weak equivalences are equivalences of categories). Expressing a given category as a 2-colimit of elementary free categories can be instructive. Cases where we have a nice inductive procedure of this kind are interesting in practice: this is what happens with direct Reedy categories, for instance.
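Since both the question and this answer lean on the free (path) category $\mathrm{Path}(\Gamma)$ on a directed multigraph, a small illustrative sketch may help; this is a hypothetical helper of my own, not from any library, and it truncates at a maximum path length since $\mathrm{Path}(\Gamma)$ is infinite as soon as $\Gamma$ contains a cycle:

```python
from collections import defaultdict

def path_category(edges, max_len):
    """Morphisms of the free category Path(Gamma) on a directed multigraph:
    identities are empty paths, and composition is concatenation of
    composable edge sequences (listed here up to length max_len)."""
    out = defaultdict(list)
    vertices = set()
    for s, t, lbl in edges:
        out[s].append((t, lbl))
        vertices.update((s, t))
    morphisms = [(v, v, ()) for v in sorted(vertices)]  # one identity per vertex
    frontier = [(s, t, (lbl,)) for s, t, lbl in edges]  # the generating edges
    for _ in range(max_len):
        morphisms += frontier
        frontier = [(s, t2, word + (lbl,))
                    for s, t, word in frontier
                    for t2, lbl in out[t]]
    return morphisms

# The equalizer shape (a parallel pair) followed by one more edge:
for m in path_category([("a", "b", "f"), ("a", "b", "g"), ("b", "c", "h")], 2):
    print(m)  # e.g. ('a', 'c', ('f', 'h')) is the composite of f and h
```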
{ "source": [ "https://mathoverflow.net/questions/391627", "https://mathoverflow.net", "https://mathoverflow.net/users/2841/" ] }
391,776
As a sentential logic, intuitionistic logic plus the law of the excluded middle gives classical logic. Is there a logical law that is consistent with intuitionistic logic but inconsistent with classical logic?
No, every consistent propositional logic that extends intuitionistic logic is a sublogic of classical logic. (That’s why consistent superintuitionistic logics are also called intermediate logics.) To see this, assume that a logic $L\supseteq\mathbf{IPC}$ proves a formula $\phi(p_1,\dots,p_n)$ that is not provable in $\mathbf{CPC}$ . Then there exists an assignment $a_1,\dots,a_n\in\{0,1\}$ such that $\phi(a_1,\dots,a_n)=0$ . Being a logic, $L$ is closed under substitution; thus, it proves the substitution instance $\phi'$ of $\phi$ where we substitute each variable $p_i$ with $\top$ or $\bot$ according to $a_i$ . But already intuitionistic logic can evaluate variable-free formulas, in the sense that it proves each to be equivalent to $\top$ or to $\bot$ in accordance with its classical value. Thus, $\mathbf{IPC}$ proves $\neg\phi'$ , which makes $L$ inconsistent.
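As a concrete instance of the argument (my own illustration, not part of the original answer): suppose $L$ proved $\phi(p,q) = p \to q$, which is not a classical tautology since $\phi(1,0)=0$. Substituting $p:=\top$ and $q:=\bot$ gives $L\vdash\top\to\bot$; since already $\mathbf{IPC}\vdash\neg(\top\to\bot)$, the logic $L$ proves $\bot$ and is inconsistent.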
{ "source": [ "https://mathoverflow.net/questions/391776", "https://mathoverflow.net", "https://mathoverflow.net/users/136356/" ] }
392,431
From time to time Mathoverflow allows soft questions because they are arguably best answered by active mathematicians and they can benefit other mathematicians/PhD students/math undergraduates. I think this is such a question. I'm a mathematics student planning to enroll in a good math PhD program this Fall. I have always been extremely disciplined in math and my goal has always been to pursue a math PhD. However, I've had the opportunity to work in computer science, and this has caused some doubts about the significance of my future work in mathematics. I imagine such doubts are not unique to me and that the best place to ask is here, from people who've been through a PhD themselves, who are wiser, and who may possibly have had these same thoughts. (I hope it is clear I am asking this out of good nature and that this is not dismissed as a cynical thing to ask.) My main question: Is pure mathematics useful, specifically, outside of mathematics itself? Instead of giving a definition of "useful," perhaps I can share some doubts I have about the significance of pure mathematics research. It seems to me that in all honesty, pure mathematics does not immediately benefit the population at large in a direct and obvious way. At best, benefits are usually theoretical (e.g., "These methods could ..."). I think that very, very few people actually read and care about the average published pure mathematics paper. I think it's because math papers are hard and it's not clear that they are interesting or useful to math as a whole or to the future of humanity. There are very obvious exceptions, for example, for papers like Fermat's Last Theorem, which are arguably achievements for humanity. But most papers are objectively not of this level of significance and may not always contribute to major problems. It seems that the only reason we, as a population, care about mathematics is because of the "cool" open problems which are simple to understand but difficult to prove. But this accounts for only a very small portion of active and successful mathematical work (since math papers don't always try to solve such problems, because they're very hard). So doesn't this imply that my work as a future research mathematician is actually not useful for the future of humanity? It seems that pure mathematics was originally created to solve practical and interesting problems, and that as we turned to abstraction as a tool to solve things (because abstraction is a very useful problem-solving tool), we have arrived, many years later, at nested layers of subproblems of subproblems, nested so deeply that the problems of these areas are hard to understand and are not obviously useful for the world or for anything outside of that area of mathematics itself. It seems that mathematics is a science that studies itself, and so at a certain point, it does not have an immediate practical use outside of itself. I can't be the only math person to have ever had these thoughts. As a hardcore pure math person it almost feels like a sin to have such doubts (not literally of course). I would very much like to be wrong, to learn from anyone's objections, and to do my PhD as I planned (although I obviously can't enroll with these doubts and will just continue working in CS). This leads to my secondary questions: Have any mathematicians ever had these thoughts? How did they reconcile these thoughts with their career choice?
This is not really an answer to the question as asked, but I believe it's important and relevant to your problem, and too long for a comment. I will not here express any opinion about the validity or importance of your doubts, or share any of my own beliefs about them. Instead, the point I want to make at the moment is that, in my opinion, it is possible to pursue a PhD and a career in mathematics, and believe that one is benefiting the world thereby, while also believing that one's own research in pure mathematics is completely useless (regardless of the validity, or lack thereof, of the latter belief). The point is that the majority of mathematicians in academia do not spend all of their time doing research; most of them also spend time teaching undergraduates. If they work at a liberal arts college, they may spend more time teaching than doing research. I believe it's inarguable that mathematics education is important for students, and those of us who teach them are benefiting the world. One might say, then, why do research at all? Aside from the obvious answer that we enjoy it, I believe our research benefits our students as well (and many universities also believe this). This is particularly true when we are able to create opportunities for students to research with us (an experience from which they can learn a lot, independently of the value or lack thereof of the research they do -- like perseverance, problem-solving skills, etc.). It also makes us better teachers, by keeping us excited about the subject, giving us new ideas for ways to improve our classes, keeping us connected to a wider community of mathematicians, and giving us ways to convey our excitement about mathematics to our students. Of course, this varies somewhat by university. At some research-focused universities, teaching undergraduates is regarded as something to get out of the way as quickly as possible to focus on research. Someone who approaches teaching with that attitude is probably not benefiting the world by their teaching very much. But there are plenty of colleges and universities where teaching is valued and supported by the administration and the community, and if you are worried about the possible uselessness of your research I would recommend that, in addition to reassuring yourself about the usefulness of pure mathematics, you put some effort into becoming a good teacher, and consider jobs at more teaching-focused schools.
{ "source": [ "https://mathoverflow.net/questions/392431", "https://mathoverflow.net", "https://mathoverflow.net/users/213977/" ] }
392,833
Two closely related, but different tasks in combinatorics are determining the number of elements in some set $A$, and presenting all its elements one by one. Question: What are some works in the combinatorics literature that explicitly consider the naming of these different tasks? As background, it seems that the terminology is sometimes conflicting and confusing. In particular, enumerating can mean either task. In computer science and algorithms it often refers to task 2. In combinatorics it often refers to task 1, but not always. Pólya enumeration is definitely task 1, not task 2 (indeed in this MO question it is pointed out that Pólya enumeration is "not generally a good tool for actually listing"). For what it's worth, Merriam-Webster duly reports that enumerate has meanings 1. to ascertain the number of: COUNT; 2. to specify one after another: LIST. Task 1 is also called counting, which seems unambiguous. But I have seen "computing the number of elements without actually counting them"! Here counting seems to mean tallying, that is, keeping a counter and incrementing by one whenever a new object is seen. Task 2 admits many names, which also may indicate finer variations: listing the elements: presenting a full listing, stored in some form (paper or computer file). generating the elements: a method that creates all the elements, one by one, but may not store them. Perhaps each element is examined, and then thrown away. visiting: similar to the previous, with a tone of computer science and data structures. constructing: similar, but with a more mathematical flavor. It suggests that creating even one object takes some effort, so it is not just "visiting". classifying: somewhat unclear, but often means something like generating the objects and counting how many of them have certain properties. But it might mean simply isomorph-free listing (in a sense, "classifying" the objects into isomorphism classes). Furthermore, task 2 is often emphasized with modifiers like "full", "explicit", "exhaustive", "actually", "one by one", "brute force" to set it apart from task 1. Enumerating may also mean a more abstract task where elements are equipped with indices and/or abstractly arranged in a potentially infinite list, but one never actually constructs the list (as in "enumerate all rational numbers"). To clarify my question: I am not asking for examples where the words are just used, as in "In this paper we enumerate all Schluppenburger contrivances of the second kind". I am interested in works that recognize the difference between these tasks and make a conscious effort in defining terminology, and perhaps explicitly comment on the usage. Here are some that I have found: Knuth (TAOCP 4B §7.2.1) considers many verbs: run through possibilities, look at permutations, enumerate, count, list, make a list, print, examine, generate, visit. He notes that enumerate may mean either task 1 or 2. He settles for generating and visiting for task 2, when the list is not explicitly stored. Cameron (Notes on Counting, p. 1–2) settles with counting for task 1 and generating for task 2. Later in the notes there are scattered instances of enumeration, which mostly seems to be synonymous with counting. Ruskey (Combinatorial Generation, 2003, p. iii) discusses the terminology for task 2. He mentions generate and enumerate but notes that both are overloaded with other meanings. For example, generate can mean generate uniformly at random, and enumerate can mean counting.
Ruskey also considers listing but settles with generation. Kreher & Stinson (Combinatorial Algorithms, 1999, p. 1) define: Generation: construct all the combinatorial structures of a particular type – – a generation algorithm will list all the objects. Enumeration: compute the number of different structures of a particular type – – each object can be counted as it is generated.
I'm not sure if this is exactly what you're looking for, but the main topic of Herb Wilf's article What is an Answer? is how to answer the question "How many ______ are there?" His basic thesis is that an alleged answer to such a question is satisfactory only if it provides an algorithm whose computational complexity is significantly less than the best known algorithm for listing the elements. More precisely, he introduces the following definitions: $\mbox{Count$(n)$} = $ the complexity of the algorithm for calculating $f(n)$ , whether it be given by a formula, an algorithm, et cetera, and $\mbox{List$(n)$} = $ the complexity of producing all of the members of the set $S_n$ , one at a time, by the speediest known method, and counting them. Definition 1: We will say that a solution of a counting problem is effective if $$\lim_{n\to\infty} \frac{\mbox{Count}(n)}{\mbox{List}(n)} = 0.$$ So Wilf certainly makes a clear distinction between the two tasks. On the other hand, he does not devote much attention to terminology per se. On a related but slightly different note, another important combinatorial task is sampling or randomly generating an element. The relation between counting and sampling is the topic of entire books, such as Computational Complexity of Counting and Sampling by István Miklós. In this context, the word counting is pretty consistently used for the task of computing $f(n)$ , although this usage is largely de facto , whereas I understand you to be looking for de jure discussions.
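To make Wilf's $\mathrm{Count}(n)/\mathrm{List}(n)$ contrast tangible, here is a throwaway timing sketch (permutations are chosen only because both tasks are one-liners in Python):

```python
from itertools import permutations
from math import factorial
from time import perf_counter

n = 10

t0 = perf_counter()
by_formula = factorial(n)                            # Count(n): O(n) multiplications
t1 = perf_counter()
by_listing = sum(1 for _ in permutations(range(n)))  # List(n): visits all n! objects
t2 = perf_counter()

print(by_formula, by_listing)                        # both 3628800
print(f"formula: {t1 - t0:.2e}s  listing: {t2 - t1:.2e}s")
```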
{ "source": [ "https://mathoverflow.net/questions/392833", "https://mathoverflow.net", "https://mathoverflow.net/users/171662/" ] }
392,837
If $S⊂[0,1]^2$ intersects every connected subset of $[0,1]^2$ with a full projection on the $x$-axis, must $S$ have a connected component with a full projection on the $y$-axis? An equivalent form: If $S⊂[0,1]^2$ intersects every connected subset of $[0,1]^2$ with a full projection on the $x$-axis and $T⊂[0,1]^2$ intersects every connected subset of $[0,1]^2$ with a full projection on the $y$-axis, must $S\cap T\neq \emptyset$? The motivation of this question: The question came to me when I thought about the Brouwer fixed-point theorem: Let $f=(f_1,f_2)$ be a continuous function mapping $[0,1]^2$ to itself. Then $$S\triangleq\{(x,y)\in[0,1]^2:f_1(x,y)=x\}$$ intersects every connected subset of $[0,1]^2$ with a full projection on the $x$-axis and $$T\triangleq\{(x,y)\in[0,1]^2:f_2(x,y)=y\}$$ intersects every connected subset of $[0,1]^2$ with a full projection on the $y$-axis. My further question: If we assume that $S\subset [0,1]^2$ is a closed set, what is the answer to my question? That is, if a closed set $S⊂[0,1]^2$ intersects every connected subset of $[0,1]^2$ with a full projection on the $x$-axis, must $S$ have a connected component with a full projection on the $y$-axis?
A counterexample to this statement was posted as a comment by Dejan Govc to the Math StackExchange question, Do partitions of a square into two sets always connect one pair of opposite edges? . For $0 < r < \tfrac{1}{2}$ , let $S_r$ be the boundary of the square $\bigl[\tfrac{1}{2}-r,\tfrac{1}{2}+r\bigr]\times \bigl[\tfrac{1}{2}-r,\tfrac{1}{2}+r\bigr]$ , and let $$ S = \{(0,0),(1,0),(0,1),(1,1)\} \;\;\cup \bigcup_{r\in \mathbb{Q}\cap (0,1/2)} S_r. $$ Note that no connected component of $[0,1]^2\setminus S$ has full projection onto the $x$ -axis, and therefore any connected subset of $[0,1]^2$ with full projection onto the $x$ -axis must intersect $S$ . However, no connected component of $S$ has full projection onto the $y$ -axis.
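A quick picture may help here; the sketch below (illustration only, with finitely many rational radii standing in for all of $\mathbb{Q}\cap(0,\tfrac12)$) draws the square boundaries $S_r$ together with the four corner points. Each connected component of $S$ is a single square boundary or a corner point, so none has full projection onto the $y$-axis:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 4))

# A finite sample of the rational radii r in (0, 1/2).
radii = sorted({p / q for q in range(2, 12) for p in range(1, q) if p / q < 0.5})
for r in radii:
    xs = [0.5 - r, 0.5 + r, 0.5 + r, 0.5 - r, 0.5 - r]
    ys = [0.5 - r, 0.5 - r, 0.5 + r, 0.5 + r, 0.5 - r]
    ax.plot(xs, ys, color="black", linewidth=0.4)

# The four corners of the unit square also belong to S.
ax.scatter([0, 1, 0, 1], [0, 0, 1, 1], color="black", s=12)
ax.set_aspect("equal")
plt.show()
```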
{ "source": [ "https://mathoverflow.net/questions/392837", "https://mathoverflow.net", "https://mathoverflow.net/users/221921/" ] }
393,319
In Descriptive Set Theory we often see the notion of encoding a real as a sequence of integers or natural numbers -- i.e. there obviously is a bijection according to the ZF axioms. But what does it look like concretely? Has anybody seen a simple construction? My own approach is by chain fractions: Let $q\in\mathbb{R}$ be the given real and now define the sequence $(z_i,q_i)$ by $$z_{i+1}=\begin{cases}[q_i]&\text{if } \{q_i\}\leq\frac{1}{2}\\ [q_{i}]+1&\text{else} \end{cases}$$ $$q_{i+1}=(q_i-z_{i+1})^{-1}$$ where $[q]$ is the next lower integer and $\{q\}=q-[q]$. (Hence $(q_i-z_{i+1})\in(-\frac12,\frac12]$, so the absolute value of $q_{i+1}$, its reciprocal, is at least $2$.) Now my bijection maps $q$ to the sequence $$m_i=\begin{cases}z_i-2&z_i>0, i>1\\ z_i&i=1\\ z_i+2&z_i<0, i>1\end{cases}$$ with $i$ starting at $1$; above, $q_0$ is the initial $q$. And the inverse of my bijection just calculates the chain fraction: $q_{i-1}\in(z_i-\frac12,z_i+\frac12]$ with $q_{i-1}=z_i+q_i^{-1}$, step-wise narrowing down the real by a sequence of intervals, each containing the next. Is there a paper or book covering my example? Any other simple constructions?
[ Note: this answer uses the convention where $\mathbb{N} := \{ 0, 1, 2, \dots \}$ contains zero.] There's an elegant explicit order-preserving bijection between the Baire space $\mathbb{N}^{\mathbb{N}}$ (under lexicographical order) and $\mathbb{R}_{\geq 0}$ (under the usual order) described here . In particular, we define the image of: $$ (a_0, a_1, a_2, a_3, \dots) $$ to be the generalised continued fraction: $$ a_0 + \cfrac{1}{1 + \cfrac{1}{a_1 + \cfrac{1}{1 + \cfrac{1}{a_2 + \ddots}}}} $$ This order-preserving bijection shows that $\mathbb{R}_{\geq 0}$ and $\mathbb{N}^{\mathbb{N}}$ are not only isomorphic as sets (i.e. equinumerous), but also isomorphic as totally-ordered sets. Topologically, this bijection from $\mathbb{N}^{\mathbb{N}}$ to $\mathbb{R}_{\geq 0}$ is continuous, meaning that every open subset of the nonnegative reals corresponds to an open subset of Baire space. The converse is not quite true (if it were, the two spaces would be homeomorphic, which they're not); continuity of the inverse map fails exactly at the positive rationals.
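A quick way to experiment with this map is to evaluate finite truncations; the helper below is my own sketch, not from the linked source, and it assumes the final retained entry is nonzero so that no division by zero occurs:

```python
from fractions import Fraction

def baire_prefix_value(seq):
    """Evaluate a0 + 1/(1 + 1/(a1 + 1/(1 + 1/(a2 + ...)))) for a finite
    prefix (a0, ..., ak), by backward recurrence over exact fractions.
    This only approximates the image of the full infinite sequence."""
    val = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        val = a + 1 / (1 + 1 / val)
    return val

print(float(baire_prefix_value([2])))                    # 2.0
print(float(baire_prefix_value([1, 2, 2, 2, 2, 2, 2])))  # ~1.732; (1,2,2,...) maps to sqrt(3)
```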
{ "source": [ "https://mathoverflow.net/questions/393319", "https://mathoverflow.net", "https://mathoverflow.net/users/152241/" ] }
393,797
Recently in a seminar the following question was raised and, despite my familiarity with the theory, I couldn't come up with a good answer: Are there any good reasons to use Tate's theory of rigid-analytic spaces, given that Huber's theory of adic spaces seems to be superior in all regards? The only possible advantage I can think of is that of simplicity — only having classical points to worry about may be conceptually simpler. This would be similar to treatments of classical algebraic geometry using maximal spectra (as done e.g. by Milne), although having to work with the G-topology seems to offset any pedagogical benefit to me. For some time I also thought the development of rigid cohomology is where rigid spaces would be advantageous, but as discussed in Lazda and Pál, " Rigid Cohomology over Laurent Series Fields ", rigid cohomology can be, and for their purposes has to be, developed with adic generic fibers instead. For contrast, let me mention that I am aware of some ways in which say Berkovich spaces have advantages despite also being subsumed by adic spaces, coming from their "Euclidean" nature, for instance the theory of integration on them, the theory of skeleta or some relations to tropical geometry. Are there any contexts like that where classical rigid varieties shine?
There are two questions here: which version of the theory is easiest to get off the ground axiomatically, and which version is more convenient to work with in applications? For the first question, it's very much a matter of taste. Adic spaces are genuinely topological spaces, whereas Tate's G-topological spaces aren't; but there are many more, and weirder, points in them whose geometric significance takes some getting used to. At risk of treading on some toes, I'm going to point out that Huber's adic spaces were available in the literature for at least 20 years before they started to become really popular. If there were a decisive advantage to setting up the theory in terms of adic spectra rather than G-topologies, then number theorists would have abandoned Tate's theory en masse in 1995 or so; and they didn't. The benefits offered by Huber's foundations weren't persuasive enough to outweigh the "first-mover advantage" given by 30 years' worth of literature written in Tate's language. Adic spaces caught on because Scholze showed they could be used to do radically new things that were impossible in Tate's rigid geometry -- not because they allowed you to re-prove or re-visualize existing theorems in nicer ways. As for the second question, I think the correct answer is "both". There are some (mostly younger) mathematicians who, whenever rigid spaces are mentioned, smile indulgently at the folly of their elders and assume that anything written in this language is obsolete or misguided, a bit like teenagers laughing at their parents' CD collection. This is a misconception, since rigid spaces over a nonarchimedean field K can be identified with a subcategory of adic spaces over K, and a rigid space and its corresponding adic space have the same sheaf theory (equivalent as topoi). Hence, when you want to apply p-adic analytic geometry to actually do something, it very often doesn't matter whether you write $Max(A)$ or $Spa(A, A^+)$ -- their underlying sets are hugely different, but that's usually not relevant if you're writing a research paper as opposed to a textbook. Indeed, in recent literature it seems to be quite common to simply redefine "rigid space over K" to mean an adic space which is locally of finite type over K. So the large corpus of work written in Tate's language remains useful, and a working number theorist nowadays who isn't familiar with the older language is at risk of needlessly reinventing the wheel. [I'm sure Wojowu already knows everything I've written in this paragraph, but I'm putting it in for the benefit of other readers of this question.]
{ "source": [ "https://mathoverflow.net/questions/393797", "https://mathoverflow.net", "https://mathoverflow.net/users/30186/" ] }
393,957
The Covid-19 pandemic has changed our work-lives in ways few of us could have anticipated. These exceptional circumstances have forced each one of us and each one of our institutions to adapt, sometimes in creative ways. I would like to compile a list of those changes and adaptations at all levels of the mathematical ecosystem (all the way from the lives of math undergrads to the working of national funding bodies). For each entry, please discuss the advantages and disadvantages of the new setup, compared to the previous way of doing things. Be as specific as possible. Where relevant, please discuss issues of accessibility of events/resources to people who would otherwise have less access to them, and issues of climate change (less traveling means fewer emissions).
Online seminars Research (and other) seminars have gone virtual. The obvious advantage is that anyone can attend from basically all over the world. The page https://researchseminars.org/ compiles a huge list of talks, and you may attend a mathematical talk basically around the clock. A further advantage is that no travelling is involved, so speakers are much more flexible. Moreover, the virtual format makes it very simple to record the talk and make it available afterwards. Among the disadvantages are the missing personal contact and the reduced possibilities for one-on-one discussion. Also, virtual seminars do not serve as meeting points for groups and departments as much as classical seminars do. The additional ability to have public and private chats during the talk is at least different from classical talks, but I am not sure whether chats are an advantage or a disadvantage. My personal conclusion is that virtual seminars are here to stay, but that classical seminars will come back as well, and both are going to exist in parallel.
{ "source": [ "https://mathoverflow.net/questions/393957", "https://mathoverflow.net", "https://mathoverflow.net/users/5690/" ] }
393,969
Consider a positive Hermitian $N \times N$ matrix $A$ with complex-valued entries. We list its eigenvalues in increasing order and with their multiplicities, $\mu_{1} \leq \mu_{2} \leq \cdots \leq \mu_{N}$, and consider the one-parameter family of matrices $A+\lambda$ (where $\lambda$ is shorthand for $\lambda I$). How can I verify that for any $\lambda>-\mu_{1}$, the following $$ \frac{d}{d \lambda} \log (\operatorname{det}(A+\lambda))=\operatorname{trace}(A+\lambda)^{-1} $$ holds? (This formula motivates the definition of a relative determinant.)
Since $A$ is Hermitian, $A+\lambda$ is diagonalizable with eigenvalues $\mu_j+\lambda$, all positive for $\lambda>-\mu_1$, so $$\log\det(A+\lambda)=\sum_{j=1}^N\log(\mu_j+\lambda).$$ Differentiating term by term gives $$\frac{d}{d\lambda}\log\det(A+\lambda)=\sum_{j=1}^N\frac{1}{\mu_j+\lambda}=\operatorname{trace}\bigl((A+\lambda)^{-1}\bigr),$$ since $(A+\lambda)^{-1}$ has eigenvalues $(\mu_j+\lambda)^{-1}$. Equivalently, this is the special case $B(\lambda)=A+\lambda$, $B'(\lambda)=I$ of Jacobi's formula $\frac{d}{d\lambda}\log\det B(\lambda)=\operatorname{trace}\bigl(B(\lambda)^{-1}B'(\lambda)\bigr)$.
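A numerical spot-check of the identity (my own throwaway sketch, using only standard NumPy calls):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = B.conj().T @ B               # Hermitian and positive semidefinite
lam, eps = 0.7, 1e-6             # any lam > -mu_1 works; here mu_1 >= 0

def logdet(l):
    # slogdet is numerically stabler than log(det(...))
    return np.linalg.slogdet(A + l * np.eye(N))[1]

numeric = (logdet(lam + eps) - logdet(lam - eps)) / (2 * eps)
exact = np.trace(np.linalg.inv(A + lam * np.eye(N))).real
print(numeric, exact)            # the two values agree to high precision
```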
{ "source": [ "https://mathoverflow.net/questions/393969", "https://mathoverflow.net", "https://mathoverflow.net/users/157604/" ] }
394,101
I have an idea for a website that could alleviate some well-known difficulties around the peer review system and "hidden knowledge" in mathematics. It seems like low-hanging fruit that many people must've thought about before. My question is two-fold: Has someone already tried this? If not, who in the mathematical community might be interested in creating and maintaining such a project or is working on similar projects? Idea A website dedicated to anonymous discussions of mathematical papers by experts. Motivation 1: Hidden knowledge Wilhelm Klingenberg's "Lectures on closed geodesics" can be found in every university's math library. One of the main theorems in the book is the following remarkable result, a culmination of decades of work on Morse theory of the loop space by many mathematicians: Every compact Riemannian manifold contains infinitely many prime closed geodesics. Unfortunately, there is a mistake in the proof. 44 years after the book's publication the statement is still a widely open problem. The reason I know this is because when I was in grad school I mentioned the book to my adviser and my adviser told me about it. If I tried to look for this information online I wouldn't find it (in fact, I still haven't seen it written down anywhere). This is one of many examples of "hidden knowledge", information that gets passed from adviser to student, but is inaccessible to an outsider. In principle, a new Ramanujan can open arxiv.org and get access to cutting-edge mathematical research. In reality, the hidden knowledge keeps many mathematical fields impenetrable to anyone who is not personally acquainted with one of a handful of experts. Of course, there is much more to hidden knowledge than "this paper from 40 years ago actually contains a gap". But I feel that the experts' "oral tradition" on papers in the field is at the core of it. Making it common knowledge would be of great benefit to students, mathematicians from smaller universities, those outside of North America and Europe, people from adjacent fields, to the experts themselves, and to mathematical progress. Motivation 2: Improving peer review Consider the following situations: You are refereeing a paper and get stuck on some minor issue. It will take the author 5 minutes to explain, but a few hours for you to figure it out on your own. But it doesn't quite feel worth initiating formal communication with the author through the editor over this, and you don't want to break the veil of anonymity by contacting the author directly. You are being asked to referee a paper, but don't have time to referee the whole paper. On the other hand, there is a part of it that is really interesting to you. Telling the editor "yes, but I will only referee Lemma 5.3" seems awkward. You are refereeing a paper that is at the intersection of your field and a different field. You would like to discuss it with an expert in the other field to make sure you are not missing anything, but don't know a colleague in that area or feel hesitant revealing that you are a referee for this paper. These are some of many situations where the ability to anonymously discuss a paper with the author and other experts in a forum-like space would be helpful in the refereeing process. But also outside of peer review, mathematicians constantly find small errors, fillable gaps, and ways to make an old paper more understandable that they would be happy to share with others.
At the same time, they often don't have time to produce carefully polished notes that they would feel comfortable posting on arxiv; and if they do post notes on their website, the notes may not be easy to find for anyone else reading the paper. It would be helpful to have one place where such information is collected. How will it work? The hope is to continue the glorious tradition of collaborative anonymous mathematics. One implementation could work like this: Users of the website can create a page dedicated to a paper and post questions and comments about the paper on that page. To register on the website one needs to fill in a form asking for an email and two links to math arxiv papers that have the same email in them (this way registration does not require verification by moderators) and choose one's fields of expertise. When a user posts a comment or question, only their field or fields are displayed.
I'm the founder of https://papers-gamma.link, an Internet place to discuss scientific articles, mentioned by Matthieu Latapy. I have been supporting this site for 6 years now. I hope that one day it will become popular (in a good sense of the word) and useful for the entire scientific community. As you may imagine, I'm pretty convinced that the idea of public review and public comments is, potentially, a very promising one. My conviction is weaker than it was 6 years ago, though, and here's why. Observing that Papers $^\gamma$ gains popularity very slowly, I started to think more and more about the scientific review and publishing processes. I'll share with you my current understanding of these subjects, admitting that these issues belong more to sociology than to mathematics. The original goal of scientific journals was to inform about and discuss current research. But later, this had to make room for other things: archiving and bibliometrics. I'm OK with archiving. But it seems that the optimization of bibliometric statistics negatively affects the discussion power, the original (!) goal of the scientific journals. Let me try to illustrate exactly what I mean by "discussion power" of scientific journals. Consider Miller's one-page paper [2], which is an excellent example of conversational mathematics. The paper contains an alternative proof of Miles's results [1] about the characteristic equation of the $k$-th order Fibonacci sequence. Miller's paper can be considered a response to Miles's paper. Compare this to modern Internet forums like MathOverflow. Is any journal, wiki, or other internet portal drawn more toward archiving and bibliometrics as time goes forward? Maybe there is a natural or induced drift from "discussion" towards "judgment"? If that is the case, should we consider creating new journals every, say, 10 or 20 years, to restart the discussion processes? Or should we be extremely careful, trying to keep the discussion power of existing scientific journals? Here are the main arguments against the mass acceptance of public review and comments that I can imagine: Existing systems work pretty well. Not all conversations should be made public. Private reviews and conversations have their advantages. People feel more free to commit errors, express misunderstanding, and criticize in private. Good reviews help the authors to polish and publish almost perfect articles. The more a resource is open, the more it is susceptible to spam. Anonymous public discussions may be used as a platform for attacking other scientists. Here is a list of some online resources related to the idea of public reviews and collaborative science in general: MathSciNet and Zentralblatt MATH, databases with post-publication reviews. nLab wiki and The $n$-Category Café, collaborative works on Mathematics, Physics and Philosophy. Polymath Project, a collaborative project that aims to solve important and difficult mathematical problems. Machine Learning Paper Discussions subreddit. Atmospheric Chemistry and Physics, a journal with an interactive public peer review process. CoScience, a service that aims to "recreate scientific communication as a virtuous, open, community-driven process". F1000Research, a platform covering life science publications. PubPeer, an online platform for post-publication peer review.
"The site has served as a whistleblowing platform, in that it highlighted shortcomings in several high-profile papers, in some cases leading to retractions and to accusations of scientific fraud", as Wikipedia says. BibSonomy , a social bookmark service allowing comments. Selected Papers Network was an open-source project for share and comment scientific articles. I wonder if certain usenet groups in sci.* hierarchy are still active. In any case, the source code of Papers $^\gamma$ is open under CC0 Public Domain Dedication. And you are welcome to send me a patch or fork it if you wish so. To conclude, I think that "Hidden knowledge" will always be here, and the solution to this issue lies not in the technical but rather in the societal dimension. Someone just need to write it down in some searchable place. For instance, in enumerative combinatorics Sloane's The On-Line Encyclopedia of Integer Sequences helps a lot, but we need to watch out and constantly update it. Conclusion update: for me, both OEIS and MathOverflow are popular because their main purpose is to allow people make a research together and not to judge each other. Paywalled Biblio [1] Miles, E. “Generalized Fibonacci numbers and associated matrices”. The American Mathematical Monthly, 67(8), 1960, 745–752 [2] Miller, M. D. "On generalized Fibonacci numbers". The American Mathematical Monthly, 78(10), 1971, 1108–1109.
{ "source": [ "https://mathoverflow.net/questions/394101", "https://mathoverflow.net", "https://mathoverflow.net/users/250603/" ] }
394,391
Let $X$ be a topological space such that its suspension is a topological manifold. Can we prove that $X$ itself is a topological manifold?
It's not true. The Poincaré sphere $P$ is a manifold, and its suspension is not. But its double suspension is homeomorphic to $S^5$ by Cannon's "Double Suspension Theorem". I learned about this from Mark Grant in an answer to a different question of mine on MO.
{ "source": [ "https://mathoverflow.net/questions/394391", "https://mathoverflow.net", "https://mathoverflow.net/users/105900/" ] }
394,934
Question: On balance, with theoretical advances in algorithmic information theory and Quantum Computation it appears that the remarkable effectiveness of mathematics in the natural sciences is quite reasonable. By effectiveness, I am generally referring to Wigner's observation that mathematical laws have remarkable generalisation power. Might there be a modern review paper on the subject for mathematicians where the original question is re-evaluated in light of the modern mathematical sciences? An information-theoretic perspective: In order to motivate an information-theoretic analysis, it is worth observing that Occam's razor is an essential tool in the development of mathematical theories. From an information-theoretic perspective, a Universe where Occam's razor is generally applicable is one where information is conserved. The conservation of information would imply that fundamental physical laws are generally time-reversible. Moreover, given that Occam's razor has an appropriate formulation within the context of algorithmic information theory as the Minimum Description Length principle, this information-theoretic analysis generally presumes that the Universe itself may be simulated by a Universal Turing Machine. David Deutsch and others have done significant work demonstrating the plausibility of the Physical Church-Turing thesis (which is consistent with the original Church-Turing thesis), and this would explain why mathematical methods are so effective in the natural sciences. This brief analysis has emerged from informal discussions with a handful of algorithmic information theorists (Hector Zenil, Marcus Hutter, and others) and it makes me wonder whether complementary theories from mathematical physics might help mathematicians account for the remarkable effectiveness of mathematics in the natural sciences. Clarification of particular terms: Minimum Description Length principle: Given data in the form of a binary string $x \in \{0,1\}^*$, the Minimum Description Length of $x$ is given by the Kolmogorov Complexity of $x$: \begin{equation} K_U(x) = \min_{p} \{|p|: U(p) = x\} \end{equation} where $U$ is a reference Universal Turing Machine and the minimum is taken over programs $p$ that take the empty string $\epsilon$ as input and output $x$. The Law of Conservation of Information: The Law of Conservation of Information, which dates back to von Neumann, essentially states that the von Neumann entropy is invariant under unitary transformations. This is meaningful within the framework of Everettian quantum mechanics, as a density matrix may be assigned to the state of the Universe. This way information is conserved as we run a simulation of the Universe forwards or backwards in time. The Physical Church-Turing thesis: The Law of Conservation of Information is consistent with the observation that all fundamental physical laws are time-reversible and computable. The research of David Deutsch (and others) on the Physical Church-Turing thesis explains how a Universal Quantum computer may simulate these laws. Michael Nielsen wrote a good introductory blog post on the subject [7]. The Physical Church-Turing thesis is a key point in this discussion as it provides us with a credible explanation for the remarkable effectiveness of mathematics in the natural sciences. A remark on effectiveness: What I have retained from my discussions with physicists and other natural scientists is that the same mathematical laws with remarkable generalisation power are also constrained by Occam's razor.
In fact, from an information-theoretic perspective the remarkable effectiveness of mathematics is a direct consequence of the effectiveness of Occam's razor. This may be partly understood from a historical perspective if one surveys the evolution of ideas in physics [10]. Given two compatible theories, Einstein generally argued that one should choose the simplest theory that yields negligible experimental error. To be precise, he stated:

It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience. -Einstein (1933)

As the application of Occam's razor generally requires a space of computable models, and algorithmic information theory carefully explains why simpler theories generalise better [8], it is fair to say that the notion of effectiveness may be made precise. However, the theory of algorithmic information was developed in the mid-1960s by Chaitin, Kolmogorov and Solomonoff, which was after Wigner wrote his article in 1960.

What is remarkable: If we view the scientific method as an algorithmic search procedure, then there is no reason, a priori, to suspect that a particular inductive bias should be particularly powerful. This much was established by David Wolpert via his No Free Lunch Theorems [11]. On the other hand, the history of natural science indicates that Occam's razor is remarkably effective. The effectiveness of this inductive bias has more recently been explored within the context of deep learning [12].

References:

1. Eugene Wigner. The Unreasonable Effectiveness of Mathematics in the Natural Sciences. 1960.
2. David Deutsch. Quantum theory, the Church–Turing principle and the universal quantum computer. 1985.
3. Peter D. Grünwald. The Minimum Description Length Principle. MIT Press. 2007.
4. A. N. Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information and Transmission, 1(1):1–7, 1965.
5. G. J. Chaitin. On the length of programs for computing finite binary sequences: Statistical considerations. Journal of the ACM, 16(1):145–159, 1969.
6. R. J. Solomonoff. A formal theory of inductive inference: Parts 1 and 2. Information and Control, 7:1–22 and 224–254, 1964.
7. Michael Nielsen. Interesting problems: The Church-Turing-Deutsch Principle. 2004. https://michaelnielsen.org/blog/interesting-problems-the-church-turing-deutsch-principle/
8. Marcus Hutter et al. Algorithmic probability. Scholarpedia, 2(8):2572. 2007.
9. Andrew Robinson. Did Einstein really say that? Nature. 2018.
10. Albert Einstein & Leopold Infeld. The Evolution of Physics. Edited by C. P. Snow. Cambridge University Press, 1938.
11. Wolpert, D. H., Macready, W. G. (1997). "No Free Lunch Theorems for Optimization". IEEE Transactions on Evolutionary Computation 1, 67.
12. Guillermo Valle Pérez, Chico Camargo, Ard Louis. Deep Learning generalizes because the parameter-function map is biased towards simple functions. 2019.
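The Minimum Description Length idea above can be made tangible in a small experiment. Since $K_U(x)$ is uncomputable, any practical check must substitute a general-purpose compressor, which only yields an upper bound on description length; the following Python sketch (my illustration, not part of the original post) shows the intended qualitative behaviour, namely that regular data admits a short description while random data does not.

```python
import os
import zlib

def mdl_upper_bound(x: bytes) -> int:
    """Upper bound (in bytes) on the description length of x.
    Kolmogorov complexity itself is uncomputable, so a general-purpose
    compressor serves as a crude, computable proxy."""
    return len(zlib.compress(x, 9))

structured = b"01" * 500        # a highly regular 1000-byte string
random_ish = os.urandom(1000)   # 1000 incompressible-looking bytes

print(mdl_upper_bound(structured))  # small: a simple "theory" explains the data
print(mdl_upper_bound(random_ish))  # close to 1000: no short description found
```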
A 2013 issue of Interdisciplinary Science Reviews was entirely devoted to this topic. One viewpoint, by Jesper Lützen, struck me: When Wigner claimed that the effectiveness of mathematics in the natural sciences was unreasonable it was due to a dogmatic formalist view of mathematics according to which higher mathematics is developed solely with a view to formal beauty. I shall argue that this philosophy is not in agreement with the actual practice of mathematics. Indeed, I shall briefly illustrate how physics has influenced the development of mathematics from antiquity up to the twentieth century. If this influence is taken into account, the effectiveness of mathematics is far more reasonable. (the articles in this issue are behind a paywall, perhaps there is another way to access them...)
{ "source": [ "https://mathoverflow.net/questions/394934", "https://mathoverflow.net", "https://mathoverflow.net/users/56328/" ] }
396,326
The following real $2 \times 2$ matrix has determinant $1$ : $$\begin{pmatrix} \sqrt{1+a^2} & a \\ a & \sqrt{1+a^2} \end{pmatrix}$$ The natural generalisation of this to a real $2 \times 2$ block matrix would appear to be the following, where $A$ is an $n \times m$ matrix: $$\begin{pmatrix} \sqrt{I_n+AA^T} & A \\ A^T & \sqrt{I_m+A^TA} \end{pmatrix}$$ Both $I_n+AA^T$ and $I_m+A^TA$ are positive-definite so the positive-definite square roots are well-defined and unique. Numerically, the determinant of the above matrix appears to be $1$ , for any $A$ , but I am struggling to find a proof. Using the Schur complement, it would suffice to prove the following (which almost looks like a commutativity relation): $$A\sqrt{I_m + A^TA} = \sqrt{I_n + AA^T}A$$ Clearly, $A(I_m + A^TA) = (I_n + AA^T)A$ . But I'm not sure how to generalise this to the square root. How can we prove the above?
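Before turning to a proof, the numerical observation in the question is easy to reproduce. The following sketch (Python with NumPy; the helper `psd_sqrt` is my own, not from the post) builds the block matrix for a random rectangular $A$ and checks both the determinant and the commutation relation that the Schur-complement argument needs.

```python
import numpy as np

def psd_sqrt(M):
    """Unique positive-definite square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
n, m = 3, 5
A = rng.standard_normal((n, m))

top = np.hstack([psd_sqrt(np.eye(n) + A @ A.T), A])
bot = np.hstack([A.T, psd_sqrt(np.eye(m) + A.T @ A)])

print(np.linalg.det(np.vstack([top, bot])))            # ~ 1.0
print(np.allclose(A @ psd_sqrt(np.eye(m) + A.T @ A),
                  psd_sqrt(np.eye(n) + A @ A.T) @ A))  # True
```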
Write the SVD of $A$, say $A=PDQ^T$ with $D$ diagonal with non-negative entries and $P\in O(n), Q\in O(m)$. Then $\sqrt{I_n + AA^T} = P\sqrt{1+D^2}P^T$ and $\sqrt{I_m+ A^TA} = Q\sqrt{1+D^2}Q^T$. This gives $$ \begin{pmatrix} \sqrt{I_n + AA^T} & A \\ A^T& \sqrt{I_m+A^TA} \end{pmatrix} = \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} \begin{pmatrix} \sqrt{I_n + D^2} & D \\ D & \sqrt{I_m+D^2} \end{pmatrix} \begin{pmatrix} P^T & 0 \\ 0 & Q^T \end{pmatrix}. $$ Up to permutation, the matrix in the middle is block diagonal, with blocks given by $2\times 2$ matrices of the same form as in the question (together with $1\times 1$ blocks equal to $1$ when $n\neq m$), so its determinant is $1$.
{ "source": [ "https://mathoverflow.net/questions/396326", "https://mathoverflow.net", "https://mathoverflow.net/users/119987/" ] }
396,470
I would like to ask about recent examples, mainly after 2015, where experimentation by computers or other use of computers has led to major mathematical advances. This is a continuation of a question that I asked 11 years ago. There are several categories:

A) Mathematical conjectures or large bodies of work arrived at by examining experimental data

B) Computer-assisted proofs of mathematical theorems

C) Computer programs that interactively or automatically lead to mathematical conjectures

D) Various computer programs which allow proving theorems automatically, or generating proofs automatically, in a specialized field

E) Computer programs (both general purpose and special purpose) for verification of mathematical proofs

F) Large databases and other tools

Of course more resources (like this Wikipedia page on experimental mathematics) are also useful.
There is the recent computer-assisted verification of some key statements by Scholze and Clausen about "condensed mathematics". The task has been accomplished by Buzzard, Commelin, and others (see comments below) using Lean, and it led to major media coverage. For instance, here is a related article that appeared in Nature on June 18, 2021.
{ "source": [ "https://mathoverflow.net/questions/396470", "https://mathoverflow.net", "https://mathoverflow.net/users/1532/" ] }
397,286
Why do we have two theorems, one for the density of $C^{\infty}_c(\mathbb{R}^n)$ in $L^p(\mathbb{R}^n)$ and one for the density of $C^{\infty}_c(\Omega)$ in $L^p(\Omega)$, where $\Omega$ is an open subset of $\mathbb{R}^n$? Why not just the second one? I was asked by my professor what the difference is between the density of $C(\Omega)$ and $C(\mathbb{R}^n)$, but all I found when checking the proofs is that we take $\Omega \neq \mathbb{R}^n$ when giving the proof of the second theorem, because for $\Omega = \mathbb{R}^n$ we already have the first theorem.
Some mathematicians seem to agree with you, and strive only to state and prove the most general versions of their theorems. I've had co-authors express that view. And I've sometimes had referee reports on my papers state this philosophical perspective explicitly, objecting to a warm-up theorem that I stated and proved early in the paper, even though later I proved harder, more general results. Earlier in my career, against my own judgement I would dutifully remove the objectionable warm-up presentations (and I did so even in what became one of my most highly cited papers), but no longer. I strongly disagree with the objection. I don't agree that one should seek to present only the most general forms of one's theorems. Rather, there is a definite value in proving easier or more concrete results first, even when one intends to move on to prove more encompassing results later. Indeed, I would say that often the main value of a theorem is concentrated in an easier, less general principal case. The simpler results often aid in mathematical insight. Unencumbered with unnecessary generality or abstraction, they are often simply easier to understand, yet still illustrate the main idea clearly. Removing even a small generalization, such as restricting to $n=2$ or simplifying from an arbitrary real-like space $\Omega$ to the reals $\mathbb{R}$, can dramatically improve understanding, especially on your reader's first engagement with your argument. The reason is that every generalization, even a very small one, contributes yet another layer of difficulty and abstraction, contributing to the cognitive load that can make a difficult proof impenetrable. On the first pass, it can often be best to focus on a simple, main case, which highlights the core ideas without unnecessary distractions. Once one has mastered such a case, then one has often thereby developed a familiarity of understanding of the core idea or technique of the argument, a framework of understanding capable of supporting a deeper understanding of the more general result. Having the easy case first makes the difficult case much easier to master. Indeed, often the key ideas of an argument have only to do with the special case in the first instance, and the generalizing steps are routine — all the more reason to omit them at first. So this is not just for pedagogy, although certainly students new to a topic will appreciate mastering the easier versions of a theorem first. My point is that the practice is also important for experts, at every level of expertise. One gains ultimately a deeper understanding of the general result, when one sees how the core ideas and methods generalize those in a simpler case. The same goes for mathematics talks. At conferences or seminars, please consider beginning your talk with an easier special case that illustrates the theme or methods of your more general, advanced results. Your audience will definitely appreciate it. So I have no problem with having two theorems, one of them implying the other, and I would find that to be a very sound way of proceeding in mathematics.
{ "source": [ "https://mathoverflow.net/questions/397286", "https://mathoverflow.net", "https://mathoverflow.net/users/144902/" ] }
397,330
I am a PhD student in algebraic / arithmetic geometry and I never took a formal course in algebraic topology, even though I have some basic knowledge. In algebraic geometry we deal exclusively with sheaf cohomology since we care about non-constant sheaves. But I feel, maybe in my naivety, that a lot of the important results (for usual topological spaces) are only true for singular and simplicial cohomology when they coincide with sheaf cohomology (Alexander duality and " $H^i=0$ for $i>$ the covering dimension" come to mind). With that in mind, I wonder if it is worth for someone with a similar background to study the details of a first course in algebraic topology. (Perhaps on the level of Hatcher's book.) Do I lose something by just thinking in terms of sheaf cohomology?
Sheaf cohomology is a powerful tool, but it isn't a replacement for all of basic algebraic topology. For example, fundamental groups and homology are some topics that would get lost. And these topics are certainly relevant to algebraic geometry. Also, as pointed out in the comments, you would lose valuable intuition if you just stuck to the sheaf cohomology viewpoint. Let me expand my original answer a bit. Let me focus on the simplest example, where $X$ is a smooth complex projective curve of genus $g$. One learns in topology that $X$ is obtained by identifying the sides of a $2g$-gon in the standard way. One can use this to extract two things about $X$. One gets the homology $H_1(X,\mathbb{Z})=\mathbb{Z}^{2g}$, with its intersection pairing equal to the standard symplectic form. For an algebraic geometer, this corresponds to the lattice of the Jacobian of $X$ together with its Riemann form. In particular, this is a principal polarization. Also one gets the familiar presentation of the fundamental group $$\pi_1(X)= \langle a_1\ldots a_{2g}\mid [a_1,a_2]\ldots[a_{2g-1}, a_{2g}]\rangle$$ But why should an algebraic geometer care about this? Answer: because it tells us what etale covers of $X$ look like. The etale fundamental group of $X$ is the profinite completion of the above group. This is also true for the prime-to-$p$ part if $X$ lives in positive characteristic, by lifting. Note that Grothendieck uses this reduction to the topological case in SGA1.
{ "source": [ "https://mathoverflow.net/questions/397330", "https://mathoverflow.net", "https://mathoverflow.net/users/131975/" ] }
397,435
We all know that the complex field structure $\langle\mathbb{C},+,\cdot,0,1\rangle$ is interpretable in the real field $\langle\mathbb{R},+,\cdot,0,1\rangle$ , by encoding $a+bi$ with the real-number pair $(a,b)$ . The corresponding complex field operations are expressible entirely within the real field. Meanwhile, many mathematicians are surprised to learn that the converse is not true — we cannot define a copy of the real field inside the complex field. (Of course, the reals $\mathbb{R}$ are a subfield of $\mathbb{C}$ , but this subfield is not a definable subset of $\mathbb{C}$ , and the surprising fact is that there is no definable copy of $\mathbb{R}$ in $\mathbb{C}$ .) Model theorists often prove this using the core ideas of stability theory, but I made a blog post last year providing a comparatively accessible argument: The real numbers are not interpretable in the complex field . The argument there makes use in part of the abundance of automorphisms of the complex field. In a comment on that blog post, Ali Enayat pointed out that the argument therefore uses the axiom of choice, since one requires AC to get these automorphisms of the complex field. I pointed out in a reply comment that the conclusion can be made in ZF+DC, simply by going to a forcing extension, without adding reals, where the real numbers are well-orderable. My question is whether one can eliminate all choice principles, getting it all the way down to ZF. Question. Does ZF prove that the real field is not interpretable in the complex field? I would find it incredible if the answer were negative, for then there would be a model of ZF in which the real number field was interpretable in its complex numbers.
An interpretation of $(\mathbb R,+,\cdot)$ in $(\mathbb C,+,\cdot)$ in particular provides an interpretation of $\DeclareMathOperator\Th{Th}\Th(\mathbb R,+,\cdot)$ in $\Th(\mathbb C,+,\cdot)$ . To see that the latter cannot exist in ZF: The completeness of the theory $\def\rcf{\mathrm{RCF}}\rcf$ of real-closed fields is an arithmetical ( $\Pi_2$ ) statement, and provable in ZFC, hence provable in ZF. Its axioms are clearly true in $(\mathbb R,+,\cdot)$ , hence $\Th(\mathbb R,+,\cdot)=\rcf$ . Similarly, ZF proves completeness of the theory $\def\acfo{\mathrm{ACF_0}}\acfo$ of algebraically closed fields of characteristic $0$ , hence $\Th(\mathbb C,+,\cdot)=\acfo$ . The non-interpretability of $\rcf$ in $\acfo$ is again an arithmetical statement ( $\Pi_2$ , using the completeness of $\acfo$ ), hence its provability in ZFC automatically implies its provability in ZF. Of course, common proofs of some or all the results above may already work directly in ZF (e.g., if you take syntactic proofs of completeness, or if you make it work using countable models, etc.). My point is that it is not necessary to check the proofs, as the results transfer from ZFC to ZF automatically due to their low complexity.
{ "source": [ "https://mathoverflow.net/questions/397435", "https://mathoverflow.net", "https://mathoverflow.net/users/1946/" ] }
397,585
What are some examples of serious mathematical theory-building around hypotheses that are believed or known to be false? One interesting example, and the impetus for this question, is work in number theory based on the assumption that Siegel zeros exist. If there were such things, then the Generalized Riemann Hypothesis would be false, which it presumably isn't. So it's unlikely that there are Siegel zeros. Still, lots of effort has gone into exploring the consequences of their existence, which have turned out to be numerous, interesting, surprising and so far self-consistent. The phenomena generated by the Siegel zero hypothesis are sometimes referred to as an "illusory world" or "parallel universe" sitting alongside that of ordinary number theory. (There's some further MO discussion e.g. here and here.) I'd like to hear about other examples like this. I'd be particularly grateful for references, especially those that discuss the motivations behind and benefits of undertaking such studies. I should clarify that I'm mainly interested in "illusory worlds" built on hypotheses that were believed to be false all along, rather than those which were originally believed true or plausible and only came to be disbelieved after the theory-building was done. Further context: I'm a philosopher interested in counterfactual reasoning in mathematics. I'd like to better understand how, when and why mathematicians engage with counterfactual scenarios, especially those that are taken seriously for research purposes and whose study is viewed as useful and interesting. But I'd like to think this question might be stimulating for the broader MO community.
Girolamo Saccheri in his Euclides Vindicatus (1733) essentially discovered Hyperbolic Geometry, by building around the hypothesis that the angles of a triangle add up to less than 180°. This was widely believed to be impossible, since people at that time were convinced of the absolute nature of Euclidean Geometry.
{ "source": [ "https://mathoverflow.net/questions/397585", "https://mathoverflow.net", "https://mathoverflow.net/users/47239/" ] }
398,029
This question on a theorem in information theory called Mrs. Gerber's lemma piqued my curiosity. Who is this individual, and why the "Mrs."? A quick Google search was not informative, although it did produce a Mr. Gerber's lemma (arXiv:1802.05861) -- can someone enlighten me?
Check out the original reference "A theorem on the entropy of certain binary sequences and applications - I" by Wyner and Ziv: https://doi.org/10.1109/TIT.1973.1055107 . Footnote 2 on page one explains This result is known as “Mrs. Gerber’s Lemma” in honor of a certain lady whose presence was keenly felt by the authors at the time this research was done. I'm not sure you're going to get more of an explanation than that.
{ "source": [ "https://mathoverflow.net/questions/398029", "https://mathoverflow.net", "https://mathoverflow.net/users/11260/" ] }
398,037
Consider the following sequence defined as a sum $$a_n=\sum_{k=0}^{n-1}\frac{3^{3n-3k-1}\,(7k+8)\,(3k+1)!}{2^{2n-2k}\,k!\,(2k+3)!}.$$ QUESTION. For $n\geq1$ , is the sequence of rational numbers $a_n$ always integral?
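The claim is easy to probe with exact rational arithmetic. Here is a small check (Python, written for this question rather than taken from it) that evaluates the defining sum with `fractions.Fraction`, so no floating-point error can creep in:

```python
from fractions import Fraction
from math import factorial

def a(n: int) -> Fraction:
    """The sum defining a_n, computed exactly."""
    total = Fraction(0)
    for k in range(n):
        num = 3**(3*n - 3*k - 1) * (7*k + 8) * factorial(3*k + 1)
        den = 2**(2*n - 2*k) * factorial(k) * factorial(2*k + 3)
        total += Fraction(num, den)
    return total

for n in range(1, 9):
    print(n, a(n))   # every value has denominator 1, e.g. a(1) = 3, a(2) = 27
```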
Let $A(x) = \sum_{n=1}^\infty a_n x^n$ and let $$S(x) = \sum_{k=0}^\infty (7k+8)\frac{(3k+1)!}{k!\,(2k+3)!} x^k.$$ Then the formula for $a_n$ gives $A(x) = R(x)S(x)$ , where $$R(x) = \frac{1}{3}\biggl(\frac{1}{1-\frac{27}{4} x} -1\biggr).$$ A standard argument, for example by Lagrange inversion, gives $$S\left(\frac{y}{(1+y)^3}\right)=\frac{4+y}{3(1+y)^2}.$$ A straightforward computation gives $$R\left(\frac{y}{(1+y)^3}\right) = \frac{9y}{(4+y)(1-2y)^2}.$$ Thus $$A\left(\frac{y}{(1+y)^3}\right)=\frac{3y}{(1-2y)^2(1+y)^2}.$$ Since the power series expansion of $y/(1+y)^3$ starts with $y$ and has integer coefficients, its compositional inverse has integer coefficients, so $A(x)$ does also.
{ "source": [ "https://mathoverflow.net/questions/398037", "https://mathoverflow.net", "https://mathoverflow.net/users/66131/" ] }
398,217
Question: Might there be a natural geometric interpretation of the exponential of entropy in Classical and Quantum Information theory? This question occurred to me recently via a geometric inequality concerning the exponential of the Shannon entropy.

Original motivation: The weighted AM-GM inequality states that if $\{a_i\}_{i=1}^n,\{\lambda_i\}_{i=1}^n \in \mathbb{R}_+^n$ and $\sum_{i=1}^n \lambda_i = 1$, then: \begin{equation} \prod_{i=1}^n a_i^{\lambda_i} \leq \sum_{i=1}^n \lambda_i \cdot a_i \tag{1} \end{equation} As an application, we find that if $H(\vec{p})$ denotes the Shannon entropy of a discrete probability distribution $\vec{p} = \{p_i\}_{i=1}^n$ and $r_p^2 = \lVert \vec{p} \rVert^2$ is the squared $\ell_2$ norm of $\vec{p}$, then: \begin{equation} e^{H(\vec{p})} \geq \frac{1}{r_p^2} \tag{2} \end{equation} This result follows from the observation that if $a_i = p_i$ and $\lambda_i = p_i$, \begin{equation} e^{-H(\vec{p})} = e^{\sum_i p_i \ln p_i} = \prod_{i=1}^n p_i^{p_i} \tag{3} \end{equation} \begin{equation} \sum_{i=1}^n p_i^2 = \lVert \vec{p} \rVert^2 \tag{4} \end{equation} and using (1), we may deduce (2), where equality is obtained when the Shannon entropy is maximised by the uniform distribution, i.e. $\forall i, p_i = \frac{1}{n}$.

A remark on appropriate geometric embeddings: If we consider that the Shannon entropy measures the quantity of hidden information in a stochastic system at the state $\vec{p} \in [0,1]^n$, we may define the level sets $\mathcal{L}_q$ in terms of the typical probability $q \in (0,1)$: \begin{equation} \mathcal{L}_q = \{\vec{p} \in [0,1]^n: e^{H(\vec{p})} = e^{- \ln q} \} \tag{5} \end{equation} which allows us to define an equivalence relation over states $\vec{p} \in [0,1]^n$. Such a model is appropriate for events which may have $n$ distinct outcomes. Now, we'll note that $e^{H(\vec{p})}$ has a natural interpretation as a measure of hidden information while $e^{-H(\vec{p})}$ may be interpreted as the typical probability of the state $\vec{p}$. Given (5), a natural relation between these measures may be found using the hyperbolic identities: \begin{equation} \cosh^2(-\ln q) - \sinh^2(-\ln q) = 1 \tag{6} \end{equation} \begin{equation} \cosh(-\ln q) - \sinh(-\ln q) = q \tag{7} \end{equation} where $2 \cdot \cosh(-\ln q)$ is the sum of these two measures and $2 \cdot \sinh(-\ln q)$ may be understood as their difference. This suggests that the level sets $\mathcal{L}_q$ have a natural hyperbolic embedding in terms of hyperbolic functions.

References:

1. Olivier Rioul. This is IT: A Primer on Shannon’s Entropy and Information. Séminaire Poincaré. 2018.
2. David J.C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press. 2003.
3. John C. Baez, Tobias Fritz, Tom Leinster. A Characterization of Entropy in Terms of Information Loss. arXiv. 2011.
With apologies for promoting my own work, there's a whole book on the mathematics of the exponentials of various entropies: Tom Leinster, Entropy and Diversity: The Axiomatic Approach . Cambridge University Press, 2021. You can download a free copy by clicking, although persons of taste will naturally want to grace their bookshelves with the bound work. The direct answer to your literal question is that I don't know of a compelling geometric interpretation of the exponential of entropy. But the spirit of your question is more open, so I'll explain (1) a non-geometric interpretation of the exponential of entropy, and (2) a geometric interpretation of the exponential of maximum entropy. Diversity as the exponential of entropy As Carlo Beenakker says, the exponential of entropy (Shannon or more generally Rényi) has long been used by ecologists to quantify biological diversity. One takes a community with $n$ species and writes $\mathbf{p} = (p_1, \ldots, p_n)$ for their relative abundances, so that $\sum p_i = 1$ . Then $D_q(\mathbf{p})$ , the exponential of the Rényi entropy of $\mathbf{p}$ of order $q \in [0, \infty]$ , is a measure of the diversity of the community, or "effective number of species" in the community. Ecologists call $D_q$ the Hill number of order $q$ , after the ecologist Mark Hill, who introduced them in 1973 (acknowledging the prior work of Rényi). There is a precise mathematical sense in which the Hill numbers are the only well-behaved measures of diversity, at least if one is modelling an ecological community in this crude way. That's Theorem 7.4.3 of my book . I won't talk about that here. Explicitly, for $q \in [0, \infty]$ $$ D_q(\mathbf{p}) = \biggl( \sum_{i:\,p_i \neq 0} p_i^q \biggr)^{1/(1 - q)} $$ ( $q \neq 1, \infty$ ). The two exceptional cases are defined by taking limits in $q$ , which gives $$ D_1(\mathbf{p}) = \prod_{i:\, p_i \neq 0} p_i^{-p_i} $$ (the exponential of Shannon entropy) and $$ D_\infty(\mathbf{p}) = 1/\max_{i:\, p_i \neq 0} p_i. $$ Rather than picking one $q$ to work with, it's best to consider all of them. So, given an ecological community and its abundance distribution $\mathbf{p}$ , we graph $D_q(\mathbf{p})$ against $q$ . This is called the diversity profile of the community, and is quite informative. As Carlo says, different values of the parameter $q$ tell you different things about the community. Specifically, low values of $q$ pay close attention to rare species, and high values of $q$ ignore them. For example, here's the diversity profile for the global community of great apes: (from Figure 4.3 of my book). What does it tell us? At least two things: The value at $q = 0$ is $8$ , because there are $8$ species of great ape present on Earth. $D_0$ measures only presence or absence, so that a nearly extinct species contributes as much as a common one. The graph drops very quickly to $1$ — or rather, imperceptibly more than $1$ . This is because 99.9% of ape individuals are of a single species (humans, of course: we "outcompeted" the rest, to put it diplomatically). It's only the very smallest values of $q$ that are affected by extremely rare species. Non-small $q$ s barely notice such rare species, so from their point of view, there is essentially only $1$ species. That's why $D_q(\mathbf{p}) \approx 1$ for most $q$ . Maximum diversity as a geometric invariant A major drawback of the Hill numbers is that they pay no attention to how similar or dissimilar the species may be. 
"Diversity" should depend on the degree of variation between the species, not just their abundances. Christina Cobbold and I found a natural generalization of the Hill numbers that factors this in — similarity-sensitive diversity measures . I won't give the definition (see that last link or Chapter 6 of the book), but mathematically, this is basically a definition of the entropy or diversity of a probability distribution on a metric space . (As before, entropy is the log of diversity.) When all the distances are $\infty$ , it reduces to the Rényi entropies/Hill numbers. And there's some serious geometric content here. Let's think about maximum diversity. Given a list of species of known similarities to one another — or mathematically, given a metric space — one can ask what the maximum possible value of the diversity is, maximizing over all possible species distributions $\mathbf{p}$ . In other words, what's the value of $$ \sup_{\mathbf{p}} D_q(\mathbf{p}), $$ where $D_q$ now denotes the similarity-sensitive (or metric-sensitive) diversity? Diversity is not usually maximized by the uniform distribution (e.g. see Example 6.3.1 in the book), so the question is not trivial. In principle, the answer depends on $q$ . But magically, it doesn't! Mark Meckes and I proved this. So $$ D_{\text{max}}(X) := \sup_{\mathbf{p}} D_q(\mathbf{p}) $$ is a well-defined real invariant of finite metric spaces $X$ , independent of the choice of $q \in [0, \infty]$ . All this can be extended to compact metric spaces, as Emily Roff and I showed . So every compact metric space has a maximum diversity, which is a nonnegative real number. What on earth is this invariant? There's a lot we don't yet know, but we do know that maximum diversity is closely related to some classical geometric invariants. For instance, when $X \subseteq \mathbb{R}^n$ is compact, $$ \text{Vol}(X) = n! \omega_n \lim_{t \to \infty} \frac{D_{\text{max}}(tX)}{t^n}, $$ where $\omega_n$ is the volume of the unit $n$ -ball and $tX$ is $X$ scaled by a factor of $t$ . This is Proposition 9.7 of my paper with Roff and follows from work of Juan Antonio Barceló and Tony Carbery. In short: maximum diversity determines volume. Another example: Mark Meckes showed that the Minkowski dimension of a compact space $X \subseteq \mathbb{R}^n$ is given by $$ \dim_{\text{Mink}}(X) = \lim_{t \to \infty} \frac{D_{\text{max}}(tX)}{\log t} $$ (Theorem 7.1 here ). So, maximum diversity determines Minkowski dimension too. There's much more to say about the geometric aspects of maximum diversity. Maximum diversity is closely related to another recent invariant of metric spaces, magnitude . Mark and I wrote a survey paper on the more geometric and analytic aspects of magnitude, and you can find more on all this in Chapter 6 of my book. Postscript Although diversity is closely related to entropy, the diversity viewpoint really opens up new mathematical questions that you don't see from a purely information-theoretic standpoint. The mathematics of diversity is a rich, fertile and underexplored area, waiting for mathematicians to come along and explore it.
{ "source": [ "https://mathoverflow.net/questions/398217", "https://mathoverflow.net", "https://mathoverflow.net/users/56328/" ] }
398,268
The recent article on Quanta (by Natalie Wolchover) concerning $\aleph_1$ vs. $\aleph_2$ suggests that there is excitement within that community: Juliette Kennedy: "It’s one of the most intellectually exciting, absolutely dramatic things that has ever happened in the history of mathematics." Another instance is the Fargues/Scholze advances on "Geometrization of the local Langlands correspondence," which has the Langlands world excited: Eva Viehmann: "It’s really changed everything. These last five or eight years, they have really changed the whole field." This makes me wonder if there is something like a heat map for all of mathematics, which would show the areas with a lot of excitement. It seems difficult to capture this via arXiv postings, but that is an obvious starting point. Has anyone pursued this?
https://paperscape.org/ is a 'heat map' of the arxiv if you color the graph by age. Unfortunately, its ability to detect links between mathematics papers is a bit lacking compared to physics papers for some reason, but it still gives a very interesting view of the subject.
{ "source": [ "https://mathoverflow.net/questions/398268", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
398,387
I am looking for examples of the following situation in mathematics: every object of type $X$ encountered in the mathematical literature, except when specifically attempting to construct counterexamples to this, satisfies a certain property $P$ (and, furthermore, this is not a vacuous statement: examples of objects of type $X$ abound); it is known that not every object of type $X$ satisfies $P$ , or even better, that “most” do not; no clear explanation for this phenomenon exists (such as “constructing a counterexample to $P$ requires the axiom of choice”). This is often presented in a succinct way by saying that “natural”, or “naturally occurring” objects of type $X$ appear to satisfy $P$ , and there is disagreement as to whether “natural” has any meaning or whether there is any mystery to be explained. Here are some examples or example candidates which come to my mind (perhaps not matching exactly what I described, but close enough to be interesting and, I hope, illustrate what I mean), I am hoping that more can be provided: The Turing degree of any “natural” undecidable but semi-decidable (i.e., recursively enumerable but not recursive) decision problem appears to be $\mathbf{0}'$ (the degree of the Halting problem): it is known (by the Friedberg–Muchnik theorem) that there are many other possibilities, but somehow they never seem to appear “naturally”. The linearity phenomenon of consistency strength of “natural” logical theories, which J. D. Hamkins recently gave a talk about ( Naturality in mathematics and the hierarchy of consistency strength ), challenging whether this is correct or even whether “naturality” makes any sense. Are there "natural" sequences with "exotic" growth rates? What metatheorems are there guaranteeing "elementary" growth rates? concerning the growth rate of “natural” sequences, which inspired the present question. The fact that the digits of irrational numbers that we encounter when not trying to construct a counterexample to this (e.g., $e$ , $\pi$ , $\sqrt{2}$ …) experimentally appear to be equidistributed, a property which is indeed true of “most” real numbers in the sense of Lebesgue measure (i.e., a random real is normal in every base: those which are are a set of full measure) but not of “most” real numbers in the category sense (i.e., a generic real is not normal in any base: those which are are a meager set). What other examples can you give of the “most $X$ do not satisfy $P$ , but those that we actually encounter in real life always do (and the reason is unclear)” phenomenon?
Most finite groups, empirically, are 2-groups (in the sense of being a $p$-group with $p=2$, not in the other sense of the word). There are a lot of them. Conjecturally, almost all finite groups are 2-groups. That is, it is conjectured that if you count all groups up to isomorphism with at most $n$ elements, then the fraction of those which are 2-groups goes to $1$ as $n$ goes to infinity. In practice, while we often encounter small 2-groups and a few specific 2-groups like $(\mathbb{Z}/2\mathbb{Z})^k$, when dealing with "largish" finite groups all these weird 2-groups don't often seem to show up.
{ "source": [ "https://mathoverflow.net/questions/398387", "https://mathoverflow.net", "https://mathoverflow.net/users/17064/" ] }
400,819
Can you prove or disprove the following claim:

Claim: $$\frac{\sqrt{3} \pi}{24}=\displaystyle\sum_{n=0}^{\infty}\frac{1}{(6n+1)(6n+5)}$$

The SageMath cell that demonstrates this claim can be found here.
Here is an elementary proof. We rewrite the series as $$\frac{1}{4}\int_0^1\frac{1-x^4}{1-x^6}\,dx=\frac{1}{8}\int_0^1\frac{dx}{1-x+x^2}+\frac{1}{8}\int_0^1\frac{dx}{1+x+x^2}.$$ It is straightforward to show that \begin{align*} \int_0^1\frac{dx}{1-x+x^2}&=\frac{2\pi}{3\sqrt{3}},\\ \int_0^1\frac{dx}{1+x+x^2}&=\frac{\pi}{3\sqrt{3}}, \end{align*} so we are done.
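Both elementary integrals, and the original series itself, are easy to confirm with a computer algebra system. A quick SymPy check (mine, not part of the answer) might look as follows:

```python
import sympy as sp

x = sp.symbols('x')
I1 = sp.integrate(1 / (1 - x + x**2), (x, 0, 1))   # expect 2*pi/(3*sqrt(3))
I2 = sp.integrate(1 / (1 + x + x**2), (x, 0, 1))   # expect pi/(3*sqrt(3))
print(sp.simplify(I1), sp.simplify(I2))

# (I1 + I2)/8 should equal sqrt(3)*pi/24
print(sp.simplify((I1 + I2) / 8 - sp.sqrt(3) * sp.pi / 24))  # 0

# numerical sanity check of the series itself
s = sum(1.0 / ((6*n + 1) * (6*n + 5)) for n in range(200000))
print(s, float(sp.sqrt(3) * sp.pi / 24))
```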
{ "source": [ "https://mathoverflow.net/questions/400819", "https://mathoverflow.net", "https://mathoverflow.net/users/88804/" ] }
401,289
I came across this problem while doing some simplifications. So, I'd like to ask: QUESTION. Is there a closed formula for the evaluation of this series? $$\sum_{(a,b)=1}\frac{\cos\left(\frac{a}b\right)}{a^2b^2}$$ where the sum runs over all pairs of positive integers that are relatively prime.
Apply Möbius summation. Writing each pair as $(a,b)=(da',db')$ with $\gcd(a',b')=1$, and noting that $\cos(a/b)=\cos(a'/b')$, the unrestricted sum over all pairs equals $\zeta(4)$ times the sum in question. The unrestricted sum is evaluated with the classical formula $\sum_{n\ge 1}\cos(n\theta)/n^2=\pi^2/6-\pi\theta/2+\theta^2/4$ (valid for $0\le\theta\le 2\pi$) at $\theta=1/b$: $$\sum_{a,b\ge 1}\frac{\cos(a/b)}{a^2b^2}=\zeta(2)^2-\frac{\pi}{2}\zeta(3)+\frac{\zeta(4)}{4}.$$ Dividing by $\zeta(4)=\pi^4/90$ gives $$\sum_{(a,b)=1}\frac{\cos\left(\frac{a}b\right)}{a^2b^2}=\frac{11}{4}-\frac{45\,\zeta(3)}{\pi^3}=1.00543\ldots$$
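A brute-force numerical check (Python; my own, using a truncated double sum, so only a few decimal places can be trusted) agrees with the closed form:

```python
from math import cos, gcd, pi

N = 1500  # truncation of the double sum over coprime pairs
s = sum(cos(a / b) / (a * a * b * b)
        for a in range(1, N + 1)
        for b in range(1, N + 1) if gcd(a, b) == 1)

zeta3 = sum(1 / n**3 for n in range(1, 10**6))   # crude value of zeta(3)
print(s)                           # ~ 1.005...
print(11/4 - 45 * zeta3 / pi**3)   # ~ 1.00543
```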
{ "source": [ "https://mathoverflow.net/questions/401289", "https://mathoverflow.net", "https://mathoverflow.net/users/66131/" ] }
401,441
Summary: Someone claims $\mathbb{R}$ can be constructed as the following intriguing quotient, which is related to Gromov's bounded cohomology. I want to find out if it is true. $$\frac{\bigl\{f:\mathbb{Z} \to \mathbb{Z} \mathrel| \mbox{ the set } \{f(m+n)-f(m)-f(n) \mathrel| m, n \in \mathbb{Z}\} \mbox{ is bounded}\bigr\}}{\{f:\mathbb{Z} \to \mathbb{Z} \mathrel| f \mbox{ is bounded}\} }.$$

EDIT: See KConrad's comment below. A similar construction, described in Hermans - An elementary construction of the real numbers, the $p$-adic numbers and the rational adele ring, yields $\mathbb{Q}_p$ and the rational adele ring $\mathbb{A}$.

Main text: In A'Campo - A natural construction for the real numbers, a natural construction of the real numbers is given as follows. (EDIT: In this post I only address the bijection and the ring structure. The correspondence is complete; for that please refer to the paper.)

Definition (Bounded cochains). Define $C^{n} = C^{n}(\mathbb{Z})$ to be $\operatorname{Map}(\mathbb{Z}^{\times n}, \mathbb{Z})$ and $C^n_b = C^{n}_b(\mathbb{Z})$ to be the subset consisting of functions $f$ having bounded image, i.e. $\operatorname{Card}(\operatorname{Im}(f)) < \infty$.

Definition (Differentials). Define $d: C^n \to C^{n+1}$ to be such that $$df(x_1,\dotsc,x_{n+1}) = f(x_2,\dotsc,x_{n+1}) + \sum_{k=1}^{n}(-1)^{k} f(x_1, \dotsc, x_{k-1}, x_k+x_{k+1}, \dotsc, x_{n+1}) + (-1)^{n+1}f(x_1,\dotsc,x_n).$$ Obviously, $d(C^n_b) \subseteq C^{n+1}_b$, so $C^n_b \subseteq d^{-1}(C^{n+1}_b)$.

Algebraic operations: Clearly, $C^1$ has a ring structure, where addition is given by pointwise addition, and multiplication is given by function composition.

Claim. $\mathbb{R} \simeq d^{-1}(C^2_b)/(C^1_b)$. This claim is made on page 1 (definition of $\mathbb{R}$) and page 6 (that $\mathbb{R}$ is the usual $\mathbb{R}$) of the paper. An explicit map $\Phi: d^{-1}(C^2_b) \to \mathbb{R}$ is given on page 4 as $$\lambda \mapsto \left[\left(\frac{\lambda (n+1)}{n+1}\right)_{n \in \mathbb{N}}\right]$$ using Cauchy sequences.

Question: Why is $\ker(\Phi) = C^1_b$? By the definition of the equivalence on the set of Cauchy sequences, $\Phi(\lambda)$ represents $0 \in \mathbb{R}$ if and only if: for each $\epsilon > 0$, there exists an $N \in \mathbb{N}$ such that $\frac{|\lambda(n+1)|}{(n+1)} < \epsilon$ whenever $n > N$. However, $\lambda: \mathbb{Z} \to \mathbb{Z}$ that sends $n$ to $\lfloor\sqrt{|n|}\rfloor$ is one such element that is not in $C^1_b$ (namely, not bounded).

EDIT: As Anthony Quas points out below, such $\lambda$ isn't in $d^{-1}(C^2_b)$. You can see this by taking $m = n \to \infty$. Still, I'm curious about a direct proof for the kernel being $C^1_b$. This is given in Anthony Quas's answer.

Related: Category-theoretic description of the real numbers (Mathematics Stack Exchange); Gromov's bounded cohomology, see Ivanov - Notes on the bounded cohomology theory and the 9th page of A'Campo's paper.
So here is my attempt to reconstruct the construction... Suppose $f\colon\mathbb Z\to\mathbb Z$ satisfies $|f(m+n)-f(m)-f(n)|\le M$ as $m,n$ run over $\mathbb Z$ . Then setting $m=n=2^k$ , we see $|f(2^{k+1})-2f(2^k)|\le M$ , from which it follows that $f(2^k)/2^k$ is a Cauchy sequence, and so converges to some $\alpha\in\mathbb R$ . Now given $n\in (2^{k-1},2^k]$ , let its binary expansion be $n=2^{k-1}+2^{j_1}+\ldots+2^{j_r}$ (with $r< k\le \log_2 n$ ). Inductively, we can show $\Big|f(n)-\big[f(2^{k-1})+\ldots +f(2^{j_r})\big]\Big|\le rM$ , from which, together with the above, we can deduce that the full sequence $f(n)/n$ converges to $\alpha$ . On the other hand, given $\alpha\in\mathbb R$ , if one defines $f(n)=\lfloor \alpha n\rfloor$ , then it is easy to see that $f(n+m)-f(n)-f(m)$ takes values in $\{0,1\}$ , so that the map is surjective. Finally, suppose $f(n)/n\to 0$ and $|f(n+m)-f(n)-f(m)|\le M$ is bounded. We have to show that $f$ is bounded. If $|f(n_0)|\ge 2M$ for some $n_0$ , then $|f(2n_0)|\ge 2|f(n_0)|-M$ , from which we see inductively that $f(2^kn_0)\ge (2^k+1)M$ , contradicting the assumption that $f(n)/n\to 0$ .
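The two directions of the correspondence are easy to visualise numerically. The sketch below (Python, my own illustration) takes $f(n)=\lfloor \alpha n\rfloor$ for $\alpha=\sqrt2$, confirms that its defect $f(m+n)-f(m)-f(n)$ only takes the values $0$ and $1$, and recovers $\alpha$ as the limiting slope $f(n)/n$:

```python
from math import floor, sqrt

alpha = sqrt(2)

def f(n: int) -> int:
    return floor(alpha * n)

# the defect is bounded: it lands in {0, 1}
defects = {f(m + n) - f(m) - f(n)
           for m in range(-60, 61) for n in range(-60, 61)}
print(defects)                                # {0, 1}

# the slope f(n)/n converges to alpha = 1.41421...
print([f(10**k) / 10**k for k in range(1, 7)])
```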
{ "source": [ "https://mathoverflow.net/questions/401441", "https://mathoverflow.net", "https://mathoverflow.net/users/124549/" ] }
401,746
Many papers refer to an untitled manuscript of Jon Beck (Cornell, 1966) for the origin of the monadicity theorem (originally called a "tripleability theorem"). An early proof is in Manes's 1967 thesis A Triple Miscellany: Some Aspects of the Theory of Algebras over a Triple (Theorem 1.2.9). Manes cites Beck's 1967 thesis Triples, Algebras, and Cohomology as a reference, but the monadicity theorem does not actually appear there. Where can one find a copy (preferably digitised) of the untitled manuscript of Beck containing the monadicity theorem? (Considering that the manuscript is cited, presumably a copy exists and was circulated, rather than passed on by word of mouth.) Evidence for the existence of the manuscript is given by an email of Marta Bunge on the categories mailing list (dated 4th November 2007): There is an unpublished (untitled and undated) four-pages manuscript which John Beck gave to me (and I supposed also to many ohers) when he was at McGill. In it, he states and proves two theorems, the CTT (crude tripleableness theorem), and the PTT (precise tripleableness theorem). There is a connection between triples and descent implicit in the PTT. But this is not the same connection with descent as the Benabou-Roubaud theorem.
After reaching out to every researcher who cited the manuscript, John Kennison was kind enough to find and scan his copy of the untitled manuscript containing the crude and precise monadicity theorems. I have uploaded it to the nLab for posterity: Jon Beck's untitled manuscript . This copy was distributed at the Conference Held at the Seattle Research Center of the Battelle Memorial Institute in June – July 1968, though evidence from citations suggests it was first distributed as early as 1966.
{ "source": [ "https://mathoverflow.net/questions/401746", "https://mathoverflow.net", "https://mathoverflow.net/users/152679/" ] }
402,227
A famous result of Galois, in his letter to Auguste Chevalier, is that for $p$ prime $>11$ the group $\operatorname{PSL}(2,\mathbb{F}_p) $ does not embed in the symmetric group $\mathfrak{S}_p$ . The standard proof nowadays goes through the classification of subgroups of $\operatorname{PSL}(2,\mathbb{F}_p) $ (Dickson's theorem), which is far from trivial. Does anyone know a simpler argument?
A few months ago Péter Pál Pálfy gave a talk about this exact topic. The abstract of the talk was the following:

In his "testamentary letter" Galois claims (without proof) that PSL(2,p) does not have a subgroup of index p whenever p>11, and gives examples that for p = 5, 7, 11 such subgroups exist. The attempt by Betti in 1853 to give a proof does not seem to be complete. Jordan's proof in his 1870 book uses methods certainly not known to Galois. Nowadays we deduce Galois's result from the complete list of subgroups of PSL(2,p) obtained by Gierster in 1881. In the talk I will give a proof that might be close to Galois's own thoughts. Last October I exchanged a few e-mails on this topic with Peter M. Neumann. So the talk is in some way a commemoration of him.

The recording of the talk is available here. The presentation of the proof begins around 32 minutes in. The proof is elementary, but is itself nontrivial. We easily reduce to showing $G=\operatorname{PSL}(2,\mathbb{F}_p)$ has no index-$p$ subgroup for large $p$. The idea is to study the natural doubly transitive action of $G$ on the projective line and the interplay between an index-$p$ subgroup and the subgroup of affine transformations inside $G$. The bounds on $p$ come out of realizing that all elements of $\mathbb F_p^\times$ must satisfy quadratic relations coming from that action.
{ "source": [ "https://mathoverflow.net/questions/402227", "https://mathoverflow.net", "https://mathoverflow.net/users/40297/" ] }
402,497
In so-called 'natural units', it is said that physical quantities are measured in the dimension of 'mass'. For example, $\text{[length]=[mass]}^{-1}$ and so on. In quantum field theory, the dimension of a coupling constant is very important because it determines the renormalizability of the theory. However, I do not see what exactly the mathematical meaning of 'physical dimension' is. For example, suppose we have self-interaction terms $g_1\cdot \phi\partial^\mu \phi \partial_\mu \phi$ and $g_2 \cdot \phi^4$, where $\phi$ is a real scalar field, the $g_i$ are coupling constants, and we assume $4$-dimensional spacetime. Then, it is stated in standard physics books that the scalar field is of mass dimension $1$, so $g_1$ must be of mass dimension $-1$ and $g_2$ is dimensionless. But these numbers do not seem to play any 'mathematical' role. To clarify my questions: What forbids me from proclaiming that $\phi$ is dimensionless instead of mass dimension $1$? What is the exact difference between a dimensionless coupling constant and a coupling constant of mass dimension $-1$? These issues seem very fundamental but always confuse me. Could anyone please provide a precise answer?
Mathematically, the concept of a physical dimension is expressed using one-dimensional vector spaces and their tensor products. For example, consider mass. You can add masses together and you know how to multiply a mass by a real number. Thus, masses should form a one-dimensional real vector space $M$ . The same reasoning applies to other physical quantities, like length, time, temperature, etc. Denote the corresponding one-dimensional vector spaces by $L$ , $T$ , etc. When you multiply (say) some mass $m∈M$ and some length $l∈L$ , the result is $m⊗l∈M⊗L$ . Here $M⊗L$ is another one-dimensional real vector space, which is capable of “storing” physical quantities of dimension mass times length. Multiplicative inverses live in the dual space: if $m∈M$ , then $m^{-1}∈M^*$ , where $\def\Hom{\mathop{\rm Hom}} \def\R{{\bf R}} M^*=\Hom(M,\R)$ . The element $m^{-1}$ is defined as the unique element in $M^*$ such that $m^{-1}(m)=1$ , where $-(-)$ denotes the evaluation of a linear functional on $M$ on an element of $M$ . Observe that $m ⊗ m^{-1} ∈ M⊗M^* ≅ \R$ , where the latter canonical isomorphism sends $(f,m)$ to $f(m)$ , so $m^{-1}$ is indeed the inverse of $m$ . Next, you can also define powers of physical quantities, i.e., $m^t$ , where $m∈M$ is a mass and $t∈\R$ is a real number. This is done using the notion of a density from differential geometry. (The case $\def\C{{\bf C}} t\in\C$ works similarly, but with complex one-dimensional vector spaces.) In order to do this, we must make $M$ into an oriented vector space. For a one-dimensional vector space, this simply means that we declare one out of the two half-rays in $M∖\{0\}$ to be positive, and denote it by $M_{>0}$ . This makes perfect sense for physical quantities like mass, length, temperature. Once you have an orientation on $M$ , you can define $\def\Dens{\mathop{\rm Dens}} \Dens_d(M)$ for $d∈\R$ as the one-dimensional (oriented) real vector space whose elements are equivalence classes of pairs $(a,m)$ , where $a∈\R$ , $m∈M_{>0}$ . The equivalence relation is defined as follows: $(a,b⋅m)∼(a b^d,m)$ for any $b∈\R_{>0}$ . The vector space operations are defined as follows: $0=(0,m)$ for some $m∈M_{>0}$ , $-(a,m)=(-a,m)$ , $(a,m)+(a',m)=(a+a',m)$ , and $s(a,m)=(sa,m)$ . It suffices to add pairs with the same second component $m$ because the equivalence relation allows you to change the second component arbitrarily. Once we have defined $\Dens_d(M)$ , given $m∈M_{>0}$ and $d∈\R$ , we define $m^d∈\Dens_d(M)$ as the equivalence class of the pair $(1,m)$ . It is easy to verify that all the usual laws of arithmetic, like $m^d m^e = m^{d+e}$ , $m^d n^d = (mn)^d$ , etc., are satisfied, provided that multiplication and reciprocals are interpreted as explained above. Using the power operation operations we just defined, we can now see that the equivalence class of $(a,m)$ is equal to $a⋅m^d$ , where $m∈M_{>0}$ , $m^d∈\Dens_d(M)_{>0}$ , and $a⋅m^d∈\Dens_d(M)$ . This makes the meaning of the equivalence relation clear. In particular, for $d=-1$ we have a canonical isomorphism $\Dens_{-1}(M)→M^*$ that sends the equivalence class of $(1,m)$ to the element $m^{-1}∈M^*$ defined above, so the two notions of a reciprocal element coincide. If you are dealing with temperature without knowing about the absolute zero, it can be modeled as a one-dimensional real affine space. 
That is, you can make sense of a linear combination $$a_1 t_1 + a_2 t_2 + a_3 t_3$$ of temperatures $t_1$ , $t_2$ , $t_3$ as long as $a_1+a_2+a_3=1$ , and you don't need to know about the absolute zero to do this. The calculus of physical quantities can be extended to one-dimensional real affine spaces without much difficulty. None of the above constructions make any noncanonical choices of physical units (such as a unit of mass, for example). Of course, if you do fix such a unit $μ∈M_{>0}$ , you can construct an isomorphism $\R→\Dens_d(M)$ that sends $a∈\R$ to $aμ^d$ , and the above calculus (including the power operations) is identified with the usual operations on real numbers. In general relativity, we no longer have a single one-dimensional vector space for length. Instead, we have the tangent bundle , whose elements model (infinitesimal) displacements. Thus, physical quantities no longer live in a fixed one-dimensional vector space, but rather are sections of a one-dimensional vector bundle constructed from the tangent bundle. For example, the volume is an element of the total space of the line bundle of 1-densities $\Dens_1(T M)$ , and the length is now given by the line-bundle of $λ$ -densities $\Dens_λ(T M)$ , where $λ=1/\dim M$ .
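This calculus of one-dimensional spaces and their tensor powers is exactly what units-of-measurement libraries implement in code. Here is a minimal sketch (Python; my own toy example, not a standard library API) that tracks a tuple of dimension exponents, i.e. which tensor power of $M$, $L$, $T$ a quantity lives in, and refuses to add elements of different one-dimensional spaces:

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # exponents for (mass, length, time)

    def __mul__(self, other):          # lands in the tensor product
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):          # only defined within one space
        if self.dims != other.dims:
            raise TypeError("cannot add quantities of different dimension")
        return Quantity(self.value + other.value, self.dims)

    def __pow__(self, d):              # an element of Dens_d of the line
        return Quantity(self.value ** float(d),
                        tuple(Fraction(e) * Fraction(d) for e in self.dims))

kg = Quantity(2.0, (1, 0, 0))
m = Quantity(3.0, (0, 1, 0))
print((kg * m).dims)               # (1, 1, 0): an element of M (x) L
print((m ** Fraction(1, 2)).dims)  # exponent 1/2 in the length slot: a half-density
```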
{ "source": [ "https://mathoverflow.net/questions/402497", "https://mathoverflow.net", "https://mathoverflow.net/users/56524/" ] }
403,011
I am looking for a proof of the following claim: First define the function $\chi(n)$ as follows: $$\chi(n)=\begin{cases}1, & \text{if }n \equiv \pm 1 \pmod{10} \\ -1, & \text{if }n \equiv \pm 3 \pmod{10} \\ 0, & \text{otherwise} \end{cases}$$ Then, $$\frac{\pi^2}{5\sqrt{5}}=\displaystyle\sum_{n=1}^{\infty}\frac{\chi(n)}{n^2}$$ The SageMath cell that demonstrates this claim can be found here.
More generally, if $N$ is a positive integer and $1\le k\le N-1$ is an integer, then $$S_{N,k} := \sum_{n=0}^\infty\biggl( \frac{1}{(N n+N-k)^2} + \frac{1}{(N n+k)^2} \biggr) = \frac{\pi^2}{N^2\sin^2(\pi k/N)}.$$ Your sum is $S_{10,1}-S_{10,3}$.
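One can sanity-check both the general formula and the special case numerically; the following sketch (Python, my own) truncates the series and compares it with the closed forms:

```python
from math import pi, sin, sqrt

def S(N, k, terms=10**5):
    """Truncation of the series S_{N,k}; the tail is O(1/(N^2 * terms))."""
    return sum(1 / (N*n + N - k)**2 + 1 / (N*n + k)**2 for n in range(terms))

for N, k in [(10, 1), (10, 3), (6, 1)]:
    print(S(N, k), pi**2 / (N * sin(pi * k / N))**2)

# the claim in the question
print(S(10, 1) - S(10, 3), pi**2 / (5 * sqrt(5)))
```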
{ "source": [ "https://mathoverflow.net/questions/403011", "https://mathoverflow.net", "https://mathoverflow.net/users/88804/" ] }
403,184
A (non-mathematical) friend recently asked me the following question: Does the golden ratio play any role in contemporary mathematics? I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today. I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question. My apologies if this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.
The "Cleary group" $F_\tau$ is a version of Thompson's group $F$ , introduced by Sean Cleary, that is defined using the golden ratio, and it's definitely of interest in the world of Thompson's groups. See An Irrational-slope Thompson's Group ( Publ. Mat. 65(2): 809-839 (2021). DOI: 10.5565/PUBLMAT6522112 ). Very roughly, where $F$ arises by "cutting things in half", $F_\tau$ arises in an analogous way by "cutting things using the golden ratio". There are lots of similarities between $F_\tau$ and $F$ , but also plenty of mysteries, for example I believe it's still open whether $F_\tau$ embeds into $F$ (i.e., whether there exists a subgroup of $F$ isomorphic to $F_\tau$ ).
{ "source": [ "https://mathoverflow.net/questions/403184", "https://mathoverflow.net", "https://mathoverflow.net/users/352001/" ] }
403,200
In several places (for example, Chriss & Ginzburg’s book “Representation Theory and Complex Geometry”), it is stated that the set $X$ of Borel subalgebras of a semi-simple Lie algebra $\mathfrak g$ forms a Zariski-closed subset of a Grassmannian of this semi-simple Lie algebra. A Borel subalgebra is a maximal solvable subalgebra; by general theorems, all of them have the same dimension, say $d$. The set $X$ can then be identified with a subset of the Grassmannian $\mathrm{Gr}(d,\mathfrak g)$. My question is: why is $X$ a Zariski-closed subset? How can one translate the condition “solvable” into an algebraic condition? I thought of Lie's criterion for solvability, but it is not an equivalence, so it did not work.
The "Cleary group" $F_\tau$ is a version of Thompson's group $F$ , introduced by Sean Cleary, that is defined using the golden ratio, and it's definitely of interest in the world of Thompson's groups. See An Irrational-slope Thompson's Group ( Publ. Mat. 65(2): 809-839 (2021). DOI: 10.5565/PUBLMAT6522112 ). Very roughly, where $F$ arises by "cutting things in half", $F_\tau$ arises in an analogous way by "cutting things using the golden ratio". There are lots of similarities between $F_\tau$ and $F$ , but also plenty of mysteries, for example I believe it's still open whether $F_\tau$ embeds into $F$ (i.e., whether there exists a subgroup of $F$ isomorphic to $F_\tau$ ).
{ "source": [ "https://mathoverflow.net/questions/403200", "https://mathoverflow.net", "https://mathoverflow.net/users/142832/" ] }
403,414
The following simple-looking inequality for complex numbers in the unit disk generalizes Problem B5 on the Putnam contest 2020: Theorem 1. Let $z_1, z_2, \ldots, z_n$ be $n$ complex numbers such that $\left|z_i\right| \leq 1$ for each $i \in \left\{1,2,\ldots,n\right\}$. Prove that \begin{align} \left| z_{1} + z_{2} + \cdots + z_n - n \right| \geq \left| z_{1} z_{2} \cdots z_n - 1 \right| , \end{align} and equality holds only if at least $n-1$ of the $n$ numbers $z_1, z_2, \ldots, z_n$ equal $1$. In the particular case when $n = 4$, the theorem can be proved using stereographic projection onto a line, followed by a longish computation. This is how both proposed solutions go. On the other hand, in the general case, the only elementary solution I know was given by @mela_20-15 on AoPS (spread over several posts). It has some beautiful parts (Cauchy induction), but also some messy ones (tweaking the points to lie on the unit circle in the induction step). There might also be a heavily analysis-based proof in Kiran Kedlaya's solutions (not sure if Theorem 1 is proved in full there). Question. What is the "proof from the book" for Theorem 1? Someone suggested that I try to interpolate expressions of the form $\dbinom{n-1}{k-1}^{-1} \left|\sum\limits_{i_1 < i_2 < \cdots < i_k} z_{i_1} z_{i_2} \cdots z_{i_k} - \dbinom{n}{k}\right|$ between the left and the right hand sides in Theorem 1; but this does not work. For example, the inequality $\dfrac{1}{2} \left| z_1 z_2 + z_2 z_3 + z_1 z_3 - 3\right| \geq \left|z_1 z_2 z_3 - 1\right|$ fails quite often even on the unit circle. A warning: Inequalities like Theorem 1 are rather hard to check numerically. Choosing the $z_i$ uniformly will rarely hit close to the equality case; usually the left hand side will be much larger than the right. Near the equality case, on the other hand, it is hard to tell whether the answer comes out right legitimately or whether accumulated errors have flipped the sign.
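For what it's worth, the failure of the interpolated inequality mentioned above is easy to exhibit; in the sketch below (Python/NumPy, my own check) the point $z_1=z_2=z_3=-1$ already violates it, and random sampling on the unit circle shows that violations are common:

```python
import numpy as np

def holds(z1, z2, z3):
    lhs = abs(z1*z2 + z2*z3 + z1*z3 - 3) / 2
    rhs = abs(z1*z2*z3 - 1)
    return lhs >= rhs

print(holds(-1, -1, -1))          # False: lhs = 0, rhs = 2

rng = np.random.default_rng(1)
z = np.exp(1j * rng.uniform(0, 2*np.pi, size=(10**5, 3)))
lhs = np.abs(z[:, 0]*z[:, 1] + z[:, 1]*z[:, 2] + z[:, 0]*z[:, 2] - 3) / 2
rhs = np.abs(z.prod(axis=1) - 1)
print((lhs < rhs).mean())         # a sizable fraction of sampled points fail
```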
Darij, such stuff is usually Gauss-Lucas in disguise and this case is no exception, though one needs to use once the version for the polar derivative $D_1f(z)=(1-z)f'(z)+nf(z)$ of a polynomial $f$ of degree $n$ with respect to $1$ (the corresponding theorem says that if a circle contains all roots of the polynomial $f$ but not the point $w$, then it contains all roots of $D_wf$). Just apply it once and then use the usual Gauss-Lucas, observing every time that if all roots are in the unit disk, then the free term cannot exceed the leading coefficient in absolute value. For a polynomial of degree $n=5$, say, write it as $(z-1)^5+a_4z^4+\dots+a_0$. Then we need to show that if its roots are in the unit disk, then $|a_0|\le|a_4|$. Apply $D_1$ to kill $(z-1)^5$. You'll be left with the polynomial whose coefficients are $$ a_4, 2a_3+4a_4, 3a_2+3a_3, 4a_1+2a_2, 5a_0+a_1 $$ and whose roots are still in the unit disk. Assume that $|a_4|<|a_0|=1$, say. Then $|a_1|>4$. Differentiate (normally). The coefficients will become $$ 4a_4, 3(2a_3+4a_4), 2(3a_2+3a_3), 4a_1+2a_2\,. $$ Since $|a_1|>4$, we must have $|a_2|>6$. Differentiate again. You'll get $$ 12a_4, 6(2a_3+4a_4), 2(3a_2+3a_3)\,. $$ Since $|a_2|>6$, we must have $|a_3|>4$. Finally, differentiate once more. You'll get $$ 24a_4, 6(2a_3+4a_4). $$ Since $|a_3|>4$, we must have $|a_4|>1$, contrary to what we have assumed. I leave it to you to write this properly for an arbitrary $n$ and to treat the equality case :-)
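The coefficient bookkeeping in the $n=5$ illustration is easy to verify symbolically; this SymPy snippet (mine, not part of the answer) computes the polar derivative $D_1f$ and reproduces the listed coefficients:

```python
import sympy as sp

z = sp.symbols('z')
a0, a1, a2, a3, a4 = sp.symbols('a0:5')
f = (z - 1)**5 + a4*z**4 + a3*z**3 + a2*z**2 + a1*z + a0

# polar derivative with respect to 1, for degree n = 5
D1f = sp.expand((1 - z) * sp.diff(f, z) + 5 * f)
print(sp.Poly(D1f, z).all_coeffs())
# [a4, 2*a3 + 4*a4, 3*a2 + 3*a3, 4*a1 + 2*a2, 5*a0 + a1]
```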
{ "source": [ "https://mathoverflow.net/questions/403414", "https://mathoverflow.net", "https://mathoverflow.net/users/2530/" ] }
403,441
$\DeclareMathOperator\Diff{Diff}$ Suppose for simplicity that $X$ is affine; it is then possible to define $\Diff(X)$ — the ring of Grothendieck differential operators. When $X$ is smooth, then

Definition. The category of $D$-modules on $X$ is defined to be the category of modules over $\Diff(X)$. (Category 1)

However, when $X$ is singular, this is not the right category to consider. One usually follows Kashiwara's approach:

Definition. Choose a closed embedding $X\hookrightarrow V$ and define $D$-modules to be modules over $\Diff(V)$ that are (set-theoretically) supported on $X$. (Category 2)

The usual reason I heard for why to consider the second category is that $\Diff(X)$ behaves badly when $X$ is singular, and specifically people will point out that $\Diff(X)$ is not Noetherian. (Noetherian = left + right.) For example, this is the case when $X$ is the 'cubic cone' [BGG72]. However, I am no longer satisfied with this answer because of the following:

(1) when $X$ is a curve then $\Diff(X)$ is Noetherian [SS88];
(2) when $X=V/G$ is a quotient singularity then $\Diff(X)$ is Noetherian.

But in these cases, one still considers Category 2 for these $X$. So it has to be the case that, in general and in these cases, $\Diff(X)$ is bad not just because it is not Noetherian; it is also bad for other reasons. So my question is:

Question: Why do we work in Category 2 in the situations above? Or, a better question: what is bad about $\Diff(X)$ besides not being Noetherian?

Note my question is not how to work in Category 2, but why it fails badly if we work in Category 1 in situations (1) and (2). It is worth remarking that: in (1), if the curve is cuspidal then Category 1 $\cong$ Category 2 [SS88], generalised in [BZN04]; in (2), if $X=\mathbb{C}^2/(\mathbb{Z}/2\mathbb{Z})$ then Category 1 $\cong$ Category 2 (I think this is true, but do please correct me if I am wrong).

[BGG72] I. N. Bernšteĭn, I. M. Gel'fand, and S. I. Gel'fand. Differential operators on a cubic cone. Uspehi Mat. Nauk, 27(1(163)):185–190, 1972.
[BZN04] David Ben-Zvi and Thomas Nevins. Cusps and D-modules. Journal of the American Mathematical Society, 17.1:155–179, 2004.
[SS88] S. P. Smith and J. T. Stafford. Differential operators on an affine curve. Proc. London Math. Soc. (3), 56(2):229–259, 1988.

Noted later: actually it is not true that for $X=\mathbb{C}^2/(\mathbb{Z}/2\mathbb{Z})$ Category 1 $\cong$ Category 2; sorry for the confusion.
{ "source": [ "https://mathoverflow.net/questions/403441", "https://mathoverflow.net", "https://mathoverflow.net/users/111070/" ] }
403,939
Some background first. I recently graduated with a master's degree in applied mathematics. During graduate school I began working on a paper, which I continued to work on post-graduation. A complete working copy of the paper is done and I have posted it on the arXiv here. The work contained in the paper is completely original and solves an open problem. It was my opinion that the paper contained publishable material. To verify, I emailed a professor at my alma mater with a copy of the current draft (current as of approximately six months ago). The professor did reply stating that the work was publishable and even suggested some Q1/Q2 journals that might accept this type of work. While this was useful feedback, I received the reply in just a few days, so I doubt the professor in question had the ability to read my paper in depth.

The problem: I have published a couple of papers before and thus have some experience in the world of academic publishing. That said, the scale and complexity of this paper is something I have never dealt with before, so I am not comfortable with proceeding to publish it without help/guidance, i.e. on my own. In particular, I suspect I am going to have to divide the work into small portions and publish a few separate papers, but I don't know how best to do this and do not want to sink a lot of additional time into this without any direction. Also, the paper is very dense and I am concerned that its readability is not optimal. Given that I do not have a ton of experience with large papers like this and am essentially working in a vacuum, I also really desire to get feedback on the quality of my proofs, which I suspect are not as concise as they could be.

My situation seems a little unusual and I suspect that the feedback I am looking for would typically be provided by an adviser in a Ph.D. program. Given that I do not have an adviser that can provide detailed feedback, what should I do? I have tried reaching out to researchers/experts with relevant backgrounds and offering authorship in exchange for the help I need. Since what I'm seeking entails a significant amount of work, this proposition seemed reasonable; however, my efforts have not led to much fruit. As mathematicians in academia, how are these types of requests viewed, and how might I go about reaching out for help? I do not have funding, and so I considered the offering of authorship as a reasonable incentive to get the help I need. Is this practice frowned upon, and is there anything else I can do to increase my chances of getting a researcher's attention?
First of all, I would consider it against the ethics of scientific publishing to accept co-authorship for research one was not involved in. So I don't think that is a viable route. What you have achieved is quite unusual: you have on your own identified and developed a research direction and produced a set of results that advance the state of the art. Isn't that what a Ph.D. is all about? Rather than seeking a co-author, I would seek a Ph.D. advisor. Contacting an expert in the field, asking for a chance to present your results – with the objective of expanding this into a Ph.D. thesis – might very well succeed. Preparing a seminar in which you present your work would also help you to focus on the essential innovation, which is difficult to extract from the arXiv paper. You might even find that this seminar can be converted into a paper that would be more suitable for publication. In the meantime, by posting your work on arXiv you have established your priority, so a journal publication is not at all urgent.
{ "source": [ "https://mathoverflow.net/questions/403939", "https://mathoverflow.net", "https://mathoverflow.net/users/125801/" ] }
404,213
Can a polynomial $p$ with rational coefficients and degree greater than $1$ induce a surjective map $\mathbb Q \to \mathbb Q$? Clearly this is impossible for $p$ of even degree, and I imagine that Cardano's formula quickly reveals it to be impossible in the cubic case, although I have not checked in detail. My guess is that no such $p$ exists. Does one exist? If so, is there an explicit example? Failing a general yes-or-no answer, are there sufficient conditions to identify a non-surjective polynomial function?
No, this can't happen. One way to prove this is via Hilbert irreducibility: The polynomial $p(x) - t$ is irreducible in $\mathbb Q[x,t]$, so there are infinitely many specializations $t = c$ with $c \in \mathbb Q$ such that $p(x) - c$ is irreducible in $\mathbb Q[x]$. Since the degree of $p(x)$ is greater than 1, it follows that for each such $c$ the polynomial $p(x) - c$ has no rational roots.
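For a concrete illustration, one can certify many missed values $c$ by checking irreducibility of $p(x)-c$ with a computer algebra system; a small sympy sketch (the choice $p(x)=x^3-x$ is just an example):

```python
from sympy import symbols, Poly

x = symbols('x')
p = x**3 - x   # any fixed polynomial of degree > 1 would do

# each integer c for which p(x) - c is irreducible over Q is missed by p on Q,
# since an irreducible polynomial of degree > 1 has no rational root
missed = [c for c in range(-10, 11) if Poly(p - c, x).is_irreducible]
print(missed)   # plenty of values not of the form p(q) with q rational
```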
{ "source": [ "https://mathoverflow.net/questions/404213", "https://mathoverflow.net", "https://mathoverflow.net/users/351164/" ] }
404,724
Linear algebra as we learn it as undergraduates usually holds over any field (even though we usually learn it over the real or complex numbers). I am looking for a list of concepts and results in linear algebra that actually depend on the choice of field. To start, I propose the notion of a complex-valued inner product. Here the anti-linear axiom requires an involution on the field.
The existence of Chevalley–Jordan decompositions depends on the perfectness of the field.
{ "source": [ "https://mathoverflow.net/questions/404724", "https://mathoverflow.net", "https://mathoverflow.net/users/167165/" ] }
404,760
Example: How can you guess a polynomial $p$ if you know that $p(2) = 11$? It is simple: just write 11 in binary format, 1011, and it gives the coefficients: $p(x) = x^3+x+1$. Well, of course, this polynomial is not unique, because $2x^k$ and $x^{k+1}$ give the same value at $x=2$; so, for example, $2x^2+x+1$ and $4x+x+1$ also satisfy the condition, but their coefficients have greater absolute values!

Question 1: Assume we want to find $q(x)$ with integer coefficients, given its values at some set of primes $q(p_i)=y_i$, such that $q(x)$ has the least possible coefficients. How should we do it? Any suggestions/algorithms/software are welcome. (Least coefficients means: the least maximum of the moduli of the coefficients.)

Question 2: Can one help guess the polynomial $p$ such that $p(3) = 221157$, $p(5) = 31511625$, with the smallest possible integer coefficients (least maximum of the moduli of the coefficients)? Does it exist? (That example comes from the question MO404817 on the count of $3\times 3$ anticommuting matrices $AB+BA=0$ over $F_p$.) (The degree of the polynomial seems to be 10 or 11. It seems divisible by $x^3$, and I have run a brute-force search bounding the absolute values of the coefficients by 3, but no polynomial satisfying these conditions was found; so I will increase the bound on the coefficients and run the search again, but the execution time grows too quickly as the bound increases, and it may be that brute force is not a good choice.)

Question 3: Do conditions like $q(p_i)=y_i$ imply some bounds on the coefficients? E.g., can we estimate that the coefficients are higher than some bound?
You can certainly do better than brute force by considering modular constraints. If the solution is $p(x) = \sum_i a_i x^i$, then $p(x) - \sum_{j=0}^{n-1} a_j x^j$ is divisible by $x^n$ and $$\frac{p(x) - \sum_{j=0}^{n-1} a_j x^j}{x^n} = a_n \pmod x$$ Solving for $a_0$ in each of the given bases and using the Chinese remainder theorem gives an equivalence class for $a_0$; for each possible value of $a_0$ you can expand similarly for $a_1$; and traversing this tree in order of increasing cost of the coefficients gives a directed search. This works in principle for any cost function which increases when any coefficient increases in absolute value.

This Python code implements the idea and finds two polynomials with sum of absolute values of 29: $$-2x^3 + 4x^4 + 3x^5 + 11x^6 + x^7 + 5x^8 + 3x^{10} \\ -2x^3 + 4x^4 + 3x^5 - 4x^6 + 9x^7 + 4x^8 + 3x^{10}$$ and one polynomial with maximum absolute value of 7: $$-2x^3 + 4x^4 + 3x^5 - 4x^6 - 6x^7 - 3x^8 + 7x^9 + 2x^{10}$$ in a small fraction of a second.

Some follow-up questions in comments brought me to the realisation that if we're trying to interpolate $\{ (x_i, y_i) \}$ with the $x_i$ coprime, there is at most one polynomial with coefficients in the range $[-\lfloor \tfrac{(\operatorname{lcm} x_i) - 1}2 \rfloor, \lfloor \tfrac{\operatorname{lcm} x_i}2 \rfloor]$, because the tree collapses to a chain. This gives the following algorithm for finding such a polynomial, if it exists:

```
M := lcm(x_i)
while any y_i is non-zero:
    find a_0 by Chinese remainder theorem
    if a_0 > floor(M / 2): a_0 -= M
    output a_0
    update y_i := (y_i - a_0) / x_i
```

In the long term, the initial values of the $y_i$ are reduced to negligibility by the repeated division by $x_i$, so eventually each $y_i$ will be reduced to a range which is bounded by $\frac{x_i}{x_i - 1} M$. This means that for a given set of $x_i$ it's possible to compute a finite directed graph to see whether existence is guaranteed. In the particular case that the $x_i$ are $\{3, 5\}$ there are three cycles, all of them loops: $(0,0) \to (0,0)$ is the terminating loop which indicates that a solution exists, but there are also loops $(2, 1) \to (2, 1)$ and $(-2, -1) \to (-2, -1)$.
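For concreteness, here is one possible Python transcription of that chain algorithm (a sketch assuming sympy's crt helper; the max_steps guard catches the non-terminating loops such as $(2,1)\to(2,1)$ described above):

```python
from math import lcm
from sympy.ntheory.modular import crt

def balanced_interpolate(xs, ys, max_steps=200):
    """Recover the polynomial with coefficients in the balanced range
    [-floor((M-1)/2), floor(M/2)], M = lcm(xs), through the points (xs, ys)
    (xs pairwise coprime); returns coefficients low-degree-first, or None
    if the iteration cycles without terminating."""
    M = lcm(*xs)
    ys = list(ys)
    coeffs = []
    for _ in range(max_steps):
        if all(y == 0 for y in ys):
            return coeffs
        a0 = int(crt(xs, ys)[0]) % M   # a0 = p(0) mod each x_i, combined via CRT
        if a0 > M // 2:
            a0 -= M                    # balanced representative
        coeffs.append(a0)
        ys = [(y - a0) // x for x, y in zip(xs, ys)]
    return None

print(balanced_interpolate([3, 5], [221157, 31511625]))
# -> [0, 0, 0, -2, 4, 3, -4, -6, -3, 7, 2], the max-|coefficient|-7 solution
```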
{ "source": [ "https://mathoverflow.net/questions/404760", "https://mathoverflow.net", "https://mathoverflow.net/users/10446/" ] }
404,882
Gian-Carlo Rota's famous 1991 essay, "The pernicious influence of mathematics upon philosophy", contains the following passage:

Perform the following thought experiment. Suppose that you are given two formal presentations of the same mathematical theory. The definitions of the first presentation are the theorems of the second, and vice versa. This situation frequently occurs in mathematics. Which of the two presentations makes the theory 'true'? Neither, evidently: what we have is two presentations of the same theory.

Rota's claim that "this situation frequently occurs in mathematics" sounds reasonable to me, because I feel that I have frequently encountered authors who, after proving a certain theorem, say something like, "This theorem can be taken to be the definition of X," with the implicit suggestion that the original definition of X would then become a theorem. However, when I tried to come up with explicit examples, I had a lot of trouble. My question is, does this situation described by Rota really arise frequently in the literature?

There is a close connection between this question and another MO question about cryptomorphisms. But I don't think the questions are exactly the same. For instance, different axiomatizations of matroids comprise standard examples of cryptomorphisms. It is true that one can take (say) the circuit axiomatization of a matroid and prove basis exchange as a theorem, or one can take basis exchange as an axiom and prove the circuit "axioms" as theorems. But these equivalences are all pretty easy to prove; in Oxley's book Matroid Theory, they all appear in the introductory chapter. As far as I know, none of the theorems in later chapters have the property that they could be taken as the starting point for matroid theory, with (say) basis exchange becoming a deep theorem. What I'm wondering is whether there are cases in which a significant piece of theory really is developed in two different ways in the literature, with a major theorem of Presentation A being taken as a starting point for Presentation B, and the definitions of Presentation A being major theorems of Presentation B.

Let me also mention that I don't think that reverse mathematics is quite what Rota is referring to. Brouwer's fixed-point theorem can be shown to imply the weak Kőnig's lemma over RCA$_0$, but as far as I know, nobody seriously thinks that it makes sense to take Brouwer's fixed-point theorem as an axiom when developing the basics of analysis or topology.

EDIT: In another MO question, someone quoted Bott as referring to "the old French trick of turning a theorem into a definition". I'm not sure if Bott and Rota had exactly the same concept in mind, but it seems related.
From a conversation I had with Gian-Carlo Rota when I was an undergraduate, I know that one simple but important example that he specifically had in mind was the calculus of vector fields (whether specifically in three dimensions or more generally). The gradient, divergence, and curl of differentiable fields on ${\mathbb R}^{3}$ can be defined as particular combinations of partial derivatives—in which case it is necessary to prove that they represent geometrical objects (meaning they transform correctly). Alternatively, it is possible to give purely geometrical definitions of all three objects, in which case it is necessary to prove that, when applied to sufficiently smooth functions, they can be calculated entirely in terms of partial derivatives. Whichever way you like to approach the theory, it is possible to find textbooks that take your preferred starting point and do a good job of explaining vector calculus—even though the two approaches are, philosophically, quite different in terms of what they seem to assume about what, say, $\operatorname{grad} f$ "really means."

Somebody else, in the course of that conversation, mentioned the logarithm as an even more basic example. There are actually many ways of initially defining the logarithm, and Calculus by James Stewart (or the first edition, at least) actually demonstrates explicitly that you can begin with the logarithm as the inverse of the exponential, or you can define $\ln x=\int_{1}^{x}(1/t)\,dt$ and eventually prove all the same things.
{ "source": [ "https://mathoverflow.net/questions/404882", "https://mathoverflow.net", "https://mathoverflow.net/users/3106/" ] }
405,196
Recall that $A(X)$, the K-theory of a connected, pointed space $X$, is defined as the K-theory spectrum of the ring spectrum $\Sigma^\infty_+ \Omega X$ (or via a plethora of alternative definitions). Is it known if the homotopy type of $A(X)$ determines the homotopy type of $X$? If not, what is the best one can hope for?

Of course, since $X$ is connected, the space $\Omega X$ with its loop space structure determines the homotopy type of $X$, but I am not sure if this is still true when we take $\Sigma^\infty_+$; I am worried we might get $X$ only up to $\Sigma^n \Omega^n$. Then there is the question of whether ring spectra of this type can have the same K-theory; perhaps we should assume $X$ simply connected to get a positive answer?
The answer to the question "Does the homotopy type of $A(X)$ determine the homotopy type of $X$?" is No in general. As you say, $A(X)$ is determined by the homotopy type of $\Sigma^\infty \Omega X_+$ as an associative (or $A_\infty$) ring spectrum, and this ring spectrum does not uniquely determine $X$, even if $X$ is simply-connected.

For example, suppose $Z$ is a pointed space. Let $T(Z)$ be the free associative $S$-algebra generated by $Z$. I.e., $$T(Z)=\Sigma^\infty S^0\vee \Sigma^\infty Z \vee (\Sigma^\infty Z)^{\wedge 2} \vee \cdots .$$ If $Z$ is connected, there is an equivalence of associative ring spectra $$ T(Z) \simeq \Sigma^\infty \Omega\Sigma Z_+.$$ The equivalence is a version of the classical James splitting. It is induced by a map of spectra $\Sigma^\infty Z \to \Sigma^\infty \Omega\Sigma Z_+$, extended to a map of ring spectra $T(Z)\to \Sigma^\infty \Omega\Sigma Z_+$ using freeness. It follows that if $X$ and $Y$ are connected spaces such that $\Sigma X$ and $\Sigma Y$ are not equivalent, but $\Sigma^\infty X$ and $\Sigma^\infty Y$ are equivalent, then there is an equivalence $A(\Sigma X)\simeq A(\Sigma Y)$ providing a counterexample.

A couple of comments: It is well-known that there exist non-isomorphic groups $G$ and $H$ such that the group rings $\mathbb Z[G]$ and $\mathbb Z[H]$ are isomorphic. There are even examples with finite $G$ and $H$. One may wonder if for some of these examples the spherical group rings $\Sigma^\infty G_+$ and $\Sigma^\infty H_+$ are equivalent as associative ring spectra. If yes, then $BG$ and $BH$ would provide another counterexample.

In general one can have non-equivalent ring spectra that have equivalent $K$-theories. For example, I believe that if $P$ and $Q$ are Morita equivalent in a suitable sense, then $K(P)\simeq K(Q)$. Can there be two spaces $X$ and $Y$ such that $\Sigma^\infty \Omega X_+$ and $\Sigma^\infty \Omega Y_+$ are not equivalent as ring spectra, but have equivalent categories of modules (in a strong enough sense to induce equivalence of $K$-theories)? It seems far fetched, but I don't know how to exclude this possibility.

Added later: A paper by Roggenkamp and Zimmerman gives an example of two groups $G$ and $H$ for which the rings $\mathbb Z[G]$ and $\mathbb Z[H]$ are not isomorphic, but Morita equivalent. It follows that the Quillen $K$-theory of these rings is isomorphic. One may ask whether the $K$-theory spectra of $\Sigma^\infty G_+$ and $\Sigma^\infty H_+$ are equivalent as well.
{ "source": [ "https://mathoverflow.net/questions/405196", "https://mathoverflow.net", "https://mathoverflow.net/users/134512/" ] }
405,256
I apologize for this question which is obviously not research-level. I've been teaching to master students the standard generating sets of the symmetric and alternating groups and I wasn't able to give a simple, convincing example where it's useful to use the two-generating set $\{(1,2),(1,2,\ldots,n)\}$. (I always find it annoying when we teach something and we're not able to convince the students that it's useful.) I asked a couple of colleagues and no simple answer came out -- let me stress that I'd like to find something simple enough, like a remark I could make in passing or an exercise that I could leave to the reader without cheating him/her. Do you know such examples?
Let $f\in\mathbb{Q}[x]$ be an irreducible polynomial of prime degree $p$, with exactly $2$ non-real roots. You can view the Galois group of $f$ (i.e., the Galois group of the splitting field of $f$) as a subgroup of $S_p$. Complex conjugation shows that the Galois group contains a transposition. You can use Cauchy's theorem from group theory to show that the Galois group contains a $p$-cycle. Then $f$ has Galois group $S_p$. This uses the slightly stronger fact that $S_p$ is generated by any transposition and any $p$-cycle (which can be proved from the standard two-generating set). In turn, constructing a polynomial with Galois group $S_5$ is useful for proving the insolvability of the quintic.
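A standard concrete instance is $f(x) = x^5 - 4x + 2$: it is irreducible by Eisenstein at $2$ and has exactly three real roots, hence exactly two non-real ones, so its Galois group is $S_5$. A quick sympy check of those two facts:

```python
from sympy import symbols, Poly, real_roots

x = symbols('x')
f = Poly(x**5 - 4*x + 2, x)

print(f.is_irreducible)        # True (Eisenstein at p = 2)
print(len(real_roots(f)))      # 3, so exactly two non-real roots
```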
{ "source": [ "https://mathoverflow.net/questions/405256", "https://mathoverflow.net", "https://mathoverflow.net/users/17988/" ] }
405,295
Does there exist a map $f:\Bbb R^n \rightarrow \Bbb R^m$, where $n<m$ and $n,m \in\Bbb N^+$, such that $f$ is surjective and differentiable?
$\DeclareMathOperator\R{\mathbf{R}}$ It's easy to check that the image of any locally Lipschitz map $f:\R^n\to\R^m$ has measure zero when $n<m$ (this encompasses the case of class-$\text{C}^1$ maps, but not the case of differentiable maps). Indeed, extend $f$ to $F:\R^m\to\R^m$ by $F(x,y)=f(x)$. This is still locally Lipschitz. So it maps the subset $\R^n$ of measure zero to a subset of measure zero, see this MathSE post (it assumes Lipschitz, but the argument is local and $\R^m$ is a countable union of subsets on which $F$ is Lipschitz).

Taking into account the comments: here is a setting encompassing both the cases when $f$ is locally Lipschitz, and when $f$ is differentiable. Suppose that for every $x\in\mathbf{R}^n$, we have $$(*)\qquad F_f(x)=\limsup_{y\to x,\;y\neq x}\frac{\|f(y)-f(x)\|}{\|y-x\|}<\infty.$$ Define, for $p$ a positive integer, $$X_p=\{x\in\mathbf{R}^n:\forall y\in\mathbf{R}^n:\|y-x\|\le 1/p \Rightarrow \|f(y)-f(x)\|\le p\|y-x\|\}.$$ Then $\mathbf{R}^n$ is the (countable) union of all $X_p$, and $X_p$ is a countable union of subsets $X_{p,i}$ of diameter $\le 1/p$. And $f$ is $p$-Lipschitz on $X_{p,i}$ (and also on its closure, in case one wishes to get closed subsets). So the result indeed follows, not from the Lipschitz case as literally stated, but from the same statement with $\mathbf{R}^n$ replaced by a subset of $\mathbf{R}^n$ with the restriction of the Euclidean distance (namely: for $n<m$ and $Y$ a subset of $\mathbf{R}^n$, every Lipschitz function $Y\to\mathbf{R}^m$ has image of measure zero). The argument for the latter seems unchanged.

PS: for a reference, it is mentioned by @Kosh that Lemma 7.25 in Rudin's Real and complex analysis (initially published in 1966) does all the job: it asserts that any map $f:\mathbf{R}^m\to\mathbf{R}$ satisfying $(*)$ maps measure zero subsets to measure zero subsets. The proof given here actually seems to roughly be the same as the one written (concisely) in Rudin's book.
{ "source": [ "https://mathoverflow.net/questions/405295", "https://mathoverflow.net", "https://mathoverflow.net/users/369324/" ] }
405,805
I am a beginner, so this question may be naive. Suppose we have a (sufficiently strong) consistent first-order logic system. Gödel's first incompleteness theorem says there exists a Gödel sentence $g$ which is unprovable, and its negation is also unprovable. By Gödel's completeness theorem, $g$ can't be a logical consequence of the axioms, which means there are models of the system that make $g$ false. So my question is: then why do people say $g$ is true when viewed outside the system?

PS: apparently there are "non-standard models" that make $g$ false, according to Wikipedia; then why don't people say $g$ is true in standard models, which is more accurate? Also, do non-standard models work with natural numbers anymore?
When we say the Gödel sentence is true, we mean exactly the same thing as when we say the Fundamental Theorem of Arithmetic is true, or Fermat's Last Theorem, or any other theorem in mathematics. We mean that we've proven it, using our standard consensus principles for reasoning about mathematical objects. And when we talk about natural numbers — as in FTA, FLT, or the Gödel sentence — we mean the actual natural numbers, not arbitrary models of PA. With FTA or FLT, we don't usually even question that. The reason we look at Gödel's sentence in other models of PA isn't because of any difference or subtlety in what the statement means — it's just a difference in why we're interested in it.

That difference comes back to the question of what principles we're using in our proofs. Most of the time, we just take those "standard principles" as an implicit background consensus, and don't mention them explicitly. But with our logicians' hats on, we may want to be more explicit about them. Most of the time, they're assumed to be ZFC set theory or something closely equivalent. So we can refine our statement that "the FTA is true" (or FTA, the Gödel sentence, etc) to "in ZFC, we have shown the FTA is true", or more formally "ZFC proves FTA". And then to further refine it, we can ask: did we really need the whole power of ZFC, or does some weaker logical system suffice? So we can ask whether these theorems are provable in PA, or any other logical theory $T$ that has a way of talking about natural numbers. And only then can we start asking about whether these statements may hold in some models of $T$ and fail in others. Which is an interesting question — and especially because in the case of the Gödel sentence, we can show it holds in some models and fails in others — but it's very much a secondary one, and doesn't affect the original primary meaning of the statement. And it depends entirely on what theory $T$ is under consideration.

The one subtlety to note here is that if we're talking about "PA proves FTA" and "ZFC proves FTA", these can't quite be formally the same statement "FTA", since one must be written in the formal language of arithmetic, the other in the language of set theory. What's happening here is that the ZFC-version of "FTA" is a translation of the PA-version of FTA in ZFC, using ZFC's set of natural numbers. This translation is what "the standard model" means. But it's just part of giving a more refined analysis of the logical status of these statements — it doesn't mean that every time we do any elementary number theory in ZFC, we should feel obliged to add "in the standard model". The whole point of a standard model is that it's standard — it's just giving the language of arithmetic its usual meaning within ZFC (or other ambient foundation). You can equally well take "FTA" to be the PA-statement and view the ZFC-version as its interpretation under the standard model, or take "FTA" to be the ZFC-statement and view the PA-version as a transcription of it into the language of arithmetic. The former is more common in logic, but the latter is arguably closer to mathematical practice.

So overall: It's completely accurate to just say "The Gödel sentence is true", in the same sense that we mean when we say any other mathematical statement is true or false.
But if we want to refine that statement to a sharper one, then what we should say is "ZFC proves the Gödel statement [in the standard model of arithmetic]" — the part that really sharpens the statement is specifying "ZFC", not the mention of the standard model. Similarly, when we say that it is unprovable (or fails in some models), we need to be clear which theory we're talking about provability in, or models of.

Edit. I've assumed we're talking of the Gödel sentence for PA, or some similar theory of arithmetic; but the same applies with the Gödel sentence for ZFC, or any other theory $T$. In Gödel's theorem, we assume $T$ comes equipped with an interpretation of the language of arithmetic, and its Gödel sentence $G_T$ is a priori a sentence of arithmetic, which then gets (in the proof of Gödel's theorem) interpreted into $T$. So again what it means when we say $G_T$ holds is no different in principle from what it means when we say FTA or FLT holds — it means "reasoning in the normal mathematical way, we can prove $G_T$ holds (in the natural numbers)". So there's no difference from before in what it means for $G_T$ to hold. And there's a difference, but a straightforward one, in whether we can show $G_T$ holds:

If $T$ is a theory that we can prove consistent (so e.g. PA would be such a theory, if we're working ambiently in something like ZFC), then using that, we can prove unconditionally that $G_T$ holds.

If we can't prove $T$ is consistent (e.g. if $T$ is ZFC itself, or something stronger), then all we can prove is: If $T$ is consistent, then $G_T$ holds.
{ "source": [ "https://mathoverflow.net/questions/405805", "https://mathoverflow.net", "https://mathoverflow.net/users/405332/" ] }
406,315
The typical application of Sylow's Theorem is to count subgroups. This makes it difficult to search the web for other applications, since most hits are in the context of qualifying exams. What are other uses of Sylow's Theorem? In particular, are there famous/common instances where the goal is to find two non-conjugate $p$-subgroups, implying that a higher power of $p$ divides $|G|$?
There are many examples of using Sylow $p$-subgroups to understand the structure of general finite groups. KConrad's answer indicates some of these, and others are mentioned in comments.

A finite group $G$ with Sylow $p$-subgroup $P$ is said to have a normal $p$-complement if there is a normal subgroup $K$ (necessarily of order prime to $p$) with $G = PK$ and $P \cap K = 1.$ There are many theorems relating so-called $p$-local analysis and the existence of normal $p$-complements in finite groups. One of the earliest was Burnside's normal $p$-complement theorem, which states that if a finite group $G$ has an Abelian Sylow $p$-subgroup $S$ with $N_{G}(S) = C_{G}(S)$, then $G$ has a normal $p$-complement. Another powerful theorem due to G. Frobenius is that if a finite group $G$ has a Sylow $p$-subgroup $P$ such that $N_{G}(Q)/C_{G}(Q)$ is a $p$-group for each subgroup $Q$ of $P$, then $G$ has a normal $p$-complement.

Other so-called transfer theorems emerged in finite group theory in the early to mid 20th century: these are theorems which used the structure of the normalizers of non-trivial $p$-subgroups of $G$ to demonstrate the existence of non-trivial Abelian homomorphic images of $G$ in many circumstances. Such theorems were taken to new heights in the late 1950s and the 1960s, in particular by work of J.G. Thompson, G. Glauberman and J.L. Alperin. For example, the work of Glauberman and Thompson demonstrated (for odd $p$) the existence of a non-trivial characteristic $p$-subgroup $C(P)$ of the Sylow $p$-subgroup $P$ of $G$ such that $G$ has a normal $p$-complement if and only if $N_{G}(C(P))$ has a normal $p$-complement.

The use of local analysis by Thompson in his $N$-group paper formed a template/guide for the later completion of the classification of finite simple groups (some refinements were necessary). Another use of Sylow $p$-subgroups there was in the development of signalizer functor theory. This is an over-simplification, but the general idea here is, given a finite group $G$, to build a non-trivial subgroup $L$ of order prime to $p$ which is normalized by a Sylow $p$-subgroup $P$ of $G$, and then by $N_{G}(Q)$ for many non-trivial $p$-subgroups $Q$ of $P$, and finally by $G$ itself (so that $G$ is not simple if $P \neq 1$). This line of development again has origins in work of Thompson, later refined by others, such as Gorenstein, Goldschmidt and Glauberman.

The work of R. Brauer relates the $p$-local structure of finite groups to their representation theory in characteristic $p$, and draws many new conclusions about complex characters of finite groups. Here, defect groups play an important role. These are $p$-subgroups whose order depends on representation-theoretic properties of $G$, and these behave like Sylow $p$-subgroups in the context of Brauer's block theory. Properties of normalizers of non-trivial subgroups of the defect group determine many representation-theoretic invariants of $G$ (and are conjectured to determine many more).

So, in the context of finite group theory, Sylow's theorem is an indispensable tool whose use goes far beyond counting theorems.

Later edit: Regarding the last question, the idea of "pushing-up" is very important in the classification of finite simple groups. The idea here is that we have a putative finite simple group $G$, and a maximal subgroup $M$ of $G$ with Sylow $p$-subgroup $P$ and with $C_{M}(O_{p}(M)) \subseteq O_{p}(M)$.
The question is to determine whether $P$ is also a Sylow $p$-subgroup of $G$. Again, what follows is an over-simplification of the necessary analysis, but the overall goal is to get many $p$-local subgroups "into one place", that is to say, into a single maximal subgroup which contains a given Sylow $p$-subgroup $P$ and many normalizers $N_{G}(R)$ for $R$ non-trivial subgroups of $P$.

The answer is yes if there is a non-identity characteristic subgroup of $P$ which is normal in $M$. For if $1 \neq C(P)$ is a characteristic subgroup of $P$ which is normal in $M$, take a Sylow $p$-subgroup $Q$ of $G$ which contains $P$. If $Q \neq P$, then $N_{Q}(P) > P$ and $C(P) {\rm char} P \lhd N_{Q}(P)$, so $N_{Q}(P) \leq N_{G}(C(P))$. But $N_{G}(C(P)) \geq M$ since $C(P) \lhd M.$ Since $M$ is maximal and $G$ is simple, $N_{G}(C(P))$ is a proper subgroup of $G$ containing $M$, so that $N_{G}(C(P)) = M$ as $M$ is maximal. But then $P < N_{Q}(P) \leq N_{G}(C(P)) = M$, contrary to the fact that $P$ is a Sylow $p$-subgroup of $M$. Hence $P$ itself must be a Sylow $p$-subgroup of $G$.

If no such characteristic subgroup $C(P)$ exists, then more delicate analysis is necessary. This is a crucial dichotomy, which emerged in the 1960s, and was further pursued by many group theorists, including Aschbacher, Baumann, Glauberman, Niles and Thompson.
{ "source": [ "https://mathoverflow.net/questions/406315", "https://mathoverflow.net", "https://mathoverflow.net/users/13923/" ] }
406,896
In the empirical sciences, there are a number of journals that publish 'negative' results. Negative or null results occur when researchers are unable to confirm the findings obtained from earlier published reports. In the applied sciences, they may also come about when a scientist aims to show that a particular technology (e.g., CRISPR) could alleviate a problem (e.g., a particular virus that kills a specific type of plant), only to find out that it does quite the opposite (e.g., the technology led to the evolution of viruses that were more resistant to CRISPR).

In the formal sciences, including mathematics and logic, experiments like these aren't conducted*. However, it does happen that mathematicians develop machinery to tackle a particular thorny problem, only to find out it doesn't work. A good example is John R. Stallings' false proof of the Poincaré Conjecture. Publications like these are few and far between. It seems to me that one of the reasons this is the case is that there aren't any journals that are specifically geared to these types of papers. They are predominantly focussed on publishing articles that obtain 'positive' results, i.e. actually prove theorems or refute conjectures. Yet it also appears to me that papers like these can be very useful to researchers in mathematics, for the following reasons:

A. They may inspire someone to slightly tweak the failed approach, in order to make it work and actually prove the theorem(s);
B. They may allow someone to see what has already been tried, and what types of avenues of research are probably not worth pursuing;
C. They may provide a platform for approaches to tackling difficult problems in mathematics, even if the methods don't work so far. Thus, they provide a place to share ideas, rather than throwing away months of work.

My question is twofold:

1. Are there already any journals that are devoted to papers containing negative results in the above sense?
2. Would it be worthwhile to set up such a journal, from your perspective?

(*) I am aware that experimental mathematics is a thing. The focal point of this question isn't really the experimental nature of the mathematics research, but it's about offering a venue to the failed approaches to solving problems developed through research - formal, experimental or otherwise.
I don't really know what an answer is for a question like "what about ...?" but I have some thoughts. In fact, something like this happened to me way back around 2006-2007 (according to the dates on the arXiv; see Multiplying Modular Forms, if you want).

What happened was I had written what I thought was a really nice paper explaining how to multiply modular forms whose associated representations (of a real Lie group) belonged to the discrete series. It clarified (to me) some ideas of others, and seemed to extend to all sorts of other kinds of modular forms like those on the exceptional group $G_2$. Well... I was all happy about this and about to speak at a conference. But the day before, Gordan Savin told me about a mistake in the paper. I spent a long evening kicking around the paper and then kicking myself about it. It was a really subtle thing to me -- a difference between $K$-fixed vectors and $K \times K$-fixed vectors -- but a "well-known" issue to experts, ultimately involving the failure of discrete decomposability.

Anyways, the next day at the conference (AMS Special Session, Jan 8, 2007), I didn't really know what to say, but I suggested that someone start the "Journal of Doomed Proofs". And I wasn't kidding. It would be refereed and everything. Criteria for acceptance would be the following:

1. The paper contains a plausible approach to a problem of interest to the mathematical community.
2. The approach is sufficiently motivated that many other people might try it.
3. The approach is doomed, though not obviously from the beginning.
4. The paper explains why the approach is doomed, identifying the obstacles which really stop things from working, or at least have to be worked around in the future.

I still think this is a good idea, and not just for the usual "science should publish negative results" reason. Mathematicians have a sort of secret oral tradition of "well-known" things (doomed ideas, silly apocrypha, the largest rank of an elliptic curve over Q, etc.). But those of us who teach in redwood forests don't really have access to this tradition any more. And some were never granted access in the first place. A journal might go a little way to correct this.

If anyone knows how to pitch a new journal, count me in. If it's JDP (Journal of Doomed Proofs) or JNR (Journal of Negative Results) or whatever, it's fine with me. But not with Elsevier please.
{ "source": [ "https://mathoverflow.net/questions/406896", "https://mathoverflow.net", "https://mathoverflow.net/users/93724/" ] }
407,289
Is it true that in the category of connected smooth manifolds equipped with a compatible field structure (all six operations are smooth) there are only two objects (up to isomorphism), namely $\mathbb{R}$ and $\mathbb{C}$?
Here is a series of standard arguments. Let $(\mathbb{F},+,\star)$ be such a field. Then $(\mathbb{F},+)$ is a finite-dimensional (path-)connected abelian Lie group, hence $(\mathbb{F},+) \cong \mathbb{R}^n \times (\mathbb{S}^1)^m$ as Lie groups. Since $\mathbb{F}$ is path-connected, there is in particular a path $\gamma: [0,1] \to \mathbb{F}$ with $\gamma(0) = 0_{\mathbb{F}}$ and $\gamma(1) = 1_{\mathbb{F}}$. Now consider the homotopy $H: \mathbb{F} \times [0,1] \to \mathbb{F}$, $(x,t) \mapsto \gamma(t) \star x$. This gives a contraction of $\mathbb{F}$ and so we can exclude all the circle factors.

Now, fix $y_0 \in \mathbb{F}$ and consider the map $\widehat{y_0}: \mathbb{R}^n \to \mathbb{R}^n$, $x \mapsto x \star y_0$. Then $\widehat{y_0}$ is an additive map (but at the moment not necessarily linear with respect to the natural vector space structure on $\mathbb{R}^n$). It is not too difficult to see that by additivity we have $\forall q\in \mathbb{Q}: \widehat{y_0}(qx) = q \widehat{y_0}(x)$. Since $\widehat{y_0}$ is continuous (being smooth), it now follows that it's actually $\mathbb{R}$-linear. Thus $\mathbb{F}$ is an $\mathbb{R}$-algebra.

From this point on one can finish either by invoking the Frobenius theorem on the classification of finite-dimensional associative $\mathbb{R}$-algebras, or by invoking a theorem of Bott and Milnor from algebraic topology that $\mathbb{R}^n$ can be equipped with a bilinear form $\beta$ turning $(\mathbb{R}^n,\beta)$ into a division $\mathbb{R}$-algebra (not necessarily associative) only in the cases $n=1,2,4,8$.

EDIT: Another finishing topological argument is a theorem of Hopf saying that $\mathbb{R}$ and $\mathbb{C}$ are the only finite-dimensional commutative division $\mathbb{R}$-algebras. This is less of an overkill compared to invoking Frobenius or Bott–Milnor, as the proof is a rather short and cute application of homology; see p. 173, Thm. 2B.5 in Hatcher's "Algebraic Topology".
{ "source": [ "https://mathoverflow.net/questions/407289", "https://mathoverflow.net", "https://mathoverflow.net/users/148161/" ] }
407,553
Differential equations are at the heart of applied mathematics - they are used with great success in fields from physics to economics. Certainly, they are very useful in modelling a wide range of phenomena. Integral equations, on the other hand, do not receive such attention. While I have seen some integral equations crop up in physics (the Boltzmann equation or the tautochrone problem) or biology (population dynamics), their importance pales in comparison with that of differential equations. Why is it that differential equations are so much more popular than integral ones? Or am I just ignorant of the matter, and there actually are many examples of integral equations in applied mathematics? It also seems that when an integral equation appears, one immediately wants to reduce it to a differential equation. So examples where this is not possible, or where it is not done for other reasons, would be welcome.
One important point is that differential equations encode local behaviour of a system, while integral equations typically encode global behaviour. Local behaviour is often easier to model and to grasp intuitively. In many cases, it can also be described by much simpler formulae.

More specifically: Let us consider the simple example where $p(t)$ describes the population of a species which reproduces without any resource limit. It is very intuitive to make the assumption that the growth of the population will be proportional to the size of the population, i.e., one has $$ (*) \quad \begin{cases} \dot p(t) & = c p(t), \\ p(0) & = p_0 \end{cases} $$ where $c$ is a constant, and $p_0$ is the initial size of the population. The reason why this behaviour is easy to model is that we have an intuitive understanding of growth, which is a local (with respect to time) quantity (and modelled by a derivative). The integral equation $$ (**) \qquad p(t) = p_0 + c \int_0^{t} p(s) \, ds $$ is mathematically equivalent to $(*)$, but its intuitive meaning is more difficult to understand, since it involves the behaviour of the population over time intervals rather than only at single instances in time.

The local character of differential equations is reflected by the fact that initial and boundary conditions can be taken into account separately. In the initial value problem $(*)$, the initial condition $p(0) = p_0$ is separated from the differential equation, and has a clear intuitive meaning. The equivalent integral equation $(**)$, on the other hand, encodes both the dynamical behaviour of $p(t)$ and the initial condition in the same equation, which makes it more difficult to distinguish between the two effects.

These phenomena get even more pronounced when one considers partial differential equations. For instance, the heat equation is very easy to heuristically derive locally. The behaviour at the boundary (fixed temperature = Dirichlet boundary conditions, thermal isolation = Neumann boundary conditions) can then be taken into account separately. Reformulating the equation as an integral equation (which, for homogeneous boundary conditions, essentially comes down to computing the resolvent of the Laplace operator with the given boundary conditions) means that one has to include the boundary conditions in the integral equation. As a corollary, such an integral formulation would also need to take the geometry of the domain into account, which can be arbitrarily complicated. On a related note, this also explains why it is impossible to explicitly compute the integral kernel of the resolvent (= Green function) of the Laplace operator on any but the most simple domains.
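If one wants to see the equivalence of $(*)$ and $(**)$ numerically, here is a minimal Python sketch (the values $c=0.7$, $p_0=2$ are arbitrary illustrative choices):

```python
import numpy as np

c, p0 = 0.7, 2.0
t = np.linspace(0.0, 3.0, 100001)
p = p0 * np.exp(c * t)               # the solution of the initial value problem (*)

# residual of the integral form (**): p(t) - p0 - c * int_0^t p(s) ds,
# with the integral approximated by the cumulative trapezoid rule
dt = t[1] - t[0]
integral = np.concatenate(([0.0], np.cumsum((p[1:] + p[:-1]) / 2.0) * dt))
print(np.abs(p - p0 - c * integral).max())   # tiny (~1e-9): the two forms agree
```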
{ "source": [ "https://mathoverflow.net/questions/407553", "https://mathoverflow.net", "https://mathoverflow.net/users/114143/" ] }
407,823
Recently I was preparing an undergrad-level proof of (a form of) the Jordan Curve Theorem, and I had forgotten just how much work is involved in it. The proof stored in my head was just using Alexander duality plus some sanity-checks on the topology of the curve in question, which is a fine approach but does require a bit of algebraic topology machinery my audience didn't have access to. The more elementary proof (or at least the one I landed on) has a straightforward idea behind it, but turning that into a proper argument required slogging through quite a few details about polygons, regular neighborhoods, etc.

Similarly, I couldn't help noticing just how much of a pain it is to prove the 2- and 3-dimensional versions of Stokes' theorem without some notion of manifolds, let alone the usual Stokes' theorem in some suitable setting.

Those are both elementary examples, but it got me thinking about the general topic of results that have very rough proofs from more elementary principles but have much clearer and smoother proofs with some more advanced background. Specifically, what are some examples of results from more advanced or narrower topics of mathematics that can vastly simplify or explain in retrospect theorems that are encountered and proved laboriously in less specialized or more common areas of math?

(If it helps clarify what I'm trying to get at, another example in my mind is May's "Concise Course in Algebraic Topology," which I think of as having the premise of, "So, now that you've gone through the standard intro algebraic topology course, here's what was secretly going on behind the scenes the whole time.")
The associativity of the group law on an elliptic curve can be proved in an elementary way by explicitly manipulating algebraic expressions, but this is not very enlightening. By using more advanced geometric ideas, one can prove associativity more conceptually.
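As a small taste of the "explicit manipulation" route, here is a minimal sketch that checks associativity of the chord-tangent addition law exhaustively on the six rational torsion points of the sample curve $y^2 = x^3 + 1$ (the curve and points are illustrative assumptions; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction as F
from itertools import product

a = F(0)                               # curve y^2 = x^3 + a x + b with a = 0, b = 1
pts = [None,                           # None stands for the point at infinity O
       (F(0), F(1)), (F(0), F(-1)), (F(-1), F(0)), (F(2), F(3)), (F(2), F(-3))]

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                    # opposite points sum to O (also 2-torsion doubling)
    if (x1, y1) == (x2, y2):
        lam = (3 * x1 * x1 + a) / (2 * y1)   # tangent slope
    else:
        lam = (y2 - y1) / (x2 - x1)          # chord slope
    x3 = lam * lam - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

assert all(add(add(P, Q), R) == add(P, add(Q, R)) for P, Q, R in product(pts, repeat=3))
print("associativity verified on all", len(pts) ** 3, "triples")
```

These six points form the full Mordell-Weil group of this curve, so the addition is closed on the list; the check is of course no substitute for a proof, which is exactly the answer's point.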
{ "source": [ "https://mathoverflow.net/questions/407823", "https://mathoverflow.net", "https://mathoverflow.net/users/61829/" ] }
408,138
A pair of continuous mappings $f \colon X \to Y$ and $g \colon Y \to X$ is called a $\pi_1$-equivalence if they induce mutually inverse isomorphisms of fundamental groups. Spaces are called $\pi_1$-equivalent if there is a $\pi_1$-equivalence between them. Let $X, Y$ be CW-complexes.

1. Is it true that if $f \colon X \to Y$ induces an isomorphism of fundamental groups, then $X$ and $Y$ are $\pi_1$-equivalent?
2. Is it true that if $\pi_1(X)$ is isomorphic to $\pi_1(Y)$, then $X$ and $Y$ are $\pi_1$-equivalent?
3. (added later) Is it true that if $\pi_1(X)$ is isomorphic to $\pi_1(Y)$, then there exists a mapping $f \colon X \to Y$ inducing an isomorphism, or there exists a mapping $g \colon Y \to X$ inducing an isomorphism?
No and no. For an explicit counterexample to question 1 (which is also a counterexample to question 2), take the map $\mathbb{R}P^2\to \mathbb{R}P^{\infty}$.
{ "source": [ "https://mathoverflow.net/questions/408138", "https://mathoverflow.net", "https://mathoverflow.net/users/148161/" ] }
408,301
Are there arbitrarily large sets $\mathcal S=\{a_1,\ldots,a_n\}$ of strictly positive integers such that all sums $a_i+a_j$ of two distinct elements in $\mathcal S$ are squares? Considering subsets in $\mathbb Z$ should essentially give the same answer, since such a set can contain at most one negative integer.

An example of size $3$ is given by $\{6,19,30\}$. (Allowing $0$, one gets $\{0,a^2,b^2\}$ in bijection with Pythagorean triplets $c^2=a^2+b^2$.) There is no such example with four integers in $\{1,\ldots,1000\}$. (Accepting $0$, solutions are given by Euler bricks: $\{0,44^2,117^2,240^2\}$ is the smallest example. I thus suspect that there are strictly positive solutions in $\mathbb N^4$.)

An equivalent reformulation: Consider the infinite graph with vertices $1,2,3,\ldots$ and edges $\{i,j\}$ if $i+j$ is a square. Does this graph contain arbitrarily large complete subgraphs? (Trivial observation: every edge $\{a,b\}$ is only contained in finitely many different complete subgraphs.)

Motivation: this is somehow a variation on the question Generalisation of this circular arrangement of numbers from $1$ to $32$ with two adjacent numbers being perfect squares
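The claim that no such $4$-set exists in $\{1,\ldots,1000\}$ is easy to reproduce by brute force on the graph just described; a minimal Python sketch:

```python
from math import isqrt

N = 1000
def is_square(n):
    r = isqrt(n)
    return r * r == n

# adjacency of the graph on {1,...,N} with an edge {i,j} iff i + j is a square
adj = {i: {j for j in range(1, N + 1) if j != i and is_square(i + j)}
       for i in range(1, N + 1)}

triangles = [(a, b, c) for a in range(1, N + 1)
             for b in adj[a] if b > a
             for c in adj[a] & adj[b] if c > b]
print(len(triangles))                       # many 3-sets, e.g. (6, 19, 30)
print([(a, b, c, d) for (a, b, c) in triangles
       for d in adj[a] & adj[b] & adj[c] if d > c])   # -> []: no 4-set below 1000
```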
The size of such sets is bounded by some (unknown) constant, assuming a big conjecture in arithmetic geometry. The Bombieri-Lang conjecture (non-trivially via the Uniformity Conjecture, see Stanley Yao Xiao's comment) implies that for any $f(x)\in \mathbb{Z}[x]$ of degree $5$ , with no repeated roots, there are at most $B$ many rational numbers $m$ for which $f(m)$ is a square - here $B$ is some absolute constant (so the conjecture goes), completely independent of $f$ . This implies that the size of a set $A$ such that $a+a'$ is a square for any two distinct elements $a,a'\in A$ is at most $B+5$ . Indeed, take any $5$ distinct elements $a_1,\ldots,a_5\in A$ , and consider $f(x) = (x+a_1)\cdots(x+a_5)$ . For any $m\in A\backslash \{a_1,\ldots,a_5\}$ , we know that $f(m)$ is a square, and so $\lvert A\rvert-5\leq B$ . I learnt of this kind of argument (and the Uniformity Conjecture) via this paper of Cilleruelo and Granville, which has many similar arguments and applications: https://arxiv.org/pdf/math/0608109.pdf .
{ "source": [ "https://mathoverflow.net/questions/408301", "https://mathoverflow.net", "https://mathoverflow.net/users/4556/" ] }
408,374
Let $u(i,j)$ denote the number of lattice paths from the origin to a fixed terminal point $(i,j)$ subject only to the condition that each successive lattice point on the path is closer to $(i,j)$ than its predecessor. For example, $u(1,1) = 5$ counts the one-step path $(0,0) \to (1,1)$ and the 4 two-step paths with lone interior point $(1,0)$, $(0,1)$, $(2,1)$, and $(1,2)$ respectively, each of which points is just 1 unit from $(1,1)$ while the origin is at distance $\sqrt{2}$. By symmetry, $u(i,j)=u(j,i)$ and $u(i,j)=u(\pm i, \pm j)$. So we may assume $0\le j \le i$.

The numbers $u(i,j)$ grow rapidly but have only small prime factors. For example, $u(15,4)=26912468014438968757500866511796546986447463263063641471454856793747381916046142578125 =3^{114}\ 5^{19}\ 13^6\ 17^9.$

Any explanations? Have these paths been considered in the literature?

Here is Mathematica code to generate $u(i,j)$:

```mathematica
addListToListOfLists[ls_, lol_] := (ls + #1 &) /@ lol

SelectLatticePtsInDiskCenteredAtO[radius_] := Module[{m},
  Flatten[Table[m = Ceiling[Sqrt[radius^2 - i^2]] - 1;
     Table[{i, j}, {j, -m, m}], {i, -Floor[radius], Floor[radius]}], 1]]

SelectLatticePtsInDiskCenteredAtij[{i_, j_}, radius_] :=
  addListToListOfLists[{i, j}, SelectLatticePtsInDiskCenteredAtO[radius]]

u[i_, j_] := u[{i, j}];
u[{0, 0}] = 1;
u[{i_, j_}] /; j > i := u[{i, j}] = u[{j, i}];
u[{i_, j_}] /; i < 0 := u[{i, j}] = u[{-i, j}];
u[{i_, j_}] /; j < 0 := u[{i, j}] = u[{i, -j}];
u[{i_, j_}] /; 0 <= j <= i := u[{i, j}] = Module[{cntFromNewOrigin, radius},
   radius = Norm[{i, j}];
   cntFromNewOrigin[newOrigin_] := u[{i, j} - newOrigin];
   Apply[Plus, Map[cntFromNewOrigin,
     SelectLatticePtsInDiskCenteredAtij[{i, j}, radius]]]]

In[918]:= Table[u[{i, j}], {i, 0, 3}, {j, 0, i}]
Out[918]= {{1}, {1, 5}, {25, 125, 1125}, {5625, 28125, 253125, 102515625}}
```
For any $k\in\mathbb N$ let $a_k$ be the number of lattice points on the circle of radius $\sqrt{k}$ (this number may be zero). For any path as in the question, and for any $k$ between $1$ and $i^2+j^2-1$, the path will contain either none or exactly one of the points on the circle of radius $\sqrt{k}$ around $(i,j)$. As this collection of points determines the path (order them by decreasing distance from $(i,j)$), this tells us that the number of possible such paths is $$\prod_{k=1}^{i^2+j^2-1}(1+a_k).$$ This is a product of a comparatively large number of factors, each of which is small: we easily see $a_k<4k$, but in fact we have tighter bounds, for instance $a_k\leq 4d(k)$ where $d$ is the divisor-counting function (this follows e.g. from this result), which is $O(k^c)$ for all $c>0$. As all these factors are small, they can only have small prime factors, explaining your observation.
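This product formula is easy to test against the table in the question; a short Python sketch (a direct, unoptimized count of lattice points on each circle):

```python
from math import isqrt, prod

def a(k):
    # number of lattice points on the circle x^2 + y^2 = k
    r = isqrt(k)
    return sum(1 for x in range(-r, r + 1) for y in range(-r, r + 1)
               if x * x + y * y == k)

def u(i, j):
    return prod(1 + a(k) for k in range(1, i * i + j * j))

print(u(1, 1), u(2, 1), u(2, 2), u(3, 3))   # 5, 125, 1125, 102515625
```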
{ "source": [ "https://mathoverflow.net/questions/408374", "https://mathoverflow.net", "https://mathoverflow.net/users/29500/" ] }
408,601
As all analytic number theorists know, iterated logarithms ($\log x$, $\log \log x$, $\log \log \log x$, etc.) are prevalent in analytic number theory. One can give countless examples of this phenomenon. My question is, can someone give an intuitive account for why this is so? Specifics regarding any of the famous theorems involving iterated logarithms are welcome. Many thanks!

EDIT: Thank you so much for the answers so far! I'm still trying to get a better intuition on how a $\log \log \log$ or $\log \log \log \log$ arises, especially in Littlewood's 1914 proof that $\pi(x)-\operatorname{li}(x) = \Omega_{\pm} \left(\frac{\sqrt{x}\log \log \log x}{\log x}\right) \ (x \to \infty)$ or Montgomery's conjecture that $\limsup_{x \to \infty}\dfrac{\lvert\pi(x)-\operatorname{li}(x)\rvert}{\;\frac{\sqrt{x}\, (\log \log \log x)^2}{\log x}\;}$ is finite and positive. I admit to knowing nothing (yet) about sieve theory, so I will have to dive into the proof of the prime gap theorem by Tao, Maynard, et al. Can someone give a more precise account of how the $\log \log \log$ or $\log \log \log\log$ arises in the proof? I'm very familiar with why occurrences of $\log \log$ happen, but once you get to $\log \log \log$, I'm still a bit mystified. Also, is there a good introduction to sieve theory where I could start, or should I just dive right in to the papers on large prime gaps?

FURTHER EDIT: Can someone also explain intuitively the reason for the $\log \log \log$ in Littlewood's theorem? Historically, was this the first occurrence of a triple log in number theory?
There are two main sources of repeated logs. (These sources can be further refined into natural subcategories, but I'll only mention a couple of those subcategories.) Those two main sources are:

Type 1: Repeated logs occur because that is just the truth of the matter. One of my favorite examples is a 2008 theorem of Kevin Ford, solving the multiplication table problem. The theorem states that $$ |\{a\cdot b\, : a,b\in \{1,2,\ldots,N\}\}|\asymp \frac{N^2}{\log(N)^c(\log\log(N))^{3/2}}, $$ where $c=1-\frac{1+\log\log(2)}{\log(2)}$. Lest you believe that the $(\log\log(N))^{3/2}$ factor is a consequence of this being a 2-dimensional problem, it also shows up in the other dimensions. See this other question for more information.

In some cases it is much easier to see where these extra logs come from. For instance, when turning sums over integers into sums over primes, this often leads to an extra log coming into force, just from the nature of the problem at hand and asymptotics with primes. For instance, we have $$ \sum_{n=1}^{N}\frac{1}{n}=\log(N)+\gamma+o(1)\ \text{ while }\ \sum_{p\leq N,\ p\text{ prime}}\frac{1}{p}=\log\log(N)+B+o(1), $$ where $\gamma$ and $B$ are well-known constants. These two asymptotics can be thought of as discrete versions of the integral equalities $$ \int\frac{1}{x}\, dx = \log(x) \ \text{ while }\ \int\frac{1}{x\log(x)}\, dx=\log\log(x). $$ Since primes occur all over number theory, and they also come weighted with an extra log factor, this often contributes extra double-log factors.

Type 2: Repeated logs occur as an artifact of our current best machinery. For example, Rankin showed in 1938 that the largest prime gap below $N$, for $N\gg 0$, is at least $$ \frac{1}{3}\frac{\log(N)\log\log(N)\log\log\log\log(N)}{(\log\log\log(N))^2}. $$ These extra logs happen when optimizing inequalities, and when using the known machinery of the day. But they do not represent a fundamental truth about the problem. The constant $\frac{1}{3}$ has been slowly improved. Recently, in 2014, Ford, Green, Konyagin, and Tao improved this bound, and Maynard also did so independently, by replacing the fraction $\frac{1}{3}$ by an arbitrary number. See this preprint and this other preprint for more details. Later, these five mathematicians together removed the square from the denominator.

If you read the proofs, the logs are coming from the current state-of-the-art sieve methods, together with bounding techniques. When you solve for the best-fit functions to undo some of the exponentiation that occurs in calculations, the logs just fall out. In these types of problems, it is not inconceivable (and actually occurs quite regularly) that one new idea is applied to the problem, and the asymptotic changes (sometimes involving more multi-log factors, to account for the small additional room for improvement that was gained). What is surprising about Rankin's bound is that even though it is far from the predicted asymptotic, each extra idea only changed the constant out front---at least until recently.

Edited to add: Working through a well-written proof will, of course, give a deeper understanding of where iterated logs arise in the problem at hand. That is certainly true for the three examples above. However, if you are not yet familiar with sieve theory, or the circle method, I wouldn't recommend working through the big proofs of those theorems mentioned above and in your question (at least, not initially).
Rather, I would recommend starting with an introductory text on sieves, such as Cojocaru and Murty's book "An Introduction to Sieve Methods and their Applications". Double-logs occur almost at the very beginning. Triple-logs show up in the exercises in Chapter 5 (and perhaps earlier). Indeed, problem 25 is a typical example of how a triple log is introduced to improve an asymptotic.
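To make the sums-over-primes asymptotic contrasted earlier in this answer concrete, here is a quick numerical check in Python (a minimal sketch; the only input beyond the formulas above is the value $B \approx 0.26150$ of Mertens' constant):

```python
from math import log
from sympy import primerange

B = 0.2614972128  # Mertens' constant, truncated

# Compare the partial sums of 1/p with log(log(N)) + B: the agreement
# improves, while the sum itself grows only double-logarithmically.
for N in [10**k for k in range(2, 8)]:
    s = sum(1.0 / p for p in primerange(2, N + 1))
    print(f"N = {N:>9}:  sum 1/p = {s:.5f},  loglog(N) + B = {log(log(N)) + B:.5f}")
```

Even at $N = 10^7$ the sum is barely above $3$, which is one way to see why statements involving $\log\log$ (let alone $\log\log\log$) are so hard to probe numerically.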
{ "source": [ "https://mathoverflow.net/questions/408601", "https://mathoverflow.net", "https://mathoverflow.net/users/17218/" ] }
408,644
$\require{AMScd}$ I am currently thinking about (strict) henselisations but I don't know much of the literature on the topic. So I am wondering if there is a natural way to restrict maps between strict henselisations to henselisations: Let $A$ , $B$ be local rings with an injective homomorphism $h:A\to B$ . If I have a homomorphism $f:A^\text{sh} \to B^\text{sh}$ , is there always a homomorphism $g:A^\text h \to B^\text h$ such that the following diagram commutes? \begin{CD} A^\text{sh} @>f>> B^\text{sh}\\ @AAA @AAA\\ A^\text h @>>g> B^\text h \end{CD} Equivalently, I am asking if you start with $x \in A^\text h \subset A^\text{sh}$ , is $f(x) \in B^\text h$ ? It feels like this should be related to Galois theory, where Galois extensions fix the base field. (Note: There might be requirements on $A$ and $B$ like being normal, or $B$ being a field. I'm also interested in answers with more assumptions than stated above.) Clarifications: The underlying injective homomorphism $h: A \to B$ is not necessarily a local map but $f$ commutes with $h$ . \begin{CD} A^\text{sh} @>f>> B^\text{sh}\\ @AAA @AAA\\ A @>>h> B \end{CD} The example I have in mind is the following: Pick a ring $R$ and a maximal ideal $\mathfrak{m} \subset R$ and a map $\operatorname{Frac}(R) \to \operatorname{Frac}(R)^\text{sep}$ (i.e. a geometric point over the generic point of my curve $\operatorname{Spec}(R)$ ). Then $A := R_{\mathfrak{m}}$ , $B:=\operatorname{Frac}(R)$ , the map $h: A \to B$ is not local and $B \to B^\text{sh} = \operatorname{Frac}(R) ^\text{sep}$ is given by the above-chosen geometric point.
{ "source": [ "https://mathoverflow.net/questions/408644", "https://mathoverflow.net", "https://mathoverflow.net/users/103737/" ] }
408,648
$\DeclareMathOperator\SO{SO}\DeclareMathOperator\SU{SU}$ I want to write a $3 \times 3$ complex-matrix representation of $\SO(4)$ . For example, we know that $\SO(5)$ is a subgroup of $\SU(4)$ , so we can write a $4 \times 4$ complex-matrix representation of $\SO(5)$ . There is a paper which gives this representation: Mapping two-qubit operators onto projective geometries . This paper mentions which matrices form the $\SO(5)$ group, and Manipulating two-spin coherences and qubit pairs contains the matrices explicitly (both papers are by A.R.P. Rau). I want to know if $\SO(4)$ is also a subgroup of $\SU(3)$ , and if so, can we find a $3 \times 3$ complex-matrix representation of $\SO(4)$ ?
No. There is probably a straightforward representation-theoretic argument, but I am too ignorant of the subject to give one, so here is a topological argument. If $H \subset G$ are Lie groups with $H$ closed in $G$ , then $G/H$ has the natural structure of a smooth manifold without boundary. If $G$ is compact, so is $G/H$ , as it is the continuous image of a compact space; if $H$ is compact the closedness condition is automatic. Now $\dim SU_3 = 8$ and $\dim SO_4 = 6$ . Therefore, if there existed some group embedding $j: SO_4 \hookrightarrow SU_3$ , the quotient $X = SU_3/j(SO_4)$ would have to be a compact 2-manifold without boundary. Furthermore, the fibering $SO_4 \xrightarrow{j} SU_3 \to X$ induces the long exact sequence of homotopy groups $$\pi_2 SU_3 \to \pi_2 X \to \pi_1 SO_4 \to \pi_1 SU_3 \to \pi_1 X \to \pi_0 SO_4.$$ Filling in the values we know for these groups, we find that there is a long exact sequence of groups $$0 \to \pi_2 X \to \Bbb Z/2 \to 0 \to \pi_1 X \to 0.$$ Therefore $\pi_2 X \cong \Bbb Z/2$ and $\pi_1 X \cong 0$ . This is a contradiction; the only simply connected compact surface without boundary is the 2-sphere, which has $\pi_2 = \Bbb Z$ .
{ "source": [ "https://mathoverflow.net/questions/408648", "https://mathoverflow.net", "https://mathoverflow.net/users/466032/" ] }
409,421
That is, is there an open cover of $\mathbb{R}P^n$ by $n$ sets homeomorphic to $\mathbb{R}^n$ ? I came up with this question a few years ago and I've thought about it from time to time, but I haven't been able to solve it. I suspect the answer is negative but I'm not very sure. Also, is there an area of topology which studies questions like this one?
Expanding on the comment by @user127776, the key reference is Palais, "Lusternik-Schnirelman Theory on Banach Manifolds", Topology 5 (1966), where it is proved that if $X$ can be covered by $n$ contractible closed sets, then the cup-length of $X$ is strictly less than $n$ . (Here the cup-length is the largest $n$ such that for some field $F$ and some elements $c_1,\ldots,c_n$ in $H^*(X,F)$ , we have $c_1\cup\ldots\cup c_n\neq 0$ .) This rules out covering ${\mathbb RP}^n$ with $n$ closed contractible sets, which should suffice here (after slightly shrinking the given $n$ copies of ${\mathbb R}^n$ ). Editing to add: More generally, suppose $X$ is a compact Hausdorff space covered by $n$ closed sets $X_1,\ldots, X_n$ with all $H^1(X_i,{\mathbb Z}/2{\mathbb Z})=0 $ . (Equivalently, any (real) line bundle on $X_i$ is trivial.) Theorem. Any line bundle on $X$ can be generated by $n$ sections. Proof. Let $\hat{X}= Spec(C(X,{\mathbb R}))$ , so that $X$ imbeds in $\hat{X}$ . Note that: Because $X$ is normal, each $X_i$ is defined by the vanishing of a continuous function, so the $\hat{X}_i$ form a closed covering of $\hat{X}$ . By Swan's theorem, the map that takes a vector bundle over $\hat{X}$ to its pullback over $X$ is an equivalence of categories (and likewise with $X$ replaced by $X_i$ ). Now because every line bundle on $X_i$ is trivial, so is every line bundle on $\hat{X}_i$ . Because $\hat{X}$ is an affine scheme, a line bundle corresponds to a projective module, which in turn is the image of an idempotent matrix with entries in $C(X,{\mathbb R})$ . A little thought reveals that this matrix can be taken to be $n\times n$ . It follows that any line bundle on $\hat{X}$ is generated by $n$ sections. Therefore (by the Swan correspondence) so is any line bundle on $X$ , as advertised. Corollary. For any $c\in H^1(X,{\mathbb Z}/2{\mathbb Z})$ , the $n$ -fold cup product $c^n\in H^n(X,{\mathbb Z}/2{\mathbb Z})$ is zero. Proof. $c$ is the first Stiefel-Whitney class of some line bundle $\xi$ . Let $\phi_\xi:X\rightarrow {\mathbb RP}^\infty$ be the classifying map of $\xi$ . The $n$ sections guaranteed by the theorem provide a factorization of $\phi_\xi$ through ${\mathbb RP}^{n-1}$ . But $H^n({\mathbb RP}^{n-1},{\mathbb Z}/2{\mathbb Z})=0$ .
{ "source": [ "https://mathoverflow.net/questions/409421", "https://mathoverflow.net", "https://mathoverflow.net/users/172802/" ] }
409,431
Let $X$ be a random variable following a $\mathrm{Binomial}(n,p)$ distribution, and let $$Y=\min\{X,n-X\}.$$ Inspired by the problem posed by C. Clement on https://math.stackexchange.com/questions/1696256/expectation-and-concentration-for-minx-n-x-when-x-is-a-binomial , I want to ask whether there exists some constant $c>0$ such that $\mathbb{E}(Y)\geq c\cdot\min\{p,1-p\}\cdot n$ for all $0<p<1$ . If this is not true, can we find some $p_0$ with $\frac{1+\sqrt{5}}{4}\leq p_0<1$ such that if $X\sim \mathrm{Binomial}(n,p_0)$ then $\mathbb{E}(Y)\geq (1-p)[(1+4p)n-8(1+p)]$ ?
{ "source": [ "https://mathoverflow.net/questions/409431", "https://mathoverflow.net", "https://mathoverflow.net/users/75264/" ] }
409,504
In classical probability theory, the (multivariate) Gaussian is in some sense the "nicest quadratic" random variable, i.e. with second moment a specified positive-definite matrix. I do not know how to make this precise, but imprecisely what I mean is that 1. the Gaussian shows up everywhere, and 2. it is universal/canonical/... in some sense, e.g. as in the central limit theorem. My question is whether for many noncommutative probability spaces (an algebra $A$ over $\mathbf{C}$ and a map $E:A\to\mathbf{C}$ , with conditions), there also exists a "nicest quadratic" random variable $X\in A$ , satisfying analogous properties to the Gaussian.
The theory of classical independence and classical convolution can be generalised to noncommutative settings in several ways. The most famous one is that of free independence and free convolution (introduced by Voiculescu), but there is also boolean independence and boolean convolution (introduced by Speicher and Woroudi in Boolean convolution ); monotone independence and monotone convolution (introduced by Muraki in Monotonic independence, monotonic central limit theorem and monotonic law of small numbers ); and anti-monotone independence and anti-monotone convolution (the order-reversal of the previous notion). There are classification results of Speicher ( On universal products ) and Muraki ( The five independences as natural products ) that show that these are the only notions of independence (or convolution) that obey some natural set of axioms. (Speicher's classification assumed that convolution is commutative, so omitted the monotone and anti-monotone cases that were later discovered by Muraki.) For each such concept of independence, there is a central limit theorem. Classically, the limiting distribution is the gaussian; in free probability it is the semicircular law; in the boolean case it is the Bernoulli distribution; and in the monotone and anti-monotone cases it is the arcsine law. See Section 9.2.1 of the recent thesis Evolution equations in non-commutative probability of David Jekel (and Chapter 5 of that thesis contains a more detailed history of the development of these notions of independence). For the classical and free independence concepts, at least, there is also an associated notion of entropy, and these distributions extremise the entropy amongst all distributions of a fixed mean and variance; again, Jekel's thesis has further information. (For the free case, of course, pretty much any introduction to free probability will contain these facts.) EDIT: There is also finite free convolution (see Marcus, Spielman, and Srivastava - Finite free convolutions of polynomials ), in which the analogue of the gaussian is the distribution of zeroes of Hermite polynomials.
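As a numerical illustration of the free central limit theorem mentioned above (an illustrative sketch; the GOE matrix model and all parameters below are standard choices, not taken from the answer), Wigner's theorem says that the eigenvalue distribution of a large symmetric random matrix approaches the semicircle law, the free analogue of the Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# GOE-normalized symmetric matrix: off-diagonal entries have variance 1/n,
# so the spectrum concentrates on [-2, 2] as n grows.
a = rng.normal(size=(n, n))
h = (a + a.T) / np.sqrt(2 * n)
eigs = np.linalg.eigvalsh(h)

# Compare the eigenvalue histogram with the semicircle density
# sqrt(4 - x^2) / (2 pi).
hist, edges = np.histogram(eigs, bins=40, range=(-2.0, 2.0), density=True)
mids = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.maximum(4 - mids**2, 0)) / (2 * np.pi)
print(f"max deviation from semicircle: {np.max(np.abs(hist - semicircle)):.3f}")
```

Replacing the symmetric matrix with an i.i.d. sum of commuting random variables in the same experiment would instead produce a Gaussian histogram, which is one way to see the parallel between the two central limit theorems.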
{ "source": [ "https://mathoverflow.net/questions/409504", "https://mathoverflow.net", "https://mathoverflow.net/users/119012/" ] }
410,081
Reading the autobiography of Richard Feynman , I came across the following paragraphs, in which Feynman recalls how, as a student of the Princeton physics department, he used to challenge the students of the math department. I challenged them: "I bet there isn't a single theorem that you can tell me what the assumptions are and what the theorem is in terms I can understand where I can't tell you right away whether it's true or false." It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?" "No holes?" "No holes." "Impossible! There ain't no such a thing." "Ha! We got him! Everybody gather around! It's So-and-so's theorem of immeasurable measure!" Just when they think they've got me, I remind them, "But you said an orange! You can't cut the orange peel any thinner than the atoms." "But we have the condition of continuity: We can keep on cutting!" "No, you said an orange, so I assumed that you meant a real orange." So I always won. If I guessed it right, great. If I guessed it wrong, there was always something I could find in their simplification that they left out. Forgetting that most of this was probably done as a joke, with what theorem would you have answered Feynman's challenge?
There's a certain gaming/sporting aspect to Feynman's challenge that works in his favor. First of all, as phrased, the challenge gives him a 50/50 shot at being right even if he guesses randomly. Also, if you present a statement which seems too obviously true then Feynman could reason that you wouldn't have chosen that statement if the obvious answer were correct. With those caveats, I present some proposals below. I have tried to gravitate toward problems involving physical intuition, since IMO fooling Feynman's physical intuition carries bonus points. Is sphere eversion (without creasing) possible? (Edit: I see now that Noah Schweber already suggested this one.) This is my favorite, and the only flaw is that a real physical surface can't pass through itself, so the question isn't quite a physical one. Is there a convex solid of uniform density with exactly one stable equilibrium and one unstable equilibrium? This problem postdates Feynman's life and maybe he would have guessed correctly, but I would be impressed. Does the regular $n$ -gon have the largest area among all $n$ -gons with unit diameter ? Tricky because the answer is yes if $n$ is odd but no if $n$ is even and $n\ge 6$ . Is there a closed planar convex shape with two equichordal points ? This one is good if you suspect that Feynman might reason that the "obvious" answer (no) can't be correct or else you wouldn't be asking. "No" is correct but the proof is highly nontrivial. Does every $d$ -dimensional polytope have a realization in which all its vertices have rational coordinates ? Explaining the precise statement of this result is a little tricky, and it has the flaw that the weirdness doesn't kick in until $d=4$ , but otherwise this one is really nice IMO. EDIT: There has been some discussion in the comments about whether the presence of irrational numbers in #5 above means it refers to some mathematical abstraction that is not "physically realizable." Here is an easier-to-understand (though less spectacular) question that captures the essential issue. Consider the Perles configuration of nine green points in the figure below. If the outer pentagon is a mathematically perfect regular pentagon, then at least some of the nine points have to have irrational coordinates. No big surprise there. But now consider this. Suppose I don't require that the pentagon be a perfectly regular pentagon; suppose I allow you to redraw the figure, re-positioning the nine points in any way you like, as long as you preserve all the collinearities (i.e., points that are collinear in the original diagram must remain collinear in your new picture). Can you arrange for all nine points to have rational coordinates? The answer, which I find surprising, is no. As far as physical intuition is concerned, what this example shows is that the seemingly physical concept of "collinearity of points in the plane" (suitably idealized of course) already forces irrational numbers upon us. In contrast, if you point out that a right-angled triangle with two sides of length 1 has a hypotenuse of $\sqrt{2}$ , then a physicist could object that physical angles and lengths are never infinitely precise, and that you can create an arbitrarily close approximation using rational numbers. In the Perles configuration, we are not stipulating any distances or angles precisely; we are only stipulating that certain points be exactly collinear , and it turns out that this idealization already requires us to introduce irrational numbers.
{ "source": [ "https://mathoverflow.net/questions/410081", "https://mathoverflow.net", "https://mathoverflow.net/users/244671/" ] }
410,462
Consider the set of ultrafilters $\beta(\mathbb N)$ on $\mathbb N$ . Any function $f\colon\mathbb N\to\mathbb N$ extends to a function $\beta f\colon \beta \mathbb N \to \beta\mathbb N$ . We say that two ultrafilters $\mathcal U$ and $\mathcal V$ are isomorphic if there is some bijection $f$ with $\beta f(\mathcal U) = \mathcal V$ . Since there are only $2^{\aleph_0}$ many bijections of $\mathbb N$ , but $2^{2^{\aleph_0}}$ many ultrafilters on $\mathbb N$ , we know that there are many isomorphism classes of free ultrafilters. On the other hand, in any proof that I have seen using ultrafilters, it does not seem to matter which ultrafilter is chosen. This leads me to the following Question: is there some way in which all free ultrafilters are the 'same'? I have thought of some possibilities for what it could mean for ultrafilters to be the 'same'. We can see any ultrafilter $\mathcal U$ as an ordered set, using the partial order $\subseteq$ . I can imagine that if $\mathcal U$ and $\mathcal V$ are free ultrafilters, they are isomorphic as partial orderings. This seems pretty weak though. Another possibility would be to consider the action of $\operatorname{Homeo}(\beta\mathbb N)$ on $\beta\mathbb N$ . Does it act transitively? It might be interesting to consider the Rudin–Keisler ordering $\leq_{\text{RK}}$ on $\beta\mathbb N$ . It is defined by $\mathcal U\leq_{\text{RK}} \mathcal V$ iff there is a function $f\colon\mathbb N\to\mathbb N$ with $\beta f(\mathcal V) = \mathcal U$ . It is known that there exist free ultrafilters that are not minimal for the Rudin–Keisler ordering, while it is independent of ZFC whether there exist free ultrafilters that are minimal. Presumably, a minimal ultrafilter is not the 'same' as a non-minimal ultrafilter. However, even then it might be consistent with ZFC that all free ultrafilters are the 'same'.
Certain important properties are shared by all free ultrafilters. In many applications of ultrafilters, especially more elementary applications, only these properties are used. In such a situation, it does not matter which free ultrafilter is chosen -- any one will do. But there are some proofs that require (or seem to require) special kinds of ultrafilters. One important example, mentioned in the comments by Benjamin Steinberg, are algebraically special ultrafilters, such as the idempotent ultrafilters used to prove Hindman's Theorem, or the minimal idempotents used in the ultrafilters proof of the Hales-Jewett Theorem. These ultrafilters are "algebraically special" in the sense that they have special properties defined using the algebraic-and-topological structure $(\beta \mathbb N,+)$ . However, if we don't want to look at all this extra structure on $\beta \mathbb N$ , an idempotent ultrafilter may not be too different from any other. In fact, every idempotent is equivalent to many non-idempotents (in the sense of being isomorphic to them, as described in your first paragraph). Another special kind of ultrafilter is a $P$ -point. These are definable in the topological space $\beta \mathbb N$ , without considering any other dynamic or algebraic structure, and a $P$ -point cannot be isomorphic to a non- $P$ -point. One important characterization of $P$ -points is: an ultrafilter $\mathcal U$ is a $P$ -point if and only if for any sequence $\langle x_n \rangle$ of real numbers, $r = \mathcal U$ - $\lim x_n$ if and only if there is a subsequence of $\langle x_{n_k} \rangle$ converging to $r$ with $\{n_k :\, k \in \mathbb N\} \in \mathcal U$ . I have seen this property of $P$ -points used in proofs before, although no particularly famous examples spring to mind. I remember a theorem of mine, in a paper with Piotr Oprocha, where we need to take $\mathcal U$ -limits that do not have this property. So this is an application of ultrafilters where it is important to use a non - $P$ -point. From an order-theoretic point of view, Ramsey ultrafilters are quite special. One can write a short proof of Ramsey's Theorem (the infinitary version) using a Ramsey ultrafilter. (I like this proof for pedagogical purposes, and have given it to graduate students in the past, since it nicely parallels the proof that measurable cardinals are Ramsey.) So some applications of ultrafilters really do use special ultrafilters. To address some of your other questions: It is a theorem of ZFC, not just a consistency result, that there are RK-incomparable ultrafilters. This is due to Kunen and Frolik. It is also a theorem of ZFC that the action of homeomorphisms on $\beta \mathbb N \setminus \mathbb N$ is not transitive. In fact, Kunen proved (from ZFC only) that $\beta \mathbb N \setminus \mathbb N$ contains weak $P$ -points . These are defined as points of $\beta \mathbb N \setminus \mathbb N$ that are not contained in the closure of any countable subset of $\beta \mathbb N \setminus \mathbb N$ not already containing the point. It isn't too hard to see that, because $\beta \mathbb N \setminus \mathbb N$ is compact, not every point has this property. And no self-homeomorphism of $\beta \mathbb N \setminus \mathbb N$ can map a weak $P$ -point onto a non-weak $P$ -point. Finally, is there a meaningful sense in which all free ultrafilters are the same? Maybe. It is an open question whether all free ultrafilters can (consistently) have the same Tukey type . 
Roughly, there is a means of comparing partial orders via maps called Tukey reductions, much coarser than the notion of isomorphism described in your post. It is coarse enough that -- maybe -- all ultrafilters compare to all others. But this is consistently not the case. For example, a $P$ -point is never Tukey-equivalent to a non- $P$ -point.
{ "source": [ "https://mathoverflow.net/questions/410462", "https://mathoverflow.net", "https://mathoverflow.net/users/470870/" ] }
410,661
Let $G$ be a finite group. Let $r_2\colon G \to \mathbb{N}$ be the square-root counting function, assigning to each $g\in G$ the number of $x\in G$ with $x^2=g$ . Perhaps surprisingly, $r_2$ does not necessarily attain its maximum at the identity for general groups, see Square roots of elements in a finite group and representation theory . I'm interested in whether $r_2(g)$ can attain a value above $0.999|G|$ for some non-identity element $g\in G$ . Update: Thanks to everybody who participated in the discussion. The lemma proved here influenced greatly the statement of Theorem 4.2 in https://arxiv.org/pdf/2204.09666.pdf . Proposition 3.12 in this same paper is essentially the answer posted by GH from MO.
Here is an elementary way to prove that there can't be a finite group $G$ and non-identity $g\in G$ with $r_2(g)>\frac{5}{6}|G|$ . Suppose that happens and call $S=\{x\in G; x^2=g\}$ . Then of course there must be $x\in G$ with $x,x^{-1}\in S$ , so $g^2=1$ . Now for every $x\in G$ , $S\cap x^{-1}S$ has more than $\frac{2|G|}{3}$ elements. If for each $y\in G$ you consider the set $A_y=\{x\in G;y\in S\cap x^{-1}S\}$ , then $\sum_{y\in G}|A_y|=\sum_{x\in G}|S\cap x^{-1}S|>|G|\frac{2|G|}{3}$ , so there is some $y$ with $|A_y|>\frac{2|G|}{3}$ . So, $|S\cap A_y|>\frac{|G|}{2}$ . Pick $x\in S\cap A_y$ . Then we have $y^2=xyxy=x^2=g$ . From these equalities we deduce $xyx=y$ , and as $x^2y^2=g^2=1$ , we have $xy=x^{-1}y^{-1}$ . So, $y=xyx=x^{-1}y^{-1}x$ . This means that $x^{-1}y^{-1}x=y$ for all of the more than $\frac{|G|}{2}$ possible choices of $x$ ; since the set of such $x$ is a coset of the centralizer of $y$ , having more than half the elements of $G$ forces it to be all of $G$ . So, $x^{-1}y^{-1}x=y$ for all $x$ , which is impossible since $y\neq y^{-1}$ (taking $x=e$ gives $y=y^{-1}$ , but $y\in S$ means $y^2=g\neq 1$ ). Edit : As Emil Jeřábek points out in the comments, this argument can be refined to prove that $r_2(g)>\frac{3}{4}|G|$ can't be achieved. The bound $r_2(g)=\frac{3}{4}|G|$ is reached in the example Derek Holt mentions in his answer: the group $Q_8$ and its element $g$ with $r_2(g)=6$ .
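The extremal example is easy to check by brute force. Here is a minimal Python sketch (illustrative only) realizing $Q_8$ as the eight unit quaternions $\pm 1, \pm i, \pm j, \pm k$ and counting square roots:

```python
from collections import Counter

def qmul(p, q):
    # Hamilton product of quaternions written as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
q8 = [tuple(s * e for e in v) for v in basis for s in (1, -1)]

r2 = Counter(qmul(x, x) for x in q8)  # r2[g] = #{x in Q8 : x^2 = g}
print(r2[(-1, 0, 0, 0)], len(q8))     # 6 and 8: r2(-1) = (3/4) |Q8|
```

The six square roots of $-1$ are exactly $\pm i$, $\pm j$, $\pm k$, matching the claimed bound.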
{ "source": [ "https://mathoverflow.net/questions/410661", "https://mathoverflow.net", "https://mathoverflow.net/users/160715/" ] }
410,798
Briefly, I was wondering if someone can suggest an angle for introducing the gist of Galois groups of polynomials to (advanced) high school students who are already familiar with polynomials (factorisation via Horner, polynomial division, discriminant and Vieta's formulas for quadratic equations). I am struggling to find a coherent approach that makes use of their current understanding, e.g., whether to give an intro on group theory, or to describe permutation operations on roots. This is all intended for a short course introduction and not an extended program. Any advice would be quite helpful.
I have now twice taught Galois theory to advanced high school students at PROMYS . This is a six week course, meeting four times a week, for students who already are comfortable with proofs and, in particular, have seen basic number theory. The second time, I taught the course as an IBL course, and you can read my worksheets here . Here is what I have done, and some thoughts about ways to do less. Showing that there is no universal quintic formula: Both times, I started with a weak version of unsolvability of the quintic. This would make a very natural stopping point for a less ambitious course. Consider the general degree $n$ polynomial $$x^n + c_{n-1} x^{n-1} + \cdots + c_1 x + c_0 = (x-r_1) (x-r_2) \cdots (x-r_n).$$ Point out that $c_1$ , ..., $c_n$ are symmetric polynomials in the roots $r_1$ , ..., $r_n$ . Exhibit the quadratic, cubic and (optionally) quartic formulas and point out that they express the roots $r_i$ in terms of the coefficients $c_i$ while staying entirely inside the polynomials . For example, the quadratic formula is $r_1 = \tfrac{-c_1 + \sqrt{c_1^2-4c_2}}{2}$ , and $c_1^2-4c_2 = (r_1-r_2)^2$ , so you can take the square root without leaving the world of polynomials. (Be prepared for discussions about what you mean by square root, since teachers have taught them that square roots are always positive; what you mean is any expression whose square is $c_1^2 - 4 c_2$ .) Announce that your goal is to show, for $n \geq 5$ , that there is no expression for $r_1$ , ..., $r_n$ in terms of $c_1$ , ..., $c_n$ , using the operators $+$ , $-$ , $\times$ , $\div$ , $\sqrt[n]{\ }$ such that every $n$ -th root stays inside the symmetric rational functions. It is worth taking some time to get some buy-in that students understand the goal and realize it is nontrivial and a reasonable approximation to what we might informally state as "there is no quintic formula". With this as the goal, define the symmetric group $S_n$ and note how it acts on formulas in $r_1$ , ..., $r_n$ . Define the sign homomorphism $S_n \to \pm 1$ by the action of $S_n$ on $\prod_{i<j} (r_i - r_j)$ and define the alternating permutations $A_n$ as the permutations with sign $1$ . Show that, for $n=3$ or $4$ , there are polynomials which define nontrivial homomorphisms $A_n \to \mathbb{C}^{\ast}$ . Prove that, for $n \geq 5$ , there are no nontrivial homomorphisms $A_n \to \mathbb{C}^{\ast}$ . Now, define $F$ to be the set of all rational functions invariant for $A_n$ . Clearly, $F$ is closed under $+$ , $-$ , $\times$ , $\div$ . Now the key Lemma: If $f \in F$ is nonzero, and $f = g^n$ for some $g$ in $\mathbb{C}(r_1, \dots, r_n)$ , then $\sigma \mapsto \tfrac{\sigma(g)}{g}$ would be a group homomorphism $A_n \to \mathbb{C}^{\ast}$ . But we showed (for $n \geq 5$ ) that every such homomorphism is trivial! So $g$ must also be in $F$ . Thus, our operations can never get us outside $F$ , and in particular (for $n \geq 5$ ) we cannot get to $r_1$ , $r_2$ , ..., $r_n$ . $\square$ I really like this approach because it introduces so many key concepts -- symmetries, characters, a set closed under the field operations, defining a subfield by its symmetries -- without ever needing to define a field or a group as an abstract object. The first time, I did the above argument in a week of lectures and then went back to point out the key abstractions of "field", "group" and "character" hiding in the proof. 
The second time, I did it in 3 weeks of IBL and introduced the group theory language explicitly as I went, but held back the definition of a field until we had completed the proof. Worksheet 9 is the climax. As a side note, I never needed to prove the fundamental theorem of symmetric functions, though I assigned it as homework, and it is well-motivated by these results. The proof only needs the easy containment $\mathbb{C}(c_1, \ldots, c_n) \subseteq \mathbb{C}(r_1, \ldots, r_n)^{S_n}$ , not the equality. Getting to abstract fields If you want to do anything harder than this, I think you need to define fields. Here, the fact that my students have already seen basic number theory is a huge advantage: They already know that, for $p$ a prime, every nonzero element of $\mathbb{Z}/p \mathbb{Z}$ is a unit, so they find it very easy to believe and prove the same thing about $k[x]/p(x) k[x]$ for $p(x)$ an irreducible polynomial. There is a conceptual obstacle, though, which is convincing my students that they really can treat $\mathbb{Q}[\sqrt[3]{2}]$ as the same thing as $\mathbb{Q}[x]/(x^3-2) \mathbb{Q}[x]$ . It takes them a long time to believe that, for example, knowing that $x^2+x+1$ is a unit in the ring $\mathbb{Q}[x]/(x^3-2) \mathbb{Q}[x]$ really proves that $\tfrac{1}{\sqrt[3]{2}^2+\sqrt[3]{2}+1}$ is in $\mathbb{Q}[\sqrt[3]{2}]$ . I think it is worth spending time to make them sit with this discomfort until they resolve it. There is a reasonable halfway goal to aim for here -- that $\sqrt[3]{2}$ can't be computed with $+$ , $-$ , $\times$ , $\div$ , $\sqrt{\ }$ . That has the advantage of using field extensions, and the multiplicativity of degree, but not splitting fields or Galois groups. I talked about that more here . To my surprise, in summer 2021, I never actually wound up needing to prove that the degree of a field extension was well defined! I talked about bases and spanning sets, and showed that $1$ , $x$ , ..., $x^{\deg(f)-1}$ was a basis for $k[x]/f(x) k[x]$ , but I never proved that two bases of a field had the same cardinality or that degree was multiplicative! This was quite a relief; in summer 2018, I took a week away from the main material to do a crash course on linear algebra, and I lost a lot of momentum there. If you are going for the full unsolvability of the quintic You need to introduce splitting fields and their automorphisms. At first, I found students swallow these with no hesitation, but they don't really realize what they've said yes to. When you tell them that there is an automorphism of $\mathbb{Q}[\sqrt[3]{2}, \omega]$ which maps $\sqrt[3]{2}$ to $\omega \sqrt[3]{2}$ , the students who don't just say yes to everything will start to feel very concerned. I think this is probably the point where it pays off to have really gotten them used to the notion that computations with polynomials really do prove things about concrete subfields of $\mathbb{C}$ . The key lemma , from this perspective, is that if $f(x)$ is an irreducible polynomial in $F[x]$ , and $K$ is a normal extension of $F$ in which $f$ has a root, then $f$ splits in $K$ and $\text{Aut}(K/F)$ acts transitively on the roots of $f$ in $K$ . From this, deduce: Theorem Suppose that $f(x)$ is an irreducible polynomial of degree $n \geq 5$ over $F_0$ , let $K$ be a field in which $f$ splits and suppose that $\text{Aut}(K/F_0)$ acts on the roots of $f$ by $S_n$ . Then there is no tower of radical extensions $F_0 \subset F_1 \subset \cdots \subset F_r$ in which $f(x)$ has a root. 
It's worth taking the time to convince students that this really is a rigorous and powerful formalization of "you can't solve the quintic with radicals". The key proof is to embed everything into a single splitting field $L$ ; let $G = \text{Aut}(L/F_0)$ . The details are on Worksheet 17 , but the key point is that, on the one hand, we have a surjection $G \to S_n$ but, on the other hand, we have a sequence of subgroups $G \trianglerighteq G_0 \trianglerighteq G_1 \trianglerighteq \cdots \trianglerighteq G_r = \{ e \}$ with each $G_{i+1}$ the kernel of a character $G_i \to \mathbb{C}^{\ast}$ , and this is impossible. (The warm-up version that I start the course with is easier because $G$ is $S_n$ , so we are building the composition series in a concrete group we know, rather than a very abstract group where all we can say is that it surjects to $S_n$ .) There are a surprising number of things I didn't need to prove! I never introduced the notion of separability; the result is correct as stated for a normal inseparable extension. I also never proved the fundamental theorem of Galois theory! This proof starts with a chain of subfields and uses it to construct a chain of subgroups, but we never need the reverse construction! In particular, if there were two different subgroups that stabilized the same subfield, it would not affect the proof in any way. I also never needed the abstract notion of a quotient group! I always had a concrete homomorphism $G \to H$ , and talked about its image and kernel; I never needed to show that every normal subgroup was the kernel of a homomorphism. I put this on the homework, but it was never used in class. Once you get here, a final issue is to construct an explicit example of a polynomial where the Galois group is $S_n$ . The easy route is to take the polynomial $x^n + c_{n-1} x^{n-1} + \cdots + c_1 x + c_0$ with coefficients in $\mathbb{C}(c_1, \ldots, c_n)$ , since it is easy to see that $\text{Aut}{\big(} \mathbb{C}(r_1, \ldots, r_n)/\mathbb{C}(c_1, \ldots, c_n){\big)}$ is $S_n$ . If you want to give an actual quintic with coefficients in $\mathbb{Q}$ whose Galois group is $S_5$ , there are various hacks to do this. What I did was to tell my students that, in 10 years, this would no longer be an issue: We will just write down a random polynomial $x^5 + c_4 x^4 + \cdots + c_0$ with integer coefficients and roots $(r_1, \ldots, r_5)$ , compute the degree $120$ polynomial $g(x):= \prod_{\sigma \in S_5} {\big( }x-r_{\sigma(1)} - 2 r_{\sigma(2)} - \cdots - 5 r_{\sigma(5)} {\big)}$ , and check that the result is irreducible; this will prove that the polynomial has Galois group $S_5$ . The need for a clever proof is simply because modern computers can't * compute and factor such large polynomials. I then did show them the clever proof that an irreducible quintic with two complex roots has Galois group $S_5$ , which was the last result of the course. * I might be wrong about this! I just attempted the case of a random integer monic quintic on my laptop. The first time, I didn't use enough working precision in computing the roots, and the degree $120$ polynomial didn't even have real coefficients. But I went back and told Mathematica to use 100 digits for every floating point computation, and it got the polynomial in quite reasonable time, with every coefficient within $10^{-90}$ of an integer. This could make a genuine in-class demo!
Of course, I would need to know whether I am actually computing the degree $120$ polynomial correctly, but it seems unlikely that floating point errors would come out so near integers every time. I'm not sure whether I can trust Mathematica's integer polynomial factorization for such large polynomials, but I know that Mathematica's routines for characteristic $p$ factorization are very good, and factoring my polynomial modulo the first $10$ primes finds a degree $100$ factor modulo $3$ ; this is already enough to prove irreducibility. The future is now!
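For readers who want to try the demo themselves, here is a rough Python sketch of the resolvent computation (entirely a reconstruction: the test quintic $x^5 - 4x + 2$, which is Eisenstein at $2$ and hence irreducible over $\mathbb{Q}$, and all precision choices are assumptions, not the author's actual code):

```python
import itertools
from mpmath import mp, mpc, mpf, nint, polyroots
from sympy import GF, Poly, symbols

mp.dps = 200  # generous; the answer reports that ~100 digits sufficed

# Hypothetical test quintic: x^5 - 4x + 2 (Eisenstein at 2, so irreducible).
quintic = [1, 0, 0, 0, -4, 2]
roots = polyroots(quintic, maxsteps=200, extraprec=200)

# Degree-120 resolvent: product over sigma in S_5 of
#   x - (r_sigma(1) + 2 r_sigma(2) + ... + 5 r_sigma(5)),
# built by repeated multiplication by linear factors
# (coefficients stored highest degree first).
res = [mpc(1)]
for sigma in itertools.permutations(range(5)):
    r = sum((i + 1) * roots[sigma[i]] for i in range(5))
    res = [a - r * b for a, b in zip(res + [mpc(0)], [mpc(0)] + res)]

# The coefficients should be integers up to numerical noise.
assert all(abs(c.real - nint(c.real)) < mpf("1e-40") and
           abs(c.imag) < mpf("1e-40") for c in res)
ints = [int(nint(c.real)) for c in res]

# Factor the integer resolvent modulo 3: the degrees of the mod-p factors
# constrain the degrees of the rational factors, and hence the order of
# the Galois group, along the lines sketched above.
x = symbols("x")
print(sorted(f.degree() for f, _ in Poly(ints, x, domain=GF(3)).factor_list()[1]))
```

The mod-$3$ factorization step mirrors the footnote above: a single irreducible factor of large degree forces the rational factors of the resolvent, whose degrees all equal the order of the Galois group, to be large.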
{ "source": [ "https://mathoverflow.net/questions/410798", "https://mathoverflow.net", "https://mathoverflow.net/users/115841/" ] }
412,002
Edit 1: I have received a lot of great answers. I am not accepting any single answer, because some user may want to contribute a new answer in the future, and in my opinion some readers might find the answers useful for keeping their heads high in times of despair. Thank you very much to all! I am a person living in a 3rd world country who has done a master's in mathematics and is preparing for grad school in math. Unfortunately, I fell into depression due to horrible harassment by 2 professors (against which no action could be taken due to nepotism and corruption in my country) and family issues, and had to take a break. I don't get much support from my parents or friends, as research in mathematics doesn't yield much money and there are a lot of other high-paying jobs. I don't care much about their opinions. I live in a very, very capitalistic country and people here respect only money. I like to study mathematics as I am very much interested in it. I am taking therapy and medications. I have realized that I need some motivating instances to help keep me going, so that I can motivate myself when I am low due to my depression. It is my humble request to you to suggest books, websites, and/or blogs of real-life instances of mathematicians overcoming challenges and hardships in life , both mathematical and non-mathematical. I have been reading Men of Mathematics by E.T. Bell but it is mostly about mathematicians of older centuries, not the 20th and 21st centuries, although it's still a very good read.
The AMS published a book called Living Proof in which a number of mathematicians relate their own experience with overcoming adversity. Some of these are famous although most are ordinary mathematicians. The stories are equally inspiring. The book is available as a free download from the AMS.
{ "source": [ "https://mathoverflow.net/questions/412002", "https://mathoverflow.net", "https://mathoverflow.net/users/151209/" ] }
412,385
While exploring the Baxter sequences from my earlier MO post , I obtained a rather curious identity (not listed on OEIS either). I usually try to employ the Wilf-Zeilberger (WZ) algorithm to justify such claims. To my surprise, WZ offers two different recurrences for each side of this identity. So, I would like to ask: QUESTION. Is there a conceptual or combinatorial reason for the below equality? $$\frac1n\sum_{k=0}^{n-1}\binom{n+1}k\binom{n+1}{k+1}\binom{n+1}{k+2} =\frac2{n+2}\sum_{k=0}^{n-1}\binom{n+1}k\binom{n-1}k\binom{n+2}{k+2}.$$ Remark 1. Of course, one gets an alternative formulation for the Baxter sequences themselves: $$\sum_{k=0}^{n-1}\frac{\binom{n+1}k\binom{n+1}{k+1}\binom{n+1}{k+2}}{\binom{n+1}1\binom{n+1}2} =2\sum_{k=0}^{n-1}\frac{\binom{n+1}k\binom{n-1}k\binom{n+2}{k+2}}{\binom{n+1}1\binom{n+2}2}.$$ Remark 2. Yet, here is a restatement to help with combinatorial argument: $$\sum_{k=0}^{n-1}\binom{n+1}k\binom{n+1}{k+1}\binom{n+1}{k+2} =2\sum_{k=0}^{n-1}\binom{n+1}k\binom{n}k\binom{n+1}{k+2}.$$
Just playing around with it: The RHS multiplied by $n$ is the same as $$2 \sum_{k=0}^{n-1} \binom{n+1}{k} \binom{n}{k} \binom{n+1}{k+2}.$$ Subtracting this from $n$ times the LHS gives $$\sum_{k=0}^{n-1} \binom{n+1}{k} \binom{n+1}{k+2} \left( \binom{n}{k+1} - \binom{n}{k} \right).$$ Now you check that replacing $k$ with $n-1-k$ changes the sign of the summand, so the sum is zero.
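For the skeptical reader, here is a quick brute-force confirmation of the identity in Python (an illustrative check only, using exact rational arithmetic throughout):

```python
from fractions import Fraction
from math import comb

def lhs(n):
    return Fraction(sum(comb(n + 1, k) * comb(n + 1, k + 1) * comb(n + 1, k + 2)
                        for k in range(n)), n)

def rhs(n):
    return 2 * Fraction(sum(comb(n + 1, k) * comb(n - 1, k) * comb(n + 2, k + 2)
                            for k in range(n)), n + 2)

# Both sides agree for every n tested.
assert all(lhs(n) == rhs(n) for n in range(1, 100))
print([lhs(n) for n in range(1, 7)])
```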
{ "source": [ "https://mathoverflow.net/questions/412385", "https://mathoverflow.net", "https://mathoverflow.net/users/66131/" ] }
412,437
This is somewhat of a general (and naive) question, but as specialized mathematicians we usually miss important results outside our area of research. So, generally speaking, what have been the important breakthroughs in 2021 in different mathematical disciplines?
Advancing mathematics by guiding human intuition with AI , Nature 600 , 70 (2021), stands out because it represents the first significant advance in pure mathematics generated by artificial intelligence. More newsworthy items (each item has a link to a blog on Quanta magazine for an informal discussion of its significance): A counterexample to the unit conjecture for group rings , Giles Gardam, Ann. of Math. 194 , 967 (2021). [ Quanta link ] Tadayuki Watanabe solved the last open case of the Smale conjecture. (still unpublished in 2021) $MM^{++}$ implies $(\ast)$ , David Asperó and Ralf Schindler, Ann. Math. 193 , 793 (2021). [ Quanta link ] Proof of the p-adic formula for Brumer–Stark units , Samit Dasgupta and Mahesh Kakde. [ Quanta link ] Geometrization of the local Langlands correspondence , Laurent Fargues and Peter Scholze. [ Quanta link ] Proof of Arnold's conjecture for cyclical number systems , Mohammed Abouzaid and Andrew Blumberg. [ Quanta link ]
{ "source": [ "https://mathoverflow.net/questions/412437", "https://mathoverflow.net", "https://mathoverflow.net/users/46573/" ] }
412,762
There are many mathematical statements that, despite being supported by a massive amount of data, are currently unproven. A well-known example is the Goldbach conjecture, which has been shown to hold for all even integers up to $10^{18}$ , but which is still, indeed, a conjecture. This question asks about examples of mathematical statements of the opposite kind, that is, statements that have been proved true (thus, theorems) but that have almost no data supporting them or, in other words, that are essentially impossible to guess by empirical observation. A first example is the Erdős–Kac theorem , which, informally, says that an appropriate normalization of the number of distinct prime factors of a positive integer converges to the standard normal distribution. However, convergence is so slow that testing it numerically is hopeless, especially because it would require factorizing many extremely large numbers. Examples should be theorems for which a concept of "empirical observation" makes sense. Therefore, for instance, theorems dealing with uncomputable structures are (trivially) excluded.
Letting $\pi$ be the prime counting function and $\mathrm{Li}$ the logarithmic integral, Littlewood proved in his 1914 article "Sur la distribution des nombres premiers" that the difference $\pi(x)-\mathrm{Li}(x)$ changes sign infinitely many times; however, according to Wolfram Math World: Skewes Number , Kotnik proved that the smallest number for which this happens is greater than $10^{14}$ .
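To see just how one-sided the data is, here is a small Python check of $\pi(x)-\operatorname{Li}(x)$ over the range a laptop can reach (an illustrative sketch; `li(x, offset=True)` is mpmath's offset logarithmic integral $\operatorname{Li}(x) = \operatorname{li}(x) - \operatorname{li}(2)$):

```python
from mpmath import li
from sympy import primepi

# pi(x) - Li(x) is negative at every x we can feasibly reach, even though
# Littlewood proved it changes sign infinitely often.
for k in range(2, 9):
    x = 10**k
    diff = int(primepi(x)) - li(x, offset=True)
    print(f"x = 10^{k}: pi(x) - Li(x) = {float(diff):.1f}")
```

Every printed value is negative, which is exactly the kind of "empirical evidence" that Littlewood's theorem shows to be misleading.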
{ "source": [ "https://mathoverflow.net/questions/412762", "https://mathoverflow.net", "https://mathoverflow.net/users/470546/" ] }
412,923
I recently came across this problem from USAMO 2005 : "A calculator is broken so that the only keys that still work are the $\sin$ , $\cos$ , $\tan$ , $\arcsin$ , $\arccos$ and $\arctan$ buttons. The display initially shows $0$ . Given any positive rational number $q$ , show that pressing some finite sequence of buttons will yield $q$ . Assume that the calculator does real number calculations with infinite precision. All functions are in terms of radians." A surprising question whose ingenious solution actually shows how to generate the square root of any rational number. I'd like to pose the following questions related to this problem: What is the smallest set of real functions, continuous at all points of $\mathbb{R}$ , which can be applied to $0$ to yield a sequence containing all the rational numbers? It's also interesting perhaps to weaken this to allow a finite number of discontinuities, so that one can use, for example, the rational functions: What is the smallest set of real functions, continuous except at a finite set of points, which can be applied to $0$ to yield a sequence containing all the rational numbers? Note that these are slightly different questions to the one above in that we are asking not only to be able to produce any rational from $0$ but to produce all of them at some point after starting at $0$ . In the case of the USAMO question you can generate a complete sequence of rationals as well as any given rational but this may not always be true. (See the solution for details.) For the second question note that from the theory of continued fractions of rational numbers the functions $f(x)=1/x$ , $g(x)=x+1$ will generate any given rational starting from $0$ . For example since $$\frac{355}{113} = 3+\cfrac{1}{7+\cfrac{1}{16}}$$ we have $\frac{355}{113}=g^{[3]}(f(g^{[7]}(f(g^{[16]}(0)))))$ . If we also throw in $h(x)=x-1$ we again have every inverse included, hence this set of three functions will generate all rationals. So we know that the smallest set must contain either $1$ , $2$ or $3$ functions. In fact as pregunton noted in this related question the functions $f(x)=x+1$ and $g(x)= -1/x$ generate the modular group which acts transitively on $\mathbb{Q}$ and this gives an elegant example with only two functions.
One continuous function is enough. First, I'll give a simple example with one function which is discontinuous at one point. To do it, consider the function $$f:(0,\pi+1)\to(0,\pi+1)$$ with $$ f(x) = \begin{cases} x+1 &\text{if $x<\pi$,} \\ x-\pi &\text{if $x>\pi$,} \\ 1 &\text{if $x=\pi$.}\\ \end{cases} $$ Claim : The sequence $$1,f(1),f^2(1),\dots \tag{$*$}$$ is dense in $(0,\pi+1)$ . To verify the claim, it is enough to see that the image is dense in the interval $(0,1)$ , and that is true because for every $n$ , the number $\lceil n\pi\rceil-n\pi$ is in the image, and the sequence of multiples of $\pi$ modulo 1 is dense in $(0,1)$ due to $\pi$ being irrational. Let $A$ denote the image of the sequence $(*)$ . Since $A$ is dense in $(0,\pi+1)$ , we can find a homeomorphism $h:(0,\pi+1)\to\mathbb{R}$ with $h(A)=\mathbb{Q}$ (using that $\mathbb{R}$ is countable dense homogeneous, see for example this reference ). We can also suppose $h(1)=0$ , replacing $h$ by $h-h(1)$ if necessary. Then the function $F=hfh^{-1}$ does the trick, because $$F^n(0)=hf^nh^{-1}(0)=h(f^n(1)),$$ so $h(A)$ , which is $\mathbb{Q}$ , is the image of the sequence $0,F(0),F^2(0),\dots$ To prove that the problem can be solved with one continuous function, we can apply the same argument but taking instead of $f$ a continuous function $g:\mathbb{R}\to\mathbb{R}$ such that $0,g(0),g^2(0),\dots$ is dense in $\mathbb{R}$ . As Martin M. W. noticed in his answer, those functions are known to exist (they are called transitive maps); this paper gives examples of them.
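One can watch the first construction at work numerically. The sketch below (an illustration only; floating-point drift from repeatedly subtracting $\pi$ is ignored) iterates the piecewise map $f$ from $1$ and measures how finely the orbit fills $(0,1)$:

```python
import math

PI = math.pi

def f(x):
    # the piecewise map on (0, pi + 1) from the answer
    if x < PI:
        return x + 1
    if x > PI:
        return x - PI
    return 1.0

x, orbit = 1.0, []
for _ in range(100_000):
    orbit.append(x)
    x = f(x)

# Density check: largest gap between consecutive orbit points inside (0, 1).
pts = sorted(t for t in orbit if 0 < t < 1)
max_gap = max(b - a for a, b in zip(pts, pts[1:]))
print(len(pts), max_gap)  # the gap shrinks as the number of iterations grows
```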
{ "source": [ "https://mathoverflow.net/questions/412923", "https://mathoverflow.net", "https://mathoverflow.net/users/7113/" ] }
412,940
Define the function $$S(N, n) = \sum_{k=0}^n \binom{N}{k}.$$ For what values of $N$ and $n$ does this function equal a power of 2? There are three classes of solutions: $n = 0$ or $n = N$ , $N$ is odd and $n = (N-1)/2$ , or $n = 1$ and $N$ is one less than a power of two. There are only two solutions $(N, n)$ outside of these three classes as far as I know: (23, 3) and (90, 2). These were discovered by Marcel Golay in 1949. There are no more solutions with $N < 30{,}000$ . I've written more about this problem here . By the way, I looked in Concrete Mathematics hoping to find a nice closed form for $S(N, n)$ but the book specifically says there isn't a closed form for this sum. There is a sum in terms of the hypergeometric function $_2F_1$ but there's no nice closed form.
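For readers who want to experiment, the two sporadic solutions can be confirmed, and the search re-run over a small range, in a few lines of Python (an illustrative sketch only; the full verification up to $30{,}000$ takes the same code with a larger bound):

```python
from math import comb

def S(N, n):
    return sum(comb(N, k) for k in range(n + 1))

def is_power_of_two(v):
    return (v & (v - 1)) == 0

# The two sporadic solutions:
for N, n in [(23, 3), (90, 2)]:
    print(N, n, S(N, n))  # 2048 = 2^11 and 4096 = 2^12

# Re-run the search for small N, skipping the range covered by the
# three known solution classes.
for N in range(1, 200):
    for n in range(2, (N - 1) // 2):
        if is_power_of_two(S(N, n)) and (N, n) not in [(23, 3), (90, 2)]:
            print("unexpected:", N, n)
```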
The case $n=2$ was settled by Nagell in 1948 and suspected (?) by Ramanujan in 1913, but in an equivalent form. As John points out in his growing blog post , the $n = 2$ case is a quadratic equation which, via the quadratic formula, requires that $2^n - 7 = x^2$ for some integer $x$ . Motivated by who-knows-what, Ramanujan posted the following in 1913 (J. Indian Math.). Question 464. $2^n - 7$ is a perfect square for the values $3, 4, 5, 7, 15$ of $n$ . Find other values. A posted "solution" just verified the values of $n$ he gave and did not address whether there are other solutions. The same problem was proposed by Ljunggren in a Norwegian journal in 1943; in 1948 Nagell proved that there are no other solutions, using a quadratic field with $\sqrt{-7}$ and focusing on values of $x$ rather than $n$ . Skolem, Chowla, and Lewis (referencing Ramanujan but not aware of Nagell's solution) solved the problem using $p$ -adic techniques in 1959, prompting Nagell to republish his easier 1948 proof in English. Meanwhile, in another part of the forest , error-correcting codes arose. With that motivation, Shapiro and Slotnick essentially reconstructed Nagell's approach in 1959. Their subsequent results make use of other error-correcting code structures; techniques in coding veer away from the binomial sum question. As van Lint explained in a 1975 survey, Although as far as perfect codes are concerned the problem has been settled, the purely number-theoretic problem of finding all solutions of (5.2) remains open. where (5.2) is the more general $\sum_{i=0}^e \binom{n}{i} (q-1)^i = q^k$ where $q$ is a power of a prime. Bringing Nagell into the error-correcting code literature occurred by 1964 (Cohen). The OEIS entries A215797 , A060728 , and A038198 address the problem from different viewpoints. There's one reference to another solution that I have not been able to track down. In a 1998 textbook on error correcting codes, John Baylis writes (p109) ...so $2+n+n^2$ must be a power of 2. It was shown in 1930 that $n = 1, 2, 5$ and 90 are the only positive integers for which this is true. Any idea what 1930 result he has in mind? References: Baylis, Error-Correcting Codes, Chapman & Hall, 1998. Berndt, Choi, Kang, The problems submitted by Ramanujan to the Journal of the Indian Mathematical Society, Contemporary Mathematics 236, 1999. Cohen, A note on double perfect error-correcting codes on $q$ symbols, Information and Control 7, 1964. Nagell, The diophantine equation $x^2 + 7 = 2^n$ , Arkiv Math. 4, 1961 (English version of his 1948 article published in Norwegian). Shapiro, Slotnick, On the mathematical theory of error correcting codes, IBM Journal, January 1959 (available through IEEE). Skolem, Chowla, Lewis, The diophantine equation $2^{n+2} - 7 = x^2$ and related problems, Proc. AMS 10, 1959. van Lint, A survey of perfect codes, Rocky Mountain J. Math. 5, 1975.
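The Ramanujan–Nagell statement is easy to probe by brute force; a few lines of Python (an illustrative check) confirm that no further solutions appear far beyond the five known ones:

```python
from math import isqrt

# Search for 2^n - 7 = x^2; Nagell proved n = 3, 4, 5, 7, 15 are the only
# solutions, and the search below finds nothing else.
sols = [n for n in range(3, 5000)
        if isqrt(2**n - 7) ** 2 == 2**n - 7]
print(sols)  # [3, 4, 5, 7, 15]
```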
{ "source": [ "https://mathoverflow.net/questions/412940", "https://mathoverflow.net", "https://mathoverflow.net/users/136/" ] }
412,988
I am an enthusiastic but ever-so-slightly naive PhD student and have been 'following my nose' a lot recently, seeing whether topics that I have studied can be generalised or translated in various ways into unfamiliar settings, exploring where the theory breaks down, etc. When doing this, I have found it very difficult to assess whether it is going to 'work' in the more general sense of whether it could lead to a viable project for a PhD thesis or perhaps a short research paper. I guess it becomes easier to get a feel for these things as one gains experience and a better sense of perspective. Of course, one added complication over the past year has been that due to various lockdowns it has been difficult to get to know other mathematicians and run ideas past them in the natural way that would have occurred in previous years. Suppose that a wise and experienced pure mathematician wishes to generalise a particular theory or shed some light on an open problem and will devote, say, at least 6 months to it. What reasonable steps should be taken to maximise the likelihood of this being a fruitful endeavour? My main concern personally would be (is?) a previously unforeseen obstacle rearing its ugly head only after a significant amount of time and energy has been invested that brings the whole thing crashing down. How can this scenario be avoided when exploring something brand new? EDIT: Although I have referred to my own circumstances above, my question relates primarily to the more general issue.
Over decades, and across multiple research fields, I've noticed a way to predict I'm on track to make progress. I discover something interesting, only to learn it is already known. As a student, this was incredibly discouraging, and in fact I stopped some lines of research for this very reason. But by now I'm used to it: I start looking at a new area, and have an insight. Arg, it turns out people knew it 10 years ago. I read some more, think some more, and have a new insight. Careful searching reveals a paper with that result from three years ago. Too bad—but that paper is fascinating, and I can deeply appreciate it and feel kinship with the author. Thinking about it leads to another insight, which I start writing up. Oof, then I see a preprint from a month ago which says the same thing. What I've learned over time is that this pattern of rediscovery, particularly if the dates of things I've been rediscovering get more and more recent, is a reliable sign I'm on a good path, and that I'm building my intuition in an area other people care about. So keep following your nose, check back with the literature regularly, and take any rediscoveries as a green light, not a red light.
{ "source": [ "https://mathoverflow.net/questions/412988", "https://mathoverflow.net", "https://mathoverflow.net/users/197447/" ] }
413,165
I am a graduate student and I've been thinking about this fun but frustrating problem for some time. Let $d = \frac{d}{dx}$, and let $f \in C^{\infty}(\mathbb{R})$ be such that for every real $x$, $$g(x) := \lim_{n \to \infty} d^n f(x)$$ converges. A simple example of such an $f$ would be $ce^x + h(x)$ for any constant $c$, where $h(x)$ converges to $0$ everywhere under this iteration (in fact my hunch is that every such $f$ is of this form), e.g. $h(x) = e^{x/2}$ or simply a polynomial, of course. I've been trying to show that $g$ is, in fact, differentiable, and thus is a fixed point of $d$. If this is true, it would provide many interesting properties from a dynamical systems point of view, provided one can generalize to arbitrary smooth linear differential operators, although they might be too good to be true. Perhaps this is a known result? If so I would greatly appreciate a reference. If not, and this has a trivial counterexample I've missed, please let me know. Otherwise, I've been dealing with some tricky double limits using tricks such as in this MSE answer, to no avail. Any help is kindly appreciated. $\textbf{EDIT}$: Here is a discussion of some nice consequences, now that we know the answer is positive, which I hope can be generalized. Let $A$ be the set of fixed points of $d$ (in this case, just multiples of $e^x$, as we know), and let $B$ be the set of functions that converge everywhere to zero under the above iteration. Let $C$ be the set of functions that converge to a smooth function under the above iteration. Then we have the following: $C = A + B = \{ g + h : g\in A, h \in B \}$. Proof: Let $f \in C$. Let $g$ be what $d^n f$ converges to. Let $h = f-g$. Clearly $d^n h$ converges to $0$ since $g$ is fixed. Then we get $f = g+h$. Now take any $g\in A$ and $h \in B$, and set $f = g+h$. Since $d^n h$ converges to $0$ and $g$ is fixed, $d^n f$ converges to $g$, and we are done. Next, here I'm assuming the result of this thread holds for a general (possibly elliptic) smooth linear differential operator $d : C^\infty (\mathbb{R}) \to C^\infty (\mathbb{R})$. A first note is that fixed points of one differential operator correspond to solutions of another, i.e. of a homogeneous PDE. Explicitly, if $d_1 g = g$, then setting $d_2 = d_1 - Id$, we get $d_2 g = 0$. This much is simple. So given $d$, finding $A$ from above amounts to finding the space of solutions of a PDE. I'm hoping that one can use techniques from dynamical systems to find the set $C$ and thus get $A$ after the iterations. But I'm approaching this naively and I do not know the difficulty or complexity of such an affair. One thing to note is that once we find some $g \in A$, we can set $h(x) = g(\varepsilon x)$ for small $\varepsilon$ and get $h \in B$. Conversely, given $h \in B$, I'm wondering what happens when we set $f(x) = h(x/\varepsilon)$ and vary $\varepsilon$. It might not coincide with a fixed point of $d$, but could very well coincide with a fixed point of the new operator $d^k$ for some $k$. For example, take $h(x) = \cos(x/2)$. The iteration converges to $0$ everywhere, and multiplying the interior variable by $2$ we do NOT get a fixed point of $d = \frac{d}{dx}$, but we do for $d^4$. I'll leave it at this; let me know if there is anything glaringly wrong that I missed.
I was able to adapt the accepted answer to this MathOverflow post to positively answer the question. The point is that one can squeeze more out of Petrov's Baire category argument if one applies it to the "singular set" of the function, rather than to an interval. The key step is to establish Theorem 1. Let $f \in C^\infty({\bf R})$ be such that the quantity $M(x) := \sup_{m \geq 0} |f^{(m)}(x)|$ is finite for all $x$. Then $f$ is the restriction to ${\bf R}$ of an entire function (or equivalently, $f$ is real analytic with an infinite radius of convergence). Proof. Suppose this is not the case. Let $X$ denote the set of real numbers $x$ for which there does not exist any entire function that agrees with $f$ on a neighbourhood of $x$ (this is the "entire-singular set" of $f$). Then $X$ is non-empty (by analytic continuation) and closed. Next, let $S_n$ denote the set of all $x$ such that $M(x) \leq n$, that is, $|f^{(m)}(x)| \leq n$ for all $m$. As $M$ is lower semicontinuous, the $S_n$ are closed, and by hypothesis one has $\bigcup_{n=1}^\infty S_n = {\bf R}$. Hence, by the Baire category theorem applied to the complete non-empty metric space $X$, one of the sets $S_n \cap X$ contains a non-empty set $(a,b) \cap X$ for some $a < b$. Now let $(c,e)$ be a maximal interval in the open set $(a,b) \backslash X$; then (by analytic continuation) $f$ agrees with an entire function on $(c,e)$, and hence on $[c,e]$ by smoothness. On the other hand, at least one endpoint, say $c$, lies in $S_n$, thus $$ |f^{(m)}(c)| \leq n$$ for all $m$. By Taylor expansion of the entire function, we then have $$ |f^{(m)}(x)| \leq \sum_{j=0}^\infty \frac{|f^{(m+j)}(c)|}{j!} |x-c|^j \leq \sum_{j=0}^\infty \frac{n}{j!} (b-a)^j \leq n \exp(b-a)$$ for all $m$ and $x \in [c,e]$. Letting $(c,e)$ and $m$ vary, we conclude that the bound $$ M(x) \leq n \exp(b-a)$$ holds for all $x \in (a,b) \backslash X$. Since $(a,b) \cap X$ is contained in $S_n$, these bounds also hold on $(a,b) \cap X$, hence they hold on all of $(a,b)$. Now from Taylor's theorem with remainder we see that $f$ agrees on $(a,b)$ with an entire function (the Taylor expansion of $f$ around any point in $(a,b)$), and so $(a,b) \cap X$ is empty, giving the required contradiction. $\Box$ The function $f$ in the OP's question obeys the hypotheses of Theorem 1. By Taylor expansion applied to the entire function that $f$ agrees with, and performing the same calculation used to prove the above theorem, we obtain the bounds $$ M(x) = \sup_{m \geq 0} |f^{(m)}(x)| \leq M(0) \exp(|x|)$$ for all $x \in {\bf R}$. We now have locally uniform bounds on all of the $f^{(m)}$, and the argument given by username (or the variant given in Pinelis's comment to that argument) applies to conclude.
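As a quick numerical illustration of the phenomenon in the question (a sanity check of the examples mentioned there, not part of the proof above), one can watch the iterated derivatives of $f(x) = e^x + e^{x/2} + \cos(x/2)$ converge pointwise to the fixed point $e^x$: here $d^n e^{x/2} = 2^{-n} e^{x/2}$ and $d^n \cos(x/2)$ has amplitude $2^{-n}$, so both terms die off, exactly the decomposition $C = A + B$ described in the question's edit.

    import sympy as sp

    x = sp.symbols('x')
    f = sp.exp(x) + sp.exp(x/2) + sp.cos(x/2)

    g = sp.diff(f, x, 60)          # the 60th derivative d^60 f

    # d^60 f is already numerically indistinguishable from exp(x)
    for pt in [0, 1, -2]:
        print(pt, float(g.subs(x, pt)), float(sp.exp(pt)))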
{ "source": [ "https://mathoverflow.net/questions/413165", "https://mathoverflow.net", "https://mathoverflow.net/users/143629/" ] }
413,468
The unsolvability of a general quintic equation in terms of the basic arithmetic operations and $n$th roots (i.e. the Abel–Ruffini theorem) is considered a major result in the mathematical canon. I have recently become confused as to why this is the case. The formula $z=\frac{-b+\sqrt{b^2-4ac}}{2a}$ expresses the solutions to the quadratic equation $az^2+bz+c = 0$ in terms of the inverse of an analytic function $z \mapsto z^2$. We have simply turned the problem of inverting one analytic function, $z \mapsto az^2+bz$, into the problem of inverting another analytic function, $z \mapsto z^2$. Therefore, all the power of the quadratic formula lies in how it solves any quadratic equation by inverting a single analytic function, $z \mapsto z^2$. Similarly, Cardano's formula solves any cubic equation by inverting two analytic functions, $z \mapsto z^2$ and $z \mapsto z^3$. Interestingly, you can also solve a cubic by inverting only one analytic function, for example $z \mapsto \sin z$. And crucially, you can also do this for quintic equations, by inverting $z \mapsto z^k$, $k \leq 4$, and $z \mapsto z^5+z$. One possible statement of the Abel–Ruffini theorem is that it is impossible to solve a general quintic equation by exclusively inverting functions of the form $z \mapsto z^k$ for $k \in \mathbb{N}$. But why would we only be interested in solutions that invert analytic functions of that form? In simpler terms, what's so special about radicals that makes solutions in terms of them so desirable? I can't see an argument that such inverses are intuitively straightforward: they often produce answers that are purely formal (e.g. $\sqrt{2}$ doesn't have a simpler definition than "the positive inverse of the squaring function at $2$"). To me, it seems that the more natural question is, for $n \in \mathbb{N}$: Is there a finite set of analytic functions such that the solutions to any degree $n$ polynomial may be expressed in terms of the inverses of these analytic functions? I know very little about the status of this question (except that it holds for some small values of $n$). Any information on what is known about this question would also be of interest.
I think that a large part of the difficulty we have in understanding why this result is considered important is that it is psychologically difficult to put oneself into the shoes of mathematicians of the past. There was a time not so many centuries ago when people didn't know how to solve cubic equations with radicals. Whether the quintic is solvable in radicals was once a difficult question. A problem that gains some notoriety for being difficult is usually going to be considered important when it is finally solved, regardless of whether it ends up occupying a central place in the "theory" that we end up constructing a posteriori. Fermat's Last Theorem is another good example of this. Occasionally I will hear mathematicians say something to the effect of, "Fermat's Last Theorem isn't important; it's the math used to prove the theorem that is important." I don't entirely agree. There are two different kinds of importance that are being conflated. Something can be important because it occupies a central position in our theory. But an appealing and tantalizingly difficult problem is important because of its role in capturing our imagination and giving us something to sink our teeth into. I do not think we should disavow the importance of such problems just because they are solved. Many of our much-beloved theories would likely not exist if there hadn't been some interesting unanswered questions to motivate our research. The quintic has the additional feature of being an "impossibility" result. Like non-Euclidean geometries and Gödel's incompleteness theorems, the solution made us realize that we had been making some unfounded assumptions about what the answer should look like. The psychological broadening of our horizons was a valuable byproduct that is sometimes overlooked.
{ "source": [ "https://mathoverflow.net/questions/413468", "https://mathoverflow.net", "https://mathoverflow.net/users/137577/" ] }
413,510
Let $\mathbb{R}^d$ denote the $d$-dimensional Euclidean space and $\mathcal{W}_2(\mathbb{R}^d)$ the $2$-Wasserstein space over $\mathbb{R}^d$. Let $\lambda$ be the uniform measure on $[0,1]^d$, and let $L^2(\lambda,\mathbb{R}^d)$ denote the Bochner space of all Borel functions $f:[0,1]^d\rightarrow \mathbb{R}^d$ satisfying $\int \lVert f(x)\rVert^2\,d\lambda(x)<\infty$. Let $\mathcal{X}\subseteq \mathcal{W}_2(\mathbb{R}^d)$ consist of all measures $\nu$ for which there is some $f\in L^2(\lambda,\mathbb{R}^d)$ satisfying: $$ \nu=f_{\#}\lambda. $$ How is $\mathcal{X}$ related to $\mathcal{W}_2(\mathbb{R}^d)$? Is $\mathcal{X}$ a dense subset of $\mathcal{W}_2(\mathbb{R}^d)$?
{ "source": [ "https://mathoverflow.net/questions/413510", "https://mathoverflow.net", "https://mathoverflow.net/users/36886/" ] }
414,124
I am reading the following paper: 1998 (H. Hudzik), p. 574. It reads: using L'Hôpital's rule, $$\liminf_{u\to\infty} \frac{1/\varphi(1/u)}{\psi(u)}=\liminf_{u\to\infty}\frac{\varphi'(1/u)}{\psi'(u)\,u^2[\varphi(1/u)]^2}.$$ That means we can apply L'Hôpital to lower limits, i.e. $$\liminf_{u\to\infty} \frac{f(u)}{g(u)}=\liminf_{u\to\infty}\frac{f'(u)}{g'(u)}?$$ But I only know the classical rule. Can someone give me a reference for this formula, or, if possible, a proof?
The full L'Hôpital rule says that $$\liminf \frac{f'}{g'}\leq\liminf\frac{f}{g}\leq\limsup\frac{f}{g}\leq\limsup\frac{f'}{g'}.$$ So in the special case when the limit of $f'/g'$ exists, the limit of $f/g$ also exists and is equal to the limit of $f'/g'$. This general rule is proved by integration.
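To flesh out "proved by integration," here is a sketch of the right-hand inequality under typical hypotheses (say $g' > 0$ near $\infty$ and $g(u) \to \infty$; the left-hand inequality is symmetric). Let $L = \limsup_{u\to\infty} f'(u)/g'(u)$ and suppose $L < \infty$ (otherwise there is nothing to prove). Given $\varepsilon > 0$, choose $u_0$ so that $f'(t) \leq (L+\varepsilon)\, g'(t)$ for all $t \geq u_0$. Integrating from $u_0$ to $u$ gives $$ f(u) - f(u_0) \leq (L+\varepsilon)\bigl(g(u) - g(u_0)\bigr). $$ Dividing by $g(u)$ and letting $u \to \infty$, the constants $f(u_0)$ and $g(u_0)$ are swallowed because $g(u)\to\infty$, so $\limsup_{u\to\infty} f(u)/g(u) \leq L + \varepsilon$; as $\varepsilon > 0$ was arbitrary, $\limsup f/g \leq \limsup f'/g'$.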
{ "source": [ "https://mathoverflow.net/questions/414124", "https://mathoverflow.net", "https://mathoverflow.net/users/147009/" ] }
414,139
There are many models for $(\infty,1)$-categories: simplicial categories, Segal categories, complete Segal spaces, and quasi-categories. Doubtless the model most used to do higher category theory in is the model of quasi-categories, due to the work of Lurie (Higher Topos Theory, Kerodon), who calls them $\infty$-categories. Just browsing through these books, I noticed that simplicial categories are used too, for instance to construct examples of quasi-categories, as in the construction of the quasi-category of spaces: there is a simplicial category of Kan complexes, and to get the quasi-category of Kan complexes we take the homotopy coherent nerve (also called the simplicial nerve in HTT, I think) of that. Question: If simplicial categories are a more practical model for actually constructing examples, why are quasi-categories used for most of the theory? That is, what advantages do quasi-categories have over simplicial categories that outweigh the complications of constructing quasi-categories directly?
As a preface, I think that this question should be viewed as analogous to "what are the advantages of ZFC over type theory" or vice versa. We're talking about foundations -- in principle, it doesn't matter what foundations you use; you end up with an equivalent model-independent theory of $\infty$-categories. The paradigm is "use simplicial categories for examples; use quasicategories for general theorems". There are a lot of constructions which are simpler in quasicategories. I tend to think that a lot of the difference is visible at the model category level: the Joyal model structure on $sSet$ is just much nicer to work with than the Bergner model structure on $sCat$ (of course, the latter is still theoretically very important, at the very least for the purposes of importing examples which start life as simplicial categories). Some of these differences are: The Joyal model structure is defined on a presheaf category. The Joyal model structure is cartesian, making it much easier to talk about functor categories. In the Joyal model structure, every object is cofibrant. There's a synergy between (1) and (2) -- if $X$ is a quasicategory and $A$ is any simplicial set, then the mapping simplicial set $Map(A,X)$ gives a correct model for the functor category from $A$ to $X$ -- you don't need to do any kind of cofibrant replacement of $A$. This is nicely explained in Justin Hilburn's answer. (2) is quite convenient. For example, in general frameworks like Riehl and Verity's $\infty$-cosmoi, a lot of headaches are avoided by assuming something like (2). Here are a few examples of some things which are easier in quasicategories -- I'd be curious to hear other examples folks might mention! The join functor is very nice. Consequently (in combination with the nice mapping spaces), limits and colimits can be defined pretty cleanly. An example of a theorem proven in HTT using quasicategories which I imagine would be hard to prove (maybe even to formulate) directly in simplicial categories is the theorem that an $\infty$-category with products and pullbacks has all limits. The proof uses the fact that the nerve of the poset $\omega$ is equivalent to a 1-skeletal (non-fibrant) simplicial set, and relies on knowing how to compute co/limits indexed by non-fibrant simplicial sets like this. The theory of cofinality is very nice, arising from the (left anodyne, left fibration) weak factorization system on the underlying category -- I imagine it would be quite complicated with simplicial categories. Roughly at this point in the theory, though, one starts to have enough categorical infrastructure available that it becomes more possible to think "model-independently", and the differences start to matter less. Here's a few more: When you take the maximal sub-$\infty$-groupoid of a quasicategory, it is literally a Kan complex, ready and waiting for you to do simplicial homotopy with. This is especially nice when you take the maximal sub-$\infty$-groupoid of a mapping object -- which doesn't quite make the model structure simplicial, but it's kind of "close". The theory of fibrations is pretty nice in quasicategories -- just like in ordinary categories, left, right, cartesian, and cocartesian fibrations are "slightly-too-strict" notions which are very useful and have nice properties like literally being stable under pullback. I don't know what the theory of such fibrations looks like in simplicial categories. The fact that $sSet$ is locally cartesian closed is sneakily useful.
Even though $Cat_\infty$ is not locally cartesian closed, there's a pretty good supply of exponentiable functors, and it's not uncommon to define various quasicategories using the right adjoint to pullback of simplicial sets. (Rule of thumb: in HTT, when Lurie starts describing a simplicial set by describing its maps in from simplices over a base, 90% of the time he's secretly describing the local internal hom of simplicial sets.)
{ "source": [ "https://mathoverflow.net/questions/414139", "https://mathoverflow.net", "https://mathoverflow.net/users/475383/" ] }
414,378
I usually work in the field of differential geometry, but I have encountered the following problem in my research: Are there infinitely many positive integers $k,l,m\in\mathbb N^{>0}$ such that $$(3+3k+l)^2=m\,(k\,l-k^3-1)\,?$$ Obviously, taking $l=k^2$ and $m=-(3+3k+l)^2$ gives infinitely many integer solutions, but then $m<0$. As a non-expert, I imagine that there is either a simple answer to this question, or the problem is not so simple to solve. Of course, I've played around with the equations a bit, but other than finding numerous examples, I haven't made any progress. I would appreciate an existence or non-existence statement for infinitely many positive integer solutions, but a hint that the problem is most likely hard to solve would also help me. Background: I am looking for certain integer representations of a surface group, and I can show that integer solutions to this diophantine equation actually give rise to integer representations. The condition that $k,l,m$ are positive is equivalent to the condition that the corresponding representation is contained in a higher Teichmüller component (which is important for my differential-geometric application).
It does have infinitely many positive solutions. Here is just one such series. Consider the following recurrence sequence: $$u_0=1,\ u_1=2,\ u_{n+1} = 23 u_n - u_{n-1} - 4\qquad (n\geq 1).$$ Let $t,k$ be any two consecutive terms of this sequence; then setting $l:=k^2+t$ produces the following equality: $$(3+3k+l)(t+1) = (k+26)(kl-k^3-1),$$ which gives the solution $m:=\frac{(k+26)(3+3k+l)}{t+1}$ (which is an integer) to the original equation. In fact, integrality of $m$ follows from the identity $$(u_{n+2}+1)(u_n+1) = (u_{n+1}+26)(u_{n+1}+1),$$ which can be verified from the recurrence for $u_n$. In summary, the values $(k,l,m)$ in this solution series are given by $$\begin{cases} k = u_{n+1}, \\ l = u_{n+1}^2 + u_n, \\ m = (u_{n+2}+2)(u_{n+1}+2) + 24, \end{cases}\qquad (n\in\mathbb{Z}_+).$$ ADDED. I've added $u_n$ to the OEIS as sequence A350917. Together with 9 other similar recurrences it gives all solutions $k$ to $(tk-1)\mid (k+1)^4$, which are now listed in sequence A350916.
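The first several terms of this series are easy to machine-check. Here is a short verification script (mine, written for this summary; variable names follow the answer's notation):

    u = [1, 2]
    for _ in range(10):
        u.append(23*u[-1] - u[-2] - 4)     # 1, 2, 41, 937, 21506, ...

    for n in range(len(u) - 2):
        t, k = u[n], u[n+1]
        l = k*k + t
        num = (k + 26) * (3 + 3*k + l)
        assert num % (t + 1) == 0                             # m is an integer
        m = num // (t + 1)
        assert m == (u[n+2] + 2) * (k + 2) + 24               # the closed form for m
        assert (u[n+2] + 1) * (t + 1) == (k + 26) * (k + 1)   # the key identity
        assert (3 + 3*k + l)**2 == m * (k*l - k**3 - 1)       # the original equation
        assert min(k, l, m) > 0
    print("first", len(u) - 2, "solutions verified")

For instance, $n=0$ gives $(k,l,m) = (2, 5, 196)$: indeed $(3+6+5)^2 = 196 = 196\cdot(10 - 8 - 1)$.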
{ "source": [ "https://mathoverflow.net/questions/414378", "https://mathoverflow.net", "https://mathoverflow.net/users/4572/" ] }
414,402
On MSE this got 5 upvotes but no answers, not even a comment, so I figured it was time to cross-post it on MO: Is the Moebius strip a linear group orbit? In other words: Does there exist a Lie group $G$, a representation $\pi: G \to \operatorname{Aut}(V)$, and a vector $v \in V$ such that the orbit $$ \mathcal{O}_v=\{ \pi(g)v: g\in G \} $$ is diffeomorphic to the Moebius strip? My thoughts so far: The only two obstructions I know for being a linear group orbit are that the manifold (1) must be a smooth homogeneous space (shown below for the group $\operatorname{SE}_2$) and (2) must be a vector bundle over a compact Riemannian homogeneous manifold (here the base is the circle $S^1$). The Moebius strip is homogeneous for the special Euclidean group of the plane $$ \operatorname{SE}_2= \left \{ \ \begin{bmatrix} a & b & x \\ -b & a & y \\ 0 & 0 & 1 \end{bmatrix} : a^2+b^2=1 \right \}. $$ There is a connected group $V$ of translations up each vertical line $$ V= \left \{ \ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & y \\ 0 & 0 & 1 \end{bmatrix} : y \in \mathbb{R} \right \}. $$ Now if we include the rotation by 180 degrees $$ \tau:=\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, $$ then $ \langle V, \tau \rangle$ has two connected components and $$ \operatorname{SE}_2/\langle V, \tau \rangle $$ is the Moebius strip.
Yes. Here is one way: Consider standard $\mathbb{R}^3$ endowed with the Lorentzian quadratic form $Q = x^2+y^2-z^2$, and let $G\simeq\mathrm{O}(2,1)\subset\mathrm{GL}(3,\mathbb{R})$ be the symmetry group of $Q$. Then $G$ preserves the hyperboloid $H$ of one sheet given by the level set $Q=1$, which is diffeomorphic to a cylinder. Consider the quotient of $H$ by $\mathbb{Z}_2$ defined by identifying $v\in H\subset\mathbb{R}^3$ with $-v$. This abstract quotient is a smooth Möbius strip. This quotient can be identified as a linear group orbit as follows: Let $V = S^2(\mathbb{R}^3)\simeq \mathbb{R}^6$ and consider the smooth mapping $\sigma:\mathbb{R}^3\to V$ given by $\sigma(v) = v^2$ for $v\in\mathbb{R}^3$. Then $\sigma$ is a $2$-to-$1$ immersion except at the origin. The action of $G$ on $\mathbb{R}^3$ extends equivariantly to a representation $\rho:G\to \mathrm{Aut}(V)$ such that $\rho(g)(v^2) = \rho(g)\bigl(\sigma(v)\bigr)=\sigma(g v)= (gv)^2$. It follows that $\sigma(H)\subset S^2(\mathbb{R}^3)\simeq\mathbb{R}^6$, which is a Möbius strip, is a linear group orbit under the representation $\rho$. Note that the representation of $G$ on $S^2(\mathbb{R}^3)\simeq\mathbb{R}^6$ is actually reducible as the direct sum of a trivial $\mathbb{R}$ and an irreducible $\mathbb{R}^5$. Projecting everything into the $\mathbb{R}^5$ factor, one obtains a representation of $G$ on $\mathbb{R}^5$ that has a Möbius strip as a $G$-orbit.
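A quick numerical check of this construction (my own illustration; the boost parameter and sample point are arbitrary choices) confirms the two facts it rests on: $\sigma(v) = vv^{\mathsf T}$ identifies antipodal points of $H$, and the induced action on symmetric matrices, $S \mapsto gSg^{\mathsf T}$, is equivariant, so it carries $\sigma(H)$ to itself.

    import numpy as np

    J = np.diag([1.0, 1.0, -1.0])      # Gram matrix of Q = x^2 + y^2 - z^2

    def sigma(v):                      # sigma(v) = v v^T, viewed in S^2(R^3)
        return np.outer(v, v)

    s = 0.7                            # a Lorentz boost in the (y, z)-plane
    g = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cosh(s), np.sinh(s)],
                  [0.0, np.sinh(s), np.cosh(s)]])
    assert np.allclose(g.T @ J @ g, J)                    # g lies in O(2,1)

    theta, z = 1.3, -0.4               # a sample point of the hyperboloid H
    v = np.array([np.sqrt(1 + z*z)*np.cos(theta),
                  np.sqrt(1 + z*z)*np.sin(theta), z])
    assert np.isclose(v @ J @ v, 1.0)                     # v is on H: Q(v) = 1
    assert np.isclose((g @ v) @ J @ (g @ v), 1.0)         # and so is g v
    assert np.allclose(sigma(-v), sigma(v))               # antipodes identified
    assert np.allclose(sigma(g @ v), g @ sigma(v) @ g.T)  # equivariance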
{ "source": [ "https://mathoverflow.net/questions/414402", "https://mathoverflow.net", "https://mathoverflow.net/users/387190/" ] }
414,973
Van der Waerden's theorem states that any colouring of the integers in a finite number of colours has monochromatic arithmetic progressions of arbitrary length. Szemerédi's theorem is a dramatic strengthening of this result and says that any set of positive integers with positive natural density contains arithmetic progressions of arbitrary length. Clearly the latter theorem implies the former but not the other way round. The proof of Van der Waerden's theorem, although elementary, is not simple and perhaps quite challenging to discover. However, it would follow from the following theorem: Theorem: Let $A$ be a subset of the integers such that $|A\cap [n]|\geq n/2$ for all $n>0$. Then $A$ contains arithmetic progressions of arbitrary length. This is weaker than Szemerédi's theorem but seems only slightly stronger than Van der Waerden's. Hence my question: Is there a proof of the above theorem that is substantially simpler than the proof of Szemerédi's theorem?
As (implicitly) observed already in Szemerédi's celebrated paper (Szemerédi, Endre, On sets of integers containing no $k$ elements in arithmetic progression, Acta Arith. 27, 199-245 (1975), ZBL0303.10056), and perhaps previously, Szemerédi's theorem for a fixed density $0 < \delta_0 < 1$ (such as $\delta_0 = 1/2$), when combined with van der Waerden's theorem, implies Szemerédi's theorem for arbitrary density $\delta > 0$. This is because, once one is given a subset $A$ of integers in $\{1,\dots,N\}$ of density $\delta$, it is not difficult to use the probabilistic method to find $O_{\delta,\delta_0}(1)$ translates of $A$ (by shifts randomly selected between $-N$ and $N$) which cover this interval to density at least $\delta_0$. If one can find a sufficiently long arithmetic progression inside this union of translates, then by van der Waerden's theorem, at least one of these translates also contains a long progression, which gives Szemerédi's theorem for that density $\delta$. As a consequence of this argument, the gap in difficulty between Szemerédi's theorem for a fixed density $0 < \delta_0 < 1$ (but arbitrary lengths $k$) and for arbitrary densities (and arbitrary lengths) is basically no greater than the difficulty required to prove van der Waerden's theorem (which can be proved in a page or two). EDIT: the situation is very different if instead one fixes the length $k$ of the progression. As pointed out in comments, Szemerédi's theorem is now easy for very large densities such as $\delta > 1-1/k$, and the difficulty increases as the density lowers (although several proofs of Szemerédi's theorem proceed by a downward induction on density now commonly known as the density increment argument). However, in most proofs, the increase in difficulty as $\delta$ decreases is negligible compared to the increase in difficulty as $k$ increases; for instance the $k=3$ case of the theorem, first established by Roth, is substantially easier than the $k>3$ cases. So the van der Waerden reduction given above, which trades the small-$\delta$ difficulty for the large-$k$ difficulty, is generally not useful in practice (in particular, it is largely incompatible with any attempt to induct on $k$, which tends to be a key component of most approaches to this theorem).
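For concreteness, here is one way the probabilistic covering step might be carried out (a rough first-moment sketch with unoptimized constants, not a quote from the paper). Given $A \subseteq \{1,\dots,N\}$ with $|A| \geq \delta N$, pick shifts $s_1,\dots,s_t$ independently and uniformly from $\{-N,\dots,N\}$. For any fixed $x \in \{1,\dots,N\}$, the point $x - s_i$ is uniform on a set of $2N+1$ consecutive integers containing all of $\{1,\dots,N\}$, so $$ \mathbf{P}(x \in A + s_i) \geq \frac{\delta N}{2N+1} \geq \frac{\delta}{3}. $$ By independence, the probability that $x$ avoids every translate is at most $(1-\delta/3)^t$, so the expected number of uncovered points of $\{1,\dots,N\}$ is at most $(1-\delta/3)^t N$. Choosing $t$ large enough (depending only on $\delta$ and $\delta_0$) that $(1-\delta/3)^t \leq 1-\delta_0$, some fixed choice of the $t = O_{\delta,\delta_0}(1)$ shifts covers the interval to density at least $\delta_0$, as claimed.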
{ "source": [ "https://mathoverflow.net/questions/414973", "https://mathoverflow.net", "https://mathoverflow.net/users/7113/" ] }
415,022
In Cohen's article, The Discovery of Forcing, he says that "one cannot prove the existence of any uncountable standard model in which AC holds, and CH is false," and offers the following proof. If $M$ is an uncountable standard model in which AC holds, it is easy to see that $M$ contains all countable ordinals. If the axiom of constructibility is assumed, this means that all the real numbers are in $M$ and constructible in $M$. Hence CH holds. But this argument, on the surface of it, invokes $V = L$. Can we eliminate the use of $V = L$? The discussion in a related MO question seems to come close to answering this question, but doesn't directly address it.
Remarks (2) and (3) are added in this edit. What Cohen's quoted proof outline is leaving implicit is the following statement, in which $\mathrm{Con}(T)$ means "$T$ is consistent". $(*)$ Assuming $\mathrm{Con(ZF + SM)}$, $\mathrm{V} \neq \mathrm{L}$ is not provable from $\mathrm{ZF + SM}$, where $\mathrm{SM}$ stands for the statement "there is a standard (i.e., well-founded) model of ZF". $(*)$ is an immediate consequence of the well-known fact that $\mathrm{Con(ZF + SM)}$ implies $\mathrm{Con(ZF + SM + V = L)}$. This well-known fact, in turn, follows from absoluteness considerations: if $\mathcal{M}\models \mathrm{ZF + SM}$, then $\mathrm{L}^{\mathcal{M}} \models \mathrm{ZF + SM+V=L}$, where $\mathrm{L}^{\mathcal{M}}$ is the constructible universe as computed in $\mathcal{M}$. By the way: the quoted statement of Cohen in his article is phrased as the theorem below on pages 108-109 of his book "Set Theory and the Continuum Hypothesis". In Cohen's terminology SM stands for the statement "there is a standard (i.e., well-founded) model of $\mathrm{ZF}$". Theorem. From $\mathrm{ZF + SM}$ or indeed from any axiom system containing $\mathrm{ZF}$ which is consistent with $\mathrm{V = L}$, one cannot prove the existence of an uncountable standard model in which $\mathrm{AC}$ is true and $\mathrm{CH}$ is false, nor even one in which AC holds and which contains nonconstructible real numbers. Three remarks are in order: Remark (1) In unpublished work, Cohen and Solovay noted that one can use forcing over a countable standard model of ZF to build uncountable standard models of $\mathrm{ZF}$ (in which AC fails by Cohen's aforementioned result). Later, Harvey Friedman extended their result by showing that every countable standard model of $\mathrm{ZF}$ of (ordinal) height $\alpha$ can be generically extended to a model with the same height but whose cardinality is $\beth_{\alpha}$ (Friedman, Harvey, Large models of countable height, Trans. Am. Math. Soc. 201, 227-239 (1975), ZBL0296.02036). Remark (2) It is easy to see (using the reflection theorem and relativizing to the constructible universe) that, assuming the consistency of $\mathrm{ZF + SM}$, the theory $\mathrm{ZF + SM}$ + "there is no uncountable standard model of $\mathrm{ZFC}$" is also consistent. Remark (3) Within $\mathrm{ZF}$ + "there is an uncountable standard model $\mathcal{M} \models \mathrm{ZFC+V=L}$ such that $\omega_3^{\mathcal{M}}$ is countable", one can use forcing to build a generic extension $\mathcal{N}$ of $\mathcal{M}$ that violates $\mathrm{CH}$; thus $\mathcal{N}$ is an uncountable standard model of $\mathrm{ZFC + \lnot CH}$. More specifically, the assumption of countability of $\omega_3^{\mathcal{M}}$, and the fact that GCH holds in $\mathcal{M}$, assures us that there exists a $\mathbb{P}$-generic filter over $\mathcal{M}$, where $\mathbb{P}$ is the usual notion of forcing in $\mathcal{M}$ for adding $\omega_2$ Cohen reals. Thus, in the presence of the principle "$0^{\sharp}$ exists" (which is implied by sufficiently large cardinals, and implies that every definable object in the constructible universe is countable) there are lots of uncountable standard models of $\mathrm{ZFC + \lnot CH}$.
{ "source": [ "https://mathoverflow.net/questions/415022", "https://mathoverflow.net", "https://mathoverflow.net/users/3106/" ] }
415,151
Let $f=(u,v)\in \mathscr{D}'(U,\mathbb{C})$ be a distribution, where $U\subset\mathbb{C}=\mathbb{R}^2$ is an open set and $u$ and $v$ are the projections of $f$ onto the real and imaginary axes (i.e. $\langle f,\phi\rangle=\langle u,\phi\rangle+i\langle v,\phi\rangle$). Suppose that $$ \frac{\partial}{\partial \overline{z}}f=0\qquad\text{in } U, $$ where $\frac{\partial}{\partial \overline{z}}=\frac{1}{2}\bigg(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\bigg)$ and the derivatives are taken in the distributional sense. Does it follow that $f$ is holomorphic in the classical sense, i.e. that $f\in C^\infty(U,\mathbb{C})$ and the Cauchy-Riemann equations are satisfied? The obvious idea would be to mollify, obtaining holomorphic functions, and then take the limit. But how can we conclude that the limit is still holomorphic?
{ "source": [ "https://mathoverflow.net/questions/415151", "https://mathoverflow.net", "https://mathoverflow.net/users/351083/" ] }