223,584
I am just beginning to look further into trace formulas and automorphic forms in a quite general setting. For a long time I have noticed that the natural assumption on the group $G$ we work with is that it be reductive. Hence my question: why is "reductive" interesting? Of course I have already seen some formal definitions, like having a trivial unipotent radical, or some discussions about root systems. But that does not seem very satisfactory to me: why is this assumption the one that seems to be the Grail of all generalizations done in this field? Why are those reductive groups so deeply interesting? What are the implied or equivalent properties making them so desirable? (I have heard of the complete reducibility of representations -- is it true? Without other assumptions on the representations? Is it an equivalent or a weaker statement?) Any clue helping me to see what reductive groups really are will be of great use :)
Harish-Chandra told the following (paraphrased) little story that he heard from Chevalley: "When God and the Devil were creating the universe, God gave the Devil a free hand in building things but told him to keep off certain objects to which He would attend. Semisimple groups were among those special items." From the modern perspective the class of (connected) reductive groups is more natural than that of (connected) semisimple groups for the purposes of setting up a robust general theory, due to the fact that Levi factors of parabolics in reductive groups are always reductive but generally are not semisimple when the ambient group is semisimple. However, after some development of the basic theory one learns that reductive groups are just a fattening of semisimple groups via a central torus (e.g., GL$_n$ versus SL$_n$), so Harish-Chandra had no trouble getting by in the semisimple case by just dragging along some central tori here and there in the middle of proofs. So your question appears to be basically the same as asking: why are (connected) semisimple groups such a natural class on which to focus so much effort? It is the rich class of examples from representation theory and number theory that historically motivated subsequent developments and inspired the search for a single uniform perspective with which to understand disparate phenomena in a common manner.

(i) The possibility of proving theorems about reductive groups via inductive techniques, building up from Levi factors in proper parabolic subgroups, is very powerful and characteristic of the beauty of the subject; one cannot fathom this from the mundane definition with (geometric) unipotent radicals, much as I think it is inconceivable from the raw definition of "(connected) compact Lie group" that there should be any rich structure theory at all.
This makes learning how to think about the subject a tricky matter, since it either looks like a grab-bag of explicit examples (which may or may not be to one's liking) or else one has to suspend patience and get somewhat far into the theory to see the rich structure that unifies all of the examples. Harish-Chandra used the inductive method all the time to prove theorems of sweeping generality about all reductive groups without ever needing to compute explicitly beyond SL$_2$, almost as if there is only "one" reductive group, $G$. One can really prove a huge amount about these groups without ever computing with matrices at all. (Not that explicit computations are bad, but one can get much insight without them.) It is only by treating them all in the same way, and maintaining a completely general outlook, that Harish-Chandra's induction could succeed. This led him to make an analogy with finance: "If you don't borrow enough, you have cashflow problems. If you borrow too much, you can't pay the interest."

(ii) There are numerous contexts in which the unified structure theory is the answer to many prayers. Consider "finite simple groups of Lie type": classically these were studied case-by-case, but the general theory of reductive groups provides a completely uniform approach to their internal subgroup structure as well as proofs of simplicity and even formulas for their size. Consider connected compact Lie groups: these are functorially "the same" as connected reductive $\mathbf{R}$-groups that do not contain ${\rm{GL}}_1$ as an $\mathbf{R}$-subgroup, and share many similarities with the theory of connected complex Lie groups with a semisimple Lie algebra, so the algebraic theory over general fields unifies those similarities. For "algebraic" questions about a finite-dimensional representation of an arbitrary group $\Gamma$ on a vector space $V$ over a field $k$ of characteristic 0, one can often replace $\Gamma$ with its Zariski closure $G$ in ${\rm{GL}}(V)$.
If $\Gamma$ is acting irreducibly (or just semi-simply) on $V$ then such a $G$ has reductive identity component! So once it is known that there is a rich structure theory of connected reductive groups then it opens the door to solving problems that don't seem to mention such groups at all (e.g., to prove that if $V$ and $V'$ are semisimple finite-dimensional representations of $\Gamma$ over a field $k$ of characteristic 0 then $V \otimes V'$ is also a semisimple representation).

(iii) It is a general fact that a smooth connected affine group over a field of characteristic 0 is reductive if and only if all of its algebraic representations are completely reducible, but this is utterly non-obvious (in the more interesting "only if" direction) just from the definition of "reductive" via the unipotent radical. Another characterization, due to Borel and Richardson, has perhaps less apparent significance (but is actually quite important): if $G$ is a smooth affine subgroup of ${\rm{GL}}_n$ over a field (any smooth affine group can be realized in that way) then ${\rm{GL}}_n/G$ is affine if and only if $G^0$ is reductive!

(iv) Suppose you are handed a smooth connected affine group $G$ over a perfect field $k$, that you know nothing else about $G$, and you'd like to prove some general theorem. The unipotent radical $U$ has a (canonical) composition series whose successive quotients are vector groups, so $U$ is often "easy" to analyze. What about $G/U$? Tautologically this is reductive, but so what? That is useful only if we know something serious about the structure of reductive groups beyond the opaque definition involving triviality of the unipotent radical. It seems scarcely believable from just the definition and knowledge of some basic examples that there should be a uniform method to analyze the internal structure and basic representation theory of all reductive groups, but it is true.
It is also an undeserved miracle that the structure theory (including the role of root systems) works in an essentially characteristic-free manner, even over arbitrary fields (no need to limit to the perfect case; see the books by Borel, Humphreys, and Springer), as well as over rings such as $\mathbf{Z}$ (see SGA3). And many ideas from the theory of quadratic forms (e.g., mass formulas, Hasse Principle) can be extended and understood better by putting them into a broader group-theoretic framework via the structure theory of reductive groups (going far beyond the case of special orthogonal and spin groups).

(v) The early decades of the theory of automorphic forms exhibited an abundance of similar-looking features, such as for Hilbert modular forms, Siegel modular forms, theta-series in the study of quadratic forms, and so on. But how to unify these into a single framework, treat both Maass forms and holomorphic forms on equal footing, understand the significance of Eisenstein series, etc.? It is precisely the unified perspective on the internal structure of all reductive groups, and the analogous unification of their representation theory in Harish-Chandra's work, that makes them an ideal framework for subsuming all such considerations into a uniform group-theoretic setting. In much contemporary research there is a focus on specific classes of groups (e.g., unitary groups, symplectic similitude groups, etc.) in order to prove a result (even if the dream is to treat all reductive groups at once!), much as in contemporary algebraic geometry there is more focus on the study of special classes of structures (e.g., toric varieties, K3 surfaces, etc.) than was the case in Grothendieck's work. But the uniform inductive approach to the internal structure of all reductive groups is very powerful to keep in the back of one's mind.
So if you see the theory of reductive groups as the study of some list of explicit examples then that is a big mistake (yet part of the appeal is the balance between the concreteness of the building blocks in the classification and the possibility to nonetheless study them in a manner which often avoids needing the classification).
{ "source": [ "https://mathoverflow.net/questions/223584", "https://mathoverflow.net", "https://mathoverflow.net/users/43737/" ] }
223,588
Given a nontrivial left ideal $J$ of a unital $C^*$ algebra $A$, is there a state on $A$ which vanishes on all elements of $J$? (Left or right doesn't matter, just not 2-sided.) The problem came from the idea of a state as evaluation at a 'point' of a noncommutative space. If an ideal corresponds to a vanishing 'set', then can we look at 'points' of that 'set'? I must admit that this problem as I stated it sounds unlikely to be true, but is there another version which might work? I would also be interested in any related references which people could suggest about the theory of 1-sided ideals in operator algebras.
{ "source": [ "https://mathoverflow.net/questions/223588", "https://mathoverflow.net", "https://mathoverflow.net/users/29625/" ] }
224,077
Let $z$ be a fixed complex number with $|z|<1$ and consider the set $$X_z := \Big\{\sum\limits_{i=1}^{\infty} a_i z^i \ \Big|\ a_i\in \{-1,1\}\ \forall i\Big\}.$$ What can be said about the set $M$ of those $z$ with $|z|\lt 1$ such that $X_z \subset \mathbb{C}$ is connected?
There is a related iterated function system with two functions, $f_0(x) = 1+zx$ and $f_1(x) = -1+zx$. $X_z$ is the unique nonempty compact fixed set of this iterated function system. It is sometimes called a generalized dragon set with parameter $z$, and particular values of $z$ produce some well-known fractals called dragons. A relevant result on iterated function systems is that the fixed set $X$ is connected iff it is arcwise connected iff the family of subsets $\{f_i(X)\}$ is connected, which in this case means $f_0(X_z) \cap f_1(X_z)$ is nonempty. (This paper refers to Kigami, Analysis on Fractals, chapter 1 for the result.) So, the set is connected iff we can write $$\begin{eqnarray} 1 + \sum_{i=1}^{\infty} a_i z^i &=& -1 + \sum_{i=1}^\infty a_i' z^i \newline 1 + \sum_{i=1}^{\infty} \frac{a_i-a_i'}{2} z^i &=& 0 \newline 1 + \sum_{i=1}^{\infty} b_i z^i &=& 0 \end{eqnarray}$$ where $b_i = (a_i-a_i')/2 \in \{0,1,-1\}$. In particular, $X_z$ is connected when $z$ is real with $1/2 \le |z| \lt 1$, and when $z$ is a root of a polynomial with coefficients in $\{-1,0,1\}$. The intersection of the closure of those roots with the interior of the disk is $M$. This image shows the nonzero roots of polynomials of degrees up to $9$ with coefficients in $\{-1,0,1\}$, together with the circles of radii $\frac{1}{\sqrt{2}}$ and $1$. The closures of roots of polynomials with restricted coefficients have been studied, and they are quite interesting. In some areas there seems to be a Julia-Mandelbrot correspondence, where the set of roots of small degree near a point resembles the fixed set of the iterated function system at that point. This preprint of Thierry Bousch proves some connectivity properties of the closure, and that the annulus $\frac{1}{\sqrt{2}} \lt |z| \lt 1$ is contained in $M$. So, some of the apparent holes in the picture above close up as the degree increases, including those between the two circles, such as near some roots of unity.
The paper of Calegari et al. mentioned by Nikita Sidorov proves that there are many actual holes in $M$, among other results.
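As a numerical illustration (my own sketch, not part of the thread): for real $z$ with $1/2 \le z < 1$, a greedy choice of $b_i \in \{-1, 0\}$ drives $1 + \sum b_i z^i$ to $0$, witnessing the connectivity criterion above in pure Python.

```python
def residual(z, n_terms=300):
    # Greedily pick digits c_i in {0, 1} so that sum_i c_i z^i -> 1;
    # setting b_i = -c_i then solves 1 + sum_i b_i z^i = 0 with
    # b_i in {-1, 0, 1}, which is the connectivity criterion above.
    # For 1/2 <= z < 1 the residual after n steps is below z^n, since
    # z^{i-1} <= 2 z^i guarantees the greedy step never overshoots.
    r = 1.0
    for i in range(1, n_terms + 1):
        if z ** i <= r:
            r -= z ** i
    return r

# the residual is (numerically) zero, so each such real z lies in M
for z in (0.5, 0.6, 0.75, 0.9):
    assert residual(z) < 1e-9
```

The same greedy idea fails for real $z < 1/2$, where the tail $\sum_{i>n} z^i$ is too small to absorb the remainder; that matches the fact that $X_z$ is then a totally disconnected Cantor set.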
{ "source": [ "https://mathoverflow.net/questions/224077", "https://mathoverflow.net", "https://mathoverflow.net/users/81587/" ] }
224,232
I suspect that the curve $x^5 + y^5=7$ has no $\mathbb Q$ points, and a brief computer search verifies this hypothesis for denominators up to $10^4$. What techniques can be used to show that there are no solutions?
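A sketch of the kind of brute-force search mentioned above (my own, with a much smaller bound than $10^4$): clearing denominators shows a rational point would give integers with $a^5 + b^5 = 7c^5$, which we can scan for directly.

```python
from fractions import Fraction

def iroot5(m):
    # integer fifth root of m >= 0, or None if m is not a perfect 5th power
    r = round(m ** 0.2)
    for b in (r - 1, r, r + 1):  # guard against float rounding
        if b >= 0 and b ** 5 == m:
            return b
    return None

def search(bound=60):
    # A rational point on x^5 + y^5 = 7 can be written x = a/c, y = b/c
    # with a common denominator c >= 1 (clearing denominators forces the
    # two denominators to divide each other), so a^5 + b^5 = 7 c^5.
    hits = []
    for c in range(1, bound + 1):
        for a in range(-bound, bound + 1):
            rest = 7 * c ** 5 - a ** 5
            b = iroot5(abs(rest))
            if b is not None:
                if rest < 0:
                    b = -b
                hits.append((Fraction(a, c), Fraction(b, c)))
    return hits

assert search() == []   # no rational points of small height
```

Of course, such a search can only ever give evidence; the answer below proves there are no rational points at all.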
There is an action of $\mu_5$, the group of fifth roots of unity, on your curve, given by $\zeta \cdot (x,y) = (\zeta x, \zeta^{-1} y)$. The quotient by this group action is the hyperelliptic curve $$C \colon Y^2 = X^5 + \frac{49}{4},$$ the map being given by $(X, Y) = (-xy, x^5 - \frac{7}{2})$. So it is enough to find all the rational points on $C$. $C$ is isomorphic to $C' \colon Y^2 = 4 X^5 + 49$. By a 2-descent, one can show that the Jacobian variety $J'$ of $C'$ has Mordell-Weil rank (at most) 1, and since one finds a point of infinite order ($(x^2 - \frac{10}{9} x - \frac{10}{9}, \frac{200}{27} x - \frac{61}{27})$ in Mumford representation), Chabauty's method is applicable and shows that $\infty$ and $(0, \pm \frac{7}{2})$ are the only rational points on $C$. This implies that there are no rational points on your affine curve. Here is Magma code to check this:

P<x> := PolynomialRing(Rationals());
C := HyperellipticCurve(4*x^5 + 49);
J := Jacobian(C);
RankBound(J); // --> 1
ptsJ := Points(J : Bound := 500); // the first 5 are torsion
Chabauty(ptsJ[6]); // --> { (0 : -7 : 1), (1 : 0 : 0), (0 : 7 : 1) }

(If you do not have direct access to Magma, you can try this out with the online Magma calculator at http://magma.maths.usyd.edu.au/calc/.)
{ "source": [ "https://mathoverflow.net/questions/224232", "https://mathoverflow.net", "https://mathoverflow.net/users/29961/" ] }
224,725
I would like to see a simple example which shows how mathematical notation evolved in time and space. Say, consider the formula $$(x+2)^2=x^2+4{\cdot}x+4.$$ If I understand correctly, Franciscus Vieta would write something like this. (Feel free to correct me.) $\overline{N+2}$ quadr. æqualia $Q+N\,4+4$. ($N$ stands for the unknown and $Q$ for its square.) Can you give me other examples?
For a quite extensive overview with many examples, you might want to check out The origins and development of mathematical notation. I also enjoyed reading Stephen Wolfram's take on Mathematical Notation – past and future, with a great variety of illustrations. The OP asks specifically for the evolution of one formula, "to see the big picture". Here is one example, taken from Math through the Ages:
{ "source": [ "https://mathoverflow.net/questions/224725", "https://mathoverflow.net", "https://mathoverflow.net/users/1441/" ] }
225,138
Consider the Peano axioms. There exists a model for them (namely, the natural numbers with an ordering relation $<$, a binary function $+$, and constant term $0$). Therefore, by the model existence theorem, shouldn't this suffice to prove the consistency of first order arithmetic? Why is Gentzen's proof necessary?
The axioms of first-order arithmetic include the induction schema, which says that, for every formula $A(x)$ with free variable $x$, the conjunction of $A(0)$ and $\forall x\,(A(x)\rightarrow A(x+1))$ implies $\forall x\,A(x)$. This is, of course, a special case of the well-known and basic induction property of the natural numbers that says the same thing for any property $A(x)$ whatsoever, whether or not it's defined by a first-order formula. For anyone who (1) understands the natural numbers well enough to grasp the general induction principle and (2) believes that (first-order) quantifiers over the natural numbers are meaningful so that first-order formulas $A(x)$ really define properties, it is clear that the natural number system satisfies all of the first-order Peano axioms, and therefore those axioms are consistent. A difficulty arises if one adopts a very strong constructivist or finitist viewpoint, doubting item (2) above, i.e., questioning the meaning of first-order quantifiers $\forall z$ and $\exists z$ when $z$ ranges over an infinite set (like $\mathbb N$) so that one can't actually check each individual $z$. From such a viewpoint, the formulas $A(x)$ occurring in the induction schema are gibberish (or close to gibberish, or at least not clear enough to be used in mathematical reasoning), and then the proposed consistency proof collapses. The chief virtue of Gentzen's consistency proof is that it essentially avoids any explicit quantification over infinite sets. It can be formulated in terms of very basic, explicit, computational constructions (technically, in terms of primitive recursive functions and relations). There is, however, a cost for this virtue, namely that one needs an induction principle not just for the usual well-ordering of the natural numbers but for the considerably longer well-ordering $\varepsilon_0$.
Thus, Gentzen uses a much longer well-ordering, but his induction principle is only about primitive recursive properties, not about arbitrary first-order definable properties. There is a trade-off: length of well-ordering versus quantification. I believe the trade-off can be made rather precise, but I don't remember the details. Recall that $\varepsilon_0$ is the limit of the sequence of iterated exponentials $\omega(0)=\omega$ and $\omega(n+1)=\omega^{\omega(n)}$. If we weaken PA by limiting the induction principle to formulas $A(x)$ that can be defined with a fixed number $n$ of quantifiers, then the consistency of this weakened theory can be proved using primitive recursive induction up to $\omega(n)$, as proved by Carnielli and Rathjen in "Hydrae and subsystems of arithmetic". In other words, the trade-off is that an additional quantifier in the induction formulas costs an additional exponential in the ordinal.
{ "source": [ "https://mathoverflow.net/questions/225138", "https://mathoverflow.net", "https://mathoverflow.net/users/83579/" ] }
225,172
Let $n$ be a natural number. For every group $G$ of order $n$, denote by $d(G)$ the number of elements of a smallest generating set of $G$. How large is the maximum possible value of $d(G)$, depending on $n$? If $n$ is a cyclic number, we have $d(G)=1$ for every group of order $n$. For $n=2p$, $p$ an odd prime, there are two groups: the cyclic group and the dihedral group with $2$ generators, so in this case the maximum value is $2$. But I wonder if the maximal value of $d(G)$ can be determined in general, assuming the factorization of $n$ is known. Is the value known for $n=2048$, for example?
By a theorem of Guralnick and Lucchini (which does require CFSG), if each Sylow subgroup of $G$ (ranging over all primes) can be generated by $r$ or fewer elements, then $G$ can be generated by $r+1$ or fewer elements. As noted in comments, if $G$ has a Sylow $p$-subgroup $P$ of order $p^{a}$, then $P$ can be generated by $a$ or fewer elements (and $a$ are needed if and only if $P$ is elementary Abelian). Hence if $|G|$ has prime factorization $p_{1}^{a_{1}}p_{2}^{a_{2}} \ldots p_{r}^{a_{r}}$ with the $p_{i}$ distinct primes and the $a_{i}$ positive integers, then $G$ can be generated by $1 + {\rm max}(a_{i})$ or fewer elements. (The result attributed to Guralnick and Lucchini was not a joint paper, but rather a result proved independently at around the same time. References: R. Guralnick, "A bound for the number of generators of a finite group", Arch. Math. 53 (1989), 521-523; A. Lucchini, "A bound on the number of generators of a finite group", Arch. Math. 53 (1989), 313-317.)
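The resulting bound is easy to compute from the factorization. Here is a small Python sketch of it (my own, not from the answer). Note that it computes only the upper bound $1 + \max(a_i)$ from the theorem, which need not be attained for every $n$; the one refinement is that for a prime power $p^a$ the group is itself a $p$-group, so the bound $a$ applies directly and is attained by the elementary abelian group.

```python
def generator_bound(n):
    # Upper bound on d(G) over all groups G of order n, via the
    # Guralnick-Lucchini theorem: if every Sylow subgroup needs at most
    # r generators, then G needs at most r + 1; a group of order p^a
    # needs at most a generators.
    exps, m, d = [], n, 2
    while d * d <= m:            # trial-division factorization
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            exps.append(e)
        d += 1
    if m > 1:
        exps.append(1)
    if len(exps) == 1:           # prime power: G itself is a p-group,
        return exps[0]           # so the exponent already bounds d(G)
    return 1 + max(exps)

assert generator_bound(2048) == 11   # 2^11: elementary abelian needs 11
assert generator_bound(2 * 7) == 2   # matches the dihedral example
```

So for $n = 2048 = 2^{11}$ the maximum is exactly $11$, answering the question's example.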
{ "source": [ "https://mathoverflow.net/questions/225172", "https://mathoverflow.net", "https://mathoverflow.net/users/49398/" ] }
225,391
This question is probably too vague for experts, but I really don't know how to avoid it. I've read in several places that under mild conditions, a morphism is an effective descent morphism iff the base-change functor it induces is monadic. Now, I don't really know anything about descent and I'm trying to probe around and figure out how to start learning about it. I think of monads in terms of algebraic theories. Descent, on the other hand, is supposedly of geometric nature; a formalism which generalizes familiar gluing over open subsets to a more general setting without a spatial topology. I can't imagine how or why these two notions should be related. What lies at the core of the relationship between monadicity - a seemingly algebraic and concrete notion - and (effective) descent (morphisms)?
Probably the reason "monadicity" gets connected with descent (and the associated terminology of descent theory) is because of its relevance to the question of descent for rings. If you're talking about a morphism of rings $\phi:A\to B$ there is a functor $-\otimes_AB:Mod_A\to Mod_B$. Then you can ask the question: can I recover the category $Mod_A$ from $Mod_B$? This question is answered by knowing whether or not $-\otimes_AB$ is comonadic. In particular, this tells us that $Mod_A$ is equivalent to the category of $\phi^\ast\circ(-\otimes_AB)$-comodules in $Mod_B$. It takes a little bit of work, but as an exercise for really understanding descent, it may be useful to prove that in this case the category of comodules for this comonad is equivalent to the category of $B\otimes_AB$-comodules, where $B\otimes_AB$ is a $B$-coring with comultiplication $$B\otimes_AB\cong B\otimes_AA\otimes_AB\to B\otimes_AB\otimes_AB\cong B\otimes_AB\otimes_B B\otimes_AB.$$ This object is also sometimes called the descent coring, and for reasons that will hopefully become clear in the next paragraph, its comodules are often referred to as descent data. With a bit of finagling one can prove that a $B\otimes_AB$-comodule structure on a $B$-module $N$ (so in the case that $-\otimes_AB$ is comonadic we know that $N\cong M\otimes_AB$ for a unique $A$-module $M$) is the same thing as an isomorphism $$M\otimes_AB \cong M\otimes_B B\otimes_AB\overset{\sim}\to B\otimes_AB\otimes_B M\cong B\otimes_AM$$ satisfying a "cocycle condition". The basic idea is to apply the adjunction $$Hom_B(M,B\otimes_AM)\cong Hom_{B\otimes_AB}(M\otimes_AB,B\otimes_AM).$$ And so this is the thing that lets us take a geometric perspective.
If we consider the equivalent problem of whether or not we can recover the category of quasicoherent sheaves on $Spec(A)$ from the category of quasicoherent sheaves on $Spec(B)$, with adjunction $\phi^\ast:QC_{Spec(A)}\rightleftarrows QC_{Spec(B)}:\phi_*$ then the descent data (as defined above for modules) for a sheaf $\mathcal{F}$ on $Spec(B)$ is an isomorphism between the two ways of pulling back $\mathcal{F}$ to $Spec(B)\times_{Spec(A)}Spec(B).$ This, as you may recall, is precisely the way that we typically state "gluing" conditions. In other words, if a sheaf supports such an isomorphism, it lives in the coequalizer (in categories, so necessarily the coequalizer computed as a 2-colimit) of the diagram $$QC_{Spec(B)}\leftleftarrows QC_{Spec(B)\times_{Spec(A)} Spec(B)}\Lleftarrow QC_{Spec(B)\times_{Spec(A)} Spec(B)\times_{Spec(A)} Spec(B)}.$$ The only further thing maybe to mention is that usually geometric-type descent is stated in terms of covers $X\overset{f_i}\leftarrow\{U_i\}$ but we can do everything above by just considering $X\overset{\coprod f_i}\leftarrow\coprod U_i.$ So now whenever you've got any kind of monad or comonad, you can ask about "descent" for that monad, which is really just a question of whether or not you can recover some category from some other category of (co)monadic (co)modules.
{ "source": [ "https://mathoverflow.net/questions/225391", "https://mathoverflow.net", "https://mathoverflow.net/users/69037/" ] }
225,445
Sophisticated mathematical concepts typically shed light on sophisticated mathematics. But in a few cases they also apply to elementary mathematics in an interesting way. I find such examples particularly enlightening, and would love to have a list of them. To make the question well-defined, my (arbitrary) cutoff for "elementary" is USA K-12. By "sophisticated" I mean 20th-21st century research mathematics, and the applications should be nontrivial enough to have appeared in research-level publications. I list two examples I know. The carrying operation in base 10 addition involves the computation of a group 2-cocycle, as explained in this MO answer and in this ncatlab page following (Daniel C. Isaksen, A cohomological viewpoint on elementary school arithmetic, Amer. Math. Monthly (109), no. 9 (2002), p. 796--805). See also the $\mathtt{sci.math}$ posts by James Dolan back in January 1994 ($\rm\LaTeX$ version). The passage from the poset of non-negative integers to the monoid of their differences (i.e., working with expressions of the form $x-y$ where $x$ and $y$ are numbers) is the forgetful functor from a comma category $0/\mathcal{C}$ to $\mathcal{C}$ where $\mathcal{C}$ is a monoid with unique object $0$, as described by Lawvere in Section 4 of Taking Categories Seriously. Question: What are other examples of sophisticated mathematics elucidating elementary mathematical ideas and concepts? Edit: One example per answer, please. Also, if you have examples which are not research-paper content but are close then that's still interesting for me, but less so as it gets further from research level. The most interesting examples for me are those with citations to research papers, as in the listed examples.
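To make the first example concrete (my own sketch, not part of the question): the carry function on base-10 digits really does satisfy the 2-cocycle identity $c(h,k) + c(g, h+k) = c(g,h) + c(g+h, k)$, with the inner sums reduced mod 10, which we can verify exhaustively.

```python
def carry(a, b):
    # The carry produced when adding two base-10 digits: this is the
    # 2-cocycle c: Z/10 x Z/10 -> Z attached to the section 0..9 of
    # the quotient Z -> Z/10.
    return (a + b) // 10   # 0 or 1 for digits a, b in 0..9

# exhaustive check of the cocycle identity
for g in range(10):
    for h in range(10):
        for k in range(10):
            lhs = carry(h, k) + carry(g, (h + k) % 10)
            rhs = carry(g, h) + carry((g + h) % 10, k)
            assert lhs == rhs
```

Both sides of the identity equal the total carry out of the ones column when adding the three digits $g$, $h$, $k$, which is why the check passes: the cocycle condition is just associativity of addition seen digit-by-digit.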
The angle addition formula $\tan(\alpha + \beta) = \frac{\tan(\alpha) + \tan(\beta)}{1 - \tan(\alpha) \tan(\beta)}$ for tangent gives one of the simplest nontrivial examples of a formal group law, namely $F(x, y) = \frac{x + y}{1 - xy}$. A variation corresponding to the hyperbolic tangent governs the addition of velocities in special relativity, and a further variation is related, via the dictionary between formal group laws and genera, to the Hirzebruch $\chi_y$ genus. See, for example, this paper.
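A quick numerical sanity check of both addition laws (a sketch using only the standard library; the function names are mine):

```python
import math

def F(x, y):
    # the tangent formal group law F(x, y) = (x + y) / (1 - x*y)
    return (x + y) / (1 - x * y)

def G(x, y):
    # the hyperbolic-tangent variant, i.e. relativistic velocity addition
    # for velocities measured as fractions of the speed of light
    return (x + y) / (1 + x * y)

a, b = 0.3, 0.4
assert math.isclose(math.tan(a + b), F(math.tan(a), math.tan(b)))
assert math.isclose(math.tanh(a + b), G(math.tanh(a), math.tanh(b)))
```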
{ "source": [ "https://mathoverflow.net/questions/225445", "https://mathoverflow.net", "https://mathoverflow.net/users/2051/" ] }
225,459
Definitions: Lagrange's theorem implies that for each prime $p$, the factors of $(p-1)!$ can be arranged in unequal pairs, with the exception of $\pm 1$, where the product of each pair $\equiv 1 \pmod p$. See Wiki article on Wilson's theorem. From the example in the link above, for $p=11$ we have $$(11-1)!=[(1\cdot10)]\cdot[(2\cdot6)(3\cdot4)(5\cdot9)(7\cdot8)] \equiv [-1]\cdot[1\cdot1\cdot1\cdot1] \equiv -1 \pmod{11}$$ Let the products of the pairs that $\equiv 1 \pmod p$ be the multiset $A_p$, and $A_{p_n}$ the multiset for the $n$th prime. For the above example then, $A_{p_5}=\{(2\cdot6),(3\cdot4),(5\cdot9),(7\cdot8)\}=\{12,12,45,56\}$.

Conjecture: $$\lim\limits_{n\rightarrow\infty}\dfrac{\sum\limits_{k \in A_{p_n}}(k-1)}{(p_n)^3}\approx\frac18$$ where $p_n$ is the $n$th prime.

Examples: For $p=11$ we have $$\dfrac{11+11+44+55}{11^3}=\dfrac{1}{11}$$ For $p=997$ we have $$\dfrac{123218233}{997^3}=\dfrac{123218233}{991026973}$$

Comments: As @YCor noted below, the $-1$ in the $k-1$ can be removed, since its contribution tends to $0$. The conjecture can therefore be simplified to $$\lim\limits_{n\rightarrow\infty}\dfrac{\sum\limits_{k \in A_{p_n}}k}{(p_n)^3}\approx\frac18$$ I have no idea whether the above statement is correct, or how to go about trying to find a proof. Any comments on any of the above are most welcome.
For an integer $n$ with $1\leq n\leq p-1$, let $n^{-1}$ be the inverse of $n$ modulo $p$. It follows from Weil's bound on Kloosterman sums that for every $\epsilon>0$ the set $\{n: xp\leq n\leq (x+\epsilon) p, yp\leq n^{-1}<(y+\epsilon) p\}$ has cardinality $\epsilon^2p+\mathcal{O}(\sqrt{p}\log^2 p)$. Hence up to a relative error tending to 0 the sum in question can be replaced by an integral, that is $$ \sum_{n=1}^{p-1} n\cdot n^{-1} \sim p^3\int_0^1\int_0^1 xy\;dx\;dy = \frac{p^3}{4}. $$ (Note that the $\cdot$ on the left hand side refers to the multiplication of integers, not to modular multiplication). Here each pair $(a,b)$ with $ab\equiv 1\pmod{p}$ is counted twice, with the exception of $(1,1)$ and $(-1, -1)$, which contribute less than $p^2$. Hence up to an error $\mathcal{O}(p^2)$ the left hand side of the above expression is twice $\sum_{k\in A} k$, which proves your claim.
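The claim is easy to test numerically (a sketch in Python 3.8+, which provides modular inverses via the three-argument `pow`; the function name is mine):

```python
def pairs_sum(p):
    # sum of a * a^{-1} (as ordinary integers) over the multiset A_p of
    # unordered pairs {a, a^{-1}} with a * a^{-1} = 1 (mod p) and a != a^{-1}
    total = 0
    for a in range(2, p - 1):          # 1 and p-1 are self-paired; skip them
        inv = pow(a, -1, p)            # modular inverse, Python 3.8+
        if a < inv:                    # count each pair exactly once
            total += a * inv
    return total

assert pairs_sum(11) == 12 + 12 + 45 + 56   # the p = 11 example above
print(pairs_sum(997) / 997**3)              # close to 1/8, as predicted
```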
{ "source": [ "https://mathoverflow.net/questions/225459", "https://mathoverflow.net", "https://mathoverflow.net/users/45057/" ] }
225,814
I was curious about a physics question which I thought might be suitable for mathoverflow. I looked at the answer to this question, but it's not what I'm looking for. Basically, classical mechanics and the $\hbar \to 0$ limit of quantum mechanics study the action of the same algebra on very different representations. I'm curious whether there is a good physical explanation for why as you degenerate to the $\hbar \to 0$ limit the algebra of observables degenerates to the same Poisson algebra appearing in classical mechanics, but the relevant representation changes significantly. Specifically, (non-relativistically) classical systems have evolution $\frac{d}{dt} \rho = \{H, \rho\}$ and quantum systems have evolution $\frac{d}{dt}\psi = [H, \psi]$ (up to some constants). So far these look analogous, but in classical mechanics, the density function $\rho$ is itself a function on the phase space (i.e. a vector of the regular representation), whereas in quantum mechanics, $\psi$ is just a function (or something) on the $x_i$ themselves - i.e. a vector of a representation of "square-root dimension" (half-dimensional singular support)! My guess is that this is a many-particle phenomenon, and a fully honest answer to "why do we observe classical mechanics" will probably involve a serious study of decoherence and questions of "what is observation", etc. But I'm curious if there is a heuristic way to see why the algebra that's acting is the same (and in what way the representation is allowed to change: e.g., is there some embedding of the regular representation in a tensor product of irreducible ones?)
It is perhaps helpful to distinguish between four types of mechanics here:

1. Pure-state classical mechanics. Here, the mechanics are classical, and the system is described by a single point $(q,p)$ in phase space. This point evolves via Hamilton's equations of motion $\partial_t q = \frac{\partial H}{\partial p}; \partial_t p = - \frac{\partial H}{\partial q}$.

2. Mixed-state classical mechanics. Here, the mechanics are classical, and the system is described by a probability density function $\rho(q,p)$ on phase space (this density may be a generalised function, e.g. a Dirac delta, rather than a classical function). This density function evolves via the advection equation $\partial_t \rho = \partial_p ( \rho \partial_q H ) - \partial_q (\rho \partial_p H ) = \{H,\rho\}$.

3. Pure-state quantum mechanics. Here, the mechanics are quantum, and the system is described by a wave function $|\psi\rangle$ in a Hilbert space. This wave function evolves via Schrödinger's equation of motion $\partial_t |\psi \rangle = \frac{1}{i\hbar} H |\psi\rangle$.

4. Mixed-state quantum mechanics. Here, the mechanics are quantum, and the system is described by a density matrix $\rho$ (a positive semi-definite trace one operator on a Hilbert space). This density matrix evolves by the von Neumann evolution equation $\partial_t \rho = \frac{1}{i\hbar} [H,\rho]$.

In both the classical and quantum regimes, a mixed state can be viewed as a convex (or classical) superposition of pure states (with a pure classical state $(q,p)$ identified with the Dirac probability density function $\delta_{(q,p)}$, and a pure quantum state $|\psi \rangle$ identified with a pure density matrix $|\psi \rangle \langle \psi|$). So in principle the pure-state mechanics describes the mixed-state mechanics completely (albeit with the caveat that in the quantum case, in contrast to the classical case, the decomposition of a mixed state as a superposition of pure states is non-unique).
However, the correspondence principle is clearest to see at the mixed state level, i.e. to compare 2. with 4. in the semiclassical limit $\hbar \to 0$, rather than comparing 1. with 3.. Indeed, any density matrix $\rho$ has a Wigner transform $\tilde \rho$, which is a function on phase space defined via duality as $\int \tilde \rho(q,p) A(q,p)\ dq dp = \hbox{tr}( \rho \hbox{Op}(A) )$ for any classical observable $A$, where $\hbox{Op}(A)$ is the (Weyl) quantisation of $A$ (i.e. the Wigner transform is the adjoint of the quantisation operator). This Wigner transform $\tilde \rho$ will usually not be non-negative, and hence will not be a classical probability density function, but in semiclassical regimes it is often the case that $\tilde \rho$ will tend (in a suitable weak sense) to a classical probability density when $\hbar \to 0$, which will then evolve by the classical advection equation. This is the dual to the assertion that the quantum Heisenberg equation $\partial_t A = \frac{i}{\hbar} [H,A]$ for the evolution of quantum observables converges to the classical Poisson equation $\partial_t A = -\{ H,A\}$ for the evolution of classical observables in the semiclassical limit $\hbar \to 0$. There is still a correspondence at the level of 1. and 3., but it is a bit trickier to see; one has to restrict to things like "gaussian beam" type solutions $|\psi \rangle$ to the Schrödinger equation that are well localised in both position and momentum space, in order to get a classical limit that is a pure state rather than a mixed state. (An arbitrary wave function would instead get a "phase space portrait" which in the semiclassical limit becomes [assuming some equicontinuity and tightness, and possibly after passing to a subsequence, as noted in comments] a mixed state from 2., rather than a pure state from 1.).
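As a small numerical illustration of the mixed-state quantum evolution (type 4 above), one can check that the von Neumann flow preserves the trace and Hermiticity of $\rho$ (a sketch with NumPy; the qubit Hamiltonian and initial density matrix are arbitrary choices of mine):

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])                   # a Hermitian Hamiltonian
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # a 2x2 density matrix

dt, steps = 1e-4, 10_000
for _ in range(steps):
    # von Neumann equation: d(rho)/dt = (1 / (i * hbar)) * [H, rho]
    rho = rho + dt * (H @ rho - rho @ H) / (1j * hbar)

assert abs(np.trace(rho) - 1.0) < 1e-8            # trace one is preserved
assert np.allclose(rho, rho.conj().T)             # Hermiticity is preserved
```

The trace is conserved because the commutator is traceless at every step; this is the quantum analogue of the advection equation conserving total probability.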
{ "source": [ "https://mathoverflow.net/questions/225814", "https://mathoverflow.net", "https://mathoverflow.net/users/7108/" ] }
225,903
Nature just published a paper by Cubitt, Perez-Garcia and Wolf titled Undecidability of the Spectral Gap; there is an extended version on the arXiv which is 146 pages long. Here is a quote from the abstract: "Many challenging open problems, such as the Haldane conjecture, the question of the existence of gapped topological spin liquid phases, and the Yang–Mills gap conjecture, concern spectral gaps. These and other problems are particular cases of the general spectral gap problem: given the Hamiltonian of a quantum many-body system, is it gapped or gapless? Here we prove that this is an undecidable problem. Specifically, we construct families of quantum spin systems on a two-dimensional lattice with translationally invariant, nearest-neighbour interactions, for which the spectral gap problem is undecidable". I am curious about the undecidable part. The abstract says "our result implies that there exists no algorithm to determine whether an arbitrary model is gapped or gapless, and that there exist models for which the presence or absence of a spectral gap is independent of the axioms of mathematics". "Axioms of mathematics" is kind of vague, so in the extended version they phrase it in a Gödelian manner: "Our results imply that for any consistent, recursive axiomatisation of mathematics, there exist specific Hamiltonians for which the presence or absence of a spectral gap is independent of the axioms". But still: an axiomatization of which mathematics? How much mathematics do they need to construct their Hamiltonians? Is it real analysis? ZF? ZFC? I can't figure it out even from their theorem statements. Is this a mathematical proof or something at the "physical level of rigor"? If so, does it produce "concrete" undecidable statements, or are these Hamiltonians as obscure as "I am unprovable"? Does it represent a new way of proving independence results compared to forcing, etc.? In other words, is it an advance on Gödel sentences and the continuum hypothesis?
EDIT: Cubitt gave an interview where he commented on the nature of the result informally: "It's possible for particular cases of a problem to be solvable even when the general problem is undecidable, so someone may yet win the coveted $1m prize... The reason this problem is impossible to solve in general is because models at this level exhibit extremely bizarre behaviour that essentially defeats any attempt to analyse them... For example, our results show that adding even a single particle to a lump of matter, however large, could in principle dramatically change its properties".
I haven't read the paper carefully, but this appears to be a standard undecidability result, of the sort of which there are dozens if not hundreds in the literature, of the same ilk as the undecidability of Wang tilings, the undecidability of the existence of solutions to Diophantine equations, the word problem for groups, and many others. It's a formal mathematical proof which shows (Theorem 3 in the extended version) that there is a family of Hamiltonians $H^{\Lambda(L)}(n)$ such that the set of $n$ such that $H^{\Lambda(L)}(n)$ is "gapped" is a complete computably enumerable set. The connection to things like axiomatizations of math is then completely standard---any correct and consistent system of axioms cannot prove "$H^{\Lambda(L)}(n)$ is not gapped" for all $n$ for which this is true.
{ "source": [ "https://mathoverflow.net/questions/225903", "https://mathoverflow.net", "https://mathoverflow.net/users/51484/" ] }
225,910
If a topological group $G$ is also a topological manifold, it is well-known (Hilbert's 5th Problem) that there is a unique analytic structure making it a Lie group. However, for a compact Lie group $G$, do we know if the underlying topological manifold supports any other exotic smooth structures (necessarily not a Lie group)? Even a more specific example: Up to diffeomorphism, we have $SO(8)=SO(7)\times S^7$. If we replace the smooth structure on $S^7$ by an exotic one, do we get an exotic smooth structure on $SO(8)$? Thank you!
{ "source": [ "https://mathoverflow.net/questions/225910", "https://mathoverflow.net", "https://mathoverflow.net/users/41734/" ] }
226,086
Let $M^3$ be an oriented 3-manifold, and let $f:M^3\looparrowright \mathbb R^4$ be a codimension one immersion. Is it possible to find a small deformation of the composite map $$ M^3 \to \mathbb R^4 \to \mathbb R^6 $$ which is an embedding? (I expect the answer to be "no", and so I'm mostly interested in the method of proof.)
Quoting Theorem F of this paper by Ulrich Koschorke: For any self-transverse immersion $j$ of a closed 3-manifold $M$ into $\mathbb{R}^4$ the following integers are equal modulo 2:

1. the Euler number of the surface of double points of $j$;
2. the number of quadruple points of $j$;
3. the number of double points of any self-transverse immersion $M\looparrowright\mathbb{R}^6$ which is regularly homotopic to $M \stackrel{j}{\looparrowright}\mathbb{R}^4\subseteq\mathbb{R}^6$.

Moreover, there exists an oriented 3-manifold immersed into $\mathbb{R}^4$ such that all these numbers are odd. An explicit geometric construction of the required immersion is given in the final section.
{ "source": [ "https://mathoverflow.net/questions/226086", "https://mathoverflow.net", "https://mathoverflow.net/users/5690/" ] }
226,093
As a generalisation of the equation of Fermat, one can ask for rational solutions of $X^n+Y^n+Z^n=1$ (or almost equivalently integer solutions of $X^n+Y^n+Z^n=T^n$). Contrary to the case of Fermat, the case where $n=3$ has infinitely many solutions, because the surface is rational. For $n=4$, we get a K3 surface and for $n\ge 5$ the surface should have finitely many points or the points should be contained in finitely many curves since it is of general type (Bombieri–Lang conjecture). But are there some non-trivial solutions for $n\ge 5$ (and $n=4$)? Do we know if there are finitely many solutions for $n\ge 5$? I guess that it should be classical, but I did not find it on this site or online, after googling "Fermat, surface, rational solutions", etc.
It has been conjectured by Euler that this equation has no solutions in positive integers when $n\geq 4$. When $n=4$, this was disproved by Elkies in the paper [Elkies, On $A^4+B^4+C^4=D^4$] in a very strong way: he proves that the rational points of this K3 surface are dense in the real points for the euclidean topology. When $n\geq 5$ is odd, your surface contains lines, for instance the line $Z-T=X+Y=0$. Consequently, it has infinitely many rational points. Of course, this does not disprove Euler's conjecture, that required positive integers.
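The line can also be verified numerically for any odd $n$ (a trivial sketch; the parametrization $(X,Y,Z,T)=(t,-t,s,s)$ is the obvious one):

```python
# For odd n the line Z - T = X + Y = 0 lies on X^n + Y^n + Z^n = T^n:
# with (X, Y, Z, T) = (t, -t, s, s) we get t^n + (-t)^n + s^n = s^n.
for n in (5, 7, 9):
    for t in range(-3, 4):
        for s in range(-3, 4):
            X, Y, Z, T = t, -t, s, s
            assert X**n + Y**n + Z**n == T**n
```

Note that these points are never solutions in positive integers, which is why the line is compatible with Euler's conjecture.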
{ "source": [ "https://mathoverflow.net/questions/226093", "https://mathoverflow.net", "https://mathoverflow.net/users/23758/" ] }
226,169
This question does not concern the comparative merits of standard (SA) and nonstandard (NSA) analysis but rather a comparison of different approaches to NSA. What are the concrete advantages of the abstract approaches to NSA (e.g., via the compactness theorem), as compared to the more concrete approach using ultrapowers? One can name generic reasons such as naturality, functoriality, categoricity, etc., but I am hoping for a concrete illustration of why a more abstract approach may be advantageous for understanding NSA concepts and/or proving theorems. Note 1. One of the existing answers provided a bit of information about advantages of the more abstract approach in terms of saturation. I would appreciate an elaboration of this if possible, in terms of a concrete application of saturation. Note 2. These issues are explored in more detail in this 2017 publication in Real Analysis Exchange .
To my way of thinking, there are at least three distinct perspectives one can naturally take on when undertaking work in nonstandard analysis. In addition, each of these perspectives can be varied on two other dimensions, independently. Those dimensions are, first, the order of nonstandardness (whether one wants nonstandardness only for objects, or also for functions and predicates, or also for sets of functions and sets of those and so on), and second, how many levels of standardness and nonstandardness one desires. Let me describe the three views I have in mind, and give them names, and then discuss how they relate to one another.

Classical model-construction perspective. In this approach, one thinks of the nonstandard universe as the result of an explicit construction, such as an ultrapower construction. In the most basic instance, one has the standard real field structure $\newcommand\R{\mathbb{R}}\newcommand\Z{\mathbb{Z}}\langle\R,+,\cdot,0,1,\Z\rangle$, and you perform the ultrapower construction with respect to a fixed ultrafilter on the natural numbers (or on some other set, if this was desirable). In time, one is led to want more structure in the pre-ultrapower model, so as to be able to express more ideas, which will each have nonstandard counterparts. And so very soon one will have constants for every real, a predicate for the integers $\Z$, or indeed for every subset of $\mathbb{R}$ and a function symbol for every function on the reals, and so on. Before long, one wants nonstandard analogues of the power set $P(\R)$ and higher iterates. In the end, what one realizes is that one might as well take the ultrapower of the entire set-theoretic universe $V\to V^{\omega}/U$, which amounts to doing nonstandard analysis with second-order logic, third-order, $\alpha$-order for every ordinal $\alpha$.
One then has the copy of the standard universe $V$ inside the nonstandard realm $V^*$, which one analyzes and understands by means of the ultrapower construction itself. Some applications of nonstandard analysis have required one to take not just a single ultrapower, but an iterated ultrapower construction along a linear order. Such an ultrapower construction gives rise to many levels of nonstandardness, and this is sometimes useful. Ultimately, as one adds additional construction methods, this amounts as Terry Tao mentioned to just adopting all of model theory as one toolkit. One will want to employ advanced saturation properties or embeddings or the standard system and so on. There is a very well-developed theory of models of arithmetic that uses quite advanced methods. To give a sample consequence of saturation: every infinite graph, no matter how large, arises as an induced subgraph of a nonstandard-finite graph in any sufficiently saturated model of nonstandard analysis. This often allows you to undertake finitary constructions with infinite graphs, modulo the move to a nonstandard context.

Standard Axiomatic approach. Most applications of nonstandard analysis, however, do not rely on the details of the ultrapower or iterated ultrapower constructions, and so it is often thought worthwhile to isolate the general principles that make the nonstandard arguments succeed. Thus, one writes down the axioms of the situation. In the basic case, one has the standard structure $\R$ and so on, perhaps with constants for every real (and for all subsets and functions in the higher-order cases), with a map to the nonstandard structure $\R^*$, so that every real number $a$ has its nonstandard version $a^*$ and every function $f$ on the reals has its nonstandard version $f^*$.
Typically, the main axioms would include the transfer principle, which asserts that any property expressible in the language of the original structure holds in the standard universe just in case it holds of the nonstandard analogues of those objects in the nonstandard realm. The transfer principle amounts precisely to the elementarity of the map $a\mapsto a^*$ from standard objects to their nonstandard analogues. One often also wants a saturation principle, expressing that any sufficiently realizable type is actually realized in the nonstandard model, and this just axiomatizes the saturation properties of the ultrapower. Sometimes one wants more saturation than one would get from an ultrapower on the natural numbers, but one can still achieve this by larger ultrapowers or other model-theoretic methods. Essentially the same axiomatic approach works with the high-order approach, where one has a nonstandard version of every set-theoretic object, and a map $V\to V^*$, with nonstandard structures of any order. And similarly, one can axiomatize the features one wants to use in the iterated ultrapower case, with various levels of standardness. As with most mathematical situations where one has a construction approach and an axiomatic approach, it is usually thought to be better to argue from the axioms, when possible, than to use details of the construction. And most applications of nonstandard analysis that I have seen can be undertaken using only the usual nonstandard axioms.

Nonstandard Axiomatic approach. This is a more radical perspective, which makes a foundational change in how one thinks about one's mathematical ontology. Namely, rather than thinking of the standard structures as having analogues inside a nonstandard world, one essentially thinks of the nonstandard world as the real world, with "standardness" structures picking out parts of it.
So one has the real numbers including both infinite and infinitesimal reals, and one can say when two finite real numbers have the same standard part and so on. With this perspective, we think of the real real numbers as what on the other perspective would be the nonstandard reals, and then we have a predicate on that, which amounts to the range of the star map in the other approach. So some real numbers are standard, and some functions are standard and so on. One sometimes sees this kind of perspective used in arguments of finite combinatorics, where one casually considers the case of an infinite integer or an infinitesimal rational. (I have a colleague who sometimes talks this way.) That kind of talk may seem alien for someone not used to the perspective, but for those that adopt the perspective it is very useful. In a sense, one goes whole-hog into the nonstandard realm. More extreme versions of this idea adopt many levels of standardness and nonstandardness, extended to all orders. Karel Hrbáček has a very well-developed theory like this for nonstandard set theory, with an infinitely deep hierarchy of levels of standardness. He spoke on this last year at the CUNY set theory seminar, and I refer you to his articles on this topic. In Karel's system, one doesn't start with a standard universe and go up to the nonstandard universe, but rather, one starts with the full universe (which is fundamentally nonstandard) and goes down to deeper and deeper levels of standardness. Every model of ZFC, he proved, is the standard universe inside another model of the nonstandard set theories he considers.

Ultimately, my view is that the choice between the perspectives I mentioned is a matter of taste, and that in principle any achievement of nonstandard analysis that can be undertaken with one of the perspectives has natural analogues that can be expressed with the other perspectives.
{ "source": [ "https://mathoverflow.net/questions/226169", "https://mathoverflow.net", "https://mathoverflow.net/users/28128/" ] }
226,277
Y. Sergeyev developed a positional system for representing infinite numbers using a basic unit called a "grossone", as well as what he calls an "infinity computer". The mathematical value of this seems dubious but numerous articles have already appeared in refereed research journals. Thus, there are currently 23 such articles in mathscinet not to speak of numerous lectures in conferences. In a comment accessible here Sergeyev asserts that "Levi-Civita numbers are built using a generic infinitesimal $\varepsilon$ ... whereas our numerical computations with finite quantities are concrete and not generic." Here apparently "finite" is a misprint and should be "infinite". How is this comment on the difference between Sergeyev's grossone on the one hand, and the Levi-Civita unit on the other, to be understood? In a 2013 article, Sergeyev compares his grossone to Levi-Civita in the following terms in footnote 5:

At the first glance the numerals (7) can remind numbers from the Levi-Civita field (see [20]) that is a very interesting and important precedent of algebraic manipulations with infinities and infinitesimals. However, the two mathematical objects have several crucial differences. They have been introduced for different purposes by using two mathematical languages having different accuracies and on the basis of different methodological foundations. In fact, Levi-Civita does not discuss the distinction between numbers and numerals. His numbers have neither cardinal nor ordinal properties; they are build using a generic infinitesimal and only its rational powers are allowed; he uses symbol 1 in his construction; there is no any numeral system that would allow one to assign numerical values to these numbers; it is not explained how it would be possible to pass from a generic infinitesimal h to a concrete one (see also the discussion above on the distinction between numbers and numerals).
In no way the said above should be considered as a criticism with respect to results of Levi-Civita. The above discussion has been introduced in this text just to underline that we are in front of two different mathematical tools that should be used in different mathematical contexts.

It would be interesting to have a specialist in numerical analysis comment on Sergeyev's use of the term "numerical" to explain the difference between his grossone and an infinite element of the Levi-Civita field. Sergeyev claims that his grossone has the properties of both ordinal and cardinal numbers. Does he give a definition that would ensure such properties, or is this claim merely a declarative pronouncement?

Following the publication of an article by Sergeyev in EMS Surveys in Mathematical Sciences, the editors published the following clarification:

Statement of the editorial board. We deeply regret that this article appears in this issue of the EMS Surveys in Mathematical Sciences. It was a serious mistake to accept it for publication. Owing to an unfortunate error, the entire processing of the paper, including the decision to accept it, took place without the editorial board being aware of what was happening. The editorial board unanimously dissociates itself from this decision. It is not representative of the very high level that we expect to see in our journal, which can be assessed from all other papers that we have published. Both editors-in-chief have assumed responsibility for these mistakes and resigned from their position. Having said that, we add that this journal would not exist without their dedication and years of hard work, and we wish to register our thanks to them.

An interesting viewpoint of a computer scientist is developed here (as well as a related discussion of legal issues in the comments). The unanimous statement of the EMS Surveys editors is now fleshed out in the Zentralblatt review and the MathSciNet review also available here.

Question.
Does Sergeyev's grossone admit a consistent interpretation? Lolli in his answer argues that it does; others have argued otherwise.
I do not understand what the bounty on this question is for, as it seems to me that the other answers were already rather devastating. Here is a semi-reasoned technical answer. According to G. Lolli (the paper you cite) "Sergeyev is wary of the axiomatic method because he thinks that by adopting it we would be tied to the expressive power of a language in the description of mathematical objects and concepts." Serious mathematics requires serious adherence to the generally accepted standards of mathematics. Perhaps prof. Sergeyev thinks that he can surpass the limitations of formalization by taking a non-standard route to mathematics, but I would rather suspect that route will take him backwards in time and much closer to (a bad kind of) philosophy than most mathematicians would feel comfortable with.

Regarding the formalization by G. Lolli, I see no difference between what is done in the paper and non-standard arithmetic. A grossone $G$ is axiomatized by the infinitely many axioms $0 < G$, $1 < G$, $2 < G$, ... which is exactly how one can get non-standard arithmetic going. The paper does not even mention non-standard arithmetic. This is what you get for publishing logic papers in applied math journals. So, it looks to me that grossones are a moving target with unclear and confused mathematical content, until one actually pins them down with a precise mathematical definition, only to find out they are not new at all.

Update: it was pointed out that none of the answers has commented on the computational part of the grossone theory. I had a look at three papers, found on the infinity computer web site:

1. The recommended paper to start with is Sergeyev Ya.D. (2010) Lagrange Lecture: Methodology of numerical computations with infinities and infinitesimals, Rendiconti del Seminario Matematico dell'Università e del Politecnico di Torino, 68(2), 95–113. It has a lot of informal descriptions and philosophy, some illustrative examples, but nothing that would actually describe a revolutionary new way of computing. Rather, it looks like ideas that could possibly lead to re-invention of non-standard arithmetic.

2. Sergeyev Ya.D. (2016) The exact (up to infinitesimals) infinite perimeter of the Koch snowflake and its finite area, Communications in Nonlinear Science and Numerical Simulation, 31(1–3):21–29. I tried this paper because the title promised that there would be a concrete result in it. There is, of course, but again the theory of computation underlying the method is not properly explained. There are examples and analogies which again hint at something like non-standard arithmetic.

3. Sergeyev Ya.D. (2015) Computations with grossone-based infinities, C.S. Calude, M.J. Dinneen (Eds.), Proc. of the 14th International Conference "Unconventional Computation and Natural Computation", Lecture Notes in Computer Science, vol. 9252, Springer, 89-106.

A pattern starts to emerge. Every paper contains a very long introduction to the philosophy and ideas about grossones, supported by illustrative examples, but there is no clear explanation of what is going on. All three papers present an equational system for grossones, i.e., things like associativity, commutativity, and other equations one would expect. A smart person can use these to simplify expressions and thereby "compute" results. But a computational model requires a description of a general procedure for performing computations, whatever it is. Is there a method for normalizing expressions involving grossones? Or perhaps an abstract machine one can run? Or something else? I suppose the infinity computer is hiding in the patent. We shall never know. And I have now wasted more time on this than 50 points of bounty are worth.
If someone can point me at an actual description of a computational model (whether it be "axiomatic" or not) which is not composed of a series of analogies and good ideas, I might take another look.
{ "source": [ "https://mathoverflow.net/questions/226277", "https://mathoverflow.net", "https://mathoverflow.net/users/28128/" ] }
226,343
Let $A$ be a commutative ring, and $L, M, N$ be $A$-modules. Then is it true that $$\text{Hom}_A (L, M)\otimes_A N \cong \text{Hom}_A (L, M\otimes_A N)$$ as $A$-modules? (Note that there is a natural morphism from the left to the right, I think it's not easy to check it is injective or surjective, but I didn't really do it; also, I think elements of the RHS are hard to decompose, so I don't hope for a (natural) arrow in the opposite direction.) If this is not true, how about we assume that $A$ is a local ring and $N$ is a flat $A$-module or even a flat local $A$-algebra? Could anyone give some hint or a proof, or a counterexample? Other appropriate conditions that guarantee the isomorphism are appreciated. $\textbf{Edit:}$ My main concern is the case when $A=\mathscr{O}_{\mathbb{C}^n,0}=M, N=\mathscr{E}_{\mathbb{C}^n,0}$, and $L$ is the stalk at $0\in \mathbb{C}^n$ of some coherent $\mathscr{O}_{\mathbb{C}^n}$-module, where $\mathscr{O}_{\mathbb{C}^n}$ and $\mathscr{E}_{\mathbb{C}^n}$ mean the sheaves of holomorphic functions and complex-valued smooth functions on $\mathbb{C}^n$ respectively. The flatness of $\mathscr{E}_{\mathbb{C}^n}$ over $\mathscr{O}_{\mathbb{C}^n}$ can be found here (Theorem 7.2.1), which cites Bernard Malgrange's book 'Ideals of differentiable functions' (page 88, Coro 1.12), and here is another discussion on MO.
The example $A=M=\mathbb{Z}$, $L=N=\mathbb{Q}$ shows that the answer is negative: we have $\text{Hom}_A(L,M)=0$, so $\text{Hom}_A(L,M)\otimes_AN=0$, but $\text{Hom}_A(L,M\otimes_AN)=\text{Hom}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Q})=\mathbb{Q}\neq 0$.
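For completeness, the standard divisibility argument behind $\text{Hom}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z})=0$:

```latex
Any additive map $\varphi\colon \mathbb{Q}\to\mathbb{Z}$ vanishes: for every
$q\in\mathbb{Q}$ and every integer $n\ge 1$,
\[
  \varphi(q) \;=\; \varphi\bigl(n\cdot\tfrac{q}{n}\bigr) \;=\; n\,\varphi\bigl(\tfrac{q}{n}\bigr),
\]
so $n \mid \varphi(q)$ for all $n\ge 1$, forcing $\varphi(q)=0$.
\]
```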
{ "source": [ "https://mathoverflow.net/questions/226343", "https://mathoverflow.net", "https://mathoverflow.net/users/42571/" ] }
226,736
I am asking this question starting from two lines of consideration. Firstly, considering the historical development of several sciences, we can witness that certain physical entities "disappeared": it is the case of the luminiferous aether with the rise of Einstein's relativity , or the case of the celestial spheres that disappeared in the passage from the pre-Copernican universe to the Copernican universe . Secondly, if we consider the evolution of mathematics, we witness quite the opposite phenomenon: new mathematical objects are often invented and enrich already existing ontologies (consider the growth of the number system with negative, imaginary numbers, etc., or the rise of non-Euclidean geometries). But can anyone think of examples of mathematical objects that have, on the contrary, disappeared (been abandoned or dismissed by mathematicians)?
I certainly can't think of examples similar to your physics examples of concepts that were just wrong and so effectively became extinct. Two extremes which are present in mathematics are 1) Things which become too simple to have their name retain prominence as commonly known terminology. For example: For Aristotle, Square numbers and Oblong numbers were perhaps similarly important. Today the first concept is vibrant while the second is fairly unfamiliar. Similarly the concept of singly and doubly even integers is fairly unfamiliar, being subsumed as the case $p=2$ of the $p$-adic order of an integer (or rational number.) AND 2) Things which are too complex for the tools of their time and so hibernate for a while As an example, I feel compelled to quote the stirring first paragraph of The Invariant Theory of Binary Forms Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics. During its long eclipse, the language of modern algebra was developed, a sharp tool now at long last being applied to the very purpose for which it was intended. More recently, the artillery of combinatorics began to be aimed at the problems of invariant theory bequeathed to us by the nineteenth century, abandoned at the time because of insufficient combinatorial thrust.
{ "source": [ "https://mathoverflow.net/questions/226736", "https://mathoverflow.net", "https://mathoverflow.net/users/84431/" ] }
226,746
I am looking for a (classical and/or oldest) reference giving the characterisation of the operator $(-\Delta)^{\frac 12}$ as the Dirichlet to Neumann map $w_y$ where $w$ is the harmonic extension on the upper-half plane. It can be in domains or in the whole space. Does anyone know a citation? It seems very hard to find one.
{ "source": [ "https://mathoverflow.net/questions/226746", "https://mathoverflow.net", "https://mathoverflow.net/users/84434/" ] }
227,035
Let $N_\chi(\alpha,T)$ be the number of zeros of $L(s,\chi) = \sum \frac{\chi(n)}{n^s}$, $s=\sigma+it$, with $(\sigma,t)$ in the rectangle $[\alpha,1] \times [-T,T]$. In various papers one can read an estimate, for some $c > 0$, of the number of zeros over all L-functions: $$ \sum_{\chi \mod q} N_\chi(\alpha,T) \ll T^{c(1-\alpha)} $$ How does this imply the prime number theorem for arithmetic progressions? $$ \sum_{1 \leq q \leq Q} \sum_{\chi} \left| \sum_x^{x+h}\chi(p) \log p \right| \ll h \cdot e^{\large -a \frac{\log x}{\log Q}}$$ I hardly recognize this as the statement that there are infinitely many primes in any arithmetic progression. Since this is a statement about all characters $\chi$, how do we get a single arithmetic progression $n = aq+b$? I am a non-expert and I suspect results like these are not 100% clear to people outside that field. Similar in spirit, does the Polya-Vinogradov inequality imply the PNT for arithmetic progressions? That may be a separate question.
Firstly, a general comment: as understanding of a mathematical problem deepens, it is common (and even expected) for the most mathematically natural formulation of a given problem (or class of problems) to become "hardly recognisable" as arising from the formulation of the problem that historically motivated work in this area. For instance, the classical Greek problems of trisecting the angle or doubling the cube by straightedge and compass became transformed into part of the Galois theory of field extensions, whereas the superficially similar classical Greek question of squaring the circle by straightedge and compass has now instead become part of transcendence theory. These fields have since transformed further, for instance in modern algebraic geometry one can think of Galois theory as a special case of the theory of schemes (and their fundamental groups). These transformations arise because the mathematical tools and insights that one slowly acquires to attack these problems are often naturally abstracted using a formalism which often has no direct connection to the historical motivations of the problem, other than that the historical problem happens to be transformable to a special case of a much broader (and often much more interesting, ultimately) class of questions that the formalism is well equipped to handle. Now for the specific question at hand. The first insight, fundamental to modern analytic number theory, is that historical number-theoretic questions such as "how can I show that there infinitely many primes of the form $X$?" should be thought of as special cases of the much more general problem "How can I obtain good asymptotics or bounds for sums of the form $\sum_p f(p)$, where $p$ ranges over primes?". 
The original problem can be viewed as the special case where $f(p)$ is the indicator function $1_X$ associated to the property $X$; however the latter problem is in a much more general, flexible, and powerful framework, because of all the tools we have to compare one sum with another. For instance, rather than doing a naive unweighted count $\sum_p 1_X(p)$, one could work with more general weighted sums such as $\sum_p \frac{1_X(p)}{p}$, $\sum_p \frac{1_X(p)}{p^s}$, $\sum_p 1_X(p) \log p$, etc. which may be more advantageous to work with. In the case of Dirichlet's theorem on arithmetic progressions, a key insight of Dirichlet himself was that the multiplicative Fourier expansion $$ 1_{a \hbox{ mod } q}(n) = \frac{1}{\phi(q)} \sum_\chi \overline{\chi(a)} \chi(n)$$ allowed one to convert the original question of primes in arithmetic progressions $a \hbox{ mod } q$ into what turned out to ultimately be a much more interesting question, namely the estimation of Dirichlet character sums over primes, such as $\sum_{x \leq p \leq x+h} \chi(p) \log p$. Basically, the more upper bounds one has on these sums for non-principal Dirichlet characters, the more easily one is able to control historically interesting statistics such as $\sum_{x \leq p \leq x+h: p = a \hbox{ mod } q} 1$ in terms of the principal character $\chi_0(n) := 1_{(n,q)=1}$, which is relatively easy to work with. The main reason that one prefers to work with characters over arithmetic progressions is because the former has much better multiplicative structure than the latter, allowing the full power of multiplicative number theory to be brought into play.
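As a small numerical illustration of the multiplicative Fourier expansion above, here is a sketch for the modulus $q = 5$ (the choice of modulus and the primitive root $g = 2$ are just for illustration): the four Dirichlet characters mod 5 are built from the discrete logarithm, and averaging them recovers the indicator of a residue class.

```python
import cmath
import math

Q = 5            # an illustrative small modulus
G = 2            # a primitive root mod 5
PHI = Q - 1      # phi(5) = 4

# discrete logarithm table: IND[u] = k where u = G^k mod Q
IND = {pow(G, k, Q): k for k in range(PHI)}

def chi(t, n):
    """The t-th Dirichlet character mod 5 (t = 0, ..., 3); zero off the units."""
    if math.gcd(n, Q) != 1:
        return 0
    return cmath.exp(2j * math.pi * t * IND[n % Q] / PHI)

def indicator_via_characters(a, n):
    """(1/phi(q)) * sum_chi conj(chi(a)) * chi(n), which equals 1_{a mod q}(n)."""
    s = sum(chi(t, a).conjugate() * chi(t, n) for t in range(PHI))
    return (s / PHI).real

# sanity check: the character average really is the indicator of "n = a mod 5"
for a in range(1, Q):
    for n in range(1, 3 * Q):
        expected = 1 if (n - a) % Q == 0 else 0
        assert abs(indicator_via_characters(a, n) - expected) < 1e-9
```

The orthogonality of characters is what makes the inner sum collapse: the terms $\overline{\chi(a)}\chi(n)$ all equal $1$ when $n \equiv a$, and cancel otherwise.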
Indeed, the next great insight of Dirichlet was that character sums and products over primes were closely tied to the associated Dirichlet $L$-function, through identities such as the Euler product formula $$ L(s,\chi) = \prod_p (1 - \frac{\chi(p)}{p^s})^{-1}$$ (which encodes the fundamental theorem of arithmetic and the multiplicative nature of the Dirichlet character $\chi$) or the log-derivative of this formula, $$ -\frac{L'(s,\chi)}{L(s,\chi)} = \sum_{j=1}^\infty \sum_p \frac{\chi(p)^j}{p^{sj}} \log p.$$ In the latter formula one can already begin seeing why it is in fact natural to weight primes by characters and by the logarithmic weight $\log p$. (Actually, the most convenient way to package all this information is to replace sums over primes with sums over natural numbers weighted by the von Mangoldt function.) Through complex analysis tools such as the residue theorem, we know that the behaviour of the meromorphic function $-\frac{L'(s,\chi)}{L(s,\chi)}$ is controlled by the location of the zeroes of the Dirichlet $L$-function $L(s,\chi)$. Putting all this together, we now see that the key question one needs to resolve to understand primes in arithmetic progressions is where the zeroes of $L(s,\chi)$ are. For instance, Dirichlet's original proof of his theorem ultimately reduced matters to showing that $s=1$ is not a zero of this function. Through various "explicit formulae", one can make this connection much more manifest, for instance the von Mangoldt explicit formula for the Dirichlet L-function basically reads $$ \sum_{p \leq X} \chi(p) \log p + \dots = - \sum_\rho \frac{X^\rho}{\rho} + \dots$$ where $\rho$ ranges over zeroes of $L(s,\chi)$, and the $\dots$ conceal some lower order terms which I will omit here (together with the issue of how to interpret the infinite sum on the RHS, or whether one should truncate it to a finite sum) for simplicity.
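As a quick numerical sanity check on the Euler product, in its simplest instance $q = 1$ (where the $L$-function is the Riemann zeta function), one can compare a truncated Dirichlet series with the corresponding truncated product over primes at $s = 2$; the truncation bound below is an arbitrary choice.

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for m in range(i * i, limit + 1, i):
                sieve[m] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

s = 2.0
N = 200_000

# Dirichlet series side: sum_{n <= N} n^{-s}
dirichlet_sum = sum(n ** -s for n in range(1, N + 1))

# Euler product side: prod_{p <= N} (1 - p^{-s})^{-1}
euler_product = 1.0
for p in primes_up_to(N):
    euler_product *= 1.0 / (1.0 - p ** -s)

# both truncations approach zeta(2) = pi^2 / 6
assert abs(dirichlet_sum - euler_product) < 1e-4
assert abs(euler_product - math.pi ** 2 / 6) < 1e-4
```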
Combining such formulae with the multiplicative Fourier expansion mentioned earlier gives some useful direct connection between primes in arithmetic progressions and zeroes of $L$-functions. (This connection is sometimes popularly referred to as "the music of the primes", particularly in the $q=1$ case. Actually, with an appropriately "adelic" mindset, the $q=1$ case and $q>1$ cases can be unified into what is arguably an even more natural, albeit abstract, formalism, but that is perhaps a story for another time.) One thing that the explicit formula reveals is that the zeroes $\rho$ that are of large real part, $\operatorname{Re}(s) > \alpha$, will have an outsize impact on the distribution of the primes. Ideally, to get the most out of the explicit formula, we would like no zeroes whatsoever to the right of the critical line $\operatorname{Re}(s)=\frac{1}{2}$ (which is where all known zeroes of Dirichlet L-functions reside), and this is the main reason why the generalised Riemann hypothesis (GRH) is such a prized objective. Of course, we can't prove GRH, but in some applications we can make do with weaker density theorems that don't entirely prohibit zeroes from appearing to the right of the critical line, but at least limit how many of them can do so, which can still lead to reasonable upper bounds on various error terms that arise when using the explicit formula. As it turns out, the closer one gets to the line $\operatorname{Re}(s)=1$, the easier it is to exclude zeroes, and indeed it is known that there are no zeroes on or to the right of this line. 
For $\alpha < 1$, we have thus far been unable to prevent an infinity of zeroes from lying in the infinite strip $\{ s: \operatorname{Re}(s) > \alpha \}$, but we have at least been able to get non-trivial bounds for the zeroes in rectangles such as $\{ s: \operatorname{Re}(s) > \alpha; 0 \leq \operatorname{Im}(s) < T\}$, and this, together with suitably truncated versions of a suitable explicit formula, turns out to be good enough (after some calculation) to control primes in various arithmetic progressions, as is done in the paper of Gallagher you are citing. One might start with Davenport's multiplicative number theory book for an introduction to all this, in particular to the proof of the prime number theorem in arithmetic progressions with classical error term which uses the same general strategy as in Gallagher's paper but is somewhat easier to execute (one only needs the classical zero free region for the Dirichlet L-function, rather than the more difficult zero density estimates used by Gallagher). As mentioned in GH's answer, Iwaniec-Kowalski gives a more modern and advanced treatment of these topics. I also like Montgomery's CBMS book. As for the Polya-Vinogradov estimate, it is certainly of some relevance in bounding character sums, which in turn can be used to control L-functions, but it is generally not the most convenient tool to use for this purpose (for instance, the original formulation of the inequality is concerned with sharply truncated character sums, whereas smoothed character sums are often more natural to work with for the purposes of understanding L-functions). Nevertheless there is certainly a close kinship. For instance the Fourier inversion used to prove Polya-Vinogradov is closely tied to the Poisson formula proof of the functional equation for the Dirichlet L-function.
{ "source": [ "https://mathoverflow.net/questions/227035", "https://mathoverflow.net", "https://mathoverflow.net/users/1358/" ] }
227,083
In the past, the completeness of first-order logic and the question of whether arithmetic is complete were major unsolved issues in logic. All of these problems were solved by Godel. Later on, the independence of the main controversial axioms was established by the forcing method. I wonder if there still exist some "natural" questions in mathematical logic that are unsolved? Or is it the case that most of the major questions have already been answered? I'd love to know about some important, but still unsolved, problems that puzzle logicians, and why the young logician/mathematician should care about them (that is, why are they important?). I'm not an expert in logic (nor in any other mathematical field; I'm an undergraduate) but I'm interested in logic, so I would like to know about the current problems that logicians face, what the trends of research in the discipline are nowadays, and what type of problems people are trying to solve. I know that logic is a vast term which includes many sub-disciplines: model theory, proof theory, set theory, recursion theory, higher-order logics, non-classical logics, modal logics, algebraic logic and many others. So feel free to tell us about problems from whichever topic you would love to.
Yes, there are several. Here's a few which I personally care about (described in varying amounts of precision). This is not meant to be an exhaustive list, and reflects my own biases and interests. I am focusing here on questions which have been open for a long amount of time, rather than questions which have only recently been raised, in the hopes that these are more easily understood. MODEL THEORY The compactness and Lowenheim-Skolem theorems let us completely classify the sets of cardinalities of models of a first-order theory; that is, sets of the form $$\{\kappa: \exists \mathcal{M}(\vert\mathcal{M}\vert=\kappa, \mathcal{M}\models T)\}.$$ A natural next question is to count the number of models of a theory of a given cardinality. For instance, Morley's Theorem shows that if $T$ is a countable first-order theory which has a unique model in some uncountable cardinality, then $T$ has a unique model of every uncountable cardinality (this is all up to isomorphism of course). Surprisingly, the countable models are much harder to count! Vaught showed that if $T$ is a (countable complete) first-order theory, then - up to isomorphism - $T$ has either $\aleph_0$, $\aleph_1$, or $2^{\aleph_0}$-many countable models. Vaught's Conjecture states that we can get rid of the weird middle case: it's either $\aleph_0$ or $2^{\aleph_0}$. In case the continuum hypothesis holds, VC is vacuously true; but in the absence of CH, very little is known. VC is known for certain special kinds of theories (see e.g. Vaught's conjecture for partial orders and http://link.springer.com/article/10.1007%2FBF02760651 ) and a counterexample to VC is known to have some odd properties, including odd computability-theoretic properties ( https://math.berkeley.edu/~antonio/papers/VaughtEquiv.pdf ), but the conjecture is wide open. NOTE: VC can be rephrased as a "countable/perfect" dichotomy, in which case it is not trivially true if CH holds and is in fact forcing invariant; see e.g. 
How do we know if Vaught's Conjecture is Absolute? . PROOF THEORY If $T$ is a strong enough reasonable theory, we can define the proof-theoretic ordinal of $T$ ; roughly, how much induction is necessary to prove that $T$ is consistent. For instance, the proof-theoretic ordinal of $PA$ is $$\epsilon_0=\omega^{\omega^{\omega^{...}}}.$$ Proof-theoretic ordinals have been calculated for a variety of systems reaching up to (something around) $\Pi^1_2$ - $CA_0$ , a reasonably strong fragment of second-order arithmetic which is in turn a very very small part of ZFC. It seems unfair, based on this, to list " finding the proof-theoretic ordinal of ZFC " as one of these problems, based on how far away it is; but "find ordinals for stronger theories" is an important program. See e.g. Proof-Theoretic Ordinal of ZFC or Consistent ZFC Extensions? . COMPUTABILITY THEORY I believe the oldest open problem in computability theory is the automorphism problem . In Turing's 1936 paper, he introduced - in addition to the usual Turing machine - the oracle Turing machine (or o-machine ). This is a Turing machine which is equipped with "extra information" in the form of a (fixed arbitrary) infinite binary string. Oracle machines allow us to compare the non-computability of sets of natural numbers: we write $A\le_T B$ if an oracle machine equipped with $B$ can compute $A$ . This yields a partial ordering $\mathcal{D}$ , the Turing degrees . Initially the Turing degrees were thought to be structurally simple; for instance, it was conjectured (I believe by Shoenfield) that the partial order is "very homogeneous" (there were many different conjectures). As it turned out, however, the exact opposite happens: the Turing degrees have surprisingly rich structure. See e.g. http://www.jstor.org/stable/2270693?seq=1#page_scan_tab_contents for an early example of this by Feiner, and http://www.pnas.org/content/76/9/4218.full.pdf for a later one by Shore. 
Indeed, currently the general belief is that $\mathcal{D}$ is rigid , and it has been shown (see e.g. https://math.berkeley.edu/~slaman/papers/IMS_slaman.pdf , Theorem 4.30) that $Aut(\mathcal{D})$ is at most countable. The automorphism problem is exactly the question of determining $Aut(\mathcal{D})$; I don't have a reference as to when it was first stated, but I vaguely recall the date 1955. We can also ask about "local" degree structures - e.g., the partial order of the c.e. degrees, or the degrees below $0'$ - and there are interesting connections between the local and global pictures. Another structural question about the Turing degrees is what sort of natural operations on Turing degrees exist. For instance, there is the Turing jump, and its iterates; but these seem to be the only natural ones. Martin's conjecture states that indeed, every "reasonable" increasing function on the Turing degrees is "basically" an iterate of the Turing jump; MC has a few different forms, for instance "all Borel functions . . ." or "In $L(\mathbb{R})$ . . .". See e.g. https://math.berkeley.edu/~slaman/talks/vegas.pdf . SET THEORY An important theme in set theory is the development of canonical models for extensions of ZFC. The first example is Goedel's $L$, which has a number of nice properties: a well-understood structure, a "minimality" property, and a canonical (in particular, forcing-invariant) definition. We can ask whether similar models exist for ZFC + large cardinals: e.g. is there a "core" model for ZFC + "There is a measurable cardinal"? This is the inner model program , and has been developed extensively. Surprisingly, there is an end in sight: in an appropriate sense, if a canonical inner model for ZFC + "There is a supercompact cardinal" can be constructed, then this inner model will in fact capture all the large cardinal properties of the universe. I am breezing past a truly gargantuan amount of detail here, but the picture is roughly accurate. See e.g. 
http://www.math.uni-bonn.de/ag/logik/events/young-set-theory-2011/Slides/Grigor_Sargsyan_slides.pdf for more details, as well as the recent presentation https://www.youtube.com/watch?v=MFDVN7UEUSg&list=PLTn74Qx5mPsQlRpBE5OnxMdN3R1d3DLUO&index=4 by Woodin. SET THEORIES When someone says "set theory," they usually mean ZFC-style set theory. But this isn't necessarily so; there are alternative set theories . As far as I know, the oldest open consistency problem here is whether Quine's NF - an alternative to ZFC - is consistent. Seemingly small variations of NF are known to be consistent, relative to very weak theories, but these proofs dramatically fail to establish the consistency of NF. Recently Gabbay ( http://arxiv.org/abs/1406.4060 ) and Holmes ( http://math.boisestate.edu/~holmes/holmes/basicfm.pdf ) proposed proofs of Con(NF); my understanding is that Gabbay has withdrawn his proof, and Holmes' proof has not been evaluated by the community (it is quite long and intricate). FINITE MODEL THEORY For a first-order sentence $\varphi$ , let the spectrum of $\varphi$ be the set of sizes of finite models of $\varphi$ : $$\operatorname{Spec}(\varphi)=\{n: \exists\mathcal{M}(\vert\mathcal{M}\vert=n, \mathcal{M}\models\varphi)\}.$$ We can ask what sets of natural numbers are spectra of sentences; in particular, the finite spectrum problem (see the really lovely paper http://www.diku.dk/hjemmesider/ansatte/neil/SpectraSubmitted.pdf ) asks whether the complement of a spectrum is also a spectrum. It is known, for example, that the complement of the spectrum of a sentence not using " $=$ " is a spectrum ( http://www.inf.u-szeged.hu/actacybernetica/edb/vol07n2/pdf/Ecsedi-Toth_1985_ActaCybernetica.pdf ). There is a complexity theory connection here: a set is a spectrum iff it is in NEXP. So the finite spectrum problem asks, "Does $\text{NEXP}=\text{coNEXP}$ ?" We can also ask about spectra for non-first-order sentences. 
ABSTRACT MODEL THEORY Abstract model theory is the study of logics other than first-order. The classic text is "Model-theoretic logics" edited by Barwise and Feferman; see (freely available!) https://projecteuclid.org/euclid.pl/1235417263 . The field began (arguably) with Lindstrom's Theorem, which showed that there is no "reasonable" logic stronger than first-order logic which satisfies both the Compactness and Lowenheim-Skolem properties. Shortly after Lindstrom's result, attention turned towards Craig's interpolation theorem, a powerful result in proof theory (see https://math.stanford.edu/~feferman/papers/Harmonious%20Logic.pdf ). Feferman, following Lindstrom, asked whether there is a reasonable logic stronger than first-order which satisfies compactness and the interpolation property. As far as I know, this question - and many weaker versions! - is still completely open. I believe this is by far the youngest question in this answer.
{ "source": [ "https://mathoverflow.net/questions/227083", "https://mathoverflow.net", "https://mathoverflow.net/users/35397/" ] }
227,402
It's obvious that the volume of an envelope is 0 when flat and nonzero when you open it up. However, if you were to fill it with liquid, there must be some shape where it has a maximum volume. Is there a practical way to determine that volume? I think a starting point would be to start with a cylinder and seal one end by putting it in something like a vice. That gives you the same shape. EDIT: The reason this problem came to mind is because Quaker's oatmeal packets have a fill line on them so that you don't need a measuring cup. Every day, as I fill up the little packet with water, I think "what a clever idea," but I also wonder about the math behind it. I expect they approximate because, after all, it is just oatmeal.
Your question is a variant of the teabag problem . I don't believe an exact answer is known, but for the $1 \times 1$ square teabag, the maximum volume is about $0.2$: (Image from Wikipedia article .) The primary reference is Anthony Robin's 2004 article, "Paper Bag Problem". Mathematics Today—Bulletin of the Institute of Mathematics and its Applications. 40 .3 (2004): 104-107. MathWorld article on "paper bags" , which quotes Robin's 2004 volume bound. $13$-second YouTube Simulation of Inflating a Teabag , using a cloth simulation algorithm.
{ "source": [ "https://mathoverflow.net/questions/227402", "https://mathoverflow.net", "https://mathoverflow.net/users/84759/" ] }
227,435
Can anyone provide me with an example of an orientable closed manifold $M$ of dimension $n\geq 2$ , which cannot be smoothly embedded in $\mathbb R^{2n-1}$ ? I know these cannot exist for $n=1$ , i.e. $S^1$ . If we ignore orientability, then if we take $n=2^r$ , $\mathbb RP^n$ cannot be embedded in $\mathbb R^{2n-1}$ . While searching on the internet, I found some sharper results here . So can we say anything when $\dim(M)= 2^r$ and $M$ is orientable? It would be very helpful if someone could provide me with some more references. Thanks in advance.
A closed smooth $n$-manifold embeds into $\mathbb R^{2n-1}$ if and only if the normal $(n-1)$th Stiefel-Whitney class vanishes. This is due to Hirsch-Haefliger in dimensions $\neq 4$ and to Fang in dimension $4$. Massey showed that if the normal $(n-1)$th Stiefel-Whitney class is nonzero, then $M$ is non-orientable and $n$ is a power of $2$. Thus any smooth closed orientable $n$-manifold smoothly embeds into $\mathbb R^{2n-1}$. For references see here and here .
{ "source": [ "https://mathoverflow.net/questions/227435", "https://mathoverflow.net", "https://mathoverflow.net/users/33064/" ] }
227,713
A. Are there natural numbers $a,b,c$ such that $\frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b}$ is equal to an odd natural number? (I do not know any such numbers.) B. Suppose that $\frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b}$ is equal to an even natural number ($a,b,c$ are still natural numbers); is there any way to estimate the minimum of $a$, $b$ and $c$? The smallest solution that I know for $\frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b} = 4$ is, in text (from the comment below of @djsadinoff): $$\scriptsize{a = 4373612677928697257861252602371390152816537558161613618621437993378423467772036}$$ $$\scriptsize{b = 36875131794129999827197811565225474825492979968971970996283137471637224634055579}$$ $$\scriptsize{c = 154476802108746166441951315019919837485664325669565431700026634898253202035277999} $$
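These values can be checked exactly with rational arithmetic; a quick sketch in Python (using the standard `fractions` module, so no floating-point rounding is involved):

```python
from fractions import Fraction

# the three values quoted above
a = 4373612677928697257861252602371390152816537558161613618621437993378423467772036
b = 36875131794129999827197811565225474825492979968971970996283137471637224634055579
c = 154476802108746166441951315019919837485664325669565431700026634898253202035277999

# exact rational arithmetic: a/(b+c) + b/(a+c) + c/(a+b)
total = Fraction(a, b + c) + Fraction(b, a + c) + Fraction(c, a + b)
assert total == 4
```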
This problem turned out to be much more interesting than I originally thought. Let me give my solution, which seems to be slightly different from (but essentially the same as) the solution in the paper by Bremner and MacLeod (see Allan MacLeod's answer). Theorem . Let $a,b,c$ be positive integers. Then $\frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b}$ can never be an odd integer. Let $n$ be a positive odd integer. The equation $\frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b} = n$ implies $$a^3 + b^3 + c^3 + abc - (n-1)(a+b)(b+c)(c+a) = 0.$$ This describes a smooth cubic curve $E_n$ in the projective plane that has at least six rational points (of the form $(1:-1:0)$ and $(1:-1:1)$ and their cyclic permutations). Declaring one of these to be the origin, $E_n$ is an elliptic curve over $\mathbb Q$. Bringing $E_n$ in Weierstrass form, we obtain the isomorphic curve $$E'_n \colon y^2 = x \bigl(x^2 + (4n(n+3)-3)x + 32(n+3)\bigr) =: x(x^2 + Ax + B).$$ If $n = 1$, then there are obviously no positive solutions, so we assume $n \ge 3$. Then $E_n(\mathbb R)$ has two connected components, one of which contains the six `trivial' points but no points with positive coordinates, whereas the other component does contain positive points. In the model $E'_n$, this component consists of points with negative $x$-coordinate. Claim . If $(\xi,\eta) \in E'_n(\mathbb Q)$, then $\xi \ge 0$. This clearly implies the statement of the theorem. To show the claim, let $D = 2n + 5$. Then $D$ is odd, positive, coprime with $B$ and divides $A^2 - 4B = (2n-3)(2n+5)^3$. If $p$ is an odd prime dividing $B$, then $n \equiv -3 \bmod p$ and so $-D \equiv 1 \bmod p$. The equation $B x^2 - D y^2 = z^2$ has the solution $(x,y,z)=(1,4,4)$, so the Hilbert symbol $(B, -D)_p = 1$ for all primes $p$. We will show: If $(\xi,\eta) \in E'_n(\mathbb Q)$ with $\xi \neq 0$, then $(\xi, -D)_p = 1$ for all primes $p$. 
Given this, the product formula for the Hilbert symbol implies $(\xi, -D)_\infty = 1$ and so $\xi > 0$ (since $-D < 0$). Note that $(\xi, -D)_p = (\xi^2 + A \xi + B, -D)_p$. We first consider odd $p$. We note that when $\xi$ is not a $p$-adic integer, then $\xi$ must be a $p$-adic square, so $(\xi, -D)_p = 1$. So we can assume that $\xi \in {\mathbb Z}_p$. There are three cases. $p$ divides neither $B$ nor $D$. If $\xi \in {\mathbb Z}_p^\times$, then $(\xi, -D)_p = 1$, since both entries are $p$-adic units. Otherwise, $(\xi, -D)_p = (\xi^2 + A \xi + B, -D)_p = (B, -D)_p = 1$. $p$ divides $B$. Then $-D \equiv 1 \bmod p$, so $-D$ is a $p$-adic square, hence $(\xi, -D)_p = 1$. $p$ divides $D$. Then $x^2 + Ax + B \equiv (x + A/2)^2 \bmod p$. So if $\xi \in {\mathbb Z}_p^\times$, then $\xi$ must be a square mod $p$, and $(\xi, -D)_p = 1$. If $\xi$ is divisible by $p$, then as before, $(\xi, -D)_p = (\xi^2 + A \xi + B, -D)_p = (B, -D)_p = 1$. It remains to consider $p = 2$. If $n \equiv 1 \bmod 4$, then $-D \equiv 1 \bmod 8$, so $(\xi, -D)_2 = 1$ for all $\xi$. If $n \equiv 3 \bmod 4$, then $-D \equiv 5 \bmod 8$, so $(\xi, -D)_2 = (-1)^{v_2(\xi)}$, and we have to show that the 2-adic valuation of $\xi$ must be even. Note that in this case $v_2(B) = 6$ and $A \equiv -3 \bmod 8$. If $v_2(\xi)$ is odd, then exactly one of the three terms $\xi^3$, $A \xi^2$, $B \xi$ has minimal 2-adic valuation, which must be even, so it cannot be the first or the third term. This reduces us to $\nu := v_2(\xi) \in \{1,3,5\}$. One then easily checks that $\xi(\xi^2 + A\xi + B) = 4^\nu u$ with $u \equiv -1 \bmod 4$ when $\nu = 1$ or $5$ and $u \equiv -3 \bmod 8$ when $\nu = 3$. In all cases, $u$ cannot be a square, and so points with $x$-coordinate $\xi$ cannot exist. This concludes the proof. Note that when $n$ is even, we have $-D \equiv 3 \bmod 4$ and also $v_2(B) = 5$, so we lose control over the 2-adic Hilbert symbol. 
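The elementary identities the proof leans on are easy to machine-check; here is a sketch (the range of odd $n$ tested is an arbitrary choice):

```python
from math import gcd

# For odd n >= 3, with A = 4n(n+3) - 3, B = 32(n+3), D = 2n + 5, verify:
#   (i)   B*1^2 - D*4^2 = 4^2, i.e. (x, y, z) = (1, 4, 4) solves B x^2 - D y^2 = z^2
#   (ii)  gcd(D, B) = 1  (D is odd, and 2(n+3) - (2n+5) = 1 gives coprimality to n+3)
#   (iii) A^2 - 4B = (2n-3)(2n+5)^3, so in particular D divides A^2 - 4B
for n in range(3, 1000, 2):
    A = 4 * n * (n + 3) - 3
    B = 32 * (n + 3)
    D = 2 * n + 5
    assert B * 1 ** 2 - D * 4 ** 2 == 4 ** 2
    assert gcd(D, B) == 1
    assert A ** 2 - 4 * B == (2 * n - 3) * D ** 3
```

Of course these are polynomial identities, so checking a range of $n$ only illustrates them; expanding by hand (or with a computer algebra system) proves them for all $n$.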
This is the previous version of this answer, which I leave here, since it may contain some points of interest. The equation $\frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b} = n$ gives rise to the elliptic curve $$E_n \colon a^3 + b^3 + c^3 + abc - (n-1)(a+b)(b+c)(c+a) = 0.$$ You are asking for rational points on this curve (such that $a+b, b+c, c+a \neq 0$). For odd positive $n$ up to and including 17, this is a curve of rank zero (with 6 rational points), whereas for $n = 19$, it has rank 1. Therefore $E_{19}$ has infinitely many rational points, and your equation has infinitely many solutions for $n = 19$. I'll do the computations and find one explicitly. EDIT: As pointed out by Jeremy Rouse in a comment below, the integral solutions for $n = 19$ are not positive. More precisely, the real points $E_n(\mathbb R)$ form two connected components (the discriminant of $E_n$ is positive), and it is the non-identity component that contains points with all positive coordinates (taking as the identity one of the six points like $(1:-1:0)$ or $(1:1:-1)$). So the question is whether there is odd $n$ such that there is a rational point on the non-identity component; then the rational points will be dense on this component and so there will be positive solutions. So far, no such $n$ turned up, even though there are many such that $E_n$ has positive rank. FURTHER EDIT: I suspect that there really is no odd $n > 0$ such that $E_n$ has rational points on the non-identity component. One way of checking this for any given $n$ is to do (half of) a 2-isogeny descent on $E_n$. This produces a number of curves of the form $C_u \colon y^2 = u x^4 + v x^2 + w$ where $v = 4n(n+3)-3$ and $uw = 32(n+3)$ that are unramified double covers of $E_n$. We consider the curves $C_u$ that have points over all completions of $\mathbb Q$. Then every rational point on $E_n$ is the image of a rational point on one of these curves $C_u$. 
Doing the computation, one obtains a set of curves $C_u$ that all have $u > 0$ (this is only experimental; I checked it for $n$ up to 9999). But if $u > 0$, then [$C_u$ has only one real component — this is wrong, but the following is OK] the image of $C_u(\mathbb R)$ in $E_n(\mathbb R)$ is the identity component, so there can be no rational point on the other component. My feeling is that there might be a Brauer-Manin obstruction to the existence of rational points on the non-identity component for odd $n$, but I don't have enough time to check this. A possible approach would be to note that $$E'_n \colon y^2 = x \bigl(x^2 + (4n(n+3)-3)x + 32(n+3)\bigr)$$ is isomorphic to $E_n$. If we can find a positive integer $d(n)$ such that for all rational points $(\xi,\eta) \in E'_n(\mathbb Q)$ (with $\xi \neq 0$) the product $\prod_p (\xi, -d(n))_p$ of Hilbert symbols (over all finite places) is always $+1$, then the claim follows from the product formula for the Hilbert symbol and $(\xi, -d(n))_\infty = -1$ for $\xi < 0$. SUCCESS: For odd $n \ge 3$, $d(n) = 2n-3$ works. One can check that $(\xi, 3-2n)_p = 1$ for all primes $p$. Details later (it is getting late). Actually, $d(n) = -5-2n$ works better. See above. Note that for even $n$, there usually are $C_u$ with $u < 0$ when $E_n$ has positive rank (the first exception seems to be $n = 40$). So I would expect the Brauer-Manin obstruction to result from an interaction between $p = 2$ and the infinite place. For $n = 4$, the curve has also rank 1, which explains the existence of solutions. I'll try to check if there are smaller ones than that given by you. EDIT: The given solution is really the smallest (positive) one. The next larger one has numbers of 167 to 168 digits.
{ "source": [ "https://mathoverflow.net/questions/227713", "https://mathoverflow.net", "https://mathoverflow.net/users/72273/" ] }
227,889
I've recently been to a seminar on quantum matrices. In particular, the speaker introduced these objects as the coordinate ring of $2$ by $2$ matrices modulo some odd-looking relations (see the start of Section 2 here). As a theoretical physicist, I'm struggling to understand in what sense these objects are quantum! Has anyone got any references or knowledge which might help answer the following? Where do these odd-looking relations come from? In what sense are the matrices quantum (can the non-commutativity in the coordinate ring be understood as emerging from some quantisation procedure...)? Should I think of quantum groups as a controlled mechanism for introducing non-commutativity into coordinate rings in general? Many thanks in advance for your expertise!
Typically in math "quantum X" means a deformation of "X" which is in some sense "less commutative." So quantum groups should be deformations of groups which are "less commutative." Interpreting this is slightly tricky since groups are already non-commutative, but nonetheless they do have some "commutativity" built in which you can see either by noting: The ring of functions on the group is a commutative ring. The tensor product of representations of the group has a symmetric isomorphism $V \otimes W \rightarrow W \otimes V$. You can use either of these to motivate versions of quantum groups. A quantum group is: A Hopf algebra which deforms the Hopf algebra of functions on a group, but where the ring structure is non-commutative. Something whose category of representations deforms the category of representations of a group, but where the tensor product structure is not symmetric.
{ "source": [ "https://mathoverflow.net/questions/227889", "https://mathoverflow.net", "https://mathoverflow.net/users/22337/" ] }
228,388
At a basic level, algebraic topology is the study of topological spaces by means of algebraic invariants. The key word here is "topological spaces". (Basic) algebraic topology is very useful in other areas of mathematics, especially in geometry (I would say in almost all geometry). I'm not an algebraic topologist myself, so I know only basic techniques. However, I'm intrigued by modern tools in homotopy theory. For example, we have simplicial homotopy theory, where one studies simplicial sets instead of topological spaces. As far as I understand, simplicial techniques are indispensable in modern topology. Then we have axiomatic model-theoretic homotopy theory, stable homotopy theory, chromatic homotopy theory. Recently, we got a topological version of algebraic geometry, namely spectral algebraic geometry, which has proved useful in studying topological modular forms. But one may wonder what it is all for. Those are really fancy and sometimes beautiful tools, but what exactly are the questions modern algebraic topology seeks to answer? Because it feels like it's really not part of topology anymore; rather, topology now seems to be a small part of algebraic topology/homotopy theory. So, I would like to hear about the goals and perspectives of modern homotopy theory from those working on it. I hope this question might be useful to someone else.
While I think that Andre is right in saying that homotopy theory (or algebraic topology) is ready to study everything that fits into the framework of abstract homotopy theory, some things still have an especially important place in our hearts, especially when we say algebraic topology instead of homotopy theory. This says that while all of category theory and all of homological algebra belong to the study of $(\infty, 1)$-categories, this is not where our aim is. The roots of our subject lie in the study of nice spaces like manifolds. Important questions are: Can we classify manifolds up to some equivalence relation? Can we understand maps between manifolds? The coarsest useful equivalence relation for the classification of manifolds is bordism, and this is also the basis of most other classification results (those using surgery theory). Computing bordism groups was an important topic in earlier algebraic topology and was done successfully for some flavors rather early ($\Omega_O$, $\Omega_U$, $\Omega_{SO}$, ...). But one of the most important variants, both theoretically and from the viewpoint of classification of manifolds, is framed bordism. By an old theorem of Pontryagin, the framed bordism groups are isomorphic to the stable homotopy groups of spheres, connecting it to the second question. One can say that much of algebraic topology was invented or can be used to study the stable homotopy groups of spheres. One of the most recent spectacular advances in algebraic topology was the solution of (most of) the Kervaire invariant 1 problem by Hill, Hopkins and Ravenel about framed manifolds/stable homotopy groups of spheres. They used a tremendous amount of machinery to solve this classical problem: equivariant topology, chromatic homotopy theory, spectral sequences, orthogonal spectra, abstract homotopy theory, ... Likewise, topological modular forms $tmf$ have important applications to the stable homotopy groups of spheres and also to string bordism.
And to really understand $tmf$, you have to study some spectral algebraic geometry. I do not want to say that all of algebraic topology still directly aims at classical questions. As soon as we see an interesting structure, we also study it for its own sake; new phenomena need explanations and developing abstract frameworks is also fun. But like in the relationship between mathematics and physics, sharpening our tools and exploring by pure curiosity can be quite useful for the classical questions. When people replaced older, in some aspects more clumsy models of spectra by symmetric and orthogonal spectra, they probably didn't have in mind any direct applications to framed manifolds. But what Hill, Hopkins and Ravenel did would have been much harder without these tools in their hands.
{ "source": [ "https://mathoverflow.net/questions/228388", "https://mathoverflow.net", "https://mathoverflow.net/users/85259/" ] }
229,132
Assume that $P(z)$, $Q(z)$ are complex polynomials such that $P(S)=Q(S)$, where $S=\{z\colon |z|=1\}$ (equality is understood in the sense of sets, but I do not know the answer even for multisets). Does it follow that there exist a polynomial $f(z)$, positive integers $m,n$ and a complex number $w\in S$ such that $P(z)=f(z^n)$, $Q(z)=f(wz^m)$? It is motivated by this question (and if the above claim is true, it actually implies much more than asked therein). I started a new question with the algebraic geometry tag, since it looks reasonable and may attract the attention of the right people, rather than adding comments to an old post.
This is a special case of the main theorem in the paper by I. N. Baker, J. A. Deddens, and J. L. Ullman, A theorem on entire functions with applications to Toeplitz operators, Duke Math. J. Volume 41, Number 4 (1974), 739-745. They proved a similar statement for arbitrary entire functions.
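The "if" direction of the structure in the question is elementary: $z \mapsto z^n$ and $z \mapsto wz^m$ with $|w|=1$ both map the unit circle onto itself, so $P(S) = f(S) = Q(S)$. A quick numerical illustration may still be useful; the specific $f$, $n$, $m$, $w$ below are arbitrary choices of mine, not taken from the paper.

```python
import cmath

f = lambda z: z**2 + 3*z               # an arbitrary polynomial
n, m = 2, 3
w = cmath.exp(0.7j)                    # an arbitrary point on the unit circle

N = 1200
circle = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
P_img = [f(z**n) for z in circle]      # P(z) = f(z^n) sampled on |z| = 1
Q_img = [f(w * z**m) for z in circle]  # Q(z) = f(w z^m) sampled on |z| = 1

def gap(A, B):
    """Directed Hausdorff-type distance between two finite point sets."""
    return max(min(abs(p - q) for q in B) for p in A)

d = max(gap(P_img, Q_img), gap(Q_img, P_img))
print(d)   # small: both sampled image sets trace the same curve f(S)
```

The distance is only small, not zero, because the two parametrizations sample the common image curve at different points.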
{ "source": [ "https://mathoverflow.net/questions/229132", "https://mathoverflow.net", "https://mathoverflow.net/users/4312/" ] }
229,732
I realize this question is risky (as the title and the tags indicate), but hopefully I can make it acceptable. If not, and the question cannot be salvaged, I'm sorry and ready to delete it or accept closure. Oftentimes, when one listens to chess grandmasters commentating on games, they will say things like, "The engines give a +1 advantage to White, but the position is dead-drawn." And they proceed to provide high-level explanations of such statements. I'm a bad chess player so I cannot really attempt to verify those, but they often sound extremely convincing. Also, when they teach simple endgames, they seem to consider analyzing the crucial ideas to be enough for proof (at least in the sense of "a completely convincing explanation") and it's difficult not to agree with that. Even certain positions with a lot of pieces on the board can apparently be treated this way. "There is no way to make progress" is something that one can frequently hear. Sometimes these just turn out to be wrong. These statements are generally based on intuitions ("axioms") such as "you cannot give away your queen for a pawn", which are believed to be true in almost all situations, and on already analyzed positions (like the Lucena position). The grandmasters' intuitions are very fine, but it can happen that they will miss a counterintuitive material sacrifice, a counterintuitive retreat or a counterintuitive something else. However, they can be extremely convincing sometimes (and never proven wrong). It's clear that chess can't be solved this way any time soon. A clear obstacle is the "wild" or "unclear" positions, where "anything can happen". But there are also those "tame" positions, "dead-drawn" positions and "winning advantages". (Surely, some -- or maybe a lot -- of these statements are not correct, or are correct with incorrect reasonings behind them.) Another indication is that the humans who make these statements get beaten by computers, which primarily use low-level tree searches.
My question is how much, if anything, of this high-level reasoning can be put into a form that would make it mathematics and mathematical proof. I think brute force is clearly not the only way of evaluating positions rigorously. In a K+Q v. K endgame, one doesn't have to analyze each possible move of the lone king to show that it's doomed. It's enough to show that a stalemate is impossible when the king is not on the side of the board, that the king can be forced to the side of the board, and then that whenever the king is on the side, it can be checkmated. For a rigorous proof, one can use all kinds of symmetries and translations without having to go through every single node of the tree. The question "How much of the high-level thinking grandmasters employ can be made into mathematics?" is what I'm interested in, but this is vague -- it's not clear what "how much", "high-level thinking" or even "made into" mean. But I think people must have tried to pursue this and some clear mathematical statements must have been produced. If not, is it at least possible to say why it's difficult, infeasible or impossible? Is there a chess position with a large (enough to be intractable to a human with just a pen and a piece of paper) tree that has been solved without brute-forcing the tree? Has a rigorous proof accessible to a human been produced?
Let me prove, for example, that the following 7-piece position is a draw. 7-piece positions are about the borderline of what's doable by brute force: they were tabulated around 2010. Black draws as follows: 1) if white queen captures the rook or the pawn, recapture. 2) else, in the event of check, move the king to h7, h8, or g8 (cannot all be controlled by the queen simultaneously) 3) else, if white queen or pawn moves to 6-th rank, capture it with the rook, 4) else, move the rook to f6 or h6. Clearly, unless white sacrifices the queen, she cannot cross the 6-th rank with the king and thus cannot break the above routine. The possible sacrifices are: A) by capturing the g7 pawn. Recapture with the king and continue moving the rook, securing a draw. B) by taking the rook. Any resulting pawn endgame is drawn by moving the king between h7, h8, and g8; C) by Qa6, Qb6, Qc6, Qd6 or Qe6, followed by rook takes queen and king takes rook. The very same idea, only be sure to take g7:h6 when you can. D) By Qg6 R:g6 h5:g6. This is also a theoretical draw. Move Kh8-h7-h8-... and be sure not to take g7:h6 at a wrong moment. Some details are left off here, but I think the level of rigour is fairly close to how math is written. P. S. In a game between Mamedyarov and Caruana (world N17 and N3, respectively), played last Friday, a draw was agreed in the following position: Computers wrongly give a decisive advantage to white: the idea of cutting the king with the rook is too "high-level" for them to understand it. And I believe, if needed, one can devise a rigorous proof here similar to the one above.
{ "source": [ "https://mathoverflow.net/questions/229732", "https://mathoverflow.net", "https://mathoverflow.net/users/20803/" ] }
230,426
The title says everything, but while it is a little bit provocative let me elaborate a bit on my question. The first time I met a foliation it was just an isolated example in a differential geometry course (it was the Reeb foliation) and I didn't pay much attention to it. In the meanwhile I got interested in noncommutative theory, in particular in $C^*$-algebras. While reading about Noncommutative Geometry I came across foliations as one of the main motivating examples of the theory. I learned that in general the space of leaves of a foliation is badly behaved as a topological space and I believe that it is more worthwhile to deal with these spaces using algebraic methods. But I don't have something like a mental picture of what a foliation is, and why should I even care about these objects?
Without any disrespect, let me say that I find it incredible that someone naturally cares about non-commutative geometry but needs convincing about actual geometry (this just goes to highlight that there is a wide variety of ways of thinking in mathematics). I would need convincing the other way around (e.g. How are C* algebras relevant in foliation theory from the geometric point of view?). From the point of view of someone interested in geometry, foliations appear naturally in many ways. The most basic way is when you consider the level sets of a function. If the function is a submersion you get a non-singular foliation, but this is rare. However every manifold admits a Morse function and the theory of Morse functions (which can be used for example to classify surfaces, and to prove the high dimensional case of the generalized Poincaré conjecture) can be seen as a special (or maybe as the most important) case of the theory of singular foliations (where the singularities are pretty simple). Another natural type of foliation is the partition of a manifold into the orbits of the flow determined by a vector field. Again the simplest case, in which the vector field has no zeros, is rare but yields a non-singular foliation (with one-dimensional leaves). However, already in this case one can see that the leaves of a foliation can be recurrent (i.e. accumulate on themselves) in non-trivial ways (the typical example is the partition of the flat torus $\mathbb{R}^2/\mathbb{Z}^2$ into lines of a given irrational slope). A notable fact generalizing the above case (the result is in papers of Sussmann and Stefan from the early 70s) is the following: Consider $n$ vector fields on a manifold. For each point $x$, consider the set of points you can reach using arbitrary finite compositions of the flows of these vector fields. The partition of the manifold into these "accessibility classes" is a singular foliation (in particular each accessibility class is a submanifold). 
Hence foliations appear naturally in several types of "control problems" where one has several valid directions of movement and wishes to understand what states are achievable from a given state. This point of view also gives a nice insight into Hörmander's theorem on why certain differential operators have smooth kernels (there's a nice article by Hairer explaining Malliavin's proof of this theorem). Essentially the Hörmander bracket condition means that Brownian motion can go anywhere it wants (i.e. a certain foliation associated to the operator is trivial). Another way to obtain motivation is to look at history (I remember reading a nice survey which I think was written by Haefliger). In my (unreliable) view, the first geometric results (so I'm skipping Frobenius's theorem) in foliation theory are the Poincaré-Bendixson and Poincaré-Hopf theorems, both of which can be used to show that every one-dimensional foliation of the two-dimensional sphere has singularities. Hopf then asked in the 1930's if there exists a foliation of the three-dimensional sphere using only surfaces. The first observation, due to Reeb and Ehresmann, is that if one of the surfaces is a sphere then you cannot complete the foliation without singularities. They also constructed the famous Reeb foliation and answered the question in the affirmative. Since then there has been a whole line of research dedicated to the question of which manifolds admit non-singular foliations. In this regard, the main theorem is due to Thurston who (in the words of an expert in the theory) came around and "foliated everything that could be foliated". But there are other lines of research. For example, I know that there is a certain subset of algebraic geometry dedicated to trying to understand the foliations of complex projective space which are determined by the level sets of rational functions of a certain degree.
Also, whenever you have an action of the fundamental group of a manifold there is a natural "suspension" foliation attached (suspensions are considered the "local model" for a general foliation and are hence very important in the theory). This point of view sometimes has given results in the current area of research known as higher-Teichmüller theory (where basically they study linear actions of the fundamental group of a surface). And of course, when one has an Anosov, or hyperbolic, diffeomorphism or flow (for example the geodesic flow of a hyperbolic surface), there are the stable and unstable foliations which play a role for example in the famous Hopf (not the same Hopf as before) argument for establishing ergodicity. Oh, and I haven't even mentioned the special place that foliations occupy in the theory of 3-dimensional manifolds. Here there are many results which I can't say much about (but I've heard the book by Calegari is quite nice). Maybe a basic one is Novikov's theorem, which basically proves that the existence of Reeb components is forced for foliations on many 3-manifolds. And (I couldn't resist adding one last example), there are also foliations by Brouwer lines, which have recently been used (by Le Calvez and others) to prove interesting results about the dynamics of surface homeomorphisms. TLDR: Foliations occur naturally in many contexts in geometry and dynamical systems. There may not be a very unified "theory of foliations" but several special types have been studied in depth for different reasons and have yielded insight or participate in the proof of important results such as the Poincaré conjecture and Hörmander's bracket theorem. For this reason mathematicians have taken notice and singled out foliations as a basic object in geometry (there have even been significant efforts in producing a couple of nice treatises trying to give the grand tour, for example the books by Candel and Conlon).
{ "source": [ "https://mathoverflow.net/questions/230426", "https://mathoverflow.net", "https://mathoverflow.net/users/24078/" ] }
230,996
In a recent article, Emmanuel Lecouturier proves a generalization of the following surprising result: for a Mersenne prime $N = 2^p - 1 \ge 31$, the element $$ S = \prod_{k=1}^{\frac{N-1}2} k^k $$ is a $p$-th power modulo $N$, and observes that he does not know an elementary proof. Neither do I. Numerical experiments suggest that $S$ is actually a $6p$-th power modulo $N$. I can't even see why it is a quadratic residue, i.e., why the following result (not proved in the article cited) should hold: $$ T = \prod_{k=1}^{\frac{N+1}4} (2k-1) $$ is a square mod $N$. For arbitrary primes $N \equiv 3 \bmod 4$, the following seems to hold: $$ \Big(\frac{T}{N} \Big) = \begin{cases} - (-1)^{(h-1)/2} & \text{ if } N \equiv 3 \bmod 16, \\ - 1 & \text{ if } N \equiv 7 \bmod 16, \\ (-1)^{(h-1)/2} & \text{ if } N \equiv 11 \bmod 16, \\ + 1 & \text{ if } N \equiv 15 \bmod 16, \end{cases} $$ where $h$ is the class number of the complex quadratic number field with discriminant $-N$. This suggests a possible proof using L-functions (i.e. using the methods of J. Urbanowicz and K. S. Williams, Congruences for L-functions) and explains the difficulty of finding an elementary proof. My questions: Is this congruence for $(T/N)$ known? How would I start attacking this result using known results on L-functions?
1. First we show that $$\left(\frac{T}{N}\right) = \begin{cases} - (-1)^{(h-1)/2} & \text{ if } N \equiv 3 \bmod 16, \\ (-1)^{(h-1)/2} & \text{ if } N \equiv 11 \bmod 16. \\ \end{cases}$$ Consider the sets $$A_0:=\{1,2,\dots,\tfrac{N-1}{2}\},\quad A_1:=\{1,3,\dots,\tfrac{N-1}{2}\},\quad A_2:=\{2,4,\dots,\tfrac{N-3}{2}\},$$ $$ A_3:=\{1,2,\dots,\tfrac{N-3}{4}\},\quad A_4:=\{\tfrac{N+1}{4},\tfrac{N+5}{4},\dots,\tfrac{N-1}{2}\}.$$ Consider also the corresponding character sums $$S_i:=\sum_{a\in A_i}\left(\frac{a}{N}\right),\qquad i=0,1,2,3,4.$$ Modulo $N$, we have $2A_3=A_2$ and $-2A_4=A_1$. Therefore, $S_3=-S_2$ and $S_4=S_1$, and hence $S_1-S_2=S_3+S_4=S_0$. But also $S_1+S_2=S_0$, hence $S_1=S_0$ and $S_2=0$. On the other hand, it is known that $S_0=3h$, see e.g. Theorem 70 in Fröhlich-Taylor: Algebraic number theory. This means that $S_1=3h$, i.e. the number of quadratic nonresidues in $A_1$ equals $$ n=\frac{N+1}{8}-\frac{3h}{2}=\frac{N+5}{8}-2h+\frac{h-1}{2}.$$ We conclude $$ \left(\frac{T}{N}\right)=(-1)^n=(-1)^{(N+5)/8}(-1)^{(h-1)/2}. $$ This is the claimed formula (with Jeremy Rouse's correction). 2. Now we show that $$\left(\frac{T}{N}\right) = \begin{cases} - 1 & \text{ if } N \equiv 7 \bmod 16, \\ + 1 & \text{ if } N \equiv 15 \bmod 16. \end{cases}$$ We proceed as before, but this time we get $S_3=S_2$ and $S_4=-S_1$, so that $S_2-S_1=S_3+S_4=S_0$. Together with $S_2+S_1=S_0$, this yields $S_1=0$ and $S_2=S_0$. Hence the formula for $n$ simplifies to $n=(N+1)/8$, and we conclude $$\left(\frac{T}{N}\right)=(-1)^n=(-1)^{(N+1)/8}.$$
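The four-case formula can also be checked numerically against a direct computation of $(T/N)$. In the sketch below (mine, not part of the answer), the class number is recovered from the character sum via $S_0 = 3h$ for $N \equiv 3 \bmod 8$, as quoted above from Fröhlich-Taylor; $N = 3$ is excluded, since $\mathbb{Q}(\sqrt{-3})$ has extra units.

```python
# Numerical check of the four-case formula for (T/N) against direct
# computation, for primes N = 3 mod 4 with 7 <= N < 2000.

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, by Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def T_symbol(N):
    """(T/N) with T the product of the odd numbers 1, 3, ..., (N-1)/2."""
    T = 1
    for k in range(1, (N - 1) // 2 + 1, 2):
        T = T * k % N
    return legendre(T, N)

def predicted(N):
    """The conjectured value of (T/N), by the four cases mod 16."""
    if N % 8 == 3:                      # N = 3 or 11 mod 16
        S0 = sum(legendre(a, N) for a in range(1, (N - 1) // 2 + 1))
        h = S0 // 3                     # S_0 = 3h in this case (N > 3)
        sign = (-1) ** ((h - 1) // 2)
        return -sign if N % 16 == 3 else sign
    else:                               # N = 7 or 15 mod 16
        return -1 if N % 16 == 7 else 1

mismatches = [N for N in range(7, 2000)
              if is_prime(N) and N % 4 == 3 and T_symbol(N) != predicted(N)]
print(mismatches)  # []
```

The empty list of mismatches is of course only consistent with the proof above, not independent of it, since the same character-sum identity is used to extract $h$.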
{ "source": [ "https://mathoverflow.net/questions/230996", "https://mathoverflow.net", "https://mathoverflow.net/users/3503/" ] }
231,078
It was exciting to hear that LIGO detected the merging of two black holes one billion light-years away. One of the black holes had 36 times the mass of the sun, and the other 29. After the merging the mass was that of 62 suns, with the rest converted to gravitational radiation. Could someone explain, without assuming a deep knowledge of general relativity, how these conclusions were reached?
Short version: LIGO matches their data onto waveforms calculated in numerical relativity. The mathematical study of black hole solutions plays a significant role in this; we couldn't trust our inferences if we didn't know a priori that black holes rapidly stabilize into a handful of low-parametric stationary configurations. Classical black holes are solutions to the vacuum Einstein's equations for the Lorentzian metric. The stationary ones are relatively simple things: Bunting & Mazur proved that the only stationary, axisymmetric solution of the coupled Einstein & Maxwell equations is the Kerr-Newman-et-al solution, characterized by mass $M$, angular momentum $J$, and charge $Q$ ($\simeq 0$ in practice). (For a review of the development of these ideas, see Robinson's Four decades of black hole uniqueness theorems and P. Mazur's Black Hole Uniqueness Theorems.) Non-stationary black hole solutions to Einstein's equations also exist, but they are more difficult to study, being analytically intractable. Our understanding of them relies on a mix of general considerations and numerical simulations. The basic picture is straightforward, however. Two black holes caught in each other's reach will rapidly collapse into a single black hole, and emit a spherically expanding pulse of gravitational radiation. By the time this radiation reaches Earth, only the leading spherical harmonic (the quadrupole moment) is potentially observable. Everything else dies off in powers of $r$. After gauge fixing & choosing coordinates, the quadrupole moment can be treated as a perturbation to the 3d Euclidean signature metric on a space-like hypersurface in our local Minkowski-like patch of spacetime. It has the form $$ h_{ij}(r, t) \simeq \text{transverse traceless part of } \frac{2G}{rc^4} \frac{d^2}{dt^2} I_{ij}(t-r/c) $$ where $I_{ij}$ is the quadrupole moment of the source.
Numerical simulations allow you to calculate the shape of these waveforms as a function of the initial and final parameters of a black hole merger. This is done via a modified ADM formalism; they gauge fix and time-evolve data from one spacelike hypersurface to another. I'm not expert enough to say anything interesting; it's all hard detail. (To learn more, I'd start at Living Reviews in Relativity , which is a remarkable journal.) LIGO's laser interferometry is sampling local perturbations to the spatial metric. These are more or less continuous measurements and (after the terrestrial backgrounds and detector noise are subtracted) fairly clean ones. You get a high resolution look at passing waveforms in the band of sensitivity. When LIGO sees a signal, they match it onto a database of pre-calculated waveforms, which encodes their priors about what observations are possible in their detectors, given what we known about likely gravitational radiation sources. The data is high enough resolution that you can match pretty accurately and read off the parameters of the merging binary black holes. If you want to get into the fun details, LIGO has released their data for the big collision, along with a Python-based tutorial in signal processing. https://losc.ligo.org/s/events/GW150914/GW150914_tutorial.html
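As a rough sanity check on the scales in the quadrupole formula quoted above, the standard leading-order "chirp" amplitude $h \simeq (4/r)(GM_c/c^2)^{5/3}(\pi f/c)^{2/3}$ with ballpark GW150914 numbers lands at the observed order of magnitude. This is a back-of-envelope estimate of mine, not LIGO's analysis; the chirp-mass expression and the frequency value near peak are the assumed inputs.

```python
import math

G, c = 6.674e-11, 2.998e8          # SI units
Msun, ly = 1.989e30, 9.461e15      # solar mass in kg, light-year in m

m1, m2 = 36 * Msun, 29 * Msun
Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2   # chirp mass, ~28 solar masses
r = 1.3e9 * ly                              # ~1.3 billion light-years
f = 150.0                                   # GW frequency near peak, Hz

# leading-order (angle-averaged) strain amplitude of an inspiralling binary
h = 4 / r * (G * Mc / c**2) ** (5 / 3) * (math.pi * f / c) ** (2 / 3)
print(f"h ~ {h:.1e}")   # order 1e-21, consistent with the reported peak strain
```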
{ "source": [ "https://mathoverflow.net/questions/231078", "https://mathoverflow.net", "https://mathoverflow.net/users/2807/" ] }
231,160
I couldn't find a list of open problems in Hopf algebras. So my question is the following: In the theory of Hopf algebras, what are the (big) open problems? Any kind of problem/question will be welcome. I am interested in any kind of problem involving Hopf algebras, say going from combinatorics or topology to abstract algebra.
There had been a workshop on Hopf algebras and related areas in September 2015. Its report (https://www.birs.ca/workshops//2015/15w5053/report15w5053.pdf) includes a large list of open problems and conjectures in Hopf algebras, for example "Is the antipode of a noetherian Hopf algebra bijective?".
{ "source": [ "https://mathoverflow.net/questions/231160", "https://mathoverflow.net", "https://mathoverflow.net/users/87680/" ] }
231,636
Let $S$ be a scheme, let $T$ be an $S$-scheme, and let $M$ be a set. Let $M_{S}$ be the disjoint union of $M$ copies of $S$, considered as an $S$-scheme. (Notation from [SGA 3, Exp. I, 1.8].) Then $S$-scheme morphisms $T \to M_{S}$ correspond to locally constant functions $T \to M$, i.e. continuous functions $T \to M$ where $M$ is given the discrete topology. The functor $G_{0} : \operatorname{Set} \to \operatorname{Sch}/S$ sending $M \mapsto M_{S}$ is a sort of "partial right adjoint" to the functor $F : \operatorname{Sch}/S \to \operatorname{Top}$ sending $(T,\mathscr{O}_{T}) \mapsto T$, i.e. taking the underlying topological space of the $S$-scheme. Can the functor $G_{0}$ be extended to a right adjoint $G : \operatorname{Top} \to \operatorname{Sch}/S$ of $F$? My naive guess is to take a topological space $X$, give $X_{S} := S \times X$ the product topology and set $\mathscr{O}_{X_{S}} := \pi^{-1}(\mathscr{O}_{S})$ where $\pi : X_{S} \to S$ is the projection. Then $(X_{S},\mathscr{O}_{X_{S}})$ is indeed a locally ringed space and gives the usual construction when $X$ is a discrete space, but in general it is not a scheme. Consider $S = \operatorname{Spec} k$ and $X = \{x_{1},x_{2}\}$ the two-point set with the trivial topology; then the only open subsets of $X_{S}$ as defined above are $\emptyset$ and $X_{S}$ itself, so that $X_{S}$ is not even a sober space. What if I restrict the target category of $F$ to the category of sober spaces? The product of sober spaces is sober, so it's no longer immediately clear to me whether the above construction fails.
Another way to see that the functor $\mathrm{Sch} \to \mathrm{Top}$ is not a left adjoint is to see that it does not preserve colimits. In this MO answer, Laurent Moret-Bailly gives an example of a pair of arrows $Z \rightrightarrows X$ in $\mathrm{Sch}$, such that the canonical map from $X$ to the coequalizer $Y$ is not surjective (as a function between the sets of points of the underlying spaces). Since in $\mathrm{Top}$ those canonical maps to the coequalizer are always surjective, this coequalizer cannot be preserved by the forgetful functor.
{ "source": [ "https://mathoverflow.net/questions/231636", "https://mathoverflow.net", "https://mathoverflow.net/users/15505/" ] }
231,922
Let $G_1$ and $G_2$ be the groups with the following presentations: $$G_1=\langle a,b \;|\; (ab)^2=a^{-1}ba^{-1}, (a^{-1}ba^{-1})^2=b^{-2}a, (ba^{-1})^2=a^{-2}b^2 \rangle,$$ $$G_2=\langle a,b \;|\; ab=(a^{-1}ba^{-1})^2, (b^{-1}ab^{-1})^2=a^{-2}b, (ba^{-1})^2=a^{-2}b^2 \rangle,$$ Are these groups torsion-free? Motivation: In both of these groups $1+a+b$ as an element of the group algebra $\mathbb{F}_2[G_i]$ over the field with two elements is a zero divisor. Thus one has a counterexample for the Kaplansky zero divisor conjecture if one of $G_i$s is torsion-free! $$(1+a+b)(b^{-1}a^{-2}ba^{-1}+a^{-1}ba^{-2}ba^{-1}+a^{-1}ba^{-1}+b^{-1}a^{-1}+a^{-1}b^2a^{-1}ba^{-1}+aba^{-1}ba^{-1}+1+a^{-2}ba^{-1}+a^{-1}b^{-1}a^{-1}+b+baba^{-1}ba^{-1}+ba^{-1}ba^{-1}+a^{-1}+b^{-1}a)=0$$ $$(1+a+b)(aba^{-1}ba^{-1}+a^{-1}b^2a^{-1}ba^{-1}+a^{-1}ba^{-1}+b^{-1}a^{-1}+b^{-1}a^{-2}ba^{-1}+ab+1+ba^{-1}ba^{-1}+baba^{-1}ba^{-1}+b+bab+a^{-2 }ba^{-1}+a^{-1}+b^{-1}a)=0$$
Denote $x=ab$, $y=a^{-1}ba^{-1}$. Then the first two relations of the first group are $x^2=y$, $y^2=(yx)^{-1}$. This implies $x^4=x^{-3}$ or $x^7=1$. So the group has torsion. I leave the second group as an exercise for the others.
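Spelling out the step above (a short expansion, using only the first two relations of the first group): substituting $y = x^2$ into $y^2\cdot yx = 1$ gives

```latex
x^2 = y,\qquad y^2 = (yx)^{-1}
\;\Longrightarrow\; y^2\cdot yx = 1
\;\Longrightarrow\; (x^2)^2\cdot x^2\cdot x = x^7 = 1,
```

so $x = ab$ has order dividing $7$, and the group has torsion unless $x = 1$.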
{ "source": [ "https://mathoverflow.net/questions/231922", "https://mathoverflow.net", "https://mathoverflow.net/users/19075/" ] }
232,087
In August 2012, a proof of the abc conjecture was proposed by Shinichi Mochizuki. However, the proof was based on an "Inter-universal Teichmüller theory" which Mochizuki himself pioneered. It was known from the beginning that it would take experts months to understand his work well enough to be able to verify the proof. Are there any updates on the validity of this proof?
In January, Vesselin Dimitrov posted to the arXiv a preprint showing that Mochizuki's work, if correct, would be effective. While this doesn't validate Mochizuki's work it does do a few things: It shows that people are understanding more of the proof. It gives another avenue through which to check whether Mochizuki's work is invalid. It makes Mochizuki's work that much more important.
{ "source": [ "https://mathoverflow.net/questions/232087", "https://mathoverflow.net", "https://mathoverflow.net/users/36586/" ] }
232,257
Kronheimer and Mrowka showed that the Khovanov homology detects the unknot. Bar-Natan showed a program to compute the Khovanov homology fast: there was no rigorous complexity analysis of the algorithm, but it is estimated by Bar-Natan that the algorithm runs in time proportional to the square root of the number of crossings, so it is even less than linear in the number of crossings. What I might understand from this, is that if we have: a proof for the correctness of Bar-Natan's algorithm a rigorous algorithm analysis showing the run time of the algorithm is polynomial or less then we have a proof that the unknotting problem is in P. I guess this is not really the case. Is what I assume here true? if not, why? (maybe Bar-Natan's algorithm is not deterministic?)
EDIT: Marc Lackenby has just announced a quasi-polynomial time algorithm. That is, given an $n$-crossing diagram, the algorithm takes $n^{O(\log n)}$ time to either find a spanning disk (proving the knot is trivial) or a hierarchy (proving the knot is non-trivial). PREVIOUS: A quick skim of the paper you linked to finds (in paragraph two of page two) comments by Bar-Natan suggesting that his improvement should take time $\exp(\sqrt{n})$, beating the naive algorithm (taking time $\exp(n)$). That is, the improvement hopefully makes an exponential algorithm subexponential. He is not claiming a sublinear algorithm.
{ "source": [ "https://mathoverflow.net/questions/232257", "https://mathoverflow.net", "https://mathoverflow.net/users/88227/" ] }
232,260
How can we show the following equation $$\sum_{n\text{ odd}}\frac1{n\sinh(n\pi)}=\frac{\ln 2}8\;?$$ I found it in a physics book (David J. Griffiths, Introduction to Electrodynamics, Chapter 3, Problem 3.48), which provided no proof.
Here is a proof, surely not the simplest. Using $$\frac1{\sinh(n\pi)}=\frac{2e^{-\pi n}}{1-e^{-2\pi n}}=2\sum_{m\text{ odd}}e^{-mn\pi}$$ and $$2\sum_{n\text{ odd}}\frac{x^n}{n}=-2\ln(1-x)+\ln(1-x^2)=\ln\frac{1+x}{1-x},\qquad |x|<1,$$ we get that $$\sum_{n\text{ odd}}\frac1{n\sinh(n\pi)}=\sum_{m\text{ odd}}\ln\frac{1+e^{-m\pi}}{1-e^{-m\pi}}=\ln\prod_{m\text{ odd}}\frac{1+e^{-m\pi}}{1-e^{-m\pi}}.$$ This is the same as $\ln\frac{G_1}{g_1}$, where $G_N$ and $g_N$ are the class invariants as in Chapter 34 of Berndt: Ramanujan's notebooks, Part V. It is known (see same chapter) that $G_1=1$ and $(g_1G_1)^8(G_1^8-g_1^8)=\frac{1}{4}$. Hence $g_1=2^{-1/8}$, and the original sum equals $$\sum_{n\text{ odd}}\frac1{n\sinh(n\pi)}=\ln\frac{G_1}{g_1}=\frac{\ln 2}{8}.$$
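As a numerical sanity check (not a proof): since the terms decay like $2e^{-n\pi}$, a handful of terms already matches $\frac{\ln 2}{8}$ to machine precision. A quick sketch in Python:

```python
import math

# Partial sum over odd n; terms decay like 2*exp(-n*pi), so truncating
# at n = 59 is already far beyond double precision.
total = sum(1.0 / (n * math.sinh(n * math.pi)) for n in range(1, 60, 2))

print(total, math.log(2) / 8)
assert abs(total - math.log(2) / 8) < 1e-12
```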
{ "source": [ "https://mathoverflow.net/questions/232260", "https://mathoverflow.net", "https://mathoverflow.net/users/75153/" ] }
232,930
It is a well-known result that all profinite groups arise as the Galois group of some field extension. What profinite groups are the absolute Galois group $\mathrm{Gal}(\overline{K}|K)$ of some extension $K$ over $\mathbb{Q}$? The answer is simple enough in the finite case: (Artin-Schreier) Only the trivial one and $C_2$. This might tempt us to think that absolute Galois groups are not that diverse. But 0-dimensional anabelian geometry shows us differently: (Neukirch-Uchida) There's as many different absolute Galois groups as non-isomorphic number fields. The answer might still turn out to be boring, but I haven't seen this discussed anywhere. The closest is Szamuely in his book "Galois Groups and Fundamental Groups", where he seems delighted by the fact that the absolute Galois group of $\mathbb{Q}(\sqrt{p})$ and $\mathbb{Q}(\sqrt{q})$ are non-isomorphic for primes $p\neq q$.
This is a very good question which is a big open problem. There are a number of theorems, some of them easy and some very difficult, and also a number of conjectures, restricting the class of groups which may turn out to be absolute Galois groups. But it seems that nobody has any idea about what a precise description of the class of absolute Galois groups might look like. One general position on which the experts seem to agree is that the right object to deal with is not just the Galois group $G_K=\operatorname{Gal}(\overline{K}|K)$ considered as an abstract profinite group, but the pair $(G_K,\chi_K)$, where $\chi_K\colon G_K\to \widehat{\mathbb Z}^*$ is the cyclotomic character of the group $G_K$ (describing its action on the group of roots of unity in $\overline K$). The question about being absolute Galois should be properly asked about pairs $(G,\chi)$, where $G$ is a profinite group and $\chi\colon G\to \widehat{\mathbb Z}^*$ is a continuous homomorphism, rather than just about the groups $G$. For example, here is an important and difficult theorem restricting the class of absolute Galois groups: for any field $K$, the cohomology algebra $H^*(G_K,\mathbb F_2)$ is a quadratic algebra over the field $\mathbb F_2$. This is (one of the formulations of) the Milnor conjecture, proven by Rost and Voevodsky. More generally, for any field $K$ and integer $m\ge 2$, the cohomology algebra $\bigoplus_n H^n(G_K,\mu_m^{\otimes n})$ is quadratic, too, where $\mu_m$ denotes the group of $m$-th roots of unity in $\overline K$ (so $G_K$ acts on $\mu_m^{\otimes n}$ by the character $\chi_K^n$). This is the Bloch-Kato conjecture, also proven by Rost and Voevodsky.
Here is a quite elementary general theorem restricting the class of absolute Galois groups: for any field $K$ of at most countable transcendence degree over $\mathbb Q$ or $\mathbb F_p$, the group $G_K$ has a decreasing filtration $G_K\supset G_K^0\supset G_K^1\supset G_K^2\supset\dotsb$ by closed subgroups normal in $G_K$ such that $G_K=\varprojlim_n G_K/G_K^n$ and $G_K/G_K^0$ is either the trivial group or $C_2$, while the quotients $G_K^n/G_K^{n+1}$, $n\ge0$, are closed subgroups of free profinite groups. (Groups of the latter kind are called "projective profinite groups" or "profinite groups of cohomological dimension $\le1$".) One can get rid of the assumption of countability of the transcendence degree by considering filtrations indexed by well-ordered sets rather than just by the integers. Here is a conjecture about Galois groups of arbitrary fields (called "the generalized/strengthened version of Bogomolov's freeness conjecture"). For any field $K$, consider the field $L=K[\sqrt[\infty]K]$ obtained by adjoining to $K$ all the roots of all the polynomials $x^n-a$, where $n\ge2$ and $a\in K$. In particular, when the field $K$ contains all the roots of unity, the field $L$ is (the maximal purely inseparable extension of) the maximal abelian extension of $K$. Otherwise, you may want to call $L$ "the maximal radical extension of $K$". The conjecture claims that the absolute Galois group $G_L=\operatorname{Gal}(\overline{L}|L)$ is a projective profinite group.
{ "source": [ "https://mathoverflow.net/questions/232930", "https://mathoverflow.net", "https://mathoverflow.net/users/43108/" ] }
233,483
Now that AlphaGo has just beaten Lee Sedol in Go and Deep Blue has beaten Garry Kasparov in chess in 1997, I wonder what advantage humans have over computers in mathematics? More specifically, are there any fundamental reasons why a machine learning algorithm trained on a large database of formal proofs couldn't reach a level of skill that is comparable to humans? What this question is not about We know that automated theorem proving is in general impossible (finding proofs is semi-decidable). However, humans are still reasonably good at this task. I'm not asking for a general procedure for finding proofs but merely for an algorithm that could mimic human capability at this task. Another caveat is that most written mathematics at the moment is in a form that is not comprehensible to computers. There do exist databases of formal proofs (such as Metamath , Mizar , AFP ) and, even though they are quite small at the moment, it is conceivable that in future we could have a reasonably sized database. I'm not asking whether you believe that a substantial amount of mathematics will be formalized one day -- I'm willing to make this assumption. Finally, there is the issue of the sheer machine power required to run this. Again, I'm willing to assume that we have a large enough computer to train an AlphaGo -style algorithm and then use reinforcement learning for "practice runs".
The day will come when not only will computers be doing good mathematics, but they will be doing mathematics beyond the ability of (non-enhanced) humans to understand. Denying it is understandable, but ultimately as short-sighted as it was to deny computers could ever win at Go. This is not as depressing as it might sound, as we humans don't need to be left behind. Direct brain-computer interfaces will come too, and even the distinction between them will become blurred. COMMENT: We were amazed when someone built a machine that could travel faster than a horse, then amazed when a machine let us fly into the air, and even go to the moon, then amazed again that millions of us could carry a tiny machine that identifies our position on the planet within meters and lets us talk instantly to anyone else on the planet, and amazed again that someone invented a machine that can edit life forms to make new life forms. But the very idea that a machine could do mathematics, that one is surely impossible! (By the way, I love the down-votes.)
{ "source": [ "https://mathoverflow.net/questions/233483", "https://mathoverflow.net", "https://mathoverflow.net/users/37211/" ] }
233,833
For $X$ a topological space, from the short exact sequence $$ 0 \rightarrow \mathbb{Z}/2 \rightarrow \mathbb{Z}/4 \rightarrow \mathbb{Z}/2 \rightarrow 0 $$ we get a Bockstein homomorphism $$H^i(X, \mathbb{Z}/2) \rightarrow H^{i+1}(X, \mathbb{Z}/2).$$ This is also known as the Steenrod square $Sq^1$. Now suppose instead that $X$ is a variety over a (not algebraically closed) field. We still get a sequence $$ 0 \rightarrow \mathbb{Z}/2 \rightarrow \mathbb{Z}/4 \rightarrow \mathbb{Z}/2 \rightarrow 0 $$ inducing a Bockstein homomorphism in etale cohomology $$H^i_{et}(X, \mathbb{Z}/2) \rightarrow H^{i+1}_{et}(X, \mathbb{Z}/2).$$ However, there is also a short exact sequence $$ 0 \rightarrow \mathbb{Z}/2 \rightarrow \mathbb{Z}/4(1) \rightarrow \mathbb{Z}/2 \rightarrow 0 $$ where $\mathbb{Z}/4(1)$ denotes the Tate twist, probably less confusingly written as $\mu_4$. This also induces a [presumably different] Bockstein map in etale cohomology $$H^i_{et}(X, \mathbb{Z}/2) \rightarrow H^{i+1}_{et}(X, \mathbb{Z}/2).$$ Question : Which of these is the "right" Bockstein homomorphism? This question is a little open-ended. My main criterion for "right" is that the map be "$Sq^1$ on etale cohomology", meaning it fits into an action of the Steenrod algebra. There are other possible criteria. For instance, a literature search revealed that people have defined notions of "Bockstein homomorphism" and "Steenrod operations" on Chow rings, motivic cohomology, ... so "right" could also mean "compatible with these other things". (Hopefully the answer is the same.) Relevant literature: Jardine's paper https://link.springer.com/chapter/10.1007%2F978-94-009-2399-7_5 gives a version with $\mathbb{Z}/4$. Guillou and Weibel ( http://arxiv.org/pdf/1301.0872.pdf ) describe a $Sq^1 : H^k(\mu_2^i) \rightarrow H^{k+1}(\mu_2^{2i})$, so the weight has doubled. I don't know if it fits into a Bockstein story. Unfortunately, I can't really parse what's happening in these papers.
Some motivation/example: For $X \subset Y$ a closed embedding of smooth varieties of codimension $r$, I have a cycle class $[X] \in H^{2r}_X(Y; \mathbb{Z}/2)$. I would like to show that $Sq^1 [X]=0$ (it may not be true). Since the cycle class lifts to $H^{2r}_X(Y;\mathbb{Z}_{2}(r))$, this depends on which sequence is related to $Sq^1$.
You may want to have a look at P. Brosnan and R. Joshua. Comparison of motivic and simplicial operations in mod-$\ell$ motivic and étale cohomology. In: Feynman amplitudes, periods and motives, Contemporary Math. 648, 2015, 29-55. There are two sequences of cohomology operations in motivic and étale cohomology which can rightfully be called Steenrod operations. One comes from a "geometric" model of the classifying space of the symmetric groups, the other one from a "simplicial" model. The first one is related to Voevodsky's motivic Steenrod algebra, the second one to the more classical Steenrod algebra (as in Denis Nardin's answer). The paper mentioned above provides a comparison between these sequences of cohomology operations (see Theorem 1.1, part (iii), for étale cohomology) in terms of cup products with powers of a motivic Bott element in $H^0(\operatorname{Spec} k,\mathbb{Z}/\ell\mathbb{Z}(1))$. Addendum: Actually, the Bockstein operations are the same in both sequences of cohomology operations, and they are the ones defined by the $\mathbb{Z}/4\mathbb{Z}$ extension. (So probably the first one should be the "right" one.) This follows from the paper of Brosnan and Joshua as well as the paper by Guillou and Weibel cited in the question. Note that $\mu_2^{\otimes i}\cong\mu_2^{\otimes 2 i}\cong\mathbb{Z}/2\mathbb{Z}$ and the $Sq^1$ in the paper of Guillou-Weibel actually fits the Bockstein story. Another note: If the motivation/example is defined over a field with a $4$-th root of unity, then $\mathbb{Z}/4\cong\mu_4$ (compatibly with the extensions), and so both operations you defined agree.
{ "source": [ "https://mathoverflow.net/questions/233833", "https://mathoverflow.net", "https://mathoverflow.net/users/84144/" ] }
233,921
Apologies in advance if this question is too elementary for MO. I didn't find an explanation of these ideas in any algebraic geometry books (I don't know French). The following is an excerpt from this archive : Thierry Coquand recently asked me "In your "Comments on the Development of Topos Theory" you refer to a simpler alternative definition of "scheme" due to Grothendieck. Is this definition available at some place?? Otherwise, it it possible to describe shortly the main idea of this alternative definition??" Since several people have asked the same question over the years, I prepared the following summary which, I hope, will be of general interest: The 1973 Buffalo Colloquium talk by Alexander Grothendieck had as its main theme that the 1960 definition of scheme (which had required as a prerequisite the baggage of prime ideals and the spectral space, sheaves of local rings, coverings and patchings, etc.), should be abandoned AS the FUNDAMENTAL one and replaced by the simple idea of a good functor from rings to sets. The needed restrictions could be more intuitively and more geometrically stated directly in terms of the topos of such functors, and of course the ingredients from the "baggage" could be extracted when needed as auxiliary explanations of already existing objects, rather than being carried always as core elements of the very definition. Thus his definition is essentially well-known, and indeed is mentioned in such texts as Demazure-Gabriel, Waterhouse, and Eisenbud; but it is not carried through to the end, resulting in more complication, rather than less. I myself had learned the functorial point of view from Gabriel in 1966 at the Strasbourg-Heidelberg-Oberwolfach seminar and therefore I was particularly gratified when I heard Grothendieck so emphatically urging that it should replace the one previously expounded by Dieudonne' and himself. He repeated several times that points are not mere points, but carry Galois group actions. 
I regard this as a part of the content of his opinion (expressed to me in 1989) that the notion of topos was among his most important contributions. A more general expression of that content, I believe, is that a generalized "gros" topos can be a better approximation to geometric intuition than a category of topological spaces, so that the latter should be relegated to an auxiliary position rather than being routinely considered as "the" default notion of cohesive space. (This is independent of the use of localic toposes, a special kind of petit which represents only a minor modification of the traditional view and not even any modification in the algebraic geometry context due to coherence). It is perhaps a reluctance to accept this overthrow that explains the situation 30 years later, when Grothendieck's simplification is still not widely considered to be elementary and "basic". I'm trying to slowly digest the last paragraph. As a novice in algebraic geometry I'm always looking for geometric and "philosophical" intuition, so I very much want to understand why Grothendieck was insistent on points having Galois group actions. Why, geometrically (or philosophically?) is it essential and important that points have Galois group actions?
Suppose $k$ is a field, not necessarily algebraically closed. $\text{Spec } k$ fails to behave like a point in many respects. Most basically, its "finite covers" (Specs of finite etale $k$-algebras) can be interesting, and are controlled by its absolute Galois group / etale fundamental group. For example, $\text{Spec } \mathbb{F}_q$, the Spec of a finite field, has the same finite covering theory as $S^1$, which reflects (and is equivalent to) the fact that its absolute Galois group is the profinite integers $\widehat{\mathbb{Z}}$. (So this suggests that one can think of $\text{Spec } \mathbb{F}_q$ itself as behaving like a "profinite circle.") More generally, suppose you want to classify objects of some kind over $k$ (say, vector spaces, algebras, commutative algebras, Lie algebras, schemes, etc.). A standard way to do this is to instead classify the base changes of those objects to the separable closure $k_s$, then apply Galois descent. The topological picture is that $\text{Spec } k$ behaves like $BG$ where $G$ is the absolute Galois group, $\text{Spec } k_s$ behaves like a point, or if you prefer like $EG$, and the map $$\text{Spec } k_s \to \text{Spec } k$$ behaves like the map $EG \to BG$. In the topological setting, families of objects over $BG$ are (when descent holds) the same thing as objects equipped with an action of $G$. The analogous fact in algebraic geometry is that objects over $\text{Spec } k$ are (when Galois descent holds) the same thing as objects over $\text{Spec } k_s$ equipped with homotopy fixed point data, which is a generalization of being equipped with a $G$-action which reflects the fact that $k_s$ itself has a $G$-action. (I need to be a bit careful here about what I mean by "$G$-action" to take into account the fact that $G$ is a profinite group. For simplicity you can pretend that I am instead talking about a finite extension $k \to L$, although I'll continue to write as if I'm talking about the separable closure. 
Alternatively, pretend I'm talking about $k = \mathbb{R}, k_s = \mathbb{C}$.) The classification of finite covers is the simplest place to see this: the category of finite covers of $\text{Spec } k_s$ is the category of finite sets, with the trivial $G$-action, so homotopy fixed point data is the data of an action of $G$, and we get that finite covers of $\text{Spec } k$ are classified by finite sets with $G$-action. But Galois descent holds in much greater generality, and describes a very general sense in which objects over $k$ behave like objects over $k_s$ with a Galois action in a twisted sense.
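As a toy sanity check of the dictionary "finite covers of $\text{Spec } k$ = finite sets with $G$-action" in the case $k=\mathbb{R}$, $G=C_2$ (a hypothetical illustration, not taken from the answer): isomorphism classes of $C_2$-sets of size $n$ should match degree-$n$ étale $\mathbb{R}$-algebras, i.e. products $\mathbb{R}^a \times \mathbb{C}^b$ with $a+2b=n$, and both counts are $\lfloor n/2\rfloor + 1$.

```python
from itertools import permutations

def c2_set_classes(n):
    # A C2-action on an n-element set is an involution; its isomorphism
    # class is determined by its cycle type, i.e. the number of fixed points.
    types = set()
    for p in permutations(range(n)):
        if all(p[p[i]] == i for i in range(n)):  # p is an involution
            types.add(sum(1 for i in range(n) if p[i] == i))
    return len(types)

def etale_R_algebras(n):
    # Degree-n etale R-algebras are R^a x C^b with a + 2b = n,
    # so there is one for each choice of b in 0..n//2.
    return sum(1 for b in range(n // 2 + 1))

for n in range(1, 7):
    assert c2_set_classes(n) == etale_R_algebras(n) == n // 2 + 1
```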
{ "source": [ "https://mathoverflow.net/questions/233921", "https://mathoverflow.net", "https://mathoverflow.net/users/69037/" ] }
234,051
I am interested in applications of algebraic geometry to machine learning. I have found some papers and books, mainly by Bernd Sturmfels on algebraic statistics and machine learning. However, all this seems to be only applicable to rather low dimensional toy problems. Is this impression correct? Is there something like computational algebraic machine learning that has practical value for real world problems, may be even very high dimensional problems, like computer vision?
One useful remark is that dimension reduction is a critical problem in data science for which there are a variety of useful approaches. It is important because a great many good machine learning algorithms have complexity which depends on the number of parameters used to describe the data (sometimes exponentially!), so reducing the dimension can turn an impractical algorithm into a practical one. This has two implications for your question. First, if you invent a cool new algorithm then don't worry too much about the dimension of the data at the outset - practitioners already have a bag of tricks for dealing with it (e.g. Johnson-Lindenstrauss embeddings, principal component analysis, various sorts of regularization). Second, it seems to me that dimension reduction is itself an area where more sophisticated geometric techniques could be brought to bear - many of the existing algorithms already have a geometric flavor. That said, there are a lot of barriers to entry for new machine learning algorithms at this point. One problem is that the marginal benefit of a great answer over a good answer is not very high (and sometimes even negative!) unless the data has been studied to death. The marginal gains are much higher for feature extraction, which is really more of a domain-specific problem than a math problem. Another problem is that many new algorithms which draw upon sophisticated mathematics wind up answering questions that nobody was really asking, often because they were developed by mathematicians who tend to focus more on techniques than applications. Topological data analysis is a typical example of this: it has generated a lot of excitement among mathematicians but most data scientists have never heard of it, and the reason is simply that higher order topological structures have not so far been found to be relevant to practical inference / classification problems.
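The Johnson-Lindenstrauss random projections mentioned above fit in a few lines; this is a generic sketch (the dimensions, the Gaussian data, and the distortion tolerance are made up for illustration), not code from any particular library:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 points in 1000 dimensions, projected down to 100 dimensions.
n, d, k = 20, 1000, 100
X = rng.standard_normal((n, d))

# Gaussian random projection: scaling by 1/sqrt(k) preserves squared
# distances in expectation (Johnson-Lindenstrauss).
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R

# Pairwise distances survive with only moderate distortion.
for i in range(n):
    for j in range(i + 1, n):
        orig = np.linalg.norm(X[i] - X[j])
        proj = np.linalg.norm(Y[i] - Y[j])
        assert 0.5 < proj / orig < 1.5
```

A downstream learner whose cost depends on the ambient dimension now works with $k=100$ coordinates instead of $d=1000$, at the price of this bounded distortion.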
{ "source": [ "https://mathoverflow.net/questions/234051", "https://mathoverflow.net", "https://mathoverflow.net/users/89192/" ] }
234,108
Prime gap studies seem to be one of the most fertile topics in analytic number theory, pursued at length and in many directions: lower bounds (recent works by Maynard, Tao et al. [1]) upper bounds (recent works by Zhang and the whole Polymath 8 project [2]) statistics on most frequent gaps ("jumping champions" [3]) mean gaps (prime number theorem) median gaps (Erdös-Kac and related conjectures) I keep wondering: why so much effort? Indeed it can be for pure knowledge of the distribution and properties of prime numbers for their own sake, and that would already be a sufficient motivation, but is there any hope for further applications and consequences? What I am thinking about is the following. Since Weil's work on explicit formulae, knowledge of the distribution of primes is useful for deducing properties of an operator's spectrum or of zeros of L-functions. For instance, all the works since Montgomery around pair correlation of zeros and $n$-level density estimates, and their relations with properties of primes (underlined by Montgomery and Goldston [4]). So my question, mainly related to the jumping champions problem [3], is: could we, by means of explicit formulae or anything else, deduce from prime gap properties some properties outside this apparently very specific field (zeros of zeta functions, spectral information, statistics of families, etc.)? Hoping the question will not be an affront to those for whom the answers are obvious and trivial, I keep impatiently waiting for possible motivations and external relations for this prime gap world ;) Best regards == References == [1] James Maynard, Small gaps between primes, Ann. of Math. (2) 181 (2015), no. 1, 383--413. [2] Yitang Zhang, Bounded gaps between primes, Ann. of Math. (2) 179 (2014), no. 3, 1121--1174. [3] Andrew Odlyzko, Michael Rubinstein, and Marek Wolf, Jumping champions, Experiment. Math. 8 (1999), no. 2, 107--118. [4] D. A. Goldston, S. M. Gonek, A. E. Özlük, and C. Snyder, On the pair correlation of zeros of the Riemann zeta-function, Proc. London Math. Soc. (3) 80 (2000), no. 1, 31--49.
Since you ask about zeta zeros: the Riemann hypothesis implies the gap is $O(\sqrt{p_n} \log p_n)$. A larger gap would give you a nontrivial zero off the critical line, disproving RH. On the other hand, bounding the gap by $O(\operatorname{polylog}(p_n))$ would solve the open problem of deterministically finding primes in polynomial time. For practical purposes, some cryptographic algorithms need to find primes efficiently. Large gaps may break some implementations.
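To make the algorithmic remark concrete (a sketch under the stated hypothesis; the helper names are my own): given a deterministic polynomial-time primality test, a polylog bound on gaps would make the naive upward scan a polynomial-time way to produce an $n$-bit prime, since the scan would visit only polylog-many candidates.

```python
def is_prime(n):
    # Deterministic Miller-Rabin; this fixed base set is known to be
    # correct for all n below roughly 3.3 * 10**24.
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    # If prime gaps were O(polylog n), this scan would terminate after
    # polylog-many polynomial-time tests, i.e. in polynomial time overall.
    while not is_prime(n):
        n += 1
    return n

print(next_prime(2**40))  # first prime at or above 2**40
```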
{ "source": [ "https://mathoverflow.net/questions/234108", "https://mathoverflow.net", "https://mathoverflow.net/users/43737/" ] }
234,183
In short: What can we say about the collection of all solutions of an ODE when we don't have uniqueness? When we teach a first course in ODEs, we look at the equation $f:D\to \mathbb{R}, \quad D\subseteq \mathbb{R}^2,$ $y'(x) = f(x,y),\quad y(x_0 ) = y_0, \quad (x_0,y_0 )\in D$ and prove two theorems: If $f\in C(D)$, then there is a neighbourhood of $x_0$ on which a solution $y(x)$ exists (Peano's theorem). If $f$ is moreover Lipschitz in $y$, then there is a neighbourhood of $x_0$ on which a solution $y(x)$ exists and is unique (the Picard-Lindelöf theorem). The natural question, which I tried to "google" and did not find an answer to, is: What can generally be said about the set of all solutions when there is no uniqueness, i.e. when $f$ is continuous but not Lipschitz?
In most common applications, uniqueness and the Lipschitz condition are violated only at certain points, which form a curve that is a "singular solution". The keyword is Clairaut equation . The simplest example is $y'=y^{1/3}$; the singular solution is $y=0$. Such a situation usually occurs when one considers $F(y',y,t)=0$ where $F$ is analytic. For a modern exposition, see MR1503299 Ritt, J. F., On the singular solutions of algebraic differential equations, Ann. of Math. (2) 37 (1936), no. 3, 552-617. The Lipschitz condition in the uniqueness theorem can be slightly relaxed to $$|F(x,t)-F(x',t)|\leq\omega(|x-x'|),$$ where $$\int_{0+}\frac{ds}{\omega(s)}=\infty.$$ This is due to Osgood. (See Ph. Hartman, Ordinary Differential Equations, Birkhäuser 1982.) A remarkable theorem of Władysław Orlicz says the following. Consider the differential equation $x'=f(x,t)$, where $x$ and $f$ are vector functions. In the space of all continuous vector functions $f(x,t)$, those functions for which the differential equation has at least one non-uniqueness point form a set of first category. (Convergence in the space is uniform on compact subsets.) This shows that in the generic situation you do not need the Lipschitz condition for uniqueness. Ref. W. Orlicz, Zur Theorie der Differentialgleichung $y'=f(x,y)$, Bull. de l'Acad. Polon. des Sciences, Ser. A, 1932, 221-228. (Of course the set of Lipschitz functions is also of the first category:-) The same holds for functions satisfying Osgood's condition above. A theorem of H. Kneser says that if you have non-uniqueness at some point, then the set of solutions passing through this point is a continuum (a closed connected set). Über die Lösungen eines Systems gewöhnlicher Differentialgleichungen, das der Lipschitzschen Bedingung nicht genügt. (German) JFM 49.0302.03 Berl. Ber. 1923, 171-174 (1923). The scalar autonomous equation $x'=f(x)$ is exceptional: if $f(x_0)\neq 0$ and $f$ is continuous, then the solution with $x(0)=x_0$ is unique.
A reference for general discussion of non-uniqueness (and its relevance for science) is V. I. Yudovich, Mathematical models for natural sciences. Lecture course. Moscow, 2004. (In Russian, unfortunately not translated into Latin-alphabet languages, and not reviewed on Mathscinet). In particular, Yudovich asks for an explicit condition of uniqueness which is satisfied by a residual set of continuous functions. No such condition is known. (Residual means that the complement is of the 1-st category).
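The scalar example $y'=y^{1/3}$ mentioned above can be checked numerically; here is a minimal sketch (not from the answer) verifying that both $y\equiv 0$ and $y=(2x/3)^{3/2}$ solve the initial value problem through $(0,0)$:

```python
import math

def f(y):
    # Right-hand side of y' = y**(1/3); continuous everywhere
    # but not Lipschitz at y = 0.
    return math.copysign(abs(y) ** (1.0 / 3.0), y)

def y2(x):
    # A second solution through (0, 0) besides y = 0.
    return (2.0 * x / 3.0) ** 1.5

h = 1e-6
for x in (0.5, 1.0, 2.0):
    deriv = (y2(x + h) - y2(x - h)) / (2 * h)  # numerical derivative of y2
    assert abs(deriv - f(y2(x))) < 1e-5        # y2' = y2**(1/3)
assert f(0.0) == 0.0                            # y = 0 is also a solution
```

Both curves satisfy the equation and pass through the origin, so the Picard-Lindelöf conclusion genuinely fails once the Lipschitz hypothesis does.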
{ "source": [ "https://mathoverflow.net/questions/234183", "https://mathoverflow.net", "https://mathoverflow.net/users/42864/" ] }
234,320
A while ago a professor of mine said something along the lines of No matter how many algebraic invariants we attach to topological spaces, there will always be nonhomeomorphic spaces agreeing on all their invariants Where algebraic invariants are functors from $Top$ to $Grp$ I guess. I've not been able to find this result anywhere online (partly because I don't know what to google), so I'm looking for a source where this result is discussed.
"perhaps he is implying some even stronger result": He is referring to the following result of Peter Freyd (the Freyd uncertainty principle ): The homotopy category of spaces $HoTop$ does not admit a faithful functor to the category of sets $Set$. Specifically, for any functor $T: Top_* \to Set$ from based spaces to sets which is homotopy invariant, there exists a map $f: X \to Y$ such that $f$ is not null-homotopic, but $T(f) = T(\ast)$. Here $\ast$ is the null map to the basepoint of $Y$. In particular, any algebraic invariant is a set-valued homotopy invariant. This includes homotopy groups, cohomology, cohomology and homotopy operations and whatever you can think of. Freyd's theorem implies that we cannot describe the homotopy category as a category of algebras for some algebraic theory $\mathcal{T}$, since any algebraic category is concrete. Fun fact: Freyd's theorem essentially relies only on general set-theoretic arguments and cardinal counting. Non-counter-example: Whitehead's theorem states that if a map $f: X \to Y$ between pointed connected CW-complexes induces an isomorphism on all homotopy groups $\pi_i, \ i=1, 2,\dots$, then $f$ is a homotopy equivalence between $X$ and $Y$. Note that you still can't discriminate spaces by looking just at the collection of homotopy groups: there can be $X$ and $Y$ such that $\pi_i(X) = \pi_i(Y)$ for all $i$, but this isomorphism is not induced by any actual map $F: X\to Y$ and the spaces are not homotopy equivalent. The simplest example is $\Bbb R \Bbb P^2 \times S^3$ and $S^2 \times \Bbb R \Bbb P^3$. They both have a double covering by $S^2 \times S^3$ and thus have the same homotopy groups, but their cohomology is not isomorphic. On second thought, I can't see why Freyd's theorem would imply actual non-discriminable spaces without any extra conditions on the invariant. Perhaps someone can fill this gap, but imho the non-discrimination of maps is bad enough.
Since this theorem states indiscriminability even for non-homotopy equivalent spaces, it in particular does so for non-homeomorphic spaces. This could be weaker than what your professor implied since we could in principle consider invariants of spaces which are not homotopy invariants. However, this requires some more specifics on what we would call "algebraic invariants" since e.g. the lattice of open subsets looks like a perfectly fine algebraic invariant to me, but it certainly discriminates spaces.
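The pair of spaces mentioned above can even be checked mechanically: over $\Bbb Z$ the Künneth formula computes $H_*(X \times Y)$ from $H_*(X)$ and $H_*(Y)$, and it already separates $\Bbb R \Bbb P^2 \times S^3$ from $S^2 \times \Bbb R \Bbb P^3$ in degree 2 (homology works just as well as cohomology for this purpose). Here is a small self-contained sketch, with an ad hoc encoding of finitely generated abelian groups that is entirely my own ($0$ stands for $\Bbb Z$, $n > 1$ for $\Bbb Z/n$):

```python
from math import gcd

# A finitely generated abelian group is a list of cyclic summands:
# 0 encodes Z, and n > 1 encodes Z/n.

def tensor(a, b):
    """Tensor product of two cyclic groups, as a list of summands."""
    if a == 0 and b == 0:
        return [0]                                   # Z (x) Z = Z
    if a == 0 or b == 0:
        return [max(a, b)]                           # Z (x) Z/n = Z/n
    g = gcd(a, b)
    return [g] if g > 1 else []                      # Z/m (x) Z/n = Z/gcd(m,n)

def tor(a, b):
    """Tor_1 of two cyclic groups: nonzero only when both are finite."""
    if a == 0 or b == 0:
        return []
    g = gcd(a, b)
    return [g] if g > 1 else []

def kunneth(HX, HY):
    """Integral homology of X x Y from H_*(X) and H_*(Y) via Kunneth."""
    H = [[] for _ in range(len(HX) + len(HY) - 1)]
    for i, A in enumerate(HX):
        for j, B in enumerate(HY):
            for a in A:
                for b in B:
                    H[i + j] += tensor(a, b)
                    if i + j + 1 < len(H):
                        H[i + j + 1] += tor(a, b)    # Tor shifts degree by 1
    return [sorted(h) for h in H]

RP2 = [[0], [2], []]          # H_*(RP^2)
RP3 = [[0], [2], [], [0]]     # H_*(RP^3)
S2  = [[0], [], [0]]          # H_*(S^2)
S3  = [[0], [], [], [0]]      # H_*(S^3)

print(kunneth(RP2, S3))   # [[0], [2], [], [0], [2], []]
print(kunneth(S2, RP3))   # [[0], [2], [0], [0, 2], [], [0]]
```

The two products differ already in $H_2$ ($0$ versus $\Bbb Z$), even though both are double-covered by $S^2 \times S^3$ and so have isomorphic homotopy groups.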
{ "source": [ "https://mathoverflow.net/questions/234320", "https://mathoverflow.net", "https://mathoverflow.net/users/64302/" ] }
234,492
In 1991, Kapranov and Voevodsky published a proof of a now famously false result, roughly saying that the homotopy category of spaces is equivalent to the homotopy category of strict infinity categories that are weak infinity groupoids. In 1998 Carlos Simpson showed that their main result could not be true, but did not explain what was precisely wrong in the paper of Kapranov and Voevodsky. In fact, as explained by Voevodsky here , for a long time after that, Voevodsky apparently thought his proof was correct and that Carlos Simpson had made a mistake, until he finally found a mistake in his own paper in 2013! Despite being false, the paper by Kapranov and Voevodsky contains a lot of very interesting things; moreover, the general strategy of the proof, using Johnson's higher categorical pasting diagrams as generalized Moore paths to strictify an infinity groupoid, sounds like a very reasonable idea, and it is a bit of a surprise, at least to me, that it does not work. In fact, when Carlos Simpson proved that the main theorem of Kapranov and Voevodsky's paper was false, he conjectured that their proof could allow one to show that the homotopy category of spaces is equivalent to the homotopy category of strict non-unital infinity categories that are weak (unital) infinity groupoids (this is now known as Simpson's conjecture). So: Can someone explain what precisely goes wrong in this paper?
Here is my guess. To compare spaces with their notion of strict $\infty$-groupoids (in which everything is strict except inverses) Kapranov and Voevodsky use an intermediate category of Kan diagrammatic sets, which they show to be equivalent to both spaces and strict $\infty$-groupoids (after inverting a suitable collection of weak equivalences). Whatever Kan diagrammatic sets are, they seem to be a non-strict model, and so let's assume that they do form a model for spaces. In this case the mistake must be in the comparison of Kan diagrammatic sets and strict $\infty$-groupoids (Theorem 3.7). This theorem relies on Proposition 3.5 which compares the homotopy groups of a Kan diagrammatic set $X$ and the homotopy groups of the strict $\infty$-groupoid $\Pi(X)$ generated from $X$. This comparison, in turn, is based on Lemma 3.4 which says that any morphism in $\Pi(X)$ can be realized via a single pasting diagram in $X$, which are in some sense the cells of $X$ (since $X$ is a presheaf on pasting diagrams). But this claim doesn't seem to be true, and the reason is that when one generates the $\infty$-groupoid $\Pi(X)$ one doesn't only freely add morphisms, but also identifies pairs of morphisms which are supposed to be the same in a strict $\infty$-category structure. This means, for example, that if two different pasting diagrams coincide after this identification, then the identity morphism between them might not be a pasting diagram in $X$ (or at least, one would have to explicitly argue why this would be the case). The proof of Lemma 3.4 seems to be vague enough to allow for this subtlety to slip. All of this could be wrong of course, but if I had to pick one possibly problematic lemma it would be this Lemma 3.4.
{ "source": [ "https://mathoverflow.net/questions/234492", "https://mathoverflow.net", "https://mathoverflow.net/users/22131/" ] }
234,777
My question concerns the argument given by Gauss in his "geometric proof" of the fundamental theorem of Algebra. At one point he says (I am reformulating) : A branch (a component) of any algebraic curve either comes back on itself (I suppose that means : it is a closed curve) or it goes to infinity on both sides. I have a geometric intuition of what it means, but I am not sure where or how to get a "modern" formulation of such result, and a proof.
Gauss actually appends a footnote to this statement "if a branch of an algebraic curve enters a limited space, it necessarily has to leave it again" (Latin original follows below), in which he argues that: It seems to be well demonstrated that an algebraic curve neither ends abruptly (as happens in the transcendental curve $y = 1/\log x$ ), nor loses itself after an infinite number of windings in a point (like a logarithmic spiral). As far as I know nobody has ever doubted this, but if anybody requires it, I take it upon me to present, on another occasion, an indubitable proof. As explained by Harel Cain (see also Steve Smale ), this outline of the proof shows that Gauss's geometric proof of the FTA is based on assumptions about the branches of algebraic curves, which might appear plausible to geometric intuition, but are left without any rigorous proof by Gauss. It took until 1920 for Alexander Ostrowski to show that all assumptions made by Gauss can be fully justified. Alexander Ostrowski. Über den ersten und vierten Gauss'schen Beweis des Fundamentalsatzes der Algebra. (Nachrichten der Gesellschaft der Wissenschaften Göttingen, 1920). Here is the Latin original and an English translation of Carl Friedrich Gauss. Demonstratio nova theorematis omnem functionem algebraicam rationalem integram unius variabilis in factores reales primi vel secundi gradus resolvi posse (PhD thesis, Universität Helmstedt, 1799); paragraph 21 and footnote 10. Iam ex geometria sublimori constat, quamuis curvam algebraicam, (sive singulas cuiusuis curvae algebraicae partes, si forte e pluribus composita sit) aut in se redientem aut utrimque in infinitum excurrentem esse, adeoque si ramus aliquis curvae algebraicae in spatium definitum intret, eundem necessario ex hoc spatio rursus alicubi exire debere. [*] [*] Satis bene certe demonstratum esse videtur, curvam algebraicam neque alicubi subito abrumpi posse (uti e.g. 
evenit in curva transscendente, cuius aequatio $y=1/\log x$ ), neque post spiras infinitas in aliquo puncto se quasi perdere (ut spiralis logarithmica), quantumque scio nemo dubium contra rem movit. Attamen si quis postulat, demonstrationem nullis dubiis obnoxiam alia occasione tradere suscipiam. [English translation] ( Wayback Machine ): But according to higher mathematics, any algebraic curve (or the individual parts of such an algebraic curve if it perhaps consists of several parts) either turns back into itself or extends to infinity. Consequently, a branch of any algebraic curve which enters a limited space, must necessarily exit from this space somewhere. [*] [*] It seems to have been proved with sufficient certainty that an algebraic curve can neither be broken off suddenly anywhere (as happens e.g. with the transcendental curve whose equation is $y = 1/\log x$ ) nor lose itself, so to say, in some point after infinitely many coils (like the logarithmic spiral). As far as I know, nobody has raised any doubts about this. However, should someone demand it then I will undertake to give a proof that is not subject to any doubt, on some other occasion.
{ "source": [ "https://mathoverflow.net/questions/234777", "https://mathoverflow.net", "https://mathoverflow.net/users/89592/" ] }
235,008
What are examples of math hoaxes/interesting jokes published on April Fool's day? For a start: P=NP . Added 2022-04-01: Anything new in 2022?
I enjoyed the hexasphere by A. V. Akopyan, J. Crowder, H. Edelsbrunner, R. Guseinov from last year: http://pub.ist.ac.at/~edels/hexasphere/ In the link, the sphere is animated, so you can look at it from all sides.
{ "source": [ "https://mathoverflow.net/questions/235008", "https://mathoverflow.net", "https://mathoverflow.net/users/12481/" ] }
235,347
In a recent publication, the Ukrainian mathematician Maryna Viazovska solved the Kepler problem, namely the densest packing of spheres, in dimensions $8$ and $24$. Admittedly the work is very difficult to grasp for someone not involved in the field, but it would be incredibly valuable if the methodology adopted and the main breakthrough could be briefly explained at a conceptual level, so that one can have a rough idea of how the solution came to be. This is merely an attempt to get more input in order to understand some of the main ideas involved, as this is very exciting news given that the problem of densest packing of spheres in different dimensions has a long history.
There are two things you need to understand. The first is how to prove sphere packing bounds via harmonic analysis ("linear programming bounds"). My lecture notes from PCMI 2014 give an exposition of this theory, which covers the period up to, but not including, Viazovska's paper on eight dimensions. This reduces the problem to finding radial functions in eight and twenty-four dimensions with certain roots and sign conditions, and the lecture notes discuss why everybody believed such functions exist. The functions themselves are surprisingly subtle; see this paper with Steve Miller for numerical experiments and conjectures. Then Viazovska's breakthrough lies in how to construct the right functions. It's convenient to split them up into eigenfunctions of the Fourier transform, which have to have roots at certain locations. One way to look at it is that she brute forces those roots by including a sine squared factor that produces the roots, and then she multiplies it by something that's essentially the Laplace transform of a (quasi)modular form. To make this work, one has to answer three questions: What sort of modular forms play nice with the sine squared factor and produce radial eigenfunctions of the Fourier transform? Do there exist such modular forms that actually give the desired functions? How can we prove the inequalities these functions are supposed to satisfy? Viazovska's methodology gives nice answers: This amounts to identifying the right level, weight, and depth (for quasimodular forms). Yes. One can reverse engineer them based on the desired properties. In the best way one could hope for, namely at the level of the modular form itself (rather than having to worry about something subtle happening in the Laplace transform). For details, see the eight and twenty-four dimensional papers. As Peter Sarnak points out in Erica Klarreich's Quanta article , the actual construction is surprisingly simple. 
If you know about linear programming bounds and basic facts about modular forms, the proof itself is very down to earth.
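For orientation, the "linear programming bound" underlying both papers is the Cohn–Elkies theorem, which (paraphrasing from memory; see the PCMI lecture notes referenced above for the precise admissibility hypotheses) can be stated as follows. If $f \colon \mathbf{R}^n \to \mathbf{R}$ is a sufficiently nice (say, Schwartz) function satisfying

$$\widehat{f}(t) \ge 0 \ \text{ for all } t, \qquad f(x) \le 0 \ \text{ for } |x| \ge r, \qquad f(0) = \widehat{f}(0) > 0,$$

then every sphere packing in $\mathbf{R}^n$ has density at most $\operatorname{vol}\!\big(B^n_{r/2}\big)$, the volume of a ball of radius $r/2$. Viazovska's construction produces such an $f$ with $r = \sqrt{2}$ in dimension 8 (and the follow-up paper with Cohn, Kumar, Miller, and Radchenko does so with $r = 2$ in dimension 24), matching the densities of the $E_8$ and Leech lattice packings, which is exactly what makes the bound sharp in these two dimensions.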
{ "source": [ "https://mathoverflow.net/questions/235347", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
235,620
I am at the stage of learning. Mostly, I am attracted by algebraic number theory. Roughly speaking, I am interested in the rational points of algebraic varieties. I am little bit afraid to start to learn algebraic geometry, since I find the modern language used there is category theory (topos, sheaves, schemes,...). Here is my question: Is it necessary to know the modern algebraic geometry (Grothendieck formalism) to explore the theory of rational points of algebraic varieties? Concrete examples are welcome.
Well if you want to count rational points on varieties then you probably want to know what abelian varieties are, and general type varieties, and Fano varieties, and K3 surfaces, and what Azumaya algebras are, and so on to understand the main conjectures and theorems of the subject. You should probably understand how spreading out varieties into schemes over $\mathbb Z$ lets you view integral points as curves and so attack them with geometric intuition, making it clear how to properly define heights and other useful tools. This already requires quite a bit of algebraic geometry. There may be good reasons not to learn algebraic geometry, but a fear of category theory is not one of them. First, let me point out that the number of category-theoretic concepts needed is quite small. Certainly topos is not on the list, but scheme, sheaf, and sheaf cohomology are. I'm sure there are a few more essential ones that are not on this list, but not very many. Second, when learning these things you are not supposed to contemplate them in their pure abstract brilliance - you're supposed to learn a whole bunch of different examples and think about what the fancy words you're saying mean in each example. If you want to study rational points on varieties I hope you already know many examples of varieties that you want to study points on - that's a good start. Third, there is a lot of virtue here in learning things only as you need them - as long as the second or third time you need them you go back and make sure you understand them well. For instance a very large number of the important examples of schemes are varieties. One uses the language of schemes only as a new way of talking about varieties that gives you some new tools to talk about them. Again presumably you already have a reasonably good understanding of varieties so this is not some huge leap. You will not be able to get away with this forever - at some point more general classes of schemes are needed. 
Schemes over Z are probably the first that show up in arithmetic geometry but I'm sure the other phenomena make an appearance. However, when you encounter these concepts, you will already understand something about the notion of scheme, and it again will not be so big a leap. Finally, let me point out that category theory is the language of algebraic geometry for a reason. When you are thinking about certain geometric ideas and trying to express them in a nontraditional setting (e.g. an arithmetic one) you will naturally be drawn to the category-theoretic concepts. This is how the language arose in the first place (although the fact that Grothendieck was around to dream up a brilliant pure abstract theory didn't hurt). To me it makes much more sense to learn category theory by first learning algebraic geometry than to do it in the other order.
{ "source": [ "https://mathoverflow.net/questions/235620", "https://mathoverflow.net", "https://mathoverflow.net/users/82229/" ] }
235,758
I would like to understand what is the "outer-automorphism group" $Out$ of $SO(p,q)$ and $O(p,q)$, where $p+q >0$ and $pq \neq 0$. My working definition of $Out$ is as follows: Let us denote by $Aut(G)$ the automorphism group of a Lie group $G$. I take the inner-automorphism group $Inn(G)$ of $G$ to be all elements $K\in Aut(G)$ for which there exists a $g\in G$ such that $K = Ad_{g}$, namely $K(h) = g h g^{-1}$ for all $h\in G$. $Inn(G)$ is a normal subgroup of $Aut(G)$ and then $Out(G) = Aut(G)/Inn(G)$ is a group which I define to be the outer-automorphism group of $G$. I have not been able to find what $Out(G)$ is for $G = SO(p,q), O(p,q)$. I have noticed that there are many references dealing with the outer-automorphism group of complex Lie algebras, which can be read off from their Dynkin diagram. However, $\mathfrak{so}(p,q)\simeq\mathfrak{o}(p,q)$ is not a complex Lie algebra but a real form. I don't know how the outer-automorphism group of a simple real Lie algebra can be computed in general. In fact, Wikipedia says that the characterization of the outer-automorphism group of a real simple Lie algebra in terms of a short exact sequence involving the full and inner automorphism groups (a result classical for complex Lie algebras) was only obtained as recently as 2010! In any case, I expect the answer to my question to be even more involved since I am not interested in the outer-automorphism group of a real Lie algebra but of the full real Lie group, in my case $SO(p,q)$ and $O(p,q)$. If I am not mistaken, for $q=0$ and $p$ even we have $O(p,0) = SO(p,0)\rtimes\mathbb{Z}_{2}$, where $\mathbb{Z}_{2}$ is the outer-automorphism group of $SO(p,0)$, so $Out(SO(p,0)) = \mathbb{Z}_{2}$. Thanks.
Let's first address your comment in response to Igor Rivin's answer: why don't we find this topic addressed in textbooks on Lie groups? Beyond the definite (= compact) case, disconnectedness issues become more complicated and your question is thereby very much informed by the theory of linear algebraic groups $G$ over $\mathbf{R}$. That in turn involves two subtle aspects (see below) that are not easy to express solely in analytic terms and are therefore beyond the level of such books (which usually don't assume familiarity with algebraic geometry at the level needed to work with linear algebraic groups over a field such as $\mathbf{R}$ that is not algebraically closed). And books on linear algebraic groups tend to say little about Lie groups. The first subtlety is that $G(\mathbf{R})^0$ can be smaller than $G^0(\mathbf{R})$ (i.e., connectedness for the analytic topology may be finer than for the Zariski topology), as we know already for indefinite orthogonal groups, and textbooks on Lie groups tend to focus on the connected case for structural theorems. It is a deep theorem of Elie Cartan that if a linear algebraic group $G$ over $\mathbf{R}$ is Zariski-connected semisimple and simply connected (in the sense of algebraic groups; e.g., ${\rm{SL}}_n$ and ${\rm{Sp}}_{2n}$ but not ${\rm{SO}}_n$) then $G(\mathbf{R})$ is connected, but that lies beyond the level of most textbooks. (Cartan expressed his result in analytic terms via anti-holomorphic involutions of complex semisimple Lie groups, since there was no robust theory of linear algebraic groups at that time.) 
The group $G(\mathbf{R})$ has finitely many connected components, but that is not elementary (especially if one assumes no knowledge of algebraic geometry), and the theorem on maximal compact subgroups of Lie groups $H$ in case $\pi_0(H)$ is finite but possibly not trivial appears to be treated in only one textbook (Hochschild's "Structure of Lie groups", which however does not address the structure of automorphism groups); e.g., Bourbaki's treatise on Lie groups assumes connectedness for much of its discussion of the structure of compact Lie groups. The second subtlety is that when the purely analytic operation of "complexification" for Lie groups (developed in Hochschild's book too) is applied to the Lie group of $\mathbf{R}$-points of a (Zariski-connected) semisimple linear algebraic group, it doesn't generally "match" the easier algebro-geometric scalar extension operation on the given linear algebraic group (e.g., the complexification of the Lie group ${\rm{PGL}}_3(\mathbf{R})$ is ${\rm{SL}}_3(\mathbf{C})$, not ${\rm{PGL}}_3(\mathbf{C})$). Here too, things are better-behaved in the "simply connected" case, but that lies beyond the level of introductory textbooks on Lie groups. Now let us turn to your question. Let $n = p+q$, and assume $n \ge 3$ (so the Lie algebra is semisimple; the cases $n \le 2$ can be analyzed directly anyway). I will only address ${\rm{SO}}(p,q)$ rather than ${\rm{O}}(p, q)$, since it is already enough of a headache to keep track of disconnected effects in the special orthogonal case. To be consistent with your notation, we'll write $\mathbf{O}(p,q) \subset {\rm{GL}}_n$ to denote the linear algebraic group over $\mathbf{R}$ "associated" to the standard quadratic form of signature $(p, q)$ (so its group of $\mathbf{R}$-points is what you have denoted as ${\rm{O}}(p,q)$), and likewise for ${\mathbf{SO}}(p,q)$. 
We will show that ${\rm{SO}}(p, q)$ has only inner automorphisms for odd $n$, and only the expected outer automorphism group of order 2 (arising from reflection in any nonzero vector, for example) for even $n$ in both the definite case and the case when $p$ and $q$ are each odd. I will leave it to someone else to figure out (or find a reference on?) the case with $p$ and $q$ both even and positive. We begin with some preliminary comments concerning the definite (= compact) case for all $n \ge 3$, for which the Lie group ${\rm{SO}}(p,q) = {\rm{SO}}(n)$ is connected. The crucial (non-trivial) fact is that the theory of connected compact Lie groups is completely "algebraic'', and in particular if $G$ and $H$ are two connected semisimple $\mathbf{R}$-groups for which $G(\mathbf{R})$ and $H(\mathbf{R})$ are compact then every Lie group homomorphism $G(\mathbf{R}) \rightarrow H(\mathbf{R})$ arises from a (unique) algebraic homomorphism $G \rightarrow H$. In particular, the automorphism groups of $G$ and $G(\mathbf{R})$ coincide, so the automorphism group of ${\rm{SO}}(n)$ coincides with that of $\mathbf{SO}(n)$. Note that any linear automorphism preserving a non-degenerate quadratic form up to a nonzero scaling factor preserves its orthogonal and special orthogonal group. It is a general fact (due to Dieudonné over general fields away from characteristic 2) that if $(V, Q)$ is a non-degenerate quadratic space of dimension $n \ge 3$ over any field $k$ and if ${\mathbf{GO}}(Q)$ denotes the linear algebraic $k$-group of conformal automorphisms then the action of the algebraic group ${\mathbf{PGO}}(Q) = {\mathbf{GO}}(Q)/{\rm{GL}}_1$ on ${\mathbf{SO}}(Q)$ through conjugation gives exactly the automorphisms as an algebraic group. 
More specifically, $${\mathbf{PGO}}(Q)(k) = {\rm{Aut}}_k({\mathbf{SO}}(Q)).$$ This is proved using a lot of the structure theory of connected semisimple groups over an extension field that splits the quadratic form, so it is hard to "see'' this fact working directly over the given ground field $k$ (such as $k = \mathbf{R}$); that is one of the great merits of the algebraic theory (allowing us to prove results over a field by making calculations with a geometric object over an extension field, and using techniques such as Galois theory to come back to where we began). Inside the automorphism group of the Lie group ${\rm{SO}}(p,q)$, we have built the subgroup ${\rm{PGO}}(p,q) := {\mathbf{PGO}}(p,q)(\mathbf{R})$ of "algebraic'' automorphisms (and it gives all automorphisms when $p$ or $q$ vanish). This subgroup is $${\mathbf{GO}}(p,q)(\mathbf{R})/\mathbf{R}^{\times} = {\rm{GO}}(p,q)/\mathbf{R}^{\times}.$$ To analyze the group ${\rm{GO}}(p,q)$ of conformal automorphisms of the quadratic space, there are two possibilities: if $p \ne q$ (such as whenever $p$ or $q$ vanish) then any such automorphism must involve a positive conformal scaling factor due to the need to preserve the signature, and if $p=q$ (the "split'' case: orthogonal sum of $p$ hyperbolic planes) then signature-preservation imposes no condition and we see (upon choosing a decomposition as an orthogonal sum of $p$ hyperbolic planes) that there is an evident involution $\tau$ of the vector space whose effect is to negate the quadratic form. Thus, if $p \ne q$ then ${\rm{GO}}(p,q) = \mathbf{R}^{\times} \cdot {\rm{O}}(p,q)$ whereas ${\rm{GO}}(p,p) = \langle \tau \rangle \ltimes (\mathbf{R}^{\times} \cdot {\rm{O}}(p,p))$. Hence, ${\rm{PGO}}(p,q) = {\rm{O}}(p,q)/\langle -1 \rangle$ if $p \ne q$ and ${\rm{PGO}}(p,p) = \langle \tau \rangle \ltimes ({\rm{O}}(p,p)/\langle -1 \rangle)$ for an explicit involution $\tau$ as above. 
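The involution $\tau$ negating the split form is completely concrete. As a toy illustration (the encoding and names below are mine, not from the answer): write the split form on $\mathbf{R}^{2p}$ as an orthogonal sum of hyperbolic planes $x^2 - y^2$; then swapping the two coordinates of each plane negates the form:

```python
import random

def Q(v):
    """Split quadratic form of signature (p, p) on R^(2p):
    an orthogonal sum of hyperbolic planes x^2 - y^2."""
    return sum(v[2*i]**2 - v[2*i + 1]**2 for i in range(len(v) // 2))

def tau(v):
    """Swap the two coordinates within each hyperbolic plane."""
    w = list(v)
    for i in range(len(v) // 2):
        w[2*i], w[2*i + 1] = w[2*i + 1], w[2*i]
    return w

random.seed(0)
for _ in range(100):
    v = [random.uniform(-1, 1) for _ in range(6)]  # p = 3
    assert abs(Q(tau(v)) + Q(v)) < 1e-12           # Q(tau(v)) = -Q(v)
```

In particular conjugation by $\tau$ preserves ${\rm{O}}(p,p)$ while exchanging the form with its negative, which is why $\tau$ shows up in ${\rm{GO}}(p,p)$ but has no analogue in ${\rm{GO}}(p,q)$ for $p \ne q$.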
We summarize the conclusions for outer automorphisms of the Lie group ${\rm{SO}}(p, q)$ arising from the algebraic theory. If $n$ is odd (so $p \ne q$) then ${\rm{O}}(p,q) = \langle -1 \rangle \times {\rm{SO}}(p,q)$ and so the algebraic automorphisms are inner (as is very well-known in the algebraic theory). Suppose $n$ is even, so $-1 \in {\rm{SO}}(p, q)$. If $p \ne q$ (with the same parity) then the group of algebraic automorphisms contributes a subgroup of order 2 to the outer automorphism group (arising from any reflection in a non-isotropic vector, for example). Finally, the contribution of algebraic automorphisms to the outer automorphism group of ${\rm{SO}}(p,p)$ has order 4 (generated by two elements of order 2: an involution $\tau$ as above and a reflection in a non-isotropic vector). This settles the definite case as promised (i.e., all automorphisms inner for odd $n$ and outer automorphism group of order 2 via a reflection for even $n$) since in such cases we know that all automorphisms are algebraic. Now we may and do assume $p, q > 0$. Does ${\rm{SO}}(p, q)$ have any non-algebraic automorphisms? We will show that if $n \ge 3$ is odd (i.e., $p$ and $q$ have opposite parity) or if $p$ and $q$ are both odd then there are no non-algebraic automorphisms (so we would be done). First, let's compute $\pi_0({\rm{SO}}(p,q))$ for any $n \ge 3$. By the spectral theorem, the maximal compact subgroups of ${\rm{O}}(p,q)$ are the conjugates of the evident subgroup ${\rm{O}}(p) \times {\rm{O}}(q)$ with 4 connected components, and one deduces in a similar way that the maximal compact subgroups of ${\rm{SO}}(p, q)$ are the conjugates of the evident subgroup $$\{(g,g') \in {\rm{O}}(p) \times {\rm{O}}(q)\,|\, \det(g) = \det(g')\}$$ with 2 connected components. 
For any Lie group $\mathscr{H}$ with finite component group (such as the group $G(\mathbf{R})$ for any linear algebraic group $G$ over $\mathbf{R}$), the maximal compact subgroups $K$ constitute a single conjugacy class (with every compact subgroup contained in one) and as a smooth manifold $\mathscr{H}$ is a direct product of such a subgroup against a Euclidean space (see Chapter XV, Theorem 3.1 of Hochschild's book "Structure of Lie groups'' for a proof). In particular, $\pi_0(\mathscr{H}) = \pi_0(K)$, so ${\rm{SO}}(p, q)$ has exactly 2 connected components for any $p, q > 0$. Now assume $n$ is odd, and swap $p$ and $q$ if necessary (as we may) so that $p$ is odd and $q>0$ is even. For any $g \in {\rm{O}}(q) - {\rm{SO}}(q)$, the element $(-1, g) \in {\rm{SO}}(p, q)$ lies in the unique non-identity component. Since $n \ge 3$ is odd, so ${\rm{SO}}(p, q)^0$ is the quotient of the connected (!) Lie group ${\rm{Spin}}(p, q)$ modulo its order-2 center, the algebraic theory in characteristic 0 gives $${\rm{Aut}}({\mathfrak{so}}(p,q)) = {\rm{Aut}}({\rm{Spin}}(p, q)) = {\rm{SO}}(p, q).$$ Thus, to find nontrivial elements of the outer automorphism group of the disconnected Lie group ${\rm{SO}}(p, q)$ we can focus attention on automorphisms $f$ of ${\rm{SO}}(p, q)$ that induce the identity on ${\rm{SO}}(p, q)^0$. We have arranged that $p$ is odd and $q>0$ is even (so $q \ge 2$). The elements $$(-1, g) \in {\rm{SO}}(p, q) \cap ({\rm{O}}(p) \times {\rm{O}}(q))$$ (intersection inside ${\rm{O}}(p, q)$, so $g \in {\rm{O}}(q) - {\rm{SO}}(q)$) have an intrinsic characterization in terms of the Lie group ${\rm{SO}}(p, q)$ and its evident subgroups ${\rm{SO}}(p)$ and ${\rm{SO}}(q)$: these are the elements outside ${\rm{SO}}(p, q)^0$ that centralize ${\rm{SO}}(p)$ and normalize ${\rm{SO}}(q)$. 
(To prove this, consider the standard representation of ${\rm{SO}}(p) \times {\rm{SO}}(q)$ on $\mathbf{R}^{p+q} = \mathbf{R}^n$, especially the isotypic subspaces for the action of ${\rm{SO}}(q)$ with $q \ge 2$.) Hence, for every $g \in {\rm{O}}(q) - {\rm{SO}}(q)$ we have $f(-1,g) = (-1, F(g))$ for a diffeomorphism $F$ of the connected manifold ${\rm{O}}(q) - {\rm{SO}}(q)$. Since $f$ acts as the identity on ${\rm{SO}}(q)$, it follows that the elements $g, F(g) \in {\rm{O}}(q) - {\rm{SO}}(q)$ have the same conjugation action on ${\rm{SO}}(q)$. But ${\rm{PGO}}(q) \subset {\rm{Aut}}({\rm{SO}}(q))$, so $F(g)g^{-1} \in \mathbf{R}^{\times}$ inside ${\rm{GL}}_q(\mathbf{R})$ with $q>0$ even. Taking determinants, this forces $F(g) = \pm g$ for a sign that may depend on $g$. But $F$ is continuous on the connected space ${\rm{O}}(q) - {\rm{SO}}(q)$, so the sign is actually independent of $g$. The case $F(g) = g$ corresponds to the identity automorphism of ${\rm{SO}}(q)$, so for the study of non-algebraic contributions to the outer automorphism group of ${\rm{SO}}(p, q)$ (with $p$ odd and $q > 0$ even) we are reduced to showing that the case $F(g) = -g$ cannot occur. We are seeking to rule out the existence of an automorphism $f$ of ${\rm{SO}}(p, q)$ that is the identity on ${\rm{SO}}(p, q)^0$ and satisfies $(-1, g) \mapsto (-1, -g)$ for $g \in {\rm{O}}(q) - {\rm{SO}}(q)$. For this to be a homomorphism, it is necessary (and sufficient) that the conjugation actions of $(-1, g)$ and $(-1, -g)$ on ${\rm{SO}}(p, q)^0$ coincide for all $g \in {\rm{O}}(q) - {\rm{SO}}(q)$. In other words, this requires that the element $(1, -1) \in {\rm{SO}}(p, q)$ centralizes ${\rm{SO}}(p, q)^0$. But the algebraic group ${\mathbf{SO}}(p, q)$ is connected (for the Zariski topology) with trivial center and the same Lie algebra as ${\rm{SO}}(p, q)^0$, so by consideration of the compatible algebraic and analytic adjoint representations we see that $(1, -1)$ cannot centralize ${\rm{SO}}(p, q)^0$. 
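To make the elements $(-1, g)$ with $g \in {\rm{O}}(q) - {\rm{SO}}(q)$ concrete, here is a minimal pure-Python check (my own toy example with $p = 1$, $q = 2$, not from the answer) that the block matrix ${\rm{diag}}(-1, g)$ for a reflection $g$ preserves the signature-$(1,2)$ form and has determinant $1$, hence lies in ${\rm{SO}}(1,2)$:

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

J = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]   # form of signature (1, 2)

# g = a reflection in O(2) \ SO(2); M = diag(-1, g)
M = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]

assert matmul(transpose(M), matmul(J, M)) == J   # M preserves the form
assert det3(M) == 1                              # so M lies in SO(1,2)
```

This $M$ has $\det = (-1)\cdot\det(g) = 1$ precisely because $g$ has determinant $-1$, which is the determinant bookkeeping used repeatedly in the argument above.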
Thus, no non-algebraic automorphism of ${\rm{SO}}(p, q)$ exists in the indefinite case when $n \ge 3$ is odd. Finally, suppose $p$ and $q$ are both odd, so ${\rm{SO}}(p,q)^0$ does not contain the element $-1 \in {\rm{SO}}(p,q)$ that generates the center of ${\rm{SO}}(p,q)$ (and even the center of ${\rm{O}}(p,q)$). Thus, we have ${\rm{SO}}(p,q) = {\rm{SO}}(p,q)^0 \times \langle -1 \rangle$ with ${\rm{SO}}(p,q)^0$ having trivial center. Any (analytic) automorphism of ${\rm{SO}}(p,q)$ clearly acts trivially on the order-2 center $\langle -1 \rangle$ and must preserve the identity component too, so such an automorphism is determined by its effect on the identity component. It suffices to show that every analytic automorphism $f$ of ${\rm{SO}}(p,q)^0$ arises from an algebraic automorphism of ${\rm{SO}}(p,q)$, as then all automorphisms of ${\rm{SO}}(p,q)$ would be algebraic (so the determination of the outer analytic automorphism group for $p, q$ odd follows as for the definite case with even $n \ge 4$). By the theory of connected semisimple algebraic groups in characteristic 0, for any $p, q \ge 0$ with $p+q \ge 3$ every analytic automorphism of the connected (!) group ${\rm{Spin}}(p,q)$ is algebraic. Thus, it suffices to show that any automorphism $f$ of ${\rm{SO}}(p,q)^0$ lifts to an automorphism of the degree-2 cover $\pi:{\rm{Spin}}(p,q) \rightarrow {\rm{SO}}(p,q)^0$. (Beware that this degree-2 cover is not the universal cover if $p, q \ge 2$, as ${\rm{SO}}(p,q)^0$ has maximal compact subgroup ${\rm{SO}}(p) \times {\rm{SO}}(q)$ with fundamental group of order 4.) The Lie algebra automorphism ${\rm{Lie}}(f)$ of ${\mathfrak{so}}(p,q) = {\mathfrak{spin}}(p,q)$ arises from a unique algebraic automorphism of the group ${\mathbf{Spin}}(p,q)$ since this latter group is simply connected in the sense of algebraic groups . 
The induced automorphism of the group ${\rm{Spin}}(p,q)$ of $\mathbf{R}$-points does the job, since its compatibility with $f$ via $\pi$ can be checked on Lie algebras (as we are working with connected Lie groups). This final argument also shows that the remaining problem for even $p, q \ge 2$ is to determine if any automorphism of ${\rm{SO}}(p,q)$ that is the identity map on ${\rm{SO}}(p,q)^0$ is itself the identity map. (If affirmative for such $p, q$ then the outer automorphism group of ${\rm{SO}}(p,q)$ is of order 2, and if negative then the outer automorphism group is bigger.)
{ "source": [ "https://mathoverflow.net/questions/235758", "https://mathoverflow.net", "https://mathoverflow.net/users/66688/" ] }
235,840
It is well known that $S^n$ admits an H-space structure if and only if $n=0,1,3,7$. I'm interested in whether there are other suspensions $\Sigma X$ that admit H-space structures: Question 1 For which $X$ (not a sphere) is $\Sigma X$ an H-space? And what about $\Sigma X$ that are associative H-spaces? My motivation is that we have a construction (in the framework of homotopy type theory, and presumably portable to a wide range of model categories) that gives an H-space structure on the join $\Sigma X * \Sigma X$ whenever $\Sigma X$ has a homotopy-associative H-space structure (that is compatible with an involution on $X$ – for details, see these slides ). Thus, it would be interesting to know some more examples where this construction applies. This also leads to a follow-up question (mostly in case the answer to Q1 is “none”): Question 2 If we go to a localization, do we get more answers to Q1? What about in other (non-stable) model categories? Any references would be appreciated. Finally, let me share a little scratch-work in trying to answer Q1 (feel free to ignore!): Any H-space structure $\mu : \Sigma X \times \Sigma X \to \Sigma X$ gives rise to a Hopf map $H(\mu) : \Sigma X * \Sigma X \to \Sigma^2 X$. Assuming $X$ is pointed, we get a map $\Sigma^3(X \wedge X) \to \Sigma^2 X$. Applying $K$-cohomology to the cofiber sequence should yield information restricting $X$, but I failed to get much mileage out of it without a notion of “bidegree” for $\mu$ (generalizing the case of $X$ a sphere).
If $Y$ is a connected CW-complex of finite type which is both an H-space and a co-H-space, then $Y$ has the homotopy type of $S^1$, $S^3$, $S^7$ or a point. This is a result of Robert West: Robert W. West , $H$-spaces which are co-$H$-spaces , Proc. Amer. Math. Soc. 31 (1972), 580--582. It follows that if $X$ is a finite type CW-complex such that $\Sigma X$ is an H-space, then $\Sigma X$ is homotopy equivalent to one of these spaces. On the other hand, Adams and Walker give an example of a $4$-dimensional infinite CW-complex $Y$ which is both an Eilenberg--Mac Lane space and a Moore space of type $(\mathbb{Q},3)$: J. F. Adams and G. Walker , An example in homotopy theory , Proc. Cambridge Philos. Soc. 60 (1964), 699--700. This $Y$ is a suspension by construction, and an H-space by virtue of being an Eilenberg--Mac Lane space of an abelian group.
{ "source": [ "https://mathoverflow.net/questions/235840", "https://mathoverflow.net", "https://mathoverflow.net/users/2004/" ] }
235,858
I am trying to solve a 4th order nonlinear PDE for a real function $u(x,y)$ of two variables. It is too complicated to reproduce here but it exhibits the following two very nice properties: 1) if $u(x,y)$ is a solution, then $f(u(x,y))$ is also a solution for any function $f$. 2) if we think of the equation in the complex plane $z=x+iy$, then it is satisfied by every (anti-)holomorphic function , that is, $g(z)$ and $g(\bar{z})$ are solutions for any function $g$. Has anyone ever encountered such a nonlinear PDE? If so, does it have a special name, and what are its known properties? I am interested in finding as many real solutions as possible -- due to the nonlinearity, property 2) is not very useful in that regard. However, I've found a handful of solutions by inspection, and I suspect that the equation may in fact be integrable (though I am not sure how to test this), partly because of the above properties, and partly because of physical reasons. Note: Property 1) has a nice geometric origin. Every function $u(x,y)$ defines a foliation of the plane by the level sets $u(x,y)=$ constant. We can then think of $u$ as a coordinate labeling the "leaves" in the foliation, and the change $u\to f(u)$ is just a relabeling of coordinates (as long as $f$ is monotonic). In other words, this PDE can be thought of as an equation for a foliation. Any given foliation can be represented by many functions $u(x,y)$, all of which are functions of each other, and property 1) is just the statement of reparameterization invariance.
{ "source": [ "https://mathoverflow.net/questions/235858", "https://mathoverflow.net", "https://mathoverflow.net/users/7154/" ] }
236,151
I was wondering if there is some description known for the conjugacy classes of $$\mathrm{SL}_2(\mathbb{Z})=\{A\in \mathrm{GL}_2(\mathbb{Z})\mid\det(A)=1\}.$$ I was not able to find anything about this. Most references only give solutions for $\mathrm{SL}_2(\mathbb{R})$ . Thank you for your help.
One can proceed as follows for $SL_2(\mathbb{Z})$ . First, the trace is a conjugacy invariant. For trace $0$ there are two conjugacy classes represented by $\pmatrix{0 & 1 \\ -1 & 0}$ and $\pmatrix{0 & -1 \\ 1 & 0}$ . These representatives can be thought of as $90^\circ$ and $270^{\circ}$ degree rotations of a lattice generated by the corners of a square centered on the origin. For trace $1$ and $-1$ there are two conjugacy classes each, represented by the matrices $$M=\pmatrix{1 & -1 \\ 1 & 0}, M^2=\pmatrix{0 & -1 \\ 1 & -1}, M^4=\pmatrix{-1 & 1 \\ -1 & 0}, M^5 = \pmatrix{0 & 1 \\ -1 & 1} $$ These representatives can be thought of as $60^\circ$ , $120^\circ$ , $240^\circ$ , and $300^\circ$ degree rotations of a lattice generated by the vertices of a regular hexagon centered at the origin. For trace $2$ there is a $\mathbb{Z}$ -indexed family of conjugacy classes, represented by $\pmatrix{1 & n \\ 0 & 1}$ ; these are all "shear" transformations except for the identity. For trace $-2$ there is a similar $\mathbb{Z}$ -indexed family of conjugacy classes represented by $\pmatrix{-1 & n \\ 0 & -1}$ . In general, for nonzero trace the conjugacy classes come in opposite pairs, represented by a matrix $M$ with trace $t>0$ and an opposite representative $-M$ with trace $-t<0$ . For trace of absolute value $> 2$ , there is one conjugacy class for each word of the form $$\pm R^{j_1} L^{k_1} R^{j_2} L^{k_2} \cdots R^{j_I} L^{k_I} $$ up to cyclic conjugacy, where $I \ge 1$ and all the exponents are positive integers. A matrix representing this form is obtained from the above word by making the replacements $$R=\pmatrix{1 & 1 \\ 0 & 1}, \quad L=\pmatrix{1 & 0 \\ 1 & 1} $$ The transformations represented by such words are all "hyperbolic" transformations, having an independent pair of real eigenvectors. The slope of the expanding eigenvector is a quadratic irrational, and hence has eventually repeating continued fraction expansion. 
The cyclic sequence $(j_1,k_1,j_2,k_2,\ldots,j_I,k_I)$ can be thought of as the fundamental repeating portion of the continued fraction expansion of the slope of the expanding eigenvector, or, better, as an appropriate power of the fundamental repeating portion where the power is equal to the exponent of the given matrix. Number theorists will tell you that the number of conjugacy classes of each trace $t>2$ is closely related to the class number of the number field generated by $\sqrt{t^2-4}$ .
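The small-trace representatives and the $R, L$ normal form described above are easy to sanity-check with a few lines of Python (a sketch; `mul` and `power` are ad hoc helpers for $2\times 2$ integer matrices):

```python
def mul(A, B):  # 2x2 integer matrix product
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def power(A, n):
    P = [[1, 0], [0, 1]]
    for _ in range(n):
        P = mul(P, A)
    return P

# Order-6 representative for trace 1 (rotation of the hexagonal lattice).
M = [[1, -1], [1, 0]]
assert power(M, 3) == [[-1, 0], [0, -1]]   # M^3 = -I
assert power(M, 6) == [[1, 0], [0, 1]]     # M^6 = I

# Shear generators used to normalize the hyperbolic classes.
R = [[1, 1], [0, 1]]
L = [[1, 0], [1, 1]]

# A word R^j L^k with j, k >= 1 equals [[1+jk, j], [k, 1]], so it has
# determinant 1 and trace 2 + jk > 2, hence is hyperbolic.
for j in range(1, 5):
    for k in range(1, 5):
        W = mul(power(R, j), power(L, k))
        det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
        assert det == 1                        # stays in SL_2(Z)
        assert W[0][0] + W[1][1] == 2 + j * k  # trace exceeds 2
```

A full enumeration of conjugacy classes of a given trace would cycle through these words up to cyclic permutation, as the answer describes.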
{ "source": [ "https://mathoverflow.net/questions/236151", "https://mathoverflow.net", "https://mathoverflow.net/users/74051/" ] }
236,187
Looking for a proof of the Lefschetz–Hopf fixed point theorem using de Rham cohomology. (I'm more interested in the formula than in the mere statement that if the Lefschetz number is nonzero then the function has a fixed point. But the weak version would be helpful too.) Does anyone know a book or article? Is this even possible?
{ "source": [ "https://mathoverflow.net/questions/236187", "https://mathoverflow.net", "https://mathoverflow.net/users/90337/" ] }
236,392
The existence of a 4-chromatic unit distance graph (e.g., the Moser spindle ) establishes a lower bound of 4 for the chromatic number of the plane (see the Nelson-Hadwiger problem ). Obviously, it would be nice to have an example of a 5-chromatic unit distance graph. To the best of my knowledge, the existence of such a graph is open. Has there been any (documented) attempt to find such a graph through a computer search? For instance, has every $n$-vertex possibility been checked up to some $n$?
As of this morning there is a paper on the ArXiv claiming to show that there exists a 5-chromatic unit distance graph with $1567$ vertices. The paper is written by non-mathematician Aubrey De Grey (of anti-aging fame), but it appears to be a serious paper. Time will tell if it holds up to scrutiny. EDIT: in fact, it must be the one with 1585 vertices, according to checkers, see https://dustingmixon.wordpress.com/2018/04/10/the-chromatic-number-of-the-plane-is-at-least-5/
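For scale, the 4-chromatic lower bound from the Moser spindle mentioned in the question is small enough to verify by brute force. Below is a sketch; the coordinates are one standard "two hinged rhombi" embedding, and the vertex labels are my own:

```python
import itertools
import math

# Moser spindle: two rhombi (each a pair of unit equilateral triangles)
# hinged at the origin A, one rotated so the far apexes D1, D2 are at
# unit distance (chord 1 between points at radius sqrt(3)).
s3 = math.sqrt(3)
theta = math.acos(5 / 6)

def rot(p, t):
    x, y = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

A, B, C, D1 = (0, 0), (s3 / 2, 0.5), (s3 / 2, -0.5), (s3, 0)
E, F, D2 = rot(B, theta), rot(C, theta), rot(D1, theta)
V = [A, B, C, D1, E, F, D2]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),   # first rhombus
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),   # second rhombus
         (3, 6)]                                   # apex-to-apex edge

# Every edge really has unit length, so this is a unit-distance graph.
for i, j in edges:
    assert abs(math.dist(V[i], V[j]) - 1) < 1e-9

def colorable(k):
    return any(all(c[i] != c[j] for i, j in edges)
               for c in itertools.product(range(k), repeat=7))

assert not colorable(3)   # no proper 3-coloring exists ...
assert colorable(4)       # ... but a proper 4-coloring does
```

A 5-chromatic example is far beyond this kind of exhaustive search over colorings alone, which is why de Grey's 1500-plus-vertex graphs required serious computer assistance to verify.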
{ "source": [ "https://mathoverflow.net/questions/236392", "https://mathoverflow.net", "https://mathoverflow.net/users/39475/" ] }
236,563
Does there exist a (finite dimensional) smooth manifold $M$, such that every Riemannian metric on $M$ has no isometries except the identity? Of course, such a manifold must not admit a diffeomorphism of finite order. Since a surface $S$ admits a diffeomorphism of order $n$ iff its mapping class group (MCG) has an element of order $n$ (see here ), it follows that if $S$ has the above property, then its MCG has only elements of infinite order.
The answer to the question in the first sentence is "yes". Let $M$ be a hyperbolic 3-manifold whose isometry group is trivial. Then by Theorem 1.1 of Farb, Benson; Weinberger, Shmuel Hidden symmetries and arithmetic manifolds. Geometry, spectral theory, groups, and dynamics, 111–119, Contemp. Math., 387, Amer. Math. Soc., Providence, RI, 2005. (Reviewer: Bachir Bekka) 53C35 (53C23) (which they attribute to Borel, though they do not give an original reference for it), the isometry group of every Riemannian metric on $M$ is isomorphic to a subgroup of the hyperbolic isometry group, and thus is trivial. Along similar lines, you might be interested in Theorem H of Dinkelbach, Jonathan and Leeb, Bernhard, Equivariant Ricci flow with surgery and applications to finite group actions on geometric 3-manifolds. Geom. Topol. 13 (2009), no. 2, 1129–1173. It says that every finite-order diffeomorphism of a closed hyperbolic 3-manifold is smoothly conjugate to an isometry, so closed hyperbolic 3-manifolds with trivial isometry groups give examples of smooth manifolds with torsion-free diffeomorphism groups.
{ "source": [ "https://mathoverflow.net/questions/236563", "https://mathoverflow.net", "https://mathoverflow.net/users/46290/" ] }
236,572
Let $M$ be an $n$ by $n$ matrix with each diagonal element equal to $k$ and each non-diagonal element equal to $k-1$ where $n$ and $k$ are positive integers. Let $k < n$ and we can assume both $k$ and $n$ are large. What is $$S_{M,k} = \sum_{x \in \mathbb{Z}^n} e^{-x^T M x}\;?$$ Is there some way to estimate this sum? This was also posted to https://math.stackexchange.com/questions/1741157/how-to-estimate-a-specific-infinite-sum a few days ago. Added examples If $k=1$ we know $S_{M,1} \approx (\sqrt{\pi} +2\sqrt{\pi}e^{-\pi^2})^n \approx 1.7726372048^n$. I don't know a closed form approximation for any other value of $k$. For $n=12$ and $k = 1, \dots, 12$ using computer code to approximate the sum we get $ 962.58329951, 267.409968069, 196.186732001, 171.404195004, 162.313077353, 158.96911585, 157.738949838, 157.286397212, 157.119912408, 157.058666071, 157.036134803, 157.027846013$. In general it seems numerically that for every fixed $n$, $S_{M,k}$ converges fairly quickly to some value as $k$ increases towards $n$.
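The $k=1$ value quoted above is quick to check numerically (a sketch, truncating the lattice sum; for $k=1$ the matrix $M$ is the identity, so the sum factors into one-dimensional theta values):

```python
import math

# For k = 1, S_{M,1} = (sum_{x in Z} e^{-x^2})^n, and by Poisson
# summation the one-dimensional factor equals
#   sqrt(pi) * (1 + 2 e^{-pi^2} + 2 e^{-4 pi^2} + ...),
# which matches the constant 1.7726372048 quoted in the question.
theta1 = sum(math.exp(-x * x) for x in range(-20, 21))
assert abs(theta1 - 1.7726372048) < 1e-6

poisson = math.sqrt(math.pi) * (1 + 2 * math.exp(-math.pi ** 2))
assert abs(theta1 - poisson) < 1e-9

# Direct 2-D summation agrees with the factorization for n = 2, k = 1.
S2 = sum(math.exp(-(x * x + y * y))
         for x in range(-20, 21) for y in range(-20, 21))
assert abs(S2 - theta1 ** 2) < 1e-9
```

For $k>1$ the quadratic form no longer splits, so direct truncated summation (feasible only for small $n$) is how the $n=12$ table above would be reproduced.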
{ "source": [ "https://mathoverflow.net/questions/236572", "https://mathoverflow.net", "https://mathoverflow.net/users/45564/" ] }
236,689
I came across a problem concerning the convergence of products. I wonder whether the sequence of partial products $a_n=\prod^n_{k=1}(1-e^{k\alpha \pi i})$ converges to zero when $\alpha$ is irrational. Of course, the infinite product doesn't converge to any nonzero number. It seems that the behavior of $e^{k\alpha \pi i}$ is not quite "predictable". I have no idea how to approach this problem. Thanks in advance!
The product does not tend to the limit zero. For any irrational number $\alpha$ one can show that $$ \limsup_{N\to \infty} \prod_{n=1}^{N} |1- e(n\alpha)| = \infty. \tag{1} $$ (Here I use the usual notation $e(\alpha) =e^{2\pi i\alpha}$ , and the product in the question is stated for $\alpha/2$ rather than $\alpha$ ). I'll prove a slightly weaker result; namely I'll show that (1) holds for any irrational $\alpha$ for which there are infinitely many rational approximations $a/q$ with $(a,q)=1$ and $$ \alpha=\frac{a}{q} +\beta, \qquad \text{with}\qquad |\beta| \le \frac{1}{100q^2}. \tag{2} $$ The $1/100$ is just chosen for ease of exposition, and a more careful argument can make do with $1/\sqrt{5}$ (any constant $<1/2$ is enough), and then it is well known that every irrational number admits infinitely many approximations with $|\alpha -a/q| \le 1/(\sqrt{5}q^2)$ . Condition 2 holds for almost all irrational numbers -- it fails only if the continued fraction expansion only uses numbers below $100$ . Suppose then that $q$ is such that (2) holds, and consider the product in question at $N=q-1$ . By the triangle inequality, for $1\le n\le N$ , $$ |1-e(n\alpha)| = |(1-e(an/q))+e(an/q)(1-e(\beta n))| \ge |1-e(an/q)| \Big(1-\frac{|1-e(n\beta)|}{|1-e(an/q)|}\Big)= 2\Big|\sin\frac{\pi an}{q}\Big| \Big( 1- \frac{|\sin(\pi n\beta)|}{|\sin(\pi an/q)|}\Big). $$ Now write $\Vert x\Vert= \min_{\ell \in {\Bbb Z}} |x-\ell|$ for the distance from $x$ to the nearest integer. Note also that for $0\le x\le \pi/2$ one has $2x/\pi \le \sin x \le x$ . Therefore we get for $1\le n\le N$ $$ |1-e(n\alpha)| \ge |1-e(an/q)| \Big(1 - \frac{\pi n|\beta|}{2\Vert an/q\Vert}\Big) \ge |1-e(an/q)| \exp\Big( - \frac{\pi q|\beta|}{\Vert an/q\Vert}\Big), \tag{3} $$ where in the last inequality we used that $\eta =\pi n|\beta|/(2\Vert an/q\Vert) \le \pi q^2|\beta|/2 \le 1/10$ , so that $(1-\eta) \ge \exp(-2\eta)$ . 
Multiplying (3) for $n$ from $1$ to $N=q-1$ , we obtain $$ \prod_{n=1}^{N} |1-e(n\alpha)| \ge \prod_{n=1}^{q-1} |1-e(an/q)| \exp\Big( - \sum_{n=1}^{N} \frac{\pi q |\beta|}{\Vert an/q\Vert}\Big). \tag{4} $$ Now $$ \sum_{n=1}^{q-1} \frac{1}{\Vert an/q\Vert} \le 2\sum_{n\le q/2} \frac{q}{n} =2 q \log q +O(q), $$ and $\prod_{n=1}^{N} |1-e(an/q)| = q$ , and so from (4) we conclude that $$ \prod_{n=1}^{q-1} |1-e(n\alpha)| \ge q \exp\Big( - 2\pi q^2 |\beta| \log q +O(q^2|\beta|)\Big) \gg q^{0.9}. $$ This proves the claim. Note that this argument is closely related to the nice attempt of Sangchul Lee. Added information (October, 2021): From comments on Fedja's recent question Can all partial sums $\sum_{k=1}^n f(ka)$ where $f(x)=\log|2\sin(x/2)|$ be non-negative? I was led to the following paper by D.S. Lubinsky which shows that (see Theorem 1.2 there) $$ \limsup_{N\to \infty} \frac{\log \prod_{n=1}^{N} |1-e(n\alpha)|}{\log N} \ge 1, $$ for all irrational numbers $\alpha$ , so that the product in the question gets almost as large as $N$ infinitely often.
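The exact cyclotomic identity $\prod_{n=1}^{q-1} |1-e(an/q)| = q$ invoked in the proof, together with the fluctuation of the partial products, can be checked numerically (a quick sketch; the helper name `e` simply mirrors the answer's notation):

```python
import cmath
import math

def e(t):
    """e(t) = exp(2*pi*i*t), the notation used in the answer."""
    return cmath.exp(2j * math.pi * t)

# prod_{n=1}^{q-1} |1 - e(an/q)| = q whenever gcd(a, q) = 1, since
# a*n then runs over all nonzero residues mod q and
# prod_{n=1}^{q-1} (x - e(n/q)) = (x^q - 1)/(x - 1) -> q as x -> 1.
for q, a in [(5, 2), (7, 3), (12, 5)]:
    assert math.gcd(a, q) == 1
    p = 1.0
    for n in range(1, q):
        p *= abs(1 - e(a * n / q))
    assert abs(p - q) < 1e-9

# Illustration only (not a proof): for an irrational alpha the partial
# products fluctuate rather than tending to zero.
alpha = math.sqrt(2)
p, biggest = 1.0, 0.0
for n in range(1, 200):
    p *= abs(1 - e(n * alpha))
    biggest = max(biggest, p)
assert biggest > 1
```

The last loop only illustrates the non-convergence to zero; the actual lower bound in the answer comes from evaluating the product at $N=q-1$ for good rational approximations $a/q$.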
{ "source": [ "https://mathoverflow.net/questions/236689", "https://mathoverflow.net", "https://mathoverflow.net/users/64462/" ] }
236,840
A language is a set of finite-length strings from some finite alphabet $\Sigma$. It is no loss of generality (for my purposes) to take $\Sigma=\{0,1\}$; so a language is a set of bit-strings. Languages are commonly classified in a hierarchy, with the enumerable $\equiv$ recursively enumerable $\equiv$ Turing-recognizable the outermost set, beyond decidability: (Figure from Sipser .) My question is about the terminology extending this hierarchy. The non-enumerable are clearly outside the enumerable ($\equiv$ Turing-recognizable). But some languages $L$ are enumerable but their complement $\overline{L}$ is not enumerable (such as the language of all Turing Machines that halt on a given input), whereas other languages $L$ are not enumerable, and their complement $\overline{L}$ is also not enumerable (such as the pairs of "equivalent" Turing machines). Q1 . Are there standard names for languages that go farther outside of the above diagram? The set of all languages is uncountable, so there is plenty of room for expanding the diagram. But I haven't seen a clear, "standard" expansion. I am seeking a progression further and further out into the uncountable to help students (and me!) see the vastness. I am seeking the equivalent of this iconic complexity-theory hierarchy: (Image from Seneca ICT .) Q2 . Is there an equivalent diagram in language theory?
Yes, for starters there is the arithmetical hierarchy , where enumerable = $\Sigma^0_1$ and it continues $\Pi^0_1$ , $\Delta^0_2$ , $\Sigma^0_2$ etc. See also the Computability Menagerie .
{ "source": [ "https://mathoverflow.net/questions/236840", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
237,279
The paper " A proof of Liouville's theorem " by E. Nelson, published in 1961 in Proceedings of AMS, contains just one paragraph, giving a (now) standard proof that every bounded harmonic function in $\mathbb{R}^n$ is a constant. I presume there must be a story behind it. First, it is hard to imagine that this proof was unknown before 1961. Second, even if this is the case, it doesn't feel usual, for the author, to submit such a paper and, for the editor, to accept it. So, can anyone tell that story? Or, to make the question precise: 1) are there any earlier references for this proof? 2) what was/were the standard proof(s) before 1961? 3) by a very similar reasoning, one obtains $||\nabla h||_{\infty,\Omega}\leq C(\Omega,\Omega')||h||_{\infty, \Omega'}$ for $\Omega\subset\Omega'$. Was that argument also unknown until 1961?
This doesn't answer any of the three specific questions asked, but addresses an implicit question: "Why did the editor accept it?" In 1961, the Proceedings of the AMS established a section called "Mathematical Pearls" devoted to, I quote: The purpose of this department is to publish very short papers of an unusually elegant and polished character, for which normally there is no other outlet. In the issue in which Nelson's proof appears, that section starts on page 991 and continues to the end, including 7 papers in all, none of which exceeds 2 pages. In a different issue you can find this paper which also contains the quoted disclaimer above.
{ "source": [ "https://mathoverflow.net/questions/237279", "https://mathoverflow.net", "https://mathoverflow.net/users/56624/" ] }
237,311
I'm currently working on my PhD thesis. I have several suggested problems to work on, some of them very similar to problems that my advisor has worked on and published already, either in his thesis or in papers. Basically, the main difference is in the dimension of certain singular sets (his works are mainly on the isolated case, while I'm working on a case with a far hairier, non-isolated singular set); we were unsure whether the argument would carry over, but it seems that the adaptations I've made are fine. Not that the nature of the problem matters, but the approach I'm taking worries me. It seems to me that if there were a 'railroad' to prove the results I'm working on, it would be the same path that he followed to establish his own results, with different objects. That's the way I've been advised to work, and it's been producing results. Is this a reproachable approach? Of course, there is the problem of using similar introductions (on that subject I've read this previous question Does this qualify as "self plagiarism" or something? , the only one that got close to my problem), since the objects studied by me and those studied by him are so similar.
What you describe seems to me to be a normal mode of mathematical progress, and I would urge you simply to carry on! Ride that train as far as you can. It often happens that someone's mathematical results can be improved or generalized in various ways, and when this is possible, it is mathematically desirable that the generalization be undertaken well. You may be worried that the value of this work is less than some other totally original work. If the generalizations are routine, then indeed that may be true. But from what you say, this doesn't seem to be your case. Many generalizations are not routine and such work is definitely worth doing. Finally, let me caution you to guard yourself against a certain mistake that sometimes undermines motivation for a young researcher. Namely, it often happens in mathematical research that we begin in a state of terrible confusion about a topic; as research progresses, things only gradually become clarified. After hard work, we finally begin to understand what is the actual question we should be asking; and then, after fitful starts and retreats, we gain some hard-won insight; until finally, after laborious investigation, we have the answer. But alas — it is at this point that the crippling illness strikes. Namely, because the researcher now understands the problem and its solution so well, he or she begins to lose sight of the value of the very solution that was made. The mathematical advance begins to seem trivial or obvious, perhaps without value. Having solved the problem so well, the mathematician becomes a victim of his or her own success. Because all is now so clear, it is harder to appreciate the value of the achievement that was made. Please guard against this disease! Do not denigrate your achievement simply because it seems easy after you have made it. In many mathematical realms, the actual achievement in research is that certain issues and ideas become easy to understand. 
Please look upon the ease of the answer at the end as part of the achievement itself, and think back to the initial state of confusion at the beginning of the work to realize the value of what you have done. So please carry on and ride that railroad as far as it will take you.
{ "source": [ "https://mathoverflow.net/questions/237311", "https://mathoverflow.net", "https://mathoverflow.net/users/31676/" ] }
237,876
I previously asked this question on MSE and offered a bounty but received no responses. There are examples of compact complex manifolds with no positive-dimensional compact complex submanifolds. For example, generic tori of dimension greater than one have no compact complex submanifolds. The proof of this fact, see this answer for example, shows that these tori also have no positive-dimensional analytic subvarieties either (because analytic subvarieties also have a fundamental class). My question is whether the non-existence of compact submanifolds always implies the non-existence of subvarieties. Does there exist a compact complex manifold which has positive-dimensional analytic subvarieties, but no positive-dimensional compact complex submanifolds? Note, any such example is necessarily non-projective.
There are surfaces of type $VII_0$ on which the only subvariety is a nodal rational curve (I. Nakamura, Invent. math. 78, 393-443 (1984), Theorem 1.7, with $n=0$).
{ "source": [ "https://mathoverflow.net/questions/237876", "https://mathoverflow.net", "https://mathoverflow.net/users/21564/" ] }
237,987
A colleague raised the above question with me; more precisely he said: Suppose that a mathematician were resolved not to publish any theorems unless they had checked the proof of every theorem that they cite (and recursively the proofs of all the theorems that those rely on etc.). Can they have a career in pure mathematics? With the obvious proviso: Of course, there are a few well-known theorems, like the classification of finite simple groups, whose proofs are virtually impossible for any one person to check at all. But one can have a perfectly good mathematical career without ever citing any of those. It seems to me that complete checking might be possible, though perhaps only in narrow fields of mathematics, but does anyone know of mathematicians who actually do it? (or try to)
Possible or not, this should be a goal:-) Let me put it slightly differently: you should understand every result that you use. First of all, a theorem that you use can be wrong. So whenever you rest your proof on a theorem that you did not check, you take a risk. There are many known cases when a result was "accepted" by the mathematical community, and then turned out to be either wrong or unproved. If your proof relies on a theorem that you do not understand, this really means that you don't fully understand your own proof. In cases like the finite simple group classification, you should clearly state in your publication that your proof depends on it. And in general, if you write a proof which relies on a theorem that you do not fully understand, you should make it as clear as possible where exactly and how you use this theorem. EDIT. When you cite a result you endorse it. You are essentially saying that in your opinion it is correct. Now suppose you are simply asked to endorse some result: just to give your opinion on whether it is correct or not. Would you endorse it publicly in print, without checking the proof? In my opinion, citing a result in your paper without any comment is the same.
{ "source": [ "https://mathoverflow.net/questions/237987", "https://mathoverflow.net", "https://mathoverflow.net/users/1587/" ] }
238,153
Question (informal) Is there an empirically verifiable scientific experiment that can empirically confirm that the Lebesgue measure has physical meaning beyond what can be obtained using just the Jordan measure? Specifically, is there a Jordan non-measurable but Lebesgue-measurable subset of Euclidean space that has physical meaning? If not, then is there a Jordan measurable set that has no physical meaning? If you understand my question as it is, great! If not, in the subsequent sections I will set up as clear definitions as I can so that this question is not opinion-based and has a correct answer that is one of the following: Yes, some Jordan non-measurable subset of Euclidean space has physical meaning. No, there is no physically meaningful interpretation of Jordan non-measurable sets (in Euclidean space), but at least Jordan measurable sets do have physical meaning. No, even the collection of Jordan measurable sets is not wholly physically meaningful. In all cases, the answer must be justified. What counts as justification for (1) would be clear from the below definitions. As for (2), it is enough if the theorems in present scientific knowledge can be proven in some formal system in which every constructible set is Jordan measurable, or at least I would like citations of respected scientists who make this claim and have not been disproved. Similarly for (3), there must be some weaker formal system which does not even permit an embedding of Jordan sets but which suffices for the theorems in present scientific knowledge! Definitions Now what do I mean by physical meaning ? A statement about the world has physical meaning if and only if it is empirically verified, so it must be of the form: For every object X in the collection C, X has property P. For example: For every particle X, its speed measured in any reference frame does not exceed the speed of light. 
By empirical verification I mean that you can test the statement on a large number of instances (that cover the range of applicability well). This is slightly subjective but all scientific experiments follow it. Of course empirical verification does not imply truth, but it is not possible to empirically prove anything, which is why I'm happy with just empirical evidence, and I also require empirical verification only up to the precision of our instruments. I then define that a mathematical structure $M$ has physical meaning if and only if $M$ has a physically meaningful interpretation, where an interpretation is defined to be an embedding (structure-preserving map) from $M$ into the world. Thus a physically meaningful interpretation would be an interpretation where all the statements that correspond to structure preservation have physical meaning (in the above sense). Finally, I allow approximation in the embedding, so $M$ is still said to have (approximate) physical meaning if the embedding is approximately correct under some asymptotic condition. For example: The structure of $V = \mathbb{R}^3$ has an (approximate) physically meaningful interpretation as the points in space as measured simultaneously in some fixed reference frame centred on Earth. One property of this vector-space is: $\forall u,v \in V\ ( |u|+|v| \ge |u+v| )$ . Which is indeed empirically verified for $|u|,|v| \approx 1$ , which essentially says that it is correct for all position vectors of everyday length (not too small and not too big). The approximation of this property can be written precisely as the following pair of sentences: $\forall ε>0\ ( \exists δ>0\ ( \forall u,v \in V\ ( |u|-1 < δ \land |v|-1 < δ \rightarrow |u|+|v| \ge |u+v|-ε ) ) )$ . This notion allows us to classify scientific theories such as Newtonian mechanics or special relativity as approximately physically meaningful, even when they fail in the case of large velocities or large distances respectively. 
Question (formal) Does the structure of Jordan measurable subsets of $\mathbb{R}^3$ have (approximate) physical meaning? This is a 3-sorted first-order structure, with one sort for the points and one sort for the Jordan sets and one sort for $\mathbb{R}$ , which function as both scalars and measure values. If so, is there a proper extension of the Jordan measure on $\mathbb{R}^3$ that has physical meaning? More specifically, the domain for the sort of Jordan sets as defined above must be extended, and the other two sorts must be the same, and the original structure must embed into the new one, and the new one must satisfy non-negativity and finite additivity. Bonus points if the new structure is a substructure of the Lebesgue measure. Maximum points if the new structure is simply the Lebesgue measure! If not, is there a proper substructure of the Jordan measure on $\mathbb{R}^3$ such that its theory contains all the theorems in present scientific knowledge (under suitable translation; see (*) below)? And what is an example of a Jordan set that is not an element in this structure? Remarks A related question is what integrals have physical meaning. I believe many applied mathematicians consider Riemann integrals to be necessary, but I'm not sure what proportion consider extensions of that to be necessary for describing physical systems. I understand that the Lebesgue measure is an elegant extension and has nice properties such as the dominated convergence theorem, but my question focuses on whether 'pathological' sets that are not Jordan measurable actually 'occur' in the physical world. Therefore I'm not looking for the most elegant theory that proves everything we want, but for a (multi-sorted) structure whose domains actually have physical existence. The fact that we do not know the true underlying structure of the world does not prevent us from postulating embeddings from a mathematical model into it. 
For a concrete example, the standard model of PA has physical meaning via the ubiquitous embedding as binary strings in some physical medium like computer storage, with arithmetic operations interpreted as the physical execution of the corresponding programs. I think most logicians would accept that this claim holds (at least for natural numbers below $2^{1024}$ ). Fermat's little theorem, which is a theorem of PA, and its consequences for RSA, have certainly been empirically verified by the entire internet's use of HTTPS, and of course there are many other theorems of PA underlying almost every algorithm used in software! Clearly also, this notion of embedding is not purely mathematical but has to involve natural language, because that is what we currently use to describe the real world. But as can be seen from the above example, such translation does not obscure the obvious intended meaning, which is facilitated by the use of (multi-sorted) first-order logic, which I believe is sufficiently expressive to handle most aspects of the real world (see the below note). (*) Since the 3-sorted structure of the Jordan measure essentially contains the second-order structure of the reals and much more, I think that all the theorems of real/complex analysis that have physical meaningfulness can be suitably translated and proven in the associated theory, but if anyone thinks that there are some empirical facts about the world that cannot be suitably translated, please state them explicitly, which would then make the answer to the last subquestion a "no".
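Fermat's little theorem itself can be "empirically verified" in exactly the sense of the question; here is a minimal sketch of mine (not from the original post) testing it for random bases modulo a few primes:

```python
import random

random.seed(0)
primes = [2, 3, 5, 7, 11, 101, 7919, 2 ** 31 - 1]   # 2^31 - 1 is a Mersenne prime
for p in primes:
    for _ in range(100):
        a = random.randrange(1, p)
        # Fermat's little theorem: a^(p-1) = 1 (mod p) for prime p with p not dividing a.
        assert pow(a, p - 1, p) == 1
print("Fermat's little theorem held on all sampled cases")
```

The three-argument `pow` is the same modular exponentiation that RSA implementations execute, so this is a (tiny) instance of the physical verification described above.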
There are at least two different $\sigma$ -algebras that Lebesgue measure can be defined on: The (concrete) $\sigma$ -algebra ${\mathcal L}$ of Lebesgue-measurable subsets of ${\bf R}^d$ . The ( abstract ) $\sigma$ -algebra ${\mathcal L}/{\sim}$ of Lebesgue-measurable subsets of ${\bf R}^d$ , up to almost everywhere equivalence. (There is also the Borel $\sigma$ -algebra ${\mathcal B}$ , but I will not discuss this third $\sigma$ -algebra here, as its construction involves the first uncountable ordinal, and one has to first decide whether that ordinal is physically "permissible" in one's concept of an approximation. But if one is only interested in describing sets up to almost everywhere equivalence, one can content oneself with the $F_\sigma$ and $G_\delta$ levels of the Borel hierarchy , which can be viewed as "sets approximable by sets approximable by" physically measurable sets, if one wishes; one can then decide whether this is enough to qualify such sets as "physical".) The $\sigma$ -algebra ${\mathcal L}$ is very large - it contains all the subsets of the Cantor set, and so must have cardinality $2^{\mathfrak c}$ . In particular, one cannot hope to distinguish all of these sets from each other using at most countably many measurements, so I would argue that this $\sigma$ -algebra does not have a meaningful interpretation in terms of idealised physical observables (limits of certain sequences of approximate physical observations). However, the $\sigma$ -algebra ${\mathcal L}/{\sim}$ is separable, and thus not subject to this obstruction. And indeed one has the following analogy: ${\mathcal L}/{\sim}$ is to the Boolean algebra ${\mathcal E}$ of rational elementary sets (finite Boolean combinations of boxes with rational coordinates) as the reals ${\bf R}$ are to the rationals ${\bf Q}$ . 
Indeed, just as ${\bf R}$ can be viewed as the metric completion of ${\bf Q}$ (so that a real number can be viewed as a sequence of approximations by rationals), an element of ${\mathcal L}/{\sim}$ can be viewed (locally, at least) as an element of the metric completion of ${\mathcal E}$ (with metric $d(E,F)$ between two rational elementary sets $E,F$ defined as the elementary measure (or Jordan measure, if one wishes) of the symmetric difference of $E$ and $F$ ). The Lebesgue measure of a set in ${\mathcal L}/{\sim}$ is then the limit of the elementary measures of the approximating elementary sets. If one grants rational elementary sets and their elementary measures as having a physical interpretation, then one can view an element of ${\mathcal L}/{\sim}$ and its Lebesgue measure as having an idealised physical interpretation as being approximable by rational elementary sets and their elementary measures, in much the same way that one can view a real number as having idealised physical significance. Many of the applications of Lebesgue measure actually implicitly use ${\mathcal L}/\sim$ rather than ${\mathcal L}$ ; for instance, to make $L^2({\bf R}^d)$ a Hilbert space one needs to identify functions that agree almost everywhere, and so one is implicitly really using the $\sigma$ -algebra ${\mathcal L}/{\sim}$ rather than ${\mathcal L}$ . So I would argue that Lebesgue measure as it is actually used in practice has an idealised physical interpretation, although the full Lebesgue measure on ${\mathcal L}$ rather than ${\mathcal L}/{\sim}$ does not. Not coincidentally, it is in the full $\sigma$ -algebra ${\mathcal L}$ that the truth value of various set theoretic axioms of little physical significance (e.g. the continuum hypothesis, or the axiom of choice) becomes relevant.
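To make the "approximable by elementary sets" picture concrete, here is a toy illustration of my own (not from the answer): bracketing the area of the unit disk in ${\bf R}^2$ between inner and outer elementary sets built from grid squares, with both bounds converging to $\pi$ as the grid is refined.

```python
import math

def elementary_bounds(n):
    """Inner/outer approximations of the unit disk's area by elementary
    sets made of axis-aligned squares of side 1/n."""
    h = 1.0 / n
    inner_cells = outer_cells = 0
    for i in range(-n, n):
        for j in range(-n, n):
            x0, x1 = i * h, (i + 1) * h
            y0, y1 = j * h, (j + 1) * h
            # Farthest corner decides containment in the disk...
            far = max(x0 * x0, x1 * x1) + max(y0 * y0, y1 * y1)
            # ...while the point of the cell closest to the origin
            # (clamp 0 into each coordinate interval) decides overlap.
            cx = min(max(0.0, x0), x1)
            cy = min(max(0.0, y0), y1)
            near = cx * cx + cy * cy
            if far <= 1.0:
                inner_cells += 1
            if near < 1.0:
                outer_cells += 1
    return inner_cells * h * h, outer_cells * h * h

for n in (8, 32, 128):
    lo, hi = elementary_bounds(n)
    print(n, lo, hi)    # both bounds approach pi as n grows
assert elementary_bounds(128)[0] < math.pi < elementary_bounds(128)[1]
```

The gap between the two bounds is the elementary measure of a symmetric difference, and it shrinks like $O(1/n)$; this is the sense in which the disk is a limit of elementary sets in the metric $d(E,F)$ above.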
{ "source": [ "https://mathoverflow.net/questions/238153", "https://mathoverflow.net", "https://mathoverflow.net/users/50073/" ] }
238,177
This question was originally posted Here , where I'm more interested in methods for manual solutions yielding $n$ or fewer moves on average. I wanted to post it here as well, to see what the people of MathOverflow think about it. We are all familiar with the classic problem of finding one heavier ball among $n$ otherwise identical lighter balls using a scale and the minimum number of weighings. But I'm interested in a variation of this problem. You have an even number of balls, $2n$ identical-looking balls. Half of them, $n$ balls, are "Heavy Balls" and the other half are "Light Balls". Find a method to separate the balls into the "Heavy" and the "Light" box with as few weighings as possible, using a scale from which you can read the exact difference between the total weights of the right and the left side. What is the minimum number of weighings required if we are given $2n$ balls? What is the optimal method, for any $n$, to separate the balls with as few weighings as possible? For my progress on the specific cases of $n$ so far, check the original question linked Here .
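Not an answer to the optimality question, but an easy upper bound worth recording (my own sketch, not from the original post): comparing ball $0$ against each of the others takes $2n-1$ weighings, since the sign of each exact difference reading says whether the two balls are in the same class, and at least one reading must be nonzero.

```python
import random

def separate(weights):
    """Classify 2n balls of two unknown weights into heavy/light, using a
    scale that reports the exact signed difference. Naive baseline:
    weigh ball 0 against every other ball (2n - 1 weighings)."""
    diffs = [weights[0] - weights[i] for i in range(1, len(weights))]
    # Ball 0 is heavy iff some comparison came out positive.
    ball0_heavy = any(d > 0 for d in diffs)
    heavy, light = [], []
    for i, d in enumerate(diffs, start=1):
        same_class = (d == 0)
        (heavy if same_class == ball0_heavy else light).append(i)
    (heavy if ball0_heavy else light).append(0)
    return sorted(heavy), sorted(light), len(diffs)

random.seed(1)
n = 5
w = [2.0] * n + [1.0] * n    # n heavy balls (weight 2) and n light (weight 1)
random.shuffle(w)
heavy, light, used = separate(w)
assert all(w[i] == 2.0 for i in heavy) and all(w[i] == 1.0 for i in light)
print(f"separated with {used} weighings")
```

Since a single weighing of larger subsets can return many distinct values, one might hope that far fewer weighings suffice, which is exactly what the question is after.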
{ "source": [ "https://mathoverflow.net/questions/238177", "https://mathoverflow.net", "https://mathoverflow.net/users/88524/" ] }
239,383
I understand that the homotopy category of (pointed) topological spaces and continuous maps is not complete. Nor is it cocomplete. In particular it neither has all pullbacks nor all pushouts. What are some simple examples of spans $Z\leftarrow X\rightarrow Y$ and cospans $Z\rightarrow X\leftarrow Y$ that cannot be completed to pushout and pullback squares, respectively?
I'll work in the based category, and consider $S^1$ as $\{z\in\mathbb{C}:|z|=1\}$. Consider the maps $$\text{point}\xleftarrow{}S^1\xrightarrow{f}S^1, $$ where $f(z)=z^2$. Suppose that there is a pushout $P$. We would then have a natural isomorphism $[P,X]=\text{Hom}(\mathbb{Z}/2,\pi_1(X))$. On the other hand, the fibration $$ S^1 = B\mathbb{Z} \xrightarrow{f} S^1 \to \mathbb{R}P^\infty = B(\mathbb{Z}/2) \to \mathbb{C}P^\infty = BS^1 \xrightarrow{Bf} \mathbb{C}P^\infty $$ gives an exact sequence $$ [P,S^1] \to [P,\mathbb{R}P^\infty] \to [P,\mathbb{C}P^\infty]. $$ Now $\pi_1(S^1)=\mathbb{Z}$ and $\pi_1(\mathbb{R}P^\infty)=\mathbb{Z}/2$ and $\pi_1(\mathbb{C}P^\infty)=0$, so if $[P,X]=\text{Hom}(\mathbb{Z}/2,\pi_1(X))$ then we have an exact sequence $0\to\mathbb{Z}/2\to 0$, which is impossible.
{ "source": [ "https://mathoverflow.net/questions/239383", "https://mathoverflow.net", "https://mathoverflow.net/users/54788/" ] }
239,808
I'm looking for a closed form for the expression $$ \sum_{k=1}^{\infty}\frac{1}{(2k)^5-(2k)^3} $$ I know that Ramanujan gave the following closed form for a similar expression $$ \sum_{k=1}^{\infty}\frac{1}{(2k)^3-2k}= \ln(2)-\frac{1}{2} $$ I wonder if it is possible to find such a similarly simple and nice closed form for the above case. Thanks.
$$  \frac{1}{(2k)^5 - (2k)^3} + \frac{1}{(2k)^3} = \frac{1 + (2k)^2 - 1}{(2k)^5 - (2k)^3} = \frac{1}{(2k)^3 -2k}$$ So by Ramanujan's result: $$\sum_{k=1}^{\infty} \frac{1}{(2k)^5 - (2k)^3} = \ln(2) - \frac{1}{2} - \frac{1}{8}\zeta(3)$$
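A quick numerical sanity check of this closed form (my own addition; $\zeta(3)$ is approximated by a direct partial sum):

```python
import math

# Partial sum of the series; terms decay like (2k)^-5, so it converges fast.
s = sum(1.0 / ((2 * k) ** 5 - (2 * k) ** 3) for k in range(1, 2000))

# Apery's constant zeta(3) via a partial sum (truncation error ~ 1/(2N^2)).
zeta3 = sum(1.0 / n ** 3 for n in range(1, 100_000))

closed_form = math.log(2) - 0.5 - zeta3 / 8
print(s, closed_form)          # both approximately 0.0428901
assert abs(s - closed_form) < 1e-8
```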
{ "source": [ "https://mathoverflow.net/questions/239808", "https://mathoverflow.net", "https://mathoverflow.net/users/6842/" ] }
239,832
My question is quite simple : we know all closed subgroups of $\mathrm{SO}(3)$; is it also known what are the closed subgroups of $\mathrm{SO}(4)$?
{ "source": [ "https://mathoverflow.net/questions/239832", "https://mathoverflow.net", "https://mathoverflow.net/users/92172/" ] }
240,337
Here's something that I noticed that quite surprised me. Let $G$ be a finite abelian group. Consider the following expression. $$ \nu(G) = \sum_{\substack{H \leq G \\ H \text{ is cyclic}}} |H| $$ It is easy to see that for cyclic groups, we have that $\nu(G) = \sigma_1(|G|)$. What is significantly more surprising is the following. Theorem For every finite abelian group, we have that $$ \nu(G) = \sigma_1(|G|) $$ That is, this only depends on the order of the group. Now, one can see pretty quickly that this is a multiplicative function, and so proving this reduces to studying abelian $p$-groups. But even there it isn't obvious: consider the groups $\mathbb{Z}/p \times \mathbb{Z}/p$ and $\mathbb{Z}/p^2$. The first of these has lots of small cyclic subgroups, while the second has just one large one, and amazingly enough these work out to contribute the same amount. So is there a nice explanation for this? This definitely surprised me, and the only way I can prove this is with not-so-pretty computations that don't enlighten me much.
(Essentially the same answer as Neil Strickland's:) Since a cyclic group of order $n$ has $\varphi(n)$ generators, your sum equals $$ \DeclareMathOperator{\ord}{ord} \nu(G) = \sum_{g\in G} \frac{ \ord(g) }{ \varphi(\ord(g)) } . $$ For elements $g$ of $p$-power order, we get $\ord(g)/ \varphi( \ord(g) ) = p/(p-1)$ and thus if $G$ is a $p$-group, $\nu(G) = 1 + (|G|-1)p/(p-1)$ depends only on the order of $G$ and not on its structure, and so $\nu(G) =\nu(C_{|G|})$. So your equality holds for nilpotent groups, as was remarked in Nilpotency of a group by looking at orders of elements . See also the papers http://arxiv.org/abs/1207.1020 and http://arxiv.org/abs/1503.00355 .
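The identity is easy to check by machine via the element-order form of the sum given above; this brute-force sketch (my own) verifies $\nu(G)=\sigma_1(|G|)$ for a few small products of cyclic groups:

```python
from fractions import Fraction
from itertools import product
from math import gcd

def phi(n):
    """Euler's totient, by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def nu(mods):
    """nu(G) for G = Z/m1 x ... x Z/mr: each cyclic subgroup of order d
    has phi(d) generators, so summing ord(g)/phi(ord(g)) over g in G
    equals the sum of |H| over cyclic subgroups H."""
    total = Fraction(0)
    for g in product(*(range(m) for m in mods)):
        ord_g = 1
        for a, m in zip(g, mods):
            d = m // gcd(a, m)                   # order of a in Z/m
            ord_g = ord_g * d // gcd(ord_g, d)   # running lcm
        total += Fraction(ord_g, phi(ord_g))
    return total

def sigma(n):
    """Sum-of-divisors function sigma_1."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Z/p x Z/p and Z/p^2 contribute identically, namely sigma(p^2):
for p in (2, 3, 5):
    assert nu([p, p]) == nu([p * p]) == sigma(p * p)
# A non-prime-power example of order 12:
assert nu([2, 6]) == sigma(12) == 28
```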
{ "source": [ "https://mathoverflow.net/questions/240337", "https://mathoverflow.net", "https://mathoverflow.net/users/1703/" ] }
241,727
Calculation suggests the following identity: $$ \lim_{n\to \infty}\sum_{k=1}^{n}\frac{(-1)^k}{k}\sum_{j=1}^k\frac{1}{2j-1}=\frac{1-\sqrt{5}}{2}. $$ I have verified this identity for $n$ up to $5000$ via Maple and found that the left-hand side approaches $\frac{1-\sqrt{5}}{2}$. However, this double summation has a slow rate of convergence and I am unsure whether it is true. So I want to ask if it is true. If so, how to prove it?
You can evaluate this by using generating functions and integrating. The answer is $-\pi^2/16 = -0.61685 \ldots$ which is pretty close to $(1-\sqrt{5})/2=-0.61803\ldots$. Here's a sketch: the sum is $$ \sum_{k=1}^{\infty} \frac{(-1)^k}{k} \int_0^1 (1+x^2+ \ldots +x^{2k-2}) dx = \int_0^1 \sum_{j=0}^{\infty} x^{2j} \sum_{k=j+1}^{\infty} \frac{(-1)^k}{k} dx $$ which is $$ = \int_0^1 \sum_{j=0}^{\infty} x^{2j} \Big(\int_0^1 \sum_{k=j}^{\infty} -(-y)^{k} dy \Big) dx = - \int_0^1 \int_0^1 \sum_{j=0}^{\infty} \frac{(x^2 y)^j}{1+y} dy dx, $$ which is $$ = - \int_0^1\int_0^1 \frac{dx dy}{(1+x^2y)(1+y)}. $$ The integral in $y$ can be done easily: $$ \int_0^1 \frac{1}{1-x^2} \Big( \frac{1}{1+y}- \frac{x^2}{1+x^2y}\Big)dy = \log \Big(\frac{2}{1+x^2}\Big) \frac{1}{1-x^2}. $$ We're left with $$ - \int_0^1 \log \frac{2}{1+x^2} \frac{dx}{1-x^2}, $$ which WolframAlpha evaluates as $-\pi^2/16$. (This doesn't look too bad to do by hand, but I don't see a reason to do one variable integrals that a computer can recognize at once.)
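A numerical check (my own addition) confirms the value $-\pi^2/16 \approx -0.61685$ rather than $(1-\sqrt 5)/2 \approx -0.61803$; since the alternating series converges slowly, averaging two consecutive partial sums accelerates it:

```python
import math

N = 200_000
inner = 0.0          # running inner sum 1 + 1/3 + ... + 1/(2k-1)
s = s_prev = 0.0     # consecutive partial sums of the outer series
sign = -1            # (-1)^k, starting at k = 1
for k in range(1, N + 1):
    inner += 1.0 / (2 * k - 1)
    s_prev = s
    s += sign * inner / k
    sign = -sign

# Averaging consecutive partial sums of an alternating series kills most
# of the slowly decaying oscillating tail.
estimate = 0.5 * (s + s_prev)
print(estimate)                                       # approximately -0.6168503
assert abs(estimate + math.pi ** 2 / 16) < 1e-4       # matches -pi^2/16...
assert abs(estimate - (1 - math.sqrt(5)) / 2) > 5e-4  # ...but not (1-sqrt5)/2
```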
{ "source": [ "https://mathoverflow.net/questions/241727", "https://mathoverflow.net", "https://mathoverflow.net/users/6104/" ] }
242,083
In the interview of John Nash taken by Christian Skau and Martin Gaussen in the EMS Newsletter, September 2015, when asked Is it true, as rumours have it, that you started to work on the embedding problem as a result of a bet? Nash answered I began to work on it. Then I got shifted onto the $C^1$ case. It turned out that one could do it in this case with very few excess dimensions of the embedding space compared with the manifold. I did it with two but then Kuiper did it with only one. But he did not do it smoothly, which seemed to be the right thing—since you are given something smooth, it should have a smooth answer. and But a few years later, I made the generalisation to smooth. I published it in a paper with four parts. There is an error, I can confess now. Some forty years after the paper was published, the logician Robert M. Solovay from the University of California sent me a communication pointing out the error. I thought: “How could it be?” I started to look at it and finally I realized the error in that if you want to do a smooth embedding and you have an infinite manifold, you divide it up into portions and you have embeddings for a certain amount of metric on each portion. So you are dividing it up into a number of things: smaller, finite manifolds. But what I had done was a failure in logic. I had proved that—how can I express it?—that points local enough to any point where it was spread out and differentiated perfectly if you take points close enough to one point; but for two different points it could happen that they were mapped onto the same point. My question is: What did Nash mean by very few excess dimensions of the embedding space compared with the manifold? It means that they can do it when the dimension of the embedding space is a little bit greater than the dimension of the manifold, doesn't it? What did Nash mean by his generalization to smooth? What did Nash mean by a certain amount of metric on each portion? 
Does this mean that each portion has some different metrics? What did Nash mean by "I had proved that that points local enough to any point where it was spread out and differentiated perfectly if you take points close enough to one point"? How can a point be spread out and differentiated? Well, I don't want to cross-post, but this question was posted two days ago on MSE with no answer, so I decided to post it here. Please explain these for me. Thanks.
Igor already answered questions 1 and 2. What Nash wrote is an attempt to describe to a non-expert audience the solution scheme he had for non -compact manifolds. In the noncompact case he proceeds by a reduction process to the compact case. The process involves decomposing the manifold into smaller neighborhoods each diffeomorphic to disks/balls. Very roughly speaking, Nash then took a partition of unity adapted to this system of neighborhoods, and used that to cut off the Riemannian metric on the original manifold to these disks. Now, the truncated objects are no longer Riemannian metrics (since they are not everywhere positive definite); but since the problem is not really that of geometry but that of PDEs in this setting, this is not an obstruction. When two neighborhoods overlap, the cut-off function is less than 1 for both of the pieces. So each of the neighborhoods inherits "a portion of the metric". In other words, the phrase that you are asking about is just Nash trying to describe the idea of a "partition of unity" without using exactly those words. This is in regards to the second portion of the scheme. After dividing into disks and keeping track of their overlaps (using what Nash called "classes"; that's also the sense of the word in Solovay's message), it is possible to (very roughly speaking) replace the disks by spheres. You do this using that the disks can be topologically identified with the sphere with a closed disk removed, and that the "portion of the metric" on the disc is cut off so that it approaches 0 smoothly on the boundary. This is just largely playing with cut-off functions. Now, each sphere has an isometric embedding into some Euclidean space by the theorem for compact manifolds (proven earlier in the paper). So all that remains is to somehow reassemble all these spheres into one larger Euclidean space while guaranteeing that there are no self-intersections. (Remark: the method of assembly works. Full stop. 
So if you are happy with an isometric immersion then you are done. The problem that Solovay pointed out has to do with the embedding (non-self-intersecting) part.) The method of assembly goes something like this. a. The choice of the disks earlier means that the disks can be divided into finitely many classes, such that two disks of the same class cannot intersect. b. For each class, construct a smooth map from $M$ such that points within the discs of that class are sent to the image of the embedding of the corresponding sphere. But points outside the disks of that class are sent to the origin. c. Take the cartesian product over the classes. This guarantees an immersion. To get non-self-intersection Nash tried to exploit the fact that his isometric embedding theorem in the compact case allows one to squeeze the manifold into arbitrarily small neighborhoods. So within each class he can arrange for the different disks to be almost disjointly embedded. He claims that this is enough. Solovay showed that there is a hole in the argument. See below the cut for more info. Incidentally, Nash's paper is available here ; the portion that concerns questions 3 and 4 is all in part D, which is decidedly in the "easy" part of the paper. (The hard analytic stuff all happened in part B; here it is essentially combinatorics.) To reveal the "logic problem" with the embedding proof, we remove all unnecessary portions of the proof and focus on the argument that "ensures" non-self-intersection. This operates entirely on the level of sets and does not require any geometry or analysis. What Nash did boils down to: Given a set $M$, he showed that we can decompose $M$ as the union of a bunch of sets $U^{(i)}_j$. The index $i$ runs over $1, \ldots, n+1$ (a finite number). The index $j$ can be infinite. Each $U^{(i)}_j$ has a subset within called $V^{(i)}_j$, and the union of all these subsets $V^{(i)}_j$ is also assumed to cover $M$. 
For each $i$ Nash finds a space $X^{(i)}$, and a point $x^{(i)}\in X^{(i)}$, and for each $j$ there is an injective mapping $\psi^{(i)}_j: U^{(i)}_j \to X^{(i)}\setminus \{x^{(i)}\}$. Since $U^{(i)}_j$ and $U^{(i)}_k$ do not intersect unless $j = k$, for each $i$ we can extend this to a map $$ \psi^{(i)}: M \to X^{(i)}$$ by requiring $$ \psi^{(i)}(p) = \begin{cases} \psi^{(i)}_j(p), &p\in U^{(i)}_j \\ x^{(i)}, &\text{otherwise}\end{cases} $$ Let $X = X^{(1)} \times X^{(2)} \times \cdots \times X^{(n+1)}$. We are interested in the map $$ \psi = \psi^{(1)}\times \psi^{(2)}\times \cdots \times \psi^{(n+1)}: M \to X.$$ We want to show that $\psi$ is injective. Nash's idea: We can assume that $\psi^{(i)}(V^{(i)}_j) \cap \psi^{(i)}(U^{(i)}_k) = \emptyset$ if $k > j$. (This he achieves by the fact that isometric embedding can be "made small".) He claims this is enough to show injectivity of $\psi$, because: If $p,q\in U^{(i)}_j$ for the same $i,j$, then we are done because $\psi^{(i)}_j$ is by construction injective. If $p \in U^{(i)}_j$ and $q \not\in U^{(i)}_k$ for any $k$, then we know $\psi^{(i)}(p) \neq x^{(i)} = \psi^{(i)}(q)$ by construction, and so we are done. So the main worry is that $p\in U^{(i)}_j$ and $q \in U^{(i)}_k$ for some different $j,k$. To deal with this, Nash argued thus (I paraphrase): Since the $V$'s cover $M$, $p$ is in some $V^{(i)}_j$ and similarly $q$. So either $q$ does not belong to any $U^{(i)}_*$ in which case we are done by point 2, or $q$ belongs to some $U^{(i)}_k$. If $k > j$ by Nash's idea their images are disjoint, so we get injectivity. If $k = j$ we are done by point 1. If $k < j$ we swap the roles of $p$ and $q$ . The bold phrase was not stated as such in Nash's original paper, but it was his intent. Stated in this form, however, it becomes clear what the problem is: in the argument $p$ and $q$ are not symmetric! One cannot simply swap the roles of $p$ and $q$. 
It could easily be the case that $q \in U^{(i)}_k \setminus V^{(i)}_k$ and then the idea of making the images of $U^{(i)}_*$ "almost disjoint" fails to yield anything useful. Solovay's message instantiates this observation by setting up a situation where $$ p \in (U^{(1)}_1 \setminus V^{(1)}_1) \cap V^{(2)}_2 $$ and $$ q \in (U^{(2)}_1 \setminus V^{(2)}_1) \cap V^{(1)}_2 $$ and both not in any other $U^{(i)}_j$. So in terms of the turn of phrase Nash used: "points local enough to any point" == we take $p \in U^{(i)}_j$ and $q \in U^{(k)}_l$ (they are in neighborhoods of some point) "it was spread out and differentiated perfectly" == the map $\psi(p) \neq \psi(q)$ (is injective; "differentiated" in the sense of "to tell apart") "if you take points close enough to one point" == if $i = k$ and $j = l$ (in the same neighborhood) "but for two different points it could happen that they were mapped onto the same point." == injectivity may fail otherwise.
{ "source": [ "https://mathoverflow.net/questions/242083", "https://mathoverflow.net", "https://mathoverflow.net/users/18761/" ] }
242,379
In his 2014 book , Giovanni Ferraro writes at beginning of chapter 1, section 1 on page 7: Capitolo I Esempi e metodi dimostrativi Introduzione In The Calculus as Algebraic Analysis, Craig Fraser, riferendosi all'opera di Eulero e Lagrange, osserva: A theorem is often regarded as demonstrated if verified for several examples, the assumption being that the reasoning in question could be adapted to any other example one chose to consider (Fraser [1989, p. 328]). Le parole di Fraser colgono un aspetto poco indagato della matematica dell'illuminismo. I am not fluent in Italian but the last sentence seems to indicate that Ferraro endorses Fraser's position as expressed in the passage cited in the original English without Italian translation. I was rubbing my eyes as I was reading this so I decided to check in Fraser's original , thinking that perhaps the comment is taken out of context. I found the following longer passage on Fraser's page 328 quoted by Ferraro: The calculus of EULER and LAGRANGE differs from later analysis in its assumptions about mathematical existence. The relation of this calculus to geometry or arithmetic is one of correspondence rather than representation. Its objects are formulas constructed from variables and constants using elementary and transcendental operations and the composition of functions. When EULER and LAGRANGE use the term "continuous" function they are referring to a function given by a single analytical expression; "continuity" means continuity of algebraic form. A theorem is often regarded as demonstrated if verified for several examples, the assumption being that the reasoning in question could be adapted to any other example one chose to consider. Let us examine Fraser's hypothesis that in Euler and Lagrange, allegedly "a theorem is often regarded as demonstrated if verified for several examples." I don't see Fraser presenting any evidence for this. 
Now Wallis sometimes used a principle of "induction" in an informal sense that a formula verified for several values of $n$ should be true for all $n$, but for this he was already criticized by his contemporaries, a century before Euler and Lagrange. Several articles were recently published examining Euler's proof of the infinite product formula for the sine function. The proof may rely on hidden lemmas, but it is a sophisticated argument that is a far cry from anything that could be described as "verification for several examples." It seems to me that this passage from Fraser is symptomatic of an attitude of general disdain for the great masters of the past. Such an attitude unfortunately is found among a number of received historians. For example, we find the following comment: Euler's attempts at explaining the foundations of calculus in terms of differentials, which are and are not zero, are dreadfully weak. (p. 6 in Gray, J. ``A short life of Euler.'' BSHM Bull. 23 (2008), no.1, 1--12). In a similar vein, in a footnote on 18th century notation, Ferraro presents a novel claim that for 18th-century mathematicians, there was no difference between finite and infinite sums. (footnote 8, p. 294 in Ferraro, G. ``Some aspects of Euler's theory of series: inexplicable functions and the Euler-Maclaurin summation formula.'' Historia Mathematica 25, no. 3, 290--317.) Far from being a side comment, the claim is emphasized a decade later in the Preface to his 2008 book: a distinction between finite and infinite sums was lacking, and this gave rise to formal procedures consisting of the infinite extension of finite procedures (p. viii in Ferraro, G. The rise and development of the theory of series up to the early 1820s. Sources and Studies in the History of Mathematics and Physical Sciences. Springer, New York.) Grabiner doesn't hesitate to speak about shaky eighteenth-century arguments (p. 358 in Grabiner, J. ``Is mathematical truth time-dependent?'' Amer. Math. 
Monthly 81 (1974), 354--365); it is difficult to evaluate her claim since she does not specify the arguments in question. Instead of viewing Fraser's passage as problematic, Ferraro opens his book with it, which is surely a sign of endorsement. The attitude of disdain toward the masters seems to have permeated the field to such an extent that it has acquired the status of a sine qua non of a true specialist. In my study of Euler I have seen sophisticated arguments rather than proofs by example, except for isolated instances such as de Moivre's formula. On the other hand Euler's oeuvre is vast. Question . Can Euler be said to have proved theorems by example in other than a handful of exceptional cases, in any meaningful sense? Note 1. Some editors requested examples of what I described above as a disdainful attitude toward the masters of the past on the part of some historians. I provided a couple of additional ones. Editors are invited to provide examples they have encountered; I believe they are ubiquitous. Note 2. We tried to set the record straight on Euler in this recent article and also here .
There's some evidence that precisely the opposite can be said: that Euler is aware of the fallacies of proving theorems by example (of course, this does not necessarily mean he has never used it). One memorable instance is his Exemplum Memorabile Inductionis Fallacis , where he described how he was almost led to conjecture a recursive formula for a particular numerical sequence until he found that they disagreed on the 10th term. (There are other reasons for that formula to have been plausible; that and other topics are discussed in this article .) (Incidentally the "right" formula is now quite well-known .)
{ "source": [ "https://mathoverflow.net/questions/242379", "https://mathoverflow.net", "https://mathoverflow.net/users/28128/" ] }
242,460
Although my experience with DG categories is pretty basic, I find them to be a very neat tool for organizing (co-)homological techniques in algebraic geometry. For someone who has algebro-geometric applications in mind they seem more attractive than stable $(\infty,1)$-categories, which seem to carry data in a slightly more convoluted way (which I realize makes for more powerful techniques and generalizations). So far though I've only seen a very limited amount of how actual algebraic geometry looks from a DG point of view, and most of the stuff I read about DG categories was either definitions or general theory (papers by Toen for example). Here are several questions I have in mind: What is the "correct" DG category associated to a scheme/algebraic space/stack? Can the different possibilities here be organized as different "DG-stacks" (of certain dg-categories of sheaves) on the relevant site? How can I see the classical "category" of derived categories as some kind of "category" of homotopy categories of dg-categories? (I'm putting category in brackets since I'm not sure that there's such an object; what I really want is to really understand the link between all the classical theory of derived categories and dg-categories.) In particular the six functor formalism. I realize that these questions might not have a straight yes/no answer and so what I'm looking for is a kind of roadmap to the relevant literature where the application and formalization of the place of dg-categories in algebraic geometry is discussed. Main question: What are some relevant articles/notes/books which establish and discuss the details of the formalism of DG-categories in the algebro-geometric world?
Let me try to address the bulleted questions and simultaneously advertise the G-R book everyone has mentioned. Since the main question was about literature, I could also mention Drinfeld's article "DG quotients of DG categories," which nicely summarizes the state of the general theory before $\infty$-categories shook everything up. However, it doesn't contain any algebraic geometry. If $X = \text{Spec } A$ is an affine scheme, it's reasonable to define the category of quasicoherent sheaves $\text{QCoh}(X) := A\text{-mod}$ as the category of $A$-modules. Any other definition (e.g. via Zariski sheaves) must reproduce this answer anyway. If we understand this as the derived category of $A$-modules, then there is a canonical DG model: the homotopically projective complexes in the sense of Drinfeld's article. The next step is to construct $\text{QCoh}(X)$ for $X$ not necessarily affine. So write $X = \cup_i \text{Spec } A_i$ as a union of open affines (say $X$ is separated to simplify things). It would be great if we could just "glue" the categories $A_i\text{-mod}$, the way that we compute global sections of a sheaf as a certain equalizer. Concretely, a complex of sheaves on $X$ should consist of complexes of $A_i$-modules for all $i$, identified on overlaps via isomorphisms satisfying cocycle "conditions" (really extra data). This is the kind of thing that totally fails in the triangulated world: limits of 1-categories just don't do the trick. Even if we work with the DG enhancements, DG categories do not form a DG category, so this doesn't help. As you might have guessed, this is where $\infty$-categories come to the rescue. Let me gloss over details and just say that there is a (stable, $k$-linear) $\infty$-category attached to a DG category such as $A$-mod, called its DG nerve . 
If we take the aforementioned equalizer in the $\infty$-category of $\infty$-categories, then we do get the correct $\infty$-category $\text{QCoh}(X)$, in the sense that its homotopy category is the usual derived category of quasicoherent sheaves on $X$. ( Edit: As Rune Haugseng explains in the comments, it's actually necessary to take the limit of the diagram of $\infty$-categories you get by applying $\text{QCoh}$ to the Cech nerve of the covering. The equalizer is a truncated version of this.) But, you might be thinking, I could have just constructed a DG model for $\text{QCoh}(X)$ using injective complexes of Zariski sheaves or something. That's true, and obviously suffices for tons of applications, but as soon as you want to work with more general objects than schemes you're hosed. True, there are workarounds using DG categories for Artin stacks, but the theory gets very technical very fast. If we instead accept the inevitability of $\infty$-categories, we can make the following bold construction. A prestack is an arbitrary functor from affine schemes to $\infty$-groupoids (i.e. spaces in the sense of homotopy theory). For example, affine schemes are representable prestacks, but prestacks also include arbitrary schemes and Artin stacks. Then for any prestack $\mathscr{X}$ we can define $\text{QCoh}(\mathscr{X})$ to be the limit of the $\infty$-categories $A\text{-mod}$ over the $\infty$-category of affine schemes $\text{Spec } A$ mapping to $\mathscr{X}$. A cofinality argument for Zariski atlases shows this agrees with our previous definition for $\mathscr{X}$ a scheme. For example, if $\mathscr{X} = \text{pt}/G$ is the classifying stack of an algebraic group $G$, then the homotopy category of $\text{QCoh}(\mathscr{X})$ is the derived category of representations of $G$. 
Even cooler: if $X$ is a scheme, the de Rham prestack $X_{\text{dR}}$ is defined by $$\text{Map}(S,X_{\text{dR}}) := \text{Map}(S_{\text{red}},X).$$ Then, at least if $k$ has characteristic zero, our definition of $\text{QCoh}(X_{\text{dR}})$ recovers the derived category of crystals on $X$, which can be identified with $\mathscr{D}$-modules. So we put two different "flavors" of sheaf theory on an equal footing.
{ "source": [ "https://mathoverflow.net/questions/242460", "https://mathoverflow.net", "https://mathoverflow.net/users/22810/" ] }
242,468
Why is the momentum map in the differential geometry of symmetries called the ''momentum'' (or ''moment'') map?
According to §1.3 and §11.2 of Marsden and Ratiu [1994] (see detailed citation given below), the momentum map concept can be traced back to Sophus Lie's 1890 book, and is an English translation of the French words application moment. To quote directly from Marsden and Ratiu: The notion of the momentum map (the English translation of the French words “application moment”) also has roots going back to the work of Lie. Many authors use the words “moment map” for what we call the “momentum map.” In English, unlike French, one does not use the phrases “linear moment” or “angular moment of a particle,” and correspondingly, we prefer to use “momentum map.” We shall give some comments on the history of momentum maps in §11.2. Add For additional clarification, I asked Tudor Ratiu about the origins of the term momentum map. In an email dated Sept 8th 2016, he gave a beautiful response on the evolution of ideas that led to the modern momentum map. At the very end, he made some comments on what Sophus Lie did. Here are the main points from his email, whose contents can mostly be found in the historical notes provided in Marsden and Ratiu [1994]. References are given at the bottom. (1) The momentum map is a generalization of the usual notions of linear and angular momentum from physics, which have a long history. (2) The first person to introduce the modern version of the momentum map was Bertram Kostant at a conference in Japan. He did not give it a name. He used it to prove what is today called Kostant's symplectic covering theorem . (3) Just a few weeks later, Jean-Marie Souriau independently introduced the modern momentum map. He correctly named it in French application moment. He also realized its physical significance, and linked it to linear and angular momentum. This was an enormous breakthrough. (4) Marsden and Weinstein, in their famous 1974 paper on reduction used the term moment map , which was their translation of the French words application moment . 
This had an immediate reaction from Hans Duistermaat who pointed out that this is a misnomer and physically incorrect. Indeed, even the wiki entry for moment , indicates in the first line that moment should not be confused with momentum. It's analogous to confusing force with linear momentum. (5) For a circle action, the momentum map (without any name) was introduced by Theodore Frankel in an Annals paper in the 1950s. This paper is famous because it links the existence of fixed points of the action to the Hamiltonian character of the action. This bit of history was not known to Marsden and Ratiu when they wrote their book, so it won't be found there. (6) Sophus Lie knew a lot of Poisson geometry and many people now regard him as the founder of Poisson geometry. However, Lie did not introduce symplectic geometry. The link to symplectic geometry, coadjoint orbits, etc. is due to Kostant and Souriau. See the historical notes in Marsden and Ratiu [1994] for details. References Frankel, T. [1959]. Fixed points on Kahler manifolds. Ann. Math. 70, 1–8. Marsden, Jerrold E. and Ratiu, Tudor [1994]. Introduction to Mechanics and Symmetry . Second Edition, 1999. Current 2nd Printing, 2003. New York, Springer-Verlag. Kostant, B. [1966] Orbits, symplectic structures and representation theory. Proc. US– Japan Seminar on Diff. Geom., Kyoto. Nippon Hyronsha, Tokyo 77. Ratiu, Tudor. Personal Communication, Sept 8th, 2016. Souriau, J.-M. [1970]. Structure des systèmes dynamiques, Maîtrises de mathématiques, Dunod, Paris, 1970. ISSN 0750-2435.
{ "source": [ "https://mathoverflow.net/questions/242468", "https://mathoverflow.net", "https://mathoverflow.net/users/56920/" ] }
242,608
There is a foreword, written by Professor Snow, to the book A mathematician's apology . In the foreword, something like the following is written: "Hardy was opposed to a certain mathematical competition in the UK because he believed that such competitions destroyed real mathematics in the UK during one century." My question is: What was that competition, and why did he believe that the competition destroyed real mathematics for a century?
Hardy's opposition was to the Mathematical Tripos (the Cambridge undergraduate mathematics degree), as it was prior to its reform in 1909, which Hardy did much to bring about. The text of "A Mathematician's Apology", with Snow's preface, is here ; the relevant paragraphs are Almost since the time of Newton, and all through the nineteenth century, Cambridge had been dominated by the examination for the old Mathematical Tripos. [...] It had only one disadvantage, as Hardy pointed out with his polemic clarity, as soon as he had become an eminent mathematician and was engaged, together with his tough ally Littlewood, in getting the system abolished: it had effectively ruined serious mathematics in England for a hundred years. As for why Hardy was so much against it, Snow's preface gives some of the reasons. It was a system which heavily emphasised fluency in intricate calculations rather than conceptual understanding. It was also a very rigid system which was slow to incorporate new developments in the subject (particularly those originating outside Britain). Moreover, the way the examination questions were set prioritised separating the top handful of candidates, rather than testing whether candidates near the bottom end had grasped the essentials (according to the statistics quoted on the Wikipedia page, in the 1854 examination the cut-off mark for a first-class degree was about 10% of the total marks available, and the cut-off for a third-class was about 2% -- seriously!).
{ "source": [ "https://mathoverflow.net/questions/242608", "https://mathoverflow.net", "https://mathoverflow.net/users/36688/" ] }
243,518
The categories of vector spaces and finite dimensional vector spaces are pretty much as nice as can be, I think. I was wondering what portions of basic linear algebra (first couple of courses) fall out by saying "big"(er) words, and also what standard facts admit a clarifying categorical phrasing. What are some interesting examples of facts about vector spaces and linear maps that admit a nice categorical formulation? Edit. I'm not looking for (completely) elementary things like definitions of universal constructions by universal properties instead of concrete ad hoc realizations, though I can't think of any "nonelementary" things. If you write an elementary property of the category of vectors spaces, e.g a property of any abelian category, please give some nice examples of where it lends its power.
To my mind there are two classes of interesting categorical facts here, loosely speaking "additive" facts and "multiplicative" facts. Some additive facts: Finite-dimensional vector spaces over $k$ has biproducts, and every object is a finite biproduct of copies of a single object, namely $k$. The categories with this property are precisely the categories of finite rank free modules over a semiring $R$ (the endomorphisms of the single object). These biproduct decompositions encapsulate both the idea that vector spaces have bases and that a choice of bases can be used to write linear maps as matrices. The single object $k$ above is simple, and so every object is a finite biproduct of simple objects. The categories with this property, in addition to 1, are precisely the categories of finite rank free modules over a division semiring $R$. Note that additive facts can't see anything about fields being commutative. The multiplicative facts can: Finite-dimensional vector spaces over $k$ is symmetric monoidal with respect to tensor product, and is also closed monoidal and has duals (sometimes called compact closed ). This observation encapsulates the yoga surrounding tensors of various types (e.g. endomorphisms $V \to V$ correspond to elements of $V \otimes V^{\ast}$), as well as the existence and basic properties (e.g. cyclic symmetry) of the trace. The single object $k$ above is the tensor unit, and so every object is a finite biproduct of copies of the unit. Also, the monoidal structure is additive in each variable. I believe, but haven't carefully checked, that the (symmetric monoidal) categories with this property, in addition to 1 and 2 above, are precisely the (symmetric monoidal) categories of finite rank free modules over a commutative division semiring ("semifield") $R$. This encapsulates the concrete description of tensor products as a functor in terms of Kronecker products . 
There's surprisingly little to say as far as fields being rings as opposed to semirings, though. This mostly becomes relevant when we reduce computing (co)equalizers to computing (co)kernels by subtracting morphisms, as in any abelian category. Edit: I haven't mentioned the determinant yet. This mixes additive and multiplicative: abstractly the point is that we have a natural graded isomorphism $$\wedge^{\bullet}(V \oplus W) \cong \wedge^{\bullet}(V) \otimes \wedge^{\bullet}(W)$$ where $\wedge^{\bullet}$ denotes the exterior algebra (which we need the symmetric monoidal structure, together with the existence of certain colimits, to describe). It follows that if $L_i$ are objects which have the property that $\wedge^k(L_i) = 0$ for $k \ge 2$ then $$\wedge^n(L_1 \oplus \dots \oplus L_n) = L_1 \otimes \dots \otimes L_n.$$ Combined with the facts above this gives the existence and basic properties of the determinant, more or less. Note that the exterior algebra can be defined by a universal property, but to verify that the standard construction has this universal property we need the symmetric monoidal structure to distribute over finite colimits. Fortunately this is implied by compact closure.
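The top-degree piece of the graded isomorphism above specializes to the familiar fact that the determinant of a block-diagonal map $A \oplus B$ is $\det(A)\det(B)$. A quick numerical sanity check (a sketch using NumPy; the particular matrices are arbitrary examples, not from the text):

```python
import numpy as np

# The top-degree part of  ∧(V ⊕ W) ≅ ∧(V) ⊗ ∧(W)  says that for a
# block-diagonal map A ⊕ B we have det(A ⊕ B) = det(A) · det(B).
A = np.array([[2.0, 1.0], [0.0, 3.0]])   # arbitrary example blocks
B = np.array([[1.0, 4.0], [2.0, 5.0]])

block = np.block([[A, np.zeros((2, 2))],
                  [np.zeros((2, 2)), B]])

lhs = np.linalg.det(block)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(lhs, rhs)  # both equal 6 · (5 - 8) = -18
```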
{ "source": [ "https://mathoverflow.net/questions/243518", "https://mathoverflow.net", "https://mathoverflow.net/users/69037/" ] }
243,557
I've seen a construction of a sentence of first order logic that is consistent, but has no models with underlying set $\mathbb{N}$ and recursive functions and relations. Do there also exist consistent sentences with no model on $\mathbb{N}$ with arithmetically definable functions and relations? If no, at which levels of the arithmetical hierarchy are there consistent sentences with no model at that level? If yes, what about the same question for analytically definable models?
{ "source": [ "https://mathoverflow.net/questions/243557", "https://mathoverflow.net", "https://mathoverflow.net/users/83073/" ] }
243,672
Let $\mathcal{C}$ be a category. Suppose $\mathcal{C}$ contains a terminal object, which I will denote by $\boldsymbol{1}$. Then for any object $B$ in $\mathcal{C}$, a global element of $B$ is a morphism $\boldsymbol{1}\longrightarrow B$. (In many concrete categories, the terminal object is a singleton set, so this definition picks out something close to what we would normally call an "element" of the object $B$.) Given another object $A$ in $\mathcal{C}$, we can say that a morphism $f:A\longrightarrow B$ is constant if there is some global element $\epsilon:\boldsymbol{1}\longrightarrow B$ such that $f = \epsilon \circ \alpha$, where $\alpha:A\longrightarrow \boldsymbol{1}$ is the (unique) morphism from $A$ to the terminal object $\boldsymbol{1}$. So far, these definitions are totally standard and well-known. But they require the existence of a terminal object. So my questions are the following: Suppose $\mathcal{C}$ is a category which does not necessarily have a terminal object. Is there a suitable definition of "global elements" for the objects of $\mathcal{C}$? Is there a suitable definition of "constant morphisms"? Here, by a "suitable" definition, I mean a definition which reduces to the definitions I gave above in the case when $\mathcal{C}$ has a terminal object. Also a suitable definition of "constant morphism" should have the following property: composing any morphism with a constant morphism yields a constant . (Note that the definition via terminal objects has this property.)
Yes. Instead of working in $C$ , you can work in presheaves $[C^{op}, \text{Set}]$ on $C$ using the Yoneda embedding. There is always a terminal presheaf given by sending every object $c \in C$ to $1 \in \text{Set}$ (whether or not it's representable by an object in $C$ ), and so you can make the following definitions using it. Definition: A global element of $c \in C$ is a natural transformation $1 \to \text{Hom}(-, c)$ of presheaves. Definition: A constant morphism $f : c \to d$ is a morphism such that the induced morphism $\text{Hom}(- , f) : \text{Hom}(-, c) \to \text{Hom}(-, d)$ factors through the terminal presheaf. Because the Yoneda embedding is fully faithful and preserves all limits that exist in $C$ , these definitions are guaranteed to reproduce the usual definitions if $C$ does in fact have a terminal object. Unwinding these definitions, we get the following. Definition: A global element of $c$ is a choice, for each object $c' \in C$ , of a morphism $f_{c'} : c' \to c$ such that, for every morphism $g : c' \to c''$ , we have $f_{c''} g = f_{c'}$ . Definition (edited, 7/6/16): A constant morphism $f : c \to d$ is a morphism such that there is a choice $f_{d'} : d' \to d$ of global element of $d$ in the above sense such that, for every morphism $g : c' \to c$ , we have $f g = f_{c'}$ .
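The unwound definitions are concrete enough to check by machine on a finite category. Below is a toy model (the category, objects, and morphism names are invented for illustration): two objects with a parallel pair $f, g : A \to B$ and no terminal object. The brute-force search confirms that $B$ has no global elements, since the compatibility condition would force $f = g$.

```python
from itertools import product

# Unwound definition: a global element of c is a choice of morphism
# f_o : o -> c for every object o such that f_{o'} ∘ g = f_o for every
# morphism g : o -> o'.  Toy category: parallel pair f, g : A -> B.
objects = ["A", "B"]
morphisms = {            # name -> (source, target)
    "idA": ("A", "A"), "idB": ("B", "B"),
    "f": ("A", "B"), "g": ("A", "B"),
}
comp = {                 # (second, first) -> composite
    ("idA", "idA"): "idA", ("idB", "idB"): "idB",
    ("f", "idA"): "f", ("g", "idA"): "g",
    ("idB", "f"): "f", ("idB", "g"): "g",
}

def hom(x, y):
    return [m for m, (s, t) in morphisms.items() if s == x and t == y]

def global_elements(c):
    choices = [hom(o, c) for o in objects]
    elements = []
    for family in product(*choices):
        fam = dict(zip(objects, family))
        # check fam[t] ∘ g == fam[s] for every morphism g : s -> t
        if all(comp[(fam[t], g)] == fam[s]
               for g, (s, t) in morphisms.items()):
            elements.append(fam)
    return elements

print(global_elements("B"))  # [] -- the condition would force f = g
```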
{ "source": [ "https://mathoverflow.net/questions/243672", "https://mathoverflow.net", "https://mathoverflow.net/users/94673/" ] }
243,836
Given an $n \times n$ matrix $A$ and the $n\times n$ all-ones matrix $J = (1)_{ij}$, I'm interested in the relation between the eigenvalues and eigenvectors of the matrices $A$ and $A+J$, or more generally $A_t := A + tJ$. Is there a nice description of the eigenvalues or eigenvectors of $A_t$ in terms of those of $A$? If not, what about for $t$ small? It would be great to have an answer for general coefficient fields, but I would also be interested in the case with $A \in M_n(\mathbb{C})$ or $A \in M_n(\mathbb{R})$ with all nonnegative or positive entries, if they have nicer answers. Thank you.
This is a special case of a rank one perturbation or a rank one update , and there is plenty of work on such. See the nice 2010 lecture notes by Andre Ran.
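For symmetric $A$, the rank-one structure $J = \mathbf{1}\mathbf{1}^T$ gives a very concrete handle: writing $A = Q \Lambda Q^T$ and $z = Q^T \mathbf{1}$, the eigenvalues $\mu$ of $A + tJ$ that are not eigenvalues of $A$ are the roots of the secular equation $1 + t\sum_i z_i^2/(\lambda_i - \mu) = 0$ (this follows from $\det(A + tJ - \mu I) = \det(A - \mu I)\,(1 + t\,\mathbf{1}^T(A-\mu I)^{-1}\mathbf{1})$). A numerical sketch verifying this (the matrix is an arbitrary example with distinct eigenvalues):

```python
import numpy as np

# Secular equation for a symmetric rank-one update A + t * J, J = ones ones^T:
# with A = Q diag(lam) Q^T and z = Q^T @ ones, the new eigenvalues mu
# (generically) satisfy  1 + t * sum_i z_i^2 / (lam_i - mu) = 0.
t = 1.0
A = np.diag([1.0, 2.0, 4.0])      # arbitrary example, distinct eigenvalues
ones = np.ones(3)
J = np.outer(ones, ones)

lam, Q = np.linalg.eigh(A)
z = Q.T @ ones

def secular(mu):
    return 1.0 + t * np.sum(z**2 / (lam - mu))

mus = np.linalg.eigvalsh(A + t * J)
print([abs(secular(mu)) for mu in mus])  # all residuals ≈ 0
```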
{ "source": [ "https://mathoverflow.net/questions/243836", "https://mathoverflow.net", "https://mathoverflow.net/users/94086/" ] }
244,184
I am looking for examples of algorithms for which the proof of correctness requires deep mathematics (far beyond what is covered in a normal computer science course). I hope this is not too broad.
Group Isomorphism of simple groups. There is a trivial polynomial time algorithm for testing whether two (finite) simple groups $G$ and $H$, specified by their multiplication tables, are isomorphic: guess at most two generators $g_1, g_2$ from $G$, then guess two elements of $H$ that they map to, and check whether the map extends to an isomorphism. To prove this algorithm is correct, you need to know that every finite simple group can be generated by at most two elements. This follows from the classification theorem, and, as far as I understand, there is little hope of a proof of this fact that does not involve (most of) the classification. Testing if a matrix is totally unimodular. A matrix $M \in \{-1, 0, 1\}^{m\times n}$ is totally unimodular (TUM) if the determinant of every one of its square submatrices is in the set $\{-1,0,1\}$. The definition gives an exponential number of conditions to verify, so the fact that there is a polynomial time algorithm to test whether a given $M$ is TUM is far from obvious. Such an algorithm follows from a deep theorem of Paul Seymour characterizing regular matroids (together with other non-trivial ingredients: the algorithm is given in Schrijver's linear programming book). Computing the volume of a convex body. There is a long line of work on algorithms that approximate the volume of a convex body $K$ specified by a membership oracle. The algorithms are based on sampling from $K$ and from modifications of $K$. The sampling itself is done by Markov chain algorithms, which are analyzed using isoperimetric properties of log-concave measures. For example, Kannan, Lovász and Simonovits (motivated by work on volume computation) proved a lower bound on the Cheeger constant of an isotropic log-concave measure, and conjectured a tighter lower bound.
This conjecture, the KLS conjecture, is now a notorious question in asymptotic convex geometry, and implies a number of other deep conjectures: the hyperplane conjecture, and the thin shell conjecture, which themselves have many implications.
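For intuition about the TUM definition (not as a practical algorithm — the exponential blow-up is exactly what Seymour's theorem circumvents), the condition can be brute-forced on tiny matrices. A sketch, with two small test cases chosen for illustration:

```python
import itertools
import numpy as np

def is_tum(M):
    """Brute-force total unimodularity check: every square submatrix
    must have determinant in {-1, 0, 1}.  Exponential time, so only
    usable on tiny examples."""
    M = np.array(M)
    m, n = M.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = round(np.linalg.det(M[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

interval = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]   # consecutive-ones matrix: TUM
triangle = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]   # incidence of an odd cycle: det = ±2
print(is_tum(interval), is_tum(triangle))  # True False
```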
{ "source": [ "https://mathoverflow.net/questions/244184", "https://mathoverflow.net", "https://mathoverflow.net/users/24478/" ] }
244,519
This question was asked on MathStackexchange here , but there was no answer, so I am asking it here. Let $$x_1 = 2, \quad x_{n + 1} = \frac{x_n(x_n + 1)}{2}.$$ What can we say about the behavior of $x_n \bmod 2$? Is there an exact formula for $x_n \bmod 2$?
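A hands-on way to probe the question is to compute the first parities with exact integers. This is only an experiment, not a structural answer, and since the terms roughly square at each step (doubly exponential growth), only modest $n$ is feasible:

```python
# First terms of x_{n+1} = x_n (x_n + 1) / 2 and their parities,
# using exact big integers (each term roughly squares the previous one).
x = 2
parities = []
for _ in range(15):
    parities.append(x % 2)
    x = x * (x + 1) // 2
print(parities)  # starts 0, 1, 0, 1, 1, 0, 0, 1, ...
```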
As remarked by Joe Silverman, a natural way to look at this question is by phrasing it in terms of the map $f(x):=\frac{1}{2}x(x+1)$ on the $2$-adic integers $\mathbb{Z}_2$. We are then asking about the behaviour of the orbit $f^n(2)$ with respect to the partition of $\mathbb{Z}_2$ into two clopen sets $U_1:=2\mathbb{Z}_2$ and $U_2:=1+2\mathbb{Z}_2$. Most questions of this form are very hard, for reasons which I will attempt to explain. Short answer: dynamically, there is no obvious reason why this should be easier than the Collatz problem or the normality of $\sqrt{2}$. Suppose that we are given a continuous transformation $f$ of a compact metric space $X$ and a partition of $X$ into finitely many sets $U_1,\ldots,U_N$ of reasonable regularity (e.g. such that the boundary of each $U_i$ has empty interior). We are given a special point $x_0$ and we want to know how often, and in what manner, the sequence $(f^n(x_0))_{n=1}^\infty$ visits each of the sets $U_i$. We might believe the following result to be useful: if $\mu$ is a Borel probability measure on $X$ such that $\mu(f^{-1}A)=\mu(A)$ for every Borel measurable set $A\subset X$, and if additionally every $f$-invariant measurable set $A\subseteq X$ satisfies $\mu(A)\in \{0,1\}$, then $$\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\phi(f^i(x)) =\int \phi\,d\mu$$ for $\mu$-almost-every initial point $x$ and every $\mu$-integrable function $\phi \colon X \to \mathbb{R}$. This is the Birkhoff ergodic theorem or pointwise ergodic theorem. In particular we could take $\phi$ to be the characteristic function of a partition element $U_i$, so the above measures the average time spent in $U_i$ by the sequence $f^n(x)$. Or we could take $\phi$ to be the characteristic function of a set such as $U_1\cap f^{-1}U_2 \cap f^{-2}U_1$, which would tell us how often the orbit of $x$ follows the path $U_1 \to U_2 \to U_1$, and so on. 
The Birkhoff theorem is great if we are content to study points $x$ which are generic with respect to a particular invariant measure. If we want to study a specific $x$ then we have no way to apply the theorem. If multiple different invariant measures $\mu$ exist then the integrals which they assign to the characteristic functions $\chi_{U_i}$ will in general be different and we will get different answers, and the Birkhoff averages along orbits will have fundamentally different behaviours depending on which measure the starting point is generic with respect to (if any). In a broad sense this is an aspect of the phenomenon called "chaos". Here is a nice example: let $X=\mathbb{R}/\mathbb{Z}$, $f(x):=2x$, let $U_1=[0,\frac{1}{2})+\mathbb{Z}$ and $U_2=X\setminus U_1$. What can we say about the manner in which the trajectory $f^n(\sqrt{2})$ visits $U_1$ and $U_2$? For example, does the trajectory spend an equal amount of time in both sets? Put another way, is $\sqrt{2}$ normal to base $2$? Dynamical methods can tell us that Lebesgue a.e. number is normal to base $2$, but they struggle badly for specific numbers. The difficulty of this problem is tied quite closely to the existence of many invariant measures for this map: Lebesgue measure is invariant, but so is the Dirac measure at $0$; there are many invariant measures supported on closed $f$-invariant proper subsets corresponding to restricted digit sets (e.g. numbers where "11" is not allowed in the binary expansion) and also many fully supported invariant measures which are singular with respect to Lebesgue measure (and indeed with respect to one another). Each invariant measure contributes a set of points with its own particular, distinct dynamical behaviour, and given an arbitrary point we have no obvious way of knowing which of these different dynamical behaviours prevails. There is only one situation in which this dynamical problem becomes easy: when a unique $f$-invariant Borel probability measure exists.
In this case the convergence in the Birkhoff theorem is uniform [*] over all $x$, and the problem of distinguishing which invariant measure (if any) characterises the behaviour of the trajectory $f^n(x)$ vanishes completely. An example would be the leading digit of $2^n$: take $X=\mathbb{R}/\mathbb{Z}$, $U_k=[\log_{10}k,\log_{10}(k+1))+\mathbb{Z}$ for $k=1,\ldots,9$, $f(x):=x+\log_{10}2$, then the leading digit of $2^n$ is $k$ if and only if $f^n(0) \in U_k$. The irrationality of $\log_{10} 2$ may be applied to show that Lebesgue measure is the only $f$-invariant Borel probability measure, and using the uniform ergodic theorem we can simply read off the average time spent in each $U_k$ as the Lebesgue measure of that interval. So how does this affect $f(x):=x(x+1)/2$ on $\mathbb{Z}_2$? Well, $x=1$ and $x=0$ are both fixed points and carry invariant Borel probability measures $\delta_1$ and $\delta_0$. So the dynamical system is not uniquely ergodic, and the problem of determining which of the (presumably many) invariant measures characterises the trajectory of the initial point $2$ has no obvious solution. [*] to obtain uniform convergence in this case $\phi$ must be a continuous function. By upper/lower approximation we can deduce pointwise convergence to $\int \phi\,d\mu$ for all $x\in X$ in the case where $\phi$ is upper or lower semi-continuous, for example where $\phi$ is the characteristic function of a closed or open set.
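The leading-digit example is easy to test empirically: unique ergodicity of the rotation by $\log_{10} 2$ predicts that digit $k$ leads $2^n$ with limiting frequency $\log_{10}\frac{k+1}{k}$ (Benford's law). A quick sketch, counting exactly with big integers (the cutoff $N$ is an arbitrary illustrative choice):

```python
import math
from collections import Counter

# Unique ergodicity of x -> x + log10(2) on R/Z predicts that the
# leading digit of 2^n is k with limiting frequency log10((k+1)/k).
N = 3000
counts = Counter(str(2 ** n)[0] for n in range(1, N + 1))
for k in range(1, 10):
    empirical = counts[str(k)] / N
    predicted = math.log10((k + 1) / k)
    print(k, round(empirical, 4), round(predicted, 4))
```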
{ "source": [ "https://mathoverflow.net/questions/244519", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
244,808
If $f\in\mathbf{C}[[q]]$ is non-constant, and algebraic over $\mathbf{C}[q]$ (in the sense that it is a root of a polynomial with coefficients in $\mathbf{C}[q]$), then can $f$ be the $q$-expansion of a modular form (for some congruence subgroup of $SL(2,\mathbf{Z})$)? I ask for the following reason. There are geometers in my department who occasionally come up with $q$-expansions (probably from counting things in geometry) and ask if these things are likely to be modular forms. Sometimes they are, sometimes they aren't, sometimes I don't know. But one that came up today I noticed was non-constant and algebraic over $\mathbf{C}[q]$, and so I instantly said that this should rule it out, and then I realised I could not immediately point to a proof of this. Katz proved many years ago that a non-constant polynomial in $q$ can't be the $q$-expansion of a modular form (by which I mean a form which has no poles, even at cusps), because if we have a non-constant polynomial modular form of some weight and level, we can consider all modular forms of that weight and level which are polynomials in $q$, and then it's not hard to check that this space is Hecke stable, but Hecke operators tend to increase the degree of a polynomial modular form if it has positive degree, and it's not hard to finish the job now. See Katz Antwerp III, p94 (p26 of the article). Now I've found the time to look at the article, I realise that probably modular forms algebraic over $\mathbf{C}[q]$ might also form a Hecke-stable subspace, although now one can't use the degree trick to finish. I was half-expecting the result to be false in characteristic $p$, but now I'm not so sure. I know that the $\Delta$ function is $\sum_{n\geq1,n\ \mathrm{odd}}q^{n^2}$ in characteristic 2, but presumably this isn't algebraic over $\mathbf{F}_2[q]$; I realise now that I have no good test for this in my head.
A non-constant modular form has a natural boundary on the real line. A power series that's algebraic in $q = e^{2\pi i \tau}$ can't.
{ "source": [ "https://mathoverflow.net/questions/244808", "https://mathoverflow.net", "https://mathoverflow.net/users/1384/" ] }
247,118
Is there a Hausdorff topological space $X$ such that for any continuous map $f: X\longrightarrow \mathbb{R}$ and any $x\in \mathbb{R}$, the set $f^{-1}(x)$ is either empty or infinite?
Yes, there is such a space. Let $X=2^{\omega_1}$ be the space of binary sequences of length $\omega_1$, in the order topology generated by the lexical order. So $X$ consists of the branches through the tree $2^{<\omega_1}$, with the left-to-right order on branches. This is an order topology of a linear order and hence Hausdorff. The key thing to notice is that every element $a\in X$ is the limit of an $\omega_1$-sequence. If $a_\alpha\to a$ for $\alpha<\omega_1$ and $f:X\to\mathbb{R}$ is continuous, it follows that $f(a_\alpha)\to f(a)$. Since every convergent $\omega_1$-sequence in the reals is eventually constant, it must be that $f(a_\alpha)=f(a)$ for all sufficiently large countable ordinals $\alpha$. So this space has your desired property. In fact, I claim that for every continuous function $f:X\to\mathbb{R}$ and every $a\in X$, the function is constant on an interval about $a$. To see this, we may find open intervals in $\mathbb{R}$ so that $\{f(a)\}=\bigcap_n I_n$, and then $f$ is constant on $\bigcap_n f^{-1}I_n$. Each such $f^{-1}I_n$ is an open set containing $a$, and since there are only countably many, we may find an interval containing $a$ inside all of them. Thus, this space has the property that every continuous function $f:X\to\mathbb{R}$ is locally constant, and every nonempty open set has size $2^{\omega_1}$.
{ "source": [ "https://mathoverflow.net/questions/247118", "https://mathoverflow.net", "https://mathoverflow.net/users/86088/" ] }
247,743
I have seen this enough times that I thought it was common practice, but I recently saw someone on a website about writing tips (can't find the link now) say that this was not good. I am hoping to get opinions from people who review or make publication decisions about whether this is viewed negatively. Examples: In [3], the authors prove that... Our theorem generalizes the results in [ABH], showing that... I kind of like it---it seems clear and succinct. On the other hand, I can see how someone could object to it. Hope this is on-topic for MO.
First, I do it all the time and don't really see the objections. A phrase like "In [S] it was shown..." is a good alternative to "Siegel showed, [S], that ...". Out of curiosity I did some cursory research and looked up the citation habits in Annals of Mathematics 1958. There one author (R.D. Anderson) does use "In [1] we considered..." and "an argument in [4]". On the other hand Dold sticks to phrases like "due to J.C. Moore [11]" or "defined by Eilenberg-MacLane [4]", as opposed to "defined by Eilenberg-MacLane in [4]". So the habit is quite old. On the other hand, the list of references at the end of the paper has not been around forever. Again, some research I did during idle moments in Oberwolfach showed that a shift took place in the 1940s-50s. Before that, references were handled either inline or via footnotes. Clearly these cannot be used in noun form. Instead, repeated references to the same paper were made using the rather clumsy phrase "loc. cit." or "the third paper by Siegel, cited above". Maybe the objections date from this time. Edit: After reflecting on this issue a bit more, I found that even though this style of citation is per se admissible, it does have a pitfall one should be aware of: it makes it all too easy to avoid naming an author explicitly. I consider it very bad practice if somebody's name appears only in the references.
{ "source": [ "https://mathoverflow.net/questions/247743", "https://mathoverflow.net", "https://mathoverflow.net/users/19088/" ] }
247,755
We say two topologies $\tau$ and $\rho$ on $X$ are similar if the set of continuous functions $f:(X,\tau) \rightarrow (X,\tau)$ is the same as the set of continuous functions $f:(X,\rho)\rightarrow (X,\rho)$. Does there exist a topology $\tau$ that is similar to the euclidean topology on $\mathbb R$? This was asked here but all we could prove is that $\tau$ must be a refinement of the euclidean topology. Regards.
The only topology similar to the Euclidean topology on $\mathbb{R}$ is the Euclidean topology. Suppose there is such a topology $\tau$. I'll use "open," "continuous," etc. to mean with respect to the Euclidean topology and "$\tau$-open" etc. for $\tau$. Since $\tau$ is a refinement of the Euclidean topology, there exists a point (without loss of generality $0$) and a $\tau$-open neighborhood $U$ which does not contain any open interval containing $0$. Then there is a sequence $x_i$ in the complement of $U$ converging to $0$. For convenience choose a monotonic subsequence $y_i$, without loss of generality positive, and suppose $y_1=1$. Now consider the function $\mathbb{R}\to\mathbb{R}$ which fixes the complement of $(0,1)$, takes $[\frac{1}{2n},\frac{1}{2n-1}]$ to $y_n$, and takes $[\frac{1}{2n+1},\frac{1}{2n}]$ to $[y_{n+1},y_n]$ linearly. This function is continuous, hence $\tau$-continuous. Then the preimage of $(-1,1)\cap U$ is a $\tau$-open subset $V$ of $(-1,0] \cup \bigcup_n (\frac{1}{2n+1},\frac{1}{2n})$ which contains $0$. There is a homeomorphism (hence a $\tau$-homeomorphism) $\phi$ of $\mathbb{R}$ which fixes the complement of $(0,2)$, takes $[1,2]$ to $[\frac{1}{2},2]$ linearly, and takes $[\frac{1}{k+1},\frac{1}{k}]$ to $[\frac{1}{k+2},\frac{1}{k+1}]$ linearly. Then $\phi(V)\cap V$ is a $\tau$-open subset of $(-1,0]$ containing $0$. But then $(-\infty,0]$ is $\tau$-open, so that $\mathbb{R}$ is not $\tau$-connected, a contradiction since then there is a $\tau$-continuous map sending $(-\infty,0]$ to one point and $(0,\infty)$ to another.
{ "source": [ "https://mathoverflow.net/questions/247755", "https://mathoverflow.net", "https://mathoverflow.net/users/24478/" ] }
247,823
This question was inspired by What advantage humans have over computers in mathematics? and the answer of Brendan McKay, part of which is quoted below: The day will come when not only will computers be doing good mathematics, but they will be doing mathematics beyond the ability of (non-enhanced) humans to understand. Denying it is understandable, but ultimately as short-sighted as it was to deny computers could ever win at Go. The question and the answer got 42 and 35 votes, respectively, so at least a part of the community acknowledges the importance of AI in the future of mathematics, or even the possibility that it will eventually replace mathematicians, as it would develop mathematics much faster than humans do. Recently, machine learning algorithms have been getting better at lower-level tasks such as computer vision, motion control and speech recognition, thanks to deep learning, reinforcement learning and the massive amount of data available online, which boosted the power of supervised learning. However, without more sophisticated unsupervised learning algorithms (such as adversarial networks, as discussed here), higher-level tasks such as reasoning are still difficult, and this is where research is most active right now. Most of mainstream ML doesn't require sophisticated mathematics, and traditionally such areas haven't attracted many mathematicians. However, since ML will eventually achieve human-level AI, which will develop every area of math like a far-reaching theory, ML should be of interest to mathematicians. (There is nothing that prevents mathematicians from working on a problem whose solution may require a non-purely-mathematical process. For example, the proof of the four color theorem used a computer, and experimental mathematics has been gaining popularity.) I'm not proposing that mathematicians construct a purely mathematical model of AI such as Universal AI.
Rather than such theoretical pursuits, or simulating the human brain, leading AI researchers argue that the future of AI lies along the line of current mainstream ML. If the mathematicians of this generation do not contribute to ML research, and if ML researchers do achieve human-level AI, that means the ML researchers of this generation will have contributed more to the development of math than we did. This would be a nightmare for us. My question is the following: Why is the math community not spending more manpower on developing ML, even though the sooner human-level AI is achieved, the faster the long-run development of math will be?
This is an interesting question but, in my opinion, it rests on several misguided or at least questionable conceptions: 1) The future is what matters. Scientists should concentrate now on what will have most impact in the very long run. Why is that? I certainly do not agree. 2) The future is clear. Even if you agree with predictions about the future (like Brendan McKay's prediction, which I tend to agree with), you should acknowledge that even a plausible prediction of this kind might be wrong, that there are various plausible competing developments, and that the time scale is also very unclear and of great importance. 3) Long term projects are most fruitful. Even if you do give high weight to long term projects and it is clear to you, say, that computers will have vast impact on mathematics, it is not at all clear that your own efforts in such a direction will be more fruitful than efforts in other directions. Long term risky projects are more likely to fail. 4) It is problematic if non-mathematicians bring major (or even the major) contributions to future mathematics. It is possible that current non-mathematicians will have major, even dominant, long term contributions to mathematics. What's wrong with that? 5) Mathematicians should mainly be concerned about ML's impact on mathematics. There are more important and more immediate applications of ML which are not to mathematics. These could be as tempting or even more tempting to mathematicians who want to get involved in ML. If you can see how your mathematical ideas could contribute through ML to autonomous driving, go for it! This can be more important than automatic theorem proving. 6) The "math community" allocates its manpower. 7) The impact of mathematics on a new area is measured by current mathematicians working on it. It is probably the case that mathematics and the work of mathematicians in the past (along with statisticians, computer scientists, physicists, ...) had much influence on ML's current development.
A related thing is: 8) The impact of mathematics on a new area is measured by direct attempts of mathematicians aimed at this area. And finally: 9) Mathematicians do not contribute to current ML. I am not sure this is correct. As a matter of fact, I recommend the book Understanding Machine Learning: From Theory to Algorithms by Shai Shalev-Shwartz and Shai Ben-David to learn more.
{ "source": [ "https://mathoverflow.net/questions/247823", "https://mathoverflow.net", "https://mathoverflow.net/users/57191/" ] }
249,164
Mathworld's discussion of the Gamma function has the pleasant formula: $$ \frac{\Gamma(\frac{1}{24})\Gamma(\frac{11}{24})}{\Gamma(\frac{5}{24})\Gamma(\frac{7}{24})} = \sqrt{3}\cdot \sqrt{2 + \sqrt{3}} $$ This may have been computed algorithmically, according to the page. So I ask: how might one derive this? My immediate thought was to look at $(\mathbb{Z}/24\mathbb{Z})^\times = \big( \{ 1,5,7,11 \big| 13, 17, 19 , 23 \}, \times \big)$ where $1,5,7,11$ are relatively prime to 24. And the other half? We could try to use the mirror formula $$ \Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin(\pi z)} $$ or the Euler beta integral, but nothing has come up yet: $$ \int_0^1 x^{a-1} (1-x)^{b-1} \, dx = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)} $$ With luck, the period integral of some Riemann surface will be a ratio of Gamma functions: $$ \int_0^1 (x - a)^{1/12} (x - 0)^{11/12} (x - 1)^{-5/12} (x - d)^{-7/12} \, dx $$ Such integrals appear in the theory of hypergeometric functions. In light of comments, I found a paper of Benedict Gross and the paper of Selberg and Chowla: $$ F( \tfrac{1}{4},\tfrac{1}{4};1;\tfrac{1}{64}) = \sqrt{\frac{2}{7\pi}} \times \left[\frac{ \Gamma(\frac{1}{7})\Gamma(\frac{2}{7})\Gamma(\frac{4}{7}) }{ \Gamma(\frac{3}{7})\Gamma(\frac{5}{7})\Gamma(\frac{6}{7}) }\right]^{1/2} $$ so in our case we are looking at quadratic residues mod 12. However, this does not tell us that the LHS evaluates to the RHS.
This formula can actually be proved using only properties of the Gamma function already known to Gauss, with no need to invoke special values of Dirichlet series. The relevant identities are $$ \Gamma(z) \, \Gamma(1-z) = \frac\pi{\sin(\pi z)}, $$ already cited by john mangual as the "mirror formula", and the triplication formula for the Gamma function, i.e. the case $k=3$ of Gauss's multiplication formula : $$ \Gamma(z) \, \Gamma\bigl(z+\frac13\bigr) \, \Gamma\bigl(z+\frac23\bigr) = 2\pi \cdot 3^{\frac12-3z} \Gamma(3z) $$ [the $k=2$ case is the more familiar duplication formula $\Gamma(z) \, \Gamma(z+\frac12) = 2^{1-2z} \sqrt{\pi}\, \Gamma(2z)$]. Take $z=1/24$ and $z=1/8$ in the triplication formula, multiply, and remove the common factors $\Gamma(1/8) \, \Gamma(3/8)$ to deduce $$ \Gamma\bigl(\frac{1}{24}\bigr) \Gamma\bigl(\frac{11}{24}\bigr) \Gamma\bigl(\frac{17}{24}\bigr) \Gamma\bigl(\frac{19}{24}\bigr) = 4 \pi^2 \sqrt{3}. $$ Take $z=5/24$ and $z=7/24$ in the mirror formula and multiply to deduce $$ \Gamma\bigl(\frac{5}{24}\bigr) \Gamma\bigl(\frac{7}{24}\bigr) \Gamma\bigl(\frac{17}{24}\bigr) \Gamma\bigl(\frac{19}{24}\bigr) = \frac{\pi^2}{ \sin (5\pi/24) \sin (7\pi/24) }. $$ Hence $$ \frac{\Gamma(1/24) \, \Gamma(11/24)} {\Gamma(5/24) \, \Gamma(7/24)} = 4 \sqrt{3} \sin (5\pi/24) \sin (7\pi/24), $$ which is soon reduced to the radical form $\sqrt3 \cdot \sqrt{2+\sqrt3}$.
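The chain of identities is easy to confirm numerically; here is a quick Python check using only the standard library (the tolerances are loose double-precision bounds):

```python
from math import gamma, sqrt, sin, pi

lhs = gamma(1 / 24) * gamma(11 / 24) / (gamma(5 / 24) * gamma(7 / 24))

# intermediate form obtained from the triplication + mirror steps
mid = 4 * sqrt(3) * sin(5 * pi / 24) * sin(7 * pi / 24)

# closed radical form
rhs = sqrt(3) * sqrt(2 + sqrt(3))

assert abs(lhs - mid) < 1e-10
assert abs(mid - rhs) < 1e-10
```

All three expressions agree to machine precision, matching the derivation step by step.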
{ "source": [ "https://mathoverflow.net/questions/249164", "https://mathoverflow.net", "https://mathoverflow.net/users/1358/" ] }
249,383
Given a proof for a result, one could denote the proof as a node on a graph, and then draw arrows to the node from axioms and previous results that the proof uses, and then draw arrows from the node to results that the result is used to prove. The sources of such a graph would be mathematical axioms and the sinks would be results which do not yet have corollaries. This is related to the facts that finite categories can be drawn as directed graphs and that proofs form a category . Question: Is there an existing ontological/relational database somewhere that attempts to make use of such an idea to formally encode some of present-day mathematics? It can often be difficult to fully trace all of the logical dependencies of a given result. For example, how can one show whether existing proofs of the Implicit Function Theorem use the Law of the Excluded Middle or not? Rather than spend a couple of days going through tens of textbooks and trying to parse their proofs, not for their content but just to see what assumptions they are using, wouldn't it be easier to ask a computer to do that task? Such a system for tracking the logical dependencies of known proofs for existing theorems could be useful not only for beginners ("What important theorems use the completeness property of the real numbers? I'll check the database."), but it could also be useful for constructivists or intuitionists who want to keep track of which existing results make use of the Axiom of Choice or the Law of the Excluded Middle. And maybe it would be possible to use a sufficiently complete database to run tests to see whether the foundations of mathematics are consistent? I don't actually know, but it sounds on a naive level something that would be useful, even if it would take a long time to build. Note: I am not sure what to tag this question or if it is on-topic.
The reverse mathematics zoo, founded by Damir Dzhafarov and with recent development by Eric Astor, aims to be a database showing the relations and dependencies of mathematical theorems as described in the theory of reverse mathematics. An enormous number of the theorems of classical mathematics are now placed in the reverse-mathematical hierarchy, and appear in this database. The system accepts new contributions, and is capable of generating diagrams showing the relations. For example, the current results.txt file and the database are available in the rmzoo.zip file. Various diagrams are available at http://rmzoo.math.uconn.edu/diagrams/ , generated automatically from the database. Pictured here is the Implications and Strict Implications diagram.
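The graph structure described in the question is just a DAG with results as nodes and edges pointing to what each proof directly uses; a minimal Python sketch (the theorem names and dependencies below are invented placeholders, not data from the Zoo) shows how "all logical dependencies of a result" becomes a reachability query:

```python
# Minimal sketch of a proof-dependency DAG.  Edges point from a result
# to the statements its proof directly uses; the names are hypothetical.
deps = {
    "implicit_function_thm": ["inverse_function_thm"],
    "inverse_function_thm": ["banach_fixed_point", "completeness_of_R"],
    "banach_fixed_point": ["completeness_of_R"],
    "completeness_of_R": [],  # taken as an axiom/source here
}

def all_dependencies(result, deps):
    """Every statement reachable from `result`, i.e. its full logical closure."""
    seen = set()
    stack = [result]
    while stack:
        node = stack.pop()
        for d in deps.get(node, []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# "Does this proof of the IFT rely (transitively) on completeness?"
assert "completeness_of_R" in all_dependencies("implicit_function_thm", deps)
```

A real database of this shape would let one answer questions like "which theorems use the Axiom of Choice?" by a reverse-reachability query over the same structure.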
{ "source": [ "https://mathoverflow.net/questions/249383", "https://mathoverflow.net", "https://mathoverflow.net/users/93694/" ] }
249,514
It is well-known that the set of nonnegative integers $\mathbb{N}$ is definable in the ring of integers $\mathbb{Z}$. Indeed, by Lagrange's four squares theorem we have $\mathbb{N} = \{n \in \mathbb{Z} : \varphi(n)\}$, where $\varphi$ is the formula $$\varphi(x) := \exists a\, \exists b\, \exists c\, \exists d \; x = a^2 + b^2 + c^2 + d^2$$ However, Lagrange's theorem is not so trivial, so I wonder: Is there a more elementary and self-contained proof of the definability of $\mathbb{N}$ in the ring of integers? Thank you.
Here is an outline of a possible approach. We will show in a simple way that every natural number is a ratio of two sums of four squares, so that formula $\exists a_1,\dots,a_8:(a_1^2+a_2^2+a_3^2+a_4^2)n=a_5^2+a_6^2+a_7^2+a_8^2$ (edit: and not all $a_1,\dots,a_4$ are zero) describes $\mathbb N$. Let's call rational numbers $n$ satisfying this property good . Lemma 1: A product of two good numbers is good. A reciprocal of a nonzero good number is good. Proof: First part follows from the four-square identity , second is immediate. Lemma 2: For any prime $p$ there is an integer $0<k<p$ such that $kp$ is good. Proof: It's true for $2$, so suppose $p$ is odd. Then it's straightforward to verify there are precisely $\frac{p+1}{2}$ squares modulo $p$. Thus the sets $A=\{x^2\},B=\{-1-y^2\}$ for $x,y$ in $\mathbb Z/p\mathbb Z$ both have size $\frac{p+1}{2}$. So $|A|+|B|>p=|\mathbb Z/p\mathbb Z|$, so they have nontrivial intersection, say $x^2\equiv -1-y^2\pmod p$. Now replace $x,y$ with integers congruent to them such that $|x|,|y|<\frac{p}{2}$. Then $p\mid x^2+y^2+1<\frac{p^2}{4}+\frac{p^2}{4}+1<p^2$, so $x^2+y^2+1=kp$ for $0<k<p$ and we are clearly done. Now we can easily finish: we proceed by induction. $0$ and $1$ are good. Consider $n>1$. If $n$ is composite, then $n=ab$ with $a,b<n$, so $a,b$ are good hence so is $n$ by lemma 1. If $n$ is prime, then by lemma 2 $kn$ is good for $0<k<n$. But then $k$ is good, hence so are $\frac{1}{k}$ and $n=\frac{1}{k}\cdot kn$ by lemma 1. Of course for a large part this proof is the same as the proof of Lagrange's theorem, but at the very least we are avoiding a (crucial in the mentioned theorem) step of showing a prime is a sum of four squares.
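Lemma 2's pigeonhole argument can be confirmed computationally; the following Python sketch mirrors the sets $A$ and $B$ from the proof and, for each odd prime $p<200$, finds a witness $x^2+y^2+1=kp$ with $0<k<p$:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

witnesses = {}  # p -> (x, y, k) with x^2 + y^2 + 1 = k*p and 0 < k < p
for p in range(3, 200):
    if not is_prime(p):
        continue
    for x in range(p // 2 + 1):
        for y in range(p // 2 + 1):
            if (x * x + y * y + 1) % p == 0:
                k = (x * x + y * y + 1) // p
                assert 0 < k < p          # the bound from the proof
                witnesses[p] = (x, y, k)
                break
        if p in witnesses:
            break
    assert p in witnesses                 # Lemma 2: a witness always exists
```

The bound $0<k<p$ holds automatically because $x^2+y^2+1 < p^2/2 + 1 < p^2$, exactly as in the proof.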
{ "source": [ "https://mathoverflow.net/questions/249514", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
249,523
As the question title suggests, what is the crux of Dwork's proof of the rationality of the zeta function? What is the intuition behind the proof, what are the key steps that the proof boils down to?
There is an excellent book by Neal Koblitz, "p-adic Numbers, p-adic Analysis and Zeta-Functions", where Dwork's proof is presented in great detail, including all the preliminaries from p-adic analysis. Let me sketch this proof in comparison with Weil's program for proving his conjectures. First, any variety $X$ can be covered by affine charts $U_i$ with all intersections $U_{i_1}\cap \dots\cap U_{i_k}$ being affine. The zeta-function of $X$ can be expressed in terms of the zeta-functions of these affine varieties using the inclusion-exclusion formula (I omit the base field and the formal variable to shorten notation): $$Z_X=\prod\limits_{i_1<\dots<i_k}Z_{U_{i_1}\cap\dots \cap U_{i_k}}^{(-1)^{k+1}}$$ Thus, it is enough to prove rationality for affine varieties. Next, any affine variety is an intersection of hypersurfaces, and a union of hypersurfaces is a hypersurface, so again the inclusion-exclusion formula (with $\cap$ and $\cup$ swapped) reduces the problem to hypersurfaces. Now, the idea of Dwork is to prove first that $Z_X$ is $p$-adically meromorphic on $\mathbb{C}_p$ for a hypersurface $X\subset \mathbb{A}^n$. He proceeds by induction on dimension. We cut the hypersurface $f(X)=0$ into the lower-dimensional hypersurfaces $f(X)=0, x_i=0$ (to be precise, such a variety may happen to be the whole $\mathbb{A}^{n-1}$, but this is even better since rationality for affine space is obvious) and the open subvariety $f(X)=0$, $x_i\neq 0$ for all $i$. By the same inclusion-exclusion argument and induction, it is enough to prove rationality for this open subvariety. As far as I understand, the following computation is the main insight of Dwork, and it resembles Weil's idea: he expresses the number of points of this variety over $\mathbb{F}_{q^k}$ in terms of the trace of $\Psi^k$, where $\Psi$ is a certain linear operator. That gives an expression for the zeta-function in terms of the characteristic "polynomial" of $\Psi$.
In contrast to Frobenius acting on a Weil cohomology, $\Psi$ acts on an infinite-dimensional space, so its characteristic polynomial is not a polynomial but rather a meromorphic series (one should do some work to make sense of the determinant and trace of an infinite-dimensional operator; this is perfectly carried out in Koblitz's book). This proves that $Z_X$ is $p$-adically meromorphic. Finally, any meromorphic series with integral and properly bounded coefficients (for $Z_X$ the bound comes from $\# X(\mathbb{F}_{q^k})\leq q^{nk}$) is a rational function (this follows from the p-adic Weierstrass preparation theorem and the characterization of rational power series as those whose coefficients satisfy a linear recurrence).
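Rationality has concrete numerical content: once $N_1=\#X(\mathbb{F}_p)$ is known, the whole sequence $N_k$ is forced by the fixed degrees of the numerator and denominator. A small Python sketch (the curve $y^2=x^3+x+1$ over $\mathbb{F}_5$ is an arbitrary illustration, counted by brute force) checks the prediction $N_2 = p^2+1-(a^2-2p)$ with $a=p+1-N_1$:

```python
p = 5

def f(x):                 # right-hand side x^3 + x + 1 over F_p
    return (x * x * x + x + 1) % p

# N1: affine points of y^2 = x^3 + x + 1 over F_5, plus the point at infinity.
N1 = sum(1 for x in range(p) for y in range(p) if (y * y) % p == f(x)) + 1

# F_25 = F_5[t]/(t^2 - 2); 2 is a non-square mod 5.  Elements: pairs (a, b) = a + b*t.
def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c + 2 * b * d) % p, (a * d + b * c) % p)

def add2(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

F25 = [(a, b) for a in range(p) for b in range(p)]

def rhs(x):               # x^3 + x + 1 in F_25
    return add2(add2(mul(mul(x, x), x), x), (1, 0))

N2 = sum(1 for x in F25 for y in F25 if mul(y, y) == rhs(x)) + 1

a = p + 1 - N1
assert N2 == p * p + 1 - (a * a - 2 * p)   # N2 is forced by rationality
```

Here $N_1=9$, so $a=-3$ and rationality predicts $N_2=27$, which the brute-force count over $\mathbb{F}_{25}$ confirms.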
{ "source": [ "https://mathoverflow.net/questions/249523", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
249,725
So I've been working with moduli stacks in algebraic geometry for a while now, with no formal training in the technicalities of the theory of algebraic stacks (ie, I've read a few articles and I learn what I need, without having spent much time doing exercises or working through examples/counterexamples). One of the difficulties I've experienced is that while discussing morphisms of algebraic stacks, some authors only require certain morphisms to be representable by algebraic spaces , while others require them to be representable (by schemes). Of course I've always found the latter to be more comprehensible (since it saved me from having to learn too much about alg. spaces on the way to working with alg. stacks). On the other hand, all the really comprehensive references on the subject usually fall into the former category. For example, the stacks project's definition of an algebraic stack only requires the diagonal to be representable by alg spaces, while Gomez's article https://arxiv.org/pdf/math/9911199v1.pdf requires that it be representable (by schemes). So far, whenever I read "representable by alg spaces", I sort of "pretend" that it says "representable by schemes" and proceed with that assumption. My questions are: What is the difference between requiring the diagonal of an algebraic stack to be representable by spaces vs representable by schemes? When are the two notions equivalent? What pitfalls are there to avoid? In general, what are some good illustrative examples of morphisms of alg stacks which are only representable by spaces, but not by schemes?
To answer question 2, the best example I know is $\mathscr{M}_1$, the stack of (proper smooth geom. connected) curves of genus 1. Indeed, Raynaud has constructed an elliptic curve $E\to S$ over a scheme $S$ and an $E$-torsor $X\to S$ which is (an algebraic space but) not a scheme. This implies two things. First, in order to define $\mathscr{M}_1$ we are forced to take "curve" to mean "algebraic space in curves": if we insist that curves must be schemes, the resulting $\mathscr{M}_1$ will not be a stack for the flat topology, because the above $X$ is locally a (projective) scheme for the flat (even étale) topology on $S$. Concerning the diagonal, put $I:={\underline{\mathrm {Isom}}}_{\mathscr{M}_1}(E,X)$. There is a monomorphism $X\to I$ (in fact, a closed immersion) identifying $X$ with the subsheaf of $E$-torsor isomorphisms. So, $I$ is not a scheme because $X$ isn't. Viewed as a morphism $S\to \mathscr{M}_1$, $I$ is the pullback of the diagonal under the morphism $S\to \mathscr{M}_1\times\mathscr{M}_1$ given by $(E,X)$. Hence the diagonal is not representable in the scheme sense.
{ "source": [ "https://mathoverflow.net/questions/249725", "https://mathoverflow.net", "https://mathoverflow.net/users/88840/" ] }
249,859
The following is not quite a research level question, but I still find this site appropriate for asking it. I hope I get it right here. I am preparing a talk for a general public and I want to discuss some hyperbolic geometry. I wish I had a good illustration device. I imagine a dynamical version of one of Escher's tessellations of the Poincare model (e.g the one in How might M.C. Escher have designed his patterns? ) which changes isometrically when I slide the computer mouse. Question: Could you please make any recommendation regarding any device of a nature similar to the one I describe above? In fact, I will be happy to have whatever interactive model of whatever geometry, not necessarily hyperbolic. Subquestion: if you're kind enough to make a recommendation, could you also advise regarding copyright issues (if applicable)? Sidequestion: Any other recommendation regarding presentation of geometry will be appreciated. Please note that my concern is more about the quality of the presentation than the actual mathematical content... UPDATE: Thank you! I am thrilled to get in less than 24 hours so many excellent answers and comments. Fortunately, Arnaud Chéritat provided EXACTLY what I asked for, and I happily accept his answer. Indeed, I am going to use his tool for my presentation. However, there are other excellent tools here which could be useful elsewhere. It seems to me a good idea to keep collecting those, and MO is an excellent platform for that. I suppose one should post one of these big-list questions which has a broader scope than this one (but not too broad), something like "Visualizing tools for lectures on geometry". I am not sure how this is done, so you can pick up the glove!
By chance I wrote, not long ago, the following applet (HTML5+JS+WebGL) that works at least on Firefox and Chrome. https://www.math.univ-toulouse.fr/~cheritat/AppletsDivers/Escher/ This work is CC-BY-SA, including the code, but NOT the image by Escher, for which I have not asked permission: you can probably use it a few times in conferences (fair use) but not in a permanent publication (and I will eventually have to remove this image or to create a variant of my own, like Valdimir Bulatov).
{ "source": [ "https://mathoverflow.net/questions/249859", "https://mathoverflow.net", "https://mathoverflow.net/users/89334/" ] }
250,041
Let $f(a,b,c,d)=\sqrt{a^2+b^2}+\sqrt{c^2+d^2}-\sqrt{(a+c)^2+(b+d)^2}$ (it is the defect in the triangle inequality). Then, we discovered by heuristic arguments, and then verified by computer, that $$\sum f(a,b,c,d)^n = 2-\pi/2$$ where the sum runs over all $a,b,c,d\in\mathbb Z$ such that $a\geq 1$, $b,c\geq 0$, $ad-bc=1$, and $n=2$. It seems that when $n=1$, we obtain $2$ on the right-hand side. We have failed to guess the result for $n>2$. So the question is: can you prove the result for $n=2$? (We can, but in a rather unnatural way. We will write this up later; for now we want to tease the community, as probably somebody can find a beautiful proof.) Added: we have two answers for the above question, so what remains is: Question: Can you guess the result for $n>2$? I tried http://mrob.com/pub/ries/ but nothing interesting was revealed. PS. In case it can help someone, below are these sums of powers ($n=1,2,3,4,5$), calculated by computer: $1.9955289122768913 = 2$ $0.4292036731309361 = 2 - \pi/2$ $0.21349025954227965 = $ ? $0.11983665032283052 = $ ? $0.06933955916793563 = $ ? Added: this and something more can be found in https://arxiv.org/abs/1701.07584 and https://arxiv.org/abs/1711.02089
Let me write down here a proof that $\sum f(a,b,c,d)=2$; maybe someone will see how it can be generalized to the second moment (I do not). We denote the vectors $x=(a,b)$ and $y=(c,d)$ and write $g(x,y)=f(a,b,c,d)$. Denote $S=\sum g(x,y)$; I omit a routine proof that $S$ is finite. We may fix $x$ and sum over $y$. For given $x$ the possible $y$ have the form $y_0(x)+kx$ for $k=0,1,2,\dots$. The sum over all these $y$ equals $$ \lim_N \sum_{k=0}^{N-1} \|x\|+\|y_0(x)+kx\|-\|y_0(x)+(k+1)x\|=\\ =\lim_N N\|x\|+\|y_0(x)\|-\|y_0(x)+Nx\|=\|y_0(x)\|-pr_x (y_0(x)), $$ where $pr_x$ denotes the projection onto $x$. So, $S=\sum_x \|y_0(x)\|-pr_x (y_0(x)).$ Analogously we may fix $y$ and sum over $x$, getting the similar formula $S=\sum_y \|x_0(y)\|-pr_y (x_0(y))$. In the first formula we take the summand corresponding to $x=(1,0)$ and in the second the summand corresponding to $y=(0,1)$. Those summands are equal to 1. The other summands correspond to $x$, resp. $y$, with strictly positive coordinates. Any such vector, which I denote by $z$, is the sum of $x_0(z)$ and $y_0(z)$, thus when we sum the two expressions for $S$ we get $$ 2S=1+1+\sum_z \|x_0(z)\|+\|y_0(z)\|-\|z\|=2+S, $$ and $S=2$.
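The numerical values in the question are easy to reproduce: since $d=(1+bc)/a$ is determined by $(a,b,c)$, truncating at $a,b,c\le N$ gives a brute-force approximation of the sums. A minimal Python sketch (the cutoff $N=80$ and the tolerances are ad hoc choices):

```python
from math import hypot, pi

def partial_sum(n, N=80):
    """Truncated sum of f(a,b,c,d)^n over a >= 1, b,c >= 0, ad - bc = 1."""
    total = 0.0
    for a in range(1, N + 1):
        for b in range(N + 1):
            for c in range(N + 1):
                num = 1 + b * c
                if num % a == 0:
                    d = num // a          # d is forced by ad - bc = 1
                    f = hypot(a, b) + hypot(c, d) - hypot(a + c, b + d)
                    total += f ** n
    return total

S1 = partial_sum(1)
S2 = partial_sum(2)
assert 1.9 < S1 < 2.0                     # converges to 2 from below
assert abs(S2 - (2 - pi / 2)) < 2e-3      # proved exact value: 2 - pi/2
```

All terms are positive, so the partial sums increase toward the limits; the $n=2$ sum converges much faster than the $n=1$ sum, consistent with the tail estimates implicit in the telescoping argument above.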
{ "source": [ "https://mathoverflow.net/questions/250041", "https://mathoverflow.net", "https://mathoverflow.net/users/4298/" ] }
250,068
My question is: Is it possible to write any scheme as a (1-categorical) colimit of a diagram of affines? If no, what are some examples? I ask this question because I have read that one can write any derived scheme as a colimit (in the $\infty$-categorical sense) over a diagram of affine derived schemes. So I am curious what is the analogous statement is in the classical setting.
Yes, this is just a basic fact in category theory, if interpreted correctly. For $C$ any category and $F$ any presheaf on $C$, $F$ is the colimit in presheaves of the diagram $C/F \to C \stackrel{y}{\hookrightarrow} Psh(C),$ which sends a morphism $f:y(C) \to F$ to $y(C),$ where $y$ is the Yoneda embedding. This follows immediately from the Yoneda lemma. Now, if $F$ is a sheaf for some Grothendieck topology, then since the sheafification functor $a$ is a left adjoint, it preserves all colimits, so we also have that $F$ is the colimit of the diagram $C/F \to C \stackrel{y}{\hookrightarrow} Sh(C).$ Note that this diagram consists entirely of representables (provided the Grothendieck topology is subcanonical, i.e. each representable is a sheaf). Now, take $C$ to be the category of affine schemes, and let the Grothendieck topology be the Zariski topology. The functor of points of any scheme, in particular, is a sheaf. The result now follows.
{ "source": [ "https://mathoverflow.net/questions/250068", "https://mathoverflow.net", "https://mathoverflow.net/users/38075/" ] }
250,080
For the sake of simplicity, let's take the path graph on 3 vertices, $P_3$, and say we want to get from one of its end points to the other. So we have the vertices $v_1, v_2, v_3$ and two edges. The distance between $v_1$ and $v_3$ is 2, therefore we have two steps to take in order to get from $v_1$ to $v_3$. The number of ways we can walk in those two steps is 2; call this number $\mathcal N$. The two scenarios are as follows. $v_1 \rightarrow v_2 \rightarrow v_1$ $v_1 \rightarrow v_2 \rightarrow v_3$ Hopefully, you've got the idea. But to make sure, I'm going to calculate $\mathcal N$ for $P_5$. As you can see, I've got 4 steps to take. $v_1 \rightarrow v_2 \rightarrow v_1 \rightarrow v_2 \rightarrow v_1$ $v_1 \rightarrow v_2 \rightarrow v_1 \rightarrow v_2 \rightarrow v_3$ $v_1 \rightarrow v_2 \rightarrow v_3 \rightarrow v_2 \rightarrow v_3$ $v_1 \rightarrow v_2 \rightarrow v_3 \rightarrow v_2 \rightarrow v_1$ $v_1 \rightarrow v_2 \rightarrow v_3 \rightarrow v_4 \rightarrow v_3$ $v_1 \rightarrow v_2 \rightarrow v_3 \rightarrow v_4 \rightarrow v_5$ So here, $\mathcal N = 6$. Here is my question, which I raised myself and for which I found a solution of my own that seems correct from professors' reviews: What is $\mathcal N$ for any $P_n$? This question has an answer when you take the end vertices as the starting and destination ones. But once you have this formula, you can easily find a way to calculate $\mathcal N$ when you choose arbitrary starting and destination vertices other than $v_1$ and $v_n$. And here is my problem: I cannot find a reference to this kind of question in order to further my studies and do more research. Any books or articles related to such studies will be much appreciated.
{ "source": [ "https://mathoverflow.net/questions/250080", "https://mathoverflow.net", "https://mathoverflow.net/users/98576/" ] }
250,312
How does one solve a Diophantine equation like $$3^n-1=2x^2\,?$$ One can easily see that $n$ and $x$ have the same parity, and going further, if $$n\equiv0\pmod3\quad \text{then }x \equiv0\pmod{13},$$ but I don't know what to do next.
This problem happens to have appeared at the Polish Mathematical Olympiad camp in 2015. Here is the official solution of the problem (I use $m$ in place of $x$ because this is how the problem was stated there, and write solutions as pairs $(n,m)$): Suppose first $n$ is even, say $n=2k$. Then the equation is equivalent to $(3^k+1)(3^k-1)=2m^2$. Clearly $\gcd(3^k+1,3^k-1)=2$, so one of these numbers must be a perfect square (and the other one twice a perfect square). $3^k-1\equiv 2\pmod 3$, so it can't be a square, hence $3^k+1=t^2,3^k=(t-1)(t+1)$. Therefore, $t-1,t+1$ are powers of $3$. But they can't both be divisible by $3$, so $t-1=1,t=2$. This leads to the solution $(n,m)=(2,2)$. Now suppose $n$ is odd, say $n=2k+1$. Letting $t=3^k$ we get $3t^2-2m^2=1$. Setting $t=2v+u,m=3v+u$ (check $u,v$ are integers) we get the Pell's equation $u^2-6v^2=1$. Standard theory gives us that all its solutions are generated as follows: $$(u_0,v_0)=(5,2),(u_{i+1},v_{i+1})=(5u_i+12v_i,2u_i+5v_i).$$ From this we can get a recurrence for $t=t_i=2v_i+u_i$: $$t_{-1}=1,t_0=9,t_{i+2}=10t_{i+1}-t_i$$ Looking modulo $27$ and $17$, we find $$t_i\equiv 0\pmod{27}\Leftrightarrow i\equiv 3\pmod 9\Leftrightarrow t_i\equiv 0\pmod{17}$$ It follows that the only powers of $3$ in the sequence $t_i$ are $1,9$, which correspond to the solutions $(n,m)=(1,1),(5,11)$. So all the solutions are $(1,1),(2,2),(5,11)$. Edit: This is a solution for positive integers. If we allow nonpositive integers, we also get $(0,0)$ and $(1,-1),(2,-2),(5,-11)$, but rather clearly no more.
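The three solutions $(n,m)=(1,1),(2,2),(5,11)$ are easy to double-check numerically; this brute-force sketch (the exponent bound 200 is an arbitrary choice of mine) finds no others in its range, consistent with the proof above.

```python
from math import isqrt

# Search for all (n, m) with 1 <= n <= 200 and m >= 1 solving 3^n - 1 = 2 m^2.
solutions = []
for n in range(1, 201):
    rhs = (3 ** n - 1) // 2   # 3^n - 1 is always even, so this is exact
    m = isqrt(rhs)
    if m * m == rhs:
        solutions.append((n, m))

print(solutions)  # [(1, 1), (2, 2), (5, 11)]
```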
{ "source": [ "https://mathoverflow.net/questions/250312", "https://mathoverflow.net", "https://mathoverflow.net/users/97687/" ] }
250,679
Let $q = e^{2\pi i\,z}$. I. 24th power The Ramanujan tau function $\tau(n)$ is given by the expansion of the Dedekind eta function $\eta(z)$'s $\text{24th}$ power. Then $$\begin{aligned}\eta(z)^{24} &= \sum_{n=1}^\infty\tau(n)q^n\\&=q - 24q^2 + 252q^3 - 1472q^4 + 4830q^5 - 6048q^6 - 16744q^7 + \dots\end{aligned}$$ Ramanujan observed that $$\tau(n)\equiv\sigma_{11}(n)\ \bmod\ 691\tag1$$ II. 12th power Define the rho function $\rho(n)$ by $$\begin{aligned}\eta(2z)^{12} &= \sum_{n=1}^\infty\rho(n)q^n\\&=q - 12q^3 + 54q^5 - 88q^7 -99q^9 +540q^{11} - 418q^{13} -648q^{15} + \dots\end{aligned}$$ Note the odd powers. Is it true that $$\rho(n)\equiv\sigma_{5}(n)\ \bmod\ 2^8\tag2$$ analogous to $(1)$? P.S. It's true for the first 10000 coefficients in OEIS A000735 .
Yes, this is true, as a consequence of an identity in a space of modular forms of weight $6$. The form $\eta(2z)^{12}$ is in this space; and $\sigma_5(n)$ for $n$ odd are the coefficients of the weight-$6$ form $$ \frac1{1008} \Bigl(E_6(z+\frac12) - E_6(z)\Bigr) = q + 244 q^3 + 3126 q^5 + 16808 q^7 + 59293 q^9 + \cdots. $$ The difference is $$ 256(q^3 + 12 q^5 + 66 q^7 + 232 q^9 + 627 q^{11} + 1452 q^{13} + \cdots); $$ after a bit of experimentation we recognize this as $256q^3$ times the $12$-th power of $1+q^2+q^6+q^{12}+q^{20}+\cdots$, which is to say $256$ times the $12$-th power of the weight-$1/2$ modular form $\sum_{k=0}^\infty q^{(2k+1)^2/4}$. Such a formula, once surmised, can be proved by comparing initial segments of the $q$-expansions; I did this to $O(q^{61})$, which is way more than enough. The congruence mod $256$ follows because the $12$-th power of $1+q^2+q^6+q^{12}+q^{20}+\cdots$ clearly has integral coefficients. Added later : This identity (and thus the congruence that Tito Piezas III asked for) gives a formula $(\sigma_5(n) - \rho(n)) / 256$ for the number of representations of $4n$ as the sum of $12$ odd squares, or equivalently of $(n-3)/2$ as the sum of $12$ triangular numbers. Following this lead, I soon found both the formula and the congruence in the paper Ken Ono, Sinai Robins, and Patrick T. Wahl: On the representation of integers as sums of triangular numbers, Aequationes Math. 50 (1995) #1, 73-94 available on Ken Ono's page , where the enumeration is given (in the triangular-number form) as Theorem 7, and the congruence as a "simple consequence" of that theorem.
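The congruence is also easy to test numerically straight from the product expansion $\eta(2z)^{12} = q\prod_{k\ge1}(1-q^{2k})^{12}$. Here is a sketch (the truncation bound $N$ is an arbitrary choice of mine) that builds the $q$-expansion by repeated polynomial multiplication and checks $\rho(n)\equiv\sigma_5(n)\pmod{2^8}$ for odd $n$.

```python
N = 200  # check the q-expansion up to q^N

# Coefficients of prod_{k>=1} (1 - q^(2k))^12, truncated at degree N.
series = [0] * (N + 1)
series[0] = 1
for k in range(1, N // 2 + 1):
    for _ in range(12):
        # Multiply in place by (1 - q^(2k)); descend so old values are used.
        for d in range(N, 2 * k - 1, -1):
            series[d] -= series[d - 2 * k]

# eta(2z)^12 = q * (product above), so shift the coefficients by one.
rho = [0] + series[:N]   # rho[n] is the coefficient of q^n

def sigma5(n):
    """Sum of fifth powers of the divisors of n (naive divisor scan)."""
    return sum(d ** 5 for d in range(1, n + 1) if n % d == 0)

for n in range(1, N + 1, 2):  # only odd n carry nonzero coefficients
    assert (rho[n] - sigma5(n)) % 256 == 0
print("rho(n) = sigma_5(n) mod 2^8 verified for odd n <=", N)
```

The first few values $\rho(1)=1$, $\rho(3)=-12$, $\rho(5)=54$ match the expansion in the question, and the differences $\sigma_5(n)-\rho(n)$ reproduce the series $256(q^3+12q^5+66q^7+\cdots)$ above.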
{ "source": [ "https://mathoverflow.net/questions/250679", "https://mathoverflow.net", "https://mathoverflow.net/users/12905/" ] }
250,826
My question concerns the proof of the following: Let $a,b,n \in \mathbb{N}$. If $n \neq 1$ and $n$ divides both $a$ and $b$, then $b$ is a composite number or $b$ divides $a$. My proof: Suppose $b$ is not composite. Then $b$ is prime. Since $n \neq 1$ and $n$ divides $b$, we must have $n = b$. Thus $b$ divides $a$. This proof has the form $(P \wedge \neg R) \to Q$. If we were using normal first order logic, I could conclude $P \to (Q \vee R)$. But, using intuitionistic logic, I cannot conclude that which was to be proven. How would you carry out this proof in intuitionistic logic?
In general, when working in constructive mathematics, the strategy for proving $Q \lor R$ is to prove $Q$ or to prove $R$. In this case, just knowing abstractly that "there is an $n \not = 1$ that divides both $a$ and $b$" does not directly tell you whether $b$ is composite or whether $b$ divides $a$. So you will need more information to come up with a constructive proof. If you know more than the fact that there is such an $n$, and you actually know the value of $n$, that would help. In particular, if you could tell whether your value of $n$ is equal to $b$, then you can tell which side of the disjunction holds. So we could use the fact $(\forall k)[k = b \lor k \not = b]$ to finish the constructive proof. Many real-world constructive systems include additional facts like that about the natural numbers, which would not be true for other objects. In particular, many constructive systems prove the sentence $(\forall n,m \in \mathbb{N})[n = m \lor n \not = m]$, which says the natural numbers have decidable equality. The motivation for accepting this in constructive systems is that, given two concrete (terms for) natural numbers, we could in principle examine them to see if they are equal. This is not the case for other objects, such as real numbers, for which equality is not decidable. The fact that equality of natural numbers is decidable is provable, in many systems of constructive math, from induction axioms that are already included. With a little work, these constructive systems prove that each natural number is either composite or not composite. And, with that extra fact, the remainder of the original proof goes through constructively. This may be out of the scope of an introductory proofs class, but it is one way a constructivist could prove the result.
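The induction argument sketched in the last two paragraphs can be made fully formal. Here is a sketch in Lean 4 (the name `natEqDec` is my own, not from any library) of the constructive proof that for naturals the disjunction $n = m \lor n \neq m$ holds: each branch of the double induction produces an explicit witness, with no appeal to excluded middle.

```lean
-- Constructive decidability of equality on Nat, by double induction.
-- Every branch hands back an explicit proof of one side of the disjunction.
def natEqDec : (n m : Nat) → n = m ∨ n ≠ m
  | 0, 0 => Or.inl rfl
  | 0, _ + 1 => Or.inr (fun h => Nat.noConfusion h)      -- 0 ≠ succ m
  | _ + 1, 0 => Or.inr (fun h => Nat.noConfusion h)      -- succ n ≠ 0
  | n + 1, m + 1 =>
    match natEqDec n m with
    | Or.inl h  => Or.inl (congrArg Nat.succ h)          -- lift equality
    | Or.inr hn => Or.inr (fun h => hn (Nat.succ.inj h)) -- peel succ
```

This is exactly why constructive systems prove $(\forall n,m \in \mathbb{N})[n = m \lor n \not = m]$ from induction alone, while no analogous procedure exists for, say, real numbers.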
{ "source": [ "https://mathoverflow.net/questions/250826", "https://mathoverflow.net", "https://mathoverflow.net/users/40570/" ] }