source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
168,917 |
A long time ago I began thinking about two problems that I have not been able to solve. It seems that one of them was recently solved. I have been thinking a lot about its motivation and consequences, mostly because people used to motivate it with some very interesting implications. My conclusion, however, is that there is a mistake in the motivation of the problem and that, while the result is really interesting, it does not make sense in the setting in which it is formulated. As my opinion carries little weight compared to that of the experts in the area, I do not say anything. My question is whether you can provide some examples of conjectures that were believed to be interesting in the mathematical community for a specific reason, but where, once the proof was found, people realized that the reason motivating the problem was not truly related to its solution. In other words, the solution of the problem gives no clues about the original motivation.
|
Computer designers and programmers dreamed, from the earliest days of the computer, of a computer that could play chess and win. Even Alan Turing had that dream, and designed Turochamp, the first chess-playing computer algorithm (it was executed on paper by hand at first, since no device could yet implement the algorithm in 1948). As researchers realized the difficulty of playing chess well, the chess challenge was taken on in earnest. The conventional view was that to design a computer that could play chess and win would partake in the essence of artificial intelligence, and in the 1970s, computer chess was described as the Drosophila melanogaster of artificial intelligence, because the work on computer chess was thought to be laying the foundations of artificial intelligence. The basic conjecture, that computers would play chess well, turned out to be true. But the way that computers played chess well was by brute-force methods, rather than with the kind of subtle intelligence that had been expected to be necessary, and so many artificial intelligence researchers were disappointed and lost interest in chess. Meanwhile, the situation has led to debate in the AI community, as some researchers have argued that AI research should in fact follow the computer chess paradigm.
|
{
"source": [
"https://mathoverflow.net/questions/168917",
"https://mathoverflow.net",
"https://mathoverflow.net/users/39115/"
]
}
|
169,159 |
A recent MO question about non-rigorous reasoning reminded me of something I've wondered about for some time.
The genus–degree formula says that the genus $g$ of a nonsingular projective plane curve of degree $d$ is given by the formula $g=(d-1)(d-2)/2$. Here is a heuristic argument for the formula that someone once told me. Take $d$ lines in general position in the plane; collectively these form a (singular) degree-$d$ curve. There are ${d\choose 2}$ points of intersection. Now think in terms of complex numbers and visualize each line as a Riemann sphere. If you start with $d$ disjoint spheres and then bring them together so that every one touches every other one (deforming when necessary), then you expect the genus of the resulting surface to be ${d\choose 2}-(d-1)=(d-1)(d-2)/2$, because after you connect them together in a line with $d-1$ connections, each subsequent connection increases the genus by one. Is there any rigorous proof of the genus–degree formula that closely follows the above line of argumentation? A standard proof of the genus–degree formula proceeds by way of the adjunction formula. This doesn't seem to me to answer my question, but perhaps I just don't understand the adjunction formula properly?
|
Yes, this argument can be made rigorous. One needs three steps. Step 1. Show that there is at least one smooth plane curve of degree $d$ with the expected genus. Essentially, the proof is given by your heuristic topological argument (deform the union of $d$ lines in general position). Step 2. Show that if one slightly perturbs the coefficients of a homogeneous polynomial defining a smooth curve, the genus remains unchanged. This is basically a continuity argument. Step 3. Show that the space $\mathbb{C}^{\rm nonsing}[x,\,y,\,z]_d$ of homogeneous polynomials of degree $d$ in three variables defining smooth curves is path-connected. This is because the complement of $\mathbb{C}^{\rm nonsing}[x,\,y,\,z]_d$ in $\mathbb{C}[x,\,y,\,z]_d$ (the so-called "discriminant locus") has real codimension $2$. Putting these three steps together one easily obtains the desired result. For further details you can look at Chapter 4 of Kirwan's book Complex Algebraic Curves.
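As a quick editorial aside (not part of the original question or answer), the bookkeeping in the heuristic - gluing $d$ spheres at ${d\choose 2}$ points, $d-1$ of which are used just to connect them into one surface - can be checked symbolically; this is a minimal sketch, with the degree range chosen arbitrarily:

```python
# Editorial check: the heuristic count C(d,2) - (d-1) equals (d-1)(d-2)/2.
import sympy as sp

d = sp.symbols('d')
pairs = d * (d - 1) / 2                       # C(d,2) pairwise intersection points
assert sp.expand(pairs - (d - 1) - (d - 1) * (d - 2) / 2) == 0
for deg in range(1, 10):                      # numeric spot check for small degrees
    assert deg * (deg - 1) // 2 - (deg - 1) == (deg - 1) * (deg - 2) // 2
```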
|
{
"source": [
"https://mathoverflow.net/questions/169159",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
169,187 |
(To be honest, I actually mean something more general than 'homotopical algebra' - topos theory, $\infty$-categories, operads, anything that sounds like its natural home would be on the nLab.) More and more lately I have been unexpectedly running into things that might be named 'homotopical algebra': glancing at symplectic topology I run into Fukaya categories, and thus $A_\infty$-categories; looking at algebraic topology above the introductory level I see language like 'model categories' or simplicial sets; and, more abstractly, algebraic geometry might lead me to stuff like the work of Jacob Lurie. These things feel bizarre and out-of-place to me - it's all so abstract and dry that it's difficult to get a handle on, and it feels so removed from "reality" (a.k.a. things I have a geometric intuition for). I understand that this is a result of my maturity level - I'm only just coming to terms with categorical language, and a few years ago I would probably have made a poor joke about "abstract nonsense". Similarly, I have only recently come to love the language of cohomology and sheaves, and this is perhaps because of the problems they solve rather than the language itself; classifying line bundles on a Riemann surface, say, or any of the classical theorems of algebraic topology - both of which can be done elegantly with tools more modern than those questions themselves. I would perhaps feel more at ease if I saw that homotopical algebra can be used for similar purposes, so: What are some down-to-earth questions (interesting to non-category theorists, say, and naturally stated in non-categorical language) that have been solved using these sorts of tools, or for which these sorts of tools are expected to be useful? Why is the language of homotopy theory useful to the modern mathematician who is not already immersed in it?
|
As a student, I'm always looking for organizing principles in mathematics to help me keep track of all of the mathematics I learn. It's easy to get lost in a deluge of definitions unless I organize them in some way. For a long time category theory was my main organizing principle (e.g. the idea of adjoint functors alone is already a very helpful way to organize constructions in mathematics), but at some point ordinary category theory became inadequate and I needed the language of higher categories (all that nLab stuff). Here is the sort of thing higher category theory helps me keep organized in my head: Where do long exact sequences come from? Why was that a thing it should have occurred to us to invent? Where can we expect them to show up in mathematics? A standard answer is that long exact sequences come from short exact sequences of chain complexes, but this is inadequate for describing at least one very important long exact sequence in mathematics, namely the long exact sequence of a fibration . This is perhaps the first long exact sequence one learns about which involves nonabelian groups, and so cannot come from homological algebra in the usual sense at all. So where does it come from? From the perspective of higher category theory, long exact sequences are shadows of two dual and more fundamental constructions, namely fiber sequences and cofiber sequences . These in turn come from repeatedly taking homotopy pullbacks resp. homotopy pushouts, which one can think of as the "nonabelian derived functors" of ordinary pullbacks resp. pushouts (which are not homotopically well-behaved and must be corrected). The long exact sequence of a fibration in particular comes from a fiber sequence of the form $$\cdots \to \Omega F \to \Omega E \to \Omega B \to F \to E \to B$$ where $F \to E \to B$ is a fibration, whereas the long exact sequences in ordinary homology and cohomology come from a cofiber sequence of the form $$A \to X \to X/A \to \Sigma A \to \Sigma X \to \Sigma X/A \to \cdots$$ where $A \to X$ is a cofibration. The higher categorical point of view tells you at least one interesting thing right off the bat, which is that these two constructions are categorically dual to each other: running the first one in the opposite category gets you the second one! (And if there's one thing any mathematician should respect, it's a duality.) More generally, there's no reason to restrict our attention to spaces: the higher categorical machinery runs in any higher category with the right structure, and in particular running it in chain complexes also gets us the long exact sequence associated to a short exact sequence of chain complexes, while also explaining that the homological mapping cone is not only analogous to but is precisely the same construction as the topological mapping cone: they're both homotopy pushouts. Let me also make some other comments. What's the deal with model categories and simplicial sets? You say that you've come to love the language of cohomology and sheaves. Great: if you're happy with the idea of using chain complexes as resolutions of objects to compute things like cohomology, then the main thing to know about the model category story is that model categories are a setting for understanding and computing with "nonabelian resolutions" (in particular to make sense of "nonabelian derived functors"), and simplicial objects can be used to build these resolutions; in particular, they can be thought of as "nonabelian chain complexes." 
That second claim can be made precise using the Dold-Kan theorem , which tells you that the category of simplicial objects in an abelian category is equivalent to the category of chain complexes concentrated in nonnegative degree. Here's a relatively concrete example. The Cech nerve of a nice cover $U \to X$ of a space is a simplicial object which resolves the space $X$ in a particular sense; in particular, if the cover has the property that every finite intersection of opens in the cover is contractible, the resulting resolution can be thought of as a nonabelian analogue of a free resolution of a module. The abelian version of this story, where you're mapping $X$ into abelian objects like Eilenberg-MacLane spaces, gives you Cech cohomology, but the nonabelian version of this story gives you, for example, the Cech cocycle description of a principal $G$-bundle. The Cech cocycle description of a principal $G$-bundle is a great place to start seeing higher category theory at work. First, let me recall the following: if $U_{\alpha}$ is an open cover of a space $X$, then to specify a continuous function $f : X \to Y$ is precisely the same data as specifying continuous functions $f_{\alpha} : U_{\alpha} \to Y$ for all $\alpha$ having the property that the restrictions of $f_{\alpha}$ and $f_{\beta}$ to their intersection $U_{\alpha \beta} = U_{\alpha} \cap U_{\beta}$ agree. This is a Cech $0$-cycle description of $\text{Hom}(X, Y)$; we are just using the fact that $X$ is the coequalizer of a certain diagram built out of the $U_{\alpha}$s. Why doesn't this suffice to describe principal bundles? It's because the functor $\text{Hom}(-, Y)$ preserves colimits, but the functor $[-, BG]$ doesn't, because we're taking homotopy classes. Another way to say this is that there is really a groupoid of principal $G$-bundles on a space $X$ (the fundamental groupoid of the mapping space $\text{Maps}(X, BG)$, in fact) and we want to know this groupoid, or at least its set of isomorphism classes. The functor $[-, BG]$ doesn't preserve colimits, morally because taking colimits isn't guaranteed to play nicely with taking homotopy classes. However, it does play nicely with homotopy colimits, and the precise sense in which a Cech nerve of a space $X$ is a resolution of a space is that, under nice hypotheses, $X$ is the homotopy colimit of that Cech nerve. You can think of this as a fancier version of the coequalizer we talked about above, which is why to specify a principal $G$-bundle you need to talk about triple intersections instead of double intersections: you need continuous functions $g_{\alpha \beta} : U_{\alpha \beta} \to G$ for all $\alpha, \beta$ having the property that the restrictions of $g_{\alpha \beta}, g_{\beta \gamma}, g_{\alpha \gamma}$ to their common intersection $U_{\alpha \beta \gamma} = U_{\alpha} \cap U_{\beta} \cap U_{\gamma}$ satisfy the cocycle relation $g_{\alpha \beta} g_{\beta \gamma} = g_{\alpha \gamma}$. This is precisely a morphism between truncations of the simplicial objects, or nonabelian resolutions, given on the one hand by the Cech nerve of the cover $U \to X$ and on the other hand by the bar resolution of $BG$! 
Thinking about algebraic topology this way has made it more topological for me: the above story can be adapted to explain ordinary Cech cohomology in a way that doesn't involve passing to chain complexes at any step, for example (the fact that you can in fact use chain complexes comes from Dold-Kan), and more generally doing algebraic topology this way lets you replace homological constructions with constructions that are genuinely about spaces.
|
{
"source": [
"https://mathoverflow.net/questions/169187",
"https://mathoverflow.net",
"https://mathoverflow.net/users/40804/"
]
}
|
169,198 |
What are the main open problems in the theory of quasigroups and loops? A short survey would be welcome. Thanks
|
|
{
"source": [
"https://mathoverflow.net/questions/169198",
"https://mathoverflow.net",
"https://mathoverflow.net/users/36575/"
]
}
|
169,256 |
Let $X$ be a regular scheme, flat and of finite type over $Spec(\mathbb{Z})$ (add "projective" if you want). Then the Hasse-Weil zeta function of $X$ is defined as a product over all prime numbers of certain local factors which are rational functions in $p^{-s}$. The local factor at $p$ is the zeta function of the fiber $X_p$, which is a variety over the finite field $\mathbb{F}_p$. For all but finitely many primes, these local factors should have "similar shape", in some sense. For example, for an elliptic curve, and a good prime $p$, the numerator is a polynomial with coefficients $(1, a_p, p)$, i.e. all these numerators are exactly the same, except of course that the prime $p$ varies. For the denominators the situation is similar. If we take a higher-genus curve, or a higher-dimensional scheme, the patterns of the local zeta function coefficients should also in some sense be "uniform in p". But what exactly is the statement in the general case? In what precise sense are the local factors "the same"? EDIT: I added some (hopefully clarifying) comments related to point counts under the question as well as under ACL's answer.
|
This is an elaboration on ACL's answer, way too long for a comment, which highlights a technical ingredient (well-known to all experts) that underlies the precise sense in which the $\ell$-adic etale cohomology of the geometric generic fiber provides a "uniformity" in $p$: the good properties of constructible $\ell$-adic sheaves.
In particular, I think it is a mistake to try to understand a precise sense of "uniformity in $p$" by focusing on point-counting: this misses the key structure, as noted in ACL's answer, namely certain $\ell$-adic representations (of the absolute Galois group of $\mathbf{Q}$) which individually are not expressed via point-counting at all (away from misleading special cases such as curves and abelian varieties for which degree-1 cohomology over finite fields contains all of the cohomological information). To explain this requires some preparations, hence the length of what follows (which is all standard stuff, but perhaps hard to extract for a non-expert; maybe even what follows is hard to read in parts for a non-expert, but I think it is important to recognize where serious theorems of etale cohomology are doing some work, going beyond the cohomological formula for the zeta function of a single variety over a single finite field). The crux is that the robustness of constructibility provides the magical glue linking behavior at different primes. Literally from the product definition, the zeta function of a separated finite type $\mathbf{Z}$-scheme $X$ is the product $\prod_p \zeta(X_{\mathbf{F}_p}, p^{-s})$ of the zeta functions of the fibers, for ${\rm{Re}}(s)$ sufficiently large (determined by fiber dimensions alone; see Serre's article in the Purdue conference proceedings on arithmetic geometry from the mid-1960's). By the work of Dwork or Grothendieck-Artin (et al.), the zeta function of any separated scheme of finite type over $\mathbf{F}_p$ (such as $X_{\mathbf{F}_p}$) is a rational function in $p^{-s}$. The cohomological formalism provides an "$\ell$-adic" explanation for the rationality of the factor at each prime $p$ in the sense that
for any prime $\ell \ne p$ we have
$$\zeta(X_{\mathbf{F}_p}, t) = \prod_{i\ge 0} \det(1 - t\phi_p|
{\rm{H}}^i_c(X_{\overline{\mathbf{F}}_p}, \mathbf{Q}_{\ell}))^{(-1)^{i+1}}$$
in $\mathbf{Q}_{\ell}[\![t]\!]$, where the left side is initially just a formal power series in $1 + t \mathbf{Z}[\![t]\!]$ (defined as a product over closed points of $X_{\mathbf{F}_p}$) and the rational function over $\mathbf{Q}_{\ell}$ on the right side might involve internal cancellations among the various determinant polynomials (ruled out for smooth proper $X_{\mathbf{F}_p}$ by Deligne's work on the Riemann Hypothesis, but not otherwise). In other words, the "$\ell$-adic" explanation for rationality rests on the fact that $\mathbf{Q}(\!(t)\!) \cap \mathbf{Q}_{\ell}(t) = \mathbf{Q}(t)$ inside $\mathbf{Q}_{\ell}(\!(t)\!)$ (and Dwork's approach provides a variant of that explanation with $\ell=p$). In the displayed product on the right side, $i$ goes up to $2 \dim X_{\mathbf{F}_p}$ (which is bounded independently of $p$, and in fact equal to $2 \dim X_{\mathbf{Q}}$ for all but finitely many $p$). That was all just setup. Now fix a prime $\ell$ and an integer $i \ge 0$. One can ask if there is a finite set $S_{i,\ell}$ of primes of $\mathbf{Z}$ with $\ell \in S_{i,\ell}$ such that the polynomials $$R_{p,i,\ell}(t) = \det(1 - t \phi_p|{\rm{H}}^i_c(X_{\overline{\mathbf{F}}_p},\mathbf{Q}_{\ell}))$$
for all $p \not\in S_{i,\ell}$ are "linked" in the sense that there is a single finite-dimensional continuous $\mathbf{Q}_{\ell}$-linear representation
$$\rho_{i,\ell}: G_{\mathbf{Q},S_{i,\ell}} \rightarrow {\rm{GL}}(V_{i,\ell})$$
of the Galois group over $\mathbf{Q}$ of its maximal extension (inside $\overline{\mathbf{Q}}$) unramified outside $S_{i,\ell}$ such that
$$\det(1 - t \rho_{i,\ell}(\phi_p)|V_{i,\ell}) = R_{p,i,\ell}(t)$$
for all $p \not\in S_{i,\ell}$, where $\phi_p \in G_{\mathbf{Q},S_{i,\ell}}$ is a member of the conjugacy class of geometric Frobenius elements at $p$ (all choices giving the same determinant). This would imply in particular that the degree of $R_{p,i,\ell}(t)$ is the same for all $p \not\in S_{i,\ell}$, but it is a much stronger statement: that $\rho_{i,\ell}$ would be a kind of "$\ell$-adic glue" which unifies the disparate $R_{p,i,\ell}(t)$'s coming from the geometric special fibers $X_{\overline{\mathbf{F}}_p}$ in varying characteristics $p \not\in S_{i,\ell}$. The crux of the matter then is the following fundamental fact: the continuous representation $V_{i,\ell} := {\rm{H}}^i_c(X_{\overline{\mathbf{Q}}},\mathbf{Q}_{\ell})$ is such a $\rho_{i,\ell}$, for an appropriate choice of $S_{i,\ell}$. Why? Here is where one has to use a real theorem, namely the preservation of constructibility of $\ell$-adic sheaves under higher direct images with proper support, coupled with the proper base change theorem. More precisely, if $h:Y' \rightarrow Y$ is any separated map of finite type between noetherian schemes over $\mathbf{Z}[1/\ell]$ and if $\mathscr{F}$ is any constructible $\mathbf{Q}_{\ell}$-sheaf on $Y'$ (e.g., the constant sheaf $\mathbf{Q}_{\ell}$) then ${\rm{R}}^i h_{!}(\mathscr{F})$ is a constructible $\mathbf{Q}_{\ell}$-sheaf on $Y$ whose formation moreover commutes with any base change (the latter due to the proper base change theorem). The point is that any constructible $\mathbf{Q}_{\ell}$-sheaf on $Y$ is lisse over a dense open $U$ (depending on the sheaf), and hence "is" just a continuous $\mathbf{Q}_{\ell}$-linear representation of the fundamental group $\pi_1(U,\eta)$ if $Y$ is normal and connected (with geometric generic point $\eta$). In particular, when $Y$ is a connected Dedekind scheme then over $U$ this lisse sheaf is nothing more or less than an $\ell$-adic representation $\rho$ of the absolute Galois group of the function field of $Y$ (i.e., the residue field at the generic point of $Y$) such that $\rho$ is unramified at all closed points $u$ of $U$. The Galois representation at $u$ arising from the $u$-stalk of the lisse sheaf coincides with the residual Galois representation arising from $\rho$ on the Galois group at the generic point by virtue of its unramifiedness at $u$ (upon choosing a decomposition group at $u$ in the Galois group at the generic point, which amounts to working with a strict henselization at $u$ inside a separable closure of the function field of $Y$ in order to compute the specialization homomorphism from geometric stalk at $u$ to a geometric generic stalk). For example, take $Y' = X_{\mathbf{Z}[1/\ell]}$ and $Y = {\rm{Spec}}(\mathbf{Z}[1/\ell])$ and $\mathscr{F} = \mathbf{Q}_{\ell}$. The above says that there is a dense open subscheme $U_{i,\ell} \subset {\rm{Spec}}(\mathbf{Z}[1/\ell])$ such that the constructible ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})$ on ${\rm{Spec}}(\mathbf{Z}[1/\ell])$ has restriction over $U_{i,\ell}$ that is lisse . 
Letting $S_{i,\ell}$ be the finite set of closed points of ${\rm{Spec}}(\mathbf{Z})$ complementary to $U_{i,\ell}$, we have that $\pi_1(U_{i,\ell}) = G_{\mathbf{Q},S_{i,\ell}}$ (using geometric generic point as base point of $\pi_1$) and the lisse restriction ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})|_{U_{i,\ell}}$ has respective stalks at the chosen geometric generic point and geometric closed point at $p \not\in S_{i,\ell}$ identified as Galois modules (for $\mathbf{Q}$ and $\mathbf{F}_p$ respectively) with the respective geometric fibral cohomologies $V_{i,\ell} := {\rm{H}}^i_c(X_{\overline{\mathbf{Q}}}, \mathbf{Q}_{\ell})$ and ${\rm{H}}^i_c(X_{\overline{\mathbf{F}}_p},\mathbf{Q}_{\ell})$ (recovering in particular that $V_{i,\ell}$ is unramified at $p$, as we know it must be due to $V_{i,\ell}$ arising from a $\pi_1(U_{i,\ell})$-representation). In other words, it is precisely the lisse pullback of ${\rm{R}}^if_{!}(\mathbf{Q}_{\ell})$ over ${\rm{Spec}}(\mathbf{Z}_{(p)})$ viewed as a representation of $\pi_1({\rm{Spec}}(\mathbf{Z}_{(p)}))$ which is the "$\ell$-adic glue" that links up the $i$th factor in the $\ell$-adic alternating product formula for $\zeta(X_{\overline{\mathbf{F}}_p},t)$ with the single entity $V_{i,\ell}$ that "doesn't know $p$". And the mechanism of this linkage is that (up to conjugation ambiguity!) we can compute that $\pi_1$ using geometric base points over either the generic or closed points of ${\rm{Spec}}(\mathbf{Z}_{(p)})$. So the upshot is that the lisse restriction of ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})$ over some dense open subscheme of ${\rm{Spec}}(\mathbf{Z}[1/\ell])$ ensures that $V_{i,\ell}$ as built from the cohomology of the geometric generic fiber (no mention of $p$!) is the origin of "uniformity in $p$" when we stare at the $p$-factors of the zeta function of $X$ for varying $p$ (away from some finite set of primes). Note in particular that the set of "bad" primes here is not encoded by geometric means via "good reduction" (a bad notion to consider away from the proper case anyway); it's all about finding a dense open inside ${\rm{Spec}}(\mathbf{Z}[1/\ell])$ over which the constructible ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})$ has lisse restriction. Note in particular that each $V_{i,\ell}$ on its own does not have anything to do with point-counting (away from special cases like curves and abelian varieties). It is only the alternating product built from these which is related to point-counting. But it is the $V_{i,\ell}$'s which are where the action is. The above is thoroughly $\ell$-adic for each $\ell$ separately whereas the zeta functions above do not mention $\ell$, so a truly satisfying sense of "uniformity in $p$" (away from a finite exceptional set) would be given by proving two more things: $U_{i,\ell}$ is "independent of $\ell$" in the sense that $U_{i,\ell} = U_i - \{\ell\}$ for some single dense open $U_i \subset
{\rm{Spec}}(\mathbf{Z})$ and that the $V_{i,\ell}$ for varying $\ell$ constitute a "compatible family" in the sense defined in Serre's book Abelian $\ell$-adic representations (here, it would mean that for $p$ corresponding to a closed point of $U_i$ and any $\ell \ne p$ the characteristic polynomial of $\phi_p$ on $V_{i,\ell}$ lies in $\mathbf{Q}[t]$ and is independent of such $\ell$). If $X_{\mathbf{Q}}$ were smooth and proper over $\mathbf{Q}$, so $X_{\mathbf{Z}[1/N]}$ is smooth and proper over $\mathbf{Z}[1/N]$ for sufficiently divisible $N > 0$, then the smooth and proper base change theorems would ensure that we could take $U = {\rm{Spec}}(\mathbf{Z}[1/N])$ and the Riemann Hypothesis would provide the "compatible family" aspect (essentially because it rules out cancellation in the alternating $\ell$-adic formula, combined with the zeta function being unaware of $\ell$). But beyond that case we don't know: "independence of $\ell$" for the characteristic polynomial of Frobenius acting on the $i$th compactly supported $\ell$-adic cohomology of a separated finite type $\mathbf{F}_p$-scheme is believed to be true but remains an unsolved problem. (If you look at the Introduction to deJong's IHES paper on alterations you'll see that he was initially hopeful that his results replacing absence of resolutions of singularities in positive characteristic might have applications to prove new "independence of $\ell$" results, but that this didn't pan out; I am not aware of anyone having made substantial progress on it since that time either, but would be happy to hear to the contrary. Even if we grant resolution of singularities then I don't think an implication is known. In the absence of precise control on weights as the purity provided by RH in the smooth proper case, it is hard to geometrically isolate the contribution in a single cohomological degree from the rest, as the long exact excision sequence associated to a stratification lumps together all cohomological degrees. Deligne's Weil II is very suggestive, but alas I think not enough even assuming resolution.)
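For concreteness (an editorial addition recording a standard fact, not part of the original answer): in the elliptic-curve example from the question, with $E/\mathbf{Q}$ having good reduction at $p$ and any prime $\ell \ne p$, one has $$\det\left(1 - t\,\phi_p \mid {\rm{H}}^1(E_{\overline{\mathbf{Q}}},\mathbf{Q}_{\ell})\right) = 1 - a_p t + p t^2, \qquad a_p = p + 1 - \#E(\mathbf{F}_p),$$ a polynomial with coefficients in $\mathbf{Z}$ that is independent of $\ell$; the spaces ${\rm{H}}^1(E_{\overline{\mathbf{Q}}},\mathbf{Q}_{\ell})$ for varying $\ell$ thus form a compatible family in exactly the sense described above.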
|
{
"source": [
"https://mathoverflow.net/questions/169256",
"https://mathoverflow.net",
"https://mathoverflow.net/users/349/"
]
}
|
169,266 |
These days, many conference centers and universities are recording seminars and conference talks and making them available on the web. Some examples: http://www.fields.utoronto.ca/video-archive http://www.birs.ca/videos/2014 http://video.ias.edu/sm http://www.newton.ac.uk/webseminars/ https://www4.math.duke.edu/video/video.html However, keeping track of all the websites for these videos is simply not practical. Does anyone know of a website that is a bit like the arXiv, but for videos? If not, would anyone be interested in starting one? What I have in mind: Everyone can add links for talks that they find interesting. Videos are divided into subjects, with tags, like the ones we have on MO. A weekly email people can subscribe to in order to get the latest updates. A discussion board for each video link so that people can add comments and ask questions. Any other ideas?
|
|
{
"source": [
"https://mathoverflow.net/questions/169266",
"https://mathoverflow.net",
"https://mathoverflow.net/users/49492/"
]
}
|
170,352 |
Suppose $P$ is a closed polyhedron in space (i.e. a union of polygons which is homeomorphic to $S^2$) and $X$ is an interior point of $P$. Is it true that $X$ can see at least one vertex of $P$? More precisely, does the entire open segment between $X$ and some vertex lie in the interior of $P$?
|
There are many points in the interior of this polyhedron, constructed (independently) by Raimund Seidel and Bill Thurston, that see no vertices. Interior regions are cubical spaces with "beams" from the indentations passing above and below, left and right, fore and aft. Standing in one of these cubical cells, you are surrounded by these beams and can see little else. [Figure from Discrete and Computational Geometry (book); image not reproduced here.] The indentations visible are not holes, in that they do not go all the way through, but rather stop just short of penetrating to the other side. So the three back faces of the surrounding cube (obscured in this view) are in fact square faces of the cube. Thus $P$ is indeed homeomorphic to a sphere. To follow Tony Huynh's point: this polyhedron $P$ cannot be tetrahedralized, i.e., it cannot be partitioned into tetrahedra all of whose corners are vertices of $P$.
|
{
"source": [
"https://mathoverflow.net/questions/170352",
"https://mathoverflow.net",
"https://mathoverflow.net/users/51663/"
]
}
|
171,457 |
The classic Hurwitz theorem for rational approximations (in simplest form; the constant can of course be improved) gives infinitely many approximations $\frac mn$ to an irrational $\alpha$ with $|\frac mn-\alpha|\lt\frac1{n^2}$. Just recently, in trying to answer a question related to rational approximation of $\pi$ I tripped over a limitation of this theorem: it tells us nothing about the specific $m,n$ of an approximation. I'm interested in $n$ particularly, and wondering if there are any 'Dirichlet-style' results that say that for any irrational $\alpha$ and for any $a, d$ we can get good approximations (in the sense above) with $n\equiv a\pmod d$. Is this a known result?
|
The answer is no. Take $\alpha=\sqrt{2}$ and note that if $|\sqrt{2}-m/n|\le 1/n^2$ then we have $0<|2n^2-m^2| \le (\sqrt{2}n+m)/n \le 3$. Now suppose we want $n\equiv 4\pmod p$ say. Then we must have that $32-m^2 \equiv b \pmod p$ for some $|b|\le 3$. But we can find a prime $p$ for which the numbers $29$ to $35$ are all quadratic non-residues $\pmod p$ (for example, choose $p$ so that $2, 7, 11, 29, 31$ are all non-residues $\pmod p$, and $3, 5, 17$ are residues). Thus there are no good approximations to $\sqrt{2}$ with $n\equiv 4\pmod p$ for such a prime $p$. One can clearly vary this argument a fair bit.
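As an editorial sanity check (not part of the original answer), here is a short Python sketch: it searches for a prime $p$ for which $29,\dots,35$ are all quadratic non-residues, and then verifies with exact integer arithmetic, for the tested values $n \equiv 4 \pmod p$, that no integer $m$ gives $|2n^2-m^2|\le 3$, hence no approximation with $|\sqrt{2}-m/n|\le 1/n^2$; the search bound and test range are arbitrary choices.

```python
# Editorial sanity check of the argument above.
from math import isqrt

def is_prime(q):
    return q > 1 and all(q % d for d in range(2, isqrt(q) + 1))

def is_qr(a, p):  # quadratic residue test via Euler's criterion
    return pow(a % p, (p - 1) // 2, p) == 1

# Find a prime p for which 29,...,35 are all quadratic non-residues mod p.
p = next(q for q in range(37, 10**6)
         if is_prime(q) and all(not is_qr(k, q) for k in range(29, 36)))
print("prime found:", p)

# Spot check: for n = 4 (mod p), the nearest integers m to sqrt(2)*n always give
# |2n^2 - m^2| > 3, so |sqrt(2) - m/n| <= 1/n^2 is impossible for these n.
for n in range(4, 500 * p, p):
    m = isqrt(2 * n * n)
    assert min(abs(2 * n * n - m * m), abs(2 * n * n - (m + 1) ** 2)) > 3
```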
|
{
"source": [
"https://mathoverflow.net/questions/171457",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7092/"
]
}
|
171,717 |
Grothendieck's homotopy hypothesis is, as the nLab states: Theorem: There is an equivalence of $(\infty,1)$-categories $(\Pi \dashv |{-}|): \mathbf{Top} \simeq \mathbf{\infty Grpd}$. What are the applications of this hypothesis? Why is it so fundamental? Can it be "generalized", perhaps by using the following definition of spaces: a space is simply a sheaf of sets on some site $\mathbf{Loc}$ of local models with a Grothendieck topology $\tau$ on it?
|
For me this result fits in a context of other results that give complete algebraic invariants for homotopy types. The broad program sometimes goes under the rubric Whitehead's algebraic homotopy program . If we define a homotopy $n$-type (for $n \geq 1$) as an object of the localization of a suitable category of spaces (e.g., $Top$ or simplicial sets) with respect to maps $f: X \to Y$ that induce isomorphisms on homotopy groups $\pi_k(X, x) \to \pi_k(Y, f(x))$ for each choice of basepoint $x$ and $1 \leq k \leq n$, then it is well-known and classical that homotopy 1-types are classified by their fundamental groupoids, i.e., the localization is equivalent to the category of groupoids. The homotopy hypothesis can be seen as a far-reaching generalization of this basic result; the result is essentially a 1-dimensional truncation of the homotopy hypothesis. Thus, we could extend this idea of $n$-truncating $\infty$-groupoids past $n = 1$. Homotopy $n$-types are thus classified by $n$- groupoids ; it is interesting to see how this subsumes some of the classical results. For example, looking at connected 2-types, these are classified by groupal monoidal groupoids; passing to appropriate skeletal models, this means that connected 2-types $X$ are classified by triples $(\pi_1(X), \pi_2(X), k)$ where $k \in H^3(\pi_1(X), \pi_2(X))$ (an example of a $k$- invariant ) is the class of a 3-cocycle $$\pi_1(X) \times \pi_1(X) \times \pi_1(X) \to \pi_2(X)$$ that in essence specifies an associativity constraint for a monoidal category structure. This description is a modern rendering of work going back to Eilenberg and Mac Lane: S. Eilenberg and S. MacLane, Determinationation of the Second Homology and Cohomology Groups of a Space by Means of Homotopy Invariants . Proc. Nat. Acad. Sci., Vol. 32 (1946) 277-280 where the $k$-invariant of a 2-type $X$ can be described in terms of its Postnikov tower. Along similar lines, Joyal and Tierney (in apparently unpublished work, but circa 1984) described algebraic 3-types in terms of Gray-enriched groupoids, which are appropriately strictified groupoidal tricategories. Of course, we also know that homotopy types can be described in terms of Kan complexes (which are classical models for $\infty$-groupoids), which is one simplicial manifestation of the homotopy hypothesis. My understanding is that Grothendieck was interested in fundamental algebraic operations on a more globular type of structure that arises from homotopy $n$-types $X$, where the $j$-cells for $j < n$ are maps $D^j \to X$ (thinking here of $D^j$ as a co-globular space) and $n$-cells are maps $D^n \to X$ modulo homotopy rel boundary. Surely this type of structure fits roughly into the sequence of the "classical" weak $n$-categories (category, bicategory, tricategory). If I can indulge in some shameless self-promotion, a main motivation for the notion of weak $n$-category that I presented in 1999 (see Tom Leinster's article ) was actually to give a definition of Grothendieck fundamental $n$-groupoids along those sorts of "classical" lines (this is somewhat predating the "$(\infty, 1)$-revolution"). I am not sure of the state of the art here, but if I can be allowed to speculate wildly (and with it being understood that I am not a homotopy theorist): we might begin by observing that objects like Kan complexes (or their truncations) are not, technically speaking, algebraic in the sense of being strictly described in terms of operations subject to equational axioms. 
They can be algebraized, and something like an algebraic notion of (weak) $n$-groupoid or $\infty$-groupoid results, but actually milking usable algebraic invariants out of such structures, i.e., extracting usable algebraic invariants for $n$-types in a way that extends the results of Eilenberg-Mac Lane, Joyal-Street, etc., seems to me a possibly worthy research project. Possibly such study would proceed by locating usable semi-strictifications of $n$-categorical structures (à la the manner in which Gray-groupoids are semistrict tricategorical groupoids). In other directions: researchers such as Ronnie Brown have worked on $n$-types via more cubical notions (cubical $\omega$-groupoids), especially with a view toward higher van Kampen theorems, and Baues (whose work I don't know) has also worked on Whitehead's program (see his book Algebraic Homotopy, I guess). While we're in this speculative mode: I don't know what really to say to the suggestion that we generalize the HH to objects of a Grothendieck topos $E$ (whose objects are thought of as "generalized spaces"). The only thing that comes to my mind right away is that we could try defining homotopy groups of objects in $E$ by appealing to a suitable "geometric realization" that passes through a left-exact left adjoint $E \to \textbf{Simp-Set}$. But there could be many choices of such lex left adjoints; they correspond to "$E$-models" in simplicial sets, where model is in the sense of whatever geometric theory the Grothendieck topos $E$ classifies. This could be interesting, or could be a dead end. Hard for me to say.
|
{
"source": [
"https://mathoverflow.net/questions/171717",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
171,724 |
This is perhaps unanswerable, or perhaps I am too algebraically ignorant to phrase it cogently, but: Is there some identifiable reason that polynomials over $\mathbb{C}$, $\mathbb{R}$, $\mathbb{Q}$, $\mathbb{Z}$, $\mathbb{Z}/n\mathbb{Z}$ are so pervasively useful in mathematics? Is it because polynomials are in some sense the most natural functions defined on a field? I know that every function $\mathbb{F}^n \to \mathbb{F}$ over a finite field $\mathbb{F}$ is a polynomial. And, by the Stone–Weierstrass theorem, every continuous function on an interval can be approximated by a polynomial. Is this the universal aspect of polynomials that "explains" their ubiquity? Even tropical polynomials, which employ alternative addition/multiplication operations forming a semiring, are proving useful. I'd appreciate your insights!
|
Polynomials are, essentially by definition, precisely the operations one can write down starting from addition and multiplication. More formally, polynomials with coefficients in a commutative ring $R$ are precisely the morphisms in the Lawvere theory of commutative $R$-algebras. So in some sense caring about polynomials is equivalent to caring about commutative rings and, more generally, commutative algebras. See, for example, this blog post for some details; in particular, that blog post makes precise the assertion that polynomials are not only the most natural but in fact the only natural operations on commutative $R$-algebras.
|
{
"source": [
"https://mathoverflow.net/questions/171724",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
171,733 |
I asked a question at Math.SE last year and later offered a bounty for it, but it remains unsolved even in the simplest case. So I finally decided to repost this case here: Is it possible to express the following indefinite integral in elementary functions?
$${\large\int}\sqrt{x+\sqrt{x+\sqrt{x+1}}}\ dx$$
|
The answer is 'no'. Making the substitution
$$
x = \frac{(t-1)(t-5)(t^2+2t+5)}{16t^2},
$$
one finds
$$
{\textstyle\sqrt{x+\sqrt{x+\sqrt{x+1}}}\,\mathrm{d}x}
= \frac{(t^2-2t+5)(t^2-5)\sqrt{t^4{-}2t^2{-}40t+25}\ \mathrm{d}t}{32t^4}.
$$
Denote the right hand side of the above equation by $\beta$. Now, setting
$$
Q(t) = \frac{(25-25t-14t^2-3t^3+t^4)}{96t^3},
$$
one finds that
$$
\beta = \mathrm{d}\left(Q(t)\sqrt{t^4{-}2t^2{-}40t+25}\right)
+ \frac{(10+15t-2t^2+t^3)\,\mathrm{d}t}{8t \sqrt{t^4-2t^2-40t+25}}\,
$$
The second term on the right hand side is in normal form for differentials on the elliptic curve $s^2 = t^4-2t^2-40t+25$ that are odd with respect to the involution $(t,s)\mapsto(t,-s)$. Since this term represents a differential that has two poles of order $2$ (over the two points where $t=\infty$) and two poles of order $1$ (over the two points where $t=0$), an application of Liouville's Theorem (on integration in elementary terms, with the differential field taken to be the field of meromorphic functions on the elliptic curve) shows that this term is not integrable in elementary terms. Hence $\beta$ is not, and hence the original integrand is not.
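For the reader who wants to check the substitution: with the positive branches taken for large $t>0$, the inner radicals become rational functions of $t$,
$$\sqrt{x+1}=\frac{t^2-2t+5}{4t},\qquad \sqrt{x+\sqrt{x+1}}=\frac{t^2-5}{4t},\qquad x+\sqrt{x+\sqrt{x+1}}=\frac{t^4-2t^2-40t+25}{16t^2},$$
while $\mathrm{d}x=\dfrac{(t^2-2t+5)(t^2-5)}{8t^3}\,\mathrm{d}t$; combining the last two facts recovers the displayed expression for $\sqrt{x+\sqrt{x+\sqrt{x+1}}}\,\mathrm{d}x$.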
|
{
"source": [
"https://mathoverflow.net/questions/171733",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9550/"
]
}
|
171,754 |
Let $N=pq$ where $p$ and $q$ are primes of the form $4k+1$. Let $\mathbb{Z}_N$ be the set of integers modulo $N$ and $\mathbb{Z}_N^*$ be the units in $\mathbb{Z}_N$. Let $QR$ be the quadratic residues in $\mathbb{Z}_N^*$. If none of $p$ and $q$ is $5$, then show that $\mathbb{Z}_N=\{a-b: a, b \in QR\}$.
That is, we need to show that every element of $\mathbb{Z}_N$ can be expressed as a difference of quadratic residues. Actually I observed that for $N$ being a product of two primes, apart from $3$ and $5$, $\mathbb{Z}_N$ can always be represented as the set of differences of its quadratic residues. I tried to prove it but I failed. That's why I asked it here. Initially I started with primes of the form $4k+1$, as $-1$ is a quadratic residue there. However, nothing seems to work out.
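For small moduli the claim is easy to test by brute force; here is a minimal Python sketch of the kind of check behind the observation above (the helper name is just illustrative):

```python
from math import gcd

def differences_of_squares_cover(p, q):
    """Check whether every residue mod N = p*q is a difference of two quadratic residues."""
    N = p * q
    units = [a for a in range(1, N) if gcd(a, N) == 1]
    QR = {a * a % N for a in units}               # quadratic residues in (Z/N)^*
    diffs = {(a - b) % N for a in QR for b in QR}
    return diffs == set(range(N))

print(differences_of_squares_cover(13, 17))  # True; both primes are 1 mod 4 and neither is 5
```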
|
|
{
"source": [
"https://mathoverflow.net/questions/171754",
"https://mathoverflow.net",
"https://mathoverflow.net/users/52945/"
]
}
|
171,995 |
Given an algebraically closed field $F$, for any positive integer $n$, are there always only finitely many non-isomorphic (noncommutative) associative algebras (possibly without identity) with dimension $n$ over $F$? This question is motivated by the classification of low dimensional algebras. It seems that at least when $n$ is less than 6, the answer is yes. I'm also guessing that the number of non-isomorphic classes doesn't depend on the choice of algebraically closed fields--I've convinced myself this is true for low dimensional cases. So far I have two ideas: 1. To compute the dimension of the variety of associative algebras of dimension $n$, and then consider the $GL_n(F)$ action on the variety; 2. Every algebra of dimension $n$ can be embedded as a subalgebra of $M_{n+1}(F)$. But 1. is also a difficult problem for me and I don't know how to use 2.
|
Even for $4$-dimensional algebras with identity it's not true. For $a\in F$ let $B(a)=F\langle x,y|x^2=y^2=0,xy=ayx\rangle$. Then $B(a)\not\cong B(b)$ unless $a=b$ or $a=b^{-1}$. This is quite easy to see by considering which elements of $B(a)$ square to zero: If $z=\lambda_11+\lambda_xx+\lambda_yy+\lambda_{yx}yx$ with $z^2=0$, then clearly $\lambda_1=0$. So $z^2=\lambda_x\lambda_y(a+1)yx$, which is only zero if $\lambda_x=0$ or $\lambda_y=0$ (unless $a=-1$, which characterizes $B(-1)$ as the only $B(a)$ with a $3$-dimensional space of square zero elements, so let's assume $a\neq-1$). So, modulo the ideal $(yx)$, the only square zero elements are scalar multiples of $x$ and $y$, and any two such elements $z$ and $z'$ that generate $B(a)$ satisfy $zz'=az'z$ or $z'z=azz'$. So the isomorphism type of $B(a)$ determines $\{a,a^{-1}\}$.
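The key computation above (that an element $z$ with $\lambda_1=0$ has $z^2=\lambda_x\lambda_y(a+1)\,yx$) is easy to sanity-check by encoding the structure constants; a minimal sympy sketch (the encoding of the basis and the helper names are mine):

```python
from sympy import symbols, expand

a, lx, ly, lyx = symbols('a lambda_x lambda_y lambda_yx')

# Structure constants of B(a) = F<x, y | x^2 = y^2 = 0, xy = a*yx> on the basis {1, x, y, yx}.
basis = ['1', 'x', 'y', 'yx']
mult = {(u, v): {} for u in basis for v in basis}   # default: product is 0
for b in basis:
    mult[('1', b)] = {b: 1}
    mult[(b, '1')] = {b: 1}
mult[('x', 'y')] = {'yx': a}                        # xy = a*yx
mult[('y', 'x')] = {'yx': 1}
# x^2 = y^2 = 0 and any product of three letters from {x, y} vanishes, so all other products are 0.

def multiply(u, v):
    """Product of two elements given as dicts {basis element: coefficient}."""
    out = {}
    for b1, c1 in u.items():
        for b2, c2 in v.items():
            for b3, c3 in mult[(b1, b2)].items():
                out[b3] = expand(out.get(b3, 0) + c1 * c2 * c3)
    return {b: c for b, c in out.items() if c != 0}

z = {'x': lx, 'y': ly, 'yx': lyx}                   # a general element with lambda_1 = 0
print(multiply(z, z))   # {'yx': a*lambda_x*lambda_y + lambda_x*lambda_y}, i.e. lambda_x*lambda_y*(a+1)*yx
```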
|
{
"source": [
"https://mathoverflow.net/questions/171995",
"https://mathoverflow.net",
"https://mathoverflow.net/users/27976/"
]
}
|
172,690 |
How and when did the word "normal" acquire this meaning ? When I first thought of this, I couldn't really come up with any explanation that wasn't complete speculation -- pretty much all I was able to see was that it isn't any stranger than "right" in "right angle" -- the angle is probably as right as the lines are normal. But on reading the etymology note in the entry for "normal" in the American Heritage Dictionary, Middle English, from Late Latin normalis, from Latin, made according to the square, from norma, carpenter's square; I thought that was probably it -- it probably came from the perpendicular sides of a carpenter's square. Did it then? And how was it introduced? If this is indeed what's going on, either the meaning must have been introduced long ago, when people still realized what the etymology of "normal" is, or some really erudite mathematician must have introduced it, perhaps to show off their erudition. So how did it happen?
|
normalis already meant right-angled in classical Latin; for example, angulus normalis appears in the first century text De institutione oratoria (volume XI, paragraph 3.141) by Marcus Fabius Quintilianus. In a commentary on this text from the fifteenth century this early use of the word "normalis" is explained as "rectus", see screenshot: "Angulus normalis est idem qui angulus rectus" = "a normal angle is the same as a right angle" In response to Ketil Tveiten's question: "How did normal come to mean ordinary" : according to this source, the meaning of normal as conforming to common standards seems to be of recent origin (1828?).
|
{
"source": [
"https://mathoverflow.net/questions/172690",
"https://mathoverflow.net",
"https://mathoverflow.net/users/20803/"
]
}
|
172,838 |
Metamathematics has a reasonably clear connotation,
enough to have a Wikipedia page ,
with Gödel, Tarski, and Turing playing leading roles;
Kleene's book ( Introduction to Metamathematics ( Amazon link ));
Chaitin's article (" Meta-mathematics and the foundations of mathematics. " EATCS Bulletin , June 2002, vol. 77, pp. 167-179); etc.
My question is: Q. Is there an identifiable meta-metamathematics, a scholarly study of metamathematics,
perhaps in the philosophical (rather than mathematical) literature?
Or does all the literature essentially "devolve" to metamathematics,
without an identifiable line that can be drawn between metamathematics and
meta-metamathematics? Citations to possibly-meta-metamathematical studies would be appreciated! Thanks!
|
My opinion is that there is no crisp distinction between
mathematics, metamathematics and meta-metamathematics, and the
subjects thoroughly blend one into another in such a way that
prevents any coherent distinction. Furthermore, even the categorization of particular topics as
mathematics or metamathematics has changed radically over time,
and many topics that were formerly considered metamathematics are
now just mathematics. For example, the ultrapower construction was
born in metamathematics, but is now widely seen as a fundamental
mathematical construction. The method of forcing was initially
used only for relative consistency proofs, but is now saturated
with a mixture of infinite combinatorics, ideals, Boolean
algebras, topology, transfinite limits, and so on. Computability
theory was born in purely philosophical speculation about what it
means for a human to undertake a computable procedure, but gave
birth to complexity theory and other extremely applied
mathematical topics. Is the polynomial time
hierarchy regarded today as metamathematics? I don't think so, but
it is a part of complexity theory, which is a part of
computability theory, which is traditionally considered
metamathematics. The study of large cardinals is tied to
fundamental issues in logic, such as definability and
constructiblity, but also involves at its core essentially
mathematical questions about infinite combinatorics, measure
theory, complex systems of embeddings and so on. Where does the
mathematics end and the metamathematics begin? It is all wrapped
up together. The term metamathematics has traditionally included the entire
subjects of model theory, set theory, proof theory and
computability theory, but I think this kind of usage of the term
is simply no longer accurate, since huge parts of these subjects
are now more mathematical than metamathematical. I think that the
term "metamathematics" may have made more sense as a unifying
umbrella term in an earlier age, when many mathematicians were
simply less familiar with these subjects than is the case today. Consider my work with Benedikt Löwe on the modal logic of forcing . The main theorem is that
the ZFC provably valid principles of forcing are exactly those in
the modal theory known as S4.2. Now, the principles under
consideration, the principles of forcing, can themselves surely
be considered as metamathematical, as they concern how truth varies in the generic
multiverse, the Kripke model of possible worlds consisting of the
set-theoretic universe in the context of all its forcing
extensions. Since the principles are thus metamathematics, and we are
proving theorems about which principles are provably valid, one could consider this to be solidly a case of meta-metamathematics. But if
you look at the paper, I think you will mainly find just plain old
mathematics, with detailed inductions and finite partial order
combinatorics and some infinite combinatorics and forcing
iterations, mixed in with some modal logic, which is essentially finite combinatorics. This example therefore illustrates my point that there
is really no coherent distinction into
mathematics/metamathematics/meta-metamathematics.
|
{
"source": [
"https://mathoverflow.net/questions/172838",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
172,869 |
The question is whether the below is true. $$\sum _{k=0}^{s-1} \binom{n}{k}=\sum _{k=1}^s 2^{k-1} \binom{n-k}{s-k}$$ Mathematica can simplify as follows, but it fails to Reduce[] or Solve[]. $$2^n=\binom{n}{s} \, _2F_1(1,s-n;s+1;-1)+\binom{n-1}{s-1} \, _2F_1(1,1-s;1-n;2)$$
|
A slightly less computational method is to note that both sides of the identity count the number of subsets of $\{1,\dots,n\}$ with fewer than $s$ elements. This is obvious for the left hand side. It's true for the right hand side because $2^{k-1}\binom{n-k}{s-k}$ is the number of such subsets $S$ for which $k$ is minimal such that $|S\cup\{1,\dots,k\}|\geq s$, since such a subset $S$ is the union of an arbitrary subset of $\{1,\dots,k-1\}$ and a subset of size $s-k$ of $\{k+1,\dots,n\}$.
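The identity is also trivial to confirm numerically; a minimal Python check (for illustration only):

```python
from math import comb

def lhs(n, s):
    return sum(comb(n, k) for k in range(s))

def rhs(n, s):
    return sum(2 ** (k - 1) * comb(n - k, s - k) for k in range(1, s + 1))

assert all(lhs(n, s) == rhs(n, s) for n in range(1, 25) for s in range(1, n + 1))
print("identity verified for all 1 <= s <= n < 25")
```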
|
{
"source": [
"https://mathoverflow.net/questions/172869",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
175,775 |
To clarify the terms in the question above: The symmetric group Sym($\Omega$) on a set $\Omega$ consists of all bijections from $\Omega$ to $\Omega$ under composition of functions. A generating set $X \subseteq$ Sym($\Omega$) is minimal if no proper subset of $X$ generates Sym($\Omega$). This might be a difficult question, but perhaps the answer is known already?
|
I think it follows from Theorem 1.1 of "Subgroups of Infinite Symmetric Groups" by Macpherson and Neumann (J. London Math. Soc. (1990) s2-42 (1): 64-84) that there is no minimal generating set of $S(\Omega)$ for infinite $\Omega$. The theorem states that any chain of proper subgroups of $S(\Omega)$ whose union is $S(\Omega)$ must have cardinality strictly greater than $|\Omega|$. Now suppose $X$ is a minimal generating set. Let $C=\{x_0,x_1,\dots\}$ be a countably infinite subset of $X$ (such a subset exists, since $S(\Omega)$ is uncountable and hence $X$ must be infinite). If
$$H_i=\langle X\setminus C,x_0,\dots,x_i\rangle$$
for $i\in\mathbb{N}$, then
$H_0<H_1<\dots$ is a countable chain of proper subgroups whose union is $S(\Omega)$, contradicting the theorem. (Note: There's a paper of Bigelow pointing out some unstated set-theoretic assumptions in Macpherson and Neumann's paper, but I don't think that affects the theorem I mention.)
|
{
"source": [
"https://mathoverflow.net/questions/175775",
"https://mathoverflow.net",
"https://mathoverflow.net/users/55865/"
]
}
|
175,800 |
This question was originally asked on MathStackExchange and is migrated here with opinion from MO meta. I am integrating the inputs from users Daniel Fischer and Emil Jerabek there into this post. Which sets occur as boundaries of other sets in topological spaces? Of course the boundary of a set is closed. But not every closed set in a topological space is the boundary of some set in that space; only the empty set occurs as a boundary in a discrete space. More generally, boundaries cannot contain isolated points of the ambient space (but can well have isolated points of itself). It is tempting to assert that boundaries have empty interiors, but this is not true, as is shown by the fact that the boundary of Q in R is R. In fact it can be seen in general that the boundary of a dense set with empty interior is the whole space. Thus the only boundary in an indiscrete space is the whole set, just as the only boundary in a discrete space is the empty set. However the intuitive feeling comes right for open sets (and then for closed sets as well): The boundary of an open set cannot contain an open set. This question concerns all subsets of a topological space. An alternative/related question shall be to characterise all topological spaces in which every closed set occurs as a boundary .
Being a perfect space (i.e., one without isolated points) is a necessary condition, as a previous remark above about isolated points shows.
Emil Jerabek has conjectured on meta that this (being perfect) is sufficient too, for $T_0$ second countable spaces.
|
The spaces in which every closed set is a boundary are precisely the resolvable spaces. A topological space is said to be resolvable if it can be partitioned into two dense subspaces. $\mathbf{Proposition}$ A space is resolvable if and only if every closed set is the boundary of some set. $\leftarrow$ If $X$ is a topological space such that every closed subspace of $X$ is a boundary of some set, then the set $X$ is the boundary of some set $A$. However, if
$X=\partial A=\overline{A}\setminus(A^{\circ})$, then $A^{\circ}=\emptyset$, so
$\overline{A^{c}}=(A^{\circ})^{c}=X$. Therefore $A$ and $A^{c}$ are two disjoint dense subsets of $X$, so $X$ is resolvable. $\rightarrow$ Suppose that $X$ is resolvable. Then there is a partition $A,B$ of $X$ into two dense subspaces. Let $C\subseteq X$ be a closed subspace. Then I claim that $C=\partial (\partial C\cup(C^{\circ}\cap A))$. Clearly $\partial C\cup(C^{\circ}\cap A)\subseteq C$, so
$\overline{\partial C\cup(C^{\circ}\cap A)}\subseteq C$. Clearly $\partial C\subseteq\overline{\partial C\cup(C^{\circ}\cap A)}$. On the other hand, if $x\in C^{\circ}$, then for each open neighborhood $U$ of $x$, the set $U\cap C^{\circ}$ is also an open neighborhood of $x$. Therefore the set $A\cap U\cap C^{\circ}$ is non-empty since $A$ is a dense set. We therefore conclude that $x\in\overline{C^{\circ}\cap A}\subseteq\overline{\partial C\cup(C^{\circ}\cap A)}$. We therefore conclude that $C^{\circ}\subseteq\overline{\partial C\cup(C^{\circ}\cap A)}$. Therefore, we have $C=C^{\circ}\cup\partial C\subseteq\overline{\partial C\cup(C^{\circ}\cap A)}$. Therefore $C=\overline{\partial C\cup(C^{\circ}\cap A)}$. On the other hand, if $U\subseteq\partial C\cup(C^{\circ}\cap A)$ is open, then $U\subseteq C^{\circ}\cap A$. However, since $B$ is dense, $U$ must be empty. Therefore, $(\partial C\cup(C^{\circ}\cap A))^{\circ}=\emptyset$. We conclude that $C=\partial(\partial C\cup(C^{\circ}\cap A))$. $\mathbf{QED}$ As was pointed out by Will Sawin, a closed subset of a topological space is a boundary of some space if and only if its interior is resolvable. Since the notion of resolvability has not appeared on this website before, let me state a few facts about resolvability. Most spaces with no isolated points that one deals with in topology are resolvable (and much more ).
Let's call a space $X$ $\kappa$-resolvable if $X$ can be partitioned into $\kappa$ many dense subsets. The dispersion character $\Delta(X)$ of a topological space $X$ is the minimal cardinality of a non-empty open subspace of $X$. A topological space $X$ is said to be maximally resolvable if it is $\Delta(X)$-resolvable. Every compact Hausdorff space and every metric space is maximally resolvable. Furthermore, assuming $V=L$, every Baire space with no isolated points is $\aleph_{0}$-resolvable. In fact, the existence of a Baire irresolvable space with no isolated points is equiconsistent with the existence of a measurable cardinal. Also, every countably compact regular space is $\omega_{1}$-resolvable. Also, resolvability is equivalent to a seemingly weaker condition. A space $X$ is resolvable if and only if it can be partitioned into finitely many subsets with empty interiors.
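To see the construction in the most familiar resolvable space: take $X=\mathbb{R}$, $A=\mathbb{Q}$, $B=\mathbb{R}\setminus\mathbb{Q}$, and the closed set $C=[0,1]$. Then $\partial C=\{0,1\}$ and $C^{\circ}=(0,1)$, and the recipe produces $S=\{0,1\}\cup\bigl((0,1)\cap\mathbb{Q}\bigr)$; indeed $\overline{S}=[0,1]$ and $S^{\circ}=\emptyset$, so $\partial S=[0,1]=C$.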
|
{
"source": [
"https://mathoverflow.net/questions/175800",
"https://mathoverflow.net",
"https://mathoverflow.net/users/50650/"
]
}
|
175,833 |
I think we all occasionally come across terminology that we'd like to see supplanted (e.g. by something more systematic). What I'd like to know is, under what circumstances is it reasonable to believe that such a fight is winnable? Question. What recent programmes to alter highly-entrenched mathematical terminology have succeeded, and under what conditions do
they tend to succeed or fail? Definitions. Recent = In the past fifty years. Highly-entrenched = The literature at the time had the old terminology written all over it. Succeeded = The new terminology is now used almost exclusively in research papers.
|
Although just beyond your 50-year scope, this may be of interest. Among the series $\mathsf A_n, \mathsf B_n, \mathsf C_n, \mathsf D_n$ in the Cartan-Killing classification of simple Lie groups, everyone (I believe) always agreed to call $\mathsf A_n$ the special linear group, $\mathbf{SL}(n)$, and $\mathsf B_n$ and $\mathsf D_n$ the special orthogonal groups, $\mathbf{SO}(2n+1)$ and $\mathbf{SO}(2n)$. But $\mathsf C_n$? Jordan in his Traité des substitutions (1870) called it (or rather its product with dilations) the abelian group , because of its role in Hermite's "important investigations on the transformation of abelian functions"; p.172 : It is clear that if two [linear] substitutions $S, S'$ multiply $\varphi\ [=x_1\eta_1-\xi_1y_1+\dots+x_n\eta_n-\xi_ny_n]$ respectively by constant integers $m, m'$, $SS'$ will multiply $\varphi$ by the constant integer $mm'$. Hence the sought substitutions form a group. We will call it the abelian group , and its substitutions abelian . This was well entrenched by the time Dickson wrote his Linear groups (1901); p.89 : A linear homogeneous substitution on $2m$ indices (...) is called Abelian if (...) it leaves formally invariant up to a factor (belonging to the field) the bilinear function
$$
\varphi\equiv\sum_{i=1}^m
\begin{vmatrix}
\xi_{i1}&\eta_{i1}\\
\xi_{i2}&\eta_{i2}
\end{vmatrix}.
$$
The totality of such substitutions constitutes a group called the general Abelian linear group ${}^2)$ $GA(2m,p^n)$. These of its substitutions which leave $\varphi$ absolutely invariant form the special Abelian linear group $SA(2m,p^n)$. ${}^2)$ To distinguish these groups from the ordinary Abelian , i.e. commutative, groups, we prefix the adjective linear . The Abelian linear group is not commutative in general. On the other hand, Sophus Lie and his school called $\mathsf C_n$ the linear complex group because it consists of symmetries of Plücker's linear line complex (a degree 1 hypersurface in the 4-dimensional space of affine lines in $\mathbf R^3$; Plücker (1866), p.341 : "The latin word complexus , which means an intertwining , an inter-crossing , has seemed to us very appropriate to express the new idea we are presenting here. For lack of a better term, we ask for permission to introduce it in the mathematical language.") Thus for instance one finds in Lie and Engel's Transformationsgruppen , vol. II (1890), p.522 : One can say that the Pfaffian equation (73) represents a linear complex of the space $x'_1\cdots x'_n$, $y'_1\cdots y'_n$, $z'$. The group (72) should therefore be called the projective linear complex group . Needless to say, both proposed names came into conflict with other spreading usages of these words. Hence Weyl's famous footnote in The classical groups (1938), p.165 : $$\text{CHAPTER VI}$$
$$\textbf{THE SYMPLECTIC GROUP}$$
$$\text{1. Vector Invariants of the Symplectic Group}^*$$
* The name "complex group" formerly advocated by me in allusion to line complexes,
as these are defined by the vanishing of antisymmetric bilinear forms, has become more and
more embarrassing through collision with the word "complex" in the connotation of
complex number. I therefore propose to replace it by the corresponding Greek adjective
"symplectic." Dickson calls the group the "Abelian linear group" in homage to Abel
who first studied it. Weyl's change was (obviously) highly successful. Ironically, Plücker's linear line complex term had won over Chasles' proposed focal system (1837). If it hadn't, today's symplectic geometers would probably all be doing "focal geometry"!
|
{
"source": [
"https://mathoverflow.net/questions/175833",
"https://mathoverflow.net",
"https://mathoverflow.net/users/26080/"
]
}
|
175,847 |
G. H. Hardy's A Mathematician's Apology provides an answer as to why one would do mathematics, but I'm unable to find an answer as to why mathematics deserves public funding. Mathematics can be beautiful, but unlike music, visual art or literature, much of the beauty, particularly at a higher level, is only available to the initiated. And while many great scientific advances have been built on mathematical discoveries, many mathematicians make no pretense of caring about any practical use their work may have. So when the taxpayer or a private organization provides grants for mathematical research, what are they expecting to get in return? I'm asking not just so I can write more honest research proposals, or in case it comes up in an argument, but so I have an answer for myself.
|
Benson Farb answered this beautifully in his University of Chicago commencement address. Here is an excerpt: Since I am a pure mathematician, Dean Hefley suggested as a possible
topic for this talk: “Why the square root of negative 1 is necessary”.
I could take up this challenge of justifying pure science on its vast
applicability; indeed the square root of negative 1, the basic
“imaginary number”, underlies a huge swath of modern technology, from
the design of circuits, airplanes and skyscrapers, to the construction
of economic and financial models, to robotics. I have decided,
however, to take the opposite point of view. I want to defend the
value of basic science for its own sake... ...the purpose of pure mathematics, of basic
science, is not the quick harvest. It is nothing less than an attempt
to bring human thought and understanding to a higher level. It is an
attempt to change not just what we think about the world, but how we
think about it. The importance of this for human evolution is
incalculable. As British physicist JJ Thomson said: “Research in
applied science leads to reforms, research in pure science leads to
revolutions.” Benson Farb, 2012
|
{
"source": [
"https://mathoverflow.net/questions/175847",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
175,923 |
The classical version of the van Kampen theorem is concerned about the fundamental group of a based space. In fact, it says that the functor $\pi_1$ preserves certain types of pushouts in $Top_*$. There is also a generalization of the van Kampen theorem that holds for the fundamental groupoid of a space $X$, where in this case it states precisely that the fundamental groupoid functor, $\Pi$, preserves certain colimits in $Top$, namely, those that arise from "nice" open coverings of $X$. The "groupoid" version of the van Kampen theorem seems to me more conceptual and more elegant than the classical version. Also, the groupoid version allows one to prove the classical version in a more or less easier way. Although, apart from the conceptual advantages of the groupoid version of van Kampen theorem, I would like to know if we are able to do any interesting calculations using the fundamental groupoid version of the van Kampen. In fact, explicitly describing a groupoid as a colimit of "simpler" groupoids is something that it is not clear at all for me. I would like to know some concrete cases where is it possible to describe the fundamental groupoid of a space, using this generalization form of the van Kampen theorem, and, if possible, to calculate the fundamental group directly from our calculation of $\Pi(X).$
|
I confess, in 1965 or so, I first thought that the version of the SvKT (Seifert-van Kampen Theorem) for the fundamental groupoid enabled us to get rid of base points. But then I wanted to calculate the fundamental group of the circle, and gradually realised that we needed $\pi_1(X,A)$, the fundamental groupoid on a set $A$ of base points chosen according to the geometry,
and for which covering space methods are not ideal. In fact one needs combinatorics and combinatorial group(oid) theory to calculate individual $\pi_1(X,a)$ from $\pi_1(X,A)$ . See my book Topology and Groupoids and also the classic 1971 book (downloadable) by Philip Higgins, Categories and Groupoids . Groupoids model homotopy 1-types. So one first determines the 1-type before calculating an individual fundamental group. This idea is nicely modelled in higher dimensions: one can calculate some 2-types and higher types as "big" algebraic objects inside which are the homotopy groups which one might want. The methods are explained more in a talk given in Paris on June 5, 2014, at the IHP, available on my preprint page . As Mariano points out, this has been discussed elsewhere on stackexchange and mathoverflow. July 14: The most general version of the SvKT is given in [41] (downloadable) on my publication list, R. Brown and A. Razak, `A van Kampen theorem for unions of
non-connected spaces'', Archiv. Math. 42 (1984) 85-88, in the form of a coequaliser statement when given an open cover and a set $A$ (of ``base points'') which meets each path component of each 1-, 2-, and 3-fold intersection of the sets of the cover. The style of proof goes back to Crowell's original version, and has the advantage of generalising to higher dimensions. For example, in [32] R. Brown and P.J. Higgins, ``Colimit theorems for relative homotopy
groups'', J. Pure Appl. Algebra 22 (1981) 11-41. The retraction from the version for the full fundamental groupoid, $A=X$ , to this version is quite difficult to manage and is done in May's "Concise..." book, without the refinement on the conditions, but I feel is the wrong way to go, though it is quite elegant for the pushout version in dimension 1. July 17, 2014: Actually I have missed out on three reasons why one is interested in the fundamental groupoid $\pi_1 X$ . The notion of fibration of groupoids is relevant to topology particularly in construction of operations of groupoids on homotopy sets, and exact sequences. If $p: E \to B$ is a fibration of spaces then $\pi_1 p: \pi_1 E \to \pi_1 B$ is a fibration of groupoids. This is exploited in Chapter 7 of Topology and Groupoids . See for example arXiv:1207.6404 for other current uses of the algebra of groupoids, and in particular fibrations. Similarly if $p: E \to B$ is a covering map of spaces then on applying $\pi_1$ we get a covering morphism of groupoids . Thus a map is modelled by a morphism , and this often makes the theory easier to follow (IMHO!), particularly with regard to questions of lifting maps. See Chapter 10 of T&G . If $G$ is a (discrete) group acting on a space $X$ then it also acts on the fundamental groupoid $\pi_1 X$ . So we have not only orbit spaces $X/G$ but also orbit groupoids $(\pi_1 X)/\!/G$ . There is a canonical morphism $(\pi_1 X)/\!/G \to \pi_1(X/G)$ and there are useful conditions which ensure this is an isomorphism, e.g. $X$ is Hausdorff, has a universal cover, and the action is properly discontinuous. See Chapter 11 of T&G . One example given there is the cyclic group $Z_2$ acting on $X \times X$ whose orbit space is the symmetric square of $X$ . Under useful conditions, its fundamental group is that of $X$ made abelian. It would be good to see lots more examples. Aug 17, 2016: I can now refer to my answer to my own mathoverflow question on the relation of the van Kampen Theorem to the notion of descent. Oct 17, 2016: I should add that a whole area of research into the use of strict higher groupoids in algebraic topology, one part of which is described in the book Nonabelian Algebraic Topology , (EMS 2011, 703 pages), arose out of seeking generalisations to higher dimensions of the use of the fundamental groupoid. December 23, 2016 It may be useful to point out that a new volume by Bourbaki "Topologie alg'ebrique" Ch 1-4, (Springer) 2016, uses the fundamental groupoid extensively, and relates its use to descent theory. It does have results on orbit spaces, but no example of applications. It does not use the fundamental groupoid on a set of base points. December 9, 2019 It should be remembered that homotopy groups of a pointed space were introduced at the 1932 ICM at Zurich, by E. Cech, but the idea was not welcomed because of their abelian nature, so that they did not seem satisfactory higher dimensional versions of the fundamental group. The idea was taken up by Hurewicz, and the fascination and utility of homotopy groups led to the idea of a nonabelian version being regarded as a mirage, though the early homotopy theorists were fascinated by the action of the fundamental group on the higher homotopy groups (a comment of J,H.C. Whitehead, 1958). We now know that you can construct higher analogues of the fundamental groupoid for certain structured spaces , e.g. filtered spaces and $n$ -cubes of pointed spaces. For the filtered case, see this 2011 book . 
Note that the fascination and difficulty of the study of higher homotopy groups may, since these groups are defined only for pointed spaces, have deterred people from considering the possible uses of the many-pointed case. Also the use of groupoids in algebraic topology has been in the past dismissed by many. Nonetheless, there is the basic fact of life (to use a term of JHCW) that while "group objects in the category of groups are abelian groups", that is not so if the word "group" is replaced by "groupoid". In view of the importance of group theory in maths and science, it is reasonable to ask about the potential significance of that basic fact of life. The questioner asks: "explicitly describing a groupoid as a colimit of "simpler" groupoids is something that it is not clear at all for me". This is answered in Appendix B of Nonabelian Algebraic Topology. More information on the history of fundamental groupoids and higher homotopy groupoids is in the Open Access article available from its link Mathematics Intelligencer. Can someone explain why the more general theorem is NOT usually given?
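As a concrete illustration of the kind of calculation asked for (the standard first example, in the spirit of Topology and Groupoids): cover $S^1$ by two open arcs $U$ and $V$, each simply connected, whose intersection has two contractible components, and let $A=\{a,b\}$ contain one point from each component. The groupoid Seifert-van Kampen theorem exhibits $\pi_1(S^1,A)$ as the pushout of
$$\pi_1(U,A)\longleftarrow \pi_1(U\cap V,A)\longrightarrow \pi_1(V,A),$$
where $\pi_1(U\cap V,A)$ is the discrete groupoid on $\{a,b\}$ and $\pi_1(U,A)\cong\pi_1(V,A)\cong\mathcal{I}$, the groupoid with objects $a,b$ and exactly one arrow between any two objects. The pushout is the free groupoid on the graph with vertices $a,b$ and two edges from $a$ to $b$; its vertex group at $a$ is free on one generator (one edge composed with the inverse of the other), recovering $\pi_1(S^1,a)\cong\mathbb{Z}$ entirely within the world of groupoids.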
|
{
"source": [
"https://mathoverflow.net/questions/175923",
"https://mathoverflow.net",
"https://mathoverflow.net/users/53100/"
]
}
|
176,425 |
Archimedes (ca. 287-212BC) described what are now known as the 13 Archimedean solids in a lost work, later mentioned by Pappus.
But it awaited Kepler (1619) for the 13 semiregular polyhedra to be
reconstructed. (Image from tess-elation.co.uk/johannes-kepler .) So there is a sense in which a piece of mathematics was "lost" for 1800 years before it was "rediscovered." Q . I am interested to learn of other instances of mathematical results or insights that were known to at least one person, were essentially correct, but were lost (or never known to any but that one person), and only rediscovered later. 1800 years is surely extreme, but 50 or even 20 years is a long time
in the progress of modern mathematics. Because I am interested in how loss/rediscovery
might shed light on the inevitability of mathematical
ideas,
I would say that Ramanujan's Lost Notebook does not speak
to the same issue, as the rediscovery required locating his
lost "notebook" and interpreting it, as opposed to independent
rediscovery of his formulas.
|
Bernhard Bolzano .... ( interesting reading ) Much of his work was unpublished until much later (for reasons see the link), thus remaining largely unknown. For example, a theorem of Weierstrass is now known as the "Bolzano-Weierstrass theorem", acknowledging that Bolzano had proved it previously. He anticipated Cantor and Dedekind in work on doing calculus without infinitesimals. His example of a continuous nowhere-differentiable function is in a manuscript from 1830, but only published in 1930.
|
{
"source": [
"https://mathoverflow.net/questions/176425",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
176,472 |
The recent talks of Voevodsky (for example, http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf ), which describe subtle errors in proofs by him as well as others, as well as the famous essay by Jaffe and Quinn ( http://www.ams.org/journals/bull/1993-29-01/S0273-0979-1993-00413-0/ ) and responses to it ( http://www.ams.org/journals/bull/1994-30-02/S0273-0979-1994-00503-8/ ), raise for me the following question: What are some explicit examples of wrong or non-rigorous proofs that did damage to mathematics or some significant part of it? Famous examples of non-rigorous proofs include Newton's development of calculus and the latter stages of the Italian school in algebraic geometry. Although these caused a lot of dismay and consternation, my impression is that they also inspired a lot of new work. Is it wrong for me to view it this way? In particular, I'm told that people proved false theorems using Newton's approach to calculus. What are some examples of this and what damage did they do?
|
Since the question is specifically about damage: I think that what really causes damage to a mathematical area is when an important result is claimed by someone prominent in the field, but the proof is never completely written.
Younger researchers are then likely to spend a lot of time and energy "cleaning up the mess", for little credit. Things are even worse when there is some freedom of interpretation of what might have been proven.
A younger researcher might want to use the announced result for some other purpose, but they might use a version of the theorem that ends up not being the one that got proved. When a proof is (widely) accepted to be wrong or non-rigorous, or when someone retracts the claim of having proved a given result, that's when things are getting better for a field.
|
{
"source": [
"https://mathoverflow.net/questions/176472",
"https://mathoverflow.net",
"https://mathoverflow.net/users/613/"
]
}
|
176,629 |
Let $M$ be a complete finite volume Riemannian manifold and $\gamma : \mathbb{R}^{\geq 0} \to M$ a geodesic. Suppose that $\mathrm{im}(\gamma)$ is dense. Is it equidistributed in the Riemannian measure? That is, does
$$
\lim_{T \to +\infty} \frac{1}{T} \int_0^T f(\gamma(t)) \, dt = \frac{1}{\mathrm{vol}(M)} \int_M f \, \mathrm{d vol}
$$
for every $f \in C_0(M)$? [False in general; true for Nilmanifolds. True a.e. in negative curvature, where the geodesic flow is ergodic.] Let now $N \subset M$ be an (immersed) submanifold and $\gamma$ a geodesic of $M$ which is contained densely in $N$. Is the submanifold $N \subset M$ totally geodesic? [False in general, though true for some variants in constant negative curvature. But what if "totally geodesic" is weakened to "minimal"?] Added. Asaf's answer nonetheless begs a follow-up question to 1: (Revised). If there is a dense geodesic, must there also be an equidistributed one? Could it in fact be that almost every geodesic is then equidistributed? Does a single dense geodesic imply ergodic geodesic flow? And in particular: does one dense geodesic imply almost all geodesics dense? [A similar revision of 2 would instead involve the condition that almost every geodesic of $M$ that is tangent to $N$ at some point is contained in $N$; but then it should follow trivially (I think) that $N$ is totally geodesic.] Note: There is an analogy with the equidistribution and Manin-Mumford theorems, due to Szpiro, Ullmo, and Zhang, for torsion points in abelian varieties $A/\bar{\mathbb{Q}}$: For a sequence of torsion points which is eventually outside of every torsion translate of an abelian subvariety, the Dirac masses at the Galois orbits converge to the normalized Haar measure on $A(\mathbb{C})$ (where an embedding $\bar{\mathbb{Q}} \hookrightarrow \mathbb{C}$ has been fixed). Here, I would be tempted to think of a geodesic as corresponding to a Galois orbit of torsion points (either minimizes an energy functional -- or a canonical height); and of a totally geodesic subvariety as corresponding to a Galois orbit of a translate of an abelian subvariety by a torsion point (note that in the basic case of a flat torus, the totally geodesic submanifolds are precisely the subtori). The analogy is probably only superficial, but I thought it could be worth pointing out (if only because it led me to asking this question). Added later. One more (final) question along the line of 2. In algebraic geometry, we have the following general fact: For $L$ a nef line bundle on a projective variety $X$, if $\deg_LC =0$ for a Zariski-dense set of curves $C \subset X$, then $\deg_LX = 0$. (Nef = non-negative intersection numbers, i.e. non-negative on every curve). For if $\deg_LX > 0$, Riemann-Roch and the almost vanishing of the higher cohomology of powers of nef line bundles imply that $L$ is big, hence a power of $L$ is effective. We may also do this in an arithmetic setting. In the analogy of the preceding note which led me to consider totally geodesic submanifolds, I was misled by the Manin-Mumford theorem, which is specific to commutative group varieties and fails even for algebraic dynamical systems. Instead, subvarieties of minimal height ought to be analogous to minimal immersed submanifolds: the images of harmonic isometric immersions (which include totally geodesic ones as a particular case, and coincide with the geodesics in dimension one). Considering the previous paragraph, then, does the following question make any sense: If the closure of a minimal submanifold happens to be an immersed submanifold, is this submanifold still minimal?
In the same vein: If we have a sequence of complex algebraic curves in $\mathbb{CP}^n$ (images of non-constant holomorphic maps from compact Riemann surfaces) whose supports converge to a compact real-analytic immersed submanifold $M \subset \mathbb{CP}^n$, must $M$ be a complex (algebraic) submanifold?
|
The first question is false as stated.
By Artin's encoding, geodesics on $SL_{2}(\mathbb{R})/SL_{2}(\mathbb{Z})$ correspond to continued fractions, and the geodesic flow corresponds to the shift.
It's easy to find one continued fraction where you'll see any given prefix (hence dense), but you won't be equidistributed (say think about larger and larger blocks composed out of $1$'s). The situation is the same even for cocompact (hyperbolic) homogeneous spaces, and relies on the fact that the corresponding dynamical system is a Bernoulli system; see for example the survey by Katok in the Clay Pisa proceedings for more information about the encoding. In the case where the manifold is a Nilmanifold, the answer is indeed true, which follows from say Furstenberg's theorem about skew-products (when you use both the topological version and the ergodic version).
Finer (quantitative) results are probably attained by Green-Tao (see Tao's post about the Nilmanifold version of Ratner's theorem).
In the toral case, this boils down to merely Fourier series computations and Weyl's equidistribution criterion or so. In the higher rank (semisimple) case, things get more complicated, as one might think about multi-parameter actions, and then the measure-classification theorem by Lindenstrauss kicks in, but it was observed by Furstenberg in the $60$'s (and maybe before that) that even for multi-parameter actions, there might be dense but not equidistributed orbits.
Maybe the easiest toy model to think of is the multiplicative action of $\langle 2,3\rangle$ as a semi-group on the torus $\mathbb{R}/\mathbb{Z}$, starting at say a Liouville number for base $6$. This action is some $S$-adic analogue of a higher-rank multi-parameter diagonalizable action. Edit - to address the revised question, here the geometrical settings are being addressed more intimately.
In the case of homogeneous spaces ($G/\Gamma$, or you can take the appropriate locally symmetric space as well), where $G$ is semi-simple say, the geodesic flow is ergodic (it follows for example from the Howe-Moore theorem, or from the Bernoullicity theorem I've mentioned above). As a result, a simple application of the pointwise ergodic theorem will tell you that for almost every point and every direction (the appropriate measure here is the Liouville measure on the unit tangent bundle, which is really where the geodesic flow "lives"), the orbit is equidistributed.
For the variable curvature case, as long as some natural conditions are met (say an upper bound on the sectional curvature making it negative everywhere), the dynamical picture is pretty much the same (but the proofs are significantly more involved, as you don't have rep. theory at hand).
Again in the Nilmanifold case, the situation is much more simple, the toy model for that is tori, where the question of rationality implies both density and equidistribution. I will address the Andre-Oort question in the comments, as I'm not an expert on this subject.
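To make the continued-fraction example from the first paragraph explicit (a standard construction): enumerate all finite strings of positive integers as $w_1, w_2, \dots$ and let the partial quotients of $\alpha$ be $w_1$, then $N_1$ ones, then $w_2$, then $N_2$ ones, and so on, with $N_k$ growing fast enough that the quotients equal to $1$ have density one. Every finite block of quotients occurs, so under Artin's coding the corresponding geodesic is dense; but equidistribution with respect to the Liouville measure would force the asymptotic frequency of the partial quotient $1$ to be the Gauss-measure value $\log_2(4/3)\approx 0.415$, so this geodesic is dense without being equidistributed.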
|
{
"source": [
"https://mathoverflow.net/questions/176629",
"https://mathoverflow.net",
"https://mathoverflow.net/users/26522/"
]
}
|
176,862 |
I've been calculating some Jones polynomials lately and I was just curious if there was a "physical" (or, rather, geometric) meaning to evaluating the Jones polynomial at a particular value of $t$. For example, if I take the Jones polynomial for the (right) trefoil knot, I have $J(t) = t + t^3 - t^4$. Is there some way I can interpret $J(0)$? $J(1)$? I understand that the Jones polynomial is a Laurent polynomial, so I don't expect $J(0)$ to make sense for a lot of knots (for example the left trefoil has $J(t) = t^{-1} + t^{-3} - t^{-4}$), but I thought it was worth asking. I also know that $J(t^{-1})$ gives the Jones polynomial of the mirror image knot. Is there a way to interpret $J(-t)$? $J(t^2)$? How about $J(t) = 0$? Edit to clarify what I mean when I say "physical meaning":
Since the Jones polynomial is a link invariant, $J(0)$ is also a link invariant (if it exists). Does this invariant correspond to a property of the knot that you can visualise, such as, say, the linking number or the crossing number?
|
The evaluation of the Jones polynomial at $e^{i\pi/3}$ is related to the number of 3-colourings $tri(K)$ of $K$ (see also here ) as well as to the topology of the branched double cover $\Sigma(K)$: $$tri(K) = 3\left|V^2_K(e^{i\pi/3})\right| = 3^{\dim H_1(\Sigma(K);\mathbb{Z}/3\mathbb{Z})+1}$$ This was proved by Przytycki in this paper (Theorem 1.13) and Lickorish-Millet here . I don't know whether similar relations hold for more general Fox colourings. This is not really an answer to the precise questions you're asking, but it's a pretty result. UPDATE (Aug 19, 2014): I have found some more references and some more info in this problem list : the third remark on page 383 (page 11 of the PDF) covers what was known in 2004. In particular, it says that computing $V_K(\omega)$ is $\#P$-hard (see Neil Hoffman's comment below) unless $\omega$ is a power of $e^{i\pi/3}$ or $\omega = \pm i$, and it gives the interpretation for $V_K(\omega)$ in the four remaining cases (the first two have been mentioned by Jim Conant in the comments above).
If $L$ is a link, I will call $\ell$ the number of components, and $\Sigma(L)$ the double cover of $S^3$ branched over all components of $L$. $V_L(1) = (-2)^{\ell - 1}$; for a knot, $V_K(1) = 1$; $\left|V_L(-1)\right| = \left|H_1(\Sigma(L))\right|$ if $H_1(\Sigma(L))$ is torsion, and is 0 otherwise; for a knot, $\left|V_K(-1)\right| = \left|\det(K)\right|$; $V_L(i) = (-\sqrt2)^{\ell-1}(-1)^{\mathrm{Arf}(L)}$ if $L$ is a proper link (i.e. ${\rm lk}(K,L\setminus K)$ is even for every component $K$ of $L$), and vanishes otherwise ( Murakami ); notice that the Arf invariant is defined only for proper links. $V_L(e^{2i\pi/3}) = 1$.
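As a sanity check, one can plug the right trefoil's polynomial $J(t)=t+t^3-t^4$ from the question into these special values (reading $V^2_K$ as $|V_K|^2$); a small Python sketch:

```python
import cmath

def J(t):            # Jones polynomial of the right-handed trefoil, as in the question
    return t + t**3 - t**4

print(J(1))                                              # V(1) = 1 for any knot
print(abs(J(-1)))                                        # |V(-1)| = |det K| = 3 for the trefoil
print(J(1j))                                             # V(i) = (-1)^Arf = -1 (the trefoil has Arf invariant 1)
print(round(3 * abs(J(cmath.exp(1j * cmath.pi / 3)))**2))  # 9 = number of 3-colourings of the trefoil
print(J(cmath.exp(2j * cmath.pi / 3)))                   # V(e^{2 pi i/3}) = 1 (up to rounding)
```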
|
{
"source": [
"https://mathoverflow.net/questions/176862",
"https://mathoverflow.net",
"https://mathoverflow.net/users/47547/"
]
}
|
177,208 |
Maybe this question is not suitable for here, but I don't think I would receive a satisfactory answer in Math StackExchange.
I could never understand the intuition behind polarization of abelian varieties and how it arises. I know that there is an analogy roughly with a prequantum line bundle, but I think the concept of polarization of abelian varieties came first. Furthermore, I know that an embedding in the projective space is equivalent to a Riemann form (in the complex analytic case). Why the word "polarization"? Is there any partition of the abelian variety (as in the case of a geometric quantization, for instance)? How can I think geometrically (in the lattice) about fixing a polarization? Thanks in advance.
|
Weil introduced the term "polarization" in connection with his study of abelian varieties with complex multiplication. His definition is slightly different from what one sees today; one might call it a polarization up to isogeny instead of a polarization. One can find a discussion in Weil's article "On the theory of complex multiplication" ([1955d] in volume 2 of his Oeuvres Scientifiques ). There are a few notes on the context of that article (and its relation to the work of Shimura and Taniyama) at the end of the volume. Weil says in the article itself (and again in the notes) that "the word 'polarization' is chosen so as to suggest an analogy with the concept of 'oriented manifold' in topology." As Weil mentions in the article, Matsusaka (who might be described as a student of Weil) also did some important early work on polarized varieties apparently around the time that Weil invited him to Chicago (1954). Kollár, a student of Matsusaka, wrote a nice memorial article for the Notices in 2006, which discusses the work of Matsusaka, especially in relation to the theory of moduli. One can find Matsusaka's early work on polarizations in his "Polarized varieties, fields of moduli, and generalized Kummer varieties of polarized abelian varieties" in American J. of Math. vol. 80 No. 1, 1958. The introduction starts as follows: "In studying theta-functions and abelian functions, we sometimes fix the set of scalar multiples of a principal matrix attached to the given Riemann Matrix. This implies that we fix one divisor class and scalar multiples of it with respect to homology on the corresponding complex torus. In this paper we shall introduce the corresponding notion in the abstract case, not only to abelian varieties, but also to arbitrary varieties as done in Weil [22]." (Reference [22] is Weil's paper on complex multiplication mentioned above.) For independent confirmation of Weil and Matsusaka as the origin of the notion of polarization, one may see various articles of Shimura from around the same time, where he repeatedly cites these two. For example, at the end of "Modules des variétés abéliennes polarisées et fonctions modulaires," he writes, "Les notions d'une variété polarisée, d'un corps du module et d'une variété de Kummer sont dues à Weil et Matsusaka; ce dernier a traité le cas général de variétés quelconques." For intuition about polarizations, it may be helpful to think like this: the category of abelian varieties over a field $k$ (say up to isogeny) acts like a full subcategory of the category of representations of a group $G$ on $\mathbb{Q}$-vector spaces. (To show the shift in thinking we can write $V(A)$ when we think of $A$ as like a vector space.) The category of abelian varieties is too small to admit anything like the tensor products that exist in the larger category of $G$-representations, but one can see some multilinear structures. In particular, the dual abelian variety is like a dual representation---or at least a dual representation twisted by a character. A divisorial correspondence between abelian varieties $A$ and $B$ (a sufficiently rigidified line bundle on $A\times_k B$---a biextension of $(A,B)$ by $\mathbf{G}_m$) is like a $G$-invariant bilinear form on $V(A)\times V(B)$. A polarization on $A$ is a divisorial correspondence on $A\times_k A$ satisfying a symmetry and positivity condition (ampleness of pullback along the diagonal), which is like a $G$-invariant bilinear form on $V(A)\times V(A)$ satisfying a symmetry and positivity condition. 
There are some subtleties in the translation; for example, the symmetry on side of $A$ corresponds to antisymmetry on the side of $V(A)$. We know from the early pages of books on representation theory that $G$-invariant bilinear forms satisfying positivity conditions are useful---they give us complete reducibility, for example---and Weil used polarizations in a similar way in [1955d]. The fact that abelian varieties are polarizable corresponds to something like the fact that $G$ is a compact group, but the aforementioned subtleties mean that statement is not quite right. These vague remarks can be made precise when $k = \mathbb{C}$ in the way that Francesco Polizzi suggests: take $V(A)$ to be $H_1(A^{\rm an},\mathbb{Q})$ equipped with its standard Hodge structure, and the above remarks correspond to some of the theory of theta functions (which is what Matsusaka had in mind in his introduction). A final remark: I've always been a bit suspicious of how Weil justifies the polarization terminology, since there is a notion in classical projective geometry with a similar flavor called a "polarity." I wonder if Weil also had this classical terminology in mind when coining the term "polarization." (A polarity on the projective space associated to a vector space $V/k$ is a geometric structure related to a non-degenerate symmetric bilinear form $B:V\times V\to k$. A polarization on an abelian variety $A/\mathbb{C}$ is a geometric structure related to a certain type of alternating bilinear form on the period lattice of $A$. Both geometric structures are symmetric divisorial correspondences.)
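To tie this back to the question's "how can I think about it in the lattice": over $\mathbb{C}$, writing an abelian variety as $X=V/\Lambda$, a polarization is the same thing as a Riemann form, i.e. (with the usual sign conventions) an alternating form $E\colon\Lambda\times\Lambda\to\mathbb{Z}$ whose $\mathbb{R}$-bilinear extension satisfies $E(ix,iy)=E(x,y)$ and for which the symmetric form $(x,y)\mapsto E(ix,y)$ is positive definite; equivalently, a positive definite Hermitian form $H$ on $V$ with $\operatorname{Im}H$ integer-valued on $\Lambda\times\Lambda$. This is the "bilinear form satisfying a symmetry and positivity condition" of the remarks above, made completely explicit on the period lattice.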
|
{
"source": [
"https://mathoverflow.net/questions/177208",
"https://mathoverflow.net",
"https://mathoverflow.net/users/40883/"
]
}
|
177,234 |
In number theory there is often an analogue between statements which holds over a number field (that is, a finite field extension $K/\mathbb{Q}$) and function fields (that is, finite extensions of the form $K/\mathbb{F}_q(t)$ where $q = p^k$ for some prime $p$ and $k \geq 1$). One of the most famous examples of such an analogue is the Riemann hypothesis. Weil showed that the Riemann Hypothesis holds for the function field case. However, progress on the function field case has not shed much, if any, light on the corresponding case for number fields. An example of where the analogy breaks down (conjecturally) is the Brauer-Manin obstruction. For hypersurfaces defined over $\mathbb{P}^n(K)$ for some number field $K$, it is expected that most hypersurfaces $X$ of degree $d \geq n+1$ will be of general type, and hence neither satisfy the Hasse principle nor have its failure to satisfy the Hasse principle accounted for by the Brauer-Manin obstruction. Harari and Voloch showed that for function fields of positive characteristic, this is not the case: indeed in this paper ( http://www.ma.utexas.edu/users/voloch/Preprints/carpobs3.pdf ) they showed that the Brauer-Manin obstruction is the only reason Hasse principle can fail in this case. Another case where the analogy is not convincing is ranks of elliptic curves. It is known that elliptic curves over a function field of positive characteristic may have arbitrarily large rank. However, this question is not known even conjecturally in the number field case. Indeed it seems that many experts disagree on this question. At a recent summer school on counting arithmetic objects in Montreal, Bjorn Poonen gave a take on a heuristic suggesting that elliptic curves over number fields should have bounded rank. Andrew Granville, one of the organizers, agrees with this assertion. However, other experts present including Manjul Bhargava disagreed. My question is, what are some other situations where one expects genuinely different behavior between the function field setting and the number field setting?
|
Instead of the final results, let me focus on the underlying reasons why number fields and function fields are different. A. Every function field has subfields of arbitrarily large index (e.g. by taking the field generated by a rational function of large degree). But each number field has a subfield of maximal (finite) index. This actually explains the discrepancy in the bounded ranks heuristic. The heuristic, being probabilistic, is not expected to apply to a special family of curves that has unusually high rank for a good reason. For instance, if you take an extension of $\mathbb Q$ with Galois group $(\mathbb Z/2)^n$ , by looking at root numbers you can see that the average rank of curves in the family is at least $2^{n-1}$ . The constructions of curves of large rank over function fields all, I believe, involve a similar pullback - but the pullback is from a subfield of $\mathbb F_q(t)$ to $\mathbb F_q(t)$ . B. There exist isotrivial objects over function fields. These have properties that cannot occur over number fields, because the fact that each prime number is different places a lower bound on how similar the reductions of a variety modulo different places can be. For instance, there are no elliptic curves with good reduction at every place of $\mathbb Q$ , and no non-isotrivial elliptic curves with good reduction at every place of $\mathbb F_q(t)$ , but there are isotrivial examples. C. Numerical statements tend to be much simpler over function fields. Szpiro's conjecture over a function field has the form $\Delta= O(N^6)$ , not $\Delta= O(N^{6+\epsilon})$ as is known to be best possible over number fields. (This was changed from the ABC conjecture to answer Vesselin Dimitrov's objection about the Vojta conjecture being a more natural statement than the ABC conjecture in the function field setting - I think Szpiro's conjecture is also a very natural statement). As Vesselin points out, this might also be related to B, and the fact that the moduli space of elliptic curves is isotrivial. There exist constructions hitting simple numerical lower bounds, such as extensions of $\mathbb F_q(t)$ of degree $n$ and Galois group $S_n$ whose conductor exactly reaches the lower bound you get by looking at L functions, $q^{2(n-1)}$ . One can get similar lower bounds by looking at L functions over number fields, but it is not at all obvious that there exist extensions that reach them. D. The zeta function has infinitely many poles over function fields, but only one pole over a number field. This makes the ideal-counting and prime-counting functions both logarithmically periodic - i.e. all polynomials, and all prime polynomials, have norm $q^n$ for some $n$ , not smoothly distributed like the sizes of numbers and prime numbers are. One usually deals with this by considering polynomial of fixed degree, and viewing that as the function field analogue of a large interval, but sometimes it recurs in ways that might be surprising. For instance, the error term in the formula for the number of squarefree polynomials of degree $n$ is $2$ -periodic in $n$ . This sort of makes sense because squaring is $2$ -periodic also. E. The zeta function has finite complexity over function fields, but infinite complexity over number fields. The obvious aspect of this is that the zeros are periodic. Usually one accounts for this by, if considering a problem that involves many zeros, taking the large $g$ limit. 
However another facet is that each zero is an object with a simple description, being the log of an algebraic number. This means that phenomena (like two zeros being equal, or a linear dependence among the zeros) that would be infinitely improbable over number fields, and thus we expect that they never happen, unless there is a good reason (like the same L function appearing twice in the product for a Dedekind zeta function, forcing some zeros to occur with multiplicity), are only finitely improbable and thus we expect them to happen occasionally. So the linear independence conjecture, for instance, is known to be false over some function fields, but is known to hold for randomly chosen function fields. F. There is no Archimedean place and no $p$ -adic Hodge theory over function fields. This causes a number of statements to be simpler - for instance, the analogues of the Fontaine-Mazur conjecture and Langlands conjectures are much simpler. G. $p$ -adic properties, like the Newton polygon of Frobenius, behave much better over function fields, because you don't have to keep changing $p$ . For instance, a non-isotrivial elliptic curve over a function field has only finitely many supersingular primes. H. Over function fields, the Mobius function $\mu(f + g^p)$ is proportional to a quadratic Dirichlet character in $g$ modulo the derivative of $f$ . The set of such sums behaves like a short interval / arithmetic progression / Bohr set, in addition to being the set of values of a polynomial, but in none of these special sets is the Mobius function expected to behave like a Dirichlet character over the integers. This underlies the deviation in the Bateman-Horn conjecture mentioned in Lior Bary-Soroker's answer. It also was exploited in recent work of Mark Shusterman (EDIT: and myself) . I. Additive combinatorics seemingly behaves much differently over function fields. Work of Ellenberg and Gijswijt showed that the maximum size of a set of polynomials of degree $<d$ free of three-term arithmetic progressions has size at most $q^d / \left(q^d\right)^\epsilon$ for some $\epsilon>0$ depending on the characteristic. On the other hand, over the integers there are examples due to Behrend of subsets of $\{1,2,\dots,N\}$ free of three-term progressions of size at least $N/ e^ { O(\sqrt \log N)}$ . Because $N$ is the analogue of $q^d$ , the upper bound in the function field case is much smaller than the lower bound in the number field case, so whatever the true maximum size in each case, the two must be very different.
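To play with the kind of exact counts alluded to in items C and D, here is a small brute-force sketch of my own (purely illustrative; it does not capture the finer periodicity phenomena mentioned above): it counts squarefree monic polynomials over $\mathbb F_3$ by checking coprimality with the derivative and compares with the exact value $q^n-q^{n-1}$ valid for every $n\ge 2$.

```python
# Count squarefree monic polynomials of degree n over F_3 by brute force.
# A polynomial is squarefree iff gcd(p, p') is a nonzero constant.
# Polynomials are coefficient lists, lowest degree first, entries mod q.
import itertools

q = 3

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_mod(a, b):
    # Remainder of a modulo b over F_q (b is nonzero).
    a = trim(a[:])
    inv_lead = pow(b[-1], q - 2, q)
    while len(a) >= len(b):
        shift = len(a) - len(b)
        factor = (a[-1] * inv_lead) % q
        for i, c in enumerate(b):
            a[i + shift] = (a[i + shift] - factor * c) % q
        trim(a)
    return a

def poly_gcd(a, b):
    a, b = trim(a[:]), trim(b[:])
    while b:
        a, b = b, poly_mod(a, b)
    return a

def derivative(p):
    return trim([(i * c) % q for i, c in enumerate(p)][1:])

for n in range(1, 6):
    count = sum(
        1
        for coeffs in itertools.product(range(q), repeat=n)
        if len(poly_gcd(list(coeffs) + [1], derivative(list(coeffs) + [1]))) == 1
    )
    print(n, count, q**n - q**(n - 1))   # exact agreement for every n >= 2
```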
|
{
"source": [
"https://mathoverflow.net/questions/177234",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10898/"
]
}
|
177,481 |
The continued fraction
$$[1;1,2,3,4,5,\dots]=1+\cfrac{1}{1+\cfrac{1}{2+\cdots}}, $$ for instance, is known explicitly as a ratio of Bessel function values and is (I believe - SS) known to be transcendental. Similarly, $[1,2,2^2,2^3,2^4,2^5,\dots] $ is surely transcendental (and likely related to Liouville numbers). Are there any explicitly known algebraic numbers with unbounded continued fraction coefficients? (Of course, such a number must have degree greater than $2$, by all the usual theorems on continued fraction representations of quadratics.) Any references would be greatly appreciated.
|
As you indicate, real algebraic numbers of degree $\leq 2$ have this property in view of Lagrange's classical result characterizing them by the eventual periodicity of the continued fractions expansion. It may be useful to know (if you don't already) that $\alpha \in \mathbb{R}$ having bounded continued fractions coefficients is equivalent to the sharpness of Dirichlet's approximation theorem: $|\alpha - p/q| > cq^{-2}$ for all but finitely many $p/q \in \mathbb{Q}$ . For algebraic numbers this means that the exponent $2+\varepsilon$ in Roth's theorem can be reduced to $2$ . For quadratic irrationalities this holds with the uniform constant $c = 1/\sqrt{5} - \epsilon$ ; google "Lagrange spectrum." As far as I know it is widely believed that this may not happen for any algebraic number of degree $> 2$ , although Serge Lang has suggested that a milder improvement in Roth's theorem, whereby $q^{2+\varepsilon}$ gets replaced with $(q\log{q})^2$ , is always possible. (This is wide open; there is an analogous statement known to hold true in Nevanlinna theory). Certainly no algebraic number of degree $> 2$ is known to have unbounded partial quotients, but worse, it does not appear to be even known that there exists an algebraic number having this property. A reference where this appears explicitly in print is p. 366 of Hindry and Silverman's Diophantine Geometry: An Introduction . Added. There is also a rather interesting variant of this question for $\mathbb{Z}[i]$ -continued fractions expansions of complex algebraic numbers. As is typical in diophantine analysis, both Roth's theorem and the continued fractions algorithm extend to the relative setting over number fields other than $\mathbb{Q}$ ; and to a large extent, so does the relationship between the two. To be concrete, consider rational approximations over the Gaussian field $\mathbb{Q}(i)$ . The relative Roth theorem over $\mathbb{Q}(i)$ states: If $\alpha \in \mathbb{C}$ is algebraic, then for every $\varepsilon > 0$ there are only finitely many pairs $p,q \in \mathbb{Z}[i]$ in the Gaussian lattice satisfying $|\alpha - p/q| < |q|^{-2-\varepsilon}$ . [For the general statement over any number field see Thm. 6.2.3 of Bombieri and Gubler's Heights in Diophantine Geometry .] Likewise, Hurwitz has attached to a complex number a canonical continued fractions expansion with entries in $\mathbb{Z}[i]$ , by using the same algorithm as over $\mathbb{Q}$ , but with the nearest rounding to the Gaussian lattice $\mathbb{Z}[i]$ (whereas over $\mathbb{Q}$ the conventional choice of rounding involves the floor function $\lfloor \cdot \rfloor$ instead of the nearest rounding to $\mathbb{Z}$ ). The trichotomy "rational-quadratic-higher degree" would appear to extend to the relative setting over this (or any other) number field $\mathbb{Q}(i)$ : the $\alpha \in \mathbb{C}$ with terminating expansion are of course precisely the numbers in $\mathbb{Q}(i)$ , and Hurwitz has shown that his expansion is eventually periodic if and only if $[\mathbb{Q}(\alpha,i):\mathbb{Q}(i)] \leq 2$ . We can ask, then, the same questions for Hurwitz's complex continued fractions. Rather surprisingly, there are algebraic numbers whose $\mathbb{Z}[i]$ -continued fractions expansion has bounded coefficients, and whose relative degree over $\mathbb{Q}(i)$ is $> 2$ ! One such number, due to D. Hensley [1, Ch. 5] in 2006, is $\sqrt{2} + i\sqrt{5}$ , of relative degree four. More generally, W. Bosma and D.
Gruenewald [3] have shown that a complex number has this property if the square of its modulus is a rational integer which is not a norm from $\mathbb{Z}[i]$ ; algebraic such examples thus include $\sqrt[m]{2} + i\sqrt{n - \sqrt[m]{4}}$ for all $n \equiv 3 \mod{4}$ and $m$ . (On the other hand, to my knowledge no particular algebraic number has been proven to have unbounded coefficients in its Hurwitz $\mathbb{Z}[i]$ -continued fractions expansion.) But now there is something even more curious about the implication of such examples for Roth's theorem over $\mathbb{Q}(i)$ (diophantine approximations by Gaussian numbers). I realize this just now after looking up Hensley's very interesting paper [2] (from 2006), and I wonder why this point isn't brought up in the literature on complex continued fractions. While the convergents $p_n/q_n \in \mathbb{Q}(i)$ in Hurwitz's expansion no longer exhaust all good $\mathbb{Q}(i)$ -rational approximations to $\alpha \in \mathbb{C}$ (as they do in the case over $\mathbb{Q}$ ), Theorem 1 in [2] shows that up to a multiplicative constant, they still give the best $\mathbb{Q}(i)$ -rational approximations. As a result, for those numbers (algebraic of arbitrarily high relative degree over $\mathbb{Q}(i)$ ), the exponent $2+\varepsilon$ in Roth's theorem rel. $\mathbb{Q}(i)$ (stated above) can be reduced to $2$ , and this is moreover effective: Consequence: If $\alpha \in \mathbb{C}$ has $|\alpha|^2 \in \mathbb{Q}$ a rational number not a norm from $\mathbb{Q}(i)$ [e.g., the rel. degree-four algebraic example $\sqrt{2} + i\sqrt{5}$ ], then there is an effective $c(\alpha) > 0$ such that $|\alpha - \beta| > cH(\beta)^{-2}$ for all $\beta \in \mathbb{Q}(i)$ . (Here, $H(\cdot)$ is the absolute multiplicative height on $\bar{\mathbb{Q}}$ ; for rational or imaginary quadratic integers $n$ , it coincides with $|n|$ ) Consider now Khintchine's principle according to which an algebraic number of degree $> 2$ should be generic. As almost every complex number $x \in \mathbb{C}$ has infinitely many $\mathbb{Q}(i)$ -rational approximants $\beta \in \mathbb{Q}(i)$ with $|x - \beta| < 1/(H(\beta)^2\log{H(\beta)})$ , the numbers $\alpha$ of the above form are not generic in this sense. As they contain algebraic numbers of arbitrarily high (though necessarily even) rel. degree over $\mathbb{Q}(i)$ , Khintchine's principle would appear to fail in the relative setting over $\mathbb{Q}(i)$ ! Nevertheless I think that the story is more interesting, as the above examples still resemble quadratic irrationalities in some vague sense. Perhaps, in the relative setting of diophantine approximations over a number field $K$ , we could salvage Khintchine's principle and the above trichotomy by enlarging the class of special algebraic numbers -- which a priori contain all numbers of rel. degree $\leq 2$ over $K$ . Allow me to record a few Problems. Might the numbers of degree $\leq 2$ over $\mathbb{Q}(i)$ (whose Hurwitz expansions are eventually periodic) and the algebraic numbers mentioned in the above "Consequence" (whose Hurwitz expansions are aperiodic yet have bounded coefficients) exhaust all the algebraic numbers for which the exponent $2+\varepsilon$ in Roth's theorem rel $\mathbb{Q}(i)$ may be reduced to $2$ ? Should we expect that all algebraic numbers not of this shape ought to satisfy Khintchine's principle rel $\mathbb{Q}(i)$ ? What are the special algebraic numbers in diophantine approximations rel a general given number field $K$ ? 
Finally, the same problem can be considered about $p$ -adic (and $S$ -adic) $K$ -rational approximations to algebraic numbers; I do not know if this has been done even for $K = \mathbb{Q}$ . References: [1] D. Hensley: Continued Fractions (World Scientific, Singapore, 2006). [2] D. Hensley: The Hurwitz complex continued fraction (2006): http://mosaic.math.tamu.edu/~dhensley/SanAntonioShort.pdf [3] W. Bosma, D. Gruenewald: Complex numbers with bounded partial quotients, J. Aust. Math. Soc. , vol. 93 (2012), pp. 9--20.
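For readers who want to experiment with the Hurwitz expansion discussed above, here is a minimal floating-point sketch of my own (the algorithm is simply: round to the nearest Gaussian integer and invert the remainder; nothing here is taken from [1]-[3], and double precision limits how many partial quotients can be trusted).

```python
# Hurwitz complex continued fraction: a_n = nearest Gaussian integer to z_n,
# z_{n+1} = 1/(z_n - a_n).  Floating-point only, so later terms degrade.
def nearest_gaussian_integer(z: complex) -> complex:
    return complex(round(z.real), round(z.imag))

def hurwitz_expansion(z: complex, n_terms: int = 25):
    quotients = []
    for _ in range(n_terms):
        a = nearest_gaussian_integer(z)
        quotients.append(a)
        r = z - a
        if abs(r) < 1e-12:          # expansion (numerically) terminates
            break
        z = 1 / r
    return quotients

alpha = 2 ** 0.5 + 1j * 5 ** 0.5     # Hensley's example, relative degree four
qs = hurwitz_expansion(alpha)
print(qs)
print("largest |a_n| seen:", max(abs(a) for a in qs))
```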
|
{
"source": [
"https://mathoverflow.net/questions/177481",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14024/"
]
}
|
177,759 |
Let $n$ be a natural number whose prime factorization is
$$n=\prod_{i=1}^{k}p_i^{\alpha_i} \; .$$
Define a function $g(n)$ as follows
$$g(n)=\sum_{i=1}^{k}p_i {\alpha_i} \;,$$
i.e., exponentiation is "demoted" to multiplication,
and multiplication is demoted to addition.
For example: $n=200=2^3 5^2$, $g(n) = 2 \cdot 3 + 5 \cdot 2 = 16$. Define $f(n)$ by iterating $g$ until a cycle is reached.
For example:
$n=154=2^1 7^1 11^1$,
$g(n)=20$,
$g^2(n)=g(20)=9$,
$g^3(n)=g(9)=6$,
$g^4(n)=g(6)=5$, and now $g^k(n)=5$ for $k \ge 4$. So $f(154)=5$. It is clear that every prime is a fixed point of $f(\;)$.
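To make the iteration concrete, here is a minimal Python sketch of $g$ and $f$ as defined above (sympy's `factorint` is used only as a convenient off-the-shelf factorization routine; any other would do):

```python
from sympy import factorint

def g(n: int) -> int:
    # Demote exponentiation to multiplication and multiplication to addition.
    return sum(p * a for p, a in factorint(n).items())

def f(n: int) -> int:
    # Iterate g until a value repeats; for this map that is a fixed point.
    seen = set()
    while n not in seen:
        seen.add(n)
        n = g(n)
    return n

print(f(154))   # 5, matching the worked example above
print(f(200))   # 200 -> 16 -> 8 -> 6 -> 5
```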
I believe that $n=4$ is the only composite fixed point of $f(\;)$. Q1 . Is it the case that $4$ is the only composite fixed point
of $f(\;)$, and that there are no cycles of length greater than $1$?
( Yes : See EmilJeřábek 's comment.) Q2 . Does every prime $p$ have an $n \neq p$ such that $f(n) = p$,
i.e., is every prime "reached" by $f(\;)$?
( Yes : See JeremyRouse 's answer.) There appear to be interesting patterns here. For example, it seems
that $f(n)=5$ is common.
( Indeed : See მამუკა ჯიბლაძე's graphical display.)
|
A way to get a non-trivial solution to $f(n) = p$ is that every odd number $\geq 7$ can be written as a sum of three primes (by Helfgott's recent work), so if $p \geq 7$ is prime, we can write $p = q + r + s$, and we have $g(qrs) = q + r + s = p$. (This is of course a bit overkill, we don't really need such a difficult result to see this.)
|
{
"source": [
"https://mathoverflow.net/questions/177759",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
177,774 |
I'm trying to find an algebraic curve that represents a specific Riemann surface and my question goes like this: Given divisors
$(\omega_1) = P_1 + 5 P_2 + 2 P_3,$
$(\omega_2) = 5 P_1 + P_2 + 2 P_3,$
$(\omega_3) = 2 P_1 + 2 P_2 + 4 P_3,$ is there a way one can come up with an algebraic equation $P(\omega_1, \omega_2, \omega_3) = 0$ with no further information? What I'm trying to do here is to paste 8 sheets of 3-punctured spheres. At each point $P_i$(or slit), the $j$-th sheet gets glued to $(j+ a_i)(mod \, 8)$-th sheet given $(a_1, a_2, a_3) = (1, 5, 2).$ $\omega_i$s are the holomorphic 1-forms on this surface and assuming that $P_1$ is sent to $\infty,$ $P_2$ to 0, and $P_3$ to 1, I end up with $(\omega_1^2 - \omega_2^2) \omega_1 \omega_2 = \omega_3^4.$ Though I was wondering if there is a way to find this when only the divisors $(\omega_i)$ are given.
|
|
{
"source": [
"https://mathoverflow.net/questions/177774",
"https://mathoverflow.net",
"https://mathoverflow.net/users/56739/"
]
}
|
178,139 |
I try to generate a lot of examples in my research to get a better feel for what I am doing. Sometimes, I generate a plot, or a figure, that really surprises me, and makes my research take an unexpected turn, or let me have a moment of enlightenment. For example, a hidden symmetry is revealed or a connection to another field becomes apparent. Question: Give an example of a picture from your research, description on how it was generated, and what insight it gave. I am especially interested in what techniques people use to make images, this is something that I find a bit lacking in most research articles.
From the answers to this question, I hope to learn some "standard" tricks/transformations one can do on data to reveal hidden structure. As an example, a couple of years ago, I studied asymptotics of (generalized) eigenvalues of non-square Toeplitz matrices. The following two pictures revealed a hidden connection to orthogonal polynomials in several variables, and a connection to Schur polynomials and representation theory.
Without these hints, I have no idea what would have happened.
Explanation: The deltoid picture is a 2-dimensional subspace of $\mathbb{C}^2$ where certain generalized eigenvalues for a simple, but large Toeplitz matrix appeared, so this is essentially solutions to a highly degenerate system of polynomial equations.
Using a certain map, these roots could be lifted to the hexagonal region, revealing a very structured pattern. This gave insight into what the limit density of the roots is.
These are essentially the roots of a 2d-analogue of Chebyshev polynomials, but I did not know that at the time. The subspace in $\mathbb{C}^2$ where the deltoid lives is quite special, and we could not explain this. A subsequent paper by a different author answered this question, which led to an analogue of being Hermitian for rectangular Toeplitz matrices. Perhaps you do not have a single picture; then you might want to illustrate a transformation that you test on data you generate. For example, every polynomial defines a coamoeba, by mapping roots $z_i$ to $\arg z_i$. This transformation sometimes reveals interesting structure, and it partially did in the example above. If you don't generate pictures in your research, you can still participate in the discussion by submitting a (historical) picture you think had a similar impact (with motivation). Examples I think could appear here are the first picture of the Mandelbrot set , the first bifurcation diagram , or perhaps roots of polynomials with integer coefficients .
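As a small illustration of the coamoeba transformation mentioned above, here is a sketch of my own (the polynomial $z^3 + wz + 1$ is an arbitrary choice, not one from my work): sample points on the zero set of a bivariate polynomial and scatter-plot their coordinatewise arguments.

```python
import numpy as np
import matplotlib.pyplot as plt

def coamoeba_points(coeffs_of_z, w_samples):
    # coeffs_of_z(w) gives the coefficients of p(., w) as a polynomial in z,
    # highest degree first (the convention np.roots expects).
    pts = []
    for w in w_samples:
        for z in np.roots(coeffs_of_z(w)):
            pts.append((np.angle(z), np.angle(w)))
    return np.array(pts)

rng = np.random.default_rng(1)
w_samples = np.exp(rng.uniform(-1, 1, 4000) + 2j * np.pi * rng.uniform(0, 1, 4000))
pts = coamoeba_points(lambda w: [1, 0, w, 1], w_samples)   # p(z,w) = z^3 + w z + 1

plt.scatter(pts[:, 0], pts[:, 1], s=1)
plt.xlabel(r"$\arg z$")
plt.ylabel(r"$\arg w$")
plt.show()
```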
|
The third image below was certainly unexpected for my soon-to-be-collaborators, Emmanuel Candes and Justin Romberg. They started with a standard image in signal processing, the Logan-Shepp phantom : They took a sparse set of Fourier measurements of this image along 22 radial lines (simulating a crude MRI scan). Conventional wisdom was that this was a very lossy set of measurements, losing most of the original data. Indeed, if one tried to use the standard least squares method to reconstruct the image from this data, one got terrible results: However, Emmanuel and Justin were experimenting with a different method, in which one minimised the total variation norm rather than the least squares norm subject to the given measurements, and were hoping to get a somewhat better reconstruction. What they actually got was this: Unbelievably, using only about 2% of the available Fourier coefficients, they had managed to reconstruct the original Logan-Shepp phantom so perfectly that the differences were invisible to the naked eye. When Emmanuel told me this result, I couldn't believe it either, and tried to write down a theoretical proof that such perfect reconstruction was impossible from so little data. Much to my surprise, I found instead that random matrix theory could be used to guarantee exact reconstruction from a remarkably small number of measurements. We then worked together to optimise and streamline the results; this led to some of the pioneering work in the area now known as compressed sensing .
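For readers who want to reproduce the flavour of this on a toy problem, here is a 1D sketch of my own (not the actual 2D radial-line experiment described above; cvxpy is an assumed dependency): it compares least-squares with total-variation minimisation for recovering a piecewise-constant signal from a few of its Fourier coefficients.

```python
# Toy 1D analogue of the experiment above: recover a piecewise-constant
# signal from a small random subset of its Fourier coefficients.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n)
x_true[30:60] = 1.0
x_true[80:95] = -0.7

D = np.fft.fft(np.eye(n))                     # 1D DFT matrix
keep = np.concatenate(([0], rng.choice(np.arange(1, n), size=29, replace=False)))
A = D[keep, :]                                # sampled Fourier measurements
b = A @ x_true

# Minimum-energy (least-squares) reconstruction: typically badly smeared.
x_ls = np.real(np.linalg.pinv(A) @ b)

# "Total variation" reconstruction: minimise the l1 norm of the discrete
# gradient subject to matching the measurements exactly.
x = cp.Variable(n)
constraints = [A.real @ x == b.real, A.imag @ x == b.imag]
cp.Problem(cp.Minimize(cp.norm1(cp.diff(x))), constraints).solve()

print("least-squares error:", np.linalg.norm(x_ls - x_true))
print("TV-minimisation error:", np.linalg.norm(x.value - x_true))
```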
|
{
"source": [
"https://mathoverflow.net/questions/178139",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1056/"
]
}
|
178,296 |
I have seen that Euclid's Elements was written 300 BC and first set in type in 1482. Are there scans of that old versions available? How were formulas / images added to the books created with printing presses?
Can / could formulas / images also be printed automatically or had they to draw every single image again for every single new book? Related Writing papers in pre-LaTeX era? How was Euclid's Elements likely written?
|
Euclid's Elements of Geometry (1482) In the 1455 Gutenberg bible the illustrations were hand-drawn after printing. The illustrations in the 1482 Elements were printed. [In some copies the diagrams were colored in by hand, see right image above.] In a foreword the printer Erhard Ratdolt attributed the prior lack of printed mathematical works to the difficulty produced by the diagrams, and added "Having perceived that it was this alone that formed an obstacle to something that would be useful to all, I have achieved, by applying myself to the problem and not without putting in much hard work, that geometrical figures can be composed with the same ease as movable type" . There appears to be some uncertainty about the nature of Ratdolt's method, which he did not explain any further. The common practice in the 15th century was to use woodcuts for illustrations, as discussed by Bowers . Alternatively, according to this source , Ratdolt had devised elementary geometrical forms in type metal which could be combined to form figures which, being in metal, could be printed at the same time as the typeset page. The OP asks also for early printed formulas. The earliest appearance of + and - symbols appears to be in Johannes Widmann (1489), see image below. Notation that would need subscripts or superscripts (such as exponents) was generally avoided, as were fractions with a bar. A quote: The bar is generally found in Latin manuscripts of the late Middle Ages, but when printing was introduced it was frequently omitted, doubtless owing to typographical difficulties. This inference is confirmed by such books as Rudolff's Kunstliche rechnung (1526), where the bar is omitted in all ordinary fractions but is inserted in fractions printed in larger type and those having large numbers.
|
{
"source": [
"https://mathoverflow.net/questions/178296",
"https://mathoverflow.net",
"https://mathoverflow.net/users/36477/"
]
}
|
178,318 |
A well-known proof of the Chevalley-Warning Theorem is the one listed on Wikipedia: http://en.wikipedia.org/wiki/Chevalley%E2%80%93Warning_theorem Are there any other proofs of this, or generalizations of it?
|
I am working on a book-length manusript, Around the Chevalley-Warning Theorem . A complete answer to your question is estimated at about 150 pages! In terms of what exists at the moment, here are two papers. Both of them make connections between the classical results of Chevalley and Warning and modern polynomial methods. The first concerns a generalization of the (unjustly almost forgotten) Warning's Second Theorem to restricted variables. The second explains the connection between Chevalley's Theorem and Alon's Combinatorial Nullstellensatz . I take the perspective that the Combinatorial Nullstellensatz is in fact a very direct generalization of Chevalley's proof of Chevalley's Theorem. (I don't mean "very direct" as a slight against Alon: I am certainly a fan. Rather it is meant to indicate a useful -- at least for a number theorist -- way of thinking about these results.) I'm afraid the above seems overly self-promotional. Let me also give what I think are the most important papers in this area, with an emphasis on relatively elementary work. [So I will not list e.g. work of Esnault, though I agree with Daniel Loughran's suggestion that it is, at least in some sense, the most important result of Chevalley-Warning type.] J. Ax, \emph{Zeroes of polynomials over finite fields}. Amer. J. Math. 86 (1964), 255-–261. C. Chevalley, \emph{D'emonstration d’une hypoth`ese de M. Artin.} Abh. Math. Sem. Univ. Hamburg 11 (1935), 73–-75. D.R. Heath-Brown, \emph{On Chevalley-Warning theorems}. (Russian. Russian summary) Uspekhi Mat. Nauk 66 (2011), no. 2(398), 223--232; translation in Russian Math. Surveys 66 (2011), no. 2, 427-–436. D.J. Katz, \emph{Point count divisibility for algebraic sets over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ and other finite principal rings}. Proc. Amer. Math. Soc. 137 (2009), 4065-–4075. N.M. Katz, \emph{On a theorem of Ax}. Amer. J. Math. 93 (1971), 485-–499. M. Marshall and G. Ramage, \emph{Zeros of polynomials over finite principal ideal rings}. Proc. Amer. Math. Soc. 49 (1975), 35-–38. O. Moreno and C.J. Moreno, \emph{Improvements of the Chevalley-Warning and the Ax-Katz theorems}. Amer. J. Math. 117 (1995), 241--244. S.H. Schanuel, \emph{An extension of Chevalley's theorem to congruences modulo prime powers}. J. Number Theory 6 (1974), 284-–290. G. Terjanian, \emph{Sur les corps finis}. C. R. Acad. Sci. Paris S'er. A-B 262 (1966), A167-–A169. D.Q. Wan, \emph{An elementary proof of a theorem of Katz}.
Amer. J. Math. 111 (1989), 1-–8. E. Warning, \emph{Bemerkung zur vorstehenden Arbeit von Herrn Chevalley}. Abh. Math. Sem. Hamburg 11 (1935), 76–-83. Less than a year ago I thought there were about ten papers which generalize and refine the Chevalley-Warning Theorem. I now think I was off by a full order of magnitude, and indeed my current bibliography contains about 100 references. (I will admit that at this point, the radius of the circle referred to in Around the Chevalley-Warning Theorem is rather large. It includes for instance material on Davenport constants and on polynomial interpolation.) Added : To answer the question more directly: Chevalley's proof critically uses the observation that if $P_1,\ldots,P_r$ are polynomials in $n$ variables over $\mathbb{F}_q$ , then the function $x \in \mathbb{F}_q^n \mapsto \chi(x_1,\ldots,x_n) = \prod_{i=1}^r(1-P_i(x_1,\ldots,x_n)^{q-1})$ is the characteristic function of the set $Z = \{ (x_1,\ldots,x_n) \in \mathbb{F}_q^n \mid P_1(x_1,\ldots,x_n) = \ldots = P_r(x_1,\ldots,x_n) = 0\}$ . Every other proof of Chevalley's Theorem I know uses this observation. The subsequent proofs of Chevalley's Theorem other than Ax's proof look (to me) essentially the same as Chevalley's. Ax's proof uses (only!) Chevalley's observation and Ax's Lemma : if $\operatorname{deg} P < (q-1)n$ , then $\sum_{x \in \mathbb{F}_q^n} P(x) = 0$ . Ax's Lemma is impressively easy to prove: it would be a fair question on many undergraduate algebra midterms. I think I saw somewhere the claim that it goes back to V. Lebesgue. I still cannot quite see Ax's argument as a recasting of Chevalley's. So after all this I suppose I would say that there are "really two proofs".
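To see the two ingredients in action on a tiny example, here is a brute-force check of my own over $\mathbb{F}_3$ (the particular polynomials are arbitrary choices): it verifies Chevalley's indicator-function identity and Ax's Lemma, together with the sharpness of the degree bound.

```python
import itertools

q, n = 3, 2   # work over F_3 in two variables

def chi(polys, point):
    # Chevalley's indicator: equals 1 mod q iff all P_i vanish at the point, else 0.
    val = 1
    for P in polys:
        val = (val * (1 - pow(P(*point) % q, q - 1, q))) % q
    return val

P1 = lambda x, y: x + y        # a linear form
P2 = lambda x, y: x * y        # a quadratic
polys = [P1, P2]

for pt in itertools.product(range(q), repeat=n):
    vanishes = all(P(*pt) % q == 0 for P in polys)
    assert chi(polys, pt) == (1 if vanishes else 0)

# Ax's Lemma: if deg P < (q-1)n = 4, then the sum of P over F_q^n vanishes in F_q.
P = lambda x, y: x**2 * y + x + 1       # degree 3
print(sum(P(*pt) for pt in itertools.product(range(q), repeat=n)) % q)   # 0

# The degree bound is sharp: (xy)^2 has degree (q-1)n and non-zero sum.
print(sum((x * y) ** 2 for x, y in itertools.product(range(q), repeat=n)) % q)  # 1
```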
|
{
"source": [
"https://mathoverflow.net/questions/178318",
"https://mathoverflow.net",
"https://mathoverflow.net/users/40983/"
]
}
|
178,555 |
Given two integers $a \ge b >1$, can we encode them as a unique integer $a^b + b^a$? I asked this question on math.SE, and after surviving a week with a bounty, it seems that this question is harder than I initially thought. Apparently these things have been named Leyland Numbers , but none of the literature I've been able to find on them provides proof that there are no repeats.
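A brute-force search (my own, purely illustrative) makes the question concrete: among all pairs $a \ge b > 1$ up to a small bound, no two distinct pairs give the same value of $a^b + b^a$.

```python
from collections import defaultdict

limit = 200
values = defaultdict(list)
for a in range(2, limit + 1):
    for b in range(2, a + 1):
        values[a**b + b**a].append((a, b))

collisions = {v: pairs for v, pairs in values.items() if len(pairs) > 1}
print("pairs checked:", sum(len(pairs) for pairs in values.values()))
print("collisions found:", collisions)   # empty for this range
```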
|
I will show that if we assume the $abcd$ conjecture (which is the case $n=4$ of Browkin and Brzezinski's $n$-conjecture that generalizes the $abc$ conjecture), then $a^b+b^a=c^d+d^c$ has only finitely many solutions with $\{a,b\}\neq \{c,d\}$. The $abcd$ conjecture claims that if $a,b,c,d$ are integers with $a+b+c+d=0$ and nonzero subsums (i.e. $a+b+c\neq 0$ etc.), $\gcd(a,b,c,d)=1$ and $\varepsilon>0$, then
$$\max\{|a|,|b|,|c|,|d|\}\leq K_{\varepsilon} \text{rad}(abcd)^{3+\varepsilon}$$
(it is enough for us to assume the conjecture with any absolute constant in place of $3+\varepsilon$). Here $\text{rad}(m)$ is the product of the prime divisors of $m$. Given that Mochizuki seems to have proved the $abc$ conjecture and some generalizations of it, perhaps the $abcd$ conjecture is not that distant an assumption. If $a^b+b^a=c^d+d^c$ and $\gcd(a,b,c,d)=1$, we obtain
$$\max\{a^b,b^a,c^d,d^c\}\leq K_{\varepsilon}\text{rad}(a^bb^ac^dd^c)^{3+\varepsilon}=K_{\varepsilon}\text{rad}(abcd)^{3+\varepsilon}\leq K_{\varepsilon}(abcd)^{3+\varepsilon},$$ unless some subsum of $a^b+b^a-c^d-d^c=0$ is zero, which would give $a^b=c^d$ and $b^a=d^c$ (or vice versa), but then $a$ and $c$ have the same prime factors, and writing $a=\prod_{i=1}^s p_i^{\alpha_i}, c=\prod_{i=1}^s p_i^{\gamma_i}$, we see from $a^b=c^d$ that $\frac{\alpha_i}{\gamma_i}=\frac{d}{b}$, which is independent of $i$, so $a\mid c$ or $c\mid a$. Similarly $b\mid d$ or $d\mid b$. After this it is easy to see that $a^b=c^d$, $b^a=d^c$ has no nontrivial solutions. In fact, if for instance $d=kb,a=\ell c$, then after simplification the equations become $\ell=c^{k-1},k=b^{\ell-1}$, so $k\geq 2^{\ell-1}$ and then $\ell\geq 2^{2^{\ell-1}-1}$. Hence $\ell=2$ and similarly $k=2$ (or $\{a,b\}=\{c,d\}$), but then $b=c$, and thus $a=d$. Now we may assume that the subsums are nonzero. Choose $\varepsilon=1,$ say, and let $d=\max\{a,b,c,d\}$. Then $2^d\leq c^d\leq K_1\text{rad}(abcd)^4\leq K_1\cdot d^{16}$, so $d$ is bounded by an absolute constant, and hence $a,b,c,d$ are all bounded. Now assume $r=\gcd(a,b,c,d)>1$. The next step is to show that $r$ is bounded. Now $a^b+b^a=c^d+d^c$ is of the form $x^r+y^r=z^r+w^r$, where $x=a^{\frac{b}{r}},...,w=d^{\frac{c}{r}}$. We will show that if $r$ is large and $(x,y,z,w)$ is any quadruple of positive integers satisfying $x^r+y^r=z^r+w^r$, then $\{x,y\}=\{z,w\}$. By homogeneity, it suffices to show that the coprime solutions satisfy $\{x,y\}=\{z,w\}.$ Since we were allowed to make the assumption $\gcd(x,y,z,w)=1$, the $abcd$ conjecture implies
$$\max\{x^r,y^r,z^r,w^r\}\leq K_1\text{rad}(x^ry^rz^rw^r)^4\leq K_1(xyzw)^4,$$
unless a subsum of $x^r+y^r-z^r-w^r$ vanishes, which leads to $\{x,y\}=\{z,w\}$.
If the subsums are nonzero and $w=\max\{x,y,z,w\}$, then $w^r\leq K_1w^{16}$, so $r$ is bounded or $w=1$. The last case leads to $x=y=z=w=1$. Therefore, for large $r$, the only solutions to $x^r+y^r=z^r+w^r$ are those where $\{x,y\}=\{z,w\}$. Thus also $\{a^b,b^a\}=\{c^d,d^c\}$, which was already seen to give no nontrivial solutions. Finally, let $M$ be an absolute constant that is an upper bound for $r$. Let $R=\gcd(a^b,b^a,c^d,d^c)$. We apply the $abcd$ conjecture once again to see that $\max\{\frac{a^b}{R},\frac{b^a}{R},\frac{c^d}{R},\frac{d^c}{R}\}\leq K_1\text{rad}(abcd)^4$. Now if $abcd$ has no prime divisor greater than $M$, we have
$$\min\{a^b,b^a,c^d,d^c\}\geq R\geq c_0\max\{a^b,b^a,c^d,d^c\}$$
for some absolute constant $c_0>0$. Now if there exists a prime $p_1$ that divides some of $a,b,c,d$ but not all of them, then
$$R\leq \prod_{p\leq M, p\neq p_1}p^{\min\{v_p(a^b),v_p(b^a),v_p(c^d),v_p(d^c)\}}\leq \frac{\max\{a^b,b^a, c^d, d^c\}}{2^{\min\{a,b,c,d\}}}\quad \quad (1).$$
If no such $p_1$ exists, all the numbers $a,b,c,d$ have the same prime factors. Let $a=\prod_{i=1}^s p_i^{\alpha_i},...,d=\prod_{i=1}^s p_i^{\delta_i}$. The condition $\gcd(a,b,c,d)\leq M$ tells $\min\{\alpha_i,\beta_i,\gamma_i,\delta_i\}\leq M$. Let $P^{\delta}$ be the largest prime power dividing $d$. Then $\min\{v_{P}(a),v_{P}(b),v_{P}(c)\}\leq M$ if $d$ is large. For example, let $v_{P}(c)\leq M$. Write $d=P^{\delta}D,c=P^{\gamma}C$, where $P\nmid C,D$. Then
$$R=\gcd(a^b,b^a,c^d,d^c)\leq \gcd(c^d,d^c)\leq P^{Md} D^c=P^{Md-c\delta}d^c\leq M^{Md}d^{-\frac{c}{s}}d^c,$$
where $s\leq M$ is the number of prime factors of $d$. The last quantity is at most $\left(\frac{d}{2}\right)^c$ if $\frac{1}{2}d^{\frac{1}{s}}\geq M^{\frac{Md}{c}},$ which holds for large enough $d$ if $\frac{d}{c}$ is bounded. It must indeed be bounded since we had $\frac{c^d}{d^c}\geq c_0$. Therefore $(1)$ holds again for all large $d$. Next we show that $(1)$ holds also if $a,b,c,d$ have some prime factors greater than $M$, and then use $(1)$ to deduce a contradiction. Now one of $a,b,c,d$ has a prime factor $p_0>M$. For example, let $p_0\mid d$. Then
$$R\leq \prod_{p\leq M}p^{v_p(d^c)}\leq \left(\frac{d}{p_0}\right)^c\leq \frac{\max\{a^b,b^a,c^d,d^c\}}{2^{\min\{a,b,c,d\}}},$$
so $(1)$ must always hold for large $d$. However, we had $R\geq \frac{\max\{a^b,b^a,c^d,d^c\}}{K_1\text{rad}(abcd)^4}\geq \frac{\max\{a^b,b^a,c^d,d^c\}}{K_1 d^{16}}$. Thus $K_1d^{16}\geq 2^{\min\{a,b,c,d\}}$. But $$\min\{a,b,c,d\}^d\geq \min\{a^b,b^a,c^d,d^c\}\geq R\geq \frac{\max\{a^b,b^a,c^d,d^c\}}{K_1d^{16}}\geq \frac{c^d}{K_1d^{16}},$$
implies $\min\{a,b,c,d\}\geq k_0c$ for some constant $k_0$, so $K_1d^{16}\geq 2^{k_0c}$. Still we have $\frac{d^c}{c^d}\geq \frac{1}{K_1d^{16}}$. In particular, $d^c\geq \frac{2^d}{K_1d^{16}}$, so $c\geq k_2 \frac{d}{\log d}$. But then $K_1d^{16}\geq 2^{k_0c}\geq 2^{k_3\frac{d}{\log d}}$ shows that $d$ is bounded and hence all $a,b,c,d$ are bounded.
|
{
"source": [
"https://mathoverflow.net/questions/178555",
"https://mathoverflow.net",
"https://mathoverflow.net/users/54339/"
]
}
|
178,831 |
(this is basically a repost of a question I asked at M.SE last year) Is there an explicit real algebraic number (such that we can write its minimal polynomial and a rational isolating interval) that cannot be expressed as a combination of the constants $\pi, e,$ integers and elementary functions (rational functions, powers, logarithms, direct and inverse trigonometric functions)? In case it is an open question, can you give an example of an algebraic number such that we do not know how to express it in elementary functions?
|
I addressed this exact question in my American Mathematical Monthly paper, What is a closed-form number? Corollary 1 in that paper states that if Schanuel's conjecture holds, then the EL numbers (i.e., the numbers expressible according to your list of rules) that are algebraic are precisely the numbers expressible using radicals. So any algebraic number that is not expressible in radicals would be an answer to your question. However, Schanuel's conjecture is not known to be true. I believe that your question is still open.
|
{
"source": [
"https://mathoverflow.net/questions/178831",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9550/"
]
}
|
179,611 |
I have a history question for which I've had trouble finding a good answer. The common story about nonmeasurable sets is that Vitali showed that one existed using the Axiom of Choice, and Lebesgue et al. put the blame squarely on this axiom and its non-constructive character. It was noticed however that some amount of choice was required to get measure theory off the ground, namely Dependent Choice seemed to be the principle typically employed. But the full axiom of choice which allows uncountably many arbitrary choices to be made is of a different character, and is the culprit behind the pathological sets. This viewpoint was not really justified until Solovay showed in the 1960s that ZF+DC could not prove the existence of a nonmeasurable set, assuming the consistency of an inaccessible cardinal. My question is, in the many years before Solovay's theorem, was there any effort aimed at showing the existence of a nonmeasurable set without the use of the full AC? Was something like the following question ever posed or worked on: "Can constructions similar to those of Vitali, Hausdorff, and Banach-Tarski be done without appeal to the Axiom of Choice?"
|
Paul Cohen posed the question of getting a model of "All Sets Lebesgue Measurable" in his early talks on his own results. (He did not mention the principle of Dependent Choices. Adding that to the problem was my idea.) I know of no work trying to prove the Vitali result constructively. Certainly Cohen's conjecture (which I presume was widely shared) was that the use of choice was essential. It is quite striking (if one works through Halmos) that all the positive results in measure theory can be carried out in ZF + DC. Only the counterexample section uses full choice.
|
{
"source": [
"https://mathoverflow.net/questions/179611",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11145/"
]
}
|
180,035 |
In their 2001 paper defining periods, Kontsevich and Zagier (pdf) without further comment state that $e$ is conjecturally not a period while many other numbers showing up naturally (conjecturally) are. The former claim is repeated in many other internet sources including Wikipedia but nowhere could I find a heuristic making the conjecture that $e$ is not a period more plausible than its negation. Does anyone here know of such an argument? EDIT: I figured it would be good measure (i.e. 'shows research effort') to write what was the best I could come up with myself. I don't find it very convincing, however, so feel free to ignore it. The number $e$ is more or less defined as the value at a rational number (1) of a function that is a solution to an ordinary differential equation ( $y' = y$ ) with rational boundary condition ( $y(0) = 1$ ). Now K & Z point out that all periods arise in this way (replace a rational number in the defining integral with a parameter and it will satisfy an ODE). However they also warn us that the differential equations are really special and (conjecturally) satisfy a lot of criteria, among which is having at most regular singularities. Now the singularity at infinity of $y'= y$ is not regular as it has order 2 (while the equation is of order 1) but of course this proves nothing since nothing is stopping $e$ from being the value at some rational number of a solution to a much more complicated differential equation which might be of the right class. So what is missing from an argument along these lines is some way of making precise that $y'= y$ really is the simplest equation which produces $e$ and that 'therefore' more complicated equations can be 'reduced' to it by a series of simplifications innocent enough to preserve the regularity of the singularities if it exists (quod non). Now personally I would not buy such a claim if it wasn't for the fact that it is a bit akin to conjecture 1 from K & Z . However this line of reasoning requires a lot of 'making precise' and perhaps is an entirely wrong way of looking at it, so better ideas are welcome!
|
To my understanding, the reason is simple: in the almost 300 years since $e$ was discovered, no representation of it as a period has been found. I think this is quite strong evidence. Remark. To those who think that periods were introduced by Kontsevich and Zagier, I recommend the paper of Euler, On highly transcendental quantities which cannot be expressed by integral formulas ( English translation ).
(Strange that he does not mention his own $e$ as a candidate. Perhaps he was still looking for an integral that equals $e$ when he wrote this paper.)
|
{
"source": [
"https://mathoverflow.net/questions/180035",
"https://mathoverflow.net",
"https://mathoverflow.net/users/41139/"
]
}
|
180,173 |
Recently, prompted by considerations in conformal field theory, I was led to guess that for every compact connected Lie group $G$, the fourth cohomology group of its classifying space is torsion free. By using the structure theory of connected Lie groups and a couple of Serre spectral sequences, I was quickly able to prove that result. However, this feels unsatisfactory: as I hinted in the first paragraph, the fact that $H^4(BG,\mathbb Z)$ is torsion free seems to have a meaning . But what that meaning exactly is, is not quite clear to me... In order to get a better feeling of what that meaning might be, I therefore ask the following: Question: Can someone come up with a non-computational proof of the fact that for every connected compact Lie group $G$, the cohomology group $H^4(BG,\mathbb Z)$ is torsion free? [Added later]:
My paper on WZW models and $H^4(BG,\mathbb Z)$ has recently appeared on the arXiv. In it, I present a proof of the torsion-freeness of $H^4(BG,\mathbb Z)$ which is slightly different from the one below. I also show that $H^4(BG,\mathbb Z)=H^4(BT,\mathbb Z)^W$. For the reader's interest, I include a proof that $H^4(BG)$ is torsion-free [all cohomology groups are with $\mathbb Z$ coefficients, which is omitted from the notation]. Let $\tilde G$ be the universal cover of $G$, and let $\pi:=\pi_1(G)$. Then there is a Puppe sequence
$$
\pi\to\tilde G\to G \to K(\pi,1)\to B\tilde G \to BG \to K(\pi,2)
$$
It is a well known fact that $\pi_2$ of any Lie group is trivial: it follows that $B\tilde G$ is 3-connected and that $H^4(B\tilde G)$ is torsion-free (actually $H_4(B\tilde G)$ is also torsion-free, but that's not needed for the argument). Now, here comes the computation: $H^*(K(\mathbb Z/p^n,2)) = [\mathbb Z, 0,0,\mathbb Z/p^n,0,...]$ from which it follows that for any finite abelian group $A$ $H^*(K(A,2)) = [\mathbb Z, 0,0,A,0,...]$ from which it follows that for any finitely generated abelian group $\pi=\mathbb Z^n\oplus A$ $H^*(K(\pi,2)) = [\mathbb Z, 0,\mathbb Z^n,A,\mathbb Z^{(\!\begin{smallmatrix} \scriptscriptstyle n+1 \\ \scriptscriptstyle 2 \end{smallmatrix}\!)},...]$ The Serre spectral sequence for the fibration $B\tilde G \to BG \to K(\pi,2)$ therefore looks as follows:
$$
\begin{matrix}
\vdots & \vdots\\
H^4(B\tilde G) & 0 & \vdots & \vdots & & \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \cdots\\
\mathbb Z & 0 & \mathbb Z^n & A &\mathbb Z^{(\!\begin{smallmatrix} \scriptscriptstyle n+1 \\ \scriptscriptstyle 2 \end{smallmatrix}\!)} & H^5(K(\pi,2)) & \cdots\\
\end{matrix}
$$
and the $d_5$ differential $H^4(B\tilde G)\to H^5(K(\pi,2))$
cannot create torsion in degree four. QED PS: By a result of Mac Lane (1954), $H^5(K(\pi,2))$ is naturally isomorphic to the group of $\mathbb Q/\mathbb Z$-valued quadratic forms on $\pi$ modulo those that lift to a $\mathbb Q$-valued quadratic form... I wonder what the above $d_5$ differential is.
|
I try to give an argument without spectral sequences, not sure if this can be considered non-computational though. At least, there is a non-computational syllabus: torsion classes in $H^4(BG,\mathbb{Z})$ would be characteristic classes of torsion bundles over $S^3$ but the latter have to be trivial. Now for a slightly more detailed argument: first, the coefficient formula tells us that torsion in $H^4(BG,\mathbb{Z})$ comes from torsion in $H_3(BG,\mathbb{Z})$ because torsion in $H_4(BG,\mathbb{Z})$ would not survive the dualization. Since $G$ is connected, the Hurewicz theorem provides a surjection $\pi_3(BG)\to H_3(BG,\mathbb{Z})$, so any torsion class in $H_3(BG,\mathbb{Z})$ comes from a map $S^3\to BG$ classifying a $G$-bundle over $S^3$. The corresponding clutching map $S^2\to G$ is homotopic to the constant map because $\pi_2(G)$ is trivial. So we find that any $G$-bundle on $S^3$ is trivial, hence $H_3$ and therefore $H^4$ of $BG$ do not have non-trivial torsion.
|
{
"source": [
"https://mathoverflow.net/questions/180173",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5690/"
]
}
|
180,400 |
The (general) analytic class number formula gives a value for the residue of the Dedekind zeta function of a number field at the point $s=1$ (or, as I prefer, the leading Taylor coefficient at $s=0$). To whom should this formula be attributed? My usual go-to place for such history questions is Narkiewicz's book Elementary and analytic theory of algebraic numbers , but it only discusses the abelian case. It lists references as late as 1929 for the result in the abelian case. Does one have to wait for Artin or Hasse?
|
Do you insist that the formula be interpreted as the value of a residue, hence requiring that it be known that the zeta-function of every number field is meromorphic around $s = 1$? It goes back to Dedekind that for every $K$ the limit $\lim_{s \rightarrow 1^+} (s-1)\zeta_K(s)$ exists and is given by the standard formula, but this couldn't be interpreted by him as a residue calculation since at that time it wasn't known that $\zeta_K(s)$ could be continued beyond ${\rm Re}(s) > 1$. However, if you're willing to accept the computation of that limit as a poor man's residue, then the formula goes back to Dedekind. In 1903 Landau proved for every $K$ that $\zeta_K(s)$ can be analytically continued to ${\rm Re}(s) > 1 - 1/[K:\mathbf Q]$ (see the section on the zeta-function of a number field in Lang's Algebraic Number Theory). This was part of his proof of the prime ideal theorem, and it was the first proof for general $K$ that $\zeta_K(s)$ is meromorphic around $s = 1$. By this work Dedekind's calculation could be interpreted as a residue calculation. Therefore if you require the formula arise as a residue then in principle I'd say the result is due to Landau, although I haven't looked at his 1903 paper to see if he is explicit about that.
|
{
"source": [
"https://mathoverflow.net/questions/180400",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1021/"
]
}
|
180,846 |
Some theorems are true in vector spaces or in manifolds for a given dimension $n$ but become false in higher dimensions. Here are two examples: A positive polynomial not reaching its infimum. Impossible in dimension $1$ and possible in dimension $2$ or more. See more details here . A compact convex set whose set of extreme points is not closed. Impossible in dimension $2$ and possible in dimension $3$ or more. See more details here . What are other "interesting" results falling in the same category?
|
An $n$-dimensional Brownian motion visits every nonempty open subset of $\mathbb{R}^n$ infinitely often with probability 1 iff $n \leq 2$.
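A discrete cousin of this dichotomy is Pólya's theorem: simple random walk on $\mathbb{Z}^n$ is recurrent iff $n \le 2$. The crude simulation below (my own illustration, which of course proves nothing) estimates the chance of returning to the origin within a fixed horizon and already hints at the split between $n \le 2$ and $n \ge 3$.

```python
import numpy as np

def return_frequency(dim, n_walks=500, n_steps=2000, seed=0):
    # Fraction of simple random walks on Z^dim that revisit the origin
    # within n_steps steps.
    rng = np.random.default_rng(seed)
    returned = 0
    for _ in range(n_walks):
        pos = np.zeros(dim, dtype=int)
        for _ in range(n_steps):
            pos[rng.integers(dim)] += rng.choice((-1, 1))
            if not pos.any():
                returned += 1
                break
    return returned / n_walks

for d in (1, 2, 3):
    print(d, return_frequency(d))
```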
|
{
"source": [
"https://mathoverflow.net/questions/180846",
"https://mathoverflow.net",
"https://mathoverflow.net/users/41060/"
]
}
|
180,865 |
The other direction is well known.
I think it is true, and I was told by several other people doing algebraic geometry that it is indeed true, but they did not know how to prove it. I am also wondering whether there is a general nonsense style proof. It looks like the statement can be proved by playing with adjunctions in several categories. Note that a scheme $X$ is quasi-compact iff the structure sheaf $O_X$ is a compact object in the category of quasi-coherent sheaves $Qcoh(X)$. We also know that if $X$ is a noetherian scheme, then coherent sheaves are exactly the compact objects in $Qcoh(X)$, so it looks like this statement is equivalent to saying that if $f_*:Qcoh(X)\rightarrow Qcoh(Y)$ preserves compact objects, then $f$ itself is a proper morphism of schemes. But this somehow means that $f^{-1}$ preserves compactness in the topological sense. (I think this can also be phrased in general nonsense language, say via compact objects in the category of topological spaces.) Then one might be able to play the game of commuting the $Hom(M,?)$ functor with filtered colimits to get a proof. All comments are welcome.
Thanks
|
Well, as already explained in the comment by Vivek Shende, the answer is no: there are non-proper morphisms with push-forward that preserves coherence. I've thought a bit on this the last couple of years and on the "positive" side we have: Theorem 1. If $f\colon X\to Y$ is a universally closed morphism of finite type between noetherian schemes, then $f_*$ preserves coherence. This is quite easy to show with Raynaud–Gruson's refined Chow Lemma (also when $X$ and $Y$ are algebraic spaces) and standard dévissage arguments. With a completely different proof I have also extended this to algebraic stacks. The morphism $\mathbb{P}^2\setminus 0\to *$ does not preserve coherence as there exist non-proper curves. If $f_*$ preserves coherence after arbitrary noetherian base change , then $f$ is universally closed: the valuative criterion is easily seen to hold (as indicated by Piotr Achinger) since the fraction field of a DVR is not coherent. However, it turns out that it is not necessary to involve base change and it is also possible to characterize properness in terms of $R^1f_*$ (cf comment by Tom Graber). Theorem 2. Let $f\colon X\to Y$ be a morphism of finite type between noetherian schemes. Then (i) $f$ is universally closed if and only if $f_*$ preserves coherence, and (ii) $f$ is proper if and only if $f_*$ and $R^1f_*$ preserves coherence. The necessity of the conditions is Theorem 1 in (i) and well-known in (ii). To see that they are sufficient we need some lemmas that say that the valuative criteria can be checked using curves in $X$. A curve is a $1$-dimensional noetherian integral scheme. First we show that it is enough with curves of finite type over $Y$. Lemma 1. Let $f\colon X\to Y$ be a morphism of finite type between noetherian schemes. If $f$ is not universally closed (resp. not separated) then there exists (i) a separated curve C together with a morphism $C\to Y$ of finite type; and (ii) a closed subscheme $C'$ of $X\times_Y C$ such that $C'$ is a curve and $C'\to C$ is birational and not surjective (resp. not separated). Proof. By the valuative criterion, there exists a DVR $D$, a morphism $Y_0:=\operatorname{Spec} D\to Y$ and an integral closed subscheme $Z_0\subseteq X_0:=X\times_Y Y_0$ such that $Z_0\to Y_0$ is an open immersion: the inclusion of the generic point (resp. not separated). In the latter case, we can assume that there are two different sections of $Z_0\to Y_0$ that only agree over the generic point $U_0=\operatorname{Spec} K(D)$. We now approximate the map $Y_0\to Y$ and the data $U_0\subseteq Y_0$, $Z_0\to Y_0$: there exists $Y'\to Y$ of finite type with $Y'$ integral, an (affine) open immersion $U'\subseteq Y'$, and a closed integral subscheme $Z'\subseteq X'=X\times_Y Y'$ such that $Z'\to Y'$ is an isomorphism over $U'$. Moreover, we can arrange so that $Z'\to Y'$ is equal to $U'\to Y'$ (resp. there are $2$ sections of $Z'\to Y'$ which only coincide over $U'$). Pick a closed point $y'$ in $Y'\setminus U'$ and a generization $u'\in U'$ such that $C:=\overline{u'}$ has dimension $1$. Let $C'=Z\cap X'\times_{Y'} C$. QED Now we refine Lemma 1 and show that it is enough to consider either "vertical curves" (contained in a fiber) or "horizontal curves" (birational to a curve in the base). Lemma 2. Let $f\colon X\to Y$ be a morphism of finite type between noetherian schemes. If $f$ is not universally closed (resp. not separated) then there exists a birational morphism of curves $C'\to C$ which is not universally closed (resp. 
not separated) such that either: (i) there is a finite morphism $C'\to X_y$ for a point $y\in Y$ (of finite type) and $C$ is a projective curve over $y$; or (ii) there is a finite morphism $C\to \operatorname{Spec} \mathcal{O}_{Y,y}$ and $C'$ is a closed subscheme of $X\times_Y C$. Proof. Let $C\to Y$ be a curve as in Lemma 1. The image is either a point or the morphism is quasi-finite. Case (i): the image of $C$ is a point $y\in Y$. We may replace $C$ by a compactification and assume that $C\to y$ is projective. If the image of $C'\to X_y$ is a point, then it is a closed point and it follows that $C'\to C$ is finite which gives a contradiction since $C'\to C$ is not proper. Thus the image of $C'\to X_y$ is $1$-dimensional and $C'\to X_y$ is quasi-finite and proper, hence finite. Case (ii): $C\to Y$ is quasi-finite. We may replace $Y$ with the schematic image of $C$ and localize at the image of a suitable closed point of $C$. Then $Y$ is integral and $1$-dimensional. Using Zariski's main theorem, we have $C\to \overline{C}\to Y$ where $C\to \overline{C}$ is an open immersion and $\overline{C}\to Y$ is finite. We may replace $C$ with $\overline{C}$ and $C'$ with its closure in $X\times_Y \overline{C}$. QED Remark. The localization in (ii) is only necessary when $Y$ is not Jacobson. The third lemma shows that curves as in Lemma 2 give rise to non-coherent cohomology. Lemma 3. Let $f\colon C'\to C$ be a birational morphism of finite type of curves. If $f$ is not universally closed (resp. not separated), then $f_*\mathcal{O}_{C'}$ is not coherent (resp. $R^1f_*\mathcal{O}_{C'}$ is not coherent). If $C$ is projective over $k$, then in addition $\Gamma(C',\mathcal{O}_{C'})$ (resp. $H^1(C',\mathcal{O}_{C'})$) is infinite-dimensional. Proof. For the first statement, it is enough to show that the sheaf is not coherent after passing to the henselization of a suitable point $c$ in $C$. We may thus assume that $C$ is local and henselian (although not necessarily irreducible). If $f$ is not universally closed, then (after replacing $C'$ with a connected component if it is reducible) $C'\to C$ is an open immersion. Then $f_*\mathcal{O}_{C'}$ is not coherent. If $f$ is not separated, then (after replacing $C'$ with a connected component if it is reducible) $C'\to C$ is not separated and there is an open covering of $C'$ consisting of the local rings at the points above $c$. A Cech calculation gives that $H^0(C',\mathcal{O}_{C'})$ is coherent whereas $H^1(C',\mathcal{O}_{C'})$ is not coherent. For the second statement, we note that there is a dense open subscheme $U\subseteq C$ such that $f_*\mathcal{O}_{C'}$ is coherent over $U$ and $R^1f_*\mathcal{O}_{C'}$ is zero over $U$. If $f_*\mathcal{O}_{C'}$ is not coherent, then choose a coherent subsheaf $\mathcal{F}\subseteq f_*\mathcal{O}_X$ that is an isomorphism over $U$. Then the cokernel $\mathcal{Q}$ is a non-coherent sheaf that is zero over $U$. It follows that $H^0(C,\mathcal{Q})$ is infinite, hence that $H^0(C,f_*\mathcal{O}_{C'})$ is infinite since $H^1(C,\mathcal{F})$ is finite. If $R^1f_*\mathcal{O}_{C'}$ is not coherent, then $H^0(C,R^1f_*\mathcal{O}_{C'})$ is infinite. QED. Theorem 2 is an easy consequence of the lemmas above: Proof of Theorem 2. We have seen that the conditions are necessary. Conversely, assume that $f$ is not universally closed (resp. not separated). We have to show that $f_*$ does not preserve coherence (resp. $R^1f_*$ does not preserve coherence).
Let $C'\to C$ be as in Lemma 2 and choose a coherent sheaf $\mathcal{F}\in \mathrm{Coh}(X)$ restricting to the push-forward of $\mathcal{O}_{C'}$ in $\mathrm{Coh}(X\times \operatorname{Spec} \mathcal{O}_{Y,y})$. By Lemma 3, we have that $f_*\mathcal{F}$ (resp. $R^1f_*\mathcal{F}$) is not coherent at $y$. QED. Remark 1. With small modifications, the proof of Theorem 2 also holds for algebraic spaces and Deligne–Mumford stacks: first one takes a finite cover of $Y$ to reduce to $Y$ a scheme, then uses the valuative criterion for stacks. An alternative proof for schemes, algebraic spaces and Deligne–Mumford stacks would be to use Raynaud–Gruson's Chow lemma to reduce the question to morphisms of the form $f\colon X\to \mathbb{P}^n_Y \to Y$ where $f\colon X\to \mathbb{P}^n_Y$ is étale and birational and $Y$ is a scheme. Remark 2. The situation for algebraic stacks with infinite-dimensional stabilizers is more subtle. Theorem 1 holds as mentioned above and Theorem 2 (i) is probably ok. Theorem 2 (ii) however has to be modified. If $X$ has a good moduli space then it follows from Theorem 2 that $X$ has coherent cohomology if and only if $X_\mathrm{gms}$ is proper. For example, note that $BGL_n$ has coherent cohomology but is not separated. This is all well-known and one may argue that such stacks are "almost proper" (cf. paper by Halpern-Leistner–Preygel, arXiv:1402.3204).
|
{
"source": [
"https://mathoverflow.net/questions/180865",
"https://mathoverflow.net",
"https://mathoverflow.net/users/41650/"
]
}
|
181,063 |
This is a chaser for the examples of using physical intuition to solve math problems question. Physical intuition seems to be used relatively frequently for solving math problems as well as stating new interesting ones. What would be examples of interesting Math questions or frameworks or problem solutions derived from field other than Physics?
|
Douglas Zare's comment mentioning linguistics brings to mind the important example of the Chomsky hierarchy. In the 50s, in the field of linguistics, Noam Chomsky introduced the notion of formal grammars and identified certain levels of complexity of formal grammars (regular, context-free, etc, these forming the above-mentioned hierarchy). These ideas then became important (even fundamental) in theoretical computer science as corresponding to hierarchies of formal languages, with associated automata that recognized them (e.g. Turing Machines). So I think this counts as an intuition and a framework from outside of math that came to play an important role within it.
|
{
"source": [
"https://mathoverflow.net/questions/181063",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38448/"
]
}
|
181,226 |
If a metric space is separable, then any open set is a countable union of balls. Is the converse statement true? UPDATE1. It is a duplicate of the question here https://math.stackexchange.com/questions/94280/if-every-open-set-is-a-countable-union-of-balls-is-the-space-separable/94301#94301 UPDATE2. Let me summarize here the positive answer following Joel David Hamkins and Ashutosh. It is a matter of taste, but I omit using ordinals and use Zorn's lemma instead, which may be more usual for most mathematicians (at least, it is for me). Lemma 1. If $(X,d)$ is a non-separable metric space, then for some $r>0$ there exists an uncountable subset $X_1\subset X$ such that $d(x,y)>r$ for any two points $x\ne y$ in $X_1$. Proof. For each $r=1/n$ consider a maximal (by inclusion) subset with this property. If it is countable, then $X$ has a countable $1/n$-net for each $n$, hence it is separable. Define $X_2\subset X_1$ as the set of points $x\in X_1$ for which there exists a point $y_x\in X$ such that $0<d(x,y_x)<r/10$. Consider two cases. 1) $X_2$ is uncountable. Consider the union of open balls $U=\cup_{x\in X_2} B(x,d(x,y_x))$. Consider any open ball $B(z,a)$ contained in $U$. We have $z\in U$, so $d(z,x)<d(x,y_x)$ for some $x$, but $r/5>2d(x,y_x)\geq d(x,z)+d(x,y_x)\geq d(z,y_x)>a$ since $y_x\notin B(z,a)$. This implies that $B(z,a)$ is contained in a unique ball $B(x,d(x,y_x))$, hence we need uncountably many such balls to cover the whole of $U$. 2) $X_3=X_1\setminus X_2$ is uncountable. For any $x\in X_3$ define $R(x)>0$ as the radius of a maximal at most countable open ball centered at $x$. Clearly $R(x)\geq r/10$ for any $x\in X_3$. For any $x\in X_3$ define a star centered at $x$ as a union $D=\{x\}\cup C$, where $C=\{z_1,z_2,\dots\}\subset X_3$ is a countable sequence of points with $d(x,z_i)\rightarrow R(x)+0$. Choose a maximal disjoint subfamily of stars. Clearly it is uncountable, else we may easily enlarge it. Denote by $U$ the set of centers of the chosen stars. It is open (as is any subset of $X_3$); assume that it is a countable union of balls $U=\cup_{i=1}^{\infty} B(x_i,r_i)$, $x_i\in U$. We have $r_i> R(x_i)$ for some $i$, else $U$ is at most countable. But then $B(x_i,r_i)$ contains infinitely many points of the star $D$ centered at $x_i$, while by our construction $U\cap D=\{x_i\}$. A contradiction.
|
Towards a contradiction, let us assume that we have a metric space $X = \{x_i : i < \omega_1\}$ in which any two points are at least unit distance apart and every subset of $X$ is the union of a countable family of open balls. Let $r_i$ be the supremum of all $r > 0$ such that $B(x_i, r)$ is countable. Construct $\{C_i : i < \omega_1\}$ such that (1) each $C_i$ is countable and the infimum of $\{d(x_i, y): y \in C_i\}$ is $r_i$, and (2) if $i < j$, then $x_i \notin C_j$. Now let $I \in [\omega_1]^{\omega_1}$ be such that whenever $i < j$ are both in $I$, $x_j \notin C_i$. Suppose $Y = \{x_i : i \in I\}$ is the union of a countable family $F$ of balls. Every ball in $F$ is contained in $Y$ and hence centered at some $x_j$ with $j \in I$; since $Y$ is uncountable, some ball in $F$ must be uncountable, so there is a ball $B(x_i, r) \in F$ with $r > r_i$. Let $i \in I$ be least with this property. Pick $y \in C_i \cap B(x_i, r)$, which is possible since the infimum of the distances from $x_i$ to $C_i$ is $r_i < r$. Then $y \in \bigcup F = Y$, so $y = x_j$ for some $j \in I$, which is impossible: $j < i$ would violate (2), $j = i$ is ruled out since $d(x_i, y) \geq r_i > 0$, and $j > i$ contradicts the choice of $I$.
|
{
"source": [
"https://mathoverflow.net/questions/181226",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4312/"
]
}
|
181,530 |
What justification can you give for the fact that "most ODEs do not have an explicit solution"?
|
If the ODE is linear --and the notion of «explicit» refers to Liouvillian solutions (towers of iterated quadratures and exponentials of meromorphic functions)-- then its differential Galois group ( Picard-Vessiot theory ) must be a solvable algebraic subgroup of $GL_n (\mathbb C)$. Such subgroups are rare: they define a proper algebraic subvariety. The defining equations correspond to commutators being trivial, e.g. $[G,G]=0$ in the abelian case, encoding the tower of normal subgroups appearing in the definition of a solvable group. In the non-linear context the intuition is the same: nice «transverse structures» are rare. Edit: for a statement regarding the non-linear case, see my paper http://fr.arxiv.org/abs/1308.6371v2 , Corollary C.
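To add a standard concrete instance (my addition, not part of the answer above): the Airy equation $y''=xy$ has differential Galois group $SL_2(\mathbb C)$ over $\mathbb C(x)$, which is not solvable, so by the Picard-Vessiot correspondence none of its nonzero solutions (in particular the Airy functions $\mathrm{Ai}$ and $\mathrm{Bi}$) is Liouvillian, i.e. expressible by iterated quadratures, exponentials and algebraic functions.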
|
{
"source": [
"https://mathoverflow.net/questions/181530",
"https://mathoverflow.net",
"https://mathoverflow.net/users/44691/"
]
}
|
181,855 |
In the latest what-if Randall Munroe asks for the smallest number of geodesics that intersect all regions of a map. The following shows that five paths of satellites suffice to cover the 50 states of the USA: A similar configuration where the lines are actually great circles is claimed by the author: They're all slightly curved, since the Earth is turning under the satellites, but it turns out that this arrangement of lines also works for the much simpler version of the question that ignores orbital motion: "How many straight (great-circle) lines does it take to intersect every state?" For both versions of the question, my best answer is a version of the arrangement above. There has been quite some work on similar-sounding problems. For stabbing (or finding transversals of) line segments see for example Stabbing line segments by H. Edelsbrunner, H. A. Maurer,
F. P. Preparata, A. L. Rosenberg, E. Welzl and D. Wood (and papers which reference it.)
or L.M. Schlipf's dissertation with examples of different kinds. Is there an algorithmic approach known to tackle this problem (or for the
simpler problem when all regions of the map are convex)? In the case of the 50 states of the USA, it is of course easy to see that one great circle does not suffice: take two states (e.g. New York and Louisiana) such that no great circle that intersects both also passes through a third state (e.g. Alaska). Similarly one can show that we need at least 3 great circles. Maybe it would be helpful to consider all triples of regions that do not lie on a great circle and use this hypergraph information to deduce lower bounds. What are good methods to find lower bounds? Randall Munroe conjectures that 5 is optimal: I don't know for sure that 5 is the absolute minimum; it's possible
there's a way to do it with four, but my guess is that there isn't.
[...] If
anyone finds a way (or proof that it's impossible) I'd love to see it!
|
Looking at this old question again, I'm now fairly convinced that the easiest route to solving this problem is to use ideas similar to the one suggested by David E Speyer in a comment , namely setting up an integer program from some combinatorial information (like the inclusion-maximal sets of states that lie on a common geodesic). For example, if longest_geodesics is such a set of maximal geodesics, then the following Sage code will give the answer $5$ . p = MixedIntegerLinearProgram(maximization=False)
m = p.new_variable(binary=True)
for states in all_states:
    p.add_constraint(p.sum(m[line]*(1 if (states in line) else 0)
                           for line in longest_geodesics)>=1)
p.set_objective(p.sum(m[line] for line in longest_geodesics))
p.solve() If we assume that the set longest_geodesics actually contains all geodesics and that the solver actually found a correct solution, then this should be a proof that it is impossible to do with $4$ geodesics. Obtaining a complete set of geodesics (or a superset thereof) could be done by finding a complete set of "forbidden triples", i.e. triples of states that do not lie on a common geodesic, and then building a lattice out of all sets that do not contain any forbidden triple.
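To illustrate how such forbidden triples might be detected in practice, here is a rough numerical sketch (my addition, not the author's actual code; the function names are made up, and each state is assumed to be given as a numpy array of unit vectors densely sampled from its boundary, e.g. built from the shapefile mentioned below). It only gives heuristic evidence that a triple is forbidden, since it samples candidate great circles rather than certifying that none exists.
import numpy as np

def to_unit(lon_deg, lat_deg):
    # Longitude/latitude in degrees -> unit vector on the sphere.
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def circle_meets_state(normal, pts, tol=1e-9):
    # A great circle is the sphere's intersection with a plane through the
    # origin; `normal` is that plane's unit normal.  The circle meets a state
    # (sampled as rows of unit vectors in `pts`) iff the samples do not all
    # lie strictly on one side of the plane.
    d = pts @ normal
    return d.min() <= tol and d.max() >= -tol

def triple_seems_forbidden(a, b, c, samples=5000, seed=0):
    # Heuristic: great circles spanned by one sample point of `a` and one of
    # `b` meet both states; if none of them also meets `c`, the triple is
    # probably forbidden (this is evidence, not a proof).
    rng = np.random.default_rng(seed)
    for _ in range(samples):
        p, q = a[rng.integers(len(a))], b[rng.integers(len(b))]
        n = np.cross(p, q)
        if np.linalg.norm(n) < 1e-12:   # degenerate: p and q (anti)parallel
            continue
        if circle_meets_state(n / np.linalg.norm(n), c):
            return False
    return True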
I took the map data cb_2018_us_state_20m.zip from this page https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html and found a (hopefully) complete set of longest_geodesics . And in fact the integer program didn't find a solution with only $4$ great circles. Here's one with $5$ : This is using a gnomonic projection (suggested by Martin M. W. in a comment ) to make all the great circles appear as lines. A more conventional map would look like this: However some people might prefer a more human-checkable proof , so I provide an argument using techniques and ideas also mentioned in other answers/comments. Throughout the argument the only assumption that is made is that certain triples of states cannot lie on a great circle. I want to show that it is impossible to cover all $50$ states with $4$ great circles. First consider the following set of states: {'Alaska', 'Delaware', 'Hawaii', 'Kentucky', 'Vermont', 'Washington'} . I claim that no three of them can lie on a common great circle. Therefore, if those $6$ states are covered by $4$ great circles, they are split either $2, 2, 2$ or $1, 1, 2, 2$ , since these are the only partitions of $6$ into at most $4$ parts of size at most $2$ . First, let's consider splitting {'Alaska', 'Delaware', 'Hawaii', 'Kentucky', 'Vermont', 'Washington'} into $3$ pairs, i.e. the split $2, 2, 2$ .
Hence we assume that $3$ great circles each cover two of those 6 states (and potentially many more states). We prove that one more great circle is not enough to cover all states by giving a triple of states which itself cannot lie on one great circle and in which each state cannot lie on any of the $3$ great circles covering the pairs in the split. For each partition we first write the split and then the triple. For example {{'Alaska', 'Delaware'}, {'Hawaii', 'Kentucky'}, {'Vermont', 'Washington'}}: {'Florida', 'South Carolina', 'Wyoming'} means that there is no great circle through {'Florida', 'South Carolina', 'Wyoming'} and that each of these three states does not lie on a great circle containing any of the three pairs. So for example {'Florida', 'Alaska', 'Delaware'} cannot lie on a great circle, and the same holds for {'Florida', 'Hawaii', 'Kentucky'} and {'Florida', 'Vermont', 'Washington'} . Analogously for 'South Carolina' and 'Wyoming' . {{'Alaska', 'Delaware'}, {'Hawaii', 'Kentucky'}, {'Vermont', 'Washington'}}: {'Florida', 'South Carolina', 'Wyoming'}
{{'Alaska', 'Delaware'}, {'Hawaii', 'Vermont'}, {'Kentucky', 'Washington'}}: {'Rhode Island', 'Arkansas', 'Colorado'}
{{'Alaska', 'Delaware'}, {'Hawaii', 'Washington'}, {'Kentucky', 'Vermont'}}: {'Florida', 'South Carolina', 'Wyoming'}
{{'Alaska', 'Hawaii'}, {'Delaware', 'Kentucky'}, {'Vermont', 'Washington'}}: {'Florida', 'South Carolina', 'Wyoming'}
{{'Alaska', 'Hawaii'}, {'Delaware', 'Vermont'}, {'Kentucky', 'Washington'}}: {'Louisiana', 'Rhode Island', 'New Mexico'}
{{'Alaska', 'Hawaii'}, {'Delaware', 'Washington'}, {'Kentucky', 'Vermont'}}: {'Florida', 'South Carolina', 'Wyoming'}
{{'Alaska', 'Kentucky'}, {'Delaware', 'Hawaii'}, {'Vermont', 'Washington'}}: {'Connecticut', 'New Mexico', 'Louisiana'}
{{'Alaska', 'Vermont'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'Washington'}}: {'Louisiana', 'Michigan', 'New Mexico'}
{{'Alaska', 'Washington'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'Vermont'}}: {'Connecticut', 'South Carolina', 'Minnesota'}
{{'Alaska', 'Kentucky'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Washington'}}: {'Rhode Island', 'Arkansas', 'Colorado'}
{{'Alaska', 'Kentucky'}, {'Delaware', 'Washington'}, {'Hawaii', 'Vermont'}}: {'Rhode Island', 'Arkansas', 'Colorado'}
{{'Alaska', 'Vermont'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Washington'}}: {'Florida', 'South Carolina', 'Wyoming'}
{{'Alaska', 'Washington'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Vermont'}}: see argument below
{{'Alaska', 'Vermont'}, {'Delaware', 'Washington'}, {'Hawaii', 'Kentucky'}}: {'Florida', 'South Carolina', 'Wyoming'}
{{'Alaska', 'Washington'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Kentucky'}}: {'Rhode Island', 'Iowa', 'North Dakota'} For the split {{'Alaska', 'Washington'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Vermont'}} , we first notice that 'Connecticut' and 'South Carolina' can't lie on any of the three great circles going through the three pairs in the split, therefore the $4$ th great circle has to contain both of those. Then we look at 'Indiana' and 'Tennessee' : the following triples cannot lie on a great circle: {'Indiana', 'Washington', 'Alaska'}
{'Indiana', 'Hawaii', 'Vermont'}
{'Indiana', 'Connecticut', 'South Carolina'}
{'Tennessee', 'Washington', 'Alaska'}
{'Tennessee', 'Hawaii', 'Vermont'}
{'Tennessee', 'Connecticut', 'South Carolina'} Therefore both 'Indiana' and 'Tennessee' must lie on the great circle containing 'Delaware' and 'Kentucky' , and a contradiction is reached because the triple {'Tennessee', 'Indiana', 'Delaware'} does not lie on a great circle. Second let's consider splitting {'Alaska', 'Delaware', 'Hawaii', 'Kentucky', 'Vermont', 'Washington'} into $4$ non-empty parts, i.e. splits of the type $1, 1, 2, 2$ .
Here we assume that $4$ great circles cover those $6$ states (and potentially many more states). We now consider all possible such partitions (there are $45$).
For some of these partitions we prove that they are impossible to complete by giving a triple of states with the following properties: (1) each of the three states in the triple does not lie on a great circle going through either of the two parts with two elements; (2) the three states in the triple cannot lie on one great circle; (3) no combination of two states from the triple can lie on a great circle together with either of the two parts with one element. From (1) it follows that the three states have to be assigned to the two parts with one element. Not all of them can go to only one of them because of (2). But splitting " $3$ " into two nonempty parts leaves one part with $2$ elements, and this leads to a contradiction by (3).
Take for example {{'Alaska', 'Delaware'}, {'Hawaii', 'Kentucky'}, {'Vermont'}, {'Washington'}}: {'Rhode Island', 'South Carolina', 'Arkansas'} We have (1) because 'Rhode Island' cannot lie on the great circle through {'Alaska', 'Delaware'} or {'Hawaii', 'Kentucky'} , and analogously for 'South Carolina' and 'Arkansas' .
We have (2) because {'Rhode Island', 'South Carolina', 'Arkansas'} cannot lie on a great circle.
We have (3) because no matter what combination of two states you take from {'Rhode Island', 'South Carolina', 'Arkansas'} , this pair cannot lie on a great circle with 'Vermont' or 'Washington' .
Here's the list of all the partitions we can exclude with that argument: {{'Alaska', 'Delaware'}, {'Hawaii', 'Kentucky'}, {'Vermont'}, {'Washington'}}: {'Rhode Island', 'South Carolina', 'Arkansas'}
{{'Alaska', 'Delaware'}, {'Hawaii', 'Vermont'}, {'Kentucky'}, {'Washington'}}: {'Rhode Island', 'Arizona', 'Florida'}
{{'Alaska', 'Delaware'}, {'Hawaii'}, {'Kentucky', 'Vermont'}, {'Washington'}}: {'Utah', 'South Carolina', 'Florida'}
{{'Alaska', 'Delaware'}, {'Hawaii', 'Washington'}, {'Kentucky'}, {'Vermont'}}: {'Arizona', 'Louisiana', 'Wyoming'}
{{'Alaska', 'Delaware'}, {'Hawaii'}, {'Kentucky', 'Washington'}, {'Vermont'}}: {'Louisiana', 'Oklahoma', 'Connecticut'}
{{'Alaska', 'Hawaii'}, {'Delaware', 'Vermont'}, {'Kentucky'}, {'Washington'}}: {'Arizona', 'Rhode Island', 'Wyoming'}
{{'Alaska', 'Hawaii'}, {'Delaware'}, {'Kentucky', 'Vermont'}, {'Washington'}}: {'Arizona', 'Rhode Island', 'Wyoming'}
{{'Alaska', 'Hawaii'}, {'Delaware', 'Washington'}, {'Kentucky'}, {'Vermont'}}: {'Arizona', 'Louisiana', 'Wyoming'}
{{'Alaska', 'Hawaii'}, {'Delaware'}, {'Kentucky', 'Washington'}, {'Vermont'}}: {'Arkansas', 'Michigan', 'Connecticut'}
{{'Alaska', 'Hawaii'}, {'Delaware'}, {'Kentucky'}, {'Vermont', 'Washington'}}: {'Arizona', 'Louisiana', 'Wyoming'}
{{'Alaska'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'Vermont'}, {'Washington'}}: {'New Mexico', 'Michigan', 'Florida'}
{{'Alaska'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'Washington'}, {'Vermont'}}: {'Oklahoma', 'Wisconsin', 'Connecticut'}
{{'Alaska', 'Kentucky'}, {'Delaware', 'Washington'}, {'Hawaii'}, {'Vermont'}}: {'Louisiana', 'Oklahoma', 'Rhode Island'}
{{'Alaska', 'Kentucky'}, {'Delaware'}, {'Hawaii', 'Washington'}, {'Vermont'}}: {'Arizona', 'Louisiana', 'Wyoming'}
{{'Alaska', 'Vermont'}, {'Delaware'}, {'Hawaii', 'Kentucky'}, {'Washington'}}: {'South Carolina', 'Michigan', 'Arkansas'}
{{'Alaska', 'Washington'}, {'Delaware'}, {'Hawaii', 'Kentucky'}, {'Vermont'}}: {'Iowa', 'Connecticut', 'North Dakota'}
{{'Alaska'}, {'Delaware', 'Washington'}, {'Hawaii', 'Kentucky'}, {'Vermont'}}: {'Rhode Island', 'South Carolina', 'Arkansas'}
{{'Alaska', 'Vermont'}, {'Delaware'}, {'Hawaii', 'Washington'}, {'Kentucky'}}: {'Arizona', 'Louisiana', 'Wyoming'}
{{'Alaska', 'Vermont'}, {'Delaware'}, {'Hawaii'}, {'Kentucky', 'Washington'}}: {'Louisiana', 'Oklahoma', 'Wisconsin'}
{{'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Washington'}, {'Kentucky'}}: {'Rhode Island', 'Arizona', 'South Dakota'}
{{'Alaska'}, {'Delaware', 'Washington'}, {'Hawaii', 'Vermont'}, {'Kentucky'}}: {'Rhode Island', 'Arizona', 'Florida'}
{{'Alaska'}, {'Delaware', 'Washington'}, {'Hawaii'}, {'Kentucky', 'Vermont'}}: {'Utah', 'South Carolina', 'Florida'}
{{'Alaska'}, {'Delaware'}, {'Hawaii', 'Washington'}, {'Kentucky', 'Vermont'}}: {'Rhode Island', 'Arizona', 'South Dakota'} For the rest of these partitions, we consider a pair of states such that (1) each state in the pair does not lie on a great circle going through either of the two parts with two elements, and (2) the two states in the pair cannot lie on a great circle together with either of the two parts with one element. Therefore the two states in the pair have to be distributed on the two parts with one element one way or the other. We list both possible ways and the resulting partitions of the $8$ states into $4$ great circles with at least $2$ elements each. Then we prove that each such partition cannot be completed, in one of the following two ways: (a) giving a single state which cannot lie on a great circle with any of its parts; (b) giving three states which cannot lie on a common great circle, but have to lie in one of the parts. (Which part can be seen because one of the three states will already be a state in the corresponding part.) Let's take as an example {{'Alaska', 'Kentucky'}, {'Delaware', 'Hawaii'}, {'Vermont'}, {'Washington'}} . Here a pair would be ('Arkansas', 'Rhode Island') . We have (1) because each of the triples {'Arkansas', 'Alaska', 'Kentucky'}, {'Arkansas', 'Delaware', 'Hawaii'}, {'Rhode Island', 'Alaska', 'Kentucky'} and {'Rhode Island', 'Delaware', 'Hawaii'} cannot lie on a great circle, and we have (2) because {'Arkansas', 'Rhode Island', 'Vermont'} and {'Arkansas', 'Rhode Island', 'Washington'} cannot lie on a great circle. Hence we consider the two partitions where 'Arkansas' and 'Rhode Island' are paired with 'Vermont' and 'Washington' .
For the candidate [{'Kentucky', 'Alaska'}, {'Delaware', 'Hawaii'}, {'Arkansas', 'Vermont'}, {'Washington', 'Rhode Island'}] we are in case (b). Each of 'Florida' and 'South Carolina' cannot lie on any of the great circles given by {'Delaware', 'Hawaii'} , {'Arkansas', 'Vermont'} or {'Washington', 'Rhode Island'} . Therefore they must both lie on the great circle going through {'Kentucky', 'Alaska'} , but this is a contradiction since {'Florida', 'South Carolina', 'Alaska'} cannot lie on a common great circle.
For the candidate [{'Kentucky', 'Alaska'}, {'Delaware', 'Hawaii'}, {'Rhode Island', 'Vermont'}, {'Washington', 'Arkansas'}] we are in case (a): the state 'New Mexico' cannot lie on a great circle going through any of the pairs in the candidate.
Here's the list of all but one of the remaining partitions of the $6$ states into $4$ parts, with proofs as described above: {{'Alaska', 'Delaware'}, {'Hawaii'}, {'Kentucky'}, {'Vermont', 'Washington'}}: ('Wyoming', 'Arizona')
[{'Delaware', 'Alaska'}, {'Hawaii', 'Wyoming'}, {'Kentucky', 'Arizona'}, {'Washington', 'Vermont'}]: Mississippi
[{'Delaware', 'Alaska'}, {'Hawaii', 'Arizona'}, {'Kentucky', 'Wyoming'}, {'Washington', 'Vermont'}]: Connecticut
{{'Alaska', 'Hawaii'}, {'Delaware', 'Kentucky'}, {'Vermont'}, {'Washington'}}: ('Michigan', 'South Carolina')
[{'Hawaii', 'Alaska'}, {'Delaware', 'Kentucky'}, {'Michigan', 'Vermont'}, {'Washington', 'South Carolina'}]: Florida
[{'Hawaii', 'Alaska'}, {'Delaware', 'Kentucky'}, {'South Carolina', 'Vermont'}, {'Washington', 'Michigan'}]: Wyoming
{{'Alaska', 'Kentucky'}, {'Delaware', 'Hawaii'}, {'Vermont'}, {'Washington'}}: ('Arkansas', 'Rhode Island')
[{'Kentucky', 'Alaska'}, {'Delaware', 'Hawaii'}, {'Arkansas', 'Vermont'}, {'Washington', 'Rhode Island'}]: ['Florida', 'South Carolina', 'Alaska']
[{'Kentucky', 'Alaska'}, {'Delaware', 'Hawaii'}, {'Rhode Island', 'Vermont'}, {'Washington', 'Arkansas'}]: New Mexico
{{'Alaska', 'Vermont'}, {'Delaware', 'Hawaii'}, {'Kentucky'}, {'Washington'}}: ('New Mexico', 'South Dakota')
[{'Alaska', 'Vermont'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'New Mexico'}, {'Washington', 'South Dakota'}]: Louisiana
[{'Alaska', 'Vermont'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'South Dakota'}, {'Washington', 'New Mexico'}]: Michigan
{{'Alaska', 'Washington'}, {'Delaware', 'Hawaii'}, {'Kentucky'}, {'Vermont'}}: ('North Dakota', 'Connecticut')
[{'Washington', 'Alaska'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'North Dakota'}, {'Connecticut', 'Vermont'}]: ['Louisiana', 'New Mexico', 'Alaska']
[{'Washington', 'Alaska'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'Connecticut'}, {'Vermont', 'North Dakota'}]: South Carolina
{{'Alaska'}, {'Delaware', 'Hawaii'}, {'Kentucky'}, {'Vermont', 'Washington'}}: ('North Carolina', 'Connecticut')
[{'North Carolina', 'Alaska'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'Connecticut'}, {'Washington', 'Vermont'}]: ['Delaware', 'Kansas', 'Wyoming']
[{'Alaska', 'Connecticut'}, {'Delaware', 'Hawaii'}, {'Kentucky', 'North Carolina'}, {'Washington', 'Vermont'}]: ['Oklahoma', 'North Carolina', 'South Dakota']
{{'Alaska', 'Kentucky'}, {'Delaware', 'Vermont'}, {'Hawaii'}, {'Washington'}}: ('Arkansas', 'Rhode Island')
[{'Kentucky', 'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Arkansas'}, {'Washington', 'Rhode Island'}]: Wyoming
[{'Kentucky', 'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Rhode Island'}, {'Washington', 'Arkansas'}]: New Mexico
{{'Alaska', 'Kentucky'}, {'Delaware'}, {'Hawaii', 'Vermont'}, {'Washington'}}: ('Arkansas', 'Rhode Island')
[{'Kentucky', 'Alaska'}, {'Delaware', 'Arkansas'}, {'Hawaii', 'Vermont'}, {'Washington', 'Rhode Island'}]: Kansas
[{'Kentucky', 'Alaska'}, {'Delaware', 'Rhode Island'}, {'Hawaii', 'Vermont'}, {'Washington', 'Arkansas'}]: New Mexico
{{'Alaska', 'Kentucky'}, {'Delaware'}, {'Hawaii'}, {'Vermont', 'Washington'}}: ('Oklahoma', 'Connecticut')
[{'Kentucky', 'Alaska'}, {'Delaware', 'Oklahoma'}, {'Hawaii', 'Connecticut'}, {'Washington', 'Vermont'}]: Louisiana
[{'Kentucky', 'Alaska'}, {'Delaware', 'Connecticut'}, {'Oklahoma', 'Hawaii'}, {'Washington', 'Vermont'}]: Wyoming
{{'Alaska', 'Vermont'}, {'Delaware', 'Kentucky'}, {'Hawaii'}, {'Washington'}}: ('Michigan', 'South Carolina')
[{'Alaska', 'Vermont'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Michigan'}, {'Washington', 'South Carolina'}]: Florida
[{'Alaska', 'Vermont'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'South Carolina'}, {'Washington', 'Michigan'}]: Wyoming
{{'Alaska'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Vermont'}, {'Washington'}}: ('North Carolina', 'Connecticut')
[{'North Carolina', 'Alaska'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Vermont'}, {'Washington', 'Connecticut'}]: ['Delaware', 'Oklahoma', 'Louisiana']
[{'Alaska', 'Connecticut'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Vermont'}, {'Washington', 'North Carolina'}]: Florida
{{'Alaska', 'Washington'}, {'Delaware', 'Kentucky'}, {'Hawaii'}, {'Vermont'}}: ('Michigan', 'South Carolina')
[{'Washington', 'Alaska'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Michigan'}, {'South Carolina', 'Vermont'}]: ['Iowa', 'North Dakota', 'Hawaii']
[{'Washington', 'Alaska'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'South Carolina'}, {'Michigan', 'Vermont'}]: Connecticut
{{'Alaska'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Washington'}, {'Vermont'}}: ('Iowa', 'Connecticut')
[{'Iowa', 'Alaska'}, {'Delaware', 'Kentucky'}, {'Washington', 'Hawaii'}, {'Connecticut', 'Vermont'}]: Michigan
[{'Alaska', 'Connecticut'}, {'Delaware', 'Kentucky'}, {'Washington', 'Hawaii'}, {'Iowa', 'Vermont'}]: South Carolina
{{'Alaska'}, {'Delaware', 'Kentucky'}, {'Hawaii'}, {'Vermont', 'Washington'}}: ('North Carolina', 'Connecticut')
[{'North Carolina', 'Alaska'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'Connecticut'}, {'Washington', 'Vermont'}]: ['Delaware', 'Oklahoma', 'Louisiana']
[{'Alaska', 'Connecticut'}, {'Delaware', 'Kentucky'}, {'Hawaii', 'North Carolina'}, {'Washington', 'Vermont'}]: Wyoming
{{'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Kentucky'}, {'Washington'}}: ('Arkansas', 'Rhode Island')
[{'Arkansas', 'Alaska'}, {'Delaware', 'Vermont'}, {'Kentucky', 'Hawaii'}, {'Washington', 'Rhode Island'}]: ['Hawaii', 'New Mexico', 'Ohio']
[{'Rhode Island', 'Alaska'}, {'Delaware', 'Vermont'}, {'Kentucky', 'Hawaii'}, {'Washington', 'Arkansas'}]: Michigan
{{'Alaska'}, {'Delaware'}, {'Hawaii', 'Kentucky'}, {'Vermont', 'Washington'}}: ('Connecticut', 'Arkansas')
[{'Alaska', 'Connecticut'}, {'Delaware', 'Arkansas'}, {'Kentucky', 'Hawaii'}, {'Washington', 'Vermont'}]: South Carolina
[{'Arkansas', 'Alaska'}, {'Delaware', 'Connecticut'}, {'Kentucky', 'Hawaii'}, {'Washington', 'Vermont'}]: ['Hawaii', 'New Mexico', 'Ohio']
{{'Alaska', 'Vermont'}, {'Delaware', 'Washington'}, {'Hawaii'}, {'Kentucky'}}: ('Wyoming', 'Arizona')
[{'Alaska', 'Vermont'}, {'Washington', 'Delaware'}, {'Hawaii', 'Wyoming'}, {'Kentucky', 'Arizona'}]: Mississippi
[{'Alaska', 'Vermont'}, {'Washington', 'Delaware'}, {'Hawaii', 'Arizona'}, {'Kentucky', 'Wyoming'}]: ['Maine', 'Connecticut', 'Alaska']
{{'Alaska', 'Washington'}, {'Delaware', 'Vermont'}, {'Hawaii'}, {'Kentucky'}}: ('North Dakota', 'Rhode Island')
[{'Washington', 'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'North Dakota'}, {'Kentucky', 'Rhode Island'}]: Iowa
[{'Washington', 'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Rhode Island'}, {'Kentucky', 'North Dakota'}]: ['Oklahoma', 'Washington', 'Arizona']
{{'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii'}, {'Kentucky', 'Washington'}}: ('Arkansas', 'Rhode Island')
[{'Arkansas', 'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Rhode Island'}, {'Washington', 'Kentucky'}]: New Mexico
[{'Rhode Island', 'Alaska'}, {'Delaware', 'Vermont'}, {'Hawaii', 'Arkansas'}, {'Washington', 'Kentucky'}]: Michigan
{{'Alaska', 'Washington'}, {'Delaware'}, {'Hawaii', 'Vermont'}, {'Kentucky'}}: see argument below
{{'Alaska'}, {'Delaware'}, {'Hawaii', 'Vermont'}, {'Kentucky', 'Washington'}}: ('Arkansas', 'Rhode Island')
[{'Arkansas', 'Alaska'}, {'Delaware', 'Rhode Island'}, {'Hawaii', 'Vermont'}, {'Washington', 'Kentucky'}]: New Mexico
[{'Rhode Island', 'Alaska'}, {'Delaware', 'Arkansas'}, {'Hawaii', 'Vermont'}, {'Washington', 'Kentucky'}]: Colorado
{{'Alaska', 'Washington'}, {'Delaware'}, {'Hawaii'}, {'Kentucky', 'Vermont'}}: ('Michigan', 'South Carolina')
[{'Washington', 'Alaska'}, {'Delaware', 'Michigan'}, {'Hawaii', 'South Carolina'}, {'Kentucky', 'Vermont'}]: Rhode Island
[{'Washington', 'Alaska'}, {'Delaware', 'South Carolina'}, {'Hawaii', 'Michigan'}, {'Kentucky', 'Vermont'}]: ['Washington', 'New Mexico', 'Kansas'] For the only remaining partition, namely {{'Alaska', 'Washington'}, {'Delaware'}, {'Hawaii', 'Vermont'}, {'Kentucky'}} we provide the following ad-hoc argument:
We use the fact that the following states cannot lie on the great circles through either of the parts {'Alaska', 'Washington'} and {'Hawaii', 'Vermont'} : ['South Carolina', 'West Virginia', 'Tennessee', 'Illinois', 'Rhode Island']. Therefore 'South Carolina' must lie on a great circle with either 'Delaware' or 'Kentucky' . Assume 'South Carolina' lies on the great circle with 'Delaware' . On this great circle we cannot also have any of 'West Virginia', 'Tennessee' and 'Illinois' , which therefore must all lie on the great circle through 'Kentucky' , but this is a contradiction, since 'West Virginia', 'Tennessee' and 'Illinois' cannot lie on a great circle.
Therefore 'South Carolina' has to lie on the same great circle as 'Kentucky' . Since 'Rhode Island' does not lie on a great circle with 'Kentucky' and 'South Carolina' , we have to assume that it lies on the same great circle as 'Delaware' , which leaves us with the following partition: [{'Alaska', 'Washington'}, {'Delaware', 'Rhode Island'}, {'Hawaii', 'Vermont'}, {'Kentucky', 'South Carolina'}], which can be seen to be absurd with the method described above by considering ['Arkansas', 'New Mexico', 'Washington'] , i.e. both 'Arkansas' and 'New Mexico' can only lie on the part with 'Alaska' , but at the same time cannot lie on a great circle with it. This got a little longer than I expected; I do like the proof using integer programming better.
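As a small sanity check of the case counts used above (my addition, not part of the original answer), the following snippet enumerates the relevant splits of the six states and confirms the $15$ cases of type $2,2,2$ listed above and the $45$ partitions of type $1,1,2,2$:
from itertools import combinations

STATES = ['Alaska', 'Delaware', 'Hawaii', 'Kentucky', 'Vermont', 'Washington']

def pairings(items):
    # All ways of splitting an even-sized list into unordered pairs.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, second in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [frozenset({first, second})] + tail

def splits_1122(items):
    # All partitions into two singletons and two pairs.
    for singles in combinations(items, 2):
        rest = [s for s in items if s not in singles]
        for pairing in pairings(rest):
            yield [frozenset({singles[0]}), frozenset({singles[1]})] + pairing

print(len(list(pairings(STATES))))      # 15 splits of type 2,2,2
print(len(list(splits_1122(STATES))))   # 45 splits of type 1,1,2,2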
|
{
"source": [
"https://mathoverflow.net/questions/181855",
"https://mathoverflow.net",
"https://mathoverflow.net/users/39495/"
]
}
|
181,858 |
The values of Riemann's zeta function at the integers have been extensively studied. I was wondering: is there anything interesting known (or conjectured) to happen arithmetically outside the real line (and outside of the critical strip)? Conjectured irrationality, mysteriously neat expressions, or something of the like. Edit . Since the question is necessarily broad, I'll try to make it a bit more clear: Is there any $s\in \mathbb{C}$, $\mathfrak{I}(s)\neq0$, $\mathfrak{R}(s)>1$ such that $\zeta(s)$ or its real/imaginary components are suspected to be transcendental, or to give any new arithmetical information about $\mathbb{Q}$?
|
|
{
"source": [
"https://mathoverflow.net/questions/181858",
"https://mathoverflow.net",
"https://mathoverflow.net/users/43108/"
]
}
|
182,006 |
Since many proofs by contradiction end with "we have built an object with such, such and such properties, which does not exist", it seems relevant to give this object a name, even though (in fact because) it does not exist. The most striking example in my field of research is the following. Definition : A random variable $X$ is said to be uniform in $\mathbb{Z}$ if it is $\mathbb{Z}$-valued and has the same distribution as $X+1$. Theorem : No random variable is uniform in $\mathbb{Z}$. What are the non-existing objects you have come across?
|
The elliptic curve attached to a nontrivial solution of $x^n+y^n=z^n,\quad n>2$.
|
{
"source": [
"https://mathoverflow.net/questions/182006",
"https://mathoverflow.net",
"https://mathoverflow.net/users/56097/"
]
}
|
182,078 |
Ever since Newton invented calculus, mathematicians have been using differential equations to model natural phenomena, and they have been very successful in doing so. Yet they could have been just as successful in modeling natural phenomena with difference equations instead of differential equations (just choose a very small $\Delta x$ instead of $dx$). Furthermore, difference equations don't require complicated epsilon-delta definitions. They are simple enough for anybody who knows high school math to understand. So why have mathematicians made things difficult by using complicated differential equations instead of simple difference equations? What is the advantage of using differential equations? My question was inspired by this paper by Doron Zeilberger, ""Real" Analysis is a Degenerate Case of Discrete Analysis": http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/real.html
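As a minimal numerical illustration of the premise (my addition, not part of the original question): replacing $dx$ by a small step $h$ turns the differential equation $y'=-y$, $y(0)=1$, into the difference equation $y_{k+1}=y_k-h\,y_k$, whose value at time $1$ approaches the exact value $e^{-1}$ as $h\to 0$.
import math

def euler(h, T=1.0, y0=1.0):
    # Forward-difference scheme y_{k+1} = y_k + h*(-y_k) for y' = -y, y(0) = y0.
    y = y0
    for _ in range(round(T / h)):
        y += h * (-y)
    return y

for h in (0.1, 0.01, 0.001):
    print(h, euler(h), abs(euler(h) - math.exp(-1.0)))  # error shrinks roughly like h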
|
Although small discrete systems are easy to work with, continuum models are easier to deal with than large discrete systems.
Whether or not nature is fundamentally discrete, the most useful models are often continuous because the discreteness can only occur in very small scales.
Discreteness is useful to include in the model if it occurs in the situation we are interested in.
I think this is to a large extent a question of scales of interest. For example, if I have a mole of gas in a container, I could well model it as individual particles.
But if I want a simpler model to work with and I am only interested in the behaviour at scales well above the atomic one, the usual "continuous" fluid mechanics is a good choice.
This is because at such scales the gas is essentially scaling invariant (it obeys similar laws if you zoom in) and thus calculus becomes applicable (and very powerful).
This is of course not true if I go all the way to the atomic scale, but I am not interested in that scale, so it does not matter if my model treats gas in the same way at those scales as well.
Large scale continuous quantities like pressure and density give a good understanding (including the ability to make good predictions quickly) and that should not be neglected.
(Of course, if I want something more coarse, I can go to a thermodynamic description. Either way, modelling includes a step where the number of particles is taken to infinity to simplify mathematics.) The "scales of interest" phenomenon happens in both directions; we may neglect both too small and too large scales.
For example, it might be a good idea to model a long rod by an infinitely long one (thus in a sense removing discreteness from the model).
Then one can apply Fourier analysis or any other such tools that assume that the rod is infinitely long and mathematics becomes easier.
This is maybe more common with respect to time than length: Fourier or Laplace transforms with respect to time are used for systems that have finite lifetime.
If we are not interested in very large scales, we can assume our system to be infinitely large. Discrete models are probably useful if nature has genuinely discrete structure (regarding the physical system in question) and we are interested in phenomena at the scale where discreteness is visible.
But seen on a larger scale, a discrete model would contain something (particles or some other discrete structure) that we cannot measure and might not even be interested in.
Something that cannot be measured and does not have a significant impact on the behaviour of the system should be left out of the model.
This is related to the observation that continuum models often work well for large discrete systems. Let me conclude with an observation that is easy to miss because we are so used to it:
At human scales nature seems continuous.
|
{
"source": [
"https://mathoverflow.net/questions/182078",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7089/"
]
}
|
182,178 |
Is there an example of an irreducible polynomial $f(x) \in \mathbb{Q}[x]$ with a real root expressible in terms of real radicals and another real root not expressible in terms of real radicals?
|
The answer to the question is yes (so the answer to the title is no ) and I will give an example later. Let me first recall a couple of results. The first one is the following, that can be found in [Cox, Galois Theory , Theorem 8.6.5]. Theorem 1. Let $F$ be a subfield of $\mathbb{R}$ and let $f \in F[x]$ be an irreducible polynomial with splitting field $F \subset L \subset \mathbb{R}$ . Then the following conditions are equivalent. (1) Some root of $f$ is expressible by real radicals over $F$ . (2) All roots of $f$ are expressible by real radicals over $F$ in which only square roots appear. (3) $[L:F]$ is a power of $2$ . So, if $f$ splits completely over $\mathbb{R}$ , the existence of a root expressible by real radicals forces all the roots to be so. On the other hand, when $f$ does not split completely in $\mathbb{R}$ this is no longer true . Let us state our second result, that can be found in [A. Loewy, Über die Reduktion algebraischer Gleichungen durch Adjunktion insbesondere reeller Radikale , Math. Zeitschr. 15 , 261-273 (1922)], see also Cox book, Theorem 8.6.12. Theorem 2. Let $F$ be a subfield of $\mathbb{R}$ and $f \in F[x]$ irreducible of degree $2^mn$ , with $n$ odd. Then $f$ has at most $2^m$ roots expressible by real radicals over $F$ . In particular, when $f \in \mathbb{Q}[x]$ is irreducible and of odd degree, Theorem 2 implies that at most one root of $f$ is expressible by real radicals. Note that if the degree is $3$ then Theorem 2 is consistent with Cardano's formulas, and if the degree is a power of $2$ then it is consistent with Theorem 1. Finally, let us give the following example answering the question, that can be found in Loewy's paper quoted above, page 272. Let us consider the polynomial $$x^6+6x^4-234x^2-54x-3 =(x^3+(3+9 \sqrt{3})x+ \sqrt{3})(x^3+(3-9 \sqrt{3})x- \sqrt{3}).$$ It is irreducible over $\mathbb{Q}$ by Eisenstein's criterion and it has one real root expressible by real radicals, three real roots not expressible by real radicals and two complex roots.
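As a quick computational sanity check of this example (my addition, not part of the original answer, using SymPy), one can verify the factorization over $\mathbb{Q}(\sqrt 3)$, the irreducibility over $\mathbb{Q}$ and the number of real roots:
import sympy as sp

x = sp.symbols('x')
f = x**6 + 6*x**4 - 234*x**2 - 54*x - 3
s = sp.sqrt(3)

# The claimed factorization over Q(sqrt(3)):
g = (x**3 + (3 + 9*s)*x + s) * (x**3 + (3 - 9*s)*x - s)
print(sp.expand(g - f))              # 0, so the factorization holds
print(sp.Poly(f, x).is_irreducible)  # True (Eisenstein at p = 3)
print(len(sp.real_roots(f)))         # 4 real roots, hence 2 complex ones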
|
{
"source": [
"https://mathoverflow.net/questions/182178",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29500/"
]
}
|
182,215 |
I've recently been reading some standard textbooks on the philosophy of mathematics, and I've become quite frustrated that (surely due to my own limitations) I don't seem to be gleaning any mathematical insights from them. My naïve expectation would be that philosophy might take a difficult construction or proof, and clarify it by isolating the key ideas behind it. Having isolated the key ideas, philosophy might then highlight their relevance and thus point the way forward. Beyond this, I would hope that philosophy might elucidate the `true meaning' of axioms and of definitions by examining their ontology in a wider context. In reality, to the best of my knowledge (please prove me wrong!) both of the above tasks seem to be carried out exclusively by mathematicians, physicists, computer scientists, and other natural scientists, as far as I can see. To play the devil's advocate, philosophy seems to me like it might historically have largely played an opposite role, labeling certain objects as "unreal" and "unnatural" which in fact later turned out to be fruitful to study (negative numbers, irrational numbers, complex numbers...). Question : Has it ever happened that philosophy has elucidated and clarified a mathematical concept, proof, or construction in a way useful to research mathematicians? Philosophers have created much new mathematics ( e.g. the work of C.S. Peirce, much of which is bona fide mathematical research), but the question is not about this, but rather about philosophy as practiced by philosophers providing elucidation, explanation, and clarification of existing mathematics.
|
I find the case of Alan Turing's development of the concept of computability to be an example. Before Turing, the logicians had no clear concept of what it means to say that a function is computable. Even Gödel had despaired of ever having a formal notion of computability, because he had expected that for any such formal notion of computability, we would be able to diagonalize against it and thereby find a function that was computable in an intuitive sense, but not with respect to the formal notion. This was true of the class of primitive recursive functions and other extensions of that idea. Meanwhile, Turing proceeded on a mainly philosophical level to consider what it was that a human did when undertaking a computation, imagining a person with paper and pencil and plenty of time, following a rote computational procedure, and was thereby led to his notion of the Turing machine, which led to the fields of computability theory, complexity theory and so on.
|
{
"source": [
"https://mathoverflow.net/questions/182215",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2051/"
]
}
|
182,346 |
Let's call a polygon $P$ shrinkable if any down-scaled (dilated) version of $P$ can be translated into $P$. For example, the following triangle is shrinkable (the original polygon is green, the dilated polygon is blue): But the following U-shape is not shrinkable (the blue polygon cannot be translated into the green one): Formally, a compact $\ P\subseteq \mathbb R^n\ $ is called shrinkable iff: $$\forall_{\mu\in [0;1)}\ \exists_{q\in \mathbb R^n}\quad \mu\!\cdot\! P\, +\, q\ \subseteq\ P$$ What is the largest group of shrinkable polygons? Currently I have the following sufficient condition: if $P$ is star-shaped then it is shrinkable. Proof : By definition of a star-shaped polygon, there exists a point $A\in P$ such that for every $B\in P$, the segment $AB$ is entirely contained in $P$. Now, for all $\mu\in [0;1)$, let $\ q := (1-\mu)\cdot A$. This effectively translates the dilated $P'$ such that $A'$ coincides with $A$. Now every point $B'\in P'$ is on a segment between $A$ and $B$, and hence contained in $P$. My questions are: A. Is the condition of being star-shaped also necessary for shrinkability? B. Alternatively, what other condition on $P$ is necessary?
|
Any simply connected polygon must be star-shaped to be shrinkable. I have made minor edits below to treat the more general case. Let $D$ be a polygon with convex hull $H$. Assume we are given a non-trivial shrinking of $D$; view this as a map from $H$ to itself. This map must have a fixed point $x$, either by algebraic topology or an iterative construction. This means it suffices to consider only dilations centered at a point $x$ in $H$, rather than dilations followed by translations. For any $x$, if there is a point $y$ in $D$ so that the segment from $x$ to $y$ is not contained in $D$, then a $(1-\epsilon)$-dilation of $H$ centered at $x$ will not carry $D$ into $D$ for any positive $\epsilon$ smaller than some $\epsilon(x)>0$. If $D$ is not star-shaped, take the minimum $\delta$ of $\epsilon(x)$ over $x\in H$, and then no $(1-\delta)$-dilation of $H$ centered at a point in $H$ carries $D$ into $D$.
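To see the definition in action, here is a small numerical experiment (only a sketch: it assumes the shapely library is available, and the polygon coordinates, grid resolution and the helper name fits_somewhere are illustrative choices of mine). It scales a polygon by a factor $\mu$ and searches a grid of translations for a placement inside the original; the convex (hence star-shaped) triangle passes, while a U-shape fails:
import numpy as np
from shapely.geometry import Polygon
from shapely.affinity import scale, translate

def fits_somewhere(P, mu, grid=40):
    # Brute-force search over translations: does some translate of mu*P lie inside P?
    # This is only a heuristic check, not a proof.
    S = scale(P, xfact=mu, yfact=mu, origin=(0, 0))
    minx, miny, maxx, maxy = P.bounds
    for dx in np.linspace(minx - S.bounds[0], maxx - S.bounds[2], grid):
        for dy in np.linspace(miny - S.bounds[1], maxy - S.bounds[3], grid):
            if P.buffer(1e-9).contains(translate(S, dx, dy)):
                return True
    return False

triangle = Polygon([(0, 0), (4, 0), (0, 3)])           # star-shaped (convex)
u_shape = Polygon([(0, 0), (3, 0), (3, 3), (2, 3),
                   (2, 1), (1, 1), (1, 3), (0, 3)])    # not star-shaped
for name, P in [("triangle", triangle), ("U-shape", u_shape)]:
    print(name, fits_somewhere(P, mu=0.95))            # expect True, then False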
|
{
"source": [
"https://mathoverflow.net/questions/182346",
"https://mathoverflow.net",
"https://mathoverflow.net/users/34461/"
]
}
|
182,412 |
While playing around with Mathematica I noticed that most polynomials with real coefficients seem to have most complex zeroes very near the unit circle. For instance, if we plot all the roots of a polynomial of degree 300 with coefficients chosen randomly from the interval $[27, 42]$, we get something like this: The Mathematica code to produce the picture was:
randomPoly[n_, x_, {a_, b_}] :=
  x^Range[0, n] . Table[RandomReal[{a, b}], {n + 1}];
Graphics[Point[{Re[x], Im[x]}] /.
  NSolve[randomPoly[300, x, {27, 42}], x], Axes -> True]
If I try other intervals and other degrees, the picture is always mostly the same: almost all roots are close to the unit circle. Question: why does this happen?
|
Let me give an informal explanation using what little I know about complex analysis. Suppose that $p(z)=a_{0}+\dotsm+a_{n}z^{n}$ is a polynomial with random complex coefficients and suppose that $p(z)=a_{n}(z-c_{1})\cdots(z-c_{n})$ . Then take note that $$\frac{p'(z)}{p(z)}=\frac{d}{dz}\log(p(z))=\frac{d}{dz}\bigl[\log(a_{n})+\log(z-c_{1})+\dotsm+\log(z-c_{n})\bigr]=
\frac{1}{z-c_{1}}+\dotsm+\frac{1}{z-c_{n}}.
$$ Now assume that $\gamma$ is a circle larger than the unit circle. Then $$\oint_{\gamma}\frac{p'(z)}{p(z)}dz=\oint_{\gamma}\frac{na_{n}z^{n-1}+(n-1)a_{n-1}z^{n-2}+\dotsm+a_{1}}{a_{n}z^{n}+\dotsm+a_{0}}\approx\oint_{\gamma}\frac{n}{z}dz=2\pi in.$$ However, by the residue theorem, $$\oint_{\gamma}\frac{p'(z)}{p(z)}dz=\oint_{\gamma}\frac{1}{z-c_{1}}+...+\frac{1}{z-c_{n}}dz=2\pi i|\{k\in\{1,\ldots,n\}|c_{k}\,\,\textrm{is within the contour}\,\,\gamma\}|.$$ Combining these two evaluations of the integral, we conclude that $$2\pi i n\approx 2\pi i|\{k\in\{1,\ldots,n\}|c_{k}\,\,\textrm{is within the contour}\,\,\gamma\}|.$$ Therefore there are approximately $n$ zeros of $p(z)$ within $\gamma$ , so most of the zeroes of $p(z)$ are within $\gamma$ , so very few zeroes can have absolute value significantly greater than $1$ . By a similar argument, very few zeroes can have absolute value significantly less than $1$ . We conclude that most zeroes lie near the unit circle. $\textbf{Added Oct 11,2014}$ A modified argument can help explain why the zeroes tend to be uniformly distributed around the circle as well. Suppose that $\theta\in[0,2\pi]$ and $\gamma_{\theta}$ is the pizza slice shaped contour defined by $$\gamma_{\theta}:=\gamma_{1,\theta}+\gamma_{2,\theta}+\gamma_{3,\theta}$$ where $$\gamma_{1,\theta}=([0,1+\epsilon]\times\{0\})$$ $$\gamma_{2,\theta}=\{re^{i\theta}|r\in[0,1+\epsilon]\}$$ $$\gamma_{3,\theta}=\cup\{e^{ix}(1+\epsilon)|x\in[0,\theta]\}.$$ Then $$\oint_{\gamma_{\theta}}\frac{p'(z)}{p(z)}dz=
\oint_{\gamma_{\theta,1}}\frac{p'(z)}{p(z)}dz+\oint_{\gamma_{\theta,2}}\frac{p'(z)}{p(z)}dz+\oint_{\gamma_{\theta,3}}\frac{p'(z)}{p(z)}dz$$ $$\approx O(1)+O(1)+\oint_{\gamma_{\theta,3}}\frac{p'(z)}{p(z)}dz$$ $$\approx O(1)+O(1)+\oint_{\gamma_{\theta,3}}\frac{na_{n}z^{n-1}+(n-1)a_{n-1}z^{n-2}+\dotsm+a_{1}}{a_{n}z^{n}+\dotsm+a_{0}}dz
$$ $$\approx O(1)+O(1)+\oint_{\gamma_{\theta,3}}\frac{n}{z}dz\approx n i\theta.$$ Therefore, there should be approximately $\frac{n\theta}{2\pi}$ zeroes inside the pizza slice $\gamma_{\theta}$, which is exactly what a uniform angular distribution predicts.
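As a quick empirical check of this heuristic (a sketch assuming numpy; the number of trials, the degree, the annulus width 0.05 and the seed are arbitrary choices of mine), one can sample random coefficients as in the question, compute the roots, and look at both their moduli and their arguments:
import numpy as np

rng = np.random.default_rng(0)
n, trials = 300, 20
near, total, angles = 0, 0, []
for _ in range(trials):
    coeffs = rng.uniform(27, 42, size=n + 1)   # random degree-n polynomial as in the question
    roots = np.roots(coeffs)
    near += int(np.sum(np.abs(np.abs(roots) - 1.0) < 0.05))
    total += roots.size
    angles.append(np.angle(roots))
print("fraction of roots within 0.05 of the unit circle:", near / total)
# crude equidistribution check: the four quadrant counts of the arguments should be comparable
print("quadrant counts:", np.histogram(np.concatenate(angles), bins=4, range=(-np.pi, np.pi))[0])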
|
{
"source": [
"https://mathoverflow.net/questions/182412",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1176/"
]
}
|
182,719 |
Let $(X,d)$ be a metric space and $x,y \in X$. Assume that for all $r > 0$ the balls $B_r(x)$ and $B_r(y)$ are isometric. Is it true that there exists an isometry of $X$ sending $x$ to $y$?
|
No. Let $x$ and $y$ be connected by an edge and let's use the graph distance as our metric.
At $x$, connect paths of length $n$ for each $n\in\mathbb N$. At $y$, do the same, but also connect an infinite path.
|
{
"source": [
"https://mathoverflow.net/questions/182719",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29319/"
]
}
|
184,787 |
A curious puzzle for which I would appreciate an explanation. For $x$ and $y$ both uniformly and independently distributed in $[0,1]$,
the value of $\lfloor 1/(x y) \rfloor$ has a bias toward odd numbers.
Here are $10$ random trials:
$$51, 34, 1, 239, 9, 4, 2, 1, 1, 1 $$
with $7$ odd numbers.
Here are $10^6$ trials, placed into even and odd bins: About 53% of the reciprocals are odd.
If I use the ceiling function instead of the floor, the bias reverses, with
approximately 47% odd. And finally, if
I round to the nearest integer instead,
then about 48% are odd. None of these biases appear to be statistical or numerical artifacts
(in particular, it seems that the 47% and 48% are numerically distinguishable),
although I encourage you to check me on this. Update .
To supplement Noam Elkies' answer,
a plot of $x y = 1/n$ for $n=2,\ldots,100$:
|
You're dividing the square $S = \{ (x,y) \colon 0 < x < 1, 0 < y < 1\}$
into two regions according to the parity of $\lfloor 1/(xy) \rfloor$,
separated by the segments of the hyperbolas $xy = 1/n$ ($n=2,3,4,\ldots$)
contained in $S$. There's no reason to expect that the two regions
have the same area. If I did this right, the area between the $n$-th
hyperbola and the top right corner of the square is
$$
A(n) := 1 - \frac{1 + \log n}{n}
$$
so the discrepancy between odd and even values of $\lfloor 1/(xy) \rfloor$ is
$$
(A(2)-A(1)) - (A(3)-A(2)) + (A(4)-A(3)) - + \cdots
$$
which is numerically $0.066556553635\ldots$ according to the gp calculation
A(n) = 1 - (log(n)+1)/n
sumalt(n=1, (-1)^n*(A(n)-A(n+1)))
So we expect about 53.33% odd and 46.67% even values,
which seems consistent with your experiment. P.S. Using a formula I found in MO Question 140547 ,
I gather that this number $0.066556553635\ldots$ has the closed form
$$
(\log 2)^2 + \bigl(2 (1 - \gamma) \log 2\bigr) - 1,
$$
where $\gamma$ is Euler's constant $0.5772156649\ldots$. P.P.S. I see that I didn't address the end of the original question:
"If I use the ceiling function instead of the floor, the bias reverses, [...]
if I round to the nearest integer instead, then about 48% are odd."
The first part is clear because changing
$\lfloor 1/(xy) \rfloor$ to $\lceil 1/(xy) \rceil$
switches even and odd values (except in the negligible case that
$1/(xy)$ is an exact integer). For the nearest-integer function,
the discrepancy between odd and even values is
$$\bigl(A(3/2)-A(1)\bigr)
- \bigl(A(5/2)-A(3/2)\bigr)
+ \bigl(A(7/2)-A(5/2)\bigr)
- + \cdots
$$
which evaluates numerically to $-0.03500998166\ldots$
(using sumalt in gp as before), which again
is consistent with observation (48.25% odd, 51.75% even).
There's still a "closed form" for this discrepancy, but
more complicated:
$$
-3 + 4 \log(2)
+ \pi \bigl(1 + \log(\pi/2) - \gamma - 4 \log\Gamma(3/4) \bigr).
$$
This requires evaluation of
$\log(1)/1 - \log(3)/3 + \log(5)/5 - \log(7)/7 + - \cdots$,
which can be achieved by differentiating the functional equation for
the Dirichlet L-function
$L(s,\chi_4) = 1 - 3^{-s} + 5^{-s} - 7^{-s} + - \cdots$
and evaluating at $s=1$.
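For readers who want to see these predictions in data without rerunning the original experiment, here is a small Monte Carlo cross-check (a sketch assuming numpy; the sample size and seed are arbitrary choices of mine, and np.rint stands in for rounding to the nearest integer):
import numpy as np

rng = np.random.default_rng(1)
N = 10**6
x, y = rng.random(N), rng.random(N)
r = 1.0 / (x * y)
for name, vals in [("floor", np.floor(r)), ("ceil", np.ceil(r)), ("round", np.rint(r))]:
    print(name, "fraction odd:", np.mean(vals % 2 == 1))
# expected: about 0.5333 for floor, 0.4667 for ceil, 0.4825 for round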
|
{
"source": [
"https://mathoverflow.net/questions/184787",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
185,152 |
This is a cross-posted question, originally active here on math.stackexchange. For a given group $G=(S,\cdot)$ with underlying set $S$, consider the function
$$
F_G:S\times S\to\mathcal P(S)\\
F_G(a,b):=\{a\cdot b,~b\cdot a\}
$$
from $S\times S$ to the power set of $S$. I'd like to figure out how much information from $G$ is encoded by $F_G$. In particular, does $F_G$ determine the group $G$ up to isomorphism? Given different groups $G_1$ and $G_2$ with underlying sets $S_1$ and $S_2$, suppose that a function $\varphi:S_1\to S_2$ has the property $\varphi\bigl(F_{G_1}(a,b)\bigr)=F_{G_2}\bigl(\varphi(a),\varphi(b)\bigr)$ for all $a$ and $b$ in $G_1$; if $\varphi^{-1}$ exists and has this property as well, let's say that $F_{G_1}\cong F_{G_2}$. For one thing, if $G_1\cong G_2$ then $F_{G_1}\cong F_{G_2}$. Going the other way, if $F_{G_1}\cong F_{G_2}$ then $|G_1|=|G_2|$ and $Z(G_1)\cong Z(G_2)$. Given $F_G$, it's easy to find out which pairs of elements in $G$ commute, which subsets of $G$ constitute subgroups of $G$, and which subsets of $G$ are generating sets of $G$. Moreover, if $F_{G_1}\cong F_{G_2}$ then $G_1$ and $G_2$ must have the same cycle graph . This means that if $F_{G_1}\cong F_{G_2}$ and the order of these groups is less than 16, then $G_1\cong G_2$. Indeed, according to Wikipedia's page on cycle graphs, "For groups with fewer than 16 elements, the cycle graph determines the group (up to isomorphism)." The question that's been plaguing me is whether $F_{G_1}\cong F_{G_2}\implies G_1\cong G_2$ in the general finite case. I think that a counterexample would need to involve groups of order 16 or larger. Any ideas?
|
The answer is yes: the map $F_G$ determines the group. Mansfield makes good use of this in his elementary proof of the fact that the group determinant determines the group (v. Lemma 4 in his paper). As mentioned in Mansfield's paper, this appears as Problems §4, No. 26 in p.139 of Bourbaki, Algebra - 1.
|
{
"source": [
"https://mathoverflow.net/questions/185152",
"https://mathoverflow.net",
"https://mathoverflow.net/users/58233/"
]
}
|
185,645 |
I have three related questions about conventions for defining Clifford algebras. 1) Let $(V, q)$ be a quadratic vector space. Should the Clifford algebra $\text{Cliff}(V, q)$ have defining relations $v^2 = q(v)$ or $v^2 = -q(v)$? 2) Should $\text{Cliff}(n)$ denote the Clifford algebra generated by $n$ anticommuting square roots of $1$ or by $n$ anticommuting square roots of $-1$? That is, after you pick an answer to 1), should $\text{Cliff}(n)$ be $\text{Cliff}(\mathbb{R}^n, \| \cdot \|)$ or $\text{Cliff}(\mathbb{R}^n, - \| \cdot \|)$? More generally, after you pick an answer to 1), should $\text{Cliff}(p, q)$ be the Clifford algebra associated to the quadratic form of signature $(p, q)$ or of signature $(q, p)$? 3) Let $(X, g)$ be a Riemannian manifold with Riemannian metric $g$. After you pick an answer to 1), should the bundle of Clifford algebras $\text{Cliff}(X)$ associated to $X$ be given fiberwise by $\text{Cliff}(T_x(X), \pm g_x)$ or by $\text{Cliff}(T_x^{\ast}(X), \pm g_x^{\ast})$? For 1), on the one hand, $v^2 = q(v)$ seems very natural, especially if you think of the Clifford algebra functor as a version of the universal enveloping algebra functor , and it is used in Atiyah-Bott-Shapiro. On the other hand, Lawson-Michelson and Berline-Getzler-Vergne use $v^2 = -q(v)$, I think because they want $\text{Cliff}(\mathbb{R}^n, \| \cdot \|)$ to be the Clifford algebra generated by $n$ anticommuting square roots of $-1$. This is, for example, the correct Clifford algebra to write down if you want to write down a square root of the negative of the Laplacian (which is positive definite). For 2), this choice affects the correct statement of the relationship between $\text{Cliff}(n)$-modules and real $K$-theory, but there is something very confusing going on here, namely that with either convention, $\text{Cliff}(n)$-modules are related to both $KO^n$ and $KO^{-n}$; see Andre Henriques' MO question on this subject. For 3), whatever the answer to 1) or 2) I think everyone agrees that $\text{Cliff}(X)$ should be given fiberwise by $n$ anticommuting square roots of $-1$, where $n = \dim X$, so once you fix an answer to 1) that fixes the signs. The choice of sign affects the correct statement of the Thom isomorphism in K-theory. Lawson-Michelson use the tangent bundle but Berline-Getzler-Vergne use the cotangent bundle. The tangent bundle seems natural if you want to think of Clifford multiplication as a deformation of a covariant derivative, and the cotangent bundle seems natural if you want to think of the Clifford bundle as a deformation of exterior forms. I'm not sure how important this choice is. Anyway, I just want to know whether there are good justifications to sticking to one particular set of conventions so I can pick a consistent one for myself; reconciling the conventions of other authors is exhausting, especially because I haven't decided what conventions I want to use.
|
This is not really an answer, but rather a meta-answer as to why there exist many conventions in the first place. The symmetric monoidal category $\mathit{sVect}$ of super-vector spaces has a non-trivial involution $J$.
The symmetric monoidal functor $J:\mathit{sVect}\to \mathit{sVect}$ is the identity at the level of objects and at the level of morphisms.
But the coherence $J(V \otimes W) \xrightarrow{\cong} J(V) \otimes J(W)$ is non-trivial. It is given by $-1$ on $V_{odd} \otimes W_{odd}$ and $+1$ on the rest. The image of $\mathit{Cliff}(V,q)$ under $J$ is $\mathit{Cliff}(V,-q)$.
So anything that you do with one convention can equally well be done with the other convention. Over the complex numbers, $J$ is equivalent to the identity functor.
The symmetric monoidal natural transformation $J\Rightarrow Id$ that exhibits the equivalence acts as $i$ on the odd part and as $1$ on the even part of any super-vector space. Over the reals, $J$ is not equivalent to the identity functor, as can be seen from the fact that $\mathit{Cliff}(\mathbb R,|\cdot|^2)\not\simeq\mathit{Cliff}(\mathbb R,-|\cdot|^2)$. One last technical comment: Over $\mathbb C$, the action of $\mathbb Z/2$ on $\mathit{sVect}$ defined by $J$ is still non-trivial , despite the fact that $J$ is trivial. A trivialization of the action isn't just an equivalence $\alpha:J\cong Id$. For such an equivalence to trivialize the action, it would need to satisfy the further coherence $\alpha\circ \alpha = 1$, which isn't satisfied by any choice of $\alpha$. (To trivialize the action of a group $G$, one needs to trivialize the actions of each $g\in G$ in such a way that the trivializations of $g,h\in G$ compose to the trivialization of $gh$.) Now, as far as practical things are concerned, I would recommend minimizing the number of minus signs that you end up writing down.
|
{
"source": [
"https://mathoverflow.net/questions/185645",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
186,298 |
Let $G$ and $H$ be two abelian groups and let $n>1, m>1$ be two different integers. How many different spaces $X$ (up to homotopy) do we have with the property $\pi_{n} X=G$ , $\pi_{m} X=H$ and $\pi_{\ast} X=0$ otherwise? is this number finite ?
|
Assuming $m > n$, there is a method for classifying such spaces using a technique from the Postnikov tower. Namely, such a space has a map $X \to K(G,n)$ inducing an isomorphism on $\pi_n$, and if we convert this into a fibration it has fiber $K(H,m)$. Such bundles are classified by a "k-invariant": an element in $H^{m+1}(K(G,n) ; H)$. One direction takes such a cohomology class, represents it as a map $K(G,n) \to K(H,m+1)$, and then takes a (homotopy) pullback. To see that this is actually a bijective correspondence requires a little bit of obstruction theory (and uses critically that $n > 1$; otherwise we'd also need to classify an action of $G$ on $H$). One should note that if you only ask that $\pi_n X$ and $\pi_m X$ are abstractly isomorphic to $G$ and $H$ rather than choosing isomorphisms, the $k$-invariant is only well-defined up to the action of the automorphism groups $Aut(G)$ and $Aut(H)$ on $H^{m+1}(K(G,n); H)$. If $G$ and $H$ are finitely generated abelian groups and at least one of them is finite, then Serre's work using mod-$\cal C$ theory shows that this cohomology group is finite; however, in general you can certainly have infinitely many distinct isomorphism classes (e.g. there are infinitely many homotopy types with $\pi_2 = \pi_3 = \Bbb Z$, because $H^4 K(\Bbb Z,2) \cong \Bbb Z$).
|
{
"source": [
"https://mathoverflow.net/questions/186298",
"https://mathoverflow.net",
"https://mathoverflow.net/users/61328/"
]
}
|
186,329 |
I heard two quotes, one from Alain Connes and an other one from Orlov.
Alain Connes was talking about noncommutative geometry and he said the following: " a noncommutative algebra creates its own internal time " In a talk by Orlov about Mirror symmetry, he was asked if he considers the monoidal structure on the derived category of a scheme. Orlov said that " the monoidal structure is not natural in this context and we should not base our theory on this structure " he added " The tensor product is a NATURAL structure to consider in the category of Motives " I will be happy if someone can put some enlightenment to what Connes and Orlov meant (if the quotations above make sense ) ?
|
Here is a guess about the remark of Orlov. Suppose that one wants to define a good notion of noncommutative scheme , given that an affine noncommutative scheme is an associative algebra. Trying to define the spectrum of an associative algebra leads to various problems (c.f. this answer ), so a different approach is needed. On the other hand there are various invariants of assocative algebras, like algebraic K-theory, which all happen to be Morita invariant : they depend only on derived categories. Further, a theorem of Bondal and Van den Bergh says that the derived category of a commutative scheme is equivalent to the derived category of some dg-algebra. This suggests to define a noncommutative scheme as the derived category of a dg-algebra. This will then include both affine noncommutative schemes and commutative schemes. This is the approach practised by the Moscow school (Bondal, Orlov, Kaledin, Lunts, etc.). This explains the first remark: it is essential to consider noncommutative spaces only up to Morita equivalence. If we would consider the symmetric monoidal structure in the definition of noncommutative scheme, then any two commutative schemes would be isomorphic in the noncommutative world iff they were isomorphic in the usual commutative sense. See work of Balmer for this. In fact most interesting morphisms in the noncommutative world do not respect the symmetric monoidal structures coming from the commutative world. Regarding the second remark, suppose that we want a theory of motives for noncommutative schemes. In fact, such theories already exist after Cisinski-Tabuada and Robalo. For example, following Robalo, we can echo the Morel-Voevodsky construction in our setting: take the (infinity-)category of noncommutative schemes, pass to the free cocompletion (infinity-presheaves), impose A^1-homotopy invariance and descent with respect to a noncommutative version of the Nisnevich topology, and then formally stabilize with respect to P^1. This gives a noncommutative version of the stable motivic homotopy category. Now suppose that we were to include symmetric monoidal structures in the picture, i.e. start with the infinity-category of symmetric monoidal dg-categories, and repeat the same construction. I have not checked this, but I think that one would just recover (an enlargement of) the Morel-Voevodsky stable motivic homotopy category of commutative schemes. My guess is that Orlov may have had something along these lines in mind.
|
{
"source": [
"https://mathoverflow.net/questions/186329",
"https://mathoverflow.net",
"https://mathoverflow.net/users/61328/"
]
}
|
186,851 |
To provide context, I'm a differential geometry grad student from a physics background. I know some category theory (at the level of Simmons) and differential and Riemannian geometry (at the level of Lee's series) but I don't have any background in categorical logic or model theory. I've recently come across some interesting surveys and articles on synthetic differential geometry (SDG) that made the approach seem very appealing. Many of the definitions become very elegant, such as the definition of the tangent bundle as an exponential object. The ability to argue rigorously using infinitesimals also appeals to the physicist in me, and seems to yield more intuitive proofs. I just have a few questions about SDG which I hope some experts could answer. How much of modern differential geometry (Cartan geometry, poisson geometry, symplectic geometry, etc.) has been reformulated in SDG? Have any physical theories such as general relativity been reformulated in SDG? If so, is the synthetic formulation more or less practical than the classical formulation for computations and numerical simulations? Also, how promising is SDG as an area of research? How does it compare to other alternative theories such as the ones discussed in comparative smootheology ?
|
One point of synthetic differential geometry is that, indeed, it is "synthetic" in the spirit of traditional synthetic geometry but refined now from incidence geometry to differential geometry. Hence the name is rather appropriate and in particular highlights that SDG is more than any one of its models, such as those based on formal duals of C-infinity rings (" smooth loci "). Indeed, traditional algebraic geometry with formal schemes is another model for SDG and this is where the origin of the theory lies: William Lawvere was watching Alexander Grothendieck's work and after abstracting the concept of elementary topos from what Grothendieck did with sheaves, he next abstracted the Kock-Lawvere axioms of SDG from what Grothendieck did with infinitesimal extensions , formal schemes and crystals / de Rham spaces . The idea of SDG is to abstract the essence of all these niceties, formulate them in terms of elementary topos theory, and hence lay mathematical foundations for differential geoemtry that are vastly more encompassing than either algebraic geometry or traditional differential geometry alone. For instance there are also models in supergeometry , in complex analytic geometry and in much more exotic versions of "differential calculus" (such as Goodwillie calculus, see below). Regarding applications, a curious fact that remains little known is that Lawvere, while widely renowned for his work in the foundations of mathematics, has from the very beginning and throughout the decades been directly motivated by, actually, laying foundations for continuum physics. See here for commented list of pointers and citations on that aspects. In particular SDG was from the very beginning intended to formalize mechanics, that's why one of the earliest texts on the topic is titled " Toposes of laws of motion " (referring to SDG toposes). A little later Lawvere tried another approach to such foundations, not via the KL-axioms this time, but via " axiomatic cohesion ". One may recover SDG in axiomatic cohesion in a way that realizes it in close parallel to modern D-geometry with axiomatic de Rham stacks , jet-bundles, D-modules and all. I like to call this differential cohesion but of course it doesn't matter what one calls it. Viewed from this perspective the scope of models for the SDG axiomatics becomes more powerful still. For instance Goodwille tangent calculus is now also part of the picture, in terms of synthetic tangent cohesion . Another model is in spectral derived geometry that knows about structures of relevance in arithmetic geometry, chromatic homotopy theory and class field theory, this is discussed at differential cohesion and idelic structure . All this synthetic reasoning is maybe best viewed from the general perspective that it is useful in mathematics to stratify all theory as much as possible by the hierarchy of assumptions and axioms needed, try to prove as much as possible from as little assumptions as necessary and pass to fully concrete models only at the very end. If you are interested only in one specific model, such as derived geometry over $C^\infty$-rings, then such synthetic reasoning may offer some guidance but might otherwise seem superfluous. The power of the synthetic method is in how it allows to pass between models, see their similarities and differences, and prove model-independent theorems. As in "I don’t want you to think all this is theory for the sake of it, or rather for the sake of itself. It’s theory for the sake of other theory." 
( Lurie, ICM 2010 ) Synthetic geometry is "inter-geometric", to borrow a term-formation from Mochizuki. If you run into something like the function field analogy then it may be time to step back and ask if such analogy between different flavors of geometry maybe comes from the fact that they all are models for the same set of "synthetic" axioms.
|
{
"source": [
"https://mathoverflow.net/questions/186851",
"https://mathoverflow.net",
"https://mathoverflow.net/users/56938/"
]
}
|
187,677 |
Suppose you were standing inside a regular tetrahedron $T$ whose
internal face surfaces were perfect mirrors.
Let's assume $T$'s height is $3{\times}$ yours, so that your
eye is roughly at the centroid,
and that you look perpendicular to a face: My question is: Q . What do you see? Either qualitatively; or if anyone can find an image, that would be revealing. Asking the same question for the view inside a mirrored cube is
easier to visualize: the opposing parallel square-face mirrors would produce
a " house of mirrors " effect (in three perpendicular directions). (Image from this link .) Addendum 1 ( 21Nov2014 ).
To respond to Yoav Kallus' (good) question, here is a quote from Handbook of Dynamical Systems , Volume 1, Part 1.
(ed. B. Hasselblatt, A. Katok), 2010, p.194: Addendum 2 ( 24Nov2014 ).
Now that we have Ryan Budney's amazing POV-ray images,
I would appreciate someone making an attempt to describe
his $32$-reflection image qualitatively.
I find it so complex that this may be an instance
where a thousand words might be superior to one picture.
|
Here are a couple pictures. If you'd like to do more, I created this with a simple povray script. Feel free to e-mail me and I'll send it to you. With four reflections. With 8 levels of reflections. One with 24 reflections. And one with 32 reflections, and each mirror having a slightly different tint, a red sphere, and a slightly wider viewing angle. Here is the view from inside a mirrored tetrahedron, near one of the vertices, looking towards the opposite face. Near the centre of each face is a number indicating which vertex the triangle is opposite. The sequence shows the triangles becoming more reflective and less opaque. The field of view is 120 degrees so there is a little bit of lens distortion at the fringes of the image. And here is the link to the scripts for the tetrahedra with the numbering and without the spheres.
|
{
"source": [
"https://mathoverflow.net/questions/187677",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
187,995 |
Let $A=\{a_1,\ldots,a_k\}$ be a fixed, finite set of reals. Let $S_A(n)$ be the set of all reals that are expressible as the sum of at most $2^n$ terms, where each term is a product of at most $n$ numbers from $A$ (here each element of $A$ can be reused an unlimited number of times). Finally, let $d_A(n)$ be the minimum of $|x|$, over all nonzero $x\in S_A(n)$. I'm interested in how quickly $d_A(n)$ can decrease as a function of $n$. Exponential decrease is easy to obtain (indeed, we have it whenever there's an $a\in A$ such that $|a|\lt 1$), but anything faster than that would require extremely finely-tuned cancellations, which continue to occur even as $n$ gets arbitrarily large. Thus, let's call $A$ tame if there exists a polynomial $p$ such that $d_A(n)\gt 1/\exp(p(n))$ for all $n$, and non-tame otherwise. Then here's my question: Does there exist a non-tame $A$? Note that I don't care much about the dependence on $k$ (holding fixed the approximate absolute values of the $a_i$'s), which might be triple-exponential or worse. To illustrate, if $A$ is a set of rationals, then it's easy to show that $A$ is tame. By using known results about the so-called Sum-of-Square-Roots Problem (which rely on results about the minimum spacing between consecutive roots of polynomials, from algebraic geometry), I can also show that if $A$ is a set of square roots of rationals---or more generally, ratios of sums and differences of square roots of positive integers---then $A$ is tame. More generally, I conjecture that every set of algebraic numbers is tame, and maybe this can even be shown similarly, but I haven't done so. But while thinking about this, it occurred to me that I don't have even a single example of a non-tame set---hence the question. Meanwhile, I would also be interested in results showing, for example, that every $A$ consisting of sines and cosines of rational numbers is tame. If anyone cares, the origin of this question is that, if every $A$ is tame, then it follows that given any $n^{O(1)}$-size quantum circuit over a fixed, finite set of gates (i.e., unitary transformations acting on $O(1)$ qubits at a time), the probability that the circuit outputs "Accept" is at least $1/\exp(n^{O(1)})$. Likewise, if all $A$'s that are subsets of some $S\subseteq\Re$ are tame, then the same result follows, but for quantum circuits over finite sets of gates where all the unitary matrix entries belong to $S$. (So for example, from the result mentioned above about square roots of rationals, we get that any quantum circuit composed of Hadamard and Toffoli gates satisfies this property.) This issue, in turn, is relevant to making fully precise the definition of the complexity class PostBQP (quantum polynomial time with postselected measurements), which I invented in 2004. If all $A$'s are tame, then there's no problem with my 2004 definition; if some $A$'s are not tame, then the definition needs to be amended, to restrict the set of gates to ones like {Toffoli,Hadamard} that won't give rise to doubly-exponentially-small probabilities. Update (Nov. 30): For those who are interested, I now have a blog post that discusses this MO question and where it came from (among several other things).
|
Let $a_n$ be an increasing sequence of positive integers which grows really fast, say $a_{n+1} > \exp(a_n)$. Take $A = \{10^{-1}, \sum 10^{-a_n}\}$. Then $d_A(a_n) \leq 2\cdot 10^{- a_{n+1}} \leq 2\cdot 10^{-\exp a_n}$, so $A$ cannot be tame. (Indeed, writing $x=\sum_k 10^{-a_k}$, the difference $x-\sum_{k\leq n}10^{-a_k}$ is a signed sum of $n+1\leq 2^{a_n}$ terms, namely $x$ itself and the products $10^{-a_k}=(10^{-1})^{a_k}$ with $k\leq n$, each using at most $a_n$ factors from $A$; its value is $\sum_{k>n}10^{-a_k}$, which is nonzero and at most $2\cdot 10^{-a_{n+1}}$.) EDIT. One could replace $1/10$ by some transcendental $0<x<1$ such that $\sum x^{a_n}$ is transcendental as well. This gives an example of a non-tame $A$ consisting of transcendental elements.
|
{
"source": [
"https://mathoverflow.net/questions/187995",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2575/"
]
}
|
188,707 |
Is the singleton space the only Hausdorff space $X$ such that the set of automorphisms $\varphi: X\to X$ equals $\{\textrm{id}_X\}$?
|
Not at all. Those spaces are called rigid and there are plenty of examples in the literature. The opposite notion is homogeneity which is a better studied property. The first (non-trivial) rigid space was constructed by Kuratowski in " Sur la puissance de l'ensemble des nombres de dimension au sens de M Frechet " (Fund.Math 8, 1926, 201-208). I also recommend " Decompositions of rigid spaces " by van Engelen and van Mill (PAMS 89, 1983, 533-536), where they give two nice examples: a rigid space that can be decomposed into two homeomorphic homogeneous subspaces and a homogeneous space that can be decomposed into two homeomorphic rigid subspaces. By the way, these two examples are even separable and metrizable.
|
{
"source": [
"https://mathoverflow.net/questions/188707",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8628/"
]
}
|
188,808 |
I know that it is possible to construct the hyperreal number system in ZFC by using the axiom of choice to obtain a non-principal ultrafilter. Would the non-existence of a set of hyperreals be consistent with just ZF, without choice? Let me be conservative, and say that by a "set of hyperreals," I just mean a set together with some relations and functions such that the transfer principle holds, and there exists $\epsilon > 0$ smaller than any positive real number.
|
The answer is yes, provided ZF itself is consistent. The reason is that the existence of the hyperreals, in a context with the transfer principle , implies that there is a nonprincipal ultrafilter on $\mathbb{N}$. Specifically, if $N$ is any nonstandard (infinite) natural number, then let $U$ be the set of all $X\subset\mathbb{N}$ with $N\in X^*$. This is a nonprincipal ultrafilter on $\mathbb{N}$, since: If $X\in U$ and $X\subset Y$, then $N\in X^*\subset Y^*$, and so $Y\in U$. If $X,Y\in U$, then $N\in X^*\cap Y^*=(X\cap Y)^*$ and so $X\cap Y\in U$. If $X\subset\mathbb{N}$, then every number is in $X$ or in $\mathbb{N}-X$, and so either $N\in X^*$ or $N\in(\mathbb{N}-X)^*$ and thus $X\in U$ or $\mathbb{N}-X\in U$. For any particular standard natural number $n$, the set $X=\{m\in \mathbb{N}\mid n\leq m\}$ is in $U$, because $n^*\leq N$. The empty set $\emptyset$ is not in $U$, since $N\notin\emptyset=\emptyset^*$. So $U$ is a nonprincipal ultrafilter on $\mathbb{N}$. The way that I think about $U$ is that it concentrates on sets that express all and only the properties held by the nonstandard number $N$. (See also my answer to A remark of Connes , where I make a similar point, and explain that, therefore, nonstandard analysis with the transfer property implies that there must be a non-measurable set of reals.) Thus, in a model of ZF with no nonprincipal ultrafilter on $\mathbb{N}$ (and as Asaf mentions in the comments, there are indeed such models if there are any models of ZF at all), there is no structure of the hyperreals satisfying the transfer principle.
|
{
"source": [
"https://mathoverflow.net/questions/188808",
"https://mathoverflow.net",
"https://mathoverflow.net/users/62575/"
]
}
|
189,097 |
The four color theorem asserts that every planar graph can be properly colored by four colors. The purpose of this question is to collect generalizations, variations, and strengthenings of the four color theorem with a description of their status. (Motivated by a comment of rupeixu to a recent blog post on my blog presenting a question by Abby Thompson regarding a natural generalization of the 4CT.)
Related question: Generalizations of Planar Graphs .
|
One of the most important generalizations of the four color theorem is Hadwiger's conjecture . The Hadwiger conjecture asserts that a graph without a $K_{r+1}$ minor is $r$-colorable. There is a further generalization called the Weak Hadwiger Conjecture . It is known that the Hadwiger conjecture is false for graphs with infinite chromatic number (consider the disjoint union of $K_n$ where $n\in\mathbb{N}$), whereas the Weak Hadwiger conjecture is true for those graphs (see this paper). An interesting question is whether the Weak Hadwiger conjecture implies the Hadwiger conjecture for finite graphs.
|
{
"source": [
"https://mathoverflow.net/questions/189097",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1532/"
]
}
|
189,323 |
This is a question that occurred to me years ago when I was first learning algebraic topology. I've since learned that it's a somewhat aesthetically displeasing question, but I'm still curious about the answer. Is it possible for a subset of $\mathbb R^2$ to have a nontrivial singular homology group $H_2$? What about a nontrivial homotopy group $\pi_2$?
|
The higher-dimensional analog has the surprising answer "yes". Namely, for $n\geq 2$, the $n$-dimensional Hawaiian earring $H_n = \bigcup_{k=1}^\infty S(k)$, where $S(k)\subseteq \mathbb{R}^{n+1}$ is the $n$-sphere with center ${1\over 2k}\mathbf{e}_1$ and radius ${1\over 2k}$, has nonzero homology in arbitrarily high dimensions. This is a result of Barratt and Milnor ( An Example of Anomalous Singular Homology ).
|
{
"source": [
"https://mathoverflow.net/questions/189323",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
190,530 |
Is there an elliptic curve over $\mathbb{C}[t, t^{-1}]$ that has a nonconstant $j$-invariant? What is an equation for such a curve, if it exists?
|
Let me give an analytic argument, to complement Noam's algebraic one. Suppose $E\to \mathbb{C}^*$ is an elliptic curve. Then pulling back $E$ along the universal covering map (also known as the exponential map) $\mathbb{C}\to \mathbb{C}^*$, one obtains an elliptic curve $\tilde E\to \mathbb{C}$. Choosing a basis for the first homology of $\tilde E$ induces a holomorphic map $$\mathbb{C}\to \mathbb{H},$$ where $\mathbb{H}$ is the upper half-plane, viewed as the moduli space of elliptic curves with a homology basis for $H_1$. But any such map must be constant by Liouville's theorem. $\blacksquare$ Added Later. Since the OP enjoyed the sketch algebraic version in the comments (corrected by user74230), let me give some more details (and in particular, say what happens in characteristic $p>3$). WLOG $k$ is algebraically closed. Suppose $E\to \mathbb{G}_m/k$ is an elliptic curve. If we can find some $n$ so that the pullback of $E$ along $\mathbb{G}_m\overset{[n]}\longrightarrow \mathbb{G}_m$ has trivial $\ell$-torsion, with $(\ell, p)=1$ and $\ell \gg 0$, we're done, because choosing a trivialization we get a map from $\mathbb{G}_m$ to a high genus modular curve, which must be constant as $\mathbb{G}_m$ is rational. To do this, we must show that for infinitely many $\ell$, the map $E[\ell]\to \mathbb{G}_m$ has tame ramification at $0, \infty$. It suffices to find $\ell$ so that $GL_2(\mathbb{Z}/\ell\mathbb{Z})$ has order prime to $p$. But this order is $(\ell^2-1)(\ell^2-\ell)=\ell(\ell-1)^2(\ell+1).$ But by Dirichlet's theorem on primes in arithmetic progressions, there are infinitely many $\ell$ so that $\ell(\ell-1)^2(\ell+1)$ is prime to $p$, if $p>3$. $\blacksquare$ As Noam observes in the comment, the result is false in characteristic $2$ and $3$; he gives examples of non-isotrivial families in these characteristics.
|
{
"source": [
"https://mathoverflow.net/questions/190530",
"https://mathoverflow.net",
"https://mathoverflow.net/users/63877/"
]
}
|
190,711 |
Are there existential theorems of ZFC, or PA say, with no witnesses? I.e., does there exist a formula $\phi$ such that ZFC $\vdash\exists x \phi(x)$, but for all numerals $\underline{n}$, ZFC $\nvdash \phi(\underline{n})$? Can you give an example of such a formula?
|
The answer is yes, provided these theories are consistent. For example, PA proves that there is a number $n$, such that if there is no proof of a contradiction in PA of size at most $n$, then there is no proof of a contradiction in PA at all. This is trivial if you think about it, since either there is a proof of a contradiction in PA, in which case you can let $n$ be any number larger than that proof, or else there isn't such a proof of a contradiction, in which case any $n$ has the desired property. But PA does not prove that any particular number $n$ has this property, since then PA would prove its own consistency, which violates the incompleteness theorem. Here is another kind of example: Let $\sigma$ be any statement that is independent of PA, and let $\phi(n)$ assert that $(n=0\wedge\sigma)\vee(n=1\wedge\neg\sigma)$. So PA proves $\phi(0)\vee \phi(1)$, and in particular it proves $\exists x\ \phi(x)$, but PA doesn't prove either case separately, because $\sigma$ is independent. If you strengthen the conclusion of your question, however, to say that the theory should actually prove $\neg\phi(n)$ for every numeral $n$, then you would be asking whether these theories are $\omega$-inconsistent . Of course, we all think that our theories are $\omega$-consistent, because we want and expect that the natural numbers referred to inside the theory are somehow the same as those in the meta theory. But of course, because of the incompleteness theorem, we cannot prove this within the theory itself. Notice that a theory must be $\omega$-consistent in the case that it has an $\omega$-model, a model in which the natural numbers of the model are the same as the natural numbers of the meta-theory. For this reason, our stronger theories can often prove that our weaker theories are $\omega$-consistent. For example, ZFC proves that PA is $\omega$-consistent, and ZFC+large cardinals proves that ZFC is $\omega$-consistent. My own view is the assertion that a theory is $\omega$-consistent is just one stop along a continuum, from assuming consistency up to iterated consistency up to having a well-founded model up to large cardinals and so on, moving up in consistency strength at each step. Meanwhile, if these theories are consistent, then it is easy to make examples of $\omega$-inconsistent theories extending them. For example, the theories PA+$\neg$Con(PA) and ZFC+$\neg$Con(ZFC) are consistent by the incompleteness theorem, but not $\omega$-consistent, because each of them asserts that there is a certain proof of a contradiction, which is provably not instantiated by any actual natural number witness.
|
{
"source": [
"https://mathoverflow.net/questions/190711",
"https://mathoverflow.net",
"https://mathoverflow.net/users/63959/"
]
}
|
190,837 |
I would like to ask: does there exist an entire function which is bounded on every line parallel to the $x$-axis, but unbounded on the $x$-axis?
|
Yes, there are such functions. Take a very narrow region $D$ containing the positive ray,
with nice boundary and such that $D$ intersects every horizontal line other than the real line in a bounded interval.
Let $g$ be a conformal map of $D'$ onto the right half-plane, where $D'\subset D$ is another similar region. Then with appropriate choice of $D$ and $D'$ $$f(z)=\int_{\partial D'} \frac{e^{g(\zeta)}}{\zeta-z}d\zeta$$ will converge for $z$ outside $D$ and the function $f$ will be bounded outside $D$ and extend to an entire function (by a deformation of the contour). For the details, see for example MR2753600 or MR0545054. EDIT. The method is very flexible of course. Taking $D$ to be a half-strip and $g(z)=e^z$ one obtains a Mittag-Leffler function. Replacing it by $f(z+4i)$ you obtain a function that is bounded on every line from the origin. But to obtain a function as you ask, a half-strip $D$ will not work, so the function is less elementary. With the same method one can also construct functions which tend to zero on every line: just replace $f$ by $f(z)/(z-z_0)$ where $f(z_0)=0$ . Existence of infinitely many zeros of the original $f$ is easy to prove. Repeating this you can find a function which tends to $0$ on every line faster than any polynomial.
|
{
"source": [
"https://mathoverflow.net/questions/190837",
"https://mathoverflow.net",
"https://mathoverflow.net/users/54245/"
]
}
|
190,902 |
Let's say that a (recursively axiomatizable) set theory $T$ extending ZF is "ordinal-categorical" if, whenever $M$ and $N$ are standard models of $T$ sharing the same ordinals, one has $M = N$. For example, if $T$ proves $V = L$, then $T$ is ordinal-categorical. I think the same is true if $T$ proves $V = L(0^\sharp)$. Is there anything else that can be said about such theories $T$? In particular, is there a way to find an axiom or axiom schema A such that the ordinal-categorical theories $T$ are precisely the (recursively axiomatizable) extensions of ZF+A? If this is not known, is finding such an A a goal of the inner model program? (Note: In the original post I used the word "canonical" instead of "ordinal-categorical".)
|
In this edit, the statement of Friedman's theorem is reformulated (the previous formulation was incorrectly stated). Thanks to Dmytro Taranovsky and Farmer Schultzenberg for pointing out the blooper. See also Remark 2 for recent progress (January 2023) on this topic by Schultzenberg and Taranovsky. An old theorem of Harvey Friedman answers the question: Theorem. Under a mild set theoretical hypothesis $\mathrm{H}$ (see Note 1 below), there is a cofinal subset $U$ of $\omega_1$ (see Note 2 below) such that if $T$ is any r.e. extension of $\mathrm{ZF + V \neq L}$ that has a countable transitive model $M$ of height $\alpha$ , then $T$ has another model $N \neq M$ of height $\alpha$ . Note 1. The mild set theoretical hypothesis $\mathrm{H}$ above asserts that there is an ordinal $\alpha$ of uncountable cofinality such that $V_{\alpha} \models \mathrm{ZF}$ . Thus the existence of a strongly inaccessible cardinal implies $\mathrm{H}$ . Note 2. $U$ consists of $\omega_1 \cap G$ , where $G$ consists of ordinals $\alpha$ such that $L_{\mathrm{n}(\alpha)} \cap V_{\alpha}=L_{\alpha}$ . Here $\mathrm{n}(\alpha)$ is Friedman's notation (in the paper below) for the next admissiable ordinal after $\alpha$ , i.e., the least admissible ordinal greater than $\alpha$ ; this ordinal is denoted $\alpha^{+,\mathrm{CK}}$ in this MO question of Taranovsky . Note 3. The above Theorem follows from putting together the proof of the hard direction of Theorem 6.2 together with the proof of Lemma 6.3.1 of Friedman's paper below: Countable models of set theories , In A. R. D. Mathias & H. Rogers (eds.), Cambridge Summer School in Mathematical Logic , Lecture Notes in Mathematics vol. 337, Springer-Verlag. pp. 539--573 (1973). Two Remarks are in order. Remark 1. It is known that if the theory $\mathrm{ZF + V = L(0 ^{\#}})$ has transitive model of height $\alpha < \omega_1$ , then $\alpha \in U$ ; coupled with Friedman's above theorem, this implies that $\mathrm{ZF + V = L(0 ^{\#}})$ has more than one transitive model of height $\alpha$ . Thus two transitive models of set theory of the same height could both believe that $0 ^{\#}$ exists, and yet they might have distinct $0 ^{\#}$ s. Recall that the complexity of $0 ^{\#}$ is $\Delta^1_3$ . Remark 2. In the introduction to his aforementioned paper, Friedman posed the hitherto open question of whether the conclusion of the theorem above holds for $\alpha$ = the ordinal height of the Shepherdson-Cohen minimal model of set theory (it is known that $\alpha \notin U$ ). Schultzenberg's construction to this MO question of Taranovsky implicitly yields a negative answer to Friedman's question. See Taranovsky's answer below for a proposed striking generalization of Schultzenberg's construction.
|
{
"source": [
"https://mathoverflow.net/questions/190902",
"https://mathoverflow.net",
"https://mathoverflow.net/users/17218/"
]
}
|
190,965 |
In 1962 Toda published his book "Composition methods in homotopy groups of spheres", which contains computations of $\pi_{n+k}(S^n)$ for $k\le 19$ and $n\le 20$. The values of these groups are conveniently tabulated, and reproduced on the Wikipedia page Homotopy groups of spheres . The computations involve composition product, Toda brackets and the EHP sequence, as well as cohomology operations for the higher values of $k$. I am fairly certain that more values of $\pi_{n+k}(S^n)$ have been computed in the intervening years, perhaps with more modern methods such as the unstable Adams spectral sequence. Does anybody know of an up-to-date table of known unstable homotopy groups of spheres, beyond the range shown in Toda's tables?
|
I don't know the answer to your question, but I asked Fred Cohen. He had this to say: Most of the computations are in Mahowald's work
with the EHP sequence. This gives infinite
families at p = 2 with Rob Thompson's extensions to p > 2. Specific extensions of 2-primary components in fixed stems
are in
N. Oda, On the 2-components of the unstable homotopy groups of spheres. II, Proc. Japan Acad. 53, Ser. A (1977), 215-218.
N. Oda, Unstable homotopy groups of spheres, Bull. of the Inst. for Advanced Research of Fukuoka Univ. 44 (1979), 49-152.
N. Oda, Some relations in the 18-stem of the homotopy groups of spheres, Bull. Central Res. Inst. Fukuoka Univ., 104 (1988), 75–83.
Brayton Gray gave families of elements of order $p^r$ in
$\pi_*(S^{2r+1})$ for $p$ an odd prime. Example: It is not known whether there are elements of order 64 in $\pi_*(S^{11})$. Similarly, it is not known whether there are elements of order 64 in the stable
homotopy of $\mathbb RP^{10}$, thus a potential counterexample to the Freyd conjecture. Other than that, not much more in terms of specific computations
are in the literature as far as I know.
|
{
"source": [
"https://mathoverflow.net/questions/190965",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8103/"
]
}
|
191,222 |
Some years back (before MathOverflow was born), Tom Leinster asked an interesting question at the $n$-Category Café which I don't recall ever seeing an answer for: Does there exist a category $C$ that admits an essentially surjective functor $F: C \to Set^{C^{op}}$? Terminology: we say that a functor $F: C \to D$ is essentially surjective if every object $d$ of $D$ is isomorphic to some value $F(c)$. This is the good notion of surjectivity for the 2-category $Cat$, or at least one good notion. As is well-known from categorical circles, the presheaf category $Set^{C^{op}}$ here plays a role of "power object" $P(C)$ that is usefully regarded as analogous to power sets in set theory or more generally in toposes. (For example, the Yoneda embedding $y_C: C \to Set^{C^{op}}$ plays a role analogous to the singleton mapping $\{-\}: S \to P(S)$ from set theory.) In fact Tom's question is embedded in a larger discussion of what one should mean by a '2-topos' -- see that discussion for more on the analogy. So the question above is analogous to one that Cantor's theorem answers: can one have a set $S$ that maps onto its power set $S$? So the expected answer to the question is ' no '. Note however that the standard diagonalization technique behind Cantor's theorem, as explained for example here , doesn't apply in any obvious way since there is no general decent notion of diagonal map $C \to C \times C^{op}$. Regarding foundational issues: I'll leave that up to you. :-) If you want me to impose a constraint, we might add the condition that $C$ is locally small, but note that we'll soon be leaving the land of local smallness anyway, since there is a result due to Freyd and Street that if also $Set^{C^{op}}$ is locally small, then $C$ is (equivalent to) a small category, and that would be a huge constraint that makes the question not so interesting.
|
No such category exists. My original argument for this assumed local smallness and is below the break; here is a simpler argument that does not require local smallness (though it does basically use my original argument in the special case $\mathbf{C}=\mathbf{Set}$). Let us take $\kappa$ to be an inaccessible cardinal and work with $V_\kappa$ as our universe, so the categories $\mathbf{Set}$ and $\mathbf{C}$ we start with are classes in $V_\kappa$ (so $\mathbf{Set}$ is the category of sets in $V_\kappa$), and we only go outside $V_\kappa$ to form functor categories. Suppose that there is an essentially surjective functor $\mathbf{C}\to\mathbf{Set}^{\mathbf{C}^{op}}$ and let $H$ be the associated functor $\mathbf{C}\times\mathbf{C}^{op}\to\mathbf{Set}$. I will obtain a contradiction by proving that there are $2^\kappa$ non-isomorphic objects of $\mathbf{Set}^{\mathbf{C}^{op}}$. First of all, for every cardinal $\lambda<\kappa$, there is a constant presheaf $\lambda$ on $\mathbf{C}$, and so there is some object $A_\lambda$ with the property that $|H(A_\lambda,B)|=\lambda$ for all $B$. Now fix any object $B$, considered as an object of $\mathbf{C}^{op}$. Write $G(A)=H(A,B)$; then $G$ is a functor $\mathbf{C}\to\mathbf{Set}$ with the property that $|G(A_\lambda)|=\lambda$ for all $\lambda$. Consider $G$ as a functor $G^{op}:\mathbf{C}^{op}\to\mathbf{Set}^{op}$. We can then compose $G^{op}$ with any functor $P:\mathbf{Set}^{op}\to\mathbf{Set}$ to get a new presheaf $PG^{op}$ on $\mathbf{C}$. I claim that there are $2^\kappa$ choices of $P$ which give rise to non-isomorphic presheaves $PG^{op}$. Indeed, since $G^{op}$ is essentially surjective, it suffices to give $2^\kappa$ different functors $P$ such that the induced maps $\{\text{cardinals }\lambda<\kappa\}\to\{\text{cardinals }\lambda<\kappa\}$ are distinct. This is not difficult; for instance, it can be done by a variant of the "wedge of spheres" construction below (let $\mathbf{C}=\mathbf{Set}$, $F(A)=|A|$, and instead of just taking a single copy of each sphere when constructing $T(Q)$, add enough spheres to change the cardinality of $T(Q)$ at $G(\alpha)$). Let's work in the context of Grothendieck universes and require our categories to be locally small. Then I claim that no such category exists. Let $\kappa$ be an inaccessible cardinal and let $V_\kappa$ be our base universe. Let $\mathbf{C}$ be a locally small category. Define an exhaustion of $\mathbf{C}$ to be an unbounded function $F:\operatorname{Ob}(\mathbf{C})\to \kappa$ such that if $B$ is a retract of $A$ then $F(B)\leq F(A)$. First, I claim that if $\mathbf{C}$ has an exhaustion $F$, then $\mathbf{Set}^{\mathbf{C}^{op}}$ has $2^\kappa$ non-isomorphic objects and hence there is no essentially surjective functor $\mathbf{C}\to \mathbf{Set}^{\mathbf{C}^{op}}$. Let $1$ be the constant singleton presheaf on $\mathbf{C}$; let $*_B$ denote the unique element of $1(B)$ for all objects $B$. Given an object $A$ of $\mathbf{C}$, let $S^A$ (the "$A$-sphere", by analogy with the case $\mathbf{C}=\Delta$) be the presheaf obtained from $1$ by freely adjoining an element of $S^A(A)$ whose image under every map $A\leftarrow B$ is $*_B$ for all $B$ such that $F(B)<F(A)$. Since $A$ is not a retract of any such $B$, this new element of $S^A(A)$ will not be equal to $*_A$. Now let $I\subseteq \kappa$ be the image of $F$ and choose a right inverse $G:I\to\operatorname{Ob}(\mathbf{C})$ of $F$. 
For each $Q\subset I$, define $T(Q)$ to be the colimit of the diagram consisting of the inclusions $1\to S^{G(\alpha)}$ for all $\alpha\in Q$. This colimit exists because for any object $A$, $1\to S^{G(\alpha)}$ is an isomorphism at $A$ for all $\alpha>F(A)$, and hence this colimit is small at $A$. We can determine the set $Q$ from the presheaf $T(Q)$ the same way you can determine the non-degenerate simplices of a simplicial set. Thus the presheaves $T(Q)$ are all non-isomorphic. Since there are $2^\kappa$ different values of $Q$, this proves the claim. To prove the claimed theorem, it now suffices to show that any essentially large locally small category has an exhaustion. Let $\mathbf{C}$ be an essentially large locally small category, and assume WLOG it is skeletal. By essential largeness, let $f:\operatorname{Ob}(\mathbf{C})\to \kappa$ be a bijection. By local smallness, each object of $\mathbf{C}$ has fewer than $\kappa$ other objects as retracts (since a retraction is determined by the associated idempotent endomorphism). We can thus define $F:\operatorname{Ob}(\mathbf{C})\to \kappa$ by $$F(A)=\sup \{f(B):B\text{ is a retract of }A\},$$
and this $F$ will be an exhaustion.
|
{
"source": [
"https://mathoverflow.net/questions/191222",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2926/"
]
}
|
191,452 |
How many elements of $\mathrm{SL}_n(\mathbb{F}_p)$ have all nonzero entries? Just the answer mod $p$ would be fine as well. This seems like it should be easy/in the literature but I couldn't find it.
|
Mod $p$ it's $(-1)^{n+1} n!$. Let's compute the number of points with determinant $1$ and all entries nonzero by inclusion-exclusion, modulo $p$. For each set of entries, we get a term for matrices in $\mathrm{SL}_n$ with those entries $0$. This is an affine hypersurface of degree $n$ in some affine space. By Warning's theorem the number of points is a multiple of $p$ unless the number of variables is at most $n$. But the number of variables is the number of nonzero entries. A matrix with $\leq n$ nonzero entries that is invertible is a permutation matrix times a diagonal matrix. We can easily count the contribution of these. It is $(-1)^{n^2-n} (p-1)^{n-1} n!$. Mod $p$ we get the stated claim.
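For small $n$ and $p$ this is easy to confirm by brute force. Here is a quick illustrative Python sketch (my own addition, not part of the answer above): it enumerates all matrices over $\mathbb{F}_p$ with no zero entries, keeps those of determinant $1$, and compares the count mod $p$ with $(-1)^{n+1} n!$.

    from itertools import product
    from math import factorial

    def det_mod_p(M, p):
        # Laplace expansion along the first row; fine for the tiny sizes used here
        n = len(M)
        if n == 1:
            return M[0][0] % p
        total = 0
        for j in range(n):
            minor = [row[:j] + row[j+1:] for row in M[1:]]
            total += (-1) ** j * M[0][j] * det_mod_p(minor, p)
        return total % p

    def count_all_nonzero(n, p):
        count = 0
        for entries in product(range(1, p), repeat=n * n):   # all entries nonzero
            M = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
            if det_mod_p(M, p) == 1:
                count += 1
        return count

    for n, p in [(2, 3), (2, 5), (3, 3)]:
        c = count_all_nonzero(n, p)
        print(n, p, c, c % p, (-1) ** (n + 1) * factorial(n) % p)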
|
{
"source": [
"https://mathoverflow.net/questions/191452",
"https://mathoverflow.net",
"https://mathoverflow.net/users/12138/"
]
}
|
191,609 |
In the Princeton Companion to Mathematics one reads that even pure mathematicians should know some theoretical physics and applied mathematics. What are some well-organized comprehensive companions to theoretical physics for working mathematicians? I have heard of Armin Wachter and Henning Hoeber's, but I don't know if it is rigorous enough (i.e., whether, for example, enough proofs of the theorems are given).
|
If you allow such a comprehensive reference to re-introduce basic mathematics, then either as a layman or a working mathematician your prayers are answered by the following (he even prefaces by saying that his intended layman-audience must have some mathematical sophistication): The Road to Reality: A Complete Guide to the Laws of the Universe , by Roger Penrose Now let's try to break down the subjects. Classical Mechanics: 1) Mathematical Methods of Classical Mechanics , by Arnold 2) A Mathematical Introduction to Fluid Mechanics , by Chorin-Marsden Quantum Mechanics: 1) Mathematical Foundations of Quantum Mechanics , by Mackey 2) The Theory of Groups and Quantum Mechanics , by Weyl General Relativity: 1) General Relativity for Mathematicians , by Sachs-Wu 2) The Large Scale Structure of Space-Time , by Hawking-Ellis Electrodynamics: 1) Electromagnetic Theory and Computation: A Topological Approach , by Gross-Kotiuga 2) On the Mathematical Foundations of Electrical Circuit Theory , by Smale 3) This is a plug for Gauge theory: 3a) On Some Recent Developments in Yang-Mills Theory , by Bott 3b) On Some Recent Interactions Between Mathematics and Physics , by Bott 3c) Concept of Nonintegrable Phase Factors and Global Formulation of Gauge Fields , by Wu-Yang 3d) From Superconductors and Four-Manifolds to Weak Interactions , by Witten Standard Model: The Algebra of Grand Unified Theories , by Baez-Huerta Quantum Field Theory and String Theory: 1) Quantum Physics: A Functional Integral Point of View , by Jaffe-Glimm 2) Geometry and Quantum Field Theory , 1994 IAS lectures 3) Quantum Fields and Strings: A Course for Mathematicians , 1996 IAS lectures
|
{
"source": [
"https://mathoverflow.net/questions/191609",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
191,929 |
There is no shortage of anecdotes and conjectures on both sides of this widespread belief, but good supporting data either way is harder to find. Can anyone provide any references for serious (preferably academic rather than journalistic) research that actually crunched the data and produced interesting conclusions about whether this bit of folklore is reality-based? I put "mathematicians do their best work when they're young" in quotes because this is clearly not a well-posed question--it is only intended to be shorthand for any of a number of questions on this topic. Historical studies (Évariste Galois, etc.) are OK, but studies on people born after, say, 1950, would be of greater interest and relevance.
|
These two studies arrive at what seems to be a more sensible conclusion: Age and Scientific Performance , Stephen Cole (1976). The long-standing belief that age is negatively associated with scientific productivity and creativity is shown to be based upon incorrect
analysis of data. Analysis of data from a cross-section of academic
scientists in six different fields indicates that age has a slight
curvilinear relationship with both quality and quantity of scientific
output. These results are supported by an analysis of a cohort of
mathematicians who received their Ph.D.'s between 1947 and 1950.
There was no decline in the quality of work produced by these
mathematicians as they progressed through their careers. Age and Achievement in Mathematics: A Case-Study in the Sociology of Science , Nancy Stern (1978). The claim that younger mathematicians (whether for physiological or
sociological reasons) are more apt to create important work is
unsubstantiated... I have found no clear relationship between age and
achievement in mathematics. For anecdotes and "advice to aging mathematicians", I might recommend Mathematical menopause, or, a young man's game?, by Reuben Hersh (The Mathematical Intelligencer, 2001). Until we find a consensus about which advances are "major," we can't
refute Hardy's claim that no major advance has been made by a
mathematician over 50. But his slogan, "Mathematics is a young man's
game," is misleading, even harmful.
|
{
"source": [
"https://mathoverflow.net/questions/191929",
"https://mathoverflow.net",
"https://mathoverflow.net/users/64529/"
]
}
|
193,491 |
Recall that a space (=CW complex) is called simple if it is connected, the fundamental group is abelian, and the fundamental group acts trivially on all higher homotopy groups. Call Simp(X) a simplification of X if it is universal for maps from X to a simple space. Does Simp(X) exist for any connected space X? This would give a higher analogue of abelianization of groups. (A natural guess would be the loop suspension of X, but I'm pretty sure that doesn't work as the following example shows. Let X be $\mathbb{R}\mathrm{P}^2$, and note that the $2$-type of $X$ can be described completely by saying that $\pi_1(X) = \mathbb{Z}/2$, $\pi_2(X) = \mathbb{Z}$, the action is the nontrivial one sending $n \mapsto -n$, and the gluing is by the nontrivial element of $H^3(\mathbb{Z}/2,\mathbb{Z})$. There is a simple space $Y$ which is a $2$-type where $\pi_1(Y) = \mathbb{Z}/2$, $\pi_2(Y) = \mathbb{Z}/2$, the action is trivial, and the gluing is by the nontrivial element of $H^3(\mathbb{Z}/2,\mathbb{Z}/2)$. There's a natural map from X to Y since the nontrivial action on $\mathbb{Z}$ becomes trivial when you kill $2$. On the other hand, $\pi_2(\mathbb{R}\mathrm{P}^2) = \mathbb{Z}$ maps to $\pi_2(\Omega \Sigma \mathbb{R}\mathrm{P}^2) = \mathbb{Z}/4$ sending $1 \mapsto 2$, and so the surjective map $\mathbb{Z} = \pi_2(X) \rightarrow \pi_2(Y) = \mathbb{Z}/2$ cannot factor through $\pi_2(\Omega \Sigma \mathbb{R}\mathrm{P}^2)$.)
|
The space $S^1\vee S^1$ does not have a simplification. Indeed, suppose $f:S^1\vee S^1\to X$ is a simplification and let $i:S^1\vee S^1\to S^1\times S^1$ be the standard inclusion. Then since $X$ is simple, the commutator of the generators of $\pi_1(S^1\vee S^1)$ becomes nullhomotopic after composing with $f$, so $f$ extends over $i$ to a map $S^1\times S^1\to X$. But $S^1\times S^1$ is already simple, so $i$ must factor through $f$. Thus we have maps $S^1\times S^1\to X\to S^1\times S^1$, and the composition induces the identity on $\pi_1$ and is thus homotopic to the identity. This implies that $H_2(X)$ has $\mathbb{Z}$ as a direct summand, and thus $H^2(X)$ is nontrivial. But this is a contradiction since $H^2(S^1\vee S^1)=0$ and $K(\mathbb{Z},2)$ is simple.
|
{
"source": [
"https://mathoverflow.net/questions/193491",
"https://mathoverflow.net",
"https://mathoverflow.net/users/22/"
]
}
|
193,837 |
Let $$\small F_n=(a+b+c)^n+(b+c+d)^n-(c+d+a)^n-(d+a+b)^n+(a-d)^n-(b-c)^n$$ and $ad=bc$ , then $$64 F_6 F_{10}=45 F_8^2$$ This fascinating identity is due to Ramanujan and can be found in "Ramanujan for Lowbrows", by B.C. Berndt and S. Bhargava . Would anyone have any idea how Ramanujan discovered this identity? The proofs of the identity offered so far in the papers "A Note on an Identity of Ramanujan", by T. S. Nanjundiah , "Two or Three Identities of Ramanujan", by M.D. Hirschhorn and "A remarkable identity found in Ramanujan's third notebook", by B.C. Berndt and S. Bhargava make the identity less mysterious, but how Ramanujan found the identity in the first place still remains a mystery. As Berndt and Bhargava remarked, it is also not clear whether this is an accidental, isolated result (along with the 3-7-5 counterpart discovered by Hirschhorn), or if there is some deeper theorem lurking behind it.
|
You have two questions: 1) How did Ramanujan discover it? 2) Is it an accidental, isolated result? The second one is easier to answer and may shed light on the first. I. Define $F_n = x_1^n+x_2^n+x_3^n-(y_1^n+y_2^n+y_3^n),\;$ where $\,\small x_1+x_2+x_3=y_1+y_2+y_3 = 0$. Theorem 1: If $F_1 = F_3 = 0$, then, $$9x_1x_2x_3 F_6 = 2F_9 = 9y_1y_2y_3 F_6$$ Theorem 2: If $F_2 = F_4 = 0$, then, $$64F_6F_{10} = 45F_8^2\quad \text{(Ramanujan)}$$ $$25F_3F_{7} = 21F_5^2\quad \text{(Hirschhorn)}$$ II. Define $F_n = x_1^n+x_2^n+x_3^n+x_4^n-(y_1^n+y_2^n+y_3^n+y_4^n),\;$ where also $\,\small \sum x_i =\sum y_i= 0$. Theorem 3: If $F_1 = F_3 = F_5 = 0$, then, $$7F_4F_9 = 12F_6F_7\quad \text{(yours truly)}$$ This also has a similar Ramanujan-type formulation. Define, $$\small P_n = ((a+b+c)^n + (a-b-c)^n + (-a-b+c)^n + (-a+b-c)^n) - ((d+e+f)^n + (d-e-f)^n + (-d-e+f)^n + (-d+e-f)^n)$$ If two conditions are satisfied, $$\small abc = def,\quad a^2+b^2+c^2 = d^2+e^2+f^2$$ then, $$7P_4P_9 = 12P_6P_7$$ (The two conditions have an infinite number of primitive solutions, one of which is $1,10,12;\,2,4,15$.) If you are looking for the general theory behind Ramanujan's 6-10-8 Identity, the theorems flow from the properties of equal sums of like powers . The 6-10-8 needed only one condition, namely $ad=bc$. Going higher, you now need two . Presumably going even higher would need more. Also, there is a constraint $\,\small \sum x_i =\sum y_i= 0$. Without this constraint, then more generally, III. Define $F_n = x_1^n+x_2^n+x_3^n-(y_1^n+y_2^n+y_3^n),\;$ and $m = (\sum x_i^4)/(\sum x_i^2)^2$. Theorem 4: If $F_2 = F_4 = 0$, then, $$32F_6F_{10} = 15(m+1)F_8^2$$ ( Note: Ramanujan's simply was the case $m=1/2$.) IV. Define $F_n = x_1^n+x_2^n+x_3^n+x_4^n-(y_1^n+y_2^n+y_3^n+y_4^n),\;$ and $m = (\sum x_i^4)/(\sum x_i^2)^2$. Theorem 5: If $F_2 = F_4 = F_6 = 0$, then, $$25F_8F_{12} = 12(m+1)F_{10}^2$$ and so on for similar identities with more terms and multi-grade higher powers, ad infinitum . V. Conclusion: Thus, was the 6-10-8 Identity an isolated result? No, it is a special case (and a particularly beautiful one at that) of a more general phenomenon. And how did Ramanujan find it? Like most of his discoveries, he plucked it out of thin air, I suppose. P.S. Humor aside, what I read was that, as he found paper expensive, he would scribble on a small slateboard with chalk. After he was satisfied with a result, he would write it down on his notebook, and erase the intermediate steps that were on the slateboard . (Sigh.) Also, since he spent most of his waking hours thinking about mathematics, I think it was only natural it spilled over into his dream state. (A similar thing happened with the chemist Kekule and the discovery of the benzene ring.) P.P.S. Results for multi-grades can be found in Table 2 of this MO answer .
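As a quick numerical sanity check of the 6-10-8 identity itself (an illustrative Python sketch added here, not part of the answer above), exact integer arithmetic confirms it for any choice with $ad=bc$, e.g. $a,b,c,d = 1,2,3,6$:

    def F(n, a, b, c, d):
        # F_n exactly as defined in the question
        return ((a + b + c)**n + (b + c + d)**n - (c + d + a)**n - (d + a + b)**n
                + (a - d)**n - (b - c)**n)

    a, b, c, d = 1, 2, 3, 6                    # ad = bc = 6
    lhs = 64 * F(6, a, b, c, d) * F(10, a, b, c, d)
    rhs = 45 * F(8, a, b, c, d) ** 2
    print(lhs == rhs, lhs)                     # True 242323948339200000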
|
{
"source": [
"https://mathoverflow.net/questions/193837",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32389/"
]
}
|
193,933 |
One of my daughters was having a small programming exercise. Let's consider following algorithm: Take a list of length $n$: $\ (1\,\ 2\,\ \ldots\,\ n)$. Remove every $2$nd number. From the resulting list, remove every $3$rd number. From the resulting list, remove every $4$th number. ... Follow on until the list remains unchanged and let $u_n$ be the number of remaining elements. Example with $n=11$ $(\ 1\,\ 2\,\ 3\,\ 4\,\ 5\,\ 6\,\ 7\,\ 8\,\ 9\,\ 10\,\ 11\ )\quad \Rightarrow\quad (\ 1\ *\ 3\ *\ 5\ *\ 7\ *\ 9 \ *\ 11\ )$ $(\ 1\,\ 3\,\ 5\,\ 7\,\ 9\,\ 11\ )\quad \Rightarrow\quad (\ 1\,\ 3\ *\ 7\,\ 9\ *\ )$ $(\ 1\, 3\,\ 7\,\ 9\ )\quad \Rightarrow\quad (\ 1\,\ 3\,\ 7\ *\ )$ $(\ 1\,\ 3\,\ 7\ )\ $ -- will not be modified anymore, and therefore $u_n=3$. QUESTION: why do we have $\lim\limits_{n \to +\infty} \frac{n}{u_n^2}=\frac{\pi}{4}$ ? Thanks!
|
This problem was studied first by the founder of sieve theory, Brun himself, who proved this asymptotic. For a fairly recent paper on this subject look at Andersson who gives more precise estimates for $u_n$. The MO question Sequences with integral means is also closely related, and see also my answer there.
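To see the limit emerge numerically, here is a direct implementation of the sieve described in the question (a short illustrative Python sketch added here; the helper name is ad hoc). Convergence of $n/u_n^2$ towards $\pi/4 \approx 0.7854$ is fairly slow.

    from math import pi

    def u(n):
        # remove every 2nd element, then every 3rd of what remains, and so on
        lst = list(range(1, n + 1))
        k = 2
        while k <= len(lst):
            del lst[k - 1::k]       # delete every k-th surviving element (1-indexed)
            k += 1
        return len(lst)

    print(u(11))                     # 3, as in the example above
    for n in (10**4, 10**5, 10**6):
        un = u(n)
        print(n, un, n / un**2)      # compare with pi/4 = 0.7853981...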
|
{
"source": [
"https://mathoverflow.net/questions/193933",
"https://mathoverflow.net",
"https://mathoverflow.net/users/41060/"
]
}
|
194,267 |
Let $X, Y$ be normed spaces and $f:X\to Y$ be a mapping. Assume that for all $n\in\mathbf{N}$ and all $x,y\in X$, $$\|x-y\|=n\iff\|f(x)-f(y)\|=n.$$
Under what conditions will this map be an isometry?
Thanks
|
So it turns out my earlier intuition was incorrect, and one can leverage the order properties of ${\bf R}$ to show: Theorem . Let $X, Y$ be real Hilbert spaces, with the dimension of $X$ at least two, and let $f: X \to Y$ be a function such that $\|f(x)-f(y)\|=\|x-y\|$ whenever $\|x-y\|$ is a natural number. (We do not assume $f$ to be continuous.) Then $f$ is an isometry. Proof By passing to an affine subplane of $X$, we may assume without loss of generality that $X$ is two-dimensional, so we will write $X = {\bf R}^2$. If $e$ is a unit vector in $X$, $x$ is an element of $X$ and $n,m$ are natural numbers then $f(x), f(x+ne), f(x+(n+m)e)$ form a degenerate triangle with side lengths $n,m,n+m$, which implies that $f(x+ne) = f(x) + nu$ and $f(x+(n+m)e) = f(x) + (n+m) u$ for some unit vector $u$. In particular, $f$ maps any isometric copy of ${\bf Z}$ to another isometric copy of ${\bf Z}$. Next, if $e, f$ are orthogonal unit vectors in $X$, then $f(x), f(x+3e), f(x+4f)$ form a triangle with side lengths $3,4,5$ and in particular $f(x+3e)-f(x)$ is orthogonal to $f(x+4f)-f(x)$. From this and the previous claim we see that $f$ maps any isometric copy of ${\bf Z}^2$ to another isometric copy of ${\bf Z}^2$. By the preceding discussion we have
$$f(x,y)-f(0,0) =x (f(1,0)-f(0,0)) + y (f(0,1)-f(0,0)) \qquad (1)$$
whenever $x,y$ are integers, but also $f(n e)-f(0,0) = n (f(e)-f(0,0))$ for any unit vector $n$. In particular, for any Pythagorean triple $a^2+b^2=c^2$, we have (1) when $(x,y)$ is an integer multiple of $(\frac{a}{c},\frac{b}{c})$; conjugating by translation we then see that (1) also holds when $(x,y)$ is an integer multiple of $(\frac{a}{c},\frac{b}{c})$ plus an element of ${\bf Z}^2$. Iterating this we see that (1) holds whenever $(x,y)$ is an integer linear combination of Pythagorean fractions $(\frac{a}{c}, \frac{b}{c})$. In particular, (1) holds on a dense subset $D$ of ${\bf R}^2$. This is already enough to get isometry in the continuous case. Now we remove the continuity hypothesis. Observe that if $\|x-y\| \leq 2$, then $x$ can be reached from $y$ by a pair of moves of unit length, and hence by the triangle inequality $\|f(x)-f(y) \| \leq 2$. Now let us normalise $f,X,Y$ so that ${\bf R}^2 \subset Y$, $f(0,0)= (0,0)$, $f(1,0)=(1,0)$, and $f(0,1)=(0,1)$. Then by (1) we see that $f$ is the identity on $D$. For any $p \in {\bf R}^2$ and $\varepsilon>0$, we can find elements $q, r$ of $D$ within $\varepsilon$ of $p + (2,0)$ and $p-(2,0)$ that lie within $2$ of $p$, and thus by the preceding we see that $f(p)$ lies within $2+\varepsilon$ of $p+(2,0)$ and $p-(2,0)$. Sending $\varepsilon$ to zero we see that $f(p)=p$, and the claim follows.
|
{
"source": [
"https://mathoverflow.net/questions/194267",
"https://mathoverflow.net",
"https://mathoverflow.net/users/52860/"
]
}
|
195,165 |
There have been other similar questions before (e.g. What is your picture of the flat topology? ), but none of them seem to have been answered fully. As someone who originally started in topology/complex geometry, the étale topology makes some sense to me. It's sort of like the complex topology, in that there are enough "open sets" that things like the inverse function theorem work. But I don't really understand what these other more "exotic" topologies represent. From the (naive) background of topology, it sounds like we're just "adding more open sets" to our topology (which I know isn't right, since this isn't a topology in the standard sense). So what do they represent? What do they do for us?
|
I'm a topologist, and so this answer is going to specifically be about analogies with ordinary topology. I like to think of different Grothendieck topologies as corresponding to different "allowable" ways to build up a space using a quotient topology. The Zariski topology is like recognizing that a space $X$ can be built up from a collection of open subsets $U \subset X$ which cover $X$ and identifying points in common intersections. The etale topology is like recognizing that if a map $Y \to X$ is surjective and a local homeomorphism, then this map makes $X$ homeomorphic to the quotient of $Y$ by an equivalence relation. The fppf topology is similar to the etale topology, but now allowing maps $Y \to X$ which are open. The fpqc topology, for lack of a better analogy, is like allowing arbitrary maps $Y \to X$ which make $X$ into a quotient space of $Y$. Much of the study of these topologies is the study of sheaves on them. The choice of topology has several significant consequences. The choice of topology determines how much functoriality your sheaf has. (In particular, choices like "big site" or "small site" determine how many objects $Y$ you can evaluate your sheaf on!) The choice of topology determines what kind of data you need to construct things (which is code for descent theory). We know that if $Y \to X$ is a quotient map, we can build a vector bundle on $X$ by taking a vector bundle on $Y$ and imposing a compatible equivalence relation on it (e.g. from a clutching function). Similarly, to build a sheaf in topology $\tau$ we just need to build it on a $\tau$-cover and glue it together. The choice of topology determines your definition of "local". I really like Simon Pepin Lehalleur's comment here, because it describes exactly the following problem: when is a map ${\cal F} \to {\cal G}$ of sheaves surjective? It is surjective when it is surjective locally . For example, the n'th power map $\Bbb G_m \to \Bbb G_m$ is a surjection of sheaves precisely when, locally , every invertible function is an n'th power of some other invertible function. That's rarely true in the Zariski topology; it's true in the etale topology when $n$ is invertible, because then adjoining a solution of the polynomial $(x^n - \alpha)$ is an etale extension; it's always true in the fppf topology because adjoining a solution of the polynomial $(x^n - \alpha)$ is always fppf.
|
{
"source": [
"https://mathoverflow.net/questions/195165",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1703/"
]
}
|
195,325 |
I'm trying to sum the remainders when dividing N by numbers from $1$ up to $N$
$$\sum_{i = 1}^{N} N \bmod i$$ It's easy to write a program that evaluates the sum in $O(N)$ time if $N$ is small, but what if $N$ is large, say around 1e10? So I was wondering if there is a formula or an algorithm to get the sum in a better way than $O(N)$. I searched online and found a paper which derives some interesting properties of the sum, yet doesn't state an efficient way to compute it. It does contain a recursive formula, but that formula isn't practical, since it only reduces the problem from $N$ to $N-1$ and the relation involves $\sigma (n)$, which can be evaluated at best in $O$(#prime factors of $N$), resulting in a much worse time than $O(N)$. On the other hand, the paper contains the formula $$ \sum_{i = 1}^{N} N \bmod i = N^{2} - \sum_{i = 1}^{N} \sigma (i) $$ so if I can evaluate the sum of the sum-of-divisors function I would get the answer. Yet I searched online and found only this discussion , which had a formula for the sum of the sum-of-divisors function, but it's not exact and involves sums of fractions, which introduce rounding errors. So I was wondering if there is a way to evaluate either of the two sums exactly and with a better time complexity than $O(N)$.
|
One can use the Dirichlet hyperbola method to compute $\sum_{i \leq n} \sigma(i)$ in time $O( n^{1/2} )$ (up to logarithmic factors coming from arithmetic operations such as division): \begin{align}
\sum_{i \leq n} \sigma(i) &= \sum_{i \leq n} \sum_{d|i} d \\
&= \sum_{d,m: dm \leq n} d \\
&= \sum_{d \leq \sqrt{n}} d \sum_{\sqrt{n} < m \leq n/d} 1 + \sum_{m \leq \sqrt{n}} \sum_{d \leq n/m} d \\
&= \sum_{d \leq \sqrt{n}} d( \lfloor \frac{n}{d} \rfloor - \lfloor \sqrt{n}\rfloor ) + \sum_{m \leq \sqrt{n}} \frac{\lfloor \frac{n}{m} \rfloor \left( \lfloor \frac{n}{m} \rfloor + 1 \right)}{2}.
\end{align} One can obtain some small speedups (by a factor of $O(1)$) by collecting some like terms here. One can improve this to $O(n^{1/2-c})$ for some small $c>0$ by approximating the hyperbola by a polygon: see the Polymath4 paper in which this was done for the sum $\sum_{i\leq n} \tau(i)$. Other than small improvements in $c$, I think this is the fastest algorithm currently known.
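In code, the hyperbola method above is only a few lines. Here is a Python sketch (added for illustration, with ad hoc function names), combined with the question's identity $\sum_{i \le N} N \bmod i = N^2 - \sum_{i \le N} \sigma(i)$ and a brute-force check on small inputs:

    from math import isqrt

    def sigma_sum(n):
        # sum_{i<=n} sigma(i) via the Dirichlet hyperbola method, O(sqrt(n)) operations
        r = isqrt(n)
        total = 0
        for d in range(1, r + 1):
            total += d * (n // d - r)        # pairs (d, m) with dm <= n and m > sqrt(n)
        for m in range(1, r + 1):
            q = n // m
            total += q * (q + 1) // 2        # pairs with m <= sqrt(n): sum of all d <= n/m
        return total

    def remainder_sum(n):
        return n * n - sigma_sum(n)          # the identity from the question

    assert all(remainder_sum(n) == sum(n % i for i in range(1, n + 1)) for n in range(1, 300))
    print(remainder_sum(10**10))             # only about 10^5 loop iterations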
|
{
"source": [
"https://mathoverflow.net/questions/195325",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66543/"
]
}
|
195,739 |
Recently I gave an interview to local media where I explained some basic open problems in billiard dynamics. After a 45 min interview the reported asked me what "real life" problems can be solved using billiards...and I gave a really vague answer. I'm looking for a precise example of a "real life" problem (besides the billiard game of course) that can be modeled using billiard dynamics. "real life" = applied (I'm not english native speaker)
|
The
billiard-ball computer , also known as a conservative logic
circuit, is an idealized model of a reversible mechanical computer
based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and
Tommaso Toffoli. Instead of using electronic signals like a
conventional computer, it relies on the motion of spherical billiard
balls in a friction-free environment made of buffers against which the
balls bounce perfectly. It was devised to investigate the relation
between computation and reversible processes in physics. The billiard-ball computer was never realized in this form, but it played a significant role in the development of the quantum computer. Since the unitary evolution of quantum mechanics is reversible, it cannot employ the irreversible logical operations of a conventional computer. (This story is told here .) For an altogether different application of billiard ball dynamics, to semiconductor device physics, see Billiard model of a ballistic multiprobe conductor (1989).
|
{
"source": [
"https://mathoverflow.net/questions/195739",
"https://mathoverflow.net",
"https://mathoverflow.net/users/892/"
]
}
|
195,750 |
This is equivalent to my earlier question A question about something like "shelling" in a PL manifold , but maybe more comprehensible and to the point. Given a triangulation of the PL sphere $S^n$, is there always a subdivision (a.k.a. refinement, a.k.a. finer triangulation) that makes it shellable? Put this way, I'm guessing that the answer is well-known. EDIT: I quickly got two different answers, each of which seems to give just what I need. I'm more or less arbitrarily accepting Allan's.
|
|
{
"source": [
"https://mathoverflow.net/questions/195750",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6666/"
]
}
|
195,770 |
I am interested in the following question. Are maps which induce the same homomorphism on homotopy and homology groups homotopic? I am sure the answer is no, however I cannot imagine how to construct counterexamples.
|
Take the composition of a degree one map $f:T^3\to S^3$ with the Hopf map $g:S^3\to S^2$, where $T^3$ is the 3-torus. This composition is trivial on homotopy groups since $T^3$ is aspherical and $\pi_1S^2=0$. It is trivial on $H_i$ for $i>0$ since this is true for $g$. If $gf$ were nullhomotopic we could lift a nullhomotopy to a homotopy of $f$ to a map to a circle fiber of $g$, which would imply that $f$ had degree 0, a contradiction. Thus $gf$ induces the same maps on homology and homotopy groups as a constant map, but it isn't homotopic to a constant map. (I forget where I first saw this example, maybe in something of Arnold.)
|
{
"source": [
"https://mathoverflow.net/questions/195770",
"https://mathoverflow.net",
"https://mathoverflow.net/users/65937/"
]
}
|
196,058 |
Let $\theta = \tan^{-1}(t)$ . Nowadays it is taught: 1º that $$
\frac{d\theta}{dt} = \frac 1{dt\,/\,d\theta} = \frac 1{1+t^2},
\tag1
$$ 2º that, via the fundamental theorem of calculus, this is equivalent to $$
\theta = \int_0^t\frac{du}{1+u^2},
\tag2
$$ 3º that, expanding the integrand in a geometric series and integrating term by term, this becomes the Nilakantha Madhava-Gregory-Leibniz formula $$
\theta = t - \frac{t^3}3+\frac{t^5}5-\frac{t^7}7+\dots.
\tag3
$$ Question: Who first proved $(1)$ in print as we do, by deriving an inverse function? I can’t find it in Nilakantha: According to Ranjan Roy (1990, p.300) , Nilakantha first published $(3)$ without proof in his Tantrasangraha (1501) ; a later commentary known as Yuktibhasa contains a proof by rectification of an arc of circle, which is beautiful but certainly not quite the same as $(1)$ . I can’t find it in Gregory: According to Dehn & Hellinger (1943, p.149) , Gregory communicated $(3)$ in a 1671 letter to Collins , and never published a proof; speculation exists that he found it by deriving $\tan^{-1}$ enough times to figure out its Taylor series at $0$ , but in any event, he left no trace of how he may have computed these derivatives. I can’t find it in Leibniz: According to González-Velasco (2011, p.347) , Leibniz communicated $(3)$ in 1674 letters to Oldenburg and Huygens, and later published the case $t=1$ in Acta Eruditorum (1682, pp.41-46) ; his unpublished proof is available (many times over) in the 700+ pages of his Collected Works, Vol. VII,6 . There, or in the nice exposition given in Hairer & Wanner (1996, 2nd printing, pp.49-50) , one sees that he was squaring the circle in an elaborate way which has nothing to do with $(1)$ . Of course Leibniz must have become aware of $(1)$ and $(2)$ at some point , as (later!) inventor of the notation that makes them almost automatic. Unfortunately, I can’t find any written evidence of that. Maybe someone else will have better luck! (The closest I can find is a 1707 letter of Wolff to Leibniz , where the new notation is used to write in effect that $d\theta = dt:(1+t^2)$ , and then deduce $(2)$ and $(3)$ . The two correspondents may well have had in mind the modern proof $(1)$ of this differential relation, but neither says so.) I can’t find it in Jacob Bernoulli: With Leibniz notation spreading, one might think that a disciple would write $(1)$ at the first opportunity. But that’s not what Jacob B. does to rectify a unit circle in Positionum de Seriebus Infinitis Pars Tertia (Basel, 1696, Prop. XLV) : instead, he parametrizes one with $(x,y)=$ $\bigl(x,\sqrt{2x-x^2}\bigr)$ and then expresses the resulting arc length differential — also seen in Leibniz ( 1686 ) — as $$
d\theta
=\sqrt{\smash{dx^2+dy^2}\vphantom{a^2}}
=\frac{dx}{\sqrt{2x-x^2}}
=\frac{2d\mathsf t}{1+\mathsf t^2}
\tag{$*$}
$$ $(=\mathrm{LH}$ on his Fig. 3 $)$ by introducing a “diophantine” (a.k.a. Weierstraß ) substitution $\smash{\mathsf t=\frac xy}$ $=\smash{\tan\frac\theta2}$ $(=\mathrm{BI}$ on the figure, as he notes; so his $\mathsf t=\tan\mathrm{BAI}$ is not our $t=\tan\mathrm{BAH})$ . While this still proves $(2)$ and $(3)$ for the halved angle and its tangent, the argument definitely isn’t $(1)$ . $\hspace{8.5em}$ I can’t find it in Johann Bernoulli: When faced with the task of integrating $\smash{\frac{dt}{1+t^2}}$ in his paper on rational integrals (1702) , Johann B. proposes two substitutions: The first (in Probl. I, Corol.) comes from the partial fraction decomposition $\frac1{1+t^2}=\frac{1/2}{1+it} + \frac{1/2}{1-it}$ , and consists in putting $u = \frac{1+it}{1-it}$ so that $
\frac{dt}{1+t^2}=\frac{du}{2iu}=d\left[\frac1{2i}\log\frac{1+it}{1-it}\right]
$ . The second (in Probl. II) consists in putting $u=\frac1{1+t^2}$ so that $\frac{dt}{1+t^2} = \frac{-du}{\sqrt{4(u-u^2)}}$ , which he knows (perhaps by recognizing half $(*)$ with $x=2(1-u)$ ?) is a “circular sector or arc differential”. Neither of these is the substitution $\theta = \tan^{-1}(t)$ , which via $(1)$ would have led directly to $\frac{dt}{1+t^2} = d\theta$ . And in later papers ( 1712 , 1719 ) Bernoulli is content to describe this relation as “well-known”. I can’t find it in de Moivre: Schneider (1968, footnotes 248 & 250) seems to claim that a 1708 letter of de Moivre to Bernoulli has the proof $(1)$ and also the “Euler” formula $\theta=\smash{\frac1i}\log(\cos\theta+i\sin\theta)$ . But this is not borne out by the letter’s text in ( 1931 , pp.241-257): there, as also in his paper ( 1703 , p.1124) and book ( 1730 , p.44), de Moivre simply states $\smash{d\theta=\frac{dt}{1+t^2}}$ without proving it anew. I can’t find it in Euler: Euler was of course well aware of $(2)$ , which appears for instance in his fifth paper ( 1729 , pp.93, 95) and in his later E60, 65, 66, 125, 129, 130, 162, 217, 391, 475, 482, etc . But when it comes to proving $(2)$ or $(3)$ , then again he seems to eschew $(1)$ : In his precalculus book (1748, §§139-140) he chooses to first establish Bernoulli’s above formula $$
\theta = \frac1{2i}\log\frac{1+it}{1-it}
\tag4
$$ (this he does by multiplying numerator and denominator by $\cos\theta$ so they become $e^{\pm i\theta}$ ), and then to deduce $(3)$ by plugging $it$ in the series for $\smash{\log\frac{1+x}{1-x}}$ . None of this requires $(1)$ , $(2)$ , or any calculus. In his differential calculus book (1755, §§194-197) he first differentiates a similar logarithmic formula for $\theta = \sin^{-1}(s)$ , namely $\theta = -i\log(\sqrt{1-s^2} + is)$ , to obtain $$
d\theta = \frac{ds}{\sqrt{1-s^2}}.
\tag5
$$ Plugging $s = t/\sqrt{1+t^2}$ into $(5)$ then gives him $(2)$ . He might as well have differentiated $(4)$ directly! Either way, $(1)$ is not used, although to be fair, Euler at least gives (§195) an alternative proof of $(5)$ using the argument $(1)$ , but applied to $\smash{\sin^{-1}}$ instead of $\smash{\tan^{-1}}$ . So where can you find it? It’s in Lacroix (1797, pp.113-114) and its progeny. Still I have trouble believing it took over 100 years for $(1)$ to become the standard proof — hence my question.
|
I now believe that my question (and suggestion that proof $(1)$ should have become standard before Lacroix) relied on the misconception that tangent was easier to differentiate than arctangent. In fact the calculus of inverse trigonometric functions took off earlier , as has been explained by G. Eneström ( 1905 ), C. Boyer ( 1947 ), or V. J. Katz ( 1987 ): it was quite common [at first] to deal with what we call the arcsine function rather than the sine. C. Wilson ( 2001 , 2007 ) concurs and stresses that our trig functions with periodic graphs weren’t much seen or differentiated until Euler “found” them to solve 2nd order linear differential equations ( 1741 ); so much so that he still wrote in ( 1749 , p.15): as this way of operating is not yet commonplace, it will be apropos to warn that the differentials of the formulas $\sin.$ : $\cos.$ : $\mathrm{tang}.$ : $\cot.$ are $d\,\cos.$ : $-d\,\sin.$ : $\smash[b]{\frac{d}{\cos.\,^2}}$ & $\smash[b]{-\frac{d}{\sin.\,^2}}$ — and e.g. in ( 1796 , p.163) L’Huilier still computed $\tan'$ from $\arctan'$ rather than vice versa. In this vein, $d = \frac{dt}{1+t^2}$ was not proved like $(1)$ , but by a differential triangle argument similar to $(*)$ but simpler and attributed to Cotes ( Aestimatio errorum , 1722): in modern notation, parametrize the unit circle with $(x,y)=\frac{(1,\,t)}{\sqrt{1+t^2}}$ and obtain $$
d\theta
=\sqrt{\smash{dx^2+dy^2}\vphantom{a^2}}
=\frac{dt}{1+t^2}
\tag6
$$ ( $=\mathrm{CE}$ in Cotes’ figure , which became standard in many books even before his own — the list could almost be described as “everyone but Euler”): 1708 Charles-René Reyneau §590 fig. 41 1718 John Craig pp.52–54 1722 Roger Cotes (posthumous) Lemma II 1730 Edmund Stone p.63 fig. 13 1736 James Hodgson p.230 1736 John Muller §247 fig. 153 1737 Thomas Simpson §143 1742 Colin MacLaurin §195 fig. 52 1743 William Emerson pp.171–172 fig. 76 1748 Maria Agnesi p.639 fig. 4 1749 Charles Walmesley (credits Cotes) pp.3,53 fig. 10 1749 William Emerson p.29 fig. 6 1750 Thomas Simpson §142 1754 Louis-Antoine de Bougainville p.24 fig. 9 1761 Abraham Kästner §299 fig. 18 1765 Jean Le Rond D’Alembert p.640 fig. 25 1767 Étienne Bézout p.146 fig. 46 1768 Thomas Le Seur & François Jacquier p.63 fig. 7 1774 Jean Saury pp.25,63 fig. 3 1779 Samuel Horsley pp.298–299 1786 Simon L’Huilier pp.103–104 fig. 20 1795 Simon L’Huilier §76 fig. 17 As to our usual proof $(1)$ , it appears before Lacroix in 18-year-old Legendre ’s Theses mathematicæ (1770), then in a book by their common teacher J.-F. Marie and several others: 1770 Adrien-Marie Legendre pp.10,16 1772 Joseph-François Marie §904 1777 Jacques-Antoine Joseph Cousin p.81 1781 Claude Bertrand p.140 1781 Louis Lefèvre-Gineau p.31 1795 Simon L’Huilier §76 1797 Sylvestre-François Lacroix pp.113–114 1801 Joseph-Louis Lagrange p.81 1810 Sylvestre-François Lacroix pp.lii , 203–204
|
{
"source": [
"https://mathoverflow.net/questions/196058",
"https://mathoverflow.net",
"https://mathoverflow.net/users/19276/"
]
}
|
196,576 |
It is a standard consequence of Hurewicz's theorem that a homology eqivalence between simply connected spaces is a weak equivalence (and hence a homotopy equivalence, if the spaces are CW-complexes). What is more, it is even enough to assume that the map is a homology equivalence with local coefficients and an iso on $\pi_1$, see e.g. this question . (On the other hand, a map which is only a homology equivalence does not need to be an isomorphism on $\pi_1$ and hence not a weak equivalence.) What is a concrete example of a map that is an isomorphism on homology with $\mathbb Z$ coefficients, and also an isomorphism on $\pi_1$, but not a weak equivalence?
|
Let $X$ be the CW complex obtained from $S^1 \vee S^n$, $n>1$, by attaching an $(n+1)$-cell via a map $S^n\to S^1\vee S^n$ representing the element $2t-1$ in $\pi_n(S^1\vee S^n) \cong {\mathbb Z}[t,t^{-1}]$, so $\pi_n(X)\cong{\mathbb Z}[t,t^{-1}]/(2t-1)\cong {\mathbb Z}[1/2]\subset{\mathbb Q}$. The inclusion map $S^1 \to X$ then induces an isomorphism on homology and on $\pi_i$ for $i<n$ but not on $\pi_n$. This example relies heavily on the nontriviality of the action of $\pi_1$ on $\pi_n$, but this is necessary since Whitehead's theorem that homology isomorphisms between simply-connected CW complexes are homotopy equivalences holds more generally for CW complexes with trivial action of $\pi_1$ on all homotopy groups including $\pi_1$. (This is Proposition 4.74 in my Algebraic Topology book, and the example is Example 4.35. Sorry for the self-references, but I should know the book as well as anyone!)
|
{
"source": [
"https://mathoverflow.net/questions/196576",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14233/"
]
}
|
196,681 |
Let $H(n) = 1/1 + 1/2 + \dotsb + 1/n,$ and for $i \leq j,$ let $a_1$ be the least $k$ such that $$H(k) > 2H(j) - H(i),$$ let $a_2$ be the least k such that $$H(k) > 2H(a_1) - H(j),$$ and for $n \geq 3,$ let $a_n$ be the least $k$ such that $$H(k) > 2H(a_{n-1}) - H(a_{n-2}).$$ Prove (or disprove) that if $i = 5$ and $j = 8$, then $(a_n)$ is the sequence $(5,8,13,21,\dotsc)$ of Fibonacci numbers, and determine all $(i,j)$ for which $(a_n)$ is linearly recurrent.
|
The statement is true. Write $F_n$ for the $n$-th Fibonacci (my indexing starts at $(F_0, F_1, F_2, F_3, \dots) = (0,1,1,2,\dots)$). We are being asked to show that
$$\frac{1}{F_{j+1}} > \sum_{m=F_{j}+1}^{F_{j+1}} \frac{1}{m} - \sum_{m=F_{j-1}+1}^{F_{j}} \frac{1}{m} > 0\ \mbox{for}\ j \geq 6.$$
Computer computations easily check this for $6 \leq j \leq 20$, so we only need to check large $j$. We know that
$$H(n) = \log n + \gamma + \frac{1}{2n} + O(1/n^2)$$
where the constant in the $O( \ )$ can be made explicit -- something like $1/12$.
So
$$\sum_{m=B+1}^C \frac{1}{m} - \sum_{m=A+1}^B \frac{1}{m} = \log \frac{AC}{B^2} + \frac{1}{2} \left(\frac{1}{A}-\frac{2}{B}+\frac{1}{C} \right) + O(1/A^2)$$
where the constant in the $O( \ )$ is something like $1/3$. Now, $F_{j-1} F_{j+1} = F_j^2 \pm 1$. So
$$\log \frac{F_{j-1} F_{j+1}}{F_j^2} = \log \left( 1 \pm \frac{1}{F_j^2} \right) = O(1/F_j^2).$$
So, up to terms with error $O(1/F_j^2)$ and a fairly small constant in the $O( \ )$, we are being asked to show that
$$\frac{1}{F_{j+1}} > \frac{1}{2} \left( \frac{1}{F_{j+1}} - \frac{2}{F_j} + \frac{1}{F_{j-1}} \right) > 0.$$ Set $\tau = \frac{1+\sqrt{5}}{2}$, and recall that $F_j = \frac{\tau^j}{\sqrt{5}} + O(\tau^{-j})$. Then, up to errors of $O(\tau^{-3j}) = O(1/F_j^3)$, this turns into the inequalities
$$\frac{1}{\tau^2} > \frac{1}{2} \left( 1 - \frac{2}{\tau} + \frac{1}{\tau^2} \right) > 0.$$
The latter is obviously true; the former can be checked by calculation. It looks like the same analysis should apply sufficiently far out any linear recursion with solution of the form $G_j = c_1 \theta_1^j + \sum_{r=2}^s c_r \theta_r^j$ where $1 < \theta_1 < 1+\sqrt{2}$ and $|\theta_r|<1$ for $r>1$. (So $\theta_1$ should be a Pisot number .) The inequality $|\theta_r|<1$ makes $\log \frac{G_{j+1} G_{j-1}}{G_j^2} \approx \frac{\max_{r \geq 2} |\theta_r|^j \theta_1^j}{\theta_1^{2j}}$ be much less than $1/G_j \approx 1/\theta_1^j$. The inequality $\theta_1 < 1+\sqrt{2}$ makes $1/\theta^2 > (1/2)(1-2/\theta+1/\theta^2)$.
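For completeness, the finite check mentioned at the start can be done as follows (an illustrative Python sketch added here; ordinary floating point with `math.fsum` is far more accurate than the sizes involved require):

    from math import fsum

    def fib(k):
        a, b = 0, 1
        for _ in range(k):
            a, b = b, a + b
        return a

    for j in range(6, 21):
        F_prev, F_cur, F_next = fib(j - 1), fib(j), fib(j + 1)
        upper = fsum(1.0 / m for m in range(F_cur + 1, F_next + 1))
        lower = fsum(1.0 / m for m in range(F_prev + 1, F_cur + 1))
        diff = upper - lower                 # H(F_{j+1}) - 2 H(F_j) + H(F_{j-1})
        assert 0 < diff < 1.0 / F_next, j
    print("inequality holds for 6 <= j <= 20")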
|
{
"source": [
"https://mathoverflow.net/questions/196681",
"https://mathoverflow.net",
"https://mathoverflow.net/users/61426/"
]
}
|
197,917 |
Numerical evidence shows the validity of the following identity
$$\int\limits_0^z\frac{xdx}{\sin{x}\sqrt{\sin^2{z}-\sin^2{x}}}=\frac{\pi}{4\sin{z}}\ln{\frac{1+\sin{z}}{1-\sin{z}}},\tag{1}$$
if $0< z< \pi/2$.
How can it be proved? An indirect proof can be found in the paper http://link.springer.com/article/10.1134%2FS1547477113010044 (Potential of multiphoton exchange in the scattering of light charged particles of a heavy target, by Yu.M. Bystritskiy, E.A. Kuraev and M.G. Shatnev). Formula (1) is equivalent to (12) from the paper where it is called "the marvelous identity".
|
Actually, I now think that the easiest method is to do this: Write $k=\sin z$, so that $|k|<1$, and make the substitution $x = \arcsin(k\sin\theta)$, where $0\le \theta\le \frac\pi2$. The integral becomes
$$
\int_0^{\pi/2} \frac{\arcsin(k\sin\theta)}{k\sin\theta \,\,(1-k^2\sin^2\theta)^{(1/2)}}\ \mathrm{d}\theta
= \int_0^{\pi/2} \sum_{n=0}^{\infty} c_n\,k^{2n}\sin^{2n}\theta\,\mathrm{d}\theta,
$$
where the numbers $c_n$ are the coefficients in the even power series
$$
\frac{\arcsin(t)}{t \,(1{-}t^2)^{(1/2)}} = \sum_{n=0}^{\infty} c_n t^{2n},
$$
which are easily calculated to be
$$
c_n = \frac{2^{2n} (n!)^2}{(2n{+}1)!}.
$$
Combining this with the well-known formula
$$
\int_0^{\pi/2} \sin^{2n}\theta\,\mathrm{d}\theta = \frac{\pi}{2^{2n+1}}\ {{2n}\choose{n}},
$$
one obtains
$$
\int_0^{\pi/2} \frac{\arcsin(k\sin\theta)}{k\sin\theta (1-k^2\sin^2\theta)^{(1/2)}}\ \mathrm{d}\theta
= \pi \sum_{n=0}^{\infty} \frac{k^{2n}}{(4n{+}2)}
= \frac{\pi}2 \sum_{n=0}^{\infty} \frac{\sin^{2n}z}{(2n{+}1)}\ .
$$
The rest should be clear.
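If one just wants numerical reassurance, note that the substitution $x=\arcsin(k\sin\theta)$ already removes the square-root singularity, so standard quadrature applies. A small illustrative sketch using SciPy (added here, not part of the derivation; the integrand extends continuously to $1$ at $\theta=0$, which `quad` never samples exactly):

    import numpy as np
    from scipy.integrate import quad

    z = 1.2                                   # any value in (0, pi/2)
    k = np.sin(z)

    def integrand(theta):
        s = k * np.sin(theta)
        return np.arcsin(s) / (s * np.sqrt(1.0 - s * s))

    lhs, err = quad(integrand, 0.0, np.pi / 2)
    rhs = np.pi / (4.0 * k) * np.log((1.0 + k) / (1.0 - k))
    print(lhs, rhs)                           # the two values agree to quadrature accuracy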
|
{
"source": [
"https://mathoverflow.net/questions/197917",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32389/"
]
}
|
198,665 |
For $x\in(0,1)$, put
$$f(x):=\sum_{n=0}^{\infty}(-1)^{n}x^{2^{n}}.$$
This function possesses interesting properties. It grows monotonically from $0$ up to a certain point. Then it starts to oscillate around the value $1/2$ on a left neighborhood of $1$, see Figure 1. So there is at least numerical evidence that $f(x)>1/2$ for some $x\in(0,1)$, for instance $f(0.995)\dot{=}0.500882$. Nevertheless, I did not find an analytic way to prove this observation. Thus, I have two questions concerning the function $f$. First, I would like to ask if there is an analytic proof (simple at best $\sim$ understandable for undergraduate students) that there exists $x\in(0,1)$ such that $f(x)>1/2$. Second, since the limit of $f(x)$ as $x\to1-$ does not exist, I would like to know at least something about the value
$$\limsup_{x\to1-}f(x).$$ Many thanks.
|
This series goes back 100+ years to Hardy: "On certain oscillating series", Quarterly J. Math. 38 (1907), 269-288 (pages 146-168 in
the sixth volume of Hardy's collected papers). I did not know of this
when I posed the question as a puzzle on my webpage about 10 years ago
( puzzle 8 , solution );
I thank Tanguy Rivoal for the Hardy reference.
Shortly afterwards the question appeared in the Fall
2004 issue of MSRI's newsletter EMISSARY . The easiest proof is surely the computational one that has been
noted here already (and which I gave in my puzzle solution):
it takes only a dozen terms of the series to confirm that $f(.995) > 1/2$,
at which point the functional equation $f(x) = x - f(x^2)$ shows that
$f(x) > f(.995) > 1/2$ when $x$ is a $(4^m)$-th root of $0.995$
for some $m=1,2,3,\ldots$ . One can also give "harder" or "softer" explanations that may feel
more satisfactory. On the "hard" side, as $x \rightarrow 1$ from below
the difference $f(x) - 1/2$ approaches a periodic function of
$\log_4(\log(1/x))$ that's a nearly pure sine wave of magnitude
$$
\frac2{\log 2} \, \bigl|\, \Gamma(\pi i / \log 2) \,\bigr|
= 2 \Big/ \sqrt{\log(2)\sinh(\pi^2/\log 2)} \, = \, 0.00274922\ldots
$$
(the higher harmonics' coefficients involve the values of $\Gamma$
at higher odd multiples of $\pi i / \log 2$). Hardy obtained this by
residue calculations; it can also be recovered from Poisson summation. On the "soft" side, the fact that $f(x)$ does not converge
as $x \rightarrow 1$ from below is a consequence of a Tauberian theorem of
Hardy and Littlewood: a subset $S$ of $\{1,2,3,\ldots\}$ has natural density iff $S$ has Abel density, and then the two limits are equal .
The Abel density of $S$ is $\lim_{x \rightarrow 1-} (1-x) \sum_{s\in S} x^s$
if the limit exists. If we take
$S = \bigcup_{m=0}^\infty [2^{2m}, 2^{2m+1})$
then $(1-x) \sum_{s\in S} x^s = f(x)$; but $S$ is a standard example
of a set without natural density, so there's no Abel density either
and we're done. Hence there is some $\epsilon > 0$ such that
$|f(x) - 1/2| > \epsilon$ for a sequence of $x$'s approaching $1$,
and then either $f(x)$ or $f(\sqrt x)$ eventually exceeds $1/2$.
I don't know whether Hardy ever observed in print that the
non-convergence of $f(x)$ is a consequence of his and Littlewood's theorem.
[I found this Tauberian theorem in Persi
Diaconis's doctoral thesis (Theorem 4 on p.37), with a reference
to page 423 of Feller's An Introduction to Probability Theory and its
Applications, Vol. II (Wiley, 1966).]
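(For anyone who wants to reproduce the "dozen terms" computation, a couple of lines of Python suffice; this snippet is an added illustration, not part of the answer above.)

    x = 0.995
    # the terms x^(2^n) underflow to 0 very quickly, so a short sum is exact to double precision
    print(sum((-1) ** n * x ** (2 ** n) for n in range(40)))   # about 0.50088 > 1/2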
|
{
"source": [
"https://mathoverflow.net/questions/198665",
"https://mathoverflow.net",
"https://mathoverflow.net/users/56553/"
]
}
|
198,722 |
There are (apparently) 261 distinct unfoldings of the 4D hypercube, a.k.a., the
tesseract, into 3D. 1 These unfoldings (or "nets") are analogous to the 11 unfoldings of
the 3D cube into the plane. 2 Usually only one hypercube unfolding is illustrated (image from this link ), the one made famous in
Salvador Dali's painting Corpus Hypercubus .
My question is: Q . Has anyone made models/images of the 261 unfoldings as solid objects in $\mathbb{R}^3$? (If not, I might do so myself.) 1 Peter Turney, "Unfolding the Tesseract." Journal of Recreational Mathematics , Vol. 17(1), 1984-85. 2 Update . See also the followup question, " Which unfoldings of the hypercube tile 3-space: How to check for isometric space-fillers? ."
|
I implemented the ideas in the paper using Mathematica. I pushed it a bit further to actually generate the images below. You can download this Mathematica notebook to see the code and detailed explanation. You might notice Dali's original in the middle of the third row from the bottom.
|
{
"source": [
"https://mathoverflow.net/questions/198722",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
198,872 |
$V=\mathbb C^n$ is a $\mathbb CS_n$-module, where $S_n$ is the symmetric group of degree $n$, via the representation sending a permutation to the corresponding permutation matrix. The tensor power $V^{\otimes m}$ is therefore also a $\mathbb CS_n$-module via the action $\sigma(v_1\otimes\cdots\otimes v_m) = \sigma v_1\otimes\cdots \otimes \sigma v_m$ on elementary tensors. But $V^{\otimes m}$ is also a $\mathbb CS_m$-module where $S_m$ acts by permuting the tensor factors. These two actions commute and hence $V^{\otimes m}$ is a representation of $S_n\times S_m$ in a natural way. I would like a pointer to the literature on the decomposition of $V^{\otimes m}$ into irreducible representations of $S_n\times S_m$.
|
By Schur–Weyl duality there is an isomorphism of $\mathrm{GL}(V) \times S_m$-representations $$V^{\otimes m} \cong \bigoplus_\lambda \Delta^\lambda(V) \boxtimes S^\lambda$$ where the sum is over all partitions $\lambda$ of $m$ with at most $n$ parts, $\Delta^\lambda$ is the Schur functor for $\lambda$, $S^\lambda$ is the irreducible $\mathbb{C}S_m$-module canonically labelled by $\lambda$ and $\boxtimes$ denotes the outer tensor product of two representations. Restricting to $S_n \times S_m$ we get $$V^{\otimes m} \cong \bigoplus_\lambda \bigl( \Delta^\lambda(V) \bigl\downarrow^{\mathrm{GL}(V)}_{S_n} \bigr) \boxtimes S^\lambda.$$ Hence $$[V^{\otimes m} : S^\nu \boxtimes S^\lambda] =
[\Delta^\lambda(V) : S^\nu]$$ for all partitions $\nu$ of $n$ and $\lambda$ of $m$ with at most $n$ parts. Determining the multiplicities on the right-hand side is, as far as I know, an open problem, equivalent to computing inner plethysms of symmetric functions. Some special cases are known. For instance, if $\lambda = (1^m)$ and $m \le n$ then $\Delta^{(1^m)}(V) = \bigwedge^mV = S^{(n-m,1^m)} \oplus S^{(n-m+1,1^{m-1})}$. This result has been reproved , many , times (see Proposition 5.1) in the symmetric group literature.
|
{
"source": [
"https://mathoverflow.net/questions/198872",
"https://mathoverflow.net",
"https://mathoverflow.net/users/15934/"
]
}
|
198,945 |
Why and how does one publish a paper in proceedings? What are the differences with a "classical" journal? What's the list of the main proceedings in which one can publish? Do proceedings papers (never, sometimes, often or always) appear on MathSciNet?
|
Proceedings of conferences are often published as special issues of "classical" journals. But even those that are not are usually included in MathSciNet if they include a statement (often a footnote on the first page of each paper) to the effect that the papers are in final form and will not be published elsewhere. Some but not all conference proceedings are refereed less thoroughly than reputable journal articles. As a result, mathematicians are sometimes suspicious about results published in conference proceedings, and administrators sometimes assign less value to such publications. Some conference proceedings have responded to this problem by explicitly saying (usually in the preface of the proceedings volume) that the papers have been refereed to the standards of such-and-such journal. Nevertheless, I would advise young (= not yet tenured) mathematicians to publish most if not all of their work in regular (and reputable, of course) journals. Once you have tenure, so that administrators' opinions are less critical for your life, it becomes reasonable to contribute more to conference proceedings.
|
{
"source": [
"https://mathoverflow.net/questions/198945",
"https://mathoverflow.net",
"https://mathoverflow.net/users/34538/"
]
}
|
199,097 |
Recently Mark McClure constructed and displayed the 261 unfoldings of the hypercube (tesseract) in response to the question,
" 3D models of the unfoldings of the hypercube? ": The first 9 unfoldings in Mark McClure's display Each of the 11 unfoldings of the cube form monohedral tilings of the plane,
as so well illustrated in the "Etudes" video to which Igor Pak pointed: A polyhedron that is the prototile of a monohedral tiling is called
an isometric space-filler : $\mathbb{R}^3$ can be tiled by congruent copies
of that one shape (rotated and translated but not reflected). Now that we have the unfoldings of the hypercube, analogy with the cube raises the
question: Q1 . Which (if any) of the 261 unfoldings of the hypercube are isometric space-fillers? Asking this question raises another: Q2 . How can one determine if a given shape, in this case a polycube / 3D polyomino, is an isometric space-filler? Update ( 7Dec2015 ).
Aside from the two hypercube unfoldings that Steven Stadnicki showed
tile space (below), with a student I found two more that tile $\mathbb{R}^3$,
including the Dali hypercube cross unfolding ,
confirming Steven's intuition ("I don't know if the 'Dali unfolding' tiles space, though I'd be surprised if it didn't.")
We posted an arXiv note on the topic.
|
Answer to Q1 : All of the 261. I looked at this question because of a video of Matt Parker and wrote an algorithm to find solutions. See here for an example of how a solution would look like. I dumped all solutions on github . For some cases I list multiple solutions. The files start with a number followed by an underscore and the number is between 0 and 260 corresponding to one of the unfoldings in this list . 129 <--- Number of unfolding (this time 0-indexed)
[[-2, -2, 0], [1, 1, 1]] <---- the box, we put it in
Boolean Program (maximization, 1152 variables, 428 constraints) <---
some information to ignore
32.0 <--- How many of the voxels in the cube are covered (here it
happens to be all of them)
[(-2, -2, 0), (-1, -2, 0), (-1, -2, 1), (-1, -1, 0), (-1, 0, 0), (-1,
1, 0), (0, -2, 1), (1, -2, 1)]
[(-2, -2, 1), (-2, -1, 0), (-2, -1, 1), (-2, 0, 0), (-2, 1, 0), (-1,
-1, 1), (0, -1, 1), (1, -1, 1)]
[(-2, 0, 1), (-1, 0, 1), (0, 0, 1), (1, -2, 0), (1, -1, 0), (1, 0, 0),
(1, 0, 1), (1, 1, 1)]
[(-2, 1, 1), (-1, 1, 1), (0, -2, 0), (0, -1, 0), (0, 0, 0), (0, 1, 0),
(0, 1, 1), (1, 1, 0)] (the last rows are then the coordinates of the copies of the
unfolding, whatever sticks out of the box fits in another one..) Here is an example of one of the two that fit in a 4x4x2 box (in an exploded view; you can shift them together and they neatly fit into a 4x4x2 box, which can then be stacked to obtain a tiling): Answer to Q2 : One way is to use integer programming. The basic idea is to take a box with volume divisible by 8 as a fundamental domain and to cover it as much as possible with non-overlapping copies of an unfolding and make sure that the stuff that sticks out of the bottom actually fits into gaps at the top and so on. Here's a two-dimensional example:
The tile: A 5x6 box filled with copies of that tile: A tiling produced from this: This can be formulated as an integer program and it turns out that those 261 unfoldings all have a feasible solution for some smallish box (for most 4x4x4 is enough). Setting up the integer program is just a few lines of sage :

    from sage.combinat.tiling import Polyomino
    import itertools

    def get_mod_points(point, diff):
        # all translates of `point` by -diff, 0, +diff in each coordinate
        for coeffs in itertools.product([-1, 0, 1], repeat=diff.length()):
            yield vector([diff[i]*coeffs[i] + point[i] for i in range(diff.length())])

    def milp_mod_box(tile, inner_box, outer_box, solver=None):
        # tile, inner_box and outer_box are `Polyomino`.
        begin, end = inner_box.bounding_box()
        diff = vector(end) - vector(begin)
        diff += vector([1]*diff.length())
        tiles = [Q for Q in tile.isometric_copies(outer_box)]
        p = MixedIntegerLinearProgram(solver=solver, maximization=True)
        x = p.new_variable(binary=True)
        for i in inner_box:
            # each cell of the inner box must be covered exactly once, modulo the box
            cons = p.sum(sum([int(j in T) for j in get_mod_points(i, diff)])*x[T] for T in tiles) == 1
            p.add_constraint(cons)
        for i in outer_box:
            # chosen copies must not overlap anywhere in the outer box
            p.add_constraint(p.sum(x[T] for T in tiles if i in T) <= 1)
        p.set_objective(p.sum(x[T] for i in inner_box for T in tiles if i in T))
        return p, x

Here we optimize for having as many voxels filled in the box as possible, but only so that the solutions look nicer; it would give valid tilings even without setting that objective. A similar integer programming approach could be used to prove that a certain polyomino is not a space filler: we can check for larger and larger boxes whether it is possible to cover them completely with non-overlapping copies of the original polyomino, but this wasn't necessary. Update :
I added 3d-plots of all solutions to 3d-renderings of all 261 unfoldings page . When you click on the number of each one you get a 3d-rending of copies of that unfolding mostly in a box (drawn in grey), which then can be used to tile the plane. Note this is a different order than the one above (and it starts indexing with $1$ . #213 is the Dalí-unfolding, needs 8 pieces in the prototile. #3 is the only one that only needs 3 pieces (in a 4x3x2 box) (what a coincidence..) #72 and #159 are the ones that perfectly fit in a 4x4x2 box. #139 needs a long 8x2x2 box.
|
{
"source": [
"https://mathoverflow.net/questions/199097",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
199,212 |
Maryam Mirzakhani has made several contributions to the theory of moduli spaces of Riemann surfaces. Mirzakhani was awarded the Fields Medal in 2014 for "her outstanding contributions to the dynamics and geometry of Riemann surfaces and their moduli spaces." She died July 15, 2017 . I'm not expert in these areas of mathematics, but I am eager to know her main ideas and the importance of her results. Can one draw a general picture of her works? (Any expository reference will be appreciated).
|
A very good expository article (in Farsi) on recent work of Maryam Mirzakhani can be found here . ( PDF )
|
{
"source": [
"https://mathoverflow.net/questions/199212",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
199,814 |
The p-adic integers $\mathbb{Z}_p$ can be thought of as a subgroup of the direct product group $P = \prod_{n \geq 1} \mathbb{Z}/p^n\mathbb{Z}$. Are they a direct summand of this group? That is, is the inclusion $\mathbb{Z}_p \hookrightarrow P$ split?
|
Assuming the axiom of choice, yes. Choose a non-principal ultrafilter on $\mathbb N$. This gives a consistent way, given a function from $\mathbb N$ to a finite set, to choose an element of the finite set in a way that doesn't depend on any finite subset of $\mathbb N$. You can take an element of $\prod_{n\geq 1} \mathbb Z/p^n \mathbb Z$ and by modding out by $p^k$ obtain an element of $\prod_{n \geq k} \mathbb Z/p^k$, then by applying the ultrafilter obtain an element of $\mathbb Z/p^k$ (ignoring the first $k-1$ values). Doing this for all $k$ gives an element of $\mathbb Z_p$. Using the standard properties of ultrafilters this is easily seen to be a homomorphism $P \to \mathbb Z_p$ that splits the inclusion. I'm not sure whether the converse is true, that splitting gives you an ultrafilter.
|
{
"source": [
"https://mathoverflow.net/questions/199814",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6481/"
]
}
|
199,874 |
From the point of view of formal math, what would constitute an appropriate statement of the classification of finite simple groups? As I understand it, the classification enumerates 18 infinite families and 26 sporadic groups and asserts that a finite group is simple iff it is in one of these families. Now the 18 infinite families are all fairly clearly defined as cyclic groups, permutation groups, matrix groups over finite fields, etc. so I don't think there is much difficulty in defining these precisely. Much more problematic are the sporadic groups, which are "known" and hence apparently need no definition. To give an example, since the monster group is some finite object we could just write down its Cayley table and define that to be the monster group. There are two big problems with this: (1) this table is huge and redundant, and (2) it's not easy to work with this table to prove properties of it. The main problem is that we don't think about the monster group in terms of its Cayley table, nor even as the group generated by a certain pair of $196882^2$ matrices. Instead we view it as a specific group which satisfies some properties and is uniquely defined by those properties; presumably it is in this context that a given sporadic group will show up in the course of the classification proof. My problem is that I have no idea what those characterizing properties are. Indeed under some definitions it would rather weaken or trivialize the statement of classification, for example if I defined the sporadic groups as the simple groups that are not in the 18 families. What definition of these objects is actually used in the proof? (Side question: 16 of the 18 families are usually collected under one label, the "groups of Lie type". Is this class definable in some uniform way, or are the definitions individualized and the name is just due to some commonalities we recognize between these families?)
|
There are really two separate questions that you seem to be conflating here. The first is how to state the CFSG in a way that could be mechanically formalized. The second is how to state the CFSG that adequately reflects how human mathematicians think about it. For the former question, one straightforward possibility for the sporadic groups, since we know their orders, is simply to state something like, "There exists a unique simple group, not in one of the aforementioned families, of each of the following orders: 7920, 95040," etc. This is the barest possible statement that could count as a classification theorem, and for a computer, it provides (in principle) enough information to reconstruct the groups in question. For the second question, though, there's no sharp boundary demarcating where the classification theorem ends and the detailed study of the properties of the sporadic groups begins. There's also no canonical way of describing a particular group of interest in a way that satisfies a human that he or she now "knows what the group is." But there's nothing unique about group theory here. Any sufficiently large and complicated mathematical object is going to suffer from this problem. There will be some bare-minimum way of referring to it that in principle picks it out from the amorphous universe of all mathematical objects but that fails to answer basic questions about it. There will be a continuum of theorems that answer other basic questions, shading off into questions that we can't answer. It is a matter of opinion how many questions we have to be able to answer before we can claim to have "adequately described" the object.
|
{
"source": [
"https://mathoverflow.net/questions/199874",
"https://mathoverflow.net",
"https://mathoverflow.net/users/34444/"
]
}
|
199,926 |
Perhaps under the influence of a recent question
on perverse sheaves ,
in conjunction with the impending $\pi$-day (3/14/15 at 9:26:53),
I recalled a long-ago parody of abstruse mathematical language
that I can no longer remember in detail nor find by searching. I am not seeking merely
" examples of colorful language ,"
as in that earlier MO question, but rather parodies
almost in the Alan Sokal Fashionable Nonsense sense
(although I don't think he parodied abstract mathematics directly). I am partly motivated by the possible educational advantage
of self-mockery (or self-awareness),
tangentially related to
an MESE question, " Wonder as Motivation ."
But I ask here to tap into the likely greater density
of mathematicians working in abstract fields ripe for parody. Q . Can you provide examples of (or pointers to) intentionally comic parodies of abstruse
mathematical language, written by knowledgeable mathematicians so that they
could (in another universe) make mathematical sense?
|
Is this what you're looking for? http://thatsmathematics.com/mathgen/ Mathgen is a random math paper generator, based on SCIgen, which does the same for computer science papers. It will provide you with an unlimited supply of abstruse nonsense: definitions, theorems, proofs, references, and all. Here is a sample title and abstract. "Some Reducibility Results for Ultra-Universally Nonnegative Arrows" Assume we are given a contra-intrinsic subring $\mathscr{{W}}$. Recently, there has been much interest in the description of manifolds. We show that $\| t' \| > \mathbf{{b}}$. So is it possible to describe compactly ultra-prime systems? Hence recent interest in finitely Huygens--Hilbert, closed, meager groups has centered on describing canonical homomorphisms. (Disclosure: edited by Nate Eldredge, author of Mathgen, to include additional details.)
|
{
"source": [
"https://mathoverflow.net/questions/199926",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
200,656 |
In a conversation where it came up that the Pythagoreans probably found an enumeration of the rational numbers I erroneously remarked that Georg Cantor found a natural bijection from $\mathbb{N}$ to $\mathbb{Q}$ with his pairing function. Is there a natural bijection between these sets? Naturalness is of course not a precise criterion. But we may distinguish between degrees of naturalness and say that a bijection $f$ between $\mathbb{N}$ and $\mathbb{Q}$ is more natural than another bijection $g$ between these sets if for the identity statements $f(n)=\alpha(n)$ and $g(n)=\beta(n)$ the formula $\alpha(n)$ is lower in the arithmetical hierarchy than the formula $\beta(n)$. Also, $f$ is more natural than $g$ if the formula $\alpha(n)$ is shorter than the formula $\beta(n)$.
|
There is a following result which is quite lovely, I think (I don't remember right away whose result this is): Let us define a function $f\colon\mathbb{N}\to\mathbb{Q}^+$ as follows: $f(1)=1$, and also $f(2n)=f(n)+1$, $f(2n+1)=\frac{1}{f(n)+1}$. Then: $f$ is a bijection (Sketch of a proof: A. Show using induction on $m$ that we have $f(n)\ne f(m)$ for $n\ne m$. B. Show that $f$ is surjective, that is for each continued fraction $q=[q_0;q_1,\ldots,q_s]$ there exists $n$ for which $f(n)=q$, this is done using induction on $q_0+q_1+\cdots+q_s$); The binary expansion $n=2^{m_0}+2^{m_1}+\cdots+2^{m_k}$ with $0\le m_0<\cdots<m_k$ and the continued fraction expansion $f(n)=q_0+\cfrac{1}{q_1+\cfrac{1}{q_2+\cdots}}=[q_0;q_1,q_2,\ldots,q_s]$ chosen in the way that $q_s=1$, are related as follows: $s=k+1$, and for all $i=0,\ldots,k$ we have $m_i=q_0+\cdots+q_i$. (This is easily proved by induction on $n$).
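As a quick sanity check (not part of the original answer; the helper below and its printed values are mine), one can compute $f$ with exact rational arithmetic and confirm that the values on an initial segment are pairwise distinct:

from fractions import Fraction

def f(n):
    # f(1) = 1, f(2n) = f(n) + 1, f(2n+1) = 1/(f(n) + 1)
    if n == 1:
        return Fraction(1)
    if n % 2 == 0:
        return f(n // 2) + 1
    return 1 / (f(n // 2) + 1)

values = [f(n) for n in range(1, 10001)]
assert len(set(values)) == len(values)   # no repetitions on this range
print([str(v) for v in values[:10]])
# ['1', '2', '1/2', '3', '1/3', '3/2', '2/3', '4', '1/4', '4/3']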
|
{
"source": [
"https://mathoverflow.net/questions/200656",
"https://mathoverflow.net",
"https://mathoverflow.net/users/37385/"
]
}
|
200,867 |
What are the best examples of questions in mathematics that are not interesting until one knows the answers, whose answers themselves are what is interesting? The thing that prompts me to post this is just one example. I've seen others, but they escape me at the moment. Here it is: A torus is embedded in just the usual way in $\mathbb R^3$. It has parallels of latitude and meridians of longitude. A curve that meets every parallel of latitude at the same angle, or, equivalently, meets every meridian of longitude at the same angle, is a loxodrome. Suppose that angle is so chosen, given the shape of the particular torus, that the loxodrome goes through all $360^\circ$ of longitude in just the length it takes to go through all $360^\circ$ of latitude, returning there to its starting point. (There must be some conventional terminology for describing these windings, but I don't know it.) The question is: What are the curvature and torsion at the various points along this curve? Doubtless some will consider this question interesting, but to me, and, I suspect, to many, the answer, because it is so unexpected, is where this starts to get interesting. The answer is that the curvature is constant --- the same at all points on the curve --- and the torsion is everywhere $0$. (And it's really easy to deduce from that the precise value of the curvature.) I believe this was discovered in the 1890s and is stronger than the celebrated theorem of Villarceau, published in 1848. Villarceau's theorem says that a plane bitangent to a torus intersects the torus in two circles. This proposition does not assume as a hypothesis, but rather has as a (trivial corollary of its) conclusion, that the curve lies in a plane.
|
Uninteresting question: find $$\int_0^1{x^4(1-x)^4\over1+x^2}\,dx$$ Interesting answer: $${22\over7}-\pi$$
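For completeness, one standard way to verify the identity (a routine calculation, not part of the original answer) is polynomial long division:
$$\frac{x^4(1-x)^4}{1+x^2}=x^6-4x^5+5x^4-4x^2+4-\frac{4}{1+x^2},$$
so
$$\int_0^1{x^4(1-x)^4\over1+x^2}\,dx=\frac17-\frac23+1-\frac43+4-4\cdot\frac{\pi}{4}=\frac{22}{7}-\pi.$$
Since the integrand is positive on $(0,1)$, this also gives the classical proof that $22/7>\pi$.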
|
{
"source": [
"https://mathoverflow.net/questions/200867",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6316/"
]
}
|
200,876 |
Is there a topological space $(C,\tau_C)$ and two points $c_0\neq c_1\in C$ such that the following holds? A space $(X,\tau)$ is connected if and only if for all $x,y\in X$ there is a continuous map $f:C\to X$ such that $f(c_0) = x$ and $f(c_1) = y$. Is there also a Hausdorff space satisfying the above?
|
No such space $C$ can exist. We will derive a contradiction from the assumption that $C,c_0,c_1$ as desired exist. Let $\kappa$ be any cardinal greater than $|C|$. View $\kappa$ as an ordinal. For each $\beta$ in $\kappa$ add a copy of the unit interval between $\beta$ and $\beta+1$, and add a point $\infty$ at the end. The resulting "Very Long Line" $L_\kappa$ is dense and complete, hence connected as a topological space. As $L_\kappa$ is connected, there must be a continuous map $f$ from $C$ into $L_\kappa$ whose image $L'$ contains $0$ and $\infty$. Since $|L'|\le|C|<\kappa$, there is an ordinal $b<\kappa$ with $b\notin L'$ (and $b>0$, as $0\in L'$). Then the map $h$ that sends everything below $b$ to $0$ and everything above $b$ to $1$ is continuous from $L'$ to the discrete space $\{0,1\}$, and it is onto since $h(0)=0$ and $h(\infty)=1$. So the map $h\circ f:C\to \{0,1\}$ witnesses that $\{0,1\}$ is connected, a contradiction. (This is just a variant of Helene Sigloch's earlier argument.)
|
{
"source": [
"https://mathoverflow.net/questions/200876",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8628/"
]
}
|
200,880 |
Some time ago I stumbled on an alleged identity
$$\int\limits_0^\infty \frac{dx}{x} \int\limits_0^x \frac{dy}{y}
\int\limits_0^y \frac{dz}{z} [\sin{x}+\sin{(x-y)}-\sin{(x-z)}-\sin{(x-y+z)}]=
-\frac{\pi^3}{12}.$$
How can this identity be proved? The context in which this integral emerged is described in Multiple Integral (American Mathematical Monthly problem 11621 and its generalization).
|
|
{
"source": [
"https://mathoverflow.net/questions/200880",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32389/"
]
}
|