source_id | question | response | metadata
---|---|---|---|
251,395 |
Let $k$ be an arbitrary field. Let $(A, e)$ be an abelian variety over $k$, and let $X$ be a torsor for $A$, i.e. $X$ is a proper smooth $k$-variety, and there is an $A$-action $A \times X \to X$ such that for any $k$-scheme $L$ and a point $x \in X(L)$, the induced "orbit" map $A_L \to X_L$ given by $a \mapsto a + x$ is an isomorphism. When $k = \mathbb{F}_q$ is a finite field, how do I see that $X$ always has a $k$-rational point, and thus $A \simeq X$?
|
This is a theorem of Lang's from 1956. Here's an online document giving a proof (in the form $H^1(k,A)=0$): Lecture 14: Galois Cohomology of Abelian Varieties over Finite Fields,
William Stein. http://wstein.org/edu/2010/582e/lectures/582e-2010-02-12/582e-2010-02-12.pdf Stein notes that there is a "more modern proof" in the first few sections of
Chapter VI of Serre's Algebraic Groups and Class Fields. The original article is Serge Lang, "Abelian varieties over finite fields," Proceedings of the National Academy of Sciences 41.3 (1955): 174-176, available at http://www.pnas.org/content/41/3/174.short . It's very short, but phrased in the "old-style" Weil language of algebraic geometry. Here's a quick sketch of a proof with $k=\mathbb F_q$. Choosing a point of $X(\overline{k})$, we can make $X$ into an abelian variety over $\overline{k}$. Then the $q$-power Frobenius map $\phi_q:X\to X$ is the composition of an isogeny and a translation, say $\phi_q(x)=f(x)+x_0$ with $f:X\to X$ an isogeny (defined over $\overline{k}$) and $x_0\in X(\overline{k})$. Since $\phi_q$ is inseparable, so is $f$, and hence $(1-f)^*$ acts as the identity map on differentials. Thus $1-f$ has finite kernel, so it is surjective, and thus there is a point $x_1\in X(\overline k)$ satisfying $(1-f)(x_1)=x_0$. This implies that $\phi_q(x_1)=x_1$, and hence that $x_1\in X(k)$.
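Spelling out why $(1-f)(x_1)=x_0$ forces $x_1$ to be rational (a one-line check, using only the definitions above):
$$\phi_q(x_1) = f(x_1) + x_0 = f(x_1) + \bigl(x_1 - f(x_1)\bigr) = x_1,$$
so $x_1$ is fixed by the $q$-power Frobenius, which is exactly the condition $x_1\in X(\mathbb F_q)$.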
|
{
"source": [
"https://mathoverflow.net/questions/251395",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
251,434 |
Suppose we have a finitely presented group $G$ with decidable word problem. Is it decidable whether a given element $x\in G$ has finite or infinite order?
|
A finitely presented group with decidable word problem and undecidable order problem appears in: James McCool, "Unsolvable problems in groups with solvable word problem," Canad. J. Math. 22 (1970), 836-838.
|
{
"source": [
"https://mathoverflow.net/questions/251434",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10443/"
]
}
|
251,470 |
I was very happy to learn that the work which led to the award of the 2016 Nobel Prize in Physics (shared between David J. Thouless, F. Duncan M. Haldane and J. Michael Kosterlitz) uses Topology. In particular, the prize was awarded "for theoretical discoveries of topological phase transitions and topological phases of matter". Having read both the popular and advanced versions of the scientific background found on the website linked above, I'm left with the impression that the most advanced topological concepts used are winding numbers, vector fields and the Poincaré-Hopf index theorem. I wonder if anyone who reads MO is sufficiently familiar with this work to explain to me why this impression is correct or incorrect. To ask a precise question: Which topological concepts and results are involved in the work which led to the award of the 2016 Nobel Prize in Physics? And a follow-up question: Where should a topologist go to read about topological phases of matter and topological phase transitions?
|
Roughly speaking: When a system consists of particles interacting strongly, you won't have large movements for any particle unless all of them move together. There is still "quantization", so you'll have jumps in the type of movement (example: the quantum Hall effect). So the topology of the system plays a role. For example, given a collection of electrons with their spins and an external magnetic field, there are alignment issues with how the spins interact with each other and the magnetic field (and that's where your vector fields and winding numbers pop up). But you're not going to really understand the use of the topology without the terminology coming from Solid State physics (alignments of particles in crystals, spin chains, band gaps, etc). Granted that, I highly recommend Witten's "Three Lectures On Topological Phases Of Matter". He ends up giving a description of our new Nobelist Haldane's model for the quantum Hall effect. He also explains the relation to Chern-Simons theory, which I think is the last remaining "topological concept" that your question is looking for. In a sense, this is analogous to the gauge-theoretic formulation of electrodynamics. To make "in a sense" rigorous, I recommend Simon's "Holonomy, the Quantum Adiabatic Theorem, and Berry's Phase". He gives the relation to the work of Thouless on the quantum Hall effect, and the key here is Berry's geometrical phase factor (which for example appears in Yang-Mills theory that describes electrodynamics). To better bring in the relation to "topological phase", consider the aforementioned collection of particles with spins (a "spin configuration"). You can imagine a region where these spins rotate 360 degrees (a "vortex"). Traversing this path of rotated spins gives you a phase -- the analog is the Aharonov-Bohm effect (the attainment of a "topological phase" by the electron wavefunction as you traverse a non-simply connected region that contains a magnetic field).
An example of a "topological phase transition" is that it may be energetically favorable for more or fewer vortices to appear as the temperature of the system is varied (an analog here is superconductivity, where you transition from penetrating magnetic fields to expulsion of the fields, with appearances of vortices in between). This is the work of our new Nobelists Kosterlitz and Thouless. More references for topologists, related to the quantum Hall effect: The Fractional Quantum Hall Effect,
Chern-Simons Theory, and Integral Lattices (Fröhlich's 1994 ICM talk); Mathematical Aspects of the Quantum Hall Effect (by Fröhlich)
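The winding numbers mentioned above are easy to compute for a discretized spin configuration. Here is a minimal sketch (the sampling of the field around a loop is my own illustrative setup, not anything from the prize work): the winding number is the accumulated signed angle of the vectors along a closed loop, divided by $2\pi$.

```python
import math

def winding_number(vectors):
    """Accumulated turning of a closed loop of 2D vectors, in whole turns.

    `vectors` is a list of (vx, vy) pairs sampled along a closed loop;
    consecutive samples are assumed to turn by less than pi.
    """
    total = 0.0
    n = len(vectors)
    for k in range(n):
        x1, y1 = vectors[k]
        x2, y2 = vectors[(k + 1) % n]
        # signed angle from vector k to vector k+1
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

# Sample the "vortex" spin field v(x, y) = (x, y) on the unit circle:
loop = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
        for k in range(100)]
print(winding_number(loop))                # one vortex
print(winding_number([(1.0, 0.0)] * 100))  # uniform field, no vortex
```

A uniformly aligned field gives winding number 0, while a vortex gives a nonzero integer; that integer is the discrete invariant that cannot change under small perturbations.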
|
{
"source": [
"https://mathoverflow.net/questions/251470",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8103/"
]
}
|
251,472 |
Let $(M,g)$ be a connected Riemannian manifold of dimension $n>1$. Then the Hopf-Rinow theorem states that $(M,g)$ is geodesically complete if and only if $(M,d_g)$ is complete as a metric space ($d_g$ is the induced intrinsic metric). I need to know if a similar result is true under weaker hypotheses: 1) Suppose $(M,d)$ is a connected metric space; is it true that if $(M,d)$ is geodesically complete then it is also complete? 2) If the answer to the previous question is no, would anything change if we consider $(M,d_g)$, where $d_g$ is the intrinsic metric induced by the Riemannian metric $g=\sum_{i,j=1}^ng_{ij}dx_idx_j$, where the $g_{ij}$ are just continuous, not $C^\infty$ (in this case I already know from this other question Geodesics for non differentiable riemannian metric that $(M,d_g)$ is locally geodesic)? Thank you!
|
|
{
"source": [
"https://mathoverflow.net/questions/251472",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
251,688 |
How much should an average mathematician not working in an area like logic, set theory, or foundations know about the foundations of mathematics? The thread Why should we believe in the axiom of regularity? suggests that many might not know about the cumulative hierarchy, which is the intended ontology that ZFC describes. (Q1) But is this something everyone should know, or does the naive approach to set theory suffice, even for some mathematical research (not in the areas listed above)? EDIT: Let me add another aspect. As Andreas Blass has explained in his answer to the MO question linked above, when set theorists talk about "sets", they mean an entity generated in the following transfinite process: Begin with some non-set entities called atoms ("some" could be "none" if you want a world consisting exclusively of sets), then form all sets of these, then all sets whose elements are atoms or sets of atoms, etc. This "etc." means to build more and more levels of sets, where a set at any level has elements only from earlier levels (and the atoms constitute the lowest level). This iterative construction can be continued transfinitely, through arbitrarily long well-ordered sequences of levels. Now, for foundational purposes it may be prudent to take the "no atoms" (pure) version of set theory. But in mathematical practice, one is working with many different types of mathematical objects, and many of them are not sets (even though one might often encode them as such). But even if one adds some atoms/non-sets – for example, the real numbers – one does not get a universe which is satisfactory for the practice of mathematics. This is because notions like "functions" or "ordered tuples" are from a conceptual perspective not sets; but we can't take them as our atoms for the cumulative hierarchy – the set of all "functions" or "ordered pairs" ... leads to paradoxes (Russell).
Now I wonder: (Q2) What should mathematicians not working in an area like logic, set theory, or foundations understand by "set"? Note that there is also the idea of structural set theory (see the nLab), and systems such as ETCS or SEAR (try to) solve these "encoding problems" and remove the problem of "junk theorems" in material set theories. One can argue that these structural set theories match mathematical practice better than material set theories do. But my personal problem with them is that they have no clear ontology. So this approach doesn't answer Q2, I think. Or, (Q3) do mathematicians not working in foundational subjects simply not need a particular picture in mind of what their set-theoretic universe should look like? This would be very unsatisfactory for me, since it would imply that one has to use axioms that work – but without understanding why they work (or why they are consistent). The only argument I can think of that they are consistent would be "one hasn't found a contradiction yet".
|
The answer is essentially the same as how much should the average mathematician know about combinatorics? Or group theory? Or algebraic topology? Or any broad area of mathematics... It's good to know some, it's always helpful to know more, but one only really needs the amount that is relevant to one's work. Perhaps a small but significant difference with foundations is that there is a natural curiosity about it, just like I'm naturally curious about the history and geography of where I live, even though I only need minimal knowledge in day-to-day life. One does need to know where to go when deeper foundational questions arise. So one should keep a logician colleague as a friend, just in case. This entails knowing enough about foundations and logic to engage in casual conversation and, when the need arises, to be able to formulate the right question and understand the answer you get. This is no different from any other area of mathematics. Some mathematicians may have more than just a casual curiosity about foundations, even if they work in a completely different area. In that case, learn as much as your time and curiosity permit. This is great since, like other areas of mathematics, foundations needs to interact with other areas in order to advance. So, what do you need to have a casual conversation with a logician? Adjust to personal taste: Some understanding of formal languages and the basic interplay between syntax and semantics. Some understanding of incompleteness and undecidability. Some understanding of the paradoxes that led to the current state of set-theoretic foundations. Some understanding that logic and foundations do interact with your discipline. To address the additional questions regarding sets: Personally, I don't think it's right to say that the notion of set is defined by foundations. It's a perfectly fine mathematical concept, though it (sometimes confusingly) has two distinct and equally important flavors.
The main evidence for this point of view is that the notion of set existed well before Cantor and its use was common. Here is one of my favorite early definitions, due to Bolzano (Paradoxien des Unendlichen, 1847): There are wholes which, although they contain the same parts A, B, C, D, ..., nevertheless present themselves as different when seen from our point of view or conception (this kind of difference we call 'essential'), e.g. a complete and a broken glass viewed as a drinking vessel. [...] A whole whose basic conception renders the arrangement of its parts a matter of indifference (and whose rearrangement therefore changes nothing essential from our point of view, if only that changes), I call a set. (See this MO question for additional early occurrences of sets of various shapes and forms.) What Bolzano describes is the combinatorial flavor of sets: it's a basic container in which to put objects, a container that is so basic and plain that it has no structure of its own to distract us from the objects inside it. There is another flavor to sets, in which they are used to classify objects, to put into a whole all objects of the same kind or sharing a common feature. This usage is also very common and also prior to foundational theories. Mathematicians use both flavors of sets, often together. For a variety of practical reasons, foundational theories tend to focus on one, and formalize standard (albeit awkward) ways to accommodate the other. So-called "material" set theories (ZFC, NBG, MK) focus on the combinatorial flavor of sets. To accommodate classification, these theories allow for as many collections of objects as possible to be put together in a set (with little to no concern whether this is motivated, necessary, or even useful). So-called "structural" set theories (ETCS, SEAR, many type theories) focus on the classification flavor of sets.
To accommodate combinatorics, these theories include a lot of machinery to relate sets and identify similar objects across set boundaries (with little to no concern about the nature of elements within sets). Both of these approaches are viable, and each has advantages over the other. However, it's plainly wrong to think that working mathematicians have to choose one over the other, or even worry about the fact that it's difficult to formalize both simultaneously. The fact is that the sets mathematicians use in their day-to-day work are just as suitable as containers as they are as classifiers.
|
{
"source": [
"https://mathoverflow.net/questions/251688",
"https://mathoverflow.net",
"https://mathoverflow.net/users/99445/"
]
}
|
252,671 |
I am looking for examples of Markov Chains which are surprising in the following sense: a stochastic process $X_1,X_2,...$ which is "natural" but for which the Markov property is not obvious at first glance. For example, it could be that the natural definition of the process is in terms of some process $Y_1,Y_2,...$ which is Markovian and where $X_i=f(Y_i)$ for some function $f$. Let me give you an example which is slightly surprising, but not surprising enough for my taste. Suppose we have $n$ bins that are initially empty, and at each time step $t$ we throw a ball into one of the bins selected uniformly at random (and independently of previous time steps). Let $X_t$ be the number of empty bins at time $t$. Then $X_1,X_2,...$ form a Markov chain. Are there good, more surprising examples? I apologize if this question is vague.
|
I could go back to Markov himself, who in 1913 applied the concept of a Markov chain to sequences of vowels and consonants in Alexander Pushkin's poem Eugene Onegin. To a good approximation, the probability of the appearance of a vowel was found to depend only on the letter immediately preceding it, with $p_{\text{vowel after consonant}}=0.663$ and $p_{\text{vowel after vowel}}=0.128$. These numbers turned out to be author-specific, suggesting a method to identify authors of unknown texts. (Here is a Mathematica implementation.) Brian Hayes wrote a fun article reviewing how "Probability and poetry were unlikely partners in the creation of a computational tool". (Figure: the first 100 Cyrillic letters of the 20,000 total letters compiled by Markov from the first one and a half chapters of Pushkin's poem. The numbers surrounding the block of letters were used to demonstrate that the appearance of vowels is a Markov chain.)
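Going back to the balls-in-bins example from the question, a short simulation (my own illustrative code, not from the original post) makes the Markov structure explicit: given $X_t = x$ empty bins, the next ball lands in an empty bin with probability $x/n$, so $X_{t+1}$ depends on the past only through $X_t$.

```python
import random

def simulate_empty_bins(n, steps, rng):
    """Throw balls into n bins uniformly at random and record the
    number of empty bins X_t after each throw (X_0 = n)."""
    occupied = [False] * n
    xs = [n]                        # all bins start empty
    for _ in range(steps):
        occupied[rng.randrange(n)] = True
        xs.append(occupied.count(False))
    return xs

rng = random.Random(0)
xs = simulate_empty_bins(10, 50, rng)
# X_t is non-increasing and drops by at most 1 per throw; the drop
# happens with probability X_t / n, independently of earlier history.
print(xs[:10])
```

The trajectory is a function of the underlying bin-occupancy process $Y_t$ (which is obviously Markov), and here the projection $X_t = f(Y_t)$ happens to remain Markov because the transition probability depends only on the count of empty bins.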
|
{
"source": [
"https://mathoverflow.net/questions/252671",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83481/"
]
}
|
252,728 |
Turing proved that not all real numbers are effectively computable, in the sense that no algorithm exists to compute some real numbers. Here is Turing's definition: A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer $n \geq 1$ as input and produces the $n$-th digit of the real number's decimal expansion as output. I am interested in what is known about the efficiency of representing real numbers and decoding speed. Specifically, I want to know the relationship between the compactness of the algorithm (that encodes a real number) and the speed of producing the digits of a real number (decoding speed as a function of $n$). A real number is efficiently computable if the number of steps needed to compute digit $n$ is bounded by a polynomial in $n$. It is hard to compute if no such polynomial bound exists. Are there any real numbers that are hard to compute (not necessarily computable ones)? What are the references? I have no formal definitions in mind.
|
EDIT: This was in a comment below, but I now think it should be part of the main answer: There are two different ways to ask the question in the OP: Is there a real number $r$ such that no polytime algorithm computes all the bits of $r$? Is there a real number $r$ such that no individual bit of $r$ can be computed by a polytime algorithm? The former is the question I answer below; the latter is trivial! Given any real $r$, and any $n$, there is a polytime (indeed, constant time) algorithm $p_{r, n}$ which computes the first $n$ bits of $r$ correctly. (Consider the silly algorithm which has the string $\langle r(0), r(1), \ldots, r(n)\rangle$ "hard-coded" in: on input $k$ for $k\le n$ this algorithm outputs $r(k)$, and on input $k$ for $k>n$ it outputs $0$ (say).) So in order to get anything interesting, we need to look at algorithms which attempt to compute all the bits simultaneously. Note that noncomputable reals trivially satisfy the first question, so the right question to ask is: Is there a computable real number $r$ such that no polytime algorithm computes all the bits of $r$? Here's an explicit construction of a computable real which is hard to compute: Let $\{\varphi_e\}$ list all (partial) computable functions, and $\{p_e\}$ all polynomials. We define a real number $r$ as follows: the $n$th bit of the binary expansion of $r$ is a $1$ iff $\varphi_i(n)$ does not halt and output $1$ in $\le p_j(n)$ steps (so, either doesn't halt in that time, or does halt and outputs something $\not=1$) - where $n=\langle i, j\rangle$. (Here "$\langle\cdot,\cdot\rangle$" denotes the Cantor pairing function.) $r$ is computable (note that we run each computable function for only a bounded amount of time before deciding what to do for that bit), but $r$ is not computable in polynomial time since it diagonalizes against all polynomial-time computations: if $\varphi_i$ has running time bounded by $p_j$, then $\varphi_i(\langle i, j\rangle)\not=r(\langle i, j\rangle)$.
Note that nothing special about polynomials was used here: given any reasonable complexity class (for instance, any complexity class of the form "running time bounded by some $f\in\mathcal{F}$" for $\mathcal{F}$ a computable set of computable total functions), there is a computable real whose bits are not computable by any algorithm in that class. Going back to the second, "trivial" version of the question, it can actually be "de-trivialized" in an interesting way: look at how hard it is to get successively longer approximations to $r$. That is, the silly algorithm I describe which gets the first $n$ bits correctly has length $\approx n$; can we do better? This question forms the basis of Kolmogorov complexity. Roughly speaking, given a real $r$, let $K^r$ be the function whose $n$th value is the length of the shortest Turing machine which computes the first $n$ bits of $r$ correctly. Then we can ask (even for noncomputable $r$!): how quickly does $K^r$ grow? If $r$ is computable, then $K^r$ is eventually constant; if $r$ is not computable, however, things get really interesting. (See e.g. the notion of K-trivials, which are reals that are "almost computable" in the sense of Kolmogorov complexity.) Now, this isn't quite what you're asking about - you'd want to look at $K_{poly}^r$, the function whose $n$th value is the length of the shortest polytime Turing machine which computes the first $n$ bits of $r$ correctly. This, and other resource-bounded variations of Kolmogorov complexity, don't appear to be as well studied - but see this article.
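The Cantor pairing function $\langle i, j\rangle$ used in the diagonalization above is completely concrete; here is a minimal sketch of it and its inverse (the standard formula, included only to make the indexing explicit):

```python
import math

def pair(i, j):
    """Cantor pairing: a bijection from N x N to N."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    """Inverse of the Cantor pairing function."""
    w = (math.isqrt(8 * n + 1) - 1) // 2   # largest w with w*(w+1)/2 <= n
    j = n - w * (w + 1) // 2
    return w - j, j

# Every bit index n decodes to a unique (i, j), so bit n of the diagonal
# real r can be read as "diagonalize machine i against polynomial j".
assert all(unpair(pair(i, j)) == (i, j) for i in range(50) for j in range(50))
```

Because the pairing is a bijection, each pair (machine, polynomial bound) gets exactly one bit of $r$ to diagonalize against, which is what makes the construction in the answer well defined.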
|
{
"source": [
"https://mathoverflow.net/questions/252728",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8784/"
]
}
|
253,010 |
The classical lore is that $H^1(X,\mathcal F)$ is the obstruction to lifting local data to global data. However, I don't understand why one would want to compute $H^3(X,\mathcal F), H^4(X,\mathcal F), \cdots$. For a complex manifold $X$, $H^1(X,\mathcal O)$ and $H^{0,1}(X)$ both represent the obstruction to local-to-global lifting of holomorphic functions. This in particular allows one to determine whether the Mittag-Leffler problem can be solved. $H^1(X,\mathcal O)=0$ implies local solutions can be modified to identify a global solution, and $H^{0,1}(X)=0$ implies that local solutions can be multiplied by a smooth bump function, after which $\bar\partial$-exactness kicks in to save the day. However, what does $H^q(X,\Omega^p)$ or $H^{p,q}(X)$ mean (which are the same by the Dolbeault theorem)? $H^1(X,\Omega^p)$ or $H^{p,1}(X)$ should represent the same local-to-global problem for holomorphic $p$-forms, but what about, say, $H^3(X,\Omega^p)$? Sure, one can talk about Čech cohomology for a good cover; $H^3(X,\mathcal F)$, for example, is about lifting sections defined on quadruple intersections to triple intersections. That's fine and all, but that doesn't sound very compelling to me. It seems to me that lifting sections on $q$-fold intersections to sections on $(q-1)$-fold intersections doesn't really solve any natural, interesting problems that arise independently of the formalisms introduced (for example, the Lefschetz fixed point formula solves the problem of counting fixed points, which is defined independently of singular homology, and therefore I'd consider this a very compelling reason to study singular homology groups). A similar situation appears when we study Chern classes and line bundles.
The Bockstein morphism for the exponential sheaf sequence, $H^1(X,\mathcal O^\times) \rightarrow H^2(X,\mathbb Z)$, is precisely taking the Chern class of a line bundle, so it helps to know things like $H^1(X,\mathcal O)=H^2(X,\mathcal O)=0$, which allow us to classify line bundles for manifolds like $\mathbb {CP}^n$. However, there does not seem to be a reason to care about $H^3(X,\mathcal O)$, etc. Note: I have checked out a similar question. There, one of the answers points out that for a sheaf $\mathcal F$, if we can find an acyclic sheaf $\mathcal A$ such that $\mathcal F$ is a subsheaf of $\mathcal A$, then
$$0\rightarrow \mathcal F \rightarrow \mathcal A \rightarrow \mathcal A/\mathcal F \rightarrow 0 $$
is exact and therefore by long exact sequence coming from this,
$$ H^{p}(X,\mathcal F) \cong H^{p-1}(X,\mathcal A / \mathcal F) \quad (p \ge 2),$$
and therefore higher cohomology groups can be understood as obstructions ($H^1$), and in fact even as global sections ($H^0$), of $\mathcal A_1/(\mathcal A_2 / \cdots (\mathcal A_p /\mathcal F)\cdots )$. This just mystifies the issue further for me, somewhat, largely because I can't think of a canonical choice of such an acyclic $\mathcal A$, and therefore I can't interpret the meaning of local-to-global lifting of iterated-quotient sheaves.
|
I think these kinds of lifting-based statements are the wrong place to start. For me these arguments about lifting and such are most convincing as answers to questions like "What are cohomology groups?". For instance $H^1(X,\mathcal O_X)$ is the tangent space to the moduli space of line bundles on $X$. But that doesn't really explain why we should study it. How much do you really care about the Mittag-Leffler problem? Probably not enough to base your entire mathematical career around it. But a huge fraction of modern mathematicians have based their mathematical careers around studying cohomology or generalisations of it. Instead, as you suggest, what motivates the study of cohomology are its immensely powerful applications. For Dolbeault cohomology and coherent sheaf cohomology in particular, I would say the premier applications are to classification-type problems in algebraic geometry. First, Dolbeault cohomology groups provide natural invariants (the Hodge numbers) that can be used to distinguish algebraic varieties. Second, the machinery of sheaf cohomology is crucial in answering all sorts of geometric questions about line bundles and other geometric structures. A good example might be the proof of the Hodge index theorem via Hirzebruch-Riemann-Roch, as in Hartshorne. We need all the cohomology groups to define the Euler characteristic, and the Euler characteristic is such a nice invariant that there is a simple formula for it, and this simple formula helps us understand purely geometric questions about intersection of curves. I would say that at a typical algebraic geometry seminar the majority of the talks involve higher sheaf cohomology at some point, but most talks are not devoted to answering a question expressed in terms of cohomology, so one could find many examples there. Third, there are the applications via Hodge theory.
The natural isomorphism between the singular cohomology and the Dolbeault cohomology itself has nontrivial structure which is related in a highly nontrivial way to the geometry and arithmetic of the variety. In particular it is supposed to tell you about algebraic cycles - this is the Hodge conjecture. While that is open, many interesting facts are known - e.g. the K3 surfaces are completely determined by their Hodge structure. Studying how these Hodge structures vary in a family of varieties leads to all sorts of interesting theory, which again can be used to solve purely geometric questions - the statement I remember off the top of my head is that varieties of general type cannot have a nowhere vanishing one-form (Popa and Schnell). And this is not even getting into the very interesting applications of other cohomology theories for algebraic varieties, such as étale cohomology.
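To make the Euler-characteristic point concrete: for a line bundle $L$ on a smooth projective curve $X$ of genus $g$, the standard Riemann-Roch statement (recalled here only for illustration) reads
$$\chi(X,L) \;=\; \dim H^0(X,L) - \dim H^1(X,L) \;=\; \deg L + 1 - g.$$
All the cohomology groups enter the left-hand side, yet only their alternating sum admits the simple closed formula on the right; this is the sense in which one needs the higher groups even to state the theorem.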
|
{
"source": [
"https://mathoverflow.net/questions/253010",
"https://mathoverflow.net",
"https://mathoverflow.net/users/68215/"
]
}
|
253,059 |
The ordinary intermediate value theorem (IVT) is not provable in constructive mathematics. To show this, one can construct a Brouwerian "weak counterexample" and also promote it to a precise countermodel: the basic idea is that the root may not depend continuously or computably on the function, since a small perturbation in a function's value may cause a root to appear or disappear. There are, however, many variants of the IVT that are constructively provable. This question is about the approximate IVT, which says that if $f(a)<0<f(b)$ then for any $\epsilon>0$ there is a point $x$ with $|f(x)|<\epsilon$. It seems to be well-known that the approximate IVT can be proven assuming either (1) countable (or maybe dependent) choice, or (2) that $f$ is uniformly continuous. This paper contains many versions of approximate IVT using somewhat weaker hypotheses such as "strong continuity" of $f$. But I would like to know: Can the approximate IVT be proven constructively about an arbitrary (pointwise) continuous function $f$, without using any form of choice or excluded middle (e.g. in the mathematics valid in any elementary topos with NNO)? If the answer is no, I would like to see at least a weak counterexample, or even better a specific countermodel (e.g. a topos in which it fails). Edit: To clarify, the functions in question are from the real numbers (or some interval therein) to the real numbers. I'll accept an answer that defines "the real numbers" either as equivalence classes of Cauchy sequences or as Dedekind cuts (but not as a "setoid" of Cauchy sequences).
|
Here's a constructive proof of the approximate Intermediate Value Theorem from pointwise continuity, not relying on Dependent Choice and not relying on a setoid construction of the reals. Theorem: If $f$ is pointwise continuous with $f(a)<0, \ f(b)>0,\ \epsilon>0$ then there is some $x$ with $|f(x)|<\epsilon$. Proof: Define the following inductively:
$$a_1 = a$$
$$b_1 = b$$
$$c_n = (a_n+b_n) / 2$$
$$d_n = \max( 0, \min( \textstyle\frac{1}{2}+ \frac{ f(c_n)}{\epsilon}, 1))$$
$$a_{n+1} = c_n - d_n (b-a)/2^n $$
$$b_{n+1} = b_n - d_n (b-a)/2^n$$
Then $b_n - a_n = (b-a)/2^{n-1}.$
So the $c_n$'s converge to some $c.$ By pointwise continuity at $c$,
let $\delta$ be such that $ |x-c|<\delta$
implies $|f(x)-f(c)| < \epsilon.$ Claim :
For any $m\in\mathbb{N}$, either (i) $\exists j \le m,\ |f(c_j)| < \epsilon$
or (ii) $f(a_m) < 0$ and $f(b_m) > 0$. Proof of theorem from claim: Choose $m$ such that $|c-c_m| < \delta / 2$ and $(b-a)/2^m < \delta / 2$, and apply the claim. In the first case of the claim, the theorem is immediate. In the second case of the claim,
$$ |c-a_m| \le |c-c_m| + |c_m-a_m| < \delta, \text{ so }|f(c)-f(a_m)| < \epsilon$$
$$ |c-b_m| \le |c-c_m| + |c_m-b_m| < \delta, \text{ so }|f(c)-f(b_m)| < \epsilon$$
So $f(c)$ is within $\epsilon$ of both a negative and a positive number, and $|f(c)| < \epsilon$, QED. Proof of claim by induction on $m$.
The base case is given by $ f(a) < 0, \ f(b) > 0.$ Now assume the claim for $m$. In case (i), for some $ j \le m,\ |f(c_j)| < \epsilon$, and the inductive step is trivial. In case (ii), use trichotomy with either
$f(c_m) < -\epsilon/2, \ |f(c_m)| < \epsilon$, or $f(c_m) > \epsilon/2.$
If $|f(c_m)| < \epsilon$, then the inductive step is again trivial.
If $f(c_m) > \epsilon/2$, then
$$d_m = 1$$
$$a_{m+1} = a_m, \text{ so }f(a_{m+1}) < 0$$
$$b_{m+1} = c_m,\text{ so } f(b_{m+1}) > 0$$ If $f(c_m) < - \epsilon/2$, then
$$d_m = 0$$
$$a_{m+1} = c_m, \text{ so }f(a_{m+1}) < 0$$
$$b_{m+1} = b_m,\text{ so } f(b_{m+1}) > 0$$ QED (claim and theorem). I think this one works; I look forward to seeing what MO says.
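Since the proof is fully constructive, it doubles as an algorithm. Here is a minimal Python sketch of the recursion (the function name, the `steps` cap, and the best-midpoint bookkeeping are my additions, not part of the answer; the claim guarantees that either some midpoint $c_j$ already works or the limit of the $c_n$ does, so tracking the best midpoint seen covers both cases):

```python
def approx_ivt(f, a, b, eps, steps=60):
    """Run the recursion a_n, b_n, c_n, d_n from the proof above.

    Assumes f is continuous with f(a) < 0 < f(b); returns a point x
    with |f(x)| < eps."""
    best = (a + b) / 2
    for _ in range(steps):
        c = (a + b) / 2                              # c_n
        if abs(f(c)) < abs(f(best)):
            best = c                                 # best midpoint so far
        d = max(0.0, min(0.5 + f(c) / eps, 1.0))     # d_n
        half = (b - a) / 2                           # = (b-a)/2^n at step n
        a, b = c - d * half, b - d * half            # a_{n+1}, b_{n+1}
    return best

x = approx_ivt(lambda t: t**3 - 2, 0.0, 2.0, 1e-6)
assert abs(x**3 - 2) < 1e-6
```

Far from a root the clipped $d_n\in\{0,1\}$ makes this pure bisection; near a root the fractional $d_n$ smoothly interpolates, which is exactly what makes the scheme choice-free.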
|
{
"source": [
"https://mathoverflow.net/questions/253059",
"https://mathoverflow.net",
"https://mathoverflow.net/users/49/"
]
}
|
253,090 |
I am supposed to give a talk about the Riemann-Roch theorem to a seminar of first and second year graduate students. I want to do Riemann-Roch for compact Riemann surfaces, but I am open to perhaps doing the version for projective curves. I assume the crowd knows little algebraic geometry. I assume however, knowledge of elementary differential geometry, sheaves, cohomology and complex analysis. I am looking for an elementary proof (especially because I want time to be able to work out some examples). Any references?
|
There is a big difference in difficulty between the compact Riemann surface case and the projective curve case, for reasons already mentioned. Namely, a projective curve comes equipped with a large supply of meromorphic functions, but the proof that they exist for a compact Riemann surface is a major step. I suggest you decide what your goal is. E.g., do you want to make clear why the theorem is true, without giving all steps of verification, or do you want to show how some of the trivial steps are derived easily using modern sheaf techniques, or perhaps give complete derivations of some significant parts of the statement? If you use sheaf theory, it is trivial to show that chi(D)-chi(O) = deg(D) for any divisor D, where chi is the holomorphic Euler characteristic: chi(D) = h^0(D)-h^1(D). To get full RRT from this one needs to compute chi(O) = 1-g, where g is the topological genus, and then to prove duality, that h^1(D) = h^0(K-D), where K is the canonical divisor. In 4 pages of the notes on my web page, I take an argument from Bill Fulton to do one of these for plane curves, namely that chi(O) = 1-g, with some hand waving over the computation of topological Euler characteristics by deformation. See pages 38-42: http://alpha.math.uga.edu/%7Eroy/rrt.pdf If you just want to show why the result is true with some arguments omitted, I feel nothing beats Riemann’s own exposition. Riemann himself proved the theorem in clear natural stages: first the theorem in the special case of the canonical divisor, i.e. he proved that there are exactly g independent holomorphic differential forms on a compact Riemann surface of genus g. Then he proved that for each point p, there is one meromorphic differential with a double pole at p and zero residue, equivalently he proved the RRT for divisors of form K+2p, i.e. that h^0(K+2p) = g+1, so that in addition to the g holomorphic forms there is one with a double pole at p, (and necessarily zero residue).
(Riemann also allows himself the simplification of assuming all points of the divisor considered are distinct.) Then, from the existence of these basic types of differentials, one deduces the converse of the residue theorem. Even if you barely read German, you can get the idea of what is most important just from deciphering the headings of the first two paragraphs in Riemann's treatment of the theorem in his great paper "On Abelian Integrals"; (since differentials are the things you integrate, he speaks of their integrals): "Integrale erster Gattung" (integrals of first kind, i.e. of holomorphic differentials); "Integrale zweiter Gattung" (integrals of second kind, i.e. of differentials with double pole at one point).... With this information, one can deduce the RRT in two steps, as clearly explained in Griffiths and Harris’s book, (see in particular pages 233, 244-5). Namely for each effective divisor D, to compute the number of independent meromorphic functions with pole divisor dominated by D, is equivalent to computing the number of meromorphic differential forms that would occur as their differentials. The assumed facts give the number of meromorphic differentials with the right singularities, and then one has only to compute how many of those differentials are exact, which is a period computation. As in Riemann’s original paper they give this calculation in terms of the periods of integrals, and as in Roch’s follow-up these periods are computed in terms of residues using Green’s theorem. All this is done nicely in Griffiths and Harris, where they depend on the Kodaira vanishing theorem for the requisite existence of differential forms of first and second kinds. (They also discuss this deep theorem earlier in their book.) The point is that the deduction of RRT from the known existence of the right number of differential forms of first and second kinds is very clear and elementary complex calculus.
Hence it only remains to provide an elementary proof of these prerequisite facts about forms. Now if one restricts attention to plane curves, at least the holomorphic forms can be immediately written down in coordinates, as Riemann himself points out. He also says one can write them down in the meromorphic case as well, but does not do so. I believe this can indeed be done in an elementary way, as indicated in the book of Brieskorn and Knorrer, but I have not done it except in special cases. In fact one can finesse the existence of the forms of second kind by the trick of Brill and Noether; i.e. one can use duality to rely exclusively on the existence of holomorphic forms for the theorem. This argument is explained in the books of Arbarello, Cornalba, Griffiths and Harris (appendix A, chapter 1), and in the book on algebraic curves by Griffiths. I have also written out this argument in the notes for my course in pdf form, at the link below: http://alpha.math.uga.edu/%7Eroy/8320.pdf If you want a complete argument for compact Riemann surfaces, a nice treatment using sheaves is in the book Lectures on Riemann surfaces by Robert Gunning, including some nice Hilbert space arguments to deduce the finiteness of cohomology groups. I like the old book, but a more current version of his notes is posted on his web site at: https://web.math.princeton.edu/~gunning/ Finally, David Mumford recommended reading the argument by George Kempf, in Crelle’s Journal, and reproduced in his book Algebraic Varieties, using sheaf theory heavily for the algebraic case. I found this rather terse, but aspire to understanding it. Here is a link to it: http://gdz.sub.uni-goettingen.de/dms/load/img/?PID=GDZPPN002194015&physid=PHYS_0046 Please forgive the somewhat rushed answer, I hope some of the references are useful. I apologize if my correction of an error causes a bump, but I hope it does not.
|
{
"source": [
"https://mathoverflow.net/questions/253090",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
253,193 |
I'd like to formulate an abstract definition of convex sets: a set $K$ is convex if it is endowed with a ternary operation $K\times[0,1]\times K\to K$, written $(x:t:y)$, satisfying axioms $(x:0:y)=(x:t:x)=x$ $(x:t:y)=(y:1-t:x)$ $(x:t:(y:\frac ut:z))=((x:\frac{t-u}{1-u}:y):u:z)$ The axioms imply that, for every $x_1,\dots,x_n\in K$ and $t_i\ge0$ with $\sum t_i=1$ the convex combination $\sum t_i x_i\in K$ is well defined. Examples are the usual convex subsets of real vector spaces, with $(x:t:y)=(1-t)x+ty$, but also trees with $(x:t:y)$ the unique point at ratio $t$ on the geodesic from $x$ to $y$, and more generally $\mathbb R$-trees (geodesic metric spaces in which all triangles are isometric to tripods). I'm sure this has already been explored, and I'd rather not reinvent the wheel (and I possibly missed some useful axioms), but I couldn't find any reference to such notions. Natural questions that come to mind are: define an affine map between convex sets $K,L$ as a map $f\colon K\to L$ with $f(x:t:y)=(f(x):t:f(y))$. Topologize then convex sets by making all affine maps to $\mathbb R$ continuous. What can be said of these topological spaces? can every convex set be represented in a vector space? I have in mind the map $K\to\ell^1(K)/\{\delta_{(x:t:y)}=(1-t)\delta_x+t\delta_y\forall x,t,y\}$, though the topologies will probably not match (and I'm not sure I want to close the space I quotient $\ell^1$ by). Thanks to all! Any kinds of references or comments are welcome.
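As a quick numeric sanity check of the three axioms (my own snippet, not part of the question): in the standard model $(x:t:y)=(1-t)x+ty$ on $\mathbb R$ they hold identically. Note the third axiom implicitly needs $u\le t$ so that $u/t$ and $(t-u)/(1-u)$ stay in $[0,1]$.

```python
import random

def op(x, t, y):
    # the standard model of the ternary operation: (x : t : y) = (1-t)x + t*y
    return (1 - t) * x + t * y

for _ in range(100):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    t = random.uniform(0.01, 0.99)
    u = random.uniform(0.0, t)             # u <= t, so u/t lies in [0, 1]
    assert abs(op(x, 0, y) - x) < 1e-9     # (x : 0 : y) = x
    assert abs(op(x, t, x) - x) < 1e-9     # (x : t : x) = x
    assert abs(op(x, t, y) - op(y, 1 - t, x)) < 1e-9   # symmetry axiom
    # (x : t : (y : u/t : z)) = ((x : (t-u)/(1-u) : y) : u : z)
    lhs = op(x, t, op(y, u / t, z))
    rhs = op(op(x, (t - u) / (1 - u), y), u, z)
    assert abs(lhs - rhs) < 1e-9
```

Expanding both sides of the third axiom gives $(1-t)x+(t-u)y+uz$, which is why the check passes exactly up to rounding.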
|
There has been a bunch of work along these lines, and I think the idea has been rediscovered several times. I suggest looking at the papers of Anna Romanowska , who refers to them as "barycentric algebras", to get an idea of what's known. Her book with Smith, "Modes", covers this as well as generalizations where $t$ is not required to be in $[0,1]$. Here are some slides that cover the basics. It's known that not all of them are representable as vector spaces. For example, if you mush everything in the interior of $[0,1]$ to a single point, your operations are still well-defined, but you can't embed it in a vector space. The Modes book has a structure theorem.
|
{
"source": [
"https://mathoverflow.net/questions/253193",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10481/"
]
}
|
253,703 |
I find the following averaged integral amusing and intriguing, to say the least. Is there any proof? For any pair of integers $n\geq k\geq0$, we have
$$\frac1{\pi}\int_0^{\pi}\frac{\sin^n(x)}{\sin^k(\frac{kx}n)\sin^{n-k}\left(\frac{(n-k)x}n\right)}dx=\binom{n}k. \tag1$$ I also wonder if there's any reason to relate these with an MO question that I just noticed . Perhaps by inverting? AN UPDATE. I'm extending the above to a stronger conjecture shown below. For non-negative reals with $r\geq s$, a generalization is given by
$$\frac1{\pi}\int_0^{\pi}\frac{\sin^r(x)}{\sin^s(\frac{sx}r)\sin^{r-s}\left(\frac{(r-s)x}r\right)}\,dx
=\binom{r}{s}. \tag2$$
|
Tonight I read here [the answer by esg to another question of yours] that $\frac1{2\pi}\int_{-\pi}^\pi e^{-ik t}(1+e^{it})^ndt=\binom{n}{k}$, which is, well, obvious at least when both $n$ and $k$ are positive integers: just expand the binomial $(1+e^{it})^n$ and integrate. Denoting $\alpha=k/n$ we may rewrite this as $\frac1{2\pi}\int_{-\pi}^\pi (f(t))^n dt=\binom{n}{\alpha n}$, where the function $f(t)=(1+e^{it})e^{-i\alpha t}$ is complex-valued. To make it real-valued, we change the path between the points $-\pi$ and $\pi$. The value of the integral does not change (since $f^n$ is analytic between the two paths; for integer $n$ it is simply an entire function). On the second path $f$ takes real values. Namely, for $t\in (-\pi,\pi)$ we define $s(t)=\ln \frac{\sin (1-\alpha)t}{\sin \alpha t}$. It is straightforward (some elementary high school trigonometry) that $$f(t+is(t))=\frac{\sin t}{\sin^{\alpha} \alpha t\cdot \sin^{1-\alpha}(1-\alpha)t},$$
so we replace the path $(-\pi,\pi)$ by $\{t+s(t)i:t\in (-\pi,\pi)\}$ (the limit values of $s(t)$ at the endpoints are equal to 0) and take only the real part of the integral (this allows us to replace $d(t+s(t)i)$ by $dt$ in the differential). We get
$$
\frac1{2\pi}\int_{-\pi}^\pi \frac{\sin^n t}{\sin^{\alpha n} \alpha t\cdot \sin^{(1-\alpha)n}(1-\alpha)t}dt=\binom{n}{\alpha n}
$$
as desired.
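A quick numerical check of identity (1) from the question (my own snippet, not part of the proof). The midpoint rule conveniently avoids $x=0$, where the integrand has a removable singularity:

```python
import math

def integrand(n, k, x):
    # the integrand of identity (1)
    return math.sin(x)**n / (math.sin(k*x/n)**k * math.sin((n-k)*x/n)**(n-k))

def midpoint(f, a, b, m=20000):
    # composite midpoint rule with m subintervals
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

n, k = 7, 3
val = midpoint(lambda x: integrand(n, k, x), 0.0, math.pi) / math.pi
assert abs(val - math.comb(n, k)) < 1e-3   # C(7,3) = 35
```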
|
{
"source": [
"https://mathoverflow.net/questions/253703",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
253,711 |
In Harris's book there is an exercise that the image of the diagonal $\Delta \subset \mathbb{P}^n\times\mathbb{P}^n$ under the Segre map is the Veronese variety $\nu_2(\mathbb{P}^n)$. I want to understand this in general. Is this correct for any variety? Precisely, let $X\subseteq\mathbb{P}^n$ be a closed projective variety. Is the following correct? The $r$th Veronese of $X$ is the diagonal in the Segre, i.e. $\nu_r(X)=\Delta\cap (X\times\ldots\times X)$, where $\Delta$ is the diagonal of $\mathbb{P}^n\times\ldots\times\mathbb{P}^n$ ($r$ times) and $X\times\ldots\times X$ is the $r$-fold (Segre) product of $X$. If yes, how should one proceed to prove it? Hints are enough.
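For the case in Harris's exercise with $n=1$, $r=2$, the statement can even be checked numerically (my own snippet): on the diagonal $y=x$, the Segre image of $\mathbb P^1\times\mathbb P^1$ lands in the linear subspace $\{z_1=z_2\}\cong\mathbb P^2$ and there satisfies the conic relation cutting out $\nu_2(\mathbb P^1)$.

```python
import random

def segre_diagonal(x0, x1):
    # Segre([x0:x1], [y0:y1]) = [x0 y0 : x0 y1 : x1 y0 : x1 y1], restricted to y = x
    return [x0 * x0, x0 * x1, x1 * x0, x1 * x1]

for _ in range(50):
    x0, x1 = random.uniform(-3, 3), random.uniform(-3, 3)
    z = segre_diagonal(x0, x1)
    assert abs(z[1] - z[2]) < 1e-9                 # lies in the plane z1 = z2
    assert abs(z[0] * z[3] - z[1] * z[2]) < 1e-9   # Veronese relation z0 z3 = z1 z2
```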
|
|
{
"source": [
"https://mathoverflow.net/questions/253711",
"https://mathoverflow.net",
"https://mathoverflow.net/users/100393/"
]
}
|
253,714 |
Could you give me an example of a clear and beautiful application of the Bott residue formula in torus-equivariant cohomology (see below)? I found an example calculating a product of Chern classes on a singular quadric in section 5.3 of Edidin, Graham, Localization in equivariant intersection theory and the Bott residue formula, but in the singular case one needs equivariant Chow groups instead of equivariant cohomology, and anyway the example is too technical. Let $T=(\mathbb C^*)^n$ be an algebraic torus acting on a smooth complex projective variety $X$ such that there are only finitely many $T$-fixed points of $X$. A technically simple case of the Bott residue formula says that for a $T$-equivariant vector bundle $E$ with $\operatorname{rk} E=\dim X$ the top Chern class is a number given by
$$\int_X c_{top}(X, E)=\sum_{x \in X^T} \frac{c_{top}^T (x, E_x)}{c_{top}^T (x, TX_x)},$$
where the right hand side should be calculated in the quotient field of $T$-equivariant cohomology ring of a point, that is simply
$$\operatorname{Quot} H_T^*(x)=\operatorname{Quot} \mathbb Q[t_1, \ldots, t_n]=\mathbb Q(t_1, \ldots, t_n);$$ the $T$-equivariant top Chern class $c_{top}^T(x, V)$ of a $T$-representation $V$ (above it is the $x$-fiber $E_x$ or $TX_x$) is the product of $T$-characters of $V$ as elements
$$a_1t_1+\ldots+a_nt_n \in \mathbb Q[t_1, \ldots, t_n];$$ the equality is taken in $\mathbb Q(t_1,\ldots,t_n)$ as the summands lie there, but the sum lies in $\mathbb Q$. So one can derive the formula $\chi(X)=\chi(X^T)$ by choosing $E=TX$, but it is too degenerate. Could you give me an example which is more meaningful but still clear enough? Any reference or idea is also welcome!
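Perhaps the most basic non-degenerate check, which I'll add with the caveat that the fiber weights below depend on a choice of linearization (an assumption on my part): take $n=1$, $X=\mathbb P^1$ with fixed points $[1{:}0]$, $[0{:}1]$ and tangent weights $t$, $-t$, and $E=\mathcal O(d)$ with fiber weights $\mu$ and $\mu-dt$ at the two fixed points. Then the formula recovers the degree:

```latex
\int_{\mathbb{P}^1} c_1\bigl(\mathcal{O}(d)\bigr)
  \;=\; \frac{\mu}{t} \;+\; \frac{\mu - dt}{-t}
  \;=\; d .
```

Note that the choice of $\mu$ cancels, as it must: the left-hand side is an integer independent of the linearization.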
|
|
{
"source": [
"https://mathoverflow.net/questions/253714",
"https://mathoverflow.net",
"https://mathoverflow.net/users/43639/"
]
}
|
254,219 |
Suppose you have a closed $m$-dimensional manifold $M$, which embeds in $\mathbb{R}^{n+1}$ for some $n$. Can it have a closed submanifold $N$ (of dimension strictly smaller than $m$) which does not embed in $\mathbb{R}^n$? (By which I mean there is no embedding, not just that the restriction/projection of the first one doesn't work.) I'm not sure how important the dimension of $N$ is; it seems like codimension 1 would be the easiest place to find an example, but I'm also interested in higher codimension examples. In fact, is there an upper limit to the codimension in which such examples can exist? I would particularly like a codimension 1 example where both $M$ and $N$ are orientable. Or an example (orientable or not) for $n=2$ or $3$, but maybe there are obstructions in low dimensions... I am primarily thinking about smooth manifolds, but examples in the topological category would also be interesting.
|
Here's another way to get examples, in codimension one and in low dimensions. There are lots of oriented closed 3-manifolds that don't embed in 4-space, for example any 3-manifold $M$ with $H_1(M) \cong \mathbb{Z}_{2n}$. But a theorem of Hirsch says that $M$ embeds in $\mathbb{R}^5$. By a standard transversality argument (like the one that produces Seifert surfaces for higher-dimensional knots) $M = \partial W$ for some oriented $W \subset \mathbb{R}^5$. Now $W$ has a trivial normal bundle, so $W \times I \subset \mathbb{R}^5$, and so $M \subset \partial(W \times I)$, which is just the double of $W$. You can do a similar thing with an oriented closed $n$-manifold that doesn't embed in $n+1$-space but does embed in $n+2$ space. These aren't so easy to come by; for $n=4$ they were constructed by Tim Cochran (Inventiones Math. 77 (1984), 173--184). If you don't mind non-orientable examples, then you can do this with $n=2$. For a Klein bottle embeds in the quaternionic space form $S^3/Q8$, which in turn embeds in $\mathbb{R}^4$. (It is the boundary of a tubular neighborhood of an embedded $RP^2$.) But a Klein bottle cannot embed in $\mathbb{R}^3$.
|
{
"source": [
"https://mathoverflow.net/questions/254219",
"https://mathoverflow.net",
"https://mathoverflow.net/users/91903/"
]
}
|
254,364 |
Other than reading MathSciNet regularly, is there a way to get notified when one of my papers gets a review on MathSciNet?
|
While you could do as @Geoff Robinson suggests and look at MathSciNet regularly, in early 2017, we plan to have email alerts enabled on MathSciNet. The target is February 2017. Other new features will be rolled out in January and February. The first demos will be at the Joint Mathematics Meetings in Atlanta. Edward Dunne,
Executive Editor, Mathematical Reviews
|
{
"source": [
"https://mathoverflow.net/questions/254364",
"https://mathoverflow.net",
"https://mathoverflow.net/users/955/"
]
}
|
254,669 |
What is some current research going on in the foundations of mathematics about? Are the foundations of mathematics still a research area, or is everything solved? When I think about foundations I'm thinking of reducing everything to ZFC set theory + first order logic. Is this still contemporary?
|
It is quite difficult to answer this question comprehensively. It's a bit like asking "so what's been going on in analysis lately?" It is probably best if logicians who work in various areas each answer what is going on in their area. I will speak about logic in computer science. I am very curious to see what logicians from other areas have to say about their branches of logic. A hundred years ago logic was driven by philosophical and foundational questions about mathematics. The development of first-order logic and ZFC was one important branch of logic, but others were model theory and computability theory. In fact, computability theory led directly to the invention of modern computers. Ever since logic has had a very fruitful relationship with computer science. One could argue that applications of logic in computer science are not about foundations of mathematics, but such an opinion can hardly be defended. Of course, many applications of logic in computer science are just that, applications , but not all of them. Some are directly related to foundations of mathematics (see below), while others are about understanding what computation is about (Turing got us on the right track, but there's a lot more that can be said about computation apart from "it's all equivalent to Turing machines"). I hope you can agree that computation is a foundational concept in mathematics, equal to such concepts as (mathematical) structure and deductive method . We live in an era of logic engineering : we are learning how to apply methods of logic in situations that were not envisioned 100 years ago. Consequently, we need to devise new kinds of logic that are well-suited for new problems at hand. To give just one example: real-world computer systems often work concurrently with each other in an environment which acts in a stochastic fashion. 
Computer scientists have learned that Turing machines, first-order logic and set theory are not a very good way of describing or analyzing such systems (the computers are of course no more powerful than Turing machines and can be simulated by them, but that's hardly a helpful way to look at things). Instead they devise new kinds of logic which are well suited for their problems. Often we design logics that have good computational properties, so that machines can use them. Another aspect of logic in computer science, which is closer to a mathematician's heart, is computer-supported formalization of mathematics. Humans are not able to produce complete formal proofs of mathematical theorems. Instead we adhere to what is best described as "informal rigour". Computers, however, are very good at checking all the details of a mathematical proof. Modern tools, such as various proof assistants (Coq, HOL, Agda, Lean), are improving at a fast rate, and many believe they will soon become useful tools for the working mathematician. Be that as it may, the efforts to actually do mathematics formally are driving research in how this can be feasibly done. While some proof assistants do use first-order logic and set theory (e.g. Mizar), most use type theory. This is probably not just a fashion. It is a response to the fact that first-order logic and set theory suffice for formalization of mathematics in principle, but not so much in practice. Logicians have been telling us that in principle all mathematics can be formalized. And now that we're actually doing it, we're learning new things about logic. This to my mind is one of the most interesting contemporary developments in foundations of mathematics.
|
{
"source": [
"https://mathoverflow.net/questions/254669",
"https://mathoverflow.net",
"https://mathoverflow.net/users/101121/"
]
}
|
254,881 |
Let $\lceil a\rceil=$ the smallest integer $\geq a$, otherwise known as the ceiling function. When the arguments are real, interpret $\binom{a}b$ using Euler's gamma function, $\Gamma$. Recently, I posted a problem on MO about "sin-omials". Here is another one in the same spirit. QUESTION. Numerical evidence suggests that
$$\left\lceil \int_0^n\binom{n}x dx\right\rceil=\sum_{k=0}^n\binom{n}k.$$
If true, how do we prove this? REMARK. The RHS evaluates to $2^n$; I only want to stress parallelism between the integral and sum.
|
Following the hint by Noam D. Elkies, we just need to show that the remainder $$R_n:=\int_{-\infty}^0{n!\over\Gamma(n+1-x)\Gamma(1+x)}dx+ \int_n^{+\infty} {n!\over\Gamma(n+1-x)\Gamma(1+x)}dx $$ satisfies $$0\le R_n<1.$$ The integrand can be rewritten as
$${n!\over\Gamma(n+1-x)\Gamma(1+x)}={n! \over (n-x)(n-1-x)\dots(1-x)\Gamma(1-x)x\Gamma(x)}={n!\over \pi}{ \sin \pi x \over (n-x)(n-1-x)\dots(1-x)x}.$$
With a linear change of variables it is easy to see that the two integrals above coincide, so that the remainder takes the form
$$R_n= {2n!\over\pi}\int_{0}^{+\infty} {{ \sin \pi x \over x(x+1)\dots(x+n)}}\, dx $$ so in particular $R_0=1$. If we further write the integral as a Leibniz alternating sum of the integrals over unit intervals, we find
$$ R_n= {2n!\over\pi}\sum_{k=0}^{\infty}\; (-1)^k\int_{0}^{1} {{ \sin \pi x \over (x+k)\dots(x+k+n)}}\, dx$$
$$ ={2n!\over\pi}\int_{0}^{1}\sin \pi x\bigg[ \sum_{k=0}^{\infty}\; {{ 1 \over (x+2k)\dots(x+2k+n)}}-{{ 1 \over (x+2k+1)\dots(x+2k+1+n)}}\bigg]\, dx $$
$$ ={2\over\pi}\int_{0}^{1}\sin \pi x \bigg[\sum_{k=0}^{\infty}\; {{ (n+1)! \over (x+2k)\dots(x+2k+1+n)}} \bigg]\, dx. $$
This way it is apparent that $R_n$ is positive and strictly decreasing w.r.t. $n$, and we conclude that for $n\ge1$ $$0< R_n< R_0=1.$$
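A numerical confirmation of the resulting identity $\lceil\int_0^n\binom nx dx\rceil=2^n$ (my own snippet; the integrand is smooth on $[0,n]$, so composite Simpson is more than accurate enough to resolve $R_n\in(0,1)$):

```python
import math

def integrand(n, x):
    # the "continuous binomial coefficient" C(n, x) = n!/(Gamma(n+1-x) Gamma(1+x))
    return math.factorial(n) / (math.gamma(n + 1 - x) * math.gamma(1 + x))

def simpson(f, a, b, m=2000):
    # composite Simpson rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

for n in range(1, 9):
    I = simpson(lambda x: integrand(n, x), 0.0, n)
    assert math.ceil(I) == 2 ** n
```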
|
{
"source": [
"https://mathoverflow.net/questions/254881",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
255,098 |
Is there a known theorem $T$ in $ZF+DC$ (or $ZF$ or $ZFC$) such that the only proof we know of $T$ is by using the LEM applied to $A$ ( "$A$ or not $A$" ), where $A$ is independent of $ZF+DC$ ?
|
Here is an example: It is provable from $\sf ZF$ that there exist four infinite cardinals, $\frak p,q,r,s$ with $\frak p<q, r<s$ such that $\frak p^r=q^s$. (Here cardinals do not mean just finite ordinals and $\aleph$ numbers.) You can find the proof here. The proof using the axiom of choice is easy, and the proof not using the axiom of choice begins by using the fact that there is a set which cannot be well-ordered in order to construct the example. Of course, the axiom of choice is independent of $\sf ZF$. I also don't know an explicit proof of this theorem (in fact, before seeing this in a math.SE question, I didn't know whether anyone had proven this outside of a $\sf ZFC$ context either).
|
{
"source": [
"https://mathoverflow.net/questions/255098",
"https://mathoverflow.net",
"https://mathoverflow.net/users/100552/"
]
}
|
255,503 |
The proof by Cauchy induction of the arithmetic/geometric-mean inequality is well known. I am looking for a further theorem whose proof is much neater by this method than otherwise.
|
A nice proof by Cauchy induction can be given for the identity
$$ \|A^n\|=\|A\|^n, $$
which holds for a bounded, self-adjoint operator $A:H\to H$ on a real Hilbert space $(H,\langle\cdot,\cdot\rangle)$. Here $\|\cdot\|$ denotes the operator norm. Indeed, the inequality $\|A^n\|\le\|A\|^n$ is trivial by submultiplicativity of the operator norm. Let us turn to the converse inequality: if you know that $\|A^n\|\ge\|A\|^n$, then
$$ \|A^{n-1}\|\|A\|\ge\|A^n\|\ge\|A\|^n, $$
so the same holds with $n-1$ in place of $n$. Thus it suffices to show that, whenever it holds for $n$, it holds also for $2n$:
$$ \|A^{2n}\|=\sup_{\|x\|\le 1}\|A^{2n}x\|\ge\sup_{\|x\|\le 1}\langle A^{2n}x,x\rangle=\sup_{\|x\|\le 1}\langle A^nx,A^nx\rangle=\|A^n\|^2\ge\|A\|^{2n}. $$ This last line used Cauchy induction (i.e. the idea to prove $n\Rightarrow 2n$ rather than, e.g., $n\Rightarrow n+1$) in an essential way!
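A finite-dimensional numerical illustration (mine, not part of the proof): for a real symmetric matrix, "self-adjoint" just means symmetric, and the operator norm is the largest singular value, so the identity can be checked directly; a non-normal matrix shows that self-adjointness matters.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric, hence self-adjoint
n = 5
lhs = np.linalg.norm(np.linalg.matrix_power(A, n), 2)  # ||A^n|| (spectral norm)
rhs = np.linalg.norm(A, 2) ** n                        # ||A||^n
assert abs(lhs - rhs) < 1e-8 * rhs

# Self-adjointness matters: for the nilpotent N below, ||N|| = 1
# while ||N^2|| = 0, so the identity fails without it.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert abs(np.linalg.norm(N, 2) - 1.0) < 1e-12
assert np.linalg.norm(N @ N, 2) == 0.0
```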
|
{
"source": [
"https://mathoverflow.net/questions/255503",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7458/"
]
}
|
255,722 |
I've been learning Arakelov geometry on surfaces for a while. Formally I've understood how things work, but I'm still missing a big picture. Summary: Let $X$ be an arithmetic surface over $\operatorname{Spec } O_K$ where $K$ is a number field (we put on $X$ the good properties: regular, projective,...). Moreover suppose that $\{X_\sigma\}$ are the "archimedean fibers" of $X$, where each $\sigma$ is a field embedding $\sigma:K\to\mathbb C$ (up to conjugation). Each $X_\sigma$ is a smooth Riemann surface. We have thus managed to "compactify" our arithmetic surface $X$, so now we want a reasonable intersection theory. An Arakelov divisor $\widehat D$ is a formal sum
$$\widehat D=D+\sum_\sigma g_\sigma\sigma$$
where $D$ is a usual divisor of $X$ and $g_\sigma$ is a Green function on $X_\sigma$. Question: Why on the archimedean fibers $X_\sigma$ do we need Green functions? A Green function on a Riemann surface is simply a function $g:X_\sigma\to\mathbb R$ which is $C^\infty$ on all but finitely many points and at the "non-smooth" points is locally represented by
$$a\log|z|^2+\text{smooth function}$$
in other words a Green function is a decent real-valued smooth function which "explodes at infinity" at a finite set of points. Simply I don't understand why in order to define a reasonable intersection theory we need an object with such properties. The key point should be that Green functions satisfy a Poincaré-Lelong formula which can be expressed in terms of currents as:
$$dd^c[g]=[\operatorname{div}^G(g)]+[dd^c(g)]$$
but still I don't get the meaning of this formula. Somewhere I've read something like: Green functions are useful in order to "measure distances" on a Riemann surface. (This is not a quote, but simply what I remember). I really don't have any idea on the meaning of this sentence. Addendum: Moreover if one wants to calculate the intersection between two Arakelov divisors, at the archimedean places one has to take the $\ast$-product between Green functions. I don't understand the geometry of this product, probably because I don't understand well the geometry behind distributions and currents on Riemann surfaces.
|
The Green's function is used not to measure distances in the surface but to measure distances in the line bundle. A Green's function on $X_{\mathbb C}$ that blows up at $D$ can be used to measure sections of $\mathcal O(D)$. Indeed, if we represent a section of $\mathcal O(D)$ as a meromorphic function $f$ on $X_{\mathbb C}$ with poles at the points of $D$, and $g$ is a Green's function on $X_{\mathbb C}$ with poles at $D$, then $|f(x)|^2 e^{- |g(x)|}$ is a smooth nonnegative function that vanishes only where $f$ vanishes as a section of $\mathcal O(D)$ - i.e. where it lies in the image of the natural map $\mathcal O(D-P) \to \mathcal O(D)$. In fact it is easy to see that this is a Hermitian form on the space of sections. So the Green's function gives a metric on the line bundle. We need a metric at $\infty$ in an Arakelov line bundle because we already have a metric (in the form of a $p$-adic valuation) at every other place - any section of $\mathcal O(D)$ on $X_{\mathbb Q_p}$ has a well-defined $p$-adic valuation at each point $x\in X_{\mathbb Q_p}$, where a local generator of the line bundle at the reduction mod $p$ $\overline{x}\in X_{\mathbb F_p}$ is given valuation $0$ - because such a generator is also a generator of the fiber of $\mathcal O(D)$ at $x$, this determines the valuation on every section.
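To make this concrete, here is the standard example (my addition, not from the answer; since $g\ge0$ here, $e^{-|g|}=e^{-g}$): on $X_{\mathbb C}=\mathbb P^1$, take $D=(\infty)$ and $g(z)=\log(1+|z|^2)$.

```latex
% In the chart w = 1/z at infinity:
g \;=\; \log\Bigl(1+\tfrac{1}{|w|^{2}}\Bigr)
  \;=\; -\log|w|^{2} \;+\; \log\bigl(1+|w|^{2}\bigr),
% so g has the required log|w|^2 singularity at D = (infinity).
% A global section of O(D) is f(z) = az + b, and
\lVert f\rVert^{2}(z) \;=\; |f(z)|^{2}\, e^{-g(z)}
  \;=\; \frac{|az+b|^{2}}{1+|z|^{2}},
% which is smooth and bounded on all of P^1: the Fubini--Study
% metric on O(1).
```

So the Green's function here recovers exactly the usual Fubini-Study way of measuring sections of $\mathcal O(1)$.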
|
{
"source": [
"https://mathoverflow.net/questions/255722",
"https://mathoverflow.net",
"https://mathoverflow.net/users/47136/"
]
}
|
255,820 |
Note that "a working mathematician" is probably not the best choice of words; it's supposed to mean "someone who needs the theory for applications rather than for its own sake". Think about it as a homage to Mac Lane's classic. I'm in no way implying that set theory is not "real mathematics" (whatever that expression might mean, though I've heard some people say it, and I don't respect this point of view that something abstract is "not real mathematics") and I have a great respect for that field of study. However, I'm personally not interested in set theory and its logic for their own sake (as of now). For a while I have treated them naively, and it was fine, as I hadn't needed anything beyond the introductory chapters of comprehensive books on algebra, analysis or topology. But recently I decided to understand the foundations of category theory based on Grothendieck universes and inaccessible cardinals. So, I went to read some sources on set theory. And I was really confused at first about such definitions as that of a "transitive set", which implicitly assume that all elements of all sets are sets. Then I read more about it and discovered that in $\mathrm{ZFC}$ everything is a set! It seemed absurd to me at first. After consulting several sources, I realized that ZFC was meant to be a (or even the) foundation for mathematics, rather than simply a theory which gives us a framework to work with sets, so at that time people thought that every mathematical object can be defined in terms of sets. It didn't seem as unreasonable as before anymore, but still... It still doesn't feel right for me. I understand that at the time when Zermelo and Fraenkel were developing axiomatic set theory, it was reasonable to think that every conceivable mathematical object is a set. But it was a long time ago; is it still this way - especially concerning category theory? If we work in $\mathrm{ZFC}$ (+ $\mathrm{UA}$) we have to assume that every object in any category is actually a set.
And the same should go for morphisms. Because, given a category $\mathrm{C}$, $\operatorname{ob} \mathrm{C}$ and $\operatorname{mor} \mathrm{C}$ are sets, so their elements, namely, objects and morphisms of $\mathrm{C}$, should also be sets. The question is: is the assumption that there are no urelements, that is, that every conceivable mathematical object can be modeled in terms of sets, reasonable, as of the second decade of the 21st century? Is there an area of mathematics where we need urelements? Can this way of thinking be a burden in some mathematical fields? (Actually, it's three questions, sorry. But they are related.) P.S. I hope this question is not too "elementary" for this site. But as I understand there are quite a lot of working mathematicians who don't think much about foundations. So, even if this question is not useful for them, it can at least be interesting for them.
|
Set theory provides a foundation for mathematics in roughly the same way that Turing machines provide a foundation for computer science. A computer program written in Java or assembly language isn't actually a Turing machine, and there are lots of good reasons not to do real programming in Turing machines - real languages have all sorts of useful higher order concepts. But Turing machines are a useful foundation because everything else can be encoded by Turing machines, and because it's much easier to study Turing machines than it is to study a more complicated higher order language. Similarly, the point isn't that every mathematical object is a set, the point is that every mathematical object can be encoded by a set. It doesn't represent higher level ideas, like the fact that mathematical objects usually have types (as one of my colleagues likes to point out, the question "is the integer 6 an abelian group" is technically a reasonable one in set theory, but not in mathematics). But it's a (relatively) simple system to study, and just about everything we want to do can be encoded in set theory. To answer your specific questions: yes, it's still true that every mathematical object can be encoded as a set. Because sets are very flexible, there's no reason to think this will not continue to be true. There is no current field of mathematics in which urelements are essential, and because things one would do with urelements can instead be encoded with sets, there is unlikely to be such a field. ZFC does impose some limitations on category theory, because it doesn't allow objects on the same scale as the universe of sets. (For instance the category of categories is awkward to consider within ZFC, because the objects of this category cannot form a set.) These are reflected in the discussions of "small" and "locally small" categories. These issues can be worked around in mild extensions of ZFC by using things like Grothendieck universes.
(Note that this is a feature of ZFC, not of set theoretic foundations in general. Quine's New Foundations allows certain self-containing sets.) This way of thinking can't really be a burden because ZFC doesn't impose a way of thinking. The fact that things can be encoded as sets doesn't, and shouldn't, mean that we always think of them that way. It's perfectly consistent with having a set theoretic foundation to work with things like urelements, or to think about groups and categories without thinking of them as sets. (Worrying about things like self-containing categories can be a burden, but it's a necessary one given the history of paradoxical objects containing themselves.)
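The "every mathematical object can be encoded by a set" point is easy to make concrete. Here is a toy sketch (pure Python, helper name mine) of the von Neumann encoding of the natural numbers, $0 = \{\}$ and $n+1 = n \cup \{n\}$, with `frozenset` standing in for hereditarily finite sets:

```python
# Von Neumann encoding of the naturals as pure sets:
#   0 = {}  and  n + 1 = n ∪ {n}.
# frozenset plays the role of a hereditarily finite set.
def encode(n):
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

assert encode(0) == frozenset()
assert encode(1) == frozenset([frozenset()])
# the encoding of n has exactly n elements, namely 0, 1, ..., n-1
assert len(encode(5)) == 5
assert encode(4) in encode(5)
```

In the same spirit one encodes ordered pairs as $\{\{a\},\{a,b\}\}$, functions as sets of pairs, and so on; as the answer says, the encoding existing does not force anyone to think of the objects this way.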
|
{
"source": [
"https://mathoverflow.net/questions/255820",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83143/"
]
}
|
255,821 |
Excuse me if my question is stupid. I'm seeking references on the dependence of a (linear) optimization problem on its (linear) constraints. Namely, consider the following optimization problem: $$P(\alpha)~~:=~~\sup_{p\in\mathcal A(\alpha)}~c^T p,$$ with $$\mathcal A(\alpha)~~:=~~\big\{p\in\mathbb R^n:~ Ap~=~\alpha\big\},$$ where $\alpha\in\mathbb R^m$, $c\in\mathbb R^n$ and $A\in\mathbb R^{m\times n}$ are given. I'm looking for references on the map $\alpha\longmapsto P(\alpha)$. Any suggestions and comments are highly appreciated. Thanks a lot! PS: Thanks for the reply. Indeed, here we suppose of course that $m>n$ and actually $p_i\ge 0$ and $\sum_{i=1}^np_i=1$. So if the set $\mathcal{A}(\alpha)\neq \emptyset$, then the maximizer always exists.
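Not a reference, but for orientation: for a feasible, bounded LP the value function $\alpha \mapsto P(\alpha)$ is concave and piecewise linear in $\alpha$ (this is standard parametric-programming / sensitivity-analysis material). A brute-force sketch in Python; the data $A$, $c$ are made up, and the vertex enumeration below is only valid for this tiny size:

```python
# Sample P(alpha) = sup{ c^T p : A p = alpha, p >= 0 } for a toy LP by
# enumerating basic feasible solutions (supports of size m = 2), then
# check midpoint concavity of the value function.
from itertools import combinations

A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 1.0]]     # m = 2 constraints, n = 3 variables (made up)
c = [1.0, 1.0, 2.0]

def P(alpha):
    best = None
    for j, k in combinations(range(3), 2):
        a, b = A[0][j], A[0][k]
        d, e = A[1][j], A[1][k]
        det = a * e - b * d
        if abs(det) < 1e-12:
            continue
        x = (alpha[0] * e - b * alpha[1]) / det   # Cramer's rule
        y = (a * alpha[1] - alpha[0] * d) / det
        if x >= -1e-9 and y >= -1e-9:             # feasibility p >= 0
            val = c[j] * x + c[k] * y
            best = val if best is None else max(best, val)
    return best                                    # None: A(alpha) empty

u, v = (1.0, 1.0), (3.0, 0.0)
mid = (2.0, 0.5)
assert P(mid) + 1e-9 >= 0.5 * (P(u) + P(v))   # concavity at a midpoint
```

Keywords that should lead to references: "parametric linear programming", "sensitivity analysis", "value function of a linear program".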
|
|
{
"source": [
"https://mathoverflow.net/questions/255821",
"https://mathoverflow.net",
"https://mathoverflow.net/users/37850/"
]
}
|
256,268 |
I have heard the claim in the title for a long time, but cannot find a precise reference for it. What is a reference, with proof, for this claim? Thanks for the help.
To be more precise: is there a canonical topological structure on the space $\Omega$ of all compact $n$-dimensional smooth manifolds such that for any compact smooth $n$-dimensional manifold $M^n$ and any neighborhood $U$ of $M^n$ in $\Omega$, there is an $n$-dimensional smooth manifold $N^n\in U$ such that $N^n$ admits a Riemannian metric with curvature equal to $-1$? Everything in my mind is just Riemannian hyperbolic; no complex structure is involved.
|
The quotes are from Thurston's survey paper Three dimensional manifolds, kleinian groups and hyperbolic geometry, page 362: 2.6. THEOREM [Th 1]. Suppose $L \subset M^3$ is a link such that $M - L$ has a hyperbolic structure. Then most manifolds obtained from $M$ by Dehn surgery along $L$ have hyperbolic structures. In fact, if we exclude, for each component of $L$, a finite set of choices of identification maps (up to the appropriate equivalence relation as mentioned above), all the remaining Dehn surgeries yield hyperbolic manifolds. Every closed 3-manifold is obtained from the three-sphere $S^3$ by Dehn surgery along some link whose complement is hyperbolic, so in some sense Theorem 2.6 says that most 3-manifolds are hyperbolic.
|
{
"source": [
"https://mathoverflow.net/questions/256268",
"https://mathoverflow.net",
"https://mathoverflow.net/users/100486/"
]
}
|
256,342 |
Endow $S^7$ with the structure of an $H$-space induced from multiplication in the octonions $\mathbb{O}=\mathbb{R}^8$. It is not associative, as octonion multiplication is not associative. Is it associative up to homotopy, i.e. are the maps $m(m(-,-),-):S^7\times S^7\times S^7\to S^7$ and $m(-,m(-,-)):S^7\times S^7\times S^7\to S^7$ homotopic?
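For what it's worth, the failure of strict associativity is easy to witness numerically. A small sketch (helper names and the particular Cayley-Dickson convention $(a,b)(c,d) = (ac - \bar d b,\; da + b\bar c)$ are mine):

```python
# Cayley-Dickson doubling over the reals; elements are coordinate tuples
# of length 1, 2, 4, 8 (reals, complexes, quaternions, octonions).
def conj(z):
    return (z[0],) + tuple(-t for t in z[1:])

def mult(x, y):
    n = len(x)
    if n == 1:
        return (x[0] * y[0],)
    h = n // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    add = lambda u, v: tuple(s + t for s, t in zip(u, v))
    sub = lambda u, v: tuple(s - t for s, t in zip(u, v))
    # (a, b)(c, d) = (ac - conj(d) b, da + b conj(c))
    return sub(mult(a, c), mult(conj(d), b)) + add(mult(d, a), mult(b, conj(c)))

def unit(n, i):
    return tuple(1.0 if j == i else 0.0 for j in range(n))

quat = [unit(4, i) for i in range(4)]
octo = [unit(8, i) for i in range(8)]
# quaternions are associative on all basis triples ...
assert all(mult(mult(x, y), z) == mult(x, mult(y, z))
           for x in quat for y in quat for z in quat)
# ... but octonions are not
assert any(mult(mult(x, y), z) != mult(x, mult(y, z))
           for x in octo for y in octo for z in octo)
```

Of course this only witnesses the failure of strict associativity; associativity up to homotopy is the actual question.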
|
It is not. See Theorem 1.4 of this paper by I.M. James (Trans. AMS 84 (1957), 545-558). In particular, there exists no homotopy associative multiplication on $S^n$ unless $n=1$ or $n=3$.
|
{
"source": [
"https://mathoverflow.net/questions/256342",
"https://mathoverflow.net",
"https://mathoverflow.net/users/39304/"
]
}
|
256,469 |
This question is probably really naive. And, I hope the title doesn't come off as too combative. I think that topoi of $\mathbf{Set}$-valued sheaves provide an excellent motivation for higher-order intuitionistic logic, because most such topoi aren't boolean. But, I have an extremely hard time accepting that $\mathbf{Set}$ is intuitionistic. Let me elaborate on this a bit. Suppose our intuition for the phrase "subset of $X$" comes from the idea of having an effective total function $X \rightarrow \{0,1\}$ that returns an answer in a finite amount of time. In this case, the subsets of $X$ ought to form a Boolean algebra. Suppose on the other hand that our intuition for the phrase "subset of $X$" comes from the idea of having an effective partial function $X \rightarrow 1$, such that the program for $f(x)$ either halts in a finite amount of time, or else runs forever. Then, with a bit of thought, we see that subsets of $X$ should be closed under union and intersection. But also, there's really no way to take the complement of a subset in general. So, under this viewpoint, the subsets of $X$ form a bounded distributive lattice. So, I'm willing to believe that $\mathbf{Set}$ is a Boolean topos (the classical viewpoint), and I'm also willing to believe that it's much, much less than that (the "algorithmic" viewpoint), because I think these are both well-motivated and reasonable positions. The thing about trying to apply intuitionistic logic to $\mathbf{Set}$ is that it seems to straddle a very strange middle ground between these two extremes. It says: yes, you can take the complement of any subset. But no, it may not be the case that $A^c \cup A = X$ (where $A$ is a subset of $X$, and $A^c := X \setminus A.$). And, it may not be the case that $(A^c)^c = A.$ But nonetheless, you can be assured that $((A^c)^c)^c = A^c$. Huh? Look, I realize this makes perfect sense from a BHK perspective; for instance, the complement of the complement of $(0,1) \cup (1,2)$ is $(0,2)$. 
So, the double negation law fails. But, that's in a topological space! I mean, why would $\mathbf{Set}$ be subject to this kind of reasoning?
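The exact pattern being puzzled over ($A^{cc}$ may strictly grow, yet $A^{ccc} = A^c$ always) shows up already in the smallest topological model, the Sierpinski space. A toy computation (this illustrates the Heyting-algebra behaviour of open sets, not a claim about $\mathbf{Set}$ itself):

```python
# Open sets of the Sierpinski space {0,1} (opens: {}, {0}, {0,1}) form a
# Heyting algebra; the pseudo-complement of an open set is the interior
# of its set-theoretic complement ("exterior").
X = frozenset({0, 1})
opens = [frozenset(), frozenset({0}), X]

def interior(s):
    return max((u for u in opens if u <= s), key=len)

def pneg(a):                       # Heyting "complement"
    return interior(X - a)

a = frozenset({0})
assert pneg(a) == frozenset()           # exterior of {0} is empty
assert pneg(pneg(a)) == X               # double negation strictly grows a ...
assert pneg(pneg(pneg(a))) == pneg(a)   # ... but triple = single
```

Replacing opens of a space by subobjects in a topos gives the same algebra of "complements", which is why the law $((A^c)^c)^c = A^c$ is forced while $(A^c)^c = A$ is not.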
|
You wrote: Suppose our intuition for the phrase "subset of $X$" comes from the idea of having an effective total function $X \rightarrow \{0,1\}$ that returns an answer in a finite amount of time. In this case, the subsets of $X$ ought to form a Boolean algebra. Unfortunately, this is not a workable intuition at all. If you insist that all subsets be classified by "effective total functions $X \to \{0,1\}$" then the subsets of $\mathbb{N}$ are not even closed under countable unions and countable intersections. Or to put it another way, you are not allowed to say "for all $n \in \mathbb{N}$" and "there exists $n \in \mathbb{N}$" in an unrestricted way. How then are we supposed to do any actual mathematics? For instance, number theorists would not even be allowed to state "For every $n \in \mathbb{N}$ there is $k > n$ such that both $k$ and $k+2$ are prime." without proving that there is a decision procedure which, on input $n$, decides whether there is a pair of twin primes above it. So, intuitionistic mathematicians never proposed such a view. The idea that subsets be classified by termination of a partial effective procedure is equally unworkable; you just need to go one level up. It allows you to write statements of the form $\exists n \in \mathbb{N} . \phi(n)$ for decidable $\phi$, but not $\forall n \in \mathbb{N} . \phi(n)$. General subsets are not classified by any kind of decision procedures, or semidecision procedures, or even procedures with access to oracles. You can always beat those with diagonalization to create more subsets. In computational terms, subsets should be thought of as being classified by completely arbitrary computational evidence. Then, among all the subsets some are classified by effective decision procedures (the decidable subsets), and some are classified by entirely uninformative computational evidence (those are the complemented subsets which are equal to their double complements).
Now, you ask why anybody would think of sets as intuitionistic objects. First of all, since you already subscribe to the idea that intuitionistic toposes make sense, you might be interested to know that every such topos begets an intuitionistic set theory, and vice versa; see Relating first-order set theories and elementary toposes by Awodey et al. In this sense, the distinction between intuitionistic set theories and intuitionistic toposes (and intuitionistic type theories) is not all that important. It is known that we may often (but not always) pass between them as desired. There are mathematical reasons for considering intuitionistic set theories, rather than toposes. For example, in theoretical computer science people tried to figure out how to do synthetic domain theory in toposes, and there were always some obstacles. In the end Alex Simpson switched to intuitionistic set theories to solve some of the problems. Speaking a bit vaguely, he needed replacement and a non-trivial object $P$, or a set, with magical properties, such as $P$ containing $P^P$ as a retract. In classical set theory this is not possible for cardinality reasons, but in intuitionistic set theory one can have such sets (and replacement). In topos theory there is no replacement. Apart from the fact that there are good mathematical reasons for using intuitionistic set theories, one can also ask for philosophical or foundational reasons. I can only relate a personal view here. After having worked for some time with intuitionistic type theory and the internal languages of realizability toposes, I have definitely developed the idea that "bare sets", "bare types" and "bare objects" are not really just "bare collections of elements". They behave much more like topological spaces. The "intrinsic" topological structure of mathematical objects cannot be separated from the objects themselves.
This point has been made by many people, but perhaps the most illuminating is the work of Martin Escardó, who puts the topological way of thinking to work to do surprising and interesting things in functional programming and type theory. Under this view the classical sets are just the "boring" intuitionistic sets. They are like the indiscrete spaces, and who would want to study those? Once we subscribe to the idea that even the "bare sets" or "bare objects" may carry intrinsic structure, i.e., they are not just "Cantor's dust", then the double complement not being equal to the original is the norm rather than the exception. To give a couple of examples: Suppose we work in presheaves on a category with two objects and two parallel arrows, which is just the topos of directed graphs. The double complement of a subgraph is not the original graph, but rather the full graph induced on the vertices of the original subgraph. Suppose we work in a realizability topos, where an element of a subset carries computational evidence expressing why it is in the subset. Then an element $g$ of the subset $S = \{f : [0,1] \to \mathbb{R} \mid \exists x \in [0,1] . f(x) = 0\}$ of $[0,1] \to \mathbb{R}$ carries the computational evidence $\langle p,q\rangle$ where $p$ is a program for computing $g$ and $q$ is a program for calculating a real $a \in [0,1]$ such that $g(a) = 0$. An element of the double complement carries only $p$. Since $q$ cannot be algorithmically recovered just from $p$, the double complement is therefore not equal to the original. We see here that double complement acts as a sort of regularization operation. In fact, that is precisely what it is in the topological interpretation of intuitionistic connectives: the complement of an open set $U$ in the topology is its exterior, and the double exterior of $U$ is its regularization.
By observing that complements already are "regular" (the complement of a subgraph is a full subgraph, the complement of a subset has trivial computational evidence, the exterior of an open set already is regular) we come to expect that triple complements are the same thing as complements.
|
{
"source": [
"https://mathoverflow.net/questions/256469",
"https://mathoverflow.net",
"https://mathoverflow.net/users/26080/"
]
}
|
256,623 |
I have a normed space $(E,||\cdot||)$ which is homeomorphic (as a topological space) to a Banach space $F$. Does this imply that $(E,||\cdot||)$ is also a Banach space? I think I read that something like this is true if $E$ (and therefore also $F$) is separable, but I am not totally sure. So this special case would also be interesting.
|
Let $\bar{E}$ be the norm completion of $E$, which is a Banach space. Then we can consider $E$ as a dense linear subspace of $\bar{E}$, where the subspace topology and the norm topology on $E$ coincide. In particular, since this topology is homeomorphic to $F$, it is completely metrizable, so $E$ is a $G_\delta$ in $\bar{E}$ (Kechris, Classical Descriptive Set Theory , Theorem 3.11). As a dense $G_\delta$, in particular $E$ is comeager in $\bar{E}$. If $x \in \bar{E} \setminus E$, then $E, E+x$ are disjoint comeager subsets of $\bar{E}$, which is absurd by the Baire category theorem. So $E = \bar{E}$ and thus the norm on $E$ is complete. I think I did not use separability anywhere.
|
{
"source": [
"https://mathoverflow.net/questions/256623",
"https://mathoverflow.net",
"https://mathoverflow.net/users/101778/"
]
}
|
257,214 |
Write an integer partition $\lambda\vdash n$ in two different ways: (1) $\lambda=\lambda_1\geq\lambda_2\geq\lambda_3\cdots\geq\lambda_k\geq1$ (2) $\lambda=1^{m_1}2^{m_2}3^{m_3}\cdots n^{m_n}$ for some $m_i\geq0$. Denote the length of a partition, compatible with (1) and (2), by $m_1+m_2+\cdots+m_n=k$. QUESTION. According to enough experimental evidence,
$$\sum_{\lambda\vdash n}(-1)^{n-k}\frac{k!}{m_1!\cdots m_n!}\prod_{i=1}^k\binom{n+1}{\lambda_i}=\binom{2n}n.$$
Is it true? If so, any proof?
|
Multiply the whole equality by $(-1)^n$. First of all, notice that $\binom{k}{m_1,\dots,m_n}=\frac{k!}{m_1!\cdots m_n!}$ is the number of ways to permute the numbers $\lambda_1,\dots,\lambda_k$, so the left-hand side equals
$$
L=\sum_{\lambda_i\geq 1, \; \lambda_1+\dots+\lambda_k=n}
(-1)^k\prod_{i=1}^k{n+1\choose \lambda_i}.
$$
Denote
$$
S(x)=\sum_{j=1}^{n+1} {n+1\choose j}x^j=(1+x)^{n+1}-1.
$$
Then, due to the formula above, $L$ is the coefficient of $x^n$ in
$$
1-S(x)+S(x)^2-\dots=\frac1{1+S(x)}=\frac1{(1+x)^{n+1}}
=\sum_{i=0}^\infty(-1)^i{n+i\choose i}x^i,
$$
thus $L=(-1)^n{2n\choose n}$, as desired.
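For readers who want to see the identity in action before reading the proof, here is a brute-force numerical check in pure Python (helper names mine):

```python
# Check: sum over partitions lambda |- n of
#   (-1)^(n-k) * k!/(m_1! ... m_n!) * prod_i C(n+1, lambda_i)
# equals C(2n, n), for small n.
from math import comb, factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def lhs(n):
    total = 0
    for lam in partitions(n):
        k = len(lam)
        perms = factorial(k)                 # k! / (m_1! ... m_n!)
        for part in set(lam):
            perms //= factorial(lam.count(part))
        prod = 1
        for part in lam:
            prod *= comb(n + 1, part)
        total += (-1) ** (n - k) * perms * prod
    return total

assert all(lhs(n) == comb(2 * n, n) for n in range(1, 9))
```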
|
{
"source": [
"https://mathoverflow.net/questions/257214",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
257,310 |
Hope that the following soft question is still appropriate on MathOverflow. I was wondering if there is any communal protocol or etiquette with regard to the resubmission of a research paper after it has been superseded by another as yet unpublished paper. Here's the situation in detail. Suppose that a paper of yours, call it paper A, regarding the existence of some mathematical object foobar has been rejected by some journal X. Before you received the news of rejection of paper A by journal X, you obtained bounds on the complexity of foobar and submitted these complexity estimates as paper B to journal Y. Given that paper B is still pending the refereeing process, would you resubmit paper A to another journal X'? Is it unethical to do so? Of course, one question is what is the worth of paper A, given that it has been superseded by paper B. One possible factor to take into consideration is that paper B is substantially longer than paper A (in my case paper B about 40 pp. and paper A is about 20 pp.). There may be readers interested in the existence argument of paper A without bothering about the complexity estimates in paper B.
|
This is a tricky situation, though just one spin-off of the current absurd state of academic publishing, and particularly the preposterously slow and uneven peer review process in math. My personal feeling is that you should try about as hard to get paper A published as you would any other paper (though strategically, I would probably submit to a somewhat less selective journal). It does feel a little silly; I am actually now in the doubled version of this situation, where I wrote a paper, wrote another that superseded it, and then wrote another that superseded that one, all of which are under review simultaneously at the moment. The process in mathematics has gotten a bit silly just generally, but any reason you had to publish paper A before (whether it is padding your CV, making sure that it is regarded as a reliable part of the peer-reviewed literature, or hoping to get useful feedback from a referee) is equally in force now, so I don't see what has really changed.
|
{
"source": [
"https://mathoverflow.net/questions/257310",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
257,416 |
From time to time, I run into the finite product $\prod_{j=1}^n(1+q^j)$. And, the more it happens, the more fascinated I've become. So, herein, I wish to get help in collecting such results. To give some perspective on what I look for, check out the examples below. First, some nomenclature: $(q)_k=(1-q)(1-q^2)\cdots(1-q^k)$ and $\binom{n}k_q=\frac{(q)_n}{(q)_k(q)_{n-k}}$. (0) It's almost silly, but the sum of the elementary symmetric functions of the specialization $\pmb{q}=(q,q^2,\dots,q^n)$:
$$e_0(\pmb{q})+e_1(\pmb{q})+\cdots+e_n(\pmb{q})=\prod_{j=1}^n(1+q^j).$$ (1) The classical $q$-binomial theorem, which results from counting restricted distinct partitions or number of weighted tilings:
$$\sum_{k=0}^nq^{\binom{k+1}2}\binom{n}k_q=\prod_{j=1}^n(1+q^j).$$ (2) I can't remember where I saw this (do you?) but
$$\sum_{k=0}^nq^k\binom{n}k_{q^2}=\prod_{j=1}^n(1+q^j).$$ (3) The $H$-polynomial of a symplectic monoid $MSp_n$ ( see this paper, page 13 ):
$$\sum_{k=0}^n(-1)^kq^{k^2}\binom{n}k_{q^2}^2\prod_{i=1}^k(1-q^{2i})\prod_{j=1}^{n-k}(1+q^j)^2=\prod_{j=1}^{2n}(1+q^j),$$
although the authors did not seem to be aware of the RHS. QUESTION. Can you provide such formulas (in any field) with the same RHS (always a finite product) as in above, together with resources or references? Thank you.
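As a sanity check, identities such as (1) and (2) can be confirmed exactly for each fixed $n$: both sides are polynomials in $q$ of degree $\binom{n+1}2$, so agreement at more rational points than the degree proves the identity. A sketch in Python (exact arithmetic via `fractions`; `qbinom` is my own helper, not a library function):

```python
# Verify identities (1) and (2) for small n by exact evaluation at many
# rational points q (more points than the degree n(n+1)/2 of either side).
from fractions import Fraction

def qbinom(n, k, q):
    """Gaussian binomial [n choose k]_q at a rational q (not a root of unity)."""
    num = den = Fraction(1)
    for i in range(1, k + 1):
        num *= 1 - q ** (n - k + i)
        den *= 1 - q ** i
    return num / den

def both_identities_hold(n, q):
    rhs = Fraction(1)
    for j in range(1, n + 1):
        rhs *= 1 + q ** j
    lhs1 = sum(q ** (k * (k + 1) // 2) * qbinom(n, k, q) for k in range(n + 1))
    lhs2 = sum(q ** k * qbinom(n, k, q * q) for k in range(n + 1))
    return lhs1 == rhs == lhs2

# q = p/3 for p = 4..39 gives 36 distinct points, more than the degree 21 at n = 6
assert all(both_identities_hold(n, Fraction(p, 3))
           for n in range(1, 7) for p in range(4, 40))
```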
|
Up to scaling, $\prod_{j=1}^n(1+q^j)$ is the principal specialization of the character of the spinor representation of $\mathfrak{so}(2n+1)$. This was first explicitly stated by J. W. B. Hughes, Lie algebraic proofs of some theorems on partitions, in Number Theory and Algebra, Academic Press, 1977, pp. 135--155.
|
{
"source": [
"https://mathoverflow.net/questions/257416",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
257,495 |
So I've been reading about derived categories recently (mostly via Hartshorne's Residues and Duality and some online notes), and while talking with some other people, I've realized that I'm finding it difficult to describe what a "triangle" is (as well as some other confusions, to be described below). Let $\mathcal{A}$ be an abelian category, and let $K(\mathcal{A})$ and $D(\mathcal{A})$ be its homotopy category and derived categories respectively. By definition, in either $K(\mathcal{A})$ or $D(\mathcal{A})$, (1) A triangle is a diagram $X\rightarrow Y\rightarrow Z\rightarrow X[1]$ which is isomorphic to a diagram of the form
$$X\stackrel{f}{\rightarrow}Y\rightarrow Cone(f)\rightarrow X[1]$$ This is the definition, though I don't really understand the motivation. Somewhat more helpful for me is the definition: (2) A cohomological functor from a triangulated category $\mathcal{C}$ to an abelian category $\mathcal{A}$ is an additive functor which takes triangles to long exact sequences. Since taking cohomology of a complex (in either $K(\mathcal{A})$ or $D(\mathcal{A})$) is a cohomological functor, this definition tells me that I should think of a triangle as being like "a short exact sequence" (in the sense that classically, taking cohomology of a short exact sequence results in a long exact sequence). This idea is also supported by the fact: (3) If $0\rightarrow X^\bullet\rightarrow Y^\bullet\rightarrow Z^\bullet\rightarrow 0$ is an exact sequence of chain complexes, then there is a natural map $Z^\bullet\rightarrow X[1]^\bullet$ in the derived category $D(\mathcal{A})$ making $X^\bullet\rightarrow Y^\bullet\rightarrow Z^\bullet\rightarrow X[1]^\bullet$ into a triangle (in $D(\mathcal{A})$). This leads to my first precise question: Can there exist triangles in $D(\mathcal{A})$ which don't come from exact sequences? If so, is there a characterization of them? Is (3) false in the homotopy category $K(\mathcal{A})$? (certainly the same proof doesn't work). I sort of expect the answers to the first and last questions above to both be "Yes", which makes the comparison between triangles and "exact sequences" a bit weird. Of course, $K(\mathcal{A})$ and $D(\mathcal{A})$ are almost never abelian categories, and so it's "weird" to talk about exact sequences there. I suppose at a fundamental level, I find the "homotopy category" somewhat mysterious. I don't completely understand its role in the construction of the derived category, since after all homotopy equivalences are quasi-isomorphisms. I also find it difficult to internalize this notion of "homotopic morphisms of chain complexes".
To me, it's just a "technical trick" which allows one to do all this magic with mapping cones which allows $K(\mathcal{A})$ to be a triangulated category, whereas the normal abelian category of chain complexes is not. I can prove things with it, but whenever I do, I sort of feel unsettled - as if I'm playing with something 'magical' that could, at any moment, turn on me unexpectedly. In addition to my specific questions above, I suppose I was hoping that someone would be able to articulate in a nice way how one should think of triangles, why this notion of a triangulated category is so successful, and hopefully alleviate some of my unsettlement regarding homotopy.
|
To answer your first precise questions: No, every distinguished triangle in $D(A)$ comes from a short exact sequence. For every distinguished triangle $X \to Y \to \mathrm{Cone}(f) \stackrel{+1}\to $ there is a short exact sequence
$$ 0 \to Y \to \mathrm{Cone}(f) \to X[1] \to 0$$
of complexes in $A$, and our distinguished triangle arises from this one by rotation. By the same argument, every distinguished triangle in $K(A)$ comes from a short exact sequence (at least up to a rotation). However, not every short exact sequence gives rise to a distinguished triangle in $K(A)$. If
$$ 0 \to X \stackrel f \to Y \to Z \to 0$$ is a short exact sequence of complexes in $A$, then there is a natural map $\mathrm{Cone}(f) \to Z$ which in general is only a quasi-isomorphism, not a homotopy equivalence. If for example the short exact sequence splits degree-wise, then it is always a homotopy equivalence, and we get a triangle in $K(A)$. Alright. About your question "Why $K(A)$?". You are right that homotopy equivalences are quasi-isomorphisms. So in principle one could construct $D(A)$ in "one step" by taking the category of complexes and inverting quasi-isomorphisms. But there are some technical reasons for preferring the construction via the category $K(A)$. First off, the category $K(A)$ is already triangulated, and this is easy to prove. This means that to construct $D(A)$ we are in the situation of Verdier localization : we have a triangulated category, and we localize it at the class of morphisms whose cone is in a specific thick triangulated subcategory. In particular $D(A)$ becomes triangulated. In general, localizations of categories can be complicated, and in the "one-step" construction it is not even obvious that $D(A)$ is a category, i.e. that the morphisms from one object to another form a set. To motivate why we should care about triangles at all, note that it doesn't make sense to talk about kernels or images of morphisms in $D(A)$, so that we can't talk about exactness. Given that we want to be able to do homological algebra we need some sort of substitute for this. Triangles, encoding short exact sequences, turn out to be enough to develop much of the theory, and a posteriori one could want to axiomatize the theory only in terms of triangles. One reason we could expect the notion of a "distinguished triangle" to really be intrinsic to $D(A)$ (even though the class of distinguished triangles need to be specified in the axioms for a triangulated category) is that any triangle
$$ X \to Y \to Z \stackrel{+1}\to$$
gives rise to a long sequence of abelian groups after applying $[W,-]$ for any object $W$; this long sequence will be a long exact sequence when the triangle is distinguished, for any $W$, and this is really a very special property! A remark is that nowadays people will tell you that "stable $\infty$-categories" are for all purposes better than triangulated categories, and in a stable $\infty$-category, the class of distinguished triangles does not need to be specified in advance, so to speak: equivalent stable $\infty$-categories will have the same distinguished triangles.
|
{
"source": [
"https://mathoverflow.net/questions/257495",
"https://mathoverflow.net",
"https://mathoverflow.net/users/15242/"
]
}
|
257,628 |
Many mathematicians know the Four Color Theorem and its history: there were two alleged proofs in 1879 and 1880, both of which stood unchallenged for 11 years before flaws were discovered. I am interested in more such examples, especially in former (famous or interesting) "theorems" for which no remedy has been found to the present day.
|
I just discovered that
Wikipedia maintains a page entitled "List of incomplete proofs."
Each of the more than $60$ entries is marked with a symbol indicating its status. The page also notes: "Several of the examples on the list were taken from answers to questions on the MathOverflow site."
|
{
"source": [
"https://mathoverflow.net/questions/257628",
"https://mathoverflow.net",
"https://mathoverflow.net/users/70146/"
]
}
|
257,659 |
According to Google Scholar original Gallager's article Low-density parity-check codes is cited more than 10000 times. It looks scary for non-experts. I suspect that the number of algorithms for constructing good sparse matrices for LDPC-codes is much less than 10000. Is there any survey which contains description of such algorithms? Basically I'm interested in explicit constructions based on number theory methods.
|
|
{
"source": [
"https://mathoverflow.net/questions/257659",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5712/"
]
}
|
257,885 |
How does one go from an understanding of basic algebraic topology (on the level of Allen Hatcher's Algebraic Topology and J.P. May's A Concise Course in Algebraic Topology) to understanding the paper of Hill, Hopkins, and Ravenel on the Kervaire invariant problem ? I have read that some understanding of chromatic homotopy theory and equivariant homotopy theory (neither of which I am familiar with, aside from their basic idea) is required, but it seems that there are still quite a few steps from basic algebraic topology to these two subjects and from these two subjects to the Hill-Hopkins-Ravenel paper. I do have some idea of a few topics beyond what is discussed in the books of Hatcher and May, such as model categories on the level of Dwyer and Spalinski's Homotopy Theories and Model Categories, as well as some idea of spectral sequences (both of which are required reading for the paper, I believe), but it would be nice if we could come up with some sort of roadmap (with an ordered list of subjects, and even better if we had references) from the Hatcher and May books to the Hill-Hopkins-Ravenel paper. For that matter, I don't even know how spectral sequences and model categories fit in, so it would be nice I guess if there's also an explanation as to how each of these prerequisite subjects fit into the paper. I'm looking for something like the very nice answer to this query regarding the Norm Residue Isomorphism theorem.
|
There is one major topic that is missing from your list: spectra . Spectra (and $E_\infty$-ring spectra) are the basics of modern stable homotopy theory and are not treated, except very cursorily, in the references you've looked at. Seriously, spectra are the bread and butter of a homotopy theorist nowadays, and their absence from the standard references is as embarrassing as the absence of schemes would be from the standard material for algebraic geometry. That said, let me try to put together a list of references that should help you get started.

The first place to look is Adams' amazing Stable homotopy and generalized cohomology . This is a collection of lecture notes held at the university of Chicago and, famously, has to be read in reverse order (first part III, then part II and optionally part I) since they arranged the lectures chronologically rather than logically. Part III will give you a basic understanding of spectra, Part II will introduce the very basics of chromatic homotopy theory. Be warned that the treatment is very old (in fact if I'm not mistaken it is the first published construction of the stable homotopy category) and in particular the construction of the smash product of spectra is very lacking from a modern perspective, so you could skip it and just take the basic properties. In addition, the treatment of localizations has a couple of imprecisions, so I suggest you complement that section with Bousfield's The localization of spectra with respect to homology .

To get a more modern perspective, you cannot go much wrong with Schwede's Symmetric spectra . This is a modern treatment of a different (and better) model for spectra, which has the advantage of introducing you to $E_\infty$-ring spectra too, one of the basic structures that play a very important role in Hill-Hopkins-Ravenel (and all the rest of modern homotopy theory). Next you should get some familiarity with equivariant homotopy theory.
A nice starting point is Adams' Prerequisites (on equivariant homotopy theory) for Carlsson's lecture . Again it is a bit old fashioned, but having read it helps you hit the ground running. Since we're talking about equivariant homotopy theory, how could I not mention Schwede's Lectures on stable equivariant homotopy theory ? The model he uses for $G$-spectra is slightly different from the one in Hill-Hopkins-Ravenel, but again having read it will certainly help.

Hill-Hopkins-Ravenel's paper itself has a nice self-contained introduction to stable homotopy theory, which I strongly encourage you to read. Maybe skip the proof that the norm is homotopically meaningful in appendix B (it is very technical, and while it has some very interesting ideas, the only thing you really need is the statement of the theorem).

Chromatic homotopy theory is surprisingly less present in the actual solution of the Kervaire invariant one problem (you basically only need to know about the Adams-Novikov spectral sequence and the statement of a few highly nontrivial computations), but if you want to learn it (you should, if you're interested in this circle of ideas), I can recommend Ravenel's Nilpotence and periodicity in stable homotopy theory and Lurie's lecture notes on the subject. If you are interested in how to do the computations that are needed for the HHR paper, your best friend is Ravenel's Complex cobordism and stable homotopy groups of spheres (credits to Lennart Meier for prompting me to add this). Also reading Ravenel's previous paper on the odd Kervaire invariant problem is helpful for understanding how the main HHR paper connects with chromatic homotopy theory.

Since you mentioned spectral sequences: I think you will generally pick them up as you work through the above material. However, a paper that really helped me with them is Boardman's Conditionally convergent spectral sequences . Highly recommended.
I am sure my list has some glaring omissions, but this should at least get you started (and keep you busy for a while I suspect). Good luck and feel free to hang around the homotopy theory chatroom if you have questions!
|
{
"source": [
"https://mathoverflow.net/questions/257885",
"https://mathoverflow.net",
"https://mathoverflow.net/users/85392/"
]
}
|
258,459 |
The famous Sylvester-Gallai theorem states that for any finite set $X$ of points in the plane $\mathbf{R}^2$, not all on a line, there is a line passing through exactly two points of $X$. What happens if we replace $\mathbf{R}$ by $\mathbf{Q}_p$? It is well-known that the theorem fails if we replace $\mathbf{R}$ by $\mathbf{C}$: the set $X$ of flexes of a non-singular complex cubic curve has the property that every line passing through two points of $X$ also passes through a third. For example, the flexes of the Fermat curve $C:X^3+Y^3+Z^3=0$ are given by the equation $XYZ=0$ and are all defined over the field $\mathbf{Q}(\zeta_3)$ generated by a cube root of unity $\zeta_3$. As a consequence, if a field $K$ contains $\mathbf{Q}(\zeta_3)$ then the set of flexes of $C$ gives a counterexample to the Sylvester-Gallai theorem over $K$. For example, for any prime $p \equiv 1 \pmod{3}$, the field $\mathbf{Q}_p$ contains $\mathbf{Q}(\zeta_3)$ so that Sylvester-Gallai fails over $\mathbf{Q}_p$. I don't know what happens in the case $p=3$ or $p \equiv 2 \pmod{3}$. Note that the set of flexes of $C$ is not defined over $\mathbf{Q}_p$ anymore, but nothing prevents more complicated configurations of points giving counterexamples to the theorem over $\mathbf{Q}_p$. More generally, what happens over an arbitrary field $K$? Is it true that the Sylvester-Gallai theorem holds over $K$ if and only if $K$ does not contain the cube roots of unity? EDIT . David Speyer's beautiful example shows that the Sylvester-Gallai theorem fails over $\mathbf{Q}_p$ for any prime $p \geq 5$. Furthermore, regarding the problem of deciding whether SG holds over a given field $K$ (which looks like a difficult question, at least to me), this and Gro-Tsen's example show that the condition that $K$ does not contain the cube roots of unity clearly needs to be refined. In order for SG to hold over a characteristic $0$ field $K$, it is necessary that $K$ does not contain any root of unity of order $\geq 3$. 
I don't know whether this is also a sufficient condition.
|
If $n \geq 3$ and $K$ is a field of characteristic not dividing $n$, containing a primitive $n$-th root of unity $\zeta$, then the $3n$ points of the form $(1:-\zeta^a:0)$, $(0:1:-\zeta^b)$, $(-\zeta^c:0:1)$ are a Sylvester-Gallai configuration. In particular, taking $n=p-1$, this gives an SG configuration over $\mathbb{Q}_p$ for $p \geq 5$.
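This configuration is easy to verify numerically for small $n$. Below is a quick sketch in plain Python (names are my own), checking for $n = 3$ that every line spanned by two of the nine points contains a third point of the set, via vanishing of a $3\times 3$ determinant; for $n = 3$ these are exactly the nine flexes of the Fermat cubic mentioned in the question.

```python
import cmath
import itertools

n = 3
zeta = cmath.exp(2j * cmath.pi / n)  # primitive n-th root of unity
pts = ([(1, -zeta**a, 0) for a in range(n)]
       + [(0, 1, -zeta**b) for b in range(n)]
       + [(-zeta**c, 0, 1) for c in range(n)])

def det3(p, q, r):
    # three projective points are collinear iff this determinant vanishes
    return (p[0] * (q[1] * r[2] - q[2] * r[1])
            - p[1] * (q[0] * r[2] - q[2] * r[0])
            + p[2] * (q[0] * r[1] - q[1] * r[0]))

ok = all(
    any(abs(det3(p, q, r)) < 1e-9 for r in pts if r is not p and r is not q)
    for p, q in itertools.combinations(pts, 2)
)
print(ok)  # True: every line through two points of the set contains a third
```

Changing `n` checks the larger configurations as well (up to floating-point tolerance).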
|
{
"source": [
"https://mathoverflow.net/questions/258459",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6506/"
]
}
|
258,497 |
I want to do some computations which need the mod $p$ cohomologies of classifying spaces of connected compact Lie groups as input. I need the table for both the simply connected case and the central quotient case ($G/\Gamma$, $\Gamma\subset Z(G)$). Is there a complete table of all these results?
|
|
{
"source": [
"https://mathoverflow.net/questions/258497",
"https://mathoverflow.net",
"https://mathoverflow.net/users/103066/"
]
}
|
258,515 |
Consider the following: 1) How many connected regions can $n$ hyperplanes form in $\mathbb R^d$? 2) What if the hyperplanes are homogeneous, i.e. all pass through the origin? 3) Given a set of $n$ pairs of hyperplanes, such that each pair is parallel, what is the maximum number of regions that can be formed? I saw here , here and here that the answer to (1) is
$$f(d,n)=\sum_{i=0}^d {n \choose i}$$ However, I find it non-trivial to generalize the proof that was provided for $\mathbb R^2$ to $\mathbb R^d$ (without using "lower"/"upper" descriptions). Is there any "nice" way to show it recursively? Q3 is what I'm really after. Any ideas?
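For what it's worth, the formula for (1) is easy to tabulate; a small sketch in plain Python (the function name is mine), counting the maximum number of regions cut out by $n$ hyperplanes in general position:

```python
from math import comb

def regions(d, n):
    # maximal number of connected regions that n hyperplanes cut R^d into
    return sum(comb(n, i) for i in range(d + 1))

print(regions(1, 5))  # 6: five points cut the line into 6 intervals
print(regions(2, 3))  # 7: three general lines cut the plane into 7 regions
print(regions(3, 4))  # 15
```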
|
|
{
"source": [
"https://mathoverflow.net/questions/258515",
"https://mathoverflow.net",
"https://mathoverflow.net/users/97327/"
]
}
|
258,525 |
Let $f$ be a function such that $f:\mathbb{R}\to \mathbb{R}$, and let $f^{-1}$ be the compositional inverse of $f$. I would like to know how to solve this class of differential equation: $$\displaystyle \ f'= e^{\displaystyle {f}^{-1}}?$$ Note 01: $f' =\displaystyle\frac{df}{dx}$. Edit: ${f}^{-1}$ is the compositional inverse of $f$; for example, $\log$ is the compositional inverse of the $\exp$ function. Note 02: I have edited my question to clarify that the question in the title relates to ${f}^{-1}$. Thank you for any help
|
There is no such function. Since $f$ would have to map $\mathbb R$ onto $\mathbb R$ for the equation to make sense at all $x\in\mathbb R$, it follows that $f^{-1}(x)\to -\infty$ also as $x\to -\infty$, so $f'\to 0$. Thus $f(x)\ge x$, say, for all small enough $x$, hence $f^{-1}(x)\le x$ eventually, but then the equation shows that $f'\le e^x$, which is integrable on $(-\infty, 0)$, so $f$ would approach a limit as $x\to -\infty$ and not be surjective after all.
|
{
"source": [
"https://mathoverflow.net/questions/258525",
"https://mathoverflow.net",
"https://mathoverflow.net/users/51189/"
]
}
|
258,535 |
Let $(\mathsf{Rel},\otimes,1)$ denote the monoidal category of sets and relations, where $1$ is the one-element set. I once conjectured (with a little help from Jamie Vicary) that $\mathsf{Rel}$ is "quotient-free" in the sense that if a strong monoidal functor $F\colon\mathsf{Rel}\to S$ identifies any parallel pair of morphisms, then $F$ identifies every parallel pair of morphisms, and hence it factors through the terminal monoidal category (since $\mathsf{Rel}$ has a zero-object). [I'd be happy to hear suggestions for a better name than "quotient-free monoidal category", or for a better way of thinking of such things.] Definition: We say a monoidal category $M$ is quotient-free if for any monoidal category $S$ and strong monoidal functor $F\colon M\to S$, if $F(f_0)=F(f_1)$ for distinct morphisms $f_0\neq f_1\colon A\to B$ then $F$ factors through a terminal monoidal category. Explaining the conjecture to Tobias Fritz, he quickly proved it (by contradiction) as follows. Proposition: The monoidal category $(\mathsf{Rel},\otimes,1)$ is quotient-free. Proof (Fritz): Suppose that $A$ and $B$ are sets and that $R_0,R_1\colon A\to B$ are relations such that $R_0\neq R_1$. Then there exists $a\in A$ and $b\in B$ such that $(a,b)\notin R_0$ and $(a,b)\in R_1$ (without loss of generality). Let $e_a\colon 1\to A$ and $e_b\colon 1\to B$ correspond to the relations characterizing the subsets $\{a\}\subseteq A$ and $\{b\}\subseteq B$, respectively, and let $e'_b\colon B\to 1$ be the transpose of $e_b$. Then we have two different relations
$$1\xrightarrow{e_a}A\xrightarrow{R_0\ ,\ R_1}B\xrightarrow{e'_b}1.$$
These ($e'_bR_0e_a$ and $e'_bR_1e_a$) are the only two relations $1\to 1$, equaling the "null" relation $\emptyset_{1,1}$ and the identity $\mathrm{id}_1$, respectively. Assuming now that $F(R_0)=F(R_1)$, we have $F(\mathrm{id}_1)=F(\emptyset_{1,1})$. It follows that $F$ identifies any given relation $X\colon C\to D$ with the null relation $\emptyset_{C,D}\colon C\to D$, because
$$
FX\cong
F(X)\otimes F(\mathrm{id}_1)=
F(X)\otimes F(\emptyset_{1,1})\cong
F(X\otimes\emptyset_{1,1})=
F(\emptyset_{C,D}).
$$
Thus for any set $A$, we obtain an isomorphism $F(A)\cong F(\emptyset)$, where $\emptyset$ is the zero-object of $\mathsf{Rel}$. $\square$ Question: What are other examples of quotient-free monoidal categories? Question: Might we consider quotient-free monoidal categories as acting like fields, which are also somehow quotient-free? That is, maps to quotient-free monoidal categories would be analogous to points? Any thoughts on this would be useful.
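The key computation in Fritz's proof — that $e'_b R_0 e_a$ and $e'_b R_1 e_a$ are the null relation and the identity on $1$, respectively — can be checked mechanically by modeling relations as sets of pairs. A minimal sketch in Python (the particular sets and names are my own illustrative choices, with $a = b = 1$ in a two-element set):

```python
# A relation X -> Y is a set of pairs (x, y); compose(R, S) is "S after R".
def compose(R, S):
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

R0 = {(0, 0)}            # does not contain (1, 1)
R1 = {(0, 0), (1, 1)}    # contains (1, 1)
e_a = {('*', 1)}         # 1 -> A, the point a = 1
e_b_t = {(1, '*')}       # B -> 1, transpose of the point b = 1

print(compose(compose(e_a, R0), e_b_t))  # set(): the null relation on 1
print(compose(compose(e_a, R1), e_b_t))  # {('*', '*')}: the identity on 1
```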
|
|
{
"source": [
"https://mathoverflow.net/questions/258535",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2811/"
]
}
|
258,914 |
Question. Is the polynomial $x^{2k+1} - 7x^2 + 1$ irreducible over $\mathbb{Q}$ for every positive integer $k$? It is irreducible for all positive integers $k \leq 800$.
|
Here is a proof, based on a trick that can be used to prove that
$x^n + x + 1$ is irreducible when $n \not\equiv 2 \bmod 3$. We work with Laurent polynomials in $R = \mathbb Z[x,x^{-1}]$; note that
$R$ has unit group $R^\times = \pm x^{\mathbb Z}$.
We observe that for $f \in R$, the sum of the squares of the coefficients
is given by
$$\|f\| = \int_0^1 |f(e^{2 \pi i t})|^2\,dt
= \int_0^1 f(e^{2 \pi i t}) f(e^{-2 \pi i t})\,dt
= \int_0^1 f(x) f(x^{-1})\big|_{x = e^{2 \pi i t}}\,dt .$$
Now assume that $f(x) = g(x) h(x)$. Then, since
$f(x) f(x^{-1}) = \bigl(g(x)h(x^{-1})\bigr)\bigl(g(x^{-1})h(x)\bigr)$,
$G(x) := g(x) h(x^{-1})$ satisfies $\|G\| = \|f\|$
and $G(x) G(x^{-1}) = f(x) f(x^{-1})$. Now we consider $f(x) = x^n - 7 x^2 + 1$; then $\|f\| = 51$.
If $f = g h$ as above, then write
$G(x) = \pm x^m(1 + a_1 x + a_2 x^2 + \ldots)$
and $G(x^{-1}) = \pm x^l(1 + b_1 x + b_2 x^2 + \ldots)$.
The relation $G(x) G(x^{-1}) = f(x) f(x^{-1})$ translates into
(equality of signs and)
$$(1 + a_1 x + \ldots)(1 + b_1 x + \ldots) = 1 - 7 x^2 + O(x^{n-2}).$$
Assuming that $n > 40$ and considering terms up to $x^{20}$, one can
check (see below) that the only solution such that
$a_1^2 + a_2^2 + \ldots + a_{20}^2 + b_1^2 + b_2^2 + \ldots + b_{20}^2\le 49$
is, up to the substitution $x \leftarrow x^{-1}$, given by
$1 + a_1 x + \ldots = 1 - 7x^2 + O(x^{21})$,
$1 + b_1 x + \ldots = 1 + O(x^{21})$. Since the $-7$ (together with the
leading and trailing 1) exhausts our allowance
for the sum of squares of the coefficients, all other coefficients
must be zero, and we obtain that $G(x) = \pm x^a f(x)$
or $G(x) = \pm x^a f(x^{-1})$. Modulo interchanging $g$ and $h$,
we can assume that $g(x) h(x^{-1}) = \pm x^a f(x)$,
so $h(x^{-1}) = \pm x^a f(x)/g(x) = \pm x^a h(x)$,
and $x^{\deg h} h(x^{-1})$ divides $f(x)$.
This implies that $h(x)$ divides $x^n f(x^{-1})$, so $h(x)$ must divide
$$f(x) - x^n f(x^{-1}) = 7 x^2 (x^{n-4} - 1).$$
So $h$ also divides
$$f(x) - x^4 (x^{n-4} - 1) = x^4 - 7 x^2 + 1 = (x^2-3x+1)(x^2+3x+1).$$
Since $h$ also divides $f$, it must divide the difference $x^n - x^4$,
which it clearly does not for $n \neq 4$, since the quartic has no
roots of absolute value 0 or 1; contradiction. The argument shows that $x^n - 7 x^2 + 1$ is irreducible for $n > 40$;
for smaller $n$, we can ask the Computer Algebra System we trust.
This gives: Theorem. $x^n - 7 x^2 + 1$ is irreducible over $\mathbb Q$
for all positive integers $n$ except $n=4$. ADDED 2017-01-08: After re-checking the computations, I realized that there
was a small mistake that ruled out some partial solutions prematurely.
It looks like one needs to consider terms up to $x^{20}$. Here is a file with MAGMA code that verifies that
there are no other solutions.
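Two of the small computations used above — that $\|f\| = 51$ for $f(x) = x^n - 7x^2 + 1$, and the factorization of $x^4 - 7x^2 + 1$ — can be confirmed with a few lines of plain Python (names mine):

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists (index = degree)
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

n = 11  # any n >= 3 gives the same coefficient multiset {1, -7, 1}
f = [1, 0, -7] + [0] * (n - 3) + [1]   # 1 - 7x^2 + x^n
norm = sum(c * c for c in f)
print(norm)  # 51, the sum of squared coefficients

lhs = [1, 0, -7, 0, 1]                 # x^4 - 7x^2 + 1
rhs = poly_mul([1, -3, 1], [1, 3, 1])  # (x^2 - 3x + 1)(x^2 + 3x + 1)
print(lhs == rhs)  # True
```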
|
{
"source": [
"https://mathoverflow.net/questions/258914",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38889/"
]
}
|
259,054 |
Is there a natural geometric generalization of the winding number to higher dimensions? I know it primarily as an important and useful index for closed, plane curves
(e.g., the Jordan Curve Theorem),
and for its role in Cauchy's theorem integrating holomorphic functions.
I would be interested to learn of generalizations that essentially
replace the role of the circle $\mathbb{S}^1$ with $\mathbb{S}^n$. I've encountered references to the Fredholm index ,
the Pontryagin index ,
and to Bott periodicity ,
but none seem to be straightforward geometric generalizations of winding number. This is an entirely naive question, and references and high-level descriptions
would be appreciated, and more than suffice.
|
This is a very naive answer which I am sure you already considered, but isn't the most obvious generalization just given by the topological degree ( https://en.wikipedia.org/wiki/Degree_of_a_continuous_mapping )? The winding number of $f:S^1\rightarrow \mathbb{R}^2$ around $p$ is just the degree of the composition of $f$ with the radial projection from $p$, considered as a map from $S^1$ to $S^1$. It is obvious how to do the same thing for general $n$. (This should surely just be a comment.)
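For completeness, the degree of that composite map $S^1 \to S^1$ can be computed from a polygonal approximation of the curve by summing the signed angles subtended at $p$; a small sketch in plain Python (names mine):

```python
import math

def winding_number(pts, p):
    # pts: vertices of a closed polygonal curve (first vertex not repeated);
    # sum the signed angle at p between consecutive edge endpoints
    total = 0.0
    m = len(pts)
    for i in range(m):
        x0, y0 = pts[i][0] - p[0], pts[i][1] - p[1]
        x1, y1 = pts[(i + 1) % m][0] - p[0], pts[(i + 1) % m][1] - p[1]
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total / (2 * math.pi))

# unit circle traversed twice, sampled at 200 points
curve = [(math.cos(4 * math.pi * k / 200), math.sin(4 * math.pi * k / 200))
         for k in range(200)]
print(winding_number(curve, (0.0, 0.0)))  # 2
print(winding_number(curve, (3.0, 0.0)))  # 0 (point outside the curve)
```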
|
{
"source": [
"https://mathoverflow.net/questions/259054",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
259,155 |
As far as I know, in modern physics we assume that the underlying field of work is the field of real numbers (or complex numbers). Imagine for a second that we make a crazy assumption and suggest that the fundamental equations of physics can be expressed with $p$-adic numbers. What could be really rewritten formally? Does it make sense in the physical world to work over $p$-adic numbers? Is there such an attempt in the history of mathematical physics? What is the most convincing justification (in physics) that we need to work over the field of real numbers?
|
For an overview of applications of p-adic numbers in physics I would refer to the Wikipedia and Physics.stackexchange links, and to this nLab entry. Regarding the second question " What is the most convincing justification in physics that we need to work over the field of real or complex numbers " I would like to quote Freeman Dyson in Birds and Frogs: When I look at the history of mathematics, I see a succession of
illogical jumps, improbable coincidences, jokes of nature. One of the
most profound jokes of nature is the square root of minus one that the
physicist Erwin Schrödinger put into his wave equation when he
invented wave mechanics in 1926. Starting from wave optics as a model,
he wrote down a differential equation for a mechanical particle, but
the equation made no sense. The equation looked like the equation of
conduction of heat in a continuous medium. Heat conduction has no
visible relevance to particle mechanics. Schrödinger’s idea seemed to
be going nowhere. But then came the surprise. Schrödinger put the
square root of minus one into the equation, and suddenly it made
sense. Suddenly it became a wave equation instead of a heat conduction
equation. And Schrödinger found to his delight that the equation has
solutions corresponding to the quantized orbits in the Bohr model of
the atom. It turns out that the Schrödinger equation describes correctly
everything we know about the behavior of atoms. It is the basis of all
of chemistry and most of physics. And that square root of minus one
means that nature works with complex numbers and not with real
numbers. All through the nineteenth century, mathematicians from Abel to
Riemann and Weierstrass had been creating a magnificent theory of
functions of complex variables. They had discovered that the theory of
functions became far deeper and more powerful when it was extended
from real to complex numbers. But they always thought of complex
numbers as an artificial construction, invented by human
mathematicians as a useful and elegant abstraction from real life. It
never entered their heads that this artificial number system that they
had invented was in fact the ground on which atoms move. They never
imagined that nature had got there first.
|
{
"source": [
"https://mathoverflow.net/questions/259155",
"https://mathoverflow.net",
"https://mathoverflow.net/users/103287/"
]
}
|
259,180 |
Let $(a_{i})$ be an increasing sequence of positive integers given by a linear recurrence $a_{i+n}=c_{n}a_{i+n-1}+\dots +c_{1}a_{i}$ with $c_{i}\in\{-1,0,1\}$ and $a_{i}=2^{i}$ for $i=1,\dots n$ such that the characteristic polynomial $p(x)=-x^{n}+c_{n}x^{n-1}+\dots +c_{1}x^{0}$ has a unique dominating real root $\alpha> 1$. Let $(s_{i})\in\{-1,0,1\}^{m}$ with $m\ge n$.
Is it true that
$\sum_{i=1}^{m}s_{i}a_{i}=0$
if and only if $p$ divides the polynomial $\sum_{i=1}^{m}s_{i}x^{i}$? Where can I find a proof of this kind of result?
|
|
{
"source": [
"https://mathoverflow.net/questions/259180",
"https://mathoverflow.net",
"https://mathoverflow.net/users/23542/"
]
}
|
259,263 |
I have read Connes' survey article http://www.alainconnes.org/docs/rhfinal.pdf and I am somewhat familiar with his classic paper on the trace formula: http://www.alainconnes.org/docs/selecta.ps Very roughly speaking the idea is to describe a dictionary which translates the concepts and techniques used in the proof of the analogue of the Riemann Hypothesis for function fields.
This translation uses various techniques from tropical geometry and topos theory. At first I was hopeful I might understand the key issues with this translation, since I have some experience with the theory of Grothendieck toposes (or topoi).
Nevertheless I have been lost when it comes to understanding precisely what the remaining problems are. As already briefly discussed in this thread: Riemann hypothesis via absolute geometry in the proof of the Riemann hypothesis for a curve $X$ over $F_p$ the surface $X \times X$ plays an important role. According to a new preprint of Connes / Consani there is now a suitable analogy for the surface $X \times X$ which is given by a topos called the "scaling site", cf. https://arxiv.org/abs/1603.03191 I would like to know what are the issues that are left open to complete the analogy with the proof of RH in the case of a curve over $F_p$?
|
First, recall the steps of Weil's proof, other than defining the surface:

1. Develop an intersection theory of curves on surfaces.
2. Show that the intersection of two specially chosen curves is equal to a coefficient of the zeta function.
3. Prove the Hodge index theorem using the Riemann-Roch theorem for surfaces (or is there another proof?).
4. By playing around with the two curves from step 2 and a few other simple curves, deduce an inequality for each coefficient of the logarithmic derivative of the zeta function.
5. Conclude the Riemann hypothesis for the roots of the zeta function.

Any approach which closely follows Weil's strategy would have to find analogues of these 5 steps. I think most people would say the crux of the issue is step 3 - it would be very interesting if some abstract/categorical/algebraic method produced any inequality involving the Riemann zeta function or related functions that was not previously known, even if it was short of the full Riemann hypothesis. Equally, many people fear that if we generalize too far from the world of curves and surfaces that we know and love, we might lose the ability to prove concrete inequalities like this. nfdc23 discussed this issue in the comments. In the survey article you linked, Connes says: At this point, what is missing is an intersection theory and a Riemann-Roch theorem on the square of the arithmetic site. I think this is still the case - the Riemann-Roch theorem in the paper you link seems to be for certain curves in the scaling site. (If it were for the surface, I wouldn't expect to see it on the arXiv until the Riemann hypothesis were deduced from it, or a very convincing argument was found that it was not sufficient to prove the Riemann hypothesis!) I'm not sure how helpful knowledge of toposes (alone) will be for understanding this issue.
The reason is that the key parts of any definition of intersection theory of curves on surfaces do not involve the topos structure of the surface very much, but rather other structures. I know of a few different perspectives on intersection theory:

1. Add ample classes, apply a moving lemma, and then concretely count the intersections of curves. Here the main issues are global geometric considerations - ample divisors and their relationships to projective embeddings and hyperplanes and so on. I don't think toposes are a very useful tool for understanding these things.

2. Apply local formulas for the intersection number, and then sum over the points of intersection. This is more promising, as e.g. Serre's formula is cohomological in nature, but it's the wrong kind of cohomology - for modules, not for toposes. I think it might be hard to define the right cohomology groups over semirings.

3. Take the cup product of their classes in a suitable cohomology theory. This at first looks promising, because one of our possible choices for a cohomology theory, etale cohomology, is defined using a topos. Unfortunately, because etale cohomology is naturally l-adic, it is difficult to establish positivity from it. Of course one could look to generalizing Deligne's proof of the Riemann hypothesis, but that is a different matter and much more complicated. (Peter Sarnak has suggested that Deligne's proof provides a guide to how one should try to prove the Riemann hypothesis for number fields, but it is not the etale topos but rather the use of families in the proof that he wants to mimic.)

4. View one curve as an honest curve and the other as the divisor of a line bundle, pull the line bundle back to the curve, then compute its degree. This is, I think, the easiest way to define the intersection number, but one needs some of the other characterizations to prove its properties (even symmetry).
This looks to me like it is at least potentially meaningful in some topos, and I don't know exactly to what extent it can be applied to the arithmetic site. Remember that even among locally ringed spaces, the ones where you have a good intersection theory are very specific (if you want to intersect arbitrary closed subsets, I think you need smooth proper varieties over a field). For locally semiringed topoi, the situation is presumably much worse. No one is saying it is impossible to develop an applicable intersection theory, but I don't think anyone really knows how to begin -- if they did, I'm sure they would be working furiously on it.
|
{
"source": [
"https://mathoverflow.net/questions/259263",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2258/"
]
}
|
259,664 |
Suppose you have a tetrahedron $T$ in Euclidean space with edge lengths $\ell_{01}$ , $\ell_{02}$ , $\ell_{03}$ , $\ell_{12}$ , $\ell_{13}$ , and $\ell_{23}$ . Now consider the tetrahedron $T'$ with edge lengths $$\begin{aligned}
\ell'_{02} &= \ell_{02} &
\ell'_{13} &= \ell_{13}\\
\ell'_{01} &= s-\ell_{01} &
\ell'_{12} &= s-\ell_{12}\\
\ell'_{23} &= s-\ell_{23}&
\ell'_{03} &= s-\ell_{03}
\end{aligned}
$$ where $s = (\ell_{01} + \ell_{12} + \ell_{23} + \ell_{03})/2$ .
If the edge lengths of $T'$ are positive and satisfy the triangle inequality, then the volume of $T'$ equals the volume of $T$ . In particular, if $T$ is a flat tetrahedron in $\mathbb{R}^2$ , then $T'$ is as well. This is easily verified by plugging the values $\ell'_{ij}$ above into the Cayley-Menger determinant. In fact, it's possible to show that the linear symmetries of $\mathbb{R}^6$ that preserve the Cayley-Menger determinant form the Weyl group $D_6$ , of order $2^5 * 6! = 23040$ . This is a factor of $15$ times larger than the natural geometric symmetries obtained by permuting the vertices of the tetrahedron and negating the coordinates. The transformations don't always take Euclidean tetrahedra to Euclidean tetrahedra, but they do sometimes. For instance, if you start with an equilateral tetrahedron $T$ with all side lengths equal to $1$ , then $T'$ is also an equilateral tetrahedron. Thus if $T$ is a generic Euclidean tetrahedron close to equilateral, $T'$ will also be one, and $T$ and $T'$ will not be related by a Euclidean symmetry. I can't be the first person to observe this. (In fact, I vaguely recall hearing about this in the context of quantum groups and the Jones polynomial.) What's the history? How to best understand these transformations (without expanding out the determinant)? Are $T$ and $T'$ scissors congruent? Etc.
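The volume claim is easy to test numerically. Below is a minimal sketch (Python, standard library only; the near-equilateral edge lengths are arbitrary sample values, not taken from any source) that plugs both $T$ and its Regge transform $T'$ into the Cayley-Menger determinant:

```python
from math import isclose, sqrt

def det(m):
    """Determinant by cofactor expansion (fine for a 5x5 matrix)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cm_volume(l01, l02, l03, l12, l13, l23):
    """Tetrahedron volume from edge lengths: 288 V^2 = Cayley-Menger determinant."""
    d01, d02, d03, d12, d13, d23 = (x * x for x in (l01, l02, l03, l12, l13, l23))
    cm = [[0, 1,   1,   1,   1  ],
          [1, 0,   d01, d02, d03],
          [1, d01, 0,   d12, d13],
          [1, d02, d12, 0,   d23],
          [1, d03, d13, d23, 0  ]]
    return sqrt(det(cm) / 288.0)

# Arbitrary near-equilateral edge lengths (chosen for illustration only).
l01, l02, l03, l12, l13, l23 = 1.0, 1.1, 0.9, 1.05, 0.95, 1.0
s = (l01 + l12 + l23 + l03) / 2
vol  = cm_volume(l01, l02, l03, l12, l13, l23)                  # volume of T
vol2 = cm_volume(s - l01, l02, s - l03, s - l12, l13, s - l23)  # volume of T'
assert isclose(vol, vol2, rel_tol=1e-9)
```

Any other edge lengths meeting the stated positivity and triangle-inequality conditions should give the same agreement, up to floating-point error.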
|
These are the so-called Regge symmetries, described by T. Regge in a 1970-ish paper. For a bit on it, with references, see the paper: Philip P. Boalch, MR 2342290, Regge and Okamoto symmetries, Comm. Math. Phys. 276 (2007), no. 1, 117--130.
|
{
"source": [
"https://mathoverflow.net/questions/259664",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5010/"
]
}
|
259,698 |
I'm reading the elementary proof of the prime number theorem (Selberg / Erdős, around 1949). One key step is to prove that, with $\vartheta(x) = \sum_{p\leq x} \log p$, $$(1) \qquad\qquad \vartheta(x) \log x+\sum_{p\leq x}(\log p) \vartheta(x/p) = 2x\log x+O(x)$$ What's the idea behind this identity? Here is the heuristic argument I understand: if $p$ is a prime number, the average gap before the next prime is $\approx \log p$. Thus if we set a weight $0$ for non-prime numbers, and $\log p$ for prime numbers, then when we sum all these weights for $n = 1 ... x$, we should get $\vartheta(x) = \sum_{p \leq x} \log p \sim x$, and that's equivalent to the PNT. Ok. But where does the idea come from to add the term $\sum_{p\leq x}(\log p) \vartheta(x/p)$ on the left-hand side of identity (1)? What's the idea here? What's the idea to go from (1) to the prime number theorem? Note: the identity (1) can be replaced by an equivalent identity, with $\psi(x) = \sum_{p^k\leq x} \log p = \sum_{n \leq x} \Lambda(n)$ (where $\Lambda(n) = \log p$ if $n=p^k$ and $0$ otherwise): $$(2) \qquad\qquad \psi(x) \log x+\sum_{n\leq x}\Lambda(n) \psi(x/n) = 2x\log x+O(x)$$ Another equivalent form is (just in case (2) or (3) might be easier to understand): $$(3) \qquad\qquad \sum_{p \leq x} (\log p)^2 + \sum_{pq\leq x} \log p \log q= 2x\log x+O(x)$$
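Formula (3) is easy to illustrate numerically. Here is a rough sketch in Python (the cutoff $x = 10^5$ is an arbitrary choice); note that the $O(x)$ error term is still quite visible at this scale, so the ratio of the left side to $2x\log x$ is noticeably below $1$ and creeps upward as $x$ grows:

```python
from math import log

X = 100_000

# Sieve of Eratosthenes up to X.
is_prime = [True] * (X + 1)
is_prime[0] = is_prime[1] = False
for i in range(2, int(X ** 0.5) + 1):
    if is_prime[i]:
        is_prime[i * i:: i] = [False] * len(is_prime[i * i:: i])
primes = [n for n in range(2, X + 1) if is_prime[n]]

# Prefix table for theta(t) = sum_{p <= t} log p.
theta = [0.0] * (X + 1)
for n in range(2, X + 1):
    theta[n] = theta[n - 1] + (log(n) if is_prime[n] else 0.0)

lhs = sum(log(p) ** 2 for p in primes)              # sum_{p <= x} (log p)^2
lhs += sum(log(p) * theta[X // p] for p in primes)  # sum_{pq <= x} log p log q
ratio = lhs / (2 * X * log(X))
print(ratio)  # below 1; the O(x) term dominates the discrepancy
```

The second sum uses $\sum_{pq\leq x} \log p \log q = \sum_{p\leq x} (\log p)\, \vartheta(x/p)$, i.e. exactly the term from identity (1).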
|
The complex-analytic proof of the prime number theorem can help inform the elementary one. The von Mangoldt function $\Lambda$ is related to the Riemann zeta function $\zeta$ by the formula
$$ \sum_n \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)},$$
at least in the region $\mathrm{Re}(s) > 1$. The right-hand side has a pole of residue 1 at the simple pole $s=1$ of $\zeta$, and a pole of residue -1 at each zero $s=\rho$ of $\zeta$. Applying Perron's formula carefully, one eventually arrives at the von Mangoldt explicit formula $$ \sum_{n \leq x} \Lambda(n) = x - \sum_\rho \frac{x^\rho}{\rho} + \dots$$
where the remaining terms $\dots$ (as well as the way in which the infinite sum over zeroes is interpreted) will be ignored for this discussion. This formula strongly suggests that zeroes $\rho$ with real part strictly less than one will eventually only make a lower order contribution to $\sum_{n \leq x} \Lambda(n)$, whereas zeroes $\rho$ with real part equal to one would likely destroy the prime number theorem. In fact one can pursue this line of reasoning more carefully and show that the prime number theorem is equivalent to the absence of zeroes of zeta on the line $\mathrm{Re}(s)=1$. But what if there was a way to make the contribution of the zeroes of zeta smaller than those of the pole, even when said zeroes lie on the line $\mathrm{Re}(s)=1$? One way to do this is to replace the log-derivative $-\frac{\zeta'(s)}{\zeta(s)}$ by the variant $\frac{\zeta''(s)}{\zeta(s)}$. Assuming for simplicity that all zeroes are simple, this function still has a simple pole at every zero $\rho$ (with residue $\frac{\zeta''(\rho)}{\zeta'(\rho)}$), but now has a double pole at $s=1$ (with residue $2$). As it turns out, this variant also has a Dirichlet series representation
$$ \sum_n \frac{ \Lambda_2(n) }{n^s} = \frac{\zeta''(s)}{\zeta(s)}$$
which by Perron's formula suggests an asymptotic of the form
$$ \sum_{n \leq x} \Lambda_2(n) = 2 x \log x + \sum_\rho \frac{\zeta''(\rho)}{\zeta'(\rho)} \frac{x^\rho}{\rho} + \dots$$
Now, even if there are zeroes on $\mathrm{Re}(s)=1$, their contribution should still be smaller than the main term, and one may now be led to predict an asymptotic of the form
$$ \sum_{n \leq x} \Lambda_2(n) = 2 x \log x + O(x) \quad (4)$$
and furthermore this asymptotic should be easier to prove than the prime number theorem, as it is not reliant on preventing zeroes at the edge of the critical strip. The functions $\Lambda_2$ and $\Lambda$ are related via the identity
$$ \frac{\zeta''(s)}{\zeta(s)} = -\frac{d}{ds} \left(-\frac{\zeta'(s)}{\zeta(s)}\right) + \left(-\frac{\zeta'(s)}{\zeta(s)}\right)^2$$
which after inverting the Dirichlet series leads to
$$ \Lambda_2(n) = \Lambda(n) \log n + \sum_{d|n} \Lambda(d) \Lambda(\frac{n}{d}) \quad (5)$$
and from this and (4) we soon obtain (2) and hence (1). One can also think about this from a "music of the primes" standpoint. The explicit formula is morally resolving $\Lambda$ into "harmonics" as
$$ \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1} + \dots \quad (6)$$
where I am deliberately being vague as to how to interpret the symbols $\approx$, $\sum$, and $\dots$. The Dirichlet convolution of $1$ with itself resembles $\log n$, while the Dirichlet convolution of $-n^{\rho-1}$ with itself resembles $n^{\rho-1} \log n$; also, cross-terms such as the convolution of $1$ with $-n^{\rho-1}$ experience significant cancellation and should be lower order. Thus the right-hand side of (5) amplifies the "1" harmonic of $\Lambda$ by a factor of about $2 \log n$, while damping out the contributions of each of the zeroes $\rho$ due to cancellation between the two terms, leading one to (1) or (2) as a smoothed out version of the prime number theorem. Conversely, (2) combined with (5) can be viewed as an elementary way to encode (the largest terms of) the explicit formula (4), without explicit mention of zeroes. The most difficult part of the Erdos-Selberg elementary proof of the PNT is the passage from (1) to the prime number theorem, as here we must actually eliminate the scenario in which there is a zero on $\mathrm{Re}(s)=1$. The key to doing so is to exploit the non-negativity of $\Lambda$; morally, the expansion (6) prevents $\rho$ (and hence also $\overline{\rho}$) from lying on $\mathrm{Re}(s)=1$ as this would make the right-hand side negative "on average" for certain sets of $n$. There are several ways to make this intuition precise; the standard complex-analytic proofs often proceed by testing $\Lambda$ against $n^{\pm it}$ and $n^{\pm 2it}$, where $1+it$ is a hypothetical zero of $\zeta$. Very roughly speaking, the idea of the elementary proof is to exploit not only the non-negativity of $\Lambda$, but also upper bounds coming from sieve-theoretic bounds such as the Brun-Titchmarsh inequality, which roughly speaking tell us that $\Lambda$ behaves "on average" as if it lies between $0$ and $2$. (Actually, the Selberg formula (2) already gives an adequate substitute for this inequality for the purposes of this argument.) 
On the other hand, from (6) we see that if there was a zero $\rho = 1+it$ on the line $\mathrm{Re}(s)=1$, then $\Lambda-1$ should have an extremely large correlation with $\mathrm{Re} n^{\rho-1} = \cos(t \log n)$, and these two facts can be placed in contradiction with each other, basically because the average value of $|\cos(t \log n)|$ is strictly less than one. However, because the zeroes $\rho$ are not explicitly present in the Selberg formula (2), one has to work quite a bit in the elementary proof to make this argument rigorous. In this blog post of mine I use some Banach algebra theory to tease out the zeroes $\rho=1+it$ from the Selberg formula (they can be interpreted as the "Gelfand spectrum" of a certain Banach algebra norm associated to that formula) to make this connection more explicit.
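As an aside, the function $\Lambda_2$ in (5) also equals the Möbius convolution $\mu * \log^2$ (this is the standard definition of $\Lambda_2$ in analytic number theory, consistent with the Dirichlet series $\zeta''/\zeta$ above, though it is not stated explicitly here). A brute-force check of identity (5) against that formula, in Python:

```python
from math import log

def factorize(n):
    """Prime factorization as {prime: exponent}, by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def vonmangoldt(n):
    """Lambda(n) = log p if n = p^k, else 0."""
    f = factorize(n)
    return log(next(iter(f))) if len(f) == 1 else 0.0

def mobius(n):
    f = factorize(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def lambda2_selberg(n):
    """Lambda_2(n) via identity (5): Lambda(n) log n + (Lambda * Lambda)(n)."""
    conv = sum(vonmangoldt(d) * vonmangoldt(n // d)
               for d in range(1, n + 1) if n % d == 0)
    return vonmangoldt(n) * log(n) + conv

def lambda2_mobius(n):
    """Lambda_2(n) as the Mobius convolution (mu * log^2)(n)."""
    return sum(mobius(d) * log(n // d) ** 2
               for d in range(1, n + 1) if n % d == 0)

assert all(abs(lambda2_selberg(n) - lambda2_mobius(n)) < 1e-9
           for n in range(1, 301))
```

For instance $\Lambda_2(8) = 5\log^2 2$ and $\Lambda_2(6) = 2\log 2 \log 3$, and both formulas agree on these.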
|
{
"source": [
"https://mathoverflow.net/questions/259698",
"https://mathoverflow.net",
"https://mathoverflow.net/users/85239/"
]
}
|
259,834 |
I am to teach a second year grad course in analysis with a focus on Schwartz distributions. Among the core topics I intend to cover are: Some multilinear algebra including the Kernel Theorem and Volterra composition, Some Fourier analysis including the Bochner-Schwartz Theorem, An introduction to wavelets with a view to structure theorems for spaces of distributions or function spaces, Probability theory on spaces of distributions including the Lévy Continuity Theorem, A study of homogeneous distributions and elementary solutions to linear PDEs. My question is: What "cool topics/applications" would it be nice to include in such a course? I am particularly interested in examples with a high return on investment, i.e., that would not take too long to cover yet would provide the students with valuable tools for eventually a future research career in analysis. Please provide references where I can learn more about your suggestions. I would like some variety if possible. I got suggestions pertaining to probability, PDEs and mathematical physics, but it would be nice to get applications related to other areas of math.
|
Unsurprisingly, the topics that occur to me have various connections to number theory (and related harmonic analysis) (and unclear to me what might have already been done in your course...): EDIT: inserted some links... EDIT-EDIT: one more... Genuinely distributional proof of Poisson summation: http://www.math.umn.edu/~garrett/m/fun/poisson.pdf Meromorphic continuation of distributions $|x|^s$ and ${\mathrm sgn}(x)\cdot |x|^s$ http://www.math.umn.edu/~garrett/m/fun/notes_2013-14/mero_contn.pdf (Fancier version of the previous: mero cont'n of $|\det x|^s$ on $n\times n$ matrices (if this is interesting, I have some notes, or maybe it's a fun exercise). In particular, stimulated by a math-overflow question of A. Braverman some time ago, there are equivariant distributions (e.g., on two-by-two matrices) such that both they and their Fourier transform are supported on singular matrices... Wacky! http://www.math.umn.edu/~garrett/m/v/det_power_distn.pdf Decomposition of $L^2(A)$ for compact abelian topological groups $A$ (by Hilbert-Schmidt, hence compact, operators). http://www.math.umn.edu/~garrett/m/fun/notes_2012-13/06c_cpt_ab_gps.pdf Reconsideration of Sturm-Liouville problems (with reasonable hypotheses), to really prove things that are ... ahem... "suggested" in usual more-naive discussions. Quadratic reciprocity over number fields (and function fields not of char=2) ... cf. http://www.math.umn.edu/~garrett/m/v/quad_rec_02.pdf (This presumes Poisson summation for $\mathbb A/k$...) Explanation that Schwartz' kernel theorem is a corollary of a Cartan-Eilenberg adjunction (between $\otimes$ and $\mathrm{Hom(,-)}$), when we know that there are genuine (i.e., categorically correct) tensor products for "nuclear Frechet spaces", ... which leads to the issue of suitable notions of the latter. 
http://www.math.umn.edu/~garrett/m/fun/notes_2012-13/06d_nuclear_spaces_I.pdf The idea that termwise differentiation of Fourier series is "always ok" (with coefs that grow at most polynomially), if/when the outcome is interpreted as lying in a suitable Sobolev space on the circle. And that polynomial-growth-coefficiented Fourier series always converge... if only in a suitable Sobolev space. http://www.math.umn.edu/~garrett/m/fun/notes_2012-13/03b_intro_blevi.pdf Possibility of ranting about the limitations of pointwise convergence... especially when placed in contrast to convergence in Sobolev spaces... Use of Snake Lemma to talk about mero cont'n of the Gamma function, via mero cont'n of $|x|^s$. :) http://www.math.umn.edu/~garrett/m/v/snake_lemma_gamma.pdf Peetre's theorem that any (not necessarily continuous!) linear functional on test functions that does not increase support is a differential operator. (I have a note on this, which may be more palatable to beginners than Peetre's paper.) http://www.math.umn.edu/~garrett/m/v/peetre.pdf Uniqueness of invariant functionals... As the easiest case (which is easy, but cognitive-dissonance-provoking, in my experience), proving that $u'=0$ for a distribution $u$ implies that $u$ is (integration-against) a constant. (Maybe you'd do this along the way...) http://www.math.umn.edu/~garrett/m/v/uniq_of_distns.pdf ... this is not a stand-alone topic, but: the usual discussions of pseudo-differential operators (e.g., "symbols") somehow shrink from talking about quotients of TVS's in a grown-up way... If that hadn't been done earlier, and/or people had a (reasonable!) feeling of discomfort about the usual style of chatter in the psi-DO world, perhaps this could be happy-making. Meromorphic/holomorphic vector-valued functions (cf. Grothendieck c. 1953-4, and also various of my notes) e.g., meromorphic families of distributions... 
E.g., the $|x|^s$ family on $\mathbb R$ has residues which are the even-order derivatives of $\delta$, and ${\mathrm {sgn}}(x)\cdot |x|^s$ has as residues the odd-order derivatives of $\delta$. http://www.math.umn.edu/~garrett/m/fun/Notes/09_vv_holo.pdf Depending on context, there are somewhat-fancier things that I do find entertaining and also useful. Comments/correspondence are welcome.
|
{
"source": [
"https://mathoverflow.net/questions/259834",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7410/"
]
}
|
259,844 |
The purpose of this question is to collect the most outrageous (or ridiculous) conjectures in mathematics. An outrageous conjecture is qualified ONLY if: 1) It is most likely false (Being hopeless is NOT enough.) 2) It is not known to be false 3) It was published or made publicly before 2006. 4) It is Important: (It is based on some appealing heuristic or idea;
refuting it will be important etc.) 5) IT IS NOT just the negation of a famous commonly believed conjecture. As always with big list problems please make one conjecture per answer. (I am not sure this is really a big list question, since I am not aware of many such outrageous conjectures. I am aware of one wonderful example that I hope to post as an answer in a couple of weeks.) Very important examples where the conjecture was believed to be false when it was made but this is no longer the consensus may also qualify! Shmuel Weinberger described various types of mathematical conjectures. And the type of conjectures the question proposes to collect is of the kind: On other times, I have conjectured to lay down the gauntlet: “See, you can’t even disprove this ridiculous idea." Summary of answers (updated March 13, 2017 and February 27, 2020):

1. Berkeley Cardinals exist.
2. There are at least as many primes between $2$ and $n+1$ as there are between $k$ and $n+k-1$.
3. P=NP.
4. A super exact (too good to be true) estimate for the number of twin primes below $n$.
5. Peano Arithmetic is inconsistent.
6. The set of prime differences has intermediate Turing degree.
7. Vopěnka's principle.
8. Siegel zeros exist.
9. All rationally connected varieties are unirational.
10. Hall's original conjecture (number theory).
11. Siegel's disk exists.
12. The telescope conjecture in homotopy theory.
13. Tarski monsters do not exist (settled by Olshanski).
14. All zeros of the Riemann zeta function have rational imaginary part.
15. The Lusternik-Schnirelmann category of $Sp(n)$ equals $2n-1$.
16. The finitistic dimension conjecture for finite dimensional algebras.
17. The implicit graph conjecture (graph theory, theory of computing).
18. $e+\pi$ is rational.
19. Zeeman's collapsing conjecture.
20. All groups are sofic.

(From comments, incomplete list) 21. The Jacobian conjecture; 22. The Berman–Hartmanis conjecture 23. The Casas-Alvero conjecture 24. An implausible embedding into $L$ (set theory). 25. 
There is a gap of at most $\log n$ between threshold and expectation threshold (Update: a slightly weaker version of this conjecture was proved by Keith Frankston, Jeff Kahn, Bhargav Narayanan, and Jinyoung Park!; Further update: the conjecture was fully proved by Jinyoung Park and Huy Tuan Pham ). 26. NEXP-complete problems are solvable by logarithmic depth, polynomial-size circuits consisting entirely of mod 6 gates. 27. Fermat had a marvelous proof for Fermat's last theorem. (History of mathematics).
|
W. Hugh Woodin, at a 1992 seminar in Berkeley at which I was present, proposed a new and ridiculously strong large cardinal concept, now called the Berkeley cardinals , and challenged the seminar audience to refute their existence. He ridiculed the cardinals as overly strong, stronger than Reinhardt cardinals, and proposed them in a "Refute this!" manner that seems to be in exactly the spirit of your question. Meanwhile, no-one has yet succeeded in refuting the Berkeley cardinals.
|
{
"source": [
"https://mathoverflow.net/questions/259844",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1532/"
]
}
|
260,057 |
The following is not a proper mathematical question but more of a metamathematical one. I hope it is nonetheless appropriate for this site. One of the non-obvious consequences of the axiom of choice is the Banach-Tarski paradox and thus the existence of non-measurable sets. On the other hand, there seem to be models of Zermelo-Fraenkel set theory without axiom of choice where every set would be measurable. What does this say about the "plausibility" of the axiom of choice? Are there reasons why it is plausible (for physicists, philosophers, mathematicians) to believe that not all sets should be measurable? Is the Banach-Tarski paradox one more reason why one should "believe" in the axiom of choice, or is it on the opposite shedding doubt on it?
|
There are two ingredients in the Banach-Tarski decomposition theorem: The notion of space , together with derived notions of part and decomposition . The axiom of choice. Most discussion about the theorem revolve around the axiom of choice. I would like to point out that the notion of space can be put under scrutiny as well. The Banach-Tarski decomposition of the sphere produces non-measurable parts of the sphere. If we restrict the notion of "part" to "measurable subset" the theorem disappears. For instance, if we move over into a model of set theory (without choice) in which all sets are measurable, we will have no Banach-Tarski. This is all well known. Somewhat amazingly, we can make the Banach-Tarski decomposition go away by extending the notion of subspace, and keep choice too. Alex Simpson in Measure, Randomness and Sublocales (Annals of Pure and Applied Logic, 163(11), pp. 1642-1659, 2012) shows that this is achieved by generalizing the notion of topological space to that of locale . He explains it thus: "The different pieces in the partitions defined by Vitali and by Banach and Tarski are deeply intertangled with each other. According to our notion of “part”, two such intertangled pieces are not disjoint from each other, so additivity does not apply. An intuitive explanation for the failure of disjointness is that, although two such pieces share no point in common, they nevertheless overlap on the topological “glue” that bonds neighbouring points in $\mathbb{R}^n$ together." Peter Johnstone explained in The point of pointless topology why locales have mathematical significance that goes far beyond fixing a strange theorem about decomposition of the sphere. Why isn't everyone using locales? I do not know, I think it is purely a historic accident. At some point in the 20th century mathematicians collectively lost the idea that there is more to space than just its points. 
I personally prefer to blame the trouble on the notion of space, rather than the axiom of choice. As far as possible, geometric problems should be the business of geometry, not logic or set theory. Mathematicians are used to operating with various kinds of spaces (in geometry, in analysis, in topology, in algebraic geometry, in computation, etc.) and so it seems only natural that one should worry about using the correct notion of space first, and about underlying foundational principles later. Good math is immune to changes in foundations.
|
{
"source": [
"https://mathoverflow.net/questions/260057",
"https://mathoverflow.net",
"https://mathoverflow.net/users/39082/"
]
}
|
260,107 |
Let $P$ be a convex polytope in $\mathbb{R}^d$ with $n$ vertices and $f$ facets.
Let $\text{Proj}(P)$ denote the projection of $P$ into $\mathbb{R}^2$. Can $\text{Proj}(P)$ have more than $f$ facets? In the general case, each successive projection can increase the number of facets from $f$ to $\left\lfloor \frac{f^2}{2} \right\rfloor$, but I'm wondering if $\mathbb{R}^2$ is a special case.
|
Consider the polytope in $\mathbb{R}^3$ with $8$ vertices at coordinates $(\pm 1, \pm 2, 1), (\pm 2, \pm 1, -1)$. Geometrically this looks like a cube where the top face is stretched in the direction of the $y$-axis and the bottom face is stretched in the direction of the $x$-axis, but it still has the face structure of a cube and in particular has $6$ facets. Its projection onto the first two coordinates is clearly an octagon, and hence has two more facets than the original polytope.
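This can also be verified mechanically. A small sketch in Python (standard library only) projects the eight vertices to the first two coordinates and computes the planar convex hull:

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, pts[::-1])):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

# Vertices (+-1, +-2, 1) and (+-2, +-1, -1); combinatorially a cube: 6 facets.
vertices = [(sx * 1, sy * 2, 1) for sx in (1, -1) for sy in (1, -1)] \
         + [(sx * 2, sy * 1, -1) for sx in (1, -1) for sy in (1, -1)]
proj = [(x, y) for x, y, _ in vertices]
hull = convex_hull(proj)
assert len(hull) == 8  # the projection is an octagon: 8 facets versus 6
```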
|
{
"source": [
"https://mathoverflow.net/questions/260107",
"https://mathoverflow.net",
"https://mathoverflow.net/users/103831/"
]
}
|
260,156 |
Suppose we are given some polynomial with integer coefficients, which we regard as carving out an affine variety $E$, for example: $$ 3x^2y - 12 x^3y^5 + 27y^9 - 2 = 0 \tag{$*$} $$ (We might consider a bunch of equations, we might work over projective space, but let's keep it simple for now). We are interested in the number of points on $E$ when we reduce modulo $p$, i.e. over the finite field $\mathbb{F}_p$ as the prime $p$ varies. For our single equation in two variables, as a rough approximation, we would expect $p$ points in general. So for each prime $p$, we define numbers $a_p$ which measure the deviation from this, $$a_p := p - \text{number of solutions to $(*)$ over $\mathbb{F}_p$}$$ Question: When is it true that the numbers $\{a_p\}$ are "modular", in the sense that there exists a modular form $$f = \sum_{n=1}^\infty b_n q^n$$ such that $b_p = a_p$ for almost all primes $p$? (The "almost all" is to avoid problems with bad primes. Note that $f$ is still uniquely determined by the above requirement.) The Modularity Theorem of Breuil, Conrad, Taylor and Diamond says that this is true when $E$ is an elliptic curve, i.e. takes the form $y^2 = 4x^3 - g_2 x - g_3$ for some integers $g_2, g_3$. In that case, $f$ is a weight 2 modular form of level $N$ where $N$ is the "conductor" of $E$. But is it true for more general varieties? (Note: I am aware of a generalized "Modularity Theorem" for certain Abelian varieties, but it's not clear to me that what people mean by "Modular" in that context is the same as the simple-minded notion I'm using --- that an adjusted count of points mod $p$ gives the Fourier coefficients of a modular form.)
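To experiment with this definition of $a_p$, it is easier to brute-force a classical curve than the sample equation $(*)$ above (the code below is not about the curve of this question): for the elliptic curve $y^2 = x^3 - x$ one can count affine solutions directly. The values produced match the classical fact that $a_p = 0$ for $p \equiv 3 \pmod 4$, since this curve has complex multiplication.

```python
def affine_count(p):
    """Count solutions (x, y) in F_p x F_p of y^2 = x^3 - x."""
    sq = {}  # value -> number of square roots mod p
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return sum(sq.get((x * x * x - x) % p, 0) for x in range(p))

# a_p = p - (number of affine solutions mod p), as defined above.
a = {p: p - affine_count(p) for p in (3, 5, 7, 11, 13)}
print(a)  # {3: 0, 5: -2, 7: 0, 11: 0, 13: 6}
```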
|
The correct setting for this construction turns out to be projective varieties, so let me suppose we have a smooth variety $X$ inside $\mathbf{P}^N$, for some $N \ge 1$, defined by the vanishing of some homogeneous polynomials $F_1, \dots, F_r$ in variables $x_0, \dots, x_N$, with the $F_i$ having coefficients in $\mathbf{Q}$. Actually, let me assume the $F_i$ have coefficients in $\mathbf{Z}$, which is no loss since we can just multiply up. Then we can make sense of the reduction of $X$ modulo $p$; and we want to study the point counts $\#X(\mathbf{F}_p)$ as a function of $p$, possibly neglecting some finite set $\Sigma$ containing all primes such that the reduction of $X$ mod $p$ is singular. Thanks to Grothendieck, Deligne, and others, we have a very powerful bunch of tools for analysing this problem. The setup is as follows. Choose your favourite prime $\ell$. Then the theory of etale cohomology attaches to $X$ a bunch of finite-dimensional $\mathbf{Q}_\ell$-vector spaces $H^i_{\mathrm{et}}(X_{\overline{\mathbf{Q}}}, \mathbf{Q}_\ell)$ (let me abbreviate this by $H^i_\ell(X)$ to save typing). The dimension of $H^i_\ell$ is the same as the $i$-th Betti number of the manifold $X(\mathbb{C})$; but they encode much more data, because each $H^i_\ell(X)$ is a representation of the Galois group $\operatorname{Gal}(\overline{\mathbf{Q}} / \mathbf{Q})$, unramified outside $\Sigma \cup \{\ell\}$; so for every prime not in this set, and every $i$, we have a number $t_i(p)$, the trace of Frobenius at $p$ on $H^i_\ell$, which turns out to be independent of $\ell$. Theorem: $\#X(\mathbf{F}_p)$ is the alternating sum $\sum_{i=0}^{2 \dim(X)} (-1)^i t_i(p)$. Now let's analyse $H^i_\ell$ as a Galois representation. Representations of Galois groups needn't be direct sums of irreducibles, but we can replace $H^i_\ell$ by its semisimplification, which does have this property and has the same trace as the original $H^i_\ell$. 
This semisimplification will look like $V_{i, 1} + \dots + V_{i, r_i}$ where $V_{i, j}$ are irreducible; and the $V_{i, j}$ all have motivic weight $i$, so the same $V$ can't appear for two different $i$'s. So we get a slightly finer decomposition $\#X(\mathbf{F}_p) = \sum_{i=0}^{2 \mathrm{dim} X} (-1)^i \sum_{j=1}^{k_i} t_{i, j}(p)$ where $t_{i,j}(p)$ is the trace of $Frob_p$ on $V_{i,j}$. Let me distinguish now between several different types of irreducible pieces: $V_{i, j}$ is a one-dimensional representation. Then $i$ must be even, and the trace of Frobenius on $V_{i, j}$ is $p^{i/2} \chi(p)$ where $\chi$ is a Dirichlet character. $V_{i, j}$ is two-dimensional and comes from a modular form. Then $t_{i,j}(p) = a_p(f)$, and $f$ must have weight $i+1$. $V_{i,j}$ is two-dimensional and doesn't come from a modular form. This can happen, but it's rare, and it's expected that all examples come from another kind of analytic object called a Maass wave form; in particular this forces $i$ to be even. $V_{i, j}$ has dimension $> 2$. Then $V_{i, j}$ cannot be the Galois representation coming from a modular form, because these always have dimension 2. You seem to want your varieties to have $X(\mathbf{F}_p)$ = (polynomial in $p$) + (coefficient of a modular form). From the above formulae, it's clear that this can only happen if all the $V_{i,j}$ have dimension 1 or 2; there is exactly one with dimension 2 and it comes from a modular form; and the one-dimensional pieces all come from the trivial Dirichlet character. This always happens for genus 1 curves, because the $H^0$ and $H^2$ are always 1-dimensional for a curve, and the genus condition forces the $H^1$ to be two-dimensional. However, once you step away from genus 1 curves, this is totally not the generic behaviour, and it will only occur for unusual and highly symmetric examples, such as the rigid Calabi-Yaus and extremal $K_3$ surfaces in the links you've posted.
|
{
"source": [
"https://mathoverflow.net/questions/260156",
"https://mathoverflow.net",
"https://mathoverflow.net/users/401/"
]
}
|
260,171 |
I am looking for a reference to the following claims: Any compact group (connected or not) acting on $S^2$ is differentiably conjugate to a linear action. This must be classical. A circle $S^1$ acting on $RP^3$ (and supposedly any spherical space form) is differentiably conjugate to a linear action.
This is probably true for every compact group acting on a $3$-dimensional spherical space form? Wolfgang Ziller
|
|
{
"source": [
"https://mathoverflow.net/questions/260171",
"https://mathoverflow.net",
"https://mathoverflow.net/users/103876/"
]
}
|
260,196 |
The Riemann zeta function is a function of a complex variable $s$ that analytically continues the sum of the Dirichlet series $$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s},$$ defined when the real part of $s$ is greater than $1$. My question here is: could the Riemann zeta function be a solution of a known differential equation? Note: I would appreciate a paper or reference showing that the zeta function is a solution of a known differential equation.
|
When posed properly, a long-standing open problem, but in the form you ask: Robert A. Van Gorder , MR 3276353 Does the Riemann zeta function satisfy a differential equation? , J. Number Theory 147 (2015), 778--788.
|
{
"source": [
"https://mathoverflow.net/questions/260196",
"https://mathoverflow.net",
"https://mathoverflow.net/users/51189/"
]
}
|
260,854 |
Background Martin Hairer recently gave some beautiful lectures in Israel on "taming infinities," namely on finding a mathematical theory that supports the highly successful computations from quantum field theory in physics. (Here are slides of a similar talk at Heidelberg, and a video of a related talk at UC Santa Cruz.) I think that a relevant paper where Hairer's theory is developed is: A theory of regularity structures, along with later papers with several coauthors. Taming infinities Quantum field theory computations represent one of the few most important scientific successes of the 20th century (or of all time, if you wish) and allow extremely good experimental predictions. They have the feature that computations are based on computing the first terms in a divergent series, and a rigorous mathematical framework for them is still lacking. This issue is sometimes referred to as the problem of infinities. Here is one relevant slide from Hairer's lecture about the problem. And here is a slide about Hairer's theory. The Question My question is for further introduction/explanation of Hairer's theory. 1) What are these tailor-made space-time functions? 2) What is the role of noise? 3) Can the amazing fact be described/explained in a little more detail? 4) In what ways does the theory provide a rigorous mathematical framework for renormalization and for physics' computations in quantum field theory? 5) How is Hairer's theory compared/related to other mathematical approaches to this issue (renormalization group, computations in quantum field theory, etc.)?
|
Let me try to expand a little bit on Ofer's answer, in particular on points 1-3. These functions (or rather distributions in general) are essentially the multilinear functionals of the driving noise that appear when one looks at the corresponding Picard iteration. For example, if we consider the equation (formally) given by $$\partial_t \Phi = \Delta \Phi - \Phi^3 + \xi,\tag{$*$}$$ write $P$ for the heat kernel, and write $X$ for one of the space-time coordinate functions, then we would try to locally expand the solution as a linear combination of the functions / distributions $1$, $X$, $P \star \xi$, $P\star (P\star \xi)^2$, $P\star (P\star \xi)^3$, $P\star (X\cdot (P\star \xi)^2)$, etc. The squares / cubes appearing here are of course ill-defined as soon as $d \ge 2$, so that one has to give them a suitable meaning. Each of these distributions naturally comes with a degree according to the rule that $\deg \xi = -{d + 2\over 2}$, $\deg (P\star \tau) = \deg \tau + 2$, and the degree is additive for products. One then remarks that, given any space-time point $z_0$ and any of these distributions, we can subtract a (generically unique) $z_0$-dependent linear combination of distributions of lower degree, so that the resulting distribution behaves near $z_0$ in a way that reflects its degree, just like what we do with the usual Taylor polynomials. To be consistent with existing notation, let's denote by $\Pi_{z_0}$ this recentering procedure, so for example $(\Pi_{z_0} X)(z) = z-z_0$. In our example, $\Pi_{z_0} \tau$ will be self-similar of degree $\deg \tau$ when zooming in around $z_0$. We can now say that a distribution $\eta$ has "regularity $\gamma$" if, for every point $z_0$, we can find coefficients $\eta_\tau(z_0)$ such that the approximation
$$
\eta \approx \sum_{\deg \tau < \gamma} \eta_\tau(z_0)\,\Pi_{z_0}\tau
$$
holds "up to order $\gamma$ near $z_0$". The "amazing fact" referred to in the slide is that even in situations where $\xi$ is very irregular, the solution to $(*)$ has arbitrarily high regularity in this sense, so that it can be considered as "smooth". There are now several review articles around detailing this construction, for example https://arxiv.org/pdf/1508.05261v1.pdf . Regarding the role of the noise, I already alluded to the fact that the squares / cubes / etc appearing in these expressions may be ill-posed, so that if you start with an arbitrary space-time distribution $\xi$ of (parabolic) regularity $-{d+2\over 2}$, there is simply no canonical way to define $(P\star \xi)^2$ as soon as $d \ge 2$. There is a general theorem saying that there is always a consistent way of defining these objects, yielding a solution theory for which all I said above is true, but this is not very satisfactory since it relies on many arbitrary choices. (In the case $d=2$ it relies on the choice of two arbitrary distributions with certain regularity properties, and quite a bit more in dimension $3$.) If however $\xi$ is a stationary generalised random field then, under rather mild assumptions, there is a way of defining these objects which is "almost canonical" in the sense that the freedom in the construction boils down to finitely many constants , as recently shown in https://arxiv.org/abs/1612.08138 .
|
{
"source": [
"https://mathoverflow.net/questions/260854",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1532/"
]
}
|
260,953 |
So I asked this question a few weeks ago on MSE and I was suggested to repost it here. Let $I$ be the unit interval. Suppose that $X$ and $Y$ are topological spaces such that $X\times I$ is homeomorphic to $Y\times I$. Does it follow that $X$ is homeomorphic to $Y$? As pointed out in the comments in the other thread, there are counterexamples to analogous questions with $I$ replaced by the circle or the real line. Therefore I expect the answer to my question to be negative too. I would also be interested in what one can assume about $X$ and $Y$ to make the implication true.
|
There are indeed counterexamples, to which Igor Belegradek gave references. Here is another counterexample in the plane, perhaps the simplest there is: Let $X$ be an annulus with one arc attached to one of its boundary components and another arc attached to the other boundary component, and let $Y$ be an annulus with two disjoint arcs attached to the same one of its boundary components.
|
{
"source": [
"https://mathoverflow.net/questions/260953",
"https://mathoverflow.net",
"https://mathoverflow.net/users/104252/"
]
}
|
261,036 |
In a paper I'm working on, I'm tempted to write something like: Note that the argument above also proves the following result: Scholium. bla bla Is this ok? Is it correct to say that a "scholium" is a "corollary of a proof"?
|
I am not a specialist in either etymology or the English language (nor am I a native speaker of English), but since the words scholium and porism both have Greek origins, I thought it might be of some interest to add some info on how these words have been used in both ancient and modern Greek: The word "porism" comes from the Greek word "Πόρισμα" and indicates something which is a direct implication of the preceding statement. I think the closest in English is corollary. (In non-mathematical contexts, the word πόρισμα also means the conclusion of some work.) The word "scholium" comes from the Greek word "Σχόλιο" and indicates something which, although it may be very closely connected to the preceding statement, does not necessarily stem directly from it (neither logically nor conceptually). In this sense, a scholium may indicate some resemblance to another notion or method from some other field or some distant application, even some piece of info on the origins of the preceding result or its importance from a more conceptual viewpoint. In mathematical texts (in Greek) the word "σχόλιο" is often used to discuss something related to relaxing the assumptions of the previous statement or indicating its limitations under the stated assumptions. (A working translation might be "comment", but I think that in Greek it is commonly used to indicate something more important than simply a comment; however, I am not a philologist, so I cannot tell for sure.) So, in my opinion, if you want to discuss something which is not a direct implication of your statement but is proved using methods similar to the argument(s) provided to prove your statement, or if you want to provide additional insight, then the word "scholium" might be appropriate. If, however, you wish to simply present some consequence of a line of argument you have already used, "porism" or "corollary" seems more appropriate, as other users have already indicated in their comments. 
In case you decide to use it, it would be better to avoid bold letters. Edit: Since the OP's original question is how (and if) the word "scholium" should be used in a mathematical paper, I feel that the question is of interest to the community of professional mathematics researchers. After all, the question has to do with the way a mathematical research paper is written and structured. However, the community will finally respond, one way or another. Maybe it would also be instructive (with regard to the OP's original question) to have a look at how the term "scholium" is used in this edition of Euclid's Elements (see for example the scholia on p. 104). Edit 2: Maybe it would also be of some interest to add that in Greek, the word "Σχόλιο" has the same root as the Greek word "Σχολείο", which means "school" (as has already been mentioned in a comment above by user Pietro Majer).
|
{
"source": [
"https://mathoverflow.net/questions/261036",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1516/"
]
}
|
261,562 |
Whenever we can interchange summation (perhaps due to Tonelli-Fubini), good things happen. Otherwise, one has to struggle evaluating double sums in just one way, because the alternative results in a divergent series. Having said that, I'm currently interested in the following: Have you encountered in your own research or do you recall from someone's research paper "interesting" double sums that are convergent where reversing order of summation does not work? Please provide references.
|
Since there's a "number theory" tag, I suggest the quasimodular form
$E_2(\tau)$, defined for $\tau$ in the upper half-plane as a multiple of
$\sum_{m\in\bf Z} {\sum_{n\in\bf Z}}' (m\tau+n)^{-2}$
where the $\prime$ indicates omission of the term $(m,n)=(0,0)$.
For even $k>2$, the corresponding sum
$\sum_{m\in\bf Z} {\sum_{n\in\bf Z}}' (m\tau+n)^{-k}$
converges absolutely and yields modularity of $E_k$.
But for $k=2$, switching the sums yields $\tau^{-2} E_2(1/\tau)$,
which is not the same thing as $E_2(\tau)$! (But you can still recover the
formula for the difference by carefully keeping track of how
switching $\sum_m$ and $\sum_n$ changes the sum).
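For readers who want a minimal concrete instance of the phenomenon, here is a toy array (my addition, not one of the "interesting" research examples the question asks for): take $a_{m,n} = 1$ if $n = m$, $-1$ if $n = m+1$, and $0$ otherwise. Every row sums to $0$, but the first column sums to $1$, so the two iterated sums are $0$ and $1$ even though every inner sum converges.

```python
# Toy double sum where the two iterated sums disagree.
# a(m, n) = 1 if n == m, -1 if n == m + 1, else 0 (indices start at 1).
# Each row and each column has finite support, so the inner sums below
# are computed exactly, yet the iterated sums differ.

def a(m, n):
    if n == m:
        return 1
    if n == m + 1:
        return -1
    return 0

def row_sum(m):   # sum over all n; row m is supported on {m, m + 1}
    return a(m, m) + a(m, m + 1)

def col_sum(n):   # sum over all m; column n is supported on {n - 1, n}
    return (a(n - 1, n) if n >= 2 else 0) + a(n, n)

S_rows_then_cols = sum(row_sum(m) for m in range(1, 10**4))  # sum_m sum_n
S_cols_then_rows = sum(col_sum(n) for n in range(1, 10**4))  # sum_n sum_m
print(S_rows_then_cols, S_cols_then_rows)  # 0 1
```

Note that any finite rectangular truncation of a double sum commutes; the discrepancy only appears in the iterated limits, which is why the code sums each row and column over its full (finite) support.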
|
{
"source": [
"https://mathoverflow.net/questions/261562",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
261,659 |
Let $G$ be a quasi-split connected reductive group over a $p$-adic field $F$. Let $B$ be a Borel subgroup which is defined over $F$, with $B = TU$, $T$ defined over $F$. The choice of $T$ and $B$ gives a set of nonrestricted roots $\tilde{\Delta}$, which together with an $F$-splitting and a nontrivial unitary character $F \rightarrow S^1$ yields a unitary character $\chi$ of $U(F)$. Let $V$ be a smooth, irreducible, admissible representation of $G(F)$. A linear functional $\lambda: V \rightarrow \mathbb{C}$ is called a Whittaker functional for $\chi$ if for all $u \in U(F)$ and $v \in V$, we have $$\lambda(u \cdot v) = \chi(u) \lambda(v).$$ It is a theorem that the space of Whittaker functionals for $\chi$ is at most one dimensional. If this dimension is one, then $V$ is called $\chi$-generic. Fix a nonzero Whittaker functional $\lambda$, and for $v \in V$, define a function $W_v : G(F) \rightarrow \mathbb{C}$ by $W_v(g) = \lambda(g \cdot v)$. The set $W = W_{\lambda}$ of such functions is closed under addition and scalar multiplication, and becomes a representation of $G(F)$ if we set $g \cdot W_v = W_{g\cdot v}$. Then the representation $W$ is called a Whittaker model, and up to $G(F)$-isomorphism it does not depend on the choice of $\lambda$, since these functionals are all scalar multiples of each other. From what I can see, $V \rightarrow W_{\lambda}, v \mapsto W_v$, is an isomorphism of $G(F)$-modules. It is clearly surjective, and for injectivity, if we suppose that $0 \neq v \in V$, and $W_v(g) = 0$ for all $g \in G(F)$, then $\lambda(g \cdot v) = 0$ for all $g \in G(F)$. This is impossible, because $\{g \cdot v : g \in G(F)\}$ spans $V$, as $V$ is irreducible. So my question is, what is the point of defining Whittaker models if the thing we defined is just isomorphic to our original representation? Why would it be important to regard $V$ as a set of functions $G(F) \rightarrow \mathbb{C}$?
|
This question is a bit like saying "what's the point of the theory of bases for vector spaces -- this just gives you an isomorphism of your space with $\mathbb{R}^n$. What is the point of defining this isomorphism if the thing we defined is just isomorphic to our original space? Why would it be important to regard $V$ as a set of vectors in $\mathbb{R}^n$?" As you probably know, some questions involving vector spaces do not involve choosing a basis. But sometimes when you want to do some explicit calculation, you have to resort to coordinates. Here is an example with vector spaces. How are you going to prove that the natural map from a finite-dimensional real vector space to its double-dual is an isomorphism? A natural way to do this would be to check it's an injection and then do a dimension count to check it's an isomorphism. This reduces the question to checking that the dual of an $n$-dimensional vector space is $n$-dimensional, and if you pick a basis this reduces the question to checking it for the "model" $\mathbb{R}^n$ of an $n$-dimensional vector space, and this case is easy. Now consider the following theorem of Casselman. Let $G=GL_2(\mathbb{Q}_p)$ and consider a smooth irreducible admissible representation of $G$ on a complex vector space $V$. Casselman observed that if we look at the invariants for the congruence subgroups $\Gamma_n:=\begin{bmatrix} * & * \\ 0 & 1 \end{bmatrix}$ mod $p^n$ as $n$ increases, then there are two possibilities. Either $V$ is 1-dimensional and the dimensions of the invariants are $0,0,0,\ldots,0,1,1,1,1,\ldots$ (the jump being where the basis vector becomes stable) or $0,0,0,\ldots,0,1,2,3,4,\ldots$; in particular, at the first point where the dimension becomes positive (which it has to do at some point, by smoothness) it equals 1. How are we going to prove this in the infinite-dimensional case? We just have some completely abstract vector space with an action of $G$. 
How do we even begin to get a feeling for any action of any element of $G$ on any vector in $V$ at all? I invite you to go away and think about trying to solve this problem. Now imagine that someone tells you that there's this completely natural model of $V$ -- of any infinite-dimensional smooth irreducible representation of $GL_2(\mathbb{Q}_p)$ -- as a certain space of functions, and we know completely explicitly how certain elements of $G$ act on this space. All of a sudden we have a completely concrete "model" for the situation and can begin to do calculations! And this gives you a foothold into beginning the proof.
|
{
"source": [
"https://mathoverflow.net/questions/261659",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38145/"
]
}
|
261,877 |
A few months ago I came up with a proof for an old theorem. After being excited for a moment, I then tried to find my proof in the literature. Since I did not find it, then I started to wonder if it was worth publishing it. I asked a few people about journals that could publish something like this, and they gave me two recommendations: (1) The Mathematical Gazette, http://www.m-a.org.uk/the-mathematical-gazette (2)The Plus Magazine, https://plus.maths.org/content/about-plus First I submitted to the Mathematical Gazette, and my article was rejected because according to the reviewer I was trying to prove something very simple using something much more complex (although I just used undergraduate level math). Then I submitted to Plus, and it was also rejected by the editors (it probably doesn't fit well with their magazine). Do you have any suggestions? Thanks.
|
If the old theorem is something commonly seen in an undergraduate math class (with the old demonstration), then this might be appropriate as a "Note" in the American Mathematical Monthly . What could happen if you submit it? They may publish it. The referee may give you a reference for it. They may respond in the same way as the Gazette . What if the old theorem is not commonly seen in an undergraduate math course? When you write a textbook on that area of math, you can include your new proof. But if you think it unlikely you will write a textbook on this, then probably there is little prospect for publishing this. Maybe if you make it known to the experts* then some day one of them may include it in their new textbook. *Perhaps by posting somewhere on-line...
|
{
"source": [
"https://mathoverflow.net/questions/261877",
"https://mathoverflow.net",
"https://mathoverflow.net/users/104725/"
]
}
|
262,030 |
In my previous MO question , the inequality was about a specific series and was nicely answered by Cherng-tiao Perng. After testing with a few more numerical infinite sums, I came to realize that perhaps more is true. Does the following hold true for any convergent series?
$$\left(\sum_{k\geq1}t_k\right)^4
<\pi^2\left(\sum_{k\geq1}t_k^2\right)\left(\sum_{k\geq1}k^2t_k^2\right).$$
Is the constant $\pi^2$ optimal?
|
This is known to be Carlson's inequality from 1935 (for $t_k\geq 0$, and not all $t_k$ are $0$). The Swedish mathematician Fritz Carlson (1888-1952) also proved the optimality of the constant $\pi^2$. For an elegant elementary proof of the inequality see G. H. Hardy, A note on two inequalities, J. London Math. Soc. 11, 167-170, 1936 (DOI https://doi.org/10.1112/jlms/s1-11.3.167 or http://onlinelibrary.wiley.com/doi/10.1112/jlms/s1-11.3.167/abstract ). Note : This inequality has found many modern day applications and generalizations, see the book "Multiplicative Inequalities of Carlson Type and Interpolation" by L. Larsson et al., World Scientific, 2006. (DOI https://doi.org/10.1142/6063 ; there you will find a free sample chapter with the classical proofs of Carlson and Hardy.)
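As a quick numerical sanity check (my addition; a truncated nonnegative sequence is itself a valid sequence with trailing zeros, so Carlson's inequality applies to it directly), take $t_k = 1/k^2$:

```python
# Numerical check of Carlson's inequality
#   (sum t_k)^4 < pi^2 * (sum t_k^2) * (sum k^2 t_k^2)
# for the sequence t_k = 1/k^2, truncated at N.
import math

N = 10000
t = [1.0 / k**2 for k in range(1, N + 1)]
s1 = sum(t)                                           # -> pi^2/6
s2 = sum(x * x for x in t)                            # -> pi^4/90
s3 = sum((k * x) ** 2 for k, x in zip(range(1, N + 1), t))  # -> pi^2/6
lhs = s1 ** 4
rhs = math.pi ** 2 * s2 * s3
assert lhs < rhs
print(lhs, rhs)
```

Here the left side is about $7.3$ and the right side about $17.6$, so this particular sequence is far from saturating the constant; Carlson's extremal behaviour is approached along a different family of sequences.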
|
{
"source": [
"https://mathoverflow.net/questions/262030",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
262,184 |
While trying to understand a paper of Cayley, I found that he left something unexplained. I managed to show that it is equivalent to the following formula, at which I got stuck: $$k \cdot (f^k)^{(k-1)} = \sum_{j=0}^{k-1} {{k} \choose {j}} (f^{j})^{(j)}(f^{k-j})^{(k-j-1)}. $$ Can somebody help?
|
A detailed historical discussion of identities like this one can be found in Warren P. Johnson's paper The Pfaff/Cauchy derivative identities and Hurwitz type extensions , The Ramanujan Journal 13 (2007) pp. 167–201.
In particular, his formula (1.3) is $$\frac{d^n\ }{dx^n} \phi^n(x) u(x) v(x) = \sum_{k=0}^n \binom nk \left(\frac{d^k\ }{dx^k} \phi^k(x) u(x)\right)
\left(\frac{d^{n-k-1}}{dx^{n-k-1}}\phi^{n-k}(x) v'(x)\right)\tag{1}\label{1}$$ where for $n=k$ , $\frac{d^{n-k-1}}{dx^{n-k-1}}\phi^{n-k}(x) v'(x)$ is to be interpreted as $v(x)$ . This identity was given by Cauchy in 1826, but is equivalent to an identity given by Pfaff in 1795. The OP's identity is equivalent to the case $\phi(x)=f$ , $u(x)=1$ , $v(x) = x$ . There is a related formula in Cayley's paper On the partitions of a polygon , Coll. Math. Papers 13 (1897), 93–113; Proc. London Math. Soc. (1) 22 (1891), 237–262, but I didn't see this formula there. Formulas like \eqref{1} are discussed in
my survey paper Lagrange Inversion , Journal of Combinatorial Theory, Series A 144 (2016), pp. 212–249, section 2.6. There is also a discussion of a somewhat related formula on Terry Tao's blog at Another problem about power series . Added November 12, 2022: A new paper with multivariable generalizations of these identities is Wenchang Chu, Multiple derivative inversions and Lagrange-Good expansion formulae ( DOI is being registered), Mathematics 10 (2022), 4234.
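The OP's identity is easy to check symbolically for small $k$; here is a self-contained sketch (my addition) using exact integer polynomial arithmetic, with the convention that the $j=0$ factor $(f^0)^{(0)}$ equals $1$:

```python
# Exact check of  k*(f^k)^(k-1) = sum_{j=0}^{k-1} C(k,j) (f^j)^(j) (f^{k-j})^(k-j-1)
# for a polynomial f.  Polynomials are integer coefficient lists, lowest degree first.
from math import comb

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, c in enumerate(p):
        for j, d in enumerate(q):
            r[i + j] += c * d
    return r

def power(p, e):
    r = [1]
    for _ in range(e):
        r = mul(r, p)
    return r

def deriv(p, order=1):
    for _ in range(order):
        p = [i * c for i, c in enumerate(p)][1:] or [0]
    return p

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

f = [1, 2, 3]  # f(x) = 1 + 2x + 3x^2 (an arbitrary test polynomial)
for k in range(1, 6):
    lhs = [k * c for c in deriv(power(f, k), k - 1)]
    rhs = [0]
    for j in range(k):
        term = mul(deriv(power(f, j), j), deriv(power(f, k - j), k - j - 1))
        term = [comb(k, j) * c for c in term]
        rhs = [x + y for x, y in zip(rhs + [0] * len(term), term + [0] * len(rhs))]
    assert trim(lhs) == trim(rhs), k
print("identity verified for k = 1..5")
```

For instance, at $k=2$ both sides reduce to $4ff'$, and at $k=3$ both sides equal $18ff'^2 + 9f^2f''$; the code confirms these expansions coefficient by coefficient.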
|
{
"source": [
"https://mathoverflow.net/questions/262184",
"https://mathoverflow.net",
"https://mathoverflow.net/users/104880/"
]
}
|
262,349 |
This might be easy, but let's see. Question 1. If $\mathfrak{S}_n$ is the group of permutations on $[n]$, then is the following true?
$$\sum_{\pi\in\mathfrak{S}_n}\prod_{j=1}^n\frac{j}{\pi(1)+\pi(2)+\cdots+\pi(j)}=1.$$ Update. After Lucia's answer, I got motivated to ask: Question 2. Let $z_1,\dots,z_n$ be indeterminates. Is this identity true too?
$$\sum_{\pi\in\mathfrak{S}_n}\prod_{j=1}^n\frac{z_j}{z_{\pi(1)}+z_{\pi(2)}+\cdots+z_{\pi(j)}}=1.$$ Update. Now that we've an analytic and an algebraic proof, is there a combinatorial argument too? At least for Question 1 .
|
Yes. Write the sum as
$$
\sum_{\pi \in S_n} n! \int_0^{\infty} e^{-\pi(1) x_1} \int_{0}^{\infty} e^{-(\pi(1)+\pi(2)) x_2} \ldots \int_0^{\infty} e^{-(\pi(1) + \ldots +\pi(n))x_n} dx_n \ldots dx_1.
$$
Upon writing $y_n = x_n$, $y_{n-1}=x_n+x_{n-1}$, etc this becomes
$$
\sum_{\pi \in {S_n} } n! \int_{y_1 \ge y_2 \ge \ldots y_n \ge 0} e^{-\sum_{i=1}^{n} \pi(i)y_i } dy_1 \ldots dy_n.
$$
Since the sum is over all $\pi \in S_n$, it doesn't matter that we are in the region $y_1 \ge y_2 \ge \ldots \ge y_n$ -- for any other ordering of the non-negative variables $y_i$, the answer would be the same. So recast the above as
$$
\int_0^{\infty} \ldots \int_0^{\infty} \sum_{\pi \in S_n} e^{-\sum_{i=1}^{n} \pi(i) y_i} dy_1 \ldots dy_n
= \sum_{\pi \in S_n} \frac{1}{n!} = 1.
$$
|
{
"source": [
"https://mathoverflow.net/questions/262349",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
263,703 |
Consider the sequence $a_n=2^{2n}\binom{2n}n^{-1}$. Stirling's approximation shows that $a_n\sim \sqrt{\pi n}$, thus
$$\sum_{n\geq0}\frac{\pi}{2a_n}\qquad \text{and} \qquad
\sum_{n\geq0}\frac{a_n}{2n+1}$$
are both divergent series. However, their difference should converge with terms of order $\sim\frac1{n^{3/2}}$. Question. In fact, is this true?
$$\sum_{n=0}^{\infty}\left(\frac{\pi}{2a_n}-\frac{a_n}{2n+1}\right)=1.$$
|
We have
$$ f(x):=\sum_{n\geq 0}\frac{x^{2n}}{a_n} =
\frac{1}{\sqrt{1-x^2}} $$
and
$$ g(x):=\sum_{n\geq 0} \frac{a_n}{2n+1}x^{2n} =
\frac{\sin^{-1}x} {x\sqrt{1-x^2}}. $$
It is routine to compute that
$$ \lim_{x\to 1-}\left(\frac 12\pi f(x)-g(x)\right)=1 $$
and then apply Abel's theorem .
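A quick numerical check of the claimed value (my addition): using the recurrence $a_n = a_{n-1}\cdot\frac{2n}{2n-1}$ with $a_0 = 1$, the partial sums approach $1$ from below, with a tail of size roughly $\sqrt{\pi}/(4\sqrt{N})$ since the terms decay like $\sqrt{\pi}/(8 n^{3/2})$.

```python
# Partial sums of  sum_n ( pi/(2 a_n) - a_n/(2n+1) )  with a_n = 4^n / C(2n, n).
# The recurrence a_n = a_{n-1} * 2n/(2n-1) avoids computing huge binomials.
import math

N = 10**5
a = 1.0                 # a_0 = 1
s = math.pi / 2 - 1.0   # n = 0 term: pi/(2 a_0) - a_0/1
for n in range(1, N):
    a *= 2 * n / (2 * n - 1)
    s += math.pi / (2 * a) - a / (2 * n + 1)
print(s)  # close to 1; the tail is O(1/sqrt(N))
```

All terms are positive, so the partial sums increase to the limit; with $N = 10^5$ the sum is within a few thousandths of $1$.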
|
{
"source": [
"https://mathoverflow.net/questions/263703",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
264,805 |
Is it true that a path connecting two opposite points (i.e. such that the segment joining them passes through the centre of mass of the cube) on the surface of the $d$-dimensional unit cube (with $d>1$) is not shorter than $2$?
|
Consider the sphere whose equator has length 4.
Divide it into spherical cubes, the central projections of the faces of an inscribed cube. Note that the exponential map from the tangent plane to the sphere is short.
Note also that if one maps a unit cube centered at the origin by the exponential map, it will cover the spherical cube.
It is easy to modify the map to get a short symmetric map from the unit cube to each spherical cube. Joining all these maps, we get a short map from the surface of the cube to our sphere, where the statement is evident.
|
{
"source": [
"https://mathoverflow.net/questions/264805",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2158/"
]
}
|
265,299 |
There are several conventions for the definition of the Fourier transform on the real line. 1 . No $2\pi$. Fourier (with cosine/sine), Hörmander, Katznelson, Folland. $ \int_{\bf R} f(x) e^{-ix\xi} \, dx$ 2 . $2\pi$ in the exponent. L. Schwartz, Trèves $\int_{\bf R} f(x) e^{-2i\pi x\xi} \, dx$ 3 . $2\pi$ square-rooted in front. Rudin. ${1\over \sqrt{2\pi}} \int_{\bf R} f(x) e^{-ix\xi} \, dx$ I would like to know what are the mathematical reasons to use one convention over the others? Any historical comment on the genesis of these conventions is welcome.
Who introduced conventions 2 and 3? Are they specific to a given context? From the book of L. Schwartz, I can see that the second convention allows for a perfect parallel in formulas concerning Fourier transforms and Fourier series. The first convention does not make the Fourier transform an isometry, but in Fourier's memoir the key formula is the inversion formula; I don't think that he discussed what is now known as the Plancherel formula. Regarding the second convention, Katznelson warns about the possibility of increased confusion between the domains of definition of a function and its transform.
|
Convention 2 is the only one that makes the Fourier transform both a unitary operator on $L^2$ and an algebra homomorphism from the convolution algebra on $L^1$ to the product algebra on $L^\infty$. It is not, however, of widespread use in analysis as far as I know. From the point of view of semiclassical analysis, it amounts roughly speaking to considering Planck's constant $h$ rather than $\hslash=h/2\pi$ as the semiclassical parameter (or as the constant set to one in quantum systems). This differs somewhat from the common practice in physics.
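A numerical illustration (my addition) of one virtue of convention 2: the Gaussian $e^{-\pi x^2}$ is its own transform under it, which is one concrete sign of the unitarity mentioned above.

```python
# Under convention 2,  F(g)(xi) = int g(x) exp(-2 pi i x xi) dx,
# the Gaussian g(x) = exp(-pi x^2) satisfies F(g) = g.  We check this
# with a plain Riemann sum, which is extremely accurate here because
# the integrand is smooth and decays rapidly.
import cmath
import math

def fourier2(f, xi, L=10.0, h=1e-3):
    """Riemann-sum approximation of int_{-L}^{L} f(x) exp(-2 pi i x xi) dx."""
    n = int(2 * L / h)
    return sum(f(-L + k * h) * cmath.exp(-2j * math.pi * (-L + k * h) * xi)
               for k in range(n)) * h

g = lambda x: math.exp(-math.pi * x * x)
for xi in [0.0, 0.5, 1.0]:
    val = fourier2(g, xi)
    assert abs(val - g(xi)) < 1e-6
print("exp(-pi x^2) is its own transform under convention 2")
```

Under conventions 1 and 3 the self-dual Gaussian is instead $e^{-x^2/2}$, up to the respective normalizing constants.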
|
{
"source": [
"https://mathoverflow.net/questions/265299",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6129/"
]
}
|
265,310 |
If I swap the digits of $\pi$ and $e$ in infinitely many places, I get two new numbers. Are these two numbers transcendental?
|
Nice question, Erin. Here is one quick easy thing to say. If $\pi$ and $e$ disagree in infinitely many digits, then there are continuum many choices of the particular subset of those digits to swap, and so we get continuum many different numbers this way. Since there are only countably many algebraic numbers, it would follow that most of the time, yes, you do get transcendental numbers by doing this. I'm unsure, however, whether one can say that all the resulting reals are transcendental. Perhaps we'll have to wait for some number theory experts to answer. Lastly, if it happens (as seems unlikely) that all but finitely many digits of $\pi$ and $e$ are the same, then $\pi-e$ would be rational, and furthermore swapping the digits doesn't actually do anything except on those finitely many digits of difference, and so this won't affect transcendentality. In this case, there are only finitely many possible reals resulting, but they are all differing from the original reals by only finitely many digits, and so yes, they are all transcendental.
|
{
"source": [
"https://mathoverflow.net/questions/265310",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
265,438 |
Question. The following is always an integer. Is it not?
$$\frac{(2^n-1)(2^n-2)(2^n-4)(2^n-8)\cdots(2^n-2^{n-1})}{n!}.$$ John Shareshian has supplied a cute proof. I'm encouraged to ask: Question. Can you give alternative proofs, even if they are not particularly as short?
|
It is, because $S_n$ embeds in $GL_n({\mathbb F}_2)$.
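To spell this out (my addition): the numerator is exactly the order of $GL_n(\mathbb{F}_2)$, and $S_n$ sits inside it as the group of permutation matrices, so $n!$ divides the order by Lagrange's theorem. A direct check for small $n$:

```python
# |GL_n(F_q)| = (q^n - 1)(q^n - q) ... (q^n - q^{n-1}); with q = 2 this is
# the numerator in the question.  Since S_n embeds as permutation matrices,
# n! divides this order, so the quotient is an integer.
from math import factorial

def gl_order(n, q=2):
    order = 1
    for i in range(n):
        order *= q**n - q**i
    return order

for n in range(1, 15):
    assert gl_order(n) % factorial(n) == 0

print([gl_order(n) // factorial(n) for n in range(1, 6)])  # [1, 3, 28, 840, 83328]
```

The same argument works over any $\mathbb{F}_q$, since the permutation matrices land in $GL_n(\mathbb{F}_q)$ for every $q$.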
|
{
"source": [
"https://mathoverflow.net/questions/265438",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
265,493 |
Thurston's 1982 article on three-dimensional manifolds 1 ends with $24$ "open questions": $\cdots$ Two naive questions from an outsider:
(1) Have all $24$ now been resolved?
(2) If so, were they all resolved in his lifetime? 1 Thurston, William P. "Three dimensional manifolds, Kleinian groups and hyperbolic geometry." Bull. Amer. Math. Soc , 6.3 (1982).
Also: In Proc. Sympos. Pure Math., vol. 39, pp. 87-111, 1983. CiteSeer PDF download link. Answered by Ian Agol, Andy Putman, and Igor Rivin. Ian: "Problems 1-18 have been completely answered....Problems 19-24 are more open-ended," and difficult to declare "all settled" (as emphasized by YCor). But, as Andy says,
"with the exception of problem 23." Back to Ian:
"One can imagine, however, a complete and satisfactory answer eventually to question 23."
|
A nice summary of the status of these problems may be found here: Otal, Jean-Pierre, William P. Thurston: ``Three-dimensional manifolds, Kleinian groups and hyperbolic geometry", Jahresber. Dtsch. Math.-Ver. 116, No. 1, 3-20 (2014). ZBL1301.00035 . I would characterize it this way: Problems 1-18 have been completely answered, although some of the answers are still unpublished (but exist as preprints and are still under submission). However, some of these questions (maybe 4, 7, and 8) are less precisely formulated and somewhat open-ended, so one could argue whether or not they are answered completely. For problem 4, Hodgson's thesis addresses one part of the question: "Describe the limiting geometry
which occurs when hyperbolic Dehn surgery breaks down." I could list several other papers on this topic. Problems 19-24 are more open-ended, and thus will likely never be completely satisfactorily addressed ( I discussed some of these problems here) . No progress has been made on problem 23 as far as I know (which was due to Milnor originally, not Thurston). One can imagine,
however, a complete and satisfactory answer eventually to question 23. Otal does not comment much on Problem 24, which again is somewhat imprecise (what does "most" mean?). As Igor mentions, Joseph Maher has given a satisfactory answer . One could also argue that this was satisfactorily answered in Hempel's paper together with geometrization (Heegaard distance $>2$ implies hyperbolic). But there is other work on this question, such as giving a model for a hyperbolic manifold of bounded genus and bounded geometry (a lower bound on the injectivity radius) by Brock, Minsky, Namazi, and Souto. Moreover, these authors have a program to understand the unbounded geometry case as well (so it would eventually give a description of the geometry of all hyperbolic manifolds of bounded Heegaard genus in some sense). Thus, one could consider this problem to still be open for a variety of reasons.
|
{
"source": [
"https://mathoverflow.net/questions/265493",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
266,738 |
(Disclaimer: I'm no expert in homotopy theory nor in higher categories!) If I understand it correctly, Grothendieck's homotopy hypothesis states that there should be an equivalence (of $(n+1)$ -categories) between "homotopy $n$ -types" and $n$ -groupoids. Where, by "homotopy $n$ -types" is probably meant the $(\infty,n+1)$ -category that has (nice) topological spaces with vanishing homotopy groups above the $n$ -th as objects, and higher morphisms given by homotopies and homotopies-between-homotopies etc. And by " $n$ -groupoid" is probably understood $(\infty,n)$ -groupoid. Edit: the homotopy types probably are defined to be some localization of the thing I stated above? To which extent has the homotopy hypothesis been proved? By "proved" I mean precise statements and rigorous proofs, not just "philosophical" evidence; and not "tautological" solutions in which homotopy types are defined to be $\infty$ -groupoids in the first place. All this fits in the context of Whitehead's algebraic homotopy programme Which is the present status of that programme, both in the sense of formalization and of proof? How can any advance in the programme be made at all if Grothendieck's conjecture is not fully proven first?
|
The problem is that the question is highly dependent on the definition of $n$-groupoids. The notion of strict $n$-groupoid is very clear and precise, but we know very well (and Grothendieck knew it) that the homotopy hypothesis is false if we only use strict groupoids. One needs to use weak $n$-groupoids (where, for example, composition is associative up to isomorphism, and so on). But there is no unique definition of what an $n$-groupoid is. There are plenty of non-equivalent definitions, which are supposed to become equivalent once we move to "homotopy categories". For example, even for $2$-groupoids, you could ask for a binary composition operation that composes two (composable) arrows and satisfies associativity up to isomorphism, plus the coherence condition between the associativity isomorphisms; or, following the "unbiased" path, you could have for each $n$ an $n$-ary composition operation that composes strings of $n$ (composable) arrows, plus compatibility isomorphisms between these operations. The two approaches are not strictly equivalent, but will produce the same "homotopy category" (i.e. will be equivalent if one only considers groupoids up to categorical equivalence). For example, taking "Kan complex" as a definition of $\infty$-groupoid can be considered a reasonable choice, and not, as you said, a "tautological solution in which homotopy types are defined to be $\infty$-groupoids in the first place": in a Kan complex, you do have a notion of $n$-morphisms, and you can compose them and so on. It is a purely algebraic notion, so to some extent I guess it could be considered an answer to the Whitehead program, depending on how you interpret the vague formulation given in the link you mention. Finally, the equivalence between the homotopy category of spaces and that of Kan complexes is a rather nontrivial result, relying on an "algebraic approximation" result (the "simplicial approximation theorem"). 
(My understanding of the history is that the realization that one can do homotopy theory purely algebraically using simplicial sets follows from Kan's work in the 50's, so a few years after Whitehead's ICM talk, but I don't know much about it, so maybe someone could clarify this?) In Pursuing Stacks , Grothendieck did give a different definition of $\infty$-groupoid, which follows a "globular" combinatorics, i.e. where instead of simplices as in Kan complexes, one just has a notion of $n$-arrows between each pair of parallel $(n-1)$-arrows and operations on those. By the "homotopy hypothesis" one often refers to the statement that the homotopy category of Grothendieck $\infty$-groupoids is equivalent to the homotopy category of spaces (see G.Maltsiniotis paper on this ). This version of the homotopy hypothesis is still widely open. On the other hand, a proof of this version of the homotopy hypothesis does not seem like it would provide a better answer to the Whitehead program than a Kan complex: it is basically not easier to compute homotopy classes of maps with globular $\infty$-groupoids than it is with Kan complexes. The only difference between the two is the type of combinatorics that you have to describe your $n$-arrows and the relations between them, a bit in the same vein as the example I gave at the beginning with $2$-groupoids. Finally, if I'm allowed to quote my own work (which I think brings some light to the question): in a recent paper I gave a different definition of $\infty$-groupoid, which is still globular (i.e. where you just have a collection of $n$-arrows between each pair of parallel $(n-1)$-arrows and some operations on those) but for which one can prove the homotopy hypothesis. These groupoids can be defined informally as globular sets equipped with all the operations that one can construct on a type in a weak version of intensional type theory. 
I also proved in the same paper that Grothendieck's formulation of the homotopy hypothesis is implied by a certain technical conjecture on the behavior of homotopy groups of finitely generated Grothendieck $\infty$-groupoids, which can be dually understood as saying that certain operations that one should be able to perform on arrows in an $\infty$-groupoid can indeed be defined from the operations Grothendieck puts on his $\infty$-groupoids (because maps between finitely generated objects are the same as operations...). I believe that my results show that our inability to prove Grothendieck's formulation of the homotopy hypothesis has nothing to do with an inability to express homotopy types as $\infty$-groupoids, but rather with some technical combinatorial difficulties inherent to Grothendieck's definition (basically, it is not totally clear yet whether Grothendieck's definition of $\infty$-groupoid is 'correct' and well behaved).
|
{
"source": [
"https://mathoverflow.net/questions/266738",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4721/"
]
}
|
266,849 |
Let me apologize in advance as this is possibly an extremely stupid question: can one prove or disprove the existence of a bijection from the plane to itself such that the image of any circle is a square? Or, more generally, are there any shapes other than a square for which such a bijection does exist? (Obviously, a linear map sends a circle to an ellipse of fixed dimensions and orientation.)
|
There is no such bijection. To see this, imagine four circles all tangent to some line at some point $p$, but all of different radii, so that any two of them intersect only at the point $p$. (E.g., any four circles from this picture .) Under your hypothetical bijection, these four circles would map to four squares, any two of which have exactly one point in common, the same point for any two of them. You can easily convince yourself that no collection of four squares has this property.
|
{
"source": [
"https://mathoverflow.net/questions/266849",
"https://mathoverflow.net",
"https://mathoverflow.net/users/70190/"
]
}
|
266,863 |
Let $\phi:X\dashrightarrow Y$ be a generically finite, dominant rational map between smooth projective varieties over $\mathbb{C}$. Assume that $Y$ is of general type. May we conclude then that $X$ is of general type as well?
|
|
{
"source": [
"https://mathoverflow.net/questions/266863",
"https://mathoverflow.net",
"https://mathoverflow.net/users/56323/"
]
}
|
267,002 |
Let $G(V,E)$ be a graph. I am searching for graphs with only disjoint perfect matchings (i.e. every edge appears in at most one of the perfect matchings). Examples: the cyclic graph $C_n$ with even $n$, with $m=2$ disjoint perfect matchings; the complete graph $K_4$, with $m=3$ disjoint perfect matchings. I have three questions: What are such graphs called? Are there other examples than $C_n$ and $K_4$? What is the maximum number $m$ of perfect matchings, if the graph has only completely disjoint perfect matchings? For question 3, it seems to me that $K_4$ with $m=3$ different, disjoint perfect matchings is the optimum, but I have no proof of that. Every hint towards an answer or relevant literature would be very much appreciated! Edit: I am interested in undirected graphs only for the moment. Edit2: I have used the answer to this question in a recent article in Physical Review Letters , where I cite this MO question as reference [24]. See Figure 2 for a detailed variant of the application of Ilya's answer. Thanks Ilya!
|
$m=3$ is indeed the maximum, and $K_4$ is the only example for this value of $m$. Two perfect matchings form a disjoint union of cycles. If there is more than one cycle, then you may swap one of them, obtaining a third matching on the same edges. So any two of the $m$ matchings form a Hamiltonian cycle. Assume that $m\geq3$; consider a Hamiltonian cycle $v_1,\dots,v_{2n}$ formed by the first two matchings, and check what the third one looks like. If some edge $(v_i,v_j)$ of it subtends an arc of odd length (i.e. if $i-j$ is odd), then we may split the vertices outside this arc into pairs, and split the cycle $v_i,\dots,v_j$ into edges including $(v_i,v_j)$, obtaining a matching intersecting the third one but not coinciding with it. This should not be possible; thus each subtended arc is even. Now take an edge $(v_i,v_j)$ subtending a minimal such arc, and consider an edge $(v_{i+1},v_k)$ of the third matching (going outside this arc). Now split the cycle $v_i,v_j,v_{j+1},\dots,v_k,v_{i+1}$ into edges containing $(v_i,v_j)$, and split all the remaining vertices into edges of the Hamiltonian cycle (this is possible, by parity). If $2n>4$, you again get a fourth matching sharing edges with the third one but different from it. Thus $2n\leq 4$ and $m\leq 3$.
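The two examples from the question can be double-checked by brute force. Here is a minimal stdlib-only sketch (the function names are mine) that enumerates all perfect matchings of $K_4$ and $C_6$ and confirms they are pairwise edge-disjoint:

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """All perfect matchings of the graph, as frozensets of edges.

    Always matches the minimum remaining vertex, so each matching
    is produced exactly once."""
    if not vertices:
        return [frozenset()]
    v = min(vertices)
    result = []
    for e in edges:
        if v in e:
            u = e[0] if e[1] == v else e[1]
            rest_v = vertices - {v, u}
            rest_e = [f for f in edges if v not in f and u not in f]
            for m in perfect_matchings(rest_v, rest_e):
                result.append(m | {e})
    return result

def all_pairwise_disjoint(matchings):
    return all(not (a & b) for a, b in combinations(matchings, 2))

# K_4: three perfect matchings, all pairwise disjoint.
k4_edges = [(i, j) for i, j in combinations(range(4), 2)]
k4 = perfect_matchings(set(range(4)), k4_edges)
assert len(k4) == 3 and all_pairwise_disjoint(k4)

# C_6: two perfect matchings (alternate edges), disjoint.
c6_edges = [(i, (i + 1) % 6) for i in range(6)]
c6 = perfect_matchings(set(range(6)), c6_edges)
assert len(c6) == 2 and all_pairwise_disjoint(c6)
```

The same routine could in principle be run over all graphs on $6$ or $8$ vertices to confirm the bound $m\leq 3$ experimentally, though the proof above makes that unnecessary.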
|
{
"source": [
"https://mathoverflow.net/questions/267002",
"https://mathoverflow.net",
"https://mathoverflow.net/users/63938/"
]
}
|
267,045 |
The Fibonacci recurrence $F_n=F_{n-1}+F_{n-2}$ allows values for all indices $n\in\mathbb{Z}$. There is an almost endless list of properties of these numbers in all sorts of ways. The question below might even be known. Yet, if true, I'd like to ask for alternative proofs. Question. Does the following identity hold? We have
$$\frac{\sum_{k=0}^{\infty}\frac{F_{n+k}}{k!}}{\sum_{k=0}^{\infty}\frac{F_{n-k}}{k!}}=e,$$
independent of $n\in\mathbb{Z}$.
|
It follows from the identity $$F_{n+k} = \sum_{j=0}^k {k \choose j} F_{n-j}$$ which is obtained by applying the standard recurrence $k$ times to the left side, each time splitting up each term into two terms. Indeed, this gives $$\sum_{k=0}^{\infty} \frac{F_{n+k}}{k!} = \sum_{k=0}^{\infty} \frac{ \sum_{j=0}^k {k \choose j} F_{n-j}}{k!}= \sum_{k=0}^{\infty} \sum_{j=0}^{k} \frac{1}{j! (k-j)!} F_{n-j} = \sum_{j=0}^{\infty} \sum_{k-j=0}^{\infty} \frac{1}{(k-j)!} \frac{F_{n-j}}{j!} = \sum_{j=0}^{\infty} e \frac{F_{n-j}}{j!}$$
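The identity is easy to sanity-check numerically. A small sketch (my own, truncating both series at $40$ terms, which is far more than enough given the factorial decay; Fibonacci numbers are extended to negative indices via $F_{-n}=(-1)^{n+1}F_n$):

```python
import math

def fib(n):
    """Fibonacci number for any integer n, with F_{-n} = (-1)^{n+1} F_n."""
    if n >= 0:
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a
    return (-1) ** (-n + 1) * fib(-n)

def ratio(n, terms=40):
    """Truncated version of the ratio in the question."""
    num = sum(fib(n + k) / math.factorial(k) for k in range(terms))
    den = sum(fib(n - k) / math.factorial(k) for k in range(terms))
    return num / den

# The ratio is e, independent of n.
for n in (-3, 0, 5):
    assert abs(ratio(n) - math.e) < 1e-9
```

Both series converge absolutely since $F_{n\pm k}$ grows only like $\varphi^k$ while $k!$ dominates.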
|
{
"source": [
"https://mathoverflow.net/questions/267045",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
267,055 |
Let me first ask the question for two-dimensional compact, connected manifolds and orbifolds.
Then, if the answer is No , one can remove various conditions on the dimension,
and allow non-compact examples and disconnected examples, to realize a (perhaps) wider range of rationals. This came up after a class I'm teaching and I didn't know the answer. Related: MO question " Euler characteristic of orbifolds ." Wikipedia table for 2-dim orbifolds
|
Products of 2-orbifolds with manifolds will do the trick. There are 2-orbifolds of Euler characteristic $1/n$ (take a quotient of $S^2$ by a rotation of order $2n$). Then take a product with a manifold of Euler characteristic $m \in \mathbb{Z}$ to get all rationals.
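The bookkeeping behind this construction can be sketched in a few lines (my own illustration, using the standard formula $\chi_{\mathrm{orb}} = \chi(S^2) - \sum_i (1 - 1/n_i)$ for cone points of orders $n_i$, and multiplicativity of $\chi$ under products):

```python
from fractions import Fraction

def chi_sphere_quotient(N):
    """Orbifold Euler characteristic of S^2 / (rotation of order N).

    The quotient has two cone points of order N, so
    chi = 2 - 2*(1 - 1/N) = 2/N."""
    return Fraction(2) - 2 * (1 - Fraction(1, N))

def realise(r):
    """Factor a rational r as (orbifold chi) * (manifold chi), as in the answer."""
    r = Fraction(r)
    q, m = r.denominator, r.numerator
    orb = chi_sphere_quotient(2 * q)   # quotient by a rotation of order 2q
    assert orb == Fraction(1, q)       # chi = 2/(2q) = 1/q
    return orb, m                      # take a product with a manifold of chi = m

for r in (Fraction(3, 7), Fraction(-5, 2), Fraction(4)):
    orb, m = realise(r)
    assert orb * m == r
```

The manifold factor with $\chi=m$ exists for every integer $m$ (surfaces already give every $m\leq 2$, and e.g. $\mathbb{CP}^2$ has $\chi=3$).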
|
{
"source": [
"https://mathoverflow.net/questions/267055",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
267,095 |
Can rotations and translations of this shape perfectly tile some equilateral triangle? I originally asked this on math.stackexchange where it was well received and we made some good progress. Here's what we learnt: Because the area of the triangle has to be a multiple of the area of the tile, the triangle must have side length divisible by $5$ (where $1$ is the length of the short edges of the tile). The analogous tile made of three equilateral triangles can tile any equilateral triangle with side length divisible by three. There is a computer program, Burr Tools , which was designed to solve this kind of problem. Josh B. used it to prove by exhaustive search that there is no solution when the side length of the triangle is $5$, $10$, $15$, $20$ or $25$. The case of a triangle with side length $30$ would take roughly ten CPU-years to check using this method. Lee Mosher pointed me in the direction of Conway's theory of tiling groups . This theory can be used to show that if the tile can cover an equilateral triangle of side length $n$ then $a^nb^nc^n=e$ in the group $\left<a,b,c\;\middle|\;a^3ba^{-2}c,a^{-3}b^{-1}a^2c^{-1},b^3cb^{-2}a,b^{-3}c^{-1}b^2a^{-1},c^3ac^{-2}b,c^{-3}a^{-1}c^2b^{-1}\right>$. But sadly it turns out that we do have that $a^nb^nc^n=e$ in this group whenever $n$ divides by $5$. In fact one can use the methods in this paper of Michael Reid to prove that this tile's homotopy group is the cyclic group with $5$ elements. I think this means that the only thing these group theoretic methods can tell us is a fact we already knew: that the side length must be divisible by $5$. These group theoretic methods are also supposed to subsume all possible colouring arguments, which means that any proof based purely on colouring is probably futile. 
The smallest area that can be left uncovered when trying to cover a triangle of side length $(1,\dots,20)$ is $($ $1$ $,\,$ $4$ $,\,$ $4$ $,\,$ $1$ $,\,$ $5$ $,\,$ $6$ $,\,$ $4$ $,\,$ $4$ $,\,$ $6$ $,\,$ $5$ $,\,$ $6$ $,\,$ $4$ $,\,$ $4$ $,\,$ $6$ $,\,$ $5$ $,\,$ $6$ $,\,$ $4$ $,\,$ $4$ $,\,$ $6$ $,\,$ $5$ $)$ small triangles. In particular it's surprising that when the area is $1\;\mathrm{mod}\;5$ one must sometimes leave six triangles uncovered rather than just one. We can look for "near misses" in which all but $5$ of the small triangles are covered and in which $4$ of the missing small triangles could be covered by the same tile. There's essentially only one near miss for the triangle of side $5$, none for the triangle of side $10$ and six ( 1 , 2 , 3 , 4 , 5 , 6 ) for the triangle of side $15$. (All other near misses can be generated from these by rotation, reflection, and by reorienting the three tiles that go around the lonesome missing triangle.) This set of six near misses are very interesting since the positions of the single triangle and the place where it "should" go are very constrained. I'd also be interested in learning what kind of methods can be used to attack this sort of problem. Are there any high-level approaches other than the tiling groups? Or is a bare hands approach most likely to be successful?
|
It seems that one can tile a 15-15-15-30 trapezoid with the given tiles. Here is a picture (sorry about adjacent figures that are the same color; I used random colors, so hopefully there are no ambiguities): In particular, OP pointed out that these scaled 1-1-1-2 trapezoids can tile any equilateral triangle whose side length is a multiple of three. So the original tile can tile any equilateral triangle whose side length is a multiple of 45. I bet we didn't see answers for smaller $n$ due to an Aztec-diamond-like boundary condition with the corners.
|
{
"source": [
"https://mathoverflow.net/questions/267095",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4613/"
]
}
|
267,252 |
Alternate formulation of the question (I think): What's a precise version of the statement: "In a stable $\infty$-category, finite limits and finite colimits coincide"? Recall that a stable $\infty$-category is a type of finitely complete and cocomplete $\infty$-category characterized by certain exactness conditions. Namely: (1) there is a zero object $0$, i.e. an object which is both initial and terminal; (2) every pushout square is a pullback square and vice versa. Item (2) takes advantage of a peculiar symmetry of the "square" category $S = \downarrow^\to_\to \downarrow$; namely $S$ can either be regarded as $S' \ast \mathrm{pt}$ where $S' = \cdot \leftarrow \cdot \rightarrow \cdot$ is the universal pushout diagram, or $S$ can be regarded as $S = \mathrm{pt} \ast S''$ where $S'' = \cdot \rightarrow \cdot \leftarrow \cdot$ is the universal pullback diagram. Hence it makes sense to ask, for a given $S$-diagram, whether it is a pullback, a pushout, or both. Item (1) similarly takes advantage of the identities $\mathrm{pt} = \emptyset \ast \mathrm{pt} = \mathrm{pt} \ast \emptyset$. But I can't shake the feeling that the notion of a stable $\infty$-category somehow "transcends" this funny fact about the geometry of points and squares. For one thing, one can use a different "combinatorial basis" to characterize the exactness properties of a stable $\infty$-category, namely: 1.' The category is (pre)additive (i.e. finite products and coproducts coincide) 2.' The loops / suspension adjunction is an equivalence. True, (2') may be regarded as a special case of (2) -- but it may also be regarded as a statement about the (co)tensoring of the category in finite spaces. Both of these ways of defining stability say that certain limits and colimits "coincide", and my sense is that in a stable $\infty$-category, all finite limits and colimits coincide -- insofar as this makes sense. 
Question: Is there a general notion of "a limit and colimit coinciding" which includes zero objects, biproducts (= products which are also coproducts), squares which are both pullbacks and pushouts, and suspensions which are also deloopings; and if so, is it true that in a stable $\infty$-category, finite limits and finite colimits coincide whenever this makes sense? I would regard this as investigating a different sort of exactness from the exactness properties enjoyed by ($\infty$)-toposes. In the topos case, I think there are some good answers. For one, in a topos $C$, the functor $C^\mathrm{op} \to \mathsf{Cat}$, $X \mapsto C/X$ preserves limits. For another, a Grothendieck topos $C$ is what Street calls "lex total": there is a left exact left adjoint to the Yoneda embedding. It would be nice to have similar statements here which in some sense formulate a "maximal" list of exactness properties enjoyed by (presentable, perhaps) stable $\infty$-categories, rather than the "minimal" lists found in (1,2) and (1',2') above.
|
I don't think this "finite limits and finite colimits coincide" business can be taken very far. If you take any small category $S_0$ you can add an initial and a terminal object to form $S = \mathrm{pt} \ast S_0 \ast \mathrm{pt}$. A diagram of shape $S$ could potentially be both a colimiting cocone and a limiting cone, and you might hope those conditions are equivalent in a stable $\infty$-category. (And for $S_0 = \mathrm{pt} \sqcup \mathrm{pt}$, this does happen, of course: it is the condition that a square is a pushout if and only if it is a pullback.) But this fails 1 for three points, $S_0 = \mathrm{pt} \sqcup \mathrm{pt} \sqcup \mathrm{pt}$. I think it's probably better to focus on a lesser-known characterization of stable $\infty$-categories: they are precisely the finitely complete and cocomplete ones in which finite limits commute with finite colimits. 2 1 The colimit of the $\mathrm{pt} \ast S_0$ shaped diagram with $X$ at the cone point and zeroes in the other slots is $\Sigma X \amalg \Sigma X$. Analogously the limit of the $S_0 \ast \mathrm{pt}$ diagram with $Y$ in the cocone point and zeroes in the other slots is $\Omega Y \times \Omega Y$. But for $Y = \Sigma X \amalg \Sigma X$ we do not have $X = \Omega Y \times \Omega Y$. 2 If $\mathcal{C}$ is a stable $\infty$-category, then it is finitely cocomplete, and thus if $S$ is a finite diagram shape, there is a functor $\mathrm{colim} : \mathrm{Fun}(S, \mathcal{C}) \to \mathcal{C}$. Its domain is also stable, and $\mathrm{colim}$ preserves finite colimits, because colimits commute with colimits. The functor is therefore exact and so preserves finite limits as well. Now, assume that $\mathcal{C}$ is finitely complete and cocomplete and that finite limits commute with colimits in it. Consider the following diagram:
$$\require{AMScd}\begin{CD}
X @<<< X @>>> 0 \\
@VVV @VVV @VVV \\
0 @<<< X @>>> 0 \\
@AAA @AAA @AAA \\
0 @<<< X @>>> X \\
\end{CD}$$
Taking pushouts of the rows we get the diagram $0 \to \Sigma X \leftarrow 0$ whose pullback is $\Omega \Sigma X$. If instead we take pullbacks of the columns, we get $X \leftarrow X \to X$, whose pushout is $X$. Pullbacks commuting with pushouts tell us then that $\Omega \Sigma X \cong X$ so $\mathcal{C}$ is stable.
|
{
"source": [
"https://mathoverflow.net/questions/267252",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2362/"
]
}
|
267,543 |
Among the papers indexed by MathSciNet and Zentralblatt MATH,
I occasionally have seen papers which consist essentially only
of text copied from elsewhere without proper attribution and without
adding any significant value. I would be interested whether anyone
has an idea what the frequency of such papers among those indexed in
the mentioned databases roughly is. -- Are these extremely rare cases, or are such papers more common than one usually thinks, and perhaps even not easy to keep out of the databases if
one doesn't want to be too restrictive in which journals to cover? --
Is there any data known on this? Also, if one spots such a paper -- should one report this to the
authors or copyright holders of the pieces of text from which the
paper is composed, to the editorial board of the journal in which
the paper is published, or to MathSciNet / Zentralblatt MATH --
or rather just ignore it? What is common practice in such case?
|
On behalf of zbMATH (and this is certainly also the case for MathSciNet), we would very much appreciate a notification of such cases, if they have not yet been detected at the level of editors or reviewers. Our editors have the general impression (which has been discussed with our MathSciNet colleagues, who seem to share it) that this behaviour has become significantly more widespread recently, and that such papers frequently make it into journals which have usually shown a decent level of peer review (which should generally filter out such submissions). The notification could be done either by an email to [email protected] or by volunteering to write a short review about this case https://zbmath.org/become-a-reviewer/ . We would then evaluate the level of copying and 1) inform the editorial board,
2) inform our colleagues at MathSciNet,
3) add a review or editorial remark mentioning the degree of overlap, ideally taking into account statements of the editorial board and, possibly, of the author(s) if provided. We do not display automated warnings like those on arXiv because all existing tools (known to us) produce too many false positives when applied to math content, which seems unsatisfactory for public statements (e.g., arXiv claims overlap for arXiv:1609.02231 and arXiv:1412.0555, where the same problem is considered for genus three and even genus). Searching for "plagiarism" will not turn up all cases, because that label means that intention and priority have been clearly identified, which is not always the case, especially when things are under investigation (indeed, we have had various cases where the paper which was published, or even submitted, first turned out to be a copy of ongoing unpublished work published later). Hence, the documents will usually be labeled as "identical", "almost identical", "parts are almost identical", etc.; the results https://zbmath.org/?t=&s=0&q=%28%28%22reviewer%27s+remark%22%7C+%22editorial+remark%22%29+%26+%28identical+%7C+plagiarism%29 may give an impression. Olaf Teschke, Managing Editor, zbMATH
|
{
"source": [
"https://mathoverflow.net/questions/267543",
"https://mathoverflow.net",
"https://mathoverflow.net/users/28104/"
]
}
|
267,887 |
Note: This question was already asked on Math.SE nearly a week and a half ago but did not receive any responses. To the best of my knowledge, free probability is an active topic of research, so I hope that the level of the question is appropriate for this website. If not, please let me know so I can delete the question myself. Question: One often sees statements to the effect that "free probability is a generalization of probability theory, which is commutative, to the non-commutative case". But in what sense does classical probability theory only concern itself with commutative quantities? If my understanding is correct, and probability theory also deals with non-commutative quantities, then in what sense is free probability a generalization of probability theory? Context: The simplest random variables are real-valued, and obviously real numbers have commutative multiplication. But random variables can take values in any measurable space (at least this is my understanding and also that of Professor Terry Tao ), i.e. random variables can also be random vectors, random matrices, random functions, random measures, random sets, etc. The whole theory of stochastic processes is based on the study of random variables taking values in a space of functions. If the range of the functions of that space is the real numbers, then yes we have commutative multiplication, but I don't see how that's the case if we are e.g. talking about functions into a Riemannian manifold. EDIT: To clarify what I mean by "classical probability theory", here is Professor Tao's definition of random variable, which is also my understanding of the term (in the most general sense): Let $R=(R, \mathcal{R})$ be a measurable space (i.e. a set $R$ , equipped with a $\sigma$ -algebra $\mathcal{R}$ of subsets of $R$ ). A random variable taking values in $R$ (or an $R$ -valued random variable ) is a measurable map $X$ from the sample space to $R$ , i.e. 
a function $X: \Omega \to R$ such that $X^{-1}(S)$ is an event for every $S \in \mathcal{R}$ . Then, barring that I am forgetting something obvious, classical probability theory is just the study of random variables (in the above sense). /EDIT To be fair though, I don't have a strong understanding of what free probability is. Reading Professor Tao's post about the subject either clarified or confused some things for me, I am not sure which. In contrast to his other post, where he gives (what seems to me) a more general notion of random variable, in his post about free probability, Professor Tao states that there is a third step to probability theory after assigning a sample space, sigma algebra, and probability measure -- creating a commutative algebra of random variables, which supposedly allows one to define expectations. (1) Why does one need a commutative algebra of random variables to define expectations? (2) Since when was defining a commutative algebra of random variables part of Kolmogorov's axiomatization of probability? Later on in his post about free probability, Professor Tao mentions that random scalar variables form a commutative algebra if we restrict to the collection of random variables for which all moments are finite . But doesn't classical probability theory study random variables with non-existent moments? Even in an elementary course I remember learning about the Cauchy distribution. If so, wouldn't this make classical probability more general than free probability, rather than vice versa, since free probability isn't relevant to, e.g., the Cauchy distribution? Professor Tao also mentions random matrices (specifically ones with entries which are random scalar variables with all moments finite, if I'm interpreting the tensor product correctly) as an example of a noncommutative algebra which is outside the domain of classical probability but within the scope of free probability. 
But as I mentioned before, aren't random matrices an object of classical probability theory? As well as random measures, or random sets, or other objects in a measurable space for which there is no notion of multiplication, commutative or non-commutative? Attempt: Reading Professor Tao's post on free probability further, it seems like the idea might be that certain nice families of random variables can be described by commutative von Neumann algebras. Then free probability generalizes this by studying all von Neumann algebras, including non-commutative ones. The idea that certain nice families of random variables correspond to the (dual category of) commutative von Neumann algebras seems like it is explained in these two answers by Professor Dmitri Pavlov on MathOverflow (1) (2) . But, as Professor Pavlov explains in his answers, commutative von Neumann algebras only correspond to localizable measurable spaces, not arbitrary measurable spaces. While localizable measurable spaces seem like nice objects based on his description, there is one equivalent characterization of them which makes me suspect that they are not the most general objects studied in probability theory: any localizable measurable space "is the coproduct (disjoint union) of points and real lines" . This doesn't seem to characterize objects like random functions or random measures or random sets (e.g. point processes), and maybe even not random vectors or matrices, so it does not seem like this is the full scope of classical probability theory. Thus, if free probability only generalizes the study of localizable measurable spaces, I don't see how it could be considered a generalization of classical probability theory. 
By considering localizable measurable spaces in the more general framework of possibly non-commutative von Neumann algebras, it might expand the methods employed in probability theory by borrowing tools from functional analysis, but I don't see at all how it expands the scope of the subject. To me it seems like proponents of free probability and quantum probability might be mischaracterizing classical probability and measure theory. More likely I am misinterpreting their statements. Related question . Professor Pavlov's comments on this article may be relevant. I am clearly deeply misunderstanding at least one thing, probably several things, here, so any help in identifying where I go wrong would be greatly appreciated.
|
Quite a lot of questions here! It is perhaps worth making a distinction between scalar classical probability theory - the study of scalar classical random variables - and more general classical probability theory, in which one studies more general random objects such as random graphs, random sets, random matrices, etc.. The former has the structure of a commutative algebra in addition to an expectation, which allows one to then form many familiar concepts in probability theory such as moments, variances, correlations, characteristic functions, etc., though in many cases one has to impose some integrability condition on the random variables involved in order to ensure that these concepts are well defined; in particular, it can be technically convenient to restrict attention to bounded random variables in order to avoid all integrability issues. In the more general case, one usually does not have the commutative algebra structure, and (in the case of random variables not taking values in a vector space) one also does not have an expectation structure any more. My focus in my free probability notes is on scalar random variables (commutative or noncommutative), in which one needs both the algebra structure and the expectation structure in order to define the concepts mentioned above. Neither structure is necessary to define the other, but they enjoy some compatibility conditions (e.g. ${\bf E} X^2 \geq 0$ for any real random variable $X$, in both the commutative and noncommutative settings). In my notes, I also restricted largely to the case of bounded random variables $X \in L^\infty$ for simplicity (or at least with random variables $X \in L^{\infty-}$ in which all moments were finite), but one can certainly study unbounded noncommutative random variables as well, though the theory becomes significantly more delicate (much as the spectral theorem becomes significantly more subtle when working with unbounded operators rather than bounded operators). 
When teaching classical probability theory, one usually focuses first on the scalar case, and then perhaps moves on to the general case in more advanced portions of the course. Similarly, noncommutative probability (of which free probability is a subfield) usually focuses first on the case of scalar noncommutative variables, which was also the focus of my post. For instance, random $n \times n$ matrices, using the normalised expected trace $X \mapsto \frac{1}{n} {\bf E} \mathrm{tr} X$ as the trace structure, would be examples of scalar noncommutative random variables (note that the normalised expected trace of a random matrix is a scalar, not a matrix). It is true that random $n \times n$ matrices, when equipped with the classical expectation ${\bf E}$ instead of the normalised expected trace $\frac{1}{n} {\bf E} \mathrm{tr}$, can also be viewed as classical non-scalar random variables, but this is a rather different structure (note now that the expectation is a matrix rather than a scalar) and should not be confused with the scalar noncommutative probability structure one can place here. It is certainly possible to consider non-scalar noncommutative random variables, such as a matrix in which the entries are themselves elements of some noncommutative tracial von Neumann algebra (e.g. a matrix of random matrices); see e.g. Section 5 of these slides of Speicher . Similarly, there is certainly literature on free point processes (see e.g. this paper ), noncommutative white noise (see e.g. this paper ), etc., but these are rather advanced topics and beyond the scope of the scalar noncommutative probability theory discussed in my notes. I would not recommend trying to think about these objects until one is completely comfortable conceptually both with non-scalar classical random variables and with scalar noncommutative random variables, as one is likely to become rather confused otherwise when dealing with them. 
(This is analogous to how one should not attempt to study quantum field theory until one is completely comfortable conceptually both with classical field theory and with the quantum theory of particles. Much as one should not conflate the superficially similar notions of a classical field and a quantum wave function, one should also not conflate the superficially similar notions of a non-scalar classical random variable and a scalar noncommutative random variable.) Regarding localisable measurable spaces: all standard probability spaces generate localisable measurable spaces. Technically, it is true that there do exist some pathological probability spaces whose corresponding measurable spaces are not localisable; however the vast majority of probability theory can be conducted on standard probability spaces, and there are some technical advantages to doing so, particularly when it comes to studying conditional expectations with respect to continuous random variables or continuous $\sigma$-algebras.
|
{
"source": [
"https://mathoverflow.net/questions/267887",
"https://mathoverflow.net",
"https://mathoverflow.net/users/93694/"
]
}
|
267,901 |
Notes: 1) I know next to nothing about algebraic geometry, although I am greatly interested in the field. 2) I realize that "constructive" might be a technical term, here I am using it only in an informal manner. I hope that this question belongs on this site, since it is not strictly research-level. As an autodidact (I have an ongoing formal education in physics, but the amount of math we learn here is abysmal, so most of my mathematics knowledge is self-taught), I have noted that algebraic geometry seems to be really impenetrable for somebody who has no formal education in the field, unlike, say, differential geometry or functional analysis, which are areas I can effectively learn on my own. Pretty much every time I encounter AG-related stuff on sites such as this one or math.se, I see layers and layers of abstraction on top of one another to the point where it makes me wonder, is this field of mathematics constructive, in the sense that it can be used to actually calculate anything or have any use outside mathematics? The point I am trying to make, using differential geometry as an example, is that it is constructive. No matter how abstractly I define manifolds, tensor fields, differential forms, connections, etc., they are always resoluble into component functions in some local trivializations, with which one can actually calculate stuff. Every time I used DG to calculate stellar equilibrium or cosmological evolution or geodesics of some model spacetime, I got actual, direct, palpable, realizable results in terms of real numbers. I can use differential forms to calculate the volumes of geometric shapes, and every time I use a Lagrangian or Hamiltonian formalism to calculate trajectories for classical mechanical systems, I make use of differential geometry to obtain palpable results. On top of that, I know that DG is useful outside physics too, I have heard of uses in economics, music theory, etc. 
I am curious if there is any real-world application of AG where one can use AG to obtain palpable results. I am not curious (for the purpose of this question) about uses within mathematics itself, I know they are numerous. But every time I try to read about AG I get lost in the infinitude of sheaves, stacks, schemes, functors and other highly abstract objects, which often seem impossible to me to resolve into calculable numbers. The final point is: I would like to hear about some interesting applications of algebraic geometry outside mathematics, if there are any.
|
If you forget about all the layers of abstraction, algebraic geometry is, ultimately (and very roughly speaking), the study of polynomial equations in several variables, and of the geometric objects they define. So in a certain sense, whenever you're doing anything with multivariate polynomials, there's probably some algebraic geometry behind it; and conversely, algebraic geometry questions can generally be reduced, at least in principle, to "does this system of polynomial equations have a solution?".
Now to go a (little) bit further, one should consider the field upon which these equations are defined, and more importantly, in which the solutions are sought. The whole point of algebraic geometry is that most of the formalism can be made uniform in the base field, or indeed, ring. But there are at least two main flavors: Geometric questions are over algebraically closed fields (and all that really matters, then, is the characteristic of the field). These questions are, in principle, algorithmically decidable (although the complexity can be very bad), at least if we bound every degree involved in the problem; Gröbner bases are the key tool to solve these geometric problems in practice. Arithmetic questions are over any other field, typically the rational numbers (→ diophantine equations); for example, an arithmetic question could be "does the curve $x^n + y^n = z^n$ (where $x,y,z$ are homogeneous coordinates) have rational solutions beyond the obvious ones?". Arithmetic questions can be undecidable, so there is no universal tool like Gröbner bases to solve them. There is a subtle interplay between geometry and arithmetic (for example, in the simplest nontrivial case, that of curves, the fundamental geometric invariant, the genus $g$, determines very different behaviors of its rational points according as $g=0$, $g=1$ or $g\geq 2$). Then there are some fields which are "not too far" from being algebraically closed, like the reals, the finite fields, and the $p$-adics. Here, it is still decidable in principle whether a system of polynomial equations has a solution, but the complexity is even worse than for algebraically closed fields (for finite fields, there is the obvious algorithm consisting of trying all possible values). Some theory can help bring it down to a manageable level. 
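A concrete toy example of the Gröbner-basis decision procedure mentioned above (my own illustration, using SymPy): over an algebraically closed field, a system is unsolvable exactly when its reduced Gröbner basis is $\{1\}$ (by the Nullstellensatz), and a lexicographic basis performs elimination.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Solvability over an algebraically closed field: the two circles
# x^2 + y^2 = 1 and x^2 + y^2 = 4 share no point, even over C, and
# indeed the reduced Groebner basis of the ideal they generate is {1}.
g = groebner([x**2 + y**2 - 1, x**2 + y**2 - 4], x, y, order='lex')
print(g.exprs)   # [1]

# Elimination: a lex basis (x > y) of the circle and the line x = y
# contains a polynomial in y alone, cutting out the projection of the
# solution set onto the y-axis.
g2 = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(g2.exprs)  # includes 2*y**2 - 1
```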
As for applications outside mathematics, they mostly fall in this "not too far from algebraically closed" region: Algebraic geometry over the reals has applications in robotics, algebraic statistics (which is part of mathematics but itself has applications to a wide variety of sciences), and computer graphics, for example. Algebraic geometry over finite fields has applications in cryptography (and perhaps more generally boolean circuits) and the construction of error-correcting codes. But I would like to emphasize that the notion of "applications" is not quite clear-cut. Part of classical algebraic geometry is the theory of elimination (i.e., essentially given a system of polynomial equations in $n+k$ variables, find the equations in the first $n$ variables defining whether there exists a solution in the $k$ last): this is a very useful computational tool in a huge range of situations where polynomials or polynomial equations play any kind of rôle. For example, a number of years ago, I did some basic computations on the Kerr metric in general relativity (ultimately to produce such videos as this one): the computations themselves were differential-geometric in nature (and not at all sophisticated), but by remembering that, in the right coordinate system, everything is an algebraic function, and by using some elimination theory, I was able to considerably simplify some symbolic manipulations in those computations. I wouldn't call it an application of algebraic geometry to physics, but knowing algebraic geometry definitely helped me not make a mess of the computations.
|
{
"source": [
"https://mathoverflow.net/questions/267901",
"https://mathoverflow.net",
"https://mathoverflow.net/users/85500/"
]
}
|
268,222 |
Update: Please restrict your answers to "tweets" that give more than just the statement of the result, and give also the essence (or a useful hint) of the argument/novelty. I am looking for examples where the essence of a notable mathematical development fits in a tweet (140 characters in English, no fancy formulas). Background and motivation: This question was motivated by some discussion with Lior Pachter and Dave Bacon (over Twitter). Going over my own papers and blog posts I found very few such examples. (I give one answer.) There were also a few developments whose essence I do not know how to tweet, but I feel that a good tweet can be made by a person with deep understanding of the matter. I think that a good list of answers can serve as a good educational window for some developments and areas in mathematics and it won't be overly big. At most 140 characters per answer, single link is permitted. Tweeting one's own result is encouraged. Update: I learned quite a few things, and Noam's tweet that I accepted is mind-boggling.
|
Every rational r is xyz(x+y+z) for some rational x,y,z. Proof: Euler (1749) found x(r),y(r),z(r). Nobody knows how. I have a guess. https://people.math.harvard.edu/~elkies/euler_14t.pdf
|
{
"source": [
"https://mathoverflow.net/questions/268222",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1532/"
]
}
|
268,305 |
I don't even know where to begin. There's a discussion of stacks and they talk about $\mathrm{Bun}(G)$. I don't know what it is, or what its elements are, or why it is important. Google and wikipedia don't really help since they presuppose too much. One resource says: $\mathrm{Bun}(G)$ is the moduli stack of $G$-bundles, where $G$ is an affine algebraic group over a field $k$; the embedding $G \to GL_n$ induces a morphism of stacks $\mathrm{Bun}_G \to \mathrm{Bun}_{{GL}_n}$; $\mathrm{Bun}_G$ depends on the space $X$, and $\mathrm{Bun}_G(X)$ is a groupoid; $\mathrm{Bun}_G$ is a functor, meaning that it is well behaved under maps of spaces. A map $Y \to X$ yields another "induced" map:
$$ \mathrm{Bun}_G(X) \to \mathrm{Bun}_G(Y) $$ I believe $X$ is a scheme: a projective, connected, smooth curve over a field $k$, not necessarily algebraically closed. $G$ is an affine algebraic group over $k$. What are the étale or fppf topologies here? The only projective curves I can think of are Riemann surfaces, which are often smooth and connected. The field $k$ could be $\mathbb{C}$ but possibly $\mathbb{Q}$ or a number field. My idea of a "bundle" is something with local trivialization and around each loop you assign an element of $G$; gluing copies of $U_i \times G$ with transition maps. Over a number field, perhaps we lose the analytic topology. The embedding theorem maybe says we can encode the gluing information using $n \times n$ matrices (a representation) and functoriality says a map between schemes should "lift" to a map between bundles. So these moduli stacks should be spaces of $n \times n$ matrices with various restrictions. They may themselves form a variety of some kind (certainly a scheme). As you can see I am totally clueless.
|
I am surprised that nobody has mentioned (yet) that an essential point of the algebro-geometric notions of (algebraic) stack and algebraic space is to completely shift the burden of construction problems: one gives up on trying to make any kind of actual ringed space at all, and in fact the whole point is to create a kind of "geometry for functors". In particular, Bun$_G$ is demoted to the status of a definition (there's no "space" to be constructed: one just defines a certain groupoid-valued functor) and the real content is to show that it is "near enough" to representable functors that one can import to it many concepts from algebraic geometry. It is easiest to explain this with an example. Let's contrast the approaches to the "Hilbert scheme" for a projective scheme $X$ over a field $k$ (to fix ideas). In the original approach of Grothendieck, he defined a functor $\underline{\rm{Hilb}}_{X/k}$ on the category of $k$-schemes, namely
$\underline{\rm{Hilb}}_{X/k}(S)$ is the set of finitely presented closed subschemes $Y \subset X \times S$ such that the structure map $Y \rightarrow S$ is flat; this is a contravariant functor of $S$ via pullback (i.e., for any $f:S' \rightarrow S$ over $k$, the map
$$\underline{\rm{Hilb}}_{X/k}(f): \underline{\rm{Hilb}}_{X/k}(S) \rightarrow \underline{\rm{Hilb}}_{X/k}(S')$$ carries $Y \subset X \times S$ to $Y \times_S S' \subset (X \times S) \times_S S' = X \times S'$).
The real substance in this approach is to show $\underline{\rm{Hilb}}_{X/k}$ is a representable functor. This amounts to constructing a "universal structure": a distinguished $k$-scheme $H$ equipped with an $H$-flat finitely presented closed subscheme $Z \subset X \times H$ such that for any $k$-scheme $S$ and any $Y \in \underline{\rm{Hilb}}_{X/k}(S)$ whatsoever there exists a unique $k$-map $g:S \rightarrow H$ for which the $S$-flat closed subscheme
$$Z \times_{H,g} S \subset (X \times H) \times_{H, g} S = X \times S$$
coincides with $Y$. In effect, $k$-maps to $H$ "classify" the functor $\underline{\rm{Hilb}}_{X/k}$. But Grothendieck's representability result was proved via methods of projective geometry, and there were strong indications that one should seek a result like this more widely for proper $k$-schemes (beyond the projective case) yet nobody could find such a way to do this in the framework of schemes. Artin brilliantly solved the problem by widening the scope of algebraic geometry via his introduction of algebraic spaces, but it must be appreciated that in so doing one gives up on constructing a "representing space" in any sense akin to what Grothendieck does. In particular, algebraic spaces are not ringed spaces at all. Instead, they are Set-valued functors which are "near enough" to representable functors that it becomes possible to make sense of many concepts from algebraic geometry for these structures (and there is an associated topological space permitting one to speak of irreducibility, connectedness, etc.). The crucial point is that in the Artin approach, the functor is the algebraic space . That is, in Artin's approach it is a complete misnomer to speak of "constructing the moduli space" for the Hilbert functor $\underline{\rm{Hilb}}_{X/k}$ for a proper $k$-scheme: the functor one has defined above literally is to be the algebraic space, so there is nothing to be constructed there. Instead, the real work is to build the representable functors "near enough" to this so that one can conclude that the functor one has already defined is indeed an algebraic space (and hence one can do meaningful geometry with it, after getting acclimated to the new setting). 
Artin's breakthrough was to identify checkable criteria with which one could show (entailing some real work) that many functors of interest are "near enough" to representable functors to be algebraic spaces (and his PhD student Donald Knutson devoted his thesis to working out very many concepts of algebraic geometry in the wider setting of algebraic spaces, often entailing new kinds of proofs from what had been done for schemes). But even with $\underline{\rm{Hilb}}_{X/k}$ for projective $X$ there is a fundamental difference in the nature of the results: Grothendieck built the Hilbert scheme ${\rm{Hilb}}_{X/k}$ as a countably infinite disjoint union of quasi-projective schemes, so one knows the connected components are quasi-compact, whereas the Artin approach gives no information whatsoever on quasi-compactness properties for the algebraic space $\underline{\rm{Hilb}}_{X/k}$ (but of course it has the huge merit of being applicable far more widely, allowing one to "do algebraic geometry" with many many more functors of interest; nonetheless, Grothendieck's quasi-compactness results for Hilbert schemes remain vital as a tool for proving quasi-compactness features of rather abstract algebraic spaces). As another illustration, consider the functor $M_{g,n}$ of $n$-pointed smooth genus-$g$ curves. In the Mumford approach via GIT (say with $n$ big enough to get rid of non-trivial automorphisms), there is tremendous effort done to build a universal structure. In the Artin approach via stacks (which allows $n$ to be quite small too), the moduli problem is the stack: it is incorrect to speak of "constructing the moduli space" in this approach, since one has literally defined $M_{g,n}$ and that's all there is to it (granting that one is sufficiently fluent with descent theory and coherent duality to see at a glance that $M_{g,n}$ enjoys nice descent properties). 
But instead the real effort is to build appropriate "scheme charts" over $M_{g,n}$ to give precise meaning to a sense in which $M_{g,n}$ is "near enough" to representable functors that we can make sense of many concepts of algebraic geometry for this functor or for its groupoid-valued version when $n$ is small. The situation with Bun$_G$ is similar: the groupoid assignment is the geometric object. Techniques of Artin provide a precise sense in which some schemes can be regarded as "smooth" over this gadget, with those used to define geometric concepts for Bun$_G$ in the same spirit as one uses open balls to define concepts for manifolds, but it is rather a misnomer to speak of "constructing Bun$_G$"! That is, one has to clearly distinguish the serious task of describing some "charts" on Bun$_G$ (with which to make computations and prove theorems) from the much more mundane matter of simply defining what Bun$_G$ is: it is the assignment to any $X$ of the groupoid of $G$-bundles on $X$, and the descent-like properties are easy to verify (so it is a "stack in groupoids"), and there is nothing more to do as far as "making the moduli space" is concerned. Of course, one cannot do anything serious with this until having carried out the real effort with Artin's criteria to affirm that this stack admits enough "scheme charts" to be an Artin stack and hence admit the rich array of meaningful concepts for it as in algebraic geometry (irreducible components, coherent sheaf theory, cohomology, smoothness properties, dimension, etc.). There is no gluing to be done: we define the global moduli problem at the outset. We have to do work to show this admits meaningful geometric concepts, and in practice there can be especially useful "scheme charts" or techniques or "local description" with which one can explore it. However, one really should not regard these explicit descriptions as "constructing Bun$_G$"; the construction is just the initial basic definition mentioned above. 
Another wrinkle is that in the algebro-geometric setting one has to use some serious Grothendieck topologies to make this all work, so it isn't as simple as with making manifolds by gluing open balls (likewise for the notion of $G$-bundle). Of course, there are interesting and instructive analogies with constructions in homotopy theory, but to make coherent sense of actual algebro-geometric proofs involving Bun$_G$ one should recognize that this "space" is demoted to the status of a definition (in complete contrast with the setting for Hilbert schemes by Grothendieck, where he had to really build a representing scheme before geometry could be done) and the substantial effort is to build many kinds of interesting "scheme charts" over it with which to explore the geometric features of this stack. The framework of Artin stacks gives a systematic way to make sense of this process for many kinds of moduli problems encountered in practice. But at the end of the day stacks and their cousins are not ringed spaces (even though their structure can be explored using many auxiliary ringed spaces); e.g., even though there is an associated topological space with which to define various topological notions, in no sense is a map determined in terms of some kind of map of ringed spaces resting on those associated topological spaces (in contrast with the case of schemes). The way one gets around the loss of contact with a specific ringed space is by a huge amount of descent theory. I think it is very important to always keep that in mind or else a lot of relevant issues in definitions and proofs will be rather confusing.
|
{
"source": [
"https://mathoverflow.net/questions/268305",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1358/"
]
}
|
268,372 |
For a prime $p$, define $\delta(p)$ to be the smallest offset $d$
from which $p$ differs from a square:
$p = r^2 \pm d$, for $d,r \in \mathbb{N}$.
For example,
\begin{eqnarray}
\delta(151) & = & +7 \;:\; 151 = 12^2 + 7 \\
\delta(191) & = & -5 \;:\; 191 = 14^2 - 5 \\
\delta(2748971) & = & +7 \;:\; 2748971= 1658^2 + 7
\end{eqnarray}
For a particular $\delta=d$ value, define $\Delta(n,d)$ to be the number
of primes $p$ at most $n$ with $\delta(p) = +d$, minus the number with $\delta(p) = -d$. In other words, $\Delta$ records the cumulative prevalence of $+d$ offsets over $-d$. For example, $\Delta(139,5)=-2$ because there are two more $-5$'s than $+5$'s up to
$n=139$:
$$
\delta(31)=-5 \;,\; \delta(41)=+5 \;,\; \delta(59)=-5\;,\; \delta(139) =-5 \;.
$$
The figure below shows $\Delta(p,5)$ and $\Delta(p,7)$ out to the $200000$-th
prime $2750159$.
The offset $+7$ occurs $161$ times more than the offset $-7$, and
the reverse occurs for $|\delta|=5$: $-5$ is more common than $+5$. Q . Is there a simple explanation for the different behaviors of offsets
$5$ and $7$? Obviously the question can be generalized to explaining the growth for
any $|\delta|$. I previously asked a version of this question on MSE ,
using somewhat different notation conventions
and with less focused questions.
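For readers who want to experiment, the definitions of $\delta$ and $\Delta$ translate directly into code (a short sketch; the function names are mine). Note that $p - r^2$ and $(r+1)^2 - p$ sum to the odd number $2r+1$, so the nearest square is always unique.

```python
from math import isqrt
from sympy import primerange

def delta(p):
    """Signed distance from p to the nearest square: +d if p = r^2 + d with the
    square below closer, -d if p = (r+1)^2 - d with the square above closer
    (never a tie, since the two distances sum to the odd number 2r+1)."""
    r = isqrt(p)
    below, above = p - r * r, (r + 1) ** 2 - p
    return below if below < above else -above

def Delta(n, d):
    """(# primes p <= n with delta(p) = +d) minus (# with delta(p) = -d)."""
    offsets = [delta(p) for p in primerange(2, n + 1)]
    return offsets.count(d) - offsets.count(-d)

print(delta(151), delta(191), delta(2748971))   # 7, -5, 7, as in the examples
print(Delta(139, 5))                            # -2, matching the worked example
```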
|
Consider $n^2+7$ and $n^2-7$ modulo $3$. If these are to be prime they must be non-zero $\pmod 3$, and in the first case $n$ can be anything mod $3$, whereas in the second case $n$ must be $0 \pmod 3$. If you consider $n^2+5$ and $n^2-5$, you see that the pattern reverses. This is already a huge bias for one offset to be preferred over the other. The Hardy-Littlewood conjectures make this precise. One expects that (for a number $k$ not minus a square) the number of primes of the form $n^2+k$ with $n\le N$ (say) is
$$
\sim \frac 12 \prod_{p\ge 3} \Big(1 -\frac{(\frac{-k}{p})}{p-1} \Big) \frac{N}{\log N},
$$
where in the numerator of the product above is the Legendre symbol. The constants in front of the $N/\log N$ explain these biases. You don't have to worry about a smaller offset than $\pm 5$ or $\pm 7$, since an application of the sieve shows that the numbers $n^2+a$ and $n^2+b$ (for fixed $a$, $b$) are both prime only $\le CN/(\log N)^2$ of the time for a constant $C$.
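The bias can be seen numerically by truncating the Hardy-Littlewood product (a rough sketch of mine; the truncation point is arbitrary and convergence is slow, so the values are only indicative):

```python
from sympy import primerange, legendre_symbol

def hl_constant(k, limit=10**4):
    """Truncated Hardy-Littlewood constant (1/2) * prod_{3 <= p < limit}
    (1 - ((-k)/p)/(p-1)) for primes of the form n^2 + k, with ((-k)/p)
    the Legendre symbol."""
    c = 0.5
    for p in primerange(3, limit):
        a = (-k) % p
        if a == 0:
            continue                 # Legendre symbol vanishes; factor is 1
        c *= 1 - legendre_symbol(a, p) / (p - 1)
    return c

c7, cm7 = hl_constant(7), hl_constant(-7)    # offsets +7 and -7
c5, cm5 = hl_constant(5), hl_constant(-5)    # offsets +5 and -5
print(c7, cm7, c5, cm5)
```

Already the $p=3$ factor ($3/2$ for $n^2+7$ and $n^2-5$, versus $1/2$ for $n^2-7$ and $n^2+5$) produces the direction of the bias seen in the data.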
|
{
"source": [
"https://mathoverflow.net/questions/268372",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
268,614 |
"Derived algebraic geometry" usually means the study of geometry locally modeled on "$Spec R$" where $R$ is a connective $E_\infty$ ring spectrum (perhaps with further restrictions). Why "connective", though? In my (limited) understanding, approaches to the subject like that of Toen and Vezzosi are motivated as an approach to studying things like intersection theory in ordinary algebraic geometry. The picture is that a connective $E_\infty$ ring $R$ (incarnated as a simplicial commutative ring, usually) is an "infinitesimal thickening" of the ordinary ring $\pi_0 R$. This picture breaks down if $R$ is not connective, motivating one to restrict attention to the connective case (moreover, I don't know of a way to model nonconnective ring spectra analogous to simplicial rings). But another motivation comes from homotopy theory, $TMF$, and the moduli stack of elliptic curves, which is a nonconnective derived Deligne-Mumford stack. When the basic motivating objects are nonconnective, it leaves me puzzled that Lurie continues to focus primarily on the connective case in Spectral Algebraic Geometry . I see basically two mutually exclusive possible reasons for this: The theory of nonconnective derived algebraic geometry is wild / ill-behaved / hard to understand, so one restricts attention to the connective cases which is more tractable. The theory of nonconnective derived algebraic geometry is a straightforward extension of the theory of connective derived algebraic geometry; it is easy to study nonconnective objects in terms of connective covers, but the results are most naturally phrased in terms of the connective objects, so that's the way the theory is expressed. Question A. Which of (1) / (2) is closer to the truth? Maybe as an illustrative test case, here are two statements pulled at random from SAG. 
Let $R$ be a connective $E_\infty$ ring, let $Mod_R$ denote its $\infty$-category of modules, and $Mod_R^{cn}$ its $\infty$-category of connective modules (both of which are symmetric monoidal), and let $M \in Mod_R$. $M$ is perfect(=compact in $Mod_R$) iff $M$ is dualizable in $Mod_R$. $M$ is locally free (= retract of some $R^n$) iff $M$ is connective and moreover dualizable in $Mod_R^{cn}$. Question B. Do these statements have analogs when $R$ is nonconnective? If so, are they straightforward extensions of these statements from the connective case? For Question B, feel free to substitute a better example of a statement if you like.
|
Here is an example of a nonconnective $E_\infty$ ring spectrum which, I think, illustrates a key problem. (A more extensive discussion of this phenomenon occurs in Lurie's DAG VIII and in a paper by Bhatt and Halpern-Leistner.) Let $R$ be an ordinary commutative ring, viewed as an $E_\infty$ ring concentrated in degree zero, and $A$ be the homotopy pullback / derived pullback of the diagram of $E_\infty$ rings $R[x^{\pm 1}, y] \rightarrow R[x^{\pm 1}, y^{\pm 1}] \leftarrow R[x, y^{\pm 1}]$ (the restriction maps for the standard two-chart cover of the punctured plane). Then $\pi_0 A = R[x,y]$ and $\pi_{-1} A$ is the local cohomology group $R[x,y] / (x^\infty, y^\infty)$; all the other homotopy groups of $A$ are zero. As a result, there's a map $R[x,y] \to A$ of $E_\infty$ rings, and any $A$-module becomes an $R[x,y]$-module by restriction. Here's a theorem. The forgetful map from the derived category $D(A)$ to the derived category $D(R[x,y])$ is fully faithful, and its essential image consists of modules supported away from the origin. This extends to an equivalence of $\infty$-categories. We could think of this in the following way. The ring $A$ is the $E_\infty$ ring of sections $\Gamma(\Bbb A^2 \setminus \{0\}, \mathcal{O}_{\Bbb A^2})$ on the complement of the origin in affine 2-space over $R$, and the above tells us we actually have an equivalence between $A$-modules and (complexes of) quasicoherent sheaves on $\Bbb A^2 \setminus \{0\}$. Here are some takeaways from this. Nonconnective ring spectra are actually quite natural. Global section objects $\Gamma(X, \mathcal{O}_X)$ are usually nonconnective, and we're certainly interested in those. The above says that even though the punctured plane is not affine but merely quasi-affine, it becomes affine in nonconnective DAG. This is a general phenomenon. Solely on the level of coefficient rings, the map $R[x,y] \to A$ looks terrible. It is indistinguishable from a square-zero extension $R[x,y] \oplus R[x,y]/(x^\infty,y^\infty)[-1]$. (There is more structure that does distinguish them.) 
Many of the definitions as given in DAG for a map are given in terms of the effect (locally) of a map $B \to A$ of ring spectra (e.g. flatness, étaleness, etc etc). For connective objects, this works very well. However, we have just shown that for nonconnective objects, a map of ring spectra may have nice properties—the map $Spec(A) \to Spec(R[x,y])$ should be an open immersion!—which are completely invisible on the level of coefficient rings. This goes for the rings themselves and doubly so for their module categories. If I have one point here, it is that trying to give definitions in nonconnective DAG in terms of coefficient rings is like trying to define properties of a map of schemes $X \to Y$ in terms of the global section rings $\Gamma(Y,\mathcal{O}_Y) \to \Gamma(X,\mathcal{O}_X)$ . This makes nonconnective DAG fundamentally harder. So far as your question A, this places me somewhere in between your two options (1) and (2). I don't think (1) is right because I think that nonconnective objects are much too important; I have a mild objection to the language in (2) because I don't think that nonconnective objects are straightforward.
|
{
"source": [
"https://mathoverflow.net/questions/268614",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2362/"
]
}
|
268,764 |
Do there exist countably many proper holomorphic submersions of complex manifolds $\mathcal{X}_n \to B_n$ such that every compact complex manifold appears as a fiber in at least one of the families? ( Conventions : Assume that each $B_n$ is Hausdorff, connected, and paracompact; these conditions imply that $B_n$ is also second countable. For the question at hand, one could equivalently require each $B_n$ to be an open subset of $\mathbf{C}^m$ for some $m$ depending on $n$.)
|
Edit. I spoke with some of the experts in my department. I added one or two references. I reorganized some of the arguments. This is true . This follows from a couple of big theorems and then a separability / second countability argument that ultimately reduces to the Stone-Weierstrass approximation theorem. Step 1. Countably many closed oriented manifolds. Dennis Sullivan has a general theory with the following goal: find explicit topological invariants that bound differentiable manifolds $M$ up to finite indeterminacy and diffeomorphism. MR0646078 (58 #31119) Sullivan, Dennis Infinitesimal computations in topology. IHES. Publ. Math. No. 47 (1977), 269–331 (1978). http://www.numdam.org/article/PMIHES_1977__47__269_0.pdf To be honest, I do not completely understand the definitions of models and lattices. There is a different proof that follows from Cheeger's finiteness theorem . Every closed manifold $M$ admits a Riemannian metric $d_M$. Cheeger's original finiteness theorem was only for simply connected Riemannian manifolds, and the proof required stronger hypotheses in dimension $4$. That theorem was generalized by Cheeger and Ebin. The most general form of the theorem that I have found is due to Stefan Peters and independently to Gromov. MR0743966 (85j:53046) Peters, Stefan(D-DORT) Cheeger's finiteness theorem for diffeomorphism classes of Riemannian manifolds. J. Reine Angew. Math. 349 (1984), 77–82. https://www.degruyter.com/view/j/crll.1984.issue-349/crll.1984.349.77/crll.1984.349.77.xml Generalized Cheeger Finiteness Theorem . For fixed dimension $n$, for fixed diameter $\delta$, for fixed volume $V$, and for fixed bound on the norm of the sectional curvature $\Lambda$, there are only finitely many diffeomorphism types of closed $n$-dimensional manifolds $M$ that admit a Riemannian metric $d_M$ such that $\text{diam}(M)\leq \delta$, $\text{vol}(M)\geq V$, and the norm of the sectional curvature satisfies $|K_M|\leq \Lambda^2$. 
Considering $\delta$ an integer, $V$ of the form $1/m$ for $m$ a positive integer, and $\Lambda$ an integer, it follows that there are at most countably many diffeomorphism types of closed manifolds. Sullivan's theorem proves a similar finiteness theorem, but fixing topological data rather than Riemannian data. Because of these finiteness theorems, it suffices to prove the result for complex manifolds whose underlying differentiable manifold is diffeomorphic to a fixed closed, oriented, differentiable manifold. Edit. As Bjorn Poonen points out, countability of closed manifolds up to diffeomorphism also follows from Nash's theorem that every closed manifold is diffeomorphic to one connected component of the set of real points in $\mathbb{R}^n$ of a smooth, real algebraic variety (with the differentiable structure inherited from $\mathbb{R}^n$ via the Implicit Function Theorem). The reason I refer to the theorem of Sullivan is that this theorem (and similar theorems) is essential in important boundedness theorems for compact complex manifolds of particular classes (e.g., hyper-Kähler manifolds). I learned of these boundedness results from Ljudmila Kamenova and Misha Verbitsky. Step 2. The Fréchet manifold of almost complex structures. Let $M$ be a fixed closed, oriented, differentiable manifold. There exists a smooth embedding of $M$ in a Euclidean space $\mathbb{R}^e$. Let $d_M$ be the restriction to $M$ of the Euclidean Riemannian metric on $\mathbb{R}^e$. There is an induced Riemannian metric on the differentiable vector bundle $T_{\mathbb{R}^e}|_M$, the restriction to $M$ of the tangent bundle of $\mathbb{R}^e$, and this metric is induced by a smooth family of inner products on the fibers. These inner products determine a smooth and orthogonal direct sum decomposition of differentiable vector bundles, $T_{\mathbb{R}^e}|_M = T_M\oplus N_{M/\mathbb{R}^e}$. 
This in turn defines a smooth and orthogonal direct sum decomposition of the differentiable vector bundle $\text{Hom}_M(T_{\mathbb{R}^e}|_M,T_{\mathbb{R}^e}|_M)$ with one summand equal to $\text{Hom}_M(T_M,T_M)$. There is a closed submanifold $AC_M$ of this summand bundle whose fiber over every $t\in M$ is the set of orientation-preserving linear automorphisms $J:T_{M,p}\to T_{M,p}$ such that $J\circ J$ equals $-\text{Id}$. This is a differentiable fiber bundle over $M$. The direct sum decompositions above determine a smooth and isometric embedding of $AC_M$ into the vector bundle $T_{\mathbb{R}^e}$, which in turn is isometrically diffeomorphic to $\mathbb{R}^{2e}$ with its standard inner product. Denote by $\mathcal{J}$ the set of all smooth sections $J$ of the fiber bundle $AC_M\to M$, i.e., the set of all almost complex structures on $M$. This is a subset of the set of all smooth functions $C^\infty(M,AC_M)$. Via the embedding above, this is a subset of the set of all smooth function $C^\infty(M,\mathbb{R}^{2e})$. This set is a Fréchet topological vector space topologized by using the countably many metrics $(d_n)_{n\in \mathbb{N}}$ arising from the uniform metrics for partial derivatives of all orders. (Because $M$ is compact, and thus has finite volume, the uniform topology is finer than any $L^p$-topology.) Since it does not change the metric topology, replace each $d_n$ by $d_n'$, the pointwise minimum of $d_n$ and the constant $1$. For the associated metric $d=\sum_n (1/2^n) d'_n$, the metric topology of $d$ on $C^\infty(M,\mathbb{R}^{2e})$ is the Fréchet topology. By the Stone-Weierstrass approximation theorem, and the generalization due to Nachbin, there is a countable dense subset of this Fréchet space, e.g., it suffices to take the restrictions to $M\subset \mathbb{R}^e$ of the countable set of polynomial functions $\mathbb{R}^e\to \mathbb{R}^{2e}$ that have rational coefficients. 
Thus, this Fréchet space with the metric $d$ is separable, or equivalently, it is second countable. The subset $\mathcal{J}$ with the induced metric from $d$ is a metric space. As a subspace of a second countable space, $\mathcal{J}$ is also second countable. In fact, $\mathcal{J}$ is a Fréchet submanifold of $C^\infty(M,\mathbb{R}^{2e})$. Step 3. Newlander-Nirenberg and Kuranishi's theorem. By the Newlander-Nirenberg theorem, for the Fréchet manifold $\mathcal{J}$ of smooth sections of $AC_M \to M$, there is a closed subspace $\text{Comp}_M$ of almost complex structures that are integrable -- this turns out to be those for which the Nijenhuis tensor is zero (which is defined in terms of smooth sections and finitely many derivatives, thus is a continuous function on this Fréchet manifold). Kuranishi showed how to reduce to a fixed Sobolev manifold. Moreover, Kuranishi proved a completeness theorem at one point for his families of complex structures; what algebraic geometers would call "versality". Finally, Kuranishi proved completeness for all points in an open neighborhood; what algebraic geometers would call "openness of versality." MR0141139 (25 #4550) Kuranishi, M. On the locally complete families of complex analytic structures. Ann. of Math. (2) 75 1962 536–577. https://www.jstor.org/stable/pdf/1970211.pdf Define a Kuranishi patch to be a triple $(P,\pi:\mathcal{X}\to S,g:P\to S)$ of an open subset $P$ of $\text{Comp}_M$ (with its subspace topology), a proper, holomorphic submersion of complex analytic spaces $\pi:\mathcal{X}\to S$ such that $S$ is a connected complex analytic subspace of an affine space, and a surjective function $g:P\to S$ such that (i) for every $J\in P$ with image $s=g(J)$, there exists a diffeomorphism of $M$ with the fiber $\mathcal{X}_s=\pi^{-1}(\{s\})$ such that $J$ is identified with the natural complex structure on $\mathcal{X}_s$, and (ii) for every $s\in S$, the family is complete at $s$. 
Precisely, for every proper holomorphic submersion of complex analytic spaces, $\pi':\mathcal{X}'\to S'$, for every point $s'\in S'$ and a biholomorphism $\mathcal{X}'_{s'}\cong \mathcal{X}_s$, up to replacing $S'$ by an open neighborhood of $s'$ and replacing $\mathcal{X}'$ by the inverse image of this open neighborhood, there exists a holomorphic morphism $h:S'\to S$ mapping $s'$ to $s$ and such that $\mathcal{X}'$ is biholomorphic, over $S'$, to the pullback by $h$ of $\mathcal{X}$. Kuranishi's Theorem. Every integrable almost complex structure $J$ is contained in a Kuranishi patch. Moreover, there exists such a Kuranishi patch with $S$ a complex analytic subspace of $H^1(M,T^{1,0}_{M,J})$. By the way, I learned all of this from Misha Verbitsky and Ljudmila Kamenova. Verbitsky has a quick summary of Kuranishi theory at the beginning of the following article. The Bourbaki seminar by Adrien Douady reporting on Kuranishi's theorem is also very readable. Teichmueller spaces, ergodic theory and global Torelli theorem Misha Verbitsky https://arxiv.org/pdf/1404.3847.pdf MR1608786 Douady, Adrien Le problème des modules pour les variétés analytiques complexes (d'après Masatake Kuranishi). Séminaire Bourbaki, Vol. 9, Exp. No. 277, 7–13, Soc. Math. France, Paris, 1995. http://archive.numdam.org/article/SB_1964-1966__9__7_0.pdf Step 4. Second countability. The Fréchet space $C^\infty(M,\mathbb{R}^{2e})$ is second countable by the Stone-Weierstrass approximation theorem / Nachbin's theorem. Thus the subspaces $\mathcal{J}$ and $\text{Comp}_M$ are also second countable. Let $\{U_n\}_{n\in \mathbb{Z}}$ be a countable basis for the topology of $\text{Comp}_M$. Let $I\subset \mathbb{Z}$ be the subset of $n$ such that $U_n$ is contained in a Kuranishi patch. As a subset of a countable set, $I$ is also countable. For every $n\in I$, let $P_n$ be a Kuranishi patch that contains $U_n$. 
By Kuranishi's Theorem, for every $J$ in $\text{Comp}_M$, there exists a Kuranishi patch that contains $J$. Since $\{U_n\}_n$ is a basis for the topology, there exists $n$ such that $J$ is contained in $U_n$, and $U_n$ is contained in this Kuranishi patch. Thus, $n$ is an element of $I$, and $J$ is contained in $P_n$, since $P_n$ contains $U_n$. Thus, the countably many Kuranishi patches $(P_n)_{n\in I}$ form a countable open covering of $\text{Comp}_M$. So the countably many associated families $\pi_n:\mathcal{X}_n \to S_n$ have the desired property: every complex manifold that is diffeomorphic to $M$ is biholomorphic to a fiber of one of the proper, holomorphic submersions $\pi_n$. If you really insist, you can form a resolution of singularities $\widetilde{S}_{n}\to S_{n}$. More sensibly, as nfdc23 suggests, you can partition $S_{n}$ into a union of finitely many locally closed analytic subspaces, each of which is smooth. Personally, I am happy to work with singular complex analytic spaces $S_n$, provided that they are a little bit closer to a "universal family".
|
{
"source": [
"https://mathoverflow.net/questions/268764",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2757/"
]
}
|
268,811 |
I'm writing a paper in which I cite a lot of results that appear in Schikhof's Ultrametric Calculus. Some of these results are exercises in Schikhof's book. These exercises are not difficult, but they are laborious. Thus, if I write out the proofs, the article would grow by about two or three pages. Should I write the proofs or simply cite them? Schikhof is a very well respected mathematician, and I have never found any errors in his book. Obviously, I have checked that the exercises are correct. (If it were just one exercise, I would write the proof in my article, as I have seen done in other articles, but in my case there are about five exercises.)
|
The answer is essentially given in the comments, so let me summarize: It is a frequent situation that one has to cite an exercise, and it is legitimate. (Polya-Szego is cited > 1400 times according to MathSciNet.) The best thing is to cite a place where the statement is proved, but if you cannot find such a place, citing an exercise is the second best choice. You can solve the exercise in your paper or not (depending on its difficulty, space limitations, and other considerations). And finally, my own recommendation: when you refer to an exercise, solve it yourself, no matter whether you include the solution in your paper or not. Similar considerations apply to handbooks, like tables of integrals, etc.
They are essentially made for this purpose, but they do occasionally contain mistakes,
though not frequently. (Gradshtein-Ryzhik is cited > 2200 times according to MathSciNet, Abramowitz-Stegun 1740 times.)
|
{
"source": [
"https://mathoverflow.net/questions/268811",
"https://mathoverflow.net",
"https://mathoverflow.net/users/109085/"
]
}
|
269,064 |
Suppose that $f: \mathbb{R}^+ \to \mathbb{R}^+$ is a continuous function such that for all positive real numbers $x,y$ the following is true :
$$(f(x)-f(y)) \left ( f \left ( \frac{x+y}{2} \right ) - f ( \sqrt{xy} ) \right )=0.$$
Is it true that the only solution to this is the constant function?
|
Yes. If $f$ were not constant, then (since ${\bf R}^+$ is connected) it could not be locally constant, thus there exists $x_0 \in {\bf R}^+$ such that $f$ is not constant in any neighbourhood of $x_0$. By rescaling (replacing $f(x)$ with $f(x_0 x)$) we may assume without loss of generality that $x_0=1$. For any $y \in {\bf R}^+$, there thus exists $x$ arbitrarily close to $1$ for which $f(x) \neq f(y)$, hence $f((x+y)/2) = f(\sqrt{xy})$. By continuity, this implies that $f((1+y)/2) = f(\sqrt{y})$ for all $y \in {\bf R}^+$. Making the substitution $z := (1+y)/2$, we conclude that $f(z) = f(g(z))$ for all $z > 1/2$, where $g(z) := \sqrt{2z-1}$. The function $g$ attracts $[1, \infty)$ to the fixed point $z=1$, so on iteration and by using the continuity of $f$ we conclude that $f(z)=f(1)$ for all $z > 1$. Similarly, $h = g^{-1}$ defined by $h(z) = (z^2 + 1)/2$ attracts $(0, 1]$ to the fixed point $z = 1$, so by the same argument $f(z) = f(1)$ for $z < 1$, making $f$ constant on all of $\bf R^+$.
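The two attracting iterations in this argument can be checked numerically. A minimal sketch (the iteration count and tolerance are illustrative choices: since $g'(1)=1$, the fixed point is only parabolically attracting and convergence is slow, roughly like $2/k$):

```python
def g(z):
    # g(z) = sqrt(2z - 1): attracts [1, oo) to the fixed point z = 1
    return (2 * z - 1) ** 0.5

def h(z):
    # h = g^{-1}, h(z) = (z^2 + 1)/2: attracts (0, 1] to the same fixed point
    return (z * z + 1) / 2

z, w = 5.0, 0.1
for _ in range(100_000):
    z, w = g(z), h(w)

# both orbits approach 1, slowly (parabolic fixed point: g'(1) = h'(1) = 1)
assert abs(z - 1) < 1e-3 and abs(w - 1) < 1e-3
```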
|
{
"source": [
"https://mathoverflow.net/questions/269064",
"https://mathoverflow.net",
"https://mathoverflow.net/users/109471/"
]
}
|
269,476 |
The function $\text{sinc}(x)=\frac{\sin x}x$ permeates mathematics and physics in many ways, and it admits multiple presentations/formulations. My interest is to add yet another one. Let's understand the determinant $\det(M_{ij})_1^{\infty}$ to mean $\lim_{n\rightarrow\infty}\det(M_{ij})_1^n$. The following has an experimental basis, so I would like to ask: Question. Is there a proof of the determinantal representation
$$\det\left[\frac{(i-1)!}{(2j-1)!}\binom{i^2-\theta^2}j\right]_{i,j=1}^{\infty}=\text{sinc}(\pi\theta) \,\,\,?$$
|
Let's look at $a_n=\det\left[\frac{(i-1)!}{(2j-1)!}\binom{i^2-\theta^2}j\right]_{i,j=1}^{n}$. By taking out common factors from rows we can write
$$a_n=\left(\prod_{i=1}^n (i-1)!(i^2-\theta^2)\right)\det\left[\frac{1}{(2j-1)!}\frac{1}{(i^2-\theta^2)}\binom{i^2-\theta^2}{j}\right]_{i,j=1}^{n}$$
$$=\left(\prod_{i=1}^n (i-1)!(i^2-\theta^2)\right)\det\left[\frac{2}{(2j)!}\binom{i^2-\theta^2-1}{j-1}\right]_{i,j=1}^{n}$$
$$=2^n \left(\prod_{i=1}^n \frac{(i-1)!(i^2-\theta^2)}{(2i)!}\right)\det\left[\binom{i^2-\theta^2-1}{j-1}\right]_{i,j=1}^{n}.$$
By the result I mentioned in this answer, with polynomials $P_j(x)=\binom{x-\theta^2-1}{j-1}$, the last determinant evaluates as
$$\det\left[\binom{i^2-\theta^2-1}{j-1}\right]_{i,j=1}^{n}=\left(\prod_{j=1}^n\frac{1}{(j-1)!}\right)\left(\prod_{i<j}(j^2-i^2)\right)=\left(\prod_{j=1}^n\frac{1}{(j-1)!}\right)\left(\prod_{j=1}^n \frac{(2j-1)!}{j}\right).$$
Putting everything together we have
$$a_n=\frac{1}{n!^2}\prod_{i=1}^n(i^2-\theta^2)=\prod_{i=1}^n\left(1-\frac{\theta^2}{i^2}\right),$$
which implies that
$$\lim_{n\to \infty}a_n=\prod_{i\geq 1}\left(1-\frac{\theta^2}{i^2}\right)=\operatorname{sinc}(\pi\theta).$$
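The exact evaluation $a_n=\prod_{i=1}^n\left(1-\theta^2/i^2\right)$ can be spot-checked numerically. A sketch (the helper `gen_binom` is my own name for the generalized binomial coefficient; numpy is used only for the determinant):

```python
import math
import numpy as np

def gen_binom(x, j):
    # generalized binomial coefficient C(x, j) for real x:
    # x (x-1) ... (x-j+1) / j!
    p = 1.0
    for t in range(j):
        p *= x - t
    return p / math.factorial(j)

def a_n(theta, n):
    # the n x n truncation of the determinant in the question
    M = [[math.factorial(i - 1) / math.factorial(2 * j - 1)
          * gen_binom(i**2 - theta**2, j)
          for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    return float(np.linalg.det(np.array(M)))

# check the exact evaluation a_n = prod_{i <= n} (1 - theta^2 / i^2)
for theta, n in [(0.3, 6), (0.7, 8)]:
    closed = math.prod(1 - theta**2 / i**2 for i in range(1, n + 1))
    assert abs(a_n(theta, n) - closed) < 1e-7
# as n -> infinity the product converges to sinc(pi*theta) = sin(pi*theta)/(pi*theta)
```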
|
{
"source": [
"https://mathoverflow.net/questions/269476",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
269,884 |
I have been in situations where I submitted a paper to a math journal, but at the end of the refereeing process the final report was not sent to me. This has happened both when my paper was rejected and when it was accepted. Do I have a right to see the report(s) on my paper? Does it depend on the journal? If it does, I would like to ask specifically about the Annals of Math, the Journal of Differential Geometry, and Advances in Mathematics.
|
No, you do not have this right. When this first happened to me (more precisely, to my student), I too was outraged and demanded a report. They replied that this is journal policy: they decide when to send a report to an author and when not to.
Gradually I understood that it is a journal's right to establish any policy it finds appropriate. At present, many journals do not even send all papers to referees. Many papers are rejected without refereeing: the journal shows the paper to one or several experts for a "quick opinion", and if this quick opinion is negative (or not sufficiently positive), they reject the paper without any formal report. Of course this has a simple explanation: some journals, especially those considered highly rated, are overwhelmed with papers, and it is difficult for them to find a referee for each. As a referee, I also receive too many papers, some of which I do not want to read.
So sometimes I advise the journal to reject after a brief look, without writing a report. The time
each of us can spend on refereeing is limited. And in general, what is a "right"? It is a principle recognized by an overwhelming majority of some community. Using this definition, I am afraid that
nowadays authors do not have a right to receive a report on each paper they submit. Thirty years ago I thought such a right existed. Probably this has changed.
|
{
"source": [
"https://mathoverflow.net/questions/269884",
"https://mathoverflow.net",
"https://mathoverflow.net/users/16183/"
]
}
|
269,893 |
It is a classic result that the irrational rotations of the circle are ergodic. Formally, let $T:\mathbb{T}\to \mathbb{T}$ be defined by $Tz=ze^{2\pi i\alpha}$. If $\alpha$ is irrational, then $T$ is ergodic. This result appears in many textbooks (e.g., Walters, An Introduction to Ergodic Theory), and even in Wikipedia. However, none of these refer to the original. A Google Scholar search didn't help either. My question is simply, who was the first to prove or to notice this result, and is there a reference to the original paper?
|
The proof goes back to Nicole Oresme, in his treatise De commensurabilitate vel incommensurabilitate motuum celi [On the Commensurability or Incommensurability of the Motions of the Heavens], dated around 1360. See:
- Nicole Oresme and the commensurability or incommensurability of celestial motions (contains an annotated English translation of Oresme's Latin text)
- Oresme's Proof of the Density of Rotations of a Circle through an Irrational Angle
- Nicole Oresme and the ergodicity of rotations
|
{
"source": [
"https://mathoverflow.net/questions/269893",
"https://mathoverflow.net",
"https://mathoverflow.net/users/42864/"
]
}
|
269,966 |
Is there some simple proof that $\mathbb{Z}$ is not isomorphic to the fundamental group of any compact Kähler manifold? This follows from the main result of https://arxiv.org/abs/0709.4350 , which states that any $3$-manifold group which is Kähler must be finite. The simplest non-finite 3-manifold group is $\mathbb{Z} = \pi_{1}(S^{2} \times S^{1})$. So I was wondering, is there an argument to rule out this particularly simple group, without appealing to a lot of machinery? (Also, historically, when were we first able to rule this group out?)
|
If $X$ is a compact Kähler manifold, then $h^{p,q}(X) = h^{q,p}(X)$ and $b_k(X) = \sum_{p+q=k}h^{p,q}(X)$, so in particular, $b_1(X) = h^{1,0}(X) + h^{0,1}(X) = 2h^{1,0}(X)$ is even. Now, $$b_1(X) = \operatorname{rank} H^1(X; \mathbb{Z}) = \operatorname{rank} \operatorname{Hom}(\pi_1(X), \mathbb{Z}).$$ If $\pi_1(X) = \mathbb{Z}$, $b_1(X) = 1$ which is not even, so $X$ is not Kähler. More generally, if $\pi_1(X)$ is abelian, then it must have even rank. Note, there are non-compact Kähler manifolds with fundamental group $\mathbb{Z}$, for example $X = \mathbb{C}^*$.
|
{
"source": [
"https://mathoverflow.net/questions/269966",
"https://mathoverflow.net",
"https://mathoverflow.net/users/99732/"
]
}
|
270,088 |
I'm conscious that this isn't necessarily a research level question, but I've asked this question on mathstackexchange, and received no answer. So I'm trying it here. A usual mantra in field theories is the assertion that only massless theories can be conformally invariant . By a theory I mean an action $$ S = \int \mathcal{L} \, \mathrm{dVol}, $$ where $\mathcal{L}$ is the Lagrangian density, and the integral is taken over a 4-dimensional Lorentzian manifold with metric $g$. By conformal invariance I mean the statement that under the conformal rescaling of the metric $$ \hat{g} = \Omega^2g, $$ the Lagrangian transforms as $\hat{\mathcal{L}} = \Omega^{-4} \mathcal{L}$. Then, as the volume form transforms as $\widehat{\mathrm{dVol}} = \Omega^{4} \mathrm{dVol}$, the action $S$ is invariant, and the theory is said to be conformally invariant. The usual physics explanation given is that "if a theory is supposed to be conformally invariant, then there cannot exist an intrinsic scale to it, such as mass or a Compton wavelength". Of course, this is a load of hand waving. I guess I don't strictly know what I mean by a massless theory. Maxwell's equations, for example, are a massless conformally invariant theory. My guess would have been that the mass of a theory is its ADM mass, but as has been pointed out in the comments, this is a property of a solution to a theory, not the theory itself. So, if $m$ is the mass of a theory, whatever it stands for exactly, and $m \neq0$, why must conformal invariance fail?
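For concreteness, the Maxwell example just mentioned does satisfy this definition in four dimensions: $F_{\mu\nu}$ carries no metric, so each of the two inverse metrics in the Lagrangian contributes a factor $\Omega^{-2}$ under $\hat{g} = \Omega^2 g$:

```latex
\mathcal{L} = -\tfrac14 F_{\mu\nu}F^{\mu\nu}
            = -\tfrac14\, g^{\mu\alpha} g^{\nu\beta} F_{\mu\nu} F_{\alpha\beta},
\qquad
\hat g^{\mu\nu} = \Omega^{-2} g^{\mu\nu}
\;\Longrightarrow\;
\hat{\mathcal{L}} = \Omega^{-4}\mathcal{L}.
```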
|
A physicist would answer this question as follows. (Everything I'll say can be expressed in a way that the purest of mathematicians would understand, but that translation would take a lot of work, so I'll only do it on demand.) In physics we have units of mass ($M$), length ($L$) and time ($T$). In special relativity we have a fundamental constant $c$, the speed of light, with units $L/T$. Thus, in special relativity, any length determines a time and vice versa. In quantum mechanics we have a fundamental constant $\hbar$, Planck's constant, with units $ML^2/T$. $\hbar / c$ has units $M L$. Thus, in theories involving both special relativity and quantum mechanics, any mass determines an inverse length, and vice versa. Relativistic quantum field theory involves both special relativity and quantum mechanics. Thus, in relativistic quantum field theory, if we have a particle of some mass $m$, it determines a length, namely $\hbar / c m$. This is called the Compton wavelength of that particle. Some physicists may prefer to use $h = 2 \pi \hbar$ in the definition of the Compton wavelength, but this isn't important here: what really matters is that when you have a relativistic quantum field theory with a particle of some given mass, it will have a preferred length scale and will thus not be invariant under scale transformations, hence not under conformal transformations... unless that mass is zero , in which case the Compton wavelength is undefined. What is the meaning of the Compton wavelength? Here's a rough explanation. In quantum mechanics, to measure the position of a particle, you can shoot light (or some other kind of wave) at it. To measure the position accurately, you need to use light with a short wavelength, and thus a lot of energy. If you try to measure the position of a particle with mass $m$ more accurately than its Compton wavelength, you need to use an energy that exceeds $mc^2$. 
This means that the collision is energetic enough to create another particle of the kind whose position you're trying to measure. So, in relativistic quantum field theory, we should imagine any particle as part of a 'cloud' of 'virtual' particles which can become 'real' if you try to measure its position too accurately. The size of this cloud is not sharply defined, but it's roughly the Compton wavelength. So, the theory of this particle is not invariant under conformal transformations.
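To put a number on the length scale discussed above, here is a quick illustrative computation (not part of the original answer; the constants are standard CODATA values):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m_e = 9.1093837015e-31   # electron mass, kg

# (reduced) Compton wavelength: the length scale hbar / (m c) set by a mass m
lam = hbar / (m_e * c)
assert 3.8e-13 < lam < 3.9e-13   # about 3.86e-13 m for the electron
```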
|
{
"source": [
"https://mathoverflow.net/questions/270088",
"https://mathoverflow.net",
"https://mathoverflow.net/users/104213/"
]
}
|
270,386 |
Let $\det_d = \det((x_{i,j})_{1 \leq i,j\leq d})$ be the determinant of a generic $d \times d$ matrix. Suppose $k \mid d$, $1 < k < d$. Can $\det_d$ be written as the determinant of a $k \times k$ matrix of forms of degree $d/k$? Even writing $\det_4$ as the determinant of a $2 \times 2$ matrix of quadratic forms seems impossible, just intuitively. The space of $2 \times 2$ matrices of quadratic forms in $16$ variables has dimension $4 \cdot \binom{17}{2} = 544$, while the space of quartics in $16$ variables has dimension $\binom{19}{4} = 3876$.
|
I think a result of Hochster allows one to give a quick proof that it is not possible to express the determinant of the generic $d \times d$ matrix as the determinant of a $k \times k$ matrix with entries homogeneous forms of degree $\dfrac{d}{k}$, provided that $1 < k < d$. I will work over an algebraically closed field.
Relying on some cohomological methods (similar to the ones Jason Starr is using in his answer), Hochster proved that the variety of $n \times n$ matrices with $\textrm{rank} < n-1$ can be set-theoretically defined by $2n$ equations and not less (see this paper by Bruns and Schwänzl for some improvement of Hochster's result: http://www.home.uni-osnabrueck.de/wbruns/brunsw/pdf-article/NumbEqDet.published.pdf ). Now, we argue by contradiction. Assume that the determinant of the generic $d \times d$ matrix can be written as the determinant of a $k \times k$ matrix (say $A$) with entries homogeneous forms of degree $\dfrac{d}{k}$. We assume $1 < k < d$. We denote by $P_1, \ldots, P_{k^2}$ the entries of $A$, which are homogeneous polynomials in $x_1, \ldots, x_{d^2}$ of degree $\dfrac{d}{k}$. Let $B$ be the generic $k \times k$ matrix with entries $Y_1, \ldots, Y_{k^2}$. Denote by $Q_1, \ldots, Q_{k^2}$ the $(k-1) \times (k-1)$ minors of $B$. The variety defined by the vanishing of the $\{Q_i\}_{i=1\ldots k^2}$ is non-empty of codimension $4$ in $\mathbb{A}^{k^2}$ (here I use that $k>1$). Hence, if we replace $Y_i$ by $P_i(x_1, \ldots, x_{d^2})$, we see that the scheme defined by the vanishing of the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$ has codimension at most $4$ in $\mathbb{A}^{d^2}$. Furthermore, a simple computation of partial derivatives shows that the scheme defined by the vanishing of the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$ is included in the singular locus of the variety defined by $\det A = 0$. But $\det A$ is the determinant of the generic $d \times d$ matrix, so that its singular locus is the variety of matrices of $\textrm{rank} < d-1$: it is irreducible of codimension $4$ in $\mathbb{A}^{d^2}$. From the above, we deduce that the variety of $d \times d$ matrices of $\textrm{rank} < d-1$ is set-theoretically equal to the scheme defined by the
$\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$. By Hochster's result (the existence part), one can find $2k$ polynomials (say $T_1, \ldots, T_{2k}$) in the ideal generated by $Q_1, \ldots, Q_{k^2}$ such that $$\textrm{rad}(T_1, \ldots, T_{2k}) = \textrm{rad}(Q_1, \ldots, Q_{k^2}).$$ Replacing $Y_i$ by $P_i(x_1,\ldots, x_{d^2})$ in the $\{T_j\}_{j=1 \ldots 2k}$, we find that the vanishing of the $\{T_j(P_1, \ldots, P_{k^2}) \}_{j=1 \ldots 2k}$ defines set-theoretically the variety of $d \times d$ matrices of $\textrm{rank} < d-1$. Since $2k < 2d$, we get a contradiction with Hochster's result.
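The "simple computation of partial derivatives" invoked above is Jacobi's formula: $\partial \det / \partial x_{ij}$ is the $(i,j)$ cofactor, so the gradient of $\det$ vanishes exactly on matrices of rank $< d-1$. A quick numerical sketch of that fact:

```python
import numpy as np

def cofactor_matrix(M):
    # entry (i, j) is the (i, j) cofactor of M, which by Jacobi's formula
    # equals the partial derivative of det with respect to M[i, j]
    d = M.shape[0]
    C = np.empty_like(M)
    for i in range(d):
        for j in range(d):
            minor = np.delete(np.delete(M, i, 0), j, 1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d - 2)) @ rng.standard_normal((d - 2, d))  # rank d-2

# at a matrix of rank < d-1 every (d-1)-minor vanishes, so the gradient of
# det is zero: A is a singular point of {det = 0}
assert np.allclose(cofactor_matrix(A), 0)

# at a generic matrix of rank d-1 it is not
B = rng.standard_normal((d, d - 1)) @ rng.standard_normal((d - 1, d))
assert not np.allclose(cofactor_matrix(B), 0)
```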
|
{
"source": [
"https://mathoverflow.net/questions/270386",
"https://mathoverflow.net",
"https://mathoverflow.net/users/88133/"
]
}
|
270,490 |
In the chapter "A Mathematician's Gossip" of his renowned Indiscrete Thoughts , Rota launches into a diatribe concerning the "replete injustice" of misplaced credit and "forgetful hero-worshiping" of the mathematical community. He argues that a particularly egregious symptom of this tendency is the cyclical rediscovery of forgotten mathematics by young mathematicians who are unlikely to realize that their work is fundamentally unoriginal. My question is about his example of this phenomenon. In all mathematics, it would be hard to find a more blatant instance of this regrettable state of affairs than the theory of symmetric functions. Each generation rediscovers them and presents them in the latest
jargon. Today it is K-theory yesterday it was categories and functors, and the day before, group representations. Behind these and several other attractive theories stands one immutable source: the ordinary, crude definition of the symmetric functions and the identities they satisfy. I don't see how K-theory, category theory, and representation theory all fundamentally have at their core "the ordinary, crude definition of the symmetric functions and the identities they satisfy." I would appreciate if anyone could give me some insight into these alleged connections and, if possible, how they exemplify Rota's broader point.
|
I think Abdelmalek Abdesselam and William Stagner are completely correct in their interpretation of the words "Behind" and "one immutable source" as describing one theory, the theory of symmetric functions, being the central core of another. The issue that led to this question instead comes from misunderstanding this sentence: Today it is K-theory yesterday it was categories and functors, and the day before, group representations. The listed objects are not a list of theories. If they were, he would say "category theory" and "representation theory". Instead, it is a list of different languages, or as Rota calls them, jargons. The function of this sentence is to explain what jargons he is referring to in the previous sentence. If we delete it, the paragraph still makes perfect sense, but lacks detail: In all mathematics, it would be hard to find a more blatant instance of this regrettable state of affairs than the theory of symmetric functions. Each generation rediscovers them and presents them in the latest jargon. [...] Behind these and several other attractive theories stands one immutable source: the ordinary, crude definition of the symmetric functions and the identities they satisfy. The "theories" in question are not K-theory, category theory, and representation theory but rather the theory of symmetric functions expressed in the languages of K-theory, category theory, and representation theory. For instance presumably one of them is the character theory of $GL_n$, expressed in the language of group representations. The reason I am confident in this interpretation is nothing to do with grammar but rather the meaning and flow of the text. The claim that symmetric function theory is the source of three major branches of mathematics seems wrong, but if correct, it would be very bizarre to introduce it in this way, slipped in the end of a paragraph making a seemingly less shocking point, and then immediately dropped (unless the quote was truncated?). 
One would either lead with it, or build up to it, and in either case then provide at least some amount of explanation. Thus instead I (and Joel, and Vladimir) interpret it as making a less dramatic claim.
|
{
"source": [
"https://mathoverflow.net/questions/270490",
"https://mathoverflow.net",
"https://mathoverflow.net/users/88961/"
]
}
|
270,669 |
One of the delights in mathematical research is that some (mostly deep) results in one area remain unknown to mathematicians in other areas, but later, these discoveries turn out to be equivalent! Therefore, I would appreciate any recollections (with references) of: Question. Pairs of theorems from different areas of mathematics and/or physics, each proved by apparently different methods, where the two results were later found to be equivalent. The quest emanates from my firm belief that it is imperative to increase our awareness of such developments, as a matter of collective effort to enjoy the value of all mathematical heritage. There is always this charming story about the encounter between Freeman Dyson and Hugh Montgomery.
|
Consider $n$ evenly spaced points on a circle representing $\mathbb{Z}_n$.
Two sets of points
with the same multiset of distances between them (measured by the shortest distance around
the circle) are said to be homometric .
In the music literature,
homometric point sets correspond
to pitch-class sets with the same intervallic content, and this
theorem is known as the "hexachordal theorem": Hexachordal Theorem : Complementary sets with $k=n/2$ (and $n$ even) are
homometric. In particular, Schoenberg realized that two complementary chords of six
notes each in a twelve-tone scale have identical intervallic content,
and so have analogous "aural effects." Figure from: Ballinger, B., Benbernou, N., Gomez, F., O’Rourke, J., & Toussaint, G. (2009, June). The Continuous Hexachordal Theorem. In International Conference on Mathematics and Computation in Music (pp. 11-21). Springer Berlin Heidelberg. The following history was uncovered by Godfried Toussaint in the early 2000's. The hexachordal theorem was originally proved in the music literature by Lewin in 1960 (1), and
subsequently followed by many different proofs in the music-theory and
mathematics literature, including a proof by Blau in 1999 (2): (1) Lewin, David. "Re: The intervallic content of a collection of notes, intervallic relations between a collection of notes and its complement: an application to Schoenberg's hexachordal pieces." Journal of Music Theory 4.1 (1960): 98-101. (2) Blau, Steven K. "The hexachordal theorem: A mathematical look at interval relations in twelve-tone composition." Mathematics Magazine 72.4 (1999): 310-313. However, the theorem was known to crystallographers about thirty years earlier,
who were interested because X-rays depend on inter-atom distances, and so
homometric sets have ambiguous X-rays. The theorem was first proved by Patterson (3),
and again followed by many different proofs in the separate
crystallography literature, including most recently
a proof by Senechal (4): (3) Patterson, A. Lindo. "Ambiguities in the X-ray analysis of crystal structures." Physical Review 65.5-6 (1944): 195. (4) Senechal, Marjorie. "A point set puzzle revisited." European Journal of Combinatorics 29.8 (2008): 1933-1944. The separate literature threads were united by Toussaint, as mentioned above.
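The $n=12$, $k=6$ case of the theorem is small enough to verify exhaustively; a sketch (`interval_content` is my own helper name for the multiset of circular pairwise distances):

```python
from itertools import combinations
from collections import Counter

def interval_content(points, n):
    # multiset of circular distances between all pairs of points in Z_n
    return Counter(min((a - b) % n, (b - a) % n)
                   for a, b in combinations(points, 2))

n = 12
for chord in combinations(range(n), n // 2):
    complement = tuple(p for p in range(n) if p not in chord)
    # Hexachordal Theorem: a hexachord and its complement are homometric
    assert interval_content(chord, n) == interval_content(complement, n)
```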
|
{
"source": [
"https://mathoverflow.net/questions/270669",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
270,677 |
I read this sufficient condition for a solvable boundary value problem (Gilbarg & Trudinger 1998): "the boundary value problem is solvable if every component of the complement of
the domain consists of more than a single point." Could anyone suggest where I may find a formal proof of this, or kindly provide me with a barrier function to show the statement holds? Thanks in advance!
|
Consider $n$ evenly spaced points on a circle representing $\mathbb{Z}^n$.
Two sets of points
with the same multiset of distances between them (measured by the shortest distance around
the circle) are said to be homometric .
In the music literature,
homometric point sets correspond
to pitch-class sets with the same intervalic content, and this
theorem is known as the "hexachordal theorem": Hexachordal Theorem : Complementary sets with $k=n/2$ (and $n$ even) are
homometric. In particular, Schoenberg realized that two complementary chords of six
notes each in a twelve-tone scale have identical intervalic content,
and so have analogous "aural effects." Figure from: Ballinger, B., Benbernou, N., Gomez, F., O’Rourke, J., & Toussaint, G. (2009, June). The Continuous Hexachordal Theorem. In International Conference on Mathematics and Computation in Music (pp. 11-21). Springer Berlin Heidelberg. The following history was uncovered by Godfried Toussaint in the early 2000's. The hexachordal theorem was originally proved in the music literature by Lewin in 1960 (1), and
subsequently followed by many different proofs in the music-theory and
mathematics literature, including a proof by Blau in 1999 (2): (1) Lewin, David. "Re: The intervallic content of a collection of notes, intervallic relations between a collection of notes and its complement: an application to Schoenberg's hexachordal pieces." Journal of Music Theory 4.1 (1960): 98-101. (2) Blau, Steven K. "The hexachordal theorem: A mathematical look at interval relations in twelve-tone composition." Mathematics Magazine 72.4 (1999): 310-313. However, the theorem was known to crystallographers about thirty years earlier,
who were interested because X-ray diffraction patterns depend on interatomic distances, and so
homometric sets produce ambiguous X-ray patterns. The theorem was first proved by Patterson (3),
and again followed by many different proofs in the separate
crystallography literature, including most recently
a proof by Senechal (4): (3) Patterson, A. Lindo. "Ambiguities in the X-ray analysis of crystal structures." Physical Review 65.5-6 (1944): 195. (4) Senechal, Marjorie. "A point set puzzle revisited." European Journal of Combinatorics 29.8 (2008): 1933-1944. The separate literature threads were united by Toussaint, as mentioned above.
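The $n=12$ case of the hexachordal theorem is small enough to verify exhaustively. As a quick brute-force check in Python (the helper name is mine, not from any of the cited papers):

```python
from itertools import combinations
from collections import Counter

def interval_multiset(points, n):
    # multiset of shortest circular distances between all pairs of points
    return Counter(min((a - b) % n, (b - a) % n)
                   for a, b in combinations(points, 2))

n = 12
pairs_checked = 0
for hexachord in combinations(range(n), n // 2):
    complement = sorted(set(range(n)) - set(hexachord))
    assert interval_multiset(hexachord, n) == interval_multiset(complement, n)
    pairs_checked += 1
print(pairs_checked)  # 924 = C(12,6): every complementary pair is homometric
```

All $\binom{12}{6}=924$ complementary pairs pass, matching the theorem's statement for $n=12$, $k=6$.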
|
{
"source": [
"https://mathoverflow.net/questions/270677",
"https://mathoverflow.net",
"https://mathoverflow.net/users/110369/"
]
}
|
270,930 |
From my limited perspective, it appears that the understanding
of a mathematical phenomenon has usually been achieved,
historically, in a continuous setting
before it was fully explored in a discrete setting.
An example I have in mind is the relatively recent activity in discrete differential geometry, and discrete minimal surfaces in particular: Image:
A discrete Schwarz-P surface. Bobenko, Hoffmann, Springborn: 2004 arXiv abstract. I would be interested in examples where the reverse has happened:
a topic was first substantively explored in a discrete setting,
and only later extended to a continuous setting.
One example—perhaps not the best example—is the contrast
between $n$-body dynamics and galactic dynamics. Of course the former
is hardly "discrete," but it is more discrete than the study of galaxy dynamics, which led to the conclusion that there must exist vast
quantities of dark matter. Perhaps there are clearer examples where discrete understanding preceded
continuous exploration?
|
I would say that a lot of topology was discrete before it was continuous.
The Euler characteristic was first observed (in 1752) as an invariant of
polyhedra. Around 1900 Poincaré first calculated Betti numbers, and
generalized the Euler characteristic, in a polyhedral setting. The first
general treatment of topology, by Dehn and Heegaard in their Enzyklopädie
article of 1907, was also in a polyhedral setting. It was only after 1910, with simplicial approximation methods introduced by
Brouwer, that the topological invariance of various combinatorial invariants
was proved. For example, Alexander proved the topological invariance of the
Betti numbers in 1915.
|
{
"source": [
"https://mathoverflow.net/questions/270930",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
271,526 |
For $n\geqslant m>1$ , the integral $$I_{n,m}:=\int\limits_0^\infty\dfrac{\tanh^n(x)}{x^m}dx$$ converges. If $m$ and $n$ are both even or both odd, we can use the residue theorem to easily evaluate it in terms of odd zeta values, since the integrand then is a nice even function. For example, defining $e_k:=(2^k-1)\dfrac{\zeta(k) }{\pi^{k-1}}$ , we have $$ \begin{align}
I_{2,2}&= 2e_3 \\ \\
I_{4,2}&= \dfrac83e_3-4e_5 \\ \\
I_{4,4}&= -\dfrac{16}{3}e_5+20e_7 \\ \\
I_{6,2}&=\dfrac{46}{15}e_3-8e_5+6e_7 \\ \\
I_{6,4}&=-\dfrac{92}{15}e_5+40e_7-56e_9 \\ \\
I_{6,6}&=\dfrac{46}{5}e_7-112e_9+252e_{11} \\ \\
I_{3,3}&= -e_3+6e_5 \\ \\
I_{5,3}&= -e_3+10e_5-15e_7 \\ \\
I_{5,5}&= e_5-25e_7 +70e_9 \\ \\
&etc.
\end{align}$$ But: Is there a closed form for $I_{3,2}=\int\limits_0^\infty\dfrac{\tanh^3(x)}{x^2}dx$ ? I am not sure at all whether nospoon's method used here or one of the other ad hoc approaches can be generalized to tackle this. If the answer is positive, there might be chances that $I_{\frac32,\frac32}$ and the like also have closed forms.
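(As a sanity check on the table above, the first few entries can be compared with a direct quadrature; the helper functions and tolerances below are my own, using only plain Simpson's rule and an explicit tail estimate.)

```python
import math

def I(n, m, A=50.0, steps=100_000):
    # Simpson's rule on [0, A] for tanh(x)^n / x^m, plus the exact tail:
    # tanh(x)^n is 1 to machine precision for x >= A, so the tail integral
    # of x^(-m) over [A, inf) equals A^(1-m)/(m-1)
    def f(x):
        if x == 0.0:
            return 1.0 if n == m else 0.0   # limit of tanh(x)^n / x^m at 0
        return math.tanh(x)**n / x**m
    h = A / steps
    s = f(0.0) + f(A) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, steps))
    return s * h / 3 + A**(1 - m) / (m - 1)

def zeta(s, N=100_000):
    # direct sum with a midpoint-rule tail estimate
    return sum(n**-s for n in range(1, N + 1)) + (N + 0.5)**(1 - s) / (s - 1)

e3 = 7 * zeta(3) / math.pi**2    # e_k = (2^k - 1) zeta(k) / pi^(k-1)
e5 = 31 * zeta(5) / math.pi**4
assert abs(I(2, 2) - 2 * e3) < 1e-6           # I_{2,2} = 2 e_3
assert abs(I(3, 3) - (-e3 + 6 * e5)) < 1e-6   # I_{3,3} = -e_3 + 6 e_5
```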
|
Following the suggestion I made in a comment , the integral can be rewritten as the contour integral $$
I_{3,2} = \frac{1}{2\pi i} \oint \frac{\operatorname{tanh}^3 z}{z^2} \log(-z) \, dz ,
$$ where the clockwise contour tightly encircles the positive real axis, which coincides with the branch cut of the logarithm. This contour integral is equivalent to the original one because the jump of $\frac{1}{2\pi i} \log(-z)$ across the real axis is precisely $1$. The integrand has poles at all $z=\pm i\pi(k+\frac{1}{2})$, $k=0,1,2,\ldots$. Evaluating the residues we find \begin{align*}
I_{3,2}
&= \sum_{k=0}^\infty \frac{8\log\bigl(\pi(k+\frac{1}{2})\bigr)}{\pi^2 (2k+1)^2} - \frac{96 \log\bigl(\pi(k+\frac{1}{2})\bigr)-80}{\pi^4 (2k+1)^4} \\
&= \frac{5}{6} - \gamma - \frac{19 \log 2}{15} + 12 \log A - \log\pi + \frac{90 \zeta'(4)}{\pi^4} \\
&= 1.1547853133231762640590704519415261475352370924508924890\ldots
\end{align*} The last two lines can be checked with Wolfram Alpha , where $\gamma$ is the Euler-Mascheroni constant , and $A$ is the Glaisher constant . Edit: Using $\gamma =12\,\log(A)-\log(2\pi)+\frac{6}{\pi^2}\,\zeta'(2)$ , this can be simplified to $$I_{3,2}=\frac{5}{6} - \frac{4\log 2}{15} -\frac{6\zeta'(2)}{\pi^2}+ \frac{90 \zeta'(4)}{\pi^4},$$ that is $$\boxed{I_{3,2}=\frac{5}{6} - \frac{4\log 2}{15} -\frac{\zeta'(2)}{\zeta(2)}+ \frac{\zeta'(4)}{\zeta(4)}}.
$$ And this form may hint to the existence of similar closed forms for other integrals $I_{n,m}$ with $n+m$ odd. Edit: Indeed... Numerically, it looks like e.g. $$ I(5,2)=\frac1{3}\Bigl(\frac{8}{5}-\frac{44}{63}\log2-3\frac{\zeta'(2)}{\zeta(2)} +5 \frac{\zeta'(4)}{\zeta(4)}-2\frac{\zeta'(6)}{\zeta(6)} \Bigr)$$ $$ I(7,2)=\frac1{45}\Bigl(\frac{7943}{420}-\frac{428}{45}\log2- 45\frac{\zeta'(2)}{\zeta(2)}+ 98\frac{\zeta'(4)}{\zeta(4)}- 70\frac{\zeta'(6)}{\zeta(6)}+ 17\frac{\zeta'(8)}{\zeta(8)}\Bigr)$$ $$ I(9,2)=\frac1{315}\Bigl(\frac{71077}{630}-\frac{10196}{165}\log2- 315\frac{\zeta'(2)}{\zeta(2)}+ 818\frac{\zeta'(4)}{\zeta(4)}- 798\frac{\zeta'(6)}{\zeta(6)}+ 357\frac{\zeta'(8)}{\zeta(8)}- 62\frac{\zeta'(10)}{\zeta(10)}\Bigr)$$ $$I(11,2)=\frac1{14175}\Bigl(\frac{2230796}{495}-\frac{10719068}{4095}\log2- 14175\frac{\zeta'(2)}{\zeta(2)}+ 41877\frac{\zeta'(4)}{\zeta(4)}- 50270\frac{\zeta'(6)}{\zeta(6)}+ 31416\frac{\zeta'(8)}{\zeta(8)}- 10230\frac{\zeta'(10)}{\zeta(10)}+1382\frac{\zeta'(12)}{\zeta(12)}\Bigr)$$ $$I(13,2)=\frac1{467775}\Bigl(\frac{270932553}{2002}-\frac{25865068}{315}\log2- 467775\frac{\zeta'(2)}{\zeta(2)}+ 1528371\frac{\zeta'(4)}{\zeta(4)}- 2137564\frac{\zeta'(6)}{\zeta(6)}+ 1672528\frac{\zeta'(8)}{\zeta(8)}- 771342\frac{\zeta'(10)}{\zeta(10)}+197626\frac{\zeta'(12) }{\zeta(12)}-21844\frac{\zeta'(14) }{\zeta(14)}\Bigr)$$ Here the initial denominators are chosen as the least common denominators of the $\frac{\zeta'(2k)}{\zeta(2k)}$ terms, such that inside the big parentheses, we have integer coefficients here. It turns out that the sequence of those denominators, viz. $1, 3, 45, 315, 14175, 467775$ , coincides up to signs with A117972 , the numerators of the rational numbers $\frac{\pi^{2n}\;\zeta'(-2n)}{\zeta(2n+1)}$ . 
Moreover, note that the coefficients of the $\frac{\zeta'(2k)}{\zeta(2k)}$ have alternating signs and always sum to 0, meaning that the closed forms can be decomposed into terms $\frac{\zeta'(2k)}{\zeta(2k)}-\frac{\zeta'(2k-2)}{\zeta(2k-2)}$ , which are thus "exponential periods". Further, the numerators of the $\frac{\zeta'(2n)}{\zeta(2n)}$ term of $I_{2n-1,2}$ , viz. $1, -2, 17, -62, 1382, -21844$ , coincide with A002430 , the numerators of the Taylor series for $\tanh(x)$ , which are also closely related to the rational values $\zeta(1-2n)$ . All this seems rather interesting.
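The boxed closed form for $I_{3,2}$ can be confirmed numerically without any special-function library; $\zeta'(s)=-\sum_{n\ge 2}\log(n)/n^s$ is summed directly with a tail estimate (the code and tolerances are my own, not from the answer):

```python
import math

def zeta_prime(s, N=100_000):
    # zeta'(s) = -sum log(n)/n^s; the tail is estimated by the midpoint-rule
    # integral of log(x)/x^s over [N + 1/2, inf)
    M = N + 0.5
    tail = math.log(M) / ((s - 1) * M**(s - 1)) + 1 / ((s - 1)**2 * M**(s - 1))
    return -(sum(math.log(n) / n**s for n in range(2, N + 1)) + tail)

# direct quadrature of I_{3,2}: Simpson on [0, A] plus the exact 1/A tail
# (tanh(x)^3 is 1 to machine precision for x >= A)
f = lambda x: math.tanh(x)**3 / x**2 if x > 0 else 0.0
A, steps = 50.0, 100_000
h = A / steps
simpson = f(0.0) + f(A) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, steps))
integral = simpson * h / 3 + 1 / A

closed = (5/6 - 4*math.log(2)/15
          - zeta_prime(2) / (math.pi**2 / 6)     # zeta'(2)/zeta(2)
          + zeta_prime(4) / (math.pi**4 / 90))   # zeta'(4)/zeta(4)
assert abs(integral - closed) < 1e-6
assert abs(closed - 1.15478531332) < 1e-9  # matches the decimal value above
```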
|
{
"source": [
"https://mathoverflow.net/questions/271526",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29783/"
]
}
|
271,608 |
The following popular mathematical parable is well known: A father left 17 camels to his three sons and, according to the will, the eldest son should be given half of all the camels, the middle son one third and the youngest son one ninth. This is hard to do, but a wise man helped the sons: he added his own camel, the eldest son took 18/2=9 camels, the second son took 18/3=6 camels, the third son 18/9=2 camels, and the wise man took his own camel back and went away. I ask for applications of this method: when something is artificially added, somehow used and, after that, taken away (as was the wise man's 18th camel). Let me start with two examples from graph theory: We know Hall's lemma: if any $k=1,2,\dots$ men in a town like at least $k$ women in total, then all the men may get married so that each of them likes his wife. How to conclude the following generalized version? If any $k$ men like at least $k-s$ women in total, then all but $s$ men may get married. Proof. Add $s$ extra women (camel-women) liked by all men. Apply the usual Hall lemma, and afterwards say pardon to the husbands of the extra women. This is due to Noga Alon, recently popularized by Gil Kalai. How to find a perfect matching in a bipartite $r$-regular multigraph? If $r$ is a power of 2, this is easy by induction. Indeed, there is an Eulerian cycle in every connected component. Taking edges alternately, we partition our graph into two $r/2$-regular multigraphs. Moreover, we get a partition of the edges into $r$ perfect matchings. Now, if $r$ is not a power of 2, take large $N$ and write $2^N=rq+t$, $0<t<r$. Replace each edge of our multigraph by $q$ parallel edges, and also add extra edges formed by arbitrary $t$ perfect matchings. This new multigraph may be partitioned into $2^N$ perfect matchings, and if $N$ is large enough, some of them do not contain extra edges.
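The arithmetic behind the parable is worth making explicit: the three shares sum to $17/18$, not $1$, which is exactly why one extra camel makes every share integral and leaves itself over. A short check with exact rationals:

```python
from fractions import Fraction

shares = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 9)]
assert sum(shares) == Fraction(17, 18)        # the will distributes only 17/18
taken = [18 * s for s in shares]              # after adding the phantom camel
assert taken == [9, 6, 2] and sum(taken) == 17  # shares are integral, camel returned
```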
|
The AM-GM inequality (as well as other inequalities, for instance the discrete version of Jensen's inequality) can be proved with the following trick due to Cauchy, known as "forward-backward induction": Let $P(n)$ be the proposition "For any $n$ positive reals, $\frac1n \sum a_i \geq (\prod a_i)^{1/n}$" (the AM-GM inequality with $n$ terms). The proof consists of three parts. (1) Prove $P(2)$ directly. (2) Use $P(n)$ and $P(2)$ to prove $P(2n)$ (hence by induction we can now prove $P(2^k)$ for each $k$). (3) Given $n$ terms $a_1,a_2,\dots, a_{n}$, use $P(n+1)$ on the $n+1$ terms $a_1,a_2,\dots, a_n, \frac1n \sum a_i$ to derive $P(n)$ with a few manipulations. So when we want to prove $P(n)$ for a generic $n$, we use part 3 to add several "phantom" terms equal to $AM=\frac1n \sum a_i$ to make $n$ a power of 2, then use parts 1+2. The proof is described in more detail here.
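The padding trick in part 3 is easy to see numerically: padding a list with copies of its own arithmetic mean leaves the mean unchanged, so AM-GM for the padded power-of-two list implies it for the original list. A small illustration (the example values are mine):

```python
import math

def am(xs): return sum(xs) / len(xs)
def gm(xs): return math.prod(xs) ** (1 / len(xs))

a = [3.0, 1.0, 4.0, 1.0, 5.0]              # n = 5 positive reals
m = am(a)
padded = a + [m] * (8 - len(a))            # pad with "phantom" mean terms to 2^3
assert math.isclose(am(padded), m)         # the mean is unchanged by padding
# gm(padded)^8 = gm(a)^5 * m^3 <= m^8  (AM-GM for 8 terms)  =>  gm(a) <= m
assert gm(padded) <= m * (1 + 1e-12)
assert gm(a) <= m * (1 + 1e-12)
```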
|
{
"source": [
"https://mathoverflow.net/questions/271608",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4312/"
]
}
|
271,841 |
(This is a restatement of a question asked on the Mathematics.SE , where the solutions were a bit disappointing. I'm hoping that professional mathematicians here might have a better solution.) What are some problems in pure mathematics that require(d) solution techniques from the broadest and most disparate range of sub-disciplines of mathematics? The difficulty or importance or real-world application of the problem is not my concern, but instead the breadth of the range of sub-disciplines needed for its solution. The ideal answer would be a problem that required, for instance, number theory, group theory, set theory, formal logic, homotopy theory, graph theory, combinatorics, geometry, and so forth. Of course, most sub-branches of mathematics overlap with other sub-branches, so just to be clear, in this case you should consider two sub-branches as separate if they have separate listings (numbers) in the Mathematics Subject Classification at the time of the result. (Later, and possibly in response to such a result, the Subject Classifications might be modified slightly.) One of the reasons I'm interested in this problem is by analogy to technology. More and more problems in technology require a range of disciplines, e.g., electrical engineering, materials science, perceptual psychology, optics, thermal physics, and so forth. Is this also the case in research mathematics? I'm not asking for an opinion —this question is fact-based, or at minimum a summary of the quantification of the expert views of research mathematicians, mathematics journal editors, mathematics textbook authors, and so forth. The issue can minimize the reliance on opinion by casting it as an objectively verifiable question (at least in principle): What research mathematics paper, theorem or result has been classified at the time of the result with the largest number of Mathematics Subject Classification numbers? 
Moreover, as pointed out in a comment, the divisions (and hence Subject Classification numbers) are set by experts analyzing the current state of mathematics, especially its foundations. The ideal answer would point to a particular paper, a result, a theorem, where one can identify objectively the range of sub-branches that were brought to bear on the proof or result (as, for instance, might be documented in the Mathematics Subject Classification or appearance in textbooks from disparate fields). Perhaps one can point to particular mathematicians from disparate sub-fields who collaborated on the result.
|
The proof of the Ramanujan conjecture by Deligne. It uses: number theory, algebraic geometry, topology, representation theory, commutative algebra, and complex analysis.
|
{
"source": [
"https://mathoverflow.net/questions/271841",
"https://mathoverflow.net",
"https://mathoverflow.net/users/89654/"
]
}
|
272,303 |
The next International Congress of Mathematicians (ICM) will be held next year in Rio de Janeiro, Brazil. The present question is the 2018 version of similar questions from 2014 and 2010. Can you please, for the benefit of others, give a short description of the work of one of the plenary speakers? List of plenary speakers at ICM 2018: Alex Lubotzky (Israel), Andrei Okounkov (Russia/USA), Assaf Naor (USA), Carlos Gustavo Moreira (Brazil), Catherine Goldstein (France), Christian Lubich (Germany), Geordie Williamson (Australia/Germany), Gil Kalai (Israel), Greg Lawler (USA), Lai-Sang Young (USA), Luigi Ambrosio (Italy), Michael Jordan (USA), Nalini Anantharaman (France), Peter Kronheimer (USA) and Tom Mrowka (USA), Peter Scholze (Germany), Rahul Pandharipande (Switzerland), Ronald Coifman (USA), Sanjeev Arora (USA), Simon Donaldson (UK/USA), Sylvia Serfaty (France/USA), Vincent Lafforgue (France).
|
Kronheimer and Mrowka have both spoken at the ICM before. Most likely, the current invitation is based on their proof that Khovanov homology detects the unknot (although they have other spectacular work since their previous ICM talks, such as the proof of Property (P)). The corresponding question for the Jones polynomial is a well-known open problem. Kronheimer, P.B.; Mrowka, T.S., Khovanov homology is an unknot-detector, Publ. Math., Inst. Hautes Étud. Sci. 113, 97-208 (2011). ZBL1241.57017. MR2805599 The strategy of the proof is to consider (a modification of) the instanton Floer homology invariant of knots, which (roughly) counts representations of the knot group into $SU(2)$ in which the meridian has trace $=0$. They show that this invariant is always non-trivial for non-trivial knots. This mimics a similar proof of non-triviality for knot Floer homology (defined by Rasmussen and Ozsváth-Szabó) by Juhász, who showed that the highest grading of knot Floer homology is the sutured Floer homology of the complement of a minimal genus Seifert surface in the knot complement (using an adjunction inequality). Kronheimer and Mrowka had previously formulated an instanton version of sutured Floer homology so that they could mimic Juhász's proof. Then they show that there is a spectral sequence going from Khovanov homology to their knot instanton homology, and hence the Khovanov homology of non-trivial knots has rank at least 2. This part of the proof was modeled on a spectral sequence that Ozsváth-Szabó found from Khovanov homology to the Heegaard Floer homology of the double branched cover. The proof of the existence of this spectral sequence is based on the TQFT-like properties of the instanton knot invariant for cobordisms between knots by surfaces in 4-manifolds, which they develop further in this paper.
|
{
"source": [
"https://mathoverflow.net/questions/272303",
"https://mathoverflow.net",
"https://mathoverflow.net/users/91419/"
]
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.