21,881
I am a TA for a multivariable calculus class this semester. I have also TA'd this course a few times in the past. Every time I teach this course, I am never quite sure how I should present curl and divergence. This course follows Stewart's book and does not use differential forms; we only deal with vector fields (in $\mathbb{R}^3$ or $\mathbb{R}^2$). I know that div and curl and gradient are just the de Rham differential (of 2-forms, 1-forms, and 0-forms respectively) in disguise. I know that things like curl(gradient f) = 0 and div(curl F) = 0 are just rephrasings of $d^2 = 0$. However, these things are, understandably, quite mysterious to the students, especially the formula for curl, given by $\nabla \times \textbf{F}$, where $\nabla$ is the "vector field" $\langle \partial_x , \partial_y , \partial_z \rangle$. They always find the appearance of the determinant / cross product to be quite weird. And the determinant that you do is itself a bit weird, since its second row consists of differential operators. The students usually think of cross products as giving normal vectors, so they are led to questions like: What does it mean for a vector field to be perpendicular to a "vector field" with differential operator components?! Incidentally, is the appearance of the "vector field" $\nabla = \langle \partial_x , \partial_y , \partial_z \rangle$ just some sort of coincidence, or is there some high-brow explanation for what it really is? Is there a clear (it doesn't have to necessarily be 100% rigorous) way to "explain" the formula for curl to undergrad students, within the context of a multivariable calculus class that doesn't use differential forms? I actually never quite worked out the curl formula myself in terms of fancier differential geometry language. I imagine it's: take a vector field (in $\mathbb{R}^3$), turn it into a 1-form using the standard Riemannian metric, take de Rham d of that to get a 2-form, take Hodge star of that using the standard orientation to get a 1-form, turn that into a vector field using the standard Riemannian metric. I imagine that the appearance of the determinant / cross product comes from the Hodge star. I imagine that one can work out divergence in the same way, and the reason why the formula for divergence is "simple" is because the Hodge star from 3-forms to 0-forms is simple. Is my thinking correct? Stewart's book provides some comments about how to give curl and divergence a "physical" or "geometric" or "intuitive" interpretation; the former gives the axis about which the vector field is "rotating" at each point, the latter tells you how much the vector field is "flowing" in or out of each point. Is there some way to use these kinds of "physical" or "geometric" pictures to "prove" or explain curl(gradient f) = 0 and div(curl F) = 0? Is there some way to explain to undergrad students how the formulas for curl and div do in fact agree with the "physical" or "geometric" picture? Though such an explanation is perhaps less "mathematical", I would find an explanation of this sort satisfactory for my class. Thanks in advance!
To me, the explanation for the appearance of div, grad and curl in physical equations is in their invariance properties. Physics undergrads are taught (aren't they?) Galileo's principle that physical laws should be invariant under inertial coordinate changes. So take a first-order differential operator $D$, mapping 3-vector fields to 3-vector fields. If it's to appear in any general physical equation, it must commute with translations (and therefore have constant coefficients) and also with rotations. Just by considering rotations about the 3 coordinate axes, you can then check that $D$ is a multiple of curl. If I want to devise a "physical" operator which has the same invariance property - and therefore equals curl, up to a factor - I'd try something like "the mean angular velocity of particles uniformly distributed on a very small sphere centred at $\mathbf{x}$, as they are carried along by the vector field." (This is manifestly invariant, but not manifestly a differential operator!) [Here I should admit that, having occasionally tried, I've never convinced more than a fraction of a calculus class that it's possible to understand something in terms of the properties it satisfies rather than in terms of a formula. That's unsurprising, perhaps: it's not an obvious idea, and it's entirely absent from the standard textbooks.]
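If a mechanical check helps convince students that the formulas really do satisfy curl(grad f) = 0 and div(curl F) = 0, the identities can be verified symbolically. A minimal sketch in Python with sympy; the generic fields f, P, Q, R are placeholders, not anything from the course:

```python
from sympy import Function
from sympy.vector import CoordSys3D, gradient, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# curl(grad f) = 0: the mixed partials cancel (Clairaut's theorem)
f = Function('f')(x, y, z)
print(curl(gradient(f)))        # prints the zero vector

# div(curl F) = 0 for a generic field F = P i + Q j + R k
P, Q, R = (Function(s)(x, y, z) for s in 'PQR')
F = P*N.i + Q*N.j + R*N.k
print(divergence(curl(F)))      # prints 0
```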
{ "source": [ "https://mathoverflow.net/questions/21881", "https://mathoverflow.net", "https://mathoverflow.net/users/83/" ] }
21,899
I've tried in vain to find a definition of an algebra over a noncommutative ring. Does this algebraic structure not exist? In particular, does the following definition from http://en.wikipedia.org/wiki/Algebra_(ring_theory) make sense for noncommutative $R$? Let $R$ be a commutative ring. An algebra is an $R$-module $A$ together with a binary operation $$ [\cdot,\cdot]: A\times A\to A $$ called $A$-multiplication, which satisfies the following axiom: $$ [a x + b y, z] = a [x, z] + b [y, z], \quad [z, a x + b y] = a[z, x] + b [z, y] $$ for all scalars $a$, $b$ in $R$ and all elements $x$, $y$, $z$ in $A$. So, is there a common notion of an algebra over a noncommutative ring?
The commutative notion of an (associative or not) algebra $A$ over a commutative ring $R$ has two natural generalizations to the noncommutative setup, but the one you list, with left $R$-linearity required in both arguments, is neither of them; in particular your multiplication does not necessarily induce a map from the tensor product, unless the image of $R$ is in the center. Most useful is the notion of an $R$-ring $A$ (or a ring $A$ over $R$), which is just a monoid in the monoidal category of $R$-bimodules: in other words the multiplication is a map $A\otimes A\to A$ which is left linear in the first and right linear in the second factor. If we drop the associativity for the multiplication all works the same way, but I do not know if there is a common name (maybe something descriptive like "magma internal to the monoidal category of $R$-bimodules"; or one may try the rare term "nonassociative $R$-ring"). In the commutative case, the multiplication is both left and right linear in each factor, which is possible here only if $R$ maps into the center of $A$. (Edit: I erased here one additional nonsense sentence clearly written when tired ;) ). Thus the two useful concepts in the noncommutative case are $R$-rings (possibly nonassociative!) and, well, the subclass with that property: $R$ maps into $Z(A)$, deserving the full name of "algebra". There is also a notion of $R$-coring, which is a comonoid in the monoidal category of $R$-bimodules, generalizing the notion of an $R$-coalgebra to a noncommutative ground ring. Edit: I suggest also this link.
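To spell out the parenthetical claim in a two-line computation (assuming $A$ has a unit for the bracket and writing the bracket for the multiplication): left $R$-linearity in both arguments forces
$$ (r1_A)\,a = [r1_A,\, a] = r[1_A, a] = ra = r[a, 1_A] = [a,\, r1_A] = a\,(r1_A) $$
for all $a \in A$ and $r \in R$, so the image of $R$ under $r \mapsto r1_A$ lands in $Z(A)$, which is exactly the "full name of algebra" case described above.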
{ "source": [ "https://mathoverflow.net/questions/21899", "https://mathoverflow.net", "https://mathoverflow.net/users/1291/" ] }
21,911
To ask this question in a (hopefully) more direct way: Please imagine that I take a freely moving ball in 3-space and create a 'cage' around it by defining a set of impassable coordinates, $S_c$ (i.e. points in 3-space that no part of the diffusing ball is allowed to overlap). These points reside within the volume, $V_{cage}$, of some larger sphere, where $V_{cage} \gg V_{ball}$. Given the set of impassable coordinates, $S_c$, is there a computationally efficient and/or nice way to determine if the ball can ever escape the cage? Earlier version of question: In Pachinko one shoots a small metal ball into a forest of pins, and gravity then pulls it downwards so that it will either fall into a pocket (where you win a prize) or the sink at the bottom of the machine. The spacing and distribution of the pins will help to ensure that one only wins certain prizes with low probability. Now imagine that we have a more general game where: (1) - The ball is simply diffusing in 3-space (like a molecule undergoing Brownian motion). I.e. there is no fixed downward trajectory due to gravity. (2) - You win a prize if the ball diffuses over a particular coordinate, just like one of the pockets in regular pachinko. (3) - We generalize the pins as a set of impassable coordinates. (4) - We define a 'sink' as an always accessible coordinate. (5) - We define a starting coordinate for the sphere. Given access to the 3-space coordinates for (2), (3), (4), & (5), what's the most efficient way to find whether the game is 'winnable', or if the ball will fall into the 'sink' with a probability of unity? How can we find the minimum set from (3) that prevents the ball from reaching the pocket?
Replace the pins by balls of radius $R_{ball}$ and the ball by a point. This is a logically equivalent formulation. The question, then, is: given a finite set of balls, $B_1$, $B_2$, ..., $B_k$ in $\mathbb{R}^n$, and a point $x$, how to determine whether $x$ is in the unbounded component of $\mathbb{R}^n \setminus \bigcup B_i$. I don't know the answer to this, but here is an easy way to compute the number of connected components of $\mathbb{R}^n \setminus \bigcup B_i$. In other words, I can determine whether there is some place from which a ball cannot escape. By Alexander duality , the number of bounded components of $\mathbb{R}^n \setminus \bigcup B_i$ is the dimension of $H_{n-1}(\bigcup B_i)$. Cover $\bigcup B_i$ by the $B_i$. Every intersection of finitely many $B_i$ is convex, hence contractible. So $\bigcup B_i$ is homotopy equivalent to the nerve of this cover. That is a simplicial complex, so it is easy to compute its homology. One final practical idea: I have used painting software where I could click on a point and it would color every point which was connected to that one. Maybe the algorithms used to make that software could solve this problem as well?
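To make the suggested computation concrete, here is a minimal Python sketch that takes the nerve as input (a list of top simplices; actually enumerating the simplices of the nerve, e.g. testing which balls intersect pairwise and triple-wise, is assumed already done) and computes mod-2 Betti numbers by Gaussian elimination over GF(2):

```python
from itertools import combinations

def rank_gf2(rows):
    """Rank of a set of GF(2) row vectors, each encoded as an int bitmask."""
    basis = {}
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in basis:
                r ^= basis[lead]
            else:
                basis[lead] = r
                break
    return len(basis)

def betti_mod2(top_simplices):
    """Mod-2 Betti numbers of the simplicial complex generated by
    top_simplices (tuples of vertex labels); all faces are filled in."""
    faces = set()
    for s in top_simplices:
        for k in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), k))
    by_dim = {}
    for f in faces:
        by_dim.setdefault(len(f) - 1, []).append(f)
    index = {d: {f: i for i, f in enumerate(sorted(fs))}
             for d, fs in by_dim.items()}
    top = max(by_dim)
    ranks = {}  # rank of the boundary map from d-simplices to (d-1)-simplices
    for d in range(1, top + 1):
        rows = []
        for f in sorted(by_dim[d]):
            mask = 0
            for omit in range(len(f)):  # the codimension-one faces of f
                face = f[:omit] + f[omit + 1:]
                mask |= 1 << index[d - 1][face]
            rows.append(mask)
        ranks[d] = rank_gf2(rows)
    return [len(by_dim[d]) - ranks.get(d, 0) - ranks.get(d + 1, 0)
            for d in range(top + 1)]

# Nerve of three pairwise-overlapping disks whose triple intersection is
# empty: a hollow triangle, so the answer is [1, 1] -- one bounded
# complement component in the plane, by the duality argument above.
print(betti_mod2([(0, 1), (1, 2), (0, 2)]))   # [1, 1]
```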
{ "source": [ "https://mathoverflow.net/questions/21911", "https://mathoverflow.net", "https://mathoverflow.net/users/3248/" ] }
21,929
We're all used to seeing differential operators of the form $\left(\frac{d}{dx}\right)^n$ where $n\in\mathbb{Z}$. But it has come to my attention that this generalises to all complex numbers, forming a field called fractional calculus which apparently even has applications in physics! These derivatives are defined as fractional iterates. For example, $\left(\left(\frac{d}{dx}\right)^{1/2}\right)^2 = \frac{d}{dx}$, or $\left(\left(\frac{d}{dx}\right)^i\right)^i = \left(\frac{d}{dx}\right)^{-1}$. But I can't seem to find a more meaningful definition or description. The derivative means something to me; these just have very abstract definitions. Any help?
I understand where Ryan's coming from, though I think the question of how to interpret fractional calculus is still a reasonable one. I found this paper to be pretty neat, though I have no idea if there are any better interpretations out there. http://people.tuke.sk/igor.podlubny/pspdf/pifcaa_r.pdf
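One concrete thing to hold on to is the Riemann-Liouville rule on monomials, $D^\alpha x^p = \frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}\,x^{p-\alpha}$, from which the defining identity $(D^{1/2})^2 = D$ can be checked directly. A small sympy illustration (the monomial rule only, not a general-purpose implementation):

```python
from sympy import Rational, gamma, gammasimp, symbols

k = symbols('k', positive=True)
half = Rational(1, 2)

def rl_coeff(p, alpha):
    """Coefficient in the Riemann-Liouville monomial rule
    D^alpha x**p = Gamma(p+1)/Gamma(p+1-alpha) * x**(p-alpha)."""
    return gamma(p + 1) / gamma(p + 1 - alpha)

# D^(1/2) x = (2/sqrt(pi)) * sqrt(x)
print(rl_coeff(1, half))                                        # 2/sqrt(pi)

# semigroup check: D^(1/2) D^(1/2) x**k = k * x**(k-1)
print(gammasimp(rl_coeff(k, half) * rl_coeff(k - half, half)))  # k
```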
{ "source": [ "https://mathoverflow.net/questions/21929", "https://mathoverflow.net", "https://mathoverflow.net/users/5486/" ] }
21,931
Can anyone help me with this problem? If $G$ has abelian Sylow $p$-subgroups, prove that $p$ does not divide the order of $G'\cap Z(G)$, where $G'$ and $Z(G)$ are, as usual, the subgroup generated by the set of all commutators and the center, respectively. Thanks a lot. :D
{ "source": [ "https://mathoverflow.net/questions/21931", "https://mathoverflow.net", "https://mathoverflow.net/users/5487/" ] }
21,947
In this post about the difference between the recursive and effective topos, Andrej Bauer said: If you are looking for a deeper explanation, then perhaps it is fair to say that the Recursive Topos models computability a la Banach-Mazur (a map is computable if it takes computable sequences to computable sequences) and the Effective topos models computability a la Kleene (a map is computable if it is realized by a Turing machine). In many respects Kleene's notion of computability is better, but you'll have to ask another question to find out why :-) So I'm asking: 1) What is "computability a la Banach-Mazur"? I would guess it has something to do with Baire spaces and computable analysis, but I don't really know. 2) Why is Kleene's notion of computability better?
This answer requires a bit of background. Definition 1: a numbered set $(X,\nu_X)$ is a set $X$ together with a partial surjection $\nu_X : \mathbb{N} \to X$, called a numbering of $X$. When $\nu_X(n) = x$ we say that $n$ is a code for $x$. Numbered sets are a generalization of Gödel codes. Some typical examples are: $\mathbf{N} = (\mathbb{N}, \mathrm{id}_\mathbb{N})$ is the standard numbering of natural numbers. $\mathbf{P} = (P, \phi)$ where $P$ is the set of partial computable maps and $\phi$ is a standard enumeration of partial computable maps. $\mathbf{R} = (R,\nu_R)$ where $R$ is the set of computable reals and $\nu_R(n) = x$ when, for all $k \in \mathbb{N}$, $\phi_n(k)$ outputs (a code of) a rational number $q$ such that $|x - q| < 2^{-k}$. Numbered sets can be used to give effective structure to many mathematical structures. What should we take as a morphism between numbered sets? Presumably a map $f : X \to Y$ should be considered a morphism from $(X,\nu_X)$ to $(Y,\nu_Y)$ when it is "computable" in a suitable sense. We understand fairly well what it means to have a computable map $\mathbb{N} \to \mathbb{N}$, namely computed by a Turing machine, so let us take that for granted. It is easy to extend computability of sequences of numbers to computability of arbitrary sequences: Definition 2: A map $s : \mathbb{N} \to X$ is a computable sequence in $(X,\nu_X)$ when there exists a computable map $f : \mathbb{N} \to \mathbb{N}$ such that $s(n) = \nu_X(f(n))$ for all $n \in \mathbb{N}$. Now suppose we think a bit like analysts. One way to define a continuous map is to say that it maps convergent sequences to convergent sequences. We could mimic this idea to define general computable maps. Definition 3: A function $f : X \to Y$ where $(X,\nu_X)$ and $(Y,\nu_Y)$ are numbered sets is Banach-Mazur computable when $f \circ s$ is a computable sequence in $(Y,\nu_Y)$ whenever $s$ is a computable sequence in $(X,\nu_X)$. How good is this notion? And how does it compare to the following notion, which is taken as the standard one nowadays? Definition 4: A function $f : X \to Y$ where $(X,\nu_X)$ and $(Y,\nu_Y)$ are numbered sets is Markov computable, or just computable, when there exists a partial computable map $g : \mathbb{N} \to \mathbb{N}$ such that $f(\nu_X(n)) = \nu_Y(g(n))$ for all $n \in \mathrm{dom}(\nu_X)$. In other words, $f$ is tracked by $g$ in the sense that $g$ does to codes what $f$ does to elements. (Note: in this MO answer I attributed this notion of computability to Kleene, but I think it's better to attach Markov's name to it, if any.) Every Markov computable function is Banach-Mazur computable. In some cases the converse holds as well. For example, every Banach-Mazur computable map $\mathbf{N} \to \mathbf{N}$ is Markov computable. However, this is not the case in general: R. Friedberg demonstrated that there is a Banach-Mazur computable map $\mathbf{N}^\mathbf{N} \to \mathbf{N}$ which is not Markov computable. [R. Friedberg. 4-quantifier completeness: A Banach-Mazur functional not uniformly partial recursive. Bull. Acad. Polon. Sci. Sr. Sci. Math. Astr. Phys., 6:1–5, 1958.] P. Hertling constructed a Banach-Mazur computable map $\mathbf{R} \to \mathbf{R}$ which is not Markov computable. [P. Hertling. A Banach-Mazur computable but not Markov computable function on the computable real numbers. In Proceedings ICALP 2002, pages 962–972. Springer LNCS 2380, 2002.] A. Simpson and I showed that there is a Banach-Mazur computable $\mathbf{X} \to \mathbf{R}$ that is not Markov computable when $\mathbf{X}$ is any inhabited computable complete separable metric space computably without isolated points. [A. Bauer and A. Simpson: Two Constructive Embedding-Extension Theorems with Applications to Continuity Principles and to Banach-Mazur Computability, Mathematical Logic Quarterly, 50(4,5):351-369, 2004.] What this says is that Banach-Mazur computability is too general because it admits functions that cannot be computed in the standard sense of the word, i.e., computed by Turing machine (in terms of codes).
{ "source": [ "https://mathoverflow.net/questions/21947", "https://mathoverflow.net", "https://mathoverflow.net/users/1610/" ] }
22,015
Definition. A locally finitely presented morphism of schemes $f\colon X\to Y$ is smooth (resp. unramified, resp. étale) if for any affine scheme $T$, any closed subscheme $T_0$ defined by a square zero ideal $I$, and any morphisms $T_0\to X$ and $T\to Y$ making the following diagram commute $$\begin{array}{ccc} T_0 & \xrightarrow{\ g\ } & X \\ \downarrow & & \downarrow f \\ T & \longrightarrow & Y \end{array}$$ there exists (resp. exists at most one, resp. exists exactly one) morphism $T\to X$ which fills the diagram in so that it still commutes. For checking that $f$ is unramified or étale, it doesn't matter that I required $T$ to be affine. The reason is that for an arbitrary $T$, I can cover $T$ by affines, check if there exists (a unique) morphism on each affine, and then "glue the result". If there's at most one morphism locally, then there's at most one globally. If there's a unique morphism locally, then there's a unique morphism globally (uniqueness allows you to glue on overlaps). But for checking that $f$ is smooth, it's really important to require $T$ to be affine in the definition, because it could be that there exist morphisms $T\to X$ locally on $T$, but it's impossible to find these local morphisms in such a way that they glue to give a global morphism. Question: What is an example of a smooth morphism $f\colon X\to Y$, a square zero nilpotent thickening $T_0\subseteq T$ and a commutative square as above so that there does not exist a morphism $T\to X$ filling in the diagram? I'm sure I worked out such an example with somebody years ago, but I can't seem to reproduce it now (and maybe it was wrong). One thing that may be worth noting is that the set of such filling morphisms $T\to X$, if it is non-empty, is a torsor under $Hom_{\mathcal O_{T_0}}(g^*\Omega_{X/Y},I)=\Gamma(T_0,g^*\mathcal T_{X/Y}\otimes I)$, where $\mathcal T_{X/Y}$ is the relative tangent bundle. So the obstruction to finding such a lift will represent an element of $H^1(T_0,g^*\mathcal T_{X/Y}\otimes I)$ (you can see this with Čech cocycles if you want). So in any example, this group will have to be non-zero.
Using some of BCnrd's ideas together with a different construction, I'll give a positive answer to Kevin Buzzard's stronger question; i.e., there is a counterexample for any non-etale smooth morphism. Call a morphism $X \to Y$ wicked smooth if it is locally of finite presentation and for every (square-zero) nilpotent thickening $T_0 \subseteq T$ of $Y$-schemes, every $Y$-morphism $T_0 \to X$ lifts to a $Y$-morphism $T \to X$. Theorem: A morphism is wicked smooth if and only if it is etale. Proof: Anton already explained why etale implies wicked smooth. Now suppose that $X \to Y$ is wicked smooth. In particular, $X \to Y$ is smooth, so it remains to show that the geometric fibers are $0$-dimensional. Wicked smooth morphisms are preserved by base change, so by base extending by each $y \colon \operatorname{Spec} k \to Y$ with $k$ an algebraically closed field, we reduce to the case $Y=\operatorname{Spec} k$. Moreover, we may replace $X$ by an open subscheme to assume that $X$ is etale over $\mathbb{A}^n_k$ for some $n \ge 0$. Fix a projective variety $P$ and a surjection $\mathcal{F} \to \mathcal{G}$ of coherent sheaves on $P$ such that some $g \in \Gamma(P,\mathcal{G})$ is not in the image of $\Gamma(P,\mathcal{F})$. (For instance, take $P = \mathbb{P}^1$, let $\mathcal{F} = \mathcal{O}_P$, and let $\mathcal{G}$ be the quotient corresponding to a subscheme consisting of two $k$-points.) Make $\mathcal{O}_P \oplus \mathcal{F}$ an $\mathcal{O}_P$-algebra by declaring that $\mathcal{F} \cdot \mathcal{F} = 0$, and let $T = \operatorname{\bf Spec}(\mathcal{O}_P \oplus \mathcal{F})$. Similarly, define $T_0 = \operatorname{\bf Spec}(\mathcal{O}_P \oplus \mathcal{G})$, which is a closed subscheme of $T$ defined by a nilpotent ideal sheaf. We then may view $g = 0+g \in \Gamma(P,\mathcal{O}_P \oplus \mathcal{G}) = \Gamma(T_0,\mathcal{O}_{T_0})$. Choose $x \in X(k)$; without loss of generality its image in $\mathbb{A}^n(k)$ is the origin. Using the infinitesimal lifting property for the etale morphism $X \to \mathbb{A}^n$ and the nilpotent thickening $P \subseteq T_0$, we lift the point $(g,g,\ldots,g) \in \mathbb{A}^n(T_0)$ mapping to $(0,0,\ldots,0) \in \mathbb{A}^n(P)$ to some $x_0 \in X(T_0)$ mapping to $x \in X(k) \subseteq X(P)$. By wicked smoothness, $x_0$ lifts to some $x_T \in X(T)$. The image of $x_T$ in $\mathbb{A}^n(T)$ lifts $(g,g,\ldots,g)$, so each coordinate of $x_T$ is a global section of $\mathcal{F}$ mapping to $g$, which is a contradiction unless $n=0$. Thus $X \to Y$ is etale.
{ "source": [ "https://mathoverflow.net/questions/22015", "https://mathoverflow.net", "https://mathoverflow.net/users/1/" ] }
22,032
I read the article in wikipedia, but I didn't find it totally illuminating. As far as I've understood, essentially you have a morphism (in some probably geometrical category) $Y \rightarrow X$, where you interpret $Y$ as being the "disjoint union" of some "covering" (possibly in the Grothendieck topology sense) of $X$, and you want some object $\mathcal{F'}$ defined on $Y$ to descend to an object $\mathcal{F}$ defined on $X$ that will be isomorphic to $\mathcal{F}'$ when pulled back to $Y$ (i.e. "restricted" to the patches of the covering). To do this you have problems with $Y\times_{X}Y$, which is interpreted as the "disjoint union" of all the double intersections of elements of the cover. I'm aware of the existence of books and notes on -say- Grothendieck topologies and related topics (that I will consult if I'll need a detailed exposition), but I would like to get some ideas in a nutshell, with some simple and maybe illuminating examples from different fields of mathematics. I also know that there are other MO questions related to descent theory, but I think it's good that there's a (community wiki) place in which to gather instances, examples and the general picture. So, What is descent theory in general? And what are its unifying abstract patterns? In which fields of mathematics does it appear or is relevant, and how does it look like in each of those fields? (I'm mostly interested in instances within algebraic geometry, but having some picture in other fields would be nice). Could you give some examples of theorems which are "typical" of descent theory? And also mention the most important and well known theorems?
Suppose we are given some category (or higher category) of "spaces" in which each space $X$ is equipped with a fiber, i.e. a category $C_X$ of objects of some type over it. For example, a space can be a smooth manifold and the fiber is the category of vector bundles over it; or a space is an object of the category dual to the category of rings and the fiber is its category of left modules. Given a map $f: Y\to X$, one often has an induced functor $f^* : C_X\to C_Y$ (pullback, inverse image functor, extension of scalars). The basic questions of classical descent theory are: When is an object $G$ in $C_Y$ in the image via $f^*$ of some object in $C_X$? Classify all forms of an object $G\in C_Y$, that is, find all $E\in C_X$ for which $f^*(E)\cong G$. Grothendieck introduced pseudofunctors and fibred categories to formalize an ingenious method to deal with descent questions. He introduces additional data on an object $G$ in $C_Y$ to have a chance of determining an isomorphism class of an object in $C_X$. Such an enriched object over $X$ is called a "descent datum". $f$ is an effective descent morphism if the morphism $f$ induces a canonical equivalence of the category of the descent data (for $f$ over $X$) with $C_X$. It is a nontrivial result that in the case of rings and modules, the effective descent morphisms are precisely the pure morphisms of rings. Grothendieck's flat descent theory gives the weaker result that faithfully flat morphisms are of effective descent. In algebraic situations one often introduces a (co)monad $T_f : C_X\to C_X$ (say with the multiplication $\mu: T_f \circ T_f \to T_f$) induced by the morphism $f$. The category of descent data is then nothing else than the Eilenberg-Moore category $T_f-\mathrm{Mod}$ of (co)modules (also called (co)algebras) over $T_f$. Then, by definition, $f$ is of effective descent if and only if the comparison map (defined in (co)monad theory) between $C_X$ and $T_f-\mathrm{Mod}$ is an equivalence. Several variants of the Barr-Beck theorem give conditions which are equivalent or (in some variants) sufficient for the comparison map for a monad induced by a pair of adjoint functors to be an equivalence. Generically such theorems are called monadicity (or tripleability) theorems. One can describe most (but not all) situations of 1-categorical descent theory via the monadic approach. There are numerous generalizations of monadicity theorems, higher cocycles and descent, both in the monadic and in the fibered category setup in the higher categorical context (Giraud, Breen, Street, K. Brown, Hermida, Marmolejo, Mauri-Tierney, Jardine, Joyal, Simpson, Rosenberg-Kontsevich, Lurie...); the theory of stacks, gerbes and of general cohomology is, from one point of view, almost the same as general descent theory. For examples, it is better to consult the literature. It takes a while to treat them.
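To give the flavor of a "descent datum" in the simplest algebraic case (a sketch following the standard formulation for modules, as in the Stacks Project): for a ring map $R\to S$, a descent datum is an $S$-module $M$ together with an isomorphism of $S\otimes_R S$-modules
$$ \phi\colon M\otimes_R S \xrightarrow{\ \sim\ } S\otimes_R M \qquad\text{satisfying the cocycle condition}\qquad \phi_{13} = \phi_{23}\circ\phi_{12} $$
over $S\otimes_R S\otimes_R S$, where $\phi_{ij}$ applies $\phi$ in the $i$-th and $j$-th tensor positions and the identity elsewhere. Faithfully flat descent then says that $N \mapsto (S\otimes_R N, \text{canonical flip})$ is an equivalence from $R$-modules to descent data; the (co)modules over the (co)monad mentioned above are essentially these pairs $(M,\phi)$.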
{ "source": [ "https://mathoverflow.net/questions/22032", "https://mathoverflow.net", "https://mathoverflow.net/users/4721/" ] }
22,111
Let $U$ be a dense open subscheme of an integral noetherian scheme $X$ and let $E$ be a vector bundle on $U$. Suppose that the complement $Y$ of $U$ has codimension $\textrm{codim}(Y,X) \geq 2$. Let $F$ be a vector bundle on $X$ extending $E$, i.e., $F|_{U} = E$. Is any extension of $E$ to $X$ isomorphic to $F$?
This is true if $X$ satisfies Serre's condition $S_2$, i.e. $\mathcal O_X$ is $S_2$. Then a vector bundle is $S_2$ since locally it is isomorphic to $\mathcal O_X^n$. More generally, a coherent sheaf $F$ on a Japanese scheme (for example: $X$ is of finite type over a field) which is $S_2$ has a unique extension from an open subset $U$ with $\operatorname{codim} (X\setminus U)\ge 2$. This follows at once from the cohomological characterization of $S_2$. Thus, another name for the $S_2$-sheaves: they are sheaves which are saturated in codimension 2, and another name for the $S_2$-fication: saturation in codimension 2. P.S. Of course, by Serre's criterion, normal = $S_2+R_1$. So the above statement is true for any normal (e.g. smooth) variety. P.P.S. And of course, Gorenstein implies Cohen-Macaulay implies $S_2$. So the statement is also true for hypersurfaces and complete intersections, which could be very singular and non-reduced. Edit to define some terms: A Japanese (or Nagata) ring is a ring obtained from a ring finitely generated over a field or $\mathbb Z$ by optionally applying localizations and completions. The property used here is that for a Japanese ring $R$, its integral closure (normalization) $\tilde R$ is a finitely generated $R$-module. This is important because the $S_2$-fication $S_2(R)$ lies between $R$ and $\tilde R$. A coherent sheaf $F$ satisfies $S_n$ if for any point $x\in Supp(F)$, one has $$ depth_x (F) \ge \min(\dim_x Supp(F),n) $$ If $F$ locally corresponds to an $R$-module $M$, and $x$ to a prime ideal $p$, then the depth is the length of a maximal regular sequence $(f_1,\dots, f_k)$ of elements of $R_p$ for $M_p$ (so, $f_1$ is a nonzerodivisor in $M_p$, etc.).
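A concrete example to keep in mind: $X = \mathbb{A}^2_k$, $U = X \setminus \{0\}$, $E = \mathcal{O}_U$. Computing with the Čech cover of $U$ by $\{x\neq 0\}$ and $\{y\neq 0\}$ gives the algebraic Hartogs extension
$$ \Gamma(U, \mathcal{O}_U) \;=\; k[x,y,x^{-1}] \cap k[x,y,y^{-1}] \;=\; k[x,y] \;=\; \Gamma(X,\mathcal{O}_X), $$
which is exactly the $S_2$ property of $\mathbb{A}^2$ in action: sections, and hence extensions of vector bundles, do not see the missing codimension-2 point.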
{ "source": [ "https://mathoverflow.net/questions/22111", "https://mathoverflow.net", "https://mathoverflow.net/users/4333/" ] }
22,141
This is a follow-up to this closed question. I open a random page, such as something on arXiv at 8:05 p.m. EST, and I see all these dollar signs, and I sigh and I wish that I could see nicely formatted math formulas instead, just like on MO. Is it possible? Can one write a Greasemonkey script to apply jsMath after the fact even if the page authors did not think of it? A Mozilla Firefox addon? Please share your solutions. Seeing as this is an active community of people with similar interests, I am sure that hundreds or thousands of mathematicians would benefit from a solution.
The Greasemonkey MathML script written by Steve Cheng and linked to in Scott Morrison's answer worked only partially for me in Firefox on Windows 7: it did not display many \mathbb, \mathcal, and \mathfrak characters because the corresponding Unicode characters were missing in the fonts. Installing additional STIX and Asana Math fonts did not help, in fact it made the display look worse. So I rewrote the script (a long and tedious job finding the correct Unicode codes and putting them in the right places). I also added arxiv.org, front.math.ucdavis.edu, MathSciNet, and mail.google.com to the sites supported by default, and added miscellaneous characters and TeX commands missing in the original script. Yes, it works with gmail (!) if you switch to the basic HTML view. So now you can read an email from your collaborator and see typeset math right there. Now tell me you haven't always wished and prayed for this? I know I have. Here are the detailed instructions for the method that produces good results using Mozilla Firefox on Windows 7. I haven't tested on other systems, you are welcome to share your experiences in the comments. Click here to install the Greasemonkey Firefox extension. Download a modified Greasemonkey script from here and save it to your Desktop. From the Firefox menu bar, File > Open File, navigate to the downloaded script and open it. Greasemonkey will offer to install it. Do that. That should be it. Check how it works by looking at some arXiv abstracts such as this, or this. Even when the authors use custom notations, such as \red or \cE, removing the dollar signs, putting math in a different font, and using sub- and superscripts dramatically increases the readability in my experience. Edit: I also fixed the displayed formulas with double dollars, which the original script did not handle correctly. So now you can also view this and this. So in the end this was more of a community service than a question. Enjoy the results!
{ "source": [ "https://mathoverflow.net/questions/22141", "https://mathoverflow.net", "https://mathoverflow.net/users/1784/" ] }
22,142
Today I found myself at the Wikipedia page on Vaught's Conjecture, http://en.wikipedia.org/wiki/Vaught_conjecture and it says that Prof. Knight, of Oxford, "has announced a counterexample" to the conjecture. The phrasing is odd; I interpret it as suggesting that there is some doubt as to whether Prof. Knight attained his goal. Another Wikipedia page uses similar language, "it is thought that there is a counterexample" or something like that. (I forget which page now.) Prof. Knight's page, which you can easily find through the link above, certainly gives the impression that he himself harbors no doubts about his achievement. Since the counterexample is a 117-page construction and not immediately perspicuous, I thought I'd ask here what the situation is. Is the paper being refereed? Apparently it's from 2002, so something unusual must be going on.
As far as I understand, no, Vaught's Conjecture has not been resolved. We held a reading seminar on Robin Knight's proposed counter-example closely following each of his drafts and simplified presentations here at Berkeley some years ago and were ultimately convinced that the draft of January 2003 does not contain a correct disproof of Vaught's Conjecture and requires more than minor emendations to produce a complete proof. We did not discover any essential error, though there were important points in the argument where it seemed to us that even the author had not worked out the technical details. That said, it is possible that revisions he has posted since then are sufficient, though in view of how much time it would require to enter into the details of the argument, I am not willing to work through the later papers until the basic architecture of the proof is certified by some other expert. To be fair to Robin Knight, his work in set theoretic topology is well-respected and his construction takes into account the relevant features required for a counter-example to Vaught's Conjecture. If you would like to know whether or not he believes that his proof works, you should ask him directly. If he says that he does believe the proof to be valid, then you can attempt to check the proof yourself. The difficulty in reading his manuscript is not the amount of background material one must know in order to follow it, but exactly the opposite: almost everything is developed from scratch so that one must hold the entire construction in one's mind without having the usual anchors of established theorems.
{ "source": [ "https://mathoverflow.net/questions/22142", "https://mathoverflow.net", "https://mathoverflow.net/users/4367/" ] }
22,174
When teaching Measure Theory last year, I convinced myself that a finite measure defined on the Borel subsets of a (compact; separable complete?) metric space was automatically regular. I used the Borel Hierarchy and some transfinite induction. But, typically, I've lost the details. So: is this true? Are related questions true? What are some good sources for this sort of questions? As motivation, a student pointed me to http://en.wikipedia.org/wiki/Lp_space#Dense_subspaces where it's claimed (without reference) that (up to a slight change of definition) the result is true for finite Borel measures on any metric space. (I'm normally only interested in Locally Compact Hausdorff spaces, for which, e.g. Rudin's "Real and Complex Analysis" answers such questions to my satisfaction. But here I'm asking more about metric spaces). To clarify, some definitions (thanks Bill!): I guess by "Borel" I mean: the sigma-algebra generated by the open sets. A measure $\mu$ is "outer regular" if $\mu(B) = \inf\{\mu(U) : B\subseteq U \text{ is open}\}$ for any Borel B. A measure $\mu$ is "inner regular" if $\mu(B) = \sup\{\mu(K) : B\supseteq K \text{ is compact}\}$ for any Borel B. A measure $\mu$ is "Radon" if it's inner regular and locally finite (that is, all points have a neighbourhood of finite measure). So I don't think I'm quite interested in Radon measures (well, I am, but that doesn't completely answer my question): in particular, the original link to Wikipedia (about L^p spaces) seems to claim that any finite Borel measure on a metric space is automatically outer regular, and inner regular in the weaker sense with K being only closed.
The book Probability measures on metric spaces by K. R. Parthasarathy is my standard reference; it contains a large subset of the material in Convergence of probability measures by Billingsley, but is much cheaper! Parthasarathy shows that every finite Borel measure on a metric space is regular (p.27), and every finite Borel measure on a complete separable metric space, or on any Borel subset thereof, is tight (p.29). Tightness tends to fail when separability is removed, although I don't know any examples offhand. (Definitions used in Parthasarathy's book: $\mu$ is regular if for every measurable set $A$, $\mu(A)$ equals the supremum of the measures of closed subsets of $A$ and the infimum of open supersets of $A$. We call $\mu$ tight if $\mu(A)$ is always equal to the supremum of the measures of compact subsets of $A$. Some other texts use "regular" to mean "regular and tight", so there is some room for confusion here.)
{ "source": [ "https://mathoverflow.net/questions/22174", "https://mathoverflow.net", "https://mathoverflow.net/users/406/" ] }
22,188
I have studied some basic homological algebra. But I can't seem to get started on spectral sequences. I find Weibel and McCleary hard to understand. Are there books or web resources that serve as good first introductions to spectral sequences? Thank you in advance!
Many of the references that people have mentioned are very nice, but the brutal truth is that you have to work very hard through some basic examples before it really makes sense. Take a complex $K=K^\bullet$ with a two-step filtration $F^1\subset F^0=K$; the spectral sequence contains no more information than is contained in the long exact sequence associated to $$0 \to F^1\to F^0\to F^0/F^1\to 0$$ Now consider a three-step filtration $F^2\subset F^1\subset F^0=K$, write down all the short exact sequences you can and see what you get. The game is to somehow relate $H^*(K)$ to $H^*(F^i/F^{i+1})$. Suppose you know these are zero; is $H^*(K)=0$? Once you've mastered that then ...
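For whoever wants a road map for the two-step exercise, here is how it unwinds in one common cohomological indexing. The $E_1$ page has just two columns,
$$ E_1^{0,q} = H^q(F^0/F^1), \qquad E_1^{1,q} = H^{q+1}(F^1), \qquad d_1\colon E_1^{0,q}\to E_1^{1,q}\ \text{is the connecting map}\ \delta $$
of the short exact sequence above. Everything degenerates at $E_2$, and exactness of the long exact sequence is precisely the statement that $E_2^{0,n} = \ker\delta$ and $E_2^{1,n-1} = \operatorname{coker}\delta$ are the two graded pieces of $H^n(K)$. The three-step case is where the first genuinely new phenomenon, a $d_2$ differential, appears.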
{ "source": [ "https://mathoverflow.net/questions/22188", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
22,189
There are many "strange" functions to choose from and the deeper you get involved with math the more you encounter. I consciously don't mention any for reasons of bias. I am just curious what you consider strange and especially like. Please also give a reason why you find this function strange and why you like it. Perhaps you could also give some kind of reference where one can find further information. As usual: Please only mention one function per post - and let the votes decide :-)
A Brownian motion sample path . These are about the most bizarrely behaved continuous functions on $\mathbb{R}^+$ that you can think of. They are nowhere differentiable, have unbounded variation, attain local maxima and minima in every interval... Many, many papers and books have been written about their strange properties. Edit: As commented, I should clarify the term "sample path". Brownian motion is a stochastic process $B_t$. We say a sample path of Brownian motion has some property if the function $t \mapsto B_t$ has that property almost surely. So, run a Brownian motion, and with probability 1 you will get a function with all these weird properties.
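The $\sqrt{h}$ scaling of increments is the quantitative heart of this: difference quotients over a window $h$ behave like $h^{-1/2}$, so they blow up as $h \to 0$. A quick numpy illustration on a discretized path (only suggestive, of course, since a finite simulation cannot exhibit nowhere-differentiability):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**20                          # steps discretizing [0, 1]
dt = 1.0 / n
# Brownian path: cumulative sum of independent N(0, dt) increments
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

t0 = n // 3                        # difference quotients at a fixed time
for k in (16, 12, 8, 4):
    h = 2**k                       # window of h steps, i.e. time h * dt
    quotient = (B[t0 + h] - B[t0]) / (h * dt)
    print(f"h = {h*dt:.2e}  |quotient| = {abs(quotient):8.1f}")
# the magnitudes grow roughly like (h*dt)**(-1/2) as the window shrinks
```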
{ "source": [ "https://mathoverflow.net/questions/22189", "https://mathoverflow.net", "https://mathoverflow.net/users/1047/" ] }
22,299
The popular MO question "Famous mathematical quotes" has turned up many examples of witty, insightful, and humorous writing by mathematicians. Yet, with a few exceptions such as Weyl's "angel of topology," the language used in these quotes gets the message across without fancy metaphors or what-have-you. That's probably the style of most mathematicians. Occasionally, however, one is surprised by unexpectedly colorful language in a mathematics paper. If I remember correctly, a paper of Gerald Sacks once described a distinction as being as sharp as the edge of a pastrami slicer in a New York delicatessen. Another nice one, due to Wilfred Hodges, came up on MO here . The reader may well feel he could have bought Corollary 10 cheaper in another bazaar. What other examples of colorful language in mathematical papers have you enjoyed?
I don't even know if this is intentional or not. In his book Teichmuller theory , John Hubbard frequently references the category of Banach Analytic Manifolds. He adheres to the convention that a category be referenced by the concatenation of the first three letters of each constituent word, making the category in question BanAnaMan . This still cracks me up to this day.
{ "source": [ "https://mathoverflow.net/questions/22299", "https://mathoverflow.net", "https://mathoverflow.net/users/1587/" ] }
22,369
I would just like a clarification related to closed subschemes. If $(X,{\cal O}_X)$ is a locally ringed space and $A\subset X$ is any subset with the subspace topology then $i^{-1}{\cal O}_X$ will be a sheaf of rings on $A$ where $i:A\rightarrow X$ is the inclusion map. (Recall that the inverse image $i^{-1}{\cal O}_X$ is the sheafification of the presheaf $U \mapsto \lim_{V\supset i(U)} {\cal O}_X(V)$ for $U\subseteq A$ open, where the inductive limit is over all open subsets $V$ of $X$ containing $U$.) Is the reason why we don't do this (and instead start talking about closed subschemes, etc. etc.) just that $(A,i^{-1}{\cal O}_X)$ need not be a scheme even when $X$ is? Put differently: given any closed subset of a scheme there will be many ways to make it a closed subscheme. What is the relation between the locally ringed spaces on a closed subset making it a closed subscheme and the locally ringed space I have described above, which we obtain by pulling back the structure sheaf via the inclusion map?
It might help to consider the extreme case when $x$ is a closed point of $X$, and $i$ is the inclusion $\{x\} \hookrightarrow X$. The pullback $i^{-1}\mathcal O_X$ is then the stalk of $\mathcal O_X$ at $x$, i.e. the local ring $A_{\mathfrak m}$, if Spec $A$ is an affine n.h. of $x$ in $X$, and $\mathfrak m$ is the maximal ideal in $A$ corresponding to the closed point $x$. Now a single point, with a local ring $A_{\mathfrak m}$ as structure sheaf, is not a scheme (unless $A_{\mathfrak m}$ happens to be zero-dimensional). Moreover, the restriction map from sections of $\mathcal O_X$ over $X$ to sections of $i^{-1}\mathcal O_X$ over $x$ is not evaluation of functions at $x$ (which corresponds to reducing elements of $A$ modulo $\mathfrak m$), but is rather just passage to the germs of functions at $x$. The idea in scheme theory is that sections of $\mathcal O_X$ should be functions, and restriction to a closed subscheme should be restriction of functions. In particular, restriction to a closed point should be evaluation of the function (if you like, the constant term of the Taylor series of the function), not passage to the germ (which is like remembering the whole Taylor series). If you bear this intuition in mind, and think about the case of a closed point, you will soon convince yourself that the general notion of closed subscheme is the correct one: If we restrict functions to the locus cut out by an ideal sheaf $\mathcal I$, or (in the affine setting) by an ideal $I$ in $A$, then two sections will give the same function on this locus if they coincide mod $\mathcal I$ (or mod $I$ in the affine setting), and so it is natural to define the structure sheaf to then be $\mathcal O_X/\mathcal I$ (or to take its global sections to be $A/I$ in the affine setting), rather than $i^{-1}\mathcal O_X$.
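In coordinates, for $X = \operatorname{Spec} k[x]$ and the closed point at the origin, the two constructions give
$$ (i^{-1}\mathcal{O}_X)(\{0\}) = k[x]_{(x)} \ \ (\text{the germ: the whole "Taylor series" is remembered}), \qquad (\mathcal{O}_X/\mathcal{I})(\{0\}) = k[x]/(x) \cong k \ \ (\text{the value } f(0)), $$
which is the contrast described above in its smallest instance.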
{ "source": [ "https://mathoverflow.net/questions/22369", "https://mathoverflow.net", "https://mathoverflow.net/users/1148/" ] }
22,462
This is a question I've asked myself a couple of times before, but its appearance on MO is somewhat motivated by this thread , and sigfpe's comment to Pete Clark's answer. I've often heard it claimed that combinatorial species are wonderful and prove that category theory is also useful for combinatorics. I'd like to be talked out of my skepticism! I haven't read Joyal's original 82-page paper on the subject, but browsing a couple of books hasn't helped me see what I'm missing. The Wikipedia page, which is surely an unfair gauge of the theory's depth and uses, reinforces my skepticism more than anything. As a first step in my increasing appreciation of categorical ideas in fields familiar to me (logic may be next), I'd like to hear about some uses of combinatorial species to prove things in combinatorics. I'm looking for examples where there is a clear advantage to their use. To someone whose mother tongue is not category theory, it is not helpful to just say that "combinatorial structures are functors, because permuting the elements of a set A gives a permutation of the partial orders on A". This is like expecting baseball analogies to increase a Brazilian guy's understanding of soccer. In fact, if randomly asked on the street, I would sooner use combinatorial reasoning to understand finite categories than use categories of finite sets to understand combinatorics. Added for clarification: In my (limited) reading of combinatorial species, there is quite a lot going on there that is combinatorial. The point of my question is to understand how the categorical part is helping.
Composition of species is closely related to the composition of symmetric collections of vector spaces ("S-modules"), which is a remarkable example of a monoidal category everyone who had ever encountered operads necessarily used. Applying ideas coming from this monoidal category interpretation has various consequences for combinatorics as well. For example, look at papers of Bruno Vallette on partition posets (here and here): I believe that already the description of the $S_n$ action on the top homology of the usual partition lattice was hard to explain from the combinatorics point of view - and for many other lattices would be impossible without the Koszul duality viewpoint.
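One low-tech place where the compositional structure already pays off: composition of species decategorifies to composition of exponential generating functions, which is enough to do nontrivial counting. For instance, the species of set partitions is the composite "sets of nonempty sets", so its EGF must be $e^{e^x-1}$ and its coefficients are forced to be the Bell numbers. A sketch of the check in sympy (the species-level statement is the stronger, functorial one):

```python
from sympy import bell, exp, factorial, series, symbols

x = symbols('x')
egf = exp(exp(x) - 1)        # E composed with E_+ : sets of nonempty blocks

coeffs = series(egf, x, 0, 8).removeO()
from_egf = [coeffs.coeff(x, n) * factorial(n) for n in range(8)]
print(from_egf)                       # [1, 1, 2, 5, 15, 52, 203, 877]
print([bell(n) for n in range(8)])    # sympy's Bell numbers agree
```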
{ "source": [ "https://mathoverflow.net/questions/22462", "https://mathoverflow.net", "https://mathoverflow.net/users/4367/" ] }
22,478
Take, for example, the Klein bottle K. Its de Rham cohomology with coefficients in $\mathbb{R}$ is $\mathbb{R}$ in dimension 1, while its singular homology with coefficients in $\mathbb{Z}$ is $\mathbb{Z} \times \mathbb{Z}_2$ in dimension 1 (for integral cohomology, the torsion appears in dimension 2). It is in general true that de Rham cohomology ignores the torsion part of singular (co)homology. This is not a big surprise since de Rham cohomology really just gives the dimensions of the spaces of solutions to certain PDE's, but I'm wondering if there is some other way to directly use the differentiable structure of a manifold to recover torsion. I feel like I should know this, but what can I say... Thanks!
You can compute the integer (co)homology groups of a compact manifold from a Morse function $f$ together with a generic Riemannian metric $g$; the metric enters through the (downward) gradient flow equation $$ \frac{d}{dt}x(t)+ \mathrm{grad}_g(f) (x(t)) = 0 $$ for paths $x(t)$ in the manifold. After choosing further Morse functions and metrics, in a generic way, you can recover the ring structure, Massey products, cohomology operations, Reidemeister torsion, functoriality. The best-known way to compute the cohomology from a Morse function is to form the Morse cochain complex, generated by the critical points (see e.g. Hutchings's Lecture notes on Morse homology). Poincaré duality is manifest. Another way, due to Harvey and Lawson, is to observe that the de Rham complex $\Omega^{\ast}(M)$ sits inside the complex of currents $D^\ast(M)$, i.e., distribution-valued forms. The closure $\bar{S}_c$ of the stable manifold $S_c$ of a critical point $c$ of $f$ defines a Dirac-delta current $[\bar{S}_c]$. As $c$ varies, these span a $\mathbb{Z}$-subcomplex $S_f^\ast$ of $D^*(M)$ whose cohomology is naturally the singular cohomology of $M$. The second approach could be seen as a "de Rham theorem over the integers", because over the reals, the inclusions of $S_f\otimes_{\mathbb{Z}} \mathbb{R}$ and $\Omega^{\ast}_M$ into $D^\ast(M)$ are quasi-isomorphisms, and the resulting isomorphism of $H^{\ast}_{dR}(M)$ with $H^\ast(S_f\otimes_{\mathbb{Z}}\mathbb{R})=H^\ast_{sing}(M;\mathbb{R})$ is the de Rham isomorphism.
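Tying this back to the Klein bottle $K$ from the question: the standard Morse function on $K$ has four critical points of indices $0, 1, 1, 2$, and (in suitable bases) the signed count of gradient flow lines gives the Morse complex
$$ 0 \to \mathbb{Z}\langle c\rangle \xrightarrow{\ \partial_2\ } \mathbb{Z}\langle a\rangle\oplus\mathbb{Z}\langle b\rangle \xrightarrow{\ 0\ } \mathbb{Z}\langle p\rangle \to 0, \qquad \partial_2(c) = 2a, $$
so $H_0 = \mathbb{Z}$, $H_1 = \mathbb{Z}\oplus\mathbb{Z}/2$, $H_2 = 0$. The coefficient $2$ records two flow lines counted with the same sign, which is precisely the integer datum the de Rham complex cannot see.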
{ "source": [ "https://mathoverflow.net/questions/22478", "https://mathoverflow.net", "https://mathoverflow.net/users/4362/" ] }
22,549
Suppose $x\in \mathbb{R}$ is irrational, with irrationality measure $\mu=\mu(x)$; this means that the inequality $|x-\frac{p}{q}|< q^{-\lambda}$ has infinitely many solutions in integers $p,q$ if and only if $\lambda < \mu$. A beautiful theorem of Roth asserts that algebraic numbers have irrationality measure $2$. For $\lambda<\mu$, let $\mathcal{Q}(x,\lambda) \subset \mathbb{N}$ be the (infinite) set of all $q$ occuring in solutions to the aforementioned inequality. Question: For which pairs $(x,\lambda)$ does $\mathcal{Q}(x,\lambda)$ have positive relative density in the positive integers? For which pairs $(x,\lambda)$ does the cardinality of $\mathcal{Q}(x,\lambda) \cap [1,N]$ grow like a positive power of $N$?
$\mathcal{Q}(x,\lambda)$ has positive relative density if and only if $\lambda\le 1$. This follows from Weyl's Theorem on Uniform Distribution. (There is a nice concise proof in Cassels' "Diophantine Approximation".) Weyl's Theorem: Let $I\subset \mathbb{R}$ be an interval of length $\epsilon \le 1$. Let $S_N(I)$ be the set of all integers $q$ in the interval $[1,N]$ such that for some integer $p$, it holds that $xq-p\in I$. Then $$\frac{Card(S_N(I))}{N} \to \epsilon \text{ as } N\to\infty.$$ Here's a proof-sketch, using Weyl's Theorem, that if $\lambda > 1$ then $\mathcal{Q}(x,\lambda)$ has relative density zero: Fix $\epsilon > 0$, and take $I$ (in Weyl's Theorem) to be the interval $(-\epsilon,\epsilon)$. Suppose $\lambda>1$. Let $q\in \mathcal{Q}(x,\lambda)$; so for some $p\in \mathbb{Z}$, $$|xq-p| < q^{1-\lambda}.$$ There is an integer $M$, depending only on $\epsilon$ and $\lambda$, such that $|xq-p| < \epsilon$ whenever $p$ and $q$ satisfy the above inequality and $q\ge M$. Therefore $$\mathcal{Q}(x,\lambda)\cap [M,N]\subset S_N(I).$$ It follows from Weyl's Theorem that the relative density of $\mathcal{Q}(x,\lambda)$ does not exceed $2\epsilon$. Since $\epsilon$ is arbitrary, the relative density of $\mathcal{Q}(x,\lambda)$ must be zero. This can be proved in a more elementary but laborious way using the "Ostrowski Number System", which is explained in the Rockett and Szusz book on continued fractions.
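The Weyl-theorem step is easy to sanity-check numerically: for $x = \sqrt{2}$ and $I = (-\epsilon, \epsilon)$, the proportion of $q \le N$ with $xq$ within $\epsilon$ of an integer should tend to $2\epsilon$. A throwaway numpy check:

```python
import numpy as np

x = np.sqrt(2.0)
for eps in (0.1, 0.01):
    for N in (10**4, 10**6):
        q = np.arange(1, N + 1)
        frac = (x * q) % 1.0                 # fractional parts of x*q
        hits = np.count_nonzero((frac < eps) | (frac > 1.0 - eps))
        print(f"eps={eps}, N={N}: {hits / N:.4f}  (Weyl limit {2 * eps})")
```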
{ "source": [ "https://mathoverflow.net/questions/22549", "https://mathoverflow.net", "https://mathoverflow.net/users/1464/" ] }
22,579
I think a major reason is that Lie algebras don't have an identity, but I'm not really sure.
The reason is simple: There are many non-unital rings which appear quite naturally. If $X$ is a locally compact space (in the following every space is assumed to be Hausdorff), then $C_0(X)$, the ring of continuous complex-valued functions on $X$ vanishing at infinity, is a $C^\ast$-algebra which is unital if and only if $X$ is compact. If $X = \mathbb{N}$, this is just the ring of sequences converging to $0$. Gelfand duality yields an anti-equivalence between unital commutative $C^\ast$-algebras and compact spaces, and also between (possibly non-unital) commutative $C^*$-algebras (with "proper" homomorphisms) and locally compact spaces (with proper maps). In a very similar spirit ($\mathbb{C}$ is replaced by $\mathbb{F}_2$), there is an anti-equivalence between unital Boolean rings and compact totally disconnected spaces, and also between Boolean rings and locally compact totally disconnected spaces. One-point compactification on the topological side corresponds here to unitalization on the algebraic side. Perhaps we have the following conclusion: As locally compact spaces appear very naturally in mathematics (e.g. manifolds), the same is true for non-unital rings. If $A$ is a ring (possibly non-unital), its unitalization is defined to be the universal arrow from $A$ to the forgetful functor from unital rings to rings. An explicit construction is given by $\tilde{A} = A \oplus \mathbb{Z}$ as abelian group with the obvious multiplication so that $A \subseteq \tilde{A}$ is an ideal and $1 \in \mathbb{Z}$ is the identity. Because of the universal property, the module categories of $A$ and $\tilde{A}$ are isomorphic. Thus many results for unital rings carry over to non-unital rings. Every ideal of a ring can be considered as a ring. Important examples also come from functional analysis, such as the ideal of compact operators on a Hilbert space.
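For the record, the "obvious multiplication" on $\tilde{A} = A \oplus \mathbb{Z}$ is
$$ (a, m)\cdot(b, n) \;=\; (ab + mb + na,\ mn), \qquad 1_{\tilde{A}} = (0, 1), $$
which is what expanding "$(a + m1)(b + n1)$" forces; one checks directly that $A \cong A \oplus 0$ is a two-sided ideal and that every (not necessarily unital) ring map $A \to B$ into a unital ring extends uniquely to a unital map $\tilde{A} \to B$.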
{ "source": [ "https://mathoverflow.net/questions/22579", "https://mathoverflow.net", "https://mathoverflow.net/users/4692/" ] }
22,624
I am working on my zero knowledge proofs and I am looking for a good example of a real world proof of this type. An even better answer would be a Zero Knowledge Proof that shows the statement isn't true.
The classic example, given in all complexity classes I've ever taken, is the following: Imagine your friend is color-blind. You have two billiard balls; one is red, one is green, but they are otherwise identical. To your friend they seem completely identical, and he is skeptical that they are actually distinguishable. You want to prove to him (I say "him" as most color-blind people are male) that they are in fact differently-colored. On the other hand, you do not want him to learn which is red and which is green. Here is the proof system. You give the two balls to your friend so that he is holding one in each hand. You can see the balls at this point, but you don't tell him which is which. Your friend then puts both hands behind his back. Next, he either switches the balls between his hands, or leaves them be, with probability 1/2 each. Finally, he brings them out from behind his back. You now have to "guess" whether or not he switched the balls. By looking at their colors, you can of course say with certainty whether or not he switched them. On the other hand, if they were the same color and hence indistinguishable, there is no way you could guess correctly with probability higher than 1/2. If you and your friend repeat this "proof" $t$ times (for large $t$), your friend should become convinced that the balls are indeed differently colored; otherwise, the probability that you would have succeeded at identifying all the switch/non-switches is at most $2^{-t}$. Furthermore, the proof is "zero-knowledge" because your friend never learns which ball is green and which is red; indeed, he gains no knowledge about how to distinguish the balls.
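Since this is a protocol, it is easy to simulate and watch the $2^{-t}$ soundness bound emerge. A minimal Python sketch, comparing an honest prover who sees the colors with a "prover" facing genuinely identical balls, who can only guess:

```python
import random

def run_protocol(t, distinguishable, trials=100_000):
    """Fraction of trials in which the prover answers all t rounds correctly."""
    wins = 0
    for _ in range(trials):
        ok = True
        for _ in range(t):
            switched = random.random() < 0.5      # verifier's secret coin flip
            if distinguishable:
                guess = switched                   # prover reads off the colors
            else:
                guess = random.random() < 0.5      # identical balls: blind guess
            if guess != switched:
                ok = False
                break
        wins += ok
    return wins / trials

t = 10
print(run_protocol(t, True))    # 1.0: the honest prover always convinces
print(run_protocol(t, False))   # about 2**-10 ~ 0.001: a cheating prover
```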
{ "source": [ "https://mathoverflow.net/questions/22624", "https://mathoverflow.net", "https://mathoverflow.net/users/5601/" ] }
22,629
Are there primes of every Hamming weight? That is, for every integer $n \in \mathbb{Z}_{>0}$ does there exist a prime which is the sum of $n$ distinct powers of $2$? In this case, the Hamming weight of a number is the number of $1$s in its binary expansion. Many problems of this sort have been considered, but perhaps not in such language. For instance, the question "Are there infinitely many Fermat primes?" corresponds to asking, "Are there infinitely many distinct primes with Hamming weight exactly $2$?" Also related is the question of whether there are infinitely many Mersenne primes. These examples suggest a class of such problems, "Do there exist infinitely many primes which are the sum of exactly $n$ distinct powers of two?" Since this question is open even for the $n=2$ case, I pose a much weaker question here. What is known is that for every $n \leq 1024$ there is such a prime. The smallest such prime is listed in the Online Encyclopedia of Integer Sequences A061712. The number of zeros in the smallest such primes is listed in A110700. The number of zeros in a number with a given Hamming weight is a reasonable measure of how large that number is. The conjecture at OEIS is quite a bit stronger than the question I pose. Is there a theorem ensuring such primes for every $n \in \mathbb{Z}_{>0}$?
Fedja is absolutely right: this has been proven, for sufficiently large $n$ , by Drmota, Mauduit and Rivat. Although it looks at first sight as though this question is as hopeless as any other famous open problem on primes, it is easy to explain why this is not the case. Of the numbers between $1$ and $N := 2^{2n}$ , the proportion whose digit sum is precisely $n$ is a constant over $\sqrt{\log N}$ . These numbers are therefore quite "dense", and there is a technique in prime number theory called the method of bilinear sums (or the method of Type I/II sums) which in principle allow one to seriously think about finding primes in such a set. This is what Drmota, Mauduit and Rivat do. I do not believe that their method has currently been pushed as far as (for example) showing that there are infinitely many primes with no 0 when written in base 1000000. Let me also remark that they depend in a really weird way on some specific properties of these digit representation functions, in particular concerning the sum of the absolute values of their Fourier coefficients, which is surprisingly small. That is, it is not the case that they treat these Hamming sets as though they were "typical" sets of density $1/\sqrt{\log N}$ . I think one might also mention a celebrated paper of Friedlander and Iwaniec, https://arxiv.org/abs/math/9811185 . In this work they prove that there are infinitely many primes of the form $x^2 + y^4$ . This sequence has density just $c/N^{1/4}$ in the numbers up to $N$ , so the analysis necessary to make the bilinear forms method work is really tough. Slightly later, Heath-Brown adapted their ideas to handle $x^3 + 2y^3$ . Maybe that's in some sense the sparsest explicit sequence in which infinitely many primes are known (except of course for silly sequences like $s_n$ equals the first prime bigger than $2^{2^n}$ ). Finally, let me add the following: proving that, for some fixed $n$ , there are infinitely many primes which are the sum of $n$ powers of two - this is almost certainly an open problem of the same kind of difficulty as Mersenne primes and so on.
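For the finitary part of the question, exhibiting for small $n$ a prime that is a sum of $n$ distinct powers of two, a direct search is trivial to run. A sketch using sympy's isprime (no bearing on the analytic difficulties above, of course):

```python
from itertools import combinations
from sympy import isprime

def smallest_prime_with_weight(n, max_bits=64):
    """Smallest prime whose binary expansion has exactly n ones."""
    # b-bit candidates: top bit fixed, choose the other n-1 bit positions;
    # all b-bit numbers are smaller than all (b+1)-bit ones, so the first
    # bit-length containing a prime yields the minimum.
    for b in range(n, max_bits):
        candidates = ((1 << (b - 1)) + sum(1 << i for i in rest)
                      for rest in combinations(range(b - 1), n - 1))
        primes = [m for m in candidates if isprime(m)]
        if primes:
            return min(primes)
    return None

print([smallest_prime_with_weight(n) for n in range(1, 9)])
# expected: [2, 3, 7, 23, 31, 311, 127, 383]  (cf. OEIS A061712)
```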
{ "source": [ "https://mathoverflow.net/questions/22629", "https://mathoverflow.net", "https://mathoverflow.net/users/5597/" ] }
22,635
Disclaimer Of course not, I'm aware of Gödel's second incompleteness theorem. Still there is something which does not persuade me; maybe it's just that I took my logic class too long ago. On the other hand, it may turn out I'm just confused. :-) Background I will be talking about models of set theory; these are sets on their own, so a confusion can arise, since the symbol $\in$, viewed as "set belonging" in the usual sense, may have a different meaning from the symbol $\in$ of the theory. So, to avoid confusion, I will speak about levels. On the first level is the set theory mathematicians use all day. This has axioms, but is not a theory in the usual sense of logic. Indeed, to speak about logic we already need sets (to define alphabets and so on). In this naive set theory we develop logic, in particular the notions of theory and model. We call this theory Set1. On the second level is the formalized set theory; this is a theory in the sense of logic. We just copy the axioms of the naive set theory and take the (formal) theory which has these strings of symbols as axioms. We call this theory Set2. Now Gödel's result tells us that if Set2 is consistent, it cannot prove its own consistency. Well, here we need to be a bit more precise. The claim as stated is obvious, since Set2 cannot prove anything about the sets in the first level. It does not even know that they exist. So we repeat the process that carried us from Set1 to Set2: we define in Set2 the usual notions of logic (alphabets, theories, models...) and use these to define another theory Set3. A correct statement of Gödel's result is, I think, that if Set2 is consistent, then it cannot prove the consistency of Set3. The problem Ok, so we have a clear statement which seems to be completely provable in Set1, and indeed it is. This doesn't tell us, however, that if Set1 is consistent, then it cannot prove the consistency of Set2. So I'm left with the doubt that what one can do "from the outside" may be a bit more than what one can do in the formalized theory. Compare this with Gödel's first incompleteness theorem, where one has a statement which is true for natural numbers (and we can prove it from the outside) but which is not provable in PA. So the question is: is there any reason to believe that Set1 cannot prove the consistency of Set2? Or am I just confused and what I said does not make sense? Of course one could just argue that Set1, not being formalized, is not amenable to mathematical investigation; the best model we have is Set2, so we should trust that we can always "shift our theorems one level". But this argument does not convince me: indeed Gödel's first incompleteness theorem shows that we have situations where the theorems in the formalized theory are strictly less than what we can see from the outside. Final comment In a certain sense, it is far from intuitive that set theory should have a model. Because models are required to be sets, and sets are so small... Of course I know about universes, and how one can use them to "embed" the theory of classes inside set theory, so sets may be bigger than I think. But then again, the existence of universes is not provable from the usual axioms of set theory.
Your question certainly makes sense and it is a point that I feel is too often glossed over in textbooks. Let me rephrase your question. Goedel's second theorem says that, assuming that a certain formal system (ZFC, say) has a certain property that we call "consistency," then there is no formal proof in ZFC of a certain string, commonly denoted by "Con(ZFC)." Fine. But why on earth should this theorem say anything about whether the consistency of ZFC can be proved mathematically? The theorem is just a theorem about abstract strings of symbols, not about what human beings can and cannot do. The string denoted "Con(ZFC)" is commonly taken to "say" that "ZFC is consistent," but what is the justification for doing so? A string is just a string, and doesn't "say" anything. If we choose to think of the string as "meaning" something then that's our business, but surely that kind of human social activity is not something we can prove mathematical theorems about? The answer is that, underlying the usual discussions of Goedel's second theorem, there is the following Key Assumption: If someone were to come up with a mathematical proof of the consistency of ZFC, then by mimicking that proof, we could produce a formal proof of Con(ZFC) from the axioms of ZFC. The Key Assumption is crucial. Without it, we cannot make the leap from Goedel's second theorem to a meta-mathematical statement about the (im)possibility of proving the consistency of mathematics. And note that the Key Assumption is not a purely mathematical one; it cannot be, because it is a statement linking something that is not purely mathematical (namely, mathematical proof, which is a product of human activity) and something that is purely mathematical (namely, ZFC and theorems of ZFC). Therefore the Key Assumption is not susceptible to mathematical proof, and the reasons we have for accepting it must be in part philosophical. So what reasons do we have for accepting the Key Assumption? The chief reason is that long experience has taught us that all mathematical proofs that mathematicians come up with can indeed be mimicked by formal proofs in ZFC. This may seem obvious to us today, but it is not at all a trivial statement. Prior to the set-theoretic revolution, it was by no means obvious that all the diverse areas of mathematics could be formulated in a single common language (i.e., set theory) and deduced from a short list of axioms. It is only through the hard work of those working in the foundations of mathematics that we now take for granted that for any precise mathematical statement we want to make, there exists a formal sentence $S$ in the first-order language of set theory with the property that any mathematically acceptable proof of the original mathematical statement can be mimicked to produce a formal proof of $S$ from the axioms of ZFC. And if you had any lingering doubts about whether this formal mimicry existed only in theory and not in practice, then in recent years, the advent of formal theorem-proving software such as Mizar, HOL Light, Coq, Isabelle, etc., should have swept away such doubts by demonstrating concretely that large areas of mathematics can be mimicked formally in practice, and not just in theory. Finally, let me mention that although I believe it is very reasonable to accept the Key Assumption, it is possible to reject it. 
Perhaps most notably, the philosopher Michael Detlefsen has challenged the standard claim that the string Con(ZFC) properly mimics the statement "ZFC is consistent" in the sense of the Key Assumption, and has suggested that Hilbert's program to prove the consistency of mathematics is not yet dead. I believe that Detlefsen is simply mistaken and that there is nothing unsatisfactory about the standard string Con(ZFC), but he is at least correct that there is something to be checked here, and it is not a purely mathematical point but a partially philosophical one.
{ "source": [ "https://mathoverflow.net/questions/22635", "https://mathoverflow.net", "https://mathoverflow.net/users/828/" ] }
22,643
What is the definition of a stadium curve and does it have a curvature that is defined and continuous at each of its points?
{ "source": [ "https://mathoverflow.net/questions/22643", "https://mathoverflow.net", "https://mathoverflow.net/users/4423/" ] }
22,814
A group $G$ is said to be linear if there exists a field $k$, an integer $n$ and an injective homomorphism $\varphi: G \to \text{GL}_n(k).$ Given a short exact sequence $1 \to K \to G \to Q \to 1$ of groups where $K$ and $Q$ are linear (over the same field), is it true that $G$ is linear too? Background: Arithmetic groups are by definition commensurable with a certain linear group, so they are finite extensions of a linear group, and finite groups clearly are linear (over any field).
The universal central extension $\widetilde{\text{Sp}_{2n}}\mathbb{Z}$ is the preimage of $\text{Sp}_{2n}\mathbb{Z}$ in the universal cover of $\text{Sp}_{2n}\mathbb{R}$, and fits into the sequence $$1\to \mathbb{Z}\to \widetilde{\text{Sp}_{2n}}\mathbb{Z}\to \text{Sp}_{2n}\mathbb{Z}\to 1.$$ Deligne proved that $\widetilde{\text{Sp}_{2n}}\mathbb{Z}$ is not residually finite; the intersection of all finite-index subgroups of $\widetilde{\text{Sp}_{2n}}\mathbb{Z}$ is $2\mathbb{Z}<\widetilde{\text{Sp}_{2n}}\mathbb{Z}$. In particular, since finitely generated linear groups are residually finite by a theorem of Mal'cev, this implies that $\widetilde{\text{Sp}_{2n}}\mathbb{Z}$ is not linear. But certainly $\mathbb{Z}$ and $\text{Sp}_{2n}\mathbb{Z}$ are. If you want an arithmetic group, you can take the corresponding $\mathbb{Z}/k\mathbb{Z}$-extension of $\text{Sp}_{2n}\mathbb{Z}$, which will not be linear as long as $k > 2$. I learned the proof of this theorem from Dave Witte Morris, who has written up his fairly-accessible notes as "A lattice with no torsion-free subgroup of finite index (after P. Deligne)" (PDF link).
{ "source": [ "https://mathoverflow.net/questions/22814", "https://mathoverflow.net", "https://mathoverflow.net/users/3380/" ] }
22,837
Pete Clark threw down the challenge in his comment to my answer on Why the heck are the homotopy groups of the sphere so damn complicated?: Have the homotopy groups of spheres ever been applied to anything, including in algebraic topology itself? It started to get some answers in those comments, but comments are a lousy place to record answers to a question like this so I'm reposting it as a question. In order to add some more value to the question (and justify my reposting it), let me say that I can foresee answers coming in several different flavours and I'd like the answers to explicitly say which flavour they use. Firstly, there is the distinction between stable and unstable homotopy groups. Briefly, there is a natural map $\pi_k(S^n) \to \pi_{k+1}(S^{n+1})$ and eventually (you will see the phrase, "in the stable range") this becomes an isomorphism. Once it is an isomorphism, we refer to them as the stable homotopy groups. So there are more unstable homotopy groups than stable ones, but to balance that, the stable ones are better behaved. Secondly, there is the point that I was trying to make in the aforementioned question: the fact that the homotopy groups are so complicated is correlated with their usefulness. So there may be some uses of the homotopy groups of spheres that explicitly rely on their complexity: if they weren't so complicated, they wouldn't be able to detect X. Thirdly, and partly in converse to the above, we do know some of the homotopy groups of spheres. So a use might be: because we know $\pi_7(S^{16})$ then we know X. So in your answer, please indicate which of the above best fits (or if none do, try to classify it in some way). Also, please note that this is a question about the homotopy groups of spheres, not homotopy theory in general, and that although I'm an algebraic topologist (some of the time), answers outside algebraic topology will be more useful in "selling" our subject! This question is a fairly obvious one for community wiki: it wasn't originally my question (though I hope that I've expanded it a little to add extra value) and I appear to be asking for a "big list". However, I suspect that the really good answers will involve some work to explain to a non-expert the key idea of why the homotopy groups of spheres are so important - merely linking to a paper will not be very satisfactory because it is likely that that paper is written for algebraic topologists rather than a general audience, and I would like to reward such efforts with the only coinage MO has. If the only answers I get are "see this paper" then I will gladly hit the "community wiki" button (indeed, if that was all I got, I'd consider closing the question).
I used to think that the entire theory was intellectual masturbation, but two examples in particular completely changed my mind. The first is the Pontryagin-Thom construction, which exhibits an isomorphism between the $k$th stable homotopy group $\pi_{n+k}(S^n)$ and the framed cobordism group of smooth $k$-manifolds. This is even interesting (though more elementary) in the case $k = 0$, where it recovers the basic degree theory that you learn in your first course on topology. This was originally developed by Pontryagin to compute homotopy groups of spheres, but now it is regarded as a tool in manifold theory. These matters are discussed in Chapter 3 of Lück's book on Surgery theory, for example. The second application is to physics. Unfortunately I don't understand this story very well at all, so I'll begin with what I more or less DO understand (which may or may not be well-known). The basic idea begins with the problem of situating electromagnetism in a quantum mechanical framework. Dirac began this process by imagining a "magnetic monopole", i.e. a particle that would play the role for magnetic fields that the electron plays for electric fields. The physical laws for a charged particle sitting in the field determined by a magnetic monopole turn out to depend on a choice of vector potential for the field (the choice is necessarily local), and Dirac found that changing the vector potential corresponds to multiplying the wave function $\psi$ for the particle by a complex number of modulus 1 (i.e. an element of U(1)). If we think of the magnetic monopole as sitting at the origin, then these phases can naturally be regarded as elements of a principal $U(1)$-bundle over $M = \mathbb{R}^3 - \{0\}$. But $M$ is homotopy equivalent to $S^2$, and principal $U(1)$-bundles over $S^2$ are classified by $\pi_1(U(1)) = \mathbb{Z}$. Proof: think about the Hopf fibration. The appearance of the integers here corresponds exactly to the observation of Dirac (the Dirac quantization condition) that the existence of a magnetic monopole implies the quantization of electric charge. It is remarkable to note that Hopf's paper on the Hopf fibration and Dirac's paper on magnetic monopoles were published in the same year, though neither had any clue that the two ideas were related! The story goes on. The so-called "Yang-Mills Instantons" correspond in a similar way to principal $SU(2)$ bundles over $S^4$, which are classified by $\pi_3(SU(2)) = \mathbb{Z}$. Again, the integers have important physical significance. So these two classical examples motivate the computation of $\pi_1(S^1)$ and $\pi_3(S^3)$, but as is always the case this is just the tip of an iceberg. I am not familiar with anything deeper than the tip, but I have it on good authority that physicists have become interested in homotopy groups of other spheres as well, presumably to classify other principal bundles (it seems like a bit of a coincidence that the groups which came up in these examples are spheres, but maybe one reduces homotopy theory for other spaces to homotopy theory for spheres). People who know more about physics and/or the classification of principal bundles should feel free to chime in. A great reference for the mathematician who wants to learn something about the physics that I discussed here is the book "Topology, Geometry, and Gauge Fields: Foundations" by Naber.
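The classification used twice in this answer (bundles over $S^n$ via $\pi_{n-1}(G)$) comes from the clutching construction; here is the statement in a short LaTeX note (standard material, stated for connected $G$):

```latex
% Clutching: write S^n as two hemispheres glued along the equator S^{n-1}.
% A principal G-bundle is trivial over each hemisphere, so up to isomorphism
% it is determined by the homotopy class of the transition function on the
% overlap:
\[
  \{\text{principal } G\text{-bundles over } S^n\}/\!\cong \;\;\longleftrightarrow\;\; \pi_{n-1}(G)
  \qquad (G \text{ connected}).
\]
% The two cases in the text are
\[
  \pi_1(U(1)) \cong \mathbb{Z} \ \text{ for bundles over } S^2, \qquad
  \pi_3(SU(2)) \cong \mathbb{Z} \ \text{ for bundles over } S^4 .
\]
```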
{ "source": [ "https://mathoverflow.net/questions/22837", "https://mathoverflow.net", "https://mathoverflow.net/users/45/" ] }
22,838
I am interested in how to select interesting yet reasonable problems for students to work on, either at Honours (that is, a research-based single year immediately after a degree) or PhD. By this I mean a problem that is unsolved but for which there is a good chance that a student can solve it either completely or partially and come out with a thesis either way. There are a number of possible strategies that I see some of my colleagues use, but which are sadly not available to me: (1) Be sufficiently brilliant that you already know roughly how something unsolved should be solved and guide the student accordingly, modifying the strategy on the fly. (2) Have a major project, say classifying a big class of structures, that is amenable to attack with a big general theorem with many well-defined sub-cases that can be assigned individually to students. Personally I often work on problems that lead nowhere - I don't solve them, they're too hard, I only rediscover known examples, etc. - but provided at least some of the problems work out, it doesn't matter. But for students, it's more of a "one-shot" affair - they can't afford to work for a year or, worse, three years, and get nothing. Are there any general principles that will help in the selection of problems? Or is it just a situation where you've either "got the knack" or you haven't?
Let me first answer a slightly different question, how to organize one's thoughts about such problems. I simply maintain a list of suitable projects, with ideas on how to approach them, and put them in a file "Dissertation Problems.tex". Some of these are projects that I might like to carry out myself, but many are projects that would be suitable for a math PhD dissertation. I have another similar file called "Math Ideas" that I maintain on my mobile phone, so that I can write into it when I am traveling. It often happens that when I am working on one problem, I have an idea for a related project, or a side question, or a generalization to a method that arises, or an idea for a counterexample to such a generalization. Sometimes, of course, these side problems can be solved or incorporated into the original project. But just as often, they are not directly necessary for the original project, but still interesting, and so I save them for a later project or for a student. Of course, not all ideas pan out, and in many cases, these ideas turn out to be uninteresting or wrong in some way. But often, they have turned into very interesting questions whose resolution forms a dissertation or a paper or joint paper. In this way, in time I accumulate more problems and questions than I can work on myself, and when my PhD students are ready for a problem, I can suggest several that may be to their liking. Often the student already has ideas for a problem, and in these cases I help them to focus their questions and efforts. These files also work for colleagues looking to do a joint project with me. The most important thing, however, in this method, is to remember to write the idea down. Surely we all have great ideas from time to time, but then, after we become absorbed in another task or project, the fleeting idea is regrettably forgotten. So it is important to be systematic about recording it. Create a file on your laptop (and on your mobile phone!) to which you add suitable mathematical ideas when they occur to you. Perhaps you want two files, one for student projects, and one for projects you want to reserve for yourself. This question I have answered---how to organize one's thoughts about mathematical ideas---may seem to be a trivial matter, but I believe that many mathematical ideas are held in mind only briefly before being forgotten forever, and so I find it to be a critically important issue, an important key to successful mathematical practice.
(My own style surely tends more toward quirky but fundamental questions, perhaps underlying or alongside the well-beaten path but not directly on it, and less towards big machinery, but in others' eyes I am likely merely presupposing a certain amount of big machinery, such as forcing or large cardinals.) The sweeping-questions problems, however, are more dangerous for students because they can often be harder or of unknown difficulty. But occasionally an interesting special case or aspect of a new sweeping question arises during my own investigations, and when this occurs I have found these smaller projects very satisfying mathematically, both for myself and my students. I would never give a problem to a PhD student that I didn't myself find compelling and interesting.
{ "source": [ "https://mathoverflow.net/questions/22838", "https://mathoverflow.net", "https://mathoverflow.net/users/1492/" ] }
22,885
Are there any non-contractible connected topological rings? Of course, such a thing cannot be a (topological) algebra over the reals. (I have a vague memory of having a glance at an article by Lurie in which some (for me) rather esoteric theory of higher categorical structures gave rise to topological rings that would have some very nontrivial topology, but I know nothing about that field (or fields) and, well, I just don't remember... Maybe someone can provide less "esoteric" examples! :) )
Here is a method for manufacturing such topological rings. The main technical ingredient is a product-preserving functor $$\Theta: \mathrm{Set}^{\Delta^{op}} \to \mathrm{CGHaus}$$ from the category of simplicial sets to the category of compactly generated Hausdorff spaces that is not, however, the usual geometric realization functor. This will almost undoubtedly be unfamiliar, and so will require some preface. The basic idea though is that while the usual geometric realization uses for its topological input the usual interval $I = [0, 1]$, the formal properties of the realization functor, particularly the fact it preserves finite products, still hold upon replacing $I$ by any compact topological interval $L$ and replacing ordinary affine simplices by $L$-valued simplices. This $L$-based realization $\Theta$, being product-preserving, takes simplicial rings to ring objects in $\mathrm{CGHaus}$. By choosing an appropriate $L$ that is connected but not path-connected, we can construct a topological ring that is connected but not path-connected, hence not contractible. We define an interval to be a linearly ordered set with distinct top and bottom elements, and an interval map as an order-preserving map that preserves the top and bottom. Observe that the usual affine simplex $\sigma_{n-1}$ of dimension $n-1$ can be described as the space of $(n-1)$-tuples $0 \leq x_1 \leq \ldots \leq x_{n-1} \leq 1$ (topologized as a subspace of $I^{n-1}$), or in other words as the space of interval maps $[n+1] \to I$ from the finite interval with $n+1$ points to $I$. Meanwhile, the category $\mathrm{FinInt}$ of finite intervals $[n+1]$ is equivalent to $\Delta^{op}$ (where $\Delta$ is the category of finite nonempty ordinals); indeed we have a functor $\hom(-, [2]): \Delta^{op} \to \mathrm{FinInt}$ (where the set of order-preserving maps $\hom([n], [2])$ from the $n$-element ordinal $[n]$ to $[2]$ is given the pointwise order, thus inheriting an interval structure from the interval structure on $[2]$, so that we have $[n+1] \cong \hom_{\Delta}([n], [2])$ as intervals). The usual geometric realization $R(X)$ of a simplicial set $X$, from a categorical point of view, is a tensor product $X \otimes_\Delta \sigma$ of a "right $\Delta$-module" $X: \Delta^{op} \to \mathrm{Set}$ with a left $\Delta$-module $\sigma: \Delta \to \mathrm{CGHaus}$ (the affine simplex functor): $$\sigma: \Delta \simeq \mathrm{FinInt}^{op} \stackrel{\hom(-, I)}{\to} \mathrm{CGHaus}$$ $$[n] \mapsto [n+1] \mapsto \hom_{\mathrm{Int}}([n+1], I).$$ This tensor product is often described by a coend formula $$R(X) = X \otimes_\Delta \sigma = \int^{[n] \in \Delta} X([n]) \cdot \hom_{\mathrm{Int}}([n+1], I).$$ As is well-known, $R$ is product-preserving. What is perhaps less well-known is that the only thing we need from $I$ to prove this fact is that it's compact Hausdorff and the interval order $\leq$ is a closed subset of $I \times I$. Complete details may be found in the nLab here. Therefore, if we replace $I$ with another compact Hausdorff topological interval $L$ (so that $\leq_L$ is a closed subset of $L \times L$), we get the same result, that the functor $\Theta = R_L$ defined by the formula $$R_L(X) = \int^{[n] \in \Delta} X([n]) \cdot \hom_{\mathrm{Int}}([n+1], L)$$ is also product-preserving. Let us take our compact topological interval $L$ to be the end-compactification of the long line (so, adjoin points $-\infty$ and $\infty$ to the ends of the long line).
This is connected, but not path-connected because for example there is no path from $\infty$ to any other point. Now we just turn a crank: start with any denumerable non-trivial ring $R$ in $\mathrm{Set}$ -- I'll take $R = \mathbb{Z}/(2)$ -- and apply a sequence of product-preserving functors, $$\mathrm{Set} \stackrel{K}{\to} \mathrm{Cat} \stackrel{N}{\to} \mathrm{Set}^{\Delta^{op}} \stackrel{R_L}{\to} \mathrm{CGHaus}.$$ (Here $K$ is the functor that takes a set $S$ to the category such that $\hom(x, y)$ is a singleton for any $x, y \in S$; this is right adjoint to the forgetful functor $\mathrm{Cat} \to \mathrm{Set}$ that remembers only the set of objects, and being a right adjoint, $K$ preserves products. The nerve functor $N$ also preserves products.) Since ring objects can be defined in any category with finite products, we have that product-preserving functors transport ring objects to ring objects. One should draw a picture of the category $K(\mathbb{Z}/(2))$; it's pretty clearly connected, and its nerve will be a connected simplicial set, or indeed a connected simplicial ring. The $L$-based realization of that will thus be a connected colimit of (connected) $L$-based simplices $\sigma_L(n) = \hom([n+1], L)$ (see the nLab here for connected colimits of connected spaces), and so it too will be a connected ring object in $\mathrm{CGHaus}$. At this point, the overall idea should be pretty clear, and the rest is just some technical mopping-up. One technical point is that products in $\mathrm{CGHaus}$ need not be usual topological products (as shown by a famous example of Dowker), so one might object that we could end up not with a topological ring, but some kind of funny ring object in $\mathrm{CGHaus}$. However, in many cases of interest, topological products do coincide with $\mathrm{CGHaus}$ products. This is particularly the case for colimits of countable increasing sequences of compact Hausdorff spaces: their product in $\mathrm{CGHaus}$ is the usual topological product. (The same proof as given by Allen Hatcher for Theorem A.6 here will do.) Thus, what counts here is that $N(K(\mathbb{Z}/(2)))$ is a simplicial set with finitely many cells in each dimension, and $R_L$ applied to this involves taking a countable union of compact Hausdorff spaces, so we are okay here. A second technical point involves showing that $X = (R_L \circ N \circ K)(\mathbb{Z}/(2))$ is not path-connected, which is intuitively clear, but an idea of proof would be nice. $X$ can be described as a union of nondegenerate simplices, where there are two such simplices in each dimension $n$ (corresponding to paths of length $n$ of the form $0 \to 1 \to 0 \to \ldots$ and $1 \to 0 \to 1 \to \ldots$), and a point in the interior of each such simplex has coordinates given by an increasing chain of length $n$ in a dictionary order, say $(j_1, t_1) < (j_2, t_2) < \ldots < (j_n, t_n)$ where the $j_k$ belong to the order type $-\omega_1 \cup \omega_1$ ($\omega_1$ being the first uncountable ordinal, and $-\omega_1$ is of opposite order type, extending in the "negative" direction), and the $t_k$ belong to $[0, 1)$. Every point of $X$ is an interior point of some unique $n$-simplex. Now if $\alpha: I \to X$ is a path connecting a point in the interior of an $n$-simplex, $n > 0$, to a 0-simplex, then let $(a, b) \subset I$ be a connected component of the open set of $t \in I$ such that $\alpha(t)$ is interior to an $n$-simplex with $n > 0$. 
Since $(a, b)$ has countable cofinality, there is a countable ordinal $\kappa$ such that for every $t \in (a, b)$, the maximum ordinal $|j_k|$ occurring in the coordinate description of $\alpha(t)$ is bounded above by $\kappa$. But $\alpha(a)$, being a 0-cell, has a neighborhood $U$ where every point $p \in U$, $p \neq \alpha(a)$, has a maximum $|j_k|$ (in its coordinate description) greater than $\kappa$, and we have reached a contradiction.
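To spell out the transport step used above, here is the standard statement in a short LaTeX note (standard material, with notation matching the answer):

```latex
% A ring object in a category with finite products is an object R equipped
% with morphisms
\[
  a \colon R \times R \to R \ (\text{addition}), \qquad
  m \colon R \times R \to R \ (\text{multiplication}), \qquad
  0, 1 \colon \mathbf{1} \to R,
\]
% satisfying the ring axioms expressed as commuting diagrams. If a functor F
% preserves finite products, so that
\[
  F(R \times R) \cong FR \times FR \quad\text{and}\quad F\mathbf{1} \cong \mathbf{1},
\]
% then applying F to those diagrams exhibits FR as a ring object in the
% target category. This is why each of K, N, and R_L carries the ring
% Z/(2) along the chain Set -> Cat -> Set^{Delta^op} -> CGHaus.
```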
{ "source": [ "https://mathoverflow.net/questions/22885", "https://mathoverflow.net", "https://mathoverflow.net/users/4721/" ] }
22,897
Is there a nice characterization of fields whose automorphism group is trivial? Here are the facts I know. Every prime field has trivial automorphism group. Suppose L is a separable finite extension of a field K such that K has trivial automorphism group. Then, if E is a finite Galois extension of K containing L, the subgroup $Gal(E/L)$ in $Gal(E/K)$ is self-normalizing if and only if L has trivial automorphism group. (As pointed out in the comments, a field extension obtained by adjoining one root of a generic polynomial whose Galois group is the full symmetric group satisfies this property). The field of real numbers has trivial automorphism group, because squares go to squares and hence positivity is preserved, and we can then use the fact that rationals are fixed. Similarly, the field of algebraic real numbers has trivial automorphism group, and any subfield of the reals that is closed under taking square roots of positive numbers has trivial automorphism group. My questions: Are there other families of examples of fields that have trivial automorphism group? For instance, are there families involving the p-adics? [EDIT: One of the answers below indicates that the p-adics also have trivial automorphism group.] For what fields is it true that the field cannot be embedded inside any field with trivial automorphism group? (I think that any automorphism of an algebraically closed field can be extended to any field containing it, though I don't have a proof) [EDIT: One of the answers below disproves the parenthetical claim, though it doesn't construct a field containing an algebraically closed field with trivial automorphism group]. I suspect that $\mathbb{Q}(i)$ cannot be embedded inside any field with trivial automorphism group, but I am not able to come up with a proof for this either. [EDIT: Again, I'm disproved in one of the answers below]. I'm not even able to come up with a conceptual reason why $\mathbb{Q}(i)$ differs from $\mathbb{Q}(\sqrt{2})$, which can be embedded in the real numbers. ADDED SEP 26: All the questions above have been answered, but the one question that remains is: can every field be embedded in a field with trivial automorphism group? Answering the question in general is equivalent to answering it for algebraically closed fields.
As Robin has pointed out, for all primes $p$, $\mathbb{Q}_p$ is rigid, i.e., has no nontrivial automorphisms. It is sort of a coincidence that you ask, since I spent much of the last $12$ hours writing up some material on multiply complete fields which has applications here: Theorem (Schmidt): Let $K$ be a field which is complete with respect to two inequivalent nontrivial norms (i.e., the two norms induce distinct nondiscrete topologies). Then $K$ is algebraically closed. Corollary: Let $K$ be a field which is complete with respect to a nontrivial norm and not algebraically closed. Then every automorphism of $K$ is continuous with respect to the norm topology. (Proof: To say that $\sigma$ is a discontinuous automorphism is to say that the pulled back norm $\sigma^*|| \ ||: x \mapsto ||\sigma(x)||$ is inequivalent to $|| \ ||$. Thus Schmidt's theorem applies.) In particular this applies to show that $\mathbb{Q}_p$ and $\mathbb{R}$ are rigid, since every continuous automorphism is determined by its values on the dense subspace $\mathbb{Q}$, hence the identity is the only possibility. (It is possible to give a much more elementary proof of these facts, e.g. using the Ostrowski classification of absolute values on $\mathbb{Q}$.) At the other extreme, each algebraically closed field $K$ has the largest conceivable automorphism group: $\# \operatorname{Aut}(K) = 2^{\# K}$; see e.g. Theorem 80 of http://alpha.math.uga.edu/~pete/FieldTheory.pdf. There is a very nice theorem of Bjorn Poonen which is reminiscent of, though does not directly answer, your other question. For any field $K$ whatsoever, and any $g \geq 3$, there exists a genus $g$ function field $K(C)$ over $K$ such that $\operatorname{Aut}(K(C)/K)$ is trivial. However there may be other automorphisms which do not fix $K$ pointwise. There is also a sense in which for each $d \geq 3$, if you pick a degree $d$ polynomial $P$ with $\mathbb{Q}$-coefficients at random, then with probability $1$ it is irreducible and $\mathbb{Q}[t]/(P)$ is rigid. By Galois theory this happens whenever $P$ is irreducible with Galois group $S_d$, and by Hilbert Irreducibility the complement of this set is small: e.g. it is "thin" in the sense of Serre. Addendum: Recall also Cassels' embedding theorem (J.W.S. Cassels, An embedding theorem for fields, Bull. Austral. Math. Soc. 14 (1976), 193-198): every finitely generated field of characteristic $0$ can be embedded in $\mathbb{Q}_p$ for infinitely many primes $p$. It would be nice to know some positive characteristic analogue that would allow us to deduce that a finitely generated field of positive characteristic can be embedded in a rigid field (so far as I know it is conceivable that every finitely generated field of positive characteristic can be embedded in some Laurent series field $\mathbb{F}_q((t))$, but even if this is true it does not have the same consequence, since Laurent series fields certainly have nontrivial automorphisms).
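To illustrate the last rigidity mechanism concretely (my own illustration, using numpy): an automorphism of $\mathbb{Q}(\alpha)$ must send $\alpha$ to a root of its minimal polynomial lying in $\mathbb{Q}(\alpha)$, so for a subfield of $\mathbb{R}$ only the real roots can occur. The polynomial $x^3 - 2$ is irreducible with Galois group $S_3$ and its field $\mathbb{Q}(\sqrt[3]{2}) \subset \mathbb{R}$ is rigid, while $\mathbb{Q}(\sqrt{2})$ has the automorphism $\sqrt{2} \mapsto -\sqrt{2}$:

```python
# Count real roots of the minimal polynomial: a single real root for
# x^3 - 2 forces the identity to be the only automorphism of Q(2^(1/3)),
# since the other two roots are non-real and cannot lie in a real field.
import numpy as np

for name, coeffs in [("x^2 - 2", [1, 0, -2]), ("x^3 - 2", [1, 0, 0, -2])]:
    roots = np.roots(coeffs)
    n_real = sum(abs(r.imag) < 1e-9 for r in roots)
    print(f"{name}: {n_real} real root(s) out of {len(roots)}")
```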
{ "source": [ "https://mathoverflow.net/questions/22897", "https://mathoverflow.net", "https://mathoverflow.net/users/3040/" ] }
22,923
Does there exist an algorithm which computes the Galois group of a polynomial $p(x) \in \mathbb{Z}[x]$? Feel free to interpret this question in any reasonable manner. For example, if the degree of $p(x)$ is $n$, then the algorithm could give a set of permutations $\pi \in Sym(n)$ which generate the Galois group.
There is an algorithm described in an ancient and interesting book on Galois Theory by Leonard Eugene Dickson. Here is a brief sketch in the case of an irreducible polynomial $f\in \mathbb{Q}[x]$. Suppose that $z_1\ldots z_n$ are the roots of $f$ in some splitting field of $f$ over $\mathbb{Q}$. (We don't need to construct the splitting field. The $z_i$ are mentioned here for the sake of explanation.) Let $x_1\ldots x_n$ be indeterminates. For a permutation $\sigma\in S_n$, let $$E_\sigma=x_1z_{\sigma(1)}+\ldots+ x_n z_{\sigma(n)}.$$ Let $g(x):=\prod _{\sigma} (x-E_\sigma)$, where $\sigma$ runs through all permutations in $S_n$. Each coefficient $c_i$ of $x^i$ in $g$ is symmetric in $z_1 \ldots z_n$, so (using the theorem on symmetric functions) we can write $c_i$ as a polynomial in $x_1\dots x_n$ with rational coefficients. Assuming that this has been done, factor $g$ into irreducibles over the ring $\mathbb{Q}[x_1 \ldots x_n]$. Let $g_0$ be the irreducible factor of $g$ that is satisfied by $E_{Id}$, where $Id$ is the identity permutation. Then the Galois group of $f$ consists of all permutations of $x_1\ldots x_n$ that fix $g_0$. The point is that the computation of $g_0$ is effective (albeit horrendous) and so is the determination of the permutations that fix $g_0$.
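Dickson's resolvent is effective but, as noted, horrendous to run. For small degrees there are practical shortcuts; here is a Python sketch (assuming sympy is available; the function name is mine) of the classical criterion for irreducible cubics, not Dickson's resolvent: the Galois group is $A_3$ exactly when the discriminant is a rational square, and $S_3$ otherwise.

```python
from sympy import Poly, discriminant, sqrt, symbols, QQ

x = symbols('x')

def cubic_galois_group(coeffs):
    # coeffs = [a3, a2, a1, a0] for a3*x^3 + a2*x^2 + a1*x + a0 over Q
    f = Poly(coeffs, x, domain=QQ)
    assert f.degree() == 3 and f.is_irreducible, "need an irreducible cubic over Q"
    d = discriminant(f)
    # Galois group is A_3 iff disc(f) is a square in Q, else S_3.
    return "A3" if sqrt(d).is_rational else "S3"

print(cubic_galois_group([1, 0, -3, 1]))   # x^3 - 3x + 1: disc = 81,   group A3
print(cubic_galois_group([1, 0, 0, -2]))   # x^3 - 2:      disc = -108, group S3
```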
{ "source": [ "https://mathoverflow.net/questions/22923", "https://mathoverflow.net", "https://mathoverflow.net/users/4706/" ] }
22,927
As I understand it, it has been proven that the axiom of choice is independent of the other axioms of set theory. Yet I still see people fuss about whether or not theorem X depends on it, and I don't see the point. Yes, one can prove some pretty disturbing things, but I just don't feel like losing any sleep over it if none of these disturbing things are in conflict with the rest of mathematics. The discussion seems even more moot in light of the fact that virtually none of the weird phenomena can occur in the presence of even mild regularity assumptions, such as "measurable" or "finitely generated". So let me turn to two specific questions: If I am working on a problem which is not directly related to logic or set theory, can important mathematical insight be gained by understanding its dependence on the axiom of choice? If I am working on a problem and I find a two page proof which uses the fact that every commutative ring has a maximal ideal, but I can envision a ten page proof which circumvents the axiom of choice, is there any sense in which my two page proof is "worse" or less useful? The only answer to these questions that I can think of is that an object whose existence genuinely depends on the axiom of choice does not admit an explicit construction, and this might be worth knowing. But even this is largely unsatisfying, because often these results take the form "for every topological space there exists X..." and an X associated to a specific topological space is generally no more pathological than the topological space you started with. Thanks in advance!
The best answer I've ever heard --- and I think I heard it here on MathOverflow from Mike Shulman, which suggests that this question is roughly duplicated somewhere else --- is that you should care about constructions "internal" to other categories: For many, many applications, one wants "topological" objects: topological vector spaces, topological rings, topological groups, etc. In general, for any algebraic gadget, there's a corresponding topological gadget, by writing the original definition (à la Bourbaki) entirely in terms of sets and functions, and then replacing every set by a topological space and requiring that every function be continuous. A closely related example is that you might want "Lie" objects: sets are replaced by smooth manifolds and functions by smooth maps. Another closely related example is to work entirely within the "algebraic" category. In all of these cases, the "axiom of choice" fails. In fact, from the internal-category perspective, the axiom of choice is the following simple statement: every surjection ("epimorphism") splits, i.e. if $f: X\to Y$ is a surjection, then there exists $g: Y \to X$ so that $f\circ g = {\rm id}_Y$. But this is simply false in the topological, Lie, and algebraic categories. (For instance, in topological groups the covering map $\mathbb{R} \to S^1$ is a surjection admitting no continuous section.) This leads to all sorts of extra rich structure if you do algebra internal to these categories. You have to start thinking about bundles rather than products, there can be "anomalies", etc. Update: In the comments, there was a request for a totally explicit example, where the Axiom of Choice is commonly used but not necessary. Here's one that I needed recently. Let $\mathcal C$ be an abelian tensor category, by which I mean that it is abelian, has a monoidal structure $\otimes$ that is biadditive on hom-sets, and that has a distinguished natural isomorphism $\text{flip}: X\otimes Y \overset\sim\to Y\otimes X$ which is a "symmetry" in the sense that $\text{flip}^2 = \text{id}$. Then in $\mathcal C$ it makes sense to talk about "Lie algebra objects" and "associative algebra objects", and given an associative algebra $A$ you can define a Lie algebra by "$[x,y] = xy - yx$", where this is short-hand for $[,] = (\cdot) - (\cdot \circ \text{flip})$ — $x,y$ should not be read as elements, but as some sort of generalization. So we can make sense of the categories of $\text{LIE}_{\mathcal C} = $"Lie algebras in $\mathcal C$" and $\text{ASSOC}_{\mathcal C} = $"associative algebras in $\mathcal C$", and we have a forgetful functor $\text{Forget}: \text{ASSOC}_{\mathcal C} \to \text{LIE}_{\mathcal C}$. Then one can ask whether $\text{Forget}$ has a left adjoint $U: \text{LIE}_{\mathcal C} \to \text{ASSOC}_{\mathcal C}$. If $\mathcal C$ admits arbitrary countable direct sums, then the answer is yes: the tensor algebra is thence well-defined, and so just form the quotient as you normally would do, being careful to write everything in terms of objects and morphisms rather than elements. In particular, if $\mathfrak g \in \text{LIE}_{\mathcal C}$, then $U\mathfrak g \in \text{ASSOC}_{\mathcal C}$ and it is universal with respect to the property that there is a Lie algebra homomorphism $\mathfrak g \to U\mathfrak g$. Let's say that $\mathfrak g$ is representable if the map $\mathfrak g \to U\mathfrak g$ is a monomorphism in $\text{LIE}_{\mathcal C}$. By universality, if there is any associative algebra $A$ and a monomorphism $\mathfrak g \to A$, then $\mathfrak g \to U\mathfrak g$ is mono, so this really is the condition that $\mathfrak g$ has some faithful representation.
The statement that "Every Lie algebra is representable" is normally known as the Poincare-Birkoff-Witt theorem. The important point is that the usual proof — the one that Birkoff and Witt gave — requires the Axiom of Choice, because it requires picking a vector-space basis, and so it works only when $\mathcal C$ is the category of $\mathbb K$ vector spaces for $\mathbb K$ a field, or more generally when $\mathcal C$ is the category of $R$-modules for $R$ a commutative ring and $\mathfrak g$ is a free $R$-module, or actually the proof can be made to work for arbitrary Dedekind domains $R$. But in many abelian categories of interest this approach is untenable: not every abelian category is semisimple, and even those that are you often don't have access to bases. So you need other proofs. Provided that $\mathcal C$ is "over $\mathbb Q$" (hom sets are $\mathbb Q$-vector spaces, etc.), a proof that works constructively with no other restrictions on $\mathcal C$ is available in Deligne, Pierre; Morgan, John W. Notes on supersymmetry (following Joseph Bernstein). Quantum fields and strings: a course for mathematicians , Vol. 1, 2 (Princeton, NJ, 1996/1997), 41--97, Amer. Math. Soc., Providence, RI, 1999. MR1701597 . They give a reference to Corwin, L.; Ne'eman, Y.; Sternberg, S. Graded Lie algebras in mathematics and physics (Bose-Fermi symmetry). Rev. Modern Phys . 47 (1975), 573--603. MR0438925 . in which the proof is given when $\mathcal C$ is the category of modules of a (super)commutative ring $R$, with $\otimes = \otimes_R$, and, importantly, $2$ and $3$ are both invertible in $R$. [Edit: I left a comment July 28, 2011, below, but should have included explicitly, that Corwin--Ne'eman--Sternberg require more conditions on $\mathcal C$ than just that $2$ and $3$ are invertible. Certainly as stated "PBW holds when $6$ is invertible" is inconsistent with the examples of Cohn below.] Finally, with $R$ an arbitrary commutative ring and $\mathcal C$ the category of $R$-modules, if $\mathfrak g$ is torsion-free as a $\mathbb Z$-module then it is representable. This is proved in: Cohn, P. M. A remark on the Birkhoff-Witt theorem. J. London Math. Soc . 38 1963 197--203. MR0148717 So it seems that almost all Lie algebras are representable. But notably Cohn gives examples in characteristic $p$ for which PBW fails. His example is as follows. Let $\mathbb K$ be some field of characteristic $p\neq 0$; then in the free associative algebra $\mathbb K\langle x,y\rangle$ on two generators we have $(x+y)^p - x^p - y^p = \Lambda_p(x,y)$ is some non-zero Lie series. Let $R = \mathbb K[\alpha,\beta,\gamma] / (\alpha^p,\beta^p,\gamma^p)$ be a commutative ring, and define $\mathfrak g$ the Lie algebra over $R$ to be generated by $x,y,z$ with the only defining relation being that $\alpha x = \beta y + \gamma z$. Then $\mathfrak g$ is not representable in the category of $R$-modules: $\Lambda_p(\beta y,\gamma z)\neq 0$ in $\mathfrak g$, but $\Lambda_p(\beta y,\gamma z)= 0$ in $U\mathfrak g$.
{ "source": [ "https://mathoverflow.net/questions/22927", "https://mathoverflow.net", "https://mathoverflow.net/users/4362/" ] }
22,990
This question arose after reading the answers (and the comments to the answers) to "Why worry about the axiom of choice?". First things first. In my intuitive conception of the hierarchy of sets, the axiom of choice is obviously true. I mean, how can the product of a family of non-empty sets fail to be non-empty? I simply cannot fathom it. Now, I understand that there are people who disagree with me; a mathematician of a (more) constructive persuasion would reply that mathematical existence is constructive existence. Well, we can agree to disagree. And besides, the distinction between constructive and non-constructive proofs is very much worth having in mind. First, because constructive proofs usually give more information and second, there are many contexts where AC is not available (e.g. topoi). A second (personal) reason for championing AC is a pragmatic one: it allows us to prove many things. And "many things" include things that physicists use without a blink. Analysis can hardly get off the ground without some form of choice. Countable choice (ACC) or dependent countable choice (ACDC) is enough for most elementary analysis and many constructivists have no problem with ACC or ACDC. For example, ACC and the stronger ACDC are enough to prove that a countable union of countable sets is countable, or to prove Baire's theorem, but they are not enough to prove Hahn-Banach, Tychonoff or Krein-Milman (please correct me if I am wrong). And this is where my question comes in. In one of the comments to the post cited above someone wrote (quoting from memory) that the majority of practicing mathematicians views countable choice as "true". I have seen this repeated many times, and the way I read this is that while the majority of practicing mathematicians views ACC as "obviously true", a part of this population harbours, in various degrees, some doubts about full AC. Assuming that I have not misread these statements, why, in the minds of some people, is ACC "unproblematic" while AC's validity is not? What is the intuitive explanation (or philosophical reason, if you will) why making countably infinite choices is "unproblematic" but making arbitrarily infinite choices is somehow "more suspicious" and "fraught with dangers"? I, for one, cannot see any difference, but then again I freely confess my ignorance about these matters. Let me stress once again that I do not think for a moment that denying AC is "wrong" in some absolute sense of the word; I just would like to understand better what is the obstruction (to use a geometric metaphor) to passing from countably infinite choices to arbitrarily infinite ones. Note: some rewriting and expansion of the original post to address some of the comments.
Here is one explanation of why countable choice is not problematic in constructive mathematics. For this discussion it is useful to formulate the axiom of choice as follows: $(\forall x \in X . \exists y \in Y . R(x,y)) \implies \exists f \in Y^X . \forall x \in X . R(x,f(x))$ This says that a total relation $R \subseteq X \times Y$ contains a function. The usual formulation of the axiom of choice is equivalent to the above one. Indeed, if $(S_i)_{i \in I}$ is a family of non-empty sets we take $X = I$, $Y = \bigcup_i S_i$ and $R(i,x) \iff x \in S_i$ to obtain a choice function $f : I \to \bigcup_i S_i$. Conversely, given a total relation $R \subseteq X \times Y$, consider the family $(S_x)_{x \in X}$ where $S_x = \lbrace y \in Y \mid R(x,y)\rbrace$ and apply the usual axiom of choice. One way of viewing sets in constructive mathematics is to imagine that they are collections together with given equality, i.e., some sort of "presets" equipped with equivalence relations. This actually makes sense if you think about how we implement abstract sets in computers: each element of the abstract set is represented by a finite sequence of bits, where each element may have many valid representations (and this is unavoidable in general). Let me give two specific examples: (1) a natural number $n \in \mathbb{N}$ is represented in the usual binary system, and let us allow leading zeroes, so that $42$ is represented by $101010$ as well as $0101010$, $00101010$, etc.; (2) a (computable) real $x \in \mathbb{R}$ is represented by machine code (a binary string) that computes arbitrarily good approximations of $x$. Specifically, a piece of code $p$ represents $x$ when $p(n)$ outputs a rational number that differs from $x$ by at most $2^{-n}$. Of course we only represent computable reals this way, and every computable real has many different representations. Let me write $\mathbb{R}$ for the set of computable reals, because those are the only reals relevant to this discussion. An essential difference between the first and the second example is that there is a computable canonical choice of representatives for elements of $\mathbb{N}$ (chop off the leading zeroes), whereas there is no such canonical choice for $\mathbb{R}$, for if we had it we could decide equality of computable reals and consequently solve the Halting problem. According to the constructive interpretation of logic, a statement of the form $\forall x \in X . \exists y \in Y . R(x, y)$ holds if there is a program $p$ which takes as input a representative for $x \in X$ and produces a representative for $y \in Y$, together with a witness for $R(x,y)$. Crucially, $p$ need not respect equality of $X$. For example, $\forall x \in \mathbb{R} . \exists n \in \mathbb{N} . x < n$ is accepted because we can write a program which takes as input a representative of $x$, namely a program $p$ as described above, and outputs a natural number larger than $x$, for example $round(p(0)) + 1000$. However, the number $n$ will necessarily depend on $p$, and there is no way to make it depend only on $x$ (computably). Let us have a look at the axiom of choice again: $(\forall x \in X . \exists y \in Y . R(x,y)) \implies \exists f \in Y^X . \forall x \in X . R(x,f(x))$ We accept this if there is a program which takes as input a $p$ witnessing totality of $R$ and outputs a representative of a choice function $f$, as well as a witness that $\forall x \in X. R(x, f(x))$ holds.
This is problematic because $p$ need not respect equality of $X$, whereas a representative for $f$ must respect equality. It is not clear where we could get it from, and in specific examples we can show that there isn't one. Already the following fails: $(\forall x \in \mathbb{R} . \exists n \in \mathbb{N} . x < n) \implies \exists f \in \mathbb{N}^\mathbb{R} . \forall x \in \mathbb{R} . x < f(x)$. Indeed, every computable map $f : \mathbb{R} \to \mathbb{N}$ is constant (because a non-constant one would allow us to write a Halting oracle). However, if we specialize to countable choice $(\forall n \in \mathbb{N} . \exists y \in Y . R(n,y)) \implies \exists f \in Y^\mathbb{N} . \forall n \in \mathbb{N} . R(n,f(n))$ then we can produce the desired program. Given $p$ that witnesses totality of $R$, define the following program $q$ that represents a choice function: $q$ takes as input a binary representation of a natural number $n$, possibly with leading zeroes, chops off the leading zeroes, and applies $p$. Now, even if $p$ did not respect equality of natural numbers, $q$ does because it applies $p$ to canonically chosen representatives. In general, we will accept choice for those sets $X$ that have computable canonical representatives for their elements. Ok, this was a bit quick, but I hope I got the idea across. Let me finish with a general comment. Most working mathematicians cannot imagine alternative mathematical universes because they were thoroughly trained to think about only one mathematical universe, namely classical set theory. As a result their mathematical intuition has fallen victim to classical set theory. The first step towards understanding why someone might call into question a mathematical principle which seems obviously true to them, is to broaden their horizon by studying other mathematical universes. On a smaller scale this is quite obvious: one cannot make sense of non-Euclidean geometry by interpreting points and lines as those of the Euclidean plane. Similarly, you cannot understand in what way the axiom of choice could fail by interpreting it in classical set theory. You must switch to a different universe, even though you think there isn't one... Of course, this takes some effort, but it's a real eye-opener.
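The representations in this answer are concrete enough to run. Here is a minimal Python sketch (the helper names are mine): a computable real is a function $p$ with $|p(n) - x| \le 2^{-n}$, and the bound program $round(p(0)) + 1000$ visibly depends on the representative $p$, not just on the real number it represents.

```python
from fractions import Fraction

def sqrt2_below(n: int) -> Fraction:
    # One representative of sqrt(2): bisection, returning the lower
    # endpoint, accurate to within 2^(-n).
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

def sqrt2_above(n: int) -> Fraction:
    # Another representative of the SAME real, returning the upper endpoint.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return hi

def bound(p) -> int:
    # Witness of "forall x in R, exists n in N with x < n", as in the text.
    return round(p(0)) + 1000

print(bound(sqrt2_below), bound(sqrt2_above))  # 1001 1002: same real, different n
```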
{ "source": [ "https://mathoverflow.net/questions/22990", "https://mathoverflow.net", "https://mathoverflow.net/users/2562/" ] }
22,999
Consider a symplectic manifold $D$ (with $H^1(D)=0$) with symplectic form $w$. Let $V$ be the total space of a circle bundle over $D$ with non-trivial Euler class $e\in H^2(D)$. You may think of $V$ as the set of unit vectors in a complex line bundle $L$ over $D$ with Chern class $e$. Then we can construct a symplectic form (denote it by the same symbol $w$) on the total space of $L$ whose restriction to $D$ is the original $w$. The question is whether $V$ has a contact structure with a contact form $\alpha$ such that $d\alpha=w\mid_V$. Looking at the Gysin sequence: $0\rightarrow H^0(D) \rightarrow H^2(D) \rightarrow H^2(V) \rightarrow 0$ it seems that the answer is: yes iff $w$ is a multiple of $e$. I just wanted to make sure that my conclusion is correct!
{ "source": [ "https://mathoverflow.net/questions/22999", "https://mathoverflow.net", "https://mathoverflow.net/users/5259/" ] }
23,060
This question probably doesn't make any sense, but I don't see why, so I ask it here hoping someone will illuminate the matter: There is this whole area of study in Set Theory about the consistency, independence of axioms, etc. In some of these you use model theory (e.g. forcing) to prove results about set theory. My question is: What is the foundation of this model theory we are using? We are certainly using sets to talk about the models, what some may call sets in the "meta"-mathematics, that is to say, the "real" mathematics. But then, all these arguments are in the end about the theory of sets as a theory, and not the theory of sets as a foundation of math, since we are using these sets in the meantime. So our set theory is not about the foundation of math. Am I right?
Your worries arise from asymmetry between how you view ordinary mathematics and how you view logic and model theory. If it is the business of logic and model theory to provide foundations for the rest of mathematics then, of course, logicians and model theorists will not be allowed to use mathematical methods until they have secured them. But how might they accomplish this? The more we think about it, the more it becomes obvious that "securing the foundations of mathematics", whatever that means, is a task for philosophers at best and a form of mysticism at worst.

It is far more fruitful to think of logic and model theory as just another branch of mathematics, namely the one that studies mathematical methods and mathematical activity with mathematical tools. They follow the usual pattern of "mathematizing" their object of interest:

1. observe what happens in the real world (look at what mathematicians do)
2. simplify and idealize the observed situation until it becomes manageable by mathematical tools (simplify natural language to formal logic, pretend that mathematicians only formulate and prove theorems and do nothing else, pretend that all proofs are always written out in full detail, etc.)
3. apply standard mathematical techniques

As we all know well, the 20th century logicians were very successful. They gave us important knowledge about the nature of mathematical activity and its limitations. One of the results was the realization that almost all mathematics can be done with first-order logic and set theory. The set-theoretic language was adopted as a universal means of communication among mathematicians.

The success of set theory has led many to believe that it provides an unshakeable foundation for mathematics. It does not, at least not the mystical kind that some would like to have. It provides a unifying language and framework for mathematicians, which in itself is a small miracle. Always remember that practically all classical mathematics was invented before modern logic and set theory. How could it exist without a foundation so long? Was the mathematics of Euclid, Newton and Fourier really vacuous until set theory came along and "gave it a foundation"?

I hope this explains what model theorists do. They apply standard mathematical methodology to study mathematical theories and their meaning. They have discovered, for example, that however one axiomatizes a given body of mathematics in first-order logic (for example, the natural numbers), the resulting theory will have unintended and surprising interpretations (non-standard models of Peano arithmetic), and I am skimming over a few technical details here.

There is absolutely nothing strange about applying model theory to the axioms known as ZFC. Or to put it another way: if you ask "why are model theorists justified in using sets?" then I ask back "why are number theorists justified in using numbers?"
{ "source": [ "https://mathoverflow.net/questions/23060", "https://mathoverflow.net", "https://mathoverflow.net/users/1724/" ] }
23,113
We know from elementary school that the triangle inequality holds in Euclidean geometry. Somewhere in high school or university, we come across non-Euclidean geometries (hyperbolic and Riemannian) and absolute geometry, and in both the inequality holds. I am curious whether the triangle inequality is made to hold in any geometry (from the beginning) or is a consequence of some axioms. Presumably, the denial of the inequality would create havoc in that conceivable geometry. Thanks.
There are people who seriously study quasi-normed spaces. The most natural examples are $\ell_p$ spaces for p strictly between 0 and 1 (the "norm" given by the usual formula and the distance given by the norm of the difference). Although these spaces do not satisfy the triangle inequality, you get an inequality of the form $\|x+y\|\leq C(\|x\|+\|y\|)$.
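A quick numerical illustration (my own sketch, using the usual formula $\|x\|_p = (\sum_i |x_i|^p)^{1/p}$): for $p = 1/2$ the triangle inequality fails outright, while the quasi-norm inequality holds with $C = 2^{1/p-1}$.

```python
# Check that the p-"norm" for 0 < p < 1 is only a quasi-norm.

def pnorm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

p = 0.5
x, y = (1.0, 0.0), (0.0, 1.0)
s = tuple(a + b for a, b in zip(x, y))   # x + y = (1, 1)

lhs = pnorm(s, p)                 # ||x + y|| = 4.0
rhs = pnorm(x, p) + pnorm(y, p)   # ||x|| + ||y|| = 2.0
C = 2 ** (1.0 / p - 1)            # = 2.0 for p = 1/2

assert lhs > rhs        # triangle inequality fails: 4 > 2
assert lhs <= C * rhs   # quasi-norm inequality holds: 4 <= 2 * 2
```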
{ "source": [ "https://mathoverflow.net/questions/23113", "https://mathoverflow.net", "https://mathoverflow.net/users/5627/" ] }
23,175
This question is short but to the point: what is the "right" abstract framework where Mayer-Vietoris is just a trivial consequence?
The Mayer-Vietoris sequence is an upshot of the relationship between sheaf cohomology and presheaf cohomology (a.k.a. Cech cohomology). Let $X$ be a topological space (or any topos), $\mathcal U$ a covering of $X$. Let $\mathop{\rm Sh}X$ be the category of sheaves on $X$ and $\mathop{\rm PreSh}X$ the category of presheaves. The embedding $\mathop{\rm Sh}X \subseteq \mathop{\rm PreSh}X$ is left-exact; its derived functors send a sheaf $F$ into the presheaves $U \mapsto \mathrm H^i(U, F)$.

For any presheaf $P$, one can define Cech cohomology $\mathrm {\check H}^i(\mathcal U, P)$ of $P$ by the usual formulas (this is often done only for sheaves, but scrutinizing the definition, one sees that the sheaf condition is never used). One shows that the $\mathrm {\check H}^i(\mathcal U, -)$ are the derived functors of $\mathrm {\check H}^0(\mathcal U, -)$; and of course for a sheaf $F$, $\mathrm {\check H}^0(\mathcal U, F)$ coincides with $\mathrm H^0(\mathcal U, F)$.

The Grothendieck spectral sequence of this composition, in the case of a covering with two elements, gives the Mayer-Vietoris sequence. There is also a spectral sequence for finite closed covers, which is obtained as in anonymous's answer. I guess that this can also be interpreted as Tilman does, in a different language (I am not a topologist).
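To spell out the last step (this unrolling is standard, though not part of the original answer): for a two-element cover $\mathcal U = \{U, V\}$ the alternating Cech complex has only two nonzero terms, so $\mathrm{\check H}^p(\mathcal U, -) = 0$ for $p \geq 2$. A spectral sequence concentrated in two columns unrolls into a long exact sequence, which here reads
$$\cdots \to \mathrm H^n(X, F) \to \mathrm H^n(U, F) \oplus \mathrm H^n(V, F) \to \mathrm H^n(U \cap V, F) \to \mathrm H^{n+1}(X, F) \to \cdots$$
and this is exactly Mayer-Vietoris for the sheaf $F$.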
{ "source": [ "https://mathoverflow.net/questions/23175", "https://mathoverflow.net", "https://mathoverflow.net/users/5756/" ] }
23,193
The real numbers can be axiomatically defined (up to isomorphism) as a Dedekind-complete ordered field. What is a similar standard axiomatic definition of the integer numbers? A commutative ordered ring with positive induction?
It's the unique commutative ordered ring whose positive elements are well-ordered.
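A sketch of why this characterization pins down $\mathbb{Z}$ (my phrasing, not part of the original answer): in such a ring, $1$ must be the least positive element, since $0 < a < 1$ would make $\{a, a^2, a^3, \dots\}$ a nonempty set of positive elements with no least member (as $a^{n+1} < a^n$). Then every positive element is a finite sum of $1$'s: otherwise let $b$ be the least positive element that is not such a sum; $b \neq 1$, so $b > 1$, so $b - 1$ is positive and smaller than $b$, hence $b - 1$ is a sum of $1$'s, and then so is $b = (b-1) + 1$, a contradiction. Since in an ordered ring the elements $n \cdot 1$ for $n > 0$ are positive and distinct, the map $\mathbb{Z} \to R$ is an order-preserving isomorphism onto all of $R$.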
{ "source": [ "https://mathoverflow.net/questions/23193", "https://mathoverflow.net", "https://mathoverflow.net/users/5761/" ] }
23,202
In the following, I use the word "explicit" in the following sense: No choices of bases (of vector spaces or field extensions), non-principal ultrafilters or the like which exist only by Zorn's Lemma (or AC) are needed. Feel free to use similar (perhaps more precise) notions of "explicit", but reasonable ones! To be honest, I'm not so interested in a discussion about mathematical logic. If no example is there, well, then there is no example. ;-)

Can you give explicit large linearly independent subsets of $\mathbb{R}$ over $\mathbb{Q}$? For example, $\{\ln(p) : p \text{ prime}\}$ is such a set, but it's only countable and surely is no basis. You can find more numbers which are linearly independent, but I cannot find uncountably many. AC implies $\dim_\mathbb{Q} \mathbb{R} = |\mathbb{R}|$. Perhaps $ZF$ has a model in which every linearly independent subset of $\mathbb{R}$ is countable?

The same question for algebraically independent subsets of $\mathbb{R}$ over $\mathbb{Q}$? Perhaps the set above is such a subset? But anyway, it is too small.

Closely related problems: Can you give an explicit proper subspace of $\mathbb{R}$ over $\mathbb{Q}$, which is isomorphic to $\mathbb{R}$? If so, is the isomorphism explicit? Same question for subfields. That would be great if there were explicit examples. :-)
Here is a linearly independent subset of $\mathbb{R}$ with size $2^{\aleph_0}$. Let $q_0, q_1, \ldots$ be an enumeration of $\mathbb{Q}$. For every real number $r$, let $$T_r = \sum_{q_n < r} \frac{1}{n!}$$ The proof that these numbers are linearly independent is similar to the usual proof that $e$ is irrational. (It's a cute problem; there's a spoiler below.)

I think a similar trick might work for algebraic independence, but I don't recall having seen such a construction. Actually, John von Neumann showed that the numbers $$A_r = \sum_{n=0}^\infty \frac{2^{2^{[nr]}}}{2^{2^{n^2}}}$$ are algebraically independent for $r > 0$. [Ein System algebraisch unabhängiger Zahlen, Math. Ann. 99 (1928), no. 1, 134–141.] A more general result due to Jan Mycielski seems to go through in ZF + DC, perhaps just ZF in some cases. [Independent sets in topological algebras, Fund. Math. 55 (1964), 139–147.]

As for subspaces and subfields isomorphic to $\mathbb{R}$, the answer is no. (Since I'm not allowed to post any logic here, I'll refer you to this answer and let you figure it out.) Well, I'll bend the rules a little... Consider a $\mathbb{Q}$-linear isomorphism $h:\mathbb{R}\to H$, where $H$ is a $\mathbb{Q}$-linear subspace of $\mathbb{R}$ (i.e. $h$ is an additive group isomorphism onto the divisible subgroup $H$ of $\mathbb{R}$). If $h$ is Baire measurable then it must be continuous by an ancient theorem of Banach and Pettis. It follows that $h(x) = xh(1)$ for all $x \in \mathbb{R}$ and therefore $H = \mathbb{R}$. Shelah has produced a model of ZF + DC where all sets of reals have the Baire property, so any such $h$ in this model must be Baire measurable. A similar argument works if Baire measurable is replaced by Lebesgue measurable, but Solovay's model of ZF + DC where all sets of reals are Lebesgue measurable uses the existence of an inaccessible cardinal, and this hypothesis was shown necessary by Shelah.

Spoiler: Suppose for the sake of contradiction that $r_1 > r_2 > \cdots > r_k$ and $a_1,a_2,\ldots,a_k \in \mathbb{Z}$ are such that $a_1T_{r_1} + a_2T_{r_2} + \cdots + a_kT_{r_k} = 0$. Choose a very large $n$ such that $r_1 > q_n > r_2$. If $n$ is large enough that $$(|a_1| + |a_2| + \cdots + |a_k|) \sum_{m=n+1}^\infty \frac{n!}{m!} < 1$$ then the tail terms of $n!(a_1T_{r_1}+\cdots+a_kT_{r_k}) = 0$ must cancel out, and we're left with $$a_1 = -\sum_{m=0}^{n-1} \sum_{q_m < r_i} a_i \frac{n!}{m!} \equiv 0 \pmod{n}$$ If moreover $n > |a_1|$, this means that $a_1 = 0$. Repeat to conclude that $a_1 = a_2 = \cdots = a_k = 0$.
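For concreteness, here is a small Python sketch of the $T_r$ construction (my own illustration; the particular enumeration of $\mathbb{Q}$ is a choice I make here, via the Calkin-Wilf sequence, and any fixed computable enumeration works equally well). The factorials make the partial sums converge very quickly.

```python
from fractions import Fraction
from math import factorial

def rationals():
    """A fixed enumeration q_0, q_1, ... of Q: zero, then the Calkin-Wilf
    sequence of positive rationals interleaved with its negatives."""
    yield Fraction(0)
    q = Fraction(1)
    while True:
        yield q
        yield -q
        # Calkin-Wilf step: q -> 1 / (2*floor(q) - q + 1)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

def T(r, terms=25):
    """Partial sum of T_r = sum of 1/n! over those n with q_n < r."""
    total = Fraction(0)
    for n, qn in zip(range(terms), rationals()):
        if qn < r:
            total += Fraction(1, factorial(n))
    return total

# Distinct reals r give distinct (indeed Q-linearly independent) values T_r:
print(float(T(Fraction(1, 2))), float(T(2)))
```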
{ "source": [ "https://mathoverflow.net/questions/23202", "https://mathoverflow.net", "https://mathoverflow.net/users/2841/" ] }
23,268
I'm the sort of mathematician who works really well with elements. I really enjoy point-set topology, and category theory tends to drive me crazy. When I was given a bunch of exercises on subjects like limits, colimits, and adjoint functors, I was able to do them, although I am sure my proofs were far longer and more laborious than they should have been. However, I felt like most of the understanding I gained from these exercises was gone within a week. I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations).

A couple months ago, I was trying to use the statements found in Hartshorne about glueing schemes and morphisms and realized that these statements were inadequate for my purposes. Looking more closely, I realized that Hartshorne's hypotheses are "wrong," in roughly the same way that it is "wrong" to require, in the definition of a basis for a topology, that it be closed under finite intersections. (This would, for instance, exclude the set of open balls from being a basis for $\mathbb{R}^n$.) Working through it a bit more, I realized that the "right" statement was most easily expressed by saying that a certain kind of diagram in the category of schemes has a colimit. At this point, the notion of "colimit" began to seem much more manageable: a colimit is a way of gluing objects (and morphisms).

However, I cannot think of any similar intuition for the notion of "limit." Even in the case of a fibre product, a limit can be anything from an intersection to a product, and I find it intimidating to try to think of these two very different things as special cases of the same construction. I understand how to show that they are; it just does not make intuitive sense, somehow. For another example, I think (and correct me if I am wrong) that the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits. [This is not correct as stated. See Martin Brandenburg's answer below for an explanation of why not, as well as what the correct statement is.] It seems like a statement this simple should make everything clearer, but I find it much easier to understand the definition in terms of compatible local sections gluing together. I can (I think) prove that they are the same, but by the time I get to one end of the proof, I've lost track of the other end intuitively.

Thus, my question is this: Is there a nice, preferably geometric intuition for the notion of limit? If anyone can recommend a book on category theory that they think would appeal to someone like me, that would also be appreciated.
I pick up your remarks about sheaves. Indeed, the sheaf condition is a very good example to get a geometric idea of a limit.

Assume that $X$ is a set and $X_i$ are subsets of $X$ whose union is $X$. Then it is clear how to characterize functions on $X$: These are simply functions on the $X_i$ which agree on the overlaps $X_i \cap X_j$. This can be formulated in a fancy way: Let $J$ be the category whose objects are the indices $i$ and pairs of such indices $(i,j)$. It should be a preorder and we have the morphisms $(i,j) \to i, (i,j) \to j$. Consider the diagram $J \to Set$, which is given by $i \mapsto X_i, (i,j) \mapsto X_i \cap X_j$. What we have remarked above says exactly that $X$ is the colimit of this diagram! In a similar fashion, open coverings can be understood as colimits in the category of topological spaces, ringed spaces or schemes. It's all about gluing morphisms.

Now what about limits? I think it is important first to understand limits in the category of sets. If $F : J \to Set$ is a small diagram, then we can consider simply the set of "compatible elements in the image" of $F$, namely $X = \{x \in \prod_j F(j) : \forall i \to j : x_j = F(i \to j)(x_i)\}$. A short definition would be $X = Cone(*,F)$. Observe that we have projections $X \to F(j), x \mapsto x_j$ and with these $X$ is the limit of $F$.

Now the Yoneda Lemma, or just the definition of a limit, tells you how you can think of a limit in an arbitrary category: That $X$ is a limit of a diagram $F : J \to C$ amounts to saying that elements of $X$ .. erm we don't have any elements, so let's say morphisms $Y \to X$, naturally correspond to compatible elem... erm morphisms $Y \to F(i)$. In other words, for every $Y$, $X(Y)$ is the set-theoretic limit of the diagram $F(Y)$. I hope that this makes clear that the concept of limits in arbitrary categories is already visible in the category of sets.

Now let $X$ be a topological space and $O(X)$ the category of open subsets of $X$; it's a preorder with respect to inclusion. Thus a presheaf is just a functor $F$ from $O(X)^{op}$ to the category of sets (or whichever suitable category you like). Now open coverings can be described as certain limits in $O(X)^{op}$, i.e. colimits in $O(X)$, as above. Observe that $F$ is a sheaf if and only if $F$ preserves these limits: If $U$ is covered by $U_i$, then $F(U)$ should be the limit of the $F(U_i), F(U_i \cap U_j)$ with transition maps $F(U_i) \to F(U_i \cap U_j), F(U_j) \to F(U_i \cap U_j)$, i.e. $F(U)$ consists of compatible elements of the $F(U_i)$, meaning that the elements of $F(U_i)$ and $F(U_j)$ restrict to the same element in $F(U_i \cap U_j)$. Thus we have a perfect geometric example of a limit: the set of sections on an open set is the limit of the set of sections on the open subsets of a covering.

Somehow this view takes over to the general case: Let $F : J \to Set$ be a functor. Regard it as a presheaf on $J^{op}$, and the map induced by $i \to j$ in $J^{op}$ as a restriction $F(j) \to F(i)$. Also call the elements of $F(i)$ sections on $i$. Then the limit of $F$ consists of compatible sections. Since I've been learning algebraic geometry, I almost always think of limits in this way.

Finally it is important to remember that limit is just the dual concept of colimit. And often algebra and geometry appear dually at once, for example sections and open subsets in sheaves.
If $(X_i,\mathcal{O}_{X_i})$ are ringed spaces and you want to find the colimit, you can guess what you have to do: Take the colimit of the $X_i$ and the limit of the $\mathcal{O}_{X_i}$ (pullbacked to the colimit).

"...the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits"

This is not correct. The reason is that the index category can be rather wild and colimits in preorders don't care about that. In detail: Let $U : J \to O(X)^{op}$ be a small diagram. Then the limit is just the union $V$ of the $U_j$. Thus $F$ preserves this limit iff sections on $V$ are sections on the $U_j$ which are compatible with respect to the restriction morphisms given by $U$. If $J$ is discrete and $U$ maps everything to the same open subset $V$ of $X$, then the compatible sections are $F(V)^J$, which is bigger than $F(V)$.

"... I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations)"

I think this book is still one of the best introductions into category theory. It can be hard to grasp all these abstract concepts and examples, but it gets easier as soon as you get input from other areas where category theoretic ideas are omnipresent. Your example about gluing morphisms illustrates this very well.
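Since the displayed formula for limits in Set above is completely finite in nature for finite diagrams, it can be run directly. Here is a small Python sketch (the encoding of diagrams as two dictionaries is my own choice); the example computes a pullback of two maps into a common target.

```python
from itertools import product

def set_limit(objects, arrows):
    """Limit of a finite diagram in Set: objects maps a node name to a
    finite set (given as a list); arrows maps a pair (source, target) to
    a function. Returns the compatible families, as in the formula above."""
    nodes = list(objects)
    limit = []
    for tup in product(*(objects[j] for j in nodes)):
        x = dict(zip(nodes, tup))
        if all(f(x[i]) == x[j] for (i, j), f in arrows.items()):
            limit.append(x)
    return limit

# Pullback of f : A -> C and g : B -> C.
A, B, C = [0, 1, 2], [0, 1, 2], [0, 1]
f = lambda a: a % 2
g = lambda b: 0 if b < 2 else 1
pullback = set_limit({"A": A, "B": B, "C": C},
                     {("A", "C"): f, ("B", "C"): g})
print(pullback)  # all families {A: a, B: b, C: c} with f(a) = c = g(b)
```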
{ "source": [ "https://mathoverflow.net/questions/23268", "https://mathoverflow.net", "https://mathoverflow.net/users/5094/" ] }
23,269
An often-cited principle of good mathematical exposition is that a definition should always come with a few examples and a few non-examples to help the learner get an intuition for where the concept's limits lie, especially in cases where that's not immediately obvious. Quillen model categories are a classic such case. There are some easy rough intuitions—"something like topological spaces", "somewhere one can talk about homotopy", and so on—but various surprising examples show quite how crude those intuitions are, and persuade one that model categories cover a much wider range of situations than one might think at first. However, I haven't seen any non-examples of model structures written up, or even discussed—that is, categories and classes of maps which one might think would be model structures, but which fail for subtle/surprising reasons. Presumably this is because, given the amount of work it typically takes to construct an interesting model structure, no-one wants to write (or read) three-quarters of that work without the payoff of an actual example at the end. Has anyone encountered any interesting non-examples of this sort?

Background on my motivations: I'm currently working with Batanin/Leinster style weak higher categories, and have a problem which seems amenable to model-theoretic techniques, so I'm trying to see if I can transfer/adapt/generalise the model structures defined by Cisinski et al., Lafont/Métayer/Worytkiewicz, etc. in this area. So I have some candidate (cofibrantly generated) classes of maps, and am trying to prove that they work; and there are lots of good examples around of how to prove that something is a model structure, but it would also be helpful to know what kinds of subtleties I should be looking out for that might make it fail to be.
Here is a classical example. Let CDGA be the category of commutative differential graded algebras over a fixed ground field k of characteristic $p$. Weak equivalences are quasi-isomorphisms, fibrations are levelwise surjections. These would determine the others, but cofibrations are essentially generated by maps $A \rightarrow B$ such that on the level of the underlying DGA, $B$ is a polynomial algebra over $A$ on a generator $x$ whose boundary is in $A$.

CDGA is complete and cocomplete, satisfies the $2$-out-of-$3$ axiom, the retract axiom, satisfies lifting, and a general map can be factored into a cofibration followed by an acyclic fibration by the small object argument. However, you don't have factorizations into acyclic cofibrations followed by fibrations, because of the following. Suppose $A \rightarrow B$ is a map of commutative DGAs which is a fibration in the above sense. Then for any element $[x]$ in the (co)homology of $B$ in even degree, the $p$-th power $[x]^p$ is in the image of the cohomology of $A$. In fact, pick any representing cycle $x \in B$ and choose a lift $y \in A$. Then the boundary of $y^p$ is $p y^{p-1}\,dy = 0$ by the Leibniz rule, so $[y^p]$ is a lift of $[x]^p$ to the (co)homology of $A$.

(As a result, there are a lot of other "homotopical" constructions, such as homotopy pullbacks, that are forced to throw you out of the category of commutative DGAs into the category of $E_\infty$ DGAs.) Nothing goes wrong in characteristic zero.
{ "source": [ "https://mathoverflow.net/questions/23269", "https://mathoverflow.net", "https://mathoverflow.net/users/2273/" ] }
23,337
A morphism of schemes $f:X\to S$ is said to be quasi-compact if for every OPEN quasi-compact subset $K \subset S$ the subset $f^{-1}(K) \subset X$ is also quasi-compact (and open, of course!). The morphism $f:X\to S$ is said to be universally closed if for every morphism $T\to S$ the resulting base-changed morphism $X_T \to T$ is closed. The title question (inspired by topology) is then:

Question 1: If $f:X\to S$ is universally closed, does it follow that $f$ is quasi-compact?

Here is a variant of this question, asking for a stronger conclusion:

Question 2: If $f:X\to S$ is universally closed, does it follow that for every quasi-compact subset $K\subset S$, open or not, $f^{-1}(K)$ is quasi-compact?

REMARK 1. The converse of Question 1 is false: any morphism between affine schemes is quasi-compact but is not universally closed in general.

REMARK 2. One might wonder whether $f$ proper implies $f$ quasi-compact. The answer is "yes" but for an irrelevant reason: proper is defined as separated, universally closed and of finite type. Since finite type already implies quasi-compact, proper obviously implies quasi-compact.

REMARK 3. In topology "proper" is (or should be!) defined as universally closed; equivalently, closed with quasi-compact fibres. Topologically proper implies that every quasi-compact subset (open or not) of the codomain has quasi-compact inverse image. The converse is not true in general, but it is for locally compact spaces.

REMARK 4 (edited). As BCnrd remarks in his comment below, it is not at all clear that the two questions are equivalent (I had stated they were in the previous version of this post, but I retract that claim). Also, beware that in topology the notion of quasi-compact continuous map is so weak as to be essentially useless since decent topological spaces, the ones algebraic geometers never use :), have so few open quasi-compact subsets.
Yes, a universally closed morphism is quasi-compact. (I haven't yet checked whether the same approach answers question 2.)

Proof: Without loss of generality, we may assume that $S=\operatorname{Spec} A$ for some ring $A$, and that $f$ is surjective. Suppose that $f$ is not quasi-compact. We need to show that $f$ is not universally closed.

Write $X = \bigcup_{i \in I} X_i$ where the $X_i$ are affine open subschemes of $X$. Let $T=\operatorname{Spec} A[\{t_i:i \in I\}]$, where the $t_i$ are distinct indeterminates. Let $T_i=D(t_i) \subseteq T$. Let $Z$ be the closed set $(X \times_S T) - \bigcup_{i \in I} (X_i \times_S T_i)$. It suffices to prove that the image $f_T(Z)$ of $Z$ under $f_T \colon X \times_S T \to T$ is not closed.

There exists a point $\mathfrak{p} \in \operatorname{Spec} A$ such that there is no neighborhood $U$ of $\mathfrak{p}$ in $S$ such that $X_U$ is quasi-compact, since otherwise we could cover $S$ with finitely many such $U$ and prove that $X$ itself was quasi-compact. Fix such $\mathfrak{p}$, and let $k$ be its residue field.

First we check that $f_T(Z_k) \ne T_k$. Let $\tau \in T(k)$ be the point such that $t_i(\tau)=1$ for all $i$. Then $\tau \in T_i$ for all $i$, and the fiber of $Z_k \to T_k$ above $\tau$ is isomorphic to $(X - \bigcup_{i \in I} X_i)_k$, which is empty. Thus $\tau \in T_k - f_T(Z_k)$.

If $f_T(Z)$ were closed in $T$, there would exist a polynomial $g \in A[\{t_i:i \in I\}]$ vanishing on $f_T(Z)$ but not at $\tau$. Since $g(\tau) \ne 0$, some coefficient of $g$ would have nonzero image in $k$, and hence be invertible on some neighborhood $U$ of $\mathfrak{p}$. Let $J$ be the finite set of $j \in I$ such that $t_j$ appears in $g$. Since $X_U$ is not quasi-compact, we may choose a point $x \in X - \bigcup_{j \in J} X_j$ lying above some $u \in U$. Since $g$ has a coefficient that is invertible on $U$, we can find a point $P \in T$ lying above $u$ such that $g(P) \ne 0$ and $t_i(P)=0$ for all $i \notin J$. Then $P \notin T_i$ for each $i \notin J$. A point $z$ of $X \times_S T$ mapping to $x \in X$ and to $P \in T$ then belongs to $Z$. But $g(f_T(z))=g(P) \ne 0$, so this contradicts the fact that $g$ vanishes on $f_T(Z)$.
{ "source": [ "https://mathoverflow.net/questions/23337", "https://mathoverflow.net", "https://mathoverflow.net/users/450/" ] }
23,352
A theorem (unfortunately I do not remember to whom it is due) states that there exists a finitely presented group containing a subgroup isomorphic to the additive group of rational numbers. Can somebody give an explicit construction?
Francesco Matucci, James Hyde and I have just posted an arXiv preprint with a solution to this problem. We prove that $\mathbb{Q}$ embeds in the group $\overline{T}$ of piecewise-linear homeomorphisms of the real line obtained by lifting Thompson's group $T$ through the covering map from the line to the circle. That is, $\overline{T}$ consists of all piecewise-linear homeomorphisms $f$ of the real line that satisfy the following conditions:

1. Each linear segment of $f$ has the form $f(x) = 2^a x + \dfrac{b}{2^c}$ for some $a,b,c\in\mathbb{Z}$.
2. Each breakpoint of $f$ has dyadic rational coordinates.
3. $f(x+1)=f(x)+1$ for all $x\in\mathbb{R}$.

We also prove that this group $\overline{T}$ has a presentation with two generators and four relations. It follows from this result together with a theorem of Brin that $\mathbb{Q}$ embeds in the automorphism group of Thompson's group $F$, which is also finitely presented. Similarly, $\mathbb{Q}$ embeds in the braided Thompson group $BV$ introduced by Brin and Dehornoy, which again is known to be finitely presented.

Our preprint also proves that $\mathbb{Q}$ embeds into a finitely presented simple group which we denote $T\mathcal{A}$. (None of the other groups listed above are simple.) This is a certain group of homeomorphisms of the circle that are "nearly piecewise-linear" in the sense that they have infinitely many linear pieces that accumulate at finitely many points. We prove that this group $T\mathcal{A}$ is two-generated and has type $\mathrm{F}_\infty$, and we indicate how an explicit finite presentation of $T\mathcal{A}$ could be derived.
{ "source": [ "https://mathoverflow.net/questions/23352", "https://mathoverflow.net", "https://mathoverflow.net/users/4556/" ] }
23,409
My question is related to the question Explanation for the Chern Character, to this question about Todd classes, and to this question about the Atiyah-Singer index theorem. I'm trying to learn the Atiyah-Singer index theorem from standard and less-standard sources, and what I really want now is some soft, heuristic, not-necessarily-rigorous intuitive explanation of why it should be true. I am really just looking for a mental picture, analogous somehow to the mental picture I have of Gauss-Bonnet: "increasing Gaussian curvature tears holes in a surface".

The Atiyah-Singer theorem reads $$\mathrm{Ind}(D)=\int_{T^\ast M}\mathrm{ch}([\sigma_m(D)])\smile \mathrm{Td}(T^\ast M \otimes \mathbb{C})$$

What I want to understand is what the Chern character cup Todd class is actually measuring (heuristically; it doesn't have to be precisely true), and why, integrated over the cotangent bundle, this should give rise to the index of a Fredholm operator. I'm not so much interested in exact formulae at this point as in gleaning some sort of intuition for what is going on "under the hood". The Chern character is beautifully interpreted in this answer by Tyler Lawson, which, however, doesn't tell me what it means to cup it with the Todd class (I can guess that it's some sort of exponent of the logarithm of a formal group law, but this might be rubbish, and it's still not clear what that should be supposed to be measuring). Peter Teichner gives another, to my mind perhaps even more compelling answer, relating the Chern character with looping-delooping (going up and down the n-category ladder?), but again, I'm missing a picture of what role the Todd class plays in this picture, and why it should have anything to do with the genus of an elliptic operator. I'm also missing a "big picture" explanation of Fei Han's work, even after having read his thesis (can someone familiar with this paper summarize the conceptual idea without the technical details?). Similarly, Jose Figueroa-O'Farrill's answer looks intriguing, but what I'm missing in that picture is intuitive understanding of why at zero temperature, the Witten index should have anything at all to do with Chern characters and Todd classes.

I know (at least in principle) that on both sides of the equation the manifold can be replaced with a point, where the index theorem holds true trivially; but that looks to me like an argument to convince somebody of the fact that it is true, and not an argument which gives any insight as to why it's true.

Let me add background about the Todd class, explained to me by Nigel Higson: "The Todd class is the correction factor that you need to make the Thom homomorphism commute with the Chern character." (I wish I could draw commutative diagrams on MathOverflow!) So for a vector bundle $V\longrightarrow E\longrightarrow X$, you have a Thom homomorphism in the top row $K(X)\rightarrow K_c(E)$, one in the bottom row $H^\ast_c(X;\mathbb{Q})\rightarrow H_c^\ast(E;\mathbb{Q})$, and Chern characters going from the top row to the bottom row. This diagram doesn't commute in general, but it commutes modulo the action of $\mathrm{TD}(E)$. I don't think I understand why any of this is relevant.

In summary, my question is: Do you have a soft not-necessarily-rigorous intuitive explanation of what each term in the Atiyah-Singer index theorem is trying to measure, and of why, in these terms, the Atiyah-Singer index theorem might be expected to hold true.
I don't think I can really give you the intuition that you seek because I don't think I quite have it yet either. But I think that understanding the relevance of Nigel Higson's comment might help, and I can try to provide some insight. (Full disclosure: most of my understanding of these matters has been heavily influenced by Nigel Higson and John Roe.)

My first comment is that the index theorem should be regarded as a statement about K-theory, not as a cohomological formula. Understanding the theorem in this way suppresses many complications (such as the confusing appearance of the Todd class!) and lends itself most readily to generalization. Moreover the K-theory proof of the index theorem parallels the "extrinsic" proof of the Gauss-Bonnet theorem, making the result seem a little more natural. The appearance of the Chern character and Todd class is explained in this context by the observations that the Chern character maps K-theory (vector bundles) to cohomology (differential forms) and that the Todd class measures the difference between the Thom isomorphism in K-theory and the Thom isomorphism in cohomology. I unfortunately can't give you any better intuition for the latter statement than what can be obtained by looking at Atiyah and Singer's proof, but in any event my point is that the Todd class arises because we are trying to convert what ought to be a K-theory statement into a cohomological statement, not for a reason that is truly intrinsic to the index theorem.

Before I elaborate on the K-theory proof, I want to comment that there is also a local proof of the index theorem which relies on detailed asymptotic analysis of the heat equation associated to a Dirac operator. This is analogous to certain intrinsic proofs of the Gauss-Bonnet theorem, but according to my understanding the argument doesn't provide the same kind of intuition that the K-theory argument does. The basic strategy of the local argument, as simplified by Getzler, is to invent a symbolic calculus for the Dirac operator which reduces the theorem to a computation with a specific example. This example is a version of the quantum-mechanical harmonic oscillator operator, and a coordinate calculation directly produces the $\hat{A}$ genus (the appropriate "right-hand side" of the index theorem for the Dirac operator). There are some slightly more conceptual versions of this proof, but none that I have seen REALLY explain the geometric meaning of the $\hat{A}$ genus.

So let's look at the K-theory argument. The first step is to observe that the symbol of an elliptic operator gives rise to a class in $K(T^*M)$. If the operator acts on smooth sections of a vector bundle $S$, then its symbol is a map $T^*M \to End(S)$ which is invertible away from the origin; Atiyah's "clutching" construction produces the relevant K-theory class.

Second, one constructs an "analytic index" map $K(T^*M) \to \mathbb{Z}$ which sends the symbol class to the index of $D$. The crucial point about the construction of this map is that it is really just a jazzed up version of the basic case where $M = \mathbb{R}^2$, and in that case the analytic index map is the Bott periodicity isomorphism.

Third, one constructs a "topological index map" $K(T^*M) \to \mathbb{Z}$ as follows. Choose an embedding $M \to \mathbb{R}^n$ (one must prove later that the choice of embedding doesn't matter) and let $E$ be the normal bundle of the manifold $T^*M$.
$E$ is diffeomorphic to a tubular neighborhood $U$ of $T^*M$, so we have a composition

$$K(T^* M) \to K(E) \to K(U) \to K(T^*\mathbb{R}^n).$$

Here the first map is the Thom isomorphism, the second is induced by the tubular neighborhood diffeomorphism, and the third is induced by inclusion of an open set (i.e. extension of a vector bundle on an open set to a vector bundle on the whole manifold). But K-theory is a homotopy functor, so $K(T^* \mathbb{R}^n) \cong K(\text{point}) = \mathbb{Z}$, and we have obtained our topological index map from $K(T^*M)$ to $\mathbb{Z}$.

The last step of the proof is to show that the analytic index map and the topological index map are equal, and here again the basic idea is to invoke Bott periodicity. Note that we expect Bott periodicity to be the relevant tool because it is crucial to the construction of both the analytic and topological index maps - in the topological index map it is hiding in the construction of the Thom isomorphism, which by definition is the product with the Bott element in K-theory.

To recover the cohomological formulation of the index theorem, just apply Chern characters to the composition of K-theory maps which defines the topological index. The K-theory formulation of the index theorem says that if you "plug in" the symbol class then you get out the index, and all squares with K-theory on top and cohomology on the bottom commute except for the "Thom isomorphism square", which introduces the Todd class.

So the main challenge is to get an intuitive grasp of the K-theory formulation of the index theorem, and as I hope you can see the main idea is the Bott periodicity theorem. I hope this helps!
{ "source": [ "https://mathoverflow.net/questions/23409", "https://mathoverflow.net", "https://mathoverflow.net/users/2051/" ] }
23,427
Just yesterday I heard of the notion of a fundamental group of a topos, so I looked it up on the nLab, where the following nice definition is given: If $T$ is a Grothendieck topos arising as the category of sheaves on a site $X$, then there is the notion of locally constant, locally finite objects in $T$ (which I presume just means that there is a cover $(U_i)$ in $X$ such that each restriction to $U_i$ is constant and finite). If $C$ is the subcategory of $T$ consisting of all the locally constant, locally finite objects of $T$, and if $F:C\rightarrow FinSets$ is a functor ("fiber functor"), satisfying certain unnamed properties which should imply prorepresentability, then one defines $\pi_1(T,F)=Aut(F)$.

Now, if $X_{et}$ is the small étale site of a connected scheme $X$, then it is well known that the category of locally constant, locally finite sheaves on $X$ is equivalent to the category of finite étale coverings of $X$, and with the appropriate notion of fiber functor it surely follows that the étale fundamental group and the fundamental group of the topos on $X_{et}$ coincide. Similarly, as the nLab entry mentions, if $X$ is a nice topological space, locally finite, locally constant sheaves correspond to finite covering spaces (via the "espace étalé"), and we should recover the profinite completion of the usual topological fundamental group.

Before I come to my main question: Did I manage to summarize this correctly, or is there something wrong with the above? My question: Has the fundamental group of other topoi been studied, and in what context or disguise might we already know them? For example, what is known about the fundamental group of the category of fppf sheaves over a scheme $X$?
The profinite fundamental group of $X_{fppf}$ as you define it is again the etale fundamental group of $X$. More precisely, the functor (of points) $f : X_{et} \to \mathrm{Sh}_{fppf}(X)$ is fully faithful and has essential image the locally finite constant sheaves (image clearly contained there, as finite etale maps are even etale locally finite constant, let alone fppf locally so).

Proof in 3 steps:

1. It is fully faithful by Yoneda (note also well-defined by fppf descent for morphisms).
2. Both sides are fppf sheaves (stacks) in $X$, by classical fppf descent.
3. Combining 1 and 2, it suffices to show that a sheaf we want to hit is just fppf locally hit, which is obvious since locally it's finite constant.

Note that the same proof also works for $X_{et}$ or anything in between -- once your topology splits finite etale maps it doesn't really matter what it is. So we usually just work with the minimal one, the small etale topology. As Mike Artin said to me apropos of something like this, "Why pack a suitcase when you're just going around the corner?"
{ "source": [ "https://mathoverflow.net/questions/23427", "https://mathoverflow.net", "https://mathoverflow.net/users/259/" ] }
23,478
The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested in the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.

Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are

(i) a bounded entire function is constant;
(ii) $\sin z$ is a bounded function;
(iii) $\sin z$ is defined and analytic everywhere on $\mathbb{C}$;
(iv) $\sin z$ is not a constant function.

Obviously, it is (ii) that is false. I think probably many people visualize the extension of $\sin z$ to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.

A second example is the statement that an open dense subset $U$ of $\mathbb{R}$ must be the whole of $\mathbb{R}$. The "proof" of this statement is that every point $x$ is arbitrarily close to a point $u$ in $U$, so when you put a small neighbourhood about $u$ it must contain $x$.

Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied.
For vector spaces, $\dim (U + V) = \dim U + \dim V - \dim (U \cap V)$, so $$ \dim(U +V + W) = \dim U + \dim V + \dim W - \dim (U \cap V) - \dim (U \cap W) - \dim (V \cap W) + \dim(U \cap V \cap W), $$ right?
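To see the failure vividly (the formula above is, of course, one of the advertised false beliefs): take three distinct lines through the origin in $\mathbb{R}^2$, so all pairwise and triple intersections are $\{0\}$. The left side is 2, the right side is 3. A quick numerical check in Python with numpy (my own sketch):

```python
import numpy as np

def dim(vectors):
    """Dimension of the span of the given vectors."""
    return int(np.linalg.matrix_rank(np.array(vectors))) if vectors else 0

def dim_sum(*spaces):
    """dim(U + V + ...) via the rank of all spanning vectors together."""
    return dim([v for space in spaces for v in space])

def dim_cap(U, V):
    """dim(U ∩ V), using the *valid* two-subspace formula."""
    return dim(U) + dim(V) - dim_sum(U, V)

U, V, W = [[1, 0]], [[0, 1]], [[1, 1]]   # three distinct lines in R^2

lhs = dim_sum(U, V, W)                                    # = 2
rhs = (dim(U) + dim(V) + dim(W)
       - dim_cap(U, V) - dim_cap(U, W) - dim_cap(V, W)
       + 0)                              # U ∩ V ∩ W = {0}
print(lhs, rhs)   # 2 3: inclusion-exclusion fails for three subspaces
```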
{ "source": [ "https://mathoverflow.net/questions/23478", "https://mathoverflow.net", "https://mathoverflow.net/users/1459/" ] }
23,487
A number theorist I know (who studies Galois representations) raised a question recently: Which finite groups can have an irreducible character of degree at least 2 having only $n=2, 3$, or 4 classes where the character takes nonzero values? He has learned about a few special examples involving nonabelian groups which are very close to being abelian. My limited intuition about the question, based on finite groups of Lie type, suggests that the sort of group he is looking for will be far from simple. But maybe there is no reasonable characterization of these groups for a given $n \leq 4$? In any case, there should be some relevant literature out there. The question itself belongs to a familiar genre in finite group theory: What does the character table tell me about the structure of a group? Appropriate references: S. Gagola, Pacific J. Math. 108 (1983), 363-385; Berkovich-Zhmud', Characters of Finite Groups, Part 2, Chapter 21.
{ "source": [ "https://mathoverflow.net/questions/23487", "https://mathoverflow.net", "https://mathoverflow.net/users/4231/" ] }
23,547
The motivation for this question comes from the novel Contact by Carl Sagan. Actually, I haven't read the book myself. However, I heard that one of the characters (possibly one of those aliens at the end) says that if humans compute enough digits of $\pi$, they will discover that after some point there is nothing but zeroes for a really long time. After this long string of zeroes, the digits are no longer random, and there is some secret message embedded in them. This was supposed to be a justification of why humans have 10 fingers and increasing computing power.

Anyway, apologies for the sidebar, but this all seemed rather dubious to me. So, I was wondering if it is known that $\pi$ does not contain 1000 consecutive zeroes in its base 10 expansion? Or perhaps it does? Of course, this question makes sense for any base and digit. Let's restrict ourselves to base 10. If $\pi$ does contain 1000 consecutive $k$'s, then we can instead ask if the number of consecutive $k$'s is bounded by a constant $b_k$. According to the wikipedia page, it is not even known which digits occur infinitely often in $\pi$, although it is conjectured that $\pi$ is a normal number. So, it is theoretically possible that only two digits occur infinitely often, in which case $b_k$ certainly exist for at least 8 values of $k$.

Update. As Wadim Zudilin points out, the answer is conjectured to be yes. It in fact follows from the definition of a normal number (it helps to know the correct definition of things). I am guessing that a string of 1000 zeroes has not yet been observed in the over 1 trillion digits of $\pi$ thus computed, so I am adding the open problem tag to the question. Also, Douglas Zare has pointed out that in the novel, the actual culprit in question is a string of 0s and 1s arranged in a circle in the base 11 expansion of $\pi$. See here for more details.
Summing up what others have written, it is widely believed (but not proved) that every finite string of digits occurs in the decimal expansion of pi, and furthermore occurs, in the long run, "as often as it should," and furthermore that the analogous statement is true for expansion in base b for b = 2, 3, .... On the other hand, for all we are able to prove, pi in decimal could be all sixes and sevens (say) from some point on. About the only thing we can prove is that it can't have a huge string of zeros too early. This comes from irrationality measures for pi which are inequalities of the form $|\pi-(p/q)|>q^{-9}$ (see, e.g., Masayoshi Hata, Rational approximations to $\pi$ and some other numbers, Acta Arith. 63 (1993), no. 4, 335-349, MR1218461 (94e:11082)), which tell us that such a string of zeros would result in an impossibly good rational approximation to pi.
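For anyone who wants to poke at the data themselves, here is a short Python sketch (assuming the mpmath library, which computes $\pi$ to arbitrary precision) that finds the longest run of zeroes among the first N decimal digits; for modest N the answer is unspectacular, as the heuristics predict.

```python
from mpmath import mp, nstr

N = 100_000               # how many digits of pi to examine
mp.dps = N + 10           # working precision, with a safety margin
digits = nstr(mp.pi, N, strip_zeros=False).replace("3.", "")

best = run = 0
for d in digits:
    run = run + 1 if d == "0" else 0
    best = max(best, run)
print(best)               # longest run of zeroes among the first N digits
```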
{ "source": [ "https://mathoverflow.net/questions/23547", "https://mathoverflow.net", "https://mathoverflow.net/users/2233/" ] }
23,564
In the December 2009 issue of the newsletter of the European Mathematical Society there is a very interesting interview with Pierre Cartier. On page 33, to the question "What was the ontological status of categories for Grothendieck?" he responds: "Nowadays, one of the most interesting points in mathematics is that, although all categorical reasonings are formally contradictory, we use them and we never make a mistake." Could someone explain what this actually means? (Please feel free to retag.)
Note: I am not a historian. I'm just guessing as to what prompted the comments. Here's my guess: if you do set theory naively, in the old-fashioned "anything is a set" way, then you run into Russell's paradox; the set consisting of all sets that aren't elements of themselves gives you trouble. So you then decide set theory needs formalising (I'm talking about 100 years ago here of course) and you write down some axioms, and the ones that "won" are ZFC, where only "small" collections of things are sets, and "big" things like "all groups" aren't sets. Of course there's nothing contradictory about considering all groups, or quantifying over groups (i.e. saying "every group has an identity element"), but you can't quantify over the set of all groups. And now because it's the 50s or 60s and you want to do homological algebra and take derived functors and do spectral sequences and stuff in some abstract way, you are now feeling the pressure a bit, because you want to define "functions" from the category of all G-modules (G a group) to the category of abelian groups called "group cohomology", but "H^n(G,-)" isn't a function, because its domain and range aren't sets. So you call it a "functor", which is fine, and press on. And as time goes on, and you start composing derived functors, you know in your heart that it's all OK. And then Grothendieck comes along, and probably other people too, and raise the issue that one really should be a bit more careful, because we don't want another Frege (who wrote a huge treatise on set theory but allowed big sets and his axioms were contradictory because of Russell's paradox). So Grothendieck tried to tame these beasts and "go back to basics"---but in some sense he "failed"---or, more precisely, realised that there were fundamental problems if he really wanted to treat categories as sets. "Sod it all", thought Grothendieck, "this is not really the main point". So he said "let's just assume there's a universe, i.e. (basically) a set where all the axioms of set theory hold" (it was a bit more complicated than that but still). This assumption (a) fixed all his problems but was (b) unprovable from the axioms of ZFC (because of Goedel). So there's a guess. Cartier is perhaps going over the top with "contradictory"---the statement "Russell's paradox is a paradox" is true but the statement "any mathematical manipulation with collections of objects that don't form a set is formally contradictory" is much stronger and surely false.
{ "source": [ "https://mathoverflow.net/questions/23564", "https://mathoverflow.net", "https://mathoverflow.net/users/394/" ] }
23,571
Are there non-isomorphic number fields (say of the same degree and signature) that have the same discriminant and regulator? I'm guessing the answer is no - why? And focusing on fields of small degree (n=3 and n=4), what about a less restrictive question: can we find two such fields that have the same regulator (no discriminant restrictions)?
Yes, see e.g. the paper "Arithmetically equivalent number fields of small degree" (Google for it) by Bosma and de Smit. In brief: two number fields $K$ and $K'$ are said to be arithmetically equivalent if they have the same Dedekind zeta function. A famous group-theoretic construction of Perlis (Journal of Number Theory, 1977) gives many nontrivial (i.e., non-isomorphic) pairs of arithmetically equivalent number fields. Remarkably, this construction works equally well to construct isospectral, non-isometric Riemannian manifolds, as was later shown by Sunada. Arithmetically equivalent number fields necessarily share many of the simplest invariants; for instance, they have equal discriminants. As the aforementioned paper explains, for arithmetically equivalent $K$ and $K'$, comparing zeta functions gives $h(K)r(K) = h(K')r(K')$, where $h$ is the class number and $r$ is the regulator. Therefore, to get an affirmative answer to your question you want a nontrivial pair of arithmetically equivalent number fields $K$ and $K'$ with $h(K) = h(K')$. The paper by Bosma and de Smit gives such examples.
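To spell out the group theory behind Perlis's construction (a standard formulation, stated here from memory): if $L/\mathbb{Q}$ is Galois with group $G$ and $K = L^H$, $K' = L^{H'}$ for subgroups $H, H' \leq G$, then $K$ and $K'$ are arithmetically equivalent precisely when $H$ and $H'$ are Gassmann equivalent, i.e.
$$|c \cap H| = |c \cap H'| \quad \text{for every conjugacy class } c \subseteq G,$$
while $K \cong K'$ corresponds to $H$ and $H'$ being conjugate in $G$. So nontrivial pairs come from Gassmann-equivalent, non-conjugate subgroups.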
{ "source": [ "https://mathoverflow.net/questions/23571", "https://mathoverflow.net", "https://mathoverflow.net/users/5860/" ] }
23,593
OK so let's see if I can use MO to explicitly compute an example of something, by getting other people to join in. Sort of "one level up"---often people answer questions here but I'm going to see if I can make people do a more substantial project. Before I start, note (1) the computation might have been done already [I'd love to hear of a reference, if it has] (2) the computation may or may not be worth publishing (3) If it is worth publishing, there may or may not be a debate as to who the authors are. I personally don't give a hang about (2) or (3) at this point, but others might. Let me get on to the mathematics. Oh---just a couple more things before I start---this project is related to the mathematics at this question , but perhaps pushes it a bit further (if we can get it to work). I had initially thought about these issues because I was going to give them to an undergraduate, but the undergraduate tells me today that he's decided to do his project on the holomorphic case, and it seemed a bit daft to let my initial investment in the problem go to waste, so I thought I'd tell anyone who was interested. If no-one takes the bait here, I'll probably just make this another UG project. OK so here's the deal. Say $K/\mathbf{Q}$ is a finite Galois extension, and $\rho:Gal(K/\mathbf{Q})\to GL(2,\mathbf{C})$ is an irreducible 2-dimensional representation. General conjectures in the Langlands philosophy predict that $\rho$ comes from an automorphic form on $GL(2)$ over $\mathbf{Q}$. The idea is that we are going to "see" this form in an explicit example where general theory does not yet prove that it exists. Now the determinant of $\rho$ is a 1-dimensional Galois representation, and it makes sense to ask whether $det(\rho(c))$ is $+1$ or $-1$, where $c$ is complex conjugation. It has to be one of these, because $c^2=1$. The nature of the automorphic form predicted to exist depends on the sign. If the determinant is $-1$ then the form should be holomorphic, and a classical weight 1 cusp form. In this case the existence of the form is known, because it is implied by Serre's conjecture, which is now a theorem of Khare and Wintenberger. If the determinant is $+1$ then the conjectural form should be a real-analytic function on the upper half plane, invariant under a congruence subgroup, and satisfying a certain differential equation (which is not the Cauchy-Riemann equations in this case). If the image of $\rho$ is a solvable group then the existence of this form is known by old work of Langlands and Tunnell. So in summary then, the one case where the form is not known to exist is when the determinant of complex conjugation is $+1$ and the image of Galois is not solvable. Here is an explicit example. The polynomial g5=344 + 3106*x - 1795*x^2 - 780*x^3 - x^4 + x^5 has splitting field $L$, an $A_5$-extension of $\mathbf{Q}$ ramified only at the prime 1951. Now $A_5$ is isomorphic to the quotient of $SL(2,\mathbf{F}_5)$ by its centre $\pm 1$, and $L$ has a degree two extension $K_0$, also unramified outside 1951, with $Gal(K_0/\mathbf{Q})$ being $SL(2,\mathbf{F}_5)$. 
Turns out that $K_0$ can be taken to be the splitting field of the rather messier polynomial

g24 = 14488688572801 - 2922378139308818*x^2 + 134981448876235615*x^4
      - 1381768039105642956*x^6 + 4291028045077743465*x^8
      - 2050038614413776542*x^10 + 287094814384960835*x^12
      - 9040633522810414*x^14 + 63787035668165*x^16 - 158664037068*x^18
      + 152929135*x^20 - 50726*x^22 + x^24.

I should perhaps say that David Roberts told me these polynomials in Jan 2008; they're in a paper by him and John Jones---but they learnt about them from a paper of Doud and Moore. Now $SL(2,\mathbf{F}_5)$ has two faithful 2-dimensional complex representations; the traces of each representation take values in $\mathbf{Q}(\sqrt{5})$ and one is of course the conjugate of the other. The determinant of both representations is trivial so in fact they are $SL(2,\mathbf{C})$-valued. Oh---also, all the roots of g24 are real---and hence $K_0$ is totally real. So what we have here is a representation $$\rho_0:Gal(K_0/\mathbf{Q})\to GL(2,\mathbf{C})$$ which is conjectured to come from automorphic forms, but, as far as I know, the conjecture is not known in this case. Unfortunately the conductor of $\rho_0$ is $1951^2$, which is a bit big. In fact let me say something more about what is going on at 1951. In the $A_5$ extension the decomposition and inertia groups at 1951 are both cyclic of order 5. If my understanding of what David Roberts told me is correct, in the $SL(2,\mathbf{F}_5)$ extension the decomposition and inertia groups are both cyclic of order 10 (in fact I just got magma to check this). But the upshot is that $\rho_0$ restricted to a decomposition group at 1951 is of the form $\psi+\psi^{-1}$ with $\psi$ of order 10. Which character of order 10? Well there are four characters $(\mathbf{Z}/1951\mathbf{Z})^\times\to\mathbf{C}^\times$ of order 10, and two of them will do, and two won't, and which ones will do depends on which 2-dimensional representation of $SL(2,\mathbf{F}_5)$ you chose. The key point though is that if you get $\psi$ right, then the twist $\rho:=\rho_0\otimes\psi$ will have conductor 1951, which is tiny for these purposes. Now, as Marty did in an $A_4$ example and as I did and Junkie did in a dihedral example in the Maass form question cited above, it is possible to figure out explicitly numbers $b_1$, $b_2$, $b_3$,..., with the property that $$L(\rho,s)=\sum_{n\geq1}b_n/n^s.$$ If one had a computer program that could calculate $b_n$ for $n\geq1$, then there are not one but two ways that one could attempt to give computational evidence for the predictions given by the Langlands philosophy: (A) one could use techniques that Fernando Rodriguez-Villegas explained to me a few months ago to try and get computational evidence that $L(\rho,s)$ had analytic continuation to the complex plane and satisfied the correct functional equation, and (B) one could compute the corresponding real analytic function on the upper half plane, evaluate it at various places to 30 decimal places, and see if the function was invariant under the group $\Gamma_1(1951)$. I don't know much about (A) but I once tried, and failed, to do (B), and my gut feeling is that my mistake is in the computer program I wrote to compute the $b_n$. But as junkie's response in the previous question indicates, there now seem to be several ways to compute the $b_n$ and one thing I am wondering is whether we can use the methods he/she indicated in this question. Let me speak more about how I tried to compute the $b_n$.
The character of the Maass form is $\psi^2$, the determinant of $\rho$. General theory tells us that $b_n$ is a multiplicative function of $n$ so we only need compute $b_n$ for $n$ a prime power. Again general theory (consider the local $L$-functions) says that for $p\not=1951$ one can compute $b_{p^n}$ from $b_p$. If $p=1951$ then $b_{p^n}=1$ for all $n$, because the decomposition and inertia groups coincide for 1951 in $K_0$ [EDIT: This part of the argument is wrong, and it explains why my programs didn't work. I finally discovered my mistake after comparing the output of mine and Junkie's programs and seeing where they differed. It's true that decomposition and inertia coincide in $K_0$ but when one twists by the order 10 character this stops being true. In fact $b_{1951}$ is a primitive 5th root of unity that I don't know how to work out using my method other than by trial and error.]. Finally, if the $L$-function of $\rho_0$ is $\sum_n a_n/n^s$ then $b_n=\psi(n)a_n$ for all $n$ prime to 1951, so it suffices to compute $a_p$ for $p\not=1951$. To compute $a_p$ I am going to compute the trace of $\rho_0(Frob_p)$. I first compute the LCM of the degrees of the irreducible factors of g24 mod $p$. If $p$ doesn't divide the discriminant of g24 then this LCM is the order of $Frob_p$ in $SL(2,\mathbf{F}_5)$. If $p$ does divide the discriminant of g24 then rotten luck, I need to factorize $p$ in the ring of integers of the number field generated by a root of g24. I did this using magma. Here are the results:

prime                                                     order
2                                                         6
3                                                         10
5                                                         4
163                                                       5
16061                                                     1
889289                                                    10
451400586583                                              2
1188493301983785760551727                                 2
120450513180827412314298160097013390669723824832697847   1

Now unfortunately just computing the order of the conj class of Frobenius is not enough to determine the trace of the Galois representation, because $SL(2,\mathbf{F}_5)$ contains two conj classes of elements of order 5, and two of order 10. However, in both cases, the conj classes remain distinct in $A_5$ and so it suffices to have an algorithm which can distinguish between the two conjugacy classes in $A_5$. More precisely, we have to solve the following problem: we label the conj classes of elements of order 5 in $A_5$ as C1 and C2, and we want an algorithm which, given a prime $p$ for which g5 is irreducible mod $p$, returns "C1" or "C2" depending on which class $Frob_p$ is in. Here is a beautiful way of doing it, explained to me by Bjorn Poonen: if g5 is irreducible mod $p$ then its roots in an alg closure of $\mathbf{F}_p$ are $x,x^p,x^{p^2},...$. Set $x_i=x^{p^i}$ and compute $\prod_{i<j}(x_i-x_j)$. This product is a square root of the discriminant of g5. Choose once and for all a square root of the discriminant of g5 in the integers; if the product is congruent to this mod $p$ return "C1", else return "C2". That's it! I implemented this. I had a program which returned a bunch of $a_n$'s, and hence a bunch of $b_n$'s. I built the function on the upper half plane as the usual sum involving Bessel functions and so on, but it computationally did not come out to be invariant under $\Gamma_1(1951)$. If anyone wants to take up the challenge of computing the $b_n$ that would be great. I have explained one way to do it above, but I am well aware that there might be other ways to compute the $b_n$ analogous to junkie's approach from the previous question.
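To make Poonen's trick concrete, here is a minimal Python sketch of it (my own illustration, not from the original post; sympy is used only for the integer discriminant of g5 and an irreducibility test, and everything else is hand-rolled arithmetic in $\mathbf{F}_p[t]/(g5)$):

from math import isqrt
from sympy import Poly, symbols

t = symbols('t')
G5 = [344, 3106, -1795, -780, -1, 1]             # coefficients of g5, low degree to high
D2 = int(Poly(G5[::-1], t).discriminant())       # disc(g5); a perfect square since Gal = A_5
D = isqrt(D2)
assert D * D == D2                               # fix this square root once and for all

def mulmod(f, g, p):
    # multiply in F_p[t], then reduce modulo the monic quintic g5
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    for k in range(len(h) - 1, 4, -1):           # use t^5 = -(344 + 3106 t - 1795 t^2 - 780 t^3 - t^4)
        c, h[k] = h[k], 0
        for j in range(5):
            h[k - 5 + j] = (h[k - 5 + j] - c * G5[j]) % p
    return (h + [0] * 5)[:5]

def powmod(f, e, p):
    # square-and-multiply in F_p[t]/(g5)
    result, base = [1, 0, 0, 0, 0], f
    while e:
        if e & 1:
            result = mulmod(result, base, p)
        base = mulmod(base, base, p)
        e >>= 1
    return result

def frob_class(p):
    # assumes g5 is irreducible mod p and p does not divide disc(g5)
    xs = [[0, 1, 0, 0, 0]]                       # the root t of g5 in F_p[t]/(g5) = F_{p^5}
    for _ in range(4):
        xs.append(powmod(xs[-1], p, p))          # x_i = x^(p^i)
    prod = [1, 0, 0, 0, 0]
    for i in range(5):
        for j in range(i + 1, 5):
            prod = mulmod(prod, [(a - b) % p for a, b in zip(xs[i], xs[j])], p)
    assert all(c == 0 for c in prod[1:])         # Frobenius-invariant, hence a constant in F_p
    return 'C1' if prod[0] == D % p else 'C2'

# print the class for those small p (if any) where g5 is irreducible mod p
for p in [7, 11, 13, 17, 19, 23, 29, 31]:
    if D % p and Poly(G5[::-1], t, modulus=p).is_irreducible:
        print(p, frob_class(p))

The product is fixed by Frobenius (a 5-cycle on the roots is an even permutation), which is why it lands in $\mathbf{F}_p$ and the assert should hold.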
Here is Magma code that gets you the answer in a few seconds. I made a special case for the bad primes, and did them by hand.

_<x> := PolynomialRing(Rationals());
f5 := 344 + 3106*x - 1795*x^2 - 780*x^3 - x^4 + x^5;
g24 := 14488688572801 - 2922378139308818*x^2 + 134981448876235615*x^4
       - 1381768039105642956*x^6 + 4291028045077743465*x^8
       - 2050038614413776542*x^10 + 287094814384960835*x^12
       - 9040633522810414*x^14 + 63787035668165*x^16 - 158664037068*x^18
       + 152929135*x^20 - 50726*x^22 + x^24;
K := NumberField(f5);
_,D := IsSquare(Integers()!Discriminant(f5));
prec := 30;
CHAR_TABLE := CharacterTable(GaloisGroup(g24));
chi := CHAR_TABLE[2];
BAD_FACTORS := [ <2,Polynomial([1,-1,1])>,
                 <3,Polynomial([1,-ComplexField(prec)!chi[9],1])>,
                 <5,Polynomial([1,0,1])>,
                 <7,Polynomial([1,0,1])>,
                 <71,Polynomial([1,0,1])>,
                 <137,Polynomial([1,1,1])>,
                 <163,Polynomial([1,-ComplexField(prec)!chi[5],1])>,
                 <1951,Polynomial([1])>,
                 <16061,Polynomial([1,-2,1])>,
                 <889289,Polynomial([1,-ComplexField(prec)!chi[8],1])> ];
BAD := [bf[1] : bf in BAD_FACTORS];
FACTORS := [bf[2] : bf in BAD_FACTORS];

function LOCAL(p,d : Precision:=prec)
  if p in BAD then return FACTORS[Position(BAD,p)]; end if;
  R := Roots(ChangeRing(f5,GF(p)));
  if #R eq 1 then return Polynomial([1,0,1]); end if;
  if #R eq 2 then
    ord := Lcm([Degree(f[1]) : f in Factorization(Polynomial(GF(p),g24))]);
    return Polynomial([1,ord eq 3 select 1 else -1,1]);
  end if;
  if #R eq 5 then
    ord := Lcm([Degree(f[1]) : f in Factorization(Polynomial(GF(p),g24))]);
    return Polynomial([1,ord eq 1 select -2 else 2,1]);
  end if;
  r := Roots(ChangeRing(f5,GF(p^5)));
  x := r[1][1];
  prod := GF(p)!&*[x^(p^i)-x^(p^j) : j in [(i+1)..4], i in [0..4]];
  wh := prod eq GF(p)!D;
  ord := Lcm([Degree(f[1]) : f in Factorization(Polynomial(GF(p),g24))]);
  if ord eq 10 then
    class := wh select 8 else 9; // compatible with FACTORS
  else
    class := wh select 6 else 5;
  end if;
  return Polynomial([1,-ComplexField(prec)!chi[class],1]);
end function;

L := LSeries(1, [0,0], 1951^2, LOCAL : Precision:=prec); // s->1-s, Gamma(s/2)^2
psi := DirichletGroup(1951, CyclotomicField(10)).1;
p1951 := Polynomial([1,-ComplexField(prec)!CyclotomicField(5).1]);
TP := TensorProduct(L, LSeries(psi : Precision:=prec), [<1951, 1, p1951>]);
CheckFunctionalEquation(TP);

Here are the special values:

ev := Evaluate(TP,0); // 2-1.453085056...
rel := PowerRelation(ev,4 : Al:="LLL");
NF := NumberField(rel);
Q5<zeta5> := CyclotomicField(5);
assert IsIsomorphic(NF,Q5);
Q5!NF.1;

So $L(\rho,0)=-4\zeta_5(1+\zeta_5)$ for Marty. I get $L(\rho_0,-1)=32(48723\sqrt{5} - 778741)$ as an algebraic. I get $L(\rho,-2)=8800\zeta_5^3 - 14444\zeta_5^2 + 35604\zeta_5 + 17412$ with more precision. I determined the TensorProduct factor at 1951 via trial and error, making the obvious guesses until one worked (the failure is at 100-110 digits). With this, I take it to 240 digits and I can even get $$L(\rho,-4)=-18475535360\zeta_5^3 - 11142861380\zeta_5^2 - 12091894020\zeta_5 - 7107607296$$ and $$L(\rho,-6)=25255057273186244\zeta_5^3 - 1015274469604000\zeta_5^2 - 15695788409197884\zeta_5 + 9459547822189412$$ The precision can go higher if you want more.
Finally, the Maass form:

function MaassEval(L,z)
  x:=Real(z); y:=Imaginary(z);
  printf "Using %o coefficients\n", Ceiling(11/y);
  C := LGetCoefficients(L,Ceiling(11/y));
  pi := Pi(RealField());
  a := Sqrt(y)*&+[C[n]*KBessel(0,2*pi*n*y)*Sin(2*pi*n*x) : n in [1..#C]];
  return a;
end function;

zz:=0.0001+0.0001*ComplexField().1;
MaassEval(TP,zz);
// Using 110000 coefficients
// -1.71477211817772949974178783985E-8 + 9.01673609747756708674470686948E-9*i
MaassEval(TP,zz/(1951*zz+1));
// Using 161297 coefficients
// -1.71477211817772949974179078240E-8 + 9.01673609747756708674496293450E-9*i
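For readers without Magma, here is a rough NumPy/SciPy translation of MaassEval (a sketch of mine, not from the answer above); it assumes you already have the coefficients $b_1,\dots,b_N$ in a complex array coeffs, which is of course the hard part, and like the Magma it relies on truncating at roughly $11/y$ terms since $K_0$ decays exponentially:

import numpy as np
from scipy.special import kv    # modified Bessel function of the second kind, K_nu

def maass_eval(coeffs, z):
    # sqrt(y) * sum_{n>=1} b_n K_0(2 pi n y) sin(2 pi n x),  z = x + i y,  coeffs[n-1] = b_n
    x, y = z.real, z.imag
    n = np.arange(1, len(coeffs) + 1)
    return np.sqrt(y) * np.sum(coeffs * kv(0, 2 * np.pi * n * y) * np.sin(2 * np.pi * n * x))

# the invariance test from the Magma session, once some coeffs are in hand:
# zz = 0.0001 + 0.0001j
# print(maass_eval(coeffs, zz), maass_eval(coeffs, zz / (1951 * zz + 1)))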
{ "source": [ "https://mathoverflow.net/questions/23593", "https://mathoverflow.net", "https://mathoverflow.net/users/1384/" ] }
23,770
This question is going to be extremely vague. It seems that wherever I go (especially about Grothendieck's circle of ideas) the higher-dimensional analogue of a curve minus a finite number of points is a scheme minus a normal crossing divisor. Why is that? What's so special about a normal crossing divisor that it simulates a curve minus a finite number of points better?
It mostly has to do with finding nice compactifications. Compactifications of varieties are a good thing as they allow us to control what happens at "infinity". If the variety itself is smooth it seems a good idea (and it is!) to demand that the compactification also be smooth. However, you need the situation to be nice at infinity in order to make the study of asymptotic behaviour at infinity to be as easy as possible. The best behaviour at infinity would be if the complement were smooth but that is in general not possible. What is always possible is to demand that the complement be a divisor with normal crossings. In practice it works essentially as well as having a smooth complement: You have a bunch of smooth varieties intersecting in as nice a manner as possible.
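To spell out the standard local picture behind "as nice a manner as possible" (textbook material, not specific to this answer): a simple (or strict) normal crossings divisor $D \subset X$ is one which in suitable local coordinates $z_1, \dots, z_n$ at each point looks like
$$D = \{ z_1 z_2 \cdots z_k = 0 \}, \qquad 0 \le k \le n,$$
a union of coordinate hyperplanes meeting transversally. For a curve ($n = 1$) this forces $k \le 1$, i.e. a finite set of points, which is exactly the picture the question starts from.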
{ "source": [ "https://mathoverflow.net/questions/23770", "https://mathoverflow.net", "https://mathoverflow.net/users/5309/" ] }
23,829
Related MO questions: What is the general opinion on the Generalized Continuum Hypothesis?; Completion of ZFC; Complete resolutions of GCH; How far wrong could the Continuum Hypothesis be?; When was the continuum hypothesis born? Background: The Continuum Hypothesis (CH), posed by Cantor in 1890, asserts that $\aleph_1=2^{\aleph_0}$. In other words, it asserts that every subset of the set of real numbers that contains the natural numbers has either the cardinality of the natural numbers or the cardinality of the real numbers. It was the first problem on Hilbert's 1900 list of problems. The generalized continuum hypothesis asserts that there are no intermediate cardinals between any infinite set X and its power set. Cohen proved that the CH is independent of the axioms of set theory. (Earlier, Goedel showed that a positive answer is consistent with the axioms.) Several mathematicians proposed definite answers, or approaches towards such answers, regarding what the answer for the CH (and GCH) should be. The question: My question asks for a description and explanation of the various approaches to the continuum hypothesis in a language which could be understood by non-professionals. More background: I am aware of the existence of 2-3 approaches. One is by Woodin, described in two 2001 Notices of the AMS papers (part 1, part 2). Another is by Shelah (perhaps in this paper, entitled "The Generalized Continuum Hypothesis revisited"). See also the paper entitled "You can enter Cantor paradise" (offered in Haim's answer); there is a very nice presentation by Matt Foreman discussing Woodin's approach and some other avenues. Another description of Woodin's answer is by Lucca Belloti (also suggested by Haim). The proposed answer $2^{\aleph_0}=\aleph_2$ goes back, according to François, to Goedel. It is (perhaps) mentioned in Foreman's presentation. (I heard also from Menachem Magidor that this answer might have some advantages.) François G. Dorais mentioned an important paper by Todorcevic entitled "Comparing the Continuum with the First Two Uncountable Cardinals". There is also a very rich theory (PCF theory) of cardinal arithmetic which deals with what can be proved in ZFC. Remark: I included some information and links from the comments and answers in the body of the question. What I would hope for most from an answer is some friendly elementary descriptions of the proposed solutions. There are by now a couple of long, detailed, excellent answers (that I still have to digest) by Joel David Hamkins and by Andres Caicedo, and several other useful answers. (Unfortunately, I can accept only one answer.) Update (February 2011): A new detailed answer was contributed by Justin Moore. Update (Oct 2013): A user 'none' gave a link to an article by Peter Koellner about the current status of CH. Update (Jan 2014): A related popular article in Quanta: To settle infinity dispute, a new law of logic. (Belated) update (Jan 2014): Joel David Hamkins links in a comment from 2012 a very interesting paper written by him, Is the dream solution to the continuum hypothesis attainable, about the possibility of a "dream solution to CH". A link to the paper and a short post can be found here. (Belated) update (Sept 2015): Here is a link to an interesting article: Can the Continuum Hypothesis be Solved? by Juliette Kennedy. Update: A videotaped lecture, The Continuum Hypothesis and the search for Mathematical Infinity, by Woodin from January 2015, with reference also to his changed opinion.
(added May 2017) Update (Dec '15): A very nice answer was added (but unfortunately deleted by its owner; in 2019 it was replaced by a new answer) by Grigor. Let me quote its beginning (hopefully it will come back to life): "One probably should add that the continuum hypothesis depends a lot on how you ask it. (1) $2^{\omega}=\omega_1$. (2) Every set of reals is either countable or has the same size as the continuum. To me, (1) is a completely meaningless question, how do you even experiment it? If I am not mistaken, Cantor actually asked (2)..." Update: A 2011 videotaped lecture by Menachem Magidor: Can the Continuum Problem be Solved? (I will try to add slides for more recent versions.) Update (July 2019): Here are slides of Woodin's 2019 lecture explaining his current view on the problem. (See the answer of Mohammad Golshani.) Update (Sept 19, 2019): Here are videos of the three 2016 Bernays lectures by Hugh Woodin on the continuum hypothesis, and also the videos of the three 2012 Bernays lectures on the continuum hypothesis and related topics by Solomon Feferman. Update (Sept '20): Here are videos of the three 2020 Bernays lectures by Saharon Shelah on the continuum hypothesis. Update (May '21): In a new answer, Ralf Schindler gave a link to his 2021 videotaped lecture in Wuhan, describing a result with David Asperó that shows a relation between two well-known axioms. It turns out that Martin's Maximum$^{++}$ implies Woodin's ℙ$_{max}$ axiom. Both these axioms were known to imply the $\aleph_2$ answer to CH. A link to the paper: https://doi.org/10.4007/annals.2021.193.3.3
Since you have already linked to some of the contemporary primary sources, where of course the full accounts of those views can be found, let me interpret your question as a request for summary accounts of the various views on CH. I'll just describe in a few sentences each of what I find to be the main issues surrounding CH, beginning with some historical views. Please forgive the necessary simplifications. Cantor. Cantor introduced the Continuum Hypothesis when he discovered the transfinite numbers and proved that the reals are uncountable. It was quite natural to inquire whether the continuum was the same as the first uncountable cardinal. He became obsessed with this question, working on it from various angles and sometimes switching opinion as to the likely outcome. Giving birth to the field of descriptive set theory, he settled the CH question for closed sets of reals, by proving (the Cantor-Bendixson theorem) that every closed set is the union of a countable set and a perfect set. Sets with this perfect set property cannot be counterexamples to CH, and Cantor hoped to extend this method to additional larger classes of sets. Hilbert. Hilbert thought the CH question so important that he listed it as the first on his famous list of problems at the opening of the 20th century. Goedel. Goedel proved that CH holds in the constructible universe $L$, and so is relatively consistent with ZFC. Goedel viewed $L$ as a device for establishing consistency, rather than as a description of our (Platonic) mathematical world, and so he did not take this result to settle CH. He hoped that the emerging large cardinal concepts, such as measurable cardinals, would settle the CH question, and as you mentioned, favored a solution of the form $2^\omega=\aleph_2$. Cohen. Cohen introduced the method of forcing and used it to prove that $\neg$CH is relatively consistent with ZFC. Every model of ZFC has a forcing extension with $\neg$CH. Thus, the CH question is independent of ZFC, neither provable nor refutable. Solovay observed that CH also is forceable over any model of ZFC. Large cardinals. Goedel's expectation that large cardinals might settle CH was decisively refuted by the Levy-Solovay theorem, which showed that one can force either CH or $\neg$CH while preserving all known large cardinals. Thus, there can be no direct implication from large cardinals to either CH or $\neg$CH. At the same time, Solovay extended Cantor's original strategy by proving that if there are large cardinals, then increasing levels of the projective hierarchy have the perfect set property, and therefore do not admit counterexamples to CH. All of the strongest large cardinal axioms considered today imply that there are no projective counterexamples to CH. This can be seen as a complete affirmation of Cantor's original strategy. Basic Platonic position. This is the realist view that there is a Platonic universe of sets that our axioms are attempting to describe, in which every set-theoretic question such as CH has a truth value. In my experience, this is the most common or orthodox view in the set-theoretic community. Several of the later more subtle views rest solidly upon the idea that there is a fact of the matter to be determined. Old-school dream solution of CH. The hope was that we might settle CH by finding a new set-theoretic principle that we all agreed was obviously true for the intended interpretation of sets (in the way that many find AC to be obviously true, for example) and which also settled the CH question.
Then, we would extend ZFC to include this new principle and thereby have an answer to CH. Unfortunately, no such conclusive principles were found, although there have been some proposals in this vein, such as Freiling's axiom of symmetry. Formalist view. Rarely held by mathematicians, although occasionally held by philosophers, this is the anti-realist view that there is no truth of the matter of CH, and that mathematics consists of (perhaps meaningless) manipulations of strings of symbols in a formal system. The formalist view can be taken to hold that the independence result itself settles CH, since CH is neither provable nor refutable in ZFC. One can have either CH or $\neg$CH as axioms and form the new formal systems ZFC+CH or ZFC+$\neg$CH. This view is often mocked in straw-man form, suggesting that the formalist can have no preference for CH or $\neg$CH, but philosophers defend more subtle versions, where there can be reason to prefer one formal system to another. Pragmatic view. This is the view one finds in practice, where mathematicians do not take a position on CH, but feel free to use CH or $\neg$CH if it helps their argument, keeping careful track of where it is used. Usually, when either CH or $\neg$CH is used, then one naturally inquires about the situation under the alternative hypothesis, and this leads to numerous consistency or independence results. Cardinal invariants. Exemplifying the pragmatic view, this is a very rich subject studying various cardinal characteristics of the continuum, such as the size of the smallest unbounded family of functions $f:\omega\to\omega$, the additivity of the ideal of measure-zero sets, or the smallest size family of functions $f:\omega\to\omega$ that dominate all other such functions. Since these characteristics are all uncountable and at most the continuum, the entire theory trivializes under CH, but under $\neg$CH is a rich, fascinating subject. Canonical Inner models. The paradigmatic canonical inner model is Goedel's constructible universe $L$, which satisfies CH and indeed, the Generalized Continuum Hypothesis, as well as many other regularity properties. Larger but still canonical inner models have been built by Silver, Jensen, Mitchell, Steel and others that share the GCH and these regularity properties, while also satisfying larger large cardinal axioms than are possible in $L$. Most set-theorists do not view these inner models as likely to be the "real" universe, for similar reasons that they reject $V=L$, but as the models accommodate larger and larger large cardinals, it becomes increasingly difficult to make this case. Even $V=L$ is compatible with the existence of transitive set models of the very largest large cardinals (since the assertion that such sets exist is $\Sigma^1_2$ and hence absolute to $L$). In this sense, the canonical inner models are fundamentally compatible with whatever kind of set theory we are imagining. Woodin. In contrast to the Old-School Dream Solution, Woodin has advanced a more technical argument in favor of $\neg$CH. The main concepts include $\Omega$-logic and the $\Omega$-conjecture, concerning the limits of forcing-invariant assertions, particularly those expressible in the structure $H_{\omega_2}$, where CH is expressible.
Woodin's is a decidedly Platonist position, but from what I have seen, he has remained guarded in his presentations, describing the argument as a proposal or possible solution, despite the fact that others sometimes characterize his position as more definitive. Foreman. Foreman, who also comes from a strong Platonist position, argues against Woodin's view. He writes supremely well, and I recommend following the links to his articles. Multiverse view. This is the view, offered in opposition to the Basic Platonist Position above, that we do not have just one concept of set leading to a unique set-theoretic universe, but rather a complex variety of set concepts leading to many different set-theoretic worlds. Indeed, the view is that much of set-theoretic research in the past half-century has been about constructing these various alternative worlds. Many of the alternative set concepts, such as those arising by forcing or by large cardinal embeddings are closely enough related to each other that they can be compared from the perspective of each other. The multiverse view of CH is that the CH question is largely settled by the fact that we know precisely how to build CH or $\neg$CH worlds close to any given set-theoretic universe---the CH and $\neg$CH worlds are in a sense dense among the set-theoretic universes. The multiverse view is realist as opposed to formalist, since it affirms the real nature of the set-theoretic worlds to which the various set concepts give rise. On the Multiverse view, the Old-School Dream Solution is impossible, since our experience in the CH and $\neg$CH worlds will prevent us from accepting any principle $\Phi$ that settles CH as "obviously true". Rather, on the multiverse view we are to study all the possible set-theoretic worlds and especially how they relate to each other. I should stop now, and I apologize for the length of this answer.
{ "source": [ "https://mathoverflow.net/questions/23829", "https://mathoverflow.net", "https://mathoverflow.net/users/1532/" ] }
23,857
A finite group $G$ can be considered as a category with one object. Taking its nerve $NG$, and then geometrically realizing, we get $BG$, the classifying space of $G$, which classifies principal $G$-bundles. Instead starting with any category $C$, what does $NC$ classify? (Either before or after taking realization.) Does it classify something reasonable?
Ieke Moerdijk has written a small Springer Lecture Notes tome addressing this question: "Classifying Spaces and Classifying Topoi" SLNM 1616. Roughly the answer is: A $G$-bundle is a map whose fibers have a $G$-action, i.e. are $G$-sets (if they are discrete), i.e. they are functors from $G$ seen as a category to $\mathsf{Sets}$. Likewise a $\mathcal C$-bundle for a category $\mathcal C$ is a map whose fibers are functors from $\mathcal C$ to $\mathsf{Sets}$, or, if you want, a disjoint union of sets (one for each object of $\mathcal C$) and an action by the morphisms of $\mathcal C$ — a morphism $A \to B$ in $\mathcal C$ takes elements of the set corresponding to $A$ to elements of the set corresponding to $B$. There is a completely analogous version for topological categories also.
{ "source": [ "https://mathoverflow.net/questions/23857", "https://mathoverflow.net", "https://mathoverflow.net/users/3557/" ] }
23,859
Suppose $S$ and $S'$ are two compact Riemann surfaces of genus $g$. Does there exist a sequence of genera $g_i \to \infty$ and covers $S_i, S_{i}'$ of $S,S'$, both of genus $g_i$, such that $d(S_i,S_{i}')\to 0$? Here $d$ is a "natural" distance function on Teichmuller space, of which I suppose there are many, but for definiteness let's take it to be induced by the Teichmuller metric. This question was asked to me by Rick Kenyon last year, and some brief thought on it got me nowhere.
{ "source": [ "https://mathoverflow.net/questions/23859", "https://mathoverflow.net", "https://mathoverflow.net/users/1464/" ] }
23,911
I am teaching a course on Riemann Surfaces next term, and would like a list of facts illustrating the difference between the theory of real (differentiable) manifolds and the theory of non-singular varieties (over, say, $\mathbb{C}$). I am looking for examples that would be meaningful to 2nd year US graduate students who have taken 1 year of topology and 1 semester of complex analysis. Here are some examples that I thought of:
1. Every $n$-dimensional real manifold embeds in $\mathbb{R}^{2n}$. By contrast, a projective variety does not embed in $\mathbb{A}^n$ for any $n$. Every $n$-dimensional non-singular, projective variety embeds in $\mathbb{P}^{2n+1}$, but there are non-singular, proper varieties that do not embed in any projective space.
2. Suppose that $X$ is a real manifold and $f$ is a smooth function on an open subset $U$. Given $V \subset U$ compactly contained in $U$, there exists a global function $\tilde{g}$ that agrees with $f$ on $V$ and is identically zero outside of $U$. By contrast, consider the same set-up when $X$ is a non-singular variety and $f$ is a regular function. It may be impossible to find a global regular function $g$ that agrees with $f$ on $V$. When $g$ exists, it is unique and (when $f$ is non-zero) is not identically zero outside of $U$.
3. If $X$ is a real manifold and $p \in X$ is a point, then the ring of germs at $p$ is non-noetherian. The local ring of a variety at a point is always noetherian.
What are some more examples? Answers illustrating the difference between real manifolds and complex manifolds are also welcome.
Here is a list biased towards what is remarkable in the complex case. (To the potential peeved real manifold: I love you too.) By "complex" I mean holomorphic manifolds and holomorphic maps; by "real" I mean $\mathcal{C}^{\infty}$ manifolds and $\mathcal{C}^{\infty}$ maps. Consider a map $f$ between manifolds of equal dimension. In the complex case: if $f$ is injective then it is an isomorphism onto its image. In the real case, $x\mapsto x^3$ is not invertible. Consider a holomorphic $f: U-K \rightarrow \mathbb{C}$, where $U\subset \mathbb{C}^n$ is open and $K$ is a compact s.t. $U-K$ is connected. When $n\geq 2$, $f$ extends to $U$. This so-called Hartogs phenomenon has no counterpart in the real case. If a complex manifold is compact or is a bounded open subset of $\mathbb{C}^n$, then its group of automorphisms is a Lie group. In the smooth case it is always infinite dimensional. The space of sections of a vector bundle over a compact complex manifold is finite dimensional. In the real case it is always infinite dimensional. To expand on Charles Staats's excellent answer: few smooth atlases happen to be holomorphic, but even fewer diffeomorphisms happen to be holomorphic. Considering manifolds up to isomorphism, the net result is that many complex manifolds come in continuous families, whereas real manifolds rarely do (in dimension other than $4$: a compact topological manifold has at most finitely many smooth structures; $\mathbb{R}^n$ has exactly one). On the theme of zero subsets (i.e., subsets defined locally by the vanishing of one or several functions): One equation always defines a codimension one subset in the complex case, but {$x_1^2+\dots+x_n^2=0$} is reduced to one point in $\mathbb{R}^n$. In the complex case, a zero subset isn't necessarily a submanifold, but is amenable to manifold theory by Hironaka desingularization. In the real case, any closed subset is a zero set. The image of a proper map between two complex manifolds is a zero subset, so isn't too bad by the previous point. Such a direct image is hard to deal with in the real case.
{ "source": [ "https://mathoverflow.net/questions/23911", "https://mathoverflow.net", "https://mathoverflow.net/users/5337/" ] }
23,936
I have the following setup: There is a collection of items I and a collection of partial rankings V. That is, an element of V is a total ordering on a subset of I. There is no expectation of consistency among the elements of V: it may be that x < y for one element and y < x for another. I would like to assign a score $s : I \to \mathbb{R}$ which in some sense captures these rankings. That is, I would like $s(x) < s(y)$ to mean "x tends to be less than y for elements of V which have both in their domain". I'm not sure what a good way to do this is. Arrow's impossibility theorem puts some constraints on what can be achieved here, because given a set of votes and a scoring function like this we could use the scoring function to define a total order on the items, which is then constrained by the theorem. I suppose I'm really looking for references rather than an answer to this question (although both would be appreciated): I'm sure there's a body of theory around this, but I have no idea what it is like or what it's called, so I'm at a bit of a loss as to where to start looking for a solution.
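Since the question is phrased algorithmically, here is one standard-flavored approach as a minimal Python sketch (my own illustration, not from the thread, and only one of several reasonable choices): turn each partial ranking into pairwise comparisons and fit the scores by least squares, in the spirit of HodgeRank/Bradley-Terry-type methods. All names below are made up for the example.

import numpy as np

def fit_scores(items, rankings):
    # each ranking lists some of the items from smallest to largest;
    # ask s[b] - s[a] ~ 1 for each adjacent pair a < b and solve by least squares
    idx = {it: k for k, it in enumerate(items)}
    rows, rhs = [], []
    for ranking in rankings:
        for a, b in zip(ranking, ranking[1:]):
            row = np.zeros(len(items))
            row[idx[b]], row[idx[a]] = 1.0, -1.0
            rows.append(row)
            rhs.append(1.0)
    rows.append(np.ones(len(items)))   # gauge: scores only matter up to an additive constant
    rhs.append(0.0)
    s, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return dict(zip(items, s))

# a 3-cycle of votes has no consistent order; least squares returns (roughly) equal scores,
# which is the Arrow-style inconsistency being averaged away
print(fit_scores(['x', 'y', 'z'], [['x', 'y'], ['y', 'z'], ['z', 'x']]))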
{ "source": [ "https://mathoverflow.net/questions/23936", "https://mathoverflow.net", "https://mathoverflow.net/users/4959/" ] }
23,943
NEW CONJECTURE: There is no general upper bound. Wadim Zudilin suggested that I make this a separate question. This follows the earlier question on representability of consecutive integers by a binary quadratic form, where most of the people who gave answers are worn out after arguing over indefinite forms and inhomogeneous polynomials. Some real effort went into this, so perhaps it will not be seen as a duplicate question. So the question is, can a positive definite integral binary quadratic form $$ f(x,y) = a x^2 + b x y + c y^2 $$ represent 13 consecutive numbers? My record so far is 8: the form $$6x^2+5xy+14y^2 $$ represents the 8 consecutive numbers from 716,234 to 716,241. Here we have discriminant $ \Delta = -311,$ and 2,3,5,7 are all residues $\pmod {311}.$ I do not think it remotely coincidental that $$6x^2+xy+13 y^2 $$ represents the 7 consecutive numbers from 716,235 to 716,241. I have a number of observations. There is a congruence obstacle $\pmod 8$ unless, with $ f(x,y) = a x^2 + b x y + c y^2 $ and $\Delta = b^2 - 4 a c,$ we have $\Delta \equiv 1 \pmod 8,$ or $ | \Delta | \equiv 7 \pmod 8.$ If a prime $p | \Delta,$ then the form is restricted to either all quadratic residues or all nonresidues $ \pmod p$ among numbers not divisible by $p.$ In what could be a red herring, I have been emphasizing $\Delta = -p$ where $p \equiv 7 \pmod 8$ is prime, and where there is a very long string of consecutive quadratic residues $\pmod p.$ Note that this means only a single genus with the same $\Delta = -p,$ and any form is restricted to residues. I did not anticipate that long strings of represented numbers would not start at 1 or any predictable place and would be fairly large. As target numbers grow, the probability of not being represented by any form of the discriminant grows (if prime $q \parallel n$ with $(-p| q) = -1$), but as the number of prime factors $r$ with $(-p| r) = 1$ grows so does the probability that many forms represent the number if any do. Finally, on the influence of taking another $\Delta$ with even more consecutive residues, the trouble seems to be that the class number grows as well. So everywhere there are trade-offs. EDIT, Monday 10 May. I had an idea that the large values represented by any individual form ought to be isolated. That was naive. Legendre showed that for a prime $q \equiv 7 \pmod 8$ there exists a solution to $u^2 - q v^2 = 2,$ and therefore infinitely many solutions. This means that the form $x^2 + q y^2$ represents the triple of consecutive numbers $q v^2, 1 + q v^2, u^2$ and then represents $4 + q v^2$ after perhaps skipping $3 + q v^2$. Taking $q = 8 k - 1,$ the form $ x^2 + x y + 2 k y^2$ has no restrictions $\pmod 8,$ while an explicit formula shows that it represents every number represented by $x^2 + q y^2.$ Put together, if $8k-1 = q$ is prime, then $ x^2 + x y + 2 k y^2$ represents infinitely many triples. If, in addition, $ ( 3 | q) = 1,$ it seems plausible to expect infinitely many quintuples. It should be admitted that the recipe given seems not to be a particularly good way to jump from length 3 to length 5, although strings of length 5 beginning with some $q t^2$ appear plentiful. EDIT, Tuesday 11 May. I have found a string of 9, the form is $6 x^2 + x y + 13 y^2$ and the numbers start at $1786879113 = 3 \cdot 173 \cdot 193 \cdot 17839$ and end with $1786879121$ which is prime. As to checking, I have a separate program that shows me the particular $x,y$ for representing a target number by a positive binary form.
Then I checked those pairs using my programmable calculator, which has exact arithmetic up to $10^{10}.$ EDIT, Saturday 15 May. I have found a string of 10, the form is $9 x^2 + 5 x y + 14 y^2$ and the numbers start at $866988565 = 5 \cdot 23 \cdot 7539031$ and end with $866988574 = 2 \cdot 433494287.$ EDIT, Thursday 17 June. Wadim Zudilin has been running one of my programs on a fast computer. We finally have a string of 11, the form being $ 3 x^2 + x y + 26 y^2$ of discriminant $-311.$ The integrally represented numbers start at 897105813710 and end at 897105813720. Note that the maximum possible for this discriminant is 11. So we now have this conjecture: For discriminants $\Delta$ with absolute values in this sequence http://www.oeis.org/A000229 some form represents a set of $N$ consecutive integers, where $N$ is the first quadratic nonresidue. As a result, we conjecture that there is no upper bound on the number of consecutive integers that can be represented by a positive quadratic form.
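Since the question keeps referring to programs checking such runs, here is a minimal Python sketch of a representability test (my own illustration, not the asker's program); the search bound comes from completing the square, $4an \ge (4ac - b^2)\,y^2$ for a positive definite form:

from math import isqrt

def represents(a, b, c, n):
    # does the positive definite form a x^2 + b x y + c y^2 represent n?
    disc = 4 * a * c - b * b                 # > 0 for a definite form
    ybound = isqrt(4 * a * n // disc) + 1    # from 4 a n >= disc * y^2
    for y in range(-ybound, ybound + 1):
        d = 4 * a * n - disc * y * y         # discriminant of a x^2 + (b y) x + (c y^2 - n)
        if d < 0:
            continue
        r = isqrt(d)
        if r * r != d:
            continue
        if any((-b * y + s) % (2 * a) == 0 for s in (r, -r)):
            return True
    return False

# should print True, matching the run of 8 reported in the question
print(all(represents(6, 5, 14, n) for n in range(716234, 716242)))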
I just wanted to remark that if $p$ is a prime such that $\ell$ splits in $F = \mathbb{Q}(\sqrt{-p})$ for all $\ell \le N$, then one may prove the existence of $N$ consecutive integers which are norms of integers in $\mathcal{O}_F$, providing one is willing to assume a standard hypothesis about prime numbers, namely, Schinzel's Hypothesis H. First, note the following: Lemma 1: If $C$ is an abelian group of odd order, then there exists a finite (ordered) set $S = \{c_i\}$ of elements of $C$ such that every element in $C$ can be written in the form $\displaystyle{\sum \epsilon_i \cdot c_i}$ where $\epsilon_i = \pm 1$. Proof: If $C = A \oplus B$, take $S_C = S_A \cup S_B$. If $C = \mathbb{Z}/n \mathbb{Z}$ then take $S = \{1,1,1,\ldots,1\}$ with $|S| = 2n$. Let $C$ be the class group of $F$. It has odd order, because $2$ splits in $F$ and thus $\Delta_F = -p$. Let $S$ be a set as in the lemma. Let $A$ denote an ordered set of distinct primes $\{p_i\}$ which split in $\mathcal{O}_F$ such that one can write $p_i = \mathfrak{p}_i \mathfrak{p}'_i$ with $[\mathfrak{p}_i] = c_i \in C$, where $c_i$ denotes a set of elements whose existence was shown in Lemma 1. Lemma 2: If $n$ is the norm of some ideal $\mathfrak{n} \in \mathcal{O}_F$, and $n$ is not divisible by any prime $p_i$ in $A$, then $$n \cdot \prod_{A} p_i$$ is the norm of an algebraic integer in $\mathcal{O}_F$. Proof: We may choose $\epsilon_i = \pm 1$ such that $\displaystyle{[\mathfrak{n}] + \sum \epsilon_i \cdot c_i = 0 \in C}$. By assumption, $[\mathfrak{p}_i] = c_i \in C$ and thus $[\mathfrak{p}'_i] = -c_i \in C$. Hence the ideal $$\mathfrak{n} \prod_{\epsilon_i = 1} \mathfrak{p}_i \prod_{\epsilon_i = -1} \mathfrak{p}'_i$$ is principal, and has the desired norm. By the Chebotarev density theorem (applied to the Hilbert class field of $F$), there exists a set $A$ of primes as above which avoids any fixed finite set of primes. In particular, we may find $N$ such sets which are pairwise distinct and which contain no primes $\le N$. Denote these sets by $A_1, \ldots, A_N$. By the Chinese remainder theorem, the set of integers $m$ such that $$m \equiv 0 \mod p \cdot (N!)^2$$ $$m + j \equiv 0 \mod \prod_{p_i \in A_j} p_i, \qquad 1 \le j \le N$$ is of the form $m = d M + k$ where $0 \le k < M$, $d$ is arbitrary, and $M$ is the product of the moduli. Lemma 3: Assuming Schinzel's Hypothesis H, there exist infinitely many integers $d$ such that $$ P_{dj}:= \frac{dM + k + j}{j \cdot \prod_{p_i \in A_j} p_i}$$ are simultaneously prime for all $j = 1,\ldots,N$. Proof: By construction, all these numbers are coprime to $M$ (easy check). Hence, as $d$ varies, the greatest common divisor of the product of these numbers is $1$, so Schinzel's Hypothesis H applies. Let $\chi$ denote the quadratic character of $F$. Note that $dM + k + j = j \mod p$, and so $\chi(dM + k + j) = \chi(j) = 1$ (as all primes less than $N$ split in $F$). Moreover, $\chi(p_i) = 1$ for all primes $p_i$ in $A_j$ by construction. Hence $\chi(P_{dj}) = 1$. In particular, if $P_{dj}$ is prime, then $P_{dj}$ and $j \cdot P_{dj}$ are norms of (not necessarily principal) ideals in the ring of integers of $F$. By Lemma 2, this implies that $$dM + k + j = j \cdot P_{dj} \prod_{p_i \in A_j} p_i$$ is the norm of some element of $\mathcal{O}_F$ for all $j = 1,\ldots, N$. One reason to think that current sieving technology will not be sufficient to answer this problem is the following: when sieving produces a non-trivial lower bound, it usually produces a pretty good lower bound.
However, there are no good (lower) bounds known for the following problem: count the number of integers $n$ such that $n$, $n+1$, and $n+2$ are all sums of two squares. Even the problem of estimating the number of $n$ such that $n$ and $n+1$ are both sums of squares is tricky - work of Hooley implies that the natural sieve does not give lower bounds (for reasons analogous to the parity problem). Instead, he relates the problem to sums of the form $\displaystyle{\sum_{n < x} a_n a_{n+1}}$ where $\sum a_n q^n = \theta^2$ is a modular form. In particular, he implicitly uses automorphic methods which won't work with three or more terms.
{ "source": [ "https://mathoverflow.net/questions/23943", "https://mathoverflow.net", "https://mathoverflow.net/users/3324/" ] }
24,034
More generally, can the zero set $V(f)$ of a continuous function $f : \mathbb{R} \to \mathbb{R}$ be nowhere dense and uncountable? What if $f$ is smooth? Some days ago I discovered that in this proof I am working on, I have implicitly assumed that $V(f)$ has to be countable if it is nowhere dense - hence this question.
The continuous function is very easy to construct: it's the distance to the closed set.
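To make the answer fully explicit (this is the standard example, filled in for convenience): let $C \subset \mathbb{R}$ be the middle-thirds Cantor set, which is closed, uncountable and nowhere dense, and put
$$f(x) = \operatorname{dist}(x, C) = \inf_{c \in C} |x - c|.$$
Then $f$ is continuous (indeed $1$-Lipschitz) and $V(f) = C$. For the smooth case, a theorem of Whitney says that every closed subset of $\mathbb{R}^n$ is the zero set of some $C^\infty$ function, so the answer stays yes there too.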
{ "source": [ "https://mathoverflow.net/questions/24034", "https://mathoverflow.net", "https://mathoverflow.net/users/1508/" ] }
24,039
We know $u$ (if it is a solution to the wave equation in $\mathbb{R}^3$) decays as $1/t$ as $t$ goes to $\infty$. This comes easily from spherical means. But how do we know this is the maximum possible rate of decay?
{ "source": [ "https://mathoverflow.net/questions/24039", "https://mathoverflow.net", "https://mathoverflow.net/users/5985/" ] }
24,082
Let $k$ be a field. Then $k[[x,y]]$ is a complete local noetherian regular domain of dimension $2$. What are the prime ideals? I've browsed through the paper "Prime ideals in power series rings" (Jimmy T. Arnold), but it does not give a satisfactory answer. Perhaps there is none. Of course you might think it is more natural to consider only certain prime ideals (for example open/closed ones w.r.t. the adic topology), but I'm interested in the whole spectrum. A first approximation is the subring $k[[x]] \otimes_k k[[y]]$. If we know its spectrum, perhaps we can compute the fibers of $\text{Spec } k[[x,y]] \to \text{Spec } k[[x]] \otimes_k k[[y]]$. Now the spectrum of the tensor product consists of $(x),(y),(x,y)$ and $\text{Spec } k((x)) \otimes_k k((y))$. The latter one is still very complicated, I think. For example we have the kernel of $k((x)) \otimes_k k((y)) \to k((x))$. Also, for every $p \in k[[x]]$, we have the prime ideal $(y - p)$.
The ring $k[[x,y]]$ is a local UFD of dimension 2; so its prime ideals are the zero ideal, the maximal ideal, and all the ideals generated by an irreducible element. They are all closed (all ideals in a noetherian local ring are closed). Classifying them is an extremely complicated business, already when $k = \mathbb C$. The ring $k[[x]] \otimes_k k[[y]]$ is truly nasty, it is not even noetherian, and I doubt it would help.
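For the record, the two standard facts this answer is quietly using (commutative algebra background, not specific to the answer): in a UFD every height-one prime is principal, generated by any irreducible element it contains; and in a two-dimensional local domain every prime other than $(0)$ and the maximal ideal has height one (a longer chain would exceed the dimension). Together these give
$$\operatorname{Spec} k[[x,y]] = \{(0)\} \,\cup\, \{(f) : f \in k[[x,y]] \text{ irreducible}\} \,\cup\, \{(x,y)\}.$$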
{ "source": [ "https://mathoverflow.net/questions/24082", "https://mathoverflow.net", "https://mathoverflow.net/users/2841/" ] }
24,265
Let $s_n = \sum_{i=1}^{n-1} i!$ and let $g_n = \gcd (s_n, n!)$. Then it is easy to see that $g_n$ divides $g_{n+1}$. The first few values of $g_n$, starting at $n=2$ are $1, 3, 3, 3, 9, 9, 9, 9, 9, 99$, where $g_{11}=99$. Then $g_n=99$ for $11\leq n\leq 100,000$. Note that if $n$ divides $s_n$, then $n$ divides $g_m$ for all $m\geq n$. If $n$ does not divide $s_n$, then $n$ does not divide $s_m$ for any $m\geq n$. If $p$ is a prime dividing $g_n$ but not dividing $g_{n-1}$ then $p=n$, for if $p<n$ then $p$ divides $(n-1)!$ and therefore $p$ divides $s_n-(n-1)!=s_{n-1}$, whence $p$ divides $g_{n-1}$. So to show that $g_n\rightarrow \infty$ it suffices to show that there are infinitely many primes $p$ such that $1!+2!+\cdots +(p-1)! \equiv 0$ (mod $p$).
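A quick brute-force check of the values quoted above (a minimal Python sketch of my own; the cutoff 2000 is arbitrary, and the expected outputs below just restate the question's data):

from math import factorial, gcd
from sympy import isprime

def g(n):
    # g_n = gcd(s_n, n!) with s_n = 1! + 2! + ... + (n-1)!
    return gcd(sum(factorial(i) for i in range(1, n)), factorial(n))

print([g(n) for n in range(2, 13)])   # [1, 3, 3, 3, 9, 9, 9, 9, 9, 99, 99]

def p_divides_sp(p):
    # does p divide 1! + 2! + ... + (p-1)! ?  (computed mod p to stay fast)
    s, f = 0, 1
    for i in range(1, p):
        f = f * i % p
        s = (s + f) % p
    return s == 0

# per the question's computation, only 3 and 11 occur up to 100,000
print([p for p in range(2, 2000) if isprime(p) and p_divides_sp(p)])   # [3, 11]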
This is so close to the Kurepa conjecture, which asserts that $\gcd\left(\sum_{k=0}^{n-1}k!,n!\right)=2$ for all $n\geq 2$, and which was settled in 2004 by D. Barsky and B. Benzaghou in "Nombres de Bell et somme de factorielles". So what they proved is that $K(p)=1!+\cdots+(p-1)!\not\equiv -1\pmod{p}$ for any odd prime $p$. This goes against Kevin Buzzard's heuristic that $K(p)$ is random mod $p$. Let me mention two ways you can restate the fact $p|K(p)$: a) It is equivalent to $K(\infty)=\sum_{k=1}^{\infty}k!$ not being a unit in $\mathbb Z_p$. b) It is equivalent to $\mathcal B_{p-1}\equiv 2\pmod{p}$ where $\mathcal{B}_n$ is the $n$th Bell number. (It is easy to show that $\mathcal B_{p}\equiv 2\pmod{p}$.) I forgot to mention that the conjecture that $p>11$ doesn't divide $K(p)$ is in question B44 of R. Guy's "Unsolved Problems in Number Theory".
{ "source": [ "https://mathoverflow.net/questions/24265", "https://mathoverflow.net", "https://mathoverflow.net/users/1243/" ] }
24,270
This may be a soft question, but it's just something I thought of one night before sleeping. It's not my field at all, so I am just asking out of curiosity. Has anyone studied the number which is the sum over primes, $\sum_p 2^{-p}$? Its binary expansion (clearly) has a 1 in each prime-th "decimal place", and a zero everywhere else, so it should be important in number theory I would guess.
Here is Hardy & Wright's answer from "An Introduction to the Theory of Numbers", (5th ed, p344), where they discuss a similar number: "Although ... gives a 'formula' for the nth prime, it is not a very useful one. To calculate $p_n$ from this formula, it is necessary to know the value of $a$ correct to $2^n$ decimal places; and to do this, it is necessary to know the values of $p_1$, $p_2$, ..., $p_n$ ... There are a number of similar formulae which suffer from the same defect ... Any one of these formulae (or any similar one) would attain a different status if the exact value of the number $a$ which occurs in it could be expressed independently of the primes. There seems no likelihood of this, but it cannot be ruled out as entirely impossible."
{ "source": [ "https://mathoverflow.net/questions/24270", "https://mathoverflow.net", "https://mathoverflow.net/users/4528/" ] }
24,318
If there are not, then would it be easier to say that 2 objects are identical as ordered fields as opposed to being isomorphic as ordered fields? Or is the word isomorphism used to emphasise the fact that the objects are different as sets?
Inside of the complex numbers there are lots of examples of distinct fields which are isomorphic. For instance, there are three subfields of the form ${\mathbf Q}(\alpha)$ where $\alpha^3 = 2$: take for $\alpha$ any of the three complex cube roots of 2 and you get a different subfield. What are the consequences of treating them as literally equal? You can't make any sense of Galois theory if you do that! Similarly, all $p$-Sylow subgroups of a finite group are isomorphic (since conjugate subgroups are isomorphic groups), but it would kind of destroy a lot of the content of the Sylow theorems by trying to say the $p$-Sylow subgroups are identical. More generally, anytime you have isomorphic but unequal objects inside a larger object, it can lead to confusion if not outright incomprehensibility if you try to regard them all as identical. (There was a paper by Chevalley about unit groups in number fields where he made a genuine error by an abuse of the "square root" notation and I think one might be able to express the mistake in the form of an isomorphism being confused with an equality, but I'd have to look at the paper again to be sure about this.) The word isomorphism does not emphasize that two objects are different; any group or vector space admits an isomorphism with itself using the identity map. The word emphasizes that in a structural way the two objects look like each other even though they are not literally the same. Never say two objects are identical if they are not actually identical. Having said that, I must admit that in mathematics one meets phrases like "since $X$ and $Y$ are isomorphic we can identify $X$ with $Y$" and then $X$ is replaced with $Y$. The usefulness of doing this depends on the application you have in mind. Note, however, that replacing $X$ with $Y$ is not saying that $X$ and $Y$ are the same thing. This question sounds like it is being asked by someone who hasn't had a lot of experience with isomorphisms and is trying to get a feel for what it means. In a year or two, after seeing more appearances of the concept and its uses, you'll get a better feel for it, but for now do not think the word isomorphism is a synonym for identical.
{ "source": [ "https://mathoverflow.net/questions/24318", "https://mathoverflow.net", "https://mathoverflow.net/users/4692/" ] }
24,350
As I understand it, mathematics is concerned with correct deductions using postulates and rules of inference. From what I have seen, statements are called true if they are correct deductions and false if they are incorrect deductions. If this is the case, then there is no need for the words true and false. I have read something along the lines that Godel's incompleteness theorems prove that there are true statements which are unprovable, but if you cannot prove a statement, how can you be certain that it is true? And if a statement is unprovable, what does it mean to say that it is true?
Tarski defined what it means to say that a first-order statement is true in a structure $M\models \varphi$ by a simple induction on formulas. This is a completely mathematical definition of truth. Goedel defined what it means to say that a statement $\varphi$ is provable from a theory $T$, namely, there should be a finite sequence of statements constituting a proof, meaning that each statement is either an axiom or follows from earlier statements by certain logical rules. (There are numerous equivalent proof systems, useful for various purposes.) The Completeness Theorem of first order logic, proved by Goedel, asserts that a statement $\varphi$ is true in all models of a theory $T$ if and only if there is a proof of $\varphi$ from $T$. Thus, for example, any statement in the language of group theory is true in all groups if and only if there is a proof of that statement from the basic group axioms. The Incompleteness Theorem, also proved by Goedel, asserts that any consistent theory $T$ extending a very weak theory of arithmetic admits statements $\varphi$ that are not provable from $T$, but which are true in the intended model of the natural numbers. That is, we prove in a stronger theory that is able to speak of this intended model that $\varphi$ is true there, and we also prove that $\varphi$ is not provable in $T$. This is the sense in which there are true-but-unprovable statements. The situation can be confusing if you think of provable as a notion by itself, without thinking much about varying the collection of axioms. After all, as the background theory becomes stronger, we can of course prove more and more. The true-but-unprovable statement is really unprovable-in-$T$, but provable in a stronger theory. Actually, although ZFC proves that every arithmetic statement is either true or false in the standard model of the natural numbers, nevertheless there are certain statements for which ZFC does not prove which of these situations occurs. Much or almost all of mathematics can be viewed with the set-theoretical axioms ZFC as the background theory, and so for most of mathematics, the naive view equating true with provable in ZFC will not get you into trouble. But the independence phenomenon will eventually arrive, making such a view ultimately unsustainable. The fact is that there are numerous mathematical questions that cannot be settled on the basis of ZFC, such as the Continuum Hypothesis and many other examples. We have of course many strengthenings of ZFC to stronger theories, involving large cardinals and other set-theoretic principles, and these stronger theories settle many of those independent questions. Some set theorists have a view that these various stronger theories are approaching some kind of indescribable limit theory, and that it is that limit theory that is the true theory of sets. Others have a view that set-theoretic truth is inherently unsettled, and that we really have a multiverse of different concepts of set. On that view, the situation is that we seem to have no standard model of sets, in the way that we seem to have a standard model of arithmetic.
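For concreteness, two of the clauses in Tarski's inductive definition (this is the standard textbook definition, abbreviated): for a structure $M$ and a variable assignment,
$$M \models \varphi \wedge \psi \iff M \models \varphi \text{ and } M \models \psi, \qquad M \models \exists x\,\varphi \iff M \models \varphi[x \mapsto a] \text{ for some } a \in M,$$
with atomic formulas evaluated directly via the interpretations of the relation, function and constant symbols. The point to absorb is that truth in a fixed structure is defined by recursion on formulas, with no mention of axioms or proofs anywhere.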
{ "source": [ "https://mathoverflow.net/questions/24350", "https://mathoverflow.net", "https://mathoverflow.net/users/4692/" ] }
24,503
This is quite possibly a stupid question, but it is pretty far from what I normally do, so I wouldn't even know where to look it up. Let $X$ be a projective variety over an algebraically closed field of arbitrary characteristic and $Y\subset X$ a smooth divisor. Under which conditions can I contract $Y$ to a point, i.e. under which conditions is there a projective (smooth!?) variety $V$, and a morphism $f:X\rightarrow V$, such that $f$ is an isomorphism away from $Y$, and $Y$ is mapped to a point? What can one say if $Y$ is a strict normal crossings divisor? Hints and references are very appreciated!
For a smooth $Y$, a necessary condition for contractibility is that the conormal line bundle $N_{Y,X}^*$ is ample. It is also sufficient for contracting to an algebraic space. The reference is "Algebraization of formal moduli. II. Existence of modifications" by M. Artin.

$Y$ can be contracted to a point on an algebraic (projective) variety if in addition $Y=\mathbb P^{n-1}$, $n=\dim X$. You can prove this easily by hand. Start with an ample divisor $H$ and then prove that an appropriate linear combination $|aH+bY|$ is base point free and is zero exactly on $Y$. You will find the argument in Matsuki's book on Mori's program, for example. So if $X$ is a surface and $Y=\mathbb P^1$ with $Y^2<0$ then it is contractible to a projective surface.

For a reducible divisor $Y=\sum Y_i$ a necessary condition (which is also sufficient in the category of algebraic spaces) is that the matrix $(Y_i.Y_j)$ is negative definite. The strongest elementary sufficient condition for contractibility to a variety is that $\sum Y_i$ is a rational configuration of curves. This is contained in "On isolated rational singularities of surfaces" by M. Artin. This paper also contains an example of an elliptic curve $Y$ with $Y^2=-1$ which is not contractible to an algebraic surface. The surface $X$ is the blowup of $\mathbb P^2$ at 10 sufficiently general points lying on a smooth cubic, $Y$ is the strict preimage of that cubic.

Finally, for an irreducible divisor $Y$ the resulting space $V$ is smooth iff $Y=\mathbb P^{n-1}$ and $N_{Y,X}=\mathcal O(-1)$. Indeed, $X\to V$ has to factor through the blowup of $V$ at a point by the universal property of the blowup. But then $X$ has to coincide with this blowup by Zariski's main theorem. And on the blowup at a point the exceptional divisor is $\mathbb P^{n-1}$ with the normal bundle $\mathcal O(-1)$.
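As a rough illustration of the linear-combination trick in the surface case (a sketch only; base-point-freeness takes more work): if $Y$ is an irreducible curve with $Y^2<0$ on a surface $X$ and $H$ is ample, set

$$D = aH + bY \quad\text{with}\quad b = \frac{a\,(H\cdot Y)}{-Y^2} > 0,$$

chosen so that $D\cdot Y = a(H\cdot Y)+bY^2 = 0$, while $D\cdot C = a(H\cdot C)+b(Y\cdot C) > 0$ for every irreducible curve $C\neq Y$, since then $Y\cdot C\ge 0$. So $D$ is nef and has degree zero exactly on $Y$; taking $a$ divisible enough makes $b$ an integer.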
{ "source": [ "https://mathoverflow.net/questions/24503", "https://mathoverflow.net", "https://mathoverflow.net/users/259/" ] }
24,506
In model theory, two structures $\mathfrak{A}, \mathfrak{B}$ of identical signature $\Sigma$ are said to be elementarily equivalent ($\mathfrak{A} \equiv \mathfrak{B}$) if they satisfy exactly the same first-order sentences w.r.t. $\Sigma$. An astounding theorem giving an algebraic characterisation of this notion is the so-called Keisler-Shelah isomorphism theorem, proved originally by Keisler (assuming GCH) and then by Shelah (avoiding GCH), which we state in its modern strengthening (saying that only a single ultrafilter is needed): $\mathfrak{A} \equiv \mathfrak{B} \ \iff \ \exists \mathcal{U} \text{ s.t. } (\prod_{i\in\mathcal{I}} \mathfrak{A})/\mathcal{U} \cong (\prod_{i\in\mathcal{I}} \mathfrak{B})/\mathcal{U},$ where $\mathcal{U}$ is a non-principal ultrafilter on, say, $\mathcal{I} = \mathbb{N}$. That is, two structures are elementarily equivalent iff they have isomorphic ultrapowers. My question is the following (admittedly rather vague): Does anyone know of constructions in which an ultrafilter is chosen by an appeal to this characterisation and then used for other means? An example of what I have in mind would be something like this (using the fact that any two real closed fields are elementarily equivalent w.r.t. the language of ordered rings): In order to perform some construction $C$ I "choose" a non-principal ultrafilter $\mathcal{U}$ on $\mathbb{N}$ by specifying it as a witness to the following isomorphism induced by Keisler-Shelah: $\mathbb{R}^\mathbb{N}/\mathcal{U} \cong \mathbb{R}_{alg}^\mathbb{N}/\mathcal{U},$ where $\mathbb{R}_{alg}$ is the field of real algebraic numbers. So the construction $C$ should be dependent upon the fact that $\mathcal{U}$ is a non-principal ultrafilter bearing witness to the Keisler-Shelah isomorphism between some ultrapower of the reals and the algebraic reals, resp. Also, a follow-up question: Let's say I'd like to "solve" the above isomorphism for $\mathcal{U}$. Are there interesting things in general known about the solution space, e.g., the set of all non-principal ultrafilters bearing witness to the Keisler-Shelah isomorphism for two fixed elementarily equivalent structures such as $\mathbb{R}$ and $\mathbb{R}_{alg}$? What machinery is useful in investigating this?
{ "source": [ "https://mathoverflow.net/questions/24506", "https://mathoverflow.net", "https://mathoverflow.net/users/4915/" ] }
24,552
Brian Conrad indicated a while ago that many of the results proven in AG using universes can be proven without them by being very careful (link). I'm wondering if there are any results in AG that actually depend on the existence of universes (and what some of the more interesting ones are). I'm of course aware of the result that as long as we require that the classes of objects and arrows are sets (this is the only valid approach from Bourbaki's perspective), for every category C, there exists a universe U such that the U-Yoneda lemma holds for U-Psh(C) (this relative approach makes proper classes pointless because every universe allows us to model a higher level of "largeness"), but this is really the only striking application of universes that I know of (and the only result I'm aware of where it's clear that they are necessary for the result).
My belief is that no result in algebraic geometry that does not explicitly engage the universe concept will fully require the use of universes. Indeed, I shall advance an argument that no such results actually need anything beyond ZFC, and indeed, that they need much less than this. (But please note, I answer as a set theorist rather than an algebraic geometer.) My reason is that there are several hierarchies of weakened universe concepts, which appear to be sufficient to carry out all the arguments that I have heard using universes, but which are set-theoretically strictly weaker hypotheses.

A universe, as you know, is known in set theory as $H_\kappa$, the set of all hereditarily-size-less-than-$\kappa$ sets, for some inaccessible cardinal $\kappa$. Every such universe also has the form $V_\kappa$, in the cumulative Levy hierarchy, because $H_\kappa=V_\kappa$ for any beth-fixed point, which includes all inaccessible cardinals. Thus, the Universe Axiom, asserting that every set is in a universe, is equivalent to the large cardinal assertion that there are unboundedly many inaccessible cardinals. This hypothesis is relatively low in the large cardinal hierarchy, and so from this perspective, it is relatively mild to just go ahead and use universes. In consistency strength, for example, it is strictly weaker than the existence of a single Mahlo cardinal, which is considered to be very weak large-cardinal theoretically, and much stronger hypotheses are routinely considered in set theory. Nevertheless, these hypotheses do definitely exceed ZFC in strength, unless ZFC is already inconsistent, and so your question is a good one. It follows the pattern in set theory of inquiring the exact large cardinal strength of a given hypothesis.

The weaker universe concepts that I propose to use in replacement of universes are the following, where I take the liberty of introducing some new terminology. A weak universe is some $V_\alpha$ which models ZFC. The Weak Universe Axiom is the assertion that every set is in a weak universe. This axiom is strictly weaker than the Universe Axiom, since in fact, every universe is already a model of it. Namely, if $\kappa$ is inaccessible, then there are unboundedly many $\alpha\lt\kappa$ with $V_\alpha$ elementary in $V_\kappa$, by the Löwenheim-Skolem theorem, and so $V_\kappa$ satisfies the Weak Universe Axiom by itself. From what I have seen, it appears that most of the applications of universes in algebraic geometry could be carried out with weak universes, if one is somewhat more careful about how one treats universes. The difference is that when using weak universes, one must pay attention to whether a given construction is definable inside the universe or not, in order to know whether the top level $\kappa$ of the weak universe, which may now be singular (and this is the difference), is reached.

Let us say that a very weak universe is simply a transitive set model of ZFC. (In set theory, one would want just to call these universes, but here that word is taken to have the meaning above; so we could call them set-theoretic universes.) The Very Weak Universe Axiom (or Set-theoretic Universe Axiom) is the assertion that every set is an element of a very weak universe. The difference between a very weak universe and a weak universe is that a very weak universe $M$ may be wrong about power sets, even though it satisfies its own version of the Powerset Axiom.
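For readers less at home with the notation, the two hierarchies mentioned above are defined as follows (standard definitions, recalled for convenience):

$$V_0=\varnothing,\qquad V_{\alpha+1}=\mathcal P(V_\alpha),\qquad V_\lambda=\bigcup_{\alpha<\lambda}V_\alpha \ \text{ for limit }\lambda,\qquad H_\kappa=\{x : |\operatorname{trcl}(x)|<\kappa\},$$

where $\operatorname{trcl}(x)$ is the transitive closure of $x$. A beth-fixed point is a cardinal $\kappa$ with $\beth_\kappa=\kappa$, and for any such $\kappa$ one indeed has $H_\kappa=V_\kappa$.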
Set theorists are very attentive to such very weak universes, and pay attention in a set-theoretic construction to which model of set theory it is undertaken. If the algebraic geometers were to give similar attention to this point, thereby turning themselves into set theorists, I believe that all of their arguments using universes could be essentially replaced with very weak universes. Another important point is that while universes are always linearly ordered by inclusion, this is no longer true for very weak universes.

Now, even the Very Weak Universe Axiom transcends ZFC in consistency strength, because it clearly implies Con(ZFC). So let me now describe how one might provide an even greater reduction in the strength of the hypothesis, and capture a use of universes within ZFC itself. The key is to realize that algebraic geometry does not really use the full force of ZFC. (Please take this with some skepticism, given my comparatively little exposure to algebraic geometry.) It seems to me unlikely, for example, that one needs the full Replacement Axiom in order to carry out the principal goal constructions of algebraic geometry. Let me suppose that these arguments can be carried out in some finite fragment $ZFC_0$ of $ZFC$, for example, $ZFC$ restricted to formulas of complexity $\Sigma_N$ for some definite number $N$, such as $100$ or so. In this case, let me define that a good-enough-universe is $V_\kappa$, provided that this satisfies $ZFC_0$. All such good-enough universes have $V_\kappa=H_\kappa$, just as with universes, since these will be beth-fixed points. The Good-enough Universe Axiom is the assertion that every set is a member of a good-enough universe.

Now, my claim is first, that this Good-enough Universe Axiom is sufficient to carry out most or even all of the applications of universes in algebraic geometry, provided that one is sufficiently attentive to the set-theoretic issues, and second, that this axiom is simply a theorem of ZFC. Indeed, one can get more, that the various good-enough universes agree with each other on truth.

Theorem. There is a definable closed unbounded class of cardinals $\kappa$ such that every $V_\kappa$ is a good-enough universe and furthermore, whenever $\kappa\lt\lambda$ in $C$, then $\Sigma_N$-truth in $V_\kappa$ agrees with $\Sigma_N$-truth in $V_\lambda$, and moreover, agrees with $\Sigma_N$ truth in the full universe $V$.

This theorem is exactly an instance of the Lévy Reflection Theorem.

OK, so if I am right, then the algebraic geometers can carry out their universe arguments by paying a lot of careful attention to the set-theoretic complexity of their constructions, and using good-enough universes instead of universes. But should they do this? For most purposes, I don't think so. The main purpose of universes is as a simplifying device of convenience to stratify the full universe by levels, which can be fruitfully compared by local notions of large and small. This makes for a very convenient theory, having numerous local concepts of large and small. I can imagine, however, a case where one has used the Universe theory to prove an elementary result, such as Fermat's Last Theorem, and one wants to know what are the optimal hypotheses for the proof. The question would be whether the extra universe hypotheses are required or not.
The thrust of my answer here is that such a question will be answered by replacing the universe concepts that are used in the proof with any of the various weakened universe concepts that I have mentioned above, and thereby realizing the theorem as a theorem of ZFC or much less.
{ "source": [ "https://mathoverflow.net/questions/24552", "https://mathoverflow.net", "https://mathoverflow.net/users/1353/" ] }
24,573
If $k$ is a characteristic $p$ field containing a subfield with $p^2$ elements (e.g., an algebraic closure of $\mathbb{F}_p$), then the number of isomorphism classes of supersingular elliptic curves over $k$ has a formula involving $\lfloor p/12 \rfloor$ and the residue class of $p$ mod 12, described in Chapter V of Silverman's The Arithmetic of Elliptic Curves. If we weight these curves by the reciprocals of the orders of their automorphism groups, we obtain the substantially simpler Eichler-Deuring mass formula: $\frac{p-1}{24}$. For example, when $p=2$, the unique supersingular curve $y^2+y=x^3$ has endomorphisms given by the Hurwitz integers (a maximal order in the quaternions), and its automorphism group is therefore isomorphic to the binary tetrahedral group, which has order 24. Silverman gives the mass formula as an exercise, and it's pretty easy to derive from the formula in the text. The proof of the complicated formula uses the Legendre form (hence only works away from 2), and the appearance of the $p/12$ boils down to the following two facts: Supersingular values of $\lambda$ are precisely the roots of the Hasse polynomial, which is separable of degree $\frac{p-1}2$. The $\lambda$-line is a 6-fold cover of the $j$-line away from $j=0$ and $j=1728$ (so the roots away from these values give an overcount by a factor of 6). Question: Is there a proof of the Eichler-Deuring formula in the literature that avoids most of the case analysis, e.g., by using a normal form of representable level? I suppose any nontrivial level structure will probably require some special treatment for the prime(s) dividing that level. Even so, it would be neat to see any suitably holistic enumeration, in particular, one that doesn't need to single out special $j$-invariants. (This question has been troubling me for a while, but Greg's question inspired me to actually write it down.)
One argument (maybe not of the kind you want) is to use the fact that the wt. 2 Eisenstein series on $\Gamma_0(p)$ has constant term $(p-1)/24$. More precisely: if $\{E_i\}$ are the s.s. curves, then for each $i,j$, the Hom space $L_{i,j} := Hom(E_i,E_j)$ is a lattice with a quadratic form (the degree of an isogeny), and we can form the corresponding theta series $$\Theta_{i,j} := \sum_{n = 0}^{\infty} r_n(L_{i,j})q^n,$$ where as usual $r_n(L_{i,j})$ denotes the number of elements of degree $n$. These are wt. 2 forms on $\Gamma_0(p)$.

There is a pairing on the $\mathbb Q$-span $X$ of the $E_i$ given by $\langle E_i,E_j\rangle = \#\operatorname{Iso}(E_i,E_j)$, i.e. $$\langle E_i,E_j\rangle = 0 \text{ if } i \neq j, \text{ and equals } \#\operatorname{Aut}(E_i) \text{ if } i = j,$$ and another formula for $\Theta_{i,j}$ is $$\Theta_{i,j} := 1 + \sum_{n = 1}^{\infty} \langle T_n E_i, E_j\rangle q^n,$$ where $T_n$ is the $n$th Hecke correspondence.

Now write $x := \sum_{j} \frac{1}{\#\operatorname{Aut}(E_j)} E_j \in X$. It's easy to see that for any fixed $i$, the value of the pairing $\langle T_n E_i,x\rangle$ is equal to $\sum_{d |n , (p,d) = 1} d$. (This is just the number of $n$-isogenies with source $E_i,$ where the target is counted up to isomorphism.) Now $$\sum_{j} \frac{1}{\#\operatorname{Aut}(E_j)} \Theta_{i,j} = \bigg{(}\sum_{j} \frac{1}{\#\operatorname{Aut}(E_j)}\bigg{)} + \sum_{n =1}^{\infty} \langle T_n E_i, x\rangle q^n = \bigg{(}\sum_{j}\frac{1}{\#\operatorname{Aut}(E_j)}\bigg{)} + \sum_{n = 1}^{\infty} \bigg{(}\sum_{d | n, (p,d) = 1} d\bigg{)}q^n.$$

Now the LHS is modular of wt. 2 on $\Gamma_0(p)$, thus so is the RHS. Since we know all its Fourier coefficients besides the constant term, and they coincide with those of the Eisenstein series, it must be the Eisenstein series. Thus we know its constant term as well, and that gives the mass formula.

(One can replace the geometric aspects of this argument, involving s.s. curves and Hecke correspondences, with pure group theory/automorphic forms: namely the set $\{E_i\}$ is precisely the idele class set of the multiplicative group $D^{\times}$, where $D$ is the quat. alg. over $\mathbb Q$ ramified at $p$ and $\infty$. This formula, writing the Eisenstein series as a sum of theta series, is then a special case of the Siegel-Weil formula, I believe, which in general, when you pass to constant terms, gives mass formulas of the type you asked about.)
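As a quick sanity check of the constant term (not part of the argument above), the two smallest primes can be verified by hand:

$$p=2:\ \frac{1}{\#\operatorname{Aut}(y^2+y=x^3)}=\frac{1}{24}=\frac{2-1}{24},\qquad p=3:\ \frac{1}{\#\operatorname{Aut}(y^2=x^3-x)}=\frac{1}{12}=\frac{3-1}{24},$$

using that each of these primes has a single supersingular curve, with automorphism group of order 24 (as recalled in the question) and 12 respectively.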
{ "source": [ "https://mathoverflow.net/questions/24573", "https://mathoverflow.net", "https://mathoverflow.net/users/121/" ] }
24,579
I saw a while ago in a book by Clifford Pickover, that whether the Flint Hills series $\displaystyle \sum_{n=1}^\infty\frac1{n^3\sin^2 n}$ converges is open. I would think that the question of its convergence is really about the density in $\mathbb N$ of the sequence of numerators of the standard convergent approximations to $\pi$ (which, in itself, seems like an interesting question). Naively, the point is that if $n$ is "close" to a whole multiple of $\pi$ , then $1/(n^3\sin^2n)$ is "close" to $\frac1{\pi^2 n}$ . [Numerically there is some evidence that only some of these values of $n$ affect the overall behavior of the series. For example, letting $S(k)=\sum_{n=1}^{k}\frac1{n^3\sin^2n}$ , one sees that $S(k)$ does not change much in the interval, say, $[50,354]$ , with $S(354)<5$ . However, $S(355)$ is close to $30$ , and note that $355$ is very close to $113\pi$ . On the other hand, $S(k)$ does not change much from that point until $k=100000$ , where I stopped looking.] I imagine there is a large body of work within which the question of the convergence of this series would fall naturally, and I would be interested in knowing something about it. Sadly, I'm terribly ignorant in these matters. Even knowing where to look for some information on approximations of $\pi$ by rationals, or an ad hoc approach just tailored to this specific series would be interesting as well.
As Robin Chapman mentions in his comment, the difficulty of investigating the convergence of $$ \sum_{n=1}^\infty\frac1{n^3\sin^2n} $$ is due to lack of knowledge about the behavior of $|n\sin n|$ as $n\to\infty$, while the latter is related to rational approximations to $\pi$ as follows.

Neglecting the terms of the sum for which $n|\sin n|\ge n^\varepsilon$ ($\varepsilon>0$ is arbitrary), as they all contribute only to the 'convergent part' of the sum, the question is equivalent to the one for the series $$ \sum_{n:n|\sin n|< n^\varepsilon}\frac1{n^3\sin^2n}. \qquad(1) $$ For any such $n$, let $q=q(n)$ minimize the distance $|\pi q-n|\le\pi/2$. Then $$ \sin|\pi q-n|=|\sin n|< \frac1{n^{1-\varepsilon}}, $$ so that $|\pi q-n|\le C_1/n^{1-\varepsilon}$ for some absolute constant $C_1$ (here we use that $\sin x\sim x$ as $x\to0$). Therefore, $$ \biggl|\pi-\frac nq\biggr|<\frac{C_1}{qn^{1-\varepsilon}}, $$ equivalently $$ \biggl|\pi-\frac nq\biggr|<\frac{C_2}{n^{2-\varepsilon}} \quad\text{or}\quad \biggl|\pi-\frac nq\biggr|<\frac{C_2'}{q^{2-\varepsilon}} $$ (because $n/q\approx\pi$) for all $n$ participating in the sum (1).

It is now clear that the convergence of the sum (1) depends on how often we have $$ \biggl|\pi-\frac nq\biggr|<\frac{C_2'}{q^{2-\varepsilon}} $$ and how small the quantity is in these cases. (Note that it follows from Dirichlet's theorem that an even stronger inequality, $$ \biggl|\pi-\frac nq\biggr|<\frac1{q^2}, $$ happens for infinitely many pairs $n$ and $q$.)

The series (1) converges if and only if $$ \sum_{n:|\pi-n/q|< C_2n^{-2+\varepsilon}}\frac1{n^5|\pi-n/q|^2} $$ converges. We can replace the summation by summing over $q$ (again, for each term $\pi q\approx n$) and then sum the result over all $q$, because the additional terms, those with $|\pi-n/q|\ge C_2n^{-2+\varepsilon}$, do not affect the convergence: $$ \sum_{q=1}^\infty\frac1{q^5|\pi-n/q|^2} =\sum_{q=1}^\infty\frac1{q^3(\pi q-n)^2} \qquad(2) $$ where $n=n(q)$ is now chosen to minimize $|\pi-n/q|$.

Summarizing, the original series converges if and only if the series in (2) converges.

It is already an interesting question what can be said about the convergence of (2) if we replace $\pi$ by another constant $\alpha$, for example by a "generic irrationality". The series $$ \sum_{q=1}^\infty\frac1{q^3(\alpha q-n)^2} $$ for a real quadratic irrationality $\alpha$ converges because the best approximations are $C_3/q^2\le|\alpha-n/q|\le C_4/q^2$, and they are achieved on the convergents $n/q$ with $q$ increasing geometrically. A more delicate question seems to be for $\alpha=e$, because one third of its convergents satisfies $$ C_3\frac{\log\log q}{q^2\log q}<\biggl|e-\frac pq\biggr|< C_4\frac{\log\log q}{q^2\log q} $$ (see, e.g., [C. S. Davis, Bull. Austral. Math. Soc. 20 (1979) 407--410]).

The number $e$, quadratic irrationalities, and even algebraic numbers are 'generic' in the sense that their irrationality exponent is known to be 2. What about $\pi$? The irrationality exponent $\mu=\mu(\alpha)$ of a real irrational number $\alpha$ is defined as the infimum of exponents $\gamma$ such that the inequality $|\alpha-n/q|\le|q|^{-\gamma}$ has only finitely many solutions in $(n,q)\in\Bbb Z^2$ with $q\ne0$. (So, Dirichlet's theorem implies that $\mu(\alpha)\ge2$. At the same time from metric number theory we know that it is 2 for almost all real irrationals.)
Assume that $\mu(\pi)>5/2$; then there are infinitely many solutions to the inequality $$ \biggl|\pi-\frac nq\biggr|<\frac{C_5}{q^{5/2}}, $$ hence infinitely many terms in (2) are bounded below by $1/C_5^2$, so that the series diverges (and (1) does as well). Although the general belief is that $\mu(\pi)=2$, the best known result of V. Salikhov (see this answer by Gerry and my comment) only asserts that $\mu(\pi)<7.6064\dots$. I hope that this explains the problem of determining the behavior of the series in question.
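For what it's worth, the partial sums quoted in the question are easy to reproduce (a rough double-precision sketch; the variable names are mine, and for $n$ of this size the rounding error in computing $\sin n$ does not change the qualitative picture):

```python
import math

# Partial sums S(k) = sum_{n<=k} 1/(n^3 sin(n)^2) of the Flint Hills series.
targets = (354, 355, 1000, 100000)
total = 0.0
for n in range(1, max(targets) + 1):
    total += 1.0 / (n**3 * math.sin(n) ** 2)
    if n in targets:
        print(n, total)
# Expect S(354) < 5 and a jump of about 25 at n = 355: the term there is
# roughly 1/(355**3 * (3e-5)**2), since |355 - 113*pi| is about 3e-5.
```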
{ "source": [ "https://mathoverflow.net/questions/24579", "https://mathoverflow.net", "https://mathoverflow.net/users/6085/" ] }
24,585
There seem to be a few papers around with Erdős written as Erdös. For example: MR0987571 (90h:11090) Alladi, K.; Erdös, P.; Vaaler, J. D. Multiplicative functions and small divisors. II. J. Number Theory 31 (1989), no. 2, 183--190. (Reviewer: Friedrich Roesler) 11N37 Would it be incorrect to cite such papers using Erdős instead?
We cite papers to show our respect to the authors and to help our readers find stuff. For the second purpose, I suspect most people would just type in names without diacritical marks, and most search facilities would find what you're looking for based on the letters alone, so it doesn't really matter. But for the first purpose, I think you should spell the name the way its owner would want it spelled, regardless of what some journal may have done.
{ "source": [ "https://mathoverflow.net/questions/24585", "https://mathoverflow.net", "https://mathoverflow.net/users/2264/" ] }
24,594
Are there general surveys or introductions to the homotopy groups of spheres? I'm interested especially in connections to low-dimensional geometry and topology.
While my Algebraic Topology book and my unfinished book on spectral sequences (referred to in other answers to this question) contain some information about homotopy groups of spheres, they don't really qualify as a general survey or introduction. One source that fits this bill more closely is Chapter 1 of Doug Ravenel's "green book" Complex Cobordism and Stable Homotopy Groups of Spheres, from 1986. This introductory chapter starts at a reasonably accessible level, with increasing prerequisites in the later sections of the chapter. More recent surveys ought to exist, although at the moment I can't recall any. With the recent solution of the Kervaire invariant problem by Hill-Hopkins-Ravenel, this would be a good time for an updated survey. Connections between homotopy groups of spheres and low-dimensional geometry and topology have traditionally been somewhat limited, with the Hopf bundle being the thing that comes most immediately to mind. A fairly recent connection is Soren Galatius' theorem that the homology groups of $Aut(F_n)$, the automorphism group of a free group, are isomorphic in a stable range of dimensions to the homology groups of "loop-infinity S-infinity", the space whose homotopy groups are the stable homotopy groups of spheres.
{ "source": [ "https://mathoverflow.net/questions/24594", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
24,604
Well, I'm aware that this question may seem very naive to the several experts on this topic that populate this site: feel free to add the "soft question" tag if you want... So, knowing nothing about modular forms (except that they're intrinsically sections of powers of the canonical bundle over some moduli space of elliptic curves, and, transcendentally, differentials on the upper half plane invariant w.r.t. some specific subgroup of $SL(2,\mathbb{Z})$), I have the curiosity -that many other non experts might have- to understand a bit why that is considered a so vast and important topic in mathematics. The wikipedia page doesn't help: on the contrary, it makes this topic appear as quite narrow and merely technical. I would roughly divide the question into three (though maybe not neatly distinct) parts:

1) Why are modular forms per se interesting? That is, do they "generate" some piece of rich self-contained mathematics? To make an analogy: cohomology functors were born as applied tools for studying spaces, but have then evolved to a very rich theory in itself; can the same be said about m.f.'s?

2) How are modular forms deeply related to other, possibly quite distant, mathematical areas? For example: I've heard about deep relations to some generalized cohomology theories (elliptic cohomology) via formal group laws coming from elliptic curves; and I've heard about the so-called moonshine conjecture; there should also be some more classical relations to the theory of integral quadratic forms and diophantine equations, and of course to elliptic curves; and people here always mention Galois representations...

3) Why are modular forms useful as "applied" technical tools? In this last question I'm ideally expecting indications of cases (or actual theorems) in which some questions that do not involve modular forms are asked about some mathematical objects, and an answer that does not involve m.f.'s is given, but the method used to obtain that answer/proof makes consistent use of m.f.'s.
Your questions would require an enormous amount of work to answer properly, so let me just suggest a few modest and very partial answers to your 1), 2), 3).

1) Modular forms are shiny: they satisfy or explain many beautiful and surprising numerical identities (about partitions and sums of squares among others). This got them noticed in the first place.

2) Modular forms have Galois representations, and conversely Galois representations often come from modular forms. If you care at all about representations of the absolute Galois group of $\mathbb Q$, then you will first presumably be interested in class field theory, and develop the Kronecker-Weber theorem. But then you will get interested in representations of $G_{\mathbb Q}$ of rank 2. Modular forms provide many examples of such Galois representations, and conversely, only a handful of hypotheses are required for such a Galois representation to come from a modular form. This means concretely that one can identify many Galois representations simply by computing a few traces of Frobenius morphisms and then doing some computations in the complex upper half-plane.

3) If a rational elliptic curve has a non-vanishing $L$-function at 1, it has no non-torsion rational points. The main conjecture of Iwasawa (about class groups in the cyclotomic $\mathbb Z_{p}$-extension of $\mathbb Q$) is true. Fermat's last theorem is true. Here are three extremely famous conjectures solved by an ubiquitous appeal to modular forms. All these conjectures were well-known in the 60s but I don't think it is an exaggeration to say that almost no one then would have suspected that modular forms would come into play.
{ "source": [ "https://mathoverflow.net/questions/24604", "https://mathoverflow.net", "https://mathoverflow.net/users/4721/" ] }
24,605
As I remember the following is true: Fact: for every infinite-dimensional normed space $X$ the unit sphere $S$ is weak-dense in the unit ball $B$. Please help me find a reference. Thanks in advance Miki
{ "source": [ "https://mathoverflow.net/questions/24605", "https://mathoverflow.net", "https://mathoverflow.net/users/4282/" ] }
24,688
Is there an efficient way to sample uniformly points from the unit n-sphere? Informally, by "uniformly" I mean the probability of picking a point from a region is proportional to the area of that region on the surface of the sphere. Formally, I guess I'm referring to the Haar measure. I guess "efficient" means the algorithm should take poly(n) time. Of course, it's not clear what I mean by an algorithm since real numbers cannot be represented on a computer to arbitrary precision, so instead we can imagine a model where real numbers can be stored, and arithmetic can be performed on them in constant time. Also, we're given access to a random number generator which outputs a real in [0,1]. In such a model, it's easy to sample from the surface of the n-hypercube in O(n) time, for example. If you prefer to stick with the standard model of computation, you can consider the approximate version of the problem where you have to sample from a discrete set of vectors that $\epsilon$-approximate the surface of the n-sphere.
Generate $X_1, X_2, \ldots, X_n$ independent, normally distributed random variables. See Wikipedia for information on how to do this given some standard source of randomness, for example uniform(0,1) random variables. Then let $Y_i = X_i/\sqrt{X_1^2 + \cdots + X_n^2}$ for $i = 1, \ldots, n$. Then $(Y_1, \ldots, Y_n)$ is uniformly distributed on the surface of the sphere. The time this takes is linear in $n$. This works because the multivariate normal $(X_1, \ldots, X_n)$ with covariance matrix the identity (that is, $n$ independent unit normals) is rotationally symmetric around the origin.
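A direct transcription into code (a minimal NumPy sketch; the function name and interface are mine, not from any library):

```python
import numpy as np

def sample_sphere(n, num_samples=1, rng=None):
    """Uniform samples from the unit sphere in R^n: draw i.i.d. standard
    normals and normalize; rotational symmetry of the Gaussian does the rest."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal((num_samples, n))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

pts = sample_sphere(3, num_samples=5)
print(pts)                          # five points on the sphere in R^3
print(np.linalg.norm(pts, axis=1))  # each row has norm ~1.0
```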
{ "source": [ "https://mathoverflow.net/questions/24688", "https://mathoverflow.net", "https://mathoverflow.net/users/1042/" ] }
24,773
This is a question that I have seen asked passively in comments relating to the separation of category theory from set theory, but I haven't seen it addressed in full. I know that it's possible to formulate category theory within set theory while still being able to construct the useful things one would want from category theory. So as far as I understand, all normal mathematics that involves category theory can be done as long as a little caution is taken. I also know that some people (categorical foundationalists) would still like to formulate category theory without use of or reference to set theory. While I admit that I am curious about this for curiosity's sake, I'm not sure if there are any practical motivations for doing this. The only reason for wanting to separate category theory from set theory that I have read about is for the sake of 'autonomy of category theory'. So my question is twofold: What other reasons might categorical foundationalists have for separating category theory from set theory, and what practical purposes might it serve to do this?
I don't agree that this is what (most) categorists who are interested in foundations are doing. It is true that Lawvere in the mid-60's (and perhaps to this day) wanted to develop a theory of categories independent of a theory of sets, but I don't think that represents the main thrust of modern-day categorical work on "foundations". Much more work has been directed toward developing a full-fledged categorical theory of sets, either as in Lawvere's Elementary Theory of a Category of Sets and extensions thereof, or understanding classical theories of sets such as ZF through a categorical lens, as in Algebraic Set Theory. There is also ongoing discussion of what strength of set theory is suitable for doing what category theorists would like to do. As one can see with even a casual perusal of such work, there is no antagonism toward set theory per se, or a desire to somehow get away from sets.

I think some confusion might stem from over-hasty identification of set theory with a "canonized" form of set theory, such as ZFC (or something in that family such as Gödel-Bernays set theory), based on a single binary predicate called "membership". In ordinary ZFC, a set is characterized by its membership tree, so that the elements of sets are sets themselves, possessing their own internal structure. This may be termed a "materialist" form of set theory (material because elements of sets are considered as having "substance"). If there is antagonism toward this type of set theory on the part of some category theorists, it's because it lends itself to a conception of "set" that is largely irrelevant to the actual practice of core mathematics, insofar as mathematicians don't care what elements are "made of".

The prevailing trends of mathematical practice today and throughout most of the twentieth century promote a more "structuralist" view: that what counts is not what the elements of a structure "are" particularly, but rather how they are interrelated in a structure, and where two structures are considered abstractly the same if they are isomorphic. This seems like a truism today, but it is precisely this view which drives a more categorically-minded view, which looks toward not what sets "are", but of how we use them, what abstract constructions we want to perform on them, and so on. Thus, concepts such as "power set" are in this view more relevantly captured by suitable universal properties which serve to characterize their structure up to specified isomorphism. A theory of sets which takes this point of view seriously and axiomatically may be termed a "structural set theory". Thus the real contrast is between "material" and "structural" theories of sets, with category theorists tending to prefer structural set theory. An example of such is Lawvere's aforementioned Elementary Theory of the Category of Sets (ETCS). A different and more recent example is Mike Shulman's SEAR (Sets, Elements, and Relations), which you can read about at the nLab.

As for practical benefits of structuralist set theory: they are huge! It should be borne in mind that elementary topos theory was largely inspired by Lawvere's insight that Grothendieck toposes themselves model most of the axioms of the kind of structuralist set theory he was investigating in ETCS, and this has been revolutionary. This answer is already long enough, so I won't enter on a discussion of that here.
{ "source": [ "https://mathoverflow.net/questions/24773", "https://mathoverflow.net", "https://mathoverflow.net/users/3664/" ] }
24,913
Mathematics is rife with the fruit of abstraction. Many problems which first are solved via "direct" methods (long and difficult calculations, tricky estimates, and gritty technical theorems) later turn out to follow beautifully from basic properties of simple devices, though it often takes some work to set up the new machinery. I would like to hear about some examples of problems which were originally solved using arduous direct techniques, but were later found to be corollaries of more sophisticated results. I am not as interested in problems which motivated the development of complex machinery that eventually solved them, such as the Poincare conjecture in dimension five or higher (which motivated the development of surgery theory) or the Weil conjectures (which motivated the development of l-adic and other cohomology theories). I would also prefer results which really did have difficult solutions before the quick proofs were found. Finally, I insist that the proofs really be quick (it should be possible to explain it in a few sentences granting the machinery on which it depends) but certainly not necessarily easy (i.e. it is fine if the machinery is extremely difficult to construct). In summary, I'm looking for results that everyone thought was really hard but which turned out to be almost trivial (or at least natural) when looked at in the right way. I'll post an answer which gives what I would consider to be an example. I decided to make this a community wiki, and I think the usual "one example per answer" guideline makes sense here.
Here is my example. In the 1930's (I think), Wiener gave a proof that if $f$ is a continuous nonvanishing function on the circle with absolutely convergent Fourier series, then so is $1/f$. The proof was a long piece of hard analysis, involving detailed local calculations and complicated estimates. Later (in the 1940's?), Gelfand found that the statement follows from the basic theory of Banach algebras as follows. The functions on the circle with absolutely convergent Fourier series can be characterized as the image of the Gelfand transform $\Gamma: l^1(\mathbb{Z}) \to C(S^1)$. In general if $\Gamma: B \to C(M)$ is the Gelfand transform from a commutative Banach algebra to the ring of continuous functions on its maximal ideal space, then $x$ is invertible in $B$ if and only if $\Gamma(x)$ is invertible in $C(M)$. So the hypotheses on $f$ imply that $f = \Gamma(x)$ for some invertible $x$ in $l^1(\mathbb{Z})$, and a simple calculation shows that $1/f = \Gamma(x^{-1})$.
{ "source": [ "https://mathoverflow.net/questions/24913", "https://mathoverflow.net", "https://mathoverflow.net/users/4362/" ] }
24,960
Over the years, I have heard two different proposed answers to this question. It has something to do with parabolic elements of $SL(2,\mathbb{R})$. This sounds plausible, but I haven't heard a really convincing explanation along these lines. "Parabolic" is short for "para-Borelic," meaning "containing a Borel subgroup." Which answer, if either, is correct? A related question is who first introduced the term and when. Chevalley perhaps?
It appears that neither of the answers is fully correct. There is a great book, "Essays in the history of Lie groups and algebraic groups" by Armand Borel, when it comes to references of this type. To quote from chapter VI section 2: "...There was no nice terminology for the subgroups $P_I$ with Lie algebra the $\mathfrak p_I$ until R. Godement suggested calling them parabolic subgroups. I shall therefore anachronistically call them that..." "The geometry of the finite simple groups" by F. Buekenhout is on the other hand the only paper that came up in a search for paraborelic, and the author mentions he is using this term instead of parabolic to distinguish from parabolic subgroups of Chevalley groups.
{ "source": [ "https://mathoverflow.net/questions/24960", "https://mathoverflow.net", "https://mathoverflow.net/users/3106/" ] }
24,970
This was going to be a comment to Differentiable structures on R^3, but I thought it would be better asked as a separate question. So, it's mentioned in the previous question that $\mathbb{R}^4$ has uncountably many (smooth) differentiable structures. This is a claim I've certainly heard before, and I have looked a little bit at the construction of exotic $\mathbb{R}^4$s, but it's something that I really can't say I have an intuitive understanding of. It seems reasonable enough to me that a generic manifold can have more than one differentiable structure, just from the definition; and it is, in fact, a little surprising to me that manifolds have only one differentiable structure for dimension $d \le 3$. But it's very odd to me that $\mathbb{R}^d$ has exactly one differentiable structure, unless $d=4$, when it has way too many! Naively, I would have thought that, since $\mathbb{R}^4 = \mathbb{R}^2 \times \mathbb{R}^2$, and $\mathbb{R}^2$ has only one differentiable structure, not much can happen. Although we know $\text{Diff}(M\times N)$ cannot generically be reasonably decomposed in terms of $\text{Diff}(M)$ and $\text{Diff}(N)$ in general, I would not have expected there to be obstructions for this to happen in this case. I would also have thought that, since $\mathbb{R}^5$ has only one differentiable structure, and $\mathbb{R}^4$ is a submanifold of $\mathbb{R}^5$, and $\mathbb{R}^3$ is a submanifold of $\mathbb{R}^4$ with only one differentiable structure, this would be fairly restrictive on the differentiable structures $\mathbb{R}^4$ can have. Although it seems that this only restricts the "inherited" differentiable structure to be a unique one, it still seems odd to me that there are "non-inherited" structures in $d=4$ from $d=5$, and somehow all of these non-inherited structures are identical on the submanifold $\mathbb{R}^3$! Anyway, can anyone provide an intuitively sensible explanation of why $\mathbb{R}^4$ is so screwed up compared to every other dimension? Usually I would associate multiple differentiable structures with something topologically "wrong" with the manifold. Is something topologically "wrong" with $\mathbb{R}^4$ compared to every other dimension? Or is this a geometric problem somehow?
I once heard Witten say that topology in 5 and higher dimensions "linearizes". What he meant by that is that the geometric topology of manifolds reduces to algebraic topology. Beginning with the Whitney trick to cancel intersections of submanifolds in dimension $d \ge 5$, you then get the h-cobordism theorem, the solution to the Poincare conjecture, and surgery theory. As a result, any manifold in high dimensions that is algebraically close enough to $\mathbb{R}^d$ is homeomorphic or diffeomorphic to $\mathbb{R}^d$.

By the work of Freedman and others using Casson handles, there is a version of or alternative to the Whitney trick in $d=4$ dimensions, but only in the continuous category and not in the smooth category. Otherwise geometric topology does not "linearize" in Witten's sense. But in $d \le 3$ dimensions, the dimension is too low for the smooth category to separate from the continuous category, at least for the question of classification of manifolds.

What you have in 3 dimensions is examples such as the Whitehead manifold, which is contractible but not homeomorphic to $\mathbb{R}^3$. In 4 dimensions you instead get open manifolds that are homeomorphic to $\mathbb{R}^4$ because they are contractible and simply connected at infinity (I'm not sure if other conditions are needed), but not diffeomorphic to $\mathbb{R}^4$. You have to be on the threshold between low dimensions and high dimensions to have the phenomenon. I would say that these exotic $\mathbb{R}^4$s don't really look that much like standard $\mathbb{R}^4$, they just happen to be homeomorphic. The homeomorphism has fractal features, and so does the Whitehead manifold.

Meanwhile 2 dimensions is too low to have non-standard contractible manifolds. In the smooth category, the Riemann uniformization theorem proves that smooth 2-manifolds are very predictable, or you can get the same result in the PL category with a direct combinatorial attack on planar graphs. And as mentioned, smooth, PL, and topological manifolds don't separate in this dimension.

Also, concerning your question about Cartesian products: Obviously the famous results imply that there is a fibration of standard $\mathbb{R}^5$ by exotic $\mathbb{R}^4$. The Whitehead manifold cross $\mathbb{R}$ is also homeomorphic to $\mathbb{R}^4$. (I don't know if it's diffeomorphic.) These fibrations are also fractal or have fractal features.
{ "source": [ "https://mathoverflow.net/questions/24970", "https://mathoverflow.net", "https://mathoverflow.net/users/3329/" ] }
25,089
When, if ever, can we view a differential form, e.g. like $dx \wedge dy$, as the similar looking expression used in physics to represent the product of "infinitesimals" e.g. $dx$ $dy$? In particular, I'm wondering why differential forms are anti-symmetric, e.g. $dx \wedge dy=-dy \wedge dx$, whereas in physics we often are happy to write $dx$ $dy=dy$ $dx$. Am I misunderstanding something basic?
In both physics and mathematics, there are times when you want a signed multiple integral $dx \wedge dy$, and there are times when you want its unsigned counterpart $dx\;dy = |dx \wedge dy|$. The difference is that in physics, the notation $dx \wedge dy$ is typically paraphrased either with cross products or with antisymmetric indices. The exterior algebra of differential forms is a brilliant definition due to Elie Cartan. Physicists sometimes need ideas that are equivalent to Cartan's work in this topic, but in most areas of physics they simply didn't adopt his notation. One major exception is string theorists and certain gauge theorists, who by now understand Cartan perfectly well. For example, the most elegant way to understand a surface integral, as you see it in Ampere's law or Stokes' theorem, is as a signed integral. It is the integral of a differential form $$\omega(x(u,v),y(u,v),z(u,v)) = f(u,v) du \wedge dv$$ over a surface. But you can instead write it as the surface integral of a vector field $\vec{\omega} \cdot (\vec{du} \times \vec{dv})$. Any physicist can tell you that it's a signed integral; the only thing missing is Cartan's notation. A related example is Maxwell's equations. In low energy physics you write them as four equations with 3-vectors. In high-energy physics you write them as one or two equations with 4-vectors and 4-tensors with indices. You can also write the same equation using differential forms, but only gauge theorists and string theorists feel that they need that notation. On the other hand, a mathematician who wants to use a probability density function or find an unsigned area or volume is perfectly happy to integrate with respect to $dx\;dy = |dx \wedge dy|$. Given other examples such as $ds = \sqrt{|dx|^2+|dy|^2}$ and $|dx \wedge dy|^p$, there is also a shift in emphasis: In more elementary use of Leibniz notation, the differentials are meant more as instructions for what kind of integral you are doing. In Cartan's notation, and in these other unsigned variations, the differentials become objects in their own right, basically what physicists would recognize as tensor fields with special transformation laws.
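To make the contrast concrete (a standard change-of-variables observation, added for illustration): if $(x,y)=(x(u,v),y(u,v))$ is a smooth change of variables with Jacobian matrix $J$, then

$$dx\wedge dy=\det(J)\,du\wedge dv,\qquad dx\,dy=|\det(J)|\,du\,dv,$$

so the signed (Cartan) product remembers orientation, while the unsigned product is what one integrates densities and areas against.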
{ "source": [ "https://mathoverflow.net/questions/25089", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
25,132
A wikipedia page/paragraph on ℵ₁ states: "The definition of ℵ₁ implies (in ZF, Zermelo-Fraenkel set theory without the axiom of choice) that no cardinal number is between ℵ₀ and ℵ₁." "If the axiom of choice (AC) is used, it can be further proved that the class of cardinal numbers is totally ordered, and thus ℵ₁ is the second-smallest infinite cardinal number." Can someone point me at a lay explanation of this please? Is it simply saying that ℵ½'s existence is up to definition, or choice? And has this been shown by the axiom of choice?
The point is that without the Axiom of Choice, cardinalities are not linearly ordered, and it is possible under $\neg AC$ that there are additional cardinalities to the side of the $\aleph$'s. Thus, the issue is not additional cardinalities between $\aleph_0$ and $\aleph_1$, but rather additional cardinalities to the side, incomparable with these cardinalities. Let me explain.

We say that two sets $A$ and $B$ are equinumerous or have the same cardinality if there is a bijection $f:A\to B$. We say that $A$ has smaller-or-equal cardinality than $B$ if there is an injection $f:A\to B$. It is provable (without AC) that $A$ and $B$ have the same cardinality if and only if each is smaller-or-equal to the other (this is the Cantor-Schröder-Bernstein theorem). Under AC, every set is bijective with an ordinal, and so we may use these ordinals to select canonical representatives from the equinumerosity classes. Thus, under AC, the $\aleph_\alpha$'s form all of the possible infinite cardinalities. But when AC fails, the cardinalities are not linearly ordered (the linearity of cardinalities is equivalent to AC). Let me mention a few examples:

It is a consequence of the Axiom of Determinacy that there is no $\omega_1$-sequence of distinct reals. Thus, in any model of AD, the cardinality of the reals is uncountable, but incomparable to $\aleph_1$. Thus, in such a model, it is no longer correct to say that $\aleph_1$ is the smallest uncountable cardinal. One should say instead that $\aleph_1$ is the smallest uncountable well-orderable cardinal.

A more extreme example is provided by the Dedekind finite infinite sets. These sets are not finite, but also not bijective with any proper subset. It follows that they can have no countably infinite subsets. In particular, they are uncountable sets, but their cardinality is incomparable with $\omega$. Thus, in a model of $\neg AC$ having a Dedekind finite infinite set, it is no longer correct to say that $\aleph_0$ is the smallest infinite cardinal.

Thus, the issue isn't whether there is something between $\aleph_0$ and $\aleph_1$, but rather, whether there are additional cardinalities to the side of these cardinalities.
{ "source": [ "https://mathoverflow.net/questions/25132", "https://mathoverflow.net", "https://mathoverflow.net/users/6156/" ] }
25,161
Background: When I first took measure theory/integration, I was bothered by the idea that the integral of a real-valued function w.r.t. a measure was defined first for nonnegative functions and only then for real-valued functions using the crutch of positive and negative parts (and only then for complex-valued functions using their real and imaginary parts). It seemed like a strange starting point to make the theory dependent on knowledge of the nonnegative function case when this certainly isn't necessary for Riemann integrals or infinite series: in those cases you just take the functions or sequences as they come to you and put no bias on positive or negative parts in making the definitions of integrating or summing. Later on I learned about integration w.r.t a measure of Banach-space valued functions in Lang's Real and Functional Analysis. You can't break up a Banach-space valued function into positive and negative parts, so the whole positive/negative part business has to be tossed aside as a foundational concept. At the end of this development in the book Lang isolates the special aspects of integration for nonnegative real-valued functions (which potentially could take the value $\infty$). Overall it seemed like a more natural method. Now I don't think a first course in integration theory has to start off with Banach-space valued functions, but there's no reason you couldn't take a cue from that future generalization by developing the real-valued case in the same way Banach-space valued functions are handled, thereby avoiding the positive/negative part business as part of the initial steps. Finally my question: Why do analysts prefer the positive/negative part foundations for integration when there is a viable alternative that doesn't put any bias on which function values are above 0 or below 0 (which seems to me like an artificial distinction to make)? Note: I know that the Lebesgue integral is an "absolute" integral, but I don't see that as a justification for making the very definition of the integral require treatment of nonnegative functions first. (Lang's book shows it is not necessary. I know analysts are not fond of his books, but I don't see a reason that the method he uses, which is just copying Bochner's development of the integral, should be so wildly unpopular.)
It's really the difference between two kinds of completions:

1. An order-theoretic completion. For this, it's easiest to start with non-negative functions, and have infinite values dealt with pretty naturally.
2. A metric completion. For this, it's more natural to start with finite-valued signed simple functions.

It's not exactly that simple -- historically, signed simple functions (well, actually, I think they used step functions) were used in an order-theoretic treatment by Riesz and Nagy. But I think this is a good way to look at the two ways of approaching this integral. And needless to say, these two approaches generalize in two different contexts. They are both interesting and illuminate somewhat different aspects of the Lebesgue integral, even on the real line. For instance, the order-theoretic approach leads quickly to results such as the monotone convergence and bounded convergence theorems, while the metric approach leads naturally to the topology of convergence in measure and completeness of the $L_p$ spaces.
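For comparison, here is the metric-completion route in compressed form (a sketch of the Bochner-style construction the question refers to, stated for real-valued $f$): for a simple function $s=\sum_i a_i\chi_{A_i}$ with $\mu(A_i)<\infty$ put

$$\int s\,d\mu=\sum_i a_i\,\mu(A_i),$$

and declare $f$ integrable when there are simple $s_n$ with $s_n\to f$ a.e. and $\int|s_n-s_m|\,d\mu\to0$ as $m,n\to\infty$; then $\int f\,d\mu=\lim_n\int s_n\,d\mu$, which is independent of the approximating sequence, and no positive/negative split ever appears.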
{ "source": [ "https://mathoverflow.net/questions/25161", "https://mathoverflow.net", "https://mathoverflow.net/users/3272/" ] }
25,307
Is there any sort of classification of (say finite) groups with the property that every subgroup is normal? Of course, any abelian group has this property, but the quaternions show commutativity isn't necessary. If there isn't a classification, can we at least say the group must be of prime power order, or even a power of two?
These are called Dedekind groups, and the non-abelian ones are called Hamiltonian groups. The finite ones were classified by Dedekind, and the classification extended to all groups by Baer. The non-abelian ones are a direct product of the quaternion group of order 8, an elementary abelian 2-group, and a periodic abelian group of odd order (or all of whose elements have odd order). Periodic abelian groups all of whose elements have odd order can be quite complicated, but the finite ones are direct products of cyclic groups.

Your example does not have the property that all of its subgroups are normal when $n \geq 4$. The subgroup generated by $x_1 x_2 x_3$ is not normal, since $(x_1 x_2 x_3)^{x_4} = (a x_1)(a x_2)(a x_3) = a\, x_1 x_2 x_3$, but $x_1 x_2 x_3$ has order 2. For $n = 3$, your group is $Q_8 \times 2$, and so is Hamiltonian.

The cyclic group of order 6 and the direct product $Q_8 \times 3$ are two groups of non-(prime power) order with every subgroup normal.
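As a quick brute-force confirmation that $Q_8$ is Hamiltonian, here is a small Python sketch (the encoding of $Q_8$ and all helper names are mine, purely for illustration):

```python
from itertools import combinations

# Q8 = {±1, ±i, ±j, ±k}, encoded as (sign, unit); unit multiplication table.
UNIT = {('1','1'):(1,'1'), ('1','i'):(1,'i'), ('1','j'):(1,'j'), ('1','k'):(1,'k'),
        ('i','1'):(1,'i'), ('j','1'):(1,'j'), ('k','1'):(1,'k'),
        ('i','i'):(-1,'1'), ('j','j'):(-1,'1'), ('k','k'):(-1,'1'),
        ('i','j'):(1,'k'), ('j','i'):(-1,'k'),
        ('j','k'):(1,'i'), ('k','j'):(-1,'i'),
        ('k','i'):(1,'j'), ('i','k'):(-1,'j')}

def mul(x, y):
    s, u = UNIT[(x[1], y[1])]
    return (x[0] * y[0] * s, u)

def inv(x):  # i^-1 = -i etc.; ±1 are their own inverses
    return x if x[1] == '1' else (-x[0], x[1])

G = [(s, u) for s in (1, -1) for u in '1ijk']

def generated(S):  # subgroup generated by S, by naive closure
    H = set(S) | {(1, '1')}
    while True:
        H2 = H | {mul(a, b) for a in H for b in H}
        if H2 == H:
            return frozenset(H)
        H = H2

# every subgroup of Q8 is generated by at most 2 elements
subgroups = {generated(c) for r in range(3) for c in combinations(G, r)} | {generated(G)}
normal = lambda H: all(mul(mul(g, h), inv(g)) in H for g in G for h in H)
print(len(subgroups), all(map(normal, subgroups)))  # 6 True
```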
{ "source": [ "https://mathoverflow.net/questions/25307", "https://mathoverflow.net", "https://mathoverflow.net/users/5513/" ] }
25,344
Is there any "well-known" algebraically closed field that is uncountable other than $\mathbb{C}$? The algebraic closure of $\mathbb{C}(X)$ would work, but is it meaningful, i.e. is this field used in some topics? Do you have other examples? Thank you.
The algebraic closure of $\mathbb{F}_p((t))$ is uncountable of characteristic $p$. It comes up naturally in number theory and algebraic geometry. For every characteristic $p \geq 0$ and uncountable cardinal $\kappa$, there is up to isomorphism exactly one algebraically closed field of characteristic $p$ and cardinality $\kappa$. The examples of $\mathbb{C}$ and closures of Laurent series fields as above give you the ones of continuum cardinality and all characteristics. Indeed I do not know any specific reason to consider algebraically closed fields of larger than continuum cardinality.
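The uniqueness assertion is, as far as I recall, due to Steinitz: an algebraically closed field $K$ is determined up to isomorphism by its characteristic and its transcendence degree over the prime field $F$, and $$ |K| = \max(\aleph_0, \operatorname{trdeg}_F K) \quad\Longrightarrow\quad \operatorname{trdeg}_F K = \kappa \text{ whenever } |K| = \kappa > \aleph_0, $$ so two algebraically closed fields of the same characteristic and the same uncountable cardinality have transcendence bases of the same cardinality and are therefore isomorphic.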
{ "source": [ "https://mathoverflow.net/questions/25344", "https://mathoverflow.net", "https://mathoverflow.net/users/6187/" ] }
25,360
I remember to have read that the L-function of an elliptic curve, which a priori only converges for $\Re s > \frac{3}{2}$, also converges at $s=1$ provided that the $L$-function satisfies the functional equation. I always thought that this is due to the fact that in this case the L-function is also the L-function of a modular form, and in this case we have better convergence. However, the modular forms which correspond to these curves are cusp forms of weight 2 and so have a priori even worse convergence properties, namely convergence for $\Re s > 2$. So I now wonder whether I remember correctly and whether the claim above is indeed correct. If so, I would like to see a reference for the proof. What is the reason for this fact? Does some non-trivial fact about elliptic curves play a role? It will (almost surely) not hold for arbitrary modular forms or cusp forms, since they always satisfy the functional equation. What do we need? An Euler product?

EDIT: The $L$-function extends to an entire function. But what I am interested in is the original series representation. Is it true that the series representation for the $L$-function of an elliptic curve, which converges a priori for $\Re s > \frac{3}{2}$, is valid also on a bigger half-plane $\Re s > c$ with $c<\frac{3}{2}$?
Wood, see the article by Kumar Murty in Seminar on Fermat's Last Theorem. He shows how the L-series converges (conditionally!) on Re(s) > 5/6, thus in particular at s = 1. You can find the book on Google books and do a search on "5/6" to find the page. OK, I just did that and will tell you: it's on page 15. He proves the theorem for any Dirichlet series converging abs. for Re(s) > 3/2 and having an analytic continuation and suitable functional equation relating values at s and 2-s. (Thus in practice it is a theorem about L-functions of suitable weight 2 modular forms, certainly nothing directly about elliptic curves!) He also says that what he describes is a special case of a more general result, with citation.

Where the Dirichlet series converges, it still represents the L-function that may have been analytically continued to the wider region by some other method, since Dirichlet series are analytic on the half-plane to the right of any point where they converge and moreover at any point where they converge the value is the limit of the function taken along the line to the right of the point (Abel's theorem for power series on discs works for Dirichlet series on right half-planes). So we are assured that if you happen to know the series itself converges somewhere new it still equals the orthodox analytic continuation (whatever that means).

On the other hand, the Euler product has surprises. Goldfeld discovered that if the Euler product for an ell. curve over Q converges at s = 1, in the natural sense of partial Euler products over primes up to x as x goes to $\infty$, and the value of the Euler product is nonzero, then this value is not L(1) but rather is off from this by a factor of $\sqrt{2}$. Of course there was no real input about elliptic curves directly: Goldfeld was assuming the ell. curve was modular (he was writing in the 1980s) and used that right away. It turns out exactly the same thing happens for quadratic Dirichlet L-functions at s = 1/2: if the partial Euler product at s = 1/2 converges to a nonzero value then again you're off by a factor of $\sqrt{2}$ from L(1/2), but in one setting the factor is $\sqrt{2}$ and in the other it's $1/\sqrt{2}$. For non-quadratic Dirichlet L-functions there's no funny business: if the Euler product converges at s = 1/2 to a nonzero value (which, by the way, it always should since nobody expects Dirichlet L-functions to vanish at 1/2) then the value will be L(1/2).

I first heard about Goldfeld's result at a talk by Karl Rubin (he gave the usual heuristic for the BSD conjecture by looking at the Euler product at s = 1 and some wiseguy in the audience asked if it really did converge at s = 1 and Rubin mentioned there was a paper of Goldfeld on that), but when I read Goldfeld's paper I was confused by part of it, so in trying to work it out in the simpler example of Dirichlet L-functions I wound up seeing I could prove the same kind of theorem for any Euler product over a global field having the properties everyone expects it should have. This turns out to be related to properties of symmetric and exterior square Euler products and is morally the same quadratic bias (called Chebyshev's bias) that Sarnak and Rubinstein found when they worked out comparative statistics on the number of primes up to x in different congruence classes mod $m$: classes of squares or non-squares exhibit different fine growth rates compared to one another.
For more details on the partial Euler products, including some numerical examples, see my paper http://www.math.uconn.edu/~kconrad/articles/eulerprod.pdf . By the way, one can definitely observe this $\sqrt{2}$ business happening numerically but we'll never expect to prove it happens since it actually implies the Riemann hypothesis for the relevant L-function. If the product converges to a nonzero value at a point on the critical line it is no real surprise that you could prove the Dirichlet series for the log of the Euler product converges everywhere to the right of the critical line, which implies the Euler product itself converges to a nonzero value everywhere to the right of the critical line, hence the Riemann hypothesis. In fact, as I show in that paper, this (suitable) Euler product convergence at a point on the critical line is actually equivalent to something which at present appears to lie deeper than the Riemann hypothesis but is still plausible. There's no lack of results which imply RH but are themselves false, you see. This probably isn't one of them.

In summary, the Dirichlet series for ell. curve L-functions (over $\mathbf Q$) can provably be shown to converge on a wider region than they are usually said to converge and still equal the L-function there, including s = 1, while the Euler product probably does converge at s = 1 too but if the value is not 0 then it's not going to converge to what you expect... unless you think the Riemann hypothesis is false.
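As a toy numerical illustration of a partial Euler product converging at the edge of absolute convergence (here at $s = 1$, where the limit really is $L(1)$ with no extra factor), one can try the nontrivial character mod 4. A minimal sketch (my code, not taken from the paper):

```python
from math import pi
from sympy import primerange

# chi is the nontrivial Dirichlet character mod 4; L(1, chi) = pi/4.
chi = lambda p: 0 if p == 2 else (1 if p % 4 == 1 else -1)

prod = 1.0
for p in primerange(2, 10**6):
    prod /= 1 - chi(p) / p        # partial Euler product over p < 10^6
print(prod, pi / 4)               # ~0.7853..., close to pi/4
```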
{ "source": [ "https://mathoverflow.net/questions/25360", "https://mathoverflow.net", "https://mathoverflow.net/users/3757/" ] }
25,363
In what way and with what utility is the law of excluded middle usually disposed of in intuitionistic type theory and its descendants? I am thinking here of topos theory and its ilk, namely synthetic differential geometry and the use of topoi in algebraic geometry (this is a more palatable restructuring, perhaps), where free use of these "¬⊨P∨¬P" theories is necessarily everywhere--freely utilized at every turn, one might say. But why and how are such theories first formulated, and what do they look like in the purely logical sense? You will have to forgive me; I began as a student in philosophy (not even that of mathematics), and the law of excluded middle is something that was imbibed with my mother's milk, as it were. This is more of a philosophical issue than a mathematical one, but being the renaissance guys/gals that you all are, I thought that perhaps this could generate some fruitful discussion.
You make a couple of basic mistakes in your question. Perhaps you should correct them and ask again because I am not entirely sure what it is you are asking:

1. Topos theory does not "freely use $P \lor \lnot P$", and neither does synthetic differential geometry. In fact, topos theorists are quite careful about not using the law of excluded middle, while synthetic differential geometry proves the negation of the law of excluded middle.
2. As far as I know, the law of excluded middle is $P \lor \lnot P$, while the law of non-contradiction is $\lnot (P \land \lnot P)$. These two are not equivalent (unless you already believe in the law of excluded middle, in which case the whole discussion is trivial). The principle of non-contradiction is of course intuitionistically valid. So you seem to be confusing two different logical principles.

If I had to guess what you asked, I would say you are wondering why anyone in their right mind would want to be agnostic about the law of excluded middle (intuitionistic logic) or even deny it (synthetic differential geometry). Aren't people who do so just plain crazy?

To understand why someone might work without the law of excluded middle, the best thing is to study their theories. Probably you cannot afford to devote several years of your life to the study of topos theory. For an executive summary of synthetic differential geometry and its interplay with logic I recommend John Bell's texts on synthetic differential geometry, such as this one.

Let me try an analogy. Imagine a mathematician who studies commutative groups and has never heard of the non-commutative ones. One day he meets another mathematician who shows him non-commutative groups. How will the first mathematician react? I imagine he will go through all the usual phases:

1. Denial: these are not groups!
2. Anger: why are you destroying my groups? I hate you!
3. Bargaining: can we at least analyze non-commutative groups in terms of their "commutative representations" (whatever that would mean)?
4. Depression: this is hopeless, I wasted my life studying the wrong groups. I might as well study point-set topology.
5. Acceptance: it's kind of cool that the symmetries of a cube form a group.

I am at stage 5 with regards to intuitionistic logic. Where are you?
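For what it's worth, here is a rough sketch (my own summary, with the usual caveats) of how synthetic differential geometry refutes excluded middle. Let $D = \{d \in R : d^2 = 0\}$; the Kock-Lawvere axiom says every map $g : D \to R$ is uniquely of the form $g(d) = a + b\,d$. If excluded middle held, we could define $g(d) = 1$ when $d = 0$ and $g(d) = 0$ when $d \neq 0$; then $a = g(0) = 1$, and for any $d_0 \neq 0$ in $D$ we would get $0 = 1 + b\,d_0$, hence, multiplying by $d_0$, $$ 0 = d_0 + b\,d_0^2 = d_0, $$ a contradiction (and if instead every $d \in D$ were $0$, the uniqueness of $b$ would fail). So in such a topos the law of excluded middle cannot hold.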
{ "source": [ "https://mathoverflow.net/questions/25363", "https://mathoverflow.net", "https://mathoverflow.net/users/6131/" ] }
25,428
I have heard that polylogarithms are very interesting things. The Wikipedia page shows a lot of interesting identities. These functions are indeed supposed to have caught the attention of Ramanujan. Moreover, they seem to be important in physics for various purposes like Bose-Einstein integrals, which I am not really knowledgeable enough to understand. These are all things I have heard from people after I queried "why polylogarithms are interesting". So this function $$\operatorname{Li}_s (z) = \sum_{k=1}^\infty \frac{z^k}{k^s}$$ is very interesting and has a lot of useful properties as I can see, especially for integral values of $s$.

What is bothering me is the following. Let $f(z) = \sum a_n z^n$ be analytic inside the unit disc, that is, with radius of convergence at least $1$. For simplicity, we assume it is $1$, i.e., $$r = \frac{1}{\limsup_{n\rightarrow\infty}\sqrt[n]{|a_n|}} = 1,$$ implying that inside the unit disc, term-by-term differentiation and integration are possible. Since $\limsup_{n\to\infty} |a_n|^{1/n} = 1$, it is also true that $\limsup_{n\to\infty} |n a_n|^{1/n} = 1$, and similarly $\limsup_{n\to\infty} |a_n/n|^{1/n} = 1$. Now, defining $$f(z,s) = \sum_{n=1}^\infty \frac{a_n z^n}{n^s},$$ it is the case that $f_{2}(z) = f(z,2)$ and, iterating, in general $f_n(z) = f(z, n)$ are analytic inside the unit disc. In fact, uniform bounds hold on compact sets contained in the region $|z| < 1$, $|s| \leq K$ for arbitrary $K$, and so $f(z,s)$ is analytic in the domain $D^\circ \times \mathbb C$, where $z \in D^\circ$, the open unit disc, and $s \in \mathbb C$.

So any analytic function in the unit disc can be extended just like the logarithm is extended to polylogarithms. But there is a rich theory about polylogarithms, and I haven't heard of any such theories about other functions analytic inside the unit disc. So, what makes the polylogarithm so amenable to an extended theory yielding so many results? Why does this not give such an interesting theory for just any other complex function?
The reason why polylogarithms are so important/interesting/ubiquitous is they are the simplest non-trivial examples of analytical functions underlying variations of mixed Hodge structure. This goes back to Beilinson and Deligne.

A variation of mixed Hodge structure is a very sophisticated gadget. You can think of it as

- a nice differential equation (the underlying connexion),
- its solutions (the underlying local system of horizontal sections),
- with a $\mathbb{Q}$-structure,
- some additional data that make the structure very rigid.

Typical examples of VMHS on $X$ come from the cohomology of families of varieties parametrized by $X$. They can be used to encode the interaction between topological and arithmetical properties of $X$.

For example, there is a rank 2 variation of mixed Hodge structure $K \in Ext^1_{VMHS(\mathbb{C}^\times)}(\mathbb{Q},\mathbb{Q}(1))$ over $\mathbb{C}^\times$ known as the Kummer sheaf. The underlying local system $K_{\mathbb{Q}}$ has fiber $H_1(\mathbb{C}^\times,\{1,z\};\mathbb{Q})$ at $z$. The underlying connexion is a trivial vector bundle of rank 2 with nilpotent connexion $$ \nabla = d - \begin{pmatrix} 0 & 0 \\ \frac{dt}{t} & 0 \end{pmatrix} $$ The "periods" are obtained by integrating the coefficients of the matrix over the paths $\gamma \in H_1(\mathbb{C}^\times,\{1,z\};\mathbb{Q})$. So we get the non-trivial period by integrating $\frac{dt}{t}$ over paths $[1,z]$, i.e. we get determinations of $\log(z)$.

Conclusion: we have an object $K$ in $VMHS(\mathbb{C}^\times)$ "categorizing" the classical logarithm function. On the arithmetic side, the transcendence of $\log(z)$ for generic $z$ mirrors the fact that $H_1(\mathbb{C}^\times) = \mathbb{Z}$.

The same can be done for polylogarithm functions. The Logarithmic sheaf is the symmetric algebra $Log := Sym(K)$ (it corresponds to the whole family of $\log^n(z)$, $n\in \mathbb{N}$). The Polylogarithm sheaf is a canonical extension $Pol$ of $\mathbb{Q}$ by the restriction of $Log(1)$ to $\mathbb{P}^1\setminus \{0,1,\infty\}$. Its periods encode the monodromy of the polylogarithm functions in the same way the Kummer sheaf does for the logarithm function. These are the most elementary unipotent variations of mixed Hodge structures. Now we have "categorized" the classical polylogarithm functions.

In fact, this can actually be done on a more fundamental level using only algebraic cycles defined over $\mathbb{Z}$ (this is the motivic story, the variation of mixed Hodge structure being just a realization of the motivic object). This has very interesting arithmetic consequences. For example, specializing to 1, this implies that we have motivic cohomology classes in $H^1(\mathbb{Z},\mathbb{Q}(n))$ whose images under the Hodge regulator correspond to $\zeta(n)$. Using this picture, the period conjecture then implies that the $\zeta(2n+1)$ are algebraically independent over $\mathbb{Q}$. To give you an idea of how powerful this intuition is: we can't even prove $\zeta(5)$ is irrational!

In conclusion, the polylogarithm functions are interesting because they correspond to non-trivial algebraic cycles. This leads to interactions between the analytical properties of the functions, the arithmetic of special values and algebraic geometry. Lots of classical functions should have similar interpretations. For example, there is a similar picture for Euler's Beta and Gamma functions.
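To unpack the Kummer example slightly (my own computation, so caveat lector): a horizontal section $s = (s_1, s_2)$ of $\nabla$ satisfies $ds_1 = 0$ and $ds_2 = s_1 \frac{dt}{t}$, so locally $$ s = \left( c_1,\; c_1 \log t + c_2 \right), $$ and continuing a section once around $0$ replaces $\log t$ by $\log t + 2\pi i$, i.e. the monodromy is the unipotent matrix $\begin{pmatrix} 1 & 0 \\ 2\pi i & 1 \end{pmatrix}$, while the period over a path from $1$ to $z$ is $\int_1^z \frac{dt}{t} = \log z$, recovering the multivalued logarithm.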
{ "source": [ "https://mathoverflow.net/questions/25428", "https://mathoverflow.net", "https://mathoverflow.net/users/6031/" ] }
25,472
In the simplest cases, the fundamental group serves as a measure of the number of 2-dimensional "holes" in a space. It is interesting to know whether it captures the following type of "hole". This example may look pathological, but one must understand where one gets stuck when one tries to study pathological spaces. It helps one in understanding where exactly all the extra nice conditions are used, and hopefully this type of approach will help in minimizing the number of false beliefs we unconsciously have.

The line with the double origin is the following space. In the union $\{0\} \times \mathbb R \cup \{ 1 \} \times \mathbb R$, impose the equivalence relation $(0, x) \sim (1, x)$ iff $x \neq 0$. This space is locally like the real line, i.e. a $1$-manifold in everything except the Hausdorff condition. It is connected, path-connected and semilocally simply connected. Just the sort of nice space you study in the theory of fundamental groups and covering spaces, except for the (significant) pathology that it has one inconvenient extra point violating the Hausdorffness.

It seems that the usual methods of computing the fundamental group do not work for this space. Van Kampen's theorem in particular does not apply. Also the covering spaces are weird, just like this space. In fact this space would have been a covering of $\mathbb R$, were it not for the condition that the preimage of every point is a disjoint union of open sets.

So, what if we try to compute the fundamental group of this space? I would be satisfied to know whether it is trivial or not. Say, is the collection of homotopy classes of loops based at $1$ nontrivial? It is possible to speculate that a certain loop based at $1$ which passes through both the origins on this special line, passing through the "upper" origin, i.e. $(1,0)$, on the way left, and through the lower origin, i.e. $(0,0)$, on the way back, ought not to be homotopic to the constant loop based at $1$. But how to go about proving/disproving this statement?
The earlier answers showing that the fundamental group of this space is infinite cyclic by determining its universal cover or by constructing a fiber bundle over it with contractible fibers are very nice, but it's also possible to compute $\pi_1(X)$ by applying the classical van Kampen theorem not to $X$ itself but to the mapping cylinder of a map from the circle to $X$ representing the supposed generator of $\pi_1(X)$, namely the map that sends the upper and lower halves of $S^1$ to arcs in $X$ from $+1$ to $-1$ in the two copies of $\mathbb R$ in $X$. Decompose the mapping cylinder into the two open sets $A$ and $B$ which are the complements of the two "bad" points in $X$ (regarding $X$ as a subspace of the mapping cylinder). Taking a little care with the point-set topology, one can check that $A$, $B$ and $A\cap B$ each deformation retract onto the circle end of the mapping cylinder. Then van Kampen's theorem says that $\pi_1$ of the mapping cylinder, which is isomorphic to $\pi_1(X)$, is isomorphic to the free product of two copies of $\mathbb Z$ amalgamated into a single $\mathbb Z$. An interesting fact about $X$ is that it is not homotopy equivalent to a CW complex, or in fact to any Hausdorff space. For if one had a homotopy equivalence $f:X \to Y$ with $Y$ Hausdorff then $f$ would send the two bad points of $X$ to the same point of $Y$ so $f$ would factor through the quotient space of $X$ obtained by identifying these two bad points. This quotient is just $\mathbb R$ and the quotient map $X \to \mathbb R$ is not injective on $\pi_1$, so the same is true for $f$ and $f$ can't be a homotopy equivalence.
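In symbols, all three inclusions induce isomorphisms on $\pi_1$, so the van Kampen pushout is $$ \pi_1(X) \cong \mathbb Z *_{\mathbb Z} \mathbb Z \cong \mathbb Z, $$ generated by the class of the circle end of the cylinder, i.e. by the loop through both origins (this is just a compact restatement of the computation above).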
{ "source": [ "https://mathoverflow.net/questions/25472", "https://mathoverflow.net", "https://mathoverflow.net/users/6031/" ] }
25,513
This is related to another question in which it is proved that Zariski open sets are dense in the analytic topology. But it is intuitive that something more is true. Namely, the Zariski-closed sets are the sets where some polynomials vanish, and consideration of a few examples in $\mathbb R^n$, where they are of Lebesgue measure $0$, suggests strongly that the Zariski-closed sets (except the whole affine space) are of measure $0$ in $\mathbb C^n$ as well. This should be quite simple; but I am unable to prove it due to inexperience in measure theory. The nice thing about proving this is that once this is done, then we are able to claim safely that so-and-so statement is true almost everywhere, if it is true on a Zariski-open set.

So, in a more measure-theoretic formulation: Let $X$ be a set in $\mathbb C^n$ contained in the zero locus of some collection of polynomials. How to show that $X$ is of measure $0$?

In fact my feeling is that more should be true, i.e., we can replace polynomials by analytic functions at least, and get the same result.
If a real analytic function $f:U\subset\mathbb R^n\to\mathbb R^m$ is zero on a set $Z$ of positive measure (and $U$ is connected), then $f\equiv 0$. Indeed, almost every point of $Z$ is a density point . It is easy to see that the derivative at a density point is zero. Therefore $df=0$ a.e. on $Z$. Applying the same argument to $df$, conclude that the second derivative vanishes a.e. on $Z$ too. And so on. Thus $f$ has zero Taylor expansion at some point, hence $f\equiv 0$.
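To spell out the first step (a sketch of the standard argument): if $x$ is a density point of $Z$ and $f$ vanishes on $Z$, take points $z_n \in Z$ with $z_n \to x$; then $$ 0 = f(z_n) = f(x) + df_x(z_n - x) + o(|z_n - x|) \quad\Longrightarrow\quad df_x\!\left(\frac{z_n - x}{|z_n - x|}\right) \to 0, $$ and since at a density point any cone of directions around $x$ meets $Z$ arbitrarily close to $x$, the unit vectors $\frac{z_n-x}{|z_n-x|}$ can be chosen to approach any prescribed direction, forcing $df_x = 0$.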
{ "source": [ "https://mathoverflow.net/questions/25513", "https://mathoverflow.net", "https://mathoverflow.net/users/6031/" ] }
25,592
While trying to get some perspective on the extensive literature about highest weight modules for affine Lie algebras relative to "level" (work by Feigin, E. Frenkel, Gaitsgory, Kac, ....), I run into the notion of dual Coxeter number but am uncertain about the extent of its influence in Lie theory. The term was probably introduced by Victor Kac and is often denoted by $h^\vee$ (sometimes by $g$ or another symbol). It occurs for example in the 1990 third edition of his book Infinite Dimensional Lie Algebras in Section 6.1. (The first edition goes back to 1983.) It also occurs a lot in the mathematical physics literature related to representations of affine Lie algebras. And it occurs in a 2009 paper by D. Panyushev in Advances which studies the structure of complex simple Lie algebras. Where in Lie theory does the dual Coxeter number play a natural role (and why)? A further question is whether it would be more accurate historically to refer instead to the Kac number of a root system , since the definition of $h^\vee$ is not directly related to the work of Coxeter in group theory. BACKGROUND: To recall briefly where the Coxeter number $h$ comes from, it was introduced by Coxeter and later given its current name (by Bourbaki?). Coxeter was studying a finite reflection group $W$ acting irreducibly on a real Euclidean space of dimension $n$: Weyl groups of root systems belonging to simple complex Lie algebras (types $A--G$), these being crystallographic, together with the remaining dihedral groups and two others. The product of the $n$ canonical generators of $W$ has order $h$, well-defined because the Coxeter graph is a tree. Its eigenvalues are powers of a primitive $h$th root of 1 (the "exponents"): $1=m_1 \leq \dots \leq m_n = h-1$. Moreover, the $d_i = m_i+1$ are the degrees of fundamental polynomial invariants of $W$ and have product $|W|$. In the Weyl group case, where there is an irreducible root system (but types $B_n, C_n$ yield the same $W$), work of several people including Kostant led to the fact that $h$ is 1 plus the sum of coefficients of the highest root relative to a basis of simple roots. On the other hand, the dual Coxeter number is 1 plus the sum of coefficients of the highest short root of the dual root system. For respective types $B_n, C_n, F_4, G_2$, the resulting values of $h, h^\vee$ are then $2n, 2n, 12, 6$ and $2n-1, n+1, 9,4$. This gets pretty far from Coxeter's framework. One place where $h^\vee$ clearly plays an essential role is in the study of a highest weight module for an affine Lie algebra, where the canonical central element $c$ acts by a scalar (the level or central charge ). The "critical" level $-h^\vee$ has been especially challenging, since here the theory seems to resemble the characteristic $p$ situation rather than the classical one.
The dual Coxeter number comes up naturally as a normalization factor for invariant bilinear forms on the Lie algebra: according to Kac's book which you quote, $2h^{\vee}$ is the ratio between the Killing form and the "minimal" bilinear form (the trace form for $sl_n$), which has the property that the square of the length of the maximal root is 2. This minimal form corresponds to the minimal affine Kac-Moody group corresponding to the Lie algebra, or equivalently to the minimal line bundle on the affine Grassmannian or the moduli spaces of G-bundles on curves (the generator of the Picard group).

As a result, the $-2h^\vee$-th power of the basic ample line bundle on the Grassmannian or moduli space of bundles (which is associated to the level given by the Killing form) ends up being identified with the canonical line bundle, and in particular the $-h^\vee$-th power is a square-root of the canonical bundle, or spin structure. (This is analogous to the role of $\rho$ for the finite flag variety.) Thus the critical level arises naturally geometrically -- it corresponds to half-forms on the Grassmannian/moduli spaces.

The basic yoga of quantization (or of unitary/normalized induction of representations) tells us that classical symmetries are "shifted" by half-forms - cf $\rho$-shifts in representation theory. Likewise the critical shift for affine algebras... for example the Feigin-Frenkel theorem is the analogue of the Harish-Chandra isomorphism: the center of the enveloping algebra at critical level (rather than level 0 as one might naively guess, ignoring half-form twists) is isomorphic to the algebra of invariant polynomials on the (dual of the) Lie algebra. (This can be said more canonically keeping track of symmetries of change of variable, magic word being "opers", but let's ignore that).

One can say all this very naturally algebraically (without resorting to geometry) -- $\rho$ can be described as the square root of the modular character of the Borel subalgebra (up to sign or something, not being very careful here). The critical level has a similar description in terms of the positive half (Taylor series part) of the Kac-Moody algebra - if you try to define the modular character of this half you are quickly led to semiinfinite determinants etc, i.e. to the previous geometric story, and so one can assert that the critical level "is" half the modular character of the positive loop subalgebra.
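A concrete check (standard, though one should mind normalization conventions): for $\mathfrak{sl}_n$ the trace form $(X,Y) \mapsto \operatorname{tr}(XY)$ gives the maximal root squared length $2$, the Killing form is $\kappa(X,Y) = 2n \operatorname{tr}(XY)$, and indeed $h^\vee(\mathfrak{sl}_n) = n$, so $$ \kappa = 2h^\vee \cdot (\text{minimal form}), $$ consistent with the ratio quoted from Kac's book.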
{ "source": [ "https://mathoverflow.net/questions/25592", "https://mathoverflow.net", "https://mathoverflow.net/users/4231/" ] }
25,603
This concerns a number of basic questions about ample line bundles on a variety $X$ and maps to projective space. I have searched related questions and not found answers, but I apologize if I missed something. I'll work with schemes of finite type over a field $k$ for simplicity.

Background

A quasi-coherent sheaf $F$ on a $k$-scheme $X$ is globally generated if the natural map $H^{0}(X,F)\otimes \mathcal{O}_{X} \rightarrow F$ is a surjection of sheaves. Basically, this says that for any point on $X$, there is at least one section of $F$ that doesn't vanish at that point, so there are enough sections of $F$ to see all the points of $X$. (EDIT: As pointed out in the comments below, this last sentence does not describe a situation equivalent to being globally generated. Perhaps it is better to say that globally generated means that for each point $x \in X$, if $F$ has rank $r$ at $x$, then there are at least $r$ sections of $F$ that are linearly independent over $x$.)

The notion of globally generated is especially useful when $F=L$ is a line bundle on $X$. If $V$ is a finite dimensional subspace of $H^{0}(X,L)$ such that $V \otimes \mathcal{O} \rightarrow L$ is surjective, then we get a morphism $\varphi_{V}:X \rightarrow \mathbb{P}(V)$ by the universal property of the projective space $\mathbb{P}(V)$ of hyperplanes in $V$. Essentially, given a point $x \in X$, we look at the fibre over $x$ of the surjection $V \otimes \mathcal{O} \rightarrow L$ to get a quotient $V \rightarrow L_{x}$. The kernel is a hyperplane in $V$, and the morphism $\varphi_{V}$ sends $x$ to that hyperplane as a point in $\mathbb{P}(V)$.

So how to build globally generated sheaves? A line bundle $L$ is called ample if for every coherent sheaf $F$, $F \otimes L^{\otimes n}$ is globally generated for all large $n$. The smallest $n$ after which this becomes true can depend on $F$. Finally, a line bundle is called very ample if $L$ is globally generated and $\varphi_{V}$ is an embedding for some subspace of sections $V$. There are various properties of and criteria for ample line bundles, which can be found in Hartshorne, for example. What we need for the below questions are the following: $L$ is ample if and only if $L^{m}$ is ample for some $m$ if and only if $L^{n}$ is very ample for some $n$; if $L$ is ample, eventually $L^{k}$ will have sections, be globally generated, be very ample, and have no higher cohomology.

Questions

1. Are there simple examples (say on a curve or surface) of line bundles that are globally generated but not ample, of ample line bundles with no sections, of ample line bundles that are globally generated but not very ample, and of very ample line bundles with higher cohomology?

2. Given an ample line bundle $L$, what is the minimal number $k$ so that I can be sure $L^{k}$ has sections, is globally generated, is very ample? Is $k$ related to the dimension of $X$?

3. If $L$ is very ample, I can use it to embed $X$ into some projective space. Then by projecting from points off of $X \subset \mathbb{P}^{N}$, I can eventually get a finite morphism $X \rightarrow \mathbb{P}^{d}$, where $d$ is the dimension of $X$. But what if I just know that $L$ is ample and globally generated? Can I also use it to get such a finite morphism to $\mathbb{P}^{d}$?
1. Are there simple examples (say on a curve or surface) of line bundles that are globally generated but not ample, of ample line bundles with no sections, of ample line bundles that are globally generated but not very ample, and of very ample line bundles with higher cohomology?

On a curve of genus $g$, a general divisor of degree $d \le g-1$ has no sections. Of course, if $d>0$ then it is ample. $K_X$ on a hyperelliptic curve is globally generated but not very ample. Look at $L=\mathcal O(1)$ on a plane curve of degree $d$. Then from $$ 0\to \mathcal O_{\mathbb P^2}(1-d) \to \mathcal O_{\mathbb P^2}(1) \to \mathcal O_C(1)\to 0$$ you see that $H^1(\mathcal O_C(1))=H^2(\mathcal O_{\mathbb P^2}(1-d))$, which is dual to $H^0(\mathcal O_{\mathbb P^2}(d-4))$. So that's nonzero for $d\ge4$.

2. Given an ample line bundle $L$, what is the minimal number $k$ so that I can be sure $L^k$ has sections, is globally generated, is very ample? Is $k$ related to the dimension of $X$?

Again, just look at a divisor of degree 1 on a curve of genus $g$. You need $k\ge g$, so you see that there is no bound in terms of the dimension. It turns out that the right question to ask is about the adjoint line bundles $\omega_X\otimes L^k$ ($K_X+kL$ written additively). Then the basic guiding conjecture is by Fujita, which says that for $k\ge \dim X+1$ the sheaf is globally generated, and for $k\ge \dim X+2$ it is very ample. This is proved for $\dim X=2$, and proved with slightly worse bounds for $\dim X=3$. For higher dimensions the best result is due to Angehrn-Siu, who gave a quadratic bound on $k$ instead of linear. There are some small improvements for some special cases.

3. If $L$ is very ample, I can use it to embed $X$ into some projective space. Then by projecting from points off of $X\subset \mathbb P^N$, I can eventually get a finite morphism $X\to \mathbb{P}^d$, where $d$ is the dimension of $X$. But what if I just know that $L$ is ample and globally generated? Can I also use it to get such a finite morphism to $\mathbb P^d$?

But of course $L$ gives a morphism $f$, and it follows that $f$ is finite: $f$ contracts no curve so $f$ is quasifinite, and $f$ is projective (since $X$ was assumed to be projective). And quasifinite + proper = finite.
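To expand the genus-$g$ example in point 2 slightly (a routine Riemann-Roch computation): for a divisor $D$ of degree $1$, $$ h^0(kD) - h^1(kD) = k + 1 - g, $$ and for a general line bundle of degree $k \le g-1$ one has $h^0 = 0$ (the effective classes form a proper subvariety of $\operatorname{Pic}^k$), so $kD$ acquires sections only once $k$ is on the order of $g$; no bound depending only on $\dim X = 1$ can work.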
{ "source": [ "https://mathoverflow.net/questions/25603", "https://mathoverflow.net", "https://mathoverflow.net/users/6254/" ] }
25,630
From A Mathematician’s Apology, G. H. Hardy, 1940: "I had better say something here about this question of age, since it is particularly important for mathematicians. No mathematician should ever allow himself to forget that mathematics, more than any other art or science, is a young man's game. ... I do not know an instance of a major mathematical advance initiated by a man past fifty. If a man of mature age loses interest in and abandons mathematics, the loss is not likely to be very serious either for mathematics or for himself." Have matters improved for the elderly mathematician? Please answer with major discoveries made by mathematicians past 50.
Roger Apéry was 62 when he proved the irrationality of $\zeta(3)$.
{ "source": [ "https://mathoverflow.net/questions/25630", "https://mathoverflow.net", "https://mathoverflow.net/users/1320/" ] }
25,723
This should be straightforward; I'm sorry if it's too much so. Can someone point me to a reference which computes the Dolbeault cohomology of the Hopf manifolds? Motivation: I'd like to work through a concrete example of the Hodge decomposition theorem failing for non-Kähler manifolds. The textbook I have handy (Griffiths & Harris) doesn't treat this, and the obvious Google search was unhelpful.
Even though this question has an accepted answer, the answers so far are not complete or explicit. I kept working on this question, because I have been curious for a long time about the structure of Dolbeault complexes. First of all, the Frölicher spectral sequence does not directly reveal all of the non-Hodge information in the Dolbeault complex of a non-Kähler complex manifold. I learned from Mikhail Khovanov that in the category of bounded double complexes over a field, every object is isomorphic to a unique direct sum of indecomposable objects. Moreover, the indecomposable double complexes can be classified as squares, dots, and zigzags. Here is an example of each type of indecomposable, with the convention that omitted cells and arrows are 0: $$\begin{matrix} \mathbb{C} & \rightarrow & \mathbb{C} \\\\ \uparrow & & \uparrow \\\\ \mathbb{C} & \rightarrow & \mathbb{C} \end{matrix}\qquad\qquad \mathbb{C}\qquad\qquad \begin{matrix} \mathbb{C} & \rightarrow & \mathbb{C} \\\\ & & \uparrow \\\\ & & \mathbb{C} & \rightarrow & \mathbb{C} \end{matrix} $$ This is actually a standard result about an $A_\infty$ quiver algebra with alternating arrows. To be precise about the Hodge theorem and the Frölicher spectral sequence, the $\partial\bar{\partial}$ lemma says exactly that there are no zigzags (other than length 1, which are then dots), which is then equivalent to the statement that horizontal cohomology is isomorphic to vertical cohomology. The squares are projective objects and do not contribute to any cohomology theory. The odd-length zigzags contribute to the total de Rham cohomology, while the even-length zigzags do not. Meanwhile there are two Frölicher spectral sequences, each of which detects half of the even zigzags. The Frölicher spectral sequences are insensitive to the odd zigzags, but you can still say that a Dolbeault complex is non-Hodge if it has odd zigzags, even though each Frölicher spectral sequence degenerates at $E_1$ if there are no even zigzags. The information of all of the zigzags, extracted by discarding only the squares, has been defined in the literature as "Aeppli cohomology". (Serre duality implies, sort-of indirectly, that the Aeppli cohomology of a compact manifold is self-dual; I would be interested to see a direct derivation of this fact.) Now, the Hopf manifolds. The most standard round Hopf manifold of complex dimension $n$ has an important group action of $U(n) \times S^1$. (For those who aren't familiar with the terminology, the standard Hopf manifold is $(\mathbb{C}^n\setminus 0)/\Gamma_r$, where $\Gamma_r$ is generated by rescaling by a real constant $r > 1$.) There is one aspect of my calculation that for me is a conjecture: That a connected, compact Lie group acts trivially on all of the zigzags of a compact manifold. This is true for odd zigzags since the de Rham cohomology has an invariant integer lattice, but I do not have an argument for even zigzags. But let's suppose that it is so. The invariant part of the Dolbeault complex is algebraically generated by these differential forms of degree $(1,0)$, $(0,1)$, and $(1,1)$: $$\alpha = \frac{\bar{z} \cdot dz}{z \cdot \bar{z}} \qquad \bar{\alpha} = \frac{z \cdot d\bar{z}}{z \cdot \bar{z}} \qquad \omega = \frac{dz \cdot d\bar{z}}{z \cdot \bar{z}}.$$ (I use $z$ and $dz$ as a vector of functions and a vector of 1-forms, so that I can take dot products. I'm leaving out the wedge product symbol.) 
Then I calculated the following: $$\partial \alpha = 0 \qquad \bar{\partial} \alpha = \alpha \bar{\alpha} - \omega \qquad \partial \omega = - \alpha \omega \qquad \omega^n = n\alpha \bar{\alpha} \omega^{n-1}.$$ A basis for the invariant part of the Dolbeault complex is given by $\omega^k$, $\alpha \omega^k$, $\bar{\alpha} \omega^k$, and $\alpha \bar{\alpha} \omega^k$ for $0 \le k \le n-1$. The Poincaré series of the invariant complex is a matrix like this one: $$\begin{matrix} 0 & 0 & 0 & 1 & 1 \\\\ 0 & 0 & 1 & 2 & 1 \\\\ 0 & 1 & 2 & 1 & 0 \\\\ 1 & 2 & 1 & 0 & 0 \\\\ 1 & 1 & 0 & 0 & 0 \end{matrix}.$$ After calculating the differential, my answer is that this decomposes as a dot at each corner, a zigzag of length 3 next to each corner, and a progression of squares. The $n=1$ case is an exception in which the Hopf manifold obviously is Kähler (it's a torus, and all complex curves are Kähler). In this case the invariant complex decomposes as four dots.
{ "source": [ "https://mathoverflow.net/questions/25723", "https://mathoverflow.net", "https://mathoverflow.net/users/2819/" ] }
25,778
Is there a simple numerical procedure for obtaining the derivative (with respect to $x$) of the pseudo-inverse of a matrix $A(x)$, without approximations (except for the usual floating-point limitations)? The matrix $\frac{\mathrm{d}}{\mathrm{d}x}A(x)$ is supposed to be known. In other words, are there analytical formulas that could be numerically evaluated so as to obtain the derivative of the pseudo-inverse? or, what formula would generalize $$ \frac{\mathrm{d}}{\mathrm{d}x}A^{-1}(x) = -A^{-1}(x) \left(\frac{\mathrm{d}}{\mathrm{d}x}A(x)\right) A^{-1}(x) $$ for the pseudo-inverse? I would be happy if this were possible, as this would allow my uncertainty calculation programming package to precisely calculate uncertainties on the pseudo-inverse of matrices whose elements have uncertainties (currently, a numerical differentiation is performed, which may yield imprecise results in some cases). Any idea would be much appreciated!
The answer is known since at least 1973: a formula for the derivative of the pseudo-inverse of a matrix $A(x)$ of constant rank can be found in

The Differentiation of Pseudo-Inverses and Nonlinear Least Squares Problems Whose Variables Separate. Author(s): G. H. Golub and V. Pereyra. Source: SIAM Journal on Numerical Analysis, Vol. 10, No. 2 (Apr., 1973), pp. 413-432

References 29 and 30 in the above paper contain an earlier formula that can also be used to obtain the same result (papers by P.A. Wedin).

The case of non-constant rank is simple: the pseudo-inverse is not continuous in this case (see Corollary 3.5 in On the Perturbation of Pseudo-Inverses, Projections and Linear Least Squares Problems. G. W. Stewart. SIAM Review, Vol. 19, No. 4. (Oct., 1977), pp. 634-662).

Here is the formula for a matrix of constant rank (equation (4.12) in the Golub paper): $$ \frac{\mathrm d}{\mathrm d x} A^+(x) = -A^+ \left(\frac{\mathrm d}{\mathrm d x} A\right) A^+ + A^+ (A^+)^T \left(\frac{\mathrm d}{\mathrm d x} A^T\right) (1-A A^+) + (1-A^+ A) \left(\frac{\mathrm d}{\mathrm d x} A^T\right) (A^+)^T A^+ $$ (for a real matrix). For complex matrices, the above formula works if Hermitian conjugates are used instead of transposes. I don't have any reference on this (anyone?), but this is verified by all the numerical tests I did (with matrices of various shapes and ranks).
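For anyone who wants to try this, here is a minimal NumPy sketch of the constant-rank formula, checked against a central finite difference (variable names and the test matrix are of course mine):

```python
import numpy as np

def dpinv(A, dA):
    """d/dx of pinv(A(x)) given A and dA = dA/dx, assuming locally constant
    rank (Golub & Pereyra 1973, eq. (4.12)); transposes become conjugate
    transposes in the complex case."""
    P = np.linalg.pinv(A)
    Im, In = np.eye(A.shape[0]), np.eye(A.shape[1])
    return (-P @ dA @ P
            + P @ P.T @ dA.T @ (Im - A @ P)
            + (In - P @ A) @ dA.T @ P.T @ P)

rng = np.random.default_rng(0)
B, C = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
A = lambda x: B + x * C              # generic pencil: full rank near x0
x0, h = 0.7, 1e-6
fd = (np.linalg.pinv(A(x0 + h)) - np.linalg.pinv(A(x0 - h))) / (2 * h)
print(np.max(np.abs(fd - dpinv(A(x0), C))))   # tiny: the two agree
```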
{ "source": [ "https://mathoverflow.net/questions/25778", "https://mathoverflow.net", "https://mathoverflow.net/users/3810/" ] }
25,794
Let $\chi$ be a Dirichlet character and $L(1,\chi)$ the associated L-function evaluated at $s=1$. What would be the 'shortest' proof of the non-vanishing of $L(1,\chi)$?

Background: The non-vanishing of $L(1,\chi)$ plays an essential role in the proof of Dirichlet's theorem on primes in arithmetic progressions. In his "Introduction to analytic number theory", T. M. Apostol gives an elementary proof of the above fact, estimating various sums in a few lemmas in the context of a proof of the aforementioned Dirichlet theorem. While his approach has the advantage of being self-contained and not requiring much of a background, it is quite lengthy. In their "Analytic number theory", H. Iwaniec and E. Kowalski remark that in Dirichlet's original proof the non-vanishing of $L(1,\chi)$ for real Dirichlet characters is a simple consequence of Dirichlet's class number formula. However, in both approaches it is necessary to distinguish between real and complex Dirichlet characters. Hence my two "sub"-questions:

1) Is there a proof that avoids the distinction between the complex and real case?

2) Are there in general other proof strategies for $L(1,\chi)\neq 0$ that can be considered shorter and/or more elegant than the two mentioned above?
I like the proof by Paul Monsky: 'Simplifying the Proof of Dirichlet's Theorem', American Mathematical Monthly, Vol. 100 (1993), pp. 861-862. Naturally this does maintain the distinction between real and complex as whatever you do, the complex case always seems to be easier as one would have two vanishing L-functions for the price of one. I incorporated this argument into my note on a "real-variable" proof of Dirichlet's theorem at http://secamlocal.ex.ac.uk/people/staff/rjchapma/etc/dirichlet.pdf .

There are proofs, notably in Serre's Course in Arithmetic, which claim to treat the real and complex case on the same footing. But this is an illusion; it pretends the complex case is as hard as the real case. Serre considers the product $\zeta_m(s)=\prod L(s,\chi)$ where $\chi$ ranges over the modulo $m$ Dirichlet characters. If one of the $L(1,\chi)$ vanishes then $\zeta_m(s)$ is bounded as $s\to 1$ and Serre obtains a contradiction by using Landau's theorem on the abscissa of convergence of a positive Dirichlet series. But all this subtlety is only needed for the case of real $\chi$. In the non-real case, at least two of the $L(1,\chi)$ vanish so that $\zeta_m(s)\to0$ as $s\to1$. But it's elementary that $\zeta_m(s)>1$ for real $s>1$ and the contradiction is immediate, without the need of Landau's subtle result.

Added (25/5/2010) I like the Ingham/Bateman method. It is superficially elegant, but as I said in the comments, it makes the complex case as hard as the real. Again it reduces to using Landau's result or a choice of other trickery. What one should look at is not $\zeta(s)^2L(s,\chi)L(s,\overline\chi)$ but $$G(s)=\zeta(s)^6 L(s,\chi)^4 L(s,\overline\chi)^4 L(s,\chi^2)L(s,\overline\chi^2)$$ (cf. the famous proof of nonvanishing of $\zeta$ on $s=1+it$ by Mertens). Unless $\chi$ is real-valued this function will vanish at $s=1$ if $L(1,\chi)=0$. But one shows that $\log G(s)$ is a Dirichlet series with nonnegative coefficients and we get an immediate contradiction without any subtle lemmas. Again it shows that the real case is the hard one. For real $\chi$ then $G(s)=[\zeta(s)L(s,\chi)]^8$ while Ingham/Bateman would have us consider $[\zeta(s)L(s,\chi)]^2$. This leads us to the realization that for real $\chi$ we should look at $\zeta(s)L(s,\chi)$ which is the Dedekind zeta function of a quadratic field. (So if one is minded to prove the nonvanishing by showing that a Dedekind zeta function has a pole, quadratic fields suffice, and one needn't bother with cyclotomic fields.)

But we can do more. Let $t$ be real and consider $$G_t(s)= \zeta(s)^6 L(s+it,\chi)^4 L(s-it,\overline\chi)^4 L(s+2it,\chi^2)L(s-2it,\overline\chi^2).$$ Unless both $t=0$ and $\chi$ is real, if $L(1+it,\chi)=0$ one gets a contradiction just as before. So the nonvanishing of any $L(s,\chi)$ on the line $1+it$ is easy except at $1$ for real $\chi$. This special case really does seem to be deeper!

Added (26/5/2010) The argument I outlined with the function $G_t(s)$ is well-known to extend to a proof for a zero-free region of the L-function to the left of the line $1+it$. At least it does unless $t=0$ and $\chi$ is real-valued. In that case it breaks down and we get the phenomenon of the Siegel zero; the possible zero of $L(s,\chi)$ for $\chi$ real-valued, just to the left of $1$ on the real line. So the extra difficulty of proving $L(1,\chi)\ne0$ for $\chi$ real-valued is linked to the persistent intractability of showing that Siegel zeroes never exist.
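For the record, the positivity computation behind the claim about $\log G(s)$ is the familiar Mertens trick (writing $\chi(p) = e^{i\theta_p}$ for $p \nmid m$; ramified primes contribute the manifestly nonnegative $6/(k\,p^{ks})$): $$ \log G(s) = \sum_{p \nmid m} \sum_{k \ge 1} \frac{6 + 8\cos(k\theta_p) + 2\cos(2k\theta_p)}{k\, p^{ks}}, \qquad 6 + 8\cos\phi + 2\cos 2\phi = 4(1+\cos\phi)^2 \ge 0. $$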
{ "source": [ "https://mathoverflow.net/questions/25794", "https://mathoverflow.net", "https://mathoverflow.net/users/1849/" ] }
25,983
I once heard a joke (not a great one I'll admit...) about higher dimensional thinking that went as follows-

An engineer, a physicist, and a mathematician are discussing how to visualise four dimensions:

Engineer: I never really get it.

Physicist: Oh it's really easy, just imagine three dimensional space over a time- that adds your fourth dimension.

Mathematician: No, it's way easier than that; just imagine $\mathbb{R}^n$ then set n equal to 4.

Now, if you've ever come across anything manifestly four dimensional (as opposed to 3+1 dimensional) like the linking of 2 spheres, it becomes fairly clear that what the physicist is saying doesn't cut the mustard- or, at least, needs some more elaboration as it stands. The mathematician's answer is abstruse by the design of the joke but, modulo a few charts and bounding 3-folds, it certainly seems to be the dominant perspective- at least in published papers. The situation brings to mind the old Von Neumann quote about "...you never understand things. You just get used to them", and perhaps that really is the best you can do in this situation. But one of the principal reasons for my interest in geometry is the additional intuition one gets from being in a space a little like one's own and it would be a shame to lose that so sharply, in the way that the engineer does, in going beyond 3 dimensions.

What I am looking for, from this uncountably wise and better experienced than I community of mathematicians, is a crutch- anything that makes it easier to see, for example, the linking of spheres- be that simple tricks, useful articles or esoteric (but, hopefully, ultimately useful) motivational diagrams: anything to help me be better than the engineer. Community wiki rules apply- one idea per post etc.
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as $\mathbb R^n$, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly. For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images. One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all). It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures). For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable. More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
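One can even quantify the "mass near the boundary" remark with a few lines of Python: since volume scales like $r^n$, the fraction of the unit $n$-ball lying within distance $\varepsilon$ of its boundary is $1 - (1-\varepsilon)^n$.

```python
eps = 0.01
for n in (2, 10, 100, 1000):
    print(n, 1 - (1 - eps) ** n)
# 2 0.0199, 10 0.0956, 100 0.6340, 1000 0.99996...: almost all of the
# volume of a high-dimensional ball lies within 1% of the boundary.
```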
{ "source": [ "https://mathoverflow.net/questions/25983", "https://mathoverflow.net", "https://mathoverflow.net/users/5869/" ] }
25,993
Consider a compact subset $K$ of $\mathbb R^n$ which is the closure of its interior. Does its boundary $\partial K$ have zero Lebesgue measure? I guess the answer is no, because the topological assumption is invariant w.r.t. homeomorphism, in contrast to being of zero Lebesgue measure. But I don't see any simple counterexample.
Construct a Cantor set of positive measure in much the same way as you make the `standard' Cantor set but make sure the lengths of the deleted intervals add up to 1/2, say. Let $U$ be the union of the intervals that are deleted at the even-numbered steps and let $V$ be the union of the intervals deleted at the odd-numbered steps. The Cantor set is the common boundary of $U$ and $V$; their closures are as required.
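A concrete instance is the Smith-Volterra-Cantor set: at step $k$ one deletes the open middle interval of length $4^{-k}$ from each of the $2^{k-1}$ remaining intervals, so the deleted lengths sum to $\sum_k 2^{k-1}\, 4^{-k} = 1/2$. A small sketch of the construction (exact arithmetic via fractions; names are mine):

```python
from fractions import Fraction

def svc(depth):
    """Closed intervals left after `depth` steps of the
    Smith-Volterra-Cantor construction on [0, 1]."""
    pieces = [(Fraction(0), Fraction(1))]
    for k in range(1, depth + 1):
        gap = Fraction(1, 4 ** k)
        pieces = [iv for a, b in pieces
                  for iv in ((a, (a + b) / 2 - gap / 2),
                             ((a + b) / 2 + gap / 2, b))]
    return pieces

for d in range(1, 8):
    print(d, float(sum(b - a for a, b in svc(d))))
# 0.75, 0.625, 0.5625, ... -> 1/2: the limit set has measure 1/2.
```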
{ "source": [ "https://mathoverflow.net/questions/25993", "https://mathoverflow.net", "https://mathoverflow.net/users/6129/" ] }
26,001
I asked myself which spaces have the property that $X^2$ is homeomorphic to $X$. I started to look at some examples like $\mathbb{N}^2 \cong \mathbb{N}$, $\mathbb{R}^2\ncong \mathbb{R}$, $C^2\cong C$ (for the Cantor set $C$). And then I got stuck when I considered the rationals. So the question is: Is $\mathbb{Q}^2$ homeomorphic to $\mathbb{Q}$?
Yes, Sierpinski proved that every countable metric space without isolated points is homeomorphic to the rationals: http://at.yorku.ca/p/a/c/a/25.htm . An amusing consequence of Sierpinski's theorem is that $\mathbb{Q}$ is homeomorphic to $\mathbb{Q}$. Of course here one $\mathbb{Q}$ has the order topology, and the other has the $p$-adic topology (for your favourite prime $p$) :-)
{ "source": [ "https://mathoverflow.net/questions/26001", "https://mathoverflow.net", "https://mathoverflow.net/users/3969/" ] }
26,083
For example, Wikipedia states that etale cohomology was "introduced by Grothendieck in order to prove the Weil conjectures". Why are cohomologies and other topological ideas so helpful in understanding arithmetic questions?
Why are topological ideas so important in arithmetic? In some sense KConrad is of course spot on, but let me offer a completely different kind of answer.

Why are complex functions of one variable so important in arithmetic? (Zeta function, L-functions, Riemann hypothesis, Birch--Swinnerton-Dyer, modular forms, theta series, Eisenstein series...)

Why is geometry so important in arithmetic? (Faltings' theorem, applications of algebraic geometry, low-dimensional arithmetic of varieties (elliptic curves etc))

Why is K-theory so important in arithmetic? (Bloch-Kato, Voevodsky...)

Why is logic so important in arithmetic? (Julia Robinson, Matiyasevich, Ax-Kochen and then Hrushovski proving that "if it's true in char p for suff large p then it's true in char 0" in the context of some very deep statements)

Why is functional analysis so important in arithmetic? ($L^2$ functions on $\Gamma\backslash G$ with $G$ a semisimple Lie group being related to automorphic forms and hence to number theory via Langlands, with crucial analytic tools like the trace formula)

Why are dynamical systems so important in arithmetic? (3x+1 problem, work of Deninger, or of Lind/Ward and their school)

Here's the answer: it's because arithmetic is a very mature subject---it has been around for literally thousands of years, and because it has been around so long, there is far more of a chance that someone will come along with an insight relating [insert arbitrary area of pure mathematics here] with arithmetic. So in some sense it's a historical fluke. If we were all born with continuum-many fingers which we could move only in real-analytic ways, and we didn't discover the positive integers until much later on, then arithmetic would be all new and we'd be waiting for Gauss, and real analysis would be as old as the hills, and people would be asking "why is [insert arbitrary thing] so important in real analysis"?

[PS (1) yeah I know, I was being facetious at the end, and (2) yeah I know, my list at the top is woefully incomplete]
{ "source": [ "https://mathoverflow.net/questions/26083", "https://mathoverflow.net", "https://mathoverflow.net/users/4692/" ] }
26,112
What is a good example of a fact about the moduli space of some object telling us something useful about a specific one of the objects? I am currently learning about moduli spaces (in the context of the moduli space of elliptic curves). While moduli spaces do seem to be fascinating objects in themselves, I am after examples in which facts about a moduli space tell us something interesting about the specific objects that they parametrise. For example, does the study of $\mathbb RP^n$ tell us anything we don't already know about some given line through the origin (say, the $x_1$-axis) in $\mathbb R^{n+1}$?
The easiest example I can think of is the natural incidence correspondence between lines in $\mathbb{P}^3$ and the parameter space of cubic surfaces. This can be used to show that every cubic surface contains a line; from this it follows easily that every smooth cubic surface contains exactly $27$ lines.

Another example is the moduli space of stable maps constructed by Kontsevich; this parametrizes certain maps from curves to (to stick with a simple case) $\mathbb{P}^2$. It can be used to answer the following question: given $3d-1$ points in $\mathbb{P}^2$ in general position, compute the number $N_d$ of rational curves of degree $d$ passing through these points. It turns out that the values $N_d$ satisfy a certain recursive relation which allows you to compute all these numbers starting from the obvious $N_1 = 1$ (through $2$ points passes exactly one line). You can find the formula at the entry Kontsevich's formula on Rigorous trivialities; it yields for instance $N_2 = 1$ and $N_3 = 12$.

Yet another example, again more elementary, is the following. The Grassmannian $G = \operatorname{Gr}(1, \mathbb{P}^3)$ parametrizes lines in $\mathbb{P}^3$. The computation of the cohomology of $G$ allows you to compute the number of lines which are incident to $4$ fixed lines in general position (it turns out this number is $2$).
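Since the recursion is fun to run, here is a short Python sketch (the formula as I recall it from the linked entry; any transcription slip is mine):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def N(d):
    """Rational plane curves of degree d through 3d-1 general points,
    via Kontsevich's recursion, seeded with N(1) = 1."""
    if d == 1:
        return 1
    return sum(N(a) * N(d - a)
               * (a**2 * (d - a)**2 * comb(3*d - 4, 3*a - 2)
                  - a**3 * (d - a) * comb(3*d - 4, 3*a - 1))
               for a in range(1, d))

print([N(d) for d in range(1, 6)])   # [1, 1, 12, 620, 87304]
```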
{ "source": [ "https://mathoverflow.net/questions/26112", "https://mathoverflow.net", "https://mathoverflow.net/users/6345/" ] }
26,220
I hope this question is not so elementary that it'll get me banned... In mathematics we see a lot of impredicativity. Examples of definitions involving impredicativity include: the subgroup/ideal generated by a set, the closure/interior of a set (in topology), the topology generated by a family of sets, the connected/path-connected component of a point, the $\sigma$-algebra generated by a family... And of course, the least upper bound property of the real numbers. Impredicativity floods mathematics, but there are people who don't like it. I think type theory was developed due to the paradoxical and impredicative nature of "the set of all sets that don't contain themselves". I'm very ignorant on this, and from what I read type theory is hopelessly complicated to work with, so I ask the people who favor predicativism: people have fiddled with mathematics for all those years and no one has ever found a contradiction (I mean in ZFC), so is it worth all that effort?
Yes, it is worth the effort. A predicative version of an impredicative construction is typically more explicit and informative than the impredicative one. For example, consider the construction of the subgroup $\langle S \rangle$ of a group $G$ generated by a set $S$:

Impredicative: $\langle S \rangle$ is the intersection of all subgroups of $G$ which contain $S$.

Predicative: $\langle S \rangle$ consists of all finite combinations of elements of $S$ and their inverses, i.e., a typical element is $x_1 x_2 \cdots x_n$ where $x_i \in S \cup S^{-1}$.

This can be quite useful if you want to compute with groups (i.e., with a computer), as you will definitely prefer the second description, which tells you exactly how the elements of the subgroup can be represented.

Many examples of impredicative constructions are special cases of the following theorem.

Theorem (Knaster and Tarski): A monotone map on a complete lattice has a least fixed point above every point.

To take two of your examples:

Subgroup generated by a set: the complete lattice is the powerset $P(G)$ of the group $G$ in question, and the map $f : P(G) \to P(G)$ takes $S \subseteq G$ to $f(S) = S \cup S^{-1} \cup S \cdot S$.

$\sigma$-algebra generated by a family of subsets: exercise.

There are two standard ways of proving the Knaster–Tarski theorem, one impredicative and one predicative. These exemplify the two general approaches of getting to desired objects "impredicatively from above" and "predicatively from below".

The impredicative proof goes as follows: call a point $x$ a prefixed point if $f(x) \leq x$. Consider the set $S$ of all prefixed points above a given point $y$ (it is not empty, as it contains the top of the lattice). The least fixed point above $y$ is the infimum $x = \inf S$ (exercise).

The predicative proof goes as follows: iterate $f$ starting with a given point $y$ to construct an increasing sequence
$$y, f(y), f^2(y), \ldots, f^\omega(y), \ldots, f^\alpha(y), \ldots$$
where we have to iterate through ordinals until we're blue in the face. The iteration stops eventually, and that's the least fixed point above $y$.

Of course, in such generality the predicative proof is hardly better than the impredicative one, because we have replaced one non-description with another. But in particular cases we might know something about $f$. For example, we might know that it preserves suprema of countable chains, as we do in the example of a subgroup generated by a set, in which case the iteration stops at $\omega$ (a concrete sketch of this iteration appears below).

Your third example, namely the connected component of a point, can be dealt with as well, though I am not sure the result is any better than the impredicative construction:

Connected components: the connected component of a point is a maximal connected subset containing it. You are probably thinking of the construction that says "just take the union of all connected subsets that contain the point". We could instead try the following: define $x \sim y$ to mean that for all continuous $f : X \to 2$, $f(x) = f(y)$. The connected components of $X$ are the equivalence classes of $\sim$, so the connected component of a point is just its equivalence class. This is not entirely satisfactory, as it replaces one bad description with another. Can we be more explicit? What if we have a nice basis for the space?
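To make the "iterate $f$" recipe concrete in the subgroup example: here is a minimal Python sketch that iterates the monotone map $f(S) = S \cup S^{-1} \cup S \cdot S$ until it reaches a fixed point. The choice of ambient group ($\mathbb{Z}_n$ under addition) and all the names are my own illustrative assumptions, not anything from the answer above.

```python
# Compute the subgroup of Z_n generated by a non-empty set `gens` by
# iterating f(S) = S + (-S) + (S + S) until the increasing chain
# stabilizes; in a finite group this happens after finitely many steps.
def generated_subgroup(gens, n):
    S = set(gens)
    while True:
        bigger = S | {(-x) % n for x in S} | {(x + y) % n for x in S for y in S}
        if bigger == S:  # f(S) = S: the least fixed point above gens
            return S
        S = bigger

print(sorted(generated_subgroup({4}, 10)))  # [0, 2, 4, 6, 8], the subgroup of Z_10 generated by 4
```

The finite case is a toy shadow of the general statement: here $f$ preserves suprema of countable chains, so the transfinite iteration already stops at $\omega$.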
Sometimes you have to reformulate the whole subject to get away from built-in impredicativity (and it is still worth doing, because it gives computational meaning to theorems which are quite non-computational in the impredicative setting):

Closure/interior of a set: under the classical formulation of topology you can sometimes get away with a predicative construction, for example if you can reduce your construction to the manipulation of a countable topological basis; e.g., the interior of $S \subseteq \mathbb{R}$ is the union of all open intervals with rational endpoints that are contained in $S$. There are general formulations of topology, such as formal topology and Abstract Stone Duality, which avoid impredicative constructions altogether.

Lastly, you mention Dedekind completeness of the reals. I am not sure this is impredicative. The supremum of a non-empty bounded family of left-sided Dedekind cuts is simply their union. What is impredicative about taking the union of a family of cuts? (A small sketch of this appears below, after the postscript.)

Addendum: Note that in the typical case it is the construction, i.e., an existence proof, which is predicative or impredicative, not the definition. For example, the group generated by a set is defined as the least subgroup containing the generators, which has nothing to do with predicativity/impredicativity.

P.S. You need a better MO username.
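Returning to the Dedekind-cut remark: here is a minimal sketch of why the supremum is "just a union", assuming we represent a left-sided cut by its membership predicate on the rationals. All names are illustrative, and the `any` below terminates only for finite families; for infinite ones it is a semantic description rather than an algorithm.

```python
from fractions import Fraction

def cut_of(x):
    """The left cut of a rational x: all rationals strictly below x."""
    return lambda q: q < x

def supremum(cuts):
    """Supremum of a family of cuts: the pointwise union, i.e. a rational
    is below the sup iff it is below some member of the family."""
    return lambda q: any(c(q) for c in cuts)

sup = supremum([cut_of(Fraction(1, 2)), cut_of(Fraction(2, 3))])
print(sup(Fraction(3, 5)), sup(Fraction(7, 10)))  # True False, since the sup is 2/3
```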
{ "source": [ "https://mathoverflow.net/questions/26220", "https://mathoverflow.net", "https://mathoverflow.net/users/6361/" ] }