46,883
For the purposes of this question let a "physical intuition" be an intuition that is derived from your everyday experience of physical reality. Your intuitions about how the spin of a ball affects its subsequent bounce would be considered physical intuitions. Using physical intuitions to solve a math problem means that you are able to translate the math problem into a physical situation where you have physical intuitions, and are able to use these intuitions to solve the problem. One possible example of this is using your intuitions about fluid flow to solve problems concerning what happens in certain types of vector fields. Besides being interesting in its own right, I hope that this list will give people an idea of how and when people can solve math problems in this way. (In its essence, the question is about leveraging personal experience for solving math problems. Using physical intuitions to solve math problems is a special case.) These two MO questions are relevant. The first is aimed at identifying when using physical intuitions goes wrong, while the second seems to be an epistemological question about how using physical intuition is unsatisfactory.
The first and second laws of thermodynamics allow you to recover the inequality between the arithmetic and the geometric means: Bring together $n$ identical heat reservoirs with heat capacity $C$ and temperatures $T_1,\ldots,T_n$ and allow them to reach a final temperature $T$. The first law of thermodynamics tells you that $T$ is the arithmetic mean of the $T_i$. The second law of thermodynamics demands the non-negativity of the change in entropy, which is $$ Cn \log(T/G) $$ where $G$ is the geometric mean. It follows that $T \ge G$, with equality exactly when all the $T_i$ coincide. I believe this argument was first made by P.T. Landsberg (no relation!).
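The argument is easy to sanity-check numerically. The sketch below (with arbitrary illustrative temperatures and heat capacity $C=1$) computes the entropy change $Cn\log(T/G)$ directly and confirms it is non-negative:

```python
import math
import random

def entropy_change(temps, C=1.0):
    """Total entropy change when n reservoirs of heat capacity C equilibrate.
    Each reservoir contributes C*log(T/T_i); summing gives C*n*log(T/G)."""
    n = len(temps)
    T = sum(temps) / n                 # final temperature: the arithmetic mean
    G = math.prod(temps) ** (1.0 / n)  # geometric mean
    return C * n * math.log(T / G)

random.seed(0)
temps = [random.uniform(100, 400) for _ in range(5)]  # arbitrary positive temperatures
assert entropy_change(temps) >= 0                     # second law, i.e. AM >= GM
assert abs(entropy_change([300.0] * 5)) < 1e-9        # equality when all T_i agree
```

Any choice of positive temperatures gives a non-negative result, with (numerical) zero exactly when all reservoirs start at the same temperature.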
{ "source": [ "https://mathoverflow.net/questions/46883", "https://mathoverflow.net", "https://mathoverflow.net/users/10134/" ] }
46,900
The complex irreps of a finite group come in three types: self-dual by a symmetric form, self-dual by a symplectic form, and not self-dual at all. In the first two cases the character is real-valued; in the third it necessarily takes some non-real value. The cases can be distinguished by the value of the Schur indicator $\frac{1}{|G|} \sum_g \chi(g^2)$, necessarily $1$, $-1$, or $0$. They correspond to the cases that the representation is the complexification of a real one, the forgetful version of a quaternionic representation, or neither. A conjugacy class $[g]$ is called "real" if all characters take real values on it, or equivalently, if $g\sim g^{-1}$. I vaguely recall the number of real conjugacy classes being equal to the number of real irreps. Do I remember that correctly? Can one split the real conjugacy classes into two types, "symmetric" vs. "symplectic"? With #1 now granted, a criterion for a "good answer" would be that the number of symmetric real conjugacy classes should equal the number of symmetrically self-dual irreps. (I don't have any application in mind; it's just bothered me off and on for a long time.)
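As a concrete illustration of the Schur indicator (just a sanity check, not part of the question): the 2-dimensional irrep of the quaternion group $Q_8$ is the standard example of a symplectic (quaternionic) one, and the indicator can be computed by brute force from an explicit matrix realization. A minimal stdlib-only sketch:

```python
# 2x2 complex matrices as ((a, b), (c, d)) suffice to realize the Q8 irrep.
def mul(A, B):
    return tuple(tuple(sum(A[r][m] * B[m][c] for m in range(2)) for c in range(2))
                 for r in range(2))

def trace(A):
    return A[0][0] + A[1][1]

def scale(s, A):
    return tuple(tuple(s * x for x in row) for row in A)

one = ((1, 0), (0, 1))
i = ((1j, 0), (0, -1j))
j = ((0, 1), (-1, 0))
k = mul(i, j)
Q8 = [scale(s, g) for g in (one, i, j, k) for s in (1, -1)]  # {±1, ±i, ±j, ±k}

# Frobenius-Schur indicator: (1/|G|) * sum over g in G of chi(g^2), chi = trace.
indicator = sum(trace(mul(g, g)) for g in Q8).real / len(Q8)
print(indicator)  # -1.0: symplectic (quaternionic) type
```

The six elements $\pm i,\pm j,\pm k$ square to $-1$ (trace $-2$) and $\pm 1$ square to $1$ (trace $2$), giving $(4-12)/8=-1$, as expected for a quaternionic representation.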
It's a great question! Disappointingly, I think the answer to (2) is No: The only restriction on a `good' division into "symmetric" vs. "symplectic" conjugacy classes that I can see is that it should be intrinsic, depending only on $G$ and the class up to isomorphism. (You don't just want to split the self-dual classes randomly, right?) This means that the division must be preserved by all outer automorphisms of $G$, and this is what I'll use to construct a counterexample. Let me know if I got this wrong. The group My $G$ is $C_{11}\rtimes (C_4\times C_2\times C_2)$, with $C_2\times C_2\times C_2$ acting trivially on $C_{11}=\langle x\rangle$, and the generator of $C_4$ acting by $x\mapsto x^{-1}$. In Magma, this is G:=SmallGroup(176,35), and it has a huge group of outer automorphisms $C_5\times((C_2\times C_2\times C_2)\rtimes S_4)$, Magma's OuterFPGroup(AutomorphismGroup(G)). The reason for $C_5$ is that $x$ is only conjugate to $x,x^{-1}$ in $C_{11}\triangleleft G$, but there are 5 pairs of possible generators like that in $C_{11}$, indistinguishable from each other; the other factor of $Out\ G$ is $Aut(C_2\times C_2\times C_4)$, all of these guys commute with the action. The representations The group has 28 orthogonal, 20 symplectic and 8 non-self-dual representations, according to Magma. The conjugacy classes There are 1+7+8+5+35=56 conjugacy classes, of elements of order 1,2,4,11,22 respectively. The elements of order 4 are (clearly) not conjugate to their inverses, so these 8 classes account for the 8 non-self-dual representations. We are interested in splitting the other 48 classes into two groups, 28 'orthogonal' and 20 'symplectic'. The catch The problem is that the way $Out\ G$ acts on the 35 classes of elements of order 22, it has two orbits according to Magma - one with 30 classes and one with 5.
(I think I can see that these numbers must be multiples of 5 without Magma's help, but I don't see the full splitting at the moment; I can insert the Magma code if you guys want it.) Anyway, if I am correct, these 30 classes are indistinguishable from one another, so they must all be either 'orthogonal' or 'symplectic'. So a canonical splitting into 28 and 20 cannot exist. Edit : However, as Jack Schmidt points out (see comment below), it is possible to predict the number of symplectic representations for this group!
{ "source": [ "https://mathoverflow.net/questions/46900", "https://mathoverflow.net", "https://mathoverflow.net/users/391/" ] }
46,907
I attended a talk given by W. Hugh Woodin regarding the Ultimate L axiom and I wanted to verify my current understanding of what the search for this axiom means. I find it to be a fascinating topic but the details are so far beyond my grasp. Given the language of set theory, one can write down a multitude of first-order sentences. By Gödel's Incompleteness Theorem, it is known that from the ZFC axioms one can only derive the truth-values of a (small) fragment of these sentences. In the past, it was hoped (by Gödel, among others) that the Large Cardinal Axiom hierarchy would provide an infinite ladder of axioms of increasing strength such that any first-order sentence in the language of set theory would be either provable or refutable from ZFC + LCA for some suitable LCA. However, it is now known (?) that the LCA hierarchy (pictorially represented as the vertical spine of the set-theoretic universe V) is not enough to settle all such questions. In particular, there is an additional horizontal "degree of freedom" due to Cohen forcing: for instance, when it comes to CH, it is known (or merely believed?) that both CH and ~CH are consistent with the LCA hierarchy. Now, let a "completion of ZFC" be an assignment of truth-values to every first-order sentence in the language of set theory, such that a sentence is true whenever ZFC proves that sentence; moreover, for the other sentences (i.e. those which are undecidable in ZFC) the assignment of truth-values must be consistent. My understanding of Ultimate L is that it picks out a unique completion of ZFC as being the "correct" one; that is, even though Cohen forcing allows us to have models (and therefore completions) of both ZFC + CH and also of ZFC + ~CH, Ultimate L eliminates the horizontal ambiguity and provides us with a unique completion of ZFC in which the truth-values of first-order sentences only depend on the vertical LCA hierarchy. Is my understanding correct?
And how do we know that there are (infinitely) many different completions of ZFC in the first place? Could it be that there is no way to consistently assign truth-values to all first-order sentences, i.e. that no completion exists? Also, how would we know that Ultimate L + LCA picks out a unique completion (as opposed to a class of completions)? And would it be a valid completion (does consistency of ZFC + Ultimate L follow from Con ZFC)? I would appreciate answers to any of the above questions, as I can't find anything on this topic in the literature. Thank you!
I am not sure which statement you heard as the "Ultimate $L$ axiom," but I will assume it is the following version: There is a proper class of Woodin cardinals, and for all sentences $\varphi$ that hold in $V$, there is a universally Baire set $A\subseteq{\mathbb R}$ such that, letting $\theta=\Theta^{L(A,{\mathbb R})}$, we have that $HOD^{L(A,{\mathbb R})}\cap V_\theta\models\varphi$. (At least, this is the version of the axiom that was stated during Woodin's plenary talk at the 2010 ICM, which should be accessible from this link . See also the slides for this talk --Thanks to John Stillwell for the link.) I do not think you will find much about it in the current literature, but Woodin has written a long manuscript ("Suitable extender models") that should probably provide us with the standard reference once it is published. As stated, this is really an infinite list of axioms (one for each $\varphi$). The statement is very technical, and it may be a bit difficult to see what its connection is with Woodin's program of searching for nice inner models for supercompactness. (That was the topic of his recent series of talks at Luminy; I wrote notes on them and they can be found here .) Keeping the discussion at an informal level (which makes what follows not entirely correct), what is going on is the following: Gödel defined $L$, the constructible universe. It is an inner model of set theory, and it can be analyzed in great detail. In a sense (guided by specific technical results), we feel there is only one model $L$, although of course by the incompleteness theorems we cannot expect to prove all its properties within any particular formal framework. Think of the natural numbers for an analogue: Although no formal theory can prove all their properties, most mathematicians would agree that there is only one "true" set of natural numbers (up to isomorphism). 
This "completeness" of $L$ is a very desirable feature of a model, but we feel $L$ is too far from the actual universe of sets, in that no significant large cardinals can belong to it. The inner model program attempts to build $L$-like models that allow the presence of large cardinals and therefore are closer to what we could think of as the "true universe of sets"; again, the goal is to build certain canonical inner models that are unique in a sense (similar to the uniqueness of ${\mathbb N}$ or of $L$), and that (if there are "traces" of large cardinals in the universe $V$) contain large cardinals. The program has been very successful, but progress is slow. One of the key reasons for this slow development is that the models that are obtained very precisely correspond to specific large cardinals, so that, for example, $L[\mu]$, the canonical $L$-like model for one measurable cardinal, does not allow the existence of even two measurable cardinals ($L$ itself does not even allow one). Currently, the inner model program reaches far beyond a measurable, but far below a supercompact cardinal. Woodin began an approach with the goal of studying the coarse structure of the inner models for supercompactness. This would be the first step towards the construction of the corresponding $L$-like models. (The second step requires the introduction of so-called fine-structural considerations, and it is traditionally significantly more elaborate than the coarse step.) The results reported in the talks I linked to above indicate that, if the construction of this model is successful, we will actually have built the "ultimate version of $L$", in that the model we would obtain not only accommodates a supercompact cardinal but, in essence, all large cardinals of the universe. If we succeed in building such a model, then it makes sense to ask how far it is from the actual universe of sets. 
A reasonable position (which Woodin seems to be advocating) is that it makes no sense to distinguish between two theories of sets if each one can interpret the other, because then anything that can be accomplished with one can just as well be accomplished with the other, and differences in presentation would just be linguistic rather than mathematical. One could also argue that of two theories, if one interprets the other but not vice versa, then the "richer" one would be preferable. Of course, one would have to argue for reasons why one would consider the richer theory "true" to begin with. This is a multiverse view of set theory (different in details from other multiverse approaches, such as Hamkins's ) and rather different from the traditional view of a distinguished true universe. Our current understanding of set theory gives us great confidence in the large cardinal hierarchy. $\mathsf{ZFC}$ is incomplete, and so is any theory we can describe. However, there seems to be a linear ordering of strengthenings of $\mathsf{ZFC}$, provided by the large cardinal axioms. Moreover, this is not an arbitrary ordering, but in fact most extensions of $\mathsf{ZFC}$ that have been studied are mutually interpretable with an extension of $\mathsf{ZFC}$ by large cardinals (and those for which this is not known are expected to follow the same pattern, our current ignorance being solely a consequence of the present state of the inner model program). So, for example, we can begin with the $L$-like model for, say, a Woodin cardinal, and obtain from it a model of a certain fragment of determinacy while, beginning with this amount of determinacy, we can proceed to build the inner model for a Woodin cardinal. Semantically, we are explaining how to pass from a model of one theory to a model of the other. But we can also describe the process as establishing the mutual interpretability of both theories. 
Of course, if we begin with the $L$-like model for two Woodin cardinals, we can still interpret the other theory just as before, but that theory may not be strong enough to recover the model with two Woodin cardinals. From this point of view, a reasonable "ultimate theory" of the universe of sets would be obtained if we can describe "ultimate $L$" and provide evidence that any extension of $\mathsf{ZFC}$ attainable by the means we can currently foresee would be interpretable from the theory of "ultimate $L$". The ultimate $L$ list of axioms is designed to accomplish precisely this result. Part of the point is that we expect $L$-like models to cohere with one another in a certain sense, so we can order them. We, in fact, expect that this order can be traced to the complexity of certain iteration strategies which, ultimately, can be described by sets of reals. Our current understanding suggests that these sets of reals ought to be universally Baire . Finally, we expect that the models of the form $$HOD^{L(A,{\mathbb R})}\cap V_\theta$$ as above, are $L$-like models, and that these are all the models we need to consider. The fact that when $A=\emptyset$ we indeed obtain an $L$-like model in the presence of large cardinals, is a significant result of Steel, and it can be generalized as far as our current techniques allow. The $\Omega$-conjecture, formulated by Woodin a few years ago, would be ultimately responsible for the $HOD^{L(A,{\mathbb R})}\cap V_\theta$ models being all the $L$-like models we need. (Though I do not quite see that formally the "ultimate $L$" list of axioms supersedes the $\Omega$-conjecture). Also, if there is a nice $L$-like model for a supercompact, then the results mentioned earlier suggest we have coherence for all these Hod-models. The theory of the universe that "ultimate $L$" provides us with is essentially the theory of a very rich $L$-like model. 
It will not be a complete theory, by the incompleteness theorems, but any theory $T$ whose consistency we can establish by, say, forcing from large cardinals would be interpretable from it, so "ultimate $L$" is all we need, in a sense, to study $T$. Similarly, only adding large cardinal axioms would give us a stronger theory (but then, this strengthening would be immediately absorbed into the "ultimate $L$" framework). It is in this sense that Woodin says that the "axiom" would give us a complete picture of the universe of sets. It would also be reasonable to say that this is the "correct" way of going about completing $\mathsf{ZFC}$, since any extension can be interpreted from this one. [Note I am not advocating for the correctness of Woodin's viewpoint, or saying that it is my own. I feel I do not understand many of the technical issues at the moment to make a strong stance. As others, I am awaiting the release of the "suitable extender models" manuscript. Let me close with the disclaimer that, in case the technical details in what I have mentioned are incorrect, the mistakes are mine.] Edit : (Jan. 10, 2011) Here is a link to slides of a talk by John Steel. Both Woodin's slides linked to above, and Steel's are for talks at the Workshop on Set Theory and the Philosophy of Mathematics , held at the University of Pennsylvania, Oct. 15-17, 2010. Hugh's talk was on Friday the 15th, John's was on Sunday. John's slides are a very elegant presentation of the motivations and mathematics behind the formulation of Ultimate $L$. (Jul. 26, 2013) Woodin's paper has appeared, in two parts: W. Hugh Woodin. Suitable extender models I , J. Math. Log., 10 (1-2) , (2010), 101–339. MR2802084 (2012g:03135) , and W. Hugh Woodin. Suitable extender models II: beyond $\omega$-huge , J. Math. Log., 11 (2) , (2011), 115–436. MR2914848 . He is also working on a manuscript covering the beginning of the fine structure theory of these models. I will add a link once it becomes available. 
John Steel has a nice set of slides discussing in some detail the multiverse view mentioned above: Gödel's program, CSLI meeting, Stanford, June 1, 2013. For more on why one may want to accept large cardinals as a standard feature of the universe of sets, see here. Edit (May 17, 2017): W. Hugh Woodin has written a highly accessible survey describing the current state of knowledge regarding Ultimate $L$. For now, see here (I hope to update more substantially if I find the time): MR3632568. Woodin, W. Hugh. In search of Ultimate-L. The 19th Midrasha Mathematicae lectures. Bull. Symb. Log. 23 (2017), no. 1, 1–109.
{ "source": [ "https://mathoverflow.net/questions/46907", "https://mathoverflow.net", "https://mathoverflow.net/users/7154/" ] }
46,934
I would like to show that any Zariski-closed subsemigroup of $SL_n(\mathbb{C})$ is a group. If I understand correctly, this is consequence 1.2.A of http://www.heldermann-verlag.de/jlt/jlt03/BOSLAT.PDF . Is there a more elementary proof? For $SL_2(\mathbb{C})$, the result is quite easy to show directly, or by using the Hilbert basis theorem.
It is quite elementary. Let $S$ be the semigroup in question. Then for any $g \in S$, the sets $g^kS$ for $k=1,2,\dots$ form a decreasing chain of Zariski-closed sets (decreasing because $gS\subseteq S$, and closed because multiplication by $g$ is a homeomorphism), hence the chain has to stabilize. So $g^kS=g^{k+1}S$ for some $k$, and since $g$ is invertible in $SL_n(\mathbb{C})$ this implies $gS=S$. In particular $g=gs$ for some $s\in S$, so the identity $e=s$ lies in $S$, and then $e=gs'$ for some $s'\in S$ gives $g^{-1}=s'\in S$. Hence $S$ is closed with respect to taking inverses, and therefore is a group.
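The same stabilization trick works for a sub-semigroup of any finite group, with finiteness playing the role of the Noetherian condition on closed sets. A small illustrative sketch with a permutation (the choice of 4-cycle is arbitrary): closing $\{g\}$ under composition alone automatically produces the identity and $g^{-1}$.

```python
from itertools import product

def compose(a, b):
    """Composition of permutations given as tuples: (a o b)(i) = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(b)))

def semigroup_closure(gens):
    """Close a set of permutations under composition (no inverses used)."""
    S = set(gens)
    while True:
        new = {compose(a, b) for a, b in product(S, S)} - S
        if not new:
            return S
        S |= new

g = (1, 2, 3, 0)                          # a 4-cycle in S_4 (illustrative choice)
S = semigroup_closure({g})
identity = tuple(range(4))
g_inv = tuple(g.index(i) for i in range(4))
assert identity in S and g_inv in S       # the sub-semigroup is in fact a group
```

Here the closure is just the cyclic group $\{g, g^2, g^3, e\}$, exactly as the descending-chain argument predicts.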
{ "source": [ "https://mathoverflow.net/questions/46934", "https://mathoverflow.net", "https://mathoverflow.net/users/408/" ] }
46,970
Recently, I learnt in my analysis class the proof of the uncountability of the reals via the Nested Interval Theorem (Wayback Machine). At first, I was excited to see a variant proof (as it did not use the diagonal argument explicitly). However, as time passed, I began to see that the proof was just the old one veiled under new terminology. So, up to now I believe that any proof of the uncountability of the reals must use Cantor's diagonal argument. Is my belief justified? Thank you.
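For reference, the diagonal construction itself is short enough to run on finite truncations (a sketch; using only the digits 4 and 5 sidesteps the ambiguity of trailing 9s): given the first $n$ digit rows of a purported enumeration, it outputs a row differing from the $k$-th one at the $k$-th digit.

```python
def diagonalize(digit_rows):
    """Return a digit row differing from the k-th input row at position k."""
    return [5 if row[k] != 5 else 4 for k, row in enumerate(digit_rows)]

rows = [[3, 1, 4, 1], [2, 7, 1, 8], [5, 7, 7, 2], [1, 6, 1, 8]]
d = diagonalize(rows)
assert all(d[k] != rows[k][k] for k in range(len(rows)))
print(d)  # [5, 5, 5, 5] for this input: no listed row equals the diagonal row
```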
Mathematics isn't yet ready to prove results of the form, "Every proof of Theorem T must use Argument A." Think closely about how you might try to prove something like that. You would need to set up some plausible system for mathematics in which Cantor's diagonal argument is blocked and the reals are countable. Nobody has any idea how to do that. The best you can hope for is to look at each proof on a case-by-case basis and decide, subjectively, whether it is "essentially the diagonal argument in disguise." If you're lucky, you'll run into one that your intuition tells you is a fundamentally different proof, and that will settle the question to your satisfaction. But if that doesn't happen, then the most you'll be able to say is that every known proof seems to you to be the same. As explained above, you won't be able to conclude definitively that every possible argument must use diagonalization. ADDENDUM (August 2020). Normann and Sanders have a very interesting paper that sheds new light on the uncountability of $\mathbb R$. In particular they study two specific formulations of the uncountability of $\mathbb R$: $\mathsf{NIN}$: For any $Y:[0,1] \to \mathbb{N}$, there exist $x,y \in [0,1]$ such that $x\ne_{\mathbb{R}} y$ and $Y(x) =_{\mathbb{N}} Y(y)$. $\mathsf{NBI}$: For any $Y:[0,1] \to \mathbb{N}$, either there exist $x,y \in [0,1]$ such that $x\ne_{\mathbb{R}} y$ and $Y(x) =_{\mathbb{N}} Y(y)$, or there exists $N\in\mathbb{N}$ such that $(\forall x\in [0,1])(Y(x) \ne N)$. One of their results is that a system called ${\mathsf Z}_2^\omega$ does not prove $\mathsf{NIN}$. Their model of $\neg\mathsf{NIN}$ can therefore be interpreted as a situation where the reals are countable! Nevertheless we are still far from showing that Cantor's diagonal argument is needed to prove that the reals are uncountable.
A further caveat is that Normann and Sanders argue that the unprovability of $\mathsf{NIN}$ in ${\mathsf Z}_2^\omega$—which might at first sight suggest that $\mathsf{NIN}$ is a strong axiom—is an artificial result, and that the proper framework for studying $\mathsf{NIN}$ and $\mathsf{NBI}$ is what they call a "non-normal scale," in which $\mathsf{NIN}$ and $\mathsf{NBI}$ are very weak. In particular their paper gives lots of examples of statements that imply $\mathsf{NIN}$ and $\mathsf{NBI}$. I suspect, though, that you'll probably feel that the proofs of those other statements smuggle in Cantor's diagonal argument one way or another. ADDENDUM (December 2022). I just listened to an amazing talk by Andrej Bauer, reporting on joint work with James Hanson. If you start listening around 14:53, you'll see how, in the context of intuitionistic logic, one can formulate precisely the question of whether there is a proof of the uncountability of the reals that doesn't use diagonalization. Bauer and Hanson don't answer this question, but they construct something they call a "parameterized realizability topos" in which the Dedekind reals are countable. In particular, this shows that higher-order intuitionistic logic (in which one cannot formulate the usual diagonalization argument) cannot show the reals are uncountable. Now, you could still justifiably claim that this whole line of research does not really address the original question, which I presume tacitly assumes classical logic; nevertheless, this still comes closer than anything else I've seen.
{ "source": [ "https://mathoverflow.net/questions/46970", "https://mathoverflow.net", "https://mathoverflow.net/users/5627/" ] }
46,986
Let me start by recalling two constructions of topological spaces with such an exotic combination of properties: 1) The elements are the non-zero integers; a base of the topology is given by the (infinite) arithmetic progressions whose first term and difference are coprime. 2) Take $\mathbb{R}^{\infty}\setminus \{0\}$ with the product topology and factor by the relation $x\sim y \Leftrightarrow x=ty$ for some $t>0$ (the infinite-dimensional sphere). Then consider only the points with rational coordinates, all but finitely many of them vanishing. The first question is whether these two examples are homeomorphic or somehow related. The second is a historical one. I've heard that the first example of such a space is due to P. S. Urysohn. What was his example?
First let us fix the terminology. The space (1) is known in General Topology as the Golomb space. More precisely, the Golomb space $\mathbb G$ is the set $\mathbb N$ of positive integers, endowed with the topology generated by the base consisting of arithmetic progressions $a+b\mathbb N_0$ where $a,b$ are relatively prime natural numbers and $\mathbb N_0=\{0\}\cup\mathbb N$. Let us call the space (2) the rational projective space and denote it by $\mathbb QP^\infty$. Both spaces $\mathbb G$ and $\mathbb QP^\infty$ are countable, connected and Hausdorff but they are not homeomorphic. A topological property distinguishing these spaces will be called the oo-regularity. Definition. A topological space $X$ is called oo-regular if for any non-empty disjoint open sets $U,V\subset X$ the subspace $X\setminus(\bar U\cap\bar V)$ of $X$ is regular. Theorem. The rational projective space $\mathbb QP^\infty$ is oo-regular. The Golomb space $\mathbb G$ is not oo-regular. Proof. The statement 1 is relatively easy, so is left to the interested reader. The proof of 2. In the Golomb space $\mathbb G$ consider two basic open sets $U=1+5\mathbb N_0$ and $V=2+5\mathbb N_0$. It can be shown that $\bar U=U\cup 5\mathbb N$ and $\bar V=V\cup 5\mathbb N$, so $\bar U\cap\bar V=5\mathbb N$. We claim that the subspace $X=\mathbb N\setminus (\bar U\cap\bar V)=\mathbb N\setminus 5\mathbb N$ of the Golomb space is not regular. Consider the point $x=1$ and its neighborhood $O_x=(1+4\mathbb N)\cap X$ in $X$. Assuming that $X$ is regular, we can find a neighborhood $U_x$ of $x$ in $X$ such that $\bar U_x\cap X\subset O_x$. We can assume that $U_x$ is of basic form $U_x=1+4^i5^jb\mathbb N_0$ for some $i\ge 2$, $j\ge 1$ and $b\in\mathbb N\setminus(2\mathbb N_0\cup 5\mathbb N_0)$. Since the numbers $4$, $5^j$, and $b$ are relatively prime, by the Chinese Remainder Theorem, the intersection $(1+5^j\mathbb N_0)\cap (2+4\mathbb N_0)\cap b\mathbb N_0$ contains some point $y$.
It is clear that $y\in X\setminus O_x$. We claim that $y$ belongs to the closure of $U_x$ in $X$. We need to check that each basic neighborhood $O_y:=y+c\mathbb N_0$ of $y$ intersects the set $U_x$. Replacing $c$ by $5^jc$, we can assume that $c$ is divisible by $5^j$ and hence $c=5^jc'$ for some $c'\in\mathbb N_0$. Observe that $O_y\cap U_x=(y+c\mathbb N_0)\cap(1+4^i5^jb\mathbb N_0)\ne\emptyset$ if and only if $y-1\in 4^i5^jb\mathbb N_0-5^jc'\mathbb N_0=5^j(4^ib\mathbb N_0-c'\mathbb N_0)$. The choice of $y\in 1+5^j\mathbb N_0$ guarantees that $y-1=5^jy'$. Since $y\in 2\mathbb N_0\cap b\mathbb N_0$ and $c$ is relatively prime with $y$, the number $c'=c/5^j$ is relatively prime with $4^ib$. So, by the Euclidean Algorithm, there are numbers $u,v\in\mathbb N_0$ such that $y'=4^ibu-c'v$. Then $y-1=5^jy'=5^j(4^ibu-c'v)$ and hence $1+4^i5^jbu=y+5^jc'v\in (1+4^i5^jb\mathbb N_0)\cap(y+c\mathbb N_0)=U_x\cap O_y\ne\emptyset$. So, $y\in\bar U_x\setminus O_x$, which contradicts the choice of $U_x$. Remark. Another well-known example of a countable connected space is the Bing space $\mathbb B$. This is the rational half-plane $\mathbb B=\{(x,y)\in\mathbb Q\times \mathbb Q:y\ge 0\}$ endowed with the topology generated by the base consisting of sets $$U_{\varepsilon}(a,b)= \{(a,b)\}\cup\{(x,0)\in\mathbb B:|x-(a-\sqrt{2}b)|<\varepsilon\}\cup \{(x,0)\in\mathbb B:|x-(a+\sqrt{2}b)|<\varepsilon\}$$ where $(a,b)\in\mathbb B$ and $\varepsilon>0$. It is easy to see that the Bing space $\mathbb B$ is not oo-regular, so it is not homeomorphic to the rational projective space $\mathbb QP^\infty$. Problem 1. Is the Bing space homeomorphic to the Golomb space? Remark. It is clear that the Bing space has many self-homeomorphisms distinct from the identity. So, the answer to Problem 1 would be negative if the answer to the following problem is affirmative. Problem 2. Is the Golomb space $\mathbb G$ topologically rigid? Problem 3. Is the Bing space topologically homogeneous?
Since the last two problems are quite interesting I will ask them as separate questions on MathOverflow. Added in an edit. Problem 1 has a negative solution. The Golomb space and the Bing space are not homeomorphic since 1) For any non-empty open sets $U_1,\dots,U_n$ in the Golomb space (or in the rational projective space) the intersection $\bigcap_{i=1}^n\bar U_i$ is not empty. 2) The Bing space contains three non-empty open sets $U_1,U_2,U_3$ such that $\bigcap_{i=1}^3\bar U_i$ is empty. Added in a next edit. Problem 2 has an affirmative answer: the Golomb space $\mathbb G$ is topologically rigid. This implies that $\mathbb G$ is not homeomorphic to the Bing space or the rational projective space (which are topologically homogeneous). Problem 3 has an affirmative solution: the Bing space is topologically homogeneous. Added in Edit made 14.03.2020. The rational projective space $\mathbb Q P^\infty$ admits a nice topological characterization: Theorem. A topological space $X$ is homeomorphic to $\mathbb Q P^\infty$ if and only if $X$ is countable, first countable, and admits a decreasing sequence of nonempty closed sets $(X_n)_{n\in\omega}$ such that $X_0=X$, $\bigcap_{n\in\omega}X_n=\emptyset$, and for every $n\in\omega$, (i) the complement $X\setminus X_n$ is a regular topological space, and (ii) for every nonempty open set $U\subseteq X_n$ the closure $\overline{U}$ contains some set $X_m$.
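The closure computation $\bar U=U\cup 5\mathbb N$ used in the proof can be probed by brute force: a point $m$ lies in the closure of $U$ iff every basic neighborhood $m+b\mathbb N_0$ with $\gcd(m,b)=1$ meets $U$. The following sketch tests this over a finite window of moduli and progression terms, so it is only heuristic evidence, not a proof:

```python
from math import gcd

U = {1 + 5 * t for t in range(2000)}          # finite window of U = 1 + 5*N_0

def meets(m, b, bound=10_000):
    """Does the basic neighborhood m + b*N_0 intersect U within the window?"""
    return any(m + b * t in U for t in range(bound // b + 1))

def in_closure(m, moduli=range(2, 60)):
    """Heuristic: m is in the closure of U iff every tested basic
    neighborhood of m (modulus b coprime to m) meets U."""
    return all(meets(m, b) for b in moduli if gcd(m, b) == 1)

assert in_closure(5) and in_closure(10)       # multiples of 5 lie in the closure
assert not in_closure(2)                      # the neighborhood 2 + 5*N_0 misses U
```

For a multiple of 5, every admissible modulus $b$ is coprime to 5, so $bt\equiv 1-m \pmod 5$ is solvable and the neighborhood hits $U$; for $m=2$ the neighborhood $2+5\mathbb N_0$ avoids $U$ entirely, matching $\bar U=U\cup 5\mathbb N$.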
{ "source": [ "https://mathoverflow.net/questions/46986", "https://mathoverflow.net", "https://mathoverflow.net/users/4312/" ] }
47,185
A choice function maps every set (in its domain) to an element of itself. This question concerns the existence of an anti-choice function defined on the family of countable sets of reals. In an answer to a question about uncountability proofs it was suggested that while Cantor's diagonal method furnishes a Borel function mapping each countable sequence $S$ of real numbers to a number not in the sequence, the same is not true for countable sets of reals. This seems surprising (but believable) to me and an important insight, if true. But why is it true? Put more provocatively (if this is fair, and if not, explain why): Let $\mathcal{C}$ be the family of countable subsets of $\mathbb{R}$. Is it the case that for every Borel function $f:\mathcal{C} \rightarrow \mathbb{R}$ there is an $X \in \mathcal{C}$ such that $f(X) \in X$?
The initial function you mention is the diagonalizing function $d:\mathbb{R}^\omega\to\mathbb{R}$, for which one ensures that $z=d(x_0,x_1,\ldots)$ is distinct from every $x_n$ simply by making the $n$-th digit of $z$ different from the $n$-th digit of $x_n$ in some regular way. Since the graph of this function is arithmetically definable (needing to look only at the individual digits of the input and output), it follows that $d$ is a Borel function. The point of your question, however, is that this function is not well-defined on different enumerations of the same set---the resulting diagonal value will be different if you rearrange the input. What would be desired is a function $f:\mathbb{R}^\omega\to\mathbb{R}$ such that always $f(x_0,x_1,\ldots)\neq x_n$, but for which $f$ gives the same value for different enumerations of the same countable set. Unfortunately, there is no such function that is Borel. Theorem. There is no Borel function $f:\mathbb{R}^\omega\to\mathbb{R}$ such that $f(x_0,x_1,\ldots)\neq x_n$ for every sequence $\vec x$ and index $n$, and $f(x_0,x_1,\ldots)=f(y_0,y_1,\ldots)$, whenever $\{\ x_0,x_1,\ldots\ \}=\{\ y_0,y_1,\ldots\ \}$. Proof. The nonexistence of such a function is closely related in spirit, in the context of Borel equivalence relation theory, to the impossibility of a Borel reduction from the equivalence relation $E_{set}$ to $=$, and the argument below belongs to that subject. The argument will use set-theoretic forcing, and is an instance where forcing is used in order to make a conclusion about the ground model $V$, rather than to prove an independence result. To begin, suppose that $f$ is a Borel function with the two properties. Let $\mathbb{P}=\text{Coll}(\omega,\mathbb{R})$ be the forcing to collapse $\mathbb{R}$ to $\omega$. That is, conditions in $\mathbb{P}$ are finite sequences of reals, ordered by end-extension. The generic object will be a countable enumeration consisting of all the reals of the ground model $V$. 
Let $g$ and $h$ be mutually $V$-generic for $\mathbb{P}$, and consider the corresponding forcing extensions $V[g]$, $V[h]$ and their common extension $V[g][h]$. Since $f$ was a Borel function, it has a Borel code that may be re-interpreted in any of these universes. Furthermore, the assertion that $f$ has the stated features is a $\Pi^1_1$ statement about this Borel code, and hence absolute between $V$ and these larger universes. That is, the re-interpreted function $f$ continues to have the desired properties in $V[g][h]$. Since $g$ and $h$ both enumerate the same set $\mathbb{R}^V$, it follows that $f(g)=f(h)$ in $V[g][h]$. In particular, the value $z=f(g)=f(h)$ is in both $V[g]$ and $V[h]$. But since $g$ and $h$ are mutually generic, it follows that $V[g]\cap V[h]=V$, and so $z\in V$. But this contradicts the fact that $f(g)$ should be a real not listed in $g$, since $g$ lists all the reals of $V$, including $z$. Contradiction! QED The practitioners of Borel equivalence relation theory have a large bag of tools at their disposal---many arguments proceed with one's choice of forcing or ergodic theory and group actions or something else---and I expect similarly that there is a forcing-free proof of the theorem above (perhaps someone can post such an argument?). But to my way of thinking, the forcing proof is fairly sharp. Lastly, let me say that if there are sufficient large cardinals, then projective truth is absolute from $V$ to $V[g][h]$, and in this case, the same argument shows that there can be no projective function $f$ with the two properties. Since the Borel functions sit merely at the doorstep of the projective hierarchy, this would be an enormous expansion of the phenomenon.
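The failure of well-definedness on sets, as opposed to sequences, can already be seen in a finite toy version of the diagonalizing function $d$ (my own illustration, acting on finite decimal prefixes rather than on reals):

```python
def diagonalize(prefixes):
    """Cantor's diagonal on finite decimal prefixes: the k-th output
    digit differs from the k-th digit of the k-th input, changed 'in
    some regular way' (5 -> 6, everything else -> 5), which avoids
    the 0.999... = 1.000... ambiguity."""
    return ''.join('6' if s[k] == '5' else '5'
                   for k, s in enumerate(prefixes))

print(diagonalize(["123", "456", "789"]))  # '565'
print(diagonalize(["456", "123", "789"]))  # '555': same set, other order
```

The output differs from every input at the diagonal position, but reordering the inputs changes the output; this failure of invariance under reorderings is exactly what the theorem shows cannot be repaired by any Borel function.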
{ "source": [ "https://mathoverflow.net/questions/47185", "https://mathoverflow.net", "https://mathoverflow.net/users/8008/" ] }
47,214
(Added an epilogue) I started a job as a TA, and it requires me to take a five-session workshop about better teaching in which we have to present a 10-minute lecture (micro-teaching). In the last session the two people in charge of the workshop said that we should be able to "explain our research field to most people, or at least those with some academic background, in about three minutes". I argued that it might be possible to give a general idea of a specific field in psychology, history, maybe some engineering, and other fields that deal with concepts most people hear about on a daily basis. However, I continued, in mathematics it takes me a good 30 minutes to explain to another mathematician what a large cardinal is. I don't see how I can just tell someone "I deal with very big sizes of infinity whose existence you can't prove within the usual system". Most people are only familiar with one notion of infinity, and the very few (usually physicists and electrical engineering students) that might know there is more than one will start wondering why it's even interesting. One of the three people who gave a presentation that session, who came from the field of education, asked me what I study in math. I answered the above, and he said "Okay, so you're trying to describe some absolute sense of reality." to which I simply said "No". Anyway, after this long and heartbreaking story comes the actual question. I was asked to give my presentation next week. I said I will talk about "What is mathematics", because most people think it's just solving huge and complicated equations all day. I want to give a different (and correct) look at the field in 10 minutes (including some open discussion with the class), and the crowd is beginner grad students from all over the academy (physics, engineering of all kinds, biology, education, et cetera...)
I have absolutely no idea how to proceed from asking them what is math in their opinion, and then telling them that it's [probably] not that. Any suggestions or references? Addendum: The due date was this morning, after reading carefully the answers given here, discussing the topic with my office-mates and other colleagues, my advisor and several other mathematicians in my department I have decided to go with the Hilbert's Hotel example after giving a quick opening about the bad PR mathematicians get as people who solve complicated equations filled with integrals and whatnot. I had a class of about 30 people staring at me vacantly most of the 10 minutes, as much as I tried to get them to follow closely. The feedback (after the micro-teaching session the class and the instructors give feedback) was very positive and it seemed that I managed to get the idea through - that our "regular" (read: pre-math education) intuition doesn't apply very well when dealing with infinite things. I'd like to thank everyone that wrote an answer, a comment or a comment to an answer. I read them all and considered every bit of information that was provided to me, in hope that this question will serve others in the future and that I will be able to take from it more the next time I am asked to explain something non-trivial to the layman.
I have given talks about mathematics to non-mathematicians, for example to a bunch of marketing people. Supplemental: to see an example of a talk of mine that was given to a general audience, see my TEDx talk "Zeroes" (with supplemental material). The talk lasted 15 minutes and it took me about two weeks to prepare. In my experience the following points are worth noting: If the audience does not understand you it is all in vain. You should interact with your audience. Ask them questions, talk to them. A lecture is a boring thing. Pick one thing and explain it well. The audience will understand that in 10 minutes you cannot explain all of math. The audience will not like you if you rush through a number of things and don't explain any one of them well. So an introductory sentence of the form "Math is a vast area with many uses, but in these 10 minutes let me show you just one cool idea that mathematicians have come up with." is perfectly ok. A proof of something that seems obvious does not appeal to people. For example, the proof of Kepler's conjecture about sphere packing would be a bad choice, because most people won't see what the fuss is all about. You are not talking to mathematicians. You are not allowed to have definitions, theorems or proofs. You are not allowed to compute anything. Pictures are your friend. Use lots of pictures whenever possible. You need not talk about your own work, but pick something you know well. Do not pick examples that always appear in popular science (Fermat's Last Theorem, the Kepler conjecture, the bridges of Koenigsberg, any of the 1 million dollar problems). Pick something interesting but not widely known. Here are some ideas I used in the past. I started with a story or an intriguing idea, and ended by explaining which branch of mathematics deals with such ideas. Do not start by saying things like "an important branch of mathematics is geometry, let me show you why".
Geometry is obviously not important since all of mathematics has zero importance for your audience. But they like cool ideas. So let them know that math is about cool ideas. To explain what topology and modern geometry are about, you can talk about the Lebesgue covering dimension. Our universe is three-dimensional. But how can we find this out? Suppose you wake up in the morning and say "what's the dimension of the universe today?" You walk into your bathroom and look at the tiles. There is a point where three of them meet and you say to yourself "yup, the universe is still three-dimensional". Find some tiles in the classroom and show people how at least three of them always meet. Talk about how four of them could also meet, but at least three of them will always meet in a point. In a different universe, say in a plane, the tiles would really be segments and so only two of them would meet. Draw this on a board. Show slides of honeycombs in which three honeycomb cells meet. Show roof tilings in which three tiles meet, etc. Ask the audience to imagine what happens in four dimensions: what do floor tiles in a bathroom look like there? They must be like our bricks. What is a chunk of space for us is just a wall for them. So if we have a big pile of bricks stacked together, how many will meet at a point? At least four (this will require some help from you)! To explain knot theory, start by stating that we live in a three-dimensional space because otherwise we could not tie our shoelaces. It is a theorem of topology that knots only exist in three dimensions. You proceed as follows. First you explain that in one or two dimensions you can't make a knot because the shoelace can't cross itself. It can only be a circle. In three dimensions you can have a knot, obviously. In four dimensions every knot can be untied as follows. Imagine that the fourth dimension is the color of the rope. If two points of the rope are of different color they can cross each other.
That is not cheating because in the fourth dimension (color) they're different. So take a knot and color it with the colors of the rainbow so that each point is a different color. Now you can untie the knot simply by pulling it apart in any which way. Crossing points will always be of different colors. Show pictures of knots. Show pictures of knots colored with the colors of the rainbow. Explain infinity in terms of ordinal numbers (cardinals are no good for explaining infinity because people can't imagine $\aleph_1$ and $2^{\aleph_0}$). An ordinal number is like a queue of people who are waiting at a counter (pick an example that everyone hates; in Slovenia this might be a long queue at the local state office). A really, really long queue contains infinitely many people. We can imagine that an infinite queue 1, 2, 3, 4, ... is processed only after the world ends. Discuss the following question: suppose there are already infinitely many people waiting and one more person arrives. Is the queue longer? Some will say yes, some will say no. Then say that an infinite row of the form 1, 2, 3, 4, ... with one extra person at the end is like waiting until the end of the world, and then one more day after that. Now more people will agree that the extra person really does make the queue longer. At this point you can introduce $\omega$ as an ordinal and say that $\omega + 1$ is larger than $\omega$. Invite the audience to invent longer queues. As they do, write down the corresponding ordinals. They will invent $\omega + n$, possibly $\omega + \omega$. Someone will invent $\omega + \omega + \omega + \ldots$; you say this is a bit imprecise and suggest that we write $\omega \cdot \omega$ instead. You are at $\omega^2$. Go on as far as your audience can take it (usually somewhere below $\epsilon_0$). Pictures: embed countable ordinals on the real line to show infinite queues of infinite queues of infinite queues...
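The ordinal arithmetic behind the queue story can be sketched with a toy encoding (my own illustration, not from the answer): represent ordinals below $\omega^2$ as pairs $(a,b)$ standing for $\omega\cdot a+b$, ordered lexicographically.

```python
def ord_add(x, y):
    """Ordinal addition below omega^2, encoding omega*a + b as (a, b):
    appending an infinite queue swallows any finite tail before it."""
    (a, b), (c, d) = x, y
    return (a + c, d) if c > 0 else (a, b + d)

omega = (1, 0)  # the queue 1, 2, 3, ...
one = (0, 1)    # a single extra person

print(ord_add(omega, one))  # (1, 1): omega + 1, a strictly longer queue
print(ord_add(one, omega))  # (1, 0): but 1 + omega is still just omega
```

Python's lexicographic tuple order matches the ordinal order here, so `omega < ord_add(omega, one)` holds: an extra person at the end really does lengthen the queue, while an extra person at the front does not.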
{ "source": [ "https://mathoverflow.net/questions/47214", "https://mathoverflow.net", "https://mathoverflow.net/users/7206/" ] }
47,369
The proof that column rank = row rank for matrices over a field relies on the fact that the elements of a field commute. I'm looking for an easy example of a matrix over a ring for which column rank $\neq$ row rank, i.e., can one find a $2 \times 3$ block matrix with real $2\times 2$ matrices as entries which has different column and row ranks?
It is a classical observation due to Nathan Jacobson that a division ring such that the set of invertible matrices over it is closed under transposition has to be a field, i.e. commutative. The reason is simple: the matrix $\begin{pmatrix} a & b \\ c & 1 \end{pmatrix}$ is invertible if and only if $\begin{pmatrix} a - bc & 0 \\ c & 1 \end{pmatrix}$ is invertible. This happens if and only if $a - bc \neq 0$. For the transpose you get the condition $a - cb \neq 0$. Hence, taking $a = cb$ and a pair of non-commuting elements $b,c$ in the division ring, you get an example of an invertible matrix whose transpose is not invertible.
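Jacobson's example can be verified numerically (my own sketch) by modelling the quaternions $\mathbb{H}$ as $2\times2$ complex matrices, so that a $2\times2$ quaternionic matrix becomes a $4\times4$ complex matrix; a quaternionic matrix is invertible exactly when its complex image is. Note that the transpose here swaps the quaternionic entries $b$ and $c$ without conjugating them.

```python
import numpy as np

# Quaternion units 1, i, j as 2x2 complex matrices (the standard
# faithful representation of H); all arithmetic below is exact.
one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)

b, c = i, j        # a pair of non-commuting quaternions
a = c @ b          # choose a = cb, as in the answer

# [[a, b], [c, 1]] over H, realised as a 4x4 complex matrix, and its
# quaternionic transpose [[a, c], [b, 1]] (entries swapped, not conjugated).
M = np.block([[a, b], [c, one]])
Mt = np.block([[a, c], [b, one]])

print(np.linalg.matrix_rank(M))   # 4 -> invertible
print(np.linalg.matrix_rank(Mt))  # 2 -> the transpose is singular
```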
{ "source": [ "https://mathoverflow.net/questions/47369", "https://mathoverflow.net", "https://mathoverflow.net/users/6415/" ] }
47,442
It's probably common knowledge that there are Diophantine equations which do not admit any solutions in the integers, but which admit solutions modulo $n$ for every $n$. This fact is stated, for example, in Dummit and Foote (p. 246 of the 3rd edition), where it is also claimed that an example is given by the equation $$ 3x^3 + 4y^3 + 5z^3 = 0. $$ However, D&F say that it's "extremely hard to verify" that this equation has the desired property, and no reference is given as to where one can find such a verification. So my question is: Does anyone know of a readable reference that proves this claim (either for the above equation or for others)? I haven't had much luck finding one.
It is actually quite straightforward to write down examples in one variable where this occurs. For example, the Diophantine equation $(x^2 - 2)(x^2 - 3)(x^2 - 6) = 0$ has this property: for any prime $p$, at least one of $2, 3, 6$ must be a quadratic residue, so there is a solution $\bmod p$, and by Hensel's lemma (which has to be applied slightly differently when $p = 2$) there is a solution $\bmod p^n$ for any $n$. We conclude by CRT. (Edit: As Fedor says, there are problems at $2$. We can correct this by using, for example, $(x^2 - 2)(x^2 - 17)(x^2 - 34)$.) Hilbert wrote down a family of quartics with the same property. There are no (monic) cubics or quadratics with this property: if a monic polynomial $f(x) \in \mathbb{Z}[x]$ with $\deg f \le 3$ is irreducible over $\mathbb{Z}$ (which is equivalent to not having an integer solution), then by the Frobenius density theorem there are infinitely many primes $p$ such that $f(x)$ is irreducible $\bmod p$.
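For the corrected polynomial, local solvability is easy to machine-check for small moduli (a sanity check of my own, not a substitute for the quadratic-reciprocity and Hensel argument above):

```python
def f(x):
    # the corrected example from the answer
    return (x*x - 2) * (x*x - 17) * (x*x - 34)

def has_root_mod(n):
    return any(f(x) % n == 0 for x in range(n))

# No integer root, since 2, 17 and 34 are not perfect squares ...
assert all(f(x) != 0 for x in range(-1000, 1001))

# ... yet there is a root modulo every n in this (small) range;
# reciprocity, Hensel's lemma and CRT extend this to all n.
assert all(has_root_mod(n) for n in range(2, 301))
print("checked all moduli up to 300")
```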
{ "source": [ "https://mathoverflow.net/questions/47442", "https://mathoverflow.net", "https://mathoverflow.net/users/430/" ] }
47,492
Consider a continuous irreducible representation of a compact Lie group on a finite-dimensional complex Hilbert space. There are three mutually exclusive options: 1) it's not isomorphic to its dual (in which case we call it 'complex') 2) it has a nondegenerate symmetric bilinear form (in which case we call it 'real') 3) it has a nondegenerate antisymmetric bilinear form (in which case we call it 'quaternionic') It's 'real' in this sense iff it's the complexification of a representation on a real vector space, and it's 'quaternionic' in this sense iff it's the underlying complex representation of a representation on a quaternionic vector space. Offhand, I know just four compact Lie groups whose continuous irreducible representations on complex vector spaces are all either real or quaternionic in the above sense: 1) the group Z/2 2) the trivial group 3) the group SU(2) 4) the group SO(3) Note that I'm so desperate for examples that I'm including 0-dimensional compact Lie groups, i.e. finite groups! 1) is the group of unit-norm real numbers, 2) is a group covered by that, 3) is the group of unit-norm quaternions, and 4) is a group covered by that. This probably explains why these are all the examples I know. For 1), 2) and 4), all the continuous irreducible representations are in fact real. What are all the examples?
An irreducible representation is real or quaternionic precisely when its character is real-valued. By the Peter-Weyl theorem all characters are real-valued precisely when every element in the group is conjugate to its inverse. When the group is connected a more precise answer is as follows: The Weyl group (in its tautological representation) must contain multiplication by $-1$, and this is true precisely when all indecomposable root system factors have that property. I don't remember off hand which indecomposable root systems have this property, but it is of course well known (type A is out, type B/C is in, type D depends on the parity of the rank). Addendum: I found the relevant places in Bourbaki. All characters are real-valued precisely when the element he calls $w_0$ is $-1$ (Ch. VIII, Prop. 7.5.11), and one can also read off whether a given representation is real or quaternionic (loc. cit. Prop 12). From the tables in Chapter 6 one gets that $w_0=-1$ precisely for $A_1$, B/C, D for even rank, $E_7$, $E_8$, $F_4$ and $G_2$.
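The Peter-Weyl criterion from the first paragraph (all characters real-valued iff every element is conjugate to its inverse) is easy to test on finite groups, which are compact Lie groups of dimension 0. In the sketch below (my own encoding, via the $2\times2$ complex matrix model of the quaternions), the quaternion group $Q_8$, all of whose irreducibles are real or quaternionic, passes the test, while $\mathbb{Z}/3$, which has complex characters, fails it.

```python
def mat_mul(A, B):
    """2x2 complex matrices encoded as 4-tuples (a, b, c, d) = [[a, b], [c, d]]."""
    (a, b, c, d), (e, f, g, h) = A, B
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

E = (1, 0, 0, 1)
Qi = (1j, 0, 0, -1j)
Qj = (0, 1, -1, 0)
Qk = mat_mul(Qi, Qj)
neg = lambda A: tuple(-x for x in A)
Q8 = [E, Qi, Qj, Qk, neg(E), neg(Qi), neg(Qj), neg(Qk)]

def every_element_conjugate_to_its_inverse(group, mul, e):
    """The Peter-Weyl criterion for all characters being real-valued."""
    def inv(a):
        return next(b for b in group if mul(a, b) == e)
    return all(any(mul(mul(h, g), inv(h)) == inv(g) for h in group)
               for g in group)

print(every_element_conjugate_to_its_inverse(Q8, mat_mul, E))  # True
print(every_element_conjugate_to_its_inverse(
    [0, 1, 2], lambda a, b: (a + b) % 3, 0))                   # False
```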
{ "source": [ "https://mathoverflow.net/questions/47492", "https://mathoverflow.net", "https://mathoverflow.net/users/2893/" ] }
47,569
Do you know properties which distinguish four-dimensional spaces among the others? What makes four-dimensional topological manifolds special? What makes four-dimensional differentiable manifolds special? What makes four-dimensional Lorentzian manifolds special? What makes four-dimensional Riemannian manifolds special? other contexts in which four dimensions or $3+1$ dimensions play a distinguishing role. If you feel there are many particularities, please list the most interesting from your personal viewpoint. They may be concerned with why spacetime has four dimensions, but they should not be limited to this.
(Riemannian geometry) Four is the only dimension $n$ in which the adjoint representation of SO($n$) is not irreducible. Since the adjoint representation is isomorphic to the representation on 2-forms, this means that the bundle of 2-forms on an oriented Riemannian manifold decomposes into self-dual and anti-self-dual forms. 2-forms are particularly significant, since the curvature of a connection is a 2-form. In particular the curvature of the Levi-Civita connection is a 2-form with values in the adjoint bundle, so it has a 4-way decomposition into self-dual and anti-self-dual pieces. Hence there are natural curvature conditions on Riemannian 4-manifolds which have no analogue in other dimensions (without imposing additional structure). The impact of self-duality includes: special properties of Einstein metrics, Yang-Mills connections, and twistor theory for (anti-)self-dual Riemannian manifolds. EDIT Note also Torsten Ekedahl's response to the question above (which I missed when posting this): in any even dimension, middle dimensional forms are not irreducible for the complexified special orthogonal group. This accounts not only for the special features of four dimensions in Riemannian geometry, but also dimensions 2 and 6, where 1-forms and 3-forms play a special role. Further, Lorentzian geometry in four dimensions is special because the bundle of 2-forms has a natural complex structure: this underpins the Petrov Classification of spacetimes, for example
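The 3+3 splitting can be exhibited concretely (a quick numerical sketch of my own): on $\Lambda^2\mathbb{R}^4$ the Hodge star squares to the identity, and its $+1$ and $-1$ eigenspaces (the self-dual and anti-self-dual 2-forms) are each 3-dimensional.

```python
import numpy as np

# Basis of 2-forms on R^4: e12, e13, e14, e23, e24, e34.
# Hodge star: *e12=e34, *e13=-e24, *e14=e23, *e23=e14, *e24=-e13, *e34=e12.
star = np.zeros((6, 6))
for a, b, s in [(0, 5, 1), (1, 4, -1), (2, 3, 1)]:  # (index, *index, sign)
    star[b, a] = s
    star[a, b] = s

assert np.allclose(star @ star, np.eye(6))  # * is an involution on 2-forms
print(np.sort(np.linalg.eigvalsh(star)))    # three -1's and three +1's
```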
{ "source": [ "https://mathoverflow.net/questions/47569", "https://mathoverflow.net", "https://mathoverflow.net/users/10095/" ] }
47,702
In his 1967 paper A convenient category of topological spaces, Norman Steenrod introduced the category CGH of compactly generated Hausdorff spaces as a good replacement for the category Top of topological spaces, in order to do homotopy theory. The most important difference between CGH and Top is that in CGH there is a functorial homeomorphism $$\mathrm{map}(X,\mathrm{map}(Y,Z))\cong \mathrm{map}(X\times Y,Z),$$ a fact that is only true in Top under the extra assumption that $Y$ is locally compact. But in more recent papers, I see that people use CGWH spaces instead of CGH spaces... Why? Could someone explain to me what goes wrong in CGH spaces (please illustrate with an example), and explain how the "w" fixes everything? Also (following Jeff's comment), to whom should the "w" be attributed? One more wish: can someone give me an example of a CGWH space that isn't CGH?
I believe that CGWH spaces were first used in a systematic way in the work of Lewis-May-Steinberger on spectra. It is certainly the case that Gaunce Lewis's (unpublished) thesis contains the best reference on CGWH spaces that I'm aware of. (I haven't looked at the McCord paper Andrey mentions. Update: Having looked at McCord's paper, it does indeed seem to be the one to introduce CGWH; he attributes the idea to J. C. Moore.) As to why one might prefer to use CGWH spaces, I'm not precisely sure. But here is one possibility. A key property of the category of CG spaces is that the product of a quotient map with a space is still a quotient map. In CGWH spaces, something even nicer is true: any pullback of a quotient map (along any map) is still a quotient map. (I don't know whether this nicer fact fails in CGH, but I suspect it does.) Another nice fact about CGWH: regular monomorphisms are precisely the closed inclusions. ("Regular monomorphism" means the monomorphism is an equalizer of some pair.) (I originally said here that regular epis in CGWH are precisely quotient maps, but on reflection I'm not sure this is true.)
{ "source": [ "https://mathoverflow.net/questions/47702", "https://mathoverflow.net", "https://mathoverflow.net/users/5690/" ] }
47,905
I read about the following puzzle thirty-five years ago or so, and I still do not know the answer. One gives an integer $n\ge1$ and asks to place the integers $1,2,\ldots,N=\frac{n(n+1)}{2}$ in a triangle according to the following rules. Each integer is used exactly once. There are $n$ integers on the first row, $n-1$ on the second one, ... and finally one in the $n$th row (the last one). The integers of the $j$th row are placed below the middle of intervals of the $(j-1)$th row. Finally, when $a$ and $b$ are neighbours in the $(j-1)$th row, and $c$ lies in the $j$th row, below the middle of $(a,b)$ (I say that $a$ and $b$ dominate $c$), then $c=|b-a|$. Here is an example, with $n=4$. $$\begin{matrix} 6 & & 10 & & 1 & & 8 \\\\ & 4 & & 9 & & 7 \\\\ & & 5 & & 2 & & \\\\ & & & 3 & & & \end{matrix}$$ Does anyone know about this? Is it related to something classical in mathematics? Maybe eigenvalues of Hermitian matrices and their principal submatrices. If I remember well, the author claimed that there are solutions for $n=1,2,3,4,5$, but not for $6$, and the existence was an open question when $n\ge7$. Can anyone confirm this? Trying to solve this problem, I soon was able to prove the following. If a solution exists, then among the numbers $1,\ldots,n$, exactly one lies in each row, which is obviously the smallest in the row. In addition, the smallest of a row is a neighbour of the largest, and they dominate the largest of the next row. The article perhaps appeared in the Revue du Palais de la Découverte. Edit. Thanks to G. Myerson's answer, we know that these objects are called Exact difference triangles in the literature.
This is the first problem in Chapter 9 of Martin Gardner, Penrose Tiles to Trapdoor Ciphers. In the addendum to the chapter, he writes that Herbert Taylor has proved it can't be done for $n\gt5$ . Unfortunately, he gives no reference. There may be something about the problem in Solomon W Golomb and Herbert Taylor, Cyclic projective planes, perfect circular rulers, and good spanning rulers, in Sequences and their applications (Bergen, 2001), 166–181, Discrete Math. Theor. Comput. Sci. (Lond.), Springer, London, 2002, MR1916130 (2003f:51016). See also http://www.research.ibm.com/people/s/shearer/dts.html and the literature on difference matrices and difference triangles. EDIT. Reading a little farther into the Gardner essay, I see he writes, The only published proof known to me that the conjecture is true is given by G. J. Chang, M. C. Hu, K. W. Lih and T. C. Shieh in "Exact Difference Triangles," Bulletin of the Institute of Mathematics, Academia Sinica, Taipei, Taiwan (vol. 5, June 1977, pages 191- 197). This paper can be found at http://w3.math.sinica.edu.tw/bulletin/bulletin_old/d51/5120.pdf and the review is MR0491218 (58 #10483). EDIT 2023: Brian Chen, YunHao Fu, Andy Liu, George Sicherman, Herbert Taylor, Po-Sheng Wu, Triangles of Absolute Differences, Chapter 11 (pages 115-124) in Plambeck and Rokicki, eds., Barrycades and Septoku: Papers in Honor of Martin Gardner and Tom Rodgers, MAA Press 2020, also gives a proof.
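For small $n$ the claims can be checked by machine. The backtracking sketch below (my own, not from Gardner or the cited paper) exploits the fact that the top row determines the whole triangle, pruning as soon as a repeated or zero difference appears:

```python
def triangle_from_top(top):
    """The top row determines the whole triangle of absolute differences."""
    rows = [list(top)]
    while len(rows[-1]) > 1:
        r = rows[-1]
        rows.append([abs(r[k + 1] - r[k]) for k in range(len(r) - 1)])
    return rows

def is_exact(rows):
    cells = [x for r in rows for x in r]
    return sorted(cells) == list(range(1, len(cells) + 1))

def search(n):
    """Backtracking over top rows; returns a valid top row or None."""
    N = n * (n + 1) // 2
    top = []

    def extend():
        if len(top) == n:
            return list(top)
        for v in range(1, N + 1):
            top.append(v)
            cells = [x for r in triangle_from_top(top) for x in r]
            # prune as soon as a cell repeats or a zero difference appears
            if len(set(cells)) == len(cells) and min(cells) >= 1:
                found = extend()
                if found:
                    return found
            top.pop()
        return None

    return extend()
```

`search(n)` quickly finds a valid top row for each $n \le 5$ (for $n=4$, the question's own example `[6, 10, 1, 8]` is one such row); running `search(6)` to completion takes noticeably longer and returns None, consistent with the theorem.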
{ "source": [ "https://mathoverflow.net/questions/47905", "https://mathoverflow.net", "https://mathoverflow.net/users/8799/" ] }
47,954
Disclaimer: I don't know a whole lot about complexity theory beyond, say, a good undergrad class. With increasing frequency I seem to be encountering claims by complexity theorists that, in the unlikely event that P=NP were proved and an algorithm with reasonable constants found, mathematicians wouldn't bother trying to prove things anymore because we could just use our P-time algorithm to search for proofs. Usually this is part of an argument for why all mathematicians and logicians should care a lot about P=?=NP. I think most of these claims are exaggerations of the first full paragraph on page 8 of Cook's problem description for the Clay Institute (which itself is stated in a completely reasonable and unexaggerated manner). However, it's quite clear from the Clay Institute description that P=NP is relevant only to classes of problems, parameterized by some integer $n$, for which we have already proved all three of the following: the question is not independent of our chosen axioms ($T\vdash \phi\vee T\vdash \neg\phi$) any proof of the proposition must have size at most polynomial in $n$ any proof of the negation of the proposition must have size at most polynomial in $n$ This way we know there's a proof of either the proposition or its negation, and the search problem for the one that does exist falls inside NP, so we can dovetail the two searches and stop when one of them succeeds. This puzzles me. Most of the propositions mathematicians care about don't come in integer-parameterized classes, let alone classes with known proof-size bounds. Usually they come in classes of size 1 with no knowledge of proof-size. Is there some trick for turning the sorts of results mathematicians care about into these integer-parameterized-polynomially-bounded classes? Example: how would you do this for the question of whether or not CH is independent of ZFC?
Cook and Reckhow's JSL article The Relative Efficiency of Propositional Proof Systems (which seems to be the starting point for the literature) actually mentions that if you take the problem class to consist of all propositions in some proof system (such as first-order predicate calculus), take the length of the proposition as the parameter, and take the question to be "is it entailed by the axioms", then at the time the paper was published (1979) no real-world proof system was known to have the desired property, and a few were known not to have the desired property. I suppose I am being slightly lazy here, since the study of which problems have this property is a whole subfield with plenty of literature I could read, but really I'm only interested in whether or not that subfield's positive-results-to-date justify the claims I've been hearing lately. A reference to a paper containing the "trick" above would be fine as an answer.
Let me address the issue at the beginning of the original question: If P=NP were proved and an algorithm with reasonable constants found, would mathematicians stop trying to prove things? The relevant NP set in this situation seems to be the $L_1$ of Ryan Williams's answer, which I regard (or decode) as the set of pairs consisting of a proposition $P$ to be proved and an upper bound $n$, written in unary notation, for the proof length. If we had a polynomial time algorithm for this NP set, then I could apply it as follows. Take $P$ to be some proposition that I'm tempted to work on, and take $n$ to be larger than any proof that I'd have time to write out in my life. If the algorithm, applied to these inputs, says "no" then I shouldn't work on this problem, because any proof would be too long for me to write out. If the algorithm says "yes" then I still shouldn't work on the problem because a P-time algorithm for Ryan's $L_2$ could find the proof for me. All of this, however, depends on an extremely optimistic understanding of "reasonable constants". The $n$ I chose is (I hope) rather big, so even a quadratic-time algorithm (with a small coefficient on the quadratic term) could take a long time (longer than my lifetime). The bottom line is that, if P=NP were proved with plausible constants in the running time, it would not be foolish for me to keep trying to prove theorems. (Even if it were foolish, I'd keep trying anyway, partly because it's fun and partly because people might like my proof better than the lexicographically first one.) By the way, the system in which proofs are done should, for these purposes, not be simply an axiomatic system like ZFC with its traditional axioms and underlying logic. It should be a system that allows you to formally introduce definitions. In fact, it should closely approximate what mathematicians actually write. 
The reason is that, although I'm looking only for proofs short enough to write in my lifetime, that doesn't mean proofs short enough to write in primitive ZFC notation in my lifetime. I believe some (if not all) of the proofs I've published would, if written in primitive ZFC notation, be too long for a lifetime.
{ "source": [ "https://mathoverflow.net/questions/47954", "https://mathoverflow.net", "https://mathoverflow.net/users/2361/" ] }
48,014
The Robertson-Seymour theorem on graph minors leads to some interesting conundrums. The theorem states that any minor-closed class of graphs can be described by a finite number of excluded minors. As testing for the presence of any given minor can be done in cubic time (albeit with astronomical constants) this implies that there exists a polynomial time algorithm for testing membership in any minor-closed class of graphs. Hence it seems reasonable that the problem should be deemed to be in P. However the RS theory does not give us even the faintest clue as to how to determine the guaranteed-finite set of excluded minors, and until we have these at hand, we may not have any algorithm of any sort. Worse still, there is no known algorithm to actually find the excluded minors, and even if you have a big list of them, there is no way that I know of to verify that the list is actually complete. In fact, could it perhaps actually be undecidable to find the list of excluded minors? So, does it make sense to view a problem as being simultaneously polynomial-time and undecidable? It seems a bit odd to me (who is not a particular expert in complexity) but maybe it's quite routine?
Consider the following simplified example of the same phenomenon, which many students find clarifying. Let $f(n)=1$, if there are $n$ consecutive $7$s in the decimal expansion of $\pi$, and otherwise $f(n)=0$. Is this function computable? A naive attempt to compute $f(n)$ would simply proceed to search $\pi$ for $n$ consecutive $7$s. If found, the algorithm outputs $1$, but otherwise....and then the naive algorithm doesn't seem to know when to output $0$, and so students sometimes expect that $f$ is not computable. But actually, $f$ is a computable function. If it happens that there are arbitrarily long sequences of $7$s in the decimal expansion of $\pi$, an open question, then $f$ is the constant $1$ function, which is certainly computable. Otherwise, there is some longest sequence of $7$s in $\pi$, having length $N$, and so $f$ is the function that is $1$ up to $N$ and then $0$ above $N$. And this function also is computable, for any particular $N$. So the situation is that we have proved that $f$ is computable by exhibiting several algorithms, and proving that $f$ is definitely computed by one of them, but we don't know which one. (In fact, $f$ is linear time computable.) So we have proved that $f$ is a computable function, but by a pure existence proof that merely shows there is an algorithm computing $f$, without explicitly exhibiting it. It seems to be the same phenomenon in your case, where you have a computable function, but you don't know which algorithm computes it. Addition. Let me try to address Thierry Zell's concern about the third question. To my way of thinking, the phenomenon of the question is an instance of the problem of uniformity of algorithms, a pervasively considered issue in computability theory. To illustrate, consider the question of whether a given program $p$ halts on input $0$ before another program $q$. Let $f_p(q)=1$ if it does and otherwise $f_p(q)=0$. 
Every such function $f_p$ is computable, for similar reasons to my $\pi$ function $f$ above, since either $p$ doesn't halt at all on input $0$, in which case $f_p$ is identically $0$, or $p$ does halt in $N$ steps, in which case we need only run $q$ for $N$ steps to see if it halts, and give our output for $f_p(q)$ by that time. So each $f_p$ is a computable function. But the joint function $f(p,q)=f_p(q)$, a binary function, is not computable (if it were, then we could solve the halting problem: to decide if $p$ halts on input $0$, design a program $q$ that would take one step extra after a halt, and ask if $p$ halts before $q$). In other words, the function $f(p,q)$ is computable for any fixed $p$, but not uniformly in $p$. And such uniformity issues are ubiquitous in computability theory. In the example of the question, each class of graphs is decidable, but not uniformly so, since by Tony's answer there is no uniform algorithm, given a description of the class, to find the collection of excluded minors. But for any such fixed class, the membership question is decidable. The issue of whether a given algorithm is uniform in a given parameter is a very common concern in computability theory, and occurs throughout the subject.
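In code, the situation can be sketched as follows (purely schematically; nothing here computes anything about $\pi$): one can write down the whole countable family of candidate algorithms, exactly one of which computes $f$, without knowing which one it is.

```python
def make_candidate(N):
    """Candidate algorithm for f: output 1 up to N and 0 above N.

    N = None encodes the candidate for the case that pi contains
    arbitrarily long runs of 7s (the constant-1 function)."""
    if N is None:
        return lambda n: 1
    return lambda n: 1 if n <= N else 0

# Each candidate is trivially computable (even in linear time); the pure
# existence proof says f equals one of them, but provides no procedure
# for deciding which.
candidates = [make_candidate(None)] + [make_candidate(N) for N in range(10)]

f5 = make_candidate(5)  # e.g. if the longest run of 7s had length 5
assert [f5(n) for n in range(8)] == [1, 1, 1, 1, 1, 1, 0, 0]
```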
{ "source": [ "https://mathoverflow.net/questions/48014", "https://mathoverflow.net", "https://mathoverflow.net/users/1492/" ] }
48,045
I am puzzled by the amazing utility and therefore ubiquity of two-dimensional matrices in comparison to the relative paucity of multidimensional arrays of numbers, hypermatrices . Of course multidimensional arrays are useful: every programming language supports them, and I often employ them myself. But these uses treat the arrays primarily as convenient data structures rather than as mathematical objects. When I think of the generalization of polygon to $d$-dimensional polytope, or of two-dimensional surface to $n$-dimensional manifold, I see an increase in mathematical importance and utility; whereas with matrices, the opposite. One answer to my question that I am prepared to acknowledge is that my perception is clouded by ignorance: hypermatrices are just as important, useful, and prevalent in mathematics as 2D matrices. Perhaps tensors, especially when viewed as multilinear maps, fulfill this role. Certainly they play a crucial role in physics, fluid mechanics, Riemannian geometry, and other areas. Perhaps there is a rich spectral theory of hypermatrices, a rich decomposition (LU, QR, Cholesky, etc.) theory of hypermatrices, a rich theory of random hypermatrices—all analogous to corresponding theories of 2D matrices, all of which I am unaware. I do know that Cayley explored hyperdeterminants in the 19th century, and that Gelfand, Kapranov, and Zelevinsky wrote a book entitled Discriminants, Resultants and Multidimensional Determinants (Birkhäuser, Boston, 1994) about which I know little. If, despite my ignorance, indeed hypermatrices have found only relatively rare utility in mathematics, I would be interested to know if there is some high-level reason for this, some reason that 2D matrices are inherently more useful than hypermatrices? I am aware of how amorphous is this question, and apologize if it is considered inappropriate.
Note that in linear algebra matrices describe at least two different things: linear maps between vector spaces (we consider only finite-dimensional vector spaces here) and bilinear forms. When thinking of matrices as tensors, linear maps between $V$ and $W$ are elements of the space $V^* \otimes W$, whereas bilinear forms between $V$ and $W$ are elements of $V^* \otimes W^* $. Now you can easily generalize the latter case to more than two spaces, but not the former. But it is the former case where several concepts like composition (matrix multiplication), determinants, eigenvalues etc. apply. (Note that eigenvalues and determinants can be defined for bilinear forms on a vector space equipped with an inner product, but not for bilinear forms on plain vector spaces). Of course you can consider spaces like $V^* \otimes W^* \otimes X$, but elements of this space are better thought as linear maps between $V\otimes W$ and $X$ than as three-dimensional hypermatrices. So what is special about the number 2 is that there is a notion of duality for vector spaces, but no "n-ality".
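The contrast can be made concrete with a small NumPy sketch (an illustration under the usual identification of $V^* \otimes W$ with matrices; the three-index array is used by contraction against vectors rather than by composition):

```python
import numpy as np

rng = np.random.default_rng(1)

# A matrix A in V* (x) W is a linear map V -> W: it composes.
A = rng.standard_normal((3, 2))   # map R^2 -> R^3
B = rng.standard_normal((4, 3))   # map R^3 -> R^4
compose = B @ A                   # map R^2 -> R^4

# A matrix G in V* (x) W* is a bilinear form: it eats two vectors.
G = rng.standard_normal((2, 3))
v, w = rng.standard_normal(2), rng.standard_normal(3)
assert np.isclose(v @ G @ w, np.einsum('i,ij,j->', v, G, w))

# An element T of V* (x) W* (x) X is best seen as a linear map
# from V (x) W to X, not as a "3D matrix":
T = rng.standard_normal((2, 3, 4))
x = np.einsum('ijk,i,j->k', T, v, w)      # T(v, w) lands in X = R^4
x2 = T.reshape(6, 4).T @ np.kron(v, w)    # the same map on V (x) W
assert np.allclose(x, x2)
```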
{ "source": [ "https://mathoverflow.net/questions/48045", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
48,067
Hello everyone. I am wondering whether anyone knows if the square of the Dirac delta function is defined somewhere. At first, this question might look strange. But by restricting the space of test functions, I think it is still possible. For example, in order to make sense of $\delta_0^2$ , we can think of it as the limit of $\frac{e^{-x^2/t}}{2\pi t}$ as $t\rightarrow 0_+$ . Now choose the test function $f(x)=x^2$ . It is clear that $$ \int_{-\infty}^{\infty} x^2 \frac{e^{-x^2/t}}{2\pi t} d x = \frac{1}{2\sqrt{\pi t}} \int_{-\infty}^{\infty} x^2 \frac{e^{-x^2/t}}{\sqrt{\pi t}} d x = \frac{1}{2\sqrt{\pi t}} \cdot \frac{t}{2} = \frac{\sqrt{t}}{4\sqrt{\pi}}\;. $$ Letting $t$ tend to $0$ , we get $\langle\delta_0^2,f\rangle=0$ in this case. So we can require, for example, that all test functions vanish at the origin at least as fast as $x^2$ . I don't want to reinvent the whole theory if it already exists. Otherwise, I might take care of every detail myself. Thank you in advance for any hints. EDIT: Here are some references that I found to be useful. Mikusiński, J., On the square of the Dirac delta-distribution (Russian summary), Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 14 (1966), 511–513. 44.40 (46.40) Ta Ngoc Tri, The Colombeau theory of generalized functions , Master thesis, 2005
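The computation above is easy to verify symbolically. Here is a minimal SymPy sketch (assuming $t>0$; the integrand is the square of the Gaussian kernel $e^{-x^2/(2t)}/\sqrt{2\pi t}$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
t = sp.symbols('t', positive=True)

# square of the Gaussian approximate identity e^{-x^2/(2t)}/sqrt(2*pi*t)
approx_delta_sq = sp.exp(-x**2 / t) / (2 * sp.pi * t)

val = sp.integrate(x**2 * approx_delta_sq, (x, -sp.oo, sp.oo))
assert sp.simplify(val - sp.sqrt(t) / (4 * sp.sqrt(sp.pi))) == 0
assert sp.limit(val, t, 0, '+') == 0   # so <delta_0^2, x^2> = 0
```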
When L. Schwartz "invented" distributions (actually, he only invented the mathematical theory as a part of functional analysis, because distributions were already used by physicists), he proved incidentally that it is impossible to define a product in such a way that distributions form an algebra with acceptable topological properties. What is possible is to define the product of distributions when their wave front sets do not meet. This is why $fT$ makes sense if $T$ is a distribution and $f$ is $C^\infty$ , for instance, because the wave front set of $f$ is empty. But you can also multiply genuine distributions that way; for instance in $\mathbb R^2$ , $$(1)\qquad\delta_{x=0}=\delta_{x_1=0}\delta_{x_2=0}.$$ J.-F. Colombeau invented in the 70's an algebra of generalized functions, which has something to do with distributions. But each distribution has infinitely many representatives in the algebra, and you have to play with the equality and a "weak equality" (or "association"). I don't know of an example where this tool solved an open problem. In Colombeau's algebra, the square of $\delta_0$ makes sense, but is highly non-unique. Edit (May 2020). I'd like to share the following generalization of identity (1) above, which I found in developing my theory of Divergence-free positive symmetric tensor . In ${\mathbb R}^d$ , consider the one-dimensional Lebesgue measure ${\cal L}_j$ along the $j$ -th axis, for $1\le j\le d$ . Then $$({\cal L}_1\cdots{\cal L}_d)^{\frac1{d-1}}=\delta_{x=0}.$$ There are a lot of reasons why this equality makes sense and is valid. For instance, if you approach ${\cal L}_j$ by $(2\epsilon)^{1-d}dx|_{K_j(\epsilon)}$ where $K_j(\epsilon)=(-\epsilon,\epsilon)^{d-1}\times {\mathbb R}\vec e_j$ , then the left-hand side equals $(2\epsilon)^{-d}dx|_{(-\epsilon,\epsilon)^d}$ , which approaches the Dirac at the origin. 
There is an analogous identity when the orthogonal axes are replaced by an arbitrary list of $d$ axes; then the right-hand side is $C\delta$ , where the constant $C$ is computed by solving a case of Minkowski's Problem.
{ "source": [ "https://mathoverflow.net/questions/48067", "https://mathoverflow.net", "https://mathoverflow.net/users/36814/" ] }
48,118
I'm reading an article by Ricardo Mañé, "The Hausdorff dimension of horseshoes of diffeomorphisms of surfaces" ( https://doi.org/10.1007/BF02585431 ). I'm having a technical problem. Sorry for my ignorance, but I would like a reference which explains how to equip the Grassmann manifolds with a metric.
I found it surprisingly difficult to find a reference for this when I was studying Mañé's papers on multiplicative ergodic theorems. My hypothesis was that people working with the Grassmannian in other areas are happy with the fact that the Grassmannian is metrisable for abstract topological reasons, and don't actually care very much about a precise metric, but I might be wrong about this... in my answer I'm going to assume that we're considering a finite-dimensional space equipped with an inner product structure. If you are interested in precise metrics on the Grassmannian, the most popular definition of which I am aware is this one: $$d(V,W):=\max\left\{\sup_{w \in W, \|w\|=1}\inf \{\|v-w\| \colon v \in V \},\sup_{v \in V, \|v\|=1}\inf \{\|v-w\| \colon w \in W\}\right\}$$ This is I think not quite the same as the one suggested by Ryan Budney, but produces the same topology. This one seems to be the most popular definition for people working in multiplicative ergodic theory (it is in Barreira and Pesin's book, for example). There are some equivalent ways of describing this metric which seem to be less well-known. If we know a priori that $V$ and $W$ have the same dimension, then the maximum in the expression above is always attained by both expressions simultaneously! Hence if we fix a dimension $r$, then the expression $$d(V,W):=\sup_{v \in V,\|v\|=1}\inf\left\{\|v-w\|\colon w \in W\right\}$$ is actually a metric for the component of the Grassmannian which consists of all $r$-dimensional subspaces. This does not seem to be very well-known; I actually discovered this by reading Kato's book on perturbation theory, which isn't exactly the first place I'd go to find out about Grassmannian manifolds... Another way to put a metric on the Grassmannian is as follows. 
We can identify a subspace $U$ with the unique linear operator of orthogonal projection onto that subspace, and take the metric given by setting the distance between two subspaces to be the operator norm distance between the orthogonal projection operators corresponding to those subspaces. I personally like this approach a great deal, because I think it makes it very obvious that the Grassmannian is compact (well, obvious if you're a functional analyst!). This metric is also, rather pleasantly I think, exactly identical to the first metric I defined above. You can find a proof that the two things are the same in the book on Hilbert spaces by Akhiezer and Glazman. There's a short discussion on this topic in my paper "A rapidly-converging lower bound for the joint spectral radius via multiplicative ergodic theory", which is basically the result of a gentle argument between myself and the referee over how the metric on the Grassmannian should be defined!
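These equalities are easy to check numerically. Here is a minimal NumPy sketch, assuming the ambient space is $\mathbb{R}^n$ with its standard inner product; the fact used is that $\sup_{v \in V,\|v\|=1}\inf\{\|v-w\| \colon w \in W\}$ equals the operator norm $\|(I-P_W)P_V\|$, where $P_U$ denotes orthogonal projection onto $U$:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj(basis):
    # orthogonal projection onto the column span of `basis`
    q, _ = np.linalg.qr(basis)
    return q @ q.T

def gap(P, Q):
    # sup over unit v in ran(P) of dist(v, ran(Q)) = ||(I - Q) P||
    n = P.shape[0]
    return np.linalg.norm((np.eye(n) - Q) @ P, 2)

n, r = 6, 2
V, W = rng.standard_normal((n, r)), rng.standard_normal((n, r))
P, Q = proj(V), proj(W)

d_hausdorff = max(gap(P, Q), gap(Q, P))  # the first metric above
d_proj = np.linalg.norm(P - Q, 2)        # operator-norm distance of projections
assert np.allclose(d_hausdorff, d_proj)
# for subspaces of equal dimension the two one-sided gaps agree
assert np.allclose(gap(P, Q), gap(Q, P))
```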
{ "source": [ "https://mathoverflow.net/questions/48118", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
48,222
An answer of André Henriques' inspired the following closely related CW question. Parts of the following is extracted from his answer and my comments. I regularly teach a knot theory class. Every time, students ask about applications. What should I say? I have two off-the-cuff replies when students ask. The first is that knot theory is a treasure chest of examples for several different branches of topology, geometric group theory, and certain flavours of algebra. The second is a list of engineering and scientific applications: untangling DNA , mixing liquids , and the structure of the Sun's corona . I'm interested hearing about other applications. I am also interested in hearing your take on the pedagogical issues involved. Thank you!
If I may steal some thunder from Peter Shor, his paper, Quantum money from knots (with Edward Farhi, David Gosset, Avinatan Hassidim, and Andrew Lutomirski) relies for the security of its "quantum money scheme" on the assumption that given two different looking but equivalent knots, it is difficult to explicitly find a transformation that takes one to the other. The Alexander polynomial plays a prominent role in the paper.
{ "source": [ "https://mathoverflow.net/questions/48222", "https://mathoverflow.net", "https://mathoverflow.net/users/1650/" ] }
48,248
There are a number of theorems or lemmas or mathematical ideas that come to be known as eponymous tricks , a term which in this context is in no sense derogatory. Here is a list of 11 such tricks (the last of which I learned at MO): the Whitney trick the DeTurck trick the Cayley trick the Rabinowitsch trick the Klee trick the Moser trick the Herglotz trick the Weyl trick the Karatsuba trick the Jouanolou trick Minty's trick Edit: List augmented from the comments and answers: the Eilenberg–Mazur swindle the Parshin trick the Atiyah rotation trick the Higman trick Rosser's trick Scott's trick the Craig trick the Uhlenbeck trick the Alexander trick Grilliot's trick Zarhin's trick [For any abelian variety $A$ , $(A \times A^{\vee})^4$ is principally polarizable.] Kirby's torus trick Trost's Discriminant trick The Brauer trick . Discussed in Gorenstein's Finite Simple Groups . Further Edit. And although my original interest was in eponymous (=named-after-someone) tricks, several non-eponymous tricks have been mentioned, so I'll gather those here as well: the determinant trick the kernel trick the W-trick Some of those listed above do not yet have Wikipedia pages (hint, hint—Thierry). I (JOR) am not seeking to extend this list (although I would be incidentally interested to learn of prominent omissions), but rather I am wondering: Is there some aspect or trait shared by the mathematical ideas or techniques that, over time, come to be named "tricks"?
How about the following (which I think applies to some of these tricks but not others): a trick is something whose usefulness is not fully captured by any particular set of hypotheses, so it would limit its usefulness to write it down as a lemma.
{ "source": [ "https://mathoverflow.net/questions/48248", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
48,453
Let $X$ be a subset of the real line and $S=\{s_i\}$ an infinite sequence of positive numbers. Let me say that $X$ is $S$-small if there is a collection $\{I_i\}$ of intervals such that the length of each $I_i$ equals $s_i$ and the union $\bigcup I_i$ contains $X$. And $X$ is said to be small if it is $S$-small for any sequence $S$. Obviously every countable set is small. Are there uncountable small sets? Some observations: A set of positive Hausdorff dimension cannot be small. Moreover, a small set cannot contain an uncountable compact subset.
The sets you are calling small are commonly referred to in the literature as "strong measure zero sets." The Borel conjecture is the assertion that any strong measure zero set is countable. This is independent of the usual axioms of set theory. For example, Luzin sets are strong measure zero; if MA (Martin's axiom) holds, then there are strong measure zero sets of size continuum. In fact, both CH and ${\mathfrak b}=\aleph_1$ contradict the Borel conjecture. However, the Borel conjecture is consistent. This was first shown by Laver, in 1976; in his model the continuum has size $\aleph_2$. Later, it was observed (by Woodin, I think) that adding random reals to a model of the Borel conjecture, preserves the Borel conjecture, so the size of the continuum can be as large as wanted. All this is described very carefully in Chapter 8 of the very nice book "Set Theory: On the Structure of the Real Line" by Tomek Bartoszynski and Haim Judah, AK Peters (1995).
{ "source": [ "https://mathoverflow.net/questions/48453", "https://mathoverflow.net", "https://mathoverflow.net/users/4354/" ] }
48,477
This has been inspired by this MO question: Harmonic maps into compact Lie groups . Just for fun: which is your favourite never-appeared forthcoming paper? (Do not hesitate to close this question if inappropriate.)
This doesn't exactly count as an unpublished forthcoming paper, but the supposed original proof of Fermat's Last Theorem that was "too large to fit in the margin" should probably be mentioned here.
{ "source": [ "https://mathoverflow.net/questions/48477", "https://mathoverflow.net", "https://mathoverflow.net/users/8320/" ] }
48,622
This question is partly motivated by Never appeared forthcoming papers . Motivation Grothendieck's "Récoltes et Semailles" has been cited on various occasions on this forum. See for instance the answers to Good papers/books/essays about the thought process behind mathematical research or Which mathematicians have influenced you the most? . However, these citations reflect only one aspect of "Récoltes et Semailles", namely the nontechnical reflexion about Mathematics and mathematical activity. Putting aside the wonderful "Clef du Yin et du Yang", which is a great reading almost unrelated to Mathematics, I remember reading in "Récoltes et Semailles" a bunch of technical mathematical reflexions, almost all of which were above my head due to my having but a smattering of algebraic geometry. However, I recall for instance reading Grothendieck's opinion that standard conjectures were false, and claiming he had in mind a few related conjectures (which he doesn't state precisely) which might turn out to be the right ones. I still don't even know what the standard conjectures state and thus didn't understand anything, but I know many people are working hard to prove these conjectures. I've thus often wondered what was the value of Grothendieck's mathematical statements (which are not limited to standard conjectures) in "Récoltes et Semailles". The questions I'd like to ask here are the following: Have the mathematical parts of "Récoltes et Semailles" proved influential? If so, is there any written evidence of it, or any account of the development of the mathematical ideas that Grothendieck has expressed in this text? If the answer to the first question is negative, what are the difficulties involved in implementing Grothendieck's ideas? Idle thoughts In the latter case, I could come up with some possible explanations: Those who could have developed and spread these ideas didn't read "Récoltes et Semailles" seriously and thus nobody was aware of their existence. 
Those people took the mathematical content seriously but it was beyond anyone's reach to understand what Grothendieck was trying to get at because of the idiosyncratic writing style. Should one of these two suppositions be backed by evidence, I'd appreciate a factual answer. The ideas were already outdated or have been proven wrong. If this is the case, I'd appreciate a reference. Epanorthosis Given that "Pursuing Stacks" and "Les Dérivateurs" were written approximately in 1983 and 1990 respectively and have proved influential (see Maltsiniotis's page for the latter text, somewhat less known), I would be surprised should Grothendieck's mathematical ideas expressed around 1985 be worthless.
Begging your pardon for indulging in some personal history (perhaps personal propaganda), I will explain how I ended up applying Récoltes et Semailles. I do apologize in advance for interpreting the question in such a self-centered fashion! I didn't come anywhere near to reading the whole thing, but I did spend many hours dipping into various portions while I was a graduate student. Serge Lang had put his copy into the mathematics library at Yale, a very cozy place then for hiding among the shelves and getting lost in thoughts or words. Even the bits I read of course were hard going. However, one thing was clear even to my superficial understanding: Grothendieck, at that point, was dissatisfied with motives. Even though I wasn't knowledgeable enough to have an opinion about the social commentary in the book, I did wonder quite a bit if some of the discontent could have a purely mathematical source. A clue came shortly afterwards, when I heard from Faltings Grothendieck's ideas on anabelian geometry. I still recall my initial reaction to the section conjecture: `Surely there are more splittings than points!' to which Faltings replied with a characteristically brief question: `Why?' Now I don't remember if it's in R&S as well, but I did read somewhere or hear from someone that Grothendieck had been somewhat pleased that the proof of the Mordell conjecture came from outside of the French school. Again, I have no opinion about the social aspect of such a sentiment (assuming the story true), but it is interesting to speculate on the mathematical context. There were in Orsay and Paris some tremendously powerful people in arithmetic geometry. Szpiro, meanwhile, had a very lively interest in the Mordell conjecture, as you can see from his writings and seminars in the late 70's and early 80's. But somehow, the whole thing didn't come together. 
One suspects that the habits of the Grothendieck school, whereby the six operations had to be established first in every situation where a problem seemed worth solving, could be enormously helpful in some situations, and limiting in some others. In fact, my impression is that Grothendieck's discussion of the operations in R&S has an ironical tinge. [This could well be a misunderstanding due to faulty French or faulty memory.] Years later, I had an informative conversation with Jim McClure at Purdue on the demise of sheaf theory in topology. [The situation has changed since then.] But already in the 80's, I did come to realize that the motivic machinery didn't fit in very well with homotopy theory. To summarize, I'm suggesting that the mathematical content of Grothendieck's strong objection to motives was inextricably linked with his ideas on homotopy theory as appeared in 'Pursuing Stacks' and the anabelian letter to Faltings, and catalyzed by his realization that the motivic philosophy had been of limited use (maybe even a bit of an obstruction) in the proof of the Mordell conjecture. More precisely, motives were inadequate for the study of points (the most basic maps between schemes!) in any non-abelian setting, but Faltings' pragmatic approach using all kinds of Archimedean techniques may not have been quite Grothendieck's style either. Hence, arithmetic homotopy theory. Correct or not, this overall impression was what I came away with from the reading of R&S and my conversations with Faltings, and it became quite natural to start thinking about a workable approach to Diophantine geometry that used homotopy groups. Since I'm rather afraid of extremes, it was pleasant to find out eventually that one had to go back and find some middle ground between the anabelian and motivic philosophies to get definite results. This is perhaps mostly a story about inspiration and inference, but I can't help feeling like I did apply R&S in some small way. 
(For a bit of an update, see my paper with Coates here .) Added, 14 December: I've thought about this question on and off since posting, and now I'm quite curious about the bit of R&S I was referring to, but I no longer have access to the book. So I wonder if someone knowledgeable could be troubled to give a brief summary of what it is Grothendieck really says there about the six operations. I do remember there was a lot, and this is a question of mathematical interest.
{ "source": [ "https://mathoverflow.net/questions/48622", "https://mathoverflow.net", "https://mathoverflow.net/users/5587/" ] }
48,642
Does anyone know how to parametrize the boundary of the Mandelbrot set? I am not a fractal-geometer or a dynamical systems person. I just have some idle curiosity about this question. The Mandelbrot set is customarily defined as the set $M$ of all points $c\in\mathbb{C}$ such that the iterates of the function $z\mapsto z^2+c$, starting at $z=0$, remain bounded forever. Most very pretty depictions of the Mandelbrot set show $M$ as an intersection of an infinite sequence of sets $M_1\supset M_2\supset M_3\supset\cdots$, where the boundary of $M_i$ is the curve $|z_i(c)|=K$. Here $z_i(c)$ is the $i$th iterate of $z\mapsto z^2+c$, starting at $z=0$, and $K$ is some constant which guarantees that future iterates will escape. These curves $\partial (M_i)$ guide the viewer to see the increasingly intricate parts of the Mandelbrot set. Each of these curves $\partial(M_i)$ is analytic and closed. They can thus be parametrized nicely with a trigonometric series. To be more specific, each boundary has a parametrization of the form $$z(t)=\sum_{k=0}^\infty a_k\cos(kt)+i\sum_{k=0}^\infty b_k\sin(kt).$$ (In fact, since each boundary $\partial(M_i)$ is determined by a polynomial equation in the real and imaginary parts of $c$, I think each of these series should terminate. Correct me if I am wrong.) I would think that the limiting path should also have some nice parametrization with a trigonometric series. Is this limit the same for all $K$? If the limit is not the same for all $K$, then is there a limit as $K\rightarrow\infty$? What are the Fourier coefficients?
Lasse's answer expanded: Let $\psi$ be the map of the exterior of the unit disk onto the exterior of the Mandelbrot set, with Laurent series $$ \psi(w) = w + \sum_{n=0}^\infty b_n w^{-n} = w - \frac{1}{2} + \frac{1}{8} w^{-1} - \frac{1}{4} w^{-2} + \frac{15}{128} w^{-3} + 0 w^{-4} -\frac{47}{1024} w^{-5} + \dots $$ Then of course the boundary of the Mandelbrot set is the image of the unit circle under this map. However, this depends on the (not yet proved) local connectedness of that boundary. Here, for the coefficients $b_n$ there is no known closed form, but they can be computed recursively. Of course we put $w = e^{i\theta}$ and then this is a Fourier series.
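Here is a short sketch that evaluates the truncated Laurent series on the unit circle, using only the coefficients listed above. Since the boundary is a fractal, the truncation is only a crude approximation; for instance, the image of $w=-1$ lands near the tip $c=-2$, but convergence there is slow.

```python
import numpy as np

# the Laurent coefficients b_0, ..., b_5 listed above
b = [-1/2, 1/8, -1/4, 15/128, 0.0, -47/1024]

def psi_trunc(w):
    """Truncated exterior map w + sum_{n>=0} b_n w^{-n}."""
    return w + sum(bn * w**(-n) for n, bn in enumerate(b))

theta = np.linspace(0.0, 2 * np.pi, 1024, endpoint=False)
curve = psi_trunc(np.exp(1j * theta))  # rough approximation of the boundary

# The Mandelbrot set lies in |c| <= 2; the truncated curve stays near that disk,
# and psi(-1) roughly approximates the tip c = -2.
assert np.max(np.abs(curve)) < 2.1
assert abs(psi_trunc(-1.0 + 0j) + 2) < 0.1
```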
{ "source": [ "https://mathoverflow.net/questions/48642", "https://mathoverflow.net", "https://mathoverflow.net/users/11264/" ] }
48,690
Any Grothendieck topos E is the "classifying topos" of some geometric theory, in the sense that geometric morphisms F→E can be identified with "models of that theory" internal to the topos F. For the topos of sheaves on a site C, the corresponding theory may tautologically be taken to be "the theory of cover-preserving flat functors on C." However, for some naturally arising toposes of interest, the classified theory has a different, more intuitive expression. For instance, the topos of simplicial sets classifies linear orders with distinct endpoints, and the "Zariski topos" classifies local rings. My question is: if X is a scheme—say affine for simplicity—then what theory does its (petit) etale topos $Sh(X_{et})$ classify? Can it be expressed in a nice intuitive way, better than "cover-preserving flat functors on the etale site"? I hope/suspect that it should have something to do with "geometric points of X" but I'm not sure how to formulate that as a geometric theory.
It classifies what the Grothendieck school calls "strict local rings". The points of such a topos are strict Henselian rings (Henselian rings with separably closed residue field). See Monique Hakim's thesis ( Topos annelés et schémas relatifs $\operatorname{III.2-4}$) for a proof and a more precise definition of what constitutes a "strict local ring" in a topos.
{ "source": [ "https://mathoverflow.net/questions/48690", "https://mathoverflow.net", "https://mathoverflow.net/users/49/" ] }
48,771
I do not know exactly how to characterize the class of proofs that interests me, so let me give some examples and say why I would be interested in more. Perhaps what the examples have in common is that a powerful and unexpected technique is introduced that comes to seem very natural once you are used to it. Example 1. Euler's proof that there are infinitely many primes. If you haven't seen anything like it before, the idea that you could use analysis to prove that there are infinitely many primes is completely unexpected. Once you've seen how it works, that's a different matter, and you are ready to contemplate trying to do all sorts of other things by developing the method. Example 2. The use of complex analysis to establish the prime number theorem. Even when you've seen Euler's argument, it still takes a leap to look at the complex numbers. (I'm not saying it can't be made to seem natural: with the help of Fourier analysis it can. Nevertheless, it is a good example of the introduction of a whole new way of thinking about certain questions.) Example 3. Variational methods. You can pick your favourite problem here: one good one is determining the shape of a heavy chain in equilibrium. Example 4. Erdős's lower bound for Ramsey numbers. One of the very first results (Shannon's bound for the size of a separated subset of the discrete cube being another very early one) in probabilistic combinatorics. Example 5. Roth's proof that a dense set of integers contains an arithmetic progression of length 3. Historically this was by no means the first use of Fourier analysis in number theory. But it was the first application of Fourier analysis to number theory that I personally properly understood, and that completely changed my outlook on mathematics. So I count it as an example (because there exists a plausible fictional history of mathematics where it was the first use of Fourier analysis in number theory). Example 6. Use of homotopy/homology to prove fixed-point theorems. 
Once again, if you mount a direct attack on, say, the Brouwer fixed point theorem, you probably won't invent homology or homotopy (though you might do if you then spent a long time reflecting on your proof). The reason these proofs interest me is that they are the kinds of arguments where it is tempting to say that human intelligence was necessary for them to have been discovered. It would probably be possible in principle, if technically difficult, to teach a computer how to apply standard techniques, the familiar argument goes, but it takes a human to invent those techniques in the first place. Now I don't buy that argument. I think that it is possible in principle, though technically difficult, for a computer to come up with radically new techniques. Indeed, I think I can give reasonably good Just So Stories for some of the examples above. So I'm looking for more examples. The best examples would be ones where a technique just seems to spring from nowhere -- ones where you're tempted to say, "A computer could never have come up with that ." Edit: I agree with the first two comments below, and was slightly worried about that when I posted the question. Let me have a go at it though. The difficulty with, say, proving Fermat's last theorem was of course partly that a new insight was needed. But that wasn't the only difficulty at all. Indeed, in that case a succession of new insights was needed, and not just that but a knowledge of all the different already existing ingredients that had to be put together. So I suppose what I'm after is problems where essentially the only difficulty is the need for the clever and unexpected idea. I.e., I'm looking for problems that are very good challenge problems for working out how a computer might do mathematics. In particular, I want the main difficulty to be fundamental (coming up with a new idea) and not technical (having to know a lot, having to do difficult but not radically new calculations, etc.). 
Also, it's not quite fair to say that the solution of an arbitrary hard problem fits the bill. For example, my impression (which could be wrong, but that doesn't affect the general point I'm making) is that the recent breakthrough by Nets Katz and Larry Guth in which they solved the Erdős distinct distances problem was a very clever realization that techniques that were already out there could be combined to solve the problem. One could imagine a computer finding the proof by being patient enough to look at lots of different combinations of techniques until it found one that worked. Now their realization itself was amazing and probably opens up new possibilities, but there is a sense in which their breakthrough was not a good example of what I am asking for. While I'm at it, here's another attempt to make the question more precise. Many many new proofs are variants of old proofs. These variants are often hard to come by, but at least one starts out with the feeling that there is something out there that's worth searching for. So that doesn't really constitute an entirely new way of thinking. (An example close to my heart: the Polymath proof of the density Hales-Jewett theorem was a bit like that. It was a new and surprising argument, but one could see exactly how it was found since it was modelled on a proof of a related theorem. So that is a counterexample to Kevin's assertion that any solution of a hard problem fits the bill.) I am looking for proofs that seem to come out of nowhere and seem not to be modelled on anything. Further edit. I'm not so keen on random massive breakthroughs. So perhaps I should narrow it down further -- to proofs that are easy to understand and remember once seen, but seemingly hard to come up with in the first place.
The method of forcing certainly fits here. Before, set theorists expected that independence results would be obtained by building non-standard, ill-founded models, and model theoretic methods would be key to achieve this. Cohen's method begins with a transitive model and builds another transitive one, and the construction is very different from all the techniques being tried before. This was completely unexpected. Of course, in hindsight, we see that there are similar approaches in recursion theory and elsewhere happening before or at the same time. But it was the fact that nobody could imagine you would be able to obtain transitive models that mostly had us stuck.
{ "source": [ "https://mathoverflow.net/questions/48771", "https://mathoverflow.net", "https://mathoverflow.net/users/1459/" ] }
48,773
Let $f: X\rightarrow Y$ be a morphism of varieties. Let $0\rightarrow F\rightarrow E\rightarrow G\rightarrow 0$ be a short exact sequence of locally free sheaves of finite rank. If the direct images of the above sheaves are locally free, then is it true that it induces a short exact sequence $0\rightarrow f_*F\rightarrow f_*E\rightarrow f_*G\rightarrow 0$?
{ "source": [ "https://mathoverflow.net/questions/48773", "https://mathoverflow.net", "https://mathoverflow.net/users/11433/" ] }
48,849
Allow me to take advantage of your collective scholarliness... The symmetric group $\mathbb S_n$ can be presented, as we all know, as the group freely generated by letters $\sigma_1,\dots,\sigma_{n-1}$ subject to relations $$ \begin{aligned} &\sigma_i\sigma_j=\sigma_j\sigma_i, && 1\leq i,j<n, |i-j|>1;\\\\ &\sigma_i\sigma_j\sigma_i=\sigma_j\sigma_i\sigma_j, &&1\leq i,j<n, |i-j|=1; \\\\ &\sigma_i^2=1, && 1\leq i<n \end{aligned} $$ If we drop the last group of relations, which declare that the $\sigma_i$'s are involutions, we get the braid group $\mathbb B_n$. Now suppose I add to $\mathbb B_n$ the relations $$ \begin{aligned} &\sigma_i^3=1, && 1\leq i<n \end{aligned} $$ and call the resulting group $\mathbb T_n$. This very natural group has probably shown up in the literature. Can you provide references to such appearances? In particular, is $\mathbb T_n$ finite?
Following up what was mentioned in the comments for $n$ up to $5$. In "Factor groups of the braid group" Coxeter showed that the quotient of the Braid group by the normal closure of the subgroup generated by $\{\sigma_i^k \ | \ 1\le i\le n-1\}$ is finite if and only if $$\frac{1}{n}+\frac{1}{k}>\frac{1}{2}$$ In your case ($k=3$) this translates to this group being infinite for $n\geq 6$. P.S. For the same question on Artin braid groups one can use the classification of finite complex reflection groups. See for example the first reference there, "On complex reflection groups and their associated braid groups" by Broué, Malle and Rouquier.
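Coxeter's inequality is easy to tabulate; the following quick check (a sketch using Python's exact rationals) confirms that for $k=3$ the quotient $\mathbb T_n$ is finite exactly when $n\le 5$:

```python
from fractions import Fraction

def quotient_is_finite(n, k):
    """Coxeter's criterion: the quotient of the braid group B_n by the
    normal closure of {sigma_i^k} is finite iff 1/n + 1/k > 1/2."""
    return Fraction(1, n) + Fraction(1, k) > Fraction(1, 2)

# For k = 3 (the group T_n of the question) the quotient is finite
# exactly for n <= 5:
finite_n = [n for n in range(2, 12) if quotient_is_finite(n, 3)]
print(finite_n)  # -> [2, 3, 4, 5]
```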
{ "source": [ "https://mathoverflow.net/questions/48849", "https://mathoverflow.net", "https://mathoverflow.net/users/1409/" ] }
48,908
There's an amusing comment in Peter Lax's Functional Analysis book. After a brief description of the Invariant Subspace Problem, he says (paraphrasing) "...this question is still open. It is also an open question whether or not this question is interesting." To avoid lengthy discussions involving subjective views about what makes math interesting, I'd simply like to know if there are examples of math papers out there that begin with something like, "Suppose the invariant subspace problem has a positive answer..." Of course, papers that are about the ISP itself don't count!
The invariant subspace problem was solved in the negative for Banach spaces by Per Enflo, and counterexamples for many classical spaces were constructed by Charles Read. The problem is open for reflexive Banach spaces. On the other hand, S. Argyros and R. Haydon recently constructed a Banach space $X$ s.t. $X^*$ is isomorphic to $\ell_1$ and every bounded linear operator on $X$ is the sum of a scalar times the identity plus a compact operator, hence the invariant subspace problem has a positive solution on $X$. The invariant subspace problem has spurred quite a lot of interesting mathematics. Usually when a positive result is proved, much more comes out, such as a functional calculus for operators. See, e.g., recent papers by my colleague C. Pearcy and his collaborators. In cases where the ISP has a positive solution for a class of operators, there may be a structure theory for the operators. There is, for example, J. Ringrose's classical structure theorem for compact operators on a Banach space. This is a beautiful and useful theorem, which, BTW, I am using currently with T. Figiel and A. Szankowski to relate the Lidskii trace formula to the J. Erdos theorem in Banach spaces. Why is the twin prime conjecture interesting?
{ "source": [ "https://mathoverflow.net/questions/48908", "https://mathoverflow.net", "https://mathoverflow.net/users/9124/" ] }
48,910
A friend of mine introduced me to the following question: Does there exist a smooth function $f: \mathbb{R} \to \mathbb{R}$, ($f \in C^\infty$), such that $f$ maps rationals to rationals and irrationals to irrationals and is nonlinear? I posed this question earlier on math.stackexchange.com (link to the question), where it received considerable interest. There hasn't been an answer so far, but one commenter suggested to bring it here. Related results: The friend who told me the problem has been able to prove that no polynomial satisfies the required conditions. If we required just that $f \in C^1$, then we can cut and paste the function $x \mapsto \frac{1}{x}$ to provide a nonlinear example: $$f(x) = \begin{cases}\frac{1}{x-1} + 1, & x \le 0 \\\\ \frac{1}{x+1} - 1, & x \ge 0\end{cases}$$
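The piecewise example above is easy to check symbolically; the following sketch (using sympy, with an illustrative rational test value) confirms that the two branches and their first derivatives agree at $0$ while the second derivatives do not, so the glued $f$ is $C^1$ but not $C^2$:

```python
import sympy as sp

x = sp.symbols('x')
f_left = 1/(x - 1) + 1    # branch used for x <= 0
f_right = 1/(x + 1) - 1   # branch used for x >= 0

# values agree at 0, so f is continuous there
assert f_left.subs(x, 0) == 0 and f_right.subs(x, 0) == 0

# first derivatives agree at 0, so f is C^1
d1_left = sp.diff(f_left, x).subs(x, 0)
d1_right = sp.diff(f_right, x).subs(x, 0)
assert d1_left == d1_right == -1

# second derivatives disagree (-2 vs 2), so f is not C^2
d2_left = sp.diff(f_left, x, 2).subs(x, 0)
d2_right = sp.diff(f_right, x, 2).subs(x, 0)
assert (d2_left, d2_right) == (-2, 2)

# each branch is a Mobius map with rational coefficients, so it sends
# rationals to rationals; e.g. f(1/2) = -1/3:
assert f_right.subs(x, sp.Rational(1, 2)) == sp.Rational(-1, 3)
```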
There are such functions. Moreover any diffeomorphism $f_0:\mathbb R\to\mathbb R$ can be approximated by such $f$. For the sake of simplicity I assume that $f_0'\ge 2$ everywhere. Enumerate the rationals: $\mathbb Q=\{r_1,r_2,\dots\}$, and construct a sequence $f_0,f_1,f_2,\dots$ of self-diffeomorphisms of $\mathbb R$ satisfying the following:

1. $f_{2k-1}(r_k)\in\mathbb Q$, and $f_n(r_k)$ is the same for all $n\ge 2k-1$.
2. $f_{2k}^{-1}(r_k)\in\mathbb Q$, and $f_n^{-1}(r_k)$ is the same for all $n\ge 2k$.
3. The first $k$ derivatives of the difference $f_k-f_{k-1}$ are bounded by $2^{-k}$ everywhere on $\mathbb R$.

Such a sequence has a limit $f$ in $C^\infty$, and this limit is a diffeomorphism satisfying $f(\mathbb Q)\subset\mathbb Q$ and $f^{-1}(\mathbb Q)\subset\mathbb Q$. The sequence $\{f_i\}$ can be constructed by induction. To construct $f_{2k-1}$ from $g:=f_{2k-2}$, consider $g(r_k)$. If it is rational, let $f_{2k-1}=g$. If not, let $I$ be an open interval containing $r_k$ and not containing any of the points $r_i$ and $g^{-1}(r_i)$ for $i\le k-1$. (Note that $r_k$ is different from these points due to the fact that $g(r_k)\notin\mathbb Q$.) Then define $f_{2k-1}=g+\varepsilon\cdot h$, where $h$ is your favorite smooth function with support contained in $I$ and such that $h(r_k)\ne 0$, and $\varepsilon$ is so small that the above derivative estimates hold and is chosen so that $f_{2k-1}(r_k)\in\mathbb Q$. To construct $f_{2k}$ from $f_{2k-1}$, do a similar perturbation near the pre-image of $r_k$, assuming it is not yet rational.
{ "source": [ "https://mathoverflow.net/questions/48910", "https://mathoverflow.net", "https://mathoverflow.net/users/4165/" ] }
48,970
An apparently elementary question that has been bugging me for quite some time: (1) Why are the integers with the cofinite topology not path-connected? Recall that the open sets in the cofinite topology on a set are the subsets whose complement is finite or the entire space. Obviously, the integers are connected in the cofinite topology, but to prove that they are not path-connected is much more subtle. I admit that this looks like the next best homework problem (and was dismissed as such in this thread), but if you think about it, it does not seem to be obvious at all. An equivalent reformulation of (1) is: (2) The unit interval $[0,1] \subset \mathbb{R}$ cannot be written as a countable union of pairwise disjoint non-empty closed sets. I can prove this, but I'm not really satisfied with my argument, see below. My questions are: Does anybody know a reference for a proof of (1), (2), or an equivalent statement, and if so, do you happen to know who has proved this originally? Do you have an easier or slicker proof than mine? Here's an outline of my rather clumsy proof of (2): Let $[0,1] = \bigcup_{n=1}^{\infty} F_{n}$ with $F_{n}$ closed, non-empty and $F_{i} \cap F_{j} = \emptyset$ for $i \neq j$. The idea is to construct by induction a decreasing family $I_{1} \supset I_{2} \supset \cdots$ of non-empty closed intervals such that $I_{n} \cap F_{n} = \emptyset$. Then $I = \bigcap_{n=1}^{\infty} I_{n}$ is non-empty. On the other hand, since every $x \in I$ lies in exactly one $F_{n}$, and since $x \in I \subset I_{n}$ and $I_{n} \cap F_{n} = \emptyset$, we see that $I$ must be empty, a contradiction. In order to construct the decreasing sequence of intervals, we proceed as follows: Since $F_{1}$ and $F_{2}$ are closed and disjoint, there are open sets $U_{1} \supset F_{1}$ and $U_{2} \supset F_{2}$ such that $U_{1} \cap U_{2} = \emptyset$. Let $I_{1} = [a,b]$ be a connected component of $[0,1] \smallsetminus U_{1}$ such that $I_{1} \cap F_{2} \neq \emptyset$.
By construction, $I_{1}$ is not contained in $F_{2}$, so by connectedness of $I_{1}$ there must be infinitely many $F_{n}$'s such that $F_{n} \cap I_{1} \neq \emptyset$. Replacing $[0,1]$ by $I_{1}$ and the $F_{n}$'s by a (monotone) enumeration of those $F_{n}$ with non-empty intersection with $I_{1}$, we can repeat the argument of the previous paragraph and get $I_{2}$. [In case we have thrown away $F_{3}, F_{4}, \ldots, F_{m}$ in the induction step (i.e., their intersection with $I_{1}$ is empty but $F_{m+1} \cap I_{1} \neq \emptyset$), we put $I_{3}, \ldots, I_{m}$ to be equal to $I_{2}$ and so on.] Added: Feb 15, 2011 I was informed that a proof of (2) appears in C. Kuratowski, Topologie II, §42, III, 6 on p.113 of the 1950 French edition, with essentially the same argument as I gave above. There it is attributed to W. Sierpiński, Un théorème sur les continus, Tôhoku Mathematical Journal 13 (1918), p. 300-303.
I happen to have been thinking about this question recently. The proof I like uses the fact that a nested sequence of open intervals has non-empty intersection provided neither end point is eventually constant. Now one inductively constructs a sequence of such intervals as follows. Each interval is a component of the complement of the union of the first n closed sets, for some n. Then wait till the next closed set intersects that interval. (If it never does, then we're trivially done.) It cannot fill the whole interval, and indeed must miss out an interval at the left and an interval at the right. So pass to one of those subintervals in such a way that your left-right choices alternate. Done. PS The question (with closed intervals instead of closed sets) was an exercise on the first sheet of Cambridge's Analysis I course last year.
{ "source": [ "https://mathoverflow.net/questions/48970", "https://mathoverflow.net", "https://mathoverflow.net/users/11081/" ] }
49,015
This question might be astoundingly naive, because my understanding of modular forms is so meek. It occurred to me that the reason I was never able to penetrate into the field of modular forms, automorphic forms, the Langland's program and so forth was because my appeal is to things that have the feel of SGA1, and those things do not. I was wondering, therefore, if Grothendieck had devoted thought to this, and if so where it can be found, and how it is treated in the field at the moment.
No. That is perhaps a little too categorical, but a mathscinet search with Grothendieck as author and "modular form" or "forme modulaire" as "anywhere" gives no result. I don't remember him mentioning modular forms in "Récoltes et Semailles" either. More to the point, it is a commonplace in the field of modular and automorphic forms to wish that Grothendieck had given some time to the subject -- and made it a little more "Grothendieck-style". Pierre Cartier gave a talk at the IHES in January 2009 where he deplored that "Grothendieck and Langlands never met". Also, the correspondence between Serre and Grothendieck contains several letters where Serre tries to attract Grothendieck to the subject of modular forms, and where Grothendieck doesn't conceal his disinterest (to say the least).
{ "source": [ "https://mathoverflow.net/questions/49015", "https://mathoverflow.net", "https://mathoverflow.net/users/5756/" ] }
49,151
In the December 2010 issue of Scientific American, an article "A Geometric Theory of Everything" by A. G. Lisi and J. O. Weatherall states "... what is arguably the most intricate structure known to mathematics, the exceptional Lie group E8." Elsewhere in the article it says "... what is perhaps the most beautiful structure in all of mathematics, the largest simple exceptional Lie group. E8." Are these sensible statements? What are some other candidates for the most intricate structure and for the most beautiful structure in all of mathematics? I think the discussion should be confined to "single objects," and not such general "structures" as modern algebraic geometry. Question asked by Richard Stanley. Here are the top candidates so far:

1) The absolute Galois group of the rationals
2) The natural numbers (and variations)
4) Homotopy groups of spheres
5) The Mandelbrot set
6) The Littlewood-Richardson coefficients (representations of $S_n$ etc.)
7) The class of ordinals
8) The monster vertex algebra
9) Classical Hopf fibration
10) Exotic Lie groups
11) The Cantor set
12) The 24-dimensional packing of unit spheres with kissing number 196560 (related to 8).
13) The simplicial symmetric sphere spectrum
14) F_un (whatever it is)
15) The Grothendieck-Teichmüller tower.
16) Riemann's zeta function
17) Schwartz space of functions

And there are a few more...
The absolute Galois group of $\mathbb{Q}$. It contains the information of all algebraic extensions of the rationals - and is therefore the most important single object of algebraic number theory. Representations of the absolute Galois group are central to many diophantine questions; see for example the Taniyama-Shimura conjecture (aka modularity theorem), which led to a solution of Fermat's last theorem and states in some form that certain Galois representations associated to elliptic curves come from modular forms. One of the most intricate sets of conjectures is dedicated (partly) to the study of representations of the absolute Galois group of $\mathbb{Q}$: the Langlands program.
{ "source": [ "https://mathoverflow.net/questions/49151", "https://mathoverflow.net", "https://mathoverflow.net/users/2807/" ] }
49,173
Recall the notion of locally presentable category (nLab): $\DeclareMathOperator{\Hom}{Hom}$

Definition: Fix a regular cardinal $\kappa$; a set is $\kappa$-small if its cardinality is strictly less than $\kappa$.

- A $\kappa$-directed category is a poset in which every $\kappa$-small set has an upper bound.
- A $\kappa$-directed colimit is the colimit of a diagram for which the indexing category is $\kappa$-directed.
- An object $a$ of a category is $\kappa$-small if $\Hom(a,-)$ preserves $\kappa$-directed colimits.
- A category is $\kappa$-locally presentable if it is (locally small and) cocomplete and there exists a set of objects, all of which are $\kappa$-small, such that the cocompletion of (the full subcategory on) this set in the category is the entire category.

It is a fact that every $\kappa$-locally presentable category is also $\lambda$-locally presentable for every $\lambda > \kappa$. In a current research project, we have some constructions that work naturally for $\kappa$-locally presentable categories for arbitrary regular cardinals $\kappa$. But all of our applications seem to be to $\aleph_0$-locally presentable categories. For example, for any ring $R$, I'm pretty sure that the category of $R$-modules is $\aleph_0$-locally presentable. (Every module has a presentation; any particular element or equation in the module is determined by some finite subpresentation.) The category of groups is $\aleph_0$-locally presentable (by the same argument). I'm told that every topos is locally presentable for some $\kappa$; is a topos necessarily $\aleph_0$-locally presentable? Indeed, I do know of some categories that are not $\aleph_0$-locally presentable but are $\kappa$-locally presentable for some larger $\kappa$; the example, apparently, is the poset of ordinals strictly less than $\kappa$ for some large regular cardinal $\kappa$. But this is not a category I have ever encountered "in nature"; it's more of a zoo specimen.
Hence my somewhat ill-defined question: Question: Do there exist "in nature" (or, "used by working mathematicians") categories that are $\kappa$-locally presentable for some $\kappa > \aleph_0$ but that are not $\aleph_0$-locally presentable? Put another way, is there any use to having a construction that works for all $\kappa$? So far, I haven't thought much about general topoi, and we do want our project to include those, so I would certainly accept as an answer "yes, this particular topos". But there might be other "representation theoretic" categories, or other things.
The category of Banach spaces and contractions (over the reals or any other complete normed field, I think) is an example of an $\aleph_{1}$-presentable category which is not $\aleph_{0}$-presentable. The point is that the ground field is a strong generator and its represented functor $Ban(k,-)$ commutes with $\aleph_{1}$-filtered colimits. It essentially boils down to the fact about infinitary operations that arsmath pointed out in the above remark; details can be found in Borceux, Handbook of categorical algebra, vol. 2, Example 5.2.2 (e). To see that the "unit ball functor" $Ban(k,-)$ does not commute with ordinary filtered colimits, you can take for example the identity $\varinjlim_{n < \omega} \ell^{1}(n) \cong \ell^{1}(\omega)$, where the $n$ are the finite ordinals and the maps the obvious inclusions. The set $\varinjlim_{n < \omega} Ban(k,\ell^{1}(n))$ only consists of sequences with finitely many non-zero entries, while the set $Ban(k,\ell^{1}(\omega))$ has all summable sequences of norm $\leq 1$.
{ "source": [ "https://mathoverflow.net/questions/49173", "https://mathoverflow.net", "https://mathoverflow.net/users/78/" ] }
49,259
Consider a ring $A$ and an affine scheme $X=\operatorname{Spec}A$ . Given two ideals $I$ and $J$ and their associated subschemes $V(I)$ and $V(J)$ , we know that the intersection $I\cap J$ corresponds to the union $V(I\cap J)=V(I)\cup V(J)$ . But a product $I.J$ gives a new subscheme $V(I.J)$ which has same support as the union but can be bigger in an infinitesimal sense. For example if $I=J$ you get a scheme $V(I^2)$ which is equal to "double" $V(I)$ . Vague Question : What is geometric interpretation of $V(I.J)$ in general? Precise question : When is $I\cap J=I.J$ ? Everybody knows the case $I+J=A$ but this is absolutely not necessary. For example if $A$ is UFD and $f,g$ are relatively prime then $(f).(g)=(f)\cap(g) $ but in general $(f)+(g)\neq A$ (e.g. $f=X, g=Y \in k[X, Y]$ ) Thank you very much.
Answer to the precise question: When $\mathrm{Tor}^1(A/I, A/J)=0$. Proof: We have the exact sequence $$0 \to I \to A \to A/I \to 0$$ Tensoring with $A/J$ (and using $I \otimes_A A/J \cong I/(I \cdot J)$), we get $$0 \to \mathrm{Tor}^1(A/I, A/J) \to I/(I \cdot J) \to A/J \to A/(I+J) \to 0.$$ The sequence starts with $0$ because the term to its left, $\mathrm{Tor}^1(A, A/J)$, vanishes: $A$ is flat as an $A$-module. Now, what is the kernel of $I \to A/J$? Clearly, it is $I \cap J$. So the kernel of $I/(I \cdot J) \to A/J$ is $(I \cap J)/(I \cdot J)$. We see that $I \cap J = I \cdot J$ if and only if $\mathrm{Tor}^1(A/I, A/J)=0$.
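To make the criterion concrete, here is a small computational sketch (the ideals are my own examples, computed with sympy via the standard elimination trick $I\cap J=(tI+(1-t)J)\cap k[x,y]$): in $k[x,y]$, the ideals $I=(x)$ and $J=(y)$ satisfy $I\cap J=(xy)=I\cdot J$, while $I=J=(x)$ gives $I\cap J=(x)$, strictly larger than $I\cdot J=(x^2)$.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

def intersect_ideals(I_gens, J_gens, ring_gens):
    """Intersect two ideals in QQ[ring_gens] via the elimination trick:
    I ∩ J = (t*I + (1-t)*J) ∩ QQ[ring_gens]."""
    mixed = [t * f for f in I_gens] + [(1 - t) * g for g in J_gens]
    G = sp.groebner(mixed, t, *ring_gens, order='lex')
    # keep the Groebner basis elements not involving t
    return [g for g in G.exprs if t not in g.free_symbols]

# I = (x), J = (y): the intersection is (x*y), which equals I·J
print(intersect_ideals([x], [y], (x, y)))   # -> [x*y]

# I = J = (x): the intersection is (x), strictly bigger than I·J = (x**2)
print(intersect_ideals([x], [x], (x, y)))   # -> [x]
```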
{ "source": [ "https://mathoverflow.net/questions/49259", "https://mathoverflow.net", "https://mathoverflow.net/users/10408/" ] }
49,278
For a connected reductive algebraic group $G$ over a field $k$, other than the \'etale fundamental group of $G$ (regarded just as a scheme), there seems to be another notion, usually called the algebraic fundamental group of $G.$ I am not sure of its definition, but I guess (at least when $G$ is split) it might be something like $I/\Gamma,$ where $I$ is the "topological fundamental group" of a maximal torus $T$ (if it makes sense), and $\Gamma$ is the subgroup generated by the inverse roots. In particular, for $GL_n$ this should produce $\mathbb Z,$ which agrees with the topological fundamental group when $k=\mathbb C.$ Could anyone give some references on this?
At Jim's request, here's an expanded version of my comments above. I will have to use some facts from the topological theory of complex algebraic varieties, but out of stubbornness I will not use any such facts which are part of the theory of Lie groups (the maximal compact subgroup, facts specific to complex semisimple Lie algebras, etc.) Let $X$ be a smooth connected affine scheme over a field $k$ (the case of interest being a connected semisimple $k$-group). Consider the collection of connected finite \'etale covers of $X$. This is an inverse system of affine schemes (with coordinate rings that are domains). Consider the inverse limit (i.e., Spec of direct limit of coordinate rings), call it $\widetilde{X}$. This is an algebraist's analogue of a universal cover: it is Spec of a (typically huge, not finite type over $k$, nor noetherian) domain, so it is connected, and one can show by "standard" direct limit arguments that it has no nontrivial connected finite etale cover. So every finite etale cover of $X$ is totally split by pullback over $\widetilde{X}$, "as if" it were a universal cover in topology. The automorphism group of $\widetilde{X}$ over $X$ is (by one definition) the opposite group of the etale fundamental group of $X$ (upon fixing some geometric point of $X$ as the base point, and lifting it to a geometric point of $\widetilde{X}$; if $k$ were separably closed then we could take a point in $X(k)$ as the base point and lift it to a $k$-point of $\widetilde{X}$). Is $\widetilde{X} \rightarrow X$ a finite-degree covering, say when $k$ is separably closed? This is the same as asking that $\widetilde{X}$ is finite type over $k$. In characteristic $p > 0$ one can use the Artin-Schreier method to make infinitely many pairwise non-isomorphic degree-$p$ connected finite etale Galois covers of $X$ if $\dim X > 0$, so in positive characteristic $\widetilde{X}$ is never of finite type over $k$ when $\dim X > 0$.
(To verify infinitude, one approach involves working with finite generically-etale maps to an affine space to essentially reduce the problem to the more familiar case of affine spaces of positive dimension.) But sometimes in char. 0 it is of finite type (such as algebraic varieties over $\mathbf{C}$ whose complex points are simply connected in the topological sense; we'll come to some examples below). Let's call a connected scheme $S$ simply connected if it has no nontrivial connected finite etale covers; e.g., $\widetilde{X}$ above (see, no noetherian hypotheses). Now there arises a question (inspired by the topological case): is $\widetilde{X} \times_{{\rm{Spec}}(k)} \widetilde{X}$ simply connected (assuming it is at least connected, which is automatic when $k$ is separably closed)? This amounts to asking if the natural map $\pi_1(X \times X) \rightarrow \pi_1(X) \times \pi_1(X)$ is an isomorphism (with $X \times X$ connected). The Artin-Schreier method shows that this fails in char. $> 0$ when $\dim X > 0$, even if $k = k_s$. But if $k$ is alg. closed of char. 0 then it is true. (Here is a sketch of a proof. The content is to show that a cofinal system of connected finite etale covers of $X \times X$ is given by products of such covers of the factors, and then group theory handles the rest. By specialization arguments typically called "Lefschetz principle", we can assume $k = \mathbf{C}$. Then the known result on the topological side reduces the task to checking that $E \rightsquigarrow E(\mathbf{C})$ sets up an equivalence from the category of finite \'etale covers of $X$ to the category of finite-degree covering spaces over $X(\mathbf{C})$. This is the so-called Riemann Existence Theorem, and is proved in section 5 of Exp. XII of SGA1 via resolution of singularities; it can also be proved via alterations. Maybe there is a more elementary algebraic proof of the product compatibility by exploiting tame ramification in char.
0, but if so then it is escaping my memory at the moment.) So when $k$ is alg. closed of char. 0, if $G$ is a smooth connected affine $k$-group then by simple connectedness (and connectedness) of $\widetilde{G} \times \widetilde{G}$ we can copy the same argument with Lie groups to uniquely equip $\widetilde{G}$ with a $k$-group scheme structure over that of $G$ making a chosen $k$-rational base point on $\widetilde{G}$ over the identity of $G$ as the identity point and the covering map a $k$-homomorphism. Continuing with such $k$, the coordinate ring $\widetilde{A}$ of $\widetilde{G}$ is a Hopf algebra over $k$, yet it is constructed as a directed union of $k$-subalgebras that are finite etale over the coordinate ring $A$ of $G$. By a general fact from the land of Hopf algebras (proved in Waterhouse's book on affine group schemes, for example), $\widetilde{A}$ is a directed union of finite type $k$-subalgebras $A_i$ that are also Hopf subalgebras. Since $A$ is finite type over $k$, by considering only "big enough" $i$, we may assume that every $A_i$ contains $A$. But $A_i$ is finitely generated over $k$, hence over $A$, yet $\widetilde{A}$ is a directed union of finite \'etale $A$-algebras. Thus, each $A_i$ is contained in a finite $A$-algebra and hence is itself module-finite over $A$. In other words, $G_i := {\rm{Spec}}(A_i) \rightarrow G$ is an isogeny between smooth connected affine $k$-groups. This isogeny is necessarily etale (as we're in char. 0), and hence the kernel is central by connectedness of the $k$-groups, and each $G_i$ is necessarily semisimple when $G$ is.
But by the theory of connected semisimple groups over general fields, the collection of central isogenous covers of $G$ has a single maximal element that dominates all others (called the simply connected member of the central isogeny class, and characterized by the property that it admits no non-trivial central isogenous covers by another smooth connected semisimple $k$-group; more on this dude below). Voila, so for $k$ alg. closed of char. 0 the collection of $G_i$'s is actually finite and terminates at $\widetilde{G}$. That is, for such $k$ the "abstract" $\widetilde{G}$ coincides with the "simply connected" central cover of $G$ in the sense of algebraic groups, so we conclude that the etale fundamental group is actually finite and coincides with the Cartier dual of the algebraic fundamental group (as the latter is by definition the Cartier dual of the kernel of the central isogeny from the simply connected central cover; more on this over general fields below). In particular, $G$ is simply connected in the sense of algebraic groups if and only if it is simply connected as a scheme. In the special case $k = \mathbf{C}$, we recover the fact that a connected semisimple $\mathbf{C}$-group $G$ is simply connected in the sense of algebraic groups if and only if $G$ is simply connected as a scheme, and (by the Riemann Existence Theorem) also if and only if $G(\mathbf{C})$ is simply connected in the sense of topology. This latter "if and only if" rests on the fact that when $G(\mathbf{C})$ is not simply connected then it has a nontrivial connected cover of finite degree, which is a consequence of the topological fundamental group being commutative and (as for any algebraic variety over $\mathbf{C}$) finitely generated.
Meanwhile, as indicated above with Artin-Schreier coverings (with details left to the interested reader), in characteristic $p > 0$ (say assuming $k = k_s$) the etale fundamental group of a positive-dimensional smooth affine $k$-scheme is always infinite. But the etale fundamental group is an entirely different creature from the algebraic fundamental group over such $k$, as is most easily seen by noting that ${\rm{PGL}}_p$ is not simply connected in the sense of algebraic groups due to the non-etale central isogeny ${\rm{SL}}_p \rightarrow {\rm{PGL}}_p$ of degree $p$. Finally, let's address the characteristic-free theory of the "simply connected central cover" for connected semisimple groups over any field, and the related notion of "algebraic fundamental group". A connected semisimple group $G$ over a field $k$ is simply connected if any central $k$-isogeny $f:G' \rightarrow G$ from a connected semisimple $k$-group is necessarily an isomorphism. (By "central isogeny" I mean that the scheme-theoretic kernel of $f$ is contained in the scheme-theoretic center of $G'$; see Definition A.1.10 and preceding discussion in "Pseudo-reductive groups".) Since every maximal $k$-torus in a connected semisimple $k$-group is its own scheme-theoretic centralizer, the finite scheme-theoretic center of such $k$-groups is contained in a $k$-torus and hence is of multiplicative type. Together with properties of "multiplicative type" groups under central extensions, this underlies the fact that a composition of central isogenies between connected semisimple groups is again central: the crux is that even when the kernels are not etale, their automorphism schemes are always etale. (Beyond the connected reductive setting, over any field $k$ of char. $p > 0$ there exists a pair of central $k$-isogenies $G \rightarrow G'$ and $G' \rightarrow G''$ whose composition has kernel that is not central in $G$. 
For example, over $\mathbf{F}_p$ let $G$ and $G''$ be the standard upper-triangular unipotent subgroup of ${\rm{SL}}_3$, whose scheme-theoretic center is the upper-right $\mathbf{G}_a$. Take $G \rightarrow G''$ to be the Frobenius homomorphism, and take $G'$ to be the intermediate quotient of $G$ by the unique central $\alpha_p$ from the upper-right entry.) Then the real theorem is the existence and uniqueness (up to unique isomorphism) of a simply connected central cover of any connected semisimple $k$-group, and the compatibility of its formation with respect to any extension of the base field. By Galois descent, the strong uniqueness requirements reduce the proof of this assertion to the case $k = k_s$, so all connected semisimple $k$-groups are split. Hence, we can appeal to the Existence and Isomorphism/Isogeny Theorems with root data to conclude. (For further discussion, see Corollary A.4.11 in "Pseudo-reductive groups" and back-references in its proof.) If $G$ is a connected semisimple group over a field $k$ and $\pi:\widetilde{G} \rightarrow G$ is its simply connected central cover in the sense of algebraic groups, then ${\rm{ker}}(\pi)$ is a finite $k$-group scheme of multiplicative type (since it is central in the connected semisimple $\widetilde{G}$) and hence its Cartier dual is a commutative finite \'etale $k$-group. That is called the algebraic fundamental group $\pi_1(G)$ in the sense of algebraic groups. (So by definition, the algebraic fundamental group is trivial if and only if $G$ is simply connected in the sense of algebraic groups.) As we saw above, if $k$ is alg. closed of char. 0 then this is "dual" to the etale fundamental group of the variety $G$, and in characteristic $p > 0$ the example $G = {\rm{PGL}}_p$ shows that it is really quite unrelated to the usual etale fundamental group in the sense of schemes (even when $k = k_s$). Jim, what were you saying about being exhausted? :)
{ "source": [ "https://mathoverflow.net/questions/49278", "https://mathoverflow.net", "https://mathoverflow.net/users/370/" ] }
49,303
The story of the analogy between knots and primes, which now has a literature, started with an unpublished note by Barry Mazur. I'm not absolutely sure this is the one I mean, but in his paper, Analogies between group actions on 3-manifolds and number fields, Adam Sikora cites B. Mazur, Remarks on the Alexander polynomial, unpublished notes. He also cites the published paper B. Mazur, Notes on étale topology of number fields, Ann. Sci. Ecole Norm. Sup. (4) 6 (1973), 521–552. I suppose an expert would recognize the relevance of this paper, but I don't see that even the word "knot" ever occurs there. [My Question] Does anyone have a copy of Mazur's note that they would share, please? If not, has anyone at least actually seen it? By the way, already years ago, I asked Mazur himself. I watched him kindly search his office, but he came up dry. I realize that whatever insights the original note contains have doubtless been surpassed after c. 40 years by published results, but historical curiosity drives my desire to see the document that started the industry.
This showed up in my snail-mail today, so I'm sharing the wealth: http://ifile.it/rodc5is/mazur.pdf
{ "source": [ "https://mathoverflow.net/questions/49303", "https://mathoverflow.net", "https://mathoverflow.net/users/10909/" ] }
49,315
In teaching my algebraic topology class, this group showed up as part of an easy fundamental group computation: $\langle a,b\mid a^2=b^2\rangle$. My first instinct was that this must be $\mathbb{Z}*\mathbb{Z}/2$ because clearly every element can be written as a product of $b$'s (only to the power 1) and powers of $a$. But this turns out to be far from clear (and likely wrong). I assume this must be a well-known group to group theorists, so I'm curious if it's isomorphic to something that can be described by other means (or what's known about it in general). Thanks!
Setting $c:=b^{-1}$ one obtains the presentation $$G= \langle a,c \, | \, a^2c^2=1 \rangle,$$ which is the fundamental group of the Klein bottle. It is well known that another presentation of such a group is $$G= \langle x,y \,|\, x^{-1}yx=y^{-1} \rangle,$$ and this allows one to write $G$ as a semi-direct product of infinite cyclic groups, namely $$G= \mathbb{Z} \rtimes_{\sigma} \mathbb{Z}$$ where $\sigma \colon \mathbb{Z} \to \mathbb{Z}$ is defined by $\sigma(y)=y^{-1}$ and $\mathbb{Z}$ is written multiplicatively.
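As a sanity check, the semidirect-product description can be verified concretely by modeling group elements as pairs of integers (an encoding added here for illustration; the choice of the elements $a$, $c$ below is one convenient realization of the generators in the presentation $\langle a,c \mid a^2c^2 \rangle$):

```python
# Model Z ⋊_σ Z, with σ the inversion m ↦ -m, as pairs (m, n):
# (m1, n1) · (m2, n2) = (m1 + (-1)^{n1} m2, n1 + n2).

def sgn(n):
    # (-1)^n without leaving the integers
    return 1 if n % 2 == 0 else -1

def mul(g, h):
    (m1, n1), (m2, n2) = g, h
    return (m1 + sgn(n1) * m2, n1 + n2)

def inv(g):
    # (m, n)^{-1} = (-(-1)^n m, -n)
    m, n = g
    return (-sgn(n) * m, -n)

e = (0, 0)
y = (1, 0)   # generates the normal Z factor
x = (0, 1)   # generates the acting Z factor

assert mul(x, inv(x)) == e and mul(y, inv(y)) == e
assert mul(mul(inv(x), y), x) == inv(y)   # the Klein bottle relation x^{-1} y x = y^{-1}
assert mul(x, y) != mul(y, x)             # the group is nonabelian

# One choice of elements realizing the presentation <a, c | a^2 c^2 = 1>:
a, c = (0, 1), (1, -1)
assert mul(mul(a, a), mul(c, c)) == e
```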
{ "source": [ "https://mathoverflow.net/questions/49315", "https://mathoverflow.net", "https://mathoverflow.net/users/6646/" ] }
49,333
What would be a good (as "easy" as possible) example of a triangular (non-trivial) quasi-Hopf algebra? By non-trivial I mean that the quasi structure is not trivial, but if the triangular structure is trivial it's ok (it's actually better for me).
{ "source": [ "https://mathoverflow.net/questions/49333", "https://mathoverflow.net", "https://mathoverflow.net/users/11554/" ] }
49,348
I teach, among many other things, a class of wonderful and inquisitive 7th graders. We've recently been studying and discussing various number systems (N, Z, Q, R, C, algebraic numbers, and even quaternions and surreals). One thing that's been hanging in the air is giving a proof that there really do exist transcendental numbers (and in particular, real ones). They're willing to take my word for it, but I'd really like to show them if I can. I've brainstormed two possible approaches: 1) Use diagonalization on a list of algebraic numbers enumerated by their heights (in the usual way) to construct a transcendental number. This seems doable to me, and would let me share some cool facts about cardinality along the way. The asterisk by it is that, while the argument is constructive, we don't start with a number in hand and then prove that it's transcendental--a feature that I think would be nice. 2) More or less use Liouville's original proof, put as simply as I can manage. The upshots of this route are that we start with a number in hand, it's a nice bit of history, and there are some cool fraction things that we could talk about (we've been discussing repeating decimals and continued fractions). The downside is that I'm not sure if I can actually make it accessible to my students. So here is where you come in. Is there a simple, elementary proof that some particular number is transcendental? Two kinds of responses that would be helpful would be: a) to point out some different kind of argument that has a chance of being elementary enough, and b) to suggest how to recouch or bring to its essence a Liouville-like argument. My model for this is the proof Conway popularized of the fact that $\sqrt{2}$ is irrational. You can find it as proof 8''' on this page. I realize that transcendence is deep waters, and I certainly don't expect something easy to arise, but I thought I'd tap this community's expertise and ingenuity. Thanks for thinking on it.
The original Liouville's number is probably the easiest, but most of the proofs tend to invoke calculus (because why not?), so let me try to show it in a more 7th-grade friendly way. I'll call this the swaths-of-zero approach. So we know that Liouville's number $L$ looks like this: .1100010000000000000000010... with a 1 in the $n!$ places. When we square it, we get this: .012100220001000000000000220002... What happens is that in the $2n!$ places we get a 1, and in the $p!+q!$ places we get a 2. (The great thing about this is that it can be explained using the elementary-school algorithm, the one they are all familiar with, for multiplication.) If we multiply $L$ by an integer and write down the answer, the value of that integer will be "laid bare" as we go deeply enough into $L$'s decimal expansion, as eventually the 1s are far enough away to become that integer without stepping on each other. Similarly, if we multiply $L^2$ by an integer, we will see that integer in some places, and 2 times that integer in others. For large enough $n,$ if we look between the $n!$ place and the $(n+1)!$ place, the last thing we'll see is that integer written at the $2n!$ place. Thus the swaths of zero in the multiple of $L$ are, $n!-(n-1)!=(n-1)(n-1)!$ long (minus a constant), whereas the widest swaths of zero in the multiple of $L^2$ are $n!-2(n-1)!=(n-2)(n-1)!$ (minus a constant) long, which is shorter, so there is no way to add positive multiples of $L$ and $L^2$ together to clear everything after the decimal point, or find positive multiples of each so that everything after the decimal point is equal. More generally: Suppose $a_jL^j+...$ and $a_kL^k+...$ are integer polynomials in $L,$ where $j>k.$ We show that their values cannot match up fully past the decimal point. 
The swaths of zero in the first polynomial, moving back from the $n!$ spot, are a constant away from $(n-j)(n-1)!$ long (the constant being the length of the sum of the coefficients), whereas in the second they are a constant away from $(n-k)(n-1)!$ long, in the same place (moving back from the $n!$ spot). I don't know if this explanation holds up to the standards of rigor you like to maintain when teaching them, but I think they will find it fascinating.
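The digit patterns described above are easy to verify with exact integer arithmetic. Here is a sketch truncating $L$ at its fifth term; the scale $D = 250$ is an arbitrary choice, large enough to see the $2\cdot 5! = 240$ place:

```python
# Approximate Liouville's number L = sum 10^{-n!} by its first five terms,
# scaled by 10^D so that all arithmetic is exact integer arithmetic.
D = 250
facts = [1, 2, 6, 24, 120]                 # n! for n = 1..5
A = sum(10 ** (D - f) for f in facts)      # A / 10^D approximates L

L_digits = str(A).zfill(D)                 # decimal digits of L after the point
L2_digits = str(A * A).zfill(2 * D)        # decimal digits of L^2 after the point

def digit(s, k):
    """k-th digit after the decimal point (1-indexed)."""
    return int(s[k - 1])

# L has 1s exactly at the n! places:
assert all(digit(L_digits, k) == (1 if k in facts else 0) for k in range(1, D + 1))

# L^2 has 1s at the 2*n! places and 2s at the p! + q! places (p != q),
# exactly as the elementary-school multiplication picture predicts:
assert digit(L2_digits, 2) == 1 and digit(L2_digits, 12) == 1 and digit(L2_digits, 240) == 1
assert digit(L2_digits, 3) == 2 and digit(L2_digits, 8) == 2 and digit(L2_digits, 144) == 2
```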
{ "source": [ "https://mathoverflow.net/questions/49348", "https://mathoverflow.net", "https://mathoverflow.net/users/6793/" ] }
49,357
In a recent question Deane Yang mentioned the beautiful Riemannian geometry that comes up when looking at $G_2$. I am wondering if people could expand on the geometry related to the exceptional Lie groups. I am not precisely sure what I am looking for, but ostensibly there should be answers forthcoming from others who have promised such answers. I understand a bit about how the exceptional Lie groups come up historically, and please correct the following if it is incorrect, but when looking at the possible Dynkin diagrams you see that there is no reason for $E_6$, $E_7$, $E_8$, $G_2$, and $F_4$ to not occur as root systems. While root systems are geometric, this is not what I am asking about. Thanks
I promised Sean a detailed answer, so here it is. As José has already mentioned, it is only $G_2$ (of the five exceptional Lie groups) which can arise as the holonomy group of a Riemannian manifold. Berger's classification in the 1950's could not rule it out, and neither could he rule out the Lie group $\mathrm{Spin}(7)$, but these were generally believed to not possibly be able to exist. However, in the early 1980's Robert Bryant succeeded in proving the existence of local examples (on open balls in Euclidean spaces). Then in the late 1980's Bryant and Simon Salamon found the first complete, non-compact examples of such manifolds, on total spaces of certain vector bundles, using symmetry (cohomogeneity one) methods. (Since then there are many examples of non-compact cohomogeneity one $G_2$ manifolds found by physicists.) Finally, in 1994 Dominic Joyce stunned the mathematical community by proving the existence of hundreds of compact examples. His proof is non-constructive, using hard analysis involving the existence and uniqueness of solutions to a non-linear elliptic equation, much as Yau's solution of the Calabi conjecture gives a non-constructive proof of the existence and uniqueness of Calabi-Yau metrics (holonomy $\mathrm{SU}(n)$ metrics) on Kahler manifolds satisfying certain conditions. (In 2000 Alexei Kovalev found a new construction of compact $G_2$ manifolds that produced several hundred more non-explicit examples. These are the only two known compact constructions to date.) It is exactly this similarity to Calabi-Yau manifolds (and to Kahler manifolds in general) that I will explain. When it comes to Riemannian holonomy, the aspect of the group $G_2$ which is important is not really that it is one of the five exceptional Lie groups, but rather that it is the automorphism group of the octonions $\mathbb O$, an $8$-dimensional non-associative real division algebra. 
The octonions come equipped with a positive definite inner product, and the span of the identity element $1$ is called the real octonions while its orthogonal complement is called the imaginary octonions $\mathrm{Im} \mathbb O \cong \mathbb R^7$. This is entirely analogous to the quaternions $\mathbb H$, except that the non-associativity introduces some new complications. In fact the analogy allows us to define a cross product on $\mathbb R^7$ in the same way, as follows. Let $u, v \in \mathbb R^7 \cong \mathrm{Im} \mathbb O$ and define $u \times v = \mathrm{Im}(uv)$, where $uv$ denotes the octonion product. (In fact the real part of $uv$ is equal to $-\langle u, v \rangle$, just as it is for quaternions.) This cross product satisfies the following relations: \begin{equation} u \times v = - v \times u, \qquad \qquad \langle u \times v , u \rangle = 0, \qquad \qquad {|| u\times v||}^2 = {|| u \wedge v ||}^2, \end{equation} exactly like the cross product on $\mathbb R^3 \cong \mathrm{Im} \mathbb H$. However, there is a difference: unlike the cross product in $\mathbb R^3$, the following expression is not zero: \begin{equation} u \times (v \times w) + \langle u, v \rangle w - \langle u, w \rangle v \end{equation} but is instead a measure of the failure of the associativity $(uv)w - u(vw)$, up to a factor. Note that on $\mathbb R^7$ there can be defined a $3$-form (totally skew-symmetric trilinear form) using the cross product as follows: $\varphi(u,v,w) = \langle u \times v, w \rangle$, which is called the associative $3$-form for reasons that we won't get into here. Digression: In fact one can show that only on $\mathbb R^3$ and $\mathbb R^7$ can one construct such a cross product, and this is intimately related to the fact that only the spheres $S^2$ and $S^6$ can admit almost complex structures. But I digress...
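These identities can be checked numerically by building the octonions via the standard Cayley–Dickson doubling construction (a sketch added for illustration; the sign convention $(a,b)(c,d) = (ac - \bar d b,\, da + b\bar c)$ is one common choice, and $\|u\wedge v\|^2$ is computed as $\|u\|^2\|v\|^2 - \langle u,v\rangle^2$):

```python
import random

def conj(x):
    # Cayley-Dickson conjugation negates every coordinate except the real part.
    return [x[0]] + [-t for t in x[1:]]

def cd_mul(x, y):
    # Cayley-Dickson doubling: (a, b)(c, d) = (ac - conj(d) b, da + b conj(c)).
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    ac, db = cd_mul(a, c), cd_mul(conj(d), b)
    da, bc = cd_mul(d, a), cd_mul(b, conj(c))
    return [p - q for p, q in zip(ac, db)] + [p + q for p, q in zip(da, bc)]

def cross(u, v):
    # u x v = Im(uv) for imaginary octonions u, v in R^7.
    return cd_mul([0.0] + u, [0.0] + v)[1:]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

random.seed(0)
u, v, w = ([random.gauss(0, 1) for _ in range(7)] for _ in range(3))

# Antisymmetry, orthogonality, and the norm identity |u x v|^2 = |u ∧ v|^2:
assert max(abs(p + q) for p, q in zip(cross(u, v), cross(v, u))) < 1e-9
assert abs(dot(cross(u, v), u)) < 1e-9
assert abs(dot(cross(u, v), cross(u, v))
           - (dot(u, u) * dot(v, v) - dot(u, v) ** 2)) < 1e-9

# The "vector triple product" expression is NOT zero in R^7:
residual = [p + dot(u, v) * r - dot(u, w) * q
            for p, q, r in zip(cross(u, cross(v, w)), v, w)]
assert max(abs(t) for t in residual) > 1e-6
```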
Getting back to $G_2$ geometry: a $7$-dimensional smooth manifold $M$ is said to admit a $G_2$-structure if there is a reduction of the structure group of its frame bundle from $\mathrm{GL}(7, \mathbb R)$ to the group $G_2$ which can actually be viewed naturally as a subgroup of $\mathrm{SO}(7)$. For those familiar with $G$-structures, this tells you that a $G_2$-structure determines a Riemannian metric and an orientation. In fact, one can show that on a manifold with a $G_2$-structure, there exists a non-degenerate $3$-form $\varphi$ for which, given a point $p$ on $M$, there exist local coordinates near $p$ such that, in those coordinates, at the point $p$, the form $\varphi$ is exactly the associative $3$-form on $\mathbb R^7$ discussed above. Now one can show that there is a way to canonically determine both a metric and an orientation in a highly non-linear way from this $3$-form $\varphi$. Then one can define a cross product $\times$ by essentially using the metric to ``raise an index'' on $\varphi$. In summary, a manifold $(M, \varphi)$ with $G_2$-structure comes equipped with a metric, cross product, $3$-form, and orientation, which satisfy \begin{equation} \varphi(u,v,w) = \langle u \times v , w \rangle. \end{equation} This is exactly analogous to the data of an almost Hermitian manifold, which comes with a metric, an almost complex structure $J$, a $2$-form $\omega$, and an orientation, which satisfy \begin{equation} \omega(u,v) = \langle Ju , v \rangle. \end{equation} Essentially, a manifold admits a $G_2$-structure if one can identify each of its tangent spaces with the imaginary octonions $\mathrm{Im} \mathbb O \cong \mathbb R^7$ in a smoothly varying way, just as an almost Hermitian manifold is one in which we can identify each of its tangent spaces with $\mathbb C^m$ (together with its Euclidean inner product) in a smoothly varying way. For a manifold to admit a $G_2$-structure, the necessary and sufficient conditions are that it be orientable and spin.
(This is equivalent to the vanishing of the first two Stiefel-Whitney classes.) So there are lots of such $7$-manifolds, just as there are lots of almost Hermitian manifolds. But the story does not end there. Let $(M, \varphi)$ be a manifold with $G_2$-structure. Since it determines a Riemannian metric $g_{\varphi}$, there is an induced Levi-Civita covariant derivative $\nabla$, and one can ask if $\nabla \varphi = 0$? If this is the case, $(M, \varphi)$ is called a $G_2$-manifold, and one can show that the Riemannian holonomy of $g_{\varphi}$ is contained in the group $G_2 \subset \mathrm{SO}(7)$. These are much harder to find, because it involves solving a fully non-linear partial differential equation for the unknown $3$-form $\varphi$. They are in some ways analogous to Kahler manifolds, which are exactly those almost Hermitian manifolds that satisfy $\nabla \omega = 0$, but those are much easier to find. One reason is because the metric $g$ and the almost complex structure $J$ on an almost Hermitian manifold are essentially independent (they just have to satisfy the mild condition of compatibility) whereas in the $G_2$ case, the metric and the cross product are determined non-linearly from $\varphi$. However, the analogy is not perfect, because one can show that when $\nabla \varphi = 0$, the Ricci curvature of $g_{\varphi}$ necessarily vanishes. So $G_2$-manifolds are always Ricci-flat! (This is one reason that physicists are interested in such manifolds---they play a role as ``compactifications'' in $11$-dimensional $M$-theory analogous to the role of Calabi-Yau $3$-folds in $10$-dimensional string theory.) So in some sense $G_2$-manifolds are more like Ricci-flat Kahler manifolds, which are the Calabi-Yau manifolds. In fact, if we allow the holonomy to be a proper subgroup of $G_2$, there are many examples of $G_2$-manifolds. 
For example, the flat torus $T^7$, or the product manifolds $T^3 \times CY2$ and $S^1 \times CY3$, where $CYn$ is a Calabi-Yau $n$-fold, have Riemannian holonomy groups properly contained in $G_2$. These are in some sense ``trivial'' examples because they reduce to lower-dimension constructions. The manifolds with full holonomy $G_2$ are also called irreducible $G_2$-manifolds and it is precisely these manifolds that Bryant, Bryant-Salamon, Joyce, and Kovalev constructed. We are lacking a ``Calabi-type conjecture'' which would give necessary and sufficient conditions for a compact $7$-manifold which admits $G_2$-structures to admit a $G_2$-structure which is parallel ($\nabla \varphi = 0$.) Indeed, we don't even know what the conjecture should be. There are topological obstructions which are known, but we are far from knowing sufficient conditions. In fact, this question is more similar to the following: suppose $M^{2n}$ is a compact, smooth, $2n$-dimensional manifold that admits almost complex structures. What are necessary and sufficient conditions for $M$ to admit Kahler metrics? We certainly know many necessary topological conditions, but (as far as I know, and correct me if I am wrong) we are nowhere near knowing sufficient conditions. What makes the Calabi conjecture tractable (I almost said easy, of course it is anything but easy) is the fact that we already start with a Kahler manifold (holonomy $\mathrm{U}(m)$ metric) and want to reduce the holonomy by only $1$ dimension, to $\mathrm{SU}(m)$. Then the $\partial \bar \partial$-lemma in Kahler geometry allows us to reduce the Calabi conjecture to a (albeit fully non-linear) elliptic PDE for a single scalar function . Any analogous ``conjecture'' in either the Kahler or the $G_2$ cases would have to involve a system of PDEs, which are much more difficult to deal with. That's my not-so-short crash course in $G_2$-geometry. I hope some people read all the way to the end of this...
{ "source": [ "https://mathoverflow.net/questions/49357", "https://mathoverflow.net", "https://mathoverflow.net/users/3901/" ] }
49,365
Generate $S_n$ by the transpositions $s_i$ of $i$ and $i+1$. Both $S_3$ and $S_4$ have single elements of maximal word norm associated with this presentation. In fact, the Cayley graph of $S_3$ can be seen as a tiling of $S^1$, and the Cayley graph of $S_4$ a tiling of $S^2$. The element of maximal length is then antipodal to $e$. Does every symmetric group $S_n$ have a single element of maximal word norm? If so, is there a formula for its length $l(n)$?
It is amazing how a fact that I was taught in a middle school can be proved using big theories where I don't understand half of the words. Let me add a straightforward proof (for $S_n$ and only $S_n$). For a permutation $\sigma:\{1,\dots,n\}\to\{1,\dots,n\}$, let $\lambda(\sigma)$ denote the number of inversions in $\sigma$, that is the number of pairs $(i,j)$ such that $i<j$ and $\sigma(i)>\sigma(j)$. Then $\lambda(\sigma)$ equals the length of $\sigma$ with respect to the generating set $\{s_i\}$. Indeed, left-multiplying $\sigma$ by $s_i$ only interchanges $\sigma(i)$ and $\sigma(i+1)$, and hence changes $\lambda(\sigma)$ by at most 1. Therefore the length is bounded below by $\lambda$. On the other hand, if $\sigma$ is not the identity, there exists $i$ such that $\sigma(i+1)<\sigma(i)$, then left-multiplying by $s_i$ decreases $\lambda(\sigma)$ by 1. Repeating this procedure, one reaches the identity from $\sigma$ by exactly $\lambda(\sigma)$ multiplications by generators. Now it is clear that the maximum length equals $n(n-1)/2$ and is attained only at the order reversing permutation (the one given by $\sigma(i)=n+1-i$ for all $i$).
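For a small case the identification of word length with inversion count can be confirmed by brute force; here is a sketch comparing BFS distances in the Cayley graph of $S_4$ (24 vertices) with inversion counts:

```python
from collections import deque

def cayley_distances(n):
    """BFS word lengths in the Cayley graph of S_n w.r.t. adjacent transpositions."""
    start = tuple(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for i in range(n - 1):
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]   # apply an adjacent transposition
            q = tuple(q)
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    return dist

def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

n = 4
dist = cayley_distances(n)
assert len(dist) == 24                                   # all of S_4 is reached
assert all(d == inversions(p) for p, d in dist.items())  # length = inversion count
assert max(dist.values()) == n * (n - 1) // 2            # maximal length is 6
# Unique longest element: the order-reversing permutation.
longest = [p for p, d in dist.items() if d == max(dist.values())]
assert longest == [(3, 2, 1, 0)]
```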
{ "source": [ "https://mathoverflow.net/questions/49365", "https://mathoverflow.net", "https://mathoverflow.net/users/11557/" ] }
49,415
The following is a FAQ that I sometimes get asked, and it occurred to me that I do not have an answer that I am completely satisfied with. In Rudin's Principles of Mathematical Analysis, following Theorem 3.29, he writes: One might thus be led to conjecture that there is a limiting situation of some sort, a “boundary” with all convergent series on one side, all divergent series on the other side—at least as far as series with monotonic coefficients are concerned. This notion of “boundary” is of course quite vague. The point we wish to make is this: No matter how we make this notion precise, the conjecture is false. Exercises 11(b) and 12(b) may serve as illustrations. Exercise 11(b) states that if $\sum_n a_n$ is a divergent series of positive reals, then $\sum_n a_n/s_n$ also diverges, where $s_n = \sum_{i=1}^n a_i$. Exercise 12(b) states that if $\sum_n a_n$ is a convergent series of positive reals, then $\sum_n a_n/\sqrt{r_n}$ converges, where $r_n = \sum_{i\ge n} a_i$. Although these two exercises are suggestive, they are not enough to convince me of Rudin’s strong claim that no matter how we make this notion precise, the conjecture is false. Are there any stronger theorems in this direction? Edit. For example, are there any theorems about the topology/geometry of the spaces of all convergent/divergent series, where a series is viewed as a point in $\mathbb{R}^\infty$ or $(\mathbb{R}^+)^\infty$ in the obvious way?
A rather detailed discussion of the subject can be found in Knopp's Theory and Application of Infinite Series (see § 41, pp. 298–305). He mentions that the idea of a possible boundary between convergent and divergent series was suggested by du Bois-Reymond. There are many negative (and mostly elementary) results showing that no such boundary, in whatever sense it might be defined, can exist. Stieltjes observed that for an arbitrary monotone decreasing sequence $(\epsilon_n)$ with the limit $0$, there exist a convergent series $\sum c_n$ and a divergent series $\sum d_n$ such that $c_n=\epsilon_nd_n$. (This can be easily deduced from the Abel–Dini theorem). Pringsheim remarked that, for a convergent and a divergent series with positive terms, the ratio $c_n/d_n$ can assume all possible values, since one may have simultaneously $$\liminf\frac{c_n}{d_n}=0\qquad\mbox {and}\qquad\limsup\frac{c_n}{d_n}=\infty.$$ I like the following geometric interpretation. Given a (convergent or divergent) series $\sum a_n$, let's mark the sequence of points $(n,a_n)\in\mathbb R^2$ and join the consecutive points by straight segments. Then there is a convergent series $\sum c_n$ and a divergent series $\sum d_n$ (both with positive and monotonically decreasing terms) such that the corresponding polygonal graphs can intersect in an indefinite number of points. The results remain essentially unaltered even if one requires that both sequences $(c_n)$ and $(d_n)$ are fully monotone, which is a very strong monotonicity assumption. This was shown by Hahn ("Über Reihen mit monoton abnehmenden Gliedern", Monatsh. für Math., Vol. 33 (1923), pp. 121–134).
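Pringsheim's remark is easy to illustrate concretely. The pair below (my own illustrative choice, not an example taken from Knopp) has $\sum c_n$ convergent and $\sum d_n$ divergent, yet $c_n/d_n \to \infty$ along the squares and $\to 0$ off them:

```python
import math

def d(n):
    return 1.0 / n                  # harmonic series: divergent

def c(n):
    k = math.isqrt(n)
    if k * k == n:                  # on perfect squares n = k^2 ...
        return 1.0 / k ** 1.5       # ... terms comparable to sum 1/k^{3/2}: convergent
    return 1.0 / n ** 3             # elsewhere: tiny

# c_n / d_n = sqrt(k) -> infinity along n = k^2, but -> 0 off the squares:
for k in (10, 100, 1000):
    assert abs(c(k * k) / d(k * k) - math.sqrt(k)) < 1e-9
assert c(10**6 + 1) / d(10**6 + 1) < 1e-9

# Partial sums: sum c_n stays bounded while sum d_n keeps growing.
N = 100_000
assert sum(c(n) for n in range(1, N)) < 4.0
assert sum(d(n) for n in range(1, N)) > 10.0
```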
{ "source": [ "https://mathoverflow.net/questions/49415", "https://mathoverflow.net", "https://mathoverflow.net/users/3106/" ] }
49,426
The usual category of measure spaces consists of objects $(X, \mathcal{B}_X, \mu_X)$, where $X$ is a space, $\mathcal{B}_X$ is a $\sigma$-algebra on $X$, and $\mu_X$ is a measure on $X$, and measure preserving morphisms $\phi \colon (X, \mathcal{B}_X, \mu_X) \to (Y, \mathcal{B}_Y, \mu_Y)$ such that $\phi_\ast \mu_X(E) = \mu_X(\phi^{-1}(E)) = \mu_Y(E)$ for all $E \in \mathcal{B}_Y$. The category of measurable spaces consists of objects $(X, \mathcal{B}_X)$ and measurable morphisms $\phi \colon (X, \mathcal{B}_X) \to (Y, \mathcal{B}_Y)$. Products exist in the category of measurable spaces. They coincide with the standard product $(X \times Y, \mathcal{B}_X \times \mathcal{B}_Y)$, where $X \times Y$ is the Cartesian product of $X$ and $Y$ and $\mathcal{B}_X \times \mathcal{B}_Y$ is the coarsest $\sigma$-algebra on $X\times Y$ such that the canonical projections $\pi_X \colon X \times Y \to X$ and $\pi_Y \colon X \times Y \to Y$ are measurable. Equivalently, $\mathcal{B}_X \times \mathcal{B}_Y$ is the $\sigma$-algebra generated by the sets $E \times F$ where $E \in \mathcal{B}_X$ and $F \in \mathcal{B}_Y$. However, in the category of measure spaces, products do not exist. The first obstacle is that the canonical projection $\pi_X \colon X \times Y \to X$ may not be measure preserving. A simple example is the product of $(\mathbf{R}, \mathcal{B}[\mathbf{R}], \mu)$ with itself, where $\mathcal{B}[\mathbf{R}]$ is the Borel $\sigma$-algebra on $\mathbf{R}$. In this case, $(\pi_{\mathbf{R}})_\ast\mu\times \mu([0,1]) = \mu\times \mu(\pi_\mathbf{R}^{-1}([0,1])) = \mu \times \mu([0,1]\times \mathbf{R}) = \infty \neq 1 = \mu([0,1])$. In addition, there may be multiple measures on $X\times Y$ whose pushforwards on $X$ and $Y$ are $\mu_X $ and $\mu_Y$. 
Terry Tao mentions that from the perspective of probability, this reflects that the distribution of random variables $X$ and $Y$ is not enough to determine the distribution of $(X, Y)$ because $X$ and $Y$ are not necessarily independent. Given that the products in the usual category fail to exist, is it possible to define a new categorical structure on the class of measure spaces such that products do exist?
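The non-uniqueness of the joint measure can already be seen in a two-point discrete toy example (added here for illustration, not part of the original question): two distinct probability measures on $\{0,1\}\times\{0,1\}$ with identical pushforwards to both factors.

```python
# Two different joint distributions on {0,1} x {0,1} with identical marginals:
independent = {(i, j): 0.25 for i in (0, 1) for j in (0, 1)}          # product coupling
diagonal = {(i, j): (0.5 if i == j else 0.0) for i in (0, 1) for j in (0, 1)}

def marginals(mu):
    # Pushforwards under the two coordinate projections.
    mx = tuple(sum(p for (i, j), p in mu.items() if i == k) for k in (0, 1))
    my = tuple(sum(p for (i, j), p in mu.items() if j == k) for k in (0, 1))
    return mx, my

# Same marginals, different joint measures: marginals do not determine the product.
assert marginals(independent) == marginals(diagonal) == ((0.5, 0.5), (0.5, 0.5))
assert independent != diagonal
```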
To clarify Chris Heunen's answer, let me point out that most notions of measure theory have analogs in the category of smooth manifolds. For example, the analog of a measure space (X,M,μ), where X is a set, M is a σ-algebra of measurable subsets of X, and μ is a measure on (X,M), is a smooth manifold X equipped with a density μ. Likewise, the analog of a measure-preserving morphism is a volume-preserving smooth map. The category of smooth manifolds equipped with a density together with volume preserving maps as morphisms does not have good categorical properties. The problem comes from the fact that preservation of volume is too strong a condition to allow for good categorical properties. If one drops the data of a density and the property of volume preservation, then the resulting category of smooth manifolds has relatively good categorical properties, such as existence of finite products and, more generally, existence of all finite transversal limits. The same is true for the category of measure spaces. However, in this case we cannot simply drop the data of a measure from the definition and expect to get a category with good properties. The reason for this is that the data of a measure in fact combines two independent pieces of data. The first piece of data tells us which sets have measure zero and which ones don't. The second piece of data tells us the actual values of measure on sets of non-zero measure. The analog of dropping the data of a density for measure theory is dropping the second piece of data described above, but not the first one. This can already be seen for smooth manifolds: If we have a smooth manifold, we don't need a density on it to say which sets have measure 0. Thus one is naturally led to the notion of a measurable space that knows which measurable sets have measure 0. I describe it in more detail in [1], therefore here I offer only a brief summary of the main definitions. 
A measurable space is a triple (X,M,N), where X is a set, M is a σ-algebra of measurable subsets of X, and N⊂M is a σ-ideal of measure 0 sets. A morphism of measurable spaces f: (X,M,N)→(Y,P,Q) is an equivalence class of maps of sets g: X→Y such that the preimage of every element of P is an element of M and the preimage of every element of Q is an element of N. Two maps g and h are equivalent if they differ on a set of measure 0. Here for the sake of simplicity I assume that both measurable spaces are complete (any subset of a measure 0 subset is again a measure 0 subset). Every measurable space is equivalent to its completion [2], hence we do not lose anything by restricting ourselves to complete measurable spaces. In general, one has to modify the above definition to account for incompleteness, as explained in the link above. Finally, one has to require that measurable spaces are localizable. One way to express this property is to say that the Boolean algebra M/N of equivalence classes of measurable sets is complete (i.e., it has arbitrary suprema and infima). Many basic theorems of measure theory fail without this property. In fact, as explained in the link above, theorems such as Radon-Nikodym theorem and Riesz representation theorem are equivalent to the property of localizability. Remarkably, the category of localizable measurable spaces is equivalent to the opposite category of the category of commutative von Neumann algebras [7]. This statement can be seen as another justification for the property of localizability. Henceforth I assume that all measurable spaces are localizable. Being in possession of a good category of measurable spaces we can prove that it admits finite products, and, more generally, arbitrary finite limits.
Let me point out that at this point measure theory diverges from smooth manifolds: The functor that sends a smooth manifold to its underlying measurable space is colax monoidal but not strong monoidal with respect to the monoidal structures given by the categorical products. More precisely, the category of measurable spaces admits two natural product-like monoidal structures. One is given by the categorical product mentioned above, and the other one is the spatial product, which is often simply called the product in many textbooks on measure theory. By the universal property of product there is a canonical map from the spatial product to the categorical product, which is a monomorphism but not an isomorphism unless one of the spaces is atomic, i.e., a disjoint union of points. The forgetful functor F from the category of smooth manifolds to the category of measurable spaces is strong monoidal with respect to the categorical product on the category of smooth manifolds and the spatial product on the category of measurable spaces. Therefore it is also colax monoidal with respect to the categorical product on measurable spaces, but not strong monoidal because F is essentially surjective on objects and the map from the spatial product to the categorical product is not always an isomorphism. Here is an instructive example for the last statement. Let Z=F(R), where R is the real line considered as a smooth manifold. Consider the spatial product of Z and Z, which is canonically isomorphic to F(R×R). The spatial product maps monomorphically to the categorical product Y=Z×Z. However, Y is much bigger than F(R×R). For example, the diagonal map Z→Y=Z×Z is disjoint from F(R×R) in Y. The space Y also has a lot of other subspaces whose existence is guaranteed by the universal property. Note that set-theoretically F(R×R) also has a diagonal subset. However, this subset has measure 0 and therefore is invisible in this formalism. 
Let me finish by mentioning that locales arguably provide much better formalism for measure theory, which in particular does not suffer from problems with sets of measure 0, e.g., we don't need to pass to equivalence classes or even mention the words “almost everywhere”. The relevant functor sends a measurable space (X,M,N) to the locale M/N (as explained above M/N is a complete Boolean algebra, hence a locale). We obtain a faithful functor from the category of measurable spaces to the category of locales. Let's call its image the category of measurable locales. Then measurable morphisms of measurable locales correspond bijectively to equivalence classes of measurable maps. Note that we don't need to pass to equivalence classes to define a measurable morphism of measurable locales. Every measurable space (or locale) can be uniquely decomposed into its atomic and diffuse part. The atomic part is a disjoint union of points and the diffuse part does not have any isolated points. An example of a diffuse space is given by F(M) where M is a smooth manifold of non-zero dimension. (In fact all these spaces are isomorphic as measurable spaces if the number of connected components of M is countable.) Here is the punchline: If Z is a diffuse measurable locale, then it does not have any points, in particular it is non-spatial. This can serve as an explanation of why we cannot construct a reasonable concrete category of measurable spaces and why we always have to use equivalence classes if we want to stay in the point-set measure theory. References that advocate (but do not evangelize) the viewpoint described in this answer: [1] Is there an introduction to probability theory from a structuralist/categorical perspective? [2] What's the use of a complete measure? [3] Monoidal structures on von Neumann algebras [4] Why do probabilists take random variables to be Borel (and not Lebesgue) measurable? [5] Is there a measure zero set which isn't meagre?
[6] When is $L^2(X)$ separable? [7] Reference for the Gelfand-Neumark theorem for commutative von Neumann algebras [8] Decomposition of an abelian von Neumann algebra [9] Subfactor theory and Hilbert von Neumann Algebras [10] Problems where we can't make a canonical choice, solved by looking at all choices at once [11] Integration of differential forms using measure theory? [12] Can we characterize the spatial tensor product of von Neumann algebras categorically? [13] Which complete Boolean algebras arise as the algebras of projections of commutative von Neumann algebras? [14] Conditional Expectation for $\sigma$-finite measures
{ "source": [ "https://mathoverflow.net/questions/49426", "https://mathoverflow.net", "https://mathoverflow.net/users/11568/" ] }
49,548
Consider an algebraic vector bundle $E$ on a scheme $X$. By definition there is an open cover of $X$ consisting of open subsets on which $E$ is trivial and if $X$ is quasi-compact, a finite cover suffices. The question then is simply: what is the minimum number of open subsets for a cover which trivializes $E$ ? Now this is silly because the answer obviously depends on $E$ ! If $E$ is trivial to begin with, the cover consisting of just $X$ will do, of course, but if you take $\mathcal O(1)$ on $\mathbb P^n_k$ you won't get away with less than $n+1$ trivializing open subsets . Here is why. Suppose you have $n$ open subsets $U_i\subset \mathbb P^n_k$ over which $\mathcal O(1)$ is trivial. Take regular nonzero sections $s_i\in \Gamma(U_i,\mathcal O(1) )$ and extend them rationally to $\mathbb P^n_k$. Each such extended rational section $\tilde {s_i}$ will have a divisor $D_i$ and the complements $\tilde U_i= X\setminus |D_i|$, $(U_i\subset \tilde U_i)$, of the supports of those divisors will give you a cover of $\mathbb P^n_k$ by $n$ affine open subsets trivializing $\mathcal O(1)$. But this is impossible , because $n$ hypersurfaces in $\mathbb P^n_k$ cannot have empty intersection. This, conversations with colleagues and some vague considerations/analogies have led me to guess ( I am certainly not calling my rather uninformed musings a conjecture) that the following question might have a positive answer: Is it true that on a (complete) algebraic variety of dimension $n$ every vector bundle is trivialized by some cover consisting of at most $n+1$ open sets? For example, the answer is indeed yes for a line bundle on a (not necessarily complete) smooth curve $X$: every line bundle $L$ on $X$ can be trivialized by two open subsets . Edit Needless to say I'm overjoyed at Angelo's concise and brilliant positive answer. In the other direction ( trivialization with too few opens to be shown impossible) I would like to generalize my observation about projective space. 
So my second question is: Consider a (very) ample line bundle $L$ on a complete variety $X$ and a rational section $s \in \Gamma _{rat} (X, L) $. Is it true that its divisor $D= div (s)$ has a support $|D|$ whose complement $X\setminus |D|$ is affine ? Let me emphasize that the divisor $D$ is not assumed to be effective, and that is where I see a difficulty.
This is true if we assume that the vector bundles has constant rank (it is clearly false if we allow vector bundles to have different ranks at different points). Let $U_1$ be an open dense subset of $X$ over which $E$ is trivial, and let $H_1$ be a hypersurface containing the complement of $U_1$. Then $E$ is trivial over $X \smallsetminus H_1$. Now, it is easy to see that there exists an open subset $U_2$ of $X$, containing the generic points of all the components of $H_1$, over which $E$ is trivial (this follows from the fact that a projective module of constant rank over a semi-local ring is free). Let $H_2$ be a hypersurface in $X$ containing the complement of $U_2$, but not containing any component of $H_1$. Then we let $U_3$ be an open subset of $X$ containing the generic points of the components of $H_1 \cap H_2$, and let $H_3$ be a hypersurface containing the complement of $U_3$, but not the generic points of the components of $H_1 \cap H_2$. After we get to $H_{n+1}$, the intersection $H_1 \cap \dots \cap H_{n+1}$ will be empty, and the complements of the $H_i$ will give the desired cover. [Edit]: now that I think about it, you don't even need the hypersurfaces, just define the $H_i$ to be complement of the $U_i$.
{ "source": [ "https://mathoverflow.net/questions/49548", "https://mathoverflow.net", "https://mathoverflow.net/users/450/" ] }
49,551
This question is motivated by an earlier MO question comparing the infinite direct sum and the infinite direct product of a ring. It is well-known that an infinite dimensional vector space is never isomorphic to its dual. More precisely, let $k$ be a field and $I$ be an infinite set. Let $E=k^{(I)}=\oplus_{i \in I} k$ be the $k$-vector space with basis $I$, so that $E^{*}$ can be identified with $k^I = \prod_{i \in I} k$. Then a stronger result asserts that the dimension of $E^{*}$ over $k$ is equal to the cardinality of $k^I$. This is proved in Jacobson, Lectures in Abstract Algebra, Vol. 2, Chap. 9, $\S$ 5 (Jacobson deduces it from a lemma which he attributes to Erdős and Kaplansky). Summarizing, we have \begin{equation} \operatorname{dim}_k (k^I) = \operatorname{card} k^I. \end{equation} Now, if $V$ is any $k$-vector space, we can ask for the dimension of $V^I$. Does the Erdős–Kaplansky theorem extend to this setting? Is it true that for any vector space $V$ and any infinite set $I$, we have $\operatorname{dim} V^I = \operatorname{card} V^I$? More generally, given a family of nonzero vector spaces $(V_i)$ indexed by $I$, is it true that $\operatorname{dim} \prod_{i \in I} V_i = \prod_{i \in I} \operatorname{card} V_i$? If $V$ is isomorphic to $k^J$ for some set $J$, then the result holds as a consequence of Erdős–Kaplansky. In the general case, we have $V \cong k^{(J)}$, and we can assume that $J$ is infinite. In this case I run into difficulties in computing the dimension of $V^I$. I can only prove that $\operatorname{dim} V^I \geq \operatorname{card} k^I \cdot \operatorname{card} J$.
The answer to both questions is yes. As a preliminary, let's prove the following for any infinite-dimensional vector space $V$. Lemma: $card(V) = card(k) \cdot \dim V$ Proof: Since $card(k) \leq card(V)$ and $\dim V \leq card(V)$, the inequality $$card(k) \cdot \dim V \leq card(V)^2 = card(V)$$ is obvious. On the other hand, any element of $V$ is uniquely of the form $\sum_{j \in J} a_j e_j$ for some finite subset $J$ of (an indexing set of) a basis $B$ and all $a_j$ nonzero. So an upper bound of $card(V)$ is $card(P_{fin}(B)) \sup_{j \in P_{fin}(B)} card(k)^j$. If $B$ is infinite, then $card(P_{fin}(B)) = card(B) = \dim(V)$, and for all finite $j$ we have $card(k^j) \leq card(k)$ if $k$ is infinite, and $card(k^j) \leq \aleph_0$ if $k$ is finite, and either way we have $$card(V) \leq \dim V \cdot \max\{card(k), \aleph_0\} \leq \dim V \cdot card(k)$$ as desired. $\Box$ The rest is now easy. Suppose $I$ is an infinite set, and suppose without loss of generality that $V_i$ is nontrivial for all $i \in I$. Put $V = \prod_{i \in I} V_i$. We have $$\dim V \geq \dim k^I = card(k^I) \geq card(k)$$ where the equality is due to Erdős and Kaplansky. Therefore $$\dim(V) = \dim(V)^2 \geq \dim V \cdot card(k) = card(V) = \prod_i card(V_i)$$ by the lemma above.
{ "source": [ "https://mathoverflow.net/questions/49551", "https://mathoverflow.net", "https://mathoverflow.net/users/6506/" ] }
49,647
Consider the following question: 1) For a given natural number $a$, are there finitely or infinitely many natural numbers that are not of the form $anm \pm n\pm m$, where $m$ and $n$ range over positive integers? (For $a=1$ or $a=2$ you get all the natural numbers.) Does this problem appear in the literature? As one can see at MO Scribe's question Chen's Theorem with congruence conditions, it is the same as asking whether there are infinitely many $k$ such that both $ak+1$ and $ak-1$ do not have any non-trivial factors of the form $\pm 1 \mod a$. I give a proof that for $a=6$ the question is equivalent to the twin prime conjecture, so it is known that we don't have any proof. But what about other values of $a$? Is the problem for $a=100$ or more of the same difficulty? 2) Is the density (Szemerédi's or Schnirelmann's) of the numbers that are not of this form zero for every value of $a$? 3) From Viggo Brun's theorem we have that the sum of the reciprocals of the twin primes converges. Does the sum of the reciprocals of the numbers that are not of this form converge for every value of $a$? 4) For a given natural number $a$, are there infinitely many $k$ such that both $ak+1$ and $ak-1$ do not have any $prime$ factors of the form $\pm 1 \mod a$? (the same questions for these $k$ as in 2) and 3)) 5) And the easiest: for which $a$ do we have a proof that there are infinitely many $k$ such that both $ak+1$ and $ak-1$ are either prime or can be written as a product of two numbers both not of the form $\pm 1 \mod a$? I guess that if $\phi(a)$ is big enough we can have such a proof (the same questions for these $k$ as in 2) and 3)). NOTE: In 1) and 4), both $ak+1$ and $ak-1$ can be either primes or products of primes not of the form $\pm 1 \mod a$, but in 1) no subproduct of them can be of this form.
As MO Scribe noticed, the conjectural answer is that there should be infinitely many such pairs, because we expect there to be infinitely many prime pairs satisfying any reasonable congruence condition. https://math.stackexchange.com/questions/15075/do-we-have-a-proof-of-the-infiniteness There are infinitely many twin primes if and only if there are infinitely many natural numbers that are not of the form $6nm \pm n \pm m$. Proof: Every number that is not a multiple of $2$ or $3$ is of the form $6N\pm 1$. So the only pairs that are not divisible by $2$ or $3$ are $(6N-1,6N+1)$ for any $N$. Now are there infinitely many such prime pairs (twin primes)? If the number $6N-1$ is prime, it cannot be written as a product of numbers $6n+1$ and $6m-1$ for any $n,m > 0$. Now $(6n+1)(6m-1)=6(6nm-n+m)-1$, which means that $N$ should not be of the form $6nm-n+m$ for any $n,m>0$. Similarly, if $6N+1$ is a prime it cannot be a product $(6n-1)(6m-1) = 6(6nm-n-m)+1$ or $(6n+1)(6m+1) = 6(6nm+n+m)+1$. This means that we have a prime couple of the form $(6N-1,6N+1)$ if and only if $N$ is not of the form $6nm \pm n \pm m$ for any $n,m$. NOTE: After I edited this observation-question I realised that it is well known that for $a=6$ it is equivalent to the twin prime conjecture (as is written in the answer by Luis below too), an observation that S. Golomb seems to have made first, but my question is focused on the other values of $a$. If someone wants to add something http://mathoverflow.tqft.net/discussion/921/reedited/#Item_1
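The equivalence above is easy to check numerically. Here is a small sanity-check sketch (my own illustration, not part of the original post): for small $N$ it verifies that $N$ fails to be of the form $6nm \pm n \pm m$ exactly when $(6N-1, 6N+1)$ is a twin prime pair.

```python
def is_prime(n):
    # trial division; fine for the small numbers checked here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def representable(N, a=6):
    # Is N of the form a*n*m + s1*n + s2*m with n, m >= 1 and s1, s2 in {+1, -1}?
    for n in range(1, N + 1):
        if a * n - n - 1 > N:  # even the smallest value at this n exceeds N
            break
        for m in range(1, N + 1):
            if a * n * m - n - m > N:  # all four forms only grow with m
                break
            if N in (a*n*m + n + m, a*n*m + n - m, a*n*m - n + m, a*n*m - n - m):
                return True
    return False

for N in range(1, 200):
    twin = is_prime(6 * N - 1) and is_prime(6 * N + 1)
    assert representable(N) == (not twin), N

print("checked N = 1..199")
```

The assertions pass for every $N$ in the range, matching the proof given above.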
$\newcommand\Z{\mathbf{Z}}$ $\newcommand\Q{\mathbf{Q}}$ (Caveat: normally I wouldn't answer a question with such a limited knowledge of the general theory, but classical analytic number theory seems not so well represented by active MO members.) Suppose that $A$ is a finite abelian group. Then I claim that given any set of at least $|A| + 1$ (not necessarily distinct) elements of $A$ one can find a proper subset whose sum is the identity. Proof: Denote the elements $a_i$ for $i = 1$ to $|A|+1$. By the pigeonhole principle, either one of the $|A|$ sums $\sum_{i=1}^{r} a_i$ for $r = 1$ to $|A|$ is the identity, or two of the sums are the same element of $|A|$, in which case consider the difference. We deduce from this the following: Let $n$ be any integer coprime to $a$ with more than $r:=|(\Z/a \Z)^{\times}|$ prime factors. Then $n$ has a proper divisor of the form $1 \mod a$. Suppose that $k$ cannot be represented by the form $amn \pm m \pm n$, and suppose that $a > 2$. It is simple to deduce that this is equivalent to asking that $ak+1$ and $ak-1$ have no proper divisors of the form $\pm 1 \mod a$. It follows that $ak+1$ and $ak-1$ each have at most $r=|(\Z/a \Z)^{\times}|$ prime factors. The integers with at most $r$ prime factors are sometimes called $r$-almost primes . If $\pi_r(x)$ counts the number of $r$-almost primes $\le x$ then $$\pi_r(x) \sim \frac{x (\log \log x)^{r-1}}{\log(x)}.$$ (Compare this to the prime number theorem when $r = 1$.) In particular, we see that the $r$-almost primes have zero density (in any sense), and thus: 2) The density of integers that can not be represented in the form $amn + m + n$ is zero. Similarly, the density of integers that cannot be represented in the form $amn + m -n$ is zero. In particular, the density of the $a$-asterios numbers, the integers that can neither be represented in the form $amn+m+n$ nor $amn+m-n$, is zero. 
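The pigeonhole lemma at the start of this answer can also be confirmed by brute force for a small cyclic group. The following sketch (my own illustration, not part of the original answer) exhaustively checks that any $n+1$ elements of $\mathbf{Z}/n\mathbf{Z}$, with repetition allowed, contain a nonempty proper subset summing to $0$.

```python
from itertools import combinations, product

def has_zero_proper_subset_sum(elems, n):
    # Search for a nonempty proper subset of elems summing to 0 in Z/nZ.
    for size in range(1, len(elems)):
        for sub in combinations(elems, size):
            if sum(sub) % n == 0:
                return True
    return False

# Exhaustively check the lemma for A = Z/nZ with small n:
# any |A| + 1 elements (with repetition) admit such a subset.
for n in (2, 3, 4, 5):
    assert all(has_zero_proper_subset_sum(t, n)
               for t in product(range(n), repeat=n + 1))

print("lemma verified for Z/nZ, n = 2..5")
```

This matches the prefix-sum argument: among the first $|A|$ partial sums, either one is $0$ or two coincide, and the difference of two coinciding partial sums is a consecutive block of at most $|A|$ of the $|A|+1$ elements.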
Let $\pi_{r,2}(x)$ denote the number of twin $r$-almost primes less than or equal to $x$, that is, the number of integers $n \le x$ such that $n$ and $n+2$ are both $r$-almost primes. (For example, $\pi_{1,2}(x)$ counts the number of twin primes less than $x$.) What do we know about this function? Brun was the first to give an upper bound for $\pi_{r,2}(x)$ using sieving techniques. Refinements by others (in particular Selberg) allowed one to obtain the estimate $$\pi_{1,2}(x) \ll \frac{x}{(\log x)^2},$$ which gives the correct (conjectural) order of magnitude. Without going into the Selberg sieve, let me say that what these arguments really give is decent upper and lower bounds of the following kind (for large $x$): $$\frac{A x}{(\log x)^2} < \#\left\{n < x, \ p \nmid n(n+2) \ \text{if} \ p < x^{\alpha}\right\} < \frac{B x}{(\log x)^2}$$ for non-zero constants $A$ and $B$, where $0 < \alpha < 1$ is some fixed small constant, which we might imagine for the sake of argument is $1/10$. Since every twin prime $>x^{\alpha}$ contributes to this sum, this gives the correct (up to a constant) upper bound for $\pi_{1,2}(x)$. It also gives a lower bound for $\pi_{10,2}(x)$, since if every factor of $n < x$ is at least $x^{1/10}$, then $n$ has at most $10$ prime factors. The motto I learnt about sieving was the following: upper bounds are easy, lower bounds are hard. Thus, since we are interested in bounding $\pi_{r,2}(x)$, it seems that we are in good shape. However, there is a subtlety here. Let $\pi(x,z)$ denote the number of integers $n$ less than $x$ such that every prime factor of $n$ is at least $z$. It's clear by the argument of the last paragraph that $\pi_r(x) \ge \pi(x,x^{1/r})$. One might imagine that these numbers are roughly of the same magnitude. However, it turns out that $\pi_r(x)$ is much bigger than $\pi(x,x^{1/r})$. The latter is comparable to the number of primes less than $x$, whereas the former has an extra factor of $(\log \log x)^{r-1}$.
The reason is that $\pi_r(x)$ is dominated by numbers with (a few) small prime factors. In fact, as Kowalski pointed out to me, it is not even obvious that one can easily obtain the correct upper bound for $\pi_r(x)$ simply by sieving over primes. From the asymptotic for $\pi_{r}(x)$, one expects that $$(*): \qquad \pi_{r,2}(x) =^{?} \ O\left(\frac{x (\log \log x)^{2r-2}}{(\log x)^2}\right).$$ (EDIT: My resident expert reports that this is known. Here is a sketch of the idea in the simpler case where we want to count pairs $n$ and $n+2$ where $n$ is a $2$-almost prime and $n+2$ is prime. First, for a small prime $p$, we want to find an upper bound for the number of $n < x$ such that $n$ is divisible by $p$ and both $n/p$ and $n+2$ are prime. This is a similar problem to counting twin primes, and in a similar way one obtains a bound of the form $O(x/\log x)$ (key point: the implied constant does not depend on $p$). If we wish to bound the number of pairs $(n,n+2)$ such that $n+2$ is prime and $n$ is a $2$-almost prime, we may instead count the triples $(p,n,n+2)$ where $p < x$ is prime, $n < x$ is divisible by $p$, $n/p$ is prime, and $n+2$ is prime. If for each $p < x$ we have a upper bound of $Ax/\log x$ (for the same $A$), in total we obtain the upper bound: $$ \frac{Ax}{\log x} \cdot \sum_{p < x} \frac{1}{p} \sim \frac{Ax \log \log x}{\log x}.$$ Of course, the devil is in the details! END EDIT) All one needs to answer 3) is that the exponent of $\log(x)$ in the denominator is $> 1$. 3). Assuming the expected result (*), the inverse sum of the $a$-asterios primes converges. Consider the set of integers $S_a$ which do not have any prime factors of the form $\pm 1 \mod a$. This is a reasonable thing to do whenever $|(\Z/a\Z)^{\times}| > 2$. This is a weaker condition, so there are more of these numbers and consequently obtaining upper bounds is harder. 
We may form the Dirichlet series $$L(s) = \sum_{S_a} \frac{1}{n^s}$$ which has an Euler product: $$L(s) = \prod_{p \not\equiv \pm 1} \left(1 - \frac{1}{p^s}\right)^{-1}$$ Now let $K = \Q(\zeta_a)^{+}$ be the totally real subfield of $\Q(\zeta_a)$. It has degree $r/2 = \phi(a)/2$, where $r > 2$ unless $a = 1,2,3,4$ or $6$. (What we say now only makes sense for $r \ge 2$, in which case $r/2 \in \Z$.) A prime splits completely in $K$ if and only if it is of the form $\pm 1 \mod a$. Looking at the Euler product of $\zeta_K(s)$, we see that, up to a constant which can be explicitly written as some product over primes, $$\zeta_K(s) L(s)^{r/2} \sim \zeta_{\Q}(s)^{r/2}$$ as $s \rightarrow 1^{+}$, and hence $L(s) \sim (s-1)^{(2-r)/r}$ (up to some constant) as $s \rightarrow 1^{+}$. We deduce (Perron's formula) that the number of integers $\le x$ all of whose prime factors are not of the form $\pm 1 \mod a$ is asymptotic to $$ \kappa \cdot \frac{x}{(\log x)^{2/r}},$$ for some non-zero constant $\kappa$. This is the same analysis that gives the asymptotic formula for the number of integers $\le x$ which can be written as a sum of two squares (a result of Landau). We immediately deduce: 4a) The number of integers $a$ such that $ak+1$ (or $ak-1$) has no prime factors of the form $\pm 1 \mod a$ has zero density. If $r > 2$ (so $a \ge 3$ and $a \ne 3,4,6$) then the power $2/r$ of $(\log x)$ is at most $1/2$. Thus, we actually are led to the following guess: 4b) If $r = \phi(a) > 2$, then one would heuristically expect the inverse sum of integers $k$ such that none of the prime factors of $ak-1$ and $ak+1$ are $\pm 1 \mod a$ diverges. If $r = 2$, so $a = 3$, $4$, or $6$, then (Brun) the series converges. Here is a related problem of the very same kind: can one count the number of integers $n \le x$ such that both $n$ and $n+1$ can be expressed as the sum of two squares, and prove that there are $\sim x/\log(x)$ such integers (perhaps up to non-zero constant factors)? 
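For the reader's convenience, here is the pole-order bookkeeping behind the asymptotic for $L(s)$ above (my own expansion of that step, under the same notation): $\zeta_K(s)$ has a simple pole at $s=1$, while $\zeta_{\mathbf{Q}}(s)^{r/2}$ blows up like $(s-1)^{-r/2}$, so solving for $L(s)$ gives the claimed exponent.

```latex
\[
\zeta_K(s)\,L(s)^{r/2} \sim C\,\zeta_{\mathbf{Q}}(s)^{r/2}
\;\Longrightarrow\;
\frac{c_K}{s-1}\,L(s)^{r/2} \sim \frac{C'}{(s-1)^{r/2}}
\;\Longrightarrow\;
L(s) \sim C''\,(s-1)^{(2-r)/r}
\qquad (s \to 1^{+}).
\]
```

Since $(2-r)/r = -(1-2/r)$, a Tauberian theorem of Delange type then converts this singularity into the stated count $\kappa\, x/(\log x)^{2/r}$.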
Any integer $n$ has a factor not of the form $\pm 1 \mod a$ unless every prime factor of $n$ is of the form $\pm 1 \mod a$. The integers all of whose prime factors are $\pm 1 \mod a$ can be analyzed exactly as in the last paragraph. In this case, the number of integers all of whose prime factors are of the form $\pm 1 \mod a$ is asymptotic to $$ \kappa \cdot \frac{x}{(\log x)^{(1-2/r)}}.$$ Suppose that $r > 2$. Then the set of such integers has density zero, and thus the set of integers which have a factor not of the form $\pm 1 \mod a$ has density one. Any set of density one has infinitely many "twins" satisfying any fixed congruence condition. Hence: 5) If $r = \phi(a) > 2$, then there are infinitely many $k$ such that both $ak+1$ and $ak-1$ have a factor not of the form $\pm 1 \mod a$. Indeed, such numbers have density one. Finally, I have nothing to say about problem 1) besides the remarks I made in my rephrasing of the original question here: Chen's Theorem with congruence conditions.
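To get a concrete feel for question 4), here is a small search sketch (my own illustration, not part of the original answer): for $a = 5$, so that $r = \phi(5) = 4 > 2$, it lists the $k$ for which neither $5k-1$ nor $5k+1$ has a prime factor $\equiv \pm 1 \pmod 5$.

```python
def prime_factors(n):
    # set of prime divisors by trial division; fine for small n
    fs = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def no_prime_factor_pm1(n, a):
    # True iff no prime factor of n is congruent to +1 or -1 mod a
    return all(p % a not in (1, a - 1) for p in prime_factors(n))

a = 5
good = [k for k in range(1, 100)
        if no_prime_factor_pm1(a * k - 1, a) and no_prime_factor_pm1(a * k + 1, a)]
print(good[:10])
```

Such $k$ appear quite frequently at the start (e.g. $k=1$ gives $4 = 2^2$ and $6 = 2\cdot 3$), consistent with the heuristic in 4b) that for $r > 2$ this set should be fairly dense.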
{ "source": [ "https://mathoverflow.net/questions/49647", "https://mathoverflow.net", "https://mathoverflow.net/users/14726/" ] }
49,690
Let $X$ be a topological space. In elementary algebraic topology, the cup product $\phi \cup \psi$ of cochains $\phi \in C^p(X), \psi \in C^q(X)$ is defined on a chain $\sigma \in C_{p+q}(X)$ by $(\phi \cup \psi)(\sigma) = \phi({}_p\sigma)\psi(\sigma_q)$, where ${}_p\sigma$ and $\sigma_q$ denote the restriction of $\sigma$ to the front $p$-face and the back $q$-face, respectively. (More generally, any diagonal approximation $C_{\ast}(X) \to C_{\ast}(X) \otimes C_{\ast}(X)$ could be used; this is the Alexander-Whitney one.) The cup product defined by the Alexander-Whitney diagonal approximation as above is associative for cochains but skew-commutative only up to homotopy (this results from the fact that the two diagonal approximations $C_{\ast}(X) \to C_{\ast}(X) \otimes C_{\ast}(X)$ given by Alexander-Whitney and its "flip" (with the signs introduced to make it a chain map) agree only up to chain homotopy). The commutative cochain problem attempts to fix this: that is, to find a graded-commutative differential graded algebra $C_1^*(X)$ associated functorially to $X$ (which may be restricted to be a simplicial complex) which is chain-equivalent to the usual (noncommutative) algebra $C^{\ast}(X)$ of singular cochains. In Rational homotopy theory and differential forms, Griffiths and Morgan mention briefly that there is no way to make the cup product skew-commutative on cochains (that is, to solve the commutative cochain problem) with $\mathbb{Z}$-coefficients, and that this is because of the existence of cohomology operations. It is also asserted that these cohomology operations don't exist over $\mathbb{Q}$ (presumably excluding the power operations). Could someone explain what this means?
Via the Dold-Kan correspondence, the category of cosimplicial abelian groups is equivalent to the category of nonpositively graded chain complexes of abelian groups (using homological grading conventions). Both of these categories are symmetric monoidal: chain complexes via the usual tensor product of chain complexes, and cosimplicial abelian groups via the "pointwise" tensor product. But the Dold-Kan equivalence is not a symmetric monoidal functor. However, you can make it lax monoidal in either direction. The Alexander-Whitney construction makes the functor (cosimplicial abelian groups -> cochain complexes) into a lax monoidal functor, so that for every cosimplicial ring, the associated chain complex has the structure of a differential graded algebra. However, it is not lax symmetric monoidal, so the differential graded algebra you obtain is generally not commutative even if you started with something commutative. There is another construction (the "shuffle product") which makes the inverse functor (cochain complexes -> cosimplicial abelian groups) into a lax symmetric monoidal functor. In particular, it carries commutative algebras to commutative algebras. So every commutative differential graded algebra (concentrated in nonpositive degrees) determines a cosimplicial commutative ring. One way of phrasing the phenomenon you are asking about is as follows: not every cosimplicial commutative ring arises in this way, even up to homotopy equivalence. For example, if $A$ is a cosimplicial ${\mathbb F}_2$-algebra, then the cohomology groups of $A$ come equipped with some additional structures (Steenrod operations). Most of these operations automatically vanish in the case where $A$ is obtained from a commutative differential graded algebra. 
If $R$ is a commutative ring and $X$ is a topological space, you can obtain a cosimplicial commutative ring by associating to each degree $n$ the ring of $R$-valued cochains on $X$ (the ring structure is given by ``pointwise'' multiplication). These examples generally don't arise from commutative differential graded algebras unless $R$ is of characteristic zero. For example when $R = {\mathbb F}_2$, the $R$-cohomology of $X$ is acted on by Steenrod operations, and this action is generally nontrivial (and useful to know about).
{ "source": [ "https://mathoverflow.net/questions/49690", "https://mathoverflow.net", "https://mathoverflow.net/users/344/" ] }
49,721
I would like to open a discussion about the Axiom of Symmetry of Freiling, since I didn't find in MO a dedicated question. I'll first try to summarize it, and then ask a couple of questions. DESCRIPTION The Axiom of Symmetry was proposed in 1986 by Freiling and it states that $AS$: for all $f:I\rightarrow I_{\omega}$ the following holds: $\exists x \exists y. ( x \not\in f(y) \wedge y\not\in f(x) )$ where $I$ is the real interval $[0,1]$, and $I_{\omega}$ is the set of countable subsets of $I$. It is known that $AS = \neg CH$. What makes this axiom interesting is that it is explained and justified using an apparently clear probabilistic argument, which I'll try to formulate as follows: Let us fix $f\in I\rightarrow I_{\omega}$. We throw two darts at the real interval $I=[0,1]$, which will reach some points $x$ and $y$ randomly. Suppose that when the first dart hits $I$, in some point $x$, the second dart is still flying. Now since $x$ is fixed, and $f(x)$ is countable (and therefore null), the probability that the second dart will hit a point $y\in f(x)$ is $0$. Now Freiling says (quote), Now, by the symmetry of the situation (the real number line does not really know which dart was thrown first or second), we could also say that the first dart will not be in the set $f(y)$ assigned to the second one. This is deliberately an informal statement which you might find intuitive or not. However, Freiling concludes basically by saying that, since picking two reals $x$ and $y$ at random almost surely yields a pair $(x,y)$ such that $x \not\in f(y) \wedge y\not\in f(x)$, then, at the very least, there exists such a pair, and so $AS$ holds. DISCUSSION If you try to formalize the scenario, you'd probably model "throwing two darts" as choosing a point $(x,y) \in [0,1]^{2}$. Fix an arbitrary $f\in I\rightarrow I_{\omega}$; Freiling's argument would be good if the set $BAD = $ { $(x,y) | x\in f(y) \vee y \in f(x) $ } has probability $0$.
$BAD$ is the set of points which do not satisfy the constraints of $AS$. If $BAD$ had measure zero, then finding a good pair would be simple: just randomly choose one! In my opinion the argument would be equally good if $BAD$ had "measure" strictly less than $1$. In this case we might need a lot of attempts, but almost surely we would find a good pair after a while. However $BAD$ need not be measurable. We might hope that $BAD$ had outer measure $<1$; this would still be good enough, I believe. However, if $CH$ holds there exists a function $f_{CH}:I\rightarrow I_{\omega}$ such that $BAD$ is actually the whole set $[0,1]^{2}$!! This $f_{CH}$ is defined using a well-order of $[0,1]$ and setting $f_{CH}(x) = $ { $y | y \leq x $ }. Under $CH$ the set $f_{CH}(x)$ is countable for every $x\in[0,1]$. Therefore $BAD = $ { $ (x,y) | x\in f_{CH}(y) \vee y \in f_{CH}(x) $ } $ = $ { $ (x,y) | x\leq y \vee y \leq x $ } $ =[0,1]^{2}$ So it looks like, under this formulation of the problem, if $CH$ then $\neg AS$, which is not surprising at all since $ZFC\vdash AS = \neg CH$. Also I don't see any problem related to the "measurability" of $BAD$.
This might be true if the problem involved some weird non-measurable sets, but in the discussion above none of these weird things are used. Did I miss something? B) After $AS$ was introduced, did somebody try to tailor some "probability theory" to capture Freiling's intuitions? More generally, is there any follow-up you are aware of? C) Where do you see that Freiling's argument deviates (even philosophically) from my discussion using $[0,1]^{2}$? I suspect the crucial conceptual difference is in seeing the choice of two random reals as, necessarily, a random choice of one after the other, but with the property that this arbitrary non-deterministic choice has no consequences at all. Thank you in advance, Matteo Mio
The point is that violations of the Axiom of Symmetry are fundamentally connected with non-measurable sets, and counterexample functions $f$ to AS cannot be nice measurable functions. You have proved the one direction $CH\to \neg AS$, that if there is a well-order of the reals in order type $\omega_1$, then the function $f$ that maps each real to its predecessors violates AS. Observe in this case that the set of pairs $\{(x,y) \mid y\in f(x)\}$ has all vertical sections countable, and all horizontal sections co-countable, which would violate Fubini's theorem if it were measurable. So it is not measurable. Conversely, for the direction $\neg AS\to CH$, all violations of AS have essentially this form. To see this, suppose that $f$ is a function without the symmetry property, so that for any two reals $x$ and $y$, either $x\in f(y)$ or $y\in f(x)$. For any real $x$, let $A_x$ be the closure of $x$ under $f$, obtained by iteratively applying $f$ to $x$ and to any real in $f(x)$, and so on to all those reals iteratively. Thus, $A_x$ is a countable set of reals and closed under $f$. Define a relation $y\leq x$ if $y\in A_x$. This is a reflexive transitive relation. The assumption on $f$ exactly ensures that this relation is a linear relation, so that either $x\leq y$ or $y\leq x$ for any two reals. So it is a linear pre-order. Furthermore, all proper initial segments of the pre-order are countable, since any such initial segment is contained in some $A_y$. In other words, the relation $\leq$ is an $\omega_1$-like linear pre-order of the reals. This implies CH, since the cofinality of this order can be at most $\omega_1$, for otherwise there would be an uncountable initial segment, and so $\mathbb{R}$ would be an $\omega_1$-union of countable sets. That is, the argument shows that every counterexample to AS arises essentially the same way as in your CH argument, but using a pre-order instead of a well-order.
Note that the set $A=\{(x,y)\mid y\in A_x\}$ is non-measurable by the same Fubini argument: all the vertical slices are countable, and all horizontal slices co-countable. My view is that any philosophical, pre-reflection or intuitive concept of probability will have a very fundamental problem in dealing with subsets of the plane for which all vertical sections are countable and all horizontal sections are co-countable. For such a set, from one direction it looks very big, and from another direction it looks very small, but our intuitive concept is surely that rotating a set shouldn't affect our judgement of its size.
{ "source": [ "https://mathoverflow.net/questions/49721", "https://mathoverflow.net", "https://mathoverflow.net/users/11618/" ] }
49,731
I've agreed, perhaps unwisely, to give a talk to Philosophers about string theory. I'd like to give the philosophers an overview of the status and influence of string theory in physics, which I feel competent to do, but I would also like to say something about the influence it has had in mathematics where I am on less familiar ground. I've read the Jaffe-Quinn manifesto and the responses in http://arxiv.org/abs/math.HO/9404229 . What I would like from MO are pointers to more recent discussions of this issue in the mathematical community so that I can get a sense of where things stand 16 years later.
Dear Jeff, string theory has had a colossal influence on the renewal of enumerative geometry, a two century old branch of algebraic geometry inextricably linked to intersection theory. Here is a telling anecdote. Ellingsrud and Strømme, two renowned specialists in Hilbert Scheme theory, had calculated the number of rational cubic curves on a general quintic threefold by arguments based on their paper On the Chow ring of a geometric quotient , Annals of Math. 130 (1987) 159–187 Their result differed from that predicted by string theory. Of course everybody thought the mathematicians were right, but actually there had been a programming error in their calculations and the correct result was that of the physicists (which Ellingsrud and Strømme confirmed after fixing their bug). This was the beginning of a long list of results predicted by string theorists and subsequently proved by mathematicians, a celebrated example being Kontsevich's formula for the number $N_d$ of degree $d$ rational curves in $\mathbb P^2$ passing through $3d-1$ points in general position. You can read all about Kontsevich's formula in Kock and Vainsencher's free on-line book And a pleasantly elementary general reference is Sheldon Katz's Enumerative Geometry and String Theory , published by the AMS in its Student Mathematical Library (vol. 32).
{ "source": [ "https://mathoverflow.net/questions/49731", "https://mathoverflow.net", "https://mathoverflow.net/users/10475/" ] }
49,866
I know some applications of finite continued fractions. Probably you know more. Can you add anything? (I have made a separate topic for applications of periodic continued fractions.)
1) (Trivial) Analysis of the Euclidean algorithm (and its variants). This item includes the extended Euclidean algorithm, calculation of $a^{-1}\pmod n$, lattice reduction, number recognition (Andreas Blass), parametrization of the solutions of the equation $ad-bc=N$, calculation of the convex hull of the non-zero lattice points in the first quadrant, etc.
2) Decomposition of a prime $p=4n+1$ into a sum of two squares.
3) Rodseth's formula for Frobenius numbers with three arguments.
4) Analysis of Frieze Patterns from The Book of Numbers (Conway, J. H. and Guy, R. K.).
5) Calculation of the goodness (discrepancy or something similar) of 2-dimensional lattice rules for numerical integration.
6) Resolution of singularities of toric surfaces (added by J.C. Ottem).
7) Classification of rational tangles (added by Paolo Aceto).
8) Calculation of Dedekind sums.
9) Calculation of the number of A-graded algebras (V.I. Arnold, A-graded algebras and continued fractions ).
10) Asymptotic behavior of a curve in $\mathbb{R}^n$ with constant curvature $k_1$, constant second curvature $k_2$, ..., up to constant curvature $k_{n-1}$ (V.I. Arnold).
11) The attack (discovered by Michael J. Wiener) on the RSA public-key cryptosystem with small private exponent (added by jp).
12) The DDA algorithm for converting a segment into a nice-looking sequence of pixels. Other algorithms of integer linear programming: finding a "closest point" in a given halfplane (added by Wilberd van der Kallen).
13) Analysis of the Lehmer pseudo-random number generator (added by Gerry Myerson). See U. Dieter, Pseudo-random numbers: the exact distribution of pairs, and Knuth, D. E., The Art of Computer Programming, Volume 2 (Theorem D, Section 3.3.3).
14) Bach and Shallit show how to compute the Jacobi symbol in terms of the simple continued fraction (Bach, E. and Shallit, J., Algorithmic Number Theory, Vol. 1: Efficient Algorithms. Cambridge, MA: MIT Press, pp. 343-344, 1996).
15) A criterion for a rectangle to be tilable by rectangles of a similar shape. Construction of alternating-current circuits with given properties (added by M. Skopenkov).
16) Slam dunking of rational surgery diagrams for three-manifolds (added by Kelly Davis).
17) Continued fractions allow one to predict digits of the $1/M$ random number generator; see Blum, L.; Blum, M. & Shub, M., A simple unpredictable pseudo-random number generator, SIAM J. Comput., 1986, 15, 364-383.
18) Asymptotic analysis of incomplete Gauss sums (theta sums) (Fiedler, H.; Jurkat, W. & Koerner, O., Asymptotic expansions of finite theta series, Acta Arith., 1977, 32, 129-146; J. Marklof, Theta sums, Eisenstein series, and the semiclassical dynamics of a precessing spin, in: D. Hejhal, J. Friedman, M. Gutzwiller and A. Odlyzko (eds.), Emerging Applications of Number Theory, IMA Volumes in Mathematics and its Applications, Volume 109 (Springer, New York, 1999) pp. 405-450).
19) The statistics of the trajectory of the Sinai billiard on a flat two-torus; see Boca, Gologan, Zaharescu and Bykovskii, Ustinov.
20) Analysis of "linear" permutations (from Zolotarev's proof of the quadratic reciprocity law).
21) Calculation of quadratic character sums with polynomial arguments.
22) The signature of a generic symmetric integral matrix can be expressed as a finite continued fraction (added by Andrew Ranicki).
23) Lehman's algorithm for factoring large integers.
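Items 1 and 2 of the list are easy to demonstrate in a few lines. The following Python sketch (my illustration, not from the original post) computes the continued fraction of a rational via the Euclidean algorithm, and writes a prime $p=4n+1$ as a sum of two squares by the Hermite–Serret/Brillhart trick: run Euclid on $(p,z)$ with $z^2\equiv-1\pmod p$ and keep the first two remainders below $\sqrt p$.

```python
def cf_expansion(a, b):
    """Continued fraction [q0; q1, ...] of a/b via the Euclidean algorithm."""
    qs = []
    while b:
        qs.append(a // b)
        a, b = b, a % b
    return qs

def two_squares(p):
    """Write a prime p = 4n+1 as x^2 + y^2, using the Euclidean algorithm
    on (p, z) where z^2 = -1 (mod p): the first two remainders below
    sqrt(p) are the answer (Hermite-Serret/Brillhart)."""
    z = next(x for x in range(2, p) if (x * x) % p == p - 1)
    a, b = p, z
    while b * b > p:          # stop at the first remainder < sqrt(p)
        a, b = b, a % b
    return b, a % b           # b^2 + (a mod b)^2 = p

print(cf_expansion(13, 5))    # [2, 1, 1, 2]
print(two_squares(13))        # (3, 2): 13 = 9 + 4
```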
In knot theory continued fractions are used to classify rational tangles. Conway proved that two rational tangles are isotopic if and only if they have the same fraction. This is proved by Kauffman in http://arxiv.org/pdf/math/0311499.pdf . The paper also contains all the basic definitions and I think it can be read by any mathematician.
{ "source": [ "https://mathoverflow.net/questions/49866", "https://mathoverflow.net", "https://mathoverflow.net/users/5712/" ] }
49,915
This is a question posed by Adam Chalcraft. I am posting it here because I think it deserves wider circulation, and because maybe someone already knows the answer. A polyomino is usually defined to be a finite set of unit squares, glued together edge-to-edge. Here I generalize it to mean a finite set of unit hypercubes, glued together facet-to-facet. Given a polyomino $P$ in $\mathbb{R}^m$, I can lift it to a polyomino in a higher-dimensional Euclidean space $\mathbb{R}^{m+n}$ by crossing it with a unit $n$-cube: the lifted polyomino is just $P\times [0,1]^n$. Obviously, not all polyominos tile space. Is it true that given any polyomino $P$ in $\mathbb{R}^m$, there exists some $n$ such that the lifted polyomino $P\times [0,1]^n$ tiles $\mathbb{R}^{m+n}$? Many people's first instinct is that multiply-connected polyominos (those with "holes" in them) can't possibly tile, but you can get inside holes if you lift to a high enough dimension.
A positive answer to this question has just appeared in the arXiv: Tiling with arbitrary tiles; Vytautas Gruslys, Imre Leader, Ta Sheng Tan; http://arxiv.org/abs/1505.03697
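For intuition in the two-dimensional base case, a brute-force backtracking search can decide whether a small polyomino tiles a given rectangle. The sketch below is my illustration (the L-tromino and the box sizes are arbitrary choices, and rotations are allowed here, which the question leaves unspecified):

```python
def rotations(cells):
    """All rotations of a polyomino, each normalized to touch row 0 and col 0."""
    shapes, cur = set(), list(cells)
    for _ in range(4):
        cur = [(c, -r) for r, c in cur]                  # rotate 90 degrees
        mr, mc = min(r for r, c in cur), min(c for r, c in cur)
        shapes.add(frozenset((r - mr, c - mc) for r, c in cur))
    return shapes

def can_tile(height, width, poly):
    """Backtracking search: can copies of poly (rotations allowed) tile the box?"""
    shapes = rotations(poly)
    cells = frozenset((r, c) for r in range(height) for c in range(width))

    def fill(empty):
        if not empty:
            return True
        r0, c0 = min(empty)                              # first uncovered cell
        for s in shapes:
            for ar, ac in s:                             # which cell lands on (r0, c0)
                placed = set((r0 + r - ar, c0 + c - ac) for r, c in s)
                if placed <= empty and fill(empty - placed):
                    return True
        return False

    return fill(cells)

L_TROMINO = [(0, 0), (1, 0), (1, 1)]
print(can_tile(2, 3, L_TROMINO))   # True: two L-trominoes tile a 2x3 rectangle
print(can_tile(1, 3, L_TROMINO))   # False: the L does not fit in a 1x3 strip
```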
{ "source": [ "https://mathoverflow.net/questions/49915", "https://mathoverflow.net", "https://mathoverflow.net/users/3106/" ] }
50,033
Burnside's Lemma states that, given a set $X$ acted on by a group $G$, $$|X/G|=\frac{1}{|G|}\sum_{g\in G}|X^g|$$ where $|X/G|$ is the number of orbits of the action, and $|X^g|$ is the number of fixed points of $g$. In other words, the number of orbits is equal to the average number of fixed points of an element of $G$. Is there any way in which the fixed points of an element $g$ can be thought of as orbits? I had wondered aloud on my recent question here how (or if) Burnside's Lemma can be interpreted as having the same kind of object on both sides, so as to be a "true" average theorem, e.g. "number of orbits = average over $g\in G$ of (number of orbits satisfying (something to do with $g$))" or "number of orbits = average over $g\in G$ of (number of orbits of some new action which depends on $g$)". Since Qiaochu stated in the comments to my question that he suspects Burnside's Lemma can be categorified, and that this may be related, I have also added that tag.
I'm not sure I'd call this a categorification, but the way I think of Burnside's Lemma is as follows. Consider the subset $Z \subset G \times X$ consisting of pairs $(g,x)$ such that $g\cdot x =x$, where by $\cdot$ I just mean the action of $G$ on $X$. The cartesian product $G \times X$ comes with the two surjections $\pi_G : G \times X \to G$ and $\pi_X : G \times X \to X$, and you can compute the cardinality of $Z$ either along the fibres of $\pi_G$ or along the fibres of $\pi_X$: the former gives you the sum over the fixed point sets, whereas the latter gives you a sum over the stabilizers. Then the orbit-stabilizer theorem does the rest. Thanks to @Arrow who pointed out the link in my comment was broken. Here's hopefully a link that works to the same one-page document .
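The double count of $Z$ in this answer is easy to verify numerically. Here is a quick Python sketch (my illustration) for the cyclic group $C_4$ acting by rotation on 2-colorings of a 4-bead necklace, computing $|Z|$ along both projections:

```python
from itertools import product

n, colors = 4, 2
X = list(product(range(colors), repeat=n))   # colorings of n beads
G = list(range(n))                           # C_n acting by rotation

def act(g, x):
    return x[g:] + x[:g]

# |Z| fibred over G: sum of fixed-point counts = |G| * (number of orbits)
by_G = sum(sum(1 for x in X if act(g, x) == x) for g in G)

# |Z| fibred over X: sum of stabilizer sizes; by orbit-stabilizer each
# orbit contributes exactly |G|, so this also equals |G| * (number of orbits)
by_X = sum(sum(1 for g in G if act(g, x) == x) for x in X)

orbits = set(frozenset(act(g, x) for g in G) for x in X)
print(by_G, by_X, len(orbits))   # 24 24 6
```

Both counts give $24 = 4 \cdot 6$, matching the six necklaces of four beads in two colors.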
{ "source": [ "https://mathoverflow.net/questions/50033", "https://mathoverflow.net", "https://mathoverflow.net/users/1916/" ] }
50,150
'Twas the night before Christmas and under the tree Was a heap of new balls, stacked tight as can be. The balls so gleaming, they reflect all light rays, Which bounce in the stack every which way. When, what to my wondering mind does occur: A question of interest; I hope you concur! From each point outside, I wondered if light Could reach deep inside through gaps so tight? More precisely, let $\cal{B}$ be a finite collection of congruent, perfect mirror balls arranged, say, in a cubic close-packing cannonball stack. Let $H$ be the set of points inside the closed convex hull of $\cal{B}$, $H^+ = \mathbb{R}^3 \setminus H$ the points outside, and $H^- = H \setminus \cal{B}$ the points in the crevasses inside. Q1 . Is it true that every point $a \in H^+$ can illuminate every point $b \in H^-$ in the sense that there is a light ray from $a$ that reaches $b$ after a finite number of reflections? I believe the answer to Q1 is 'No': If $a$ is sufficiently close to a point of contact between a ball of $\cal{B}$ and $H$, then all rays from $a$ deflect into $H^+$. If this is correct, the question becomes: which pair of points $(a,b)$ can illuminate one another, for a given collection $\cal{B}$? Specifically: Q2 . Is there some finite radius $R$ of a sphere $S$ enclosing a collection $\cal{B}$ such that every point $a$ outside $S$ can illuminate every point $b \in H^-$? More precisely, are there conditions on $\cal{B}$ that ensure such a claim holds? If the centers of the balls in $\cal{B}$ are collinear, then points in the bounding cylinder do not fully illuminate. If the centers of the balls are coplanar, then points on that plane do not fully illuminate. So some configurations must be excluded. Perhaps a precondition analogous to this might suffice: If the hull $H$ of $\cal{B}$ encloses a sphere of more than twice the common radius of the balls, then ... ? Failing a general result, can it be established for stackings as illustrated above? 
The answers (especially Bill Thurston's) in response to the earlier MO question on light rays bouncing between convex bodies may be relevant. Even speculative 'answers' are welcomed! Edit (23Dec). Although I remain optimistic that there is a nice theorem lurking here, fedja's observation that points near the boundary of the hull remain dark makes it a challenge to formulate a precise statement of a possible theorem. Something like this: If $\cal{B}$ is sufficiently "fat," then every point $a$ sufficiently far from $\cal{B}$ illuminates every point $b$ in $H^-$ that is not too close to the boundary of $H$. Edit (24Dec). There is an associated computational question, interesting even in two dimensions: Given $a$ and $b$, what is the complexity of deciding if $a$ can illuminate $b$? Is it even decidable?
I took a pane of clear glass and touched two balls at once I put my light, perhaps, by chance, above the pane. Alas, the shining pile on the same side in its arrangement lay, and no matter what I tried (I tried a whole day) Some darkness (though not too much) remained around points of touch...
{ "source": [ "https://mathoverflow.net/questions/50150", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
50,245
David Feldman asked whether it would be reasonable for the Riemann hypothesis to be false, but for the Riemann zeta function to only have finitely many zeros off the critical line. I very rashly predicted that this question would be essentially as hard as the Riemann hypothesis itself. However, on further reflection, I stumbled upon a natural and reasonable conjecture which has a serious bearing on whether this dichotomy holds, which I have never seen in print. So, let $f:\mathbf{R}\to \mathbf{R}$ be continuous. There are various notions of quasi-periodic and almost periodic function in the literature. The following (quite weak) one is more than enough for my purposes: Definition. A function $f: \mathbf{R} \to \mathbf{R}$ is locally quasiperiodic if, for every bounded interval $I \subset \mathbf{R}$ and every $\delta>0$, there exists an unbounded sequence $t_n \in \mathbf{R}$ such that $\sup_{t\in I} |f(t+t_n)-f(t)|<\delta$. For example, finite trigonometric polynomials $\sum a_j \sin{(b_j t + c_j)}$ are locally quasiperiodic. Back to the zeta function: the Hardy $Z$-function is defined as $Z(t)=\pi^{-it/2}\frac{\Gamma(1/4+it/2)}{|\Gamma(1/4+it/2)|}\zeta(1/2+it)$. The functional equation for the zeta function immediately implies that $Z(t)$ is real-valued, and by construction we have $|Z(t)|=|\zeta(1/2+it)|$. One of the nice things about the $Z$-function is that it turns out to be computable in fairly efficient ways (the Riemann-Siegel formula), and it reduces the problem of finding zeros of zeta on the critical line to finding sign changes of the $Z$-function. In fact, the $Z$-function knows about the Riemann hypothesis: If the $Z$-function has a negative local maximum or a positive local minimum, then the Riemann hypothesis is false; see e.g. Section 8.3 of Edwards's book. I don't believe the converse to this is known, so let's call such an extremum a strong failure of the Riemann hypothesis . 
Now, I don't believe that the $Z$-function itself is locally quasiperiodic, because the density of its zeros should grow as $t$ grows, and it should wiggle "faster and faster" accordingly; more precisely, the number of zeros in an interval $[t,t+h]$ for $h$ fixed should be $\sim \frac {h}{2\pi}\log{t}$ as $t\to\infty$. However, rescaling in a naive manner, let's consider instead $Z(\frac{t}{\log{t}})$. This should have $\sim \frac{h}{2\pi}$ zeros in an interval $[t,t+h]$ for $h$ fixed and $t \to \infty$, and I see no reason not to believe that Conjecture A. The function $Z(\frac{t}{\log{t}})$ is locally quasiperiodic. My main reason for enunciating this is that the truth of Conjecture A implies that if there is one strong failure of the Riemann hypothesis, then there are infinitely many strong failures. This is actually pretty evident; take $I$ a small interval containing the relevant bad local extrema and take $\delta$ small enough so the intervals $I+t_n$ contain bad local extrema of the same type. It's not obvious to me whether Conjecture A is at all accessible by current technology. For example, I don't know a single example of an unbounded function which is provably locally quasiperiodic. I would love to see such an example (I've tried and failed to construct one). Also, it seems natural to ask whether there is some simple characterization of locally quasiperiodic functions in terms of properties of their (distributional) Fourier transforms. Is such a characterization reasonable to expect?
This doesn't quite sound like the right conjecture here, because Z is known to go to infinity on the average, by Selberg's central limit theorem (see my blog post on this topic). But this is easy to fix by working with a projective notion of local quasiperiodicity in which one divides $f(t)$ or $f(t+t_n)$ by an $n$-dependent scaling factor. In that case, one is basically asking for the zero process of the zeta function to be recurrent, and this would be predicted by the GUE hypothesis. However, I doubt that this question will be resolved before the GUE hypothesis itself is settled. EDIT: Note though that there are other hypotheses than the GUE hypothesis that also lead to a recurrent zero process, such as the Alternative hypothesis , which is linked to the existence of infinitely many Siegel zeroes. I suppose it is a priori conceivable that some sort of dichotomy might be set up in which recurrence is obtained by completely different means in each case of the dichotomy (as is the case with proofs of multiple recurrence in ergodic theory) but I am personally skeptical that one could really handle all the cases without making enough progress on understanding zeta to solve much more difficult and prominent conjectures about that function. (In particular, with this approach one would have to first eliminate the possibility of having only finitely many zeroes off the critical line, leading us back to the original conjecture that motivated the one here.)
{ "source": [ "https://mathoverflow.net/questions/50245", "https://mathoverflow.net", "https://mathoverflow.net/users/1464/" ] }
50,343
EDIT (30 Nov 2012): MoMath is opening in a couple of weeks, so this seems like it might be a good time for any last-minute additions to this question before I vote to close my own question as "no longer relevant". As some of you may already know, there are plans in the making for a Museum of Mathematics in New York City. Some of you may have already seen the Math Midway , a preview of the coming attractions at MoMath. I've been involved in a small way, having an account at the Math Factory where I have made some suggestions for exhibits. It occurred to me that it would be a good idea to solicit exhibit ideas from a wider community of mathematicians. What would you like to see at MoMath? There are already a lot of suggestions at the above Math Factory site; however, you need an account to view the details. But never mind that; you should not hesitate to suggest something here even if you suspect that it has already been suggested by someone at the Math Factory, because part of the value of MO is that the voting system allows us to estimate the level of enthusiasm for various ideas. Let me also mention that exhibit ideas showing the connections between mathematics and other fields are particularly welcome, particularly if the connection is not well-known or obvious. A couple of the answers are announcements which may be better seen if they are included in the question. Maria Droujkova: We are going to host an open online event with Cindy Lawrence, one of the organizers of MoMath, in the Math Future series. On January 12th 2011, at 9:30pm ET, follow this link to join the live session using Elluminate. George Hart: ...we at MoMath are looking for all kinds of input. If you’re at the Joint Math Meetings this week, come to our booth in the exhibit hall to meet us, learn more, and give us your ideas.
At the science museum in London they have this very cute little gadget used by mapmakers 150 years ago: an axle with a rubber ring around it, and the ring pressing against a cone. The whole lot is attached to a metal stylus; you trace around an area on a map with the stylus and a little reader tells you the area of what you've traced around. I always found that ingenious. The exhibit in London then goes on to show how you can use the same idea to integrate and hence solve differential equations, and finishes with a monster machine that can solve ordinary 4th order ODEs using basically the same trick; you set the coefficients with dials and then the machine draws a graph of the output. I'm afraid I know neither the name of the cute gadget nor the machine :-( but it strikes me as being appropriate for a "math museum"...
{ "source": [ "https://mathoverflow.net/questions/50343", "https://mathoverflow.net", "https://mathoverflow.net/users/3106/" ] }
50,516
Clearly the etale fundamental group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{a_1,...,a_r\}$ doesn't depend on the $a_i$'s, because it is the profinite completion of the topological fundamental group. Does the same hold when I replace $\mathbb{C}$ by a finite field? How about an algebraically closed field of positive characteristic? (Note that I'm talking about the full $\pi_1$ and not the prime-to-$p$ part.)
It is a result of Tamagawa that for two affine curves $C_1, C_2$ over finite fields $k_1,k_2$ any continuous isomorphism $\pi_1(C_1)\rightarrow \pi_1(C_2)$ arises from an isomorphism of schemes $C_1\rightarrow C_2$. Hence, if $\pi_1( \mathbb{P}^1\setminus\{a_1,\ldots, a_r\})$ were independent of the choice of the $a_i$, then the isomorphism class of the schemes $\mathbb{P}^1\setminus\{a_1,\ldots, a_r\}$ would be independent of the choice of $a_1,\ldots,a_r$. Tamagawa's result is Theorem 0.6 in this paper: The Grothendieck conjecture for affine curves, A Tamagawa - Compositio Mathematica, 1997 http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=298922 In the case of an algebraically closed field, the answer is also that the fundamental group depends on the choice of the points that are being removed. Again by a theorem by Tamagawa: If $k$ is the algebraic closure of $\mathbb{F}_p$, and $G$ a profinite group not isomorphic to $(\hat{\mathbb{Z}}^{(p')})^2\times \mathbb{Z}_p$, then there are only finitely many $k$-isomorphism classes of smooth curves $C$ with fundamental group $G$ (the restriction on $G$ excludes ordinary elliptic curves). This can be found in Finiteness of isomorphism classes of curves in positive characteristic with prescribed fundamental groups, A Tamagawa - Journal of Algebraic Geometry, 2004
{ "source": [ "https://mathoverflow.net/questions/50516", "https://mathoverflow.net", "https://mathoverflow.net/users/5309/" ] }
50,519
There have been a couple of questions on MO, and elsewhere, that have made me curious about integral or rational cohomology operations. I feel pretty familiar with the classical Steenrod algebra and its uses and constructions, and I am at a loss as to how to imagine a chain-level construction of such an operation, other than by coupling mod $p$ operations with Bockstein and reduction maps. I am mostly just curious about thoughts in this direction, previous work, and possible applications. So my questions are essentially as follows: 1) Are there any "interesting" rational cohomology operations? I feel like I should be able to compute $H\mathbb{Q}^*H\mathbb{Q}$ by noticing that $H\mathbb{Q}$ is just a rational sphere and so there are no nonzero groups in the limit. Is this right? 2) Earlier someone posted a reference request about $H\mathbb{Z}^*H\mathbb{Z}$, and I am just curious about what is known, and what methods were used. 3) Is there a reasonable approach, i.e., explainable in this forum, for constructing chain-level operations? The approaches I have seen seem to require some finite-characteristic assumptions, but maybe I am misremembering things. 4) I am currently under the impression that a real hard part of the problem is integrating all the information from different primes; is this the main roadblock, or close to the main obstruction? My apologies for the barrage of questions; if people think it would be better split up, I would be happy to do so. Thanks for your time.
$HZ^nHZ$ is trivial for $n<0$. $HZ^0HZ$ is infinite cyclic generated by the identity operation. For $n>0$ the group is finite. So you know everything if you know what's going on locally at each prime. For $n>0$ the $p$-primary part is not just finite but killed by $p$, which means that you can extract it from the Steenrod algebra $H(Z/p)^{*}H(Z/p)$ and Bocksteins. EDIT Here's the easier part: The integral homology groups of the space $K(Z,n)$ vanish below dimension $n$, and by induction on $n$ they are all finitely generated. Also $H_{n+k}K(Z,n)$ is independent of $n$ for roughly $n>k$, so that in this stable range $H_{n+k}K(Z,n)$ is $HZ_kHZ$, which is therefore finitely generated. This plus the computation of rational (co)homology gives that $HZ_kHZ$ is finite for $k>0$. Here's the funny part: Of course one expects there to be some elements of order $p^m$ for $m>1$ in the (co)homology of $K(Z,n)$, and in fact there are; the surprise is that stably this is not the case.
{ "source": [ "https://mathoverflow.net/questions/50519", "https://mathoverflow.net", "https://mathoverflow.net/users/3901/" ] }
50,522
We can "travel" over the whole vector space $V = GF(2)^n$ by doing the following: (a) choose a primitive polynomial $P(t)$ of degree $n$ over $GF(2)$; (b) change the vector $X = (x_1, \ldots, x_n) \in V$ into the vector $Y = (y_1, \ldots, y_n) \in V$; (c) repeat until $V$ is exhausted ($2^n$ times); where $y_1+y_2z+ \cdots + y_nz^{n-1} = z(x_1+x_2z+ \cdots + x_nz^{n-1})$ and $z$ is a zero of $P$, i.e., $P(z)=0.$ I want to do the same with integer vectors containing only $1$ and $-1$, i.e., "travel" over all possible vectors $(r_1, \ldots, r_n)$ with $r_i^2=1$. How can I do that? I have tried a few things without success. Reason for the question: I have only limited time on a computer (five days per job, two jobs allowed) and I need to try some computations on all such vectors for moderately large $n$. The loop "from $r_1=-1$ to $1$ by $2$ do; from $r_2=-1$ to $1$ by $2$ do; $\cdots$ from $r_n=-1$ to $1$ by $2$ do;" does not "fit" in my allowed time. Following a suggestion (thanks), consider the following: I need to examine each of the $2^n$ vectors. To fit in the allowed time it suffices to break the $2^n$ into smaller parts and apply to each of them the method I am asking for here! I tried: (a) $r_i \in \{-1,1\}$ goes to $s_i=(r_i+1)/2$ in $\{0,1\}$; (b) apply the idea with the primitive polynomial to the $s_i$'s (so I am forced to take some reduction modulo $2$ in some coordinates); (c) recover $R_j$, the new $r_j$, by $R_j=2s_j-1$, so that from the vector $(r_1,\ldots,r_n)$ we get a new vector $(R_1,\ldots,R_n)$, and applying this $2^n$ times we should (hopefully) get all the $2^n$ vectors. But this does NOT work, since I ended up, e.g., at the cycle $(-1,-1,\ldots,-1)$ going to itself indefinitely. In other words: can I write these $2^n$ vectors as a sequence $v_1,\ldots,v_{2^n}$ in such a manner that, with some simple algebraic computation (similar to the use of the primitive polynomial in the case where the vectors lie in $GF(2)^n$), I can get the vector $v_k$ from the vector $v_{k-1}$, beginning with any fixed vector $v_1$?
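The $GF(2)$ construction described here is exactly a linear feedback shift register, and implementing it shows where the $\pm1$ translation gets stuck: the all-zero state (i.e., $(-1,\ldots,-1)$ after $s_i=(r_i+1)/2$) is fixed by every linear map, so multiplication by $z$ cycles through only the $2^n-1$ nonzero states. A hedged Python sketch (the polynomial $t^4+t+1$ and the Gray-code walk at the end are my illustrative choices, not from the original post):

```python
def multiply_by_z_orbit(taps, n, start):
    """Orbit of `start` under multiplication by z in GF(2)[t]/(P(t)), where P
    has degree n and lower-order terms at the given tap positions
    (t^4 + t + 1 -> taps {0, 1}).  States are n-bit ints, bit i = coeff of t^i."""
    seen, state = [], start
    while state not in seen:
        seen.append(state)
        hi = (state >> (n - 1)) & 1            # coefficient of t^(n-1)
        state = (state << 1) & ((1 << n) - 1)  # multiply by t ...
        if hi:                                 # ... reducing t^n via P(t)
            for i in taps:
                state ^= 1 << i
    return seen

def sign_vectors_gray(n):
    """All 2^n vectors of +-1, each obtained from its predecessor by flipping
    one coordinate (position = number of trailing zeros of the step index k):
    a standard Gray-code walk, one simple way to meet the stated need."""
    v = [-1] * n
    yield tuple(v)
    for k in range(1, 1 << n):
        i = (k & -k).bit_length() - 1
        v[i] = -v[i]
        yield tuple(v)

print(len(multiply_by_z_orbit({0, 1}, 4, 1)))   # 15 = 2^4 - 1 nonzero states
print(multiply_by_z_orbit({0, 1}, 4, 0))        # [0]: the stuck cycle observed above
print(len(set(sign_vectors_gray(4))))           # 16: all sign vectors, no repeats
```

The Gray-code walk also splits easily across jobs: since $v_k$ is determined by $k$ alone (flip the bits of $k$'s Gray code $k \oplus (k \gg 1)$ into signs), each job can start from its own $k$.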
{ "source": [ "https://mathoverflow.net/questions/50522", "https://mathoverflow.net", "https://mathoverflow.net/users/11016/" ] }
50,557
The background is as follows: I have been whittling away at my commutative algebra notes (or, rather at commutative algebra itself, I suppose) recently for the occasion of a course I will be teaching soon. I just inserted the statement of the theorem that the ring $\overline{\mathbb{Z}}$ of all algebraic integers is a Bezout domain (i.e., all finitely generated ideals are principal; note that this ring is very far from being Noetherian). I actually don't remember off the top of my head who first proved this result -- and I would be happy to learn, although that's not my main question here . The reference that sticks in my mind is Kaplansky's 1970 text Commutative Rings , where he proves the following nice generalization: Theorem: Let $R$ be a Dedekind domain with fraction field $K$ and algebraic closure $\overline{K}$ . Suppose that for every finite subextension $L$ of $\overline{K}/K$ , the ideal class group of the integral closure $R_L$ of $R$ in $L$ is a torsion abelian group. Then the integral closure $S$ of $R$ in $\overline{K}$ is a Bezout domain. Neat, huh? Then when I got to deducing the result about $\overline{\mathbb{Z}}$ , it hit me: nowhere in these notes do I verify that $\mathbb{Z}$ satisfies the hypotheses of Kaplansky's theorem, namely that the ideal class group of the ring of integers of a number field is always finite. Don't get me wrong: I wasn't expecting anything different -- I actually don't know of any commutative algebra text which proves this result. Indeed, it is generally held that the finiteness of the class number is one of the first results of algebraic number theory which is truly number-theoretic in nature and not part of the general study of commutative rings. But the truth is that I've been bristling at this state of affairs for some time: I would really like there to be a subbranch of mathematics called "abstract algebraic number theory" which proves "general" results like this. 
(My reasons for this are, so far as I can recall at the moment, purely psychological and aesthetic: I have no specific ulterior motive here, alas.) To see past evidence of me flirting with these issues, see this previous MO question (which does not have an accepted answer) and these other notes of mine (which don't actually get off the ground and establish anything exciting). So let me try once again: Is there a purely algebraic proof of the finiteness of the class number? Unfortunately I don't know exactly what I mean here, because the standard proofs that one finds in algebraic number theory texts are certainly "purely algebraic" in nature or can be made so. (For instance, it is well known that it is convenient but not necessary to use geometry of numbers -- the original proofs of this finiteness result predate Minkowski's work.) Here are some criteria: I want a general -- or "structural" -- condition on a Dedekind domain that implies the finiteness of its class group. (In my previous question, I asked whether finiteness of the residue rings was such a condition. I still don't know the answer to that.) This condition should in particular apply to rings of integers of number fields and also to coordinate rings of regular, integral affine curves over finite fields. Note that already the standard "purely algebraic proofs" of finiteness of class number in the number field case do not in fact proceed by a general method which also works verbatim in the function field case: additional arguments are usually required. (See for instance Dino Lorenzini's Invitation to Arithmetic Geometry .) As far as number field / function field unity goes, the best approach I know is the adelic one: Fujisaki's Lemma, which is Theorem 1.1 here (see also the theorem on the last page). 
But this is a topological argument, and the topological and valuation theoretic properties of global fields which go into it are quite particular to global fields: I am (dimly) aware of results of Artin-Whaples which characterize global fields as the ones which have these nice properties: the product formula, and so forth. It is possible that what I am seeking simply doesn't exist. If you feel like you understand why the finiteness of class number is in some precise way arithmetic rather than algebraic in nature, please do explain it to me! Added : here are some further musings which might possibly be relevant. I like to think of three basic theorems of algebraic number theory as being of a kind ("the three finiteness theorems"): (i) $\mathbb{Z}_K$ is a Dedekind domain which is finitely generated as a $\mathbb{Z}$ -module. (ii) $\operatorname{Pic} \mathbb{Z}_K$ is finite. (iii) $\mathbb{Z}_K^{\times}$ is finitely generated as a $\mathbb{Z}$ -module. [Yes, there is also a fourth finiteness theorem due to Hermite, on restricted ramification, which is perhaps most important of all...] The first of these is acceptably "purely algebraic" to me: it is a result about taking the normalization of a Dedekind domain in a finite field extension. The merit of the adelic approach is that it shows that (ii) and (iii) are closely interrelated: the conjunction of the two of them is formally equivalent to the compactness of the norm one idele class group. So perhaps it is a mistake to fixate on conditions only ensuring the finiteness of the class group. For instance, the class of Dedekind domains with finite class group is closed under localization but the class of Dedekind domains which also have finitely generated unit group is not. However the "Hasse domains" -- i.e. $S$ -integer rings of global fields -- do have both of these properties.
Yes, there exist purely algebraic conditions on a Dedekind domain which hold for all rings of integers in global fields and which imply that the class group is finite. For a finite quotient domain $A$ (i.e., all non-trivial quotients are finite rings), a non-zero ideal $I\subseteq A$ and a non-zero $x\in A$ , let $N_{A}(I)=|A/I|$ and $N_{A}(x)=|A/xA|$ . Also define $N_{A}(0)=0$ . Call a principal ideal domain $A$ a basic PID if the following conditions are satisfied: $A$ is a finite quotient domain, for each $m\in\mathbb{N}$ , $$\#\{x\in A\mid N_{A}(x)\leq m\}>m$$ (i.e., $A$ has “enough elements of small norm”), there exists a constant $C\in\mathbb{N}$ such that for all $x,y\in A$ , $$N_{A}(x+y)\leq C\cdot(N_{A}(x)+N_{A}(y))$$ (i.e., $N_{A}$ satisfies the “quasi-triangle inequality”). Theorem . Let $A$ be a basic PID and let $B$ be a Dedekind domain which is finitely generated and free as an $A$ -module. Then $B$ has finite ideal class group. For the proof, see here . It is easy to verify that $\mathbb{Z}$ and $\mathbb{F}_q[t]$ are basic PIDs, so the ring of integers in any global field satisfies the hypotheses of the above theorem (using the non-trivial fact that rings of integers in global fields are finitely generated over one of these PIDs). More generally, one can take the class of overrings of Dedekind domains which are finitely generated and free over some basic PIDs. Since it is known that an overring of a Dedekind domain with finite class group also has finite class group, this gives a wider class of algebraically defined Dedekind domains (including $S$ -integers like $\mathbb{Z}[\frac{1}{p}]$ ) with finite class group. Added: The second condition for basic PIDs can be relaxed to: there exists a constant $c\in\mathbb{N}$ such that for each $m\in\mathbb{N}$ , $$ \#\{x\in A\mid N_{A}(x)\leq c\cdot m\}\geq m. $$
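For $A=\mathbb{Z}$ the three conditions are concrete: $N_{\mathbb{Z}}(x)=|x|$ for $x\neq0$, there are $2m+1>m$ elements of norm at most $m$, and the ordinary triangle inequality gives $C=1$. A trivial Python sanity check of conditions 2 and 3 on a finite range (illustration only, of course; finitely many checks prove nothing, and condition 1 for $\mathbb{Z}$ is classical):

```python
def norm(x):
    """N_Z(x) = |Z / xZ| for x != 0, with N_Z(0) = 0."""
    return abs(x)

# Condition 2: enough elements of small norm -- exactly 2m + 1 > m of them.
for m in range(1, 100):
    count = sum(1 for x in range(-3 * m, 3 * m + 1) if norm(x) <= m)
    assert count == 2 * m + 1 and count > m

# Condition 3: quasi-triangle inequality, here with constant C = 1.
C = 1
for x in range(-25, 26):
    for y in range(-25, 26):
        assert norm(x + y) <= C * (norm(x) + norm(y))

print("conditions 2 and 3 hold for Z on the tested range")
```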
{ "source": [ "https://mathoverflow.net/questions/50557", "https://mathoverflow.net", "https://mathoverflow.net/users/1149/" ] }
50,713
I'm trying to get an idea of Drinfeld's Zastava space . It seems to be an infinite-dimensional version of the flag variety, for affine Lie algebras. But, first of all, why is it called Zastava (Застава)? Sadly I don't understand Russian and I don't understand the connotation. Googling gave me this Wikipedia article about a car company first. But I don't think a car company has much to do with these spaces, or does it? Apparently it means "outpost" in Russian. But why an outpost? Drinfeld seems to be Ukrainian (again, from Wikipedia) and zastava means "pledge"? Is it an outpost to a greater understanding of these spaces? Is it a pledge of mathematicians to understand more???
The term was coined by one Michael Finkelberg during his visit to Croatia. The word is indeed Croatian and means "flag". I was happy to have a Croatian word in mathematics. The strategy of giving a new notion an old name but in a different language is not perfect.
{ "source": [ "https://mathoverflow.net/questions/50713", "https://mathoverflow.net", "https://mathoverflow.net/users/5420/" ] }
50,798
What are the pairs $(P,Q)$ of subsets of $\mathbb N$ for which the map \begin{eqnarray*} P\times Q & \rightarrow & {\mathbb N} \\\\ (p,q) & \mapsto & p+q \end{eqnarray*} is a bijection? Obvious examples are $P=\mathbb N$ with $Q=\{0\}$, or $P=2\mathbb N$ with $Q=\{0,1\}$. Are there others? This question is related to a puzzle given in EMISSARY (fall 2010), asking to find infinite series $f(x)$ and $g(x)$ with coefficients $0$ and $1$, whose product equals $\frac{1}{1-x}$. I suspect that the word infinite was written on purpose, and therefore $P$ and $Q$ must be infinite. Later. After the answers, I understand that one can find a sequence $(P_j)_{j\ge0}$ of subsets of $\mathbb N$ with $0\in P_j$, such that every $n\in\mathbb N$ can be written as $\sum_{j\ge0}p_j$ with $p_j\in P_j$ in a unique way. Of course, all but finitely many $p_j$'s are zero. Now, I feel dumb, because this follows for instance from the representation of integers in some base.
To comment on Qiaochu's answer, one can show that all such factorizations come from mixed radix representations (different bases, factorial base etc.). That is if $$\frac{1}{1-x}=P(x)Q(x)$$ then there must be a sequence $1=a_0\le a_1 \le a_2\le\cdots$ so that $a_i$ divides $a_{i+1}$ and disjoint subsets $A,B$ with $\mathbb N=A\cup B$ , so that $$P(x)=\prod_{i\in A}\frac{1-x^{a_{i+1}}}{1-x^{a_i}},Q(x)=\prod_{i\in B}\frac{1-x^{a_{i+1}}}{1-x^{a_i}}.$$ The proof is simple, suppose $P(x)=1+x+\cdots +x^{a_1-1}+\cdots$ then $Q(x)=Q_1(x^{a_1})$ and $P(x)=\frac{x^{a_1}-1}{x-1}P_1(x)$ . Then we proceed by induction.
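For instance, taking $a_i=2^i$ with $A$ the even indices and $B$ the odd indices gives $P(x)=\prod_{i\ \mathrm{even}}(1+x^{2^i})$ and $Q(x)=\prod_{i\ \mathrm{odd}}(1+x^{2^i})$: $P$ consists of the integers whose binary digits occupy only even positions, $Q$ of those using only odd positions. A quick Python check of the resulting bijection (an illustration, not part of the proof):

```python
def uses_only(n, positions):
    # True if the binary expansion of n has 1s only at the given bit positions
    i = 0
    while n:
        if (n & 1) and i not in positions:
            return False
        n >>= 1
        i += 1
    return True

K = 10
P = [n for n in range(2 ** K) if uses_only(n, set(range(0, K, 2)))]
Q = [n for n in range(2 ** K) if uses_only(n, set(range(1, K, 2)))]

# (p, q) -> p + q hits each n in [0, 2^K) exactly once: p carries the even
# binary digits of n and q carries the odd ones.
sums = sorted(p + q for p in P for q in Q)
assert sums == list(range(2 ** K))
```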
{ "source": [ "https://mathoverflow.net/questions/50798", "https://mathoverflow.net", "https://mathoverflow.net/users/8799/" ] }
50,872
I realize this question isn't strictly mathematical, and if it doesn't fit with the content on this site then feel free (moderators/high-rep users) to close it. But when I thought up the question it seemed to me that the users on this site would be best equipped to answer it. I've always been intrigued by the paintings of M.C. Escher comprising infinitely repeating patterns. For those of you not familiar with his work, here's an example, titled Angels & Devils: I recently wrote a blog post about the above picture and then started thinking: how did Escher do that? It's kind of like a fractal, right? (Or is that an extremely ignorant thing to say?) Maybe if I were to attempt such a painting myself, today, I could find a computer program to generate random fractal-like patterns over and over until I found one I felt I could work with; then I could simply "fill in the space" with whatever image I chose. But surely Escher didn't have any such tool available to him, right? So: how might Escher have designed such patterns? Does anyone have any mathematical insight into what process might have been used to accomplish this? Alternatively, does anyone possibly have some historical knowledge of how Escher actually did do this?
'Around 1956, Escher explored the concept of representing infinity on a two-dimensional plane. Discussions with Canadian mathematician H.S.M. Coxeter inspired Escher's interest in hyperbolic tessellations, which are regular tilings of the hyperbolic plane. Escher's works Circle Limit I–IV demonstrate this concept. In 1995, Coxeter verified that Escher had achieved mathematical perfection in his etchings in a published paper. Coxeter wrote, "Escher got it absolutely right to the millimeter."' http://en.wikipedia.org/wiki/M._C._Escher If Angels and Devils is a hyperbolic tessellation then it might have been inspired by Coxeter. The construction itself was done using techniques like these: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.8746&rep=rep1&type=pdf
{ "source": [ "https://mathoverflow.net/questions/50872", "https://mathoverflow.net", "https://mathoverflow.net/users/11947/" ] }
50,992
This question is motivated by the current interest of Mathematics and Physics community in Wall Crossing. My questions are : What is wall crossing in Physics, what are the reasons for current interest in it. What is wall crossing in terms of mathematics, what is the reason for interest, is it just physics or some mathematical motivation. thanks.
Very roughly speaking, "wall-crossing" refers to a situation where you construct a would-be "invariant" $\Omega(t)$, that would naively be independent of parameters $t$ but actually depends on them in a piecewise-constant way: so starting from any $t_0$, $\Omega(t)$ is invariant under small enough deformations, but jumps at certain real-codimension-1 loci in the parameter space (the walls). You might initially think of this as a kind of quality-control problem in your invariant factory, to be eliminated by some more clever construction of an improved $\Omega(t)$; but at the moment it seems that this is the wrong point of view: there are interesting quantities that really do have wall-crossing behavior. To name one example of such a quantity: suppose you have a compact Kahler manifold $M$ with an anticanonical divisor $D$, and you want to construct the mirror of $M \setminus D$ following the ideas of Strominger-Yau-Zaslow. As it turns out, one of the essential ingredients you will need is a count of holomorphic discs in $M$, with boundary on a special Lagrangian torus $T(t)$ in $M$ (lying in a family parameterized by $t$). The number of such discs in a given homology class exhibits wall-crossing as $t$ varies, and this wall-crossing turns out to be crucial in making the construction work. This story has been developed by Auroux. In physics, the wall-crossing phenomena that have been studied a lot recently arose in the context of "BPS state counting". If you have a supersymmetric quantum field theory of the right sort, depending on parameters $t$, you can define a collection of numbers $\Omega(\gamma, t) \in {\mathbb Z}$: they are superdimensions of certain graded Hilbert spaces attached to the theory (spaces of "1-particle BPS states with charge $\gamma$"). These quantities exhibit wall-crossing as a function of $t$. 
Moreover, $\Omega(\gamma, t)$ are among the relatively few quantities in field theory that we are sometimes able to calculate exactly , so naturally they have attracted a lot of interest. In particular, they are the subject of the Ooguri-Strominger-Vafa conjecture of 2004, which in some cases relates their asymptotics to Gromov-Witten invariants; the investigation of this conjecture (mostly by Denef-Moore) is what triggered the current resurgence of interest in wall-crossing from the physics side. A particular case is the $4$-dimensional quantum field theory (or supergravity) associated to a Calabi-Yau threefold $X$ (obtained by dimensional reduction of the $10$-dimensional string theory on the $6$-dimensional $X$ to leave $10-6=4$ dimensional space.) In that case the physically-defined $\Omega(\gamma,t)$ are to be identified with the "generalized Donaldson-Thomas invariants" of $X$, studied by Joyce-Song and Kontsevich-Soibelman among others. The mathematical interpretation of $t$ in that case is as a point on the space of Bridgeland stability conditions of $X$. (If $X$ is compact, the last I heard, this space is not known to be nonempty, but the majority view seems to be that this gap will be filled...) One focal point for the excitement of the last few years is that a pretty remarkable wall-crossing formula has been discovered, the "Kontsevich-Soibelman wall-crossing formula", which completely answers the question of how $\Omega(t)$ depends on $t$, and seems to apply (in some form) to all of the situations I described above. The formula was rather surprising to physicists; the process of trying to understand why it is true in the physical setting led to some interesting physical and geometric spin-offs, some of which seem likely to be re-importable into pure mathematics.
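The simplest instance of the Kontsevich-Soibelman formula, the "pentagon" identity $T_{\gamma_1}T_{\gamma_2}=T_{\gamma_2}T_{\gamma_1+\gamma_2}T_{\gamma_1}$ for charges with $\langle\gamma_1,\gamma_2\rangle=1$, can be checked directly as an identity of birational maps. The Python sketch below uses exact rational arithmetic; the sign and composition conventions are one common choice, not taken from any particular reference:

```python
from fractions import Fraction
from itertools import product

def T(a, b):
    # KS transformation attached to the charge gamma = (a, b), acting on the
    # torus coordinates by x -> x (1 + x^a y^b)^{<(1,0),(a,b)>} and
    # y -> y (1 + x^a y^b)^{<(0,1),(a,b)>}, with <(p,q),(r,s)> = p s - q r.
    # (These conventions are one standard choice among several.)
    def act(pt):
        x, y = pt
        u = 1 + x ** a * y ** b
        return (x * u ** b, y * u ** (-a))
    return act

def compose(*maps):
    # functional composition: the rightmost map acts first
    def act(pt):
        for m in reversed(maps):
            pt = m(pt)
        return pt
    return act

# Pentagon identity for <gamma_1, gamma_2> = 1:
#   T_{(1,0)} T_{(0,1)} = T_{(0,1)} T_{(1,1)} T_{(1,0)}
lhs = compose(T(1, 0), T(0, 1))
rhs = compose(T(0, 1), T(1, 1), T(1, 0))

samples = [Fraction(1, 3), Fraction(2, 5), Fraction(7, 2)]
for x, y in product(samples, repeat=2):
    assert lhs((x, y)) == rhs((x, y))   # equal as exact rational maps
```

Both sides work out to $(x,y)\mapsto(x(1+y),\,y/(1+x+xy))$, which one can also verify by hand.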
{ "source": [ "https://mathoverflow.net/questions/50992", "https://mathoverflow.net", "https://mathoverflow.net/users/9534/" ] }
51,091
Oftentimes, in the standard algebraic topology books (May, Switzer, Whitehead, for instance), there are tricky little proofs that depend on proving that two maps are homotopic. This is comparable to the way we build homotopies, lifts, etc. combinatorially in simplicial homotopy theory, but for some reason I never really acquired the skill-set (maybe the intuition?) to come up with these homotopies in the topological case. I'm just mystified how these little formulas are pulled out of thin air. Am I missing a key technique that's often taught early-on in an algebraic topology course? Is it tricky even with practice? Have there been any papers that focus on systematic ways of generating these things? I also noticed that in May's book, he oftentimes writes out explicit formulas for his homotopies, sometimes in a way that obscures the issue at hand (for instance, there is a homotopy that is described by an explicit formula, but it's nothing more than an explicit "representative of the natural homotopy" between the identity map and the constant map on a contractible based space.) How often can these seemingly arbitrary formulas be replaced with more canonical descriptions? (This last question is a soft question to people with experience in topology)
The basic phenomenon is that often the best way to think about "little homotopies" is to use the geometric parts of your brain --- to use primarily your GPU (geometry processing unit), with your arithmetic processing unit, logic processing unit and lexical processing units all in the background, so to speak. However, when writing down a proof, it's customary, and usually easier to transcribe it into symbolic form. This tends to be a one-way process --- it's much harder to start from symbolic formulas and regenerate the geometric intuiton than to start from the geometric intuition and transcribe it into symbolic formulas. It has become much easier to create reasonable figures illustrating geometric ideas than it used to be (say 20 or 30 years ago), but it's still hard. It's especially hard to directly convey geometric intuition in higher dimensions --- word portraits of geometric ideas can be good, but most mathematical writing neglects them. I think the best strategy for learning is to avoid reading symbolic definitions of these little homotopies until you have spent some effort thinking about them for yourself, primarily in your head. (Sketches can be good too, but they're often another layer of difficulty. Geometric imagination is not predominantly visual; it's a learned, tricky skill to be able to draw an image on paper that adequately represents a geometric mental model.) In my experience, the symbolic descriptions often actively interfere with geometric understanding; at first, only use them as hints, for times after you've thought hard and are stuck. It takes time and concentration to build good mental images, but geometric imagination does improve with practice, and it's worth the effort. Eventually, you learn to read the formulas and evoke the geometric images.
{ "source": [ "https://mathoverflow.net/questions/51091", "https://mathoverflow.net", "https://mathoverflow.net/users/1353/" ] }
51,187
I'm not a set theorist, but I understand the 'pop' version of set-theoretic forcing: in analogy with algebra, we can take a model of a set theory, and an 'indeterminate' (which is some poset), and add it to the theory and then complete to a model with the desired properties. I understand the category theoretic version better, which is to take sheaves (valued in a given category $Set$) over said poset with a given Grothendieck topology (the double negation topology). The resulting topos is again a model of set theory, but now has the properties you want, absent from the original $Set$. But what is this poset, really? Is it the poset of subobjects of the set you want to append to your theory/model (say a set of a specified cardinality, or some tree with a property)? Is it related to a proof of the property you are interested in? To clarify, I'm not interested in the mechanical definition of an appropriate generic poset, but what it is morally. Bonus points for saying what it 'is' before and after forcing, if this even makes sense.
The other answers are excellent, but let me augment them by offering an intuitive explanation of the kind you seem to seek. In most forcing arguments, the main idea is to construct a partial order out of conditions that each consist of a tiny part of the generic object that we would like to add; each condition should make a tiny promise about what the new generic object will be like. The generic filter in effect provides a way to bundle these promises together into a coherent whole having the desired properties. For example, with the Cohen forcing $\text{Add}(\omega,\theta)$, we want to add $\theta$ many new subsets of $\omega$, in order to violate CH, say. So we use conditions that specify finitely many bits in a $\omega\times\theta$ matrix of zeros and ones. Each condition makes a finite promise about how the entire matrix will be completed. The union of all conditions in the generic filter is a complete filling-in of the matrix. Genericity guarantees that each column of this matrix is a new real not present in the ground model and different from all other columns, since any finite condition can be extended so as to disagree on any particular column with any particular real or from any particular other column. With the collapse forcing $\text{Coll}(\omega,\kappa)$, we want to add a new surjective function $f:\omega\to\kappa$. So we use conditions consisting of the finite partial functions $p:\omega\to\kappa$, ordered by extension. Each such condition is a tiny piece of the generic function we want to add, describing finitely much of it. The union of the generic filter provides a total function $g:\omega\to \kappa$, and the genericity of the filter will guarantee that $g$ is surjective, since for any $\alpha<\kappa$, any condition $p$ can be extended to a stronger condition having $\alpha$ in the range. And similarly with many other forcing arguments. 
We design the partial order to consist of tiny pieces of the object that we are trying to add, each of which makes a small promise about the generic object. If $G$ is a generic filter for this partial order, then the union of $G$ is the joint collection of all these promises. In many forcing arguments, it is not enough just to build a partial order consisting of tiny pieces of the desired object, since one also wants to know that the forcing preserves other features. For example, we want to know that the forcing does not inadvertently collapse cardinals or that it can be iterated without collapsing cardinals. This adds a wrinkle to the idea above, since one wants to use tiny pieces of the generic object, but impose other requirements on the conditions that will ensure that the partial order has a nice chain-condition or is proper and so on. So the design of a forcing notion is often a trade-off between these requirements---one must find a balance between simple-mindedly added pieces of the desired generic object and ensuring that the partial order has sufficient nice properties that it doesn't destroy too much. In this sense, I would say that the difficult part of most forcing arguments is not the mastery of the forcing technology, the construction of the generic filter and of the model---although that aspect of forcing is indeed nontrivial---but rather it is the detailed design of the partial order to achieve the desired effect.
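The "tiny promises" picture can be made concrete in a finite toy version: with only finitely many dense requirements, a greedy extension of conditions produces a filter meeting all of them. The names in this Python sketch are purely illustrative:

```python
def extend_to_decide(p, n):
    # D_n = {conditions deciding bit n}: dense, since any p extends into it.
    if n not in p:
        p = dict(p)
        p[n] = 0
    return p

def extend_to_differ(p, r):
    # E_r = {conditions disagreeing with the ground-model real r somewhere}:
    # dense, since any p can be extended by one new bit chosen to differ.
    if any(p[k] != r(k) for k in p):
        return p
    n = max(p, default=-1) + 1
    p = dict(p)
    p[n] = 1 - r(n)
    return p

# three "ground model" reals, given as 0/1-valued functions on N
ground_reals = [lambda k: 0, lambda k: 1, lambda k: k % 2]

p = {}                                  # the empty condition promises nothing
for r in ground_reals:                  # meet each dense set E_r in turn
    p = extend_to_differ(p, r)
for n in range(10):                     # then decide the first ten bits
    p = extend_to_decide(p, n)

# the finitely many promises cohere: p differs from every listed real
assert all(any(p[k] != r(k) for k in p) for r in ground_reals)
assert all(n in p for n in range(10))
```

Of course, a true generic filter must meet every dense set of the ground model, which no finite greedy construction can do; the toy only shows how each dense set encodes one small promise and how compatible conditions amalgamate.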
{ "source": [ "https://mathoverflow.net/questions/51187", "https://mathoverflow.net", "https://mathoverflow.net/users/4177/" ] }
51,395
There have been similar questions on mathoverflow, but the answers always gave some advanced introduction to the mathematics of quantum field theory, or string theory and so forth. While those may be good introduction to the mathematics of those subjects, what I require is different: what provides a soft and readable introduction to the (many) concepts and theories out there, such that the mathematics involved in it is in comfortable generality. What makes this is a "for mathematicians" question, is that a standard soft introduction will also assume that the reader is uncomfortable with the word "manifold" or certainly "sheaf" and "Lie algebra". So I'm looking for the benefit of scope and narrative, together with a presumption of mathematical maturity. N.B. If your roadmap is several books, that is also very welcome.
If you really know nothing about physics I suggest you begin with any undergraduate physics textbook. Easy to read, it will introduce the main usual suspects. After, you'll ask again :) I am not sure that jumping from nothing to quantum mechanics, or even worse quantum field theory, would be wise, like jumping from nothing in math to algebraic geometry or K-Theory. After that, it depends of course at what level of mathematical physics you want to stop. I will illustrate this with some examples: Question: What is the "mass" of an isolated dynamical system? Math Answer: It is the cohomology class of the action of the Galilei group, measuring the lack of equivariance of the moment map, on a symplectic manifold representing the isolated dynamical system. Another question: Why in general relativity is $E = mc^2$? Math Answer: Because the Poincaré group has no cohomology. Another, other question: What is the theorem of decomposition of motions around the center of gravity? Math Answer: Let $(M,\omega)$ be a symplectic manifold with a Hamiltonian action of the Galilei group; if the "mass" of the system is not zero (in the sense above) then $M$ is the symplectic product of $({\bf R}^6, {\rm can})$, representing the motions of the center of gravity, by another symplectic manifold $(M_0,\omega_0)$, representing the motions around the center of gravity. The Galilei group acts naturally on $\bf R^6$, and $SO(3) \times {\bf R}$ on $M_0$. Another, other, other question: What are the constants of motion? Math Answer: Let $(M,\omega)$ be a pre-symplectic manifold with a Hamiltonian action of a Lie group $G$; then the moment map is constant on the characteristics of $\omega$, that is, the integral manifolds of the vector distribution $x \mapsto \ker(\omega_x)$. These answers are the mathematical versions of classical physics constructions, but it would be very difficult to appreciate them if you have no pedestrian introduction to physics.
You may also enjoy Aristotle's book "Physics", as a first dish, just for tasting the flavor of physics :) After that, you will be able to appreciate quantum mechanics as well, but this is another question. Addendum Just before entering the modern world of physics I would suggest a few basic readings for the winter evenings, near the fireplace (I'm sorry I write them down in French because I read them in French). • Platon, Timée, trad. Émile Chambry. • Aristote, La Physique, Éd. J. Vrin. • Maïmonide, Le Guide des Égarés, Éd. Maisonneuve & Larose. (the part about time as an accident of motion, accident of the thing. Very deep and modern thoughts). • Giordano Bruno, Le Banquet des Cendres, Éd. L’éclat. • Galileo Galilei, Dialogue sur les Deux Grands Systèmes du Monde, Éd. Points. • Albert Einstein, La Relativité, Éd. Payot. • Joseph-Louis Lagrange, Mécanique Analytique, Éd. Blanchard. • Felix Klein, Le Programme d’Erlangen, Éd. Gauthier-Villars. • Jean-Marie Souriau, Structure des Systèmes Dynamiques, Éd. Dunod. • Victor Guillemin & Shlomo Sternberg, Geometric Asymptotics, AMS. • François De Gandt, Force and Geometry in Newton's Principia.
{ "source": [ "https://mathoverflow.net/questions/51395", "https://mathoverflow.net", "https://mathoverflow.net/users/5756/" ] }
51,399
Since I was first introduced to it, I've been intrigued by the claim that the universe contains a finite amount of information. (That link is not where I first encountered the concept; it is simply the first example of this claim I could find from a quick Google search.) Basically, the argument seems to be that if there is a finite amount of matter in the universe, that matter can only store a finite amount of information. On the surface, I have to concede that this makes a lot of sense. After all, if I'm thinking in terms of bits (for example), I might visualize a hypothetical "infinite hard disk drive" that could store unlimited data. This device would presumably have to be infinite in size, since it stores information on a physical platter that obviously occupies some space. Digging a little deeper, however, I start to doubt this presumption. After all, information can be compressed according to a system of encoding information in a particular set of symbols. Then as long as the system provides a way of decoding that information, you could effectively increase the capacity of any storage mechanism by encoding its contents using said system (analogous to converting every file on a hard disk using some compression algorithm such as LZMA). But, there's still more to it than that. It goes without saying that any system of compression like what I just described comprises its own information, and therefore needs to be stored somewhere itself. Since the universe is "all there is" (?), a system of encoding the information contained within it would have to be a part of that very information. This is where I think I hit a mental wall.
On the one hand, it seems that you could extract a seemingly unlimited amount of information from finite data—by using a system to encode that data, another system to encode the encoded data, and so on and so forth—whereas on the other, intuition tells me that there must come a point where, if the data as well as the system must share a space, there is no longer any room for either more data or another system of encoding it. The available space becomes too "crowded," so to speak. Is there a mathematical principle or theorem that answers this question? Is the problem I'm describing (determining a limit on the capacity of material data to store information) defined, analyzed, and/or illuminated by any particular concept(s) in mathematics?
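One relevant mathematical fact here is the counting (pigeonhole) argument underlying Kolmogorov complexity: no lossless encoding can shorten every input, because there are more strings of length $n$ than strings of length less than $n$, so iterating encoders cannot manufacture unbounded capacity. A trivial Python check of the count:

```python
# Strings of length < n versus strings of length exactly n, over bits:
for n in range(1, 30):
    shorter = sum(2 ** k for k in range(n))   # lengths 0, 1, ..., n - 1
    assert shorter == 2 ** n - 1 < 2 ** n

# Pigeonhole: a lossless (injective) encoder cannot map all 2^n inputs of
# length n into the 2^n - 1 strictly shorter strings, so some input does
# not compress, and repeated encoding cannot create new capacity.
```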
{ "source": [ "https://mathoverflow.net/questions/51399", "https://mathoverflow.net", "https://mathoverflow.net/users/11947/" ] }
51,494
It is well known that a separable space is a topological space that has a countable dense subset. I am wondering how this is related to the name 'separable'? Any intuition where the name comes from?
As far as I know the word separable was introduced by M. Fréchet in Sur quelques points du calcul fonctionnel, Rend. Circ. Mat. Palermo 22 (1906), 1-74. The paper can be obtained via this link (Springer). It's the famous paper in which he introduced metric spaces. He considers first slightly more general objects which he calls classes (V), where (V) stands for voisinage — neighborhood. Remark: Metrics are introduced under the name écart in n° 49 on page 30. It is peculiar that the symmetry condition is not explicitly mentioned, but it seems to be understood, as Fréchet immediately mentions that metric spaces generalize classes (V), cf. n° 27 on page 17f. However, I couldn't find an instance where he actually uses it; he is always careful to respect the order — I may have missed something since I haven't read the paper in detail. I quote the relevant passage [from n° 37 on page 23f]: Nous appellerons ensuite classe séparable une classe qui puisse être considérée d'au moins une façon comme l'ensemble dérivé d'un ensemble dénombrable de ses propres éléments. [...] Ceci étant, nous nous bornerons maintenant à l'étude des classes (V) NORMALES, c'est-à-dire parfaites, séparables et admettant une généralisation du théorème de CAUCHY. Cette limitation n'a du reste rien d'artificiel, elle provient directement de la comparaison des classes (V) avec les ensembles linéaires [...] [...] Passons maintenant aux classes séparables. On peut qualifier ainsi les ensembles linéaires en considérant la droite indéfinie comme l'ensemble dérivé de l'ensemble des points d'abscisses rationnelles. Mais il n'en est pas de même pour toute classe parfaite (V). Below is a translation into English (made by several people here). Very roughly: Fréchet defines separable spaces as we do it today and says that in the following he will restrict attention to complete, perfect and separable metric spaces. The last quoted paragraph indeed confirms Qiaochu's comment.
We will henceforth define a separable class as a class that can be considered in at least one way as the derived set of a countable set of its own elements. [...] This being said, we shall restrict ourselves now to the study of (V) NORMAL classes, that is to say perfect, separable and admitting a generalization of Cauchy's theorem. This limitation has in fact nothing artificial; it comes directly from the comparison of the classes (V) with linear sets. [...] We now pass to separable classes. We can qualify in this way linear sets by viewing the indefinite line as the derived subset of the set of its points with rational abscissa. But it isn't so for all perfect classes (V).
{ "source": [ "https://mathoverflow.net/questions/51494", "https://mathoverflow.net", "https://mathoverflow.net/users/1992/" ] }
51,531
There are several well-known mathematical statements that are 'obvious' but false (such as the negation of the Banach--Tarski theorem). There are plenty more that are 'obvious' and true. One would naturally expect a statement in the latter category to be easy to prove -- and they usually are. I'm interested in examples of theorems that are 'obvious', and known to be true, but that lack (or appear to lack) easy proofs. Of course, 'obvious' and 'easy' are fuzzy terms, and context-dependent. The Jordan curve theorem illustrates what I mean (and motivates this question). It seems 'obvious', as soon as one understands the definition of continuity, that it should hold; it does in fact hold; but all the known proofs are surprisingly difficult. Can anyone suggest other such theorems, in any areas of mathematics?
If $I_1,I_2,\dots$ are intervals of real numbers with lengths that sum to less than 1, then their union cannot be all of $[0,1]$. It is quite common for people to think this statement is more obvious than it actually is. (The "proof" is this: just translate the intervals so that the end point of $I_1$ is the beginning point of $I_2$, and so on, and that will clearly maximize the length of interval you can cover. The problem is that this argument works just as well in the rationals, where the conclusion is false.)
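To see the failure over the rationals concretely: enumerate the rationals in $[0,1]$ and cover the $n$-th by an interval of length $2^{-(n+1)}$; the total length is less than $1$, yet every rational is covered. A Python sketch of this (illustrative) construction:

```python
from fractions import Fraction

# Enumerate the rationals in [0, 1] by increasing denominator ...
rationals = []
for q in range(1, 25):
    for p in range(q + 1):
        r = Fraction(p, q)
        if r not in rationals:
            rationals.append(r)

# ... and cover the n-th one by an open interval of length 2^-(n+1).
intervals = []
total = Fraction(0)
for n, r in enumerate(rationals):
    half = Fraction(1, 2 ** (n + 2))
    intervals.append((r - half, r + half))
    total += 2 * half

assert total < 1                       # the lengths sum to less than 1 ...
assert all(any(lo < r < hi for lo, hi in intervals)
           for r in rationals)         # ... yet every listed rational is covered
```

So any correct proof of the real-line statement must use a property of $\mathbb{R}$ not shared by $\mathbb{Q}$, such as completeness or countable additivity of length.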
{ "source": [ "https://mathoverflow.net/questions/51531", "https://mathoverflow.net", "https://mathoverflow.net/users/3755/" ] }
51,685
To prove L'Hôpital's rule, the standard method is to use Cauchy's Mean Value Theorem (and note that once you have Cauchy's MVT, you don't need an $\epsilon$-$\delta$ definition of limit to complete the proof of L'Hôpital). I'm assuming that Cauchy was responsible for his MVT, which means that Bernoulli didn't know about it when he gave the first proof. So what did he do instead?
L'Hôpital's rule was first published in Analyse des Infiniment Petits. According to The Historical Development of The Calculus by Edwards (p. 269), L'Hospital's argument, which is stated verbally without functional notation (see the English translation included in Struik's source book, pp. 313-316), amounts simply to the assertion that $$\frac{f(a+dx)}{g(a+dx)}= \frac{f(a) + f'(a) dx}{g(a) + g'(a)dx}=\frac{f'(a) dx}{g'(a) dx} =\frac{f'(a)}{g'(a)}$$ provided that $f(a) = g(a) = 0$. He concludes that, if the ordinate $y$ of a given curve "is expressed by a fraction, the numerator and denominator of which do each of them become 0 when $x = a$," then "if the differential of the numerator be found, and that is divided by the differential of the denominator, after having made $x = a$, we shall have the value of [the ordinate $y$ when $x = a$]." Edit. J.L. Coolidge explains in The Mathematics of Great Amateurs (see pp. 159-160 of the 2nd edition) that L'Hôpital was interested in calculating $$\lim\limits_{x\to a}\frac{\sqrt{2a^3x-x^4}-a\sqrt[3]{a^2x}}{a-\sqrt[4]{ax^3}}=\frac{16}{9}a.$$ As a matter of fact this particular problem had worried him a good deal. We find him writing in July 1693 to John Bernoulli suggesting that we should substitute directly in the original equation, getting $$\frac{a^2-a^2}{a-a}=2a,$$ and in September of the same year he writes: 'Je vous avoue que je me suis fort appliqué à résoudre l'équation $$\frac{\sqrt{2a^3x-x^4}-a\sqrt[3]{a^2x}}{a-\sqrt[4]{ax^3}}=y$$ lorsque $x=a$, car ne voyant point de jour pour y réussir puisque toutes les solutions qui se présentent d'abord ne sont pas exactes.' (In English: 'I confess to you that I have applied myself greatly to solving this equation when $x = a$, for I see no way to succeed, since all the solutions that present themselves at first are not exact.') All this suggests that L'Hospital learnt the correct solution from Bernoulli, but did not give him the specific credit, with the unfortunate result that the method came to be known as L'Hospital's method.
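As a quick sanity check (not in Coolidge), the limit that troubled L'Hospital can be confirmed numerically, say with $a=1$, where the ratio should approach $16/9$:

```python
from math import sqrt

a = 1.0

def f(x):
    # numerator: sqrt(2 a^3 x - x^4) - a (a^2 x)^(1/3)
    return sqrt(2 * a**3 * x - x**4) - a * (a**2 * x) ** (1 / 3)

def g(x):
    # denominator: a - (a x^3)^(1/4)
    return a - (a * x**3) ** (1 / 4)

# approach x = a = 1 from below and compare with the predicted limit 16a/9
for h in (1e-3, 1e-4, 1e-5, 1e-6):
    x = 1 - h
    assert abs(f(x) / g(x) - 16 * a / 9) < 100 * h   # error shrinks like O(h)
```

Differentiating by hand gives $f'(1)=-4/3$ and $g'(1)=-3/4$, so the rule indeed yields $(-4/3)/(-3/4)=16/9$.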
{ "source": [ "https://mathoverflow.net/questions/51685", "https://mathoverflow.net", "https://mathoverflow.net/users/4194/" ] }
51,754
There is often a lot of confusion surrounding the differences between relativizing individual formulas to models and the expression of "is a model of" through coding the satisfaction relation with Gödel operations. I think part of this can be attributed to the common preference for using formulas over codings. For example, a standard proof showing that $V_{\kappa} \models ZFC$ for $\kappa$ inaccessible will appeal to the fact that all of the ZFC axioms relativized to $V_{\kappa}$ are true. But then one learns about the Lévy Reflection Theorem scheme which allows every (finite) conjunction of formulas to be reflected to some $V_{\alpha}$. Perhaps this knowledge is followed by a question of whether the Compactness theorem can be used to contradict Gödel's Second Incompleteness Theorem. Specifically, consider the following erroneous proof that ZFC + CON(ZFC) proves its own consistency: Introduce a new constant $M$ into the language of set theory and add to the axioms of ZFC all of its axioms $\varphi_n$ relativized to $M$, denoted $\varphi_n^M$. Provided that ZFC is consistent, every finite subcollection of this theory is consistent by the Lévy Reflection Theorem, whereby the Compactness Theorem tells us that the entire theory ZFC + "$M \models ZFC$" will be consistent. Consequently, this theory has a (ZFC) model $N$, so in this model, there exists a model $M$ of ZFC. To summarize then, arguing in ZFC + CON(ZFC), we've seemingly proven that we have a ZFC model $N$ modeling the consistency of ZFC by virtue of it having the model $M$ (i.e., seemingly $N \models ZFC + CON(ZFC)$, so we would have a proof of CON(ZFC + CON(ZFC))). The misstep in this proof is of course a misuse of the conclusion of the Compactness theorem, namely the assumption that such an $N$ will think that $M$ is a ZFC model.
With some enumeration of the formulas of the axioms $\{\varphi_n| n \in \mathbb{N}\}$ of ZFC, it is clear that $N$ will certainly think that $M \models \varphi_n$ for any particular $n \in \mathbb{N}$ analogous to how a nonstandard model of Peano arithmetic has an element $c$ satisfying $c > n$ for any particular $n \in \mathbb{N}$. The problem of course in the case of $N$ is that there may be formulas with nonstandard indices not accounted for just as there will definitely be nonstandard numbers greater than $c$ in the PA example. If one were to carry out the same proof with the more tedious arithmetization of syntax, then this link may be more apparent. To a lesser extent, there may also be confusion with the fact that $0^{\sharp}$ provides us with a proper class of $L(\alpha) \preceq L$. This may lead to the question of whether $L$ has its own truth predicate, contradicting Tarski's Theorem. But of course $L$ will only realize that each of these $\varphi^{L(\alpha)}$ is true for any ZFC axiom $\varphi$, and if one attempts to appeal to the arithmetization of syntax, one can begin to see the problem that these $\alpha$ may not (and of course will not) be definable (without parameters) in the constructible universe L. Since these types of misconceptions can be common among logicians and non-logicians alike, I thought I would ask the highly intelligent mathematicians who have worked through such problems or helped illuminate them to others if they would do so here as well. I think compiling a collection of tidbits of wisdom in this area from the collective perspectives of the MO Community can be illuminating to all. As such, my question is as follows: What insights can you share regarding the questions of formalizing "is a model of ZFC" in ZFC and the various "paradoxes" that arise? For example, maybe you can show a related seemingly paradoxical problem and resolve it, or simply share your thoughts on how to avoid such traps of logic.
Here is a result along the lines you are requesting, which I find beautifully paradoxical. Theorem. Every model of ZFC has an element that is a model of ZFC. That is, every $M\models ZFC$ has an element $m$, which $M$ thinks is a structure in the language of set theory, a set $m$ and a binary relation $e$ on $m$, such that if we consider externally the set of objects $\bar m=\{\ a\ |\ M\models a\in m\ \}$ with the relation $a\mathrel{\bar e} b\leftrightarrow M\models a\mathrel{e} b$, then $\langle \bar m,\bar e\rangle\models ZFC$. Many logicians instinctively object to the theorem, on the grounds of the Incompleteness theorem, since we know that $M$ might model $ZFC+\neg\text{Con}(ZFC)$. And it is true that this kind of $M$ can have no model that $M$ thinks is a ZFC model. The paradox is resolved, however, by the kind of issues mentioned in your question and the other answers, that the theorem does not claim that $M$ agrees that $m$ is a model of the ZFC of $M$, but only that it externally is a model of the (actual) ZFC. After all, when $M$ is nonstandard, it may be that $M$ does not agree that $m$ satisfies ZFC, even though $m$ actually is a model of ZFC, since $M$ may have many non-standard axioms that it insists upon. Proof of theorem. Suppose that $M$ is a model of ZFC. Thus, in particular, ZFC is consistent. If it happens that $M$ is $\omega$-standard, meaning that it has only the standard natural numbers, then $M$ has all the same proofs and axioms in ZFC that we do in the meta-theory, and so $M$ agrees that ZFC is consistent. In this case, by the Completeness theorem applied in $M$, it follows that there is a model $m$ which $M$ thinks satisfies ZFC, and so it really does. The remaining case occurs when $M$ is not $\omega$-standard. In this case, let $M$ enumerate the axioms of what it thinks of as ZFC in the order of their Goedel numbers. An initial segment of this ordering consists of the standard axioms of ZFC. 
Every finite collection of those axioms is true in some $(V_\alpha)^M$ by an instance of the Reflection theorem. Thus, since $M$ cannot identify the standard cut of its natural numbers, it follows (by overspill) that there is some nonstandard initial segment of this enumeration that $M$ thinks is true in some $m=(V_\alpha)^M$. Since this initial segment includes all actual instances of the ZFC axioms, it follows that $m$ really is a model of ZFC, even if $M$ does not agree, since it may think that some nonstandard axioms might fail in $m$. $\Box$ I first learned of this theorem from Brice Halimi, who was visiting in New York in 2011, and who subsequently published his argument in: Halimi, Brice, Models as universes, Notre Dame J. Formal Logic 58, No. 1, 47-78 (2017). ZBL06686417. Note that in the case that $M$ is $\omega$-nonstandard, we actually get that a rank initial segment $(V_\alpha)^M$ is a model of ZFC. This is a very nice transitive set from $M$'s perspective. There are other paradoxical situations that occur with countable computably saturated models of ZFC. First, every such $M$ contains a rank initial segment $(V_\alpha)^M$ such that externally, $M$ is isomorphic to $(V_\alpha)^M$. Second, every such $M$ contains an element $m$ which $M$ thinks is an $\omega$-nonstandard model of a fragment of set theory, but externally, we can see that $M\cong m$. Switching perspectives, every such $M$ can be placed into another model $N$, to which it is isomorphic, but which thinks $M$ is nonstandard.
{ "source": [ "https://mathoverflow.net/questions/51754", "https://mathoverflow.net", "https://mathoverflow.net/users/11318/" ] }
51,759
Let $S$ be a closed convex surface, the boundary of a compact convex body in $\mathbb{R}^3$. I am interested in whether there are conditions on its shape that ensure that it supports a long, simple (non-self-crossing) geodesic. The length of a geodesic for my purposes is the longest distance you can travel along the geodesic before returning to your starting point. Some condition is necessary for the type of result I seek, for all the geodesics on a sphere have the same length. Define the elongation $L$ of $S$ as the largest height to diameter ratio, $h/d$, of a cylinder of height $h$ and diameter $d$ in which $S$ is tightly inscribed. By tightly inscribed I mean that $S$ touches the top, bottom, and sides of the cylinder in such a manner that neither the height nor diameter can be reduced. I could use a theorem of this type: If $S$ has elongation $L \ge k$, then there is a simple geodesic on $S$ of length $\ge f(k)$, where $f(k)$ is some increasing function of $k$, e.g., $c k$ for a constant $c > 0$. Perhaps such a theorem cannot exist. Or maybe a theorem of this ilk exists, but only with certain smoothness assumptions? There are always at least three simple closed geodesics on $S$, by a theorem of Lyusternik and Schnirelmann, but perhaps they might all be short? For an ellipsoid, the three simple closed geodesics follow the major and minor axes, and the longest of those satisfies the type of relationship I seek. (Elongation could as well be defined in terms of an enclosing ellipsoid rather than cylinder.) And a cylindrical $S$ supports a long spiral geodesic: Such spirals are exactly the type of geodesic I seek. Thanks for any ideas or pointers! Edit . This may not add much, but here is how I view a long geodesic on a cylinder: starting at $a$, crossing the bottom in a segment $x x'$, crossing the top in $y y'$, and stopping at $b$ just before it is about to cross itself.
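For the model case of a round cylinder, the spiral geodesics can be made quantitative by unrolling the lateral surface (a side computation of my own, not part of the original question). A geodesic on the lateral surface of height $h$ and diameter $d$ lifts to a straight segment in the plane; a segment that climbs the full height $h$ while winding $k$ times around has length $$\sqrt{h^2 + (k \pi d)^2} \;\ge\; k \pi d,$$ since the circumference is $\pi d$. Having constant nonzero slope, such a helix is simple on the lateral surface for every $k$; it is only the continuation into the two caps that eventually forces a self-crossing, as in the picture of the segments $x x'$ and $y y'$ described in the Edit above.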
{ "source": [ "https://mathoverflow.net/questions/51759", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
51,863
I've always taken this for granted until recently: In the simplest case, given a Jordan curve $C \subseteq \mathbb{C}$ containing a neighborhood of $\bar{0}$ in its interior, and a parametrization $\gamma_1:S^1 \rightarrow C$. Is it true that for all $\varepsilon >0$, there exists $\delta >0$ s.t. for any Jordan curve $C'$ with a parametrization $\gamma_2:S^1 \rightarrow C'$ such that $||\gamma_1-\gamma_2||<\delta$ in the uniform norm, the Riemann maps $R, R'$ from $\mathbb{D}$ to the interiors of $C, C'$ that fix the origin and have positive real derivatives at $\bar{0}$ are at most $\varepsilon$ apart?
Here's a conceptual proof why this is true, up to things which are intuitively obvious and not hard to prove: In the unit disk, almost every Brownian path hits the boundary. The hitting measure is proportional to arc length. In two dimensions, a conformal map takes trajectories of Brownian paths to trajectories of Brownian paths: just the time parametrization changes. (This is a consequence of the fact that conformal maps take harmonic functions to harmonic functions; harmonic functions are the functions whose expectation is invariant under Brownian motion.) It follows that the pushforward of arc length of the unit disk via the Riemann mapping is the hitting probability for Brownian paths starting at the image of the origin. Your question is equivalent to asking whether the measure of intervals in your parametrized Jordan curves is uniformly continuous with respect to the uniform topology on parametrized Jordan curves. It's intuitively obvious, as well as true and not hard to prove (further explanation below), that a Brownian path starting at a point $z$ inside a Jordan domain near the boundary is likely to hit the boundary nearby. This fact quickly implies the continuity that you need: follow Brownian motion until it gets within $2 \epsilon$ of the initial boundary curve, so it is between $\epsilon$ and $3 \epsilon$ of the perturbed curve. When Brownian motion is continued, most of it can't shift very far before hitting. (Note: given a Jordan curve, you must take $\epsilon$ small enough that short intervals as measured by hitting measure are also short on the curve, to be able to conclude that the Riemann mapping does not move very far when you perturb the curve.) There are a variety of ways to prove that a Brownian path starting near the boundary is likely to hit the boundary nearby; one way is to lift continuously to a branch of the map $\log(z-z_0)$, where $z_0$ is the closest boundary point.
Now the random walk takes place in an arbitrarily long strip of width no more than $2 \pi$; it has little chance of remaining in the strip long enough to move far along its length. This follows from the fact that a Brownian path in 1 dimension has a large probability of going outside an interval of length $2 \pi$ after a certain length of time. Another way to prove that Brownian paths are likely to hit nearby on the curve is to make use of the estimate for the Poincaré metric inside a domain: it varies by no more than a factor of 2 from 1/(minimum distance to the boundary). With this estimate, you can show that for a large Poincaré disk centered about $z$ near a boundary point $z_0$, most of its arc length gets squeezed near to $z_0$. Side note: Brouwer proved (in his intuitionistic framework) that every function that is everywhere defined is continuous, so from this point of view Caratheodory's theorem about continuity at the boundary implies continuity. However, one needs to check that Caratheodory's theorem is true intuitionistically; Brouwer later rejected his famous fixed point theorem on these grounds.
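The key estimate — that Brownian motion started near the boundary of the disk exits nearby — can be checked concretely, since the hitting (harmonic) measure described above is given by the Poisson kernel. A small numerical sketch (my own illustration, not part of the original answer):

```python
import cmath
import math

def poisson_kernel(z, theta):
    # Density of the Brownian hitting (harmonic) measure on the unit
    # circle, as seen from the interior point z.
    w = cmath.exp(1j * theta)
    return (1 - abs(z) ** 2) / (abs(w - z) ** 2 * 2 * math.pi)

def mass_near(z, delta, samples=20000):
    # Probability that the path from the real point z exits the disk
    # within angle delta of the boundary point 1 (midpoint rule).
    h = 2 * delta / samples
    return sum(poisson_kernel(z, -delta + (k + 0.5) * h) * h
               for k in range(samples))

# The hitting measure is a probability measure (delta = pi gives mass 1):
assert abs(mass_near(0.5, math.pi) - 1.0) < 1e-6

# Started close to the boundary point 1, the exit point is very likely
# to lie nearby -- the estimate the answer relies on:
assert mass_near(0.99, 0.1) > 0.9
```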
{ "source": [ "https://mathoverflow.net/questions/51863", "https://mathoverflow.net", "https://mathoverflow.net/users/9322/" ] }
51,887
Are there constructive examples of doubly stochastic matrices (whose rows and columns all sum up to $1$ and contain only non-negative entries) that are not diagonalizable?
Sure. For example: $$A = \begin{pmatrix} 5/12 & 5/12 & 1/6 \\\\ 1/4 & 1/4 & 1/2 \\\\ 1/3 & 1/3 & 1/3 \end{pmatrix}$$ Note that $$A \begin{pmatrix} 0 \\\\ 1 \\\\ -1 \end{pmatrix} = \begin{pmatrix} 1/4 \\\\ -1/4 \\\\ 0 \end{pmatrix} \ \mbox{and} \ A^2 \begin{pmatrix} 0 \\\\ 1 \\\\ -1 \end{pmatrix} = 0.$$ This shows that $A$ is not diagonalizable, as, for diagonalizable matrices, $A$ and $A^2$ have the same kernel. Now, let me explain how to find this. Let $w$ be the all ones vector. The condition that $A$ is doubly stochastic is that $Aw =w$ and $A^T w = w$ (ignoring positivity for now). For any nonzero vector $v \in \mathbb{R}^n$, we have $Av =v$ and $A^T v = v$ if and only if $Av=v$ and $A$ sends $v^{\perp}$ into itself. This equivalence is obvious for $v=e_1$, and the truth of the statement is preserved by orthogonal changes of coordinate, so it is true for any nonzero $v$. So, I wanted $Aw=w$ and $A$ to preserve $w^{\perp}$. So, if $A$ is going to be non-diagonalizable, it has to have a nontrivial Jordan block on $w^{\perp}$. So I tried making $A$ be of the form $\left( \begin{smallmatrix} 0 & c \\ 0 & 0 \end{smallmatrix} \right)$ in the basis $\begin{pmatrix} 1 & -1 & 0 \end{pmatrix}^T$, $\begin{pmatrix} 0 & 1 & -1 \end{pmatrix}^T$. At first I tried this with $c=1$, but some of the entries came out negative. So I redid it with a smaller value of $c$. (I knew this had to work because, when $c=0$, you get $A = \left( \begin{smallmatrix} 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 \end{smallmatrix} \right)$ so, by continuity, for $c$ small enough I had to get nonnegative entries.)
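The example can be verified with exact rational arithmetic; a quick check (my own, not part of the original answer):

```python
from fractions import Fraction as F

# The matrix A from the answer above.
A = [[F(5, 12), F(5, 12), F(1, 6)],
     [F(1, 4),  F(1, 4),  F(1, 2)],
     [F(1, 3),  F(1, 3),  F(1, 3)]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# A is doubly stochastic: nonnegative entries, row and column sums 1.
assert all(x >= 0 for row in A for x in row)
assert all(sum(row) == 1 for row in A)
assert all(sum(A[i][j] for i in range(3)) == 1 for j in range(3))

# v = (0, 1, -1) is killed by A^2 but not by A, so ker A != ker A^2
# and A cannot be diagonalizable.
v = [F(0), F(1), F(-1)]
Av = matvec(A, v)
assert Av == [F(1, 4), F(-1, 4), F(0)]
assert matvec(A, Av) == [F(0), F(0), F(0)]
```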
{ "source": [ "https://mathoverflow.net/questions/51887", "https://mathoverflow.net", "https://mathoverflow.net/users/1837/" ] }
51,905
I hope this is appropriate for mathoverflow. Understanding $\mathbb{C}_p$ has always been something of a stumbling block for me. A standard thing to do in number theory is to take the completion $\mathbb{Q}_p$ of the rationals with respect to a $p$-adic absolute value. The resulting field is then complete, but has no good reason to be algebraically closed. You can take its algebraic closure, but that is not complete, so then you take the completion of that, and get a field which is both complete, and algebraically closed, denoted by $\mathbb{C}_p$. I understand that it is a reasonable desire to have a field extension of $\mathbb{Q}_p$ that is both complete and algebraically closed; my trouble, however, is getting some sort of grasp on how to picture this object, and to develop any intuition about how it is used. Here are my questions; I'd imagine the answers are related: Am I even supposed to be able to picture it? Is there some way I ought to think of a typical element? Is it worth it, in terms of these goals, to look at the proofs of the assertions in my first paragraph? How is $\mathbb{C}_p$ typically used? (this question may be too vague, feel free to ignore it!) Please feel free to answer any or all of these questions.
I'll suggest a way to get a hold on $\mathbb{C}_p$ in a "pictorial" way. It is supposed to be similar to viewing $\mathbb{C}$ as a plane acting on itself via rotations, scalings, and translations. There's a usual picture of $\mathbb{Z}_p$, which looks like the thing below for $p=3$ (taken from the website of Heiko Knospe ): Here the outermost circle is all of $\mathbb{Z}_3$; the three large colored circles are the residue classes mod $3$, the smaller circles are the residue classes mod $9$, and so on. If you want to think about $\mathbb{Q}_p$, imagine this picture continued infinitely "upward," (e.g. this circle is accompanied by two others, inside some larger circle, accompanied by two others, etc.). Now the operations of multiplication and addition do something very geometric. Namely, addition cyclically permutes the residue classes (of each size!) by some amount, depending on the coefficient of $p^n$ in the $p$-adic expansion of whatever $p$-adic integer you have in mind. Multiplication by a unit switches the residue classes around as you'd expect, and multiplication by a multiple of $p^n$ shrinks the whole circle down and sends it to some (possibly rotated) copy of itself inside the small circle corresponding to the ideal $(p^n)$. Now zero has the $p$-adic expansion $0+0\cdot p+0\cdot p^2+\cdots$ and so it is the unique element in the intersection of the circles corresponding to the residue class $0$ mod $p^n$ for every $n$. So we have a way to think of zeroes of polynomials over $\mathbb{Q}_p$---namely, a Galois extension of $\mathbb{Q}_p$ is some high dimensional vector space $\mathbb{Q}_p^N$ (which you probably have a picture of from linear algebra) acted on by $\mathbb{Q}_p$, in a way that twists each factor of $\mathbb{Q}_p^N$ and permutes the factors of the direct sum, according to the Galois action. That the extension is algebraic means that there's some way to twist it about (using the previously described actions) to put any element at the $0$ point. 
Totally ramified extensions add intermediate levels of circles between those that already exist, whereas unramified extensions add new circles. I think this point of view is a particularly appealing visualization. Now, the algebraic closure of $\mathbb{Q}_p$ is some maximal element of the poset of these algebraic extensions---which is hard to visualize as it is not really "unique," but for the sake of a picture one might think of choosing embeddings $K\to K'$ for each $K'/K$, and then taking the union. Finally, think of the completion in the usual way, e.g. by formally adding limits of Cauchy sequences. Trying to draw pictures of some finite algebraic extensions of $\mathbb{Q}_p$ might help, and figuring out what the actions by addition and multiplication are is a fun exercise. I hope this "word picture" is as useful for you as it is for me. ADDED: Though this answer is becoming rather long, I wanted to add another picture to expand on the points I made about unramified and totally ramified extensions above. Here is a picture of $\mathbb{Z}_3$, which I made with the free software Blender; imagine it continuing indefinitely upward: A top view of this object should be the previous picture; the actual elements of $\mathbb{Z}_3$ should be viewed as sitting "infinitely high up" on the branches of this tree. As you can see, this object splits into levels, indexed by $\mathbb{N}$, and on the $n$-th level there are $p^n$ "platforms" corresponding to the residues mod $p^n$. For $\mathbb{Q}_p$, the levels should be indexed by $\mathbb{Z}$. Now what happens when one looks at an unramfied extension of degree $k$? The levels, which correspond to powers of the maximal ideal, should not change, so the levels are still indexed by $\mathbb{Z}$; but the amount of branching on each "platform" is now indexed by $\mathcal{O}_K/m=\mathbb{F}_{p^k}$. So instead of having $p$ branches coming out of each level, one has $p^k$. 
On the other hand, what if we have a totally ramified extension of degree $k$? Now $\mathcal{O}_K/m=\mathbb{F}_p$, so there are still $p$ branches on each level. But because the uniformizer now has valuation $1/k$, we can view the levels as being indexed by $\mathbb{Z}[1/k]$ (if you like, the height of each platform is now $1/k$ rather than $1$). So what is the upshot for $\mathbb{C}_p$? We can view it as a similar diagram, except the levels are indexed by $\mathbb{Q}$, and the branches coming off of an individual platform correspond to elements of $\overline{\mathbb{F}_p}$. One nice thing about this picture is that one can actually build spaces like the one I've included in the picture---replacing the tubes in my picture with line segments---such that the elements of $\mathbb{Q}_p$ or some extension thereof are a subset of the space (living "infinitely far" from the part I've drawn), with the subspace topology being the usual topology on the local field. Furthermore, the construction is functorial, in that an embedding $K\hookrightarrow K'$ induces a continuous map of spaces. The distance between two points in the local field is then given by their "highest common ancestor" in this garden of forking paths. (This picture is essentially a description of the Berkovich spaces mentioned by Joe Silverman, though I am essentially a novice in that regard, so it's quite possible I've made some mistake; you should take this as a description of my intuition, not Berkovich's definition.)
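The "highest common ancestor" description of the metric is easy to make concrete for integers: two integers hang below the same platform at level $n$ exactly when they agree mod $p^n$. A small sketch with $p=3$ (my own illustration, not part of the original answer):

```python
def v_p(n, p=3):
    # p-adic valuation of a nonzero integer.
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def dist(x, y, p=3):
    # p-adic distance |x - y|_p: p^(-depth of the deepest shared platform).
    if x == y:
        return 0.0
    return float(p) ** (-v_p(x - y, p))

# 10 and 1 agree mod 9 but not mod 27: they share the first two levels
# of branching, so they are 3-adically close.
assert v_p(10 - 1) == 2
assert dist(10, 1) == 3.0 ** -2

# 1 and 2 already part ways at the first level:
assert dist(1, 2) == 1.0
```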
{ "source": [ "https://mathoverflow.net/questions/51905", "https://mathoverflow.net", "https://mathoverflow.net/users/9960/" ] }
52,023
Does the Poincaré-Hopf index theorem generalize in any way to non-compact manifolds? In particular, I am interested in the case of a smooth vector field on a cylinder $\mathbb{T}^1\times\mathbb{R}$. If so, are there some additional assumptions that one has to impose on the vector field considered (maybe it should vanish outside some compact set or decay very fast "at infinity"?). Sorry if the question is silly - I know the Poincaré-Hopf index theorem only from very elementary sources (Arnold's book on ODEs and Wikipedia). Motivation (physical digression): A friend of mine tries to model the action of cardiac tissue in a heart and its surroundings. One of his aims is to understand phenomena called "spiral waves" (they are believed to be partially responsible for heart attacks). I don't know the details but those "spiral waves" can be described by some ODE defined on a domain which is closely related to the real geometry of the considered tissue. From the information about indices of singular points of a corresponding vector field it is possible to deduce some qualitative information about the occurrence of this phenomenon.
Every noncompact manifold admits nonzero vector fields, or more generally, vector fields with any specified set of isolated zeros along with the behavior near each zero. However, if you have information about the behavior of a vector field near infinity, or just in a neighborhood of the boundary of a compact set, there is an index theorem. Perhaps this is the case with your $\mathbb T^1 \times \mathbb R$. In the particular case of a cylinder, there is a simple way to calculate the index. Take any compact subcylinder delimited by two circles. Map the cylinder to the plane minus the origin. Around each of the curves, the vector field has a turning number: as you go around the curve counterclockwise, the vector field turns by some number of rotations (counting counterclockwise as positive). The index of the vector field in the compact subannulus is the difference: the number of turns on the outer boundary minus the number of turns on the inner boundary. One way to describe a general formula is this: let $N^n$ be a manifold, and let $M \subset N$ be a compact submanifold with boundary $\partial M$. Let $X$ be a vector field that is nonvanishing in a neighborhood of $\partial M$. Choose an outward normal vector field $U$ along $\partial M$; now arrange $X$ so that its direction coincides with $U$ only in isolated points, so if we project $X$ to $N$ along $U$, it is a vector field with isolated singularities. Let $i_+(X)$ be the sum of the Poincaré-Hopf indices over all singularities where $X$ is oriented outward. Then the Poincaré-Hopf index $i(X)$ of $X$ in $M$ equals the Euler characteristic of $M$ minus $i_+(X)$. Here's one proof: triangulate a neighborhood of $M$ so that $\partial M$ is a subcomplex, and so that $X$ is transverse to the triangulation except near the singularities, in the sense that in any simplex, the foliation defined by $X$ is topologically equivalent to the kernel of a linear map in general position of the simplex to $\mathbb R^{n-1}$.
Put a $+1$ at the barycenter of each triangle of even dimension, and a $-1$ at the barycenter of each triangle of odd dimension. Think of $X$ as a wind that blows these numbers along, so that after an instant, all numbers (except for exceptions near the zeros of $X$) are inside an $n$-simplex. In any typical simplex, all the signs cancel out. However, along the boundary, some of the numbers are blown away and lost. To regularize the situation, modify $X$ by pushing in the negative normal direction. Now $X$ points inward everywhere except in a neighborhood of points where it coincides with the outward normal. Thus everything cancels out except for local contributions given by $i(X)$ and $i_+(X)$.
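The turning-number recipe for the annulus/cylinder case at the start of the answer can be sketched numerically: sample the field along a circle and accumulate the unwrapped angle increments. (A sketch of my own; the two fields below are just illustrations, not from the answer.)

```python
import math

def turning_number(V, r, samples=4000):
    # Winding number of the field V around the circle of radius r,
    # accumulated from unwrapped angle increments.
    total, prev = 0.0, None
    for k in range(samples + 1):
        t = 2 * math.pi * k / samples
        vx, vy = V(r * math.cos(t), r * math.sin(t))
        ang = math.atan2(vy, vx)
        if prev is not None:
            d = ang - prev
            while d > math.pi:
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

# The radial field (x, y) turns once on every circle, so its index in
# any subannulus is 1 - 1 = 0, consistent with it having no zeros there.
radial = lambda x, y: (x, y)
assert turning_number(radial, 1.0) == 1
assert turning_number(radial, 2.0) == 1

# The field (x^2 - y^2, 2xy), i.e. z -> z^2, turns twice on the unit circle:
square = lambda x, y: (x * x - y * y, 2 * x * y)
assert turning_number(square, 1.0) == 2
```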
{ "source": [ "https://mathoverflow.net/questions/52023", "https://mathoverflow.net", "https://mathoverflow.net/users/11521/" ] }
52,126
This may be a fairly simple question. Suppose G is a (T0) topological group. Assume that G is path-connected, locally path-connected, and semilocally simply connected, so that covering space theory applies. Question: Is it true that for any element of $\pi_1(G,e)$ (where e is the identity element of G ), there exists a [ADDED: continuous ] homomorphism from $S^1$ to $G$ having that element of $\pi_1(G,e)$ as its homotopy class? Another way of formulating this is that there is a set map: $$\operatorname{Hom}_{cts}(S^1,G) \to \pi_1(G,e)$$ The subscript cts is to indicate continuous. (when G is abelian, the left side has a group structure too [ADDED: under pointwise multiplication ], and the Eckmann-Hilton principle tells us that we get a group homomorphism). Is the set map surjective in all cases (regardless of whether G is abelian)? Does the image of $\operatorname{Hom}(S^1,G)$ generate $\pi_1(G,e)$ as a group (this is equivalent to surjectivity when $G$ is abelian)? Does surjectivity work for Lie groups? Compact Lie groups? Does the weaker formulation (2) work for Lie groups? I have a sketch of an argument/proof that may show (4) (basically, using properties of one-parameter subgroups), but I'm hoping somebody will have a clean proof that works in general for topological groups.
No. A continuous homomorphism $S^1\to G$ yields a map $BS^1\to BG$. The space $BS^1$ is homotopy equivalent to $\mathbb CP^\infty$. There is a topological group $G$ such that $BG$ is homotopy equivalent to the sphere $S^2$. A map corresponding to a generator of $\pi_1G=\pi_2BG=H_2S^2$ would give an isomorphism $H^2BG\to H^2BS^1$, but this is incompatible with the cup product. EDIT: This example is universal in the following sense: A standard way of making a Kan loop group for the suspension of a based simplicial set $K$ is to apply (levelwise) the free group functor from based sets to groups. The realization of this is then the universal example of a topological group $G$ equipped with a continuous map $|K|\to G$. Apply this with $K=S^1$. EDIT: Yes in the Lie group case. It suffices to consider compact $G$ since a maximal compact subgroup is a deformation retract. Now put a Riemannian structure on $G$ that is left and right invariant, and use that the geodesics are the cosets of the $1$-parameter subgroups, and that in a compact Riemannian manifold every loop is freely homotopic to a closed geodesic.
{ "source": [ "https://mathoverflow.net/questions/52126", "https://mathoverflow.net", "https://mathoverflow.net/users/3040/" ] }
52,169
Motivated by the apparent lack of possible classification of integer matrices up to conjugation ( see here ) and by a question about possible complete graph invariants ( see here ), let me ask the following: Question: Is there an example of a pair of non-isomorphic simple finite graphs which have conjugate (over $\mathbb Z$) adjacency matrices? It is well-known that there are many graphs which have the same spectrum. This implies that their adjacency matrices are conjugate over $\mathbb C$. In Allen Schwenk, Almost all trees are cospectral. New directions in the theory of graphs (Proc. Third Ann Arbor Conf., Univ. Michigan, Ann Arbor, Mich., 1971), pp. 275–307. Academic Press, New York, 1973 it was shown that almost all trees have cospectral partners. Maybe $\mathbb Z$-conjugate graphs can be found among trees?
Yes. Consider the adjacency matrices $$ A = \left[\begin{array}{rrrrrrrrrrr} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{array}\right] $$ and $$ B = \left[ \begin{array}{rrrrrrrrrrr} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{array}\right]. $$ These are both the adjacency matrices of trees, and both have characteristic polynomial $$\lambda^{11}-10\lambda^9+34\lambda^7-47\lambda^5+25\lambda^3-4\lambda.$$ Each tree has exactly two vertices of degree 3, separated by a path of length 1 in the case of $A$ but length 2 in the case of $B$. In particular, the trees are not isomorphic. 
Now consider the [EDIT: improved, much nicer] matrix $$ C = \left[\begin{array}{rrrrrrrrrrr} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & -1 \\\\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\\\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\\\ \end{array}\right] $$ with determinant $-1$. Since $C^{-1}AC = B$, the two trees (on 11 vertices) are non-isomorphic but have adjacency matrices that are conjugate over $\mathbb Z$. Now to explain where the example comes from. The pair of graphs was constructed by a method, attributed to Schwenk, that I found in Doob's chapter of Topics in algebraic graph theory (edited by Beineke and Wilson). The first 9 rows and columns of $A$, in common with $B$, come from a particular tree on 9 vertices that has a pair of attachment points such that extending the tree in the same way from either point gives isomorphic spectra. Adding a single pendant vertex cannot work for this problem, as I found using Brouwer and van Eijl's trick, mentioned by Chris Godsil, of comparing the Smith normal forms of (very) small polynomials in $A$ and $B$, in this case $A+2I$ and $B+2I$. When a path of length two is added at either of the two special vertices, however, there doesn't seem to be any obstruction of this type. I then set about trying to conjugate both $A$ and $B$, separately, to the companion matrix of their mutual characteristic polynomial, by looking for a random small integer vector $x$ for which the matrix $X_A = [ x\ Ax\ A^2x\ \ldots\ A^{10}x]$ has determinant $\pm 1$, and similarly $y$ giving $Y_B$. 
(The fact that I succeeded fairly easily may have something to do with the fact that $A+I$ is invertible over $\mathbb Z$.) The matrix $X_AY_B^{-1}$ then acts like the $C$ above. [EDIT: The actual matrix $C$ I found at random and first posted was not nearly so pretty, with a Frobenius norm nearly ten times the current example. But taking powers 0 to 10 of $A$ times $C$ gave a $\mathbb Q$-basis for the full space of conjugators, whose Smith normal form (as 11 vectors in $\mathbb R^{121}$) was all 1's down the diagonal, so in fact it was a $\mathbb Z$-basis. Performing an LLL reduction on this lattice basis then gave a list of smaller-norm matrices, the third of which is the more illuminating $C$ given above, of determinant $-1$. The other determinants from the reduced basis were all $0$ and $\pm 8$.] Taking rational $x$ and not restricting the determinant of $X_A$ gives a space of possible rational matrices $C$ of dimension 11, which are generically invertible; varying $y$ gives the same space [EDIT: as does multiplying on the left by powers (or in the more general case commutants) of $A$]. Since the spectrum of $A$ has no repeated roots, this is also the dimension of the commutant of $A$, and every matrix conjugating $A$ to $B$ lies in this space. Starting with a rational basis, it is not hard to find an exact basis for the integer sublattice, and taking the determinant of a general point in the integer lattice gives an integer polynomial in 11 variables which takes the value $1$ or $-1$ if and only if the matrices $A$ and $B$ are conjugate over $\mathbb Z$. If there are repeated roots, you have to work a little harder; in general the full space has dimension the sum of the squares of the multiplicities, and is generated by multiplying on the left by a basis for the commutant of $A$. 
A basis for the commutant can be produced (for a diagonalizable matrix) by first conjugating $A$ to a direct sum of companion matrices for the irreducible factors of the characteristic polynomial, and then one at a time, for each $k$-by-$k$ block corresponding to a $k$-times repeated factor of degree $m$, replacing each of the $k^2$ blocks with powers $0$ to $m-1$ of the companion matrix for that factor, with $0$ everywhere else.
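The claims above are easy to machine-check. Here is a short sympy sketch (my own addition, with the three matrices transcribed from the answer) verifying the shared characteristic polynomial, $\det C = -1$, and $AC = CB$, i.e. $C^{-1}AC = B$ over $\mathbb Z$:

```python
import sympy as sp

# adjacency matrices of the two trees and the conjugating matrix,
# transcribed from the answer above
A = sp.Matrix([
    [0,1,0,0,0,0,0,0,0,0,0],
    [1,0,1,0,0,0,0,0,0,1,0],
    [0,1,0,1,0,0,0,0,1,0,0],
    [0,0,1,0,1,0,0,0,0,0,0],
    [0,0,0,1,0,1,0,0,0,0,0],
    [0,0,0,0,1,0,1,0,0,0,0],
    [0,0,0,0,0,1,0,1,0,0,0],
    [0,0,0,0,0,0,1,0,0,0,0],
    [0,0,1,0,0,0,0,0,0,0,0],
    [0,1,0,0,0,0,0,0,0,0,1],
    [0,0,0,0,0,0,0,0,0,1,0]])
B = sp.Matrix([
    [0,1,0,0,0,0,0,0,0,0,0],
    [1,0,1,0,0,0,0,0,0,0,0],
    [0,1,0,1,0,0,0,0,1,0,0],
    [0,0,1,0,1,0,0,0,0,0,0],
    [0,0,0,1,0,1,0,0,0,1,0],
    [0,0,0,0,1,0,1,0,0,0,0],
    [0,0,0,0,0,1,0,1,0,0,0],
    [0,0,0,0,0,0,1,0,0,0,0],
    [0,0,1,0,0,0,0,0,0,0,0],
    [0,0,0,0,1,0,0,0,0,0,1],
    [0,0,0,0,0,0,0,0,0,1,0]])
C = sp.Matrix([
    [1,0,0,0,1,0,0,0,0,0,-1],
    [0,1,0,1,0,1,0,0,0,0,0],
    [0,0,1,0,1,0,0,0,0,0,1],
    [0,0,0,1,0,0,0,0,0,1,0],
    [0,0,0,0,1,0,0,0,0,0,0],
    [0,0,0,0,0,1,0,0,0,0,0],
    [0,0,0,0,0,0,1,0,0,0,0],
    [0,0,0,0,0,0,0,1,0,0,0],
    [0,0,0,0,0,0,0,0,1,1,0],
    [0,0,1,0,0,0,1,0,0,0,0],
    [0,0,0,0,0,0,0,1,1,0,0]])

x = sp.symbols('x')
charA = A.charpoly(x).as_expr()
charB = B.charpoly(x).as_expr()
# both trees are cospectral with the stated characteristic polynomial
assert sp.expand(charA - charB) == 0
assert sp.expand(charA - (x**11 - 10*x**9 + 34*x**7
                          - 47*x**5 + 25*x**3 - 4*x)) == 0
# C is unimodular, and A C = C B, so C^(-1) A C = B over the integers
assert C.det() == -1
assert A * C == C * B
```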
{ "source": [ "https://mathoverflow.net/questions/52169", "https://mathoverflow.net", "https://mathoverflow.net/users/8176/" ] }
52,176
Let $G$ be a (discrete) group, and $1/G$ the corresponding groupoid with one object. Consider the diagram in (the 2-category) Groupoids with one vertex, labeled $1/G$, the one arrow from that vertex to itself, given by the identity map. $$ \begin{matrix} 1/G \\ {\huge \circlearrowleft} \\ \scriptstyle \mathrm{id} \end{matrix}$$ (This diagram is equivalent to the pair of parallel arrows $1/G \overset{\rm id}{\underset{\rm id}\rightrightarrows} 1/G$. Note that I am not filling in the loop with a 2-cell.) A cute fact is that the ("2-") limit of this diagram in Groupoids is the action groupoid $G/G$ of the adjoint action of $G$ on itself. (See e.g. 2 limit in nLab or HTT Chapter 4 for a definition of limits.) Now, in homotopological terms, the groupoid $1/G$ looks like the classifying space ${\rm B}G$, and the above diagram looks like ${\rm B}G \times S^1$. I have the possibly-mistaken impression that limits are supposed to look like topological cones (but maybe this is because we use words like "cone" when talking about limits). Question: In terms of homotopy, how should I visualize the limit cone $$ \lim\left( \begin{matrix} 1/G \\ {\huge \circlearrowleft} \\ \scriptstyle \mathrm{id} \end{matrix} \right) \quad \begin{matrix} {\huge \to} \\ {\large \circlearrowleft \!\!\!\!\!\! \circlearrowleft} \end{matrix} \quad \begin{matrix} 1/G \\ {\huge \circlearrowleft} \\ \scriptstyle \mathrm{id} \end{matrix} $$ ? (Edits: per Quid's request in the comments, I replaced some broken images with diagrams, trying to reconstruct them from memory. $\circlearrowleft \!\!\!\!\! \circlearrowleft$ is my attempt at a doubled circle arrow, i.e. a 2-cell filling in the cone walls.)
{ "source": [ "https://mathoverflow.net/questions/52176", "https://mathoverflow.net", "https://mathoverflow.net/users/78/" ] }
52,241
Let E be an elliptic curve, let $L(s) = \sum a_n n^{-s}$ denote its L-function, and set $$ f(x) = \sum a_n \frac{x^n}{n}. $$ Then Honda has observed that $$ F(X,Y) = f^{-1}(f(X) + f(Y)) $$ defines a formal group law. The formal group law of an elliptic curve has applications to the theory of torsion points, apparently because formal groups are useful tools for studying such objects over discrete valuation domains. Nevertheless I would appreciate it if someone could point out the intuition behind this approach. What is the connection between the L-series and the group law on the curve given by the formal group law? Do formal group laws just give a streamlined proof of basic properties of the elliptic curve over $p$-adic fields, or is there more to them? I've also seen the work of Lubin-Tate in local class field theory, and I do remember that I found the material as frightening as cohomology at first. It would be nice if the answers had something from a salesman's point of view: why should I buy formal group laws at all?
Okay, here's a few words about the relation between the $L$-series and the formal group. In general, if $F(X,Y)$ is the formal group law for $\hat G$, then there is an associated formal invariant differential $\omega(T)=P(T)dT$ given by $P(T)=F_X(0,T)^{-1}$. Formally integrating the power series $\omega(T)$ gives the formal logarithm $\ell(T)=\int_0^T\omega(T)$. The logarithm maps $\hat G$ to the additive formal group, so we can recover the formal group as $$ F(X,Y) = \ell^{-1}(\ell(X)+\ell(Y)). $$ (See, e.g., Chapter IV of Arithmetic of Elliptic Curves for details.) Now let $E$ be an elliptic curve and $\omega=dx/(2y+a_1x+a_3)$ be an invariant differential on $E$. If $E$ is modular, say corresponding to the cusp form $g(q)$, then we have (maybe up to a constant scaling factor) $\omega = g(q)\,dq/q = \sum_{n=1}^{\infty} a_nq^{n-1}\,dq$. Eichler-Shimura tell us that the coefficients of $g(q)$ are the coefficients of the $L$-series $L(s)=\sum_{n=1}^\infty a_n n^{-s}$. Integrating $\omega$ gives the elliptic logarithm, which is the function you denoted by $f$, i.e., $f(q)=\sum_{n=1}^\infty a_nq^n/n$, and then the formal group law on $E$ is $F(X,Y)=f^{-1}(f(X)+f(Y))$. To me, the amazing thing here is that the Mellin transform of the invariant differential gives the $L$-series. Going from the invariant differential to the formal group law via the logarithm is quite natural.
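The construction $F(X,Y)=f^{-1}(f(X)+f(Y))$ can be computed symbolically by series reversion. As a toy sanity check (my own addition, not part of the answer, and not an elliptic curve): taking all $a_n=1$ gives $f(T)=\sum_n T^n/n=-\log(1-T)$, whose formal group law is the multiplicative one, $F(X,Y)=X+Y-XY$. A sympy sketch, truncating all power series at total degree $N$:

```python
import sympy as sp

X, Y, T = sp.symbols('X Y T')
N = 6  # truncation order for all power series

def trunc(expr, gens, order):
    """Drop every monomial of total degree > order in the given generators."""
    out = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(expr)):
        if sum(sp.degree(term, g) for g in gens) <= order:
            out += term
    return out

# toy coefficients a_n = 1, so f(T) = sum_n T^n / n = -log(1 - T)
f = sum(sp.Rational(1, n) * T**n for n in range(1, N + 1))

# series reversion: the iteration g <- T - (f(g) - g) gains one order
# of accuracy per step, since f(T) = T + higher-order terms
g = T
for _ in range(N):
    g = trunc(T - (f.subs(T, g) - g), [T], N)

# F(X, Y) = f^{-1}(f(X) + f(Y)), truncated at total degree N
s = trunc(f.subs(T, X) + f.subs(T, Y), [X, Y], N)
F = trunc(g.subs(T, s), [X, Y], N)

# for a_n = 1 the result is the multiplicative formal group X + Y - XY
assert sp.expand(F - (X + Y - X*Y)) == 0
```

The same reversion machinery applies verbatim with the $a_n$ of an actual elliptic curve; Honda's observation is that the resulting coefficients are then integral.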
{ "source": [ "https://mathoverflow.net/questions/52241", "https://mathoverflow.net", "https://mathoverflow.net/users/3503/" ] }
52,286
I recently heard the following fact : Up to the $15$th skeleton, the classifying space $BE_8$ and $K(\mathbb{Z},4)$ are homotopy equivalent? I have two questions on this : (1) Is there any easy way to see this? Of course, knowing the first fourteen homotopy groups of $E_8$ is enough but then the question is how does one compute them? (2) Is there any feasible explanation that suggests that $4$th cohomology classes (possibly related to gerbes), i.e., elements in $H^4(X;\mathbb{Z})$, arise from physical considerations and if $X$ is of dimension $14$ or less then we're classifying $E_8$-bundles on $X$, thereby suggesting that $E_8$ arises out of physical considerations? The last question is a little vague but any pointers would be great!
Given a simple Lie group $G$, you can check how far $G$ is from being a $K(\mathbb Z,3)$ by looking at the place where the affine vertex gets glued onto the Dynkin diagram, and measuring the length of that tail. For $E_8$, it's the longest, and so $E_8$ is the best possible approximation to a $K(\mathbb Z,3)$. $$\bullet - \bullet - \stackrel{\stackrel{\displaystyle\bullet}|}{\bullet} - \underbrace{\bullet - \bullet - \bullet - \bullet}_{\text{long tail}} - \circ$$ This is done by labelling the cells of the affine Grassmannian $\Omega G$ by data from the Dynkin diagram, and checking how far you need to go for $\Omega G$ to start looking different from $\mathbb C \mathbb P^\infty$. The affine Grassmannian $\Omega G$ is a very nice space: it's a complex (ind-)variety, and it is stratified by finite dimensional cells. In particular, it has a natural CW-decomposition. Each cell of $\Omega G$ is isomorphic to $\mathbb C^n$, and is in particular of even (real) dimension. Moreover, $\Omega G$ is a coadjoint orbit of the infinite dimensional Lie group $S^1\ltimes \widetilde {LG}$. Here, the tilde refers to the universal central extension of the loop group $LG$, and $S^1$ acts by reparametrizing the loops. The inclusion $\Omega G\to Lie(S^1\ltimes \widetilde {LG} )^* $ can be composed with the projection $$Lie(S^1\ltimes \widetilde {LG})^* \twoheadrightarrow (\mathfrak t_{S^1\ltimes \widetilde {LG}})^* \cong \mathfrak t^* \oplus \mathbb R \oplus \mathbb R$$ (here $\mathfrak t$ denotes the Lie algebra of the maximal torus $T$ of $G$). It turns out that the composite lands in a translated copy of $\mathfrak t^* \oplus \mathbb R$, and so one gets a map $$ \mu:\Omega G \to \mathfrak t^* \oplus \mathbb R $$ called the moment map (for the $T_{S^1\ltimes LG}$ action). What is important is that the space $\mathfrak t^* \oplus \mathbb R$ has a natural basis that is indexed by the vertices of the extended Dynkin diagram: those are the simple coroots. 
I will denote each cell by the moment map image in $\mathfrak t^* \oplus \mathbb R$ of its center point (in the basis of simple coroots). Now, let me specialize to the case $G=E_8$. Here we go: 0-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 0}|}{0} - 0 - 0 - 0 - 0 - 0 \end{matrix}$ 2-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 0}|}{0} - 0 - 0 - 0 - 0 - 1 \end{matrix}$ 4-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 0}|}{0} - 0 - 0 - 0 - 1 - 1 \end{matrix}$ 6-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 0}|}{0} - 0 - 0 - 1 - 1 - 1 \end{matrix}$ 8-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 0}|}{0} - 0 - 1 - 1 - 1 - 1 \end{matrix}$ 10-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 0}|}{0} - 1 - 1 - 1 - 1 - 1 \end{matrix}$ 12-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 0}|}{1} - 1 - 1 - 1 - 1 - 1 \end{matrix}$ 14-dimensional cell: $\qquad\begin{matrix} 0 - 0 - \stackrel{\stackrel{\displaystyle 1}|}{1} - 1 - 1 - 1 - 1 - 1 \end{matrix}$ other 14-dimensional cell: $\qquad\begin{matrix} 0 - 1 - \stackrel{\stackrel{\displaystyle 0}|}{1} - 1 - 1 - 1 - 1 - 1 \end{matrix}$ As you can see, $ H^* (\Omega G) = H^* (\mathbb C \mathbb P^\infty ) $ for $*\le 13$. Even better: the varieties $\Omega G$ and $\mathbb C \mathbb P^\infty$ are isomorphic in complex dimensions $\le 6$ [ added later: I take back that claim. I don't konw how to prove that the varieties $\Omega G$ and $\mathbb C \mathbb P^\infty$ are isomorphic in complex dimensions $\le 6$ (it might still be true)]. In particular, the CW -complexes $\Omega G$ and $\mathbb C \mathbb P^\infty$ are isomorhpic in dimensions $\le 13$. Taking classifying spaces, we get that the CW -complexes $G$ and $K(\mathbb Z,3)$ are isomorphic in dimensions $\le 14$.
{ "source": [ "https://mathoverflow.net/questions/52286", "https://mathoverflow.net", "https://mathoverflow.net/users/1993/" ] }
52,509
Let $E$ be a closed subspace of $L^2[0,1]$. Suppose that $E\subset{}L^\infty[0,1]$. Is it true that $E$ is finite dimensional? PS. This is actually a question from the real analysis qualifier. I came across it as I was teaching qualifier preparation course, and was solving problems from old qualifiers. So, though it might follow from some advanced theory of Banach spaces, I am most interested in the 'elementary' solution, using only methods from standard real analysis course. Note: if $E\subset{}C[0,1]$, then it is a problem from Folland, and there is a solution there. However, it does not work for $L^\infty$, not without some trick.
Another solution: as Mikael wrote, $||f||_{\infty} \leq C ||f||_2$ for every $f \in E$. Let $f_1,\ldots,f_n$ be an orthonormal family in your subspace. Then for every $x \in [0,1]$, $f_1(x)^2+\ldots+f_n(x)^2 \leq ||f_1(x)f_1+\ldots+f_n(x)f_n||_{\infty} \leq C \|f_1(x)f_1+\ldots+f_n(x)f_n\|_2$ $$=C \sqrt{f_1(x)^2+\ldots+f_n(x)^2},$$ and by squaring we get $f_1(x)^2+\ldots+f_n(x)^2 \leq C^2$, and integrating gives $n \leq C^2$.
{ "source": [ "https://mathoverflow.net/questions/52509", "https://mathoverflow.net", "https://mathoverflow.net/users/10714/" ] }
52,688
I am in the process of redesigning the calculus course that I have taught five or six times. What I would like to know is if anyone has some really good examples or exercises that I could either do in class or give as a project. In particular, I've found that I don't have many good examples/exercises that illustrate the awesomeness of the main theorems (Intermediate Value Theorem, Mean Value Theorem, etc.). All levels of difficulty are certainly appreciated. The intent is to have material that I can present or assign here and there throughout the course that goes beyond basic calculus and will challenge even those to whom math comes naturally. An example of what I'm looking for is something like showing a continuous function on $S^1$ has to map two antipodal points to the same value. EDIT: In response to Qiaochu Yuan, Calc I and II together form all of single variable calculus. For Calc I: limits, differentiation, Riemann integration (improper as well). For Calc II: sequences, series, polar coordinates, parametric coordinates. The old book for this course was Stewart's "Calculus: Early Transcendentals", but I don't follow any book when I teach.
You only need integration by parts to prove the irrationality of $\pi$. I'm having my Calculus 2 students do it as a long-term group project starting Monday. Then when you've done partial fractions, you can have them derive the quickly-converging BBP formula for $\pi$. And you can have them do the "18th Century Style" Euler argument for evaluating $\sum_{n=1}^\infty {1\over n^2}$. Here's a link to two of these: http://homepages.wmich.edu/~jstrom/PiProjects/
{ "source": [ "https://mathoverflow.net/questions/52688", "https://mathoverflow.net", "https://mathoverflow.net/users/10206/" ] }
52,692
It's possible this question is trivial, in which case it will be answered quickly. In any case, I realized that it's a basic question the answer to which I should know but do not. Everybody loves knots — one-dimensional compact manifolds mapped generically into three-dimensional compact manifolds — and it's natural to ask about "knots" in higher dimensions. Of course, the space of generic maps of a one-dimensional compact manifold into a four-dimensional compact manifold is connected, so there is no interesting "knotting". Instead, people usually think about "surface knots in 4d", which are usually defined as embedded compact 2-manifolds in a 4-manifold. But surfaces can map into 4-space in much more interesting ways. In particular, whereas a generic map from a 1-manifold to a 3-manifold is an embedding, two generic surfaces in 4-d can be "stuck" on each other: the generic behavior is to have point intersections. So a richer theory than that of embedded surfaces in 4-space is one that allows for these point self-intersections — it would be the theory of connected components of the space of generic maps. Still, though, thinking about these self-intersections is hard, and their existence is part of what makes 2-knot theory hard (for instance, it interferes with developing a good "Vassiliev" theory for 2-knots). If you really want to reproduce the fact that generic maps have no self intersections, you should move the ambient space one dimension higher. Hence my question: Can compact 2-manifold embedded into a compact 5-manifold be interestingly "knotted"? I.e. let $L$ be a compact 2-manifold and $M$ a compact 5-manifold; are there multiple connected components in the space of embeddings $L \hookrightarrow M$? I expect the answer is "no", else I would have heard about it. But my intuition is sufficiently poor that I thought it best to ask.
If $M$ is a connected compact $2$-manifold, then it unknots in $\Bbb R^5$. More generally, $k$-connected $n$-manifolds embed in $\Bbb R^{2n-k}$, provided $k<\frac{n-2}2$, and unknot in $\Bbb R^{2n-k+1}$, provided $k<\frac{n-1}2$. This was proved around 1961 by Roger Penrose, J.H.C. Whitehead, and Zeeman in the PL category; and by Haefliger in the smooth category. Later Zeeman and Irwin relaxed the metastable dimension restrictions in the PL result to codimension $\ge 3$ (see Zeeman's "Seminar on Combinatorial Topology"). On the other hand, the disjoint union of two $2$-spheres is a compact $2$-manifold. It definitely knots in $\Bbb R^5$ as detected by the linking number. That is the degree $\alpha$ of $S^2\times S^2\to S^4$, $(p,q)\mapsto \frac{f(p)-g(q)}{||f(p)-g(q)||}$, calling our link $f\sqcup g:S^2\sqcup S^2\to\Bbb R^5$. A nontrivial link is the Hopf link, whose components are the factors of the join $S^5=S^2*S^2$. Since $S^2$ unknots in $\Bbb R^5$, the exterior of one component is always homotopy equivalent to $S^2$, and the linking number is also the degree $\lambda$ of $p(S^2)\to S^5\setminus q(S^2)\simeq S^2$. [In different dimensions, where $\alpha$ and $\lambda$ are not numbers but homotopy classes of spheroids (more precisely $\alpha$ factors through a spheroid up to homotopy, upon killing the wedge), their relation is more interesting: $\alpha$ equals the suspension of $\lambda$ (up to a sign).] By Haefliger's theorem (1963) that embeddings in the metastable range are classified by equivariant homotopy of two-point configuration spaces, the linking number for each pair of components is the only invariant of smoothly embedded $2$-manifolds in $\Bbb R^5$. This recovers the result that connected surfaces unknot in $\Bbb R^5$; and additionally implies that there is nothing new for $3$-component links. [In contrast, there are the Borromean rings of three $3$-spheres in $\Bbb R^6$, whose nontriviality is detected e.g. 
by a nonvanishing triple Massey product in the complement. Thinking of the usual Borromean rings in $\Bbb R^3$ as lying in the three coordinate planes, one can similarly do three copies of $S^1*S^1$ lying in the two-factor subproducts of $\Bbb R^2\times\Bbb R^2\times\Bbb R^2$.] Also smooth, PL and topological knot theories coincide for smooth $n$-manifolds in smooth $m$-manifolds in the metastable range $m>\frac{3(n+1)}2$ (this includes $2$-manifolds in $\Bbb R^5$). In more detail, Haefliger's classification theorem implies that if two smooth embeddings in the metastable range are isotopic (=homotopic through topological embeddings, possibly wild) then they are smoothly isotopic. Weber's PL classification theorem (1967) implies additionally that every PL embedding of a smooth manifold in the metastable range is ambient isotopic to a smooth embedding. Also it follows from results of Edwards and Bryant that an arbitrary topological embedding in codimension $\ge 3$ is isotopic to a PL embedding, and, from results of Bryant-Seebeck, that a locally flat topological embedding in codimension $\ge 3$ is ambient isotopic to a PL embedding.
{ "source": [ "https://mathoverflow.net/questions/52692", "https://mathoverflow.net", "https://mathoverflow.net/users/78/" ] }
52,708
In the introduction to chapter VIII of Dieudonné's Foundations of Modern Analysis (Volume 1 of his 13-volume Treatise on Analysis ), he makes the following argument: Finally, the reader will probably observe the conspicuous absence of a time-honored topic in calculus courses, the “Riemann integral”. It may well be suspected that, had it not been for its prestigious name, this would have been dropped long ago, for (with due reverence to Riemann’s genius) it is certainly quite clear to any working mathematician that nowadays such a “theory” has at best the importance of a mildly interesting exercise in the general theory of measure and integration (see Section 13.9, Problem 7). Only the stubborn conservatism of academic tradition could freeze it into a regular part of the curriculum, long after it had outlived its historical importance. Of course, it is perfectly feasible to limit the integration process to a category of functions which is large enough for all purposes of elementary analysis (at the level of this first volume), but close enough to the continuous functions to dispense with any consideration drawn from measure theory; this is what we have done by defining only the integral of regulated functions (sometimes called the “Cauchy integral”). When one needs a more powerful tool, there is no point in stopping halfway, and the general theory of (“Lebesgue”) integration (Chapter XIII) is the only sensible answer. I've always doubted the value of the theory of Riemann integration in this day and age. The so-called Cauchy integral is, as Dieudonné suggests, substantially easier to define (and prove the standard theorems about), and can also integrate essentially every function that we might want in a first semester analysis/honors calculus course. 
For any other sort of application of integration theory, it becomes more and more worthwhile to develop the fully theory of measure and integration (this is exactly what we did in my second (roughly) course on analysis, so wasn't the time spent on the Riemann integral wasted?). Why bother dealing with the Riemann (or Darboux or any other variation) integral in the face of Dieudonné's argument? Edit: The Cauchy integral is defined as follows: Let $f$ be a mapping of an interval $I \subset \mathbf{R}$ into a Banach space $F$. We say that a continuous mapping $g$ of $I$ into $F$ is a primitive of $f$ in $I$ if there exists a denumerable set $D \subset I$ such that, for any $\xi \in I - D$, $g$ is differentiable at $\xi$ and $g'(\xi) =f(\xi)$ . If $g$ is any primitive of a regulated function $f$, the difference $g(\beta) - g(\alpha)$, for any two points of $I$, is independent of the particular primitive $g$ which is considered, owing to (8.7.1); it is written $\int_\alpha^\beta f(x) dx$, and called the integral of $f$ between $\alpha$ and $\beta$. (A map $f$ is called regulated provided that there exist one-sided limits at every point of $I$). Edit 2: I thought this was clear, but I meant this in the context of a course where the theory behind the integral is actually discussed. I do not think that an engineer actually has to understand the formal theory of Riemann integration in his day-to-day use of it, so I feel that the objections below are absolutely beside the point. This question is then, of course, in the context of an "honors calculus" or "calculus for math majors" course.
It is frequently claimed that Lebesgue integration is as easy to teach as Riemann integration. This is probably true, but I have yet to be convinced that it is as easy to learn. T. Körner: A companion to analysis: a second first and first second course in analysis. I know, this is slightly beside the Riemann vs Cauchy point, but I like this quotation so much I couldn't help myself...
{ "source": [ "https://mathoverflow.net/questions/52708", "https://mathoverflow.net", "https://mathoverflow.net/users/1353/" ] }
52,744
Modular forms are defined here: http://en.wikipedia.org/wiki/Modular_form#General_definitions Maass forms are defined here: http://en.wikipedia.org/wiki/Maass_wave_form I wonder if modular forms can be transferred into Maass forms, or whether the two are different categories of automorphic forms.
In the more common terminology modular forms on the upper half-plane fall into two categories: holomorphic forms and Maass forms. In fact there is a notion of Maass forms with weight and nebentypus, which includes holomorphic forms as follows: if $f(x+iy)$ is a weight $k$ holomorphic form, then $y^{k/2}f(x+iy)$ is a weight $k$ Maass form. There are so-called Maass lowering and raising operators that turn a weight $k$ Maass form into a weight $k-2$ or weight $k+2$ Maass form. Using these, the weight $k$ holomorphic forms can be understood as those that are "new" for weight $k$: for $k\geq 2$ the raising operator isometrically embeds the space of weight $k-2$ Maass forms into the space of weight $k$ Maass forms, and the orthogonal complement is the subspace coming from weight $k$ holomorphic forms as described in the previous paragraph; also, the lowering operator acts as an inverse on the image of the raising operator and annihilates the mentioned orthogonal component. All these connections can be better understood in the language of representation theory. I learned this material from Bump: Automorphic Forms and Representations, see especially Theorem 2.7.1 on page 241. Another good reference (from the classical perspective) is Duke-Friedlander-Iwaniec (Invent Math. 149 (2002), 489-577), see Section 4 there.
{ "source": [ "https://mathoverflow.net/questions/52744", "https://mathoverflow.net", "https://mathoverflow.net/users/2666/" ] }
52,979
I came across the problem "find all integer solutions to $y^2=x^3+17$." I've tried several things, without any success, and I was hoping that someone could help out. (Some ideas or a reference for where to find it are both appreciated.) By numerical calculation I have found that the following integer points $(x,y)$ lie on the curve: $(-1,4)$, $(-2,3)$, $(2,5)$, $(4,9)$, $(8,23)$, $(43,282)$, $(52,375)$, $(5234,378661)$, and this is probably all of them. Thanks
There is a standard method for computing all integral points on an elliptic curve using David's bounds and lattice reduction. The method can be found in the book: Nigel Smart, "The Algorithmic Resolution of Diophantine Equations", Cambridge University Press. This method is implemented in several computer algebra packages, including magma. If you type:

    E:=EllipticCurve([0,0,0,0,17]);
    IntegralPoints(E);

into the online magma calculator at http://magma.maths.usyd.edu.au/calc/ it will give the eight points you've found already.
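Magma is not needed just to sanity-check the list: a short brute-force search (my own sketch; it only confirms there are no further points with $|x|$ up to the chosen bound, and is of course not a proof of completeness) recovers the same eight points with $y \ge 0$.

```python
from math import isqrt

def integral_points(bound):
    """Integer points (x, y) with y >= 0 on y^2 = x^3 + 17, for |x| <= bound."""
    points = []
    for x in range(-bound, bound + 1):
        rhs = x**3 + 17
        if rhs < 0:
            continue
        y = isqrt(rhs)  # integer square root; exact square iff y*y == rhs
        if y * y == rhs:
            points.append((x, y))
    return points

print(integral_points(6000))
# [(-2, 3), (-1, 4), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (5234, 378661)]
```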
{ "source": [ "https://mathoverflow.net/questions/52979", "https://mathoverflow.net", "https://mathoverflow.net/users/12176/" ] }
52,996
Modular forms of integral weight are prominent in number theory. Furthermore, there are $\theta$-functions and the $\eta$-function, having weight 1/2, which also have a rich theory. But I have never seen a modular form of weight e.g. 1/3. I have been wondering about this for a long time. Are there examples of modular forms of fractional weights other than multiples of 1/2? And if yes, is there a reason why they are poorly studied?
I am no expert here, but I believe modular forms of fractional weight (e.g. of weight 1/3) appear more naturally as forms on metaplectic covers of GL(2) (e.g. on the cubic cover) and over fields containing the relevant roots of unity (e.g. the third roots of unity). Kubota around 1970 initiated the study of these covers, and a few years later Patterson initiated the study of the forms on them. Patterson's two papers here seem to be a good starting point. Later Patterson alone and jointly with Heath-Brown applied the new knowledge to old objects in number theory like Gauss and Kummer sums, see e.g. here and here . Patterson and Kazhdan in 1984 greatly generalized Kubota's work to metaplectic covers of GL(r), see here . All in all I believe the general theory is technically quite involved which explains why so few are familiar with it. However, forms of fractional weight are no doubt an organic part of number theory, but they appear more naturally on symmetric spaces of higher rank.
{ "source": [ "https://mathoverflow.net/questions/52996", "https://mathoverflow.net", "https://mathoverflow.net/users/3757/" ] }
53,036
I think that the title is self-explanatory but I'm thinking about mathematical subjects that have not received a full treatment in book form or if they have, they could benefit from a different approach. (I do hope this is not inappropriate for MO.) Let me start with some books I would like to read (again with self-explanatory titles):

- The Weil conjectures for dummies
- 2-categories for the working mathematician
- Representations of groups: Linear and permutation representations made side by side
- The Burnside ring
- A functor of points approach to algebraic geometry
- Profinite groups: An approach through examples

Any other suggestions?
I don't know for certain that this doesn't exist, so I'm in a no-lose situation: if this is a rubbish answer then it means that a book that I want to exist does exist. Many mathematicians of a pure bent have taken it upon themselves to get a good understanding of theoretical physics. And many have actually managed this. But it seems to me that they usually go native in the process, with the result that I cease to be able to understand what they are saying. It could be that this is just an irreducibly necessary feature of physics, but I doubt it. Out there in book space I believe there exists a book that explains theoretical physics in a way that physicists would dislike intensely but mathematicians would find much easier to read. It may well be that if you want to do serious work in mathematical physics then you have to understand the subject as physicists do. However, this book would be aimed at pure mathematicians who were not necessarily intending to do serious work in mathematical physics but just wanted to understand what was going on from a distance. I used to have a similar view about explanations of forcing, but I think Timothy Chow's wonderful Forcing for Dummies has filled that gap now.
{ "source": [ "https://mathoverflow.net/questions/53036", "https://mathoverflow.net", "https://mathoverflow.net/users/2162/" ] }
53,048
The following identity is a bit isolated in the arithmetic of natural integers $$3^3+4^3+5^3=6^3.$$ Let $K_6$ be a cube whose side has length $6$. We view it as the union of $216$ elementary unit cubes. We wish to cut it into $N$ connected components, each one being a union of elementary unit cubes, such that these components can be assembled so as to form three cubes of sizes $3,4$ and $5$. Of course, the latter are made simultaneously: a component may not be used in two cubes. There is a solution with $9$ pieces. What is the minimal number $N$ of pieces into which to cut $K_6$ ? About connectedness: a piece is connected if it is a union of elementary cubes whose centers are the nodes of a connected graph with arrows of unit length parallel to the coordinate axes. Edit . Several comments ask for a reference for the $8$-pieces puzzle, mentioned at first in the question. Actually, $8$ was a mistake, as the solution I know consists of $9$ pieces. The only one that I have is the photograph in François's answer below. Yet it is not very informative, so let me give you additional information (I manipulated the puzzle a couple weeks ago). There is a $2$-cube (middle) and a $3$-cube (right). At left, the $4$-cube is not complete, as two elementary cubes are missing at the end of an edge. Of course, one could not have both a $3$-cube and a $4$-cube in a $6$-cube. So you can imagine how the $3$-cube and the imperfect $4$-cube match (two possibilities). Other rather symmetric pieces are a $1\times1\times2$ (it fills the imperfect $4$-cube when you build the $3$-, $4$- and $5$-cubes) and a $1\times2\times3$. Two other pieces have only a planar symmetry, whereas the last one has no symmetry at all. Here is a photograph of the cut mentioned above. (source)
8 is the least: every piece must fit inside a cube of side at most 5, so no piece can have extent 6 in any direction, and hence no two corners of the 6x6x6 cube can belong to the same piece. Since the cube has 8 corners, at least 8 pieces are needed.
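Both ingredients of this lower bound can be checked mechanically (a trivial sketch of my own): the cube identity, and the fact that any two corner cells of the 6-cube are 6 units apart along some axis, so no piece fitting in a cube of side at most 5 can contain two of them.

```python
from itertools import combinations

# The identity behind the puzzle:
assert 3**3 + 4**3 + 5**3 == 6**3

# Coordinates (lower corners) of the eight corner unit-cells of the 6-cube:
corners = [(x, y, z) for x in (0, 5) for y in (0, 5) for z in (0, 5)]

# Any piece containing two distinct corner cells spans 6 units in some axis
# direction, so it cannot fit inside a cube of side at most 5.
for a, b in combinations(corners, 2):
    assert max(abs(a[i] - b[i]) + 1 for i in range(3)) == 6
print("lower bound of 8 pieces verified")
```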
{ "source": [ "https://mathoverflow.net/questions/53048", "https://mathoverflow.net", "https://mathoverflow.net/users/8799/" ] }
53,119
I am reposting this question from math.stackexchange where it has not yet generated an answer I had been looking for. The volume of an $n$-dimensional ball of radius $R$ is given by the classical formula $$V_n(R)=\frac{\pi^{n/2}R^n}{\Gamma(n/2+1)}.$$ It is not difficult to see that the "dimensionless" ratio $V_n(R)/R^n$ attains its maximal value when $n=5$. The "dimensionless" ratio $S_n(R)/R^n$ where $S_n(R)$ is the $n$-dimensional volume of an $n$-sphere attains its maximum when $n=7$. Question. Is there a purely geometric explanation of why the maximal values in each case are attained at these particular values of the dimension? [EDIT. Thanks to all for the answers and comments.]
There are many "dimensionless" ratios: choosing $R$ as the linear measurement is arbitrary. For instance, the ratio of the volume of the sphere to the volume of the circumscribed cube has a maximum at $n=1$. The ratio to the volume of the inscribed cube never attains a maximum. There are intermediate geometrically-related "midscribed" cubes, where all faces of some dimension are tangent to the unit sphere. Here is the graph for the ratio of volumes when the codimension 2 faces of a cube are tangent. It attains the maximum for $n = 12$ (just barely more than for $n=11$). There are many other reasonable dimensionless comparisons, for instance comparing to a simplex, etc. etc. Since the Gamma function grows super-exponentially, these simple geometric variations tend to shift the maximum --- there's nothing special about 5 or 7.
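As a quick numerical check of the claims in the question (my own sketch; here the "sphere" is taken to be the unit sphere in $\mathbb{R}^n$, one of the two common indexing conventions):

```python
from math import pi, gamma

def ball_volume(n):
    """Volume of the unit ball in R^n: pi^(n/2) / Gamma(n/2 + 1)."""
    return pi ** (n / 2) / gamma(n / 2 + 1)

def sphere_area(n):
    """Surface measure of the unit sphere in R^n: 2 pi^(n/2) / Gamma(n/2)."""
    return 2 * pi ** (n / 2) / gamma(n / 2)

print(max(range(1, 30), key=ball_volume))   # 5
print(max(range(1, 30), key=sphere_area))   # 7
```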
{ "source": [ "https://mathoverflow.net/questions/53119", "https://mathoverflow.net", "https://mathoverflow.net/users/5371/" ] }
53,122
When I was a young and impressionable graduate student at Princeton, we scared each other with the story of a Final Public Oral, where Jack Milnor was dragged in against his will to sit on a committee, and noted that the class of topological spaces discussed by the speaker consisted of finite spaces. I had assumed this was an "urban legend", but then at a cocktail party, I mentioned this to a faculty member, who turned crimson and said that this was one of his students, who never talked to him, and then had to write another thesis (in numerical analysis, which was not very highly regarded at Princeton at the time). But now, I have talked to a couple of topologists who should have been there at the time of the event, and they told me that this was an urban legend at their time as well, so maybe the faculty member was pulling my leg. So, the questions are: (a) any direct evidence for or against this particular disaster? (b) what stories kept you awake at night as a graduate student, and is there any evidence for or against their truth? EDIT (this is unrelated, but I don't want to answer my own question too many times): At Princeton, there was supposedly an FPO in Physics, on some sort of statistical mechanics, and the constant $k$ appeared many times. The student was asked:

Examiner: What is $k$?
Student: Boltzmann's constant.
Examiner: Yes, but what is the value?
Student: Gee, I don't know...
Examiner: OK, order of magnitude?
Student: Umm, don't know, I just know $k\dots$

The student was failed, since he was obviously not a physicist.
This happened just last year, but it certainly deserves to be included in the annals of mathematical legends: A graduate student (let's call him Saeed) is in the airport standing in a security line. He is coming back from a conference, where he presented some exciting results of his Ph.D. thesis in Algebraic Geometry. One of the people whom he met at his presentation (let's call him Vikram) is also in the line, and they start talking excitedly about the results, and in particular the clever solution to problem X via blowing up eight points on a plane . They don't notice other travelers slowly backing away from them. Less than a minute later, the TSA officers descend on the two mathematicians, and take them away. They are thoroughly and intimately searched, and separated for interrogation. For an hour, the interrogation gets nowhere: the mathematicians simply don't know what the interrogators are talking about. What bombs? What plot? What terrorism? The student finally realizes the problem, pulls out a pre-print of his paper, and proceeds to explain to the interrogators exactly what "blowing up points on a plane" means in Algebraic Geometry.
{ "source": [ "https://mathoverflow.net/questions/53122", "https://mathoverflow.net", "https://mathoverflow.net/users/11142/" ] }
53,262
Why is it so hard to implement Haken's Algorithm for recognizing whether a knot is unknotted? (Is there a computer implementation of this algorithm?)
Regarding Haken's algorithm: It's not so hard to implement (it's essentially implemented in Regina, though at present you need to type a few lines of python to glue the bits together; a single "big red button" is on its way). However, it's hard to run , since the algorithm has exponential running time (and, depending on how you implement it, exponential memory use). There are two facts that make Haken's algorithm easier to implement than many other normal surface decision algorithms: You only need to search through vertex normal surfaces, not fundamental normal surfaces (Jaco & Tollefson, 1995). Vertex normal surfaces are much easier (and much faster) to enumerate. The test that you apply to each vertex normal surface is relatively simple (see if it describes a disk with non-trivial boundary). For other problems (notably Hakenness testing), the test that you apply to each vertex normal surface can be far more difficult than the original vertex enumeration. The reason Haken's algorithm is slow is that vertex enumeration is NP-hard in general. There are some tempting short-cuts: one is to run $3^n$ polynomial-time linear programs that maximise Euler characteristic over the $3^n$ possible combinations of quad types. However, experimental experience suggests that this short-cut makes things worse: solving $3^n$ linear programs guarantees $\Omega(3^n)$ running time even in the best case for a non-trivial knot. On the other hand, if you perform a full vertex enumeration (and you structure your vertex enumeration code well [1]) then you often see much faster running times in practice, even though the theoretical worst case is slower. An aside (which has already been noted above): there are much faster heuristic tests for unknot recognition, though these are not always guaranteed to give a definitive answer. SnapPea has some, as does Regina. There are many fast ways of proving you have a non-trivial knot (e.g., invariants or geometric structures). 
One fast way of proving you have a trivial knot is to triangulate the complement and "greedily simplify" this triangulation. If you're lucky, you get an easily-recognised 1-tetrahedron solid torus. If you're unlucky, you go back and run Haken's algorithm. The interesting observation here is that, if your greedy simplification is sophisticated enough, you almost always get lucky. (This is still being written up, but see arXiv:1011.4169 for related experiments with 3-sphere recognition.) Btw, thanks Ryan for dragging me online. :) [1] arXiv:0808.4050, arXiv:1010.6200
{ "source": [ "https://mathoverflow.net/questions/53262", "https://mathoverflow.net", "https://mathoverflow.net/users/1956/" ] }
53,399
A common caution about Whitehead's theorem is that you need the map between the spaces; it's easy to give examples of spaces with isomorphic homotopy groups that are not homotopy equivalent. (See Are there two non-homotopy equivalent spaces with equal homotopy groups? ). It's surely also true that the pair (homotopy groups, homology groups) is not a complete invariant, but can anyone give examples? That is, I'm looking for spaces $X$ and $Y$ so that $\pi_n(X) \simeq \pi_n(Y)$ and $H_n(X;\mathbb{Z}) \simeq H_n(Y; \mathbb{Z})$ but $X$ and $Y$ are still not (weakly) homotopy equivalent. (Easier examples are preferred, of course.)
Following up on John's comment, one can consider $S^2$-fibrations over $S^2$. There are two of them since such fibrations are classified by $\pi_1(\textrm{Diff}^{+}(S^2))=\mathbb{Z}_2$. One of them is $S^2\times S^2$ while the other can be shown to be the connected sum of $\mathbb{CP}^2$ and $\overline{\mathbb{CP}}^2$. These two spaces have the same homology. They have the same homotopy groups since they both form the base of a $S^1$-fibration with total space $S^2 \times S^3$. However, the intersection forms are not equivalent and hence they are not homotopy equivalent.
{ "source": [ "https://mathoverflow.net/questions/53399", "https://mathoverflow.net", "https://mathoverflow.net/users/5010/" ] }
53,431
This is a somewhat frivolous question, so I won't mind if it gets closed. One of the categories of Olympiad-style problems (e.g. at the IMO) is solving various functional equations, such as those given in this handout . While I can see the pedagogical value in doing a few of these problems, I never saw the point in practicing this particular type of problem much, and now that I'm a little older and wiser I still don't see anywhere that problems of this type appear in a major way in modern mathematics. (There are a few notable exceptions, such as the functional equation defining modular forms, but the generic functional equation problem has much less structure than a group acting via a cocycle. I am talking about a contrived problem like finding all functions $f : \mathbb{R} \to \mathbb{R}$ satisfying $$f(x f(x) + f(y)) = y + f(x)^2.$$ When would this condition ever appear in "real life"?!) Is this impression accurate, or are there branches of mathematics where these kinds of problems actually appear? (I would be particularly interested if the condition, like the one above, involves function composition in a nontrivial way.) Edit: Thank you everyone for all of your answers. As darij correctly points out in the comments, I haven't phrased the question specifically enough. I am aware that there is a lot of interesting mathematics that can be phrased as solving certain nice functional equations; the functional equations I wanted to ask about are specifically the really contrived ones like the one above. The implicit question being: "relative to other types of Olympiad problems, would it have been worth it to spend a lot of time solving functional equations?"
In additive combinatorics, one often seeks to count patterns such as an arithmetic progression $a, a+r, \ldots, a+(k-1)r$. When doing so, one is naturally led to expressions such as $$ {\bf E}_{a,r \in G} f_0(a) f_1(a+r) \ldots f_{k-1}(a+(k-1)r)$$ for some finite abelian group $G$ and some complex-valued functions $f_0,\ldots,f_{k-1}$. If these functions are bounded in magnitude by $1$, then the above expression is also bounded in magnitude by one. When does equality hold? Precisely when one has a functional equation $$ f_0(a) f_1(a+r) \ldots f_{k-1}(a+(k-1)r) = c$$ for some constant $c$ of magnitude $1$. One can solve this functional equation, and discover that each $f_j$ must take the form $f_j(a) = e^{2\pi i P_j(a)}$ for some polynomial $P_j: G \to {\bf R}/{\bf Z}$ of degree at most $k-2$. This observation can be viewed as the starting point for the study of Gowers uniformity norms, and one can perform a similar analysis to start understanding many other patterns in additive combinatorics. In ergodic theory, cocycle equations, of which the coboundary equation $$ \rho(x) = F(T(x)) - F(x)$$ is the simplest example, play an important role in the study of extensions of dynamical systems and their cohomology. Despite the apparently algebraic nature of such equations, though, one often solves these equations instead by analytic means (and in particular, not by IMO techniques), for instance by using the spectral theory or mixing properties of the shift $T$, and exploiting the measurability or regularity properties of $\rho$ or $F$. (The solving of such equations, incidentally, is a crucial aspect of the ergodic theory analogue of the study of the Gowers uniformity norms mentioned earlier, as developed by Host-Kra and Ziegler.) 
Returning to the more "contrived" functional equations of Olympiad type, note that such equations usually use (a) the additive structure of the domain and range, (b) the multiplicative structure of the domain and range, and (c) the fact that the domain and range are identical (so that one can perform compositions such as $f(f(x))$). In most mathematical subjects, at least one of these features is absent or irrelevant, which helps explain why such equations are relatively rare in research mathematics. For instance, in many branches of analysis, the range of functions (typically ${\bf R}$ or ${\bf C}$) usually has no natural reason to be identified with the domain of functions (which may ``accidentally'' be ${\bf R}$ or ${\bf C}$, but is often more naturally viewed in a more general category, such as that of measure spaces, topological spaces, or manifolds), so (c) is usually absent. Conversely, in dynamics, (c) is prominent, but (a) and (b) are not. The only fields that come to my mind that naturally exhibit all three of (a), (b), (c) (without also automatically exhibiting much richer algebraic structure, such as ring homomorphism structure) are complex dynamics, universal algebra, and certain types of cryptography, but I don't have enough experience in these fields to actually provide some interesting examples.
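To make the equality case in the additive-combinatorics example above concrete, here is a small numerical check (my own illustration, not part of the original answer): over $G = \mathbb{Z}/N\mathbb{Z}$ with $k = 3$, linear phases whose coefficients satisfy the required linear relations make the product identically constant, so the average has magnitude exactly $1$, while phases with no such relation average to $0$.

```python
from cmath import exp, pi

N = 31
e = lambda t: exp(2j * pi * t)  # e(t) = exp(2*pi*i*t)

def ap_average(f0, f1, f2):
    """E_{a, r in Z_N} of f0(a) * f1(a + r) * f2(a + 2r)."""
    total = sum(f0(a) * f1((a + r) % N) * f2((a + 2 * r) % N)
                for a in range(N) for r in range(N))
    return total / N**2

f = lambda c: (lambda a: e(c * a / N))  # linear phase with coefficient c

# Coefficients (1, -2, 1): the phases telescope, the product is identically 1.
print(abs(ap_average(f(1), f(-2), f(1))))  # close to 1.0

# Coefficients (1, 1, 1): no linear relation, so the average is essentially 0.
print(abs(ap_average(f(1), f(1), f(1))))   # close to 0.0
```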
{ "source": [ "https://mathoverflow.net/questions/53431", "https://mathoverflow.net", "https://mathoverflow.net/users/290/" ] }
53,471
Some years ago I took a long piece of string, tied it into a loop, and tried to twist it up into a tangle that I would find hard to untangle. No matter what I did, I could never cause the later me any difficulty. Ever since, I have wondered whether there is some reasonably simple algorithm for detecting the unknot. I should be more precise about what I mean by "reasonably simple": I mean that at every stage of the untangling, it would be clear that you were making the knot simpler. I am provoked to ask this question by reading a closely related one: can you fool SnapPea? . That question led me to a paper by Kauffman and Lambropoulou, which appears to address exactly my question: http://www.math.uic.edu/~kauffman/IntellUnKnot.pdf , since they define a diagram of the unknot to be hard if you cannot unknot it with Reidemeister moves without making it more complicated. For the precise definition, see page 3, Definition 1. A good way to understand why their paper does not address my question (by the way, when I say "my" question, I am not claiming priority -- it's clear that many people have thought about this basic question, undoubtedly including Kauffman and Lambropoulou themselves) is to look at their figure 2, an example of an unknot that is hard in their sense. But it just ain't hard if you think of it as a three-dimensional object, since the bit of string round the back can be pulled round until it no longer crosses the rest of the knot. The fact that you are looking at the knot from one particular direction, and the string as it is pulled round happens to go behind a complicated part of the tangle is completely uninteresting from a 3D perspective.
So here's a first attempt at formulating what I'm actually asking: is there a generalization of the notion of Reidemeister moves that allows you to pull a piece of string past a whole chunk of knot, provided only that that chunk is all on one side, so to speak, with the property that with these generalized Reidemeister moves there is an unknotting algorithm that reduces the complexity at every stage? I'm fully expecting the answer to be no, so what I'm really asking for is a more convincing unknot than the ones provided by Kauffman and Lambropoulou. (There's another one on the Wikipedia knot theory page, which is also easily unknotted if you allow slightly more general moves.) I wondered about the beautiful Figure 5 in the Kauffman-Lambropoulou paper, but then saw that one could reduce the complexity as follows. (This will be quite hard to say in words.) In that diagram there are two roughly parallel strands in the middle going from bottom left to top right. If you move the top one of these strands over the bottom one, you can reduce the number of crossings. So if this knot were given to me as a physical object, I would have no trouble in unknotting it. With a bit of effort, I might be able to define what I mean by a generalized Reidemeister move, but I'm worried that then my response to an example might be, "Oh, but it's clear that with that example we can reduce the number of crossings by a move of the following slightly more general type," so that the example would merely be showing that my definition was defective. So instead I prefer to keep the question a little bit vaguer: is there a known unknot diagram for which it is truly the case that to disentangle it you have to make it much more complicated? A real test of success would be if one could be presented with it as a 3D object and it would be impossible to unknot it without considerable ingenuity. (It would make a great puzzle ...)
I should stress that this question is all about combinatorial algorithms: if a knot is hard to simplify but easily recognised as the unknot by Snappea, it counts as hard in my book. Update. Very many thanks for the extremely high-quality answers and comments below: what an advertisement for MathOverflow. By following the link provided by Agol, I arrived at Haken's "Gordian knot," which seems to be a pretty convincing counterexample to any simple proposition to the effect that a smallish class of generalized moves can undo a knot monotonically with respect to some polynomially bounded parameter. Let me see if I can insert it: ( J.O'Rourke substituted a hopefully roughly equivalent image for Timothy's now-inaccessible image link.) I have stared at this unknot diagram for some time, and eventually I think I understood the technique used to produce it. It is clear that Haken started by taking a loop, pulling it until it formed something close to two parallel strands, twisting those strands several times, and then threading the ends in and out of the resulting twists. The thing that is slightly mysterious is that both ends are "locked". It is easy to see how to lock up one end, but less easy to see how to do both. In the end I think I worked out a way of doing that: basically, you lock one end first, then having done so you sort of ignore the structure of that end and do the same thing to the other end with a twisted bunch of string rather than a nice tidy end of string. I don't know how much sense that makes, but anyway I tried it. The result was disappointing at first, as the tangle I created was quite easy to simplify. But towards the end, to my pleasure, it became more difficult, and as a result I have a rather small unknot diagram that looks pretty knotted. 
There is a simplifying move if one looks hard enough for it, but the move is very "global" in character -- that is, it involves moving several strands at once -- which suggests that searching for it algorithmically could be quite hard. I'd love to put a picture of it up here: if anyone has any suggestions about how I could do this I would be very grateful.
As you suggest, a lot of people have thought about this question. It's hard to find arrangements of an unknot that are convincingly hard to untie, but there are techniques that do pretty well. Have you ever had to untangle a marionette, especially one that a toddler has played with? They tend to become entangled in a certain way, by a series of operations where the marionette twists so that two bundles of control strings are twisted in an opposite sense, sometimes compounded with previous entanglements. It can take considerable patience and close attention to get the mess undone. The best solution: don't give marionettes to young or inattentive children! You can apply this to the unknot, by first winding it up in a coil, then taking opposite sides of the coil and braiding them (creating inverse braids on the two ends), then treating what you have like a marionette to be tangled. Once the arrangement has a bit of complexity, you can regroup it in another pattern (as two globs of stuff connected by $2n$ strands) and do some more marionette type entanglement. In practice, unknots can become pretty hard to undo. As far as I can tell, the Kauffman and Lambropoulou paper you cited is discussing various cases of this kind of marionette-tangling operation. I think it's entirely possible that there's a polynomial-time combinatorial algorithm to unknot an unknottable curve, but this has been a very hard question to resolve. The minimum area of a disk that an unknot bounds grows exponentially in terms of the complexity of an unknotted curve. However, such a disk can be described with data that grows as a polynomial in terms of the number of crossings or similar measure, using normal surface theory. It's unknown (to me) but plausible (to me) that unknotting can be done by an isotopy of space that has a polynomially-bounded, perhaps linearly-bounded, "complexity", suitably defined --- that is, things like the marionette untangling moves.
This would not imply you can find the isotopy easily---it just says the problem is in NP, which is already known. One point: the Smale Conjecture, proved by Allen Hatcher, says that the group of diffeomorphisms of $S^3$ is homotopy equivalent to the subgroup $O(4)$. A corollary of this is that the space of smooth unknotted curves retracts to the space of great circles, i.e., there exists a way to isotope smooth unknotted curves to round circles that is continuous as a function of the curve.
{ "source": [ "https://mathoverflow.net/questions/53471", "https://mathoverflow.net", "https://mathoverflow.net/users/1459/" ] }
53,724
Some irrational numbers are transcendental, which makes them in some sense "more irrational" than algebraic numbers. There are also numbers, such as the golden ratio $\varphi$, which are poorly approximable by rationals. But I wonder if there is another sense in which one number is more irrational than another. Consider the following well known irrationals: $\sqrt{2}$, $\varphi$, $\log_2{3}$, $e$, $\pi$, $\zeta(3)$. The proofs of irrationality of these numbers increase in difficulty from grade-school arguments, to calculus, to advanced methods. Other probable irrationals such as $\gamma$ most likely have very difficult proofs. Can this notion be made precise? Is there a well defined way in which, for example, $\pi$ is more irrational than $e?$
Yes, there is such a thing as the irrationality measure of a real number (I'm not sure if it can be / has already been extended to complex numbers). It is based on the idea that all algebraic numbers (including the golden ratio) are hard to approximate well by rationals, relative to the size of the denominator of the rational used, while it is sometimes possible for a transcendental number to be approximated better. In particular, if a number $\alpha\in\mathbb{R}\setminus\mathbb{Q}$ has the property that there are infinitely many rational approximations $\frac pq\in\mathbb{Q}$ with $|\,\alpha-\frac pq| < q^{-t}$, then $t$ is a lower bound for the irrationality measure of $\alpha$; the larger $t$ is, i.e. the better your approximations are relative to the denominator, the "more irrational" you are, at least from a Diophantine approximation point of view. From Wikipedia: The irrationality measure of a rational number is 1; the very deep theorem of Thue, Siegel, and Roth shows that any algebraic number that isn't rational has irrationality measure 2; and transcendental numbers will have an irrationality measure $\geq2$. However, as Douglas Zare has pointed out in the comments, the set of transcendental numbers of irrationality measure $>2$ has measure 0, so that in most cases it's unfortunately not useful as a comparison. It appears that the irrationality measure of $\pi$ is not currently known, but that there are upper bounds; the most recent one I could find is this , which would appear to show that $\mu(\pi)\leq7.6063$. The Wikipedia article claims that $\mu(e)=2$, so whether or not $\pi$ is "more irrational" than $e$ looks like an open question.
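As a concrete illustration (my own sketch, not from the original answer): the continued-fraction convergents of $\sqrt{2} = [1; 2, 2, 2, \dots]$ approximate it to within $1/q^2$, consistent with irrational algebraic numbers having irrationality measure exactly $2$.

```python
from fractions import Fraction
from math import sqrt

def convergents(cf):
    """Convergents p/q of the continued fraction [a0; a1, a2, ...]."""
    p0, q0, p1, q1 = 1, 0, cf[0], 1
    out = [Fraction(p1, q1)]
    for a in cf[1:]:
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        out.append(Fraction(p1, q1))
    return out

# Each convergent p/q of sqrt(2) satisfies |sqrt(2) - p/q| < 1/q^2 -- the
# approximation quality every irrational admits, and (by Thue-Siegel-Roth)
# essentially the best possible for an algebraic number.
alpha = sqrt(2)
for pq in convergents([1] + [2] * 15):
    assert abs(alpha - pq.numerator / pq.denominator) < 1 / pq.denominator**2

print(convergents([1, 2, 2, 2]))
# [Fraction(1, 1), Fraction(3, 2), Fraction(7, 5), Fraction(17, 12)]
```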
{ "source": [ "https://mathoverflow.net/questions/53724", "https://mathoverflow.net", "https://mathoverflow.net/users/175/" ] }
Suppose that $\epsilon_1,\epsilon_2,\ldots$ are IID random variables with the Bernoulli distribution $\mathbb{P}(\epsilon_n=\pm1)=1/2$, and $a_1,a_2,\ldots$ is a real sequence with $\sum_na_n^2=1$. Letting $S=\sum_n\epsilon_na_n$, the question is whether there exists a constant $c > 0$, independent of the choice of $a$, with $$ \mathbb{P}(\vert S\vert\ge1)\ge c.\qquad\qquad{\rm(1)} $$ That is, I am interested in finding a bound on the probability of the sum being within one standard deviation of its mean. If true, this represents a particularly sharp version of the $L^0$ Khintchine inequality. Considering the example with $a_1=1$ and all other $a_i$ set to zero, for which $\mathbb{P}(\vert S\vert > 1)=0$, it is necessary that the inequality inside the probability in (1) is not strict. Also, considering the example with $(a_1,a_2,a_3)=(1/\sqrt2,1/2,1/2)$, it can be seen that $c\le1/4$. I wonder if it is possible to construct further examples showing that $c$ must, in fact, be zero? For any $0 < u < 1$, it is easy to find a bound $$ \mathbb{P}(\vert S\vert > u)\ge c_u $$ for $c_u > 0$ a constant independent of $a$. Considering the case with $a_1=a_2=1/\sqrt{2}$ and all other $a_i$ set to zero, it is clear that $c_u \le 1/2$. In fact, it can be shown that $c_u=(1-u^2)^2/3$ will suffice (see my answer to this other MO question), but $c_u$ decreases to zero as $u$ goes to $1$, so this does not help with (1). Combining the Paley-Zygmund inequality with the optimal constants in the $L^p$-versions of the Khintchine inequality for $p > 0$ (see ref. 1 or 2) it is possible to give improved values for $c_u$, but it still tends to zero as $u$ goes to 1. My apologies if this is either obvious or some well-known fact that I have missed, but I could not find any reference for it.
This question is something that I originally thought about while writing up some notes on stochastic integration (posted on my blog), as the $L^0$-version of the Khintchine inequality can be used to prove the existence of the stochastic integral. However, it is not necessary to have something as strong as (1) in that case. More recently, it came up again while answering this MO question. [Update: It's been some time since this question was posted and answered. Many thanks to Anthony, Iosif and Ravi. There is ongoing research on this problem, and it seems likely that the optimal value of $c$ is 7/32 as conjectured by Oleszkiewicz in the paper linked in Ravi's answer. See Some explorations on two conjectures about Rademacher sequences by Hu, Lan and Sun, where the optimal value of 7/32 is shown for sequences of length at most 7, but it is still open in general. Also, the preprint Proof of Tomaszewski's Conjecture on Randomly Signed Sums by Keller and Klein includes the claim that their methods improve the best known value for $c$ to 1/8.] Refs: Haagerup, The best constants in the Khintchine inequality, Studia Math., 70 (3) (1982), 231-283. Nazarov & Podkorytov, Ball, Haagerup, and distribution functions, Preprint (1997). Available from Fedja Nazarov's homepage.
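The small examples mentioned in the question can be checked by brute force. Here is a quick sketch of my own (the helper name `prob_ge_one` is mine, not from the question) that enumerates all $2^n$ sign patterns and computes $\mathbb{P}(\vert S\vert\ge1)$ exactly:

```python
from itertools import product
from math import sqrt

def prob_ge_one(a, tol=1e-9):
    """P(|sum_i eps_i * a_i| >= 1), computed by enumerating all 2^n sign patterns.

    The tolerance guards against floating-point rounding when |S| should
    equal 1 exactly, since the inequality in (1) is non-strict.
    """
    n = len(a)
    hits = sum(abs(sum(e * x for e, x in zip(eps, a))) >= 1 - tol
               for eps in product((-1, 1), repeat=n))
    return hits / 2 ** n

print(prob_ge_one([1.0]))                    # a_1 = 1: |S| = 1 always, probability 1
print(prob_ge_one([1 / sqrt(2), 0.5, 0.5]))  # the example showing c <= 1/4
print(prob_ge_one([0.5] * 4))                # four equal weights
```

For $(1/\sqrt2,1/2,1/2)$ this returns exactly $1/4$: the sum reaches magnitude $\ge1$ only when $\epsilon_2=\epsilon_3$ and $\epsilon_1$ has the matching sign.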
OK. Here's a proof that $c > 0.002$. No doubt it can be substantially improved. We can assume the $a_i$ are arranged in decreasing order. Write $a$ for $a_1$. If $a\ge 1/2$, let $X=a_1\epsilon_1$ and $Y=(1-a^2)^{-1/2}(a_2\epsilon_2+\ldots+a_n\epsilon_n)$ so that $S=X+\sqrt{1-a^2}Y$. Notice that $Y$ is a normalized Rademacher sum of the same form as in the question, so the bound $\mathbb P(|Y|\ge u)\ge(1-u^2)^2/3$ applies. Now $\mathbb P(|\sqrt{1-a^2}Y|\ge 1-a)=\mathbb P(|Y|\ge \sqrt{\frac{1-a}{1+a}})\ge \left(1-\frac{1-a}{1+a}\right)^2/3=4a^2/(3(1+a)^2)$. Since $a\ge 1/2$, this exceeds $4/27$, so that provided $\epsilon_1$ has the same sign as $Y$, we get $|S|\ge a+(1-a)=1$. This occurs with probability at least $2/27$. If on the other hand $a<1/2$ then we have $a_i^2<1/4$ for each $i$. In particular there exists a partition of $\{1,\ldots,n\}$ into two sets $A$ and $B$ such that $3/8\le \sum_{i\in A}a_i^2\le \sum_{i\in B}a_i^2\le 5/8$. Let $\alpha^2=\sum_{i\in A}a_i^2$ and $\beta^2=\sum_{i\in B}a_i^2$. Let $X=\sum_{i\in A}(a_i/\alpha)\epsilon_i$ and $Y=\sum_{i\in B}(a_i/\beta)\epsilon_i$. Then $\mathbb P(|X|\ge 3/4)\ge 49/768$ by the given inequality. Similarly $\mathbb P(|Y|\ge 3/4)\ge 49/768$. The probability that they both exceed $3/4$ and have the same sign is at least $\frac12(49/768)^2$. If this is the case $|S|=\alpha |X|+\beta |Y|\ge (3/4)(\alpha+\beta)$. In the worst case $\alpha=\sqrt{3/8}$ and $\beta=\sqrt{5/8}$, but even in this case the right side exceeds 1.
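The arithmetic behind the two cases can be double-checked with exact rationals. This is a small verification sketch of my own, not part of the argument:

```python
from fractions import Fraction
from math import sqrt

# Case a >= 1/2: success probability at least (1/2) * (4/27) = 2/27.
case1 = Fraction(1, 2) * Fraction(4, 27)

# Case a < 1/2: the bound (1 - u**2)**2 / 3 at u = 3/4 gives 49/768 per half,
# and both halves large with matching signs has probability >= (1/2)*(49/768)**2.
half_bound = (1 - Fraction(3, 4) ** 2) ** 2 / 3
assert half_bound == Fraction(49, 768)
case2 = Fraction(1, 2) * half_bound ** 2

print(float(case1), float(case2))  # both exceed 0.002

# Worst-case split: (3/4)(sqrt(3/8) + sqrt(5/8)) must exceed 1.
worst = 0.75 * (sqrt(3 / 8) + sqrt(5 / 8))
print(worst)
```

The binding term is the second case, roughly $0.00204$, which is where the stated constant $0.002$ comes from.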
{ "source": [ "https://mathoverflow.net/questions/53855", "https://mathoverflow.net", "https://mathoverflow.net/users/1004/" ] }