Dataset columns:
  source_id: int64 (1 to 4.64M)
  question: string (lengths 0 to 28.4k)
  response: string (lengths 0 to 28.8k)
  metadata: dict
41,059
I am looking to generate combinations from a list of elements. Right now I am using an approach of generating the power set. For example, to generate combinations from {a,b,c}, I enumerate 001, 010, 100, 101, etc., and take the elements whose corresponding binary digits are set to 1. But the problem comes when there are repeated characters in the list, say {a,a,b}: the above approach would give a, a, b, ab, ba, aa, aab, whereas I would like to see only a, b, ab, aa, aab. I was thinking of writing some binary mask to eliminate repeated strings but was not successful. Any thoughts on how to generate unique combinations?
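A minimal sketch of one way to get only the unique combinations (my own illustration in Python, not taken from the question or from the response below; the function name is made up): instead of enumerating bitmasks over positions, enumerate how many copies of each distinct element to take, which never produces a duplicate.

```python
from collections import Counter
from itertools import product

def unique_combinations(elements):
    """Yield every non-empty combination of a multiset of characters exactly once."""
    counts = Counter(elements)              # e.g. {'a': 2, 'b': 1} for ['a', 'a', 'b']
    items = sorted(counts)                  # fix an order on the distinct elements
    # Choose a multiplicity 0..count for each distinct element.
    for mults in product(*(range(counts[x] + 1) for x in items)):
        if any(mults):                      # skip the empty selection
            yield ''.join(x * m for x, m in zip(items, mults))

print(list(unique_combinations(['a', 'a', 'b'])))
# -> ['b', 'a', 'ab', 'aa', 'aab']  (the five combinations the question asks for)
```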
There's a big difference between teaching category theory and merely paying attention to the things that category theory clarifies (like the difference between direct products and direct sums). In my opinion, the latter should be done early (and late, and at all other times); there's no reason for intentional sloppiness. On the other hand, teaching category theory is better done after the students have been exposed to some of the relevant examples. Many years ago, I taught a course on category theory, and in my opinion it was a failure. Many of the students had not previously seen the examples I wanted to use. One of the beauties of category theory is that it unifies many different-looking concepts; for example, left adjoints of forgetful functors include free groups, universal enveloping algebras, Stone-Cech compactifications, abelianizations of groups, and many more. But the beauty is hard to convey when, in addition to explaining the notion of adjoint, one must also explain each (or at least several) of these special cases. So I think category theory should be taught at the stage where students have already seen enough special cases of its concepts to appreciate their unification. Without the examples, category theory can look terribly unmotivated and unintuitive.
{ "source": [ "https://mathoverflow.net/questions/41059", "https://mathoverflow.net", "https://mathoverflow.net/users/9792/" ] }
41,086
Many theorems have the form: Premise(s) imply Conclusion(s). Example A of wrongness: there are many examples in which a theorem is stated without mentioning that part of the premise is not necessary to reach the conclusion. Usually it is simple (and much better) to add a remark stating that the result is not sharp (ideally providing an example where a weaker premise still yields the conclusion). But there is another type of bias. Added note: below, composition means the AND of two relations (for classical composition, transitivity is not preserved; thanks to HenrikRüping's remark). Example B of wrongness: Theorem 1: The composition of two equivalence relations is an equivalence relation. Or in fewer words: equivalence relations are stable under composition. Actually there is a much finer version of B: Theorem A: For relations, each of the following properties is stable under composition: reflexive, transitive, symmetric. By conjunction of the above we obtain: Corollary B: Equivalence relations are stable under composition. Note: the second form is not only more precise but it also makes the mention "left as an easy exercise" more acceptable. The "WRONG" notion: I called Theorem 1 (or its statement) wrong because it leads the reader to think that the conjunction of the three properties plays a role in proving the conclusion. Of course only true theorems may be qualified as wrong. Taking an absolute stance, you may call wrong any theorem that is not a tautology. A less absolute stance would call wrong any theorem that is not a tautology and in which you forget to mention non-sharpness. Question 1: Is there a better / more adequate term than wrong (the subtext is: do you think it is a good notion?). Question 2: Do you know examples that follow a pattern like B, or some variation on this lack of tautology? ADDED TO BE MORE SPECIFIC: Question 3: More specifically, are there other types of patterns showing a distance between premise and conclusion? The types need to be common in the mathematical literature, not purely logical types (of course those are more easily enumerated).
I remember years ago sitting in Leo Harrington's office in Berkeley explaining my dissertation to him (he was on my committee), and he spent some time just scanning through the dissertation seeking out any theorem of the form If P, then Q. At such a theorem, he would stop, smile with glee, and then turn to me and ask: Is the converse true? And I would have to explain why or give a counterexample. This little exercise definitely made a better dissertation. His point, of course, was that such theorems could be seen as flawed in a way very similar to the sense of your question. If the converse was true, then this fact might become part of the theorem, which could be stated as the full if and only if version. And if the converse was not true, then the hypothesis was wastefully strong, and might be improved by weakening it and finding a better theorem. So the exercise guides one to what might be better theorems lurking just nearby your existing results. Since that time, I have often found this perspective illuminating---it has helped my own mathematical writing and understanding in many instances---and so now I find myself carrying out that little exercise with my own students... At the same time, I recognize that one should not take a dogmatic view on it. There are numerous instances where one wants to draw attention to a surprising or illuminating implication, even though it isn't optimal, because one wants to focus attention on a particular aspect of the mathematics at hand. The choice after all of how to present a mathematical result is also a choice about non-mathematical issues, such as style or emphasis, and surely many of us have wished in certain cases that the author of a text had given more attention to such presentational aspects of a mathematical text. Perhaps the best way to communicate the mathematical idea you want to communicate is to focus only on the implication P implies Q, even in cases when the hypothesis can be weakened or when the converse is also true, since those other aspects might be a distraction from the construction you want to present or the example you want to explore. Perhaps part of the point is that the implication is easy when P holds, while the optimal implication may be difficult. And so we should relax, and in such circumstances allow such flawed theorems into our papers. (But still, you should nevertheless try to know the answer to the Harrington exercise for your theorems, even if you decide ultimately not to include those more exact results for the reasons I mentioned.) But you seemed particularly interested in phenomenon B, so let me offer a specific example, as you requested: Theorem. Every forcing extension of a model of ZFC is a model of ZFC. This theorem breaks apart in a manner similar to your equivalence relation example, since for most of the stronger axioms, to verify the axiom in the extension V[G] one appeals to the axiom in the ground model V. But I definitely don't call this a wrong theorem in any sense, and I wouldn't see it as a necessary improvement to delineate exactly which ground model axioms are needed to get the particular axioms in the forcing extension, unless the focus of the work was specifically on models that did not satisfy all of ZFC. If one is interested just in ZFC models, then this theorem expresses exactly the desired implication, and the broken-apart version in the style of your Theorem A could be seen as an irrelevant technical distraction. 
Almost any theorem about ZFC models would exhibit a very similar phenomenon to this.
{ "source": [ "https://mathoverflow.net/questions/41086", "https://mathoverflow.net", "https://mathoverflow.net/users/3005/" ] }
41,141
I want to cite a paper which is on arxiv.org but is not published or reviewed anywhere, and no publication or review seems to be in the pipeline. Would citing this arxiv.org paper be bad? Should I wait for a paper to be peer reviewed before I cite it? Added: I don't actually know whether a 'real' publication is in the pipeline. The alternative to citing the paper would probably be to ignore it; I have a way to extend the results in the paper if the paper's results are true, but I don't have the skill or time to verify that the arxiv.org paper is correct.
[It is] Not really [bad to cite an arXiv paper] * . If the paper on arXiv provides the result you want, you are free to cite it. Before the arXiv, citing "private communication" or "pre-print" was not unheard of. On the other hand, since it hasn't been peer reviewed, you probably should double check and make sure you understand and believe the paper before you cite it (if you use one of its results crucially) (not that you shouldn't do the same for peer-reviewed papers, just that one may want to be extra careful with referring to pre-prints). Note that there are two reasons for citations. The first is to give credit where credit is due: you do not want to look like you are appropriating someone else's result (or in some cases, inadvertently slighting somebody by sin of omission). The second is to provide references for assertions made without proof in your paper. Obviously if you are citing for the former reason, a paper on arXiv is really no different from a paper in a published journal. If the author is right, you have covered your bases. If he is wrong, then better for you, perhaps. It is with the latter case that you need to be more careful. If the paper has been on arXiv for a long time and not appeared in any journals (definitions of "long time" of course vary from field to field), you may want to be a bit cautious in deciding whether the foundation of your house is sound. Also, how do you know "no publication or review seems to be in the pipeline"? I know several people (myself included) who would only include the journal ref on arXiv after it has been accepted for publication. Perhaps you should double check with the original author whether it has been submitted, and if not, why not? * As Joel pointed out in his comments to the original question, and Emerton in his comments to this answer, there is some ambiguity as to which question I was answering.
{ "source": [ "https://mathoverflow.net/questions/41141", "https://mathoverflow.net", "https://mathoverflow.net/users/9501/" ] }
41,212
I would be interested to learn if the following generalization of the classical Looman-Menchoff theorem is true. Assume that the function $f=u+iv$, defined on a domain $D\subset\mathbb{C}$, is such that
1. $u_x$, $u_y$, $v_x$, $v_y$ exist almost everywhere in $D$;
2. $u$, $v$ satisfy the Cauchy–Riemann equations almost everywhere in $D$;
3. $f=f(x,y)$ is separately continuous (in $x$ and $y$) in $D$;
4. $f$ is locally integrable.
Question: Does it follow that $f$ is analytic everywhere in $D$? Remark 1. Condition 3 is essential (take $f=1/z$). Remark 2. G. Sindalovskiĭ proved analyticity of $f$ under conditions 2-4 when the partial derivatives exist everywhere in $D$, except on a countable union of closed sets of finite linear Hausdorff measure (link).
No. Let $c$ be the Cantor function on $[0,1]$ , so that $c$ is continuous, $c' = 0$ a.e., but $c$ is not constant. Then take $u(x+iy) = v(x+iy)=c(x)c(y)$ . We have $u_x=u_y=v_x=v_y=0$ a.e. so the Cauchy–Riemann equations are trivially satisfied, and $f(z)=u(z)+iv(z)$ is bounded and continuous on the unit square, but certainly not analytic. Almost everywhere differentiability is almost never the right condition for solutions to a PDE. A better condition would be to have $u,v$ in some Sobolev space.
{ "source": [ "https://mathoverflow.net/questions/41212", "https://mathoverflow.net", "https://mathoverflow.net/users/5371/" ] }
41,214
Mathematics has undergone some rather nice developments recently with the adoption of new technologies, things like on-line journals, the arXiv, this website, etc. I imagine there must be many further developments that could be quite useful. What I'm thinking of is a website where anyone can contribute formal proofs of theorems. In particular there would be many proofs of the same theorem provided the proof is different -- like a constructive proof of Brouwer's fixed point theorem, a non-constructive proof, etc. The idea would be to build up a large web of formal proofs, one building on another so that one could eventually do searches through this space of formal proofs to find out what the most efficient proofs are, in the sense of how many ASCII characters it would take to write up the proof using Zermelo-Fraenkel set theory. One hope would be to have a big, active database of verified formal proofs. Another would be to have a webpage where you could hope to discover whether or not there are simpler proofs of theorems you know, that you may not have been aware of. Being a web-page there would be certain useful efficiencies -- the webpage could "compile" your proof and check to see that it's valid. Being a wiki would make it relatively easy for people to contribute and build on an existing infrastructure. And you'd be free to use pre-existing proofs (provided they've been verified as valid) in any subsequent proofs. One could readily check what axioms a proof needs -- for example to what extent a proof needs the axiom of choice, and so on. Are there any efforts towards such a development? Such a tool would hopefully function like the publishing arm of some sort of modern internet-era Bourbaki.
There are lots of sites for formal proofs, but no wiki that I am aware of. Typical examples are the Archive of Formal Proofs at https://www.isa-afp.org/ and Mizar at http://mizar.org/ . Lots of proofs are contained in the distributions of various interactive theorem provers: Isabelle, HOL, HOL Light, Coq, ACL2, etc. As stated in another post, there is no agreement on foundations (formulas, axioms and rules of inference). A typical split is between classical (HOL et al.) and non-classical (Coq et al.) systems, but the differences are typically much more subtle than that. As a result all these systems are effectively unable to reuse proofs from other systems. Occasionally someone writes a translator from one system to another, but the problem here is that the translation typically does not produce a readable proof in the target system; a readable proof is necessary if the translated proof is to be maintainable. If you fix on ZFC+(maybe some other axioms), then Mizar probably has the most extensive library. Every few years, someone proposes a big database of formal proofs, but these projects invariably die for various reasons related to the issues above. An example is the QED project: http://en.wikipedia.org/wiki/QED_manifesto My personal view is that constructing formal proofs, and maintaining them, is currently too difficult. Having said that, in the long run this is clearly an idea whose time will come.
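Purely as an illustration of what a single entry in such a database of machine-checked proofs looks like (my addition, not part of the answer; the snippet uses Lean 4, a more recent system in the same family as those listed above, chosen only for its compact syntax):

```lean
-- A complete, machine-checked proof of a toy statement.
-- `Nat.add_comm` is a standard library lemma; the checker verifies the whole file.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```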
{ "source": [ "https://mathoverflow.net/questions/41214", "https://mathoverflow.net", "https://mathoverflow.net/users/1465/" ] }
41,253
I am teaching a course leading up to Tate's thesis and I told the students last week, when defining ideles, that the first topology that was put on the ideles was not so good (e.g., it was not Hausdorff; it's basically the profinite topology on the ideles, so archimedean components don't get separated well). You can find this mentioned on the second page of the memorial article Claude Chevalley (1909–1984) by Dieudonné and Tits in Bulletin AMS 17 (1987) (doi: 10.1090/S0273-0979-1987-15509-1 ), where they also say that Chevalley's introduction of the ideles was "a definite improvement on earlier similar ideas of Prüfer and von Neumann, who had only embedded $K$ [the number field] into the product over the finite places" (emphasis theirs). [Edit: Scholl's answer says in a little more detail what Prüfer and von Neumann were doing, with references.] I have two questions: 1) Can anyone point to a specific article where Prüfer or von Neumann used a product over just the finite places, or at least indicate whether they were able to do anything with it? 2) Who introduced the restricted product topology on the ideles? (In Chevalley's 1940 paper deriving global class field theory using the ideles and not using complex analysis, Chevalley uses a different topology, as I mentioned above.) I would've guessed it was Weil, but BCnrd told me that he heard it was due to von Neumann. Any answer with some kind of evidence for it is appreciated. Edit: For those wondering why the usual notation for the ideles is $J_K$ and not $I_K$ , the use of $J_K$ goes right back to Chevalley's papers introducing ideles. (One may imagine $I_K$ could have been taken already for something related to ideals, but in any event it's worth noting the use of " $J$ " wasn't some later development in the subject.)
I know nothing about work of "idelic nature" by von Neumann or Prüfer. Already in the 1930's Weil understood that Chevalley was wrong to ignore the connected component, because Weil understood already then that Hecke's characters were the characters of the idele class group for the right topology on that group. I don't know of any place before his paper dedicated to Takagi where he defined the ideles explicitly as a topological group, but he must have understood the situation way before that. When I wrote my thesis I used what seemed to me to be the obvious topology, without going into the history of the matter.
{ "source": [ "https://mathoverflow.net/questions/41253", "https://mathoverflow.net", "https://mathoverflow.net/users/3272/" ] }
41,563
Dear members, Way back in the stone age when I was an undergraduate (the mid 90's), the internet was a germinal thing that consisted of not much more than e-mail, ftp and the unix "talk" command (as far as I can remember). HTML and web-pages were still germinal. Google wouldn't have had anything to search, had it existed. Nowadays Google is an incredibly convenient way of finding almost anything -- not just solutions to mathematics problems, but even friends you lost track of 20+ years ago. My question concerns how Google (and to a lesser extent other technological advances) has changed the landscape for you. Specifically, when you're teaching proofs. More details on what I'm getting at: A "rite of passage" homework problem in the 2nd year multi-variable calc/analysis course at the University of Alberta was the Cantor-Schroeder-Bernstein theorem. In the 3rd year there was the Kuratowski closure/14-set theorem. It's not very useful to ask students to prove such theorems on homework assignments nowadays, since the "pull" of Google is too strong. They easily find proofs of these theorems even if they're not deliberately searching for them. The reason I value these "named" traditional problems is primarily that they are fairly significant problems where a student, after they've completed the problem, can look back and know they've proven (on their own) some kind of structural theorem - they know they're not just proving meaningless little lemmas, as the theorems have historical significance. As these kinds of accomplishments accumulate, students observe they've learned to some extent how an area develops and what it takes in terms of contributions of new ideas, dogged deduction, and so on. I'm curious to what extent you've adapted to this new dynamic. I have certainly noticed students being able to look up not just named theorems but also relatively simple, arbitrary problems. After all, even if you create a problem that you think is novel, it's rather unlikely that this is the case - sometimes students find your problem on a 3-year-old homework assignment on a course webpage half-way around the planet, even if it's new to you. As Jim Conant mentioned in the comments, this is a relatively new thing. When I was an undergraduate, going to the library meant a 30-minute walk each way, then the decision process of trying to figure out what textbook to look in, frequently a long search that led me to learning something interesting that I hadn't planned on, and frequently not finding what I set out to find. But type part of your problem into Google and it brings you to the exact line of all the textbooks in which it appears. It brings up all the home-pages where the problem appears and frequently solution keys, if not Wikipedia pages on the problem -- I've deleted more than one Wikipedia page devoted to solutions to particular homework problems. Of course there are direct ways to adapt: asking relatively obscure questions. And there's "denying the problem" - the idea that good students won't (deliberately or accidentally) look up solutions. IMO this underestimates how easy it is to find solutions nowadays. And it underestimates how diligent students have to be in order to succeed in mathematics. Any insights welcome.
How would you teach anything in an age when the "arcana" or guild secrets had been made public? Well, you would teach . And you would not ask questions that had answers that could be called "answers" on the basis of some look-up. I'm not involved in such things these days, but when I was, I wrote my own questions for students. I did not expect to take questions down off the shelf from anywhere, and for that reason my questions perhaps had a few rough edges. But then I was in an institution that actually thought teaching quite demanding. It is an answer, though it probably betrays a lack of sympathy: if you don't want students simply to look up the answer, don't simply look up the question.
{ "source": [ "https://mathoverflow.net/questions/41563", "https://mathoverflow.net", "https://mathoverflow.net/users/1465/" ] }
41,609
Any number less than 1 can be expressed in base g as $\sum _{k=1}^\infty {\frac {D_k}{g^k}}$, where $D_k$ is the value of the $k^{th}$ digit. If we were interested in only the non-zero digits of this number, we could equivalently express it as $\sum _{k=1}^\infty {\frac {C_k}{g^{Z(k)}}}$, where $Z(k)$ is the position of the $k^{th}$ non-zero digit base $g$ and $C_k$ is the value of that digit (i.e. $C_k = D_{Z(k)}$). Now, consider all the numbers of this form $(\sum _{k=1}^\infty {\frac {C_k}{g^{Z(k)}}})$ where the function $Z(k)$ eventually dominates any polynomial. Is there a proof that any number of this form is transcendental? So far, I have found a paper demonstrating this result for the case $g=2$; it can be found here: http://www.escholarship.org/uc/item/44t5s388?display=all
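To make the setup concrete (an illustration of mine, not part of the question): taking every nonzero digit to be $C_k=1$ and $Z(k)=k!$ gives the Liouville-type number $$\sum_{k=1}^\infty \frac{1}{g^{k!}},$$ which is transcendental by Liouville's classical theorem; here $Z(k)=k!$ certainly eventually dominates any polynomial. The question asks whether transcendence holds for every number whose nonzero digit positions grow this fast, with arbitrary digit values $C_k$.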
I don't know of a paper proving the result, but I can prove it for you now. In fact, the methods in the paper you link generalize to an arbitrary base $g\gt2$. The authors of the paper don't seem to think that it generalizes quite so easily, as in the Open Problems section they state that "For bases $b\gt2$ there is the problem of having more than two possible digits. What kinds of bounds might be placed on counts of 1's and 2's for ternary expansions of algebraic numbers?". Hopefully I have not made any major mistakes... [Edit: A paper by Bugeaud, On the b-ary expansion of an algebraic number , available from his homepage gives lower bounds on the number of nonzero digits in an irrational algebraic number. There, he references the paper linked in the question, saying "Apparently, their approach does not extend to a base $b$ with $b\ge3$". However, he has just responded to this question , agreeing that the method does indeed generalize. So I'm more confident about my proof now.] Use $\#(x,N)$ to denote the number of nonzero base-$g$ digits in the expansion of $x$, up to and including the $N$'th digit after the 'decimal' point, then what you are asking for is implied by the following. If $x$ is irrational and satisfies a rational polynomial of degree $D$ then $\#(x,N)\ge cN^{1/D}$ for a positive constant $c$ and all $N$. First, I'll introduce some notation similar to that used in the linked paper. Use $r_1(n)$ to denote the $n$'th base-$g$ digit of $x$, so that $0\le r_1(n)\le g-1$ and $$ x=\sum_nr_1(n)g^{-n}. $$ It's enough to consider $1\le x\lt2$, so I'll do that throughout. Then $r_1(n)=0$ for $n\lt0$ and $r_1(0)=1$. Also use $r_d(n)$ to denote $$ r_d(n)=\sum_{p_1+p_2+\cdots+p_d=n}r_1(p_1)r_1(p_2)\cdots r_1(p_d)=\sum_{j+k=n}r_1(j)r_{d-1}(k) $$ This satisfies the inequalities $r_d(n)\ge r_1(0)r_{d-1}(n)=r_{d-1}(n)$ and $$ \sum_{n\le N}r_d(n)\le(g-1)^d\#(x,N)^d\le(g-1)^d(N+1)^d.\qquad\qquad{\rm(1)} $$ Also, raising $x$ to the $d$'th power gives $$ x^d=\sum_nr_d(n)g^{-n}, $$ which differs from the base $g$ expansion of $x^d$ only because $r_d(n)$ can exceed $g$. We also introduce notation for the expansion of $x^d$ with the digits shifted to the left $R$ places and truncated to leave the fractional part, $$ T_d(R)=\sum_{n\ge1}r_d(R+n)g^{-n}, $$ so that $g^Rx^d-T_d(R)$ is an integer. This can also be bounded, using (1), $$ \begin{array}{rl} \displaystyle T_d(R)&\displaystyle\le\sum_{n\ge1}(g-1)^d(R+n+1)^dg^{-n}\\ &\displaystyle\le\sum_{n\ge1}(g-1)^d(R+1)^d(n+1)^dg^{-n}\\ &\displaystyle\le C_d(R+1)^d \end{array} $$ where $C_d=\sum_{n\ge1}(g-1)^d(n+1)^dg^{-d}$ is a constant independent of $R$. Now suppose that $x$ satisfies an integer polynomial of degree $D\gt1$, $$ A_Dx^D+A_{D-1}x^{D-1}+\cdots+A_1x+A_0=0 $$ with $A_D\gt0$. It follows that $$ T(R)\equiv\sum_{d=1}^D A_dT_d(R) $$ is an integer for each $R$. The following is similar to Theorem 3.1 in the linked paper. Lemma 1 : For all sufficiently large $N$, there exists $n\in(N/(D+1),N)$ with $r_1(n)\gt0$. Proof: This is a consequence of Liouville's theorem for rational approximation. If the statement was false then setting $m=\lfloor N/(D+1)\rfloor$, $p=\sum_{n=0}^mr_1(n)g^{-n}$, $q=g^{m}$ gives infinitely many approximations $\vert x-p/q\vert=q^{-D}o(1)$ as $N$ increases, contradicting Liouville's theorem. In Lemma 1, Roth's theorem could have been used to reduce the $D+1$ term to $2+\epsilon$. In fact, Ridout's theorem as discussed in the comments can be used to reduce it even further to $1+\epsilon$. 
This isn't needed here, so I just used the more elementary Liouville's theorem. Lemma 6.1 from the linked paper generalizes to base $b$, and puts upper bounds on the number of times at which $T(n)$ can be nonzero. Lemma 2 : For large enough $N$, setting $K=\lceil 2D\log_g N\rceil$ gives $$ \sum_{1\le R\le N-K}T_d(R) < (g-1)^{d-1}\#(x,N)^d+1 $$ for $1\le d\le D$ and so, $$ \sum_{1\le R\le N-K}\vert T(R)\vert\le\sum_{d=1}^D\vert A_d\vert ((g-1)^{d-1}\#(x,N)^D+1) $$ Proof: Using similar inequalities to the proof used in the linked paper, $$ \begin{array}{rl} \displaystyle\sum_{1\le R\le N-K}T_d(R) &\displaystyle=\sum_{m\ge1}g^{-m}\sum_{R\le N-K}r_d(R+m)\\ &\displaystyle\le\sum_{m=1}^Kg^{-m}\sum_{R\le N}r_d(R)+g^{-K}\sum_{m > K}g^{K-m}\sum_{R\le N-K}r_d(R+m)\\ &\displaystyle \le \frac{1}{g-1}\sum_{R\le N}r_d(R)+g^{-K}\sum_{K\le R\le N}T_d(R)\\ &\displaystyle\le(g-1)^{d-1}\#(x,N)^d+g^{-K}NC_d(N+1)^d. \end{array} $$ The final term is bounded by $C_d(N+1)^{d+1}/N^{2D}$ which will be less than 1 for $N$ large. Lemma 6.2 also generalizes, which gives blocks where $T(R)$ is nonzero. Lemma 3 : Let $R_0\lt R_1$ be positive integers with $r_{D-1}(R)=0$ for all $R\in(R_0,R_1]$ and $T(R_1)\gt0$. Then $T(R)\gt0$ for all $R\in[R_0,R_1]$. Proof: We have the following relation for $T$, $$ T(R-1)=\frac{1}{g}T(R)+\frac{1}{g}\sum_{d=1}^D A_dr_d(R). $$ As $r_d(n)\ge r_{d-1}(n)$, the hypothesis implies that $r_d(R)=0$ for all $1\le d\le D-1$ and $R\in(R_0,R_1]$. Therefore, $$ T(R-1)=\frac{1}{g}T(R)+\frac{1}{g}A_Dr_D(R)\ge \frac{1}{g}T(R). $$ Assuming inductively that $T(R)\gt0$ gives $T(R-1)\gt0$. Putting this together gives the result (Theorem 7.2 in the linked paper). Theorem 4 : There is a constant $c$ such that, for all sufficiently large $N$ $$ \#(x,N)>cN^{1/D} $$ Proof : Suppose not, then for any $\delta\gt0$, there are infinitely many $N$ with $\#(x,N)\lt\delta N^{1/D}$ and, using (1), $$ \sum_{n\le N}r_{D-1}(n)\le \delta N^{1-1/D}\qquad\qquad{\rm(2)} $$ In particular, the proportion of integers $R$ with $r_{D-1}(R)\gt0$ goes to $0$. Let $0=R_1\lt R_2\lt\cdots\lt R_M\le N$ be those integers in the range $[0,N]$ with $r_{D-1}(R_k)\gt0$ and set $R_{M+1}=N$. Then (2) gives $M+1\le\delta N^{1-1/D}$, and $r_d(R)=0$ for $d\le D-1$ and $R$ in any of the ranges $(R_i,R_{i+1})$. So, $T_d(R-1)=g^{R-R_{i+1}}T_d(R_{i+1}-1)$. Fixing $\epsilon\gt0$ and letting $I$ denote the numbers $i$ with $R_{i+1}-R_i\gt\epsilon N^{1/D}$ gives $$ \sum_{i\in I}(R_{i+1}-R_i)\ge N - (M+1)\epsilon N^{1/D}\ge N(1-\epsilon \delta). $$ So, the intervals $(R_i,R_{i+1})$ larger than $\epsilon N^{1/D}$ cover most of the interval $[0,N]$, as long as $\epsilon\delta$ is small enough. If $R$ is in the range $(R_i,R_{i+1}-D\log_gN)$ and $r_D(R)\gt0$ then $T(R-1)\gt0$: $$ T(R-1)\ge \frac{1}{g}A_D-g^{R-R_{i+1}}\sum_{d=1}^{D-1}\vert A_d\vert T_d(R_{i+1}-1) \ge\frac1g-N^{-D}\sum_{d=1}^{D-1}\vert A_d\vert C_d R_{i+1}^d $$ which is positive, so long as $N$ is chosen large enough. Assuming that $N$ is large enough, by Lemma 1, for each $i$ in $I$, there is $$ j\in\left(\frac{1}{D+1}(R_{i+1}-R_i-D\log_gN),R_{i+1}-R_i-D\log_gN\right) $$ with $r_1(j)\gt0$. Then, $r_D(R_i+j)\ge r_{D-1}(R_i)r_1(j)$ is positive, so $T(R_i+j-1)\gt0$. Lemma 3 implies that $T(R_i+j)$ is positive for all $0\le j\lt(R_{i+1}-R_i-D\log_gN)/(D+1)$. 
Since $T$ is integer valued, each of these values is at least $1$, so $$ \sum_{1\le n< N-2D\log_g N}\vert T(n)\vert\ge\frac{1}{D+1}\sum_{i\in I}(R_{i+1}-R_i-2D\log_gN) \ge\frac{N(1-\epsilon\delta)}{D+1}-2\delta N^{1-1/D}\log_gN $$ This contradicts Lemma 2, which gives, for $N$ large, $$ \sum_{1\le n< N-2D\log_g N}\vert T(n)\vert =O(\delta^D N). $$
{ "source": [ "https://mathoverflow.net/questions/41609", "https://mathoverflow.net", "https://mathoverflow.net/users/9712/" ] }
41,616
The classifying space $BG=|Nerve(G)|$ of an arbitrary topological group $G$ does not necessarily have the homotopy type of a CW-complex, but the fundamental group should still be accessible. What is $\pi_{1}(BG)$? A reference on this would be great. My initial guess: $\pi_{1}(BG)$ is the quotient group $\pi_{0}(G)$ for arbitrary $G$. Motivation: There is a natural way to make $\pi_1$ a functor to topological groups. I am interested in relating the topologies of $G$ and $\pi_{1}(BG)$, but the topology on $\pi_{1}(X)$ is boring (discrete) when $X$ is a CW-complex.
If $G$ is homeomorphic to a Cantor set (e.g. $G=\mathbb Z_p$), then $BG$ contains a copy of the Hawaiian earrings in it. To see this, take a sequence of points of $G$ that converges to the identity element: you'll get a corresponding sequence of 1-cells in $BG$ that converge to the degenerate 1-cell. The fundamental group of the Hawaiian earrings is a rather wild object, and looks nothing like the free group that you might naively expect. If, on the other hand, you agreed to redefine $BG$ to be the fat geometric realization of the simplicial space $NG$, then you would get $\pi_1(BG)\cong\pi_0(G)$, as desired. I would even bet that the above isomorphism respects the natural topologies that are present on both sides.
{ "source": [ "https://mathoverflow.net/questions/41616", "https://mathoverflow.net", "https://mathoverflow.net/users/5801/" ] }
41,719
I recently posted a short (6 page) note on arXiv, and have more or less decided that I should not submit it to a journal. I could have tacked it onto the end of a previous paper, but I thought it would be somewhat incongruous -- it is an interesting consequence of the key lemma unrelated to the main result. I really liked the concision of the paper and didn't want to spoil it. This brings to my mind several philosophical and/or ethical questions about the culture of publishing that I find interesting, particularly at this point in my life since I am nearing the point of seeking a permanent position. Two things are clear: It is to one's advantage, especially in the early career stage, to have many publications. It is also to one's advantage to have strong publications. So take a result that is not very difficult to prove, but is interesting mostly because I think it may be useful as a stepping stone to another as-yet-unknown result.
1. Given that I have already put it on arXiv, is "because I think it could be published" a good enough reason to publish it?
2. One good reason to submit a paper to a journal is to have it in the refereed scientific record. I was tempted to claim the result at the end of the previous paper without proof as a remark, but in the end thought better of making such a claim without giving a proof. How bad is it to make such a claim if the proof (or even the truth) is not obvious, but can be proven as a reasonably straightforward generalization of someone else's proof of a different result?
3. This is a more pragmatic and frank question. Should young researchers be careful to avoid the impression of splitting their work into MPUs (minimum publishable units)? In this instance my reasons for not including the result in another paper are somewhat complex and non-obvious.
4. As a counterpoint to 3, how should one balance the purity/linearity of the ideas/results in a paper, versus including as many related results as possible?
I realize that these are very soft and subjective questions, but I am interested to hear opinions on the matter, even if there is no universally true answer.
This is a question of interest to most mathematicians who are research active and not slowly but surely knocking off important problems in their field at the rate of one per paper. (I think I could have ended the previous sentence at the word "active" without much affecting the meaning!) I think the answer is ultimately quite personal: you are free to set your own standards as to how much of your work to publish. I myself understand the psychology both ways: on the one hand, math is usually long, hard work and when you finish off something, you want to record that accomplishment and receive some kind of "credit" for it. On the other hand, we want to display the best of what we have done, not the entirety. This position is well understood in the artistic and literary world: e.g. some authors spend years on works that they deem not ready to be released. Sometimes they literally destroy or throw away their work, and when they don't, their executors are faced with difficult ethical issues. (This is roaming off-topic, but I highly recommend Milan Kundera's book-length essay Testaments Betrayed , especially the part where he details the history of how after Kafka's death, his close friend Max Brod disobeyed Kafka's instructions and published a large amount of work that Kafka had specifically requested be destroyed. If Brod had done what he was told to do, the greater part of Kafka's Oeuvres -- e.g. The Trial, The Castle, Amerika -- would simply not exist to us. What does Kundera think of Brod's decision? He condemns it in the strongest possible terms!) Another consideration is that publication of work is an effort in and of itself, to the extent that I would not say that anyone has a duty to do so, even after releasing it in some preprint form, as on the arxiv. A substandard work can be especially hard to publish in a "reasonable" journal. I have a friend who wrote a short note outlining the beginnings of a possible approach to a famous conjecture. She has high standards as to which journals are "reasonable", and rather than compromise much on this she determinedly resubmitted her paper time after time. And it worked -- eventually it got published somewhere pretty good, but I think she had four rejections first. I myself would probably not have the fortitude to resubmit a paper time after time to journals of roughly similar quality. As you say, though, one advantage of formal publication is that the paper gets formal refereeing. Of course, the quality of this varies among journals, editors, referees and fields, but speaking as a number theorist / arithmetic geometer, most of my papers have gotten quite close readings (and required some revisions), to the extent that I have gained significant confidence in my work by going through this process. I have one paper -- my best paper, in fact! -- which I have rather mysteriously been unable to publish. It is nevertheless one of my most widely cited works, including by me (I have had little trouble publishing other, lesser papers which build on it), and it is a minor but nagging worry that a lot of people are using this work which has never received a referee's imprimatur. I will try again some day, but like I said, the battle takes something out of you. Finally, you ask about how it looks for your career, which is a perfectly reasonable question to ask. 
I think young mathematicians might get the wrong idea: informal mathematical culture spends a lot of time sniping at people who publish "too many papers", especially those which seem similar to each other or are of uneven quality. Some wag (Rota?) once said that every mathematician judges herself by her best paper and judges every other mathematician by dividing his worst paper by the total number of papers he has published. But of course this is silly: we say this at dinner and over drinks, for whatever reasons (I think sour grapes must be a large part of it), but I have heard much, much less of this kind of talk when it comes to hiring and promotion discussions. On the contrary, very good mathematicians who have too few papers often get in a bit of trouble. As long as you are not "self plagiarizing" -- i.e., publishing the same results over and over again without admission -- I say that keeping an eye on the Least Publishable Unit is reasonable. Note that most journals also like shorter papers and sometimes themselves recommend splitting of content. So, in summary, please do what you want! In your case, I see that you have on the order of ten other papers, so one more short paper which is in content not up there with your best work (I am going entirely on your description; I don't know enough about your area to judge the quality and haven't tried) is probably not going to make a big difference in your career. But it's not going to hurt it either: don't worry about that. So if in your heart you want this work to be published, go for it. If you can live without it, try that for a while and see how you feel later.
{ "source": [ "https://mathoverflow.net/questions/41719", "https://mathoverflow.net", "https://mathoverflow.net/users/4580/" ] }
41,784
Consider the equation $x^2=x_0$ in the symmetric group $S_n$ , where $x_0\in S_n$ is fixed. Is it true that for each integer $n\geq 0$ , the maximal number of solutions (the number of square roots of $x_0$ ) is attained when $x_0$ is the identity permutation? How far may it be generalized?
$\DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\SL}{SL}$ The maximum of the function counting square roots is attained at $x_0=1$ and this statement generalises quite well. Let $s(\chi)$ denote the Frobenius-Schur indicator of the irreducible character $\chi$ . For the definition, see the edit below. One has $s(\chi)=1$ if the representation of $\chi$ can be realised over $\mathbb{R}$ , $s(\chi)=-1$ if $\chi$ is real-valued but the corresponding representation is not realisable over $\mathbb{R}$ , and $s(\chi)=0$ if $\chi$ is not real-valued. Then, the number of square roots of an element $g$ in any group is equal to $$\sum_\chi s(\chi)\chi(g),$$ where the sum runs over all irreducible characters of the group. See below for a reference and a quick proof of this identity. It follows from the usual theory of representations of $S_n$ that in this special case all Frobenius-Schur indicators are $1$ , so the number of square roots of $x_0$ is just $\sum_\chi \chi(x_0)$ . This proves that the maximal number of solutions is indeed attained by $x_0 = 1$ , since each character value attains its maximum there. This generalises immediately to all groups for which every representation is either realisable over $\mathbb{R}$ or has non-real character, in other words has no symplectic (or sometimes called quaternionic) representations. That includes all abelian groups, all alternating groups, all dihedral groups, $\GL_n(\mathbb{F}_q)$ for all $n\in \mathbb{Z}_{\geq 1}$ and all prime powers $q$ (see [1, Ch. III, 12.6]), and many more. [1] A. Zelevinsky, Representations of Finite Classical Groups, Lecture Notes in Mathematics, Vol. 869, Springer-Verlag, New York/Berlin, 1981. Edit : One reference I have found for the identity expressing the number of square roots in terms of Frobenius-Schur indicators is Eugene Wigner, American Journal of Mathematics Vol. 63, No. 1 (Jan., 1941), pp. 57-63, "On representations of certain finite groups" . Once you get used to the notation, you will recognise it in displayed formula (11). Since the notation is really heavy going, I will supply a quick proof here: Claim: If $n(g)$ is the number of square roots of an element $g$ of a finite group $G$ , then we have $$n(g) = \sum_\chi s(\chi)\chi(g),$$ where the sum runs over all characters of $G$ , and $s(\chi)$ denotes the Frobenius-Schur indicator of $\chi$ , defined as $s(\chi)=\frac{1}{|G|}\sum_{h\in G}\chi(h^2)$ . Proof: It is clear that $n(g)$ is a class function, so it is a linear combination of the irreducible characters of $G$ . The coefficient of $\chi$ in this linear combination can be recovered as the inner products of $n$ with $\chi$ . We can write $n(g) = \sum_h \delta_{g,h^2}$ (here $\delta$ is the usual Kronecker delta), so we obtain $$ \begin{align*} \left< n,\chi \right> &= \frac{1}{|G|}\sum_{g\in G}n(g)\chi(g) = \frac{1}{|G|}\sum_{g\in G}\sum_{h\in G}\delta_{g,h^2}\chi(g)=\\\\ &=\frac{1}{|G|}\sum_{h\in G}\sum_{g\in G}\delta_{g,h^2}\chi(g) = \frac{1}{|G|}\sum_{h\in G}\chi(h^2), \end{align*}$$ as claimed. Edit 2 : I got curious and ran a little experiment. The proof above applies to all finite groups that have no symplectic representations. So the natural question is: what happens for those that do? Among the groups of size $\leq 150$ , there are 1911 groups that have a symplectic representation, and for 1675 of them, the square root counting function does not attain its maximum at the identity! There are several curious questions that suggest themselves: is there a similar (representation-theoretic?) 
2-line criterion that singles out those 300-odd groups that satisfy the conclusion but not the assumptions of the above proof? What happens for the others? Can we find a complete characterisation of the groups whose square root counting function is maximised by the identity? Following Pete's suggestion, I have started two follow-up questions on this business: one on square roots and one on $n$-th roots.
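To see the failure mode concretely, here is a brute-force check (a Python sketch of mine, not part of the answer) on the smallest group with a symplectic irreducible representation, the quaternion group $Q_8$; there the square-root count is maximised at $-1$ rather than at the identity, consistent with the experiment described above.

```python
# Encode quaternions a + bi + cj + dk as integer 4-tuples (a, b, c, d).
def qmul(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# The quaternion group Q8 = {±1, ±i, ±j, ±k}.
Q8 = [tuple(s * e for e in unit)
      for unit in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
      for s in (1, -1)]

# Count square roots of each element by brute force.
roots = {g: 0 for g in Q8}
for h in Q8:
    roots[qmul(h, h)] += 1

print(roots[(1, 0, 0, 0)])    # 2 square roots of the identity (namely ±1)
print(roots[(-1, 0, 0, 0)])   # 6 square roots of -1 (namely ±i, ±j, ±k)
```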
{ "source": [ "https://mathoverflow.net/questions/41784", "https://mathoverflow.net", "https://mathoverflow.net/users/4312/" ] }
41,836
Nakayama's lemma is as follows: Let $A$ be a ring, and $\frak{a}$ an ideal such that $\frak{a}$ is contained in every maximal ideal. Let $M$ be a finitely generated $A$-module. Then if $\frak{a}M=M$, we have that $M = 0$. Most proofs of this result that I've seen in books use some non-trivial linear algebra results (like Cramer's rule), and I had come to believe that these were certainly necessary. However, in Lang's Algebraic Number Theory book, I came across a quick proof using only the definitions and induction. I felt initially like something must be wrong--I thought perhaps the proof is simpler because Lang is assuming throughout that all rings are integral domains, but he doesn't use this in the proof he gives, as far as I can see. Here is the proof, verbatim: We do induction on the number of generators of $M$. Say $M$ is generated by $w_1, \cdots, w_m$. There exists an expression $$w_1 = a_1w_1 + \cdots + a_mw_m$$ with $a_i \in \frak{a}$. Hence $$(1-a_1)w_1 = a_2w_2 + \cdots +a_mw_m$$ If $(1-a_1)$ is not a unit in $A$, then it is contained in a maximal ideal $\frak{p}$. Since $a_1 \in \frak{p}$ by hypothesis, we have a contradiction. Hence $1-a_1$ is a unit, and dividing by it shows that $M$ can be generated by $m-1$ elements, thereby concluding the proof. Is the fact that $A$ is assumed to be a domain being smuggled in here in some way that I missed? Or is this really an elementary proof of Nakayama's lemma, in full generality?
There are various forms of the Nakayama lemma. Here is a rather general one; note that it does not involve maximal ideals and is a constructive theorem (Atiyah-MacDonald, Commutative Algebra, Prop. 2.4 ff). Let $M$ be a finitely generated $A$-module, $\mathfrak{a} \subseteq A$ be an ideal and $\phi \in End_A(M)$ such that $\phi(M) \subseteq \mathfrak{a} M$. Then there is an equation of the form $\phi^n + r_1 \phi^{n-1} + ... + r_n = 0$, where the $r_i$ are in $\mathfrak{a}$. The proof uses the equality $\mathrm{adj}(X) \cdot X = \det(X) \cdot I$ for square matrices over a ring. I call this an elementary linear algebra fact. Of course, there you only prove it for fields, but using function fields implies the result for general rings. If we take $\phi=\text{id}_M$, we get the following form: Let $M$ be a finitely generated $A$-module and let $\mathfrak{a} \subseteq A$ be an ideal such that $\mathfrak{a} M = M$. Then there exists some $r \in A$ such that $rM = 0$ and $r \equiv 1$ mod $\mathfrak{a}$. In particular, we get: Let $M$ be a finitely generated $A$-module and let $\mathfrak{a} \subseteq A$ be an ideal such that $\mathfrak{a} M = M$ and $\mathfrak{a}$ lies in every maximal ideal of $A$. Then $M=0$. Observe that this argument uses Zorn's lemma (namely that every non-unit is contained in a maximal ideal) and is thus nonconstructive. This is of course not surprising, since without Zorn's lemma it is consistent that there are nontrivial rings without any maximal ideals at all. This should convince you that the first form of the Nakayama lemma is the easiest and most elementary one. The last form has another short proof, which is standard and given in the question above. Here is another short well-known proof for the last form, which also works if $A$ is noncommutative (then we have to replace "maximal ideal" by "maximal left ideal"): Assume $M \neq 0$. Since $M$ is finitely generated, an application of Zorn's lemma shows that $M$ has a maximal proper submodule $N$. Then $M/N$ is simple, thus isomorphic to $A/\mathfrak{m}$ for some maximal left ideal $\mathfrak{m}$. Then $\mathfrak{m} M \subseteq N$ and $M = \mathfrak{a} M \subseteq \mathfrak{m} M$, so $N = M$, a contradiction. By the way, I don't know if the first form is true if $A$ is noncommutative. The theory of determinants is not really well developed over noncommutative rings. Hints? In many texts about algebraic geometry only the last form of the Nakayama lemma is needed. But the first one is stronger and is used in many results in commutative algebra.
{ "source": [ "https://mathoverflow.net/questions/41836", "https://mathoverflow.net", "https://mathoverflow.net/users/9960/" ] }
41,939
A box contains n balls coloured 1 to n. Each time, you pick two balls from the box, a first ball and a second ball, both uniformly at random, and you paint the second ball with the colour of the first. Then you put both balls back into the box. What is the expected number of times this needs to be done so that all balls in the box have the same colour? Answer (spoiler, put through rot13.com): Gur fdhner bs gur dhnagvgl gung vf bar yrff guna a. Someone asked me this puzzle some four years back. I thought about it on and off but have not been able to solve it. I was told the answer, though, and I suspect there may be an elegant solution. Thanks.
It can probably be done by looking at the sum of squares of sizes of color clusters and then constructing an appropriate martingale. But here's a somewhat elegant solution: reverse the time! Let's reformulate the question as follows. Let $F$ be the set of functions from $\{1,\ldots,n\}$ to $\{1,\ldots,n\}$ that are almost the identity, i.e., $f(i)=i$ except for a single value $j$. Then if $f_t$ is a sequence of functions drawn i.i.d. uniformly from $F$, and $$g_t=f_1 \circ f_2 \circ \ldots \circ f_t$$ then you can define $\tau= \min \{ t \mid g_t \text{ is constant}\}$. The question is then to calculate $\mathbb{E}(\tau)$. Now, one can also define the sequence $$h_t=f_t \circ f_{t-1} \circ \ldots \circ f_1$$ That is, the difference is that while $g_{t+1}=g_t \circ f_{t+1}$, here we have $h_{t+1}=f_{t+1} \circ h_t$. This is the time reversal of the original process. Obviously, $h_t$ and $g_t$ have the same distribution so $$\mathbb{P}(h_t \text{ is constant})=\mathbb{P}(g_t \text{ is constant})$$ and so if we define $\sigma=\min \{ t \mid h_t \text{ is constant}\}$ then $\sigma$ and $\tau$ have the same distribution and in particular the same expectation. Now calculating the expectation of $\sigma$ is straightforward: if the range of $h_t$ has $k$ distinct values, then with probability $k(k-1)/n(n-1)$ this number decreases by 1 and otherwise it stays the same. Hence $\sigma$ is the sum of geometric distributions with parameters $k(k-1)/n(n-1)$ and its expectation is $$\mathbb{E}(\sigma)=\sum_{k=2}^n \frac{n(n-1)}{k(k-1)}= n(n-1)\sum_{k=2}^n \left(\frac1{k-1} - \frac1{k}\right) = n(n-1)\left(1-\frac1{n}\right) = (n-1)^2 .$$
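For a quick numerical sanity check of the closed form $(n-1)^2$, here is a Monte Carlo sketch (mine, not part of the answer); it picks two distinct balls per step, matching the model analysed above.

```python
import random

def one_run(n, rng):
    """Repaint until all balls share a colour; return the number of steps."""
    colours = list(range(n))
    steps = 0
    while len(set(colours)) > 1:
        i, j = rng.sample(range(n), 2)   # distinct first and second ball
        colours[j] = colours[i]          # paint the second with the first's colour
        steps += 1
    return steps

def estimate(n, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(one_run(n, rng) for _ in range(trials)) / trials

for n in (2, 3, 5, 8):
    print(n, round(estimate(n), 2), (n - 1) ** 2)   # estimates should be close to (n-1)^2
```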
{ "source": [ "https://mathoverflow.net/questions/41939", "https://mathoverflow.net", "https://mathoverflow.net/users/7576/" ] }
41,978
I was wondering if anyone could offer some intuition for why Alexander duality holds. Of course, the proof is easy enough to check, and it is also easy to work out many examples by hand. However, I don't have any feeling for why it is true. To give you an example of what I am looking for, when I think of Poincare duality I think of the picture in terms of triangulations and dual triangulations. Is there any picture like that for Alexander duality? Is there at least maybe some kind of obvious bilinear pairing between the two sides of it or something?
Let $M$ be a closed orientable $n$-manifold containing the compact set $X$. Given an $(n-q-1)$-cocycle on $X$ (I am choosing this degree just to match with the notation of the Wikipedia article to which you linked), we extend it to some small open neighbourhood $U$ of $X$. By Lefschetz--Poincare duality on the open manifold $U$, we can convert this $(n-q-1)$-cocycle into a Borel--Moore cycle (i.e. a locally-finite cycle made up of infinitely many simplices) on $U$ of degree $q+1$. Throwing away those simplices lying in $U \setminus X$, we obtain a usual (i.e. finitely supported) cycle giving a class in $H_{q+1}(U,U\setminus X) = H_{q+1}(M,M\setminus X)$ (the isomorphism holding via excision). Alexander duality for an arbitrary manifold then states that the map $H^{n-q-1}(X) \to H_{q+1}(M,M \setminus X)$ is an isomorphism. (If $X$ is very pathological, then we should be careful in how we define the left-hand side, to be sure that every cochain actually extends to some neighbourhood of $X$.) Now if $M = S^{n+1}$, then $H^i(S^{n+1})$ is almost always zero, and so we may use the boundary map for the long exact sequence of a pair to identify $H_{q+1}(S^{n+1}, S^{n+1}\setminus X)$ with $H_{q}(S^{n+1}\setminus X)$ modulo worrying about reduced vs. usual homology/cohomology (to deal with the fact that $H^i(S^{n+1})$ is non-zero at the extremal points $i = 0$ or $n$). So, in short: we take a cocycle on $X$, expand it slightly to a cocycle on $U$, represent this by a Borel--Moore cycle of the appropriate degree, throw away those simplices lying entirely outside $X$, so that it is now a chain with boundary lying outside $X$, and finally take this boundary, which is now a cycle in $S^{n+1} \setminus X$. (I found these notes of Jesper Moller helpful in understanding the general structure of Alexander duality.) One last thing: it might help to think this through in the case of a circle embedded in $S^2$. We should thicken the circle up slightly to an embedded strip. If we then take our cohomology class to be the generator of $H^1(S^1)$, the corresponding Borel--Moore cycle is just a longitudinal ray of the strip (i.e. if the strip is $S^1 \times I$, where $I$ is an open interval, then the Borel--Moore cycle is just $\{\text{point}\} \times I$). If we cut $I$ down to a closed subinterval $I'$ and then take its boundary, we get a pair of points, which you can see intuitively will lie one in each of the components of the complement of the $S^1$ in $S^2$. More rigorously, Alexander duality will show that these two points generate the reduced $H^0$ of the complement of the $S^1$, and this is how Alexander duality proves the Jordan curve theorem. Hopefully the above sketch supplies some geometric intuition to this argument.
{ "source": [ "https://mathoverflow.net/questions/41978", "https://mathoverflow.net", "https://mathoverflow.net/users/10019/" ] }
41,979
$\newcommand{\Spec}{\mathrm{Spec}\,}$Let $X=\Spec A$ be a variety over $k$, then we have the definition of the tangent bundle $\hom_k(\Spec k[\varepsilon]/(\varepsilon^2),X)$ (note that this has the structure of a variety). On the other hand, we have the definition of a tangent sheaf $\hom_{\mathscr{O}_X}(\Omega_{X/k},\mathscr{O}_X)$. What is the relationship between the two? Also, when $X$ is an arbitrary scheme (not necessarily affine), then does the relationship still hold?
You can always apply the "vector bundle" construction to $\Omega:=\Omega_{X/k}$ (locally free or not). What you get is a scheme $T=\mathrm{Spec\ Sym}(\Omega)\to X$ which deserves to be called "tangent bundle" (albeit not locally trivial); in particular its $k$-points are what you want and, more generally, if, say, $Z=\mathrm{Spec}\ C$ is an affine $k$-scheme, then $T(C)$ is just $\mathrm{Hom}_k (\mathrm{Spec}\ C[\varepsilon]/\varepsilon^2, X)$. On the other hand consider the ${\cal O}_X$-dual ${\cal T}:={\Omega}^\vee$. For every $X$-scheme $y:Y\to X$ there is a canonical map $\Gamma(Y,y^*\mathcal{T})\to \mathrm{Hom}_X(Y,T)$. If $Y$ is an open subset of $X$ this is bijective. But if $y$ is a point of $X$, then the LHS is $\Omega^\vee \otimes \kappa(y)$ while the RHS is the $\kappa(y)$-dual of $\Omega\otimes \kappa(y)$. Clearly the image consists of those tangent vectors at $y$ which extend to vector fields in a neighbourhood. The computation when $X$ is the union of the two axes in the plane is a good exercise; if $y$ is the origin the above map is zero. [EDIT] After seeing Unknown's answer (BTW, there are some problems with TeX there): The above argument shows that the "tangent bundle" is always a scheme, if you define it right. Another way of seeing this is that it's just an instance of Weil restriction: if $R$ is a finite-dimensional $k$-algebra you can define the functor $\underline{\mathrm{Hom}}_k (\mathrm{Spec}(R),X)$ in a similar way. This is always an algebraic space, and it is a scheme if $X$ is quasiprojective. But it is also a scheme if $R$ is local, which is the case here with $R=D_1(k)$. The reason is that if you cover $X$ by affines $U_i$, every morphism from a local scheme to $X$ factors through one of the $U_i$'s, so we can construct the Weil restriction of $X$ by gluing those of the $U_i$'s.
{ "source": [ "https://mathoverflow.net/questions/41979", "https://mathoverflow.net", "https://mathoverflow.net/users/9035/" ] }
42,127
I learned Bezout's Theorem in class, stated for plane curves (if irreducible, the sum of intersection multiplicities equals the product of degrees). What is the proper general statement, for projective varieties of degree n? I think it is something like: If finite, the sum of multiplicities equals the product of degrees; otherwise, the (dimension? degree? sums over irreducible components?) of the intersection is less than or equal to the difference in degrees. Help is appreciated!
Dear unknown, the most straightforward generalization of Bézout's theorem might be the following. Consider $\mathbb P^n$, projective space over the field $k$, and $n$ hypersurfaces $H_1,...,H_n$ in general position in the sense that their intersection is a finite set. Then, calling $h_i$ their local equations, Bézout says $$\sum_{P_i} \dim_k \mathcal O_{\mathbb P^n,P_i}/(h_1,...,h_n) =\prod_{i=1}^n \deg (H_i)$$ The dimension on the left hand side is, of course, to be interpreted as the multiplicity with which to count the point $P_i$, seen as a fat point i.e. a zero-dimensional non-reduced scheme. A related, more abstract point of view is the description of the Chow ring of $\mathbb P^n$ as $CH^\ast (\mathbb P^n)=\mathbb Z[x]/(x^{n+1})$ (where $x$ is the class of a hyperplane in $\mathbb P^n$). From this point of view we have the following version of Bézout. Consider $r$ cycles $\alpha_1,...,\alpha_r$ on $\mathbb P^n$ with $\alpha_i \in CH^{d_i}(\mathbb P^n)$ and $d_1+\cdots+d_r \leq n$. Then $$\deg \prod {\alpha_i} =\prod \deg (\alpha_i),$$ the product of the $\alpha_i$'s on the left being calculated in the Chow ring and the degree $\deg (\alpha)$ of a cycle $\alpha \in CH^d (\mathbb P^n)$ being the integer $t$ such that $\alpha =t\, x^d \in CH^d(\mathbb P^n)=\mathbb Z\, x^d$. This is only the tip of the iceberg: a definitive answer would require a book. Fortunately that book exists and has been written, to our eternal gratitude, by Fulton: Intersection theory, volume 2 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Springer-Verlag, Berlin, second edition, 1998.
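As a quick sanity check of the Chow ring formulation against the classical plane statement (this example is mine, not part of the answer): two curves $C_1, C_2 \subset \mathbb P^2$ of degrees $d$ and $e$ meeting in finitely many points have classes $d\,x$ and $e\,x$ in $CH^1(\mathbb P^2)=\mathbb Z\, x$, so $$\deg(d\,x \cdot e\,x)=\deg(de\,x^2)=de,$$ recovering the classical count of $de$ intersection points counted with multiplicity.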
{ "source": [ "https://mathoverflow.net/questions/42127", "https://mathoverflow.net", "https://mathoverflow.net/users/10054/" ] }
42,186
There are many equivalent ways of defining the notion of compact space, but some require some kind of choice principle to prove their equivalence. For example, a classical result is that for $X$ to be compact, it is necessary and sufficient that every ultrafilter on $X$ converge to a point in $X$. The necessity is easy to prove, but the sufficiency requires a choice principle to the effect that every filter can be extended to an ultrafilter. Some years ago I heard from a very good categorical topologist that many, perhaps most of the useful properties of compact spaces $X$ readily flow from the fact that for every space $Y$, the projection map $\pi: X \times Y \to Y$ is closed. Of course that is a very classical consequence of compactness which can be left as an exercise to beginners in topology, and I was struck by the topologist's assertion that you could in fact use this as a definition of compactness, and that this is a very good definition for doing categorical topology. (I am still not sure what he really meant by this, but that's not my question.) My own proof that this condition implies compactness goes as follows. Let $Y$ be the space of ultrafilters on the set $X$ with its usual compact Hausdorff topology, and suppose the projection $\pi: X \times Y \to Y$ is a closed map. Let $R \subseteq X \times Y$ be the set of pairs $(x, U)$ where the ultrafilter $U$ converges to the point $x$. One may show that $R$ is a closed subset, so the image $\pi(R)$ is closed in $Y$. But every principal ultrafilter (one generated by a point) converges to the point that generates it, so every principal ultrafilter belongs to $\pi(R)$. Now principal ultrafilters are dense in the space of all ultrafilters, so $\pi(R)$ is both closed and dense, and therefore is all of $Y$. This is the same as saying that every ultrafilter on $X$ converges to some point of $X$, and therefore $X$ is compact. I was at first happy with this proof, but later began to wonder if it's overkill. Certainly it uses heavily the choice principle mentioned above, and my question is whether the implication I just proved above really requires some form of choice like that.
Martin Escardó wrote a very nice note "Intersections of compactly many open sets are open" which you might want to read.
{ "source": [ "https://mathoverflow.net/questions/42186", "https://mathoverflow.net", "https://mathoverflow.net/users/2926/" ] }
42,215
The classic example of a non-measurable set is described by wikipedia . However, this particular construction is reliant on the axiom of choice; in order to choose representatives of $\mathbb{R} /\mathbb{Q}$. "Since each element intersects [0,1], we can use the axiom of choice to choose a set containing exactly one representative out of each element of R / Q." Is it possible to construct a non-measurable set (in $\mathbb{R}$ for example) without requiring the A.o.C.?
In the 1960's, Bob Solovay constructed a model of ZF + the axiom of dependent choice (DC) + "all sets of reals are Lebesgue measurable." DC is a weak form of choice, sufficient for developing the "non-pathological" parts of real analysis, for example the countable additivity of Lebesgue measure (which is not provable in ZF alone). Solovay's construction begins by assuming that there is a model of ZFC in which there is an inaccessible cardinal. Later, Saharon Shelah showed that the inaccessible cardinal is really needed for this result.
{ "source": [ "https://mathoverflow.net/questions/42215", "https://mathoverflow.net", "https://mathoverflow.net/users/3121/" ] }
42,234
It is well known that any compact smooth $m$-manifold can be obtained from the $m$-ball by gluing some points on the boundary. Is it still true for topological manifolds? Comments: To prove the smooth case, fix a Riemannian metric and consider the exponential map up to the cut locus. The question was asked by D. Burago. I made a bet that a complete answer will be given in an hour — please help :)
The answer is yes -- Morton Brown's mapping theorem says that for every closed (connected) topological $n$-manifold $M$ there is a continuous map $f$ from the $n$-cube $I^n$ onto $M$ which is injective on the interior of the cube (for manifolds with boundary, see Remark 1 below). This was proved in the early sixties and can be found in M.Brown, "A mapping theorem for untriangulated manifolds," Topology of 3-manifolds, pp.92-94, M.K.Fort, Jr.(Editor), Prentice-Hall, Englewood Cliffs, N.J.,1962. MR0158374

Main idea of the proof is simple: "Use local PL structures to expand a small $n$-cell in $M$ gradually, until it becomes the whole manifold." This can be realized as the "infinite composition" of engulfings of finitely many points at a time. This argument is not too difficult, so I will try to sketch it.

First consider a closed $n$-cell $C$ in $M$ and a finite set $X=\{x_1,\ldots,x_k\}$ of points of $M$ disjoint from $C$ which we want to engulf. Assume that each $x_i$ lies in some open $n$-cell $U_i$ which intersects $C$. For each $i$, fix a PL structure on $U_i$, and join $x_i$ and some point $y_i\in \partial C$ by a PL arc $\alpha_i\subset U_i$ relative to this structure. If we assume $\dim M\geq 3$ (this is not a restriction because the theorem is clear for $\dim M=1$, and easily follows from the surface classification for $\dim M=2$), we may require that the $\alpha_i$'s should be disjoint. The regular neighborhood $Q_i$ of $\alpha_i$ within $U_i$ is a closed $n$-cell, and the $Q_i$'s can be made disjoint. Let $h$ be a homeomorphism of $M$ pushing $y_i$ towards $x_i$ within $Q_i$ for each $i$ which is the identity outside the $Q_i$'s. Then $X\subset h(C)$, and we can apply this process again to $h(C)$ and another finite set $X'\subset M\setminus h(C)$. Repeating this process we obtain a sequence of engulfing homeomorphisms $h_1, h_2,\ldots$. We can arrange that the uniform limit $f\colon M\to M$ of the composition of engulfing homeomorphisms $f_n=h_n\circ\cdots\circ h_1$ exists and $f(C)=M$. If we choose sufficiently small $Q_i$'s in each stage, one can make sure that $f$ is injective on the interior of $C$ (indeed, we can arrange that for each interior point $x$ there is $n$ such that $f_n(x)=f(x)$).

Remark 1: In Brown's paper, one can find a corollary that "If $M$ is a compact connected manifold with nonempty boundary $B$, then there is a surjection $f\colon B\times [0,1]\to M$ that restricts to the identity on $B\times 0$ and is injective on $B\times[0,1)$". Notice also that the above theorem can be applied to the closed manifold $B$. Then it follows that any compact topological $n$-manifold (possibly with boundary) can be obtained by identifying some points in the boundary of the $n$-ball.

Remark 2: Berlanga's theorem extended Brown's theorem to noncompact manifolds: This theorem states that for every (connected) $\sigma$-compact $n$-manifold $M$, there is a nice kind of surjection $I^n\to \overline{M}$ similar to Brown's map, where $\overline{M}$ denotes the end compactification of $M$.
{ "source": [ "https://mathoverflow.net/questions/42234", "https://mathoverflow.net", "https://mathoverflow.net/users/1441/" ] }
42,310
I have never studied any measure theory, so I apologise in advance if my question is easy: Let $X$ be a measure space. How can I decide whether $L^2(X)$ is separable? In reality, I am interested in Borel sets on a locally compact space $X$. I can also assume that the support of the measure is $X$, if it helps... I cannot even decide at the moment for which locally compact groups $G$ with Haar measure, $L^2(G)$ is separable...
Without loss of generality we can assume that the support of the measure equals $X$ (i.e., the measure is faithful), because we can always pass to the subspace defined by the support of the measure. The space $L^2(X)$ is independent of the choice of a faithful measure and depends only on the underlying enhanced measurable space of $X$, i.e., measurable and negligible subsets of $X$.

There is a complete classification of measurable spaces up to isomorphism. Every measurable space canonically splits as a disjoint union of its ergodic subspaces, i.e., measurable spaces that do not admit measures invariant under all automorphisms. Ergodic measurable spaces in their turn can be characterized using two cardinal invariants $(m,n)$, where either $m=0$ or both $m≥ℵ_0$ and $n≥ℵ_0$. The measurable space represented by $(m,n)$ is the disjoint union of $n$ copies of $2^m$, where $2=\{0,1\}$ is a measurable space consisting of two atoms and $2^m$ denotes the product of $m$ copies of 2. The case $m=0$ gives atomic measurable spaces (disjoint unions of points), whereas $m=ℵ_0$ gives disjoint unions of real lines (alias standard Borel spaces). Thus isomorphism classes of measurable spaces are in bijection with functions M: Card'→Card, where Card denotes the class of cardinals and Card' denotes the subclass of Card consisting of infinite cardinals and 0. Additionally, if $m>0$, then $M(m)$ must belong to Card'.

The Banach space $L^p(X)$ ($1≤p<∞$) is separable if and only if $M(0)$ and $M(ℵ_0)$ are at most countable and $M(m)=0$ for other $m$. Thus there are two families of measurable spaces whose $L^p$-spaces are separable:

Finite or countable disjoint unions of points;

The disjoint union of the above and the standard Borel space.

Equivalent reformulations of the above condition assuming $M(m)=0$ for $m>ℵ_0$:

$L^p(X)$ is separable if and only if $X$ admits a faithful finite measure.

$L^p(X)$ is separable if and only if $X$ admits a faithful $σ$-finite measure.

$L^p(X)$ is separable if and only if every (semifinite) measure on $X$ is $σ$-finite.

The underlying measurable space of a locally compact group $G$ satisfies the above conditions if and only if $G$ is second countable as a topological space. The underlying measurable space of a paracompact Hausdorff smooth manifold $M$ satisfies the above conditions if and only if $M$ is second countable, i.e., the number of its connected components is finite or countable.

More information on this subject can be found in this answer: Is there an introduction to probability theory from a structuralist/categorical perspective?

Bruckner, Bruckner, and Thomson discuss separability of $L^p$-spaces in Section 13.4 of their textbook Real Analysis: http://classicalrealanalysis.info/documents/BBT-AlllChapters-Landscape.pdf
{ "source": [ "https://mathoverflow.net/questions/42310", "https://mathoverflow.net", "https://mathoverflow.net/users/5301/" ] }
42,329
It's clear that the axiom of replacement can be used to construct very large sets, such as $$ \bigcup_{i=0}^\infty P^i N, $$ where $N$ is the natural numbers. I assume that it can be used to construct sets much lower in the Zermelo hierarchy, such as sets of natural numbers, but I don't know of an example. Is there an easy example? (Just to be clear, I mean an example that requires the use of replacement, not just one where you could use replacement if you wanted to.) I would guess you can cook up an example using Borel determinacy, since that involves games of length $\omega$, but it would be great if there was an even more direct example. Also, I'd be curious to know for any such examples at what stage they first come along in the constructible universe. $\omega + 1$? The first Church-Kleene ordinal? Some other ordinal I've never heard of?
{ "source": [ "https://mathoverflow.net/questions/42329", "https://mathoverflow.net", "https://mathoverflow.net/users/3711/" ] }
42,331
Every symplectic form on a manifold $M^n$ determines a De Rham cohomology class in $H^2(M)$ (often a nontrivial class), and this in turn determines a class in $H_{n-2}(M)$. What in general can be said about this class? For example, over the rationals this class is represented by a submanifold of $M$; is it possible to explicitly describe such a submanifold in terms of the symplectic structure? If there is a nice answer to this question, does it also shed light on the Poincare duals of $\omega^2$, $\omega^3$, etc?
One of the big advances in symplectic topology in the 90s was Donaldson's theorem that when the symplectic class is integral, high multiples of its dual are represented by symplectic submanifolds. These submanifolds behave like hyperplane sections in algebraic geometry; for instance, they satisfy the Lefschetz hyperplane theorem. They form the fibres of "symplectic Lefschetz pencils". Their intersections can be made to give symplectic submanifolds dual to multiples of wedge-powers of $\omega$. Imagine first that $M$ is a compact complex manifold, $L\to M$ a hermitian, holomorphic line bundle, whose Chern connection has curvature $-2\pi i\omega$, a closed $(1,1)$-form. Then the zero-set of a $C^\infty$ section $s$ of $L^{\times k}$, if cut out transversely, is dual to $k[\omega]$. If $\omega$ is positive, the Kodaira embedding theorem then tells us that $L$ is ample: its high powers have enough holomorphic sections to embed $M$ into projective space. If $M$ is merely symplectic, with $-2\pi i\omega$ the curvature of some unitary connection in a hermitian line bundle, we can choose an almost complex structure $J$ on $M$ and consider transverse sections $s_k$ of $L^{\otimes k}$ for which, asymptotically, the $(0,1)$-part of $\nabla s_k$ along $s_k^{-1}(0)$ is much smaller than the $(1,0)$-part. Then $s_k^{-1}(0)$ will not quite be a $J$-holomorphic submanifold, but for $k \gg 0$ its tangent spaces will be close enough to being $J$-linear that it will still be a symplectic submanifold. References: S. K. Donaldson, "Symplectic submanifolds and almost-complex geometry", J. Differential Geom. Volume 44, Number 4 (1996), 666-705; "Lefschetz pencils on symplectic manifolds", J. Differential Geom. Volume 53, Number 2 (1999), 205-236. These papers are brilliant both geometrically and analytically: the analysis is mostly low-tech but extremely subtle.
{ "source": [ "https://mathoverflow.net/questions/42331", "https://mathoverflow.net", "https://mathoverflow.net/users/4362/" ] }
42,406
It is quite clear why certain differential equations, among the jungle of possible differential equations one could conceive, are studied: some come from physical problems, or from "spontaneous" mathematical generalizations thereof, others come from geometry in a variety of ways. For diophantine equations there seems not to be such a direct link to other areas. I would like to roughly understand why the attention of number theorists concentrates on some kinds of diophantine equations and not on others. Why is an equation such as $x^2-ny^2=1$ or $x^3+y^3=z^3$ considered (or why has it been considered) worth studying, and not, say, any other random variant such as (if that specific example is not nontrivial enough for you or if it actually happens to have been studied, feel free to substitute it with your favourite "random" diophantine equation): $x^3+y^5=z^2$ ? So: Are there any reasons why certain diophantine equations are worth attention besides the mere approachability (i.e. being neither trivial nor hopelessly difficult to analyze)?
$x^2 - ny^2 = 1$ is interesting for at least two reasons: on the one hand, $x^2 - ny^2$ is a norm from the quadratic field, so the equation has to do with the rather natural question of studying units in real quadratic fields. On the other hand (or, really, on a different finger of the same hand) it is just what you want to study if you are interested in rational approximations to square roots of integers, which in some sense are the "simplest" irrational numbers and thus the first context in which you might think about approximating irrationals by rationals. Similarly, the Fermat and generalized Fermat equations are quite natural in the following sense: there is a long history of studying the interplay between addition and multiplication in integers, and in particular the additive relations between multiplicatively defined sets (primes, perfect powers, etc.) In this context it makes sense to think about $x^n + y^n = z^n$ and things like the Goldbach conjecture. What makes the former more natural? In some sense, it is natural because there's an approach to it! It turns out that the equation $x^n + y^n = z^n$ is intimately related to the geometry of $P^1$ - three points (in some sense the algebraic curve on which all others are based) and to the closely related object X(1), the moduli space of elliptic curves. There is no hard and fast rule for "which Diophantine questions are interesting" -- but in general it is not so far off to say that the ones which are interesting are the ones where we have at least some idea how to attack them, because the reason we have some idea how to attack them is typically because they're connected to some other mathematical objects of interest.
{ "source": [ "https://mathoverflow.net/questions/42406", "https://mathoverflow.net", "https://mathoverflow.net/users/4721/" ] }
42,449
How would one approach proving that every real number is a zero of some power series with rational coefficients? I suspect that it is true, but there may exist some zero of a non-analytic function that is not a zero of any analytic function. I was thinking about approaching the problem using arguments of cardinality, but I am unsure about how to begin. Thank you in advance.
Call your real number $\alpha$. Suppose you have found a polynomial $p$ of degree $n-1$ with rational coefficients such that $|p(\alpha)|\lt\epsilon$. Show you can find a rational $r$ such that $|p(\alpha)-r\alpha^n|\lt\epsilon/2$.
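(Editorial addition, not part of the original hint.) A rough numerical sketch of the resulting greedy construction, in Python; the precision choices and the use of floating point are ad hoc assumptions, just to watch the mechanism:

```python
from fractions import Fraction
import math

alpha = math.pi            # any nonzero real; a floating-point stand-in here
coeffs = [Fraction(1)]     # start with the constant polynomial p(x) = 1, so |p(alpha)| = 1
value = 1.0                # current value p(alpha) of the partial sum

for n in range(1, 25):
    # pick a rational r so close to value/alpha^n that |value - r*alpha^n| is far below 2**(-n)
    prec = 2**(2 * n + 40)
    r = Fraction(round((value / alpha**n) * prec), prec)
    coeffs.append(-r)      # the new partial sum is p(x) - r*x^n
    value -= float(r) * alpha**n

print(value)               # the partial sums at alpha shrink rapidly toward 0
```

Following the hint exactly, $|r_n|\,|\alpha|^n$ decays geometrically, so the resulting power series converges on a disc of radius larger than $|\alpha|$ and vanishes at $\alpha$.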
{ "source": [ "https://mathoverflow.net/questions/42449", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
42,460
Suppose that the function $p(x)$ is defined on an open subset $U$ of $\mathbb{R}$ by a power series with real coefficients. Suppose, further, that $p$ maps rationals to rationals. Must $p$ be defined on $U$ by a rational function?
No. In fact, $p(x)$ can be a complex analytic function with rational coefficients that takes any algebraic number $\alpha$ to an element of $\mathbb{Q}(\alpha)$. (And everywhere analytic functions are not rational unless they are polynomials). The algebraic numbers are countable, so one can find a countable sequence of polynomials $q_1(x), q_2(x), \ldots \in \mathbb{Q}[x]$ such that every algebraic number is a root of $q_n(x)$ for some $n$. Suppose that the degree of $q_i(x)$ is $a_i$, and choose integers $b_i$ such that $$b_{n+1} > b_{n} + a_1 + a_2 + \ldots + a_n.$$ Then consider the formal power series: $$p(x) = \sum_{n=0}^{\infty} c_n x^{b_n} \left( \prod_{i=0}^{n} q_i(x) \right),$$ By the construction of $b_n$, the coefficient of $x^k$ for $k = b_n$ to $b_{n+1} -1$ in $p(x)$ is the coefficient of $x^k$ in $c_n x^{b_n} \prod_{i=1}^{n} q_i(x)$. Hence, choosing the $c_n$ to be appropriately small rational numbers, one can ensure that the coefficients of $p(x)$ decrease sufficiently rapidly and thus guarantee that $p(x)$ is analytic. On the other hand, clearly $p(\alpha) \in \mathbb{Q}[\alpha]$ for every (algebraic) $\alpha$, because then the sum above will be a finite sum. With a slight modification one can even guarantee that the same property holds for all derivatives of $p(x)$. I learnt this fun argument from the always entertaining Alf van der Poorten (who sadly died recently).
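(Editorial addition, not part of the original answer.) A toy prefix of this construction, computed symbolically in Python with SymPy; the three polynomials below are a hypothetical start of the enumeration, and the exponents and coefficients are ad hoc choices, just to see all but finitely many terms vanish at an algebraic point:

```python
import sympy as sp

x = sp.symbols('x')
qs = [x**2 - 2, x**2 - 3, x**3 - 5]      # hypothetical first entries of an enumeration of Q[x]
b = [1, 6, 14]                           # exponents chosen to grow fast, as in the answer
c = [sp.Rational(1, 10**k) for k in (2, 8, 17)]   # small rational coefficients

terms = [c[n] * x**b[n] * sp.prod(qs[:n + 1]) for n in range(3)]
alpha = sp.sqrt(3)                        # a root of qs[1]
print([sp.simplify(t.subs(x, alpha)) for t in terms])  # every term containing x^2 - 3 is exactly 0
print(sp.simplify(sum(terms).subs(x, alpha)))          # a finite sum, hence an element of Q(sqrt(3))
```

Only the first term survives at $\sqrt 3$, and it is a polynomial in $\sqrt 3$ with rational coefficients, which is the point of the argument.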
{ "source": [ "https://mathoverflow.net/questions/42460", "https://mathoverflow.net", "https://mathoverflow.net/users/5229/" ] }
42,512
It is sometimes the case that one can produce proofs of simple facts that are of disproportionate sophistication which, however, do not involve any circularity. For example, (I think) I gave an example in this M.SE answer (the title of this question comes from Pete's comment there). If I recall correctly, another example is proving Wedderburn's theorem on the commutativity of finite division rings by computing the Brauer group of their centers. Do you know of other examples of nuking mosquitos like this?
Irrationality of $2^{1/n}$ for $n\geq 3$: if $2^{1/n}=p/q$ then $p^n = q^n+q^n$, contradicting Fermat's Last Theorem. Unfortunately FLT is not strong enough to prove $\sqrt{2}$ irrational. I've forgotten who this one is due to, but it made me laugh. EDIT: Steve Huntsman's link credits it to W. H. Schultz.
{ "source": [ "https://mathoverflow.net/questions/42512", "https://mathoverflow.net", "https://mathoverflow.net/users/1409/" ] }
42,629
Fix a dimension $n\geqslant 2$. Let $S= \{M_1,\ldots, M_k\}$ be a finite set of smooth compact $n$-manifolds with boundary. Let us say that a smooth closed $n$-manifold is generated by $S$ if it may be obtained by gluing some copies of elements in $S$ via some arbitrary diffeomorphisms of their boundaries. For instance:

Every closed orientable surface is generated by a set of two objects: a disc and a pair-of-pants $P$,

Waldhausen's graph manifolds are the 3-manifolds generated by $D^2\times S^1$ and $P\times S^1$,

The 3-manifolds having Heegaard genus $g$ are those generated by the handlebody of genus $g$ alone,

The exotic $n$-spheres with $n\geqslant 5$ are the manifolds generated by $D^n$ alone.

A natural question is the following: Fix $n\geqslant 3$. Is there a finite set of compact smooth $n$-manifolds which generate all closed smooth $n$-manifolds? I expect the answer to be ''no'', although I don't see an immediate proof. In particular, I expect some negative answers to both of these questions:

Is there a finite set of compact 3-manifolds which generate all hyperbolic 3-manifolds?

and

Is there a finite set of compact 4-manifolds which generate all simply connected 4-manifolds?
Thanks to Ian Agol for pointing out this question and a related one on levels of Morse functions - /30567/ . In both cases, for smooth manifolds of dim $> 3$, as expected, there is no finite list of blocks (or regular level components). The idea is that one may define the "width" of a group, by representing the group G as the fundamental group of some complex K and then slicing K into "levels". The game is to arrange the slices so that the image of $\pi_1$ of each component of each level set under the inclusion into $\pi_1(K)$ is a subgroup of small rank. width(G) is defined as a Minmax over all slicings of all complexes K with $\pi_1 K = G$ of the rank of these image subgroups. I wrote a few pages to show that width($\mathbb{Z}^k$) $= k-1$. The only slightly technical ingredient is Lusternick-Schnirelmann category. This answers these finiteness questions negatively, since there are $d$-manifolds with $\pi_1 =\mathbb{Z}^k$ for all $k$, as long as $d>3$. As soon as the notes are TeXed, I can post them on the arXiv or Math Overflow.
{ "source": [ "https://mathoverflow.net/questions/42629", "https://mathoverflow.net", "https://mathoverflow.net/users/6205/" ] }
42,647
Is there a special name for the class of (commutative) rings in which every non-unit is a zero divisor? The main example is $\mathbf{Z}/(n)$. Are there other natural or interesting examples?
A commutative ring $A$ has the property that every non-unit is a zero divisor if and only if the canonical map $A \to T(A)$ is an isomorphism, where $T(A)$ denotes the total ring of fractions of $A$. Also, every $T(A)$ has this property. Thus probably there will be no special terminology except "total rings of fractions". Artinian rings provide examples: If $x \in A$, the chain $... \subseteq (x^2) \subseteq (x) \subseteq A$ is stationary, say $x^k = y x^{k+1}$ for some minimal $k \geq 0$. If $k=0$, $x$ is a unit. If $k \geq 1$, $x (x^k y - x^{k-1})=0$ and $x^{k-1} \neq y x^k$, i.e. $x$ is a zero divisor. The class of total rings of fractions is closed under (infinite) products and directed unions. Is it the smallest such class containing the artinian rings?
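(Editorial addition, not part of the original answer.) The motivating example $\mathbf{Z}/(n)$ from the question is easy to check by brute force; a small sketch in Python:

```python
from math import gcd

for n in (12, 30, 49, 64):
    units = {a for a in range(n) if gcd(a, n) == 1}
    # here 0 counts as a zero divisor, the convention that makes the statement work
    zero_divisors = {a for a in range(n) if any((a * b) % n == 0 for b in range(1, n))}
    non_units = set(range(n)) - units
    assert non_units <= zero_divisors and units.isdisjoint(zero_divisors)
print("in Z/(n), the non-units are exactly the zero divisors for all n tested")
```

This is of course a special case of the Artinian argument above, since $\mathbf{Z}/(n)$ is Artinian.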
{ "source": [ "https://mathoverflow.net/questions/42647", "https://mathoverflow.net", "https://mathoverflow.net/users/532/" ] }
42,744
This question made me wonder about the following: Are there orientedly diffeomorphic Kähler manifolds with different Hodge numbers? It seems that this would require that those manifolds are not deformation equivalent. However, there are examples by Catanese and Manetti showing that this happens already for smooth projective surfaces.
This question was debated in another forum a few years ago. The result was a note by Frédéric Campana in which he describes a counterexample as a corollary of another construction. In 1986 Gang Xiao ( An example of hyperelliptic surfaces with positive index Northeast. Math. J. 2 (1986), no. 3, 255–257.) found two simply connected complex surfaces $S$ and $S'$ (that is, complex dimension 2), with different Hodge numbers, that are homeomorphic by Freedman's classification. The homeomorphism has to be orientation-reversing, but $S \times S$ and $S' \times S'$ are orientedly diffeomorphic and of course still have different Hodge numbers. Freedman's difficult classification is not essential to the argument, because in 8 real dimensions you can use standard surgery theory to establish the diffeomorphism. Campana also explains that Borel and Hirzebruch found the first counterexample in 1959, in 5 complex dimensions.
{ "source": [ "https://mathoverflow.net/questions/42744", "https://mathoverflow.net", "https://mathoverflow.net/users/10076/" ] }
42,929
I occasionally come across a new piece of notation so good that it makes life easier by giving a better way to look at something. Some examples:

Iverson introduced the notation [X] to mean 1 if X is true and 0 otherwise; so for example $\sum_{1\le n<x} [n\text{ prime}]$ is the number of primes less than x, and the unmemorable and confusing Kronecker delta function $\delta_n$ becomes [n=0]. (A similar convention is used in the C programming language.)

The function taking x to x sin(x) can be denoted by x ↦ x sin(x). This has the same meaning as the lambda calculus notation λx.x sin(x) but seems easier to understand and use, and is less confusing than the usual convention of just writing x sin(x), which is ambiguous: it could also stand for a number.

I find calculations with Homs and ⊗ easier to follow if I write Hom(A,B) as A→B. Similarly writing $B^A$ for the set of functions from A to B is really confusing, and I find it much easier to write this set as A→B.

Conway's notation for orbifolds almost trivializes the classification of wallpaper groups.

Has anyone come across any more similar examples of good notation that should be better known? (Excluding standard well known examples such as commutative diagrams, Hindu-Arabic numerals, etc.)
Among recent introductions, I like the notation and names (introduced by Kenneth Iverson and popularized by Donald Knuth) for the ceiling function $\lceil x\rceil$ and floor function $\lfloor x\rfloor$. Compare with the heavy "approximation by excess/defect"...
{ "source": [ "https://mathoverflow.net/questions/42929", "https://mathoverflow.net", "https://mathoverflow.net/users/51/" ] }
43,002
Let $G$ be a topological group, and $\pi_1(G,e)$ its fundamental group at the identity. If $G$ is the trivial group then $G \cong \pi_1(G,e)$ as abstract groups. My question is: If $G$ is a non-trivial topological group can $G \cong \pi_1(G,e)$ as abstract groups? About all I know now is that $G$ would have to be abelian.
Here is an example: a product of infinitely many $\mathbb{RP}^\infty$'s. The crucial thing to see is that $\mathbb{RP}^\infty$ (or, easier to see, its universal cover $S^\infty$) has a group structure whose underlying group is a vector space of dimension $2^{\aleph_0}$. This is not hard: the total space $S^\infty$ of the universal $\mathbb{Z}_2$-bundle is obtained by applying a composite of functors to the group structure $\mathbb{Z}_2$ in the category of sets: $$\textbf{Set} \stackrel{K}{\to} \textbf{Cat} \stackrel{\text{nerve}}{\to} \textbf{Set}^{\Delta^{op}} \stackrel{R}{\to} \textbf{CGHaus}$$ ($\textbf{CGHaus}$ here is the category of compactly generated Hausdorff spaces and continuous maps). Here $K$ is the right adjoint to the "underlying set of objects" functor; it takes a set to the category whose objects are the elements of the set and there is exactly one morphism between any two objects. The functor $R$ is of course geometric realization. Each of these functors is product-preserving, and since the concept of group can be formulated in any category with finite products, a product-preserving functor will map a group object in the domain category to one in the codomain category.

Even more: the concept of an $\mathbb{F}_2$-vector space makes sense in any category with finite products since we merely need to add the equation $\forall_x x^2 = 1$ to the axioms for groups, which can be expressed by a simple commutative diagram. Thus $S^\infty$ is an internal vector space over $\mathbb{F}_2$ in $\textbf{CGHaus}$. It can also be considered an internal vector space over $\mathbb{F}_2$ in $\textbf{Top}$, the category of ordinary topological spaces, because a finite power $X^n$ in $\textbf{Top}$ of a CW-complex $X$ has the same topology as $X^n$ does in $\textbf{CGHaus}$ provided that $X$ has only countably many cells, which is certainly the case for $S^\infty$ (see Hatcher's book, Theorem A.6). Thus $S^\infty$ can be considered as an honest commutative topological group of exponent 2.

The underlying group of $S^\infty$ (in $\textbf{Set}$) is clearly a vector space of dimension $2^{\aleph_0}$. We may take this vector space to be the countable product $\mathbb{Z}_2^{\mathbb{N}}$. Modding out by $\mathbb{Z}_2$ (modding out by a 1-dimensional subspace), the space $\mathbb{RP}^\infty$ is also, as an abstract group, isomorphic to this. And so is a countably infinite product $(\mathbb{RP}^\infty)^{\mathbb{N}}$ of copies of $\mathbb{RP}^\infty$. Finally, the functor $\pi_1$ is product-preserving, and so $$\pi_1((\mathbb{RP}^\infty)^{\mathbb{N}}) \cong \mathbb{Z}_{2}^{\mathbb{N}}$$ and we are done.
{ "source": [ "https://mathoverflow.net/questions/43002", "https://mathoverflow.net", "https://mathoverflow.net/users/5795/" ] }
43,147
I have a basic question that others have definitely considered. Often there are papers that originally appeared in a language that one might not understand (and I mean a natural language here). I would like to cite the original paper, because that is where the credit belongs. But on the other hand, doing so violates the golden rule of read that paper that you cite! What should I do to overcome this dilemma? So far, I have always cited the original, and if possible some other related work that has appeared in English---but sometimes, reviewers write back that I should not be citing papers written in a language different from English, which is what motivated me to ask this question. Thanks for any useful advice.
I think a common-sense approach is to cite the original paper (whatever the language) in order to give credit and attribution but only rely on arguments from papers you can understand in your proofs (so you don't violate the golden rule). Regarding reviewers, the worst that can happen (I think) is that you use a crucial argument from a paper you can understand but that the reviewer cannot understand. In that case, I think the problem is the same whether the reviewer cannot understand it because he is unfamiliar with the math or because he is unfamiliar with the natural language. In both cases, you, as the author, should try to present relatively clear references, and that includes translations when appropriate, I guess, but ultimately this is a failure of the reviewer. If I were reviewing a paper and found myself in this situation, I would politely ask the author if there is a translation available. If not, I would tell the editors I am not competent, but wouldn't blame the author. It is a bad idea to upset reviewers, but banning references in languages other than English (or any other language) even for attribution purposes is an outrageous suggestion that should not be complied with.
{ "source": [ "https://mathoverflow.net/questions/43147", "https://mathoverflow.net", "https://mathoverflow.net/users/8430/" ] }
43,209
Dear all, I have a probably rather simple question: Suppose we have a matrix $ M\in SL_2(\mathbb{Q}) $. Does the group $ M^{-1} SL_2(\mathbb{Z}) M \cap SL_2(\mathbb{Z})$ then always have finite index in $SL_2(\mathbb{Z})$? Why? Why not? I really was not able to solve this problem! All the best, Karl
I, for one, am less than thrilled with snobbish kibitzing in the comments. Just answer the question already instead of dropping hints and passing judgment. The answer is yes for $\text{SL}(n,\mathbb{Z})$. Let $d$ be the product of the denominators in the matrices $M$ and $M^{-1}$. Let $\Gamma_d \subseteq \text{SL}(n,\mathbb{Z})$ be the subgroup of matrices of the form $I+dA$. This subgroup has finite index because it is the kernel of the congruence homomorphism $$\text{SL}(n,\mathbb{Z}) \longrightarrow \text{SL}(n,\mathbb{Z}/d),$$ whose target is a finite group. On the other hand, $M\Gamma_dM^{-1} \subseteq \text{SL}(n,\mathbb{Z})$ because $MIM^{-1} = I$ and $dMAM^{-1}$ is an integer matrix. Thus the intersection in question has finite index because it contains $\Gamma_d$ as a subgroup. The argument is quite general: You can replace $\text{SL}$ by other algebraic groups defined over $\mathbb{Z}$, and you can replace $\mathbb{Z}$ by any number field ring and $\mathbb{Q}$ by the corresponding number field.
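(Editorial addition, not part of the original answer.) The key integrality step, that $d\,MAM^{-1}$ has integer entries, is easy to sanity-check numerically. Below is a sketch in Python with a hypothetical rational matrix $M$; the specific $M$ and the trial count are arbitrary choices:

```python
from fractions import Fraction
import random

# a hypothetical M in SL_2(Q) with non-integral entries (det = 1)
M = [[Fraction(1, 2), Fraction(0)], [Fraction(0), Fraction(2)]]
Minv = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(1, 2)]]
d = 2 * 2  # product of the denominators appearing in M and in M^{-1}

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

for _ in range(1000):
    A = [[random.randint(-50, 50) for _ in range(2)] for _ in range(2)]
    C = mul(mul(M, A), Minv)
    assert all((d * entry).denominator == 1 for row in C for entry in row)
print("d * M A M^{-1} was integral in every trial")
```

In particular $M(I+dA)M^{-1}$ is an integer matrix whenever $A$ is, which is the containment $M\Gamma_dM^{-1} \subseteq \text{SL}(n,\mathbb{Z})$ used above.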
{ "source": [ "https://mathoverflow.net/questions/43209", "https://mathoverflow.net", "https://mathoverflow.net/users/10264/" ] }
43,313
Every now and then I attempt to understand better quantum mechanics and quantum field theory, but for a variety of possible reasons, I find it very difficult to read any kind of physicist account, even when the physicist is trying to be mathematically respectable. (I am not trying to be disrespectful or controversial here; take this as a confession of stupidity if it helps.) I am generally interested in finding online mathematical accounts which ideally would come close to being of "Bourbaki standard": definition-theorem-proof and written for mathematicians who prefer conceptual explanations, and ideally with tidy or economical notation (e.g., eschewing thickets of subscripts and superscripts). More specifically, right now I would like a (mathematically trustworthy) online account of rigged Hilbert spaces, if one exists. Maybe I am wrong, but the Wikipedia account looks a little bit suspect to me: they describe a rigged Hilbert space as consisting of a pair of inclusions $i: S \to H$ , $j: H \to S^\ast$ of topological vector space inclusions, where $S^\ast$ is the strong dual of $S$ , $H$ is a (separable) Hilbert space, $i$ is dense, and $j$ is the conjugate linear isomorphism $H \simeq H^\ast$ followed by the adjoint $i^\ast: H^\ast \to S^\ast$ . This seems a little vague to me; should $S$ be more specifically a nuclear space or something? My guess is that a typical application would be where $S$ is Schwartz space on $\mathbb{R}^4$ , with its standard dense inclusion in $L^2(\mathbb{R}^4)$ , so $S^\ast$ consists of tempered distributions. I also hear talk of a nuclear spectral theorem (due to Gelfand and Vilenkin) used to help justify the rigged Hilbert space technology, but I don't see precise details easily available online.
Some time ago I was interested in rigged Hilbert space to get a better understanding of quantum physics. On that occasion I collected some references on this subject, see below. It's quite comprehensive. A good starting point for an overview could be the works of Madrid and Gadella. Note that there are different versions of "rigged Hilbert space" (in the context of quantum physics) in the literature.

J.-P. Antoine. Dirac formalism and symmetry problems in quantum mechanics. I. General Dirac formalism. Journal of Mathematical Physics, 10(1):53–69, 1969. Zbl 0172.56602

N. Bogolubov, A. Logunov, and I. Todorov. Introduction to Axiomatic Quantum Field Theory, chapters 1: Some Basic Concepts of Functional Analysis, 4: The Space of States, pages 13–44, 112–128. Benjamin, Reading, Massachusetts, 1975. Zbl 1114.81300

R. de la Madrid. Quantum Mechanics in Rigged Hilbert Space Language. PhD thesis, Departamento de Fisica Teorica, Facultad de Ciencias, Universidad de Valladolid, 2001. Available at http://galaxy.cs.lamar.edu/~rafaelm/dissertation.html . Also see: The role of the rigged Hilbert space in quantum mechanics. European Journal of Physics, 26(2):277–312, 2005. arXiv:quant-ph/0502053 . Zbl 1079.81022

M. Gadella and F. Gómez. A unified mathematical formalism for the Dirac formulation of quantum mechanics. Foundations of Physics, 32:815–869, 2002.

M. Gadella and F. Gómez. On the mathematical basis of the Dirac formulation of quantum mechanics. International Journal of Theoretical Physics, 42:2225–2254, 2003. Zbl 1038.81020

M. Gadella and F. Gómez. Dirac formulation of quantum mechanics: Recent and new results. Reports on Mathematical Physics, 59:127–143, 2007.

I.M. Gelfand and N.J. Vilenkin. Generalized Functions, vol. 4: Some Applications of Harmonic Analysis, volume 4, chapters 2–4, pages 26–133. Academic Press, New York, 1964. Zbl 0136.11201

A.R. Marlow. Unified Dirac–von Neumann formulation of quantum mechanics. I. Mathematical theory. Journal of Mathematical Physics, 6:919–927, 1965.

E. Prugovecki. The bra and ket formalism in extended Hilbert space. J. Math. Phys., 14:1410–1422, 1973. Zbl 0277.47015

J.E. Roberts. The Dirac bra and ket formalism. Journal of Mathematical Physics, 7(6):1097–1104, 1966.

J.E. Roberts. Rigged Hilbert spaces in quantum mechanics. Commun. Math. Phys., 3:98–119, 1966. Zbl 0144.23404

D. Tjøstheim. A note on the unified Dirac–von Neumann formulation of quantum mechanics. Journal of Mathematical Physics, 16(4):766–767, 1975.

Edit. I remember that there is also a discussion about Gelfand triples in physics in the Funktionalanalysis books by Siegfried Großmann but I don't have a copy handy at the moment. Though it is in German it might be interesting for you, too.
{ "source": [ "https://mathoverflow.net/questions/43313", "https://mathoverflow.net", "https://mathoverflow.net/users/2926/" ] }
43,464
I'm an undergrad who is taking a Complex Analysis Course mainly for its applications in number theory. So I would like to ask for some guidelines about which theorems/concepts I should focus on in order to develop a narrower path for self-study. In addition, it would be helpful to know if there is a book that does a good job showing off how the complex analysis machinery can be used effectively in number theory, or at least one with a good amount of well-developed examples, in order to provide a wide background of the tools that complex analysis gives in number theory.
This question is like asking how abstract algebra is useful in number theory: lots of it is used in certain areas of the subject so there's no tidy answer. You probably won't be using Morera's theorem directly in number theory, but most of single-variable complex analysis is needed if you want to understand basic ideas in analytic number theory. A few topics you should pay attention to are: the residue theorem, the argument principle, the maximum modulus principle, infinite product factorizations (esp. the Hadamard factorization theorem), the Fourier transform and Fourier inversion, the Gamma function (know its poles and their residues), and elliptic functions. Basically pay attention to the whole course! There really isn't a whole lot in a first course on complex variables where one can say "that you should ignore if you are interested in number theory". If you want to be careful and not just wave your hands, you need to know conditions that guarantee the convergence of series and products of analytic functions (and that the limit is analytic), the existence of a logarithm of an analytic function (it's not the composite of the three letters "log" and your function), that let you reorder terms in series and products, that justify termwise integration, and of course the workhorse of analysis: how to make good estimates.
{ "source": [ "https://mathoverflow.net/questions/43464", "https://mathoverflow.net", "https://mathoverflow.net/users/3937/" ] }
43,478
A subset of ℝ is meagre if it is a countable union of nowhere dense subsets (a set is nowhere dense if every open interval contains an open subinterval that misses the set). Any countable set is meagre. The Cantor set is nowhere dense, so it's meagre. A countable union of meagre sets is meagre (e.g. all rational translates of the Cantor set). There can also be meagre sets of positive measure, like "fat Cantor sets". To form a fat Cantor set, you start with a closed interval, then remove some open interval from the middle of it, then remove some open intervals from the remaining intervals, and so on. The result is nowhere dense because you removed open intervals all over the place. If the sizes of the intervals you remove get small fast, then the result has positive measure. So does meagreness have any connection at all to measure? Specifically, are all measure zero sets meagre?
Let $p_i$ be a list of the rational numbers. Let $U_{i,n}$ be an open interval centered on $p_i$ of length $2^{-i}/n$. Then $V_n=\cup_i U_{i,n}$ is an open cover of the rationals, of measure at most $\sum_i 2^{-i}/n=2/n$. Then $\cap_n V_n$ is a co-meager set of measure zero. So yes, there is a measure zero set that is not meager, and so no, not every measure zero set is meager. Computability theory gives a neat way to look at this. There is a certain type of real number that is called 1-generic and there is another type that is called 1-random or "Martin-Löf random". These two sets are disjoint. The set of 1-generic reals is co-meager and has measure zero, whereas the set of 1-random reals is meager and has full measure. Thus measure and category are quite orthogonal. Set theorists would say they correspond to two different notions of forcing. A good general reference for this kind of question is Oxtoby's classic book Measure and category .
{ "source": [ "https://mathoverflow.net/questions/43478", "https://mathoverflow.net", "https://mathoverflow.net/users/1/" ] }
43,586
The line bundle $O(-1)$ on a projective space or $O(-\rho)$ on a flag variety has the property that all its cohomology groups vanish. Is there a story behind such sheaves? Here are more precise questions. Let $X$ be a smooth complex projective surface (say, a nice one like Del Pezzo or K3). Does there always exist a coherent locally free sheaf $M$ whose derived global sections vanish? Can one describe all such sheaves? Is there a coarse moduli space of such sheaves?
The bundles with no derived global sections (more generally the objects $F$ of the derived category $D^b(coh X)$ such that $Ext^\bullet(O_X,F) = 0$) form the left orthogonal complement to the structure sheaf $O_X$. It is denoted $O_X^\perp$. This is quite an interesting subcategory of the derived category. For example, if $O_X$ itself has no higher cohomology (i.e. it is exceptional) then there is a semiorthogonal decomposition $D^b(coh X) =< O_X^\perp, O_X >$. Then every object can be split into components with respect to this decomposition and so many questions about $D^b(coh X)$ can be reduced to $O_X^\perp$ which is smaller. Further, if you have an object $E$ in $O_X^\perp$ which has no higher self-exts (like $O(-1)$ on $P^2$), you can continue simplifying your category --- considering a semiorthogonal decomposition $O_X^\perp = < E^\perp, E >$. For example if $X = P^2$ and $E = O(-1)$ then $E^\perp$ is generated by $O(-2)$, so there is a semiorthogonal decomposition $D^b(coh P^2) = < O_X(-2), O_X(-1), O_X >$ also known as a full exceptional collection on $P^2$. It allows a reduction of many problems about $D^b(coh P^2)$ to linear algebra. Another interesting question is when $O_X$ is spherical (i.e. its cohomology algebra is isomorphic to the cohomology of a topological sphere). This holds for example for K3 surfaces. Then there is a so called spherical twist functor for which $O_X^\perp$ is the fixed subcategory. Thus, as you see, the importance of the category $O_X^\perp$ depends on the properties of the sheaf $O_X$.
{ "source": [ "https://mathoverflow.net/questions/43586", "https://mathoverflow.net", "https://mathoverflow.net/users/5301/" ] }
43,681
In grad school I learned the isomorphism between de Rham cohomology and singular cohomology from a course that used Warner's book Foundations of Differentiable Manifolds and Lie Groups . One thing that I remember being puzzled by, and which I felt was never answered during the course even though I asked the professor about it, was what the theorem could be used for. More specifically, what I was hoping to see was an application of the de Rham theorem to proving a result that was "elementary" (meaning that it could be understood, and seen to be interesting, by someone who had not already studied the material in that course). Is there a good motivating problem of this type for the de Rham theorem? To give you a better idea of what exactly I'm asking for, here's what I consider to be a good motivating problem for the Lebesgue integral. It is Exercise 10 in Chapter 2 of Rudin's Real and Complex Analysis . If $\lbrace f_n\rbrace$ is a sequence of continuous functions on $[0,1]$ such that $0\le f_n \le 1$ and such that $f_n(x)\to 0$ as $n\to\infty$ for every $x\in[0,1]$, then $$\lim_{n\to\infty}\int_0^1 f_n(x)\thinspace dx = 0.$$ This problem makes perfect sense to someone who only knows about the Riemann integral, but is rather tricky to prove if you're not allowed to use any measure theory. If it turns out that there are lots of answers then I might make this community wiki, but I'll hold off for now.
Here is a really "trivial" application. Since a volume form (say from a Riemannian metric) for a compact manifold $M$ is clearly closed (it has top degree) and not exact (by Stokes' Theorem), it follows that the cohomology is non-trivial, so $M$ cannot be contractible.
{ "source": [ "https://mathoverflow.net/questions/43681", "https://mathoverflow.net", "https://mathoverflow.net/users/3106/" ] }
43,690
I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it. My question is: what can one (such as myself) contribute to mathematics? I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too but I never believed this was the significant part of a mathematician's work; which would be the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder? Since just sending in enough men will surely break through some barrier. Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or people's biographies or anywhere. Thank you.
It's not mathematics that you need to contribute to. It's deeper than that: how might you contribute to humanity, and even deeper, to the well-being of the world, by pursuing mathematics? Such a question is not possible to answer in a purely intellectual way, because the effects of our actions go far beyond our understanding. We are deeply social and deeply instinctual animals, so much that our well-being depends on many things we do that are hard to explain in an intellectual way. That is why you do well to follow your heart and your passion. Bare reason is likely to lead you astray. None of us are smart and wise enough to figure it out intellectually.

The product of mathematics is clarity and understanding. Not theorems, by themselves. Is there, for example, any real reason that even such famous results as Fermat's Last Theorem, or the Poincaré conjecture, really matter? Their real importance is not in their specific statements, but their role in challenging our understanding, presenting challenges that led to mathematical developments that increased our understanding. The world does not suffer from an oversupply of clarity and understanding (to put it mildly). How and whether specific mathematics might lead to improving the world (whatever that means) is usually impossible to tease out, but mathematics collectively is extremely important.

I think of mathematics as having a large component of psychology, because of its strong dependence on human minds. Dehumanized mathematics would be more like computer code, which is very different. Mathematical ideas, even simple ideas, are often hard to transplant from mind to mind. There are many ideas in mathematics that may be hard to get, but are easy once you get them. Because of this, mathematical understanding does not expand in a monotone direction. Our understanding frequently deteriorates as well. There are several obvious mechanisms of decay. The experts in a subject retire and die, or simply move on to other subjects and forget. Mathematics is commonly explained and recorded in symbolic and concrete forms that are easy to communicate, rather than in conceptual forms that are easy to understand once communicated. Translation in the direction conceptual -> concrete and symbolic is much easier than translation in the reverse direction, and symbolic forms often replace the conceptual forms of understanding. And mathematical conventions and taken-for-granted knowledge change, so older texts may become hard to understand.

In short, mathematics only exists in a living community of mathematicians that spreads understanding and breathes life into ideas both old and new. The real satisfaction from mathematics is in learning from others and sharing with others. All of us have clear understanding of a few things and murky concepts of many more. There is no way to run out of ideas in need of clarification. The question of who is the first person to ever set foot on some square meter of land is really secondary. Revolutionary change does matter, but revolutions are few, and they are not self-sustaining --- they depend very heavily on the community of mathematicians.
{ "source": [ "https://mathoverflow.net/questions/43690", "https://mathoverflow.net", "https://mathoverflow.net/users/4361/" ] }
43,721
Is an arbitrary union of non-trivial closed balls in the Euclidean space $\mathbb{R}^n$ Lebesgue measurable? If so, is it a Borel set? @George I still have two questions concerning your sketch of proof. First, how can you guarantee each of the open balls in the countable union has radius greater than or equal to 1? Second, I don't know how to use convexity to prove $\mu (B') \leq (1+\epsilon)^{N}\mu(B)$
No, in dimension $N>1$, it does not have to be Borel measurable. E.g., in 2 dimensions, consider, a non Borel measurable subset of the reals $S$, and let $A$ be the union of closed unit balls centered at points $(x,0)$ for all $x\in S$. The intersection of $A$ with $\mathbb{R}\times \{1\}$ is the non-Borel set $S \times \{1\}$, so $A$ is not Borel. On the other hand, for $N=1$, any union of non-trivial closed intervals is Borel-measurable. If $A$ is such a union and $B$ is the union of the open interiors, then it can be seen that $A$ is just the union of $B$ with (at most countably many) endpoints of connected components of $B$. Lebesgue measurability does hold, however. Faisal posted a link for this as I was typing my answer, but I think its still worth giving a brief sketch of the proof I was starting to type (Edit: added more detail, as requested). Reduce the problem to that of balls with at least some positive radius $r$ and within some bounded region. To do this, suppose that $S$ is the set of closed balls and $S_r$ denotes the balls of radius at least $r$ and with center no further than $r$ from the origin. Then, $$ \cup S=\bigcup_{n=1}^\infty\left(\cup S_{1/n}\right). $$ As the measurable sets are closed under countable unions, it is enough to show that $\cup S_r$ is Lebesgue measurable for each $r>0$. So, we can assume that all balls are of radius at least $r$ and are within some bounded distance of the origin. Let $A$ be the union of the closed balls, and $B\subseteq A$ be the union of their interiors. This is open so, by second countability, is a union of countably many open balls of radius at least $r$. Also, $A$ lies between $B$ and its closure $\bar B$. Show that the boundary $\bar B\setminus B$ of $B$ has zero measure. If we scale up the radius of each of the countable sequence of open balls used to obtain $B$ by a factor $1+\epsilon$ to get the new set $B^\prime$ then $\mu(B^\prime)\le(1+\epsilon)^N\mu(B)$. Showing this is the tricky part, but it does follow from convexity of the balls: If the balls have radius $r_k$ and centres $x_k$, then consider the sets $$ B_t=\bigcup_{k=1}^\infty B(r_k,tx_k) $$ for real $t$, so that $B_1=B$. The function $t\mapsto\mu(B_t)$ is increasing in $t\ge0$ * . Also, $B^\prime= (1+\epsilon)B_{1/(1+\epsilon)}$ giving, $$ \mu(B^\prime)=(1+\epsilon)^{N}\mu(B_{1/(1+\epsilon)})\le(1+\epsilon)^{N}\mu(B) $$ as claimed. As $\bar B\subseteq B^\prime$ we get $\mu(\bar B\setminus B)\le((1+\epsilon)^N-1)\mu(B)$ which can be made as small as we like by choosing ε small. * Edit: in my initial response, I was thinking that this answer is enough to prove that $\mu(B_t)$ is increasing in $t$. However, as Mizar points out in the comments below, this is not clear. Actually, I don't think we can reduce it to that case. However, the result is still true, by the Kneser-Poulson conjecture . This states that if the centres of set of unit balls in Euclidean space are all moved apart, then the measure of their union increases. Although only a conjecture, it has been proved for continuous motions, which applies in our case. Also, expressing each ball of radius greater than some arbitrarily small $r > 0$ as a union of balls of radius $r$, then it still applies in our case for balls of non equal radii. Edit: having seen Faisal's explanation, the proof I outline here is completely different to his. The result Faisal quotes is a bit more general as it applies to convex sets with nonempty interior, rather than just balls. 
However, the proof given above also works for symmetric convex sets with nonempty interiors. As every convex set with nonempty interior is a union of (translates of) symmetric ones, this implies the same result.
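As a purely numerical illustration of the scaling step (the random balls, the grid resolution and $\epsilon=0.05$ below are arbitrary choices), one can estimate the areas of $B$ and of the enlarged union $B'$ in the plane and compare them with $(1+\epsilon)^2\mu(B)$:

    import numpy as np

    rng = np.random.default_rng(0)
    centers = rng.uniform(-1.0, 1.0, size=(20, 2))   # 20 random balls in the plane
    radii = rng.uniform(0.1, 0.3, size=20)           # radii bounded below, as in the reduction above
    eps = 0.05

    xs = np.linspace(-1.6, 1.6, 1200)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    cell = (xs[1] - xs[0]) ** 2                      # area represented by one grid point

    def union_area(scale):
        """Grid estimate of the area of the union of the balls with radii scaled by `scale`."""
        inside = np.zeros(len(pts), dtype=bool)
        for c, r in zip(centers, radii):
            inside |= ((pts - c) ** 2).sum(axis=1) <= (scale * r) ** 2
        return inside.sum() * cell

    mu_B, mu_Bp = union_area(1.0), union_area(1.0 + eps)
    # up to grid error, mu(B') stays below (1 + eps)^2 * mu(B)
    print(mu_B, mu_Bp, (1 + eps) ** 2 * mu_B)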
{ "source": [ "https://mathoverflow.net/questions/43721", "https://mathoverflow.net", "https://mathoverflow.net/users/6018/" ] }
43,726
Is there someone who can give me some hints/references to the proof of this fact?
To elaborate on Qiaochu's answer. The subgroup generated by the two matrices $$\left[ \begin{array}{cc} 1 & 2 \\\ 0 & 1 \end{array} \right]$$ and $$\left[ \begin{array}{cc} 1 & 0 \\\ 2 & 1 \end{array} \right]$$ is the Sanov subgroup. It consists, by an exercise in Kargapolov-Merzlyakov, of the matrices of the form $$\left[ \begin{array}{cc} 4k+1 & 2l \\\ 2m & 4n+1 \end{array} \right]$$ with determinant 1. The congruence subgroup $\Gamma(2)$ consists of the matrices of the form $$\left[ \begin{array}{cc} 2k+1 & 2l \\\ 2m & 2n+1 \end{array} \right]$$ with determinant 1. The matrices that lie in $\Gamma(2)$ but not in the Sanov subgroup have the form $$\left[ \begin{array}{cc} 4k+3 & 2l \\\ 2m & 4n+3 \end{array} \right].$$ Taking the product of any two such matrices gives us a matrix from the Sanov subgroup. So the Sanov subgroup has index 2 in $\Gamma(2)$, and index 12 in $SL_2(\mathbb Z)$.
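As a quick sanity check of these congruence patterns, here is a minimal Python sketch (the random words are only an illustration, not a substitute for the exercise in Kargapolov-Merzlyakov): every word in the two generators and their inverses has diagonal entries $\equiv 1 \pmod 4$ and even off-diagonal entries, and the product of two matrices of the second kind lands back in the first.

    import numpy as np

    # generators of the Sanov subgroup and their inverses
    S, Sinv = np.array([[1, 2], [0, 1]]), np.array([[1, -2], [0, 1]])
    T, Tinv = np.array([[1, 0], [2, 1]]), np.array([[1, 0], [-2, 1]])

    def sanov_shape(M):
        # diagonal entries congruent to 1 mod 4, off-diagonal entries even, determinant 1
        a, b, c, d = M.ravel()
        return a % 4 == 1 and d % 4 == 1 and b % 2 == 0 and c % 2 == 0 and a * d - b * c == 1

    rng = np.random.default_rng(0)
    gens = [S, Sinv, T, Tinv]
    for _ in range(1000):
        W = np.eye(2, dtype=int)
        for i in rng.integers(0, 4, size=12):
            W = W @ gens[i]
        assert sanov_shape(W)

    # a Gamma(2) element with diagonal entries 3 mod 4; its square has diagonal 1 mod 4 again
    M = np.array([[3, 2], [4, 3]])
    print(M @ M)   # [[17 12] [24 17]]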
{ "source": [ "https://mathoverflow.net/questions/43726", "https://mathoverflow.net", "https://mathoverflow.net/users/9401/" ] }
43,768
I am currently trying to learn a bit about Grothendieck-Riemann-Roch... To try to get a better feeling for it, I am looking for examples of nice applications of GRR applied to a proper morphism $X \to Y$ where $Y$ is not a point. I already know of a fair number of nice applications of HRR, i.e. GRR when $Y$ is a point. I've read through some of the relevant sections of Fulton's Intersection Theory book, but I've only found applications of HRR there, though it's very possible that I overlooked something. I am also interested in seeing worked-out, explicit, concrete examples, with explicit Chow/cohomology classes. Thanks much!
Check out Harris & Morrison's "Moduli of Curves", section 3E. There is a wealth of examples of applications of GRR coming from moduli theory, in which one applies it to projection from the universal family or some fibered power of the universal family. The basic idea in these cases is that both the base space and the total space are rather complicated beasts but the fibers of the morphisms are usually quite tractable, since they are just the gadgets you are trying to parametrize. For more examples in the same vein, you could read the classic "Towards an enumerative geometry of the moduli space of curves" by David Mumford.
{ "source": [ "https://mathoverflow.net/questions/43768", "https://mathoverflow.net", "https://mathoverflow.net/users/83/" ] }
43,820
Currently in my undergraduate courses I am being taught how to set up various machinery using slick, short proofs and then how to apply that machinery. What I am not being taught, largely, is what came before these slick, short proofs. What did mathematicians do before so-and-so proved such-and-such lemma? Where, in other words, are the tedious, long proofs that we can look to as examples of the horrible mess we are escaping? What insights helped mathematicians escape those messes? Right now I am particularly interested in examples from measure theory. What did people do before, for example, Dynkin's lemma or Caratheodory's extension theorem? Or were these tools available from near the start? An answer should include both some indication of how tedious and long the old approach was and how much slicker and shorter the modern approach is. Ideally, it should also discuss how the transition between the two happened. (If you prefer the old approach to the modern approach, for example for pedagogical reasons, that would also be interesting to hear about.)
Not from measure theory, alas, but the example that jumps to my mind is Gauss's first proof of Quadratic Reciprocity. It appears in the Disquisitiones Arithmeticae. The proof occupies arts. 135 through 144 (five and a half pages in the English edition published by Springer); the proof is by strong induction on $q$ (when $p\lt q$). I don't recall who, but someone once called it a proof by "mathematical revulsion." The proof is quite messy. Gauss argues by cases, considering the congruence classes of $p$ and $q$ modulo $4$, and whether $p$ is or is not a quadratic residue modulo $q$. He actually casts his proof as if it were a proof by minimal counterexample, so he further assumes in some instances that the result does not hold (e.g., for $p\equiv q\equiv 1 \pmod{4}$, either $p$ is a quadratic residue modulo $q$ and $q$ is not one modulo $p$; or $p$ is not a quadratic residue modulo $q$ and $q$ is a quadratic residue modulo $p$). These fall into eight cases, though some of those cases themselves break into subcases. For example, Gauss looks at the case when $p$ and $q$ are both congruent to $1$ modulo $4$, and $\pm p$ is not a residue modulo $q$; then he takes a prime $\ell\neq p$ less than $q$ for which $q$ is not a quadratic residue, and considers the cases in which $\ell\equiv 1 \pmod{4}$ or $\ell\equiv 3 \pmod{4}$ separately; the first subcase itself breaks into four separate sub-subcases: since $p\ell$ is a quadratic residue modulo $q$, it is the square of some even $e$; then he considers the case when $e$ is not divisible by either $p$ or $\ell$, when it is divisible by $p$ but not $\ell$; when it is divisible by $\ell$ but not $p$; and when it is divisible by $\ell$ and $p$. And so on. By the time Gauss finally gets to the eighth and final case, he is clearly somewhat exhausted, writing merely "The demonstration is the same as in the preceding case." On the one hand, the proof is pretty much the first proof that one might think to try when encountering the problem. But the different cases are just way too messy, and one quickly loses sight of the forest because one is so intently staring at the beetles in the bark of the tree directly in front. Plenty of other proofs would follow (including five more by Gauss), ranging from the clever to the almost magical (do this, do that, and oops, quadratic reciprocity falls out).
{ "source": [ "https://mathoverflow.net/questions/43820", "https://mathoverflow.net", "https://mathoverflow.net/users/290/" ] }
43,846
My question is: how should one think of $p$-adic $L$-functions? I know they have been constructed classically by interpolating values of complex $L$-functions. Recently I have seen people think about them in terms of Euler systems. But we know only a few Euler systems and there are a lot of $p$-adic $L$-functions. In the case of elliptic curves (at least over $\mathbb{Q}$), complex $L$-functions give information about the Galois representations. Should the $p$-adic $L$-function give some information about some $p$-adic Galois representation? It seems to be the case for cyclotomic fields, where we think of the cyclotomic character as a 1-dimensional representation. I apologize in advance if my questions are vague. I am just starting to learn about the subject.
There are three ways to obtain $p$-adic $L$-functions. The big dream is that one can do all of them for a large class of $p$-adic Galois representations $V$. To study them, it is best to start by looking at the cases of $\mathbb{Q}(1)$, for the classical Kubota-Leopoldt $p$-adic $L$-functions, or the Tate module of an elliptic curve, etc. Let $K_{\infty}=\mathbb{Q}(\mu_{p^{\infty}})$ be the union of all cyclotomic fields of roots of unity of $p$-power order. Let $G$ be its Galois group, which is isomorphic to $\mathbb{Z}_p^{\times}$. Attached to $V$ there is a complex $L$-function, and there are conjectures saying that certain values are algebraic and satisfy certain congruences modulo powers of $p$, e.g. Kummer congruences. So in some cases, one can show the algebraicity and the congruences. So the values fit together into a $p$-adic analytic function. But the better way of presenting the $p$-adic $L$-function is by constructing a measure on the Galois group $G$ with values in $\mathbb{C}_p$. One can then evaluate the $p$-adic $L$-function on characters of the group $G$. This way the $p$-adic $L$-function closely resembles its complex counterpart as described in Tate's thesis. See Lang's Cyclotomic Fields or Washington or Mazur-Tate-Teitelbaum for instance. On the algebraic side, we have a Selmer group or a class group that we watch growing in the tower $K_{\infty}/\mathbb{Q}$. The characteristic series of the dual of this Selmer group as a $\Lambda$-module is a sort of generating function for this growth, like zeta-functions for varieties over finite fields. These characteristic series are in fact power series, but they are defined up to a unit (as they are generators of some ideal). Greenberg's papers give a good introduction to this side. The Euler system (if we are lucky enough to be in one of the few cases where we have one) is a system of norm-compatible cohomology classes. In particular they give an element in $H^1(K_n, V)$ for each intermediate field $K_n$. But there should be an element over sufficiently many abelian extensions. The norm-compatibility involves a factor that looks like an Euler factor of the complex $L$-function. There is a general map, called the Coleman map or the logarithme élargi or whatever, from the inverse limit of the $H^1(K_{n,p}, V)$ to a ring of power series. The image of the Euler system under this map should be the analytically defined $p$-adic $L$-function. Typically one shows that they satisfy the same interpolation property. In some sense the Euler system is the bridge between the analytic and the algebraic world. Under the Coleman map it links to the analytic side. In the other direction, one can form derivative classes out of the cohomology classes. These derived classes can be analysed locally and they can be used to bound the Selmer group and hence the characteristic series. That is how one can prove the main conjecture in some cases in one direction. Probably a good place to start is Coates-Sujatha. The $p$-adic $L$-function of an elliptic curve is conjectured to satisfy a $p$-adic Birch and Swinnerton-Dyer formula (Mazur-Tate-Teitelbaum, and Bernardi-Perrin-Riou in the supersingular case). On the algebraic side, instead, we almost know that the characteristic series satisfies this formula. The order of vanishing is known to be at least as large as the rank, and if they agree then the leading term has the desired shape involving the Tate-Shafarevich group; of course only up to a $p$-adic unit.
In the geometric case, say an elliptic curve over a function field $K$ of a curve over a finite field $k$, the complex and the $p$-adic function are the same ($p\neq\text{char}(k)$), since they are both just polynomials with integer coefficients. Tate's Bourbaki talk on BSD shows how one can use the tower $K_{\infty} = \bar{k} \cdot K$ to prove a good deal about BSD. Iwasawa theory tries to mimic this. So I believe that $p$-adic $L$-functions are just as nice and interesting as their complex counterparts. Even if they seem more mysterious and the definition is less straightforward, we sometimes know more about them. Now I stop, otherwise I am going to write a book about it here...
{ "source": [ "https://mathoverflow.net/questions/43846", "https://mathoverflow.net", "https://mathoverflow.net/users/2081/" ] }
43,889
I hate to keep going with the big lists, but the question about one-sentence summaries of topics/areas spurred this question...and I just can't help myself! Definition (Fraleigh): A proof synopsis is a one or two sentence synopsis of a proof, explaining the idea of the proof without all the details and computations. My question is this: What is your favorite proof synopsis of a theorem we all should know? (I'm sorry, I'll do my time in big-list hell...)
Mean Value Theorem: Tilt your head and apply Rolle's Theorem.
{ "source": [ "https://mathoverflow.net/questions/43889", "https://mathoverflow.net", "https://mathoverflow.net/users/6269/" ] }
43,923
The following problem is homework of a sort -- but homework I can't do! It is Problem 1.F in Van Lint and Wilson: Let $G$ be a graph where every vertex has degree $d$. Suppose that $G$ has no loops, multiple edges, $3$-cycles or $4$-cycles. Then $G$ has at least $d^2+1$ vertices. When can equality occur? I assigned the lower bound early on in my graph theory course. Solutions for $d=2$ and $d=3$ are easy to find. Then, last week, when I covered eigenvalue methods, I had people use them to show that there were no solutions for $d=4$, $5$, $6$, $8$, $9$ or $10$. (Problem 2 here .) I can go beyond this and show that the only possible values are $d \in \{ 2,3,7,57 \}$, and I wrote this up in a handout for my students. Does anyone know if the last two exist? I'd like to tell my class the complete story.
This is a Moore graph : a regular graph of degree $d$ with diameter $k$ and the maximum possible number of vertices. A calculation shows that the number of vertices $n$ is at most $$ 1+d \sum_{i=0}^{k-1} (d-1)^i $$ and, as you mentioned, in the case $k=2$ relevant here it can be shown by spectral techniques that the only possible values for $d$ are $$ d = 2,3,7,57. $$ An example for $d=7$ is the Hoffman–Singleton graph , but for the case $d=57$ existence is still open. See Theorem 8.1.5 in the book " Spectra of graphs " by Brouwer and Haemers for reference.
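To make the spectral step explicit, here is a short Python sketch (an illustration only, using the standard relation $A^2 + A - (d-1)I = J$ that holds for a $d$-regular graph of girth $5$ on $d^2+1$ vertices); it checks for which $d$ the multiplicities of the eigenvalues $(-1\pm\sqrt{4d-3})/2$ can be non-negative integers:

    from math import isqrt

    def moore_feasible(d):
        """Integrality test for the eigenvalue multiplicities of a d-regular
        graph of girth 5 on d^2 + 1 vertices (a diameter-2 Moore graph)."""
        s2 = 4 * d - 3
        s = isqrt(s2)
        if s * s != s2:
            # the two eigenvalues are irrational, so their multiplicities must be
            # equal, which forces d^2 - 2d = 0 (trace condition)
            return d * d - 2 * d == 0
        diff = d * d - 2 * d              # m1 - m2 = (d^2 - 2d)/s must be an integer
        if diff % s != 0:
            return False
        return (d * d + diff // s) % 2 == 0   # then m1 = (d^2 + diff/s)/2 must be an integer

    print([d for d in range(2, 3000) if moore_feasible(d)])   # prints [2, 3, 7, 57]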
{ "source": [ "https://mathoverflow.net/questions/43923", "https://mathoverflow.net", "https://mathoverflow.net/users/297/" ] }
43,950
The symbol $\Subset$ (occurring in places where $\subseteq$ could occur syntactically) comes up frequently in a paper I'm reading. The paper lives at the intersection of a few areas of math, and I don't even know where to begin looking for the meaning of a symbol whose latex code is "\Subset". Do you know what this usually denotes? Edit: some context follows. All the sets in question are subsets of $\hat{\mathbb{C}} = \mathbb{C}\cup\{\infty\}$. Example 1. In a situation where $J$ is closed with empty interior, $U$ and $V$ are closed with $U\subsetneq V$, it is written "Note that $J \Subset U$ and, selecting a neighborhood $W \subset U$ of $J$ which is compactly contained in $V$, ..." Example 2. In a situation where $R$ is a rational mapping, and where it is assumed that $B\subset \hat{\mathbb{C}}$ is such that $R(B)\Subset B$, it is written "Let $\Omega_0 = \hat{\mathbb{C}}\setminus B$. Define $\Omega_1 = R^{-1}(\Omega_0)$. By the properties of $B$, we have $\Omega_1\Subset\Omega_0$. If we let $U_0$ be any finite union of closed balls such that $\Omega_1 \subset U_0 \subset \Omega_0$, ..." In both cases I have paraphrased to simplify the notation, so I hope I have not introduced errors into it.
In my experience $U \Subset V$ means that the closure of $U$ is a compact subset of $V$.
{ "source": [ "https://mathoverflow.net/questions/43950", "https://mathoverflow.net", "https://mathoverflow.net/users/6649/" ] }
43,986
Let $V$ be an infinite dimensional topological vector space and consider the natural application $\iota\colon V\to V^{**}$. The space $V$ is said to be reflexive if $\iota$ is an isomorphism. Are there examples where $\iota$ fails to be an isomorphism but $V$ and $V^{**}$ are nevertheless isomorphic? Can one find an example where $V$ is a Banach space and the isomorphism is actually an isometry?
Yes, the James space. This is a good question, and R. C. James is rightly praised for this example. MR0044024 (13,356d) James, Robert C. A non-reflexive Banach space isometric with its second conjugate space. Proc. Nat. Acad. Sci. U. S. A. 37, (1951). 174–177.
{ "source": [ "https://mathoverflow.net/questions/43986", "https://mathoverflow.net", "https://mathoverflow.net/users/9871/" ] }
44,021
This question is only motivated by curiosity; I don't know a lot about manifold topology. Suppose $M$ is a compact topological manifold of dimension $n$. I'll assume $n$ is large, say $n\geq 4$. The question is: Does there exist a simplicial complex which is homeomorphic to $M$? What I think I know is: If $M$ has a piecewise linear (PL) structure, then it is triangulable, i.e., homeomorphic to a simplicial complex. There is a well-developed technology ("Kirby-Siebenmann invariant") which tells you whether or not a topological manifold admits a PL-structure. There are exotic triangulations of manifolds which don't come from a PL structure. I think the usual example of this is to take a homology sphere $S$ (a manifold with the homology of a sphere, but maybe not homeomorphic to a sphere), triangulate it, then suspend it a bunch of times. The resulting space $M$ is supposed to be homeomorphic to a sphere (so is a manifold). It visibly comes equipped with a triangulation coming from that of $S$, but has simplices whose link is not homeomorphic to a sphere; so this triangulation can't come from a PL structure on $M$. This leaves open the possibility that there are topological manifolds which do not admit any PL-structure but are still homeomorphic to some simplicial complex. Is this possible? In other words, what's the difference (if any) between "triangulable" and "admits a PL structure"? This Wikipedia page on 4-manifolds claims that the E8-manifold is a topological manifold which is not homeomorphic to any simplicial complex; but the only evidence given is the fact that its Kirby-Siebenmann invariant is nontrivial, i.e., it doesn't admit a PL structure.
Galewski-Stern proved https://mathscinet.ams.org/mathscinet-getitem?mr=420637 "It follows that every topological m-manifold, m≥7 (or m≥6 if ∂M=∅), can be triangulated if and only if there exists a PL homology 3-sphere H3 with Rohlin invariant one such that H3#H3 bounds a PL acyclic 4-manifold." The Rohlin invariant is a Z/2-valued homomorphism on the 3-dimensional homology cobordism group, $\Theta_3\to Z/2$, so if it does not split there exist non-triangulable manifolds in high dimensions.
{ "source": [ "https://mathoverflow.net/questions/44021", "https://mathoverflow.net", "https://mathoverflow.net/users/437/" ] }
44,060
Let $(G,\cdot,T)$ and $(H,\star,S)$ be topological groups such that $(G,T)$ is homeomorphic to $(H,S)$ and $(G,\cdot)$ is isomorphic to $(H,\star)$. Does it follow that $(G,\cdot,T)$ and $(H,\star,S)$ are isomorphic as topological groups? If no, what if they are both Hausdorff? What if they are both Hausdorff and two-sided complete?
The 2-adic rationals $\mathbb{Q}_2$ and the 3-adic rationals $\mathbb{Q}_3$ are homeomorphic, because each one is a countable disjoint union of Cantor sets. They are also isomorphic as groups if you assume the axiom of choice, because they are both fields of characteristic 0 and therefore vector spaces over $\mathbb{Q}$ (of the same cardinal dimension). However, the 2-adic integers $\mathbb{Z}_2$ are a compact subgroup of $\mathbb{Q}_2$ in which every element is infinitely divisible by 3. On the other hand, in $\mathbb{Q}_3$, any non-trivial sequence $x, x/3, x/9, \ldots$ is unbounded in the complete metric, and is therefore not contained in a compact subgroup. Keith Conrad asks whether there is an example without the axiom of choice, and Jason De Vito asks whether there is an example using Lie groups. In fact, there is a cheap example using disconnected Lie groups. Let $G$ and $H$ be two connected Lie groups that are homeomorphic but not isomorphic. For instance, abelian $\mathbb{R}^3$, the universal cover $\widetilde{\text{SL}(2,\mathbb{R})}$, and the Heisenberg group of upper unitriangular, real $3 \times 3$ matrices are all homeomorphic, but not isomorphic. If $G'$ and $H'$ are $G$ and $H$ with the discrete topology, then $G' \times H$ and $G \times H'$ are explicitly isomorphic and explicitly homeomorphic. But they are not continuously isomorphic, because the connected component of the identity is $G$ for one of them but $H$ for the other one.
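To make the divisibility contrast concrete: in $\mathbb{Z}_2$ one can divide by $3$ indefinitely because $3$ is invertible modulo every power of $2$, while in $\mathbb{Q}_3$ each division by $3$ multiplies the $3$-adic absolute value by $3$. A tiny Python sketch (the truncation level $2^{60}$ is an arbitrary choice, and the three-argument `pow` with a negative exponent needs Python 3.8+):

    # 1/3 exists in Z_2: invert 3 modulo a high power of 2
    k = 60
    inv3 = pow(3, -1, 2 ** k)
    assert (3 * inv3) % 2 ** k == 1   # so x, x/3, x/9, ... stays inside the compact group Z_2

    # in Q_3, dividing by 3 blows up the 3-adic absolute value
    def abs3(num, den):
        """3-adic absolute value of the rational num/den (both nonzero integers)."""
        v = 0
        while num % 3 == 0:
            num //= 3; v += 1
        while den % 3 == 0:
            den //= 3; v -= 1
        return 3.0 ** (-v)

    print([abs3(1, 3 ** j) for j in range(5)])   # 1, 3, 9, 27, 81 -- unbounded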
{ "source": [ "https://mathoverflow.net/questions/44060", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
44,095
I started to learn about large cardinals a while ago, and I read that the existence, and even the consistency of the existence of an inaccessible cardinal, i.e. a limit cardinal which is additionally regular, is unprovable in ZFC. Nevertheless large cardinals were studied extensively in the last century and (apart from attempts that went too far, such as the Reinhardt cardinals) nobody ever found a contradiction to ZFC. As a consequence it seems to me that set theorists today don't consider the possibility of the non-existence of a large cardinal. Therefore my questions: Why is it so unreasonable to think that the existence of large cardinals contradicts ZFC? Are there any mathematicians that do believe that large cardinals don't exist? And what are their arguments? EDIT: I want to say thank you for all your very interesting answers and comments. It will take some time for me to fully understand them; however, I feel like I have already learned a lot due to this discussion. Thank you!
I do not know of any active set theorists who think large cardinals are inconsistent. At least, within the realm of cardinals we have seriously studied. [Reinhardt suggested an ultimate axiom of the form "there is a non-trivial elementary embedding $j:V\to V$ ". Though some serious set theorists found it of possible interest immediately following its formulation, Kunen quickly afterward showed that this is inconsistent, using choice. It is not known whether choice is needed, but current research suggests that, even without choice, natural strengthenings of this axiom may be inconsistent in $\mathsf{ZF}$ alone. This is hardly an argument against large cardinals in general. Instead, it provides us with natural limitations on their reach. For another example, see here .] There are several active set theorists who do not commit themselves one way or the other to the consistency of large cardinals, but use them if necessary, and do not object on mathematical grounds to arguments that involve them. For some of them, set theory is about all possible (natural) extensions of $\mathsf{ZFC}$ , and there are certainly many interesting such extensions (such as $V=L$ ) that rule out large cardinals. Thus, it is not that they consider the cardinals inconsistent. (I confess, this may be ignorance on my part.) And I know of only two mathematicians who years ago were serious set theorists and who have expressed doubts about the consistency of (certain) large cardinals. Neither is currently active within the field, and so their position should be taken with a grain of salt, since it missed the significant results from the late 80s that could very well have forced them to reconsider. Why do we expect $\mathsf{ZFC}$ to be consistent, to begin with? We expect more than mere consistency, of course, but doubting large cardinals usually means distrust in set theory as a whole. I am not a philosopher, so I will not discuss philosophical positions or justifications. A good reference for the heuristics behind the basic ZFC axioms is the wonderful paper Penelope Maddy. Believing the axioms. I , J. Symbolic Logic, 53 (2) , (1988), 481-511. MR0947855 (89i:03007) . Large cardinals are discussed in its follow-up, Penelope Maddy. Believing the axioms. II , J. Symbolic Logic, 53 (3) , (1988), no. 736-764. MR0960996 (89m:03007) . Maddy's two books on Platonism and Naturalism discuss extensively why the view of a set theoretic universe with large cardinals rather than not is the reasonable choice, given our current understanding, see Penelope Maddy. Realism in mathematics . The Clarendon Press, Oxford University Press, New York, 1990. MR1075998 (92h:00007) . and Penelope Maddy. Naturalism in mathematics . The Clarendon Press, Oxford University Press, New York, 1997. MR1699270 (2000e:00009) . The books present several subtle technical points that can only be completely understood once one is aware of the deep connections between large cardinals and (generic) absoluteness. Maddy's more recent views on the subject can be seen here: Penelope Maddy. Defending the axioms: on the philosophical foundations of set theory . Oxford University Press, Oxford, 2011. MR2779203 . How do set theorists measure the internal plausibility of large cardinal assumptions, beyond their usefulness in proving results? The point of the inner model program (and of its most recent offspring, descriptive inner model theory) is to develop fine structural ("$L$-like") models for large cardinals.
These models are canonical in several precise ways, and have a rich internal structure that many set theorists take as evidence of the consistency of the large cardinals under consideration. Thanks to its advances, we have a much clearer view of the set theoretic universe nowadays (for example, we now have the different covering lemmas, and several generic invariance results) than when the program began, motivated by what we now call Gödel's program. The program has currently reached well past Woodin cardinals, but is not yet at the level of supercompact cardinals. This can be interpreted as saying that, using the strongest tools currently at our disposal, we are fairly certain of the consistency of, say "there is a Woodin limit of Woodin cardinals". Time will tell whether the program will reach supercompactness. If it does not, this will provide us with strong evidence of their inconsistency, though I am not sure anybody actually expects this to be the outcome. John Steel and Tony Martin have over the years refined something they call "the speech", where they explain their position towards large cardinals. It is well worth reading, and trying to summarize it in a few lines would be an injustice. It can be found in these two postings to the Foundations of Mathematics (FOM) list: 1 , 2 (the notation here is $P_T =$ set of $\Pi^0_1$ consequences of $T$ ), and in the papers from the "Does mathematics need new axioms?" panel, see Solomon Feferman, Harvey M. Friedman, Penelope Maddy, and John R. Steel. Does mathematics need new axioms? , Bull. Symbolic Logic, 6 (4) ,(2000), 401–446. MR1814122 (2002a:03007) . Steel's own views are also presented in some detail in Maddy's books. For very recent developments, see his talk: John Steel. Gödel's program , given at the CSLI meeting at Stanford, June 1, 2013. At the risk of not being balanced, let me point out some highlights: We have a coherent picture of the universe of sets, with large cardinals. We can, within this picture, interpret theories where there are no such cardinals. However, we do not have such a coherent picture in the opposite direction. The consequences of large cardinals, at the arithmetic level (and more, as we climb up through the hierarchy) are compatible. The arithmetic consequences of any natural extension of $\mathsf{ZFC}$ fall somewhere within this hierarchy (as far as the theories we can currently analyze), even if the theory does not mention large cardinals. In fact, determinacy statements, incompatible with choice, also fall within this hierarchy and are mutually interpretable with large cardinals (again, as far as those theories we can currently analyze). This deep connections with determinacy are behind what we now call descriptive inner model theory, see Grigor Sargsyan. Descriptive inner model theory , Bull. Symbolic Logic, 19 (1) , (2013), 1-55. Large cardinals provide us with generic absoluteness, and generic absoluteness, a natural requirement if we are interested in understanding the projective theory of the reals, requires the consistency of large cardinals. See this answer for a bit more on this issue; let me emphasize that this is not some technical or artificial requirement, but rather a natural extension of basic results in classical descriptive set theory. Large cardinals seem inherently necessary to mathematical practice, not just set theory. Harvey Friedman has written extensively on this issue. In short: We have a very clear measure of progress understanding large cardinals and their consequences. 
By this measure, we can now understand many set theoretical issues that do not involve large cardinals but for which they are necessary in deeper ways (not just consistency-wise). This measure actually requires the large cardinals, we do not have anything like that without them. This measure is meaningful even in settings that are not set theoretical, and seems unavoidable even within mathematical practice (though it is perhaps too soon to tell how significant this will be at the end for "practicing mathematicians"). We do not have any serious mathematical model where large cardinals would be inconsistent, however, we have a serious program of research that would ultimately teach us that, were this the case. The program has provided us, instead, with many positive results (in particular, we have nice inner models for measurability, for strong cardinals, for Woodin cardinals, and we have nice inner models of models of determinacy, that capture the large cardinals that provide us with the consistency of the determinacy statements). To conclude, we understand (motivate/explain) large cardinals within the larger context of reflection principles, the simplest of which follow already from $\mathsf{ZFC}$ . (So, we have a natural generating principle for them.) On the other hand, I know of no objections to large cardinals beyond "they are too large" or "they do not feel right", neither of which seems mathematical to me. The first also seems particularly artificial. The only 'program' towards their inconsistency (that I am aware of) instead produced many interesting consequences for the partition calculus at the level of $0^\sharp$ (and is perhaps responsible for the early theory of $0^\sharp$ itself). As far as I understand, a similar attempt to disprove measurable cardinals resulted instead in the development of the covering lemma, which has since been one of the key tools to measure our understanding of particular large cardinals as part of the inner model program, see William J. Mitchell. The covering lemma . In Handbook of set theory. Vols. 1, 2, 3 , Matthew Foreman, and Akihiro Kanamori, eds., pp. 1497–1594, Springer, Dordrecht, 2010. MR2768697 . ( Wayback Machine ) Perhaps I should add that our intuitions about large cardinals do not come for free, but are the result of the programs mentioned above. I am in particular suspicious of a priori mistrust of large cardinals, since it tends to hide misunderstanding, or ignorance, of the actual mathematics involved in these programs.
{ "source": [ "https://mathoverflow.net/questions/44095", "https://mathoverflow.net", "https://mathoverflow.net/users/8996/" ] }
44,102
Ten years ago, when I studied in university, I had no idea about definable numbers , but I came to this concept myself. My thoughts were as follows: All numbers are divided into two classes: those which can be unambiguously defined by a limited set of their properties (definable) and those such that for any limited set of their properties there is at least one other number which also satisfies all these properties (undefinable). It is evident that since the number of properties is countable, the set of definable numbers is countable. So the set of undefinable numbers forms a continuum. It is impossible to give an example of an undefinable number and one researcher cannot communicate an undefinable number to the other. Whatever number of properties he communicates there is always another number which satisfies all these properties so the researchers cannot be confident whether they are speaking about the same number. However there are probability based algorithms which give an undefinable number as a limit, for example, by throwing dice and writing consecutive numbers after the decimal point. But the main question that bothered me was that the analysis course we received heavily relied on constructs such as "let $a$ be a number such that...", "for each $s$ in the interval..." etc. These seemed to heavily exploit the properties of definable numbers and as such one can expect the theorems of analysis to be correct only on the set of definable numbers. Even the definitions of arithmetic operations over reals assumed the numbers are definable. Unfortunately one cannot take an undefinable number to bring a counter-example just because there is no example of undefinable number. How can we know that all of those theorems of analysis are true for the whole continuum and not just for a countable subset?
The concept of definable real number, although seemingly easy to reason with at first, is actually laden with subtle metamathematical dangers to which both your question and the Wikipedia article to which you link fall prey. In particular, the Wikipedia article contains a number of fundamental errors and false claims about this concept. ( Update , April 2018: The Wikipedia article, Definable real numbers , is now basically repaired and includes a link to this answer.) The naive treatment of definability goes something like this: In many cases we can uniquely specify a real number, such as $e$ or $\pi$, by providing an exact description of that number, by providing a property that is satisfied by that number and only that number. More generally, we can uniquely specify a real number $r$ or other set-theoretic object by providing a description $\varphi$, in the formal language of set theory, say, such that $r$ is the only object satisfying $\varphi(r)$. The naive account continues by saying that since there are only countably many such descriptions $\varphi$, but uncountably many reals, there must be reals that we cannot describe or define. But this line of reasoning is flawed in a number of ways and ultimately incorrect. The basic problem is that the naive definition of definable number does not actually succeed as a definition. One can see the kind of problem that arises by considering ordinals, instead of reals. That is, let us suppose we have defined the concept of definable ordinal; following the same line of argument, we would seem to be led to the conclusion that there are only countably many definable ordinals, and that therefore some ordinals are not definable and thus there should be a least ordinal $\alpha$ that is not definable. But if the concept of definable ordinal were a valid set-theoretic concept, then this would constitute a definition of $\alpha$, making a contradiction. In short, the collection of definable ordinals either must exhaust all the ordinals, or else not itself be definable. The point is that the concept of definability is a second-order concept, that only makes sense from an outside-the-universe perspective. Tarski's theorem on the non-definability of truth shows that there is no first-order definition that allows us a uniform treatment of saying that a particular formula $\varphi$ is true at a point $r$ and only at $r$. Thus, just knowing that there are only countably many formulas does not actually provide us with the function that maps a definition $\varphi$ to the object that it defines. Lacking such an enumeration of the definable objects, we cannot perform the diagonalization necessary to produce the non-definable object. This way of thinking can be made completely rigorous in the following observations: If ZFC is consistent, then there is a model of ZFC in which every real number and indeed every set-theoretic object is definable. This is true in the minimal transitive model of set theory, by observing that the collection of definable objects in that model is closed under the definable Skolem functions of $L$, and hence by Condensation collapses back to the same model, showing that in fact every object there was definable. More generally, if $M$ is any model of ZFC+V=HOD, then the set $N$ of parameter-free definable objects of $M$ is an elementary substructure of $M$, since it is closed under the definable Skolem functions provided by the axiom V=HOD, and thus every object in $N$ is definable.
These models of set theory are pointwise definable , meaning that every object in them is definable in them by a formula. In particular, it is consistent with the axioms of set theory that EVERY real number is definable, and indeed, every set of reals, every topological space, every set-theoretic object at all is definable in these models. The pointwise definable models of set theory are exactly the prime models of the models of ZFC+V=HOD, and they all arise exactly in the manner I described above, as the collection of definable elements in a model of V=HOD. In recent work (soon to be submitted for publication), Jonas Reitz, David Linetsky and I have proved the following theorem: Theorem. Every countable model of ZFC and indeed of GBC has a forcing extension in which every set and class is definable without parameters. In these pointwise definable models, every object is uniquely specified as the unique object satisfying a certain property. Although this is true, the models also believe that the reals are uncountable and so on, since they satisfy ZFC and this theory proves that. The models are simply not able to assemble the definability function that maps each definition to the object it defines. And therefore neither are you able to do this in general. The claims made both in your question and in the Wikipedia page on the existence of non-definable numbers and objects are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property. Update. Since this question was recently bumped to the main page by an edit to the main question, I am taking this opportunity to add a link to my very recent paper "Pointwise Definable Models of Set Theory", J. D. Hamkins, D. Linetsky, J. Reitz , which explains some of these definability issues more fully. The paper contains a generally accessible introduction, before the more technical material begins.
{ "source": [ "https://mathoverflow.net/questions/44102", "https://mathoverflow.net", "https://mathoverflow.net/users/10059/" ] }
44,109
In most basic courses on general topology, one studies mainly Hausdorff spaces and finds that they fit quite well with our geometric intuition and generally, things work "as they should" (sequences/nets have unique limits, compact sets are closed, etc.). Most topological spaces encountered in undergraduate studies are indeed Hausdorff, often even normed or metrizable. However, at some point one finds that non-Hausdorff spaces do come up in practice, e.g. the Zariski topology in algebraic geometry, the Fell topology in representation theory, the hull-kernel topology in the theory of C*-algebras, etc. My question is: how should one think about (and work with) these topologies? I find it very difficult to think of such topological spaces as geometric objects, due to the lack of the intuitive Hausdorff axiom (and its natural consequences). With Hausdorff spaces, I often have some clear, geometric picture in my head of what I'm trying to prove and this picture gives good intuition to the problem at hand. With non-Hausdorff spaces, this geometric picture is not always helpful and in fact relying on it may lead to false results. This makes it difficult (for me, at least) to work with such topologies. As this question is somewhat ambiguous, I guess I should make it a community wiki. EDIT : Thanks for the replies! I got many good answers. It is unfortunate that I can accept just one.
For a variety of reasons, it's often useful to develop an intuition for finite topological spaces. Since the only Hausdorff finite spaces are discrete, one will have to deal with the non-Hausdorff case almost all the time. The fact of the matter is that the category of finite spaces is equivalent to the category of finite preorders, i.e., finite sets equipped with a reflexive transitive relation. In terms of a picture, draw an arrow $x \to y$ between points $x$ and $y$ whenever $x$ belongs to the closure of $y$ (or the closure of $x$ is contained in the closure of $y$). This defines a reflexive transitive relation. Two points $x$, $y$ have the same open neighborhoods if and only if $x \to y$ and $y \to x$. It follows that the topology is $T_0$ (the topology can distinguish points) if and only if the preorder is a poset, where antisymmetry of $\to$ is satisfied. The closure of a point $y$ is the down-set $\{x: x \to y\}$, and a set is closed iff it is downward closed in the preorder. In the finite case, I believe it is true that every closed irreducible set (one that isn't the union of two proper closed subsets) is the closure of a point = principal ideal; if the point is unique, the space is called sober . Sober spaces are the kinds of spaces that arise as underlying topological spaces of schemes, and it seems to be true that a finite space is sober iff it is $T_0$.
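To see the dictionary concretely, here is a tiny Python sketch (the three-point topology is just an illustrative choice) that computes the specialization preorder and the point closures from a list of open sets:

    # a topology on X = {0, 1, 2}, given by its open sets (illustrative example)
    X = {0, 1, 2}
    opens = [set(), {0}, {0, 1}, {0, 2}, {0, 1, 2}]

    def arrow(x, y):
        """x -> y iff x lies in the closure of {y}, i.e. every open set containing x contains y."""
        return all(y in U for U in opens if x in U)

    preorder = {(x, y) for x in X for y in X if arrow(x, y)}
    print(sorted(preorder))                       # reflexive and transitive, as expected

    for y in X:
        closure = {x for x in X if arrow(x, y)}   # the down-set of y
        print(f"closure of {{{y}}} = {closure}")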
{ "source": [ "https://mathoverflow.net/questions/44109", "https://mathoverflow.net", "https://mathoverflow.net/users/7392/" ] }
44,208
Ultrafinitism is (I believe) a philosophy of mathematics that is not only constructive, but does not admit the existence of arbitrarily large natural numbers. According to Wikipedia , it has been primarily studied by Alexander Esenin-Volpin. On his opinions page , Doron Zeilberger has often expressed similar opinions. Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective? Edit: Neel Krishnaswami in his answer gave a link to a paper by Vladimir Sazonov ( On Feasible Numbers ) that seems to go a ways towards giving a formal foundation to ultrafinitism. First, Sazonov references a result of Parikh's which says that Peano Arithmetic can be consistently extended with a set variable $F$ and axioms $0\in F$ , $1\in F$ , $F$ is closed under $+$ and $\times$ , and $N\notin F$ , where $N$ is an exponential tower of $2^{1000}$ twos. Then, he gives his own theory, wherein there is no cut rule and proofs that are too long are disallowed, and shows that the axiom $\forall x\ \log \log x < 10$ is consistent.
Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective? There are no foundations for ultrafinitism as satisfactory for it as (say) intuitionistic logic is for constructivism. The reason is that the question of what logic is appropriate for ultrafinitism is still an open one, for not one but several different reasons. First, from a traditional perspective -- whether classical or intuitionistic -- classical logic is the appropriate logic for finite collections (but not K-finite). The idea is that a finite collection is surveyable: we can enumerate and look at each element of any finite collection in finite time. (For example, the elementary topos of finite sets is Boolean.) However, this is not faithful to the ultra-intuitionist idea that a sufficiently large set is impractical to survey. So it shouldn't be surprising that more-or-less ultrafinitist logics arise from complexity theory, which identifies "practical" with "polynomial time". I know two strands of work on this. The first is Buss's work on $S^1_2$ , which is a weakening of Peano arithmetic with a weaker induction principle: $$A(0) \land (\forall x.\;A(x/2) \Rightarrow A(x)) \Rightarrow \forall x.\;A(x)$$ Then any proof of a forall-exists statement has to be realized by a polynomial time computable function. There is a line of work on bounded set theories, which I am not very familiar with, based on Buss's logic. The second is a descendant of Bellantoni and Cook's work on programming languages for polynomial time, and Girard's work on linear logic. The Curry-Howard correspondence takes functional languages, and maps them to logical systems, with types going to propositions, terms going to proofs, and evaluation going to proof normalization. So the complexity of a functional program corresponds in some sense to the practicality of cut-elimination for a logic. IIRC, Girard subsequently showed that for a suitable version of affine logic, cut-elimination can be shown to take polynomial time. Similarly, you can build set theories on top of affine logic. For example, Kazushige Terui has since described a set theory, Light Affine Set Theory , whose ambient logic is linear logic, and in which the provably total functions are exactly the polytime functions. (Note that this means that for Peano numerals, multiplication is total but exponentiation is not --- so Peano and binary numerals are not isomorphic!) The reason these proof-theoretic questions arise, is that part of the reason that the ultra-intuitionist conception of the numerals makes sense, is precisely because they deny large proofs . If you deny that large integers exist, then a proof that they exist, which is larger than the biggest number you accept, doesn't count! I enjoyed Vladimir Sazonov's paper "On Feasible Numbers" , which explicitly studies the connection. I should add that I am not a specialist in this area, and what I've written is just the fruits of my interest in the subject -- I have almost certainly overlooked important work, for which I apologize.
{ "source": [ "https://mathoverflow.net/questions/44208", "https://mathoverflow.net", "https://mathoverflow.net/users/1574/" ] }
44,244
E.T. Bell called Fermat the Prince of Amateurs. One hundred years ago Ramanujan amazed the mathematical world. In between were many important amateurs and mathematicians off the beaten path, but what about the last one hundred years? Is it still possible for an amateur to make a significant contribution to mathematics? Can anyone cite examples of important works done by amateur mathematicians in the last one hundred years? For a definition of amateur: I think that to make the term "amateur" meaningful, it should mean someone who has had no formal instruction in mathematics past undergraduate school and does not maintain any sort of professional connection with mathematicians in the research world. – Harry Gindi
About ten years ago Ahcène Lamari and Nicholas Buchdahl independently proved that all compact complex surfaces with even first Betti number are Kähler. This had been known since 1983, but earlier proofs made use of the classification of surfaces to reduce to hard case-by-case verification. At the time, Lamari was a teacher at a high school in Paris. Apparently he announced his result by crashing a conference in Paris and going up to Siu (who had proved the last case in the earlier proof in 1983) with a copy of his proof. Lamari's proof was published in the Annales de l'Institut Fourier in 1999 ( Courants kählériens et surfaces compactes , Annales de l'institut Fourier, 49 no. 1 (1999), p. 263-285, doi: 10.5802/aif.1673 ), next to Buchdahl's ( On compact Kähler surfaces , Annales de l'institut Fourier, 49 no. 1 (1999), p. 287-302, doi: 10.5802/aif.1674 ).
{ "source": [ "https://mathoverflow.net/questions/44244", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
44,269
Let $G$ be a group, $G'=[G, G]$. "Note that it is not necessarily true that the commutator subgroup $G'$ of $G$ consists entirely of commutators $[x, y], x, y \in G$ (see [107] for some finite group examples)." Quoted from http://www.math.ucdavis.edu/~kapovich/EPR/ggt.pdf page 8. Can anybody provide such examples? I can't find the book [107].
The problem is whether the commutator subgroup may contain elements that are not commutators. One example is given by free groups. For instance, in the free group of rank $4$, freely generated by $x$, $y$, $z$, and $w$, the element $[x,y][z,w]$ of the commutator subgroup cannot be written in the form $[a,b]$ for some $a,b$ in the group. The smallest finite examples are groups of order 96; there are two of them, nonisomorphic to each other. (This was a result in Robert Guralnick's thesis.) See this Math Stack Exchange question for a description of these groups, and some references.
{ "source": [ "https://mathoverflow.net/questions/44269", "https://mathoverflow.net", "https://mathoverflow.net/users/3922/" ] }
44,326
Given the vast number of new papers / preprints that hit the internet every day, one factor that may help papers stand out for a broader, though possibly more casual, audience is their title. This view was my motivation for asking this question almost 7 years ago ( wow! ), and it remains equally true today (those who subscribe to arXiv feeds, MO feeds, etc., may agree). I was wondering if the MO-users would be willing to share their wisdom with me on what makes the title of a paper memorable for them; or perhaps just cite an example of a title they find memorable? This advice would be very helpful to me (and perhaps others) in designing better, more informative titles (not only for papers, but also, for example, for MO questions). One title that I find memorable is: Nineteen dubious ways to compute the exponential of a matrix, by Moler and van Loan. The response to this question has been quite huge. So, what have I learned from it? A few things at least. Here is my summary of the obvious: Amongst the various "memorable" titles reported, some of the following are true: A title can be memorable, attractive, or even both (to oversimplify a bit); A title becomes truly memorable if the accompanying paper had memorable substance; A title can be attractive even without having memorable material. To reach the broadest audience, attractive titles are good, though mathematicians might sometimes feel irritated by needlessly cute titles. Titles that are bold, are usually short, have an element of surprise, but do not depart too much from the truth seem to be more attractive in general. 5.101 Mathematical succinctness might appeal to some people---but is perhaps not that memorable for me---so perhaps such titles are attractive, but maybe not memorable. If you are a bigshot, you can get away with pretty much any title!
I can't believe no one's mentioned this: Pavol Ševera, Some title containing the words "homotopy" and "symplectic", e.g. this one , arXiv:math/0105080
{ "source": [ "https://mathoverflow.net/questions/44326", "https://mathoverflow.net", "https://mathoverflow.net/users/8430/" ] }
44,561
Say that a number is an odd-bit number if the count of 1-bits in its binary representation is odd. Define an even-bit number analogously. Thus $541 = 1000011101_2$ is an odd-bit number, and $523 = 1000001011_2$ is an even-bit number. Are there, asymptotically, as many odd-bit primes as even-bit primes? For the first ten primes, we have $$ \lbrace 10, 11, 101, 111, 1011, 1101, 10001, 10011, 10111, 11101 \rbrace $$ with 1-bits $$ \lbrace 1, 2, 2, 3, 3, 3, 2, 3, 4, 4 \rbrace $$ and so the ratio of #odd to $n$ is $5/10=0.5$ at the 10-th prime. Here is a plot of this ratio up to $10^5$: (Vertical axis is mislabeled: It is #odd/$n$.) I would expect the #odd/$n$ ratio to approach $\frac{1}{2}$, except perhaps the fact that primes ($>2$) are odd might bias the ratio. The above plot does not suggest convergence by the 100,000-th prime (1,299,709). Pardon the naïveness of my question. Addendum : Extended the computation to the $10^6$-th prime (15,485,863), where it still remains 1.5% above $\frac{1}{2}$.
Yes. This was proven in C. Mauduit and J. Rivat, Sur un problème de Gelfond: la somme des chiffres des nombres premiers , Ann. Math. I found this by searching for "evil prime" and "odious prime" in the OEIS. More precisely, they prove the Gelfond conjecture : Let $s_q(p)$ denote the sum of the digits of $p$ in base $q$ . For $m, q$ with $\gcd(m, q-1) = 1$ there exists $\sigma_{q,m} > 0$ such that for every $a \in \mathbb{Z}$ we have $$| \{ p \le x : s_q(p) \equiv a \bmod m \} | = \frac{1}{m} \pi(x) + O_{q,m}(x^{1 - \sigma_{q,m}})$$ where $p$ is prime and $\pi(x)$ the usual prime counting function.
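For anyone who wants to reproduce the empirical counts, here is a short, self-contained Python sketch (the bound $10^6$ is an arbitrary choice); the theorem with $q=m=2$ says the ratio tends to $\frac{1}{2}$, but the convergence is visibly slow:

    def primes_below(n):
        """Simple Sieve of Eratosthenes."""
        is_prime = [True] * n
        is_prime[0] = is_prime[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if is_prime[i]:
                for j in range(i * i, n, i):
                    is_prime[j] = False
        return [p for p in range(n) if is_prime[p]]

    N = 10 ** 6
    ps = primes_below(N)
    odd = sum(bin(p).count("1") % 2 for p in ps)   # primes with an odd binary digit sum
    print(f"odd-bit primes below {N}: {odd} of {len(ps)}, ratio {odd / len(ps):.4f}")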
{ "source": [ "https://mathoverflow.net/questions/44561", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
44,593
I think we all secretly hope that in the long run mathematics becomes easier, in that with advances of perspective, today's difficult results will seem easier to future mathematicians. If I were cryogenically frozen today, and thawed out in one hundred years, I would like to believe that by 2110 the Langlands program would be reduced to a 10-page pamphlet (with complete proofs) that I could read over breakfast. Is this belief plausible? Are there results from a hundred years ago that have not appreciably simplified over the years? From the point of view of a modern mathematician, what is the hardest theorem proven a hundred years ago (or so)? The hardest theorem I can think of is the Riemann Mapping Theorem , which was first proposed by Riemann in 1852 and (according to Wikipedia) first rigorously proven by Caratheodory in 1912. Are there harder ones?
Difficulty is not additive, and measuring the difficulty of proving a single result is not a good measure of the difficulty of understanding the body of work in a given field as a whole. Suppose for instance that 100 years ago, there were ten important theorems in (say) complex analysis, each of which took 30 pages of elementary arguments to prove, with not much in common between these separate arguments. (These numbers are totally made up for the purposes of this discussion.) Nowadays, thanks to advances in understanding the "big picture", we can now describe the core theory of complex analysis in, say, 40 pages, but then each of the ten important theorems become one-page consequences of this theory. By doing so, we have actually made the total amount of pages required to prove each theorem longer (41 pages, instead of 30 pages); but the net amount of pages needed to comprehend the subject as a whole has shrunk dramatically (from 300 pages to 50). This is generally a worthwhile tradeoff (although knowing the "low tech" elementary proofs is still useful to round out one's understanding of the subject). There are very slick and short proofs now of, say, the prime number theorem, but actually this is not the best measure of how well we understand such a result, and more importantly how it fits in with the rest of its field. The fact that we can incorporate the prime number theorem into a much more general story of L-functions, number fields, Euler products, etc. which then ties in with many other parts of number theory is a much stronger sign that we understand number theory as a whole.
{ "source": [ "https://mathoverflow.net/questions/44593", "https://mathoverflow.net", "https://mathoverflow.net/users/3711/" ] }
44,692
If ${L}$ is a line bundle over a complex manifold, what does the square root line bundle $L^{\frac{1}{2}}$ mean? After some googling, I learned that there are certain conditions for the existence of a square root line bundle. In particular, I have the following questions: What is the square root of a line bundle, and what are the conditions for its existence? More importantly, how should one think of the square root line bundle? Intuitively, it seems that a square root bundle $L^{\frac{1}{2}}$ is a line bundle such that the tensor product of $L^{\frac{1}{2}}$ with itself gives the line bundle $L$ (correct me if I am wrong). I am particularly interested in the square root of the canonical line bundle over the Riemann sphere and the relation of square root bundles to spinors in QFT. Please provide some references to look at.
If $L$ is any line bundle over a complex manifold $X$, a square root of $L$ is a line bundle $M$ such that $M^{\otimes2}=L$. So your guess in part (2) is correct. This square root (if it exists) is not unique in general, and two of them will differ by a $2$-torsion line bundle, that is a line bundle $\eta$ such that $\eta^{\otimes 2}$ is trivial. In particular, if $\textrm{Pic}(X)$ is torsion free, then there is at most one square root. In some cases no square root exists. Some general results are: A line bundle of degree $0$ always has at least one square root. This is because $\textrm{Pic}^0(X)$ is a complex torus, hence a divisible group (in fact, there are roots of any order). A line bundle over a Riemann surface of genus $g$ has a square root if and only if it has even degree. The number of different square roots equals in this case $2^{2g}$, the number of $2$-torsion points in $\textrm{Pic}^0(X) \cong \textrm{Jac}(X)$. If $L$ is effective, that is $H^0(X, L) \neq 0$, and $Z \subset X$ is the zero locus of a holomorphic section of $L$, then the existence of a square root of $L$ is equivalent to the existence of a double cover $Y \to X$ branched over $Z$. In particular, non-trivial square roots of the trivial bundle correspond to non-trivial unramified double covers of $X$. The square root of the canonical bundle of the Riemann sphere $S$ is unique, since $\textrm{Pic}(S)=\mathbb{Z}$, and it is isomorphic to $\mathcal{O}(-1)$, the dual of the hyperplane bundle (the unique line bundle of degree $1$, whose transition function is $z \to 1/z$). A readable introduction to spinor bundles is provided in the book by Moore, "Lectures on Seiberg-Witten invariants".
{ "source": [ "https://mathoverflow.net/questions/44692", "https://mathoverflow.net", "https://mathoverflow.net/users/9534/" ] }
44,705
It seems that in most theorems outside of set theory where the size of some set is used in the proof, there are three possibilities: either the set is finite, countably infinite, or uncountably infinite. Are there any well known results within say, algebra or analysis that require some given set to be of cardinality strictly greater than $2^{\aleph_{0}}$? Perhaps in a similar vein, are any objects encountered that must have size larger than $2^{\aleph_{0}}$ in order for certain properties to hold?
The Zariski tangent space at any point of a positive dimensional $C^1$ -manifold $X$ has dimension $2^{2^{\aleph_0}}= 2^{\frak c}$ . Let me explain in the case when $X=\mathbb R$ . Consider the ring $C^1_0$ of germs of $C^1$ - functions at $0\in \mathbb R$ and its maximal ideal $\frak m $ of germs of functions vanishing at zero. The cotangent space at zero of $\mathbb R $ is $Cot_0=\frak m /\frak m ^2$ and the Zariski tangent space is $T_0=(Cot_0)^{\ast}$ (dual $\mathbb R$ -vector space). Now the germs of the functions $x^\alpha $ are linearly independent modulo $\frak m ^2$ for $\; \alpha\in(1,2)$ . Hence $\dim_{\mathbb R} (Cot_0)=\frak c$ and so indeed the Zariski tangent space at zero of $\mathbb R$ is $\dim_{\mathbb R} (T_0)=2^{\frak c}$ . It is noteworthy that many textbooks erroneously claim that for an $n$ -dimensional manifold of class $C^1$ the Zariski tangent space defined above has dimension $n$ . Or they make some equivalent mistake like claiming that the vector space of derivations of $C^1_0$ has dimension $n$ . An example of such an error is on page 42 in Claire Voisin's (excellent!) book Hodge Theory And Complex Algebraic Geometry I published by Cambridge University Press. To end on a positive note, the phenomenon I am describing only raises its ugly head for $C^k$ -manifolds with $k<\infty$ . For $n$ -dimensional $C^\infty$ -manifolds the Zariski tangent space at any point has dimension $n$ , as it should. The heart of the matter is that a $C^\infty$ function $f$ , on $\mathbb R$ say, which vanishes at zero can be written $f=xg$ for some function $g$ which is also of class $C^\infty$ , whereas $g$ would only be of class $C^{k-1}$ if $f$ were of class $C^k$ .
{ "source": [ "https://mathoverflow.net/questions/44705", "https://mathoverflow.net", "https://mathoverflow.net/users/6856/" ] }
44,774
Let $L: C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$ be a linear operator which satisfies: $L(1) = 0$ $L(x) = 1$ $L(f \cdot g) = f \cdot L(g) + g \cdot L(f)$ Is $L$ necessarily the derivative? Maybe if I throw in some kind of continuity assumption on $L$? If it helps you can throw the "chain rule" into the list of properties. I can see that $L$ must send any polynomial function to it's derivative. I want to say "just approximate any function by polynomials, and pass to a limit", but I see two complications: First $\mathbb{R}$ is not compact, so such an approximation scheme is not likely to fly. Maybe convolution with smooth cutoff functions could help me here. Even if I could rig up something I am concerned that if polynomials $p_n$ converge to $f$, I may not have $p_n'$ converging to $f'$. My Analysis skills are really not too hot so I would like some help. I am interested in this question because it is a slight variant of a characterization given here: Why do we teach calculus students the derivative as a limit? I am not sure whether or not those properties characterize the derivative, and they are closely related to mine. If these properties do not characterize the derivative operator, I would like to see another operator which satisfies these properties. Can you really write one down or do you need the axiom of choice? I feel that any counterexample would have to be very weird.
Yeah, these force it to be ordinary differentiation. We have to show that for each fixed $x_0 \in \mathbb{R}$, the composite $$C^\infty(\mathbb{R}) \stackrel{L}{\to} C^\infty(\mathbb{R}) \stackrel{ev_{x_0}}{\to} \mathbb{R}$$ is just the derivative at $x_0$. For each $f \in C^\infty(\mathbb{R})$, there is a $C^\infty$ function $g$ such that $$f(x) = f(x_0) + f'(x_0)(x - x_0) + (x - x_0)^2g(x)$$ and so $(ev_{x_0} L)(f) = ev_{x_0}(f'(x_0) + 2(x - x_0)g(x) + (x - x_0)^2 L(g)(x))$ by the properties you listed. Of course evaluation at $x_0$ kills the last two terms and one is left with $f'(x_0)$, as desired.
{ "source": [ "https://mathoverflow.net/questions/44774", "https://mathoverflow.net", "https://mathoverflow.net/users/1106/" ] }
44,801
So I was having tea with a colleague immensely more talented than myself and we were discussing his teaching algebraic number theory. He told me that he had given a few examples of abelian and solvable extensions unramified everywhere for his students to play with and that he had find this easy to construct with class field theory in the back of his head. But then he asked me if I knew how to construct an extension of number fields with Galois group $A_{5}$ and unramified everywhere. All I could say at the time (and now) is: There are Hilbert modular forms unramified everywhere. There are Hilbert modular forms whose residual $G_{{F}_{v}}$-representation mod $p$ is trivial for all $v|p$. There are Hilbert modular forms whose residual $G_{F}$-representation mod $p$ has image $A_{5}$ inside $\operatorname{GL}_{2}(\mathbb F_{p})$. Suppose there is a Hilbert modular form satisfying all three conditions. Then the Galois extension through which its residual $G_{F}$-representation factors would have Galois group $A_{5}$ and would be unramified everywhere. Can this be made to work? Regardless of the validity of this circle of idea, can you construct an extension of number fields unramified everywhere and with Galois group $A_{5}$?
If you take the splitting field of $x^5+ax+b$ and consider it as an extension of its quadratic subfield, then it will be unramified with Galois group contained in $A_5$ whenever $4a$ and $5b$ are relatively prime. This is a result of Yamamoto. For almost all $a$ and $b$ (specifically, on the complement of a thin set), the group is $A_5$. You might also enjoy this preprint of Kedlaya, which I found very readable. A note on Kedlaya's webpage, dated May 2003, says that he will not be publishing this because it has been superseded by a recent result of Ellenberg and Venkatesh. I assume he is referring to this paper, but I can't figure out why that one supersedes his.
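For concreteness, here is a minimal sympy sketch of the coprimality condition in this construction; the helper name and the sample pairs $(a,b)$ are only illustrative. The comment records the standard trinomial discriminant $\operatorname{disc}(x^5+ax+b)=2^8a^5+5^5b^4$, whose square class determines the quadratic subfield in question.

```python
from math import gcd
from sympy import symbols, discriminant

x = symbols('x')

def yamamoto_check(a, b):
    """Print Yamamoto's coprimality condition and the discriminant of x^5 + a*x + b."""
    d = discriminant(x**5 + a*x + b, x)      # equals 256*a**5 + 3125*b**4 for this trinomial
    print(f"a={a}, b={b}: gcd(4a, 5b) = 1? {gcd(4*a, 5*b) == 1}, disc = {d}")

# the first three pairs satisfy the hypothesis, the last two do not
for a, b in [(1, 1), (2, 1), (1, 3), (1, 2), (5, 1)]:
    yamamoto_check(a, b)
```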
{ "source": [ "https://mathoverflow.net/questions/44801", "https://mathoverflow.net", "https://mathoverflow.net/users/2284/" ] }
44,861
A sphere $S^n$ carries a natural Riemannian metric induced from $\mathbb{R}^{n+1}$. But the analogous construction does not always produce a pseudo-Riemannian metric, since the sum of two symmetric matrices which are not positive definite may have rank different from that of either matrix. So I wonder: what is the necessary and sufficient condition on the dimension for a sphere to be endowed with a Lorentz metric?
A compact simply connected manifold carries a Lorentz metric iff its Euler characteristic vanishes. Proof: If $\chi(M)=0$, $M$ carries a nowhere vanishing vector field $X$. Pick a Riemannian metric $g$ on $M$ (using a partition of unity argument) and denote by $\eta$ the 1-form dual to $X$: $\eta(Y):=g(X,Y)$ for all $Y\in TM$. Then $$g-2\frac{\eta\otimes\eta}{g(X,X)}$$ is a Lorentz metric. Conversely, if $M$ has a Lorentz metric $h$ of signature $(n-1,1)$, pick again a Riemannian metric $g$ and consider the symmetric endomorphism $A$ of $TM$ defined by $h(.,.)=g(A.,.)$. The eigenspaces of $A$ corresponding to the unique negative eigenvalue define a line sub-bundle of $TM$ which is trivial if $M$ is simply connected, so $\chi(M)=0$. Therefore, the answer to your question is: $S^n$ carries a Lorentz metric iff $n$ is odd.
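A quick check that the displayed tensor really has Lorentzian signature: write $h=g-2\frac{\eta\otimes\eta}{g(X,X)}$. For any $Y$ with $g(X,Y)=0$ one has $\eta(Y)=0$, hence $h(Y,Y)=g(Y,Y)>0$ and $h(X,Y)=0$, while $h(X,X)=g(X,X)-2\,\frac{g(X,X)^2}{g(X,X)}=-g(X,X)<0$; so $h$ has signature $(n-1,1)$ at every point.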
{ "source": [ "https://mathoverflow.net/questions/44861", "https://mathoverflow.net", "https://mathoverflow.net/users/1964/" ] }
44,866
It is ''well-known'' that the third stable homotopy group of spheres is cyclic of order $24$. It is also ''well-known'' that the quaternionic Hopf map $\nu:S^7 \to S^4$, an $S^3$-bundle, suspends to a generator of $\pi_8 (S^5)=\pi_{3}^{st}$. It is even better known that the complex Hopf map $\eta:S^3 \to S^2$ suspends to a generator of $\pi_4 (S^3) = \pi_{1}^{st} = Z/2$. For this, there is a reasonably elementary argument, see e.g. Bredon, Topology and Geometry, page 465 f: By the long exact sequence, $\pi_3 (S^2)=Z$, generated by $\eta$. By Freudenthal, $\pi_3 (S^2) \to \pi_4 (S^3) = \pi_{1}^{st}$ is surjective. Because $Sq^2: H^2(CP^2;F_2) \to H^4(CP^2;F_2)$ is nonzero, the order of $\eta$ in $\pi_{1}^{st}$ is at least $2$ (the relation between these things is that $\eta$ is the attaching map for the $4$-cell of $CP^2$). By a direct construction, $2\eta$ is stably nullhomotopic. Essentially, $\eta g = r \eta$, where $r,g$ are the complex conjugations on $S^2=CP^1$ and $S^3 \subset C^2$. $g$ is homotopic to the identity, $\eta=r\eta$. The degree of $r$ is $-1$, so after suspension (but not before), composition with $r$ becomes taking the additive inverse. Therefore $\eta=-\eta$ in the stable stem. My question is whether one can mimick substantial parts of this argument for $\nu$. Here is what I already know and what not: There is a short exact sequence $0 \to Z \to \pi_7 (S^4) \to \pi_6 (S^3) \to 0$ that can be split by the Hopf invariant. Thus $\nu$ generates a free summand. is the same argument as for $\eta$. using the Steenrod operations mod $2$ and mod $3$ on $HP^2$, I can see that the order of $\nu$ in $\pi_{3}^{st}$ is at least $6$. this is a complete mystery to me and certainly to others-:)). How can I bring $24$ in via geometry? How do I relate the quaternions and $24$? What one sees immediately is that one has to be careful when talking about conjugations in the quaternionic setting, in order to avoid proving the false result ''$2 \nu=0 \in \pi_{3}^{st}$''. I know that this result goes back to Serre, but I cannot find a detailed computation in his papers and it seems that the calculation using the Postnikov-tower and the Serre spectral sequence is a bit lengthy. There are three other approaches I know but they are much less elementary: Adams spectral sequence, J-homomorphism (enough to show that the order of $\nu$ is $24$), framed bordism (supported by things like Rochlin's theorem and Hirzebruch's signature formula). Any idea? P.S.: if there is a similar argument for the octonionic Hopf fibration $S^{15} \to S^8$ (the stable order is 240), that would be really great.
You said you don't want to talk about framed manifolds, but that's a good way of seeing the 24. $\nu$ is represented by $SU(2)$ in its invariant framing. Take a K3 surface. It's framed, and it has Euler characteristic 24. Take a vector field that has 24 isolated zeroes of index 1. If you cut out a little disk around each of these 24 zeroes, the boundary will be an $S^3 = SU(2)$ with its invariant framing. So the K3 surface minus these 24 little disks is a null-bordism of $24\nu$. Probably not suitable for your course, as you would have to explain framed bordism and K3 surfaces, but cute nonetheless I think. By the way, the analog for $\eta$ is the two-sphere (Euler characteristic 2).
{ "source": [ "https://mathoverflow.net/questions/44866", "https://mathoverflow.net", "https://mathoverflow.net/users/9928/" ] }
44,877
Suppose we are talking about graphs with $n$ labeled vertices. Which graphs are more common: connected or disconnected?
Connectedness wins, since the complement of any disconnected graph is connected. EDIT: Perhaps you'd like a proof of this. Let G be a disconnected graph, G' its complement. If v and u are in different components of G, then certainly they're connected by an edge in G'. And if they're in the same component of G, then there's some w in another component (since G was disconnected), so v-w-u is a path in G'.
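For small $n$ one can verify both the count and the complement argument by brute force; a short Python sketch (function and variable names are just for illustration):

```python
from itertools import combinations

def is_connected(n, edges):
    """Depth-first connectivity check on the vertex set {0, ..., n-1}."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

for n in (4, 5):
    pairs = list(combinations(range(n), 2))
    connected = disconnected = 0
    for mask in range(2 ** len(pairs)):
        edges = {pairs[i] for i in range(len(pairs)) if (mask >> i) & 1}
        if is_connected(n, edges):
            connected += 1
        else:
            disconnected += 1
            # the argument above: the complement of a disconnected graph is connected
            assert is_connected(n, set(pairs) - edges)
    print(f"n={n}: {connected} connected vs {disconnected} disconnected labelled graphs")
```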
{ "source": [ "https://mathoverflow.net/questions/44877", "https://mathoverflow.net", "https://mathoverflow.net/users/979/" ] }
44,979
Let $f:[0,1]\to[0,1]$ be the classical devil's staircase. Has anybody ever computed (or studied) the Fourier coefficients of $f(x)$? Related question: is the Fourier series of $f(x)-x$ normally convergent (with respect to the uniform norm)?
The Fourier transform of the derivative $\mu$ of the devil's staircase is explicitly stated on the Wikipedia page of the Cantor distribution, in the table at the right, under the heading "cf" (characteristic function). Its value is $$ \int_0^1 e^{itx} d\mu(x) = e^{it/2}\ \ \prod_{k=1}^\infty \cos(t/3^k)$$ Just multiply by $-1/it$, add $1/it$, and you get the Fourier transform of the devil's staircase. A word on the proof. The Cantor distribution is the weak limit of the functions obtained by summing the indicator functions of the $2^n$ intervals generating the Cantor set at the $n$th step (after renormalization). The Fourier transform of these sums can be computed explicitly. Then let $n$ go to infinity.
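As a numerical sanity check of the product formula, one can compare it against a Monte Carlo average of $e^{itX}$ for $X=\sum_k 2\varepsilon_k 3^{-k}$ with independent fair bits $\varepsilon_k$, which has the Cantor distribution. A rough Python sketch (sample sizes are arbitrary, and it takes a few seconds to run):

```python
import cmath
import math
import random

def cantor_cf_formula(t, terms=60):
    """e^{it/2} * prod_{k>=1} cos(t/3^k), truncated; the omitted factors are ~1."""
    prod = 1.0
    for k in range(1, terms + 1):
        prod *= math.cos(t / 3 ** k)
    return cmath.exp(0.5j * t) * prod

def cantor_cf_monte_carlo(t, samples=100_000, depth=35):
    """Average e^{itX} over random Cantor-distributed X = sum_k 2*eps_k/3^k, eps_k in {0,1}."""
    total = 0j
    for _ in range(samples):
        x = sum(2 * random.getrandbits(1) / 3 ** k for k in range(1, depth + 1))
        total += cmath.exp(1j * t * x)
    return total / samples

random.seed(0)
for t in (1.0, 5.0, 20.0):
    print(t, cantor_cf_formula(t), cantor_cf_monte_carlo(t))
```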
{ "source": [ "https://mathoverflow.net/questions/44979", "https://mathoverflow.net", "https://mathoverflow.net/users/7979/" ] }
45,008
Is it possible to partition any rectangle into congruent isosceles triangles?
No. Note that the acute angle of your triangle must divide $\pi/2$ (look at a corner), so there are countably many such triangles (up to similarity), and hence you get only a countable set of possible ratios of sides.
{ "source": [ "https://mathoverflow.net/questions/45008", "https://mathoverflow.net", "https://mathoverflow.net/users/2884/" ] }
45,036
My friend and I are attempting to learn about spectral sequences at the moment, and we've noticed a common theme in books about spectral sequences: no one seems to like talking about differentials. While there are a few notable examples of this (for example, the transgression), it seems that by and large one is supposed to use the spectral sequence like one uses a long exact sequence of a pair- hope that you don't have to think too much about what that boundary map does. So, after looking at some of the classical applications of the Serre spectral sequence in cohomology, we decided to open up the black box, and work through the construction of the spectral sequence associated to a filtration. And now that we've done that, and seen the definition of the differential given there... we want some examples. To be more specific, we were looking for an example of a filtration of a complex that is both nontrivial (i.e. its spectral sequence doesn't collapse at the $E^2$ page or anything silly like that) but still computable (i.e. we can actually, with enough patience, write down what all the differentials are on all the pages). Notice that this is different than the question here: Simple examples for the use of spectral sequences , though quite similar. We are looking for things that don't collapse, but specifically for the purpose of explicit computation (none of the answers there admit explicit computation of differentials except in trivial cases, I think). For the moment I'm going to leave this not community wikified, since I think the request for an answer is specific and non-subjective enough that a person who gives a good answer deserves higher reputation for it. If anyone with the power to thinks otherwise, then feel free to hit it with the hammer.
Two simple examples with lots of interesting differentials are given by the Serre spectral sequences for integer homology (rather than cohomology) for the fibrations $$K({\mathbb Z}_2,1) \to K({\mathbb Z}_4,1)\to K({\mathbb Z}_2,1)$$ and $$K({\mathbb Z}_2,1) \to K({\mathbb Z},2) \to K({\mathbb Z},2)$$ where in the second case the map $K({\mathbb Z},2) \to K({\mathbb Z},2)$ induces multiplication by $2$ on $\pi_2$. In both cases one knows the homology of all three spaces and this allows one to work out what all the differentials must be. The differentials give a real shoot-out, with nontrivial differentials on more than one page, and in the second case there are nontrivial differentials on infinitely many pages. The best thing is to work everything out oneself, but if you want to check your answers these two examples are worked out as Examples 1.6 and 1.11 in Chapter 1 of my spectral sequence notes, available on my webpage. These examples may not really be the sort of thing you're looking for since they involve computing differentials purely formally, not by actually digging into the construction of the spectral sequence. But of course a lot of spectral sequence calculations have to be formal if one is to have any chance of succeeding.
{ "source": [ "https://mathoverflow.net/questions/45036", "https://mathoverflow.net", "https://mathoverflow.net/users/6936/" ] }
45,098
The probability of a random walk returning to its origin is 1 in two dimensions (2D) but only 34% in three dimensions: This is Pólya's theorem. I have learned that in 2D the condition of returning to the origin holds even for step-size distributions with finite variance, and as Byron Schmuland kindly explained in this Math.SE posting, even for distributions with infinite variance, recurrence depends upon the details of the step-length tail distribution. But this is all in 2D. My question is: Are there conditions on the step-size and step-direction distributions in three dimensions (3D) that ensure that the walk will return to the origin with probability 1? Of course I exclude here a step-direction distribution that squashes 3D → 2D. But perhaps partial dimensional compression suffices? (3D random-walk image credit to http://logo.twentygototen.org/.)
For a fairly robust intuitive argument, think of a random walk in $\mathbb{R}^d$ as the "product" of $d$ one-dimensional walks in $\mathbb{R}^1$ . For a (finite variance) random walk in $\mathbb{R}^1$ , the probability the random walk is within $O(1)$ of the origin after $n$ steps scales like $n^{-1/2}$ . If the $d$ -dimensional random walk were to literally just be the independent product of $d$ one-dimensional walks, this would mean that in $\mathbb{R}^d$ the probability the random walk is near the origin after $n$ steps would be about $n^{-d/2}$ , and indeed, this answer is correct. Roughly speaking, then, the reason random walk changes behavior between $d=2$ and $d=3$ is that this is when $\sum_n n^{-d/2}$ switches from divergent to convergent. This intuition suggests that if your walk is "truly" at least $(2+\epsilon)$ -dimensional for some $\epsilon > 0$ , then it should be transient (if you're willing to accept this intuition of $n^{-d/2}$ behavior for fractional $d$ ). Terry Lyons has derived a necessary and sufficient condition for the transience of a reversible Markov chain which I think formalizes and extends this intuition. He in particular uses it to prove a necessary and sufficient condition for the transience of simple random walk on "wedges" in $\mathbb{Z}^d$ . Specializing his result even further, he mentions that, letting $\Omega$ be the subgraph of $\mathbb{Z}^3$ with $$ \Omega=\{(x,y,z) \in \mathbb{Z}^3, y \leq x, x \leq (\log(z+1))^{\alpha}\} $$ then the simple random walk on $\Omega$ is transient whenever $\alpha > 1$ . (The same would be true for any finite variance random walk constrained to lie in $\Omega$ , though I'm not sure Terry Lyons' theorem will prove this in full generality.) The graph $\Omega$ is just a very slight "fattening" of part of $\mathbb{Z}^2$ , and the walk is already transient. In a sense, random walks in $\mathbb{Z}^2$ only "just" fail to be transient, and if you go above $\mathbb{Z}^2$ in any way you will immediately be transient.
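A crude Monte Carlo illustration of this contrast for the simple random walk (parameters are arbitrary and the run takes a little while; with a finite step cap the 2D estimate approaches 1 only very slowly, roughly like $1-c/\log N$, while the 3D estimate settles near Pólya's value of about $0.34$):

```python
import random

def return_frequency(dim, walks=2_000, max_steps=5_000):
    """Fraction of simple random walks on Z^dim that revisit the origin within max_steps."""
    returned = 0
    for _ in range(walks):
        pos = [0] * dim
        for _ in range(max_steps):
            axis = random.randrange(dim)
            pos[axis] += random.choice((-1, 1))
            if not any(pos):          # back at the origin
                returned += 1
                break
    return returned / walks

random.seed(1)
for d in (2, 3):
    print(d, return_frequency(d))
```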
{ "source": [ "https://mathoverflow.net/questions/45098", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
45,116
It is well known that the total space of the tautological line bundle $\mathcal{O}(-1)$ over projective space $\mathbb{P}^n$ is a closed subvariety of $\mathbb{P}^n\times\mathbb{A}^{n+1}$. My question is how to realize the total space of $\mathcal{O}(1)$ over $\mathbb{P}^n$ in a similar manner, i.e. I need an embedding of $Tot(\mathcal{O}(1))$ in a simple variety, together with defining equations. Thanks.
It is the complement $\mathbb{P}^{n+1} - \{x\}$ of a point in a projective space.
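To spell out the identification, under one standard set of conventions: put $x=[0:\cdots:0:1]$ and let $\pi\colon\mathbb{P}^{n+1}\setminus\{x\}\to\mathbb{P}^n$ be the projection $[z_0:\cdots:z_{n+1}]\mapsto[z_0:\cdots:z_n]$; every fiber is an affine line. On the chart $U_i=\{z_i\neq 0\}$ of $\mathbb{P}^n$ one can take $s_i=z_{n+1}/z_i$ as a fiber coordinate, and on overlaps $s_j=(z_i/z_j)\,s_i$, which is the cocycle of $\mathcal{O}(1)$ in the convention where its global sections are the linear forms. Consistently, the sections of $\pi$ are exactly the maps $[z_0:\cdots:z_n]\mapsto[z_0:\cdots:z_n:\ell(z_0,\dots,z_n)]$ with $\ell$ a linear form, an $(n+1)$-dimensional family, so the bundle cannot be $\mathcal{O}(-1)$, which has no nonzero global sections.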
{ "source": [ "https://mathoverflow.net/questions/45116", "https://mathoverflow.net", "https://mathoverflow.net/users/10626/" ] }
45,150
There seem to be two conflicting definitions for p-adic valuation in the literature. Firstly, for any non-zero integer n, we have $\nu=\nu_p(n)$ is the greatest non-negative integer such that $p^\nu$ divides $n$. Secondly, we have $|n|_p$ which is defined as $1/p^\nu$. [These definitions can be extended to the rationals.] $\nu$ is defined as the p-adic valuation in Khrennikov, Nilson, P-adic deterministic and random dynamics (for example) and $|\cdot|_p$ is defined as the p-adic valuation in Khrennikov, P-adic and group valued probabilities , in Harmonic, wavelet and p-adic analysis (for example). Question: Is there a preferred definition for p-adic valuation?
I will explain what's going on. We call $\lvert x\rvert_p$ the $p$ -adic absolute value of $x$ and $v_p(x)$ the $p$ -adic valuation of $x$ . The distinction that is made by the two terms "absolute value" and "valuation" is completely standard… in English. However, Khrennikov is originally from Russia and in Russian there is one term for both concepts (нормирование = normirovanie, with stress on the second syllable -- I am not making that up, but see comments below this answer about stress on derived words in Russian, like verbs becoming nouns or nouns becoming adjectives). There is a term "absolute value" in Russian, but it is not an abstract concept; it refers only to the usual absolute value on the real or complex numbers (and quaternions?). This is perhaps why Khrennikov is using the term "valuation" incorrectly to refer to an absolute value function. (I'm giving a course in Moscow this semester and I found this point frustrating when I was preparing my initial lectures. In different books I found the same word used for an absolute value and for a valuation and couldn't find the term that exclusively means absolute value. Eventually I determined there isn't one; you just know by context what meaning is intended. Native speakers are welcome to correct me here.) UPDATE (3 years later): I learned from a student in St. Petersburg that the mathematicians there use separate terms for an absolute value $\lvert\cdot\rvert$ on a field and its corresponding valuation $v$ : they call $\lvert x\rvert$ the norm (норма) of $x$ and $v(x)$ the exponent (показатель) of $x$ . UPDATE (11 years later): Consistent with Laurent's answer, the Russian Wikipedia page for absolute value (find the English one and then click on Русский) refers to an absolute value as нормирование, a word I mentioned in the first paragraph above, and a valuation as экспоненциальное нормиорование, where the word in front of нормирование is eksponentsialnoe = exponential.
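To make the two conventions concrete, here is a small Python sketch (function names are just illustrative) computing the valuation $v_p$ and the absolute value $|\cdot|_p=p^{-v_p}$ of a nonzero rational number:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation v_p(x) of a nonzero rational x (an integer, possibly negative)."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("v_p(0) is +infinity by convention")
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def abs_p(x, p):
    """p-adic absolute value |x|_p = p**(-v_p(x))."""
    return Fraction(1, p) ** vp(x, p)

print(vp(40, 2), abs_p(40, 2))                          # 3, 1/8
print(vp(Fraction(5, 9), 3), abs_p(Fraction(5, 9), 3))  # -2, 9
```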
{ "source": [ "https://mathoverflow.net/questions/45150", "https://mathoverflow.net", "https://mathoverflow.net/users/2264/" ] }
45,159
I know the definition of symplectic structure, symplectic group, and so on. But what does the word "symplectic" itself mean? Meta question: I have many other mathematical words whose etymologies are obscure to me. Is it OK for me to ask one question per such word?
The term "symplectic group" was suggested in The Classical Groups: their invariants and representations (1939, p. 165) by Herman Weyl: The name "complex group" formerly advocated by me in allusion to line complexes, as these are defined by the vanishing of antisymmetric bilinear forms, has become more and more embarrassing through collision with the word "complex" in the connotation of complex number. I therefore propose to replace it by the corresponding Greek adjective "symplectic." Dickson calls the group the "Abelian linear group" in homage to Abel who first studied it. Take a look at the Earliest Known Uses of Some of the Words of Mathematics web page.
{ "source": [ "https://mathoverflow.net/questions/45159", "https://mathoverflow.net", "https://mathoverflow.net/users/5420/" ] }
45,185
Many mathematicians know that Lewis Carroll was quite a good mathematician, who wrote about logic (paradoxes) and determinants. He found an expansion formula, which bears his real name (Charles Lutwidge) Dodgson. Needless to say, L. Carroll was his pseudonym, used in literature. Another (alive) mathematician writes under his real name and under a pseudonym (John B. Goode). (That person, by the way, is Bruno Poizat: it's no secret, even MathSciNet knows it.) What other mathematicians (say dead ones) had a pseudonym, either within their mathematical activity, or in a parallel career ? Of course, don't count people who changed name at some moment of their life because of marriage, persecution, conversion, and so on. Edit . The answers and comments suggest that there are at least four categories of pseudonyms, which don't exhaust all situations. Professional mathematicians, who did something outside of mathematics under a pseudonym (F. Hausdorff - Paul Mongré , E. Temple Bell - John Taine ), People doing mathematics under a pseudonym, and something else under their real name (Sophie Germain - M. Le Blanc , W. S. Gosset - Student )), Professional mathematicians writing mathematics under both their real name and a pseudonym (B. Poizat - John B. Goode ), Collaborative pseudonyms ( Bourbaki, Blanche Descartes )
Monsieur Antoine Auguste Le Blanc. (Sophie Germain, 1776–1831) Sophie Germain hid behind the male pseudonym "M. Le Blanc" to study at the École Polytechnique and to be taken seriously in mail correspondence with other mathematicians, including Lagrange and Gauss.
{ "source": [ "https://mathoverflow.net/questions/45185", "https://mathoverflow.net", "https://mathoverflow.net/users/8799/" ] }
45,212
The first time I got in touch with the abstract notion of a sheaf on a topological space $X$, I thought of it as something which assigns to an open set $U$ of $X$ something like the ring of continuous functions $\hom(U,\mathbb{R})$. People said that sections of a sheaf $F$, i.e. elements of $F(?)$, are something which allow 'glueing' like in the example: if two functions $f:U\to\mathbb{R}$ and $g:V\to\mathbb{R}$ coincide on the intersection $U\cap V$ there is a unique function $U\cup V\to\mathbb{R}$ restricting to $f$ and $g$. So a sheaf consists of 'glueable' objects. A presheaf, say of rings, on a topological space $X$ is a functor $F:Op(X)^{op}\to Rng$ where $Op(X)$ denotes the category of open sets of $X$. One may generalize all this using the terms 'site' and 'topos' but let's consider this easy situation. A sheaf is a presheaf fulfilling an extra condition, so there is an inclusion of categories $$ Pre(X)\leftarrow Shv(X):i. $$ Please excuse the awful notation but this inclusion functor admits a left-adjoint, the sheafification functor $$ f:Pre(X)\leftrightarrow Shv(X):i. $$ Since getting in touch with schemes, I think of a presheaf or a sheaf as a space. There is a notion of a 'stalk' $F_x\in Rng$ at a point $x$ of $X$. I think of a stalk as the point $x$ of the space $F$. The inclusion functor and the sheafification functor both respect the stalks. For example the sheafification of a constant presheaf is a locally constant sheaf. My first question is: How should I really think about sheafification? A presheaf of sets is the canonical co-completion of a category: you take a (small) category $S$ which does not allow glueing (= does not have all colimits) and then $Pre(S)$ has all colimits. $S$ is fully and faithfully embedded into $Pre(S)$ with the Yoneda embedding $Y:S\to Pre(S)$. This functor does not respect colimits, so, loosely speaking, the way of glueing is not respected in this transition. Maybe considering sheaves instead of presheaves is a way of repairing this failure. My second question is: With respect to the interpretation above, what really makes the difference between a presheaf and a sheaf, and how should I visualize that difference if I think of a presheaf as if it were a space? Thank you.
There are two ways a presheaf can fail to be a sheaf. It has local sections that should patch together to give a global section, but don't, It has non-zero sections which are locally zero. When dividing the problems into two classes, it is easy to see what sheafifying does. It adds the missing sections from the first problem, and it throws away the extra sections from the second problem. The latter kind of sections tend to be easier to notice, but are less common. Usually, when a construction or functor must be sheafified, it has local sections that should patch together but don't. A simple example of a presheaf with this property is the presheaf $F_{p=q}$ of continuous functions on the circle $S^1$ which have the same value at two distinct points $p,q\in S^1$. When I restrict to an open neighborhood of $p$ that doesn't have $q$, the condition on their values goes away. Because the same thing is true for open neighborhoods of $q$ which don't contain $p$, the condition on the functions in this presheaf has no effect on sufficiently small open sets. It follows that this presheaf is locally the same as the sheaf of continuous functions. Therefore, for any function on $S^1$ which has different values on $p$ and $q$, I can restrict it to an open cover where each local section is in $F_{p=q}$, but this function is not in $F_{p=q}$. This is why $F_{p=q}$ is not a sheaf. When we sheafify, we just add in all these sections, to get the full sheaf of continuous functions. This is clear, because any two sheaves which agree locally are the same (though, I mean that the local sections and local restriction maps agree). This example really does come up in examples. Consider the map $S^1\rightarrow \infty$, where $\infty$ is the topological space which is $S^1$ with $p$ and $q$ identified. If I pull back the sheaf of functions on $\infty$ in the naive way, the resulting presheaf on $S^1$ is $F_{p=q}$. To get a sheaf, we need to sheafify.
{ "source": [ "https://mathoverflow.net/questions/45212", "https://mathoverflow.net", "https://mathoverflow.net/users/2625/" ] }
45,347
Hartogs' theorem says that every holomorphic function whose undefined locus has codimension at least 2 can be extended to the whole domain. I saw people saying this corresponds to the (S2) property of a ring. But I can't see why this is true. Can anybody explain this or give a heuristic argument?
Let $\mathscr F$ be a coherent sheaf on a noetherian scheme $X$ and assume that ${\rm supp}\mathscr F=X$. Let $Z\subset X$ be a subscheme of codimension at least $2$ and $U=X\setminus Z$. Let $\iota:U\hookrightarrow X$ denote the natural embedding and assume that $\mathcal F_x$ is $S_2$ for every $x\in Z$. Now the $S_2$ assumption implies that $$ \mathscr H^0_Z(X,\mathscr F)= \mathscr H^1_Z(X,\mathscr F)=0 $$ and the Hartogs type extension is equivalent to $$ \iota_*\iota^*\mathscr F\simeq \mathscr F. $$ Finally one has the exact sequence $$ \mathscr H^0_Z(X,\mathscr F) \to \mathscr F\to \iota_*\iota^*\mathscr F \to \mathscr H^1_Z(X,\mathscr F).$$ [See also this MO answer ]
{ "source": [ "https://mathoverflow.net/questions/45347", "https://mathoverflow.net", "https://mathoverflow.net/users/1657/" ] }
45,378
I strongly believe that - given the rules of Conway's Game of Life and an initial configuration - it is not decidable by a Turing Machine whether a given pattern will emerge, let alone as a stable pattern, be it static, moving, and/or rotating. How can this be proven? I guess, this kind of uncomputability would go far beyond the "simple" unpredictability of non-linear systems.
Conway's game of Life can simulate a universal Turing machine which means that it is indeed undecidable by reduction from the halting problem. You can program this Turing machine in the game of Life so that it builds some pattern when it halts that doesn't occur while it's still running. Then the pattern will be built if and only if the Turing machine halts.
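The rules themselves fit in a few lines of code; the following Python sketch (names illustrative) implements one generation on a set of live cells and checks the familiar fact that a glider reproduces itself shifted by $(1,1)$ after four generations; glider streams are the signals used in such Turing-machine constructions.

```python
from collections import Counter

def life_step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a cell is alive next generation iff it has 3 live neighbours,
    # or 2 live neighbours and is currently alive
    return {c for c, n in neighbour_counts.items() if n == 3 or (n == 2 and c in cells)}

# A glider: after 4 generations the same pattern reappears, translated by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
print("glider reproduced itself, shifted by (1, 1)")
```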
{ "source": [ "https://mathoverflow.net/questions/45378", "https://mathoverflow.net", "https://mathoverflow.net/users/2672/" ] }
45,448
The first infinite cardinal, $\aleph_0$, has many large cardinal properties (or would have many large cardinal properties if not deliberately excluded). For example, if you do not impose uncountability as part of the definition, then $\aleph_0$ would be the first inaccessible cardinal, the first weakly compact cardinal, the first measureable cardinal, and the first strongly compact cardinal. This is not universally true ($\aleph_0$ is not a Mahlo cardinal), so I am wondering how widespread of a phenomenon is this. Which large cardinal properties are satisfied by $\aleph_0$, and which are not? There is a philosophical position I have seen argued, that the set-theoretic universe should be uniform, in that if something happens at $\aleph_0$, then it should happen again. I have seen it specifically used to argue for the existence of an inaccessible cardinal, for example. The same argument can be made to work for weakly compact, measurable, and strongly compact cardinals. Are these the only large cardinal notions where it can be made to work? (Trivially, the same argument shows that there's a second inaccessible, a second measurable, etc., but when does the argument lead to more substantial jump?) EDIT: Amit Kumar Gupta has given a terrific summary of what holds for individual large cardinals. Taking the philosophical argument seriously, this means that there's a kind of break in the large cardinal hierarchy. If you believe this argument for large cardinals, then it will lead you to believe in stuff like Ramsey cardinals, ineffable cardinals, etc. (since measurable cardinals have all those properties), but this argument seems to peter out after a countable number of strongly compact cardinals. This doesn't seem to be of interest in current set-theoretical research, but I still find it pretty interesting.
This probably isn't the kind of thing anyone just knows off hand, so anyone who's going to answer the question is just going to look at a list of large cardinal axioms and their definitions, and try to see which ones are satisfied by $\aleph _0$ and which aren't. You could've probably done this just as well as I could have, but I decided I'd do it just for the heck of it. First of all, this doesn't cover all large cardinal axioms. Second, many large cardinal axioms have different formulations which end up being equivalent for uncountable cardinals, or perhaps inaccessible cardinals, but may end up inequivalent in the context of $\aleph _0$. So even for the large cardinals that I'll look at, I might not look at all possible formulations. weakly inaccessible - yes, obviously inaccessible - yes, obviously Mahlo - no, since the only finite "inaccessibles" are 0, 1, and 2 as noted by Michael Hardy in the comments, and the only stationary subsets of $\omega$ are the cofinite ones weakly compact: in the sense of the Weak Compactness Theorem - yes, by the Compactness Theorem in the sense of being inaccessible and having the tree property - yes, by Konig's Lemma in the sense of $\Pi ^1 _1$-indescribability - no, it's not even $\Pi ^0 _2$-indescribable as witnessed by the sentence $\forall x \exists y (x \in y)$ indescribable - no, since it's not even $\Pi ^0 _2$-indescribable Jonsson - no, the algebra $(\omega, n \mapsto n \dot{-} 1)$ has no proper infinite subalgebra Ramsey - no, the function $F : [\omega ]^{< \omega} \to \omega$ defined by $F(x) = 1$ if $|x| \in x$ and $0$ otherwise has no infinite homogeneous set measurable: in the sense of ultrafilters - yes, by Zorn's Lemma, and because filters are $\omega$-complete by definition, i.e. closed under finite intersections in the sense of elementary embeddings - no, obviously strong - no, obviously (taking the elementary embedding definition) Woodin - ditto strongly compact: in the sense of the Compactness Theorem - yes, by the Compactness Theorem in the sense of complete ultrafilters - yes, as in the case of measurables in the sense of fine measures - yes, by Zorn's Lemma, and because filters are $\omega$-complete by definition supercompact: in the sense of normal measures - no, if $x \subset \lambda$ is finite and $X$ is the collection of all finite subsets of $\lambda$ which contain $x$, then the function $f : X \to \lambda$ defined by $f(y) = \max (y)$ is regressive, but for any $Y \subset X$, if $f$ is constant on $Y$ with value $\alpha$, then Y avoids the collection of finite subsets of $\lambda$ which contain $\{ \alpha + 1\}$ and hence Y cannot belong to any normal measure on $P_{\omega }(\lambda)$ in the sense of elementary embeddings - no, obviously Vopenka - no, take models of the empty language of different (finite) sizes huge - no, obviously (taking the elementary embedding definition)
{ "source": [ "https://mathoverflow.net/questions/45448", "https://mathoverflow.net", "https://mathoverflow.net/users/3711/" ] }
45,477
Let's call a function f:N→N half-exponential if there exist constants 1<c<d such that for all sufficiently large n, c n < f(f(n)) < d n . Then my question is this: can we prove that no half-exponential function can be expressed by composition of the operations +, -, *, /, exp, and log, together with arbitrary real constants? There have been at least two previous MO threads about the fascinating topic of half-exponential functions: see here and here . See also the comments on an old blog post of mine. However, unless I'm mistaken, none of these threads answer the question above. (The best I was able to prove was that no half-exponential function can be expressed by monotone compositions of the operations +, *, exp, and log.) To clarify what I'm asking for: the answers to the previous MO questions already sketched arguments that if we want (for example) f(f(x))=e x , or f(f(x))=e x -1, then f can't even be analytic , let alone having a closed form in terms of basic arithmetic operations, exponentials, and logs. By contrast, I don't care about the precise form of f(f(x)): all that matters for me is that f(f(x)) has an asymptotically exponential growth rate. I want to know: is that hypothesis already enough to rule out a closed form for f?
Yes All such compositions are transseries in the sense here: G. A. Edgar, "Transseries for Beginners". Real Analysis Exchange 35 (2010) 253-310 No transseries (of that type) has this intermediate growth rate. There is an integer "exponentiality" associated with each (large, positive) transseries; for example Exercise 4.10 in: J. van der Hoeven, Transseries and Real Differential Algebra (LNM 1888) (Springer 2006) A function between $c^x$ and $d^x$ has exponentiality $1$, and the exponentiality of a composition $f(f(x))$ is twice the exponentiality of $f$ itself. Actually, for this question you could just talk about the Hardy space of functions. These functions also have an integer exponentiality (more commonly called "level" I guess).
{ "source": [ "https://mathoverflow.net/questions/45477", "https://mathoverflow.net", "https://mathoverflow.net/users/2575/" ] }
45,653
Let's say a normed division algebra is a real vector space $A$ equipped with a bilinear product, an element $1$ such that $1a = a = a1$ , and a norm obeying $|ab| = |a| |b|$ . There are only four finite-dimensional normed division algebras: the real numbers, the complex numbers, the quaternions and the octonions. This was proved by Hurwitz in 1898: Adolf Hurwitz, Über die Composition der quadratischen Formen von beliebig vielen Variabeln, Nachr. Ges. Wiss. Göttingen (1898), 309-316. Are there any infinite-dimensional normed division algebras? If so, how many are there?
A MathSciNet search reveals a paper by Urbanik and Wright ( Absolute-valued algebras. Proc. Amer. Math. Soc. 11 (1960), 861–866 ) where it is proved that an arbitrary real normed algebra (with unit) is in fact a finite-dimensional division algebra, hence is one of the four mentioned in the OP. A key piece of the argument (Theorem 1) is to show that such an algebra $A$ is algebraic , in the sense that if $x \in A$, then the subalgebra of $A$ generated by $x$ is finite-dimensional. The authors then invoke a theorem of A. A. Albert stating that a unital algebraic algebra is a finite-dimensional division algebra.
{ "source": [ "https://mathoverflow.net/questions/45653", "https://mathoverflow.net", "https://mathoverflow.net/users/2893/" ] }
45,802
I believe this is the right place to ask this, so I was wondering if anyone could give me advice on research at the undergraduate level. I was recently accepted into the McNair Scholars program . It is a preparatory program for students who want to go on to graduate school. I am expected to submit a research topic proposal in the middle of the spring semester and study it during the summer with a mentor. Since I am currently in the B.S. Mathematics program and I want to get my Masters later. I figured that while my topic can be in any area, it should be in math since it is my main interest as well. I am a junior at the moment and taking: One-Dimensional Real Analysis, Intro to Numerical Methods, and Abstract Algebra. I frequently search MathWorld and Wikipedia for topics that interest me, although I don't consider myself a brilliant student or particularly strong. I have begun speaking with professors about their research also. I have not met any other students doing undergraduate math research and my current feeling is that many or all the problems in math are far beyond my ability to research them. This may seem a little defeatist but it seems mathematics is progressively becoming more specialized. I know that there are many areas emerging in Applied mathematics but they seem to be using much higher mathematics as well. My current interest is Abstract Algebra and Game Theory and I have been considering if there are possibilities to apply the former to the latter. So my questions are: 1) Are my beliefs about the possibilities of undergraduate research unfounded? 2) Where can I find online math journals? 3) How can I go about finding what has been explored in areas of interest. Should I search through Wikipedia and MathWorld bibliographies and or look in the library for research? Thanks I hope someone can help to clarify and guide me.
Since you are a student who's already interested in going on to graduate school and is specifically asking about finding a topic to study at your undergraduate level program at McNair, please disregard the negative nattering nabobs whose answers and comments suggest that undergraduates have no place or business in trying to perform research, whether it's research as defined for all scientists or the "research experience" that is put together for undergraduates and for advanced high-school students. Undergraduates can definitely perform research, or even benefit from going through a structured and well-administered "research experience". I agree with Peter Shor about finding a mentor, or multiple mentors, as soon as possible. There's no reason you have to be limited to getting advice from just one professor or teacher. I agree with Ben Webster, specifically about speaking with professors in order to get a reasonable idea about the level of work that would be needed for you to perform useful research at an undergraduate level. A few other suggestions come to mind: if you are at an institution that offers Masters and Ph.D. level degrees in mathematics, then your institution's library should have multiple research journals in hard-copy . I have found that it is much easier to go to the stacks in the library and browse through one or two year's worth of Tables of Contents and Abstracts in one journal in an afternoon or evening. This will familiarize you with the types of research papers being published currently, and make you aware of what "quanta" of research is enough to be a single research article. make sure to attend Seminars, Colloquia, and (if your school's graduate students have one) any graduate research seminar courses that you can find time for. This will allow you to become more familiar with various subtopics within the topics of your interests, and to see what the current areas of interest are for local and visiting faculty members. Colloquia are great as they often start by including a brief history of the topic by an expert in that field. Seminars are great because they allow students to see the social aspect of math, including the give-and-take and the critical comments and requests for more detail and explanation, even by tenured faculty who don't follow a speaker's thought processes. Graduate student seminar presentations are great because a student observes how graduate students can falter during presentations, how they are quizzed/coached/criticized/mentored/assisted by faculty during their presentations. I'll admit that I'm not sure attending dissertation defenses would be of any serious benefit to the undergraduate student, other than observing the interaction level (animosity level?) between faculty and graduate students. absolutely make sure to schedule some time to meet with mathematics professors who specialize in the fields of your interest, and communicate your desire to do research while you are an undergraduate, and communicate your desire to go on to graduate studies in mathematics. look on the internet and search for undergraduate opportunities for research in mathematics. I guarantee you will find quite a number of web sites that can give you more information. MIT has an undergraduate research opportunity program that many of their students take advantage of. Your institution may have professors who can speak with you and give you advice. Also, make sure to speak with more than one professor, and do not take any single person's advice as being the final word. 
Mathematicians are human beings too, and subject to the foibles and inclinations and disinclinations that all human beings have. If you run into disgruntled and critical individuals, do not let that dissuade you from going on into mathematics or decrease your desires. If you run into overly optimistic individuals who praise you too much and are too eager to take you on to do "scut work" computer programming, thank them for their time and let them know you'll come back to speak with them after you've spoken with other professors and weighed your options. Don't turn anyone down immediately. Always be polite in speaking with professors and teachers. Ask them how they chose their topics for their degrees, and you'll learn a lot.
{ "source": [ "https://mathoverflow.net/questions/45802", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
45,832
Whilst browsing through Marcel Berger's book "A Panoramic View of Riemannian Geometry" and thinking about the Klein bottle, I came across the sentence: "The unorientable surfaces are never discussed in the literature since the primary interest of mathematicians in surfaces is in the study of one complex variable, number theory, algebraic geometry etc. where all of the surfaces are oriented." (I won't give context, other than a page number: 446.) This got me thinking, perhaps non-orientability is purely an invention of topologists. Surely non-orientable manifolds play an important role in other areas of mathematics? Are there "real world" examples of non-orientability phenomenon in the natural sciences?
The real projective plane is the space of orientations for "nematic liquid crystals": these are materials (often found in your TV or computer screen!) composed of molecules shaped roughly like rods, which can point in any direction in 3D. However, they have no head or tail, so two antipodal orientations are identified. We can model nematic liquid crystals thus by a map from $U\subset \mathbb{R}^3$ to $\mathbb{RP}^2$. The topology of the real projective plane thus comes into play when one thinks about "topological defects" in these materials. A topological defect is a sort of singularity, where in some tubular neighborhood of this defect the material is continuous, but at the points of the defect, there is a discontinuity. Furthermore, this defect is topological, in that it cannot be homotoped away locally. With a bit of oversimplifying, $\pi_1(\mathbb{RP}^2)=\mathbb{Z}_2$ means that there is one nontrivial type of line defect (since $S^1$ surrounds a line) and $\pi_2(\mathbb{RP}^2)=\mathbb{Z}$ means that there are an infinite number of types of point defects in 3 dimensional nematic liquid crystals. Here's a schematic image of a cross section of a line defect and a corresponding path on $\mathbb{RP}^2$ corresponding to a circuit around it. These are both from Jim Sethna's article "Order Parameters, Broken Symmetry, and Topology" : Here's an old photograph of droplets of nematic liquid crystal between crossed polarizers from the paper P. Poulin, H. Stark, T. C. Lubensky, and D. A. Weitz, Novel Colloidal Interactions in Anisotropic Fluids. J. Science (1997) vol. 275 page 1770. . I won't say too much about the colors, but they correspond roughly to the orientation of the molecule. The sharp points at the center of each droplet are one or more point defects, discontinuities in orientation. The dark brush-like structures coming out of each point are the regions where molecules are oriented in directions parallel to either of the polarizers - thus it's kind of like the inverse image of two different points on $\mathbb{RP}^2$. Roughly speaking, a homotopy class of a map from a 1- or 2-sphere to the projective plane being nontrivial, means that the defect cannot be smoothed away (otherwise there would be a homotopy to a constant). This is part of a much bigger picture of course; and there are other nonorientable spaces that describe the order of materials. I've been vague above because all of this is explained quite beautifully in the article by N.D. Mermin, The topological theory of defects in ordered media Rev. Mod. Phys. 51, 591–648 (1979). For a quicker introduction, the paper cited above "Order Parameters, Broken Symmetry, and Topology" by Jim Sethna (published in 1991 Lectures in Complex Systems, Eds. L. Nagel and D. Stein, Santa Fe Institute Studies in the Sciences of Complexity, Proc. Vol. XV, Addison-Wesley, 1992) covers the basics. I love this stuff, so let me know if you have any questions and we can correspond further.
{ "source": [ "https://mathoverflow.net/questions/45832", "https://mathoverflow.net", "https://mathoverflow.net/users/8103/" ] }
45,841
On some occasion I was gifted a calendar. It displays a math quiz every day of the year. Not really exciting in general, but at least one of them led me to raise a group-theoretic question. The quiz: consider a hexagon where the vertices and the middle points of the edges are marked, as in the figure (not reproduced here). One is asked to place the numbers $1,2,3,4,5,6,8,9,10,11,12,13$ (mind that $7$ is omitted) on points $a,\ldots,\ell$, in such a way that the sum on each edge equals $21$. If you like, you may search for a solution, but this is not my question. Of course the solution is not unique. You may apply any element of the isometry group of the hexagon. A little subtler is the fact that the permutation $(bc)(ef)(hi)(kl)(dj)$ preserves the set of solutions (check this). Question. What is the invariance group of the solution set? Presumably, it is generated by the elements described above. What is its order? Because it is not too big, it must be isomorphic to a known group. Which one?
The invariance group of the solutions set can be given a geometric interpretation as follows. Note that $\mathfrak{S}_4 \times \frac{\mathbf{Z}}{2\mathbf{Z}}$ is none other than the group of isometries of the cube. It is known that if one cuts a cube by the bisecting plane of a space diagonal, the cross-section is a regular hexagon (see the picture at the middle of this page ). The vertices of this hexagon are midpoints of (some) edges of the cube. Let $X$ be the set of corners and middles of this hexagon (it has cardinality $12$). Let us consider the following bijection between $X$ and the set $E$ of edges of the cube : if $[AB]$ is a side of the hexagon, with midpoint $M$, we map $A$ (resp. $B$) to the unique edge $e_A$ (resp. $e_B$) in $E$ containing it, and we map $M$ to the unique edge $e_M \in E$ such that $e_A$, $e_B$ and $e_M$ meet at a common vertex of the cube. Given any solution of the initial problem, we can label the edges of the cube using the above bijection. This labelling has the following nice property : the sum of three edges meeting at a common vertex is always 21. Proof : by construction, six of these eight summing conditions are satisfied. The remaining two conditions read $b+f+j=d+h+\ell=21$ using Denis' notations, and are implied by the first six conditions. So we found an equivalent ($3$-dimensional) formulation of the problem, namely labelling the edges of a cube. It is now clear that the symmetry group of the cube acts on the set of solutions. It remains to prove that the solution is unique up to isometry, which can be done by hand, here is how I did it : note that only two possible sums involve $1$ (resp. $13$), namely $1+8+12$ and $1+9+11$ (resp. $2+6+13$ and $3+5+13$). Therefore $1$ and $13$ must sit on opposite edges. Then $4$ and $10$ must sit on the unique edges which are parallel to $1$ and $13$. It is the easy to complete the cube. The resulting labelling has some amusing properties For example, the sum of edges of a given face is always $28$. The sum of two opposite edges is always $14$. Finally, the sum of edges along a cyclohexane-like circuit is always $42$.
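One can also let a computer enumerate the solutions. The sketch below assumes the points $a,\ldots,\ell$ sit consecutively around the hexagon, with corners $a,c,e,g,i,k$ and edge-midpoints $b,d,f,h,j,l$ (the figure is not reproduced, so this labelling is an assumption); it brute-forces all $12\cdot11\cdots7$ corner assignments, which takes a few seconds in CPython. If the uniqueness-up-to-symmetry claim above is right, the printed count should equal the order of the isometry group of the cube, namely $48$.

```python
from itertools import permutations

NUMBERS = {1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13}

def hexagon_solutions(total=21):
    """Assign NUMBERS to points a..l, assumed consecutive around the hexagon, with
    corners a, c, e, g, i, k and midpoints b, d, f, h, j, l, so that every edge
    (corner, midpoint, corner) sums to `total`."""
    solutions = []
    for corners in permutations(sorted(NUMBERS), 6):   # candidate values of a, c, e, g, i, k
        mids = [total - corners[i] - corners[(i + 1) % 6] for i in range(6)]
        if len(set(mids)) == 6 and set(mids) | set(corners) == NUMBERS:
            solutions.append((corners, tuple(mids)))   # mids are b, d, f, h, j, l in order
    return solutions

solutions = hexagon_solutions()
print(len(solutions), "labelled solutions (each symmetric copy counted separately)")
print("one of them (corners, then midpoints):", solutions[0])
```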
{ "source": [ "https://mathoverflow.net/questions/45841", "https://mathoverflow.net", "https://mathoverflow.net/users/8799/" ] }
45,928
Inspired by a recent Math.SE question entitled Where do we need the axiom of choice in Riemannian geometry? , I was thinking of the Arzelà--Ascoli theorem . Let's state a very simple version: Theorem. Let $\{f_n : [a,b] \to [0,1]\}$ be an equicontinuous sequence of functions. Then a subsequence $\{f_{n(i)}\}$ converges uniformly on $[a,b]$. The proofs I have seen operate as follows: Take a countable dense subset $E$ of $[a,b]$. Use a "diagonalization argument" to find a subsequence converging pointwise on $E$. Use equicontinuity to conclude that this subsequence actually converges uniformly on $[a,b]$. The "diagonalization" step goes like this: Enumerate $E$ as $x_1, x_2, \dots$. $\{f_n(x_1)\}$ is a sequence in $[0,1]$, hence has a convergent subsequence $\{f_{n_1(i)}(x_1)\}$. $\{f_{n_1(i)}(x_2)\}$ now has a convergent subsequence $\{f_{n_2(i)}(x_2)\}$, and so on. Then $\{f_{n_i(i)}\}$ converges at all points of $E$. Of course, to do this, at each step $k$ we had to choose one of the (possibly uncountably many) convergent subsequences of $\{f_{n_{k-1}(i)}(x_k)\}$, so some sort of choice is needed here (I guess dependent choice is enough? I am not a set theorist (IANAST)). Indeed, we have proved that $[0,1]^E$ is sequentially compact (it is metrizable so it is also compact). On the other hand, we have not used (equi)continuity in this step, so perhaps there is a clever way to make use of it to avoid needing a choice axiom. So the question is this: Can the Arzelà--Ascoli theorem be proved in ZF? If not, is it equivalent to DC or some similar choice axiom?
There is a canonical way of checking the literature for most questions of this kind. Since they come up with some frequency, I think having the reference here may be useful. First, look at "Consequences of the Axiom of Choice" by Paul Howard and Jean E. Rubin, Mathematical Surveys and Monographs, vol 59, AMS, (1998). If the question is not there, but has been studied, there is a fair chance that it is in the database of the book that is maintained online, http://consequences.emich.edu/conseq.htm Typing "Ascola" on the last entry at the page just linked, tells me this is form 94 Q. Note the statement they provide is usually called the classical Ascoli theorem: For any set $F$ of continuous functions from ${\mathbb R}$ to ${\mathbb R}$, the following conditions are equivalent: 1. Each sequence in $F$ has a subsequence that converges continuously to some continuous function (not necessarily in $F$ ). 2. (a) For each $x \in{\mathbb R}$ the set $F (x) =\{f (x) \mid f \in F \}$ is bounded, and (b) $F$ is equicontinuous. To see the other equivalent forms of entry 94, type "94" on the line immediately above. From there we learn: Form 94 is "Every denumerable family of non-empty sets of reals has a choice function." There are some other equivalent forms that may be of interest. For example: (94 E) Every second countable topological space is Lindelöf. (94 G) Every subset of ${\mathbb R}$ is separable. (94 R) Weak Determinacy. If $A$ is a subset of ${\mathbb N}^{\mathbb N}$ with the property that $\forall a \in A\forall x \in{\mathbb N}^{\mathbb N}($ if $x(n) = a(n)$ for $n = 0$ and $n$ odd, then $x\in A)$, then in the game $G(A)$ one of the two players has a winning strategy. (94 X) Every countable family of dense subsets of ${\mathbb R}$ has a choice function. Proofs and references are provided by the website and the book. A reference that comes up with some frequency in form 94 is Rhineghost, Y. T. "The naturals are Lindelöf iff Ascoli holds". Categorical perspectives (Kent, OH, 1998), 191–196, Trends Math., Birkhäuser Boston, Boston, MA (2001).
{ "source": [ "https://mathoverflow.net/questions/45928", "https://mathoverflow.net", "https://mathoverflow.net/users/4832/" ] }
45,936
Hi there, Assuming X and Y are modal formulae and diamond X is satisfiable and diamond Y is satisfiable, how would one show that X AND Y is satisfiable? I don't think it requires much effort. I think you need to choose one world and one model where X AND Y is true, and that would mean it is satisfiable? So assuming I'm going about it correctly, any ideas what model and world I should select to show that X AND Y is satisfiable? Any advice would be great, Thank you. P.S. No appropriate tags for this type of post; maybe someone should create a modal logic one (I can't as I'm a new user)
{ "source": [ "https://mathoverflow.net/questions/45936", "https://mathoverflow.net", "https://mathoverflow.net/users/10814/" ] }
45,950
Occasionally I find myself in a situation where a naive, non-rigorous computation leads me to a divergent sum, like $\sum_{n=1}^\infty n$. In times like these, a standard approach is to guess the right answer by assuming that secretly my non-rigorous manipulations were really manipulating the Riemann zeta function $\zeta(s) = \sum_{n=1}^\infty n^{-s}$ and its cousins. Then it's reasonable to guess that the "correct" answer is, for example, $\sum_{n=1}^\infty n = \zeta(-1) = -\frac1{12}$. Thus the zeta function and its cousins are a valuable tool for other non-number-theoretic problem solving: it's always easier to rigorously prove that your guess is correct (or discover, in trying to prove it, that it's wrong) than it is to rigorously derive an answer from scratch. I recently found myself wishing I could do something similar for the sum of the quantum integers. Recall that at quantum parameter $q = e^{i\hbar}$, quantum $n$ is the complex number $$[n]_q = \frac{q^n - q^{-n}}{q - q^{-1}} = q^{n-1} + q^{n-3} + \dots + q^{3-n} + q^{1-n}.$$ The point is that $[n]_1 = n$. Question: Are there established methods to sum the divergent series $\sum_{n=1}^\infty [n]_q $ and its cousins? For example, is there some well-behaved function $\zeta_q(s)$ for which the series is naturally the $s=-1$ value? Note that when $q$ is a root of unity, the series truncates, and it would be nice (but maybe too much to hope for) if the regularized series agreed with the truncated series at these values. I should mention also that I consider the following answer tempting but inaccurate, as it definitely doesn't work at roots of unity, which I do care about: $$ \sum_{n=1}^\infty [n]_q = \frac1{q-q^{-1}} \sum_{n=1}^\infty (q^n - q^{-n}) = \frac1{q-q^{-1}} \left( \sum_{n=1}^\infty q^n - \sum_{n=1}^\infty q^{-n}\right) = $$ $$ = \frac1{q-q^{-1}} \left( \frac{q}{1-q} - \frac{q^{-1}}{1-q^{-1}}\right) = \frac{q+1}{(q-q^{-1})(1-q)}$$
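The formal manipulation in the last display is easy to sanity-check symbolically; here is a minimal sympy sketch (it only verifies the algebraic simplification — the convergence issues flagged above are untouched):

```python
import sympy as sp

q = sp.symbols('q')

# Formal geometric series: sum_{n>=1} q^n = q/(1-q) and sum_{n>=1} q^(-n) = q^(-1)/(1-q^(-1)).
S = (q/(1 - q) - (1/q)/(1 - 1/q)) / (q - 1/q)

print(sp.simplify(S - (q + 1)/((q - 1/q)*(1 - q))))   # prints 0, confirming the closed form above
```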
The paper by Cherednik On q-analogues of Riemann's zeta function ( arXiv:math/9804099 ) gives precisely the definition you're after: $$ \zeta_q(s)=\sum\limits_{n=1}^\infty q^{sn}/[n]_q^s $$ His paper also contains a brief discussion of the properties of this $q$ -zeta function. On the other hand, the term quantum zeta function appears to have a somewhat different meaning, see e.g. the paper On the quantum zeta function by R.E. Crandall.
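For what it's worth, Cherednik's series converges for real $0<q<1$ and $s>0$, and numerically it approaches the classical value $\zeta(s)$ as $q\to 1^-$, which is a quick way to see it really is a $q$-deformation of the Riemann zeta function. A rough Python sketch (the truncation at 5000 terms is an arbitrary choice, and the summand is rewritten to avoid computing the huge factor $q^{-n}$ directly):

```python
from math import pi

def zeta_q(s, q, terms=5000):
    """Truncation of Cherednik's zeta_q(s) = sum_{n>=1} q^{sn}/[n]_q^s, for 0 < q < 1."""
    total = 0.0
    for n in range(1, terms + 1):
        x = q**(2 * n)
        total += ((1/q - q) * x / (1 - x))**s   # this equals (q^n / [n]_q)^s
    return total

for q in (0.9, 0.99, 0.999):
    print(q, zeta_q(2, q))
print("zeta(2) =", pi**2 / 6)   # the q -> 1 limit one expects termwise
```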
{ "source": [ "https://mathoverflow.net/questions/45950", "https://mathoverflow.net", "https://mathoverflow.net/users/78/" ] }
45,951
I included this footnote in a paper in which I mentioned that the number of partitions of the empty set is 1 (every member of any partition is a non-empty set, and of course every member of the empty set is a non-empty set): "Perhaps as a result of studying set theory, I was surprised when I learned that some respectable combinatorialists consider such things as this to be mere convention. One of them even said a case could be made for setting the number of partitions to 0 when $n=0$ . By stark contrast, Gian-Carlo Rota wrote in \cite{Rota2}, p.~15, that 'the kind of mathematical reasoning that physicists find unbearably pedantic' leads not only to the conclusion that the elementary symmetric function in no variables is 1, but straight from there to the theory of the Euler characteristic, so that 'such reasoning does pay off.' The only other really sexy example I know is from applied statistics: the non-central chi-square distribution with zero degrees of freedom, unlike its 'central' counterpart, is non-trivial." The cited paper was: G-C.~Rota, Geometric Probability , Mathematical Intelligencer , 20 (4), 1998, pp. 11--16. The paper in which my footnote appears is the first one you see here , doi: 10.37236/1027 . Question: What other really gaudy examples are there? Some remarks: From one point of view, the whole concept of vacuous truth is silly. It is a counterintuitive but true proposition that Minneapolis is at a higher latitude than Toronto. "Ex falso quodlibet" (or whatever the Latin phrase is) and so if you believe Toronto is a more northerly locale than Minneapolis, it will lead you into all sorts of mistakes like $2 + 2 = 5,$ etc. But that is nonsense. From another point of view, in its proper mathematical context, it makes perfect sense. People use examples like propositions about all volcanoes made of pure gold, etc. That's bad pedagogy and bad in other ways. What if I ask whether all cell phones in the classroom have been shut off? If there are no cell phones in the room (that is more realistic than volcanoes made of gold, isn't it??) then the correct answer is "yes". That's a good example, showing, if only in a small way, the utility of the concept when used properly. I don't think it's mere convention that the number of partitions of the empty set is 1; it follows logically from some basic things in logic. Those don't make sense in some contexts (see "Minneapolis", "Toronto", etc., above) but in fact the only truth value that can be assigned to $\text{“}F\Longrightarrow F\text{''}$ or $\text{“}F\Longrightarrow T\text{''}$ that makes it possible to fill in the truth table without knowing the content of the false proposition (and satisfies the other desiderata?) is $T.$ That's a fact whose truth doesn't depend on conventions.
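For what it's worth, the count in the footnote — exactly one partition of the empty set — is just the Bell number $B_0=1$, and it falls out of a direct enumeration in which the empty family of blocks vacuously satisfies "every block is non-empty". A small Python sketch (the recursion is one convenient enumeration scheme, nothing more):

```python
def partitions(elements):
    """Yield every partition of `elements` as a list of (non-empty) blocks."""
    elements = list(elements)
    if not elements:
        yield []                       # the unique partition of the empty set: no blocks at all
        return
    first, rest = elements[0], elements[1:]
    for p in partitions(rest):
        for i in range(len(p)):        # put `first` into an existing block...
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p            # ...or into a new block of its own

for n in range(5):
    print(n, sum(1 for _ in partitions(range(n))))   # Bell numbers: 1, 1, 2, 5, 15
```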
How many open covers does the empty topological space have? Not one, not none, but two: the empty cover $\varnothing$ , since its union is $\bigcup\varnothing=\varnothing$ , and the cover $\{\varnothing\}$ , since its union is also $\bigcup\{\varnothing\} =\varnothing$ . This comes up when using the Grothendieck plus-construction to sheafify a presheaf. Apply the construction to the (nonseparated) presheaf $P:\mathcal{O}(X)^\mathrm{op}\to \mathrm{Set}$ sending every open set to the set $A$ , with $|A|\geq 2$ . Then the presheaf $P^+:\mathcal{O}(X)^\mathrm{op}\to\mathrm{Set}$ agrees with $P$ on every open set except $\varnothing\subseteq X$ , where $P^+(\varnothing)$ is now a one-element set $\{*\}.$ This is because the matching families for the cover $\{\varnothing\}$ of $\varnothing$ (of which there is one for each $a\in A$ ) are all set equal to the unique matching family for the refining cover $\varnothing\subseteq\{\varnothing\}$ of $\varnothing$ . This elementary example comes from "Sheaves in Geometry and Logic", by Moerdijk and MacLane.
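The opening count is small enough to verify mechanically. A throwaway Python sketch, representing the empty space and its only open set by `frozenset()`:

```python
from itertools import chain, combinations

X = frozenset()            # the empty topological space
opens = [frozenset()]      # its only open set is the empty set itself

families = chain.from_iterable(combinations(opens, r) for r in range(len(opens) + 1))
covers = [set(fam) for fam in families if frozenset().union(*fam) == X]

print(covers)              # [set(), {frozenset()}] -- exactly the two covers described above
```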
{ "source": [ "https://mathoverflow.net/questions/45951", "https://mathoverflow.net", "https://mathoverflow.net/users/6316/" ] }
45,953
I'm curious about the following: Is every real $n$-manifold isomorphic to a quotient of $\mathbb{R}^n$? Thanks. EDIT: As Tilman points out, the manifold should be connected. Also, yes, I'm thinking about topological quotients. Specifically, is there a surjective map $\mathbb{R}^n\to M$ such that $M$ has the quotient topology? EDIT': I guess an interesting addendum to the question is "when is it true?"
Hahn–Mazurkiewicz Theorem: Suppose $X$ is a nonempty Hausdorff topological space. Then the following are equivalent: (1) there is a continuous surjection $[0,1]\to X$; (2) $X$ is compact, connected, locally connected and second-countable. It follows that a Hausdorff space satisfying the conditions of (2) is a quotient of $I = [0,1]$. Cor: Every connected compact manifold is a quotient of $I$. Since $I$ is a quotient of $\mathbb{R}^n$, we have your answer. Cor: Every compact connected $m$-manifold is a quotient of $\mathbb{R}^n$ for any $n\geq 1$.
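The simplest nontrivial instance of the surjection in (1) is a space-filling curve onto the square, and finite approximations of it are easy to generate. A sketch using the standard Hilbert-curve indexing (the usual bit-twiddling construction; the asserts check the two properties that matter here — every cell is hit, and consecutive cells are adjacent):

```python
def d2xy(n, d):
    """Cell of the n x n grid visited at step d (0 <= d < n*n) of the Hilbert curve; n a power of two."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the current quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

n = 16
pts = [d2xy(n, d) for d in range(n * n)]
assert len(set(pts)) == n * n            # every cell is hit: a surjection onto the grid
assert all(abs(x1 - x2) + abs(y1 - y2) == 1
           for (x1, y1), (x2, y2) in zip(pts, pts[1:]))   # consecutive cells are adjacent
print("ok: Hilbert approximation of a continuous surjection [0,1] -> [0,1]^2, n =", n)
```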
{ "source": [ "https://mathoverflow.net/questions/45953", "https://mathoverflow.net", "https://mathoverflow.net/users/2857/" ] }
46,019
Suppose $F: C\to D$ is an additive functor between abelian categories and that $$0\to X\xrightarrow f Y\xrightarrow g Z\to 0$$ is an exact sequence in $C$. Does it follow that $F(X)\xrightarrow{F(f)} F(Y)\xrightarrow{F(g)} F(Z)$ is exact in $D$? In other words, is $\ker(F(g))=\mathrm{im}(F(f))$? Remark 1: If the answer is no, a counterexample must use a non-split short exact sequence. This is because additive functors send split exact sequences to split exact sequences. A splitting is a pair $s:Y\to X$ and $r:Z\to Y$ so that $id_Y=f\circ s+r\circ g$, $id_X=s\circ f$, and $id_Z=g\circ r$. An additive functor preserves these properties, so $F(s)$ and $F(r)$ will split the sequence in $D$. Remark 2: You probably know you know lots of left exact and right exact additive functors, but you also know lots of exact-in-the-middle additive functors. $H^i$ and $H_i$ for any (co)homology theory are neither left nor right exact, but they are exact in the middle by the long exact sequence in (co)homology.
Consider the abelian category of morphisms of vector spaces, i.e., the objects are linear maps $f:U\to V$, and the morphisms are commutative squares. Let the functor $Im$ assign to a morphism $f$ its image $Im(f)$. Consider the short exact sequence of morphisms $(0\to V)\to (U\to V)\to (U\to 0)$. The functor $Im$ transforms it to the sequence $0\to Im(f)\to 0$, i.e. $Im$ is not exact in the middle. On the other hand, notice that $Im$ is epimorphic and monomorphic, i.e., transforms epimorphisms to epimorphisms and monomorphisms to monomorphisms.
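The counterexample becomes completely concrete with matrices: take $U=V=\mathbb{Q}^2$ and any nonzero $f$. A small sympy sketch (the particular $f$ is an arbitrary choice):

```python
import sympy as sp

f = sp.Matrix([[1, 0], [0, 0]])          # a nonzero morphism f : U -> V of vector spaces

# Applying Im to (0 -> V) --> (U -> V) --> (U -> 0) gives the sequence 0 --> Im(f) --> 0.
# Exactness in the middle would force Im(f) = 0, but:
dim_im = len(f.columnspace())            # dimension of Im(f)
print(dim_im)                            # 1, so Im(f) != 0 and the middle term is not exact
```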
{ "source": [ "https://mathoverflow.net/questions/46019", "https://mathoverflow.net", "https://mathoverflow.net/users/1/" ] }
46,068
Let $n$ be a large natural number, and let $z_1, \ldots, z_{10}$ be (say) ten $n^{th}$ roots of unity: $z_1^n = \ldots = z_{10}^n = 1$. Suppose that the sum $S = z_1+\ldots+z_{10}$ is non-zero. How small can $|S|$ be? $S$ is an algebraic integer in the cyclotomic field of order $n$, so the product of all its Galois conjugates has to be a non-zero rational integer. Using the utterly crude estimate that the magnitude of a non-zero rational integer is at least one, this gives an exponential lower bound on $S$. On the other hand, standard probabilistic heuristics suggest that there should be a polynomial lower bound, such as $n^{-100}$, for $|S|$. (Certainly a volume packing argument shows that one can make $S$ as small as, say, $O(n^{-5/2})$, though it is unclear to me whether this should be close to the true bound.) Is such a bound known? Presumably one needs some algebraic number theoretic methods to attack this problem, but the only techniques I know of go through Galois theory and thus give exponentially poor bounds. Of course, there is nothing special about the number $10$ here; one can phrase the question for any other fixed sum of roots, though the question degenerates when there are four or fewer roots to sum.
In this paper they talk about this problem for 5 instead of 10 roots. http://www.jstor.org/stable/2323469 EDIT: In view of Todd Trimble's comment, here's a summary of what's in the paper. Let $f(k,N)$ be the least absolute value of a nonzero sum of $k$ (not necessarily distinct) $N$-th roots of unity. Then $f(2,N)$ is asymptotic to $cN^{-1}$, where $c$ is $2\pi$ for even $N$, $\pi$ for odd $N$; $f(3,N)$ is asymptotic to $cN^{-1}$, where $c$ is $2\pi\sqrt3$ for $N$ divisible by 3, $2\pi\sqrt3/3$ otherwise; $f(4,N)$ is asymptotic to $cN^{-2}$, where $c$ is $4\pi^2$ for even $N$, $\pi^2$ for $N$ odd; $f(k,N)>k^{-N}$ for all $k,N$; $f(2s,N)<c_sN^{-s}$ for $N$ even and $s\le10$; $f(k,N)<c_kN^{-[\sqrt{k-6}]-1}$ for $N$ even and $k>5$; and if $N$ is twice a prime, and $k<N/2$, then there exists $k'<2k$ such that $f(k',N)\le2k2^{k/2}\sqrt{k!}N^{-k/2}$. The only result in the paper for 5 roots of unity is (the trivial) $f(5,N)>5^{-N}$, but it is suggested that maybe $f(5,N)>cN^{-d}$ for some $d$, $2\le d\le3$, and some $c>0$.
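For small parameters, $f(k,N)$ can be computed by brute force, which makes the quoted asymptotics easy to eyeball. A rough Python sketch (floating-point, and following the paper's convention that the roots need not be distinct):

```python
import cmath, math
from itertools import combinations_with_replacement

def f(k, N, tol=1e-9):
    """Least |S| over nonzero sums S of k (not necessarily distinct) N-th roots of unity."""
    roots = [cmath.exp(2j * math.pi * j / N) for j in range(N)]
    best = math.inf
    for combo in combinations_with_replacement(range(N), k):
        s = abs(sum(roots[j] for j in combo))
        if s > tol:                       # tol discards sums that vanish up to rounding
            best = min(best, s)
    return best

for N in (10, 20, 40):
    print(N, f(2, N), 2 * math.pi / N)    # f(2,N) ~ 2*pi/N for even N, as in the list above
```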
{ "source": [ "https://mathoverflow.net/questions/46068", "https://mathoverflow.net", "https://mathoverflow.net/users/766/" ] }
46,087
The category of simplicial sets has a standard model structure, where the weak equivalences are those maps whose geometric realization is a weak homotopy equivalence, the cofibrations are monomorphisms, and the fibrations are Kan fibrations. Simplicial sets are combinatorial objects, so morally their model structure should not be dependent on topological spaces. Are there any approaches to this model structure which do not use the geometric realization functor, and do not use topological spaces?
Quillen's original proof (in Homotopical Algebra , LNM 43, Springer, 1967) is purely combinatorial (i.e. does not use topological spaces): he uses the theory of minimal Kan fibrations, the fact that the latter are fiber bundles, as well as the fact that the classifying space of a simplicial group is a Kan complex. This proof has been rewritten several times in the literature: at the end of S.I. Gelfand and Yu. I. Manin, Methods of Homological Algebra , Springer, 1996 as well as in A. Joyal and M. Tierney An introduction to simplicial homotopy theory (I like Joyal and Tierney's reformulation a lot). However, Quillen wrote in his seminal Lecture Notes that he knew another proof of the existence of the model structure on simplicial sets, using Kan's $Ex^\infty$ functor (but did not give any more hints). A proof (in fact two variants of it) using Kan's $Ex^\infty$ functor is given in my Astérisque 308: the fun part is not that much about the existence of the model structure, but to prove that the fibrations are precisely the Kan fibrations (and also to prove all the good properties of $Ex^\infty$ without using topological spaces); for two different proofs of this fact using $Ex^\infty$, see Prop. 2.1.41 (as well as Scholium 2.3.21 for an alternative). For the rest, everything was already in the book of Gabriel and Zisman, for instance. Finally, I would even add that, in Quillen's original paper, the model structure on topological spaces is obtained by transfer from the model structure on simplicial sets. And that is indeed a rather natural way to proceed.
{ "source": [ "https://mathoverflow.net/questions/46087", "https://mathoverflow.net", "https://mathoverflow.net/users/1709/" ] }
46,156
Let $F\subset\mathbb{Q}^2$ be a closed set. Does there exist some closed and connected set $G\subset\mathbb{R}^2$ such that $F=G\cap\mathbb{Q}^2$? For example if $F=\{a,b\}$, you can take $G$ to be the union of two lines of different irrational slopes passing through $a$ and $b$. This is a connected set and the intersection with $\mathbb{Q}^2$ is $\{a,b\}$ because the slopes are irrational. But I don’t know how to prove it in general (and I don’t know if it’s true). When there are many connected components it is not clear how to connect them without adding new rational points.
Enumerate all rational points outside your set. Then cover these points by open balls by induction as follows: the next ball is centered at the first rational point not covered so far, its radius is so small that it does not intersect $F$ or the previous balls, and it is chosen so that the boundary of the ball does not contain rational points. Then the complement of the union of these balls is path-connected: to connect two points, draw a segment between them and go around every ball intersected by this segment. Note that this works for any countable set, not just $\mathbb Q^2$.
{ "source": [ "https://mathoverflow.net/questions/46156", "https://mathoverflow.net", "https://mathoverflow.net/users/10217/" ] }
46,252
Without prethought, I mentioned in class once that the reason the symbol $\partial$ is used to represent the boundary operator in topology is that its behavior is akin to a derivative. But after reflection and some research, I find little support for my unpremeditated claim. Just sticking to the topological boundary (as opposed to the boundary of a manifold or of a simplicial chain), $\partial^3 S = \partial^2 S$ for any set $S$ . So there seems to be no possible analogy to Taylor series. Nor can I see an analogy with the fundamental theorem of calculus. The only tenuous sense in which I can see the boundary as a derivative is that $\partial S$ is a transition between $S$ and the "background" complement $\overline{S}$ . I've looked for the origin of the use of the symbol $\partial$ in topology without luck. I have only found references for its use in calculus. I've searched through History of Topology (Ioan Mackenzie James) online without success (but this may be my poor searching). Just visually scanning the 1935 Topologie von Alexandroff und Hopf, I do not see $\partial$ employed. I have two questions: Q1 . Is there a sense in which the boundary operator $\partial$ is analogous to a derivative? Q2 . What is the historical origin for the use of the symbol $\partial$ in topology? Thanks! Addendum ( 2010 ). Although Q2 has not been addressed [was subsequently addressed by @FrancoisZiegler], it seems appropriate to accept one among the wealth of insightful responses to Q1 . Thanks to all!
The surface area $|\partial S|$ of a (bounded, smooth) body $S$ is the derivative of the volume $|S_r|$ of the $r$-neighbourhoods $S_r$ of $S$ at $r=0$: $$ |\partial S| = \frac{d}{dr} |S_r| |_{r=0}.$$ Thus, for instance, the boundary $\partial D_r$ of the disk $D_r$ of radius $r$ has circumference $\frac{d}{dr} (\pi r^2) = 2\pi r$. More generally, one intuitively has the Newton quotient-like formula $$ \partial S = \lim_{h \to 0^+} \frac{S_h \backslash S}{h};$$ the right-hand side does not really make formal sense, but certainly one can view $S_h \backslash S$ as a $[0,h]$-bundle over $\partial S$ for $h$ sufficiently small (in particular, smaller than the radius of curvature of $S$). In a similar spirit, one informally has the "chain rule" $$ {\mathcal L}_X S "=" (X \cdot n) \partial S $$ for the "Lie derivative" of $S$ along a vector field $X$, where $n$ is the outward normal. (There may also be a divergence term, depending on whether one is viewing $S$ as a set, a measure, or a volume form.) Again, this does not really make formal sense, although Stokes' theorem already captures most of the above intuition rigorously (and, as noted in the comments, Stokes' theorem is probably the clearest way to link boundaries and derivatives together). EDIT: A more rigorous way to link boundaries with derivatives proceeds via the theory of distributions. The weak derivative $\nabla 1_S$ of the indicator function of a smooth body $S$ is equal to $-n d\sigma$, where $n$ is the outward normal and $d\sigma$ is the surface measure on $\partial S$. (This is really just a fancy way of restating Stokes' theorem, after one unpacks all the definitions.) This can be used, for instance, to link the Sobolev inequality with the isoperimetric inequality. In a similar spirit, $\frac{1_{S_h} - 1_S}{h}$ converges in the sense of distributions as $h \to 0$ to surface measure $d\sigma$ on $\partial S$, thus providing a rigorous version of the intuitive difference formula given previously.
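The first identity is easy to test numerically for a concrete body such as the unit square, whose $r$-neighbourhood has area $1+4r+\pi r^2$. A quick Monte Carlo sketch in numpy (the sample size and step $h$ are arbitrary choices, and the one-sided difference carries an $O(h)$ bias):

```python
import numpy as np

rng = np.random.default_rng(0)

def area_of_neighborhood(r, samples=400_000):
    """Monte Carlo estimate of |S_r| for S the unit square [0,1]^2."""
    pts = rng.uniform(-r, 1 + r, size=(samples, 2))            # bounding box of S_r
    dx = np.clip(np.maximum(-pts[:, 0], pts[:, 0] - 1), 0, None)
    dy = np.clip(np.maximum(-pts[:, 1], pts[:, 1] - 1), 0, None)
    inside = dx**2 + dy**2 <= r**2                              # dist(p, S) <= r
    return inside.mean() * (1 + 2 * r)**2

h = 1e-2
print((area_of_neighborhood(h) - 1.0) / h)   # ~ 4 = |dS|, the perimeter of the square
```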
{ "source": [ "https://mathoverflow.net/questions/46252", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
46,566
The existence and uniqueness of algebraic closures is generally proven using Zorn's lemma. A quick Google search leads to a 1992 paper of Banaschewski , which I don't have access to, asserting that the proof only requires the ultrafilter lemma. Questions: Is it known whether the two are equivalent in ZF? Would anyone like to give a quick sketch of the construction assuming the ultrafilter lemma? I dislike the usual construction and am looking for others.
Qiaochu, using the link I provided in my answer to this question , you find that this question is still open (or was, as of the mid 2000s, and I haven't heard of any recent results in this direction). (According to the site's notation, the existence of algebraic closures is form 69, the ultrafilter theorem is form 14, uniqueness of the algebraic closure (in case they exist) is form 233; these numbers can be found by entering appropriate phrases in the last entry form in the page linked to above.) It is known that uniqueness implies neither existence nor the ultrafilter theorem. It is open whether existence implies uniqueness or the ultrafilter theorem, and also whether (existence and uniqueness) implies the ultrafilter theorem. (Enter 14, 69, 233 in Table 1 in the link above for these implications/non-implications.) Jech's book on the axiom of choice should provide the proofs of the known implications and references, and the book by Howard-Rubin (besides updates past the publication date of Jech's book) provides references for the known non-implications. Here are some details on Banaschewski's paper: 1. First, lets see that the ultrafilter theorem can be used to prove uniqueness of algebraic closures, in case they exist. Let $K$ be a field, and let $E$ and $F$ be algebraic closures. We need to show that there is an isomorphism from $E$ onto $F$ fixing $K$ (pointwise). Following Banaschewski, denote by $E_u$ (resp., $F_u$) the splitting field of $u\in K[x]$ inside $E$ (resp., $F$); we are not requiring that $u$ be irreducible. We then have that if $u|v$ then $E_u\subseteq E_v$ and $F_u\subseteq F_v$. Also, since $E$ is an algebraic closure of $K$, we have $E=\bigcup_u E_u$, and similarly for $F$. Denote by $H_u$ the set of all isomorphisms from $E_u$ onto $F_u$ that fix $K$; it is standard that $H_u$ is finite and non-empty (no choice is needed here). If $u|v$, let $\varphi_{uv}:H_v\to H_u$ denote the restriction map; these maps are onto. Now set $H=\prod_{u\in K[x]} H_u$ and for $v|w$, let $$ H_{vw}=\{(h_u)\in H\mid h_v=h_w\upharpoonright E_v\}. $$ Then the Ultrafilter theorem ensures that $H$ and the sets $H_{vw}$ are non-empty. This is because, in fact, Tychonoff for compact Hausdorff spaces follows from the Ultrafilter theorem, see for example the exercises in Chapter 2 of Jech's "The axiom of choice." Also, the sets $H_{vw}$ have the finite intersection property. They are closed in the product topology of $H$, where each $H_u$ is discrete. It then follows that the intersection of the $H_{vw}$ is non-empty. But each $(h_u)$ in this intersection determines a unique embedding $h:\bigcup_uE_u\to\bigcup_u F_u$, i.e., $h:E\to F$, which is onto and fixes $K$. 2. Existence follows from modifying Artin's classical proof. For each monic $u\in K[x]$ of degree $n\ge 2$, consider $n$ "indeterminates" $z_{u,1},\dots,z_{u,n}$ (distinct from each other, and for different values of $u$), let $Z$ be the set of all these indeterminates, and consider the polynomial ring $K[Z]$. Let $J$ be the ideal generated by all polynomials of the form $$ a_{n-k}-(-1)^k\sum_{i_1\lt\dots\lt i_k}z_{u,i_1}\dots z_{u,i_k} $$ for all $u=a_0+a_1x+\dots+a_{n-1}x^{n-1}+x^n$ and all $k$ with $1\le k\le n$. The point is that any polynomial has a splitting field over $K$, and so for any finitely many polynomials there is a (finite) extension of $K$ where all admit zeroes. From this it follows by classical (and choice-free) arguments that $J$ is a proper ideal. 
We can then invoke the ultrafilter theorem, and let $P$ be any prime ideal extending $J$. Then $K[Z]/P$ is an integral domain. Its field of quotients $\hat K$ is an extension of $K$, and we can verify that in fact, it is an algebraic closure. This requires to note that, obviously, $\hat K/K$ is algebraic, and that, by definition of $J$, every non-constant polynomial in $K[x]$ split into linear factors in $\hat K$. But this suffices to ensure that $\hat K$ is algebraically closed by classical arguments (see for example Theorem 8.1 in Garling's "A course in Galois theory"). 3. The paper closes with an observation that is worth making: It follows from the ultrafilter theorem, and it is strictly weaker than it, that countable unions of finite sets are countable. This suffices to prove uniqueness of algebraic closures of countable fields, in particular, to prove the uniqueness of $\bar{\mathbb Q}$.
{ "source": [ "https://mathoverflow.net/questions/46566", "https://mathoverflow.net", "https://mathoverflow.net/users/290/" ] }
46,684
I am interested in determining a collection of geometric conditions that will guarantee that a convex polyhedron of $n$ faces is a fair die in the sense that, upon random rolling, it has an equal $1/n$ probability of landing on each of its faces. (Assume the polyhedron is composed of a homogeneous material; i.e., it is not "loaded.") There has been study of what Grünbaum and Shephard call isohedral polyhedra, which always represent fair dice: "An isohedron is a convex polyhedron with symmetries acting transitively on its faces with respect to the center of gravity. Every isohedron has an even number of faces." It is clear such a polyhedral die is fair. Here is an example of the trapezoidal dodecahedron , an isohedron of 12 congruent faces, from an attractive web site on polyhedral dice : But a clever argument in a delightful paper by Persi Diaconis and Joseph Keller (" Fair Dice ." Amer. Math. Monthly 96, 337-339, 1989) shows (essentially, by continuity) that there must be fair polyhedral dice that are not symmetric. For example, there is no reason to expect that equal face areas is a necessary condition for a polyhedral die to be fair. Nor is it reasonable to expect that the distance from each face to the center of gravity of the polyhedron is alone a determining condition. Rather it should depend on the dihedral angles between faces, the likelihood of one face rolling to the next—perhaps a Markov chain of transitions? My question is: Is there a collection of geometric conditions—broader than isohedral—that guarantee that a (perhaps asymetrical, perhaps unequal-face-areas) convex polyhedron represents a fair die? Sufficient conditions welcomed; necessary and sufficient conditions may be too much to hope for! Speculations and literature leads appreciated!
Depending on rules and technique, a person throwing a die can reasonably control the amount of angular momentum, the total kinetic energy when it first lands and the angle of its trajectory. I want to suppose that the collisions of the die with the throwing surface have a reasonably high coefficient of restitution, enough that the die will undergo a good number of hits on the surface before it comes to rest. (One could imagine an alternate model, where the main randomization is by shaking the die before throwing, and the die stops where it lands ---- but that's not the situation I want to discuss). I think there's a reasonable range of polyhedral die shapes and energies where if the bouncing were completely elastic, the system would be ergodic --- the possible positions and motions of the die, up to translations in the plane of the throwing surface, would be visited a.s. in proportion to their measure among all states of the same energy. If the surface of the die were smooth, but just marked off into different areas of contact, this would not generally be true: KAM theory (small divisors and invariant tori) very often makes it non-ergodic. If the die can act like a top, it's not ergodic at that energy level. But I think of the rolling die more like a particle bumping into numerous obstacles, and systems like that are often ergodic. The rolling of a real die is not perfectly elastic, and kinetic energy is gradually lost. Here's a hypothesis that should guarantee fairness in the limiting case where energy is lost very slowly: Let's suppose we have a rule to partition phase space into sets $A_i$ associated with the different possible outcomes. We want: (1) The intersection of $A_i$ with each energy level E has volume $V(E)$ independent of $i$. (2) The dynamics is ergodic in each component of each energy level that intersects more than one $A_i$. (3) The labeling by $A_i$ depends reasonably on energy level --- for every pair of nearby energy levels $E$ and $F$, for most $x$ at energy level $E$ and most $y$ in energy level $F$ such that $d(x,y) < \epsilon$, $x$ and $y$ are in the same state. Also, the labeling shouldn't tunnel to a different connected component of the phase space of energy $\le E$: the ratios of measure of $A_i$ intersected with each connected component of an energy level should stay the same. With these hypotheses, with sufficiently slow loss of energy, the final state should be uniformly distributed. If the dynamics at enough energy levels also has reasonably high entropy and is mixing (I think both are likely to be true for reasonable die shapes), then the uniformization should occur fairly well at realistic rates of energy loss. The big difficulty though is condition (1). I think that pretty often, even with a symmetric die, the phase space becomes disconnected well before the die comes to rest. For a standard cubical die, what are the components of phase space just above where it settles on a face? I think it can roll on 4 sides and not have enough energy to switch which four sides. At the same energy level, it might be spinning slowly on a vertical axis on one face. If so, that would make 9 components, 6 of which are already committed to one face. These are the kinds of things one would need to understand to show an asymmetrical die is fair. With more complicated dice, the fragmentation into components looks much trickier. 
The volume of phase space must be equitably apportioned at each transition where the phase space disconnects, until the final outcome is determined, otherwise one could influence the outcome by the energy of the throw. Connelly's suggestion of a twisted deltoidal icositetrahedron might work, but it might fail the test (1) as energy levels are decreased. Even though each face is the same, I'm not convinced that the fragmentation into continents, before the die has settled on one face, would be fair, since it presumably depends on bigger neighborhoods that are not exactly the same. If one understood in detail how the components of energy levels separate and if there aren't too many of them, then one should in principle be able to engineer a die to be fair with the help of the Brouwer fixed point theorem (multi-dimensional intermediate value theorem). It would appear to be quite a challenge, though, for all but the simplest examples when there is only a small number of symmetry classes of faces. Further thoughts : Actual dice are made with rounded edges and rounded corners. How this rounding is done seems significant. If the projected image of a die along a certain axis is almost round, then at low energy levels it should roll more easily about those axes than about axes where the projection is bumpy, other things being equal. This suggests larger components of the phase space for these kinds of rolls, when the phase space becomes disconnected. Also, depending on the details of the shape, it seems likely there are ergodic components associated with rolling at energy levels above where the phase space becomes disconnected --- this is similar to the stability of a wheel rolling on somewhat bumpy terrain, which is explained by KAM theory. It seems interesting to try to design shapes that appear fair, but are not, by exploiting this kind of behavior: creating rolling modes or rocking modes that, as the die settles down, tend to funnel the behavior into preferred outcomes.
{ "source": [ "https://mathoverflow.net/questions/46684", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
46,748
This is not actually a question asked by me. But since I do not know the answer, I would love to know if someone here could answer it.
No. There are locally connected subsets of $\mathbb{R}^2$ which are totally path disconnected. See my answer to this old MO question " Can you explicitly write R 2 as a disjoint union of two totally path disconnected sets? ". Also, Gerald Edgar's response to the same question says that such sets cannot be totally disconnected, although he does not mention local connectedness. In fact, the sets given by my answer are locally connected, so provide a counterexample to your question. As in the linked question: Let $S$ be a subset of the reals such that $S\cap[a,b]$ and $S^{\rm c}\cap[a,b]$ cannot be written as a countable union of closed sets for any $a < b$ (e.g., this explicit example of a non-Borel set ). Then, $A=(S\times\mathbb{Q})\cup(S^{\rm c}\times\mathbb{Q}^{\rm c})$ and $B=(S\times\mathbb{Q}^{\rm c})\cup(S^{\rm c}\times\mathbb{Q})$ partition the plane into a pair of locally connected and totally pathwise disconnected sets. That they are totally pathwise disconnected is proven in my answer to the linked question. Let us show that $A\cap U$ is connected for any nonempty 'open rectangle' $U=(x_0,x_1)\times(y_0,y_1)$ . If not, there would be nonempty disjoint open sets $V,W\subset U$ with $A\cap(V\cup W)=A\cap U$ . If $\pi(x,y)=x$ is the projection onto the x -axis then $\pi(V)\cup\pi(W)=\pi(V\cup W)=(x_0,x_1)$ is connected. So, we can find a nontrivial closed interval $[a,b]\subseteq\pi(V)\cap\pi(W)$ . Now, for every $x\in S^{\rm c}\cap[a,b]$ , the line segment $\lbrace x\rbrace\times(y_0,y_1)$ intersects with both $V$ and $W$ and, by connectedness of line segments, it will intersect with $U\setminus(V\cup W)$ . Hence, there is a $q\in\mathbb{R}$ with $(x,q)\in U\setminus(V\cup W)\subset B$ . So, $q\in\mathbb{Q}$ . For each rational $q$ , let $S_q$ be the (closed) set of $x\in[a,b]$ such that $(x,q)$ is in $U\setminus(V\cup W)$ . Then, $S^{\rm c}\cap[a,b]=\bigcup_{q\in\mathbb{Q}}S_q$ is a countable union of closed sets, giving the required contradiction. Hence $A$ is locally connected and, exchanging $S$ and $S^{\rm c}$ in the argument above, so is $B$ .
{ "source": [ "https://mathoverflow.net/questions/46748", "https://mathoverflow.net", "https://mathoverflow.net/users/4760/" ] }
46,787
A well known theorem in algebraic topology relates the (co)homology of the Thom space $X^\mu$ of an orientable vector bundle $\mu$ of dimension $n$ over a space $X$ to the (co)homology of $X$ itself: $H_\ast(X^\mu) \cong H_{\ast-n}(X)$ and $H^\ast(X^\mu) \cong H^{\ast-n}(X)$. This isomorphism can be proven in many ways: Bott & Tu has an inductive proof using good covers for manifolds and I learned on MathOverflow that one can use a relative Serre spectral sequence. However, I believe that there should also be a proof using stable homotopy theory, in the case of homology by directly constructing an isomorphism of spectra $X^\mu \wedge H\mathbb{Z} \to X_+ \wedge \Sigma^{-n} H\mathbb{Z}$, where $X^\mu$ denotes the Thom spectrum, $H\mathbb{Z}$ the Eilenberg-Mac Lane spectrum for $\mathbb{Z}$ and $X_+$ the suspension spectrum of $X$ with a disjoint basepoint added. Is there an explicit construction of such a map implementing the Thom isomorphism on the level of spectra? I am interested in such a construction for both homology and cohomology. If so, is there a similar construction for generalized (co)homology theories? I would also be interested in references.
There is a construction for both Thom isomorphisms, homological and cohomological, via classical stable homotopy theory. You find the details in Rudyaks book "On Thom spectra, orientability, and cobordism", chapter V, §1. The Thom class is a map $X^{\mu} \to\Sigma^{n} H \mathbb{Z}$. Moreover, there is a map of spectra $X^{\mu} \to X_+ \wedge X^{\mu}$ which is induced from the map of vector bundles $\mu \to \mathbb{R}^0 \times \mu$ over the diagonal map $X \to X \times X$. Here is the definition of the homological Thom isomorphism; the cohomological one is in the same spirit. Consider the composition $X^{\mu} \wedge H \mathbb{Z} \to X_+ \wedge X^{\mu} \wedge H\mathbb{Z} \to X_+ \wedge \Sigma^n H \mathbb{Z} \wedge H \mathbb{Z} \to X_+ \wedge \Sigma^n H \mathbb{Z} $. On homotopy groups, it induces a map lowering the degree by $n$ (there is a sign mistake in your question that confused me for some minutes). It is clear that this works for orientations with respect to other ring spectra as well.
{ "source": [ "https://mathoverflow.net/questions/46787", "https://mathoverflow.net", "https://mathoverflow.net/users/798/" ] }
46,793
Just today I had a bet with my friend over the following problem: How many winning configurations can you have in a nxn Tic-Tac-Toe game where players win if they get n/2 in either a row or column, consecutively. n is even. For example, in a 4x4 game, players win if they get 2 of their symbols in either a row or column, consecutively. I bet the figure to be "2 * ( 2 * n ) * ( 3 ** ( n / 2 ) )" Do I win? How to proceed if we were to count only draws? ( how many board configurations can there be so that they are always draws - i.e. no one wins ) Note that I do not think that the board always needs to have as many X's as there are O's: Consider a 10x10 board. At the minimum, the winning player needs to make 5 moves to win, and the loser gets to make 4. So it's not always a filled board with half the cells X and half the cells O.
{ "source": [ "https://mathoverflow.net/questions/46793", "https://mathoverflow.net", "https://mathoverflow.net/users/10987/" ] }
46,804
I was writing up some notes on harmonic analysis and I thought of a question that I felt I should know the answer to but didn't, and I hope someone here can help me. Suppose I have a compact Riemannian manifold $M$ on which a compact Lie group $G$ acts isometrically and transitively---so you can think of $M$ as $G/K$ for some closed subgroup $K$ of $G$. Then the real Hilbert space $H = L^2(M, \mathbb{R})$ is an orthogonal representation space of $G$ and hence splits as an orthogonal direct sum of finite dimensional irreducible sub-representations. On the other hand, the Laplacian $L$ of $M$ is a self-adjoint operator on $H$, so $H$ is also the orthogonal direct sum of its eigenspaces---which are also finite dimensional. My question is, when do these two orthogonal decompositions of $H$ coincide? Put slightly differently, since $L$ commutes with the action of $G$, each eigenspace of $L$ is a finite dimensional subrepresentation of $H$ and so a direct sum of irreducibles, and I would like to know conditions under which each eigenspace is in fact irreducible. For example, this is true for the circle acting on itself and for $SO(3)$ acting on $S^2$ (where we get the harmonic polynomials of various degrees). Is it perhaps always true for the case of a symmetric space? Of course a standard reference in addition to the answer would be most welcome.
The Peter-Weyl theorem tells you that $L^2(G)$ is isomorphic to $\bigoplus_{\pi}\pi\otimes\pi^*$ as $G\times G$ representation, where $\pi$ runs through all irreducible unitary representations. It follows that $$ L^2(G/K)\cong L^2(G)^K\cong\bigoplus_\pi \pi\otimes(\pi^*)^K. $$ So, the first thing you absolutely need, is a multiplicity one property, which says that $\dim\pi^K\le 1$ for every $\pi$. This is already a rare property, but known to be true for, say $G=SO(n)$ and $K=SO(n-1)$, see Zhelobenko's book for this. But, the Laplacian may have the same eigenvalue on different representations. For this you need highest weight theory (see for instance the book by Broecker and tom Dieck): Assume $G$ to be connected. The irreducible representations are parametrized by highest weights and the Laplace eigenvalue depends on the value of a quadratic form on the space of weights. So, in each case you need to identify those weights with $K$-invariants and consider the values of the quadratic form, which in the case of a simple group should be the Killing form. I guess that in the above cases it might actually be true.
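The smallest case, $M=S^1$ and $G=SO(2)$, can be checked numerically: the eigenvalue $n^2$ of the Laplacian has the two-dimensional eigenspace spanned by $\cos n\theta$ and $\sin n\theta$, which is a single real irreducible representation of $SO(2)$. A numpy sketch with a discretized circle Laplacian (the grid size is an arbitrary choice):

```python
import numpy as np

N = 200                                       # grid points on the circle
h = 2 * np.pi / N
shift = np.roll(np.eye(N), 1, axis=0)         # cyclic shift matrix
L = (shift + shift.T - 2 * np.eye(N)) / h**2  # periodic second-difference Laplacian

evals = np.sort(-np.linalg.eigvalsh(L))
print(np.round(evals[:7], 3))                 # ~ [0, 1, 1, 4, 4, 9, 9] up to O(h^2) error:
                                              # multiplicities 1, 2, 2, ... match the real irreducibles
```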
{ "source": [ "https://mathoverflow.net/questions/46804", "https://mathoverflow.net", "https://mathoverflow.net/users/7311/" ] }
46,866
The falsity of the following conjecture would be a nice counter-intuitive fact. Given a square sheet of perimeter $P$, when folding it along origami moves, you end up with some polygonal flat figure with perimeter $P'$. Napkin conjecture : You always have $P' \leq P$. In other words, you cannot increase the perimeter using any finite sequence of origami folds. Question 1 : Intuition tells us it is true (how in hell can it increase?). Yet, I think I read somewhere that there was some weird folding (perhaps called "mountain urchin"?) which strictly increases the perimeter. Is this true? Note 1 : I am not even sure that the initial sheet's squareness is required. I cannot find any reference on the internet. Maybe the name has changed; I heard about this 20 years ago. The second question is about generalizing the conjecture. Question 2 : With the idea of generalizing the conjecture to continuous folds or bends (using some average shadow as a perimeter), I stumble on how you can mathematically define bending a sheet. Alternatively, how do you say "a sheet is untearable" in mathematical terms? Note 2 : It might also be a matter of physics about how much we idealize bending mathematically.
There is a general version of this question which is known as "the rumpled dollar problem" . It was posed by V.I. Arnold at his seminar in 1956. It appears as the very first problem in "Arnold's Problems" : Is it possible to increase the perimeter of a rectangle by a sequence of foldings and unfoldings? According to the same source (p. 182), Alexei Tarasov has shown that a rectangle admits a realizable folding with arbitrarily large perimeter. A realizable folding means that it could be realized in such a way as if the rectangle were made of infinitely thin but absolutely nontensile paper. Thus, a folding is a map $f:B\to\mathbb R^2$ which is isometric on every polygon of some subdivision of the rectangle $B$. Moreover, the folding $f$ is realizable as a piecewise isometric homotopy which, in turn, can be approximated by some isotopy of space (which corresponds to the impossibility of self-intersection of a paper sheet during the folding process). Have a look at A. Tarasov, Solution of Arnold’s “folded rouble” problem. (in Russian) Chebyshevskii Sb. 5 (2004), 174–187. I. Yashenko, Make your dollar bigger now!!! Math. Intelligencer 20 (1998), no. 2, 38–40. A history of the problem is also briefly discussed in Tabachnikov's review of "Arnold's Problems": It is interesting that the problem was solved by origami practitioners way before it was posed (at least, in 1797, in the Japanese origami book “Senbazuru Orikata”).
{ "source": [ "https://mathoverflow.net/questions/46866", "https://mathoverflow.net", "https://mathoverflow.net/users/3005/" ] }