Dataset columns: source_id — int64 (1 to 4.64M); question — string (lengths 0 to 28.4k); response — string (lengths 0 to 28.8k); metadata — dict
31,004
At various times I've heard the statement that computing the group structure of $\pi_k S^n$ is algorithmic . But I've never come across a reference claiming this. Is there a precise algorithm written down anywhere in the literature? Is there one in folklore, and if so what are the run-time estimates? Presumably they're pretty bad since nobody seems to ever mention them. Are there any families for which there are better algorithms, say for the stable homotopy groups of spheres? or $\pi_k S^2$ ? edit: I asked Francis Sergeraert a few questions related to his project. Apparently it's still an open question as to whether or not there is an exponential run-time algorithm to compute $\pi_k S^2$.
Francis Sergeraert and his coworkers have implemented his effective algebraic topology theory in a program named Kenzo. It seems capable of computing any $\pi_n(S^k)$ (in fact homotopy groups of any simply connected finite CW complex), although I don't know how far it is feasible. For instance $\pi_6 S^3$ is computed in about 30 seconds. In a 2002 paper , they mention other algorithms by Rolf Schön and by Justin Smith, not implemented at that time.
{ "source": [ "https://mathoverflow.net/questions/31004", "https://mathoverflow.net", "https://mathoverflow.net/users/1465/" ] }
31,109
I am interested in the differences between algebras and coalgebras. Naively, it does not seem as though there is much difference: after all, all you have done is to reverse the arrows in the definitions. There are some simple differences: The dual of a coalgebra is naturally an algebra but the dual of an algebra need not be naturally a coalgebra. There is the Artin-Wedderburn classification of semisimple algebras. I am not aware of a classification even of simple, semisimple coalgebras. More surprising is: a finitely generated comodule is finite dimensional. This question is about a more striking difference. The free algebra on a vector space $V$ is $T(V)$, the tensor algebra on $V$. I have been told that there is no explicit construction of the free coalgebra on a vector space. However these discussions took place following the consumption of alcohol. What is known about free coalgebras?
There cannot be a "free coalgebra" functor, at least in what I think is the standard usage. Namely, suppose that "orange" is a type of algebraic object, for which there is a natural "forgetful" functor from "orange" objects to "blue" objects. Then the "free orange" functor from Blue to Orange is the left adjoint, if it exists, to the forgetful map from Orange to Blue. Suppose that the forgetful map from coalgebras to vector spaces had a left adjoint; then it would itself be a right adjoint, and so would preserve products. Now the product in the category of coalgebras is something huge — think about the coproduct in the category of algebras, which is some sort of free product — and it's clear that the forgetful map does not preserve products. On the other hand, the coproduct in the category of coalgebras is given by the direct sum of underlying vector spaces, and so the forgetful map does preserve coproducts. This suggests that it may itself be a left adjoint; i.e. it may have a right adjoint from vector spaces to coalgebras, which should be called the "cofree coalgebra" on a vector space. Let me assume axiom of choice, so that I can present the construction in terms of a basis. Then I believe that the cofree coalgebra on the vector space with basis $L$ (for "letters") is the graded vector space whose basis consists of all words in $L$, with the comultiplication given by $\Delta(w) = \sum_{a,b| ab = w} a \otimes b$, where $a,b,w$ are words in $L$. I.e. the cofree coalgebra has the same underlying vector space as the free algebra, with the dual multiplication. It's clear that for finite-dimensional vector spaces, the cofree coalgebra on a vector space is (canonically isomorphic to) the graded dual of the free algebra on the dual vector space. Anyway, this is clearly a coalgebra, and the map to the vector space is zero on all words that are not singletons and identity on the singletons. I haven't checked the universal property, though. Edit: The description above of the cofree coalgebra is incorrect. I learned the correct version from Alex Chirvasitu. The description is as follows. Let $V$ be a vector space, and write $\mathcal T(V)$ for the tensor algebra of $V$, i.e. for the free associative algebra generated by $V$. Then the cofree coassociative algebra cogenerated by $V$ is constructed as follows. First, construct $\mathcal T(V^\ast)$, and second construct its finite dual $\mathcal T(V^\ast)^\circ$, which is the direct limit of duals to finite-dimensional quotients of $\mathcal T(V^\ast)$. There is a natural inclusion $\mathcal T(V^\ast)^\circ \hookrightarrow \mathcal T(V^\ast)^\ast$, and a natural map $\mathcal T(V^\ast)^\ast \to V^{\ast\ast}$ dual to the inclusion $V^\ast \to \mathcal T(V^\ast)$. Finally, construct $\operatorname{Cofree}(V)$ as the union of all subcoalgebras of $\mathcal T(V^\ast)^\circ$ that map to $V \subseteq V^{\ast\ast}$ under the map $\mathcal T(V^\ast)^\circ \hookrightarrow \mathcal T(V^\ast)^\ast \to V^{\ast\ast}$. Details are in section 6.4 (and specifically 6.4.2) of the book Hopf Algebras by Moss E. Sweedler. In any case, $\operatorname{Cofree}(V)$ is something like the coalgebra of "finitely supported distributions on $V$" (or, anyway, that's is how to think of it in the cocomutative version). For example, when $V = \mathbb k$ is one-dimensional, and $\mathbb k = \bar{\mathbb k}$ is algebraically closed, then $\operatorname{Cofree}(V) = \bigoplus_{\kappa \in \mathbb k} \mathcal T(\mathbb k)$. 
I should emphasize that now when I write $\mathcal T(\mathbb k)$, in characteristic non-zero I do not mean to give it the Hopf algebra structure. Rather, $\mathcal T(\mathbb k)$ has a basis $\lbrace x^{(n)}\rbrace$, and the comultiplication is $x^{(n)} \mapsto \sum x^{(k)} \otimes x^{(n-k)}$. Identifying $x^{(n)} = x^n/n!$, this is the comultiplication on the "divided power" algebra. It's reasonable to think of the $\kappa$th summand as consisting of (divided power) polynomials times $\exp(\kappa x)$, but maybe better to think of it as the algebra of descendants of $\delta(x - \kappa)$ — but this is just some Fourier duality. In the non-algebraically-closed case, there are also summands corresponding to other closed points in the affine line. end edit

I should mention that in my mind the largest difference between algebras and coalgebras (by which I mean, and I assume you mean also, "associative unital algebras in Vect" and "coassociative counital coalgebras in Vect", respectively) is one of finiteness. You hinted at the difference in your question: if $A$ is a (coassociative counital) coalgebra (in Vect), then it is a colimit (sum) of its finite-dimensional subcoalgebras, and moreover if $X$ is any $A$-comodule, then $X$ is a colimit of its finite-dimensional sub-A-comodules. This is absolutely not true for algebras. It's just not the case that every algebra is a limit of its finite-dimensional quotient algebras. A good example is any field of infinite dimension. It follows from the finiteness of the corepresentation theory that a coalgebra can be reconstructed from its category of finite-dimensional corepresentations. Let $A$ be a coalgebra, $\text{f.d.comod}_A$ its category of finite-dimensional right comodules, and $F : \text{f.d.comod}_A \to \text{f.d.Vect}$ the obvious forgetful map. Then there is a coalgebra $\operatorname{End}^\vee(F)$, defined as some natural coequalizer in the same way that the algebra of natural transformations $F\to F$ is defined as some equalizer, and there is a canonical coalgebra isomorphism $A \cong \operatorname{End}^\vee(F)$. (Proof: see André Joyal and Ross Street, An introduction to Tannaka duality and quantum groups, Category theory (Como, 1990), Lecture Notes in Math., vol. 1488, Springer, Berlin, 1991, pp. 413–492. MR1173027 (93f:18015).) For an algebra, on the other hand, knowing its finite-dimensional representation theory is not nearly enough to determine the algebra. Again, the example is of an infinite-dimensional field (e.g. the field of rational functions). On the other hand, it is true that knowing the full representation theory of an algebra determines the algebra. Namely, if $A$ is an (associative, unital) algebra (in Vect), $\text{mod}_A$ its category of all right modules, and $F: \text{mod}_A \to \text{Vect}$ the forgetful map, then there is a canonical isomorphism $A \cong \operatorname{End}(F)$. (Proof: $F$ has a left adjoint, $V \mapsto V\otimes A$. But $V \mapsto V\otimes \operatorname{End}(F)$ is also left-adjoint to $F$. The algebra structure comes from the adjunction: the $\text{mod}_A$ map $A\otimes A \to A$ corresponds to the vector space map $\operatorname{id}: A\to A$.) 
((Note that you don't actually need the full representation theory, which probably doesn't exist foundationally, but you do need modules at least as large as $A$.)) All this means is that if you believe that almost everything is finite-dimensional, you should reject algebras as "wrong" and coalgebras as "right", whereas if you like infinite-dimensional objects, algebras are the way to go.
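For what it's worth, the word-deconcatenation comultiplication $\Delta(w) = \sum_{ab = w} a \otimes b$ appearing in the (pre-edit) description is easy to play with on a computer. Here is a minimal sketch, with ad hoc names, that lists the splittings of a few words and checks coassociativity of this coproduct; it only illustrates the tensor-coalgebra structure, not the corrected cofree construction.

```python
def delta(w):
    """Deconcatenation coproduct: Delta(w) = sum over splittings w = a + b of a (x) b.
    Words are strings; splittings with the empty word on either side are included."""
    return [(w[:i], w[i:]) for i in range(len(w) + 1)]

def delta_left(pairs):
    """Apply delta to the left tensor factor: (delta (x) id)."""
    return [(a1, a2, b) for (a, b) in pairs for (a1, a2) in delta(a)]

def delta_right(pairs):
    """Apply delta to the right tensor factor: (id (x) delta)."""
    return [(a, b1, b2) for (a, b) in pairs for (b1, b2) in delta(b)]

for w in ["x", "xy", "xyx"]:
    # Coassociativity: both iterated coproducts list all three-fold splittings of w.
    assert sorted(delta_left(delta(w))) == sorted(delta_right(delta(w)))
    print(w, "->", delta(w))
```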
{ "source": [ "https://mathoverflow.net/questions/31109", "https://mathoverflow.net", "https://mathoverflow.net/users/3992/" ] }
31,113
Zagier has a very short proof (MR1041893, JSTOR) for the fact that every prime number $p$ of the form $4k+1$ is the sum of two squares. The proof defines an involution of the set $S= \lbrace (x,y,z) \in \mathbb{N}^3: x^2+4yz=p \rbrace $ which is easily seen to have exactly one fixed point. This shows that the involution that swaps $y$ and $z$ has a fixed point too, implying the theorem. This simple proof has always been quite mysterious to me. Looking at a precursor of this proof by Heath-Brown did not make it easier to see what, if anything, is going on behind the scenes. There are similar proofs for the representation of primes using some other quadratic forms, with much more involved involutions. Now, my question is: is there any way to see where these involutions come from and to what extent they can be used to prove similar statements?
This paper by Christian Elsholtz seems to be exactly what you're looking for. It motivates the Zagier/Liouville/Heath-Brown proof and uses the method to prove some other similar statements. Here is a German version , with slightly different content. Essentially, Elsholtz takes the idea of using a group action and examining orbits as given (and why not -- it's relatively common) and writes down the axioms such a group action would have to fulfill to be useful in a proof of the two-squares theorem. He then algorithmically determines that there is a unique group action satisfying his axioms -- that is, the one in the Zagier proof. The important thing is that having written down these (fairly natural) axioms, there's no cleverness required; finding the involution in Zagier's proof boils down to solving a system of equations.
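To make the involution concrete, here is a small sketch (variable names are mine) that enumerates $S = \lbrace (x,y,z) : x^2+4yz=p\rbrace$ for a few primes $p \equiv 1 \pmod 4$, applies Zagier's three-case involution, verifies that it is an involution with exactly one fixed point, and then reads off $p = x^2 + (2y)^2$ from a fixed point of the swap $(x,y,z)\mapsto(x,z,y)$.

```python
def S_of(p):
    """All (x, y, z) in positive integers with x^2 + 4yz = p."""
    S = []
    for x in range(1, int(p**0.5) + 1):
        r = p - x * x
        if r <= 0 or r % 4:
            continue
        m = r // 4
        for y in range(1, m + 1):
            if m % y == 0:
                S.append((x, y, m // y))
    return S

def zagier(t):
    """Zagier's involution; the three cases are exclusive and exhaustive for prime p."""
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    if x < 2 * y:          # here y - z < x < 2y (equality is impossible for prime p)
        return (2 * y - x, y, x - y + z)
    return (x - 2 * y, x - y + z, y)

for p in [13, 29, 37, 41, 101]:
    S = S_of(p)
    assert all(zagier(t) in S and zagier(zagier(t)) == t for t in S)
    assert sum(zagier(t) == t for t in S) == 1      # unique fixed point (1, 1, (p-1)//4)
    # |S| is odd, so the swap (x, y, z) -> (x, z, y) also has a fixed point, giving p = x^2 + (2y)^2.
    x, y, z = next(t for t in S if t == (t[0], t[2], t[1]))
    assert x * x + (2 * y) ** 2 == p
    print(f"{p} = {x}^2 + (2*{y})^2")
```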
{ "source": [ "https://mathoverflow.net/questions/31113", "https://mathoverflow.net", "https://mathoverflow.net/users/3635/" ] }
31,198
Since a Lie group is a manifold with the structure of a continuous group, each point of the manifold [Edit: provided we fix a metric, for example an invariant or bi-invariant one] has some scalar curvature R. Question [Edited] Is there a nice formula which expresses the scalar curvature at a point of the manifold in terms of the Lie algebra of the group?
See Exercise 1 in Chapter 4 of Do Carmo's "Riemannian Geometry". The formula is $R(X,Y)Z = \frac 1 4 [[X,Y], Z]$. In particular, if $X$ and $Y$ are orthonormal, the sectional curvature of the plane they generate is $K(\sigma)= \frac 1 4 \|[X,Y]\|^2$, which is always $\geq 0$. EDIT: In view of the comments, it is important to add that this is for a bi-invariant metric.
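As a concrete check (the basis and the normalization $\langle X,Y\rangle = -\tfrac12\operatorname{tr}(XY)$ are my choices, not part of the reference), here is a small numerical sketch for $\mathfrak{so}(3)$ with a bi-invariant metric: the standard basis is orthonormal, $[L_1,L_2]=L_3$ cyclically, and the formula gives $K = \tfrac14$ for each coordinate plane.

```python
import numpy as np

# Standard basis of so(3); [L1, L2] = L3 and cyclic permutations.
L1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
L2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def inner(X, Y):
    # A bi-invariant inner product on so(3) (a rescaled negative of the trace form).
    return -0.5 * np.trace(X @ Y)

def bracket(X, Y):
    return X @ Y - Y @ X

def sectional(X, Y):
    # K(X, Y) = (1/4) ||[X, Y]||^2 for an orthonormal pair, per the formula above.
    assert abs(inner(X, X) - 1) < 1e-12 and abs(inner(Y, Y) - 1) < 1e-12
    assert abs(inner(X, Y)) < 1e-12
    B = bracket(X, Y)
    return 0.25 * inner(B, B)

print(sectional(L1, L2), sectional(L2, L3), sectional(L1, L3))  # 0.25 each
```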
{ "source": [ "https://mathoverflow.net/questions/31198", "https://mathoverflow.net", "https://mathoverflow.net/users/7466/" ] }
31,222
I read (in a paper by Emil Saucan) that the flat torus may be isometrically embedded in $\mathbb{R}^3$ with a $C^1$ map by the Kuiper extension of the Nash Embedding Theorem, a claim repeated in this Wikipedia entry. I have been unsuccessful in finding a description of such a mapping, or an image of what the embedding looks like. I'd be grateful for any pointers on this topic. Thanks! Addendum. It seems Benoît Kloeckner's answer below is definitive. What I asked for apparently does not yet exist, but is "in process" and will soon be available through the work of the Hévéa project. [23 Apr 2012] This is taken from the link in DamienC's comment and Benoît's update in the latter's answer below:
A group of French mathematicians and computer scientists are currently working on this. The project is named Hévéa, and has already produced a few images. Edit: a few images and the PNAS paper have been released, see http://hevea-project.fr/ENPageToreDossierDePresse.html Just a few words to explain what I understood of their method (which uses the h-principle) from the few images I saw in preview. Start with a revolution torus. The meridians are cool, because they all have the same length, as expected from those of a flat torus. But the parallels are totally uncool, because their lengths differ greatly: they witness the non-flatness of the revolution torus. Now perturb your torus by adding waves in the direction of the meridians (like an accordion), with large amplitude on the inside and small amplitude on the outside. If you design this perturbation well, you can manage so that the parallels now all have the same length. Of course, the perturbed meridians now have varying lengths! So you do the same thing by adding small waves in another direction, getting all meridians to have the same length again. You can iterate this procedure in a way so that the embedding converges in the $C^1$ topology to a flat embedded torus. But to prove that the precise perturbation you chose in order to get a nice image does converge, and that your maps are embeddings, needs work (getting an immersion is easier, if I remember correctly). Also, the Hévéa project plans to draw images of Nash spheres, that is $C^1$ isometric embeddings of spheres of radius $>1$ inside a ball of unit radius. Edit: Roman Kogan mentions in an answer below the following relevant, more recent links: a detailed paper on the isometric embedding of a flat torus, including photographs of a 3D-printed model on p. 67, and a video showcasing the 3D model.
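The starting observation above is easy to see numerically. The following purely illustrative sketch (a standard parametrization; nothing here is from the Hévéa construction itself) measures curve lengths on a revolution torus: meridians all have the same length, while parallels do not, which is exactly the defect the corrugations are designed to remove.

```python
import numpy as np

R, r = 2.0, 1.0  # revolution torus parametrized by (u, v) in [0, 2*pi)^2

def torus(u, v):
    return np.array([(R + r * np.cos(v)) * np.cos(u),
                     (R + r * np.cos(v)) * np.sin(u),
                     r * np.sin(v)])

def length(points):
    # polygonal approximation of the length of a curve sampled at `points`
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

t = np.linspace(0.0, 2 * np.pi, 2001)

# Meridians (v varies, u fixed): all of length 2*pi*r, just like on the flat torus.
meridians = [length(np.array([torus(u0, v) for v in t])) for u0 in (0.0, 1.0, 2.0)]

# Parallels (u varies, v fixed): lengths 2*pi*(R + r*cos v) vary a lot.
parallels = [length(np.array([torus(u, v0) for u in t])) for v0 in (0.0, np.pi / 2, np.pi)]

print("meridian lengths:", np.round(meridians, 4))   # ~ 6.2832 each
print("parallel lengths:", np.round(parallels, 4))   # ~ 18.85, 12.57, 6.28
```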
{ "source": [ "https://mathoverflow.net/questions/31222", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
31,270
Hello, I would like to ask you if there is a mathematical theory that is complete (in the sense of Gödel's theorem) but practically applicable. I know about Robinson arithmetic, which is very limited but already incomplete. So, I would like to know if there is some mathematics that could be practically used (expressiveness) and reduced to logic (completeness). I'm very new to the site and to maths as well, so please tell me if that's a silly question.
You probably intended to restrict the question to effectively axiomatizable theories. Otherwise, for example, the first-order theory of the standard model of arithmetic is a complete theory, as is the theory of the standard model of ZFC. Gödel's incompleteness theorem establishes some limitations on which effective theories can be complete. It shows that no effective, complete, consistent theory can interpret even weak theories of arithmetic such as Robinson arithmetic. However, there are many mathematically interesting theories that do not interpret the natural numbers. Examples of complete, consistent, effectively axiomatizable theories include:
- For any prime $p$, the theory of algebraically closed fields of characteristic $p$
- The theory of real closed ordered fields, mentioned by Ricky Demer
- The theory of dense linear orderings without endpoints
- Many axiomatizations of Euclidean geometry
{ "source": [ "https://mathoverflow.net/questions/31270", "https://mathoverflow.net", "https://mathoverflow.net/users/6702/" ] }
31,275
Let $R$ be a nonzero commutative ring with $1$, such that all finite matrices over $R$ have a Smith normal form. Does it follow that $R$ is a principal ideal domain? If this fails, what if we additionally suppose that $R$ is an integral domain? What can we say if we impose the additional condition that the diagonal entries be unique up to associates?
The implication is false without the assumption that R is Noetherian, because finite matrices don't detect enough information about infinitely generated ideals. For example, let R be the ring $$ \bigcup_{n \geq 0} k[[t^{1/n}]] $$ where $k$ is a field (an indiscrete valuation ring). Any finite matrix with coefficients in R comes from a subring $k[[t^{1/N}]]$ for some large $N$, and hence can be reduced to Smith normal form within this smaller PID. However, the ideal $\cup (t^{1/N})$ is not principal.
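For contrast with the counterexample, over an honest PID the Smith normal form is easy to compute in practice. A minimal sketch, assuming a SymPy version that provides smith_normal_form (the example matrix is a standard one over $\mathbb{Z}$):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form  # assumed available in recent SymPy

M = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, -4, -16]])

# Over the PID Z every matrix has a Smith normal form diag(d1, d2, d3) with d1 | d2 | d3;
# for this M the invariant factors should be 2, 6, 12 (up to units).
print(smith_normal_form(M, domain=ZZ))
```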
{ "source": [ "https://mathoverflow.net/questions/31275", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
31,315
The following problem has been attributed to Lebesgue. Let "set" denote any subset of the Euclidean plane. What is the greatest lower bound of the diameter of any set which contains a subset congruent to every set of diameter 1? There are a number of interesting geometric problems of this type. Is it possible that some of them may be difficult to solve because the solution is a real irrational number which (when expressed in decimal form) is not even recursive, and so cannot be approximated in the usual way?
The question is still open. There are at least two versions. The most popular asks for the minimal-area convex subset of the plane such that every set with diameter 1 can be translated, rotated and/or reflected to fit inside it. Here is the best lower bound I know: Peter Brass and Mehrbod Sharifi, A lower bound for Lebesgue's universal cover problem, Int. Jour. Comp. Geom. & Appl. 15 (2005), 537--544. Their lower bound is 0.832, obtained through a rigorous computer-aided search for the convex set with the smallest area that contains a circle, equilateral triangle and pentagon of diameter 1. The best upper bound I'm 100% sure of is 0.8441153, proved here: John Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem, Jour. Comp. Geom. 16 (2015), 288-299; arXiv:1502.01251. Our paper also reviews the history of this problem, which is rather interesting. In 1920, Pál noted that a regular hexagon circumscribed around a circle of diameter 1 does the job. This has area $$ \sqrt{3}/2 \approx 0.86602540 $$ But in the same paper, he showed you could safely cut off two corners of this hexagon, defined by fitting a dodecagon circumscribed around the same circle inside the hexagon. This brought the upper bound down to $$ 2 - 2/\sqrt{3} \approx 0.84529946 $$ He guessed this solution was optimal. In 1936, Sprague sliced tiny pieces off Pál's proposed solution, bringing the upper bound down to $$ \sim 0.84413770 $$ (Image from Hansen's paper, added by J.O'Rourke.) The big hexagon above is Pál's original solution. He then inscribed a regular dodecagon inside this, and showed that you can remove two of the resulting corners, say $B_1B_2B$ and $F_1F_2F,$ and get a smaller universal covering. But Sprague noticed that near $D$ you can also remove the part outside the circle with radius 1 centered at $B_1$, as well as the part outside the circle with radius 1 centered at $F_2.$ In 1975, Hansen showed you could slice very tiny corners off Sprague's solution, each of which reduces the area by $6 \cdot 10^{-18}$. In a later paper, Hansen did better: H. Hansen, Small universal covers for sets of unit diameter, Geometriae Dedicata 42 (1992), 205--213. He again sliced two corners off Sprague's solution, but now one reduces the area by a whopping $4 \cdot 10^{-11}$, while the other, he claimed, reduces the area by $6 \cdot 10^{-18}$. One author, in a parody of the usual optimistic prophecies of accelerating progress, commented that "...progress on this question, which has been painfully slow in the past, may be even more painfully slow in the future." In 1980, Duff considered nonconvex subsets of the plane with least area such that every set with diameter one can be rotated and translated to fit inside it. He found one with area $$ \sim 0.84413570 $$ which is smaller than the best known convex solution: G. F. D. Duff, A smaller universal cover for sets of unit diameter, C. R. Math. Acad. Sci. 2 (1980), 37--42. In 2015, Philip Gibbs, Karine Bagdasaryan and I wrote a paper on this topic, mentioned above. We found a new smaller universal cover, and noticed that Hansen had made a mistake in his 1992 paper. Hansen claimed to have removed pieces of area $4\cdot 10^{-11}$ and $6 \cdot 10^{-18}$ from Sprague's universal cover, but the actual areas removed were $3.7507 \cdot 10^{-11}$ and $8.4460 \cdot 10^{-21}$. 
So, Hansen's universal covering has area $$ \sim 0.844137708416 $$ Our new, smaller universal covering had area $$ \sim 0.8441153 $$ This is about $2.2 \cdot 10^{-5}$ smaller than Hansen's. To calculate the area of our universal cover Philip used a Java program, which is available online. Greg Egan checked our work using high-precision calculations in Mathematica, which are also available online. See the references in our paper for these programs and also a Java applet that Gibbs created to visualize Hansen's universal covering. It's fun to look at the smallest sliver Hansen removed, because it's 30 million times longer than it is wide! More recently Philip Gibbs wrote a paper claiming to have an even smaller universal covering, with area $$ \sim 0.8440935944 $$ Philip Gibbs, An upper bound for Lebesgue's universal covering problem, 22 January 2018. Gibbs is a master at this line of work, but I must admit I haven't checked all the details, so it would be good for some people to carefully check them. I've written a slightly more detailed account of the Lebesgue universal covering problem with some pictures here: J. Baez, Lebesgue's universal covering problem (part 1), Azimuth, 8 December 2013. J. Baez, Lebesgue's universal covering problem (part 2), Azimuth, 3 February 2015. J. Baez, Lebesgue's universal covering problem (part 3), Azimuth, 7 October 2018. If anyone knows of further progress on this puzzle, please let me know!
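The first two numbers in the story are elementary to recompute; here is a small arithmetic sketch (only the classical hexagon geometry, nothing from the later papers) checking Pál's bounds $\sqrt{3}/2$ and $2 - 2/\sqrt{3}$.

```python
import math

d = 1.0                      # diameter of the circle the hexagon circumscribes
apothem = d / 2              # inradius of the regular hexagon
side = 2 * apothem * math.tan(math.pi / 6)
hexagon_area = 6 * 0.5 * side * apothem          # equals 2*sqrt(3)*apothem**2

print(hexagon_area, math.sqrt(3) / 2)            # both ~ 0.8660254 (Pal's first bound)

# Pal's truncated hexagon: area 2 - 2/sqrt(3) ~ 0.8452995, about 2.4% smaller.
print(2 - 2 / math.sqrt(3))
```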
{ "source": [ "https://mathoverflow.net/questions/31315", "https://mathoverflow.net", "https://mathoverflow.net/users/4423/" ] }
31,337
Paper A is in the literature, and has been for more than a decade. An error is discovered in paper A and is substantial in that many details are affected, although certain fundamental properties claimed by the theorems are not. (As a poor analogue, it would be like showing that certain solutions to the Navier-Stokes equations had different local properties than what were claimed, but that the global properties were not affected. The error is not of the same caliber as Russell's correction of Frege's work in logic.) The author is notified, who kindly acknowledges the error. Now what? Should the remaining action lie fully on the author, or should the discoverer of the error do more, such as contact the journal, or publish his own correction to the paper? How long should one wait before suitable action is taken? And what would be suitable action if not done by the author? Based on remarks from those who previewed this question on meta.mathoverflow, I propose the following taxonomy of the various kinds of error that could be considered:

- typographical: an error where a change of a character or a word would render the portion of the paper correct. In some cases, the context will provide enough redundancy that the error can be easily fixed by the reader. Addressing these errors by errata lists and other means has its importance, but handling those properly is meant for another question.
- slip: (This version is slightly different from the source; cf. the discussion on meta for the source http://mathoverflow.tqft.net/discussion/493/how-do-i-fix-someones-published-error/ ) This is an error in a proof which may be corrected, although not obviously so. In a slip, the claimed main theorem is either true or can be rescued with little cost. In my opinion, the degree of response is proportional to the amount of effort needed to fix it (and is often minor), but there may be slips major enough to warrant the questions above.
- miscalculation: often a sign or quantity error. In some cases the results are minor, and lead to better or worse results depending on the calculation. I've included some miscalculations in some of my work to see if anyone would catch them. I've also prepared a response which shows the right calculation and still supports the main claims of the work. (See below on impact as a factor.)
- oversight or omission: stating a fact as true without sufficient folklore to back up that fact. In some cases the author doesn't include the backup to ease (the reading of) the paper and because the author thinks the audience can provide such backup. More seriously, the omission occurs because the author thought the fact was true and that there was an easy proof, when actually the fact may or may not be a fact and the author actually had a faulty argument leading him to think it true.
- major blunder: claiming a result which turns out not to be true in a socially accepted proof system. Proofs of Euclid's fifth postulate from the other four fall into this type.

The above taxonomy is suggested to help determine the type of response to be made by the discoverer. Also, degree of severity is probably not capable of objective measure, but that doesn't stop one from trying. However, there are at least two other considerations:

- Degree to which other theorems (even from other papers) depend on the error in the result. I call this impact.
- Degree to which the error is known in the community. 
The case that inspired this question falls, in my mind, into the category of a miscalculation that invalidates a proposition and several results in paper A following from the proposition. However, as I alluded to above in the Navier-Stokes analogy, the corrected results have the same character as the erroneous results. I would walk on a bridge that was built using the general characteristics of the results, and not walk on a bridge that needed the specific results. In this case, I do not know what degree of impact the miscalculation has on other papers, nor how well known this miscalculation is in the community. If someone thinks they know what area of mathematics my case lies in (and are sufficiently experienced in the area), and they are willing to keep information confidential, I am willing to provide more detail in private. Otherwise, in your responses, I ask that no confidentiality be broken, and that no names be used unless to cite instances that are already well-enough known that revealing the names here will do no harm. Also, please include some idea of the three factors listed above (error type, impact on other results, community awareness), as well as other contributing factors. This feels like a community-wiki question. Please, one response/case per answer. And do no harm. Motivation: Why do I care about fixing someone else's error? Partly, it adds to my sense of self-worth that I made a contribution, even if the contribution has no originality. Partly, I want to make sure that no one suffers from the mistake. Partly, I want to bring attention to that area of mathematics and encourage others to contribute. Mostly though, it just leaves an empty feeling when one reaches the "Now What?" stage mentioned above. Feel free to include emotional impact, muted sufficiently for civil discourse. Gerhard "Ask Me About System Design" Paseman, 2010.07.10
Some advice explicitly directed at less senior people. I would very much advise someone who does not yet have tenure NOT to take the nuclear option (e.g. posting a paper on the arXiv accusing someone of being wrong, or writing irate letters to the editors of a journal). In the extremely rare cases in which this has to be done, it is best done by someone who is both pretty senior and very politically skilled. This leads me to my other piece of advice. Namely, talk to other, more senior people in your research area. First, they might be able to convince you that it isn't really as serious an error as you think. Second, they will probably know the personalities involved better, and be more effective at convincing an author to do the right thing if something has to be done. The two times something like this has happened to me, I had ended up proving stronger results than the erroneous papers by pretty different techniques. I buried remarks at the ends of the introductions of my papers mentioning the wrong papers and explaining where they went wrong. On one of those occasions the author had left math and I didn't know how to contact him, so I didn't correspond with him first (after I posted the paper on the arXiv, one of his friends contacted him and we exchanged some friendly emails). The other time, I explicitly cleared the language I used with the original author.
{ "source": [ "https://mathoverflow.net/questions/31337", "https://mathoverflow.net", "https://mathoverflow.net/users/3402/" ] }
31,358
This question originates from a bit of history. In the first paper on quantum Turing machines, the authors left a key uniformity condition out of their definition. Three mathematicians subsequently published a paper proving that quantum Turing machines could compute uncomputable functions. In subsequent papers the definition of quantum Turing machine was revised to include the uniformity condition, correcting what was clearly a mathematical error the original authors made. It seems to me that in the idealized prescription for doing mathematics, the original definition would have been set permanently, and subsequent papers would have needed to use a different term (say uniform quantum Turing machine ) for the class of objects under study. I can think of a number of cases where this has happened; even in cases where, in retrospect, the original definition should have been different. My question is: are there other cases where a definition has been revised after it was realized that the first formulation was "wrong"?
Here's my favorite example. Imre Lakatos' book Proofs and Refutations contains a very long dialogue between a teacher and pupils who debate what are good definitions of polyhedra, with respect to a claimed proof that $V-E+F=2$ is true for polyhedra. It's common that a good definition (or reformulation) of a concept can help yield proofs of theorems, and this book promotes the "dual" view that a proof of a theorem can lead to a good definition in hindsight. The footnotes of this dialogue show that Lakatos is actually tracing the history of the Euler characteristic in the mathematical literature. In short, both the definition(s) and the proof(s) went through substantial revisions over time.
{ "source": [ "https://mathoverflow.net/questions/31358", "https://mathoverflow.net", "https://mathoverflow.net/users/2294/" ] }
31,387
The common knowledge in this regard seems to be that Hilbert's Fifth Problem was completely solved by Gleason, Montgomery, and Zippin. However, such wisdom was contested by Peter Olver: Olver, Peter J. , Non-associative local Lie groups , J. Lie Theory 6, No.1, 23-51 (1996). ZBL0862.22005 , PDF p. 6fn. More precisely: Hilbert’s fifth problem concerns the role of analyticity in general transformation groups, and seeks to generalize the result of Lie, [ 18 ; p. 366], and Schur, [ 32 ]. The Gleason–Montgomery– Zippin result only addresses the special case when a global Lie group acts on itself by right or left multiplication. Palais wrote about it in the Notices : Bolker, Ethan D. , Andrew M. Gleason 1921-2008 , Notices Am. Math. Soc. 56, No. 10, 1236-1267 (2009). ZBL1178.01040 , pp. 1243-1248, but he only treats the old story from the 1950s and seems not to be aware of Olver's remark.
The OP says: " ...Recently, Palais wrote about it in the Notices but he only treats the old story from the 1950s and seems not to be aware of Olver’s facts." Actually, I am aware of Olver's work and also Sören Illman's contribution. Sören is an old friend and wrote to me somewhat miffed that I did not mention his work on the problem. What he proved was a very nice fact---that if a proper Lie group action is differentiable, then it can be made real analytic. As I pointed out in my article, there are very simple examples that Hilbert should have noticed (see my article---linked below---if you think I am being hard on Hilbert) that show that without properness this is false. As for Olver, his contribution is also nice but a little off the beaten track. Here is a quick version. One facet of what Hilbert asked was whether a "local Lie group" (i.e., an open set in $R^n$ with a continuous group operation and inverse defined near the identity) could always be embedded in a global Lie group. Cartan answered that in a way that suffices for all practical purposes; he showed that after cutting back the original neighborhood to a smaller one it could be embedded. However a number of people (including Malcev and Douady) showed that without cutting back the answer could be "no". Their examples were infinite dimensional however, and Olver in his paper "Non-Associative Local Lie Groups" provided a class of finite dimensional examples. OK, so why didn't I mention the work of Illman, Olver and a host of others who worked on the Fifth Problem after the 1950s? If you look at my article, available for download here: http://www.ams.org/notices/200910/rtx091001236p.pdf the answer is clear. My article was part of a larger memorial article for Andy Gleason (my PhD advisor) and it was titled "Gleason's Contribution to the Solution of the Hilbert Fifth Problem". There was plenty to talk about there, and a discussion of other contributions to the Fifth Problem that happened decades later would have been out of place. By the way, in regard to what is called "Route B" in an answer above, the first section of my article is titled "What IS Hilbert's Fifth Problem" in which I try to explain at least a little bit about how and why Hilbert's original statement of the Fifth Problem morphed over time.
{ "source": [ "https://mathoverflow.net/questions/31387", "https://mathoverflow.net", "https://mathoverflow.net/users/7269/" ] }
31,391
The classical problem regarding the action of the symplectic group on its Lie algebra gives rise to the following question in the finite field case. Let $\mathbb F_p$ be a finite field. Then the symplectic group over $\mathbb F_p$ acts by conjugation on the set of matrices over $\mathbb F_p$ that satisfy $\Omega A + A^t \Omega = 0$, where $\Omega$ is the skew-symmetric matrix $$ \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} $$ and $I$ is the identity matrix. What are the orbits of this action?
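For the smallest case ($n=1$, $p=3$), where the symplectic group is just $SL_2(\mathbb F_3)$ and the condition $\Omega A + A^t\Omega = 0$ amounts to trace zero, a brute-force enumeration already shows what the orbit structure looks like. A rough sketch (names are ad hoc; it only counts orbits numerically and does not attempt the general classification):

```python
from itertools import product

p = 3
F = range(p)

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2))
                 for i in range(2))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p

OMEGA = ((0, 1), (p - 1, 0))                 # the standard symplectic form

def in_lie_algebra(A):
    # Omega*A + A^t*Omega == 0 (for 2x2 matrices this says trace(A) = 0)
    S, T = mat_mul(OMEGA, A), mat_mul(transpose(A), OMEGA)
    return all((S[i][j] + T[i][j]) % p == 0 for i in range(2) for j in range(2))

def inverse(g):                              # inverse of an SL_2 matrix mod p
    (a, b), (c, d) = g
    return ((d % p, (-b) % p), ((-c) % p, a % p))

# Sp_2(F_p) = SL_2(F_p): for 2x2 matrices, g^t * Omega * g = det(g) * Omega.
group = [g for g in (((a, b), (c, d)) for a, b, c, d in product(F, repeat=4)) if det(g) == 1]
algebra = [A for A in (((a, b), (c, d)) for a, b, c, d in product(F, repeat=4)) if in_lie_algebra(A)]

seen, orbits = set(), []
for A in algebra:
    if A in seen:
        continue
    orbit = {mat_mul(mat_mul(g, A), inverse(g)) for g in group}
    seen |= orbit
    orbits.append(orbit)

# Expect the zero matrix, nilpotent orbits, and the semisimple classes to show up separately.
print(len(algebra), "matrices,", len(orbits), "orbits of sizes", sorted(len(o) for o in orbits))
```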
{ "source": [ "https://mathoverflow.net/questions/31391", "https://mathoverflow.net", "https://mathoverflow.net/users/7386/" ] }
31,436
What is the largest $m$ such that there exist $v_1,\dots,v_m \in \mathbb{R}^n$ such that for all $i$ and $j$ , $1\leq i< j\leq m$ , we have $v_i \cdot v_j < 0$ . Also, the preview screen is not displaying set braces for LaTeX. Is that just the preview, or does it mean the site wouldn't display them after the question had been posted either? (I formatted this question without the braces in case it's the latter.)
You can have $m=n+1$. Take the vertices of a regular simplex with centre at the origin. You can't have $m=n+2$. There is at least a two-dimensional space of vectors $(a_1,\ldots,a_{n+2})$ such that $$\sum_{i=1}^{n+2} a_i v_i=0.$$ This gives enough room for manoeuvre to ensure some $a_i>0$ and some $a_j<0$. Thus we get some nontrivial relation $$\sum_{i\in I}a_i v_i=\sum_{j\in J}b_j v_j\qquad\qquad(*)$$ where all the $a_i>0$ and $b_j>0$ and $I$ and $J$ are disjoint non-empty sets of indices. It follows that the dot product of the two sides of $(*)$ is negative, but that contradicts it being the square of the length of the left side.
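The simplex construction in the first paragraph is easy to check numerically: the $n+1$ vectors $e_i - \frac{1}{n+1}(1,\dots,1)$ in $\mathbb{R}^{n+1}$ lie in the $n$-dimensional hyperplane of coordinate sum zero and have pairwise dot products $-\frac{1}{n+1} < 0$. A small sketch (with ad hoc names) making that explicit:

```python
import numpy as np

def simplex_vectors(n):
    """n+1 vectors in the n-dimensional hyperplane {sum of coordinates = 0} of R^{n+1}
    with all pairwise dot products negative (vertices of a regular simplex, centered)."""
    I = np.eye(n + 1)
    return I - np.full((n + 1, n + 1), 1.0 / (n + 1))   # rows are e_i - (1/(n+1)) * ones

for n in (2, 3, 10):
    V = simplex_vectors(n)
    G = V @ V.T                                          # Gram matrix
    off_diag = G[~np.eye(n + 1, dtype=bool)]
    assert np.allclose(off_diag, -1.0 / (n + 1))         # pairwise dot products are negative
    assert np.allclose(V.sum(axis=1), 0.0)               # they live in an n-dimensional subspace
    print(n, "->", n + 1, "vectors, common pairwise dot product", off_diag[0])
```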
{ "source": [ "https://mathoverflow.net/questions/31436", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
31,475
This is a follow-up question to this one about eigenvalues of matrix sums. Suppose you have matrices $A$ and $B$, and know their singular values. What can you say about the singular values of $A+B$? For Hermitian matrices and eigenvalues, this question was answered by a famous theorem of Knutson and Tao, but I don't know of anything similar for the more general case of singular values. This result would have come in useful for an estimate that I needed. I was able to obtain the estimate in a different way, but now I'm curious about the question.
The singular values of an $n \times m$ matrix $A$ are more or less the eigenvalues of the $(n+m) \times (n+m)$ matrix $\begin{pmatrix} 0 & A \\ A^* & 0 \end{pmatrix}$. By "more or less", I mean that one also has to throw in the negation of the singular values, as well as some zeroes. Using this, one can deduce inequalities for the singular values from those of the Hermitian matrices problem. This may even be the complete list of inequalities, though I don't know if this has already been proven in the literature. See also my blog post on this at http://terrytao.wordpress.com/2010/01/12/254a-notes-3a-eigenvalues-and-sums-of-hermitian-matrices/
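A quick numerical sanity check of the "more or less" statement (just the Hermitian dilation trick, nothing about the full list of inequalities): the eigenvalues of the block matrix are $\pm\sigma_i(A)$ together with $|n-m|$ zeros.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# Hermitian dilation of A
H = np.block([[np.zeros((n, n)), A],
              [A.conj().T, np.zeros((m, m))]])

eig = np.sort(np.linalg.eigvalsh(H))
sv = np.linalg.svd(A, compute_uv=False)                  # singular values of A

expected = np.sort(np.concatenate([sv, -sv, np.zeros(n - m)]))
print(np.allclose(eig, expected))                        # True: spectrum is {+/- sigma_i} plus zeros
```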
{ "source": [ "https://mathoverflow.net/questions/31475", "https://mathoverflow.net", "https://mathoverflow.net/users/2294/" ] }
31,538
Over the years, I've been somewhat in the habit of asking questions in this vein to experts in the Langlands programme. As is well known, given an algebraic number field $K$, they propose to replace the reciprocity map $$A_K^\*/K^*\rightarrow Gal(K^{ab}/K)$$ of abelian class field theory by a correspondence between the $n$-dimensional representations $\rho$ of $Gal(\bar{K}/K)$ and certain automorphic representations $\pi_{\rho}$ of $GL_n(A_K)$. (We'll skip the Weil group business for this discussion.) Substantial arithmetic information is carried on either side by the $L$-functions, which are supposed to be equal. This involves deep and beautiful mathematics whenever something can be proved, and there are many applications, such as the Sato-Tate conjecture or this recent paper of Chenevier and Clozel: http://www.math.polytechnique.fr/~chenevier/articles/galoisQautodual2.pdf (I mention this one because it is in some ways very close to the point of this question.) However, there are elementary consequences of abelian class field theory that seem not to have obvious non-abelian analogues. The one I wish to mention today has to do with the fundamental group. Given a number field $K$ (assume it's totally imaginary to avoid some silly issues), how can we tell if it has non-trivial abelian unramified extensions? Class field theory says we can look at the class group, which is quite computable in principle, and even in practice for small discriminants. But now, suppose we go on to ask the non-abelian question: which number fields have $$\pi_1(Spec(O_K))=0?$$ That is to say, when does $K$ have no unramified extension at all, abelian or not? As far as I know, there is no easy answer to this question. Niranjan Ramachandran has pointed out that there are at least ten examples, $K=\mathbb{Q}$ (oops, that's real) and $K$ an imaginary quadratic field of class number one. I know of no others. Of course I would be happy to collect some more, if someone else has them lying around. But the question I really wanted to ask is: Suppose we are in a Langlands paradise where everything reasonably conjectured by the programme is a theorem. Does this give a way to algorithmically (as we run over fields $K$) resolve this question as in the abelian case? Otherwise, is there a sensible refinement of the usual formulation that would subsume such applications? Added: I'm embarrassed to admit I hadn't followed the question mentioned by David Hansen (even after commenting on it). Thanks to David for pointing it out. Of course my main question still stands. I've changed the title following Andy Putman's suggestion. The original title evolved from a (humorously) provocative version that I normally use only among friends who already know I'm a Langlands fan: 'What is the Langlands programme good for?' Regarding jnewton's very natural thought: in addition to other difficulties, one would also need to bound $n$. Added, 13 July: Here is one more remark concerning jnewton's suggestion. Of course in the realm of classical holomorphic cusp forms, there are infinitely many of level one. More generally, it is shown in the paper http://www.math.uchicago.edu/~swshin/Plancherel.pdf that whenever $G$ is a split reductive group over $\mathbb{Q}$, there are infinitely many cuspidal automorphic representations that are unramified everywhere and belong to the discrete series at $\infty$. (I presume there are other results of this sort. This one I just happen to know from a talk last Fall.) 
According to Clozel's conjecture as you might find in http://seven.ihes.fr/IHES/Scientifique/asie/textes/Clozel-juil06.pdf (conjecture (2.1)), algebraic ones among them should correspond to motivic Galois representations (after we choose a representation of the dual group)*. I don't have the expertise to recognize algebraicity in such constructions, in addition to the danger that I'm misunderstanding something more elementary. But it seems to me quite a task to show directly that there are none corresponding to Artin representations. (The only case I could do myself is the classical one.) Now, I would like very much to be corrected on all this. But such families do seem to indicate that a 'purely automorphic' approach to the the $\pi_1$ question is somewhat unlikely, at least within the current framework of the Langlands correspondence. I suppose I'm sabotaging my own question. *Note that in these situations, the Galois representations don't have to be unramified, since there is the choice of a coefficient field $\mathbb{Q}_p$. In general, they should only be crystalline at $p$. Added 14 July: Matthew: Since I didn't really expect a complete answer to my question, if you could write your extremely informative series of comments as an answer, I will accept it. (Barring the highly unlikely possibility that someone will write something better between the time you submit your answer and the time I look at it.)
In response to Minhyong's request, I am reposting my comments above as an answer: As James Newton commented, if $L/K$ is unramified, then an irreducible $n$-dimensional representation (over $\mathbb C$) of $Gal(L/K)$ will correspond, in the Langlands paradise, to a cuspidal automorphic representation of $GL_n(\mathbb A_K)$. The cuspidal automorphic representations that arise in this way are sometimes (especially in the older literature) called "Galois type". Thus one can (more or less --- there is the issue of irreducible vs. all reps. which I won't think about here) encode unramified extensions of $K$ whose Galois groups admit $n$-dimensional representations in terms of Galois type cuspidal automorphic representations $\pi$ of $GL_n(\mathbb A_K)$ that are unramified at every finite prime. Now the question arises: how many such $\pi$ are there, and can one compute them? Being of Galois type is (conjecturally, but we are in paradise!) purely a condition on $\pi_v$ for primes $v$ of $K$ lying over $\infty$, and in fact there are a finite number of prescribed representations of $GL_n(K_v)$ ($= GL_n(\mathbb R)$ or $GL_n(\mathbb C)$) which are allowed. (E.g. for $GL_2(\mathbb Q)$, the possibilities are limit of discrete series, corresponding to holomorphic weight one forms, or principal series with $\lambda = 1/4$, corresponding to Maass forms with eigenvalue of the Laplacian equal to $1/4$.) For a given $n$ and $K$, these can be enumerated. Now since we are asking that the "weight" (i.e. the collection of $\pi_v$ for $v|\infty$) be bounded (i.e. lie in a given finite set), and we are also asking that the "level" be one (i.e. that there is no ramification at any finite prime), there is only a finite set of $\pi$ corresponding to irreducible everywhere unramified $n$-dimensional complex representations of $Gal(\bar{K}/K)$. [Aside: Minhyong asked for a sketch of a proof of this; here goes: fixing the representation at infinity means that we are fixing a bunch of elliptic operators that the automorphic forms must satisfy. Fixing the level means that we are working on some particular quotient $X/\Gamma$ (here $X$ is the symmetric space attached to the real group in question, in our particular case $GL_n(K\otimes \mathbb R)$, and $\Gamma$ is the fixed level). This need not be compact (indeed won't be in our particular case), but the cuspidal condition (indeed, even the moderate growth condition that non-cuspidal automorphic forms are required to satisfy) means that we can pretend it is, since we explicitly rule out the possibility of extreme growth at infinity. So now we are looking at sections of some bundle on a compact space satisfying a bunch of elliptic equations, and such a space of sections is finite dimensional. (The holomorphic modular forms case is the most familiar: in this case the elliptic equations are the Cauchy--Riemann equations. In the Maass form case, the corresponding fact is the finiteness of the eigenspaces of the Laplacian. These are good models for the general case.)] 
Nevertheless, it seems that one might still be able to use the trace formula to analyze the situation, at least in principle. For example, Selberg used his original formulation of the trace formula for $SL_2(\mathbb R)/SL_2(\mathbb Z)$ to compute cuspidal Maass forms of level 1, and showed that the smallest eigenvalue $\lambda$ that occurs has $\lambda$ much greater than 1/4 (maybe closer to 90?). And we all know that it is not hard to show that there are no holomorphic weight one forms of level one. So one can automorphically prove (modulo standard conjectures in the Maass form case) that there are no everywhere unramified two-dimensional complex representations of $Gal(\bar{\mathbb Q}/\mathbb Q)$. (This is of course an incredible battle, even in paradise, for a tiny portion of the information that Minkowski gives us, but is meant just to illustrate that this approach is not a priori ridiculous.) What I don't see at all from this point of view is how to study all $n$ simultaneously. For example, one could imagine implementing this program and finding, for some $K$ and some $n$, maybe $n = 10^6$, that there are no unramified extensions $L/K$ with $L$ admitting an irrep. of dimension $\leq 10^6$. This doesn't rule out the possibility that there is a beautiful, everywhere unramified extension $L/K$ whose Galois group's lowest degree irrep. happens to be of enormous dimension. The Langlands program seems to be intrinsically geared to thinking about linear representations of Galois groups, and to set the scene, you have to begin by choosing a linear group, which will then cut everything else down in a Procrustean manner. At least superficially (and this answer reflects just superficial thoughts about the question), it doesn't seem well adapted to questions related to the nature of $\pi_1(\mathcal O)$, where no a priori linear structure is given, or indeed expected. [Added July 14, in response to Minhyong's question in the comments as to whether or not discrete series can convert into non-discrete series after applying some functoriality. The answer is essentially no, as I will now explain. Added November 29, 2011: What follows is wrong; the answer seems rather to be yes. (See below for details.) For an arithmetic geometer, one should think of an automorphic form on the adelic group $G(\mathbb A_K)$ as a morphism from the motivic Galois group (over the base number field $K$) to the $L$-group of $G$. (There are subtleties and caveats, of course, but they need not concern us here; all I will say about them is that automorphic forms can give rise to "motives" with non-integral $(p,q)$ in their Hodge decomposition, which necessitates enlarging the category of motives to an unknown larger category, whose hypothetical Tannakian group is called "the Langlands group".) Now functoriality takes place when you have a map from the $L$-group of $G$ to the $L$-group of $H$; one can just compose this with a map from the motivic Galois group to the former, to obtain a map from the motivic Galois group to the latter. Functoriality is the assertion that the corresponding automorphic form on $H(\mathbb A_K)$ exists. Now given an automorphic form $\pi$, its factors at the primes $v|\infty$ encode (via the local Langlands correspondence for $\mathbb R$ or $\mathbb C$) the Hodge numbers of the corresponding motive. One feature of discrete series is that (among other properties) they give rise to regular Hodge numbers, i.e. to sequences of $h^{p,q}$ with each $h^{p,q} \leq 1$. 
Now our original automorphic rep'n $\pi$ on $G(\mathbb A)$ corresponds to a motive whose Mumford--Tate group lies in the $L$-group of $G$, and if $\pi$ has discrete series components at primes above $\infty$, it has regular Hodge numbers at every place dividing $\infty$. If we then pass to a new motive by applying some map from the $L$-group of $G$ to the $L$-group of $H$, then concretely this corresponds to doing some kind of multilinear algebra on our motive, and the only way this can kill the property of having regular Hodge--Tate weights is if we do something like taking the diagonal map from the $L$-group of $G$ into its product with itself, and then embed the latter into the $L$-group of $H$. All such constructions will necessarily be a "reducible" rep'n of the $L$-group of $G$ in the $L$-group of $H$ (more precisely, the centralizer will be a non-trivial Levi), and the corresponding automorphic form won't be a cuspform. But even if we destroy the property of having regular Hodge numbers, we typically still don't have an Artin motive. To get an Artin motive we have to have $h^{p,q} = 0$ unless $p = q = 0$, and to do this , we have to do even more destructive things, like map the $L$-group of $G$ into the $L$-group of $H$ via the trivial representation, or something similar. Again, this won't correspond to any kind of interesting automorphic forms, just those that correspond to (certain) sums of characters. So we can't produce interesting Galois type automorphic forms out of automorphic forms whose factors at primes above $\infty$ are discrete series.] [ Correction added Nov. 29, 2011: From the Galois/motivic point of view, we have an algebraic group (the Mumford--Tate group of some motive), with a representation (the particular motive), and the Mumford--Tate group contains a cocharacter whose eigenvalues are the Hodge numbers. Discrete series corresponds to the eigenspaces being one dimensional. We now apply some functoriality, which is essentially to say that we apply some multi-linear algebraic process to the representation. Now this can certainly produce eigenspaces for the cocharacter of multiplicity $> 1$. (E.g. the adjoint representation of $SL_3$ has a two-dimensional eigenspace.) So it seems that functoriality doesn't preserve being discrete series. It does preserve being tempered. And the remarks about not getting Artin motives still seem okay, since while the eigenspaces can become greater than $1$-dimensional, for all the eigenspaces to become trivial, we have to do something pretty destructive, like applying functoriality for the trivial representation.]
{ "source": [ "https://mathoverflow.net/questions/31538", "https://mathoverflow.net", "https://mathoverflow.net/users/1826/" ] }
31,553
For every even $n>4$ there exists a nonabelian group of order $n$; as an example of such a group we can take the dihedral group. The question is about odd $n$. For some of them there are no nonabelian groups of order $n$ (for example, if $n$ is prime then the group of order $n$ is cyclic and hence abelian). For what odd $n$ are there known examples of nonabelian finite groups of order $n$?
It's well-known that for a natural number $n$ with prime factorization $n=\prod_i p_i^{r_i}$, all groups of order $n$ are abelian if and only if all $r_i\le 2$ and $\gcd(n,\Phi(n))=1$ where $\Phi(n)=\prod_i (p_i^{r_i}-1)$. (See http://groups.google.co.uk/group/sci.math/msg/215efc43ebb659c5?hl=en ) For other $n$ there are non-abelian groups. If some $r_i\ge3$ then we can take a direct product of a non-abelian group of order $p_i^3$ and a cyclic group. There are always non-abelian groups of order $p^3$; when $p=2$ take the quaternion group, and when $p$ is odd the group of upper triangular matrices with unit diagonal over $\mathbb{F}_p$. Otherwise $G$ will have a factor $pq$ with $p\mid(q-1)$ or $pq^2$ with $p\mid(q^2-1)$. In the first case the group of all maps $x\mapsto ax+b$ for $a$, $b$, $x\in\mathbb{F}_q$ and $a\ne 0$ has a non-abelian subgroup of order $pq$. In the second case replace $\mathbb{F}_q$ by $\mathbb{F}_{q^2}$ and then get a non-abelian group of order $pq^2$. In both cases multiply by a cyclic group to get an order $n$ non-abelian group.
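The criterion in the first sentence is easy to turn into a short script (a sketch with naive trial-division factorization) that lists the odd $n$ below 100 for which a nonabelian group of order $n$ exists.

```python
from math import gcd

def factorize(n):
    """Naive prime factorization: returns {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def all_groups_abelian(n):
    """True iff every group of order n is abelian: all exponents <= 2 and gcd(n, Phi(n)) = 1,
    where Phi(n) is the product of p^r - 1 over the prime powers p^r exactly dividing n."""
    f = factorize(n)
    if any(r >= 3 for r in f.values()):
        return False
    phi = 1
    for p, r in f.items():
        phi *= p**r - 1
    return gcd(n, phi) == 1

odd_nonabelian = [n for n in range(3, 100, 2) if not all_groups_abelian(n)]
print(odd_nonabelian)   # [21, 27, 39, 55, 57, 63, 75, 81, 93]; the smallest is 21 = 3 * 7
```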
{ "source": [ "https://mathoverflow.net/questions/31553", "https://mathoverflow.net", "https://mathoverflow.net/users/7079/" ] }
31,595
$1-ab$ invertible $\implies$ $1-ba$ invertible has a slick power series "proof" as below, where Halmos asks for an explanation of why this tantalizing derivation succeeds. Do you know one? Geometric series. In a not necessarily commutative ring with unit (e.g., in the set of all $3 \times 3$ square matrices with real entries), if $1 - ab$ is invertible, then $1 - ba$ is invertible. However plausible this may seem, few people can see their way to a proof immediately; the most revealing approach belongs to a different and distant subject. Every student knows that $1 - x^2 = (1 + x) (1 - x),$ and some even know that $1 - x^3 =(1+x +x^2) (1 - x).$ The generalization $1 - x^{n+1} = (1 + x + \cdots + x^n) (1 - x)$ is not far away. Divide by $1 - x$ and let $n$ tend to infinity; if $|x| < 1$, then $x^{n+1}$ tends to $0$, and the conclusion is that $\frac{1}{1 - x} = 1 + x + x^2 + \cdots$. This simple classical argument begins with easy algebra, but the meat of the matter is analysis: numbers, absolute values, inequalities, and convergence are needed not only for the proof but even for the final equation to make sense. In the general ring theory question there are no numbers, no absolute values, no inequalities, and no limits - those concepts are totally inappropriate and cannot be brought to bear. Nevertheless an impressive-sounding classical phrase, "the principle of permanence of functional form", comes to the rescue and yields an analytically inspired proof in pure algebra. The idea is to pretend that $\frac{1}{1 - ba}$ can be expanded in a geometric series (which is utter nonsense), so that $(1 - ba)^{-1} = 1 + ba + baba + bababa + \cdots$ It follows (it doesn't really, but it's fun to keep pretending) that $(1 - ba)^{-1} = 1 + b (1 + ab + abab + ababab + \cdots) a.$ and, after one more application of the geometric series pretense, this yields $(1 -ba)^{-1} = 1 + b (1 - ab)^{-1} a.$ Now stop the pretense and verify that, despite its unlawful derivation, the formula works. If, that is, $c = (1 - ab)^{-1}$, so that $(1 - ab)c = c(1 - ab) = 1,$ then $1 + bca$ is the inverse of $1 - ba.$ Once the statement is put this way, its proof becomes a matter of (perfectly legal) mechanical computation. Why does all this work? What goes on here? Why does it seem that the formula for the sum of an infinite geometric series is true even for an abstract ring in which convergence is meaningless? What general truth does the formula embody? I don't know the answer, but I note that the formula is applicable in other situations where it ought not to be, and I wonder whether it deserves to be called one of the (computational) elements of mathematics. -- P. R. Halmos [1] [1] Halmos, P.R. Does mathematics have elements? Math. Intelligencer 3 (1980/81), no. 4, 147-153 http://dx.doi.org/10.1007/BF03022973
The best way that I know of interpreting this identity is by generalizing it: $$(\lambda-ba)^{-1}=\lambda^{-1}+\lambda^{-1}b(\lambda-ab)^{-1}a.\qquad\qquad\qquad(*)$$ Note that this is both more general than the original formulation (set $\lambda=1$) and equivalent to it (rescale). Now the geometric series argument makes perfect sense in the ring $R((\lambda^{-1}))$ of formal Laurent power series, where $R$ is the original ring or even the "universal ring" $\mathbb{Z}\langle a,b\rangle:$ $$ (\lambda-ba)^{-1}=\lambda^{-1}+\sum_{n\geq 1}\lambda^{-n-1}(ba)^n=\lambda^{-1}(1+\sum_{n\geq 0}\lambda^{-n-1}b(ab)^n a)=\lambda^{-1}(1+b(\lambda-ab)^{-1}a).\ \square$$ A variant of $(*)$ holds for rectangular matrices of transpose sizes over any unital ring: if $A$ is a $k\times n$ matrix and $B$ is an $n\times k$ matrix then $$(\lambda I_n-BA)^{-1}=\lambda^{-1}(I_n+B(\lambda I_k-AB)^{-1}A).\qquad\qquad(**)$$ To see that, let $a = \begin{bmatrix}0 & 0 \\ A & 0\end{bmatrix}$ and $b= \begin{bmatrix}0 & B \\ 0 & 0\end{bmatrix}$ be $(n+k)\times (n+k)$ block matrices and apply $(*).\ \square$ Here are three remarkable corollaries of $(**)$ for matrices over a field:

1. $\det(\lambda I_n-BA) = \lambda^{n-k}\det(\lambda I_k-AB)$ (characteristic polynomials match)
2. $AB$ and $BA$ have the same spectrum away from $0$
3. $\lambda^k q_k(AB)\ |\ q_k(BA)$ (compatibility of the invariant factors)

I used a noncommutative version of $(**)$ for matrices over universal enveloping algebras of Lie algebras $(\mathfrak{g},\mathfrak{g'})$ forming a reductive dual pair in order to investigate the behavior of primitive ideals under algebraic Howe duality and to compute the quantum elementary divisors of completely prime primitive ideals of $U(\mathfrak{gl}_n)$ (a.k.a. quantizations of the conjugacy classes of matrices). Addendum The identity $(1+x)(1-yx)^{-1}(1+y)=(1+y)(1-xy)^{-1}(1+x)$ mentioned by Richard Stanley in the comments can be easily proven by the same method: after homogenization, it becomes $$(\lambda+x)(\lambda^2-yx)^{-1}(\lambda+y)= (\lambda+y)(\lambda^2-xy)^{-1}(\lambda+x).$$ The left hand side expands in the ring $\mathbb{Z}\langle x,y\rangle((\lambda^{-1}))$ as $$1+\sum_{n\geq 1}\lambda^{-2n}(yx)^n+ \sum_{n\geq 0}\lambda^{-2n-1}(x(yx)^n+y(xy)^n)+ \sum_{n\geq 1}\lambda^{-2n}(xy)^n,$$ which is manifestly symmetric with respect to $x$ and $y.\ \square$
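A quick numerical sanity check of these formulas (my addition, not part of the original answer), using random rectangular matrices in NumPy: it verifies $(**)$ at $\lambda=1$ and the first corollary at a sample point.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 5
A = rng.standard_normal((k, n))          # k x n
B = rng.standard_normal((n, k))          # n x k

# (**) at lambda = 1:  (I_n - BA)^{-1} = I_n + B (I_k - AB)^{-1} A
lhs = np.linalg.inv(np.eye(n) - B @ A)
rhs = np.eye(n) + B @ np.linalg.inv(np.eye(k) - A @ B) @ A
print(np.allclose(lhs, rhs))             # True

# Corollary 1 at a sample point: det(x I_n - BA) = x^(n-k) det(x I_k - AB)
x = 2.7
print(np.isclose(np.linalg.det(x * np.eye(n) - B @ A),
                 x**(n - k) * np.linalg.det(x * np.eye(k) - A @ B)))   # True
```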
{ "source": [ "https://mathoverflow.net/questions/31595", "https://mathoverflow.net", "https://mathoverflow.net/users/6716/" ] }
31,603
I've been studying a bit of probability theory lately and noticed that there seems to be a universal agreement that random variables should be defined as Borel measurable functions on the probability space rather than Lebesgue measurable functions. This is so in every textbook on probability theory which I consulted. In general, it seems to me that probability theory favors the Borel algebra more than the algebra of Lebesgue measurable sets. My question is: why? In every course in measure theory, one is taught of the notion of a complete measure and completion of measures and I got the impression that a complete measure space is somewhat superior to a non-complete one (or at least that completeness makes life a bit easier on the technical level), so this preference of Borel sets puzzles me.
One should be careful with the definitions here. Notation: Given measurable spaces $(X, \mathcal{B}_X), (Y, \mathcal{B}_Y)$, a measurable map $f : X \to Y$ is one such that $f^{-1}(A) \in \mathcal{B}_X$ for $A \in \mathcal{B}_Y$. To be explicit, I'll say $f$ is $(\mathcal{B}_X, \mathcal{B}_Y)$-measurable. Let $\mathcal{B}$ be the Borel $\sigma$-algebra on $\mathbb{R}$, so the Lebesgue $\sigma$-algebra $\mathcal{L}$ is its completion with respect to Lebesgue measure $m$. Then for functions $f : \mathbb{R} \to \mathbb{R}$, "Borel measurable" means $(\mathcal{B}, \mathcal{B})$-measurable. "Lebesgue measurable" means $(\mathcal{L},\mathcal{B})$ measurable; note the asymmetry! Already this notion has some defects; for instance, if $f,g$ are Lebesgue measurable, $f \circ g$ need not be, even if $g$ is continuous. (See Exercise 2.9 in Folland's Real Analysis .) $(\mathcal{L}, \mathcal{L})$-measurable functions are not so useful; for instance, a continuous function need not be $(\mathcal{L}, \mathcal{L})$-measurable. (The $g$ from the aforementioned exercise is an example.) $(\mathcal{B}, \mathcal{L})$ is even worse. Given a probability space $(\Omega, \mathcal{F},P)$, our random variables are $(\mathcal{F}, \mathcal{B})$-measurable functions $X : \Omega \to \mathbb{R}$. The Lebesgue $\sigma$-algebra $\mathcal{L}$ does not appear. As mentioned, it would not be useful to consider $(\mathcal{F}, \mathcal{L})$-measurable functions; there simply may not be enough good ones, and they may not be preserved by composition with continuous functions. Anyway, the right analogue of "Lebesgue measurable" would be to use the completion of $\mathcal{F}$ with respect to $P$, and this is commonly done. Indeed, many theorems assume a priori that $\mathcal{F}$ is complete. Note that, for similar reasons as above, we should expect $f(X)$ to be another random variable when $f$ is Borel measurable, but not when $f$ is Lebesgue measurable. Using $(\mathcal{F}, \mathcal{L})$ in our definition of "random variable" would not avoid this, either. The moral is this: To get as many $(\mathcal{B}_X, \mathcal{B}_Y)$-measurable functions $f : X \to Y$ as possible, one wants $\mathcal{B}_X$ to be as large as possible, so it makes sense to use a complete $\sigma$-algebra there. (You already know some of the nice properties of this, e.g. an a.e. limit of measurable functions is measurable.) But one wants $\mathcal{B}_Y$ to be as small as possible. When $Y$ is a topological space, we usually want to be able to compose $f$ with continuous functions $g : Y \to Y$, so $\mathcal{B}_Y$ had better contain the open sets (and hence the Borel $\sigma$-algebra), but we should stop there.
{ "source": [ "https://mathoverflow.net/questions/31603", "https://mathoverflow.net", "https://mathoverflow.net/users/7392/" ] }
31,635
After a previous question that I asked https://mathoverflow.net/questions/31565/request-for-comments-about-a-claimed-simple-proof-of-flt-closed was closed, someone suggested in the comments that I ask another question that is more suited for MO. That question is as follows: Are there any nontrivial theorems of the form "Method X cannot possibly prove FLT."? The reason I am asking is because I'd like to know (by deductive reasoning and not relying on 350 years experience) if it is impossible to prove FLT using elementary methods.
For a while at the end of undergrad and beginning of graduate school I made some money correcting an enthusiastic amateur mathematician's incorrect proofs of FLT. (It was a good experience; he was an academic in another field, so he was professional and was willing to pay what my time was worth.) When I first read his argument I tried to convince him why the method was doomed to failure. He wasn't interested in trying to understand that, but maybe it'll still be informative here. His method of proof was to consider the "even-ness" (that is, the highest power of 2 which is a factor) of each side of the equation. He would then do ever more esoteric changes of variables until he inevitably misapplied the tricky part of the non-archimedean triangle inequality (i.e. if 4 divides x and 4 divides y then you have no idea what the "even-ness" of x+y is). At that point he got a contradiction. Now why is this doomed to failure? Well, any reasonable argument along those lines would apply to the 2-adic integers. However, if p is an odd prime, then the binomial series for the pth root converges 2-adically near 1, and it follows that every 2-adic unit is a pth power. So certainly the Fermat equation has loads of nontrivial 2-adic integer solutions: take any x, y for which x^p + y^p is a 2-adic unit and let z be its pth root.
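To illustrate (my addition, not the answerer's): the 2-adic pth root promised above can be computed explicitly by Newton/Hensel iteration. The hypothetical helper `padic_root` below finds z with z^3 ≡ 1^3 + 2^3 = 9 (mod 2^N), i.e. a nontrivial 2-adic solution of x^3 + y^3 = z^3 to any desired 2-adic precision.

```python
def padic_root(a, p, N):
    """z with z**p == a (mod 2**N), for odd p and a == 1 (mod 8).

    Plain Newton/Hensel iteration; the derivative p*z**(p-1) is odd, hence
    invertible mod 2**N (pow(x, -1, m) needs Python >= 3.8).
    """
    mod = 2**N
    z = 1                                   # valid start: 1**p == a (mod 8)
    for _ in range(N):                      # precision at least doubles each step
        deriv_inv = pow(p * pow(z, p - 1, mod) % mod, -1, mod)
        z = (z - (pow(z, p, mod) - a) * deriv_inv) % mod
    return z

N = 64
z = padic_root(9, 3, N)                     # a 2-adic cube root of 9 = 1**3 + 2**3
print(pow(z, 3, 2**N) == 9)                 # True: x=1, y=2, z solve x^3 + y^3 = z^3 mod 2^64
```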
{ "source": [ "https://mathoverflow.net/questions/31635", "https://mathoverflow.net", "https://mathoverflow.net/users/7089/" ] }
31,650
Can anyone offer advice on roughly how much commutative algebra, homological algebra etc. one needs to know to do research in (or to learn) modern algebraic geometry. Would you need to be familiar with something like the contents of Eisenbud's Commutative Algebra: With a View Toward Algebraic Geometry , or is less needed in reality? (I am familiar with more commutative algebra than that which is covered in Atiyah and MacDonald's "Introduction to Commutative Algebra", but less than that which is covered in Eisenbud's textbook.) Also, is modern algebraic geometry concerned with abstractions such as schemes, sheaves, topological spaces, commutative and noncommutative rings etc., or is it just classical algebraic geometry in an abstract form? Perhaps more specifically, to do research in modern algebraic geometry, do you need to be familiar with classical algebraic geometry, or is it possible to think of algebraic geometry as an "abstract language" and do research based just on this perception? While I suspect that, as with other branches of mathematics, "abstraction was invented to analyze the concrete", with all the emphasis currently given to the understanding of abstract tools, for someone who is not very familiar with the subject (such as myself), it seems that algebraic geometry is a "mixture" of general topology and abstract algebra. Is this right? If not, succinctly my question is: how great an influence does classical algebraic geometry have on modern algebraic geometry today?
I agree with Donu Arapura's complaint about the artificial distinction between modern and classical algebraic geometry. The only distinction to me seems to be chronological: modern work was done recently, while classical work was done some time ago. However, the questions being studied are (by and large) the same. As I commented in another post , two of the most important recent results in algebraic geometry are the deformation invariance of plurigenera for varieties of general type, proved by Siu, and the finite generation of the canonical ring for varieties of general type, proved by Birkar, Cascini, Hacon, and McKernan, and independently by Siu. Both these results would be of just as much interest to the Italians, or to Zariski, as they are to us today. Indeed, they lie squarely on the same axis of research that the Italians, and Zariski, were interested in, namely, the detailed understanding of the birational geometry of varieties. Furthermore, to understand these results, I don't think that you will particularly need to learn the contents of Eisenbud's book (although by all means do learn them if you enjoy it); rather, you will need to learn geometry! And by geometry, I don't mean the abstract foundations of sheaves and schemes (although these may play a role), I mean specific geometric constructions (blowing up, deformation theory, linear systems, harmonic representatives of cohomology classes -- i.e. Hodge theory, ... ). To understand Siu's work you will also need to learn the analytic approach to algebraic geometry which is introduced in Griffiths and Harris. In summary, if you enjoy commutative algebra, by all means learn it, and be confident that it supplies one road into algebraic geometry; but if you are interested in algebraic geometry, it is by no means required that you be an expert in commutative algebra. The central questions of algebraic geometry are much as they have always been (birational geometry, problems of moduli, deformation theory, ...); they are problems of geometry, not algebra, and there are many available avenues to approach them: algebra, analysis, topology (as in Hirzebruch's book), combinatorics (which plays a big role in some investigations of Gromov--Witten theory, or flag varieties and the Schubert calculus, or ... ), and who knows what others.
{ "source": [ "https://mathoverflow.net/questions/31650", "https://mathoverflow.net", "https://mathoverflow.net/users/4842/" ] }
31,655
I'm looking for an overview of statistics suitable for the mathematically mature reader: someone familiar with measure theoretic probability at say Billingsley level, but almost completely ignorant of statistics. Most texts I've come across are either too basic, or are monographs focused on a specific area or technique. Any suggestions?
This won't directly answer the question, but here are some things a mathematician who wants to learn about statistics should learn:

- When is a random variable a statistic and when is it not? (A statistic is an observable random variable. For example, $X - E(X)$ is not a statistic if the "population average" $E(X)$ is not observable.)
- Fisher's concept of sufficiency. Examples, characterizations, theorems. In particular, the Rao--Blackwell theorem and examples of its use. That's way cool. So is the concept of completeness and the Lehmann--Scheffe theorem.
- If you think that linear regression is called linear because you're fitting a line, then you are naive. If you're fitting, e.g., a parabola by finding least-squares estimators of three parameters, then you're doing linear regression. There is also such a thing as non-linear regression. (A small numerical sketch of this point appears right after this answer.)
- Learn the Gauss--Markov theorem on Best Linear Unbiased Estimators (BLUEs).
- Look at my recent answer to a question on prediction intervals. Why do you need (the finite-dimensional version of) the spectral theorem to understand linear regression? (Look at the aforementioned answer and consider this question an exercise.)
- As long as we're on linear regression (the topic of the three bullets immediately above this one), look at the Wikipedia article titled "errors and residuals in statistics" (written mostly by me). Learn the difference between an error and a residual.
- Maybe look at "Studentized residual" as an afterthought, and then at "lack-of-fit sum of squares".
- If you think linear regression is child's play rather than something to which the most brilliant person could devote a long career in research, grow up.
- Learn the difference between frequentism and Bayesianism. In fact, look at the rant I posted on nLab about this. (The essence of Bayesianism is that probabilities are taken to be epistemic. Bayesianism is not more subjective than frequentism; rather, Bayesians and frequentists put their subjectivity in different places. A really glaring example is the 5% critical value legendarily used in medical journals. Why 5%? Because that's a subjective economic choice.)
- Learn design of experiments. Learn why Latin squares and a myriad of other combinatorial designs are used.
- OK, maybe a small and incomplete but nonetheless direct answer to the original question: perhaps Hocking's book on linear models.
- Learn to use the word "sample" correctly. If you ask the next 100 people you meet whether they intend to vote "yes" or "no", that's not 100 samples; that's one sample.
- Another thing that will give you some idea of the distinct flavor of the subject, and how it differs from probability theory and some other fields, is books on sampling.
- Learn about the Wishart distribution. And the multivariate normal distribution. Exercise: How do you prove that every non-negative-definite matrix is the variance of some random vector?
- Learn why the Behrens--Fisher problem cannot be regarded as a math problem. It belongs up there with Hilbert's problems as one of the great challenges, but it's not mathematics for this reason: one can model it as a math problem in any of a variety of different non-equivalent ways. One can solve those math problems. But which one is the "right" model? That's essentially a philosophical question. And that question, not the math problems, is the Behrens--Fisher problem. (The Behrens--Fisher problem is this: how do you draw inferences about the difference between the means of two normally distributed populations which may have different variances?
"Inferences" can mean point-estimates or interval estimates or perhaps other things.) This is just a sampling of the first things that come immediately to mind. It leans toward showing you what the subject tastes like rather than what it's important to know to do theoretical or applied research. Statistics is an immensely broader field than mathematical probability theory.
{ "source": [ "https://mathoverflow.net/questions/31655", "https://mathoverflow.net", "https://mathoverflow.net/users/7534/" ] }
31,690
Given a smooth manifold $M$, we define the differentiable structure on $TM$ in the usual way. I would like to know examples of two smooth manifolds which are non-diffeomorphic, but their tangent bundles are. Which is the smallest dimension in which one can find such examples? What if I ask the same question for $k$ pairwise non-diffeomorphic manifolds? Can we have $k=\infty$?
Here are examples of non-diffeomorphic closed manifolds with diffeomorphic tangent bundles: 3-dimensional lens spaces have trivial tangent bundles, which are diffeomorphic if and only if the lens spaces are homotopy equivalent, e.g. $L(7,1)$, $L(7,2)$ are not homeomorphic, but their tangent bundles are diffeomorphic. This follows from proofs in [Milnor, John, Two complexes which are homeomorphic but combinatorially distinct. Ann. of Math. (2) 74 1961 575--590]. Tangent bundle to any exotic $n$-sphere is diffeomorphic to $TS^n$ as proved in [De Sapio, Rodolfo, Disc and sphere bundles over homotopy spheres, Math. Z. 107 1968 232--236]. In dimensions $n\ge 5$ one can attack this question via surgery theory. For example, let $f:N\to M$ be a homotopy equivalence of closed $n$-manifolds that has trivial normal invariant (which is a bit more than requiring that $f$ preserves stable tangent bundle). Multiply $f$ by the identity map of $(D^n, S^{n-1})$, where $D^n$ is the closed $n$-disk. Then Wall's $\pi-\pi$ theorem implies that $M\times D^n$ and $N\times D^n$ are diffeomorphic, so if tangent bundles of $M, N$ are trivial, this gives manifolds with diffeomorphic tangent bundles. To illustrate the method in 3, here is a particular example of infinitely many non-homeomorphic closed manifolds with diffeomorphic tangent bundles. Fix any closed $(4r-1)$-manifold $M$ where $r\ge 2$ such that $TM$ is stably trivial and $\pi_1(M)$ has (nontrivial) elements of finite order. Then results of Chang-Weinberger imply that there are infinitely many closed $n$-manifolds $M_i$ that are simply-homotopy equivalent to $M$ and such that the tangent bundles $TM_i$ are all diffeomorphic (I am not quite sure how to get them be diffeomorphic to $M\times\mathbb R^n$ even though this should be possible). I know how to deduce this from [On invariants of Hirzebruch and Cheeger-Gromov, Geom. Topol. 7 (2003), 311--319]. I have been thinking extensively of related issues, so you might want to look at my papers at arxiv, e.g. this one.
{ "source": [ "https://mathoverflow.net/questions/31690", "https://mathoverflow.net", "https://mathoverflow.net/users/47274/" ] }
31,699
Suppose you are trying to prove result $X$ by induction and are getting nowhere fast. One nice trick is to try to prove a stronger result $X'$ (that you don't really care about) by induction. This has a chance of succeeding because you have more to work with in the induction step. My favourite example of this is Thomassen's beautiful proof that every planar graph is 5-choosable. The proof is actually pretty straightforward once you know what you should be proving. Here is the strengthened form (which is a nice exercise to prove by induction), Theorem. Let $G$ be a planar graph with at least 3 vertices such that every face of $G$ is bounded by a triangle (except possibly the outer face). Let the outer face of $G$ be bounded by a cycle $C=v_1 \dots v_kv_1$ . Suppose that $v_1$ has been coloured 1 and that $v_2$ has been coloured 2. Further suppose that for every other vertex of $C$ a list of at least 3 colours has been specified, and for every vertex of $G - C$ , a list of at least 5 colours has been specified. Then, the colouring of $v_1$ and $v_2$ can be extended to a colouring of $G$ with the specified lists. Question 1. What are some other nice examples of this phenomenon? Question 2. Under what conditions is the strategy of strengthening the induction hypothesis likely to work?
Here is a bit of advice that took me a while to learn: You don't need to know what you are proving when you start to write a proof by induction. The following method isn't needed for easy problems, but it has several times gotten me a lemma that wouldn't yield to any other means. After you've worked on the problem for a week or so and have a good feel for it, write down all the conditions that seem to be relevant. Write down all the implications you can prove of the form "If the conditions in set $S$ hold for some values of the parameters, then the conditions in set $T$ hold for some other values." Discard any in which set $S$ can be shrunk, or $T$ enlarged. Now, draw a graph with arrows between the sets. You are looking for a loop in this graph, a path from your hypothesis to the loop, and a path from the loop to your conclusion. If you find one, then you have a potential proof by induction, assuming that your parameters "decrease" as you go around the loop. This is the other part of the trick. When you start this procedure, there is no need to decide which order you are using on the set of parameters. Wait until you've found the loop! Then the loop gives you a specific recursion on your parameter set, and you can try to find a well-order with respect to which this recursion is decreasing. I'd been thinking about writing a blog post on this, but all the examples I know are really technical.
{ "source": [ "https://mathoverflow.net/questions/31699", "https://mathoverflow.net", "https://mathoverflow.net/users/2233/" ] }
31,811
For reference, the Feit-Thompson Theorem states that every finite group of odd order is necessarily solvable. Equivalently, the theorem states that there exist no non-abelian finite simple groups of odd order. I am well aware of the complexity and length of the proof. However, would it be possible to provide a rough outline of the ideas and techniques in the proof? More specifically, the sub-questions of this question are: Are the techniques in this proof purely group-theoretic or are techniques from other areas of mathematics borrowed? (Such as, for example, other branches of algebra.) In the same vein, how great an influence do the techniques (if any) from number theory and combinatorics have on the proof? (Here "combinatorics" is of course not very specific. I should emphasize that I mean "tools from combinatorics that are pure and solely derived from techniques within the area of combinatorics and that do not require "deep" group theory to derive". Similarly for "number theory".) What sorts of "character-free" techniques and ideas exist in the proof? Does a character-free proof of this result exist? (Since I suspect the answer to the latter is in the negative, I am primarily interested in an answer to the former.) What are the underlying "intuitions" behind the proof? That is, how does one come up with such a proof, or at least, certain parts of it? This is a rough question of course; "coming up" with things in mathematics is very difficult to describe. However, since the argument is so long, I suspect some sort of inspiration must have driven the proof. I have observed in group theory that many arguments naturally divide into "cases" and often the individual cases are easy to tackle and the arguments naturally "flow". Of course, here I speak of arguments whose lengths are no more than a few pages. Does the proof of the Feit-Thompson theorem share the same "structure" as smaller proofs, or is the proof structurally unique? How often do explicit "elementwise computations" arise in the proof? Is there any hope that one day someone might discover a considerably shorter proof of the Feit-Thompson Theorem? For example, would the existence of a proof of this theorem less than 50 or so pages be likely? (A proof making strong use of the classification of finite simple groups, or any other non-trivial consequence of the Feit-Thompson Theorem, does not count.) If not, why is it so difficult in group theory to provide more concise arguments? While I have Gorenstein's excellent book entitled Finite Groups at hand, I did not go far enough (when I was reading it) to actually get into the "real meat" of the discussion of the Feit-Thompson theorem; that is, to actually get a sense of the mathematics used to prove the theorem. Nor do I intend to do so in the near future. (Don't get me wrong, I would be really interested to see this proof, but it seems too much unless you intend to research finite group theory or a related area.) Thank you very much for any answers. I am aware that some aspects of this question are imprecise; I have tried my best to be as clear as possible in some cases, but there might still be possible sources of ambiguity and I apologize if they are. (If there are, I would appreciate it if you could try to look for the "obvious interpretation".) 
Also, I have a relatively strong background in finite group theory (but not a "research-level" background in the area) so feel free to use more complex group-theoretic terminology and ideas if necessary, but if possible, try to give an exposition of the proof that is as elementary as possible. Thanks again!
During a discussion at the n-category theory cafe Stephen Harris sent me this excellent expository article by Glauberman which goes into a bit more depth than wikipedia.
{ "source": [ "https://mathoverflow.net/questions/31811", "https://mathoverflow.net", "https://mathoverflow.net/users/4842/" ] }
31,846
1) Can the Riemann Hypothesis (RH) be expressed as a $\Pi_1$ sentence? More formally, 2) Is there a $\Pi_1$ sentence which is provably equivalent to RH in PA? Update (July 2010): So we have two proofs that the RH is equivalent to a $\Pi_1$ sentence. Martin Davis, Yuri Matijasevic, and Julia Robinson, "Hilbert's Tenth Problem. Diophantine Equations: Positive Aspects of a Negative Solution", 1974. Published in " Mathematical developments arising from Hilbert problems ", Proceedings of Symposium of Pure Mathematics", XXVIII:323-378 AMS. Page 335 $$\forall n >0 \ . \ \left(\sum_{k \leq \delta(n)}\frac{1}{k} - \frac{n^2}{2} \right)^2 < 36 n^3 $$ 2. Jeffrey C. Lagarias, " An Elementary Problem Equivalent to the Riemann Hypothesis ", 2001 $$\forall n>60 \ .\ \sigma(n) < \exp(H_n)\log(H_n)$$ But both use theorems from literature that make it difficult to judge if they can be formalized in PA. The reason that I mentioned PA is that, for Kreisel's purpose, the proof should be formalized in a reasonably weak theory. So a new question would be: 3) Can these two proofs of "RH is equivalent to a $\Pi_1$ sentence" be formalized in PA? Motivation: This is mentioned in P. Odifreddi, " Kreiseliana: about and around George Kreisel ", 1996, page 257. Feferman mentions that when Kreisel was trying to "unwind" the non-constructive proof of Littlewood's theorem , he needed to deal with RH. Littlewood's proof considers two cases: there is a proof if RH is true and there is another one if RH is false. But it seems that in the end, Kreisel used a $\Pi_1$ sentence weaker than RH which was sufficient for his purpose. Why is this interesting? Here I will try to explain why this question was interesting from Kreisel's viewpoint only. Kreisel was trying to extract an upperbound out of the non-constructive proof of Littlewood. His "unwinding" method works for theorems like Littlewood's theorem if they are proven in a suitable theory. The problem with this proof was that it was actually two proofs: If the RH is false then the theorem holds. If the RH is true then the theorem holds. If I remember correctly, the first one already gives an upperbound. But the second one does not give an upperbound. Kreisel argues that the second part can be formalized in an arithmetic theory (similar to PA) and his method can extract a bound out of it assuming that the RH is provably equivalent to a $\Pi_1$ sentence. (Generally adding $\Pi_1$ sentences does not allow you to prove existence of more functions.) This is the part that he needs to replace the usual statement of the RH with a $\Pi_1$ statement. It seems that at the end, in place of proving that the RH is $\Pi_1$ , he shows that a weaker $\Pi_1$ statement suffices to carry out the second part of the proof, i.e. he avoids the problem in this case. A simple application of proving that the RH is equivalent to a $\Pi_1$ sentences in PA is the following: If we prove a theorem in PA+RH (even when the proof seems completely non-constructive), then we can extract an upperbound for the theorem out of the proof. Note that for this purpose, we don't need to know whether the RH is true or is false. Note: Feferman's article mentioned above contains more details and reflections on "Kreisel's Program" of "unwinding" classical proofs to extract constructive bounds. My own interest was mainly out of curiosity. I read in Feferman's paper that Kreisel mentioned this problem and then avoided it, so I wanted to know if anyone has dealt with it.
I don't know the best way to express RH inside PA, but the following inequality $$\sum_{d \mid n} d \leq H_n + \exp(H_n)\log(H_n),$$ where $H_n = 1+1/2+\cdots+1/n$ is the $n$-th harmonic number, is known to be equivalent to RH. [J. Lagarias, An elementary problem equivalent to the Riemann hypothesis , Amer. Math. Monthly, 109 (2002), 534–543.] The same paper mentions another inequality of Robin, $$\sum_{d \mid n} d \leq e^\gamma n \log\log n \qquad (n \geq 5041),$$ where $e^\gamma = 1.7810724\ldots$, which is also equivalent to RH. Despite the appearance of $\exp,$ $\log$ and $e^\gamma$, it is a routine matter to express these inequalities as $\Pi^0_1$ statement. (Indeed, the details in Lagarias's paper make this even simpler than one would originally think.)
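A quick numerical illustration (my addition): checking Lagarias's criterion $\sigma(n) \le H_n + e^{H_n}\log H_n$ for small $n$. Of course this proves nothing about RH — the point is only that the criterion is elementary enough to evaluate directly; the divisor-sum helper below is deliberately naive.

```python
from math import exp, log

def sigma(n):                       # naive divisor sum, fine for small n
    return sum(d for d in range(1, n + 1) if n % d == 0)

H, violations = 0.0, []
for n in range(1, 2001):
    H += 1.0 / n                    # harmonic number H_n
    if sigma(n) > H + exp(H) * log(H):
        violations.append(n)
print(violations)                   # [] -- no violations in this range, as RH predicts
```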
{ "source": [ "https://mathoverflow.net/questions/31846", "https://mathoverflow.net", "https://mathoverflow.net/users/7507/" ] }
31,879
I've been very positively impressed by Tristan Needham's book "Visual Complex Analysis" , a very original and atypical mathematics book which is more oriented to helping intuition and insight than to rigorous formalization. I'm wondering if anybody knows of other nice math books which share this particular style of exposition.
John Stillwell's recent book Naive Lie Theory is amazing and in a similar vein. It provides great geometrical intuition for many of the common matrix groups. What is particularly impressive about this book is how he motivates more complicated ideas, such as maximal tori, in a very elementary fashion. It is perfect for undergrads looking for a good introduction.
{ "source": [ "https://mathoverflow.net/questions/31879", "https://mathoverflow.net", "https://mathoverflow.net/users/6357/" ] }
31,905
Let the Chern-Simons lagrangian for a group $G$ be, $$L= k \epsilon^{\mu \nu \rho} Tr[A_\mu \partial _ \nu A_\rho + \frac{2}{3} A_\mu A_\nu A_\rho]$$ Then it is claimed that on "infinitesimal" variation of the gauge field ("connection") the lagrangian changes by, $$\delta L = k \epsilon^{\mu \nu \rho} Tr[\delta A_\mu F_{\nu \rho}]$$ where the "curvature" $F_{\mu \nu}$ is given as $\partial _ \mu A_\nu - \partial _ \nu A_\mu +[A_\mu,A_\nu]$ Under gauge transformations on $A$ by $g \in G$ it changes to say $A'$ whose $\mu$ component is given as $g^{-1}A_\mu g+g^{-1}\partial _ \mu g$ (This makes sense once a representation of $G$ has been fixed after which $A$ and $g$ are both represented as matrices on the same vector space) Say the Lagrangian under the above gauge transformations change to $L'$ and then one has the relation, $$L'-L=-k \epsilon^{\mu \nu \rho}\partial_\mu Tr[\partial_\nu g g^{-1}A_\rho]-\frac{k}{3}\epsilon^{\mu \nu \rho} Tr[g^{-1}\partial_\mu gg^{-1}\partial_\nu gg^{-1}\partial_\rho g]$$ The second term of the above expression is what is proportional to the "winding number density" of the Chern-Simons lagrangian and thats what eventually gets quantized. I would like to know the following things, Is there a neat coordinate free way of proving the above two variation change equations? Doing this in the above coordinate way is turning out to be quite intractable! Since the Lagrangian is just a complex number one can talk of the "real" and the "imaginary" part of it. But I get the feeling that at times a split of this kind is done at the level of the gauge field itself. Is this true and if yes the how is it defined? (Definitely there is lot of interest in doing analytic continuation of the "level" $k$) The Euler-Lagrange equations of this action give us only the "flat" connections and in that sense it is a topological theory since only boundary conditions seem to matter. Still all the flat connection configurations are not equivalent but are labelled by homomorphisms from the first fundamental group of the 3-manifold on which the theory is defined to $G$. How to see this? What is the background theory from which this comes? And why is this called "holonomy"? (I am familiar with "holonomy" as in the context of taking a vector and parallel transporting it around a loop etc)
The secret to understanding the Chern-Simons functional over a 3-manifold is to realise that it's a 4-dimensional functional in disguise. If $X$ is a closed, oriented 4-manifold, and $P \to X$ a principal $SU(n)$-bundle, one has the Chern-Weil formula for the second Chern number, $$ c_2(P)[X] = \frac{1}{8 \pi^2} \int_X {Tr F_A^2}, $$ where $F_A$ is the curvature of a connection $A$ in $P$. If now $Y$ is a closed, oriented 3-manifold, and $P_Y \to Y$ a principal $SU(n)$-bundle (necessarily trivial) then we can define the Chern-Simons functional $$ cs: Connections(P_Y) \to \mathbb{R}/\mathbb{Z} $$ for a connection $A$ by choosing any compact oriented 4-manifold $X$ bounding $Y$ (one always exists), any $SU(n)$-bundle $P_X \to X$ extending $P_Y \to Y$ (one again exists), and any connection $B$ in $P_X$ extending $A$ (ditto), and setting $$ cs(A) = \frac{1}{8 \pi^2} \int_X {Tr F_B^2}. $$ This is well-defined (mod integers) because given another choice $(X',P_{X'},B')$, we can glue together the two manifolds along $Y$ to get a closed 4-manifold (we reverse the orientation of one of them), and express the difference in values of $cs(A)$ as a Chern-Weil integral as above, which is an integer. To obtain a formula for $cs(A)$, take $P_X$ to be a trivial bundle, and take $B$ to be a connection that is trivial except over the collar $Y\times [-1,0]$. Over that collar, the bundle is $[-1,0]\times P_Y$, and we can let $B$ be the 4-dimensional connection $A + t(A-A_0)$, where $A_0$ is a reference connection trivialising $P_Y$ (i.e., flat with trivial holonomy). Then we can explicitly perform the integral defining $cs(A)$ over the $[-1,0]$-factor (try it!) and get a formula for a real-valued lift of $cs(A)$ (here we use $A_0$ to trivialise $P_Y$), $$ CS(A) = \frac{1}{8 \pi^2} \int_Y{ Tr (A \wedge dA + \frac{2}{3} A\wedge A\wedge A)}, $$ which is the formula you gave in indices (up to a factor). From this formula, it's straightforward to compute the first variation of $CS$, and to interpret it in terms of $F_A=dA+A\wedge A$. The gauge transformation formula says that if $g$ is an automorphism of $P_Y$ of degree $d$ then $CS(gA)-CS(A) = d$. This has a conceptual explanation: linearly interpolate from $A$ to $gA$ to obtain a connection over $Y\times [0,1]$ whose Chern-Weil integral is $CS(gA)-CS(A)$. Use $g$ as a gluing recipe to obtain a bundle over $Y\times S^1$ whose second Chern number is $d$ (this defines $d$, if you will), and apply the Chern-Weil formula. Reference: S.K. Donaldson, "Floer homology groups in Yang-Mills theory". Very briefly on the second question: one can work with $SL(n,\mathbb{C})$-bundles instead. The Lie algebra is the complexification of that of $SU(n)$, so (after the bundle is trivialised), connections have real and imaginary parts which are 1-forms valued in $su(n)$.
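To spell out the first-variation claim just made (my addition, in the $\tfrac{1}{8\pi^2}$ normalisation above; $Y$ is closed, so the Stokes boundary term vanishes): using $d\,Tr(A\wedge\delta A)=Tr(dA\wedge\delta A)-Tr(A\wedge d\,\delta A)$ and cyclicity of the trace, $$ \delta CS(A) = \frac{1}{8\pi^2}\int_Y Tr\big(\delta A\wedge dA + A\wedge d\,\delta A + 2\,\delta A\wedge A\wedge A\big) = \frac{1}{8\pi^2}\int_Y Tr\big(2\,\delta A\wedge dA + 2\,\delta A\wedge A\wedge A\big) = \frac{1}{4\pi^2}\int_Y Tr\big(\delta A\wedge F_A\big). $$ In particular the critical points are exactly the flat connections, which matches the variation formula quoted in the question up to overall normalisation.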
{ "source": [ "https://mathoverflow.net/questions/31905", "https://mathoverflow.net", "https://mathoverflow.net/users/2678/" ] }
31,932
The Hessian matrix $\{\partial_i \partial_j f \}$ of a function $f:\mathbb{R}^n \to \mathbb{R}$ depends on the coordinate system you choose. If $x_1,\cdots,x_n$ and $y_1,\cdots,y_n$ are two sets of coordinates (say, in some open neighborhood of a manifold), then $\frac{\partial f(y(x))}{\partial x_i} = \sum_{k} \frac{\partial f}{\partial y_k} \frac{\partial y_k}{\partial x_i}$. Differentiating again, this time with respect to $x_j$, we get $\frac{\partial^2 f(y(x))}{\partial x_i \partial x_j} = \sum_{k} \sum_{l} \frac{\partial^2 f}{\partial y_k \partial y_l} \frac{\partial y_l}{\partial x_j} \frac{\partial y_k}{\partial x_i}+\sum_{k}\frac{\partial f(y(x))}{\partial y_k}\frac{\partial^2 y_k}{\partial x_i \partial x_j}$. At a critical point, the second term goes away, so we will consider such a case. In other words, if the derivative is a differential $1$-form, i.e. $\sum_{i} \frac{\partial f}{\partial x_i} dx_i$, a section of the cotangent bundle, then the second derivative should be $\sum_{k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} dy_k \otimes dy_l$. This makes sense since $dy_k=\sum_{i} \frac{\partial y_k}{\partial x_i} dx_i$, and $dy_l=\sum_{j} \frac{\partial y_l}{\partial x_j} dx_j$, meaning that $\sum_{k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} dy_k \otimes dy_l = \sum_{k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} (\sum_{i} \frac{\partial y_k}{\partial x_i} dx_i) \otimes (\sum_{j} \frac{\partial y_l}{\partial x_j} dx_j) = \sum_{i,j,k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} \frac{\partial y_k}{\partial x_i} \frac{\partial y_l}{\partial x_j} dx_i \otimes dx_j = \sum_{i,j} \frac{\partial^2 f}{\partial x_i \partial x_j} dx_i \otimes dx_j$ at a critical point, making it coordinate independent. Note that I did not use exterior powers, I used tensor powers, since I wanted to actually find a way to make sense of second derivatives, rather than having $d^2=0$. This means the Hessian should be a rank $2$ tensor ((2,0) or (0,2), I can't remember which, but definitely not (1,1)). Does this make sense? Can we then express the third, etc, derivative as a tensor? More interestingly, how can this help us make sense of Taylor's formula? Can we come up with a coordinate-free Taylor series of a function at a point on a manifold? EDIT: And in general, if the first $n$ derivatives vanish, then the $(n+1)$-st derivative should be a rank $n+1$ tensor, right?
No, no, no! You left out a term involving $\frac{df(y(x))}{dy}\frac{d^2y}{dx^2}$. This term vanishes at critical points -- points where $df=0$ -- so that indeed at such a point the Hessian define a tensor -- a symmetric bilinear form on the tangent space at that point -- independent of coordinates. Paying attention to what kind of bilinear form it happens to be is the beginning of Morse theory, but it's only intrinsically defined as a tensor if you're at a critical point. Notice that even the question of whether the Hessian is zero or not is dependent on coordinates. Even in a one-dimensional manifold. Taylor polynomials don't live in tensor bundles, but in something more subtle called jet bundles.
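For reference (my addition), the transformation law being referred to, written out in full, is $$\frac{\partial^2 f}{\partial x_i\,\partial x_j}=\sum_{k,l}\frac{\partial^2 f}{\partial y_k\,\partial y_l}\,\frac{\partial y_k}{\partial x_i}\,\frac{\partial y_l}{\partial x_j}+\sum_k\frac{\partial f}{\partial y_k}\,\frac{\partial^2 y_k}{\partial x_i\,\partial x_j},$$ and the second sum is exactly the non-tensorial term: it vanishes only when $df=0$, i.e. at a critical point, which is why the Hessian is well-defined as a symmetric bilinear form there and only there.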
{ "source": [ "https://mathoverflow.net/questions/31932", "https://mathoverflow.net", "https://mathoverflow.net/users/1355/" ] }
31,934
Infinite Suburbia is a Euclidean plane, P. All residents live in open unit disks which, like caravans, can travel around but are stationary most of the time. When stationary, these disks lie in a fixed subset S ⊂ P which is a countable, disjoint union of open unit disks, as dense as possible in P, subject to the following caveat: Residents of Infinite Suburbia like to travel. Therefore, if any disk d 1 in S is currently vacant, any other disk d 0 should be able to travel from its current stationary position, through the "road network" P\S (where, for the purpose of travelling, S is considered not to include d 0 or d 1 ), to the position of d 1 . Furthermore, the manner of travel is constrained: associated with P is a (fixed) "differentiable" vector field, such that any point p ∈ P has an associated velocity v(p) (I guess by "differentiable" I mean that both of its components have a gradient). In particular, if p is the centre of a disk in S then v(p) = 0. If a disk is moving and its centre is currently at p, then it must travel with velocity v(p). Since v is "differentiable", it has a corresponding acceleration field, whose magnitude is bounded by A. The city planners know that, being infinite, the Suburbia's residents are liable to want to travel arbitrarily far. Therefore, the main concern when originally deciding on S and v (apart from the density of S) was that expected journey times should be "asymptotically nice", in that if E(x) is the expected journey time between two disks which are no more than x distance apart (over all such ordered pairs in S), then E(x) should be asymptotically as small as possible. For variations on the problem, consider E to be the mean square journey time, or the maximum journey time. The question, then, is: how to choose S and v? This will presumably be a balancing act since E(x) can be made much lower if S is sufficiently sparse. But I'm imagining a compromise looking like some sort of fractal, much like the road system in real cities, but much more ordered and extending to infinity. Assuming an exact optimum can't be found, what strategies will guarantee at least a decent approximation? Am I on the wrong track with the fractal idea? Is the problem, in fact, not sufficiently well-defined?
{ "source": [ "https://mathoverflow.net/questions/31934", "https://mathoverflow.net", "https://mathoverflow.net/users/4336/" ] }
32,011
There are plenty of simple proofs out there that $\sqrt{2}$ is irrational. But does there exist a proof which is not a proof by contradiction? I.e. which is not of the form: Suppose $a/b=\sqrt{2}$ for integers $a,b$. [ deduce a contradiction here ] $\rightarrow\leftarrow$, QED Is it impossible (or at least difficult) to find a direct proof because ir -rational is a negative definition, so "not-ness" is inherent to the question? I have a hard time even thinking how to begin a direct proof, or what it would look like. How about: $\forall a,b\in\cal I \;\exists\; \epsilon$ such that $\mid a^2/b^2 - 2\mid > \epsilon$.
Below is a simple direct proof that I found as a teenager: THEOREM $\;\rm r = \sqrt{n}\;$ is integral if rational, for $\;\rm n\in\mathbb{N}$. Proof: $\;\rm r = a/b,\;\; \gcd(a,b) = 1 \implies ad-bc = 1\;$ for some $\rm c,d \in \mathbb{Z}$, by Bezout, so $\;\rm 0 = (a-br) (c+dr) = ac-bdn + r \implies r \in \mathbb{Z} \quad\square$ This idea immediately generalizes to a proof by induction on degree that $\Bbb Z$ is integrally closed (i.e. the monic case of the rational root test). Nowadays my favorite proof is the 1-line gem using Dedekind's conductor ideal - which, as I explained at length elsewhere , beautifully encapsulates the descent in ad-hoc "elementary" irrationality proofs.
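A worked instance (my addition) with $n=2$, to connect back to the question: if $\sqrt2=a/b$ with $\gcd(a,b)=1$, choose $c,d$ with $ad-bc=1$; since $a-b\sqrt2=0$, $$0=(a-b\sqrt2)(c+d\sqrt2)=ac-2bd+(ad-bc)\sqrt2=ac-2bd+\sqrt2,$$ so $\sqrt2=2bd-ac\in\mathbb{Z}$. Since $1<\sqrt2<2$, no such integer exists, hence no such $a/b$ exists. Note the direct content: the argument proves "rational implies integral" without ever assuming irrationality.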
{ "source": [ "https://mathoverflow.net/questions/32011", "https://mathoverflow.net", "https://mathoverflow.net/users/7647/" ] }
32,099
I am planning an introductory combinatorics course (mixed grad-undergrad) and am trying to decide whether it is worth budgeting a day for Lagrange inversion . The reason I hesitate is that I know of very few applications for it -- basically just enumeration of trees and some slight variants on this. I checked van Lint and Wilson , Enumerative Combinatorics II (but not the exercises) and Concrete Mathematics , and they all only present this application. So, besides counting trees, where can we use Lagrange inversion?
You can use Lagrange inversion to explicitly solve $$x^5-x-a=0\qquad (*)$$ (yes, a fifth degree equation, gasp ). More precisely, it yields an infinite series expansion $$x=-\sum_{k\geq 0}\binom{5k}{k}\frac{a^{4k+1}}{4k+1}$$ for the root of $(*)$ which is $0$ at $a=0.$ Although this isn't combinatorics, I'd gladly devote a class in any subject I teach to be able to derive it, because by Bring–Jerrard, any quintic equation can be reduced to this form, and you get a solution of something that many people believe, albeit for differing reasons, to be unsolvable.
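A quick numerical check of the series for small $a$ (my addition): truncating the sum and plugging the result back into $x^5-x-a$ should give something tiny.

```python
from math import comb

def quintic_root(a, terms=30):
    """Partial sum of -sum_{k>=0} C(5k,k) a^(4k+1)/(4k+1): the root of
    x^5 - x - a = 0 that vanishes at a = 0 (series valid for small |a|)."""
    return -sum(comb(5 * k, k) * a**(4 * k + 1) / (4 * k + 1) for k in range(terms))

a = 0.1
x = quintic_root(a)
print(x, x**5 - x - a)   # residual is ~0 at double precision
```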
{ "source": [ "https://mathoverflow.net/questions/32099", "https://mathoverflow.net", "https://mathoverflow.net/users/297/" ] }
32,126
There is an example of a function that is unbounded on every open set. Just take $f(n/m) = m$ for coprime $n$ and $m$ and $f(irrational) = 0$. I want to generalize this in a way to get a function which is not just unbounded on every open set, but whose range is equal to $\mathbb{R}$ on every open set. The latter construction clearly doesn't work. I'm interested whether such function exists and if it exists is there any constructive way to define it?
See Conway's base 13 function .
{ "source": [ "https://mathoverflow.net/questions/32126", "https://mathoverflow.net", "https://mathoverflow.net/users/7079/" ] }
32,133
Suppose $A\in R^{n\times n}$ , where $R$ is a commutative ring. Let $p_i \in R$ be the coefficients of the characteristic polynomial of $A$ : $\operatorname{det}(A-xI) = p_0 + p_1x + \dots + p_n x^n$ . I am looking for a proof that $-\operatorname{adj}(A) = p_1 I + p_2 A + \dots + p_n A^{n-1}$ . In the case where $\operatorname{det}(A)$ is a unit, $A$ is invertible, and the proof follows from the Cayley–Hamilton theorem. But what about the case where $A$ is not invertible?
Here is a direct proof along the lines of the standard proof of the Cayley–Hamilton theorem. [ This works universally, i.e. over the commutative ring $R=\mathbb{Z}[a_{ij}]$ generated by the entries of a generic matrix $A$. ] The following lemma combining Abel's summation and Bezout's polynomial remainder theorem is immediate. Lemma Let $A(\lambda)$ and $B(\lambda)$ be matrix polynomials over a (noncommutative) ring $S.$ Then $A(\lambda)B(\lambda)-A(0)B(0)=\lambda q(\lambda)$ for a polynomial $q(\lambda)\in S[\lambda]$ that can be expressed as $$q(\lambda)=A(\lambda)\frac{B(\lambda)-B(0)}{\lambda}+\frac{A(\lambda)-A(0)}{\lambda}B(0)=A(\lambda)b(\lambda)+a(\lambda)B(0) \qquad (*)$$ with $a(\lambda),b(\lambda)\in S[\lambda].$ Let $A(\lambda)=A-\lambda I_n$ and $B(\lambda)=\operatorname{adj} A(\lambda)$ [ viewed as elements of $S[\lambda]$ with $S=M_n(R)$ ], then $$A(\lambda)B(\lambda)=\det A(\lambda)=p_A(\lambda)=p_0+p_1\lambda+\ldots+p_n\lambda^n$$ is the characteristic polynomial of $A$ and $$A(0)B(0)=p_0 \text{ and } q(\lambda)=p_1+\ldots+p_n\lambda^{n-1}$$ Applying $(*),$ we get $$q(\lambda)=(A-\lambda I)b(\lambda)-\operatorname{adj} A \qquad (**) $$ for some matrix polynomial $b(\lambda)$ commuting with $A.$ Specializing $\lambda$ to $A$ in $(**),$ we conclude that $$q(A)=-\operatorname{adj} A\qquad \square$$
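As a sanity check on the statement being proved (my addition, using SymPy; not part of the proof), one can verify $-\operatorname{adj}(A)=p_1I+p_2A+\cdots+p_nA^{n-1}$ for a concrete integer matrix, with the $p_i$ read off from $\det(A-xI)$ exactly as in the question.

```python
import sympy as sp

n = 4
A = sp.Matrix([[ 2, -1,  0,  3],
               [ 1,  4, -2,  0],
               [ 0,  5,  1, -3],
               [ 2,  0, -1,  1]])
x = sp.symbols('x')

# p_0, ..., p_n from det(A - x I) = p_0 + p_1 x + ... + p_n x^n (ascending order)
p = sp.Poly((A - x * sp.eye(n)).det(), x).all_coeffs()[::-1]

rhs = p[1] * sp.eye(n)
for i in range(2, n + 1):
    rhs = rhs + p[i] * A**(i - 1)
print(-A.adjugate() == rhs)   # True
```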
{ "source": [ "https://mathoverflow.net/questions/32133", "https://mathoverflow.net", "https://mathoverflow.net/users/7667/" ] }
32,138
Let $(P,\pi,B,G)$ be a principal bundle with total space $P$, base $B$, projection $\pi$ and structure group $G$. Now I am searching for a good reference (with proofs) for the following facts: 1) The fundamental vector fields on $P$ span pointwise the vertical space - or equivalently they generate the $C^\infty(P)$-module of smooth sections of the vertical bundle. 2) Let $\gamma \colon TP \to \mathrm{Lie}(G)$ a connection one-form. The horizontal lifts of vector fields span pointwise the horizontal space - or equivalently they generate the $C^\infty(P)$-module of smooth sections of the horizontal bundle.
{ "source": [ "https://mathoverflow.net/questions/32138", "https://mathoverflow.net", "https://mathoverflow.net/users/7538/" ] }
32,169
On a Riemannian manifold, a coordinate system is called "isothermal" if the Riemannian metric in those coordinates is conformal to the Euclidean metric: $$g_{ij} = e^{f} \delta_{ij}$$ My question is: Why are such coordinate systems called "isothermal"? It must have something to do with classical thermal physics. I tried looking for a reason online, with no success. It is well known that when the dimension $n=2$, there always exist isothermal coordinates, and this is probably where they were first introduced. So maybe the nomenclature has something to do with heat diffusion in the plane? (The reason I ask is because I am planning to give a seminar talk next week giving a proof that such coordinates exist when $n=2$, and thought it would be nice to explain to the students where the name comes from...)
Isothermal coordinates are harmonic: each coordinate function $u$ solves $\triangle_g u = 0$. So locally each coordinate is a stationary solution of the heat equation. In physics, for a steady-state distribution of temperatures, each level set is called an isotherm.
{ "source": [ "https://mathoverflow.net/questions/32169", "https://mathoverflow.net", "https://mathoverflow.net/users/6871/" ] }
32,269
Consider a game where one player picks an integer between 1 and 1000 and the other has to guess it by asking yes/no questions. If the first player always answers correctly, then it's clear that in the worst case 10 questions are enough, and 10 is the smallest such number. What if the first player is allowed to give wrong answers? I'm interested in the case when the first player is allowed to give at most one wrong answer. I know a strategy with 15 questions in the worst case. Think of a number in the range [1..1000] as 10 bits. First you ask the values of all 10 bits ("Is it true that the $i$-th bit is zero?"). After that you get some number. Ask if this number is the number the first player picked. If not, you have to find where the wrong answer was given. There are 11 possibilities. Using a similar argument you can do it in 4 questions. Is it possible to ask fewer than 15 questions in the worst case?
Yes, there is a way to guess the number asking 14 questions in the worst case. To do it you need a linear code with length 14, dimension 10 and distance at least 3. One such code can be built based on the Hamming code (see http://en.wikipedia.org/wiki/Hamming_code ).

Here is the strategy. Denote the bits of the first player's number by $a_i$, $i \in [1..10]$. We start by asking the values of all those bits, that is, the questions "Is it true that the $i$-th bit of your number is zero?" Denote the answers to those questions by $b_i$, $i \in [1..10]$. Now we ask 4 additional questions ($\otimes$ is summation modulo $2$):

1. Is it true that $a_{1} \otimes a_{2} \otimes a_{4} \otimes a_{5} \otimes a_{7} \otimes a_{9}$ is equal to zero?
2. Is it true that $a_{1} \otimes a_{3} \otimes a_{4} \otimes a_{6} \otimes a_{7} \otimes a_{10}$ is equal to zero?
3. Is it true that $a_{2} \otimes a_{3} \otimes a_{4} \otimes a_{8} \otimes a_{9} \otimes a_{10}$ is equal to zero?
4. Is it true that $a_{5} \otimes a_{6} \otimes a_{7} \otimes a_{8} \otimes a_{9} \otimes a_{10}$ is equal to zero?

Let $q_1$, $q_2$, $q_3$ and $q_4$ be the answers to those additional questions. The second player now computes $t_{i}$ ($i \in [1..4]$) — the answers those questions would have based on the bits $b_j$ he previously got from the first player — and sets $d_i = q_i \otimes t_i$ (hence $d_i = 1$ iff $q_i \ne t_i$). There are 16 possible values of $(d_1, d_2, d_3, d_4)$. Here is the table of all possible errors and the corresponding values of $d_i$:

position of error -> $(d_1, d_2, d_3, d_4)$
no error -> (0, 0, 0, 0)
error in $b_1$ -> (1, 1, 0, 0)
error in $b_2$ -> (1, 0, 1, 0)
error in $b_3$ -> (0, 1, 1, 0)
error in $b_4$ -> (1, 1, 1, 0)
error in $b_5$ -> (1, 0, 0, 1)
error in $b_6$ -> (0, 1, 0, 1)
error in $b_7$ -> (1, 1, 0, 1)
error in $b_8$ -> (0, 0, 1, 1)
error in $b_9$ -> (1, 0, 1, 1)
error in $b_{10}$ -> (0, 1, 1, 1)
error in $q_1$ -> (1, 0, 0, 0)
error in $q_2$ -> (0, 1, 0, 0)
error in $q_3$ -> (0, 0, 1, 0)
error in $q_4$ -> (0, 0, 0, 1)

All the values of $(d_1, d_2, d_3, d_4)$ are different. Hence we can find where the error (if any) occurred, and therefore recover all the $a_i$.
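A quick script (my addition) confirming the key property the strategy relies on: the 15 syndromes in the table above — no lie, or a single flipped answer among the 14 — are pairwise distinct, so one lie can always be located.

```python
checks = [                      # indices of the answers entering each parity question
    [1, 2, 4, 5, 7, 9],
    [1, 3, 4, 6, 7, 10],
    [2, 3, 4, 8, 9, 10],
    [5, 6, 7, 8, 9, 10],
]

def syndrome(error_pos):
    """(d_1..d_4) when answer number error_pos is flipped; 0 means no lie.
    Positions 1..10 are the bit answers b_i, positions 11..14 are q_1..q_4."""
    return tuple(1 if (error_pos in chk or error_pos == 11 + i) else 0
                 for i, chk in enumerate(checks))

syndromes = [syndrome(e) for e in range(15)]     # no-lie case plus the 14 single-lie cases
print(len(set(syndromes)) == 15)                 # True: every possible lie is identifiable
```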
{ "source": [ "https://mathoverflow.net/questions/32269", "https://mathoverflow.net", "https://mathoverflow.net/users/7694/" ] }
32,315
A few months ago there were several math talks about how the Lie group E8 had been detected in some physics experiment. I recently looked up the original paper where this was announced, "Quantum Criticality in an Ising Chain: Experimental Evidence for Emergent E8 Symmetry", Science 327 (5962): 177–180, doi:10.1126/science.1180085 and was less than convinced by it. The evidence for the detection of E8 appears to be that they found a couple of peaks in some experiment, at points whose ratio is close to the golden ratio, which is apparently a prediction of some paper that I have not yet tracked down. The peaks are quite fuzzy and all one can really say is that their ratio is somewhere around 1.6. This seems to me to be a rather weak reason for claiming detection of a 248 dimensional Lie group; I would guess that a significant percentage of all experimental physics papers have a pair of peaks looking somewhat like this. Does anyone know enough about the physics to comment on whether the claim is plausible? Or has anyone heard anything more about this from a reliable source? (Most of what I found with google consisted of uninformed blogs and journalists quoting each other.) Update added later: I had a look at the paper mentioned by Willie Wong below, where Zamoldchikov predicts the expected masses. In fact he predicts there should be 8 peaks, and while the experimental results are consistent with the first 2 peaks, there are no signs of any of the other peaks. My feeling is that the interpretation of the experimental results as confirmation of an E8 symmetry is somewhat overenthusiastic.
It should be emphasized that this is not the $E_8$ of heterotic string theory or the $E_8$ gauge group in various grand unified theories. It comes out of something much more down-to-earth, namely solid state physics. The $E_8$ in this story is an unexpected symmetry of the two-dimensional Ising model in a magnetic field that was discovered by Zamolodchikov in 1989. The Ising model was devised as a simple mathematical model that, it was hoped, would exhibit a ferromagnetic critical point. This turned out to be the case, as Onsager showed in 1944. The model is connected to a lot of beautiful mathematics, including Kac-Moody algebras, the Yang-Baxter equation, q-series, Painlevé equations, and conformal field theory. Zamolodchikov's amazing discovery was that the conformal field theory that describes the model at its critical point can remain integrable when one perturbs away from the critical point in certain directions. One of these perturbations corresponds physically to turning on an external magnetic field. It is in this context that the $E_8$ symmetry emerges. Since the Ising model is a toy model - real ferromagnets are messier (and 3-dimensional!) - it doesn't really need experimental test. The model is important more for the physical and mathematical insight it gives, than for any quantitative information it might yield. Nevertheless, there is a tradition of trying to find real physical systems that embody the simple microscopic picture of the Ising model. (Such systems have to behave effectively as if they are one or two dimensional.) The recent work lies within this tradition, and they have managed to verify one of the consequences of Zamolodchikov's work, namely that the mass ratio of the two lightest quasiparticles is $\phi$. I have no reason to doubt that what Coldea et al. measured in their experiment is a genuine manifestation of the $E_8$ symmetry of the model. One consequence of the $E_8$ symmetry is that there are eight species of particles, with definite mass ratios. If I had to guess, I would say that it's going to be very difficult to observe the peaks corresponding to the remaining particles. The third particle has a mass very close to twice that of the lightest particle, which means that that peak will be buried under lots of nearby two-particle states. Furthermore, the particles with masses higher than two will be unstable. (The field theory studied by Zamolodchikov is integrable and therefore does not have unstable particles, but any attempt to realize the model experimentally will surely destroy the integrability and therefore the stability of the higher-mass particles.) Note: The one-dimensional quantum spin chain model in the Science article is described by a Hamiltonian that has the same eigenvectors as the transfer matrices of the classical two-dimensional square lattice Ising model solved by Onsager. So for the purposes of this discussion they are equivalent. Addendum: (This an answer to the comment of Victor Prostak that was too large to fit in the comment box.) It is widely believed (but not a theorem as far as I know) that there are only two integrable perturbations: Onsager's thermal perturbation, which has a very simple symmetry, and Zamolodchikov's magnetic perturbation with the $E_8$ symmetry. The model is not expected to be integrable if one does both perturbations simultaneously, but the question is physically just as interesting, and was studied in B.M. McCoy and T.T. 
Wu, Two-dimensional Ising field theory in a magnetic field: Breakup of the cut in the two-point function, Phys. Rev. D 18 , 1259–1267 (1978). I am not aware of any computation of the mass ratios in the general case. I wouldn't expect any exceptional symmetries to show up. On the other hand, exceptional groups do show up in related conformal field theories. A perturbation of the $\mathcal{M}(4,5)$ minimal model with central charge $c=7/10$ has an $E_7$ symmetry, and a perturbation of the $\mathcal{M}(6,7)$ minimal model with central charge $c=6/7$ has an $E_6$ symmetry. These models describe different kinds of critical points than the $c=1/2$ model does, and so are experimentally distinguishable. In addition, the mass ratios of the quasiparticles should be different.
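For reference, and to make the "golden ratio" and "mass very close to twice the lightest" remarks quantitative, the first ratios in Zamolodchikov's $E_8$ mass spectrum are (quoting the standard values, which should be checked against his paper): $$\frac{m_2}{m_1}=2\cos\frac{\pi}{5}=\frac{1+\sqrt{5}}{2}\approx 1.618,\qquad \frac{m_3}{m_1}=2\cos\frac{\pi}{30}\approx 1.989,$$ so the third particle sits essentially at the threshold $2m_1$ of the two-particle continuum, which is the quantitative reason for expecting its peak (and those of the heavier, unstable particles) to be so hard to resolve.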
{ "source": [ "https://mathoverflow.net/questions/32315", "https://mathoverflow.net", "https://mathoverflow.net/users/51/" ] }
32,479
Either intentionally or unintentionally. Include location and sculptor, if known.
The Mathematical Research Institute in Oberwolfach has a sculpture on its grounds depicting Boy's surface. [Photo of the sculpture omitted here; source: mfo.de]
{ "source": [ "https://mathoverflow.net/questions/32479", "https://mathoverflow.net", "https://mathoverflow.net/users/454/" ] }
32,533
Convex optimization is a mathematically rigorous and well-studied field. In linear programming a whole host of tractable methods give you global optima in lightning-fast times. Quadratic programming is almost as easy, and there's a good deal of semi-definite, second-order cone and even integer programming methods that can do quite well on a lot of problems. Non-convex optimization (and particularly weird formulations of certain integer programming and combinatorial optimization problems), however, is generally tackled with heuristics like "ant colony optimization". Essentially all generalizable non-convex optimization algorithms I've come across are some (often clever, but still) combination of gradient descent and genetic algorithms. I can understand why this is - on non-convex surfaces local information is a lot less useful - but I would figure that there would at least be an algorithm that provably learns for a broad class of functions whether local features indicate a nearby global optimum or not. Also, perhaps, general theories of whether and how you can project a non-convex surface into higher dimensions to make it convex or almost convex. Edit: An example. A polynomial of known degree k only needs k + 1 samples to reconstruct - does this also give you the minimum within a given range for free, or do you still need to search for it manually? For any more general class of functions, does "ability to reconstruct" carry over at all to "ability to find global optima"?
If the question is "Are there non-convex global search algorithms with provably nice properties?" then the answer is "Yes, lots." The algorithms I'm familiar with use interval analysis. Here's a seminal paper from 1979: Global Optimization Using Interval Analysis . And here's a book on the topic . The requirements on the function are that it be Lipschitz or smooth to some order, and that it not have a combinatorial explosion of local optima. These techniques aren't as fast as linear or convex programming, but they're solving a harder problem, so you can't hold it against them. And from a practical point of view, for reasonable functions, they converge plenty fast.
{ "source": [ "https://mathoverflow.net/questions/32533", "https://mathoverflow.net", "https://mathoverflow.net/users/942/" ] }
32,554
I'm teaching a short summer course on algebraic groups and it's time to talk about the Killing form on the Lie algebra. The students are all undergrads of varying levels of inexperience, and I try to make everything seem like it has a point (going back to the basic goals of "what is an algebraic group" and "what does this have to do with representation theory"). I am having a hard time justifying the Killing form from anything like first principles: it is useful, and I can prove theorems explaining why it is useful, but I can't think of an explanation of why it is reasonable to invent. The ideal answer to this question will be a "naive" explanation. Other interesting answers (which I would appreciate for myself) can be more sophisticated.
Hi Ryan, I presume given your description of the students that they know finite groups pretty well, and have seen the averaging idempotent $e=\frac{1}{|G|}\sum_{g\in G} g$, and how this can be used to construct an invariant inner product on any representation of a finite group. Perhaps you can convince them that compact groups admit the same sort of averaging idempotents via integral, and so perhaps you can construct the invariant inner product on finite dimensional representations of a compact group in more or less direct analogy with finite groups. Then you can derive the properties the Killing form should satisfy on the Lie algebra by setting $g=e^{tX}$, and taking derivatives of the axioms of the group's inner product? This is the closest connection I can think of to finite group theory, which is hopefully well-understood by, or at least familiar to, your students. What do you think? -david
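To spell out that differentiation step in symbols (a routine computation, with $\langle\cdot,\cdot\rangle$ any invariant inner product, or more generally any $\mathrm{Ad}$-invariant bilinear form on $\mathfrak{g}$): differentiating the invariance condition $$\langle \mathrm{Ad}(e^{tX})Y,\ \mathrm{Ad}(e^{tX})Z\rangle = \langle Y, Z\rangle$$ at $t=0$, and using $\frac{d}{dt}\big|_{t=0}\mathrm{Ad}(e^{tX})Y=[X,Y]$, gives $$\langle [X,Y], Z\rangle + \langle Y, [X,Z]\rangle = 0,$$ i.e. each $\mathrm{ad}\,X$ is skew for the form. That skewness is equivalent to the "associativity" property $B([X,Y],Z)=B(X,[Y,Z])$ satisfied by the Killing form $B(X,Y)=\mathrm{tr}(\mathrm{ad}\,X\,\mathrm{ad}\,Y)$, so invariance is the natural property to ask of a bilinear form on $\mathfrak{g}$ once the students have seen the averaging construction.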
{ "source": [ "https://mathoverflow.net/questions/32554", "https://mathoverflow.net", "https://mathoverflow.net/users/6545/" ] }
32,566
Hi, I'm sure I'm not the only Ph.D. mathematician on MO in serious need of career advice. I'm sure there will be other readers in similar situations, who will find any good advice very helpful. Can anyone suggest anything? Honest, serious answers only please. Note that the obvious advice, i.e. do lots of great research, write loads of papers, make friends with lots of professors at conferences and seminars, apply to many jobs, learn loads of new topics, get a brain upgrade , etc. etc. etc. is already known to me and most other people on MO. Background motivation Suppose someone (who shall remain anonymous, but let's call him Dr.H for the sake of argument) is in the following position: Dr.H has a Ph.D. in Pure Mathematics from a good English university. Dr.H 's Ph.D., whilst perfectly respectable from the mathematician's viewpoint, is not known to be of any use for industrial research or non-university jobs of any kind. Dr.H has several years' postdoc/lecturing experience, but only at universities with very low academic reputations, which has now ended. Dr.H has several published papers in good journals; but unfortunately, less than other people of his age in his area. He can do good work, but too slowly. Dr.H currently has a non-university, non-research job teaching in a school, and cannot easily attend conferences, seminars, university libraries, etc. etc., and consequently Dr.H now has even less time for research than before. Most advertised mathematical jobs (e.g. www.math-jobs.com, www.jobs.ac.uk), both academic and non-academic, demand teaching experience or other skills which Dr.H does not possess, and does not know how to acquire. SUMMARY: Dr.H 's research record is quite good, but it seems not good enough for Dr.H to get a university job involving research. However, Dr.H 's teaching experience also seems not to be good enough to get a purely teaching university job. So Dr.H appears to be in a very tricky situation! Question : what should Dr.H do? Does Dr.H have any reasonable chance of continuing his academic career? If so, how? ( Apart from the obvious "apply for more jobs, publish more papers" ). Should he apply to advertised university jobs, even though he does not satisfy the requirements? Isn't this simply a waste of time? Or should Dr.H abandon the universities entirely and seek non-university jobs? Does Dr.H have any real advantage over new B.Sc. Mathematics graduates when applying for non-university jobs of relatively low mathematical content? If so, how should he find such jobs, what exactly are these advantages, and how should he make full use of them? Important note: in your answers, please state which country you are referring to, since this can make a big difference!
I am going to give a non-serious answer, but there is a serious point behind. Lots of current and former mathematicians and theoretical computer scientists excelled in other fields. Here is my top 10 list: 1) You can change your name and start selling puzzles. 2) You can drop out and start a company ( this or that , whatever). 3) You can start a hugely successful hedge fund which will in turn employ over a hundred other Ph.D's. 4) You can write a popular book explaining why people can't count. 5) You can write a comic book , a very good one. 6) You can write three volumes of a proposed seven volume monograph , get upset over its print quality, invent a new way , write a manual on it, and sell these and other books in dozens of languages. 7) You can design a bomb. 8) Why stop on a bomb? You can move to NJ and design a computer . 9) You can start a company selling in bulk a number-theoretic algorithm accessible to undergraduates. 10) In the good old days you could become French Minister of the Interior, but it helps to be friends with an Emperor. UPDATE: while writing I discarded a few other career choices which I felt were somehow "less relevant", such as Iraqi Oil Minister , Russian oligarch , or even World Scrabble Champion .
{ "source": [ "https://mathoverflow.net/questions/32566", "https://mathoverflow.net", "https://mathoverflow.net/users/6651/" ] }
32,597
John Hubbard recently told me that he has been asking people if there are compact surfaces of negative curvature in $\mathbb{R}^4$ without getting any definite answers. I had assumed it was possible, but couldn't come up with an easy example off the top of my head. In $\mathbb{R}^3$ it is easy to show that surfaces of negative curvature can't be compact: throw planes at your surface from very far away. At the point of first contact, your plane and the surface are tangent. But the surface is everywhere saddle-shaped, so it cannot be tangent to your plane without actually piercing it, contradicting first contact. This easy argument fails in $\mathbb{R}^4$. Can the failure of the easy argument be used to construct an example? Is there a simple source of compact negative curvature surfaces in $\mathbb{R}^4$?
You will find examples (topologically, spheres with seven handles) in section 5.5 of Surfaces of Negative Curvature by E. R. Rozendorn, in Geometry III: Theory of surfaces , Yu. D. Burago VI A. Zalgaller (Eds.) EMS 48. Rozendorn tells us that «from the visual point of view, their construction seems fairly simple.» Well...
{ "source": [ "https://mathoverflow.net/questions/32597", "https://mathoverflow.net", "https://mathoverflow.net/users/2510/" ] }
32,666
Hello, recently I've been reading some algebra and sometimes I stumble upon the concept of something "being too big" to be a set. An example is given in ( http://www.dpmms.cam.ac.uk/~wtg10/tensors3.html ), where he writes, "Let B be the set of all bilinear maps defined on VxW. (That's the naughtiness - B is too big to be a set, but actually we will see in a moment that it is enough to look just at bilinear maps into R.)" (where V and W are vector spaces over R). This is too big to be a set, but why? My general question is this: when is something too big to be a set? What is it instead? Why have we put these requirements on the definition? Do we run into any problems if we let, say, B as defined up there be a set? What kind of problems do we run into?
I just want to give a refinement of other answers so far, as well as a different point of view (namely, that of a person who knows little about set theory but who also encounters these kinds of issues). As others have mentioned, the root cause of the problem is that there are big logical problems with considering "the set of all sets". Unless you would like to learn more about set theory, you needn't concern yourself with what these problems are (though Russel's paradox is fairly elementary and kind of fun). It is just one of those facts of life that non-set theorists learn to live with and that set theorists learn to love. The non-existence of the set of all sets forces us to abandon other putative sets, such as "the set of all groups", "the set of all vector spaces", "the set of all manifolds", etc. For example, it is possible to equip any set with the structure of a group and so if we were able to build the set of all groups then we would necessarily have also build the set of all sets. This is almost always what people mean when they claim that a certain construction is "too big to be a set" - the construction invokes a sloppy use of set theory language that taken literally accidentally constructs the set of all sets as a byproduct. In your case, the existence of the set of all bilinear maps on $V \times W$ constructs as a byproduct the set of all vector spaces over $R$ (every bilinear map has to have a target), and if there were a set of all vector spaces over $R$ then there would be a set of all sets. This is probably not the last time you will encounter this sort of issue. In basically every case, however, there is a trick that swoops in and saves the day. Generally the idea is to observe that you don't actually need all of the flexibility that you tried to give yourself by constructing a non-set, and that it is enough to consider a simpler object (in your case the set of all bilinear maps from $V \times W$ to $\mathbb{R}$) which is small enough to be a set but big enough to have the property that you want (in your case you want it to function as a sort of universal bilinear pairing between $V$ and $W$). Ultimately I regard these sorts of concerns as analogous to the "end user agreements" that you have to certify you've read whenever you install a Microsoft product or sign up for a gmail account. I'm sure all that fine print is important, but I feel like I would have to become a lawyer to understand it all. And just as in that case, you don't have to be a set theorist to understand how to resolve these sorts of issues most of the time - usually it just requires you to capture the flexibility present in what you are already working on. Just recently I was reading about an object which was given as the quotient by a certain equivalence relation of the set of all pairs $(T, H)$ where $H$ is a Hilbert space and $T$ is a certain kind of operator on $H$. The book pointed out that one cannot consider the set of all Hilbert spaces, but that the set theoretic difficulties can be resolved by proving that every pair $(T', H')$ is equivalent to a pair $(T, H)$ on a fixed Hilbert space $H$. So the problem was avoided by exploiting some inherent flexibility in the equivalence relation under consideration. This sort of behavior is quite typical.
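For completeness, here is the paradox alluded to at the start of this answer (the whole argument fits in two lines): with unrestricted comprehension one could form $$R=\{x : x\notin x\},$$ and then $R\in R \iff R\notin R$, a contradiction. ZFC avoids this by only allowing a subset to be separated out of an already-given set, which is why "the set of all sets" (and anything whose existence would hand it to you, like "the set of all vector spaces over $\mathbb{R}$") has to be excluded.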
{ "source": [ "https://mathoverflow.net/questions/32666", "https://mathoverflow.net", "https://mathoverflow.net/users/7607/" ] }
32,720
This is a simple doubt of mine about the basics of measure theory, which should be easy for the logicians to answer. The example I know of non Borel sets would be a Hamel basis, which needs axiom of choice, and examples of non Lebesgue measurable set would be Vitali sets , which seems to be unprovable without axiom of choice. Then I saw the answer of François G. Dorais . His construction of an uncountable $\mathbb{Q}$-independent set in $\mathbb R$ does not require axiom of choice. Which leaves a faint hope for the following: Is it possible to construct without using the axiom of choice examples of non Borel sets? There is a classic example of an analytic but non-Borel set due to Lusin, described by Gerald Edgar here , but it is not clear to me whether it needs axiom of choice since it seems to be putting restrictions on higher and higher terms in the continued fraction expansion. Since I do not know logic and set theory, I hoped of asking the experts.
No, it is not possible. It is consistent with ZF without choice that the reals are the countable union of countable sets. (*) From this it follows that all sets of reals are Borel. Of course, the "axiom" (*) makes it impossible to do any analysis. As soon as one allows the bit of choice that it is typically used to set up classical analysis as one is used to (mostly countable choice, but DC seems needed for Radon-Nikodym), one can implement the arguments needed to show (**) The usual hierarchy of Borel sets (obtained by first taking open sets, then complements, then countable unions of these, then complements, etc) does not terminate before stage $\omega_1$ (this is a kind of diagonal argument). Logicians call the sets obtained this way $\Delta^1_1$. They are in general a subcollection of the Borel sets. To show that they are all the Borel sets requires a bit of choice (One needs that $\omega_1$ is regular). There is actually a nice result of Suslin relevant here. He proved that the Borel sets are precisely the $\Delta^1_1$ sets: These are the sets that are simultaneously the continuous image of a Borel set ($\Sigma^1_1$ sets), and the complement of such a set ($\Pi^1_1$ sets). That there are $\Pi^1_1$ sets that are not $\Delta^1_1$ (and therefore, via a bit of choice, not Borel) is again a result of Suslin. He also showed that any $\Sigma^1_1$ set is either countable, or contains a copy of Cantor's set and therefore has the same size as the reals. His example of a $\Sigma^1_1$ not $\Delta^1_1$ set uses logic (a bit of effective descriptive set theory), and nowadays is more common to use the example of the $\Pi^1_1$ set WO mentioned by Joel, which is not $\Delta^1_1$ by what logicians call a boundedness argument. A nice reference for some of these issues is the book Mansfield-Weitkamp, Recursive Aspects of Descriptive Set Theory , Oxford University Press, Oxford (1985).
{ "source": [ "https://mathoverflow.net/questions/32720", "https://mathoverflow.net", "https://mathoverflow.net/users/2938/" ] }
32,766
Let $X$ be a complex normal projective variety. Is there any sufficient condition to guarantee the torsion-freeness of the Picard group of $X$? One technique I sometimes use is the following: if $X$ can be represented as a GIT quotient $Y//G$ for some projective variety $Y$ with well-known Picard group, then by using Kempf's descent lemma, we can attack the computation of the integral Picard group. Of course, if we can make a sequence of smooth blow-ups/downs between $X$ and $X'$ with well-known Picard group, then we can get information about $\mathrm{Pic}(X)$ from $\mathrm{Pic}(X')$. Is there any way to attack this problem?
[EDIT: A previous version mistakenly argued that the fundamental group of X was responsible for torsion in the Picard group. I hope that this is correct now! Btw, there is probably a more direct way of arguing, but I cannot find one at the moment.] The Picard group of X is torsion free if and only if the group ${\rm H_1}(X,\mathbf{Z})$ vanishes. By the exponential sequence, the torsion in the Picard group of X comes from the torsion in ${\rm H^2}(X,\mathbf{Z})$ and from ${\rm H^1}(X,\mathbf{C})/{\rm H^1}(X,\mathbf{Z})$. Thus the vanishing of ${\rm H_1}(X,\mathbf{Z})$ is equivalent (by the Universal Coefficient Theorem) to the torsion-freeness of the Picard group of X . ADDED (for explicitness) To make everything more explicit, assume that X is non-singular. The Picard group of X may contain torsion coming from two different sources. There might contain torsion in the connected component of the identity, and this is recorded by the torsion free part of the first homology group. Or there might be torsion in the component group of the Picard group, and this is recorded by the torsion in the first homology group. In terms of the exponential sequence, the first kind of torsion appears in the image of ${\rm H^1}(X,\mathbf{C})$, while the second one "appears" in torsion in ${\rm H^2}(X,\mathbf{Z})$. The Universal Coefficient Theorem implies that the "combination" of these two groups is the whole first integral homology group. An example of torsion of the first kind is already present in the case of curves of genus at least one: the Jacobian of the curve contains plenty of torsion bundles. An example of torsion of the second kind is the case of Enriques surfaces: the canonical divisor on such a surface is a torsion line bundle that is non-trivial. If the characteristic of the ground-field is different from two, the corresponding cover of X is a K3 surface.
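In symbols, the sequence invoked above is the exponential sequence of sheaves $$0 \to \mathbb{Z} \to \mathcal{O}_X \xrightarrow{\ \exp(2\pi i\,\cdot)\ } \mathcal{O}_X^{\times} \to 0,$$ whose long exact sequence in cohomology, together with $\operatorname{Pic}(X)\cong H^1(X,\mathcal{O}_X^\times)$, contains the piece $$H^1(X,\mathbb{Z}) \to H^1(X,\mathcal{O}_X) \to \operatorname{Pic}(X) \to H^2(X,\mathbb{Z}) \to H^2(X,\mathcal{O}_X).$$ This is where the two sources of torsion described above sit: torsion in the component group of $\operatorname{Pic}(X)$ maps into the torsion of $H^2(X,\mathbb{Z})$, while torsion in the identity component lives in the torus $H^1(X,\mathcal{O}_X)/H^1(X,\mathbb{Z})$.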
{ "source": [ "https://mathoverflow.net/questions/32766", "https://mathoverflow.net", "https://mathoverflow.net/users/4643/" ] }
32,880
Does anybody know who was Atle Selberg's advisor? I find it interesting to know the advisor's impact on his students. Unfortunately, in Selberg case, this information (even his advisor's name) seems to be nowhere to be found.
I'm a student at the university of Oslo, so I thought I'd have a go at this. I just talked to Erling Størmer (Carl's grandson) who is a professor emeritus here. He said that in practice Atle had no advisor. Of course someone must have signed the papers but he doesn't know who (I don't really see what difference it makes anyway). Erling told me that according to Atle the reason why so many Norwegian mathematicians at the time worked in number theory is that they were all self-taught, and number theory is more accessible to the autodidact.
{ "source": [ "https://mathoverflow.net/questions/32880", "https://mathoverflow.net", "https://mathoverflow.net/users/7820/" ] }
32,892
In Ebbinghaus-Flum-Thomas's Introduction to Mathematical Logic, the following assertion is made: If ZFC is consistent, then one can obtain a polynomial $P(x_1, ..., x_n)$ which has no roots in the integers. However, this cannot be proved (within ZFC). So if $P$ has no roots, then mathematics (=ZFC, for now) cannot prove it. The justification is that Matiyasevich's solution to Hilbert's tenth problem allows one to turn statements about provable truths in a formal system to the existence of integer roots to polynomial equations. The statement is "ZFC is consistent," which cannot be proved within ZFC thanks to Gödel's theorem. Question: Has such a polynomial ever been computed? (This arose in a comment thread on the beta site math.SE.)
In his Master's Thesis, Merlin Carl has computed a polynomial that is solvable in the integers iff ZFC is inconsistent. A joint paper with his advisor Boris Moroz on this subject can be found at http://www.math.uni-bonn.de/people/carl/preprint.pdf .
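The reduction behind such a polynomial, sketched here for convenience (this is the standard argument, not anything specific to Carl's construction): "ZFC is inconsistent" is a $\Sigma_1$ statement, asserting $\exists n\,(n \text{ codes a ZFC-proof of } 0=1)$, where the condition after the quantifier is computably checkable. By the MRDP theorem every computably enumerable condition is Diophantine, so there is an explicit polynomial $P$ with integer coefficients such that $$\exists x_1,\dots,x_n \in \mathbb{Z}\ \ P(x_1,\dots,x_n)=0 \quad\Longleftrightarrow\quad \mathrm{ZFC} \text{ is inconsistent}.$$ Hence if ZFC is consistent, $P$ has no integral roots; but a ZFC-proof of that fact would amount to a ZFC-proof of $\mathrm{Con}(\mathrm{ZFC})$, which Gödel's second incompleteness theorem rules out.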
{ "source": [ "https://mathoverflow.net/questions/32892", "https://mathoverflow.net", "https://mathoverflow.net/users/344/" ] }
32,923
I'm currently trying to understand the concepts and theory behind some of the common proof verifiers out there, but am not quite sure of the exact nature and construction of the sort of systems/proof calculi they use. Are they essentially based on higher-order logics that use Henkin semantics, or is there something more to it? As I understand, extending Henkin semantics to higher-order logic does not render the formal system any less sound, though I am not too clear on that. Though I'm mainly looking for a general answer with useful examples, here are a few specific questions: What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative. Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher order logics? Where does typed lambda calculus come into proof verification? Are there any other approaches than higher order logic to proof verification? What are the limitations/shortcomings of existing proof verification systems (see below)? The Wikipedia pages on proof verification programs such as HOL Light, Coq, and Metamath give some idea, but these pages contain limited/unclear information, and there are rather few specific high-level resources elsewhere. There are so many variations on formal logics/systems used in proof theory that I'm not sure quite what the base ideas of these systems are - what is required or optimal and what is open to experimentation. Perhaps a good way of answering this, certainly one I would appreciate, would be a brief guide (albeit with some technical detail/specifics) on how one might go about generating a complete proof calculus (proof verification system) from scratch? Any other information in the form of explanations and examples would be great too, however.
What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative. Don't think of type theory, categorical logic, and model theory as alternatives to one another. Each step on the progression forgets progressively more structure, and whether that structure is essence or clutter depends on the problem you are trying to solve. Roughly speaking, the two poles are type theory and model theory, which focus on proofs and provability, respectively. To a model theorist, two propositions are the same if they have the same provability/truth value. To a type theorist, equisatisfiability means that we have a proof of the biimplication, which is obviously not the same thing as the propositions being the same. (In fact, even the right notion of equivalence for proofs is still not settled to type theorists' satisfaction.) Categorical logicians tend to move between these two poles; on the one hand, gadgets like Lawvere doctrines and topoi are essentially model-theoretic, since they are provability models. On the other hand, gadgets like cartesian closed categories give models of proofs, up to $\beta\eta$-equivalence. Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher order logics? It depends on what you are doing. If you are building a computerized tool, then typically either natural deduction or sequent calculus is the way to go, because these calculi both line up with human practice and help constrain proof search in ways helpful to computers. It makes sense to cook up a sequent calculus or natural deduction system even if the theory you want to use (e.g., set theory) is not normally cast in these terms. On the other hand, model theory has been spectacularly successful in applications to mathematics, and this is in part because it does not have a built-in notion of proof system -- so there is simply less machinery you need to reinterpret before you can apply it to a mathematical problem. (The corresponding use of type theory is much less developed; homotopy theorists are in the very earliest stages of turning dependent type theory into ordinary mathematics.) Where does typed lambda calculus come into proof verification? Every well-behaved intuitionistic logic has a corresponding typed lambda calculus. See Sorensen and Urcyczyn's Lectures on the Curry-Howard Correspondence for (many) more details. Are there any other approaches than higher order logic to proof verification? Yes and no. If you're interested in actual, serious mathematics, then there is no alternative to HOL or the moral equivalent (such as dependent type theory or set theory) because mathematics deals intrinsically with higher-order concepts. However, large portions of any development involve no logically or conceptually complex arguments: they are just symbol management, often involving decidable theories. This is often amenable to automation, if the problems in question are not stated in unnaturally higher-order language. (Sometimes, as in the case of the Kepler conjecture, there is an artificial way of stating the problem in a simple theory. This is essentially the reason why Hales' proof relies so heavily on computers: he very carefully reduced the Kepler conjecture to a collection of machine-checkable statements about real closed fields.) What are the limitations/shortcomings of existing proof verification systems (see below)? 
The main difficulty with these tools is finding the right balance between automation and abstraction. Basically, the more expressive the logic, the harder automated proof search becomes, and the more easily and naturally you can define abstract theories that can be used in many different contexts.
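A tiny concrete instance of the Curry-Howard correspondence mentioned above (just the standard "K combinator" example, not tied to any particular prover): the simply typed term $$\lambda x^{A}.\,\lambda y^{B}.\,x \ :\ A \to (B \to A)$$ is, read off through its typing derivation, exactly the natural-deduction proof of $A \Rightarrow (B \Rightarrow A)$: assume $A$, assume $B$, conclude $A$, discharge both assumptions. Provers in the type-theoretic family (Coq, Agda, Lean) store and re-check proofs as such terms, which is why type checking and proof checking coincide there, whereas LCF-style systems such as HOL Light instead protect a small trusted kernel of inference rules.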
{ "source": [ "https://mathoverflow.net/questions/32923", "https://mathoverflow.net", "https://mathoverflow.net/users/602/" ] }
32,967
The history of proving numbers irrational is full of interesting stories, from the ancient proofs for $\sqrt{2}$, to Lambert's irrationality proof for $\pi$, to Roger Apéry's surprise demonstration that $\zeta(3)$ is irrational in 1979. There are many numbers that seem to be waiting in the wings to have their irrationality status resolved. Famous examples are $\pi+e$, $2^e$, $\pi^{\sqrt 2}$, and the Euler–Mascheroni constant $\gamma$. Correct me if I'm wrong, but wouldn't most mathematicians find it a great deal more surprising if any of these numbers turned out to be rational rather than irrational? Are there examples of numbers that, while their status was unknown, were "assumed" to be irrational, but eventually shown to be rational?
I don't think Legendre expected this number to be rational, let alone integer...
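Presumably (the answer is terse) the number meant here is Legendre's constant: from tables of primes Legendre conjectured that $$\pi(x) \approx \frac{x}{\ln x - A(x)}$$ with $A(x)$ tending to a constant he estimated numerically as $1.08366$. It was later proved (essentially via the prime number theorem with an error term) that $$\lim_{x\to\infty}\Big(\ln x - \frac{x}{\pi(x)}\Big) = 1,$$ so the awaited constant turned out to be not merely rational but the integer $1$, which is presumably the joke in the answer above.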
{ "source": [ "https://mathoverflow.net/questions/32967", "https://mathoverflow.net", "https://mathoverflow.net/users/175/" ] }
33,037
Let $S$ = { $a^2b^3$ : $a, b \in \mathbb{Z}_{>1}$ }. Does there exist $n$ such that $n$, $n+1 \in S$? Motivation: I was thinking about Question on consecutive integers with similar prime factorizations, wondering whether any such pair had to have prime signatures with at least one 1. This would follow if the answer to the above question is negative. (This would also follow from weaker versions of the above question, such as taking out perfect $n$th powers from $S$.) Please note that $a$ and $b$ in the set definition are not allowed to be equal to 1. Otherwise, there'd be solutions like 8, 9 or 465124, 465125. (465124 = $(2\cdot 11 \cdot 31)^2$ and 465125 = $61^2\cdot 5^3$.)
As once remarked by Mahler, $x^2 - 8 y^2 = 1$ has infinitely many solutions with $27 | x$.
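To unpack the remark (this reading is an interpretation, but the arithmetic is easy to check): if $x^2 - 8y^2 = 1$ with $27 \mid x$ and $y > 1$, put $n = 8y^2$. Then $$n = y^2\cdot 2^3 \in S \qquad\text{and}\qquad n+1 = x^2 = \left(\tfrac{x}{27}\right)^{2}\cdot 9^3 \in S,$$ so each such Pell solution gives a consecutive pair in $S$, and Mahler's observation supplies infinitely many of them. Stepping the recurrence $(x,y)\mapsto(3x+8y,\;x+3y)$ from the fundamental solution $(3,1)$, the ninth solution $(x,y)=(3880899,\,1372105)$ has $x=27\cdot 143737$, giving the explicit pair $$n = 1372105^{2}\cdot 2^{3}, \qquad n+1 = 143737^{2}\cdot 9^{3}$$ (no claim that this is the smallest such pair).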
{ "source": [ "https://mathoverflow.net/questions/33037", "https://mathoverflow.net", "https://mathoverflow.net/users/7434/" ] }
33,096
Hi everyone, the summer break is coming and I am thinking of reading something about mathematical logic. Could anyone please give me some reading materials on this subject?
Here are a few suggestions (which depending on your background may be more or less useful): Logic and Structure by Dirk van Dalen. I have used this as a textbook when teaching mathematical logic and for that purpose it is decent. Some people find it a bit dry, but at least it covers a large amount of material in a reasonably clear manner. Mathematical Logic by Joseph R. Shoenfield. This book is, I think, regarded by many logicians as being the gold standard text on the subject. A Course in Mathematical Logic by John Bell and Moshe Machover. This is my personal favorite textbook in mathematical logic. (Unfortunately, it's a North Holland book and so is a bit less affordable.) A Course in Mathematical Logic for Mathematicians by Yuri I. Manin (with contributions from Boris Zilber). I think that pretty much anything written by Manin is worth taking seriously and this book is no exception. Notes on Logic and Set Theory by Peter T. Johnstone. This is a delightful little (literally) book on logic which is highly recommended (perhaps in conjunction with one of the other larger books from this list). The Mathematics of Metamathematics by Helena Rasiowa and Roman Sikorski. This is a nice book which gives a lattice theoretic development of mathematical logic. (Difficult to find, but worth a look if your library has a copy.) Introduction to Metamathematics by Stephen C. Kleene. A classic text in mathematical logic which is still a rewarding read. I hope these (admittedly biased) suggestions are some use!
{ "source": [ "https://mathoverflow.net/questions/33096", "https://mathoverflow.net", "https://mathoverflow.net/users/3849/" ] }
33,265
It is widely stated that Fermat wrote his famous note on sums of powers ("Fermat's last theorem") in, or around, 1637. How do we know the date, if the note was only discovered after his death, in 1665? My interest in this stems from the fact that if this is true, we can be absolutely certain that whatever proof Fermat had in mind was wrong, and he must have noticed (or he would have mentioned it to his correspondents in later years). On the other hand, if the note had been written much later the reasoning would fail. I have used this argument previously in talks for the general public, rather uncritically, and I would like to make sure it is sound.
Not only do we not know the date, we don't even know whether he wrote the remark at all. For all we know it might have been invented by his son Samuel, who published his father's comments. In his letters, Fermat never mentioned the general case at all, but quite often posed the problem of solving the cases $n=3$ and $n=4$. I am almost certain that Fermat discovered infinite descent around 1640, which means that in 1637 he did not have any chance of proving FLT for exponent 4 (let alone in general). In 1637, Fermat also stated the polygonal number theorem and claimed to have a proof; this is just about as unlikely as in the case of FLT -- I guess Fermat wasn't really careful in these early days. Let me also mention that Fermat posed FLT for $n=3$ always as a problem or as a question, and did not claim unambiguously to have a proof; my interpretation is that he did not have a proof for $n = 3$, and that he knew he did not have one. Edit Let me briefly quote two letters from Fermat: I. Oeuvres II, 202--205, letter to Roberval Aug. 1640 Fermat claims that if $p = 4n-1$ be prime, then $p$ does not divide a sum of two squares $x^2 + y^2$ with $\gcd(x,y) = 1$. Then he writes I have to admit frankly that I have found nothing in number theory that has pleased me as much as the demonstration of this proposition, and I would be very pleased if you made the effort of finding it, if only for learning whether I estimate my invention more highly than it deserves. This looks as if Fermat had just discovered "his method" of descent. Starting from $x^2 + y^2 = pr$ one has to show that there is a prime $q \equiv 3 \bmod 4$ dividing $r$ which is strictly less than $p$. II. In his letter to Carcavi from Aug. 1659 (Oeuvres II, 431--436), Fermat writes: I then considered certain questions which, although negative, do not remain to receive a very great difficulty, for it will be easily seen that the method of applying descent is completely different from the preceding [questions]. Such cases include the following: There is no cube that can be divided into two cubes. There is only one square number which, augmented by $2$, makes a cube, namely $25$. There are only two square numbers which, augmented by $4$, make a cube, namely $4$ and $121$. All squared powers of $2$ augmented by $1$ are prime numbers. My interpretation of this is that Fermat lists four results which he believes can be proved using his method of descent. In my opinion this implies that Fermat did not have a proof of FLT for exponent $3$ in 1659. Edit 2 In light of the discission at wiki.fr let me add a couple of additional remarks along with a promise that a nonelectronic publication of my views on Fermat will appear within the next two years (if I can find a publisher, that is). A search in google books for "hanc marginis" and Fermat for the years up to 1900 reveals several hits, none of which claims that the remark was written around 1637; in particular there are no dates given in Fermat's Oeuvres or in Heath's Diophantus. Starting with Dickson's history, this changes dramatically, and nowadays the date 1637 seems to be firmly attached to this entry. The dating of the entry seems to come from a letter written by Fermat to J. de Sainte-Croix via Mersenne mentioned in Nurdin's answer; this letter is not dated, but since Descartes, in a letter to Mersenne from 1638, refers to a result he credits to Sainte-Croix, but which Fermat claims he has discovered, it is believed that Fermat's letter to Mersenne was written well before that date. 
The reasons for dating it to September 1636 are not explained in Fermat's Oeuvres. In this letter, Fermat poses the problem of finding two fourth powers whose sum is a fourth power, and of finding two cubes whose sum is a cube. The reasoning seems to be that in 1636, Fermat had not yet found (or believed he had found) a proof of the general theorem, so the entry must have been written at a later date. Since he did not refer to the general theorem in any of his extant letters, it is also believed that he soon found his mistake, so the entry cannot have been written at a time when Fermat was mature enough to find sufficiently difficult proofs. Let me also add that the following dates can be deduced from Fermat's letters:
1638: Numbers 4n-1 are not sums of two rational squares.
1640: Fermat's Little Theorem.
1640: Discovery of infinite descent; used for showing that (1) primes 4n-1 do not divide sums of two squares.
1640: Statement of the Two-Squares Theorem.
1641-1645: Proof of (2), FLT for exponent 4.
Later: Proof of (3), the Two-Squares Theorem.
It is impossible to attach any dates between 1644 and 1654 to Fermat's discoveries since he either wrote hardly any letters in this period, or all of them are lost. Fermat claimed to have discovered infinite descent in connection with results such as (1), and that he at first could apply it only to negative statements such as (2), whereas it took him a long time until he could use his method for proving positive statements such as (3). Thus the proofs of (1) - (2) - (3) were found in this order. This means in particular that if Fermat's entry in his Diophantus was written around 1637, then the marvellous proof must have been a proof that does not use infinite descent. I would also like to remark that the Fermat equation for exponents 3 and 4 had already been studied by Arab mathematicians, such as Al-Khujandi and Al-Khazin, who both attempted to prove that there are no solutions. The cubic equation also shows up in problems posed by Frenicle and van Schooten in response to Fermat's challenge to the English mathematicians.
{ "source": [ "https://mathoverflow.net/questions/33265", "https://mathoverflow.net", "https://mathoverflow.net/users/4790/" ] }
33,282
Does there exist a set $A$ such that $A=\{A\}$? Edit (Peter LL): Such sets are called Quine atoms. (In Naive Set Theory by Paul Richard Halmos, the same question is asked on page three.) Using the usual set notation, I tried to construct such a set: first with a finite number of brackets, and it turns out that after deleting those finitely many pairs of brackets, we circle back to the original question. For instance, assuming $A=\{B\}$, we proceed as follows: $\{B\}=\{A\}\Leftrightarrow B=A \Leftrightarrow B=\{B\}$, which is equivalent to the original equation. So the only remaining possibility is to have infinitely many pairs of brackets, but I can't make sense of such a set. (Literally, such a set is both a subset and an element of itself. Furthermore, it can be shown that it is a singleton.) For some time, I thought this set was unique and corresponded to $\infty$ in some set-theoretic construction of the naturals. To recap, my question is whether this set exists and if so what "concrete" examples there are. (Maybe this set is axiomatically prevented from existing.)
In standard set theory (ZF) this kind of set is forbidden because of the axiom of foundation . There are alternative axiomatisations of set theory, some of which do not have an equivalent of the axiom of foundation. This is called non-well-founded set theory. See e.g. Aczel's anti-foundation_axiom , where there is a unique set such that $x = \{x\}$.
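Spelling out how the axiom of foundation rules such an $A$ out: if $A=\{A\}$ then $A\in A$. Apply foundation to the set $\{A\}$: it must have an element disjoint from it, and its only element is $A$; but $A \in A\cap\{A\}$ (because $A\in A$ and $A\in\{A\}$), so $A\cap\{A\}\neq\emptyset$, a contradiction. The same one-line argument shows that in ZF no set whatsoever can be a member of itself, so in particular a Quine atom cannot exist there.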
{ "source": [ "https://mathoverflow.net/questions/33282", "https://mathoverflow.net", "https://mathoverflow.net/users/5627/" ] }
33,478
The coefficients of lowest and next-highest degree of a linear operator's characteristic polynomial are its determinant and trace. These have well-known geometric interpretations. But what about its intermediate coefficients? For a linear operator $f : V \to V$, we have the beautiful formula $$\chi(f) = det(f - t) = \sum_{i=0}^n (-1)^i\ tr(\wedge^{n-i}(f))\ t^i,$$ where $\wedge^{p}(f)$ is the map induced by $f$ on grade $p$ of $V$'s exterior algebra. While this formula is rarely mentioned (at least I haven't seen it in any of the standard textbooks), it is not too surprising if you have a good grasp of exterior algebra. It presents $\chi(f)$ as a generating function for the exterior traces of $f$. My question is whether these traces have a simple geometric interpretation on par with $tr$ and $det$.
A rather simple response is to differentiate the characteristic polynomial and use your interpretation of the determinant. $$det(I-tf) = {t^n}det(\frac{1}{t}I-f) = (-t)^ndet(f-\frac{1}{t}I)= {(-t)^n}\chi(f)(1/t)$$ So if we let $\chi(f)(t) = \Sigma_{i=0}^n a_it^i$, then ${(-t)^n}\chi(f)(1/t) = (-1)^n\Sigma_{i=0}^n a_it^{n-i}$. But $I-tf$ is the path through the identity matrix, and $Det(A)$ measures volume distortion of the linear transformation $A$. $$det(I-tf)^{(k)}(t=0) = (-1)^nk!a_{n-k}$$ and a change of variables ($t\longmapsto -t$) gives (and superscript $(k)$ indicates $k$-th derivative) $$det(I+tf)^{(k)}(t=0) = (-1)^{n+k}k!a_{n-k}$$ So the coefficients of the characteristic polynomial are, up to sign, measuring the various derivatives of the volume distortion, as you perturb the identity transformation in the direction of $f$. $$a_k = (-1)^k\,\frac{det(I+tf)^{(n-k)}(t=0)}{(n-k)!}$$
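As a quick sanity check of the formulas above, take a diagonal operator $f=\operatorname{diag}(\lambda_1,\lambda_2,\lambda_3)$ on $V=\mathbb{R}^3$ and write $e_k$ for the $k$-th elementary symmetric polynomial in the $\lambda_i$, so $e_k=\operatorname{tr}(\wedge^k f)$. Then $$\det(I+tf)=\prod_{i=1}^{3}(1+t\lambda_i)=1+e_1t+e_2t^2+e_3t^3,$$ while $$\chi(f)(t)=\det(f-t)=\prod_{i=1}^{3}(\lambda_i-t)=e_3-e_2t+e_1t^2-t^3,$$ so $a_k=(-1)^k e_{3-k}$, which agrees both with $a_k=(-1)^k\det(I+tf)^{(3-k)}(0)/(3-k)!$ and with the generating-function formula $\chi(f)=\sum_i(-1)^i\operatorname{tr}(\wedge^{n-i}f)\,t^i$ from the question.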
{ "source": [ "https://mathoverflow.net/questions/33478", "https://mathoverflow.net", "https://mathoverflow.net/users/2036/" ] }
33,522
The following statement is well-known: Let $A$ be a commutative Noetherian ring and $M$ a finitely generated $A$ -module. Then $M$ is flat if and only if $M_{\mathfrak{p}}$ is a free $A_{\mathfrak{p}}$ -module for all $\mathfrak{p}$ . My question is: do we need the assumption that $A$ is Noetherian? I have a proof (from Matsumura) that doesn't require that assumption, but the fact that other references (e.g. Atiyah, Wikipedia) are including this assumption makes me rather uneasy.
By request, my earlier comments are being upgraded to an answer, as follows. For finitely generated modules over any local ring $A$, flat implies free (i.e., Theorem 7.10 of Matsumura's CRT book is correct: that's what proofs are for). So the answer to the question asked is "no". The CRT book uses the "equational criterion for flatness", which isn't in Atiyah-MacDonald (and so is why the noetherian hypothesis was imposed there). This criterion is in the Wikipedia entry for "flat module", but Wikipedia has many entries on flatness so it's not a surprise that this criterion under "flat module" would not be appropriately invoked in whatever Wikipedia entry was seen by the OP. An awe-inspiring globalization by Raynaud-Gruson (in their overall awesome paper, really with authors in that order) is given without noetherian hypotheses: if $A$ has finitely many associated primes (e.g., any noetherian ring, or any domain whatsoever) and if $M$ is a finitely generated flat $A$-module then it's finitely presented (so Zariski-locally free!). See 3.4.6 (part I) of Raynaud-Gruson (set $X=S$ there). By 3.4.7(iii) of R-G, the finiteness condition on the set of associated primes cannot be removed, as any absolutely flat ring that isn't a finite product of fields provides a counterexample. (An explicit counterexample is provided by the link at the end of Daniel Litt's answer, namely a finitely generated flat module that is not finitely presented, over everyone's favorite crazy ring $\prod_{n=0}^{\infty} \mathbf{F}_2$.)
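For reference, the equational criterion for flatness referred to above is the following statement (no noetherian or finiteness hypotheses are needed to state or use it): an $A$-module $M$ is flat if and only if, whenever $\sum_{i=1}^{n} a_i x_i = 0$ with $a_i\in A$ and $x_i\in M$, there are finitely many elements $y_j\in M$ and $b_{ij}\in A$ such that $$x_i=\sum_j b_{ij}y_j \ \text{ for all } i \qquad\text{and}\qquad \sum_i a_i b_{ij}=0 \ \text{ for all } j.$$ Informally: every relation among elements of $M$ comes from relations that already hold among the coefficients in $A$.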
{ "source": [ "https://mathoverflow.net/questions/33522", "https://mathoverflow.net", "https://mathoverflow.net/users/5292/" ] }
33,697
I want to start by apologising for what is probably a weak attempt at a question on a site like this, but I'm having trouble understand a concept that doesn't seem to be properly explained elsewhere - perhaps someone can put it in layman's terms. I've been trying to understand the idea on Wikipedia that discusses Pythagorean Triple - namely the section entitled Parent/child relationships. It talks about a Swedish man called Berggren who devised a set of equations that would allow you to determine the children of this parental triple. Each parent created 3, which in turn created 3 and so on. When I started running the code, I couldn't pick up a certain triple - (200, 375, 425) Basically, I wondered if someone could provide a little clarification. Is it either that... My code is wrong, it's definitely possible to get to that triple from a starting point of (3,4,5). I haven't understood what Berggren used these equations for, and I need to back and read it properly. Any clarification would be superb, Thanks P.S - Could someone also tag this appropriately? I have no idea which subject it comes under.
A little-known chatoyant gem of elementary number theory is that the tree of Pythagorean triples has a beautiful geometric genesis in terms of reflections. This viewpoint should clarify the points that you raise. Below is a brief sketch excerpted from some emails I sent to John Conway and R. K. Guy, after noticing that they mention this topic (too) briefly in their "Book of Numbers". Namely, on p. 172 they write: $\quad\ \ $ . Below I explain briefly how to view this in terms of reflections and I mention some generalizations and closely related topics. I plan to discuss this at greater length in a future MO post when time permits. Consider the quadratic space $Z$ of the form $Q(x,y,z) = x^2 + y^2 - z^2$. It has Lorentzian inner product $(Q(x+y)-Q(x)-Q(y))/2$ given by $\; v \cdot u = v_1 u_1 + v_2 u_2 - v_3 u_3$. Recall that here one defines the $\quad$ reflection of $v$ in $u$ $\quad\quad v \mapsto v - 2 \dfrac{v \cdot u}{u \cdot u} u \quad\quad$ Reflectivity is clear: $\; u \mapsto -u$, and $\; v \mapsto v$ if $\; v\perp u, \;$ i.e. $v\cdot u = 0$. With $\; v = (x,y,z)$ and $\; u = (1,1,1)$ of norm 1 $\quad\quad (x,y,z)\; \mapsto (x,y,z) - 2 \dfrac{(x,y,z)\cdot(1,1,1)}{(1,1,1)\cdot(1,1,1)} (1,1,1)$ $\quad\quad\quad\quad\quad\quad = (x,y,z) - 2 \; (x+y-z) \; (1,1,1)$ $\quad\quad\quad\quad\quad\quad = (-x-2y+2z, \; -2x-y+2z, \; -2x-2y+3z)$ This is the nontrivial reflection that effects the descent in the triples tree. Said simpler: if $x^2 + y^2 = z^2$ then $(x/z, y/z)$ is a rational point $P$ on the unit circle $C$. A simple calculation shows that the line through $P$ and $(1,1)$ intersects $C$ in a smaller rational point, given projectively via the above reflection, e.g. $\quad\quad (5,12,13) \mapsto (5,12,13) - 2 \; (5+12-13) \; (1,1,1) = (-3,4,5)$ $\qquad\qquad$ We ascend the tree by inverting this reflection, combined with trivial sign-changing reflections: $\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$ $\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$ $\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$ Continuing in this manner one may reflectively generate the entire tree of primitive Pythagorean triples, e.g. the topmost edge of the triples tree corresponds to the ascending $C$-inscribed zigzag line $(-1,0), (3/5,4/5), (-3/5,4/5), (5/12,12/13), (-5/12,12/13), (7/25,24/25), (-7/25,24/25) \ldots$ This technique easily generalizes to the form $ x_1^2 + x_2^2 + \cdots + x_{n-1}^2 = x_n^2$ for $4 \le n \le 9$, but for $n \ge 10 $ the Pythagorean n-tuples fall into at least $[(n+6)/8]$ distinct orbits under the automorphism group of the form - see Cass & Arpaia (1990) [1] There are also generalizations to different shape forms that were first used by L. Aubry (Sphinx-Oedipe 7 (1912), 81-84) to give elementary proofs of the 3 & 4 square theorem (see Appendix 3.2 p. 292 of Weil's: Number Theory an Approach Through History). These results show that if an integer is represented by a form rationally, then it must also be so integrally. In particular, the following class of forms is included $x^2+y^2, x^2 \pm 2y^2, x^2 \pm 3y^2, x^2+y^2+2z^2, x^2+y^2+z^2+t^2,\ldots$ More precisely, essentially the same proof as for Pythagorean triples shows THEOREM Suppose that the $n$-ary quadratic form $F(x)$ has integral coefficients and has no nontrivial zero in ${\mathbb Z}^n$, and suppose further that for any $x \in {\mathbb Q}^n$ there exists $y \in {\mathbb Z}^n$ such that $\; |F(x-y)| < 1$. 
Then $F$ represents $m$ over $\mathbb Q$ $\iff$ $F$ represents $m$ over $\mathbb Z$, for all nonzero integers $m$. The condition $|F(x-y)| < 1$ is closely connected to the Euclidean algorithm. In fact there is a function-field analog that employs the Euclidean algorithm which was independently rediscovered by Cassels in 1963. Namely, a polynomial is a sum of $n$ squares in $k(x)$ iff the same holds true in $k[x]$. Pfister immediately applied this to obtain a complete solution of the level problem for fields. Shortly thereafter he generalized Cassels result to arbitrary quadratic forms, founding the modern algebraic theory of quadratic forms ("Pfister forms"). Aubry's results are, in fact, very special cases of general results of Wall, Vinberg, Scharlau et al. on reflective lattices , i.e. arithmetic groups of isometries generated by reflections in hyperplanes. Generally reflections generate the orthogonal group of Lorentzian quadratic forms in dim < 10. 2 Daniel Cass; Pasquale J. Arpaia Matrix Generation of Pythagorean n-Tuples. Proc. Amer. Math. Soc. 109, 1, 1990, 1-7. http://www.jstor.org/stable/2048355
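As a computational postscript aimed at the code question that started the thread, here is a minimal Python sketch of the ascent described in this answer (sign flips followed by reflection in $u=(1,1,1)$, which on $(3,4,5)$ produces the same three children as Berggren's matrices; the function names are just for the sketch). The point relevant to the question is that the tree rooted at $(3,4,5)$ contains only primitive triples, and $(200,375,425)=25\cdot(8,15,17)$ is not primitive, so it never appears; only its primitive part $(8,15,17)$ does.

```python
from collections import deque

def reflect(v):
    # Reflection of v in u = (1,1,1) for the Lorentzian form x^2 + y^2 - z^2;
    # here v.u = x + y - z and u.u = 1, so the formula is v - 2(v.u)u.
    x, y, z = v
    s = x + y - z
    return (x - 2*s, y - 2*s, z - 2*s)

def children(t):
    # Ascend the tree: flip signs as in the answer, then reflect.
    x, y, z = t
    return [reflect((-x,  y, z)),   # (3,4,5) -> (5,12,13)
            reflect((-x, -y, z)),   # (3,4,5) -> (21,20,29)
            reflect(( x, -y, z))]   # (3,4,5) -> (15,8,17)

def primitive_triples(zmax):
    # Breadth-first walk of the tree rooted at (3,4,5); the hypotenuse strictly
    # grows down the tree, so pruning at zmax terminates.  Legs are sorted.
    found, queue = set(), deque([(3, 4, 5)])
    while queue:
        x, y, z = queue.popleft()
        if z > zmax:
            continue
        found.add((min(x, y), max(x, y), z))
        queue.extend(children((x, y, z)))
    return found

triples = primitive_triples(500)
print((8, 15, 17) in triples)      # True:  primitive, so it is in the tree
print((200, 375, 425) in triples)  # False: 25*(8,15,17) is not primitive
```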
{ "source": [ "https://mathoverflow.net/questions/33697", "https://mathoverflow.net", "https://mathoverflow.net/users/7990/" ] }
33,774
Let $k$ be a field (I'm mainly interested in the case where $k$ is a number field, though results for other fields would be interesting), and $X$ a smooth projective variety over $k$. By a zero cycle on $X$ over $k$ I mean a formal sum of finitely many (geometric) points on $X$ which is fixed under the action of the absolute Galois group of $k$. We can define the degree of a zero cycle to be the sum of the multiplicities of the points. Now, if $X$ contains a $k$-rational point then it is clear that $X$ contains a zero cycle of degree one over $k$. What is known in general about the converse? That is, which classes of varieties are known to satisfy the property that the existence of a zero cycle of degree one over $k$ implies the existence of a $k$-rational point? For example, what about rational varieties and abelian varieties? As motivation I shall briefly mention that the case of curves is easy. Since here zero cycles are the same as divisors, we can use Riemann-Roch to show that the converse result holds if the genus of the curve is zero or one, and there are plenty of counter-examples for curves of higher genus. However, in higher dimensions this kind of cohomological argument seems to fail as we don't (to my knowledge) have such tools available to us.
There has been a lot of work on this problem, although nothing like a general answer is known. By way of abbreviation, the index of a nonsingular projective variety is the least positive degree of a $k$ -rational zero cycle, so you are asking about the relationship between index one and having a $k$ -rational point. First, you ask whether rational varieties and abelian varieties with index one must have a rational point. Here you probably mean $k$ -forms of such things: i.e., geometrically rational varieties and torsors under abelian varieties. (Both rational varieties and abelian varieties have rational points, the latter by definition, the former e.g. by the theorem of Lang-Nishimura which says that having rational points is a birational invariant of a nonsingular projective variety.) I can answer this: A torsor under an abelian variety has index one iff it has a rational point. This follows from the cohomological interpretation of torsors as elements of $H^1(k,A)$ . A geometrically rational surface of index one need not have a rational point: this is a theorem of Colliot-Thelene and Coray. (A reference appears in the link below.) On to the general question. A very nice recent paper which proves a big result of this type and gives useful bibliographic information about other results is Parimala's 2005 paper on homogeneous varieties: Link Finally, there are some fields $k$ for which every geometrically irreducible projective variety has index one -- most notably finite fields. In this case any variety without a rational point over such a field gives a counterexample to "index one implies rational point". For instance, for any finite field $\mathbb{F}_q$ and all sufficiently large $g$ , one can easily write down a hyperelliptic cuve over $\mathbb{F}_q$ of genus $g$ without rational points. [N.B.: What I had written before was too strong: if instead you fix $g$ and let $q$ be sufficiently large, then by the Weil bounds you must have a rational point.] There are also K3 surfaces over finite fields without rational points, and so forth. Some further discussion of fields over which every (geometrically irreducible) variety has index one occurs in the appendix of a recent paper of mine: http://alpha.math.uga.edu/~pete/trans.pdf There are many more results than the ones I've mentioned so far. If you have further questions, please don't hesitate to ask!
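For the torsor statement, the standard argument in one display (over a field of characteristic zero, say, to sidestep inseparability issues): a torsor $X$ under $A$ has a class $[X]\in H^1(k,A)$, and $X(L)\neq\emptyset$ exactly when $\operatorname{res}_{L/k}[X]=0$. A closed point of degree $d$ lives in an extension $L/k$ with $[L:k]=d$, so $$d\cdot[X] \;=\; \operatorname{cores}_{L/k}\,\operatorname{res}_{L/k}[X] \;=\; 0.$$ A zero cycle of degree one means the degrees of the closed points of $X$ have gcd $1$, hence $[X]=0$, i.e. $X\cong A$ has a $k$-rational point.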
{ "source": [ "https://mathoverflow.net/questions/33774", "https://mathoverflow.net", "https://mathoverflow.net/users/5101/" ] }
33,784
As I understand, if $0\rightarrow A\rightarrow X\rightarrow B\rightarrow 0$ is a short exact sequence of abelian groups, $\mbox{Ext }_{\mathbb{Z}}^{1}(B,A)$ gives all the isomorphism classes of what can come in as $X$. But when I consider $0\rightarrow \mathbb{Z}\rightarrow X\rightarrow \mathbb{Z}/(3)\rightarrow 0 $,$\ \ \ $ $\mbox{Ext } _{\mathbb{Z}}(\mathbb{Z}/(3),\mathbb{Z})=\mathbb{Z}/(3)\ \ $ but all I can think of for $X$ are only two, $\mathbb{Z}$ and $\mathbb{Z}\oplus\mathbb{Z}/(3)$. Am I missing something or am I not understanding the result of extension problem correctly?
{ "source": [ "https://mathoverflow.net/questions/33784", "https://mathoverflow.net", "https://mathoverflow.net/users/5292/" ] }
33,817
It is an open problem to prove that $\pi$ and $e$ are algebraically independent over $\mathbb{Q}$. What are some of the important results leading toward proving this? What are the most promising theories and approaches for this problem?
People in model theory are currently studying the complex numbers with exponentiation. Z'ilber has an axiomatisation of an exponential field (field with exponential function) that looks like the complex numbers with exp. but satisfies Schanuel's conjecture. He proved that there is exactly one such field of the size of $\mathbb C$. I would find it odd if Z'ilber's field turned out to be different from the complex numbers. By results of Wilkie, the reals with exponentiation are well understood, and the complex numbers with exponentiation is in some way the next step up. The model theoretic frame work (o-minimality) that works for the reals with exp. fails for the complex numbers, but there might be a similar theory that works for the complex field with exponentiation.
{ "source": [ "https://mathoverflow.net/questions/33817", "https://mathoverflow.net", "https://mathoverflow.net/users/4361/" ] }
33,896
I was always bothered by the definition of the cross product given in e.g. a calculus course because it's never made clear how one would go about defining the cross product in a coordinate-free manner. I now know, not one, but two ways of doing this, and I can't quite see how they're related: The cross product is the Lie bracket in the Lie algebra of $\text{SO}(3)$. The cross product is the Hodge star map $\Lambda^2(V) \to V$ where $V$ is an oriented $3$-dimensional real inner product space. Okay, so there's one obvious relation here: $V$ has automorphism group $\text{SO}(3)$. But for some reason I can't figure out where to go from here. A good starting point would be to exhibit a canonical isomorphism between an oriented $3$-dimensional inner product space $V$ and the Lie algebra of $\text{Aut}(V)$. Maybe this is obvious. In any case, I would appreciate some clarification.
To expand on Victor Protsak's comment, if $V$ is an $n$-dimensional real vector space with inner-product, the inner-product gives an isomorphism $V\to V^*$ and hence $V\otimes V \to \mathrm{End}(V)$. Under this isomorphism, $\Lambda^2(V)$ is identified with skew-adjoint endomorphisms of $V$, which is precisely the Lie algebra $\mathfrak{so}(V)$. In the case $\dim V =3,$ the Hodge star gives an isomorphism $\Lambda^2(V) \to V$ and so in total we see that $V$ is canonically isomorphic to $\mathfrak{so}(V)$. A more direct way to see this isomorphism is to send the vector $v \in V$ to the generator of the right-handed rotation about the axis in the direction of $v$ with speed $|v|$. The use of the phrase "right-handed" makes it clear that in order to identify $V$ and $\mathfrak{so}(V)$ we have used an orientation on $V$; indeed, you need that for the Hodge star. What is interesting is that if you reverse the orientation on $V$, the map to $\mathfrak{so}(V)$ changes sign. This means that whatever orientation you choose on $V$, the push-forward orientation on $\mathfrak{so}(V)$ is the same. Conclusion: $\mathfrak{so}(3)$ is naturally oriented. This is analogous to the natural orientation on $\mathbb{C}$. A more prosaic way to describe the orientation is to pick two independent elements $x,y \in \mathfrak{so}(3)$ and then use $[x,y]$ to complete them to an oriented basis. (Of course, you then need to check that this doesn't depend on your choice of $x,y$.)
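In coordinates, this identification is the familiar "hat map" (recorded here only to make the isomorphism completely explicit; it is not needed for the argument above): $$v=(v_1,v_2,v_3)\ \longmapsto\ \hat v=\begin{pmatrix}0&-v_3&v_2\\ v_3&0&-v_1\\ -v_2&v_1&0\end{pmatrix},\qquad \hat v\,w=v\times w,\qquad [\hat v,\hat w]=\widehat{v\times w}.$$ With the standard orientation and basis of $\mathbb{R}^3$, $\hat v$ is exactly the generator of the right-handed rotation about $v$ with speed $|v|$, and the last identity says that under this identification the Lie bracket on $\mathfrak{so}(3)$ becomes the cross product.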
{ "source": [ "https://mathoverflow.net/questions/33896", "https://mathoverflow.net", "https://mathoverflow.net/users/290/" ] }
33,911
Edit: the original poster is Menny , but the question is CW; the first-person pronoun refers to Menny, not to the most recent editor. I'm doing an introductory talk on linear algebra with the following aim: I want to give the students a concrete example through which they will be able to see how many notions arise "naturally". Notions such as vector spaces, the zero vector, span, linear dependency and independency, basis, dimension, "good" bases, solving linear equations, and even linear maps and eigenvectors. A related MO question is Linear algebra proofs in combinatorics . The aim of this post is to find some more "concrete","real" and "natural" examples in this spirit that can interest everyone who loves what we do (and give them motivation to learn new definitions and formalisms). So if you have some ideas - please post them! Thanks, Menny
My favorite elementary application of linear algebra is proving that the decomposition used in Calculus of rational functions into partial fractions works. Start with a polynomial $Q(x)=(x-r_1)(x-r_2)\cdots(x-r_n)$. Then the space of $P(x)/Q(x)$ with $deg P < deg Q$ is $n$-dimensional since it has a basis {$\frac{1}{Q(x)}, \frac{x}{Q(x)}, \frac{x^2}{Q(x)}, \dots, \frac{x^{n-1}}{Q(x)}$}. But {$\frac{1}{(x-r_1)},\frac{1}{(x-r_2)},\dots,\frac{1}{(x-r_n)}$} are linearly independent vectors in the space and thus form a basis. Hence, $\frac{P(x)}{Q(x)}=\frac{A_1}{(x-r_1)}+\frac{A_2}{(x-r_2)}+\dots+\frac{A_n}{(x-r_n)}$ for some constants {$A_1,\dots,A_n$}, which then we can furthermore find by taking the limit of $(x-r_i)\frac{P(x)}{Q(x)}$ as $x$ goes to $r_i$.
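A tiny worked instance of the limit formula, with a denominator chosen purely for illustration: for $Q(x)=(x-1)(x-2)$ and $P(x)=1$ one gets $$A_1=\lim_{x\to 1}\frac{x-1}{(x-1)(x-2)}=-1,\qquad A_2=\lim_{x\to 2}\frac{x-2}{(x-1)(x-2)}=1,$$ so that $\frac{1}{(x-1)(x-2)}=\frac{1}{x-2}-\frac{1}{x-1}$, as one checks directly by putting the right-hand side over a common denominator.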
{ "source": [ "https://mathoverflow.net/questions/33911", "https://mathoverflow.net", "https://mathoverflow.net/users/6836/" ] }
34,007
Nowadays, the usual way to extend a measure on an algebra of sets to a measure on a $\sigma$-algebra, the Caratheodory approach, is by using the outer measure $m^*$ and then taking the family of all sets $A$ satisfying $m^* (S)=m^* (S\cap A)+m^* (S\cap A^c)$ for every set $S$ to be the family of measurable sets. It can then be shown that this family forms a $\sigma$-algebra and $m^*$ restricted to this family is a complete measure. The approach is elegant, short, uses only elementary methods and is quite powerful. It is also, almost universally, seen as completely unintuitive (just google "Caratheodory unintuitive"). Given that the problem of extending measures is fundamental to all of measure theory, I would like to know if anyone can provide a perspective that renders the Caratheodory approach natural and intuitive. I'm familiar with the fact that there is a topological approach to the extension problem (see here or link text) for the $\sigma$-finite case due to M.H. Stone (Maharam has actually shown how to extend it to the general case), but it doesn't give much of an insight into why the Caratheodory approach works and that is what I'm interested in here.
Here is an argument that may give some intuition: Assume that $m^{*}$ is an outer measure on $X$, and let us assume furthermore that this outer measure is finite: $m^* (X) < \infty$ Define an "inner measure" $m_*$ on $X$ by $m_* (E) = m^* (X) - m^* (E^c) $ If $m^*$ was, say, induced from a countably additive measure defined on some algebra of sets in $X$ (like Lebesgue measure is built using the algebra of finite disjoint unions of intervals of the form $(a,b]$), then a subset of $X$ will be measurable in the sense of Caratheodory if and only if its outer measure and inner measure agree. From this viewpoint, the construction of the measure (as well as the $\sigma$-algebra of measurable sets) is just a generalization of the natural construction of the Riemann integral on $\mathbb{R}^n$ - you try to approximate the area of a bounded set $E$ from the outside by using finitely many rectangles, and similarly from the inside, and the set is "measurable in the sense of Riemann" (or "Jordan measurable") if the best outer approximation of its area agrees with the best inner approximation of its area. The point here (which often isn't emphasized when Riemann integration is taught for the first time) is that the concept of "inner area" is redundant and can be defined in terms of the outer area just as I did above (you take some rectangle containing the set and consider the outer measure of the complement of the set with respect to this rectangle). Of course, Caratheodory's construction doesn't require $m^*$ to be finite, but I still think that this gives some decent intuition for the general case (unless you think that the construction of the Riemann integral itself is not intuitive :) ).
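A small sanity check of this description (my own illustration, taking $X=[0,1]$ and $m^*$ the Lebesgue outer measure): for $E=\mathbb{Q}\cap[0,1]$ one has $$m^*(E)=0,\qquad m_*(E)=m^*([0,1])-m^*([0,1]\setminus E)=1-1=0,$$ so inner and outer measure agree and $E$ is measurable with measure $0$; by contrast, the same set has inner Jordan content $0$ and outer Jordan content $1$, which is exactly the failure of Riemann/Jordan measurability that the countably additive theory repairs.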
{ "source": [ "https://mathoverflow.net/questions/34007", "https://mathoverflow.net", "https://mathoverflow.net/users/35357/" ] }
34,044
I have seen that if $G$ is a finite group and $H$ is a proper subgroup of $G$ with finite index then $ G \neq \bigcup\limits_{g \in G} gHg^{-1}$. Does this remain true for the infinite case also?
Not in general. Every matrix in $\text{GL}_2(\mathbf C)$ is conjugate to an invertible upper triangular matrix (use eigenvectors), and the invertible upper triangular matrices are a proper subgroup.
{ "source": [ "https://mathoverflow.net/questions/34044", "https://mathoverflow.net", "https://mathoverflow.net/users/1483/" ] }
34,052
How many functions are there which are differentiable on $(0,\infty)$ and that satisfy the relation $f^{-1}=f'$ ?
Let $a=1+p>1$ be given. We shall construct a function $f$ of the required kind with $f(a)=a$ by means of an auxiliary function $h$, defined in the neighborhood of $t=0$ and coupled to $f$ via $x=h(t)$, $f(x)=h(a t)$, $f^{-1}(x)=h(t/a)$. The condition $f'=f^{-1}$ implies that $h$ satisfies the functional equation $$(*)\quad h(t/a) h'(t)=a h'(at).$$ Writing $h(t)=a+\sum_{k \ge 1} c_k t^k$ we obtain from $(*)$ a recursion formula for the $c_k$, and one can show that $0< c_r<1/p^{r-1}$ for all $r\ge 1$. This means that $h$ is in fact analytic for $|t|< p$, satisfies $(*)$ and possesses an inverse $h^{-1}$ in the neighborhood of $t=0$. It follows that the function $f(x):=h(ah^{-1}(x))$ has the required properties.
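Independently of this construction, it may be worth recording the one well-known closed-form solution on $(0,\infty)$ (a standard example, presumably the member of the above family with fixed point $a=\varphi$): for a power function $f(x)=c\,x^{b}$ we have $f'(x)=cb\,x^{b-1}$ and $f^{-1}(x)=(x/c)^{1/b}$, so $f'=f^{-1}$ forces $b-1=1/b$, i.e. $b=\varphi=\frac{1+\sqrt5}{2}$, and $cb=c^{-1/b}$, i.e. $c=\varphi^{1-\varphi}$. Hence $$f(x)=\varphi^{1-\varphi}\,x^{\varphi}$$ satisfies $f^{-1}=f'$ on $(0,\infty)$.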
{ "source": [ "https://mathoverflow.net/questions/34052", "https://mathoverflow.net", "https://mathoverflow.net/users/1483/" ] }
34,059
Let $f$ be an infinitely differentiable function on $[0,1]$ and suppose that for each $x \in [0,1]$ there is an integer $n \in \mathbb{N}$ such that $f^{(n)}(x)=0$. Then does $f$ coincide on $[0,1]$ with some polynomial? If yes then how. I thought of using Weierstrass approximation theorem, but couldn't succeed.
The proof is by contradiction. Assume $f$ is not a polynomial. Consider the following closed sets: $$ S_n = \{x: f^{(n)}(x) = 0\} $$ and $$ X = \{x: \forall (a,b)\ni x: f\restriction_{(a,b)}\text{ is not a polynomial} \}. $$ It is clear that $X$ is a non-empty closed set without isolated points. Applying Baire category theorem to the covering $\{X\cap S_n\}$ of $X$ we get that there exists an interval $(a,b)$ such that $(a,b)\cap X$ is non-empty and $$ (a,b)\cap X\subset S_n $$ for some $n$. Since every $x\in (a,b)\cap X$ is an accumulation point we also have that $x\in S_m$ for all $m\ge n$ and $x\in (a,b)\cap X$. Now consider any maximal interval $(c,e)\subset ((a,b)-X)$. Recall that $f$ is a polynomial of some degree $d$ on $(c,e)$. Therefore $f^{(d)}=\mathrm{const}\neq 0$ on $[c,e]$. Hence $d< n$. (Since either $c$ or $e$ is in $X$.) So we get that $f^{(n)}=0$ on $(a,b)$ which is in contradiction with $(a,b)\cap X$ being non-empty.
{ "source": [ "https://mathoverflow.net/questions/34059", "https://mathoverflow.net", "https://mathoverflow.net/users/1483/" ] }
34,088
Let $M$ be a Riemannian manifold. There exists a unique torsion-free connection in the (co)tangent bundle of $M$ such that the metric of $M$ is covariantly constant. This connection is called the Levi-Civita connection and its existence and uniqueness are usually proven by a direct calculation in coordinates. See e.g. Milnor, Morse theory, chapter 2, \S 8. This is short and easy but not very illuminating. According to C. Ehresmann, a connection in a fiber bundle $p:E\to B$ (where $E$ and $B$ are smooth manifolds and $p$ is a smooth fibration) is just a complementary subbundle of the vertical bundle $\ker dp$ in $TE$. If $G$ is the structure group of the bundle and $P\to B$ is the corresponding $G$-principal bundle, then to give a connection whose holonomy takes values in $G$ is the same as to give a $G$-equivariant connection on $P$. If $p:E\to B$ is a rank $r$ vector bundle with a metric, then one can assume that the structure group is $O(r)$; the corresponding principal bundle $P\to B$ will in fact be the bundle of all orthogonal $r$-frames in $E$. One can then construct an $O(r)$-equivariant connection by taking any metric on $P$, averaging so as to get an $O(r)$-equivariant metric and then taking the orthogonal complement of the vertical bundle. Notice that in general one can have several $O(r)$-equivariant connections: take $P$ to be the total space of the constant $U(1)$-bundle on the circle; $P$ is a 2-torus and every rational foliation of $P$ that is non-constant in the "circle" direction gives a $U(1)$-equivariant connection. (All these connections are gauge equivalent but different.) So I would like to ask: given a Riemannian manifold $M$, is there a way to interpret the Levi-Civita connection as a subbundle of the frame bundle of the tangent bundle of $M$ so that its existence and uniqueness become clear without any calculations in coordinates?
To understand the existence and uniqueness of the LC connection, it is not possible to sidestep some algebra, namely the fact (with a 1-line proof) that a tensor $a_{ijk}$ symmetric in $i,j$ and skew in $j,k$ is necessarily zero. The geometrical interpretation is this: once one has the $O(n)$ subbundle $P$ of the frame bundle $F$ defined by the metric, there exists (at each point) a unique subspace transverse to the fibre that is tangent both to $P$ and to a coordinate-induced section $\{\partial/\partial x_1,\ldots,\partial/\partial x_n\}$ of $F$.
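For completeness, the one-line proof alluded to is presumably the usual index chase: if $a_{ijk}=a_{jik}$ (symmetry in $i,j$) and $a_{ijk}=-a_{ikj}$ (skewness in $j,k$), then $$a_{ijk}=a_{jik}=-a_{jki}=-a_{kji}=a_{kij}=a_{ikj}=-a_{ijk},$$ so $a_{ijk}=0$. Applied to the difference of two metric-compatible torsion-free connections, with the upper index lowered by the metric, this is exactly what gives the uniqueness of the Levi-Civita connection.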
{ "source": [ "https://mathoverflow.net/questions/34088", "https://mathoverflow.net", "https://mathoverflow.net/users/2349/" ] }
34,145
The following question was a research exercise (i.e. an open problem) in R. Graham, D.E. Knuth, and O. Patashnik, "Concrete Mathematics", 1988, chapter 1. It is easy to show that $$\sum_{1 \leq k } \left(\frac{1}{k} \times \frac{1}{k+1}\right) = \sum_{1 \leq k } \left(\frac{1}{k} - \frac{1}{k+1}\right) = 1.$$ The product $\frac{1}{k} \times \frac{1}{k+1}$ is equal to the area of a $\frac{1}{k}$ by $\frac{1}{k+1}$ rectangle. The sum of the areas of these rectangles is equal to 1, which is the area of a unit square. Can we use these rectangles to cover a unit square? Is this problem still open? What are the best results we know about this problem (or its relaxations)?
This problem actually goes back to Leo Moser. The best result that I'm aware of is due to D. Jennings, who proved that all the rectangles of size $k^{-1} \times (k + 1)^{-1}$, $k = 1, 2, 3, \ldots$, can be packed into a square of size $(133/132)^2$ ( link ). Edit 1. A web search via Google Scholar gave a reference to this article by V. Bálint, which claims that the rectangles can be packed into a square of size $(501/500)^2$. Edit 2. The state of the art of this and related packing problems due to Leo Moser is discussed in Chapter 3 of "Research Problems in Discrete Geometry" by P. Brass, W. O. J. Moser and J. Pach. The problem was still unsettled as of 2005.
{ "source": [ "https://mathoverflow.net/questions/34145", "https://mathoverflow.net", "https://mathoverflow.net/users/7507/" ] }
34,215
How do professional mathematicians learn new things? How do they expand their comfort zone? By talking to colleagues?
Slowly and with difficulty, just like amateur mathematicians.
{ "source": [ "https://mathoverflow.net/questions/34215", "https://mathoverflow.net", "https://mathoverflow.net/users/9558/" ] }
34,228
Let $R$ be a sheaf of rings on a topological space $X$. Assume $R \neq 0$. Does $R$ then have a maximal ideal? So this is a spacified analogue of the theorem that every nontrivial ring has a maximal ideal. Currently I am trying to develop this sort of spacified commutative algebra and algebraic geometry. If anyone knows some literature about it, please let me know. So let's try to imitate the known proof for rings and use Zorn's Lemma. For that, we need that for every linearly ordered set $(J_k)_{k \in K}$ of proper ideals in $R$, their sum $\sum_{k \in K} J_k$ is also a proper ideal. Note that if we replace $R \neq 0$ by $R_x \neq 0$ for all $x \in X$ and the notion proper by "stalkwise proper", then everything works out fine since stalks and sum commute. However, global sections do not commute with (infinite) sums. Anyway, let's try to continue: Assume $\sum_{k \in K} J_k = R$, that is, $1$ is a global section of the sum. Then there is an open covering $X = \cup_{i \in I} U_i$, such that $1 \in \sum_{k \in K} J_k(U_i) = \cup_{k \in K} J_k(U_i)$. Thus we get a function $I \to K, i \mapsto k_i$, such that $1 \in J_{k_i}(U_i)$. If this function has an upper bound, say $k$, then we get a contradiction $J_k=R$. Thus the function is unbounded. And now? I think that this already indicates that there will be counterexamples, but I'm not sure. Also note that everything is fine when $X$ is quasi-compact.
{ "source": [ "https://mathoverflow.net/questions/34228", "https://mathoverflow.net", "https://mathoverflow.net/users/2841/" ] }
34,237
I've been skulking around MathOverflow for about a month, reading questions and answers and comments, and I guess it's about time I asked a question myself, so here is one that has interested me for a long time. Suppose $M$ is a compact even dimensional smooth manifold with two symplectic forms $\omega_0$ and $\omega_1$. When are they "isotopic", i.e., when does there exist a 1-parameter family of diffeos $\phi_t$ of $M$, starting from the identity, such that $\phi_1^*(\omega_0) = \omega_1$? Of course a necessary condition is that $\omega_0$ and $\omega_1$ should define the same 2-dimensional cohomology classes. Is this also sufficient? One can ask the same question for volume forms. I asked Juergen Moser about this twenty-five years ago, and he came back with an elegant proof of sufficiency for the volume element case a few months later in a well-known paper in TAMS. He remarks in that paper as follows: "The statement concerning 2-forms was also suggested by R. Palais. Unfortunately it seems very difficult to decide when two 2-forms which are closed, belong to the same cohomology class and are nondegenerate can be deformed homotopically into each other within the class of these differential forms." So my question is: what progress, if any, has been made on this question? Poking around here and in Google hasn't turned up anything. Does anyone know if there has been any progress?
It has been known for a long time now that there exist examples of symplectic forms in the same cohomology class which are non-isotopic. I do not remember if there exists such an example in dimension $4$, but in dimension $6$ there are different examples. Here is an example constructed by Dusa McDuff: Let $X$ be the product $S^2\times S^2\times T^2$ ($T^2$ is a torus $(\mathbb R/2\pi\mathbb Z)^2$ with angle coordinates $(\psi,\gamma)$) and let $\omega$ be the sum $\omega_1\oplus\omega_2\oplus\omega_3$ of area forms on the factors. We suppose that the total areas of the first and the second factor coincide. Consider the map $\varphi \colon X \to X$, where $\varphi (x,y,\psi,\gamma) = (x, T_{x,\psi}(y),\psi,\gamma)$, where $T_{x,\psi}$ is the rotation around $x$ through the angle $\psi$. Then the forms $\omega$ and $\varphi^*(\omega)$ define the same cohomology class but are non-isotopic. Moreover, the forms $\omega$ and $\varphi^*(\omega)$ can be joined by a path in the space of symplectic structures. There is a survey containing the statement of this result and helpful references: http://www.math.sunysb.edu/~dusa/princerev98.pdf
{ "source": [ "https://mathoverflow.net/questions/34237", "https://mathoverflow.net", "https://mathoverflow.net/users/7311/" ] }
34,316
OK, the title is opinionated and contentious, but I have a definite question. I know that the title refers to the Bourbaki volume Groupes et Algèbres de Lie (Chapters 4-6), published in 1968, but who said that it is the only great book that Bourbaki ever wrote? The only reference I can find is the 2009 Prize Booklet for the AMS-MAA Joint Meetings, where no source is given, but I'm sure I've seen the claim somewhere else. Edit. I have rolled back the title of this question to almost its original form, because putting the title in quotes misled some people into thinking I sought a source for the exact phrase "the only great book that Bourbaki ever wrote." Rather, I wanted a source (not necessarily unique) for the idea that Chapters 4-6 of Groupes et Algèbres de Lie is Bourbaki's one great book. Gerald's answer and Jim's comment together are exactly what I wanted.
Google found this: Notices of the AMS, September 1998, p. 979: in Bill Casselman's review of POLYHEDRA by Cromwell, we find the phrase "the one great book by Bourbaki".
{ "source": [ "https://mathoverflow.net/questions/34316", "https://mathoverflow.net", "https://mathoverflow.net/users/1587/" ] }
34,332
As defined in many modern algebra books, a homomorphism of unital rings must preserve the unit elements: $f(1_R)=1_S$. But there has been a minority who do not require this, one prominent example being Herstein in Topics in Algebra . What are some of the most striking consequences of not requiring ring homomorphisms to be unital? For example, what aspects of algebraic geometry would need to be reworked if we no longer required it? What interesting theorems or techniques arise in the not-necessarily-unital theory which do not apply (or are degenerate) for unital homomorphisms?
Let's suppose that our rings are commutative (which is the case that is immediately relevant to algebraic geometry). If $\phi:A \to B$ is a (possibly non-unital) homomorphism, then $e := \phi(1_A)$ is an idempotent in $B$, and so we get a decomposition $B = eB \times (1-e)B,$ and the map $\phi$ factors as $A \to eB \to B,$ where the first map is unital, and the second map is simply the inclusion, which is the inclusion of a direct factor. On Specs, we thus get the composite of the map Spec $eB \to $ Spec $A$, composed with the "map" (this is not necessarily an honest map of schemes, because it corresponds to the possibly non-unital map $e B \to B$) Spec $B \to$ Spec $eB$, which just vaporizes the open and closed subset Spec $(1-e)B$ of Spec $B$, and is the identity on the open and closed subset Spec $eB$. So the upshot is that nothing much new happens in algebraic geometry, except that we allow maps which are only defined on some open and closed subset of a given scheme. Of course, this is a big except , because these are not honest maps at all (they are simply not defined on some part of their "domain"). There doesn't seem to be any reason to add them into the mix, which is surely one reason why this generalized notion of homomorphism is not used much in practice. P.S. One could argue another way, beginning with geometry, and passing to algebra by remembering that rings are rings of functions. If we have a map $\phi:X \to Y$ of spaces (of some type, e.g. affine schemes, or anything else), then surely the constant function 1 on $Y$ will pull-back to the constant function 1 on $X$. Thus the induced homomorphism on rings of functions will have to be unital, and so one simply has no cause to consider non-unital homomorphisms in the geometric setting. P.P.S. The argument in the first paragraph shows that allowing non-unital homomorphisms in the category of commutative rings is the same as adding, in addition to unital homomorphisms, homomorphisms of the form $B_1 \to B_1\times B_2,$ given by $b_1\mapsto (b_1,0),$ for any pair of commutative rings $B_1$ and $B_2$. So it's not really a very exciting change from the purely algebraic point of view either.
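A minimal concrete instance of the factorization in the first paragraph (my own illustration): the non-unital homomorphism $$\phi\colon \mathbb{Z}\longrightarrow B=\mathbb{Z}\times\mathbb{Z},\qquad \phi(n)=(n,0),$$ has $e=\phi(1)=(1,0)$, giving the decomposition $B=eB\times(1-e)B$ with $eB=\mathbb{Z}\times 0$ and $(1-e)B=0\times\mathbb{Z}$, and $\phi$ factors as the unital isomorphism $\mathbb{Z}\to eB$ followed by the (non-unital) inclusion $eB\to B$. On Spec, which is two disjoint copies of $\operatorname{Spec}\mathbb{Z}$, the corresponding "map" is the identity on the copy $\operatorname{Spec}eB$ and vaporizes the other copy, exactly as described above.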
{ "source": [ "https://mathoverflow.net/questions/34332", "https://mathoverflow.net", "https://mathoverflow.net/users/1916/" ] }
34,424
This Wikipedia article states that the isomorphism type of a finite simple group is determined by its order, except that: (i) $L_4(2)$ and $L_3(4)$ both have order 20160; (ii) $O_{2n+1}(q)$ and $S_{2n}(q)$ have the same order for $q$ odd, $n > 2$. I think this means that for each integer $g$, there are 0, 1 or 2 simple groups of order $g$. Do we need the full strength of the Classification of Finite Simple Groups to prove this, or is there a simpler way of proving it? (Originally asked at math.stackexchange.com.)
It is usually extraordinarily difficult to prove uniqueness of a simple group given its order, or even given its order and complete character table. In particular one of the last and hardest steps in the classification of finite simple groups was proving uniqueness of the Ree groups of type $^2G_2$ of order $q^3(q^3+1)(q-1)$, (for $q$ of the form $3^{2n+1}$) which was finally solved in a series of notoriously difficult papers by Thompson and Bombieri. Although they were trying to prove the group was unique, proving that there were at most 2 would have been no easier. Another example is given in the paper by Higman in the book "finite simple groups" where he tries to characterize Janko's first group given not just its order 175560, but its entire character table. Even this takes several pages of complicated arguments. In other words, there is no easy way to bound the number of simple groups of given order, unless a lot of very smart people have overlooked something easy.
{ "source": [ "https://mathoverflow.net/questions/34424", "https://mathoverflow.net", "https://mathoverflow.net/users/4947/" ] }
34,540
This may not be appropriate for MathOverflow, as I haven't seen precedent for this type of question. But the answer is certainly of interest to me, and (I think) would be of interest to many other undergraduates. Often, while seeing some lecture online, or reading lecture notes from the internet, or hearing about someone who is an expert in a topic I'm interested in, I feel as though I want to email the professor in charge of whatever I'm looking at or hearing about. Usually this is about some question; these questions can range from the very technical to the very speculative. In the past, when I have mustered up enough guts to actually email the question, I've had good results (question answered, or reference given, etc.) However, every time I send a question I get the same sort of anxiety of "I have no idea what I'm doing!" So here are my questions: When is it appropriate to email a professor (that you don't know) a question about their work? (Remember; I'm an undergraduate, so these questions have a high chance of being inane in one way or another). How should this email be formatted? Do I give my name in the first line? Or do I give a bit of background, ask the question, and sign my name. (The latter is what I have been doing). How specific should the subject line be in order to increase the chances that the email gets read? Should I mention anywhere in the email that I am an undergraduate? This one is kind of silly, but I always have trouble deciding what to do: Suppose that I've emailed a professor, and they email me back with some answer... is it appropriate to email back with just a "Thank you"? I always feel like I'm wasting their time with such a contentless email, but at the same time I do want to thank them... Actually, in general I feel as though my questions are a waste of the professor's time (probably what feeds the email anxiety)... which brings me to: If you happen to be a professor who has received such emails in the past; are they a waste of your time? Please be honest! EDIT: Small new question... If you receive a response and the professor signs with their first name, are you supposed to refer to them by first name the next time you email them? Once again, if you think this is an inappropriate question, I totally understand. But please start a thread on meta to discuss it, because I think this might be borderline, at least.
Andrew Stacey covered this briefly, but I want to re-emphasize: If you are an undergraduate it is unlikely that you have questions that actually require asking the expert in something. If you're at an institution with a graduate program (this may be different at a liberal arts college) you ought to be able to find someone local you can ask instead. This way you can talk in person (which is an easier way to communicate), you can ask during someone's office hours (when they're in their office answering questions anyway), and you're building connections with professors at your school who might write you letters in the future. Furthermore, if the professor you ask locally doesn't know the answer, they'll be able to point you to a friend of theirs who will know the answer. That way you can start your letter "Hi, I'm a student at school X and Prof. Y suggested you might know the answer to the following question." In graduate school things are a bit different, you're more likely to have a question that really does need to be asked of a particular person, and you're more likely to know whether the question is a good use of that person's time. Furthermore, as a graduate student you'll want to start getting to know the experts in your field at different schools. So, if your advisor doesn't know the answer to a question you shouldn't hesitate to email someone who you think would know the answer.
{ "source": [ "https://mathoverflow.net/questions/34540", "https://mathoverflow.net", "https://mathoverflow.net/users/6936/" ] }
34,592
It is well known that if $K$ is a finite index subgroup of a group $H$, then there is a finite index subgroup $N$ of $K$ which is normal in $H$. Indeed, one can observe that there are only finitely many distinct conjugates $hKh^{-1}$ of $K$ with $h \in H$, and their intersection $N := \bigcap_{h \in H} h K h^{-1}$ will be a finite index normal subgroup of $H$. Alternatively, one can look at the action of $H$ on the finite quotient space $H/K$, and observe that the stabiliser of this action is a finite index normal subgroup of $H$. But now suppose that $K$ is a finite index subgroup of two groups $H_1$, $H_2$ (which are in turn contained in some ambient group $G$, thus $K \leq H_1 \leq G$ and $K \leq H_2 \leq G$ with $[H_1:K], [H_2:K] < \infty$). Is it possible to find a finite index subgroup $N$ of $K$ which is simultaneously normal in both $H_1$ and in $H_2$ (or equivalently, is normal in the group $\langle H_1 H_2 \rangle$ generated by $H_1$ and $H_2$)? The observation in the first paragraph means that we can find a finite index subgroup $N$ which is normal in $H_1$, or normal in $H_2$, but it does not seem possible to ensure normality in both $H_1$ and $H_2$ simultaneously. However, I was not able to find a counterexample (though it has been suggested to me that the automorphism groups of trees might eventually provide one). By abstract nonsense one can assume that the ambient group $G$ is the amalgamated free product of $H_1$ and $H_2$ over $K$, but this does not seem to be of too much help. I'm ultimately interested in the situation in which one has finitely many groups $H_1,\ldots,H_m$ rather than just two, but presumably the case of two groups is already typical.
I think the answer to your question is no. Take $G=PSL_d(\mathbb{Q}_p)$. It is a simple group. Take $H_1=PSL_d(\mathbb{Z}_p)$ and take $H_2=H_1^g$ for some $g \in G$ so that $H_1 \ne H_2$. Now, if I am not mistaken $H_1$ and $H_2$ are maximal in $G$ so together they generate $G$. Also, $G$ commensurates $H_1$ since $H_1$ is open in $G$ and profinite. So $K=H_1 \cap H_2$ is open and of finite index in both $H_1$ and $H_2$. But as $G$ is simple, $K$ contains no non-trivial normal subgroup of $G$. I am sure you can do something similar with Lie groups and lattices.
{ "source": [ "https://mathoverflow.net/questions/34592", "https://mathoverflow.net", "https://mathoverflow.net/users/766/" ] }
34,658
For smooth $n$-manifolds, we know that they can always be embedded in $\mathbb R^{2n}$ via a differentiable map. However, is there any corresponding theorem for the topological category? (i.e. Can every topological manifold embed continuously into some $\mathbb R^N$, and do we get the same bound for $N$?)
I'm not sure about $\mathbb{R}^{2n}$, but you can embed them in $\mathbb{R}^{2n+1}$ using dimension theory. The theorem is that every compact metric space whose covering dimension is $n$ can be embedded in $\mathbb{R}^{2n+1}$. The example of non-planar graphs (which are $1$-dimensional) shows that this is the best you can do in general. The classic source for this is Hurewicz-Wallman's beautiful book "Dimension Theory", which I recall being pretty readable to me when I was an undergraduate, though I haven't looked at it in a while. There is also a nice discussion of this in Munkres's book on point-set topology -- when I last taught a point-set topology class, I used this as one of the capstone theorems in the course.
{ "source": [ "https://mathoverflow.net/questions/34658", "https://mathoverflow.net", "https://mathoverflow.net/users/8188/" ] }
34,673
What is a good reference for the following fact (the hypotheses may not be quite right): Let $X$ and $Y$ be projective varieties over a field $k$. Let $\mathcal{F}$ and $\mathcal{G}$ be coherent sheaves on $X$ and $Y$, respectively. Let $\mathcal{F} \boxtimes \mathcal{G}$ denote $p_1^*(\mathcal{F}) \otimes_{\mathcal{O}_{X \times Y}} p_2^* \mathcal{G}$. Then $$H^m(X \times Y, \mathcal{F} \boxtimes \mathcal{G}) \cong \bigoplus_{p+q=m} H^p(X,\mathcal{F}) \otimes_k H^q(Y, \mathcal{G}).$$ Note: Wikipedia leads me to believe that this may be related to Theorem 6.7.3 in EGA III 2 , but I find this theorem quite intimidating. Although I would be willing to study this if there is no more basic reference, I would at least like some confirmation that I am studying the right thing.
The treatment in EGA is indeed intimidating, but in fact over a field the formula is not hard to prove. You only need $X$ and $Y$ to be separated schemes over $k$, and $\mathcal{F}$ and $\mathcal{G}$ to be quasi-coherent. Then cover $X$ and $Y$ by affine open subsets $\{U_i\}$ and $\{V_j\}$, and write down the Čech complex for $\mathcal{F}$ and $\mathcal{G}$ with respect to these two coverings, and the Čech complex of $\mathcal{F} \boxtimes \mathcal{G}$ with respect to the covering $\{U_i \times V_j\}$. It is not hard to see that the last is the tensor product of the first two; then the thesis follows from Eilenberg-Zilber (or whatever you want to call the fact that the cohomology of the tensor product of two complexes over a field is the tensor product of the cohomologies).
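The local computation underlying the comparison of the Čech complexes is perhaps worth making explicit (notation mine): if $U=\operatorname{Spec}A\subseteq X$ and $V=\operatorname{Spec}B\subseteq Y$ are affine opens with $\mathcal{F}|_U=\widetilde{M}$ and $\mathcal{G}|_V=\widetilde{N}$, then on $U\times V=\operatorname{Spec}(A\otimes_k B)$ $$\Gamma\bigl(U\times V,\ \mathcal{F}\boxtimes\mathcal{G}\bigr)\cong\bigl(M\otimes_A(A\otimes_k B)\bigr)\otimes_{A\otimes_k B}\bigl((A\otimes_k B)\otimes_B N\bigr)\cong M\otimes_k N,$$ and separatedness guarantees that the same identity applies on all the (affine) finite intersections entering the Čech complexes.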
{ "source": [ "https://mathoverflow.net/questions/34673", "https://mathoverflow.net", "https://mathoverflow.net/users/5094/" ] }
34,806
This question is inspired in part by this answer of Bill Dubuque, in which he remarks that the fairly common belief that Kummer was motivated by FLT to develop his theory of cyclotomic number fields is essentially unfounded, and that Kummer was instead motivated by the desire to formulate and prove general higher reciprocity laws. My own (not particularly well-informed) understanding is that the problem of higher reciprocity laws was indeed one of Kummer's substantial motivations; after all, this problem is a direct outgrowth of the work of Gauss, Eisenstein, and Jacobi (and others?) in number theory. However, Kummer did also work on FLT, so he must have regarded it as being of some importance (i.e. important enough to work on). Is there a consensus view on the role of FLT as a motivating factor in Kummer's work? Was his work on it an afterthought, something that he saw was possible using all the machinery he had developed to study higher reciprocity laws? Or did he place more importance on it than that? (Am I right in also thinking that there were prizes attached to its solution which could also have played a role in directing his attention to it? If so, did they actually play any such role?)
When Kummer started working on research problems, he tried to solve what became known as "Kummer's problem", i.e., the determination of cubic Gauss sums (whose cube is easy to compute). Kummer asked Dirichlet to find out whether Jacobi or someone else had already been working on this, and to send him everything written by Jacobi on this subject. Dirichlet organized lecture notes of Jacobi's lectures on number theory from 1836/37 (see also this question), where Jacobi had worked out the quadratic, cubic and biquadratic reciprocity laws using what we now call Gauss and Jacobi sums. In his first article, Kummer tried to generalize a result due to Jacobi, who had proved (or rather claimed) that primes $5n+1$, $8n+1$, and $12n+1$ split into four primes in the field of 5th, 8th and 12th roots of unity. Kummer's proof of the fact that primes $\ell n+1$ split into $\ell-1$ primes in the field of $\ell$-th roots of unity (with $\ell$ an odd prime) was erroneous, and eventually led to his introduction of ideal numbers. After Lame (1847) had given his "proof" of FLT in Paris, Liouville observed that there are gaps related to unique factorization; he then asked his friend Dirichlet in a letter whether he knew that Lame's assumption was valid (Liouville only knew counterexamples for quadratic fields). A few weeks later, Kummer wrote Liouville a letter. In the weeks between, Kummer had looked at FLT and found a proof based on several assumptions, which later turned out to hold for regular primes. Kummer must have looked at FLT before, because in a letter to Kronecker he said that "this time" he quickly found the right approach. The Paris Prize, as John Stillwell already wrote, did play a role for Kummer, as he confessed in one of his letters to Kronecker that can be found in Kummer's Collected Papers. But mathematically, Kummer attached importance only to the higher reciprocity laws. Kummer worked out the arithmetic of cyclotomic extensions guided by his desire to find the higher reciprocity laws; notions such as unique factorization into ideal numbers, the ideal class group, units, the Stickelberger relation, Hilbert 90, norm residues and Kummer extensions owe their existence to his work on reciprocity laws. His work on Fermat's Last Theorem is connected to the class number formula and the "plus" class number, and a meticulous investigation of units, in particular Kummer's Lemma, as well as the tools needed for proving it, his differential logarithms, which much later were generalized by Coates and Wiles. Some of the latter topics were helpful to Kummer later when he actually proved his higher reciprocity law. Here's my article on Jacobi and Kummer's ideal numbers.
{ "source": [ "https://mathoverflow.net/questions/34806", "https://mathoverflow.net", "https://mathoverflow.net/users/2874/" ] }
34,843
This post is partially about opinions and partially about more precise mathematical questions. Most of this post is not as formal as a precise mathematical question. However, I hope that most readers will understand this post and the nature of the question. I will first try to explain what I would call Realistic Mathematics. Let us say that mathematics is about the formalization, organization and expression of thought. At the same time one could have the feeling that thought is usually trying to capture some aspect of physical reality; and let that be anything from a feeling, an impression of something, to experimental data of an experiment, or the observed geometric properties of lines and points in two-dimensional space. Of course thought itself (as described above) can be about anything and hence anything axiomatizable could then be seen as mathematics (that is David Hilbert's point of view). On the other hand, if thought is primarily about physical reality then the focus of mathematics should be more restrictive. (I remember that Arnold argued in favor of this view; also von Neumann.) Let us call this restricted part of mathematics for the moment Realistic Mathematics. I am not saying that such a restriction of focus would be good or necessary; and I do not want to start a discussion about this. I just want to find out and discuss whether mathematicians could agree on what we could call Realistic Mathematics. Let us suppose for a moment that we made some sense of the concept of Realistic Mathematics, and observed that it is a science that is about some part or aspect of physical reality. My naive question is now: Question: What is Realistic Mathematics about from a mathematical (or model theoretic) perspective? or Question: Is there any mathematical structure which serves as the part of mathematics which is about physical reality? Just to give some examples: I consider anything related to finite-dimensional geometry (manifolds, simplicial complexes, convex sets etc.), number theory, operators or algebras of operators on separable Hilbert spaces, differential equations, discrete geometry, combinatorics (under countability assumptions) etc. as being part of observed physical reality or potentially useful for the study of physical reality. On the other side, the existence of large cardinals, non-measurable subsets of the reals, etc. are not (immediately) useful for such a study. In particular, my view is that the Axiom of Choice does not add anything to the understanding of physical reality. It produces highly counter-intuitive statements (not observed in nature) about subsets of finite-dimensional euclidean space and has its merits (in Realistic Mathematics) only through short proofs and the knowledge that many statements are provable in ZFC if and only if they are provable with more realistic assumptions. What about $L(R)$? (See Wikipedia for definitions.) That would be a concrete model and maybe a realistic mathematician just studies properties of $L(R)$? Maybe a realistic mathematician studies what can be proved using $ZF + DC$? Is there any other canonical candidate which arises? My question here is mainly about opinions or some sort of vision which explains why this or that model or object of study arises naturally. Question: Is there any mathematical application to the study of physical reality which is not captured by the study of the model $L(R)$? More specifically: What about concrete statements which are undecidable in $ZF$?
Does the Continuum Hypothesis belong to the statements we want to be true in Realistic Mathematics? What about the Open Coloring Axiom? Here, I am also asking for opinions or some consistent perspective on the realistic part of mathematics which captures my imprecise way of describing it.
When Solovay showed that ZF + DC + "all sets of reals are Lebesgue measurable" is consistent (assuming ZFC + "there is an inaccessible cardinal" is consistent), there was an expectation among set-theorists that analysts (and others doing what you call realistic mathematics) would adopt ZF + DC + "all sets of reals are Lebesgue measurable" as their preferred foundational framework. There would be no more worries about "pathological" phenomena (like the Banach-Tarski paradox), no more tedious verification that some function is measurable in order to apply Fubini's theorem, and no more of various other headaches. But that expectation wasn't realized at all; analysts still work in ZFC. Why? I don't know, but I can imagine three reasons. First, the axiom of choice is clearly true for the (nowadays) intended meaning of "set". Solovay's model consists of certain "definable" sets. Although there's considerable flexibility in this sort of definability (e.g., any countable sequence of ordinal numbers can be used as a parameter in such a definition), it's still not quite so natural as the general notion of "arbitrary set." So by adopting the new framework, people would be committing themselves to a limited notion of set, and that might well produce some discomfort. Second, it's important that Solovay's theory, though it doesn't include the full axiom of choice, does include the axiom of dependent choice (DC). Much of (non-pathological) analysis relies on DC or at least on the (weaker) axiom of countable choice. (For example, countable additivity of Lebesgue measure is not provable in ZF alone.) So to work in Solovay's theory, one would have to keep in mind the distinction between "good" uses of choice (countable choice or DC) and "bad" uses (of the sort involved in the construction of Vitali sets or the Banach-Tarski paradox). The distinction is quite clear to set-theorists but analysts might not want to get near such subtleties. Third, in ZF + DC + "all sets of reals are Lebesgue measurable," one lacks some theorems that analysts like, for example Tychonoff's theorem (even for compact Hausdorff spaces, where it's weaker than full choice). I suspect (though I haven't actually studied this) that the particular uses of Tychonoff's theorem needed in "realistic mathematics" may well be provable in ZF + DC + "all sets of reals are Lebesgue measurable" (or even in just ZF + DC). But again, analysts may feel uncomfortable with the need to distinguish the "available" cases of Tychonoff's theorem from the more general cases. The bottom line here seems to be that there's a reasonable way to do realistic mathematics without the axiom of choice, but adopting it would require some work, and people have generally not been willing to do that work.
{ "source": [ "https://mathoverflow.net/questions/34843", "https://mathoverflow.net", "https://mathoverflow.net/users/8176/" ] }
34,848
There exist topological manifolds which don't admit a smooth structure in dimensions > 3, but I haven't seen much discussion on homotopy type. It seems much more reasonable that we can find a smooth manifold (of the same dimension) homotopy equivalent to a given topological manifold. Is this true, or is there a counterexample?
It is false for compact manifolds in 4 dimensions. Freedman showed that there is a compact simply connected topological 4-manifold with intersection form E8, but Donaldson showed that there is no such smooth manifold.
{ "source": [ "https://mathoverflow.net/questions/34848", "https://mathoverflow.net", "https://mathoverflow.net/users/8188/" ] }
34,861
An answer to a recent question motivated the following question: (how) is category theory actually useful in actual physics? By "actual physics" I mean to refer to areas where the underlying theoretical principle has solid if not conclusive experimental justification, thus ruling out not only string theory (at least for the moment) but also everything I could notice on this nLab page (though it is possible that I missed something). Note that I do not ask (e.g.) whether or not category theory has been used in connection with hypothetical models in physics. I've read Baez' blog from time to time over the decades and have already demonstrated knowledge of the existence of the nLab. I am dimly aware of stuff like (e.g.) the connection between Hopf algebras and renormalization, but I have yet to encounter something that seems like it has a nontrivial category-theoretic component and cannot be expressed in some other more "traditional" language. Note finally that I am ignorant of category theory beyond the words "morphism" and "functor" and (in my youth) "direct limit". So answers that take this into account are particularly welcome.
Fusion categories and module categories come up in topological states of matter in solid state physics. See the research, publications, and talks at Microsoft's Station Q .
{ "source": [ "https://mathoverflow.net/questions/34861", "https://mathoverflow.net", "https://mathoverflow.net/users/1847/" ] }
34,938
If one iterates the map $z \mapsto z^2 + c$ there is obviously a simple formula for the sequence one gets if $c=0$. Less obviously, there is also a simple formula when $c = -2$ (use the identity $2 \cos(2x) = (2\cos(x))^2 - 2)$. Are there any other values of $c$ for which one can solve this recurrence explicitly? (For all initial values of course: there are many trivial explicit solutions for special initial values, such as fixed points.) Related links: http://en.wikipedia.org/wiki/Mandelbrot_set (the points $c$ where 0 remains bounded under iteration of this map: this strongly suggests that there is no simple exact solution for general $c$). http://en.wikipedia.org/wiki/Logistic_map (gives the explicit solutions above, after a change of variable) Motivation: I once used the map with $c=-2$ in a lecture to show that one could prove limits exist even without a formula for the exact solution. A first year calculus student pointed out the non-obvious exact solution above, and I don't want to be caught out like this again.
No, there are no others. Analytically, one can show that if a Julia set contains an analytic arc, it is in fact a straight line or a circle (up to conjugation). For the class $z^2+c$, $0$ and $-2$ are the only values where this occurs. This does not quite imply that there are no closed-form solutions of the recurrence for any other value of $c$, but such a closed form would naturally describe a fractal. As far as I know, there are no "closed form" functions which directly have a fractal as one of their level sets.
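For comparison, the two exceptional closed forms mentioned in the question can be written out explicitly (standard formulas, added only for reference): the iterates of $z\mapsto z^2+c$ are $$c=0:\ \ z_n=z_0^{\,2^n},\qquad\qquad c=-2:\ \ z_n=2\cos\bigl(2^n\arccos(z_0/2)\bigr),$$ the latter for $z_0\in[-2,2]$ (with $\cos$, $\arccos$ read as complex functions otherwise), and the corresponding Julia sets are the unit circle and the segment $[-2,2]$ -- precisely the circle/straight-line dichotomy invoked above.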
{ "source": [ "https://mathoverflow.net/questions/34938", "https://mathoverflow.net", "https://mathoverflow.net/users/51/" ] }
35,021
Call a function $f$ computable if it is a total function $\mathbb{R} \to \mathbb{R}$ for which there exists a Turing machine outputting arbitrarily close approximations to $f(x)$ given arbitrarily close approximations to $x$. Obviously not every computable function is differentiable (for example, absolute value). For arbitrary continuous functions, the set of points of differentiability is $\Pi_{3}^0$. Can this be improved for computable functions? Suppose $f$ is computable and continuously differentiable everywhere. Must $f'$ be computable?
John Myhill gave an example of a recursive function defined on a compact interval and having a continuous derivative that is not recursive [Michigan Math. J. 18 (1971), 97-98, MR0280373 ]. However, Pour-El and Richards have shown that if a recursive function defined on a compact interval has a continuous second derivative, then it has a recursive first derivative [ Computability and noncomputability in classical analysis , TAMS 275 (1983), 539-560, MR0682717 ].
{ "source": [ "https://mathoverflow.net/questions/35021", "https://mathoverflow.net", "https://mathoverflow.net/users/158/" ] }
35,198
Let us define the nth smooth homotopy group of a smooth manifold $M$ to be the group $\pi_n^\infty(S^k)$ of smooth maps $S^n \to S^k$ modulo smooth homotopy. Of course, some care must be taken to define the product, but I don't think this is a serious issue. The key is to construct a smooth map $S^n \to S^n \lor S^n$ (regarded as subspaces of $\mathbb{R}^{n+1}$) which collapses the equator to a point; we then define the product of two (pointed) maps $f, g: S^n \to S^k$ to be the map $S^n \lor S^n \to S^k$ which restricts to $f$ on the left half and $g$ on the right half. To accomplish this, use bump functions to bend $S^n$ into a smooth "dumbell" shape consisting of a cylinder $S^{n-1} \times [0,1]$ with two large orbs attached to the ends, and retract $S^{n-1}$ to a point while preserving smoothness at the ends. Then retract $[0,1]$ to a point, and we're done. Question: is the natural "forget smoothness" homomorphism $\phi: \pi_n^\infty(S^k) \to \pi_n(S^k)$ an isomorphism? If not, what is known about $\pi_n^\infty(S^k)$ and what tools are used? In chapter 6 of "From Calculus to Cohomology", Madsen and Tornehave prove that every continuous map between open subsets of Euclidean spaces is homotopic to a smooth map. Thus every continuous map $f$ between smooth manifolds is "locally smooth up to homotopy", meaning that every point in the source has a neighborhood $U$ such that $f|_U$ is homotopic to a smooth map. However it is not clear to me that the local homotopies can be chosen in such a way that they glue together to form a global homotopy between $f$ and a smooth map. This suggests that $\phi$ need not be surjective. In the same reference as above, it is shown that given any two smooth maps between open subsets of Euclidean spaces which are continuously homotopic, there is a smooth homotopy between them. As above this says that two smooth, continuously homotopic maps between smooth manifolds are locally smoothly homotopic, but I again see no reason why the local smooth homotopies should necessarily glue to form a global smooth homotopy. This suggests that $\phi$ need not be injective. I am certainly no expert on homotopy theory, but I have read enough to be surprised that this sort of question doesn't seem to be commonly addressed in the basic literature. This leads me to worry that my question is either fatally flawed, trivial, useless, or hopeless. Still, I'm retaining some hope that something interesting can be said.
Yes, the map you mention is an isomorphism. I think the main reason people rarely address your specific question in literature is that the technique of the proof is more important than the theorem. All the main tools are written up in ready-to-use form in Hirsch's Differential Topology textbook. There are two steps to prove the theorem. Step 1 is that all continuous maps can be approximated by (necessarily) homotopic smooth maps. The 2nd step is that if you have a continuous map that's smooth on a closed subspace (say, a submanifold) then you can approximate it by a (necessarily) homotopic smooth map which agrees with the initial map on the closed subspace. So that gives you a well-defined inverse to your map $\phi$. There are two closely-related proofs of this. Both use embeddings and tubular neighbourhoods to turn this into a problem for continuous maps defined on open subsets of euclidean space. And there you either use partitions of unity or a "smoothing operator", which is almost the same idea -- convolution with a bump function.
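For concreteness, the "smoothing operator" mentioned at the end is just convolution with a mollifier; for a continuous map $f$ defined on an open subset of $\mathbb{R}^n$ one sets $$f_\varepsilon(x)=\int_{\mathbb{R}^n} f(x-y)\,\rho_\varepsilon(y)\,dy,\qquad \rho_\varepsilon(y)=\varepsilon^{-n}\rho(y/\varepsilon),$$ where $\rho$ is a smooth bump function with total integral $1$ supported in the unit ball; then $f_\varepsilon$ is smooth and converges to $f$ uniformly on compact sets as $\varepsilon\to 0$, which is the approximation statement one feeds into the embedding/tubular-neighbourhood argument.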
{ "source": [ "https://mathoverflow.net/questions/35198", "https://mathoverflow.net", "https://mathoverflow.net/users/4362/" ] }
35,209
Let $\xi_t$ be a zero-mean Gaussian process on $[0,1]$ with covariance operator $C$. I would like to better understand the relation between the covariance operator and the regularity of the trajectories. I already know the following: Theorem (Kolmogorov). If there exist $\alpha>1$, $C\geq 0$ and $\epsilon>0$ such that $$E[|\xi_t-\xi_s|^{\alpha}] \leq C\, |t-s|^{1+\epsilon}$$ then there exists a modification of the process that is almost surely Hölder-$\delta$ for $\delta\in ]0,\epsilon/\alpha[$. I would like to know if there are other results in this direction (with other spaces than Hölder?) and especially one that directly relates a norm (the spectral norm under a stationarity assumption?) of the covariance operator to the regularity of the trajectories. Note: if I remember correctly, there is a link between the closure of the Cameron-Martin space and the support of the Gaussian measure associated to the process... how can I reformulate this to answer my question?
{ "source": [ "https://mathoverflow.net/questions/35209", "https://mathoverflow.net", "https://mathoverflow.net/users/6531/" ] }
35,223
I started thinking about this question because of this discussion: http://sbseminar.wordpress.com/2010/08/10/negative-value-added-by-journals/ about how journals often change a paper (for the worse) after acceptance. Here's my question: Since it doesn't make sense to number every single equation (especially if you will only refer to it once in the text), I often use the phrase "the equation above" or "the equation below" to refer to the previous or next displayed equation. I am well aware that after compiling, said equation may be on the previous or next page, respectively, and not strictly above or below . I thought this was common practice. One journal publisher changed every single one of my "the above equation" phrases to "the earlier equation". I didn't bother protesting, but in my mind it definitely made the text worse. Do other people avoid using "above" and "below"?
Since it doesn't make sense to number every single equation... I used to number only equations I referred to, but then someone pointed out the following. When you write a paper and make it public, you are de facto allowing other people to talk about your paper and simply because you in your paper see no reason to refer to some equation does not mean that other people reading your paper will not do so. Hence as a friendly gesture to your readers, you should number all your equations, and in this way allow them to refer to, say, equation (n) in your paper, instead of having to come up with some complicated reference. Since that time I have tried to number all my equations.
{ "source": [ "https://mathoverflow.net/questions/35223", "https://mathoverflow.net", "https://mathoverflow.net/users/6871/" ] }
35,224
Given an even dimensional manifold, the mapping class group acts on middle dimensional cohomology (or homology) and this action preserves the intersection form. For manifolds of dimension $4k+2$, the action is symplectic, while it is orthogonal for manifolds of dimension $4k$. In dimension 2, it is well-known that any integral symplectic transformation on the cohomology of degree 1 can be realized by some diffeomorphism. I would like to know if this is still true in higher dimensions. I am interested mostly in the symplectic case (dimension $4k+2$). More generally, does anyone know a good reference about mapping class groups of manifolds of dimension higher than 2? All the references I found treat exclusively the case of surfaces. Thanks in advance.
{ "source": [ "https://mathoverflow.net/questions/35224", "https://mathoverflow.net", "https://mathoverflow.net/users/2183/" ] }
35,320
The Perron-Frobenius theorem says that the largest eigenvalue of a positive real matrix (all entries positive) is real. Moreover, that eigenvalue has a positive eigenvector, and it is the only eigenvalue having a positive eigenvector. Now suppose we want to construct a positive rational matrix with a particular Perron-Frobenius eigenvalue. Specifically, consider a positive real algebraic number $\lambda$ which is greater in absolute value than all of its Galois conjugates. Does there exist a positive rational matrix $A$ with $\lambda$ as its Perron-Frobenius eigenvalue?
The answer to a sharper question involving integers, rather than rationals, is affirmative. Let $\lambda$ be a positive real algebraic integer that is greater in absolute value than all its Galois conjugates ("Perron number" or "PF number"). Then $\lambda$ is the Perron–Frobenius eigenvalue of a positive integer matrix. (The converse statement is an integer version of the Perron–Frobenius theorem, and is easy to prove.) In a slightly weaker form (aperiodic non-negative matrix), this is a theorem of Douglas Lind, from The entropies of topological Markov shifts and a related class of algebraic integers. Ergodic Theory Dynam. Systems 4 (1984), no. 2, 283--300 (MR). I don't have a good reference for the strong form, but it was discussed at Thurston's seminar in 2008-2009. One interesting thing to note is that, while the proof can be made constructive, it is non-uniform: the size of the matrix can be arbitrarily large compared to the degree of $\lambda$.
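For concreteness, here is a minimal numerical sketch of the easy direction (purely illustrative; the matrix below is a convenient example, not the construction from Lind's proof): the golden ratio is a Perron number, and the companion matrix of its minimal polynomial $x^2-x-1$ happens to be a non-negative integer matrix realizing it as the Perron–Frobenius eigenvalue.

```python
import numpy as np

# Companion matrix of x^2 - x - 1: non-negative integer entries.
A = np.array([[1, 1],
              [1, 0]])

eigenvalues = np.linalg.eigvals(A)
perron = max(eigenvalues, key=abs)            # eigenvalue of largest modulus

golden = (1 + np.sqrt(5)) / 2                 # the Perron number in question
print("spectrum:", np.round(eigenvalues, 6))  # the golden ratio and its conjugate
assert abs(perron - golden) < 1e-9            # PF eigenvalue is the golden ratio
assert all(abs(lam) <= abs(perron) for lam in eigenvalues)  # and it dominates
```

Producing such a matrix from an arbitrary Perron number is exactly the non-trivial content of the theorem, and, as noted above, the matrix may need to be much larger than the degree of $\lambda$.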
{ "source": [ "https://mathoverflow.net/questions/35320", "https://mathoverflow.net", "https://mathoverflow.net/users/8410/" ] }
35,334
Is there a version of Stokes' theorem for vector bundle-valued (or just vector-valued ) differential forms? Concretely: Let $E \rightarrow M$ be a smooth vector bundle over an $n$ -manifold $M$ equipped with a connection. First of all, is there an $E$ -valued integration defined on the space $\Omega^{n-1}(M,E)$ of smooth sections of $E\otimes \Lambda^{n-1}T^\ast M$ mimicking what you have for $\mathbb{R}$ -valued forms? If so, does $\int_{\partial M} \omega = \int_M d\omega$ hold (or even make sense) for $\omega\in\Omega^{n-1}(M,E)$ if $d$ is the exterior derivative coming from our connection on $E$ ? In this setting, letting $E$ be the trivial bundle $M\times \mathbb{R}$ should give the ordinary integral and Stokes' theorem. Sorry if the answer is too obvious; I just don't have any of my textbooks available at the moment, and have never thought about Stokes' theorem for anything other than scalar-valued forms before.
For a general vector bundle there is no "$E$-valued integration" as you put it. You are trying to add up elements in the fibres of $E$, but since the fibres over different points are not the same vector space you can't add their elements. For the trivial bundle $M \times \mathbb{R}^k$ - and with a fixed choice of trivialisation! - you can carry out the integral component by component. But if you change the trivialisation you will get a different answer. Moreover, you can change the trivialisation in a way that varies over the manifold, so there is no hope that the integral will just change by a linear map of $\mathbb{R}^k$. You can see this behaviour even with ordinary functions. A function is a section of the trivialised rank 1 bundle. If you change the trivialisation, but insist on regarding ordinary $p$-forms as bundle valued, you multiply all your forms by a fixed nowhere vanishing function. This can change the integral over a $p$-cycle in a more-or-less arbitrary way. What you can integrate $E$-valued forms against is $E^*$-valued ones. Given $a \in \Omega^p(E)$ and $b \in \Omega^q(E^*)$ their wedge product is an ordinary $(p+q)$-form which you can then integrate over a $(p+q)$-cycle. Now you have a version of Stokes theorem. If you have a connection $A$ in $E$ then you can check that $$ d(a \wedge b) = d_A(a) \wedge b \pm a \wedge d_A(b). $$ So Stokes theorem gives $$ \int_{M} d_A(a) \wedge b \pm a \wedge d_A(b) = \int_{\partial M} a \wedge b. $$ In the case when $b$ is a covariant constant section of $E^*$ and $M$ has dimension one more than the degree of $a$, we get $$\int_M \langle d_A(a) , b \rangle=\int_{\partial M} \langle a, b \rangle$$ This is just the usual Stokes theorem, for the component of $a$ in the direction $b$. Since $b$ is covariant-constant, $\langle d_A(a), b \rangle = d \langle a, b \rangle$.
{ "source": [ "https://mathoverflow.net/questions/35334", "https://mathoverflow.net", "https://mathoverflow.net/users/8415/" ] }
35,408
In group theory, the single most important piece of information about a group is its cardinality, which is of course either finite, countably infinite, or uncountably infinite. Usually, however, uncountably infinite simply means a cardinality of $\aleph_{1}$, the same as $\mathbb{R}$. My question is: is there anywhere that groups with cardinality strictly greater than $\aleph_{1}$ arise naturally? Of course, it is easy enough to construct groups with arbitrarily large cardinality, but I cannot recall ever seeing them used.
In line with Joel's answer, my favorite "outrageously large group" is the group $G = \operatorname{Aut}(\mathbb{C})$ of field automorphisms of the complex numbers. It has cardinality $2^{2^{\aleph_0}}$, which is pretty scary. But that's just the beginning of how large it is. For instance, from the study of real-closed fields, one can deduce that the number of conjugacy classes of order $2$ elements of $G$ is also $2^{2^{\aleph_0}}$. It is also an extension of the absolute Galois group of $\mathbb{Q}$ (a profinite group which is conjectured to have among its quotients every finite group, up to isomorphism) by the huge simple group $\operatorname{Aut}(\mathbb{C}/\overline{\mathbb{Q}})$.
{ "source": [ "https://mathoverflow.net/questions/35408", "https://mathoverflow.net", "https://mathoverflow.net/users/6856/" ] }
35,455
Question Suppose there is a bijection between the underlying sets of two finite groups $G, H$ , such that every subgroup of $G$ corresponds to a subgroup of $H$ , and that every subgroup of $H$ corresponds to a subgroup of $G$ . Does this imply that $G, H$ are isomorphic? Note that we do not require the bijection to actually be the isomorphism. Motivation The question is interesting to me because I am considering maps of groups which aren't homomorphisms but preserve the subgroup structure in some sense - given a group, we can forget the multiplication operation and look only at the closure operator that maps a subset of $G$ to the subgroup generated by it. If the question is resolved in the affirmative, then the forgetful functor from the usual category $Grp$ to this category won't create any new isomorphisms. (Note that I didn't precisely specify the morphisms this new category -- you could just use the usual definition of a homomorphism, and say that if the mapping commutes with the closure operator, then its a morphism. The definition I actually care about is, a morphism of this category is a mapping such that every closed set in the source object is the preimage of a closed set of the target object. It doesn't make much difference as far as this question is concerned, the isomorphisms of both categories are the same.) I asked a friend at Mathcamp about this a few weeks ago, he said a bunch of people started thinking about it but got stumped after a while. The consensus seems to have been that it is probably false, but the only counter examples may be very large. I don't really have any good ideas / tools for how to prove it might be true, I mostly wanted to just ask if anyone knew offhand / had good intuition for how to find a finite counterexample. Edit (YCor): (a) the question has reappeared in the following formulation: does the hypergraph structure of the set of subgroups of a (finite) group determine its isomorphism type? A hypergraph is a set endowed with a set of subsets. The hypergraph of subgroups is the data of the set of subgroups, and therefore to say that groups $G,H$ have isomorphic hypergraphs of subgroups means that there's a bijection $f:G\to H$ such that for every subset $A\subset G$ , $f(A)$ is a subgroup of $H$ if and only $A$ is a subgroup of $G$ . Several answers, complementing the one given here, have been provided in this question . (b) There a weaker well-studied notion for groups, namely to have isomorphic subgroup lattices. Having isomorphic hypergraphs of subgroups requires such an isomorphism to be implemented by a bijection (this is not always the case: take two groups of distinct prime order).
The answer is no in general. I.e, there are finite non-isomorphic groups G and H such that there exists a bijection between their elements which also induces a bijection between their subgroups. For this, I used two non-isomorphic groups which not only have the same subgroup lattice (which certainly is necessary), but also have the same conjugacy classes. There are two such groups of size 605, both a semidirect product $(C_{11}\times C_{11}) \rtimes C_5$ (see this site for details on the construction). In the small group library of GAP , these are the groups with id [ 605, 5 ] and. [ 605, 6 ]. These are provably non-isomorphic (you can construct the groups as described in the reference I gave, and then use GAPs IdSmallGroup command to verify that the groups described there are the same as the ones I am working with here). With a short computer program, one can now construct a suitable bijection. First, let us take the two groups: gap> G:=SmallGroup(605, 5); <pc group of size 605 with 3 generators> gap> H:=SmallGroup(605, 6); <pc group of size 605 with 3 generators> The elements of these groups are of order 1, 5 or 11, and there are 1, 484 and 120 of each. We will sort them in a "nice" way (that is, we try to match each subgroup of order 5 to another one, element by element) and obtain a bijection from this. First, a helper function to give us all elements in "nice" order: ElementsInNiceOrder := function (K) local elts, cc; elts := [ One(K) ]; cc := ConjugacyClassSubgroups(K, Group(K.1)); Append(elts, Concatenation(List(cc, g -> Filtered(g,h->Order(h)=5)))); Append(elts, Filtered(Group(K.2, K.3), g -> Order(g)=11)); return elts; end;; Now we can take the elements in the nice order and define the bijection $f$ : gap> Gelts := ElementsInNiceOrder(G);; gap> Helts := ElementsInNiceOrder(H);; gap> f := g -> Helts[Position(Gelts, g)];; Finally, we compute the sets of all subgroups of $G$ resp. $H$ , and verify that $f$ induces a bijection between them: gap> Gsubs := Union(ConjugacyClassesSubgroups(G));; gap> Hsubs := Union(ConjugacyClassesSubgroups(H));; gap> Set(Gsubs, g -> Group(List(g, f))) = Hsubs; true Thus we have established the claim with help of a computer algebra system. From this, one could now obtain a pen & paper proof for the claim, if one desires so. I have not done this in full detail, but here are some hints. Say $G$ is generated by three generators $g_1,g_2,g_3$ , where $g_1$ generates the $C_5$ factor and $g_2,g_3$ generate the characteristic subgroup $C_{11}\times C_{11}$ . We choose a similar generating set $h_1,h_2,h_3$ for $H$ . We now define $f$ in two steps: First, for $0\leq n,m <11$ it shall map $g_2^n g_3^m$ to $h_2^n h_3^m$ . This covers all elements of order 1 or 11, so in step two we specify how to map the remaining elements, which all have order 5. These are split into four conjugacy classes: $g_1^G$ , $(g_1^2)^G$ , $(g_1^3)^G$ and $(g_1^4)^G$ . We fix any bijection between $g_1^G$ and $h_1^H$ and extend that to a bijection on all elements of order 5 by the rule $f((g_1^g)^n)=f(g_1^g)^n$ . With some effort, one can now verify that this is a well-defined bijection between $G$ and $H$ with the desired properties. You will need to determine the subgroup lattice in each case; linear algebra helps a bit, as well as the fact that all subgroups have order 1, 5, 11, 55, 121 (unique) or 605. I'll leave the details to the reader, as I myself am happy enough with the computer result. 
UPDATE : as pointed out in another answer below by @dvitek (explained by @Ian Agol in comments), there is actually a much simpler example, which I somehow overlooked when I did my computer search. Credit to them, but just in case people want to reproduce their example with GAP, here is an input session doing just that: gap> G:=SmallGroup(16,5);; StructureDescription(G); "C8 x C2" gap> H:=SmallGroup(16,6);; StructureDescription(H); "C8 : C2" gap> Gelts := ListX([1..8],[1,2],{i,j}->G.1^i*G.2^j);; gap> Helts := ListX([1..8],[1,2],{i,j}->H.1^i*H.2^j);; gap> f := g -> Helts[Position(Gelts, g)];; gap> Gsubs := Union(ConjugacyClassesSubgroups(G));; gap> Hsubs := Union(ConjugacyClassesSubgroups(H));; gap> Set(Gsubs, g -> Group(List(g, f))) = Hsubs; true
{ "source": [ "https://mathoverflow.net/questions/35455", "https://mathoverflow.net", "https://mathoverflow.net/users/8445/" ] }
35,462
Suppose there are $K$ buckets, each of which can hold up to $N-1$ balls. The gain from putting $i$ balls in the $k^{th}$ bucket is $\Delta l_{k,i}$, for $i \in [1,N-1]$. The problem is to distribute $\lambda$ balls among these buckets so as to maximize the overall gain. How do we solve it?
{ "source": [ "https://mathoverflow.net/questions/35462", "https://mathoverflow.net", "https://mathoverflow.net/users/8447/" ] }
35,468
Are there any examples in the history of mathematics of a mathematical proof that was initially reviewed and widely accepted as valid, only to be disproved a significant amount of time later, possibly even after being used in proofs of other results? (I realise it's a bit vague, but if there is significant doubt in the mathematical community then the alleged proof probably doesn't qualify. What I'm interested in is whether the human race as a whole is known to have ever made serious mathematical blunders.)
Mathematicians used to hold plenty of false, but intuitively reasonable, ideas in analysis that were backed up with proofs of one kind or another (understood in the context of those times). Coming to terms with the counterexamples led to important new ideas in analysis. A convergent infinite series of continuous functions is continuous. Cauchy gave a proof of this (1821). See Theorem 1 in Cours D'Analyse Chap. VI Section 1. Five years later Abel pointed out that certain Fourier series are counterexamples. A consequence is that the concept of uniform convergence was isolated and, going back to Cauchy's proof, it was seen that he had really proved a uniformly convergent series of continuous functions is continuous. For a nice discussion of this as an educational tool, see "Cauchy's Famous Wrong Proof" by V. Fred Rickey . [Edit: This may not be historically fair to Cauchy. See Graviton's answer for another assessment of Cauchy's work, which operated with continuity using infinitesimals in such a way that Abel's counterexample was not a counterexample to Cauchy's theorem.] Lagrange, in the late 18th century, believed any function could be expanded into a power series except at some isolated points and wrote an entire book on analysis based on this assumption. (This was a time when there wasn't a modern definition of function; it was just a "formula".) His goal was to develop analysis without using infinitesmals or limits. This approach to analysis was influential for quite a few years. See Section 4.7 of Jahnke's "A History of Analysis". Work in the 19th century, e.g., Dirichlet's better definition of function, blew the whole work of Lagrange apart, although in a reverse historical sense Lagrange was saved since the title of his book is "Theory of Analytic Functions..." Any continuous function (on a real interval, with real values) is differentiable except at some isolated points. Ampere gave a proof (1806) and the claim was repeated in lots of 19th century calculus books. See pp. 43--44, esp. footnote 11 on page 44, of Hawkins's book "Lebesgue's theory of integration: its origins and development". Here is a Google Books link . In 1872 Weierstrass killed the whole idea with his continuous nowhere differentiable function, which was one of the first fractal curves in mathematics. For a survey of different constructions of such functions, see "Continuous Nowhere Differentiable Functions" by Johan Thim . A solution to an elliptic PDE with a given boundary condition could be solved by minimizing an associated "energy" functional which is always nonnegative. It could be shown that if the associated functional achieved a minimum at some function, then that function was a solution to a certain PDE, and the minimizer was believed to exist for the false reason that any set of nonnegative numbers has an infimum. Dirichlet gave an electrostatic argument to justify this method, and Riemann accepted it and made significant use of it in his development of complex analysis (e.g., proof of Riemann mapping theorem). Weierstrass presented a counterexample to the Dirichlet principle in 1870: a certain energy functional could have infimum 0 with there being no function in the function space under study at which the functional is 0. This led to decades of uncertainty about whether results in complex analysis or PDEs obtained from Dirichlet's principle were valid. 
In 1900 Hilbert finally justified Dirichlet's principle as a valid method in the calculus of variations, and the wider classes of function spaces in which Dirichlet's principle would be valid eventually led to Sobolev spaces. A book on this whole story is A. F. Monna, "Dirichlet's principle: A mathematical comedy of errors and its influence on the development of analysis" (1975), which is not reviewed on MathSciNet.
{ "source": [ "https://mathoverflow.net/questions/35468", "https://mathoverflow.net", "https://mathoverflow.net/users/7869/" ] }
35,469
It is a well known result that a random walk on a 2D lattice will return to the origin (see Polya's random walk constant). Based on this, it is not a big stretch to conclude that the random walk will visit every point of the plane with probability 1. A bit more surprising is the fact that this is not true in higher dimensions (see the link above). My question is the following: What is the probability that two random walks with distinct origins will arrive at the same point after the same number of steps? I think it's pretty clear that the answer will depend on the distance between the origins of the walks. So far, I've been trying to reduce this to a problem of one random walk in a higher dimensional lattice, but I'm not sure if this is a good approach. In case the answer is obvious, this problem is easy to generalize (higher dimensional lattices, more random walks, etc.). I'd appreciate a reference or two where I could learn more. Thanks
The difference between the positions is another random walk in the same dimension. You can either view the steps as different, or sample a random walk at even times. So, the probability is $0$ if meeting is ruled out by parity, and $1$ in the plane if meeting is possible.
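If it helps to see the reduction concretely, here is a rough Monte Carlo sketch (the starting gap, step budget and trial count are arbitrary choices made only for illustration): it tracks just the difference of the two positions, which is itself a walk on $\mathbb{Z}^2$, and records whether that difference hits the origin, i.e. whether the walkers meet, within the budget.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walkers_meet(start_gap=(2, 0), max_steps=100_000):
    """Track the difference of two independent simple walks on Z^2.

    The difference performs its own random walk in the plane; the two
    walkers meet exactly when it hits (0, 0).  The even gap (2, 0) keeps
    parity compatible with meeting.  Returns True if they meet in time.
    """
    dx, dy = start_gap
    for _ in range(max_steps):
        ax, ay = random.choice(STEPS)   # step of the first walker
        bx, by = random.choice(STEPS)   # step of the second walker
        dx, dy = dx + ax - bx, dy + ay - by
        if (dx, dy) == (0, 0):
            return True
    return False

trials = 200
met = sum(walkers_meet() for _ in range(trials))
print(f"met within the step budget in {met}/{trials} trials")
```

Because the difference walk is recurrent in two dimensions, the observed fraction creeps toward 1 as the step budget grows; in three or more dimensions the analogous experiment levels off at a probability strictly below 1.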
{ "source": [ "https://mathoverflow.net/questions/35469", "https://mathoverflow.net", "https://mathoverflow.net/users/5593/" ] }
35,590
k-XORSAT is the problem of deciding whether a Boolean formula $$\bigwedge_{i \in I} \oplus_{j=1}^k l_{s_{ij}}$$ is satisfiable. Here $\oplus$ denotes the binary XOR operation, $I$ is some index set, and each clause has $k$ distinct literals $l_{s_{ij}}$ each of which is either a variable $x_{s_{ij}}$ or its negation. Equivalently, $k$-XORSAT requires deciding whether a set of linear equations, each of the form $\sum_{j=1}^k x_{s_{ij}}\equiv 1\; (\mod 2)$, has a solution over $\mathbb{Z}_2 = \mathbb{Z}/2\mathbb{Z}$. Every decision problem Q has an associated counting problem #Q, which (informally speaking) requires establishing the number of distinct solutions. Such counting problems form the complexity class #P . The "hardest" problems in #P are #P-complete, as any problem in #P can be reduced to a #P-complete problem. The counting problem associated with any NP-complete decision problem is #P-complete. However, many "easy" decision problems (some even solvable in linear time) also lead to #P-complete counting problems. For instance, Leslie Valiant's original 1979 paper The Complexity of Computing the Permanent shows that computing the permanent of a 0-1 matrix is #P-complete. As a second example, the list of #P-complete problems in the companion paper The Complexity of Enumeration and Reliability Problems includes #MONOTONE 2-SAT; this problem requires counting the number of solutions to Boolean formulas in conjunctive normal form, where each clause has two variables and no negated variables are allowed. (MONOTONE 2-SAT is of course rather trivial as a decision problem.) Andrea Montanari has written about the partition function of $k$-XORSAT in some lecture notes , and his book with Marc Mézard apparently discusses this (unfortunately I do not have a copy available to hand, and the relevant Chapter 17 is not included in Montanari's online draft). These considerations lead to the following question: Is #$k$-XORSAT #P-complete? Note that the formula in Montanari's notes does not obviously appear to answer this question. Just because there is a nice closed form solution, doesn't mean we can evaluate it efficiently: consider the Tutte polynomial . Some difficult counting problems in #P can still be approximated in a certain sense, by means of a scheme called an FPRAS. Jerrum, Sinclair, and collaborators have linked the existence of an FPRAS for #P problems to the question of whether $NP = RP$. I would therefore also be interested in the subsidiary question Does #$k$-XORSAT have an FPRAS? Edit: clarified second question as per comment by Tsuyoshi Ito. Note that Peter Shor's answer renders this part of the question moot.
The solutions for XOR-SAT form an affine subspace of the vector space GF(2)$^n$. You can see this by realizing that if you add three solutions together, you get another solution. The counting problem for XOR-SAT is then that of deciding how many points are in this affine space over GF(2). This is trivial if you know the rank of a generating matrix for this space (the number is $2^r$ for rank $r$). This rank can be figured out by a standard linear algebra algorithm, so the counting problem is in polynomial time.
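For what it's worth, here is a small self-contained sketch of that linear-algebra algorithm (the clause encoding and the toy instance at the end are illustrative choices): Gaussian elimination over GF(2) on the augmented system, after which the solution set, if non-empty, is an affine subspace of dimension $n-r$ (with $r$ the rank of the coefficient matrix), so the count is $2^{n-r}$.

```python
def count_xorsat_solutions(clauses, n):
    """Count solutions of a k-XOR-SAT system in n variables over GF(2).

    Each clause is (variables, parity): the 0-indexed variables must XOR
    to parity.  Rows are bitmasks with the constant term stored in bit n.
    Returns 0 if the system is inconsistent, else 2**(n - rank).
    """
    rows = [sum(1 << v for v in set(variables)) | (parity << n)
            for variables, parity in clauses]

    rank = 0
    for col in range(n):                      # Gaussian elimination mod 2
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1

    if any(row == 1 << n for row in rows):    # a row reading "0 = 1"
        return 0
    return 2 ** (n - rank)

# x0 ^ x1 = 1,  x1 ^ x2 = 1,  x0 ^ x2 = 0: rank 2 in 3 variables, so 2 solutions.
print(count_xorsat_solutions([((0, 1), 1), ((1, 2), 1), ((0, 2), 0)], n=3))
```

This is just the decision procedure plus a rank count; the contrast the question is after is that, as it notes, the analogous count for #MONOTONE 2-SAT is #P-complete.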
{ "source": [ "https://mathoverflow.net/questions/35590", "https://mathoverflow.net", "https://mathoverflow.net/users/7252/" ] }
35,594
Littlewood's well-known conjecture about simultaneous rational approximation is that for all $x, y \in \mathbb{R}$, $\liminf_{n \to \infty} n \Vert nx \Vert \Vert ny \Vert = 0$ (where $\Vert x \Vert$ denotes the distance from $x$ to the nearest integer). A heuristic argument for this (mentioned in this survey article by Akshay Venkatesh) is as follows. Write $q_n(x)$ for the denominator of the $n$th convergent of the continued-fraction expansion of $x$. We know that $q_n(x) \Vert q_n(x) x \Vert$ is bounded, and it is reasonable to expect that $\Vert q_n(x) y \Vert$ will be small sometimes. Venkatesh points out that this argument doesn't work: in fact, given any sequence $q_n$ such that $\liminf q_{n+1} / q_n > 1$, we can find a $y \in \mathbb{R}$ such that $\Vert q_n y \Vert$ is bounded away from zero. However, this doesn't quite rule out using the continued-fraction expansions of $x$ and $y$ to prove Littlewood's conjecture. The result would follow if it could be shown that for all $x, y \in \mathbb{R}$, either $\liminf_{n \to \infty} \Vert q_n(x) y \Vert = 0$ or $\liminf_{n \to \infty} \Vert q_n(y) x \Vert = 0$. My question is whether this can be shown to be false. That is: do there exist $x$ and $y$ such that both $\Vert q_n(x) y \Vert$ and $\Vert q_n(y) x \Vert$ are bounded away from zero? In light of gowers's answer below, let me add the condition that $x$ and $y$ are badly-approximable, i.e. that they have bounded partial quotients in their continued fractions. (It is still the case that a negative answer would imply Littlewood's conjecture.) Here is an example from the realm of badly-approximable numbers. Let $x = \frac12 (1 + \sqrt{5})$ and $y = \frac12 (1 - 1/\sqrt{5})$. Then $q_n(x) = F_n$ (Fibonacci number), and $q_n(y) = 2F_n + F_{n+1}$ for $n \geq 1$. In this case, $\liminf \Vert q_n(x) y \Vert = \frac15 > 0$, but $\Vert q_n(y) x \Vert \to 0$.
{ "source": [ "https://mathoverflow.net/questions/35594", "https://mathoverflow.net", "https://mathoverflow.net/users/3755/" ] }
35,664
If this problem is really stupid, please close it. But I really wanna get some answer for it. And I learnt computational complexity by reading books only. When I learnt to the topic of relativization and oracle machines, I read the following theorem: There exist oracles A, B such that $P^A = NP^A$ and $P^B \neq NP^B$. And then the book said because of this, we can't solve the problem of NP = P by using relativization. But I think what it implies is that $NP \neq P$. The reasoning is like this: First of all, it is quite easy to see that: $$A = B \Leftrightarrow \forall \text{oracle O, }A^O = B^O$$ Though I think it is obvious, I still give a proof to it: A simple proof of NP != P ? And the negation of it is: $$A \neq B \Leftrightarrow \exists \text{oracle O such that } A^O \neq B^O$$ Therefore since there is an oracle B such that: $$ NP^B \neq P^B$$ we can conclude that $ NP \neq P $ What's the problem with the above reasoning?
The map $A \to A^O$ does not depend only on the elements contained in the language $A$, so it is not an operation on languages. It depends on the semantic way in which the language $A$ is defined. For instance, $NP^O$ is allowed both nondeterminism and access to $O$, while $P^O$ is allowed deterministic polynomial time and access to $O$. I believe it is true that $BPP$ can be separated from $P$ relative to an oracle, even though it is thought that $BPP = P$.
{ "source": [ "https://mathoverflow.net/questions/35664", "https://mathoverflow.net", "https://mathoverflow.net/users/5217/" ] }
35,680
I've seen a couple papers (that I now can't find) that say that in his paper "On irreducible 3-manifolds which are sufficiently large" Waldhausen proved that the data $\pi_1(\partial (S^3\setminus K)) \to \pi_1(S^3\setminus K)$ is a complete knot invariant. However, the word "knot" doesn't appear in this paper (although the phrase "to avoid an orgy of notation" does :-). Is the claimed result a straightforward corollary of his main results? Or am I looking at the wrong paper?
As Ryan says, this follows from Waldhausen's paper, when appropriately interpreted. Sufficiently large 3-manifolds are usually called "Haken" in the literature, and as Ryan says, they are irreducible and contain an incompressible surface (which means that the surface is incompressible and boundary incompressible). An irreducible manifold with non-empty boundary and not a ball (ie no 2-sphere boundary components) is always sufficiently large, by a homology and surgery argument. By Alexander's Lemma , knot complements are irreducible, and therefore sufficiently large (the sphere theorem implies that they are aspherical). Waldhausen's theorem implies that if one has two sufficiently large 3-manifolds $M_1, M_2$ with connected boundary components, and an isomorphism $\pi_1(M_1) \to \pi_1(M_2)$ inducing an isomorphism $\pi_1(\partial M_1) \to \pi_1(\partial M_2)$, then $M_1$ is homeomorphic to $M_2$. This is proven by first showing that there is a homotopy equivalence $M_1\simeq M_2$ which restricts to a homotopy equivalence $\partial M_1\simeq \partial M_2$. Then Waldhausen shows that this relative homotopy equivalence is homotopic to a homeomorphism by induction on a hierarchy. The peripheral data is necessary if the manifold has essential annuli, for example the square and granny knots have homotopy equivalent complements. If $K_1, K_2\subset S^3$ are (tame) knots, and $M_1=S^3-\mathcal{N}(K_1), M_2=S^3-\mathcal{N}(K_2)$ are two knot complements, then Waldhausen's theorem applies. However, one must also cite the knot complement problem solved by Gordon and Luecke , in order to conclude that $K_1$ and $K_2$ are isotopic knots. Otherwise, one must also hypothesize that the isomorphism $\partial M_1 \to \partial M_2$ takes the meridian to the meridian (the longitudes are determined homologically). This extra data is necessary to solve the isotopy problem for knots in a general 3-manifold $M$, to guarantee that the homeomorphism $(M_1,\partial M_1)\to (M_2,\partial M_2)$ extends to a homeomorphism $(M,K_1)\to (M,K_2)$, since for example there are knots in lens spaces which have homeomorphic complements by a result of Bleiler-Hodgson-Weeks .
{ "source": [ "https://mathoverflow.net/questions/35680", "https://mathoverflow.net", "https://mathoverflow.net/users/2669/" ] }
35,736
I have heard that the canonical divisor can be defined on a normal variety X since the complement of the smooth locus has codimension at least 2. Then, I have heard as well that for ANY algebraic variety such that the canonical bundle is defined: $$\mathcal{K}=\mathcal{O}_{X}\left(-\sum D_i\right)$$ where the $D_i$ are representatives of all divisors in the Class Group. I want to prove that formula or I want to find a reference for that formula, or I want someone to rephrase it in a similar way if they heard about it. Why do I want to prove it? Well, I use the definition that something is Calabi-Yau if its canonical bundle is 0. In the case of toric varieties, $\sum D_i \sim 0$ if all the primitive generators for the divisors lie on a hyperplane. Then the sum is 0 and therefore the toric variety is Calabi-Yau. Can someone confirm or fix the above formula? I do not ask for a debate on when something is Calabi-Yau, I handle that OK, I just ask whether the above formula is correct. A reference would be enough. I have little access to references at the moment.
Edit (11/12/12): I added an explanation of the phrase "this is essentially equivalent to $X$ being $S_2$" at the end to answer aglearner's question in the comments. [See also here and here ] Dear Jesus, I think there are several problems with your question/desire to define a canonical divisor on any algebraic variety. First of all, what is any algebraic variety? Perhaps you mean a quasi-projective variety (=reduced and of finite type) defined over some (algebraically closed) field. OK, let's assume that $X$ is such a variety. Then what is a divisor on $X$? Of course, you could just say it is a formal linear combination of prime divisors , where a prime divisor is just a codimension 1 irreducible subvariety. OK, but what if $X$ is not equidimensional? Well, let's assume it is, or even that it is irreducible. Still, if you want to talk about divisors, you would surely want to say when two divisors are linearly equivalent . OK, we know what that is, $D_1$ and $D_2$ are linearly equivalent iff $D_1-D_2$ is a principal divisor . But, what is a principal divisor? Here it starts to become clear why one usually assumes that $X$ is normal even to just talk about divisors, let alone defining the canonical divisor. In order to define principal divisors, one would need to define something like the order of vanishing of a regular function along a prime divisor. It's not obvious how to define this unless the local ring of the general point of any prime divisor is a DVR. Well, then this leads to one to want to assume that $X$ is $R_1$, that is, regular in codimension $1$ which is equivalent to those local rings being DVRs. OK, now once we have this we might also want another property: If $f$ is a regular function, we would expect, that the zero set of $f$ should be 1-codimensional in $X$. In other words, we would expect that if $Z\subset X$ is a closed subset of codimension at least $2$, then if $f$ is nowhere zero on $X\setminus Z$, then it is nowhere zero on $X$. In (yet) other words, if $1/f$ is a regular function on $X\setminus Z$, then we expect that it is a regular function on $X$. This in the language of sheaves means that we expect that the push-forward of $\mathscr O_{X\setminus Z}$ to $X$ is isomorphic to $\mathscr O_X$. Now this is essentially equivalent to $X$ being $S_2$. So we get that in order to define divisors as we are used to them, we would need that $X$ be $R_1$ and $S_2$, that is, normal . Now, actually, one can work with objects that behave very much like divisors even on non-normal varieties/schemes, but one has to be very careful what properties work for them. As far as I can tell, the best way is to work with Weil divisorial sheaves which are really reflexive sheaves of rank $1$. On a normal variety, the sheaf associated to a Weil divisor $D$, usually denoted by $\mathcal O_X(D)$, is indeed a reflexive sheaf of rank $1$, and conversely every reflexive sheaf of rank $1$ on a normal variety is the sheaf associated to a Weil divisor (in particular a reflexive sheaf of rank $1$ on a regular variety is an invertible sheaf) so this is indeed a direct generalization. One word of caution here: $\mathcal O_X(D)$ may be defined for Weil divisors that are not Cartier, but then this is (obviously) not an invertible sheaf. Finally, to answer your original question about canonical divisors. Indeed it is possible to define a canonical divisor (=Weil divisorial sheaf) for all quasi-projective varieties. 
If $X\subseteq \mathbb P^N$ and $\overline X$ denotes the closure of $X$ in $\mathbb P^N$, then the dualizing complex of $\overline X$ is $$ \omega_{\overline X}^\bullet=R{\mathscr H}om_{\mathbb P^N}(\mathscr O_{\overline X}, \omega_{\mathbb P^N}[N]) $$ and the canonical sheaf of $X$ is $$ \omega_X=h^{-n}(\omega_{\overline X}^\bullet)|_X=\mathscr Ext^{N-n}_{\mathbb P^N}(\mathscr O_{\overline X},\omega_{\mathbb P^N})|_X $$ where $n=\dim X$. (Notice that you may disregard the derived category stuff and the dualizing complex, and just make the definition using $\mathscr Ext$.) Notice further, that if $X$ is normal, this is the same as the one you are used to and otherwise it is a reflexive sheaf of rank $1$. As for your formula, I am not entirely sure what you mean by "where the $D_i$ are representatives of all divisors in the Class Group". For toric varieties this can be made sense as in Josh's answer, but otherwise I am not sure what you had in mind. (Added on 11/12/12): Lemma A scheme $X$ is $S_2$ if and only if for any $\iota:Z\to X$ closed subset of codimension at least $2$, the natural map $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism. Proof Since both statements are local we may assume that $X$ is affine. Let $x\in X$ be a point and $Z\subseteq X$ its closure in $X$. If $x$ is a codimension at most $1$ point, there is nothing to prove, so we may assume that $Z$ is of codimension at least $2$. Considering the exact sequence (recall that $X$ is affine): $$ 0\to H^0_Z(X,\mathscr O_X) \to H^0(X,\mathscr O_X) \to H^0(X\setminus Z,\mathscr O_X) \to H^1_Z(X,\mathscr O_X) \to 0 $$ shows that $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism if and only if $H^0_Z(X,\mathscr O_X)=H^1_Z(X,\mathscr O_X)=0$ the latter condition is equivalent to $$ \mathrm{depth}\mathscr O_{X,x}\geq 2, $$ which given the assumption on the codimension is exactly the condition that $X$ is $S_2$ at $x\in X$. $\qquad\square$
{ "source": [ "https://mathoverflow.net/questions/35736", "https://mathoverflow.net", "https://mathoverflow.net/users/1887/" ] }
35,746
In a recent issue of New Scientist (16 Aug 2010), I was surprised to read that a part of Wiles' proof of the Taniyama–Shimura conjecture relies on inaccessible cardinals. Here's the article: Richard Elwes, To infinity and beyond: The struggle to save arithmetic, New Scientist, August 2010. Here's the relevant bit from the article: "Large cardinals have been studied by logicians for a century, but their intangibility means they have seldom featured in mainstream mathematics. A notable exception is the most celebrated result of recent years, the proof of Fermat's last theorem by the British mathematician Andrew Wiles in 1994 [...] To complete his proof, Wiles assumed the existence of a type of large cardinal known as an inaccessible cardinal, technically overstepping the bounds of conventional arithmetic" Is this true? If so, could someone please outline how they are used?
The basic contention here is that Wiles' work uses cohomology of sheaves on certain Grothendieck topologies, the general theory of which was first developed in Grothendieck's SGA IV and which requires the existence of an uncountable Grothendieck universe. It has since been clarified that the existence of such a thing is equivalent to the existence of an inaccessible cardinal, and that this existence -- or even the consistency of the existence of an inaccessible cardinal! -- cannot be proved from ZFC alone, assuming that ZFC is consistent. Many working arithmetic and algebraic geometers however take it as an article of faith that in any use of Grothendieck cohomology theories to solve a "reasonable problem", the appeal to the universe axiom can be bypassed. Doubtless this faith is predicated on, or at least abetted by, the fact that most arithmetic and algebraic geometers (present company included) are not really conversant with, or willing to wade into, the intricacies of set theory. I have not really thought about such things myself so have no independent opinion, but I have heard from one or two mathematicians that I respect that removing this set-theoretic edifice is not as straightforward as one might think. (Added: here I meant removing it from general constructions, not just from applications to some particular number-theoretic result. And I wasn't thinking solely about the small etale site -- see e.g. the comments on crystalline stuff below.) Here is an article which gives more details on the matter: Colin McLarty, What does it take to prove Fermat's last theorem? Grothendieck and the logic of number theory, Bull. Symb. Log. 16 No. 3 (2010) pp. 359–377, doi: 10.2178/bsl/1286284558, archived author pdf. Note that I do not necessarily endorse the claims in this article, although I think there is something to the idea that written work by number theorists and algebraic geometers usually does not discuss what set-theoretic assumptions are necessary for the results to hold, so that when a generic mathematician tries to trace this back through standard references, there may seem to be at least an apparent dependency on Grothendieck universes. P.S.: If a mathematician of the caliber of Y.I. Manin made a point of asking in public whether the proof of the Weil conjectures depends in some essential way on inaccessible cardinals, is this not a sign that "Of course not; don't be stupid" may not be the most helpful reply?
{ "source": [ "https://mathoverflow.net/questions/35746", "https://mathoverflow.net", "https://mathoverflow.net/users/8528/" ] }