Dataset schema: source_id (int64, 1 to 4.64M); question (string, lengths 0 to 28.4k); response (string, lengths 0 to 28.8k); metadata (dict).
35,788
In this question, Ariyan asks about the uniqueness of extensions of vector bundles when they exist. Sasha's answer suggests that extensions of vector bundles don't always exist. More precisely, if $F$ is a vector bundle on an open subscheme $U$, there does not always exist a vector bundle $F'$ on the ambient space $X$ such that $F'|_U \cong F$. Can anyone give me a simple example of such an $F$? I am mainly interested in the case when $X$ is a variety (over $\mathbb{C}$) and $U$ is an open subvariety. Probably I want $X$ to be smooth.
The simplest example is the following. Take $X = \mathbb{A}^3$ with coordinates $(x,y,z)$, and let $E = \operatorname{Ker}(O_X \oplus O_X \oplus O_X \stackrel{(x,y,z)}\to O_X)$. Let $U$ be the complement of the point $(0,0,0) \in X$. Then $E|_U$ is a vector bundle. On the other hand, $E$ is not a vector bundle, but $E^{**} \cong E$, hence $E$ is the reflexive envelope of $i_*i^*E$, and thus there is no vector bundle on $X$ extending $E|_U$.

[Edit by Anton: I just spent some time digesting some pieces of the above answer, so figured I'd include the results for future readers similar to me.]

("$E$ is not a vector bundle") The sequence
$$O_X\xrightarrow{\begin{pmatrix}z\\ y \\ x\end{pmatrix}}O_X^3\xrightarrow{\begin{pmatrix}y & -z & 0\\ -x & 0 & z\\ 0 & x & -y\end{pmatrix}}O_X^3\xrightarrow{\begin{pmatrix}x& y& z\end{pmatrix}}O_X$$
is exact, so $E$ is the cokernel of the first map. Since taking fibers commutes with taking cokernels, we compute that $E$ has 2-dimensional fibers away from the origin, and a 3-dimensional fiber at the origin.

("$E^{**}\cong E$") Note that $E$ is $S_2$ (i.e. sections defined away from codimension 2 extend uniquely) since it is the kernel of a map from an $S_2$ sheaf to a torsion-free sheaf (the section of $O_X^3$ extends uniquely, and its image is zero away from codimension 2, so must be zero, so the extended section is in $E$). Note also that the dual of any sheaf is $S_2$ (if $\phi\colon F\to O_X$ is defined on an open set $V$ with codimension 2 complement and $s$ is a section, $\phi(s)$ must be the unique extension of $\phi(s|_V)$ as a section of $O_X$), so $E^{**}$ is $S_2$. The canonical map $E\to E^{**}$ is then a map of $S_2$ sheaves which is an isomorphism away from codimension 2, so it must be an isomorphism.

("and thus there is no vector bundle on $X$ extending $E|_U$") If $F$ is an $S_2$ extension of $E|_U$ (i.e. $i^*F=i^*E$), then there is a map $F\to i_*i^*E\to (i_*i^*E)^{**}=E$ which is an isomorphism over $U$, so is an isomorphism by the argument in the previous paragraph. A vector bundle extension would be a different $S_2$ extension.
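To spell out the fiber computation (standard, included for convenience): since $E \cong \operatorname{coker}(O_X \to O_X^3)$ for the first map above, and since tensoring with a residue field $k(p)$ is right exact, the fiber of $E$ at a point $p$ is
$$E \otimes k(p) \cong \operatorname{coker}\Big(k(p) \xrightarrow{\,(z(p),\,y(p),\,x(p))^{\mathsf T}\,} k(p)^3\Big),$$
which is $2$-dimensional whenever $(x,y,z)(p) \neq (0,0,0)$ (the map is then injective) and $3$-dimensional at the origin (the map is then zero). A vector bundle would have fibers of constant rank.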
{ "source": [ "https://mathoverflow.net/questions/35788", "https://mathoverflow.net", "https://mathoverflow.net/users/83/" ] }
35,793
It is clear that each maximal ideal in the ring of continuous functions on $[0,1]\subset \mathbb R$ corresponds to a point, and vice versa. So, for each ideal $I$ define $Z(I) =\{x\in [0,1]\,|\,f(x)=0,\ \forall f \in I\}$. But the map $I\mapsto Z(I)$ from ideals to closed sets is not an injection! (Consider the ideal $J(x_0)=\{f\,|\,f(x)=0,\ \forall x\in\text{some closed interval containing } x_0\}$.) How can we describe the ideals in $C([0,1])$? Is it true that prime ideals are maximal for this ring?
Here is a way to construct a non-maximal prime ideal: consider the multiplicative set $S$ of all non-zero polynomials in $C([0,1])$. Use Zorn's lemma to get an ideal $P$ that is disjoint from $S$ and is maximal with this property. $P$ is clearly prime (for this you only need $S$ to be multiplicative). On the other hand, $P$ cannot be any one of the maximal ideals, since it does not contain $x-c$ for any $c \in [0,1]$.
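To spell out why $P$ is prime (the standard argument for ideals maximal among those avoiding a multiplicative set): suppose $fg \in P$ but $f, g \notin P$. By maximality, $P + (f)$ and $P + (g)$ both meet $S$, say $s_1 = p_1 + af$ and $s_2 = p_2 + bg$ with $p_1, p_2 \in P$. Then
$$s_1 s_2 = p_1 p_2 + p_1 b g + p_2 a f + ab\,fg \in P,$$
while $s_1 s_2 \in S$ since $S$ is multiplicative, contradicting $P \cap S = \emptyset$.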
{ "source": [ "https://mathoverflow.net/questions/35793", "https://mathoverflow.net", "https://mathoverflow.net/users/4298/" ] }
35,795
Let $G$ be a simple Lie group over ${\mathbb C}$, $P \subset G$ a parabolic subgroup, and $L \subset P$ its Levi subgroup. Let $\lambda$ be a $G$-dominant weight and $V_G^\lambda$ an irreducible representation of $G$ with highest weight $\lambda$. I am interested in restrictions on the highest weights of irreducible components of the restriction $$ (V_G^\lambda)|_L. $$ In the simplest case, when $P = B$ is the Borel subgroup and $L = T$ is the maximal torus, there is a well-known restriction: the weights of $V_G^\lambda$ all lie in $$ \operatorname{Conv}(\lbrace w\lambda \rbrace_{w \in W}), $$ where $W$ is the Weyl group. I would be happy to know something of the same sort for an arbitrary parabolic subgroup.
{ "source": [ "https://mathoverflow.net/questions/35795", "https://mathoverflow.net", "https://mathoverflow.net/users/4428/" ] }
35,868
Let $T$ be a torus $V/\Gamma$, $\gamma$ a loop on $T$ based at the origin. Then it is easy to see that $$2 \gamma = \gamma \ast \gamma \in \pi_1(T).$$ Here $2 \gamma$ is obtained by rescaling $\gamma$ using the group law, while $\ast$ denotes the operation in the fundamental group. The way I can check this is rather direct: one lifts the loop (up to based homotopy) to a segment in $V$ and uses the identification of $\pi_1(T)$ with the lattice $\Gamma$. Is there a more conceptual way to prove this identity that will extend to more general (real or complex) Lie groups, or maybe to linear algebraic groups? Or is this fact false in more generality?
Yay! It's the Eckmann-Hilton argument! There are two group structures on $\pi_1(G)$ and they commute with each other. It turns out that this is sufficient to show that they are the same structure and that that structure is commutative. For a proof of this, using interpretative dance, take a look at the movie in this seminar that I gave last semester. There's also something on YouTube by The Catsters (see the nLab page linked above).

(Forgot to actually answer your question!) This only depends on the fact that $\pi_1$ is a representable group functor and that $G$ is a group object in $hTop$. So it will extend to other group objects in $hTop$, such as those that you mention. This also explains why $\pi_k$ is abelian for $k \ge 2$, since $\pi_k(X) = \pi_1(\Omega^{k-1} X)$ and $\Omega^{k-1} X$ is a group object in $hTop$.
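In symbols, the Eckmann-Hilton computation runs as follows (the standard argument, recorded here for convenience). Suppose $\cdot$ and $*$ are two unital operations on a set satisfying the interchange law
$$(a \cdot b) * (c \cdot d) = (a * c) \cdot (b * d).$$
A short check shows the two units coincide; call the common unit $1$. Then
$$a * b = (a \cdot 1) * (1 \cdot b) = (a * 1) \cdot (1 * b) = a \cdot b,$$
$$a * b = (1 \cdot a) * (b \cdot 1) = (1 * b) \cdot (a * 1) = b \cdot a.$$
Applied to $\pi_1(G)$, with $\cdot$ the pointwise product of loops coming from the group law and $*$ concatenation, this gives exactly $2\gamma = \gamma \ast \gamma$ together with commutativity.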
{ "source": [ "https://mathoverflow.net/questions/35868", "https://mathoverflow.net", "https://mathoverflow.net/users/828/" ] }
35,880
As an undergraduate we are trained as mathematicians to be universalists. We are expected to embrace a wide spectrum of mathematics. Both algebra and analysis are presented on equal footing, with geometry/topology coming in later, but given its fair share (save the inherent bias of professors). Number theory and more applied fields like numerical analysis are often given less emphasis, but it is highly recommended that we at least dabble in these areas.

As a graduate student, we begin by satisfying the breadth requirement, and thus increasing these universalist tendencies. We are expected to have a strong background in all of undergraduate mathematics, and to be comfortable working in most areas at an elementary level. For economic reasons, if our inclinations are for the more pure side, we are recommended to familiarize ourselves with the applied fields, in case we fall short of landing an academic position.

However, after passing preliminary exams, this perspective changes. Very suddenly we are expected to focus on research, and abandon this predisposition to learn first and then do. Professors espouse the idea that a working graduate student should stop studying theories, stop working through textbooks, and get to work on research. I am finding it difficult to eschew my habit of long self-study to gain familiarity with a subject before working. Even during my REU and as an undergrad, I was provided with more time and expectation to study the background.

I am a third year graduate student who has picked an area of study and has a general thesis problem. My advisor is a well known mathematician, and I am very interested in this research area. However, my background in some of the related material is weak. My normal mode of behavior would be to pick up a few textbooks and fix my weak background, and furthermore to take many more graduate courses on these subjects. However, both of my major professors have made it clear that this is the wrong approach. Their suggestion is to learn the relevant material as I go, and that learning everything I will need up front would be impossible. They suggest beginning to work, and when I need something, picking up a book and checking that particular detail.

So in short my question is: How can I get over this desire to take lots of time and learn this material from the bottom-up approach, and instead attack from above, learning the essentials necessary to move more quickly to making original contributions? Additionally, for those of you advising students, do you recommend to them the same as my advisor is recommending me? A relevant MO post to cite is How much reading do you do before attacking a problem. I found relevant advice there also.

As a secondary question, in relation to the question of universalists: I find it difficult to restrain myself to working on one problem at a time. My interests are broad, and I have difficulty telling people no. So when asked if I am interested in taking part in other projects, I almost always say yes. While enjoyable (and on at least one occasion quite fruitful), this is also not conducive to finishing a Ph.D. (even keeping in mind the advice of Noah Snyder to do one early side project). With E. T. Bell's claim that Poincaré was the last universalist, with an attempt at modesty I wonder: How to get over this bred desire to work on everything of interest, and instead focus on one area? I ask this question knowing full well that some mathematicians referred to as modern universalists visit this site. (I withhold names for fear of leaving some out.) Also, I apologize for the anonymity. Thank you for your time!

EDIT: CW since I cannot imagine there is one "right answer". At best there is one right answer for me, but even that is not clear.
I think that, for the majority of students, your advisor's advice is correct. You need to focus on a particular problem, otherwise you won't solve it, and you can't expect to learn everything from textbooks in advance, since trying to do so will lead to you being bogged down in books forever. I think that Paul Siegel's suggestion is sensible. If you enjoy reading about different parts of math, then build some time into your schedule for doing this. Especially if you feel that your work on your thesis problem is going nowhere, it can be good to take a break, and putting your problem aside to do some general reading is one way of doing that. But one thing to bear in mind is that (despite the way it may appear) most problems are not solved by having mastery of a big machine that is then applied to the problem at hand. Rather, they typically reduce to concrete questions in linear algebra, calculus, or combinatorics. One part of the difficulty in solving a problem is finding this kind of reduction (this is where machines can sometimes be useful), so that the problem turns into something you can really solve. This usually takes time: not time reading texts, but time bashing your head against the question. One reason I mention this is that you probably have more knowledge of the math you will need to solve your question than you think; the difficulty is figuring out how to apply that knowledge, which is something that comes with practice. (Ben Webster's advice along these lines is very good.) One other thing: reading papers in the same field as your problem, as a clue to techniques for solving your problem, is often a good thing to do, and may be a compromise between working solely on your problem and reading for general knowledge.
{ "source": [ "https://mathoverflow.net/questions/35880", "https://mathoverflow.net", "https://mathoverflow.net/users/8556/" ] }
35,900
Let $X$ be a differentiable manifold. Its cotangent bundle $T^*X$ carries a canonical 1-form $\alpha$ whose exterior differential $\omega = d\alpha$ endows $T^*X$ with the structure of a symplectic manifold. But what about the converse question? Which symplectic manifolds are cotangent bundles? Clearly a necessary condition is that $\omega$ must be exact, so cohomological obstructions are relevant. Is that all? Compact symplectic manifolds have non-trivial de Rham cohomology in degree two, so the cohomological test passes muster for that important class of examples. I'm also interested in examples of manifolds where different symplectic forms (modulo exact 2-forms) give qualitatively different dynamics with the same Hamiltonian. As symplectic structures on the same manifold are all locally equivalent by Darboux's theorem, one expects such phenomena would occur only on a global scale.
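(For concreteness, the standard local description: if $(q_1,\dots,q_n)$ are local coordinates on $X$ and $(p_1,\dots,p_n)$ the dual fiber coordinates on $T^*X$, then
$$\alpha = \sum_{i=1}^n p_i\, dq_i, \qquad \omega = d\alpha = \sum_{i=1}^n dp_i \wedge dq_i.)$$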
Using the h-principle, Gromov showed that there is a symplectic form on $\mathbb{R}^6$ which admits $S^3$ as a Lagrangian submanifold. Using holomorphic curves, he showed that the standard symplectic form on $T^* \mathbb{R}^3$ does not admit any such Lagrangian. There is now a whole industry of building exotic symplectic forms on non-compact manifolds (see papers of Seidel-Smith, Mark McLean, ...). Probably the only reasonable answer to characterising cotangent bundles uses the existence of a Lagrangian foliation by planes. If you have a foliation parametrised by a manifold which admits a Lagrangian section, then you have yourself an open subset of a cotangent bundle (this is just Weinstein's theorem). You can't drop the condition of the existence of a section, precisely because you can add the pullback of a $2$-form on the base. If your symplectic form is "complete" then the existence of a Lagrangian section is a cohomological condition. Pick any section: if the pullback of $\omega$ doesn't vanish, then you don't have a cotangent bundle. If it vanishes in cohomology, you can use a primitive $1$-form to flow your section to a Lagrangian one.

Added Remark: I want to point out that the methods we have for producing different symplectic forms do not proceed by writing down different $2$-forms on the same space. Rather, you find some construction of symplectic manifolds (using some general notion of symplectic surgery) which produces a large class of symplectic manifolds, then you prove that some of these result in the same smooth manifold. The existence of a diffeomorphism is obtained abstractly, so I do not know of examples where we can write down a Hamiltonian whose dynamics for two different symplectic forms can be compared.
{ "source": [ "https://mathoverflow.net/questions/35900", "https://mathoverflow.net", "https://mathoverflow.net/users/2036/" ] }
35,988
I have been told that the study of matrix determinants once comprised the bulk of linear algebra. Today, few textbooks spend more than a few pages to define it and use it to compute a matrix inverse. I am curious about why determinants were such a big deal. So here's my question:

a) What are examples of cool tricks you can use matrix determinants for? (Cramer's rule comes to mind, but I can't come up with much more.) What kind of amazing properties do matrix determinants have that made them such popular objects of study?

b) Why did the use of matrix determinants fall out of favor? Some back history would be very welcome.

Update: From the responses below, it seems appropriate to turn this question into a community wiki. I think it would be useful to generalize the original series of questions with:

c) What significance do matrix determinants have for other branches of mathematics? (For example, the geometric significance of the determinant as the signed volume of a parallelepiped.) What developments in mathematics have been inspired by/aided by the theory of matrix determinants?

d) For computational and theoretical applications that matrix determinants are no longer widely used for today, what has supplanted them?
I don't think that determinants are an old-fashioned topic. But the attitude towards them has changed over the decades. Fifty years ago, one insisted on their practical calculation, by bare hands of course. This way of teaching linear algebra has essentially disappeared. But the theoretical importance of determinants is still very high, and they are useful in almost every branch of Mathematics, and even in other sciences. Let me give a few instances where determinants are unavoidable.

1. Change of variable in an integral. Isn't the Jacobian of a transformation a determinant?
2. The Wronskian of solutions of a linear ODE is a determinant. It plays a central role in spectral theory (Hill's equation with periodic coefficients), and therefore in stability analysis of travelling waves in PDEs.
3. A well-known proof of the simplicity of the Perron eigenvalue of an irreducible non-negative matrix is a very nice use of the multilinearity of the determinant.
4. The $n$th root of the determinant is a concave function over the $n\times n$ Hermitian positive definite matrices. This is at the basis of many developments in modern analysis, via the Brunn-Minkowski inequality.
5. In combinatorics, determinants and Pfaffians occur in formulas counting configurations of lines between sets of points in networks. D. Knuth advocates that there are no determinants, but only Pfaffians.
6. Of course, the eigenvalues of a matrix are the roots of a determinant, the characteristic polynomial.
7. In control theory, the Routh-Hurwitz algorithm, which checks whether a system is stable or not, is based on the calculation of determinants.
8. As mentioned by J.M., Slater determinants are used in quantum chemistry.
9. Frobenius theory provides an algorithm for classifying matrices $M\in M_n(k)$ up to conjugation. It consists in calculating all the minors of the matrix $XI_n-M\in M_n(k[X])$ (these are determinants, aren't they?), then the g.c.d. of the minors of size $k=1,\ldots,n$ for each $k$. This is the theory of similarity invariants, which are polynomials $p_1,\ldots,p_n$, with $p_j \mid p_{j+1}$ and $p_1\cdots p_n=P_M$, the characteristic polynomial. If one goes further by decomposing the $p_j$'s (but this is beyond any algorithm), one obtains the theory of elementary divisors.
10. If $L$ is an algebraic extension of a field $K$, the norm of $a\in L$ is nothing but the determinant of the $K$-linear map $x\mapsto ax$. It is an element of $K$.
11. Kronecker's principle characterizes the power series that are rational functions, in terms of determinants of Hankel matrices. This has several important applications. One is the proof by Dwork of Weil's conjecture that zeta functions of algebraic curves are rational functions. Another one is Salem's theorem: if $\theta>1$ and $\lambda>0$ are real numbers such that the distances of $\lambda\theta^n$ to ${\mathbb N}$ are square summable, then $\theta$ is an algebraic number of class $S$.
12. Above all, invertible matrices are characterized by their determinant: it is an invertible scalar. This is true when the scalars belong to a commutative ring with unit. Besides, the determinant is the unique morphism ${\bf GL}_n(A)\to A^*$; it therefore plays the same role in the linear group as that played by the signature in the symmetric group $\mathfrak{S}_n$.
13. Powers of the determinant of $2\times2$ matrices appear in the definition of automorphic forms over the Poincaré half-plane. See also the answers to JBL's question, Wonderful applications of the Vandermonde determinant.
14. In algebraic geometry, most projective curves can be seen as the zero set of some determinantal equation $\det(xA+yB+zC)=0$. The theory was developed by Helton & Vinnikov. For instance, a hyperbolic polynomial in three variables can be written as $\det(xI_n+yH+zK)$ with $H,K$ Hermitian matrices; this was conjectured by P. Lax.
15. The discriminant of a quadratic form is the determinant of its matrix, say in a given basis. There are two important situations. A) If the scalars form a field $k$, the discriminant is really a scalar modulo the squares of $k^\times$. It is an element of the classification of quadratic forms up to isomorphism. B) Gauss defined a composition rule for two binary forms (say $ax^2+bxy+cy^2$) with integer coefficients when they have the same discriminant. The classes of equivalent forms of given discriminant make an abelian group. In 2014, a Fields medal was awarded to Manjul Bhargava for major advances in this area.
16. In a real vector space, the orientation of a basis is the sign of its determinant.
17. One of the most important PDEs, the Monge-Ampère equation, writes $\det D^2u=f$. It is central in optimal transport theory.
18. Recently, I proved the following amazing result. Let $T:{\mathbb R}^d\rightarrow{\bf Sym}_d^+$ be periodic, according to some lattice. Assume that $T$ is row-wise divergence-free, that is $\sum_j\partial_jt_{ij}=0$ for every $i=1,\ldots,d$. Then $$\langle(\det T)^{\frac1{d-1}}\rangle\le\left(\det\langle T\rangle\right)^{\frac1{d-1}},$$ where $\langle\cdot\rangle$ denotes the average of a periodic function. With the exponent $\frac1d$ instead, this would be a consequence of Jensen's inequality and point 4 above. The equality case occurs iff $T$ is the cofactor matrix of the Hessian of some convex function.
19. The Gauss curvature of a hypersurface is the Jacobian determinant of the Gauss map (the map which to a point $x$ associates the unit normal to the hypersurface at $x$).

Of course, this list is not exhaustive (otherwise, it should be infinite). I do teach Matrix Theory, at graduate level, and spend a while on determinants, even if I rarely compute an exact value.

Edit. The following letter by D. Perrin to J.-L. Dorier (1997) supports the importance of determinants in algebra and in the teaching of algebra.
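Regarding the Vandermonde determinant mentioned in point 13, recall the formula (standard, included for convenience):
$$\det\begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1}\\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1}\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix} = \prod_{1\le i<j\le n} (x_j - x_i),$$
nonzero exactly when the $x_i$ are pairwise distinct, which is the source of its many applications to interpolation and symmetric functions.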
{ "source": [ "https://mathoverflow.net/questions/35988", "https://mathoverflow.net", "https://mathoverflow.net/users/1674/" ] }
36,276
Perhaps this question will not be considered appropriate for MO - so be it. But hear me out before you dismiss it as completely elementary. As the question suggests, I would like to know when $\sin(p\pi/q)$ can be expressed in radicals (in the way that $\sin(\pi/4) = \sqrt{2}/2$ and $\sin(\pi/3) = \sqrt{3}/2$ can). Let $\alpha = \sin(x)$, and consider the field extension $\mathbb{Q}[\alpha]$. Using $(\cos(x) + i\sin(x))^k = \cos(kx) + i \sin(kx)$ together with the binomial formula and the Pythagorean identity relating sine and cosine, we can see that $\sin(kx)$ lies in a solvable extension of $\mathbb{Q}[\alpha]$. Thus $\sin(p\pi/q)$ is expressible in radicals if $\sin(\pi/q)$ is. To handle $\sin(\pi/q)$, we start by using the same trick (which most people also learn in an elementary trig class). Write $-1 = (\cos(\pi/q) + i\sin(\pi/q))^q$, use the binomial theorem to expand, compare imaginary parts, and express the right-hand side in terms of sine using the Pythagorean identity. This gives an explicit equation for any $q$ one of whose solutions is $\sin(\pi/q)$. This equation is not a polynomial in $\sin(\pi/q)$ since it involves terms of the form $\sqrt{1 - \sin^2(\pi/q)}$, but it is enough to prove that $\sin(\pi/q)$ is algebraic. So I am curious about the number theoretic properties of this equation. What can be said about the Galois group of its "splitting field" over $\mathbb{Q}$? Can we at least determine when it is solvable? Note that if the prime factors of $q$ are $p_1, \ldots, p_k$ and we can express each $\sin(\pi/p_j)$ in radicals, then the same is true for $\sin(\pi/q)$. So it suffices to consider the case where $q$ is prime. That's about all the progress I have made.
As $\cos x=\pm\sqrt{1-\sin^2 x}$, $e^{ix}=\cos x +i\sin x$, and $\sin x=(e^{ix}-e^{-ix})/(2i)$, $\sin x$ lies in a radical extension of $\mathbb{Q}$ iff $e^{ix}$ does. For rational $r$ with denominator $d$, $e^{2\pi i r}$ is a primitive $d$-th root of unity. The extension of $\mathbb{Q}$ generated by a root of unity is a cyclotomic field. Every cyclotomic field is an abelian extension of $\mathbb{Q}$. (By the Kronecker-Weber theorem, any abelian extension is contained in a cyclotomic field.) The Galois group of the $n$-th cyclotomic field is isomorphic to the multiplicative group $(\mathbb{Z}/n\mathbb{Z})^*$. So we can obtain any $\sin r\pi$ for $r$ rational in terms of radicals, both in a trivial way if you allow an $n$-th root of unity as $1^{1/n}$, and also in a stricter sense, if you insist that you ascend through a chain of fields by adjoining at each stage a root of $x^m-a$ where this polynomial is irreducible over the previous field. However, if you insist on this more exacting definition, you will need radicals of non-real numbers unless $n$ is a power of $2$ times a product of distinct Fermat primes. All this is well-known.
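A concrete illustration: for $q=5$ (a Fermat prime) one gets honest real radicals, e.g.
$$\cos(2\pi/5) = \frac{\sqrt5 - 1}{4}, \qquad \sin(\pi/5) = \frac{\sqrt{10 - 2\sqrt5}}{4}.$$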
{ "source": [ "https://mathoverflow.net/questions/36276", "https://mathoverflow.net", "https://mathoverflow.net/users/4362/" ] }
36,326
I'm finishing up a PhD in math and am thinking about options outside of academia. So far, I've really only focused on pure mathematics, but I have a year left of grad school. Suppose I am interested in looking for a job in software engineering, finance, or some other quantitative field. What should I be doing in the next year? What classes should I take?
This is the perspective of someone who went from a math PhD to the media industry and then to software/tech (and enjoyed it immensely). I can't speak for finance. Don't spend time on classes; instead, figure out the skills you want to acquire, and learn them yourself. One important reason to sidestep classes is that from now on you will have to learn things all the time without the benefit of formal instruction! So part of what you're going to be teaching yourself is how to learn quickly outside the classroom. Also, while it's always helpful to read theoretical material, you'll probably want to spend the majority of your time learning by doing. A good approach is to actually do the job you want. Software engineering is a very welcoming world and there are a lot of avenues. Follow and then join an open-source project. Put up your own web site that does something interesting. Help out a nonprofit. Here's a more offbeat thought: almost every newspaper and magazine in the US is currently (a) in a financial crisis; and (b) desperate for technology help. It's quite possible that, even with few qualifications, you can get an unpaid internship at a name-brand place that will net you good recommendations and excellent experience. Note that this will be helpful even if you ultimately want to work in an industry that's not desperate and in a financial crisis :-) Finally, keep in mind that a big benefit of the "do the job you want" approach is that you'll discover whether you actually like the job before you commit full-time. If you don't like it, choose something else. And keep this attitude long after you've graduated: one of the wonderful things about the non-academic world is the total freedom to change your career when you want.
{ "source": [ "https://mathoverflow.net/questions/36326", "https://mathoverflow.net", "https://mathoverflow.net/users/8694/" ] }
36,379
Just a basic point-set topology question: clearly we can detect differences in topologies using convergent sequences, but is there an example of two distinct topologies on the same set which have the same convergent sequences?
In a metric (or metrizable) space, the topology is entirely determined by convergence of sequences. This does not hold in an arbitrary topological space, and Mariano has given the canonical counterexample. This is the beginning of more penetrating theories of convergence given by nets and/or filters. For information on this, see e.g. http://alpha.math.uga.edu/~pete/convergence.pdf In particular, Section 2 is devoted to the topic of sequences in topological spaces and gives some information on when sequences are "topologically sufficient". In particular, a topology is determined by specifying which nets converge to which points. This came up as a previous MO question. It is not covered in the notes above, but is well treated in Kelley's General Topology.
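One standard example, spelled out since the counterexample is only alluded to above: on an uncountable set $X$, compare the discrete topology with the cocountable topology (whose closed sets are the countable sets together with $X$). In both topologies a sequence converges if and only if it is eventually constant, so they have exactly the same convergent sequences, yet the topologies are distinct.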
{ "source": [ "https://mathoverflow.net/questions/36379", "https://mathoverflow.net", "https://mathoverflow.net/users/7005/" ] }
36,466
Let $X$ be a complex manifold (algebraic variety), $N$ an integer, and consider the sheaf $F$ defined by: $F(U) = \{$holomorphic maps $f: U\rightarrow GL(N,\mathbb{C})\}$, with multiplicative structure. Can we define $H^i(X,F)$? Note that for $N=1$, this would be just $H^i(X,O_X^*)$. (Please give references for your claims.)
The quick reply is: not really for $i \gt 2$, and not in the way you perhaps expect for $i=2$; see below.

EDIT (Feb 2017): Debremaeker's PhD thesis [0] has now been translated into English and placed on the arXiv: Cohomology with values in a sheaf of crossed groups over a site, arXiv:1702.02128

The comment on Charles' answer about 'teaching you never to ask that question again' is partly true, partly not. The lesson to learn from Giraud is that really one does not use groups for coefficients of higher cohomology. For a start, Giraud's $H^2(X,G)$ is not functorial with respect to group homomorphisms $G\to H$! One also does not get the exact sequences that one expects (this is due to the lack of functoriality). But this is not a problem with his definition of the cohomology set, but a problem with what category you believe the coefficients lie in. This is because the coefficient object of Giraud's cohomology is actually the crossed module $AUT(G) = (G \to Aut(G))$, and the assignment $G \mapsto AUT(G)$ is not functorial. (Aside: Giraud contains lots of other important things on stacks and gerbes and sites and so on, so the book is not a waste of time by any means.)

But little-known work by Debremaeker [1-3] from the 1970s fixed this up and showed that Giraud's cohomology was really functorial with respect to morphisms of crossed modules. This has been recently extended by Aldrovandi and Noohi [4] by showing that it is functorial with respect to weak maps of crossed modules, a.k.a. butterflies/papillons.

It was realised by John E. Roberts (no relation) and Ross Street that the most general nonabelian cohomology has as coefficient objects higher categories. In fact, we now know that the coefficient object of degree-$n$ cohomology is an $n$-category (usually an $n$-groupoid, though), even when we are talking about usual abelian cohomology.

Everything I've talked about is just for groups etc. in Set, but it can all be done internal to a topos, i.e. for sheaves of groups, and more generally in a Barr-exact category (and probably weaker, but Barr-exact means that the monadic description of cohomology therein due to Duskin (probably going back to Beck) works fine).

[0] R. Debremaeker, Cohomologie met waarden in een gekruiste groepenschoof op een situs, PhD thesis, 1976 (Katholieke Universiteit te Leuven). English translation: Cohomology with values in a sheaf of crossed groups over a site, arXiv:1702.02128

[1] R. Debremaeker, Cohomologie à valeurs dans un faisceau de groupes croisés sur un site. I, Acad. Roy. Belg. Bull. Cl. Sci. (5), 63, (1977), 758--764.

[2] R. Debremaeker, Cohomologie à valeurs dans un faisceau de groupes croisés sur un site. II, Acad. Roy. Belg. Bull. Cl. Sci. (5), 63, (1977), 765--772.

[3] R. Debremaeker, Non abelian cohomology, Bull. Soc. Math. Belg., 29, (1977), 57--72.

[4] E. Aldrovandi and B. Noohi, Butterflies I: Morphisms of 2-group stacks, Advances in Mathematics, 221, (2009), 687--773.
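To connect this back to the sheaf $F$ in the question, the low-degree picture is classical (standard facts, recorded for orientation): $H^0(X,F)$ is just the group of global holomorphic maps $X \to GL(N,\mathbb{C})$, and Čech $H^1(X,F)$ is defined for any sheaf of (possibly nonabelian) groups; it is a pointed set rather than a group for $N > 1$, and it classifies rank-$N$ holomorphic vector bundles on $X$ up to isomorphism. It is only from degree $2$ onwards that one is forced into gerbes and crossed modules as above.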
{ "source": [ "https://mathoverflow.net/questions/36466", "https://mathoverflow.net", "https://mathoverflow.net/users/5259/" ] }
36,471
A professor of mine (a geometric topologist, I believe) once criticized the core graduate curriculum at my institution because it teaches all sorts of esoteric algebra, but does not include basic information about Galois theory and algebraic geometry, which, according to him, are important even for non-algebraists. What are some facts from algebraic geometry that are useful to non-algebraic geometers? Ideally, the statements at least should be accessible without knowing much algebraic geometry.

Edit: Please do not post results that are only relevant to people who already know massive amounts of algebraic geometry anyway. In particular: be very cautious about posting statements whose only applications are in number theory.

Example: Here is a basic statement that I have seen applied outside algebraic geometry, if not necessarily outside of algebra: Let $U \subset \mathbb{C}^n$ be a subset. If there is some nonzero polynomial vanishing at every point of $\mathbb{C}^n \smallsetminus U$, then $U$ is dense in $\mathbb{C}^n$ (with the usual topology), and in fact contains a dense open subset of $\mathbb{C}^n$. [Sketch of proof: Given any point $p \in \mathbb{C}^n$, find a complex line $L$ through $p$ not contained in the zero set of the polynomial. Then $L \cap (\mathbb{C}^n \smallsetminus U)$ is contained in a proper algebraic subset of $L$, hence contains only finitely many points of $L$, and so $p$ is a limit point of $U$.]
I would vote for Chevalley's theorem as the most basic fact in algebraic geometry: The image of a constructible map is constructible. More down to earth, its most basic case (which, I think, already captures the essential content) is the following: the image of a polynomial map $\mathbb{C}^n \to \mathbb{C}^m$, $(z_1, \dots, z_n) \mapsto (f_1(\underline{z}), \dots, f_m(\underline{z}))$, can always be described by a set of polynomial equations $g_1= \dots = g_k = 0$, combined with a set of polynomial ''unequations'' (*) $h_1 \neq 0, \dots, h_l \neq 0$. David's post is a special case (if $m > n$, then the image can't be dense, hence $k > 0$). Tarski-Seidenberg is basically a version of Chevalley's theorem in ''semialgebraic real geometry''. More generally, I would argue it is the reason why engineers buy Cox, Little, O'Shea ("Using Algebraic Geometry"): in the right coordinates, you can parametrize the possible configurations of a robotic arm by polynomials. Then Chevalley says the possible configurations can also be described by equations. (*) Really it seems that "inequalities" would be the right word here... might be a little late to change terminology though...
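A standard toy example, with the caveat that in general the image is a finite union of such equation/unequation systems (which is exactly what "constructible" allows): the image of
$$\mathbb{C}^2 \to \mathbb{C}^2, \qquad (x,y) \mapsto (x, xy),$$
is $\{(a,b) : a \neq 0\} \cup \{(0,0)\}$, which is constructible but neither open, nor closed, nor even locally closed in $\mathbb{C}^2$.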
{ "source": [ "https://mathoverflow.net/questions/36471", "https://mathoverflow.net", "https://mathoverflow.net/users/5094/" ] }
36,486
Where can I find a clear exposé of the so-called "standard reduction to the local artinian case (with algebraically closed residue field)", a phrase I read everywhere but that is never completely unfolded? EDIT: Here was a badly posed question.
Dear Workitout: The list of comments above is getting unwieldy, so let me post an answer here, now that you have finally identified 1.10.1 in Katz-Mazur as (at least one) source of the question. As I predicted, you'll see that the basic technique to be used adapts to many other settings, and that it is very hard to formulate a "meta-theorem" to cover all cases. The only sure-fire method I know is to read all of sections 8, 9, 11, 12 in EGA IV$_3$ and sections 17 and 18 in EGA IV$_4$, and then it becomes really routine. Maybe there's a better method (well, I can think of one, but I won't say it here). I will use the terminology and notation around 1.10.1 from Katz-Mazur without explanation below.

Being a "full set of sections" of $Z/S$ is something which is sufficient to check using the constituents of a single open affine cover of $S$, and also a finite generating set of the coordinate ring of $Z$ over each such open. Thus, by working Zariski-locally on $S$ we may assume $S = {\rm{Spec}}(R)$ and that both sides of (1) in KM 1.10.1 have $R$-free coordinate rings (when viewed as finite $R$-schemes), and likewise for their sum (as effective Cartier divisors). Now the assertions (1) and (2) in KM 1.10.1 are identities among finitely many elements of some finite free $R$-modules. For instance, (1) asserts that certain elements in a finite free $R$-module have vanishing image in a finite free $R$-module quotient. This is all now a bunch of identities among finitely many elements of $R$.

OK, finally we come to the part with a real idea. Consider the subring $R_0$ of $R$ generated over $\mathbf{Z}$ by those finitely many elements. It is noetherian. Now unfortunately your initial algebro-geometric setup over $R$ (the smooth separated finite-type curve $C$, the various effective relative Cartier divisors, etc.) probably does not arise via base change from the exact same setup (including flatness properties!) over $R_0$. But that doesn't matter: what would really be swell is if some noetherian subring of $R$ which contains $R_0$ permits such a descent of the situation. Now express $R$ as the direct limit of its finitely generated $R_0$-subalgebras (all of which are noetherian). Does the entire situation descend to one of those? If it did, we'd be in great shape, since it would then suffice to solve the problem in the case of a noetherian base ring (as that would imply the result over $R$ by suitable base change of the descent from the noetherian subring back up to $R$).

How to implement this strategy of reduction to the noetherian case (after which we'll need to deal with the passage to an artin local base with algebraically closed residue field)? OK, so that's where we are all very fortunate that Grothendieck actually wrote out the entire formalism in total detail to handle basically every such situation one could ever want to handle. So it becomes kind of a game in finding the references in EGA (which I admit is hard to do if one doesn't know where to look, but is really easy if one has read the right parts). For your particular situation with some smooth separated curves and some relative effective Cartier divisors, etc., the results you need are: EGA IV$_3$ 8.3.4, 8.9.1, 8.10.5 (e.g., (v)), 9.2.6.1, 11.2.6(ii), and IV$_4$ 17.7.9. I'm not going to say more about how you combine those references here; that is where I again remind you of your pseudonym.

OK, now $R$ is noetherian (even finite type over $\mathbf{Z}$, which is very useful extra stuff to have for other arguments with excellence later in life), and you're trying to prove some finite collection of identities among elements of $R$. To do that it suffices to check in the local rings of $R$, so you can assume $R$ is local. Now a pair of elements of a local noetherian ring are equal if and only if they have equal images in each artinian quotient (Krull intersection theorem). So it suffices to prove the general result over arbitrary artin local rings (really just the artinian quotients of finitely generated $\mathbf{Z}$-algebras, so there's no set-theoretic quantification nonsense going on). Now $R$ is artin local. To check an identity in a ring it suffices to do so in a faithfully flat extension ring. So finally we haul out EGA 0$_{\rm{III}}$ 10.3.1 to find a faithfully flat artin local extension with an algebraically closed residue field. That's it!

I presume you can now see why anyone who knows how to fill in such arguments never actually writes them out in papers: it is much simpler to say "by standard limit methods from EGA IV$_3$, sections 8, 9, etc." (maybe even to be a bit more specific, as K-M are at the beginning of their 1.8.1), and to trust that the reader will pick up the clue that they should read those parts of EGA if they want to understand what is going on in such arguments for themselves. It is bad when people don't at least mention the relevance of sections 8, 9, etc., but things could be worse (e.g., Grothendieck could have not written EGA, leaving stuff in a complete mess reference-wise).
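(One step in symbols, since it does the real work at the end: for a noetherian local ring $(R,\mathfrak{m})$, the Krull intersection theorem gives $\bigcap_{n \ge 1} \mathfrak{m}^n = 0$, so $a = b$ in $R$ as soon as $a \equiv b \bmod \mathfrak{m}^n$ for all $n$, i.e. as soon as $a$ and $b$ agree in every artinian quotient $R/\mathfrak{m}^n$.)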
{ "source": [ "https://mathoverflow.net/questions/36486", "https://mathoverflow.net", "https://mathoverflow.net/users/8736/" ] }
36,596
I've refereed at least a dozen papers in my (short) career so far and I still find the process completely baffling. I'm wondering what is actually expected and what people tend to do... Some things I've wondered about:

1) Should you summarize the main results and/or the argument? If so, how much is a good amount? What purpose does this serve?

2) What do you do when the journal requests you to evaluate the quality of the result (or its appropriateness for the journal)? Should one always make a recommendation regarding publication?

3) What to do about papers with major grammatical or syntactical errors, but that are otherwise correct? Does it matter if the author is clearly not a native English speaker?

4) On this note, at what point does one correct such mistakes? Ever? If there are fewer than a dozen? Should one be proof-reading the paper?

5) What do you do when you do not understand an argument? Does it matter if it "feels" correct? How long should one spend trying to understand an argument?

6) What to do about papers that have no errors but whose exposition is hard to follow?
I've been spending a fair amount of time editing a journal this year, and it's pretty amazing what different people think of as a "referee report". The first thing you should keep in mind is that the editors will be incredibly appreciative if you look at the paper in detail and send in comments in a timely manner, whatever the comments are. In my mind a good referee report begins with a very short (a few sentences) summary of the result and the argument. It includes an opinion on whether the result and proof are (i) correct (ii) readable (iii) interesting to lots or only a few people; also (iv) a recommendation on whether it is good enough to appear in the given journal, or alternatively/also comparisons to the quality of one or two other recent papers in the field, or just a statement that the referee isn't sure if it is good enough or not, and (v) a non-empty list of specific corrections/suggestions. Re (3-4) it's perfectly fine to send in a one-line request that the paper be revised so that it is written in correct English. It's not your job to correct grammatical mistakes if there are more than a few. Re (5-6) if an argument is hard to follow, you can just ask for a revision in which the authors explain more. As an editor, it's quite easy (with computers and e-mail being what they are) to request a revision, even after the referee has only read part of the paper.
{ "source": [ "https://mathoverflow.net/questions/36596", "https://mathoverflow.net", "https://mathoverflow.net/users/26801/" ] }
36,693
At the end of the paper Division by three by Peter G. Doyle and John H. Conway, the authors say:

Not that we believe there really are any such things as infinite sets, or that the Zermelo-Fraenkel axioms for set theory are necessarily even consistent. Indeed, we're somewhat doubtful whether large natural numbers (like $80^{5000}$, or even $2^{200}$) exist in any very real sense, and we're secretly hoping that Nelson will succeed in his program for proving that the usual axioms of arithmetic—and hence also of set theory—are inconsistent. (See Nelson [E. Nelson. Predicative Arithmetic. Princeton University Press, Princeton, 1986.].) All the more reason, then, for us to stick with methods which, because of their concrete, combinatorial nature, are likely to survive the possible collapse of set theory as we know it today.

Here are my questions: What is the status of Nelson's program? Are there any obstructions to finding a relatively easy proof of the inconsistency of ZF? Is there anybody seriously working on this?
Nelson claimed to have succeeded just now: http://www.math.princeton.edu/~nelson/papers/outline.pdf I hope consensus about this forms soon, so I can know what to do with the rest of my life. If only I had been born a few years later, I wouldn't be put into the position of worrying that my chosen career path is doomed and I must go build houses or something. Update: As per Michael's comment, the claim has been withdrawn.
{ "source": [ "https://mathoverflow.net/questions/36693", "https://mathoverflow.net", "https://mathoverflow.net/users/8176/" ] }
36,897
Suppose there is a function $f(x)$ which is the "probability" that the integer $x$ is prime. The integer $x$ is prime with probability $f(x)$, and then divides the larger integers with probability $1/x$; so as $x$ increases to $x+1$, $f(x)$ changes to (roughly) $$f(x)\left(1-f(x)/{x} \right).$$ How do I show that? I can go on to show $$\frac{df}{dx} + \frac{f^2}{x}=0$$ and thus that $f(x) = \frac{1}{\log{x} + c}$ is a solution, but I can't show the step describing how $f(x)$ changes. Please advise.
First of all, I assume you understand that this is meant to be a nonrigorous argument, so there will be a limit to how rigorous I can make my answer. The intuition here is that $n$ is prime if and only if it is not divisible by any prime $<n$. So we "should" have $$f(n) \approx \prod_{p < n} \left( 1-1/p \right).$$ Similarly $$f(n+1) \approx \prod_{p<n+1} \left( 1-1/p \right) = \prod_{p < n} \left( 1-1/p \right) \cdot \begin{cases} 1-1/n & \text{if } n \text{ is prime} \\ 1 & \text{if } n \text{ is not prime} \end{cases}$$ $$\approx f(n) \cdot \begin{cases} 1-1/n & \text{if } n \text{ is prime} \\ 1 & \text{if } n \text{ is not prime.} \end{cases}$$ Since $n$ is prime with "probability $f(n)$", we interpolate between the two cases next to the brace by writing: $$f(n+1) \approx f(n) \left( 1-f(n)/n \right).$$ One might argue that it would be better to interpolate with a factor of $(1-1/n)^{f(n)}$, but this will make no difference in the asymptotics, as $(1-1/n)^{f(n)} = 1-f(n)/n+O(1/n^2)$. This argument is famously fishy, because it gives the right answer, but the intermediate point is wrong! The actual asymptotics of $\prod_{p<n} \left( 1-1/p \right)$ do not look like $1/\log n$, but like $e^{-\gamma} /\log n$ (Mertens' theorem). I've never seen a good intuitive explanation for why we get the wrong estimate for $\prod_{p<n} \left( 1-1/p \right)$, but the right estimate for the density of the primes.
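As a quick numerical sanity check of that last point, here is a short script (my illustration, not part of the original answer; names are ad hoc):

```python
import math

# Sieve of Eratosthenes up to N.
N = 10**6
sieve = bytearray([1]) * N
sieve[0] = sieve[1] = 0
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = bytearray(len(range(i*i, N, i)))

# Mertens-type product over primes p < N.
prod = 1.0
for p in range(2, N):
    if sieve[p]:
        prod *= 1.0 - 1.0 / p

# If the product behaved like 1/log N, the first line would print ~1.0;
# instead it approaches exp(-gamma) ~ 0.5615 (Mertens' theorem).
print(prod * math.log(N))
print(math.exp(-0.57721566490153286))  # gamma = Euler-Mascheroni constant
```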
{ "source": [ "https://mathoverflow.net/questions/36897", "https://mathoverflow.net", "https://mathoverflow.net/users/8826/" ] }
37,021
I'm asking this question because I've been told by some people that Fourier analysis is "just representation theory of $S^1$." I've been introduced to the idea that Fourier analysis is related to representation theory. Specifically, when considering the representations of a finite abelian group $A$, these representations are all $1$-dimensional, hence correspond to characters $A \to \mathbb{R}/\mathbb{Z} \cong S^1 \subseteq \mathbb{C}$. On the other side, finite Fourier analysis is, in a simplistic sense, the study of characters of finite abelian groups. Classical Fourier analysis is, then, the study of continuous characters of locally compact abelian groups like $\mathbb{R}$ (classical Fourier transform) or $S^1$ (Fourier series). However, in the case of Fourier analysis, we have something beyond characters/representations: we have the Fourier series/transform. In the finite case, this is a sum which looks like $\frac{1}{n} \sum_{0 \le r < n} \omega^r \rho(r)$ for some character $\rho$, and in the infinite case, we have the standard Fourier series and integrals (or, more generally, the abstract Fourier transform). So it seems like there is something more you're studying in Fourier analysis, beyond the representation theory of abelian groups. To phrase this as a question (or two):

(1) What is the general Fourier transform which applies to abelian and non-abelian groups?

(2) What is the category of group representations we consider (and attempt to classify) in Fourier analysis?

That is, it seems like Fourier analysis is more than just the special case of representation theory for abelian groups. It seems like Fourier analysis is trying to do more than classify the category of representations of a locally compact abelian group $G$ on vector spaces over some fixed field. Am I right? Or can everything we do in Fourier analysis (including the Fourier transform) be seen as one piece in the general goal of classifying representations?

Let me illustrate this in another way. The basic result of Fourier series is that every function in $L^2(S^1)$ has a Fourier series, or in other words that $L^2$ decomposes as a (Hilbert space) direct sum of one-dimensional subspaces corresponding to $e^{2 \pi i n x}$ for $n \in \mathbb{Z}$. If we encode this as a purely representation-theoretic fact, this says that $L^2(S^1)$ decomposes into a direct sum of the representations corresponding to the unitary characters of $S^1$ (which correspond to $\mathbb{Z}$). But this fact is not why Fourier analysis is interesting (at least in the sense of $L^2$-convergence; I'm not even worrying about pointwise convergence). Fourier analysis furthermore states an explicit formula for the function in $L^2$ giving this representation. Though I guess knowing the character corresponding to the representation would tell you what the function is. So is Fourier analysis merely similar to representation theory, or is it none other than the abelian case of representation theory?

(Aside: This leads into a more general question of mine about the use of representation theory as a generalization of modular forms. My question is the following: I understand that a classical Hecke eigenform (of some level $N$) can be viewed as an element of $L^2(GL_2(\mathbb{Q})\backslash GL_2(\mathbb{A}_{\mathbb{Q}}))$ which corresponds to a subrepresentation. But what I don't get is why the representation tells you everything you would have wanted to know about the classical modular form. A representation is nothing more than a vector space with an action of a group! So how does this encode the information about the modular form?)
I would like to elaborate slightly on my comment. First of all, Fourier analysis has a very broad meaning. Fourier introduced it as a means to study the heat equation, and it certainly remains a major tool in the study of PDE. I'm not sure that people who use it in this way think of it in a particularly representation-theoretic manner. Also, when one thinks of the Fourier transform as interchanging position space and frequency space, or (as in quantum mechanics) position space and momentum space, I don't think that a representation-theoretic viewpoint need play much of a role. So, when one thinks about Fourier analysis from the point of view of group representation theory, this is just one part of Fourier analysis, perhaps the most foundational part, and it is probably most important when one wants to understand how to extend the basic statements regarding Fourier transforms or Fourier series from functions on $\mathbb R$ or $S^1$ to functions on other (locally compact, say) groups.

As I noted in my comment, the basic question is: how to decompose the regular representation of $G$ on the Hilbert space $L^2(G)$. When $G$ is locally compact abelian, this has a very satisfactory answer in terms of the Pontrjagin dual group $\widehat{G}$, as described in Dick Palais's answer: one has a Fourier transform relating $L^2(G)$ and $L^2(\widehat{G})$. A useful point to note is that $G$ is discrete/compact if and only if $\widehat{G}$ is compact/discrete. So $L^2(G)$ is always described as the Hilbert space direct integral of the characters of $G$ (which are the points of $\widehat{G}$) with respect to the Haar measure on $\widehat{G}$, but when $G$ is compact, so that $\widehat{G}$ is discrete, this just becomes a Hilbert space direct sum, which is more straightforward (thus Fourier series, which are sums, are easier than Fourier transforms, which are integrals).

I will now elide Dick Palais's distinction between the Fourier case and the more general context of harmonic analysis, and move on to the non-abelian case. As Dick Palais also notes, when $G$ is compact, the Peter--Weyl theorem nicely generalizes the theory of Fourier series; one again describes $L^2(G)$ as a Hilbert space direct sum, not of characters, but of finite-dimensional representations, each appearing with multiplicity equal to its degree (i.e. its dimension). Note that the set over which one sums now is still discrete, but is not a group. And there is less homogeneity in the description: different irreducibles have different dimensions, and so contribute in different amounts (i.e. with different multiplicities) to the direct sum.

When $G$ is locally compact but neither compact nor abelian, the theory becomes more complex. One would like to describe $L^2(G)$ as a Hilbert space direct integral of matrix coefficients of irreducible unitary representations, and for this, one has to find the correct measure (the so-called Plancherel measure) on the set $\widehat{G}$ of irreducible unitary representations. Since $\widehat{G}$ is now just a set, a priori there is no natural measure to choose (unlike in the abelian case, when $\widehat{G}$ is a locally compact group, and so has its Haar measure), and in general, as far as I understand, one doesn't have such a direct integral decomposition of $L^2(G)$ in a reasonable sense. But in certain situations (when $G$ is of "Type I") there is such a decomposition, for a uniquely determined measure, the so-called Plancherel measure, on $\widehat{G}$. But this measure is not explicitly given.
Basic examples of Type I locally compact groups are semi-simple real Lie groups, and also semi-simple $p$-adic Lie groups. The major part of Harish-Chandra's work was devoted to explicitly describing the Plancherel measure for semi-simple real Lie groups. The most difficult part of the question is the existence of atoms (i.e. point masses) for the measure; these are irreducible unitary representations of $G$ that embed as subrepresentations of $L^2(G)$, and are known as "discrete series" representations. Harish-Chandra's description of the discrete series for all semi-simple real Lie groups is one of the major triumphs of 20th century representation theory (indeed, 20th century mathematics!). For $p$-adic groups, Harish-Chandra reduced the problem to the determination of the discrete series, but the question of explicitly describing the discrete series in that case remains open. One important thing that Harish-Chandra proved was that not all points of $\widehat{G}$ (when $G$ is a real or $p$-adic semisimple Lie group) are in the support of Plancherel measure; only those which satisfy the technical condition of being "tempered". (So this is another difference from the abelian case, where Haar measure is supported uniformly over all of $\widehat{G}$.) Thus in explicitly describing Plancherel measure, and hence giving an explicit form of Fourier analysis for any real semi-simple Lie group, he didn't have to classify all unitary representations of $G$. Indeed, the classification of all such reps. (i.e. the explicit description of $\widehat{G}$) remains an open problem for real semi-simple Lie groups (and even more so for $p$-adic semi-simple Lie groups, where even the discrete series are not yet classified). This should give you some sense of the relationship between Fourier analysis in its representation-theoretic interpretation (i.e. the explicit description of $L^2(G)$ in terms of irreducibles) and the general classification of irreducible unitary representations of $G$. They are related questions, but are certainly not the same, and one can fully understand one without understanding the other.
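For reference, the abelian Plancherel theorem mentioned above, in formulas (standard material, recorded for the reader; normalizations vary): for $f \in L^1(G) \cap L^2(G)$ set
$$\hat{f}(\chi) = \int_G f(g)\,\overline{\chi(g)}\,dg, \qquad \chi \in \widehat{G}.$$
For a suitably normalized Haar measure on $\widehat{G}$, the map $f \mapsto \hat{f}$ extends to an isometry $L^2(G) \cong L^2(\widehat{G})$, with inversion formula
$$f(g) = \int_{\widehat{G}} \hat{f}(\chi)\,\chi(g)\,d\chi$$
for suitable $f$. Taking $G = S^1$ (so $\widehat{G} = \mathbb{Z}$) recovers Fourier series; taking $G = \mathbb{R}$ (self-dual) recovers the classical Fourier transform.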
{ "source": [ "https://mathoverflow.net/questions/37021", "https://mathoverflow.net", "https://mathoverflow.net/users/1355/" ] }
37,070
I recently saw the proof of the finite axiom of choice from the ZF axioms. The basic idea of the proof is as follows (I'll cover the case where we're choosing from three sets, but the general idea is obvious): Suppose we have $A,B,C$ non-empty, and we would like to show that the Cartesian product $A \times B \times C$ is non-empty. Then $\exists a \in A$, $\exists b \in B$, $\exists c \in C$, all because each set is non-empty. Then $(a, b, c)$ is a desired element of $A \times B \times C$, and we are done. In the case where we have infinitely (in this case, countably) many sets, say $A_1 \times A_2 \times A_3 \times \cdots$, we can try the same proof. But in order to use only the ZF axioms, the proof requires the infinitely many steps $\exists a_1 \in A_1$, $\exists a_2 \in A_2$, $\exists a_3 \in A_3$, $\cdots$

My question is, why can't we do this? Or a better phrasing, since I know that mathematicians normally work in logical systems in which only finite proofs are allowed, is: Is there some sort of way of doing logic in which infinitely-long proofs like these are allowed?

One valid objection to such a system would be that it would allow us to prove Fermat's Last Theorem as follows: Consider each quadruple $(a,b,c,n)$ as a step in the proof, and then we use countably many steps to show that the theorem is true. I might argue that this really is a valid proof - it just isn't possible in our universe where we can only do finitely-many calculations. So we could suggest a system of logic in which a proof like this is valid. On the other hand, I think the "proof" of Fermat's Last Theorem which uses infinitely many steps is very different from the "proof" of AC from ZF which uses infinitely many steps. In the proof of AC, we know how each step works, and we know that it will succeed, even without considering that step individually. In other words, we know what we mean by the concatenation of steps $(\exists a_i \in A_i)_{i \in \mathbb{N}}$. On the other hand, we can't, before doing all the infinitely many steps of the proof of FLT, know that each step is going to work out. What I'm suggesting in this paragraph is a system of logic in which the proof of AC above is an acceptable proof, whereas the proof of FLT outlined above is not acceptable.

So I'm wondering whether such a system of logic has been considered or whether any experts here have an idea for how it might work, if it has not been considered. And, of course, there might be a better term to use than "system of logic," and others can give suggestions for that.
Even if logic were extended to allow infinitely long proofs, your attempted proof of the countable axiom of choice would still have a gap or two. After the infinitely many steps asserting that there exists an $a_i$ in $A_i$ (one step for each $i$), you still need to justify the claim that there's a function assigning, to each $i$, the corresponding $a_i$. The immediate problem is that your infinitely many steps haven't exactly specified which (of the many possible) $a_i$'s are the corresponding ones; the $a_i$'s in your formulas are just bound variables. Worse, even if the meaning of "the corresponding $a_i$" were perfectly clear, so that there's no doubt about which ordered pairs $(i,a_i)$ you want to have in your choice function, you'd still need to prove that there is a set consisting of just those ordered pairs. No ZF axiom does that job. I think you'd need an infinitely long axiom saying "for all $x_1,x_2,\dots$, there exists a set whose members are exactly $x_1,x_2,\dots$." If you're willing to accept not only proofs consisting of infinitely many statements but also single statements of infinite length, and if you're willing to add some such infinite statements as new axioms, then I think you can "prove" the countable axiom of choice (and fancier choice principles if you allow even longer new axioms). But, as long as you need to add some axioms to ZF for this purpose, it seems simpler to just add the countable axiom of choice. It's a finite statement, so you can reason with it using the usual rules of logic. One could view the axiom of choice as a sort of finitary (and therefore usable) surrogate for the infinitely long axioms and proofs that would come up in your approach. In fact, some of Zermelo's later work (he introduced the axiom of choice in 1904, and the work I'm thinking of dates from the late 20's or early 30's) takes an "infinitary logic" approach to the foundations of set theory (and is, in my opinion, not entirely clear).
{ "source": [ "https://mathoverflow.net/questions/37070", "https://mathoverflow.net", "https://mathoverflow.net/users/1355/" ] }
37,115
It seems to me like every book on representation theory leaps into groups right away, even though the underlying ideas, such as representations, convolution algebras, etc. don't really make explicit use of inverses. This leads to the fairly natural question of how much of representation theory still works for monoids (or even semigroups)? I would imagine that irreducible representations probably exist in some form, since all you really need is to have some way to reduce monoid-modules over the complex numbers. Of course it could also be that monoid representation theory is in some form fatally flawed and reduces down to something boring for reasons that are not obvious to me at this point.
Certainly irreducible representations exist; one can still construct the monoid algebra of a monoid and consider modules over the algebra. But Maschke's theorem is false in general for finite monoids. Indeed, consider the monoid $M = \langle x | x^3 = x^2 \rangle$. Complex (for the sake of argument) representations of $M$ are the same as representations of the monoid algebra $\mathbb{C}[x]/(x^3 - x^2)$ and this algebra is not semisimple, so its finite-dimensional representations are not completely reducible. (Note that the usual proof of Maschke's theorem fails miserably; you can't average inner products over a monoid without inverses, and any unitary representation of a monoid has to factor through the Grothendieck group). So part of the answer may just be that the representation theory of monoids is inherently more complicated. Although this doesn't seem to be stressed much in textbooks, having inverses is a pretty important structural property of groups; it endows group algebras with an antipode and endows the category of representations of a group with duals.
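For a concrete check that $\mathbb{C}[x]/(x^3 - x^2)$ is not semisimple, note that $x - x^2$ is a nonzero element whose square is $x^2 - 2x^3 + x^4 = 0$ in the quotient, so the Jacobson radical is nontrivial. A minimal sketch verifying this by computer (sympy is used purely as a convenience for the polynomial arithmetic; the variable names are made up for illustration):

```python
import sympy as sp

x = sp.symbols('x')
relation = x**3 - x**2            # the monoid relation x^3 = x^2

n = x - x**2                      # candidate radical element
print(sp.rem(n, relation, x))     # -> -x**2 + x : nonzero in the quotient
print(sp.rem(sp.expand(n**2), relation, x))  # -> 0 : n is nilpotent
```

A commutative algebra with a nonzero nilpotent cannot be semisimple, so some finite-dimensional representation of $M$ fails to be completely reducible.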
{ "source": [ "https://mathoverflow.net/questions/37115", "https://mathoverflow.net", "https://mathoverflow.net/users/4642/" ] }
37,151
Most branches of mathematics have big, sexy famous open problems. Number theory has the Riemann hypothesis and the Langlands program, among many others. Geometry had the Poincaré conjecture for a long time, and currently has the classification of 4-manifolds. PDE theory has the Navier-Stokes equation to deal with. So what are the big problems in probability theory and stochastic analysis? I'm a grad student working in the field, but I can't name any major unsolved conjectures or open problems which are driving research. I've heard that stochastic Löwner evolutions are a big field of study these days, but I don't know what the conjectures or problems relating to them are. Does anyone have any suggestions?
To my mind one of the biggest open problems in probability, in the sense of being a famous basic statement that we don't know how to solve, is to show that there is "no percolation at the critical point" (mentioned in particular in section 4.1 of Gordon Slade's contribution to the Princeton Companion to Mathematics). A capsule summary: write $\mathbb{Z}_{d,p}$ for the random subgraph of the nearest-neighbour $d$-dimensional integer lattice, obtained by independently keeping each edge with probability $p$. Then it is known that there exists a critical probability $p_c(d)$ (the percolation threshold) such that for $p < p_c$, with probability one $\mathbb{Z}_{d,p}$ contains no infinite component, and for $p > p_c$, with probability one there exists a unique infinite component. The conjecture is that with probability one, $\mathbb{Z}_{d,p_c(d)}$ contains no infinite component. The conjecture is known to be true when $d = 2$ or $d \geq 19$. Incidentally, one of the most effective ways we have of understanding percolation -- a technique known as the lace expansion, largely developed by Takeshi Hara and Gordon Slade -- is also one of the key tools for studying self-avoiding walks and a host of other random lattice models. That article of Slade's is in fact full of intriguing conjectures in the area of critical phenomena, but the conjecture I just mentioned is probably the most famous of the lot.
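For readers who want to experiment numerically, here is a hedged simulation sketch (the grid size, the values of $p$, the seed, and the helper name `largest_cluster_fraction` are arbitrary choices, not part of the discussion above). It samples bond percolation on a finite $n \times n$ box with union-find and reports the largest open cluster as a fraction of all sites; recall that $p_c = 1/2$ for bond percolation on $\mathbb{Z}^2$ by Kesten's theorem.

```python
import random

def largest_cluster_fraction(n=200, p=0.5, seed=0):
    """Bond percolation on an n x n box: keep each nearest-neighbour edge
    independently with probability p; return the largest connected
    component's size as a fraction of the n*n sites."""
    rng = random.Random(seed)
    parent = list(range(n * n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            if i + 1 < n and rng.random() < p:   # edge to the right
                union(i * n + j, (i + 1) * n + j)
            if j + 1 < n and rng.random() < p:   # edge downward
                union(i * n + j, i * n + j + 1)

    sizes = {}
    for s in range(n * n):
        r = find(s)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / (n * n)

# Below p_c the fraction tends to 0, above p_c to a positive constant;
# at p_c itself the conjecture says there is no infinite cluster, and the
# finite-box fraction decays slowly as n grows.
for p in (0.4, 0.5, 0.6):
    print(p, largest_cluster_fraction(p=p))
```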
{ "source": [ "https://mathoverflow.net/questions/37151", "https://mathoverflow.net", "https://mathoverflow.net/users/8886/" ] }
37,172
What are the big open problems in algebraic geometry and vector bundles? More specifically, I would like to know what interesting problems there are related to moduli spaces of vector bundles over projective varieties/curves.
A few of the more obvious ones:

* Resolution of singularities in characteristic p
* Hodge conjecture
* Standard conjectures on algebraic cycles (though these are not so urgent since Deligne proved the Weil conjectures).
* Proving finite generation of the canonical ring for general type used to be open, though I think it was recently solved; I'm not sure about the details.

For vector bundles, a longstanding open problem is the classification of vector bundles over projective spaces.

(Added later) A very old major problem is that of finding which moduli spaces of curves are unirational. It is classical that the moduli space is unirational for genus at most 10, and I think this has more recently been pushed to genus about 13. Mumford and Harris showed that it is of general type for genus at least 24. As far as I know most of the remaining cases are still open.
{ "source": [ "https://mathoverflow.net/questions/37172", "https://mathoverflow.net", "https://mathoverflow.net/users/8234/" ] }
37,214
Much of modern algebraic number theory can be phrased in the framework of group cohomology. (Okay, this is a bit of a stretch -- much of the part of algebraic number theory that I'm interested in...). As examples, Cornell and Rosen develop basically all of genus theory from a cohomological point of view, a significant chunk of class field theory is encoded as a very elegant statement about a cup product in the Tate cohomology of the formation module, and Neukirch-Schmidt-Wingberg's fantastic tome "Cohomology of Number Fields" convincingly shows that cohomology is the principal beacon we have to shine light on prescribed-ramification Galois groups. Of course, we also know that group cohomology can be studied via topological methods via the (topological) group's classifying space. My question is: Why doesn't this actually happen? More elaborately: I'm fairly well-acquainted with the "Galois cohomology for number theory" literature, and not once have I come across an argument that passes to the classifying space to use a slick topological trick for a cohomological argument or computation (though I'd love to be enlightened). On the other hand, there are, for example, things like Tyler's answer to my question Coboundary Representations for Trivial Cup Products, which strikes me as saying that there may be plenty of opportunities to carry over interesting constructions and/or lines of reasoning from the topological side to the number-theoretic one. Maybe the classifying spaces for gigantic profinite groups are too hideous to think about? (Though there's plenty of interesting Galois cohomology going on for finite Galois groups...). Or maybe I'm just ignorant of the history, and indeed the topological viewpoint guided the development of group cohomology and was so fantastically successful at setting up a good theory (definition of differentials, cup/Massey products, spectral sequences, etc.) that the setup and proofs could be recast entirely without reference to the original topological arguments? (Edit: This apparently is indeed the case. In a comment, Richard Borcherds gives a link, and JS Milne suggests MacLane 1978 (Origins of the cohomology of groups. Enseign. Math. (2) 24 (1978), no. 1-2, 1--29. MR0497280), both of which look like good reads.)
Classifying spaces are widely used in algebraic number theory, but in slightly disguised form. A classifying space is really just an approximation to the classifying topos of a group. However the classifying topos is just the category of G-sets, which is exactly what one uses in defining group cohomology and so on. Or to put it another way, all the useful information in the classifying space is already contained in the category of G-sets. The comment at the end of the question is correct: group cohomology was discovered as the cohomology of the classifying space, and the topological constructions were then turned into algebraic constructions and removed from the theory. So in some sense all the group cohomology calculations are implicitly using the classifying space.
{ "source": [ "https://mathoverflow.net/questions/37214", "https://mathoverflow.net", "https://mathoverflow.net/users/35575/" ] }
37,223
Let $G$ be the (non-principal) ultraproduct of all finite cyclic groups of orders $n!$, $n=1,2,3,\dots$. Is there a homomorphism from $G$ onto the infinite cyclic group?
I think the answer is no. The ultraproduct $U$ is naturally a quotient of ${\mathbb Z}^{\infty}$, the direct product of countably many copies of ${\mathbb Z}$. In the obvious quotient map, the image of the direct sum is zero. Now, it is enough to show that: Claim: Any homomorphism $\phi: {\mathbb Z}^{\infty} \to {\mathbb Z}$ that vanishes on the direct sum is identically zero. Proof (I learned this from a book by T. Y. Lam): For a prime number $p$, let $A_p$ be the set of elements in ${\mathbb Z}^{\infty}$ of the form $(a_0, pa_1, p^2a_2,\dots)$, i.e. the elements whose $i$-th coordinate is divisible by $p^i$. Any element $x \in A_p$ can be decomposed as $x = y+z = (a_0, pa_1, \dots, p^{n-1}a_{n-1}, 0,0, \dots ) + p^n (0,0,\dots,0, a_n, pa_{n+1},\dots)$. Now, $y$ is in the direct sum, hence $\phi(y)=0$. Also $\phi(z) \in p^n {\mathbb Z}$, which implies that $\phi(x) \in \bigcap_{n=1}^{\infty} p^n {\mathbb Z} = \{ 0 \}$. Now, choose two distinct primes $p$ and $q$. Since $\gcd(p^n,q^n)=1$, it is easy to see that $A_p+A_q= {\mathbb Z}^{\infty}$. This implies that $\phi \equiv 0$.
{ "source": [ "https://mathoverflow.net/questions/37223", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
37,272
The question is the title. Working in ZF, is it true that for every nonempty set $X$ there exists a total order on $X$? If it is false, do we have an example of a nonempty set that has no total order? Thanks
In the paper Dense orderings, partitions and weak forms of choice by Carlos G. Gonzalez (Fundamenta Mathematicae 147, 1995), the author states the following theorem, where AC is the Axiom of Choice, DO is the assertion that every infinite set has a dense linear order, O is the assertion that every set has a linear order, and DPO is the assertion that every infinite set has a (nontrivial) dense partial order. Theorem 1. AC implies DO implies O implies DPO. Moreover, none of the implications is reversible in ZF and DPO is independent of ZF. Thus, in particular, the assertion that every set has a total order is strictly weaker than AC. (Also, it would seem that Gonzalez means to assume Con(ZF) for the latter claims of his theorem.)
{ "source": [ "https://mathoverflow.net/questions/37272", "https://mathoverflow.net", "https://mathoverflow.net/users/8913/" ] }
37,278
There are several questions in the Euler-Goldbach correspondence that I am unable to answer. Sometimes it does not take very much: in his letter to Goldbach dated June 9th, 1750, Euler conjectured that every odd number can be written as a sum of four squares in such a way that $n = a^2 + b^2 + c^2 + d^2$ and $a+b+c+d = 1$. I was just about to post this to MO when I saw that Euler's conjecture can be reduced to the Three-Squares Theorem in one line (am I supposed to spoil this right away?). Here's another one where I haven't found a proof yet. In his letter to Goldbach dated Apr. 15, 1747, Euler wrote: The theorem "Any number can be split into four squares" depends on this: "Any number of the form $4m+2$ can always be split into two parts such as $4x+1$ and $4y+1$, none of which has any divisor of the form $4p-1$" (which does not appear difficult, although I cannot yet prove it). Later, Euler attributed to Goldbach the much stronger claim that the two summands can be chosen to be prime, which is a strong form of the Goldbach conjecture. Euler's intention was proving the Four-Squares Theorem (which he almost did). Assuming this result, write $4m+2 = a^2+b^2+c^2+d^2$; then congruences modulo $8$ show that two numbers on the right hand side, say $a$ and $b$, are even, and the other two are odd. Now $a^2 + c^2 = 4x+1$ and $b^2 + d^2 = 4y+1$ satisfy Euler's conditions except when $a$ and $c$ (or $b$ and $d$) have a common prime factor of the form $4n-1$. Can this be excluded somehow? Hermite [Oeuvres I, p. 259] considered a similar problem: Tout nombre impair est décomposable en quatre carrés et, parmi ces décompositions, il en existe toujours de telles que la somme de deux carrés soit sans diviseurs communs avec la somme de deux autres. (Every odd number can be decomposed into four squares, and among these decompositions, there always exist some for which the sum of two squares is coprime to the sum of the other two.) Hermite's proof contains a gap. Can Hermite's claim be proved somehow?
We address the problem of Euler, showing an asymptotic lower bound for the number of ways of writing $n\equiv 2 \pmod 4$ as a sum of four squares $a^2+b^2+c^2+d^2$ where neither $a^2+b^2$ nor $c^2+d^2$ is divisible by any prime $\equiv 3 \pmod 4$. For simplicity we shall only count the solutions where $a^2+b^2\equiv c^2+d^2 \equiv 1\pmod 4$. Let $r(n)$ denote the number of ways of writing $n$ as a sum of two squares so that $r(n) = 4 \sum_{d|n} \chi_{-4}(d)$ with $\chi_{-4}$ being the non-trivial character $\pmod 4$. Clearly the number of solutions we want is at least $$ \sum_{\substack{{a+b=n} \\ {a\equiv b\equiv 1 \pmod 4}}} r(a)r(b) - 2 \sum_{p\equiv 3 \pmod 4} \sum_{\substack{{a+b=n} \\ {a\equiv b\equiv 1\pmod 4} \\ {p^2 |a} }} r(a) r(b) = S-2T, $$ say. (The first sum counts all possible solutions with $a\equiv b\equiv 1 \pmod 4$ and the second sum excludes all solutions with either $a$ or $b$ being divisible by the square of some prime $\equiv 3 \pmod 4$, and by symmetry we may assume that $a$ is the one divisible by $p^2$.) Let us focus just on evaluating the sum $T$; the sum $S$ is simpler and may be handled similarly. Write the sum $T$ as $\sum_{p \equiv 3 \pmod 4} T(p)$, so that (since $r(ap^2)=r(a)$) $$ T(p) = \sum_{\substack{ {a p^2 +b =n} \\ {a\equiv b\equiv 1\pmod 4}}} r(a) r(b). $$ Since $r(k)\ll k^{\epsilon}$ for all $k$, clearly $T(p) \ll n^{1+\epsilon}/p^2$, so that large primes $p$ contribute a negligible amount. Now let us suppose that $p$ is small. If $a\equiv 1 \pmod 4$ we have $$ r(a) = 4\sum_{k\ell =a} \chi_{-4}(k) = 8\sum_{\substack{ {k|a} \\ {k\le \sqrt{a} } }}^{\prime} \chi_{-4}(k), $$ using that $\chi_{-4}(k)=\chi_{-4}(\ell)$ (because $k\ell\equiv 1\pmod 4$), and the prime over the sum indicates that the term $k=\sqrt{a}$ is counted with weight $1/2$. Therefore $$ T(p) =8 \sum_{k}^{\prime} \chi_{-4}(k) \sum_{\substack{{b=n-p^2 k \ell} \\ {b\equiv 1\pmod 4} \\ {\ell \ge k}} } r(b). $$ Now the sum over $b$ above is just a sum over values $b$ that are at most $n-p^2 k^2$ and satisfy the congruence conditions $b\equiv 1\pmod 4$ and $b\equiv n \pmod{p^2k}$. In other words, this is a problem of understanding the distribution of the sums of two squares function in arithmetic progressions. Sums of two squares in arithmetic progressions. Let us quickly recall what is known about this problem. Let $R(x;q,a)$ be the sum of $r(n)$ over all $n\le x$ with $n\equiv a \pmod q$. The key result we need is that there is a good asymptotic formula for $R(x;q,a)$ uniformly for $q$ up to $x^{2/3-\epsilon}$. This is due to Selberg and Hooley (unpublished), with details first appearing in a paper of R. A. Smith. For a recent nice version see Tolev's paper in Proc. Steklov Inst. (2012, vol 276, 261--274), whose version we shall use. Let $\eta_a(q)$ denote the number of solutions to the congruence $x^2+y^2 \equiv a \pmod q$ with $1\le x, y \le q$. Then $$ \sum_{\substack{ {n\le x} \\ {n\equiv a\pmod q}} } r(n) = \pi \frac{\eta_a(q)}{q^2} x + O((q^{\frac 12} + x^{\frac 13}) (a,q)^{\frac 12} x^{\epsilon}). $$ Note that this bound uses the Weil bound for Kloosterman sums; but for our application we need only a bound that permits $q$ a little larger than $\sqrt{x}$, and less precise elementary estimates for Kloosterman sums would suffice for this. On the function $\eta_a(q)$. Note also that $\eta_a(q)$ is a multiplicative function of $q$, and is always $\ll q d(q)$ (where $d(q)$ is the number of divisors of $q$).
In fact it is not hard to compute what $\eta_a(q)$ is, and as we shall need this below we summarize this calculation. If $p$ is an odd prime not dividing $a$ then we may show that $\eta_a(p^{k}) = p^{k-1} (p-\chi_{-4}(p))$. If $p$ is an odd prime that does divide $a$, then to compute $\eta_a(p^k)$ we classify the solutions to $x^2+y^2 \equiv a \pmod {p^k}$ as non-singular if $p$ doesn't divide one of $x$ or $y$ and singular otherwise. The non-singular solutions are seen to number $2p^{k-1}(p-1)$ if $p\equiv 1\pmod 4$ and $0$ if $p\equiv 3 \pmod 4$. As for the singular solutions, if $k=1$ then this number is just $1$; if $k\ge 2$ and $p^2 \nmid a$ there are no singular solutions; and if $k\ge 2$ and $p^2|a$ then the number of singular solutions is $p^2 \eta_{a/p^2}(p^{k-2})$. This describes fully how to compute $\eta_a(q)$. From this description and some calculation we find that $$ \sum_{k=1}^{\infty} \chi_{-4}(k) \frac{\eta_a(k)}{k^{1+s}} = \frac{L(s,\chi_{-4})}{\zeta_2(s+1)} {\tilde \sigma}_s(n) $$ where $\zeta_2(s) = \zeta(s) (1-2^{-s})$ (the Euler factor at $2$ removed), and $$ {\tilde \sigma}_s(n) = \sum_{\substack{{d|n}\\ {d \text{ odd}} } } d^{-s}. $$ Given a prime $p\equiv 3 \pmod 4$, for our work on $T(p)$ we shall also need to understand the following related Dirichlet series: $$ \sum_{k=1}^{\infty} \chi_{-4}(k) \frac{\eta_a(kp^2)}{k^{1+s}} $$ and by a similar calculation we see that if $p\nmid a$ this equals $$ p(p+1)\Big(1-\frac{1}{p^{s+1}}\Big)^{-1} \frac{L(s,\chi_{-4})}{\zeta_2(s+1)} {\tilde \sigma}_s(n); $$ and if $p|a$ but $p^2\nmid a$ it is zero; and if $p^2|a$ it equals $$ p^2 \frac{L(s,\chi_{-4})}{\zeta_2(s+1)} {\tilde \sigma}_s(n/p^2). $$ Back to $T(p)$. We now use the asymptotic formula above in our expression for $T(p)$. Since $\eta_1(4) = 8$ we obtain that $$ T(p) = 4\pi \sum_{k \le \sqrt{n}/p} \chi_{-4}(k) \Big( \frac{\eta_n(p^2k)}{p^4 k^2} (n-p^2k^2) + O\Big((p\sqrt{k} +n^{\frac 13}) (n,p^2k)^{\frac 12} n^{\epsilon}\Big)\Big). $$ There are three cases: either $p\nmid n$, $p|n$ but $p^2\nmid n$, or $p^2|n$. Consider the first case when $p\nmid n$. Here the error terms may be bounded easily as $$ O\Big( \frac{n^{\frac 56+\epsilon}}{p} + \frac{n^{\frac 34+\epsilon}}{\sqrt{p}}\Big). $$ Now consider the main term which is $$ 4\pi \frac{n}{p^4} \sum_{k\le \sqrt{n}/p} \chi_{-4}(k) \frac{\eta_n(p^2k)}{k^2} \Big(1 - \frac{p^2k^2}{n}\Big). $$ We can express this as a contour integral (for some suitably large $c$) $$ 4\pi \frac{n}{p^4} \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Big(\sum_{k=1}^{\infty} \chi_{-4}(k) \frac{\eta_n(p^2k)}{k^{2+s}}\Big) \Big( \frac{\sqrt{n}}{p}\Big)^s \frac{2}{s(s+2)}ds. $$ Using our knowledge of the Dirichlet series above, we get that the above is $$ 4\pi \frac{n}{p^4} \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} p (p+1) \Big(1-\frac{1}{p^{2+s}}\Big)^{-1} \frac{L(1+s,\chi_{-4})}{\zeta_2(s+2)} {\tilde \sigma}_{1+s}(n) \Big(\frac{\sqrt{n}}{p}\Big)^s \frac{2}{s(s+2)} ds. $$ Now moving the line of integration to Re$(s)=-1+\epsilon$ we obtain that the above is $$ 4\pi \frac{n}{p^4} \Big( p(p+1) \Big(1-\frac{1}{p^2}\Big)^{-1} \frac{L(1,\chi_{-4})}{\zeta_2(2)} {\tilde \sigma}_1(n) + O\Big( p^3 n^{-\frac 12+\epsilon}\Big) \Big). $$ Since $L(1,\chi_{-4}) =\pi/4$ and $\zeta_2(2) = \pi^2/8$ the above is $$ 8 \frac{n}{p^2} \Big(1-\frac{1}{p}\Big)^{-1} {\tilde \sigma}_1(n) + O\Big( \frac{n^{\frac 12+\epsilon}}{p}\Big). $$
Thus in this case we find that $$ T(p) = 8\frac{n}{p^2} \Big(1-\frac{1}{p}\Big)^{-1} {\tilde \sigma}_1(n) + O\Big( \frac{n^{\frac 56+\epsilon}}{p} + \frac{n^{\frac 34+\epsilon}}{\sqrt{p}} \Big). $$ Recall also that $T(p) \ll n^{1+\epsilon}/p^2$ which is useful if $p$ is very large. In the second case when $p|n$ but $p^2\nmid n$, trivially $T(p)=0$. Finally if $p^2|n$ then we carry out a similar argument to the one above. This gives $$ T(p) = 8\frac{n}{p^2} {\tilde \sigma}_1(n/p^2) + O \Big(n^{\frac 56+\epsilon} +\sqrt{p} n^{\frac 34+\epsilon} \Big). $$ Once again $T(p)\ll n^{1+\epsilon}/p^2$ which is useful for large $p$. Putting together all our work above, and splitting into the cases $p\le n^{1/6}$ (where we use our asymptotic formula) and $p>n^{1/6}$ (where we use the trivial bound $\ll n^{1+\epsilon}/p^2$), we obtain that $$ T = 8n {\tilde \sigma}_1(n) \sum_{\substack{{p\equiv 3 \pmod 4} \\ {p\nmid n}}} \frac{1}{p(p-1)} + 8n \sum_{\substack{{p\equiv 3 \pmod 4} \\ {p^2 |n}}} \frac{1}{p^2} {\tilde \sigma}_1(n/p^2) +O(n^{\frac 56+\epsilon}). $$ Conclusion. In the same way, we find that $$ S = 8n {\tilde \sigma}_1(n) + O(n^{\frac 56+\epsilon}). $$ It follows that the number of solutions to Euler's question is at least $$ S-2T \ge 8n {\tilde \sigma}_1(n) \Big( 1- \sum_{p\equiv 3 \pmod 4} \frac{2}{p(p-1)}\Big) + O(n^{\frac 56+\epsilon}) \ge 4n {\tilde \sigma}_1(n) + O(n^{\frac 56+\epsilon}). $$ Remarks. With more work one can establish an asymptotic in Euler's problem rather than just the lower bound. Similarly one can give an asymptotic in Hermite's problem. In principle one can make all the error terms explicit, and possibly obtain a complete result for all $n$, but this would entail a lot of work. Lastly, in connection with Hermite's problem, a classical problem of Hardy and Littlewood asks for the representations of numbers as $p+x^2+y^2$ with $p$ a prime. This was resolved by Hooley on GRH, and Linnik unconditionally. Once the Bombieri-Vinogradov theorem became available, the result became much simpler. This could be modified to give a stronger version of Hermite's problem for large $n$, by also asking for $p$ to be $1\pmod 4$ -- a cross between Hermite and Goldbach.
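As a purely numerical sanity check of Euler's original claim (independent of the asymptotics above), one can search, for each $n \equiv 2 \pmod 4$, for a split $n = a + b$ with $a \equiv b \equiv 1 \pmod 4$ and neither summand divisible by a prime $\equiv 3 \pmod 4$. A small sketch; sympy's factorization is used only as a convenience, the helper names are made up, and the search bound is arbitrary:

```python
from sympy import factorint

def admissible(k):
    # True iff k has no prime factor congruent to 3 mod 4
    # (vacuously true for k = 1)
    return all(p % 4 != 3 for p in factorint(k))

def euler_split(n):
    # Search for a + b = n with a = b = 1 (mod 4) and both admissible;
    # a = 1 (mod 4) already forces b = n - a = 1 (mod 4) when n = 2 (mod 4).
    assert n % 4 == 2
    for a in range(1, n, 4):
        if admissible(a) and admissible(n - a):
            return a, n - a
    return None

bad = [n for n in range(2, 2000, 4) if euler_split(n) is None]
print(bad)   # prints [] if Euler's claim holds throughout this range
```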
{ "source": [ "https://mathoverflow.net/questions/37278", "https://mathoverflow.net", "https://mathoverflow.net/users/3503/" ] }
37,356
Frucht showed that every finite group is the automorphism group of a finite graph. The paper is here . The argument basically is that a group is the automorphism group of its (colored) Cayley graph and that the colors of edge in the Cayley graph can be coded into an uncolored graph that has the same automorphism group. This argument seems to carry over to the countably infinite case. Does anybody know a reference for this? In the uncountable, is it true that every group is the automorphism group of a graph? (Reference?) It seems like one has to code ordinals into rigid graphs in order to code the uncountably many colors of the Cayley graph.
According to the Wikipedia page, every group is indeed the automorphism group of some graph. This was proven independently in de Groot, J. (1959), Groups represented by homeomorphism groups, Mathematische Annalen 138, and in Sabidussi, Gert (1960), Graphs with given infinite group, Monatshefte für Mathematik 64: 64–67.
{ "source": [ "https://mathoverflow.net/questions/37356", "https://mathoverflow.net", "https://mathoverflow.net/users/7743/" ] }
37,392
When do the notions of totally disconnected space and zero-dimensional space coincide? From what I gather, there are at least three common notions of topological dimension: covering dimension, small inductive dimension, and large inductive dimension. A secondary question, then, would be to what extent and under what assumptions the three different definitions of zero-dimensional coincide. For example, Wikipedia claims that a space has covering dimension zero if and only if it has large inductive dimension zero, and that a Hausdorff locally compact space is totally disconnected if and only if it is zero-dimensional, but I can't track down their source and would like to understand the proofs. I would appreciate any explanation, or a reference, since this is a pretty textbookish question.
Just to agree on notation: A space is zero-dimensional if it is $T_1$ and has a basis consisting of clopen sets, and totally disconnected if the quasi-components of all points (intersections of all clopen neighborhoods) are singletons. A space is hereditarily disconnected if no subspace is connected, i.e., if the components of all points are singletons. (Edit: There seems to be disagreement about the names of these properties. Often what I call hereditarily disconnected is called totally disconnected and what I call totally disconnected is then called totally separated.) Note that zero-dimensionality implies Hausdorffness. Zero-dimensional implies totally disconnected since every point can be separated from every other point by a clopen set. Totally disconnected implies hereditarily disconnected: given a set $A$ with at least two points, one point is not in the quasi-component of the other and hence the two points can be separated by a clopen set. Hence the set $A$ is not connected. This shows that the space is hereditarily disconnected. On the other hand, if $X$ is locally compact Hausdorff and hereditarily disconnected, take $x\in X$ and let $U$ be an open set containing $x$. By local compactness, find an open neighborhood $V\subseteq U$ of $x$ whose closure $\overline V$ is compact. In a compact Hausdorff space, components and quasi-components coincide and hence the quasi-component of $x$ in $\overline V$ is $\{x\}$ (you don't need this if you are not interested in hereditarily disconnected spaces but just totally disconnected ones). Using compactness again, there are finitely many clopen subsets $C_1,\dots,C_n$ of $\overline V$ such that $x\in C_1\cap\dots\cap C_n\subseteq V$. The intersection of the $C_i$ is closed in $X$ since this intersection is compact. It is open in $X$ since it is open in $V$ and $V$ is open in $X$. This shows that the clopen subsets of $X$ form a basis. Hence $X$ is zero-dimensional. Edit: As suggested by Joseph Van Name, I include a proof that in a compact Hausdorff space the components coincide with the quasi-components. Let $X$ be a compact Hausdorff space and $x\in X$. The component $C$ of $x$ is the union of all connected subsets of $X$ containing $x$. If $A\subseteq X$ is clopen and $x\in A$, then the component of $x$ is contained in $A$. It follows that the component of $x$ is contained in the quasi-component $Q$ of $x$. In order to show that the component $C$ and the quasi-component $Q$ coincide, it is now enough to show that $Q$ is connected. Observe that $Q$ is closed in $X$ and thus compact. Now suppose that $Q$ is not connected. Then there are nonempty, relatively open subsets $A$ and $B$ of $Q$ such that $A\cap B=\emptyset$ and $A\cup B=Q$. Note that $A$ and $B$ are relatively closed in $Q$ and hence compact. Hence $A$ and $B$ are closed in $X$. Two disjoint closed sets in a compact Hausdorff space can be separated by open subsets, i.e., there are disjoint open sets $U,V\subseteq X$ such that $A\subseteq U$ and $B\subseteq V$. We have $$Q=\bigcap\{F\subseteq X:F\mbox{ is clopen and }x\in F\}$$ and thus $$\bigcap\{F\subseteq X:F\mbox{ is clopen and }x\in F\}\cap(X\setminus(U\cup V))=\emptyset.$$ By compactness there are finitely many clopen sets $F_1,\dots,F_n$ containing $x$ such that $$F_1\cap\dots\cap F_n\cap(X\setminus(U\cup V))=\emptyset.$$ Let $F=F_1\cap\dots\cap F_n$. $F$ is clopen and we have $Q\subseteq F\subseteq U\cup V$. We have $$\overline{U\cap F}\subseteq\overline U\cap F=\overline U\cap(U\cup V)\cap F=U\cap F.$$ It follows that $U\cap F$ is clopen in $X$. We may assume $x\in A$.
Since $B$ is nonempty, there is some $y\in B$. But now $y\not\in U\cap F$. It follows that $y$ is not in the quasi-component of $x$, a contradiction.
{ "source": [ "https://mathoverflow.net/questions/37392", "https://mathoverflow.net", "https://mathoverflow.net/users/3544/" ] }
37,525
The other day I was trying to figure out how to explain why isomorphisms are important. I pulled Boyer's A History of Mathematics off the bookshelf and was surprised to find that isomorphism isn't even listed in its index. The Wikipedia article on isomorphisms only gives two concrete examples. There are many surprising, significant, classic isomorphisms. I'll refrain from giving examples. What are your favorites? As usual, please limit yourself to one isomorphism per answer. (Related: your favorite surprising connections in mathematics. But this question is looking for more concrete examples, particularly those that illustrate the power of the idea.)
Here is an example that Mel Hochster used to explain the notion of isomorphism to a group of talented high school students. I was one of the course assistants rather than one of the students, but I'm sure the insight was at least as valuable for me as for them. Consider the following game. I'll write down the numbers 1 through 9 on a sheet of paper, and you and I will take turns selecting numbers from the list (crossing off each number once it has been selected). The winner is the first person to have chosen exactly three numbers which add up to 15. For example if I selected 9, 6, 2 and you selected 3, 8, 1, 4 then you would win because 3 + 8 + 4 = 15. The interesting thing is that this game is isomorphic to tic-tac-toe. I don't know what I precisely mean by that, but I can explain why it is true. Simply consider a 3 x 3 magic square: 4 9 2 3 5 7 8 1 6 The rows, columns, and diagonals all add up to 15, and moreover every way of writing 15 as the sum of three numbers from 1 to 9 is represented. When you choose a number, draw an X over it; when I choose a number, circle it. Tic-tac-toe!
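The claimed dictionary can even be checked mechanically: there are exactly eight 3-element subsets of $\{1,\dots,9\}$ with sum 15, and they are precisely the eight winning lines of the magic square. A short verification sketch (Python is used here just for brevity):

```python
from itertools import combinations

triples = {frozenset(t) for t in combinations(range(1, 10), 3) if sum(t) == 15}

square = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]
lines = [frozenset(row) for row in square]                    # 3 rows
lines += [frozenset(col) for col in zip(*square)]             # 3 columns
lines += [frozenset(square[i][i] for i in range(3)),          # diagonal
          frozenset(square[i][2 - i] for i in range(3))]      # anti-diagonal

assert triples == set(lines) and len(triples) == 8
```

So a winning triple in the number game is exactly a row, column, or diagonal: the two games have the same board, disguised.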
{ "source": [ "https://mathoverflow.net/questions/37525", "https://mathoverflow.net", "https://mathoverflow.net/users/2599/" ] }
37,551
I'm trying to understand the necessity for the assumption in the Hahn-Banach theorem for one of the convex sets to have an interior point. The other way I've seen the theorem stated, one set is closed and the other one compact. My goal is to find a counter example when these hypotheses are not satisfied but the sets are still convex and disjoint. So here is my question: Question: I would like a counter example to the Hahn-Banach separation theorem for convex sets when the two convex sets are disjoint but neither has an interior point. It is trivial to find a counter example for the strict separation but this is not what I want. I would like an example (in finite or infinite dimensions) such that we fail to have any separation of the two convex sets at all. In other words, we have $K_1$ and $K_2$ with $K_1 \cap K_2 = \emptyset$ with both $K_1$ and $K_2$ convex belonging to some normed linear space $X$. I would like an explicit example where there is no linear functional $l \in X^*$ such that $\sup_{x \in K_1} l(x) \leq \inf_{z \in K_2} l(z)$. I'm quite sure that a counter example cannot arise in finite dimensions since I think you can get rid of these hypotheses in $\mathbb{R}^n$. I'm not positive though.
Here is a simple example of a linear space and 2 disjoint convex sets such that there is no linear functional separating the sets. Note that the notions of convexity and linear functional do not require any norm or whatever else. You can introduce them, if you want, but they are completely external to the problem. The usual trick with taking the difference of the sets shows that it is enough to assume that one set is a point, say, the origin. Now we want to design a convex set $K$ not containing the origin such that the only linear functional $\ell$ that is non-negative on this set is $0$. To this end, take the space $X$ to be the space of all real sequences with finitely many non-zero terms and let $K$ be the set of all such sequences whose last non-zero element is positive. Now, if $x\in X$, choose $y$ to be any sequence whose last non-zero element is $1$ and lies beyond the last non-zero element in $x$. Then, for every $\delta>0$, both $x+\delta y$ and $-x+\delta y$ are in $K$, so $\pm \ell(x)+\delta \ell(y)\ge 0$ with any $\delta>0$ whence $\ell(x)=0$. Thus $\ell$ vanishes identically.
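For definiteness, the set in question can be written out as follows: for $x \neq 0$ let $m(x) = \max\{i : x_i \neq 0\}$, and put $$ K = \{\, x \in X \setminus \{0\} : x_{m(x)} > 0 \,\}. $$ Convexity is immediate: for $x, z \in K$ and $0 < t < 1$, the last non-zero coordinate of $tx + (1-t)z$ sits at position $\max(m(x), m(z))$, where it is a convex combination of two non-negative numbers, at least one of them positive.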
{ "source": [ "https://mathoverflow.net/questions/37551", "https://mathoverflow.net", "https://mathoverflow.net/users/8755/" ] }
37,563
LLL and other lattice reduction techniques (such as PSLQ) try to find a short basis vector relative to the 2-norm, i.e. for a given basis that has $\varepsilon$ as its shortest vector, $\varepsilon \in \mathbb{Z}^n$, find a short vector $b \in \mathbb{Z}^n$ s.t. $\|b\|_2 < \|c^n \varepsilon\|_2$. Has there been any work done to find short vectors based on other, potentially higher, norms? Is this a meaningful question?
{ "source": [ "https://mathoverflow.net/questions/37563", "https://mathoverflow.net", "https://mathoverflow.net/users/4106/" ] }
37,570
Let $X$ be a smooth projective variety over $\mathbb{C}$, and let $L$ be a big and nef line bundle on $X$. I want to prove $L$ is semi-ample ($L^m$ is basepoint-free for some $m > 0$). The only way I know is using the Kawamata basepoint-free theorem: Theorem. Let $(X, \Delta)$ be a proper klt pair with $\Delta$ effective. Let $D$ be a nef Cartier divisor such that $aD-K_X-\Delta$ is nef and big for some $a > 0$. Then $|bD|$ has no basepoints for all $b \gg 0$. Question. What other kinds of techniques are there to prove semi-ampleness or basepoint-freeness of a given line bundle? Maybe I am missing some obvious method. Please don't hesitate to add an answer even if you think the idea off the top of your head is elementary. Addition: In my situation, $X$ is a moduli space $\overline{M}_{0,n}$. In this case, the Kodaira dimension is $-\infty$. More generally, I want to consider the genus 0 Kontsevich moduli space of stable maps to projective space, too. $L$ is given by a linear combination of boundary divisors. It is well-known that the boundary divisors are normal crossing, and we know many curves on the space such that we can calculate intersection numbers with boundary divisors explicitly.
{ "source": [ "https://mathoverflow.net/questions/37570", "https://mathoverflow.net", "https://mathoverflow.net/users/4643/" ] }
37,610
Any pure mathematician will from time to time discuss, or think about, the question of why we care about proofs, or to put the question in a more precise form, why we seem to be so much happier with statements that have proofs than we are with statements that lack proofs but for which the evidence is so overwhelming that it is not reasonable to doubt them. That is not the question I am asking here, though it is definitely relevant. What I am looking for is good examples where the difference between being pretty well certain that a result is true and actually having a proof turned out to be very important, and why. I am looking for reasons that go beyond replacing 99% certainty with 100% certainty. The reason I'm asking the question is that it occurred to me that I don't have a good stock of examples myself. The best outcome I can think of for this question, though whether it will actually happen is another matter, is that in a few months' time if somebody suggests that proofs aren't all that important one can refer them to this page for lots of convincing examples that show that they are. Added after 13 answers: Interestingly, the focus so far has been almost entirely on the "You can't be sure if you don't have a proof" justification of proofs. But what if a physicist were to say, "OK I can't be 100% sure, and, yes, we sometimes get it wrong. But by and large our arguments get the right answer and that's good enough for me." To counter that, we would want to use one of the other reasons, such as the "Having a proof gives more insight into the problem" justification. It would be great to see some good examples of that. (There are one or two below, but it would be good to see more.) Further addition: It occurs to me that my question as phrased is open to misinterpretation, so I would like to have another go at asking it. I think almost all people here would agree that proofs are important: they provide a level of certainty that we value, they often (but not always) tell us not just that a theorem is true but why it is true, they often lead us towards generalizations and related results that we would not have otherwise discovered, and so on and so forth. Now imagine a situation in which somebody says, "I can't understand why you pure mathematicians are so hung up on rigour. Surely if a statement is obviously true, that's good enough." One way of countering such an argument would be to give justifications such as the ones that I've just briefly sketched. But those are a bit abstract and will not be convincing if you can't back them up with some examples. So I'm looking for some good examples. What I hadn't spotted was that an example of a statement that was widely believed to be true but turned out to be false is, indirectly, an example of the importance of proof, and so a legitimate answer to the question as I phrased it. But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth. There are a few below. The more the merrier.
I once got a letter from someone who had overwhelming numerical evidence that the sum of the reciprocals of primes is slightly bigger than 3 (he may have conjectured the limit was π). The sum is in fact infinite, but diverges so slowly (like log log n) that one gets no hint of this by computation.
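For the record, the divergence really is that slow: by Mertens' theorem, $\sum_{p\le N} 1/p = \log\log N + M + o(1)$ with Mertens' constant $M \approx 0.2615$. A quick numerical sketch (the bounds chosen below are arbitrary):

```python
from math import log
from sympy import primerange

M = 0.2614972128  # Mertens' constant
for N in (10**4, 10**5, 10**6, 10**7):
    s = sum(1.0 / p for p in primerange(2, N))   # partial sum of 1/p
    print(N, round(s, 4), round(log(log(N)) + M, 4))
```

Even summing over all primes below ten million gives only about 3.04, entirely consistent with a hasty guess that the series converges to something near 3 (or π).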
{ "source": [ "https://mathoverflow.net/questions/37610", "https://mathoverflow.net", "https://mathoverflow.net/users/1459/" ] }
37,651
I'm looking for explicit examples of Riemannian surfaces (two-dimensional Riemannian manifolds $(M,g)$) for which the distance function d(x,y) can be given explicitly in terms of local coordinates of x,y, assuming that x and y are sufficiently close. By "explicit", I mean things like a closed form description in terms of special functions, by implicitly solving a transcendental equation or (at worst) by solving an ODE, as opposed to having to solve a variational problem or a PDE such as the eikonal equation, or an inverse problem for an ODE, or to sum an asymptotic series. The only examples of this that I know of are the constant curvature surfaces, which can be locally modeled either by the Euclidean plane ${\bf R}^2$, the sphere ${\bf S}^2$, or the hyperbolic plane ${\bf H}^2$, for which we have classical formulae for the distance function. But I don't know of any other examples. For instance, the distance functions on the surface of the solid ellipsoid or solid torus in ${\bf R}^3$ look quite unpleasant already to write down explicitly. Presumably Zoll surfaces would be the next thing to try, but I don't know of any tractable explicit examples of Zoll surfaces that are not already constant curvature.
I'll briefly spell out what others have pointed to concerning geodesics on surfaces of revolution (or more generally, surfaces with a 1-parameter group of symmetries), because it's nice and not as widely understood as it should be. Geodesics on surfaces of revolution conserve angular momentum about the central axis, so the geodesic flow splits into 2-dimensional surfaces having constant energy (~length) and angular momentum (the more general principle is that the inner product of the tangent to a geodesic with any infinitesimal isometry of a Riemannian manifold is constant). The surfaces are generically toruses. The shadow of these toruses on the surface of revolution is an annulus, a component of a set of the form $r \ge r_0$, where at each point with $r > r_0$ there are two vectors having the given angular momentum, but they merge at the boundary, both becoming tangent to the boundary of the annulus. If you sketch the picture, you will see the torus. The geodesics correspond to the physical phenomenon of the pattern of string or thread mechanically but passively wound around a cylinder. As string builds up in the middle, geodesics start to oscillate back and forth in a sinusoidal pattern, further amplifying the bulge in the middle. To find the geodesic from point x to point y, you need to know which angular momentum will take you from x to y. For any two meridian circles and any choice of angular momentum, the geodesics of given angular momentum map one circle to the other by a rotation. Both the angle of rotation of the map and the length of the particular family of geodesics traversing the annulus are given by an integral over an interval cutting across the annulus, since the slope of the vector field at all intervening points is known. I have an aversion to actual symbolic computation so I won't give you example formulas, but I believe this should meet your criterion for explicitness. But to take a step back: this question, asking for an explicit formula, has an unstated (and probably unintended) connotation that is worth examining: this use of language implicitly suggests that non-symbolic forms are less worthy. I don't know the background motivation for the question, but an alternative question for some purposes would be to give examples of surfaces where you can exhibit the distance function. Communication of mathematics is biased toward symbolic forms. However, for many people and many purposes, some kind of graphical representation of the distance function, and/or diagrams or explanations of why it is what it is as well as a straightforward method for computing it, would often be better than a symbolic answer. The geodesic flow of course is an ordinary differential equation. It is a vector field on the 3-manifold of unit-length tangent vectors to the surface, defined by very easy equations: the vectors are tangent to the surface, and their derivative (= the 2nd derivative of a geodesic arc) is normal to the surface. The solutions may not always have a nice symbolic form, but they always have a nice and easy-to-compute geometric form. Finding the distance involves the implicit function theorem, but this is easy and intuitive. One could, for instance, easily draw a parametric surface that is the graph of distance as a function of position directly from solutions to the ODE (which no doubt sometimes even have reasonable symbolic representations).
Both the ODE for the geodesic flow and the inverse function to give distance as a function of position are easy to compute numerically, and easy to understand qualitatively.
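For the record, and at the risk of exactly the symbolic computation being avoided above, the conserved quantity and the resulting quadrature can be written down. Parametrize the surface of revolution by arclength $u$ along the profile curve and rotation angle $v$, so the metric is $du^2 + r(u)^2\,dv^2$. A unit-speed geodesic conserves the angular momentum (Clairaut's relation) $$ c = r(u)^2 \frac{dv}{dt}, \qquad \Big(\frac{du}{dt}\Big)^{2} = 1 - \frac{c^2}{r(u)^2}, $$ so it stays in the annulus $r \ge |c|$, and the rotation angle and the length of the arcs crossing the annulus are the integrals mentioned above: $$ \Delta v = \int \frac{c\, du}{r\sqrt{r^2 - c^2}}, \qquad L = \int \frac{r\, du}{\sqrt{r^2 - c^2}}. $$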
{ "source": [ "https://mathoverflow.net/questions/37651", "https://mathoverflow.net", "https://mathoverflow.net/users/766/" ] }
37,708
The Nash embedding theorem tells us that every smooth Riemannian m-manifold can be embedded in $R^n$ for, say, $n = m^2 + 5m + 3$. What can we say in the special case of 2-manifolds? For example, can we always embed a 2-manifold in $R^3$?
The Nash-Kuiper embedding theorem states that any orientable 2-manifold is isometrically ${\cal C}^1$-embeddable in $\mathbb{R}^3$. A theorem of Tompkins [cited below] implies that as soon as one moves to ${\cal C}^2$, even compact flat $n$-manifolds cannot be isometrically ${\cal C}^2$-immersed in $\mathbb{R}^{2n-1}$. So the answer to your question for smooth embeddings is: No, as others have pointed out. I believe Gromov reduced the dimension you quote to 5 for any compact surface, but I don't have a precise reference for that. Tompkins, C. "Isometric embedding of flat manifolds in Euclidean space," Duke Math. J. 5 (1): 1939, 58-61. Edit. Both Deane Yang and Willie Wong were correct that the Gromov result is in Partial Differential Relations. I believe this is it, on p. 298: "We construct here an isometric $\cal{C}^\infty$ ($\cal{C}^{\mathrm{an}}$)-imbedding of $(V,g) \rightarrow \mathbb{R}^5$ for all compact surfaces $V$." $g$ is a Riemannian metric on $V$.
{ "source": [ "https://mathoverflow.net/questions/37708", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ] }
37,777
In Presburger Arithmetic there is no predicate that can express divisibility, else Presburger Arithmetic would be as expressive as Peano Arithmetic. Divisibility can be defined recursively, for example $D(a,c) \equiv \exists b \: M(a,b,c)$, $M(a,b,c) \equiv M(a-1,b,c-b)$, $M(1,b,c) \equiv (b=c)$. But some predicates which can be expressed in Presburger Arithmetic also have recursive definitions, for example $P(x,y,z) \equiv (x+y=z)$ versus $P(x,y,z) \equiv P(x-1,y+1,z)$, $P(0,y,z) \equiv (y=z)$. How to tell if a predicate, defined recursively without use of multiplication, has an equivalent non-recursive definition which can be expressed in Presburger Arithmetic?
Presburger arithmetic admits elimination of quantifiers, if one expands the language to include truncated minus and the unary relations for divisibility-by-2, divisibility-by-3 and so on, which are definable in Presburger arithmetic. (One can equivalently expand the language to include congruence $\equiv_k$ modulo $k$ for each natural number $k$.) That is to say, every assertion in the language of Presburger arithmetic is equivalent to a quantifier-free assertion in the expanded language. It follows that the definable subsets of $\mathbb{N}$ in the language of Presburger arithmetic are exactly the eventually periodic sets. These are comparatively trivial sets, of course, and it means that the set of prime numbers and other interesting sets of natural numbers are simply not expressible in the language of Presburger arithmetic. A similar analysis holds in higher dimensions, and for this reason, we usually think of Presburger arithmetic as a weak theory. The quantifier elimination argument leads directly to the conclusion that Presburger arithmetic is a decidable theory: given any sentence, one finds the quantifier-free equivalent formulation, and such sentences are easily recognized as true or false. There is an interesting account in these slides for a talk by Cesare Tinelli.
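A small worked example of the elimination step (standard, and not specific to the slides cited), showing how the divisibility predicates absorb quantifiers that addition alone cannot express: $$ \exists x\,(x + x = y) \iff y \equiv 0 \pmod 2, \qquad \exists x\,(x + x + x + 1 = y) \iff (y \equiv 1 \pmod 3) \wedge (y \geq 1). $$ The right-hand sides are quantifier-free in the expanded language, and Boolean combinations of such congruences and linear inequalities are exactly what produce the eventually periodic definable sets.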
{ "source": [ "https://mathoverflow.net/questions/37777", "https://mathoverflow.net", "https://mathoverflow.net/users/2003/" ] }
37,792
The homotopy groups $\pi_{n}(X)$ arise from considering equivalence classes of based maps from the $n$-sphere $S^{n}$ to the space $X$. As is well known, these maps can be composed, giving rise to a group operation. The resulting group contains a great deal of information about the given space. My question is: is there any extra information about a space that can be discovered by considering equivalence classes of based maps from the $n$-tori $T^{n}=S^{1}\times S^{1}\times \cdots \times S^{1}$? In the case of $T^{2}$, it would seem that since any map $S^{1}\to X$ can be "thickened" to create a map $T^{2}\to X$ if $X$ is three-dimensional, the group arising from based maps $T^{2}\to X$ would contain $\pi_{1}(X)$. Perhaps more generally, can useful information be gained by examining equivalence classes of based maps from some arbitrary space $Y$ to a given space $X$?
There's always information to be got. But in this case: Based homotopy classes of maps $T^2\to X$ don't form a group! To define a natural function $\mu\colon [T,X]_*\times [T,X]_*\to [T,X]_*$, you need a map $c\colon T\to T\vee T$ (where $\vee$ is one point union). And if you want $\mu$ to be unital, associative, etc., you'll want $c$ to be counital, coassociative, etc. For $T=T^n$ with $n\geq2$, there is no $c$ that is counital. (The usual way to see this is to think about the cohomology $H^*T$ with its cup-product structure.) The inclusion $S^1\vee S^1\to T^2$ gives a map $$r\colon [T^2,X]_* \to [S^1\vee S^1,X]_*\approx \pi_1X\times \pi_1X.$$ The image of this map will be pairs $(a,b)$ of elements in $\pi_1X$ which commute: $ab=ba$. It won't usually be injective; so there might be something interesting to think about in the preimages $r^{-1}(a,b)$.
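The cup-product argument in the parenthetical can be spelled out. Write $q_1, q_2 \colon T\vee T \to T$ for the two collapse maps, and for $T = T^2$ pick $x, y \in H^1(T^2)$ with $x\smile y \neq 0$. If $c$ were counital, i.e. $q_i \circ c \simeq \mathrm{id}$, then with $x^{(1)} = q_1^*x$ and $y^{(2)} = q_2^*y$ we would have $c^*x^{(1)} = x$ and $c^*y^{(2)} = y$. But classes supported on different wedge summands multiply to zero (their product restricts to zero on each summand, and restriction is injective on $H^2(T\vee T)$), so $$ 0 = c^*\big(x^{(1)} \smile y^{(2)}\big) = x \smile y \neq 0, $$ a contradiction.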
{ "source": [ "https://mathoverflow.net/questions/37792", "https://mathoverflow.net", "https://mathoverflow.net/users/6856/" ] }
37,825
This question is inspired by this Tim Gowers blogpost . I have some familiarity with the work of Jacob Lurie, which contains ideas which are simply astounding; but what I don't understand is which key insight allowed him to begin his programme and achieve things which nobody had been able to achieve before. People had looked at $\infty$-categories for years, and the idea of $(\infty,n)$-categories is not in itself new. What was the key new idea which started "Higher Topos Theory", the proof of the Baez-Dolan cobordism hypothesis, "Derived Algebraic Geometry", etc.?
My answer would be that his insight was firstly that it pays to take what Grothendieck said in his various long manuscripts extremely seriously, and then to devote a very large amount of thought, time and effort. Many of the methods in HTT have been available from the 1980s, and the importance of quasi-categories as a way to boost higher dimensional category theory was obvious to Boardman and Vogt even earlier. Lurie then put in an immense amount of work writing down in detail what the insights from that period looked like from a modern viewpoint. It worked as the progress since that time had provided tools ripe for making rapid progress on several linked problems. His work since HTT continues the momentum that he has built up. As far as 'abstracter than thou' goes, I believe that Grothendieck's ideas in Pursuing Stacks were not particularly abstract, and Lurie's continuation of that trend is not either. Once you see that there are some good CONCRETE models for $\infty$-categories, the geometry involved gets quite concrete as well. Simplicial sets are not particularly abstract things, although they can be a bit scary when you meet them first. Quasi-categories are then a simple-ish generalisation of categories, but where you can use both categorical insights and homotopy insights. That builds a good intuition about infinity categories... now bring in modern algebraic topology with spectra, etc. becoming available.
{ "source": [ "https://mathoverflow.net/questions/37825", "https://mathoverflow.net", "https://mathoverflow.net/users/2051/" ] }
37,838
Background I am reading Tom Blyth's book Categories as I am thinking of using it as a guide for a fourth-year project I'll be supervising this academic year. The book seems the right length and level for the kind of project we require of our final year single honours students in Edinburgh. In the fourth chapter of the book, a normal category is defined as one with zero objects, in which every morphism has a kernel and a cokernel, and in which every monomorphism is a kernel. This last condition can be rephrased as saying that monomorphisms are normal. My confusion stems from Theorem 4.6 in the book, which states that a normal category has pullbacks. The proof in the book seems to use that in the diagram $$\begin{matrix} & & B \cr & & \downarrow \cr A & \rightarrow & C \end{matrix}$$ whose limit is the desired pullback, the morphism $A \to C$ is a kernel. This seems to me an unwarranted assumption, since it would seem to imply, in particular, that generic morphisms are monic. Alas, I have not been able to find an independent proof of the theorem and I am starting to suspect that this may not be true. Googling seems not to be of much help. For one thing, one has to wade through a surprising number of hits which have nothing to do with category theory. Since much of the rest of the chapter seems to depend on this result, I am a little stuck. I have emailed the author, who is an emeritus professor across the firth in St Andrews, but so far no reply. So I thought I would try it here on MO, since I'm sure to get an authoritative answer to my two questions: Is the result true? And if so, can someone point me to a proof? Thanks in advance!
{ "source": [ "https://mathoverflow.net/questions/37838", "https://mathoverflow.net", "https://mathoverflow.net/users/394/" ] }
37,933
What are some applications of the Implicit Function Theorem in calculus? The only applications I can think of are: the result that the solution space of a non-degenerate system of equations naturally has the structure of a smooth manifold; the Inverse Function Theorem. I am looking for applications that would be interesting to an advanced US math undergraduate or first-year graduate student who is seeing the Implicit Function Theorem for the first or second time. The context is that I will be explaining this result as part of a review of manifold theory.
The infinite-dimensional implicit function theorem is used, among other things, to demonstrate the existence of solutions of nonlinear partial differential equations and parameterize the space of solutions. For equations of standard type (elliptic, parabolic, hyperbolic), the standard version on Banach spaces usually suffices, but you have to be clever about which Banach space to use. There is a generalization of the implicit function theorem, due to Nash who used it to demonstrate the existence of isometric embeddings of Riemannian manifolds in Euclidean space, that works for even more general types of PDE's. Moser stated and proved a simpler version of the theorem. There is a beautiful survey article by Richard Hamilton (who originally used the Nash-Moser implicit function theorem to prove the local-in-time existence of solutions to the Ricci flow) on the Nash-Moser implicit function theorem.
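For reference, here is the Banach-space statement alluded to above (a standard formulation, not spelled out in the original answer): if $X$, $Y$, $Z$ are Banach spaces, $f \colon U \subseteq X \times Y \to Z$ is continuously differentiable, $f(x_0,y_0)=0$, and the partial derivative $D_Y f(x_0,y_0) \colon Y \to Z$ is a bounded linear isomorphism, then there is a neighborhood of $x_0$ and a unique $C^1$ map $g$ on it with $g(x_0)=y_0$ and
$$f(x, g(x)) = 0.$$
The art in the PDE applications is choosing the function spaces so that the linearized operator really is an isomorphism.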
{ "source": [ "https://mathoverflow.net/questions/37933", "https://mathoverflow.net", "https://mathoverflow.net/users/5337/" ] }
38,019
The 1969 Putnam exam asked for all sets which can be the range of a polynomial in two variables with real coefficients. Surprisingly, the set $(0,\infty)$ can be the range of such a polynomial. Such polynomials don't attain their global infimum although they are bounded below. But is it possible that a polynomial with range $(0,\infty)$ also has a nonzero gradient everywhere?
$(1+x+x^2y)^2+x^2$
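A quick check that this example works (spelling out what the answer leaves implicit): write $f(x,y) = (1+x+x^2y)^2 + x^2$. Then $f > 0$ everywhere, since $f = 0$ would force $x = 0$ and $1+x+x^2y = 0$ simultaneously, i.e. $1 = 0$. Along the curve $y = -(1+x)/x^2$ one has $f(x,y) = x^2 \to 0$ as $x \to 0$, so the infimum $0$ is approached but never attained; since also $f(t,0) \to \infty$, the (connected) range is exactly $(0,\infty)$. Finally,
$$\frac{\partial f}{\partial y} = 2x^2(1+x+x^2y), \qquad \frac{\partial f}{\partial x} = 2(1+x+x^2y)(1+2xy) + 2x,$$
and if both vanish then either $x = 0$ (giving $\partial f/\partial x = 2 \ne 0$) or $1+x+x^2y = 0$ (giving $\partial f/\partial x = 2x$, forcing $x = 0$ and again $1 = 0$). So the gradient never vanishes.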
{ "source": [ "https://mathoverflow.net/questions/38019", "https://mathoverflow.net", "https://mathoverflow.net/users/9075/" ] }
38,049
I know this is a dangerous topic which could attract many cranks and nutters, but: According to Wikipedia [ and probably his own website, but I have a hard time seeing exactly what he's claiming ] Louis de Branges has claimed, numerous times, to have proved the Riemann Hypothesis; but clearly few people believe him. His website is: http://www.math.purdue.edu/~branges/site/Papers but I find his papers difficult to follow. However, whether or not you believe him, his arguments presumably should prove something , even if not the full RH. So, my question is: Are there any theorems related to the Riemann Hypothesis and similar problems, arising from his work, which have been fully accepted by the mathematical community and published (or at least submitted)?
The paper by Conrey and Li "A note on some positivity conditions related to zeta and L-functions" https://arxiv.org/abs/math/9812166 discusses some of the problems with de Branges's argument. They describe a (correct) theorem about entire functions due to de Branges, which has a corollary that certain positivity conditions would imply the Riemann hypothesis. However Conrey and Li show that these positivity conditions are not satisfied in the case of the Riemann hypothesis. So the answer is that de Branges has proved theorems in this area that are accepted, and his work on the Riemann hypothesis has been checked and found to contain a serious gap. (At least the version of several years ago has a gap; I think he may have produced updated versions, but at some point people lose interest in checking every new version.) Update: there is a more recent paper by Lagarias discussing de Branges's work. Lagarias, Jeffrey C. , Hilbert spaces of entire functions and Dirichlet $L$ -functions, Cartier, Pierre (ed.) et al., Frontiers in number theory, physics, and geometry I. On random matrices, zeta functions, and dynamical systems. Papers from the meeting, Les Houches, France, March 9–21, 2003. Berlin: Springer (ISBN 978-3-540-23189-9/hbk). 365-377 (2006). ZBL1121.11057 , MR2261101 . Author's website and Wayback Machine .
{ "source": [ "https://mathoverflow.net/questions/38049", "https://mathoverflow.net", "https://mathoverflow.net/users/6651/" ] }
38,089
The following is some version of Tannaka-Krein theory, and is reasonably well-known: Let $G$ be a group (in Set is all I care about for now), and $G\text{-Rep}$ the category of all $G$-modules (over some field $\mathbb K$, say). It is a fairly structured category (complete, cocomplete, abelian, $\mathbb K$-enriched, ...) and in particular carries a symmetric tensor product $\otimes$. The forgetful functor $\operatorname{Forget}: G\text{-Rep} \to \text{Vect}$ respects all of this structure, and in particular is (symmetric) monoidal. Let $\operatorname{End}_\otimes(\operatorname{Forget})$ denote the monoid of monoidal natural transformations of $\operatorname{Forget}$. Then it is a group, and there is a canonical isomorphism $\operatorname{End}_\otimes(\operatorname{Forget}) \cong G$. The following is probably also reasonably well-known, but I don't know it myself: Let $G$, etc., be as above, but suppose that we have forgotten what $G$ the category $G\text{-Rep}$ came from, and in particular forgot, at least momentarily, the data of the forgetful functor. We can nevertheless recover it, because in fact $\operatorname{Forget}$ is the unique-up-to-isomorphism ADJECTIVES functor $G\text{-Rep} \to \text{Vect}$. My question is: what are the words that should go in place of "ADJECTIVES" above? Certainly "linear, continuous, cocontinuous, monoidal" are all reasonable words, although my intuition has been that I can drop "cocontinuous" from the list. But even with all these words, I don't see how to prove the uniqueness. If I had to guess, I would guess that the latter claim is a result of Deligne's, although I don't read French well enough to skim a bunch of his papers and find it. Any pointers to the literature?
If $G$ is an affine algebraic group (for example a finite group), then the category of $k$ -linear cocontinuous symmetric monoidal functors from $\mathsf{Rep}(G)$ to $\mathsf{Vect}_k$ is equivalent to the category of $G$ -torsors over $k$ . In particular, not every such functor needs to be isomorphic to the identity. For example, if $k'$ is finite Galois extension of k with Galois group $G$ , then the functor $F(V) = (V \otimes_{k} k')^{G}$ will satisfy all the axioms you will think to write down, but is not isomorphic to the identity functor.
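To spell out a minimal concrete instance of this (my own elaboration, with $k=\mathbb{R}$, $k'=\mathbb{C}$, $G=\mathrm{Gal}(\mathbb{C}/\mathbb{R})=\mathbb{Z}/2$): on the sign representation $\mathbb{R}_-$, where the generator acts by $-1$, the functor gives
$$F(\mathbb{R}_-) = (\mathbb{R}_- \otimes_{\mathbb{R}} \mathbb{C})^{G} = \{ z \in \mathbb{C} : -\bar{z} = z \} = i\mathbb{R}.$$
A monoidal isomorphism $\eta$ from the identity to $F$ would have to send $1 \in \mathbb{R}_-$ to some $ic \in i\mathbb{R}$ with $(ic)^2 = 1$ (compatibility with $\mathbb{R}_- \otimes \mathbb{R}_- \cong \mathbb{R}$), i.e. $-c^2 = 1$, which is impossible over $\mathbb{R}$. This is the nontriviality of the torsor $\mathrm{Spec}(\mathbb{C}) \to \mathrm{Spec}(\mathbb{R})$ made explicit.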
{ "source": [ "https://mathoverflow.net/questions/38089", "https://mathoverflow.net", "https://mathoverflow.net/users/78/" ] }
38,105
Given a function $f(z)$ on the complex plane, define the Schwarzian derivative $S(f)$ to be the function $S(f) = \frac{f'''}{f'} - \frac{3}{2} (\frac{f''}{f'})^2$ Here is a somewhat more conceptual definition, which justifies the terminology. Define $[f, z, \epsilon]$ to be the cross ratio $[f(z), f(z + \epsilon); f(z + 2\epsilon), f(z + 3\epsilon)]$ , and let $[z, \epsilon]$ denote the cross ratio $[z, z + \epsilon, z + 2\epsilon, z + 3\epsilon]$ (in fact this is just 4, but this notation makes my point clearer). One can ask if $[f, z, \epsilon]$ is well approximated by $[z, \epsilon]$ ; indeed, it turns out that the error is $o(\epsilon)$ . So one pursues the second order error term and finds that $[f, z, \epsilon] = [z, \epsilon] - 2 S(f)(z) \epsilon^2 + o(\epsilon^2)$ . So the Schwarzian derivative measures the infinitesimal change in cross ratio caused by $f$ . In particular, $S(f)$ is identically zero precisely for Möbius transformations. That's all background. From what I have said so far, the Schwarzian derivative is at best a curiosity. What is not obvious at first glance is the fact that the Schwarzian derivative has magical powers. Here are some examples: First magical power: The Schwarzian derivative is deeply relevant to one dimensional dynamics, stemming from the fact that it behaves in a specific way under compositions. For example, if $f$ is a smooth function from the unit interval to itself with negative Schwarzian derivative and $n$ critical points, then it has at most n+2 attracting periodic orbits. Second magical power: It says something profound about the solutions to the Sturm-Liouville equation, $f''(z) + u(z) f(z) = 0$ . If $f_1$ and $f_2$ are two linearly independent solutions, then the ratio $g(z) = f_1(z)/f_2(z)$ satisfies $S(g) = 2u$ . Third magical power: The Schwarzian derivative is the unique projectively invariant 1-cocyle for the diffeomorphism group of $\mathbb{R}P^1$ . This is probably just a restatement of the conceptual definition I gave above, but I'm not sure; in any event, this gives the Schwarzian derivative a great deal of relevance to conformal field theory (or so I'm told). I'm sure there are more. I'm wondering if all of these powers can be explained by some underlying geometric principle. They all seem vaguely relevant to each other, but the first power in particular seems very hard to relate to the definition in any obvious way. Does anybody have any insights?
Like many people (but not all people), I have trouble thinking in terms of formulas such as that for the Schwarzian. For me, a geometric image works much better. I'll describe a geometric picture, similar to what I discussed in my paper "Zippers and univalent functions" (and can be found elsewhere as well, but I don't have a good sense of references). The group of Moebius transformations is 3 dimensional, and for any locally-defined diffeomorphism f of R or any locally-defined holomorphic map of C, you can fit the value, the first derivative and the 2nd derivative at any point by a unique Moebius transformation, the osculating Moebius transformation at the point. From this, you can make a recipe to extend the diffeomorphism into the upper half plane or upper half space models for hyperbolic geometry: map each vertical line according to the osculating Moebius transformation at its base. When you do this, vertical lines are mapped isometrically, but the metric is necessarily distorted in the horizontal directions unless f is a Moebius transformation. The Schwarzian derivative gives the asymptotic behavior of this distortion. For real maps, if the Schwarzian is negative, the hyperbolic rays are bent away from each other. A hyperbolic line perpendicular to the vertical lines is mapped to a curve that (in terms of the hyperbolic metric) bends downward. If you consider any interval on R, it has a natural projectively-invariant metric, called the Hilbert metric, and identified with the 1-dimensional hyperbolic metric in this case. The bending implies that the metric is expanded by f, relative to the Hilbert metric of its image. For example, $\log(t)$ is an arc-length parametrization for $(0,\infty)$. The map $x \rightarrow x^k$ expands the parameter by a factor of $k$. The expanding property is highly significant for analyzing dynamics. In the complex analytic case, the Schwarzian is a holomorphic quadratic differential, and can be geometrically indicated by two perpendicular foliations: one set of streamlines where the quadratic form takes positive real values, and a perpendicular set of lines where it takes negative real values. Whenever the quadratic form has a simple zero, the foliations have singularities where the lines make a Y pattern, branching in 3 ways. At any critical point of f, there is a double pole of the Schwarzian, where the positive real foliation circles around and the negative real foliation is radial. These lines show how the extension of f bends surfaces asymptotically near the complex plane; if you start with an umbilic surface such as a plane, horosphere or equidistant surface, they show the asymptotic pattern for lines of curvature for the image of the surfaces via the extension of f. For example, you can visualize z -> z^2 as extending by mapping the hyperbolic cylinders around the z axis (they appear as cones in upper half space), wrapping them twice around the vertical axis (thus stretching the circumference by a factor of 2), and stretching vertically by a factor of 2. The curvature along the meridians is increased, and the curvature in directions parallel to the axis is decreased. Of course, the Poincare metric of a small disk is preserved, but you can still see the metric effect in terms of the nonconformality of the behavior on surfaces in hyperbolic space. You can also see it by the shape of the image of a small circle on C.
To 2nd order, it remains round, but there is a 3rd order effect that makes it elliptical, where the short axis is the direction in which the Schwarzian is negative. When you look at computer plots of the quadratic differentials for holomorphic maps, they pop into 3 dimensions, strongly suggesting the geometry of some families of surfaces that can be associated to a holomorphic map of C. (source: Wayback Machine) The Schwarzian for the rational function $f(z) = (z^3-3z-1)/(z^3+1)$. It's challenging to make a revealing plot for a complex rational function like $f$, since it maps 3 times over the Riemann sphere, but the Schwarzian is easy to draw, and shows how an extension of f bends hyperbolic space. The critical points of f(z) are surrounded by circular positive-real circles where the Schwarzian has a double pole, indicating how they are wrapped twice around a core singularity. A typical zero is visible in the center, where the bending branches three ways. The zeros and poles of the function f itself are not visible, since $0$ and $\infty$ have no special significance in the geometry of $S^2 = CP^1$. Enough ... there are endless mysteries to the Schwarzian, but these geometric images are helpful to me. I'll second Victor's suggestion: I haven't read the Ovsienko–Tabachnikov book either, but I'm confident from prior experience that it's interesting material. Addendum. Since you expressed interest in the real case, here's one way to illustrate it. The idea is simple: take a standard family of horocycles in the domain, in this case circles of constant height tangent to the real line, and push them forward by the osculating Moebius transformation. The image is actually determined by just $f$ and $f'$, but to calculate (rather than just see) the envelope would require $f''$. This picture is for $x^3 - 3x$ (which folds $[-2,2]$ 3 times over its image), in the interval $[-2.1,2.1]$. The fat shape, with downward hyperbolic curvature of the envelope in contrast to the upward curvature of the envelope in the domain, suggests a caterpillar outgrowing its skin and demonstrates the negative Schwarzian. (source: Wayback Machine)
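As a small computational aside (my own sketch, assuming sympy is available; the rational map below is the one from the figure caption above): the defining formula is easy to check symbolically, e.g. that the Schwarzian of a Möbius transformation vanishes identically.

```python
import sympy as sp

z = sp.symbols('z')

def schwarzian(f):
    # S(f) = f'''/f' - (3/2) (f''/f')^2
    f1, f2, f3 = (sp.diff(f, z, k) for k in (1, 2, 3))
    return sp.simplify(f3 / f1 - sp.Rational(3, 2) * (f2 / f1) ** 2)

print(schwarzian((2*z + 1) / (z - 3)))              # 0, as for any Moebius map
print(schwarzian((z**3 - 3*z - 1) / (z**3 + 1)))    # the rational map in the figure
```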
{ "source": [ "https://mathoverflow.net/questions/38105", "https://mathoverflow.net", "https://mathoverflow.net/users/4362/" ] }
38,161
Obviously there exists a list of the finite simple groups, but why should it be a nice list, one that you can write down? Solomon's AMS article goes some way toward a historical / technical explanation of how work on the proof proceeded. But, though I would like someday to attain some appreciation of the mathematics used in the proof, I'm hoping that there is some plausibility argument out there to convince the non-expert (like me!) that a classification ought to be feasible at all. A few possible lines of thought come to mind: Groups have very simple axioms. So perhaps they should be easy to classify. This seems like not a very convincing argument, but perhaps there is some way to make it more convincing. Lie groups have a nice classification, and many tools are available for their study and that of their finite analogues. And in fact, it turns out that almost all finite simple nonabelian groups fall under this heading. Is it somehow clear a priori that these should be essentially all the examples? What sort of plausibility arguments might lead one to believe this? If there are not currently any good heuristic arguments to convince a non-expert that a classification should be possible, then will this always be the case? Or will we someday understand things better... There is probably a model-theoretic way to formalize this question. As a total guess, it might be something along the lines of "Do the finite simple groups have a finitely axiomatizable first-order theory?", except probably "finitely axiomatizable first-order theory" doesn't really capture the idea of a classification. If someone could point me towards how to formalize the idea of "classifiable", or "feasibly classifiable", I'd appreciate it. (Related: "FSGs up to order" on SE and on MO.) EDIT: To clarify, what I'd like is an argument that finite simple groups should be classifiable which does not boil down to an outline of the actual classification proof. Joseph O'Rourke asked on StackExchange Why are there only a finite number of sporadic simple groups? There, Jack Schmidt pointed out the work of Michler towards a uniform construction of the sporadic groups, as reviewed here. Following the citation trail, one finds a 1976 lecture by Brauer in which he says that he's not sure whether there are finitely many or infinitely many sporadic groups, and which he concludes with some historical notes that describe a back-and-forth over the decades: at times it was believed there were infinitely many sporadic groups, and at times that there were only finitely many. So it appears that the answer to my question is no -- at least up to 1976, there was no evidence apart from the classification program as a whole to suggest that there should be only finitely many sporadic groups. So I'd like to refocus my question: are such lines of argument developing today, or likely to develop in the (near? distant?) future? And has there been further clarification of what exactly is meant by a classification? (Is this too drastic a change? Should I start a new thread?)
It is unlikely that there is any easy reason why a classification is possible, unless someone comes up with a completely new way to classify groups. One problem, at least with the current methods of classification via centralizers of involutions, is that every simple group has to be tested to see if it leads to new simple groups containing it in the centralizer of an involution. For example, when the baby monster was discovered, it had a double cover, which was a potential centralizer of an involution in a larger simple group, which turned out to be the monster. The monster happens to have no double cover so the process stopped there, but without checking every finite simple group there seems no obvious reason why one cannot have an infinite chain of larger and larger sporadic groups, each of which has a double cover that is a centralizer of an involution in the next one. Because of this problem (among others), it was unclear until quite late in the classification whether there would be a finite or infinite number of sporadics. Any easy way to get around this has been overlooked by about a hundred finite group theorists.
{ "source": [ "https://mathoverflow.net/questions/38161", "https://mathoverflow.net", "https://mathoverflow.net/users/2362/" ] }
38,188
Is there an easy example of a (closed) hyperbolic 3-manifold that fibers over the circle but contains some totally geodesic surface? (Of course such manifolds would exist if the 'Virtually Fibered Conjecture' were correct, since a geodesic surface lifts to the fibered cover. But is there something more explicit?)
There are many specific known examples. Here is one construction: Start with the 3-torus $T^3$, parametrized in the standard way as $R^3/Z^3$. It fibers over the circle in many ways. Let $a$, $b$ and $c$ be three disjoint circles, coming from lines parallel to the x, y and z axes. For most fibrations, these three circles are transverse to the fibers. Form a branched cover of the torus with two-fold branching over all preimages of these 3 circles. The resulting manifold has a hyperbolic structure that can be constructed from right-angled hyperbolic dodecahedra, and is commensurable with the 4-fold branched cover of $S^3$ over the Borromean rings. You can think of it this way: you can take a unit cube as fundamental domain for the torus, and arrange that a, b and c lie on faces of the cube, each bisecting a pair of (glued-together) opposite faces. This induces a subdivision of the boundary of the cube into what look like rectangles, but are really pentagons. The map (x,y,z) -> x+y+z gives a fibration over the torus, and it also works for any branched cover as described. The preimage of any face of the cube is an extended face plane of a dodecahedron, and is always a totally geodesic immersed surface, but it splits into two embedded surfaces for suitable branched covers of $T^3$ (perhaps the one you first come up with). The tiling of hyperbolic space by right-angled dodecahedra has a cameo appearance in the video "Not Knot" we made at the Geometry Center, available together with "Outside In" on DVD from A K Peters. In the 1984 Scientific American article The Mathematics of three-dimensional manifolds that Jeff Weeks and I wrote, a manifold in this family (constructed from right-angled hyperbolic dodecahedra and having the properties you asked for) was described as the configuration space of a mechanical linkage. I don't think these particular properties were pointed out in Scientific American. This and other examples that are counterintuitive at first were a good part of my motivation when I raised the question whether all hyperbolic 3-manifolds virtually fiber over the circle, which at the time was a radical idea.
{ "source": [ "https://mathoverflow.net/questions/38188", "https://mathoverflow.net", "https://mathoverflow.net/users/39082/" ] }
38,191
Consider a Markov chain matrix $P$ of size $n \times n$ ($n$ states). $P$ is known to be: 1- Not irreducible (i.e., there exists at least one pair of states $i, j$ such that we cannot go from $i$ to $j$). 2- Not all states are recurrent. 3- Aperiodic (the return to some states can occur at irregular times). 4- There are at least two absorbing states $i, j$ ($P_{i,i} = P_{j,j} = 1$). Is it true that $P^k$ converges as $k$ goes to infinity? Is this result well known, or is the proof simple? Thanks.
{ "source": [ "https://mathoverflow.net/questions/38191", "https://mathoverflow.net", "https://mathoverflow.net/users/8021/" ] }
38,193
For simplicity, let me pick a particular instance of Gödel's Second Incompleteness Theorem: ZFC (Zermelo-Fraenkel Set Theory plus the Axiom of Choice, the usual foundation of mathematics) does not prove Con(ZFC), where Con(ZFC) is a formula that expresses that ZFC is consistent. (Here ZFC can be replaced by any other sufficiently good, sufficiently strong set of axioms, but this is not the issue here.) This theorem has been interpreted by many as saying "we can never know whether mathematics is consistent" and has encouraged many people to try and prove that ZFC (or even PA) is in fact inconsistent. I think a mainstream opinion in mathematics (at least among mathematician who think about foundations) is that we believe that there is no problem with ZFC, we just can't prove the consistency of it. A comment that comes up every now and then (also on mathoverflow), which I tend to agree with, is this: (*) "What do we gain if we could prove the consistency of (say ZFC) inside ZFC? If ZFC were inconsistent, it would prove its consistency just as well." In other words, there is no point in proving the consistency of mathematics by a mathematical proof, since if mathematics were flawed, it would prove anything, for instance its own non-flawedness. Hence such a proof would not actually improve our trust in mathematics (or ZFC, following the particular instance). Now here is my question: Does the observation (*) imply that the only advantage of the Second Incompleteness Theorem over the first one is that we now have a specific sentence (in this case Con(ZFC)) that is undecidable, which can be used to prove theorems like "the existence of an inaccessible cardinal is not provable in ZFC"? In other words, does this reduce the Second Incompleteness Theorem to a mere technicality without any philosophical implication that goes beyond the First Incompleteness Theorem (which states that there is some sentence $\phi$ such that neither $\phi$ nor $\neg\phi$ follow from ZFC)?
For the philosophical point encapsulated in (*) in the question, it seems that corollaries of the second incompleteness theorem are more relevant than the theorem itself. If we had doubts about the consistency of ZFC, then a proof of Con(ZFC) carried out in ZFC would indeed be of little use. But a proof of Con(ZFC) carried out in a more reliable system, like Peano arithmetic or primitive recursive arithmetic, would (before Gödel) have been useful, and I think this is what Hilbert was hoping for. Gödel's second incompleteness theorem tells us that this sort of thing can't happen (unless even the more reliable system is inconsistent).
{ "source": [ "https://mathoverflow.net/questions/38193", "https://mathoverflow.net", "https://mathoverflow.net/users/7743/" ] }
38,219
As I have been studying algebraic topology, something that I found puzzling was the existence of finite homotopy groups. For instance, $\pi_{4}(S^{2})\cong\pi_{5}(S^{4})\cong\mathbb{Z}/2\mathbb{Z}$. I was wondering if there was any kind of intuitive reason for why this might be true, and if there are spaces $X$ such that $\pi_{1}(X)$ is finite (and nontrivial). Speaking very roughly, it would seem that a finite, nontrivial fundamental group means that if you repeat a closed path enough times, it can be contracted to a point, something which I find rather hard to visualize. So the question is: Is there any intuitive reason for the existence of finite homotopy groups?
The simplest (to understand) case of finite $\pi_1$ is the group $SO_3$. This can be illustrated using an arm or a belt! $SO_3$ is the group of rotations in space, and a based loop in $SO_3$ can be thought of as a description of the motion of an object in such a way that it ends up back where it started. By attaching a strip of paper to the object, it's possible to see this path in space. For example, taking a belt and holding one end fixed whilst moving the other, or moving your hand (your arm forms the "strip"). So: hold your hand out in front of you; this is easiest if you do it palm-up. Keeping it palm-up, rotate it under your arm back to where you started. Your arm is now twisted (hopefully not too badly). Continue moving your hand in the same direction and with your palm facing up but this time over the top of your arm. When your hand gets back to where it started, your arm is now magically untwisted! So two times round the loop gets you back to an untwisted state, thus $2\gamma = 0$. But $\gamma \ne 0$ as evidenced by your twisted arm at the half-way stage. If you find this difficult to do, here's an alternative way using a belt. Take a belt and twist it once (that is, hold it out straight and imagine an axis along its length, then twist one end all the way around). Now try to straighten it without twisting either end (though you can move either end in space). Can't be done. But if you twist the belt twice then it can. (There are some funky YouTube videos showing the arm twists. If you get really good at it, you should do it with a beaker of water.)
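For completeness, the standard algebraic statement behind the trick (not spelled out above): $SO_3$ is homeomorphic to $\mathbb{RP}^3$, whose universal cover is the 2-to-1 cover $S^3 \cong SU(2) \to SO_3$, so
$$\pi_1(SO_3) \cong \mathbb{Z}/2\mathbb{Z}.$$
The arm/belt path traversed once is the nontrivial element; traversed twice it is null-homotopic, which is exactly what the untwisting demonstrates.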
{ "source": [ "https://mathoverflow.net/questions/38219", "https://mathoverflow.net", "https://mathoverflow.net/users/6856/" ] }
38,245
When trying to explain complexity theory to laypeople, I often mention randomized algorithms but seemingly lack good examples to motivate their usage. I often want to mention primality testing, but the standard randomized algorithms don't admit a simple description (or proof of correctness) in a lay atmosphere. I often resort to the saying that randomized algorithms allow "finding hay in a haystack", but that has little mathematical substance. The question: Is there a good example of a problem that (1) is easily explained (and sufficiently interesting), (2) has a simple randomized algorithm, (3) appears non-trivial to get an efficient deterministic algorithm, and ideally also (4) the randomized algorithm has a somewhat understandable proof of correctness -- so no Markov/Chernoff/random-walk mixing times?
Michael, how about approximating the volume of some shape in $\Re^n$ by sampling random points? This example has the following advantages: It seems easy enough to explain at a cocktail party (depending, of course, on the guests). It's used constantly in "real life" (indeed, pretty much any Monte Carlo simulation in physics, etc. could be interpreted as solving this problem). We don't know how to derandomize it. Indeed, the problem is easily seen to be PromiseBPP-complete (assuming of course that whether a point $x \in \Re^n$ belongs to the shape is decidable in polynomial time).
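To make this concrete, here is a minimal sketch of the sampling idea (my own illustration, taking the unit ball as the "shape"; note that naive cube sampling like this only makes sense for small $n$, since the ball occupies an exponentially small fraction of the cube -- the genuinely polynomial-time algorithms, e.g. Dyer-Frieze-Kannan, use random walks between nested bodies):

```python
import random

def estimate_ball_volume(n, samples=100_000):
    # Sample uniform points from the cube [-1, 1]^n and count how many
    # land inside the unit ball; rescale by the cube's volume.
    hits = 0
    for _ in range(samples):
        point = [random.uniform(-1.0, 1.0) for _ in range(n)]
        if sum(x * x for x in point) <= 1.0:
            hits += 1
    return (2.0 ** n) * hits / samples

print(estimate_ball_volume(3))  # should be close to 4*pi/3, about 4.19
```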
{ "source": [ "https://mathoverflow.net/questions/38245", "https://mathoverflow.net", "https://mathoverflow.net/users/4416/" ] }
38,307
At some point during my research I was confronted with this problem, but I did not dedicate serious time to it. Anyway it stayed in the back of my mind and I'm still interested in hints for it. Application: asymptotic properties of Schroedinger equations, scattering. You have two convex (compact, smooth, everything) disjoint sets in the plane. Consider a ray starting in the complement of the two sets and bouncing on the boundary of the sets in the usual way, with the ingoing and outgoing rays forming equal angles with the normal to the boundary. Q.: does there always exist a trapped ray which never leaves a ball containing the two sets? This can happen of course if the ray keeps bouncing forever between the two bodies. A trivial example is obtained if the sets have two parallel sides, and the ray is chosen perpendicular to both. Less trivial examples (even strictly convex) can be constructed by choosing a trajectory first, and then joining the dots (i.e., the turning points of the trajectory) with convex curves; with some work and some adjustments in the trajectory, you can produce plenty of examples. But is this always the case? Given two arbitrary bodies, does there always exist a trapped ray? EDIT (see Pietro's comment): I mean, another trapped ray besides the 'trivial' trapped ray bouncing between the closest points of the two sets (a general version of the trivial case mentioned above of two parallel sides). EDIT 2 (quick summary of the discussion for the benefit of future readers): the answer is yes for smooth boundaries and large (in particular, with nonempty interior) sets of initial points. A continuity argument is enough to prove this. If the boundary is non-smooth, problems may arise. E.g. for two polygons with a couple of facing parallel sides, the only trapped ray is the periodic one. PS in retrospect, the question was quite elementary, but I really enjoyed discussing it here :)
Yes, there is always a trapped ray. The simplest way to see it is to find the path between the two bodies that minimizes length. It is necessarily perpendicular to both surfaces. EDIT: I see the question was edited to ask for more than this trivial answer, so the new answer: there is a unique trapped ray from any starting point, but it is not trapped in backward time unless it is on the shortest path between the bodies. One can find it by minimizing distance of a zig-zag path alternately touching the two bodies a finite number of times, then passing to a limit. Here is a generalization: suppose you have a collection of smooth disjoint convex shapes $\{S_i\}$ in the plane arranged in a way that no straight line intersects more than two. Then, for any doubly infinite sequence of indices $ \dots, i_{-1}, i_{0}, i_{1}, \dots $ such that $i_j \ne i_{j+1}$, there is a unique trajectory that intersects the shapes in that order, starting with $S_{i_1}$ in the positive direction and $S_{i_0}$ going backward. If the sequence is periodic, you can find the trajectory just as for the case of two objects. For the infinite case, you can take limits. Even if the shapes are not convex, as long as they are smooth the trajectories still exist, but they are not necessarily unique. If you want to say something about the case when the obstacles are not smooth, you can extend the rule to make it a non-deterministic dynamical system, where a ray hitting a corner has choices which way to go. This kind of system is classical in dynamical systems and has been well understood since early last century. Perhaps someone more knowledgeable will supply appropriate references. It is a limiting special case of the theory of the geodesic flow on surfaces of negative curvature. In response to a comment, here is some more detail (that doesn't itself fit into a comment). The question was about stability and how to prove convergence under the limiting process. To prove existence, you don't need stability: just take a sequence of longer and longer rays, and choose a convergent subsequence. This exists because of compactness of the set of possible initial directions. To prove uniqueness: this follows from the hyperbolicity of the flow. Think of the convex obstacles as trick mirrors that make you look skinny, cylinders with a convex cross-section. The convexity implies that reflected rays diverge at least as fast as they would from a flat mirror. Successive reflected images of the two mirrors in each other get thinner and thinner, so they narrow down to a unique point. (In the three-dimensional picture, they're also narrowing vertically, just at the relatively slow rate at which images shrink with distance in Euclidean space rather than at the exponential rate resulting from mirrors that are convex to 2nd order). One way to formalize the discussion above is by use of triangle comparison theorems. Double the complement of the convex bodies to make a surface. The surface can be smoothly approximated by a surface of nonpositive curvature if it's comforting, but that's not technically necessary; the (intuitively obvious) statements about image sizes above become cases of the Toponogov comparison theorem.
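A tiny numerical illustration of the "minimize a zig-zag path" idea (my own sketch, with made-up disks, assuming scipy is available): for two disks, minimizing the length of a segment with an endpoint on each boundary recovers the shortest path between the bodies, which is perpendicular to both circles -- the trivial periodic trapped ray.

```python
import math
from scipy.optimize import minimize

c1, r1 = (0.0, 0.0), 1.0   # made-up disks
c2, r2 = (5.0, 1.0), 1.5

def chord(angles):
    # One bounce point on each circle, parametrized by an angle.
    t1, t2 = angles
    p = (c1[0] + r1 * math.cos(t1), c1[1] + r1 * math.sin(t1))
    q = (c2[0] + r2 * math.cos(t2), c2[1] + r2 * math.sin(t2))
    return math.hypot(p[0] - q[0], p[1] - q[1])

res = minimize(chord, x0=[0.0, math.pi])
print(res.x, res.fun)  # optimal angles; distance = |c1 c2| - r1 - r2, about 2.60
```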
{ "source": [ "https://mathoverflow.net/questions/38307", "https://mathoverflow.net", "https://mathoverflow.net/users/7294/" ] }
38,324
In 1977, Henry Pogorzelski published what some believed was a claimed proof of Goldbach's Conjecture in Crelle's Journal (292, 1977, 1-12) . His argument has not been accepted as a proof of Goldbach's Conjecture, but as far as I know it has not been shown that his argument is incorrect. Pogorzelski's argument is said to depend on the "Consistency Hypothesis," the "Extended Wittgenstein Thesis," and "Church's Thesis." Pogorzelski has a Ph.D. in mathematics (his advisor was Raymond Smullyan). Daniel Shanks says in Solved and Unsolved Problems in Number Theory (fourth edition, 1993) that: "It seems unlikely that (most) number-theorists will accept this as a proof [of Goldbach's Conjecture] but perhaps we should wait for the dust to settle before we attempt a final assessment." ( page 222 ) Did Pogorzelski claim to present a proof of Goldbach's Conjecture? If so, and this claimed proof has not been disproven after 33 years, I am curious why this would be the case, given that Shanks considers it important enough to mention in his book.
In the 1970's Pogorzelski published a sequence of four papers in Crelle concerning the Goldbach Conjecture (and various generalizations and abstractions): MR0347566 (50 #69) Pogorzelski, H. A. On the Goldbach conjecture and the consistency of general recursive arithmetic. Collection of articles dedicated to Helmut Hasse on his seventy-fifth birthday, II. J. Reine Angew. Math. 268/269 (1974), 1--16. MR0505402 (58 #21554) Pogorzelski, H. A. Dirichlet theorems and prime number hypotheses of a conditional Goldbach theorem. J. Reine Angew. Math. 286/287 (1976), 33--45. MR0434999 (55 #7961) Pogorzelski, H. A. Semisemiological structure of the prime numbers and conditional Goldbach theorems. J. Reine Angew. Math. 290 (1977), 77--92. MR0538046 (58 #27414) Pogorzelski, H. A. Goldbach conjecture. J. Reine Angew. Math. 292 (1977), 1--12. The last paper is the one referred to in the question. (Edit: as I have been writing this, the question has been edited to make this reference explicit, which is good.) I think that describing Pogorzelski's last paper as a purported proof of the Goldbach Conjecture is a mischaracterization. Rather what he shows is an implication: three statements which are not known to be true imply Goldbach. (I don't pretend to understand these three statements. The only one that I recognize at all is Church's Thesis, but although I think I know what that means, it does not denote to me a precise mathematical conjecture, so I am for sure out of my depth here.) So far as I can see, Pogorzelski himself never claimed that his 1977 paper is a proof of Goldbach. Indeed, he worked for many years thereafter on the problem and published nearly two thousand pages of further work. Specifically, in 1982 he published the first of a proposed seven volume series, Foundations of a semiological theory of numbers , whose ultimate goal is to prove Goldbach by showing that a disproof is impossible in a certain formal system. Volume 1 is 608 pages. Volume 2 (743 pages) appeared in 1985. Volume 3 (522) pages appeared in 1988. MathSciNet does not list any further volumes. To summarize, his programme for proving Goldbach seems to be as yet unfinished (and, of course, may well be unfinishable), but none of the reviews I read -- some of which are written by leading mathematicians -- raised any mathematical objections to the work that has been published.
{ "source": [ "https://mathoverflow.net/questions/38324", "https://mathoverflow.net", "https://mathoverflow.net/users/2594/" ] }
38,399
Can we find out what the eigenvalues of a sum of two matrices are by performing an operation on the eigenvalues and eigenvectors of the individual matrices?
I will interpret this to be the question `What do you need to know beyond the eigenvalues of $A$ and of $B$ to get the eigenvalues for $A+B$ from those of $A$ and $B$?' It's a good question. Here's one way to conceptualize it: Let's modify it to talk of the roots of the characteristic polynomial, rather than just eigenvalues. Over the complexes, for most but not all matrices, these are the same. Knowing the roots is equivalent to knowing the characteristic polynomials. You can put the characteristic polynomial of $A$ into homogeneous form as the determinant of $\lambda_1 I + \lambda_2 A$ --- this translates easily back and forth to the usual form. With two matrices, there is a polynomial in one more variable: the determinant of $\lambda_1 I + \lambda_2 A + \lambda_3 B$. The characteristic polynomial for $A+B$ is obtained by specializing this 3-variable polynomial. I believe this can be any homogeneous polynomial $P(x,y,z)$ of degree $n$ in 3 variables subject to the condition P(1,0,0)=(the determinant of the identity) = 1, but I don't know for sure: perhaps someone can elucidate this point. Update: Dave Speyer elucidated (see his comment below the fold): Yes, this is true. If the curve is smooth, then the space of such representations is $J \setminus \Theta$, where $J$ is the Jacobian of the curve and $\Theta$ is the $\Theta$ divisor. See Vinnikov, "Complete description of determinantal representations of smooth irreducible curves", Linear Algebra Appl. 125 (1989), 103--140 – David Speyer The question, what additional information do you need besides eigenvalues of $A$ and $B$ to get the eigenvalues of $A + B$, translates into a question about homogeneous polynomials. There is a triangle's worth of coefficients. You are given the coefficients on two of the sides of the triangle. What you want is the sum of coefficients along lines parallel to the third side. There are many ways you might get this information, depending on context. For example, the characteristic polynomial of $A B^{-1}$ corresponds to the 3rd edge of the triangle: in dimension 2, that gives the missing coefficient (all you actually need is $\mathrm{Trace}(AB^{-1})$), while in higher dimensions, it's not enough. To illustrate, here's a plot for a pair of pseudo-random matrices of dimension 9 whose coefficients were chosen uniformly and independently in the interval $[0,1]$. This is the slice $\lambda_1 = 1$ of the homogeneous form of the characteristic polynomial, so on the two axes, its roots are the negative reciprocals of characteristic roots for $A$ and $B$, and on the diagonal, for $A+B$. The characteristic polynomial of $A B^{-1}$ determines the asymptotic behavior at infinity, in this slice. You can of course only see the real eigenvalues in this real picture, but you can see how they move around as you vary the slice through the origin. Under varying circumstances, you will have more a priori information. If $n=2$, the curves are conics. If the matrices are symmetric, they will intersect each line through the origin in $n$ points --- etc. (source: Wayback Machine) Addendum: Out of curiosity, I drew some pictures in the symmetric case, again for dimension 9. Here is a typical example, for a pair of pseudo-random 9-dimensional symmetric matrices. Observe the eigenvalues meeting and crossing as lines through the origin sweep around. [The buttons are from TabView in Mathematica, and they don't work in this static image.] (source: Wayback Machine)
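A quick symbolic illustration of the three-variable polynomial (my own sketch with made-up $2\times 2$ matrices, assuming sympy is available): specializing $\det(\lambda_1 I + \lambda_2 A + \lambda_3 B)$ along $\lambda_2 = \lambda_3$ recovers the characteristic polynomial of $A+B$.

```python
import sympy as sp

l1, l2, l3, t = sp.symbols('l1 l2 l3 t')
A = sp.Matrix([[1, 2], [0, 3]])  # made-up matrices
B = sp.Matrix([[0, 1], [1, 1]])

P = sp.expand((l1 * sp.eye(2) + l2 * A + l3 * B).det())
print(P)  # the 3-variable homogeneous polynomial
# Specialize: l1 -> -t, l2 = l3 = 1 gives det(A + B - t I).
print(sp.expand(P.subs({l1: -t, l2: 1, l3: 1})))
print(sp.expand((A + B - t * sp.eye(2)).det()))  # the same polynomial
```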
{ "source": [ "https://mathoverflow.net/questions/38399", "https://mathoverflow.net", "https://mathoverflow.net/users/9160/" ] }
38,414
A friend of mine recently asked me if I knew any simple, conceptual argument (even one that is perhaps only heuristic) to show that if a triangulated manifold has a non-vanishing vector field, then Euler's formula (the alternating sum of the number of faces of given dimensions) vanishes. I didn't see how to get started, but it seems like a good MO question.
Consider a straight simplex $\Delta^n$ in $\mathbb R^n$ and take a generic constant vector field $v$ (transversal to the faces of $\Delta^n$). Choose all faces of $\Delta^n$ such that the field moves the center of the face inside the simplex. Then the alternating sum of the numbers of these simplices (signed by the parity of the dimension) is zero. Now, if you have a fine enough triangulation of $M$ and a vector field transversal to all faces, we can apply the above reasoning to the whole manifold. Edited. There was an explanation here with a mistake (spotted by Sergei) of why each simplex contributes zero, but the statement is correct. The new proof is as follows: $(-1)^{n-1}+(-1)^n=0$. Proof. Let us say that $v$ is the sunlight. Then it enlightens a part of the simplex $\Delta^{n+1}$. Consider the shade from $\Delta^n$ on some plane below the simplex. The shade is a convex set. It is naturally decomposed into simplices, so the sum of simplices over this shade is $(-1)^{n-1}$ (because the simplices in the boundary of this convex set do not contribute). And we also get $(-1)^n$ for $\Delta^n$.
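A sanity check in the smallest cases (my own, following the recipe above; faces of dimension $d$ are counted with sign $(-1)^d$): for the $1$-simplex $[0,1]$ with $v$ pointing right, the selected faces are the left vertex (its center is pushed inside) and the edge itself (its barycenter stays inside), giving $(+1) + (-1) = 0$. For a triangle $ABC$ with generic $v$ pointing into the simplex from near vertex $A$, the selected faces are the vertex $A$ (the only vertex whose tangent cone contains $v$), the two edges $AB$ and $AC$ (their midpoints are pushed to the interior side), and the $2$-dimensional face itself, giving $1 - 2 + 1 = 0$.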
{ "source": [ "https://mathoverflow.net/questions/38414", "https://mathoverflow.net", "https://mathoverflow.net/users/7311/" ] }
38,632
I asked this question on the new Theoretical Computer Science "overflow" site, and commenters suggested I ask it here. That question is here , and it contains additional links, which I doubt I can embed here because I don't have enough reputation. Anyway, here goes: Objective : Settle the conjecture that there is no projective plane of order 12. In 1989, using computer search on a Cray, Lam proved that no projective plane of order 10 exists. Now that God's Number for Rubik's Cube has been determined after just a few weeks of massive brute force search (plus clever math of symmetry), it seems to me that this longstanding open problem might be within reach. I'm hoping this question can serve as a sanity check. The Cube was solved by reducing the total problem size to "only" 2,217,093,120 distinct tests, which could be run in parallel. Questions: There have been special cases of nonexistence shown (again, by computer search). Does anyone know, if we remove those and exhaustively (cleverly?) search the rest, if the problem size is on the order of the Cube search? (Maybe too much to hope for that someone knows this....) Any partial information in this vein?
I am actually not aware of many results on planes of order 12 in the vein of what Lam et al. did (I list the few I know of below). There seems to be a plethora of papers proving restrictions on the collineation group of a hypothetical such plane, but I am not aware of how any of these could be used to settle the existence problem. Moreover, I am quite skeptical that disproving the existence of planes of order 12 by a computer search would help for the general theory much. Though it certainly would be nice to know, and if one actually found a plane of order 12, that would be quite exciting; but it's hard to gain deep insights from these combinatorial brute force searches. Extending the approach by Lam et al. to planes of order 12 is in principle possible. But probably still not feasible with today's computers, as the search space is a lot bigger than for order 10. Anyway, here are some reasons why I think that, and at the same time a sketch of things that would have to be done. But my personal belief is that one will need some substantially new ideas to make progress on this. Then again, only by actually trying to do it can one be sure... :) From here on, I'll assume you are familiar with Lam's "The Search for a Finite Projective Plane of Order 10" and the notation used within. A crucial point was the reduction of the (non-)existence to the value of certain weight enumerator coefficients $w_0$ to $w_{n^2+n+1}$ (a good exposition can be found in "On the existence of a projective plane of order 10" by MacWilliams, Sloane and Thompson). But the real breakthrough was when Assmus and Mattson proved that one only needs to know $w_{12},w_{15},w_{16}$ to determine all others. I'll refer to these as essential weight enumerator coefficients. Some steps towards this for order 12 have been executed in "Ternary and binary codes for a plane of order $12$" by Hall and Wilkinson. Yet many nice properties and theorems will be hard to recover for order 12. E.g. for orders of the form $8m+2$, one knows the $\mathbb{F}_2$-rank of the incidence matrix. Not so for order 12, where working with a ternary code is in some ways more "natural." In particular, the $\mathbb{F}_3$-rank of the incidence matrix is known, but, alas, working with a ternary code means losing the natural identification of codewords with point sets, so tons of new machinery would be needed to exploit the ternary code. Thus I'll focus on the binary code case here. Anyway, let's assume we reduced the number of essential weight enumerator coefficients as much as we can (Hall and Wilkinson pushed it down to 16; remember, for $n=10$ we had only 3). We must compute the essential coefficients. According to Lam, for $n=10$ and the case $w_{12}$, they estimated, using a Monte-Carlo method (before doing it), that $4\times 10^{11}$ configurations had to be checked. I don't have a good way to compute an estimate for $n=12$, but for that there are 16 coefficients to determine, and I'd hazard a guess that some of them are much, much harder than the three cases for $n=10$ put together. Several orders of magnitude. However, this is just gut feeling. So let's assume we had somehow managed to overcome this and had computed all essential weight enumerator coefficients. We then would have the full weight enumerator at hand (and no projective plane arose as a byproduct of our search).
Now, the hard part starts (corresponding roughly to the second half of Lam's paper), the one that took them 2 years for $n=10$: We have to somehow derive a contradiction (or construct a plane). A lot of groundwork needs to be done (extending stuff from $n=10$) before one can even start writing code... Ah well. To anybody who wants to try out this strategy on $n=12$, I would recommend first trying to reproduce the $n=10$ result -- with modern computers it should be possible to do this much, much quicker than it took Lam et al. originally (this verification might already interest some people on its own). Actually, at the very start, try it with even smaller examples ($n=6,8$), then go up.
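Since the discussion leans on incidence matrices, here is a tiny self-contained check of the basic identity they satisfy (my own illustration, not from the papers cited): an incidence matrix $N$ of a projective plane of order $n$ satisfies $N N^T = nI + J$, with $J$ the all-ones matrix. Below this is verified for the Fano plane, the plane of order 2, built from the difference set $\{1,2,4\}$ mod 7.

```python
import numpy as np

# Lines of the Fano plane from the planar difference set {1, 2, 4} mod 7.
lines = [{(i + 1) % 7, (i + 2) % 7, (i + 4) % 7} for i in range(7)]
N = np.array([[1 if p in line else 0 for p in range(7)] for line in lines])

n = 2  # order of the plane
expected = n * np.eye(7, dtype=int) + np.ones((7, 7), dtype=int)
assert (N @ N.T == expected).all()
print("N N^T = nI + J verified for the Fano plane.")
```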
{ "source": [ "https://mathoverflow.net/questions/38632", "https://mathoverflow.net", "https://mathoverflow.net/users/9197/" ] }
38,639
How big a gap is there between how you think about mathematics and what you say to others? Do you say what you're thinking? Please give either personal examples of how your thoughts and words differ, or describe how they are connected for you. I've been fascinated by the phenomenon the question addresses for a long time. We have complex minds evolved over many millions of years, with many modules always at work. A lot we don't habitually verbalize, and some of it is very challenging to verbalize or to communicate in any medium. Whether for this or other reasons, I'm under the impression that mathematicians often have unspoken thought processes guiding their work which may be difficult to explain, or they feel too inhibited to try. One prototypical situation is this: there's a mathematical object that's obviously (to you) invariant under a certain transformation. For instance, a linear map might conserve volume for an 'obvious' reason. But you don't have good language to explain your reason---so instead of explaining, or perhaps after trying to explain and failing, you fall back on computation. You turn the crank and, without undue effort, demonstrate that the object is indeed invariant. Here's a specific example. Once I mentioned this phenomenon to Andy Gleason; he immediately responded that when he taught algebra courses, if he was discussing cyclic subgroups of a group, he had a mental image of group elements breaking into a formation organized into circular groups. He said that 'we' never would say anything like that to the students. His words made a vivid picture in my head, because it fit with how I thought about groups. I was reminded of my long struggle as a student, trying to attach meaning to 'group', rather than just a collection of symbols, words, definitions, theorems and proofs that I read in a textbook. Please note: I'm not advocating that we turn mathematics into a touchy-feely subject. I'm not claiming that the phenomenon I've observed is universal. I do think that paying more attention than current custom to how you and others are really thinking, to the intuitions, is helpful both in proving theorems and in explaining mathematics. I'm very curious about the varied ways that people think, and I would like to hear. What am I really thinking? I'm anxious about offending the guardians of the forum and being scolded (as they have every right to do) for going against clearly stated advice with a newbie mistake. But I can't help myself because I'm very curious how you will answer, and I can endure being scolded.
Final addition: Since I've produced many rambles, I thought I'd close my (anti-)contribution with a distilled version of the example I've attempted below. It's still something very standard, but, I hope, in the spirit of the original question. I'll describe it as if it were a personal thing. Almost always, I think of an integer as a function of the primes. So for 20, say,
$$20(2)=0,\quad 20(3)=2,\quad 20(5)=0,\quad 20(7)=6,\quad \dots,\quad 20(19)=1,\quad 20(23)=20,\quad 20(29)=20,\quad 20(31)=20,\quad 20(37)=20,\quad \dots$$
It's quite a compelling image, I think, an integer as a function that varies in this way for a while before eventually leveling off. But, for a number of reasons, I rarely mention it to students or even to colleagues. Maybe I should. Original answer: It's unclear if this is an appropriate kind of answer, in that I'm not putting forward anything very specific. But I'll take the paragraph in highlight at face value. I find it quite hard to express publicly my vision of mathematics, and I think this is a pretty common plight. Part of the reason is the difficulty of putting into words a sense of things that ultimately stems from a view of the landscape, as may be suggested by the metaphor. But another important reason is the disapprobation of peers. To appeal to hackneyed stereotypes, each of us has in him/her a bit of Erdos, a bit of Thurston, and perhaps a bit of Grothendieck, of course in varying proportions depending on education and temperament. I think I saw somewhere on this site the sentiment that 'a bad Erdos still might be an OK mathematician, but a bad Grothendieck is really terrible,' or something to that effect. This opinion is surrounded by a pretty broad consensus, I think. If I may be allowed some cliches now from the world of finance, it's almost as though definite mathematical results are money in the bank. After you've built up some savings, you can afford to spend a bit by philosophizing. But then, you can't let the balance get too low because people will start looking at you in funny, suspicious ways. I know that on the infrequent occasions* that I get carried away and convey at any length my vision of how a certain area of mathematics should work, what should be true and why, compelling analogies, and so on, I feel rather embarrassed for a little while. It feels like I am indeed running out of money and will need to back up the highfalutin words with some theorems (or at least lemmas) relatively soon. (And then, so many basically sound ideas are initially mistaken for trivial reasons.) Now, I wish to make it clear that unlike Grothendieck (see the beginning paragraphs of this letter to Faltings) I find this quite sensible a state of affairs. For myself, it seems to be pretty healthy that my tendency to philosophize is held in check by the demand of the community that I have something to show for it. I grant that this may well be because my own visions are so meagre in comparison to Grothendieck's. In any case, the general phenomenon itself is interesting to observe, in myself and in others. Incidentally, I find the peer pressure in question remarkably democratic. Obviously, a well-established mathematician typically has more money than average in the bank, so to speak. But it's not a few times I've observed eminent people during periods of slowdown, being gradually ignored or just tolerated in their musings by many young people, even students. Meanwhile, if you're an energetic youngster with some compelling vision of an area of mathematics, it may not be so bad to let loose.
If you have a really good business idea, it may even make sense to take out a large loan. And provided you have the right sort of personality, the pressure to back up your philosophical bravado with results may spur you on to great things. This isn't to say you won't have to put up with perfectly reasonable looks of incredulity, even from me, possibly for years. *Maybe it seems frequent to my friends. Added: Since I commented above on something quite general, here is an attempt at a specific contribution. It's not at all personal in that I'm referring to a well-known point of view in Diophantine geometry, whereby solutions to equations are sections of fiber bundles . Some kind of a picture of the fiber bundle in question was popularized by Mumford in his Red Book. I've discovered a reproduction on this page . The picture there is of $Spec(\mathbb{Z}[x])$ , but interesting equations even in two variables will conjure up a more complicated image of an arithmetic surface fibered over the 'arithmetic curve' $Spec(\mathbb{Z})$ . A solution to the equation will then be a section of the bundle cutting across the fibers, also in a complicated manner. Much interesting work in number theory is concerned with how the sections meet the singular fibers. Over the years, I've had many different thoughts about this perspective. For me personally, it was truly decisive, in that I hadn't been very interested in number theory until I realized, almost with a shock, that the study of solutions to equations had been 'reduced' to the study of maps between spaces of a quite rigid sort. In recent years, I think I've also reconciled myself with the more classical view, whereby numbers are some kinds of algebraic gadgets. That is, thinking about matters purely algebraically does seem to provide certain flexible modes that can be obscured by the insistence on geometry. I've also discovered that there is indeed a good deal of variation in how compelling the inner picture of a fiber bundle can be, even among seasoned experts in arithmetic geometry. Nevertheless, it's clear that the geometric approach is important, and informs a good deal of important mathematics. For example, there is an elementary but key step in Faltings' proof of the Mordell conjecture referred to as the 'Kodaira-Parshin trick,' whereby you (essentially) get a compact curve $X$ of genus at least two to parametrize a smooth family of curves $$Y\rightarrow X.$$ Then, whenever you have a rational point $$P:Spec(\mathbb{Q})\rightarrow X$$ of $X$ , you can look at the fiber $Y_P$ of $Y$ above $P$ , which is itself a curve. The argument is that if you have too many points $P$ , you get too many good curves over $\mathbb{Q}$ . What is good about them? Well, they all spread out to arithmetic surfaces over the spectrum of $\mathbb{Z}$ that are singular only over a fixed set of places. This part can be made obvious by spreading out both $Y$ , $X$ , and the map between them over the integers as well, right at the outset. If you don't have that picture in mind, the goodness of the $Y_P$ is not at all easy to explain. Anyways, what I wanted to say is that the picture of solutions as sections to fiber bundles is really difficult to explain to people without a certain facility in scheme theory. Because it seems so important, and because it is a crucial ingredient in my own thinking, I make an attempt every now and then in an exposition at the colloquium level, and fail miserably. I notice almost none of my colleagues even try to explain it in a general talk. 
Now, I've mentioned already that this is far from a personal image of a mathematical object. But it still seems to be a good example of a very basic picture that you refrain from putting into words most of the time. If it really had been only a personal vision, it may even have been all but maddening, the schism between the clarity of the mental image and what you're able to say about it. Note that the process of putting the whole thing into words in a convincing manner in fact took thousands of pages of foundational work. Added again: Professor Thurston: To be honest, I'm not sure about the significance of competing mental images in this context. If I may, I would like to suggest another possibility. It isn't too well thought out, but I don't believe it to be entirely random either. Many people from outside the area seem to have difficulty understanding the picture I mentioned because they are intuitively suspicious of its usefulness . Consider a simpler picture of the real algebraic curve that comes up when one studies cubic equations like $$E: y^2=x^3-2.$$ There, people are easily convinced that geometry is helpful, especially when I draw the tangent line at the point $P=(3,5)$ to produce another rational point. What is the key difference from the other picture of an arithmetic surface and sections? My feeling is it has mainly to do with the suggestion that the point itself has a complicated geometry encapsulated by the arrow $$P:Spec(\Bbb{Z})\rightarrow E.$$ That is, spaces like $Spec(\Bbb{Q})$ and $Spec(\Bbb{Z})$ are problematic and, after all, are quite radical. In $Spec(\Bbb{Q})$ , one encounters the absurdity that the space $Spec(\Bbb{Q})$ itself is just a point. So one has to go into the whole issue that the point is equipped with a ring of functions, which happens to be $\Bbb{Q}$ , and so on. At this point, people's eyes frequently glaze over, but not, I think, because this concept is too difficult or because it competes with some other view. Rather, the typical mathematician will be unable to see the point of looking at these commonplace things in this way. The temptation arises to resort to persuasion by authority then (such and such great theorem uses this language and viewpoint, etc.), but it's obviously better if the audience can really appreciate the ideas through some first-hand experience, even of a simple sort. I do have an array of examples that might help in this regard, provided someone is kind enough to be still interested. But how helpful they really are, I'm quite unsure. At the University of Arizona, we once had a study seminar on random matrices and number theory, to which I was called upon to contribute a brief summary of the analogous theory over finite fields. Unfortunately, this does involve some mention of sheaves, arithmetic fundamental groups, and some other strange things. Afterwards, my colleague Hermann Flaschka, an excellent mathematician with whom I felt I could speak easily about almost anything, commented that he couldn't tell if the whole language just consisted of word associations or if some actual geometry was going on. Now, I'm sure this was due in part to my poor powers of exposition. But further conversation gave me the strong impression that the question that really went through his mind was: 'How could it possibly be useful to think about these objects in this way?' To restate my point, I think a good deal of conceptual inhibition comes from a kind of intuitive utilitarian concern. 
Matters are further complicated by the important fact that this kind of conceptual conservatism is perfectly sensible much of the time. By the way, my choice of example was somewhat motivated by the fact that it is quite likely to be difficult for people outside of arithmetic geometry, including many readers of this forum. This gives it a different flavor from the situations where we all understand each other more or less well, and focus therefore on pedagogical issues referring to classroom practice. Yet again: Forgive me for being a bore with these repeated additions. The description of your approach to lectures seems to confirm the point I made, or at least had somewhat in mind: When someone can't understand what we try to explain, it's maybe in his or her best interest (real or perceived) not to. It's hard not to feel that this happens in the classroom as well oftentimes. This then brings up the obvious point that what we try to say is best informed by some understanding of who we're speaking to as well as some humility*. As a corollary, what we avoid saying might equally well be thus informed. My own approach, by the way, is almost opposite to yours. Of course I can't absorb technical details just sitting there, but I try my best to concentrate for the whole hour or so, almost regardless of the topic. (Here in Korea, it's not uncommon for standard seminar lectures to be two hours.) If I may be forgiven a simplistic generalization, your approach strikes me as common among deeply creative people, while perennial students like me tend to follow colloquia more closely. I intend neither flattery nor modesty with this remark, but only observation. Also, I am trying to create a complex picture (there's that word again) of the problem of communication. As to $Spec(\Bbb{Z})$ , perhaps there will be occasion to bore you with that some other time. Why don't you post a question (assuming you are interested)? Then you are likely to get a great many perspectives more competent than mine. It might be an interesting experiment relevant to your original question. *I realize it's hardly my place to tell anyone else to be humble.
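As a playful footnote to the 'integer as a function of the primes' picture at the top of this answer, the values are a one-line computation; here is a quick illustrative sketch (Python with sympy; the range of primes is an arbitrary choice of mine):

```python
from sympy import primerange

n = 20
for p in primerange(2, 40):
    print(f"{n}({p}) = {n % p}")   # 20(2)=0, 20(3)=2, ..., 20(19)=1, 20(23)=20, ...
```

Once $p$ exceeds $n$, the value $n \bmod p$ is simply $n$: this is the 'leveling off' of the function.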
{ "source": [ "https://mathoverflow.net/questions/38639", "https://mathoverflow.net", "https://mathoverflow.net/users/9062/" ] }
38,680
I'm fearful about putting this forward, because it seems the answer should be elementary: can an algebraic number of absolute value 1 have a conjugate whose absolute value is different from 1? Certainly, the Weak Approximation Theorem allows every system of simultaneous inequalities among archimedean absolute values to be satisfied. But equality combined with inequality?
Yes. Take $$ \alpha=\sqrt{2-\sqrt{2}}+i\sqrt{\sqrt{2}-1}. $$ Then $|\alpha|^2=(2-\sqrt{2})+(\sqrt{2}-1)=1$, so $\alpha$ lies on the unit circle. Neither of the conjugates $$ \sqrt{2+\sqrt{2}}\pm \sqrt{\sqrt{2}+1} $$ has absolute value 1. It is impossible, however, if $\mathbb{Q}(\alpha)/\mathbb{Q}$ is abelian, since then all automorphisms commute with complex conjugation. This was all stolen from Washington's Cyclotomic Fields book.
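As a quick numerical sanity check (an illustrative Python sketch of mine; the numbers are just the quantities named in the answer):

```python
import numpy as np

s = np.sqrt(2.0)
alpha = np.sqrt(2 - s) + 1j * np.sqrt(s - 1)
print(abs(alpha))                 # 1.0: alpha lies on the unit circle

for sign in (+1.0, -1.0):
    conj = np.sqrt(2 + s) + sign * np.sqrt(s + 1)
    print(conj)                   # about 3.402 and 0.294; neither has absolute value 1
```

Note that the two real conjugates multiply to $(2+\sqrt{2})-(\sqrt{2}+1)=1$, so they are reciprocals of each other; this is consistent with the fact that the minimal polynomial of an algebraic number on the unit circle is self-reciprocal, so its roots come in pairs $\{x, 1/x\}$.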
{ "source": [ "https://mathoverflow.net/questions/38680", "https://mathoverflow.net", "https://mathoverflow.net/users/4994/" ] }
38,751
I'm seeking a function which is Hölder continuous but does not belong to any Sobolev space. Question: More precisely, I'm searching for a function $u$ which is in $C^{0,\gamma}(\Omega)$ for $\gamma \in (0,1)$ and $\Omega$ a bounded set such that $u \notin W_{loc}^{1,p}(\Omega)$ for any $1 \leq p \leq \infty$. Take $\Omega$ to be bounded, open. My first guess is to do a construction with a Weierstrass function. I know this is differentiable 'nowhere' but that doesn't convince me it isn't weakly differentiable in some bizarre way. Hopefully someone knows of an explicit example.
Your guess is indeed right. Following a similar idea gives you the Takagi or blancmange function . It is even quasi-Lipschitz (it has a modulus of continuity $\omega(t)=ct(|\log(t)|+1)$ for a suitable constant $c>0$), thus it's Hölder of any positive exponent less than 1. It is not even BV in any open interval, thus in $W^{1,p}_{loc}$ for no $p\geq1$. Rmk 1. The above example is for dimension 1: but of course it holds in any dimension a fortiori. Rmk 2. To get an example with a more classical flavor, actually a Weierstrass function, replace $s(x)$ with $\cos(x)$. I'd say that the resulting Fourier series defines a function with the same features, for the same reasons (the function $\cos(x)$ works better than $\sin(x)$, in view of point 2 below.) Rmk 3. Once you know that the Weierstrass function $f(x):=\sum_{k=0}^\infty 2^{-k}\cos(2^k x)$ is nowhere differentiable, you also have that it is BV on no open interval, for BV on an interval would imply differentiability a.e. there. However, for your needs it seems more direct just showing it has infinite variation on any interval. Details. To prove that the Takagi function $f(x)$ admits the above modulus of continuity, recall that $f$ is characterized as the fixed point of the affine contraction $T:C_b(\mathbb{R})\to C_b(\mathbb{R})$ such that $(Tf)(x)=\frac{1}{2}f(2x)+s(x),$ for all $x\in\mathbb{R}$, where $s(x)$ is the distance function from the integers (a zig-zag piecewise linear 1-periodic function). Just find a $c$ such that the subset of $C_b(\mathbb{R})$ of functions that admit $\omega$ as modulus of continuity is a $T$-invariant set. The latter subset is obviously closed and non-empty, so the fixed point is there. (The above illustrates a standard general technique to prove properties of objects found by means of the contraction principle). Proving that $f$ is not of bounded variation on $[0,1]$ (hence in no open interval, due to the self-similarity encoded in the fixed point equation) requires a small computation on the partial sums $f_n$ of the series defining $f$. Let $$f_n(x):=\sum_{k=0}^{n-1}\, 2^{-k} s(2^k x).$$ First note that the derivative of $f_n$ only takes integer values, which of course come as a result of the sum of $n$ terms $\pm 1$ (with all the $2^n$ possible signs). In particular, for any $n\in\mathbb{N}$ the function $f_{2n}$ has ${2n \choose n}$ flat intervals of length $2^{-2n}$ within the unit interval $I$, and has derivative at least $2$ in absolute value elsewhere in $I$. Thus, for the subsequent odd index $2n+1,$ the function $f_{2n+1}$ has ${2n \choose n}$ local maxima in $I$ (located in the mid-points of the above intervals). Moreover, passing to $f_{2n+1}$ each maximum point contributes to the increment of the total variation with $2^{-2n}$, while the total variation remains unchanged passing from $2n+1$ to the next even index $2n+2$. The conclusion is that, for any $n$, the total variation of $f_n$ on $I$ is $$V(f_n;I)=\sum_{0\leq k < n/2}{2k\choose k}2^{-2k} =\Theta\big(\sqrt{n}\big),$$ since by the classical asymptotics for the central binomial coefficient, ${2k \choose k}=\frac{4^k}{\sqrt{\pi k}}(1+o(1)),\, k\to\infty.$ So actually $V(f_n;I)$ diverges. Yet this would not be sufficient to conclude that $V(f,I)=\infty,$ as the total variation is only lower semicontinuous with respect to the uniform convergence. 
However, the discrete variation on a given subdivision $P:=\{t_0 < \dots < t_r \}$ $$V(f_n; P\, )=\sum_{i=0}^{r-1}\, \big|f_n(t_{i+1})-f_n(t_i)\big|$$ does of course pass to the limit even under pointwise convergence. Now the point is that, for the binary subdivision $P_m:=\{ k2^{-m} \, : \, 0 \le k \le 2^m \},$ we have $V(f_n;I)=V(f_n;P_m)$ as soon as $n \leq m$, since $P_m$ then contains all the nodes of the piecewise linear function $f_n$. Moreover, $f_n$ and $f_m$ coincide on $P_m$ for all $n \geq m$, because the extra terms $2^{-k}s(2^k x)$ with $k \geq m$ vanish at every point $x=j2^{-m}$. So for all $m$, letting $n\to\infty$ $$V(f;P_m)=\lim_{n\to\infty }V(f_n;P_m)=V(f_m;P_m)=V(f_m;I)$$ and $$V(f;I)=\sup_{m\in\mathbb{N}}V(f;P_m)=\infty,$$ as we wished to show.
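The variation computation is easy to check numerically. A quick illustrative sketch (Python/NumPy; my own check — it just evaluates the partial sums $f_n$ on the dyadic grid $P_n$, which contains all their nodes):

```python
import numpy as np
from math import comb

def s(x):
    # distance to the nearest integer: the zig-zag function
    return np.abs(x - np.round(x))

def f_n(x, n):
    # partial sum of the Takagi series
    return sum(2.0**-k * s(2.0**k * x) for k in range(n))

for n in range(1, 11):
    grid = np.arange(2**n + 1) / 2.0**n          # the subdivision P_n
    variation = np.abs(np.diff(f_n(grid, n))).sum()
    formula = sum(comb(2 * k, k) * 4.0**-k for k in range((n + 1) // 2))
    print(n, variation, formula)                 # the two columns agree
```

The printed variations grow like $\sqrt{n}$, exactly as the central-binomial asymptotics predict.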
{ "source": [ "https://mathoverflow.net/questions/38751", "https://mathoverflow.net", "https://mathoverflow.net/users/8755/" ] }
38,752
I have not studied category theory in extreme depth, so perhaps this question is a little naive, but I have always wondered if analysis could be taught naturally using categories. I ask this because it seems like quite a lot of topological and group theoretic concepts can be defined most succinctly using categorical concepts, and the categorical definitions are more revealing. So my question is: (1) Is it possible/beneficial to teach analysis using category theory? and (2) Are there any good textbooks that use this method?
I hesitate to let this out, but there's always this cute little note that I learned from another MO answer (I don't know which one): https://www.maths.ed.ac.uk/~tl/glasgowpssl/banach.pdf . Maybe this will satisfy your curiosity, but I maintain that it takes a warped mind to identify such a categorical formulation of integration as the "right" way to think about integrals. The advantage of categorical thinking in my view is that it helps to organize computations and arguments involving several different kinds of structures at the same time. For instance, (co)homology is all about capturing useful invariants associated to a complicated structure (e.g. a geometric object) in a much simpler structure (e.g. an abelian group). When we want to determine how the invariants behave under certain operations on the complicated structure (e.g. products, (co)limits) it helps to have a theory already set up to tell us what will happen to the simpler structure. That's where category theory comes into its own, and instances of this paradigm are so ubiquitous in algebra and topology that category theory has taken on a life of its own. It seems that people working in those areas have found it convenient to build categorical constructions into the foundations of their work in order to emphasize generality (one can treat algebraic varieties and solutions to diophantine equations on virtually the same footing), keep track of different notions of equivalence (e.g. homotopy versus homeomorphism), build new kinds of spaces (e.g. groupoids), and to achieve many other aims. In many kinds of analysis, this kind of abstraction isn't necessary because there's often only one structure to keep track of: $\mathbb{R}$ . When you think about it, analysis is only possible because we are willing to seriously overburden $\mathbb{R}$ . Take, for example, the expression " $\frac{d}{dt}\int_X f_t(x) d\mu(x)$ " and consider all of the different ways real numbers are being used. It is used as a geometric object (odds are X is built out of some construction involving the real numbers or a subspace thereof), a way to give $X$ additional structure (it wouldn't hurt to guess that $\mu$ is a real valued measure), a parameter ( $t$ ), and a reference system ( $f$ probably takes values in $\mathbb{R}$ or something related to it). In algebraic geometry, one would probably take each of these roles seriously and understand what kind of structure they are meant to bring to the problem. But part of the power and flexibility of analysis is that we can sweep these considerations under the rug and ultimately reduce most complications to considerations involving the real numbers. All that being said, the tools of category theory and homological algebra actually have started to make their way into analysis. Because of the fact that analysts generally consider problems tied to certain very specific kinds of structure, they have historically focused on providing the sharpest and most detailed solutions to their problems rather than extracting the crude, qualitative invariants for which cohomological thinking is most appropriate. However, as analysts have become more and more attuned to the deep relationships between functional analysis and geometry, they have turned to ideas from category theory to help keep things organized. 
K-theory and K-homology have become indispensable tools in operator theory; there is even a bivariant functor $KK(-,-)$ from the category of $C^*$-algebras to the category of abelian groups relating the two constructions, and many deep theorems can be subsumed in the assertion that there is a category whose objects are $C^*$-algebras and whose morphism spaces are given by $KK(A,B)$. Cyclic homology and cohomology have also become extremely relevant to the interface between analysis and topology. So ultimately I think it all comes down to what kinds of subtleties are most relevant in a given problem. There is just something fundamentally different about the kind of thinking required to estimate the propagation speed of the solution operator for a nonlinear PDE compared to the kind of thinking required to relate the fixed point theory in characteristic 0 of a linear group acting on a variety to that in characteristic p.
{ "source": [ "https://mathoverflow.net/questions/38752", "https://mathoverflow.net", "https://mathoverflow.net/users/6856/" ] }
38,763
Is $L^p(\mathbb{R}) \setminus 0$ contractible? My intuition says that the answer is yes, but I'm afraid that this is based on thinking of this as somehow similar to a limit of $\mathbb{R}^n \setminus 0$ as n approaches $\infty$, which is of course nonsense. In any case, every contraction I've tried ends up making some function pass through $0$.
Here is something really cheap and dirty. Let $p<+\infty$. Take $g=\frac{1}{1+x^2}$. Then $f(x,t)=e^{-(1+|x|)t/(1-t)}f(x)$ ($0\le t\le 1$) is a continuous contraction of $L^p\setminus\{g\}$ to $0$. (the reason is that your only chance to hit $g$ is to start with it because $g(x)e^{s(1+|x|)}$ is not in $L^p$ for $s>0$). Let's make it more interesting without making it more abstract. Can we find a uniformly continuous (both in space and time, as usual) contraction of the unit ball in $L^p$ without the center to a point?
{ "source": [ "https://mathoverflow.net/questions/38763", "https://mathoverflow.net", "https://mathoverflow.net/users/9189/" ] }
38,780
An integral domain $R$ is said to be Euclidean if it admits some Euclidean norm: i.e., a function $N: R \rightarrow \mathbb{N} = \mathbb{Z}^{\geq 0}$ such that: for all $x, y \in R$ with $N(y) > 0$ , either $y$ divides $x$ or there exists $q \in R$ such that $N(x-qy) < N(y)$ . A well-known "descent" argument shows that any Euclidean domain is a PID. In fact, the argument that a Euclidean domain is necessarily a UFD is a little more direct and elementary than the argument that shows that a PID is a UFD (because, in the latter case, one needs some kind of ideal-theoretic argument to show the existence of factorizations into irreducible elements). Because of this, Euclidean domains are a familiar staple of undergraduate algebra. A lot of texts seem to emphasize the fact that a PID need not be a Euclidean domain. In order to show this, one has to show not only that some particular norm (and often there is a preferred norm in sight, see below) is not Euclidean, but that there is no Euclidean norm whatsoever. In general this is a very delicate question: for instance, the proof of the most standard example -- that the ring of integers of $\mathbb{Q}(\sqrt{-19})$ is a PID but does not admit any Euclidean norm -- is already rather intricate. My question is this: Given a ring $R$ that we already know is a PID, why do we care whether or not it admits some Euclidean norm? Note that in contrast, many domains admit natural norms. A class of domains which I have been thinking about recently are the infinite domains satisfying (FN): the quotient by every nonzero ideal is finite. In this case, the map $0 \mapsto 0$ , $x \in R \setminus \{0\} \mapsto \# R/(x)$ is a multiplicative norm, which I call canonical . For instance, the usual absolute value on $\mathbb{Z}$ is the canonical norm, as is the norm on any ring of integers in a number field that you meet in an algebraic number theory course. I have recently realized that I care quite a bit about whether certain specific norms on integral domains are Euclidean. (This has come up in my work on quadratic forms and the Davenport-Cassels theorem.) There is some very natural algebra and discrete geometry here. But why do I care if some crazy Euclidean norm exists? Here are three reasons that one might care about this: If a domain admits an "effective" Euclidean norm, one can give effective algorithms for linear algebra over that ring, whereas the structure theory of modules over an arbitrary PID is not a priori algorithmic in nature. (in algebraic K-theory): If $R$ is Euclidean, $\operatorname{SK}_1(R) = 0$ , but there exists a PID with nonvanishing $\operatorname{SK}_1$ . (Thanks to Charles Rezk for giving the precise result based on my vague allusion to it.) In algebraic number theory, there has been a lot of work towards proving the conjecture that if $K$ is a number field which is not $\mathbb{Q}(\sqrt{D})$ for $D = -19, -43, -67, -163$ , then the ring $\mathbb{Z}_K$ of integers of $K$ is a PID iff it is Euclidean (for some crazy norm). In particular, disproving this would disprove the generalized Riemann hypothesis. Comments on 1: There is something to this, but I somehow doubt that it's such a big deal. For instance, the ring of integers of $\mathbb{Q}(\sqrt{-19})$ is not Euclidean, but I'm pretty sure that there are algorithms for modules over it. In particular, it seems to me that for algorithmic purposes, having a Dedekind-Hasse norm is just as good as a Euclidean norm, and every PID has a Dedekind-Hasse norm. 
In fact, for every PID which satisfies (FN), the canonical norm is a Dedekind-Hasse norm. (See p. 27 of http://alpha.math.uga.edu/~pete/factorization2010.pdf for this.) Comments on 3: if I knew more about this result, I might appreciate it better. It does seem to involve some interesting geometry of numbers. But this convinces me why I should be interested in the special case of rings of integers in number fields, which, as a number theorist, I am already convinced are more worthy of scrutiny from every possible angle than an arbitrary domain. If there are other good reasons to care, I'd certainly like to know.
There are a lot of results in elementary number theory that can be proved with the quadratic reciprocity law. In such a proof you usually have to invert some Jacobi symbol $(a/b)$ and then reduce the numerator modulo the denominator. For number fields that are not Euclidean with respect to some simple map you have a problem if you want to follow this route (the same goes for applications of quadratic and higher residues to cryptography, although this is mostly a theoretical business). In principle, Dedekind-Hasse will also do the trick in some cases. If the ring of integers you're interested in is not Euclidean for the canonical norm, the first idea is to modify it. You could give prime ideals a different weight (weighted norms), or allow division chains in which the norm does not necessarily get smaller in every step (k-stage Euclidean rings), or try some version of Dedekind-Hasse. But if (given a pair $(a,b)$ of elements in a ring) you want to make the norm of $ka-bq$ small, you need more than just the knowledge that a suitable $k$ exists: you need a method for finding $k$ (in addition to finding $q$), perhaps by showing that you can select it from a finite set of elements with bounded norm or something similar. Edit. The Euclidean algorithm is closely related to continued fractions, and the latter are routinely used for doing calculations of units and ideal class groups of real quadratic number fields. For number fields that admit a Euclidean algorithm, something similar can be done: Hurwitz and Mathews worked out a theory of continued fractions over the Gaussian integers, and people like Arwin, Trinks, Degel, Lakein, Stein etc. generalized this to complex Euclidean number fields and used it for computing units and class numbers. I am not aware of too many recent contributions in this direction, but a short search has at least revealed D. Fried, Reduction theory over quadratic imaginary fields, J. Number Theory 2005.
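To make the first paragraph concrete: over $\mathbb{Z}$, reciprocity plus reduction evaluates the Jacobi symbol quickly and without factoring, and this 'invert and reduce' loop is what breaks down when no Euclidean algorithm is available. Here is the classical algorithm, sketched in Python for illustration (the function name and test values are mine):

```python
def jacobi(a, b):
    """Jacobi symbol (a/b) for odd b > 0, via reciprocity and reduction."""
    assert b > 0 and b % 2 == 1
    a %= b
    result = 1
    while a != 0:
        while a % 2 == 0:           # strip factors of 2: (2/b) depends on b mod 8
            a //= 2
            if b % 8 in (3, 5):
                result = -result
        a, b = b, a                 # reciprocity: flip, with a sign if both are 3 mod 4
        if a % 4 == 3 and b % 4 == 3:
            result = -result
        a %= b                      # the reduction step described above
    return result if b == 1 else 0

print(jacobi(1001, 9907))
```

No factorization of either argument is ever needed — that is precisely what the Euclidean-style reduction buys you.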
{ "source": [ "https://mathoverflow.net/questions/38780", "https://mathoverflow.net", "https://mathoverflow.net/users/1149/" ] }
38,813
Let a lightray bounce around inside a cube whose faces are (internal) mirrors. If its slopes are rational, it will eventually form a cycle. For example, starting with a point $p_0$ in the interior of the $-Y$ face of an axis-aligned cube, and initially heading in a direction $v_0=(1,1,1)$ , the ray will rejoin $p_0$ after 5 reflections, forming a hexagon. The figure below shows a more complicated 16-cycle. Assume that $p_0$ and $v_0$ are chosen so that (a) the ray never directly hits an edge or corner of the cube, and (b) the ray path never self-intersects inside the cube. Can every knot type be realized by a lightray reflecting inside in a cube? The figure above is an unknot. I believe (but am not certain) the 31-cycle below is knotted: Any such knotted path is a stick representation of the knot, but perhaps the many unsolved problems in stick representations are not relevant to this situation. My question is related to the probability of random knots forming under various models, but usually those models are aimed at polymers or DNA. I have not seen this lightray model explored, but would be interested to know of related models. The choice of $(p_0,v_0)$ allows considerable freedom to "design" a knot, but it seems difficult to control the structure of the path to achieve a particular result. I've explored tiling space by reflected cubes so that the lightray may be viewed as a straight segment between two images of $p_0$ , but this viewpoint is not yielding me insights. If anyone has ideas, however partial, I would appreciate hearing them. Thanks! Edit1 ( 15Sep10 ). I have not been able to yet access the Jones-Przytycki paper that Pierre cites, but knowing the keywords he kindly provided, I did find related work by Christoph Lamm ( "There are infinitely many Lissajous knots" Manuscripta Mathematica 93 (1): 29-37 (1997) ) that provides useful information: Theorem : Billiard knots in a cube are isotopic to Lissajous knots. As Pierre said, many knots are unachievable in these models. In particular, algebraic knots cannot be achieved. The technical result is this. Theorem : The Alexander polynomial of a billiard knot is a square mod 2. In 1997, there were several intriguing open problems, including these two. (a) Is every knot a billiard knot in some convex polyhedron? (b) Can the unknot be achieved in every convex polyhedron that supports periodic paths? Edit2 ( 15Sep10 ). Here is a little more information on open problems ca. 2000, found in a list by Jozef H. Przytycki in the book that resulted from Knots in Hellas '98 : Is there a manifold that supports every knot? (By "supports every knot" he means there is a billiard path isotopic to every knot type.) Is there a finite polyhedron that supports every knot? There apparently is an "infinite polyhedron" that supports every knot. More specifically, is there a convex polyhedron that supports every knot? (This is 3(a) above.) [See Bill Thurston's correction below!] Even more specifically, is every knot supported by one of the Platonic solids? I have not been successful in finding information on this topic later than 2000. If anyone knows later status information, I would appreciate a pointer. Thanks for the interest! Edit3 ( 5Jul11 ). The question has been answered (affirmatively) in a paper by Pierre-Vincent Koseleff and Daniel Pecker recently (28Jun11) posted to the arXiv: " Every knot is a billiard knot ": "We show that every knot can be realized as a billiard trajectory in a convex prism. ... 
Using a theorem of Manturov [M], we first prove that every knot has a diagram which is a star polygon. [...Manturov’s theorem tells us that every knot (or link) is realized as the closure of a quasitoric braid...] Then, perturbing this polygon, we obtain an irregular diagram of the same knot. We deduce that it is possible to suppose that 1 and the arc lengths of the crossing points are linearly independent over $\mathbb{Q}$ . Then, it is possible to use the classical Kronecker density theorem to prove our result." Edit4 ( 4Oct11 ). A new paper was released by Daniel Pecker, "Poncelet's theorem and Billiard knots," arXiv:1110.0415v1 . The context is that, earlier, "Lamm and Obermeyer proved that not all knots are billiard knots in a cylinder," and Lamm conjectured an elliptic cylinder would suffice. Here is the abstract: Let $D$ be any elliptic right cylinder. We prove that every type of knot can be realized as the trajectory of a ball in $D$ . This proves a conjecture of Lamm and gives a new proof of a conjecture of Jones and Przytycki. We use Jacobi's proof of Poncelet's theorem by means of elliptic functions. Edit5 ( 13Nov12 ). The Pecker paper cited above is now published: Geometriae Dedicata , December 2012, Volume 161, Issue 1, pp 323-333.
These knots seem to be called billiard knots in the literature. They coincide with Lissajous knots as shown by Jones and Przytycki in "Lissajous knots and billiard knots", Banach Center Publications 42, 145-163 (1998). Lissajous knots are knots admitting a parametrization of the form $(\cos(n_x t + d_x), \cos(n_y t + d_y), \cos(n_z t + d_z))$. There are strong constraints on such knots (see Wikipedia), and for example no torus knot can be Lissajous.
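For concreteness, the parametrization in the answer is trivial to sample. A hedged Python sketch (the frequency and phase values below are illustrative choices of mine, not taken from the cited paper; whether a given triple with generic phases yields a nontrivial knot is exactly the delicate question these papers study):

```python
import numpy as np

def lissajous(n, d, num=2000):
    """Points on the closed curve (cos(n_x t + d_x), cos(n_y t + d_y), cos(n_z t + d_z))."""
    t = np.linspace(0.0, 2.0 * np.pi, num)
    return np.stack([np.cos(nk * t + dk) for nk, dk in zip(n, d)], axis=1)

# Pairwise coprime frequencies with generic phases, chosen for illustration only.
pts = lissajous(n=(3, 4, 7), d=(0.1, 0.7, 0.0))
print(pts.shape)   # (2000, 3): a closed space curve to plot or feed to a knot-recognition tool
```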
{ "source": [ "https://mathoverflow.net/questions/38813", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
38,824
Let $P$ be a pointset consisting of $n$ uniformly random elements of $[0,1]^2$. It is known that the diameter (greatest number of edges in any shortest path between two points) of the Delaunay triangulation of $P$ has expected order $\Theta(\sqrt{n})$; the upper bound follows from work by Bose and Devroye , and the lower bound follows from a more general result on weighted Delaunay triangulations by Pimentel . Since the Euclidean MST $T=T(P)$ is a subgraph of the Delaunay triangulation, it follows that $T$ has expected diameter of order at least $\sqrt{n}$. It seems a plausible guess that the expected order is in fact $\Theta(\sqrt{n})$. However, I know of no non-trivial upper bound on the expected diameter of $T$. (In particular, I don't even know if it is $o(n)$ -- or even if it is less than, say, $n/10$.) Do you? What upper bounds are known for the expected diameter of $T$, the minimum spanning tree of $n$ uniformly random points in $[0,1]^2$?
Added: What can we expect from the Euclidean MST

After mulling over this question for a few days, I think it's inconsistent with the stated facts that the diameter of the Euclidean MST is $\Theta(\sqrt n)$. Here's why. Let's first interpret the theorem mentioned in the question: that the Delaunay triangulation for $P$ has expected diameter $\Theta(\sqrt n)$. For each $n$, the Delaunay triangulations for $P$ define a probability measure on the space of metrics on $n$-element subsets of the unit square. We can interpolate this metric to a metric on the unit square by making each triangle of the Delaunay triangulation an equilateral triangle, and rescale it by $1/\sqrt n$ to get a measure on the space of metrics on the square. As $n$ goes to infinity, these metrics have uniformly bounded $\epsilon$-complexity (=minimum number of balls of size $\epsilon$ needed to cover). The set of metrics of $\epsilon$-complexity bounded by some fixed function of $\epsilon$ is compact in the Gromov-Hausdorff topology, so the space of probability measures on these metric spaces is compact in the weak topology, so there exists at least a subsequence of $n$ such that the scaled Delaunay metrics converge to a measure on metric spaces. The metric spaces are a.s. Lipschitz equivalent to the standard metric on the unit square, using the theorem about diameter (which can be used to deduce that the Delaunay distance between two random elements $p, q \in P$ is $O(\sqrt n)$). These metrics are path-metric spaces. The shortest paths are rectifiable arcs in the Euclidean sense as well as in the particular metric, since the two metrics are Lipschitz equivalent. A rectifiable arc in the plane has a tangent line almost everywhere, so there are rescaled limits where almost all limiting Delaunay geodesics are straight lines. Proposition: The rescaled Delaunay metrics converge a.s. to a constant multiple of the Euclidean metric in the plane. That the metric of the plane is the actual limit, not just a limit point, follows from considerations parallel to the fact that a rescaled limit of a Lipschitz function is a.s. linear. If there were any significant probability of significant deviation at any stage, these would accumulate and prevent the large-scale map from being Lipschitz. Now consider the Euclidean MST for a Delaunay triangulation as a metric tree. Suppose the trees have probable diameter $\Theta(\sqrt n)$. For each pair of points $x$ and $y$ in the unit square and for each $P$, let $x_P$ be the point of $P$ closest to $x$, let $y_P$ be closest to $y$, and let $\gamma_P$ be the path in the MST connecting $x_P$ to $y_P$, parametrized by $1/\sqrt{n}$ times the combinatorial Delaunay distance. In the limit, the measure on $P$'s would converge to a probability measure on rectifiable paths from $x$ to $y$. But more than that, we would get a measurable map from the square cross the square that takes each pair of points to a rectifiable path between them, with arclength less than some constant times their distance. This is not topologically compatible with the tree-like condition: near the boundary between two large branches of a tree, the tree distance from points on one branch to points on the other branch is a large multiple of the distance in the plane. To test my understanding, I ran simulations (using Mathematica's Computational Geometry package and Combinatorica package). Here are two Euclidean MST's, the first with 1500 random points in the unit square, the second with 10,000. 
You can plainly see that paths connecting pairs of vertices are not converging to rectifiable, Lipschitz paths. As a further test, I calculated the diameter of the Euclidean MST's for 25 uniform pseudo-random distributions of $2^k$ points, for each $k$ ranging from 2 through 8. The sample mean diameters are {2.64, 5.8, 10.32, 18.32, 29.88, 49.12, 78.88}. The best linear fit to the log of the diameter as a function of $k$ is $.03 + .55 k$. The sample standard deviations for the log of the diameter were nearly constant, all about .15 (that is, the diameters fluctuate on a multiplicative scale, typically about $\pm 16 \%$). It's curious that $Exp[.55] = 1.733$, close to $\sqrt 3 = 1.732$. This suggests a power law: asymptotic diameter $\approx n^{\log_2 \sqrt 3}$, but this could be coincidence, since the numerical experiment was modest in size. Here's the plot of log of mean sample diameter vs. $k$: The actual asymptotic behavior of diameter seems intricately tied with percolation, a subject that I do not understand very well. That is: you can think of building up the MST by successively adding edges of length $t$ if they join points on different connected components. This gives an increasing union of equivalence relations on elements of $P$, consisting of clumps that can be connected by steps not greater than $t$. One would expect that for $t$ greater than some critical distance $t(n)$ that is approximately a constant times $1/\sqrt n$, large-scale clusters are likely. Paths between points in a cluster must stay in the cluster. The geometry of the increasing clusters should enable one to estimate the Hausdorff dimension of tree geodesics, which should in turn give an exponent of growth for the diameter of the Euclidean MST. The Mathematica notebook containing these computations is here . The code in the notebook itself is brief, since algorithms for Delaunay triangulations and minimum spanning trees and graph diameter are provided in packages in the Mathematica distribution. It took me more effort to make it work than it should have, because functions in these auxiliary packages are poorly documented. Here are some simple ideas for better showing what's happening, and analyzing further (see also the reconstruction sketched below):

- Draw the Delaunay triangulation in a different color, along with the tree.
- Indicate the increasing clumps accessible with changing step size by edge thickness and/or color coding edges of the spanning tree. One idea is to use random colors for short edges within clumps, then use a saturated average color (with average weighted by clump size) when new edges make clumps collide and merge. Delaunay triangles could also be color coded to indicate clumping.
- Make a 3-dimensional plot of graph distance from a randomly selected member of $P$ to other elements of the tree, using the TriangularSurfacePlot function from the ComputationalGeometry package. Also: try showing distance from basepoint by color coding, perhaps.
- Do bigger experiments, and make plots showing the actual data: e.g. make a ListPlot[] of the log of the actual diameter for trees of sizes something like Floor[Exp[ Range[2,6,.01]]], but start with a much more modest range and aim for a more ambitious range.
- Draw contour plots or 3-D plots of Delaunay graph distance from a randomly selected vertex. How quickly do the contour lines begin to look like circles as the size of $P$ increases?
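For reference, the experiment is easy to reproduce with modern tools. Here is a rough illustrative reconstruction in Python/SciPy (my own sketch, not the Mathematica notebook referenced above; the function name, the seed, and the double-sweep shortcut are my choices):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

def mst_diameter(n, rng):
    pts = rng.random((n, 2))
    tri = Delaunay(pts)
    w = lil_matrix((n, n))
    for simplex in tri.simplices:           # weight Delaunay edges by Euclidean length
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            w[a, b] = w[b, a] = np.linalg.norm(pts[a] - pts[b])
    mst = minimum_spanning_tree(w.tocsr())  # the Euclidean MST lives inside the Delaunay graph
    mst = mst + mst.T                       # symmetrize for path searches
    # Double sweep (exact on trees): farthest vertex from 0, then farthest from that one.
    u = int(np.argmax(dijkstra(mst, indices=0, unweighted=True)))
    return int(dijkstra(mst, indices=u, unweighted=True).max())

rng = np.random.default_rng(0)
for k in range(2, 9):
    print(2**k, mst_diameter(2**k, rng))    # compare with the sample means quoted above
```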
Missing point: how much might larger detours around radars shorten the paths?

Geometry of the Delaunay triangulation. Let's look at this problem on the surface of a sphere, instead of the unit square. The Delaunay triangulation of a set $P$ on the sphere is invariant under Moebius transformations (easy to verify by the definition in terms of circles), and it is equivalent to the triangulation of the convex hull of $P$. A set in the plane can be transformed to the sphere by stereographic projection. Its Delaunay triangulation is equivalent to a subset of the polyhedron, namely the part on the far side from the center of projection. The problems, for the uniform distribution on the square and the uniform distribution on $S^2$, are equivalent, but the spherical setting has more symmetry. In the projective ball model of hyperbolic space, the convex hull inherits a metric as a hyperbolic surface of finite area. I want to describe a related metric. Imagine there is a radar installation at each point of $P$, and that a smuggler wants to fly a small plane below the horizon of any radar. They need to slow down when they're flying lower, at a speed proportional to the distance to $P$, so the metric will be 1/(minimum distance to $P$) times spherical arc length. Near an element of $P$, this metric is asymptotically that of a cylinder of circumference $2 \pi$, but slightly smaller near where it is attached because we're rescaling the spherical metric, rather than the Euclidean metric. Define a "thick" part of $S^2 \setminus P$ to be the set $Q$ obtained by removing a disk around each element of $P$ whose radius is 1/3 the distance to its nearest neighbor. [Inessential mathematical sidenote: this metric is comparable to the Poincaré metric in $Q$, except in situations where $P$ is confined to a tiny disk on the sphere. The definition could be modified for that situation, but it's a distraction. The smuggling metric is also comparable to the hyperbolic metric on the image of $Q$ projected to the nearest point in the convex hull of $P$.] Claim: the smuggling diameter is comparable to the diameter of the Delaunay triangulation. Proof of Claim: The boundary circles of $Q$ all have comparable circumference in the smuggling metric, slightly less than $2 \pi$. The Delaunay triangles all have diameter bounded above and below in the smuggling metric. This gives an easy way to transform a path in the 1-skeleton of the Delaunay triangulation to a path in $Q$ without increasing length more than a bounded factor. Conversely, for a path in $Q$, the rate of near-encounters with elements of $P$ per smuggling distance is bounded. Therefore, you can do a simplicial approximation, pushing the path to the 1-skeleton with number of edges less than a bounded multiple of its length. Let's now consider a variation of the problem: A smuggler wants to go between point $X$ on the sphere and point $Y$ on the sphere, in the presence of $N$ uniformly randomly distributed radar installations. The smuggler is conservative, and decides to never exceed spherical speed $N^{-.5}$. This will be less than speed 1 in the smuggling metric except when there is a radar within distance $N^{-.5}$. The points of $P$ have density $N/(4\pi)$ (one radar per area $4\pi/N$), and the strip of width $2N^{-.5}$ around the geodesic from $X$ to $Y$ has area about $d(X,Y) * 2 N^{-.5}$, so the expected number of radars in the strip is $\Theta(\sqrt N)$. 
The distribution (Poisson) has standard deviation proportional to the square root of the expectation, so for moderately large $N$ it is extremely unlikely that there will be more than twice the expected number within the strip: enough that the probability that the $\sup$ over all $X$ and $Y$ of the number of radars in the corresponding strip exceeds twice its expectation tends rapidly to 0 with $N$. For each radar dish that is within the strip, the smuggler merely takes a detour around the corresponding boundary circle of $Q$, adding only a constant amount of time to the trip. This gives a path in the smuggling metric of $S^2 \setminus P$ with length $\Theta(\sqrt N)$. We want to know the length of a path in $Q$. This is now easy, since there is a distance-decreasing retraction $S^2 \setminus P \rightarrow Q$: each cylinder can be mapped to its boundary without increasing lengths. Therefore, the expected maximum diameter of the Delaunay triangulation is $O(\sqrt N)$. This post should really have pictures ... maybe later, or maybe someone else can supply them.
{ "source": [ "https://mathoverflow.net/questions/38824", "https://mathoverflow.net", "https://mathoverflow.net/users/3401/" ] }
38,856
First, let me make it clear that I do not mean jokes of the "abelian grape" variety. I take my cue from the following passage in A Mathematician's Miscellany by J.E. Littlewood (Methuen 1953, p. 79): I remembered the Euler formula $\sum n^{-s}=\prod (1-p^{-s})^{-1}$; it was introduced to us at school, as a joke (rightly enough, and in excellent taste). Without trying to define Littlewood's concept of joke, I venture to guess that another in the same category is the formula $1+2+3+4+\cdots=-1/12$, which Ramanujan sent to Hardy in 1913, admitting "If I tell you this you will at once point out to me the lunatic asylum as my goal." Moving beyond zeta function jokes, I would suggest that the empty set in ZF set theory is another joke in excellent taste. Not only does ZF take the empty set seriously, it uses it to build the whole universe of sets. Is there an interesting concept lurking here -- a class of mathematical ideas that look like jokes to the outsider, but which turn out to be important? If so, let me know which ones appeal to you.
Let "$\int$" denote $\int_0^x$. We want to find the solution to $$\int f = f-1.$$ We simply "factor out" $f$, getting $1=\left(1-\int\right)f$. Thus, $f=(1-\int)^{-1}1$. Using the geometric series, $$f=\left(1+\int+\iint+\iiint+\cdots\right)1=1+\int_0^x1~dx+\int_0^x\int_0^x1~dx+\cdots$$ Thus, $$f=1+x+\frac{x^2}{2}+\frac{x^3}{6}+\cdots=e^x,$$ as expected. (This was told to me by Steve Miller)
{ "source": [ "https://mathoverflow.net/questions/38856", "https://mathoverflow.net", "https://mathoverflow.net/users/1587/" ] }
38,864
Recently I was introduced to the concept of Orthogonal Polynomials through the poly() function in the R programming language. These were introduced to me in the context of polynomial transformations in order to do a linear regression. Bear in mind that I'm an economist and, as should be obvious, am not all that smart (choice of profession has an odd signaling characteristic). I'm really trying to wrap my head around what Orthogonal Polynomials are and how, if possible, to visualize them. Is there any way to visualize orthogonal polynomials vs. simple polynomials?
Helge presented the continuous case in his answer; for the purposes of data fitting in statistics, one usually deals with discrete orthogonal polynomials. Associated with a set of abscissas $x_i$, $i=1\dots n$ is the discrete inner product $$\langle f,g\rangle=\sum_{i=1}^n w(x_i)f(x_i)g(x_i)$$ where $w(x)$ is a weight function, a function that associates a "weight" or "importance" to each abscissa. A frequently occurring case is one where the $x_i$ are equispaced, $x_{i+1}-x_i=h$ where $h$ is a constant, and the weight function is $w(x)=1$; for this special case, special polynomials called Gram polynomials are used as the basis set for polynomial fitting. (I won't be dealing with the nonequispaced case in the rest of this answer, but I'll add a few words on it if asked). Let's compare a plot of the regular monomials $x^k$ to a plot of the Gram polynomials: On the left, you have the regular monomials. The "bad" thing about using them in data fitting is that for $k$ high enough, $x^k$ and $x^{k+1}$ are nigh-indistinguishable, and this spells trouble for data-fitting methods since the matrix associated with the linear system describing the fit is dangerously close to becoming singular. On the right, you have the Gram polynomials. Each member of the family does not resemble its predecessor or successor, and thus the underlying matrix used for fitting is a lot less likely to be close to singularity. This is the reason why discrete orthogonal polynomials are of interest in data fitting.
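The 'nigh-indistinguishable columns' remark is precisely a statement about conditioning, and it is easy to observe numerically. The sketch below (Python/NumPy; an illustration of the general phenomenon rather than of R's poly() internals — the sample size, degree, and QR shortcut are my choices) compares the least-squares design matrix in the raw monomial basis with its orthonormalized counterpart on equispaced points:

```python
import numpy as np

n, deg = 50, 10
xs = np.linspace(-1.0, 1.0, n)

# Raw monomial design matrix: columns 1, x, x^2, ..., x^deg.
V = np.vander(xs, deg + 1, increasing=True)

# Orthonormalize the columns w.r.t. the discrete inner product
# <f,g> = sum_i f(x_i) g(x_i) (unit weights) via a QR factorization.
Q, _ = np.linalg.qr(V)

print(np.linalg.cond(V))   # large, and growing rapidly with the degree
print(np.linalg.cond(Q))   # essentially 1: orthonormal columns
```

Up to signs and normalization, the columns of $Q$ are the values at the $x_i$ of the discrete orthogonal (Gram) polynomials, which is why fitting in that basis is numerically stable.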
{ "source": [ "https://mathoverflow.net/questions/38864", "https://mathoverflow.net", "https://mathoverflow.net/users/4320/" ] }
38,966
What is sheaf cohomology intuitively? For local systems it is ordinary cohomology with twisted coefficients. But what if the sheaf in question is far from being constant? Can one still understand sheaf cohomology in some "geometric" way? For example I would be very interested in the case of coherent $\mathcal{O}_X$-Modules. Or even just line bundles.
One way to think about $H^1(A)$ is to use the long exact sequence not as a property of cohomology, but outright as a definition. That is, given an exact sequence of sheaves, $$ 0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$$ then $H^1(A)$ measures the failure of the sequence of global sections to be exact: $$ 0\rightarrow \Gamma(A)\rightarrow \Gamma(B)\rightarrow \Gamma(C)\rightarrow H^1(A)$$ In words, $H^1$ is `measuring the failure of $\Gamma$ to preserve surjectivity'. If you want this idea to actually define $H^1(A)$, you have to be careful to choose $B$ so that $H^1(B)=0$. But, as far as intuition goes, this works pretty well for me. A demonstrative example of this, at least for me, is to think about the complex variety $\mathbb{P}^1$, with $\mathcal{O}$ the structure sheaf and $\mathbb{C}_p$ the skyscraper sheaf at a point. Then there is a surjective map (it is surjective because it is surjective on stalks): $$ \mathcal{O}\rightarrow\mathbb{C}_p$$ which has kernel $\mathcal{O}(-1)$, the twisted structure sheaf. This whole short exact sequence can be twisted by $(-1)$, noting that twisting a skyscraper sheaf $\mathbb{C}_p$ gives an isomorphic sheaf $\mathbb{C}_p(-1)$ (which I identify with the original sheaf): $$0\rightarrow \mathcal{O}(-2)\rightarrow \mathcal{O}(-1)\rightarrow \mathbb{C}_p\rightarrow 0$$ On global sections, we then get $$ 0\rightarrow \Gamma(\mathcal{O}(-2))\rightarrow \Gamma(\mathcal{O}(-1))\rightarrow \Gamma(\mathbb{C}_p)\rightarrow H^1(\mathcal{O}(-2))$$ We know that $\Gamma(\mathcal{O}(-1))=0$ and $\Gamma(\mathbb{C}_p)=\mathbb{C}$, so the middle arrow is no longer surjective. Hence, $H^1(\mathcal{O}(-2))$ must contain at least $\mathbb{C}$ (in fact, it is exactly $\mathbb{C}$, since $H^1(\mathcal{O}(-1))=0$). Higher cohomology may also be thought of this way: $H^{i+1}$ measures the failure of $H^i$ to preserve surjective maps. However, I don't find this very useful for thinking about higher cohomology, since it would need that I somehow understand lower cohomology much better.
{ "source": [ "https://mathoverflow.net/questions/38966", "https://mathoverflow.net", "https://mathoverflow.net/users/2837/" ] }
39,056
Can someone please tell me some introductory book on symplectic geometry? I have no prior idea of the subject but I do know about Lagrangian and Hamiltonian dynamics (at the level of Landau-Lifshitz Vol. 1). Thanks in advance. :-)
If you are physically inclined, V. I. Arnold's Mathematical methods of classical mechanics provides a masterful short introduction to symplectic geometry, followed by a wealth of its applications to classical mechanics. The exposition is much more systematic than vol 1 of Landau and Lifschitz and, while mathematically sophisticated, it is also very lucid, demonstrating the interaction between physical ideas and mathematical concepts that support them. (It is also worth mentioning that Arnold was largely responsible for the reawakening of interest in symplectic geometry at the end of the 1960s and pioneered the study of symplectic topology. Some of these developments were brand new when the book was first published in 1974 and are briefly discussed in the appendices). In addition to the notes by Cannas da Silva mentioned by Dick Palais, here are two further advanced books covering somewhat different territory:

- Michèle Audin, Torus actions on symplectic manifolds (2nd edition)
- Dusa McDuff and Dietmar Salamon, Introduction to symplectic topology

In her book, Michèle Audin herself recommends Paulette Libermann and Charles-Michel Marle, Symplectic geometry and analytical mechanics as a wonderful introduction to symplectic geometry.
{ "source": [ "https://mathoverflow.net/questions/39056", "https://mathoverflow.net", "https://mathoverflow.net/users/9292/" ] }
39,061
According to Wikipedia, the Bohr-Mollerup Theorem (discussed previously on MO here ) was first published in a textbook. It says the authors did that instead of writing a paper because they didn't think the theorem was new. What other examples are there of significant theorems that first saw the light of day in a textbook? (I'm assuming Wikipedia is right about Bohr-Mollerup.) I recognize that the word "significant" is imprecise; I have in mind theorems that mathematicians have picked up on and used in their own work, but I'm open to other interpretations.
I recall that, and Wikipedia independently confirms that L'Hôpital's rule first appeared in a textbook, apparently the first textbook on differential calculus: Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes published by Guillaume de l'Hôpital and made up of content mostly provided by Johann Bernoulli, who was on retainer to l'Hôpital, more or less, for this purpose.
{ "source": [ "https://mathoverflow.net/questions/39061", "https://mathoverflow.net", "https://mathoverflow.net/users/3684/" ] }
39,127
Several ancient arguments suggest a curved Earth, such as the observation that ships disappear mast-last over the horizon, and Eratosthenes' surprisingly accurate calculation of the size of the Earth by measuring a difference in shadow length between Alexandria and Syene. These observations, however, suggest merely a curved Earth rather than a spherical one. Another ancient argument specifically suggesting a spherical Earth is the fact that the shadow of the Earth on the moon during a Lunar eclipse is circular. My question is: is it true that the sphere is the only surface all of whose projections are disks? It surely seems to be true. The corresponding fact, however, is not true in two dimensions. The Reuleaux triangle pictured below is a figure of constant width, meaning that every projection of it in the plane is a line segment of the same length. There are also surfaces of constant width in higher dimensions, meaning that any two parallel bounding hyperplanes (touching the boundary) have constant separation. But all of the non-spherical examples of such surfaces I have seen have obviously non-circular projections. It also seems clear that finitely many circular projections are insufficient, since intersecting finitely many cylinders would produce a surface having corners and containing some straight line segments. The fact that you can spin such a surface with all circular projections inside any bounding cylinder is suggestive, but it is also true that you can spin the Reuleaux triangle inside a square , even though it isn't circular. Further questions would include: To what extent are other surfaces determined by their projections? That is, which other shapes can we recognize by the set of their shadows? In particular, can we recognize the cube and other regular solids by their shadows? Which sets of shadows are realizable as projections of a surface? Is there some way to characterize these sets? Clearly they must be continuously deformable to one another and obey several other obvious conditions. We had a great time discussing the question after our logic seminar here in New York this week, when our speaker Maryanthe Malliaris asked the spherical Earth question. December 20, 2010: In light (or dark, as it were) of the lunar eclipse tonight, I am bumping this question, with the remark also that despite the truly outstanding answers we have received, several of the further questions stated above are not fully answered.
The answer to the title question is yes (well, I assume that by a "surface" you mean something reasonable, like a boundary of a convex set). Let $AB$ be the longest segment with endpoints on the surface. We may assume that its length equals 2 and its midpoint is the origin. Consider projections to the planes that contain $AB$. Since projections do not increase distances, $AB$ is a diameter of each projection. Hence all projections to this family of planes are unit discs centered at the origin. The intersection of the corresponding cylinders is the unit ball, hence the result. Added. In general, we cannot determine a convex body from the set of shadows (if we don't know the correspondence between shadows and directions of projections). Take a unit ball and cut off three identical tiny caps whose centers form a regular triangle on the sphere and are not on one great circle. Looking at shadows, you cannot tell whether all three or only two caps are removed, because each projection shows you no more than two of them. The same construction works for polyhedra if you start with an icosahedron rather than a ball.
{ "source": [ "https://mathoverflow.net/questions/39127", "https://mathoverflow.net", "https://mathoverflow.net/users/1946/" ] }
39,210
Does anybody know a solution to this problem? (Sorry, I've missed one summand in the previous post .)
I'm pretty sure this is open. As suggested from Brocard's problem, it is interesting to investigate the Diophantine equations $$n!=P(m)$$ for polynomials $P$. You can see the paper "On polynomial-factorial diophantine equations" , by D. Berend, J.E. Harmse where they make some advances on the problem and prove that this equation has finitely many solutions for many classes of polynomials (irreducible, with an irreducible factor of large degree or with an irreducible factor to a large power). So by their results it is known that the equation $n!=m^r(m+1)$ has finitely many solutions if $r\geq 4$. But for $r\in \{1,2,3\}$ the problem is open.
{ "source": [ "https://mathoverflow.net/questions/39210", "https://mathoverflow.net", "https://mathoverflow.net/users/5712/" ] }
39,224
Zipf's law is the empirical observation that in many real-life populations of n objects, the $k^{th}$ largest object has size proportional to $1/k$, at least for $k$ significantly smaller than $n$ (and one also sometimes needs to assume $k$ somewhat larger than 1). It is a special case of a power law distribution (in which $1/k$ is replaced with $1/k^\alpha$ for some exponent $\alpha$), but the remarkable thing is that in many empirical cases (e.g. frequencies of words, or sizes of cities), the exponent is very close to 1. My question is: is there a "natural" random process (e.g. a birth-death process) that one can rigorously demonstrate (or at least conjecture) to generate populations of n non-negative quantities $X_1,\ldots,X_n$ (with n large but possibly variable) that obey Zipf's law on average with exponent 1? There are plenty of natural ways to generate processes that have power law tails (e.g. consider n positive quantities $X_1,\ldots,X_n$ evolving by iid copies of log-Brownian motion), but I don't see how to ensure the exponent is 1 without artificially setting the parameters to force this. Ideally, such processes should be at least somewhat plausible as models for an empirical situation in which Zipf's law is observed to hold, such as city sizes, but I would be happy with any non-artificial example of a process. One obstruction here is the exponent one property is not invariant with respect to taking powers: if $X_1,\ldots,X_n$ obeys Zipf's law with exponent one, then for any fixed $\beta>0$, $X_1^\beta,\ldots,X_n^\beta$ obeys the power law with a different exponent $\beta$. So whatever random process one would propose for Zipf's law must somehow be quite different from its powers.
I'm not sure if this is an "answer" to your question, but I recall seeing somewhere that someone had shown that if you create a document by selecting the characters a...z plus a space character with uniform frequency then the "words" of such a document have a frequency distribution that follows Zipf's Law. (A little anecdote: when I was an undergraduate, I took a course on "Inductive Logic" given by Zipf. I recall being rather annoyed because he spent a lot of the time lecturing about "his" law and having us form groups that as part of our class work collected statistics to test it :-) (Added Remarks) I recalled that when we tested Zipf's Laws for city populations back then (more than 50 years ago !) the results were quite good---i.e., the population of the n-th city was pretty close to $1/n$ times the population of the first for many countries. I decided to see if that was still so. For the US it pretty much is: http://www.infoplease.com/ipa/A0763098.html#axzz0zuwyduxq However, for China, it is WAY off---not even close: http://en.wikipedia.org/wiki/List_of_cities_in_the_People%27s_Republic_of_China_by_population Of course the population of Chinese cities has been changing rapidly due to migrations into them from the countryside, and perhaps Zipf's Law pertains only to stable situations when things are in equilibrium.
{ "source": [ "https://mathoverflow.net/questions/39224", "https://mathoverflow.net", "https://mathoverflow.net/users/766/" ] }
39,408
I've met tall people. That is: people taller than the average. Every now and then we encounter really tall people, even taller than the average of tall people i.e. taller than the average of those who are taller than the average. Meybe you've met someone who's even taller than the average of those who are taller than the average of those who are taller than the average... And so on. So, take a quantity $X$ that we suppose normally distributed (caveat, I have no deep knowledge of probability theory), i.e. it's described by a gaussian distribution that we suppose standardized and call $f(x)$ . Now, define: $M_0:= \int_{-\infty}^{\infty}f(x)dx=1$ $\mu_0:=\int_{-\infty}^{\infty}xf(x)dx=0$ and, inductively, $M_{n+1}:= \int_{\mu_n}^{\infty}f(x)dx$ $\mu_{n+1}:=\frac{1}{M_n}\int_{\mu_n}^{\infty}xf(x)dx$ I think this describes the situation in which your $X$ (e.g. height) has the value $\mu_n$ precisely when you're as $X$ as the average of those who are more $X$ than the average of those who are more $X$ than...... (n times). If not, please explain why. So my questions: How does the sequence $\mu_n$ behave asymptotically? Does it converge? If yes, is there a nice expression for the limit? Is there even a reasonably explicit expression ("closed form") for $\mu_n$ as a function of $n$ ?
As in Nate's answer, we are interested in iterating the function $$G(y) := \frac{ \int_{y}^{\infty} x e^{- x^2} dx}{\int_{y}^{\infty} e^{- x^2} }.$$ The numerator is $e^{-y^2}/2$ (elementary). The denominator is $e^{-y^2}/2 \cdot y^{-1} \left( 1-(1/2) y^{-2} + O(y^{-4}) \right)$ (see Wikipedia ). So $G(y) = y + (1/2) y^{-1} + O(y^{-3})$. Set $z_n = \mu_n^2$. Then $$z_{n+1} = (\mu_n+\mu_n^{-1}/2 + O(\mu_n^{-3}))^2 = \mu_n^2 + 1 + O(\mu_{n}^{-2}) = z_n + 1 + O(z_n^{-1}).$$ So $z_n \approx n$ and we see that $\mu_n \to \infty$ like $\sqrt{n}$. I haven't checked the details, but I think you should be able to get something like $\mu_n = n^{1/2} + O(1)$.
{ "source": [ "https://mathoverflow.net/questions/39408", "https://mathoverflow.net", "https://mathoverflow.net/users/4721/" ] }
39,435
You're playing pinball. When you first shoot a ball it randomly comes down through 1 of 3 gates. When you go through an unlit gate, it lights up. Similarly, a lit gate will go out. What is the expected number of balls you have to throw for all 3 gates to light up? For example, ball A could go through gate 2, B through gate 3, and C through gate 1. This scenario took 3 rolls and has probability 1/27 . I've put serious thought into this question twice over the last couple of years but my answer gets more and more complicated until my brain explodes. Follow up Douglas hit the nail on the head. For kicks, here's the Python script I used as a reality check for both the 2 and 3 gate cases. from random import randint def pinball(gates): trials = [] for trial in range(10000): state = [False for g in range(gates)] balls = 0 while not all(state): gate = randint(0, len(state) - 1) state[gate] = not state[gate] balls += 1 trials.append(1.0 * balls) print sum(trials) / len(trials) pinball(2) pinball(3)
This is the average time it takes for a random walk on the 1-skeleton of a cube to reach the opposite vertex. There are more general theories for such values, but you can determine this particular one with a simple set of linear equations. Let $T_i$ be the expected time from when $i$ lights are lit. You want to determine $T_0$. $T_0 = 1 + T_1$ $T_1 = 1 + T_0/3 + 2T_2/3$ $T_2 = 1+ 2T_1/3 + 0$ which has the solution $\{T_0=10, T_1=9, T_2=7\}$.
{ "source": [ "https://mathoverflow.net/questions/39435", "https://mathoverflow.net", "https://mathoverflow.net/users/9402/" ] }
39,441
My -possibly flawed- mental picture of free products of groups certainly comes from the special case usually performed to illustrate the construction that proves the Banach-Tarski paradox . Thus I'm used to think of a free product as a certain kind of self-similar "fractal". Although from the set theoretical or algebraic viewpoint it may be clear, one may ask if this "fractal" property is also expressible in metric/topological terms, and if it is possessed by free products of, say, discrete groups in general (not only $\mathbb{Z}*\mathbb{Z}$). I would like to know if there is a more or less natural way to put a topology (or even a metric) on the free product $G*H$ of two discrete groups $G$ and $H$ so that the above "fractal" picture is retained. I suppose the topology should satisfy: compatibility with group structure: $G*H$ should be a topological group; the subgroup of $G*H$ made of words with letters from $G$ (resp., from $H$), that we still call $G$ (resp., $H$), should inherit the discrete topology In case we even look for a metric on $G*H$, I think it would be reasonable to require that left (and right) translations should be homotheties: $d(gx,gy)=C\cdot d(x,y)$ for a constant $C=C(g)$ depending only on $g$, and maybe: $G*H$ should have fractionary Hausdorff dimension. Perhaps a metric would be something like the metric one can put on the "space of words" used in the symbolic dynamics description (if I don't misremember - I'm no expert) of the horseshoe map ... Another random thought was originated by an answer to this MO question. It was noted that the modular group $PSL(2,\mathbb{Z})$ is isomorphic to the free product of cyclic groups $\mathbb{Z}/2*\mathbb{Z}/3$, and that it is not co-Hopf, i.e. it does have a proper subgroup isomorphic to itself (so, in a certain vein not dissimilar to the above one, it is "self similar"). May this fact have any interesting consequences about some properties of modular forms?
This is the average time it takes for a random walk on the 1-skeleton of a cube to reach the opposite vertex. There are more general theories for such values, but you can determine this particular one with a simple set of linear equations. Let $T_i$ be the expected time from when $i$ lights are lit. You want to determine $T_0$. $T_0 = 1 + T_1$ $T_1 = 1 + T_0/3 + 2T_2/3$ $T_2 = 1+ 2T_1/3 + 0$ which has the solution $\{T_0=10, T_1=9, T_2=7\}$.
{ "source": [ "https://mathoverflow.net/questions/39441", "https://mathoverflow.net", "https://mathoverflow.net/users/4721/" ] }
39,452
Friedman [1] conjectured Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in EFA . EFA is the weak fragment of Peano Arithmetic based on the usual quantifier free axioms for 0,1,+,x,exp, together with the scheme of induction for all formulas in the language all of whose quantifiers are bounded. This has not even been carefully established for Peano Arithmetic. It is widely believed to be true for Peano Arithmetic, and I think that in every case where a logician has taken the time to learn the proofs, that logician also sees how to prove the theorem in Peano Arithmetic. However, there are some proofs which are very difficult to understand for all but a few people that have appeared in the Annals of Mathematics - e.g., Wiles' proof of FLT. Have there been any serious challenges to this or the weaker conjecture with Peano arithmetic in place of exponential function arithmetic? [1] http://cs.nyu.edu/pipermail/fom/1999-April/003014.html
To Mark Sapir: The conjecture says "can be proved in EFA". If it "was not proved in EFA" then that does not count. However, I am still interested if it "was not proved in EFA". Since EFA can still develop some theory of recursive functions, the fact that recursive functions are mentioned, or even used, does not imply that the proof is outside EFA. It is also true that EFA is fully capable of proving recursive unsolvability theorems. The only proofs we have that given mathematical statements are not provable in EFA is to show that the mathematical statements inherently give rise to functions that are not bounded by any finite height exponential stack. Is this definitely the case here? Or is there a conjecture to that effect? Harvey Friedman
{ "source": [ "https://mathoverflow.net/questions/39452", "https://mathoverflow.net", "https://mathoverflow.net/users/6043/" ] }
39,508
When I was learning about spectral sequences, one of the most helpful sources I found was Ravi Vakil's notes here . These notes are very down-to-earth and give a kind of minimum knowledge needed about spectral sequences in order to use them. Does anyone know of a similar source for derived categories? Something that concentrates on showing how these things are used, without developing the entire theory, or necessarily even giving complete, rigorous definitions?
I would suggest "Fourier-Mukai transforms in algebraic geometry" by Daniel Huybrechts.
{ "source": [ "https://mathoverflow.net/questions/39508", "https://mathoverflow.net", "https://mathoverflow.net/users/5094/" ] }
39,540
The formula $\mathcal{L}_X\omega = i_Xd\omega + d(i_X \omega)$ is sometimes attributed to Henri Cartan (e.g. Peter Petersen; Riemannian Geometry 2nd ed.; p.380) and sometimes to his father Élie (e.g. Berline, Getzler, Vergne; Heat Kernels and Dirac Operators, p.17), and often just to "Cartan" (e.g. https://en.wikipedia.org/wiki/Lie_derivative ). Who is right? Reference?
Élie for sure. The formula is derived in Les systèmes differentiels extérieurs et leur applications géométriques which was probably written before Henri was born. BTW, here is a very short proof that Chern showed me long ago. The exterior derivative is an anti-derivation of the exterior algebra and so is the interior product with a vector field while the Lie derivative is a derivation. (These are all trivial to check.) Also, the anti-commutator of two anti-derivations is a derivation. Hence both sides of the "magic formula" are derivations. It is obvious that two derivations are equal if they agree on 0-forms $f$ and exact 1-forms $df$, since (locally) they generate the exterior algebra. Finally it is trivial that both sides of the magic formula agree on such forms.
{ "source": [ "https://mathoverflow.net/questions/39540", "https://mathoverflow.net", "https://mathoverflow.net/users/9161/" ] }
39,561
I gave this problem to my undergraduate assistant, as I saw that Euler had originally solved it (although I am having trouble finding his proof). After working on it for two weeks, we boiled the hard cases down to showing that (1) in $\mathbb{Z}[w]$ the fundamental unit is $1+w+w^2$ (where $w$ is the real cube-root of 2) [which I'm sorry to say, I'm not certain I know off the top of my head how to prove], and (2) using that fundamental unit, I found a crazy ad hoc computation to show that there are only the obvious solutions. So I'm wondering if someone else out there is more clever, knows where I can find Euler's proof, or if there is another nice elementary proof in the literature.
If your students know a little about the Eisenstein integers (unique factorization and what the units are) there's the following simple argument (maybe it's essentially Euler's?). Let $u$ and $v$ be the complex roots of $z^2+z+1=0$. Theorem. Let $A$, $B$, $C$ be non-zero elements of $\mathbb{Q}[u]$ with sum $0$ and product twice a cube. Then some two of $A$, $B$, $C$ are equal. Corollary. Suppose $x$ is in $\mathbb{Q}$ and $x^2-1$ is a cube. Then $x$ is $1$, $-1$, $0$, $3$, or $-3$. (To prove the corollary let $A=1+x$, $B=1-x$ and $C=-2$). The proof of the theorem is a reductio ad absurdum. If there's a counterexample, there's one with $A$, $B$, $C$ in $\mathbb{Z}[u]$; take such a counterexample with $d=\min(|A|,|B|,|C|)$ as small as possible. Then $A$, $B$, $C$ are pairwise coprime. Since $ABC=2$(cube) we may assume $A=2i$(cube), $B=j$(cube), $C=k$(cube) where $i$, $j$, and $k$ are in the set $\{1,u,v\}$. Now all cubes in $\mathbb{Z}[u]$ are $0$ or $1$ mod $2$. Since $B+C$ is $0$ mod $2$, $j=k$. Since $ABC=2$(cube), $ijk$ is a cube and $i=j=k$. We may assume $i=j=k=1$. Then $A=2r^3$, $B=s^3$, $C=t^3$, and we may further assume that $s$ and $t$ are $1$ mod $2$. $s$ and $t$ are not both in $\{1,-1\}$ and it follows that $d$ is at least $\sqrt{27}$. Now look at $s+t$, $us+vt$ and $vs+ut$. They sum to $0$ and their product is $B+C=-2(r^3)$. They are congruent to $0$, $1$ and $1$ mod $2$, and the last 2 of them can't be equal since $s$ is not equal to $t$. Since each of them is at most $2d^{1/3}$, this contradicts the minimality assumption. This is really a 3-descent argument on an elliptic curve, but the fancy language as you see isn't needed. An almost identical argument gives what I think is the nicest proof of Fermat's Last Theorem for exponent 3.
{ "source": [ "https://mathoverflow.net/questions/39561", "https://mathoverflow.net", "https://mathoverflow.net/users/3199/" ] }
39,818
Since Ronnie Brown and his collaborators have come up with a general proof of the higher Van Kampen theorems, what impediments are there to using these to compute the unstable homotopy groups of spheres?
Here are some answers on the HHSvKT - I have been persuaded by a referee that we ought also to honour Seifert. These theorems are about homotopy invariants of structured spaces, more particularly filtered spaces or n-cubes of spaces. For example the first theorem of this type Brown, R. and Higgins, P.~J. On the connection between the second relative homotopy groups of some related spaces. Proc. London Math. Soc. (3) \textbf{36}~(2) (1978) 193--212. said that the fundamental crossed module functor from pairs of pointed spaces to crossed modules preserves certain colimits. This allows some calculations of homotopy 2-types and then you need further work to compute the 1st and 2nd homotopy group; of course these two homotopy groups are pale shadows of the 2-type. As example calculations I mention R. Brown ``Coproducts of crossed $P$-modules: applications to second homotopy groups and to the homology of groups'', {\em Topology} 23 (1984) 337-345. (with C.D.WENSLEY), `Computation and homotopical applications of induced crossed modules', J. Symbolic Computation 35 (2003) 59-72. In the second paper some computational group theory is used to compute the 2-type , and so 2nd homotopy groups as modules, for some mapping cones of maps $ Bf: BG \to BH$ where $f:G \to H$ is a morphism of groups. For applications of the work with Loday I refer you for example to the bibliography on the nonabelian tensor product http://groupoids.org.uk/nonabtens.html which has 144 items (Dec. 2015: the topic has been taken up by group theorists, because of the relation to commutators) and also Ellis, G.~J. and Mikhailov, R. A colimit of classifying spaces. {Advances in Math.} (2010) arXiv: [math.GR] 0804.3581v1 1--16. So in the tensor product work, we determine $\pi_3 S(K(G,1))$ as the kernel of a morphism $\kappa: G \otimes G \to G$ (the commutator morphism!). In fact we compute the 3-type of $SK(G,1)$ so you can work out the Whitehead product $\pi_2 \times \pi_2 \to \pi_3$ and composition with the Hopf map $\pi_2 \to \pi_3$. These theorems have connectivity conditions which means they are restricted in their applications, and don not solve all problems! There is still some interest in computing homotopy types of some complexes which cannot otherwise be computed. It is also of interest that the calculations are generally nonabelian ones. So the aim is to make some aspects of higher homotopy theory more like the theory of the fundamental group(oid); this is why I coined the term `higher dimensional group theory' as indicating new structures underlying homotopy theory. Even the 2-dimensional theorem on crossed modules seems little known or referred to! The proof is not so hard, but requires the notion of the homotopy double groupoid of a pair of pointed spaces. See also some recent presentations available on my preprint page . Further comment: 11:12 24 Sept. The HHSvKT's have two roles. One is to allow some calculations and understanding not previously possible. People concentrate on the homotopy groups of spheres but what about the homotopy types of more general complexes? One aim is to give another weapon in the armory of algebraic topology. The crossed complex work applies nicely to filtered spaces. The new book (pdf downloadable) gives an account of algebraic topology on the border between homotopy and homology without using singular or simplicial homology, and allows for some calculations for example of homotopy classes of maps in the non simply connected case. 
It gets some homotopy groups as modules over the fundamental group. I like the fact that the Relative Hurewicz Theorem is a consequence of a HHSvKT, and this suggested a triadic Hurewicz Theorem which is one consequence of the work with Loday. Another is determination of the critical group in the Barratt-Whitehead n-ad connectivity theorem - to get the result needs the apparatus of cat^n-groups and crossed n-cubes of groups (Ellis/Steiner). The hope (expectation?) is also that these techniques will allow new developments in related fields - see for example work of Faria Martins and Picken in differential geometry. Developments in algebraic topology have had over the decades wide implications, eventually in algebraic number theory. People could start by trying to understand and apply the 2-dim HHSvKT! Edit Jan 11, 2014 Further to my last point, consider my answer on excision for relative $\pi_2$: https://math.stackexchange.com/questions/617018/failure-of-excision-for-pi-2/621723#621723 See also the relevance to the Blakers-Massey Theorem on this nlab link. December 28, 2015 I mention also a presentation at CT2015 Aveiro on A philosophy of modelling and computing homotopy types . Note that homotopy groups are but a pale "shadow on a wall" of a homotopy type. Note also that homotopy groups are defined only for a space with base point, i.e. a space with some structure. My work with Higgins and with Loday involves spaces with much more structure; that with Loday involves $n$-cubes of pointed spaces. As with any method, it is important to be aware of what it does, and what it does not, do. One aspect is that the work with Loday deals with nonabelian algebraic models, and obtains, when it applies, precise colimit results in higher homotopy . One inspiration was a 1949 Theorem of JHC Whitehead in "Combinatorial Homotopy II" on free crossed modules.
{ "source": [ "https://mathoverflow.net/questions/39818", "https://mathoverflow.net", "https://mathoverflow.net/users/1353/" ] }
39,828
Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that. So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples: Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concept of a normal subgroup and, by extension, the concept of a simple group was kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations. The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake. Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own. The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake, if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself. Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. You can think of a natural generalisation, which you personally consider interesting. How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you? Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone, why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate that you want to study some strange condition on some obscure groups? Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand, how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study. I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. 
But one broad question behind my specific one is How much would you subscribe to the statement that EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"? Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO. Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true. Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one".
Dear Alex, It seems to me that the general question in the background of your query on algebra really is the better one to focus on, in that we can forget about irrelevant details. That is, as you've mentioned, one could be asking the question about motivation and decision in any kind of mathematics, or maybe even life in general. In that form, I can't see much useful to write other than the usual cliches: there are safer investments and riskier ones; most people stick to the former generically with occasional dabbling in the latter, and so on. This, I think, is true regardless of your status. Of course, going back to the corny financial analogy that Peter has kindly referred to, just how risky an investment is depends on how much money you have in the bank. We each just make decisions in as informed a manner as we can. Having said this, I rather like the following example: Kac-Moody algebras could be considered 'idle' generalizations of finite-dimensional simple Lie algebras. One considers the construction of simple Lie algebras by generators and relations starting from a Cartan matrix. When a positive definiteness condition is dropped from the matrix, one arrives at general Kac-Moody algebras. I'm far from knowledgeable on these things, but I have the impression that the initial definition by Kac and Moody in 1968 really was somewhat just for the sake of it. Perhaps indeed, the main (implicit) justification was that the usual Lie algebras were such successful creatures. Other contributors here can describe with far more fluency than I just how dramatically the situation changed afterwards, accelerating especially in the 80's, as a consequence of the interaction with conformal field theory and string theory. But many of the real experts here seem to be rather young and perhaps regard vertex operator algebras and the like as being just so much bread and butter. However, when I started graduate school in the 1980's, this story of Kac-Moody algebras was still something of a marvel. There must be at least a few other cases involving a rise of comparable magnitude. Meanwhile, I do hope some expert will comment on this. I fear somewhat that my knowledge of this story is a bit of the fairy-tale version. Added: In case someone knowledgeable reads this, it would also be nice to get a comment about further generalizations of Kac-Moody algebras. My vague memory is that some naive generalizations have not done so well so far, although I'm not sure what they are. Even if one believes it to be the purview of masters, it's still interesting to ask if there is a pattern to the kind of generalization that ends up being fruitful. Interesting, but probably hopeless. Maybe I will add one more personal comment, in case it sheds some darkness on the question. I switched between several supervisors while working towards my Ph.D. The longest I stayed was with Igor Frenkel, a well-known expert on many structures of the Kac-Moody type. I received several personal tutorials on vertex operator algebras, where Frenkel expressed his strong belief that these were really fundamental structures, 'certainly more so than, say, Jordan algebras.' I stubbornly refused to share his faith, foolishly, as it turns out (so far). Added again: In view of Andrew L.'s question I thought I'd add a few more clarifying remarks. I explained in the comment below what I meant with the story about vertex operator algebras. 
Meanwhile, I can't genuinely regret the decision not to work on them because I quite like the mathematics I do now, at least in my own small way. So I think what I had in mind was just the platitude that most decisions in mathematics, like those of life in general, are mixed: you might gain some things and lose others. To return briefly to the original question, maybe I do have some practical remarks to add. It's obvious stuff, but no one seems to have written it so far on this page. Of course, I'm not in a position to give anyone advice, and your question didn't really ask for it, so you should read this with the usual reservations. (I feel, however, that what I write is an answer to the original question, in some way.) If you have a strong feeling about a structure or an idea, of course keep thinking about it. But it may take a long time for your ideas to mature, so keep other things going as well, enough to build up a decent publication list. The part of work that belongs to quotidian maintenance is part of the trade, and probably a helpful routine for most people. If you go about it sensibly, it's really not that hard either. As for the truly original idea, I suspect it will be of interest to many people at some point, if you keep at it long enough. Maybe the real difference between starting mathematicians and established ones is the length of time they can afford to invest in a strange idea before feeling like they're running out of money. But by keeping a suitably interesting business going on the side, even a young person can afford to dream. Again, I suppose all this is obvious to you and many other people. But it still is easy to forget in the helter-skelter of life. By the way, I object a bit to how several people have described this question of community interest as a two-state affair. Obviously, there are many different degrees of interest, even in the work of very famous people.
{ "source": [ "https://mathoverflow.net/questions/39828", "https://mathoverflow.net", "https://mathoverflow.net/users/35416/" ] }
39,848
Given a finite group $G$, let $\{(1,1),(m_1,n_1),\ldots,(m_r,n_r)\}$ be the list of pairs $(m,n)$ in which $m$ is the order of some element, and $n$ is the number of elements with this order. The order of $G$ is thus $1+n_1+\cdots+n_r$, and the pair $(1,1)$ accounts for the neutral element. Let $G,G'$ be two finite groups, with the same list. Is it true or not (I bet not ) that $G$ and $G'$ are isomorphic ? If not, please provide a counter-exemple. Edit . Nick's answer gives the correct terminology, of conformal groups . Ben's answer speaks of the refined notion of almost conjugate subgroups . Is there any other related notion ?
There are easy examples that are $p$-groups. For instance, the mod 3 Heisenberg group is the nilpotent group with presentation $\left < a,b,c \;\bigg |\, [a,b] = c, [a,c] = [b,c] = a^3 = b^3 = c^3 = 1 \right >$ has order 27, and all but the trivial element of order 3. This has the same order portrait as $C_3^3$ where $C_3 = \mathbb Z / 3\mathbb Z$ is the cyclic group of order 3.
{ "source": [ "https://mathoverflow.net/questions/39848", "https://mathoverflow.net", "https://mathoverflow.net/users/8799/" ] }
39,934
I've heard asserted in talks quite a few times that Lusztig's canonical basis for irreducible representations is known to not always have positive structure coefficents for the action of $E_i$ and $F_i$. There are good geometric reasons the coefficents have to be positive in simply-laced situations, but no such arguments can work for non-simply laced examples. However, this is quite a bit weaker than knowing the result is false. Does anyone have a good example or reference for a situation where this positivity fails?
Hi, The following formulas are examples of non-positive structure coefficients for non-symmetric cases which are easily verified by the algorithm presented in Leclerc's paper "Dual Canonical Bases, Quantum Shuffles, and q-characters" or quagroup package in GAP4. Professor Masaki Kashiwara told me that he has known such non-positive structure coefficient for $G_2$ since Shigenori Yamane found it in 1994 as treated in his master thesis at Osaka University (written in Japanese). You can see similar negative coefficients in at least case $A_{2n}^{(2)}, D_{n+1}^{(2)}$. Anyway, conjecture 52 in Leclerc's paper is false (I already told Professor Leclerc about it). Shunsuke Tsuchioka Notation: $G(i_1,\cdots,i_n)$ stands for the canonical basis element corresponds to a crystal element $b(i_1,\cdots,i_n)=\tilde{f}_{i_n}b(i_1,\cdots,i_{n-1})=\cdots$. $G_2$ (1 is the short root) : $f_2 G(121112211) = G(1211122211) + [2]G(1111222211) + G(2111112221)$ $ + [2]G(1211112221) + G(1111122221) - G(1112211122) + [2]G(1122111122)$ $C_3$ (1,2 are short roots) : $f_3 G(23122312) = [2]G(222333121) + [2]G(312222331) + [2]G(231222331)$ $ + [2]G(122223331) + G(231223312) + [2]G(122233312) - G(223112233) + [2]G(231122233)$ $B_4$ (1,2,3 are long roots) : $f_1G(4342341234) = [2]G(43344423211) + [2]G(43423443211) - G(44233443211)$ $ + [2]G(43423344211) + [2]G(43423442311) + [2]G(34234442311)$ $ + [2]G(43422334411) + G(43423412341)$
{ "source": [ "https://mathoverflow.net/questions/39934", "https://mathoverflow.net", "https://mathoverflow.net/users/66/" ] }
39,944
This forum brings together a broad enough base of mathematicians to collect a "big list" of equivalent forms of the Riemann Hypothesis...just for fun. Also, perhaps, this collection could include statements that imply RH or its negation. Here is what I am suggesting we do: Construct a more or less complete list of sufficiently diverse known reformulations of the Riemann Hypothesis and of statements that would resolve the Riemann Hypothesis. Since it is in bad taste to directly attack RH, let me provide some rationale for suggesting this: 1) The resolution of RH is most likely to require a new point of view or a powerful new approach. It would serve us to collect existing attempts/perspectives in a single place in order to reveal new perspectives. 2) Perhaps the resolution of RH will need ideas from many areas of mathematics. One hopes that the solution of this problem will exemplify the unity of mathematics, and so it is of interest to see very diverse statements of RH in one place. Even in the event where no solution is near after this effort, the resulting compilation would itself help illustrate the depth of RH. 3) It would take very little effort for an expert in a given area to post a favorite known reformulation of RH whose statement is in the language of his area. Therefore, with very little effort, we could have access to many different points of view. This would be a case of many hands making light work. (OK, I guess not such light work!) Anyhow, in case this indeed turns out to be an appropriate forum for such a collection, you should try to include proper references for any reformulation you include.
I like Lagarias "elementary" reformulation of Robin's theorem: that RH is true iff $\sigma(n)\leq H_n+e^{H_n}\log(H_n)$ holds for every $n\geq 1$, where $\sigma(n)$ is the sum of divisors function and $H_n$ is the nth harmonic number. Its major appeal is that anyone with rudimentary exposure to number theory can play with it. Having spent the better part of my youth fiddling with this reformulation really brought out the enormous difficulty of proving RH. In a way I think this reformulation is evil, because it looks tractable, but is ultimately useless and perhaps even harder to work with than other more complex reformulations. On the other hand I hope a future proof of RH will involve this reformulation because then I might have a chance of understanding the proof!
{ "source": [ "https://mathoverflow.net/questions/39944", "https://mathoverflow.net", "https://mathoverflow.net/users/6269/" ] }
40,005
One of the many articles on the Tricki that was planned but has never been written was about making it easier to solve a problem by generalizing it (which initially seems paradoxical because if you generalize something then you are trying to prove a stronger statement). I know that I've run into this phenomenon many times, and sometimes it has been extremely striking just how much simpler the generalized problem is. But now that I try to remember any of those examples I find that I can't. It has recently occurred to me that MO could be an ideal help to the Tricki: if you want to write a Tricki article but lack a supply of good examples, then you can ask for them on MO. I want to see whether this works by actually doing it, and this article is one that I'd particularly like to write. So if you have a good example up your sleeve (ideally, "good" means both that it illustrates the phenomenon well and that it is reasonably easy for others to understand) and are happy to share it, then I'd be grateful to hear it. I will then base an article on those examples, and I will also put a link from that article to this MO page so that if you think of a superb example then you will get the credit for it there as well as here. Incidentally, here is the page on this idea as it is so far . It is divided into subpages, which may help you to think of examples. Added later: In the light of Jonas's comment below (I looked, but not hard enough), perhaps the appropriate thing to do if you come up with a good example is to add it as an answer to the earlier question rather than this one. But I'd also like to leave this question here because I'm interested in the general idea of some kind of symbiosis between the Tricki and MO (even if it's mainly the Tricki benefiting from MO rather than the other way round).
Great question. Maybe the phenomenon is less surprising if one thinks that there are $\infty$ ways to generalize a question, but just a few of them make some progress possible. I think it is reasonable to say that successful generalizations must embed, consciously or not, a very deep understanding of the problem at hand. They operate through the same mechanism at work in good abstraction, by helping you forget insignificant details and focus on the heart of the matter. An example, which is probably too grandiose to qualify as an answer, since your question seems very specific, is Fredholm theory. At the beginning of last century integral equations were a hot topic and an ubiquitous tool to solve many concrete PDE problems. The theory of linear operators on Banach and Hilbert spaces is an outgrowth of this circle of problems. Now, generalizing from $$ u(x) + \lambda \int _a ^b k(x,s) u(s) ds = f(x) $$ to $$ (I+ \lambda K) u = f $$ makes the problem trivial, and we do it without thinking. But it must have been quite a shock in 1900.
{ "source": [ "https://mathoverflow.net/questions/40005", "https://mathoverflow.net", "https://mathoverflow.net/users/1459/" ] }
40,082
I'm not teaching calculus right now, but I talk to someone who does, and the question that came up is why emphasize the $h \to 0$ definition of a derivative to calculus students? Something a teacher might do is ask students to calculate the derivative of a function like $3x^2$ using this definition on an exam, but it makes me wonder what the point of doing something like that is. Once one sees the definition and learns the basic rules, you can basically calculate the derivative of a lot of reasonable functions quickly. I tried to turn that around and ask myself if there are good examples of a function (that calculus students would understand) where there isn't already a well-established rule for taking the derivative. The best I could come up with is a piecewise defined function, but that's no good at all. More practically, this question came up because when trying to get students to do this, they seemed rather impatient (and maybe angry?) at why they couldn't use the "shortcut" (that they learned from friends or whatever). So here's an actual question: What benefit is there in emphasizing (or even introducing) to calculus students the $h \to 0$ definition of a derivative (presuming there is a better way to do this?) and secondly, does anyone out there actually use this definition to calculate a derivative that couldn't be obtained by a known symbolic rule? I'd prefer a function whose definition could be understood by a student studying first-year calculus. I'm not trying to say that this is bad (or good), I just couldn't come up with any good reasons one way or the other myself. EDIT : I appreciate all of the responses, but I think my question as posed is too vague. I was worried about being too specific, so let me just tell you the context and apologize for misleading the discussion. This is about teaching first-semester calculus to students straight out of high school in the US, most of whom have already taken a calculus course in high school (and didn't do well or retake it for whatever reason). These are mostly students who have no interest in mathematics (the cause for this is a different discussion I guess) and usually are only taking calculus to fulfill some university requirement. So their view of the instructor trying to get them to learn how to calculate derivatives from the definition on an assignment or on an exam is that they are just making them learn some long, arbitrary way of something that they already have better tools for. I apologize but I don't really accept the answer of "we teach the limit definition because we need a definition and that's how we do mathematics". I know I am being unfair in my paraphrasing, and I am NOT trying to say that we should not teach definitions. I was trying to understand how one answers the students' common question: "Why can't we just do this the easy way?" (and this was an overwhelming response on a recent mini-evaluation given to them). I like the answer of $\exp(-1/x^2)$ for the purpose of this question though. It's hard to get students to take you seriously when they think that you're only interested in making them jump through hoops. As a more extreme example, I recall that as an undergraduate, some of my friends who took first year calculus (depending on the instructor) were given an oral exam at the end of the semester in which they would have to give a proof of one of 10 preselected theorems from the class. 
This seemed completely pointless to me and would only further isolate students from being interested in math, so why are things like this done? Anyway, sorry for wasting a lot of your time with my poorly-phrased question. I know MathOverflow is not a place for discussions, and I don't want this to degenerate into one, so sorry again and I'll accept an answer (though there were many good ones addressing different points).
This is a good question, given the way calculus is currently taught, which for me says more about the sad state of math education, rather than the material itself. All calculus textbooks and teachers claim that they are trying to teach what calculus is and how to use it. However, in the end most exams test mostly for the students' ability to turn a word problem into a formula and find the symbolic derivative for that formula. So it is not surprising that virtually all students and not a few teachers believe that calculus means symbolic differentiation and integration. My view is almost exactly the opposite. I would like to see symbolic manipulation banished from, say, the first semester of calculus. Instead, I would like to see the first semester focused purely on what the derivative and definite integral ( not the indefinite integral) are and what they are useful for. If you're not sure how this is possible without all the rules of differentiation and antidifferentiation, I suggest you take a look at the infamous "Harvard Calculus" textbook by Hughes-Hallett et al. This for me and despite all the furor it created is by far the best modern calculus textbook out there, because it actually tries to teach students calculus as a useful tool rather than a set of mysterious rules that miraculously solve a canned set of problems. I also dislike introducing the definition of a derivative using standard mathematical terminology such as "limit" and notation such as $h\rightarrow 0$. Another achievement of the Harvard Calculus book was to write a math textbook in plain English. Of course, this led to severe criticism that it was too "warm and fuzzy", but I totally disagree. Perhaps the most important insight that the Harvard Calculus team had was that the key reason students don't understand calculus is because they don't really know what a function is. Most students believe a function is a formula and nothing more. I now tell my students to forget everything they were ever told about functions and tell them just to remember that a function is a box, where if you feed it an input (in calculus it will be a single number), it will spit out an output (in calculus it will be a single number). Finally, (I could write on this topic for a long time. If for some reason you want to read me, just google my name with "calculus") I dislike the word "derivative", which provides no hint of what a derivative is. My suggested replacement name is "sensitivity". The derivative measures the sensitivity of a function. In particular, it measures how sensitive the output is to small changes in the input. It is given by the ratio, where the denominator is the change in the input and the numerator is the induced change in the output. With this definition, it is not hard to show students why knowing the derivative can be very useful in many different contexts. Defining the definite integral is even easier. With these definitions, explaining what the Fundamental Theorem of Calculus is and why you need it is also easy. Only after I have made sure that students really understand what functions, derivatives, and definite integrals are would I broach the subject of symbolic computation. What everybody should try to remember is that symbolic computation is only one and not necessarily the most important tool in the discipline of calculus, which itself is also merely a useful mathematical tool. 
ADDED: What I think most mathematicians overlook is how large a conceptual leap it is to start studying functions (which is really a process) as mathematical objects, rather than just numbers. Until you give this its due respect and take the time to guide your students carefully through this conceptual leap, your students will never really appreciate how powerful calculus really is. ADDED: I see that the function $\theta\mapsto \sin\theta$ is being mentioned. I would like to point out a simple question that very few calculus students and even teachers can answer correctly: Is the derivative of the sine function, where the angle is measured in degrees, the same as the derivative of the sine function, where the angle is measured in radians. In my department we audition all candidates for teaching calculus and often ask this question. So many people, including some with Ph.D.'s from good schools, couldn't answer this properly that I even tried it on a few really famous mathematicians. Again, the difficulty we all have with this question is for me a sign of how badly we ourselves learn calculus. Note, however, that if you use the definitions of function and derivative I give above, the answer is rather easy.
{ "source": [ "https://mathoverflow.net/questions/40082", "https://mathoverflow.net", "https://mathoverflow.net/users/321/" ] }
40,145
What is known about irrationality of $\pi e$, $\pi^\pi$ and $e^{\pi^2}$?
I believe most such questions are still very far from being resolved. Apparently, it is not even known if $\pi^{\pi^{\pi^\pi}}$ is an integer (let alone irrational).
{ "source": [ "https://mathoverflow.net/questions/40145", "https://mathoverflow.net", "https://mathoverflow.net/users/9550/" ] }
40,161
If $X$ is a scheme, the Hilbert scheme of points $X^{[n]}$ parameterizes zero dimensional subschemes of $X$ of degree $n$. Why do we care about it? Of course, there are lots of "in subject" reasons, which I summarize by saying that $X^{[n]}$ is maybe the simplest modern moduli space, and as such is an extremely fertile testing ground for ideas in moduli theory. But it is not clear that this would be very convincing to someone who was not already interested in $X^{[n]}$. The question I am really asking is: Why would someone who does not study moduli care about $X^{[n]}$? The main reason I ask is for the sake of having some relevant motivation sections in talks. But an answer to the following version of the question would be extremely valuable as well: What can someone who knows a lot something* about $X^{[n]}$ contribute to other areas of algebraic geometry, or mathematics more generally, or even other subjects? *reworded in light of the answer of Nakajima
What can someone who knows a lot about $X^{[n]}$ contribute to other areas of algebraic geometry, or mathematics more generally, or even other subjects? I know a little about $X^{[n]}$. And I have no contribution to mathematics nor other areas of algebraic geometry. But I find study of Hilbert schemes is very interesting. Isn't it enough to motivate to study Hilbert schemes ?
{ "source": [ "https://mathoverflow.net/questions/40161", "https://mathoverflow.net", "https://mathoverflow.net/users/4707/" ] }
40,268
This is a heuristic question that I think was once asked by Serge Lang. The gaussian: $e^{-x^2}$ appears as the fixed point to the Fourier transform, in the punchline to the central limit theorem, as the solution to the heat equation, in a very nice proof of the Atiyah-Singer index theorem etc. Is this an artifact of the techniques (such as the Fourier Transform) that people like use to deal with certain problems or is this the tip of some deeper platonic iceberg?
Quadratic (or bilinear) forms appear naturally throughout mathematics, for instance via inner product structures, or via dualisation of a linear transformation, or via Taylor expansion around the linearisation of a nonlinear operator. The Laplace-Beltrami operator and similar second-order operators can be viewed as differential quadratic forms, for instance. A Gaussian is basically the multiplicative or exponentiated version of a quadratic form, so it is quite natural that it comes up in multiplicative contexts, especially on spaces (such as Euclidean space) in which a natural bilinear or quadratic structure is already present. Perhaps the one minor miracle, though, is that the Fourier transform of a Gaussian is again a Gaussian, although once one realises that the Fourier kernel is also an exponentiated bilinear form, this is not so surprising. But it does amplify the previous paragraph: thanks to Fourier duality, Gaussians not only come up in the context of spatial multiplication, but also frequency multiplication (e.g. convolutions, and hence CLT, or heat kernels). One can also take an adelic viewpoint. When studying non-archimedean fields such as the p-adics $Q_p$, compact subgroups such as $Z_p$ play a pivotal role. On the reals, it seems the natural analogue of these compact subgroups are the Gaussians (cf. Tate's thesis). One can sort of justify the existence and central role of Gaussians on the grounds that the real number system "needs" something like the compact subgroups that its non-archimedean siblings enjoy, though this doesn't fully explain why Gaussians would then be exponentiated quadratic in nature.
{ "source": [ "https://mathoverflow.net/questions/40268", "https://mathoverflow.net", "https://mathoverflow.net/users/9569/" ] }
40,337
[This is just the kind of vague community-wiki question that I would almost certainly turn my nose up at if it were asked by someone else, so I apologise in advance, but these sorts of questions do come up on MO with some regularity now so I thought I'd try my luck] I have just been asked by the Royal Society of Arts to come along to a lunchtime seminar on "ingenuity". As you can probably guess from the location, this is not a mathematical event. In the email to me with the invitation, it says they're inviting me "...as I suppose that some mathematical proofs exhibit ingenuity in their methods." :-) The email actually defines ingenuity for me: it says it's "ideas that solve a problem in an unusually neat, clever, or surprising way.". My instinct now would usually be to collect a bunch of cute low-level mathematical results with snappy neat clever and/or surprising proofs, e.g. by scouring my memory for such things, over the next few weeks, and then to casually drop some of them into the conversation. My instinct now, however, is to ask here first, and go back to the old method if this one fails. Question: What are some mathematical results with surprising and/or unusually neat proofs? Now let's see whether this question (a) bombs, (b) gets closed, (c) gets filled with rubbish, (d) gets filled with mostly rubbish but a couple of gems, which I can use to amuse, amaze and impress my lunchtime arty companions and get all the credit myself. This is Community Wiki of course, and I won't be offended if the general consensus is that these adjectives apply to the vast majority of results and the question gets closed. I'm not so sure they do though---sometimes the proof is "grind it out". Although I don't think I'll be telling the Royal Society of Arts people this, I always felt that Mazur's descent to prove his finiteness results for modular curves was pretty surprising (in that he had enough data to pull the descent off). But I'm sure there are some really neat low-level answers to this.
The impossibility of tiling a chessboard with two opposite corners removed using dominos is quite good for this purpose I think, especially if you start by giving a boring case-analysis proof for a 4-by-4 board. The bridges of Königsberg is also pretty good. Marcus du Sautoy spoke about it last night in his series A Brief History of Mathematics on Radio 4 (though he overdid it when he claimed that the solution had "revolutionized the internet").
{ "source": [ "https://mathoverflow.net/questions/40337", "https://mathoverflow.net", "https://mathoverflow.net/users/1384/" ] }
40,399
Let $X$ be a connected CW complex. One can ask to what extent $H_\ast(X)$ determines $\pi_1(X)$. For example, it determines its abelianization, because the Hurewicz Theorem implies that $H_1(X)$ is isomorphic to the abelianization of $\pi_1(X)$. I'm thinking about invariants of 2-knots which have to do with the second homology of (covers of) their complements, and I'm therefore very much interested in the answer to the following question: What part of the fundamental group is detected by $H_2(X)$? In particular, is there an obvious map from $H_2(X)$ (or from part of it) into $\pi_1(X)$? Where in the derived series of $\pi_1(X)$ would the image of $H_2(X)$ live?
$H_2(X)$ is all about $\pi_1(X)$ and $\pi_2(X)$. If $\pi_2(X)$ is trivial (as for knot complements) then it is a functor of $\pi_1(X)$. Let $H_n(G)$ be $H_n(BG)$, the homology of the classifying space ($K(G,1)$). If $X$ is path-connected then there is a surjection $H_2(X)\to H_2(\pi_1(X))$ whose kernel is a quotient of $\pi_2(X)$, the cokernel of a map from $H_3(\pi_1(X))$ to the largest quotient of $\pi_2(X)$ on which the canonical action of $\pi_1(X)$ becomes trivial. This $H_2(G)$ isn't anything like the next piece of the derived series after $H_1(G)=G^{ab}$, though. For example, if $G$ is abelian then $H_2(G)$ is the second exterior power of $H_1(G)$ (EDIT: so it can be nontrivial even though it knows no more than $H_1(G)$ does), while if $H_1(G)$ is trivial $H_2(G)$ is often nontrivial (EDIT: so, even when it does carry some more information than $H_1(G)$, it is not necessarily derived-series information). EDIT: The previous paragraph comes from looking at the integral homology Serre spectral sequence of $X\to K(\pi_1(X),1)$, where the homotopy fiber is the universal cover $\tilde X$. Since $H_1\tilde X=0$, the groups $E^\infty_{p,1}$ are trivial and we get an exact sequence $$ 0\to E^\infty_{0,2}\to H_2(X)\to E^\infty_{2,0}\to 0, $$ therefore $$ E^2_{3,0} \to E^2_{0,2}\to H_2(X)\to E^2_{2,0}\to 0. $$ Since $H_2(\tilde X)=\pi_2(\tilde X)=\pi_2(X)$, this looks like $$ H_3(\pi_1(X)) \to H_0(\pi_1(X);\pi_2(X))\to H_2(X)\to H_2(\pi_1(X))\to 0. $$ The place to look for the rest of the derived series would be homology with nontrivial coefficients, for example homology of covering spaces.
{ "source": [ "https://mathoverflow.net/questions/40399", "https://mathoverflow.net", "https://mathoverflow.net/users/2051/" ] }
40,632
Given a continuous map $f \colon X \to Y$ of topological spaces, and a sheaf $\mathcal{F}$ on $Y$, the inverse image sheaf $f^{-1}\mathcal{F}$ on $X$ is the sheafification of the presheaf $$U \mapsto \varinjlim_{V \supseteq f(U)} \Gamma(V, \mathcal{F}).$$ If $X$ and $Y$ happen to be ringed spaces, $f$ a morphism of ringed spaces, and $\mathcal{F}$ an $\mathcal{O}_Y$-module, one then defines the pullback sheaf $f^* \mathcal{F}$ on $X$ as $$f^{-1}\mathcal{F} \otimes_{f^{-1} \mathcal{O}_Y} \mathcal{O}_X.$$ However, I cannot think of any other usage of the inverse image sheaf in algebraic geometry. Moreover, if $X$ and $Y$ are schemes and $\mathcal{F}$ is quasicoherent, there is an alternate way of defining $f^* \mathcal{F}$. Given $f \colon \mathrm{Spec} B \to \mathrm{Spec} A$, and $\mathcal{F} = \widetilde{M}$, where $M$ is an $A$-module, one defines $f^* \mathcal{F}$ to be the sheaf associated to the $B$-module $M \otimes_A B$. To extend this to arbitrary schemes, it is necessary to prove that it is well-defined; but I still think it is easier to work with than the other definition, which involves direct limits and two sheafifications of presheaves (the inverse image, and the tensor product). I have not checked, but I imagine that something similar can be done for formal schemes. Hence, my question: What uses, if any, does the inverse image sheaf have in algebraic geometry, other than to define the pullback sheaf? A closely related question is In a course on schemes, is there a good reason to define the inverse image sheaf and the pullback sheaf for ringed spaces in general, rather than simply defining the pullback of a quasicoherent sheaf by a morphism of schemes? To go from the first question to the second question, I suppose one must also address whether there are $\mathcal{O}_X$-modules significant to algebraic geometers that are not quasicoherent. Edit: I think the question deserves a certain amount of clarification. Several people have given interesting descriptions or explications of the inverse image sheaf. While I appreciate these, they are not the point of my question; I am, specifically, interested to know whether there are constructions or arguments in algebraic geometry that cannot reasonably be done without using the inverse image sheaf. So far, the answer seems to be that such things exist, but are not really within the scope of, say, a one-year first course on schemes. There are other constructions (such as the inverse image ideal sheaf) that do not, strictly speaking, require the inverse image sheaf, but for which it may be more appropriate to use the inverse image sheaf as a matter of taste.
By some coincidence, I have a student going through this stuff now, and we got to this point just yesterday. The definition of $f^{-1}$ is certainly disconcerting at first, but it's not that bad. You'd like to say $$f^{-1}\mathcal{F}(U) = \mathcal{F}(f(U))$$ except it doesn't make sense as it stands, unless $f(U)$ is open. So we approximate by open sets from above. A section on the left is a germ of a section of $\mathcal{F}$ defined in some open neighbourhood of $f(U)$, where by germ I mean the equivalence class where you identify two sections if they agree on a smaller neighbourhood. Even if you're still unhappy with this, the adjointness property tells you that it is the right thing to look at. Also, some of us work with non-quasicoherent sheaves (e.g. locally constant sheaves or constructible sheaves), so it's nice to have a general construction. Addendum: In my answer yesterday, I had somehow forgotten to mention the etale space, or "sheaf as a bunch of stalks" $$\coprod_y \mathcal{F}_y\to Y$$ viewpoint discussed by Emerton and Martin Brandenburg. Had you started with this "bundle picture", we would be having this discussion in reverse, because pullback is the natural operation here and pushforward is the thing that seems strange.
{ "source": [ "https://mathoverflow.net/questions/40632", "https://mathoverflow.net", "https://mathoverflow.net/users/5094/" ] }
40,709
Wolfram's MathWorld website, at the page on functions, makes the following claim about the notation $f(x)$ for a function: While this notation is deprecated by professional mathematicians, it is the more familiar one for most nonprofessionals. From context, it appears that this is referring to the use of $f(x)$ to refer to the actual function, rather than just to a particular value, when $x$ is (in the context) a dummy variable. Is this true? Do professional mathematicians "deprecate" this notation? To avoid long and windy discussions as to the value or otherwise of this notation (which would be much more appropriate in a blog), this question should be viewed as a poll. As MO runs on StackExchange 1.0, it doesn't have the feature whereby the actual "up" and "down" votes for an answer can be easily seen. Therefore I shall post two answers, one in favour and one against, the following statement. Please only vote up. A vote for one answer will be taken as a vote against the other. The Law of the Excluded Middle does not hold here. The motion is: This house believes that the notation $f(x)$ to refer to a function has value in professional mathematics and that there is no need to apologise or feel embarrassed when using it thus. This poll has now run its course. The final tally can be seen below.
Vote for this answer if you agree with the statement: This house believes that the notation $f(x)$ to refer to a function has value in professional mathematics and that there is no need to apologise or feel embarrassed when using it thus. (Note: the answer is CW so that this is a genuine poll)
{ "source": [ "https://mathoverflow.net/questions/40709", "https://mathoverflow.net", "https://mathoverflow.net/users/9704/" ] }
40,729
I have always checked very carefully the papers I was refereeing when I wanted to suggest "accept". Actually I spend almost as much time checking the maths of a paper I referee as I do checking the maths of a paper of my own (and that takes a very long time!). But I have some doubts. Is it really my job as a referee? This question is related to Refereeing a Paper, but only a few comments were made on that point in the above-cited thread (mainly Refereeing a Paper).
I only hope that computer-aided proof checking saves mathematics before it collapses under the weight of decades of irresponsible publishing. Of all disciplines, peer review in mathematics should serve to guarantee nearly absolute confidence in the validity of published results. Many subjects have grown so complex that one can't reasonably expect new people coming to the field to take responsibility for the correctness of all the literature that they might need to quote. I remember attending a seminar at a famous institute where a young speaker justified a step by citing a paper by a well-known and well-published worker in the field. A very, very famous mathematician in the audience, the recognized leader of the discipline, stopped him and said "I wouldn't believe anything in [so and so]'s papers." A hush went around the room. Later I asked a colleague of the impugned mathematician (a member of the same department and an expert in the same field) about the incident. "Yeah, everyone knows his papers are garbage," he said. I asked why they get published. "No one wants a fight. We publish them and then ignore them." I don't want this sort of practice to define mathematics in the public mind. I think we should compensate referees for their hard work, and honor solid refereeing nearly as much as we do excellent research. Either that, or fund computer-aided proof checking to the hilt, change the methodology of the subject and get human beings out of the business of vouching for the literature.
{ "source": [ "https://mathoverflow.net/questions/40729", "https://mathoverflow.net", "https://mathoverflow.net/users/9668/" ] }
40,770
I've seen that P != LINSPACE (by which I mean SPACE(n)), but that we don't know if one is a subset of the other. I assume that means that the proof must not involve showing a problem that's in one class but not the other, so how else would you go about proving it? Thanks!
Suppose by contradiction that P=SPACE(n). Then there exists an algorithm to simulate an $n$-space Turing machine in (say) $n^c$ time, for some constant $c$. But this means that there exists an algorithm to simulate an $n^2$-space Turing machine in $n^{2c}$ time. Therefore SPACE($n^2$) is also contained in P. So P = SPACE(n) = SPACE($n^2$). But SPACE(n) = SPACE($n^2$) contradicts the Space Hierarchy Theorem. QED (Notice that in this proof, we showed neither that P is not contained in SPACE(n), nor that SPACE(n) is not contained in P! We only showed that one or the other must be true, by using the different closure properties of polynomial time and linear space. It's conjectured that P and SPACE(n) are incomparable.)
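The step from simulating $n$-space machines to simulating $n^2$-space machines is the standard padding trick; here is one way to fill it in. Given $L \in$ SPACE($n^2$), consider the padded language $$L' = \{\, x\,0^{\,|x|^2 - |x|} : x \in L \,\},$$ with the convention that on an input of length $m$, the first $\lfloor\sqrt{m}\rfloor$ symbols are read as $x$. A machine for $L'$ can run the SPACE($n^2$) machine for $L$ on $x$ using space $O(|x|^2) = O(m)$, so $L' \in$ SPACE($n$), and hence $L' \in$ TIME($n^c$) under the assumption P = SPACE(n). To decide whether $x \in L$, pad $x$ to length $|x|^2$ and run that algorithm, which takes time on the order of $(|x|^2)^c = |x|^{2c}$.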
{ "source": [ "https://mathoverflow.net/questions/40770", "https://mathoverflow.net", "https://mathoverflow.net/users/9714/" ] }
40,920
The title of the question is also the title of a talk by Vladimir Voevodsky, available here. Had this kind of opinion been expressed before? EDIT. Thanks to all answerers, commentators, voters, and viewers! Here are three more links: Question arising from Voevodsky's talk on inconsistency by John Stillwell, Nelson's program to show inconsistency of ZF by Andreas Thom, Pierre Colmez, La logique c'est pas logique ! EDIT. Here is the link to the FOM list discussing these themes.
The talk in question was given as part of a celebration (this past weekend) of the 80th anniversary of the founding of the Institute for Advanced Study in Princeton. As you might guess, there were quite a few very well-known mathematicians and physicists in the audience. (To name just a few, Jack Milnor, Jean Bourgain, Robert Langlands, Frank Wilczek, and Freeman Dyson, all of whom also spoke during the weekend.) The talk was a gem, and what did come as a surprise, at least to me, was that towards the end of his talk Voevodsky let on that he hoped that someone would find an inconsistency, and that by that time there was no audible gasp from the audience. There was of course a very lively discussion after the talk, and nobody seemed willing to say they felt that the "Current Foundations" (whatever they are) are definitely consistent. Of course Voevodsky was NOT saying that he felt that the body of theorems making up the "classic mathematics" that we normally deal with might be inconsistent; that is quite a different matter. What we should keep in mind is that a hundred years ago an earlier generation of mathematicians were quite surprised by not one but several "antinomies", like Russell's Paradox, the Burali-Forti Paradox, etc. (and that was followed by the greatest century in the history of Mathematics). As to the question "Had this kind of opinion been expressed before?", yes of course it has, but perhaps not so forcefully or in such a high-level forum. One person who has been expressing such ideas in recent years is my old friend Ed Nelson, who was also in the audience. (You can see his ideas in a recent paper: http://www.math.princeton.edu/~nelson/papers/warn.pdf).
{ "source": [ "https://mathoverflow.net/questions/40920", "https://mathoverflow.net", "https://mathoverflow.net/users/461/" ] }
40,945
This question is inspired by an answer of Tim Porter. Ronnie Brown pioneered a framework for homotopy theory in which one may consider multiple basepoints. These ideas are accessibly presented in his book Topology and Groupoids. The idea of the fundamental groupoid, put forward as a multi-basepoint alternative to the fundamental group, is the highlight of the theory. The headline result seems to be that the van Kampen Theorem looks more natural in the groupoid context. I don't know whether I find this headline result compelling: the extra baggage of groupoids and pushouts makes me question whether the payoff is worth the effort, all the more so because I am a geometric topology person, rather than a homotopy theorist. Do you have examples in geometric topology (3-manifolds, 4-manifolds, tangles, braids, knots and links...) where the concept of the fundamental groupoid has been useful, in the sense that it has led to new theorems or to substantially simplified treatment of known topics? One place that I can imagine (but, for lack of evidence, only imagine) that fundamental groupoids might be useful (at least to simplify exposition) is in knot theory, where we're constantly switching between (at least) three different "natural" choices of basepoint: on the knot itself, on the boundary of a tubular neighbourhood, and in the knot complement. This change-of-basepoint adds a nasty bit of technical complexity which I have struggled with when writing papers. A recent proof (Proposition 8 of my paper with Kricker), which would have been a few lines if we hadn't had to worry about basepoints, became 3 pages. In another direction, what about fundamental groupoids of braids? Have the ideas of fundamental groupoids been explored in geometric topological contexts? Conversely, if not, then why not?
Here is an interesting example where groupoids are useful. The mapping class group $\Gamma_{g,n}$ is the group of isotopy classes of orientation-preserving diffeomorphisms of a surface of genus $g$ with $n$ distinct marked points (labelled 1 through n). The classifying space $B\Gamma_{g,n}$ is rational homology equivalent to the (coarse) moduli space $\mathcal{M}_{g,n}$ of complex curves of genus $g$ with $n$ marked points (and if you are willing to talk about the moduli orbifold or stack, then it is actually a homotopy equivalence). The symmetric group $\Sigma_n$ acts on $\mathcal{M}_{g,n}$ by permuting the labels of the marked points. Question: How do we describe the corresponding action of the symmetric group on the classifying space $B\Gamma_{g,n}$? It is possible to see $\Sigma_n$ as acting by outer automorphisms on the mapping class group. I suppose that one could probably build an action on the classifying space directly from this, but here is a much nicer way to handle the problem. The group $\Gamma_{g,n}$ can be identified with the orbifold fundamental group of the moduli space. Let's replace it with a fundamental groupoid. Fix a surface $S$ with $n$ distinguished points, and take the groupoid whose objects are labellings of the distinguished points by 1 through n, and whose morphisms are isotopy classes of diffeomorphisms that respect the labellings (i.e., sending the point labelled $i$ in the first labelling to the point labelled $i$ in the second labelling). Clearly this groupoid is equivalent to the original mapping class group, so its classifying space is homotopy equivalent. But now we have an honest action of the symmetric group by permuting the labels on the distinguished points of $S$.
{ "source": [ "https://mathoverflow.net/questions/40945", "https://mathoverflow.net", "https://mathoverflow.net/users/2051/" ] }
40,962
Suppose that I have $n$ unknown variables $x_1,\ldots,x_n$. I wish to compute their sum: $$Sum(x) = \sum_{i=1}^nx_i$$ However, the only access to these variables is through products: that is, for any subset $S \subset [n]$ I may compute: $$P(S) = \prod_{i \in S}x_i$$ That is, I wish to find some number of subsets $S_1,\ldots,S_k$, compute $P(S_1),\ldots,P(S_k)$, and then apply some postprocessing $f$ to find the sum of the variables: $$f(P(S_1),\ldots,P(S_k)) = Sum(x)$$ My question is: How large must $k$ be? Clearly, $k = n$ suffices, since with $k$ subsets I may uniquely identify each $x_i$ and then sum the values myself. Is it possible to do with $k < n$? With $k = O(1)$?
Here is an extreme case: I will tell you that either every variable is zero, or possibly a single one of them is 1. So your task is to decide whether the sum is 0 or 1. Any product of more than one term is zero in both scenarios, so it gives no information at all. Hence only singleton queries are informative, and to rule out the all-zero case you need to check each variable individually, so $k = n$ in the worst case.
{ "source": [ "https://mathoverflow.net/questions/40962", "https://mathoverflow.net", "https://mathoverflow.net/users/9769/" ] }
40,997
Given a large number $q$ (say, a prime) and a number $a$ between 2 and $q-1$, what is the fastest algorithm known for computing the inverse of $a$ in the group of residue classes modulo $q$?
The fast Euclidean algorithm runs in time $O(M(n)\log n)$, where $n=\log q$ and $M(n)$ is the time to multiply two $n$-bit integers. This yields a bit-complexity of $$ O(n\log^2 n\log\log n) $$ for the time to compute the inverse of an integer modulo $q$, using the standard Schonhage-Strassen algorithm for multiplying integers (a slightly better asymptotic result can be obtained using Furer's multiplication algorithm). To understand the basic idea behind the fast Euclidean algorithm, recall that the standard Euclidean algorithm computes the (extended) gcd of two integers $r_0 > r_1$ by successively computing $m_i=\lfloor r_{i-1}/r_i\rfloor$ and setting $$ r_{i+1} = r_{i-1} - m_ir_i, $$ until it obtains $r_{k+1} = 0$, at which point $r_k = \gcd (r_0, r_1)$. This can be expressed in matrix form as $$ R_1 = \begin{bmatrix} r_0\newline r_1 \end{bmatrix};\qquad R_{i+1} = \begin{bmatrix} r_i\newline r_{i+1}\end{bmatrix} = M_iR_i;\qquad M_i=\begin{bmatrix} 0&1\newline 1&-m_i \end{bmatrix}, $$ and if we compute the matrix $S_k=M_kM_{k-1}\ldots M_1$, we have $R_{k+1}=S_kR_1$, which expresses the entry $r_k=\gcd(r_0,r_1)$ as a linear combination of $r_0$ and $r_1$. Assuming this gcd is 1, we can then read off the inverse of $r_1$ modulo $r_0$ (and vice versa) from the top row of $S_k$. As described above, this involves $O(k)$ arithmetic operations on integers of size $O(n)$, and one can show that $k=O(n)$, leading to a running time that is roughly quadratic in $n$. The fast Euclidean algorithm achieves a quasi-linear running time by only computing $O(\log n)$ of the matrices $S_i$. Roughly speaking (and ignoring many important details), this is done by directly computing $S_{k/2}$ using what is known as a "half-gcd" algorithm, computing $R_{k/2+1}=S_{k/2}R_1$, and then proceeding recursively. The half-gcd algorithm, in turn, works by recursively calling itself. The depth of the recursion is $O(\log n)$ and this yields an $O(M(n)\log n)$ complexity bound. This algorithm also works over polynomial rings and is often described in this setting. Further details can be found in the (incomplete) list of references below: Chapter 11 of von zur Gathen and Gerhard, "Modern Computer Algebra," Cambridge University Press, 2003. Chapter 2 of Yap, "Fundamental Problems of Algorithmic Algebra," Oxford University Press, 2000. N. Moller, "On Schonhage's algorithm and subquadratic integer GCD computation," Mathematics of Computation 77(261), pp. 589-607 (2008). Stehle and Zimmermann, "A binary recursive GCD algorithm," ANTS-VI, LNCS 3076, pp. 411-425, 2004.
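To connect the matrix description above to working code, here is a minimal sketch of the classical quadratic-time version; the quasi-linear half-gcd variant is considerably more intricate and not attempted here, and the function name and structure are ad hoc choices rather than anything from the references.

```python
# Classical extended Euclidean algorithm: computes a^(-1) mod q by tracking
# the Bezout coefficient of a alongside the remainders r_i described above.
# This is the roughly quadratic-time baseline, not the fast half-gcd method.
def mod_inverse(a, q):
    r0, r1 = q, a % q
    s0, s1 = 0, 1  # invariant: r_i is congruent to s_i * a modulo q
    while r1 != 0:
        m = r0 // r1                # the quotient m_i
        r0, r1 = r1, r0 - m * r1    # one Euclidean step
        s0, s1 = s1, s0 - m * s1    # update the Bezout coefficient
    if r0 != 1:
        raise ValueError("a is not invertible modulo q")
    return s0 % q

assert (7 * mod_inverse(7, 101)) % 101 == 1
```

In practice, Python (3.8 and later) exposes the same operation directly as pow(a, -1, q).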
{ "source": [ "https://mathoverflow.net/questions/40997", "https://mathoverflow.net", "https://mathoverflow.net/users/2344/" ] }
41,011
What is the indefinite sum of the tangent function, that is, the function $T$ for which $\Delta_x T = T(x + 1) - T(x) = \tan(x)?$ Of course, there are infinitely many answers, which all differ by a function of period 1. Ideally, I would like the solution to be of the form $T(x) = $ nice_function$(x)$ + possibly_ugly_periodic_function$(x)$, where the nice function is at least piecewise continuous. If any of the following sums can be found, the sum of tan can also be found: $\sum \sec x$ $\sum \csc x$ $\sum \cot x$ $\sum \frac{1}{e^{ix} + 1}$ I have tried several methods without success, including using a Newton series (which does not converge for non-integer $x$), and trying to guess possible functions. I would also appreciate lines of attack if a solution is not known.
I add more details for the solution in the distinguished answer due to Anixx. First, we need the digamma function http://en.wikipedia.org/wiki/Digamma_function which we will call $\Psi(x)$. Important properties (from that web page) are: $\Psi(x)$ is analytic in the complex plane except at the nonpositive integers where it has simple poles. $\Psi(x+1)-\Psi(x) = 1/x$. $\Psi(x) > 0$ for $x>2$. Asymptotics: $$ \Psi(x) = \log x - \frac{1}{2x} - \frac{1}{12x^2} + \frac{1}{120x^4} + O(x^{-6}) \qquad\text{as } x \to \infty . $$ So, define $T(z) ={}$ $$ -\sum_{k = 1}^{\infty} \Biggl[\Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr) - z + 1\Biggr) + \Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr) + z\Biggr) - \Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr) + 1\Biggr) - \Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr)\Biggr)\Biggr] $$ For any fixed $z$, only finitely many preliminary terms involve $\Psi$ evaluated at a nonpositive argument, and the asymptotics of the remaining terms are computed (from the asymptotics given above) as $$ \Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr) - z + 1\Biggr) + \Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr) + z\Biggr) - \Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr) + 1\Biggr) - \Psi \Biggl(\pi \biggl(k - \frac{1}{2}\biggr)\Biggr) $$ $=z(1-z)/(k^2\pi^2) + o(k^{-2})$ as $k \to \infty$. So the series converges absolutely except when we are at a pole of one of the preliminary terms. Now, because of absolute convergence, we may subtract term-by-term and simplify to get $$ T(z+1)-T(z) = \sum_{k=1}^\infty\Biggl[\frac{8z}{(-\pi+2\pi k-2z)(-\pi+2\pi k+2z)}\Biggr] = \tan z . $$
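As a quick numerical sanity check of $T(z+1)-T(z)=\tan z$, one can truncate the series at $K$ terms; since the terms decay like $z(1-z)/(\pi^2 k^2)$, the truncation error is $O(1/K)$. The cutoff and the sample points below are arbitrary choices (picked to stay away from the poles of the individual digamma terms):

```python
# Truncated version of T(z) built from the digamma series above; the
# difference T(z+1) - T(z) should approximate tan(z) away from poles.
import numpy as np
from scipy.special import digamma

def T(z, K=200_000):
    a = np.pi * (np.arange(1, K + 1) - 0.5)  # a_k = pi * (k - 1/2)
    return -np.sum(digamma(a - z + 1) + digamma(a + z)
                   - digamma(a + 1) - digamma(a))

for z in (0.3, 1.1, -0.7):
    print(z, T(z + 1) - T(z), np.tan(z))
```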
{ "source": [ "https://mathoverflow.net/questions/41011", "https://mathoverflow.net", "https://mathoverflow.net/users/7540/" ] }
41,032
Given a once-punctured surface $F$ and an orientation-preserving self-homeomorphism $\phi$, let $M_\phi$ be the bundle over $S^1$ with fiber $F$ and monodromy $\phi$. In Sakuma's survey article The topology, geometry and algebra of unknotting tunnels (and in this paper), Johannson and Kobayashi are credited with proving that any unknotting tunnel for $M_\phi$ is isotopic to a tunnel $\alpha$ which lies on a fiber $F$ such that $\alpha\cap\phi(\alpha)=\emptyset$. This leads to a classification of unknotting tunnels in once-punctured torus bundles. The references are talks. Does anyone know if this is written down anywhere?
{ "source": [ "https://mathoverflow.net/questions/41032", "https://mathoverflow.net", "https://mathoverflow.net/users/4325/" ] }
41,057
Recently, I was reminded in Melvyn Nathanson's first-year graduate algebra course of a debate I've been having both within myself and externally for some time. For better or worse, the course in which most students first use and learn extensive category theory and arrow chasing is an advanced algebra course, either an honors undergraduate abstract algebra course or a first-year graduate algebra course. (Ok, that's not entirely true: you can first learn about it in topology too. But it's really in algebra where it has the biggest impact. Topology can be done entirely without it, whereas algebra without it becomes rather cumbersome beyond the basics. Also, homological methods become pretty much impossible.) I've never really been comfortable with category theory. It's always seemed to me that giving up elements and dealing with objects that are knowable only up to isomorphism was a huge leap of faith that modern mathematics should be beyond. But I've tried to be a good mathematician and learn it for my own good. The fact that I'm deeply interested in algebra makes this more of a priority. My question is whether or not category theory really should be introduced from the start in a serious algebra course. Professor Nathanson remarked in lecture that he recently saw his old friend Hyman Bass, and they discussed the teaching of algebra with and without category theory. Both had learned algebra in their student days from van der Waerden (which, incidentally, is the main reference for the course and still his favorite algebra book despite being hopelessly outdated). Melvyn gave a categorical construction of the Fundamental Isomorphism Theorem of Abelian Groups after Bass gave a classical statement of the result. Bass said, "It's the same result expressed in 2 different languages. It really doesn't matter if we use the high-tech approach or not." Would algebraists of later generations agree with Professor Bass? A number of my fellow graduate students think set theory should be abandoned altogether and thrown in the same bin with Newtonian infinitesimals (nonstandard constructions notwithstanding) and think all students should learn category theory before learning anything else. Personally, I think category theory would be utterly mysterious to students without a considerable stock of examples to draw from. Categories and universal properties are vast generalizations of huge numbers of not only concrete examples, but certain theorems as well. As such, I believe it's much better learned after gaining a considerable facility with mathematics, after at the very least undergraduate courses in topology and algebra. Paolo Aluffi's wonderful book Algebra: Chapter 0 is usually cited by the opposition as a counterexample, as it uses category theory heavily from the beginning. However, I point out that Aluffi himself clearly states this is intended as a course for advanced students and he strongly advises some background in algebra first. I like the book immensely, but I agree. What does the board think of this question? Categories early or categories late in student training?
There's a big difference between teaching category theory and merely paying attention to the things that category theory clarifies (like the difference between direct products and direct sums). In my opinion, the latter should be done early (and late, and at all other times); there's no reason for intentional sloppiness. On the other hand, teaching category theory is better done after the students have been exposed to some of the relevant examples. Many years ago, I taught a course on category theory, and in my opinion it was a failure. Many of the students had not previously seen the examples I wanted to use. One of the beauties of category theory is that it unifies many different-looking concepts; for example, left adjoints of forgetful functors include free groups, universal enveloping algebras, Stone-Cech compactifications, abelianizations of groups, and many more. But the beauty is hard to convey when, in addition to explaining the notion of adjoint, one must also explain each (or at least several) of these special cases. So I think category theory should be taught at the stage where students have already seen enough special cases of its concepts to appreciate their unification. Without the examples, category theory can look terribly unmotivated and unintuitive.
{ "source": [ "https://mathoverflow.net/questions/41057", "https://mathoverflow.net", "https://mathoverflow.net/users/3546/" ] }