source_id (int64, 1–4.64M) | question (string, length 0–28.4k) | response (string, length 0–28.8k) | metadata (dict) |
---|---|---|---|
26,267 |
While not a research-level math question, I'm sure this is a question of interest to many research-level mathematicians, whose expertise I seek. At RIMS (in Kyoto) in 2005, they had the best white chalk I've seen anywhere. It's slightly larger than standard American chalk, harder, heavier, and most importantly covered with some enamel-like coating that one must rub through (on the end) to be able to write with. One's hands don't rub through the coating, and thus don't get chalky. Are there any U.S. manufacturers of such? EDIT: even though someone has given a link whereby to order this stuff from Japan, I would still be delighted to hear about American products that beat Binney & Smith.
|
We here in New York have also long sought to secure supplies of this amazingly high quality chalk. It is called Hagoromo "Fulltouch" chalk, and my colleague Jonas Reitz wrote to the company, since the chalk does not seem to be available anywhere in the US, and got a letter in reply directly from the company president (with fairly OK English) concerning prices and rates to the US. It would come to 3 dollars per stick, and we considered that. Meanwhile, one of our postdocs with connections to Japan was able to arrange a small supply that way, and this is the message our administrative assistant sent out (provided with permission):

To All GC Math Ph.D. Program Faculty, I am happy to report that we have managed (through the good graces of our postdoc Yu Yasufuku) to get a limited supply of Hagoromo "Fulltouch" chalk which is made in Japan and is not available for sale in the United States. We have been trying unsuccessfully to get this “miracle” chalk for years, but finally our prayers have been answered. Users of this chalk report that it is the “Rolls Royce” of chalk; for you oenophiles, it may be thought of as the “Chateau Lafite Rothschild” of chalk, or, for you baseball fans, the “Babe Ruth” of chalk… well, you get the idea. I have heard it said that it is impossible to make a mathematical mistake when writing with this chalk, but I am somewhat dubious of this claim. The chalk was “smuggled” in stick by stick carried in the beaks of birds (well, practically), so the supply is very limited. We have, however, worked out a distribution system that seems the fairest and that should allow our stash to last for about 3 semesters. Faculty members teaching at GC will get ten sticks of chalk per semester, and those not teaching but who are coordinating (or co-coordinating) a math seminar at the GC and are members of the Math Doctoral Faculty will each get 2 sticks per semester. This will repeat each semester until the chalk is gone. Faculty should save this chalk for use only during their most important lectures or when working on their most important theorems. If you currently fall into one of the two categories listed above, please stop by to see me for your chalk and I will check you off of my list. Hopefully, after spring 2011, another supply of these magical white sticks can be procured on the black or gray market, or perhaps Hagoromo will begin distribution in the U.S. At any rate, enjoy your chalk! Use it wisely. Best, Rob Robert S. Landsman, Assistant Program Officer, CUNY Ph.D. Program in Mathematics
|
{
"source": [
"https://mathoverflow.net/questions/26267",
"https://mathoverflow.net",
"https://mathoverflow.net/users/391/"
]
}
|
26,385 |
It is easy to see that if $A\times B$ is homeomorphic to $A\times C$ for topological spaces $A$, $B$, $C$, then one may not conclude that $B$ and $C$ are homeomorphic (for example, take $C=B^2$, $A=B^{\infty}$). The question is: for which $A$ is such a conclusion true? I saw long ago a problem stating that for $A=[0,1]$ it is not true, but could not solve it, and did not know where to ask; hence I am asking here. The same question in other categories (say, metric spaces instead of topological spaces) also seems to make sense.
|
For $A=[0,1]$, let $B$ be the 2-torus with one hole and $C$ be the 2-disc with two holes. The products $B\times[0,1]$ and $C\times[0,1]$ can be realized in $\mathbb R^3$: the former as a thickening of the torus, the latter in a trivial way. Each of these products is a handlebody bounded by the pretzel surface (the sphere with two handles). It is easy to deform one to the other "by hand".
|
{
"source": [
"https://mathoverflow.net/questions/26385",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4312/"
]
}
|
26,491 |
This is probably common knowledge, alas I have to confess my ignorance. In simpler more abstract language, does $\mathcal{O}_K$ being simply connected (having trivial etale $\pi_1$) imply $\mathcal{O}_K=\mathbb{Z}$?
|
Yes, there are examples and Minkowski's proof for ${\mathbf Q}$ can be adapted to find a few of them. Some examples of this kind among quadratic fields $F$, listed in increasing size of discriminant (in absolute value), are
$$
{\mathbf Q}(\sqrt{-3}), \ \ {\mathbf Q}(i), \ \ {\mathbf Q}(\sqrt{5}), \ \ {\mathbf Q}(\sqrt{-7}), \ \ {\mathbf Q}(\sqrt{2}), \ \ {\mathbf Q}(\sqrt{-2}).
$$
A cubic and quartic field $F$ that will come out of the method I describe below are ${\mathbf Q}(\alpha)$ where $\alpha^3 - \alpha - 1 = 0$ and ${\mathbf Q}(\zeta_5)$. Now for the details. I suggest when reading this through for the first time that you keep a concrete example in mind, like $F = {\mathbf Q}(i)$. (That's what I did the first time I worked this out.) Over the rationals, Minkowski showed a number field with degree larger than 1 must have a discriminant whose absolute value is larger than 1. Over other number fields $F$ besides the rationals, the goal is to find sufficient conditions on $F$ so that any finite extension $E/F$ with $[E:F] > 1$ has its discriminant ideal ${\mathfrak d}_{E/F}$ not equal to the unit ideal, and then a prime ideal factor will ramify in $E$. Rather than show ${\mathfrak d}_{E/F} \not= (1)$, we will look for a sufficient condition on $F$ which assures us that the norm of this ideal is not 1. That means absolutely the same thing, but it's easier to work with ideal norms since they are positive integers rather than ideals, and moreover it lets us express the problem in terms of discriminants of number fields: the discriminants of $E$ and $F$ are related by $$|d_E| = {\rm N}({\mathfrak d}_{E/F})|d_F|^{[E:F]}.$$
So aiming to show ${\mathfrak d}_{E/F} \not= (1)$ is the same as avoiding $|d_E| = |d_F|^{[E:F]}$, which is the same as avoiding $$|d_E|^{1/[E:{\mathbf Q}]} = |d_F|^{1/[F:{\mathbf Q}]}.$$ We want sufficient conditions on $F$ which guarantee that this equation can't hold for any proper finite extension $E/F$. For any number field $K$, the quantity $|d_K|^{1/[K:{\mathbf Q}]}$ is called the root discriminant of $K$. When $n = [K:{\mathbf Q}]$, Minkowski's lower bound on $|d_K|$ is
$$
|d_K| \geq \left(\frac{\pi}{4}\right)^n\frac{n^{2n}}{n!^2},
$$
so we get the root discriminant lower bound
$$
|d_K|^{1/n} \geq \left(\frac{\pi}{4}\right)\frac{n^{2}}{n!^{2/n}}.
$$
Call the right side $f(n)$, so the Minkowski bound says $|d_K|^{1/[K:{\mathbf Q}]} \geq f([K:{\mathbf Q}])$. For $n = 1, 2, 3, 4$ the values of $f(n)$ are $.785, 1.570, 2.140, 2.565$, so we guess $f(n)$ is increasing and it's left as an exercise to prove that. (Hint: use the one-sided Stirling estimate $n! > (n/e)^n\sqrt{2\pi{n}}$.) Returning to the extension $E/F$, let $m = [F:{\mathbf Q}]$, so $[E:{\mathbf Q}] = [E:F][F:{\mathbf Q}] \geq 2m$ since $E$ is a larger field than $F$. Now if $E/F$ had trivial discriminant ideal, we would have
$$
|d_F|^{1/[F:{\mathbf Q}]} = |d_E|^{1/[E:{\mathbf Q}]} \geq f([E:{\mathbf Q}]) \geq f(2m).
$$
This hypothetical lower bound on the root discriminant of $F$ is larger than the proved Minkowski bound of $f(m)$. Suppose $F$ is a number field of degree $m$
whose root discriminant is less than $f(2m)$. If $E/F$ is unramified at all primes in $F$
then $E$ and $F$ have equal root discriminants, so the root discriminant of $E$ is less than $f(2m)$. However, we saw above that the root discriminant of $E$ is $\geq f(2m)$ when $[E:F] \geq 2$, so the only choice is $E = F$, i.e., no proper finite extension of $F$ can be unramified at all primes in $F$. Our goal now is to find examples of number fields $F$ with degree $m$ whose
root discriminant is smaller than $f(2m)$: $|d_F|^{1/m} < f(2m)$. This is the same as
$$
|d_F| < f(2m)^m = \frac{\pi^mm^{2m}}{(2m)!}.
$$
Any $F$ which fits this condition will be an example. As a reality check, let $F$ be the rationals, so $m = 1$. We have
$f(2) = \pi/2$ and $|d_{\mathbf Q}| = 1 < \pi/2$, so $\mathbf Q$ has no unramified extensions. We're on the right track. Taking $m = 2$ we want $|d_F| < f(4)^2 = 6.57$ and the fields ${\mathbf Q}(i)$, ${\mathbf Q}(\sqrt{-3})$, and ${\mathbf Q}(\sqrt{5})$ all work. If the root discriminant of $F$ is not below $f(2m)$, where $m = [F:{\mathbf Q}]$,
we can still squeeze out some information, namely an upper bound on the degree of an everywhere (= at finite places) unramified extension of $F$. For instance, ${\mathbf Q}(\sqrt{2})$ has discriminant 8, which is not below 6.57, so this argument doesn't show on its own that every proper extension of ${\mathbf Q}(\sqrt{2})$ is ramified at some prime in ${\mathbf Q}(\sqrt{2})$. However, we can bound the degree of such an extension. The root disc. of ${\mathbf Q}(\sqrt{2})$ is $\sqrt{8} \approx 2.828$, which lies between $f(4)$ and $f(5)$, so any proper unramified extension of ${\mathbf Q}(\sqrt{2})$ must be a quadratic extension of ${\mathbf Q}(\sqrt{2})$. Quadratic extensions are automatically abelian, so we would have an abelian extension of ${\mathbf Q}(\sqrt{2})$ unramified at all finite places, and there's no such thing by class field theory since ${\mathbf Q}(\sqrt{2})$ has wide class number 1 (the class number not involving ramification at infinity). Therefore you can add ${\mathbf Q}(\sqrt{2})$ to the list of number fields with no proper extension unramified at all finite places. The same arguments apply to ${\mathbf Q}(\sqrt{-2})$ and ${\mathbf Q}(\sqrt{-7})$, whose root discriminants are also between $f(4)$ and $f(5)$. Unfortunately, this method is really limited because although $f(n)$ is increasing,
it's actually bounded. By Stirling's formula, $f(n)$ has limit $\pi e^2/4 \approx 5.803$. So when a number field $F$ has root discriminant exceeding this value, this method won't give us any examples at all (you can't find an $m$ for which the root discriminant is below $f(2m)$). [Edit: Using Odlyzko-type lower bounds on discriminants, one can produce some more examples as Torsten points out.] This kind of argument using Minkowski's bound does work for a few cubic fields and quartic fields: for cubic fields it works as long as the discriminant of the field (in absolute value) is less than 31.39, and there are two such fields: ${\mathbf Q}(\alpha)$ and ${\mathbf Q}(\beta)$ where $\alpha^3 - \alpha - 1 = 0$ (discriminant -23) and $\beta^3 + \beta + 1 = 0$ (discriminant -31). The next smallest absolute value of a discriminant of a cubic field is 44, which is above the bound. For quartic fields we need the discriminant to be less than 158.32, and I know of three fields which work: ${\mathbf Q}(\gamma)$ where
$\gamma^4 + 2\gamma^3 + 3\gamma + 1 = 0$ (discriminant 117), ${\mathbf Q}(\zeta_5)$ (discriminant 125), and ${\mathbf Q}(\zeta_{12})$ (discriminant 144).
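For readers who want to check the constants above, here is a short numerical sketch (my own addition in plain Python, not part of the original answer) that computes $f(n)$, the thresholds $f(2m)^m$, and where the quadratic examples fall.

```python
from math import pi, factorial

def f(n):
    """Minkowski root-discriminant bound: f(n) = (pi/4) * n^2 / (n!)^(2/n)."""
    return (pi / 4) * n ** 2 / factorial(n) ** (2 / n)

# compare with the values quoted above (which are truncated): .785, 1.570, 2.140, 2.565
print([round(f(n), 3) for n in (1, 2, 3, 4)])

# a degree-m field F with |d_F| < f(2m)^m has no extension unramified at all finite primes;
# compare 1.57, 6.57, 31.39, 158.32 quoted in the text (up to rounding)
for m in (1, 2, 3, 4):
    print(m, round(f(2 * m) ** m, 2))

# the quadratic examples, by absolute discriminant
for name, d in [("Q(sqrt(-3))", 3), ("Q(i)", 4), ("Q(sqrt(5))", 5),
                ("Q(sqrt(-7))", 7), ("Q(sqrt(2))", 8), ("Q(sqrt(-2))", 8)]:
    rd = d ** 0.5                       # root discriminant of a quadratic field
    if rd < f(4):
        verdict = "Minkowski bound alone suffices"
    elif rd < f(5):
        verdict = "unramified extensions are at most quadratic; class field theory finishes"
    else:
        verdict = "method inconclusive"
    print(name, round(rd, 3), verdict)
```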
|
{
"source": [
"https://mathoverflow.net/questions/26491",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5756/"
]
}
|
26,537 |
It is fundamental to topology that $\mathbb{R}$ is a connected topological space. However, all the topology books that I have ever looked in give the same proof. (the proof I am thinking of can be seen in Munkres's topology or Lee's Introduction to topological manifolds) This seems strange to me, because for other fundamental results such as the Compactness of $[0,1]$, I can think of several proofs. Does anyone know any different proofs of the connectedness of $\mathbb{R}$?
|
If you've already developed basic facts about compactness you can prove it this way: Let $[0,1] = A \cup B$ with $A$ and $B$ closed, nonempty, and disjoint. Then since $A \times B$ is compact and the distance function is continuous, there is a pair $(a, b) \in A \times B$ at minimum distance. If that distance is zero, $A$ and $B$ intersect. If not, you get a contradiction by taking any point in the interval from $a$ to $b$: it can't be in either $A$ or $B$ because its distance from $b$ or $a$ is smaller than the minimum. That shows a compact interval in $\mathbb{R}$ is connected. If $\mathbb{R} = A \cup B$ with $A$ and $B$ closed and disjoint, then for any closed interval $I$ with one endpoint in $A$ and one in $B$, $I = (A \cap I) \cup (B \cap I)$ is a disconnection of $I$. Alternatively, you could write $\mathbb{R}$ as a union of closed intervals with a common point.
|
{
"source": [
"https://mathoverflow.net/questions/26537",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4002/"
]
}
|
26,549 |
Edwards, in his book "Divisor theory" says that Kronecker's methods are quite different to Dedekind's and those of today. Is there really much of a difference apart from Kronecker's methods being more constructive?
|
You can find a nice description of Kronecker's approach in an article by Harley Flanders,
"The Meaning of the Form Calculus in Classical Ideal Theory" (Trans. AMS 95 (1960), 92--100). It is at JSTOR here . I found that more to my tastes than Edwards' book. There is a difference between the two approaches. Kronecker was thinking in very general terms, beyond the "one-dimensional" setting that Dedekind worked in. (Kronecker had a dream -- a second one I suppose -- of unifying number theory and algebraic geometry but the tools to achieve this would take a couple more generations to appear). That accounts in part for Kronecker's multivariable polynomials. He had bigger goals than just unique factorization in rings of integers. Here is one example of the difference between Kronecker and Dedekind. Suppose ${\mathfrak a}$ is an ideal in the ring of integers of a number field $K$ and I ask you to compute its norm, i.e., the size of ${\cal O}_K/\mathfrak a$ . How would you do it? From Dedekind's point of view, you find a ${\mathbf Z}$ -basis of ${\cal O}_K$ and of ${\mathfrak a}$ , write the basis of the ideal in terms of the basis of the ring of integers, and then compute (the absolute value of) the determinant of the matrix expressing the ideal basis in terms of the ring basis. But as you may know, ideals usually are not given to us in terms of a ${\mathbf Z}$ -basis. More often they are given in terms of just two generators, say ${\mathfrak a} = (\alpha,\beta)$ . How can you compute the norm of the ideal in terms of the two generators? In principle it should be possible, since the two generators determine the ideal they generate, so all the data you need is encoded in the numbers $\alpha$ and $\beta$ . There is a Dedekind-style way to write the norm of ${\mathfrak a}$ in terms of the two generators: the norm of an ideal is the gcd of the norms of all elements of the ideal.
Watch out: you can't get by using only the gcd of the norms of the two generators.
For example, in the Gaussian integers the ideal $(1+2i,1-2i)$ is the unit ideal $(1)$ , so it has norm 1, but the two generators $1+2i$ and $1-2i$ have norm 5, whose gcd is not 1. (Of course the ideal also contains $1+2i - (1-2i) = 4i$ , whose norm is 4, and the gcd of that with 5 is one and you're done.) In principle you only need to form the gcd of the norms of a finite number of elements in the ideal, but it's not clear which "finitely many" elements are practically enough. So I think it's fair to say Dedekind's point of view does not easily allow you to find the norm of an ideal in terms of two generators of the ideal, which is how one usually thinks about them concretely. Now here is how Kronecker would find the norm of the ideal (essentially). Form the polynomial $\alpha + \beta{T}$ in ${\cal O}_K[T]$ . The field extension $K(T)/{\mathbf Q}(T)$ is finite.
Take the field norm of $\alpha + \beta{T}$ down to ${\mathbf Q}(T)$ . The result is in ${\mathbf Z}[T]$ .
That integral polynomial has finitely many coefficients (which are not all norms of elements in $K$ , so this isn't some disguised version of the previous paragraph). The gcd of the integral coefficients of ${\rm N}_{K(T)/{\mathbf Q}(T)}(\alpha + \beta{T})$ is the norm of the ideal. And if the ideal is given to you with more than two generators, just let $f(T)$ be the polynomial with higher degree having those generators as its coefficients, one for each power of $T$ (it doesn't matter what order you use the generators as coefficients) and do the same thing as in the case of two generators: field norm down to ${\mathbf Q}(T)$ and then gcd of the integral coefficients that pop out. I personally was blown away when I saw this method work, since practically no books on algebraic number theory discuss Kronecker's point of view, so this particular result isn't there. (To be honest, you do not need Kronecker's multivariable polynomial method to prove this norm formula. Once you know the formula, it can be derived by more orthodox techniques, but of course it leaves out the question of how anyone would have ever discovered this formula in the first place by orthodox methods. Any suggestions?) In a sense this example is only a "constructive" dichotomy between Kronecker and Dedekind, but I think it still addresses the question that is asked, because each method of solving this problem (Dedekind's ${\mathbf Z}$ -bases and Kronecker's polynomials) is constructive but they feel so different from each other.
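To see the recipe in action, here is a small illustration of my own (not from the answer), specialized to $K = \mathbf{Q}(i)$, where the norm from $K(T)$ down to $\mathbf{Q}(T)$ is just multiplication by the coefficientwise conjugate; the function name is mine.

```python
from math import gcd

def kronecker_ideal_norm(gens):
    """Norm of the ideal (g_0, ..., g_k) in Z[i], Kronecker-style:
    form f(T) = g_0 + g_1*T + ... + g_k*T^k, compute N(f) = f(T) * fbar(T)
    (the norm from Q(i)(T) down to Q(T)), and return the gcd of the
    integer coefficients of the result."""
    f = [complex(g) for g in gens]
    fbar = [z.conjugate() for z in f]
    prod = [0j] * (len(f) + len(fbar) - 1)      # multiply the two polynomials
    for i, a in enumerate(f):
        for j, b in enumerate(fbar):
            prod[i + j] += a * b
    coeffs = [round(c.real) for c in prod]      # imaginary parts cancel exactly
    result = 0
    for c in coeffs:
        result = gcd(result, abs(c))
    return result

print(kronecker_ideal_norm([1 + 2j, 1 - 2j]))   # 1: the unit ideal from the example above
print(kronecker_ideal_norm([2, 1 + 1j]))        # 2: this is the ideal (1+i)
print(kronecker_ideal_norm([3, 1 + 1j]))        # 1: 3 and 1+i generate the unit ideal
```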
|
{
"source": [
"https://mathoverflow.net/questions/26549",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4692/"
]
}
|
26,568 |
Every time I hear it mentioned it is praised in the highest possible terms, and I remember one of my old lecturers saying that it is one of the 3 most important theorems in analysis. Yet the only consequences of it that I have read are that it proves that there are lots of functionals and that separating hyperplanes exist. Are those 2 consequences really that spectacular, or are there other ones that I don't know of?
|
I have used it (sometimes with coauthors) several times in the following general context. I have wanted to prove that a function f can be decomposed as a sum g+h, where g has certain properties and h has certain properties. It has been possible to show that the set of acceptable g is convex, as is the set of acceptable h. So I'm trying to show that f belongs to a sum of two convex sets K+L. But a sum of convex sets is convex, so if f cannot be written in such a way, then it can be separated from K+L by a functional. It is often possible to derive a contradiction from this. The result is that one can prove the existence of the desired decomposition under circumstances where explicitly defining a decomposition would be difficult. Incidentally, the applications I am alluding to are of the finite-dimensional Hahn-Banach theorem. Some people call it the minimax theorem, and still others would call it duality for linear programming or something like that.
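As a toy finite-dimensional illustration of this strategy (my own sketch, not from the answer): deciding whether $f$ lies in a sum of two polytopes $K+L$ is a linear feasibility problem, and when it fails, LP duality (the finite-dimensional Hahn-Banach theorem mentioned above) produces a separating functional. The sets and point below are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import linprog

# K and L given as convex hulls of their vertex columns; f is the target point
K = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])          # a triangle in the plane
L = np.array([[0.0, -1.0],
              [0.0, -1.0]])              # a segment
f = np.array([3.0, 3.0])

nK, nL = K.shape[1], L.shape[1]
# f is in K + L  iff  there exist lam, mu >= 0 with sum(lam) = sum(mu) = 1
# and K @ lam + L @ mu = f  -- a pure feasibility LP (zero objective)
A_eq = np.vstack([
    np.hstack([K, L]),
    np.hstack([np.ones(nK), np.zeros(nL)]),
    np.hstack([np.zeros(nK), np.ones(nL)]),
])
b_eq = np.concatenate([f, [1.0, 1.0]])
res = linprog(c=np.zeros(nK + nL), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("f lies in K + L" if res.success else
      "f is not in K + L, so a separating functional exists (read off from the LP dual)")
```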
|
{
"source": [
"https://mathoverflow.net/questions/26568",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4692/"
]
}
|
26,585 |
In an «advanced calculus» course, I am talking tomorrow about connectedness (in the context of metric spaces, including notably the real line). What are nice examples of applications of the idea of connectedness? High wow-ratio examples are specially welcomed... :)
|
If $h:[a,b]\to \mathbb{R}$ is continuous and one-to-one, then $h$ is monotone. Proof: The image of the connected set $\{(s,t): a \le s < t \le b\}$ under the
continuous map $(s,t)\mapsto h(t)-h(s)$ is a connected subset of $\mathbb{R}\setminus\{0\}$.
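A quick numerical look at the set appearing in this proof (my own sketch, not part of the answer): sample the connected region $\{(s,t): s<t\}$ and record the signs of $h(t)-h(s)$.

```python
import numpy as np

def signs_of_differences(h, a=0.0, b=1.0, m=300):
    """Signs of h(t) - h(s) over a grid approximating {(s, t) : a <= s < t <= b}."""
    s, t = np.meshgrid(np.linspace(a, b, m), np.linspace(a, b, m))
    mask = s < t
    return set(np.sign(h(t) - h(s))[mask].astype(int).tolist())

print(signs_of_differences(np.tanh))                 # {1}: one sign only, as the proof predicts for a one-to-one h
print(signs_of_differences(lambda x: x**3 - 2 * x))  # both signs appear: this h is not one-to-one on [0, 1]
```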
|
{
"source": [
"https://mathoverflow.net/questions/26585",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1409/"
]
}
|
26,613 |
What are some good papers that debunk common myths in the history of mathematics? To give you an idea of what I'm looking for, here are some examples. Tony Rothman, "Genius and biographers: The fictionalization of Evariste Galois," Amer. Math. Monthly 89 (1982), 84-106. Debunks various myths about Galois, in particular the idea that he furiously wrote down all the details of Galois theory for the first time the night before he died. Jeremy Gray, "Did Poincare say 'set theory is a disease'?" , Math. Intelligencer 13 (1991), 19-22. Debunks the myth that Poincare said, "Later generations will regard Mengenlehre as a disease from which one has recovered." Colin McLarty, "Theology and its discontents: The origin myth of modern mathematics," http://people.math.jussieu.fr/~harris/theology.pdf . Debunks the myth that Gordan denounced Hilbert's proof of the basis theorem with the dismissive sentence, "This is not mathematics; this is theology!" Some might say that my question belongs on the Historia Matematica mailing list; however, besides the fact that I don't subscribe to Historia Matematica, I think that the superior infrastructure of MathOverflow actually makes it a better home for the list I hope to create. Still, maybe someone should let the Historia Matematica mailing list know that I'm asking the question here.
|
For over a century, books published a picture of Legendre that was not in fact a picture of Legendre. There's an AMS Notices article. Does a picture count as a myth?
|
{
"source": [
"https://mathoverflow.net/questions/26613",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
26,680 |
Background: Let $(X,x)$ be a pointed topological space. Then the fundamental group $\pi_1(X,x)$ becomes a topological space: endow the set of maps $S^1 \to X$ with the compact-open topology, endow the subset of maps sending $1 \to x$ with the subspace topology, and finally use the quotient topology on $\pi_1(X,x)$. This topology is relevant in some situations. A very interesting paper dealing with this topology is: [1] Daniel K. Biss, A Generalized Approach to the Fundamental Group, The American Mathematical Monthly, Vol. 107. You can find this online. This is somehow an introduction to [2] Daniel K. Biss, The topological fundamental group and generalized covering spaces, Topology and its Applications, Vol. 124. Question: How can we prove that $\pi_1(X,x)$ is a topological group? Clearly the inversion map $\pi_1(X,x) \to \pi_1(X,x)$ is continuous, since $S^1 \to S^1, z \mapsto \overline{z}$ is continuous and induces this map. But I don't know how to attack the continuity of the multiplication. It's not hard to see that the multiplication on $map((S^1,1),(X,x))$ is continuous, since it is induced by a fold map $S^1 \to S^1 + S^1$. In order to carry this over to $\pi_1(X,x)$, there are at least two problems which I encounter: (1) the quotient map $map((S^1,1),(X,x)) \to \pi_1(X,x)$ may not be open; (2) the product of the quotient maps $map((S^1,1),(X,x))^2 \to \pi_1(X,x)^2$ may not be a quotient map. In [1] it is claimed that $\pi_1(X,x)$ is always a topological group, and this should be proven in [2], but I have no access to [2]. An example showing that products of quotient maps don't have to be quotient maps can be found here. Remark, however, that this is true in the category of compactly generated spaces.
|
Update : A bit of a digital paper chase led me, via David Robert's thesis (note that in the latest version, it is Chapter 5, section 2 that is most relevant), to this paper on the arXiv. The last sentence of the abstract is: These hoop earring spaces provide a simple class of counterexamples to the claim that $\pi_{1}^{top}$ is a functor to the category of topological groups. I recommend reading this article. ( Added later : In case it's not clear, the author of that paper is Jeremy Brazas who added an answer afterwards, so if you vote for my answer, you should definitely vote for his!) Original Answer : These were my initial thoughts before I found the references above. These were what made me sufficiently intrigued to do the paper chase and find the above-mentioned thesis and article. The proof given in the second paper (by Biss) that is mentioned in the question is short enough that I think it reasonable to copy it out here. I shan't copy out the obvious diagram so need to establish some notation first: $m \colon \pi_1^{Top}(X,x) \times \pi_1^{Top}(X,x) \to \pi_1^{Top}(X,x)$ is the multiplication map in question $p \colon \operatorname{Hom}((S^1,1),(X,x)) \to \pi_1^{Top}(X,x)$ is the quotient map $\overline{m} \colon \operatorname{Hom}((S^1,1),(X,x)) \times \operatorname{Hom}((S^1,1),(X,x)) \to \operatorname{Hom}((S^1,1),(X,x))$ is the "upstairs" multiplication map. (It's a tilde in the original, but that isn't displaying correctly for me so I daren't use it.) The proof then proceeds: To show that $m$ is continuous, it suffices to show that $\overline{m}$ is continuous, for then if $U \subset \pi_1^{Top}(X,x)$ is open, $(p \times p)^{-1} m^{-1}(U) = \overline{m}^{-1}p^{-1}(U)$ is open, but by the definition of a quotient map, $(p \times p)^{-1} m^{-1}(U)$ is open if and only if $m^{-1}(U)$ is. There then follows a proof that $\overline{m}$ is continuous, a fact that I trust does not need proving. Comments on your comments: We don't need the quotient map to be open since we are only ever dealing with preimage sets. It is certainly not always true that if $q \colon X \to Y$ is a quotient that $q(U)$ is open in $Y$ for every open $U$ in $X$. But it is true by definition that $q^{-1}(U)$ is open in $X$ if and only if $U$ is open in $Y$. This is because the topology on $Y$ is precisely that to make this true. So since we are only dealing with sets of the form $(p \times p)^{-1}(A)$ then the assertion is valid assuming that $p \times p$ is a quotient map . Here, I find myself worried. A quick back-of-envelope check seems to show that one can't simply assume that the product of quotients is again a quotient in Top (a counterexample eludes me as I don't have Counterexamples in Topology to hand and I'm too used to dealing with "nice" spaces). It may be the case that for Hom-spaces then there's some magic that can be done (though such is not mentioned in the paper); but again the best that I can do on the back of an envelope is observe that (modulo some basepoint mess) by construction $\operatorname{Hom}((S^1,1),(X,x)) \times \operatorname{Hom}((S^1,1),(X,x))$ quotients to $\pi_1^{Top}((X,x) \times (X,x))$. But to proceed, one would need to know that $\pi_1^{Top}$ was a product-preserving functor. This is morally the same as saying that it is representable - which looks good since we have an obvious representing object $S^1$! 
However, this can't be made into a proper argument since, although we have a representing object, we don't have an enriched Hom-functor $hTop \times hTop \to Top$ to evaluate at $S^1$. So I would look for a counterexample to the product of quotients being a quotient, and see where that leads you. Either you'll find a proper counterexample to the proposition in question, or you'll see why, in this special case, such a counterexample could not occur. (Of course, I may well be missing something obvious!)
|
{
"source": [
"https://mathoverflow.net/questions/26680",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2841/"
]
}
|
26,776 |
The total space of the cotangent bundle of any manifold $M$ is a symplectic manifold. Is it true/false/unknown that for any $M$, $T^*M$ has a Kähler structure? Please support your claim with a reference or counterexample.
|
This is true! I assume $M$ compact. Method 1. Real algebraic geometry. Cf. Fukaya, Seidel, and Smith - Exact Lagrangian submanifolds in simply-connected cotangent bundles. By a version of the Nash–Tognoli embedding theorem, one can realise $M$ as a real affine algebraic variety $V_\mathbb{R}$, cut out by polynomials $f_i \in \mathbb{R}[x_1,\dotsc,x_N]$. The complex variety $V_\mathbb{C}$ will then be smooth in a small neighbourhood $U$ of $V_\mathbb{R}$, hence Kaehler in that region, with $V_{\mathbb{R}}$ as a Lagrangian submanifold. But $U$ is diffeomorphic to $T^\ast M$. The resulting symplectic structure on $T^\ast M$ may be non-standard; via the Lagrangian neighbourhood theorem, you can take the symplectic form to be the canonical one if you'll settle for a Kaehler structure only near the zero-section. Method 2. Eliashberg's existence theorem for Stein structures. See Cieliebak–Eliashberg's unfinished book, Symplectic geometry of Stein manifolds, Theorem 9.5. We observe that $T^\ast M$ has an almost complex structure $J$ (one compatible with the canonical 2-form, for instance) and a bounded-below, proper Morse function $\phi$ whose critical points have at most the middle index (namely, the norm-squared plus a small multiple of a Morse function pulled back from $M$). In this situation Eliashberg, via an amazing chain of deformations, finds an integrable complex structure $I$ homotopic to $J$ such that $dd^c \phi$ is non-degenerate. This makes $T^\ast M$ Stein! His theorem only applies in dimensions $\geq 6$ (Gompf's paper Constructing Stein manifolds after Eliashberg explains what you have to check in dimension 4), so without doing those checks or appealing to other methods, the case of $M$ a surface is left out. I think that the more precise version of Eliashberg's theorem, which may not yet be in the book, would tell us that the Stein structure is homotopic to an easy-to-write-down Weinstein structure on $T^\ast M$ involving its canonical symplectic structure $\omega_\text{can}$, hence that $dd^c\phi$ is symplectomorphic to $\omega_\text{can}$.
|
{
"source": [
"https://mathoverflow.net/questions/26776",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5259/"
]
}
|
26,821 |
Last year a paper on the arXiv (Akhmedov) claimed that Thompson's group $F$ is not amenable, while another paper, published in the journal "Infinite dimensional analysis, quantum probability, and related topics" (vol. 12, p173-191) by Shavgulidze claimed the exact opposite, that $F$ is amenable. Although the question of which, if either, was a valid proof seemed to be being asked by people, I cannot seem to find a conclusion anywhere and the discussion of late seems to have died down considerably. From what I can gather, Shavgulidze's paper seems unfixable, while the validity of Akhmedov's paper is undecided (although it may of course be decided now). So, does anyone know if either of these papers is valid?
|
While I did not participate in most of the checking of Shavgulidze's
argument, I can offer the following partial account of the situation. I
am told the paper was correct except for a lemma (or sequence of them)
claiming that a sequence of auxiliary measures had certain properties.
These were Borel measures on the $n$-simplex (one for each $n$). I
believe it was shown that the original proposed auxiliary sequence of
measures did not have one of the two properties. Shavgulidze
proposed other sequences of measures. The most recent attempt that I am
aware of (which was presented during his 2010 trip to the US mentioned
by Mark Sapir in the above comment) involved the direct construction of
Folner sets for the action of $F$ on the finite subsets of dyadic
rationals (see the next paragraphs). The details were somewhat sparse
and the definitions involved many unspecified numerical parameters, but
it appeared to be the case that these sets could not be Folner in the
necessary sense (see below for a clarification of "necessary sense").
This is because they would likely
both contradict the iterated exponential lower bound on the Folner
function which I have demonstrated and because they appear to violate
the qualitative properties which I have demonstrated that Folner sets of
trees must have (see the pre-print on my webpage; the qualitative
condition appears in lemma 5.7, noting that marginal implies measure
0 with respect to any invariant measure). Meanwhile I was able to provide a direct elementary proof that the
existence of such a sequence having these properties implied the amenability
of $F$. In fact the proof gives an explicit procedure for constructing
(weighted) Folner sets from the sequence of measures satisfying the hypotheses
mentioned above. A note containing the details was circulated to a few people around the
time of Shavgulidze's visit to Vanderbilt.
While I am reluctant to speak for anyone else (including the author), it
appears to me that after the dust had settled (which took a considerable amount of time),
the problem with the proof seems to have at least some of its roots
in the following observation (which I now include for the sake of posterity). $F$
acts on the finite subsets of the dyadic rationals (let's call this set
$\mathcal{D}$) by taking the set-wise image (here I am utilizing the
piecewise linear function model of $F$). Now let $\mathcal{T}$ denote
the finite subsets of $[0,1]$ which contain $0$ and $1$ and are such that
any consecutive pair is of the form $p/2^q,(p+1)/2^q$ (for natural numbers $p,q$). $F$ only acts partially on $\mathcal{T}$: the action $T \cdot f$ is defined if $f'$
is defined on the complement of $T$ in $[0,1]$ (there may be other cases
when $T \cdot f$ is in $\mathcal{T}$, but let's restrict the domain of
the action as above). The full action of $F$ on $\mathcal{D}$ is amenable. The point here is that the action of the standard generators
on the sets $\{0,1-2^{-n},1\}$ is the same for large enough $n$
and thus we can build
Folner sets as in a $\mathbb{Z}$ action. The amenability of the partial
action of $F$ on $\mathcal{T}$ is, on the other hand, equivalent to the
amenability of $F$ (this is well known, but see the preprint above to see
this spelled out in the present jargon). Now here is the catch: if we also require that the invariant
measure/Folner sets for the action of $F$ on $\mathcal{D}$ to
concentrate on sets of mesh less than $1/16$, then one again arrives at
an equivalent formulation of the amenability of $F$. The author was
aware of the need for the mesh condition, but (in the most recent example)
arranged it only in a
modification after the fact (which interferes with invariance). Incidentally the hypotheses on the sequence of measures mentioned above
are a condition requiring that the measures concentrate on sets of
arbitrarily small mesh as $n$ tends to infinity and a condition which
is an analog of translation invariance. I apologize if this borders on ``too much information.'' [Added 1/28/2011]
Shavgulidze's 1/14/2011 posting to the ArXiv is essentially a more detailed version of what he was saying in notes, seminars, and private communication in January 2010 during his visit to the US mentioned in Mark Sapir's post above. The present note is still sufficiently vague and full of sufficiently many errors (many typographical in nature) that it is hard (or easy, if you like) to say explicitly which line of the proof is incorrect. It is possible, however to point to places where crucial details are missing and where there are certainly going to be errors (specifically the problems will be on page 11, if not elsewhere as well). The comments from my answer above still apply equally well to the present version. It appears that the present version (or any perturbation of it) still would violate the lower bound on the growth of the Folner function which I have established. The present version still totally ignores that the combinatorial statements on page 11 themselves readily imply the amenability of F, without the involvement of any analytical concepts. [Added 2/3/2011]
Details on what is incorrect with Shavgulidze's proof of the amenablity of $F$ can be found here . [Added 10/3/2012]
Well, well, well: now I'm in the position of having announced a proof that $F$ is amenable only to have an error be found. The error was finally found by Azer Akhemedov after being overlooked for roughly 4 weeks by myself and 9 or more people who had checked the proof and found no problems. The basic strategy of the proof still may be valid: it began by considering an extension of the free binary system $(\mathbb{T},*)$ on one generator to the finitely additive probability measures on this system:
$$\mu * \nu (E) = \int \int \chi_E(s * t) d \nu (t) d \mu (s).$$
It was shown (correctly) that any idempotent measure is $F$-invariant (there is a natural way of identifying $\mathbb{T}$ with the positive elements of $F$).
The difficulty came in constructing the idempotent measure.
A version of the Kakutani Fixed Point Theorem was used to construct approximations $K_{\mathcal{B},k,n}$ to the set of idempotent measures.
The error occurs in attempting to intersect these compact families of measures.
In the proof, it was claimed that the parameter $k$ could be stabilized along an ultrafilter
(Lemma 4.13 in the most recent version), allowing one to take a directed intersection of nonempty compact sets.
This lemma is likely false and at least is not proved as claimed.
One may still be able to argue that a relevant intersection of these approximations is nonempty and hence that there is an idempotent.
This seems to require new ideas though.
|
{
"source": [
"https://mathoverflow.net/questions/26821",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6503/"
]
}
|
26,832 |
This is an elementary question (coming from an undergraduate student) about algebraic numbers, to which I don't have a complete answer. Let $a$ and $b$ be algebraic numbers, with respective degrees $m$ and $n$. Suppose $m$ and $n$ are coprime. Does the degree of $a+b$ always equal $mn$? I know that the answer is "yes" in the following particular cases (I can provide details if needed) : 1) The maximum of $m$ and $n$ is a prime number. 2) $(m,n)=(3,4)$. 3) At least one of the fields $\mathbf{Q}(a)$ and $\mathbf{Q}(b)$ is a Galois extension of $\mathbf{Q}$. 4) There exists a prime $p$ which is inert in both fields $\mathbf{Q}(a)$ and $\mathbf{Q}(b)$ (if $a$ and $b$ are algebraic integers, this amounts to say that the minimal polynomials of $a$ and $b$ are still irreducible when reduced modulo $p$). I can also give the following reformulation of the problem : let $P$ and $Q$ be the respective minimal polynomials of $a$ and $b$, and consider the resultant polynomial $R(X) = \operatorname{Res}_Y (P(Y),Q(X-Y))$, which has degree $mn$. Is it true that $R$ has distinct roots? If so, it should be possible to prove this by reducing modulo some prime, but which one? Despite the partial results, I am at a loss about the general case and would greatly appreciate any help! [EDIT : The question is now completely answered (see below, thanks to Keith Conrad for providing the reference). Note that in Isaacs' article there are in fact two proofs of the result, one of which is only sketched but uses group representation theory.]
|
The following answer was communicated to me by Keith Conrad: See: M. Isaacs, Degree of sums in a separable field extension, Proc. AMS 25
(1970), 638--641. http://alpha.math.uga.edu/~pete/Isaacs70.pdf Isaacs shows: when $K$ has characteristic $0$ and $[K(a):K]$ and $[K(b):K]$ are
relatively prime, then $K(a,b) = K(a+b)$, which answers the student's question in the
affirmative. His proof shows the same conclusion holds under the
weaker assumption that $[K(a,b):K] = [K(a):K][K(b):K]$, since Isaacs uses the relative primality assumption on the degrees
only to get that degree formula; the formula can hold even in cases
where the degrees of $K(a)$ and $K(b)$ over $K$ are not relatively prime.
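Both the resultant reformulation in the question and Isaacs' conclusion are easy to check on examples; here is a sketch of mine using SymPy, with the arbitrary choice $a=\sqrt{2}$, $b=3^{1/3}$.

```python
from sympy import Symbol, Integer, Rational, minimal_polynomial, resultant, discriminant

X, Y = Symbol('X'), Symbol('Y')
a = Integer(2) ** Rational(1, 2)   # degree 2 over Q
b = Integer(3) ** Rational(1, 3)   # degree 3 over Q; gcd(2, 3) = 1

P = minimal_polynomial(a, Y)       # Y**2 - 2
Q = minimal_polynomial(b, Y)       # Y**3 - 3

# the resultant from the question: its roots are all sums a_i + b_j of conjugates
R = resultant(P, Q.subs(Y, X - Y), Y)
print(R.as_poly(X).degree())              # 6 = m*n
print(discriminant(R, X) != 0)            # True: R has distinct roots
print(minimal_polynomial(a + b, X))       # degree 6, so Q(a+b) = Q(a, b)
```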
|
{
"source": [
"https://mathoverflow.net/questions/26832",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6506/"
]
}
|
26,919 |
I'm teaching an undergrad course in real analysis this Fall and we are using the text "Real Mathematical Analysis" by Charles Pugh. On the back it states that real analysis involves no "applications to other fields of science. None. It is pure mathematics." This seems like a false statement. My first thought was of probability theory. And aren't PDEs sometimes considered applied math? I was wondering what others thought about this statement.
|
As it happens, I just finished teaching a quarter of undergraduate real analysis. I am inclined to rephrase Pugh's statement into a form that I would agree with. If you view analysis broadly as both the theorems of analysis and methods of calculation (calculus), then obviously it has a ton of applications. However, I much prefer to teach undergraduate real analysis as pure mathematics, more particularly as an introduction to rigorous mathematics and proofs. This is partly as a corrective (or at least a complement) to the mostly applied and algorithmic interpretation of calculus that most American students see first. Some mathematicians think, and I've often been tempted to think, that it's a bad thing to do analysis twice, first as algorithmic and applied calculus and second as rigorous analysis. It can seem wrong not to have the rigor up-front. Now that I have seen what BC Calculus is like in a high school, I no longer think that it is a bad thing. Obviously I still think that the pure interpretation is important. On the other hand, both interpretations together is also fine by me. I notice that in France, calculus courses and analysis courses are both called "analyse mathématique". I think that they might separate rigorous and non-rigorous calculus a bit less than in the US, and it could be partly because of the name. In fact, it took me a long time to realize how certain non-rigorous explanations guide good rigorous analysis. For instance, the easy way to derive the Jacobian factor in a multivariate integral is to "draw" an infinitesimal parallelepiped and find its volume. That's not rigorous by itself, but it is related to an important rigorous construction, the exterior algebra of differential forms. Finally, I agree that Pugh's book is great. As the saying goes, you shouldn't judge it by its cover. :-)
|
{
"source": [
"https://mathoverflow.net/questions/26919",
"https://mathoverflow.net",
"https://mathoverflow.net/users/343/"
]
}
|
26,942 |
Part of what I do is study typical behavior of large combinatorial structures by looking at pseudorandom instances. But many commercially available pseudorandom number generators have known defects, which makes me wonder whether I should just use the digits (or bits) of $\pi$. A colleague of mine says he "read somewhere" that the digits of $\pi$ don't make a good random number generator. Perhaps he's thinking of the article "A study on the randomness of the digits of $\pi$" by Shu-Ju Tu and Ephraim Fischbach. Does anyone know this article? Some of the press it got (see e.g. http://news.uns.purdue.edu/html4ever/2005/050426.Fischbach.pi.html ) made it sound like $\pi$ wasn't such a good source of randomness, but the abstract for the article itself (see http://adsabs.harvard.edu/abs/2005IJMPC..16..281T ) suggests the opposite. Does anyone know of problems with using $\pi$ in this way? Of course if you use the digits of $\pi$ you should be careful not to re-use digits you've already used elsewhere in your experiment. My feeling is, you should use the digits of $\pi$ for Monte Carlo simulations. If you use a commercial RNG and it leads you to publish false conclusions, you've wasted time and misled colleagues. If you use $\pi$ and it leads you to publish false conclusions, you've still wasted time and misled colleagues, but you've also found a pattern in the digits of $\pi$!
|
Strictly speaking, there are some known patterns in the digits of $\pi$. There are some known results on how well $\pi$ can be approximated by rationals, which imply (for example) that we know a priori that the next $n$ as-yet-uncomputed digits of $\pi$ can't all be zero (for some explicit value of $n$ that I'm too lazy to compute right now). In practice, though, these "patterns" are so weak that they will not affect any Monte Carlo experiments. The main limitation of using the digits of $\pi$ may be the computational speed. Depending on how many random digits you need, computing fresh digits of $\pi$ might become a computational bottleneck. The further out you go, the harder it becomes to compute more digits of $\pi$. If you are worried about the quality of random digits that you're getting, then you may want to use cryptographic random number generators. For example, finding a pattern in the Blum-Blum-Shub random number generator would probably yield a new algorithm for factoring large integers! Cryptographic random number generators will run more slowly than the "commercial" random number generators you're talking about but you can certainly find some that will generate digits faster than algorithms for computing $\pi$ will.
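For what it's worth, here is a small sketch of the digits-of-π approach (my own addition, using the mpmath library): turn blocks of digits into uniform variates and run a toy Monte Carlo. The speed issue mentioned above shows up quickly if you push the digit count much higher.

```python
from mpmath import mp

def pi_digits(n):
    """First n decimal digits of pi after the decimal point."""
    mp.dps = n + 10                          # a few guard digits
    return [int(c) for c in str(mp.pi)[2:2 + n]]

def uniforms(digits, k=5):
    """Group k digits at a time into numbers in [0, 1)."""
    return [int(''.join(map(str, digits[i:i + k]))) / 10 ** k
            for i in range(0, len(digits) - k + 1, k)]

u = uniforms(pi_digits(40000))
# toy Monte Carlo: estimate pi from "random" points in the unit square
pairs = zip(u[0::2], u[1::2])
inside = sum(1 for x, y in pairs if x * x + y * y < 1.0)
print(4.0 * inside / (len(u) // 2))          # roughly 3.14, if the digits behave randomly
```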
|
{
"source": [
"https://mathoverflow.net/questions/26942",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3621/"
]
}
|
26,979 |
Is there a finite group such that, if you pick one element from each conjugacy class, these don't necessarily generate the entire group?
|
No, this is impossible. This is a standard lemma, but I'm finding it easier to give a proof than a reference: Let $G$ be your finite group. Suppose that $H$ were a proper subgroup, intersecting every conjugacy class of $G$. Then $G = \bigcup_{g \in G} g H g^{-1}$. If $g_1$ and $g_2$ are in the same coset of $G/H$, then $g_1 H g_1^{-1} = g_2 H g_2^{-1}$, so we can rewrite this union as $\bigcup_{g \in G/H} g H g^{-1}$. There are $|G|/|H|$ sets in this union, each of which has $|H|$ elements. So the only way they can cover $G$ is if they are disjoint. But they all contain the identity, a contradiction. UPDATE: I found a reference. According to Serre , this result goes back to Jordan, in the 1870's.
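Here is a small brute-force check of the lemma in $S_4$ (my own sketch; permutations are tuples $p$ with $p[i]$ the image of $i$).

```python
import random
from itertools import permutations

G = list(permutations(range(4)))                     # S_4, as tuples

def compose(p, q):                                   # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def conjugacy_classes(group):
    seen, classes = set(), []
    for g in group:
        if g not in seen:
            cls = {compose(compose(h, g), inverse(h)) for h in group}
            seen |= cls
            classes.append(sorted(cls))
    return classes

def generated(gens):
    """Subgroup generated by gens (closure under composition suffices in a finite group)."""
    sub = set(gens) | {tuple(range(4))}
    grew = True
    while grew:
        grew = False
        for a in list(sub):
            for b in list(sub):
                c = compose(a, b)
                if c not in sub:
                    sub.add(c)
                    grew = True
    return sub

classes = conjugacy_classes(G)
for _ in range(200):
    reps = [random.choice(cls) for cls in classes]   # one element from each class
    assert len(generated(reps)) == len(G)
print(f"{len(classes)} classes; every sampled choice of representatives generated all {len(G)} elements")
```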
|
{
"source": [
"https://mathoverflow.net/questions/26979",
"https://mathoverflow.net",
"https://mathoverflow.net/users/799/"
]
}
|
27,075 |
What is the oldest open problem in mathematics? By old, I am referring to the date the problem was stated. Browsing Wikipedia's list of open problems, it seems that the Goldbach conjecture (1742, every even integer greater than 2 is the sum of two primes) is a good candidate. The Kepler conjecture about sphere packing is from 1611, but I think this is finally solved (can anybody confirm?). There may still be some open problem stated at that time on the same subject that is not solved. Also, there are problems about cuboids that Euler may have stated and that are not yet solved, but I am not sure about that. A related question: can we say that we have solved all problems
handed down by the mathematicians from Antiquity?
|
Existence or nonexistence of odd perfect numbers. Update: Goes back at least to Nicomachus of Gerasa around 100 AD, according to J J O'Connor and E F Robertson. Nicomachus also asked about the infinitude of perfect numbers. (Goes back at least to Descartes 1638 https://mathworld.wolfram.com/OddPerfectNumber.html and arguably all the way back to Euclid.)
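For concreteness, a tiny sketch of mine showing what the problem asks: the perfect numbers below 10,000 are all even, and a brute-force search finds no odd perfect number in a small range (the known lower bounds for an odd perfect number are astronomically larger).

```python
from math import isqrt

def sigma(n):
    """Sum of all positive divisors of n."""
    total = 0
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

print([n for n in range(2, 10_000) if sigma(n) == 2 * n])       # [6, 28, 496, 8128] -- all even
print(any(sigma(n) == 2 * n for n in range(3, 100_000, 2)))     # False: no odd perfect number below 10^5
```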
|
{
"source": [
"https://mathoverflow.net/questions/27075",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6129/"
]
}
|
27,076 |
Often undergraduate discrete math classes in the US have a calculus prerequisite. Here is the description of the discrete math course from my undergrad: "A general introduction to basic mathematical terminology and the techniques of abstract mathematics in the context of discrete mathematics. Topics introduced are mathematical reasoning, Boolean connectives, deduction, mathematical induction, sets, functions and relations, algorithms, graphs, combinatorial reasoning." What about this course suggests calculus skills would be helpful? Is passing calculus merely a signal that a student is ready for discrete math? Why isn't discrete math offered to freshmen — or high school students — who often lack a calculus background?
|
A significant portion (my observation was about 20-30% at Berkeley, which means it must approach 100% at some schools) of first year students in the US do not understand multiplication. They do understand how to calculate $38 \times 6$, but they don't intuitively understand that if you have $m$ rows of trees and $n$ trees in each row, you have $m\times n$ trees. These students had elementary school teachers who learned mathematics purely by rote, and therefore teach mathematics purely by rote. Because the students are very intelligent and good at pattern matching and at memorizing large numbers of distinct arcane rules (instead of the few unifying concepts they were never taught because their teachers were never taught them either), they have done well at multiple-choice tests. These students are going to struggle in any calculus course or any discrete math course. However, it is easier to have them all in one place so that one instructor can try to help all of them simultaneously. For historical reasons, this place has been the calculus course.
|
{
"source": [
"https://mathoverflow.net/questions/27076",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6555/"
]
}
|
27,107 |
I heard that the following problem leads to determining the rational points of an elliptic curve: for which integers $n$ are there integers $x,y,z$ such that $x/y+y/z+z/x=n$? Could anyone show me why this question leads to the theory of elliptic curves?
|
Nearly 10 years ago, I gave a talk at Wesleyan, and a gentleman named Roy Lisker asked me the same question: Fix an integral solution $(x, \ y, \ z)$ and make the substitution $$u = 3 \ \frac {n^2 z - 12 \ x}z \qquad v = 108 \ \frac {2 \ x \ y - n \ x \ z + z^2}{z^2}$$ Then $(u, \ v)$ is a rational point on the elliptic curve $E_n: \ v^2 = u^3 + A \ u + B$ where $A = 27 \ n \ (24 - n^3)$ and $B = 54 \ (216 - 36 \ n^3 + n^6)$. (It actually turns out that $E_n$ is an elliptic curve whenever $n$ is different from 3, but I’ll discuss this case separately.) Let me say a little about the structure of this curve for the experts: This curve has the “obvious” rational point $T=(3 n^2, 108)$ which has order 3, considering the group structure of $E_n$. It actually turns out that these three multiples correspond to the cases $x = 0$ and $z = 0$, so if such an integral solution $(x, \ y, \ z)$ exists then the rational solution $(u, \ v)$ must correspond to a point on $E_n$ not of order 3. (Of course, I don’t care about the cyclic permutation $x \to y \to z \to x$.) In the following table I’m computing the Mordell-Weil group of the rational points on the elliptic curve i.e. the group structure of the set of rational solutions $(u, \ v)$: $$ \begin{matrix}
n & E_n(\mathbb Q) \\ \\
1 & Z_3 \\
2 & Z_3 \\
3 & \text{Not an elliptic curve} \\
4 & Z_3 \\
5 & Z_6 \\
6 & Z_3 \oplus \mathbb Z \\
7 & Z_3 \\
8 & Z_3 \\
9 & Z_3 \oplus \mathbb Z \\
\end{matrix} $$ Hence when $n =$ 1, 2, 4, 7 or 8 we find no integral solutions $(x, \ y, \ z)$. When $n = 5$, there are only six rational points on $E_n$, namely the multiples of $(u,v) = (3, 756)$ which all yield just one positive integral point $(x,y,z) = (2,4,1)$. Something fascinating happens when $n = 6$... The rank is positive (the rank is actually 1) so there are infinitely many rational points $(u, \ v)$. But we must be careful: not all rational points $(u, \ v)$ yield positive integral points $(x, \ y, \ z)$. Clearly, we can scale $z$ large enough to always choose $x$ and $y$ to be integral, but we might not be able to make $x$ and $y$ both positive. You'll note that $x > 0$ if and only if $u < 3 \ n^2$, so we only want rational points in a certain region of the graph. Since the rank is 1, this part of the graph is dense with rational points! Let me give some explicit numbers. The torsion part of $E_n(\mathbb Q)$ is generated by $T = (108, 108)$ and the free part is generated by $(u,v) = (-108, 2052)$. By considering various multiples of this point we get a lot of positive integral -- yet unwieldy! -- points $(x,y,z)$ such that $x/y + y/z + z/x = 6$: $$\begin{aligned} (x,y,z) & = (12, 9, 2), \\
& = (17415354475, 90655886250, 19286662788) \\
& = (260786531732120217365431085802, 1768882504220886840084123089612, 1111094560658606608142550260961) \\
& = (64559574486549980317349907710368345747664977687333438285188, 70633079277185536037357392627802552360212921466330995726803, 313818303038935967800629401307879557072745299086647462868546) \end{aligned} $$ I'll just mention in passing that when $n = 9$ the elliptic curve $E_n$ also has rank 1. The generator $(u,v) = (54, 4266)$ corresponds to the positive integral point $(x,y,z) = (63, 98, 12)$ on $x/y + y/z + z/x = 9$. What about $n = 3$? The curve $E_n$ becomes $v^2 = (u - 18) (u + 9)^2$. This gives two possibilities: either $u = -9$ or $u \geq 18$. The first corresponds to $x = z$ while the second corresponds to $(z/x) \geq 4$. By cyclically permuting $x$, $y$, and $z$ we find similarly that either $x = y = z$ or $x/y + y/z + z/x \geq 6$. The latter case cannot happen by assumption so $x = y = z$ is the only possibility, i.e., $(x,y,z) = (1,1,1)$ is the only solution to $x/y + y/z + z/x = 3$.
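The substitution is easy to sanity-check with exact rational arithmetic. Here is a short sketch of mine verifying the small solutions quoted above and that their images land on $E_n$.

```python
from fractions import Fraction as F

def check(x, y, z, n):
    """Verify x/y + y/z + z/x = n and that (u, v) lies on E_n: v^2 = u^3 + A*u + B."""
    assert F(x, y) + F(y, z) + F(z, x) == n
    u = 3 * F(n ** 2 * z - 12 * x, z)
    v = 108 * F(2 * x * y - n * x * z + z ** 2, z ** 2)
    A = 27 * n * (24 - n ** 3)
    B = 54 * (216 - 36 * n ** 3 + n ** 6)
    assert v ** 2 == u ** 3 + A * u + B
    return u, v

print(check(2, 4, 1, 5))     # (3, 756), the generator mentioned for n = 5
print(check(12, 9, 2, 6))    # (-108, 2052), the generator of the free part for n = 6
print(check(63, 98, 12, 9))  # (54, 4266), the generator for n = 9
```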
|
{
"source": [
"https://mathoverflow.net/questions/27107",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6266/"
]
}
|
27,126 |
$$e^{\pi i} + 1 = 0$$ I have been searching for a convincing interpretation of this. I understand how it comes about but what is it that it is telling us? Best that I can figure out is that it just emphasizes that the various definitions mathematicians have provided for non-intuitive operations (complex exponentiation, concept of radians etc.) have been particularly inspired. Is that all that is behind the slickness of the Famous Five equation? Any pointers?
|
$$e^{i\pi}=\lim\limits_{N\to\infty}\left(1 + \frac{i\pi}{N}\right)^N$$
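A quick numerical look at this limit (my own addition):

```python
import cmath

# (1 + i*pi/N)^N approaches -1 + 0j as N grows
for N in (10, 100, 1000, 10_000, 100_000):
    print(N, (1 + 1j * cmath.pi / N) ** N)
```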
|
{
"source": [
"https://mathoverflow.net/questions/27126",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6566/"
]
}
|
27,144 |
As you all probably know, Vladimir I. Arnold passed away yesterday. In the obituaries, I found the following statement (AFP): In 1974 the Soviet Union opposed Arnold's award of the Fields Medal, the most prestigious recognition in work in mathematics that is often compared to the Nobel Prize, making him one of the most preeminent mathematicians to never receive the prize. Since he made some key results before 1974, it seems that the award would have been deserved. Knowing that the Soviets sometimes forced Nobel laureates not to accept their prizes, I thought at first that the same happened here - but noticing that Kantorovich received his Nobel prize the next year, and that there were Russian Fields laureates in both 1970 and 1978 (Novikov and Margulis, respectively), I cannot understand why the Soviets opposed it in the case of Arnold. Can someone shed some light? EDIT: I googled the resources online in English before asking this question here and found no answers. But after posting it here, I googled it in Russian and found the following account; here is a translation of the text: Vladimir Arnold was nominated for the Fields Medal in 1974. The following is Arnold's own version of the story; I hope I'm remembering it correctly. The matter seemed settled: the Fields Medal Committee recommended to award a medal to Arnold. The final decision was to be taken by the top administrative body of the International Mathematical Union, the Executive Committee. In 1971–1974 the vice-president of the Executive Committee was one of the Soviet Union's (even the world's) greatest mathematicians, Lev Semenovich Pontryagin (also a member of the Soviet Academy of Sciences).
On the eve of his visit to the meeting of the EC, Pontryagin invited Arnold to his home for lunch and to talk about Arnold’s work. Pontryagin told Arnold he was instructed not to allow the Fields Medal to be awarded to him. In case the executive committee wouldn’t agree and would still try to award the medal to Arnold, Pontryagin was authorized to threaten that the Soviet delegation would boycott the next International Congress of Mathematicians in Vancouver, or even that the USSR would leave the IMU. However, in order for Pontryagin’s assertions about Arnold’s work to be convincing, Pontryagin said, he would need to know that work very well. That’s why he invited Arnold in order that he describe his work in detail. Arnold did. According to Arnold, the questions Pontryagin asked him were profound, the talk with him interesting, and the meal good. I do not know if Pontryagin had to make his threat known, but Arnold did not receive the medal—and only two medals were awarded instead of the intended three. By the next time the medals were being awarded, Arnold, born in 1937, was over the age limit. In 1995, Arnold himself became vice-president and learned that in 1974 the depth of Pontryagin’s familiarity with Arnold’s work made a great impression on the members of the EC.
|
Pontryagin wrote a book "Biography of Lev Semenovich Pontryagin, a mathematician, composed by himself". It is available online at http://www.ega-math.narod.ru/LSP/book.htm , in the original Russian. Google does a fairly good job of translation, although it refuses to translate the individual chapters completely because of their length. In the book, Pontryagin shares a lot about the inner workings of the IMU Executive Board and his own role in holding the Soviet party line there as its vice president. For example, he recounts his version of how France got the IMU presidency in 1974, so that neither the Soviet Union nor the US would dominate. The only relevant mention of Arnold that I could find in that book is in chapter 5. He states that in 1974 Arnold was not allowed to leave the country to lecture abroad, and that there was a conflict about this with the Executive Board of the IMU, who insisted that he should. From this, you could extrapolate the reasons for blocking Arnold's Fields medal, if the story is true.
|
{
"source": [
"https://mathoverflow.net/questions/27144",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4925/"
]
}
|
27,164 |
I have just completed an introductory course on analysis, and have been looking over my notes for the year. For me, although it was certainly not the most powerful or important theorem which we covered, the most striking application was the Fourier analytic proof of the isoperimetric inequality . I understand the proof, but I still have no feeling for why anyone would think to use Fourier analysis to approach this problem. I am looking for a clear reason why someone would look at this and think "a Fourier transform would simplify things here". Even better would be a physical interpretation. Could this somehow be related to "hearing the shape of a drum"? Is there any larger story here?
|
Experience with Fourier analysis and representation theory has shown that every time a problem is invariant with respect to a group symmetry, the representation theory of that group is likely to be relevant. If the group is abelian, the representation theory is given by the Fourier transform on that group. In this case, the relevant symmetry group is that of reparameterising the arclength parameterisation of the perimeter by translation. This operation does not change the area or the perimeter. When combined with the observation (from Stokes theorem) that both the area and perimeter of a body can be easily recovered from the arclength parameterisation, this naturally suggests to use Fourier analysis in the arclength variable.
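For readers who want to see the mechanism, here is a brief sketch of the classical Fourier-analytic (Hurwitz) argument; normalizations vary from source to source:

```latex
% Rescale the curve so that L = 2\pi and let z : [0, 2\pi] \to \mathbb{C} be the arclength
% parameterization, so |z'(s)| = 1; expand z(s) = \sum_{n} c_n e^{ins}.
\begin{align*}
  1 &= \frac{1}{2\pi}\int_0^{2\pi} |z'(s)|^2 \, ds \;=\; \sum_{n} n^2 |c_n|^2
      && \text{(Parseval applied to } z'\text{)}, \\
  A &= \frac{1}{2}\Bigl|\operatorname{Im}\int_0^{2\pi} \overline{z(s)}\, z'(s)\, ds\Bigr|
      \;=\; \pi \Bigl|\sum_{n} n\, |c_n|^2\Bigr|
      \;\le\; \pi \sum_{n} n^2 |c_n|^2 \;=\; \pi \;=\; \frac{L^2}{4\pi}.
\end{align*}
% Equality forces c_n = 0 for n \notin \{0, 1\} (with positive orientation), i.e. the
% curve is a circle.  The arclength parameterization is exactly where the translation
% symmetry mentioned above lives.
```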
|
{
"source": [
"https://mathoverflow.net/questions/27164",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1106/"
]
}
|
27,197 |
The following definitions are standard: An affine variety $V$ in $A^n$ is a complete intersection (c.i.) if its vanishing ideal can be generated by ($n - \dim V$) polynomials in $k[X_1,\ldots, X_n]$. The definition can also be made for projective varieties. $V$ is locally a complete intersection (l.c.i.) if the local ring of each point on $V$ is a c.i. (that is, quotient of a regular local ring by an ideal generated by a regular sequence). What are examples (preferably affine) of l.c.i. which are not c.i. ? I have never seen such one.
|
(To supplement Alberto's example) If $V$ is projective, then the gap between being locally c.i. and c.i. is quite big. In particular, any smooth $V$ is locally c.i., but typically not c.i. For instance, taking $V$ to be a few points in $\mathbb P^2$ gives simple examples. In higher dimensions, by Grothendieck-Lefschetz, if $V$ is smooth, $\dim V\geq 3$, and $V$ is c.i., then $\text{Pic}(V)=\mathbb Z$, so being c.i. is a serious restriction. The affine case is more subtle. Again one can look at smooth varieties. If $V$ is a smooth affine curve and c.i., then the canonical bundle of $V$ is trivial. This gives the following strategy: start with a projective curve $X$ of genus at least $2$ and remove some general points to obtain a (still smooth) affine curve with non-trivial canonical bundle. For more details on the second paragraph, see this question, especially Bjorn Poonen's comments. This paper contains relevant references, and also an example with trivial canonical bundle.
|
{
"source": [
"https://mathoverflow.net/questions/27197",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6576/"
]
}
|
27,244 |
Call a tack the one-point union of three open intervals. Can you fit an uncountable number of them in the plane, or only a countable number?
|
First of all, the one-point compactification of three open intervals is not a "tack", it's a three-leaf clover. I think that you mean a one-point union of three closed intervals; of course it doesn't matter if the other three endpoints are there or not. This topological type can be called a "Y" or a "T" or a "simple triod". R.L. Moore published a solution to your question in 1928. The answer is no. It was generalized in 1944 by his student Gail Young: You can only have countably many $(n-1)$-dimensional tacks in $\mathbb{R}^n$ for any $n \ge 2$. For her theorem, the name "tack" makes rather more sense, but she calls it a "$T_n$-set". Actually Moore's theorem applies to a more general kind of triod, in which three tips of the "Y" are connected to the center by "irreducible continua", rather than necessarily intervals. I don't know whether I might be spoiling a good question, but here in any case is a solution to the original question (since both Moore and Young did something more general that takes more discussion). Following domotorp's hint, there is a pigeonhole principle for uncountable sets, a principle of accumulation onto a countable set of outcomes. If $f:A \to B$ is a function from an uncountable set $A$ to a countable set $B$, then there is an uncountable inverse image $A' = f^{-1}(b)$. If you want to show that $A$ does not exist, then you might as well replace it with $A'$. Unlike the finite pigeonhole principle, which becomes more limited with each such replacement, $A'$ has the same cardinality as $A$, so you haven't lost anything. You are even free to apply the uncountable pigeonhole principle again. Suppose that you have uncountably many simple triods in the plane. Given a simple triod, we can choose a circle $C$ with rational radius and rational center with the branch point of the triod on the inside and the three tips on the outside. Since there are only countably many such circles, there are uncountably many triods with the same circle $C$. We can trim the segments of each such triod so that they stop when they first touch $C$, to make a pie with three slices (a Mercedes-Benz symbol). Then, given such a triod, we can pick a rational point in each of the three slices of the pie. Since there are only countably many such triples of points, there must be uncountably many triods with the same three points $p$, $q$, and $r$. In particular there are two such triods, and a suitable version of the Jordan curve theorem implies that they intersect. The argument can be simplified to just pick a rational triangle that functions as the circle, and whose corners function as the three separated points. But I think that there is something to learn from the variations together, namely that the infinite pigeonhole principle gives you a lot of control. For instance, with hardly any creativity, you can assume that the triods are all large.
|
{
"source": [
"https://mathoverflow.net/questions/27244",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6610/"
]
}
|
27,258 |
I recently read the statement "up to conjugacy there are 4 nontrivial finite subgroups of ${\rm SL}_2(\mathbb{Z})$." They are generated by $$\left(\begin{array}{cc} -1&0 \\ 0&-1\end{array}\right),
\left(\begin{array}{cc} -1&-1 \\ 1&0\end{array}\right),
\left(\begin{array}{cc} 0&-1 \\ 1&0\end{array}\right),
\left(\begin{array}{cc} 0&-1 \\ 1&1\end{array}\right) $$
and are isomorphic to $\mathbb{Z}_2$, $\mathbb{Z}_3$, $\mathbb{Z}_4$, and $\mathbb{Z}_6$, respectively. Does someone know a reference for this statement? (Or, is it easy to see?) My attempt at a Google search turned up this statement, but I wasn't able to find a reference.
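For anyone who wants to verify the stated orders numerically, here is a small illustrative check (my own addition; it assumes numpy and of course proves nothing about the conjugacy classification itself):

```python
import numpy as np

def order(M, bound=20):
    """Smallest k >= 1 with M^k = I (searched up to `bound`), else None."""
    I = np.eye(2, dtype=int)
    A = I.copy()
    P = np.array(M, dtype=int)
    for k in range(1, bound + 1):
        A = A @ P
        if np.array_equal(A, I):
            return k
    return None

gens = [[[-1, 0], [0, -1]],   # -I
        [[-1, -1], [1, 0]],
        [[0, -1], [1, 0]],
        [[0, -1], [1, 1]]]
print([order(M) for M in gens])   # expected: [2, 3, 4, 6]
```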
|
A finite subgroup will map to a finite subgroup of $PSL_2(\mathbb Z)$, which is a free product $Z_2 * Z_3$. I believe I have been told that finite subgroups of free products of finite groups are conjugate to subgroups of the factors being free-producted together.
|
{
"source": [
"https://mathoverflow.net/questions/27258",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2669/"
]
}
|
27,299 |
Hi, I'll be starting graduate school soon, and when I look back at my college career, there are certain things I wish I could have done differently. In hindsight, I wished I wasn't in such a rush to study [insert your favorite hot topic here] as opposed to pinning down the fundamentals of the course materials I was studying. Probably the best class I took was a seminar where the prof had us read and discuss from classic texts in differential geometry and PDEs. Anyways, now that I'm starting graduate school, I'd like to avoid other common pitfalls that graduate students make. So my question is: What are common pitfalls, mistakes or misconceptions that you wished somebody had told you were wrong? I'm interested in pretty much anything from how to conduct research, to what courses to take, or anything else.
|
Marie desJardins has a nice article on Surviving Graduate School that is definitely worth reading. The top two pieces of advice I would give are: The most important thing when choosing an advisor is to find someone who will go out of his or her way to help you succeed, not someone who is famous, and not even someone whose research is in the right area. You need to make the transition from being a mathematics student to being a mathematician. That means thinking of mathematics as an arena where you seek out unsolved problems and obsess over them until you solve them, not as a vast sea of material to be learned. Don't get sidetracked trying to learn everything; that's impossible. Focus on finding an open problem you can solve, and solve it.
|
{
"source": [
"https://mathoverflow.net/questions/27299",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6626/"
]
}
|
27,316 |
I have no intuition for field theory, so here goes. I know what the algebraic and separable closures of a field are, but I have no feeling of how different (or same!) they could be. So, what are the differences between them (if any) for a perfect field? A finite field? A number field? Are there geometric parallels? (say be passing to schemes, or any other analogy)
|
Geometrically there is a very big difference between separable and algebraic
closures (in the only case where there is a difference at all, i.e., in positive
characteristic $p$). Technically, this comes from the fact that an algebraically
closed field $k$ has no non-trivial derivations $D$; for every $f\in k$ there is
a $g\in k$ such that $g^p=f$ and then $D(f)=D(g^p)=pg^{p-1}D(g)=0$. This means
that an algebraically closed field contains no differential-geometric
information. On the other hand, if $K\subseteq L$ is a separable extension, then
every derivation of $K$ extends uniquely to a derivation of $L$ so when taking a
separable closure of a field a lot of differential-geometric information remains. Hence I tend to think of a point of a variety for a separably closed field as a
very thick point (particularly if it is a separable closure of a generic point)
while a point over an algebraically closed field is just an ordinary (very thin)
point. Of course you lose infinitesimal information by just passing to the perfection of a field (which is most conveniently defined as the direct
limit over the system of $p$'th power maps). Sometimes that is however exactly
what you want. That idea first appeared (I think) in Serre's theory of
pro-algebraic groups where he went one step further and took the perfection of
group schemes (for any scheme in positive characteristic the perfection is the limit,
inverse this time, of the system of Frobenius morphisms) or equivalently
restricted their representable functors to perfect schemes. This essentially
killed off all infinitesimal group schemes and made the theory much closer to
the characteristic zero theory (though interesting differences remained mainly
in the fact that there are more smooth unipotent group schemes such as the Witt
vector schemes). Another interesting example is Milne's flat cohomology
duality theory which needs to invert Frobenius by passing to perfect schemes in
order to have higher $\mathrm{Ext}$-groups vanish (see SLN 868).
|
{
"source": [
"https://mathoverflow.net/questions/27316",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4177/"
]
}
|
27,324 |
To construct J. H. Conway's look-and-say sequence , begin by putting down a 1 as the first entry. The other entries are found by saying the previous entry aloud, and writing what you hear. 1
11
21
1211
111221
312211 (previous entry was three 1's, two 2's and one 1)
... Conway provides his usual fantastical analysis in The Weird and Wonderful Chemistry of Audioactive Decay [Eureka 46, 5-18], where he demonstrates several otherworldly properties of this sequence. One was this: the ratio of the lengths of consecutive entries has a limit, $\lambda$ . Furthermore, $\lambda$ is the root of a polynomial of degree 71. Now, when I was in high school we were taught the quadratic formula and told there is a cubic formula, but you don't have to learn it. Why? "You won't be needing it." And mostly I've found that to be true. Am I wrong, or do high-degree polynomials rarely occur (in uncontrived settings)? What are some other examples of useful roots of polynomials of high degree? Power series and the like can obviously produce useful polynomials of arbitrarily large degree, but I'm looking for surprises such as the degree-71 polynomial at the heart of the look-and-say sequence above.
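To make the construction concrete, here is a small illustrative sketch in Python of the "say it aloud" step (my own addition, just for experimentation):

```python
from itertools import groupby

def look_and_say(term: str) -> str:
    """Read `term` aloud: e.g. '111221' -> 'three 1s, two 2s, one 1' -> '312211'."""
    return ''.join(str(len(list(run))) + digit for digit, run in groupby(term))

term = '1'
for _ in range(6):
    print(term)
    term = look_and_say(term)
# prints 1, 11, 21, 1211, 111221, 312211
```

Iterating further and taking ratios of consecutive lengths gives a rough numerical approximation to Conway's constant $\lambda \approx 1.3036$.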
|
From Fernando Rodriguez-Villegas' preprint : "Chebyshev in his work on the distribution of prime numbers used the following fact
$$
u_n:=\frac{(30n)!n!}{(15n)!(10n)!(6n)!}\in\mathbb Z,
\qquad n = 0, 1, 2, \dots
$$
(see also my question -- WZ).
This is not immediately obvious (for example, this ratio of factorials is not a product
of multinomial coefficients) but it is not hard to prove. The only proof I know
proceeds by checking that the valuations $v_p(u_n)$ are non-negative for every prime
$p$; an interpretation of $u_n$ as counting natural objects or being dimensions of natural
vector spaces is far from clear. As it turns out, the generating function
$$
u(\lambda):=\sum_{n=0}^\infty u_n\lambda^n
$$
is algebraic over $\mathbb Q(\lambda)$; i.e. there is a polynomial $F\in\mathbb Q[x,y]$ such that
$$
F(\lambda,u(\lambda))=0.
$$
However, we are not likely to see this polynomial explicitly any time soon as its
degree is $483,840$ (!)"
|
{
"source": [
"https://mathoverflow.net/questions/27324",
"https://mathoverflow.net",
"https://mathoverflow.net/users/175/"
]
}
|
27,344 |
There are many different styles of lecturing, and many different aspects that are blended together to give a whole "lecturing style". That said, I'm particularly interested in hearing people's experiences with so-called "handouts". At one extreme lie the lecturers who "dictate" a set of notes (usually not actual dictation, but by writing on a board) whilst at the other are lecturers who distribute complete lecture notes in advance. As this is math overflow, I realise that it is extremely unlikely that it will be possible to answer the question "which is better", and I realise that this probably depends much more on other factors than just whether or not notes where given out or not, but to help me decide what to do then I'd like to hear people's experiences - both as lecturers and students. If anyone can point me to actual research on this in the (mathematics education) literature then that would be an unlooked-for bonus. (Minor edit: in light of the way that comments get displayed in "short form", I'd like to make it clear that the "Andrew" referred to in many of the comments is not the same "Andrew" who edited this question! Unfortunately, if I put this remark in a comment - which is where it belongs - then it wouldn't be seen by those casually stumbling across this question and so those most prone to making that assumption!)
|
Without pre-typed lecture notes (or a textbook that is being followed reasonably closely), many students often feel pressured to copy down every scrap that the lecturer writes down, in case they are missing out on something that will be vitally important later. This often comes at the cost of the student being able to comprehend what is going on in real-time. A related problem is that without the backup of official notes or textbook, a single typo in lecture can lead to hours of confusion on the student's part when reviewing his or her transcribed notes afterwards. (The problem is mitigated somewhat nowadays by the plethora of online mathematics resources, combined with modern search engines, but the situation is still less than ideal.) Note also that while the lecturer may know in advance which portions of the lecture are important enough to remember, and which ones are more trifling, many students will not be able to make the distinction in real time, and will thus have to record everything, leading to a sub-optimal allocation of the student's mental resources. To me, the above dangers are worse than the opposite danger that the students are lulled into complacency by the existence of official lecture notes, and thus cease to pay attention to the class. The latter problem can be fixed by a variety of means (e.g. making the classes more interactive or entertaining, or making the homework challenge the student beyond what is presented in the notes), and in any case is more a matter of the responsibilities of the student than of the lecturer. The former problem is however difficult for the student to address by himself or herself (using third-party lecture notes, for instance, is usually a terrible solution). Ideally, the existence of lecture notes should free up lecture time to focus on other aspects of the course (e.g. one could do a simple example in class, and refer to the notes for a more detailed example; or a heuristic proof with some details partially filled in, with the more technical details left to the notes; one can also present the more improvisational and free-form side of mathematics effectively in lecture, whereas the text medium is far superior for presenting the polished and structured side). Using class time to mechanically repeat what is written in the notes or textbook is a waste, and reduces the lecturer to essentially being a fancy text-to-speech synthesiser (this is the dual problem to that of the student being reduced to essentially a fancy speech-to-text synthesiser); instead, lectures should complement and support , rather than replicate , text, and vice versa. I discuss these issues more in my teaching statement, http://www.math.ucla.edu/~tao/teaching.dvi
|
{
"source": [
"https://mathoverflow.net/questions/27344",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6590/"
]
}
|
27,375 |
For a group $G$, is there an interpretation of $\mathbb C[G]$ as functions over some noncommutative space? If so, what does this space "look like"? What are its properties? How are they related to properties of $G$?
|
The noncommutative space defined by $C[G]$ is (by definition) the dual $\widehat{G}$ of G.
There are as many ways to make sense of this space as there are theories of noncommutative geometry. (Edit: In particular if G is not finite you have lots of possible meanings for the group ring, depending on what kind of regularity and support conditions you put on G, or equivalently, what class of representations of G you wish to consider, and I will ignore all such issues - which are the main technical part of the subject - below.) One basic principle is that noncommutative geometry is not about algebras up to isomorphism, it's about algebras up to Morita equivalence -- in other words, it's in fact about categories of modules over algebras (the basic invariant of Morita equivalence). You can think of these (depending on context) as vector bundles or sheaves of some kind on the dual. In this case we're looking at the category of complex representations of G, which are sheaves on the dual $\widehat{G}$. You can think of this as a form of the Fourier transform (modules for functions on G with convolution = modules for functions on the dual with multiplication), though obviously at this level of detail it's a complete tautology. Coarser invariants such as K-theory of group algebras, Hochschild homology etc give invariants, eg K-theory and cohomology, of the dual, noncommutative as it may be. There are many conjectures about this noncommutative topology, most famously the Baum-Connes conjecture relating the K-theory of this "space" to that of classifying spaces associated to G. As to what the dual looks like, this is of course highly dependent on the group.
For completely arbitrary groups I don't know of anything meaningful to say beyond structural things of the Baum-Connes flavor, so you have to pick a class of groups to study. If G is abelian, the dual is itself a group (the dual group). The formal thing you can say in general is that the dual of G fibers over the dual of the center of G -- this is a form of Schur's lemma, saying irreducible reps live over a particular point of the dual of the center (ie the center acts by evaluation by a character). You might get some more traction by looking at the "Bernstein center" or Hochschild cohomology --- endomorphisms of the identity functor of G-reps. This is a commutative algebra and the dual fibers over its spectrum. In many cases this is a very good approximation to the dual -- ie the "fibers are finite" (this is what happens for say real and p-adic groups). The orbit method of Kirillov says that for a nilpotent or solvable group, the dual looks like the dual space of the Lie algebra, modulo the coadjoint action. So again that's quite nice. Very very roughly the Langlands philosophy says that for reductive groups G (in particular over local or finite fields) the dual of G is related to conjugacy classes in a dual group $G^\vee$. This is if you'd like a way to make meaningful the observation that conjugacy classes and irreps are in bijection for a finite group -- you roughly want to say they're in CANONICAL bijection if the two groups are "dual". Rather than say it this coarsely, it's better to think in terms of the Harish-Chandra / Gelfand philosophy, which (again whittled down to one coarse snippet) says that the dual of a reductive group (over any field) is a union of "series", ie a union of subspaces each of which looks like the dual of a torus modulo a Weyl group. In other words, you look at all conjugacy classes of tori in G, for each torus you construct its dual (which is a group now!), and mod out by the symmetries inherited by the torus from its embedding in G, and this is the dual of G roughly. (This is also very close to saying semisimple conjugacy classes in the dual group of G, which is where the Langlands interpretation comes from).
Anyway this is saying that the dual is a very nice and manageable, even algebraic, object.
Kazhdan formulated this philosophy as saying that the dual of a reductive group is an algebraic object --- the reps of the group over a field F are something like the F-points of one fixed variety (or stack) over the algebraic closure.
Anyway one can go much further, and that's what the Langlands program does.
|
{
"source": [
"https://mathoverflow.net/questions/27375",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2837/"
]
}
|
27,579 |
Let $p \ne 2$ be a prime and $a$ the smallest positive integer that is a primitive root modulo $p$. Is $a$ necessarily a primitive root modulo $p^2$ (and hence modulo all powers of $p$)? I checked this for all $p < 3 \times 10^5$ and it seems to work, but I can't see any sound theoretical reason why it should be the case. What is there to stop the Teichmuller lifts of the elements of $\mathbb{F}_p^\times$ being really small?
|
It is not true in general. See http://primes.utm.edu/curios/page.php/40487.html for the example, 5 mod 40487^2.
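Here is a small numerical check of that counterexample (an illustration only; the factorization of $p-1$ is hard-coded, and the criterion used is the standard one: a primitive root $a$ mod $p$ is a primitive root mod $p^2$ unless $a^{p-1}\equiv 1 \pmod{p^2}$):

```python
p, a = 40487, 5
factors_of_p_minus_1 = [2, 31, 653]      # 40486 = 2 * 31 * 653

# a is a primitive root mod p iff a^((p-1)/q) != 1 (mod p) for every prime q dividing p-1
is_primitive_root_mod_p = all(pow(a, (p - 1) // q, p) != 1 for q in factors_of_p_minus_1)

# Given that, a fails to be a primitive root mod p^2 iff a^(p-1) == 1 (mod p^2)
fails_mod_p_squared = pow(a, p - 1, p * p) == 1

print(is_primitive_root_mod_p, fails_mod_p_squared)   # expected: True True
```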
|
{
"source": [
"https://mathoverflow.net/questions/27579",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2481/"
]
}
|
27,592 |
This question may sound ridiculous at first sight, but let me please show you all how I arrived at the aforementioned 'identity'. Let us begin with (one of the many) equalities established by Euler: $$ f(x) = \frac{\sin(x)}{x} = \prod_{n=1}^{\infty} \Big(1-\frac{x^2}{n^2\pi^2}\Big) $$ As $(a^2-b^2)=(a+b)(a-b)$, we can also write: (EDIT: We can not write this...) $$ f(x) = \prod_{n=1}^{\infty} \Big(1+\frac{x}{n\pi}\Big) \cdot \prod_{n=1}^{\infty} \Big(1-\frac{x}{n\pi}\Big) $$ We now arrange the terms with $(n = 1 \land n=-2)$, $(n = -1 \land n=2)$, $(n=3 \land n=-4)$, $(n=-3 \land n=4)$, ..., $(n = 2n \land n=-2n-1)$ and $(n=-2n \land n=2n+1)$ together.
After doing so, we multiply the terms accordingly to the arrangement. If we write out the products, we get: $$ f(x)=\big((1-x/2\pi + x/\pi -x^2/2\pi^2)(1+x/2\pi-x/\pi - x^2/2\pi^2)\big) \cdots $$
$$
\cdots \big((1-\frac{x}{2n\pi} + \frac{x}{(2n-1)\pi} -\frac{x^2}{2n(2n-1)\pi^2})(1+\frac{x}{2n\pi} -\frac{x}{(2n-1)\pi} -\frac{x^2}{2n(2n-1)\pi^2})\big) $$ Now we equate the $x^2$-term of this infinite product, using Newton's identities (notice that the '$x$'-terms are eliminated), to the $x^2$-term of the Taylor-expansion series of $\frac{\sin(x)}{x}$. So, $$ -\frac{2}{\pi^2}\Big(\frac{1}{1\cdot2} + \frac{1}{3\cdot4} + \frac{1}{5\cdot6} + \cdots + \frac{1}{2n(2n-1)}\Big) = -\frac{1}{6} $$ Multiplying both sides by $-\pi^2$ and dividing by 2 yields $$\sum_{n=1}^{\infty} \frac{1}{2n(2n-1)} = \pi^2/12 $$ That (infinite) sum 'also' equals $\ln(2)$, however (according to the last section of this paper). So we find $$ \frac{\pi^2}{12} = \ln(2) . $$ Of course we all know that this is not true (you can verify it by checking the first couple of digits). I'd like to know how much of this method, which I used to arrive at this absurd conclusion, is true, where it goes wrong and how it can be improved to make it work in this and perhaps other cases (series). Thanks in advance, Max Muller (note I: 'ln' means 'natural logarithm')
(note II: 'to make it work' means 'to find the exact value of')
|
You cannot split $$\left(1-\left(\frac{x}{n}\right)^2\right)\tag{1}$$ into $$\left(1 -\frac{x}{n}\right) \left(1 + \frac{x}{n}\right)\tag{2}$$ since the products no longer converge.
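To spell out why the convergence fails (a routine estimate, added here for clarity; take $0 < x < \pi$ to avoid vanishing factors):

```latex
% The partial products of the split factors satisfy
\begin{align*}
  \log \prod_{n=1}^{N}\Bigl(1+\frac{x}{n\pi}\Bigr)
    = \sum_{n=1}^{N}\log\Bigl(1+\frac{x}{n\pi}\Bigr)
    = \frac{x}{\pi}\sum_{n=1}^{N}\frac{1}{n} + O(1)
    = \frac{x}{\pi}\log N + O(1),
\end{align*}
% so \prod_{n\le N} (1 + x/(n\pi)) grows like N^{x/\pi} \to \infty, and the same estimate
% shows \prod_{n\le N} (1 - x/(n\pi)) decays like N^{-x/\pi} \to 0.  Only the product of
% the paired factors (1 - x^2/(n^2\pi^2)) converges, which is why the regrouping used in
% the question is not legitimate.
```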
|
{
"source": [
"https://mathoverflow.net/questions/27592",
"https://mathoverflow.net",
"https://mathoverflow.net/users/93724/"
]
}
|
27,681 |
How can the liar paradox be expressed concisely in symbols? In which formal languages?
|
The Liar is the statement "this sentence is false." It is expressible in any language able to perform self-reference and having a truth predicate. Thus, $L$ is a statement equivalent to $\neg T(L)$. Goedel proved that the usual formal languages of mathematics, such as the language of arithmetic, are able to perform self-reference in the sense that for any assertion $\varphi(x)$ in the language of arithmetic, there is a sentence $\psi$ such that PA proves $\psi\iff\varphi(\langle\psi\rangle)$, where $\langle\psi\rangle$ denotes the Goedel code of $\psi$. Thus, the sentence $\psi$ asserts that "$\varphi$ holds of me". Tarski observed that it follows from this that truth is not definable in arithmetic. Specifically, he proved that there can be no first order formula $T(x)$ such that $\psi\iff T(\langle\psi\rangle)$ holds for every sentence $\psi$. The reason is that the formula $\neg T(x)$ must have a fixed point, and so there will be a sentence $\psi$ for which PA proves $\psi\iff\neg T(\langle\psi\rangle)$, which would contradict the assumed property of $T$. The sentence $\psi$ is exactly the Liar. Goedel observed that the concept of "provable", in contrast, is expressible, since a statement is provable (in PA, say) if and only if there is a finite sequence of symbols having the form of a proof. Thus, again by the fixed point lemma, there is a sentence $\psi$ such that PA proves $\psi\iff\neg\text{Prov}(\langle\psi\rangle)$. In other words, $\psi$ asserts "I am not provable". This statement is sufficiently close to the Liar paradox statement that one can fruitfully run the analysis, but instead of a contradiction, what one gets is that $\psi$ is true, but unprovable. This is how Goedel proved the Incompleteness Theorem.
|
{
"source": [
"https://mathoverflow.net/questions/27681",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3441/"
]
}
|
27,693 |
I hope this question is suitable; this problem always bugs me. It is an issue of mathematical orthography. It is good praxis, recommended in various essays on mathematical writing, to capitalize theorem names when recalling them: for instance one may write "thanks to Theorem 2.4" or "using ii) from Lemma 1.2.1" and so on. Should these names be capitalized when they appear unnumbered? For instance which of the following is correct? "Using the previous Lemma we deduce..." versus "Using the previous lemma we deduce..." "The proof of Lemma 1.3 is postponed to next Section." versus "The proof of Lemma 1.3 is postponed to next section."
|
In English, proper nouns are capitalized. The numbered instances you mention are all usages as proper nouns, but merely referring to a lemma or corollary not by its name is not using a proper noun, and so is uncapitalized. Thus, for example, one should write about the lemma before Theorem 1.2 having a proof similar to Lemma 5, while the main corollary of Section 2 does not. Edit. Well, I've become conflicted. The Chicago Manual of Style, which I have always taken as my guide in such matters, asserts in item 7.136 that "the word chapter is lowercased and spelled out in text". And in 7.141 they favor act 3 and scene 5 in words denoting parts of poems and plays. This would seem to speak against Section 2 and possibly against Theorem 1.2. In 7.135 they say that common titles such as foreword, preface, introduction, contents, etc. are lowercased, as in "Allan Nevins wrote the foreword to...". This may also be evidence against Theorem 1.2. But in 7.147 they favor Piano Sonata no. 2, which may be evidence in favor of Theorem 1.2. But they don't treat mathematical writing explicitly, and now I am less sure of what I have always believed, above. I do note that the CMS text itself refers to "fig. 1.2" and "figure 9.3", and not Figure 1.2, which would clearly speak against Theorem 1.2. So I am afraid that I may have to change my mind about this.
|
{
"source": [
"https://mathoverflow.net/questions/27693",
"https://mathoverflow.net",
"https://mathoverflow.net/users/828/"
]
}
|
27,749 |
Many famous results were discovered through non-rigorous proofs, with
correct proofs being found only later and with greater difficulty. One that is well
known is Euler's 1737 proof that $1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots =\frac{\pi^2}{6}$ in which he pretends that the power series for $\frac{\sin\sqrt{x}}{\sqrt{x}}$
is an infinite polynomial and factorizes it from knowledge of its roots. Another example, of a different type, is the Jordan curve theorem. In this case,
the theorem seems obvious, and Jordan gets credit for realizing that it requires
proof. However, the proof was harder than he thought, and the first rigorous proof
was found some decades later than Jordan's attempt. Many of the basic theorems of topology are like this. Then of course there is Ramanujan, who is in a class of his own when it
comes to discovering theorems without proving them. I'd be interested to see other examples, and in your thoughts on what the examples reveal about the connection between discovery and proof. Clarification. When I posed the question I was hoping for some explanations for the gap between discovery and proof to emerge, without any hinting from me. Since this hasn't happened much yet, let me suggest some possible explanations that I had in mind: Physical intuition. This lies behind results such as the Jordan curve theorem, the Riemann mapping theorem, and Fourier analysis. Lack of foundations. This accounts for the late arrival of rigor in calculus, topology, and (?) algebraic geometry. Complexity. Hard results cannot be proved correctly the first time, only via a
series of partially correct, or incomplete, proofs. Example: Fermat's last theorem. I hope this gives a better idea of what I was looking for. Feel free to edit your
answers if you have anything to add.
|
In 1905, Lebesgue gave a "proof" of the following theorem: If $f:\mathbb{R}^2\to\mathbb{R}$ is a Baire function such that for every $x$ , there is a unique $y$ such that $f(x,y)=0$ , then the thus implicitly defined function is Baire. He made use of the "trivial fact" that the projection of a Borel set is a Borel set. This turns out to be wrong, but the result is still true. Souslin spotted the mistake, and named continuous images of Borel sets analytic sets. So a mistake of Lebesgue led to the rich theory of analytic sets. Lebesgue
seemingly enjoyed this fact and mentioned it in the foreword to a book, "Leçons sur les Ensembles Analytiques et leurs Applications", of Souslin's teacher Lusin (as referenced in an AMS review of the book).
|
{
"source": [
"https://mathoverflow.net/questions/27749",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1587/"
]
}
|
27,755 |
Knuth's intuition that Goldbach's conjecture (every even number greater than 2 can be written as a sum of two primes) might be one of the statements that can neither be proved nor disproved really puzzles me. (See page 321 of http://www.ams.org/notices/200203/fea-knuth.pdf ) All the examples of such statements I have heard until now were very abstract, but this one is so concrete. My question is that is there such an example of a statement of this sort that is proved to be unprovable, i.e., for some property $P(n)$ , the statement that "every natural number $n$ satisfies $P(n)$ ". (In the Goldbach case $P(n)$ would be "if $n$ is an even number greater than $2$ , then there exists two primes $p$ and $q$ such that $n = p + q$ ".) If Knuth is right, it would be very interesting in one sense: the negation of Goldbach is obviously provable if it is true. So if someone proves that Goldbach is not provable, then we would know that Goldbach is true. We would be sure that someone would never come up with an example that would violate the condition. For the practical person, this is as good as it is proven. Edit: I have learned a lot from the answers, thank you so much!
|
When we say that a statement is undecidable, this is implicitly with regard to some axiom system, such as Peano arithmetic or ZFC. The reason why statements about arithmetic can be undecidable is that there exist inequivalent models of arithmetic that obey such axiom systems (this is a consequence of the Godel completeness and incompleteness theorems, assuming of course that the axiom system being used is consistent and recursively enumerable). For instance, the standard or "true" natural numbers obey the Peano axioms, but so do some more exotic number systems (e.g. the nonstandard natural numbers). It is conceivable that a given statement in arithmetic, such as Goldbach's conjecture, holds for the true natural numbers but not for some other number systems that obey the Peano axioms, though personally I view this possibility as remote. (Note that the nonstandard natural numbers satisfy exactly the same first-order statements as the standard natural numbers, by Los's theorem, but one can also construct other, weirder, models of arithmetic which have a genuinely different theory.) So, if Goldbach's conjecture is undecidable for some given axiom system, what this would imply is that every "true" even natural number larger than 4 is the sum of two primes (otherwise we could disprove Goldbach with an argument of finite length), but that there exists a more exotic number system (larger than the true natural numbers) obeying those axioms which contains some even exotic number that is not the sum of two (exotic) primes. (Note that the length of a proof has to be true natural number, rather than an exotic one, so the existence of an exotic counterexample cannot be directly converted to a disproof of Goldbach.) This is an unlikely scenario, but not an a priori impossible one (as one can see from the example of Goodstein's theorem or the Paris-Harrington theorem). [Edit: here, as usual, when talking about things like the "true" natural numbers, one has to work in some external reasoning system which may be distinct from the reasoning system one is analysing the decidability of. For instance, one may be using ZFC as one's external reasoning system to analyse what can and cannot be decided in Peano arithmetic, with the true natural numbers then being constructed by, say, the von Neumann ordinal construction combined with the axiom of infinity. Or one could be using an informal external reasoning system, based perhaps on some Platonic beliefs about mathematical objects, that is not explicitly axiomatised. It's best to keep the external system conceptually distinct from any internal ones being studied, as one can get hopelessly confused otherwise.]
|
{
"source": [
"https://mathoverflow.net/questions/27755",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1229/"
]
}
|
27,785 |
If you have an infinite set X of cardinality k, then what is the cardinality of Sym(X) - the group of permutations of X ?
|
$k^k$. It is easy to see that this is an upper bound. For the lower bound, split $X$ into two equinumerous subsets; there are $\ge k^k$ permutations swapping the two subsets (see the sketch below).
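One way to fill in the lower bound in detail (a standard argument, written out here for convenience; the cardinal arithmetic uses only ZFC basics):

```latex
% Upper bound: a permutation of X is in particular a function X -> X, so
%   |\mathrm{Sym}(X)| \le k^k.
% Lower bound: write X = A \sqcup B with |A| = |B| = k and fix a bijection f : A \to B.
% For each subset S \subseteq A define \sigma_S \in \mathrm{Sym}(X) by
%   \sigma_S(a) = f(a), \quad \sigma_S(f(a)) = a \quad (a \in S), \qquad \sigma_S = \mathrm{id} \text{ elsewhere.}
% Distinct S give distinct \sigma_S, so |\mathrm{Sym}(X)| \ge 2^k.  Finally
%   2^k \le k^k \le (2^k)^k = 2^{k \cdot k} = 2^k,
% so |\mathrm{Sym}(X)| = k^k = 2^k.
```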
|
{
"source": [
"https://mathoverflow.net/questions/27785",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3537/"
]
}
|
27,851 |
Here is a question someone asked me a couple of years ago. I remember having spent a day or two thinking about it but did not manage to solve it. This may be an open problem, in which case I'd be interested to know the status of it. Let $f$ be a one variable complex polynomial. Supposing $f$ has a common root with every $f^{(i)},i=1,\ldots,\deg f-1$, does it follow that $f$ is a power of a degree 1 polynomial? upd: as pointed out by Pedro, this is indeed a conjecture (which makes me feel less badly about not being able to do it). But still the question about its status remains.
|
That is known as the Casas-Alvero conjecture. See, for instance, https://arxiv.org/abs/math/0605090 (not sure of its current status, though).
|
{
"source": [
"https://mathoverflow.net/questions/27851",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2349/"
]
}
|
27,971 |
Cayley's theorem makes groups nice: a closed set of bijections is a group and a group is a closed set of bijections- beautiful, natural and understandable canonically as symmetry. It is not so much a technical theorem as a glorious wellspring of intuition- something, at least from my perspective, that rings are missing; and I want to know why. Certainly the axiom system is more complicated- so there is no way you're going to get as simple a characterisation as you do with groups- but surely there must be some sort of universal object for rings of a given cardinality, analogous to the symmetric group in group theory. I would be surprised if it was a ring- the multiplicative and additive properties of a ring could be changed (somewhat) independently of one another- but perhaps a fibration of automorphisms over a group? If so is there a natural(ish) way of interpreting it? Perhaps it's possible for a certain subclass of rings, perhaps it's possible but useless, perhaps it's impossible for specific reasons, in which case: the more specific the better. Edit: So Jack's answer seems to have covered it (and quickly!): endomorphisms of abelian groups is nice! But can we do better? Is there a chance that 'abelian' can be unwound to the extent we can make this about sets again- or is that too much to hope for?
|
Every (associative, unital) ring is a subring of the endomorphism ring of its underlying additive group. Rings act on abelian groups; groups act on sets. The universal action on an abelian group is its endomorphism ring; the universal action on a set is the symmetric group. Modules are rings that remember their action on an abelian group; permutation groups are groups that remember their action on a set. A set is determined by its cardinality, but for abelian groups cardinality is not a very useful invariant. Rather than "order" of a ring, consider the isomorphism class of its underlying additive group. This is even commonly done in the finite ring case, where the order still has some mild control, but not as much as the isomorphism type of the additive group.
|
{
"source": [
"https://mathoverflow.net/questions/27971",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5869/"
]
}
|
28,054 |
The group $Diff(S^n)$ ($C^\infty$-smooth diffeomorphisms of the $n$-sphere) has many interesting subgroups. But one question I've never seen explored is what are its "big" finite-dimensional subgroups? For example, $Diff(S^n)$ contains a finite-dimensional Lie subgroup of dimension $n+2 \choose 2$, the subgroup of conformal automorphisms of $S^n$. Similarly it contains a compact Lie subgroup of dimension $n+1 \choose 2$, the isometry group of $S^n$. Is it known that: 1) A finite-dimensional Lie subgroup of $Diff(S^n)$ having dimension at least $n+2 \choose 2$ is conjugate to a subgroup of the conformal automorphism group of $S^n$ ? (Answer, no, see Algori's answer below). Modified question: As Algori notes, $GL_{n+1}(\mathbb R) / \mathbb R_{>0}$ acts on $S^n$ and has dimension $n^2+2n$. So is a finite-dimensional Lie subgroup of $Diff(S^n)$ of dimension $n^2+2n$ (or larger) conjugate to a subgroup of this group? 2) A compact Lie subgroup of $Diff(S^n)$ having dimension at least $n+1 \choose 2$ is conjugate to a subgroup of the isometry group of $S^n$ ? (Answer: Yes, see Torsten Ekedahl's post below) For example, arbitrary compact subgroups of $Diff(S^n)$ do not have to be conjugate to subgroups of the above two groups -- perhaps the earliest examples of these came from exotic projective and lens spaces. But I have little sense for how high-dimensional these "exotic" subgroups of $Diff(S^n)$ can be.
|
You can make big Lie groups act effectively on small manifolds by cheating: make the group a product of groups, with each factor acting by compactly supported diffeomorphisms on a different disjoint open subset. So the additive group $\mathbb R^N$ becomes a subgroup of $Diff(S^1)$ by flowing along $N$ commuting vector fields supported in $N$ disjoint arcs. (added later) A similar cheat: let $a_1,\dots ,a_N$ be linearly independent functions of one variable. Then $a_1(x)\frac{\partial}{\partial y},\dots ,a_N(x)\frac{\partial}{\partial y}$ are independent commuting vector fields in the $x,y$ plane. Modify this example to make it compactly supported if you like. (added still later) My proposed extension of the first cheat to a semisimple group (comment thread of Torsten's answer) is doomed: Choose a point on the circle and choose an element of SL_2(R) that fixes this point and acts on the tangent space there with eigenvalue c>1. Lifting the group element to the universal covering group in the right way, you get an element g of the latter group that fixes all the points above the given point in the universal covering space of the circle, in each case with eigenvalue c. But now if this line with this action could be embedded in a longer line with trivial action outside then there would be a sequence of fixed points of g with eigenvalue c converging to a fixed point of g with eigenvalue 1, contradiction.
|
{
"source": [
"https://mathoverflow.net/questions/28054",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1465/"
]
}
|
28,056 |
Question. Given a Turing-machine program $e$, which
is guaranteed to run in polynomial time, can we computably
find such a polynomial? In other words, is there a
computable function $e\mapsto p_e$, such that whenever $e$
is a Turing-machine program that runs in polynomial time,
then $p_e$ is such a polynomial time bound? That is, $p_e$
is a polynomial over the integers in one variable and
program $e$ on every input $n$ runs in time at most
$p_e(|n|)$, where $|n|$ is the length of the input $n$. (Note that I impose no requirement on $p_e$ when $e$ is not
a polynomial-time program, and I am not asking whether the
function $e\mapsto p_e$ is polynomial-time computable, but
rather, just whether it is computable at all.) In the field of complexity theory, it is common to treat
polynomial-time algorithms as coming equipped with an
explicit polynomial clock, that counts steps during the
computation and forces a halt when expired. This convention
allows for certain conveniences in the theory. In the field
of computability theory, however, one does not usually
assume that a polynomial-time algorithm comes equipped with
such a counter. My question is whether we can computably
produce such a counter just from the Turing machine
program. I expect a negative answer. I think there is no such
computable function $e\mapsto p_e$, and the question is
really about how we are to prove this. But I don't know... Of course, given a program $e$, we can get finitely many
sample points for a lower bound on the polynomial, but this
doesn't seem helpful. Furthermore, it seems that the lesson
of Rice's Theorem is
that we cannot expect to compute nontrivial information by
actually looking at the program itself, and I take this as
evidence against an affirmative answer. At the same time,
Rice's theorem does not directly apply, since the
polynomial $p_e$ is not dependent on the set or function
that $e$ computes, but rather on the way that it computes
it. So I'm not sure. Finally, let me mention that this question is related to and
inspired by this recent interesting MO question about the
impossibility of converting NP algorithms to P
algorithms .
Several of the proposed answers there hinged critically on
whether the polynomial-time counter was part of the input
or not. In particular, an affirmative answer to the present
question leads to a solution of that question by those
answers. My expectation, however, is for a negative answer
here and an answer there ruling out a computable
transformation.
|
[Edit: A bug in the original proof has been fixed, thanks to a comment by Francois Dorais.] The answer is no. This kind of thing can be proved by what I call a "gas tank" argument. First enumerate all Turing machines in some manner $N_1, N_2, N_3, \ldots$ Then construct a sequence of Turing machines $M_1, M_2, M_3, \ldots$ as follows. On an input of length $n$, $M_i$ simulates $N_i$ (on empty input) for up to $n$ steps. If $N_i$ does not halt within that time, then $M_i$ halts immediately after the $n$th step. However, if $N_i$ halts within the first $n$ steps, then $M_i$ "runs out of gas" and starts behaving erratically, which in this context means (say) that it continues running for $n^e$ steps before halting where $e$ is the number of steps that $N_i$ took to halt. Now if we had a program $P$ that could compute a polynomial upper bound on any polytime machine, then we could determine whether $N_i$ halts by calling $P$ on $M_i$, reading off the exponent $e$, and simulating $N_i$ for (at most) $e$ steps. If $N_i$ doesn't halt by then, then we know it will never halt. Of course this proof technique is very general; for example, $M_i$ can be made to simulate any fixed $M$ as long as it has gas but then do something else when it runs out of gas, proving that it will be undecidable whether an arbitrary given machine behaves like $M$.
|
{
"source": [
"https://mathoverflow.net/questions/28056",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1946/"
]
}
|
28,088 |
As usual I expect to be criticised for "duplicating" this question. But I do not! As Gjergji immediately noted, that question was from numerology. The one I ask you here (after putting it in my response) is a mathematics
question motivated by Kevin's (O'Bryant) comment to the earlier post. Problem. For any $\epsilon>0$, there exists an $n$ such that
$\|n/\log(n)\|<\epsilon$ where $\|\ \cdot\ \|$ denotes the
distance to the nearest integer. In spite of the simple formulation, it is likely that
the diophantine problem is open. I wonder whether it follows
from some known conjectures (for example, Schanuel's conjecture).
|
If $f(x)=\frac{x}{\log x}$, then $f'(x)=\frac{1}{\log x} - \frac{1}{(\log x)^2}$, which tends to zero as $x\rightarrow \infty$. Choose some large real number $x$ for which $f(x)$ is integral. Then the value of $f$ on any integer near $x$ must be very close to integral.
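To make the last step explicit (a routine mean value theorem estimate, added for completeness):

```latex
% f(x) = x/\log x is continuous and tends to \infty, so for every large integer m there
% is a real x_m with f(x_m) = m.  Let n_m be the integer nearest to x_m, so |n_m - x_m| \le 1/2.
% By the mean value theorem, for x_m large,
%   \| n_m/\log n_m \| \;\le\; |f(n_m) - f(x_m)|
%                      \;\le\; \tfrac12 \sup_{t \in [x_m-1,\, x_m+1]} |f'(t)|
%                      \;\le\; \frac{1}{2\log (x_m - 1)} \;\longrightarrow\; 0,
% using 0 < f'(t) = \frac{1}{\log t} - \frac{1}{(\log t)^2} < \frac{1}{\log t} for t > e.
```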
|
{
"source": [
"https://mathoverflow.net/questions/28088",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4953/"
]
}
|
28,147 |
I was helping a student study for a functional analysis exam and the question came up as to when, in practice, one needs to consider the Banach space $L^p$ for some value of $p$ other than the obvious ones of $p=1$ , $p=2$ , and $p=\infty$ . I don't know much analysis and the best thing I could think of was Littlewood's 4/3 inequality. In its most elementary form, this inequality states that if $A = (a_{ij})$ is an $m\times n$ matrix with real entries, and we define the norm $$\|A\| = \sup\biggl(\left|\sum_{i=1}^m \sum_{j=1}^n a_{ij}s_it_j\right| : |s_i| \le 1, |t_j| \le 1\biggr)$$ then $$\biggl(\sum_{i,j} |a_{ij}|^{4/3}\biggr)^{3/4} \le \sqrt{2} \|A\|.$$ Are there more convincing examples of the importance of "exotic" values of $p$ ? I remember wondering about this as an undergraduate but never pursued it. As I think about it now, it does seem a bit odd from a pedagogical point of view that none of the textbooks I've seen give any applications involving specific values of $p$ . I didn't run into Littlewood's 4/3 inequality until later in life. [Edit: Thanks for the many responses, which exceeded my expectations! Perhaps I should have anticipated that this question would generate a big list; at any rate, I have added the big-list tag. My choice of which answer to accept was necessarily somewhat arbitrary; all the top responses are excellent.]
|
Huge chunks of the theory of nonlinear PDEs rely critically on analysis in $L^p$-spaces. Let's take the 3D Navier-Stokes equations for example. Leray proved in 1933 existence of a weak solution to the corresponding Cauchy problem with initial data from the space $L^2(\mathbb R^3)$. Unfortunately, it is still a major open problem whether the Leray weak solution is unique. But if one chooses the initial data from $L^3(\mathbb R^3)$, then Kato showed that there is a unique strong solution to the Navier-Stokes equations (which is known to exist locally in time). $L^3$ is the "weakest" $L^p$-space of initial data which is known to give rise to unique solutions of the 3D Navier-Stokes. In some cases the structure of the equations suggests the choice of $L^p$ as the most natural space to work in. For instance, many equations stemming from non-Newtonian fluid dynamics and image processing involve the $p$-Laplacian
$\nabla\cdot\left(|\nabla u|^{p-2}\nabla u\right)$ with $1 < p < \infty.$ Here the $L^p$-space
and $L^p$-based Sobolev spaces provide a natural framework to study well-posedness and regularity issues. Yet another example from harmonic analysis (which goes back to Paley and Zygmund, I think). Let
$$F(x,\omega)=\sum\limits_{n\in\mathbb Z^d} g_n(\omega)c_ne^{inx},\quad x\in \mathbb T^d,$$
where $g_n$ is a sequence of independent normalized Gaussians and $(c_n)$ is a non-random element of $l^2(\mathbb Z^d)$. Then the function $F$ belongs almost surely to every $L^p(\mathbb T^d)$, $2\leq p <\infty$, and almost surely does not belong to $L^{\infty}(\mathbb T^d)$. There have been very recent applications of this result to the existence of solutions to nonlinear Schrödinger equations with random initial data (due to Burq, Gérard, Tzvetkov et al).
|
{
"source": [
"https://mathoverflow.net/questions/28147",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
28,265 |
In the common Hodge theory books, the authors usually cite other sources for the theory of elliptic operators, like in the Book about Hodge Theory of Claire Voisin, where you find on page 128 the Theorem 5.22: Let P : E → F be an elliptic differential operator on a compact manifold. Assume that E and F are of the same rank, and are equipped with metrics. Then Ker P is a finite-dimensional subspace of the smooth sections of E and Im P is a closed subspace of finite codimension of the smooth sections of F and we have an L²-orthogonal direct sum decomposition: smooth sections of E = Ker P ⊕ Im P* (where P* is the formal L²-adjoint of P) In the case of Hodge Theory, we consider the elliptic self-adjoint operator Δ, the Laplacian (or: Hodge-Laplacian, or Laplace-Beltrami operator). A proof for this theorem is in Wells' "Differential Analysis on Complex Manifolds", Chapter IV - but it takes about 40 pages, which is quite some effort! Now that I'm learning the theory of elliptic operators (in part, because I want to patch this gap in my understanding of Hodge Theory), I wonder if this "functional analysis" is really always necessary . Do you know of any class of complex manifolds (most likely some restricted class of complex projective varieties) where you can get the theorem above without using the theory of elliptic operators (or at least, where you can simplify the proofs that much that you don't notice you're working with elliptic operators) ? Maybe the general theorem really requires functional analysis (I think so), but the Hodge decomposition might follow from other arguments . I would be very happy to see some arguments proving special cases of Hodge decomposition on, say, Riemann surfaces. I would be even happier to hear why this is implausible (this would motivate me to learn more about these fascinating elliptic differential operators). If this ends up being argumentative and subjective, feel free to use the community wiki hammer.
|
The hard part of the proof of the Hodge decomposition (which is where the serious functional analysis is used) is the construction of the Green's operator. In Section 1.4 of Lange and Birkenhake's "Complex Abelian Varieties", they prove the Hodge decomposition for complex tori using an easy Fourier series argument to construct the Green's operator.
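To indicate the flavour of that Fourier-series argument (a sketch under standard flat-metric normalizations, not Lange and Birkenhake's exact wording):

```latex
% On a complex torus X = V/\Lambda with a translation-invariant metric, smooth functions
% (and, coefficient-wise, smooth forms) expand into the characters
%   e_\ell(x) = e^{2\pi i \langle \ell, x \rangle}, \qquad \ell \in \Lambda^*,
% and the flat Laplacian acts diagonally: \Delta e_\ell = -4\pi^2 \|\ell\|^2 e_\ell.
% One can therefore define the Green's operator termwise,
%   G\Bigl(\sum_\ell a_\ell e_\ell\Bigr) \;=\; \sum_{\ell \ne 0} \frac{-a_\ell}{4\pi^2 \|\ell\|^2}\, e_\ell ,
% and the rapid decay of the Fourier coefficients of a smooth section shows that G again
% produces a smooth section.  Then f = a_0 + \Delta G f is the Hodge decomposition into a
% harmonic (here: constant) part plus \Delta(\text{something}), with no hard elliptic theory.
```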
|
{
"source": [
"https://mathoverflow.net/questions/28265",
"https://mathoverflow.net",
"https://mathoverflow.net/users/956/"
]
}
|
28,268 |
I often hear the advice, "Read the masters" (i.e., read old, classic texts by great mathematicians). But frankly, I have hardly ever followed it. What I am wondering is, is this a principle that people give lip service to because it sounds good, but which is honored in the breach more than in the observance? If not, which masterworks have you found to be most enlightening? To keep the question focused, let me lay down some ground rules. List only papers/books from the 19th century or earlier. I recognize that this is an arbitrary cutoff but I want to draw a line somewhere. It must be something that you personally have read in its entirety (or almost in its entirety). I'm not really interested in secondhand evidence ("So-and-so says that X is a must-read"). You must have acquired important mathematical insights (not just historical insights) from the paper/book that you feel that you would never have acquired had you restricted your reading to 20th-century and 21st-century literature. It's not enough, for my purposes, that you found the paper/book "interesting" but not really essential. If possible, briefly describe these insights in your response. [Edit: In response to a comment that suggested that I have set the bar impossibly high, let me violate one of my own ground rules and point to this discussion on the $n$ -Category Cafe that gives some secondhand examples. That discussion should also help to clarify what I am asking for more examples of.]
|
I agree 100% with Igor and Andrew L., on the benefit of reading the creator's version of the same thing available from later expositors. I have gained mathematical insights from reading Euclid, Archimedes, Riemann, Gauss, Hurwitz, Wirtinger, as well as moderns like Zariski, on topics I already thought I understood. Just Euclid's use of the word "measures" for "divides" finally made clear to me the elementary argument that the largest number dividing 2 integers is also the smallest positive number one can measure using both of them. This is clear when thinking of (commensurable) measuring sticks, since by translating it is obvious the set of lengths that one can so measure is equally spaced, hence the smallest one would measure them all. I was unaware also that Euclid's characterization of a tangent line to a circle was not just that it is perpendicular to the radius, but is the only line meeting the circle locally once and such that changing its angle ever so little produces a second intersection, i.e. Newton's definition of a tangent line. It is said Newton read Euclid just before giving his own definition. I did not realize until reading Archimedes that the "Cavalieri principle" follows just from the definition of the Riemann integral, without needing the fundamental theorem of calculus. I.e. it follows just from the definition of a volume as a limit of approximating slices, and was known to Archimedes. Hence one can conclude all the usual volume formulas for pyramids, cones, spheres, even the bicylinder, just by starting from the decomposition of a cube into three right pyramids, applying Cavalieri to vary the angle of the pyramid, then approximating and using Cavalieri. It is an embarrassment to me that I had thought the volume of a bicylinder a more difficult calculus problem than that for a sphere, when it follows immediately from comparing horizontal slices of a double square based pyramid inscribed in a cube. I.e. by Cavalieri and the Pythagorean theorem, the volume of a sphere is the difference between the volumes of a cylinder and an inscribed double cone. The same argument shows the volume of a bicylinder is the difference between the volumes of a cube and an inscribed double square based pyramid. This led to an intuitive understanding of the simple relation between the volumes of certain inscribed figures that I then noticed had been recently studied by Tom Apostol. I realized this summer that this allows a computation of the volume of the 4 dimensional ball. I.e. this ball results from revolving half a 3 ball, hence can be calculated by revolving a cylinder and subtracting the volume of revolving a cone. Since Archimedes knew the center of gravity of both those solids he knew this. Having read everywhere that Hurwitz's theorem was that the maximum number of automorphisms of a Riemann surface of genus $g$ is $84(g-1),$ I had a difficult proof that the maximum number in genus $5$ is $192,$ using Jacobians, Prym varieties, and classifications of representations of planar groups, until Macbeath referred me to Hurwitz' original paper where a complete list of the possible orders was easily given: $84(g-1), 48(g-1),\ldots$ I subsequently explained this easy argument to some famous mathematical figures. Sometime later a more complicated such example for which Macbeath himself was usually credited was found also to occur in the 19th-century literature. 
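To spell out the slice comparison with Cavalieri mentioned above (a routine check; I fix the radius $r$ and slice at height $z$): the sphere, the circumscribed cylinder and the inscribed double cone have cross-sectional areas $\pi(r^2-z^2)$, $\pi r^2$ and $\pi z^2$, while the bicylinder, the circumscribed cube and the inscribed double square pyramid have cross-sectional areas $4(r^2-z^2)$, $4r^2$ and $4z^2$. Cavalieri then gives
$$V_{\mathrm{sphere}}=2\pi r^3-\tfrac{2}{3}\pi r^3=\tfrac{4}{3}\pi r^3,\qquad V_{\mathrm{bicylinder}}=8r^3-\tfrac{8}{3}r^3=\tfrac{16}{3}r^3,$$
with no fundamental theorem of calculus anywhere.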
Having studied Riemann surfaces all my life, but unable to read German well, I thought I had acquired some grasp of the Riemann Roch theorem; in particular I thought Riemann had given only an inequality $\ell(D) \geq 1-g + \deg(D).$ When the translation from Kendrick Press became available, I learned he had written down a linear map whose kernel computed $\ell(D),$ and the estimate derived from the fundamental theorem of linear algebra. The full equality also follows, but only if one can compute the cokernel as well. That cokernel of course was already shown by him to be what we now call $H^1(D).$ Hence Riemann's original theorem was the so-called "index" version of RR. Since he expressed his map in terms of path integrals, it was natural to evaluate those integrals by residue calculus as Roch did. This is explained in my answer to "why is Riemann Roch [not precisely] an index problem?" Although there are many fine modern expositions of Riemann Roch, the most insightful perhaps being that in the chapter on Riemann surfaces in Griffiths and Harris, I had not seen how simple it was until reading Riemann. Perhaps this is only historical knowledge, but reading Riemann one sees that he also knew completely how to prove (index) Riemann Roch for algebraic plane curves, without appealing to the questionable Dirichlet principle, hence the usual impression that a rigorous proof had to await later arguments of Clebsch, Hilbert, or Brill and Noether, is incorrect. Reading Wirtinger's 19th century paper on theta functions, even though unfortunately for me only available in the original German, I learned that when a smooth Riemann surface acquires a singularity, the elementary holomorphic differential with a nonzero period around that vanishing cycle becomes meromorphic, and that period becomes the residue at the singular point. At last this explains clearly why one defines "dualizing differentials" as one does, in algebraic geometry. Once as a grad student in Auslander's algebraic geometry class, I vowed to try out Abel's advice and read the master Zariski's paper on the concept of a simple point. I was very discouraged when several hours passed and I had managed only a few pages. Upon returning to class, Auslander began to pepper us with questions about regular local rings. I found out how much I had learned when I answered them all easily until he literally told me to be quiet, since I obviously knew the subject cold. (To be honest, I did not know the very next question he posed, but I was off the hook.) In my answer to a question about where to learn sheaf cohomology I have given an example of insight only contained in Serre's original paper. The sense of wonder and awe one gets upon reading people like Riemann or Euler is also quite wonderful. Any student who has struggled to compute the sum of the even powers of the reciprocals of natural numbers $1/n^{2k}$ will be amazed at Euler's facile accomplishment of this for many values of $k.$ Calculus students estimating $\pi$ by the usual series to 3 or 4 places will also be impressed at his scores of correct digits. On the other hand, anyone using a modern computer can detect an actual error in his expansion of $\pi,$ I forget where, in the 214th place? but an error which was already noticed long ago. As you can see these are elementary examples hence from a fairly naive and uneducated person, myself, who has not at all plumbed the depth of many original papers. 
But these few forays have definitely convinced me there is a benefit that cannot be gained elsewhere, as these exposures can transform the understanding of ordinary mortals closer to that of more knowledgeable persons, at least in a narrow vein. So while it might be thought that only the strongest mathematicians can attempt these papers, my advice would be that reading such masters may be even more helpful to us average students. As a remark on criterion 2 of the original question, I find it is not at all necessary to read all of a paper by a master to get some insight. One word in Euclid enlightened me, and before the translation came out, I had already gained most of my understanding of Riemann's argument for RR just from reading the headings of the paragraphs. I learned a proof of RR for plane curves from reading only the introduction to a paper of Fulton. A single sentence of Archimedes, that a sphere is a cone with vertex at the center and base equal to the surface, makes it clear the volume is $1/3$ the radius times the surface area. Moreover this shows the same ratio holds for a bicylinder, whereas the area of a bicylinder is considered so difficult we do not even ask it of calculus students. So one should not be discouraged by the difficulty of reading all of a master's paper, although of course it wouldn't hurt. A remark on the definition of master, versus creator. There are cases where a later master re-examines an earlier work and adds to it, and in these cases it seems valuable to read both versions. In addition to the examples given above of Newton generalizing Euclid and Mumford using Hilbert, perhaps Mumford's demonstration of the power of Grothendieck's Riemann Roch theorem in calculating invariants of the moduli space of curves is relevant. A related question occurs in many cases since the classical arguments of the "ancients" are preserved but only in classical texts such as Van der Waerden in algebra, and newer books have found slicker methods to avoid them. E.g. the method of Lagrange resolvents is useful in Galois theory for proving an extension of prime degree in characteristic zero is radical. There are faster, less precise methods of showing this such as Artin/Dedekind's method of independence of characters, but the older method is useful when trying to use Galois theory to actually write down solution formulas for cubics and quartics. Thus today we often have an intermediate choice of reading modern expositions which reproduce the methods of the creators, or ones that avoid them, sometimes losing information. (This is discussed in the math 844-2 algebra notes on my web page, where, being a novice, I give all competing methods of proof.)
|
{
"source": [
"https://mathoverflow.net/questions/28268",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
28,415 |
Let $X, Y$ be normed vector spaces, where $X$ is infinite dimensional. Does there exist a linear map $T : X \rightarrow Y$ and a subset $D$ of $X$ such that $D$ is dense in $X$ , $T$ is bounded in $D$ (i.e. $\sup _{x\in D, x \neq 0} \frac{\|Tx\|}{\|x\|}<\infty $ ), but $T$ is not bounded in $X$ ?
|
Matthew's answer reminded me of a fact that makes this easy: if $X$ is a normed space (say, over $\mathbb{R}$) and $f : X \to \mathbb{R}$ is a linear functional, then its kernel $\ker f$ is either closed or dense in $X$, depending on whether or not $f$ is continuous (i.e. bounded). The proof is trivial: $\ker f$ is a subspace of $X$ of codimension 1. Its closure is a subspace that contains it, so must either be $\ker f$ or $X$. And of course, a linear functional is continuous iff its kernel is closed. This is Proposition III.5.2-3 in Conway's A Course in Functional Analysis . So let $f$ be an unbounded linear functional on $X$ (which one can always construct as in Matthew's example), and take $D = \ker f$. $D$ is dense by the above fact, and $f$ is identically zero on $D$.
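To make this concrete (a standard construction of my own choosing, not taken from the answer referenced above): let $X=c_{00}$ be the space of finitely supported sequences with the sup norm, and define
$$f\big((a_n)_{n\ge1}\big)=\sum_{n\ge1}n\,a_n.$$
Then $f$ is linear but unbounded, since $f(e_n)=n$ while $\|e_n\|_\infty=1$. Taking $Y=\mathbb{R}$, $T=f$ and $D=\ker f$, the map $T$ vanishes identically on the dense subspace $D$ yet is unbounded on $X$. (On a Banach space one needs a Hamel basis, hence the axiom of choice, to produce an unbounded functional.)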
|
{
"source": [
"https://mathoverflow.net/questions/28415",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4928/"
]
}
|
28,421 |
It is well known that if $M, \Omega$ is a symplectic manifold then the Poisson bracket gives $C^\infty(M)$ the structure of a Lie algebra. The only way I have seen this proven is via a calculation in canonical coordinates, which I found rather unsatisfying. So I decided to try to prove it just by playing around with differential forms. I got quite far, but something isn't working out and I am hoping someone can help. Forgive me in advance for all the symbols. Here is the setup. Given $f \in C^\infty(M)$ , let $X_f$ denote the unique vector field which satisfies $\Omega(X_f, Y) = df(Y) = Y(f)$ for every vector field $Y$ . We define the Poisson bracket of two functions $f$ and $g$ to be the smooth function $\{f, g \} = \Omega(X_f, X_g)$ . I can show that the Poisson bracket is alternating and bilinear, but the Jacobi identity is giving me trouble. Here is what I have. To start, let's try to get a handle on $\{ \{f, g \}, h\}$ . Applying the definition, this is given by $d(\Omega(X_f, X_g))X_h$ . So let's try to find an expression for $d(\Omega(X,Y))Z$ for arbitrary vector fields $X, Y, Z$ . Write $\Omega(X,Y) = i(Y)i(X)\Omega$ where $i(V)$ is the interior product by the vector field $V$ . Applying Cartan's formula twice and using the fact that $\Omega$ is closed, we obtain the formula $$d(\Omega(X,Y)) = (L_Y i(X) - i(Y) L_X) \Omega$$ where $L_V$ is the Lie derivative with respect to the vector field $V$ . Using the identity $L_V i(W) - i(W) L_V = i([V,W])$ , we get: $$(L_Y i(X) - i(Y) L_X) = L_Y i(X) - L_X i(Y) + i([X,Y])$$ Now we plug in the vector field $Z$ . We get $(L_Y i(X) \Omega)(Z) = Y(\Omega(X,Z)) - \Omega(X,[Y,Z])$ by the definition of the Lie derivative, and clearly $(i([X,Y])\Omega)(Z) = \Omega([X,Y],Z)$ . Putting it all together: $$d(\Omega(X,Y))Z = Y(\Omega(X,Z)) - X(\Omega(Y,Z)) + \Omega(Y, [X,Z]) - \Omega(X, [Y,Z]) + \Omega([X,Y], Z)$$ This simplifies dramatically in the case $X = X_f, Y = X_g, Z = X_h$ . The difference of the first two terms simplifies to $[X_f, X_g](h)$ , and we get: $$
\begin{align}
\{\{f, g\}, h\} &= [ X_f, X_g ](h) + [ X_f, X_h ](g) - [ X_g, X_h ](f) - [ X_f, X_g ](h)\\
&= [ X_f, X_h ](g) - [ X_g, X_h ](f)
\end{align}$$ However, this final expression does not satisfy the Jacobi identity. It looks at first glance as though I just made a sign error somewhere; if the minus sign in the last expression were a plus sign, then the Jacobi identity would follow immediately. I have checked all of my signs as thoroughly as I can, and additionally I included all of my steps to demonstrate that if a different sign is inserted at any point in the argument then one obtains an equation in which the left hand side is alternating in two of its variables but the right hand side is not. Can anybody help?
|
The Jacobi identity for the Poisson bracket does indeed follow from the fact that $d\Omega =0$. I claim that (twice) the Jacobi identity for functions $f,g,h$ is precisely
$$d\Omega(X_f,X_g,X_h) = 0.$$ To see this, simply expand $d\Omega$. You will find six terms of two kinds: three terms of the form
$$X_f \Omega(X_g,X_h) = X_f \lbrace g,h \rbrace$$ and three terms of the form
$$\Omega([X_f,X_g],X_h).$$ To deal with the first kind of terms, notice that from the definition of $X_f$, for any function $g$,
$$X_f g = \lbrace g, f \rbrace.$$
This means that
$$X_f \Omega(X_g,X_h) = \lbrace \lbrace g,h \rbrace, f \rbrace.$$ To deal with the second kind of terms, notice that
$$\iota_{[X_f,X_g]}\Omega = [L_{X_f},\iota_{X_g}]\Omega,$$
but since $d\Omega=0$,
$$L_{X_f}\Omega = d \iota_{X_f}\Omega = 0,$$
and hence
$$\iota_{[X_f,X_g]}\Omega = d \iota_{X_f}\iota_{X_g}\Omega = d\lbrace g,f\rbrace,$$
whence
$$\Omega([X_f,X_g],X_h) = d\lbrace g,f\rbrace (X_h) = \lbrace\lbrace g,f\rbrace, h \rbrace.$$ Adding it all up you get twice the Jacobi identity.
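For the record, here is the bookkeeping, using the convention $d\Omega(X,Y,Z)=X\Omega(Y,Z)-Y\Omega(X,Z)+Z\Omega(X,Y)-\Omega([X,Y],Z)+\Omega([X,Z],Y)-\Omega([Y,Z],X)$ (no combinatorial factor; other conventions only rescale everything). With $X=X_f$, $Y=X_g$, $Z=X_h$, the three terms of the first kind give $\lbrace\lbrace g,h\rbrace,f\rbrace+\lbrace\lbrace h,f\rbrace,g\rbrace+\lbrace\lbrace f,g\rbrace,h\rbrace$, and the three terms of the second kind give the same sum again, so
$$0=d\Omega(X_f,X_g,X_h)=2\big(\lbrace\lbrace f,g\rbrace,h\rbrace+\lbrace\lbrace g,h\rbrace,f\rbrace+\lbrace\lbrace h,f\rbrace,g\rbrace\big),$$
which is the Jacobi identity.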
|
{
"source": [
"https://mathoverflow.net/questions/28421",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4362/"
]
}
|
28,496 |
I've just finished teaching a year-long "foundations of algebraic
geometry" class . It
was my third time teaching it, and my notes are gradually converging.
I've enjoyed it for a number of reasons (most of all the students, who
were smart, hard-working, and from a variety of fields). I've
particularly enjoyed talking with experts (some in nearby fields, many
active on mathoverflow) about what one should (or must!) do in a first
schemes course. I've been pleasantly surprised to find that those who
have actually thought about teaching such a course (and hence who know
how little can be covered) tend to agree on what is important, even if
they are in very different parts of the subject. I want to raise this
question here as well: What topics/examples/ideas etc. really really should be learned in a
year-long first serious course in schemes? Here are some constraints. Certainly most excellent first courses
ignore some or all of these constraints, but I include them to focus
the answers. The first course in question should be
purely algebraic. (The reason for this constraint: to avoid a
debate on which is the royal road to algebraic geometry --- this is
intended to be just one way in. But if the community thinks that a
first course should be broader, this will be reflected in the voting.)
The course should be intended for people in all parts of algebraic
geometry. It should attract smart people in nearby areas. It should
not get people as quickly as possible into your particular area of
research. Preferences: It can (and, I believe, must) be hard. As
much as possible, essential things must be proved, with no handwaving
(e.g. "with a little more work, one can show that...", or using
exercises which are unreasonably hard). Intuition should be given
when possible. Why I'm asking: I will likely edit the notes further, and hope to post
them in chunks over the 2010-11 academic year to provoke further debate. Some hastily-written thoughts are here,
if you are curious. As usual for big-list questions: one topic per answer please. There
is little point giving obvious answers (e.g. "definition of a
scheme"), so I'm particularly interested in things you think others
might forget or disagree with, or things often omitted, or things you
wish someone had told you when you were younger. Or propose dropping
traditional topics, or a nontraditional ordering of traditional topics. Responses
addressing prerequisites such as "it shouldn't cover any commutative
algebra, as participants should take a serious course in that subject
as a prerequisite" are welcome too. As the most interesting
responses might challenge (or defend) conventional wisdom, please give
some argument or evidence in favor of your opinion. Update later in 2010: I am posting the notes, after suitable editing, and trying to take into account the advice below, here. I hope to reach (near) the end some time in summer 2011. Update July 2011: I have indeed reached near the end some time in summer 2011.
|
I found differentials hard to understand when I learned this material. Here are two things that helped me which I think are not in your notes: (1) The description of the Zariski tangent space to $X$ at $x$ as those Hom's from $\mathrm{Spec} \ k[\epsilon]/\epsilon^2$ to $X$ which take $\mathrm{Spec} \ k$ to $x$. This is much closer to my physical intuition for a tangent space than the $(\mathfrak{m}/\mathfrak{m}^2)^{\vee}$ definition. It is also an early example of the power of using rings with nilpotents. Building the vector space structure from this definition is especially pretty. (2) A careful discussion of the relationship between the infinitesimal objects, i.e. the elements of the Zariski tangent and cotangent spaces, and the global objects, i.e. derivations and Kahler differentials.
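As a concrete illustration of (1) (my own example, not from the original answer): for an affine hypersurface $X=\operatorname{Spec}\,k[x_1,\dots,x_n]/(f)$ and a $k$-point $a=(a_1,\dots,a_n)$ of $X$, a map $\mathrm{Spec}\ k[\epsilon]/\epsilon^2\to X$ taking $\mathrm{Spec}\ k$ to $a$ is a $k$-algebra map
$$k[x_1,\dots,x_n]/(f)\longrightarrow k[\epsilon]/\epsilon^2,\qquad x_i\mapsto a_i+\epsilon v_i,$$
and since $\epsilon^2=0$ we have $f(a+\epsilon v)=f(a)+\epsilon\sum_i v_i\,\frac{\partial f}{\partial x_i}(a)$ exactly. So such a map is just a vector $v$ with $\sum_i v_i\,\frac{\partial f}{\partial x_i}(a)=0$: the Zariski tangent space appears as the kernel of the Jacobian, matching the physical picture of tangent vectors.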
|
{
"source": [
"https://mathoverflow.net/questions/28496",
"https://mathoverflow.net",
"https://mathoverflow.net/users/299/"
]
}
|
28,526 |
Suppose that you graduate with a good PhD in mathematics, but don't necessarily want to go into academia, with the post-doc years that this entails. Are there any other options for continuing to do "real math" professionally? For example, how about working at the NSA? I don't know much of what is done there -- is it research mathematics? Are there other similar organizations? Perhaps corporations that contract with the federal government? Companies like RSA? Other areas of industry? Is there research mathematics done in any sort of financial or tech company? I've made this a community wiki, since there aren't any right answers...
|
I have worked in academia, at the research center of a telecommunications company (Tellabs), and at two different FFRDCs (MIT Lincoln Laboratory and IDA). At all of the non-academic jobs, I have done "real math," published papers, attended conferences, given talks, etc. So it is certainly possible to continue doing "real math" outside of academia. You should be aware, however, that in almost any non-academic job, there is pressure on you to produce results that are "useful" for the company or the government. The amount of such pressure varies, but it always exists, because ultimately that is the main justification for your paycheck. In academia, the corresponding fact is that in almost any academic job, there is pressure on you to teach, since that is usually the justification for a significant portion of your salary. Finding a non-academic job where there is no pressure on you to do anything "useful" is akin to finding an academic job where you have no teaching responsibilities. Certain high-tech companies and certain FFRDC's recognize that a good way to attract top talent is to give their employees the freedom to pursue their own research interests, whatever that may be. All the non-academic jobs I had were like this. They actively encouraged me to spend some amount of my time doing "real math" regardless of whether the results were of any "use." How much time? Well, if the company was doing well, and if I was doing a good job of producing "useful" results that they liked, then they would give me more freedom. But if the company was doing poorly then they would start to squeeze. During the telecom industry meltdown in the late 1990s, Tellabs eventually eliminated its research center entirely, along with my job; Bell Labs (more famously) suffered a similar fate. So far I have been drawing a dichotomy between "what the company finds useful" and "real math," and maybe you don't find that satisfactory. After all, if you're sufficiently motivated, you can do "real math" on your own time regardless of what your "day job" is. Maybe what you want is a job where providing what is useful to the company involves doing real math . This is a taller order; for example, at Lincoln Labs I found that there was almost no real math involved in the work they wanted me to do, and I eventually left that job for that reason even though it was a great job in almost every other respect. However, it is still possible to find such jobs, depending on what area of math you are interested in. If you are interested in large cardinals and are hoping for a job where your theorems about large cardinals will be "useful" then you are probably out of luck. However, if your interests lean towards areas with known relevance to computer science or various branches of engineering then your chances are much better. The NSA scores pretty well in this regard since it is no secret that number theory and various other branches of so-called "pure" mathematics are relevant to cryptology. In summary, jobs where you do "real math" do exist. When considering such a job, though, you should first ask yourself, will I enjoy producing what this company considers to be "useful" results? If the answer is no, then you will probably not be happy at the job even if they give you some freedom to do "real math." However, if the answer is yes, and the company gives you some amount of freedom to do "real math," then it will probably be an excellent fit for you.
|
{
"source": [
"https://mathoverflow.net/questions/28526",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3028/"
]
}
|
28,532 |
Suppose I want to understand classical mechanics. Why should I be interested in arbitrary poisson manifolds and not just in symplectic ones? What are examples of systems best described by non symplectic poisson manifolds?
|
For many reasons and purposes, it is the Poisson bracket, not the symplectic form, that plays a primary role. Equations of motion and, more generally, the evolution of observables have an easy form:
$$ \frac{\partial f}{\partial t}=\{H,f\}.$$ Conserved quantities form a Poisson subalgebra:
$$\{H,F\}=\{H,G\}=0 \implies \{H,\{F,G\}\}=0. $$ Since symmetries (Hamiltonian group actions) are Poisson by nature, the moment map is defined in the Poisson setting:
$$ M\ni P\mapsto (X \mapsto H_X(P)).$$
Unlike in the symplectic case, both steps in the Hamiltonian reduction, restriction to the level set and factorization by the action of the stabilizer, are naturally carried out in the Poisson category, even for singular reduction. Quasiclassical approximation in quantum mechanics (and conversely, quantization of classical mechanical systems) is expressed via the Poisson bracket:
$$[\hat{F},\hat{G}]=ih\widehat{\{F,G\}} +O(h^2). $$ On the other hand, many natural systems have a degenerate Poisson bracket and/or are infinite-dimensional. The phase space of various tops is the dual space $\mathfrak{g}^*$ of a Lie algebra. This is a universal example of a linear Poisson structure. Symplectic leaves are the coadjoint orbits and the Poisson center is given by the (classical) "Casimir functions". Classical integrable systems such as KdV admit a bi-Hamiltonian structure (i.e. a pair of compatible Poisson brackets). This has no analogues in symplectic theory. Some of these structures are obtained by a reduction from a linear Poisson structure on a suitable infinite-dimensional Lie algebra (loop algebra, algebra of matrix differential or pseudo-differential operators, etc). Finally, a related practical consideration: even if you are interested in studying a symplectic manifold, frequently this is best accomplished by embedding it as a symplectic leaf into a simpler Poisson manifold, whether or not it has direct physical meaning. In particular, this applies to flag manifolds of semisimple Lie groups, which are topologically complicated objects, but can be identified with coadjoint orbits, thus embedded into a vector space with a linear Poisson structure.
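A concrete instance of the linear Poisson structure on $\mathfrak{g}^*$ mentioned above (standard; signs and normalizations depend on one's conventions): identify $\mathfrak{so}(3)^*\cong\mathbb{R}^3$ and set
$$\{f,g\}(x)=x\cdot\big(\nabla f(x)\times\nabla g(x)\big).$$
This bracket is degenerate: $C(x)=\|x\|^2$ is a Casimir function, so the symplectic leaves are the spheres $\|x\|=\text{const}$ (the coadjoint orbits) together with the origin, and for the free rigid body Hamiltonian $H(x)=\sum_i x_i^2/2I_i$ the evolution equation $\dot f=\{H,f\}$ reproduces Euler's equations, up to the sign convention. No single symplectic manifold contains this picture.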
|
{
"source": [
"https://mathoverflow.net/questions/28532",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2837/"
]
}
|
28,569 |
Let $G$ be a complex connected Lie group, $B$ a Borel subgroup and $W$ the Weyl group. The Bruhat decomposition allows us to write $G$ as a union $\bigcup_{w \in W} BwB$ of cells given by double cosets. One way I have seen to obtain cell decompositions of manifolds is using Morse theory. Is there a way to prove the Bruhat decomposition using Morse theory for a certain well-chosen function $f$? More generally, are there heuristics when obtaining a certain cell decomposition through Morse theory is likely to work?
|
Here's a partial answer. What I'm about to say is taken from Section 2.4 of Chriss and Ginzburg's Representation Theory and Complex Geometry . The references I use will be from this book. There aren't complete proofs there, but there are references to complete proofs. The Bruhat decomposition on $G$ (we'll assume $G$ is reductive, the general case follows from this) can be seen as the preimage of the Bruhat decomposition of its flag variety $G/B$. It's easier to work with $G/B$ since it's a projective variety. Here $G/B$ has a torus (= ${\bf C}^*$) action with finitely many fixed points $\{w_1, \dots, w_n\}$, so one has a Bialynicki-Birula cell (= affine space) decomposition given by the attracting sets $X_i = \{ x \in G/B \mid \lim_{t \to 0} t * x = w_i \}$. [Normally, one thinks of $G/B$ as having finitely many fixed points with respect to $({\bf C}^* )^r$, where $r$ is the rank of $G$, but we can take ${\bf C}^* $ to be a subgroup in general position] In our case, the fixed points are indexed by the Weyl group, and the attracting sets will be the Bruhat cells. From this, one can construct a Morse function $H$ (Lemma 2.4.17) whose Morse decomposition coincides with the B-B cell decomposition (Corollary 2.4.24). So even though it's a very special case, one case where cell decompositions come from Morse functions is when the manifold is a projective algebraic variety with finitely many torus fixed points.
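The smallest example is worth keeping in mind (standard, and not specific to the reference above): for $G=SL_2$ the flag variety is $G/B\cong\mathbb{P}^1$, with the $\mathbb{C}^*$-action
$$t\cdot[x:y]=[tx:t^{-1}y]$$
and fixed points $[1:0]$ and $[0:1]$. One fixed point attracts only itself, while the attracting set of the other is the complement of the first, a copy of $\mathbb{A}^1$; these two attracting sets are exactly the two Bruhat cells of $\mathbb{P}^1$: the point $B/B$ and the big cell.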
|
{
"source": [
"https://mathoverflow.net/questions/28569",
"https://mathoverflow.net",
"https://mathoverflow.net/users/798/"
]
}
|
28,612 |
Perhaps this question overlaps with similar ones, ... but I want to focus on a particular possible cause of confusion. I notice that students are often confused by the concepts of "infinite" and "unbounded". So, when asked if the set of invertible matrices is compact, they reply "no, because there are an infinite number of matrices with non zero determinant, therefore the set is unbounded". Actually this happens in Italian, where the corresponding words ("infinito" and "illimitato") are almost synonyms in everyday language. Does this happen in English too, or other languages?. I wonder: what if we chose another name for the two concepts? Would they make this mistake anyway? One way to check this would be to compare with what happens in other languages, where perhaps the words chosen do not create the confusion. Do you have other examples of this situation? Can you suggest different math concepts which in one language are named with synonyms, but not in another? Do you know if this problem has been studied anywhere?
|
"Open" and "closed". Every reasonable human being on the planet, who has not studied topology, will assume that something can either be open or closed, but not both. This often causes students to make statements like "Set A is open, therefore it is not closed, thus ..."
|
{
"source": [
"https://mathoverflow.net/questions/28612",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6658/"
]
}
|
28,647 |
Is it possible to partition $\mathbb R^3$ into unit circles?
|
The construction is based on a well-ordering of $\mathbb{R}^3$ in order type the least ordinal of cardinality continuum. Let $\phi$ be that ordinal and let $\mathbb{R}^3=\{p_\alpha:\alpha<\phi\}$ be an enumeration of the points of space. We define a unit circle $C_\alpha$ containing $p_\alpha$ by transfinite recursion on $\alpha$; for some $\alpha$ we do nothing. Here is the recursion step. Assume we have reached step $\alpha$ and some circles $\{C_\beta:\beta<\alpha\}$ have been determined. If one of them contains (i.e., covers) $p_\alpha$, we do nothing. Otherwise, we choose a unit circle containing $p_\alpha$ that misses all the earlier circles. For that, we first choose a plane through $p_\alpha$ that is distinct from the planes of the earlier circles. This is possible, as there are continuum many planes through $p_\alpha$ and less than continuum many planes which are the planes of those earlier circles. Let $K$ be the plane chosen. The earlier circles intersect $K$ in less than continuum many points, so it suffices to find, in $K$, a unit circle going through $p_\alpha$ which misses certain less than continuum many points. That is easy: there are continuum many unit circles in $K$ that pass through $p_\alpha$ and each of the bad points disqualifies only 2 of them.
|
{
"source": [
"https://mathoverflow.net/questions/28647",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3375/"
]
}
|
28,717 |
Has any work been done on Singmaster's conjecture since Singmaster's work? The conjecture says there is a finite upper bound on how many times a number other than 1 can occur as a binomial coefficient. Wikipedia's article on it, written mostly by me, says that It is known that infinitely many numbers appear exactly 3 times. It is unknown whether any number appears an odd number of times where the odd number is bigger than 3. It is known that infinitely many numbers appear 2 times, 4 times, and 6 times. One number is known to appear 8 times. No one knows whether there are any others nor whether any number appears more than 8 times. Singmaster reported that Paul Erdős told him the conjecture is probably true but would probably be very hard to prove.
|
There is an upper bound of $O\left(\frac{(\log n)(\log \log \log n)}{(\log \log n)^3}\right)$ due to Daniel Kane: see " Improved bounds on the number of ways of expressing t as a binomial coefficient ," Integers 7 (2007), #A53 for details.
|
{
"source": [
"https://mathoverflow.net/questions/28717",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6316/"
]
}
|
28,721 |
Alright, so I have been taking a while to soak in as much advanced mathematics as an undergraduate as possible, taking courses in algebra, topology, complex analysis (a less rigorous undergraduate version of the usual graduate course at my university), analysis, model theory, and number theory. That is, I have taken enough 'abstract' (proof-based) mathematics courses to fall in love with the subject and decide to pursue it as a career. However, I have been putting off taking a required ordinary differential equations course (colloquially referred to as 'calc 4', though this seems inappropriate) which will likely be very computational and designed to cater to the overpopulation of engineering students at my university. So my question is, for someone who might have to actually concern themselves with the theory behind the 'rules' and theorems which will likely go unproven in this low-level course (likely of questionable mathematical content), what might be a decent supplementary text in ODE? That is, something substantive to counter-balance the 'ODE for students of science and engineering'-type text I will have to wade through. I want to study algebraic geometry further (I have gone through Karen Smith's text and the first part of Hartshorne), so something which goes from basic material through differential forms and related material would be nice. Thanks! (and yes, it's embarrassing that I still haven't taken the 200-level ODE course, but I have been putting it off in favor of more interesting/rigorous courses... but now there's that whole graduation requirements issue).
--Lambdafunctor
|
Maybe I am reading too much into your pseudonym and your partly apologetic and partly condescending comments about the course you are going to take, but please, don't disparage the "rules" and computational aspects of differential equations. Firstly, it is a beautiful subject with direct scientific origin and arguably most applications (save only calculus, perhaps) of all the courses you'd ever take. Secondly, these scientific connections continue to motivate and shape the development of the subject. Thirdly, rigor and abstraction are not substitutes for the actual mathematical content. Bourbaki never wrote a volume on differential equations, and the reason, I think, is that the subject is too content-rich to be amenable to axiomatic treatment. Finally, I've taught students who were gung-ho about rigorous real analysis, Rudin style, but couldn't compute the Taylor expansion of $\sqrt{1+x^3}.$ Knowing that the Riemann-Hilbert correspondence is an equivalence of triangulated categories may feel empowering, but as a matter of technique, it is mere stardust compared with the power of being able to compute the monodromy of a Fuchsian differential equation by hand. Having forewarned you, here are my favorite introductory books on differential equations, all eminently suitable for self-study: Piskunov, Differential and integral calculus; Filippov, Problems in differential equations; Arnold, Ordinary differential equations; Poincaré, On curves defined by differential equations; Arnold, Geometric theory of differential equations; Arnold, Mathematical methods of classical mechanics. You will find a lot of geometry, including an excellent exposition of calculus on manifolds, in the right context, in Arnold's Mathematical methods.
|
{
"source": [
"https://mathoverflow.net/questions/28721",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6131/"
]
}
|
28,776 |
A theorem of Hecke (discussed in this question )
shows that if $L$ is a number field, then the image of the
different $\mathcal D_L$ in the ideal class group of $L$ is a square. Hecke's proof, and all other proofs that I know, establish this essentially by
evaluating all quadratic ideal class characters on $\mathcal D_L$ and showing
that the result is trivial; thus they show that the image of $\mathcal D_L$
is trivial in the ideal class group mod squares, but don't actually exhibit a square root
of $\mathcal D_L$ in the ideal class group. Is there any known construction (in general, or in some interesting cases) of an ideal
whose square can be shown to be equivalent (in the ideal class group) to $\mathcal D_L$. Note: One can ask an analogous question when one replaces the rings of integers by
Hecke algebras acting on spaces of modular forms, and then in some situations I know
that the answer is yes . (See this paper .) This gives me some hope that there might be a construction in
this arithmetic context too. (The parallel between Hecke's context (i.e. the number
field setting) and the Hecke algebra setting is something I learnt from Dick Gross.) Added: Unknown's very interesting comment below seems to show that the answer is "no",
if one interprets "canonical" in a reasonable way. In light of this, I am going to ask another question which is a tightening of this one. On second thought: Perhaps I will ask a follow-up question at some point, but I think I need more time to reflect on it. In the meantime, I wonder if there is more that one can say about this question, if not in general, then in some interesting cases.
|
The following example shows that, in its strongest form, the answer to Professor Emerton's question is no. This answer is essentially an elaboration on what is already in the comments. Let $p \equiv q \equiv 5 \pmod 8$. Let $K/\mathbb{Q}$ be a cyclic
extension of degree four totally ramified at $p$ and $q$ and unramified
everywhere else (it exists). To make life easier, suppose that the $2$-part of the class
group of $K$ is cyclic.
The Galois group of $K$ is
$G = \mathbb{Z}/4 \mathbb{Z}$.
Let $C$ denote the class group of $K$. I claim that $C^G$ is cyclic
of order two. Since the $2$-part of $C$ is cyclic,
this is equivalent to showing that $C_G$ is cyclic of order two.
By class field theory, $C_G$ corresponds to a Galois extension
$L/\mathbb{Q}$ unramified everywhere over $K$ such that there is an exact sequence $$1 \rightarrow C_G \rightarrow \mathrm{Gal}(L/\mathbb{Q})
\rightarrow \mathbb{Z}/4 \mathbb{Z} \rightarrow 1.$$ If $\Gamma$ is any finite group with center $Z(\Gamma)$, then an easy
exercise shows that $\Gamma/Z(\Gamma)$ is cyclic only if it is trivial.
We deduce that $L$ is the genus field of $K$.
There is a degree four extension $M/L$ contained inside the cyclotomic field
$\mathbb{Q}(\zeta_p,\zeta_q)$ that is unramified over $K$ at all finite
primes. However, the congruence conditions on $p$ and $q$ force $K$ to be
(totally) real and $M$ (totally) complex. Thus $M/K$ is ramified at
the infinite primes, and $C_G = \mathbb{Z}/2\mathbb{Z}$, and the claim is established. We note, in passing, that $L = K(\sqrt{p}) = K(\sqrt{q})$. Suppose there
exists a canonical element $\theta \in C$ such that $\theta^2 = \delta_K$,
where $\delta_K$ is the different of $K$.
The different $\delta_K$ is invariant under $G$. If $\theta$ is
canonical in the strongest sense then it must also be invariant under $G$. In particular,
the element $\theta \in C^G$ must have order dividing two, and hence $\theta^2 = \delta_K$
must be trivial in $C$. We conclude that if $\delta_K$ is not principal,
no such $\theta$ exists. It remains to show that there exists primes $p$ and $q$ such that
$\delta_K$ is not principal and the $2$-part of $C$ is cyclic. A computation shows this is so for
$p = 13$ and $q = 53$. For those playing at home, $K$ can be taken to be
the splitting field
of
$$x^4 + 66 x^3 + 600 x^2 + 1088 x - 1024,$$
where $C = \mathbb{Z}/8 \mathbb{Z}$ and $\delta_K = [4]$. $C$ is generated by (any) prime
$\mathfrak{p}$ dividing $2$, and $G$ acts on $C$ via the quotient $\mathbb{Z}/2\mathbb{Z}$, sending $\mathfrak{p}$ to $\mathfrak{p}^3$.
|
{
"source": [
"https://mathoverflow.net/questions/28776",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2874/"
]
}
|
28,788 |
A while back I saw posted on someone's office door a statement attributed to some famous person, saying that it is an instance of the callousness of youth to think that a theorem is trivial because its proof is trivial. I don't remember who said that, and the person whose door it was posted on didn't remember either. This leads to two questions: Who was it? And where do I find it in print—something citable? (Let's call that one question.) What are examples of nontrivial theorems whose proofs are trivial? Here's a wild guess: let's say for example a theorem of Euclidean geometry has a trivial proof but doesn't hold in non-Euclidean spaces and its holding or not in a particular space has far-reaching consequences not all of which will be understood within the next 200 years. Could that be an example of what this was about? Or am I just missing the point?
|
Bertrand Russell proved that the general set-formation principle known as the Comprehension Principle, which asserts that for any property $\varphi$ one may form the set $\lbrace\ x \mid \varphi(x)\ \rbrace$ of all objects having that property, is simply inconsistent. This theorem, also known as the Russell Paradox , was certainly not obvious at the time, as Frege was famously completing his major treatise on the foundation of mathematics, based principally on what we now call naive set theory, using the Comprehension Principle. It is Russell's theorem that showed that this naive set theory is contradictory. Nevertheless, the proof of Russell's theorem is trivial: Let $R$ be the set of all sets $x$ such that $x\notin x$ . Thus, $R\in R$ if and only if $R\notin R$ , a contradiction. So the proof is trivial, but the theorem was shocking and led to a variety of developments in the foundations of mathematics, from which ultimately the modern ZFC conceptions arose. Frege abandoned his work in this area.
|
{
"source": [
"https://mathoverflow.net/questions/28788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6316/"
]
}
|
28,826 |
While reading the answer to another Mathoverflow question, which mentioned the Poisson summation formula, I felt a question of my own coming on. This is something I've wanted to know for a long time. In fact, I've even asked people, who have probably given me perfectly good answers, but somehow their answers have never stuck in my brain. The question is simple: the Poisson summation formula is incredibly useful to many people, but why is that? When you first see it, it looks like a piece of magic, but then suddenly you start spotting that people keep saying "By Poisson summation" and expecting you to fill in the details. In that respect, it's a bit like the phrase "By compactness," but the important difference for me is that I can fill in the details of compactness arguments. What I would like to know is this. What is the "trigger" that makes people think, "Ah, Poisson summation should be useful here"? And is there some very simple example of how it is applied, with the property that once you understand that example, you basically understand how to apply it in general? (Perhaps two or three examples are needed -- that would obviously be OK too.) And can one give a general description of the circumstances where it is useful? (Anyone familiar with the Tricki will see that I am basically asking for a Tricki article on the formula. But I don't mind something incomplete or less polished.) For reference, here is a related (but different) question about the Poisson summation formula: Truth of the Poisson summation formula
|
The existing answers list some important situations where Poisson Summation plays a role, the application to proving the functional equation of $\theta$ and hence of $\zeta$ being my personal favourite. My best answer to Tim's question as he actually asked it might be: why not have it in mind to try using it whenever you have a discrete sum that you are having trouble estimating, especially if you fancy your chances of understanding the Fourier transform of the summands. You'll end up with a different sum and it might be a lot easier to understand, and you might even be able to approximate your first sum by an integral (the term $\hat{f}(0)$ in the Poisson summation formula). To explain a little more with an example, there's a whole theory concerned with the estimation of exponential sums $\sum_{n \leq N} e^{2\pi i \phi(n)}$. There are two processes called A and B that can be used to turn a sum like this into something you might be better positioned to understand. Process A is basically Weyl/van der Corput differencing (Cauchy-Schwarz) and process B is essentially Poisson summation. It's not a very straightforward task to put together a theory of how these processes interact, and how they may best be combined to estimate your sum, and in fact this is in general something of an art. The 10 lectures book by Montgomery contains a nice exposition, and there's a whole LMS lecture note volume by Graham and Kolesnik if you want more details. I want to share a perhaps slightly obscure paper of Roberts and Sargos (Three-dimensional exponential sums with monomials, Journal fur die reine und angewandte Mathematik (Crelle) 591), in which they use Poisson Summation in the form of Process B mentioned above arbitrarily many times to establish the following rather simple-to-state result: the number of quadruples $x_1,x_2,x_3,x_4$ in $[X, 2X)$ with $$|1/x_1 + 1/x_2 - 1/x_3 - 1/x_4| \leq 1/X^3$$ is $X^{2 + o(1)}$. In other words, the quantities $1/a + 1/b$ tend to avoid one another to pretty much the same extent as random numbers of the same size. Very very roughly speaking (I don't really understand the argument in depth) the proof involves looking at exponential sums $\sum_x e^{2\pi i m/x}$, and it is these that are transformed repeatedly using Poisson summation followed by other modifications (it being reasonably pointless to try and apply Poisson sum twice in succession).
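Since the functional equation of $\theta$ came up as the favourite application, here is the two-line version (with the Fourier transform normalized as $\hat f(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$; other normalizations just move constants around). Poisson summation says $\sum_{n\in\mathbb{Z}}f(n)=\sum_{k\in\mathbb{Z}}\hat f(k)$, and the Gaussian $f(x)=e^{-\pi tx^2}$ has $\hat f(\xi)=t^{-1/2}e^{-\pi\xi^2/t}$, so for $t>0$
$$\theta(t):=\sum_{n\in\mathbb{Z}}e^{-\pi n^2t}=\frac{1}{\sqrt{t}}\sum_{k\in\mathbb{Z}}e^{-\pi k^2/t}=\frac{1}{\sqrt{t}}\,\theta(1/t).$$
Taking the Mellin transform of $\theta(t)-1$ is then what yields the analytic continuation and functional equation of $\zeta(s)$.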
|
{
"source": [
"https://mathoverflow.net/questions/28826",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1459/"
]
}
|
28,901 |
What are the pros and cons of the MathSciNet database vs Google Scholar? I don't have access to Mathscinet so this question is out of curiosity, and also this question where MathSciNet is used to find paper counts. I reckon Google Scholar will almost always be the more comprehensive of the two with higher paper counts for any particular author and includes papers that don't use the MSC codes as well papers from other subjects that may be of mathematical interest but aren't included in mathscinet. One definitely annoying thing about Google is that in the advanced search it doesn't have a mathematics only category, but lumps it in with computer science and engineering so you sometimes need to add something like -"computer science" -"engineering" mathematics to your search term to filter out unwanted results, which isn't ideal.
|
Google scholar is free; MathSciNet requires a subscription. In practice the usual effect of this is that one needs to access MathSciNet via a university IP address rather than one's home address. But it also means that one can't provide web pointers to MathSciNet searches or reviews and expect them to be usable by people who are not themselves professional mathematicians; it is possible to link to bibliographic entries for individual articles but non-subscribers are not shown the review text, only the bibliographic data. MathSciNet covers essentially all mathematics journals; Google scholar covers only what it can find online. On the other hand, Google scholar covers unpublished preprints and some published mathematical material in related disciplines (e.g. theoretical computer science conferences) that is not as comprehensively reviewed in MathSciNet. MathSciNet is indexed only by title and abstract/review text; Google scholar is indexed by the full text of the article. Some articles in MathSciNet have a review written by a knowledgeable reviewer, that puts the article into context better than the authors did. (On the other hand, many MathSciNet entries merely repeat the authors' abstract.) MathSciNet has much more reliable publication data than Google scholar: its BibTeX is generally usable as-is, it properly collects papers by the same author and distinguishes papers by different authors, and it doesn't have duplicate entries for the same paper. However, Google scholar generally has better citation data than MathSciNet: although MathSciNet lists the papers that cite a given paper, the ones that it lists are generally a small subset of the ones Google scholar finds. Google scholar will provide links to as many different online copies of a paper as it can find (e.g. preprints from the author's home page); MathSciNet will only provide one link, to the official published copy, and will do so only for a subset of the journals it covers.
|
{
"source": [
"https://mathoverflow.net/questions/28901",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3537/"
]
}
|
28,945 |
There are infinite graphs which contain all finite graphs as induced subgraphs, e.g. the Rado graph or the coprimeness graph on the naturals. Are there infinite groups which
contain all finite groups as
subgroups?
|
Yes, plenty. The group only has to contain all finite permutation groups. Perhaps the most
straightforward example would be the permutations of a countable set. That is bijections
which fix all but a finite set.
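Spelling out the reduction (a standard two-step argument): by Cayley's theorem any finite group $G$ embeds into the symmetric group on its own underlying set via $g\mapsto(x\mapsto gx)$, and any finite symmetric group $S_n$ sits inside the group of finitary permutations of a countable set as the permutations supported on the first $n$ points. So this single countable group contains a copy of every finite group.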
|
{
"source": [
"https://mathoverflow.net/questions/28945",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2672/"
]
}
|
28,947 |
The recent question about the most prolific collaboration interested me. How about this question in the opposite direction, then: can anyone beat, amongst contemporary mathematicians, the example of Christopher Hooley, who has written 91 papers and has yet to coauthor a single one (at least if one discounts an obituary written in 1986)?
|
Lucien Godeaux wrote more than 600 papers and not one of them is a joint paper. He cowrote a textbook in projective geometry. Mathscinet records only 15 citations to all these papers! But there is something called Godeaux surfaces which is mentioned in the literature. This is about the weirdest example I know. http://www.ams.org/mathscinet/search/author.html?mrauthid=241534
|
{
"source": [
"https://mathoverflow.net/questions/28947",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5575/"
]
}
|
28,997 |
For many standard, well-understood theorems the proofs have been streamlined to the point where you just need to understand the proof once and you remember the general idea forever. At this point I have learned three different proofs of the Birkhoff ergodic theorem on three separate occasions and yet I still could probably not explain any of them to a friend or even sit down and recover all of the details. The problem seems to be that they all depend crucially on some frustrating little combinatorial trick, each of which was apparently invented just to service this one result. Has anyone seen a more natural approach that I might actually be able to remember? Note that I'm not necessarily looking for a short proof (those are often the worst offenders) - I'm looking for an argument that will make me feel like I could have invented it if I were given enough time.
|
I don't know whether this helps or whether you've already seen this before, but this made a lot more intuitive sense to me than the combinatorial approach in Halmos's book. The key point in the proof is to prove the maximal ergodic theorem. This states that if $M_T$ is the maximal operator $M_T f= \sup_{n >0} \frac{1}{n+1} (f + Tf + \dots + T^n f)$, then $\int_{M_T f>0} f \geq 0$. Here $T$ is the associated map on functions coming from the measure-preserving transformation. This is a weak-type inequality, and the fact that one considers the maximal operator isn't terribly surprising given how they arise in a) the proof of the Lebesgue differentation theorem (namely, via the Hardy-Littlewood maximal operator $Mf(x) = \sup_{t >0} \frac{1}{2t} \int_{x-t}^{x+t} |f(r)| dr$. b) In the theory of singular integrals, one can define a maximal operator in the same way and prove that it is $L^p$-bounded for $1 < p < \infty$ and weak-$L^1$ bounded in suitably nice homogeneous cases (e.g. the Hilbert transform). One of the consequences of this is, for instance, that the Hilbert transform can be computed a.e. via the Cauchy principal value of the usual integral. c) I'm pretty sure the boundedness of the maximal operator of the partial sums of Fourier series is used in the proof of the Carleson-Hunt theorem. So using maximal operators (and, in particular, weak bounds on them) to establish convergence is fairly standard. Once the maximal inequality has been established, it isn't usually very hard to get the pointwise convergence result, and the ergodic theorem is no exception. The maximal ergodic theorem actually generalizes to the case where $T$ is an operator of $L^1$-norm at most 1, and thinking of it in a more general sense might meet the criteria of your question. In particular, let $T$ be as just mentioned, and consider $M_T$ described in the analogous way. Or rather, consider $M_T'f = \sup_{n \geq 0} \sum_{i=0}^n T^if$. Clearly $M_T'f >0$ iff $M_Tf >0$. Moreover, $M_T'$ has the crucial property that $T M_T' f + f = M_T' f$ whenever $M_Tf>0$. Therefore,
$\int_{M_T'f>0} f = \int_{M_T'f>0} M_T'f - \int_{M_T'f>0} TM_T'f.$ The first part is in fact $||M_T'f||_1$ because the modified maximal operator is always nonnegative. The second part is at most $||T M_T'f|| \leq ||M_T'f||$ by the norm condition. Hence the difference is nonnegative. Perhaps this will be useful: let $M$ be an operator (not necessarily linear) sending functions to nonnegative functions such that $(T-I)Mf = f$ wherever $Mf>0$, for $T$ an operator of $L^1$-norm at most 1). Then $\int_{Mf>0} f \geq 0$. The proof is the same.
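As for the step alluded to above, that the pointwise theorem follows once the maximal inequality is in hand, here is the standard deduction, sketched for a measure-preserving transformation on a probability space and $f\in L^1$ (I'm suppressing the routine integrability checks). Write $A_nf=\frac{1}{n+1}\sum_{i=0}^nT^if$ and, for rationals $\alpha<\beta$, let $E_{\alpha,\beta}=\{\liminf_nA_nf<\alpha<\beta<\limsup_nA_nf\}$. This set is invariant, and on it the maximal functions of $f-\beta$ and of $\alpha-f$ are both positive, so applying the maximal ergodic theorem to the system restricted to $E_{\alpha,\beta}$ gives
$$\int_{E_{\alpha,\beta}}(f-\beta)\,d\mu\ \geq\ 0\qquad\text{and}\qquad\int_{E_{\alpha,\beta}}(\alpha-f)\,d\mu\ \geq\ 0,$$
hence $(\alpha-\beta)\,\mu(E_{\alpha,\beta})\geq0$ and $\mu(E_{\alpha,\beta})=0$. Taking the union over rational pairs $\alpha<\beta$ shows that $\lim_nA_nf$ exists almost everywhere.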
|
{
"source": [
"https://mathoverflow.net/questions/28997",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4362/"
]
}
|
28,999 |
This recent MO
question ,
answered now several times over, inquired whether an
infinite group can contain every finite group as a
subgroup. The answer is yes by a variety of means. So let us raise the stakes: Is there a countable group
containing (a copy of) every countable group as a subgroup? The countable random graph, after all, which inspired the
original question, contains copies of all countable graphs,
not merely all finite graphs. Is this possible with groups?
What seems to be needed is a highly saturated countable group. An easier requirement would insist that
the group contains merely all finitely generated groups as
subgroups, or merely all countable abelian groups.
(Reducing to a countable family, however, trivializes the question via the direct sum.) A harder requirement would find the subgroups in particularly nice ways: as direct summands or as normal subgroups. Another strengthened requirement would insist on an
amalgamation property: whenever
$H_0\lt H_1$ are finitely generated, then every copy of $H_0$ in the universal group $G$
extends to a copy of $H_1$ in $G$. This property implies
that $G$ is universal for all countable groups, by adding
one generator at a time. This would generalize the
saturation property of the random graph. If there is a universal countable group, can one find a
finitely generated such group, or a finitely presented
such group? (This would lose amalgamation, of course.) Moving higher, for which cardinals $\kappa$
is there a universal group of size $\kappa$? That is, when is there
a group of size
$\kappa$ containing as a subgroup a copy of every group of size
$\kappa$? Moving lower, what is the minimum size of a finite group
containing all groups of finite size at most $n$ as subgroups?
Clearly, $n!$ suffices. Can one do better?
|
There isn't a countable group which contains a copy of every countable group
as a subgroup. This follows from the fact that there are uncountably many
2-generator groups up to isomorphism, while a countable group has only countably many 2-generated subgroups, and hence contains only countably many isomorphism types of them. The first example of such a family was discovered by B.H. Neumann. A clear
account of his construction can be found in de la Harpe's book on geometric group theory.
|
{
"source": [
"https://mathoverflow.net/questions/28999",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1946/"
]
}
|
29,006 |
This is certainly related to "What are your favorite instructional counterexamples?", but I thought I would ask a more focused question. We've all seen Counterexamples in analysis and Counterexamples in topology, so I think it's time for: Counterexamples in algebra. Now, algebra is quite broad, and I'm new at this, so if I need to narrow this then I will - just let me know. At the moment I'm looking for counterexamples in all areas of algebra: finite groups, representation theory, homological algebra, Galois theory, Lie groups and Lie algebras, etc. This might be too much, so a moderator can change that. These counterexamples can illuminate a definition (e.g. a projective module that is not free), illustrate the importance of a condition in a theorem (e.g. non-locally compact group that does not admit a Haar measure), or provide a useful counterexample for a variety of possible conjectures (I don't have an algebraic example, but something analogous to the Cantor set in analysis). I look forward to your responses! You can also add your counter-examples to this nLab page: http://ncatlab.org/nlab/show/counterexamples+in+algebra (the link to that page is currently "below the fold" in the comment list so I (Andrew Stacey) have added it to the main question)
|
In the category of rings, epimorphisms do not have to be surjective:
$\mathbb{Z}\hookrightarrow \mathbb{Q}$.
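(A one-line justification, added here for convenience and not part of the original answer: if $f,g\colon \mathbb{Q}\to R$ are ring homomorphisms agreeing on $\mathbb{Z}$, then for any $p/q$ we have $f(q)f(1/q)=f(1)=1$, so $f(1/q)$ is the unique inverse of $f(q)=g(q)$ in $R$; hence $f(p/q)=f(p)f(1/q)=g(p)g(1/q)=g(p/q)$ and $f=g$. Any two maps out of $\mathbb{Q}$ that agree after composing with the inclusion are therefore equal, which is exactly the statement that the inclusion is an epimorphism, even though it is far from surjective.)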
|
{
"source": [
"https://mathoverflow.net/questions/29006",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6936/"
]
}
|
29,100 |
This question is predicated on my understanding that real algebraic geometry (henceforth RAG) is the version of algebraic geometry (AG) one gets when replacing (esp. algebraically closed) fields with formally real (esp. real closed) fields. This makes for substantial differences in the theory because such fields can be ordered, and with order comes the notion of a semialgebraic set and a stronger topology. I am aware that there is a notion of "real spectrum" analogous to the traditional spectrum of a commutative ring, though I'm not terribly familiar with either. I assume this allows one to glue things together and define "real schemes" or some such thing. Or if not, I assume the reason this doesn't work is something one would learn in the study of RAG. My question: Given the differences in the theories, how well does one need to understand "traditional" AG to study RAG? Are there references (preferably books) which introduce RAG at an abstract level without assuming much knowledge of AG? Or is asking for this like when people ask how they can learn about motives without knowing about AG first? I already have Basu, Pollack, and Roy's Algorithms in Real Algebraic Geometry but I'm looking for something less algorithmic.
|
Real algebraic geometry comes with its own set of methods. While keeping the complex picture in mind is sometimes useful (e.g. for any real algebraic variety X, the Smith-Thom inequality asserts that $b(X(\mathbb{R})) \leq b(X(\mathbb{C}))$, where $b(\cdot)$ denotes the sum of the topological Betti numbers with mod 2 coefficients), most of the techniques used are either built from scratch or borrowed from other areas, such as singularity theory or model theory. The literature is a lot smaller for RAG than for traditional AG; the basic reference is the book by Bochnak, Coste and Roy (preferably the English-language edition, which is more recent by more than 10 years and has been greatly expanded). The book covers in particular the real spectrum, the transfer principle (which makes non-standard methods really easy), stratifications and Nash manifolds, among other topics. Michel Coste also has An Introduction to Semialgebraic Geometry available on his webpage, a very short treatment of some basic results, enough to give you a first impression. Other interesting books tend to be shorter and more focused than BCR, dealing with a specific aspect; e.g. Prestel's Positive polynomials (dealing mostly with results such as Schmudgen's theorem), and Andradas-Brocker-Ruiz Constructible sets in real geometry (dealing mostly with the minimum number of inequalities required to define basic sets). The book by Benedetti and Risler is very interesting and concrete; I found some passages very useful, and some results are hard to find in other books (the sections on additive complexity of polynomials are very thorough), but it is a bit scatterbrained for my taste. As the name indicates, the book by Basu, Pollack and Roy is entirely focused on the algorithmic aspects. It's a very good book, and you may still pick up some of the theory in there, but it does not sound like what you are after right now. As for o-minimality, there again, Michel Coste's webpage contains an introduction that nicely complements van den Dries's book. I would hesitate to bundle o-minimality with real algebraic geometry. In some respects, the two domains are undoubtedly close cousins, and o-minimality can be seen as a wide-ranging generalization of real algebraic structures; on the other hand, each discipline also has its own aspects and problems that do not translate all that well into the other. I'm being verbose as usual. Still, I hope it helps.
|
{
"source": [
"https://mathoverflow.net/questions/29100",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5963/"
]
}
|
29,104 |
As a person who has been spending significant time learning mathematics, I have to admit that I sometimes find the fact uncovered by Godel very upsetting: we can never know that our axiom system is consistent. The consistency of ZFC can only be proved in a larger system, whose consistency is unknown. That means proofs are not what I once believed them to be: a certificate that a counterexample to a statement cannot be found. For example, in spite of the proof of Wiles, it is conceivable that someday someone can come up with integers a, b, c and n>2 such that a^n + b^n = c^n, which would mean that our axiom system happened to be inconsistent. I would like to learn about the reasons that, in spite of Godel's theorem, mathematicians (or you) think that proofs are still very valuable. Why do they worry less and less each day about Godel's theorem (edit: or do they)? I would also appreciate references written for non-experts addressing this question.
|
If you like, you can view proofs of a statement in some formal system (e.g. ZFC) as a certificate that a counterexample cannot be found without demonstrating the inconsistency of ZFC, which would be a major mathematical event, and probably one of far greater significance than whether one's given statement was true or false. In practice, a given proof is not going to be closely tied to a single formal system such as ZFC, but will be robust enough that it can follow from any number of reasonable sets of axioms, including those much weaker than ZFC. Only one of these sets of axioms then needs to be consistent in order to guarantee that no counterexample would ever be found, and this is about as close to an ironclad guarantee as one can ever hope for. But ultimately, mathematicians are not really after proofs, despite appearances; they are after understanding . This is discussed quite well in Thurston's article " On proof and progress in mathematics ".
|
{
"source": [
"https://mathoverflow.net/questions/29104",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1229/"
]
}
|
29,118 |
I am trying to prove $\sum\limits_{j=0}^{k-1}(-1)^{j+1}(k-j)^{2k-2} \binom{2k+1}{j} \ge 0$. This inequality has been verified by computer for $k\le40$. Some clues that might work (kindly provided by Doron Zeilberger) are as follows: (1) let $Ef(x):=f(x-1)$ and $P_k(E):=\sum_{j=0}^{k-1}(-1)^{(j+1)}\binom{2k+1}{j}E^j$; (2) these satisfy the inhomogeneous recurrence $P_k(E)-(1-E)^2P_{k-1}(E) =$ some binomial in $E$; (3) the original sum can be expressed as $P_k(E)x^{(2k-2)} |_{x=k}$; (4) try to derive a recurrence for $P_k(E)x^{(2k-2)}$ before plugging in
$x=k$ and somehow use induction, possibly having to prove a more general statement to facilitate the induction. Unfortunately I do not know how to find a recurrence such as the one suggested by clue 4.
|
Your expression is the difference of two central Eulerian numbers, $$A(k):=\sum_{j=0}^{k-1}(-1)^{j+1}(k-j)^{2k-2}{2k+1 \choose j}=\left \langle {2k-2\atop k-2} \right \rangle-\left \langle {2k-2\atop k-3} \right \rangle$$ as you can easily deduce from their closed formula. The positivity of $A(k)$ is just due to the fact that the Eulerian numbers $\left \langle {n\atop j}\right \rangle$ are increasing for $1\leq j\leq n/2$ (like the binomial coefficients); this fact also has a clear combinatorial explanation. See e.g. http://en.wikipedia.org/wiki/Eulerian_number http://www.oeis.org/A008292 [edit]: although by now all details have been very clearly explained by Victor Protsak, I wish to add a general remark, should you find yourself in an analogous situation again. A healthy approach in such cases is adding variables, following the motto "more variables = simpler dependence" (like when one passes from quadratic to bilinear). In the present case, you may consider $$A(k):=a(k,\ 2k-2,\ 2k+1)$$ where you define $$a(k,n,m):=\sum_{j=0}^{k-1}(-1)^{j+1}(k-j)^{n}{m \choose j}$$ which makes more apparent the action of the iterated difference operator, or, in the formalism of generating series, the Cauchy product structure: $$\sum_{k=0}^\infty a(k,n,m)x^k=-\sum_{j=0}^\infty j^nx^j\, \sum_{j=0}^\infty(-1)^j{m \choose j} x^j =-(1-x)^m\sum_{j=0}^\infty j^nx^j. $$ The series
$$\sum_{j=0}^\infty j^nx^j$$
is now a much simpler object to investigate, and in fact it is well known to anyone who has played with power series in childhood. It sums to a rational function $$(1-x)^{-n-1}x\sum_{k=0}^{n}\left \langle {n\atop k}\right \rangle x^k$$
that defines the Eulerian polynomial of order $n$ as numerator, and the Eulerian numbers as coefficients. In your case, $m=n+3$, meaning that you are still applying a discrete difference twice (in fact just once, due to the symmetric relations; check Victor's answer).
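For readers who want to see the identity in action, here is a small numerical check (my own sketch, not part of the original answer) comparing the sum $A(k)$ from the question with the difference of central Eulerian numbers, using the standard closed formula $\left\langle {n\atop m}\right\rangle=\sum_{j=0}^{m}(-1)^j\binom{n+1}{j}(m+1-j)^n$:

```python
from math import comb

def eulerian(n, m):
    # Eulerian number <n, m> via the closed formula; 0 for negative m.
    if m < 0:
        return 0
    return sum((-1)**j * comb(n + 1, j) * (m + 1 - j)**n for j in range(m + 1))

def A(k):
    # The sum from the question: sum_{j=0}^{k-1} (-1)^(j+1) (k-j)^(2k-2) C(2k+1, j).
    return sum((-1)**(j + 1) * (k - j)**(2*k - 2) * comb(2*k + 1, j) for j in range(k))

for k in range(2, 12):
    assert A(k) == eulerian(2*k - 2, k - 2) - eulerian(2*k - 2, k - 3)
    assert A(k) >= 0
print("identity and positivity checked for k = 2, ..., 11")
```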
|
{
"source": [
"https://mathoverflow.net/questions/29118",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6989/"
]
}
|
29,197 |
I have read about the existence of functions of the kind described in the title in several places, but have never seen an instance of them. Sorry if this is too elementary a question to be posted here.
|
A function $f:\mathbb{N}\to\mathbb{N}$ is computable if and only if the graph of $f$ is $\Sigma_1$ definable in the arithmetic hierarchy, which means that $f(x)=y\iff \exists n\ \varphi(x,y,n)$, where $\varphi$ involves only bounded quantifiers. Thus, the essence of computation is that it is the search for an arithmetic witness $n$ of some primitive property. Many functions, however, are easy to describe but cannot be expressed in this simple form. Here are some examples: The characteristic function of the set of theorems of your favorite axiomatization of mathematics, such as PA or ZFC; this is the function that correctly labels assertions as theorem or non-theorem. We may view assertions directly as syntactic strings of symbols or we may code them as numbers if you wish (and this is surely a cosmetic difference). While we can recognize theorems by their proofs, we provably have in principle no computable way to recognize a non-theorem. The truth function, which correctly labels the statements of arithmetic as true or false, is not computable. This function is not in the arithmetic hierarchy, but it exists at the entry level $\omega$ to the hyperarithmetic hierarchy. The halting problem function, which correctly labels program-input pairs as halting or non-halting, is easy to describe, but not computable. The Tiling function, which, given any finite set of polygonal tiles, outputs the size $k$ of the largest $k\times k$ square that can be tiled by them, or $0$ if they tile the entire plane. The Conjugate function, which, given two words in a finite group presentation, correctly states whether they are conjugate or not. The Solve function, which, given a polynomial over the integers in several variables, outputs the list of smallest-norm integer solutions (giving the empty list if there are none). This is not computable by the MRDP solution to Hilbert's 10th problem. The positive instances are computable, of course, as they are witnessed by their corresponding calculation, but the empty list provably cannot be witnessed in a finitary way. The Tot function, which correctly labels Turing machine programs as total or strictly partial. The Empty function, which labels Turing machine programs as empty or non-empty, depending on whether they accept an input. An uncountable supply of examples is provided by Rice's Theorem, which asserts that no non-trivial property of the c.e. sets is computable from their programs. Thus, if $W_e$ is the set enumerated by program $e$, then for any family of sets ${\cal A}$ which contains some but not all $W_e$, the characteristic function of the set of programs $\{ e | W_e\in{\cal A}\}$ is not computable. For example, the functions that decide whether program $e$ enumerates a connected graph, or whether this set contains any primes, or whether it is eventually periodic, or whether it exhibits any other property that holds of some but not all programs, are all non-computable.
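To make the $\Sigma_1$ asymmetry concrete, here is a small illustrative sketch (my own addition, not part of the original answer) in the spirit of the Solve example: a positive answer is witnessed by a finite search, but in general no finite computation can certify that the empty list is correct.

```python
from itertools import count, product

def find_integer_zero(p, num_vars):
    """Search integer points of increasing sup-norm for a zero of p.
    If p has an integer zero, this halts with a finite, checkable witness;
    if it has none, the loop runs forever -- the Sigma_1 asymmetry."""
    for bound in count(0):
        for point in product(range(-bound, bound + 1), repeat=num_vars):
            if max(map(abs, point), default=0) == bound and p(*point) == 0:
                return point

# Positive instances terminate quickly, e.g. x^2 + y^2 - 25 has an integer zero.
print(find_integer_zero(lambda x, y: x**2 + y**2 - 25, 2))
```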
|
{
"source": [
"https://mathoverflow.net/questions/29197",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6466/"
]
}
|
29,232 |
In her paper The unification of Mathematics via Topos Theory, Olivia Caramello says "one can generate a huge number of new results in any mathematical field without any creative effort". Is this an exaggeration, and if not, is this a new idea or has it always been thought that topos theory could enable automatic generation of theorems?
|
Topos theory provides a dictionary between (certain areas of) logic and (certain areas of) geometry. As such, it provides all the benefits that mathematical dictionaries do: It lets you translate between two languages whose natural evolutions proceeded independently. An insight that is obvious in one domain may not be so obvious when translated to the other domain. Dictionaries cannot perform magic. In particular, it is usually too optimistic to think that a dictionary will allow you to prove significant new theorems with no effort. True, sometimes we do get lucky in this way. When Richard Stanley first discovered the dictionary between toric varieties and convex polytopes , he almost immediately reaped the reward of proving an important combinatorial conjecture with very little effort, because geometers had already put in a lot of work to solve exactly the problem he needed. But more commonly, the payoff of a dictionary is that it allows you to formulate good questions with very little effort. That is, you now have a new way to think about old problems, so you may be able to find your way more easily to a solution by borrowing concepts from both domains. You will still need to do hard work to solve hard problems, but your toolbox is now bigger.
|
{
"source": [
"https://mathoverflow.net/questions/29232",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3537/"
]
}
|
29,300 |
Of all the constructions of the reals, the construction via the surreals seems the most elegant to me. It seems to immediately capture the total ordering and precision of Dedekind cuts at a fundamental level since the definition of a number is based entirely on how things are ordered. It avoids, or at least simplifies, the convergence question of Cauchy sequences. And it naturally transcends finiteness without sacrificing awareness of it. The one "rumor" I've consistently heard is that it is hard to naturally define integrals and derivatives in the surreals, although I have yet to see a solid technical justification of that. Are there known results that suggest we should avoid further study of this construction, or that show limitations of it?
|
At a recent conference in Paris on Philosophy and Model Theory (at which I also spoke), Philip Ehrlich gave a fascinating talk on the surreal numbers and new developments, showcasing it as unifying many disparate paths in mathematics. The abstract is available here, on page 8 , and here his article on the Absolute Arithmetic Continuum . The principal new technical development is a focus on the underlying tree. Philip expressed his frustration that Conway often treated his creation of surreal numbers as a kind of game or just-for-fun project---an attitude reinforced by the excellent Knuth book---whereas they are in fact a profound mathematical development unifying disparate threads of mathematical investigation into a single unifying structure. And he made a very strong case for this position at the conference. Meanwhile, perhaps exhibiting Philip's point, at a conference on logic and games here at CUNY, I once heard Conway describe the surreal numbers as one of the great disappointments of his life, that they did not seem after all to have the profound unifying nature that he (and many others) thought they might. Philip Ehrlich strove to make the case that Conway was his own worst enemy in promoting the surreals, and that they actually do have the unifying nature Conway thought they did, but that Conway scared people away from this perspective by treating them as a toy. I encourage you to read Philip's articles. So my answer, supporting Philip, is that nothing is wrong with the surreals---please have at them! Of course they have their own issues, which will need to be surmounted, but we shall all benefit from a greater investigation of them.
|
{
"source": [
"https://mathoverflow.net/questions/29300",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2498/"
]
}
|
29,302 |
There are a number of informal heuristic arguments for the consistency of ZFC, enough that I am happy enough to believe that ZFC is consistent. This is true for even some of the more tame large cardinal axioms, like the existence of an infinite number of Grothendieck universes. Are there any such heuristic arguments for the existence of Vopenka cardinals or huge cardinals? I'd very much like to believe them, mainly because they simplify a great deal of trouble one has to go through when working with accessible categories and localization (every localizer is accessible on a presheaf category, for instance). For Vopenka's principle, the category-theoretic definition is that every full complete (cocomplete) subcategory of a locally presentable category is reflective (coreflective). This seems rather unintuitive to me (and I don't even understand the model-theoretic definition of Vopenka's principle). What reason is there to believe that ZFC+VP (or ZFC+HC, which implies the consistency of VP) is consistent? Obviously, I am willing to accept heuristic or informal arguments (since a formal proof is impossible).
|
Most of the arguments previously presented take a set-theoretic/logical point of view and apply to large cardinal axioms in general. There's a lot of good stuff there, but I think there are additional things to be said about Vopěnka's principle specifically from a category-theoretic point of view. One formulation of Vopěnka's principle (which is the one that I'm used to calling "the" category-theoretic definition, and the one used as the definition in Adamek&Rosicky's book, although there are many category-theoretic statements equivalent to VP) is that there does not exist a large (= proper-class-sized) full discrete (= having no nonidentity morphims between its objects) subcategory of any locally presentable category. I think there is a good argument to be made for the naturalness of this from a category-theoretic perspective. To explain why, let me back up a bit. To a category theorist of a certain philosophical bent, one thing that category theory teaches us is to avoid talking about equalities between objects of a category, rather than isomorphism. For instance, in doing group theory, we never talk about when two groups are equal, only when they are isomorphic. Likewise in doing topology, we never talk about when two spaces are equal, only when they are homeomorphic. Once you get used to this, it starts to feel like an accident that it even makes sense to ask whether two groups are equal, rather than merely isomorphic. And in fact, it is an accident, or at least dependent on the particular choice of axioms for a set-theoretic foundation; one can give other axiomatizations of set theory, provably equivalent to ZFC, in which it doesn't make sense to ask whether two sets are equal, only whether two elements of a given ambient set are equal. These are sometimes called "categorial" set theories, since the first example was Lawvere's ETCS which axiomatizes the category of sets, but I prefer to call them structural set theories, since there are other versions, like SEAR , which don't require any category theory. Now there do exist categories in which it does make sense to talk about "equality" of objects. For instance, any set X can be regarded as a discrete category $X_d$, whose objects are the elements of X and in which the only morphisms are identities. Moreover, a category is equivalent to one of the form $X_d$, for some set X, iff it is both a groupoid and a preorder, i.e. every morphism is invertible and any parallel pair of morphisms are equal. I call such a category a "discrete category," although some people use that only for the stricter notion of a category isomorphic to some $X_d$. So it becomes tempting to think that one might instead consider "category" to be a fundamental notion, and define "set" to mean a discrete category. Unfortunately, however, what I wrote in the previous paragraph is false: a category is equivalent to one of the form $X_d$, for some set X, iff it is a groupoid and a preorder and small . We can just as well construct a category $X_d$ when X is a proper class, and it will of course still be discrete. In fact, just as a set is the same thing as a small discrete category, a proper class is the same thing as a large discrete category. However, this feels kind of bizarre, because the large categories that arise in practice are almost never of the sort that admit a meaningful notion of "equality" between their objects, and in particular they are almost never discrete. Consider the categories of groups, or rings, or topological spaces, or sets for that matter. 
Outside of set theory, proper classes usually only arise as the class of objects of some large category, which is almost never discrete. The world would make much more sense, from a category-theoretic point of view, if there were no such things as proper classes, a.k.a. discrete large categories --- then we could define "set" to mean "discrete category" and life would be beautiful. Unfortunately, we can't have large categories without having large discrete categories, at least not without restricting the rest of mathematics fairly severely. This is obviously true if we found mathematics on ZFC or NBG or some other traditional "membership-based" or "material" set theory, since there we need a proper class of objects before we can even define a large category. But it's also true if we use a structural set theory, since there are a few naturally and structurally defined large categories that are discrete, such as the category of well-orderings and all isomorphisms between them (the core of the full subcategory of Poset on the well-orderings). Thus Vopěnka's principle, as I stated it above, is a weakened version of the thesis that large discrete categories don't exist: it says that at least they can't exist as full subcategories of locally presentable categories. Since locally presentable categories are otherwise very well-behaved, this is at least reasonable to hope for. In fact, from this perspective, if Vopěnka's principle turns out to be inconsistent with ZFC, then maybe it is ZFC that is at fault! (-:
|
{
"source": [
"https://mathoverflow.net/questions/29302",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1353/"
]
}
|
29,323 |
You're hanging out with a bunch of other mathematicians - you go out to dinner, you're on the train, you're at a department tea, et cetera. Someone says something like "A group of 100 people at a party each receive hats with different prime numbers and ..." For the next few minutes everyone has fun solving the problem together. I love puzzles like that. But there's a problem -- I keep running into the same puzzles over and over. But there must be lots of great problems I've never run into. So I'd like to hear problems that other people have enjoyed, and hopefully everyone will learn some new ones. So: What are your favorite dinner conversation math puzzles? I don't want to provide hard guidelines. But I'm generally interested in problems that are mathematical and not just logic puzzles. They shouldn't require written calculations or a convoluted answer. And they should be fun - with some sort of cute step, aha moment, or other satisfying twist. I'd prefer to keep things pretty elementary, but a cool problem requiring a little background is a-okay. One problem per answer. If you post the answer, please obfuscate it with something like rot13. Don't spoil the fun for everyone else.
|
I really like the following puzzle, called the blue-eyed islanders problem, taken from Professor Tao's blog : "There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours. Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces). If a tribesperson does discover his or her own eye color, then their religion compels them to commit ritual suicide at noon the following day in the village square for all to witness. All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth). Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople). One day, a blue-eyed foreigner visits to the island and wins the complete trust of the tribe. One evening, he addresses the entire tribe to thank them for their hospitality. However, not knowing the customs, the foreigner makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”. What effect, if anything, does this faux pas have on the tribe?" For those of you interested, there is a huge discussion of the problem at http://terrytao.wordpress.com/2008/02/05/the-blue-eyed-islanders-puzzle/ Malik
|
{
"source": [
"https://mathoverflow.net/questions/29323",
"https://mathoverflow.net",
"https://mathoverflow.net/users/27/"
]
}
|
29,333 |
Warmup (you've probably seen this before) Suppose $\sum_{n\ge 1} a_n$ is a conditionally convergent series of real numbers, then by rearranging the terms, you can make "the same series" converge to any real number $x$. To do this, let $P=\{n\ge 1\mid a_n\ge 0\}$ and $N=\{n\ge 1\mid a_n<0\}$. Since $\sum_{n\ge 1} a_n$ converges conditionally, each of $\sum_{n\in P}a_n$ and $\sum_{n\in N}a_n$ diverge and $\lim a_n=0$. Starting with the empty sum (namely zero), build the rearrangement inductively. Suppose $\sum_{i=1}^m a_{n_i}=x_m$ is the (inductively constructed) $m$-th partial sum of the rearrangement. If $x_m\le x$, take $n_{m+1}$ to be the smallest element of $P$ which hasn't already been used. If $x_m> x$, take $n_{m+1}$ to be the smallest element of $N$ which hasn't already been used. Since $\sum_{n\in P}a_n$ diverges, there will be infinitely many $m$ for which $x_m\ge x$, so $n_{m+1}$ will be in $N$ infinitely often. Similarly, $n_{m+1}$ will be in $P$ infinitely often, so we've really constructed a rearrangement of the original series. Note that $|x-x_m|\le \max\{|a_n|\bigm| n\not\in\{n_1,\dots, n_m\}\}$, so $\lim x_m=x$ because $\lim a_n=0$. Suppose $\sum_{n\ge 1}v_n$ is a conditionally convergent series with $v_n\in \mathbb R^k$. Can the sum be rearranged to converge to any given $w\in \mathbb R^k$? Obviously not! If $\lambda$ is a linear functional on $\mathbb R^k$ such that $\sum \lambda(v_n)$ converges absolutely, then $\lambda$ applied to any rearrangement will be equal to $\sum \lambda(v_n)$. So let's also suppose that $\sum \lambda(v_n)$ is conditionally convergent for every non-zero linear functional $\lambda$. Under this additional hypothesis, I'm pretty sure the answer should be "yes".
|
The Levy--Steinitz theorem says the set of all convergent rearrangements of a series of vectors, if nonempty, is an affine subspace of ${\mathbf R}^k$. There is an article on this by Peter Rosenthal in the Amer. Math. Monthly from 1987, called "The Remarkable Theorem of Levy and Steinitz". Also see Remmert's Theory of Complex Functions, pp. 30--31. As an example, taking $k = 2$, suppose $v_n = ((-1)^{n-1}/n,(-1)^{n-1}/n)$. Then the convergent rearrangements fill up the line $y = x$. The linear function $\lambda(x,y) = x-y$ of course kills the series, which makes Anton's observation explicit in this instance. The Rosenthal article, at the end, discusses Anton's question. Indeed if there is no absolute convergence in any direction then the set of all rearranged series is
all of ${\mathbf R}^k$. Note by the above example that this condition is stronger than saying the series in each standard coordinate is conditionally convergent. Rosenthal said this stronger form of the Levy-Steinitz theorem was in the papers by Levy (1905) and Steinitz (1913). He also refers to I. Halperin, Sums of a Series Permitting Rearrangements, C. R. Math Rep. Acad. Sci. Canada VIII (1986), 87--102.
|
{
"source": [
"https://mathoverflow.net/questions/29333",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1/"
]
}
|
29,424 |
Ordinary cohomology on CW complexes is determined by the coefficients. There are (more than) two nice ways to define cohomology for non-CW-complexes: either by singular cohomology or
by defining $\widetilde H^n(X;G) = [X, K(G,n)]$. Are there standard/easy examples where these
two theories differ? One idea that comes to mind is the paper by Milnor and Barratt (about Anomolous Singular
Homology) which says that the $n$-dimensional Hawaiian earring $H^n$ has nontrivial singular
homology in arbitrarily high dimensions. But I don't see an easy way to compute
$[H^n, K(G, m)]$.
|
The Cantor set has exotic zeroth cohomology. Its singular cohomology is the linear dual of its zeroth singular homology, which is the free abelian group on its set of points. Thus its singular cohomology is an uncountable infinite product of $\mathbb Z$. Its represented cohomology is the set of continuous maps to the discrete space $\mathbb Z$, which must factor through a finite quotient. It is a free abelian group on countably many generators.
|
{
"source": [
"https://mathoverflow.net/questions/29424",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3634/"
]
}
|
29,442 |
This question is, in some sense, a variant of this , but for certain cases. The opposite category of an abelian category is abelian. In particular, if $R-mod$ is the category of $R$-modules over a ring $R$ (say left modules), its opposite category is abelian. The Freyd-Mitchell embedding theorem states that this opposite category can be embedded in a category of modules over a ring $S$. This embedding is usually very noncanonical though. Question: Is there any way to choose $S$ based on $R$? My guess is probably not, since these notes cite the opposite category of $R-mod$
as an example of an abelian category which is not a category of modules. I can't exactly tell if they mean "it is (for most $R$) (provably) not equivalent to a category of modules over any ring" or "there is no immediate structure as a module category." If it is the former, how would one prove it? I also have a variant of this question when there is additional structure on the category. The module category $H-mod$ of a Hopf algebra $H$ is a tensor category. A finite-dimensional Hopf algebra can be reconstructed from the tensor category of finite-dimensional modules with a fiber functor via Tannakian reconstruction. Now $(H-mod)^{opp}$ is a tensor category as well satisfying these conditions (namely, the hom-spaces in this category are finite-dimensional). The dual of the initial fiber functor makes sense and becomes a fiber functor from $(H-mod)^{opp} \to \mathrm{Vect}$ (since duality is a contravariant tensor functor on the category of vector spaces). In this case, $(H-mod)^{opp}$ is the representation category of a canonical Hopf algebra $H'$. Question$\prime$ What is $H'$ in terms of $H$?
|
One can prove that for any non-zero ring $R$ the category $R$-Mod$^{op}$ is not a category of modules. Indeed any category of modules is Grothendieck abelian, i.e., it has exact filtered colimits and a generator. So for $R$-Mod$^{op}$ to be a module category, $R$-Mod would also need exact (co)filtered limits and a cogenerator. It turns out that any such category consists of just a single object. I believe this is stated somewhere in Freyd's book Abelian Categories but I am not sure exactly where off the top of my head. Edit: it is page 116. Further edit: For categories of finitely generated modules here is something else, although it is more in the direction of the title of your question than what is in the actual body of the question. Suppose we let $R$ be a commutative noetherian regular ring with unit and let $R$-mod be the category of finitely generated $R$-modules. Then we can get a description of $R$-mod$^{op}$ using duality in the derived category. Since $R$ is regular, every object of $D^b(R) \colon= D^b(R-mod)$ is compact in the full derived category. The point is that
$$RHom(-,R)\colon D^b(R)^{op} \to D^b(R) $$
is an equivalence (usually this is only true for perfect complexes, but here by assumption everything is perfect). So one can look at the image of the standard t-structure (which basically just "filters" complexes by cohomology) under this duality. The heart of the standard t-structure is $R$-mod sitting inside $D^b(R)$ so taking duals gives an equivalence of $R$-mod$^{op}$ with the heart of the t-structure obtained by applying $RHom(-,R)$ to the standard t-structure. In the case of $R =k$ a field then this just pointwise dualizes complexes so we see that it restricts to the equivalence $k$-mod$^{op}\to k$-mod given by the usual duality on finite dimensional vector spaces. As another example consider $\mathbb{Z} $-mod sitting inside of $D^b(\mathbb{Z})$ as the heart of the standard t-structure given by the pair of subcategories of $D^b(\mathbb{Z})$
$$\tau^{\leq 0} = \{X \; \vert \; H^i(X) = 0 \; \text{for} \; i>0\}$$
$$\tau^{\geq 1} = \{X \; \vert \; H^i(X) = 0 \; \text{for} \; i\leq 0\} $$
It is pretty easy to check that $RHom(\Sigma^i \mathbb{Z}^n, \mathbb{Z}) \cong \Sigma^{-i} \mathbb{Z}^n$ and $RHom(\Sigma^i \mathbb{Z}/p^n\mathbb{Z}, \mathbb{Z}) \cong \Sigma^{-i-1}\mathbb{Z}/p^n\mathbb{Z}$ so that this t-structure gets sent to
$$\sigma^{\leq 0} = \{X \; \vert \; H^i(X) = 0 \; \text{for} \; i> 0, \; H^{0}(X) \; \text{torsion}\}$$
$$\sigma^{\geq 1} = \{X \; \vert \; H^i(X) = 0 \; \text{for} \; i< 0, H^0(X) \; \text{torsion free}\} $$
using the fact that objects of $D^b(\mathbb{Z})$ are isomorphic to the sums of their cohomology groups appropriately shifted. So taking the heart we see that
$$\mathbb{Z}-mod^{op} \cong \sigma^{\leq 0} \cap \Sigma\sigma^{\geq 1} = \{X \; \vert \; H^{-1}(X) \; \text{torsion free}, H^0(X) \; \text{torsion}, H^i(X) = 0 \; \text{otherwise}\} $$
with the abelian category structure on the right coming from viewing it as a full subcategory of $D^b(\mathbb{Z})$ with short exact sequences coming from triangles. It is the tilt of $\mathbb{Z}$-mod by the standard torsion theory which expresses every finitely generated abelian group as a torsion and torsion free part.
|
{
"source": [
"https://mathoverflow.net/questions/29442",
"https://mathoverflow.net",
"https://mathoverflow.net/users/344/"
]
}
|
29,475 |
The proof that I have in mind is as follows - $\text{Gal }(\overline{\mathbb Q}/\mathbb Q)$ is a proper uncountable subgroup of the group of permutations on countably many symbols, hence the latter is uncountable. But it needs a lot of jargon from topology and algebra. Is there a neat proof like Cantor's diagonal argument?
|
The fact that any conditionally convergent series [and that such exists] can be rearranged to converge to any given real number $x$ proves that there is an injection $P$ from the reals to the permutations of $\mathbb{N}$. The usual proof (pick positive terms until we exceed $x$, pick negative terms until we "deceed" $x$, repeat) is constructive; if we use a series consisting of easily computable rational numbers (the alternating harmonic series being the canonical example) and a real number $x$ for which we can decide $q<x$ or $q>x$ or $q=x$ for every rational number $q$, we may compute $P(x)(n)$ for every $n$.
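As a concrete sketch of that constructive proof (my own illustration, not part of the original answer), here is the greedy computation of the first $n$ values of the permutation for the alternating harmonic series; the parameter `target` plays the role of $x$:

```python
def rearrangement_prefix(target, n):
    """Greedy rearrangement of 1 - 1/2 + 1/3 - ...: take the smallest unused
    positive term (odd index) while the partial sum is <= target, otherwise
    the smallest unused negative term (even index).  Returns the first n
    indices of the permutation P(target) and the resulting partial sum."""
    perm, total = [], 0.0
    next_odd, next_even = 1, 2
    for _ in range(n):
        if total <= target:
            k, next_odd = next_odd, next_odd + 2      # term +1/k
        else:
            k, next_even = next_even, next_even + 2   # term -1/k
        perm.append(k)
        total += 1.0 / k if k % 2 else -1.0 / k
    return perm, total

perm, total = rearrangement_prefix(0.5, 100_000)
print(perm[:10], total)   # the partial sum is already very close to 0.5
```

After $n$ steps the error is bounded by the largest unused term, so the partial sums converge to the target, and all the procedure ever needs is the ability to decide $q<x$, $q>x$ or $q=x$ for rational $q$, exactly as stated above.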
|
{
"source": [
"https://mathoverflow.net/questions/29475",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2720/"
]
}
|
29,490 |
It is well-known that the number of surjections from a set of size n to a set of size m is quite a bit harder to calculate than the number of functions or the number of injections. (Of course, for surjections I assume that n is at least m and for injections that it is at most m.) It is also well-known that one can get a formula for the number of surjections using inclusion-exclusion, applied to the sets $X_1,...,X_m$, where for each $i$ the set $X_i$ is defined to be the set of functions that never take the value $i$. This gives rise to the following expression: $m^n-\binom m1(m-1)^n+\binom m2(m-2)^n-\binom m3(m-3)^n+\dots$. Let us call this number $S(n,m)$. I'm wondering if anyone can tell me about the asymptotics of $S(n,m)$. A particular question I have is this: for (approximately) what value of $m$ is $S(n,m)$ maximized? It is a little exercise to check that there are more surjections to a set of size $n-1$ than there are to a set of size $n$. (To do it, one calculates $S(n,n-1)$ by exploiting the fact that every surjection must hit exactly one number twice and all the others once.) So the maximum is not attained at $m=1$ or $m=n$. I'm assuming this is known, but a search on the web just seems to lead me to the exact formula. A reference would be great. A proof, or proof sketch, would be even better. Update. I should have said that my real reason for being interested in the value of m for which S(n,m) is maximized (to use the notation of this post) or m!S(n,m) is maximized (to use the more conventional notation where S(n,m) stands for a Stirling number of the second kind) is that what I care about is the rough size of the sum. The sum is big enough that I think I'm probably not too concerned about a factor of n, so I was prepared to estimate the sum as lying between the maximum and n times the maximum.
|
It seems to be the case that the polynomial $P_n(x) =\sum_{m=1}^n
m!S(n,m)x^m$ has only real zeros. (I know it is true that $\sum_{m=1}^n
S(n,m)x^m$ has only real zeros.) If this is true, then the value of $m$
maximizing $m!S(n,m)$ is within 1 of $P'_n(1)/P_n(1)$ by a theorem of
J. N. Darroch, Ann. Math. Stat. 35 (1964), 1317-1321. See also
J. Pitman, J. Combinatorial Theory, Ser. A 77 (1997), 279-303.
By standard combinatorics
$$ \sum_{n\geq 0} P_n(x) \frac{t^n}{n!} = \frac{1}{1-x(e^t-1)}. $$
Hence
$$ \sum_{n\geq 0} P_n(1)\frac{t^n}{n!} = \frac{1}{2-e^t} $$
$$ \sum_{n\geq 0} P'_n(1)\frac{t^n}{n!} = \frac{e^t-1}{(2-e^t)^2}. $$
Since these functions are meromorphic with smallest singularity at $t=\log 2$,
it is routine to work out the asymptotics, though I have not bothered to
do this. Update. It is indeed true that $P_n(x)$ has real zeros. This is because
$(x-1)^nP_n(1/(x-1))=A_n(x)/x$, where $A_n(x)$ is an Eulerian polynomial. It
is known that $A_n(x)$ has only real zeros, and the operation $P_n(x)
\to (x-1)^nP_n(1/(x-1))$ leaves invariant the property of having real
zeros.
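Since the question is ultimately about where $m!S(n,m)$ peaks, here is a small computational check (my addition, not Stanley's) of Darroch's criterion: the maximizing $m$ stays within 1 of $P_n'(1)/P_n(1)$, which in turn appears to be close to $n/(2\log 2)$.

```python
from math import factorial, log

def stirling2_row(n):
    # S(n, m) for m = 0..n via S(n, m) = m*S(n-1, m) + S(n-1, m-1).
    row = [1]
    for k in range(1, n + 1):
        new = [0] * (k + 1)
        for m in range(1, k + 1):
            new[m] = m * (row[m] if m < len(row) else 0) + row[m - 1]
        row = new
    return row

for n in (10, 20, 40, 80):
    S = stirling2_row(n)
    surj = [factorial(m) * S[m] for m in range(n + 1)]            # m! S(n, m)
    m_star = max(range(n + 1), key=surj.__getitem__)              # maximizing m
    darroch = sum(m * surj[m] for m in range(n + 1)) / sum(surj)  # P'_n(1)/P_n(1)
    print(n, m_star, round(darroch, 2), round(n / (2 * log(2)), 2))
```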
|
{
"source": [
"https://mathoverflow.net/questions/29490",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1459/"
]
}
|
29,624 |
Define a growth function to be a monotone increasing function $F: {\bf N} \to {\bf N}$, thus for instance $n \mapsto n^2$, $n \mapsto 2^n$, $n \mapsto 2^{2^n}$ are examples of growth functions. Let's say that one growth function $F$ dominates another $G$ if one has $F(n) \geq G(n)$ for all $n$. (One could instead ask for eventual domination, in which one works with sufficiently large $n$ only, or asymptotic domination, in which one allows a multiplicative constant $C$, but it seems the answers to the questions below are basically the same in both cases, so I'll stick with the simpler formulation.) Let's call a collection ${\mathcal F}$ of growth functions cofinal (originally called complete) if every growth function is dominated by at least one growth function in ${\mathcal F}$. Cantor's diagonalisation argument tells us that a cofinal set of growth functions cannot be countable. On the other hand, the set of all growth functions has the cardinality of the continuum. So, on the continuum hypothesis, a cofinal set of growth functions must necessarily have the cardinality of the continuum. My first question is: what happens without the continuum hypothesis? Is it possible to have a cofinal set of growth functions of intermediate cardinality? My second question is more vague: is there some simpler way to view the poset of growth functions under domination (or asymptotic domination) that makes it easier to answer questions like this? Ideally I would like to "control" this poset in some sense by some other, better understood object (e.g. the first uncountable ordinal, the nonstandard natural numbers, or the Stone-Cech compactification of the natural numbers). EDIT: notation updated in view of responses.
|
For asymptotic domination, commonly denoted ${\leq^*}$ and often called eventual domination, this has been answered by Stephen Hechler, On the existence of certain cofinal subsets of ${}^{\omega }\omega$, MR360266. What you call a complete set is usually called a dominating family. As a poset under eventual domination, a dominating family $\mathcal{F}$ must have the following three properties: (1) $\mathcal{F}$ has no maximal element. (2) Every countable subset of $\mathcal{F}$ has an upper bound in $\mathcal{F}$. (3) $|\mathcal{F}| \leq 2^{\aleph_0}$. Hechler showed that for any abstract poset $(P,{\leq})$ with these three properties, there is a forcing extension where all cardinals and cardinal powers are preserved, and there is a dominating family isomorphic to $(P,{\leq})$. In particular, one can have a wellordered dominating family whose length is any cardinal $\delta$ with uncountable cofinality. In this case, the restriction $\delta \leq 2^{\aleph_0}$ is inessential since one can always add $\delta$ Cohen reals without affecting conditions (1) and (2). However, for arbitrary posets, condition (2) could be destroyed by adding reals. The total domination order is more complex. One can always get a totally dominating family $\mathcal{F}'$ from a dominating family $\mathcal{F}$ by adding $\max(f,n) \in \mathcal{F}'$ for every $f \in \mathcal{F}$ and $n < \omega$. Since $\mathcal{F}$ is infinite, the resulting family $\mathcal{F}'$ has the same size as $\mathcal{F}$. However, there does not appear to be a simple combinatorial characterization of the possibilities for the posets that arise in this way.
|
{
"source": [
"https://mathoverflow.net/questions/29624",
"https://mathoverflow.net",
"https://mathoverflow.net/users/766/"
]
}
|
29,644 |
The well known "Sum of Squares Function" tells you the number of ways you can represent an integer as the sum of two squares. See the link for details, but it is based on counting the factors of the number N into powers of 2, powers of primes = 1 mod 4 and powers of primes = 3 mod 4. Given such a factorization, it's easy to find the number of ways to decompose N into two squares. But how do you efficiently enumerate the decompositions? So for example, given N=2*5*5*13*13=8450 , I'd like to generate the four pairs: 13*13+91*91=8450 23*23+89*89=8450 35*35+85*85=8450 47*47+79*79=8450 The obvious algorithm (I used for the above example) is to simply take i=1,2,3,...,$\sqrt{N/2}$ and test if (N-i*i) is a square. But that can be expensive for large N. Is there a way to generate the pairs more efficiently? I already have the factorization of N, which may be useful. (You can instead iterate between $i=\sqrt{N/2}$ and $\sqrt{N}$ but that's just a constant savings, it's still $O(\sqrt N)$.
|
The factorization of $N$ is useful, since $$(a^2+b^2)(c^2+d^2)=(ac+bd)^2+(ad-bc)^2$$ There are good algorithms for expressing a prime as a sum of two squares or, what amounts to the same thing, finding a square root of minus one modulo $p$. See, e.g., http://www.emis.de/journals/AMEN/2005/030308-1.pdf Edit: Perhaps I should add a word about solving $x^2\equiv-1\pmod p$. If $a$ is a quadratic non-residue (mod $p$) then we can take $x\equiv a^{(p-1)/4}\pmod p$. In practice, you can find a quadratic non-residue pretty quickly by just trying small numbers in turn, or trying (pseudo-)random numbers.
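As a concrete sketch of the suggested approach (my own code, not part of the original answer), the following finds $x$ with $x^2\equiv-1\pmod p$ exactly as described, extracts a two-square representation of each prime by the classical Hermite-Serret Euclidean descent, and then combines representations of the prime factors with the displayed identity, using both sign choices:

```python
def prime_two_squares(p):
    """Write a prime p = 1 (mod 4) as a pair (a, b) with a*a + b*b == p."""
    a = 2
    while pow(a, (p - 1) // 2, p) != p - 1:   # find a quadratic non-residue
        a += 1
    x = pow(a, (p - 1) // 4, p)               # then x*x = -1 (mod p)
    r, s = p, x
    while s * s > p:                          # Hermite-Serret descent
        r, s = s, r % s
    return s, r % s

def combine(ab, cd):
    """Brahmagupta-Fibonacci identity, with both sign choices."""
    (a, b), (c, d) = ab, cd
    return {tuple(sorted((abs(a*c + b*d), abs(a*d - b*c)))),
            tuple(sorted((abs(a*c - b*d), abs(a*d + b*c))))}

reps = {(1, 1)}                                 # 2 = 1^2 + 1^2
for q in (5, 5, 13, 13):                        # remaining prime factors of 8450
    reps = {r for s in reps for r in combine(s, prime_two_squares(q))}
print(sorted(reps))
```

For N = 8450 this recovers the pairs listed in the question, and it also turns up 65*65+65*65=8450, a fifth decomposition that the question's list omits.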
|
{
"source": [
"https://mathoverflow.net/questions/29644",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7107/"
]
}
|
29,676 |
Let ${\bf N}^\omega = \bigcup_{m=1}^\infty {\bf N}^m$ denote the space of all finite sequences $(N_1,\ldots,N_m)$ of natural numbers. For want of a better name, let me call a family ${\mathcal T} \subset {\bf N}^\omega$ a blocking set if every infinite sequence $N_1,N_2,N_3,N_4,\ldots$ of natural numbers must necessarily contain a blocking set $(N_1,\ldots,N_m)$ as an initial segment. (For the application I have in mind, one might also require that no element of a blocking set is an initial segment of any other element, but this is not the most essential property of these sets.) One can think of a blocking set as describing a machine that takes a sequence of natural number inputs, but always halts in finite time; one can also think of a blocking set as defining a subtree of the rooted tree ${\bf N}^\omega$ in which there are no infinite paths. Examples of blocking sets include All sequences $N_1,\ldots,N_m$ of length $m=10$. All sequences $N_1,\ldots,N_m$ in which $m = N_1 + 1$. All sequences $N_1,\ldots,N_m$ in which $m = N_{N_1+1}+1$. The reason I happened across this concept is that such sets can be used to pseudo-finitise a certain class of infinitary statements. Indeed, given any sequence $P_m(N_1,\ldots,N_m)$ of $m$-ary properties, it is easy to see that the assertion There exists an infinite sequence $N_1, N_2, \ldots$ of natural numbers such that $P_m(N_1,\ldots,N_m)$ is true for all $m$. is equivalent to For every blocking set ${\mathcal T}$, there exists a finite sequence $(N_1,\ldots,N_m)$ in ${\mathcal T}$ such that $P_m(N_1,\ldots,N_m)$ holds. (Indeed, the former statement trivially implies the latter, and if the former statement fails, then a counterexample to the latter can be constructed by setting the blocking set ${\mathcal T}$ to be those finite sequences $(N_1,\ldots,N_m)$ for which $P_m(N_1,\ldots,N_m)$ fails.) Anyway, this concept seems like one that must have been studied before, and with a standard name. (I only used "blocking set" because I didn't know the existing name in the literature.) So my question is: what is the correct name for this concept, and are there some references regarding the structure of such families of finite sequences? (For instance, if we replace the natural numbers ${\bf N}$ here by a finite set, then by Konig's lemma, a family is blocking if and only if there are only finitely many finite sequences that don't contain a blocking initial segment; but I was unable to find a similar characterisation in the countable case.)
|
Intuitionists use the name "bar" for what you called a blocking set. The relevant context is "bar induction," the principle saying that, if (1) a property has been proved for all elements of a bar and (2) it propagates in the sense that, whenever it holds for all the
one-term extensions of a finite sequence s then it holds for s itself,
then this property holds of the empty sequence. (I'm omitting some technicalities here that distinguish different versions of bar induction.) There's also a closely related notion in infinite combinatorics, called a "barrier"; this is a collection $B$ of finite subsets of $\mathbf N$ such that no member of $B$ is included in another and every infinite subset of $\mathbf N$ has an initial segment in $B$. This is the subject of a partition theorem due to Nash-Williams: If a barrier is partitioned into two pieces, then there is an infinite $H\subseteq\mathbf N$ such that one of the pieces includes a barrier for $H$ (meaning that every infinite subset of $H$ has an initial segment in that piece).
|
{
"source": [
"https://mathoverflow.net/questions/29676",
"https://mathoverflow.net",
"https://mathoverflow.net/users/766/"
]
}
|
29,734 |
If an entire function is bounded for all $z \in \mathbb{C}$, then it's a constant by Liouville's theorem. Of course an entire function can be bounded on lines through the origin $z=r \exp(i \phi), \phi= \text{const.}, r \in \mathbb{R}$ without being constant (e.g. $\cos(z^n)$ is bounded on $n$ lines). What is the maximum cardinality of the set of "directions" $\phi$ for which an entire function can be bounded without being constant? From intuition I would expect only finitely many directions. Is this correct? (Picard's second theorem says that in any open set containing $\infty$ every value with possibly a single exception is taken infinitely often by an entire non-constant function. Here I'm asking a somehow "orthogonal" question, looking for lines through $\infty$ where an entire non-constant function is bounded.)
|
Newman gave an example in 1976 of a non-constant entire function bounded on each line through the origin in " An entire function bounded in every direction ". I like the second sentence of the article: This is exactly what is needed to confuse students who have just struggled to comprehend the meaning of Liouville's theorem. Armitage gave examples in 2007 of non-constant entire functions that go to zero in every direction in "Entire functions that tend to zero on every line". For this I have only seen the MR review . (If you don't have MathSciNet access, the link should still give you the publication information to find the article.) Update: I just decided to take a look at the Armitage paper, and the introduction was enlightening: Although every bounded entire (holomorphic) function on $\mathbb{C}$
is constant (Liouville’s theorem), it has been known for more than a hundred years
that there exist nonconstant entire functions $f$ such that $f(z) → 0$ as $z →∞$ along
every line through 0 (see, for example, Lindelöf’s book [10, pp. 119–122] of 1905). And it has been known for more than eighty years that such functions can tend to 0
along any line whatsoever (see Mittag-Leffler [11], Grandjot [8], and Bohr [4]). Further
references to related work are given in Burckel’s review [5] of Newman’s note [12].
Entire functions with radial decay are used by Beardon and Minda [3] and Ullrich [14]
in studies of pointwise convergent sequences of entire functions. Armitage goes on to mention that Mittag-Leffler and Grandjot also gave explicit constructions, but states, "The examples given in what follows may nevertheless
be of some interest because of their comparative simplicity." The examples are
$$F(z)=\exp\left(-\int_0^\infty t^{-t}\cosh(tz^2)dt\right) - \exp\left(-\int_0^\infty t^{-t}\cosh(2tz^2)dt\right)$$ and
$$G(z)=\int_0^\infty e^{i\pi t}t^{-t}\cosh(t\sqrt{z})dt\int_0^\infty e^{i\pi t}t^{-t}\cos(t\sqrt{z})dt .$$
|
{
"source": [
"https://mathoverflow.net/questions/29734",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6415/"
]
}
|
29,737 |
I am currently trying to understand Cech cohomology. Five questions arose, and I would be glad for help. In what follows $X$ is a topological space. I really like Dugger's and Isaksen's paper "Topological Hypercovers and ...". They prove for an arbitrary open cover $U=(U_a)_{a\in A}$ of $X$ the weak equivalence
$$
\operatorname{hocolim} ~C(U)\to X
$$
where $C(U)$ is the usual simplicial space of Cech, namely
$$
\dots\to\coprod_{(b,c)\in A\times A}U_b\cap U_c\to\coprod_{a\in A}U_a
$$
Related statements are due to Segal. If $U$ is now a cover with every iterated intersection contractible, we have a weak equivalence
$$
\operatorname{hocolim} ~C(U)\to \operatorname{hocolim} ~\pi_0C(U)
$$
The space $\pi_0C(U)$ is a simplicial set. You can find such a cover for every locally contractible space, for instance. Dugger and Isaksen build the category $OpCov(X)$. This has open covers of $X$ as objects, and a morphism is a mapping between the index sets $f:A\to B$ together with, for every $a$ in $A$, a mapping $U_a\to V_{f(a)}$. One can also build the category $Cov(X)$ with the same objects but with only strict containments of covers as morphisms. Are then
$$
\operatorname{hocolim} ~C(U)\sim \lim_{U\in Cov(X)}~\operatorname{hocolim} ~\pi_0C(U)\sim \lim_{U\in OpCov(X)}~\operatorname{hocolim} ~\pi_0C(U)
$$
all weakly equivalent for locally contractible $X$? Why does one define Cech cohomology as $H^n(X)=\operatorname{colim} H^n(\operatorname{hocolim} C(U))$ with the colimit over $Cov(X)^{op}$? Is this the right definition? Why not the limit over $Cov(X)$ instead of the colimit over the opposite category? What is the problem with Cech homology? One defines $\hat H^n(X)=\operatorname{colim} ~H^n(C(U))$ as Cech cohomology. Then for a locally contractible $X$ it coincides with singular cohomology. Why not define $\hat H_n(X)=\lim ~H_n(C(U))$? I have read that it is because the limit functor does not respect exact sequences. Does it mean that $\hat H_n(X)$ does not coincide with singular homology for locally contractible $X$? Can one see directly, without going through sheaf cohomology, that singular cohomology and Cech cohomology are the same for locally contractible $X$? I have written down complexes for the boundary of the two-simplex with a nice cover, and yes, the cohomology is the same. Is there a kind of Mayer-Vietoris argument? Here is my last question. $X$ is a good space now. Where is the mistake in the following equality? I consider Top as a topological model category, and the crucial step is perhaps the one which looks like a smallness argument on the homotopy category.
$$
\begin{array}{rcl}
\pi_0(X)&=&[S^0, \operatorname{hocolim} ~C(U)]\\
&=& \operatorname{hocolim}[S^0, C(U)]\\
&=& \operatorname{hocolim} \pi_0(C(U))\\&=&X
\end{array}
$$
Here $U$ is a cover with everything contractible as above. Thank you for enduring this long read, and for your help.
|
|
{
"source": [
"https://mathoverflow.net/questions/29737",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7057/"
]
}
|
29,750 |
Riemann-sums can e.g. be very intuitively visualized by rectangles that approximate the area under the curve.
See e.g. Wikipedia:Riemann sum. Because of the unbounded total variation but bounded quadratic variation of Brownian motion, the Itô integral has an extra term (sometimes called the Itô correction term). The standard intuition for this is a Taylor expansion, sometimes Jensen's inequality. But normally there is more than one intuition for a mathematical phenomenon; e.g. in Thurston's paper, "On Proof and Progress in Mathematics", he gives seven different elementary ways of thinking about the derivative. My question: Could you give me some other intuitions for the Itô integral (and/or Itô's lemma as the so-called "chain rule of stochastic calculus")? The more the better, and from different fields of mathematics, to see the big picture and connections. I am esp. interested in new intuitions and intuitions that are not so well known.
|
I find the intuitive explanation in Paul Wilmott on Quantitative Finance particularly appealing. Fix a small $h>0$ . The stochastic integral $$\int_0^{h} f(W(t))\ dW(t)=\lim\limits_{N\to\infty}\sum\limits_{j=1}^{N}
f\left(W(t_{j-1})\right)\left(W(t_{j})-W({t_{j-1}})\right),\quad t_j= h\frac{j}{N},$$ involves adding up an infinite number of random variables. Let's substitute every term $f\left(W(t_{j-1})\right)$ with its formal Taylor expansion. Then there are several contributions to the sum: those that are a sum of random variables and those that are a sum of the squares of random variables , and then there are higher-order terms . Add up a large number of independent random variables and the Central Limit Theorem kicks in, the end result being a normally distributed random variable. Let's calculate its mean and standard deviation. When we add up $N$ terms that are normal, each with a mean of $0$ and a standard deviation of $\sqrt{h/N}$ , we end up with another normal, with a mean of $0$ and a standard deviation of $\sqrt{h}$ . This is our $dW$ . Notice how the $N$ disappears in the limit. Now, if we add up the $N$ squares of the same normal terms then we get something which is normally distributed with a mean of $$N\left(\sqrt{\frac{h}{N}}\right)^2=h$$ and a standard deviation which is $h\sqrt{2/N}.$ This tends to zero as $N$ gets larger. In this limit we end up with, in a sense, our $dW^2(t)=dt$ , because the randomness as measured by the standard deviation disappears leaving us just with the mean $dt$ . The higher-order terms have means and standard deviations that are too small, disappearing
rapidly in the limit as $N\to\infty$ .
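A small numerical sketch of the two sums described above (not part of the original answer; the horizon $h$, the step count $N$, and the number of trials are arbitrary choices). The sum of the increments behaves like a normal variable with standard deviation $\sqrt{h}$, while the sum of their squares concentrates at $h$ with a standard deviation of order $h\sqrt{2/N}$:

```python
import numpy as np

rng = np.random.default_rng(0)
h, N, trials = 1.0, 10_000, 2_000

# Brownian increments over [0, h]: i.i.d. normal, mean 0, std sqrt(h/N)
dW = rng.normal(0.0, np.sqrt(h / N), size=(trials, N))

linear_sum = dW.sum(axis=1)            # W(h): normal, mean 0, std sqrt(h)
quadratic_sum = (dW ** 2).sum(axis=1)  # quadratic variation: concentrates at h

print("sum dW   : mean %+.4f, std %.4f  (expect 0 and %.4f)"
      % (linear_sum.mean(), linear_sum.std(), np.sqrt(h)))
print("sum dW^2 : mean %+.4f, std %.4f  (expect %.4f and about %.4f)"
      % (quadratic_sum.mean(), quadratic_sum.std(), h, h * np.sqrt(2 / N)))
```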
|
{
"source": [
"https://mathoverflow.net/questions/29750",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1047/"
]
}
|
29,866 |
I was wondering what would be the best way to present your paper at a conference, if your paper is selected for a "short communication" lasting about 15 minutes.
Should you concentrate on the main results or the proofs?
And what should a first-time presenter be wary of? Thanks in advance.
|
The first priority is to state your main results and explain why they are interesting (e.g. how they fit in with related work). With only 15 minutes you do not have much time to discuss proofs, but it is nice to give a brief outline of the proof of your main result and what is involved. As a first-time presenter, I would watch out for the following: First, as someone commented, it is crucial to not go over time. Don't try to cram too much in. Second, watch out for the mechanical aspects of the presentation, e.g. is your print large enough to read, and is your voice loud enough to hear. Third, strive to emphasize the most important points and not get lost in less important details. This is especially important when you only have a short time to talk. Fourth, know your audience, so that you have some idea what you can assume is known and what you need to review.
|
{
"source": [
"https://mathoverflow.net/questions/29866",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7144/"
]
}
|
29,869 |
This question came up when I was doing some reading into convolution squares of singular measures. Recall a function $f$ on the torus $T = [-1/2,1/2]$ is said to be $\alpha$-Hölder (for $0 < \alpha < 1$) if $\sup_{t \in \mathbb{T}} \sup_{h \neq 0} |h|^{-\alpha}|f(t+h)-f(t)| < \infty$. In this case, define this value, $\omega_\alpha(f) = \sup_{t \in \mathbb{T}} \sup_{h \neq 0} |h|^{-\alpha}|f(t+h)-f(t)|$. This behaves much like a metric, except functions differing by a constant will not differ in $\omega_\alpha$. My primary question is this: 1) Is it true that the smooth functions are "dense" in the space of continuous $\alpha$-Hölder functions, i.e., for a given continuous $\alpha$-Hölder $f$ and $\varepsilon > 0$, does there exists a smooth function $g$ with $\omega_\alpha(f-g) < \varepsilon$? To be precise, where this came up was worded somewhat differently. Suppose $K_n$ are positive, smooth functions supported on $[-1/n,1/n]$ with $\int K_n = 1$. 2) Given a fixed continuous function $f$ which is $\alpha$-Hölder and $\varepsilon > 0$, does there exist $N$ such that $n \geq N$ ensures $\omega_\alpha(f-f*K_n) < \varepsilon$? This second formulation is stronger than the first, but is not needed for the final result, I believe. To generalize, fix $0 < \alpha < 1$ and suppose $\psi$ is a function defined on $[0,1/2]$ that is strictly increasing, $\psi(0) = 0$, and $\psi(t) \geq t^{\alpha}$. Say that a function $f$ is $\psi$-Hölder if $\sup_{t \in \mathbb{T}} \sup_{h \neq 0} \psi(|h|)^{-1}|f(t+h)-f(t)| < \infty$. In this case, define this value, $\omega_\psi(f) = \sup_{t \in \mathbb{T}} \sup_{h \neq 0} \psi(|h|)^{-1}|f(t+h)-f(t)|$. Then we can ask 1) and 2) again with $\alpha$ replaced by $\psi$. I suppose the motivation would be that the smooth functions are dense in the space of continuous functions under the usual metrics on function spaces, and this "Hölder metric" seems to be a natural way of defining a metric of the equivalence classes of functions (where $f$ and $g$ are equivalent if $f = g+c$ for a constant $c$). Any insight would be appreciated.
|
In our PDE seminar, we met the same kinds of questions, and
we think the answer is "NO": the smooth functions are NOT
dense in Hölder spaces. An example is,
$$f(x) = |x|^{1/2} \quad x \in (-1,1)$$
it is easy to check that $f$ is $1/2$-Hölder continuous. In detail:
for any $g \in C^{1}((-1,1))$, the derivative of $g$ is continuous
at $0$, so we have
$$
\lim_{x \to 0} \frac{|g(x)-g(0)|}{|x|^{1/2}} = \lim_{x \to 0}
|x|^{1/2}\frac{|g(x)-g(0)|}{|x|} = 0
$$ and
$$
\omega_{1/2}(g-f) \ge \frac{|(g(x)-f(x))-(g(0)-f(0))|}{|x|^{1/2}} \ge
\Bigl|\frac{|g(x)-g(0)|}{|x|^{1/2}}-\frac{|f(x)-f(0)|}{|x|^{1/2}}\Bigr|
$$
but $$
\frac{|f(x)-f(0)|}{|x|^{1/2}}=1, \qquad x \in (-1,1),\ x \neq 0,
$$
Letting $x \to 0$, we obtain $\omega_{1/2}(g-f) \ge 1$. Thus, for any $g \in C^{1}((-1,1))$, we have $\omega_{1/2}(g-f)\ge 1$. For $0< \alpha <1$ we can make similar examples,
but when $\alpha = 1$, the proof of the counter-example
may be different.
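A small numerical illustration of this obstruction (not part of the original answer; the smooth approximant $g_\varepsilon(x)=(x^2+\varepsilon^2)^{1/4}$ is an arbitrary choice). However small $\varepsilon$ is, the $1/2$-Hölder quotient of $f-g_\varepsilon$ based at $0$ stays pinned near $1$, in line with the bound $\omega_{1/2}(g-f)\ge 1$ above:

```python
import numpy as np

f = lambda x: np.sqrt(np.abs(x))

def holder_quotient_at_0(u, xs):
    # sup over the sampled x of |u(x) - u(0)| / |x|^{1/2}; this is a lower
    # bound for the full seminorm omega_{1/2}(u)
    return np.max(np.abs(u(xs) - u(0.0)) / np.sqrt(np.abs(xs)))

xs = np.logspace(-12, 0, 2000)                   # sample points in (0, 1]

for eps in [1e-1, 1e-3, 1e-6]:
    g = lambda x, e=eps: (x**2 + e**2) ** 0.25   # smooth approximation of f
    diff = lambda x, ge=g: f(x) - ge(x)
    print(f"eps = {eps:.0e}:  lower bound for omega_1/2(f - g_eps) ≈ "
          f"{holder_quotient_at_0(diff, xs):.3f}")
```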
|
{
"source": [
"https://mathoverflow.net/questions/29869",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7165/"
]
}
|
29,949 |
In short, my question is: What is the shortest computer program for which it is not known whether or not the program halts? Of course, this depends on the description language; I also have the following vague question: To what extent does this depend on the description language? Here's my motivation, which I am sure is known but I think is a particularly striking possibility for an application to mathematics: Let $P(n)$ be a statement about the natural numbers such that there exists a Turing machine $T$ which can decide whether $P(n)$ is true or false. (That is, this Turing machine halts on every natural number $n$, printing "True" if $P(n)$ is true and "False" otherwise.) Then the smallest $n$ such that $P(n)$ is false has low Kolmogorov complexity , as it will be printed by a program that tests $P(1)$, then $P(2)$, and so on until it reaches $n$ with $P(n)$ false, and prints this $n$. Thus the Kolmogorov complexity of the smallest counterexample to $P$ is bounded above by $|T|+c$ for some (effective) constant $c$. Let $L$ be the length of the shortest computer program for which the halting problem is not known. Then if $|T|+c < L$, we may prove the statement $\forall n, P(n)$ simply by executing all halting programs of length less than or equal to $|T|+c$, and running $T$ on their output. If $T$ outputs "True" for these finitely many numbers, then $P$ is true. Of course, the Halting problem places limits on the power of this method. Essentially, this question boils down to: What is the most succinctly stateable open conjecture? EDIT: By the way, an amazing implication of the argument I give is that to prove any theorem about the natural numbers, it suffices to prove it for finitely many values (those with low Kolmogorov complexity). However, because of the Halting problem it is impossible to know which values! If anyone knows a reference for this sort of thing I would also appreciate that.
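To make the "smallest counterexample has low Kolmogorov complexity" point concrete, here is a toy sketch (my own addition, not part of the question): the program below is short, and its length does not depend on how large the counterexample it would print turns out to be. A Goldbach-style predicate stands in for an arbitrary decidable $P$.

```python
from itertools import count

def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def P(n):
    # placeholder decidable property: "2n + 4 is a sum of two primes"
    m = 2 * n + 4
    return any(is_prime(p) and is_prime(m - p) for p in range(2, m // 2 + 1))

for n in count(1):
    if not P(n):      # halts exactly when a counterexample exists,
        print(n)      # and then prints the smallest one
        break
```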
|
There is a 5-state, 2-symbol Turing machine for which it is not known whether it halts. See http://en.wikipedia.org/wiki/Busy_beaver .
|
{
"source": [
"https://mathoverflow.net/questions/29949",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6950/"
]
}
|
29,970 |
Grothendieck famously objected to the term "perverse sheaf" in Récoltes et Semailles , writing "What an idea to give such a name to a mathematical thing! Or to any other thing or living being, except in sternness towards a person—for it is evident that of all the ‘things’ in the universe, we humans are the only ones to whom this term could ever apply.” (Link here , in an excellent article " Comme Appelé du Néant : The life of Alexandre Grothendieck", part 2, by Allyn Jackson.) But a Google search for '"perverse sheaf" etymology' gives only nine hits, none of which seem informative. What is the etymology of the term "perverse sheaf"?
|
When MacPherson and I first started thinking about intersection homology, we realized that there was a number that measured the "badness" of a cycle with respect to a stratum. This number had the property that when you (transversally) intersected two cycles, their
"badness" would add. The best situation occurs for cocycles, in which case that number was zero, and the intersection of two cocycles was again a cocycle. The worst situation was for ordinary homology, in which case that number could be as large as the codimension of the stratum. In that case, the intersection of two cycles could even fail to be a cycle. After a while it became clear that we needed a name for this number and we tried "degeneracy", "gap", etc., but nothing seemed to fit. It seemed that the bad cycles were being "obstinate", but "obstinateness" did not sound reasonable. Finally we said, "let's just call it the perversity, and we'll find a better word later". We tried again later, with no success. (We did not realize that in some languages the word is obscene.) When we first went to talk with Dennis Sullivan and John Morgan about these ideas, we were calling the resulting groups "perverse homology", but Sullivan suggested the alternative, "intersection homology", which seemed fine with us. This was 1974-75. Later, when it was discovered that, for any perversity, there is an abelian category of sheaves, whose simple objects are the intersection cohomology sheaves (with that perversity) of closures of strata, Deligne coined the term "faisceaux pervers".
|
{
"source": [
"https://mathoverflow.net/questions/29970",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6950/"
]
}
|
30,035 |
Recall that the scalar curvature of a Riemannian manifold is given by the trace of the Ricci curvature tensor. I will now summarize everything that I know about scalar curvature in three sentences: The scalar curvature at a point relates the volume of an infinitesimal ball centered at that point to the volume of the ball with the same radius in Euclidean space. There are no topological obstructions to negative scalar curvature. On a compact spin manifold of positive scalar curvature, the index of the Dirac operator vanishes (equivalently, the $\hat{A}$ genus vanishes). The third item is of course part of a larger story - one can use higher index theory to produce more subtle positive scalar curvature obstructions (e.g. on non-compact manifolds) - but all of these are variations on the compact case. I am also aware that the scalar curvature is an important invariant in general relativity, but that is not what I want to ask about here. This is what I would like to know: Are there any interesting theorems about metrics with constant scalar curvature? For example, are there topological obstructions to the existence of constant scalar curvature metrics, or are there interesting geometric consequences of constant scalar curvature? Can anything be said about manifolds with scalar curvature bounds (other than the result I quoted above about spin manifolds with positive scalar curvature), analogous to the plentiful theorems about manifolds with sectional curvature bounds? (Thus additional hypotheses like simple connectedness are allowed.) Is anything particular known about positive scalar curvature for non-spin manifolds?
|
The Kazdan-Warner theorem goes a long way toward answering the first and second questions. (For notes typed up by Kazdan, see http://www.math.upenn.edu/~kazdan/japan/japan.pdf .) Here's what it says (taken almost verbatim from the notes, page 93): Divide the class of all closed manifolds (edit: of dimension > 2. See comments) into 3 types: I. Those which admit a metric of nonnegative scalar curvature which is positive somewhere. II. Those which don't but admit a metric of 0 scalar curvature. III. All other closed manifolds. The theorem is that if $M$ is in class I, then any $f:M\rightarrow\mathbb{R}$ is the scalar curvature of some metric. If $M$ is in class II, then $f:M\rightarrow\mathbb{R}$ is the scalar curvature of some metric iff it's identically 0 or negative somewhere. If $M$ is in class III, then $f:M\rightarrow\mathbb{R}$ is the scalar curvature of some metric iff it's negative somewhere. In particular, every closed manifold has a metric of constant negative scalar curvature. Those in class I or II have a metric of 0 scalar curvature, and those in class I have a metric of constant positive scalar curvature.
|
{
"source": [
"https://mathoverflow.net/questions/30035",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4362/"
]
}
|
30,042 |
Let $\mathfrak{g} \subset \mathfrak{gl}_n$ be one of the classical real or complex semisimple Lie algebras. If $g \in \mathfrak{g}$, then $g$ has a Jordan decomposition $g = g_s + g_n$ with $g_s$ semisimple and $g_n$ nilpotent, and $[g_s,g_n]=0$. The elements $g_s,g_n$, which a priori are just in $\mathfrak{gl}_n$, are both in $\mathfrak{g}$ again. There are various middle-brow general ways to see this (for one, use that $\mathfrak{g}$ is algebraic), but for concrete choices of $\mathfrak{g}$ it's basically elementary, as follows. One knows from the construction of the Jordan decomposition that $g_s,g_n$ are both polynomials in $g$ (different polynomials for different $g$, of course), and (EDIT) you can rig the construction so that these polynomials are odd. The Lie algebra $\mathfrak{g}$ is the subspace of $\mathfrak{gl}_n$ cut out by conditions like $\mathrm{trace}(g)=0$, or $Jg = -g^{t} J$ for some matrix $J$, and so forth. The condition $\mathrm{trace}(g)=0$ is always true for $g_n$, so it's true for $g_s$ if true for $g$. The condition $Jg=-g^t J$ is visibly true for odd $p(g)$ if true for $g$, so if true for $g$ then it's true for both $g_s$ and $g_n$. Thus $g_s$ and $g_n$ visibly satisfy whatever conditions $g$ is required to satisfy, and so are contained in $\mathfrak{g}$. (This might seem lowbrow but in fact I think this is basically the idea of the proof that Fulton-Harris give for general semisimple Lie algebras.) Now suppose instead that $G$ is a real or complex linear Lie group with Lie algebra $\mathfrak{g}$. This time the Jordan decomposition is $g = g_s g_u$ with $g_u$ unipotent, and indeed $g_s$ and $g_u$ are still in $G$. But if you try to make the same lowbrow argument as in the Lie algebra case, it appears to die horribly (a condition like $g^t = g^{-1}$ certainly need not be preserved by taking a polynomial in $g$). My question is, is there an elementary way to rescue it? (In particular, something other than just the general argument for algebraic groups.) Obviously you're fine for elements $g$ in the image of the exponential map, so the issue is passing to the whole group. A caveat is that I do $\textit{not}$ want to assume that $G$ is connected.
|
|
{
"source": [
"https://mathoverflow.net/questions/30042",
"https://mathoverflow.net",
"https://mathoverflow.net/users/379/"
]
}
|
30,048 |
Suppose we have an $m \times n$ matrix $A$, with $m\lt n$, and an $m \times 1$ vector $b$. Are there existence and uniqueness conditions characterizing nonnegative solutions of the system of linear equations $Ax=b$? i.e., when is there an $x\geq 0$ such that $Ax=b$? I'm sure it is a very well-known result I'm after here but I can't seem to find the answer easily. Any references would be helpful. This ( http://www.jstor.org/pss/1968384 ) looks related but it is not free.
|
|
{
"source": [
"https://mathoverflow.net/questions/30048",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7205/"
]
}
|
30,113 |
The classifying space of the nth symmetric group $S_n$ is well-known to be modeled by the space of subsets of $R^\infty$ of cardinality $n$. Various subgroups of $S_n$ have related models. For example, $B(S_i \times S_j)$ is modeled by subsets of $R^\infty$ of cardinality $i + j$ with $i$ points colored red and $j$ points colored blue. More fun: the wreath product $S_i \int S_j \subset S_{ij}$ has classifying space modeled by $ij$ points partitioned into $i$ sets of cardinality $j$ (but these sets are not "colored"). My question: is there a geometric model, preferably related to these, for classifying spaces of alternating groups? [Note: since any finite group is a subgroup of a symmetric group one wouldn't expect to find geometric models of arbitrary subgroups, but alternating groups seem special enough...]
|
$n$ linearly independent points in $R^\infty$ together with an orientation of the $n$-plane which they span.
|
{
"source": [
"https://mathoverflow.net/questions/30113",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4991/"
]
}
|
30,156 |
At the end of this month I start teaching complex analysis to
2nd year undergraduates, mostly from engineering but some from
science and maths. The main applications for them in future
studies are contour integrals and Laplace transform, but this should be a "real" complex analysis course which I could later
refer to in honours courses. I am now confident (after this
discussion , especially Gauss’s complaints given in Keith’s comment )
that the name "complex" is discouraging to average students. Why do we need to study numbers which do not belong to the real world? We all know that the thesis is wrong and I have in mind some examples
where the use of functions of a complex variable simplifies the solution considerably
(I give two below). The drawback is that all of them assume some
knowledge from students already. So I would be happy to learn elementary examples which may
convince students that complex numbers and functions of a
complex variable are useful. As this question runs in the community wiki mode,
I would be glad to see one example per answer. Thank you in advance! Here are the two promised examples. I was reminded of the second one by several answers and comments about trigonometric functions (and also by the notification that "the bounty on your question Trigonometry related to Rogers–Ramanujan identities expires within three days"; it seems to be harder than I expected). Example 1. What is the Fourier expansion of the (unbounded) periodic function $$
f(x)=\ln\Bigl\lvert\sin\frac x2\Bigr\rvert\ ?
$$ Solution. The function $f(x)$ is periodic with period $2\pi$ and has (logarithmic) singularities at the
points $2\pi k$ , $k\in\mathbb Z$ . Consider the function on the interval $x\in[\varepsilon,2\pi-\varepsilon]$ .
The series $$
\sum_{n=1}^\infty\frac{z^n}n, \qquad z=e^{ix},
$$ converges for all values $x$ from the interval.
Since $$
\Bigl\lvert\sin\frac x2\Bigr\rvert=\sqrt{\frac{1-\cos x}2}
$$ and $\operatorname{Re}\ln w=\ln\lvert w\rvert$ , where we choose $w=\frac12(1-z)$ ,
we deduce that $$
\operatorname{Re}\Bigl(\ln\frac{1-z}2\Bigr)=\ln\sqrt{\frac{1-\cos x}2}
=\ln\Bigl\lvert\sin\frac x2\Bigr\rvert.
$$ Thus, $$
\ln\Bigl\lvert\sin\frac x2\Bigr\rvert
=-\ln2-\operatorname{Re}\sum_{n=1}^\infty\frac{z^n}n
=-\ln2-\sum_{n=1}^\infty\frac{\cos nx}n.
$$ As $\varepsilon>0$ can be taken arbitrarily small,
the result remains valid for all $x\ne2\pi k$ . Example 2. Let $p$ be an odd prime number. $\newcommand\Legendre{\genfrac(){}{}}$ For an integer $a$ relatively prime to $p$ ,
the Legendre symbol $\Legendre ap$ is $+1$ or $-1$ depending on whether the congruence $x^2\equiv a\pmod{p}$ is solvable or not.
Using the elementary result (a consequence of Fermat's little theorem) that $$
\Legendre ap \equiv a^{(p-1)/2}\pmod p,
\tag{*}\label{star}
$$ show that $$
\Legendre 2p=(-1)^{(p^2-1)/8}.
$$ Solution. In the ring $\mathbb Z+\mathbb Zi=\Bbb Z[i]$ , the binomial formula implies $$
(1+i)^p\equiv1+i^p\pmod p.
$$ On the other hand, $$
(1+i)^p
=\bigl(\sqrt2e^{\pi i/4}\bigr)^p
=2^{p/2}\biggl(\cos\frac{\pi p}4+i\sin\frac{\pi p}4\biggr)
$$ and $$
1+i^p
=1+(e^{\pi i/2})^p
=1+\cos\frac{\pi p}2+i\sin\frac{\pi p}2
=1+i\sin\frac{\pi p}2.
$$ Comparing the real parts implies that $$
2^{p/2}\cos\frac{\pi p}4\equiv1\pmod p,
$$ hence from $\sqrt2\cos(\pi p/4)\in\{\pm1\}$ we conclude that $$
2^{(p-1)/2}\equiv\sqrt2\cos\frac{\pi p}4\pmod p.
$$ Then using the elementary result \eqref{star}: $$
\Legendre2p
\equiv2^{(p-1)/2}
\equiv\sqrt2\cos\frac{\pi p}4
=\begin{cases}
1 & \text{if } p\equiv\pm1\pmod8, \cr
-1 & \text{if } p\equiv\pm3\pmod8,
\end{cases}
$$ which is exactly the required formula.
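Both examples are easy to sanity-check numerically, which can help convince sceptical students; here is a small illustrative script (not part of the original question; the test point $x$, the truncation $N$, and the list of primes are arbitrary choices):

```python
import numpy as np

# Example 1: ln|sin(x/2)| versus the truncated series -ln 2 - sum_{n<=N} cos(nx)/n
x, N = 1.234, 200_000                      # any x != 2*pi*k works
n = np.arange(1, N + 1)
series = -np.log(2) - np.sum(np.cos(n * x) / n)
print(series, np.log(abs(np.sin(x / 2))))  # the two values agree closely

# Example 2: (2/p) = (-1)^((p^2-1)/8), checked against Euler's criterion
for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    legendre = 1 if pow(2, (p - 1) // 2, p) == 1 else -1
    assert legendre == (-1) ** ((p * p - 1) // 8)
print("Legendre symbol formula verified for the primes above")
```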
|
The nicest elementary illustration I know of the relevance of complex numbers to calculus
is its link to radius of convergence, which students learn how to compute by various tests, but more mechanically than conceptually. The series for $1/(1-x)$, $\log(1+x)$, and $\sqrt{1+x}$ have radius of convergence 1 and we can see why: there's a problem at one of the endpoints of the interval of convergence (the function blows up or it's not differentiable). However,
the function $1/(1+x^2)$ is nice and smooth on the whole real line with no apparent problems, but its radius of convergence at the origin is 1. From the viewpoint of real analysis this is strange: why does the series stop converging? Well, if you look at distance 1 in the complex plane... More generally, you can tell them that for any rational function $p(x)/q(x)$, in reduced form, the radius of convergence of this function at a number $a$ (on the real line) is precisely the distance from $a$ to the nearest zero of the denominator, even if that nearest zero is not real. In other words, to really understand the radius of convergence in a general sense you have to work over the complex numbers. (Yes, there are subtle distinctions between smoothness and analyticity which are relevant here, but you don't have to discuss that to get across the idea.) Similarly, the function $x/(e^x-1)$ is smooth but has a finite radius of convergence $2\pi$ (not sure if you can make this numerically apparent). Again, on the real line the reason for this is not visible, but in the complex plane there is a good explanation.
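On the parenthetical remark about making the $2\pi$ numerically apparent: one possible way (a sketch of my own, not part of the original answer) is to apply the root test to the Taylor coefficients $a_n = B_n/n!$ of $x/(e^x-1)$; the quantities $|a_n|^{-1/n}$ creep up toward $2\pi \approx 6.283$.

```python
from fractions import Fraction
from math import comb, factorial, pi

# Bernoulli numbers via the standard recurrence; a_n = B_n / n! are the
# Taylor coefficients of x/(e^x - 1) at the origin.
B = [Fraction(1)]
for n in range(1, 41):
    B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))

for n in (10, 20, 30, 40):               # even n, where B_n is nonzero
    a_n = abs(B[n]) / factorial(n)
    print(n, float(a_n) ** (-1.0 / n))   # root-test estimate of the radius
print("2*pi =", 2 * pi)
```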
|
{
"source": [
"https://mathoverflow.net/questions/30156",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4953/"
]
}
|
30,248 |
The title basically says it all. Is there a group with more than one element that is isomorphic to the group of automorphisms of itself? I'm mainly interested in the case for finite groups,
although the answer for infinite groups would still be somewhat interesting.
|
The automorphism group of the symmetric group $S_n$ is (isomorphic to) $S_n$ when $n$ is different from $2$ or $6$. In fact, if $G$ is a complete group you can ascertain that $G \simeq \mathrm{Aut}(G)$. The reverse implication needn't hold, though.
|
{
"source": [
"https://mathoverflow.net/questions/30248",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
30,307 |
I heard or have read the following nice explanation for the origin of the convention that one (almost) always uses $x,y,z$ for variables. (This question was motivated by the question Origin of symbol *l* for a prime different from a fixed prime? ) It seems this custom is due to Descartes' typesetter. Descartes initially used other letters
(mainly $a,b,c$) but the typesetter had the same limited number of lead symbols
for each of the 26 letters of the Roman alphabet. The frequent use of variables exhausted his stock, and he thus asked Descartes if he could use the last three letters $x,y,z$ of the alphabet (which occur very rarely in French texts). Does anyone know if this is only a (beautiful) legend or if it contains some truth? (I checked that Descartes does indeed already use $x,y,z$ generically for variables in his printed works.)
|
You'll find details on this point (and precise references) in Cajori's History of mathematical notations , ¶340. He credits Descartes in his La Géométrie for the introduction of $x$, $y$ and $z$ (and more generally, usefully and interestingly, for the use of the first letters of the alphabet for known quantities and the last letters for the unknown quantities). He notes that Descartes used the notation considerably earlier: the book was published in 1637, yet in 1629 he was already using $x$ as an unknown (although in the same place $y$ is a known quantity...); also, he used the notation in manuscripts dated earlier than the book by years. It is very, very interesting to read through the description Cajori makes of the many, many other alternatives to the notation of quantities, and as one proceeds along the almost 1000 pages of the two volume book, one can very much appreciate how precious are the notations we so much take for granted!
|
{
"source": [
"https://mathoverflow.net/questions/30307",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4556/"
]
}
|
30,381 |
What is the authoritative, canonical formal definition of a function? For example, according to Wolfram MathWorld ,
$$isafun_1(f)\;\leftrightarrow\;
\forall a\in f\;(\exists x\exists y \;\langle x,y\rangle = a)
\; \wedge \;
\forall x\forall y_1\forall y_2\;((\langle x,y_1\rangle\in f\wedge\langle x,y_2\rangle \in f)\rightarrow y_1=y_2))$$ According to Bourbaki "Elements de Mathematiques, Theorie des Ensembles",
$$
isafun_2(f)\;\leftrightarrow\;
\exists d\exists g\exists c\;(\langle d,g,c\rangle=f
\;\wedge\;isafun_1(g)\;\wedge$$
$$\;\wedge\;
\forall x(x \in d\rightarrow \exists y(\langle x,y\rangle \in g))
\;\wedge\;
\forall x\forall y(\langle x,y\rangle \in g\rightarrow (x \in d\wedge y\in c)))
$$ How can one reconcile the definition of a function as a triple with extensional equality,
$$
\forall f\forall g\;[\;(isafun(f)\wedge isafun(g))
\; \rightarrow \;
[\;(\forall x(\;f(x)=g(x)\;))\leftrightarrow f=g\;]\;]
$$
? Why do such divergences in definitions exist? Upd: Two additional questions: Why is a function not a pair in $isafun_2$? The first component of the triple is perfectly derivable from the second. What exactly does the word "function" mean if no underlying theory is specified in the context? If I build a fully formal knowledge base about mathematics for automated reasoning and want to add a notion of a contextless function -- how must I describe it?
|
The fact is that different subject areas of mathematics use different definitions for this basic concept. The Bourbaki definition is quite common, particularly in many of the areas well-represented here on MO, but other areas use the ordered-pair definition. For example, if you open any set-theory text, you will find that a function $f$ is a set of ordered pairs having the functional property that any $x$ is paired with at most one $y$, denoted $f(x)$. This definition, which is completely established and much older than the Bourbaki definition, makes a function a special kind of binary relation, which is any set of ordered pairs. The domain of a function is the set of $x$ for which $f(x)$ exists. The range is the set of all such $f(x)$, and so on. The assertion $f:A\to B$ is a statement about the three objects, $f$, $A$ and $B$, that $f$ is a function with domain $A$ having its range a subset of $B$. In particular, the same function $f$ can have many different codomains. Another useful variation of the function concept is the concept of a partial function, common in many parts of logic, particularly set theory and computability theory. A partial function on $A$ is simply a function whose domain is included in $A$. In this case, we write $f:A\to B$, but with three dots (my MO tex ability can't seem to do it), to mean that $f$ is a function with $dom(f)\subset A$ and $ran(f)\subset B$. This notion is particularly useful in computability theory, where one has functions that might not produce an output on all input. But it also arises in set theory, where one often builds partial orders consisting of small partial functions from one set to another. The union of a chain of such functions is a function again. It would be silly to insist in the Bourbaki style that there are really invisible functors running through this construction adjusting the domains and co-domains. One could object that the set-theorists could use the Bourbaki definition, if only they prepared better: in any context where many functions are treated, they should simply delimit an upper bound for the co-domains under consideration and use that co-domain for all the functions. But this proposal bumps into set-theoretic issues. For example, if I consider the class of all functions from an ordinal to the ordinals, then the only common co-domain is the class of all ordinals. But as this is a proper class, it isn't available if I want to consider only set functions. So there are good set-theoretic reasons not to use the Bourbaki definition. There are numerous other basic concepts that are given different precise meanings in different subjects of mathematics. For example, the concept of tree . In graph theory, it is a graph with no loops, whereas in set theory, it is a kind of partial order. In finite combinatorics, it might be a finite partial order having no diamonds, but in the infinitary theory, one often means a partial order such that the predecessors of every node are well-ordered (making the levels of the tree form a well-ordered hierarchy). The graph-theoretic definition does not allow for the cases of Souslin trees and Kurepa trees, which are central in the other theory. There are surely numerous other examples where terminology differs.
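A throwaway illustration of the codomain point in code (my own addition, not part of the answer): the same graph, read as a set of ordered pairs, is one function in the set-theoretic sense, but it yields different Bourbaki-style triples for different choices of codomain.

```python
# One graph (a set of ordered pairs with the functional property), two triples.
graph = frozenset({(0, 0), (1, 1), (2, 4)})

def is_functional(g):
    return all(y1 == y2 for (x1, y1) in g for (x2, y2) in g if x1 == x2)

f1 = (frozenset({0, 1, 2}), graph, frozenset({0, 1, 4}))   # codomain = range
f2 = (frozenset({0, 1, 2}), graph, frozenset(range(10)))   # larger codomain

print(is_functional(graph))  # True: it is a function as a set of pairs
print(f1 == f2)              # False: as Bourbaki triples they differ
print(f1[1] == f2[1])        # True: extensionally they are the same function
```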
|
{
"source": [
"https://mathoverflow.net/questions/30381",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7257/"
]
}
|
30,402 |
The envelope of parabolic trajectories from a common launch point is itself a parabola.
In the U.S. soon many will have a chance to observe this fact directly, as the 4th of July is traditionally celebrated with fireworks. If the launch point is the origin, and the trajectory starts off at angle $\theta$ and velocity $v$, then under unit gravity it follows the parabola
$$
y = x \tan \theta - [x^2 /(2 v^2)] (1 + \tan^2 \theta)
$$
and the envelope of all such trajectories is another parabola:
$$
y = v^2 /2 - x^2 / (2v^2)
$$ These equations are not difficult to derive.
I have two questions.
First, is there a way to see that the envelope of parabolic trajectories is itself a parabola, without computing these equations?
Is there a purely geometric argument?
Perhaps there is a way to nest cones and obtain the above picture through conic sections, but I couldn't see it. Second, of course the trajectories are actually pieces of ellipses, not parabolas, if we follow the true inverse-square law of gravity.
Is the envelope of these elliptical trajectories also an ellipse?
(I didn't try to work out the equations.)
Perhaps the same geometric viewpoint (if it exists) could apply, e.g. by slightly tilting the sections.
|
E. Torricelli, who was Galileo's last secretary, suggested a purely geometrical method to find the envelope in his De motu Proiectorum . He also coined the term `parabola of safety'. Apparently it was the first example of the computation of an envelope. The method is briefly described in this note . Another approach is to launch identical missiles with the same velocity at all possible angles simultaneously. At time $t$, their positions describe a circle
$$x^2+\left(y+\frac{t^2}{2}\right)^2=(vt)^2.$$
The latter equation has a unique solution in $t$ provided $(x,y)$ belongs to the parabola
$$y=\frac{v^2}{2}-\frac{x^2}{2v^2}.$$ In the case of missiles moving in a Kepler field (with the attractive potential $\sim -1/r$), the envelope of elliptic trajectories is indeed an ellipse. A web search gave the nice short article which contains several elementary geometric proofs of this and related results. Edit. A free version of J.-M. Richard article can be found here .
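A quick numerical check of the safety-parabola formula above (not part of the original answer; the speed $v$ and the sample abscissae are arbitrary choices): maximizing the height of the trajectories over all launch angles at a fixed $x$ reproduces $v^2/2 - x^2/(2v^2)$.

```python
import numpy as np

v = 3.0                                           # launch speed, unit gravity
theta = np.linspace(0.01, np.pi / 2 - 0.01, 200_000)
t = np.tan(theta)
for x in [0.5, 2.0, 5.0]:
    y = x * t - (x**2 / (2 * v**2)) * (1 + t**2)  # heights of all trajectories at x
    print(x, y.max(), v**2 / 2 - x**2 / (2 * v**2))
```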
|
{
"source": [
"https://mathoverflow.net/questions/30402",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
30,632 |
You and I are having a conversation: "Okay," I say, "I think I get it. The gauge groups we know and love arise naturally as symmetries of state spaces of particles." "Something like that." "...And then we can add these as local symmetries to space by restricting a connection on some principal bundle with a nice little lagrangian..." "Again, something like that." "But this all seems pretty 'top-down' - I mean, why aren't we trying to see what matter looks like?" I reel off a somewhat overblown soliloquy on that old John Wheeler quote about empty space (free wine from the afternoon's colloquium, evident in its effect), but you have stopped listening. "String theory?" "Don't get me started on string theory! A bunch of guys who never learned to apply Occam's razor is what that is - 'ooh it's not working, must be because we need more assumptions' - or something to keep the differential geometers busy 'til they unfreeze Einstein!" You look offended. "What?!! I'm joking!!" "Sure." "Still, though, they're wrong - I mean Donaldson's Theorem is a dead giveaway isn't it?! Here we are hurtling through topological $\mathbb{R}^4$ , having this conversation, surrounded by artefacts of differential structure: someone's proved that this is the only space of this sort in which these artefacts can occur in some sensibly invariant way -and with a proper continuum of possibilities, too- and we're talking about string theory! Why isn't everyone in the mathematical physics world trying to crack the puzzle of the different structures in topological $\mathbb{R}^4$ ? I'm not saying it's going to be "electron equals Casson handle", but it must be worth at least looking . For crying out loud, why aren't we all looking at exotic $\mathbb{R}^4$ ?! You resist the temptation to give me a withering look, clear your throat, and say:
|
To play devil's advocate, you could easily turn around this line of thought. Since we live in a 3+1-dimensional space-time, we develop concepts that are sensitive to our experiences. So calculus and manifold theory single out dimensions 3 and 4 because that's the only way we know how to design geometric things. Of course the definitions of smooth structure does not mention 3+1-dimensional space-time, but somehow the ingredient concepts are intrinsically 3+1-dimensional, at least that's the assertion of this comment. i.e. the whole concept of linear approximation and smooth function would perhaps appear quaint, uninformative or just plain missing-the-point to beings that live in a 5+3-dimensional space-time, whatever that might mean. The larger upshot of this is, IMO, good physics is really inspired by experiments, not fancy mathematical tools. Without fancy mathematical tools you've maybe got not so much to work with but coming at physics from the perspective of a mathematician wanting to use tool X might not be productive. There's this saying "if all you have is a hammer, everything starts to look like a nail" that applies. We should design our tools to purpose, not shape the purpose to the tools. If you're going to use exotic smooth $\mathbb R^4$'s to construct a physical theory one major problem is that exotic smooth $\mathbb R^4$'s are locally just like $\mathbb R^4$. So if you want your theory to have non-trivial dependence on the smooth structure it has to be a global (non-local in a very significant way) theory -- taking account of behavior off at infinity.
|
{
"source": [
"https://mathoverflow.net/questions/30632",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5869/"
]
}
|
30,659 |
I am a 19 yr old student new to all these ideas. I made the transformation $X(z)=\sum_{n=1}^\infty z^n/n^2$. Therefore $X(1)=\pi^2/6$ as we all know (it is $\zeta(2)$). To calculate $X(1)$, I integrated
$$\frac{Y(z)}{z}=\sum_{n=1}^\infty \frac{z^{n-1}}{n} $$
between 0 and 1; we all know $Y(z)=-\log(1-z)$; so doing the integral of $Y(z)/z$ from 0 to 1, I get $$X(1)=\int_0^1\ln\frac{z}{z-1}dz$$
that is $\pi^2/6$; now integrating $X(z)/z$ between 0 and 1, I get $\sum_{n=1}^\infty 1/n^3$; therefore, performing that integral, I get $\sum1/n^3$ as
$$\int_0^1 \frac{(\log x)^2 dx}{1-x};$$
which I am unable to do. So can anyone give an idea of how to find
$$\sum_{n=1}^\infty\frac1{n^3}=\zeta(3)=\int_0^1 \frac{(\log x)^2 dx}{1-x};$$
given
$$\zeta(2)=\int_0^1 \frac{\log x\ dx}{1-x}=\frac{\pi^2}{6}.$$
Please take time to read all these things and help me out by giving some suggestion.
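For what it's worth, both integrals are easy to evaluate numerically, e.g. with mpmath (a sketch added here for illustration, not part of the original question); doing so also shows that a sign and a factor of $2$ need to be tracked in the manipulations above, since the integrals come out as $-\pi^2/6$ and $2\zeta(3)$ respectively.

```python
from mpmath import mp, quad, log, zeta, pi

mp.dps = 30
I1 = quad(lambda u: log(u) / (1 - u), [0, 1])
I2 = quad(lambda u: log(u)**2 / (1 - u), [0, 1])
print(I1, -pi**2 / 6)    # both approximately -1.6449...
print(I2, 2 * zeta(3))   # both approximately  2.4041...
```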
|
Dear Vamsi, Unlike the special values $\zeta(2n)$ (for $n \geq 1$), which are known to be simple algebraic expressions in $\pi$ (in fact just rational multiples of $\pi^{2n}$), it is conjectured (but not known) that the values $\zeta(2n+1)$ are genuinely new irrationalities (and that in fact
each is genuinely different from the other); more precisely, they are conjectured to be algebraically independent of one another, and of $\pi$. There is no prior, classical, name
for these numbers, and in particular you should not expect to be able to evaluate your integral in terms of any numbers whose names you already know. There is a theoretical basis for this conjecture: the kind of integrals that you are computing are called "period integrals" (if you search, you will find a few other MO questions about periods, in this sense), and a general philosophy is that period integrals should have no more relations between them than those that are implied by elementary manipulations of integrals
(of the type that you made to compute $\zeta(2)$; and here I don't mean elementary in a disparaging sense, just in the sense of standard rules for computing integrals). In fact, period integrals are manifestations of underlying geometry (which I won't get into here; all I will say is that the geometry relevant to zeta values is the geometry of "mixed Tate motives"). One can show that the geometric objects underlying the odd zeta values are independent of one another, in a suitable sense, and of the geometric object underlying $\pi$ (which is basically the circle); what is missing is a proof that the period integrals faithfully reflect the underlying geometry (so that independence in geometry implies independence of period integrals). This is one of the big conjectures in contemporary arithmetic geometry and number theory, and so your question, which is a very nice one, is touching on some very fundamental (and difficult) mathematical issues. Good luck as you continue your studies! Best wishes, Matthew Emerton
|
{
"source": [
"https://mathoverflow.net/questions/30659",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7346/"
]
}
|
30,661 |
What are nice examples of topological spaces $X$ and $Y$ such that $X$ and $Y$ are not homeomorphic but there do exist continuous bijections $f: X \to Y$ and $g: Y \to X$?
|
Recycling an old (ca. 1998) sci.math post: " Anyone know an example of two topological spaces $X$ and $Y$
with continuous bijections $f:X\to Y$ and $g:Y\to X$ such that
$f$ and $g$ are not homeomorphisms? Let $X = Y = Z \times \{0,1\}$ as sets, where $Z$ is the set of integers.
We declare that the following subsets of $X$ are open for each $n>0$.
$$\{(-n,0)\},\ \ \{(-n,1)\},\ \ \{(0,0)\},\ \ \{(0,0),(0,1)\},\ \ \{(n,0),(n,1)\}$$
This is a basis for a topology on $X$. We declare that the following subsets of $Y$ are open for each $n>0$.
$$\{(-n,0)\},\ \ \{(-n,1)\},\ \ \{(0,0),(0,1)\},\ \ \{(n,0),(n,1)\}$$
This is a basis for a topology on $Y$. Define $f:X\to Y$ and $g:Y\to X$ by $f((n,i))=(n,i)$ and $g((n,i))=(n+1,i).$
Then $f$ and $g$ are continuous bijections, but $X$ and $Y$ are not homeomorphic. This example is due to G. Paseman. David Radcliffe " More generally, take a space X with three successively
finer topologies T, T' and T''. Form two spaces which have underlying
set ZxX, and "form the infinite sequences" .... T T T T' T'' T'' T'' ....
and ... T T T T T'' T'' T'' T'' .... The continuous maps will take a finer
topology in one sequence to a rougher topology in the other. You can
make them bijective, and show that they are obviously non-homeomorphic
for a judicious choice of X, T, T', and T''. Gerhard "Ask Me About System Design" Paseman, 2010.07.05
|
{
"source": [
"https://mathoverflow.net/questions/30661",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2060/"
]
}
|
30,874 |
I want to understand the idea of the proof of the arithmetic fixed point theorem. The theorem is crucial in the proof of Gödel's first Incompleteness theorem. First some notation: We work in $NT$, the usual number theory; it has all primitive recursive functions implemented. Every term or formula $F$ has a unique Gödel number $[F]$, which encodes $F$. If $n$ is a natural number, the corresponding term in $NT$ is denoted by $\underline{n}$. The function $num(n):=[\underline{n}]$ is primitive recursive. Also, there is a primitive recursive function $sub$ of two variables, such that $sub([F],[t])=[F_v(t)]$, where $v$ is a free variable of $F$ which is replaced by a term $t$. Now the theorem asserts the following: Let $F$ be a formula with only one free variable $v$. Then there is a sentence $A$ such that $NT$ proves $A \Leftrightarrow F_v(\underline{[A]})$. This may be interpreted as a self-referential definition of $A$, which is, as I said, crucial in Gödel's work. I understand the proof, I just repeat it, but I don't get the idea behind it: Let $H(v)=F_v(sub(v,num(v)))$ and $A = H_v(\underline{[H]})$. Then we have $A \Leftrightarrow H_v(\underline{[H]})$
$\Leftrightarrow F_v(sub(v,num(v)))_v(\underline{[H]})$
$\Leftrightarrow F_v(sub_1(\underline{[H]},num(\underline{[H]})))$
$\Leftrightarrow F_v(sub_1(\underline{[H]},\underline{[\underline{[H]}]}))$
$\Leftrightarrow F_v(\underline{[H_v(\underline{[H]})]})$
$\Leftrightarrow F_v(\underline{[A]}), qed.$ But why did we choose $H$ and $A$ like above?
|
The fixed point lemma is profound because it reveals a
surprisingly deep capacity in mathematics for
self-reference: when a statement $A$ is equivalent to
$F(A)$, it effectively asserts "$F$ holds of me". How
shocking it is to find that self-reference, the stuff of
paradox and nonsense, is fundamentally embedded in our
beautiful number theory! The fixed point lemma shows that
every elementary property $F$ admits a statement of
arithmetic asserting "this statement has property $F$". Such self-reference, of course, is precisely how Goedel
proved the Incompleteness Theorem, by forming the famous
"this statement is not provable" assertion, obtaining it
simply as a fixed point $A$ asserting "$A$ is not
provable". Once you have this statement, it is easy to see
that it must be true but unprovable: it cannot be provable,
since otherwise we will have proved something false, and
therefore it is both true and unprovable. But I have shared your apprehension at the proof of the
fixed point lemma, which although short and simple, can
nevertheless appear mysteriously impenetrable, like an
ancient mystical rune that we have memorized. We can verify
it step-by-step, but where did it come from? So let me try to explain how one could derive this
argument, or at least arrive at it by small steps. We want to find a statement $A$ that is equivalent to
$F(A)$. If we could expect a strong version of this, then
we would seek an $A$ that is equivalent to $F(A)$, and to
$F(F(A))$, and so on, expanding from the inside. Such a
process leads naturally to the infinitary expression $F(F(F(\cdots)))$ Furthermore, this infinitary expression is itself a fixed
point, in a naive formal way, since if $A$ is that
expression, then applying one more $F$ results in an
expression with the same form, as desired. This infinitary
expression doesn't count as a solution, of course, since we
seek a finite well-formed expression, but it suggests an
approach. Namely, what we want to do is to capture in a single finite
expression the self-expanding nature of that infinitary
solution. The desired statement $A$ should be equivalent to
the assertion that $F$ holds when substituted at $A$
itself. So we introduce an auxiliary variable $v$ and
consider the assertion $H(v)$ that asserts that $v$ is as
desired, namely, that $F$ holds of the statement $v$ codes,
when substituted at $v$. This last self-substitution part,
about substituting $v$ at $v$, is what allows one
substitution to self-expand into two, and then three and so
on, in effect curling the self-expanding infinite tail of
the infinitary expression around onto itself. Namely, if $n$ is the code of $H(v)$, then we perform the
substitution, obtaining the statement $H(n)$, which
asserts exactly that $F$ holds of the statement $n$ codes
when substituted at $n$. But since $n$ codes $H(v)$, this
means that $H(n)$ asserts that $F(H(n))$, and we have the
desired fixed point. Allow me to mention a few other things. First, it is
interesting to consider whether all fixed points of $F$ are
equivalent to each other. This is true after all when $F$
is tautological, for example, since any fixed point will
also be logically valid. Similarly, Goedel's "I am not
provable" statements are all equivalent to the assertion
that the theory is consistent, and this is how one can
prove the Second incompleteness theorem. But are fixed
points for a given $F$ always equivalent? The answer is no.
Fix any statements $A$ and $B$, and let $F(v)$ be the
statement, "if $v=[A]$, then $A$, otherwise $B$". Note that
$F([A])$ is equivalent to $A$ and $F([B])$ is equivalent to
$B$, so they are both fixed points. Andreas raised the very interesting question in the
comments below whether the fixed point $A$ has $F([A])$
also as a fixed point. This is what we might expect from the infinitary example above.
The example of the previous paragraph shows, however, that
not every fixed point has this feature, since in that
example, $A$ is equivalent to $F([A])$, but $F([F([A])])$
is equivalent to $B$. But in this example, other fixed
points do have the feature. I am unsure in general about whether there must always be a fixed point $A$ such that
$F([A])$ is also a fixed point. Lastly, I would like to mention that essentially the same
argument for the fixed point lemma has been used to prove
other fixed point theorems in logic. For example, the Recursion
Theorem asserts that for any computable function $f$, acting on
programs, there is a program $e$ such that $e$ and $f(e)$
compute exactly the same function. One can prove this in a very similar way to the fixed point
lemma. Namely, define H(v,x)={f({v}(v))}(x), where {e}(x)
means the output of program e on input x. Note that H is
running program v on itself, and then applying f, just as
the H in your argument. Now, let s be the function that on
input v, produces a program to compute H(v,x), so that
{s(v)}(x)=H(v,x). Let d be the program computing s, and let
e=s(d). Putting this together, we have {e}(x) = {s(d)}(x)= H(d,x) = {f({d}(d))}(x)
= {f(s(d))}(x) = {f(e)}(x). So program e and f(e) compute the same function.
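As a programming aside (my own addition, not part of the answer above): the same diagonal trick, "apply a template to its own quoted code", is exactly how one writes a quine, a program that prints its own source. A minimal Python instance:

```python
# The template s plays the role of H(v); forming s % s, i.e. substituting the
# quoted template into itself, plays the role of forming H([H]).  The two
# lines below (ignoring these comments) print themselves verbatim.
s = 's = %r\nprint(s %% s)'
print(s % s)
```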
|
{
"source": [
"https://mathoverflow.net/questions/30874",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2841/"
]
}
|
30,972 |
Every orientable 3-manifold can be obtained from the 3-sphere by doing surgery along a framed link. Kirby's theorem says that the surgery along two framed links gives homeomorphic manifolds if and only if the links can be related by a sequence of Kirby moves and isotopies. This is pretty similar to Reidemeister's theorem, which says that two link diagrams correspond to isotopic links if and only if they can be related by a sequence of plane isotopies and Reidemeister moves. Note however that Kirby moves, as opposed to the Reidemeister moves, are not local: the second Kirby move involves changing the diagram in the neighborhood of a whole component of the link. In "On Kirby's calculus", Topology 18, 1-15, 1979 Fenn and Rourke gave an alternative version of Kirby's calculus. In their approach there is a countable family of allowed transformations, each of which looks as follows: replace a $\pm 1$ framed circle around $n\geq 0$ parallel strands with the twisted strands (clockwise or counterclockwise, depending on the framing of the circle) and no circle. Note that this time the parts of the diagrams that one is allowed to change look very similar (it's only the number of strands that varies), but still there are countably many of them. I would like to ask if this is the best one can do. In other words, can there be a finite set of local moves for the Kirby calculus? To be more precise, is there a finite collection $A_1,\ldots A_N,B_1,\ldots B_N$ of framed tangle diagrams in the 2-disk such that any two framed link diagrams that give homeomorphic manifolds are related by a sequence of isotopies and moves of the form "if the intersection of the diagram with a disk is isotopic to $A_i$, then replace it with $B_i$"? I vaguely remember having heard that the answer to this question is no, but I do not remember the details.
|
There is a finite set of local moves. For instance, these: (source: unipi.it ) In the second row, the number of encircled vertical strands is $n\leqslant 3$ on the left and $n\leqslant 2$ on the right move. So they are indeed finite. The bottom-right move is just the Fenn-Rourke move (with $\leqslant 2$ strands). The box is a full counterclockwise twist. We also add all the corresponding moves with $-1$ instead of $+1$ . We can prove that these moves generate the Fenn-Rourke moves as follows. Consider as an example the Fenn-Rourke move with 3 strands: (source: unipi.it ) To generate this move, we first construct a chain of 0-framed unknots as follows: (source: unipi.it ) Then we slide the vertical strands along the chain: (source: unipi.it ) Now it is sufficient to use the Fenn-Rourke move with 2 strands, slide, and use another Fenn-Rourke with one strand: (source: unipi.it ) Finally, iterate this procedure 3 times: (source: unipi.it ) and you are done. The same algorithm works for the general Fenn-Rourke move with $n$ strands. EDIT I have slightly expanded the proof and posted it in the arXiv as https://arxiv.org/abs/1102.1288
|
{
"source": [
"https://mathoverflow.net/questions/30972",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2349/"
]
}
|