source_id (int64, 1-4.64M) | question (string, 0-28.4k chars) | response (string, 0-28.8k chars) | metadata (dict)
---|---|---|---
115,617 |
Would anybody happen to know where I could obtain a scanned version of Lectures on Morse theory - [revised and expanded version of notes of lectures delivered at Professor R. Bott's topology seminar at Harvard in February and March of 1963], taken by Richard S Palais? As far as I am aware, these notes were never published.
|
I find myself more than a little confused by this question. First, the "Lectures on K(X)" are not about Morse Theory. It is true that I gave a lecture on Morse Theory at Bott's Seminar in 1963, but I did not take and write up notes of lectures by Bott (at least not as far as I can recall---but that was half a century ago). The lecture I gave was on extending Morse Theory to Hilbert manifolds using "Condition C" (later called Condition PS). At the end of that lecture someone in the audience told me he had heard a similar lecture by Steve Smale at Columbia a few weeks earlier. (By a weird coincidence, Steve had also called (essentially) the same condition Condition C.) Anyway, I contacted Steve and we wrote up a joint research announcement in BAMS (called "A Generalized Morse Theory") and I wrote up my full version (called "Morse Theory on Hilbert Manifolds") in Topology and Steve wrote up his full version in the Annals. (The reason we didn't write a joint article was that we had pretty much already completed our research and although we had essentially the same abstract theory we had each developed it for very different applications.)
|
{
"source": [
"https://mathoverflow.net/questions/115617",
"https://mathoverflow.net",
"https://mathoverflow.net/users/332/"
]
}
|
115,735 |
In the long process that resulted in the classification of finite simple groups, some of the exceptional groups were only shown to exist after people had computed (most of) their character tables and other such precise information which usually can only be attached to things that exist. Maybe someone familiar with the details can tell me/us: Was there at some point a finite simple group conjectured to exist that turned out not to exist in the end? If so, this part of the story is much less told than the successful part! It would be interesting to know if for such a non-existent group, say, the character table was computed, and so on... I am asking because I just read this question on Math.SE and it reminded me I have always wanted to know this.
|
There was a point during the history of the Classification when pursuers of sporadic groups distinguished the Baby Monster, the Middle Monster and the Super Monster. The first two actually turned out to exist (though the word "Middle" was dropped), but the third turned out to be a dud. http://www.neverendingbooks.org/index.php/tag/simples/page/2 has an account of this.
|
{
"source": [
"https://mathoverflow.net/questions/115735",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1409/"
]
}
|
116,123 |
I asked this (with background) here https://stats.stackexchange.com/questions/38494/principal-component-analysis-bootstrap-and-probability-of-eigenvalue-collision but did not really get any answers. See that post for the background. Let $D$ be some open set in the plane, say. It is not really important where the set $D$ sits, but it should not be only a line/curve. Suppose we have defined a continuous function on $D$
$$
f \colon D \to \text{Sym}^n
$$
where $\text{Sym}^n$ is the set of (real) symmetric $n \times n$ matrices. How can I define the eigenvectors of $f(x)$, $x \in D$, as continuous functions on $D$? How can I calculate this?
And how can I deal with eigenvalue collisions? A simple example clarifying this point (and defined on a curve): Let
$$
f(t) =\left( \begin{matrix} 1+t & 0
\cr
0 & 1-t \end{matrix}\right)
$$
Then the largest eigenvalue is
$$
\lambda_1(t) = 1+ |t|
$$
but the eigenvector corresponding to the largest eigenvalue cannot be defined as a continuous function:
$$
v_1(t) = \begin{cases} e_2 & t\le 0 \cr
e_1 & t > 0 \end{cases}
$$
So what I want is to look at the two eigenvalue functions $1+t, 1-t$ and follow the eigenvectors corresponding to each one, which obviously can be done in a continuous (constant!) manner. ADDED after the answer by Anthony Quas: Is it possible to give some further conditions under which a solution is possible?
Differentiability? Or, if the matrices are realizations of some random field of matrices, can something be said about the probability that some continuous selection is possible?
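As a concrete numpy illustration of this (a minimal sketch; the grid and the matching rule are ad hoc choices made here, not any standard routine): picking the eigenvector of the largest eigenvalue jumps at $t=0$, while following each eigenvalue branch by matching to the previous step stays continuous.

import numpy as np

def f(t):
    # the example above: eigenvalues 1+t and 1-t cross at t = 0
    return np.array([[1.0 + t, 0.0],
                     [0.0, 1.0 - t]])

ts = np.linspace(-1.0, 1.0, 201)

# Strategy 1: always take the eigenvector of the largest eigenvalue.
top = []
for t in ts:
    w, v = np.linalg.eigh(f(t))          # eigenvalues in ascending order
    top.append(v[:, np.argmax(w)])
top = np.array(top)

# Strategy 2: follow one branch by matching each step to the previous eigenvector.
branch = [np.array([1.0, 0.0])]           # start on the eigenvector of 1 + t
for t in ts[1:]:
    w, v = np.linalg.eigh(f(t))
    prev = branch[-1]
    k = int(np.argmax(np.abs(v.T @ prev)))               # column closest to the previous vector
    vec = v[:, k] if v[:, k] @ prev > 0 else -v[:, k]    # fix the sign for continuity
    branch.append(vec)
branch = np.array(branch)

print("largest step, top-eigenvalue eigenvector:", np.linalg.norm(np.diff(top, axis=0), axis=1).max())     # order 1: a jump at t = 0
print("largest step, tracked branch:            ", np.linalg.norm(np.diff(branch, axis=0), axis=1).max())  # essentially 0: continuous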
|
The example given by Anthony Quas reveals a phenomenon discussed in Kato's book Perturbation Theory for Linear Operators. The point is the following: if the symmetric matrix depends analytically upon one parameter, then you can follow its eigenvalues and its eigenvectors analytically. Notice that this sometimes requires that the eigenvalues cross. When this happens, the largest eigenvalue, being a maximum of smooth functions, is only Lipschitz. On the contrary, if the matrix depends upon two or more parameters, the eigenvalues are at most Lipschitz when crossings happen, and the eigenvectors cannot be chosen continuously. A typical example is
$$(s,t)\mapsto\begin{pmatrix} s & t \\ t & -s \end{pmatrix},$$
whose eigenvalues are $\pm\sqrt{s^2+t^2}$. Up to the shift by $I_2$, Quas' example is just a piecewise $C^1$ section of this two-parameter example, and it inherits its lack of a continuous selection of eigenvectors. Likewise, if analyticity is dropped, a $C^\infty$ example by Rellich shows that eigenvectors need not be continuous functions of even a single parameter. Of course, Quas' example can be recast as a $C^\infty$ one, by flattening the parametrisation at $t=0$, say by replacing $t$ by $s$ such that $t={\rm sgn}(s)\cdot e^{-1/s^2}$. Side remark: Kato's result is only local. If the domain is not simply connected, it could happen that a global continuous selection of eigenvectors is not possible. This is classical in the example above if you restrict to the unit circle $s^2+t^2=1$; then the eigenvalues $\pm1$ are global continuous functions, but when following an eigenvector, it experiences a flip $v\mapsto -v$ as one makes one full turn.
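To make the side remark concrete, here is a minimal numpy sketch (the number of steps and the sign-fixing rule are arbitrary choices made here): tracking the eigenvector of the eigenvalue $+1$ continuously once around the circle $s=\cos\theta$, $t=\sin\theta$ returns it with the opposite sign.

import numpy as np

def A(theta):
    # the two-parameter example restricted to the unit circle s^2 + t^2 = 1
    s, t = np.cos(theta), np.sin(theta)
    return np.array([[s, t],
                     [t, -s]])

thetas = np.linspace(0.0, 2.0 * np.pi, 2001)

v = np.array([1.0, 0.0])     # eigenvector of the eigenvalue +1 at theta = 0
v0 = v.copy()
for theta in thetas[1:]:
    w, V = np.linalg.eigh(A(theta))
    u = V[:, np.argmax(w)]              # eigenvector of the eigenvalue +1
    if u @ v < 0:                       # choose the sign that varies continuously
        u = -u
    v = u

print("initial eigenvector:", v0)
print("after one full turn:", v)        # approximately -v0: the flip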
|
{
"source": [
"https://mathoverflow.net/questions/116123",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6494/"
]
}
|
116,243 |
I foresee that to experts of automorphic forms this question will sound unimportant or useless or even not worthy of an answer; but none of these are going to stop me from asking it! The question is simple: let $G$ be a reductive (or even semisimple) algebraic group over $\mathbb Q$. Is it true that the adelic group $G(\mathbb A)$ is of Type I (i.e., direct integral decomposition of its unitary representations is unique)? And if the answer is negative, then how do people actually get around it in the study of multiplicity of automorphic forms in $L^2(G(\mathbb Q)\backslash G(\mathbb A))$?
|
I believe the answer is yes. Let's begin by recalling that if one wants to show that a locally compact group $G$ is of type I, it suffices to show that $G$ contains a "large" compact subgroup $K$, in the sense that for every $\pi \in \hat{G}$ and $\sigma \in \hat{K}$, the multiplicity of $\sigma$ in $\pi|_K$ is finite. This is how Harish-Chandra showed that a real reductive group is of type I (take $K$ to be a maximal compact), and also how Bernstein showed that a $p$-adic reductive group is of type I (take $K$ to be a compact open subgroup). Now let $G$ be a connected reductive group over $\mathbb Q$. Then, for each place $p$ outside a finite set $S$ (containing $\infty$), $G$ is unramified over $\mathbb Q_p$ and has a model over $\mathbb Z_p$. Let's abuse notation and denote this model by $G$. It suffices to show that $G(\mathbb A^S) = \prod'_{p \not\in S} G(\mathbb Q_p)$ is of type I. The desired large $K$ turns out to be $K = \prod_{p \not\in S} G(\mathbb Z_p)$. This assertion essentially appears (without proof) as Theorem 4 in Flath's article in the Corvallis proceedings. The details are spelled out in the appendix to Clozel's article in the IAS/Park City 2002 lecture notes on automorphic forms (MR2331351; a Google Books preview is available here).
|
{
"source": [
"https://mathoverflow.net/questions/116243",
"https://mathoverflow.net",
"https://mathoverflow.net/users/26116/"
]
}
|
116,336 |
At MIT all departments have numbers, and math is 18. Last year MIT
math majors produced a tee shirt that said ${i\choose 18}$ ("I choose
18") on the front, and on the back
$$ \frac{34376687+1499084559i}{14485008384}. $$
With the more natural denominator $18!$ this is
$$ \frac{15194495654000+662595375078000i}{18!}. $$
This suggests the question: for any $n\geq 1$ find a "nice"
combinatorial interpretation of the real and imaginary parts of
$i(i-1)(i-2)\cdots (i-n+1)=f_n+ig_n$. It is easy to express $f_n$ and
$g_n$ as certain alternating sums of Stirling numbers of the first
kind, but I don't consider this "nice." The $g_n$'s seem to alternate
in sign beginning with $n=5$. The $f_n$'s alternate in sign up to
$n=17$ and then seem to alternate in sign beginning with $n=18$. It is
curious that $i(i-1)(i-2)(i-3)=-10$, a real number. One could ask the
same question with $i$ replaced by any Gaussian integer $a+bi$. One
can also ask about the asymptotic rate of growth of $f_n$ and
$g_n$. Clearly $f_n^2+g_n^2\sim C\cdot (n-1)!^2$, so one would expect
$f_n$ and $g_n$ to be roughly of the size of $(n-1)!$.
|
Asymptotics: Let's look at the quantity $$S(n)=(-1)^{n+1}(n+1)\binom{i}{n+1}=-i\prod_{k=1}^{n}\left(1-\frac{i}{k}\right).$$ It's just your binomial coefficient above with the $(-1)^{n+1}$ factored in, and an extra $n+1$ so that it factors nicely as a product.

Claim: We have that $$S(n)=\sqrt{\frac{\sinh{\pi}}{\pi}}e^{iC_{0}}e^{-i\log n}\left(1+O\left(\frac{1}{n}\right)\right),$$
where $$C_{0}=\frac{-\pi}{2}-1+\int_0^\infty \frac{\{x\}}{1+x^2}dx =\arg(\Gamma(i))\approx
-1.872.$$ In particular, the angle moves around the circle like $\log n$.

Application to your question: The above claim shows that
$$f_{n+1} = (-1)^{n+1} n! \sqrt{\frac{\sinh{\pi}}{\pi}}\cos(-\log n+C_0)\left(1+O\left(\frac{1}{n}\right)\right)$$ and $$g_{n+1} = (-1)^{n+1} n! \sqrt{\frac{\sinh{\pi}}{\pi}} \sin(-\log n+C_0)\left(1+O\left(\frac{1}{n}\right)\right).$$ In particular, the ratio $g_n/f_n$ can be made arbitrarily large or small.

Proof of the claim: We first note that the size is $$\sqrt{\prod_{k=1}^{n}\left(1+\frac{1}{k^{2}}\right)}=\sqrt{\prod_{k=1}^{\infty}\left(1+\frac{1}{k^{2}}\right)}+O\left(\frac{1}{n}\right).$$ To evaluate this product, recall the Weierstrass product for the Gamma function $$\left(\Gamma(z)\right)^{-1}=ze^{\gamma z}\prod_{k=1}^{\infty}\left(1+\frac{z}{k}\right)e^{-\frac{z}{k}}.$$ From this it follows that $$\frac{1}{|\Gamma(i)|^{2}}=\frac{1}{\Gamma(i)\Gamma(-i)}=\prod_{k=1}^{\infty}\left(1+\frac{1}{k^{2}}\right).$$ Using the identity $$\Gamma(x)\Gamma(-x)=-\frac{\pi}{x\sin\left(\pi x\right)},$$ we now have that $$\frac{1}{\Gamma(i)\Gamma(-i)}=\frac{-i\sin(i\pi)}{\pi}=\frac{\sinh(\pi)}{\pi},$$ which gives rise to the $\sqrt{\frac{\sinh(\pi)}{\pi}}$ term. Moving on to the evaluation of the angle, by looking at each triangle, and noting that arguments are additive under multiplication, we get that the argument of the product equals $$-\sum_{k=1}^{n}\tan^{-1}\left(\frac{1}{k}\right).$$ The negative sign arises since each factor lies in the fourth quadrant (the leading factor $-i$ contributes the $-\frac{\pi}{2}$ in $C_0$). By looking at the Taylor series for $\tan^{-1}$ we see that this sum is $\log n+O(1)$; however, I would like to compute the argument more precisely and obtain the constant. Let's compare our $\tan^{-1}$ series to the harmonic series. Rewriting things in terms of a Riemann-Stieltjes integral, and using summation by parts, we have that $$\sum_{k=1}^{n}\tan^{-1}\left(\frac{1}{k}\right)=\int_{0}^{n}\tan^{-1}\left(\frac{1}{x}\right)d\left[x\right]=[n]\tan^{-1}(1/n)+\int_{0}^{n}\frac{\left[x\right]}{1+x^{2}}dx. $$ Pulling out the main term with the identity $[x]=x-\{x\}$, the last integral equals $$\int_{0}^{n}\frac{x}{1+x^{2}}dx-\int_{0}^{n}\frac{\{x\}}{1+x^{2}}dx.$$ Since the first integral evaluates to $\frac{1}{2}\log(1+n^2)$, we have that $$\sum_{k=1}^{n}\tan^{-1}\left(\frac{1}{k}\right)=\log n +1-\int_0^\infty \frac{\{x\}}{1+x^2}dx +O\left(\frac{1}{n}\right).$$

Acknowledgements: I would like to thank Noam Elkies for pointing out in the comments that $$\prod_{k=1}^\infty \sqrt{1+\frac{1}{k^2}}=\frac{1}{|\Gamma(i)|}=\sqrt{\frac{\sinh(\pi)}{\pi}}.$$

Edit: Fixed the constants appearing. Interestingly $$\Gamma(i)=\sqrt{\frac{\pi}{\sinh{\pi}}}\exp\left(i\left(\frac{-\pi}{2}-1+\int_0^\infty \frac{\{x\}}{1+x^2}dx \right)\right).$$
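A quick floating-point sanity check of the claim, using the definition of $S(n)$ above; the sample sizes are arbitrary choices made here.

import cmath
import math

def S(n):
    # S(n) = -i * prod_{k=1}^{n} (1 - i/k)
    z = -1j
    for k in range(1, n + 1):
        z *= 1 - 1j / k
    return z

target_modulus = math.sqrt(math.sinh(math.pi) / math.pi)     # about 1.917
for n in (10**3, 10**4, 10**5):
    z = S(n)
    c0 = (cmath.phase(z) + math.log(n)) % (2 * math.pi)      # should approach C_0 mod 2*pi, about 4.411
    print(n, abs(z), target_modulus, c0)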
|
{
"source": [
"https://mathoverflow.net/questions/116336",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2807/"
]
}
|
116,627 |
I'm writing an article on Lychrel numbers and some people pointed out that this is completely useless. My idea is to amend my article with some theories that seemed useless when they were created but found a use after some time. I came up with some ideas like the Turing machine, but I think I'm not grasping the right examples. Can someone point me to some theories that, like the Lychrel numbers, seemed useless and then became 'useful'? Edit: As some people pointed out that I've published this on MSE, I present here the code I use to find candidate Lychrel numbers.

def reverseNum(n):
    # reverse the decimal digits of n
    return int(str(n)[::-1])

def isPalindrome(n):
    st = str(n)
    return st == st[::-1]

def isLychrel(n, num_iterations):
    # return the number of reverse-and-add steps needed to reach a palindrome,
    # or -1 if none is reached within num_iterations steps
    p = n
    for i in range(num_iterations):
        if isPalindrome(p):
            return i
        p = p + reverseNum(p)
    return -1

for i in range(1000):
    if isLychrel(i, 100) < 0:
        print(i)    # candidate Lychrel number
|
Number theory, in particular investigations related to prime numbers, was famously considered useless (e.g., by Hardy) for practical matters. Now, since "everybody" needs some cryptography, it is quite useful to know how to generate primes (e.g., for an RSA key) and the like, sometimes drawing on prior 'useless' number theory results.
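As a concrete illustration of that point, here is a toy RSA key generation and round trip in Python; the textbook-sized primes, the exponent, and the message are arbitrary choices, and nothing here is meant to be cryptographically secure.

import math

p, q = 61, 53                  # toy primes; real keys use primes with hundreds of digits
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)

e = 17                         # public exponent, coprime to phi
assert math.gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)

message = 42                   # must be smaller than n
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered)   # recovered == 42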
|
{
"source": [
"https://mathoverflow.net/questions/116627",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30026/"
]
}
|
116,734 |
First, I apologize if mathoverflow is a bad fit for this question, but it is the only place where I can think to get advice from professionals given my circumstance. I'm also sorry about any vagueness in my post since I need to make sure I maintain anonymity - anonymity is also why I can't get advice from folks in my department. The short story is that I am a graduate student in a math related field and have been reached out to by a faculty member for writing up a paper about something we discussed to be submitted to a low ranking Mathematics journal. The material is very basic, coming very close to being trivial observations, and happens to be of, I suspect, no interest to anyone anywhere. Anyone who cared to prove what we proved would probably be able to do so within a day at most, if the results aren't already well known. At best, I think the results make good homework problems. Despite this, the faculty member seems excited about it. He works in a field that another faculty member has described as "toxic" to getting a job in academics, which is my ultimate goal, and advised me that I would probably be smart to leave papers in this research topic off my C.V. unless it is accepted to a top-tier journal. I'm in the strange position of working on this paper because I don't want to alienate him or offend him, but I'm hoping that the paper is rejected just so that my name isn't attached to the paper. This will be my first article submitted for publication, and I'm a little uneasy about even having random editors - who I conceivably could run into in the future - viewing the work. So, my question to you: is this sort of thing worth getting worked up about? Should I just go along with it, figuring that it is highly unlikely that it will negatively impact my career, or is there some legitimate concern? Is there a chance that publishing trash might help me just because it increases my publication rate?
|
When all else fails, try honesty. I don't mean that you have to tell the professor to his face that you think his field is toxic and that the paper is garbage. But if your honest opinion is that the paper is too trivial to be worth publishing and that you're worried that it might hurt your career, then I would tell the professor that. If you can suppress your name from the paper more easily, just by declining to work on it, then by all means do that. But it sounds like you've already worked on the paper and can't extricate yourself that easily at this point. In that case I'd recommend just telling the professor that you have had second thoughts and would like to remove your name from the paper, and explain why. The fact that you're a student and he's a professor makes this a scarier prospect, but I don't think your difference in social status should stop you from giving your honest professional opinion on the quality of the work. Intellectual honesty is what we are all striving for in our profession, after all. What's the point in being a scholar if you have to sacrifice honesty? And maybe you're wrong after all and the paper is more interesting than you think. You won't find this out unless you give the professor an opportunity to openly defend the paper against honest criticism. Honesty is so rare that it tends to confuse people, who are more accustomed to dealing with lies and excuses than with the straight dope. I know from experience that being honest does risk being misunderstood by people who assume that I can't possibly be telling the truth, so there is some risk of misunderstanding. In the long run, though, I believe that developing a reputation for honesty pays off handsomely, in terms of inner peace if nothing else.
|
{
"source": [
"https://mathoverflow.net/questions/116734",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30065/"
]
}
|
116,847 |
This is a tough one, but does anyone know of any images that recall characteristic p geometry ( over algebraically closed fields ) in some sense? It is not enough if it is some picture that can be also understood solely in characteristic 0. A quick search through the literature has proved fruitless. I have been thinking for a while of asking this question, but I never had a pressing need rather than my own curiosity. However, now I am trying to improve a poster by including pictures but the topic is algebraic geometry in characteristic p. An example of such an image in Complex Geometry would be the arrangement of contracted and blow up curves in the standard Cremona transformation of the projective plane. The one in this poster is simple but effective. I believe this question also might be of interest for people who try to explain research to non-mathematicians or simply to mathematicians who are not geometers.
|
I don't think you can draw something meaningful - I would be surprised if someone made a good drawing of the Frobenius morphism ;). That being said, here is an example (possibly misleading or unrelated to your research) I saw in the slides of Benedict Gross's lectures on the arithmetic of hyperelliptic curves. Take a prime $p$, say $p=57$, and an equation of a hyperelliptic curve $y^2 = x^n + ax^{n-2} + \ldots$ with integer coefficients. Draw a $p\times p$ square and mark the solutions to the above equation mod $p$. The resulting picture exhibits the following: It is a mixture of chaos and geometry: there is a visible symmetry coming from the hyperelliptic involution $(x, y)\mapsto (x, -y)$. The solutions form a finite set; in particular, this makes combinatorial arguments possible. We can ask how many points there are and whether that gives us some "geometric" information. This is not obvious to someone from other fields, or to a non-mathematician. You can include drawings of the same curve over $\mathbb{R}$ and $\mathbb{C}$. I think the equation $y^2 = \ldots$ and the three pictures together explain pretty well what algebraic geometry is about without going into too much detail.
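A minimal Python sketch of the kind of picture described above (it needs matplotlib; the modulus, which here is a genuine prime, and the particular curve are arbitrary choices made for illustration):

import matplotlib.pyplot as plt

p = 101                              # an actual prime chosen for this sketch

def curve_points(p):
    # points (x, y) with y^2 = x^5 - x + 1 (mod p)
    pts = []
    for x in range(p):
        rhs = (x**5 - x + 1) % p
        for y in range(p):
            if (y * y) % p == rhs:
                pts.append((x, y))
    return pts

pts = curve_points(p)
xs, ys = zip(*pts)
plt.figure(figsize=(4, 4))
plt.scatter(xs, ys, s=8)
plt.title(f"y^2 = x^5 - x + 1 over F_{p} ({len(pts)} points)")
plt.show()
# The hyperelliptic involution (x, y) -> (x, -y) appears as the symmetry y <-> p - y.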
|
{
"source": [
"https://mathoverflow.net/questions/116847",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1887/"
]
}
|
116,896 |
Liouville's theorem from complex analysis states that a holomorphic function $f(z)$ on the plane that is bounded in magnitude is constant. The usual proof uses the Cauchy integral formula. But this has always struck me as indirect and unilluminating. There is a proof via harmonic function theory, but this also seems to involve an unnecessarily large amount of prior buildup. So one might seek a more direct proof as below. Assume that $f(z)$ is nonconstant. The fact that $f(z)$ is holomorphic at every point implies that at any given point, there is a direction such that moving in that direction makes $|f(z)|$ larger. But this doesn't prove that $|f(z)|$ is unbounded, because a priori its magnitude could behave like $5 - \frac{1}{|z|}$ or some such thing. In the case of $f(z) = \frac{1}{P(z)}$ where $P(z)$ is a polynomial, one knows that $|f(z)|$ tends toward $0$ as $|z| \to \infty$ so that there's some closed disk such that if $|f(z)|$ is bounded, then it has a maximum in the interior of the disk, which contradicts the fact that one can always make $|f(z)|$ larger by moving in a suitable direction. But for general $f(z)$, one doesn't have this argument. One can try to reason based on the power series expansion of a holomorphic function $f(z)$ that is not a polynomial. Because polynomials are unbounded as $|z| \to \infty$ and grow in magnitude in a way that's proportional to their degree, one might think that a power series, which can be regarded as an infinite degree polynomial, would also be unbounded as $|z| \to \infty$. This is of course false: take $f(z) = \sin(z)$, then as $|z| \to \infty$ along the real axis, $f(z)$ remains bounded. The point is that the dominant term in the partial sums of the power series varies with $|z|$, and that the relevant coefficients change, alternating in sign and tending toward zero rapidly, so that the gain in size corresponding to moving to the next power of $z$ is counterbalanced by the change in coefficient. But there's some direction that one can move in for which $f(z)$ is unbounded: in particular, for $f(z) = \sin(z)$, $f(z)$ is unbounded along the imaginary axis. This suggests that we write $a_n = s_{n}e^{i \theta_n}$ for the coefficient of $z^n$ in the power series expansion of $f(z)$ and write $z = re^{i \theta}$ (where $s_n, r \geq 0$) so that $$f(z) = \sum_{n = 0}^{\infty} {a_n}z^n = \sum_{n = 0}^{\infty} s_n r^n e^{i(n\theta + \theta_n)} $$ and try to find a function $\theta = g(r)$ such that $f(z)$ is unbounded as $r \to \infty$ if one takes $\theta = g(r)$. But I don't know what to do next. Any ideas? Any ideas for other strategies of proving Liouville's theorem that are more direct than the ones using Cauchy's theorem?
|
I think the most illuminating proof of Liouville's theorem uses Riemann surfaces. Let $f : \mathbb{C} \rightarrow \mathbb{C}$ be a bounded holomorphic function, and set $g(z) = f(1/z)$. Then $g : \mathbb{C} \setminus 0 \rightarrow \mathbb{C}$ is a bounded holomorphic function, so Riemann's removable singularities theorem says that $g$ can be extended over $0$. Translated into the language of Riemann surfaces, this says that $f$ extends to a holomorphic function $F : \mathbb{P}^1 \rightarrow \mathbb{C}$. Since $\mathbb{P}^1$ is compact, $F$ must have a global maximum. However, the maximum modulus principle says that a nonconstant holomorphic function cannot have a local maximum, so $F$ must be constant.
|
{
"source": [
"https://mathoverflow.net/questions/116896",
"https://mathoverflow.net",
"https://mathoverflow.net/users/683/"
]
}
|
116,913 |
Let $k$ be an algebraically closed field and $G$ an algebraic group over $k$ which is also a $k$-variety (so $G$ is integral, etc). Let $I$ be the ideal defining the identity $e \in G$ and let $\{ t_1, \ldots, t_n \} \subseteq I$ be a set of local parameters at $e$. Since $e$ is a smooth point, the $t_i$ are algebraically independent and we naturally obtain a $k$-algebra embedding $k[t_1, \ldots, t_n] \hookrightarrow k[G]$. Let $A$ denote the subalgebra of $k[G]$ thus defined. I do not expect that $A$ will be a sub-Hopf algebra of $k[G]$ (in particular, I see no reason why the comultiplication should preserve $A$), but I don't know any concrete examples. So I have two questions: (1) Is there a specific example of an algebraic group $G$ as above such that $A$ is not a sub-Hopf algebra of $k[G]$? (2) Are there nice conditions on $G$ under which $A$ will be a sub-Hopf algebra of $k[G]$?
|
|
{
"source": [
"https://mathoverflow.net/questions/116913",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1528/"
]
}
|
117,036 |
What does the Pontryagin class detect, or what is it an obstruction to? Please avoid any answer using that it's the even Chern class of the complexified bundle, or any interpretation that relies on the complexified bundle. A related question might be the following: when one defines the obstruction classes on a rank $4$ vector bundle (and if the first three obstruction classes vanish), then the fourth obstruction class can be decomposed as the Euler class and the first Pontryagin class (as $\pi_3(SO_4) \simeq \mathbb{Z} \oplus \mathbb{Z}$). Is there a geometric description of a system of generators of $\pi_3(SO_4)$ which is associated to these classes? EDIT: deleted "For example, why does the first Pontryagin class distinguish the (tangent bundles of the) exotic $4$-spheres?" as it is wrong; see Liviu's answer below.
|
Pontryagin's original definition of his classes was as an obstruction cycle, as follows: on the $n$-dimensional manifold $M$ take $(n-2i)+2$ vector fields in general position, and consider the points $x$ where they span a subspace (in $T_xM$) of dimension less than or equal to $n-2i$. The set of such points $x$ forms a cycle of codimension $4i$ in $M$. The dual cohomology class is $p_i(M)$. This definition might differ from the today-accepted definition through Chern classes (as in the book by Milnor-Stasheff) by a second order class.
|
{
"source": [
"https://mathoverflow.net/questions/117036",
"https://mathoverflow.net",
"https://mathoverflow.net/users/18974/"
]
}
|
117,104 |
A palindrome is a number which remains the same when reversed, for instance 34143. Now pick an arbitrary number, say 26: then 26+62=88 is a palindrome. If the number was 57, then 57+75=132 is not a palindrome; but 132+231=363 is. In general, iterating $a_1\ldots a_n\to a_1\ldots a_n+a_n\ldots a_1$ always seems to lead to a palindrome. But, the point is that this doesn't work for 196! This problem is well-known, and there is heavy numerical evidence for it, see http://en.wikipedia.org/wiki/Lychrel_number and http://www.p196.org/ . I was wondering, is there any theoretical advance on this subject? Or at least, which math area does this problem belong to? Many thanks.
|
The progress (which may not be recent) appears to be in programming tricks to push the iterations starting from 196 into more and more millions of digits. It is not true that iterating almost always seems to lead to a palindrome. Unless iterating leads to a palindrome "fairly quickly," one almost never seems to arrive at a palindrome. For a number with few digits, an eventual palindrome is fairly likely; as the number of digits grows, the likelihood seems to decrease rapidly. Let $s(x)$ be the result of applying the reverse-and-add operation to $x$. There is no proof which rules out the possibility that for all $x$ there is a $j$ with $s^j(x)$ a palindrome, but it seems highly unlikely. If we examine a large range of numbers and classify $x$ as probably never getting to a palindrome when none of $s(x),s^2(x),\ldots,s^{50}(x)$ is a palindrome, then we will have some errors (probably under $1\%$) which can be discovered by pushing out to $s^{500}(x)$. But if we push out that far, or even just to $s^{300}(x)$, then we are likely to never have an error (that we can discover). There is (according to Wikipedia) a 19-digit number which arrives at a palindrome after 261 iterations, and this is the current record. I tried 500 random 10-digit integers (actually, random integers under $10^{10}$). Of them, 224 went to 300 iterations without a palindrome. There were 13 cases which did arrive at a palindrome but took at least 30 iterations; they took 30, 30, 32, 32, 32, 34, 38, 41, 42, 46, 49, 66 and 88 iterations.

Further discussion. Some observations and questions, in no special order. Let $r(x)$ be the reverse of $x$ and $s(x)=x+r(x)$ be the result of applying the reverse-and-add operation to $x$. If $x$ is a multiple of $11$, so is $r(x)$, and hence so are $s(x)$ and all further iterates $s^i(x)$. If $x$ has an even number of digits then $s(x)$ is a multiple of $11$. A palindrome with an even number of digits is a multiple of 11. Of the $9\cdot10^M$ palindromes with $2M+1$ digits, about $\frac{1}{11}$ are multiples of $11$. Call $x$ special if no carries occur in the addition $x+r(x)$; in this case $x+r(x)$ is a palindrome. Call $x$ exceptional if $x+r(x)$ is a palindrome but $x$ is not special (i.e. some carries do occur). By definition, $s(x)$ is a palindrome exactly when $x$ is special or exceptional. If $z=s(y)$ then, when we appropriately match up $z$ with $r(z)$, corresponding digits will be equal or differ by $1$ (in case of a carry immediately before one but not the other). Here "appropriately" means that when $z$ has one more digit than $y$ we do not match the leading $1$. So, if $x$ has an even number $2M$ of digits then $s(x)$ is a multiple of $11$ which, if not a palindrome, misses being one only by having some positions which should be equal differ by $1$, and perhaps by having a leading $(2M+1)$st digit which is a $1$. If $x$ has an odd number of digits then almost the same is true. The previous comments show that integers of the form $s(x)$ are fairly special. What can be said about numbers of the form $s^2(x)$, $s^3(x)$, etc.? Certainly by $s^6(x)$ (and usually earlier) we have had an even number of digits at some earlier stage and hence are at a multiple of $11$ from then on. Clearly there must be many solutions of $s(x)=s(y)$ with $x \ne y$, since the image set is much sparser.
We can see that directly because for $y=\sum_0^{N-1}a_i10^i$ we have $s(y)=\sum_0^{N-1}(a_i+a_{N-1-i})10^i$. This usually presents many chances to increase one of $a_i,a_{N-1-i}$ by $1,2,3$ or even more and lower the other by an equal amount to get $x$ with $s(x)=s(y)$. Recall that every $y$ with $s(y)$ a palindrome is special or exceptional (and vice versa). Roughly $(1.8)^{-N/2}$ of the $N$-digit integers $z$ are special, and I do not see that knowing also that $z=s^j(x)$ for some $x$ changes that. So for a small starting $x$ there is a fair chance of getting to a special number as we repeatedly apply $s$. However, this chance gets smaller as the number of digits rises. This suggests that if iterating $s(x)$ gets into enough digits without running into a palindrome, it is unlikely ever to arrive at one. Here is some justification of the counts: let $N=2M$ be even. There are $9\cdot10^{N-1}$ integers $x=\sum_{0}^{N-1}x_i10^i$ with $N$ digits. The proportions of these which are special and exceptional are easy to give exactly, but less than $\frac{1}{1.8^M}$ of them are special and less than $\frac{1}{10^M}$ are exceptional. This is because there are $100$ ways to pick each of the $M$ pairs $x_i,x_{N-1-i}$ (except for $90$ ways to pick $x_0,x_{N-1}$). To be special there are only $55$ choices with $x_i+x_{N-1-i}\le 9$ (except $45$ for $i=0$). A number is exceptional if each pair $x_i,x_{N-1-i}$ has sum $0$ or $11$ (I think I proved that to my satisfaction).

For $N=2M+1$ the ratios are about the same, taking into account the central digit. Final thoughts: here is an easier related problem which might be called the 49 problem; I don't know if there is a 49+49=98 problem, although we have just looked at the 98+98=196 problem. (196+196=392, and I've spent time on the 392 problem, but it is unrelated.) Here is the 49 problem: call an integer $z$ very even if all its digits are even (equivalently, if $z=d(x)=x+x$ for an $x$ with all digits less than $5$). Q: for which $n$ does $n,d(n),d^2(n),\cdots$ arrive at a very even integer? Up to 20001 there is a single starter which gets to a very even number after 47 doublings but no sooner. Curiously, 49, 98, 196, 392, ... is the first example which never seems to arrive at a very even number. I actually included this example because I thought there would be some integers which provably never lead to a very even number. My proposed proof was that $x,d(x),d^2(x),\cdots$ is eventually periodic mod $10^j$ for any $j$, so one would just need to find an appropriate $j$ where every member of the cycle had an odd digit. However, now I see that the mod $10^j$ orbit has length $4\cdot 5^{j-1}$, so it almost surely does have a $j$-digit very even member (which may be the tail end of much, much longer integers with plenty of room for odd digits). Still, it seems clear that when the number of digits is small there is a reasonable chance of bumping into a very even number, but that this chance goes to $0$ exponentially in $d$ for a "random" $d$-digit number. It would be nice to find some invariant for your problem, or for this one (similar to the proposed reduction mod $10^j$ here), which could rule out a palindrome. Short of something like that, it seems hard to imagine a definitive proof (as opposed to a probabilistic heuristic) of never arriving at a palindrome.
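A short Python sketch of the "very even" experiment from the final paragraph; the iteration cap is an arbitrary choice made here.

def doublings_to_very_even(n, limit=200):
    # number of doublings d(x) = 2x needed until all digits are even, or -1 if not reached
    x = n
    for i in range(limit + 1):
        if all(int(c) % 2 == 0 for c in str(x)):
            return i
        x *= 2
    return -1

counts = {n: doublings_to_very_even(n) for n in range(1, 20002)}
reached = {n: c for n, c in counts.items() if c >= 0}
worst = max(reached.values())
print("largest number of doublings needed:", worst)                  # the claim above reports 47
print("starters needing that many:", [n for n, c in reached.items() if c == worst])
print("49 reaches a very even number within the cap:", counts[49] >= 0)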
|
{
"source": [
"https://mathoverflow.net/questions/117104",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29333/"
]
}
|
117,191 |
We know a positive rational number can be uniquely written as $m/n$ where $m$ and $n$ are coprime positive integers. In particular, we can pick out those numbers with $m$ and $n$ both prime. Question 1: Is the collection of all such numbers dense in the positive half of the real line? Furthermore, we can ask about the efficiency of approximation; more precisely: Question 2: Suppose we have an inequality $1\le ps-qr\le a$. Fixing some $a$, can we find infinitely many solutions where $p$, $q$, $r$, $s$ are positive primes?
|
Question 1: The set is dense. Suppose that we are given a fixed $x\in\mathbb{R}$. Then let $p$ be a large prime. If $p$ is sufficiently large, then there will be a prime $$q\in\left[px,\ px+\left(px\right)^{0.525}\right]$$ by the work of Baker, Harman and Pintz on prime gaps . This implies that $$\left|x-\frac{q}{p}\right|\ll_x p^{-0.475},$$ which becomes arbitrarily small as we take $p\rightarrow\infty $. This proves that for any $\epsilon>0$, there exists $p,q$ such that $\left|x-\frac{q}{p}\right|\leq \epsilon.$ Question 2: We can find infinitely many solutions to $$1\leq qp-rs\leq a$$ for primes $p,q,r,s$ and all $a\geq 26$. Under the Elliott-Halberstam Conjecture , we can take $a\geq 6$. This is a corollary of the work of Goldston, Graham, Pintz and Yıldırım on the gaps between almost primes. They prove that if $q_n$ is the $n^{th}$ almost prime, then $$\liminf_{n\rightarrow \infty} q_{n+1}-q_n \leq 26,$$ and that the upper bound may be reduced to $6$ under the Elliott-Halberstam Conjecture. Since $q_n=pq$ and $q_{n+1}=rs$ where $p,q,r,s$ are primes, this yields the above claim. Edit: The more recent work of Goldston, Graham, Pintz and Yıldırım show that we can take $a=6$ unconditionally. (Thank you to quid for mentioning this in the comments)
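A quick numerical illustration of the argument for Question 1 (it uses sympy's nextprime; the target value and the sizes of $p$ are arbitrary choices made here):

import math
from sympy import nextprime

x = math.pi                        # an arbitrary positive target
for k in (10**3, 10**5, 10**7):
    p = nextprime(k)               # a "large" prime p
    q = nextprime(int(p * x))      # a prime just above p*x; for large p it lies in the BHP interval
    print(p, q, abs(x - q / p))    # the error shrinks roughly like p^(-0.475)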
|
{
"source": [
"https://mathoverflow.net/questions/117191",
"https://mathoverflow.net",
"https://mathoverflow.net/users/20311/"
]
}
|
117,292 |
Why is a ring called "ring" (or Zahlring in German)? To (naive) me, there seems to be nothing more ring-like about a ring than about a group or a field. I am particularly interested to learn why the word "ring" seemed appropriate to the founders. Thanks for educating me!
|
First, a minor correction to the question, but it seems somewhat relevant: ring is not called 'Zahlring' in German; ring in German is 'Ring'. As mentioned, Hilbert introduced the word 'Zahlring', but in the same sentence also the synonyms 'Ring' and 'Integritätsbereich' (so ring and integral domain, resp.); see the embedded document on math.SE mentioned above. The mathematical meaning is that of a ring of algebraic integers in a number field (not necessarily the full one). In particular, already in that document he frequently just uses 'Ring', and defines for example 'Ringideale'.
I just browsed the first part of the 'Zahlbericht' where this happens, and there seems to be no motivation for the name to be found; he just notes in a footnote that Dedekind calls this an 'Ordnung' [order]. The idea that the name is motivated by 'circling back' might or might not be true, but I could not find any trace of it there. In particular, there seems to be no result close by regarding the fact that the powers $\alpha^n$ somehow 'circle back' to linear combinations (the idea mentioned by KConrad); also, no analogy to rings of residue classes is drawn. (Of course, it is proved somewhere that such a 'ring' has a finite $\mathbb{Z}$-module basis, but the way this is presented does not suggest any particular 'circling back' idea.) Hilbert's definition of a ring is (paraphrasing): given a collection of algebraic integers, a ring is everything that can be written as a polynomial function with integer coefficients of this given collection.
(As an aside, personally, I now finally understand the idea behind the name integral domain/'Integritätsbereich': a number field is also called 'Rationalitätsbereich', so rational domain, being everything one gets with rational functions, and the integral domain is what one gets with integral functions. Added: I see that, had I started to read MO earlier, I could have learned that this usage, due to Kronecker, was mentioned by KConrad in the question linked to.) He then right away comments that a 'ring' is thus closed/invariant under addition, subtraction, and multiplication. So, perhaps it is a ring simply because one does not leave it even if one moves around, say like a boxing ring.
Or, I quite like the idea presented earlier of 'Ring' also being used to describe (figuratively) a collection of people with a certain relation among them, a property this word shares with 'Gruppe' [group] and also 'Körper' [field, but literally body], both of which seem to have been established by then already.
(Which is also somehow a partial response to why a ring is a ring even though it is not more ring-like than a group or a field; the latter two already had different names.) Then, it seems the first axiomatisation of some notion of ring is due to Fraenkel (J. Reine Angew. Math., 1915). I stress some notion, since it does not completely match current practice, in that each element is either a zero-divisor or invertible (and while non-commutativity is allowed, it is allowed only in a somewhat restricted sense, in that the two products must only differ by an invertible element). The guiding example seems to be rings of integers modulo composites. Regarding the name 'Ring' (that paper is also in German), he credits Hilbert but says there is some deviation of meaning. By contrast, Steinitz in his earlier axiomatization of fields (J. Reine Angew. Math., 1910) also discusses 'Integritätsbereiche' (integral domains), with exactly the axiomatization common today
(commutative ring, with unit, no zero-divisors). Then to 'Moderne Algebra' (1930) by van der Waerden (based on lectures by Artin and Noether). [To be precise, I could not look at the original edition, but only at some later edition; I hope this did not change over time.] There one finds 'Ring' defined, (essentially) as is done now, as a basic notion, without any discussion of the naming.
[To be precise, a ring there does not necessarily have a multiplicative unit element, and the existence of additive inverses and of a neutral element is expressed together by imposing solvability of $a+X=b$ for all $a,b$.] In addition, one also finds 'Integritätsbereich' there with a different meaning than 'Ring', namely as a commutative ring without zero-divisors (yet not necessarily with unit element, so somewhat deviating from current usage and from Steinitz). I think one can make an argument that the structure is now called a ring because it is called that in 'Moderne Algebra', and one can note that the naming of integral domain also survived. (Except for the slight deviation regarding the unit element, which until today is not quite uniform.) And it seems reasonable to assume that the naming of Artin, Noether, and van der Waerden, as for Fraenkel, is directly inspired by Hilbert. After all, a ring has (just) the main properties mentioned by Hilbert for his 'rings': closed under addition, subtraction, and multiplication. What I do not know is whether there is any earlier axiomatization of ring in (or at least closer than Fraenkel's to) the current sense. To sum it up, this is anything but a 'definite' answer, but I hope it contains some relevant information.
In my opinion, it could be difficult, possibly even impossible, to ascertain what precisely motivated the choice of name, and even more so to really pin down why one name survived and another did not (say, Integritätsbereich did, Rationalitätsbereich did not). It could however be interesting to search the literature, and in particular lecture notes, if extant, from the beginning of the 20th century to see the development in more detail. Still, ring seems like a good word, as there are some potential intuitions (the circling back and the residue classes); also it is short and was, I think, quite different from preexisting names.
|
{
"source": [
"https://mathoverflow.net/questions/117292",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
117,302 |
There are simple canonical forms for symmetric and antisymmetric bilinear forms. Is there a canonical form for a general bilinear form?
|
|
{
"source": [
"https://mathoverflow.net/questions/117302",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30228/"
]
}
|
117,415 |
It's a commonplace to state that while other sciences (like biology) may always need the newest books, we mathematicians still tend to use older books as well. While this is a qualitative remark, I would like to get a quantitative result. So what are some "old" books that are still used? Coming from (algebraic) topology, the first things which come to my mind are the works by Milnor. Frequently used (also as a topic for seminars) are his Characteristic Classes (1974, but based on lectures from 1957), his Morse Theory (1963) and other books and articles by him from the mid-sixties. An older book, which is sometimes used, is Steenrod's The Topology of Fibre Bundles from 1951, but this feels a bit dated already. Books older than that in topology are usually only read for historical reasons. As I have only very limited experience in other fields (except, perhaps, in algebraic geometry), my question is: What are the oldest books regularly used in your field (and which don't feel "outdated")?
|
EGA and SGA, both from the 1960s and 1970s, are very widely used in algebraic geometry. Hartshorne's textbook (first published in 1977) is still the main choice for courses on the theory of schemes.
|
{
"source": [
"https://mathoverflow.net/questions/117415",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2039/"
]
}
|
117,460 |
I would like to get some intuitive feeling for the mean curvature. The mean curvature of a hypersurface in a Riemannian manifold by definition is the trace of the second fundamental form. Is there any good picture to have in mind, when dealing with it?
|
Do this experiment: draw a curve in 2D and about some point on that curve, draw a unit normal vector field. Now convince yourself that if you push the curve out along the normal field by a distance $\epsilon$, the length changes by a factor of $1+\epsilon\kappa$ where $\kappa$ is the curvature. Now consider a 2D surface in 3D. Choose local coordinates so that the first two basis vectors are tangent to the surface and the curvatures along the first two coordinates are the principal curvatures. The infinitesimal change in surface area is by the factor $(1+\epsilon\kappa_1)(1+\epsilon\kappa_2)$ -- actually this product is the Jacobian of the normal map $N_\epsilon$ that pushes the surface out along the normal field a distance $\epsilon$. In general, for co-dimension one surfaces, the Jacobian of the normal map $N_\epsilon$ is $\prod_{i=1}^{n-1} (1+\epsilon\kappa_i)$. We can integrate this over the original surface to get the new $(n-1)$-volume of the pushed surface. That is: $$\mathcal{H}^{n-1}(N_\epsilon(W)) = \int_{W} \prod_{i=1}^{n-1} (1 + \epsilon\kappa_i)\, d\mathcal{H}^{n-1} = \int_{W} 1\, d\mathcal{H}^{n-1} + \epsilon \int_{W} \sum_i \kappa_i\, d\mathcal{H}^{n-1} + \cdots + \epsilon^k \int_{W} \sum_{s\in S(k)}\prod_{i\in s}\kappa_i\, d\mathcal{H}^{n-1} + \cdots + \epsilon^{n-1} \int_{W} \prod_{i=1}^{n-1} \kappa_i\, d\mathcal{H}^{n-1},$$ where $S(k)$ denotes the $k$-element subsets of $\{1,\dots,n-1\}$. Now we note that the first order term is the integral of the mean curvature. That is, to first order, the change in surface volume is given by the mean curvature. Note that I am using the term mean curvature for what is sometimes called the total mean curvature, $\sum_{i=1}^{n-1} \kappa_i$. (This works in co-dimension $k>1$, but then, because the set of normal directions at any point is $k$-dimensional, it is a bit more involved to get a result that looks like the result here.)
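As a sanity check of the first-order statement in a case where everything is explicit, here is a small Python computation for a round sphere of radius $R$ in $\mathbb{R}^3$; the values of $R$ and $\epsilon$ are arbitrary choices made here.

import math

R = 2.0
eps = 1e-3

def area(r):
    return 4.0 * math.pi * r * r

# total mean curvature of the sphere: integral of (k1 + k2) = 2/R over the surface
total_mean_curvature = (2.0 / R) * area(R)

exact_change = area(R + eps) - area(R)
first_order = eps * total_mean_curvature
print(exact_change, first_order, exact_change - first_order)   # the difference is O(eps^2)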
|
{
"source": [
"https://mathoverflow.net/questions/117460",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30196/"
]
}
|
117,517 |
Universes seem to first enter Grothendieck's work in SGA 1, which is credited to Grothendieck, and a lengthy discussion is in the chapter on Prefaisceaux (presheaves) in SGA 4. That chapter is credited to Grothendieck and Verdier. The appendix on them there is credited to N Bourbaki. Is there any known evidence of who actually wrote the appendix?
|
Pierre Cartier has told me everyone at the time (i.e. everyone in those circles) knew Pierre Samuel wrote the appendix. Incidentally this makes a third person breaking the general rule that all writings signed N Bourbaki were collective. Weil and Dieudonné wrote historical/philosophic pieces signed Bourbaki, and Samuel wrote this.
|
{
"source": [
"https://mathoverflow.net/questions/117517",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38783/"
]
}
|
117,579 |
How many nonoverlapping unit squares can (nonoverlappingly) touch one unit square?
By "nonoverlapping" I mean: not sharing an interior point.
By "touch" I mean: sharing a boundary point. It seems the answer for a square in $\mathbb{R}^2$ should be $8$,
and for a cube in $\mathbb{R}^3$, $26$. But a 1999 paper by Larman and Zong, "On the Kissing Numbers of Some Special Convex Bodies," Discrete Comput. Geom. 21:233-242 (1999)
(Springer link), says "In this note we determine the kissing numbers of octahedra, rhombic dodecahedra and
elongated octahedra. In fact, besides balls and cylinders, they are the only convex bodies
whose kissing numbers are exactly known." In that paper, they were interested in the translative kissing number and the lattice kissing number,
whereas I want to consider arbitrary orientations of each square/cube. Despite the quote above,
it seems this should be known...? Update (30Dec12): The following explains (I believe) the $0.82$ in Henry Cohn's comment,
leading to his proof for $\le 9$ in $\mathbb{R}^2$:
|
The square case was posed as a problem at the Leningrad (now St. Petersburg) high school math olympiad in 1963. I wrote a solution of this problem for the volume "St. Petersburg mathematical olympiads 1961-1993", D.V. Fomin, K.P. Kokhas eds., Lan' Publ. 2007 (in Russian); it is Problem 63.31 in that book. Here is the original, very detailed draft of the solution from my archive (only a sketch made it into the final book). It is in Russian, but the pictures and formulae might be enough to follow the proof. Up to elementary but cumbersome case chasing, the solution is the following. Consider the boundary of the twice bigger square with the same center and parallel sides. It is a broken line of length 8. It turns out that each of the kissing squares takes away a piece of length at least 1 from this broken line. Hence there are at most 8 kissing squares (and furthermore one analyses the equality case). (Snapshots of the figures added by J. O'Rourke.) The proof of the fact that the length of the intersection is at least 1 is essentially an exhaustion of cases, each of which is trivial. It is helpful to observe that the length of the intersection is a piecewise linear function of the relative position of the squares (if the orientation is fixed), so one has to consider only "borderline" positions (i.e. those where one of the corners is on one of the lines). This leaves about 10 cases to consider.
|
{
"source": [
"https://mathoverflow.net/questions/117579",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
117,622 |
Hi, I am trying to read some papers on Algebraic Geometry in French. But I am stuck in understanding some Math.French.Words. Anybody has a good reference for it? Thank you all.
|
Kai-Wen Lan has written a glossary for French and German. Quoting from his web page: "These are prepared primarily for reading mathematical texts". You can find the French one here.
|
{
"source": [
"https://mathoverflow.net/questions/117622",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13947/"
]
}
|
117,650 |
I am new here, so forgive me if this question does not satisfy the protocols of the site.
I know there are so many equivalents of the AC (axiom of choice), and there are books that list these equivalent forms of AC. But how can I find equivalent forms of the continuum hypothesis? Especially, I am interested in algebraic equivalent forms. What references do we have in the literature? Thanks in advance.
|
A classical reference is Hypothèse du Continu by Waclaw Sierpiński (1934), available through the Virtual Library of Science as part of the series Mathematical Monographs of the Institute of Mathematics of the Polish Academy of Sciences. Sierpiński discusses equivalences and consequences. The statements covered include examples from set theory, combinatorics, analysis, and algebra. Most of the consequences he did not show equivalent were found later (mainly by Martin, Solovay, and Kunen) to be strictly weaker in that they follow from Martin's Axiom, and some are discussed in the original Martin-Solovay paper. (In fact, the discovery of Martin's Axiom and the subsequent research on cardinal characteristics of the continuum helped clarify what the role of CH is in many classical arguments, and nowadays results that classically would be stated as consequences of CH are stated as consequences of some equality between cardinal characteristics. See the articles by Blass and Bartoszyński in the Handbook of Set Theory.) Of course, many equivalents were found after 1934. For example: Around 1943, Erdős and Kakutani proved that CH is equivalent to there being countably many Hamel bases whose union is $\mathbb R\setminus\{0\}$. In the early 60s, Erdős found a nice equivalent in terms of analytic functions (see Chapter 17 in Aigner-Ziegler, Proofs from THE BOOK). Quite recently, Zoli proved that CH is equivalent to the transcendental reals being the union of countably many transcendence bases. I do not know of an encyclopedic work updating Sierpiński's monograph. Most recent work on CH centers on what Stevo Todorcevic calls Combinatorial Dichotomies in Set Theory. It turns out that for quite a few statements, CH proves a "nonclassification" result, while strong forcing axioms (such as PFA) prove strong "classifications". For example, J. Moore proved that there is a 5-element basis for the uncountable linear orders if PFA holds, while Sierpiński showed that CH gives us $2^{\aleph_1}$ non-isomorphic uncountable dense sets of reals, none of which embeds into another in an order-preserving fashion. Though not specifically concerned with CH and its equivalences, you may find interesting Steprans's History of the Continuum in the Twentieth Century (Wayback Machine). Another recent line of study on CH centers on the role of choice. Propositions equivalent to CH in ZFC may have wildly different truth values if choice is not assumed. For example, under determinacy, CH is true in the sense that every set of reals is either countable or of the same size as the reals. However, it is also false in the sense that $\aleph_1\not\le|\mathbb R|$, and that there is a surjection from $\mathbb R$ onto $\aleph_2$.
|
{
"source": [
"https://mathoverflow.net/questions/117650",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
117,668 |
When I was a graduate student in math (mid-late eighties and early nineties) the arena was dominated by a few grand projects: for instance, Misha Gromov's hyperbolic groups, which spread into many seemingly heterogeneous domains (such as combinatorial group theory, to name just one), and Bill Thurston's classification of low-dimensional manifolds. A few years have passed, and I must admit that, aside from my pet domains, such as categorical algebra and applied logic, I have not kept up with innovations. I am pretty sure, though, that new trends, generated by some core new ideas, are leading contemporary math research, or substantial portions thereof. One such idea I already know of is Voevodsky's homotopical foundations of logic, which brings together abstract homotopy theory and type theory. What else? PS I am not interested (at least not directly) in new solutions to old problems, unless these solutions are the germs of (and conducive to) new directions. Rather, I would like to hear about new research projects which unify previously disparate fields, or create entirely new ones, or shed light on old domains in a radically new way (in other words, I am interested in what Thomas Kuhn called paradigm shifts). The ors are of course not mutually exclusive. Addendum: It looks like there is an ongoing debate on whether this question should be kept open or not. As I have already answered below, I submit entirely and with no reservations to the policy of this respectable forum, whatever the outcome. As a dweller of MO, though, I am entitled to my personal opinion, and this is to keep my question in the air. Nevertheless, I am well aware of the potential risks of either turning it into "what I like best of math right now" or generic answers that provide no meat for this community. Therefore, allow me to add a clarification: my wish is the following: to get to know from informed individuals which new paradigm-shifting trends are currently under way. To this effect I appreciate all the answers so far, but I invite everybody to provide some level of detail as to why they have mentioned a specific idea and/or research project (some folks have already done it). For instance, one answer was tropical math, which indeed seems to have risen to prominence in recent years. It would be nice to know which areas it impacts and in which way, which core ideas it is founded on, which threads it brings together, etc. The same applies of course to all other proposals.
|
There is the derived algebraic geometry program of Jacob Lurie, starting with his thesis in 2007 and building on the work of ..., Simpson, Toën-Vezzosi, etc. In the words of Lurie, this is basically "jazzing up" the foundations of algebraic geometry with homotopy theory. Lurie has written two 1000-page books called Higher Topos Theory and Higher Algebra , and a series of papers called DAG . He has already had success in applying the theory to the classical topic of elliptic cohomology . See also this blog post by Tim Gowers and a relevant MO discussion . (Of course, this can be viewed as an extension of Grothendieck's program from the 80's, see e.g. this MO discussion .)
|
{
"source": [
"https://mathoverflow.net/questions/117668",
"https://mathoverflow.net",
"https://mathoverflow.net/users/15293/"
]
}
|
117,684 |
Let $E \to F$ be a morphism of cohomology theories defined on finite CW complexes. Then by Brown representability, $E, F$ are represented by spectra, and the map $E \to F$ comes from a map of spectra. However, it is possible that the map on cohomology theories is zero while the map of spectra is not nullhomotopic. In other words, the homotopy category of spectra does not imbed faithfully into the category of cohomology theories on finite CW complexes. This is due to the existence of phantom maps: Let $f: X \to Y$ be a map of spectra. It is possible that $f$ is not nullhomotopic even if for every finite spectrum $F$ and map $F \to X$, the composite $F \to X \stackrel{f}{\to} Y$ is nullhomotopic. Such maps are called phantom maps. For an explicit example, let $S^0_{\mathbb{Q}} = H\mathbb{Q}$ be the rational sphere. This is obtained as a filtered (homotopy) colimit of copies of $S^0$ and multiplication by $m$ maps. The universal coefficient theorem shows that there are nontrivial maps $S^0_{\mathbb{Q}} \to H \mathbb{Z}[1]$; in fact they are parametrized by $\mathrm{Ext}^1(\mathbb{Q}, \mathbb{Z}) \neq 0$. However, these restrict to zero on any of the terms in the filtered colimit (each of which is a copy of $S^0$). In other words, the distinction between flat and projective modules is in some sense an algebraic analog of the existence of phantom maps. Given a flat non-projective module $M$ over some ring $R$, then there is a nontrivial map in the derived category $M \to N[1]$ for some module $N$. Now $M$ is a filtered colimit of finitely generated projectives -- Lazard's theorem -- and the map $M \to N[1]$ is "phantom" in that it restricts to zero on each of these finitely generated projectives (or more generally for any compact object mapping to $M$).
So it should not be too surprising that phantom maps of spectra exist and are interesting. Now spectra are analogous to the derived category of $R$-modules, but spectra also come with another adjunction:
$$ \Sigma^\infty, \Omega^\infty: \mathcal{S}_* \leftrightarrows \mathcal{Sp}$$
between pointed spaces and spectra. They thus come with another distinguished class of objects, the suspension spectra. (Random question: what is the analog of a suspension spectrum in algebra?) Definition: A map of spectra $X \to Y$ is hyperphantom if for any suspension spectrum $T$ (let's interpret that loosely to include desuspensions of suspension spectra), $T \to X \to Y$ is nullhomotopic. In other words, a map of spectra is hyperphantom if the induced natural transformation on cohomology theories of spaces (not necessarily finite CW ones!) is zero. Is it true that a hyperphantom map is nullhomotopic? Rudyak lists this as an open problem in "On Thom spectra, orientability, and cobordism." What is the state of this problem?
|
Consider the periodic complex $K$-theory spectrum $KU$. The integral homology group $H_i(KU)$, the direct limit of $$\dots \to H_{2n+i}(BU)\to H_{2n+2+i}(BU)\to\dots,$$
is a one-dimensional rational vector space if $i$ is even and trivial if $i$ is odd. It follows that $H^1(KU)$ is nontrivial. (It's $Ext(\mathbb Q,\mathbb Z)$.) But this can't be detected in the cohomology of suspension spectra, because $H^{2n+1}(BU)$ is trivial. So that's an example of a "hyperphantom" map from $KU$ to the Eilenberg-MacLane spectrum $\Sigma H\mathbb Z$.
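To spell out the parenthetical identification (a routine step, recorded here only because the question turns on what suspension spectra can detect): the universal coefficient sequence for spectra gives
$$0 \longrightarrow \operatorname{Ext}^1_{\mathbb Z}\bigl(H_0(KU),\mathbb Z\bigr) \longrightarrow H^1(KU;\mathbb Z) \longrightarrow \operatorname{Hom}\bigl(H_1(KU),\mathbb Z\bigr) \longrightarrow 0,$$
and since $H_0(KU)\cong\mathbb Q$ while $H_1(KU)=0$, this yields $H^1(KU;\mathbb Z)\cong\operatorname{Ext}^1_{\mathbb Z}(\mathbb Q,\mathbb Z)\neq 0$.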
|
{
"source": [
"https://mathoverflow.net/questions/117684",
"https://mathoverflow.net",
"https://mathoverflow.net/users/344/"
]
}
|
117,971 |
First let me state two known theorems. Theorem 1 (for smooth manifolds): Let $(M,g)$ be a smooth compact two dimensional Riemannian manifold. Then
$$ \int \frac{K}{2 \pi} dA = \chi (M) $$
where $K$ is the Gaussian curvature, $dA$ is the area form and $\chi(M)$ is the Euler characteristic. Theorem 2 (combinatorial version): Let $M$ be a two dimensional simplicial complex, with vertices, edges and faces (essentially a bunch of triangles glued together along the edges). Define the function $K:M \rightarrow \mathbb{R}$ to be zero at any point that is not a vertex. At any vertex sum up all the angles
at the vertex and look at the deviation from $2 \pi$. This is the value of $K$ at a vertex. Then
$$ \sum_{p\in M} \frac{K(p)}{2 \pi} = \chi(M) . $$ Both these statements are known as the Gauss-Bonnet theorem. My question is the following: Is it possible to use Theorem 2 in some way to prove Theorem 1? In other words, can one recover Theorem 1 from Theorem 2 as some sort of an appropriate "limit"? A second question is: Is there a more general theorem from
which both Theorem 1 and Theorem 2 arise as special cases? Probably
some version of the theorem where $K$ simply has to be a measurable
function?
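(As a quick sanity check of Theorem 2, here is a small Python sketch; the choice of a regular tetrahedron is purely illustrative, and the helper `angle_defects` is my own name, not taken from any reference. Only unsigned angles enter, so the orientation of the triangles does not matter.)

```python
import numpy as np

def angle_defects(vertices, triangles):
    """Angle defect 2*pi - (sum of incident triangle angles) at each vertex."""
    defects = 2 * np.pi * np.ones(len(vertices))
    for tri in triangles:
        for i in range(3):
            p = vertices[tri[i]]
            q = vertices[tri[(i + 1) % 3]]
            r = vertices[tri[(i + 2) % 3]]
            u, v = q - p, r - p
            cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            defects[tri[i]] -= np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return defects

# Regular tetrahedron: chi = 4 - 6 + 4 = 2, so the defects should sum to 4*pi.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(angle_defects(verts, tris).sum() / (2 * np.pi))  # expect 2.0
```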
|
The answer is yes, to both questions. First question first. For any geodesic $n$-gon $P$ on $M$, i.e., a simply connected region of $M$ whose boundary consists of $n$ geodesic arcs, define $$ \delta(P)= \mbox{sum of the angles of $P$}-(n-2)\pi. $$ Note that if $M$ were flat, then the defect $\delta(P)$ would be $0$. This quantity has a remarkable property: if $P= P'\cup P''$, where $P, P', P''$ are geodesic polygons, then $$ \delta(P)=\delta(P')+\delta(P'')-\delta(P'\cap P'') $$ which shows that $\delta$ behaves like a finitely additive measure. It can be extended to a countably additive measure on $M$, and as such, it turns out to be absolutely continuous with respect to the volume measure $dV_g$ defined by the Riemann metric $g$. $\newcommand{\bR}{\mathbb{R}}$ Thus we can find a function $\rho: M\to \bR$ such that $$\delta= \rho dV_g. $$ More concretely, for any $p\in M$ we have $$\rho(p) =\lim_{P\searrow p} \frac{\delta(P)}{{\rm area}\;(P)}, $$ where the limit is taken over geodesic polygons $P$ that shrink down to the point $p$. In fact $$ \rho(p) = K(p). $$ Now observe that if we have a geodesic triangulation $\newcommand{\eT}{\mathscr{T}}$ $\eT$ of $M$, the combinatorial Gauss-Bonnet formula reads $$\sum_{T\in\eT} \delta(T)=2\pi \chi(M). $$ On the other hand $$\delta(T) =\int_T \rho(p) dV_g(p), $$ and we deduce $$ \int_M \rho(p) dV_g(p)=\sum_{T\in \eT}\int_T \rho(p) dV_g(p) =\sum_{T\in\eT} \delta(T)=2\pi \chi(M). $$ For more details see these notes for a talk I gave to first year grad students a while back. As for the second question, perhaps the most general version of Gauss-Bonnet uses the concept of normal cycle introduced by Joseph Fu. This is a rather tricky and technical subject, which has an intuitive description. Here is roughly the idea. To each compact and reasonably behaved subset $S\subset \bR^n$ one can associate an $(n-1)$-dimensional current $\newcommand{\bN}{\boldsymbol{N}}$ $\bN^S$ that lives in $\Sigma T\bR^n =$ the unit sphere bundle of the tangent bundle of $\bR^n$. Think of $\bN^S$ as an oriented $(n-1)$-dimensional submanifold of $\Sigma T\bR^n$. The term reasonably behaved is quite generous because it includes all of the examples that you can produce in finite time (Cantor-like sets are excluded). For example, any compact, semialgebraic set is reasonably behaved. How does $\bN^S$ look? For example, if $S$ is a submanifold, then $\bN^S$ is the unit sphere bundle of the normal bundle of $S\hookrightarrow \bR^n$. If $S$ is a compact domain of $\bR^n$ with $C^2$-boundary, then $\bN^S$, as a subset of $\bR^n\times S^{n-1}$, can be identified with the graph of the Gauss map of $\partial S$, i.e. the map $$\bR^n\supset \partial S\ni p\mapsto \nu(p)\in S^{n-1}, $$ where $\nu(p)$ denotes the unit outer normal to $\partial S$ at $p$. More generally, for any $S$, consider the tube of radius $\newcommand{\ve}{{\varepsilon}}\ve$ around $S$, $$S_\ve= \bigl\lbrace x\in\bR^n;\;\; {\rm dist}\;(x, S)\leq \ve\;\bigr\rbrace. $$ For $\ve$ sufficiently small, $S_\ve$ is a compact domain with $C^2$-boundary (here I'm winging it a bit) and we can define $\bN^{S_\ve}$ as before. $\bN^{S_\ve}$ converges in an appropriate way to $\bN^{S}$ as $\ve\to 0$, so that for $\ve$ small, $\bN^{S_\ve}$ is a good approximation for $\bN^S$. Intuitively, $\bN^S$ is the graph of a (possibly nonexistent) Gauss map. If $S$ is a convex polyhedron, $\bN^S$ is easy to visualize. In general $\bN^S$ satisfies a remarkable additivity $$\bN^{S_1\cup S_2}= \bN^{S_1}+\bN^{S_2}-\bN^{S_1\cap S_2}. $$ In particular this leads to a quite detailed description of $\bN^S$ for a triangulated space $S$. Where does the Gauss-Bonnet formula come in? As observed by J. Fu, there are some canonical, $O(n)$-invariant, degree $(n-1)$ differential forms on $\Sigma T\bR^n$, $\omega_0,\dotsc, \omega_{n-1}$, with lots of properties, one being that for any compact reasonable subset $S$ $$\chi(S)=\int_{\bN^S}\omega_0. $$ The last equality contains as special cases the two formulae you included in your question. I am aware that the last explanations may feel opaque at a first go, so I suggest some easier, friendlier sources. For the normal cycle of simplicial complexes try these notes. For an exposition of Bernig's elegant approach to normal cycles try these notes. Even these "friendly" expositions with a minimal amount of technicalities could be taxing since they assume familiarity with many concepts. Last, but not least, you should have a look at these REU notes on this subject. While the normal cycle does not appear, its shadow is all over the place in these beautifully written notes. Also, see these notes for a minicourse on this topic that I gave a while back.
|
{
"source": [
"https://mathoverflow.net/questions/117971",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4463/"
]
}
|
118,117 |
Does every irreducible curve admit an equation of the form $f(x)=g(y)$, where $f$ and $g$ are polynomials? What if we allow $f$ and $g$ to be rational functions? Actually, I'd like to understand this in the presence of an additional constraint: if we're given a finite cover of curves $\pi\colon C\to\mathbb{P}^1$, do we expect there to be a cover $\phi\colon C\to\mathbb{P}^1$ and rational functions $f(x)$ and $g(x)$ such that $C$ is isomorphic to $f(x)=g(y)$ and also $f\circ\phi=g\circ\pi$? In other words, not only is $C$ isomorphic to $f(x)=g(y)$, but this isomorphism can be chosen so that $\pi$ is the projection onto the $y$ coordinate. This is reminiscent of Chad Schoen's paper "Varieties dominated by product varieties", but I don't see a precise connection between the two.
|
This would contradict the Harris-Mumford(-Eisenbud) theorem that $M_g$ is non-uniruled for $g$ at least $23$. Let $C$ be a general curve of genus $g$. If $C$ is in "Zieve form", then it is the normalization of the (almost certainly) singular curve in $\mathbb{CP}^1 \times \mathbb{CP}^1$,
$$D = \{ ([x_0,x_1],[y_0,y_1]) \in \mathbb{CP}^1\times \mathbb{CP}^1 \ \vert\ y_0^e f(x_0,x_1) - x_0^d g(y_0,y_1) = 0 \}, $$ where $f(x_0,x_1)$, respectively $g(y_0,y_1)$, is a homogeneous polynomial of degree $d$, resp. $e$, such that $f(0,1)$ and $g(0,1)$ are nonzero (or else the defining polynomial factors to a simpler form). By direct computation, the singular points occur where $[x_0,x_1]$ is a multiple root of $f(x_0,x_1)$ and $[y_0,y_1]$ is a multiple root of $g(y_0,y_1)$, or the point is $([0,1],[0,1])$. Moreover, at each point, the local analytic type of the singularity is the same as the plane curve with equation $y^n-x^m$, where $m$, resp. $n$, is the vanishing order of $f(x_0,x_1)$, resp. $g(y_0,y_1)$ at that point. In particular, the "delta invariant" depends only on $(m,n)$. Thus, if you "deform" $f(x_0,x_1)$ and $g(y_0,y_1)$ so that the number and type of multiple roots remains constant, then the normalizations of the corresponding curves in $\mathbb{CP}^1\times \mathbb{CP}^1$ remain of genus $g$. However, the family of such deformations of $(f,g)$ is a rational variety. Precisely, if you write
$$ f(x_0,x_1) = (x_1-a_1x_0)^{m_1}(x_1-a_2x_0)^{m_2}\cdots (x_1-a_rx_0)^{m_r}, $$ with $(a_1,\dots,a_r)$ pairwise distinct,
then the deformation space for $f$ is just a Zariski open subset of the affine space with coordinates $(a_1,\dots,a_r)$, and similarly for $g(y_0,y_1)$. Since $M_g$ is non-uniruled, this is a contradiction: there is only the constant morphism from a rational variety to $M_g$ whose image contains the general point parameterizing $C$. Edit. Mike also asks whether this could be true if $f$ and $g$ are rational functions rather than polynomial functions. This is equivalent to replacing the defining equation above in $\mathbb{CP}^1 \times \mathbb{CP}^1$ by the more general equation
$$
g_0(y_0,y_1)f_1(x_0,x_1) - f_0(x_0,x_1)g_1(y_0,y_1),
$$
where $f_0$, $f_1$ are homogeneous of degree $d$ with no common factor, and where $g_0$, $g_1$ are homogeneous of degree $e$ with no common factor. The same observations apply: the number and types of singularities depend only on the number and multiplicities of the roots of $f_0$, $f_1$, $g_0$ and $g_1$. By varying those (distinct, likely repeated) roots as in the previous paragraph, one gets a morphism from a rational, quasi-projective variety to $M_g$. By Harris-Mumford(-Eisenbud), the only such morphism is constant if the image contains a general point of $M_g$.
|
{
"source": [
"https://mathoverflow.net/questions/118117",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30412/"
]
}
|
118,183 |
I understand Godel's Incompleteness Theorems to be statements about effectively generated formal systems , which basically makes them theorems about algorithms. This is cool, because despite being very abstract, they actually constrain my expectations about how computers and human beings can behave. But, being theorems, what formal system are they theorems in ? That is, what formal language is used to express them, how do I interpret that language as being about algorithms, what axioms are assumed, and what rules of inference are used to derive the incompleteness theorems? I ask because I am looking for a better answer than " ZFC ", which has been given to me in person a few times now. ZFC refers to all sorts of things I don't believe exist (e.g. non recursively enumerable sets, choice functions for uncountable families...), at least not in the same way I believe in concrete things like computers and algorithms. I can see from skimming the proofs that I could probably make up a formal system in which the theorems could be expressed and proven, which did not refer to all the monstrosities of ZFC. I just want to know what standard, "simplest" formal system(s) can be used for this purpose.
|
As Andres Caicedo points out in his comment (to the Question), the modest fragment $\sf{PRA}$ (Primitive Recursive Arithmetic) of $\sf{PA}$ (Peano arithmetic) is already able to verify the incompleteness theorems. Indeed, the proof of the Gödel-Rosser incompleteness theorem is entirely syntactic and can be readily implemented in a fragment of $\sf{PRA}$ known as $I\Delta_0 + exp$, where $I\Delta_0$ is the weakening of $PA$ in which the induction scheme is only available for $\Delta_0$-formulas, and $exp$ asserts the totality of the exponential function $2^x$ (it is well-known that $I\Delta_0$ is unable to prove the totality of the exponential function). It is worth noting that in the above $I\Delta_0 + exp$ can even be reduced to $I\Delta_0 + \Omega_1$, where $\Omega_1$ is the axiom asserting the totality of the function $2^{\left| x\right|^2 }$, where $\left| x\right|$ denotes the length of the binary expansion of $x$. The theory $I\Delta_0 + \Omega_1$ is commonly viewed as the weakest fragment of $\sf{PA}$ in which one can develop a workable "theory of syntax". PS. As pointed out by Jeřábek, the incompleteness theorems can be implemented in even weaker systems.
|
{
"source": [
"https://mathoverflow.net/questions/118183",
"https://mathoverflow.net",
"https://mathoverflow.net/users/84526/"
]
}
|
118,444 |
Let $H$ be a separable, infinite dimensional Hilbert space and let $Calk(H):=B(H)/K(H)$ denote the Calkin algebra. There is an obvious surjection $\pi: B(H) \to Calk(H)$, but I'm interested in the somewhat opposite question: is it possible to construct an embedding $j_1:Calk(H) \to B(H)$? The same question for an embedding $j_2:B(H) \to Calk(H)$, and finally, is there an epimorphism $k:Calk(H) \to B(H)$?
|
(1) No. The Calkin algebra contains an uncountable family of mutually orthogonal nonzero projections. (2) Yes. Embed $B(H)$ into $B(H \otimes H)$ by the map $A \mapsto A \otimes I$, then pass to $Q(H\otimes H) \cong Q(H)$. (3) No. The Calkin algebra is simple.
|
{
"source": [
"https://mathoverflow.net/questions/118444",
"https://mathoverflow.net",
"https://mathoverflow.net/users/24078/"
]
}
|
118,448 |
We know that a cotangent bundle $T^\star M$ has a canonical symplectic form and that $M$ is a natural Lagrangian submanifold of it. A well-known result is that any submanifold $X=\{(p,f(p)): p\in M\}$, where $f$ is a closed one-form, is Lagrangian. Denote by $[f]$ the de Rham cohomology class of $f$. Assume that we flow $X$ in a Hamiltonian direction to $Y$; then $Y$ will be a Lagrangian submanifold of $T^\star M$. My question is: can we write it as $Y=\{(p,g(p)): p\in M\}$ for some closed one-form $g$? If so, do we have $[g]=[f]$? Thanks in advance!
|
|
{
"source": [
"https://mathoverflow.net/questions/118448",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29480/"
]
}
|
118,481 |
Let me give a reasonable model for the question in the title. In ${\rm Sym}_n({\mathbb R})$, the positive definite matrices form a convex cone $S_n^+$. The probability I have in mind is the ratio $p_n=\theta_n/\omega_n$, where $\theta_n$ is the solid angle of $S_n^+$, and $\omega_n$ is the solid angle of the whole space ${\rm Sym}_n$ (the area of the unit sphere of dimension $N-1$ where $N=\frac{n(n+1)}2$). These definitions are relative to the Euclidean norm $\|M\|=\sqrt{{\rm Tr}(M^2)}$; this is the most natural among Euclidean norms, because it is invariant under unitary conjugation. Because $S_2^+$ is a circular cone, I could compute $p_2=\frac{2-\sqrt2}4\sim0.146$. Is there a known closed formula for $p_n$? If not, are the asymptotics known? More generally, we may define open convex cones
$$\Lambda_n^0\subset\Lambda_n^1\subset\cdots\subset\Lambda_n^{n-1}$$
in the following way: $M\mapsto\det M$ is a homogeneous polynomial, hyperbolic in the direction of the identity matrix $I_n$. Thus its successive derivatives in this direction are hyperbolic too. The $k$th derivative defines a "future cone" $\Lambda_n^k$, these cones being nested. For instance, $\Lambda_n^0=S_n^+$. It turns out that this derivative is, up to a constant, $\sigma_{n-k}(\vec\lambda)$, where $\sigma_j$ is the $j$th elementary symmetric polynomial and $\vec\lambda$ the spectrum of $M$. Therefore $\Lambda_n^k$ is defined by the inequalities
$$\sigma_1(\vec\lambda)\ge0,\ldots,\sigma_{n-k}(\vec\lambda)\ge0.$$
For instance, $\Lambda_n^{n-1}$ is the half-space defined by ${\rm Tr}M\ge0$. Let us again define $p_{n,k}$ to be the probability for $M\in{\rm Sym}_n$ to belong to $\Lambda_n^k$. Thus $p_{n,0}=p_n$ and $p_{n,n-1}=\frac12$. What is the distribution of $(p_{n,0},\ldots,p_{n,n-1})$, asymptotically as $n\rightarrow+\infty$? Edit. As Mikaël mentioned, it is equivalent, and easier for calculations, to consider the standard Gaussian measure (GOE) over ${\bf Sym}_n$.
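(In case it helps anyone playing with this numerically: a minimal Monte Carlo sketch, where the symmetrization $M=(G+G^T)/2$ is my choice of normalization, picked so that the Gaussian weight is $\exp(-{\rm Tr}(M^2)/2)$, i.e. the invariant norm above; the function name and trial count are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_posdef(n, trials=100_000):
    """Monte Carlo estimate of P(M positive definite) for M = (G + G^T)/2,
    G having iid standard normal entries, i.e. Gaussian weight exp(-Tr(M^2)/2)."""
    hits = 0
    for _ in range(trials):
        g = rng.standard_normal((n, n))
        if np.all(np.linalg.eigvalsh((g + g.T) / 2) > 0):
            hits += 1
    return hits / trials

print(prob_posdef(2), (2 - np.sqrt(2)) / 4)   # both close to 0.1464
print(prob_posdef(3))                         # already much smaller
```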
|
Edit: According to Dean and Majumdar, the precise value of $c$ in my answer below is $c=\frac{\log 3}{4}$ (and $c=\frac{\log 3}{2}$ for GUE random matrices). I did not read their argument, but I have been told that it can be considered rigorous. I heard about this result through the recent work of Gayet and Welschinger on the mean Betti number of random hypersurfaces. I am a bit surprised that this computation was not made before 2008. Let me just expand my comment. You are talking about the uniform measure on the unit sphere of the Euclidean space $Sym_n(\mathbb R)$, but for measuring subsets that are homogeneous it is equivalent to talk about the standard Gaussian measure on $Sym_n(\mathbb R)$. This measure is called in random matrix theory the Gaussian Orthogonal Ensemble (GOE). In particular $p_n$ is the probability that a matrix in the GOE is positive definite. Since there are explicit formulas for the probability distribution of the eigenvalues of a GOE matrix (this is probably what Robert Bryant is proving), there might be explicit formulas for $p_n$. Anyway, the asymptotics are known from general large deviation results for random matrices (due to Ben Arous and Guionnet, PTRF 1997): $p_n$ goes to zero as $e^{-c n^2}$ for some constant $c>0$. The constant is equal to the infimum, over all probability measures $\mu$ on $\mathbb R^+$, of the quantity
$$ \frac{1}{2} (\int x^2 d\mu(x) - \Sigma(\mu)) - \frac 3 8 - \frac 1 4 \log 2$$
where $\Sigma(\mu)$ is Voiculescu's free entropy $\iint \log|x-y| d\mu(x) d\mu(y)$. You can probably explicitly compute $c$. It is even possible that this was known before Ben Arous and Guionnet's work, since their results are much more general. For your second question, I am pretty sure that the limiting graph of $t \in [0,1) \mapsto p_{n,E(tn)}$ is $0$ ($E(x)$ is the integer part of $x$). But this is probably not what you really want to ask.
|
{
"source": [
"https://mathoverflow.net/questions/118481",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8799/"
]
}
|
118,500 |
Let $k$ be a field. There are two natural categories to consider: The category of simplicial commutative $k$-algebras. The category of connective $E_\infty$ $k$-algebras (i.e., chain complexes of $k$-vector spaces in nonnegative dimensions with a coherently associative and commutative multiplication law). These categories are not the same if $k$ does not have characteristic zero. Simplicial commutative $k$-algebras are rather special, and (for example) not every commutative dga over $k$ (which determines an $E_\infty$-algebra over $k$) comes from a simplicial commutative $k$-algebra. (The homotopy groups of a simplicial commutative ring have divided powers, by an explicit construction that I don't really understand.) The category of $E_\infty$-algebras over $k$ has a nice interpretation via homotopy theory: it is the category of commutative algebra objects (in an appropriate sense) in the category of connective $k$-module spectra. (In particular, it is monadic over connective $k$-module spectra, in the $\infty$-categorical sense.) I don't know how to think of simplicial commutative rings in this way; all I know for motivation is that they form a nice homotopy theory (e.g. presented by a fairly concrete model category) that allows you to extend the category of ordinary commutative rings (e.g., to resolve non-smooth objects by smooth ones). Is there an analog of the above discussion for $E_\infty$-algebras that works for simplicial commutative $k$-algebras? In particular, can they be described as algebras over a nice monad for $k$-module spectra?
|
I don't know a really satisfying answer to this question, but here are a few observations. 1) The $\infty$-category of simplicial commutative $k$-algebras is monadic over the $\infty$-category of connective $k$-module spectra. The relevant monad is the nonabelian left derived functor of the "total symmetric power" on ordinary $k$-modules, which is different from
the construction $M \mapsto \bigoplus_{n} (M^{\otimes n})_{h \Sigma_n}$ unless $k$ has characteristic zero. 2) The $\infty$-category of simplicial commmutative rings is freely generated under sifted colimits by the ordinary category of finitely generated polynomial algebras over $k$.
In other words, it can be realized as the $\infty$-category of product-preserving functors from the ordinary category of $k$-schemes which are affine spaces to the $\infty$-category of spaces. 3) Let $X$ be the affine line over $k$ (in the sense of classical algebraic geometry). Then
$X$ represents the forgetful functor {commutative $k$-algebras} -> {sets}. Consequently,
$X$ has the structure of a commutative $k$-algebra in the category of schemes.
Also, $X$ is flat over $k$. Now, any ordinary scheme can be regarded as a spectral scheme over $k$: that is, it also represents a functor {connective E-infty algebras over k} -> {spaces}. In general, products in the category of ordinary $k$-schemes need not coincide with products in the $\infty$-category of spectral $k$-schemes. However, they do agree for flat $k$-schemes. Consequently, $X$ can be regarded as a commutative $k$-algebra in the $\infty$-category of derived $k$-schemes. In particular, $X$ represents a functor {connective E-infty algebras over k} -> {connective E_infty algebras over k}. This functor has the structure of a comonad whose comodules are the simplicial commutative $k$-algebras. You can summarize the situation more informally by saying: derived algebraic geometry (based on simplicial commutative $k$-algebras) is what you get when you take
spectral algebraic geometry (based on E-infty-algebras over $k$) by forcing the two different versions of the affine line to coincide. 4) The forgetful functor {simplicial commutative $k$-algebras} -> {E-infty algebras over $k$} is both monadic and comonadic. In particular, you can think of a simplicial commutative $k$-algebra $R$ as an E-infty algebra over $k$ with some additional structure. As Tyler mentioned, one way of thinking about that additional structure is that it gives you the ability to form symmetric powers of connective modules. Of course, if $M$ is any $R$-module spectrum, you can always form the construction $(M^{\otimes n})_{h \Sigma_n}$. However, this doesn't behave the way you might expect based on experience in ordinary commutative algebra: for example, if $M$ is free (i.e. a sum of copies of $R$) then $(M^{\otimes n})_{h \Sigma_n}$ need not be free (unless $R$ is of characteristic zero). However, when
$R$ is a simplicial commutative ring, there is a related construction on connective
$R$-module spectra, given by nonabelian left derived functors of the usual symmetric power.
This will carry free $R$-module spectra to free $R$-module spectra (of the expected rank). It is possible to describe the $\infty$-category of simplicial commutative $k$-algebras along the following lines: a simplicial commutative $k$-algebra is
a connective E-infty algebra $R$ over $k$ together with a collection of symmetric power functors
Sym^{n} from connective $R$-modules to itself, plus a bunch of axioms and coherence data.
I don't remember the exact statement (my recollection is that spelling this out turned out to be more trouble than it was worth).
|
{
"source": [
"https://mathoverflow.net/questions/118500",
"https://mathoverflow.net",
"https://mathoverflow.net/users/344/"
]
}
|
118,626 |
Every real symmetric matrix has at least one real eigenvalue. Does anyone know how to prove this in an elementary way, that is, without the notion of complex numbers?
|
If "elementary" means not using complex numbers, consider this. First minimize the Rayleigh ratio $R(x)=(x^TAx)/(x^Tx).$ The minimum exists and is real.
This is your first eigenvalue. Then you repeat the usual proof by induction in dimension of the space. Alternatively you can consider the minimax or maximin problem with the same Rayleigh ratio,
(find the minimum of a restriction on a subspace, then maximum over all
subspaces) and it will give you all eigenvalues. But of course any proof requires some topology. The standard proof requires Fundamental theorem of
Algebra, this proof requires existence of a minimum.
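To make the step "this is your first eigenvalue" explicit (nothing beyond a gradient computation is needed): since $R$ is homogeneous of degree $0$, it attains its minimum on the compact unit sphere, and at a minimizer $x\neq 0$ one has
$$\nabla R(x)=\frac{2}{x^Tx}\bigl(Ax-R(x)\,x\bigr)=0 \quad\Longrightarrow\quad Ax=R(x)\,x,$$
so the minimizer is an eigenvector and the minimum value $R(x)$ is a real eigenvalue (in fact the smallest one).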
|
{
"source": [
"https://mathoverflow.net/questions/118626",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30376/"
]
}
|
118,646 |
We know that PGL(2, $q$) has elements of order $q+1$ or $q-1$. Suppose $k\neq 1, 2$ divides $q+1$ or $q-1$. It is clear that PGL(2, $q$) then has an element of order $k$. I would like to know what the number of elements of order $k$ is and how we can compute it.
|
If "elementary" means not using complex numbers, consider this. First minimize the Rayleigh ratio $R(x)=(x^TAx)/(x^Tx).$ The minimum exists and is real.
This is your first eigenvalue. Then you repeat the usual proof by induction in dimension of the space. Alternatively you can consider the minimax or maximin problem with the same Rayleigh ratio,
(find the minimum of a restriction on a subspace, then maximum over all
subspaces) and it will give you all eigenvalues. But of course any proof requires some topology. The standard proof requires Fundamental theorem of
Algebra, this proof requires existence of a minimum.
|
{
"source": [
"https://mathoverflow.net/questions/118646",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29634/"
]
}
|
118,903 |
I'm teaching axiomatic linear algebra again this semester. Although the textbooks I'm using do everything over the real or complex numbers, for various reasons I prefer to work over an arbitrary field when possible. I always introduce at least $\mathbb{F}_2$ as an example of a finite field. To help motivate this level of generality, I'd like to cover some application of linear algebra over finite fields. Ideally it shouldn't make explicit reference to linear algebra or finite fields in its setup, and should require as little background as possible (the students have taken calculus, but not necessarily any other advanced math — in particular applications to group theory are out). I've looked around a little, but haven't found anything so far that requires little enough overhead to fit into a single 50-minute lecture and wouldn't seem either too abstract or too arbitrary to motivate such students. Any suggestions? Alternatively, I'd be interested in elementary applications of linear algebra over any other field which isn't a subfield of $\mathbb{C}$.
|
How about binary linear codes? You can "see" the Hamming distance between codewords, and use linear transformations to encode/decode.
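(To make this concrete, here is a minimal sketch; the systematic generator/parity-check pair below is just the standard $[7,4]$ Hamming code over $\mathbb{F}_2$, chosen for illustration, and the function names are mine. Everything is plain linear algebra over $\mathbb{F}_2$: encoding is multiplication by $G$, and the syndrome $Hw$ locates a single flipped bit because the columns of $H$ are distinct and nonzero.)

```python
import numpy as np

# Systematic [7,4] Hamming code over F_2: G = [I_4 | P], H = [P^T | I_3], so H G^T = 0 (mod 2).
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(msg):                      # msg: length-4 vector over F_2
    return (msg @ G) % 2

def correct(word):                    # flip the unique position whose column of H matches the syndrome
    syndrome = (H @ word) % 2
    if syndrome.any():
        for j in range(7):
            if np.array_equal(H[:, j], syndrome):
                word = word.copy()
                word[j] ^= 1
                break
    return word

msg = np.array([1, 0, 1, 1])
sent = encode(msg)
received = sent.copy(); received[5] ^= 1          # flip one bit in transit
print(np.array_equal(correct(received), sent))    # True: the error is located and fixed
```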
|
{
"source": [
"https://mathoverflow.net/questions/118903",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1044/"
]
}
|
118,920 |
My stackexchange post was somewhat unsatisfactory (also because I may not have stated clearly enough what my interest was). So here it goes! Let $M$ be a compact Riemannian manifold and $\Delta$ be the Laplace-Beltrami operator. It is well-known that the solution operator to the heat equation $e^{t \Delta}$ is smoothing for $t>0$ and has a smooth integral kernel $k_t(x, y) \in C^\infty(M \times M)$. Furthermore, $k_t$ has an asymptotic expansion $$ k_t(x, y) \sim \underbrace{(4 \pi t)^{-n/2} \exp \left( -\frac{1}{4t} \mathrm{dist}(x, y)^2 \right)}_{:= e_t(x, y)} \sum_{j=0}^\infty t^j \Phi_j(x, y) $$ meaning that $$ \left| k_t(x, y) - e_t(x, y) \sum_{j=0}^N t^j \Phi_j(x, y) \right| \leq C t^{N+1}$$ uniformly in $x$ and $y$ in a neighborhood of the diagonal. Now by formally substituting $t \rightarrow it$, one gets the formal asymptotic series $$ e_{it}(x, y) \sum_{j=0}^\infty (it)^j \Phi_j(x, y),$$ which has the property that it formally (i.e. termwise, as an asymptotic series in $t$) solves the Schrödinger equation $ \left(i \frac{\partial}{\partial t} + \Delta\right)k_t = 0.$ Now my question is the following: Does this asymptotic series have any relation to the solution operator $e^{it\Delta}$ of the Schrödinger equation, or to its distribution kernel?
|
|
{
"source": [
"https://mathoverflow.net/questions/118920",
"https://mathoverflow.net",
"https://mathoverflow.net/users/16702/"
]
}
|
119,254 |
Can anyone cite an example of a mathematics paper that has been retracted? It is said that on the order of 100,000 new theorems enter the mathematics literature every year. For a number of reasons, including hyper-specialization and demands on referee resources, it is, in my view, unlikely that all their proofs are correct. Yet it seems no explicit effort is made to clean the literature. False theorems float downstream along with the true ones, available for citation and use in constructing yet further theorems. Thanks for any insight. Cheers, Scott
|
A general existence theorem is proved : 1933 : W. Grunwald, Ein allgemeines Existenztheorem für algebraische Zahlkörper , J. reine angew. Math. 169 (1933), 103–107. and reproved : 1942: G. Whaples, Non-analytic class field theory and Grünwald's theorem .
Duke Math. J. 9, (1942). 455–473. A counter-example is found : 1948 : S. Wang, A counter-example to Grunwald's theorem , Ann. Math. 49 (1948), 1008–1009. and the theorem is corrected : 1950 : S. Wang, On Grunwald's theorem , Ann. Math. 51 (1950), 471–484. twice in the same year : ---- : H. Hasse, Zum Existenzsatz von Grunwald in der Klassenkörpertheorie , J. reine angew. Math. 188 (1950), 40–64. A quarter of a century later, a simpler proof is given : 1974: J. Neukirch, Eine Bemerkung zum Existenzsatz von Grunwald-Hasse-Wang , J. Reine Angew. Math. 268/269 (1974), 315–317. but more than half a century later, corrections to the corrections are required : 2007 : W-D. Geyer & C. Jensen, Embeddability of quadratic extensions in cyclic extensions .
Forum Math. 19 (2007), no. 4, 707–725. 2011 : P. Morton, A correction to Hasse's version of the Grunwald-Hasse-Wang theorem .
J. Reine Angew. Math. 659 (2011), 169–174. Addendum (2013/05/18) I'm afraid the above list of errors and corrections might look a bit negative, so let me add a positive note (which will also save you 30,00 € or $42.00 by not having to read it here ) : In 1933, van der Waerden asked in the Jahresbericht : Which quadratic fields can be embedded in cyclic quartic fields ? Solutions were provided by four people, among them Hasse, who generalised the problem to : Under which conditions can a degree-$l$ ($l$ prime) cyclic extension $K_1$ of a number field $K$ be embedded into a degree-$l^n$ cyclic extension $K_n$ of $K$ ? A. Scholz sent in a "solution" to this problem in 1935 which essentially claimed that the obstructions are purely local in nature. But Hans Richter, a doctoral student of van der Waerden, knew already that there is an exception when $l=2$, so a Scholtz-Richter correction to Scholz's paper was required. In a sense, Richter anticipated not only Wang's counterexample to Grunwald's theorem but also its solution, without mentioning it explicitly as such.
|
{
"source": [
"https://mathoverflow.net/questions/119254",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4111/"
]
}
|
119,329 |
On page 263 of this book review appears the following: Given the centrality of L-functions to the Langlands program, nothing would seem more natural (than a presentation of elementary algebraic number theory from the standpoint of L-functions and their analytic properties), but in fact the properties of L-functions traditionally of interest to analytic number theorists - for example, the location of zeroes in the critical strip (the Generalized Riemann Hypothesis) - have historically had little to do with the preoccupations of the Langlands program. Thanks largely to the efforts of a few charismatic and determined individuals, this is beginning to change and Langlands himself has in recent years turned to methods in analytic number theory in an attempt to get beyond the visible limits of the techniques developed over the last few decades. I'd like to ask for a big picture exposition of how such questions about the location of zeroes of L-functions appear and interact with the Langlands program. My interest is mainly cultural and the answer should be tailored for the outsider to number theory (I'm viewing Langlands program algebraically as the pursuit of a nonabelian class field theory.) A more crude question is: Does the Langlands program say anything about the Grand Riemann Hypothesis or vice versa? This is almost certainly too crude a question for MO, but Langlands seems to have such an amazing unifying appeal, that I feel a temptation to see how much it subsumes. I fully expect an answer like "It is impossible to coherently discuss this without years of training". Thank you for any attempt to explain things to someone who is not a number theorist, in advance!
|
One can use Langlands functoriality to eliminate the so-called Siegel zeros of an automorphic $L$-function. For example, Hoffstein-Ramakrishnan (IMRN 1995) proved that the $L$-function of a $GL(n)$ cusp form for $n>1$ has no Siegel zero if all $GL(m)\times GL(n)$ $L$-functions are $GL(mn)$ $L$-functions. There are several unconditional results along this line, e.g. in the same paper it is shown that the $L$-function of a $GL(2)$ cusp form has no Siegel zero.
|
{
"source": [
"https://mathoverflow.net/questions/119329",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6269/"
]
}
|
119,334 |
We have this theorem:
Let $U$, $V$ be two open sets of a manifold $M$ with $U \cup V = M$. If they are $G$-stable, the induced sequence in equivariant cohomology $$ \cdots \to H^k_G (U \cup V) \to H^k_{G}(U)\oplus H^k_G(V) \to H^k_{G}(U \cap V) \to H^{k+1}_{G}(U \cup V) \to \cdots $$ is exact. There is a Borel localization theorem:
Let $M$ be a compact manifold equipped with a $G$-action ($G$ is a compact Lie group). Let $i:F \rightarrow M$ denote the inclusion in $M$ of the $G$-fixed point set $F$ of $M$. Then $$ i^{*}: H^\bullet_G(M) \to H^\bullet_G(F) \simeq H^\bullet(F) \otimes H^\bullet_{G}(pt) $$
is an isomorphism modulo $H^{\bullet}_{G}(pt)$-torsion. Is there a way to give a proof of the Borel localization theorem using the equivariant Mayer-Vietoris theorem? Is there a good (concrete) example in which the torsion is essential to having an isomorphism? (when $H^\bullet_{G}(pt)$ is a polynomial ring...)
|
|
{
"source": [
"https://mathoverflow.net/questions/119334",
"https://mathoverflow.net",
"https://mathoverflow.net/users/30779/"
]
}
|
120,067 |
The theta function is the analytic function $\theta:U\to\mathbb{C}$ defined on the (open) right half-plane $U\subset\mathbb{C}$ by $\theta(\tau)=\sum_{n\in\mathbb{Z}}e^{-\pi n^2 \tau}$. It has the following important transformation property. Theta reciprocity : $\theta(\tau)=\frac{1}{\sqrt{\tau}}\theta\left(\frac{1}{\tau}\right)$. This theorem, while fundamentally analytic—the proof is just Poisson summation coupled with the fact that a Gaussian is its own Fourier transform—has serious arithmetic significance. It is the key ingredient in the proof of the functional equation of the Riemann zeta function. It expresses the automorphy of the theta function. Theta reciprocity also provides an analytic proof (actually, the only proof, as far as I know) of the Landsberg-Schaar relation $$\frac{1}{\sqrt{p}}\sum_{n=0}^{p-1}\exp\left(\frac{2\pi i n^2 q}{p}\right)=\frac{e^{\pi i/4}}{\sqrt{2q}}\sum_{n=0}^{2q-1}\exp\left(-\frac{\pi i n^2 p}{2q}\right)$$ where $p$ and $q$ are arbitrary positive integers. To prove it, apply theta reciprocity to $\tau=2iq/p+\epsilon$, $\epsilon>0$, and then let $\epsilon\to 0$. This reduces to the formula for the quadratic Gauss sum when $q=1$: $$\sum_{n=0}^{p-1} e^{2 \pi i n^2 / p} =
\begin{cases}
\sqrt{p} & \textrm{if } \; p\equiv 1\mod 4 \\
i\sqrt{p} & \textrm{if } \; p\equiv 3\mod 4
\end{cases}$$ (where $p$ is an odd prime). From this, it's not hard to deduce Gauss's "golden theorem". Quadratic reciprocity : $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{(p-1)(q-1)/4}$ for odd primes $p$ and $q$. For reference, this is worked out in detail in the paper " Applications of heat kernels on abelian groups: $\zeta(2n)$, quadratic reciprocity, Bessel integrals " by Anders Karlsson. I feel like there is some deep mathematics going on behind the scenes here, but I don't know what. Why should we expect theta reciprocity to be related to quadratic reciprocity? Is there a high-concept explanation of this phenomenon? If there is, can it be generalized to other reciprocity laws (like Artin reciprocity)? Hopefully some wise number theorist can shed some light on this!
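(Not needed for the question, but the Landsberg-Schaar relation is easy to test numerically, which is a pleasant way to see the $\epsilon\to 0$ limit "working"; the sketch below simply evaluates both finite sums exactly as stated above, for a few arbitrarily chosen pairs $(p,q)$, and the function name is mine.)

```python
import cmath

def landsberg_schaar(p, q):
    lhs = sum(cmath.exp(2j * cmath.pi * n * n * q / p) for n in range(p)) / p**0.5
    rhs = (cmath.exp(1j * cmath.pi / 4) / (2 * q)**0.5
           * sum(cmath.exp(-1j * cmath.pi * n * n * p / (2 * q)) for n in range(2 * q)))
    return lhs, rhs

for p, q in [(1, 1), (2, 1), (3, 5), (7, 4)]:
    lhs, rhs = landsberg_schaar(p, q)
    print(p, q, abs(lhs - rhs) < 1e-9)   # should print True for each pair
```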
|
Going in the direction of more generality: With $\theta(\tau)=\sum_n\exp(\pi i n^2 \tau)$, theta reciprocity describes how the function behaves under the linear fractional transformation $[\begin{smallmatrix} 0&1 \\ -1&0\end{smallmatrix}]$. From this one can show it's an automorphic form (of half integral weight, on a congruence subgroup). Automorphic forms and more generally automorphic representations are linked by the Langlands program to a very general approach to a non-abelian class field theory. Your "Why should we expect ..." question is dead-on. This is very deep and surprising stuff. In the direction of more specificity, the connection to the heat kernel is fascinating. (In this context, Serge Lang was a great promoter of 'the ubiquitous heat kernel.') The theta function proof is also discussed in Dym and McKean's 1972 book "Fourier Series and Integrals" and in Richard Bellman's 1961 book "A Brief Introduction to Theta Functions." Bellman points out that theta reciprocity is a remarkable consequence of the fact that when the theta function is extended to two variables, both sides of the reciprocity law are solutions to the heat equation. One is, for $t\to 0$ what physicists call a 'similarity solution' while the other is, for $t\to \infty$ the separation of variables solution. By the uniqueness theorem for solutions to PDEs, the two sides must be equal! A special case of quadratic reciprocity is that an odd prime $p$ is a sum of two squares if and only if $p\equiv 1\bmod 4$. This can be be done via the theta function and is in fact given in Jacobi's original 1829 book "Fundamenta nova theoriae functionum ellipticarum."
|
{
"source": [
"https://mathoverflow.net/questions/120067",
"https://mathoverflow.net",
"https://mathoverflow.net/users/25791/"
]
}
|
120,442 |
Is it true that every smooth rational variety X is simply connected? What is the proof?
Would it still be true if X has mild (for example orbifold) singularities?
|
Yes! (I assume it was implicit in your question that the variety be projective ?) More generally: any smooth, complex, rationally connected projective variety is simply connected. See Debarre's book ("Higher dimensional algebraic geometry"). Alternatively, take a look at Debarre's Bourbaki talk ("Varietes rationnellement connexes"). The idea is that a rationally connected variety has no holomorphic forms, so by Hodge theory the structure sheaf $\mathcal{O}_X$ is acyclic, implying $\chi(X,\mathcal{O}_X) = 1$. If $f : Y \to X$ is a connected etale cover, then $Y$ is again rationally connected, so by the same argument $\chi(Y,\mathcal{O}_Y) = 1$, so $\deg{f} = 1$. So $X$ is simply connected. In positive characteristic the Hodge theory fails, so the argument as such doesn't stand, but you may still get the simple connectedness as a consequence of the fibration theorem of Graber-Harris-Starr-de Jong. See the mentioned Bourbaki expose.
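To fill in the step "$\chi(Y,\mathcal O_Y) = 1$, so $\deg f = 1$" above (standard, but worth recording): for a finite étale cover $f:Y\to X$ of degree $d$ between smooth projective varieties one has $T_Y\cong f^*T_X$, so Hirzebruch-Riemann-Roch and the projection formula give
$$\chi(Y,\mathcal O_Y)=\int_Y \mathrm{td}(T_Y)=\int_Y f^*\mathrm{td}(T_X)=d\int_X \mathrm{td}(T_X)=d\,\chi(X,\mathcal O_X),$$
and hence $1=\chi(Y,\mathcal O_Y)=d\cdot\chi(X,\mathcal O_X)=d$.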
|
{
"source": [
"https://mathoverflow.net/questions/120442",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5259/"
]
}
|
120,511 |
I was investigating primes with the property that the sum of the first $n$ primes is divisible by $p_n$. It turns out that these primes are extremely extremely rare. For primes less than $10^9$, I have found that there are only five primes with this property: $$
p_1 = 2
$$ $$
p_3 = 5
$$ $$
p_{20} = 71
$$ $$
p_{31464} = 369,119
$$ $$
p_{22096548} = 415,074,643
$$ This raises the curious and equivalent questions: Q1. Are there infinitely many primes which divide the sum of all the preceding primes? Q2 . Even if we assume that there are infinitely many such primes, why are they so rare? In other words, why do primes dislike dividing the sum of all the preceding primes? Is there any heuristic argument to show that such primes will indeed be extremely rare?
|
Here is a heuristic argument that there is nothing to explain: The probability that $p$ divides the sum of the preceding primes is $1/p$ . So the expected number of primes less than $10^9$ with this property is $\sum_{p \leq 10^9} \frac{1}{p}$ . Using Mertens' second theorem , $$\sum_{p \leq 10^9} \frac{1}{p} \approx \log \log 10^9 + M \approx 3.3$$ Here $\log$ is natural log and $M \approx 0.26149$ is Mertens' constant . This is an example of the motto " $\log \log x$ goes to infinity but has never been observed to do so". It is quite common for people to look at primes $p$ which divide some quantity $a_p$ and conclude that they are surprisingly rare when, in fact, they are simply growing as $\log \log N$ for the reason above.
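(A small numerical companion, in case someone wants to reproduce the numbers; the sieve bound and the truncation of Mertens' constant below are arbitrary choices of mine.)

```python
from math import log

M = 0.2614972128  # Mertens' constant, truncated

# Expected number of such primes below x under the heuristic: sum_{p <= x} 1/p ~ log log x + M
for x in (10**9, 10**18, 10**100):
    print(x, log(log(x)) + M)

# Brute-force check of the examples quoted in the question
limit = 400_000
sieve = bytearray([1]) * (limit + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(limit**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
s = n = 0
for p in range(2, limit + 1):
    if sieve[p]:
        n += 1
        s += p
        if s % p == 0:
            print(n, p)   # (1, 2), (3, 5), (20, 71), (31464, 369119)
```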
|
{
"source": [
"https://mathoverflow.net/questions/120511",
"https://mathoverflow.net",
"https://mathoverflow.net/users/23388/"
]
}
|
120,522 |
I have adjacency matrices which have nearly connected components, that is, partitions with many edges between nodes in the same group and only a few edges acting as bridges between these groups. I have clustered them so that they form a band matrix. What ways exist to identify the nodes linking groups, i.e., the nodes that act as bridges between these nearly connected components? It appears that each method I have seen has some element of it being a heuristic in some way.
|
|
{
"source": [
"https://mathoverflow.net/questions/120522",
"https://mathoverflow.net",
"https://mathoverflow.net/users/19684/"
]
}
|
120,525 |
I'd like to check with my colleagues whether I have correctly understood "embedded resolution of singularities". Let $X$ be a nonsingular projective variety over $\mathbf C$ and let $D$ be a "nice" divisor on $X$, say $D$ has strictly normal crossings. (Maybe we could just take $D$ to be a closed subscheme?) Then, has the following statement been proven? And what is a "good" reference? There exists a projective birational surjective morphism $\psi:Y\to X$ with $Y$ a nonsingular projective variety over $\mathbf C$ and the inverse image of $D$ in $Y$ a nonsingular projective variety over $\mathbf C$ of codimension one in $Y$? I'm worried about whether I have correctly understood this statement, or maybe one needs some "normality" conditions on $D$ to assure this "embedded" resolution of singularities. Also, how does one obtain this embedded resolution of singularities? Can we write down a terminating process which ends with an embedded resolution of singularities? I have a hard time "believing" the above statement, but I don't know why. If anybody can explain to me that this is not so surprising as a result I would be very thankful.
|
|
{
"source": [
"https://mathoverflow.net/questions/120525",
"https://mathoverflow.net",
"https://mathoverflow.net/users/31091/"
]
}
|
120,536 |
This is a boring, technical question that I stumbled upon while making a contribution to Sage. I would still like to hear a constructive answer so hopefully the question does not get closed. The question is the following. How many spanning trees does the empty graph $E$ have? According to Sage it has 1, while Mathematica claims $\tau(E) = 0.$ Now the only subgraph of $E$ is $E$ hence this question can be rephrased as Is $E$ a tree? One characterization says that a tree is a connected graph with $n$ vertices and $n-1$ edges and would imply that $E$ is not a tree. However if we define a tree as a connected acyclic graph then $E$ is clearly a tree. It appears that as far as Kirchhoff is concerned any value would do since $$\rm{adj}(\mathcal{L}(E)) = \mathcal{L}(E) = k\mathcal{L}(E)$$ for any $k.$ Hence what I am wondering is Are there any wider reasons in defining $E$ to (not) be a tree?
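(Not that it settles the convention question, but since Kirchhoff's theorem is being invoked, here is a minimal sketch of the cofactor computation for an ordinary graph, just to fix what $\tau$ means away from the degenerate case; $K_4$ is used because Cayley's formula gives the known value $4^{4-2}=16$, and the function name is mine.)

```python
import numpy as np
from itertools import combinations

def spanning_trees(n, edges):
    """Kirchhoff's matrix-tree theorem: tau(G) = any cofactor of the graph Laplacian."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return round(np.linalg.det(L[1:, 1:]))   # delete row and column 0

print(spanning_trees(4, list(combinations(range(4), 2))))  # K_4 -> 16
```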
|
In a paper " Is the null-graph a pointless concept? " Harary and Read examine reasons for
assigning certain properties to the empty graph. They observe that from the enumeration perspective it appears to be convenient to consider the empty graph as a forest, but not a tree.
|
{
"source": [
"https://mathoverflow.net/questions/120536",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1737/"
]
}
|
120,984 |
I've been getting interested in the (Bott--)Borel--Weil theorem lately. As a (mainly) geometer it is very interesting to see representation theory appearing (from nowhere, as far as I can see) in the theory of complex geometry. What I can't seem to find out, though, is if this fact has been used to prove any interesting results in representation theory, i.e., does the geometric realization of the representation allow us to prove anything we didn't know already?
|
It's always a good idea to ask (as students typically do) why one is studying a particular subject or theorem. Here are some of my views, from the algebraic side of representation theory: 1) The original theorem here was proved by Borel and Weil, though never written up formally by them. Serre reported on it at the Bourbaki seminar (expose 100, usually available online at NUMDAM, which may be under renovation at the moment). Taken by itself, the Borel-Weil theorem provides a somewhat concrete geometric model using line bundles on the flag variety for all finite dimensional irreducible representations of a (complex or compact) semisimple Lie group. Up to isomorphism these representations are parametrized by "dominant" characters of a maximal torus. The existence was originally an indirect consequence of work by E. Cartan and then Weyl, but the actual representations are not easy to write down. Instead, some indirect information about characters (or weight space multiplicities) was cleverly developed. 2) Later Bott, in his fundamental 1957 Annals paper "Homogeneous vector bundles", responded to a conjecture of Borel and Hirzebruch by proving that for non-dominant weights and corresponding line bundles, the flag variety has non-vanishing cohomology in at most one (predictable) degree. When the cohomology is non-zero, it then affords the same irreducible representation you get from a dominant weight "linked" by the Weyl group via Borel-Weil. From the viewpoint of representation theory, this of course is a somewhat negative result showing that nothing new turns up in higher cohomology calculations. 3) In a series of papers (mostly in Invent. Math. and available online via the GDZ archive), Demazure translated these ideas into the language of algebraic geometry. Still working in characteristic 0, he derived a "very simple" proof of the theorems of Borel-Weil and Bott, then showed how to derive the Weyl character formula and implement it effectively in this same framework. 4) As pointed out by Aakumadule, Kumar's proof of the old PRV conjecture on tensor products of irreducibles (predicting certain direct summands) relies heavily on the machinery of the Borel-Weil theorem, though in the algebraic framework. Naturally the flag variety is a major player in representation theory here and elsewhere, as in Kumar's work with Littelmann and the book by Brion and Kumar on Frobenius splitting. This all tends to drift into prime characteristic too. 5) In prime characteristic (which interests me more), work by H.H. Andersen and others has shown what can and can't be done in Demazure's set-up. In particular, the reductions modulo a prime of the standard irreducible representations become "Weyl modules" (and play a major role in Jantzen's book Representations of Algebraic Groups). These are usually reducible, but have formal properties like the infinite dimensional Verma modules in characteristic 0 and lead (by Lusztig's ideas) to a partly proved conjecture on characters of irreducibles. For higher cohomology there are still many open problems. Unlike Bott's case, one sometimes has systematic occurrences of multiple non-zero higher cohomology groups though the Euler character remains invariant. I've conjectured that the Kazhdan-Lusztig theory for an affine Weyl group (relative to the prime) controls all of this in a nice way.
|
{
"source": [
"https://mathoverflow.net/questions/120984",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1648/"
]
}
|
121,017 |
This came up when I did a brand-new (or maybe it's just "birdtracks" in disguise :-)
graph-based construction of the E8 family. x,y,z are dimensions and thus integers. x<0 actually doesn't hurt. (For $y=x,z=(3 (-2 + o) o)/(10 + o)$ the standard E8 setup results, o must now divide 360 etc. pp.) So here is the equation: $3 x (2 + x) (-2 + x + x^2 - 2 y) y (-x + x^2 - 2 z) z=q^2$ With rational y or z I wouldn't pester MO - I'd solve it on the spot. But here some nasty division properties are involved, and I'm lousy at number theory. Is there a finite, easily describable solution list?
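(Not part of the original question: a minimal brute-force sketch one might use to look for small nontrivial integer solutions. The search bound B and the filtering of the trivial zero cases are my own choices, not anything implied by the question.)

```python
# Brute-force search for small integer solutions of
#   3*x*(2+x)*(-2+x+x**2-2*y) * y * (-x+x**2-2*z) * z = q**2
# This is only a sketch for experimentation, not a number-theoretic answer.
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

B = 50  # arbitrary search bound (assumption)
hits = []
for x in range(-B, B + 1):
    for y in range(-B, B + 1):
        for z in range(-B, B + 1):
            v = 3 * x * (2 + x) * (-2 + x + x**2 - 2 * y) * y * (-x + x**2 - 2 * z) * z
            if v > 0 and is_square(v):  # skip the trivial v == 0 cases
                hits.append((x, y, z, isqrt(v)))

print(len(hits), "nontrivial solutions with |x|,|y|,|z| <=", B)
print(hits[:10])
```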
|
|
{
"source": [
"https://mathoverflow.net/questions/121017",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11504/"
]
}
|
121,031 |
The concept of relation in the history of mathematics, either consciously or not, has always been important: think of order relations or equivalence relations. Why was there the necessity of singling out a particular kind of relations, namely the functional ones? I guess (but I don't have data about this) historically the recognition that "operational" expressions like $x^3$ or $\sum_{i=0}^{\infty} \frac{x^n}{n!}$ could be formalized as functional relations led people to devote more attention to functions understood in the modern set theoretical sense (i.e. as a special case of relations). That viewpoint made it possible to consider things such as the Dirichlet function $\chi_{\mathbb{Q}}$ (which was previously not even considered to be a true "function"!) as fully legitimate objects, and to not dismiss them as pathological, with great theoretical advantage. The language and notation of functions was preferred even to deal with things that, technically, were relations: think of "multi-valued functions" in complex analysis such as $\sqrt x$ or $\log (x)$. 1) In which instances in modern mathematics are relations used as important generalizations of functions? One example that comes to mind is correspondences in the sense of algebraic geometry. In modern Algebra the concept of homomorphism, a kind of function between algebraic structures, is central; we are used to seeing expressions like $f(x*y)=f(x)*f(y)$. But it would be equally possible to define a "homomorphic relation" $R$, for example on groups, by the requirement: $(xRz$ & $yRt)$ $\Rightarrow$ $(x*y)R(z*t)$, where $*$ is the group multiplication. 2) Has this kind of "homomorphic relation" been studied (on groups or other algebraic structures)? Why is algebra pervaded with homomorphisms while we never see "homomorphic relations"? Is there something more than just historical reasons? Let Set be the usual category of sets, and Rel be the category of sets-with-relations-as-morphisms. There is the faithful functor Set $\to$ Rel that simply keeps sets intact and sends a function to its graph. And there is also a faithful functor Rel $\to$ Set mapping $X\to 2^X$ and $R\subseteq X\times Y$ to $R_*:2^X\to 2^Y, A\mapsto R_*(A)=\{ y\in Y\; |\; \exists x \in A : (x,y)\in R \}$. Despite the trivial foundational fact that set theoretical functions are defined to be a special kind of relations, it seems that in category theory Set has priority over Rel. For example, Yoneda's lemma is stated for Set; and people talk of simplicial sets, not simplicial relations; and the category Rel is just retrieved as "the Kleisli category of the powerset endofunctor on Set" (I just learned this from Wikipedia) and it doesn't seem to be as ubiquitous as Set (but this impression might just depend on my ignorance in category theory). 3) Are functions really more central/important than relations in category theory? If so, is it just for historical reasons or are there some more "intrinsic" reasons? E.g. is there an analogue of Yoneda's lemma for Rel?
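(Added illustration, not part of the original question: the direct-image construction $R_*$ above, written out concretely for a toy relation chosen only for the example.)

```python
# The functor Rel -> Set described above: a relation R ⊆ X × Y, stored as a
# set of pairs, is sent to the direct-image map R_* : 2^X -> 2^Y.
def direct_image(R, A):
    """R_*(A) = { y | there exists x in A with (x, y) in R }."""
    return {y for (x, y) in R if x in A}

X = {1, 2, 3}
Y = {"a", "b"}
R = {(1, "a"), (2, "a"), (2, "b")}   # a relation that is not a function

print(direct_image(R, {1}))       # {'a'}
print(direct_image(R, {2, 3}))    # {'a', 'b'}
print(direct_image(R, set()))     # set()
```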
|
Regarding question 3, one can make an argument that actually the fundamental object is "Set together with Rel". The bijective-on-objects inclusion of Set into Rel is a categorical structure that can be expressed as an F-category , a proarrow equipment , or a double category . All of these are slightly different ways of talking about a (2-)category that has two classes of morphisms. It turns out that in the particular case of Set+Rel, either class of morphisms can be recovered from the other. The relations are the jointly monic spans of functions, while the functions are the relations with right adjoints. The same fact holds in much greater generality: from any regular category (whose morphisms are "function-like") we can construct a unitary tabular allegory (whose morphisms are "relation-like"), and conversely. The two are really just the same structure expressed in different ways. Sometimes it's more convenient to use the functions; sometimes it's more convenient to use the relations; and sometimes we want both encapsulated in a single structure. The importance of this sort of two-kinds-of-morphism structure becomes more pronounced as you go up in categorical dimension. For instance, the analogous thing for categories is the inclusion of Cat (whose morphisms are functors) into Prof (whose morphisms are profunctors ). In this case, Prof can be constructed from Cat, but with rather more difficulty than Rel is constructed from Set, while Cat cannot be recovered 2-categorically from Prof (e.g. Morita-equivalent categories are equivalent in Prof, but not in Cat). On the other hand, profunctors seem an essential ingredient for doing "formal category theory", e.g. in the formulation of weighted limits and colimits, so it's valuable to keep both kinds of morphism around.
|
{
"source": [
"https://mathoverflow.net/questions/121031",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4721/"
]
}
|
121,044 |
I have solved the Schrödinger equation for a triangular well potential and the solution comes in terms of Airy functions... now I am facing the following problems:
What are the normalization constants of Airy functions? What are the asymptotic forms of Airy functions? How does one find the matrix elements of the Airy function? If anybody knows the answer please tell me as soon as possible.
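(Added for reference; these are standard facts about the Airy function $\operatorname{Ai}$, not a full answer to the question.) As $x\to+\infty$,
$$\operatorname{Ai}(x)\sim \frac{e^{-\frac{2}{3}x^{3/2}}}{2\sqrt{\pi}\,x^{1/4}},\qquad \operatorname{Ai}(-x)\sim \frac{\sin\!\big(\tfrac{2}{3}x^{3/2}+\tfrac{\pi}{4}\big)}{\sqrt{\pi}\,x^{1/4}},$$
and for normalization the identity $\int_{a}^{\infty}\operatorname{Ai}(t)^2\,dt=\operatorname{Ai}'(a)^2-a\operatorname{Ai}(a)^2$ (obtained by differentiating $\operatorname{Ai}'(t)^2-t\operatorname{Ai}(t)^2$ and using the Airy equation) is useful; at a zero $a_n$ of $\operatorname{Ai}$ it reduces to $\operatorname{Ai}'(a_n)^2$.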
|
|
{
"source": [
"https://mathoverflow.net/questions/121044",
"https://mathoverflow.net",
"https://mathoverflow.net/users/20726/"
]
}
|
121,178 |
Is it true that given a finitely presented group $G$, either all primes
or only finitely many of them occur as orders of elements of $G$?
|
No. The set of primes can be whatever you want (added: within reason! As Benjamin Steinberg points out, it can in fact be any recursively enumerable set of primes). First, note that for infinitely presented groups, the torsion can be whatever you like: the torsion in the group $*_i \mathbb{Z}/p_i$ is precisely the set of primes $p_i$, by standard facts about free products. (Added: As long as the set of $p_i$ is recursively enumerable, then this group admits a recursive presentation on a countable set of generators.) By Higman's Embedding Theorem, the above group can be embedded in a finitely presented group. More subtly, this embedding doesn't introduce any new torsion---see, for instance, Theorem 2.5 of this preprint of Chiodo. Clarification: Higman's Embedding Theorem is commonly stated as only applying to finitely generated countable groups. In fact, an old construction of Higman, Neumann and Neumann shows how to embed a countably generated group into a 2-generated group; if the countably generated group is recursively presented, then the 2-generated group can be taken to be recursively presented as well. Further update: Following Benjamin Steinberg's comments below this answer and an argument in another paper of Chiodo (see also Francois G. Dorais's comments), I think we have a very exciting characterization of the sets of primes that can occur as torsion in finitely presented groups. (For the solvable-word-problem case, one also needs a theorem of Clapham, which says that the Higman Embedding can be made to preserve solvability of the word problem.) Very exciting theorem: Let $P$ be a set of primes. $P$ occurs as the torsion in some finitely presented group if and only if $P$ is $\Sigma^0_2$. $P$ occurs as the torsion in some finitely presented group with solvable word problem if and only if $P$ is recursively enumerable.
|
{
"source": [
"https://mathoverflow.net/questions/121178",
"https://mathoverflow.net",
"https://mathoverflow.net/users/28104/"
]
}
|
121,306 |
I originally posted this question on StackExchange , where it was suggested I post here. It was also suggested I read about Hilbert manifolds and Fréchet manifolds. Nevertheless, I am still looking for an answer to (mainly the first part of) my question. At a summer school I recently attended, infinite-dimensional manifolds popped up. I have never worked with them before (although I'm very familiar with finite-dimensional manifolds). The lecturer at the school did not give any details about the technical realization of infinite-dimensional manifolds, mentioning that there were issues (such as picking a topology) that he would leave out for the sake of clarity, since the relevant results were true independent of the exact technical details. An internet search reveals that Banach manifolds are one way of treating infinite-dimensional manifolds, but there are others. Are Banach manifolds the most common way of defining infinite-dimensional manifolds, or are there other notions commonly used? Is there a more or less universal consensus about when to use which treatment? What are the most important (dis)advantages of each? Supposing I want to learn the basics of infinite-dimensional manifolds, are there any well-written introductory texts you would recommend? (on StackExchange, The Convenient Setting of Global Analysis by A. Kriegl and P. Michor was recommended)
|
Banach manifolds have found many uses, such as gauge theory (Donaldson theory, Seiberg-Witten theory, Floer theory) and symplectic topology (Gromov-Witten theory), to name a few I am more familiar with. One great advantage of Banach manifolds over Frechet manifolds is the implicit function theorem, which in the Banach context takes a simpler form, and thus Banach manifolds are easier to recognize. One disadvantage of Banach manifolds over Frechet manifolds is the fact that natural notions of real analyticity are harder to implement on Banach spaces. One place to learn about Banach manifolds is Lang's Differential and Riemannian Manifolds.
|
{
"source": [
"https://mathoverflow.net/questions/121306",
"https://mathoverflow.net",
"https://mathoverflow.net/users/19956/"
]
}
|
121,340 |
In order to realise the K-groups of a ring as the homotopy groups of some space associated to that ring, Quillen proposed the following (roughly-sketched) construction: Recall that $K_1(R) = GL(R)/E(R)$, so we're, at least, looking for a space $X$ with $\pi_1(X) = K_1(R)$. The classifying space of $K_1(R)$ is obviously not a serious candidate, but we can start with the classifying space of $GL(R)$ (given the discrete topology), $BGL(R)$. Then, choosing representative loops whose classes generate $E(R) \subset \pi_1(BGL(R))$, and then attaching 2-cells using these loops on the boundaries, we end up with something that has a fundamental group of $K_1(R)$. Furthermore, now Quillen adjoins 3-cells essentially to correct the homology back to that of $BGL(R)$, which was messed up in the addition of those 2-cells. We end up with a space denoted $BGL(R)^+$, on which we define $K_i(R) := \pi_i(BGL(R)^+)$. More specifically, Quillen sought to find a space $BGL(R)^+$ for which $(BGL(R), BGL(R)^+)$ was an acyclic pair (that is, the induced map $H_*(BGL(R), M) \to H_{*}(BGL(R)^+,M)$ is an isomorphism for all $K_1(R)$-modules $M$). My question is In search of a space on which to define K-groups, why was it desirable to find something satisfying the above condition on homology? My best guess is that it was observed that $K_1(R) = GL(R)/E(R) = GL(R)_{ab} = H_1(GL(R), \mathbb{Z})$, and $K_2(R) = H_2(E(R), \mathbb{Z}) = H_2([GL(R), GL(R)], \mathbb{Z})$, and so it seemed reasonable that all K-groups should be related to the homology of $GL(R)$ - and so the above is a stab at preserving that homology. I am trying to learn about K-theory, but, as with most presentations of math, I'm sure, I'm coming across too many cleanly-shaven formulas and propositions, entirely divorced from any sorts of thought-processes, big-pictures, or mentions of what is trying to be done and to what ends. Please share with me what you think is going on; here, with the +-construction, that is. (And, if and only if it is not a completely unrelated question, what sort of K-theoretical phenomena suggested that these groups should be homotopy groups?)
|
Here are some thoughts, gathered from reading many texts about algebraic K-theory. Let me start with some historical remarks, then try to give a more revisionist motivation of the plus construction. First of all, it's true as you say that the already-divined definitions of the lower K-groups made it seem like the higher K-groups, whatever they might be, bear the same relation to the homology of GL(R) as the homotopy groups of an H-space bear to its homology groups; thus one is already looking for an H-space K(R) whose homology agrees with that of GL(R), in order to define the higher algebraic K-groups as its homotopy groups. This line of thought is amplified by the observation that the known partial long exact sequences involving lower K-groups seemed like they could plausibly arise as the long exact sequences on homotopy groups associated to fibrations between these hypothetical H-spaces. So it may already at this point be natural to try to turn GL(R) into a H-space while preserving its homology, which is what the + construction does. However, the facts that: 1) the + construction ignores the admittedly crucial K_0-group; and 2) in any case, at the time there were very few lower K-groups to extrapolate from in the first place mean that perhaps the above is insufficient motivation for defining and investigating such a seemingly ad hoc construction as the + construction. However, we can bear in mind that Quillen's definition of the + construction came on the heels of his work on the Adams conjecture, during the course of which --- using his expertise on the homology of finite groups --- he was able to produce a (mod l) homology equivalence BGL(F) --> BU when F is the algebraic closure of a finite field of characteristic different from l. Now, BU is a classifying space for complex K-theory (in positive degrees), so its homotopy groups provide natural definitions of the (topological) algebraic K-theory of the complex numbers. Furthermore, by analogy with the theory of etale cohomology (known to Quillen), it is not entirely unreasonable to guess that, from the (mod l) perspective, all algebraically closed fields of characteristic different from l should behave in the same way as the topological theory over C. (This was later borne out in work of Suslin.) Then the above (mod l) homology equivalence adds further weight to the idea that the hypothetical K-theoretic space K(F) we're searching for should have the same homology as BGL(F). But what's more, Quillen also calculated the homology of BGL(F) when F is a finite field, and found this to be consistent with the combination of the above and a ``Galois descent'' philosophy for going form the algebraic closure of F down to F. That said, in the end, there is good reason why the plus construction of algebraic K-theory is difficult to motivate: it is, in fact, less natural than the other constructions of algebraic K-theory (group completion, Q construction, S-dot construction... applied to vector bundles, perfect complexes, etc.). This is partly because it has less a priori structure, partly because it ignores K_0, partly because it has narrower applicability, and partly because it is technically inconvenient (e.g. for producing the fiber sequences discussed above). Of course, Quillen realized this, which is why he spent so much time working on the other constructions. 
Probably the only claim to primacy the + construction has is historical: it was the first construction given, surely in no small part because of personal contingencies --- Quillen was an expert in group homology. In fact, probably the best motivation for the + construction -- ahistorical though it may be -- comes by comparison with another construction, the group completion construction (developed by Segal in his paper "on categories and cohomology theories"). Indeed, Segal's construction is very well-motivated: it is the precise homotopy-theoretic analog of the classical procedure of going from isomorphism-classes of f.g. proj. modules to the Grothendieck group K_0 by formally turning direct sum into a group operation. To get this homotopy theoretic analog, one "simply" carries along the isomorphisms in this construction (c.f. also Grayson's article "higher algebraic k-theory II"). The connection with the plus construction comes from the ``group completion theorem'' (see the McDuff-Segal article on this subject), which, under very general conditions, allows to calculate the homology of such a homotopy-theoretic group completion in terms of the homology of the relevant isomorphism groups. If you look at the group completion theorem in the case of the space of f.g. proj. modules over a ring, you'll see the connection with the plus construction immediately.
|
{
"source": [
"https://mathoverflow.net/questions/121340",
"https://mathoverflow.net",
"https://mathoverflow.net/users/19313/"
]
}
|
121,379 |
In this post on the n -Category Café, Urs Schreiber says that, "The theory of G-principal bundles makes sense in any $(\infty,1)$-topos ." I followed the link to the nLab and tried to chase definitions, but I found too quickly my head spinning. What is an $(\infty,1)$-topos , and why is this an appropriate setting for the study of principal bundles, i.e., doing differential geometry?
|
Derived versions of differential topology are becoming prominent tools in symplectic geometry. Whether or not you think of them via topoi is not crucial (I certainly can't), and perhaps the terminology turns off more people than it draws, but these ideas are being put to serious use by very serious no-nonsense mathematicians -- I think an excellent (though of course not isolated) example is the work of differential geometer Dominic Joyce, who explains beautifully the necessities that led him deep into this area; see his 800 page book project on D-manifolds (which admittedly adapts a truncated version of the $\infty$-world for concreteness but is undoubtedly part of this story). One way to express (very briefly) the issues is to say the derived (or $\infty$) language allows one to bypass the geometric but very subtle issues of transversality which seriously interfere with progress in some areas of geometry (Floer theory). Intersections, fiber products, and other constructions arising in moduli theory (obstructions/virtual fundamental classes) naturally lead
to derived manifolds, which retain enough structure to allow algebraic constructions to work without the need for establishing and keeping track of perturbations. (This is not my area, so I can't seriously defend the need for this against a skeptic, but Joyce can.) Let me also say that this kind of geometry makes lots of geometric results (like the Atiyah-Bott fixed point theorem, some Grothendieck-Riemann-Roch and index theorems, etc.) completely formal. That for me is the main draw of this higher language -- it makes math that has a chance to be formal indeed formal. That's not the case for many results (and probably everything I'm saying applies more to differential topology than geometry), but when there are large areas where you might have dreamed that elegant abstract constructions would work while reality has proved disappointingly different, it's exciting to see that there are new languages that may (or may not) turn out to be up to the task.
|
{
"source": [
"https://mathoverflow.net/questions/121379",
"https://mathoverflow.net",
"https://mathoverflow.net/users/238/"
]
}
|
121,565 |
Gauss famously discarded Abel's proof that an algebraic equation of degree five or more cannot have a general solution (Abel himself had rejected divergent series as the work of the devil). Cantor's theory of transfinite numbers was originally regarded as so counter-intuitive—even shocking—that it encountered resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections. Ramanujan's work on divergent series was rejected by three leading English mathematicians of the time before he was discovered by Hardy. The above stories have become mathematical folklore. I would like to know of examples of other mathematicians whose works were initially criticized or rejected by contemporaries but later became widely accepted and famous. I am particularly interested in modern mathematicians or lesser known mathematicians of the classical era whose stories may not be as popular as those of other mathematical giants.
|
This is not an answer but a longish comment, which moreover is certainly "subjective and argumentative". Reading all the stories given in the 12 answers, I find I can classify them in three categories: (1) the stories that have no factual basis and are pure myths (e.g. the one about
Hilbert rejected by Gordan, or Grothendieck rejected by you-know-who, etc.). I'd like to add Fourier to this category but here I don't know the history well enough to be sure. What is certain is that Cauchy, faced with a contradiction between Fourier's result and a theorem he "proved" (a limit of continuous functions is continuous), did not dismiss Fourier, and that others quickly dismissed (rightly) Cauchy's result. (2) The ones that don't really concern mathematics: Boltzmann, Bolzano (whose work in mathematics became admired as soon as it was known, and was controversial for something else), Giordano Bruno, and even Brouwer, who as a mathematician was respected and even admired by about everybody else, and was only controversial as a philosopher of mathematics - and certainly no more than any other philosopher is controversial. (3) The few ones that have a factual basis and do concern mathematics. I will restrict my attention to these cases as they are the only ones that really answer the question. Now,
I am afraid that in each of these stories, where a romantic genius makes a discovery that is ignored or rejected by the conservative establishment of mathematics, my heart is with
that so-called establishment, whom I can accuse of no wrongdoing, even with the benefit of hindsight. Indeed, in none of these cases has the "romantic genius" been persecuted or even bullied (as was for example Giordano Bruno, or to a lesser extent Galileo). We, the mathematical community, have no auto-da-fé (not to speak of bonfires) in our history to apologize for. What happens
in all those cases is that there was a brilliant mathematician whose works suffered from serious shortcomings, and it was those shortcomings, and not the ones of the mathematical community, that made the process of assimilation of these works by the community
longer than it could have been. Let me explain my point by discussing some cases: Lévy, we are told, was a great probabilist, but not very rigorous. I agree with this description. Now, is his work not being rigorous a plus, or a minus? To me the answer is obvious, and I hope everyone here agrees with that. At roughly the same time, Kolmogorov was rigorously founding abstract probability on Lebesgue's theory, and this gave his theorems
a convincing power that Lévy's did not have. The mathematical community actually did a
pretty good and relatively quick job of making Lévy's results rigorous and putting them into the mainstream theory. Cantor is an interesting case. An absolute genius, for sure, with sometimes almost idiotic remarks -- like when he writes to Dedekind that his bijection between the line and the plane refutes the basic idea of dimension. Dedekind kindly answers him that people working in
geometry only consider continuous functions. Now it is perfectly normal and healthy that his works in set theory were exposed to such harsh criticism in his time. There were serious foundational problems in what he was doing. From the important point of view of rigor, he was putting mathematics back to the time of the early calculus, forgetting all the progress in rigor made in the nineteenth century, and indeed, there were, as is now well known, some serious paradoxes hidden in his theory. The harsh criticism against Cantor's work (such as Poincaré's) was the antithesis in a dialectical process, where the role of the synthesis was played by lovers of Cantor's paradise who didn't want to lose rigor and admit paradoxes. Hilbert was forced by those very criticisms to develop a far-reaching program of mathematics in order to clear up the discovered inconsistencies. Now the partial failure of Hilbert's program (Gödel's incompleteness theorems) shows that there really was something rotten in Cantor's paradise, and the undecidability of Cantor's favorite problem (the continuum hypothesis) retrospectively gives weight to Poincaré's criticism: arguably, Poincaré never asked a question which was later shown to be undecidable, unlike the continuum hypothesis or questions of the gender of angels. Galois? Well, he has the best possible excuses for having written his brilliant discoveries
in such an unreadable way: he wrote them partly in jail, partly the night before his death,
and all before he was 22. Now for the very same reasons the mathematical establishment (the "Académie des Sciences", including people with a very different mind, like Fourier) had good excuses for not immediately understanding what he had done. And again, very soon after his death (about 10 years after), his work was exhumed, intensely admired and integrated into living mathematics (especially by the German school). PS: please feel free to vote down this unromantic post. My earlier self would probably have done so.
|
{
"source": [
"https://mathoverflow.net/questions/121565",
"https://mathoverflow.net",
"https://mathoverflow.net/users/23388/"
]
}
|
121,571 |
A definition of the connected sum of two $n$-manifolds $M$ and $M'$ begins by considering two $n$-balls $B$ in $M$, $B'$ in $M'$, and gluing the manifolds $M\setminus \mathring B$ and $M'\setminus \mathring B'$ along their boundary (an $(n-1)$-sphere) by an orientation-reversing homeomorphism. The construction depends a priori on these various choices, but it is asserted in many places in the literature (Lee's book on topological manifolds for example, as well as Wikipedia)
that the result does not depend on these choices. In the differentiable case, a reference is given to a theorem of Palais (Natural operations on differential forms, Thm. 5.5) which asserts — roughly — that two embeddings of $n$-balls differ by a global diffeomorphism which is isotopic to the identity. Are the details of this independence written somewhere in the literature,
both in the continuous and the smooth case?
|
In the topological category the proof that connected sum is well-defined depends on the Annulus Theorem, first proved by Kirby; the necessity of the Annulus Theorem is seen from Bruno Martelli's answer. So you are not likely to find a proof before Kirby's paper. Perhaps someone jotted a proof down, maybe someone who thought about the Annulus Theorem when it was still a conjecture, and realized that well-definedness of connected sum was a good application. But, I do not know. Anyway, the proof is straightforward once you have the Annulus Theorem. Here's a sketch. There are a couple of missing hypotheses. One must assume $M,M'$ are connected. One must also assume $M,M'$ are oriented. And one must assume the balls $B,B'$ are "nicely embedded"; at the minimum, assume that the boundary spheres $S,S'$ are locally bicollared, which implies globally bicollared by Brown's theorem. This rules out nastiness like an Alexander horned ball. Now one shows that the connected sum is independent of the choice of gluing map $S \to S'$ (assumed to reverse orientation). This follows from the fact that any two homeomorphisms $S \to S'$ which agree on orientations are isotopic: once that is known, one absorbs the isotopy into the collar neighborhoods. Proving this fact may already require the Annulus Theorem. For the rest, it suffices to prove that for any two nicely embedded balls $B_1,B_2 \subset M$ there exists an orientation preserving homeomorphism of $M$ taking $B_1$ to $B_2$, in fact an ambient isotopy. Using the boundary bicollaring, we may assume $B_1,B_2$ are contained respectively in open balls $U_1,U_2$, which are centered on points $p_1,p_2$ in some coordinate chart. We can also assume that $p_1=p_2$, because there is an ambient isotopy of $M$ taking $p_1$ to $p_2$: connect $p_1$ to $p_2$ by a path, cover the path by finitely many charts, and concatenate a sequence of ambient isotopies supported in these finitely many charts, moving $p_1$ along the path step by step to $p_2$. We can also replace $B_1$ by an arbitrarily small subball in $U_1$ centered at $p_1$, and similarly for $B_2$; this is straightforward to check using an ambient isotopy supported in the coordinate charts for $U_1$ and $U_2$. In particular, we can assume $B_1$ is contained in the interior of $B_2$. Now apply the annulus theorem: the difference $B_2 \setminus B_1$ is homeomorphic to a sphere crossed with an interval. Using this, one can then ambiently isotope $B_2$ to $B_1$.
|
{
"source": [
"https://mathoverflow.net/questions/121571",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10696/"
]
}
|
121,993 |
Let $\mathcal{F}$ and $\mathcal{G}$ be abelian categories. It is well-known that if a functor $\phi : \mathcal{F} \rightarrow \mathcal{G}$ has a right-adjoint (so $\phi$ is itself a left-adjoint to some other functor), then $\phi$ is right exact. Similarly, if $\phi$ has a left-adjoint (so $\phi$ is itself a right-adjoint to some other functor), then $\phi$ is left-exact. I was going through the list of left- and right-exact functors I know, and they all were covered by the above condition. Can someone give me examples that appear "in nature" (so aren't too artificial) of left- or right-exact functors that are not adjoints? I'm aware of the adjoint functor theorem, but its conditions are much stronger than just being left- or right-exact.
|
I would disagree that the hypotheses of the adjoint functor theorem are much stronger than exactness. Left exactness is equivalent to preserving all finite limits, and the hypotheses of the adjoint functor theorem are existence of all limits, preserving all limits, and a smallness condition that usually is easy to verify. Furthermore, to know that a left exact functor preserves all limits, it suffices to know that it preserves arbitrary products (since any limit can be expressed as a kernel of an appropriate map between products). So in most typical applications, the only difference between being left exact and having a left adjoint is whether a functor preserves infinite products. This also shows how to find a counterexample: find a left exact functor that does not preserve infinite products. For instance, if $M$ is any flat module over a commutative ring $R$, tensoring with $M$ is left exact, but will not preserve infinite products unless $M$ has nice finiteness properties (if $R$ is Noetherian, the condition is that $M$ is finitely generated). In particular, if $R=\mathbb{Z}$ you could take $M=\mathbb{Q}$, or if $R$ is a field you could take $M$ to be any infinite-dimensional vector space. As Todd notes in his comment, you can similarly get an example for right exactness instead of left exactness by Homming out of a projective module that is not finitely generated. You can get a more artificial sort of counterexample by taking abelian categories with a size restriction on their objects that prevents the adjoint from existing (because you don't have all (co)limits). For instance, take the category of countable-dimensional vector spaces over some field, and consider the endofunctor given by tensoring with a countably infinite dimensional space $V$. This is right exact, and it ought to have a right adjoint given by $\mathrm{Hom}(V,-)$ (and it is easy to show that if the right adjoint exists, it must be given by $\mathrm{Hom}(V,-)$). But this right adjoint is undefined because (for instance) $\mathrm{Hom}(V,V)$ is uncountable-dimensional and so it doesn't exist in our category.
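(A concrete instance, added for illustration, of the failure of product-preservation mentioned above; this is the standard example.) Tensoring with $\mathbb{Q}$ over $\mathbb{Z}$ does not commute with infinite products:
$$\Big(\prod_{p\ \mathrm{prime}}\mathbb{Z}/p\Big)\otimes_{\mathbb{Z}}\mathbb{Q}\;\neq\;0\;=\;\prod_{p\ \mathrm{prime}}\big((\mathbb{Z}/p)\otimes_{\mathbb{Z}}\mathbb{Q}\big),$$
since the element $(1,1,1,\dots)$ generates a copy of $\mathbb{Z}$ inside the product, and tensoring the inclusion $\mathbb{Z}\hookrightarrow\prod_p \mathbb{Z}/p$ with the flat module $\mathbb{Q}$ keeps it injective, so the left-hand side contains a copy of $\mathbb{Q}$.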
|
{
"source": [
"https://mathoverflow.net/questions/121993",
"https://mathoverflow.net",
"https://mathoverflow.net/users/31491/"
]
}
|
122,125 |
Suppose most mathematical research papers were freely accessible online. Suppose a well-organized platform existed where responsible users could write comments on any paper (linking to its doi, Arxiv number, or other electronic identifier from which it could be retrieved freely), or even ``mark it up'' (pointing to similar arguments elsewhere, catch and correct mistakes, e.g.), and where you could see others' comments and mark-ups. Would this be, or evolve into, a useful tool for mathematical research? What features would be necessary, useful, or to-be-avoided-at-all-costs? This is not a rhetorical question: a committee of the National Research Council is looking into what could be built on top of a World Digital Math Library, to make it even more useful to the mathematical community than having all the materials available. This study is being funded by the Sloan Foundation. Input from the mathematical community would be very useful.
|
I think such a thing would provide immense value. In particular I can think of instances when the following sorts of comments would have saved me a great deal of time: (1) No need to read pages XX-XXX, here is a one paragraph argument. (2) This result has since been strengthened, see ... (3) The following claims are not quite right, here is a counterexample, and here is how to fix it. (4) The following claims actually are right, even though the following might at first seem like a counterexample. (5) What the author really means by [SGA] is [SGA N, page XXX] (6) This result has the following interesting applications ...
(6a) What would be even better is an automated system where, not just can you see what papers cite a given paper as you can today, but you can even see where a given lemma or proposition is cited. (7) The author has only cited the relevant papers of his friends, the following other work in the subject is closely related. (8) This paper is actually much less / much more interesting than it sounds... (9) The following seems to be a gap in the argument: (10) This 200 page paper assumes along the way in places which are explicit but maybe you didn't notice the following conjectures... I think it would be essential however to ensure that people post under their own names and other measures are taken to ensure responsibility and measure the credibility of authors, but I think at the present stage of development of the internet we know how to do that. I also think items like (3), (4), (9), (10) will become increasingly important; already it seems that people who consider themselves sufficiently famous don't necessarily bother publishing in journals (and so are not subjected to the review system), or even if they do are perhaps sufficiently famous to override or intimidate the reviewers, perhaps by sheer number of pages, etc...
|
{
"source": [
"https://mathoverflow.net/questions/122125",
"https://mathoverflow.net",
"https://mathoverflow.net/users/31518/"
]
}
|
122,557 |
I would like to know if there are compact (n-1)-manifolds $N$ that are not spheres but such that there is a manifold with boundary $M$ which satisfies the following two properties: $\partial M\cong N$ $M-\partial M\cong \mathbb{R}^n$ I am primarily interested in this question in the category of smooth manifolds but I would be interested to know the answer in the topological case as well.
|
$N$ has to be a homotopy-sphere. So as long as its dimension isn't $4$, there's a proof that it has to be the standard $S^{n-1}$. These arguments appear in the Kosinski book on smooth manifolds. The basic idea goes like this. 1) $N$ is simply connected. There's the inclusion map $N \to M$, but if $p \in int(M)$ then there's also a retraction map $M \setminus \{p\} \to N$. Provided $n \geq 3$, removal of a point does not affect the fundamental group. 2) $N$ is a homology sphere by Alexander/Poincaré duality of the pair $(M,N)$. Part (1) technically gives us this as well but this argument works even if $M$ is not contractible. So by the Whitehead theorem, $N$ is a homotopy sphere. Moreover, $M$ is a contractible manifold whose boundary is a homotopy sphere. So $M$ is a disc by the h-cobordism theorem, and the disc has a unique smooth structure (not known in dimension $5$ still). So that resolves all cases except $dim(N)=4$.
|
{
"source": [
"https://mathoverflow.net/questions/122557",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10707/"
]
}
|
122,704 |
Why do all measure theory textbooks present the concept of push-forward measure, but never the concept of pull-back measure? Doesn't the latter exist? It's true that the naive treatment of such a concept would sometimes lead to contradictions. For instance, let $p:\mathbb{R}^2\rightarrow\mathbb{R}$ be given by $p(x,y)=x$, $\lambda$ be the Lebesgue measure on $\mathbb{R}$, and $A=[0,1]\times[0,1]$. If one decomposes $A=A_1 \cup A_2$ where $A_1=[0,1]\times[0,\frac{1}{2}]$ and $A_2=[0,1]\times(\frac{1}{2},1]$, then $(p^* \lambda)(A)=1$ while $(p^* \lambda)(A_1)+(p^* \lambda)(A_2)=2$, showing that the naive definition of the pull-back does not lead to a measure. Similarly, if $i:\mathbb{R}\rightarrow\mathbb{R}^2$ is given by $i(x)=(x,0)$, then the pull-back of the Lebesgue measure on $\mathbb{R}^2$ would be 0. Given the two situations presented above, can one define a fruitful concept of pull-back measure?
|
To define pullbacks of measures we need some additional data,
because otherwise one would be able
to obtain a canonical measure on an arbitrary measurable space M
by pulling back the canonical measure on the point along the unique map M→pt. One natural choice for such additional data is
a choice of measure on each fiber of the map f: M→N.
Using such a fiberwise measure one can define the pullback measure f*μ on M
given a measure μ on N as follows:
to integrate a function h on M with respect to f*μ we integrate h fiberwise with respect
to the fiberwise measure on M and then we integrate
the resulting function on N with respect to μ. To define pullbacks of complex valued measures
we have to require that the fiberwise measure
(which can now also be complex valued) is fiberwise finite
and its norm (total variation) is uniformly bounded
with respect to N. To ensure that pullbacks of probability measures
are again probability measures we have to require
that fiberwise measures are probability measures. All of this can be done in greater generality
for arbitrary noncommutative measurable spaces
(i.e., von Neumann algebras) and arbitrary L_p-spaces
(instead of just L_1-spaces, i.e., measures),
where p is an arbitrary complex number,
as explained in this answer: Is there an introduction to probability theory from a structuralist/categorical perspective?
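(A formula added to summarize the construction described above, with $\{\nu_y\}_{y\in N}$ denoting the chosen family of fiberwise measures on the fibers $f^{-1}(y)$:)
$$\int_M h \; d(f^*\mu) \;=\; \int_N \left( \int_{f^{-1}(y)} h \; d\nu_y \right) d\mu(y).$$
For the projection $p(x,y)=x$ from the question, choosing each $\nu_x$ to be a probability measure on the fiber $\{x\}\times\mathbb{R}$ (for instance the point mass at $(x,0)$) produces a genuine, countably additive pullback measure, which resolves the contradiction noted there.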
|
{
"source": [
"https://mathoverflow.net/questions/122704",
"https://mathoverflow.net",
"https://mathoverflow.net/users/54780/"
]
}
|
123,081 |
When I applied for a PhD student position I had an interview with two professors. Somehow we touched on the problem of whether $P$ equals $NP$ and, once we got there, for some reason both professors made it clear that in their opinion there is absolutely no point attacking such a hard problem. Of course this is the case for a starting student; it is more fruitful to build the basis first. But they basically stated that the problem has been studied by such smart researchers that no mortal could do better anyway. This makes me wonder: should one attack such hard problems at all? If one should, why and when? Will studying hard problems span new ideas? Is it even a necessity to understand some hard problems and, especially, why they are hard to solve? Or is it just a pure waste of time? Or is it that one should learn some hard problems to educate oneself but not spend time attacking them?
|
Actually, I think trying a "hard problem" may be a good idea IF 1) You have a fair evidence that you are strong enough to tackle things other clever people gave up on. The evidence should be tangible. The best evidence is, of course, having solved at least one hard problem already, but that, obviously, cannot be applied to your first hard problem ever. Sometimes a good indication is other people saying something like "You should stop stealing other mathematicians' daily bread and do some real thing that no one else can do!" (Note that you shouldn't follow the first part of this advice.) 2) You have an escape strategy. That may be thinking of something else in parallel, making sure that your plan is such that even a partial progress can be of value, etc. 3) You are not afraid to fail and are used to the feeling of being a hopeless idiot (meaning you can calmly admit this frustrating fact about yourself without any reservations, excuses, or other kinds of self-deceit and still push ahead at your full strength). 4) You have enough free time and do not care too much of your career ups and downs. 5) You are sufficiently open-minded to see things at unusual angles and are trained to figure out reasonably quickly whether any given idea may possibly work or it certainly won't. Note that both are tough skills, which are almost completely untouched in most standard treatises on problem solving. 6) You love the problem. This should, actually, be #0 rather than #6, and it is hard to explain what it means in rational terms, but you can feel it when it happens. If those conditions are satisfied, go ahead and try shooting the Moon. If not, you'd better make your way up slowly step by step like most of us, picking the fight just slightly bigger than your own size every time. I'm not a great believer in "having a new idea from the start". The new idea or a combination of ideas usually comes eventually when working on the problem and the moment it comes is often very near the end of the story. The trail of failures that precedes it is well-hidden, but we all start with "I have no method, no feeling, no tools, no clue, and no hope" and proceed through "twisting this, we can get a bit more or something a bit different, however the main difficulty remains untouched". You have to figure out not only what doesn't work but also how exactly it doesn't work. Most of the time is spent on constructing examples and counterexamples to the steps in your initial plan, digressing into simpler models, checking that no information is lost at each particular step, i.e., that if the original theorem is correct, then the intermediate lemma you want to try is at least very plausible, and so on, and so forth. I do not know how it works for others, but for me any non-trivial problem is a scattered jigsaw puzzle, not an originally blurry but complete picture I merely need to focus the camera on. I'm not sure how much credibility I can claim myself when talking like this about solving hard problems, but, fortunately, most of these claims aren't my creations: I merely believe they are true and the opposites are false. So, take all this with a healthy grain of salt and keep in mind that out of 100 mathematicians, at most 5 are qualified to shoot the Moon in principle and, out of those 5, at most 1 will score a hit when making this long shot, so don't judge us, professors, too harshly when we just know our limitations and are unwilling to try to jump above our heads. 
There is a lot of stuff at the knee level that needs to be done and some of us (including myself) just feel that it will be more efficient to spend most of our time doing it there. One becomes a loser not when he aims and shoots lower than the Moon but when he stops seeing it in the sky :). As to the formal question list, I would answer as follows: Should one attack such hard problems at all? Yes. The gods won't do it for us, so it'll have to be one of us, poor mortals, who should try. If one should, why and when? See #1-#6 for "when". As to "why", if one asks this question, one shouldn't. Will studying hard problems span new ideas? Possibly. It can work out either way. Is it even a necessity to understand some hard problems and, especially, why they are hard to solve? No, nothing is absolutely necessary. You can live and work perfectly well without it. Or is it just pure waste of time? This depends on who and what you are. Or is it that one should learn some hard problems to educate oneself but not spend time attacking them? That works for some people too.
|
{
"source": [
"https://mathoverflow.net/questions/123081",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
123,331 |
Is it algorithmically decidable if two finitely presented amenable groups are isomorphic? Or slightly different: Does there exist a family of amenable groups (indexed by natural numbers) for which one cannot algorithmically decide if two elements of the family are isomorphic?
|
EDIT: The isomorphism problem for finitely presented solvable groups in the variety of all solvable groups of derived length $\le 7$ is undecidable. This was proved by Kirkinskiĭ and Remeslennikov (Kirkinskiĭ, A. S.; Remeslennikov, V. N.
`The isomorphism problem for solvable groups.' (Russian)
Mat. Zametki 18 (1975), no. 3, 437–443.). The Russian version of this article can be downloaded from here . The English translation is available here . Unfortunately this does not fully answer the original question, because the groups in this construction are finitely presented in the variety of solvable groups but may not be finitely presented in the variety of all groups. I would guess that one could use O. Kharlampovich's example of a finitely presented 3-step solvable group with unsolvable word problem (Harlampovič, O. G., `A finitely presented solvable group with unsolvable word problem.' (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 45 (1981), no. 4, 852–873, 928.) to construct the family of groups you need. Perhaps someone has already done this... Second EDIT: Indeed, this was done by Baumslag, Gildenhuys and Strebel (see Theorem 1 in Baumslag, Gilbert; Gildenhuys, Dion; Strebel, Ralph, `Algorithmically insoluble problems about finitely presented solvable groups, Lie and associative algebras. II.'
J. Algebra 97 (1985), no. 1, 278–285.), who proved that the isomorphism problem is undecidable in the class of finitely presented solvable groups of derived length 3. In fact, in one of her talks Olga Kharlampovich mentioned that she can construct a finitely presented 3-step solvable group $G$ with unsolvable word problem that is Hopfian. Then the isomorphism problem among the quotients of $G$ by one defining relation is unsolvable (because $G/\langle\langle g \rangle\rangle^G$ is isomorphic to $G$ if and only if $g=1$ in $G$).
|
{
"source": [
"https://mathoverflow.net/questions/123331",
"https://mathoverflow.net",
"https://mathoverflow.net/users/31864/"
]
}
|
123,392 |
Let $G$ be a finite group. Is it possible that there are "many" one-dimensional representations, and "very few" high-dimensional irreducible representations? Originally, I thought it's impossible for the following reason. Suppose $\chi_1, \chi_2$ are two one-dimensional representations such that $\chi_2 \chi_1^{-1} \not= 1$, and let $\rho$ be a high-dimensional irreducible representation. Then $\chi_1 \rho$ and $\chi_2 \rho$ are two nonequivalent irreps. When I tried to prove this formally, I felt it's not the case. Also, I found a finite group $G$ which has $2^{k-1}$ one-dimensional irreps, and two $2^{k-1}$-dimensional irreps, which is defined as follows. Let $G$ be generated by $g_1, g_2, \ldots, g_{2k-1}, -1$ such that $g_i^2 = -1$ and $g_i g_j = -g_j g_i$. It seems that I already see the answer. But I hope to see some better explanation about it.
|
Let $p$ be an odd prime and $G$ the semi-direct product of the additive group ${\mathbb F}_p={\mathbb Z}/p{\mathbb Z}$ and the group $({\mathbb Z}/p{\mathbb Z})^*$ of the units in the finite field of $p$ elements, acting by multiplication on ${\mathbb F}_p$. The group $G$ has $p-1$ characters (one dimensional representations) and one irreducible representation of dimension $(p-1)$. Hence in the regular representation of $G$, it occurs with multiplicity $p-1$. Therefore, in the regular rep, these reps amount to $p-1+(p-1)^2=\operatorname{card}(G)$. Consequently, there are $p-1$ characters, and only ONE higher dimensional irrep for $G$. The order of $G$ can be arbitrarily large. The irreducible representation of dimension $p-1$ is of the form $\rho= Ind_H^G(\chi)$ where $H={\mathbb F}_p$ and $\chi$ is a non-trivial character of $H$. By Mackey's theorem (easy to prove in this special case), $\rho$ is irreducible. Suppose that $\rho$ is an irrep of dimension greater than one (exists since $G$ is not abelian). The restriction to $H$ must then contain a non-trivial character $\chi$ of $H$. Conjugation by elements of ${\mathbb F}_p^*$ then implies that $all$ non-trivial characters of $H$ occur in $\rho$. Hence $\rho$ has dimension at least $p-1$. Since we have already seen that there is no room for anything else, the dimension of $\rho$ must be $p-1$ and $\rho =Ind_H^G(\chi)$.
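(A quick computational sanity check I've added, not part of the answer: the affine group $G=\{x\mapsto ax+b : a\in\mathbb{F}_p^*,\ b\in\mathbb{F}_p\}$ should have exactly $p$ conjugacy classes — matching the $p-1$ characters plus the single $(p-1)$-dimensional irrep — consistent with $(p-1)\cdot 1^2+(p-1)^2=p(p-1)=|G|$.)

```python
# Count conjugacy classes of the affine group over F_p by brute force.
# Elements are pairs (a, b) with a in {1,...,p-1}, b in {0,...,p-1},
# representing x -> a*x + b.  Requires Python 3.8+ for pow(a, -1, p).
def conjugacy_class_count(p):
    G = [(a, b) for a in range(1, p) for b in range(p)]
    mul = lambda g, h: ((g[0] * h[0]) % p, (g[0] * h[1] + g[1]) % p)
    inv = lambda g: (pow(g[0], -1, p), (-pow(g[0], -1, p) * g[1]) % p)
    seen, classes = set(), 0
    for g in G:
        if g in seen:
            continue
        classes += 1
        for h in G:
            seen.add(mul(mul(h, g), inv(h)))   # the conjugate h g h^{-1}
    return classes

for p in (3, 5, 7, 11, 13):
    print(p, conjugacy_class_count(p))   # expect p classes each time
```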
|
{
"source": [
"https://mathoverflow.net/questions/123392",
"https://mathoverflow.net",
"https://mathoverflow.net/users/26659/"
]
}
|
123,444 |
Is every finitely generated projective $\mathbf{Z}[x]$-module free?
|
While it's certainly true (per Fernando's comment) that this is a special case of the Quillen-Suslin theorem, it was certainly known long before Quillen and Suslin came along. There's a paper of Murthy from the mid-1960s which shows that every projective $R[x]$-module is extended whenever $R$ is a regular ring of dimension at most 2. ("Extended" here means "of the form $P[x]$ where $P$ is a projective $R$-module". Since all projective ${\mathbb Z}$-modules are free, extended is equivalent to free in this case.) But there's an even earlier paper of Bass which covers the case where $R$ is regular of dimension 1, which is all you need. The paper is called "Torsion Free and Projective Modules". Edited to add: And the case of a PID predates even Bass; I think it's due to Seshadri in the 1950s.
|
{
"source": [
"https://mathoverflow.net/questions/123444",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29980/"
]
}
|
123,513 |
Absolute neighborhood retracts (ANRs) are topological spaces $X$ such that, whenever $i\colon X\to Y$ is an embedding into a normal topological space $Y$, there exist a neighborhood $U$ of $i(X)$ in $Y$ and a retraction of $U$ onto $i(X)$.
They were invented by Borsuk in 1932 ( Über eine Klasse von lokal zusammenhängenden Räumen , Fundamenta Mathematicae 19 (1), p. 220-242, EuDML ) and have been the object of a lot of developments from 1930 to the 60s (Hu's monograph on the subject dates from 1965),
being a central subject in combinatorial topology. The discovery that these spaces had good topological (local connectedness),
homological (finiteness in the compact case) and even homotopical properties
must have been a strong impetus for the development of the theory.
Also, they probably played some role in the discovery of the homotopy extension property
(it is easy to extend homotopies whose source is a normal space and
target an ANR) and of cofibrations. I have the impression that this more or less gradually stopped being so in the 70s: a basic MathSciNet search does not turn up that many recent papers, although they seem to be used as an important tool in some recent works (a colleague pointed me to those of Steve Ferry). My question (which is not meant to be subjective or argumentative) is the following:
what is the importance of this notion in modern developments of algebraic topology?
|
Another reason you might not see the word ANR these days is that compact finite-dimensional spaces are ANRs if and only if they are locally contractible. Thus, "finite-dimensional and locally contractible" can replace ANR in the statement of a theorem (and might help the result appeal to a wider audience). In comparison geometry, for instance, the existence of a contractibility function takes the place of the ANR condition. Borsuk conjectured that compact ANRs should have the homotopy types of finite simplicial complexes. Chapman and West proved that they even have preferred simple-homotopy types. This is part of the "topological invariance of torsion" package and is quite a striking result. Every compact, finite-dimensional, locally contractible space has a preferred finite combinatorial structure that is well-defined up to (even local!) simple-homotopy moves.
|
{
"source": [
"https://mathoverflow.net/questions/123513",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10696/"
]
}
|
123,633 |
I apologize in advance if this is somewhat elementary, but: Let $(M,g)$ be a compact Riemannian manifold. Is there a "characterization" of which symmetric bilinear tensors $B\in Sym^2(M)$ are Hessians of functions $f\colon M\to \mathbb R$? In other words, for which $B\in Sym^2(M)$ does there exist $f\colon M\to\mathbb R$ such that $B(X,Y)=Hess \ f(X,Y)$, where $Hess\ f(X,Y)=g(\nabla_X \nabla f,Y)=X(Y(f))-\mathrm df(\nabla_X Y)$? Obviously, since $M$ is assumed compact, any continuous function will attain a min and a max, so a symmetric bilinear form that is a candidate to be a Hessian cannot be definite. This may be pretty naive, but I cannot think of any other property that distinguishes Hessians among bilinear symmetric tensors in this context... For example, it doesn't seem to be the case that $B$ has to be parallel, or satisfy any other sort of similar "nice" conditions. Note that even if we impose the extra condition that $B$ is the Hessian of a Morse function, I don't quite see if the topology of $M$ will impose any properties on $B$ via Morse theory, since $B$ does not know if it is at a (future) critical point or not, right?
|
There are local conditions, but they typically involve the curvature tensor of the underlying metric. For example, if the metric is flat, so that one can choose orthonormal coordinates $x_i$ in which the covariant derivatives are the ordinary derivatives, then the condition that a quadratic form $H = h_{ij}\ dx_idx_j$ be a Hessian is just that
$$
\frac{\partial h_{ij}}{\partial x_k} = \frac{\partial h_{ik}}{\partial x_j} \ .
$$
This is an overdetermined system, and, if it is satisfied, there are solutions $f$ of
$$
\frac{\partial^2 f}{\partial x_i\partial x_j} = h_{ij}
$$
and they are uniquely determined (locally) up to the addition of a function linear in the $x_i$. In the more general case, one has local conditions in terms of the curvature of $g$, so the answer is more complicated (and more interesting). I calculate using moving frames (see below at the end for the 'Nomizu-style' interpretation), so I would explain it this way: Let $g = {\omega_1}^2 +\cdots + {\omega_n}^2$ be the expression of $g$ in a local orthonormal coframing $\omega$. There will exist unique $1$-forms $\theta_{ij}=-\theta_{ji}$ so that (using the summation convention) $d\omega_i = -\theta_{ij}\wedge\omega_j$. (These are the first structure equations.) If $f$ is a function defined in the domain of the coframing, one will have
$$
df = f_i\ \omega_i
\qquad\text{and}\qquad
df_i = -\theta_{ij}\ f_j + f_{ij}\ \omega_j
$$
and, by definition, $\mathrm{Hess}_g(f) = f_{ij}\ \omega_i{\circ}\omega_j$. To determine whether a given symmetric quadratic form $H = h_{ij}\ \omega_i{\circ}\omega_j$ can be written as $H = \mathrm{Hess}_g(f)$ for some $f$, one computes the exterior derivative of the equations $df_i = -\theta_{ij}\ f_j + h_{ij}\ \omega_j$
and finds that one must have
$$
h_{ikl}-h_{ilk} = -R_{ijkl}\ f_j\ , \tag{1}
$$
where $dh_{ij} = -\theta_{ik}\ h_{kj} + \theta_{kj}\ h_{ik} + h_{ijk}\ \omega_k$ and the $R_{ijkl}=-R_{ijlk}$ are the components of the Riemann curvature tensor in this coframing, as defined by the second structure equations
$$
d\theta_{ij} = -\theta_{ik}\wedge\theta_{kj} + \tfrac12\ R_{ijkl}\ \omega_k\wedge\omega_l\ .
$$ The system (1) gives necessary conditions for $f$ to exist, since, in most cases, it completely determines the only possible functions $f_i$ that could be its derivatives in this coframing. It may be worth pausing to interpret (1) as a global equation. In Nomizu-style notation, one has $\mathrm{Hess}_g(f) = (\nabla\nabla f)^{\flat\flat}$, but I am going to ignore the distinction between $T=TM$ and $T^\ast=T^*M$ since we have a metric $g$ that is $\nabla$-parallel and just write $\mathrm{Hess}_g(f) = \nabla\nabla f$. Applying the Bianchi identities, one has
$$
\sigma\bigl(\nabla\bigl(\mathrm{Hess}_g(f)\bigr)\bigr) = \mathsf{R}(\nabla f),
$$
where $\sigma:\mathsf{S}^2(T^\ast)\otimes T^\ast\to T^\ast\otimes\Lambda^2(T^\ast)$ is the canonical skew-symmetrization operator, and $\mathsf{R}:T\to T^\ast\otimes\Lambda^2(T^\ast)$ is the usual mapping induced by the curvature operator. In particular, if $H = \mathrm{Hess}_g(f)$, then one must have
$$
\sigma\bigl(\nabla H\bigr) = \mathsf{R}(\nabla f)\tag{1'}
$$
which, in my moving frames notation, is the system (1). Now, the image of $\sigma$ is, as usual, the kernel of the further skew-symmetrization map $\sigma: T^\ast\otimes\Lambda^2(T^\ast)\to \Lambda^3(T^\ast)$ and, fortunately, the first Bianchi identity implies that $\mathsf{R}$ also takes values in this kernel, which has dimension $n\cdot \tfrac12 n(n{-}1) - \tfrac16 n(n{-}1)(n{-}2)= \tfrac13 n(n^2{-}1)$. We already know what happens when $\mathsf{R}\equiv0$, namely, the first order system of equations $\sigma\bigl(\nabla H\bigr)=0$ are necessary and sufficient that $H$ be locally expressible as a Hessian. However, the 'generic' situation is that $\mathsf{R}$ is injective, so let's consider that case. (I'll say a bit more about what happens in the intermediate cases at the end.) When $\mathsf{R}$ is injective, let $\mathsf{R}(T)\subset T^\ast\otimes\Lambda^2(T^\ast)$ be the image subbundle of rank $n$. The equations $(1')$ then say that $H$ must satisfy the system of $\tfrac13 n(n^2{-}1) - n = \tfrac13 n(n^2{-}4)$ first order equations
$$
\sigma(\nabla H) \equiv 0\ \text{modulo}\ \mathsf{R}(T).\tag{1''}
$$
Assuming that these equations do hold, then there is a unique vector field $F(H)$ on $M$ such that
$$
\sigma(\nabla H) = \mathsf{R}\bigl(F(H)\bigr),
$$
and this $F(H)$, which is a linear, first order differential expression in $H$ (whose coefficients involve the curvature of $g$), is the only possible candidate for $\nabla f$. However, in order for this to solve our problem, it must satisfy
$$
\nabla \bigl(F(H)\bigr) - H = 0,\tag{2}
$$
and this is a system of $n^2$ first-order equations on $F(H)$, which is, of course, a system of $n^2$ second-order equations on $H$. If $H$ does satisfy this system, then, because $\nabla\bigl(F(H)\bigr)=H$ is symmetric, it will follow that $F(H)$ is (locally) a gradient vector field of some function $f$, uniquely determined up to an additive constant. For example, when $n=2$ and the Gauss curvature of $g$ is nowhere zero, the equations $(1'')$ are trivial since $\mathsf{R}(T) = T^*\otimes\Lambda^2(T^\ast)$. Thus, the result of this analysis is that there exists a linear, second order differential operator $\mathsf{S}_g$ from symmetric quadratic forms to quadratic forms, namely $\mathsf{S}_g(H) = \nabla\bigl(F(H)\bigr)-H$, such that a symmetric quadratic form $H$ is (locally) a Hessian with respect to $g$ if and only if $\mathsf{S}_g(H)=0$. Moreover, when the first deRham cohomology group of $M$ vanishes and $\mathsf{S}_g(H)=0$, there will exist a global function $f$ such that $\mathrm{Hess}_g(f)=H$, and this $f$ is unique up to an additive constant. When $n>2$, the equations $(1'')$ are never trivial, so the condition that $H$ be a Hessian with respect to $g$ involves first order conditions and second order conditions. What is interesting is that, when $n$ is sufficiently large and $\mathsf{R}$ is sufficiently 'generic', it appears (I haven't checked for sure) that $(1'')$ can actually imply $(2)$, so that the conditions become first order again. (This is not true when $n=3$, though.) Finally, if $\mathsf{R}$ is not injective, one may have to go to higher order derivatives of $H$ to determine whether it is a Hessian. This is an interesting case, but I don't have time to go into it right now. Remark: Of course, the metric $g$ is not really essential in this story, since it really depends more on $\nabla$, than on $g$. The equation $\nabla(df) = H$ makes sense without any metric and is a reasonable equation, as long as $\nabla$ is a torsion-free connection on $M$. Thus, one could ask for characterizations of those symmetric quadratic differentials $H$ that can be written in the form $H = \nabla(df)$ where $\nabla$ is a given torsion-free connection on $M$. The answer in this case would have essentially the same form as the answer above since, if one carries out the computations carefully, one sees that the metric $g$ plays essentially no rôle in the problem.
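To experiment with the flat case described above, here is a small SymPy sketch (my addition, purely illustrative): it checks the integrability condition $\partial h_{ij}/\partial x_k = \partial h_{ik}/\partial x_j$ on flat $\mathbb R^2$, which holds for a genuine Hessian and fails for a generic symmetric form.

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def is_flat_hessian(h):
    """Check the flat-space condition d h_ij / d x_k == d h_ik / d x_j."""
    n = len(coords)
    return all(sp.simplify(sp.diff(h[i, j], coords[k]) - sp.diff(h[i, k], coords[j])) == 0
               for i in range(n) for j in range(n) for k in range(n))

f = x**3 * y + y**2
H = sp.hessian(f, coords)              # [[6xy, 3x^2], [3x^2, 2]]
print(is_flat_hessian(H))              # True: it is a Hessian

B = sp.Matrix([[y, 0], [0, x]])
print(is_flat_hessian(B))              # False: dB_11/dy = 1 but dB_12/dx = 0
```

When the condition holds, $f$ can be recovered by direct integration, uniquely up to the addition of an affine function, as stated in the answer.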
|
{
"source": [
"https://mathoverflow.net/questions/123633",
"https://mathoverflow.net",
"https://mathoverflow.net/users/15743/"
]
}
|
123,739 |
Of the mathematical objects that I am familiar with, it is normally the case that the product of 2 objects is an object of the same type and that an equivalence relation on an object induces a quotient object of the same type. I think I have some understanding as to why the product of 2 fields is not a field, because a field is not an algebra in the universal algebra sense. But I don't see a reason as to why an equivalence relation on a metric space fails to induce a quotient structure, apart from the fact that it just doesn't work.
|
You can define a (pseudo)metric on a quotient of a metric space. Let $X$ be a metric space with metric $d$ and an equivalence relation $\sim$ . Say that a chain between two points $x,y\in X$ is a sequence of points $x=a_0\sim b_0$ , $a_1\sim b_1$ , $\ldots$ $a_n\sim b_n=y$ , and define the length of such a chain to be $\sum d(b_i,a_{i+1})$ . We can now define the distance $d([x],[y])$ between two equivalence classes to be the infimum of all lengths of chains from $x$ to $y$ . It's easy to see that this is a pseudometric on $X/{\sim}$ (a metric where the distance between two distinct points might be $0$ ). This descends to a true metric on the quotient $Y=X/{\sim}'$ , where $x\sim' y$ if $d([x],[y])=0$ . Furthermore, $Y$ can be characterized by the following universal property: (non-strictly) distance-decreasing maps from $Y$ to a metric space $Z$ are naturally in bijection with distance-decreasing maps from $f:X\to Z$ such that $f(x)=f(y)$ whenever $x\sim y$ . More generally, a similar construction shows that the category of metric spaces and distance-decreasing maps has all connected colimits (colimits over connected diagrams). If you generalize metrics to allow the distance between two points to be infinite, you can construct all colimits, and also all limits (use the sup metric on products).
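A small illustration I'm adding (not part of the answer): for a finite metric space the infimum over chains is just a shortest-path computation, since moving inside an equivalence class is free and moving between points costs $d$. A Python sketch using Floyd-Warshall:

```python
def quotient_pseudometric(points, d, same_class):
    """
    points:      list of points of X
    d:           metric, d(x, y) -> float
    same_class:  equivalence test, same_class(x, y) -> bool
    Returns dist with dist[(x, y)] = distance between the classes [x] and [y].
    """
    n = len(points)
    # Cost of one step: free inside a class, d(x, y) otherwise.
    w = [[0.0 if same_class(points[i], points[j]) else d(points[i], points[j])
          for j in range(n)] for i in range(n)]
    # Floyd-Warshall computes the infimum over all chains.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                w[i][j] = min(w[i][j], w[i][k] + w[k][j])
    return {(points[i], points[j]): w[i][j] for i in range(n) for j in range(n)}

# Example: X = {0, 1, 2, 3} on the real line, glue 0 ~ 3.
pts = [0, 1, 2, 3]
dist = quotient_pseudometric(pts,
                             lambda x, y: abs(x - y),
                             lambda x, y: x == y or {x, y} == {0, 3})
print(dist[(0, 3)])   # 0.0: the two points are identified
print(dist[(0, 2)])   # 1.0: jump 0 ~ 3 for free, then d(3, 2) = 1, shorter than d(0, 2) = 2
```

Here gluing $0\sim 3$ brings $[0]$ and $[2]$ to distance $1$ instead of $2$.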
|
{
"source": [
"https://mathoverflow.net/questions/123739",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29763/"
]
}
|
123,744 |
I've been tasked with proofreading an Engineering/Mathematics thesis paper. I was always told that numbers under 10 should be spelled out (one, two, three, ...) but I was wondering if this rule holds in math and science fields as well. This paper understandably uses a lot of numbers in it and switching back and forth between numbers written out in words and numerals seems inconsistent at best and somewhat confusing at worst.
|
Generally speaking you should try to distinguish between an English number and a mathematical number. As in "Over the next five chapters we will prove that 5 is a prime number." Never spell out a number which is a subject of study, like the "5" in that sentence. However ordinals like in "Section 4" and "Lemma 2" are always written with digits. "There are three results in Theorem 3."
|
{
"source": [
"https://mathoverflow.net/questions/123744",
"https://mathoverflow.net",
"https://mathoverflow.net/users/31992/"
]
}
|
123,942 |
Suppose $X$ is a projective variety over $\mathbb C$. I am happy to entertain more or different adjectives — I'm not looking for the most general statement, but rather to understand when and how smooth-manifold intuition leads me astray. I know very little algebraic geometry, and so please forgive and correct me if a statement below is mistaken. It is rare for a line bundle $\mathcal L \to X$ to have a nowhere-vanishing section, and when it does, there are usually very few (only $\mathbb C^\times$ many).
Suppose instead that I ask for a weaker structure than for $\mathcal L$ to have a section, but rather let me ask only that it has a flat connection. My question is: In algebraic geometry, how often does a line bundle have a flat connection? When it has a flat connection, how many flat connections can it have?
|
I presume that your variety $X$ is smooth. Consider the additive map $\mathrm d\log \colon \mathscr O_X^*\to \Omega^1_X$ that sends $f$ to $\mathrm df/f$. It induces a map $c_1$ in cohomology from $H^1(X,\mathscr O_X^*)$
to $H^1(X,\Omega^1_X)$ — a coherent avatar of the first Chern class.
By Hodge Theory, $H^1(X,\Omega^1_X)$ is a subspace of $H^2(X,\mathbf C)$ and the two notions
of first Chern class coincide. A line bundle $\mathscr L$ has a connection if and only if its first Chern class $c_1(\mathscr L)\in H^1(X,\Omega^1_X)$ vanishes. The proof is straightforward: take an open cover $(U_i)$ of $X$, an invertible section $s_i$ of $\mathscr L$ on $U_i$ and the associated cocycle $(f_{ij})$ representing your line bundle in $H^1(X,\mathscr O_X^*)$. A connection $\nabla$ maps $s_i$ to $s_i\otimes\omega_i$, for some 1-form $\omega_i\in H^0(U_i,\Omega^1_X)$. The condition that these $s_i\otimes\omega_i$ come from a global connection on $X$ is exactly the vanishing of $c_1(\mathscr L)$. It is a non-trivial fact that if $\mathscr L$ has an algebraic connection, then it is automatically flat. Torsten Ekedahl gave an algebraic proof on this thread of MO (Ekedahl also observes that $p$th power of line bundles in characteristic $p$ have an integrable connection), but an analytic proof seems easy. The algebraic connexion $\nabla$ gives rise to a connexion $\nabla+\bar\partial$ on the associated holomorphic line bundle. One checks that the curvature of this connection is a $(2,0)$-form, while it should be a $(1,1)$-form. Consequently, it vanishes. When non empty, the set of flat connections on a vector bundle $\mathscr E$ is an affine space under $H^0(X,\Omega^1_X\otimes\mathscr E\mathit{nd}(\mathscr E))$, a finite dimensional vector space. In our case, $\mathscr L$ is a line bundle, hence $\mathscr E\mathit{nd}(\mathscr L)$ is the trivial line bundle so that we get $H^0(X,\Omega^1_X)$. NB. Following the comment of Ben McKay, I edited the last paragraph.
|
{
"source": [
"https://mathoverflow.net/questions/123942",
"https://mathoverflow.net",
"https://mathoverflow.net/users/78/"
]
}
|
124,011 |
Remark: I have since learned that G.H. Moore addresses this question in the third reference listed at the end of this post, beginning on p. 157 in which he cites a letter from Kreisel to Gödel dated 4/15/63, followed by the parenthetical note: Thus Kreisel saw an analogy between forcing and Friedberg's [1957] priority argument. The following page suggests other logicians agree that forcing was implicit in priority arguments from recursion theory (p. 158) and mentions Kunen (who is cited in one of the answers here). On the same page, it is mentioned that Kreisel claimed he had a form of forcing in his interpretation of intuitionism in the 1961 paper "Set-theoretic problems suggested by the notion of potential infinity." However, Moore contends that Cohen was the first to use forcing and related ideas in Set Theory. Nevertheless, I am grateful for the responses thus far, and would warmly welcome further clarification regarding my question below from other experts in the area of Set Theory and/or Mathematical Logic. Background: Paul Cohen began to think deeply about the Continuum Hypothesis in 1962, and published his proof of its independence (in two parts) by the following year. Of course, there were earlier mathematical moments that led to his "discovery of forcing," including his familiarity with Skolem's work (in particular, the Löwenheim–Skolem theorem ) and a desire to think in terms of "decision procedures." I will include a few relevant references at the end of this question, including a retrospective/introspective piece by Cohen himself. My question concerns an occurrence in 1957, when Richard Friedberg provided a solution to Post's Problem. First, allow me to transcribe an excerpt from Cohen's talk at the 2006 Gödel centennial : At that time there was great interest among Raymond [Smullyan] and some other people about the Post Problem. And that’s a problem which could have interested me; it had a mathematical flavor to it. But I never thought about it, and occasionally we’d have coffee and I’d hear these people talk about it. But one day, someone came to my office and said, “This problem’s been solved.” And I said, “Really?” “Yes, here’s the letter. I can’t believe it’s true!” And he gave it to me and I read it. I went to the blackboard, took some chalk, and I said, “Well, it seems right.” This is the proof by Friedberg – and so that was my only contact with logic at that point. But I still never lost this idea of somehow thinking about the foundations of mathematics: trying to find some kind of inductive technique for simplifying propositions; perhaps leading to a decision procedure, when impossible. This talk is summed up in the introduction to a re-printing of "Set Theory and the Continuum Hypothesis" (Cohen, 2008) in which the remarks corresponding to the above excerpt are: A small group of students were very interested in Emil Post's problem about maximal degree of unsolvability. I did dally with the thought of working on it, but in the end did not. Suddenly, one day a letter arrived containing a sketch of the solution by Richard Friedberg (Friedberg, 1957), and it was brought to my office. Amidst a certain degree of skepticism, I checked the proof and could find nothing wrong. It was exactly the kind of thing I would like to have done. I mentally resolved that I would not let an opportunity like that pass me again. 
I find the last sentence of this latter quotation rather interesting, particularly since it concerns a time five years before Cohen's work on ~CH officially commenced. A quick check of Wikipedia gives a problem statement and solution (i.e., the priority method) that sound remarkably similar, at least on the superficial level, to Cohen's subsequent work with forcing. Unfortunately, work on Turing degrees falls well outside of my bailiwick. Question: Can someone who specializes in Set Theory or Mathematical Logic comment on the similarities between Post's problem/the priority method and ~CH/Cohen's forcing? In particular, is there reason to believe that what was Cohen's "only contact with logic" by 1957 would have contributed in a meaningful (mathematical) way to his work half a decade later? References: Cohen, P. (2002). The discovery of forcing. Rocky Mountain Journal of Mathematics, 32(4). Kanamori, A. (2008). Cohen and set theory. The Bulletin of Symbolic Logic, 351-378. Moore, G. H. (1988). The origins of forcing, Logic Colloquium ’86. Studies in Logic and the Foundations of Mathematics, North-Holland, Amsterdam, 143-173.
|
In this edit, a historical note is added at the end. The quote below from Kunen's classic text Set Theory maybe of interest to you. Note that Kunen is pointing out that certain classical constructions in recursion theory can be viewed as precursors to forcing; but he does not claim that priority arguments fall under this category (on the other hand, there have been attempts to couch priority arguments as forcing arguments, e.g., by Nerode and Remmel in the mid-1980s). "There are two important precursors to the modern theory of forcing:
one in recursion theory and one in model theory.
In recursion theory, many classical results may be viewed, in hindsight,
as forcing arguments. Consider, for example, the Kleene-Post theorem
that there are incomparable Turing degrees. Let $\Bbb{P} = \mathrm{Fn}(2 \times \omega, 2)$, let $G$
be $\Bbb{P}$-generic over $M$, and think of $G$ as coding $f_0$ and $f_1\in2^{\omega}$, where $f_i(n) =\cup G(i,n)$. Furthermore, to conclude recursive incomparability of $f_0$ and $f_1$ it is not necessary that $G$ be generic over all of $M$; it is sufficient that $G$ intersect only a few of the arithmetically defined dense sets of $M$; so few that in fact
$G$, and hence also $f_0$ and $f_1$ may be taken to be recursive in $0'$. This forcing argument for producing incomparable degrees below $0'$ is in fact precisely the original Kleene-Post argument, with a slight change in notation. See [Sacks 1971] for some deeper applications of forcing to recursion theory and a comparison of these methods with earlier (pre-forcing) techniques". [From Set Theory, by Kenneth Kunen, p.236] Historical Note: Cohen did not develop forcing in terms of general partial orders. His forcing machinery was developed (1) only over models of set theory satisfying Gödel's axiom of constructibility (V=L), and (2) only for certain partially ordered sets [namely those of the form Fn($\kappa, 2$) in modern terminology]. The machinery of forcing over arbitrary models was first developed by Solovay and Scott in the guise of Boolean valued models. Their approach was later simplified by Shoenfield to yield the current textbook formulations in terms of arbitrary partial orders. Therefore, even though priority arguments can be viewed as forcing arguments (as developed by Nerode and Remmel, and described in the answer of Noah S.), Cohen's work on forcing, only when extended by new ideas of Solovay, Scott, and Shoenfield (and perhaps others, e.g., Rowbottom), led to a sufficiently powerful technology that subsumes (at least formally) priority arguments.
|
{
"source": [
"https://mathoverflow.net/questions/124011",
"https://mathoverflow.net",
"https://mathoverflow.net/users/22971/"
]
}
|
124,308 |
When I tested this in Mathematica , I had expected it to say it did not converge. However, I got this: $$\prod_{n=1}^\infty n^{\mu(n)}=\frac{1}{4 \pi ^2}$$ Note: this is the reciprocal of (3) zeta-regularized product over all primes . This indicates there is a slight skew toward the square-free numbers that have an odd number of factors. I haven't been able to find this in the literature, but my guess is that it is already known. Can someone point me in the right direction? Edit: We have been told it is a bug. Mathematica V.10 does not produce a symbolic answer. Edit 2 We can make this work by creating a function $f(k)$ that returns the $k$-th square-free number (in the constructive? order). We can find a few ideas from A019565 . $$\prod_{n=1}^{\infty}f(n)^{\mu(f(n))}$$ $$\sum_{n=1}^\infty \mu(f(n))$$ At the steps when $n>2$ is a power of $2$, sum$=0$ and prod$=1$. Also, at those steps, $f(n)$ is the primorial with $p_k$ as the greatest factor, with $k=\frac{\log(n)}{\log(2)}$ There is a revised question over here. It is a variation to Edit 2.
|
We have
$$\frac{1}{\zeta(s)} = \sum_{n=1}^{\infty} \frac{\mu(n)}{n^s} \quad \mbox{for} \ Re(s)>1.$$
Taking the derivative with respect to $s$, we get the following
$$- \frac{\zeta'(s)}{\zeta(s)^2} = - \sum_{n=1}^{\infty} (\log n) \frac{\mu(n)}{n^s} \quad \mbox{for} \ Re(s)>1.$$ If we plug in $s=0$ to the second equation (which is not allowed, because the second equation is only valid for $Re(s)>1$) we get
$$ - \frac{\zeta'(0)}{\zeta(0)^2} = - \sum_{n=1}^{\infty} \mu(n) \log(n) \quad \mbox{(FALSE EQUATION)}.$$ We have
$$\zeta(0) = - \frac{1}{2} \quad \zeta'(0) = - \frac{1}{2} \log(2 \pi)$$
so
$$ - \frac{\zeta'(0)}{\zeta(0)^2} = 2 \log(2 \pi)$$ So, if we believed the false equation, we would have
$$\sum_{n=1}^{\infty} \mu(n) \log(n) = -2 \log (2 \pi) \ \mbox{and} \ \prod_{n=1}^{\infty} n^{\mu(n)} = \frac{1}{4 \pi^2}.$$ I don't know why Mathematica thinks it's okay to plug in $s=0$.
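(My addition.) The closed forms in this answer are easy to confirm numerically, for instance with mpmath:

```python
from mpmath import mp, zeta, diff, log, pi, exp

mp.dps = 30

zeta0      = zeta(0)          # -1/2
zeta_prime = diff(zeta, 0)    # numerical derivative: -log(2*pi)/2

lhs = -zeta_prime / zeta0**2
print(lhs)                    # ~3.6757..., which is
print(2 * log(2 * pi))        # 2*log(2*pi)

# The "value" assigned to the product prod n^mu(n):
print(exp(-lhs))              # ~0.0253302959...
print(1 / (4 * pi**2))        # 1/(4*pi^2)
```

So the value Mathematica reports is exactly $e^{-2\log(2\pi)} = 1/(4\pi^2)$, obtained by the illegal substitution $s=0$.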
|
{
"source": [
"https://mathoverflow.net/questions/124308",
"https://mathoverflow.net",
"https://mathoverflow.net/users/16888/"
]
}
|
124,754 |
This is more of a question about terminology than about math. The term "automorphic form" is clearly a generalization of the term "modular form." What is not clear is exactly which generalization it is. Many sources use the term in different ways. Any classical holomorphic modular form for $\mathrm{SL}_2(\mathbb{Z})$ is called a modular form, and usually (but not always) so are modular forms for congruence subgroups. Often, "automorphic form" is used when one considers either other Fuchsian groups, forms on groups other than $\mathrm{GL}_2$, or non-holomorphic forms (such as Maass forms and real analytic Eisenstein series). Alternatively, Diamond and Shurman define an "automorphic form" in Section 3.2 as being like a modular form but possibly meromorphic instead of holomorphic. As another example, Miyake's book Modular Forms writes on p.114, "Automorphic functions and automorphic forms for modular groups are called modular functions and modular forms, respectively," and his definition of "modular group" seems to coincide with that of congruence subgroup. The Princeton Companion to Mathematics writes in section III.21, "automorphic forms, which are generalized versions of the classical analytic functions called modular forms [III.61]," but it does not specify what the generalization is. The book contains other, similar statements (in III.61, "And indeed, automorphic forms, which are generalizations of modular forms"). So what exactly is the* definition of modular form as opposed to automorphic form? Since there is likely no "right" answer, what I really want to know is what is the history and what are the different conventions and the relations between them.
|
Very briefly: until work of Hans Maass c. 1949, "modular" or "automorphic" both referred to holomorphic functions invariant-up-to-cocycle (that is, invariant holomorphic sections of a bundle) on a quotient $\Gamma\backslash X$. For $X$ the upper half-plane, these were ellipic modular forms, visible since the 19th century. For $X$ a product of upper half-planes, these were Hilbert-Blumenthal modular forms, also studied by Hecke and Siegel. For $X$ a Siegel upper half-space, Siegel modular forms, studied by Siegel and Hel Braun in 1939. The "elliptic" case arose partly from moduli problems concerning elliptic curves, indeed, and had its roots in the work of Abel and Jacobi in the early 19th century. The Hilbert-Blumenthal story seems to have been a conscious generalization, once the number-theoretic content of the elliptic modular case was illustrated in the late 19th century, e.g., as in in Fricke-Klein. The "Siegel modular" case has physical setting the moduli space for principally polarized abelian varieties of a fixed size. The "general" case of discrete subgroups (Fuchsian, etc.) of $PSL(2,\mathbb R)$ acting on the upper half-plane (or, equivalently, $PSU(1,1)$ on the disk) was investigated by Poincare and others in the very late 19th century, and called "automorphic". After Hecke saw that binary theta series give Dedekind zetas of complex quadratic extensions of $\mathbb Q$, he gave Maass the thesis problem of finding an analogue for real quadratic. This led Maass to discover "waveforms", solutions of $(\Delta-\lambda)f=0$ where
$\Delta=y^2({\partial^2\over \partial x^2}+{\partial^2\over \partial y^2})$
is the $SL(2,\mathbb R)$-invariant Laplacian on the upper half-plane. The Eisenstein series $E_s(z)=\sum'_{c,d} y^s/|cz+d|^{2s}$ are the most explicit examples, and also the "special waveforms" Maass found (that Mellin transform essentially to Dedekind zetas of real fields and Hecke L-functions thereupon) are explicit. These are "automorphic forms/functions". Selberg, Roelcke, Gelfand-et-al, and a few others continued looking at the "analytic" side of these things in the 1950s. Siegel and Braun continued to work on holomorphic modular/automorphic forms on higher-dimensional spaces, with Braun initiating the investigation of "Hermitian" modular forms, that is, attached to the group $U(n,n)$ rather than $Sp(n,\mathbb R)$ for Siegel modular forms. Starting in the late 1950s and 1960s, Shimura considered the algebraic geometry and arithmetic (that is, Hasse-Weil zeta functions, generation of classfields, and such) on many higher-dimensional "modular" varieties, subsumed mostly under the "PEL-type" label: polarization, endomorphism, level. By this point, it seems that "automorphic" was used to refer to these "general" situations, even though they had connections with moduli problems. Also in the late 1950s and 1960s, Gelfand and his collaborators (Pieatetski-Shapiro, especially) emphasized the representation-theoretic possibilities in studying not only holomorphic modular/automorphic forms/function, but also Maass waveforms and other "generalizations". In particular, by about 1960 it was clear that from a repn theoretic viewpoint "holomorphic modular forms" and "real-analytic waveforms" had the commonality that both generated irreducible repns of the Lie group acting. Further, for congruence subgroups, being an eigenfunction for Hecke operators essentially meant generating irreducibles for the p-adic groups acting, as well. (In fact, even the "bad prime" behaviors are included nicely, if less formulaically, under this umbrella.) Langlands' work on the spectral theory of automorphic forms and Eisenstein series in general, in the 1960s, and conjectures relating Artin L-functions to general cuspforms on $GL(n)$, etc., gave a big impetus to the "general" theory starting in the late 1960s. In part, this was made feasible by progress in the repn theory of semi-simple real Lie groups, especially by Harish-Chandra. The repn theory of p-adic groups, initiated mostly by MacDonald and Gelfand-et-al, started off a little more slowly, but also proved to be sufficiently robust as to be a "help" rather than "hindrance" in this aspect of the theory of afms. The general study of moduli problems similarly needed additional inputs to continue to make progress, and Grothendieck-et-al's newer algebraic geometry, in the hands of Deligne and others, turned out to be a good language/viewpoint for this. There is a lot more to be said, naturally. By this year, "modular" suggests "holomorphic", as well as "related to moduli problem". "Automorphic" suggests "something more general", but also can be used as an umbrella term. Holomorphic-except-for-singularities, that is, meromorphic forms, have also arisen in higher-dimensional settings, as in Borcherds products. On another hand, the "weak" automorphic/modular forms of Zwegers-et-al allow a controlled extension of the moderate-growth condition on waveforms. Edit... 
and it may be worth noting that various more-formal "definitions" are highly non-trivial to compare to each other, often depending upon appreciation of big theorems from repn theory, algebraic geometry, and number theory that are usually not named during a formal discussion of "definitions". Further, there are more-elementary incompatibilities that are often harmless in a given context, but are not overtly acknowledged. For example, the "moderate growth" condition is not met by $L^2$ automorphic forms, for the same reason that functions in $L^2(\mathbb R)$ are typically not of moderate growth. Another is that requiring $\mathfrak z$-finiteness precludes taking an $L^2$ closure. But these awkwardnesses of the formal language are not genuine obstacles.
|
{
"source": [
"https://mathoverflow.net/questions/124754",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1355/"
]
}
|
124,965 |
All free groups of finite or infinite countable rank are subgroups of the free non-abelian group $F_2$, which is linear. However, a free group of infinite uncountable rank will not be a subgroup of $F_2$. Is it linear, too ?
This might easily follow from model theory, but I could not find a proof in the literature so far.
|
The free group of rank $c$ embeds in $SL(2, F(t))$ where $F$ is a field of cardinality $c$. Edit: Here is the detailed argument which, as Yves noted in his comment, proves a stronger result. Theorem. Let $L$ be a field which is not an algebraic extension of a finite field and let $c$ be the cardinality of $L$. Then the free group of rank $c$ embeds in $SL(2, L)$. Proof. Let $P$ be the prime field of $L$; then $L$ has the form
$$
P\subset E \subset L
$$
where $E$ is a purely transcendental extension of $P$ and $L$ is an algebraic extension of $E$. Under our assumptions, $E$ and $L$ have the same cardinality; thus, it suffices to consider the case when $L=E$. Then $L$ is isomorphic to the function field $L=F(t)$, where $F$ is a subfield of $L$. I will consider the case when $F$ is infinite since otherwise $L$ is countable and everything is clear (as the question reduces to the case of free groups of finite rank). Thus, $F$ has the same cardinality $c$ as $L$. Let $T$ be the Bruhat-Tits building associated with $G=SL(2, L)$: This building is a simplicial tree with the path-metric $d$, where every edge has unit length. The group $G$ acts on $T$ by simplicial automorphisms with the kernel $\pm 1$. Detailed description and properties of $T$ and the action of $G$ are in Serre's book "Trees." Let $v\in T$ be the vertex stabilized by $K=SL(2, O)$, where $O=F[t]$ is the ring of polynomial functions in $t$. Then the link $L_v$ of $v$ in $T$ is naturally identified with the projective line over $F$ (so that $K$ acts on $L_v$ by linear-fractional transformations). In particular, the group $K$ acts transitively on pairs of distinct points in $L_v$. Let $g\in G\setminus K$ be a diagonal matrix with the axis $\gamma\subset T$. Then $\gamma$ contains $v$ and $g$ acts on $\gamma$ as a translation by some even integer distance $\ge 2$. In view of transitivity of the action of $K$ on pairs noted above, there exists a subset $K_o\subset K$ of cardinality $c$ so that the elements $g_k=kgk^{-1}$, $k\in K_o$ have axes $k(\gamma)$ with the property that the 2-point sets
$$
k(\gamma) \cap L_v, k\in K_o,
$$
are pairwise disjoint. (Call this property D.) Now, I claim that the elements $g_k, k\in K_o$, are free generators of a free subgroup of $G$. The proof is rather standard. For each $k\in K_o$ let $D_k\subset T$ denote the Dirichlet fundamental domain for the cyclic group $\langle g_k \rangle$:
$$
D_k=\{ x\in T: d(x, g_k^m(v))> d(x, v), \forall m\in {\mathbb Z} \setminus 0\}.
$$
Since each $g_k$ translates $v$ by at least $2$, and in view of Property D above, the domains $D_k$ have pairwise disjoint complements. Thus, Tits' ping-pong argument (from his proof of the Tits alternative) applies in this setting and the subgroup of $G$ generated by the elements $g_k$ is indeed free with free generators $g_k$. qed. Note that one has to exclude fields $L$ which are algebraic extensions of finite fields, since in this case the group $GL(n, L)$ is torsion (for every finite $n$) and, hence, cannot contain a free subgroup.
|
{
"source": [
"https://mathoverflow.net/questions/124965",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32332/"
]
}
|
124,991 |
The title of this post expresses what I really want, which is to learn how to wield the internal logic of a topos more effectively. However, to bring it down to earth, I'll ask a few basic questions about the topos $\mathcal{Grph}$ of directed graphs, whose underlying category is
$$\mathcal{Grph}:={\bf Set}^{\mathcal{G}}.$$
Here $\mathcal{G}$ is the indexing category for graphs, with two objects $A$ and $V$ and with two morphisms $src,tgt\colon A\to V\ $. What can I express in the internal logic of the topos $\mathcal{Grph}$, say using the Mitchell-Benabou language or the Joyal-Kripke semantics, or what have you? How can I use this logic to prove things? What kinds of things can't be said or proven in this way? Below are some concrete questions. In each case when I ask "Can I...", I mean "To what degree can I use the internal language and logic of $\mathcal{Grph}$ to...". Can I express that a graph $X$ is finite, (respectively complete, discrete)? Can I take a graph $X$ and produce the
paths-graph $Paths(X)\ $, whose
vertices are those of $X$ but whose
arrows are all finite-length paths
in $X$? Can I express that $Paths$ is a monad, i.e. produce some morphism $X\to Paths(X)\ $ and another $Paths(Paths(X))\to Paths(X)\ $ with some properties? Can I take a graph $X$ and produce the subgraph $L\subseteq X$ consisting of all vertices and arrows that are involved in a loop? (That is, an arrow $a\in X(A)$ is in $L$ iff there exists a path $P$ of length $n$ in $X$ such that $a\in P$ and $P$ is a loop: $P(0)=P(n)\ $. A vertex $v\in X(V)$ is in $L$ if it is the source of an arrow in $L$.) Can I prove that if a graph has no loops and finitely many vertices then it has finitely many paths? Can I do something else in $\mathcal{Grph}$ that might be fun and informative? Maybe this post reflects a basic misunderstanding of how to think about the internal logic of a topos. If so, please set me straight. Also, a quiz question might be nice -- something of the form "see if you can express/prove this in $\mathcal{Grph}$: _ _ ."
|
In general the internal language of a topos can only express those statements that make sense in every topos. In essence, this limits you to something like bounded Zermelo set theory, without global membership. The right way to use the internal language of a particular topos, such as your topos of directed graphs, is to enrich the general internal language of toposes with new primitive types and new axioms. If you are lucky you may be able to add just one axiom and define the new types (i.e., the new types can be characterized in the internal language). Let us see what these may be in the case of the topos of directed graphs. Because we are dealing with a presheaf topos we can tell in advance that the (covariant) Yoneda embedding $y : \mathcal{G} \to \mathbf{Set}^\mathcal{G}$ will give us something important. Indeed, $y(V)$ is the graph with one vertex and no arrows, while $y(A)$ is the graph with two vertices and one arrow in between. Let me write $V$ and $A$ instead of $y(V)$ and $y(A)$, respectively. We might call $V$ "the vertex" and $A$ "the arrow". We call the objects of our topos "graphs", obviously. Simple calculations reveal that, for a given graph $G$: $G \times V$ is the associated discrete graph on the vertices of $G$. $G^V$ is the associated complete graph on the vertices of $G$. $G \times A$ is the following graph: for each vertex $g$ in $G$ we get two vertices $(g,s)$ and $(g,t)$ in $G \times A$ (think of them as "$g$ as a source" and "$g$ as a target"), and for each arrow $a : g \to g'$ in $G$ we get an arrow $a : (g,s) \to (g',t)$ in $G \times A$. This probably means something to graph theorists; I would not be surprised if they have a name for it. $G^A$ is the associated "graph of arrows": the vertices of $G^A$ are pairs of vertices $(g,g')$ of $G$; and for each arrow $a : g \to h$ we get an arrow $a : (g,g') \to (h',h)$ in $G^A$. This makes more sense once you compute the global points of $G^A$: they correspond precisely to the arrows in $G$. Also, it is helpful to think of the vertices of $G^A$ as "potential arrows of $G$". We can already answer some of your questions: A graph $G$ is discrete when the projection $G \times V \to G$ is onto. A graph $G$ is complete when the canonical map $G \to G^V$ is onto. You would like to have the graph of paths $\mathsf{Path}(G)$ of a given graph $G$. I think you've described the wrong gadget, i.e., what you should be looking for is a graph whose global points are the paths in $G$, but there will be many other "potential" things floating around. We have so far not used the fact that there are two morphisms $s, t : A \to V$. These allow us to form "generic paths of length $n$" $P_n$ as pullbacks: $P_1 = A$, $P_2 = A \times_V A$ is the pullback of $s : A \to V$ and $t : A \to V$, and so on. With a little bit of care we should be able to form the object of "generic paths" $P$, equipped with a concatenation operation that turns it into a monoid. I am going to naively guess that the vertices of $P$ are pairs of natural numbers $(k,n)$ with $k < n$ and that arrows are of the form $(k,n) \to (k+1,n)$. But this needs to be checked, and in any case it should be possible to define the "correct" $P$ internally. The graph $\mathsf{Path}(G)$ that you are looking for ought to be the dependent sum $\sum_{p : P} G^p$ (and this looks a lot like a polynomial functor). The monoid structure on $P$ should give you a monad.
Regarding cyclic paths (you call them loops): if I am not mistaken the internally projective graphs are those graphs whose in- and out-degrees are all 1, in other words the cycles and the infinite path stretching in both directions. This should help with getting a grip on cyclic paths. That a graph $G$ is internally projective can be expressed in the internal language as "every $G$-indexed family of inhabited graphs has a choice function", i.e., these are the objects that satisfy the axiom of choice, internally. The vertex $V$ is a subobject of the terminal object $1$, which is the graph with a single vertex and a single arrow. Thus, there is a corresponding truth value $v \in \Omega$, which is a kind of "intermediate" truth value. We can define a closure operator $j : \Omega \to \Omega$ (a modality) by $j(p) = (v \Rightarrow p)$. This modality should be called "vertex-wise". Indeed, if $H \hookrightarrow G$ is a subgraph of $G$ then its $j$-closure $\bar{H} \hookrightarrow G$ is the subgraph of $G$ induced by the vertices of $H$. Ah, but this is then the same as the complement of the complement of $H$, so we see that $j$ is just the double negation closure. (I hope I am doing this right, I am speaking off the top of my head.)
If I am correct, then we can define $V$ in the internal language, using an axiom: Axiom: there is a truth value $v \in \Omega$ such that $(v \Rightarrow p) = \lnot\lnot p$ for all $p \in \Omega$. Then $V = \lbrace * \in 1 \mid v \rbrace$. We still have to do something about $A$, though. In any case, my experience with internal languages is that they are well worth using. It takes a bit of effort, though, to figure out the optimal way of setting up the internal language of a particular topos. The general idea is to introduce as few new types as possible, characterize them with suitably chosen axioms, and figure out what other useful axioms are valid in your topos.
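An illustration I'm adding (not from the original answer): the criterion "$G$ is discrete when the projection $G\times V\to G$ is onto" can be checked mechanically, since epimorphisms in a presheaf topos are exactly the componentwise surjections and $(G\times V)_A = G_A\times\emptyset$ is empty. A small Python sketch, with a graph encoded (my own convention) as a pair (set of vertices, dict of arrow names to (source, target)):

```python
V_graph = ({'*'}, {})                      # "the vertex": one vertex, no arrows

def product(G, H):
    """Componentwise product of graphs, computed objectwise as in Set^G."""
    GV, GA = G
    HV, HA = H
    PV = {(g, h) for g in GV for h in HV}
    PA = {(a, b): ((GA[a][0], HA[b][0]), (GA[a][1], HA[b][1]))
          for a in GA for b in HA}
    return (PV, PA)

def projection_onto_first_is_epi(G, H):
    """Epi in the presheaf topos = surjective on vertices and on arrows."""
    PV, PA = product(G, H)
    GV, GA = G
    verts_onto  = {p[0] for p in PV} == set(GV)
    arrows_onto = {a[0] for a in PA} == set(GA)
    return verts_onto and arrows_onto

def is_discrete(G):
    return len(G[1]) == 0

examples = [
    ({'a', 'b', 'c'}, {}),                                  # discrete
    ({'a', 'b'}, {'e': ('a', 'b'), 'loop': ('b', 'b')}),    # not discrete
]
for G in examples:
    assert projection_onto_first_is_epi(G, V_graph) == is_discrete(G)
    print(sorted(G[0]), "discrete?", is_discrete(G))
```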
|
{
"source": [
"https://mathoverflow.net/questions/124991",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2811/"
]
}
|
125,224 |
Call a point of $\mathbb{R}^d$ rational if all its $d$ coordinates are rational numbers. Q1 .
Are the rational points dense on the unit sphere $S :\; x_1^2 +\cdots+ x_d^2 = 1$, i.e. does $S$ contain a dense set of rational points? This is certainly true for $d=2$, rational points on the unit circle . Q2 .
If (as I suspect) the answer to Q1 is Yes , is there a sense in which the rational coordinates are becoming arithmetically more complicated with larger $d$, say in terms of their height ? If $x= a/b$ is a rational number in lowest terms (i.e. gcd$(a,b)=1$), then the height of $x$ is $\max \lbrace |a|,|b| \rbrace$. This is far from my expertise. No doubt this is known, in which case a pointer would suffice. Thanks! (Added, 22Mar13 ). I just found this reference. Klee, Victor, and Stan Wagon. Old and new unsolved problems in plane geometry and number theory . No. 11. Mathematical Association of America, 1996. p.135.
|
The question itself has already been answered. Let me just add that in case $d = 3$,
one can obtain a nice picture by marking all rational points with height less than some
upper bound, and projecting this to one of the coordinate planes. The following picture
shows such projection of one octant of the sphere (bound on height: 2048): This picture in resolution 2048 x 2048 pixels can be found at https://stefan-kohl.github.io/images/ratpoints2048.png . Larger versions of this picture are available as well: 5000 x 5000 pixels, bound on height = 5000 10000 x 10000 pixels, bound on height = 10000 Projecting the rational points on the sphere in a Riemann-sphere -like way
to the plane yields a picture like this: The point where the sphere touches the plane is in the middle of the picture.
One feature of the picture is a grid of white circles with mesh size 2, i.e.
the diameter of the sphere. The higher density of points around the middle of
the picture arises from the projection. Larger versions of this picture are available as well: 5000 x 5000 pixels, bound on height = 5000, both coordinates from -6 to 6 10000 x 10000 pixels, bound on height = 10000, both coordinates from -8 to 8
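(Added remark, not from the original answer.) To reproduce such data oneself, one can enumerate the rational points with common denominator at most $N$ by listing integer solutions of $a^2+b^2+c^2=d^2$; bounding the common denominator is a crude stand-in for bounding the height. A short Python sketch:

```python
from fractions import Fraction
from math import isqrt

def rational_points_on_sphere(max_denominator):
    """Points (a/d, b/d, c/d) on x^2 + y^2 + z^2 = 1 with 0 < d <= max_denominator."""
    pts = set()
    for d in range(1, max_denominator + 1):
        for a in range(-d, d + 1):
            for b in range(-d, d + 1):
                rest = d * d - a * a - b * b
                if rest < 0:
                    continue
                c = isqrt(rest)
                if c * c == rest:
                    pts.add((Fraction(a, d), Fraction(b, d), Fraction(c, d)))
                    pts.add((Fraction(a, d), Fraction(b, d), Fraction(-c, d)))
    return pts

pts = rational_points_on_sphere(40)
print(len(pts), "rational points with common denominator at most 40")
# Keeping the octant x, y, z >= 0 and projecting (x, y, z) -> (x, y)
# gives pictures like the one described above.
```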
|
{
"source": [
"https://mathoverflow.net/questions/125224",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
125,501 |
Let $G$ be a finite non-abelian group of $n$ elements.
I would like a measure that intuitively captures the
extent to which $G$ is non-commutative.
One easy measure is a count of the non-commutative products.
For example, for $S_3$, 9 products are non-commutative,
or, 18 of the 36 entries in the multiplication table indicate
non-commutativity
(in the table, $r$=rotation; $f$=flip): So one might say $S_3$ is 50% non-abelian. Another idea is to determine the fewest element identifications
needed to make the group abelian. If one identifies
the elements $r$ and $r^2$ above, and calls the resulting merged element $a$,
then I believe $S_3$ is reduced to the abelian $C_2$: So one might say $S_3$ is one element identification away from being abelian. My question is: Is there some standard, accepted measure
of how far a group is from being abelian? Ideally such a measure would not be restricted to finite groups. Thanks
for pointers!
|
Of course, one might say that both $Z(G)$ and $[G,G]$, in a sense, "measure" the non-commutativity of $G$. But they are not very good "quantitative" measures. I think what you are aiming at is a notion introduced by Turán and Erdős ( Some problems of a statistical group theory IV , Acta Math. Acad. of Sci. Hung. 19 (1968), 413-435), the "probability that two elements of $G$ commute":
$$P(G) = \frac{\left|\{ (x,y)\in G\times G\mid
xy=yx\}\right|}{|G|^2}.$$
In fact, $P(G) = k/|G|$, where $k$ is the number of conjugacy classes of $G$. Gustafson proved that if $G$ is nonabelian then $P(G)\leq 5/8$, and extended the notion to compact groups using Haar measure (W. Gustafson, What is the probability that two group elements commute? American Math. Monthly 80 (1973) 1031-1034). MacHale proved that certain values cannot occur: if $P(G)\gt \frac{1}{2}$, then $P(G) = \frac{1}{2} + \left(\frac{1}{2}\right)^{2s+1}$; and $P(G)$ cannot satisfy $\frac{7}{16} \lt P(G) \lt \frac{1}{2}$. Joseph proved that if $G$ is not commutative and $p$ is the smallest prime that divides $|G|$, then $P(G)\leq \frac{p^2+p-1}{p^3}$ (K.S. Joseph, Commutativity in non-abelian groups, PhD thesis, 1969, UCLA). There's been some other work on this. In the case of $S_3$, $|G|=6$, and the set of pairs $(x,y)$ with $xy=yx$ has, as you note, $18$ elements, so the probability that two elements commute is precisely your "50% nonabelian". Your second notion seems to be that of looking at $G/[G,G]$, which is the "largest" quotient of $G$ which is abelian. Added: Since I edited to fix the accent on Erdős, I'll take the opportunity to add some references: Desmond MacHale, How commutative can a non-commutative group be?, Math. Gazette 58 (1974), 299-202. David J. Rusin, What is the probability that two elements of a finite group commute?, Pacific J. Math 82 (1979), no. 1, 237-247. Robert Guralnick and Geoff Robinson, On the commuting probability in finite groups, J. Algebra 300 (2006), no. 2, 509-528, MR 2228209 (2007g:60011); Addendum, J. Algebra 319 (2008), no. 4, 1822.
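(My addition, for illustration.) For small groups everything above is easy to confirm by brute force; for $S_3$ the following Python sketch reproduces both the 18-out-of-36 count from the question and the formula $P(G)=k/|G|$:

```python
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p(q(i)), permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

commuting = sum(1 for x in S3 for y in S3 if compose(x, y) == compose(y, x))
print(commuting, "of", len(S3)**2, "pairs commute ->", commuting / len(S3)**2)   # 18 of 36 -> 0.5

# P(G) = (number of conjugacy classes) / |G|
classes = {frozenset(compose(compose(g, x), inverse(g)) for g in S3) for x in S3}
print(len(classes) / len(S3))      # 3 / 6 = 0.5
```

Both give $P(S_3)=1/2$, consistent with Gustafson's bound of $5/8$.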
|
{
"source": [
"https://mathoverflow.net/questions/125501",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
125,695 |
Let $m$ be an integer and $q$ be an odd prime factor of $m^2 + 1$. Is there an obvious reason that $\left(\frac{2m}{q}\right)$ always equals 1? From some numerics, this seems to be the case. The last time I got stuck on something like this, it ended up just being because $-1 \equiv m^2 \pmod{m^2 + 1}$, so I'm wondering if there's something easy I'm missing. This would be useful for an explicit 2-descent I'm doing for prime twists of elliptic curves defined in terms of $m$.
|
Hi, $q|(m^2+1)$ means $m^2+1\equiv 0$ (mod $q$), and so $(m+1)^2=m^2+2m+1\equiv 2m$ (mod $q$). In other words, $2m$ is the same as $(m+1)^2$ modulo $q$, so it is a square mod $q$.
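(Added, just as a numerical sanity check.) Euler's criterion makes this a short verification in Python:

```python
def odd_prime_factors(n):
    """Odd prime factors of n by trial division."""
    fs, d = set(), 3
    while n % 2 == 0:
        n //= 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 2
    if n > 1:
        fs.add(n)
    return fs

for m in range(1, 200):
    for q in odd_prime_factors(m * m + 1):
        # Euler's criterion: 2m is a square mod q iff (2m)^((q-1)/2) == 1 (mod q)
        assert pow(2 * m, (q - 1) // 2, q) == 1
print("checked m = 1..199")
```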
|
{
"source": [
"https://mathoverflow.net/questions/125695",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32344/"
]
}
|
125,817 |
I have a soft question that is interesting to me in some respects; I would appreciate your answers and comments about it. Four years ago, one of my friends at MIT, in a biology lab, was working on neuroscience, and in particular on the déjà vu phenomenon. When he asked me about writing a Matlab program to simulate this phenomenon with a network of cells that they wanted to simulate the sinc function, I found that there are many good theorems in graph theory that could be useful for his research. When I suggested this idea to him, he found it very interesting. My questions are about this kind of situation, in a slightly different way. Is it possible to publish a paper in a mathematical journal in which: 1) The only new thing in the paper is a relation between a real phenomenon and a field of mathematics that is well known. For example, we just model the phenomenon with the bandwidth problem and nothing more, only using the theorems that have been proved for the bandwidth problem. 2) The paper does not contain new theorems of the kind that are common in mathematical papers; it just uses existing mathematical theorems in its direction. Also, are there mathematical journals that publish such papers? And if so, is there some evidence of this type of publication? Maybe someone will think of Hilbert spaces and quantum mechanics. But, in my view, this is not the same situation: we use Hilbert spaces to model some aspects of quantum mechanics and we get new results and theorems in quantum mechanics, whereas the kind of paper I have in mind would only contain the modeling of quantum mechanics by Hilbert spaces and nothing more. Briefly, suppose we found a connection between a real phenomenon and a field of mathematics that is acceptable or gives a new viewpoint for analyzing the phenomenon. For example, if we found a relation between Darwin's evolutionary theory and a game on a graph, could we publish such a result as a paper in a mathematical journal? And what kind of mathematical journal would be suitable for this work? Sorry for the long question.
|
Oh yes! Establishing a connection between some class of natural phenomena and a well known
field of mathematics can make you famous. And you do not have to prove new theorems.
The most striking recent example is Benoit Mandelbrot.
According to Google Scholar, he is THE MOST cited mathematician of all (at the time I write this).
And all his activity was exactly as you describe.
Even before fractals, he was looking for new connections between "well known" fields
of mathematics and real world. For example he was looking for "stable probability
distributions" everywhere, "power laws" etc.
But his greatest success was "fractals". The relevant mathematics was known for about
50 years. Well known to a very narrow circle of specialists, as it happens to most
areas of pure mathematics. He invented a catchy word "fractal" and then showed by examples
that "fractals are everywhere". I don't know a single new theorem that Mandelbrot proved.
But his influence on mathematics and science was really enormous. On a smaller scale we have Kramers-Kronig relations .
Which is nothing else but the "well known" Spkhotski-Plemelj formula.
It is not important that Kramers and Kronig discovered these known relations independently.
What is important is that they proposed a physical interpretation. And there are
thousands of examples like this. You can even receive a Nobel prize by establishing
a new relation to the real world of some well known mathematics.
|
{
"source": [
"https://mathoverflow.net/questions/125817",
"https://mathoverflow.net",
"https://mathoverflow.net/users/19885/"
]
}
|
126,106 |
Let $R$ be a finitely generated ring with identity, $M_n(R)$ the set of $n\times n$ matrices. Are there any nontrivial ring homomorphisms $M_{n+1}(R)\rightarrow M_n(R)$? This should be an elementary question in abstract algebra. But even if $R$ is a field, I couldn't get a quick (negative) proof. Any comments are welcome. RMK: If we view the natural map $M_{n}(R)\rightarrow M_{n+1}(R)$ as a ring homomorphism, we will not require that a ring homomorphism preserves identities.
|
According to the Amitsur-Levitzki theorem, $n \times n$ matrices over a commutative ring satisfy a polynomial identity of degree $2n$ and none of smaller degree. So there can be no injective ring homomorphism $M_{n+1}(R) \to M_n(R)$, which at least rules out the case when $R$ is a field.
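An illustrative addition (not part of the original answer): the standard identity $s_4(A_1,A_2,A_3,A_4)=\sum_{\sigma\in S_4}\operatorname{sgn}(\sigma)\,A_{\sigma(1)}A_{\sigma(2)}A_{\sigma(3)}A_{\sigma(4)}$ can be tested numerically. It vanishes identically on $2\times 2$ matrices, while for random $3\times 3$ matrices it is generically nonzero, which is why $M_3(R)$ cannot embed in $M_2(R)$. A Python sketch:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation, computed by sorting with transpositions."""
    s, perm = 1, list(perm)
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            s = -s
    return s

def standard_polynomial(mats):
    """s_k(A_1, ..., A_k) = sum over S_k of sgn(sigma) * A_sigma(1) ... A_sigma(k)."""
    k = len(mats)
    n = mats[0].shape[0]
    total = np.zeros((n, n), dtype=np.int64)
    for perm in permutations(range(k)):
        prod = np.eye(n, dtype=np.int64)
        for i in perm:
            prod = prod @ mats[i]
        total += sign(perm) * prod
    return total

rng = np.random.default_rng(0)
A2 = [rng.integers(-5, 6, (2, 2)) for _ in range(4)]
A3 = [rng.integers(-5, 6, (3, 3)) for _ in range(4)]

print(standard_polynomial(A2))   # always the zero matrix (Amitsur-Levitzki for n = 2)
print(standard_polynomial(A3))   # generically nonzero: 3x3 matrices do not satisfy s_4
```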
|
{
"source": [
"https://mathoverflow.net/questions/126106",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1546/"
]
}
|
126,158 |
For ordered fields, we have a “mother of all ordered fields”, the surreal numbers $\mathbf{No}$, a proper-class “field” which includes (an isomorphic copy of) every other ordered field as a subfield. So, I wondered: does a similar mother object exist for other kinds of object, like groups? That is, is there some “meta-group” that's a proper-class “group” of which every other group is a subgroup? If so, how could it be constructed? My attempt at a construction was this. It is based on Cayley's theorem, that every group is isomorphic to some group of permutations. In particular, the symmetric groups on $n$ letters, $S_n$, can be considered as “mother groups” for all finite groups of order $n$ or less: every such group is embedded isomorphically as a subgroup of $S_n$. It is also possible to see how this holds for infinite groups as well: there are groups $\mathrm{Per}_{\kappa}$, for infinite cardinals $\kappa$, of all permutations (bijective functions) of a set of cardinality $\kappa$ that act as mother groups for all groups of cardinality $\kappa$ or less. So what I imagined was the following idea. We start with a class of sets such that they are ordered by inclusion and every cardinality is represented exactly once. We here use the initial ordinals of cardinals, interpreted as sets in the usual way (we assume the axiom of choice here). Then we consider building up on each initial ordinal $I_{\alpha}$ (which is indexed with ordinals $\alpha$ such that as we run up from 0 through to $\omega$, but not reaching $\omega$, $I_{\alpha}$ is the finite ordinal $\alpha$; then $I_{\omega}$ is $\omega$, the initial ordinal of cardinal $\aleph_0$; then $I_{\omega+1}$ is $\omega_1$, the initial ordinal of cardinal $\aleph_1$; etc.) its corresponding group of permutations, $\mathrm{Sym}(I_\alpha)$. We now imagine the class-sized mega-union of all these $\mathrm{Sym}(I_\alpha)$, i.e. $$\mathbf{SYM} = \bigcup_{\alpha \in \mathbf{On}} \mathrm{Sym}(I_\alpha).$$ Next, we proceed to define a composition of permutations in $\mathbf{SYM}$. Let $p$ and $q$ be two such permutations. If $\mathrm{dom}(p) = \mathrm{dom}(q)$, then their composition $p \diamond q = p \circ q$ – the usual composition. However, if $\mathrm{dom}(p) \ne \mathrm{dom}(q)$, then we have to first extend one or the other permutation. Define, for $\mathrm{dom}(p) < \mathrm{dom}(q)$, $$p^q: \mathrm{dom}(q) \to \mathrm{dom}(q)$$ $$p^q(a) = \begin{cases}p(a),&\ \mathrm{if}\ a \in \mathrm{dom}(p)\\
a,&\ \mathrm{otherwise.}\end{cases}$$ Then, if $\mathrm{dom}(p) < \mathrm{dom}(q)$, $p \diamond q = p^q \circ q$, and if $\mathrm{dom}(q) < \mathrm{dom}(p)$, $p \diamond q = p \circ q^p$. We then define an equivalence relation $\sim$ on permutations in $\mathbf{SYM}$ such that two are equivalent if one can be extended to the other in the fashion above. A composition of equivalence classes can then be defined by taking two representative permutations and composing them. Of course, this runs into a difficulty since we cannot collect these equivalence classes together, as they themselves are proper classes. But we can take the representative defined on the lowest possible $I_\alpha$. Denote this lowest representative of a permutation $p$ by $\mathrm{low}(p)$. Now we should have a “mother of all groups”, given by the class of all $\mathrm{low}(p)$ for every $p \in \mathbf{SYM}$, with composition defined by taking the low of the composition as defined before. My questions are: does the above construction make any sense? If not, where's the flaw? If so, is there a way to get around the obvious use of the axiom of choice that I mentioned? One thing I notice about rejecting choice is that then the cardinals are not necessarily totally ordered any more, so however we'd go about choosing representatives, we'd run into the problem that we could not “nest” them together and so could not extend the permutations so as to be able to perform the composition. Does this mean the existence of the mother group depends on the axiom of choice? Also, what about “mother objects” of other types? I suspect that in a manner analogous to the above, we can construct a “mother of all rings” via the corresponding analogue of Cayley's theorem for rings (rings are isomorphic to rings of endomorphisms of abelian groups). So this makes me wonder: what kind of conditions are required for some kind of structure to have a “mother structure” of proper-class size? Does every kind of structure have one, or just some? Another thing I notice here is that this “mother group” seems not to include all proper-class “super” groups, only “normal”, i.e. set-sized, groups, whereas, I think, $\mathbf{No}$ includes all proper-class “super” ordered fields as well. Which makes me wonder: what kind of criteria are needed to ensure the existence of a “true mother” proper-class version of a structure that includes all other such structures, including other proper-class ones, as subsets/classes? What do ordered fields have that groups don't, and what else besides ordered fields share this property?
|
The surreal numbers exhibit much stronger universal properties
than you have
mentioned, for they also exhibit very strong homogeneity and saturation properties. For example, every automorphism of a set-sized elementary substructure of
the surreals extends to an automorphism of the entire surreal
numbers, and every set-sized type over the surreals that is consistent with the
theory of the surreal numbers is realized in the surreal numbers. That is, any first-order property that could be true about an object as it relates to some surreal numbers, which is consistent with the theory of the surreal numbers, is already true about some surreal number. For any complete first-order theory $T$, one can consider the
concept of a monster model of the theory. This is a model $\mathcal{M}$ of $T$, such that
first, every other set-sized model of $T$ embeds as an elementary
substructure of $\mathcal{M}$--not merely as a substructure, but
as a substructure in which the truth of any first-order statement
has the same truth value in the substructure as in the big model--and
second, such that every automorphism of a set-sized substructure
of $\mathcal{M}$ extends to an automorphism of $\mathcal{M}$. One may also use approximations to these proper class monster
models by considering extremely large set-sized models, of some
size $\kappa$, such that the embedding and homogeneity properties
hold with respect to substructures of size less than $\kappa$. Every consistent complete first order theory $T$ has such monster
models, and they are used pervasively in model theory. Model
theorists find it convenient, when considering types over and
extensions of a fixed model, to work inside a fixed monster model,
considering only extensions that arise as submodels of the fixed
monster model. Your topic also has a strong affinity with the concept of the Fraïssé limit of a collection
of finitely-generated (or $\kappa$-generated) structures, which one aims to realize as the age of the limit structure, i.e. the collection of
finitely-generated ($\kappa$-generated) substructures of the limit
structure. Fraïssé limits are often built to exhibit the same
saturation and homogeneity properties of the surreal numbers. As a
linear order, the surreal numbers are the Fraïssé limit of the
collection of all set-sized linear orders. And I believe that one
can also put the field structure in here. Update. When one has the global axiom of choice, the set-homogeneous property of the generalized Fraïssé limit allows one to establish universality for proper class structures. Basically, using global AC one realizes a given proper class structure as a union of a tower of set structures, and gradually maps them into the homogeneous structure. The homogeneity property is exactly what you need to keep extending the embedding, and so one gets the whole proper class structure mapping in. This kind of argument, I believe, shows that what you want to consider is homogeneity rather than merely the universal property itself. (This argument is an analogue of the idea that when one has homogeneity for countable substructures, then one gets universality for structures of size $\aleph_1$.) Regarding your specific construction, here is a simpler way to undertake the same idea, which avoids the need for the equivalence relation: Let $G$ be the proper class of all fixed-point-free
permutations of a set. This class supports a natural group
operation, which is to compose them, regarding elements outside
the domain as fixed by the permutation, and then cast out any newly-created fixed points. The identity element of $G$ is
the empty function, which is really a stand-in for the identity
function on the universal class. It is clear that every group finds an isomorphic copy inside $G$,
without using the axiom of choice, since every group is naturally
isomorphic to a group of permutations, and these are naturally
embedded into $G$, simply by casting out fixed-points. This does not use the axiom of choice. The class group $G$ is a natural presentation of the set-support symmetric
group $\text{Sym}_{\text{set}}(V)$ of the set-theoretic universe
$V$, the class of all permutations of $V$ having set support. Any such permutation of $V$ is represented in $G$ by restricting to the non-fixed-points. Note finally that in the case of the surreal numbers, one doesn't need the axiom of
choice in order to construct the surreal numbers, and one can get
many universal properties for well-orderable set structures and in
general from the axiom of choice for all set-sized structures. But
to get the universal property that you mention for class-sized structures, mere AC
is not enough, for one needs the global axiom of choice, which is
the assertion that there is a proper class well-ordering of the
universe. This does not follow from AC, for there are models of
Gödel-Bernays set theory GB that have AC, but not global
choice. But global AC is sufficient to carry out the embedding of any class linear order into No. A similar phenomenon arises with many other class-sized class-homogeneous set-saturated models, which require global AC to get the universality for class structures.
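The "extend by the identity, compose, and cast out fixed points" operation is easy to prototype for finite supports. The following is my own toy sketch (Python, with a permutation stored as a dict on its support), not part of the answer; it is only meant to make the group operation on fixed-point-free permutations concrete.

```python
# Finite-support toy model of the class group G of fixed-point-free
# permutations: compose two partial permutations, treating points outside a
# domain as fixed, then discard any newly created fixed points.
def compose(p, q):
    """Return p o q (apply q first), as a fixed-point-free dict."""
    result = {}
    for a in set(p) | set(q):
        b = q.get(a, a)          # q fixes points outside its support
        c = p.get(b, b)          # then apply p
        if c != a:               # cast out fixed points of the composite
            result[a] = c
    return result

p = {0: 1, 1: 0}                 # the transposition (0 1)
q = {1: 2, 2: 1}                 # the transposition (1 2)
print(compose(p, q))             # {0: 1, 1: 2, 2: 0}, a 3-cycle
print(compose(p, p))             # {}  -- the identity element of G
```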
|
{
"source": [
"https://mathoverflow.net/questions/126158",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11576/"
]
}
|
126,395 |
Who invented projective space $\mathbb{P}^n$ as an extension of the usual affine space $\mathbb{A}^n$? Who was the first person to consider projective closure of plane affine algebraic curves (curves in $\mathbb{A}^2$)? Was it the same person?
|
The idea of projective space goes back to the study of perspective in painting. The first known formalization is due to G. Desargues, in the book Brouillon Projet d'une atteinte aux événements des rencontres du Cône avec un Plan (Rough draft for an essay on the results of taking plane sections of a cone), published in 1639. There a geometry of incidence without parallel lines was developed. The book was very dense and difficult to read, and until the nineteenth century the topic did not develop in full. Monge and Gergonne redeveloped it, Möbius introduced homogeneous coordinates, and Plücker also worked on these early developments. Steiner gave the first axiomatic (or synthetic) treatment. From there on it played a central role, especially in the study of sets of solutions of polynomial equations. Today it is one of the fundamental structures of modern algebraic geometry. But projective-space considerations are also present, more or less implicitly, in topology, differential geometry, certain kinds of differential equations, and some descriptions of particle behavior in quantum mechanics.
|
{
"source": [
"https://mathoverflow.net/questions/126395",
"https://mathoverflow.net",
"https://mathoverflow.net/users/22481/"
]
}
|
126,474 |
The MSRI is organising a programme with the above title from Aug 11, 2014 to Dec 12, 2014. Here is a short description from their website : The branches of number theory most
directly related to the arithmetic of
automorphic forms have seen much
recent progress, with the resolution
of many longstanding conjectures.
These breakthroughs have largely been
achieved by the discovery of new
geometric techniques and insights. The
goal of this program is to highlight
new geometric structures and new
questions of a geometric nature which
seem most crucial for further
development. In particular, the
program will emphasize geometric
questions arising in the study of
Shimura varieties, the $p$-adic
Langlands program, and periods of
automorphic forms. Question Which new geometric structures, techniques and insights have been crucial for this recent progress ?
|
Knowing the organizers well and working in the field, I can try an answer, but this is nothing more than an educated guess. First, the breakthroughs in question include (i) The construction and study of Galois representations attached to self-dual cohomological automorphic forms for $Gl_n$ (satisfying local-global compatibility, etc.). This is the work of many people, based on the fundamental work of Arthur and Ngo, including Shin, Morel, Harris, Clozel, Labesse, and many, many others. This can be considered done, even if the four-volume Paris book edited by Harris that should contain every detail is not completely ready. (ii) The construction and study of Galois representations attached to not necessarily self-dual cohomological automorphic forms for $Gl_n$, announced last year by Lan, Harris, Taylor and Thorne (the preprint has yet to be released). (iii) The proof by Kisin and also by Emerton of large parts of the Fontaine-Mazur conjecture for $Gl_2$. (iv) The proof of Sato-Tate by many people with various multiplicities, the two highest being Taylor and Harris. (v) The progress on the p-adic Langlands program, especially on the Breuil-Mezard conjecture. (vi) The progress on the theory of Shimura varieties, including the proof of two major conjectures of the subject by Kisin (one has an older, not universally accepted, proof by Vas):
the conjectures of Milne and of Langlands-Rapoport. At first I thought I should include (0) the proof of the fundamental lemma by Ngo, but since none of the organizers is a specialist of this area, I am not sure. Now why the emphasis on the "geometric methods", and what are those?
Well, there is a meme saying that, along the traditional tripartite division of mathematicians into "algebraists", "analysts", and "geometers" (see e.g. Récoltes et Semailles), while people like Breuil (or Fontaine) are more on the algebraic side, and perhaps Colmez on the analytic side, people like Kisin and Emerton are really on the geometric side, and that their geometric intuition played a crucial role in their recent successes. Whatever you think of this meme (or even of the tripartite classification), it is quite possible that it made its way to the minds of one or more of the organizers.
The geometric insights and methods include (a) the use of "eigenvarieties": families of automorphic or/and Galois representations
that have a geometric structure, and whose geometric properties, local and global
illuminate the properties of the individual objects that compose them. For example,
this plays a crucial role in Emerton's proof of the Fontaine-Mazur conjecture (iii),
in constructing Galois representations by "passage to the limit" (i) and (ii), and also
in recent progress toward the Bloch-Kato and Birch-Swinnerton-Dyer conjectures (work of Chenevier and myself, Urban and Skinner), and also in the work on the Breuil-Mezard conjecture (Kisin first, then others). (b) The better understanding of certain Shimura varieties, in particular the ones attached to unitary groups, in particular in connection with Rapoport-Zink spaces, etc. For example one can cite the thesis work of Mantovan, which is used in Shin's subsequent work on (i). Also whatever Kisin uses to prove the conjectures about Shimura varieties (at this point I don't know what it is, but I am organizing a seminar at Yale to learn this eventually). (c) Also, the better understanding and use of the boundary components of non-compact Shimura varieties, including in cases (this is mainly speculative so far) where these components have only the structure of a differentiable manifold, not of an algebraic variety.
I am not sure, but ideas like that play a role in (iv). (d) The study of cycles on Shimura varieties, in particular in connection with periods and p-adic L-functions (how to define them in higher rank? That is very hard and important). (e) If (0) is included (which, as I have said, I am not sure of), the geometric methods of
Ngo (and before him Laumon, Goresky, MacPherson: balloons, Hitchin fibrations, etc.) used in proving the fundamental lemma, and perhaps also the ones of Laurent Lafforgue. But I think this might be the subject of another conference. I hope that helps... Sorry to anyone I forgot to mention; my list of people having a part in the recent breakthroughs is far from complete.
|
{
"source": [
"https://mathoverflow.net/questions/126474",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2821/"
]
}
|
126,513 |
I have a couple of conjectures on recursive functions that I feel must have been proved or refuted by someone else, but I don't know where to look. In short: 1. The primitive recursive functions form a pseudoinitial small finite-product category with natural number object. 2. The partial recursive functions form a pseudoinitial small regular category with natural number object. The longer version is as follows. In Cartesian categories that are not closed, the natural number object should be defined in a more stable way: $N$ together with $0:1\to N$ and $s:N\to N$ is a natural number object if for each $f:X\to Y$ and $g:Y\to Y$ there is an $h:N\times X \to Y$ such that $h(0,x) = f(x)$ and $h(n+1,x) = g(h(n,x))$. This roughly means that the projection $N\times X\to X$ is a natural number object in the slice over $X$ for every object $X$. A 0-cell in a 2-category is pseudoinitial if there is a 1-cell, unique up to isomorphism, to every other 0-cell.
The first conjecture in full is as follows: the 2-category of small finite-product categories with a chosen natural number object, finite-product-preserving functors which preserve the choice of natural number object, and all natural transformations, has a pseudoinitial 0-cell. One of these pseudoinitial 0-cells is the category whose objects are the powers of $\mathbb N$ and whose morphisms are the primitive recursive functions.
The second conjecture says that there is a pseudoinitial 0-cell in the 2-category of small regular categories with NNO. This time the category of recursively enumerable sets and recursive functions is such a pseudoinitial 0-cell. The conjectures fail if non-standard models of arithmetic can exclude primitive/partial recursive functions that exist in the standard model: categories of non-standard recursive functions could be counterexamples. Have you seen anything like this before? If so, am I right?
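As a purely set-theoretic illustration of the recursion scheme in the definition above (my own sketch, not part of the conjectures; Python stands in for the category of sets, and the uniqueness clause is of course not visible in code): given $f:X\to Y$ and $g:Y\to Y$, the natural number object produces $h:N\times X\to Y$ with $h(0,x)=f(x)$ and $h(n+1,x)=g(h(n,x))$, and iterating this recursor is how the primitive recursive functions are generated.

```python
# The recursor guaranteed by the (parametrized) natural number object:
# from f : X -> Y and g : Y -> Y build h : N x X -> Y with
#   h(0, x) = f(x)   and   h(n+1, x) = g(h(n, x)).
def nno_rec(f, g):
    def h(n, x):
        y = f(x)
        for _ in range(n):
            y = g(y)
        return y
    return h

add = nno_rec(lambda x: x, lambda y: y + 1)    # h(n, x) = x + n
pow2 = nno_rec(lambda x: 1, lambda y: 2 * y)   # h(n, x) = 2^n (parameter ignored)

print(add(3, 4), pow2(10, None))               # 7 1024
```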
|
The relevant piece of categorical folklore here is the notion of arithmetic universe . This was studied by André Joyal in 1973 with the goal of proving Gödel's Incompleteness Theorems in a categorical fashion. However, André never published anything and many people have tried without success to obtain any notes from him. I went to his office in Montréal in 1991 to get them, but just came away with a copy of Lawvere's thesis. Thanks to Todd for providing the link to Milly Maietti's recent paper, about which I did not know. Since it provides much of the technical information and bibliography, I will just give a more informal description. An arithmetic universe is essentially set theory as it is taught to computer science students , ie with finite powersets instead of general ones. So it has finite limits (products and equalisers), pullback-stable disjoint coproducts (disjoint unions), pullback-stable effective quotients of equivalence relations, pullback-stable free monoids (list objects) in each fibre. The first three of these give a pretopos , which we might think of as the "inorganic" part of finitary set theory and the last is the "organic" part, recursion. This is more than the questioner said, but disjoint unions and lists rather than just numbers are very useful for mathematical constructions. Another way of seeing this collection of structures is that they are the minimum that is needed to construct the free internal gadget of the same (or pretty much any useful) kind. I digress here for a little terminological rant. Milly describes the correspondence between the categorical structure and type theory. She, following widespread custom, calls the latter the internal language , but I claim that this name is wrong and would prefer proper language . A language is a structure capable of mathematical abstraction and internalisation just as much as a group is. For the application to Gödel's theorem, one would like to internalise the equivalence between diagrams and symbols that Milly describes, and so talk about internal categories and internal languages. But the language that she sets out is an external one. Anyway, one can construct the free arithmetic universe (I really think that introducing 2-categorical notions of initiality here makes things unnecessarily more difficult to understand) inside any arithmetic universe, and in particular in the free one. Having done that, in the outer category one can construct the equaliser
$$ G \rightarrowtail {\bf 1} \rightrightarrows H \equiv \mathrm{Hom}(1,2), $$
where $H$ is a certain hom-object of the internal category. The statement of consistency is $G\cong{\bf 0}$ and André's version of Gödel's theorem is that this does not hold. There have been moments in the quarter century since I first heard about this when I thought that I understood why, but now is not one of those moments. Now that other people, especially Milly, have done the donkey-work to build the infrastructure for this theorem, I wish André would write up the proof of the punch-line. Milly and I both wrote papers about arithmetic universes for CTCS in Copenhagen in 2004, but I did not present mine because I broke my leg. They nevertheless both appear in the ENTCS proceedings. If you read the footnotes you will see that she and I disagreed about the definition, so I would like to explain that. I just asked for free monoids, whereas she said that they have to be stable under pullback. This was because I thought that stability could be proved from the other structure. I had overlooked something, but later Milly had an idea for filling in the gap. We studied this idea together in Padova in 2009, but it was not correct. I had other ideas but have not thought about the subject since then. I do not know whether she subsequently found a correct proof.
|
{
"source": [
"https://mathoverflow.net/questions/126513",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3603/"
]
}
|
126,519 |
Every mathematician knows what "simplify" means, at least intuitively. Otherwise, he or she wouldn't have made it through high school algebra, where one learns to "simplify" expressions like $x(y+x)+x^2(y+1+x)+3(x+3)$. But is there an accepted rigorous "mathematical" definition of "simplify" not just for algebraic expressions but for general expressions, which could involve anything, like transcendental functions or recursive functions? If not, then why? I would think that computer algebra uses this idea.
|
In full generality, there provably isn't any method for complete simplification (i.e., bringing an expression into a canonical simplest form). Simplifying should have two key properties: it should be algorithmic, and simplifying two different expressions for the same thing should give the same simplified form. If you have a simplification method with these properties, then it gives an algorithm for deciding whether two expressions are equivalent. However, Richardson proved that there is no algorithm to decide whether two closed-form expressions define the same function. (Of course you have to specify what you consider "closed-form". See D. Richardson, Some Undecidable Problems Involving Elementary Functions of a Real Variable , Journal of Symbolic Logic 33 (1968), 514-520, http://www.jstor.org/stable/2271358 .) Of course simplifying becomes easy if you give up on these properties. If you don't care about algorithms, just choose a representative for each equivalence class arbitrarily and declare it simplified. If you don't care whether equivalent expressions simplify to the same result, then just declare everything is already simplified. This argument rules out only a very general notion of simplification. It still makes sense in many important special cases, and as Joel David Hamkins observes in the comments, one could still define a notion of simplicity even if there is no full simplification method. Added in response to comments : Let's state things more precisely. Let the class $E$ of closed-form expressions contain $\log 2$, $\pi$, $e^x$, $\sin x$, and $|x|$ and be closed under addition, subtraction, multiplication, and composition of functions. These expressions all define continuous functions that are numerically computable (in the sense that one can algorithmically compute arbitrarily close approximations to their values at any given points). Call expressions $e_1$ and $e_2$ equivalent if they define the same function. Richardson proved that there is no algorithm that can test whether two expressions in $E$ are equivalent. It follows immediately that no algorithm can bring elements of $E$ into any canonical form. I.e., there is no computable function $f$ from $E$ to $E$ such that $f(e_1)=f(e_2)$ iff $e_1$ and $e_2$ are equivalent. Furthermore, one cannot even do it in the gradual sense described in the comments: there is no computable function $f$ from $\mathbb{N} \times E$ to $E$ with the following property: $f(n,e)$ is always equivalent to $e$, and if $e_1$ and $e_2$ are equivalent, then for all sufficiently large $n$ we have $f(n,e_1)=f(n,e_2)$ (of course how large $n$ needs to be may depend on $e_1$ and $e_2$). Think of $n$ as describing how hard you have tried to simplify your input, with the idea being that you eventually reach the canonical simplest form when $n$ is large enough, but you won't know when you've reached it (so you'll always be left wondering whether increasing $n$ would lead to further simplifications). This observation requires a different proof, but it is not difficult. If such an $f$ existed, you could computably enumerate all the equivalent pairs $(e_1,e_2)$: to do so, loop through all triples $(e_1,e_2,n) \in E \times E \times \mathbb{N}$ and output $(e_1,e_2)$ whenever $f(n,e_1)=f(n,e_2)$. 
However, it is easy to computably enumerate the inequivalent pairs: loop through all expressions $e_1$ and $e_2$, rational numbers $x$, and natural numbers $k$, and output $(e_1,e_2)$ if numerically computing the corresponding functions at $x$ to within error less than $1/k$ shows that these functions differ at $x$. All inequivalent pairs will occur in this list, so if we could separately enumerate all the equivalent pairs (using the magic simplification function $f$), then we could solve the equivalence problem by seeing which list $(e_1,e_2)$ turned up in. That would contradict Richardson's theorem, and consequently $f$ does not exist. What makes this tricky is that it's tempting to think the equivalent pairs should be computably enumerable. Can't you write down a list of all the expressions equivalent to $e$ by manipulating $e$ in all possible ways? Richardson's theorem implies that you cannot (for example, high school algebra manipulations are insufficient to get all equivalences, so high school classes give entirely the wrong impression). Proving two functions are different is easy, but proving two functions are the same is not, and there is no systematic way to do it.
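As a small side illustration of the asymmetry in the last paragraph (my own sketch; it assumes the mpmath interval-arithmetic module, and the sample expressions, the evaluation point and the precision cut-off are arbitrary choices of mine): interval evaluation at a rational point can certify that two expressions are different, but it can never certify that they are equal.

```python
# Semi-deciding inequivalence: evaluate both expressions at a rational point
# with interval arithmetic at increasing precision; disjoint intervals prove
# the functions differ there.  For genuinely equal expressions the intervals
# always overlap, so the loop gives up -- equality is never certified.
from mpmath import iv

def certify_different(f, g, x, max_prec=200):
    prec = 20
    while prec <= max_prec:
        iv.prec = prec
        fa, ga = f(iv.mpf(x)), g(iv.mpf(x))
        if fa.b < ga.a or ga.b < fa.a:   # rigorously separated enclosures
            return True
        prec *= 2
    return False                          # inconclusive at this precision

f = lambda t: iv.exp(t) * iv.exp(-t)      # identically 1
g = lambda t: iv.mpf(1)
h = lambda t: iv.exp(t) - t               # not identically 1

print(certify_different(f, g, '0.7'))     # False: equality never certified
print(certify_different(h, g, '0.7'))     # True: provably different at 0.7
```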
|
{
"source": [
"https://mathoverflow.net/questions/126519",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7089/"
]
}
|
126,829 |
Clarification: My question concerns the homotopy type of the space of $C^k$ diffeomorphisms with the compact-open $C^k$ topology, where $0< k \leq\infty$. I have stated my question below with $k=1$ for definiteness and simplicity. I am not particularly interested in any specific value of $k$.$\newcommand{\Diff}{\operatorname{Diff}}$$\newcommand{\RR}{\mathbb{R}}$ It is fairly well-known that the space $\Diff(M)$ of $C^1$ diffeomorphisms of a closed smooth manifold $M$ is homotopy equivalent to a CW-complex. Here are the relevant facts: $\Diff(M)$ is a Banach manifold modelled locally on the space of $C^1$ vector fields on $M$. Metrizable Banach manifolds have the homotopy type of CW-complexes, as shown by Palais. [ Remark : When $M$ is closed, the spaces of $C^k$ diffeomorphisms of $M$ (for $0 < k \leq\infty$) are all homotopy equivalent to each other via the natural inclusions. This can be shown by embedding $M$ smoothly in $\RR^N$, and then using smoothing operators defined by taking convolution with a mollifier.] When we allow $M$ to not be compact, there are several common topologies on $\Diff(M)$. I am interested in the compact-open (or weak ) $C^1$ topology. Questions: Is it known whether the space $\Diff(M)$ of $C^1$ diffeomorphisms with the compact-open $C^1$-topology is homotopy equivalent to a CW-complex when $M$ is a smooth manifold without boundary? Is there a known counter-example? Are there particular cases where the answer is known, for example if $M$ is the interior of a compact manifold? Feel free to use instead $C^k$ diffeomorphisms and/or the compact-open $C^k$ topology for any $0 < k \leq\infty$. I would also be interested in hearing about any known results related to this question: e.g. for spaces of embeddings of manifolds in the compact-open/weak topology when the source is not the interior of a compact manifold. Edit: Allen Hatcher gave a very nice answer to my question. Afterwards, I also posted an answer very similar to Allen's, which I was writing when Allen posted his. A pertinent question still remains: (1) Does the result hold for the interior of a compact manifold? Here is a perhaps less pertinent question: (2) For $M$ without boundary, do the path components of $\Diff(M)$ have the homotopy type of a CW-complex?
|
Here is an example where ${\rm Diff}(M)$ with the compact-open topology is not homotopy equivalent to a CW complex. Take $M$ to be a surface of infinite genus, say the simplest one with just one noncompact end. I will describe an infinite sequence of diffeomorphisms $f_n:M\to M$ converging to the identity in the compact-open topology and all lying in different path-components of ${\rm Diff}(M)$. Assuming this, suppose $\phi:{\rm Diff}(M) \to X$ is a homotopy equivalence with $X$ a CW complex. The infinite sequence $f_n$ together with its limit forms a compact set in ${\rm Diff}(M)$, so its image under $\phi$ would be compact and hence would lie in a finite subcomplex of $X$, meeting only finitely many components of $X$. Thus $\phi$ would not induce a bijection on path-components, a contradiction. To construct $f_n$, start with an infinite sequence of disjoint simple closed curves $c_n$ in $M$ marching out to infinity, and let $f_n$ be a Dehn twist along $c_n$. The $f_n$'s converge to the identity in the compact-open topology since the $c_n$'s approach infinity. We can choose the $c_n$'s so that they represent distinct elements in a basis for $H_1(M)$ and then the $f_n$'s will induce distinct automorphisms of $H_1(M)$. If two different $f_n$'s were in the same path-component of ${\rm Diff}(M)$ they would have to induce the same automorphism of $H_1(M)$ since any path joining them would restrict to an isotopy of any simple closed curve in $M$ (see the next paragraph below) and a basis for $H_1(M)$ is represented by simple closed curves. If $g_t$ is a path in ${\rm Diff}(M)$ then the images $g_t(c)$ of any simple closed curve $c$ vary by isotopy since this is true as $t$ varies over a small neighborhood of a given $t_0$, so since the $t$-interval $[0,1]$ is compact, a finite number of these neighborhoods cover $I$ and the claim follows. Remark: The $f_n$'s were chosen to be Dehn twists just for convenience. Many other choices of diffeomorphisms would work just as well. One can easily see how to generalize to higher dimensions.
|
{
"source": [
"https://mathoverflow.net/questions/126829",
"https://mathoverflow.net",
"https://mathoverflow.net/users/21095/"
]
}
|
126,881 |
In Katz's article p-adic properties of modular schemes and modular forms in the Antwerp proceedings, the following definition of an elliptic curve over a base scheme $S$ is given: By an elliptic curve over a scheme $S$, we mean a proper smooth morphism $p: E \to S$, whose geometric fibres are connected curves of genus one, together with a section $e : S \to E$. Now this is a quite reasonable definition, which coincides with the usual notion of an elliptic curve when $S$ is the spectrum of a field. However, it does not seem (to me) to follow directly from the definition that such an elliptic curve over $S$ should be a group scheme over $S$ (an obviously desirable property which Katz seems to take as an obvious fact). When $S= \text{Spec }k$, I understand that this essentially follows from Riemann-Roch... So, what principle allows one to come to this conclusion in the general case? Thank you!
|
The argument that allows you to show that an elliptic curve defined as you say is a group scheme, and even a commutative one, is the construction of a functorial and natural isomorphism $E(T) \rightarrow Pic_{E/S}^0(T)$ for every $S$-scheme $T$. This allows one to see the functor $T \mapsto E(T)$ as a group-valued functor, and since this functor is representable by $E$, this gives a structure of group scheme on $E$, which is the one you are looking for. Essentially this map is defined as follows: one attaches to a point in $E(T)$, that is, a $T$-section of $E_T$, the invertible sheaf of the divisor given by this section minus the trivial section $e_T$ (obtained by base change from the section $e$ which is part of the definition). To prove that this map is an isomorphism, one essentially reduces, using base change theorems for direct images in coherent cohomology, to the case of a field, where it becomes a consequence of Riemann-Roch.
Note that by definition the trivial section $e_T$ is sent to the trivial sheaf, so that $e$ is the neutral section of the group scheme structure on $E$, as desired. Of course there are many details to deal with to make this argument a complete one, but this is done with great care in the beginning of the (unique) book by Katz and Mazur.
|
{
"source": [
"https://mathoverflow.net/questions/126881",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6779/"
]
}
|
126,899 |
I am having a hard time finding an upper bound, in terms of the degree and the dimension, for the Milnor number of an isolated hypersurface singularity. I am mostly interested in surfaces in projective space. Can someone please give me a hint on this? Thanks!
|
|
{
"source": [
"https://mathoverflow.net/questions/126899",
"https://mathoverflow.net",
"https://mathoverflow.net/users/16409/"
]
}
|
127,157 |
$\DeclareMathOperator\GL{GL}$ Apologies if this question has already been dealt with on MO. I am wondering about the status of the global Langlands conjectures for $\GL_2$ over the rational numbers. How close is humanity to the proof of these conjectures? I guess the Langlands picture is related to (or, should I say, includes?) the Taniyama-Shimura, Fontaine-Mazur, and Serre Conjectures. There has been great progress on these latter conjectures recently. Even assuming these conjectures are settled, how much closer are we to understanding the Langlands program for $\GL_2$? It would be helpful if somebody pointed to a paper where a modern and precise version of the global Langlands conjecture for $\GL_2$ is stated. For the local Langlands conjecture, we have the nice paper of Vogan. Is there an analogous paper for the global theory?
|
This question has already been discussed here, though perhaps not exactly in these terms (but
see What makes Langlands for n=2 easier than Langlands for n>2? ). So first, there is an ambiguity about what global Langlands is in general. Langlands was interested in all cuspidal
automorphic representations for $Gl_2$ : he conjectured they should be in
natural bijection with complex two-dimensional representations of a large group called the Langlands group of $\mathbb Q$ . Problem: the Langlands group of $\mathbb Q$ is a conjectural object,
whose very existence depends on a hypothetical structure of Tannakian category on the category of all cuspidal automorphic representations for $Gl_n$ when $n$ varies. For this reason, it does not really make sense to speak of Langlands' conjecture for $Gl_2$ alone: it is really a conjecture concerning all $Gl_n$ together. Therefore, when people refer to global Langlands for $Gl_2$ they mean a more restrictive conjecture which deals with only a subclass of the set of cuspidal automorphic representations, namely the ones which are algebraic at infinity . The conjecture is then as follows: Conjecture: Fix an auxiliary prime $\ell$. There is a natural bijection between (1) the set of algebraic-at-infinity cuspidal automorphic representations for $Gl_2$ and (2) the set of 2-dimensional continuous irreducible Galois representations of $G_{\mathbb Q}$ over the algebraic closure of $\mathbb Q_\ell$ which are unramified at almost all primes and de Rham at $\ell$. Moreover, this bijection should be compatible with the local Langlands correspondence at almost every prime $p \neq \ell$ (or better, at every place). (This condition fixes the bijection uniquely.) So what is proved in this conjecture? Well, there are three types of representations $\pi_\infty$ which are algebraic, which leads to a tripartition of the set (1):
(1a) : the set of cuspidal automorphic forms for $Gl_2$ for which $\pi_\infty$ is a discrete series which is essentially the set of new cuspidal modular forms of weight $k\geq 2$ .
(1b) : the set of cuspidal automorphic forms for which $\pi_\infty$ is a limit of discrete series, which is essentially the set of new cuspidal modular forms of weight $k=1$ .
(1c) : the set of cuspidal automorphic forms for which $\pi_\infty$ is a specific principal series, which corresponds to the set of Maass forms with eigenvalue 1/4 for the Laplace-Beltrami operator (a.k.a. the Casimir). To prove the conjecture, one needs (a) to construct a map from (1) to (2) which is compatible with local Langlands, and (b) to prove that it is surjective (the injectivity will be easy by strong multiplicity 1 for $Gl_2$). Now this map is known only in the cases (1a) and (1b), not at all in the case (1c). Worse: the map was already constructed around 1970 in the cases (1a) and (1b)
by Deligne and Deligne-Serre, and we have made no progress since. So it is very hard to say
how far we are from constructing the map in general. There are no publicly voiced ideas to prove it, but maybe someone will come tomorrow and prove it.
What about surjectivity? Well, the image of (1a) and (1b) together should be (2ab),
the set of Galois rep. as above which in addition are odd . The image of (1a)
should be the set (2a) of odd representations with distinct Hodge-Tate weights,
and actually we know that there is a surjection from (1a) to (2a): this is the part of the Fontaine-Mazur conjecture proved by Kisin and Emerton. We don't know yet the surjectivity of (1b) onto (2b) (defined as (2ab) minus (2a)), which is essentially Artin's conjecture, but special cases have been done by Taylor, Buzzard, and Calegari.
|
{
"source": [
"https://mathoverflow.net/questions/127157",
"https://mathoverflow.net",
"https://mathoverflow.net/users/12168/"
]
}
|
127,322 |
Being a new member, I am not yet sure whether my question will be taken as a research level question (and thus, appropriate for MO). However, I have seen similar questions on MO, couple of which led me asking mine, and I seem to not be able to find many resources except discussion on FOM and MO. So, any references to resolve the question and fix my possible confusion would be appreciated. As the title suggests, I want to understand the relation between $ZFC \vdash \varphi$ and $ZFC \vdash\ 'ZFC \vdash \varphi'$. Let me give my motivation (and some partial answers) asking this question so that what I'm trying to arrive at is understood. We know that if $ZFC \vdash \varphi$, then $ZFC \vdash\ 'ZFC \vdash \varphi'$ for we could write down the Gödel number of the proof we have for $\varphi$ and then check that the formalized $\vdash$ relation holds. I believe even more can be checked to be true for this provability predicate ( Hilbert-Bernays provability conditions ). Is the converse true in general? Not necessarily. (Just to make sure that it will be pointed out sooner if I am doing any mistakes, I will try to write down everything unnecessarily detailed using less English and more symbols!) Let us assume only that $ZFC$ is consistent (However, I am not assuming the formal statement $Con(ZFC)$, that is $\ 'ZFC \nvdash \lceil 0=1 \rceil'$). Then, it is conceivable that $ZFC \vdash\ \ 'ZFC \vdash \lceil 0=1 \rceil'$ but $ZFC \nvdash 0=1$. It might be that in reality ZFC is consistent but $\omega$-inconsistent. Indeed, if I am not missing a point, it is consistent to have this situation: $ZFC \vdash Con(ZFC) \rightarrow Con(ZFC+\neg Con(ZFC))$ (Gödel)
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models ZFC+\neg Con(ZFC)$ (Gödel)
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models\ ZFC+\ 'ZFC \vdash \lceil 0=1 \rceil'$
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models\ ZFC+\ 'ZFC \vdash\ 'ZFC \vdash \lceil 0=1 \rceil\ '\ '$ (Soundness and the second provability condition here )
$ZFC \vdash Con(ZFC) \rightarrow Con(ZFC+\ 'ZFC \vdash \neg Con(ZFC)\ ')$ So we cannot hope to have $ZFC \vdash\ 'ZFC \vdash \varphi'$ implying $ZFC \vdash \varphi$ for an arbitrary formula without requiring an additional assumption. At least, we know this for $\varphi: 0=1$ (this is not because of the consistency argument above, but because consistency and $\omega$-inconsistency of ZFC is a possibility). If you believe that ZFC's characterization of natural numbers coincides with what we have in mind and agree that ZFC should not be $\omega$-inconsistent, then you might want to throw in the assumption $Con(ZFC)$. Now imagine a universe where $Con(ZFC)$ holds but all the models of ZFC is $\omega$-nonstandard and believe $\neg Con(ZFC)$. I do not know whether this scenario is even possible (which is another question I am wondering) but if it is possible, then it would be the case that $'ZFC \vdash \neg Con(ZFC)\ '$, by completeness since $\neg Con(ZFC)$ is true in all models. Then if the implication in title (or should I say, an informal version of it: $V \models ZFC \vdash \varphi$ implies $V \models \varphi$) held, then $\neg Con(ZFC)$ which contradicts our assumption that there are models at all. The point is arbitrary models of ZFC may not be sufficient to have existence of ZFC-proofs implying existence of actual proofs. However, if we add a stronger assumption $\psi$ that there is an $\omega$-model, then whenever we have an arithmetic sentence $\varphi$, if $ZFC \vdash\ 'ZFC \vdash \lceil \varphi \rceil'$ then $ZFC+\psi \vdash \exists M\ \omega^M=\omega \wedge M \models ZFC+\ \varphi$ and because $\omega$ in the model is the real one, by taking care of quantifiers one by one we can deduce $ZFC+\psi \vdash \varphi$. Thus, existence of an $\omega$-model solves our problem for arithmetical sentences. I cannot see any reason to make this work for arbitrary sentences without strengthening the assumption. Here is a thought: We know, by the reflection principle, that we can find some limit ordinal $\alpha$ such that $\varphi \leftrightarrow \varphi^{V_{\alpha}} \leftrightarrow V_{\alpha} \models \varphi$. Thus, if we could make sure somehow that $V_{\alpha}$ is a model of ZFC while we reflect $\varphi$, then we would be done. But I could not modify the proofs of reflection in such a way that this can be done and am not even sure that this could be done. My question to MO is to what extent (and under which assumptions) can we get the implication in the title? Edit : After reading Emil Jerabek's answer, I realized I should clarify some details. Firstly, I want to treat ZFC only as a formal system (meaning that if you are claiming some assumption $\psi$ does what I want, I want to have a description of how that proof would formally look. This is why I kept writing all the leftmost $ZFC \vdash$'s all the time). Then, it is clear by the above discussions that even if we could prove $ZFC \vdash \varphi$ within our system, we may not prove $\varphi$ without additional assumptions on our system. One solution could be that our system satisfies the "magical" property that whenever we have $ZFC \vdash \exists x \in \omega\ \varphi(x)$, say for some arithmetic sentence, then we have $ZFC \vdash \varphi(SSS...0)$ for some numeral. This of course is not available by default setting for we know that the theory $ZFC$+ $c \in \omega$ + $c \neq 0$ + $c \neq S0$ +... is consistent if $ZFC$ is consistent. Thus, that magical property seems like an unreasonably strong assumption. 
To make my question very precise, what I want is some assumption $\psi$ so that for some class of formulas, whenever I have $ZFC \vdash ZFC \vdash \varphi$, then $ZFC + \psi \vdash \varphi$. For arithmetic sentences, existence of an $\omega$-model is sufficient. I agree that $\Sigma^0_1$ soundness should be sufficient for arithmetic sentences if what you mean by $\Sigma^0_1$ soundness is having $ZFC \vdash ZFC \vdash \varphi$ require (maybe even as a derivation rule attached to our system!) $ZFC \vdash \omega \models \phi$, where $\phi$ is the translated version of $\varphi$ into the appropriate language, since I can again go through quantifier by quantifier and prove the sentence itself, that is, $ZFC \vdash \varphi$. However, I see no reason why $\Sigma^0_1$ soundness should be enough for an arbitrary sentence $\varphi$. It seems to me that what we need is some structure for which we have the reflection property that formal truth in the structure is provably equivalent to $\varphi$, with that structure being a model of all the ZFC-sentences used in the ZFC-proof of $\varphi$. I believe the existence of $\Sigma_n$-reflecting cardinals which are inaccessible is more than sufficient for sentences up to $n$ in the Levy hierarchy. By definition of those, we have the equivalence $\varphi \leftrightarrow V_{\kappa} \models \varphi$, and then provability of $\varphi$ in ZFC implies $V_{\kappa} \models \varphi$. However, I am not sure whether we have to go that far.
|
$\def\zfc{\mathrm{ZFC}}\def\pr{\operatorname{Prov}\nolimits}$The statement $\zfc\vdash\pr_\zfc(\ulcorner\varphi\urcorner)$ implies $\zfc\vdash\varphi$ for every sentence $\varphi$ in the language of $\zfc$ is equivalent to the statement that $\zfc$ is either inconsistent or $\Sigma^0_1$-sound: the latter means that every $\Sigma^0_1$-sentence provable in $\zfc$ is true in standard integers. One direction is obvious as $\pr_\zfc(\ulcorner\varphi\urcorner)$ is a $\Sigma^0_1$-sentence, and its truth in $\mathbb N$ says exactly that $\varphi$ is provable in $\zfc$. The converse follows from the Friedman–Goldfarb–Harrington principle: if $T$ is a recursively axiomatized theory containing Robinson’s arithmetic and $\sigma$ a $\Sigma^0_1$-sentence, there exists a sentence $\varphi$ (that can also be taken $\Sigma^0_1$) such that $$I\Delta_0+\mathit{EXP}\vdash\pr_T(\ulcorner\varphi\urcorner)\leftrightarrow(\sigma\lor\pr_T(\ulcorner0=1\urcorner)).$$ $\Sigma^0_1$-soundness is stronger than consistency, but weaker than $\omega$-consistency. If you are wondering about foundational issues, it is best to consider it as a separate assumption on its own.
|
{
"source": [
"https://mathoverflow.net/questions/127322",
"https://mathoverflow.net",
"https://mathoverflow.net/users/33039/"
]
}
|
127,520 |
The classical Riemann Hypothesis has famous analogues for function fields and finite fields which have been proved. It has by now very many analogues, many of them still open. Are there important analogues that are now known to be false?
|
There is a well-known example, due to Davenport and Heilbronn, of a Dirichlet series that in some sense is not so different from the Riemann zeta function but that has zeros off the critical line. The function is defined by
$$\sum_{n=1}^{\infty} \frac{a_n}{n^s}$$
where $a_n$ equals $1, c, -c, -1, 0$ according as $n$ is congruent to $1,2,3,4,5$ modulo $5$, respectively, with $c$ a certain algebraic number [see the reference at the end for the actual value]. This function fulfills a functional equation similar to that of the Riemann zeta function and (thus) can be continued to the entire plane (for details see again the reference below). Yet, as mentioned above, it has (nontrivial) zeros off the critical line.
It might be worth adding that for other Dirichlet series with periodic coefficient sequences (for example, Dirichlet L-series) one expects a generalisation of RH to be true. For some recent computational investigations of the zeros of this function see for example "Zeros of the Davenport-Heilbronn Counterexample", Mathematics of Computation, 2007. For an 'axiomatic' framework where no exceptions to (the analog of) the Riemann Hypothesis are currently expected, while capturing many/most Dirichlet series that appear in practice, see the Selberg class .
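For readers who want to experiment numerically, here is a minimal sketch of my own (it assumes the mpmath library, and the constant $c$ is left as a placeholder because its exact algebraic value is the one given in the references): a Dirichlet series with $5$-periodic coefficients can be evaluated anywhere in the plane by writing it as a combination of Hurwitz zeta functions, $\sum_n a_n n^{-s} = 5^{-s}\sum_{r=1}^{5} a_r\,\zeta(s, r/5)$.

```python
# Evaluate a Dirichlet series with 5-periodic coefficients a_1,...,a_5 via
#   sum_{n>=1} a_n / n^s = 5^{-s} * sum_{r=1}^{5} a_r * zeta(s, r/5),
# which also provides the analytic continuation beyond Re(s) > 1.
import mpmath as mp

def periodic_dirichlet(s, coeffs):
    q = len(coeffs)
    return mp.power(q, -s) * sum(
        a * mp.zeta(s, mp.mpf(r) / q) for r, a in enumerate(coeffs, start=1)
    )

c = mp.mpf('1.0')                 # placeholder; the true algebraic value of c
                                  # is given in the cited reference
a = [1, c, -c, -1, 0]             # the 5-periodic coefficient pattern
print(periodic_dirichlet(mp.mpc(2, 0), a))      # inside the half-plane of convergence
print(periodic_dirichlet(mp.mpc(0.5, 10), a))   # a continued value on the critical line
```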
|
{
"source": [
"https://mathoverflow.net/questions/127520",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38783/"
]
}
|
127,530 |
Which smooth, closed surfaces $S \subset \mathbb{R}^3$ have no
single geodesic $\gamma$ that fills $S$ densely? Say a geodesic $\gamma$ "fills $S$ densely" if the closure of the set of points
through which $\gamma$ passes equals $S$.
Some examples: A sphere: every geodesic is a great circle. Zoll surfaces, as discussed here:
" Surfaces all of whose geodesics are both closed and simple ." An ellipsoid. (Image from GeographicLib .) A torus generally has many geodesics that fill the surface. ( Image by John Oprea ) My assumption is that almost all surfaces have geodesics that fill them.
Is this known, under any interpretation of "almost all"?
I would also be interested in extending the list of exceptional surfaces
beyond {sphere, Zoll, ellipsoid}.
Thanks for pointers! Answers Summary ( 18Apr2013 ): (Robert Bryant, Mikhail Katz)
Any surface of revolution with poles has no dense geodesic.
This holds for convex or nonconvex surfaces of revolution. (Robert Bryant)
There are generalizations of
Liouville surfaces (due to Goryachev-Chaplygin and to Dullin-Matveev) that have no dense geodesic. (Misha Kapovich)
Every surface may be perturbed by gluing on "focusing caps" so
that it has dense geodesics. (Keith Burns) Guess:
There is always a dense geodesic on a closed Riemannian surface of genus
$\ge 2$.
|
Any surface of revolution in $3$-space with poles will have this property. The reason is that, in this case, any geodesic either goes through a pole (i.e., a point where the axis of revolution meets the surface) and is a profile curve that lies in a plane or else, because of the Clairaut integral, it avoids that pole by some positive distance. Thus, no geodesic on the surface is dense in the surface. You mention ellipsoids, which furnish examples of these special surfaces. These are examples of so-called 'Liouville surfaces', i.e., Riemannian surfaces $(S,g)$ for which there exist two independent first integrals of the geodesic flow on $T^\ast S$ that are quadratic functions on the fibers of $T^\ast S\to S$, one of which is the co-metric associated to $g$ and the other of which is an independent first integral. As you probably know, surfaces of revolution are surfaces for which there exist a first integral of the geodesic flow that is linear on the fibers of $T^\ast S\to S$, namely the Clairaut integral. It has been known for some time that there are metrics on the $2$-sphere that don't possess any 'extra' first integrals that are linear or quadratic functions on the fibers of $T^\ast S\to S$, but do possess first integrals that are cubic or quartic functions on the fibers of $T^\ast S\to S$. These are due to Goryachev-Chaplygin (early 20th century) and Dullin-Matveev (2004). These are also examples for which no geodesic winds densely over the surface. All of these work because there are 'conservation laws' for the geodesic flow of a particular kind, and they properly generalize the Liouville surfaces (which includes the famous case of ellipsoids).
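To make the mechanism concrete, here is a small numerical sketch of my own (it assumes SciPy; the ellipsoid, the parametrization and the initial data are arbitrary choices of mine, and the geodesic equations below are just the standard ones for the induced metric): on an ellipsoid of revolution the Clairaut integral, together with conservation of speed, keeps any non-meridian geodesic in a band bounded away from the two poles.

```python
# Geodesics on the ellipsoid of revolution
#   (u, v) -> (a sin u cos v, a sin u sin v, c cos u),  0 < u < pi,
# with induced metric ds^2 = (a^2 cos^2 u + c^2 sin^2 u) du^2 + a^2 sin^2 u dv^2.
# The Clairaut integral a^2 sin^2(u) dv/dt is conserved; if nonzero, sin u is
# bounded below along the geodesic, so it never approaches the poles.
import numpy as np
from scipy.integrate import solve_ivp

a, c = 1.0, 2.0

def geodesic(t, y):
    u, v, du, dv = y
    E = a**2 * np.cos(u)**2 + c**2 * np.sin(u)**2
    ddu = (-(c**2 - a**2) * np.sin(u) * np.cos(u) * du**2
           + a**2 * np.sin(u) * np.cos(u) * dv**2) / E
    ddv = -2.0 * (np.cos(u) / np.sin(u)) * du * dv
    return [du, dv, ddu, ddv]

sol = solve_ivp(geodesic, (0.0, 100.0), [1.0, 0.0, 0.3, 0.7],
                rtol=1e-10, atol=1e-12)
u, v, du, dv = sol.y
clairaut = a**2 * np.sin(u)**2 * dv
print("Clairaut integral:", clairaut.min(), clairaut.max())   # numerically constant
print("u stays within:   ", u.min(), u.max())                 # bounded away from 0 and pi
```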
|
{
"source": [
"https://mathoverflow.net/questions/127530",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
127,531 |
Let $\mathcal{G}=(A\rightrightarrows X)$ be a groupoid.
Here $X={\rm Ob}(\mathcal{G})$, $A={\rm Ar}(\mathcal{G})$,
and we have 5 maps:
$s,t\colon A\to X$ (the source and the target, surjective),
$m\colon A\times_X A\to A$ (multiplication of composable arrows),
${\rm id}\colon X\to A$ ($x\mapsto{\rm id}_x$, injective),
and $i\colon A\to A$ ($a\mapsto a^{-1}$),
satisfying the usual axioms.
I say that my groupoid is connected if for any two objects $x,y\in X$ there exists an arrow $a\colon x\to y$. Assume that a finite group $\Gamma$ acts on $\mathcal{G}$, i.e., it acts on $X$ and $A$ such all the 5 maps are $\Gamma$-equivariant.
We say that $\mathcal{G}$ is a $\Gamma$-groupoid. Now I want to construct a fibered category (gerbe) $\mathbb{G}$ over the category (site) of finite $\Gamma$-sets,
starting from a connected $\Gamma$-groupoid $\mathcal{G}$.
In other words, for any finite $\Gamma$-set $S$, I want to construct a groupoid $\mathbb{G}(S)$,
and for a morphism $S\to T$ of finite $\Gamma$-sets, I want to define a restriction functor $\mathbb{G}(T)\to \mathbb{G}(S)$.
How can I do that?
I could not find this in Giraud's book.
|
|
{
"source": [
"https://mathoverflow.net/questions/127531",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4149/"
]
}
|
127,717 |
Let me detail the title of the question. I'm trying to give students an intuition of what the class number is. Let $K=\mathbb{Q}(\sqrt{-d})$, with $d>0$ a square-free integer, be a quadratic imaginary field. Let $\mathcal{O}_K$ be its ring of integers. It is of the form $\mathbb{Z}[\tau]$ with $\tau=\sqrt{-d}$ or $\tau=\frac{1+\sqrt{-d}}{2}$ depending on the value of $d$ mod $4$. So let us think of $\mathcal{O}_K$ as the lattice in $\mathbb{C}$ generated by $1$ and $\tau$. Then ideals should correspond to sublattices of $\mathcal{O}_K$, and two of them should define the same class in the class group if one can pass from one to the other by multiplying by an element $\alpha$ of $K$, shouldn't they? Could anybody help me make this analogy precise? For instance, how can one see that $\mathbb{Q}(i)$ has class number 1 but $\mathbb{Q}(\sqrt{-5})$ doesn't, just by looking at the corresponding lattices? The (non-equivalent) decompositions $2\cdot 3=(1+\sqrt{5}i)(1-\sqrt{5}i)$ suggest considering the lattices $\mathbb{Z}\cdot 2+\mathbb{Z}\cdot(1+\sqrt{5}i)$ and $\mathbb{Z}\cdot 3+\mathbb{Z}\cdot(1-\sqrt{5}i).$ Is that what I have to do? Thanks!
|
This is an interesting question that I've wondered about myself, so I can't really answer it properly but I'll make a couple elementary observations. First, for a lattice in ${\mathbb Z}[\tau]\subset{\mathbb C}$ to be an ideal just means that multiplication by $\tau$ takes the lattice to itself. For example, for the Gaussian integers this says the lattice is invariant under 90 degree rotation about the origin. It is easy to see that this means the lattice is a square lattice with a basis consisting of two of its shortest vectors. (This is really just a disguised version of the Euclidean algorithm for Gaussian integers.) Thus every ideal is principal in this case. Back to the general case, another observation is that if two ideals are in the same ideal class, then the two lattices are related by an orientation-preserving similarity, that is, rotation and rescaling, which is what multiplying an ideal by an element of ${\mathbb Z}[\tau]$ does to a lattice. (It seems the converse should be true as well). For example in the case $\tau =\sqrt5i$ the principal ideal class consists of rectangular lattices similar to ${\mathbb Z}[\tau]$ itself, and the other ideal class (the class number is 2 here) consists of lattices similar to the lattice $(2,1+\sqrt5i)$ which is skewed rather than rectangular. It is enlightening to draw a picture to see how this lattice is invariant under multiplication by $\sqrt5i$. Another thing that can be viewed geometrically is the correspondence between ideal classes and equivalence classes of binary quadratic forms of fixed discriminant. An ideal, viewed as a lattice, determines a quadratic form by restricting the usual norm (squared) $x^2 + y^2$ to the lattice, then renormalizing suitably to make equivalent ideals have equivalent quadratic forms. A textbook that explains this, to some extent at least, is Advanced Number Theory by Harvey Cohn. It would be interesting to work out some more examples to see what the different similarity classes of lattice-ideals look like, especially in cases when the class number is larger than 2. Is the structure of the ideal class group somehow visible in how the similarity classes are related?
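The two claims about $\mathbb{Q}(\sqrt{-5})$ are easy to check by hand or by machine. The following is a small sketch of my own (SymPy and Python's Fraction are just conveniences; the basis $2,\ 1+\sqrt{-5}$ is the one from the question): it verifies that the lattice is closed under multiplication by $\sqrt{-5}$, hence is an ideal, and that its renormalized norm form is the non-principal form $2x^2+2xy+3y^2$ of discriminant $-20$.

```python
# Work in Z[sqrt(-5)], storing x + y*sqrt(-5) as the pair (x, y).
from fractions import Fraction
from sympy import symbols, expand

def mul(z, w):
    # (x1 + y1 t)(x2 + y2 t) with t^2 = -5
    return (z[0]*w[0] - 5*z[1]*w[1], z[0]*w[1] + z[1]*w[0])

b1, b2 = (2, 0), (1, 1)          # lattice basis: 2 and 1 + sqrt(-5)
tau = (0, 1)                     # sqrt(-5)

def coords(z):
    # solve z = m*b1 + n*b2, i.e. 2m + n = z[0] and n = z[1]
    return Fraction(z[0] - z[1], 2), Fraction(z[1])

for b in (b1, b2):               # ideal check: integer coordinates both times
    m, n = coords(mul(tau, b))
    print(mul(tau, b), "=", m, "* b1 +", n, "* b2")

x, y = symbols('x y')
norm = expand((2*x + y)**2 + 5*y**2)   # N(x*b1 + y*b2), since x*b1 + y*b2 = (2x+y) + y*sqrt(-5)
print(expand(norm / 2))                # 2x^2 + 2xy + 3y^2, the non-principal form
```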
|
{
"source": [
"https://mathoverflow.net/questions/127717",
"https://mathoverflow.net",
"https://mathoverflow.net/users/33163/"
]
}
|
127,841 |
$\DeclareMathOperator\Top{\mathit{Top}}$ I am not sure if it's OK to ask this question here. Let $\Top$ be the category of topological spaces. Let $X,Y$ be objects in $\Top$. Let $F:\mathbb{I}\rightarrow \Top(X,Y)$ be a function (I will denote the image of $t$ by $F_t$). Let $F_{*}:X\times \mathbb{I}\rightarrow Y$ be the function that sends $(x,t)$ to $F_t(x)$. Is there a topology on $\Top(X,Y)$ such that $F$ is continuous iff $F_{*}$ is continuous? Motivation: In the definition of a homotopy $F$ from $f$ to $g$ (for some $f,g\in \Top(X,Y)$) it is tempting to think of $F$ as a "path" (as in the definition of $PY^X$) from $f$ to $g$. Now I really wanted to see if $F$ could be thought of as a real path from $f$ to $g$ in $\Top(X,Y)$. More precisely, I wanted to know whether $F_{*}:\mathbb{I}\rightarrow \Top(X,Y)$ that sends $t$ to $F_t$ is a path or not. Note that $F_{*}(0)=f, F_{*}(1)=g$; thus if $F_{*}$ is continuous it would be a path from $f$ to $g$ in $\Top(X,Y)$. Hence, I think that the case when $\mathbb{I}$ is the unit interval is still of some interest.
|
Briefly, this works very nicely when $X$ is locally compact, but not otherwise.
Then the function space carries the compact-open topology. John Isbell gave a survey of the story and literature in his paper General Function Spaces, Products and Continuous Lattices ,
in Math Proc Cam Phil Soc 100 (1986) 193--205. It is an ongoing matter in theoretical computer science. There is frequent and ongoing literature on this subject going back to
when Ralph Fox introduced the compact-open topology in On Topologies for Function-Spaces in Bull AMS 51 (1945). It was originally considered in homotopy theory,
then in category theory and topological lattice theory.
After that theoretical computer science took over,
under the headings of domain theory, realisability
and "exact" real computation. Along the way some very important concepts have been identified,
in particular the universal property of the exponential in
a cartesian closed category (as stated elsewhere on this page)
but also that of a continuous lattice . Briefly, a distributive continuous lattice is exactly the topology
of a locally compact space. I say this primarily as a warning to those (students in particular)
who may think that a little bit of tweaking of the category or
the universal property might yield better results.
There are a lot of broken ideas along the way,
some of which you will find surveyed in Isbell's paper.
Breaking a correct idea like the universal property
(by restricting its test object to a single space) is not going to help. The most important topological space is not the real interval but the
Sierpinski space, for which I write $\Sigma$ .
Classically, it has one open and one closed point.
It is important because there are (constructive) bijections amongst continuous functions $\phi:X\to\Sigma$ , open subspaces $U\subset X$ and closed subspaces $C\subset X$ . In particular, putting $Y\equiv\Sigma$ in the desired universal
property, a continuous map $\phi:\Gamma\times X\to\Sigma$ is an open subspace of $\Gamma\times X$ and you want that to correspond to a continuous function $\Gamma\to\Sigma^X$ . With $\Gamma\equiv{\bf 1}$ , this means that the points of $\Sigma^X$ must be the open subspaces of $X$ . With $\Gamma\equiv\Sigma^X$ , we want the transpose of $id:\Sigma^X\to\Sigma^X$ to be continuous, but this is $ev:\Sigma^X\times X\to\Sigma$ defined
by $ev(U,x)\equiv(x\in U)$ .
This map defines an open subspace of $\Sigma^X\times X$ ,
which is a union of rectangles ${\cal V}\times V$ .
If $x\in U$ then $(U,x)\in{\cal V}\times V$ and $x\in V\subset K\subset U$ where $K\equiv\bigcap{\cal V}$ is compact. So this works exactly when $X$ is locally compact and $\Sigma^X$ is its lattice of open subspaces, itself equipped with the Scott topology , which has a basis consisting of ${\cal V}\equiv\lbrace W|K\subset W\rbrace$ for $K$ compact. I forget why $K$ is compact, but a good place to look would
be the paper Local Compactness and Continuous Lattices by Karl Hofmann and Mike Mislove
in Springer Lecture Notes in Mathematics 871 (1981) 209-248.
It was in this paper that the interpolation property $x\in V\subset K\subset U$ was introduced
as the definition of a locally compact space that is (sober but)
not necessarily Hausdorff. [ PS: Peter Johnstone has a neat argument involving preservation of injectivity, in the final chapter of his book Stone Spaces .] So this is the reason why local compactness of $X$ is necessary. If $X$ is locally compact then the exponentials $Y^X$ exist for all spaces $Y$ .
However, even when $Y$ is locally compact, $Y^X$ need not be,
for example Baire space $N^N$ is not,
so locally compact spaces do not form a cartesian closed category.
Nevertheless, $\Sigma^X$ is always locally compact when $X$ is. Of course the argument for necessity above does not work
if you only allow $\Gamma\equiv[0,1]$ in the universal property.
However, it is not a good idea to mess around with such definitions. If you seriously want to use the collection of maps $X\to Y$ as another space then you require a notation and a way of computing
with functions as first-class objects .
This notation is called the (typed) lambda calculus . When the universal property of the exponential was recognised
in the 1960s, it was not only related to this question in general topology
but also to the formulation of symbolic logic,
that is, to the lambda calculus and to proof theory . I always write $\Gamma$ for the test object of a universal property
because it plays exactly the same role in category theory
as the context does in symbolic logic,
which is customarily written with this letter.
The context of an expression is the list of parameters (free variables)
in it and their types (the spaces over which they range). If you restrict $\Gamma$ to be just the singleton or interval
then you cannot have general parameters in your expressions. Dana Scott initially got involved in this subject because he wanted
to show that the untyped lambda calculus is meaningless.
However, he fairly quickly discovered models of it,
in the form of topological lattices such that $X\cong X^X$ .
See, for example, his Data Types as Lattices in
the SIAM Journal on Computing 5 (1976) 522-587. Out of this grew veritable industries called domain theory and denotational semantics .
In the 1980s, cartesian closed categories of domains came two-a-penny
(I was responsible for some of them), where
"domains" were particular kinds of partial orders
equipped with the Scott topology.
Denotational semantics used these to give mathematical
meanings to constructs in programming languages
in order to demonstrate the correctness of programs. If you do not like the story for the whole of the traditional category
of topological spaces then there are many alternatives. The "official" answer in homotopy theory was
the (full sub)category of compactly generated spaces . On the other hand, there are ways of enlarging the traditional
category to make it cartesian closed. Equilogical Spaces and Filter Spaces by Pino Rosolini
in Rendiconti del Circolo Matematico di Palermo 64 (2000) 157--175
gives an excellent survey of them,
explaining how they are reflective subcategories of
presheaves on the traditional category.
In particular, Scott had introduced equilogical spaces ,
defined as topological spaces equipped with formal equivalence relations;
the theory is set out in full in Equilogical Spaces by Andrej Bauer, Lars Birkedal and Dana Scott. Having gone to the trouble of writing this lengthy account
of (some of) the history of this question,
I would like to turn it back on the homotopy theorists. When topics like this were considered by categorists in the 1960s,
they aimed their papers at (for example) topologists.
Therefore they did not spell out the topology, because their
intended readers would know it.
This is very frustrating for subsequent students of category theory:
the papers just contain the category theory and it is impossible
to trace back to the preceding mathematical ideas. So I would be grateful if the homotopy theorists here would explain,
without rehearsing the category theory,
what the motivations were and are in their own subject
for asking for "convenient" or cartesian closed categories. PS Thanks to Tyler Lawson for the comment below answering this question. Is there a slightly more detailed explanation of these methods, say of the length of a MO answer, or a survey paper? In the context of an application of this kind, the next question is whether the cartesian closed categories that have been used (and mentioned above) are the most appropriate for the job. On the face of it, you're happy with "any old" CCC. But, when you look at the extra objects of this category, do the extensions of topological notions to them behave in the way that you would like? That is, according to whatever other intuitions of topology you have, such as developing results along the lines that Tyler mentions? Many early applications of category theory imported the benefits of "set theory" by working in the Yoneda embedding (presheaves) or a smaller category of sheaves. Rosolini showed (in the paper cited above) how the CCC extensions of categories of topological spaces are subcategories of the Yoneda embedding. There is a close technical analogy in that both kinds of subcategory are reflective, but for sheaves the reflector (left adjoint to inclusion) preserves all finite limits, whereas in these CCCs it preserves products but not all equalisers or pullbacks. My personal view is that these extensions are not topology but set theory with topological decoration . In this context, by "set theory" I mean, not the study of $\in$ , but that of discrete spaces, whereas I believe (following Marshall Stone) that mathematical structures should be intrinsically topological. I have a research programme called Equideductive Topology that tries to look at such extensions without importing set theory.
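Coming back to the universal property of the exponential discussed at the start of this answer: in programming terms the transposition $\mathrm{hom}(\Gamma\times X,\,Y)\cong\mathrm{hom}(\Gamma,\,Y^X)$ is ordinary currying. Here is a minimal Python sketch (purely an illustration, with `G` standing in for the context $\Gamma$):

```python
from typing import Callable, TypeVar

G = TypeVar("G")   # the test object / context Gamma
X = TypeVar("X")
Y = TypeVar("Y")

def curry(f: Callable[[G, X], Y]) -> Callable[[G], Callable[[X], Y]]:
    """Transpose a map Gamma x X -> Y into a map Gamma -> Y^X."""
    return lambda g: lambda x: f(g, x)

def uncurry(h: Callable[[G], Callable[[X], Y]]) -> Callable[[G, X], Y]:
    """The inverse transposition."""
    return lambda g, x: h(g)(x)

# The evaluation map ev : Y^X x X -> Y is the uncurried identity:
ev = uncurry(lambda h: h)
assert ev(lambda x: x + 1, 41) == 42
```

The `ev` line mirrors the argument earlier in the answer: evaluation is the transpose of the identity on the function space.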
|
{
"source": [
"https://mathoverflow.net/questions/127841",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32135/"
]
}
|
127,889 |
"No". That was my answer till this afternoon! "Mathematics without proofs isn't really mathematics at all" probably was my longer answer. Yet, I am a mathematics educator who was one of the panelists of a discussion on "proof" this afternoon, alongside two of my mathematician colleagues, and in front of about 100 people, mostly mathematicians, or students of mathematics. What I was hearing was "death to Euclid", "mathematics is on the edge of a philosophical breakdown since there are different ways of convincing and journals only accept one way, that is, proof", "what about insight", and so on. I was in a funny and difficult situation. To my great surprise and shock, I should convince my mathematician colleagues that proof is indeed important, that it is not just one ritual, and so on. Do mathematicians not preach what they practice (or ought to practice)? I am indeed puzzled! Reaction : Here I try to explain the circumstances leading me to ask such "odd" question. I don't know it is MO or not, but I try. That afternoon, I came back late and I couldn't go to sleep for the things that I had heard. I was aware of the "strange" ideas of one of the panelist. So, I could say to myself, no worry. But, the greatest attack came from one of the audience, graduated from Princeton and a well-established mathematician around. "Philosophical breakdown" (see above) was the exact term he used, "quoting" a very well-known mathematician. I knew there were (are) people who put their lives on the line to gain rigor. It was four in the morning that I came to MO, hoping to find something to relax myself, finding the truth perhaps. Have I found it? Not sure. However, I learned what kind of question I cannot ask! Update: The very well-known mathematician who I mentioned above is John Milnor. I have checked the "quote" referred to him with him and he wrote "it seems very unlikely that I said that...". Here is his "impromptu answer to the question" (this is his exact words with his permission): Mathematical thought often proceeds from a confused search for what is true to a valid insight into the correct answer. The next step is a careful attempt to organise the ideas in order to convince others.BOTH STEPS ARE ESSENTIAL. Some mathematicians are great at insight but bad at organization, while some have no original ideas, but can play a valuable role by carefully organizing convincing proofs. There is a problem in deciding what level of detail is necessary for a convincing proof---but that is very much a matter of taste. The final test is certainly to have a solid proof. All the insight in the world can't replace it. One cautionary tale is Dehn's Lemma. This is a true statement, with a false proof that was accepted for many years. When the error was pointed out, there was again a gap of many years before a correct proof was constructed, using methods that Dehn never considered. It would be more interesting to have an example of a false statement which was accepted for many years; but I can't provide an example. (emphasis added by YC to the earlier post)
|
I was not going to write anything, as I am a latecomer to this masterful troll question and not many are likely going to scroll all the way down, but Paul Taylor's call for Proof mining and Realizability (or Realisability as the Queen would write it) was irresistible. Nobody asks whether numbers are just a ritual, or at least not very many mathematicians do. Even the most anti-scientific philosopher can be silenced with ease by a suitable application of rituals and theories of social truth to the number that is written on his paycheck. At that point the hard reality of numbers kicks in with all its might, be it Platonic, Realistic, or just Mathematical. So what makes numbers so different from proofs that mathematicians will fight a meta-war just for the right to attack the heretical idea that mathematics could exist without rigor, but they would have long abandoned this question as irrelevant if it asked instead "are numbers just a ritual that most mathematicians wish to get rid of"? We may search for an answer in the fields of sociology and philosophy, and by doing so we shall learn important and sad facts about the way the mathematical community operates in a world driven by profit, but as mathematicians we shall never find a truly satisfactory answer there. Isn't philosophy the art of never finding the answers? Instead, as mathematicians we can and should turn inwards. How are numbers different from proofs? The answer is this: proofs are irrelevant but numbers are not. This is at the same time a joke and a very serious observation about mathematics. I tell my students that proofs serve two purposes: They convince people (including ourselves) that statements are true. They convey intuitions, ideas and techniques. Both are important, and we have had some very nice quotes about this fact in other answers. Now ask the same question about numbers. What role do numbers play in mathematics? You might hear something like "they are what mathematics is (also) about" or "That's what mathematicians study", etc. Notice the difference? Proofs are for people but numbers are for mathematics. We admit numbers into the mathematical universe as first-class citizens, but we do not take seriously the idea that proofs themselves are also mathematical objects. We ignore proofs as mathematical objects. Proofs are irrelevant. Of course you will say that logic takes proofs very seriously indeed. Yes, it does, but in a very limited way: It mostly ignores the fact that we use proofs to convey ideas and focuses just on how proofs convey truth. Such practice not only hinders progress in logic, but is also actively harmful because it discourages mathematization of about 50% of mathematical activity. If you do not believe me, try getting funding for research in "mathematical beauty". It considers proofs as syntactic objects. This puts logic where analysis used to be when mathematicians thought of functions as symbolic expressions, probably sometime before the 19th century. It is largely practiced in isolation from "normal" mathematics, by which it is doubly handicapped, once for passing over the rest of mathematics and once for passing over the rest of mathematicians. Consequently even very basic questions, such as "when are two proofs equal", puzzle many logicians. This is a ridiculous state of affairs. But these are rather minor technical deficiencies. The real problem is that mainstream mathematicians are mostly unaware of the fact that proofs can and should be first-class mathematical objects.
I can anticipate the response: proofs are in the domain of logic, they should be studied by logicians, but normal mathematicians cannot gain much by doing proof theory. I agree, normal mathematicians cannot gain much by doing traditional proof theory. But did you know that proofs and computation are intimately connected, and that every time you prove something you have also written a program, and vice versa? That proofs have a homotopy-theoretic interpretation that has been discovered only recently? That proofs can be "mined" for additional, hidden mathematical gems? This is the stuff of new proof theory, which also goes under names such as Realizability, Type theory, and Proof mining. Imagine what will happen with mathematics if logic gets boosted by the machinery of algebra and homotopy theory, if the full potential of "proofs as computations" is used in practice on modern computers, if completely new and fresh ways of looking at the nature of proof are explored by the brightest mathematicians who have vast experience outside the field of logic? This will necessarily represent a major shift in how mathematics is done and what it can accomplish. Because mathematicians have not reached the level of reflection which would allow them to accept proof relevant mathematics they seek security in the mathematically and socially inadequate dogma that a proof can only be a finite syntactic entity. This makes us feeble and weak and unable to argue intelligently with a well-versed sociologist who can wield the weapons of social theories, anthropology and experimental psychology.
So the best answer to the question "is rigor just a ritual" is to study rigor as a mathematical concept , to quantify it, to abstract it, and to turn it into something new, flexible and beautiful. Then we will laugh at our old fears, wonder how we ever could have thought that rigor is absolute, and we will become the teachers of our critics.
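To make the proofs-as-programs remark above concrete, here is a toy Python sketch (an illustration only): under the Curry-Howard correspondence a proof of $A\wedge B\Rightarrow B\wedge A$ is a program that swaps a pair, and a proof of $(A\Rightarrow B)\wedge(B\Rightarrow C)\Rightarrow(A\Rightarrow C)$ is function composition.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def and_comm(p: Tuple[A, B]) -> Tuple[B, A]:
    """A proof of A and B -> B and A: swap the components of the pair."""
    a, b = p
    return (b, a)

def syllogism(p: Tuple[Callable[[A], B], Callable[[B], C]]) -> Callable[[A], C]:
    """A proof of (A -> B) and (B -> C) -> (A -> C): compose the two functions."""
    f, g = p
    return lambda a: g(f(a))

assert and_comm((1, "x")) == ("x", 1)
assert syllogism((lambda n: n + 1, str))(41) == "42"
```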
|
{
"source": [
"https://mathoverflow.net/questions/127889",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29316/"
]
}
|
128,152 |
A virtual currency called bitcoins has been in the news recently. It is said that in order to "mine" bitcoins, you have to solve hard mathematical problems. Now, there are two kinds of mathematical problems. The difference is best explained by the following beautiful quotation from Langlands : [T]here is an appealing fable that I learned from the mathematician Harish-Chandra, and that he claimed to have heard from the French mathematician Claude Chevalley. When God created the world, and therefore mathematics, he called upon the Devil for help. He instructed the Devil that there were certain principles, presumably simple, by which the Devil must abide in carrying out his task but that apart from them, he had free rein. Both Chevalley and Harish-Chandra were, I believe, persuaded that their vocation as mathematicians was to reveal those principles that God had declared inviolable, at least those of mathematics for they were the source of its beauty and its truths. They certainly strived to achieve this. If I had the courage to broach in this paper genuine aesthetic questions, I would try to address the implications of their standpoint. It is implicit in their conviction that the Devil, being both mischievous and extremely clever, was able, in spite of the constraining principles, to create a very great deal that was meant only to obscure God’s truths, but that was frequently taken for the truths themselves. Certainly the work of Harish-Chandra, whom I knew well, was informed almost to the end by the effort to seize divine truths. Question Which kind of mathematical problems do you have to solve in order to mine bitcoins ? Let me clarify that I'm not interested in mining, only in knowing whether the problems are divine or devilish.
|
Bitcoin mining is based on hash functions. Specifically the SHA-256 hash function, which maps arbitrary bit strings to 256-bit outputs in such a way that nobody knows how to find a collision (two inputs with the same output), although the pigeonhole principle implies collisions exist. Bitcoin mining doesn't involve finding collisions, which would be way too hard. Instead, one has to find inputs that lead to outputs with special properties, namely a lot of consecutive zeros. This is a scaled-down version of inverting the hash function. Of course there's no proof that any of this is actually computationally difficult, and some earlier hash functions have turned out to be weaker than expected (for example, MD5 and SHA-1), but it certainly seems to be. SHA-256 is not a nice or simple function - it was designed to be hard to analyze - so I'd say this is a devilish problem.
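To get a feel for the computation, here is a toy proof-of-work loop in Python (a simplified sketch: real mining hashes an 80-byte block header and compares the double SHA-256 digest against a full 256-bit target, but the principle is the same):

```python
import hashlib

def toy_mine(header: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce whose double SHA-256 hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        data = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(toy_mine(b"example block header"))   # roughly 2**16 hashes on average
```

Each extra required zero bit halves the chance that any single hash succeeds, which is how the difficulty is tuned to the total hashing power.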
|
{
"source": [
"https://mathoverflow.net/questions/128152",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2821/"
]
}
|
128,676 |
Is there a simple answer to the question "what happens to the continued fraction expansion of an irrational number when you add 1/2?" A closely related question is "what happens to such an expansion when you multiply by 2?" Remarks: I'm not sure what qualifies for an answer. The motivation comes from wanting to better understand the equivalence relation on integer sequences generated by tail equivalence (which is generated by adding integers and taking reciprocals) and closure under doubling/halving. It is known that this Borel equivalence relation is not hyperfinite, so the answer cannot be too simple. Edit: The answers are not really what I am asking for. It is clear there is some recursive procedure for doing this, just like there is a recursive procedure for taking a square root of a decimal expansion. I'm looking for something which one might call a "closed form". For instance, if you start with a periodic expansion, adding 1/2 produces a new periodic expansion. Is there a simple transformation on the initial and periodic parts which corresponds to adding 1/2? For instance, can this be done with a finite state automaton? An authoritative "there is no such nice answer" would actually be an acceptable answer.
|
Gosper showed how to produce the continued fraction for $$\frac{axy+bx+cy+d}{exy+fx+gy+h}$$ and even more complicated expressions given the continued fractions for $x$ and $y$. As a special case, he covers fractional linear transformations of $x$, which of course includes adding $1/2$ or multiplying by $2$. I haven't digested this yet. Here is a special case: $2[a_0; 2a_1, a_2, 2a_3, a_4,...] = [2a_0; a_1, 2a_2, a_3, 2a_4, ...]$. Another way of thinking about these is by way of the following two elementary results on simple continued fractions. First, if $p_n/q_n$ and $p_{n+1}/q_{n+1}$ are adjacent convergents of the simple continued fraction for $x$, then $|p_n/q_n - x| \le 1/(q_n q_{n+1}) = 1/(a_{n+1} q_n^2 + q_{n-1}) \lt 1/(a_{n+1} q_n^2)$, where $a_{n+1}$ is a coefficient of the simple continued fraction for $x$. Second, if $|p/q - \alpha | \lt 1/(2q^2)$ then $p/q$ must be a convergent of the simple continued fraction for $\alpha$. Every time you have a large coefficient in the simple continued fraction expansion for $x$ (say, at least $8$, though I think you can do better), the previous convergent $p/q$ is a particularly efficient approximation to $x$. Then $p/q + 1/2$ is just as close to $x+1/2$, and the denominator of $p/q + 1/2$ is at most $2q$, so $p/q + 1/2$ must be a convergent of the simple continued fraction for $x+1/2$. Similarly, if $p/q$ is a sufficiently close approximation to $x$, then $2p/q$ must be a convergent for $2x$. Even the convergents to $x$ which are not so close still constrain the convergents of $x+1/2$ and $2x$. Let $\begin{eqnarray}x &=& [a_0; a_1, a_2, a_3, ...] \newline x' &=& [a_1; a_2, a_3, ...] \newline x'' &=& [a_2; a_3, a_4, ...]\end{eqnarray}$. Then $\frac{1}{2} x = \begin{cases} [a_0/2 ; 2x']& a_0 ~\text{even} \newline [(a_0-1)/2;1,1,(x'-1)/2]& a_0 ~\text{odd}\end{cases}$ Multiplying by $2$ is similar: $2x = \begin{cases} [2a_0; x'/2] & a_1 \gt 1 \newline [2a_0+1; 1, (x''-1)/2] & a_1=1 \end{cases}$ Note that $(x'-1)/2$ is not guaranteed to be greater than $1$. If not, then this gives us a $0$ coefficient in the middle of the simple continued fraction for $\frac{1}{2}x$. The $0$ can be removed by adding together the coefficients on either side. $[a_0; a_1, a_2, ..., a_n, 0, a_{n+2}, a_{n+3},...] = [a_0; a_1, a_2, ..., a_n+a_{n+2}, a_{n+3}, ...]$ For example, if $x = [a_0; a_1, a_2, a_3, ...]$ with $a_0$ odd and the other coefficients even, then $\begin{eqnarray} \frac{1}{2}x &=& [(a_0-1)/2; 1, 1, (a_1-2)/2, 1, 1, (a_2 - 2)/2, 1, 1, (a_3-2)/2, ...] \end{eqnarray}$. Also, note that this pattern looks a lot like the simple continued fraction for $e = [2;1,2,1,1,4,1,1,6,1,1,8,...]$. This indicates that something related to $e$ has a simpler continued fraction, and indeed this is true. $\frac{e-1}{2} = [0; 1,6,10,14,18,...], \frac{e+1}{e-1} = [2;6,10,14,18,...]$. You can view this as that there are two states (though you could subdivide these). In the first, you alternately divide and multiply coefficients by $2$. In the second, you space the coefficients out by $1$s, subtract one and divide them by $2$. If you encounter an odd number when you have to divide by two, then you move to the second states or stay there. If you have to divide an even number by $2$, you move to or stay in the first state, and multiply the next coefficient by $2$. From David Speyer's observation, if you can multiply by $2$ and divide by $2$ then you can also add $1/2$. However, it is a little more complicated. 
Large coefficients either get approximately multiplied by $2$ or divided by $2$ when you multiply or divide $x$ by $2$. When you add $1/2$, large coefficients sometimes get roughly divided by $4$, sometimes they roughly stay the same, and sometimes they are roughly multiplied by $4$, and the operations depend on the state you are in and the remainder of the coefficients $\mod 4$. For example, if $x = [0; 1, 1000, 2000, 3000, 4000, 5000]$ then $x+1/2 = [1; 2, 249, 1, 3, 499, 1, 3, 749, 1,3, 999, 1, 3, 1249, 1, 3]$. If $y=[0;1,1001,2000,3000,4000,5000]$ then $y+1/2 = [1; 2, 250,8000,750,16000,1250]$.
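If you want to experiment with these transformations, exact rational arithmetic already goes a long way; a small Python sketch (an illustration only):

```python
from fractions import Fraction

def cf(x: Fraction, n: int = 20):
    """First n coefficients of the simple continued fraction of a rational x."""
    coeffs = []
    for _ in range(n):
        a = x.numerator // x.denominator        # floor of x
        coeffs.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return coeffs

def from_cf(coeffs):
    """Rebuild the rational number from its continued fraction coefficients."""
    x = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        x = a + 1 / x
    return x

x = from_cf([0, 1, 1000, 2000, 3000, 4000, 5000])
print(cf(x + Fraction(1, 2)))   # should reproduce the expansion quoted above,
                                # up to the usual ambiguity in the final coefficient
```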
|
{
"source": [
"https://mathoverflow.net/questions/128676",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10774/"
]
}
|
128,716 |
During courses on geometry it is sometimes necessary to draw a triangle on the blackboard that can easily be recognized as a general triangle . It must not be rectangular and must not have two or more equal angles. Further all angles should be less than $\pi/2$. Has anybody optimized this old problem of geometry-teachers?
|
The book "Humor in der Mathematik" by Friedrich Wille (from the 1970s or 1980s) contains the tongue-in-cheek theorem "Up to similarity, there is a unique general triangle". (Google book search for "Friedrich Wille" "Allgemeines Dreieck") "General" is defined as "all angles must differ from each other, and from 90 degrees, by at least 15 degrees". Given some further axioms (like: diagonals of acute angles have to be visually distinct from the line of symmetry) it is also shown that there is a unique general quadrilateral, and that this quadrilateral is "teeming" (another technical term) with general triangles.
|
{
"source": [
"https://mathoverflow.net/questions/128716",
"https://mathoverflow.net",
"https://mathoverflow.net/users/112109/"
]
}
|
128,786 |
Inscribe an $n$-ball in an $n$-dimensional hypercube of side equal to 1, and let $n \rightarrow \infty$. The hypercube will always have volume 1, while it is a fun folk fact (FFF) that the volume of the ball goes to 0. I first learnt of this in relation to Gromov. In the story I heard, he used to ask incoming students to compute the distance $(\sqrt{n}-1)/2$ from a hypercube corner to the ball, and observe them to see if they realized that the volume of the hypercube is concentrated in its corners. Is this story correct? And is this the origin of this FFF? I could imagine a situation where several people noticed this at different times, but where the fact did not become "viral" until much more recently.
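(For reference, the distance computation is one line: the corner $(0,\dots,0)$ of the unit hypercube lies at distance $\sqrt n/2$ from the centre $(1/2,\dots,1/2)$, and the inscribed ball has radius $1/2$, so the gap is $\sqrt n/2-1/2=(\sqrt n-1)/2$, which grows without bound.)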
|
Brian Hayes wrote a column on the volume of the $n$-sphere for American Scientist a couple of years ago, available online here. It includes a bit of history, with bibliography, toward the end, which might be of help here. Added 4/26/13: Here are a couple of pertinent passages from Brian's article:

"... Sommerville mentions the Swiss mathematician Ludwig Schläfli as a pioneer of n-dimensional geometry. Schläfli’s treatise on the subject, written in the early 1850s, was not published in full until 1901, but an excerpt translated into English by Arthur Cayley appeared in 1858. The first paragraph of that excerpt gives the volume formula for an n-ball, commenting that it was determined “long ago.” An asterisk leads to a footnote citing papers published in 1839 and 1841 by the Belgian mathematician Eugène Catalan."

and

"Not one of these early works pauses to comment on the implications of the formula—the peak at n=5 or the trend toward zero volume in high dimensions. Of the works mentioned by Sommerville, the only one to make these connections is a thesis by Paul Renno Heyl, published by the University of Pennsylvania in 1897."
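Numerically, both phenomena mentioned in that passage, the peak at $n=5$ and the trend toward zero, are easy to check; a small Python sketch (the last column is the radius-$1/2$ ball inscribed in the unit cube, which is the setting of the question):

```python
import math

def unit_ball_volume(n: int) -> float:
    """Volume of the n-ball of radius 1: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in range(1, 21):
    v = unit_ball_volume(n)
    print(f"{n:2d}  {v:10.6f}  {v / 2**n:12.9f}")
# The middle column peaks at n = 5 and then tends to 0;
# the inscribed-ball column decreases from n = 1 onward.
```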
|
{
"source": [
"https://mathoverflow.net/questions/128786",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13923/"
]
}
|
129,321 |
If $M$ is a connected smooth manifold, then it is easy to show that there is a sequence of connected compact smooth submanifolds with boundary $M_1\subseteq M_2\subseteq\cdots$ such that $M=\bigcup_{i=1}^\infty(M_i)^\circ$. I would guess it should also be true that if $M$ is a connected topological manifold then there is a sequence of locally tame connected compact submanifolds with boundary $M_1\subseteq M_2\subseteq\cdots$ such that $M=\bigcup_{i=1}^\infty(M_i)^\circ$. How would one try to prove such a statement? The only proof I know of the statement in the smooth category is to start with any exhaustion by open sets with compact closure and then "smooth" their boundaries. However, modifying an open set in a topological manifold so that its boundary is a tamely embedded codimension 1 submanifold seems much more delicate (and perhaps there is even an obstruction to doing it!).
|
Since topological manifolds of dimension $\le 3$ are smoothable, the question is about manifolds of dimension $\ge 4$. Kirby and Siebenmann proved for $n\ge 6$ that every topological $n$-manifold admits a handle decomposition; this was extended to $n=5$ by Freedman and Quinn (I think, it is Quinn's paper "Ends of maps, III"). This applies to noncompact manifolds as well. Using this handle decomposition you can easily construct the required exhaustion (just use finitely many handles). This settles the problem in all dimensions but 4. Handle decomposition is known to fail in dimension 4, but there is an alternative argument: Take $N^5=M^4\times R$, construct an exhaustion of $N$ as above by compact submanifolds $S_i$. Now, Quinn proved in 1988 a topological transversality theorem in all dimensions ("Topological transversality holds in all dimensions"), which allows you to perturb each $S_i$ to $S_i'$ whose boundary is transversal to $M\times 0$. Then $S_i'\cap M$ will be the required exhaustion.
|
{
"source": [
"https://mathoverflow.net/questions/129321",
"https://mathoverflow.net",
"https://mathoverflow.net/users/35353/"
]
}
|
129,350 |
Hello everybody! I would be interested in knowing what the reason is for investigating coherent sheaves on complex manifolds. By definition a sheaf $F$ on a complex manifold $X$ is coherent when it is locally finitely presented (i.e. every point possesses a neighborhood $U$ such that $F \mid U$ is the image of $O_X^n\mid U$ under some surjective morphism) and locally of finite presentation (i.e. for an open subset $U \subseteq X$ and a surjective morphism $f: O_U^n \rightarrow F \mid U$ of sheaves the kernel of $f$ is locally finitely presented).
My question now is: Why are such sheaves especially interesting? Why might somebody be interested in studying them or what is the intuition behind them?
Any help will be appreciated.
|
First let me note that your definition of coherent sheaf is misleading: it implies that $\mathcal O_X$ is coherent by definition, whereas in reality coherence of $\mathcal O_X$ is a very deep theorem due to Oka . The correct definition is that a sheaf $\mathcal F$ of $\mathcal O_X$-Modules is coherent if : 1) $\mathcal F$ is locally finitely generated : $X$ can be covered by open subsets $U$ on which there exist surjections $\mathcal O_U^N\to \mathcal F\mid U$ . and 2) For any open subset $V\subset X$ and any morphism $f:\mathcal O_V^s\to\mathcal F\mid V $ the sheaf $Ker(f)$ on $V$ is locally finitely generated . As for the consequences of coherence, here is a typical application: if a sequence of coherent sheaves $ \mathcal F' \to \mathcal F\to\mathcal {F''} $ is exact at $x\in X$ (stalkwise) then it is exact on an open neighbourhood of $x$. The real power of coherence however comes through Cartan's Theorems A and B for coherent sheaves on a Stein manifold. These theorems have innumerable consequences: on a Stein manifold (i) Every meromorphic function is the quotient of two global holomorphic functions. (ii) Every topological line bundle has one and only one holomorphic structure, (iii) Every closed analytic subset is the zero set of a family of globally defined holomorphic functions. (iv) Every holomorphic function on a closed analytic subset can be extended to a holomorphic function on the whole Stein manifold. (v) Given global holomorphic functions $f_1,...,f_r$ without a common zero, there exist global holomorphic functions $g_1,...,g_r$ with $f_1g_1+\cdots +f_rg_r=1$. Just to illustrate how easy theorems become with those powerful tools in hand, let me prove (ii). Start from the exponential sequence of sheaves $ 0\to \mathbb Z \to \mathcal O\to \mathcal O^\ast \to 0 $ on the Stein manifold $X$ . The associated long exact sequence has as a fragment $$ \cdots \to H^1(X,\mathcal O) \to H^1(X,\mathcal O^\ast) \to H^2(X,\mathbb Z) \to H^2 (X,\mathcal O) \to \cdots $$
Since $H^1(X,\mathcal O) = H^2 (X,\mathcal O)=0 $ by theorem B, we get the isomorphism $$H^1(X,\mathcal O^\ast ) \cong H^2(X,\mathbb Z)$$ It factorizes as $H^1(X,\mathcal O^\ast )\to H^1(X,\mathcal C^\ast) \to H^2(X,\mathbb Z)$ and since $ H^1(X,\mathcal C^\ast) \stackrel {\cong}{\to }H^2(X,\mathbb Z)$ is well known to be the isomorphism given by the first chern class we get the result (ii) in the more precise form: On a Stein manifold $X$ the first Chern class induces an isomorphism of abelian groups $$\text {Pic}(X)=H^1(X,\mathcal O^\ast )\stackrel {c_1}{\cong}H^2(X,\mathbb Z)$$
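A concrete special case of the last isomorphism: $\mathbb C^n$ is Stein and contractible, so $H^2(\mathbb C^n,\mathbb Z)=0$, and therefore every holomorphic line bundle on $\mathbb C^n$ is holomorphically trivial.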
|
{
"source": [
"https://mathoverflow.net/questions/129350",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29973/"
]
}
|
129,364 |
Philosophically why should proving that $\gamma$ is irrational (let alone transcendental) be so much harder than proving $\pi$ or $e$ are irrational?
|
There are number theorists who understand this subject much better than I do. However, I feel obliged to post an incomplete answer quickly before people have a chance to close this question. There are a lot more connections known between $\pi$ and $e$ and other numbers than between $\gamma$ and other numbers. We can get proofs of their irrationality by using some of these connections, such as continued fraction expansions for both. $\gamma$ may be thought of as a renormalized version of $\zeta(1)$, where $\zeta$ is the Riemann zeta function $\zeta(s) = \sum_{n=1}^\infty n^{-s}$. $$\gamma = \lim_{s\to 1} \bigg(\zeta(s) - \frac{1}{s-1} \bigg)$$ At even integers, $\zeta(s)$ may be rewritten as a sum over nonzero integers, not just the positive integers. That's one explanation for why it is easier to get a handle on $\zeta(s)$ at even values (where it is a rational times $\pi^s$) than at positive odd integer values. See the answers to "Establishing zeta(3) as a definite integral and its computation." There is some hope. Apéry proved that $\zeta(3)$ is irrational, and this can be related to proofs that other well known numbers are irrational. There are expressions for $\pi$, $\log 2$, $\zeta(3)$ as periods, definite integrals of algebraic functions on $[0,1]$. These can be used in a unified way to prove all of these are irrational (although it's still tricky for $\zeta(3)$), and there are conjectures about the possible rational or algebraic relations between periods. However, so far, $\gamma$ isn't known to be a period although it is an exponential period (as is $e$). No other values of $\zeta$ at positive odd integers are individually known to be irrational.
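To spell out the "renormalized $\zeta(1)$" description (a standard computation, sketched): for $s>1$ one has $\int_1^\infty x^{-s}\,dx=\frac{1}{s-1}$, hence $$\zeta(s)-\frac{1}{s-1}=\sum_{n=1}^{\infty}\Big(n^{-s}-\int_n^{n+1}x^{-s}\,dx\Big),$$ and letting $s\to1^+$ term by term (the series converges uniformly near $s=1$) gives $$\sum_{n=1}^{\infty}\Big(\frac1n-\log\frac{n+1}{n}\Big)=\lim_{N\to\infty}\Big(\sum_{n=1}^{N}\frac1n-\log(N+1)\Big)=\gamma,$$ recovering the usual definition of Euler's constant.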
|
{
"source": [
"https://mathoverflow.net/questions/129364",
"https://mathoverflow.net",
"https://mathoverflow.net/users/16557/"
]
}
|
129,575 |
Does an existence of large cardinals have implications in more down-to-earth fields like number theory, finite combinatorics, graph theory, Ramsey theory or computability theory? Are there any interesting theorems in these areas that can be proved based on assumptions of existence of certain large cardinals? Or they are too far to change anything here? I know that existence of certain cardinals implies consistency of some axiomatic systems of the set theory, which in turn can be expressed as a statement about certain huge Diophantine equations having no solutions, but it looks unlikely that we could use these Diophantine equations for anything else.
|
Harvey Friedman has recently produced some results in this area. See for example Friedman, Invariant Maximal Cliques and Incompleteness, 2011. There is also a draft of a textbook titled Boolean Relation Theory and Incompleteness, also by Harvey Friedman, which is apparently in the same area. In the paper it seems that Friedman has produced a graph-theoretic theorem which is somewhat natural and requires a certain large cardinal axiom to prove. Unfortunately I'm not particularly familiar with either of these works, so I'm not able to give a better explanation (although maybe another poster will be able to put a good explanation in their answer).
|
{
"source": [
"https://mathoverflow.net/questions/129575",
"https://mathoverflow.net",
"https://mathoverflow.net/users/33664/"
]
}
|